02: Continuous Probability Densities

2.1: Simulation of Continuous Probabilities

In this section we shall show how we can use computer simulations for experiments that have a whole continuum of possible outcomes.
Probabilities
Example $1$
We begin by constructing a spinner, which consists of a circle of unit circumference and a pointer as shown in Figure 2.1. We pick a point on the circle and label it 0, and then label every other point on the circle with the distance, say x, from 0 to that point, measured counterclockwise. The experiment consists of spinning the pointer and recording the label of the point at the tip of the pointer. We let the random variable X denote the value of this outcome. The sample space is clearly the interval [0, 1). We would like to construct a probability model in which each outcome is equally likely to occur.
If we proceed as we did in Chapter 1 for experiments with a finite number of possible outcomes, then we must assign the probability 0 to each outcome, since otherwise, the sum of the probabilities, over all of the possible outcomes, would not equal 1. (In fact, summing an uncountable number of real numbers is a tricky business; in particular, in order for such a sum to have any meaning, at most countably many of the summands can be different than 0.) However, if all of the assigned probabilities are 0, then the sum is 0, not 1, as it should be.
In the next section, we will show how to construct a probability model in this situation. At present, we will assume that such a model can be constructed. We will also assume that in this model, if E is an arc of the circle, and E is of length p, then the model will assign the probability p to E. This means that if the pointer is spun, the probability that it ends up pointing to a point in E equals p, which is certainly a reasonable thing to expect.
To simulate this experiment on a computer is an easy matter. Many computer software packages have a function which returns a random real number in the interval [0, 1]. Actually, the returned value is always a rational number, and the values are determined by an algorithm, so a sequence of such values is not truly random. Nevertheless, the sequences produced by such algorithms behave much like theoretically random sequences, so we can use such sequences in the simulation of experiments. On occasion, we will need to refer to such a function. We will call this function rnd.
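As a concrete illustration, here is a minimal sketch in Python of how such a simulation might look; `random.random()` plays the role of rnd, and the particular arc tested is an arbitrary choice made for this example.

```python
import random

def spin():
    # One spin of the unit-circumference spinner: rnd returns a
    # (pseudo-)random real number in [0, 1).
    return random.random()

# Estimate the probability of landing in the arc E = [0.25, 0.60),
# whose length is 0.35. (The endpoints are arbitrary.)
n = 10_000
hits = sum(1 for _ in range(n) if 0.25 <= spin() < 0.60)
print(hits / n)  # should be close to 0.35
```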
Monte Carlo Procedure and Areas
It is sometimes desirable to estimate quantities whose exact values are difficult or impossible to calculate exactly. In some of these cases, a procedure involving chance, called a Monte Carlo procedure, can be used to provide such an estimate.
Example $2$
In this example we show how simulation can be used to estimate areas of plane figures. Suppose that we program our computer to provide a pair (x, y) of numbers, each chosen independently at random from the interval [0, 1]. Then we can interpret this pair (x, y) as the coordinates of a point chosen at random from the unit square. Events are subsets of the unit square. Our experience with Example 2.1 suggests that the point is equally likely to fall in subsets of equal area. Since the total area of the square is 1, the probability of the point falling in a specific subset E of the unit square should be equal to its area. Thus, we can estimate the area of any subset of the unit square by estimating the probability that a point chosen at random from this square falls in the subset.
We can use this method to estimate the area of the region E under the curve $y = x^2$ in the unit square (see Figure $2$). We choose a large number of points (x, y) at random and record what fraction of them fall in the region $E = \{(x, y) : y \leq x^2\}$.
The program MonteCarlo will carry out this experiment for us. Running this program for 10,000 experiments gives an estimate of .325 (see Figure $3$).
From these experiments we would estimate the area to be about 1/3. Of course, for this simple region we can find the exact area by calculus. In fact,
$\text{Area of E} =\int_0^1x^2dx = \frac{1}{3}$
We have remarked in Chapter 1 that, when we simulate an experiment of this type n times to estimate a probability, we can expect the answer to be in error by at most $1/\sqrt{n}$ at least 95 percent of the time. For 10,000 experiments we can expect an accuracy of 0.01, and our simulation did achieve this accuracy.
This same argument works for any region E of the unit square. For example, suppose E is the circle with center (1/2, 1/2) and radius 1/2. Then the probability that our random point (x, y) lies inside the circle is equal to the area of the circle, that is,
$P(E) = \pi \Big(\frac{1}{2}\Big)^2 = \frac{\pi}{4}$
If we did not know the value of π, we could estimate the value by performing this experiment a large number of times!
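Here is a sketch, in Python, of how a program in the spirit of MonteCarlo might be organized; it is an illustration under our own naming choices, not the book's actual program.

```python
import random

def monte_carlo(in_region, n=10_000):
    # Estimate the area of a subset of the unit square by the fraction
    # of n random points (x, y) that fall in it.
    count = sum(1 for _ in range(n)
                if in_region(random.random(), random.random()))
    return count / n

# Area under y = x^2 (true value 1/3).
print(monte_carlo(lambda x, y: y <= x * x))

# Area of the circle of radius 1/2 centered at (1/2, 1/2) is pi/4,
# so 4 times the estimate approximates pi.
print(4 * monte_carlo(lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25))
```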
The above example is not the only way of estimating the value of π by a chance experiment. Here is another way, discovered by Buffon.
Buffon's Needle
Example $3$:
Suppose that we take a card table and draw across the top surface a set of parallel lines a unit distance apart. We then drop a common needle of unit length at random on this surface and observe whether or not the needle lies across one of the lines. We can describe the possible outcomes of this experiment by coordinates as follows: Let d be the distance from the center of the needle to the nearest line. Next, let L be the line determined by the needle, and define θ as the acute angle that the line L makes with the set of parallel lines. (The reader should certainly be wary of this description of the sample space. We are attempting to coordinatize a set of line segments. To see why one must be careful in the choice of coordinates, see Example $6$.) Using this description, we have 0 ≤ d ≤ 1/2, and 0 ≤ θ ≤ π/2. Moreover, we see that the needle lies across the nearest line if and only if the hypotenuse of the triangle (see Figure $4$) is less than half the length of the needle, that is,
$\frac{d}{\sin\theta} < \frac{1}{2}$
Now we assume that when the needle drops, the pair (θ, d) is chosen at random from the rectangle 0 ≤ θ ≤ π/2, 0 ≤ d ≤ 1/2. We observe whether the needle lies across the nearest line (i.e., whether d ≤ (1/2) sin θ). The probability of this event E is the fraction of the area of the rectangle which lies inside E (see Figure $5$).
Now the area of the rectangle is π/4, while the area of E is
$\text{Area} = \int_0^{\pi/2} \frac{1}{2}\sin\theta d\theta = \frac{1}{2}$
Hence, we get
$P(E) = \frac{1/2}{\pi/4}=\frac{2}{\pi}$
The program BuffonsNeedle simulates this experiment. In Figure $6$, we show the position of every 100th needle in a run of the program in which 10,000 needles were “dropped.” Our final estimate for π is 3.139. While this was within 0.003 of the true value of π, we had no right to expect such accuracy. The reason for this is that our simulation estimates P(E). While we can expect this estimate to be in error by at most 0.01, a small error in P(E) gets magnified when we use this to compute π = 2/P(E). Perlman and Wichura, in their article “Sharpening Buffon’s Needle,” show that we can expect to have an error of not more than $5/\sqrt{n}$ about 95 percent of the time. Here n is the number of needles dropped. Thus for 10,000 needles we should expect an error of no more than 0.05, and that was the case here. We see that a large number of experiments is necessary to get a decent estimate for π.
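The simulation just described is short enough to sketch here in Python; this is an illustration of the method, not the BuffonsNeedle program itself.

```python
import math
import random

def estimate_pi(n=10_000):
    # Drop n unit needles on unit-spaced lines: choose (theta, d)
    # uniformly from [0, pi/2] x [0, 1/2] and count the drops with
    # d <= (1/2) sin(theta), i.e., the needle crosses a line.
    crossings = sum(
        1 for _ in range(n)
        if random.uniform(0, 0.5) <= 0.5 * math.sin(random.uniform(0, math.pi / 2))
    )
    p = crossings / n  # estimates P(E) = 2/pi
    return 2 / p

print(estimate_pi())  # typically within a few hundredths of pi
```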
In each of our examples so far, events of the same size are equally likely. Here is an example where they are not. We will see many other such examples later.
Example $4$:
Suppose that we choose two random real numbers in [0, 1] and add them together. Let X be the sum. How is X distributed? To help understand the answer to this question, we can use the program Areabargraph. This program produces a bar graph with the property that on each interval, the area, rather than the height, of the bar is equal to the fraction of outcomes that fell in the corresponding interval. We have carried out this experiment 1000 times; the data is shown in Figure $7$. It appears that the function defined by
$f(x) = \left\{ \begin{array}{cc} x, & \text{if } 0 \leq x \leq 1 \ 2-x, & \text{if } 1 < x \leq 2 \end{array}\right$
fits the data very well. (It is shown in the figure.) In the next section, we will see that this function is the “right” function. By this we mean that if a and b are any two real numbers between 0 and 2, with a ≤ b, then we can use this function to calculate the probability that a ≤ X ≤ b. To understand how this calculation might be performed, we again consider Figure $7$. Because of the way the bars were constructed, the sum of the areas of the bars corresponding to the interval
[a, b] approximates the probability that a ≤ X ≤ b. But the sum of the areas of these bars also approximates the integral
$\int_a^bf(x)dx$
This suggests that for an experiment with a continuum of possible outcomes, if we find a function with the above property, then we will be able to use it to calculate probabilities. In the next section, we will show how to determine the function f(x).
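As a concrete check of this idea, the following Python sketch compares the fraction of simulated sums landing in an interval with the integral of f over that interval; the interval [0.5, 1.5], over which the exact integral is 3/4, is an arbitrary choice.

```python
import random

def f(x):
    # The conjectured density of the sum of two uniform [0, 1] numbers.
    return x if x <= 1 else 2 - x

a, b, n = 0.5, 1.5, 10_000
sums = [random.random() + random.random() for _ in range(n)]
fraction = sum(a <= s <= b for s in sums) / n
print(fraction)  # close to the integral of f from a to b, here 3/4
```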
Example $5$:
Suppose that we choose 100 random numbers in [0, 1], and let X represent their sum. How is X distributed? We have carried out this experiment 10,000 times; the results are shown in Figure $8$. It is not so clear what function fits the bars in this case. It turns out that the type of function which does the job is called a normal density function. This type of function is sometimes referred to as a “bell-shaped” curve. It is among the most important functions in the subject of probability, and will be formally defined in Section 5.2.
Our last example explores the fundamental question of how probabilities are assigned.
Bertrand's Paradox
Example $6$
A chord of a circle is a line segment both of whose endpoints lie on the circle. Suppose that a chord is drawn at random in a unit circle. What is the probability that its length exceeds $\sqrt{3}$? Our answer will depend on what we mean by random, which will depend, in turn, on what we choose for coordinates. The sample space Ω is the set of all possible chords in the circle. To find coordinates for these chords, we first introduce a
rectangular coordinate system with origin at the center of the circle (see Figure $9$). We note that a chord of a circle is perpendicular to the radial line containing the midpoint of the chord. We can describe each chord by giving:
1. The rectangular coordinates (x, y) of the midpoint M, or
2. The polar coordinates (r, θ) of the midpoint M, or
3. The polar coordinates (1, α) and (1, β) of the endpoints A and B.
In each case we shall interpret at random to mean: choose these coordinates at random.
We can easily estimate this probability by computer simulation. In programming this simulation, it is convenient to include certain simplifications, which we describe in turn (a short code sketch follows the list):
1. To simulate this case, we choose values for x and y from [−1, 1] at random. Then we check whether $x^2 + y^2 \leq 1$. If not, the point M = (x, y) lies outside the circle and cannot be the midpoint of any chord, and we ignore it. Otherwise, M lies inside the circle and is the midpoint of a unique chord, whose length L is given by the formula:
$L = 2\sqrt{1-(x^2+y^2)}$
2. To simulate this case, we take account of the fact that any rotation of the circle does not change the length of the chord, so we might as well assume in advance that the chord is horizontal. Then we choose r from [0, 1] at random, and compute the length of the resulting chord with midpoint (r, π/2) by the formula
$L=2\sqrt{1-r^2}$
3. To simulate this case, we assume that one endpoint, say B, lies at (1, 0) (i.e., that β = 0). Then we choose a value for α from [0, 2π] at random and compute the length of the resulting chord, using the Law of Cosines, by the formula:
$L=\sqrt{2-2\cos\alpha}$
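Here is a compact Python sketch of the three methods just described (an illustration in the spirit of the BertrandsParadox program; the function names are ours).

```python
import math
import random

def method1():
    # Random midpoint (x, y) chosen uniformly from the disk (by rejection).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

def method2():
    # Random distance r of a horizontal chord's midpoint from the center.
    r = random.random()
    return 2 * math.sqrt(1 - r * r)

def method3():
    # One endpoint fixed at (1, 0); the other at a random angle alpha.
    alpha = random.uniform(0, 2 * math.pi)
    return math.sqrt(2 - 2 * math.cos(alpha))

n = 10_000
for method in (method1, method2, method3):
    print(sum(method() > math.sqrt(3) for _ in range(n)) / n)
# Expect roughly 1/4, 1/2, and 1/3, respectively.
```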
The program BertrandsParadox carries out this simulation. Running this program produces the results shown in Figure 2.10. In the first circle in this figure, a smaller circle has been drawn. Those chords which intersect this smaller circle have length at least $\sqrt{3}$. In the second circle in the figure, the vertical line intersects all chords of length at least $\sqrt{3}$. In the third circle, again the vertical line intersects all chords of length at least $\sqrt{3}$.
In each case we run the experiment a large number of times and record the fraction of these lengths that exceed $\sqrt{3}$. We have printed the results of every 100th trial up to 10,000 trials.
It is interesting to observe that these fractions are not the same in the three cases; they depend on our choice of coordinates. This phenomenon was first observed by Bertrand, and is now known as Bertrand’s paradox. It is actually not a paradox at all; it is merely a reflection of the fact that different choices of coordinates will lead to different assignments of probabilities. Which assignment is “correct” depends on what application or interpretation of the model one has in mind.
One can imagine a real experiment involving throwing long straws at a circle drawn on a card table. A “correct” assignment of coordinates should not depend on where the circle lies on the card table, or where the card table sits in the room. Jaynes has shown that the only assignment which meets this requirement is (2). In this sense, the assignment (2) is the natural, or “correct” one (see Exercise $11$).
We can easily see in each case what the true probabilities are if we note that $\sqrt{3}$ is the length of the side of an inscribed equilateral triangle. Hence, a chord has length $L > \sqrt{3}$ if its midpoint has distance $d < 1/2$ from the origin (see Figure 2.9). The following calculations determine the probability that $L > \sqrt{3}$ in each of the three cases.
1. $L > \sqrt{3}$ if $(x, y)$ lies inside a circle of radius 1/2, which occurs with probability
$p = \frac{\pi(1/2)^2}{\pi(1)^2}=\frac{1}{4}$
2. $L > \sqrt{3}$ if $|r| < 1/2$, which occurs with probability
$\frac{1/2 -(-1/2)}{1-(-1)}=\frac{1}{2}$
3. $L > \sqrt{3}$ if $2\pi/3 < \alpha < 4\pi/3$, which occurs with probability
$\frac{4\pi/3-2\pi/3}{2\pi-0}=\frac{1}{3}$
We see that our simulations agree quite well with these theoretical values.
Historical Remarks
G. L. Buffon (1707–1788) was a natural scientist in the eighteenth century who applied probability to a number of his investigations. His work is found in his monumental 44-volume Histoire Naturelle and its supplements. For example, he presented a number of mortality tables and used them to compute, for each age group, the expected remaining lifetime. From his table he observed: the expected remaining lifetime of an infant of one year is 33 years, while that of a man of 21 years is also approximately 33 years. Thus, a father who is not yet 21 can hope to live longer than his one-year-old son, but if the father is 40, the odds are already 3 to 2 that his son will outlive him.
Note
G. L. Buffon, Histoire Naturelle, Générale et Particulière, avec la Description du Cabinet du Roy, 44 vols. (Paris: L'Imprimerie Royale, 1749–1803).
G. L. Buffon, "Essai d'Arithmétique Morale."
Table $1$: Buffon needle experiments to estimate $\pi$

| Experimenter | Length of Needle | Number of Casts | Number of Crossings | Estimate for $\pi$ |
|---|---|---|---|---|
| Wolf, 1850 | .8 | 5000 | 2532 | 3.1596 |
| Smith, 1855 | .6 | 3204 | 1218.5 | 3.1553 |
| De Morgan, c. 1860 | 1.0 | 600 | 382.5 | 3.137 |
| Fox, 1864 | .75 | 1030 | 489 | 3.1595 |
| Lazzerini, 1901 | .83 | 3408 | 1808 | 3.1415929 |
| Reina, 1925 | .5419 | 2520 | 869 | 3.1795 |
Buffon wanted to show that not all probability calculations rely only on algebra, but that some rely on geometrical calculations. One such problem was his famous “needle problem” as discussed in this chapter. In his original formulation, Buffon describes a game in which two gamblers drop a loaf of French bread on a wide-board floor and bet on whether or not the loaf falls across a crack in the floor. Buffon asked: what length L should the bread loaf be, relative to the width W of the floorboards, so that the game is fair? He found the correct answer ($L = (\pi/4)W$) using essentially the methods described in this chapter. He also considered the case of a checkerboard floor, but gave the wrong answer in this case. The correct answer was given later by Laplace.
The literature contains descriptions of a number of experiments that were actually carried out to estimate π by this method of dropping needles. N. T. Gridgeman discusses the experiments shown in Table $1$. (The halves for the number of crossings come from a compromise when it could not be decided if a crossing had actually occurred.) He observes, as we have, that 10,000 casts could do no more than establish the first decimal place of π with reasonable confidence. Gridgeman points out that, although none of the experiments used even 10,000 casts, they are surprisingly good, and in some cases, too good. The fact that the number of casts is not always a round number would suggest that the authors might have resorted to clever stopping to get a good answer. Gridgeman comments that Lazzerini’s estimate turned out to agree with a well-known approximation to π, 355/113 = 3.1415929, discovered by the fifth-century Chinese mathematician, Tsu Ch’ung-chih. Gridgeman says that he did not have Lazzerini’s original report, and while waiting for it (knowing only that the needle crossed a line 1808 times in 3408 casts) deduced that the length of the needle must have been 5/6. He calculated this from Buffon’s formula, assuming π = 355/113:
$L = \frac{\pi P(E)}{2}=\frac{1}{2}\bigg(\frac{355}{113}\bigg)\bigg(\frac{1808}{3408}\bigg)=\frac{5}{6}=.8333.$
Even with careful planning one would have to be extremely lucky to be able to stop so cleverly. The second author likes to trace his interest in probability theory to the Chicago World’s Fair of 1933 where he observed a mechanical device dropping needles and displaying the ever-changing estimates for the value of π. (The first author likes to trace his interest in probability theory to the second author.)
Exercises
$1$
In the spinner problem (see Example 2.1) divide the unit circumference into three arcs of length 1/2, 1/3, and 1/6. Write a program to simulate the spinner experiment 1000 times and print out what fraction of the outcomes fall in each of the three arcs. Now plot a bar graph whose bars have width 1/2, 1/3, and 1/6, and areas equal to the corresponding fractions as determined by your simulation. Show that the heights of the bars are all nearly the same.
$2$
Do the same as in Exercise 1, but divide the unit circumference into five arcs of length 1/3, 1/4, 1/5, 1/6, and 1/20.
$3$
Alter the program MonteCarlo to estimate the area of the circle of radius 1/2 with center at (1/2, 1/2) inside the unit square by choosing 1000 points at random. Compare your results with the true value of π/4. Use your results to estimate the value of π. How accurate is your estimate?
$4$
Alter the program MonteCarlo to estimate the area under the graph of y = sin πx inside the unit square by choosing 10,000 points at random. Now calculate the true value of this area and use your results to estimate the value of π. How accurate is your estimate?
$5$
Alter the program MonteCarlo to estimate the area under the graph of y = 1/(x + 1) in the unit square in the same way as in Exercise 4. Calculate the true value of this area and use your simulation results to estimate the value of log 2. How accurate is your estimate?
$6$
To simulate the Buffon’s needle problem we choose independently the distance d and the angle θ at random, with 0 ≤ d ≤ 1/2 and 0 ≤ θ ≤ π/2, and check whether d ≤ (1/2) sin θ. Doing this a large number of times, we estimate π as 2/a, where a is the fraction of the times that d ≤ (1/2) sin θ. Write a program to estimate π by this method. Run your program several times for each of 100, 1000, and 10,000 experiments. Does the accuracy of the experimental approximation for π improve as the number of experiments increases?
$7$
For Buffon’s needle problem, Laplace considered a grid with horizontal and vertical lines one unit apart. He showed that the probability that a needle of length L ≤ 1 crosses at least one line is
$p = \frac{4L-L^2}{\pi}$
To simulate this experiment we choose at random an angle θ between 0 and π/2 and independently two numbers d1 and d2 between 0 and L/2. (The two numbers represent the distance from the center of the needle to the nearest horizontal and vertical line.) The needle crosses a line if either d1 ≤ (L/2) sin θ or d2 ≤ (L/2) cos θ. We do this a large number of times and estimate π as
$\bar{\pi} = \frac{4L-L^2}{a}$
where a is the proportion of times that the needle crosses at least one line. Write a program to estimate π by this method, run your program for 100, 1000, and 10,000 experiments, and compare your results with Buffon’s method described in Exercise 6. (Take L = 1.)
$8$
A long needle of length L much bigger than 1 is dropped on a grid with horizontal and vertical lines one unit apart. We will see (in Exercise 6.3.28) that the average number a of lines crossed is approximately
$a =\frac{4L}{\pi}$
To estimate π by simulation, pick an angle θ at random between 0 and π/2 and compute $L\sin\theta + L\cos\theta$. This may be used for the number of lines crossed. Repeat this many times and estimate π by
$\bar{\pi} =\frac{4L}{a}$
where a is the average number of lines crossed per experiment. Write a program to simulate this experiment and run your program for the number of experiments equal to 100, 1000, and 10,000. Compare your results with the methods of Laplace or Buffon for the same number of experiments. (Use L = 100.)
The following exercises involve experiments in which not all outcomes are equally likely. We shall consider such experiments in detail in the next section, but we invite you to explore a few simple cases here.
$9$
A large number of waiting time problems have an exponential distribution of outcomes. We shall see (in Section 5.2) that such outcomes are simulated by computing (−1/λ) log(rnd), where λ > 0. For waiting times produced in this way, the average waiting time is 1/λ. For example, the times spent waiting for
a car to pass on a highway, or the times between emissions of particles from a radioactive source, are simulated by a sequence of random numbers, each of which is chosen by computing (−1/λ) log(rnd), where 1/λ is the average time between cars or emissions. Write a program to simulate the times between cars when the average time between cars is 30 seconds. Have your program compute an area bar graph for these times by breaking the time interval from 0 to 120 into 24 subintervals. On the same pair of axes, plot the function $f(x)=(1/30)e^{-(1/30)x}$. Does the function fit the bar graph well?
$10$
In Exercise 9, the distribution came “out of a hat.” In this problem, we will again consider an experiment whose outcomes are not equally likely. We will determine a function f(x) which can be used to determine the probability of certain events. Let T be the right triangle in the plane with vertices at the points (0, 0), (1, 0), and (0, 1). The experiment consists of picking a point at random in the interior of T, and recording only the x-coordinate of the point. Thus, the sample space is the set [0, 1], but the outcomes do not seem to be equally likely. We can simulate this experiment by asking a computer to return two random real numbers in [0, 1], and recording the first of these two numbers if their sum is less than 1. Write this program and run it for 10,000 trials. Then make a bar graph of the result, breaking the interval [0, 1] into 10 intervals. Compare the bar graph with the function f(x) = 2 − 2x. Now show that there is a constant c such that the height of T at the x-coordinate value x is c times f(x) for every x in [0, 1]. Finally, show that
$\int_0^1f(x)dx=1$
How might one use the function f(x) to determine the probability that the outcome is between .2 and .5?
$11$
Here is another way to pick a chord at random on the circle of unit radius. Imagine that we have a card table whose sides are of length 100. We place coordinate axes on the table in such a way that each side of the table is parallel to one of the axes, and so that the center of the table is the origin. We now place a circle of unit radius on the table so that the center of the circle is the origin. Now pick out a point (x0, y0) at random in the square, and an angle θ at random in the interval (−π/2, π/2). Let m = tan θ. Then the equation of the line passing through (x0, y0) with slope m is
$y=y_0+m(x-x_0)$
and the distance of this line from the center of the circle (i.e., the origin) is
$d=\bigg|\frac{y_0-mx_0}{\sqrt{m^2+1}}\bigg|$
We can use this distance formula to check whether the line intersects the circle (i.e., whether d < 1). If so, we consider the resulting chord a random chord.
This describes an experiment of dropping a long straw at random on a table on which a circle is drawn.
Write a program to simulate this experiment 10000 times and estimate the probability that the length of the chord is greater than $\sqrt{3}$. How does your estimate compare with the results of Example 2.6? | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/02%3A_Continuous_Probability_Densities/2.01%3A_Simulation_of_Continuous_Probabilities.txt |
In the previous section we have seen how to simulate experiments with a whole continuum of possible outcomes and have gained some experience in thinking about such experiments. Now we turn to the general problem of assigning probabilities to the outcomes and events in such experiments. We shall restrict our attention here to those experiments whose sample space can be taken as a suitably chosen subset of the line, the plane, or some other Euclidean space. We begin with some simple examples.
Spinners
Example $1$
The spinner experiment described in Example 2.1 has the interval [0, 1) as the set of possible outcomes. We would like to construct a probability model in which each outcome is equally likely to occur. We saw that in such a model, it is necessary to assign the probability 0 to each outcome. This does not at all mean that the probability of every event must be zero. On the contrary, if we let the random variable X denote the outcome, then the probability
$P(0\leq X\leq 1)$
that the head of the spinner comes to rest somewhere in the circle, should be equal to 1. Also, the probability that it comes to rest in the upper half of the circle should be the same as for the lower half, so that
$P\bigg( 0\leq X < \frac{1}{2} \bigg)=P\bigg( \frac{1}{2} \leq X <1\bigg) = \frac{1}{2}$
More generally, in our model, we would like the equation
$P(c\leq X < d) = d-c$
to be true for every choice of c and d.
If we let E = [c, d], then we can write the above formula in the form
$P(E) =\int_E f(x)dx$
where f(x) is the constant function with value 1. This should remind the reader of the corresponding formula in the discrete case for the probability of an event:
$P(E) = \sum_{\omega \in E} m(\omega)$
The difference is that in the continuous case, the quantity being integrated, f(x), is not the probability of the outcome x. (However, if one uses infinitesimals, one can consider f(x) dx as the probability of the outcome x.)

In the continuous case, we will use the following convention. If the set of outcomes is a set of real numbers, then the individual outcomes will be referred to by small Roman letters such as x. If the set of outcomes is a subset of $\mathbb{R}^2$, then the individual outcomes will be denoted by (x, y). In either case, it may be more convenient to refer to an individual outcome by using ω, as in Chapter 1.

Figure $1$ shows the results of 1000 spins of the spinner. The function f(x) is also shown in the figure. The reader will note that the area under f(x) and above a given interval is approximately equal to the fraction of outcomes that fell in that interval. The function f(x) is called the density function of the random variable X. The fact that the area under f(x) and above an interval corresponds to a probability is the defining property of density functions. A precise definition of density functions will be given shortly.
Darts
Example $2$
A game of darts involves throwing a dart at a circular target of unit radius. Suppose we throw a dart once so that it hits the target, and we observe where it lands. To describe the possible outcomes of this experiment, it is natural to take as our sample space the set Ω of all the points in the target. It is convenient to describe these points by their rectangular coordinates, relative to a coordinate system with origin at the center of the target, so that each pair (x, y) of coordinates with $x^2+y^2 ≤ 1$ describes a possible outcome of the experiment. Then $Ω = \{ (x, y) : x^2 + y^2 ≤ 1 \}$ is a subset of the Euclidean plane, and the event E = { (x, y) : y > 0 }, for example, corresponds to the statement that the dart lands in the upper half of the target, and so forth. Unless there is reason to believe otherwise (and with experts at the game there may well be!), it is natural to assume that the coordinates are chosen at random. (When doing this with a computer, each coordinate is chosen uniformly from the interval [−1, 1]. If the resulting point does not lie inside the unit circle, the point is not counted.) Then the arguments used in the preceding example show that the probability of any elementary event, consisting of a single outcome, must be zero, and suggest that the probability of the event that the dart lands in any subset E of the target should be determined by what fraction of the target area lies in E. Thus,
$P(E) = \frac{\text{area of E}}{\text{area of target}} = \frac{\text{area of E}}{\pi}$
This can be written in the form
$P(E) = \int_E f(x)dx$
where f(x) is the constant function with value 1/π. In particular, if E = { (x, y) : x 2 + y 2 ≤ a 2 } is the event that the dart lands within distance a < 1 of the center of the target, then
$P(E) = \frac{\pi a^2}{\pi}=a^2$
For example, the probability that the dart lies within a distance 1/2 of the center is 1/4.
Example $3$
In the dart game considered above, suppose that, instead of observing where the dart lands, we observe how far it lands from the center of the target. In this case, we take as our sample space the set Ω of all circles with centers at the center of the target. It is convenient to describe these circles by their radii, so that each circle is identified by its radius r, 0 ≤ r ≤ 1. In this way, we may regard Ω as the subset [0, 1] of the real line.
What probabilities should we assign to the events E of Ω?
Solution
If
$E = \{ r:0 \leq r\leq a \},$
then E occurs if the dart lands within a distance a of the center, that is, within the circle of radius a, and we saw in the previous example that under our assumptions the probability of this event is given by
$P([0,a])=a^2$
More generally, if
$E = \{r:a\leq r \leq b \}$
then by our basic assumptions,
$\begin{array}{rcl} P(E) = P([a,b]) &=& P([0,b]) - P([0,a]) \ &=& b^2-a^2 \ &=& (b-a)(b+a) \ &=& 2(b-a)\frac{(b+a)}{2}\end{array}$
Thus, P(E) =2(length of E)(midpoint of E). Here we see that the probability assigned to the interval E depends not only on its length but also on its midpoint (i.e., not only on how long it is, but also on where it is). Roughly speaking, in this experiment, events of the form E = [a, b] are more likely if they are near the rim of the target and less likely if they are near the center. (A common experience for beginners! The conclusion might well be different if the beginner is replaced by an expert.)
Again we can simulate this by computer. We divide the target area into ten concentric regions of equal thickness. The computer program Darts throws n darts and records what fraction of the total falls in each of these concentric regions.
The program Areabargraph then plots a bar graph with the area of the ith bar equal to the fraction of the total falling in the ith region. Running the program for 1000 darts resulted in the bar graph of Figure $2$.
Note that here the heights of the bars are not all equal, but grow approximately linearly with r. In fact, the linear function y = 2r appears to fit our bar graph quite well. This suggests that the probability that the dart falls within a distance a of the center should be given by the area under the graph of the function y = 2r between 0 and a. This area is a 2 , which agrees with the probability we have assigned above to this event.
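A rough Python version of what Darts and Areabargraph compute together might look like this (ring boundaries and sample size are our choices); the fraction falling in each ring is compared with the ring's share of the disk's area, $b^2 - a^2$.

```python
import math
import random

def throw():
    # A dart uniform on the unit disk, by rejection from the square;
    # we record only its distance from the center.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return math.sqrt(x * x + y * y)

n = 10_000
counts = [0] * 10
for _ in range(n):
    counts[min(int(throw() * 10), 9)] += 1  # ten rings of thickness 1/10

for i, c in enumerate(counts):
    a, b = i / 10, (i + 1) / 10
    print(f"[{a:.1f}, {b:.1f}): {c / n:.3f} vs {b * b - a * a:.3f}")
```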
Sample Space Coordinates
These examples suggest that for continuous experiments of this sort we should assign probabilities for the outcomes to fall in a given interval by means of the area under a suitable function.
More generally, we suppose that suitable coordinates can be introduced into the sample space Ω, so that we can regard Ω as a subset of $\mathbb{R}^n$. We call such a sample space a continuous sample space. We let X be a random variable which represents the outcome of the experiment. Such a random variable is called a continuous random variable. We then define a density function for X as follows.
Density Functions of Continuous Random Variables
Definition $1$
Let X be a continuous real-valued random variable. A density function for X is a real-valued function f which satisfies
$P(a\leq X\leq b) = \int_a^bf(x)dx$
for all a, b $\in \mathbb{R}$
We note that it is not the case that all continuous real-valued random variables possess density functions. However, in this book, we will only consider continuous random variables for which density functions exist. In terms of the density $f(x)$, if E is a subset of $\mathbb{R}$, then
$P(X \in E) = \int_Ef(x)dx$
The notation here assumes that $E$ is a subset of $\mathbb{R}$ for which $\int_E f(x)\,dx$ makes sense.

Example $4$

In the spinner experiment, we choose for our set of outcomes the interval $0 \leq x<1$, and for our density function

$f(x)=\left\{\begin{array}{ll} 1, & \text { if } 0 \leq x<1 \ 0, & \text { otherwise } \end{array}\right.$
Solution
If $E$ is the event that the head of the spinner falls in the upper half of the circle, then $E=\{x: 0 \leq x \leq 1 / 2\}$, and so
$P(E)=\int_0^{1 / 2} 1 d x=\frac{1}{2} .$
More generally, if $E$ is the event that the head falls in the interval $[a, b]$, then
$P(E)=\int_a^b 1 d x=b-a .$
Example $5$
In the first dart game experiment, we choose for our sample space a disc of unit radius in the plane and for our density function the function
$f(x, y)=\left\{\begin{array}{ll} 1 / \pi, & \text { if } x^2+y^2 \leq 1 \ 0, & \text { otherwise. } \end{array}\right.$
The probability that the dart lands inside the subset $E$ is then given by
\begin{aligned} P(E) & =\iint_E \frac{1}{\pi} d x d y \ & =\frac{1}{\pi} \cdot(\text { area of } E) . \end{aligned}
In these two examples, the density function is constant and does not depend on the particular outcome. It is often the case that experiments in which the coordinates are chosen at random can be described by constant density functions, and, as in Section 1.2, we call such density functions uniform or equiprobable. Not all experiments are of this type, however.
Example $6$
In the second dart game experiment, we choose for our sample space the unit interval on the real line and for our density the function
$f(r)=\left\{\begin{array}{ll} 2 r, & \text { if } 0<r<1, \ 0, & \text { otherwise. } \end{array}\right.$
Then the probability that the dart lands at distance $r, a \leq r \leq b$, from the center of the target is given by
\begin{aligned} P([a, b]) & =\int_a^b 2 r d r \ & =b^2-a^2 . \end{aligned}
Here again, since the density is small when $r$ is near 0 and large when $r$ is near 1 , we see that in this experiment the dart is more likely to land near the rim of the target than near the center. In terms of the bar graph of Example $3$, the heights of the bars approximate the density function, while the areas of the bars approximate the probabilities of the subintervals (see Figure $2$).
We see in this example that, unlike the case of discrete sample spaces, the value $f(x)$ of the density function for the outcome $x$ is not the probability of $x$ occurring (we have seen that this probability is always 0) and in general $f(x)$ is not a probability at all. In this example, $f(3/4) = 3/2$, which, being bigger than 1, cannot be a probability.
Nevertheless, the density function $f$ does contain all the probability information about the experiment, since the probabilities of all events can be derived from it. In particular, the probability that the outcome of the experiment falls in an interval $[a, b]$ is given by
$P([a, b])=\int_a^b f(x) d x,$
that is, by the area under the graph of the density function in the interval $[a, b]$. Thus, there is a close connection here between probabilities and areas. We have been guided by this close connection in making up our bar graphs; each bar is chosen so that its area, and not its height, represents the relative frequency of occurrence, and hence estimates the probability of the outcome falling in the associated interval.
In the language of the calculus, we can say that the probability of occurrence of an event of the form $[x, x+d x]$, where $d x$ is small, is approximately given by
$P([x, x+d x]) \approx f(x) d x,$
that is, by the area of the rectangle under the graph of $f$. Note that as $d x \rightarrow 0$, this probability $\rightarrow 0$, so that the probability $P(\{x\})$ of a single point is again 0 , as in Example $1$.
A glance at the graph of a density function tells us immediately which events of an experiment are more likely. Roughly speaking, we can say that where the density is large the events are more likely, and where it is small the events are less likely. In Example 2.4 the density function is largest at 1 . Thus, given the two intervals $[0, a]$ and $[1,1+a]$, where $a$ is a small positive real number, we see that $X$ is more likely to take on a value in the second interval than in the first.
Cumulative Distribution Functions of Continuous Random Variables
We have seen that density functions are useful when considering continuous random variables. There is another kind of function, closely related to these density functions, which is also of great importance. These functions are called cumulative distribution functions.
Definition $2$
Let $X$ be a continuous real-valued random variable. Then the cumulative distribution function of $X$ is defined by the equation
$F_X(x)=P(X \leq x)$
If $X$ is a continuous real-valued random variable which possesses a density function, then it also has a cumulative distribution function, and the following theorem shows that the two functions are related in a very nice way.
Theorem $1$
Let $X$ be a continuous real-valued random variable with density function $f(x)$. Then the function defined by
$F(x)=\int_{-\infty}^x f(t) d t$
is the cumulative distribution function of $X$. Furthermore, we have
$\frac{d}{d x} F(x)=f(x)$
Proof. By definition,
$F(x)=P(X \leq x)$
Let $E=(-\infty, x]$. Then
$P(X \leq x)=P(X \in E),$
which equals
$\int_{-\infty}^x f(t) d t$
Applying the Fundamental Theorem of Calculus to the first equation in the statement of the theorem yields the second statement.
In many experiments, the density function of the relevant random variable is easy to write down. However, it is quite often the case that the cumulative distribution function is easier to obtain than the density function. (Of course, once we have the cumulative distribution function, the density function can easily be obtained by differentiation, as the above theorem shows.) We now give some examples which exhibit this phenomenon.
Example $7$
A real number is chosen at random from $[0,1]$ with uniform probability, and then this number is squared. Let $X$ represent the result. What is the cumulative distribution function of $X$ ? What is the density of $X$ ?
Solution
We begin by letting $U$ represent the chosen real number. Then $X=U^2$. If $0 \leq x \leq 1$, then we have
\begin{aligned} F_X(x) & =P(X \leq x) \ & =P\left(U^2 \leq x\right) \ & =P(U \leq \sqrt{x}) \ & =\sqrt{x} . \end{aligned}
It is clear that $X$ always takes on a value between 0 and 1 , so the cumulative distribution function of $X$ is given by
$F_X(x)=\left\{\begin{array}{ll} 0, & \text { if } x \leq 0 \ \sqrt{x}, & \text { if } 0 \leq x \leq 1 \ 1, & \text { if } x \geq 1 \end{array}\right.$
From this we easily calculate that the density function of $X$ is
$f_X(x)=\left\{\begin{array}{ll} 0, & \text { if } x \leq 0, \ 1 /(2 \sqrt{x}), & \text { if } 0 \leq x \leq 1, \ 0, & \text { if } x>1 \end{array}\right.$
Note that $F_X(x)$ is continuous, but $f_X(x)$ is not. (See Figure 2.13.)
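A few lines of Python suffice to check this by simulation (the evaluation points below are arbitrary): the empirical fraction of simulated values at most $x$ should be close to $F_X(x) = \sqrt{x}$.

```python
import random

n = 10_000
xs = [random.random() ** 2 for _ in range(n)]  # simulated values of X = U^2

for x in (0.1, 0.25, 0.5, 0.9):
    empirical = sum(v <= x for v in xs) / n
    print(x, empirical, x ** 0.5)  # empirical F vs. sqrt(x)
```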
When referring to a continuous random variable $X$ (say with a uniform density function), it is customary to say that "$X$ is uniformly distributed on the interval $[a, b]$." It is also customary to refer to the cumulative distribution function of $X$ as the distribution function of $X$. Thus, the word "distribution" is being used in several different ways in the subject of probability. (Recall that it also has a meaning when discussing discrete random variables.) When referring to the cumulative distribution function of a continuous random variable $X$, we will always use the word "cumulative" as a modifier, unless the use of another modifier, such as "normal" or "exponential," makes it clear. Since the phrase "uniformly densitied on the interval $[a, b]$" is not acceptable English, we will have to say "uniformly distributed" instead.
Example $8$
In Example 2.4, we considered a random variable, defined to be the sum of two random real numbers chosen uniformly from $[0,1]$. Let the random variables $X$ and $Y$ denote the two chosen real numbers. Define $Z=X+Y$. We will now derive expressions for the cumulative distribution function and the density function of $Z$.
Solution
Here we take for our sample space $\Omega$ the unit square in $\mathbf{R}^2$ with uniform density. A point $\omega \in \Omega$ then consists of a pair $(x, y)$ of numbers chosen at random. Then $0 \leq Z \leq 2$. Let $E_z$ denote the event that $Z \leq z$. In Figure 2.14, we show the set $E_{.8}$. The event $E_z$, for any $z$ between 0 and 1 , looks very similar to the shaded set in the figure. For $1<z \leq 2$, the set $E_z$ looks like the unit square with a triangle removed from the upper right-hand corner. We can now calculate the probability distribution $F_Z$ of $Z$; it is given by
\begin{aligned} F_Z(z) & =P(Z \leq z) \ & =\text { Area of } E_z \end{aligned}
$=\left\{\begin{array}{ll} 0, & \text { if } z<0, \ (1 / 2) z^2, & \text { if } 0 \leq z \leq 1, \ 1-(1 / 2)(2-z)^2, & \text { if } 1 \leq z \leq 2, \ 1, & \text { if } 2<z \end{array}\right.$
The density function is obtained by differentiating this function:
$f_Z(z)=\left\{\begin{array}{ll} 0, & \text { if } z<0 \ z, & \text { if } 0 \leq z \leq 1 \ 2-z, & \text { if } 1 \leq z \leq 2 \ 0, & \text { if } 2<z \end{array}\right.$
The reader is referred to Figure $5$ for the graphs of these functions.
Example $9$
In the dart game described in Example $2$, what is the distribution of the distance of the dart from the center of the target? What is its density?
Solution
Here, as before, our sample space $\Omega$ is the unit disk in $\mathbf{R}^2$, with coordinates $(X, Y)$. Let $Z=\sqrt{X^2+Y^2}$ represent the distance from the center of the target.
Let $E$ be the event $\{Z \leq z\}$. Then the distribution function $F_Z$ of $Z$ (see Figure 2.16) is given by
\begin{aligned} F_Z(z) & =P(Z \leq z) \ & =\frac{\text { Area of } E}{\text { Area of target }} \end{aligned}
Thus, we easily compute that
$F_Z(z)=\left\{\begin{array}{ll} 0, & \text { if } z \leq 0 \ z^2, & \text { if } 0 \leq z \leq 1 \ 1, & \text { if } z>1 \end{array}\right.$
The density $f_Z(z)$ is given again by the derivative of $F_Z(z)$ :
$f_Z(z)=\left\{\begin{array}{ll} 0, & \text { if } z \leq 0 \ 2 z, & \text { if } 0 \leq z \leq 1 \ 0, & \text { if } z>1 \end{array}\right.$
The reader is referred to Figure $7$ for the graphs of these functions.
We can verify this result by simulation, as follows: We choose values for $X$ and $Y$ at random from $[0,1]$ with uniform distribution, calculate $Z=\sqrt{X^2+Y^2}$, check whether $0 \leq Z \leq 1$, and present the results in a bar graph (see Figure $8$).
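In Python, that verification might look as follows (a sketch; the evaluation points are arbitrary).

```python
import math
import random

# Choose X, Y uniformly from [0, 1], set Z = sqrt(X^2 + Y^2), and keep
# only values with Z <= 1; then compare P(Z <= z) with F_Z(z) = z^2.
kept = []
while len(kept) < 10_000:
    z = math.hypot(random.random(), random.random())
    if z <= 1:
        kept.append(z)

for z in (0.25, 0.5, 0.75):
    print(z, sum(v <= z for v in kept) / len(kept), z * z)
```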
Example $10$
Suppose Mr. and Mrs. Lockhorn agree to meet at the Hanover Inn between 5:00 and 6:00 P.M. on Tuesday. Suppose each arrives at a time between 5:00 and 6:00 chosen at random with uniform probability. What is the distribution function for the length of time that the first to arrive has to wait for the other? What is the density function?
Solution
Here again we can take the unit square to represent the sample space, and $(X, Y)$ as the arrival times (after 5:00 P.M.) for the Lockhorns. Let $Z=|X-Y|$. Then we have $F_X(x)=x$ and $F_Y(y)=y$. Moreover (see Figure 2.19),
\begin{aligned} F_Z(z) & =P(Z \leq z) \ & =P(|X-Y| \leq z) \ & =\text { Area of } E . \end{aligned}
Thus, we have
$F_Z(z)=\left\{\begin{array}{ll} 0, & \text { if } z \leq 0 \ 1-(1-z)^2, & \text { if } 0 \leq z \leq 1, \ 1, & \text { if } z>1 \end{array}\right.$
The density $f_Z(z)$ is again obtained by differentiation:
$f_Z(z)=\left\{\begin{array}{ll} 0, & \text { if } z \leq 0 \ 2(1-z), & \text { if } 0 \leq z \leq 1 \ 0, & \text { if } z>1 \end{array}\right.$
Example $11$
There are many occasions where we observe a sequence of occurrences which occur at "random" times. For example, we might be observing emissions of a radioactive isotope, or cars passing a milepost on a highway, or light bulbs burning out. In such cases, we might define a random variable $X$ to denote the time between successive occurrences. Clearly, $X$ is a continuous random variable whose range consists of the non-negative real numbers. It is often the case that we can model $X$ by using the exponential density. This density is given by the formula
$f(t)=\left\{\begin{array}{ll} \lambda e^{-\lambda t}, & \text { if } t \geq 0 \ 0, & \text { if } t<0 \end{array}\right.$
The number $\lambda$ is a non-negative real number, and represents the reciprocal of the average value of $X$. (This will be shown in Chapter 6.) Thus, if the average time between occurrences is 30 minutes, then $\lambda=1 / 30$. A graph of this density function with $\lambda=1 / 30$ is shown in Figure 2.20. One can see from the figure that even though the average value is 30 , occasionally much larger values are taken on by $X$.
Suppose that we have bought a computer that contains a Warp 9 hard drive. The salesperson says that the average time between breakdowns of this type of hard drive is 30 months.
It is often assumed that the length of time between breakdowns is distributed according to the exponential density. We will assume that this model applies here, with $\lambda=1 / 30$.
Now suppose that we have been operating our computer for 15 months. We assume that the original hard drive is still running. We ask how long we should expect the hard drive to continue to run. One could reasonably expect that the hard drive will run, on the average, another 15 months. (One might also guess that it will run more than 15 months, since the fact that it has already run for 15 months implies that we don't have a lemon.) The time which we have to wait is a new random variable, which we will call $Y$. Obviously, $Y=X-15$. We can write a computer program to produce a sequence of simulated $Y$-values. To do this, we first produce a sequence of $X$ 's, and discard those values which are less than or equal to 15 (these values correspond to the cases where the hard drive has quit running before 15 months). To simulate a value of $X$, we compute the value of the expression
$\left(-\frac{1}{\lambda}\right) \log (r n d),$
where $rnd$ represents a random real number between 0 and 1. (That this expression has the exponential density will be shown in Section 5.2.) Figure $11$ shows an area bar graph of 10,000 simulated $Y$-values.
The average value of $Y$ in this simulation is 29.74 , which is closer to the original average life span of 30 months than to the value of 15 months which was guessed above. Also, the distribution of $Y$ is seen to be close to the distribution of $X$. It is in fact the case that $X$ and $Y$ have the same distribution. This property is called the memoryless property, because the amount of time that we have to wait for an occurrence does not depend on how long we have already waited. The only continuous density function with this property is the exponential density.
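The following Python sketch carries out the simulation described above (parameter names are ours; `random.random()` plays the role of rnd).

```python
import math
import random

LAM = 1 / 30  # failure rate: average lifetime of 30 months

def lifetime():
    # Simulate X with exponential density via (-1/lambda) * log(rnd);
    # 1 - rnd lies in (0, 1], which avoids log(0).
    return -math.log(1.0 - random.random()) / LAM

# Keep only lifetimes exceeding 15 months; Y is the remaining time.
ys = []
while len(ys) < 10_000:
    x = lifetime()
    if x > 15:
        ys.append(x - 15)

print(sum(ys) / len(ys))  # close to 30, illustrating the memoryless property
```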
Assignment of Probabilities
A fundamental question in practice is: How shall we choose the probability density function in describing any given experiment? The answer depends to a great extent on the amount and kind of information available to us about the experiment. In some cases, we can see that the outcomes are equally likely. In some cases, we can see that the experiment resembles another already described by a known density. In some cases, we can run the experiment a large number of times and make a reasonable guess at the density on the basis of the observed distribution of outcomes, as we did in Chapter 1. In general, the problem of choosing the right density function for a given experiment is a central problem for the experimenter and is not always easy to solve (see Example 2.6). We shall not examine this question in detail here but instead shall assume that the right density is already known for each of the experiments under study.
The introduction of suitable coordinates to describe a continuous sample space, and a suitable density to describe its probabilities, is not always so obvious, as our final example shows.
Infinite Tree
Example $12$
Consider an experiment in which a fair coin is tossed repeatedly, without stopping. We have seen in Example 1.6 that, for a coin tossed $n$ times, the natural sample space is a binary tree with $n$ stages. On this evidence we expect that for a coin tossed repeatedly, the natural sample space is a binary tree with an infinite number of stages, as indicated in Figure $12$.
It is surprising to learn that, although the $n$-stage tree is obviously a finite sample space, the unlimited tree can be described as a continuous sample space. To see how this comes about, let us agree that a typical outcome of the unlimited coin tossing experiment can be described by a sequence of the form $\omega = \{\mathrm{H\ H\ T\ H\ T\ T\ H}\ldots\}$. If we write 1 for $\mathrm{H}$ and 0 for $\mathrm{T}$, then $\omega = \{1\ 1\ 0\ 1\ 0\ 0\ 1\ldots\}$. In this way, each outcome is described by a sequence of 0's and 1's.
Now suppose we think of this sequence of 0's and 1's as the binary expansion of some real number $x = .1101001\cdots$ lying between 0 and 1. (A binary expansion is like a decimal expansion but based on 2 instead of 10.) Then each outcome is described by a value of $x$, and in this way $x$ becomes a coordinate for the sample space, taking on all real values between 0 and 1. (We note that it is possible for two different sequences to correspond to the same real number; for example, the sequences $\{\mathrm{T\ H\ H\ H\ H\ H}\ldots\}$ and $\{\mathrm{H\ T\ T\ T\ T\ T}\ldots\}$ both correspond to the real number $1/2$. We will not concern ourselves with this apparent problem here.)
What probabilities should be assigned to the events of this sample space? Consider, for example, the event $E$ consisting of all outcomes for which the first toss comes up heads and the second tails. Every such outcome has the form .10*****, where $*$ can be either 0 or 1 .
Now if $x$ is our real-valued coordinate, then the value of $x$ for every such outcome must lie between $1 / 2=.10000 \cdots$ and $3 / 4=.11000 \cdots$, and moreover, every value of $x$ between $1 / 2$ and $3 / 4$ has a binary expansion of the form $.10 * * * * \cdots$. This means that $\omega \in E$ if and only if $1 / 2 \leq x<3 / 4$, and in this way we see that we can describe $E$ by the interval $[1 / 2,3 / 4)$. More generally, every event consisting of outcomes for which the results of the first $n$ tosses are prescribed is described by a binary interval of the form $\left[k / 2^n,(k+1) / 2^n\right)$.
We have already seen in Section 1.2 that in the experiment involving $n$ tosses, the probability of any one outcome must be exactly $1 / 2^n$. It follows that in the unlimited toss experiment, the probability of any event consisting of outcomes for which the results of the first $n$ tosses are prescribed must also be $1 / 2^n$. But $1 / 2^n$ is exactly the length of the interval of $x$-values describing $E$ ! Thus we see that, just as with the spinner experiment, the probability of an event $E$ is determined by what fraction of the unit interval lies in $E$.
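One can test this correspondence directly. The Python sketch below (truncating the infinite toss sequence at 30 tosses, an arbitrary choice) estimates the probability that the first two tosses are heads then tails by checking whether $x$ lands in $[1/2, 3/4)$.

```python
import random

def tosses_to_x(k=30):
    # Map k fair coin tosses (1 = heads, 0 = tails) to a point of [0, 1)
    # via the binary expansion x = .b1 b2 b3 ...
    return sum(random.randint(0, 1) / 2 ** (i + 1) for i in range(k))

n = 10_000
print(sum(0.5 <= tosses_to_x() < 0.75 for _ in range(n)) / n)  # about 1/4
```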
Consider again the statement: The probability is $1/2$ that a fair coin will turn up heads when tossed. We have suggested that one interpretation of this statement is that if we toss the coin indefinitely the proportion of heads will approach $1/2$. That is, in our correspondence with binary sequences we expect to get a binary sequence with the proportion of 1's tending to $1/2$. The event $E$ of binary sequences for which this is true is a proper subset of the set of all possible binary sequences. It does not contain, for example, the sequence $011011011\ldots$ (i.e., (011) repeated again and again). The event $E$ is actually a very complicated subset of the binary sequences, but its probability can be determined as a limit of probabilities for events with a finite number of outcomes whose probabilities are given by finite tree measures. When the probability of $E$ is computed in this way, its value is found to be 1. This remarkable result is known as the Strong Law of Large Numbers (or Law of Averages) and is one justification for our frequency concept of probability. We shall prove a weak form of this theorem in Chapter 8.
Exercises
Exercise $1$:
Suppose you choose at random a real number $X$ from the interval $[2,10]$.
(a) Find the density function $f(x)$ and the probability of an event $E$ for this experiment, where $E$ is a subinterval $[a, b]$ of $[2,10]$.
(b) From (a), find the probability that $X>5$, that $5<X<7$, and that $X^2-12 X+35>0$.
Exercise $2$:
Suppose you choose a real number $X$ from the interval $[2,10]$ with a density function of the form
$f(x)=C x,$
where $C$ is a constant.
(a) Find $C$.
(b) Find $P(E)$, where $E=[a, b]$ is a subinterval of $[2,10]$.
(c) Find $P(X>5), P(X<7)$, and $P\left(X^2-12 X+35>0\right)$.
Exercise $3$:
Same as Exercise $2$, but suppose
$f(x)=\frac{C}{x} .$
Exercise $4$:
Suppose you throw a dart at a circular target of radius 10 inches. Assuming that you hit the target and that the coordinates of the outcomes are chosen at random, find the probability that the dart falls
(a) within 2 inches of the center.
(b) within 2 inches of the rim.
(c) within the first quadrant of the target.
(d) within the first quadrant and within 2 inches of the rim.
Exercise $5$:
Suppose you are watching a radioactive source that emits particles at a rate described by the exponential density
$f(t)=\lambda e^{-\lambda t},$
where $\lambda=1$, so that the probability $P([0, T])$ that a particle will appear in the next $T$ seconds is $P([0, T])=\int_0^T \lambda e^{-\lambda t} d t$. Find the probability that a particle (not necessarily the first) will appear
(a) within the next second.
(b) within the next 3 seconds.
(c) between 3 and 4 seconds from now.
(d) after 4 seconds from now.
Exercise $6$:
Assume that a new light bulb will burn out after $t$ hours, where $t$ is chosen from $[0, \infty)$ with an exponential density
$f(t)=\lambda e^{-\lambda t} .$
In this context, $\lambda$ is often called the failure rate of the bulb.
(a) Assume that $\lambda=0.01$, and find the probability that the bulb will not burn out before $T$ hours. This probability is often called the reliability of the bulb.
(b) For what $T$ is the reliability of the bulb $=1 / 2$ ?
Exercise $7$:
Choose a number $B$ at random from the interval [0,1] with uniform density. Find the probability that
(a) $1 / 3<B<2 / 3$.
(b) $|B-1 / 2| \leq 1 / 4$.
(c) $B<1 / 4$ or $1-B<1 / 4$.
(d) $3 B^2<B$.
Exercise $8$:
Choose independently two numbers $B$ and $C$ at random from the interval $[0,1]$ with uniform density. Note that the point $(B, C)$ is then chosen at random in the unit square. Find the probability that
(a) $B+C<1 / 2$.
(b) $B C<1 / 2$.
(c) $|B-C|<1 / 2$.
(d) $\max \{B, C\}<1 / 2$.
(e) $\min \{B, C\}<1 / 2$.
(f) $B<1 / 2$ and $1-C<1 / 2$.
(g) conditions (c) and (f) both hold.
(h) $B^2+C^2 \leq 1 / 2$.
(i) $(B-1 / 2)^2+(C-1 / 2)^2<1 / 4$.
Exercise $9$:
Suppose that we have a sequence of occurrences. We assume that the time $X$ between occurrences is exponentially distributed with $\lambda=1 / 10$, so on the average, there is one occurrence every 10 minutes (see Example 2.17). You come upon this system at time 100, and wait until the next occurrence. Make a conjecture concerning how long, on the average, you will have to wait. Write a program to see if your conjecture is right.
Exercise $10$:
As in Exercise 9, assume that we have a sequence of occurrences, but now assume that the time $X$ between occurrences is uniformly distributed between 5 and 15. As before, you come upon this system at time 100, and wait until the next occurrence. Make a conjecture concerning how long, on the average, you will have to wait. Write a program to see if your conjecture is right.
Exercise $11$:
For examples such as those in Exercises 9 and 10, it might seem that at least you should not have to wait on average more than 10 minutes if the average time between occurrences is 10 minutes. Alas, even this is not true. To see why, consider the following assumption about the times between occurrences. Assume that the time between occurrences is 3 minutes with probability .9 and 73 minutes with probability .1. Show by simulation that the average time between occurrences is 10 minutes, but that if you come upon this system at time 100, your average waiting time is more than 10 minutes.
Exercise $12$:
Take a stick of unit length and break it into three pieces, choosing the break points at random. (The break points are assumed to be chosen simultaneously.) What is the probability that the three pieces can be used to form a triangle? Hint: The sum of the lengths of any two pieces must exceed the length of the third, so each piece must have length $<1 / 2$. Now use Exercise $8$(g).
Exercise $13$:
Take a stick of unit length and break it into two pieces, choosing the break point at random. Now break the longer of the two pieces at a random point. What is the probability that the three pieces can be used to form a triangle?
Exercise $14$:
Choose independently two numbers $B$ and $C$ at random from the interval $[-1,1]$ with uniform distribution, and consider the quadratic equation
$x^2+B x+C=0 .$
Find the probability that the roots of this equation
(a) are both real.
(b) are both positive.
Hints: (a) requires $0 \leq B^2-4C$; (b) requires $0 \leq B^2-4C$, $B \leq 0$, and $0 \leq C$.
Exercise $15$:
At the Tunbridge World's Fair, a coin toss game works as follows. Quarters are tossed onto a checkerboard. The management keeps all the quarters, but for each quarter landing entirely within one square of the checkerboard the management pays a dollar. Assume that the edge of each square is twice the diameter of a quarter, and that the outcomes are described by coordinates chosen at random. Is this a fair game?
Exercise $16$:
Three points are chosen at random on a circle of unit circumference. What is the probability that the triangle defined by these points as vertices has three acute angles? Hint: One of the angles is obtuse if and only if all three points lie in the same semicircle. Take the circumference as the interval $[0,1]$. Take one point at 0 and the others at $B$ and $C$.
Exercise $17$:
Write a program to choose a random number $X$ in the interval $[2,10]$ 1000 times and record what fraction of the outcomes satisfy $X>5$, what fraction satisfy $5<X<7$, and what fraction satisfy $X^2-12X+35>0$. How do these results compare with Exercise 1?
Exercise $18$:
Write a program to choose a point $(X, Y)$ at random in a square of side 20 inches, doing this 10,000 times, and recording what fraction of the outcomes fall within 10 inches of the center; of these, what fraction fall between 8 and 10 inches of the center; and, of these, what fraction fall within the first quadrant of the square. How do these results compare with those of Exercise 4?
Exercise $19$:
Write a program to simulate the problem described in Exercise 7 (see Exercise 17). How do the simulation results compare with the results of Exercise 7?
Exercise $20$:
Write a program to simulate the problem described in Exercise 12.
Exercise $21$:
Write a program to simulate the problem described in Exercise 16.
Exercise $22$:
Write a program to carry out the following experiment. A coin is tossed 100 times and the number of heads that turn up is recorded. This experiment is then repeated 1000 times. Have your program plot a bar graph for the proportion of the 1000 experiments in which the number of heads is $n$, for each $n$ in the interval [35,65]. Does the bar graph look as though it can be fit with a normal curve?
Exercise $23$:
Write a program that picks a random number between 0 and 1 and computes the negative of its logarithm. Repeat this process a large number of times and plot a bar graph to give the number of times that the outcome falls in each interval of length 0.1 in $[0,10]$. On this bar graph plot a graph of the density $f(x)=e^{-x}$. How well does this density fit your graph?
• 3.1: Permutations
Many problems in probability theory require that we count the number of ways that a particular event can occur. For this, we study the topics of permutations and combinations. We consider permutations in this section and combinations in the next section.
• 3.2: Combinations
• 3.3: Card Shuffling
Given a deck of n cards, how many times must we shuffle it to make it “random"? Of course, the answer depends upon the method of shuffling which is used and what we mean by “random."
• 3.R: References
Thumbnail: Pascal's triangle contains the values from the binomial expansion where each number is the sum of the two numbers directly above it. (Public Domain; Hersfold via Wikipedia)
03: Combinatorics
Many problems in probability theory require that we count the number of ways that a particular event can occur. For this, we study the topics of permutations and combinations. We consider permutations in this section and combinations in the next section. Before discussing permutations, it is useful to introduce a general counting technique that will enable us to solve a variety of counting problems, including the problem of counting the number of possible permutations of $n$ objects.
Counting Problems
Consider an experiment that takes place in several stages and is such that the number of outcomes $m$ at the $n$th stage is independent of the outcomes of the previous stages. The number $m$ may be different for different stages. We want to count the number of ways that the entire experiment can be carried out.
Example $1$
You are eating at Émile’s restaurant and the waiter informs you that you have (a) two choices for appetizers: soup or juice; (b) three for the main course: a meat, fish, or vegetable dish; and (c) two for dessert: ice cream or cake. How many possible choices do you have for your complete meal? We illustrate the possible meals by a tree diagram shown in Figure $1$. Your menu is decided in three stages—at each stage the number of possible choices does not depend on what is chosen in the previous stages: two choices at the first stage, three at the second, and two at the third. From the tree diagram we see that the total number of choices is the product of the number of choices at each stage. In this example we have $2 \cdot 3 \cdot 2 = 12$ possible menus. Our menu example is an example of the following general counting technique.
A Counting Technique
A task is to be carried out in a sequence of $r$ stages. There are $n_1$ ways to carry out the first stage; for each of these $n_1$ ways, there are $n_2$ ways to carry out the second stage; for each of these $n_2$ ways, there are $n_3$ ways to carry out the third stage, and so forth. Then the total number of ways in which the entire task can be accomplished is given by the product $N = n_1 \cdot n_2 \cdot \dots \cdot n_r$.
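To make the technique concrete, here is a minimal Python sketch (our own, not a program from the text; the stage lists are the menu choices of Example $1$) that both multiplies the stage counts and enumerates the outcomes:

```python
from itertools import product
from math import prod

# The three stages of the menu in Example 1: appetizer, main course, dessert.
stages = [["soup", "juice"],
          ["meat", "fish", "vegetable"],
          ["ice cream", "cake"]]

# The counting technique: N = n_1 * n_2 * ... * n_r.
n_ways = prod(len(stage) for stage in stages)

# Cross-check by listing every complete meal.
meals = list(product(*stages))
assert n_ways == len(meals) == 12
print(meals[0])  # ('soup', 'meat', 'ice cream')
```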
Tree Diagrams
It will often be useful to use a tree diagram when studying probabilities of events relating to experiments that take place in stages and for which we are given the probabilities for the outcomes at each stage. For example, assume that the owner of Émile’s restaurant has observed that 80 percent of his customers choose the soup for an appetizer and 20 percent choose juice. Of those who choose soup, 50 percent choose meat, 30 percent choose fish, and 20 percent choose the vegetable dish. Of those who choose juice for an appetizer, 30 percent choose meat, 40 percent choose fish, and 30 percent choose the vegetable dish. We can use this to estimate the probabilities at the first two stages as indicated on the tree diagram of Figure $2$.
We choose for our sample space the set $\Omega = \{\omega_1, \omega_2, \ldots, \omega_6\}$ of all possible paths through the tree. How should we assign our probability distribution? For example, what probability should we assign to the customer choosing soup and then the meat? If 8/10 of the customers choose soup and then 1/2 of these choose meat, a proportion $8/10 \cdot 1/2 = 4/10$ of the customers choose soup and then meat. This suggests choosing our probability distribution for each path through the tree to be the product of the probabilities at each of the stages along the path. This results in the probability distribution for the sample points $\omega$ indicated in Figure $2$. (Note that $m(\omega_1) + \cdots + m(\omega_6) = 1$.) From this we see, for example, that the probability that a customer chooses meat is $m(\omega_1) + m(\omega_4) = .46$.
We shall say more about these tree measures when we discuss the concept of conditional probability in Chapter 4. We return now to more counting problems.
Example $2$
We can show that there are at least two people in Columbus, Ohio, who have the same three initials. Assuming that each person has three initials, there are 26 possibilities for a person’s first initial, 26 for the second, and 26 for the third. Therefore, there are $26^3 = 17,576$ possible sets of initials. This number is smaller than the number of people living in Columbus, Ohio; hence, there must be at least two people with the same three initials.
We consider next the celebrated birthday problem—often used to show that naive intuition cannot always be trusted in probability.
Example $3$: Birthday Problem
How many people do we need to have in a room to make it a favorable bet (probability of success greater than 1/2) that two people in the room will have the same birthday?
Solution
Since there are 365 possible birthdays, it is tempting to guess that we would need about 1/2 this number, or 183. You would surely win this bet. In fact, the number required for a favorable bet is only 23. To show this, we find the probability $p_r$ that, in a room with $r$ people, there is no duplication of birthdays; we will have a favorable bet if this probability is less than one half.
Table $1$: Birthday problem.
Number of people Probability that all birthdays are different
20 .5885616
21 .5563117
22 .5243047
23 .4927028
24 .4616557
25 .4313003
Assume that there are 365 possible birthdays for each person (we ignore leap years). Order the people from 1 to $r$. For a sample point $\omega$, we choose a possible sequence of length $r$ of birthdays each chosen as one of the 365 possible dates. There are 365 possibilities for the first element of the sequence, and for each of these choices there are 365 for the second, and so forth, making $365^r$ possible sequences of birthdays. We must find the number of these sequences that have no duplication of birthdays. For such a sequence, we can choose any of the 365 days for the first element, then any of the remaining 364 for the second, 363 for the third, and so forth, until we make $r$ choices. For the $r$th choice, there will be $365 - r + 1$ possibilities. Hence, the total number of sequences with no duplications is
$365 \cdot 364 \cdot 363 \cdot \dots \cdot (365 - r + 1)\ .$
Thus, assuming that each sequence is equally likely,
$p_r = \frac{365 \cdot 364 \cdot \dots \cdot (365 - r + 1)}{365^r}\ .$
We denote the product $(n)(n-1)\cdots (n - r +1)$ by $(n)_r$ (read “$n$ down $r$," or “$n$ lower $r$"). Thus,
$p_r = \frac{(365)_r}{(365)^r}\ .$
The program Birthday carries out this computation and prints the probabilities for $r = 20$ to 25. Running this program, we get the results shown in Table $1$.
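The listing of the Birthday program is not reproduced here, but the computation it performs is short; the following Python sketch (our own rendering, not the book's listing) produces the entries of Table $1$:

```python
def p_no_duplication(r, days=365):
    """Probability that r people all have different birthdays:
    p_r = (days)_r / days**r, computed as a running product."""
    p = 1.0
    for i in range(r):
        p *= (days - i) / days
    return p

for r in range(20, 26):
    print(r, round(p_no_duplication(r), 7))
# r = 23 is the first value for which p_r drops below 1/2.
```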
As we asserted above, the probability for no duplication changes from greater than one half to less than one half as we move from 22 to 23 people. To see how unlikely it is that we would lose our bet for larger numbers of people, we have run the program again, printing out values from $r = 10$ to $r = 100$ in steps of 10. We see that in a room of 40 people the odds already heavily favor a duplication, and in a room of 100 the odds are overwhelmingly in favor of a duplication.
Table $2$: Birthday problem
Number of people Probability that all birthdays are different
10 .8830518
20 .5885616
30 .2936838
40 .1087682
50 .0296264
60 .0058773
70 .0008404
80 .0000857
90 .0000062
100 .0000003
We have assumed that birthdays are equally likely to fall on any particular day. Statistical evidence suggests that this is not true. However, it is intuitively clear (but not easy to prove) that this makes it even more likely to have a duplication with a group of 23 people. (See Exercise $19$ to find out what happens on planets with more or fewer than 365 days per year.)
Permutations
We now turn to the topic of permutations.
Definition $1$: Permutation
Let $A$ be any finite set. A permutation of $A$ is a one-to-one mapping of $A$ onto itself.
To specify a particular permutation we list the elements of $A$ and, under them, show where each element is sent by the one-to-one mapping. For example, if $A = \{a,b,c\}$ a possible permutation $\sigma$ would be
$\sigma = \pmatrix{ a & b & c \cr b & c & a \cr}.$
By the permutation $\sigma$, $a$ is sent to $b$, $b$ is sent to $c$, and $c$ is sent to $a$. The condition that the mapping be one-to-one means that no two elements of $A$ are sent, by the mapping, into the same element of $A$.
We can put the elements of our set in some order and rename them 1, 2, …, $n$. Then, a typical permutation of the set $A = \{a_1,a_2,a_3,a_4\}$ can be written in the form
$\sigma = \pmatrix{ 1 & 2 & 3 & 4 \cr 2 & 1 & 4 & 3 \cr},$
indicating that $a_1$ went to $a_2$, $a_2$ to $a_1$, $a_3$ to $a_4$, and $a_4$ to $a_3$.
If we always choose the top row to be 1 2 3 4 then, to prescribe the permutation, we need only give the bottom row, with the understanding that this tells us where 1 goes, 2 goes, and so forth, under the mapping. When this is done, the permutation is often called a rearrangement of the $n$ objects 1, 2, 3, …, $n$. For example, all possible permutations, or rearrangements, of the numbers $A = \{1,2,3\}$ are: $123,\ 132,\ 213,\ 231,\ 312,\ 321\ .$
It is an easy matter to count the number of possible permutations of $n$ objects. By our general counting principle, there are $n$ ways to assign the first element, for each of these we have $n - 1$ ways to assign the second object, $n - 2$ for the third, and so forth. This proves the following theorem.
Theorem $1$
The total number of permutations of a set $A$ of $n$ elements is given by $n \cdot (n-1) \cdot (n - 2) \cdot \ldots \cdot 1$.
It is sometimes helpful to consider orderings of subsets of a given set. This prompts the following definition.
Definition $2$: $k$-permutation
Let $A$ be an $n$-element set, and let $k$ be an integer between 0 and $n$. Then a $k$-permutation of $A$ is an ordered listing of a subset of $A$ of size $k$.
Using the same techniques as in the last theorem, the following result is easily proved.
Theorem $2$
The total number of $k$-permutations of a set $A$ of $n$ elements is given by $n \cdot (n-1) \cdot (n-2) \cdot \ldots \cdot (n - k + 1)$.
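Both of these counts are running products, so a few lines of Python suffice to compute them (a sketch of our own; the name falling_factorial is not from the text):

```python
def falling_factorial(n, k):
    """(n)_k = n(n-1)...(n-k+1); taking k = n gives n!."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

print(falling_factorial(5, 5))  # 120, the number of permutations of 5 objects
print(falling_factorial(5, 2))  # 20, the number of 2-permutations of 5 objects
```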
Factorials
The number given in Theorem $1$ is called $n$ factorial, and is denoted by $n!$. The expression 0! is defined to be 1 to make certain formulas come out simpler. The first few values of this function are shown in Table $3$. The reader will note that this function grows very rapidly.
Table $3$: Values of the factorial function
n n!
0 1
1 1
2 2
3 6
4 24
5 120
6 720
7 5040
8 40320
9 362880
10 3628800
The expression $n!$ will enter into many of our calculations, and we shall need to have some estimate of its magnitude when $n$ is large. It is clearly not practical to make exact calculations in this case. We shall instead use a result called Stirling’s formula. Before stating this formula we need a definition.
Definition $3$
Let $a_n$ and $b_n$ be two sequences of numbers. We say that $a_n$ is asymptotically equal to $b_n$, and write $a_n \sim b_n$, if
$\lim_{n \to \infty} \frac{a_n}{b_n} = 1\ .$
Example $4$:
If $a_n = n + \sqrt n$ and $b_n = n$ then, since $a_n/b_n = 1 + 1/\sqrt n$ and this ratio tends to 1 as $n$ tends to infinity, we have $a_n \sim b_n$.
Theorem $3$ (Stirling's Formula)
The sequence $n!$ is asymptotically equal to
$n^n e^{-n}\sqrt{2\pi n}\ .$
The proof of Stirling’s formula may be found in most analysis texts. Let us verify this approximation by using the computer. The program StirlingApproximations prints $n!$, the Stirling approximation, and, finally, the ratio of these two numbers. Sample output of this program is shown in Table $4$. Note that, while the ratio of the numbers is getting closer to 1, the difference between the exact value and the approximation is increasing, and indeed, this difference will tend to infinity as $n$ tends to infinity, even though the ratio tends to 1. (This was also true in our Example $4$ where $n + \sqrt n \sim n$, but the difference is $\sqrt n$.)
Table $4$: Stirling approximations to the factorial function.
$n$ $n!$ Approximation Ratio
1 1 0.922 1.084
2 2 1.919 1.042
3 6 5.836 1.028
4 24 23.506 1.021
5 120 118.019 1.016
6 720 710.078 1.013
7 5040 4980.396 1.011
8 40320 39902.395 1.010
9 362880 359536.873 1.009
10 3628800 3598696.619 1.008
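A few lines of Python in the spirit of StirlingApproximations (our own sketch, not the book's listing) reproduce this table and exhibit both limiting behaviors:

```python
from math import e, factorial, pi, sqrt

for n in range(1, 11):
    exact = factorial(n)
    approx = n**n * e**(-n) * sqrt(2 * pi * n)
    # The ratio tends to 1, while the difference exact - approx grows.
    print(n, exact, round(approx, 3), round(exact / approx, 3))
```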
Generating Random Permutations
We now consider the question of generating a random permutation of the integers between 1 and $n$. Consider the following experiment. We start with a deck of $n$ cards, labelled 1 through $n$. We choose a random card out of the deck, note its label, and put the card aside. We repeat this process until all $n$ cards have been chosen. It is clear that each permutation of the integers from 1 to $n$ can occur as a sequence of labels in this experiment, and that each sequence of labels is equally likely to occur. In our implementations of the computer algorithms, the above procedure is called RandomPermutation.
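A direct Python translation of this card-drawing procedure (a sketch of our own, not the book's RandomPermutation listing) is:

```python
import random

def random_permutation(n):
    """Draw cards labelled 1..n one at a time without replacement;
    the resulting sequence of labels is a random permutation."""
    deck = list(range(1, n + 1))
    labels = []
    while deck:
        labels.append(deck.pop(random.randrange(len(deck))))
    return labels

print(random_permutation(10))  # e.g. [3, 7, 1, 10, 5, 2, 8, 6, 9, 4]
```

Since each draw is uniform over the cards that remain, every one of the $n!$ possible label sequences occurs with probability $1/n!$.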
Fixed Points
There are many interesting problems that relate to properties of a permutation chosen at random from the set of all permutations of a given finite set. For example, since a permutation is a one-to-one mapping of the set onto itself, it is interesting to ask how many points are mapped onto themselves. We call such points fixed points of the mapping.
Let $p_k(n)$ be the probability that a random permutation of the set $\{1, 2, \ldots, n\}$ has exactly $k$ fixed points. We will attempt to learn something about these probabilities using simulation. The program FixedPoints uses the procedure RandomPermutation to generate random permutations and count fixed points. The program prints the proportion of times that there are $k$ fixed points as well as the average number of fixed points. The results of this program for 500 simulations for the cases $n = 10$, 20, and 30 are shown in Table $5$.
Table $5$:Fixed point distributions.
Number of fixed points Fraction of permutations
n = 10 n = 20 n = 30
0 0.362 0.370 0.358
1 0.368 0.396 0.358
2 0.202 0.164 0.192
3 0.052 0.060 0.070
4 0.012 0.008 0.020
5 0.004 0.002 0.002
Average number of fixed points 0.996 0.948 1.042
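A minimal sketch of such a simulation (our own, not the book's FixedPoints listing):

```python
import random
from collections import Counter

def count_fixed_points(perm):
    # Position i holds a fixed point when perm[i] == i + 1.
    return sum(1 for i, v in enumerate(perm) if v == i + 1)

n, trials = 10, 500
counts, total = Counter(), 0
for _ in range(trials):
    perm = random.sample(range(1, n + 1), n)  # a random permutation of 1..n
    k = count_fixed_points(perm)
    counts[k] += 1
    total += k

for k in sorted(counts):
    print(k, counts[k] / trials)
print("average number of fixed points:", total / trials)
```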
Notice the rather surprising fact that our estimates for the probabilities do not seem to depend very heavily on the number of elements in the permutation. For example, the probability that there are no fixed points, when $n = 10,\ 20,$ or 30 is estimated to be between .35 and .37. We shall see later (see Example $12$) that for $n \geq 10$ the exact probabilities $p_0(n)$ are, to six decimal place accuracy, equal to $1/e \approx .367879$. Thus, for all practical purposes, after $n = 10$ the probability that a random permutation of the set $\{1, 2, \ldots, n\}$ has no fixed points does not depend upon $n$. These simulations also suggest that the average number of fixed points is close to 1. It can be shown (see Example $1$) that the average is exactly equal to 1 for all $n$.
More picturesque versions of the fixed-point problem are: You have arranged the books on your book shelf in alphabetical order by author and they get returned to your shelf at random; what is the probability that exactly $k$ of the books end up in their correct position? (The library problem.) In a restaurant $n$ hats are checked and they are hopelessly scrambled; what is the probability that no one gets his own hat back? (The hat check problem.) In the Historical Remarks at the end of this section, we give one method for solving the hat check problem exactly. Another method is given in Example $12$.
Records
Here is another interesting probability problem that involves permutations. Estimates for the amount of measured snow in inches in Hanover, New Hampshire, in the ten years from 1974 to 1983 are shown in Table $6$
Table $6$: Snowfall in Hanover.
Date Snowfall in inches
1974 75
1975 88
1976 72
1977 110
1978 85
1979 30
1980 55
1981 86
1982 51
1983 64
Suppose we have started keeping records in 1974. Then our first year’s snowfall could be considered a record snowfall starting from this year. A new record was established in 1975; the next record was established in 1977, and there were no new records established after this year. Thus, in this ten-year period, there were three records established: 1974, 1975, and 1977. The question that we ask is: How many records should we expect to be established in such a ten-year period? We can count the number of records in terms of a permutation as follows: We number the years from 1 to 10. The actual amounts of snowfall are not important but their relative sizes are. We can, therefore, change the numbers measuring snowfalls to numbers 1 to 10 by replacing the smallest number by 1, the next smallest by 2, and so forth. (We assume that there are no ties.) For our example, we obtain the data shown in Table $7$.
Table $7$: Ranking of total snowfall
Year 1 2 3 4 5 6 7 8 9 10
Ranking 6 9 5 10 7 1 3 8 2 4
This gives us a permutation of the numbers from 1 to 10 and, from this permutation, we can read off the records; they are in years 1, 2, and 4. Thus we can define records for a permutation as follows:
Let $\sigma$ be a permutation of the set $\{1, 2, \ldots, n\}$. Then $i$ is a record of $\sigma$ if either $i = 1$ or $\sigma(j) < \sigma(i)$ for every $j = 1,\ldots,\,i - 1$.
Now if we regard all rankings of snowfalls over an $n$-year period to be equally likely (and allow no ties), we can estimate the probability that there will be $k$ records in $n$ years as well as the average number of records by simulation.
We have written a program Records that counts the number of records in randomly chosen permutations. We have run this program for the cases $n = 10$, 20, 30. For $n = 10$ the average number of records is 2.968, for 20 it is 3.656, and for 30 it is 3.960. We see now that the averages increase, but very slowly. We shall see later (see Example $11$) that the average number is approximately $\log n$. Since $\log 10 = 2.3$, $\log 20 = 3$, and $\log 30 = 3.4$, this is consistent with the results of our simulations.
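The Records listing is likewise not reproduced here; a Python sketch of the same simulation (our own) is:

```python
import random
from math import log

def num_records(perm):
    """i is a record when perm[i] exceeds every earlier entry."""
    records, best = 0, float("-inf")
    for value in perm:
        if value > best:
            records, best = records + 1, value
    return records

trials = 1000
for n in (10, 20, 30):
    avg = sum(num_records(random.sample(range(n), n))
              for _ in range(trials)) / trials
    print(n, round(avg, 3), "log n =", round(log(n), 3))
```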
As remarked earlier, we shall be able to obtain formulas for exact results of certain problems of the above type. However, only minor changes in the problem make this impossible. The power of simulation is that minor changes in a problem do not make the simulation much more difficult. (See Exercise $20$ for an interesting variation of the hat check problem.)
List of Permutations
Another method to solve problems that is not sensitive to small changes in the problem is to have the computer simply list all possible permutations and count the fraction that have the desired property. The program AllPermutations produces a list of all of the permutations of $n$. When we try running this program, we run into a limitation on the use of the computer. The number of permutations of $n$ increases so rapidly that even to list all permutations of 20 objects is impractical.
Historical Remarks
Our basic counting principle stated that if you can do one thing in $r$ ways and for each of these another thing in $s$ ways, then you can do the pair in $rs$ ways. This is such a self-evident result that you might expect that it occurred very early in mathematics. N. L. Biggs suggests that we might trace an example of this principle as follows: First, he relates a popular nursery rhyme dating back to at least 1730:
As I was going to St. Ives,
I met a man with seven wives,
Each wife had seven sacks,
Each sack had seven cats,
Each cat had seven kits.
Kits, cats, sacks and wives,
How many were going to St. Ives?
(You need our principle only if you are not clever enough to realize that you are supposed to answer one, since only the narrator is going to St. Ives; the others are going in the other direction!)
He also gives a problem appearing on one of the oldest surviving mathematical manuscripts of about 1650 B.C., roughly translated as:
Table $8$:
Houses 7
Cats 49
Mice 343
Wheat 2401
Hekat 16807
Total 19607
The following interpretation has been suggested: there are seven houses, each with seven cats; each cat kills seven mice; each mouse would have eaten seven heads of wheat, each of which would have produced seven hekat measures of grain. With this interpretation, the table answers the question of how many hekat measures were saved by the cats’ actions. It is not clear why the writer of the table wanted to add the numbers together.1
One of the earliest uses of factorials occurred in Euclid’s proof that there are infinitely many prime numbers. Euclid argued that there must be a prime number between $n$ and $n! + 1$ as follows: $n!$ and $n! + 1$ cannot have common factors. Either $n! + 1$ is prime or it has a proper factor. In the latter case, this factor cannot divide $n!$ and hence must be between $n$ and $n! + 1$. If this factor is not prime, then it has a factor that, by the same argument, must be bigger than $n$. In this way, we eventually reach a prime bigger than $n$, and this holds for all $n$.
The “$n!$" rule for the number of permutations seems to have occurred first in India. Examples have been found as early as 300 B.C., and by the eleventh century the general formula seems to have been well known in India and then in the Arab countries.
The hat check problem is found in an early probability book written by de Montmort and first printed in 1708.2 It appears in the form of a game called Treize. In a simplified version of this game considered by de Montmort one turns over cards numbered 1 to 13, calling out 1, 2, …, 13 as the cards are examined. De Montmort asked for the probability that no card that is turned up agrees with the number called out.
This probability is the same as the probability that a random permutation of 13 elements has no fixed point. De Montmort solved this problem by the use of a recursion relation as follows: let $w_n$ be the number of permutations of $n$ elements with no fixed point (such permutations are called derangements). Then $w_1 = 0$ and $w_2 = 1$.
Now assume that $n \ge 3$ and choose a derangement of the integers between 1 and $n$. Let $k$ be the integer in the first position in this derangement. By the definition of derangement, we have $k \ne 1$. There are two possibilities of interest concerning the position of 1 in the derangement: either 1 is in the $k$th position or it is elsewhere. In the first case, the $n-2$ remaining integers can be positioned in $w_{n-2}$ ways without resulting in any fixed points. In the second case, we consider the set of integers $\{1, 2, \ldots, k-1, k+1, \ldots, n\}$. The numbers in this set must occupy the positions $\{2, 3, \ldots, n\}$ so that none of the numbers other than 1 in this set are fixed, and also so that 1 is not in position $k$. The number of ways of achieving this kind of arrangement is just $w_{n-1}$. Since there are $n-1$ possible values of $k$, we see that $w_n = (n - 1)w_{n - 1} + (n - 1)w_{n -2}$ for $n \ge 3$. One might conjecture from this last equation that the sequence $\{w_n\}$ grows like the sequence $\{n!\}$.
In fact, it is easy to prove by induction that $w_n = nw_{n - 1} + (-1)^n\ .$ Then $p_i = w_i/i!$ satisfies $p_i - p_{i - 1} = \frac{(-1)^i}{i!}\ .$ If we sum from $i = 2$ to $n$, and use the fact that $p_1 = 0$, we obtain $p_n = \frac1{2!} - \frac1{3!} + \cdots + \frac{(-1)^n}{n!}\ .$ This agrees with the first $n + 1$ terms of the expansion for $e^x$ for $x = -1$ and hence for large $n$ is approximately $e^{-1} \approx .368$. David remarks that this was possibly the first use of the exponential function in probability.3 We shall see another way to derive de Montmort’s result in the next section, using a method known as the Inclusion-Exclusion method.
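These recursions are easy to check numerically. The sketch below (our own) computes the derangement numbers $w_n$ both ways and compares $p_n = w_n/n!$ with $1/e$:

```python
from math import exp, factorial

w = {1: 0, 2: 1}
for n in range(3, 14):
    w[n] = (n - 1) * (w[n - 1] + w[n - 2])   # de Montmort's recursion
    assert w[n] == n * w[n - 1] + (-1) ** n  # the simpler recursion agrees

print(w[13] / factorial(13), exp(-1))  # both are 0.367879..., agreeing with Treize
```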
Recently, a related problem appeared in a column of Marilyn vos Savant.4 Charles Price wrote to ask about his experience playing a certain form of solitaire, sometimes called “frustration solitaire." In this particular game, a deck of cards is shuffled, and then dealt out, one card at a time. As the cards are being dealt, the player counts from 1 to 13, and then starts again at 1. (Thus, each number is counted four times.) If a number that is being counted coincides with the rank of the card that is being turned up, then the player loses the game. Price found that he rarely won and wondered how often he should win. Vos Savant remarked that the expected number of matches is 4 so it should be difficult to win the game.
Finding the chance of winning is a harder problem than the one that de Montmort solved because, when one goes through the entire deck, there are different patterns for the matches that might occur. For example matches may occur for two cards of the same rank, say two aces, or for two different ranks, say a two and a three.
A discussion of this problem can be found in Riordan.5 In this book, it is shown that as $n \rightarrow \infty$, the probability of no matches tends to $1/e^4$.
The original game of Treize is more difficult to analyze than frustration solitaire. The game of Treize is played as follows. One person is chosen as dealer and the others are players. Each player, other than the dealer, puts up a stake. The dealer shuffles the cards and turns them up one at a time calling out, “Ace, two, three,..., king," just as in frustration solitaire. If the dealer goes through the 13 cards without a match he pays the players an amount equal to their stake, and the deal passes to someone else. If there is a match the dealer collects the players’ stakes; the players put up new stakes, and the dealer continues through the deck, calling out, “Ace, two, three, ...." If the dealer runs out of cards he reshuffles and continues the count where he left off. He continues until there is a run of 13 without a match and then a new dealer is chosen.
The question at this point is how much money can the dealer expect to win from each player. De Montmort found that if each player puts up a stake of 1, say, then the dealer will win approximately .801 from each player.
Peter Doyle calculated the exact amount that the dealer can expect to win. The answer is:
265160721560102185822276079127341827846421204821360914467153719620899315231134354172455433491287054144029923925160769411350008077591781851201382176876653563173852874555859367254632009477403727395572807459384342747876649650760639905382611893881435135473663160170049455072017642788283066011710795363314273438247792270983528175329903598858141368836765583311324476153310720627474169719301806649152698704084383914217907906954976036285282115901403162021206015491269208808249133255538826920554278308103685781886120875824880068097864043811858283487754256095555066287892712304826997601700116233592793308297533642193505074540268925683193887821301442705197918823303692913358259222011722071315607111497510114983106336407213896987800799647204708825303387525892236581323015628005621143427290625658974433971 657194541229080070862898413060875613028189911673578636237560671849864913535355362219744889022326710115880101628593135197929438722327703339696779797069933475802423676949873661605184031477561560393380257070970711959696412682424550133198797470546935178093837505934888586986723648469505398886862858260990558627100131815062113440705698321474022185156770667208094586589378459432799868706334161812988630496327287254818458879353024498 00322425586446741048147720934108061350613503856973048971213063937040515 59533731591.
This is .803 to 3 decimal places. A description of the algorithm used to find this answer can be found on his Web page.6 A discussion of this problem and other problems can be found in Doyle et al.7
The birthday problem does not seem to have a very old history. Problems of this type were first discussed by von Mises.8 It was made popular in the 1950s by Feller’s book.9
Stirling presented his formula
$n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$
in his work Methodus Differentialis published in 1730.10 This approximation was used by de Moivre in establishing his celebrated central limit theorem that we will study in Chapter 9. De Moivre himself had independently established this approximation, but without identifying the constant $\pi$. Having established the approximation
$\frac{2B}{\sqrt n}$
for the central term of the binomial distribution, where the constant $B$ was determined by an infinite series, de Moivre writes:
… my worthy and learned Friend, Mr. James Stirling, who had applied himself after me to that inquiry, found that the Quantity $B$ did denote the Square-root of the Circumference of a Circle whose Radius is Unity, so that if that Circumference be called $c$ the Ratio of the middle Term to the Sum of all Terms will be expressed by $2/\sqrt{nc}\,$….11
Exercises
$1$
Four people are to be arranged in a row to have their picture taken. In how many ways can this be done?
$2$
An automobile manufacturer has four colors available for automobile exteriors and three for interiors. How many different color combinations can he produce?
$3$
In a digital computer, a bit is one of the integers {0,1}, and a word is any string of 32 bits. How many different words are possible?
$4$
What is the probability that at least 2 of the presidents of the United States have died on the same day of the year? If you bet this has happened, would you win your bet?
$5$
There are three different routes connecting city A to city B. How many ways can a round trip be made from A to B and back? How many ways if it is desired to take a different route on the way back?
$6$
In arranging people around a circular table, we take into account their seats relative to each other, not the actual position of any one person. Show that $n$ people can be arranged around a circular table in $(n - 1)!$ ways.
$7$
Five people get on an elevator that stops at five floors. Assuming that each has an equal probability of going to any one floor, find the probability that they all get off at different floors.
$8$
A finite set $\Omega$ has $n$ elements. Show that if we count the empty set and $\Omega$ as subsets, there are $2^n$ subsets of $\Omega$.
$9$
A more refined inequality for approximating $n!$ is given by $\sqrt{2\pi n}\left(\frac ne\right)^n e^{1/(12n + 1)} < n! < \sqrt{2\pi n}\left(\frac ne\right)^n e^{1/(12n)}\ .$ Write a computer program to illustrate this inequality for $n = 1$ to 9.
$10$
A deck of ordinary cards is shuffled and 13 cards are dealt. What is the probability that the last card dealt is an ace?
$11$
There are $n$ applicants for the director of computing. The applicants are interviewed independently by each member of the three-person search committee and ranked from 1 to $n$. A candidate will be hired if he or she is ranked first by at least two of the three interviewers. Find the probability that a candidate will be accepted if the members of the committee really have no ability at all to judge the candidates and just rank the candidates randomly. In particular, compare this probability for the case of three candidates and the case of ten candidates.
$12$
A symphony orchestra has in its repertoire 30 Haydn symphonies, 15 modern works, and 9 Beethoven symphonies. Its program always consists of a Haydn symphony followed by a modern work, and then a Beethoven symphony.
1. How many different programs can it play?
2. How many different programs are there if the three pieces can be played in any order?
3. How many different three-piece programs are there if more than one piece from the same category can be played and they can be played in any order?
$13$
A certain state has license plates showing three numbers and three letters. How many different license plates are possible
1. if the numbers must come before the letters?
2. if there is no restriction on where the letters and numbers appear?
$14$
The door on the computer center has a lock which has five buttons numbered from 1 to 5. The combination of numbers that opens the lock is a sequence of five numbers and is reset every week.
1. How many combinations are possible if every button must be used once?
2. Assume that the lock can also have combinations that require you to push two buttons simultaneously and then the other three one at a time. How many more combinations does this permit?
$15$
A computing center has 3 processors that receive $n$ jobs, with the jobs assigned to the processors purely at random so that all of the $3^n$ possible assignments are equally likely. Find the probability that exactly one processor has no jobs.
$16$
Prove that at least two people in Atlanta, Georgia, have the same initials, assuming no one has more than four initials.
$17$
Find a formula for the probability that among a set of $n$ people, at least two have their birthdays in the same month of the year (assuming the months are equally likely for birthdays).
$18$
Consider the problem of finding the probability of more than one coincidence of birthdays in a group of $n$ people. These include, for example, three people with the same birthday, or two pairs of people with the same birthday, or larger coincidences. Show how you could compute this probability, and write a computer program to carry out this computation. Use your program to find the smallest number of people for which it would be a favorable bet that there would be more than one coincidence of birthdays.
$19$
Suppose that on planet Zorg a year has $n$ days, and that the lifeforms there are equally likely to have hatched on any day of the year. We would like to estimate $d$, which is the minimum number of lifeforms needed so that the probability of at least two sharing a birthday exceeds 1/2.
(a) In Example $3$, it was shown that in a set of $d$ lifeforms, the probability that no two lifeforms share a birthday is $\frac{(n)_d}{n^d}\ ,$ where $(n)_d = (n)(n-1)\cdots (n-d+1)$. Thus, we would like to set this equal to 1/2 and solve for $d$.
(b) Using Stirling’s Formula, show that $\frac{(n)_d}{n^d} \sim \biggl(1 + {d\over{n-d}}\biggr)^{n-d + 1/2} e^{-d}\ .$
(c) Now take the logarithm of the right-hand expression, and use the fact that for small values of $x$, we have $\log(1+x) \sim x - {{x^2}\over 2}\ .$ (We are implicitly using the fact that $d$ is of smaller order of magnitude than $n$. We will also use this fact in part (d).)
(d) Set the expression found in part (c) equal to $-\log(2)$, and solve for $d$ as a function of $n$, thereby showing that $d \sim \sqrt{2(\log 2)\,n}\ .$ Hint: If all three summands in the expression found in part (b) are used, one obtains a cubic equation in $d$. If the smallest of the three terms is thrown away, one obtains a quadratic equation in $d$.
(e) Use a computer to calculate the exact values of $d$ for various values of $n$. Compare these values with the approximate values obtained by using the answer to part (d).
$20$
At a mathematical conference, ten participants are randomly seated around a circular table for meals. Using simulation, estimate the probability that no two people sit next to each other at both lunch and dinner. Can you make an intelligent conjecture for the case of $n$ participants when $n$ is large?
$21$
Modify the program AllPermutations to count the number of permutations of $n$ objects that have exactly $j$ fixed points for $j = 0$, 1, 2, …, $n$. Run your program for $n = 2$ to 6. Make a conjecture for the relation between the number that have 0 fixed points and the number that have exactly 1 fixed point. A proof of the correct conjecture can be found in Wilf.12
$22$
Mr. Wimply Dimple, one of London’s most prestigious watch makers, has come to Sherlock Holmes in a panic, having discovered that someone has been producing and selling crude counterfeits of his best selling watch. The 16 counterfeits so far discovered bear stamped numbers, all of which fall between 1 and 56, and Dimple is anxious to know the extent of the forger’s work. All present agree that it seems reasonable to assume that the counterfeits thus far produced bear consecutive numbers from 1 to whatever the total number is.
“Chin up, Dimple," opines Dr. Watson. “I shouldn’t worry overly much if I were you; the Maximum Likelihood Principle, which estimates the total number as precisely that which gives the highest probability for the series of numbers found, suggests that we guess 56 itself as the total. Thus, your forgers are not a big operation, and we shall have them safely behind bars before your business suffers significantly."
“Stuff, nonsense, and bother your fancy principles, Watson," counters Holmes. “Anyone can see that, of course, there must be quite a few more than 56 watches—why the odds of our having discovered precisely the highest numbered watch made are laughably negligible. A much better guess would be twice that, or 112."
1. Show that Watson is correct that the Maximum Likelihood Principle gives 56.
2. Write a computer program to compare Holmes’s and Watson’s guessing strategies as follows: fix a total $N$ and choose 16 integers randomly between 1 and $N$. Let $m$ denote the largest of these. Then Watson’s guess for $N$ is $m$, while Holmes’s is $2m$. See which of these is closer to $N$. Repeat this experiment (with $N$ still fixed) a hundred or more times, and determine the proportion of times that each comes closer. Whose seems to be the better strategy?
$23$
Barbara Smith is interviewing candidates to be her secretary. As she interviews the candidates, she can determine the relative rank of the candidates but not the true rank. Thus, if there are six candidates and their true rank is 6, 1, 4, 2, 3, 5, (where 1 is best) then after she had interviewed the first three candidates she would rank them 3, 1, 2. As she interviews each candidate, she must either accept or reject the candidate. If she does not accept the candidate after the interview, the candidate is lost to her. She wants to decide on a strategy for deciding when to stop and accept a candidate that will maximize the probability of getting the best candidate. Assume that there are $n$ candidates and they arrive in a random rank order.
1. What is the probability that Barbara gets the best candidate if she interviews all of the candidates? What is it if she chooses the first candidate?
2. Assume that Barbara decides to interview the first half of the candidates and then continue interviewing until getting a candidate better than any candidate seen so far. Show that she has a better than 25 percent chance of ending up with the best candidate.
$24$
For the task described in Exercise 23, it can be shown13 that the best strategy is to pass over the first $k - 1$ candidates where $k$ is the smallest integer for which $\frac 1k + \frac 1{k + 1} + \cdots + \frac 1{n - 1} \leq 1\ .$ Using this strategy the probability of getting the best candidate is approximately $1/e = .368$. Write a program to simulate Barbara Smith’s interviewing if she uses this optimal strategy, using $n = 10$, and see if you can verify that the probability of success is approximately $1/e$. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/03%3A_Combinatorics/3.01%3A_Permutations.txt |
Having mastered permutations, we now consider combinations. Let $U$ be a set with $n$ elements; we want to count the number of distinct subsets of the set $U$ that have exactly $j$ elements. The empty set and the set $U$ are considered to be subsets of $U$. The empty set is usually denoted by $\phi$.
Example $1$:
Let $U = \{a,b,c\}$. The subsets of $U$ are $\phi,\ \{a\},\ \{b\},\ \{c\},\ \{a,b\},\ \{a,c\},\ \{b,c\},\ \{a,b,c\}\ .$
Binomial Coefficients
The number of distinct subsets with $j$ elements that can be chosen from a set with $n$ elements is denoted by ${n \choose j}$, and is pronounced “$n$ choose $j$." The number $n \choose j$ is called a binomial coefficient. This terminology comes from an application to algebra which will be discussed later in this section.
In the above example, there is one subset with no elements, three subsets with exactly 1 element, three subsets with exactly 2 elements, and one subset with exactly 3 elements. Thus, ${3 \choose 0} = 1$, ${3 \choose 1} = 3$, ${3 \choose 2} = 3$, and ${3 \choose 3} = 1$. Note that there are $2^3 = 8$ subsets in all. (We have already seen that a set with $n$ elements has $2^n$ subsets; see Exercise 3.1.8) It follows that
${3 \choose 0} + {3 \choose 1} + {3 \choose 2} + {3 \choose 3} = 2^3 = 8\ .$
Assume that $n > 0$. Then, since there is only one way to choose a set with no elements and only one way to choose a set with $n$ elements, we have ${n \choose 0} = {n \choose n} = 1\ .$ The remaining values of ${n \choose j}$ are determined by the following recurrence relation:
Theorem $1$
For integers $n$ and $j$, with $0 < j < n$, the binomial coefficients satisfy:
${n \choose j} = {n-1 \choose j} + {n-1 \choose j - 1}\ . \tag{3.1}$
Proof
We wish to choose a subset of $j$ elements. Choose an element $u$ of $U$. Assume first that we do not want $u$ in the subset. Then we must choose the $j$ elements from a set of $n - 1$ elements; this can be done in ${n-1 \choose j}$ ways. On the other hand, assume that we do want $u$ in the subset. Then we must choose the other $j - 1$ elements from the remaining $n - 1$ elements of $U$; this can be done in ${n-1 \choose j - 1}$ ways. Since $u$ is either in our subset or not, the number of ways that we can choose a subset of $j$ elements is the sum of the number of subsets of $j$ elements which have $u$ as a member and the number which do not—this is what Equation 3.1 states.
The binomial coefficient $n \choose j$ is defined to be 0, if $j < 0$ or if $j > n$. With this definition, the restrictions on $j$ in Theorem $1$ are unnecessary.
Table $1$: Pascal's Triangle
n\j 0 1 2 3 4 5 6 7 8 9 10
0 1
1 1 1
2 1 2 1
3 1 3 3 1
4 1 4 6 4 1
5 1 5 10 10 5 1
6 1 6 15 20 15 6 1
7 1 7 21 35 35 21 7 1
8 1 8 28 56 70 56 28 8 1
9 1 9 36 84 126 126 84 36 9 1
10 1 10 45 120 210 252 210 120 45 10 1
Pascal’s Triangle
Equation 3.1, together with the knowledge that ${n \choose 0} = {n \choose n} = 1\ ,$ determines completely the numbers ${n \choose j}$. We can use these relations to determine the famous Pascal's triangle, which exhibits all these numbers in matrix form (see Table $1$).
The $n$th row of this triangle has the entries $n \choose 0$, $n \choose 1$,…, $n \choose n$. We know that the first and last of these numbers are 1. The remaining numbers are determined by the recurrence relation Equation 3.1; that is, the entry ${n \choose j}$ for $0 < j < n$ in the $n$th row of Pascal’s triangle is the sum of the entry immediately above and the one immediately to its left in the $(n - 1)$st row. For example, ${5 \choose 2} = 6 + 4 = 10$.
This algorithm for constructing Pascal’s triangle can be used to write a computer program to compute the binomial coefficients. You are asked to do this in Exercise 4.
While Pascal’s triangle provides a way to construct recursively the binomial coefficients, it is also possible to give a formula for $n \choose j$.
Theorem $2$
The binomial coefficients are given by the formula ${n \choose j} = \frac{(n)_j}{j!}\ . \tag{3.2}$
Proof
Each subset of size $j$ of a set of size $n$ can be ordered in $j!$ ways. Each of these orderings is a $j$-permutation of the set of size $n$. The number of $j$-permutations is $(n)_j$, so the number of subsets of size $j$ is $\frac{(n)_j}{j!}\ .$ This completes the proof.
The above formula can be rewritten in the form
${n \choose j} = \frac{n!}{j!(n-j)!}\ .$
This immediately shows that
${n \choose j} = {n \choose {n-j}}\ .$
When using Equation 3.2 in the calculation of ${n \choose j}$, if one alternates the multiplications and divisions, then all of the intermediate values in the calculation are integers. Furthermore, none of these intermediate values exceed the final value. (See Exercise $40$.)
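A sketch of this alternating scheme in Python (our own; the standard library's math.comb returns the same value):

```python
def binomial(n, j):
    """Compute n choose j by alternating multiplications and divisions.
    After step i, c equals C(n, i + 1), so every intermediate value is an
    integer and none exceeds the final answer."""
    if j < 0 or j > n:
        return 0
    c = 1
    for i in range(j):
        c = c * (n - i) // (i + 1)  # exact division: (i + 1) divides c * (n - i)
    return c

assert binomial(5, 2) == 10
assert binomial(52, 5) == 2598960  # the number of poker hands (Example 2 below)
```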
Another point that should be made concerning Equation 3.2 is that if it is used to define the binomial coefficients, then it is no longer necessary to require $n$ to be a positive integer. The variable $j$ must still be a non-negative integer under this definition. This idea is useful when extending the Binomial Theorem to general exponents. (The Binomial Theorem for non-negative integer exponents is given below as Theorem $5$.)
Poker Hands
Example $2$:
Poker players sometimes wonder why a four of a kind beats a full house. A poker hand is a random subset of 5 elements from a deck of 52 cards. A hand has four of a kind if it has four cards with the same value—for example, four sixes or four kings. It is a full house if it has three of one value and two of a second—for example, three twos and two queens. Let us see which hand is more likely. How many hands have four of a kind? There are 13 ways that we can specify the value for the four cards. For each of these, there are 48 possibilities for the fifth card. Thus, the number of four-of-a-kind hands is $13 \cdot 48 = 624$. Since the total number of possible hands is ${52 \choose 5} = 2598960$, the probability of a hand with four of a kind is $624/2598960 = .00024$.
Now consider the case of a full house; how many such hands are there? There are 13 choices for the value which occurs three times; for each of these there are ${4 \choose 3} = 4$ choices for the particular three cards of this value that are in the hand. Having picked these three cards, there are 12 possibilities for the value which occurs twice; for each of these there are ${4 \choose 2} = 6$ possibilities for the particular pair of this value. Thus, the number of full houses is $13 \cdot 4 \cdot 12 \cdot 6 = 3744$, and the probability of obtaining a hand with a full house is $3744/2598960 = .0014$. Thus, while both types of hands are unlikely, you are six times more likely to obtain a full house than four of a kind.
Bernoulli Trials
Our principal use of the binomial coefficients will occur in the study of one of the important chance processes called Bernoulli trials.
Definition $1$
A Bernoulli trials process is a sequence of $n$ chance experiments such that
1. Each experiment has two possible outcomes, which we may call success and failure.
2. The probability $p$ of success on each experiment is the same for each experiment, and this probability is not affected by any knowledge of previous outcomes. The probability $q$ of failure is given by $q = 1 - p$.
Example $3$
The following are Bernoulli trials processes:
1. A coin is tossed ten times. The two possible outcomes are heads and tails. The probability of heads on any one toss is 1/2.
2. An opinion poll is carried out by asking 1000 people, randomly chosen from the population, if they favor the Equal Rights Amendment—the two outcomes being yes and no. The probability $p$ of a yes answer (i.e., a success) indicates the proportion of people in the entire population that favor this amendment.
3. A gambler makes a sequence of 1-dollar bets, betting each time on black at roulette at Las Vegas. Here a success is winning 1 dollar and a failure is losing 1 dollar. Since in American roulette the gambler wins if the ball stops on one of 18 out of 38 positions and loses otherwise, the probability of winning is $p = 18/38 = .474$.
To analyze a Bernoulli trials process, we choose as our sample space a binary tree and assign a probability distribution to the paths in this tree. Suppose, for example, that we have three Bernoulli trials. The possible outcomes are indicated in the tree diagram shown in Figure $1$. We define $X$ to be the random variable which represents the outcome of the process, i.e., an ordered triple of S’s and F’s. The probabilities assigned to the branches of the tree represent the probability for each individual trial. Let the outcome of the $i$th trial be denoted by the random variable $X_i$, with distribution function $m_i$. Since we have assumed that outcomes on any one trial do not affect those on another, we assign the same probabilities at each level of the tree. An outcome $\omega$ for the entire experiment will be a path through the tree. For example, $\omega_3$ represents the outcomes SFS. Our frequency interpretation of probability would lead us to expect a fraction $p$ of successes on the first experiment; of these, a fraction $q$ of failures on the second; and, of these, a fraction $p$ of successes on the third experiment. This suggests assigning probability $pqp$ to the outcome $\omega_3$. More generally, we assign a distribution function $m(\omega)$ for paths $\omega$ by defining $m(\omega)$ to be the product of the branch probabilities along the path $\omega$. Thus, the probability that the three events S on the first trial, F on the second trial, and S on the third trial occur is the product of the probabilities for the individual events. We shall see in the next chapter that this means that the events involved are independent in the sense that the knowledge of one event does not affect our prediction for the occurrences of the other events.
Binomial Probabilities
We shall be particularly interested in the probability that in $n$ Bernoulli trials there are exactly $j$ successes. We denote this probability by $b(n,p,j)$. Let us calculate the particular value $b(3,p,2)$ from our tree measure. We see that there are three paths which have exactly two successes and one failure, namely $\omega_2$, $\omega_3$, and $\omega_5$. Each of these paths has the same probability $p^2q$. Thus $b(3,p,2) = 3p^2q$. Considering all possible numbers of successes we have
$\begin{aligned} b(3,p,0) &= q^3\ , \\ b(3,p,1) &= 3pq^2\ , \\ b(3,p,2) &= 3p^2q\ , \\ b(3,p,3) &= p^3\ . \end{aligned}$ We can, in the same manner, carry out a tree measure for $n$ experiments and determine $b(n,p,j)$ for the general case of $n$ Bernoulli trials.
Theorem $3$
Given $n$ Bernoulli trials with probability $p$ of success on each experiment, the probability of exactly $j$ successes is $b(n,p,j) = {n \choose j} p^j q^{n - j}$ where $q = 1 - p$.
Proof
We construct a tree measure as described above. We want to find the sum of the probabilities for all paths which have exactly $j$ successes and $n - j$ failures. Each such path is assigned a probability $p^j q^{n - j}$. How many such paths are there? To specify a path, we have to pick, from the $n$ possible trials, a subset of $j$ to be successes, with the remaining $n-j$ outcomes being failures. We can do this in $n \choose j$ ways. Thus the sum of the probabilities is $b(n,p,j) = {n \choose j} p^j q^{n - j}\ .$
Example $4$:
A fair coin is tossed six times. What is the probability that exactly three heads turn up? The answer is $b(6,.5,3) = {6 \choose 3} \left(\frac12\right)^3 \left(\frac12\right)^3 = 20 \cdot \frac1{64} = .3125\ .$
Example $5$:
A die is rolled four times. What is the probability that we obtain exactly one 6? We treat this as Bernoulli trials with success = “rolling a 6" and failure = “rolling some number other than a 6." Then $p = 1/6$, and the probability of exactly one success in four trials is $b(4,1/6,1) = {4 \choose 1 }\left(\frac16\right)^1 \left(\frac56\right)^3 = .386\ .$
To compute binomial probabilities using the computer, multiply the function choose$(n,k)$ by $p^kq^{n - k}$. The program BinomialProbabilities prints out the binomial probabilities $b(n, p, k)$ for $k$ between $kmin$ and $kmax$, and the sum of these probabilities. We have run this program for $n = 100$, $p = 1/2$, $kmin = 45$, and $kmax = 55$; the output is shown in Table $2$. Note that the individual probabilities are quite small. The probability of exactly 50 heads in 100 tosses of a coin is about 0.08. Our intuition tells us that this is the most likely outcome, which is correct; but, all the same, it is not a very likely outcome.
Table $2$ Binomial probabilities for $n = 100,\ p = 1/2$.
$k$ $b(n,p,k)$
45 .0485
46 .0580
47 .0666
48 .0735
49 .0780
50 .0796
51 .0780
52 .0735
53 .0666
54 .0580
55 .0485
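The program BinomialProbabilities itself is not listed in the text, but the computation is simple enough to sketch. The following Python snippet (the function name and layout are ours, not the book's) reproduces Table $2$:

```python
from math import comb

def binomial_probabilities(n, p, kmin, kmax):
    """Print b(n, p, k) for kmin <= k <= kmax and return the sum."""
    total = 0.0
    for k in range(kmin, kmax + 1):
        b = comb(n, k) * p**k * (1 - p)**(n - k)
        total += b
        print(f"{k:3d}  {b:.4f}")
    return total

# n = 100, p = 1/2, k from 45 to 55, as in Table 2.
print("sum =", binomial_probabilities(100, 0.5, 45, 55))
```

The same function checks Examples $4$ and $5$: `binomial_probabilities(6, .5, 3, 3)` prints 0.3125, and `binomial_probabilities(4, 1/6, 1, 1)` prints 0.3858.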
Binomial Distributions
Definition $6$
Let $n$ be a positive integer, and let $p$ be a real number between 0 and 1. Let $B$ be the random variable which counts the number of successes in a Bernoulli trials process with parameters $n$ and $p$. Then the distribution $b(n, p, k)$ of $B$ is called the binomial distribution.
We can get a better idea about the binomial distribution by graphing this distribution for different values of $n$ and $p$ (see Figure $2$). The plots in this figure were generated using the program BinomialPlot.
We have run this program for $p = .5$ and $p = .3$. Note that even for $p = .3$ the graphs are quite symmetric. We shall have an explanation for this in Chapter 9. We also note that the highest probability occurs around the value $np$, but that these highest probabilities get smaller as $n$ increases. We shall see in Chapter 6 that $np$ is the mean or expected value of the binomial distribution $b(n,p,k)$.
The following example gives a nice way to see the binomial distribution, when $p = 1/2$.
Example $6$:
A Galton board is a board in which a large number of BB-shots are dropped from a chute at the top of the board and deflected off a number of pins on their way down to the bottom of the board. The final position of each shot is the result of a number of random deflections either to the left or the right. We have written a program GaltonBoard to simulate this experiment.
We have run the program for the case of 20 rows of pins and 10,000 shots being dropped. We show the result of this simulation in Figure 3.6.
Note that if we write 0 every time the shot is deflected to the left, and 1 every time it is deflected to the right, then the path of the shot can be described by a sequence of 0’s and 1’s of length $n$, just as for the $n$-fold coin toss.
The distribution shown in Figure 3.6 is an example of an empirical distribution, in the sense that it comes about by means of a sequence of experiments. As expected, this empirical distribution resembles the corresponding binomial distribution with parameters $n = 20$ and $p = 1/2$.
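GaltonBoard is part of the book's software and is not listed here; a rough Python stand-in (our own sketch) drops 10,000 shots through 20 rows of pins and prints a crude histogram:

```python
import random

def galton_board(rows=20, shots=10_000):
    """Return the number of shots landing in each slot.

    A shot's slot is its number of rightward deflections, so the counts
    should resemble the binomial distribution b(rows, 1/2, k)."""
    counts = [0] * (rows + 1)
    for _ in range(shots):
        slot = sum(random.random() < 0.5 for _ in range(rows))
        counts[slot] += 1
    return counts

for slot, count in enumerate(galton_board()):
    print(f"{slot:2d} {'*' * (count // 25)}")
```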
Hypothesis Testing
Example $7$:
Suppose that ordinary aspirin has been found effective against headaches 60 percent of the time, and that a drug company claims that its new aspirin with a special headache additive is more effective. We can test this claim as follows: we call their claim the alternate hypothesis, and its negation, that the additive has no appreciable effect, the null hypothesis. Thus the null hypothesis is that $p = .6$, and the alternate hypothesis is that $p > .6$, where $p$ is the probability that the new aspirin is effective.
We give the aspirin to $n$ people to take when they have a headache. We want to find a number $m$, called the critical value for our experiment, such that we reject the null hypothesis if at least $m$ people are cured, and otherwise we accept it. How should we determine this critical value?
First note that we can make two kinds of errors. The first, often called a type 1 error in statistics, is to reject the null hypothesis when in fact it is true. The second, called a type 2 error, is to accept the null hypothesis when it is false. To determine the probability of both these types of errors we introduce a function $\alpha(p)$, defined to be the probability that we reject the null hypothesis, where this probability is calculated under the assumption that $p$ is the actual probability that the new aspirin is effective. In the present case, we have $\alpha(p) = \sum_{m \leq k \leq n} b(n,p,k)\ .$
Note that $\alpha(.6)$ is the probability of a type 1 error, since this is the probability of a high number of successes for an ineffective additive. So for a given $n$ we want to choose $m$ so as to make $\alpha(.6)$ quite small, to reduce the likelihood of a type 1 error. But as $m$ increases above the most probable value $np = .6n$, $\alpha(.6)$, being the upper tail of a binomial distribution, approaches 0. Thus increasing $m$ makes a type 1 error less likely.
Now suppose that the additive really is effective, so that $p$ is appreciably greater than .6; say $p = .8$. (This alternative value of $p$ is chosen arbitrarily; the following calculations depend on this choice.) Then choosing $m$ well below $np = .8n$ will increase $\alpha(.8)$, since now $\alpha(.8)$ is all but the lower tail of a binomial distribution. Indeed, if we put $\beta(.8) = 1 - \alpha(.8)$, then $\beta(.8)$ gives us the probability of a type 2 error, and so decreasing $m$ makes a type 2 error less likely.
The manufacturer would like to guard against a type 2 error, since if such an error is made, then the test does not show that the new drug is better, when in fact it is. If the alternative value of $p$ is chosen closer to the value of $p$ given in the null hypothesis (in this case $p =.6$), then for a given test population, the value of $\beta$ will increase. So, if the manufacturer’s statistician chooses an alternative value for $p$ which is close to the value in the null hypothesis, then it will be an expensive proposition (i.e., the test population will have to be large) to reject the null hypothesis with a small value of $\beta$.
What we hope to do then, for a given test population $n$, is to choose a value of $m$, if possible, which makes both these probabilities small. If we make a type 1 error we end up buying a lot of essentially ordinary aspirin at an inflated price; a type 2 error means we miss a bargain on a superior medication. Let us say that we want our critical number $m$ to make each of these undesirable cases less than 5 percent probable.
We write a program PowerCurve to plot, for $n = 100$ and selected values of $m$, the function $\alpha(p)$, for $p$ ranging from .4 to 1. The result is shown in Figure 3.9. We include in our graph a box (in dotted lines) from .6 to .8, with bottom and top at heights .05 and .95. Then a value for $m$ satisfies our requirements if and only if the graph of $\alpha$ enters the box from the bottom, and leaves from the top (why?—which is the type 1 and which is the type 2 criterion?). As $m$ increases, the graph of $\alpha$ moves to the right. A few experiments have shown us that $m = 69$ is the smallest value for $m$ that thwarts a type 1 error, while $m = 73$ is the largest which thwarts a type 2 error. So we may choose our critical value between 69 and 73. If we’re more intent on avoiding a type 1 error we favor 73, and similarly we favor 69 if we regard a type 2 error as worse. Of course, the drug company may not be happy with having as much as a 5 percent chance of an error. They might insist on having a 1 percent chance of an error. For this we would have to increase the number $n$ of trials (see Exercise 3.2.28).
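The program PowerCurve plots rather than tabulates, and is not reproduced in the text. A minimal Python sketch of the underlying computation (names ours) evaluates $\alpha(p)$ for a given critical value $m$ and searches for the values of $m$ described above, under the stated assumption that the alternative is $p = .8$:

```python
from math import comb

def alpha(n, m, p):
    """Probability of at least m successes in n Bernoulli trials."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

n = 100
for m in range(60, 80):
    type1 = alpha(n, m, 0.6)        # reject a true null hypothesis
    type2 = 1 - alpha(n, m, 0.8)    # accept a false null hypothesis
    if type1 < 0.05 and type2 < 0.05:
        print(m, round(type1, 4), round(type2, 4))
# Only m = 69 through 73 are printed, matching the discussion above.
```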
Binomial Expansion
We next remind the reader of an application of the binomial coefficients to algebra. This is the binomial expansion from which we get the term binomial coefficient.
Theorem $7$: (Binomial Theorem)
The quantity $(a + b)^n$ can be expressed in the form $(a + b)^n = \sum_{j = 0}^n {n \choose j} a^j b^{n - j}\ .$
Proof
To see that this expansion is correct, write $(a + b)^n = (a + b)(a + b) \cdots (a + b)\ .$ When we multiply this out we will have a sum of terms each of which results from a choice of an $a$ or $b$ for each of the $n$ factors. When we choose $j$ $a$’s and $(n - j)$ $b$’s, we obtain a term of the form $a^j b^{n - j}$. To determine such a term, we have to specify $j$ of the $n$ terms in the product from which we choose the $a$. This can be done in ${n \choose j}$ ways. Thus, collecting these terms in the sum contributes a term ${n \choose j} a^j b^{n - j}$.
For example, we have \begin{aligned} (a + b)^0 &= 1 \\ (a + b)^1 &= a + b \\ (a + b)^2 &= a^2 + 2ab + b^2 \\ (a + b)^3 &= a^3 + 3a^2b + 3ab^2 + b^3\ . \end{aligned} We see here that the coefficients of successive powers do indeed yield Pascal’s triangle.
Corollary $1$
The sum of the elements in the $n$th row of Pascal’s triangle is $2^n$. If the elements in the $n$th row of Pascal’s triangle are added with alternating signs, the sum is 0.
Proof
The first statement in the corollary follows from the fact that $2^n = (1 + 1)^n = {n \choose 0} + {n \choose 1} + {n \choose 2} + \cdots + {n \choose n}\ ,$ and the second from the fact that $0 = (1 - 1)^n = {n \choose 0} - {n \choose 1} + {n \choose 2}- \cdots + {(-1)^n}{n \choose n}\ .$
The first statement of the corollary tells us that the number of subsets of a set of $n$ elements is $2^n$. We shall use the second statement in our next application of the binomial theorem.
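As a quick numerical check of the theorem and its corollary, the following snippet (ours, not part of the book's software) computes a row of Pascal's triangle and both sums:

```python
from math import comb

n = 5
row = [comb(n, j) for j in range(n + 1)]            # nth row of Pascal's triangle
print(row)                                           # [1, 5, 10, 10, 5, 1]
print(sum(row))                                      # 2**n = 32
print(sum((-1)**j * c for j, c in enumerate(row)))   # alternating sum = 0
```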
We have seen that, when $A$ and $B$ are any two events (cf. Section 1.2), $P(A \cup B) = P(A) + P(B) - P(A \cap B).$ We now extend this theorem to a more general version, which will enable us to find the probability that at least one of a number of events occurs.
Inclusion-Exclusion Principle
Theorem $1$
Let $P$ be a probability distribution on a sample space $\Omega$, and let $\{A_1,\ A_2,\ \dots,\ A_n\}$ be a finite set of events. Then
$P\left(A_1 \cup A_2 \cup \cdots \cup A_n\right) = \sum_{i=1}^n P\left(A_i\right) - \sum_{1 \leq i < j \leq n} P\left(A_i \cap A_j\right) + \sum_{1 \leq i < j < k \leq n} P\left(A_i \cap A_j \cap A_k\right) - \cdots\ .$
That is, to find the probability that at least one of $n$ events $A_i$ occurs, first add the probability of each event, then subtract the probabilities of all possible two-way intersections, add the probability of all three-way intersections, and so forth.
Proof
If the outcome $\omega$ occurs in at least one of the events $A_i$, its probability is added exactly once by the left side of Equation 3.5. We must show that it is added exactly once by the right side of Equation 3.5. Assume that $\omega$ is in exactly $k$ of the sets. Then its probability is added $k$ times in the first term, subtracted ${k \choose 2}$ times in the second, added ${k \choose 3}$ times in the third term, and so forth. Thus, the total number of times that it is added is ${k \choose 1} - {k \choose 2} + {k \choose 3} - \cdots + (-1)^{k-1} {k \choose k}\ .$ But $0 = (1 - 1)^k = \sum_{j = 0}^k {k \choose j} (-1)^j = {k \choose 0} - \sum_{j = 1}^k {k \choose j} {(-1)^{j - 1}}\ .$ Hence, $1 = {k \choose 0} = \sum_{j = 1}^k {k \choose j} {(-1)^{j - 1}}\ .$ If the outcome $\omega$ is not in any of the events $A_i$, then it is not counted on either side of the equation.
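On a finite sample space the theorem can also be checked directly. Here is a small Python sketch (our own illustration) that evaluates the right side of Equation 3.5 for an arbitrary list of events:

```python
from itertools import combinations
from fractions import Fraction

def union_probability(events, prob):
    """Inclusion-exclusion: P(A_1 u ... u A_n) on a finite sample space.

    events: list of sets of outcomes; prob: dict mapping outcome -> probability."""
    total = Fraction(0)
    for size in range(1, len(events) + 1):
        sign = (-1) ** (size - 1)
        for group in combinations(events, size):
            intersection = set.intersection(*group)
            total += sign * sum(prob[w] for w in intersection)
    return total

# Two fair coins: A = first coin is heads, B = second coin is heads.
prob = {w: Fraction(1, 4) for w in ["HH", "HT", "TH", "TT"]}
A, B = {"HH", "HT"}, {"HH", "TH"}
print(union_probability([A, B], prob))   # 3/4 = 1/2 + 1/2 - 1/4
```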
Hat Check Problem
Example $8$:
We return to the hat check problem discussed in Section 1.1, that is, the problem of finding the probability that a random permutation contains at least one fixed point. Recall that a permutation is a one-to-one map of a set $A = \{a_1,a_2,\dots,a_n\}$ onto itself. Let $A_i$ be the event that the $i$th element $a_i$ remains fixed under this map. If we require that $a_i$ is fixed, then the map of the remaining $n - 1$ elements provides an arbitrary permutation of $(n - 1)$ objects. Since there are $(n - 1)!$ such permutations, $P(A_i) = (n - 1)!/n! = 1/n$. Since there are $n$ choices for $a_i$, the first term of Equation 3.5 is 1. In the same way, to have a particular pair $(a_i,a_j)$ fixed, we can choose any permutation of the remaining $n - 2$ elements; there are $(n - 2)!$ such choices and thus $P(A_i \cap A_j) = \frac{(n - 2)!}{n!} = \frac 1{n(n - 1)}\ .$ The number of terms of this form in the right side of Equation 3.5 is ${n \choose 2} = \frac{n(n - 1)}{2!}\ .$ Hence, the second term of Equation 3.5 is $-\frac{n(n - 1)}{2!} \cdot \frac 1{n(n - 1)} = -\frac 1{2!}\ .$ Similarly, for any specific three events $A_i$, $A_j$, $A_k$, $P(A_i \cap A_j \cap A_k) = \frac{(n - 3)!}{n!} = \frac 1{n(n - 1)(n - 2)}\ ,$ and the number of such terms is ${n \choose 3} = \frac{n(n - 1)(n - 2)}{3!}\ ,$ making the third term of Equation 3.5 equal to 1/3!. Continuing in this way, we obtain $P(\mbox {at\ least\ one\ fixed\ point}) = 1 - \frac 1{2!} + \frac 1{3!} - \cdots + (-1)^{n-1} \frac 1{n!}$ and $P(\mbox {no\ fixed\ point}) = \frac 1{2!} - \frac 1{3!} + \cdots + (-1)^n \frac 1{n!}\ .$
From calculus we learn that $e^x = 1 + x + \frac 1{2!}x^2 + \frac 1{3!}x^3 + \cdots + \frac 1{n!}x^n + \cdots\ .$ Thus, if $x = -1$, we have \begin{aligned} e^{-1} &= \frac 1{2!} - \frac 1{3!} + \cdots + \frac{(-1)^n}{n!} + \cdots \\ &= .3678794\ . \end{aligned} Therefore, the probability that there is no fixed point, i.e., that none of the $n$ people gets his own hat back, is equal to the sum of the first $n$ terms in the expression for $e^{-1}$. This series converges very fast. Calculating the partial sums for $n = 3$ to 10 gives the data in Table $3$.
Table $3$ Hat check problem.
$n$ Probability that no one gets his own hat back
3 .333333
4 .375
5 .366667
6 .368056
7 .367857
8 .367882
9 .367879
10 .367879
After $n = 9$ the probabilities are essentially the same to six significant figures. Interestingly, the probability of no fixed point alternately increases and decreases as $n$ increases. Finally, we note that our exact results are in good agreement with our simulations reported in the previous section.
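The partial sums in Table $3$, and the simulations referred to above, are both easy to reproduce. A short Python sketch (ours):

```python
import random
from math import factorial

def p_no_fixed_point(n):
    """Partial sum 1/2! - 1/3! + ... + (-1)^n/n!."""
    return sum((-1) ** j / factorial(j) for j in range(2, n + 1))

for n in range(3, 11):
    print(n, round(p_no_fixed_point(n), 6))

# A quick simulation for n = 10 hats: the fraction of random permutations
# with no fixed point should be near 1/e = .367879.
n, trials = 10, 100_000
derangements = sum(
    all(perm[i] != i for i in range(n))
    for perm in (random.sample(range(n), n) for _ in range(trials))
)
print("simulated:", derangements / trials)
```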
Choosing a Sample Space
We now have some of the tools needed to accurately describe sample spaces and to assign probability functions to those sample spaces. Nevertheless, in some cases, the description and assignment process is somewhat arbitrary. Of course, it is to be hoped that the description of the sample space and the subsequent assignment of a probability function will yield a model which accurately predicts what would happen if the experiment were actually carried out. As the following examples show, there are situations in which “reasonable" descriptions of the sample space do not produce a model which fits the data.
In Feller’s book,14 a pair of models is given which describe arrangements of certain kinds of elementary particles, such as photons and protons. It turns out that experiments have shown that certain types of elementary particles exhibit behavior which is accurately described by one model, called “Bose-Einstein statistics," while other types of elementary particles can be modelled using the other, called “Fermi-Dirac statistics." Feller says:
We have here an instructive example of the impossibility of selecting or justifying probability models by a priori arguments. In fact, no pure reasoning could tell that photons and protons would not obey the same probability laws.
We now give some examples of this description and assignment process.
Example $9$:
In the quantum mechanical model of the helium atom, various parameters can be used to classify the energy states of the atom. In the triplet spin state ($S = 1$) with orbital angular momentum 1 ($L = 1$), there are three possibilities, 0, 1, or 2, for the total angular momentum ($J$). (It is not assumed that the reader knows what any of this means; in fact, the example is more illustrative if the reader does not know anything about quantum mechanics.) We would like to assign probabilities to the three possibilities for $J$. The reader is undoubtedly resisting the idea of assigning the probability of $1/3$ to each of these outcomes. She should now ask herself why she is resisting this assignment. The answer is probably because she does not have any “intuition" (i.e., experience) about the way in which helium atoms behave. In fact, in this example, the probabilities $1/9,\ 3/9,$ and $5/9$ are assigned by the theory. The theory gives these assignments because these frequencies were observed and further parameters were developed in the theory to allow these frequencies to be predicted.
Example $10$:
Suppose two pennies are flipped once each. There are several “reasonable" ways to describe the sample space. One way is to count the number of heads in the outcome; in this case, the sample space can be written $\{0, 1, 2\}$. Another description of the sample space is the set of all ordered pairs of $H$’s and $T$’s, i.e., $\{(H,H), (H, T), (T, H), (T, T)\}.$ Both of these descriptions are accurate ones, but it is easy to see that (at most) one of these, if assigned a constant probability function, can claim to accurately model reality. In this case, as opposed to the preceding example, the reader will probably say that the second description, with each outcome being assigned a probability of $1/4$, is the “right" description. This conviction is due to experience; there is no proof that this is the way reality works.
The reader is also referred to Exercise $26$ for another example of this process.
Historical Remarks
The binomial coefficients have a long and colorful history leading up to Pascal’s Treatise on the Arithmetical Triangle,15 where Pascal developed many important properties of these numbers. This history is set forth in the book Pascal’s Arithmetical Triangle by A. W. F. Edwards.16 Pascal wrote his triangle in the form shown in Table $4$.
Table $4$
$\begin{array}{rrrrrrrrrr} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \\ 1 & 3 & 6 & 10 & 15 & 21 & 28 & 36 & & \\ 1 & 4 & 10 & 20 & 35 & 56 & 84 & & & \\ 1 & 5 & 15 & 35 & 70 & 126 & & & & \\ 1 & 6 & 21 & 56 & 126 & & & & & \\ 1 & 7 & 28 & 84 & & & & & & \\ 1 & 8 & 36 & & & & & & & \\ 1 & 9 & & & & & & & & \\ 1 & & & & & & & & & \end{array}$
Table $5$ Figurate numbers
$\begin{array}{llllllllll} \text{natural numbers} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \text{triangular numbers} & 1 & 3 & 6 & 10 & 15 & 21 & 28 & 36 & 45 \\ \text{tetrahedral numbers} & 1 & 4 & 10 & 20 & 35 & 56 & 84 & 120 & 165 \end{array}$
Edwards traces three different ways that the binomial coefficients arose. He refers to these as the figurate numbers, the combinatorial numbers and the binomial numbers. They are all names for the same thing (which we have called binomial coefficients) but that they are all the same was not appreciated until the sixteenth century.
The figurate numbers date back to the Pythagorean interest in number patterns around 540 BC. The Pythagoreans considered, for example, the triangular patterns shown in Figure $5$. The sequence of numbers $1, 3, 6, 10, \dots$ obtained as the number of points in each triangle are called triangular numbers. From the triangles it is clear that the $n$th triangular number is simply the sum of the first $n$ integers. The tetrahedral numbers are the sums of the triangular numbers and were obtained by the Greek mathematicians Theon and Nicomachus at the beginning of the second century AD. The tetrahedral number 10, for example, has the geometric representation shown in Figure $6$. The first three types of figurate numbers can be represented in tabular form as shown in Table $5$.
These numbers provide the first four rows of Pascal’s triangle, but the table was not to be completed in the West until the sixteenth century.
In the East, Hindu mathematicians began to encounter the binomial coefficients in combinatorial problems. Bhaskara in his Lilavati of 1150 gave a rule to find the number of medicinal preparations using 1, 2, 3, 4, 5, or 6 possible ingredients.17 His rule is equivalent to our formula ${n \choose r} = \frac{(n)_r}{r!}\ .$
The binomial numbers as coefficients of $(a + b)^n$ appeared in the works of mathematicians in China around 1100. There are references about this time to “the tabulation system for unlocking binomial coefficients." The triangle to provide the coefficients up to the eighth power is given by Chu Shih-chieh in a book written around 1303 (see Figure 3.12).18 The original manuscript of Chu’s book has been lost, but copies have survived. Edwards notes that there is an error in this copy of Chu’s triangle. Can you find it? (Hint: Two numbers which should be equal are not.) Other copies do not show this error.
The first appearance of Pascal’s triangle in the West seems to have come from calculations of Tartaglia concerning the number of possible ways that $n$ dice might turn up.19 For one die the answer is clearly 6. For two dice the possibilities may be displayed as shown in Table $7$.
Table $7$ Outcomes for the roll of two dice
$\begin{array}{llllll} 11 & & & & & \\ 12 & 22 & & & & \\ 13 & 23 & 33 & & & \\ 14 & 24 & 34 & 44 & & \\ 15 & 25 & 35 & 45 & 55 & \\ 16 & 26 & 36 & 46 & 56 & 66 \end{array}$
Displaying them this way suggests the sixth triangular number $1 + 2 + 3 + 4 + 5 + 6 = 21$ for the throw of 2 dice. Tartaglia “on the first day of Lent, 1523, in Verona, having thought about the problem all night,"20 realized that the extension of the figurate table gave the answers for $n$ dice. The problem had suggested itself to Tartaglia from watching people casting their own horoscopes by means of a Book of Fortune selecting verses by a process which included noting the numbers on the faces of three dice. The 56 ways that three dice can fall were set out on each page. The way the numbers were written in the book did not suggest the connection with figurate numbers, but a method of enumeration similar to the one we used for 2 dice does. Tartaglia’s table was not published until 1556.
A table for the binomial coefficients was published in 1554 by the German mathematician Stifel.21 Pascal’s triangle appears also in Cardano’s Opus novum of 1570.22 Cardano was interested in the problem of finding the number of ways to choose $r$ objects out of $n$. Thus by the time of Pascal’s work, his triangle had appeared as a result of looking at the figurate numbers, the combinatorial numbers, and the binomial numbers, and the fact that all three were the same was presumably pretty well understood.
Pascal’s interest in the binomial numbers came from his letters with Fermat concerning a problem known as the problem of points. This problem, and the correspondence between Pascal and Fermat, were discussed in Chapter 1. The reader will recall that this problem can be described as follows: Two players A and B are playing a sequence of games and the first player to win $n$ games wins the match. It is desired to find the probability that A wins the match at a time when A has won $a$ games and B has won $b$ games. (See Exercises 4.1.40-4.1.42.)
Pascal solved the problem by backward induction, much the way we would do today in writing a computer program for its solution. He referred to the combinatorial method of Fermat which proceeds as follows: If A needs $c$ games and B needs $d$ games to win, we require that the players continue to play until they have played $c + d - 1$ games. The winner in this extended series will be the same as the winner in the original series. The probability that A wins in the extended series and hence in the original series is $\sum_{r = c}^{c + d - 1} \frac 1{2^{c + d - 1}} {{c + d - 1} \choose r}\ .$ Even at the time of the letters Pascal seemed to understand this formula.
Suppose that the first player to win $n$ games wins the match, and suppose that each player has put up a stake of $x$. Pascal studied the value of winning a particular game. By this he meant the increase in the expected winnings of the winner of the particular game under consideration. He showed that the value of the first game is $\frac {1\cdot3\cdot5\cdot\dots\cdot(2n - 1)}{2\cdot4\cdot6\cdot\dots\cdot(2n)}x\ .$ His proof of this seems to use Fermat’s formula and the fact that the above ratio of products of odd to products of even numbers is equal to the probability of exactly $n$ heads in $2n$ tosses of a coin. (See Exercise 39.)
Pascal presented Fermat with the table shown in Table $8$.
Table $8$ Pascal’s solution for the problem of points.
From my opponent’s 256 positions I get, for the:
$\begin{array}{lcccccc} & 6\ \text{games} & 5\ \text{games} & 4\ \text{games} & 3\ \text{games} & 2\ \text{games} & 1\ \text{game} \\ \text{1st game} & 63 & 70 & 80 & 96 & 128 & 256 \\ \text{2nd game} & 63 & 70 & 80 & 96 & 128 & \\ \text{3rd game} & 56 & 60 & 64 & 64 & & \\ \text{4th game} & 42 & 40 & 32 & & & \\ \text{5th game} & 24 & 16 & & & & \\ \text{6th game} & 8 & & & & & \end{array}$
He states:
You will see as always, that the value of the first game is equal to that of the second which is easily shown by combinations. You will see, in the same way, that the numbers in the first line are always increasing; so also are those in the second; and those in the third. But those in the fourth line are decreasing, and those in the fifth, etc. This seems odd.23
The student can pursue this question further using the computer and Pascal’s backward iteration method for computing the expected payoff at any point in the series.
In his treatise, Pascal gave a formal proof of Fermat’s combinatorial formula as well as proofs of many other basic properties of binomial numbers. Many of his proofs involved induction and represent some of the first proofs by this method. His book brought together all the different aspects of the numbers in the Pascal triangle as known in 1654, and, as Edwards states, “That the Arithmetical Triangle should bear Pascal’s name cannot be disputed."24
The first serious study of the binomial distribution was undertaken by James Bernoulli in his Ars Conjectandi, published in 1713.25 We shall return to this work in the historical remarks in Chapter 8.
Exercises
$1$
Compute the following:
1. ${6 \choose 3}$
2. $b(5,.2,4)$
3. ${7 \choose 2}$
4. ${26 \choose 26}$
5. $b(4,.2,3)$
6. ${6 \choose 2}$
7. ${{10} \choose 9}$
8. $b(8, .3, 5)$
$2$
In how many ways can we choose five people from a group of ten to form a committee?
$3$
How many seven-element subsets are there in a set of nine elements?
$4$
Using the relation in Equation 3.1, write a program to compute Pascal’s triangle, putting the results in a matrix. Have your program print the triangle for $n = 10$.
$5$
Use the program BinomialProbabilities to find the probability that, in 100 tosses of a fair coin, the number of heads that turns up lies between 35 and 65, between 40 and 60, and between 45 and 55.
$6$
Charles claims that he can distinguish between beer and ale 75 percent of the time. Ruth bets that he cannot and, in fact, just guesses. To settle this, a bet is made: Charles is to be given ten small glasses, each having been filled with beer or ale, chosen by tossing a fair coin. He wins the bet if he gets seven or more correct. Find the probability that Charles wins if he has the ability that he claims. Find the probability that Ruth wins if Charles is guessing.
$7$
Show that $b(n,p,j) = \frac pq \left(\frac {n - j + 1}j \right) b(n,p,j - 1)\ ,$ for $j \ge 1$. Use this fact to determine the value or values of $j$ which give $b(n,p,j)$ its greatest value. Hint: Consider the successive ratios as $j$ increases.
$8$
A die is rolled 30 times. What is the probability that a 6 turns up exactly 5 times? What is the most probable number of times that a 6 will turn up?
$9$
Find integers $n$ and $r$ such that the following equation is true: ${13 \choose 5} + 2{13 \choose 6} + {13 \choose 7} = {n \choose r}\ .$
$10$
In a ten-question true-false exam, find the probability that a student gets a grade of 70 percent or better by guessing. Answer the same question if the test has 30 questions, and if the test has 50 questions.
$11$
A restaurant offers apple and blueberry pies and stocks an equal number of each kind of pie. Each day ten customers request pie. They choose, with equal probabilities, one of the two kinds of pie. How many pieces of each kind of pie should the owner provide so that the probability is about .95 that each customer gets the pie of his or her own choice?
$12$
A poker hand is a set of 5 cards randomly chosen from a deck of 52 cards. Find the probability of a
1. royal flush (ten, jack, queen, king, ace in a single suit).
2. straight flush (five in a sequence in a single suit, but not a royal flush).
3. four of a kind (four cards of the same face value).
4. full house (one pair and one triple, each of the same face value).
5. flush (five cards in a single suit but not a straight or royal flush).
6. straight (five cards in a sequence, not all the same suit). (Note that in straights, an ace counts high or low.)
$13$
If a set has $2n$ elements, show that it has more subsets with $n$ elements than with any other number of elements.
$14$
Let $b(2n,.5,n)$ be the probability that in $2n$ tosses of a fair coin exactly $n$ heads turn up. Using Stirling’s formula (Theorem 3.1), show that $b(2n,.5,n) \sim 1/\sqrt{\pi n}$. Use the program BinomialProbabilities to compare this with the exact value for $n = 10$ to 25.
$15$
A baseball player, Smith, has a batting average of $.300$ and in a typical game comes to bat three times. Assume that Smith’s hits in a game can be considered to be a Bernoulli trials process with probability .3 for success. Find the probability that Smith gets 0, 1, 2, and 3 hits.
$16$
The Siwash University football team plays eight games in a season, winning three, losing three, and ending two in a tie. Show that the number of ways that this can happen is ${8 \choose 3}{5 \choose 3} = \frac {8!}{3!\,3!\,2!}\ .$
$17$
Using the technique of Exercise 16, show that the number of ways that one can put $n$ different objects into three boxes with $a$ in the first, $b$ in the second, and $c$ in the third is $n!/(a!\,b!\,c!)$.
$18$
Baumgartner, Prosser, and Crowell are grading a calculus exam. There is a true-false question with ten parts. Baumgartner notices that one student has only two out of the ten correct and remarks, “The student was not even bright enough to have flipped a coin to determine his answers." “Not so clear," says Prosser. “With 340 students I bet that if they all flipped coins to determine their answers there would be at least one exam with two or fewer answers correct." Crowell says, “I’m with Prosser. In fact, I bet that we should expect at least one exam in which no answer is correct if everyone is just guessing." Who is right in all of this?
$19$
A gin hand consists of 10 cards from a deck of 52 cards. Find the probability that a gin hand has
1. all 10 cards of the same suit.
2. exactly 4 cards in one suit and 3 in two other suits.
3. a 4, 3, 2, 1, distribution of suits.
$20$
A six-card hand is dealt from an ordinary deck of cards. Find the probability that:
1. All six cards are hearts.
2. There are three aces, two kings, and one queen.
3. There are three cards of one suit and three of another suit.
$21$
A lady wishes to color her fingernails on one hand using at most two of the colors red, yellow, and blue. How many ways can she do this?
$22$
How many ways can six indistinguishable letters be put in three mail boxes? Hint: One representation of this is given by a sequence $|$LL$|$L$|$LLL$|$ where the $|$’s represent the partitions for the boxes and the L’s the letters. Any possible way can be so described. Note that we need two bars at the ends and the remaining two bars and the six L’s can be put in any order.
$23$
Using the method of the hint in Exercise 22, show that $r$ indistinguishable objects can be put in $n$ boxes in ${n + r - 1 \choose n - 1} = {n + r - 1 \choose r}$ different ways.
$24$
A travel bureau estimates that when 20 tourists go to a resort with ten hotels they distribute themselves as if the bureau were putting 20 indistinguishable objects into ten distinguishable boxes. Assuming this model is correct, find the probability that no hotel is left vacant when the first group of 20 tourists arrives.
$25$
An elevator takes on six passengers and stops at ten floors. We can assign two different equiprobable measures for the ways that the passengers are discharged: (a) we consider the passengers to be distinguishable or (b) we consider them to be indistinguishable (see Exercise 23 for this case). For each case, calculate the probability that all the passengers get off at different floors.
$26$
You are playing with Prosser but you suspect that his coin is unfair. Von Neumann suggested that you proceed as follows: Toss Prosser’s coin twice. If the outcome is HT call the result a win; if it is TH call the result a loss. If it is TT or HH ignore the outcome and toss Prosser’s coin twice again. Keep going until you get either an HT or a TH, and call the result a win or a loss for a single play. Repeat this procedure for each play. Assume that Prosser’s coin turns up heads with probability $p$.
1. Find the probability of HT, TH, HH, TT with two tosses of Prosser’s coin.
2. Using part (a), show that the probability of a win on any one play is 1/2, no matter what $p$ is.
$27$
John claims that he has extrasensory powers and can tell which of two symbols is on a card turned face down (see Example 12). To test his ability he is asked to do this for a sequence of trials. Let the null hypothesis be that he is just guessing, so that the probability is 1/2 of his getting it right each time, and let the alternative hypothesis be that he can name the symbol correctly more than half the time. Devise a test with the property that the probability of a type 1 error is less than .05 and the probability of a type 2 error is less than .05 if John can name the symbol correctly 75 percent of the time.
$28$
In Example $12$ assume the alternative hypothesis is that $p = .8$ and that it is desired to have the probability of each type of error less than .01. Use the program PowerCurve to determine values of $n$ and $m$ that will achieve this. Choose $n$ as small as possible.
$29$
A drug is assumed to be effective with an unknown probability $p$. To estimate $p$ the drug is given to $n$ patients. It is found to be effective for $m$ patients. The maximum likelihood principle for estimating $p$ states that we should choose the value for $p$ that gives the highest probability of getting what we got on the experiment. Assuming that the experiment can be considered as a Bernoulli trials process with probability $p$ for success, show that the maximum likelihood estimate for $p$ is the proportion $m/n$ of successes.
$30$
Recall that in the World Series the first team to win four games wins the series. The series can go at most seven games. Assume that the Red Sox and the Mets are playing the series. Assume that the Mets win each game with probability $p$. Fermat observed that even though the series might not go seven games, the probability that the Mets win the series is the same as the probability that they win four or more games in a series that was forced to go seven games no matter who wins the individual games.
1. Using the program PowerCurve of Example 3.11 find the probability that the Mets win the series for the cases $p = .5$, $p = .6$, $p =.7$.
2. Assume that the Mets have probability .6 of winning each game. Use the program PowerCurve to find a value of $n$ so that, if the series goes to the first team to win more than half the games, the Mets will have a 95 percent chance of winning the series. Choose $n$ as small as possible.
$31$
Each of the four engines on an airplane functions correctly on a given flight with probability .99, and the engines function independently of each other. Assume that the plane can make a safe landing if at least two of its engines are functioning correctly. What is the probability that the engines will allow for a safe landing?
$32$
A small boy is lost coming down Mount Washington. The leader of the search team estimates that there is a probability $p$ that he came down on the east side and a probability $1 - p$ that he came down on the west side. He has $n$ people in his search team who will search independently and, if the boy is on the side being searched, each member will find the boy with probability $u$. Determine how he should divide the $n$ people into two groups to search the two sides of the mountain so that he will have the highest probability of finding the boy. How does this depend on $u$?
$*33$
$2n$ balls are chosen at random from a total of $2n$ red balls and $2n$ blue balls. Find a combinatorial expression for the probability that the chosen balls are equally divided in color. Use Stirling’s formula to estimate this probability. Using BinomialProbabilities, compare the exact value with Stirling’s approximation for $n = 20$.
$34$
Assume that every time you buy a box of Wheaties, you receive one of the pictures of the $n$ players on the New York Yankees. Over a period of time, you buy $m \geq n$ boxes of Wheaties.
1. Use Theorem 3.8 to show that the probability that you get all $n$ pictures is $1 - {n \choose 1} \left(\frac{n - 1}n\right)^m + {n \choose 2} \left(\frac{n - 2}n\right)^m - \cdots + (-1)^{n - 1} {n \choose {n - 1}}\left(\frac 1n \right)^m\ .$ Hint: Let $E_k$ be the event that you do not get the $k$th player’s picture.
2. Write a computer program to compute this probability. Use this program to find, for given $n$, the smallest value of $m$ which will give probability $\geq .5$ of getting all $n$ pictures. Consider $n = 50$, 100, and 150 and show that $m = n\log n + n \log 2$ is a good estimate for the number of boxes needed. (For a derivation of this estimate, see Feller.26)
$35$
Prove the following binomial identity
${2n \choose n} = \sum_{j = 0}^n { n \choose j}^2\ .$ Hint: Consider an urn with $n$ red balls and $n$ blue balls inside. Show that each side of the equation equals the number of ways to choose $n$ balls from the urn.
$36$ Let $j$ and $n$ be positive integers, with $j \le n$. An experiment consists of choosing, at random, a $j$-tuple of positive integers whose sum is at most $n$.
1. Find the size of the sample space. Hint: Consider $n$ indistinguishable balls placed in a row. Place $j$ markers between consecutive pairs of balls, with no two markers between the same pair of balls. (We also allow one of the $j$ markers to be placed at the end of the row of balls.) Show that there is a 1-1 correspondence between the set of possible positions for the markers and the set of $j$-tuples whose size we are trying to count.
2. Find the probability that the $j$-tuple selected contains at least one 1.
$37$
Let $n\ (\mbox{mod}\ m)$ denote the remainder when the integer $n$ is divided by the integer $m$. Write a computer program to compute the numbers ${n \choose j}\ (\mbox{mod}\ m)$ where ${n \choose j}$ is a binomial coefficient and $m$ is an integer. You can do this by using the recursion relations for generating binomial coefficients, doing all the arithmetic using the basic function mod($n,m$). Try to write your program to make as large a table as possible. Run your program for the cases $m = 2$ to 7. Do you see any patterns? In particular, for the case $m = 2$ and $n$ a power of 2, verify that all the entries in the $(n - 1)$st row are 1. (The corresponding binomial numbers are odd.) Use your pictures to explain why this is true.
$38$
Lucas27 proved the following general result relating to Exercise 37. If $p$ is any prime number, then ${n \choose j}~ (\mbox{mod\ }p)$ can be found as follows: Expand $n$ and $j$ in base $p$ as $n = s_0 + s_1p + s_2p^2 + \cdots + s_kp^k$ and $j = r_0 + r_1p + r_2p^2 + \cdots + r_kp^k$, respectively. (Here $k$ is chosen large enough to represent all numbers from 0 to $n$ in base $p$ using $k$ digits.) Let $s = (s_0,s_1,s_2,\dots,s_k)$ and $r = (r_0,r_1,r_2,\dots,r_k)$. Then
${n \choose j}~(\mbox{mod\ }p) = \prod_{i = 0}^k {s_i \choose r_i}~(\mbox{mod\ }p)\ .$
For example, if $p=7, n=12$, and $j=9$, then
\begin{aligned} 12 &= 5 \cdot 7^0 + 1 \cdot 7^1\ , \\ 9 &= 2 \cdot 7^0 + 1 \cdot 7^1\ , \end{aligned}
so that
\begin{aligned} s &= (5, 1)\ , \\ r &= (2, 1)\ , \end{aligned}
and this result states that
${12 \choose 9}~(\mbox{mod\ }7) = {5 \choose 2} {1 \choose 1}~(\mbox{mod\ }7)\ .$
Since ${12 \choose 9} = 220 = 3~(\mbox{mod\ }7)$, and ${5 \choose 2} = 10 = 3~ (\mbox{mod\ }7)$, we see that the result is correct for this example.
Show that this result implies that, for $p = 2$, the $(p^k - 1)$st row of your triangle in Exercise 37 has no zeros.
$39$
Prove that the probability of exactly $n$ heads in $2n$ tosses of a fair coin is given by the product of the odd numbers up to $2n - 1$ divided by the product of the even numbers up to $2n$.
$40$
Let $n$ be a positive integer, and assume that $j$ is a positive integer not exceeding $n/2$. Show that in Theorem 3.5, if one alternates the multiplications and divisions, then all of the intermediate values in the calculation are integers. Show also that none of these intermediate values exceed the final value. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/03%3A_Combinatorics/3.02%3A_Combinations.txt |
Card Shuffling
Much of this section is based upon an article by Brad Mann,28 which is an exposition of an article by David Bayer and Persi Diaconis.29
Riffle Shuffles
Given a deck of $n$ cards, how many times must we shuffle it to make it “random"? Of course, the answer depends upon the method of shuffling which is used and what we mean by “random." We shall begin the study of this question by considering a standard model for the riffle shuffle.
We begin with a deck of $n$ cards, which we will assume are labelled in increasing order with the integers from 1 to $n$. A riffle shuffle consists of a cut of the deck into two stacks and an interleaving of the two stacks. For example, if $n = 6$, the initial ordering is $(1, 2, 3, 4, 5, 6)$, and a cut might occur between cards 2 and 3. This gives rise to two stacks, namely $(1, 2)$ and $(3, 4, 5, 6)$. These are interleaved to form a new ordering of the deck. For example, these two stacks might form the ordering $(1, 3, 4, 2, 5, 6)$. In order to discuss such shuffles, we need to assign a probability distribution to the set of all possible shuffles. There are several reasonable ways in which this can be done. We will give several different assignment strategies, and show that they are equivalent. (This does not mean that this assignment is the only reasonable one.) First, we assign the binomial probability $b(n, 1/2, k)$ to the event that the cut occurs after the $k$th card. Next, we assume that all possible interleavings, given a cut, are equally likely. Thus, to complete the assignment of probabilities, we need to determine the number of possible interleavings of two stacks of cards, with $k$ and $n-k$ cards, respectively.
We begin by writing the second stack in a line, with spaces in between each pair of consecutive cards, and with spaces at the beginning and end (so there are $n-k+1$ spaces). We choose, with replacement, $k$ of these spaces, and place the cards from the first stack in the chosen spaces. This can be done in
$n \choose{k}$
ways. Thus, the probability of a given interleaving should be
$\frac{1}{{n \choose k}}\ .$
Next, we note that if the new ordering is not the identity ordering, it is the result of a unique cut-interleaving pair. If the new ordering is the identity, it is the result of any one of $n+1$ cut-interleaving pairs.
We define a rising sequence in an ordering to be a maximal subsequence of consecutive integers in increasing order. For example, in the ordering $(2, 3, 5, 1, 4, 7, 6)\ ,$ there are 4 rising sequences; they are $(1)$, $(2, 3, 4)$, $(5, 6)$, and $(7)$. It is easy to see that an ordering is the result of a riffle shuffle applied to the identity ordering if and only if it has no more than two rising sequences. (If the ordering has two rising sequences, then these rising sequences correspond to the two stacks induced by the cut, and if the ordering has one rising sequence, then it is the identity ordering.) Thus, the sample space of orderings obtained by applying a riffle shuffle to the identity ordering is naturally described as the set of all orderings with at most two rising sequences.
It is now easy to assign a probability distribution to this sample space. Each ordering with two rising sequences is assigned the value
$\frac{b(n, 1/2, k)}{n \choose{k}} = \frac{1}{2^n},$
and the identity ordering is assigned the value
$\frac{n+1}{2^n}\ .$
There is another way to view a riffle shuffle. We can imagine starting with a deck cut into two stacks as before, with the same probability assignment as before, i.e., the binomial distribution. Once we have the two stacks, we take cards, one by one, off of the bottom of the two stacks, and place them onto one stack. If there are $k_1$ and $k_2$ cards, respectively, in the two stacks at some point in this process, then we make the assumption that the probability that the next card to be taken comes from a given stack is proportional to the current stack size. This implies that the probability that we take the next card from the first stack equals
$\frac{k_1}{k_1 + k_2}\ ,$
and the corresponding probability for the second stack is
$\frac{k_2}{k_1 + k_2}\ .$
We shall now show that this process assigns the uniform probability to each of the possible interleavings of the two stacks.
Suppose, for example, that an interleaving came about as the result of choosing cards from the two stacks in some order. The probability that this result occurred is the product of the probabilities at each point in the process, since the choice of card at each point is assumed to be independent of the previous choices. Each factor of this product is of the form
$\dfrac{k_i}{k_1+k_2}$
where $i = 1$ or $2$, and the denominator of each factor equals the number of cards left to be chosen. Thus, the denominator of the probability is just $n!$. At the moment when a card is chosen from a stack that has $i$ cards in it, the numerator of the corresponding factor in the probability is $i$, and the number of cards in this stack decreases by 1. Thus, the numerator is seen to be $k!(n-k)!$, since all cards in both stacks are eventually chosen. Therefore, this process assigns the probability $\frac{1}{{n \choose k}}$
to each possible interleaving.
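Both descriptions of the riffle shuffle are easy to simulate. The sketch below (our own Python, not the Bayer-Diaconis code) cuts at a binomial random position and then drops cards from the bottoms of the two stacks with probability proportional to stack size; a helper counts rising sequences, as defined above:

```python
import random

def riffle_shuffle(deck):
    """One riffle shuffle: binomial cut, then sequential drops."""
    n = len(deck)
    k = sum(random.random() < 0.5 for _ in range(n))   # cut point, ~ b(n, 1/2, k)
    stacks = [list(deck[:k]), list(deck[k:])]          # top-to-bottom lists
    taken = []
    while stacks[0] or stacks[1]:
        n0, n1 = len(stacks[0]), len(stacks[1])
        i = 0 if random.random() * (n0 + n1) < n0 else 1
        taken.append(stacks[i].pop())                  # card off the bottom of stack i
    return taken[::-1]                                 # read the new pile top to bottom

def rising_sequences(order):
    """Number of maximal increasing runs of consecutive integers."""
    position = {card: i for i, card in enumerate(order)}
    return 1 + sum(position[c + 1] < position[c] for c in order if c + 1 in position)

deck = list(range(1, 53))
for s in range(1, 6):
    deck = riffle_shuffle(deck)
    print(s, rising_sequences(deck))   # never more than 2**s rising sequences
```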
We now turn to the question of what happens when we riffle shuffle $s$ times. It should be clear that if we start with the identity ordering, we obtain an ordering with at most $2^s$ rising sequences, since a riffle shuffle creates at most two rising sequences from every rising sequence in the starting ordering. In fact, it is not hard to see that each such ordering is the result of $s$ riffle shuffles. The question becomes, then, in how many ways can an ordering with $r$ rising sequences come about by applying $s$ riffle shuffles to the identity ordering? In order to answer this question, we turn to the idea of an $a$-shuffle.
$a$-Shuffles
There are several ways to visualize an $a$-shuffle. One way is to imagine a creature with $a$ hands who is given a deck of cards to riffle shuffle. The creature naturally cuts the deck into $a$ stacks, and then riffles them together. (Imagine that!) Thus, the ordinary riffle shuffle is a 2-shuffle. As in the case of the ordinary 2-shuffle, we allow some of the stacks to have 0 cards. Another way to visualize an $a$-shuffle is to think about its inverse, called an $a$-unshuffle. This idea is described in the proof of the next theorem.
We will now show that an $a$-shuffle followed by a $b$-shuffle is equivalent to an $ab$-shuffle. This means, in particular, that $s$ riffle shuffles in succession are equivalent to one $2^s$-shuffle. This equivalence is made precise by the following theorem.
Theorem $1$
Let $a$ and $b$ be two positive integers. Let $S_{a,b}$ be the set of all ordered pairs in which the first entry is an $a$-shuffle and the second entry is a $b$-shuffle. Let $S_{ab}$ be the set of all $ab$-shuffles. Then there is a 1-1 correspondence between $S_{a,b}$ and $S_{ab}$ with the following property. Suppose that $(T_1, T_2)$ corresponds to $T_3$. If $T_1$ is applied to the identity ordering, and $T_2$ is applied to the resulting ordering, then the final ordering is the same as the ordering that is obtained by applying $T_3$ to the identity ordering.
Proof. The easiest way to describe the required correspondence is through the idea of an unshuffle. An $a$-unshuffle begins with a deck of $n$ cards. One by one, cards are taken from the top of the deck and placed, with equal probability, on the bottom of any one of $a$ stacks, where the stacks are labelled from 0 to $a-1$. After all of the cards have been distributed, we combine the stacks to form one stack by placing stack $i$ on top of stack $i+1$, for $0 \le i \le a-2$. It is easy to see that if one starts with a deck, there is exactly one way to cut the deck to obtain the $a$ stacks generated by the $a$-unshuffle, and with these $a$ stacks, there is exactly one way to interleave them to obtain the deck in the order that it was in before the unshuffle was performed. Thus, this $a$-unshuffle corresponds to a unique $a$-shuffle, and this $a$-shuffle is the inverse of the original $a$-unshuffle.
If we apply an $ab$-unshuffle $U_3$ to a deck, we obtain a set of $ab$ stacks, which are then combined, in order, to form one stack. We label these stacks with ordered pairs of integers, where the first coordinate is between 0 and $a-1$, and the second coordinate is between 0 and $b-1$. Then we label each card with the label of its stack. The number of possible labels is $ab$, as required. Using this labelling, we can describe how to find a $b$-unshuffle and an $a$-unshuffle, such that if these two unshuffles are applied in this order to the deck, we obtain the same set of $ab$ stacks as were obtained by the $ab$-unshuffle.
To obtain the $b$-unshuffle $U_2$, we sort the deck into $b$ stacks, with the $i$th stack containing all of the cards with second coordinate $i$, for $0 \le i \le b-1$. Then these stacks are combined to form one stack. The $a$-unshuffle $U_1$ proceeds in the same manner, except that the first coordinates of the labels are used. The resulting $a$ stacks are then combined to form one stack.
The above description shows that the cards ending up on top are all those labelled $(0, 0)$. These are followed by those labelled $(0, 1),\ (0, 2),$ $\ \ldots,\ (0, b - 1),\ (1, 0),\ (1,1),\ldots,\ (a-1, b-1)$. Furthermore, the relative order of any pair of cards with the same labels is never altered. But this is exactly the same as an $ab$-unshuffle, if, at the beginning of such an unshuffle, we label each of the cards with one of the labels $(0, 0),\ (0, 1),\ \ldots,\ (0, b-1),\ (1, 0),\ (1,1),\ \ldots,\ (a-1, b-1)$. This completes the proof.
In Figure $1$, we show the labels for a 2-unshuffle of a deck with 10 cards. There are 4 cards with the label 0 and 6 cards with the label 1, so if the 2-unshuffle is performed, the first stack will have 4 cards and the second stack will have 6 cards. When this unshuffle is performed, the deck ends up in the identity ordering.
In Figure $2$, we show the labels for a 4-unshuffle of the same deck (because there are four labels being used). This figure can also be regarded as an example of a pair of 2-unshuffles, as described in the proof above. The first 2-unshuffle will use the second coordinate of the labels to determine the stacks. In this case, the two stacks contain the cards whose values are
$\{5, 1, 6, 2, 7\} \mbox{ and } \{8, 9, 3, 4, 10\}.$
After this 2-unshuffle has been performed, the deck is in the order shown in Figure $1$, as the reader should check. If we wish to perform a 4-unshuffle on the deck, using the labels shown, we sort the cards lexicographically, obtaining the four stacks
$\{1, 2\},\ \{3, 4\},\ \{5, 6, 7\},\ {\rm and}\ \{8, 9, 10\}.$
When these stacks are combined, we once again obtain the identity ordering of the deck. The point of the above theorem is that both sorting procedures always lead to the same initial ordering.
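The $a$-unshuffle described in the proof is simple to simulate. In this Python sketch (ours), cards are dealt from the top of the deck onto the bottoms of $a$ randomly chosen stacks, which are then combined in order:

```python
import random

def a_unshuffle(deck, a):
    """One a-unshuffle: return the new deck and the label given to each card."""
    stacks = [[] for _ in range(a)]
    labels = []
    for card in deck:                 # take cards from the top of the deck
        i = random.randrange(a)       # choose a stack uniformly at random
        stacks[i].append(card)        # place the card on the bottom of stack i
        labels.append(i)
    # Combine the stacks: stack 0 on top, stack a-1 on the bottom.
    new_deck = [card for stack in stacks for card in stack]
    return new_deck, labels

print(a_unshuffle(list(range(1, 11)), 2))
```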
Theorem $2$
If $D$ is any ordering that is the result of applying an $a$-shuffle and then a $b$-shuffle to the identity ordering, then the probability assigned to $D$ by this pair of operations is the same as the probability assigned to $D$ by the process of applying an $ab$-shuffle to the identity ordering.
Proof. Call the sample space of $a$-shuffles $S_a$. If we label the stacks by the integers from $0$ to $a-1$, then each cut-interleaving pair, i.e., shuffle, corresponds to exactly one $n$-digit base $a$ integer, where the $i$th digit in the integer is the stack of which the $i$th card is a member. Thus, the number of cut-interleaving pairs is equal to the number of $n$-digit base $a$ integers, which is $a^n$. Of course, not all of these pairs lead to different orderings. The number of pairs leading to a given ordering will be discussed later. For our purposes it is enough to point out that it is the cut-interleaving pairs that determine the probability assignment.
The previous theorem shows that there is a 1-1 correspondence between $S_{a,b}$ and $S_{ab}$. Furthermore, corresponding elements give the same ordering when applied to the identity ordering. Given any ordering $D$, let $m_1$ be the number of elements of $S_{a,b}$ which, when applied to the identity ordering, result in $D$. Let $m_2$ be the number of elements of $S_{ab}$ which, when applied to the identity ordering, result in $D$. The previous theorem implies that $m_1 = m_2$. Thus, both sets assign the probability
$\dfrac{m_1}{\left ( ab \right )^{n}}$
to $D$. This completes the proof.
Connection with the Birthday Problem
There is another point that can be made concerning the labels given to the cards by the successive unshuffles. Suppose that we 2-unshuffle an $n$-card deck until the labels on the cards are all different. It is easy to see that this process produces each permutation with the same probability, i.e., this is a random process. To see this, note that if the labels become distinct on the $s$th 2-unshuffle, then one can think of this sequence of 2-unshuffles as one $2^s$-unshuffle, in which all of the stacks determined by the unshuffle have at most one card in them (remember, the stacks correspond to the labels). If each stack has at most one card in it, then given any two cards in the deck, it is equally likely that the first card has a lower or a higher label than the second card. Thus, each possible ordering is equally likely to result from this $2^s$-unshuffle.
Let $T$ be the random variable that counts the number of 2-unshuffles until all labels are distinct. One can think of $T$ as giving a measure of how long it takes in the unshuffling process until randomness is reached. Since shuffling and unshuffling are inverse processes, $T$ also measures the number of shuffles necessary to achieve randomness. Suppose that we have an $n$-card deck, and we ask for $P(T \le s)$. This equals $1 - P(T > s)$. But $T > s$ if and only if it is the case that not all of the labels after $s$ 2-unshuffles are distinct. This is just the birthday problem; we are asking for the probability that at least two people have the same birthday, given that we have $n$ people and there are $2^s$ possible birthdays. Using our formula from Example 3.3, we find that
$P(T > s) = 1 - {2^s \choose n} \frac{n!}{2^{sn}}.$
In Chapter 6, we will define the average value of a random variable. Using this idea, and the above equation, one can calculate the average value of the random variable $T$ (see Exercise 6.1.42). For example, if $n = 52$, then the average value of $T$ is about 11.7. This means that, on the average, about 12 riffle shuffles are needed for the process to be considered random.
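The distribution of $T$ is easy to tabulate from the formula above. A short Python computation (ours) for a 52-card deck:

```python
from math import comb, factorial

def p_T_greater(n, s):
    """P(T > s) = 1 - C(2^s, n) n! / 2^(sn), the birthday-problem bound."""
    if 2 ** s < n:
        return 1.0          # fewer labels than cards: a repeated label is certain
    return 1 - comb(2 ** s, n) * factorial(n) / 2 ** (s * n)

n = 52
for s in range(1, 16):
    print(s, round(1 - p_T_greater(n, s), 4))   # P(T <= s)
```

The probability $P(T \le s)$ should climb past $1/2$ at about $s = 11$ and past $.9$ at about $s = 14$, consistent with the average value of about 11.7 quoted above.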
Cut-Interleaving Pairs and Orderings
As was noted in the proof of Theorem $2$, not all of the cut-interleaving pairs lead to different orderings. However, there is an easy formula which gives the number of such pairs that lead to a given ordering.
Theorem $3$
If an ordering of length $n$ has $r$ rising sequences, then the number of cut-interleaving pairs under an $a$-shuffle of the identity ordering which lead to the ordering is
${ n + a - r \choose{n} }.$
Proof: To see why this is true, we need to count the number of ways in which the cut in an $a$-shuffle can be performed which will lead to a given ordering with $r$ rising sequences. We can disregard the interleavings, since once a cut has been made, at most one interleaving will lead to a given ordering. Since the given ordering has $r$ rising sequences, $r-1$ of the division points in the cut are determined. The remaining $a - 1 - (r - 1) = a - r$ division points can be placed anywhere. The number of places to put these remaining division points is $n+1$ (which is the number of spaces between the consecutive pairs of cards, including the positions at the beginning and the end of the deck). These places are chosen with repetition allowed, so the number of ways to make these choices is
${n + a - r \choose{a-r}} = {n + a - r \choose{n}} .$
In particular, this means that if $D$ is an ordering that is the result of applying an $a$-shuffle to the identity ordering, and if $D$ has $r$ rising sequences, then the probability assigned to $D$ by this process is
${ {n + a - r\choose{n}} \over{a^n}}.$
This completes the proof.
The above theorem shows that the essential information about the probability assigned to an ordering under an $a$-shuffle is just the number of rising sequences in the ordering. Thus, if we determine the number of orderings which contain exactly $r$ rising sequences, for each $r$ between 1 and $n$, then we will have determined the distribution function of the random variable which consists of applying a random $a$-shuffle to the identity ordering.
The number of orderings of $\{1, 2, \ldots, n\}$ with $r$ rising sequences is denoted by $A(n, r)$, and is called an Eulerian number. There are many ways to calculate the values of these numbers; the following theorem gives one recursive method which follows immediately from what we already know about $a$-shuffles.
Theorem $4$
Let $a$ and $n$ be positive integers. Then
$a^n = \sum_{r = 1}^a {n + a - r \choose{n} } A(n, r).$
Thus,
$A(n, a) = a^n - \sum_{r = 1}^{a-1} { n + a - r \choose{n} }A(n, r)$
In addition,
$A(n, 1) = 1$
Proof: The second equation can be used to calculate the values of the Eulerian numbers, and follows immediately from the first equation in the theorem. The last equation is a consequence of the fact that the only ordering of $\{1, 2, \ldots, n\}$ with one rising sequence is the identity ordering. Thus, it remains to prove the first equation. We will count the set of $a$-shuffles of a deck with $n$ cards in two ways. First, we know that there are $a^n$ such shuffles (this was noted in the proof of Theorem $2$). But there are $A(n, r)$ orderings of $\{1, 2, \ldots, n\}$ with $r$ rising sequences, and Theorem $3$ states that for each such ordering, there are exactly
${ n+a-r \choose n}$
cut-interleaving pairs that lead to the ordering. Therefore, the right-hand side of the first equation counts the set of $a$-shuffles of an $n$-card deck. This completes the proof.
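The recursion in Theorem $4$ is easy to program. The following sketch in Python (the names are ours) computes the Eulerian numbers and, for a small deck, checks them against a brute-force count of rising sequences.

```python
from math import comb
from itertools import permutations
from collections import Counter

def eulerian(n):
    """A[r] = A(n, r) for r = 1, ..., n, via the recursion of Theorem 4."""
    A = [0] * (n + 1)                        # A[0] is unused
    for a in range(1, n + 1):
        A[a] = a**n - sum(comb(n + a - r, n) * A[r] for r in range(1, a))
    return A

def rising_sequences(order):
    """Card c + 1 starts a new rising sequence exactly when it lies to
    the left of card c, so r = 1 + the number of such cards c."""
    pos = {card: i for i, card in enumerate(order)}
    return 1 + sum(1 for c in range(1, len(order)) if pos[c + 1] < pos[c])

n = 5
counts = Counter(rising_sequences(p) for p in permutations(range(1, n + 1)))
print(eulerian(n)[1:])                        # [1, 26, 66, 26, 1]
print([counts[r] for r in range(1, n + 1)])   # the same list
```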
Random Orderings and Random Processes
We now turn to the second question that was asked at the beginning of this section: What do we mean by a “random" ordering? It is somewhat misleading to think about a given ordering as being random or not random. If we want to choose a random ordering from the set of all orderings of $\{1, 2, \ldots, n\}$, we mean that we want every ordering to be chosen with the same probability, i.e., any ordering is as “random" as any other.
The word “random" should really be used to describe a process. We will say that a process that produces an object from a (finite) set of objects is a random process if each object in the set is produced with the same probability by the process. In the present situation, the objects are the orderings, and the process which produces these objects is the shuffling process. It is easy to see that no $a$-shuffle is really a random process, since if $T_1$ and $T_2$ are two orderings with a different number of rising sequences, then they are produced by an $a$-shuffle, applied to the identity ordering, with different probabilities.
Variation Distance
Instead of requiring that a sequence of shuffles yield a process which is random, we will define a measure that describes how far away a given process is from a random process. Let $X$ be any process which produces an ordering of $\{1, 2, \ldots, n\}$. Define $f_X(\pi)$ to be the probability that $X$ produces the ordering $\pi$. (Thus, $X$ can be thought of as a random variable with distribution function $f_X$.) Let $\Omega_n$ be the set of all orderings of $\{1, 2, \ldots, n\}$. Finally, let $u(\pi) = 1/|\Omega_n|$ for all $\pi \in \Omega_n$. The function $u$ is the distribution function of a process which produces orderings and which is random. For each ordering $\pi \in \Omega_n$, the quantity $|f_X(\pi) - u(\pi)|$ is the difference between the actual and desired probabilities that $X$ produces $\pi$. If we sum this over all orderings $\pi$ and call this sum $S$, we see that $S = 0$ if and only if $X$ is random, and otherwise $S$ is positive. It is easy to show that the maximum value of $S$ is 2, so we will multiply the sum by $1/2$ so that the value falls in the interval $[0, 1]$. Thus, we obtain the following sum as the formula for the variation distance between the two processes: $\parallel f_X - u \parallel = {1\over 2}\sum_{\pi \in \Omega_n} |f_X(\pi) - u(\pi)| .$
Now we apply this idea to the case of shuffling. We let $X$ be the process of $s$ successive riffle shuffles applied to the identity ordering. We know that it is also possible to think of $X$ as one $2^s$-shuffle. We also know that $f_X$ is constant on the set of all orderings with $r$ rising sequences, where $r$ is any positive integer. Finally, we know the value of $f_X$ on an ordering with $r$ rising sequences, and we know how many such orderings there are. Thus, in this specific case, we have
$\parallel f_X - u \parallel = {1\over 2}\sum_{r=1}^n A(n, r) \biggl| {2^s + n - r\choose{n}}/2^{ns} - {1\over{n!}}\biggr|$
Since this sum has only $n$ summands, it is easy to compute this for moderate sized values of $n$. For $n = 52$, we obtain the list of values given in Table $1$.
Table $1$: Distance to the random process.
| Number of shuffles $s$ | $\parallel f_X - u \parallel$ |
|---|---|
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 0.9999995334 |
| 5 | 0.9237329294 |
| 6 | 0.6135495966 |
| 7 | 0.3340609995 |
| 8 | 0.1671586419 |
| 9 | 0.0854201934 |
| 10 | 0.0429455489 |
| 11 | 0.0215023760 |
| 12 | 0.0107548935 |
| 13 | 0.0053779101 |
| 14 | 0.0026890130 |
To help in understanding these data, they are shown in graphical form in Figure $1$. The program VariationList produces the data shown in both Table $1$ and Figure $1$. One sees that until 5 shuffles have occurred, the output of $X$ is very far from random. After 5 shuffles, the distance from the random process is essentially halved each time a shuffle occurs.
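A program such as VariationList might look as follows. This Python sketch (our reconstruction, not the original program) evaluates the sum above in exact rational arithmetic and reproduces Table $1$.

```python
from math import comb, factorial
from fractions import Fraction

def eulerian(n):
    """A[r] = A(n, r), computed by the recursion of Theorem 4."""
    A = [0] * (n + 1)
    for a in range(1, n + 1):
        A[a] = a**n - sum(comb(n + a - r, n) * A[r] for r in range(1, a))
    return A

def variation_distance(n, s):
    """|| f_X - u || after s riffle shuffles (one 2^s-shuffle) of n cards."""
    A = eulerian(n)
    u = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):
        # probability of one particular ordering with r rising sequences
        f_X = Fraction(comb(2**s + n - r, n), 2**(n * s))
        total += A[r] * abs(f_X - u)
    return float(total / 2)

for s in range(1, 15):
    print(s, variation_distance(52, s))
```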
Given the distribution functions $f_X(\pi)$ and $u(\pi)$ as above, there is another way to view the variation distance $\parallel f_X - u\parallel$. Given any event $T$ (which is a subset of $S_n$), we can calculate its probability under the process $X$ and under the uniform process. For example, we can imagine that $T$ represents the set of all permutations in which the first player in a 7-player poker game is dealt a straight flush (five consecutive cards in the same suit). It is interesting to consider how much the probability of this event after a certain number of shuffles differs from the probability of this event if all permutations are equally likely. This difference can be thought of as describing how close the process $X$ is to the random process with respect to the event $T$.
Now consider the event $T$ such that the absolute value of the difference between these two probabilities is as large as possible. It can be shown that this absolute value is the variation distance between the process $X$ and the uniform process. (The reader is asked to prove this fact in Exercise $4$.)
We have just seen that, for a deck of 52 cards, the variation distance between the 7-riffle shuffle process and the random process is about $.334$. It is of interest to find an event $T$ such that the difference between the probabilities that the two processes produce $T$ is close to $.334$. An event with this property can be described in terms of the game called New-Age Solitaire.
New-Age Solitaire
This game was invented by Peter Doyle. It is played with a standard 52-card deck. We deal the cards face up, one at a time, onto a discard pile. If an ace is encountered, say the ace of Hearts, we use it to start a Heart pile. Each suit pile must be built up in order, from ace to king, using only subsequently dealt cards. Once we have dealt all of the cards, we pick up the discard pile and continue. We define the Yin suits to be Hearts and Clubs, and the Yang suits to be Diamonds and Spades. The game ends when either both Yin suit piles have been completed, or both Yang suit piles have been completed. It is clear that if the ordering of the deck is produced by the random process, then the probability that the Yin suit piles are completed first is exactly 1/2.
Now suppose that we buy a new deck of cards, break the seal on the package, and riffle shuffle the deck 7 times. If one tries this, one finds that the Yin suits win about 75% of the time. This is 25% more than we would get if the deck were in truly random order. This deviation is reasonably close to the theoretical maximum of $33.4$% obtained above.
Why do the Yin suits win so often? In a brand new deck of cards, the suits are in the following order, from top to bottom: ace through king of Hearts, ace through king of Clubs, king through ace of Diamonds, and king through ace of Spades. Note that if the cards were not shuffled at all, then the Yin suit piles would be completed on the first pass, before any Yang suit cards are even seen. If we were to continue playing the game until the Yang suit piles are completed, it would take 13 passes through the deck to do this. Thus, one can see that in a new deck, the Yin suits are in the most advantageous order and the Yang suits are in the least advantageous order. Under 7 riffle shuffles, the relative advantage of the Yin suits over the Yang suits is preserved to a certain extent.
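The reader can check these figures by simulation. The sketch below (our code) models a riffle shuffle by cutting the deck according to a binomial distribution and then dropping cards from each packet with probability proportional to packet size; this produces the same distribution as the 2-shuffles described earlier. It then plays New-Age Solitaire on a new deck after 7 such shuffles.

```python
import random

def riffle(deck):
    """One riffle (2-)shuffle: binomial cut, then an interleaving chosen
    by dropping cards with probability proportional to packet size."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def yin_wins(deck):
    """Play New-Age Solitaire; True if the Yin (Hearts, Clubs) piles
    are completed before the Yang (Diamonds, Spades) piles."""
    need = {s: 1 for s in 'HCDS'}        # next rank needed on each pile
    while True:
        leftover = []
        for suit, rank in deck:
            if need[suit] == rank:
                need[suit] += 1
                if need['H'] > 13 and need['C'] > 13:
                    return True
                if need['D'] > 13 and need['S'] > 13:
                    return False
            else:
                leftover.append((suit, rank))
        deck = leftover                  # pick up the discard pile

# A new deck: Hearts and Clubs run ace to king, Diamonds and Spades king to ace.
new_deck = ([('H', r) for r in range(1, 14)] + [('C', r) for r in range(1, 14)] +
            [('D', r) for r in range(13, 0, -1)] + [('S', r) for r in range(13, 0, -1)])

trials, wins = 10000, 0
for _ in range(trials):
    deck = new_deck
    for _ in range(7):
        deck = riffle(deck)
    wins += yin_wins(deck)
print(wins / trials)                     # roughly .75
```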
Exercises
Exercise $1$
Given any ordering $\sigma$ of $\{1, 2, \ldots, n\}$, we can define $\sigma^{-1}$, the inverse ordering of $\sigma$, to be the ordering in which the $i$th element is the position occupied by $i$ in $\sigma$. For example, if $\sigma = (1, 3, 5, 2, 4, 7, 6)$, then $\sigma^{-1} = (1, 4, 2, 5, 3, 7, 6)$. (If one thinks of these orderings as permutations, then $\sigma^{-1}$ is the inverse of $\sigma$.)
A fall occurs between two positions in an ordering if the left position is occupied by a larger number than the right position. It will be convenient to say that every ordering has a fall after the last position. In the above example, $\sigma^{-1}$ has four falls. They occur after the second, fourth, sixth, and seventh positions. Prove that the number of rising sequences in an ordering $\sigma$ equals the number of falls in $\sigma^{-1}$.
Exercise $2$
Show that if we start with the identity ordering of $\{1, 2, \ldots, n\}$, then the probability that an $a$-shuffle leads to an ordering with exactly $r$ rising sequences equals
$\dfrac{{n + a - r \choose n}}{a^n}\, A(n, r).$
Exercise $3$
Let $D$ be a deck of $n$ cards. We have seen that there are $a^n$ $a$-shuffles of $D$. A coding of the set of $a$-unshuffles was given in the proof of Theorem $1$. We will now give a coding of the $a$-shuffles which corresponds to the coding of the $a$-unshuffles. Let $S$ be the set of all $n$-tuples of integers, each between 0 and $a-1$. Let $M = (m_1, m_2, \ldots, m_n)$ be any element of $S$. Let $n_i$ be the number of $i$’s in $M$, for $0 \le i \le a-1$. Suppose that we start with the deck in increasing order (i.e., the cards are numbered from 1 to $n$). We label the first $n_0$ cards with a 0, the next $n_1$ cards with a 1, etc. Then the $a$-shuffle corresponding to $M$ is the shuffle which results in the ordering in which the cards labelled $i$ are placed in the positions in $M$ containing the label $i$. The cards with the same label are placed in these positions in increasing order of their numbers. For example, if $n = 6$ and $a = 3$, let $M = (1, 0, 2, 2, 0, 2)$. Then $n_0 = 2,\ n_1 = 1,$ and $n_2 = 3$. So we label cards 1 and 2 with a 0, card 3 with a 1, and cards 4, 5, and 6 with a 2. Then cards 1 and 2 are placed in positions 2 and 5, card 3 is placed in position 1, and cards 4, 5, and 6 are placed in positions 3, 4, and 6, resulting in the ordering $(3, 1, 4, 5, 2, 6)$.
1. Using this coding, show that the probability that in an $a$-shuffle, the first card (i.e., card number 1) moves to the $i$th position, is given by the following expression: $\frac{(a-1)^{i-1} a^{n-i}+(a-2)^{i-1}(a-1)^{n-i}+\cdots+1^{i-1} 2^{n-i}}{a^n}$.
2. Give an accurate estimate for the probability that in three riffle shuffles of a 52-card deck, the first card ends up in one of the first 26 positions. Using a computer, accurately estimate the probability of the same event after seven riffle shuffles.
Exercise $4$
Let $X$ denote a particular process that produces elements of $S_n$, and let $U$ denote the uniform process. Let the distribution functions of these processes be denoted by $f_X$ and $u$, respectively. Show that the variation distance $\parallel f_X - u\parallel$ is equal to $\max_{T \subset S_n} \sum_{\pi \in T} \Bigl(f_X(\pi) - u(\pi)\Bigr).$
Hint: Write the permutations in $S_n$ in decreasing order of the difference $f_X(\pi) - u(\pi)$.
Exercise $5$
Consider the process described in the text in which an $n$-card deck is repeatedly labelled and 2-unshuffled, in the manner described in the proof of Theorem $1$. (See Figures $1$ and $2$.) The process continues until the labels are all different. Show that the process never terminates until at least $\lceil \log_2(n) \rceil$ unshuffles have been done.
3.R: References
1. N. L. Biggs, “The Roots of Combinatorics," Historia Mathematica, vol. 6 (1979), pp. 109–136.
2. P. R. de Montmort, Essay d’Analyse sur des Jeux de Hazard, 2d ed. (Paris: Quillau, 1713).
3. F. N. David, Games, Gods and Gambling (London: Griffin, 1962), p. 146.
4. M. vos Savant, “Ask Marilyn," Parade Magazine, 21 August 1994.
5. J. Riordan, An Introduction to Combinatorial Analysis (New York: John Wiley & Sons, 1958).
6. P. Doyle, “Solution to Montmort’s Probleme du Treize,” math.ucsd.edu/~doyle/.
7. P. Doyle, C. Grinstead, and J. Snell, “Frustration Solitaire," UMAP Journal, vol. 16, no. 2 (1995), pp. 137–145.
8. R. von Mises, “Über Aufteilungs- und Besetzungs-Wahrscheinlichkeiten," Revue de la Faculté des Sciences de l’Université d’Istanbul, N. S. vol. 4 (1938-39), pp. 145-163.
9. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed. (New York: John Wiley & Sons, 1968).
10. J. Stirling, Methodus Differentialis (London: Bowyer, 1730).
11. A. de Moivre, The Doctrine of Chances, 3rd ed. (London: Millar, 1756).
12. H. S. Wilf, “A Bijection in the Theory of Derangements," Mathematics Magazine, vol. 57, no. 1 (1984), pp. 37–40.
13. E. B. Dynkin and A. A. Yushkevich, Markov Processes: Theorems and Problems, trans. J. S. Wood (New York: Plenum, 1969).
14. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed. (New York: John Wiley & Sons, 1968), p. 41.
15. B. Pascal, Traité du Triangle Arithmétique (Paris: Desprez, 1665).
16. A. W. F. Edwards, Pascal’s Arithmetical Triangle (London: Griffin, 1987).
17. ibid., p. 27.
18. J. Needham, Science and Civilisation in China, vol. 3 (New York: Cambridge University Press, 1959), p. 135.
19. N. Tartaglia, General Trattato di Numeri et Misure (Vinegia, 1556).
20. Quoted in Edwards, op. cit., p. 37.
21. M. Stifel, Arithmetica Integra (Norimburgae, 1544).
22. G. Cardano, Opus Novum de Proportionibus (Basilea, 1570).
23. F. N. David, op. cit., p. 235.
24. A. W. F. Edwards, op. cit., p. ix.
25. J. Bernoulli, Ars Conjectandi (Basil: Thurnisiorum, 1713).
26. W. Feller, An Introduction to Probability Theory and Its Applications, vol. I, 3rd ed. (New York: John Wiley & Sons, 1968), p. 106.
27. E. Lucas, “Théorie des Fonctions Numériques Simplement Périodiques," American Journal of Mathematics, vol. 1 (1878), pp. 184–240, 289–321.
28. B. Mann, “How Many Times Should You Shuffle a Deck of Cards?", UMAP Journal, vol. 15, no. 4 (1994), pp. 303–331.
29. D. Bayer and P. Diaconis, “Trailing the Dovetail Shuffle to its Lair," Annals of Applied Probability, vol. 2, no. 2 (1992), pp. 294–313.
04: Conditional Probability
Conditional Probability
In this section we ask and answer the following question. Suppose we assign a distribution function to a sample space and then learn that an event $E$ has occurred. How should we change the probabilities of the remaining events? We shall call the new probability for an event $F$ the conditional probability of $F$ given $E$ and denote it by $P(F|E)$.
Example $1$
An experiment consists of rolling a die once. Let $X$ be the outcome. Let $F$ be the event $\{X = 6\}$, and let $E$ be the event $\{X > 4\}$. We assign the distribution function $m(\omega) = 1/6$ for $\omega = 1, 2, \ldots , 6$. Thus, $P(F) = 1/6$. Now suppose that the die is rolled and we are told that the event $E$ has occurred. This leaves only two possible outcomes: 5 and 6. In the absence of any other information, we would still regard these outcomes to be equally likely, so the probability of $F$ becomes 1/2, making $P(F|E) = 1/2$.
Example $2$:
In the Life Table (see Appendix C), one finds that in a population of 100,000 females, 89.835% can expect to live to age 60, while 57.062% can expect to live to age 80. Given that a woman is 60, what is the probability that she lives to age 80?
Solution
This is an example of a conditional probability. In this case, the original sample space can be thought of as a set of 100,000 females. The events $E$ and $F$ are the subsets of the sample space consisting of all women who live at least 60 years, and at least 80 years, respectively. We consider $E$ to be the new sample space, and note that $F$ is a subset of $E$. Thus, the size of $E$ is 89,835, and the size of $F$ is 57,062. So, the probability in question equals $57{,}062/89{,}835 = .6352$. Thus, a woman who is 60 has a 63.52% chance of living to age 80.
Example $3$
Consider our voting example from Section 1.2: three candidates A, B, and C are running for office. We decided that A and B have an equal chance of winning and C is only 1/2 as likely to win as A. Let $A$ be the event “A wins," $B$ that “B wins," and $C$ that “C wins." Hence, we assigned probabilities $P(A) = 2/5$, $P(B) = 2/5$, and $P(C) = 1/5$.
Suppose that before the election is held, $A$ drops out of the race. As in Example $1$, it would be natural to assign new probabilities to the events $B$ and $C$ which are proportional to the original probabilities. Thus, we would have $P(B|\tilde A) = 2/3$, and $P(C|\tilde A) = 1/3$. It is important to note that any time we assign probabilities to real-life events, the resulting distribution is only useful if we take into account all relevant information. In this example, we may have knowledge that most voters who favor $A$ will vote for $C$ if $A$ is no longer in the race. This will clearly make the probability that $C$ wins greater than the value of 1/3 that was assigned above.
In these examples we assigned a distribution function and then were given new information that determined a new sample space, consisting of the outcomes that are still possible, and caused us to assign a new distribution function to this space.
We want to make formal the procedure carried out in these examples. Let $\Omega = \{\omega_1,\omega_2,\dots,\omega_r\}$ be the original sample space with distribution function $m(\omega_j)$ assigned. Suppose we learn that the event $E$ has occurred. We want to assign a new distribution function $m(\omega_j|E)$ to $\Omega$ to reflect this fact. Clearly, if a sample point $\omega_j$ is not in $E$, we want $m(\omega_j|E) = 0$. Moreover, in the absence of information to the contrary, it is reasonable to assume that the probabilities for $\omega_k$ in $E$ should have the same relative magnitudes that they had before we learned that $E$ had occurred. For this we require that $m(\omega_k|E) = cm(\omega_k)$ for all $\omega_k$ in $E$, with $c$ some positive constant. But we must also have $\sum_E m(\omega_k|E) = c\sum_E m(\omega_k) = 1\ .$ Thus, $c = \frac 1{\sum_E m(\omega_k)} = \frac 1{P(E)}\ .$ (Note that this requires us to assume that $P(E) > 0$.) Thus, we will define $m(\omega_k|E) = \frac {m(\omega_k)}{P(E)}$ for $\omega_k$ in $E$. We will call this new distribution the conditional distribution given $E$. For a general event $F$, this gives $P(F|E) = \sum_{F \cap E} m(\omega_k|E) = \sum_{F \cap E}\frac {m(\omega_k)}{P(E)} = \frac {P(F \cap E)}{P(E)}\ .$
We call $P(F|E)$ the conditional probability of $F$ occurring given $E$, and compute it using the formula $P(F|E) = \frac {P(F \cap E)}{P(E)}\ .$
Example $4$
Let us return to the example of rolling a die. Recall that $F$ is the event $X = 6$, and $E$ is the event $X > 4$. Note that $E \cap F$ is the event $F$. So, the above formula gives $P(F|E) = \frac {P(F \cap E)}{P(E)} = \frac {1/6}{1/3} = \frac 12\ ,$ in agreement with the calculations performed earlier.
Example $5$
We have two urns, I and II. Urn I contains 2 black balls and 3 white balls. Urn II contains 1 black ball and 1 white ball. An urn is drawn at random and a ball is chosen at random from it. We can represent the sample space of this experiment as the paths through a tree as shown in Figure $1$. The probabilities assigned to the paths are also shown.
Let $B$ be the event “a black ball is drawn," and $I$ the event “urn I is chosen." Then the branch weight 2/5, which is shown on one branch in the figure, can now be interpreted as the conditional probability $P(B|I)$.
Suppose we wish to calculate $P(I|B)$. Using the formula, we obtain
$P(I|B) = \frac {P(I \cap B)}{P(B)} = \frac {P(I \cap B)}{P(B \cap I) + P(B \cap II)} = \frac {1/5}{1/5 + 1/4} = \frac 49\ .$
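A quick simulation agrees with this calculation. In the sketch below (our code), we estimate both $P(B)$ and $P(I|B)$.

```python
import random

def draw():
    """Choose an urn at random, then a ball at random from it."""
    if random.random() < 1/2:
        return 'I', ('black' if random.random() < 2/5 else 'white')
    else:
        return 'II', ('black' if random.random() < 1/2 else 'white')

results = [draw() for _ in range(100000)]
black = [urn for urn, color in results if color == 'black']
print(len(black) / len(results))       # P(B): about 9/20 = .45
print(black.count('I') / len(black))   # P(I|B): about 4/9 = .444
```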
Bayes Probabilities
Our original tree measure gave us the probabilities for drawing a ball of a given color, given the urn chosen. We have just calculated the inverse probability that a particular urn was chosen, given the color of the ball. Such an inverse probability is called a Bayes probability and may be obtained by a formula that we shall develop later. Bayes probabilities can also be obtained by simply constructing the tree measure for the two-stage experiment carried out in reverse order. We show this tree in Figure $2$.
The paths through the reverse tree are in one-to-one correspondence with those in the forward tree, since they correspond to individual outcomes of the experiment, and so they are assigned the same probabilities. From the forward tree, we find that the probability of a black ball is $\frac 12 \cdot \frac 25 + \frac 12 \cdot \frac 12 = \frac 9{20}\ .$
The probabilities for the branches at the second level are found by simple division. For example, if $x$ is the probability to be assigned to the top branch at the second level, we must have $\frac 9{20} \cdot x = \frac 15$ or $x = 4/9$. Thus, $P(I|B) = 4/9$, in agreement with our previous calculations. The reverse tree then displays all of the inverse, or Bayes, probabilities.
Example $6$
We consider now a problem called the Monty Hall problem. This has long been a favorite problem but was revived by a letter from Craig Whitaker to Marilyn vos Savant for consideration in her column in Parade Magazine.1 Craig wrote:
Suppose you’re on Monty Hall’s Let’s Make a Deal! You are given the choice of three doors, behind one door is a car, the others, goats. You pick a door, say 1, Monty opens another door, say 3, which has a goat. Monty says to you “Do you want to pick door 2?" Is it to your advantage to switch your choice of doors?
Solution
Marilyn gave a solution concluding that you should switch, and if you do, your probability of winning is 2/3. Several irate readers, some of whom identified themselves as having a PhD in mathematics, said that this is absurd since after Monty has ruled out one door there are only two possible doors and they should still each have the same probability 1/2 so there is no advantage to switching. Marilyn stuck to her solution and encouraged her readers to simulate the game and draw their own conclusions from this. We also encourage the reader to do this (see Exercise $11$). Other readers complained that Marilyn had not described the problem completely. In particular, the way in which certain decisions were made during a play of the game were not specified. This aspect of the problem will be discussed in Section 4.3. We will assume that the car was put behind a door by rolling a three-sided die which made all three choices equally likely. Monty knows where the car is, and always opens a door with a goat behind it. Finally, we assume that if Monty has a choice of doors (i.e., the contestant has picked the door with the car behind it), he chooses each door with probability 1/2. Marilyn clearly expected her readers to assume that the game was played in this manner.
As is the case with most apparent paradoxes, this one can be resolved through careful analysis. We begin by describing a simpler, related question. We say that a contestant is using the “stay" strategy if he picks a door, and, if offered a chance to switch to another door, declines to do so (i.e., he stays with his original choice). Similarly, we say that the contestant is using the “switch" strategy if he picks a door, and, if offered a chance to switch to another door, takes the offer. Now suppose that a contestant decides in advance to play the “stay" strategy. His only action in this case is to pick a door (and decline an invitation to switch, if one is offered). What is the probability that he wins a car? The same question can be asked about the “switch" strategy.
Using the “stay" strategy, a contestant will win the car with probability 1/3, since 1/3 of the time the door he picks will have the car behind it. On the other hand, if a contestant plays the “switch" strategy, then he will win whenever the door he originally picked does not have the car behind it, which happens 2/3 of the time.
This very simple analysis, though correct, does not quite solve the problem that Craig posed. Craig asked for the conditional probability that you win if you switch, given that you have chosen door 1 and that Monty has chosen door 3. To solve this problem, we set up the problem before getting this information and then compute the conditional probability given this information. This is a process that takes place in several stages; the car is put behind a door, the contestant picks a door, and finally Monty opens a door. Thus it is natural to analyze this using a tree measure. Here we make an additional assumption that if Monty has a choice of doors (i.e., the contestant has picked the door with the car behind it) then he picks each door with probability 1/2. The assumptions we have made determine the branch probabilities and these in turn determine the tree measure. The resulting tree and tree measure are shown in Figure $3$. It is tempting to reduce the tree’s size by making certain assumptions such as: “Without loss of generality, we will assume that the contestant always picks door 1." We have chosen not to make any such assumptions, in the interest of clarity.
Now the given information, namely that the contestant chose door 1 and Monty chose door 3, means only two paths through the tree are possible (see Figure $4$). For one of these paths, the car is behind door 1 and for the other it is behind door 2. The path with the car behind door 2 is twice as likely as the one with the car behind door 1. Thus the conditional probability is 2/3 that the car is behind door 2 and 1/3 that it is behind door 1, so if you switch you have a 2/3 chance of winning the car, as Marilyn claimed.
At this point, the reader may think that the two problems above are the same, since they have the same answers. Recall that we assumed in the original problem that if the contestant chooses the door with the car, so that Monty has a choice of two doors, then he chooses each of them with probability 1/2. Now suppose instead that in the case that he has a choice, he chooses the door with the larger number with probability 3/4. In the “switch" vs. “stay" problem, the probability of winning with the “switch" strategy is still 2/3. However, in the original problem, if the contestant switches, he wins with probability 4/7. The reader can check this by noting that the same two paths as before are the only two possible paths in the tree. The path leading to a win, if the contestant switches, has probability 1/3, while the path which leads to a loss, if the contestant switches, has probability 1/4.
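Both versions of the problem are easy to simulate. In the sketch below (our code), the parameter `p_high` is the probability that Monty, when he has a choice, opens the higher-numbered door; `p_high = 0.5` gives the answer 2/3 obtained above, and `p_high = 0.75` gives 4/7.

```python
import random

def play(p_high):
    """One play of the game; returns (contestant's pick, Monty's door, car)."""
    car = random.randint(1, 3)
    pick = random.randint(1, 3)
    options = [d for d in (1, 2, 3) if d != pick and d != car]
    if len(options) == 2:                # pick == car, so Monty has a choice
        monty = max(options) if random.random() < p_high else min(options)
    else:
        monty = options[0]
    return pick, monty, car

def switch_win_prob(p_high, trials=200000):
    """P(car is behind door 2 | contestant picked 1 and Monty opened 3)."""
    wins = total = 0
    for _ in range(trials):
        pick, monty, car = play(p_high)
        if pick == 1 and monty == 3:
            total += 1
            wins += (car == 2)
    return wins / total

print(switch_win_prob(0.5))    # about 2/3
print(switch_win_prob(0.75))   # about 4/7
```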
Independent Events
It often happens that the knowledge that a certain event $E$ has occurred has no effect on the probability that some other event $F$ has occurred, that is, that $P(F|E) = P(F)$. One would expect that in this case, the equation $P(E|F) = P(E)$ would also be true. In fact (see Exercise $1$), each equation implies the other. If these equations are true, we might say that $F$ is independent of $E$. For example, you would not expect the knowledge of the outcome of the first toss of a coin to change the probability that you would assign to the possible outcomes of the second toss, that is, you would not expect that the second toss depends on the first. This idea is formalized in the following definition of independent events.
Definition $1$
Let $E$ and $F$ be two events. We say that they are independent if either 1) both events have positive probability and $P(E|F) = P(E)\ {\rm and}\ P(F|E) = P(F)\ ,$ or 2) at least one of the events has probability 0.
As noted above, if both $P(E)$ and $P(F)$ are positive, then each of the above equations imply the other, so that to see whether two events are independent, only one of these equations must be checked (see Exercise 1).
The following theorem provides another way to check for independence.
Theorem $1$
Two events $E$ and $F$ are independent if and only if $P(E\cap F) = P(E)P(F)\ .$
Proof
If either event has probability 0, then the two events are independent and the above equation is true, so the theorem is true in this case. Thus, we may assume that both events have positive probability in what follows. Assume that $E$ and $F$ are independent. Then $P(E|F) = P(E)$, and so $P(E\cap F) = P(E|F)P(F) = P(E)P(F)\ .$
Assume next that $P(E\cap F) = P(E)P(F)$. Then $P(E|F) = \frac {P(E \cap F)}{P(F)} = P(E)\ .$ Also, $P(F|E) = \frac {P(F \cap E)}{P(E)} = P(F)\ .$ Therefore, $E$ and $F$ are independent.
Example $7$
Suppose that we have a coin which comes up heads with probability $p$, and tails with probability $q$. Now suppose that this coin is tossed twice. Using a frequency interpretation of probability, it is reasonable to assign to the outcome $(H,H)$ the probability $p^2$, to the outcome $(H, T)$ the probability $pq$, and so on. Let $E$ be the event that heads turns up on the first toss and $F$ the event that tails turns up on the second toss. We will now check that with the above probability assignments, these two events are independent, as expected. We have
We have $P(E) = p^2 + pq = p$ and $P(F) = pq + q^2 = q$. Finally, $P(E\cap F) = pq$, so $P(E\cap F) = P(E)P(F)$.
Example $8$
It is often, but not always, intuitively clear when two events are independent. In Example $7$, with a fair coin ($p = q = 1/2$), let $A$ be the event “the first toss is a head" and $B$ the event “the two outcomes are the same." Then $P(B|A) = \frac {P(B \cap A)}{P(A)} = \frac {P\{\mbox {HH}\}}{P\{\mbox {HH,HT}\}} = \frac {1/4}{1/2} = \frac 12 = P(B).$ Therefore, $A$ and $B$ are independent, but the result was not so obvious.
Example $9$
Finally, let us give an example of two events that are not independent. In Example $7$, again with a fair coin, let $I$ be the event “heads on the first toss" and $J$ the event “two heads turn up." Then $P(I) = 1/2$ and $P(J) = 1/4$. The event $I \cap J$ is the event “heads on both tosses" and has probability $1/4$. Thus, $I$ and $J$ are not independent since $P(I)P(J) = 1/8 \ne P(I \cap J)$.
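Checks of this kind can be carried out mechanically by enumerating the sample space. A short sketch (our code, for the fair-coin case) verifies Examples $8$ and $9$; here $A$ plays the role of the event $I$ as well.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product('HT', repeat=2))      # two tosses of a fair coin

def P(event):
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

A = lambda w: w[0] == 'H'        # heads on the first toss
B = lambda w: w[0] == w[1]       # the two outcomes are the same
J = lambda w: w == ('H', 'H')    # two heads turn up

print(P(lambda w: A(w) and B(w)) == P(A) * P(B))  # True: independent
print(P(lambda w: A(w) and J(w)) == P(A) * P(J))  # False: not independent
```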
We can extend the concept of independence to any finite set of events $A_1$, $A_2$, …, $A_n$.
Definition $2$
A set of events $\{A_1,\ A_2,\ \ldots,\ A_n\}$ is said to be mutually independent if for any subset $\{A_i,\ A_j,\ldots,\ A_m\}$ of these events we have
$P(A_i \cap A_j \cap\cdots\cap A_m) = P(A_i)P(A_j)\cdots P(A_m),$
or equivalently, if for any sequence $\bar A_1$, $\bar A_2$, …, $\bar A_n$ with $\bar A_j = A_j$ or $\tilde A_j$, $P(\bar A_1 \cap \bar A_2 \cap\cdots\cap \bar A_n) = P(\bar A_1)P(\bar A_2)\cdots P(\bar A_n).$ (For a proof of the equivalence in the case $n = 3$, see Exercise 33.)
Using this terminology, it is a fact that any sequence $(\mbox S,\mbox S,\mbox F,\mbox F, \mbox S, \dots,\mbox S)$ of possible outcomes of a Bernoulli trials process forms a sequence of mutually independent events.
It is natural to ask: If all pairs of a set of events are independent, is the whole set mutually independent? The answer is not necessarily, and an example is given in Exercise 7.
It is important to note that the statement $P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1)P(A_2) \cdots P(A_n)$ does not imply that the events $A_1$, $A_2$, …, $A_n$ are mutually independent (see Exercise 8).
Joint Distribution Functions and Independence of Random Variables
It is frequently the case that when an experiment is performed, several different quantities concerning the outcomes are investigated.
Example $10$:
Suppose we toss a coin three times. The basic random variable ${\bar X}$ corresponding to this experiment has eight possible outcomes, which are the ordered triples consisting of H’s and T’s. We can also define the random variable $X_i$, for $i = 1, 2, 3$, to be the outcome of the $i$th toss. If the coin is fair, then we should assign the probability 1/8 to each of the eight possible outcomes. Thus, the distribution functions of $X_1$, $X_2$, and $X_3$ are identical; in each case they are defined by $m(H) = m(T) = 1/2$.
If we have several random variables $X_1, X_2, \ldots, X_n$ which correspond to a given experiment, then we can consider the joint random variable ${\bar X} = (X_1, X_2, \ldots, X_n)$ defined by taking an outcome $\omega$ of the experiment, and writing, as an $n$-tuple, the corresponding $n$ outcomes for the random variables $X_1, X_2, \ldots, X_n$. Thus, if the random variable $X_i$ has, as its set of possible outcomes the set $R_i$, then the set of possible outcomes of the joint random variable ${\bar X}$ is the Cartesian product of the $R_i$’s, i.e., the set of all $n$-tuples of possible outcomes of the $X_i$’s.
Example $11$:
In the coin-tossing example above, let $X_i$ denote the outcome of the $i$th toss. Then the joint random variable ${\bar X} = (X_1, X_2, X_3)$ has eight possible outcomes.
Suppose that we now define $Y_i$, for $i = 1, 2, 3$, as the number of heads which occur in the first $i$ tosses. Then $Y_i$ has $\{0, 1, \ldots, i\}$ as possible outcomes, so at first glance, the set of possible outcomes of the joint random variable ${\bar Y} = (Y_1, Y_2, Y_3)$ should be the set $\{(a_1, a_2, a_3)\ :\ 0 \le a_1 \le 1, 0 \le a_2 \le 2, 0 \le a_3 \le 3\}\ .$ However, the outcome $(1, 0, 1)$ cannot occur, since we must have $a_1 \le a_2 \le a_3$. The solution to this problem is to define the probability of the outcome $(1, 0, 1)$ to be 0. In addition, we must have $a_{i+1} - a_i \le 1$ for $i = 1, 2$.
We now illustrate the assignment of probabilities to the various outcomes for the joint random variables ${\bar X}$ and ${\bar Y}$. In the first case, each of the eight outcomes should be assigned the probability 1/8, since we are assuming that we have a fair coin. In the second case, since $Y_i$ has $i+1$ possible outcomes, the set of possible outcomes has size 24. Only eight of these 24 outcomes can actually occur, namely the ones satisfying $a_1 \le a_2 \le a_3$. Each of these outcomes corresponds to exactly one of the outcomes of the random variable ${\bar X}$, so it is natural to assign probability 1/8 to each of these. We assign probability 0 to the other 16 outcomes. In each case, the probability function is called a joint distribution function.
We collect the above ideas in a definition.
Definition $3$
Let $X_1, X_2, \ldots, X_n$ be random variables associated with an experiment. Suppose that the sample space (i.e., the set of possible outcomes) of $X_i$ is the set $R_i$. Then the joint random variable ${\bar X} = (X_1, X_2, \ldots, X_n)$ is defined to be the random variable whose outcomes consist of ordered $n$-tuples of outcomes, with the $i$th coordinate lying in the set $R_i$. The sample space $\Omega$ of ${\bar X}$ is the Cartesian product of the $R_i$’s: $\Omega = R_1 \times R_2 \times \cdots \times R_n\ .$ The joint distribution function of ${\bar X}$ is the function which gives the probability of each of the outcomes of ${\bar X}$.
Example $12$:
We now consider the assignment of probabilities in the above example. In the case of the random variable ${\bar X}$, the probability of any outcome $(a_1, a_2, a_3)$ is just the product of the probabilities $P(X_i = a_i)$, for $i = 1, 2, 3$. However, in the case of ${\bar Y}$, the probability assigned to the outcome $(1, 1, 0)$ is not the product of the probabilities $P(Y_1 = 1)$, $P(Y_2 = 1)$, and $P(Y_3 = 0)$. The difference between these two situations is that the value of $X_i$ does not affect the value of $X_j$, if $i \ne j$, while the values of $Y_i$ and $Y_j$ affect one another. For example, if $Y_1 = 1$, then $Y_2$ cannot equal 0. This prompts the next definition.
Definition $4$
The random variables $X_1$, $X_2$, …, $X_n$ are mutually independent if $P(X_1 = r_1, X_2 = r_2, \ldots, X_n = r_n) = P(X_1 = r_1) P(X_2 = r_2) \cdots P(X_n = r_n)$ for any choice of $r_1, r_2, \ldots, r_n$. Thus, if $X_1,~X_2, \ldots,~X_n$ are mutually independent, then the joint distribution function of the random variable ${\bar X} = (X_1, X_2, \ldots, X_n)$ is just the product of the individual distribution functions. When two random variables are mutually independent, we shall say more briefly that they are independent.
Example $13$
In a group of 60 people, the numbers who do or do not smoke and do or do not have cancer are reported as shown in Table $1$.
Table $1$: Smoking and cancer.
|  | Not smoke | Smoke | Total |
|---|---|---|---|
| Not cancer | 40 | 10 | 50 |
| Cancer | 7 | 3 | 10 |
| Total | 47 | 13 | 60 |
Let $\Omega$ be the sample space consisting of these 60 people. A person is chosen at random from the group. Let $C(\omega) = 1$ if this person has cancer and 0 if not, and $S(\omega) = 1$ if this person smokes and 0 if not. Then the joint distribution of $\{C,S\}$ is given in Table $2$.
Table $2$: Joint distribution.
|  | $S = 0$ | $S = 1$ |
|---|---|---|
| $C = 0$ | 40/60 | 10/60 |
| $C = 1$ | 7/60 | 3/60 |
For example $P(C = 0, S = 0) = 40/60$, $P(C = 0, S = 1) = 10/60$, and so forth. The distributions of the individual random variables are called marginal distributions. The marginal distributions of $C$ and $S$ are: $p_C = \pmatrix{ 0 & 1 \cr 50/60 & 10/60 \cr},$
$p_S = \pmatrix{ 0 & 1 \cr 47/60 & 13/60 \cr}.$ The random variables $S$ and $C$ are not independent, since $P(C = 1, S = 1) = \frac 3{60} = .05\ ,$ while $P(C = 1)P(S = 1) = \frac {10}{60} \cdot \frac {13}{60} = .036\ .$ Note that we would also see this from the fact that $P(C = 1|S = 1) = \frac 3{13} = .23\ ,$ while $P(C = 1) = \frac 16 = .167\ .$
Independent Trials Processes
The study of random variables proceeds by considering special classes of random variables. One such class that we shall study is the class of independent trials processes.
Definition $5$
A sequence of random variables $X_1$, $X_2$, …, $X_n$ that are mutually independent and that have the same distribution is called a sequence of independent trials or an independent trials process.
Independent trials processes arise naturally in the following way. We have a single experiment with sample space $R = \{r_1,r_2,\dots,r_s\}$ and a distribution function $m_X = \pmatrix{ r_1 & r_2 & \cdots & r_s \cr p_1 & p_2 & \cdots & p_s\cr}\ .$
We repeat this experiment $n$ times. To describe this total experiment, we choose as sample space the space $\Omega = R \times R \times\cdots\times R,$ consisting of all possible sequences $\omega = (\omega_1,\omega_2,\dots,\omega_n)$ where the value of each $\omega_j$ is chosen from $R$. We assign a distribution function to be the product distribution $m(\omega) = m(\omega_1) \cdots m(\omega_n)\ ,$ with $m(\omega_j) = p_k$ when $\omega_j = r_k$. Then we let $X_j$ denote the $j$th coordinate of the outcome $(r_1, r_2, \ldots, r_n)$. The random variables $X_1$, …, $X_n$ form an independent trials process.
Example $14$
An experiment consists of rolling a die three times. Let $X_i$ represent the outcome of the $i$th roll, for $i = 1, 2, 3$. The common distribution function is $m_i = \pmatrix{ 1 & 2 & 3 & 4 & 5 & 6 \cr 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \cr}.$
The sample space is $R^3 = R \times R \times R$ with $R = \{1,2,3,4,5,6\}$. If $\omega = (1,3,6)$, then $X_1(\omega) = 1$, $X_2(\omega) = 3$, and $X_3(\omega) = 6$ indicating that the first roll was a 1, the second was a 3, and the third was a 6. The probability assigned to any sample point is $m(\omega) = \frac16 \cdot \frac16 \cdot \frac16 = \frac1{216}\ .$
Example $15$
Consider next a Bernoulli trials process with probability $p$ for success on each experiment. Let $X_j(\omega) = 1$ if the $j$th outcome is success and $X_j(\omega) = 0$ if it is a failure. Then $X_1$, $X_2$, …, $X_n$ is an independent trials process. Each $X_j$ has the same distribution function $m_j = \pmatrix{ 0 & 1 \cr q & p \cr},$ where $q = 1 - p$.
If $S_n = X_1 + X_2 +\cdots + X_n$, then $P(S_n = j) = {n \choose j} p^{j} q^{n - j}\ ,$ and $S_n$ has, as distribution, the binomial distribution $b(n,p,j)$.
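As a check, one can simulate a Bernoulli trials process and compare the observed distribution of $S_n$ with the binomial distribution; here is a brief sketch (our code, with illustrative values of $n$ and $p$).

```python
import random
from math import comb

n, p, trials = 10, 0.3, 100000
counts = [0] * (n + 1)
for _ in range(trials):
    s_n = sum(random.random() < p for _ in range(n))   # one run of n trials
    counts[s_n] += 1

for j in range(n + 1):
    observed = counts[j] / trials
    predicted = comb(n, j) * p**j * (1 - p)**(n - j)   # b(n, p, j)
    print(j, round(observed, 4), round(predicted, 4))
```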
Bayes’ Formula
In our examples, we have considered conditional probabilities of the following form: Given the outcome of the second stage of a two-stage experiment, find the probability for an outcome at the first stage. We have remarked that these probabilities are called Bayes probabilities.
We return now to the calculation of more general Bayes probabilities. Suppose we have a set of events $H_1,$ $H_2$, …, $H_m$ that are pairwise disjoint and such that the sample space $\Omega$ satisfies the equation $\Omega = H_1 \cup H_2 \cup\cdots\cup H_m\ .$ We call these events hypotheses. We also have an event $E$ that gives us some information about which hypothesis is correct. We call this event the evidence.
Before we receive the evidence, then, we have a set of prior probabilities $P(H_1)$, $P(H_2)$, …, $P(H_m)$ for the hypotheses. If we know the correct hypothesis, we know the probability for the evidence. That is, we know $P(E|H_i)$ for all $i$. We want to find the probabilities for the hypotheses given the evidence. That is, we want to find the conditional probabilities $P(H_i|E)$. These probabilities are called the posterior probabilities.
To find these probabilities, we write them in the form
$P(H_i|E) = \frac{P(H_i \cap E)}{P(E)}\ . \tag{1}$
We can calculate the numerator from our given information by $P(H_i \cap E) = P(H_i)P(E|H_i)\ . \tag{2}$
Since one and only one of the events $H_1$, $H_2$, …, $H_m$ can occur, we can write the probability of $E$ as
$P(E) = P(H_1 \cap E) + P(H_2 \cap E) + \cdots + P(H_m \cap E)\ .$ Using Equation (2), the above expression can be seen to equal
$P(H_1)P(E|H_1) + P(H_2)P(E|H_2) + \cdots + P(H_m)P(E|H_m) \ . \tag{3}$ Using Equations (1), (2), and (3) yields Bayes’ formula: $P(H_i|E) = \frac{P(H_i)P(E|H_i)}{\sum_{k = 1}^m P(H_k)P(E|H_k)}\ .$
Although this is a very famous formula, we will rarely use it. If the number of hypotheses is small, a simple tree measure calculation is easily carried out, as we have done in our examples. If the number of hypotheses is large, then we should use a computer.
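For the computer, the formula is only a few lines. The sketch below (our code) implements Bayes' formula directly and is checked against the urn example of this section.

```python
def posteriors(priors, likelihoods):
    """Bayes' formula: given P(H_i) and P(E|H_i), return the P(H_i|E)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]   # P(H_i)P(E|H_i)
    total = sum(joint)                                     # this is P(E)
    return [j / total for j in joint]

# Urns I and II from Example 5: equal priors, P(B|I) = 2/5, P(B|II) = 1/2.
print(posteriors([1/2, 1/2], [2/5, 1/2]))   # [0.444..., 0.555...]: P(I|B) = 4/9
```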
Bayes probabilities are particularly appropriate for medical diagnosis. A doctor is anxious to know which of several diseases a patient might have. She collects evidence in the form of the outcomes of certain tests. From statistical studies the doctor can find the prior probabilities of the various diseases before the tests, and the probabilities for specific test outcomes, given a particular disease. What the doctor wants to know is the posterior probability for the particular disease, given the outcomes of the tests.
Example $16$
A doctor is trying to decide if a patient has one of three diseases $d_1$, $d_2$, or $d_3$. Two tests are to be carried out, each of which results in a positive $(+)$ or a negative $(-)$ outcome. There are four possible test patterns $+{}+$, $+{}-$, $-{}+$, and $-{}-$. National records have indicated that, for 10,000 people having one of these three diseases, the distribution of diseases and test results is as in Table $1$.
Table $1$: Diseases data.
| Disease | Number having this disease | ++ | +– | –+ | –– |
|---|---|---|---|---|---|
| $d_1$ | 3215 | 2110 | 301 | 704 | 100 |
| $d_2$ | 2125 | 396 | 132 | 1187 | 410 |
| $d_3$ | 4660 | 510 | 3568 | 73 | 509 |
| Total | 10000 |  |  |  |  |
From this data, we can estimate the prior probabilities for each of the diseases and, given a particular disease, the probability of a particular test outcome. For example, the prior probability of disease $d_1$ may be estimated to be $3215/10{,}000 = .3215$. The probability of the test result $+{}-$, given disease $d_1$, may be estimated to be $301/3215 = .094$.
We can now use Bayes’ formula to compute various posterior probabilities. The computer program Bayes computes these posterior probabilities. The results for this example are shown in Table $2$.
Table $2$: Posterior probabilities.

|  | $d_1$ | $d_2$ | $d_3$ |
|---|---|---|---|
| ++ | .700 | .131 | .169 |
| +– | .075 | .033 | .892 |
| –+ | .358 | .604 | .038 |
| –– | .098 | .403 | .499 |
We note from the outcomes that, when the test result is $++$, the disease $d_1$ has a significantly higher probability than the other two. When the outcome is $+-$, this is true for disease $d_3$. When the outcome is $-+$, this is true for disease $d_2$. Note that these statements might have been guessed by looking at the data. If the outcome is $--$, the most probable cause is $d_3$, but the probability that a patient has $d_2$ is only slightly smaller. If one looks at the data in this case, one can see that it might be hard to guess which of the two diseases $d_2$ and $d_3$ is more likely.
Our final example shows that one has to be careful when the prior probabilities are small.
Example $17$
A doctor gives a patient a test for a particular cancer. Before the results of the test, the only evidence the doctor has to go on is that 1 woman in 1000 has this cancer. Experience has shown that, in 99 percent of the cases in which cancer is present, the test is positive; and in 95 percent of the cases in which it is not present, it is negative. If the test turns out to be positive, what probability should the doctor assign to the event that cancer is present? An alternative form of this question is to ask for the relative frequencies of false positives and cancers.
We are given that $\mbox{prior(cancer)} = .001$ and $\mbox{prior(not\ cancer)} = .999$. We know also that $P(+| \mbox{cancer}) = .99$, $P(-|\mbox{cancer}) = .01$, $P(+|\mbox{not\ cancer}) = .05$, and $P(-|\mbox{not\ cancer}) = .95$. Using this data gives the result shown in Figure $5$.
We see now that the probability of cancer given a positive test has only increased from .001 to .019. While this is nearly a twenty-fold increase, the probability that the patient has the cancer is still small. Stated in another way, among the positive results, 98.1 percent are false positives, and 1.9 percent are cancers. When a group of second-year medical students was asked this question, over half of the students incorrectly guessed the probability to be greater than .5.
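The arithmetic behind these figures is worth carrying out explicitly; here is a sketch (our code, implementing Bayes' formula as above).

```python
def posteriors(priors, likelihoods):
    """Bayes' formula: given P(H_i) and P(E|H_i), return the P(H_i|E)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    return [j / sum(joint) for j in joint]

# Hypotheses: cancer, no cancer.  Evidence: a positive test.
post = posteriors([0.001, 0.999], [0.99, 0.05])
print(post[0])   # about .019 = P(cancer | positive test)
print(post[1])   # about .981: the fraction of positives that are false
```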
Historical Remarks
Conditional probability was used long before it was formally defined. Pascal and Fermat considered the problem of points: given that team A has won $m$ games and team B has won $n$ games, what is the probability that A will win the series? (See Exercises 40-42.) This is clearly a conditional probability problem.
In his book, De Ratiociniis in Ludo Aleae, Huygens gave a number of problems, one of which was:
Three gamblers, A, B and C, take 12 balls of which 4 are white and 8 black. They play with the rules that the drawer is blindfolded, A is to draw first, then B and then C, the winner to be the one who first draws a white ball. What is the ratio of their chances?2
From his answer it is clear that Huygens meant that each ball is replaced after drawing. However, John Hudde, the mayor of Amsterdam, assumed that he meant to sample without replacement and corresponded with Huygens about the difference in their answers. Hacking remarks that “Neither party can understand what the other is doing."3
By the time of de Moivre’s book, The Doctrine of Chances, these distinctions were well understood. De Moivre defined independence and dependence as follows:
Two Events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other.
Two Events are dependent, when they are so connected together as that the Probability of either’s happening is altered by the happening of the other.4
De Moivre used sampling with and without replacement to illustrate that the probability that two independent events both happen is the product of their probabilities, and for dependent events that:
The Probability of the happening of two Events dependent, is the product of the Probability of the happening of one of them, by the Probability which the other will have of happening, when the first is considered as having happened; and the same Rule will extend to the happening of as many Events as may be assigned.5
The formula that we call Bayes’ formula, and the idea of computing the probability of a hypothesis given evidence, originated in a famous essay of Thomas Bayes. Bayes was an ordained minister in Tunbridge Wells near London. His mathematical interests led him to be elected to the Royal Society in 1742, but none of his results were published within his lifetime. The work upon which his fame rests, “An Essay Toward Solving a Problem in the Doctrine of Chances," was published in 1763, three years after his death.6 Bayes reviewed some of the basic concepts of probability and then considered a new kind of inverse probability problem requiring the use of conditional probability.
Bernoulli, in his study of processes that we now call Bernoulli trials, had proven his famous law of large numbers which we will study in Chapter 8. This theorem assured the experimenter that if he knew the probability $p$ for success, he could predict that the proportion of successes would approach this value as he increased the number of experiments. Bernoulli himself realized that in most interesting cases you do not know the value of $p$ and saw his theorem as an important step in showing that you could determine $p$ by experimentation.
To study this problem further, Bayes started by assuming that the probability $p$ for success is itself determined by a random experiment. He assumed in fact that this experiment was such that this value for $p$ is equally likely to be any value between 0 and 1. Without knowing this value we carry out $n$ experiments and observe $m$ successes. Bayes proposed the problem of finding the conditional probability that the unknown probability $p$ lies between $a$ and $b$. He obtained the answer: $P(a \leq p < b \mid m \mbox{ successes in } n \mbox{ trials}) = \frac {\int_a^b x^m(1 - x)^{n - m}\,dx}{\int_0^1 x^m(1 - x)^{n - m}\,dx}\ .$
Bayes clearly wanted to show that the conditional distribution function, given the outcomes of more and more experiments, becomes concentrated around the true value of $p$. Thus, Bayes was trying to solve an inverse problem. The computation of the integrals was too difficult for exact solution except for small values of $m$ and $n$, and so Bayes tried approximate methods. His methods were not very satisfactory and it has been suggested that this discouraged him from publishing his results.
However, his paper was the first in a series of important studies carried out by Laplace, Gauss, and other great mathematicians to solve inverse problems. They studied this problem in terms of errors in measurements in astronomy. If an astronomer were to know the true value of a distance and the nature of the random errors caused by his measuring device he could predict the probabilistic nature of his measurements. In fact, however, he is presented with the inverse problem of knowing the nature of the random errors, and the values of the measurements, and wanting to make inferences about the unknown true value.
As Maistrov remarks, the formula that we have called Bayes’ formula does not appear in his essay. Laplace gave it this name when he studied these inverse problems.7 The computation of inverse probabilities is fundamental to statistics and has led to an important branch of statistics called Bayesian analysis, assuring Bayes eternal fame for his brief essay.
Exercises
Exercise $1$
Assume that $E$ and $F$ are two events with positive probabilities. Show that if $P(E|F) = P(E)$, then $P(F|E) = P(F)$.
Exercise $2$
A coin is tossed three times. What is the probability that exactly two heads occur, given that
1. the first outcome was a head?
2. the first outcome was a tail?
3. the first two outcomes were heads?
4. the first two outcomes were tails?
5. the first outcome was a head and the third outcome was a head?
Exercise $3$
A die is rolled twice. What is the probability that the sum of the faces is greater than 7, given that
1. the first outcome was a 4?
2. the first outcome was greater than 3?
3. the first outcome was a 1?
4. the first outcome was less than 5?
Exercise $4$
A card is drawn at random from a deck of cards. What is the probability that
1. it is a heart, given that it is red?
2. it is higher than a 10, given that it is a heart? (Interpret J, Q, K, A as 11, 12, 13, 14.)
3. it is a jack, given that it is red?
Exercise $5$
A coin is tossed three times. Consider the following events $A$: Heads on the first toss. $B$: Tails on the second. $C$: Heads on the third toss. $D$: All three outcomes the same (HHH or TTT). $E$: Exactly one head turns up.
1. Which of the following pairs of these events are independent? (1) $A$, $B$ (2) $A$, $D$ (3) $A$, $E$ (4) $D$, $E$
2. Which of the following triples of these events are independent? (1) $A$, $B$, $C$ (2) $A$, $B$, $D$ (3) $C$, $D$, $E$
Exercise $6$
From a deck of five cards numbered 2, 4, 6, 8, and 10, respectively, a card is drawn at random and replaced. This is done three times. What is the probability that the card numbered 2 was drawn exactly two times, given that the sum of the numbers on the three draws is 12?
Exercise $7$
A coin is tossed twice. Consider the following events. $A$: Heads on the first toss. $B$: Heads on the second toss. $C$: The two tosses come out the same.
1. Show that $A$, $B$, $C$ are pairwise independent but not independent.
2. Show that $C$ is independent of $A$ and $B$ but not of $A \cap B$.
Exercise $8$
Let $\Omega = \{a,b,c,d,e,f\}$. Assume that $m(a) = m(b) = 1/8$ and $m(c) = m(d) = m(e) = m(f) = 3/16$. Let $A$, $B$, and $C$ be the events $A = \{d,e,a\}$, $B = \{c,e,a\}$, $C = \{c,d,a\}$. Show that $P(A \cap B \cap C) = P(A)P(B)P(C)$ but no two of these events are independent.
Exercise $9$
What is the probability that a family of two children has
1. two boys given that it has at least one boy?
2. two boys given that the first child is a boy?
Exercise $10$
In Example 4.2, we used the Life Table (see Appendix C) to compute a conditional probability. The number 93,753 in the table, corresponding to 40-year-old males, means that of all the males born in the United States in 1950, 93.753% were alive in 1990. Is it reasonable to use this as an estimate for the probability of a male, born this year, surviving to age 40?
Exercise $11$
Simulate the Monty Hall problem. Carefully state any assumptions that you have made when writing the program. Which version of the problem do you think that you are simulating?
Exercise $12$
In Example 4.17, how large must the prior probability of cancer be to give a posterior probability of .5 for cancer given a positive test?
Exercise $13$
Two cards are drawn from a bridge deck. What is the probability that the second card drawn is red?
Exercise $14$
If $P(\tilde B) = 1/4$ and $P(A|B) = 1/2$, what is $P(A \cap B)$?
Exercise $15$
1. What is the probability that your bridge partner has exactly two aces, given that she has at least one ace?
2. What is the probability that your bridge partner has exactly two aces, given that she has the ace of spades?
Exercise $16$
Prove that for any three events $A$, $B$, $C$, each having positive probability, and with the property that $P(A \cap B) > 0$, $P(A \cap B \cap C) = P(A)P(B|A)P(C|A \cap B)\ .$
Exercise $17$
Prove that if $A$ and $B$ are independent so are
1. $A$ and $\tilde B$.
2. $\tilde A$ and $\tilde B$.
Exercise $18$
A doctor assumes that a patient has one of three diseases $d_1$, $d_2$, or $d_3$. Before any test, he assumes an equal probability for each disease. He carries out a test that will be positive with probability .8 if the patient has $d_1$, .6 if he has disease $d_2$, and .4 if he has disease $d_3$. Given that the outcome of the test was positive, what probabilities should the doctor now assign to the three possible diseases?
Exercise $19$
In a poker hand, John has a very strong hand and bets 5 dollars. The probability that Mary has a better hand is .04. If Mary had a better hand she would raise with probability .9, but with a poorer hand she would only raise with probability .1. If Mary raises, what is the probability that she has a better hand than John does?
Exercise $20$
The Polya urn model for contagion is as follows: We start with an urn which contains one white ball and one black ball. At each second we choose a ball at random from the urn and replace this ball and add one more of the color chosen. Write a program to simulate this model, and see if you can make any predictions about the proportion of white balls in the urn after a large number of draws. Is there a tendency to have a large fraction of balls of the same color in the long run?
Exercise $21$
It is desired to find the probability that in a bridge deal each player receives an ace. A student argues as follows. It does not matter where the first ace goes. The second ace must go to one of the other three players and this occurs with probability 3/4. Then the next must go to one of two, an event of probability 1/2, and finally the last ace must go to the player who does not have an ace. This occurs with probability 1/4. The probability that all these events occur is the product $(3/4)(1/2)(1/4) = 3/32$. Is this argument correct?
Exercise $22$
One coin in a collection of 65 has two heads. The rest are fair. If a coin, chosen at random from the lot and then tossed, turns up heads 6 times in a row, what is the probability that it is the two-headed coin?
Exercise $23$
You are given two urns and fifty balls. Half of the balls are white and half are black. You are asked to distribute the balls in the urns with no restriction placed on the number of either type in an urn. How should you distribute the balls in the urns to maximize the probability of obtaining a white ball if an urn is chosen at random and a ball drawn out at random? Justify your answer.
Exercise $24$
A fair coin is thrown $n$ times. Show that the conditional probability of a head on any specified trial, given a total of $k$ heads over the $n$ trials, is $k/n$ $(k > 0)$.
Exercise $25$
(Johnsonbough8) A coin with probability $p$ for heads is tossed $n$ times. Let $E$ be the event “a head is obtained on the first toss" and $F_k$ the event “exactly $k$ heads are obtained." For which pairs $(n,k)$ are $E$ and $F_k$ independent?
Exercise $26$
Suppose that $A$ and $B$ are events such that $P(A|B) = P(B|A)$ and $P(A \cup B) = 1$ and $P(A \cap B) > 0$. Prove that $P(A) > 1/2$.
Exercise $27$
(Chung9) In London, half of the days have some rain. The weather forecaster is correct 2/3 of the time, i.e., the probability that it rains, given that she has predicted rain, and the probability that it does not rain, given that she has predicted that it won’t rain, are both equal to 2/3. When rain is forecast, Mr. Pickwick takes his umbrella. When rain is not forecast, he takes it with probability 1/3. Find
1. the probability that Pickwick has no umbrella, given that it rains.
2. the probability that he brings his umbrella, given that it doesn’t rain.
Exercise $28$
Probability theory was used in a famous court case: 10 In this case a purse was snatched from an elderly person in a Los Angeles suburb. A couple seen running from the scene were described as a black man with a beard and a mustache and a blond girl with hair in a ponytail. Witnesses said they drove off in a partly yellow car. Malcolm and Janet Collins were arrested. He was black and though clean shaven when arrested had evidence of recently having had a beard and a mustache. She was blond and usually wore her hair in a ponytail. They drove a partly yellow Lincoln. The prosecution called a professor of mathematics as a witness who suggested that a conservative set of probabilities for the characteristics noted by the witnesses would be as shown in Table $5$.
Table $5$: Collins case probabilities
man with mustache 1/4
girl with blond hair 1/3
girl with ponytail 1/10
black man with beard 1/10
interracial couple in a car 1/1000
partly yellow car 1/10
The prosecution then argued that the probability that all of these characteristics are met by a randomly chosen couple is the product of the probabilities or 1/12,000,000, which is very small. He claimed this was proof beyond a reasonable doubt that the defendants were guilty. The jury agreed and handed down a verdict of guilty of second-degree robbery.
If you were the lawyer for the Collins couple how would you have countered the above argument? (The appeal of this case is discussed in Exercise 9.2.23.)
Exercise $29$
A student is applying to Harvard and Dartmouth. He estimates that he has a probability of .5 of being accepted at Dartmouth and .3 of being accepted at Harvard. He further estimates the probability that he will be accepted by both is .2. What is the probability that he is accepted by Dartmouth if he is accepted by Harvard? Is the event “accepted at Harvard" independent of the event “accepted at Dartmouth"?
Exercise $30$
Luxco, a wholesale lightbulb manufacturer, has two factories. Factory A sells bulbs in lots that consist of 1000 regular and 2000 softglow bulbs each. Random sampling has shown that on the average there tend to be about 2 bad regular bulbs and 11 bad softglow bulbs per lot. At factory B the lot size is reversed—there are 2000 regular and 1000 softglow per lot—and there tend to be 5 bad regular and 6 bad softglow bulbs per lot.
The manager of factory A asserts, “We’re obviously the better producer; our bad bulb rates are .2 percent and .55 percent compared to B’s .25 percent and .6 percent. We’re better at both regular and softglow bulbs by half of a tenth of a percent each."
“Au contraire," counters the manager of B, “each of our 3000 bulb lots contains only 11 bad bulbs, while A’s 3000 bulb lots contain 13. So our .37 percent bad bulb rate beats their .43 percent."
Who is right?
Exercise $31$
Using the Life Table for 1981 given in Appendix C, find the probability that a male of age 60 in 1981 lives to age 80. Find the same probability for a female.
Exercise $32$
1. There has been a blizzard and Helen is trying to drive from Woodstock to Tunbridge, which are connected like the top graph in Figure 4.51. Here $p$ and $q$ are the probabilities that the two roads are passable. What is the probability that Helen can get from Woodstock to Tunbridge?
2. Now suppose that Woodstock and Tunbridge are connected like the middle graph in Figure 4.51. What now is the probability that she can get from $W$ to $T$? Note that if we think of the roads as being components of a system, then in (a) and (b) we have computed the reliability of a system whose components are (a) in series and (b) in parallel.
3. Now suppose $W$ and $T$ are connected like the bottom graph in Figure 4.51. Find the probability of Helen’s getting from $W$ to $T$. Hint: If the road from $C$ to $D$ is impassable, it might as well not be there at all; if it is passable, then figure out how to use part (b) twice.
Exercise $33$
Let $A_1$, $A_2$, and $A_3$ be events, and let $B_i$ represent either $A_i$ or its complement $\tilde A_i$. Then there are eight possible choices for the triple $(B_1, B_2, B_3)$. Prove that the events $A_1$, $A_2$, $A_3$ are independent if and only if $P(B_1 \cap B_2 \cap B_3) = P(B_1)P(B_2)P(B_3)\ ,$ for all eight of the possible choices for the triple $(B_1, B_2, B_3)$.
Exercise $34$
Four women, A, B, C, and D, check their hats, and the hats are returned in a random manner. Let $\Omega$ be the set of all possible permutations of A, B, C, D. Let $X_j = 1$ if the $j$th woman gets her own hat back and 0 otherwise. What is the distribution of $X_j$? Are the $X_i$’s mutually independent?
Exercise $35$
A box has numbers from 1 to 10. A number is drawn at random. Let $X_1$ be the number drawn. This number is replaced, and the ten numbers mixed. A second number $X_2$ is drawn. Find the distributions of $X_1$ and $X_2$. Are $X_1$ and $X_2$ independent? Answer the same questions if the first number is not replaced before the second is drawn.
Exercise $36$
A die is thrown twice. Let $X_1$ and $X_2$ denote the outcomes. Define $X = \min(X_1, X_2)$. Find the distribution of $X$.
Exercise $37$
Given that $P(X = a) = r$, $P(\max(X,Y) = a) = s$, and $P(\min(X,Y) = a) = t$, show that you can determine $u = P(Y = a)$ in terms of $r$, $s$, and $t$.
Exercise $38$
A fair coin is tossed three times. Let $X$ be the number of heads that turn up on the first two tosses and $Y$ the number of heads that turn up on the third toss. Give the distribution of
1. the random variables $X$ and $Y$.
2. the random variable $Z = X + Y$.
3. the random variable $W = X - Y$.
Exercise $39$
Assume that the random variables $X$ and $Y$ have the joint distribution given in Table $6$.
Table $6$: Joint distribution.

                  Y
           -1     0      1      2
 X   -1     0    1/36   1/6    1/12
      0    1/18   0     1/18    0
      1     0    1/36   1/6    1/12
      2    1/12   0     1/12   1/6
1. What is $P(X \geq 1\ \mbox {and\ } Y \leq 0)$?
2. What is the conditional probability that $Y \leq 0$ given that $X = 2$?
3. Are $X$ and $Y$ independent?
4. What is the distribution of $Z = XY$?
Exercise $40$
In the problem of points, discussed in the historical remarks in Section 3.2, two players, A and B, play a series of points in a game with player A winning each point with probability $p$ and player B winning each point with probability $q = 1 - p$. The first player to win $N$ points wins the game. Assume that $N = 3$. Let $X$ be a random variable that has the value 1 if player A wins the series and 0 otherwise. Let $Y$ be a random variable with value the number of points played in a game. Find the distribution of $X$ and $Y$ when $p = 1/2$. Are $X$ and $Y$ independent in this case? Answer the same questions for the case $p = 2/3$.
Exercise $41$
The letters between Pascal and Fermat, which are often credited with having started probability theory, dealt mostly with the problem of points described in Exercise 5.1.11. Pascal and Fermat considered the problem of finding a fair division of stakes if the game must be called off when the first player has won $r$ games and the second player has won $s$ games, with $r < N$ and $s < N$. Let $P(r,s)$ be the probability that player A wins the game if he has already won $r$ points and player B has won $s$ points. Then
1. $P(r,N) = 0$ if $r < N$,
2. $P(N,s) = 1$ if $s < N$,
3. $P(r,s) = pP(r + 1,s) + qP(r,s + 1)$ if $r < N$ and $s < N$;
and (1), (2), and (3) determine $P(r,s)$ for $r \leq N$ and $s \leq N$. Pascal used these facts to find $P(r,s)$ by working backward: He first obtained $P(N - 1,j)$ for $j = N - 1$, $N - 2$, …, 0; then, from these values, he obtained $P(N - 2,j)$ for $j = N - 1$, $N - 2$, …, 0 and, continuing backward, obtained all the values $P(r,s)$. Write a program to compute $P(r,s)$ for given $N$, $a$, $b$, and $p$. Hint: Follow Pascal and you will be able to run $N = 100$; use recursion and you will not be able to run $N = 20$.
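A minimal Python sketch of Pascal's backward pass (the function name and table layout are mine, not part of the original exercise; evaluating the table at the starting score gives the requested value):

def points_prob(N, p):
    # P[r][s] = probability that player A wins, given the score (r, s).
    q = 1 - p
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for s in range(N):
        P[N][s] = 1.0            # A already has N points
    for r in range(N):
        P[r][N] = 0.0            # B already has N points
    # Backward pass: each entry needs only P[r+1][s] and P[r][s+1].
    for r in range(N - 1, -1, -1):
        for s in range(N - 1, -1, -1):
            P[r][s] = p * P[r + 1][s] + q * P[r][s + 1]
    return P

print(points_prob(3, 0.5)[0][0])   # 0.5, by symmetry

Since the table is filled once in $O(N^2)$ steps, $N = 100$ runs instantly, while a naive recursion recomputes the same entries exponentially often.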
Exercise $42$
Fermat solved the problem of points (see Exercise 5.11) as follows: He realized that the problem was difficult because the possible ways the play might go are not equally likely. For example, when the first player needs two more games and the second needs three to win, two possible ways the series might go for the first player are WLW and LWLW. These sequences are not equally likely. To avoid this difficulty, Fermat extended the play, adding fictitious plays so that the series went the maximum number of games needed (four in this case). He obtained equally likely outcomes and used, in effect, the Pascal triangle to calculate $P(r,s)$. Show that this leads to a formula for $P(r,s)$ even for the case $p \ne 1/2$.
Exercise $43$
The Yankees are playing the Dodgers in a world series. The Yankees win each game with probability .6. What is the probability that the Yankees win the series? (The series is won by the first team to win four games.)
Exercise $44$
C. L. Anderson11 has used Fermat’s argument for the problem of points to prove the following result due to J. G. Kingston. You are playing the game of points (see Exercise 40) but, at each point, when you serve you win with probability $p$, and when your opponent serves you win with probability $\bar{p}$. You will serve first, but you can choose one of the following two conventions for serving: for the first convention you alternate service (tennis), and for the second the person serving continues to serve until he loses a point and then the other player serves (racquetball). The first player to win $N$ points wins the game. The problem is to show that the probability of winning the game is the same under either convention.
1. Show that, under either convention, you will serve at most $N$ points and your opponent at most $N - 1$ points.
2. Extend the number of points to $2N - 1$ so that you serve $N$ points and your opponent serves $N - 1$. For example, you serve any additional points necessary to make $N$ serves and then your opponent serves any additional points necessary to make him serve $N - 1$ points. The winner is now the person, in the extended game, who wins the most points. Show that playing these additional points has not changed the winner.
3. Show that (a) and (b) prove that you have the same probability of winning the game under either convention.
Exercise $45$
In the previous problem, assume that $p = 1 - \bar{p}$.
1. Show that under either service convention, the first player will win more often than the second player if and only if $p > .5$.
2. In volleyball, a team can only win a point while it is serving. Thus, any individual “play" either ends with a point being awarded to the serving team or with the service changing to the other team. The first team to win $N$ points wins the game. (We ignore here the additional restriction that the winning team must be ahead by at least two points at the end of the game.) Assume that each team has the same probability of winning the play when it is serving, i.e., that $p = 1 - \bar{p}$. Show that in this case, the team that serves first will win more than half the time, as long as $p > 0$. (If $p = 0$, then the game never ends.) Hint: Define $p'$ to be the probability that a team wins the next point, given that it is serving. If we write $q = 1 - p$, then one can show that $p' = \frac p{1-q^2}\ .$ If one now considers this game in a slightly different way, one can see that the second service convention in the preceding problem can be used, with $p$ replaced by $p'$.
Exercise $46$
A poker hand consists of 5 cards dealt from a deck of 52 cards. Let $X$ and $Y$ be, respectively, the number of aces and kings in a poker hand. Find the joint distribution of $X$ and $Y$.
Exercise $47$
Let $X_1$ and $X_2$ be independent random variables and let $Y_1 = \phi_1(X_1)$ and $Y_2 = \phi_2(X_2)$.
1. Show that $P(Y_1 = r, Y_2 = s) = \sum_{\phi_1(a) = r \atop \phi_2(b) = s} P(X_1 = a, X_2 = b)\ .$
2. Using (a), show that $P(Y_1 = r, Y_2 = s) = P(Y_1 = r)P(Y_2 = s)$ so that $Y_1$ and $Y_2$ are independent.
Exercise $48$
Let $\Omega$ be the sample space of an experiment. Let $E$ be an event with $P(E) > 0$ and define $m_E(\omega)$ by $m_E(\omega) = m(\omega|E)$. Prove that $m_E(\omega)$ is a distribution function on $E$, that is, that $m_E(\omega) \geq 0$ and that $\sum_{\omega\in\Omega} m_E(\omega) = 1$. The function $m_E$ is called the conditional distribution given $E$.
Exercise $49$
You are given two urns each containing two biased coins. The coins in urn I come up heads with probability $p_1$, and the coins in urn II come up heads with probability $p_2 \ne p_1$. You are given a choice of (a) choosing an urn at random and tossing the two coins in this urn or (b) choosing one coin from each urn and tossing these two coins. You win a prize if both coins turn up heads. Show that you are better off selecting choice (a).
Exercise $50$
Prove that, if $A_1$, $A_2$, …, $A_n$ are independent events defined on a sample space $\Omega$ and if $0 < P(A_j) < 1$ for all $j$, then $\Omega$ must have at least $2^n$ points.
Exercise $51$
Prove that if $P(A|C) \geq P(B|C) \mbox{\,\,and\,\,} P(A|\tilde C) \geq P(B|\tilde C)\ ,$ then $P(A) \geq P(B)$.
Exercise $52$
A coin is in one of $n$ boxes. The probability that it is in the $i$th box is $p_i$. If you search in the $i$th box and it is there, you find it with probability $a_i$. Show that the probability $p$ that the coin is in the $j$th box, given that you have looked in the $i$th box and not found it, is $p = \left \{ \matrix{ p_j/(1-a_ip_i),&\,\,\, \mbox{if} \,\,\, j \ne i,\cr (1 - a_i)p_i/(1 - a_ip_i),&\,\,\,\mbox{if} \,\, j = i.\cr}\right.$
Exercise $53$
George Wolford has suggested the following variation on the Linda problem (see Exercise 1.2.25). The registrar is carrying John and Mary’s registration cards and drops them in a puddle. When he picks them up he cannot read the names but on the first card he picked up he can make out Mathematics 23 and Government 35, and on the second card he can make out only Mathematics 23. He asks you if you can help him decide which card belongs to Mary. You know that Mary likes government but does not like mathematics. You know nothing about John and assume that he is just a typical Dartmouth student. From this you estimate: $\begin{array}{ll} P(\mbox{Mary takes Government 35}) &= .5\ , \\ P(\mbox{Mary takes Mathematics 23}) &= .1\ , \\ P(\mbox{John takes Government 35}) &= .3\ , \\ P(\mbox{John takes Mathematics 23}) &= .2\ . \end{array}$ Assume that their choices for courses are independent events. Show that the card with Mathematics 23 and Government 35 showing is more likely to be Mary’s than John’s. The conjunction fallacy referred to in the Linda problem would be to assume that the event “Mary takes Mathematics 23 and Government 35" is more likely than the event “Mary takes Mathematics 23." Why are we not making this fallacy here?
Exercise $54$
(Suggested by Eisenberg and Ghosh12) A deck of playing cards can be described as a Cartesian product $\mbox{Deck} = \mbox{Suit} \times \mbox{Rank}\ ,$ where $\mbox{Suit} = \{\clubsuit,\diamondsuit,\heartsuit,\spadesuit\}$ and $\mbox{Rank} = \{2,3,\dots,10,{\mbox J},{\mbox Q},{\mbox K},{\mbox A}\}$. This just means that every card may be thought of as an ordered pair like $(\diamondsuit,2)$. By a suit event we mean any event $A$ contained in Deck which is described in terms of Suit alone. For instance, if $A$ is “the suit is red," then $A = \{\diamondsuit,\heartsuit\} \times \mbox{Rank}\ ,$ so that $A$ consists of all cards of the form $(\diamondsuit,r)$ or $(\heartsuit,r)$ where $r$ is any rank. Similarly, a rank event is any event described in terms of rank alone.
1. Show that if $A$ is any suit event and $B$ any rank event, then $A$ and $B$ are independent. (We can express this briefly by saying that suit and rank are independent.)
2. Throw away the ace of spades. Show that now no nontrivial (i.e., neither empty nor the whole space) suit event $A$ is independent of any nontrivial rank event $B$. Hint: Here independence comes down to $c/51 = (a/51) \cdot (b/51)\ ,$ where $a$, $b$, $c$ are the respective sizes of $A$, $B$ and $A \cap B$. It follows that 51 must divide $ab$, hence that 3 must divide one of $a$ and $b$, and 17 the other. But the possible sizes for suit and rank events preclude this.
3. Show that the deck in (b) nevertheless does have pairs $A$, $B$ of nontrivial independent events. Hint: Find 2 events $A$ and $B$ of sizes 3 and 17, respectively, which intersect in a single point.
4. Add a joker to a full deck. Show that now there is no pair $A$, $B$ of nontrivial independent events. Hint: See the hint in (b); 53 is prime.
The following problems are suggested by Stanley Gudder in his article “Do Good Hands Attract?"13 He says that event $A$ attracts event $B$ if $P(B|A) > P(B)$ and repels $B$ if $P(B|A) < P(B)$.
Exercise $55$
Let $R_i$ be the event that the $i$th player in a poker game has a royal flush. Show that a royal flush (A,K,Q,J,10 of one suit) attracts another royal flush, that is $P(R_2|R_1) > P(R_2)$. Show that a royal flush repels full houses.
Exercise $56$
Prove that $A$ attracts $B$ if and only if $B$ attracts $A$. Hence we can say that $A$ and $B$ are mutually attractive if $A$ attracts $B$.
Exercise $57$
Prove that $A$ neither attracts nor repels $B$ if and only if $A$ and $B$ are independent.
Exercise $58$
Prove that $A$ and $B$ are mutually attractive if and only if $P(B|A) > P(B|\tilde A)$.
Exercise $59$
Prove that if $A$ attracts $B$, then $A$ repels $\tilde B$.
Exercise $60$
Prove that if $A$ attracts both $B$ and $C$, and $A$ repels $B \cap C$, then $A$ attracts $B \cup C$. Is there any example in which $A$ attracts both $B$ and $C$ and repels $B \cup C$?
Exercise $61$
Prove that if $B_1$, $B_2$, …, $B_n$ are mutually disjoint and collectively exhaustive, and if $A$ attracts some $B_i$, then $A$ must repel some $B_j$.
Exercise $62$
1. Suppose that you are looking in your desk for a letter from some time ago. Your desk has eight drawers, and you assess the probability that it is in any particular drawer at 10% (so there is a 20% chance that it is not in the desk at all). Suppose now that you start searching systematically through your desk, one drawer at a time. In addition, suppose that you have not found the letter in the first $i$ drawers, where $0 \le i \le 7$. Let $p_i$ denote the probability that the letter will be found in the next drawer, and let $q_i$ denote the probability that the letter will be found in some subsequent drawer (both $p_i$ and $q_i$ are conditional probabilities, since they are based upon the assumption that the letter is not in the first $i$ drawers). Show that the $p_i$’s increase and the $q_i$’s decrease. (This problem is from Falk et al.14)
2. The following data appeared in an article in the Wall Street Journal.15 For the ages 20, 30, 40, 50, and 60, the probability of a woman in the U.S. developing cancer in the next ten years is 0.5%, 1.2%, 3.2%, 6.4%, and 10.8%, respectively. At the same set of ages, the probability of a woman in the U.S. eventually developing cancer is 39.6%, 39.5%, 39.1%, 37.5%, and 34.2%, respectively. Do you think that the problem in part (a) gives an explanation for these data?
Exercise $63$
Here are two variations of the Monty Hall problem that are discussed by Granberg.16
1. Suppose that everything is the same except that Monty forgot to find out in advance which door has the car behind it. In the spirit of “the show must go on," he makes a guess at which of the two doors to open and gets lucky, opening a door behind which stands a goat. Now should the contestant switch?
2. You have observed the show for a long time and found that the car is put behind door A 45% of the time, behind door B 40% of the time and behind door C 15% of the time. Assume that everything else about the show is the same. Again you pick door A. Monty opens a door with a goat and offers to let you switch. Should you? Suppose you knew in advance that Monty was going to give you a chance to switch. Should you have initially chosen door A? | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/04%3A_Conditional_Probability/4.01%3A_Discrete_Conditional_Probability.txt |
In situations where the sample space is continuous we will follow the same procedure as in the previous section. Thus, for example, if $X$ is a continuous random variable with density function $f(x)$, and if $E$ is an event with positive probability, we define a conditional density function by the formula $f(x|E) = \left \{ \matrix{ f(x)/P(E), & \mbox{if} \,\,x \in E, \cr 0, & \mbox{if}\,\,x \not \in E. \cr}\right.$ Then for any event $F$, we have $P(F|E) = \int_F f(x|E)\,dx\ .$ The expression $P(F|E)$ is called the conditional probability of $F$ given $E$. As in the previous section, it is easy to obtain an alternative expression for this probability: $P(F|E) = \int_F f(x|E)\,dx = \int_{E\cap F} \frac {f(x)}{P(E)}\,dx = \frac {P(E\cap F)}{P(E)}\ .$
We can think of the conditional density function as being 0 except on $E$, and normalized to have integral 1 over $E$. Note that if the original density is a uniform density corresponding to an experiment in which all events of equal size are equally likely, then the same will be true for the conditional density.
Example $1$:
In the spinner experiment (cf. Example 2.1.1), suppose we know that the spinner has stopped with head in the upper half of the circle, $0 \leq x \leq 1/2$. What is the probability that $1/6 \leq x \leq 1/3$?
Solution
Here $E = [0,1/2]$, $F = [1/6,1/3]$, and $F \cap E = F$. Hence \begin{aligned} P(F|E) &= \frac {P(F \cap E)}{P(E)} \\ &= \frac {1/6}{1/2} \\ &= \frac 13\ ,\end{aligned} which is reasonable, since $F$ is 1/3 the size of $E$. The conditional density function here is given by
$f(x|E) = \left \{ \matrix{ 2, & \mbox{if}\,\,\, 0 \leq x < 1/2, \cr 0, & \mbox{if}\,\,\, 1/2 \leq x < 1.\cr}\right.$ Thus the conditional density function is nonzero only on $[0,1/2]$, and is uniform there.
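This answer is easy to check by simulation; here is a minimal sketch, with Python's random.random standing in for the function rnd:

import random

trials = 100_000
in_E = in_F_and_E = 0
for _ in range(trials):
    x = random.random()            # one spin of the spinner
    if x < 1/2:                    # event E: upper half of the circle
        in_E += 1
        if 1/6 <= x <= 1/3:        # event F
            in_F_and_E += 1
print(in_F_and_E / in_E)           # should be close to 1/3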
Example $2$:
In the dart game (cf. Example 2.2.2), suppose we know that the dart lands in the upper half of the target. What is the probability that its distance from the center is less than 1/2?
Solution
Here $E = \{\,(x,y) : y \geq 0\,\}$, and $F = \{\,(x,y) : x^2 + y^2 < (1/2)^2\,\}$. Hence, \begin{aligned} P(F|E) &= \frac {P(F \cap E)}{P(E)} = \frac {(1/\pi)[(1/2)(\pi/4)]} {(1/\pi)(\pi/2)} \\ &= 1/4\ .\end{aligned} Here again, the size of $F \cap E$ is 1/4 the size of $E$. The conditional density function is $f((x,y)|E) = \left \{ \matrix{ f(x,y)/P(E) = 2/\pi, &\mbox{if}\,\,\,(x,y) \in E, \cr 0, &\mbox{if}\,\,\,(x,y) \not \in E.\cr}\right.$
Example $3$:
We return to the exponential density (cf. Example 2.2.7). We suppose that we are observing a lump of plutonium-239. Our experiment consists of waiting for an emission, then starting a clock, and recording the length of time $X$ that passes until the next emission. Experience has shown that $X$ has an exponential density with some parameter $\lambda$, which depends upon the size of the lump. Suppose that when we perform this experiment, we notice that the clock reads $r$ seconds, and is still running. What is the probability that there is no emission in a further $s$ seconds?
Solution
Let $G(t)$ be the probability that the next particle is emitted after time $t$. Then \begin{aligned} G(t) &= \int_t^\infty \lambda e^{-\lambda x}\,dx \\ &= \left.-e^{-\lambda x}\right|_t^\infty = e^{-\lambda t}\ .\end{aligned}
Let $E$ be the event “the next particle is emitted after time $r$" and $F$ the event “the next particle is emitted after time $r + s$." Then \begin{aligned} P(F|E) &= \frac {P(F \cap E)}{P(E)} \\ &= \frac {G(r + s)}{G(r)} \\ &= \frac {e^{-\lambda(r + s)}}{e^{-\lambda r}} \\ &= e^{-\lambda s}\ .\end{aligned}
This tells us the rather surprising fact that the probability that we have to wait $s$ seconds more for an emission, given that there has been no emission in $r$ seconds, is independent of the time $r$. This property (called the memoryless property) was introduced in Example 2.17. When trying to model various phenomena, this property is helpful in deciding whether the exponential density is appropriate.
The fact that the exponential density is memoryless means that it is reasonable to assume that if one comes upon a lump of a radioactive isotope at some random time, then the amount of time until the next emission has an exponential density with the same parameter as the time between emissions. A well-known example, known as the “bus paradox," replaces the emissions by buses. The apparent paradox arises from the following two facts: 1) If you know that, on the average, the buses come by every 30 minutes, then if you come to the bus stop at a random time, you should only have to wait, on the average, for 15 minutes for a bus, and 2) Since the buses’ arrival times are being modelled by the exponential density, then no matter when you arrive, you will have to wait, on the average, for 30 minutes for a bus.
The reader can now see that in Exercises 2.2.9, 2.2.10, and 2.2.11, we were asking for simulations of conditional probabilities, under various assumptions on the distribution of the interarrival times. If one makes a reasonable assumption about this distribution, such as the one in Exercise 2.2.10, then the average waiting time is more nearly one-half the average interarrival time.
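The bus paradox itself is easy to simulate. The sketch below assumes exponential interarrival times with mean 30 minutes and a passenger who arrives at a uniformly chosen moment; the average wait comes out near 30 minutes, not 15:

import bisect
import random

MEAN = 30.0                        # average minutes between buses
T = 1_000_000.0                    # length of the simulated schedule

# One long schedule of bus arrival times with exponential gaps.
arrivals, t = [], 0.0
while t < T:
    t += random.expovariate(1 / MEAN)
    arrivals.append(t)

trials, total_wait = 10_000, 0.0
for _ in range(trials):
    you = random.uniform(0, T / 2)    # arrive at a random moment
    nxt = arrivals[bisect.bisect_left(arrivals, you)]
    total_wait += nxt - you           # wait for the next bus
print(total_wait / trials)            # close to 30, not 15

Replacing the exponential gaps with, say, uniform gaps on $[0, 60]$ brings the average wait closer to half the interarrival mean, as in Exercise 2.2.10.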
Independent Events
If $E$ and $F$ are two events with positive probability in a continuous sample space, then, as in the case of discrete sample spaces, we define $E$ and $F$ to be independent if $P(E|F) = P(E)$ and $P(F|E) = P(F)$. As before, each of the above equations implies the other, so that to see whether two events are independent, only one of these equations must be checked. It is also the case that, if $E$ and $F$ are independent, then $P(E \cap F) = P(E)P(F)$.
Example $4$:
In the dart game (see Example 4.1.12), let $E$ be the event that the dart lands in the upper half of the target ($y \geq 0$) and $F$ the event that the dart lands in the right half of the target ($x \geq 0$). Then $P(E \cap F)$ is the probability that the dart lies in the first quadrant of the target, and
\begin{aligned} P(E \cap F) &= \frac 1\pi \int_{E \cap F} 1\,dxdy \\ &= \mbox{Area}\,(E\cap F) \\ &= \mbox{Area}\,(E)\,\mbox{Area}\,(F) \\ &= \left(\frac 1\pi \int_E 1\,dxdy\right) \left(\frac 1\pi \int_F 1\,dxdy\right) \\ &= P(E)P(F)\end{aligned}
so that $E$ and $F$ are independent. What makes this work is that the events $E$ and $F$ are described by restricting different coordinates. This idea is made more precise below.
Joint Density and Cumulative Distribution Functions
In a manner analogous with discrete random variables, we can define joint density functions and cumulative distribution functions for multi-dimensional continuous random variables.
Definition $5$
Let $X_1,~X_2, \ldots,~X_n$ be continuous random variables associated with an experiment, and let ${\bar X} = (X_1,~X_2, \ldots,~X_n)$. Then the joint cumulative distribution function of ${\bar X}$ is defined by $F(x_1, x_2, \ldots, x_n) = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n)\ .$ The joint density function of ${\bar X}$ satisfies the following equation: $F(x_1, x_2, \ldots, x_n) = \int_{-\infty}^{x_1} \int_{-\infty}^{x_2} \cdots \int_{-\infty}^{x_n} f(t_1, t_2, \ldots t_n)\,dt_ndt_{n-1}\ldots dt_1.$
It is straightforward to show that, in the above notation,
$f(x_1, x_2, \ldots, x_n) = \frac{\partial^n F(x_1, x_2, \ldots, x_n)}{\partial x_1\,\partial x_2 \cdots \partial x_n}\ .$
Independent Random Variables
As with discrete random variables, we can define mutual independence of continuous random variables.
Definition $6$
Let $X_1$, $X_2$, …, $X_n$ be continuous random variables with cumulative distribution functions $F_1(x),~F_2(x), \ldots,~F_n(x)$. Then these random variables are mutually independent if $F(x_1, x_2, \ldots, x_n) = F_1(x_1)F_2(x_2) \cdots F_n(x_n)$ for any choice of $x_1, x_2, \ldots, x_n$.
Thus, if $X_1,~X_2, \ldots,~X_n$ are mutually independent, then the joint cumulative distribution function of the random variable ${\bar X} = (X_1, X_2, \ldots, X_n)$ is just the product of the individual cumulative distribution functions. When two random variables are mutually independent, we shall say more briefly that they are independent.
Using Equation 4.4, the following theorem can easily be shown to hold for mutually independent continuous random variables.
Theorem $2$
Let $X_1$, $X_2$, …, $X_n$ be continuous random variables with density functions $f_1(x),~f_2(x), \ldots,~f_n(x)$. Then these random variables are mutually independent if and only if $f(x_1, x_2, \ldots, x_n) = f_1(x_1)f_2(x_2) \cdots f_n(x_n)$ for any choice of $x_1, x_2, \ldots, x_n$.
Let’s look at some examples.
Example $5$:
In this example, we define three random variables, $X_1,\ X_2$, and $X_3$. We will show that $X_1$ and $X_2$ are independent, and that $X_1$ and $X_3$ are not independent. Choose a point $\omega = (\omega_1,\omega_2)$ at random from the unit square. Set $X_1 = \omega_1^2$, $X_2 = \omega_2^2$, and $X_3 = \omega_1 + \omega_2$. Find the joint distributions $F_{12}(r_1,r_2)$ and $F_{23}(r_2,r_3)$.
We have already seen (see Example 2.13) that \begin{aligned} F_1(r_1) &= P(-\infty < X_1 \leq r_1) \\ &= \sqrt{r_1}, \qquad \mbox{if} \,\,0 \leq r_1 \leq 1\ ,\end{aligned} and similarly, $F_2(r_2) = \sqrt{r_2}\ ,$ if $0 \leq r_2 \leq 1$. Now we have (see Figure $1$) \begin{aligned} F_{12}(r_1,r_2) &= P(X_1 \leq r_1 \,\, \mbox{and}\,\, X_2 \leq r_2) \\ &= P(\omega_1 \leq \sqrt{r_1} \,\,\mbox{and}\,\, \omega_2 \leq \sqrt{r_2}) \\ &= \mbox{Area}\,(E_1) \\ &= \sqrt{r_1} \sqrt{r_2} \\ &= F_1(r_1)F_2(r_2)\ .\end{aligned} In this case $F_{12}(r_1,r_2) = F_1(r_1)F_2(r_2)$ so that $X_1$ and $X_2$ are independent. On the other hand, if $r_1 = 1/4$ and $r_3 = 1$, then (see Figure $2$)
\begin{aligned} F_{13}(1/4,1) &= P(X_1 \leq 1/4,\ X_3 \leq 1) \\ &= P(\omega_1 \leq 1/2,\ \omega_1 + \omega_2 \leq 1) \\ &= \mbox{Area}\,(E_2) \\ &= \frac 12 - \frac 18 = \frac 38\ .\end{aligned} Now recalling that
$F_3(r_3) = \left \{ \matrix{ 0, & \mbox{if} \,\,r_3 < 0, \cr (1/2)r_3^2, & \mbox{if} \,\,0 \leq r_3 \leq 1, \cr 1-(1/2)(2-r_3)^2, & \mbox{if} \,\,1 \leq r_3 \leq 2, \cr 1, & \mbox{if} \,\,2 < r_3,\cr}\right.$
(see Example 2.14), we have $F_1(1/4)F_3(1) = (1/2)(1/2) = 1/4 \ne 3/8 = F_{13}(1/4,1)$. Hence, $X_1$ and $X_3$ are not independent random variables. A similar calculation shows that $X_2$ and $X_3$ are not independent either.
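The dependence of $X_1$ and $X_3$ shows up clearly in a simulation; here is a sketch using the same random point in the unit square:

import random

trials = 200_000
x1_small = x3_small = both = 0
for _ in range(trials):
    w1, w2 = random.random(), random.random()
    x1, x3 = w1 ** 2, w1 + w2
    if x1 <= 1/4:
        x1_small += 1
    if x3 <= 1:
        x3_small += 1
    if x1 <= 1/4 and x3 <= 1:
        both += 1
print(both / trials)                                # about 3/8
print((x1_small / trials) * (x3_small / trials))    # about 1/4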
Although we shall not prove it here, the following theorem is a useful one. The statement also holds for mutually independent discrete random variables. A proof may be found in Rényi.17
Theorem $1$
Let $X_1, X_2, \ldots, X_n$ be mutually independent continuous random variables and let $\phi_1(x), \phi_2(x), \ldots, \phi_n(x)$ be continuous functions. Then $\phi_1(X_1),$ $\phi_2(X_2), \ldots, \phi_n(X_n)$ are mutually independent.
Independent Trials
Using the notion of independence, we can now formulate for continuous sample spaces the notion of independent trials (see Definition 4.5).
Definition $7$
A sequence $X_1$, $X_2$, …, $X_n$ of random variables $X_i$ that are mutually independent and have the same density is called an independent trials process.
As in the case of discrete random variables, these independent trials processes arise naturally in situations where an experiment described by a single random variable is repeated $n$ times.
Beta Density
We consider next an example which involves a sample space with both discrete and continuous coordinates. For this example we shall need a new density function called the beta density. This density has two parameters $\alpha$, $\beta$ and is defined by
$B(\alpha,\beta,x) = \left \{ \matrix{ (1/B(\alpha,\beta))x^{\alpha - 1}(1 - x)^{\beta - 1}, & {\mbox{if}}\,\, 0 \leq x \leq 1, \cr 0, & {\mbox{otherwise}}.\cr}\right.$
Here $\alpha$ and $\beta$ are any positive numbers, and the beta function $B(\alpha,\beta)$ is given by the area under the graph of $x^{\alpha - 1}(1 - x)^{\beta - 1}$ between 0 and 1: $B(\alpha,\beta) = \int_0^1 x^{\alpha - 1}(1 - x)^{\beta - 1}\,dx\ .$ Note that when $\alpha = \beta = 1$ the beta density is the uniform density. When $\alpha$ and $\beta$ are greater than 1 the density is bell-shaped, but when they are less than 1 it is U-shaped as suggested by the examples in Figure 4.9.
We shall need the values of the beta function only for integer values of $\alpha$ and $\beta$, and in this case $B(\alpha,\beta) = \frac{(\alpha - 1)!\,(\beta - 1)!}{(\alpha + \beta - 1)!}\ .$
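For integer parameters this factorial formula is easy to confirm against the defining integral; a small sketch (the midpoint-rule quadrature is just one convenient check):

from math import factorial

def beta_exact(a, b):
    # Valid for integers a, b >= 1.
    return factorial(a - 1) * factorial(b - 1) / factorial(a + b - 1)

def beta_numeric(a, b, n=100_000):
    # Midpoint rule for the integral of x^(a-1) (1-x)^(b-1) over [0, 1].
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** (a - 1) * (1 - (k + 0.5) * h) ** (b - 1)
                   for k in range(n))

print(beta_exact(2, 3), beta_numeric(2, 3))   # both near 1/12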
Example $6$
In medical problems it is often assumed that a drug is effective with a probability $x$ each time it is used and the various trials are independent, so that one is, in effect, tossing a biased coin with probability $x$ for heads. Before further experimentation, you do not know the value $x$ but past experience might give some information about its possible values. It is natural to represent this information by sketching a density function to determine a distribution for $x$. Thus, we are considering $x$ to be a continuous random variable, which takes on values between 0 and 1. If you have no knowledge at all, you would sketch the uniform density. If past experience suggests that $x$ is very likely to be near 2/3 you would sketch a density with maximum at 2/3 and a spread reflecting your uncertainty in the estimate of 2/3. You would then want to find a density function that reasonably fits your sketch. The beta densities provide a class of densities that can be fit to most sketches you might make. For example, for $\alpha > 1$ and $\beta > 1$ it is bell-shaped with the parameters $\alpha$ and $\beta$ determining its peak and its spread.
Assume that the experimenter has chosen a beta density to describe the state of his knowledge about $x$ before the experiment. Then he gives the drug to $n$ subjects and records the number $i$ of successes. The number $i$ is a discrete random variable, so we may conveniently describe the set of possible outcomes of this experiment by referring to the ordered pair $(x, i)$.
We let $m(i|x)$ denote the probability that we observe $i$ successes given the value of $x$. By our assumptions, $m(i|x)$ is the binomial distribution with probability $x$ for success:
$m(i|x) = b(n,x,i) = {n \choose i} x^i(1 - x)^j\ ,$ where $j = n - i$.
If $x$ is chosen at random from $[0,1]$ with a beta density $B(\alpha,\beta,x)$, then the density function for the outcome of the pair $(x,i)$ is
\begin{aligned} f(x,i) & = & m(i|x)B(\alpha,\beta,x) \ & = & {n \choose i} x^i(1 - x)^j \frac 1{B(\alpha,\beta)} x^{\alpha - 1}(1 - x)^{\beta - 1} \ & = & {n \choose i} \frac 1{B(\alpha,\beta)} x^{\alpha + i - 1}(1 - x)^{\beta + j - 1}\ .\end{aligned}
Now let $m(i)$ be the probability that we observe $i$ successes without knowing the value of $x$. Then
\begin{aligned} m(i) & = & \int_0^1 m(i|x) B(\alpha,\beta,x)\,dx \ & = & {n \choose i} \frac 1{B(\alpha,\beta)} \int_0^1 x^{\alpha + i - 1}(1 - x)^{\beta + j - 1}\,dx \ & = & {n \choose i} \frac {B(\alpha + i,\beta + j)}{B(\alpha,\beta)}\ .\end{aligned}
Hence, the probability density $f(x|i)$ for $x$, given that $i$ successes were observed, is
$f(x|i) = \frac {f(x,i)}{m(i)} = \frac {x^{\alpha + i - 1}(1 - x)^{\beta + j - 1}}{B(\alpha + i,\beta + j)}\ ,$
that is, $f(x|i)$ is another beta density. This says that if we observe $i$ successes and $j$ failures in $n$ subjects, then the new density for the probability that the drug is effective is again a beta density but with parameters $\alpha + i$, $\beta + j$.
Now we assume that before the experiment we choose a beta density with parameters $\alpha$ and $\beta$, and that in the experiment we obtain $i$ successes in $n$ trials. We have just seen that in this case, the new density for $x$ is a beta density with parameters $\alpha + i$ and $\beta + j$.
Now we wish to calculate the probability that the drug is effective on the next subject. For any particular real number $t$ between 0 and 1, the probability that $x$ has the value $t$ is given by the expression in Equation 4.5. Given that $x$ has the value $t$, the probability that the drug is effective on the next subject is just $t$. Thus, to obtain the probability that the drug is effective on the next subject, we integrate the product of the expression in Equation 4.5 and $t$ over all possible values of $t$. We obtain:
\begin{aligned} \frac{1}{B(\alpha + i, \beta + j)}\int_0^1 t \cdot t^{\alpha+i-1}(1-t)^{\beta+j-1}\,dt &= \frac{B(\alpha + i + 1, \beta + j)}{B(\alpha + i, \beta + j)} \\ &= \frac{(\alpha + i)!\,(\beta + j - 1)!}{(\alpha + \beta + i + j)!}\cdot \frac{(\alpha+\beta+i+j-1)!}{(\alpha+i-1)!\,(\beta+j-1)!} \\ &= \frac{\alpha+i}{\alpha+\beta+n}\ .\end{aligned}
If $n$ is large, then our estimate for the probability of success after the experiment is approximately the proportion of successes observed in the experiment, which is certainly a reasonable conclusion.
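The entire updating argument condenses to a few lines; a sketch (the function name is mine):

def beta_update(alpha, beta, i, n):
    # After i successes in n trials, a beta(alpha, beta) prior becomes a
    # beta(alpha + i, beta + n - i) posterior, and the probability of
    # success on the next trial is (alpha + i) / (alpha + beta + n).
    j = n - i
    return (alpha + i, beta + j), (alpha + i) / (alpha + beta + n)

print(beta_update(1, 1, 7, 10))   # ((8, 4), 0.666...)

With the uniform prior $\alpha = \beta = 1$, the probability of success on the next trial reduces to $(i + 1)/(n + 2)$, the rule that reappears in Exercise 11 below.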
The next example is another in which the true probabilities are unknown and must be estimated based upon experimental data.
Example $7$: (Two-armed bandit problem)
You are in a casino and confronted by two slot machines. Each machine pays off either 1 dollar or nothing. The probability that the first machine pays off a dollar is $x$ and that the second machine pays off a dollar is $y$. We assume that $x$ and $y$ are random numbers chosen independently from the interval $[0,1]$ and unknown to you. You are permitted to make a series of ten plays, each time choosing one machine or the other. How should you choose to maximize the number of times that you win?
One strategy that sounds reasonable is to calculate, at every stage, the probability that each machine will pay off and choose the machine with the higher probability. Let win($i$), for $i = 1$ or 2, be the number of times that you have won on the $i$th machine. Similarly, let lose($i$) be the number of times you have lost on the $i$th machine. Then, from Example 4.16, the probability $p(i)$ that you win if you choose the $i$th machine is $p(i) = \frac {{\mbox{win}}(i) + 1} {{\mbox{win}}(i) + {\mbox{lose}}(i) + 2}\ .$ Thus, if $p(1) > p(2)$ you would play machine 1 and otherwise you would play machine 2. We have written a program TwoArm to simulate this experiment. In the program, the user specifies the initial values for $x$ and $y$ (but these are unknown to the experimenter). The program calculates at each stage the two conditional densities for $x$ and $y$, given the outcomes of the previous trials, and then computes $p(i)$, for $i = 1$, 2. It then chooses the machine with the highest value for the probability of winning for the next play. The program prints the machine chosen on each play and the outcome of this play. It also plots the new densities for $x$ (solid line) and $y$ (dotted line), showing only the current densities. We have run the program for ten plays for the case $x = .6$ and $y = .7$. The result is shown in Figure 4.7.
The run of the program shows the weakness of this strategy. Our initial probability for winning on the better of the two machines is .7. We start with the poorer machine and our outcomes are such that we always have a probability greater than .6 of winning and so we just keep playing this machine even though the other machine is better. If we had lost on the first play we would have switched machines. Our final density for $y$ is the same as our initial density, namely, the uniform density. Our final density for $x$ is different and reflects a much more accurate knowledge about $x$. The computer did pretty well with this strategy, winning seven out of the ten trials, but ten trials are not enough to judge whether this is a good strategy in the long run.
Another popular strategy is the play-the-winner strategy. As the name suggests, for this strategy we choose the same machine when we win and switch machines when we lose. The program TwoArm will simulate this strategy as well. In Figure 4.11, we show the results of running this program with the play-the-winner strategy and the same true probabilities of .6 and .7 for the two machines. After ten plays our densities for the unknown probabilities of winning suggest to us that the second machine is indeed the better of the two. We again won seven out of the ten trials.
Neither of the strategies that we simulated is the best one in terms of maximizing our average winnings. This best strategy is very complicated but is reasonably approximated by the play-the-winner strategy. Variations on this example have played an important role in the problem of clinical tests of drugs where experimenters face a similar situation.
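The program TwoArm is not reproduced here, but both strategies are easy to sketch; the following reconstruction makes its own choices about ties and the starting machine:

import random

def simulate(strategy, x, y, plays=10):
    # One run of `plays` pulls on machines with true payoff probabilities
    # x and y; returns the number of wins.
    probs = (x, y)
    wins, losses = [0, 0], [0, 0]
    machine, total = random.randrange(2), 0
    for _ in range(plays):
        if strategy == "best":
            # p(i) = (win(i) + 1) / (win(i) + lose(i) + 2), as in the text
            p = [(wins[i] + 1) / (wins[i] + losses[i] + 2) for i in (0, 1)]
            machine = 0 if p[0] >= p[1] else 1
        won = random.random() < probs[machine]
        total += won
        if won:
            wins[machine] += 1
        else:
            losses[machine] += 1
            if strategy == "winner":
                machine = 1 - machine   # play-the-winner: switch on a loss
    return total

for s in ("best", "winner"):
    runs = 10_000
    avg = sum(simulate(s, 0.6, 0.7) for _ in range(runs)) / runs
    print(s, avg)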
Exercises
Exercise $1$
Pick a point $x$ at random (with uniform density) in the interval $[0,1]$. Find the probability that $x > 1/2$, given that
1. $x > 1/4$.
2. $x < 3/4$.
3. $|x - 1/2| < 1/4$.
4. $x^2 - x + 2/9 < 0$.
Exercise $2$
A radioactive material emits $\alpha$-particles at a rate described by the density function $f(t) = .1e^{-.1t}\ .$ Find the probability that a particle is emitted in the first 10 seconds, given that
1. no particle is emitted in the first second.
2. no particle is emitted in the first 5 seconds.
3. a particle is emitted in the first 3 seconds.
4. a particle is emitted in the first 20 seconds.
Exercise $3$
The Acme Super light bulb is known to have a useful life described by the density function $f(t) = .01e^{-.01t}\ ,$ where time $t$ is measured in hours.
1. Find the failure rate of this bulb (see Exercise 2.2.6)
2. Find the reliability of this bulb after 20 hours.
3. Given that it lasts 20 hours, find the probability that the bulb lasts another 20 hours.
4. Find the probability that the bulb burns out in the forty-first hour, given that it lasts 40 hours.
Exercise $4$
Suppose you toss a dart at a circular target of radius 10 inches. Given that the dart lands in the upper half of the target, find the probability that
1. it lands in the right half of the target.
2. its distance from the center is less than 5 inches.
3. its distance from the center is greater than 5 inches.
4. it lands within 5 inches of the point $(0,5)$.
Exercise $5$
Suppose you choose two numbers $x$ and $y$, independently at random from the interval $[0,1]$. Given that their sum lies in the interval $[0,1]$, find the probability that
1. $|x - y| < 1$.
2. $xy < 1/2$.
3. $\max\{x,y\} < 1/2$.
4. $x^2 + y^2 < 1/4$.
5. $x > y$.
Exercise $6$
Find the conditional density functions for the following experiments.
1. A number $x$ is chosen at random in the interval $[0,1]$, given that $x > 1/4$.
2. A number $t$ is chosen at random in the interval $[0,\infty)$ with exponential density $e^{-t}$, given that $1 < t < 10$.
3. A dart is thrown at a circular target of radius 10 inches, given that it falls in the upper half of the target.
4. Two numbers $x$ and $y$ are chosen at random in the interval $[0,1]$, given that $x > y$.
Exercise $7$
Let $x$ and $y$ be chosen at random from the interval $[0,1]$. Show that the events $x > 1/3$ and $y > 2/3$ are independent events.
Exercise $8$
Let $x$ and $y$ be chosen at random from the interval $[0,1]$. Which pairs of the following events are independent?
1. $x > 1/3$.
2. $y > 2/3$.
3. $x > y$.
4. $x + y < 1$.
Exercise $9$
Suppose that $X$ and $Y$ are continuous random variables with density functions $f_X(x)$ and $f_Y(y)$, respectively. Let $f(x, y)$ denote the joint density function of $(X, Y)$. Show that $\int_{-\infty}^\infty f(x, y)\, dy = f_X(x)\ ,$ and $\int_{-\infty}^\infty f(x, y)\, dx = f_Y(y)\ .$
Exercise *$10$
In Exercise 2.2.12 you proved the following: If you take a stick of unit length and break it into three pieces, choosing the breaks at random (i.e., choosing two real numbers independently and uniformly from [0, 1]), then the probability that the three pieces form a triangle is 1/4. Consider now a similar experiment: First break the stick at random, then break the longer piece at random. Show that the two experiments are actually quite different, as follows:
1. Write a program which simulates both cases for a run of 1000 trials, prints out the proportion of successes for each run, and repeats this process ten times. (Call a trial a success if the three pieces do form a triangle.) Have your program pick $(x,y)$ at random in the unit square, and in each case use $x$ and $y$ to find the two breaks. For each experiment, have it plot $(x,y)$ if $(x,y)$ gives a success.
2. Show that in the second experiment the theoretical probability of success is actually $2\log 2 - 1$.
Exercise $11$
A coin has an unknown bias $p$ that is assumed to be uniformly distributed between 0 and 1. The coin is tossed $n$ times and heads turns up $j$ times and tails turns up $k$ times. We have seen that the probability that heads turns up next time is $\frac {j + 1}{n + 2}\ .$ Show that this is the same as the probability that the next ball is black for the Polya urn model of Exercise 4.1.20. Use this result to explain why, in the Polya urn model, the proportion of black balls does not tend to 0 or 1 as one might expect but rather to a uniform distribution on the interval $[0,1]$.
Exercise $12$
Previous experience with a drug suggests that the probability $p$ that the drug is effective is a random quantity having a beta density with parameters $\alpha = 2$ and $\beta = 3$. The drug is used on ten subjects and found to be successful in four out of the ten patients. What density should we now assign to the probability $p$? What is the probability that the drug will be successful the next time it is used?
Exercise $13$
Write a program to allow you to compare the strategies play-the-winner and play-the-best-machine for the two-armed bandit problem of Example 4.17. Have your program determine the initial payoff probabilities for each machine by choosing a pair of random numbers between 0 and 1. Have your program carry out 20 plays and keep track of the number of wins for each of the two strategies. Finally, have your program make 1000 repetitions of the 20 plays and compute the average winnings per 20 plays. Which strategy seems to be the best? Repeat these simulations with 20 replaced by 100. Does your answer to the above question change?
Exercise $14$
Consider the two-armed bandit problem of Example 4.24. Bruce Barnes proposed the following strategy, which is a variation on the play-the-best-machine strategy. The machine with the greatest probability of winning is played unless the following two conditions hold: (a) the difference in the probabilities for winning is less than .08, and (b) the ratio of the number of times played on the more often played machine to the number of times played on the less often played machine is greater than 1.4. If the above two conditions hold, then the machine with the smaller probability of winning is played. Write a program to simulate this strategy. Have your program choose the initial payoff probabilities at random from the unit interval $[0,1]$, make 20 plays, and keep track of the number of wins. Repeat this experiment 1000 times and obtain the average number of wins per 20 plays. Implement a second strategy—for example, play-the-best-machine or one of your own choice, and see how this second strategy compares with Bruce’s on average wins.
Much of this section is based on an article by Snell and Vanderbei.18
One must be very careful in dealing with problems involving conditional probability. The reader will recall that in the Monty Hall problem (Example 4.1.6), if the contestant chooses the door with the car behind it, then Monty has a choice of doors to open. We made an assumption that in this case, he will choose each door with probability 1/2. We then noted that if this assumption is changed, the answer to the original question changes. In this section, we will study other examples of the same phenomenon.
Example $1$:
Consider a family with two children. Given that one of the children is a boy, what is the probability that both children are boys?
One way to approach this problem is to say that the other child is equally likely to be a boy or a girl, so the probability that both children are boys is 1/2. The “text-book" solution would be to draw the tree diagram and then form the conditional tree by deleting paths to leave only those paths that are consistent with the given information. The result is shown in Figure $1$. We see that the probability of two boys given a boy in the family is not 1/2 but rather 1/3.
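The tree count can be confirmed by enumerating the four equally likely families directly; a quick sketch:

from itertools import product

families = list(product("BG", repeat=2))            # BB, BG, GB, GG
at_least_one_boy = [f for f in families if "B" in f]
both_boys = [f for f in at_least_one_boy if f == ("B", "B")]
print(len(both_boys) / len(at_least_one_boy))       # 1/3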
This problem and others like it are discussed in Bar-Hillel and Falk.19 These authors stress that the answer to conditional probabilities of this kind can change depending upon how the information given was actually obtained. For example, they show that 1/2 is the correct answer for the following scenario.
Example $2$
Mr. Smith is the father of two. We meet him walking along the street with a young boy whom he proudly introduces as his son. What is the probability that Mr. Smith’s other child is also a boy?
Answer
As usual we have to make some additional assumptions. For example, we will assume that if Mr. Smith has a boy and a girl, he is equally likely to choose either one to accompany him on his walk. In Figure $2$ we show the tree analysis of this problem and we see that 1/2 is, indeed, the correct answer.
Example $3$:
It is not so easy to think of reasonable scenarios that would lead to the classical 1/3 answer. An attempt was made by Stephen Geller in proposing this problem to Marilyn vos Savant.20 Geller’s problem is as follows: A shopkeeper says she has two new baby beagles to show you, but she doesn’t know whether they’re both male, both female, or one of each sex. You tell her that you want only a male, and she telephones the fellow who’s giving them a bath. “Is at least one a male?" she asks. “Yes," she informs you with a smile. What is the probability that the other one is male?
The reader is asked to decide whether the model which gives an answer of 1/3 is a reasonable one to use in this case.
In the preceding examples, the apparent paradoxes could easily be resolved by clearly stating the model that is being used and the assumptions that are being made. We now turn to some examples in which the paradoxes are not so easily resolved.
Exercise $4$
Two envelopes each contain a certain amount of money. One envelope is given to Ali and the other to Baba and they are told that one envelope contains twice as much money as the other. However, neither knows who has the larger prize. Before anyone has opened their envelope, Ali is asked if she would like to trade her envelope with Baba. She reasons as follows: Assume that the amount in my envelope is $x$. If I switch, I will end up with $x/2$ with probability 1/2, and $2x$ with probability 1/2. If I were given the opportunity to play this game many times, and if I were to switch each time, I would, on average, get $\frac 12 \frac x2 + \frac 12 2x = \frac 54 x\ .$ This is greater than my average winnings if I didn’t switch.
Of course, Baba is presented with the same opportunity and reasons in the same way to conclude that he too would like to switch. So they switch and each thinks that his/her net worth just went up by 25%.
Since neither has yet opened any envelope, this process can be repeated and so again they switch. Now they are back with their original envelopes and yet they think that their fortune has increased 25% twice. By this reasoning, they could convince themselves that by repeatedly switching the envelopes, they could become arbitrarily wealthy. Clearly, something is wrong with the above reasoning, but where is the mistake?
Answer
One of the tricks of making paradoxes is to make them slightly more difficult than is necessary to further befuddle us. As John Finn has suggested, in this paradox we could just as well have started with a simpler problem. Suppose Ali and Baba know that I am going to give them either an envelope with \$5 or one with \$10 and I am going to toss a coin to decide which to give to Ali, and then give the other to Baba. Then Ali can argue that Baba has $2x$ with probability $1/2$ and $x/2$ with probability $1/2$. This leads Ali to the same conclusion as before. But now it is clear that this is nonsense, since if Ali has the envelope containing \$5, Baba cannot possibly have half of this, namely \$2.50, since that was not even one of the choices. Similarly, if Ali has \$10, Baba cannot have twice as much, namely \$20. In fact, in this simpler problem the possible outcomes are given by the tree diagram in Figure 4.14. From the diagram, it is clear that neither is made better off by switching.
In the above example, Ali’s reasoning is incorrect because she infers that if the amount in her envelope is $x$, then the probability that her envelope contains the smaller amount is 1/2, and the probability that it contains the larger amount is also 1/2. In fact, these conditional probabilities depend upon the distribution of the amounts that are placed in the envelopes.
For definiteness, let $X$ denote the positive integer-valued random variable which represents the smaller of the two amounts in the envelopes. Suppose, in addition, that we are given the distribution of $X$, i.e., for each positive integer $x$, we are given the value of $p_x = P(X = x)\ .$ (In Finn’s example, $p_5 = 1$, and $p_{n} = 0$ for all other values of $n$.) Then it is easy to calculate the conditional probability that an envelope contains the smaller amount, given that it contains $x$ dollars. The two possible sample points are $(x, x/2)$ and $(x, 2x)$. If $x$ is odd, then the first sample point has probability 0, since $x/2$ is not an integer, so in this case the conditional probability that $x$ is the smaller amount is 1. If $x$ is even, then the two sample points have probabilities $p_{x/2}$ and $p_x$, respectively, so the conditional probability that $x$ is the smaller amount is $\frac{p_x}{p_{x/2} + p_x}\ ,$ which is not necessarily equal to 1/2.
Steven Brams and D. Marc Kilgour21 study the problem, for different distributions, of whether or not one should switch envelopes, if one’s objective is to maximize the long-term average winnings. Let $x$ be the amount in your envelope. They show that for any distribution of $X$, there is at least one value of $x$ such that you should switch. They give an example of a distribution for which there is exactly one value of $x$ such that you should switch (see Exercise 4.3.5). Perhaps the most interesting case is a distribution in which you should always switch. We now give this example.
Example $5$:
Suppose that we have two envelopes in front of us, and that one envelope contains twice the amount of money as the other (both amounts are positive integers). We are given one of the envelopes, and asked if we would like to switch.
Solution
As above, we let $X$ denote the smaller of the two amounts in the envelopes, and let $p_x = P(X = x)\ .$ We are now in a position where we can calculate the long-term average winnings, if we switch. (This long-term average is an example of a probabilistic concept known as expectation, and will be discussed in Chapter 6.) Given that one of the two sample points has occurred, the probability that it is the point $(x, x/2)$ is $\frac{p_{x/2}}{p_{x/2} + p_x}\ ,$ and the probability that it is the point $(x, 2x)$ is $\frac{p_x}{p_{x/2} + p_x}\ .$ Thus, if we switch, our long-term average winnings are $\frac{p_{x/2}}{p_{x/2} + p_x}\frac x2 + \frac{p_x}{p_{x/2} + p_x} 2x\ .$ If this is greater than $x$, then it pays in the long run for us to switch. Some routine algebra shows that the above expression is greater than $x$ if and only if $\frac{p_{x/2}}{p_{x/2} + p_x} < \frac 23\ .$
It is interesting to consider whether there is a distribution on the positive integers such that the inequality is true for all even values of $x$. Brams and Kilgour22 give the following example.
We define $p_x$ as follows: $p_x = \left \{ \matrix{ \frac 13 \Bigl(\frac 23\Bigr)^{k}, & \mbox{if}\,\, x = 2^k,\ k = 0,\ 1,\ 2,\ \ldots, \cr 0, & \mbox{otherwise.}\cr }\right.$ It is easy to calculate (see Exercise 4.3.4) that for all relevant values of $x$, we have $\frac{p_{x/2}}{p_{x/2} + p_x} = \frac 35\ ,$ which means that the inequality is always true.
So far, we have been able to resolve paradoxes by clearly stating the assumptions being made and by precisely stating the models being used. We end this section by describing a paradox which we cannot resolve.
Example $6$:
Suppose that we have two envelopes in front of us, and we are told that the envelopes contain $X$ and $Y$ dollars, respectively, where $X$ and $Y$ are different positive integers. We randomly choose one of the envelopes, and we open it, revealing $X$, say. Is it possible to determine, with probability greater than 1/2, whether $X$ is the smaller of the two dollar amounts?
Solution
Even if we have no knowledge of the joint distribution of $X$ and $Y$, the surprising answer is yes! Here’s how to do it. Toss a fair coin until the first time that heads turns up. Let $Z$ denote the number of tosses required plus 1/2. If $Z > X$, then we say that $X$ is the smaller of the two amounts, and if $Z < X$, then we say that $X$ is the larger of the two amounts.
First, if $Z$ lies between $X$ and $Y$, then we are sure to be correct. Since $X$ and $Y$ are unequal, $Z$ lies between them with positive probability. Second, if $Z$ is not between $X$ and $Y$, then $Z$ is either greater than both $X$ and $Y$, or is less than both $X$ and $Y$. In either case, $X$ is the smaller of the two amounts with probability 1/2, by symmetry considerations (remember, we chose the envelope at random). Thus, if $q$ denotes the probability that $Z$ lies between $X$ and $Y$, then the probability that we are correct is $q + \frac 12(1 - q) = \frac 12 + \frac q2\ ,$ which is greater than 1/2.
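This strategy can be checked by simulation. The following sketch is ours, not the text's; the joint distribution of the two amounts is a hypothetical choice (two distinct integers between 1 and 10), since the argument above works for any joint distribution of distinct amounts.

```python
import random

def random_z():
    # Toss a fair coin until the first head; Z = number of tosses + 1/2.
    tosses = 1
    while random.random() < 0.5:   # tail with probability 1/2
        tosses += 1
    return tosses + 0.5

trials, correct = 100_000, 0
for _ in range(trials):
    # Hypothetical amounts: any two distinct positive integers will do.
    small, large = sorted(random.sample(range(1, 11), 2))
    x = random.choice([small, large])     # open one envelope at random
    guess_x_is_smaller = random_z() > x
    if guess_x_is_smaller == (x == small):
        correct += 1
print(correct / trials)   # reliably greater than 1/2
```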
Exercises
Exercise $1$
One of the first conditional probability paradoxes was provided by Bertrand.23 It is called the Box Paradox. A cabinet has three drawers. In the first drawer there are two gold balls, in the second drawer there are two silver balls, and in the third drawer there is one silver and one gold ball. A drawer is picked at random and a ball chosen at random from the two balls in the drawer. Given that a gold ball was drawn, what is the probability that the drawer with the two gold balls was chosen?
Exercise $2$
The following problem is called the two aces problem. This problem, dating back to 1936, has been attributed to the English mathematician J. H. C. Whitehead (see Gridgeman24). This problem was also submitted to Marilyn vos Savant by the master of mathematical puzzles Martin Gardner, who remarks that it is one of his favorites.
A bridge hand has been dealt, i.e., thirteen cards are dealt to each player. Given that your partner has at least one ace, what is the probability that he has at least two aces? Given that your partner has the ace of hearts, what is the probability that he has at least two aces? Answer these questions for a version of bridge in which there are eight cards, namely four aces and four kings, and each player is dealt two cards. (The reader may wish to solve the problem with a 52-card deck.)
Exercise $3$
In the preceding exercise, it is natural to ask “How do we get the information that the given hand has an ace?" Gridgeman considers two different ways that we might get this information. (Again, assume the deck consists of eight cards.)
1. Assume that the person holding the hand is asked to “Name an ace in your hand" and answers “The ace of hearts." What is the probability that he has a second ace?
2. Suppose the person holding the hand is asked the more direct question “Do you have the ace of hearts?" and the answer is yes. What is the probability that he has a second ace?
Exercise $4$
Using the notation introduced in Example 4.3.5, show that in the example of Brams and Kilgour, if $x$ is a positive power of 2, then $\frac{p_{x/2}}{p_{x/2} + p_x} = \frac 35\ .$
Exercise $5$
Using the notation introduced in Example 4.3.5, let $p_x = \left \{ \matrix{ \frac 23 \Bigl(\frac 13\Bigr)^k, & \mbox{if}\,\, x = 2^k, \cr 0, & \mbox{otherwise.}\cr }\right.$ Show that there is exactly one value of $x$ such that if your envelope contains $x$, then you should switch.
Exercise $6$*
(For bridge players only. From Sutherland.25) Suppose that we are the declarer in a hand of bridge, and we have the king, 9, 8, 7, and 2 of a certain suit, while the dummy has the ace, 10, 5, and 4 of the same suit. Suppose that we want to play this suit in such a way as to maximize the probability of having no losers in the suit. We begin by leading the 2 to the ace, and we note that the queen drops on our left. We then lead the 10 from the dummy, and our right-hand opponent plays the six (after playing the three on the first round). Should we finesse or play for the drop?
4.R: References
1. M. vos Savant, “Ask Marilyn," Parade Magazine, 9 September; 2 December; 17 February 1990; reprinted in M. vos Savant, Ask Marilyn (New York: St. Martins, 1992).↩
2. Quoted in F. N. David, Games, Gods and Gambling (London: Griffin, 1962), p. 119.↩
3. I. Hacking, The Emergence of Probability (Cambridge: Cambridge University Press, 1975), p. 99.↩
4. A. de Moivre, The Doctrine of Chances, 3rd ed. (New York: Chelsea, 1967), p. 6.↩
5. ibid, p. 7.↩
6. T. Bayes, “An Essay Toward Solving a Problem in the Doctrine of Chances," Philosophical Transactions of the Royal Society of London, vol. 53 (1763), pp. 370–418.↩
7. L. E. Maistrov, Probability Theory: A Historical Sketch, trans. and ed. Samuel Kotz (New York: Academic Press, 1974), p. 100.↩
8. R. Johnsonbough, “Problem #103," vol. 8 (1977), p. 292.↩
9. K. L. Chung, Elementary Probability Theory with Stochastic Processes (New York: Springer-Verlag, 1979), p. 152.↩
10. M. W. Gray, “Statistics and the Law," vol. 56 (1983), pp. 67–81.↩
11. C. L. Anderson, “Note on the Advantage of First Serve," Series A, vol. 23 (1977), p. 363.↩
12. B. Eisenberg and B. K. Ghosh, “Independent Events in a Discrete Uniform Probability Space," vol. 41, no. 1 (1987), pp. 52–56.↩
13. S. Gudder, “Do Good Hands Attract?" vol. 54, no. 1 (1981), pp. 13–16.↩
14. R. Falk, A. Lipson, and C. Konold, “The ups and downs of the hope function in a fruitless search," in G. Wright and P. Ayton (eds.), Subjective Probability (Chichester: Wiley, 1994), pp. 353–377.↩
15. C. Crossen, “Fright by the numbers: Alarming disease data are frequently flawed," Wall Street Journal, 11 April 1996, p. B1.↩
16. D. Granberg, “To switch or not to switch," in M. vos Savant, The Power of Logical Thinking (New York: St. Martin’s, 1996).↩
17. A. Rényi, (Budapest: Akadémiai Kiadó, 1970), p. 183.↩
18. J. L. Snell and R. Vanderbei, “Three Bewitching Paradoxes," in , CRC Press, Boca Raton, 1995.↩
19. M. Bar-Hillel and R. Falk, “Some teasers concerning conditional probabilities," Cognition, vol. 11 (1982), pp. 109–122.↩
20. M. vos Savant, “Ask Marilyn," Parade Magazine, 9 September; 2 December; 17 February 1990; reprinted in M. vos Savant, Ask Marilyn (New York: St. Martins, 1992).↩
21. S. J. Brams and D. M. Kilgour, “The Box Problem: To Switch or Not to Switch," , vol. 68, no. 1 (1995), p. 29.↩
22. ibid.↩
23. J. Bertrand, Calcul des Probabilités (Paris: Gauthier-Villars, 1888).↩
24. N. T. Gridgeman, Letter, The American Statistician, vol. 21 (1967), pp. 38–39.↩
25. E. Sutherland, “Restricted Choice — Fact or Fiction?", , November 1, 1993.↩ | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/04%3A_Conditional_Probability/4.03%3A_Paradoxes.txt |
• 5.1: Important Distributions
In this chapter, we describe the discrete probability distributions and the continuous probability densities that occur most often in the analysis of experiments. We will also show how one simulates these distributions and densities on a computer.
• 5.2: Important Densities
In this section, we will introduce some important probability density functions and give some examples of their use. We will also consider the question of how one simulates a given density using a computer.
• 5.R: References
05: Distributions and Densities
In this chapter, we describe the discrete probability distributions and the continuous probability densities that occur most often in the analysis of experiments. We will also show how one simulates these distributions and densities on a computer.
Discrete Uniform Distribution
In Chapter 1, we saw that in many cases, we assume that all outcomes of an experiment are equally likely. If $X$ is a random variable which represents the outcome of an experiment of this type, then we say that $X$ is uniformly distributed. If the sample space $S$ is of size $n$, where $0 < n < \infty$, then the distribution function $m(\omega)$ is defined to be $1/n$ for all $\omega \in S$. As is the case with all of the discrete probability distributions discussed in this chapter, this experiment can be simulated on a computer using the program GeneralSimulation. However, in this case, a faster algorithm can be used instead. (This algorithm was described in Chapter 1; we repeat the description here for completeness.) The expression $1 + \lfloor n\,(rnd)\rfloor$ takes on as a value each integer between 1 and $n$ with probability $1/n$ (the notation $\lfloor x \rfloor$ denotes the greatest integer not exceeding $x$). Thus, if the possible outcomes of the experiment are labelled $\omega_1,\ \omega_2,\ \ldots,\ \omega_n$, then we use the above expression to represent the subscript of the output of the experiment.
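For concreteness, here is a minimal sketch of this faster algorithm in Python, with the standard library's random.random playing the role of rnd (the text's program GeneralSimulation is not reproduced here):

```python
import random

def uniform_index(n):
    # The expression 1 + floor(n * rnd): each integer 1, ..., n
    # occurs with probability 1/n.
    return 1 + int(n * random.random())

# For example, simulate ten rolls of a fair die (n = 6).
print([uniform_index(6) for _ in range(10)])
```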
If the sample space is a countably infinite set, such as the set of positive integers, then it is not possible to have an experiment which is uniform on this set (see Exercise 3). If the sample space is an uncountable set, with positive, finite length, such as the interval $[0, 1]$, then we use continuous density functions (see Section 5.2).
Binomial Distribution
The binomial distribution with parameters $n$, $p$, and $k$ was defined in Chapter 3. It is the distribution of the random variable which counts the number of heads which occur when a coin is tossed $n$ times, assuming that on any one toss, the probability that a head occurs is $p$. The distribution function is given by the formula $b(n, p, k) = {n \choose k}p^k q^{n-k}\ ,$ where $q = 1 - p$.
One straightforward way to simulate a binomial random variable $X$ is to compute the sum of $n$ independent $0-1$ random variables, each of which takes on the value 1 with probability $p$.
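A sketch of this method in Python (nothing beyond the standard library is assumed):

```python
import random

def binomial_sample(n, p):
    # Sum of n independent 0-1 random variables, each 1 with probability p.
    return sum(1 for _ in range(n) if random.random() < p)

# Estimate b(10, .5, 5) = .2461 from repeated samples.
trials = 100_000
hits = sum(1 for _ in range(trials) if binomial_sample(10, 0.5) == 5)
print(hits / trials)
```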
Geometric Distribution
Consider a Bernoulli trials process continued for an infinite number of trials; for example, a coin tossed an infinite sequence of times. In such a process, we can determine the distribution for any random variable $X$ relating to the experiment, provided $P(X = a)$ can be computed in terms of a finite number of trials. For example, let $T$ be the number of trials up to and including the first success. Then
\begin{aligned} P(T = 1) &= p\ , \\ P(T = 2) &= qp\ , \\ P(T = 3) &= q^2p\ ,\end{aligned}
and in general,
$P(T = n) = q^{n-1}p\ .$
To show that this is a distribution, we must show that $p + qp + q^2p + \cdots = 1\ .$
The left-hand expression is just a geometric series with first term $p$ and common ratio $q$, so its sum is $\frac{p}{1-q}\,,$ which equals 1.
In Figure $1$ we have plotted this distribution using the program GeometricPlot for the cases $p = .5$ and $p = .2$. We see that as $p$ decreases we are more likely to get large values for $T$, as would be expected. In both cases, the most probable value for $T$ is 1. This will always be true since $\frac {P(T = j + 1)}{P(T = j)} = q < 1\ .$
In general, if $0 < p < 1$, and $q = 1 - p$, then we say that the random variable $T$ has a geometric distribution if $P(T = j) = q^{j - 1}p\ ,$ for $j = 1,\ 2,\ 3,\ \ldots$.
To simulate the geometric distribution with parameter $p$, we can simply compute a sequence of random numbers in $[0, 1)$, stopping when an entry does not exceed $p$. However, for small values of $p$, this is time-consuming (taking, on the average, $1/p$ steps). We now describe a method whose running time does not depend upon the size of $p$. Define $Y$ to be the smallest integer satisfying the inequality
$1 - q^Y \ge rnd\ .$
Then we have
\begin{aligned} P(Y = j) &= P\left(1 - q^j \ge rnd > 1 - q^{j-1}\right) \\ &= q^{j-1} - q^j \\ &= q^{j-1}(1 - q) \\ &= q^{j-1}p\ .\end{aligned}
Thus, $Y$ is geometrically distributed with parameter $p$. To generate $Y$, all we have to do is solve the above inequality for $Y$. We obtain
$Y = \Biggl\lceil \frac{\log(1-rnd)}{\log q} \Biggr\rceil\ ,$
where the notation $\lceil x \rceil$ means the least integer which is greater than or equal to $x$. Since $\log(1-rnd)$ and $\log(rnd)$ are identically distributed, $Y$ can also be generated using the equation
$Y = \Biggl\lceil \frac{\log\ rnd}{\log q} \Biggr\rceil\ .$
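The following Python sketch implements both methods; the guard against taking the logarithm of zero is our addition, since random.random can return 0 exactly.

```python
import math
import random

def geometric_naive(p):
    # Generate random numbers until one does not exceed p;
    # takes 1/p steps on average.
    count = 1
    while random.random() > p:
        count += 1
    return count

def geometric_fast(p):
    # Inversion: Y = ceil(log(rnd) / log(q)), in constant time.
    q = 1 - p
    u = 1.0 - random.random()              # lies in (0, 1], avoids log(0)
    return max(1, math.ceil(math.log(u) / math.log(q)))

samples = [geometric_fast(0.2) for _ in range(100_000)]
print(sum(samples) / len(samples))         # near the mean 1/p = 5
```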
Example $1$:
The geometric distribution plays an important role in the theory of queues, or waiting lines. For example, suppose a line of customers waits for service at a counter. It is often assumed that, in each small time unit, either 0 or 1 new customers arrive at the counter. The probability that a customer arrives is $p$ and that no customer arrives is $q = 1 - p$. Then the time $T$ until the next arrival has a geometric distribution. It is natural to ask for the probability that no customer arrives in the next $k$ time units, that is, for $P(T > k)$. This is given by
\begin{aligned} P(T > k) = \sum_{j = k+1}^\infty q^{j-1}p &= q^k(p + qp + q^2p + \cdots) \\ &= q^k\ .\end{aligned}
This probability can also be found by noting that we are asking for no successes (i.e., arrivals) in a sequence of $k$ consecutive time units, where the probability of a success in any one time unit is $p$. Thus, the probability is just $q^k$, since arrivals in any two time units are independent events.
It is often assumed that the length of time required to service a customer also has a geometric distribution but with a different value for $p$. This implies a rather special property of the service time. To see this, let us compute the conditional probability $P(T > r + s\,|\,T > r) = \frac{P(T > r + s)}{P(T > r)} = \frac {q^{r + s}}{q^r} = q^s\ .$ Thus, the probability that the customer’s service takes $s$ more time units is independent of the length of time $r$ that the customer has already been served. Because of this interpretation, this property is called the “memoryless" property, and is also obeyed by the exponential distribution. (Fortunately, not too many service stations have this property.)
Negative Binomial Distribution
Suppose we are given a coin which has probability $p$ of coming up heads when it is tossed. We fix a positive integer $k$, and toss the coin until the $k$th head appears. We let $X$ represent the number of tosses. When $k = 1$, $X$ is geometrically distributed. For a general $k$, we say that $X$ has a negative binomial distribution. We now calculate the probability distribution of $X$. If $X = x$, then it must be true that there were exactly $k-1$ heads thrown in the first $x-1$ tosses, and a head must have been thrown on the $x$th toss. There are
$\binom{x-1}{k-1}$
sequences of the first $x-1$ tosses with these properties, and each of them occurs with the same probability, namely $p^{k-1}q^{x-k}\ .$ Multiplying by the probability $p$ of a head on the $x$th toss, we see that if we define $u(x, k, p) = P(X = x)\ ,$ then
$u(x, k, p) = \binom{x-1}{k-1}p^kq^{x-k}\ .$
One can simulate this on a computer by simulating the tossing of a coin. The following algorithm is, in general, much faster. We note that $X$ can be understood as the sum of $k$ outcomes of a geometrically distributed experiment with parameter $p$. Thus, we can use the following sum as a means of generating $X$:
$\sum_{j = 1}^k \Biggl\lceil {\frac{\log\ rnd_j}{\log\ q}}\Biggr\rceil$
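A minimal Python sketch of this method, reusing the inversion formula once for each of the $k$ geometric summands:

```python
import math
import random

def negative_binomial_sample(k, p):
    # Number of tosses until the k-th head: a sum of k independent
    # geometrically distributed variables with parameter p.
    q = 1 - p
    total = 0
    for _ in range(k):
        u = 1.0 - random.random()          # in (0, 1], avoids log(0)
        total += max(1, math.ceil(math.log(u) / math.log(q)))
    return total

samples = [negative_binomial_sample(2, 0.25) for _ in range(100_000)]
print(sum(samples) / len(samples))         # near the mean k/p = 8
```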
Example $2$:
A fair coin is tossed until the second time a head turns up. The distribution for the number of tosses is $u(x, 2, p)$. Thus the probability that $x$ tosses are needed to obtain two heads is found by letting $k = 2$ in the above formula. We obtain
$u(x, 2, 1/2) = {{x-1} \choose 1} \frac 1{2^x}\ ,$ for $x = 2, 3, \ldots$.
In Figure $2$ we give a graph of the distribution for $k = 2$ and $p = .25$. Note that the distribution is quite asymmetric, with a long tail reflecting the fact that large values of $x$ are possible.
Poisson Distribution
The Poisson distribution arises in many situations. It is safe to say that it is one of the three most important discrete probability distributions (the other two being the uniform and the binomial distributions). The Poisson distribution can be viewed as arising from the binomial distribution or from the exponential density. We shall now explain its connection with the former; its connection with the latter will be explained in the next section.
Suppose that we have a situation in which a certain kind of occurrence happens at random over a period of time. For example, the occurrences that we are interested in might be incoming telephone calls to a police station in a large city. We want to model this situation so that we can consider the probabilities of events such as more than 10 phone calls occurring in a 5-minute time interval. Presumably, in our example, there would be more incoming calls between 6:00 and 7:00 P.M. than between 4:00 and 5:00 A.M., and this fact would certainly affect the above probability. Thus, to have a hope of computing such probabilities, we must assume that the average rate, i.e., the average number of occurrences per minute, is a constant. This rate we will denote by $\lambda$. (Thus, in a given 5-minute time interval, we would expect about $5\lambda$ occurrences.) This means that if we were to apply our model to the two time periods given above, we would simply use different rates for the two time periods, thereby obtaining two different probabilities for the given event.
Our next assumption is that the number of occurrences in two non-overlapping time intervals are independent. In our example, this means that the events that there are $j$ calls between 5:00 and 5:15 P.M. and $k$ calls between 6:00 and 6:15 P.M. on the same day are independent.
We can use the binomial distribution to model this situation. We imagine that a given time interval is broken up into $n$ subintervals of equal length. If the subintervals are sufficiently short, we can assume that two or more occurrences happen in one subinterval with a probability which is negligible in comparison with the probability of at most one occurrence. Thus, in each subinterval, we are assuming that there is either 0 or 1 occurrence. This means that the sequence of subintervals can be thought of as a sequence of Bernoulli trials, with a success corresponding to an occurrence in the subinterval.
To decide upon the proper value of $p$, the probability of an occurrence in a given subinterval, we reason as follows. On the average, there are $\lambda t$ occurrences in a time interval of length $t$. If this time interval is divided into $n$ subintervals, then we would expect, using the Bernoulli trials interpretation, that there should be $np$ occurrences. Thus, we want $\lambda t = n p\ ,$ so $p = {\frac{\lambda t}{n}}$
We now wish to consider the random variable $X$, which counts the number of occurrences in a given time interval. We want to calculate the distribution of $X$. For ease of calculation, we will assume that the time interval is of length 1; for time intervals of arbitrary length $t$, see Exercise 11. We know that $P(X = 0) = b(n, p, 0) = (1 - p)^n = \Bigl(1 - {\lambda \over n}\Bigr)^n\ .$ For large $n$, this is approximately $e^{-\lambda}$. It is easy to calculate that for any fixed $k$, we have
${\frac{b(n, p, k)}{b(n, p, k-1)}} = {\frac{\lambda - (k-1)p}{kq}}$
which, for large $n$ (and therefore small $p$) is approximately $\lambda/k$. Thus, we have
$P(X = 1) \approx \lambda e^{-\lambda},$
and in general, $P(X = k) \approx {\frac{\lambda^k}{k!}} e^{-\lambda}$
The above distribution is the Poisson distribution. We note that it must be checked that the distribution given above really is a distribution, i.e., that its values are non-negative and sum to 1. (See Exercise 12.)
The Poisson distribution is used as an approximation to the binomial distribution when the parameters $n$ and $p$ are large and small, respectively (see Examples $3$ and $4$). However, the Poisson distribution also arises in situations where it may not be easy to interpret or measure the parameters $n$ and $p$ (see Example $5$).
Example $3$
A typesetter makes, on the average, one mistake per 1000 words. Assume that he is setting a book with 100 words to a page. Let $S_{100}$ be the number of mistakes that he makes on a single page. Then the exact probability distribution for $S_{100}$ would be obtained by considering $S_{100}$ as a result of 100 Bernoulli trials with $p = 1/1000$. The expected value of $S_{100}$ is $\lambda = 100(1/1000) = .1$. The exact probability that $S_{100} = j$ is $b(100,1/1000,j)$, and the Poisson approximation is
$\frac {e^{-.1}(.1)^j}{j!}.$
In Table $1$ we give, for various values of $n$ and $p$, the exact values computed by the binomial distribution and the Poisson approximation.
Table $1$: Poisson approximation to the binomial distribution.

| $j$ | Poisson $\lambda = .1$ | Binomial $n = 100$, $p = .001$ | Poisson $\lambda = 1$ | Binomial $n = 100$, $p = .01$ | Poisson $\lambda = 10$ | Binomial $n = 1000$, $p = .01$ |
|---|---|---|---|---|---|---|
| 0 | .9048 | .9048 | .3679 | .3660 | .0000 | .0000 |
| 1 | .0905 | .0905 | .3679 | .3697 | .0005 | .0004 |
| 2 | .0045 | .0045 | .1839 | .1849 | .0023 | .0022 |
| 3 | .0002 | .0002 | .0613 | .0610 | .0076 | .0074 |
| 4 | .0000 | .0000 | .0153 | .0149 | .0189 | .0186 |
| 5 |  |  | .0031 | .0029 | .0378 | .0374 |
| 6 |  |  | .0005 | .0005 | .0631 | .0627 |
| 7 |  |  | .0001 | .0001 | .0901 | .0900 |
| 8 |  |  | .0000 | .0000 | .1126 | .1128 |
| 9 |  |  |  |  | .1251 | .1256 |
| 10 |  |  |  |  | .1251 | .1257 |
| 11 |  |  |  |  | .1137 | .1143 |
| 12 |  |  |  |  | .0948 | .0952 |
| 13 |  |  |  |  | .0729 | .0731 |
| 14 |  |  |  |  | .0521 | .0520 |
| 15 |  |  |  |  | .0347 | .0345 |
| 16 |  |  |  |  | .0217 | .0215 |
| 17 |  |  |  |  | .0128 | .0126 |
| 18 |  |  |  |  | .0071 | .0069 |
| 19 |  |  |  |  | .0037 | .0036 |
| 20 |  |  |  |  | .0019 | .0018 |
| 21 |  |  |  |  | .0009 | .0009 |
| 22 |  |  |  |  | .0004 | .0004 |
| 23 |  |  |  |  | .0002 | .0002 |
| 24 |  |  |  |  | .0001 | .0001 |
| 25 |  |  |  |  | .0000 | .0000 |
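A few of these entries are easy to reproduce; the following Python sketch computes the exact binomial values and the Poisson approximations for the first pair of columns:

```python
from math import comb, exp, factorial

def binomial_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

# Typesetter example: lambda = .1, n = 100, p = .001.
for j in range(4):
    print(j, round(poisson_pmf(0.1, j), 4), round(binomial_pmf(100, 0.001, j), 4))
```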
Example $4$
In his book,1 Feller discusses the statistics of flying bomb hits in the south of London during the Second World War.
Assume that you live in a district of size 10 blocks by 10 blocks so that the total district is divided into 100 small squares. How likely is it that the square in which you live will receive no hits if the total area is hit by 400 bombs?
We assume that a particular bomb will hit your square with probability 1/100. Since there are 400 bombs, we can regard the number of hits that your square receives as the number of successes in a Bernoulli trials process with $n = 400$ and $p = 1/100$. Thus we can use the Poisson distribution with $\lambda = 400 \cdot 1/100 = 4$ to approximate the probability that your square will receive $j$ hits. This probability is $p(j) = e^{-4} 4^j/j!$. The expected number of squares that receive exactly $j$ hits is then $100 \cdot p(j)$. It is easy to write a program LondonBombs to simulate this situation and compare the expected number of squares with $j$ hits with the observed number. In Exercise 9.2.15 you are asked to compare the actual observed data with that predicted by the Poisson distribution.
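The program LondonBombs itself is not reproduced here, but a minimal sketch of such a simulation might look as follows (the seed is ours, chosen only for reproducibility):

```python
import random
from math import exp, factorial

random.seed(0)                                 # fixed seed, only for reproducibility
bombs, squares = 400, 100
hits = [0] * squares
for _ in range(bombs):
    hits[random.randrange(squares)] += 1       # each bomb lands on a random square

lam = bombs / squares                          # lambda = 4
for j in range(8):
    observed = hits.count(j)
    expected = squares * exp(-lam) * lam**j / factorial(j)
    print(j, observed, round(expected, 1))
```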
In Figure $3$, we have shown the simulated hits, together with a spike graph showing both the observed and predicted frequencies. The observed frequencies are shown as squares, and the predicted frequencies are shown as dots.
If the reader would rather not consider flying bombs, he is invited to instead consider an analogous situation involving cookies and raisins. We assume that we have made enough cookie dough for 500 cookies. We put 600 raisins in the dough, and mix it thoroughly. One way to look at this situation is that we have 500 cookies, and after placing the cookies in a grid on the table, we throw 600 raisins at the cookies. (See Exercise $22$.)
Example $5$
Suppose that in a certain fixed amount $A$ of blood, the average human has 40 white blood cells. Let $X$ be the random variable which gives the number of white blood cells in a random sample of size $A$ from a random individual. We can think of $X$ as binomially distributed with each white blood cell in the body representing a trial. If a given white blood cell turns up in the sample, then the trial corresponding to that blood cell was a success. Then $p$ should be taken as the ratio of $A$ to the total amount of blood in the individual, and $n$ will be the number of white blood cells in the individual. Of course, in practice, neither of these parameters is very easy to measure accurately, but presumably the number 40 is easy to measure. But for the average human, we then have $40 = np$, so we can think of $X$ as being Poisson distributed, with parameter $\lambda = 40$. In this case, it is easier to model the situation using the Poisson distribution than the binomial distribution.
To simulate a Poisson random variable on a computer, a good way is to take advantage of the relationship between the Poisson distribution and the exponential density. This relationship and the resulting simulation algorithm will be described in the next section.
Hypergeometric Distribution
Suppose that we have a set of $N$ balls, of which $k$ are red and $N-k$ are blue. We choose $n$ of these balls, without replacement, and define $X$ to be the number of red balls in our sample. The distribution of $X$ is called the hypergeometric distribution. We note that this distribution depends upon three parameters, namely $N$, $k$, and $n$. There does not seem to be a standard notation for this distribution; we will use the notation $h(N, k, n, x)$ to denote $P(X = x)$. This probability can be found by noting that there are ${N \choose n}$ different samples of size $n$, and the number of such samples with exactly $x$ red balls is obtained by multiplying the number of ways of choosing $x$ red balls from the set of $k$ red balls and the number of ways of choosing $n-x$ blue balls from the set of $N-k$ blue balls. Hence, we have
$h(N, k, n, x) = \frac{\binom{k}{x}\binom{N-k}{n-x}}{\binom{N}{n}}$
This distribution can be generalized to the case where there are more than two types of objects. (See Exercise 40.)
If we let $N$ and $k$ tend to $\infty$, in such a way that the ratio $k/N$ remains fixed, then the hypergeometric distribution tends to the binomial distribution with parameters $n$ and $p = k/N$. This is reasonable because if $N$ and $k$ are much larger than $n$, then whether we choose our sample with or without replacement should not affect the probabilities very much, and the experiment consisting of choosing with replacement yields a binomially distributed random variable (see Exercise 39).
An example of how this distribution might be used is given in Exercises 9 and 10. We now give another example involving the hypergeometric distribution. It illustrates a statistical test called Fisher’s Exact Test.
Example $6$:
It is often of interest to consider two traits, such as eye color and hair color, and to ask whether there is an association between the two traits. Two traits are associated if knowing the value of one of the traits for a given person allows us to predict the value of the other trait for that person. The stronger the association, the more accurate the predictions become. If there is no association between the traits, then we say that the traits are independent. In this example, we will use the traits of gender and political party, and we will assume that there are only two possible genders, female and male, and only two possible political parties, Democratic and Republican.
Suppose that we have collected data concerning these traits. To test whether there is an association between the traits, we first assume that there is no association between the two traits. This gives rise to an “expected" data set, in which knowledge of the value of one trait is of no help in predicting the value of the other trait. Our collected data set usually differs from this expected data set. If it differs by quite a bit, then we would tend to reject the assumption of independence of the traits. To nail down what is meant by “quite a bit," we decide which possible data sets differ from the expected data set by at least as much as ours does, and then we compute the probability that any of these data sets would occur under the assumption of independence of traits. If this probability is small, then it is unlikely that the difference between our collected data set and the expected data set is due entirely to chance.
Suppose that we have collected the data shown in Table $2$.
Table $2$: Observed data.

|  | Democrat | Republican |  |
|---|---|---|---|
| Female | 24 | 4 | 28 |
| Male | 8 | 14 | 22 |
|  | 32 | 18 | 50 |
The row and column sums are called marginal totals, or marginals. In what follows, we will denote the row sums by $t_{11}$ and $t_{12}$, and the column sums by $t_{21}$ and $t_{22}$. The $ij$th entry in the table will be denoted by $s_{ij}$. Finally, the size of the data set will be denoted by $n$. Thus, a general data table will look as shown in Table $3$.
Table $3$: General data table.

|  | Democrat | Republican |  |
|---|---|---|---|
| Female | $s_{11}$ | $s_{12}$ | $t_{11}$ |
| Male | $s_{21}$ | $s_{22}$ | $t_{12}$ |
|  | $t_{21}$ | $t_{22}$ | $n$ |
We now explain the model which will be used to construct the “expected" data set. In the model, we assume that the two traits are independent. We then put $t_{21}$ yellow balls and $t_{22}$ green balls, corresponding to the Democratic and Republican marginals, into an urn. We draw $t_{11}$ balls, without replacement, from the urn, and call these balls females. The $t_{12}$ balls remaining in the urn are called males. In the specific case under consideration, the probability of getting the actual data under this model is given by the expression
$\frac{\binom{32}{24}\binom{18}{4}}{\binom{50}{28}}$
i.e., a value of the hypergeometric distribution.
We are now ready to construct the expected data set. If we choose 28 balls out of 50, we should expect to see, on the average, the same percentage of yellow balls in our sample as in the urn. Thus, we should expect to see, on the average, $28(32/50) = 17.92 \approx 18$ yellow balls in our sample. (See Exercise $36$.) The other expected values are computed in exactly the same way. Thus, the expected data set is shown in Table $4$.
Table $4$: Expected data.

|  | Democrat | Republican |  |
|---|---|---|---|
| Female | 18 | 10 | 28 |
| Male | 14 | 8 | 22 |
|  | 32 | 18 | 50 |
We note that the value of $s_{11}$ determines the other three values in the table, since the marginals are all fixed. Thus, in considering the possible data sets that could appear in this model, it is enough to consider the various possible values of $s_{11}$. In the specific case at hand, what is the probability of drawing exactly $a$ yellow balls, i.e., what is the probability that $s_{11} = a$? It is
$\frac{\binom{32}{a}\binom{18}{28-a}}{\binom{50}{28}}$
We are now ready to decide whether our actual data differs from the expected data set by an amount which is greater than could be reasonably attributed to chance alone. We note that the expected number of female Democrats is 18, but the actual number in our data is 24. The other data sets which differ from the expected data set by more than ours correspond to those where the number of female Democrats equals 25, 26, 27, or 28. Thus, to obtain the required probability, we sum the above expression from $a = 24$ to $a = 28$. We obtain a value of $.000395$. Thus, we should reject the hypothesis that the two traits are independent.
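This tail probability is easy to check numerically; here is a sketch in Python, using the hypergeometric formula from earlier in this section:

```python
from math import comb

def hypergeom(N, k, n, x):
    # h(N, k, n, x) = C(k, x) C(N - k, n - x) / C(N, n)
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

# Marginals from Table 2: 32 Democrats, 18 Republicans, 28 females drawn.
tail = sum(hypergeom(50, 32, 28, a) for a in range(24, 29))
print(tail)   # approximately .000395
```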
Finally, we turn to the question of how to simulate a hypergeometric random variable $X$. Let us assume that the parameters for $X$ are $N$, $k$, and $n$. We imagine that we have a set of $N$ balls, labelled from 1 to $N$. We decree that the first $k$ of these balls are red, and the rest are blue. Suppose that we have chosen $m$ balls, and that $j$ of them are red. Then there are $k-j$ red balls left, and $N-m$ balls left. Thus, our next choice will be red with probability
$\frac{k-j}{N-m}$
So at this stage, we choose a random number in $[0, 1]$, and report that a red ball has been chosen if and only if the random number does not exceed the above expression. Then we update the values of $m$ and $j$, and continue until $n$ balls have been chosen.
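A minimal Python sketch of this sequential procedure:

```python
import random

def hypergeometric_sample(N, k, n):
    # Draw n balls without replacement from N balls, k of which are red;
    # return the number of red balls drawn.
    reds_left, balls_left, reds_drawn = k, N, 0
    for _ in range(n):
        if random.random() < reds_left / balls_left:
            reds_drawn += 1
            reds_left -= 1
        balls_left -= 1
    return reds_drawn

samples = [hypergeometric_sample(50, 32, 28) for _ in range(100_000)]
print(sum(samples) / len(samples))   # near the mean nk/N = 17.92
```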
Benford Distribution
Our next example of a distribution comes from the study of leading digits in data sets. It turns out that many data sets that occur “in real life" have the property that the first digits of the data are not uniformly distributed over the set $\{1, 2, \ldots, 9\}$. Rather, it appears that the digit 1 is most likely to occur, and that the distribution is monotonically decreasing on the set of possible digits. The Benford distribution appears, in many cases, to fit such data. Many explanations have been given for the occurrence of this distribution. Possibly the most convincing explanation is that this distribution is the only one that is invariant under a change of scale. If one thinks of certain data sets as somehow “naturally occurring," then the distribution of leading digits should be unaffected by the choice of units used to represent the data, i.e., the distribution should be invariant under a change of scale.
Theodore Hill2 gives a general description of the Benford distribution, when one considers the first $d$ digits of integers in a data set. We will restrict our attention to the first digit. In this case, the Benford distribution has distribution function $f(k) = \log_{10}(k+1) - \log_{10}(k)\ ,$ for $1 \le k \le 9$.
Mark Nigrini3 has advocated the use of the Benford distribution as a means of testing suspicious financial records such as bookkeeping entries, checks, and tax returns. His idea is that if someone were to “make up" numbers in these cases, the person would probably produce numbers that are fairly uniformly distributed, while if one were to use the actual numbers, the leading digits would roughly follow the Benford distribution. As an example, Nigrini analyzed President Clinton’s tax returns for a 13-year period. In Figure $4$, the Benford distribution values are shown as squares, and the President’s tax return data are shown as circles. One sees that in this example, the Benford distribution fits the data very well.
This distribution was discovered by the astronomer Simon Newcomb who stated the following in his paper on the subject: “That the ten digits do not occur with equal frequency must be evident to anyone making use of logarithm tables, and noticing how much faster the first pages wear out than the last ones. The first significant figure is oftener 1 than any other digit, and the frequency diminishes up to 9."4
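For reference, a short Python sketch that computes the Benford probabilities and confirms that they sum to 1:

```python
from math import log10

# Benford probabilities f(k) = log10(k + 1) - log10(k) for k = 1, ..., 9.
benford = {k: log10(k + 1) - log10(k) for k in range(1, 10)}
for k, prob in benford.items():
    print(k, round(prob, 3))          # .301, .176, .125, ..., .046
print(sum(benford.values()))          # 1.0
```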
Exercises
Exercise $1$
For which of the following random variables would it be appropriate to assign a uniform distribution?
1. Let $X$ represent the roll of one die.
2. Let $X$ represent the number of heads obtained in three tosses of a coin.
3. A roulette wheel has 38 possible outcomes: 0, 00, and 1 through 36. Let $X$ represent the outcome when a roulette wheel is spun.
4. Let $X$ represent the birthday of a randomly chosen person.
5. Let $X$ represent the number of tosses of a coin necessary to achieve a head for the first time.
Exercise $2$
Let $n$ be a positive integer. Let $S$ be the set of integers between 1 and $n$. Consider the following process: We remove a number from $S$ at random and write it down. We repeat this until $S$ is empty. The result is a permutation of the integers from 1 to $n$. Let $X$ denote this permutation. Is $X$ uniformly distributed?
Exercise $3$
Let $X$ be a random variable which can take on countably many values. Show that $X$ cannot be uniformly distributed.
Exercise $4$
Suppose we are attending a college which has 3000 students. We wish to choose a subset of size 100 from the student body. Let $X$ represent the subset, chosen using the following possible strategies. For which strategies would it be appropriate to assign the uniform distribution to $X$? If it is appropriate, what probability should we assign to each outcome?
1. Take the first 100 students who enter the cafeteria to eat lunch.
2. Ask the Registrar to sort the students by their Social Security number, and then take the first 100 in the resulting list.
3. Ask the Registrar for a set of cards, with each card containing the name of exactly one student, and with each student appearing on exactly one card. Throw the cards out of a third-story window, then walk outside and pick up the first 100 cards that you find.
Exercise $5$
Under the same conditions as in the preceding exercise, can you describe a procedure which, if used, would produce each possible outcome with the same probability? Can you describe such a procedure that does not rely on a computer or a calculator?
Exercise $6$
Let $X_1,\ X_2,\ \ldots,\ X_n$ be $n$ mutually independent random variables, each of which is uniformly distributed on the integers from 1 to $k$. Let $Y$ denote the minimum of the $X_i$’s. Find the distribution of $Y$.
Exercise $7$
A die is rolled until the first time $T$ that a six turns up.
1. What is the probability distribution for $T$?
2. Find $P(T > 3)$.
3. Find $P(T > 6 | T > 3)$.
Exercise $8$
If a coin is tossed a sequence of times, what is the probability that the first head will occur after the fifth toss, given that it has not occurred in the first two tosses?
Exercise $9$
A worker for the Department of Fish and Game is assigned the job of estimating the number of trout in a certain lake of modest size. She proceeds as follows: She catches 100 trout, tags each of them, and puts them back in the lake. One month later, she catches 100 more trout, and notes that 10 of them have tags.
1. Without doing any fancy calculations, give a rough estimate of the number of trout in the lake.
2. Let $N$ be the number of trout in the lake. Find an expression, in terms of $N$, for the probability that the worker would catch 10 tagged trout out of the 100 trout that she caught the second time.
3. Find the value of $N$ which maximizes the expression in part (b). This value is called the maximum likelihood estimate for the unknown quantity $N$. Hint: Consider the ratio of the expressions for successive values of $N$.
Exercise $10$
A census in the United States is an attempt to count everyone in the country. It is inevitable that many people are not counted. The U. S. Census Bureau proposed a way to estimate the number of people who were not counted by the latest census. Their proposal was as follows: In a given locality, let $N$ denote the actual number of people who live there. Assume that the census counted $n_1$ people living in this area. Now, another census was taken in the locality, and $n_2$ people were counted. In addition, $n_{12}$ people were counted both times.
1. Given $N$, $n_1$, and $n_2$, let $X$ denote the number of people counted both times. Find the probability that $X = k$, where $k$ is a fixed positive integer between 0 and $n_2$.
2. Now assume that $X = n_{12}$. Find the value of $N$ which maximizes the expression in part (a). Hint: Consider the ratio of the expressions for successive values of $N$.
Exercise $11$
Suppose that $X$ is a random variable which represents the number of calls coming in to a police station in a one-minute interval. In the text, we showed that $X$ could be modelled using a Poisson distribution with parameter $\lambda$, where this parameter represents the average number of incoming calls per minute. Now suppose that $Y$ is a random variable which represents the number of incoming calls in an interval of length $t$. Show that the distribution of $Y$ is given by
$P(Y=k)=e^{-\lambda t} \frac{(\lambda t)^k}{k !}$,
i.e., $Y$ is Poisson with parameter $\lambda t$.
Hint: Suppose a Martian were to observe the police station. Let us also assume that the basic time interval used on Mars is exactly $t$ Earth minutes. Finally, we will assume that the Martian understands the derivation of the Poisson distribution in the text. What would she write down for the distribution of $Y$?
Exercise $12$
Show that the values of the Poisson distribution, $P(X = k) = e^{-\lambda} \lambda^k / k!$, sum to 1.
Exercise $13$
The Poisson distribution with parameter $\lambda = .3$ has been assigned for the outcome of an experiment. Let $X$ be the outcome function. Find $P(X = 0)$, $P(X = 1)$, and $P(X > 1)$.
Exercise $14$
On the average, only 1 person in 1000 has a particular rare blood type.
1. Find the probability that, in a city of 10,000 people, no one has this blood type.
2. How many people would have to be tested to give a probability greater than 1/2 of finding at least one person with this blood type?
Exercise $15$
Write a program for the user to input $n$, $p$, and $k$ and have the program print out the exact value of $b(n, p, k)$ and the Poisson approximation to this value.
Exercise $16$
Assume that, during each second, a Dartmouth switchboard receives one call with probability .01 and no calls with probability .99. Use the Poisson approximation to estimate the probability that the operator will miss at most one call if she takes a 5-minute coffee break.
Exercise $17$
The probability of a royal flush in a poker hand is $p = 1/649{,}740$. How large must $n$ be to render the probability of having no royal flush in $n$ hands smaller than $1/e$?
Exercise $18$
A baker blends 600 raisins and 400 chocolate chips into a dough mix and, from this, makes 500 cookies.
1. Find the probability that a randomly picked cookie will have no raisins.
2. Find the probability that a randomly picked cookie will have exactly two chocolate chips.
3. Find the probability that a randomly chosen cookie will have at least two bits (raisins or chips) in it.
Exercise $19$
The probability that, in a bridge deal, one of the four hands has all hearts is approximately $6.3 \times 10^{-12}$. In a city with about 50,000 bridge players the resident probability expert is called on the average once a year (usually late at night) and told that the caller has just been dealt a hand of all hearts. Should she suspect that some of these callers are the victims of practical jokes?
Exercise $20$
An advertiser drops 10,000 leaflets on a city which has 2000 blocks. Assume that each leaflet has an equal chance of landing on each block. What is the probability that a particular block will receive no leaflets?
Exercise $21$
In a class of 80 students, the professor calls on 1 student chosen at random for a recitation in each class period. There are 32 class periods in a term.
1. Write a formula for the exact probability that a given student is called upon $j$ times during the term.
2. Write a formula for the Poisson approximation for this probability. Using your formula estimate the probability that a given student is called upon more than twice.
Exercise $22$
Assume that we are making raisin cookies. We put a box of 600 raisins into our dough mix, mix up the dough, then make from the dough 500 cookies. We then ask for the probability that a randomly chosen cookie will have 0, 1, 2, … raisins. Consider the cookies as trials in an experiment, and let $X$ be the random variable which gives the number of raisins in a given cookie. Then we can regard the number of raisins in a cookie as the result of $n = 600$ independent trials with probability $p = 1/500$ for success on each trial. Since $n$ is large and $p$ is small, we can use the Poisson approximation with $\lambda = 600(1/500) = 1.2$. Determine the probability that a given cookie will have at least five raisins.
Exercise $23$
For a certain experiment, the Poisson distribution with parameter $\lambda = m$ has been assigned. Show that a most probable outcome for the experiment is the integer value $k$ such that $m - 1 \leq k \leq m$. Under what conditions will there be two most probable values? Hint: Consider the ratio of successive probabilities.
Exercise $24$
When John Kemeny was chair of the Mathematics Department at Dartmouth College, he received an average of ten letters each day. On a certain weekday he received no mail and wondered if it was a holiday. To decide this he computed the probability that, in ten years, he would have at least 1 day without any mail. He assumed that the number of letters he received on a given day has a Poisson distribution. What probability did he find? Hint: Apply the Poisson distribution twice. First, to find the probability that, in 3000 days, he will have at least 1 day without mail, assuming each year has about 300 days on which mail is delivered.
Exercise $25$
Reese Prosser never puts money in a 10-cent parking meter in Hanover. He assumes that there is a probability of .05 that he will be caught. The first offense costs nothing, the second costs 2 dollars, and subsequent offenses cost 5 dollars each. Under his assumptions, how does the expected cost of parking 100 times without paying the meter compare with the cost of paying the meter each time?
Exercise $26$
Feller5 discusses the statistics of flying bomb hits in an area in the south of London during the Second World War. The area in question was divided into $24 \times 24 = 576$ small areas. The total number of hits was 537. There were 229 squares with 0 hits, 211 with 1 hit, 93 with 2 hits, 35 with 3 hits, 7 with 4 hits, and 1 with 5 or more. Assuming the hits were purely random, use the Poisson approximation to find the probability that a particular square would have exactly $k$ hits. Compute the expected number of squares that would have 0, 1, 2, 3, 4, and 5 or more hits and compare this with the observed results.
Exercise $27$
Assume that the probability that there is a significant accident in a nuclear power plant during one year’s time is .001. If a country has 100 nuclear plants, estimate the probability that there is at least one such accident during a given year.
Exercise $28$
An airline finds that 4 percent of the passengers that make reservations on a particular flight will not show up. Consequently, their policy is to sell 100 reserved seats on a plane that has only 98 seats. Find the probability that every person who shows up for the flight will find a seat available.
Exercise $29$
The king’s coinmaster boxes his coins 500 to a box and puts 1 counterfeit coin in each box. The king is suspicious, but, instead of testing all the coins in 1 box, he tests 1 coin chosen at random out of each of 500 boxes. What is the probability that he finds at least one fake? What is it if the king tests 2 coins from each of 250 boxes?
Exercise $30$
(From Kemeny6) Show that, if you make 100 bets on the number 17 at roulette at Monte Carlo (see Example 6.1.13), you will have a probability greater than 1/2 of coming out ahead. What is your expected winning?
Exercise $31$
In one of the first studies of the Poisson distribution, von Bortkiewicz7 considered the frequency of deaths from kicks in the Prussian army corps. From the study of 14 corps over a 20-year period, he obtained the data shown in Table $5$
Table $5$: Mule kicks.

| Number of deaths | Number of corps with $x$ deaths in a given year |
|---|---|
| 0 | 144 |
| 1 | 91 |
| 2 | 32 |
| 3 | 11 |
| 4 | 2 |
Fit a Poisson distribution to this data and see if you think that the Poisson distribution is appropriate.
Exercise $32$
It is often assumed that the auto traffic that arrives at the intersection during a unit time period has a Poisson distribution with expected value $m$. Assume that the number of cars $X$ that arrive at an intersection from the north in unit time has a Poisson distribution with parameter $\lambda = m$ and the number $Y$ that arrive from the west in unit time has a Poisson distribution with parameter $\lambda = \bar m$. If $X$ and $Y$ are independent, show that the total number $X + Y$ that arrive at the intersection in unit time has a Poisson distribution with parameter $\lambda = m + \bar m$.
Exercise $33$
Cars coming along Magnolia Street come to a fork in the road and have to choose either Willow Street or Main Street to continue. Assume that the number of cars that arrive at the fork in unit time has a Poisson distribution with parameter $\lambda = 4$. A car arriving at the fork chooses Main Street with probability 3/4 and Willow Street with probability 1/4. Let $X$ be the random variable which counts the number of cars that, in a given unit of time, pass by Joe’s Barber Shop on Main Street. What is the distribution of $X$?
Exercise $34$
In the appeal of the People v. Collins case (see Exercise 4.1.28), the counsel for the defense argued as follows: Suppose, for example, there are 5,000,000 couples in the Los Angeles area and the probability that a randomly chosen couple fits the witnesses’ description is 1/12,000,000. Then the probability that there are two such couples given that there is at least one is not at all small. Find this probability. (The California Supreme Court overturned the initial guilty verdict.)
Exercise $35$
A manufactured lot of brass turnbuckles has $S$ items of which $D$ are defective. A sample of $s$ items is drawn without replacement. Let $X$ be a random variable that gives the number of defective items in the sample. Let $p(d) = P(X = d)$.
1. Show that $p(d) = \frac{\binom{D}{d}\binom{S-D}{s-d}}{\binom{S}{s}}\ .$ Thus, $X$ is hypergeometric.
2. Prove the following identity, known as Euler's formula: $\sum_{d = 0}^{\min(D,s)}\binom{D}{d}\binom{S-D}{s-d} = \binom{S}{s}\ .$
Exercise $36$
A bin of 1000 turnbuckles has an unknown number $D$ of defectives. A sample of 100 turnbuckles has 2 defectives. The maximum likelihood estimate for $D$ is the number of defectives which gives the highest probability for obtaining the number of defectives observed in the sample. Guess this number $D$ and then write a computer program to verify your guess.
Exercise $37$
There are an unknown number of moose on Isle Royale (a National Park in Lake Superior). To estimate the number of moose, 50 moose are captured and tagged. Six months later 200 moose are captured and it is found that 8 of these were tagged. Estimate the number of moose on Isle Royale from these data, and then verify your guess by computer program (see Exercise 5.1.36).
Exercise $38$
A manufactured lot of buggy whips has 20 items, of which 5 are defective. A random sample of 5 items is chosen to be inspected. Find the probability that the sample contains exactly one defective item
1. if the sampling is done with replacement.
2. if the sampling is done without replacement.
Exercise $39$
Suppose that $N$ and $k$ tend to $\infty$ in such a way that $k/N$ remains fixed. Show that $h(N, k, n, x) \rightarrow b(n, k/N, x)\ .$
Exercise $40$
A bridge deck has 52 cards with 13 cards in each of four suits: spades, hearts, diamonds, and clubs. A hand of 13 cards is dealt from a shuffled deck. Find the probability that the hand has
1. a distribution of suits 4, 4, 3, 2 (for example, four spades, four hearts, three diamonds, two clubs).
2. a distribution of suits 5, 3, 3, 2.
Exercise $41$
Write a computer algorithm that simulates a hypergeometric random variable with parameters $N$, $k$, and $n$.
Exercise $42$
You are presented with four different dice. The first one has two sides marked 0 and four sides marked 4. The second one has a 3 on every side. The third one has a 2 on four sides and a 6 on two sides, and the fourth one has a 1 on three sides and a 5 on three sides. You allow your friend to pick any of the four dice he wishes. Then you pick one of the remaining three and you each roll your die. The person with the largest number showing wins a dollar. Show that you can choose your die so that you have probability 2/3 of winning no matter which die your friend picks. (See Tenney and Foster.8)
Exercise $43$
The students in a certain class were classified by hair color and eye color. The conventions used were: Brown and black hair were considered dark, and red and blonde hair were considered light; black and brown eyes were considered dark, and blue and green eyes were considered light. They collected the data shown in Table $6$
Table $6$: Observed data.

|  | Dark Eyes | Light Eyes |  |
|---|---|---|---|
| Dark Hair | 28 | 15 | 43 |
| Light Hair | 9 | 23 | 32 |
|  | 37 | 38 | 75 |
Are these traits independent? (See Example $6$.)
Exercise $44$
Suppose that in the hypergeometric distribution, we let $N$ and $k$ tend to $\infty$ in such a way that the ratio $k/N$ approaches a real number $p$ between 0 and 1. Show that the hypergeometric distribution tends to the binomial distribution with parameters $n$ and $p$.
Exercise $45$
1. Compute the leading digits of the first 100 powers of 2, and see how well these data fit the Benford distribution.
2. Multiply each number in the data set of part (a) by 3, and compare the distribution of the leading digits with the Benford distribution.
Exercise $46$
In the Powerball lottery, contestants pick 5 different integers between 1 and 45, and in addition, pick a bonus integer from the same range (the bonus integer can equal one of the first five integers chosen). Some contestants choose the numbers themselves, and others let the computer choose the numbers. The data shown in Table $7$ are the contestant-chosen numbers in a certain state on May 3, 1996. A spike graph of the data is shown in Figure $5$. Is it reasonable to assume that the contestants chose their numbers at random, i.e., that the chosen integers are uniformly distributed?
Table $7$: Numbers chosen by contestants in the Powerball lottery.
| Integer | Times Chosen | Integer | Times Chosen | Integer | Times Chosen |
|---|---|---|---|---|---|
| 1 | 2646 | 2 | 2934 | 3 | 3352 |
| 4 | 3000 | 5 | 3357 | 6 | 2892 |
| 7 | 3657 | 8 | 3025 | 9 | 3362 |
| 10 | 2985 | 11 | 3138 | 12 | 3043 |
| 13 | 2690 | 14 | 2423 | 15 | 2556 |
| 16 | 2456 | 17 | 2479 | 18 | 2276 |
| 19 | 2304 | 20 | 1971 | 21 | 2543 |
| 22 | 2678 | 23 | 2729 | 24 | 2414 |
| 25 | 2616 | 26 | 2426 | 27 | 2381 |
| 28 | 2059 | 29 | 2039 | 30 | 2298 |
| 31 | 2081 | 32 | 1508 | 33 | 1887 |
| 34 | 1463 | 35 | 1594 | 36 | 1354 |
| 37 | 1049 | 38 | 1165 | 39 | 1248 |
| 40 | 1493 | 41 | 1322 | 42 | 1423 |
| 43 | 1207 | 44 | 1259 | 45 | 1224 |
In this section, we will introduce some important probability density functions and give some examples of their use. We will also consider the question of how one simulates a given density using a computer.
Continuous Uniform Density
The simplest density function corresponds to the random variable $U$ whose value represents the outcome of the experiment consisting of choosing a real number at random from the interval $[a, b]$: $f(\omega) = \left \{ \matrix{ 1/(b - a), &\,\,\, \mbox{if}\,\,\, a \leq \omega \leq b, \cr 0, &\,\,\, \mbox{otherwise.}\cr}\right.$
It is easy to simulate this density on a computer. We simply calculate the expression $(b - a) rnd + a\ .$
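For instance, a minimal sketch in Python, with the standard library's random() standing in for rnd (the function name uniform_ab is ours):

```python
import random

def uniform_ab(a, b):
    """Simulate choosing a real number uniformly at random from [a, b]."""
    return (b - a) * random.random() + a

# Quick check: the sample mean should be close to (a + b) / 2.
samples = [uniform_ab(2, 5) for _ in range(100_000)]
print(sum(samples) / len(samples))  # approximately 3.5
```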
Exponential and Gamma Densities
The exponential density function is defined by
$f(x) = \left \{ \matrix{ \lambda e^{-\lambda x}, &\,\,\, \mbox{if}\,\,\, 0 \leq x < \infty, \cr 0, &\,\,\, \mbox{otherwise}. \cr} \right.$
Here $\lambda$ is any positive constant, depending on the experiment. The reader has seen this density in Example 2.2.11. In Figure $1$ we show graphs of several exponential densities for different choices of $\lambda$. The exponential density is often used to describe experiments involving a question of the form: How long until something happens? For example, the exponential density is often used to study the time between emissions of particles from a radioactive source.
The cumulative distribution function of the exponential density is easy to compute. Let $T$ be an exponentially distributed random variable with parameter $\lambda$. If $x \ge 0$, then we have
$$\begin{aligned} F(x) &= P(T \le x) \\ &= \int_0^x \lambda e^{-\lambda t}\,dt \\ &= 1 - e^{-\lambda x}\ . \end{aligned}$$
Both the exponential density and the geometric distribution share a property known as the “memoryless" property. This property was introduced in Example 5.1.1; it says that $P(T > r + s\,|\,T > r) = P(T > s)\ .$ This can be demonstrated to hold for the exponential density by computing both sides of this equation. The right-hand side is just $1 - F(s) = e^{-\lambda s}\ ,$ while the left-hand side is
$$\begin{aligned} \frac{P(T > r + s)}{P(T > r)} &= \frac{1 - F(r + s)}{1 - F(r)} \\ &= \frac{e^{-\lambda (r+s)}}{e^{-\lambda r}} \\ &= e^{-\lambda s}\ . \end{aligned}$$
There is a very important relationship between the exponential density and the Poisson distribution. We begin by defining $X_1,\ X_2,\ \ldots$ to be a sequence of independent exponentially distributed random variables with parameter $\lambda$. We might think of $X_i$ as denoting the amount of time between the $i$th and $(i+1)$st emissions of a particle by a radioactive source. (As we shall see in Chapter 6, we can think of the parameter $\lambda$ as representing the reciprocal of the average length of time between emissions. This parameter is a quantity that might be measured in an actual experiment of this type.)
We now consider a time interval of length $t$, and we let $Y$ denote the random variable which counts the number of emissions that occur in the time interval. We would like to calculate the distribution function of $Y$ (clearly, $Y$ is a discrete random variable). If we let $S_n$ denote the sum $X_1 + X_2 + \cdots + X_n$, then it is easy to see that $P(Y = n) = P(S_n \le t\ \mbox{and}\ S_{n+1} > t)\ .$ Since the event $S_{n+1} \le t$ is a subset of the event $S_n \le t$, the above probability is seen to be equal to
$P(S_n \le t) - P(S_{n+1} \le t)\ .\label{eq 5.8}$
We will show in Chapter 7 that the density of $S_n$ is given by the following formula: $$g_n(x) = \left \{ \begin{array}{ll} \lambda \frac{(\lambda x)^{n-1}}{(n-1)!}\, e^{-\lambda x}, & \mbox{if}\ x > 0, \\ 0, & \mbox{otherwise.} \end{array} \right.$$ This density is an example of a gamma density with parameters $\lambda$ and $n$. The general gamma density allows $n$ to be any positive real number. We shall not discuss this general density.
It is easy to show by induction on $n$ that the cumulative distribution function of $S_n$ is given by:
$$G_n(x) = \left \{ \begin{array}{ll} 1 - e^{-\lambda x}\Bigl(1 + \frac{\lambda x}{1!} + \cdots + \frac{(\lambda x)^{n-1}}{(n-1)!}\Bigr), & \mbox{if}\ x > 0, \\ 0, & \mbox{otherwise.} \end{array} \right.$$
Using this expression, the quantity in Equation 5.8 is easy to compute; we obtain $e^{-\lambda t}\frac{(\lambda t)^n}{n!}\ ,$
which the reader will recognize as the probability that a Poisson-distributed random variable, with parameter $\lambda t$, takes on the value $n$.
The above relationship will allow us to simulate a Poisson distribution, once we have found a way to simulate an exponential density. The following random variable does the job:
$Y = -{1\over\lambda} \log(rnd)\ .\label{eq 5.9}$
Using Corollary $2$ (below), one can derive the above expression (see Exercise $3$). We content ourselves for now with a short calculation that should convince the reader that the random variable $Y$ has the required property. We have
$$\begin{aligned} P(Y \le y) &= P\Bigl(-\frac{1}{\lambda} \log(rnd) \le y\Bigr) \\ &= P(\log(rnd) \ge -\lambda y) \\ &= P(rnd \ge e^{-\lambda y}) \\ &= 1 - e^{-\lambda y}\ . \end{aligned}$$
This last expression is seen to be the cumulative distribution function of an exponentially distributed random variable with parameter $\lambda$.
To simulate a Poisson random variable $W$ with parameter $\lambda$, we simply generate a sequence of values of an exponentially distributed random variable with the same parameter, and keep track of the subtotals $S_k$ of these values. We stop generating the sequence when the subtotal first exceeds 1. Assume that we find that $S_n \le 1 < S_{n+1}\ .$ Then the value $n$ is returned as a simulated value for $W$.
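The following sketch (ours, not the book's program) implements both steps: an exponential sampler based on Equation 5.9, and the subtotal-counting loop just described.

```python
import math
import random

def exp_rand(lam):
    """Simulate an exponential random variable with parameter lam,
    using Y = -(1/lam) * log(rnd)."""
    return -math.log(1.0 - random.random()) / lam  # 1 - rnd avoids log(0)

def poisson_rand(lam):
    """Simulate a Poisson random variable with parameter lam by counting
    how many exponential interarrival times fit in an interval of length 1."""
    subtotal, n = 0.0, 0
    while True:
        subtotal += exp_rand(lam)
        if subtotal > 1:
            return n
        n += 1

# The sample mean of a Poisson random variable with parameter 3 should be
# close to 3.
values = [poisson_rand(3.0) for _ in range(100_000)]
print(sum(values) / len(values))
```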
Example $1$
Suppose that customers arrive at random times at a service station with one server, and suppose that each customer is served immediately if no one is ahead of him, but must wait his turn in line otherwise. How long should each customer expect to wait? (We define the waiting time of a customer to be the length of time between the time that he arrives and the time that he begins to be served.)
Answer
Let us assume that the interarrival times between successive customers are given by random variables $X_1$, $X_2$, …, $X_n$ that are mutually independent and identically distributed with an exponential cumulative distribution function given by $F_X(t) = 1 - e^{-\lambda t}.$ Let us assume, too, that the service times for successive customers are given by random variables $Y_1$, $Y_2$, …, $Y_n$ that again are mutually independent and identically distributed with another exponential cumulative distribution function given by $F_Y(t) = 1 - e^{-\mu t}.$
The parameters $\lambda$ and $\mu$ represent, respectively, the reciprocals of the average time between arrivals of customers and the average service time of the customers. Thus, for example, the larger the value of $\lambda$, the smaller the average time between arrivals of customers. We can guess that the length of time a customer will spend in the queue depends on the relative sizes of the average interarrival time and the average service time.
It is easy to verify this conjecture by simulation. The program Queue simulates this queueing process. Let $N(t)$ be the number of customers in the queue at time $t$. Then we plot $N(t)$ as a function of $t$ for different choices of the parameters $\lambda$ and $\mu$ (see Figure $1$).
We note that when $\lambda < \mu$, then $1/\lambda > 1/\mu$, so the average interarrival time is greater than the average service time, i.e., customers are served more quickly, on average, than new ones arrive. Thus, in this case, it is reasonable to expect that $N(t)$ remains small. However, if $\lambda > \mu$ then customers arrive more quickly than they are served, and, as expected, $N(t)$ appears to grow without limit.
We can now ask: How long will a customer have to wait in the queue for service? To examine this question, we let $W_i$ be the length of time that the $i$th customer has to remain in the system (waiting in line and being served). Then we can present these data in a bar graph, using the program Queue, to give some idea of how the $W_i$ are distributed (see Figure $3$). (Here $\lambda = 1$ and $\mu = 1.1$.)
We see that these waiting times appear to be distributed exponentially. This is always the case when $\lambda < \mu$. The proof of this fact is too complicated to give here, but we can verify it by simulation for different choices of $\lambda$ and $\mu$, as above.
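The program Queue is not reproduced here, but the following sketch (all names ours) captures the same single-server process and records each customer's time in the system:

```python
import math
import random

def exp_rand(rate):
    """Exponential random variable with the given rate parameter."""
    return -math.log(1.0 - random.random()) / rate

def simulate_queue(lam, mu, num_customers):
    """Return the times W_i spent in the system (waiting plus service) for a
    single-server queue with exponential interarrival and service times."""
    arrival = 0.0
    server_free_at = 0.0
    times_in_system = []
    for _ in range(num_customers):
        arrival += exp_rand(lam)
        start_of_service = max(arrival, server_free_at)
        server_free_at = start_of_service + exp_rand(mu)
        times_in_system.append(server_free_at - arrival)
    return times_in_system

w = simulate_queue(lam=1.0, mu=1.1, num_customers=10_000)
print(sum(w) / len(w))  # average time in the system
```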
Functions of a Random Variable
Before continuing our list of important densities, we pause to consider random variables which are functions of other random variables. We will prove a general theorem that will allow us to derive expressions such as Equation 5.9.
Theorem $1$
Let $X$ be a continuous random variable, and suppose that $\phi(x)$ is a strictly increasing function on the range of $X$. Define $Y = \phi(X)$. Suppose that $X$ and $Y$ have cumulative distribution functions $F_X$ and $F_Y$ respectively. Then these functions are related by $F_Y(y) = F_X(\phi^{-1}(y)).$ If $\phi(x)$ is strictly decreasing on the range of $X$, then $F_Y(y) = 1 - F_X(\phi^{-1}(y))\ .$
Proof
Since $\phi$ is a strictly increasing function on the range of $X$, the events $(X \le \phi^{-1}(y))$ and $(\phi(X) \le y)$ are equal. Thus, we have $$\begin{aligned} F_Y(y) &= P(Y \le y) \\ &= P(\phi(X) \le y) \\ &= P(X \le \phi^{-1}(y)) \\ &= F_X(\phi^{-1}(y))\ . \end{aligned}$$
If $\phi(x)$ is strictly decreasing on the range of $X$, then we have $$\begin{aligned} F_Y(y) &= P(Y \leq y) \\ &= P(\phi(X) \leq y) \\ &= P(X \geq \phi^{-1}(y)) \\ &= 1 - P(X < \phi^{-1}(y)) \\ &= 1 - F_X(\phi^{-1}(y))\ . \end{aligned}$$ This completes the proof.
Corollary $1$
Let $X$ be a continuous random variable, and suppose that $\phi(x)$ is a strictly increasing function on the range of $X$. Define $Y = \phi(X)$. Suppose that the density functions of $X$ and $Y$ are $f_X$ and $f_Y$, respectively. Then these functions are related by
$f_Y(y) = f_X ( \phi^{-1}(y)){\frac{d}{dy}}\phi^{-1}(y)$
If $\phi(x)$ is strictly decreasing on the range of $X$, then $f_Y(y) = -f_X(\phi^{-1}(y)){\frac{d }{dy}}\phi^{-1}(y)$
Proof
This result follows from Theorem $1$ by using the Chain Rule.
If the function $\phi$ is neither strictly increasing nor strictly decreasing, then the situation is somewhat more complicated but can be treated by the same methods. For example, suppose that $Y = X^2$. Then $\phi(x) = x^2$, and $$\begin{aligned} F_Y(y) &= P(Y \leq y) \\ &= P(-\sqrt y \leq X \leq +\sqrt y) \\ &= P(X \leq +\sqrt y) - P(X \leq -\sqrt y) \\ &= F_X(\sqrt y) - F_X(-\sqrt y)\ . \end{aligned}$$ Moreover, $$\begin{aligned} f_Y(y) &= \frac{d}{dy} F_Y(y) \\ &= \frac{d}{dy} \bigl(F_X(\sqrt y) - F_X(-\sqrt y)\bigr) \\ &= \Bigl(f_X(\sqrt y) + f_X(-\sqrt y)\Bigr) \frac{1}{2\sqrt y}\ . \end{aligned}$$
We see that in order to express $F_Y$ in terms of $F_X$ when $Y = \phi(X)$, we have to express $P(Y \leq y)$ in terms of $P(X \leq x)$, and this process will depend in general upon the structure of $\phi$.
Simulation
Theorem $1$ tells us, among other things, how to simulate on the computer a random variable $Y$ with a prescribed cumulative distribution function $F$. We assume that $F(y)$ is strictly increasing for those values of $y$ where $0 < F(y) < 1$. For this purpose, let $U$ be a random variable which is uniformly distributed on $[0, 1]$. Then $U$ has cumulative distribution function $F_U(u) = u$. Now, if $F$ is the prescribed cumulative distribution function for $Y$, then to write $Y$ in terms of $U$ we first solve the equation
$F(y) = u$
for $y$ in terms of $u$. We obtain $y = F^{-1}(u)$. Note that since $F$ is an increasing function this equation always has a unique solution (see Figure 5.9). Then we set $Z = F^{-1}(U)$ and obtain, by Theorem $1$,
$F_Z(y) = F_U(F(y)) = F(y)\ ,$
since $F_U(u) = u$. Therefore, $Z$ and $Y$ have the same cumulative distribution function. Summarizing, we have the following.
Corollary $2$
If $F(y)$ is a given cumulative distribution function that is strictly increasing when $0 < F(y) < 1$ and if $U$ is a random variable with uniform distribution on $[0,1]$, then
$Y = F^{-1}(U)$ has the cumulative distribution function $F(y)$.
Thus, to simulate a random variable with a given cumulative distribution $F$ we need only set $Y = F^{-1}(\mbox{rnd})$.
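As a concrete illustration (our example): for the exponential density, $F(y) = 1 - e^{-\lambda y}$, so $F^{-1}(u) = -\log(1 - u)/\lambda$; since $1 - rnd$ and $rnd$ are identically distributed, this recovers Equation 5.9. A minimal sketch:

```python
import math
import random

def simulate_from_cdf(f_inverse):
    """Simulate a random variable with cumulative distribution F,
    given its inverse function, via Y = F^{-1}(rnd)."""
    return f_inverse(random.random())

# Example: exponential with parameter lam = 2, F^{-1}(u) = -log(1 - u) / lam.
lam = 2.0
samples = [simulate_from_cdf(lambda u: -math.log(1.0 - u) / lam)
           for _ in range(100_000)]
print(sum(samples) / len(samples))  # approximately 1 / lam = 0.5
```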
Normal Density
We now come to the most important density function, the normal density function. We have seen in Chapter 3 that the binomial distribution functions are bell-shaped, even for moderate size values of $n$. We recall that a binomially-distributed random variable with parameters $n$ and $p$ can be considered to be the sum of $n$ mutually independent 0-1 random variables. A very important theorem in probability theory, called the Central Limit Theorem, states that under very general conditions, if we sum a large number of mutually independent random variables, then the distribution of the sum can be closely approximated by a certain specific continuous density, called the normal density. This theorem will be discussed in Chapter 9.
The normal density function with parameters $\mu$ and $\sigma$ is defined as follows:
$f_X(x) = \frac 1{\sqrt{2\pi}\sigma} e^{-(x - \mu)^2/2\sigma^2}\ .$
The parameter $\mu$ represents the “center" of the density (and in Chapter 6, we will show that it is the average, or expected, value of the density). The parameter $\sigma$ is a measure of the “spread" of the density, and thus it is assumed to be positive. (In Chapter 6, we will show that $\sigma$ is the standard deviation of the density.) We note that it is not at all obvious that the above function is a density, i.e., that its integral over the real line equals 1. The cumulative distribution function is given by the formula
$F_X(x) = \int_{-\infty}^x \frac 1{\sqrt{2\pi}\sigma} e^{-(u - \mu)^2/2\sigma^2}\,du\ .$
In Figure $3$ we have included for comparison a plot of the normal density for the cases $\mu = 0$ and $\sigma = 1$, and $\mu = 0$ and $\sigma = 2$.
One cannot write $F_X$ in terms of simple functions. This leads to several problems. First of all, values of $F_X$ must be computed using numerical integration. Extensive tables exist containing values of this function (see Appendix A). Secondly, we cannot write $F^{-1}_X$ in closed form, so we cannot use Corollary $2$ to help us simulate a normal random variable. For this reason, special methods have been developed for simulating a normal distribution. One such method relies on the fact that if $U$ and $V$ are independent random variables with uniform densities on $[0,1]$, then the random variables $X = \sqrt{-2\log U} \cos 2\pi V$ and $Y = \sqrt{-2\log U} \sin 2\pi V$ are independent, and have normal density functions with parameters $\mu = 0$ and $\sigma = 1$. (This is not obvious, nor shall we prove it here. See Box and Muller.9)
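A sketch of the Box–Muller method (ours; Python's random() again stands in for rnd):

```python
import math
import random

def box_muller():
    """Return a pair of independent standard normal random variables
    generated from two independent uniform [0, 1] random variables."""
    u = 1.0 - random.random()  # in (0, 1], avoids log(0)
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

x, y = box_muller()  # two independent simulated standard normal values
```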
Let $Z$ be a normal random variable with parameters $\mu = 0$ and $\sigma = 1$. A normal random variable with these parameters is said to be a standard normal random variable. It is an important and useful fact that if we write $X = \sigma Z + \mu\ ,$ then $X$ is a normal random variable with parameters $\mu$ and $\sigma$. To show this, we will use Theorem $1$. We have $\phi(z) = \sigma z + \mu$, $\phi^{-1}(x) = (x - \mu)/\sigma$, and
$$\begin{aligned} F_X(x) &= F_Z\left(\frac{x - \mu}{\sigma}\right), \\ f_X(x) &= f_Z\left(\frac{x - \mu}{\sigma}\right) \cdot \frac{1}{\sigma} \\ &= \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x - \mu)^2/2\sigma^2}\ . \end{aligned}$$
The reader will note that this last expression is the normal density function with parameters $\mu$ and $\sigma$, as claimed.
We have seen above that it is possible to simulate a standard normal random variable $Z$. If we wish to simulate a normal random variable $X$ with parameters $\mu$ and $\sigma$, then we need only transform the simulated values for $Z$ using the equation $X = \sigma Z + \mu$.
Suppose that we wish to calculate the value of a cumulative distribution function for the normal random variable $X$, with parameters $\mu$ and $\sigma$. We can reduce this calculation to one concerning the standard normal random variable $Z$ as follows:
$$\begin{aligned} F_X(x) &= P(X \leq x) \\ &= P\left(Z \leq \frac{x - \mu}{\sigma}\right) \\ &= F_Z\left(\frac{x - \mu}{\sigma}\right)\ . \end{aligned}$$
This last expression can be found in a table of values of the cumulative distribution function for a standard normal random variable. Thus, we see that it is unnecessary to make tables of normal distribution functions with arbitrary $\mu$ and $\sigma$.
The process of changing a normal random variable to a standard normal random variable is known as standardization. If $X$ has a normal distribution with parameters $\mu$ and $\sigma$ and if $Z = \frac{X - \mu}\sigma\ ,$ then $Z$ is said to be the standardized version of $X$.
The following example shows how we use the standardized version of a normal random variable $X$ to compute specific probabilities relating to $X$.
Example $2$
Suppose that $X$ is a normally distributed random variable with parameters $\mu = 10$ and $\sigma = 3$. Find the probability that $X$ is between 4 and 16.
Answer
To solve this problem, we note that $Z = (X-10)/3$ is the standardized version of $X$. So, we have
$$\begin{aligned} P(4 \le X \le 16) &= P(X \le 16) - P(X \le 4) \\ &= F_X(16) - F_X(4) \\ &= F_Z\left(\frac{16 - 10}{3}\right) - F_Z\left(\frac{4 - 10}{3}\right) \\ &= F_Z(2) - F_Z(-2)\ . \end{aligned}$$
This last expression can be evaluated by using tabulated values of the standard normal distribution function (see Appendix A); when we use this table, we find that $F_Z(2) = .9772$ and $F_Z(-2) = .0228$. Thus, the answer is .9544.
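In place of a printed table, one can also evaluate $F_Z$ numerically using the standard identity $F_Z(x) = \bigl(1 + \mbox{erf}(x/\sqrt{2})\bigr)/2$; the sketch below (ours) checks the computation above.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function F_Z(x)."""
    return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0

mu, sigma = 10, 3
print(phi((16 - mu) / sigma) - phi((4 - mu) / sigma))
# approximately .9545; the rounded table values give .9544
```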
In Chapter 6, we will see that the parameter $\mu$ is the mean, or average value, of the random variable $X$. The parameter $\sigma$ is a measure of the spread of the random variable, and is called the standard deviation. Thus, the question asked in this example is of a typical type: what is the probability that a random variable has a value within two standard deviations of its average value?
Maxwell and Rayleigh Densities
Example $3$
Suppose that we drop a dart on a large table top, which we consider as the $xy$-plane, and suppose that the $x$ and $y$ coordinates of the dart point are independent and have a normal distribution with parameters $\mu = 0$ and $\sigma = 1$. How is the distance of the point from the origin distributed?
Answer
This problem arises in physics when it is assumed that a moving particle in $R^n$ has components of the velocity that are mutually independent and normally distributed and it is desired to find the density of the speed of the particle. The density in the case $n = 3$ is called the Maxwell density.
The density in the case $n = 2$ (i.e., the dart board experiment described above) is called the Rayleigh density. We can simulate this case by picking independently a pair of coordinates $(x,y)$, each from a normal distribution with $\mu = 0$ and $\sigma = 1$ on $(-\infty,\infty)$, calculating the distance $r = \sqrt{x^2 + y^2}$ of the point $(x,y)$ from the origin, repeating this process a large number of times, and then presenting the results in a bar graph. The results are shown in Figure $4$.

We have also plotted the theoretical density $f(r) = re^{-r^2/2}\ .$ This will be derived in Chapter 7; see Example 7.2.5.
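A sketch of this simulation (ours), using the Box–Muller method above to produce the normal coordinates:

```python
import math
import random

def rayleigh_sample():
    """Distance from the origin of a dart whose coordinates are
    independent standard normal random variables."""
    u = 1.0 - random.random()  # avoids log(0)
    v = random.random()
    x = math.sqrt(-2.0 * math.log(u)) * math.cos(2.0 * math.pi * v)
    y = math.sqrt(-2.0 * math.log(u)) * math.sin(2.0 * math.pi * v)
    return math.sqrt(x * x + y * y)

samples = [rayleigh_sample() for _ in range(100_000)]
print(sum(samples) / len(samples))  # approximately sqrt(pi/2), about 1.2533
```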
Chi-Squared Density
We return to the problem of independence of traits discussed in Example 5.1.6. It is frequently the case that we have two traits, each of which have several different values. As was seen in the example, quite a lot of calculation was needed even in the case of two values for each trait. We now give another method for testing independence of traits, which involves much less calculation.
Example $4$
Suppose that we have the data shown in Table $1$ concerning grades and gender of students in a Calculus class.
Table $1$: Calculus class data.
            Female   Male
A             37      56      93
B             63      60     123
C             47      43      90
Below C        5       8      13
             152     167     319
We can use the same sort of model in this situation as was used in Example 5.1.6. We imagine that we have an urn with 319 balls of two colors, say blue and red, corresponding to females and males, respectively. We now draw 93 balls, without replacement, from the urn. These balls correspond to the grade of A. We continue by drawing 123 balls, which correspond to the grade of B. When we finish, we have four sets of balls, with each ball belonging to exactly one set. (We could have stipulated that the balls were of four colors, corresponding to the four possible grades. In this case, we would draw a subset of size 152, which would correspond to the females. The balls remaining in the urn would correspond to the males. The choice does not affect the final determination of whether we should reject the hypothesis of independence of traits.)
The expected data set can be determined in exactly the same way as in Example 5.1.6. If we do this, we obtain the expected values shown in Table $2$.
Table $2$: Expected data.

            Female   Male
A            44.3    48.7     93
B            58.6    64.4    123
C            42.9    47.1     90
Below C       6.2     6.8     13
            152     167      319
Even if the traits are independent, we would still expect to see some differences between the numbers in corresponding boxes in the two tables. However, if the differences are large, then we might suspect that the two traits are not independent. In Example 5.1.6, we used the probability distribution of the various possible data sets to compute the probability of finding a data set that differs from the expected data set by at least as much as the actual data set does. We could do the same in this case, but the amount of computation is enormous.
Instead, we will describe a single number which does a good job of measuring how far a given data set is from the expected one. To quantify how far apart the two sets of numbers are, we could sum the squares of the differences of the corresponding numbers. (We could also sum the absolute values of the differences, but we would not want to sum the differences.) Suppose that we have data in which we expect to see 10 objects of a certain type, but instead we see 18, while in another case we expect to see 50 objects of a certain type, but instead we see 58. Even though the two differences are about the same, the first difference is more surprising than the second, since the expected number of outcomes in the second case is quite a bit larger than the expected number in the first case. One way to correct for this is to divide the individual squares of the differences by the expected number for that box. Thus, if we label the values in the eight boxes in the first table by $O_i$ (for observed values) and the values in the eight boxes in the second table by $E_i$ (for expected values), then the following expression might be a reasonable one to use to measure how far the observed data is from what is expected: $\sum_{i = 1}^8 \frac{(O_i - E_i)^2}{E_i}\ .$ This expression is a random variable, which is usually denoted by the symbol $\chi^2$, pronounced “ki-squared." It is called this because, under the assumption of independence of the two traits, the density of this random variable can be computed and is approximately equal to a density called the chi-squared density. We choose not to give the explicit expression for this density, since it involves the gamma function, which we have not discussed. The chi-squared density is, in fact, a special case of the general gamma density.
In applying the chi-squared density, tables of values of this density are used, as in the case of the normal density. The chi-squared density has one parameter $n$, which is called the number of degrees of freedom. The number $n$ is usually easy to determine from the problem at hand. For example, if we are checking two traits for independence, and the two traits have $a$ and $b$ values, respectively, then the number of degrees of freedom of the random variable $\chi^2$ is $(a-1)(b-1)$. So, in the example at hand, the number of degrees of freedom is 3.
We recall that in this example, we are trying to test for independence of the two traits of gender and grades. If we assume these traits are independent, then the ball-and-urn model given above gives us a way to simulate the experiment. Using a computer, we have performed 1000 experiments, and for each one, we have calculated a value of the random variable $\chi^2$. The results are shown in Figure $5$, together with the chi-squared density function with three degrees of freedom.
As we stated above, if the value of the random variable $\chi^2$ is large, then we would tend not to believe that the two traits are independent. But how large is large? The actual value of this random variable for the data above is 4.13. In Figure $5$, we have shown the chi-squared density with 3 degrees of freedom. It can be seen that the value 4.13 is larger than most of the values taken on by this random variable.
Typically, a statistician will compute the value $v$ of the random variable $\chi^2$, just as we have done. Then, by looking in a table of values of the chi-squared density, a value $v_0$ is determined which is only exceeded 5% of the time. If $v \ge v_0$, the statistician rejects the hypothesis that the two traits are independent. In the present case, $v_0 = 7.815$, so we would not reject the hypothesis that the two traits are independent.
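The statistic itself is a short computation; the sketch below (ours) reproduces the value 4.13 from the data in Tables $1$ and $2$.

```python
# Observed counts from Table 1 (rows: A, B, C, Below C; columns: female, male).
observed = [[37, 56], [63, 60], [47, 43], [5, 8]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n  # expected count, as in Table 2
        chi2 += (o - e) ** 2 / e

print(round(chi2, 2))  # 4.13, below the 5% cutoff of 7.815 for 3 degrees of freedom
```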
Cauchy Density
The following example is from Feller.10
Example $5$
Suppose that a mirror is mounted on a vertical axis, and is free to revolve about that axis. The axis of the mirror is 1 foot from a straight wall of infinite length. A pulse of light is shone onto the mirror, and the reflected ray hits the wall. Let $\phi$ be the angle between the reflected ray and the line that is perpendicular to the wall and that runs through the axis of the mirror. We assume that $\phi$ is uniformly distributed between $-\pi/2$ and $\pi/2$. Let $X$ represent the distance between the point on the wall that is hit by the reflected ray and the point on the wall that is closest to the axis of the mirror. We now determine the density of $X$.
Let $B$ be a fixed positive quantity. Then $X \ge B$ if and only if $\tan(\phi) \ge B$, which happens if and only if $\phi \ge \arctan(B)$. This happens with probability $\frac{\pi/2 - \arctan(B)}{\pi}\ .$ Thus, for positive $B$, the cumulative distribution function of $X$ is $F(B) = 1 - \frac{\pi/2 - \arctan(B)}{\pi}\ .$ Therefore, the density function for positive $B$ is $f(B) = \frac{1}{\pi (1 + B^2)}\ .$ Since the physical situation is symmetric with respect to $\phi = 0$, it is easy to see that the above expression for the density is correct for negative values of $B$ as well.
The Law of Large Numbers, which we will discuss in Chapter 8, states that in many cases, if we take the average of independent values of a random variable, then the average approaches a specific number as the number of values increases. It turns out that if one does this with a Cauchy-distributed random variable, the average does not approach any specific number.
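Since $F(B) = 1/2 + \arctan(B)/\pi$ here, Corollary $2$ gives the simulation formula $X = \tan(\pi(rnd - 1/2))$. The sketch below (ours) uses it to watch the running average wander instead of settling down.

```python
import math
import random

def cauchy_sample():
    """Simulate the Cauchy density via X = tan(pi * (rnd - 1/2))."""
    return math.tan(math.pi * (random.random() - 0.5))

# Running averages of Cauchy samples do not approach any fixed number.
total = 0.0
for n in range(1, 100_001):
    total += cauchy_sample()
    if n % 20_000 == 0:
        print(n, total / n)
```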
Exercises
Exercise $1$
Choose a number $U$ from the unit interval $[0,1]$ with uniform distribution. Find the cumulative distribution and density for the random variables
1. $Y = U + 2$.
2. $Y = U^3$.
Exercise $2$
Choose a number $U$ from the interval $[0,1]$ with uniform distribution. Find the cumulative distribution and density for the random variables
1. $Y = 1/(U + 1)$.
2. $Y = \log(U + 1)$.
Exercise $3$
Use Corollary $2$ to derive the expression for the random variable given in Equation 5.9. Hint: The random variables $1 - rnd$ and $rnd$ are identically distributed.
Exercise $4$
Suppose we know a random variable $Y$ as a function of the uniform random variable $U$: $Y = \phi(U)$, and suppose we have calculated the cumulative distribution function $F_Y(y)$ and thence the density $f_Y(y)$. How can we check whether our answer is correct? An easy simulation provides the answer: Make a bar graph of $Y = \phi(\mbox{$rnd$})$ and compare the result with the graph of $f_Y(y)$. These graphs should look similar. Check your answers to Exercises 5.2.1 and 5.2.2 by this method.
Exercise $5$
Choose a number $U$ from the interval $[0,1]$ with uniform distribution. Find the cumulative distribution and density for the random variables
1. $Y = |U - 1/2|$.
2. $Y = (U - 1/2)^2$.
Exercise $6$
Check your results for Exercise $5$ by simulation as described in Exercise $4$.
Exercise $7$
Explain how you can generate a random variable whose cumulative distribution function is $$F(x) = \left \{ \begin{array}{ll} 0, & \mbox{if}\ x < 0, \\ x^2, & \mbox{if}\ 0 \leq x \leq 1, \\ 1, & \mbox{if}\ x > 1. \end{array} \right.$$
Exercise $8$
Write a program to generate a sample of 1000 random outcomes, each of which is chosen from the distribution given in Exercise $7$. Plot a bar graph of your results and compare this empirical density with the density for the cumulative distribution given in Exercise $7$.
Exercise $9$
Let $U$, $V$ be random numbers chosen independently from the interval $[0,1]$ with uniform distribution. Find the cumulative distribution and density of each of the variables
1. $Y = U + V$.
2. $Y = |U - V|$.
Exercise $10$
Let $U$, $V$ be random numbers chosen independently from the interval $[0,1]$. Find the cumulative distribution and density for the random variables
1. $Y = \max(U,V)$.
2. $Y = \min(U,V)$.
Exercise $11$
Write a program to simulate the random variables of Exercises $9$ and $10$ and plot a bar graph of the results. Compare the resulting empirical density with the density found in Exercises $9$ and $10$.
Exercise $12$
A number $U$ is chosen at random in the interval $[0,1]$. Find the probability that
1. $R = U^2 < 1/4$.
2. $S = U(1 - U) < 1/4$.
3. $T = U/(1 - U) < 1/4$.
Exercise $13$
Find the cumulative distribution function $F$ and the density function $f$ for each of the random variables $R$, $S$, and $T$ in Exercise $12$.
Exercise $14$
A point $P$ in the unit square has coordinates $X$ and $Y$ chosen at random in the interval $[0,1]$. Let $D$ be the distance from $P$ to the nearest edge of the square, and $E$ the distance to the nearest corner. What is the probability that
1. $D < 1/4$?
2. $E < 1/4$?
Exercise $15$
In Exercise $14$ find the cumulative distribution $F$ and density $f$ for the random variable $D$.
Exercise $16$
Let $X$ be a random variable with density function $$f_X(x) = \left \{ \begin{array}{ll} cx(1 - x), & \mbox{if}\ 0 < x < 1, \\ 0, & \mbox{otherwise.} \end{array} \right.$$
1. What is the value of $c$?
2. What is the cumulative distribution function $F_X$ for $X$?
3. What is the probability that $X < 1/4$?
Exercise $17$
Let $X$ be a random variable with cumulative distribution function $$F(x) = \left \{ \begin{array}{ll} 0, & \mbox{if}\ x < 0, \\ \sin^2(\pi x/2), & \mbox{if}\ 0 \leq x \leq 1, \\ 1, & \mbox{if}\ 1 < x. \end{array} \right.$$
1. What is the density function $f_X$ for $X$?
2. What is the probability that $X < 1/4$?
Exercise $18$
Let $X$ be a random variable with cumulative distribution function $F_X$, and let $Y = X + b$, $Z = aX$, and $W = aX + b$, where $a$ and $b$ are any constants. Find the cumulative distribution functions $F_Y$, $F_Z$, and $F_W$. Hint: The cases $a > 0$, $a = 0$, and $a < 0$ require different arguments.
Exercise $19$
Let $X$ be a random variable with density function $f_X$, and let $Y = X + b$, $Z = aX$, and $W = aX + b$, where $a \ne 0$. Find the density functions $f_Y$, $f_Z$, and $f_W$. (See Exercise $18$.)
Exercise $20$
Let $X$ be a random variable uniformly distributed over $[c,d]$, and let $Y = aX + b$. For what choice of $a$ and $b$ is $Y$ uniformly distributed over $[0,1]$?
Exercise $21$
Let $X$ be a random variable with cumulative distribution function $F$ strictly increasing on the range of $X$. Let $Y = F(X)$. Show that $Y$ is uniformly distributed in the interval $[0,1]$. (The formula $X = F^{-1}(Y)$ then tells us how to construct $X$ from a uniform random variable $Y$.)
Exercise $22$
Let $X$ be a random variable with cumulative distribution function $F$. The median of $X$ is the value $m$ for which $F(m) = 1/2$. Then $X < m$ with probability 1/2 and $X > m$ with probability 1/2. Find $m$ if $X$ is
1. uniformly distributed over the interval $[a,b]$.
2. normally distributed with parameters $\mu$ and $\sigma$.
3. exponentially distributed with parameter $\lambda$.
Exercise $23$
Let $X$ be a random variable with density function $f_X$. The mean of $X$ is the value $\mu = \int x f_X(x)\,dx$. Then $\mu$ gives an average value for $X$ (see Section 6.3). Find $\mu$ if $X$ is distributed uniformly, normally, or exponentially, as in Exercise $22$.
Exercise $24$
Let $X$ be a random variable with density function $f_X$. The mode of $X$ is the value $M$ for which $f(M)$ is maximum. Then values of $X$ near $M$ are most likely to occur. Find $M$ if $X$ is distributed normally or exponentially, as in Exercise $22$. What happens if $X$ is distributed uniformly?
Exercise $25$
Let $X$ be a random variable normally distributed with parameters $\mu = 70$, $\sigma = 10$. Estimate
1. $P(X > 50)$.
2. $P(X < 60)$.
3. $P(X > 90)$.
4. $P(60 < X < 80)$.
Exercise $26$
Bridies’ Bearing Works manufactures bearing shafts whose diameters are normally distributed with parameters $\mu = 1$, $\sigma = .002$. The buyer’s specifications require these diameters to be $1.000 \pm .003$ cm. What fraction of the manufacturer’s shafts are likely to be rejected? If the manufacturer improves her quality control, she can reduce the value of $\sigma$. What value of $\sigma$ will ensure that no more than 1 percent of her shafts are likely to be rejected?
Exercise $27$
A final examination at Podunk University is constructed so that the test scores are approximately normally distributed, with parameters $\mu$ and $\sigma$. The instructor assigns letter grades to the test scores as shown in Table $3$ (this is the process of “grading on the curve").
Table $3$: Grading on the curve.
Test Score Letter grade
$\mu + \sigma < x$ A
$\mu < x < \mu + \sigma$ B
$\mu - \sigma < x < \mu$ C
$\mu - 2\sigma < x < \mu - \sigma$ D
$x < \mu - 2\sigma$ F
What fraction of the class gets A, B, C, D, F?
Exercise $28$
(Ross11) An expert witness in a paternity suit testifies that the length (in days) of a pregnancy, from conception to delivery, is approximately normally distributed, with parameters $\mu = 270$, $\sigma = 10$. The defendant in the suit is able to prove that he was out of the country during the period from 290 to 240 days before the birth of the child. What is the probability that the defendant was in the country when the child was conceived?
Exercise $29$
Suppose that the time (in hours) required to repair a car is an exponentially distributed random variable with parameter $\lambda = 1/2$. What is the probability that the repair time exceeds 4 hours? If it exceeds 4 hours what is the probability that it exceeds 8 hours?
Exercise $30$
Suppose that the number of years a car will run is exponentially distributed with parameter $\mu = 1/4$. If Prosser buys a used car today, what is the probability that it will still run after 4 years?
Exercise $31$
Let $U$ be a uniformly distributed random variable on $[0,1]$. What is the probability that the equation $x^2 + 4Ux + 1 = 0$ has two distinct real roots $x_1$ and $x_2$?
Exercise $32$
Write a program to simulate the random variables whose densities are given by the following, making a suitable bar graph of each and comparing the exact density with the bar graph.
1. $f_X(x) = e^{-x}\ \ \mbox{on}\,\, [0,\infty)\,\, (\mbox{but\,\,just\,\,do\,\,it\,\,on\,\,} [0,10]).$
2. $f_X(x) = 2x\ \ \mbox{on}\,\, [0,1].$
3. $f_X(x) = 3x^2\ \ \mbox{on}\,\, [0,1].$
4. $f_X(x) = 4|x - 1/2|\ \ \mbox{on}\,\, [0,1].$
Exercise $33$
Suppose we are observing a process such that the time between occurrences is exponentially distributed with $\lambda = 1/30$ (i.e., the average time between occurrences is 30 minutes). Suppose that the process starts at a certain time and we start observing the process 3 hours later. Write a program to simulate this process. Let $T$ denote the length of time that we have to wait, after we start our observation, for an occurrence. Have your program keep track of $T$. What is an estimate for the average value of $T$?
Exercise $34$
Jones puts in two new lightbulbs: a 60 watt bulb and a 100 watt bulb. It is claimed that the lifetime of the 60 watt bulb has an exponential density with average lifetime 200 hours ($\lambda = 1/200$). The 100 watt bulb also has an exponential density but with average lifetime of only 100 hours ($\lambda = 1/100$). Jones wonders what is the probability that the 100 watt bulb will outlast the 60 watt bulb.
If $X$ and $Y$ are two independent random variables with exponential densities $f(x) = \lambda e^{-\lambda x}$ and $g(x) = \mu e^{-\mu x}$, respectively, then the probability that $X$ is less than $Y$ is given by $P(X < Y) = \int_0^\infty f(x)(1 - G(x))\,dx,$ where $G(x)$ is the cumulative distribution function for $g(x)$. Explain why this is the case. Use this to show that $P(X < Y) = \frac \lambda{\lambda + \mu}$ and to answer Jones’s question.
Exercise $35$
Consider the simple queueing process of Example $1$. Suppose that you watch the size of the queue. If there are $j$ people in the queue the next time the queue size changes it will either decrease to $j - 1$ or increase to $j + 1$. Use the result of Exercise $34$ to show that the probability that the queue size decreases to $j - 1$ is $\mu/(\mu + \lambda)$ and the probability that it increases to $j + 1$ is $\lambda/(\mu + \lambda)$. When the queue size is 0 it can only increase to 1. Write a program to simulate the queue size. Use this simulation to help formulate a conjecture containing conditions on $\mu$ and $\lambda$ that will ensure that the queue will have times when it is empty.
Exercise $36$
Let $X$ be a random variable having an exponential density with parameter $\lambda$. Find the density for the random variable $Y = rX$, where $r$ is a positive real number.
Exercise $37$
Let $X$ be a random variable having a normal density and consider the random variable $Y = e^X$. Then $Y$ has a log normal density. Find the density of $Y$.
Exercise $38$
Let $X_1$ and $X_2$ be independent random variables and for $i = 1, 2$, let $Y_i = \phi_i(X_i)$, where $\phi_i$ is strictly increasing on the range of $X_i$. Show that $Y_1$ and $Y_2$ are independent. Note that the same result is true without the assumption that the $\phi_i$’s are strictly increasing, but the proof is more difficult.
5.R: References
1. ibid., p. 161.↩
2. T. P. Hill, “The Significant Digit Phenomenon," The American Mathematical Monthly, vol. 102, no. 4 (April 1995), pp. 322-327.↩
3. M. Nigrini, “Detecting Biases and Irregularities in Tabulated Data," working paper.↩
4. S. Newcomb, “Note on the frequency of use of the different digits in natural numbers," American Journal of Mathematics, vol. 4 (1881), pp. 39-40.↩
5. ibid., p. 161.↩
6. Private communication.↩
7. L. von Bortkiewicz, Das Gesetz der kleinen Zahlen (Leipzig: Teubner, 1898), p. 24.↩
8. R. L. Tenney and C. C. Foster, “Non-transitive Dominance," Math. Mag. 49 (1976), no. 3, pp. 115-120.↩
9. G. E. P. Box and M. E. Muller, “A Note on the Generation of Random Normal Deviates," Ann. of Math. Stat. 29 (1958), pp. 610-611.↩
10. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2 (New York: Wiley, 1966).↩
11. S. Ross, A First Course in Probability, 2d ed. (New York: Macmillan, 1984).↩
6.1: Expected Value of Discrete Random Variables

When a large collection of numbers is assembled, as in a census, we are usually interested not in the individual numbers, but rather in certain descriptive quantities such as the average or the median. In general, the same is true for the probability distribution of a numerically-valued random variable. In this and in the next section, we shall discuss two such descriptive quantities: the expected value and the variance. Both of these quantities apply only to numerically-valued random variables, and so we assume, in these sections, that all random variables have numerical values. To give some intuitive justification for our definition, we consider the following game.
Average Value
A die is rolled. If an odd number turns up, we win an amount equal to this number; if an even number turns up, we lose an amount equal to this number. For example, if a two turns up we lose 2, and if a three comes up we win 3. We want to decide if this is a reasonable game to play. We first try simulation. The program Die carries out this simulation.
The program prints the frequency and the relative frequency with which each outcome occurs. It also calculates the average winnings. We have run the program twice. The results are shown in Table $1$.
Table $1$: Frequencies for dice game.

Winning   Frequency   Relative Frequency   Frequency   Relative Frequency
1 17 .17 1681 .1681
-2 17 .17 1678 .1678
3 16 .16 1626 .1626
-4 18 .18 1696 .1696
5 16 .16 1686 .1686
-6 16 .16 1633 .1633
In the first run we have played the game 100 times. In this run our average gain is $-.57$. It looks as if the game is unfavorable, and we wonder how unfavorable it really is. To get a better idea, we have played the game 10,000 times. In this case our average gain is $-.4949$.
We note that the relative frequency of each of the six possible outcomes is quite close to the probability 1/6 for this outcome. This corresponds to our frequency interpretation of probability. It also suggests that for very large numbers of plays, our average gain should be
$$\begin{aligned} \mu &= 1\Bigl(\frac{1}{6}\Bigr) - 2\Bigl(\frac{1}{6}\Bigr) + 3\Bigl(\frac{1}{6}\Bigr) - 4\Bigl(\frac{1}{6}\Bigr) + 5\Bigl(\frac{1}{6}\Bigr) - 6\Bigl(\frac{1}{6}\Bigr) \\ &= \frac{9}{6} - \frac{12}{6} = -\frac{3}{6} = -.5\ . \end{aligned}$$ This agrees quite well with our average gain for 10,000 plays.
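The program Die is not reproduced here, but a minimal sketch of the same simulation (ours) is:

```python
import random

def play():
    """One play: win the amount shown on an odd roll, lose it on an even roll."""
    roll = random.randint(1, 6)
    return roll if roll % 2 == 1 else -roll

plays = [play() for _ in range(10_000)]
print(sum(plays) / len(plays))  # approximately -0.5
```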
We note that the value we have chosen for the average gain is obtained by taking the possible outcomes, multiplying by the probability, and adding the results. This suggests the following definition for the expected outcome of an experiment.
Expected Value
Definition: expected value
Let $X$ be a numerically-valued discrete random variable with sample space $\Omega$ and distribution function $m(x)$. The expected value $E(X)$ is defined by
$E(X) = \sum_{x \in \Omega} x\, m(x)\ ,$
provided this sum converges absolutely.
We often refer to the expected value as the mean and denote $E(X)$ by $\mu$ for short. If the above sum does not converge absolutely, then we say that $X$ does not have an expected value.
Example $1$:
Let an experiment consist of tossing a fair coin three times. Let $X$ denote the number of heads which appear. Then the possible values of $X$ are $0, 1, 2$ and $3$. The corresponding probabilities are $1/8, 3/8, 3/8,$ and $1/8$. Thus, the expected value of $X$ equals
$0\biggl(\frac{1}{8}\biggr) + 1\biggl(\frac{3}{8}\biggr) + 2\biggl(\frac{3}{8}\biggr) + 3\biggl(\frac{1}{8}\biggr) = \frac{3}{2}\ .$ Later in this section we shall see a quicker way to compute this expected value, based on the fact that $X$ can be written as a sum of simpler random variables.
Example $2$:
Suppose that we toss a fair coin until a head first comes up, and let $X$ represent the number of tosses which were made. Then the possible values of $X$ are $1, 2, \ldots$, and the distribution function of $X$ is defined by $m(i) = \frac{1}{2^i}\ .$ (This is just the geometric distribution with parameter $1/2$.) Thus, we have $$\begin{aligned} E(X) &= \sum_{i = 1}^\infty i\,\frac{1}{2^i} \\ &= \sum_{i = 1}^\infty \frac{1}{2^i} + \sum_{i = 2}^\infty \frac{1}{2^i} + \cdots \\ &= 1 + \frac{1}{2} + \frac{1}{2^2} + \cdots \\ &= 2\ . \end{aligned}$$
Example $3$
Suppose that we flip a coin until a head first appears, and if the number of tosses equals $n$, then we are paid $2^n$ dollars. What is the expected value of the payment?
Answer
We let $Y$ represent the payment. Then,
$P(Y = 2^n) = \frac{1}{2^n}\ ,$ for $n \ge 1$. Thus, $E(Y) = \sum_{n = 1}^\infty 2^n\,\frac{1}{2^n}\ ,$ which is a divergent sum. Thus, $Y$ has no expectation. This example is called the St. Petersburg paradox. The fact that the above sum is infinite suggests that a player should be willing to pay any fixed amount per game for the privilege of playing this game. The reader is asked to consider how much he or she would be willing to pay for this privilege. It is unlikely that the reader’s answer is more than 10 dollars; therein lies the paradox.
In the early history of probability, various mathematicians gave ways to resolve this paradox. One idea (due to G. Cramer) consists of assuming that the amount of money in the world is finite. He thus assumes that there is some fixed value of $n$ such that if the number of tosses equals or exceeds $n$, the payment is $2^n$ dollars. The reader is asked to show in Exercise $20$ that the expected value of the payment is now finite.
Daniel Bernoulli and Cramer also considered another way to assign value to the payment. Their idea was that the value of a payment is some function of the payment; such a function is now called a utility function. Examples of reasonable utility functions might include the square-root function or the logarithm function. In both cases, the value of $2n$ dollars is less than twice the value of $n$ dollars. It can easily be shown that in both cases, the expected utility of the payment is finite (see Exercise $20$).
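A simulation makes the paradox vivid: sample averages of the payment grow without settling down, reflecting the infinite expectation. A sketch (ours):

```python
import random

def payment():
    """Toss a fair coin until the first head appears; if this takes n tosses,
    the payment is 2^n dollars."""
    n = 1
    while random.random() < 0.5:  # tails, with probability 1/2
        n += 1
    return 2 ** n

for trials in (100, 10_000, 1_000_000):
    avg = sum(payment() for _ in range(trials)) / trials
    print(trials, avg)  # the averages do not stabilize
```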
Example $4$
Let $T$ be the time for the first success in a Bernoulli trials process. Then we take as sample space $\Omega$ the integers $1, 2, \ldots$ and assign the geometric distribution
$m(j) = P(T = j) = q^{j - 1}p .$
Thus, $$\begin{aligned} E(T) &= 1 \cdot p + 2qp + 3q^2 p + \cdots \\ &= p(1 + 2q + 3q^2 + \cdots)\ . \end{aligned}$$
Now if $|x| < 1$, then $1 + x + x^2 + x^3 + \cdots = \frac{1}{1 - x} .$ Differentiating this formula, we get $1 + 2x + 3x^2 +\cdots = \frac{1}{(1 - x)^2} ,$ so $E(T) = \frac{p}{(1 - q)^2} = \frac{p}{p^2} = \frac{1}{p} .$ In particular, we see that if we toss a fair coin a sequence of times, the expected time until the first heads is 1/(1/2) = 2. If we roll a die a sequence of times, the expected number of rolls until the first six is 1/(1/6) = 6.
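A quick empirical check of $E(T) = 1/p$ (a sketch, ours):

```python
import random

def time_to_first_success(p):
    """Number of Bernoulli trials up to and including the first success."""
    t = 1
    while random.random() >= p:
        t += 1
    return t

trials = [time_to_first_success(1 / 6) for _ in range(100_000)]
print(sum(trials) / len(trials))  # approximately 6
```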
Interpretation of Expected Value
In statistics, one is frequently concerned with the average value of a set of data. The following example shows that the ideas of average value and expected value are very closely related.
Example $5$
The heights, in inches, of the women on the Swarthmore basketball team are 5’ 9", 5’ 9", 5’ 6", 5’ 8", 5’ 11", 5’ 5", 5’ 7", 5’ 6", 5’ 6", 5’ 7", 5’ 10", and 6’ 0".
A statistician would compute the average height (in inches) as follows: $\frac{69 + 69 + 66 + 68 + 71 + 65 + 67 + 66 + 66 + 67 + 70 + 72}{12} = 68\ .$ One can also interpret this number as the expected value of a random variable. To see this, let an experiment consist of choosing one of the women at random, and let $X$ denote her height. Then the expected value of $X$ equals 68.
Of course, just as with the frequency interpretation of probability, to interpret expected value as an average outcome requires further justification. We know that for any finite experiment the average of the outcomes is not predictable. However, we shall eventually prove that the average will usually be close to $E(X)$ if we repeat the experiment a large number of times. We first need to develop some properties of the expected value. Using these properties, and those of the concept of the variance to be introduced in the next section, we shall be able to prove the Law of Large Numbers. This theorem will justify mathematically both our frequency concept of probability and the interpretation of expected value as the average value to be expected in a large number of experiments.
Expectation of a Function of a Random Variable
Suppose that $X$ is a discrete random variable with sample space $\Omega$, and $\phi(x)$ is a real-valued function with domain $\Omega$. Then $\phi(X)$ is a real-valued random variable. One way to determine the expected value of $\phi(X)$ is to first determine the distribution function of this random variable, and then use the definition of expectation. However, there is a better way to compute the expected value of $\phi(X)$, as demonstrated in the next example.
Example $6$

Suppose a coin is tossed 9 times, with the result $HHHTTTTHT\ .$ The first set of three heads is called a run. There are three more runs in this sequence, namely the next four tails, the next head, and the next tail. We do not consider the first two tosses to constitute a run, since the third toss has the same value as the first two.
Now suppose an experiment consists of tossing a fair coin three times. Find the expected number of runs.
Answer
It will be helpful to think of two random variables, $X$ and $Y$, associated with this experiment. We let $X$ denote the sequence of heads and tails that results when the experiment is performed, and $Y$ denote the number of runs in the outcome $X$. The possible outcomes of $X$ and the corresponding values of $Y$ are shown in Table $2$.
Table $2$: Number of runs.

X     Y
HHH   1
HHT   2
HTH   3
HTT   2
THH   2
THT   3
TTH   2
TTT   1
To calculate $E(Y)$ using the definition of expectation, we first must find the distribution function $m(y)$ of $Y$, i.e., we group together those values of $X$ with a common value of $Y$ and add their probabilities. In this case, we calculate that the distribution function of $Y$ is: $m(1) = 1/4,\ m(2) = 1/2,$ and $m(3) = 1/4$. One easily finds that $E(Y) = 2$.
Now suppose we didn’t group the values of $X$ with a common $Y$-value, but instead, for each $X$-value $x$, we multiply the probability of $x$ and the corresponding value of $Y$, and add the results. We obtain $1\biggl(\frac 18\biggr) +2\biggl(\frac 18\biggr) +3\biggl(\frac 18\biggr) +2\biggl(\frac 18\biggr) +2\biggl(\frac 18\biggr) +3\biggl(\frac 18\biggr) +2\biggl(\frac 18\biggr) +1\biggl(\frac 18\biggr)\ ,$ which equals 2.
This illustrates the following general principle. If $X$ and $Y$ are two random variables, and $Y$ can be written as a function of $X$, then one can compute the expected value of $Y$ using the distribution function of $X$.
Theorem $1$
If $X$ is a discrete random variable with sample space $\Omega$ and distribution function $m(x)$, and if $\phi : \Omega \to {\mathbb R}$ is a function, then $E(\phi(X)) = \sum_{x \in \Omega} \phi(x) m(x)\ ,$ provided the series converges absolutely.
Proof
The proof of this theorem is straightforward, involving nothing more than grouping values of $X$ with a common $Y$-value, as in Example $6$
The Sum of Two Random Variables
Many important results in probability theory concern sums of random variables. We first consider what it means to add two random variables.
Example $7$
We flip a coin and let $X$ have the value 1 if the coin comes up heads and 0 if the coin comes up tails. Then, we roll a die and let $Y$ denote the face that comes up. What does $X+Y$ mean, and what is its distribution?
Solution
This question is easily answered in this case, by considering, as we did in Chapter 4, the joint random variable $Z = (X,Y)$, whose outcomes are ordered pairs of the form $(x, y)$, where $0 \le x \le 1$ and $1 \le y \le 6$. The description of the experiment makes it reasonable to assume that $X$ and $Y$ are independent, so the distribution function of $Z$ is uniform, with $1/12$ assigned to each outcome. Now it is an easy matter to find the set of outcomes of $X+Y$, and its distribution function.
In Example $1$, the random variable $X$ denoted the number of heads which occur when a fair coin is tossed three times. It is natural to think of $X$ as the sum of the random variables $X_1, X_2, X_3$, where $X_i$ is defined to be 1 if the $i$th toss comes up heads, and 0 if the $i$th toss comes up tails. The expected values of the $X_i$’s are extremely easy to compute. It turns out that the expected value of $X$ can be obtained by simply adding the expected values of the $X_i$’s. This fact is stated in the following theorem.
Theorem $2$
Let $X$ and $Y$ be random variables with finite expected values. Then $E(X + Y) = E(X) + E(Y)\ ,$ and if $c$ is any constant, then $E(cX) = cE(X)\ .$
Proof
Let the sample spaces of $X$ and $Y$ be denoted by $\Omega_X$ and $\Omega_Y$, and suppose that $\Omega_X = \{x_1, x_2, \ldots\}$ and $\Omega_Y = \{y_1, y_2, \ldots\}\ .$ Then we can consider the random variable $X + Y$ to be the result of applying the function $\phi(x, y) = x + y$ to the joint random variable $(X,Y)$. Then, by Theorem $1$, we have
$$\begin{aligned} E(X+Y) &= \sum_j \sum_k (x_j + y_k) P(X = x_j,\ Y = y_k) \\ &= \sum_j \sum_k x_j P(X = x_j,\ Y = y_k) + \sum_j \sum_k y_k P(X = x_j,\ Y = y_k) \\ &= \sum_j x_j P(X = x_j) + \sum_k y_k P(Y = y_k)\ . \end{aligned}$$
The last equality follows from the fact that $\sum_k P(X = x_j,\ Y = y_k)\ \ =\ \ P(X = x_j)$ and $\sum_j P(X = x_j,\ Y = y_k)\ \ =\ \ P(Y = y_k)\ .$
Thus, $E(X+Y) = E(X) + E(Y)\ .$
If $c$ is any constant, $$\begin{aligned} E(cX) &= \sum_j c x_j P(X = x_j) \\ &= c \sum_j x_j P(X = x_j) \\ &= c E(X)\ . \end{aligned}$$
It is easy to prove by mathematical induction that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables.
It is important to note that mutual independence of the summands was not needed as a hypothesis in the Theorem $2$ and its generalization. The fact that expectations add, whether or not the summands are mutually independent, is sometimes referred to as the First Fundamental Mystery of Probability.
Example $8$
Let $Y$ be the number of fixed points in a random permutation of the set $\{a,b,c\}$. To find the expected value of $Y$, it is helpful to consider the basic random variable associated with this experiment, namely the random variable $X$ which represents the random permutation. There are six possible outcomes of $X$, and we assign to each of them the probability $1/6$ (see Table $3$). Then we can calculate $E(Y)$ using Theorem $1$, as $3\Bigl(\frac{1}{6}\Bigr) + 1\Bigl(\frac{1}{6}\Bigr) + 1\Bigl(\frac{1}{6}\Bigr) + 0\Bigl(\frac{1}{6}\Bigr) + 0\Bigl(\frac{1}{6}\Bigr) + 1\Bigl(\frac{1}{6}\Bigr) = 1\ .$
Table $3$: Number of fixed points.
$X$ $Y$
$a\;\;\;b\;\;\; c$ 3
$a\;\;\; c\;\;\; b$ 1
$b\;\;\; a\;\;\; c$ 1
$b\;\;\; c\;\;\; a$ 0
$c\;\;\; a\;\;\; b$ 0
$c\;\;\; b\;\;\; a$ 1
We now give a very quick way to calculate the average number of fixed points in a random permutation of the set $\{1, 2, 3, \ldots, n\}$. Let $Z$ denote the random permutation. For each $i$, $1 \le i \le n$, let $X_i$ equal 1 if $Z$ fixes $i$, and 0 otherwise. So if we let $F$ denote the number of fixed points in $Z$, then $F = X_1 + X_2 + \cdots + X_n\ .$ Therefore, Theorem $2$ implies that $E(F) = E(X_1) + E(X_2) + \cdots + E(X_n)\ .$ But it is easy to see that for each $i$, $E(X_i) = {1\over n}\ ,$ so $E(F) = 1\ .$ This method of calculation of the expected value is frequently very useful. It applies whenever the random variable in question can be written as a sum of simpler random variables. We emphasize again that it is not necessary that the summands be mutually independent.
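This prediction is easy to check empirically; in the sketch below (ours), random.shuffle produces a random permutation.

```python
import random

def fixed_points(n):
    """Number of fixed points in a random permutation of {0, 1, ..., n-1}."""
    perm = list(range(n))
    random.shuffle(perm)
    return sum(1 for i, x in enumerate(perm) if i == x)

counts = [fixed_points(10) for _ in range(100_000)]
print(sum(counts) / len(counts))  # approximately 1, whatever n is
```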
Bernoulli Trials
Theorem $3$
Let $S_n$ be the number of successes in $n$ Bernoulli trials with probability $p$ for success on each trial. Then the expected number of successes is $np$. That is, $E(S_n) = np\ .$
Proof
Let $X_j$ be a random variable which has the value 1 if the $j$th outcome is a success and 0 if it is a failure. Then, for each $X_j$, $E(X_j) = 0 \cdot (1 - p) + 1 \cdot p = p\ .$ Since $S_n = X_1 + X_2 + \cdots + X_n\ ,$ and the expected value of the sum is the sum of the expected values, we have $$\begin{aligned} E(S_n) &= E(X_1) + E(X_2) + \cdots + E(X_n) \\ &= np\ . \end{aligned}$$
Poisson Distribution
Recall that the Poisson distribution with parameter $\lambda$ was obtained as a limit of binomial distributions with parameters $n$ and $p$, where it was assumed that $np = \lambda$, and $n \rightarrow \infty$. Since for each $n$, the corresponding binomial distribution has expected value $\lambda$, it is reasonable to guess that the expected value of a Poisson distribution with parameter $\lambda$ also has expectation equal to $\lambda$. This is in fact the case, and the reader is invited to show this (see Exercise $21$).
Independence
If $X$ and $Y$ are two random variables, it is not true in general that $E(X \cdot Y) = E(X)E(Y)$. However, this is true if $X$ and $Y$ are independent.
Theorem $4$
If $X$ and $Y$ are independent random variables, then $E(X \cdot Y) = E(X)E(Y)\ .$
Proof
Suppose that $\Omega_X = \{x_1, x_2, \ldots\}$ and $\Omega_Y = \{y_1, y_2, \ldots\}$ are the sample spaces of $X$ and $Y$, respectively. Using Theorem $1$, we have $E(X \cdot Y) = \sum_j \sum_k x_jy_k P(X = x_j,\ Y = y_k)\ .$
But if $X$ and $Y$ are independent, $P(X = x_j,\ Y = y_k) = P(X = x_j)P(Y = y_k)\ .$ Thus, $$\begin{aligned} E(X \cdot Y) &= \sum_j \sum_k x_j y_k P(X = x_j) P(Y = y_k) \\ &= \left(\sum_j x_j P(X = x_j)\right) \left(\sum_k y_k P(Y = y_k)\right) \\ &= E(X) E(Y)\ . \end{aligned}$$
Example $9$:
A coin is tossed twice. $X_i = 1$ if the $i$th toss is heads and 0 otherwise. We know that $X_1$ and $X_2$ are independent. They each have expected value 1/2. Thus $E(X_1 \cdot X_2) = E(X_1) E(X_2) = (1/2)(1/2) = 1/4$.
We next give a simple example to show that the expected values need not multiply if the random variables are not independent.
Example $10$:
Consider a single toss of a coin. We define the random variable $X$ to be 1 if heads turns up and 0 if tails turns up, and we set $Y = 1 - X$. Then $E(X) = E(Y) = 1/2$. But $X \cdot Y = 0$ for either outcome. Hence, $E(X \cdot Y) = 0 \ne E(X) E(Y)$.
We return to our records example of Section 3.1 for another application of the result that the expected value of the sum of random variables is the sum of the expected values of the individual random variables.
Records
Example $11$:
We start keeping snowfall records this year and want to find the expected number of records that will occur in the next $n$ years. The first year is necessarily a record. The second year will be a record if the snowfall in the second year is greater than that in the first year. By symmetry, this probability is 1/2. More generally, let $X_j$ be 1 if the $j$th year is a record and 0 otherwise. To find $E(X_j)$, we need only find the probability that the $j$th year is a record. But the record snowfall for the first $j$ years is equally likely to fall in any one of these years, so $E(X_j) = 1/j$. Therefore, if $S_n$ is the total number of records observed in the first $n$ years,
$E(S_n) = 1 + \frac 12 + \frac 13 +\cdots+ \frac 1n\ .$
This is the famous divergent harmonic series. It is easy to show that
$E(S_n) \sim \log n$
as $n \rightarrow \infty$. A more accurate approximation to $E(S_n)$ is given by the expression
$\log n + \gamma + {1\over {2n}}\ ,$
where $\gamma$ denotes Euler’s constant, and is approximately equal to .5772.
Therefore, in ten years the expected number of records is approximately $2.9298$; the exact value is the sum of the first ten terms of the harmonic series which is 2.9290.
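Each year's snowfall can be modeled by an independent uniform random number, so the record count is easy to simulate. The following Python sketch (our illustration, not a program from the text) estimates $E(S_n)$ for $n = 10$ and compares it with the harmonic sum.

```python
import random

def avg_records(n, trials=100_000):
    """Estimate the expected number of record years among n years
    of independent, identically distributed snowfalls."""
    total = 0
    for _ in range(trials):
        best = float("-inf")
        for _ in range(n):
            x = random.random()          # this year's snowfall
            if x > best:                 # a new record year
                total += 1
                best = x
    return total / trials

n = 10
print(avg_records(n))                            # simulation estimate
print(sum(1 / j for j in range(1, n + 1)))       # exact value, 2.9290
```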
Craps
Example $12$:
In the game of craps, the player makes a bet and rolls a pair of dice. If the sum of the numbers is 7 or 11 the player wins, if it is 2, 3, or 12 the player loses. If any other number results, say $r$, then $r$ becomes the player’s point and he continues to roll until either $r$ or 7 occurs. If $r$ comes up first he wins, and if 7 comes up first he loses. The program Craps simulates playing this game a number of times.
We have run the program for 1000 plays in which the player bets 1 dollar each time. The player’s average winnings were $-.006$. The game of craps would seem to be only slightly unfavorable. Let us calculate the expected winnings on a single play and see if this is the case. We construct a two-stage tree measure as shown in Figure $1$.
The first stage represents the possible sums for his first roll. The second stage represents the possible outcomes for the game if it has not ended on the first roll. In this stage we are representing the possible outcomes of a sequence of rolls required to determine the final outcome. The branch probabilities for the first stage are computed in the usual way assuming all 36 possible outcomes for the pair of dice are equally likely. For the second stage we assume that the game will eventually end, and we compute the conditional probabilities for obtaining either the point or a 7. For example, assume that the player’s point is 6. Then the game will end when one of the eleven pairs, $(1,5)$, $(2,4)$, $(3,3)$, $(4,2)$, $(5,1)$, $(1,6)$, $(2,5)$, $(3,4)$, $(4,3)$, $(5,2)$, $(6,1)$, occurs. We assume that each of these possible pairs has the same probability. Then the player wins in the first five cases and loses in the last six. Thus the probability of winning is 5/11 and the probability of losing is 6/11. From the path probabilities, we can find the probability that the player wins 1 dollar; it is 244/495. The probability of losing is then 251/495. Thus if $X$ is his winning for a dollar bet,
\begin{aligned} E(X) & = 1\Bigl(\frac {244}{495}\Bigr) + (-1)\Bigl(\frac {251}{495}\Bigr) \\ & = -\frac {7}{495} \approx -.0141\ .\end{aligned}
The game is unfavorable, but only slightly. The player’s expected gain in $n$ plays is $-n(.0141)$. If $n$ is not large, this is a small expected loss for the player. The casino makes a large number of plays and so can afford a small average gain per play and still expect a large profit.
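The program Craps itself is not reproduced here; a minimal Python sketch in the same spirit might look as follows, with the average winnings settling near $-7/495 \approx -.0141$ over a large number of plays.

```python
import random

def roll():
    """Roll a pair of dice and return the sum."""
    return random.randint(1, 6) + random.randint(1, 6)

def craps_play():
    """Return the winnings (+1 or -1) from a single 1-dollar play."""
    first = roll()
    if first in (7, 11):
        return 1
    if first in (2, 3, 12):
        return -1
    point = first
    while True:                  # roll until the point or a 7 occurs
        r = roll()
        if r == point:
            return 1
        if r == 7:
            return -1

plays = 100_000
print(sum(craps_play() for _ in range(plays)) / plays)   # near -.0141
```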
Roulette
Example $13$:
In Las Vegas, a roulette wheel has 38 slots numbered 0, 00, 1, 2, …, 36. The 0 and 00 slots are green, and half of the remaining 36 slots are red and half are black. A croupier spins the wheel and throws an ivory ball. If you bet 1 dollar on red, you win 1 dollar if the ball stops in a red slot, and otherwise you lose a dollar. We wish to calculate the expected value of your winnings, if you bet 1 dollar on red.
Let $X$ be the random variable which denotes your winnings in a 1 dollar bet on red in Las Vegas roulette. Then the distribution of $X$ is given by $m_{X} = \pmatrix{ -1 & 1 \cr 20/38 & 18/38 \cr},$ and one can easily calculate (see Exercise $5$) that $E(X) \approx -.0526\ .$
We now consider the roulette game in Monte Carlo, and follow the treatment of Sagan.1 In the roulette game in Monte Carlo there is only one 0. If you bet 1 franc on red and a 0 turns up, then, depending upon the casino, one or more of the following options may be offered:
(a) You get 1/2 of your bet back, and the casino gets the other half of your bet.
(b) Your bet is put “in prison," which we will denote by $P_1$. If red comes up on the next turn, you get your bet back (but you don’t win any money). If black or 0 comes up, you lose your bet.
(c) Your bet is put in prison $P_1$, as before. If red comes up on the next turn, you get your bet back, and if black comes up on the next turn, then you lose your bet. If a 0 comes up on the next turn, then your bet is put into double prison, which we will denote by $P_2$. If your bet is in double prison, and if red comes up on the next turn, then your bet is moved back to prison $P_1$ and the game proceeds as before. If your bet is in double prison, and if black or 0 comes up on the next turn, then you lose your bet. We refer the reader to Figure $2$, where a tree for this option is shown. In this figure, $S$ is the starting position, $W$ means that you win your bet, $L$ means that you lose your bet, and $E$ means that you break even.
It is interesting to compare the expected winnings of a 1 franc bet on red, under each of these three options. We leave the first two calculations as an exercise (see Exercise $37$). Suppose that you choose to play alternative (c). The calculation for this case illustrates the way that the early French probabilists worked problems like this.
Suppose you bet on red, you choose alternative (c), and a 0 comes up. Your possible future outcomes are shown in the tree diagram in Figure $3$. Assume that your money is in the first prison and let $x$ be the probability that you lose your franc. From the tree diagram we see that
$x = \frac {18}{37} + \frac 1{37}P({\rm you\ lose\ your\ franc\ }|\ {\rm your\ franc\ is \ in\ }P_2)$.
Also,
$P({\rm you\ lose\ your\ franc\ }|\ {\rm your\ franc\ is \ in\ }P_2) = \frac {19}{37} + \frac{18}{37}x$.
So, we have
$x = \frac{18}{37} + \frac 1{37}\Bigl(\frac {19}{37} + \frac{18}{37}x\Bigr)$.
Solving for $x$, we obtain $x = 685/1351$. Thus, starting at $S$, the probability that you lose your bet equals $\frac {18}{37} + \frac 1{37}x = \frac{25003}{49987}\ .$
To find the probability that you win when you bet on red, note that you can only win if red comes up on the first turn, and this happens with probability 18/37. Thus your expected winnings are $1 \cdot {\frac{18}{37}} -1 \cdot {\frac {25003}{49987}} = -\frac{685}{49987} \approx -.0137\ .$
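The pair of linear equations above can also be solved mechanically. The following few lines of Python (our illustration, using exact rational arithmetic) reproduce $x = 685/1351$ and the expected winnings.

```python
from fractions import Fraction

# Solve x = 18/37 + (1/37)(19/37 + (18/37) x) for the probability x
# of eventually losing a franc that sits in the first prison P1.
a = Fraction(18, 37)
b = Fraction(1, 37) * Fraction(19, 37)
c = Fraction(1, 37) * Fraction(18, 37)
x = (a + b) / (1 - c)
print(x)                                 # 685/1351

p_lose = Fraction(18, 37) + Fraction(1, 37) * x
p_win = Fraction(18, 37)                 # win only if red comes up first
print(p_lose)                            # 25003/49987
print(p_win - p_lose, float(p_win - p_lose))   # -685/49987, about -.0137
```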
It is interesting to note that the more romantic option (c) is less favorable than option (a) (see Exercise $37$).
If you bet 1 dollar on the number 17, then the distribution function for your winnings $X$ is $P_X = \pmatrix{ -1 & 35 \cr 36/37 & 1/37 \cr}\ ,$ and the expected winnings are $-1 \cdot {\frac{36}{37}} + 35 \cdot {\frac 1{37}} = -\frac 1{37} \approx -.027\ .$ Thus, at Monte Carlo different bets have different expected values. In Las Vegas almost all bets have the same expected value of $-2/38 = -.0526$ (see Exercises $4$ and $5$).
Conditional Expectation
Definition
If $F$ is any event and $X$ is a random variable with sample space $\Omega = \{x_1, x_2, \ldots\}$, then the conditional expectation is defined by $E(X|F) = \sum_j x_j P(X = x_j|F)\ .$ Conditional expectation is used most often in the form provided by the following theorem.
Theorem $5$
Let $X$ be a random variable with sample space $\Omega$. If $F_1$, $F_2$, …, $F_r$ are events such that $F_i \cap F_j = \emptyset$ for $i \ne j$ and $\Omega = \cup_j F_j$, then $E(X) = \sum_j E(X|F_j) P(F_j)\ .$
Proof
We have \begin{aligned} \sum_j E(X|F_j) P(F_j) & = \sum_j \sum_k x_k P(X = x_k|F_j) P(F_j) \\ & = \sum_j \sum_k x_k P(X = x_k\ {\rm and}\ F_j\ {\rm occurs}) \\ & = \sum_k \sum_j x_k P(X = x_k\ {\rm and}\ F_j\ {\rm occurs}) \\ & = \sum_k x_k P(X = x_k) \\ & = E(X)\ .\end{aligned}
Example $14$:
Let $T$ be the number of rolls in a single play of craps. We can think of a single play as a two-stage process. The first stage consists of a single roll of a pair of dice. The play is over if this roll is a 2, 3, 7, 11, or 12. Otherwise, the player’s point is established, and the second stage begins. This second stage consists of a sequence of rolls which ends when either the player’s point or a 7 is rolled. We record the outcomes of this two-stage experiment using the random variables $X$ and $S$, where $X$ denotes the first roll, and $S$ denotes the number of rolls in the second stage of the experiment (of course, $S$ is sometimes equal to 0). Note that $T = S+1$. Then by Theorem $5$
$E(T) = \sum_{j = 2}^{12} E(T|X = j) P(X = j)\ .$
If $j = 7$, 11 or 2, 3, 12, then $E(T|X = j) = 1$. If $j = 4, 5, 6, 8, 9,$ or $10$, we can use Example $4$ to calculate the expected value of $S$. In each of these cases, we continue rolling until we get either a $j$ or a 7. Thus, $S$ is geometrically distributed with parameter $p$, which depends upon $j$. If $j = 4$, for example, the value of $p$ is $3/36 + 6/36 = 1/4$. Thus, in this case, the expected number of additional rolls is $1/p = 4$, so $E(T|X = 4) = 1 + 4 = 5$. Carrying out the corresponding calculations for the other possible values of $j$ and using Theorem $5$ gives
\begin{aligned} E(T) & = 1\Bigl(\frac {12}{36}\Bigr) + \Bigl(1 + \frac {36}{3 + 6}\Bigr)\Bigl(\frac 3{36}\Bigr) + \Bigl(1 + \frac {36}{4 + 6}\Bigr)\Bigl(\frac 4{36}\Bigr) \\ & \quad + \Bigl(1 + \frac {36}{5 + 6}\Bigr)\Bigl(\frac 5{36}\Bigr) + \Bigl(1 + \frac {36}{5 + 6}\Bigr)\Bigl(\frac 5{36}\Bigr) \\ & \quad + \Bigl(1 + \frac {36}{4 + 6}\Bigr)\Bigl(\frac 4{36}\Bigr) + \Bigl(1 + \frac {36}{3 + 6}\Bigr)\Bigl(\frac 3{36}\Bigr) \\ & = \frac {557}{165} \\ & \approx 3.375\dots\ .\end{aligned}
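This conditional-expectation computation is easy to mechanize; a short Python sketch (our illustration) that reproduces $E(T) = 557/165$ is given below.

```python
from fractions import Fraction

# Number of ways to roll each sum with a pair of dice.
ways = {s: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == s)
        for s in range(2, 13)}

E_T = Fraction(0)
for j in range(2, 13):
    p_j = Fraction(ways[j], 36)
    if j in (2, 3, 7, 11, 12):
        E_T += 1 * p_j                   # the game ends on the first roll
    else:
        # The second stage is geometric with p = (ways[j] + 6)/36, so the
        # expected number of additional rolls is 36/(ways[j] + 6).
        E_T += (1 + Fraction(36, ways[j] + 6)) * p_j

print(E_T, float(E_T))                   # 557/165, about 3.3758
```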
Martingales
We can extend the notion of fairness to a player playing a sequence of games by using the concept of conditional expectation.
Example $15$
Let $S_1$, $S_2$, …, $S_n$ be Peter’s accumulated fortune in playing heads or tails (see Example 1.1.4). Then $E(S_n | S_{n - 1} = a,\dots,S_1 = r) = \frac 12 (a + 1) + \frac 12 (a - 1) = a\ .$
We note that Peter’s expected fortune after the next play is equal to his present fortune. When this occurs, we say the game is fair. A fair game is also called a martingale. If the coin is biased and comes up heads with probability $p$ and tails with probability $q = 1 - p$, then $E(S_n | S_{n - 1} = a,\dots,S_1 = r) = p (a + 1) + q (a - 1) = a + p - q\ .$ Thus, if $p < q$, this game is unfavorable, and if $p > q$, it is favorable.
If you are in a casino, you will see players adopting elaborate systems of play to try to make unfavorable games favorable. Two such systems, the martingale doubling system and the more conservative Labouchere system, were described in Exercises 1.1.9 and 1.1.10. Unfortunately, such systems cannot change even a fair game into a favorable game.
Even so, it is a favorite pastime of many people to develop systems of play for gambling games and for other games such as the stock market. We close this section with a simple illustration of such a system.
Stock Prices
Example $16$
Let us assume that a stock increases or decreases in value each day by 1 dollar, each with probability 1/2. Then we can identify this simplified model with our familiar game of heads or tails. We assume that a buyer, Mr. Ace, adopts the following strategy. He buys the stock on the first day at its price $V$. He then waits until the price of the stock increases by one to $V + 1$ and sells. He then continues to watch the stock until its price falls back to $V$. He buys again and waits until it goes up to $V + 1$ and sells. Thus he holds the stock in intervals during which it increases by 1 dollar. In each such interval, he makes a profit of 1 dollar. However, we assume that he can do this only for a finite number of trading days. Thus he can lose if, in the last interval that he holds the stock, it does not get back up to $V + 1$; and this is the only way he can lose. In Figure $4$ we illustrate a typical history if Mr. Ace must stop in twenty days. Mr. Ace holds the stock under his system during the days indicated by broken lines. We note that for the history shown in Figure $4$, his system nets him a gain of 4 dollars.
We have written a program StockSystem to simulate the fortune of Mr. Ace if he uses his system over an $n$-day period. If one runs this program a large number of times, for $n = 20$, say, one finds that his expected winnings are very close to 0, but the probability that he is ahead after 20 days is significantly greater than 1/2. For small values of $n$, the exact distribution of winnings can be calculated. The distribution for the case $n = 20$ is shown in Figure $5$. Using this distribution, it is easy to calculate that the expected value of his winnings is exactly 0. This is another instance of the fact that a fair game (a martingale) remains fair under quite general systems of play.
Although the expected value of his winnings is 0, the probability that Mr. Ace is ahead after 20 days is about .610. Thus, he would be able to tell his friends that his system gives him a better chance of being ahead than that of someone who simply buys the stock and holds it, if our simple random model is correct. There have been a number of studies to determine how random the stock market is.
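The program StockSystem is likewise not reproduced in the text. A plausible Python sketch of Mr. Ace's system appears below; how the final day is settled is our own assumption, so the simulated probability of being ahead may differ slightly from .610, but the average winnings should still be near 0.

```python
import random

def ace_profit(n=20):
    """Profit from Mr. Ace's buy-at-V, sell-at-V+1 system over n days,
    where the price moves +1 or -1 each day with probability 1/2."""
    price, profit, holding = 0, 0, True      # buys at V = 0 on day one
    for _ in range(n):
        price += random.choice((1, -1))
        if holding and price == 1:
            profit += 1                      # sell at V + 1
            holding = False
        elif not holding and price == 0:
            holding = True                   # buy again at V
    if holding:
        profit += price                      # forced to sell on the last day
    return profit

trials = 100_000
results = [ace_profit() for _ in range(trials)]
print(sum(results) / trials)                          # near 0
print(sum(1 for r in results if r > 0) / trials)      # near .61
```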
Historical Remarks
With the Law of Large Numbers to bolster the frequency interpretation of probability, we find it natural to justify the definition of expected value in terms of the average outcome over a large number of repetitions of the experiment. The concept of expected value was used before it was formally defined; and when it was used, it was considered not as an average value but rather as the appropriate value for a gamble. For example, recall from the Historical Remarks section of Chapter 1, Section 1.2, Pascal’s way of finding the value of a three-game series that had to be called off before it was finished.
Pascal first observed that if each player has only one game to win, then the stake of 64 pistoles should be divided evenly. Then he considered the case where one player has won two games and the other one.
Then consider, Sir, if the first man wins, he gets 64 pistoles, if he loses he gets 32. Thus if they do not wish to risk this last game, but wish to separate without playing it, the first man must say: “I am certain to get 32 pistoles, even if I lose I still get them; but as for the other 32 pistoles, perhaps I will get them, perhaps you will get them, the chances are equal. Let us then divide these 32 pistoles in half and give one half to me as well as my 32 which are mine for sure." He will then have 48 pistoles and the other 16.2
Note that Pascal reduced the problem to a symmetric bet in which each player gets the same amount and takes it as obvious that in this case the stakes should be divided equally.
The first systematic study of expected value appears in Huygens’ book. Like Pascal, Huygens finds the value of a gamble by assuming that the answer is obvious for certain symmetric situations and uses this to deduce the expected value for the general situation. He does this in steps. His first proposition is
Prop. I. If I expect $a$ or $b$, either of which, with equal probability, may fall to me, then my Expectation is worth $(a + b)/2$, that is, the half Sum of $a$ and $b$.3
Huygens proved this as follows: Assume that two players A and B play a game in which each player puts up a stake of $(a + b)/2$ with an equal chance of winning the total stake. Then the value of the game to each player is $(a + b)/2$. For example, if the game had to be called off, clearly each player should just get back his original stake. Now, by symmetry, this value is not changed if we add the condition that the winner of the game has to pay the loser an amount $b$ as a consolation prize. Then for player A the value is still $(a + b)/2$. But what are his possible outcomes for the modified game? If he wins he gets the total stake $a + b$ and must pay B an amount $b$, so he ends up with $a$. If he loses he gets an amount $b$ from player B. Thus player A wins $a$ or $b$ with equal chances and the value to him is $(a + b)/2$.
Huygens illustrated this proof in terms of an example. If you are offered a game in which you have an equal chance of winning 2 or 8, the expected value is 5, since this game is equivalent to the game in which each player stakes 5 and agrees to pay the loser 3 — a game in which the value is obviously 5.
Huygens’ second proposition is
Prop. II. If I expect $a$, $b$, or $c$, either of which, with equal facility, may happen, then the Value of my Expectation is $(a + b + c)/3$, or the third of the Sum of $a$, $b$, and $c$.4
His argument here is similar. Three players, A, B, and C, each stake $(a+b+c)/3$ in a game in which they have an equal chance of winning. The value of this game to player A is clearly the amount he has staked. Further, this value is not changed if A enters into an agreement with B that if one of them wins he pays the other a consolation prize of $b$ and with C that if one of them wins he pays the other a consolation prize of $c$. By symmetry these agreements do not change the value of the game. In this modified game, if A wins he wins the total stake $a + b + c$ minus the consolation prizes $b + c$, giving him a final winning of $a$. If B wins, A wins $b$, and if C wins, A wins $c$. Thus A finds himself in a game with value $(a + b + c)/3$ and with outcomes $a$, $b$, and $c$ occurring with equal chance. This proves Proposition II.
More generally, this reasoning shows that if there are $n$ outcomes $a_1,\ a_2,\ \ldots,\ a_n\ ,$ all occurring with the same probability, the expected value is $\frac {a_1 + a_2 +\cdots+ a_n}n\ .$
In his third proposition Huygens considered the case where you win $a$ or $b$ but with unequal probabilities. He assumed there are $p$ chances of winning $a$, and $q$ chances of winning $b$, all having the same probability. He then showed that the expected value is $E = \frac p{p + q} \cdot a + \frac q{p + q} \cdot b\ .$ This follows by considering an equivalent gamble with $p + q$ outcomes all occurring with the same probability and with a payoff of $a$ in $p$ of the outcomes and $b$ in $q$ of the outcomes. This allowed Huygens to compute the expected value for experiments with unequal probabilities, at least when these probabilities are rational numbers.
Thus, instead of defining the expected value as a weighted average, Huygens assumed that the expected value of certain symmetric gambles are known and deduced the other values from these. Although this requires a good deal of clever manipulation, Huygens ended up with values that agree with those given by our modern definition of expected value. One advantage of this method is that it gives a justification for the expected value in cases where it is not reasonable to assume that you can repeat the experiment a large number of times, as for example, in betting that at least two presidents died on the same day of the year. (In fact, three did: Adams, Jefferson, and Monroe all died on July 4.)
In his book, Huygens calculated the expected value of games using techniques similar to those which we used in computing the expected value for roulette at Monte Carlo. For example, his proposition XIV is:
Prop. XIV. If I were playing with another by turns, with two Dice, on this Condition, that if I throw 7 I gain, and if he throws 6 he gains allowing him the first Throw: To find the proportion of my Hazard to his.5
A modern description of this game is as follows. Huygens and his opponent take turns rolling a pair of dice. The game is over if Huygens rolls a 7 or his opponent rolls a 6. His opponent rolls first. What is the probability that Huygens wins the game?
To solve this problem Huygens let $x$ be his chance of winning when his opponent threw first and $y$ his chance of winning when he threw first. Then on the first roll his opponent wins on 5 out of the 36 possibilities. Thus, $x = \frac {31}{36} \cdot y\ .$ But when Huygens rolls he wins on 6 out of the 36 possible outcomes, and in the other 30, he is led back to where his chances are $x$. Thus $y = \frac 6{36} + \frac {30}{36} \cdot x\ .$ From these two equations Huygens found that $x = 31/61$.
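Huygens' two equations are easy to check by substitution; the following few lines of Python (our illustration, with exact fractions) solve them.

```python
from fractions import Fraction

# x = Huygens' chance of winning when his opponent is about to roll,
# y = his chance of winning when he himself is about to roll:
#   x = (31/36) y  and  y = 6/36 + (30/36) x.
y = Fraction(6, 36) / (1 - Fraction(30, 36) * Fraction(31, 36))
x = Fraction(31, 36) * y
print(x, y)        # 31/61 and 36/61
```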
Another early use of expected value appeared in Pascal’s argument to show that a rational person should believe in the existence of God.6 Pascal said that we have to make a wager whether to believe or not to believe. Let $p$ denote the probability that God does not exist. His discussion suggests that we are playing a game with two strategies, believe and not believe, with payoffs as shown in Table $4$.
$\nonumber \begin{array}{l|c|c} & \text{God does not exist} & \text{God exists} \\ & p & 1 - p \\ \hline \text{believe} & -u & v \\ \text{not believe} & 0 & -x \end{array}$
Here $-u$ represents the cost to you of passing up some worldly pleasures as a consequence of believing that God exists. If you do not believe, and God is a vengeful God, you will lose $x$. If God exists and you do believe you will gain $v$. Now to determine which strategy is best you should compare the two expected values $p(-u) + (1 - p)v \qquad {\rm and} \qquad p \cdot 0 + (1 - p)(-x)\ ,$ and choose the larger of the two. In general, the choice will depend upon the value of $p$. But Pascal assumed that the value of $v$ is infinite and so the strategy of believing is best no matter what probability you assign for the existence of God. This example is considered by some to be the beginning of decision theory. Decision analyses of this kind appear today in many fields, and, in particular, are an important part of medical diagnostics and corporate business decisions.
Another early use of expected value was to decide the price of annuities. The study of statistics has its origins in the use of the bills of mortality kept in the parishes in London from 1603. These records kept a weekly tally of christenings and burials. From these John Graunt made estimates for the population of London and also provided the first mortality data,7 shown in Table $5$.
$\nonumber \begin{array}{cc} \hline \text{Age} & \text{Survivors} \\ \hline 0 & 100 \\ 6 & 64 \\ 16 & 40 \\ 26 & 25 \\ 36 & 16 \\ 46 & 10 \\ 56 & 6 \\ 66 & 3 \\ 76 & 1 \\ \hline \end{array}$
As Hacking observes, Graunt apparently constructed this table by assuming that after the age of 6 there is a constant probability of about 5/8 of surviving for another decade.8 For example, of the 64 people who survive to age 6, 5/8 of 64 or 40 survive to 16, 5/8 of these 40 or 25 survive to 26, and so forth. Of course, he rounded off his figures to the nearest whole person.
Clearly, a constant mortality rate cannot be correct throughout the whole range, and later tables provided by Halley were more realistic in this respect.9
A terminal annuity provides a fixed amount of money during a period of $n$ years. To determine the price of a terminal annuity one needs only to know the appropriate interest rate. A life annuity provides a fixed amount during each year of the buyer’s life. The appropriate price for a life annuity is the expected value of the terminal annuity evaluated for the random lifetime of the buyer. Thus, the work of Huygens in introducing expected value and the work of Graunt and Halley in determining mortality tables led to a more rational method for pricing annuities. This was one of the first serious uses of probability theory outside the gambling houses.
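As a rough illustration of this pricing idea (not Graunt's or Halley's actual computation), the sketch below prices a life annuity from Graunt's table for a buyer aged 6; the decade-spaced payments of 1 unit and the 4 percent interest rate are hypothetical choices made only for the example.

```python
# Survivors out of 100 births, from Graunt's table.
survivors = {6: 64, 16: 40, 26: 25, 36: 16, 46: 10, 56: 6, 66: 3, 76: 1}
rate = 0.04                                  # hypothetical annual interest

price = 0.0
for age, alive in survivors.items():
    if age == 6:
        continue                             # no payment at purchase
    p_survive = alive / survivors[6]         # P(buyer is alive at this age)
    discount = 1 / (1 + rate) ** (age - 6)   # present value of 1 unit
    price += p_survive * discount            # expected discounted payment

print(price)        # the fair price of the annuity under these assumptions
```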
Although expected value plays a role now in every branch of science, it retains its importance in the casino. In 1962, Edward Thorp’s book Beat the Dealer10 provided the reader with a strategy for playing the popular casino game of blackjack that would assure the player a positive expected winning. This book forevermore changed the belief of the casinos that they could not be beaten.
Exercises
Exercise $1$
A card is drawn at random from a deck consisting of cards numbered 2 through 10. A player wins 1 dollar if the number on the card is odd and loses 1 dollar if the number is even. What is the expected value of his winnings?
Exercise $2$
A card is drawn at random from a deck of playing cards. If it is red, the player wins 1 dollar; if it is black, the player loses 2 dollars. Find the expected value of the game.
Exercise $3$
In a class there are 20 students: 3 are 5’ 6", 5 are 5’8", 4 are 5’10", 4 are 6’, and 4 are 6’ 2". A student is chosen at random. What is the student’s expected height?
Exercise $4$
In Las Vegas the roulette wheel has a 0 and a 00 and then the numbers 1 to 36 marked on equal slots; the wheel is spun and a ball stops randomly in one slot. When a player bets 1 dollar on a number, he receives 36 dollars if the ball stops on this number, for a net gain of 35 dollars; otherwise, he loses his dollar bet. Find the expected value for his winnings.
Exercise $5$
In a second version of roulette in Las Vegas, a player bets on red or black. Half of the numbers from 1 to 36 are red, and half are black. If a player bets a dollar on black, and if the ball stops on a black number, he gets his dollar back and another dollar. If the ball stops on a red number or on 0 or 00 he loses his dollar. Find the expected winnings for this bet.
Exercise $6$
A die is rolled twice. Let $X$ denote the sum of the two numbers that turn up, and $Y$ the difference of the numbers (specifically, the number on the first roll minus the number on the second). Show that $E(XY) = E(X)E(Y)$. Are $X$ and $Y$ independent?
Exercise $7$
Show that, if $X$ and $Y$ are random variables taking on only two values each, and if $E(XY) = E(X)E(Y)$, then $X$ and $Y$ are independent.
Exercise $8$
A royal family has children until it has a boy or until it has three children, whichever comes first. Assume that each child is a boy with probability 1/2. Find the expected number of boys in this royal family and the expected number of girls.
Exercise $9$
If the first roll in a game of craps is neither a natural nor craps, the player can make an additional bet, equal to his original one, that he will make his point before a seven turns up. If his point is four or ten he is paid off at $2 : 1$ odds; if it is a five or nine he is paid off at odds $3 : 2$; and if it is a six or eight he is paid off at odds $6 : 5$. Find the player’s expected winnings if he makes this additional bet when he has the opportunity.
Exercise $10$
In Example $16$ assume that Mr. Ace decides to buy the stock and hold it until it goes up 1 dollar and then sell and not buy again. Modify the program StockSystem to find the distribution of his profit under this system after a twenty-day period. Find the expected profit and the probability that he comes out ahead.
Exercise $11$
On September 26, 1980, the New York Times reported that a mysterious stranger strode into a Las Vegas casino, placed a single bet of 777,000 dollars on the “don’t pass" line at the craps table, and walked away with more than 1.5 million dollars. In the “don’t pass" bet, the bettor is essentially betting with the house. An exception occurs if the roller rolls a 12 on the first roll. In this case, the roller loses and the “don’t pass" bettor just gets back the money bet instead of winning. Show that the “don’t pass" bettor has a more favorable bet than the roller.
Exercise $12$
Recall that in the martingale doubling system (see Exercise 1.1.10), the player doubles his bet each time he loses. Suppose that you are playing roulette in a casino where there are no 0’s, and you bet on red each time. You then win with probability 1/2 each time. Assume that you enter the casino with 100 dollars, start with a 1-dollar bet and employ the martingale system. You stop as soon as you have won one bet, or in the unlikely event that black turns up six times in a row so that you are down 63 dollars and cannot make the required 64-dollar bet. Find your expected winnings under this system of play.
Exercise $13$
You have 80 dollars and play the following game. An urn contains two white balls and two black balls. You draw the balls out one at a time without replacement until all the balls are gone. On each draw, you bet half of your present fortune that you will draw a white ball. What is your expected final fortune?
Exercise $14$
In the hat check problem (see Example 3.2.8. ), it was assumed that $N$ people check their hats and the hats are handed back at random. Let $X_j = 1$ if the $j$th person gets his or her hat and 0 otherwise. Find $E(X_j)$ and $E(X_j \cdot X_k)$ for $j$ not equal to $k$. Are $X_j$ and $X_k$ independent?
Exercise $15$
A box contains two gold balls and three silver balls. You are allowed to choose successively balls from the box at random. You win 1 dollar each time you draw a gold ball and lose 1 dollar each time you draw a silver ball. After a draw, the ball is not replaced. Show that, if you draw until you are ahead by 1 dollar or until there are no more gold balls, this is a favorable game.
Exercise $16$
Gerolamo Cardano in his book, The Gambling Scholar written in the early 1500s, considers the following carnival game. There are six dice. Each of the dice has five blank sides. The sixth side has a number between 1 and 6—a different number on each die. The six dice are rolled and the player wins a prize depending on the total of the numbers which turn up.
1. Find, as Cardano did, the expected total without finding its distribution.
2. Large prizes were given for large totals with a modest fee to play the game. Explain why this could be done.
Exercise $17$
Let $X$ be the first time that a failure occurs in an infinite sequence of Bernoulli trials with probability $p$ for success. Let $p_k = P(X = k)$ for $k = 1$, 2, …. Show that $p_k = p^{k - 1}q$ where $q = 1 - p$. Show that $\sum_k p_k = 1$. Show that $E(X) = 1/q$. What is the expected number of tosses of a coin required to obtain the first tail?
Exercise $18$
Exactly one of six similar keys opens a certain door. If you try the keys, one after another, what is the expected number of keys that you will have to try before success?
Exercise $19$
A multiple choice exam is given. A problem has four possible answers, and exactly one answer is correct. The student is allowed to choose a subset of the four possible answers as his answer. If his chosen subset contains the correct answer, the student receives three points, but he loses one point for each wrong answer in his chosen subset. Show that if he just guesses a subset uniformly and randomly his expected score is zero.
Exercise $20$
You are offered the following game to play: a fair coin is tossed until heads turns up for the first time (see Example 6.1.3). If this occurs on the first toss you receive 2 dollars, if it occurs on the second toss you receive $2^2 = 4$ dollars and, in general, if heads turns up for the first time on the $n$th toss you receive $2^n$ dollars.
1. Show that the expected value of your winnings does not exist (i.e., is given by a divergent sum) for this game. Does this mean that this game is favorable no matter how much you pay to play it?
2. Assume that you only receive $2^{10}$ dollars if any number greater than or equal to ten tosses are required to obtain the first head. Show that your expected value for this modified game is finite and find its value.
3. Assume that you pay 10 dollars for each play of the original game. Write a program to simulate 100 plays of the game and see how you do.
4. Now assume that the utility of $n$ dollars is $\sqrt n$. Write an expression for the expected utility of the payment, and show that this expression has a finite value. Estimate this value. Repeat this exercise for the case that the utility function is $\log(n)$.
Exercise $21$
Let $X$ be a random variable which is Poisson distributed with parameter $\lambda$. Show that $E(X) = \lambda$. Hint: Recall that $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\,.$
Exercise $22$
Recall that in Exercise 1.1.4, we considered a town with two hospitals. In the large hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. We were interested in guessing which hospital would have on the average the largest number of days with the property that more than 60 percent of the children born on that day are boys. For each hospital find the expected number of days in a year that have the property that more than 60 percent of the children born on that day were boys.
Exercise $23$
An insurance company has 1,000 policies on men of age 50. The company estimates that the probability that a man of age 50 dies within a year is .01. Estimate the number of claims that the company can expect from beneficiaries of these men within a year.
Exercise $24$
Using the life table for 1981 in Appendix C, write a program to compute the expected lifetime for males and females of each possible age from 1 to 85. Compare the results for males and females. Comment on whether life insurance should be priced differently for males and females.
Exercise $25$
A deck of ESP cards consists of 20 cards each of two types: say ten stars, ten circles (normally there are five types). The deck is shuffled and the cards turned up one at a time. You, the alleged percipient, are to name the symbol on each card before it is turned up.
Suppose that you are really just guessing at the cards. If you do not get to see each card after you have made your guess, then it is easy to calculate the expected number of correct guesses, namely ten.
If, on the other hand, you are guessing with information, that is, if you see each card after your guess, then, of course, you might expect to get a higher score. This is indeed the case, but calculating the correct expectation is no longer easy.
But it is easy to do a computer simulation of this guessing with information, so we can get a good idea of the expectation by simulation. (This is similar to the way that skilled blackjack players make blackjack into a favorable game by observing the cards that have already been played. See Exercise $29$.)
1. First, do a simulation of guessing without information, repeating the experiment at least 1000 times. Estimate the expected number of correct answers and compare your result with the theoretical expectation.
2. What is the best strategy for guessing with information?
3. Do a simulation of guessing with information, using the strategy in (b). Repeat the experiment at least 1000 times, and estimate the expectation in this case.
Exercise $26$
Consider the ESP problem as described in Exercise $25$. You are again guessing with information, and you are using the optimal guessing strategy of guessing star if the remaining deck has more stars, circle if more circles, and tossing a coin if the number of stars and circles are equal. Assume that $S \geq C$, where $S$ is the number of stars and $C$ the number of circles.
We can plot the results of a typical game on a graph, where the horizontal axis represents the number of steps and the vertical axis represents the difference between the number of stars and the number of circles that have been turned up. A typical game is shown in Figure $6$. In this particular game, the order in which the cards were turned up is $(C,S,S,S,S,C,C,S,S,C)$. Thus, in this particular game, there were six stars and four circles in the deck. This means, in particular, that every game played with this deck would have a graph which ends at the point $(10, 2)$. We define the line $L$ to be the horizontal line which goes through the ending point on the graph (so its vertical coordinate is just the difference between the number of stars and circles in the deck).
1. Show that, when the random walk is below the line $L$, the player guesses right when the graph goes up (star is turned up) and, when the walk is above the line, the player guesses right when the walk goes down (circle turned up). Show from this property that the subject is sure to have at least $S$ correct guesses.
2. When the walk is at a point $(x,x)$ on the line $L$, the number of stars and circles remaining is the same, and so the subject tosses a coin. Show that the probability that the walk reaches $(x,x)$ is $\frac{\binom{S}{x} \binom{C}{x} }{ \binom{S + C}{2x} }\ .$ Hint: The number of stars among the first $2x$ cards has a hypergeometric distribution (see Section 5.1).
3. Using the results of (a) and (b) show that the expected number of correct guesses under intelligent guessing is $S + \sum_{x = 1}^C \frac{1}{2} \frac{ \binom{S}{x} \binom{C}{x} }{ \binom{S + C}{2x} }$
Exercise $27$
It has been said12 that a Dr. B. Muriel Bristol declined a cup of tea stating that she preferred a cup into which milk had been poured first. The famous statistician R. A. Fisher carried out a test to see if she could tell whether milk was put in before or after the tea. Assume that for the test Dr. Bristol was given eight cups of tea—four in which the milk was put in before the tea and four in which the milk was put in after the tea.
1. What is the expected number of correct guesses the lady would make if she had no information after each test and was just guessing?
2. Using the result of Exercise $26$, find the expected number of correct guesses if she was told the result of each guess and used an optimal guessing strategy.
Exercise $28$
In a popular computer game the computer picks an integer from 1 to $n$ at random. The player is given $k$ chances to guess the number. After each guess the computer responds “correct," “too small," or “too big."
1. Show that if $n \leq 2^k - 1$, then there is a strategy that guarantees you will correctly guess the number in $k$ tries.
2. Show that if $n \geq 2^k - 1$, there is a strategy that assures you of identifying one of $2^k - 1$ numbers and hence gives a probability of $(2^k - 1)/n$ of winning. Why is this an optimal strategy? Illustrate your result in terms of the case $n = 9$ and $k = 3$.
Exercise $29$
In the casino game of blackjack the dealer is dealt two cards, one face up and one face down, and each player is dealt two cards, both face down. If the dealer is showing an ace the player can look at his down cards and then make a bet called an insurance bet. (Expert players will recognize why it is called insurance.) If you make this bet you will win the bet if the dealer’s second card is a ten-card: namely, a ten, jack, queen, or king. If you win, you are paid twice your insurance bet; otherwise you lose this bet. Show that, if the only cards you can see are the dealer’s ace and your two cards and if your cards are not ten cards, then the insurance bet is an unfavorable bet. Show, however, that if you are playing two hands simultaneously, and you have no ten cards, then it is a favorable bet. (Thorp13 has shown that the game of blackjack is favorable to the player if he or she can keep good enough track of the cards that have been played.)
Exercise $30$
Assume that, every time you buy a box of Wheaties, you receive a picture of one of the $n$ players for the New York Yankees (see Exercise 3.2.34). Let $X_k$ be the number of additional boxes you have to buy, after you have obtained $k - 1$ different pictures, in order to obtain the next new picture. Thus $X_1 = 1$, $X_2$ is the number of boxes bought after this to obtain a picture different from the first picture obtained, and so forth.
1. Show that $X_k$ has a geometric distribution with $p = (n - k + 1)/n$.
2. Simulate the experiment for a team with 26 players (25 would be more accurate but we want an even number). Carry out a number of simulations and estimate the expected time required to get the first 13 players and the expected time to get the second 13. How do these expectations compare?
3. Show that, if there are $2n$ players, the expected time to get the first half of the players is $2n \left( \frac 1{2n} + \frac 1{2n - 1} +\cdots+ \frac 1{n + 1} \right)\ ,$ and the expected time to get the second half is $2n \left( \frac 1n + \frac 1{n - 1} +\cdots+ 1 \right)\ .$
4. In Example $11$ we stated that $1 + \frac 12 + \frac 13 +\cdots+ \frac 1n \sim \log n + .5772 + \frac 1{2n}\ .$ Use this to estimate the expression in (c). Compare these estimates with the exact values and also with your estimates obtained by simulation for the case $n = 26$.
Exercise $31$
(Feller14) A large number, $N$, of people are subjected to a blood test. This can be administered in two ways: (1) Each person can be tested separately; in this case $N$ tests are required. (2) The blood samples of $k$ persons can be pooled and analyzed together. If this test is negative, this one test suffices for the $k$ people. If the test is positive, each of the $k$ persons must be tested separately, and in all, $k + 1$ tests are required for the $k$ people. Assume that the probability $p$ that a test is positive is the same for all people and that these events are independent.
1. Find the probability that the test for a pooled sample of $k$ people will be positive.
2. What is the expected value of the number $X$ of tests necessary under plan (2)? (Assume that $N$ is divisible by $k$.)
3. For small $p$, show that the value of $k$ which will minimize the expected number of tests under the second plan is approximately $1/\sqrt p$.
Exercise $32$
Write a program to add random numbers chosen from $[0,1]$ until the first time the sum is greater than one. Have your program repeat this experiment a number of times to estimate the expected number of selections necessary in order that the sum of the chosen numbers first exceeds 1. On the basis of your experiments, what is your estimate for this number?
Exercise $33$
The following related discrete problem also gives a good clue for the answer to Exercise $32$. Randomly select with replacement $t_1$, $t_2$, …, $t_r$ from the set $(1/n, 2/n, \dots, n/n)$. Let $X$ be the smallest value of $r$ satisfying $t_1 + t_2 +\cdots+ t_r > 1\ .$ Then $E(X) = (1 + 1/n)^n$. To prove this, we can just as well choose $t_1$, $t_2$, …, $t_r$ randomly with replacement from the set $(1, 2, \dots, n)$ and let $X$ be the smallest value of $r$ for which $t_1 + t_2 +\cdots+ t_r > n\ .$
1. Use Exercise 3.2.35 to show that $P(X \geq j + 1) = {n \choose j}{\Bigl(\frac {1}{n}\Bigr)^j}\ .$
2. Show that $E(X) = \sum_{j = 0}^n P(X \geq j + 1)\ .$
Exercise $34$
(Banach’s Matchbox16) A man carries in each of his two front pockets a box of matches originally containing $N$ matches. Whenever he needs a match, he chooses a pocket at random and removes one from that box. One day he reaches into a pocket and finds the box empty.
1. Let $p_r$ denote the probability that the other pocket contains $r$ matches. Define a sequence of random variables as follows: Let $X_i = 1$ if the $i$th draw is from the left pocket, and 0 if it is from the right pocket. Interpret $p_r$ in terms of $S_n = X_1 + X_2 +\cdots+ X_n$. Find a binomial expression for $p_r$.
2. Write a computer program to compute the $p_r$, as well as the probability that the other pocket contains at least $r$ matches, for $N = 100$ and $r$ from 0 to 50.
3. Show that $(N - r)p_r = (1/2)(2N + 1)p_{r + 1} - (1/2)(r + 1)p_{r + 1}$.
4. Evaluate $\sum_r p_r$.
5. Use (c) and (d) to determine the expectation $E$ of the distribution $\{p_r\}$.
6. Use Stirling’s formula to obtain an approximation for $E$. How many matches must each box contain to ensure a value of about 13 for the expectation $E$? (Take $\pi = 22/7$.)
Exercise $35$
A coin is tossed until the first time a head turns up. If this occurs on the $n$th toss and $n$ is odd you win $2^n/n$, but if $n$ is even then you lose $2^n/n$. Then if your expected winnings exist they are given by the convergent series $1 - \frac 12 + \frac 13 - \frac 14 +\cdots$ called the alternating harmonic series. It is tempting to say that this should be the expected value of the experiment. Show that if we were to do this, the expected value of an experiment would depend upon the order in which the outcomes are listed.
Exercise $36$
Suppose we have an urn containing $c$ yellow balls and $d$ green balls. We draw $k$ balls, without replacement, from the urn. Find the expected number of yellow balls drawn. Hint: Write the number of yellow balls drawn as the sum of $c$ random variables.
Exercise $37$
The reader is referred to Example $13$ for an explanation of the various options available in Monte Carlo roulette.
1. Compute the expected winnings of a 1 franc bet on red under option (a).
2. Repeat part (a) for option (b).
3. Compare the expected winnings for all three options.
Exercise $38$
(from Pittel17) Telephone books, $n$ in number, are kept in a stack. The probability that the book numbered $i$ (where $1 \le i \le n$) is consulted for a given phone call is $p_i > 0$, where the $p_i$’s sum to 1. After a book is used, it is placed at the top of the stack. Assume that the calls are independent and evenly spaced, and that the system has been employed indefinitely far into the past. Let $d_i$ be the average depth of book $i$ in the stack. Show that $d_i \le d_j$ whenever $p_i \ge p_j$. Thus, on the average, the more popular books have a tendency to be closer to the top of the stack. Hint: Let $p_{ij}$ denote the probability that book $i$ is above book $j$. Show that $p_{ij} = p_{ij}(1 - p_j) + p_{ji}p_i$.
Exercise $39$
(from Propp18) In the previous problem, let $P$ be the probability that at the present time, each book is in its proper place, i.e., book $i$ is $i$th from the top. Find a formula for $P$ in terms of the $p_i$’s. In addition, find the least upper bound on $P$, if the $p_i$’s are allowed to vary. Hint: First find the probability that book 1 is in the right place. Then find the probability that book 2 is in the right place, given that book 1 is in the right place. Continue.
Exercise $40$
(from H. Shultz and B. Leonard19) A sequence of random numbers in $[0, 1)$ is generated until the sequence is no longer monotone increasing. The numbers are chosen according to the uniform distribution. What is the expected length of the sequence? (In calculating the length, the term that destroys monotonicity is included.) Hint: Let $a_1,\ a_2,\ \ldots$ be the sequence and let $X$ denote the length of the sequence. Then $P(X > k) = P(a_1 < a_2 < \cdots < a_k)\ ,$ and the probability on the right-hand side is easy to calculate. Furthermore, one can show that $E(X) = 1 + P(X > 1) + P(X > 2) + \cdots\ .$
Exercise $41$
Let $T$ be the random variable that counts the number of 2-unshuffles performed on an $n$-card deck until all of the labels on the cards are distinct. This random variable was discussed in Section 3.3. Using Equation 3.3.1 in that section, together with the formula $E(T) = \sum_{s = 0}^\infty P(T > s)$ that was proved in Exercise $33$, show that
$E(T) = \sum_{s = 0}^\infty \left(1 - {\binom{2^s}{n}}\frac{n!}{2^{sn}}\right) .$
Show that for $n = 52$, this expression is approximately equal to 11.7. (As was stated in Chapter 3, this means that on the average, almost 12 riffle shuffles of a 52-card deck are required in order for the process to be considered random.)
The usefulness of the expected value as a prediction for the outcome of an experiment is increased when the outcome is not likely to deviate too much from the expected value. In this section we shall introduce a measure of this deviation, called the variance.
Variance
Definition
Let $X$ be a numerically valued random variable with expected value $\mu = E(X)$. Then the variance of $X$, denoted by $V(X)$, is $V(X) = E((X - \mu)^2)\ .$
Note that, by Theorem 6.1.1, $V(X)$ is given by
$V(X) = \sum_x (x - \mu)^2 m(x)\ , \label{eq 6.1}$ where $m$ is the distribution function of $X$.
Standard Deviation
The standard deviation of $X$, denoted by $D(X)$, is $D(X) = \sqrt {V(X)}$. We often write $\sigma$ for $D(X)$ and $\sigma^2$ for $V(X)$.
Example $1$
Consider one roll of a die. Let $X$ be the number that turns up. To find $V(X)$, we must first find the expected value of $X$.
Solution
This is
\begin{align} \mu & = E(X) = 1\Bigl(\frac 16\Bigr) + 2\Bigl(\frac 16\Bigr) + 3\Bigl(\frac{1}{6}\Bigr) + 4\Bigl(\frac{1}{6}\Bigr) + 5\Bigl(\frac{1}{6}\Bigr) + 6\Bigl(\frac{1}{6}\Bigr) \\ & = \frac{7}{2}\ . \end{align}
To find the variance of $X$, we form the new random variable $(X - \mu)^2$ and compute its expectation. We can easily do this using the following table.
$\nonumber \begin{array}{ccc} x & m(x) & (x - 7/2)^2 \\ \hline 1 & 1/6 & 25/4 \\ 2 & 1/6 & 9/4 \\ 3 & 1/6 & 1/4 \\ 4 & 1/6 & 1/4 \\ 5 & 1/6 & 9/4 \\ 6 & 1/6 & 25/4 \end{array}$
From this table we find that $E((X - \mu)^2)$ is \begin{align} V(X) & = \frac{1}{6} \left( \frac{25}{4} + \frac{9}{4} + \frac{1}{4} + \frac{1}{4} + \frac{9}{4} + \frac {25}{4} \right) \\ & = \frac{35}{12} \end{align}
and the standard deviation $D(X) = \sqrt{35/12} \approx 1.707$.
Calculation of Variance
We next prove a theorem that gives us a useful alternative form for computing the variance.
Theorem $1$
If $X$ is any random variable with $E(X) = \mu$, then $V(X) = E(X^2) - \mu^2\ .$
Proof
We have \begin{aligned} V(X) & = E((X - \mu)^2) = E(X^2 - 2\mu X + \mu^2) \\ & = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - \mu^2\ .\end{aligned}
Using Theorem $1$, we can compute the variance of the outcome of a roll of a die by first computing \begin{align} E(X^2) & = 1\Bigl(\frac 16\Bigr) + 4\Bigl(\frac 16\Bigr) + 9\Bigl(\frac 16\Bigr) + 16\Bigl(\frac 16\Bigr) + 25\Bigl(\frac 16\Bigr) + 36\Bigl(\frac 16\Bigr) \\ & = \frac {91}6\ ,\end{align} and, $V(X) = E(X^2) - \mu^2 = \frac {91}{6} - \Bigl(\frac 72\Bigr)^2 = \frac {35}{12}\ ,$ in agreement with the value obtained directly from the definition of $V(X)$.
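Both computations are easy to verify by machine. A short Python sketch (our illustration, using exact rational arithmetic):

```python
from fractions import Fraction
import math

m = Fraction(1, 6)                            # m(x) = 1/6 for x = 1, ..., 6
mu = sum(x * m for x in range(1, 7))          # expected value, 7/2
var = sum((x - mu) ** 2 * m for x in range(1, 7))
print(mu, var, math.sqrt(var))                # 7/2, 35/12, about 1.707

# The alternative form from Theorem 1: V(X) = E(X^2) - mu^2.
ex2 = sum(x ** 2 * m for x in range(1, 7))    # E(X^2) = 91/6
print(ex2 - mu ** 2)                          # 35/12 again
```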
Properties of Variance
The variance has properties very different from those of the expectation. If $c$ is any constant, $E(cX) = cE(X)$ and $E(X + c) = E(X) + c$. These two statements imply that the expectation is a linear function. However, the variance is not linear, as seen in the next theorem.
Theorem $2$
If $X$ is any random variable and $c$ is any constant, then $V(cX) = c^2 V(X)$ and $V(X + c) = V(X)\ .$
Proof
Let $\mu = E(X)$. Then $E(cX) = c\mu$, and \begin{aligned} V(cX) & = E((cX - c\mu)^2) = E(c^2(X - \mu)^2) \\ & = c^2 E((X - \mu)^2) = c^2 V(X)\ .\end{aligned}
To prove the second assertion, we note that, to compute $V(X + c)$, we would replace $x$ by $x + c$ and $\mu$ by $\mu + c$ in Equation $\ref{eq 6.1}$. Then the $c$’s would cancel, leaving $V(X)$.
We turn now to some general properties of the variance. Recall that if $X$ and $Y$ are any two random variables, $E(X + Y) = E(X) + E(Y)$. This is not always true for the case of the variance. For example, let $X$ be a random variable with $V(X) \ne 0$, and define $Y = -X$. Then $V(X) = V(Y)$, so that $V(X) + V(Y) = 2V(X)$. But $X + Y$ is always 0 and hence has variance 0. Thus $V(X + Y) \ne V(X) + V(Y)$.
In the important case of mutually independent random variables, however, the variance of the sum is the sum of the variances.
Theorem $3$
Let $X$ and $Y$ be two independent random variables. Then $V(X + Y) = V(X) + V(Y)\ .$
Proof
Let $E(X) = a$ and $E(Y) = b$. Then \begin{aligned} V(X + Y) & = E((X + Y)^2) - (a + b)^2 \\ & = E(X^2) + 2E(XY) + E(Y^2) - a^2 - 2ab - b^2\ .\end{aligned} Since $X$ and $Y$ are independent, $E(XY) = E(X)E(Y) = ab$. Thus, $V(X + Y) = E(X^2) - a^2 + E(Y^2) - b^2 = V(X) + V(Y)\ .$
It is easy to extend this proof, by mathematical induction, to show that the variance of the sum of any number of mutually independent random variables is the sum of the individual variances. Thus we have the following theorem.
Theorem $4$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process with $E(X_j) = \mu$ and $V(X_j) = \sigma^2$. Let $S_n = X_1 + X_2 +\cdots+ X_n$ be the sum, and $A_n = \frac {S_n}n$ be the average. Then \begin{aligned} E(S_n) & = n\mu\ , \\ V(S_n) & = n\sigma^2\ , \\ \sigma(S_n) & = \sigma \sqrt{n}\ , \\ E(A_n) & = \mu\ , \\ V(A_n) & = \frac {\sigma^2}{n}\ , \\ \sigma(A_n) & = \frac{\sigma}{\sqrt n}\ .\end{aligned}
Proof
Since all the random variables $X_j$ have the same expected value, we have $E(S_n) = E(X_1) +\cdots+ E(X_n) = n\mu\ ,$ $V(S_n) = V(X_1) +\cdots+ V(X_n) = n\sigma^2\ ,$ and $\sigma(S_n) = \sigma \sqrt{n}\ .$
We have seen that, if we multiply a random variable $X$ with mean $\mu$ and variance $\sigma^2$ by a constant $c$, the new random variable has expected value $c\mu$ and variance $c^2\sigma^2$. Thus,
$E(A_n) = E\left(\frac {S_n}n \right) = \frac {n\mu}n = \mu\ ,$
and
$V(A_n) = V\left( \frac {S_n}n \right) = \frac {V(S_n)}{n^2} = \frac {n\sigma^2}{n^2} = \frac {\sigma^2}n\ .$
Finally, the standard deviation of $A_n$ is given by
$\sigma(A_n) = \frac {\sigma}{\sqrt n}\ .$
The last equation in the above theorem implies that in an independent trials process, if the individual summands have finite variance, then the standard deviation of the average goes to 0 as $n \rightarrow \infty$. Since the standard deviation tells us something about the spread of the distribution around the mean, we see that for large values of $n$, the value of $A_n$ is usually very close to the mean of $A_n$, which equals $\mu$, as shown above. This statement is made precise in Chapter 8, where it is called the Law of Large Numbers. For example, let $X$ represent the roll of a fair die. In Figure $5$, we show the distribution of a random variable $A_n$ corresponding to $X$, for $n = 10$ and $n = 100$.
Example $2$
Consider $n$ rolls of a die. We have seen that, if $X_j$ is the outcome of the $j$th roll, then $E(X_j) = 7/2$ and $V(X_j) = 35/12$. Thus, if $S_n$ is the sum of the outcomes, and $A_n = S_n/n$ is the average of the outcomes, we have $E(A_n) = 7/2$ and $V(A_n) = (35/12)/n$. Therefore, as $n$ increases, the expected value of the average remains constant, but the variance tends to 0. If the variance is a measure of the expected deviation from the mean, this would indicate that, for large $n$, we can expect the average to be very near the expected value. This is in fact the case, and we shall justify it in Chapter 8.
Bernoulli Trials
Consider next the general Bernoulli trials process. As usual, we let $X_j = 1$ if the $j$th outcome is a success and 0 if it is a failure. If $p$ is the probability of a success, and $q = 1 - p$, then \begin{aligned} E(X_j) & = 0q + 1p = p\ , \\ E(X_j^2) & = 0^2q + 1^2p = p\ ,\end{aligned} and $V(X_j) = E(X_j^2) - (E(X_j))^2 = p - p^2 = pq\ .$
Thus, for Bernoulli trials, if $S_n = X_1 + X_2 +\cdots+ X_n$ is the number of successes, then $E(S_n) = np$, $V(S_n) = npq$, and $D(S_n) = \sqrt{npq}\,$. If $A_n = S_n/n$ is the average number of successes, then $E(A_n) = p$, $V(A_n) = pq/n$, and $D(A_n) = \sqrt{pq/n}$. We see that the expected proportion of successes remains $p$ and the variance tends to 0. This suggests that the frequency interpretation of probability is a correct one. We shall make this more precise in Chapter 8.
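A simulation makes the shrinking variance of $A_n$ visible. The following Python sketch (our illustration; the choice $p = .3$ is arbitrary) compares the sample variance of $A_n$ with the theoretical value $pq/n$ for several values of $n$.

```python
import random

def average_of_trials(n, p):
    """One observation of A_n = S_n / n for n Bernoulli(p) trials."""
    return sum(1 for _ in range(n) if random.random() < p) / n

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

p = 0.3
q = 1 - p
for n in (10, 100, 1000):
    samples = [average_of_trials(n, p) for _ in range(10_000)]
    print(n, sample_variance(samples), p * q / n)   # the two should agree
```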
Example $3$
Let $T$ denote the number of trials until the first success in a Bernoulli trials process. Then $T$ is geometrically distributed. What is the variance of $T$?
Answer
In Example 5.7, we saw that $m_T = \pmatrix{1 & 2 & 3 & \cdots \cr p & qp & q^2p & \cdots \cr}\ .$ In Example 6.8, we showed that $E(T) = 1/p\ .$ Thus, $V(T) = E(T^2) - 1/p^2\ ,$ so we need only find \begin{aligned} E(T^2) & = 1p + 4qp + 9q^2p + \cdots \\ & = p(1 + 4q + 9q^2 + \cdots )\ .\end{aligned} To evaluate this sum, we start again with $1 + x + x^2 +\cdots= \frac 1{1 - x}\ .$ Differentiating, we obtain $1 + 2x + 3x^2 +\cdots= \frac 1{(1 - x)^2}\ .$ Multiplying by $x$, $x + 2x^2 + 3x^3 +\cdots= \frac x{(1 - x)^2}\ .$ Differentiating again gives $1 + 4x + 9x^2 +\cdots= \frac {1 + x}{(1 - x)^3}\ .$ Thus, $E(T^2) = p\,\frac {1 + q}{(1 - q)^3} = \frac {1 + q}{p^2}$ and \begin{aligned} V(T) & = E(T^2) - (E(T))^2 \\ & = \frac {1 + q}{p^2} - \frac 1{p^2} = \frac q{p^2}\ .\end{aligned}
For example, the variance for the number of tosses of a coin until the first head turns up is $(1/2)/(1/2)^2 = 2$. The variance for the number of rolls of a die until the first six turns up is $(5/6)/(1/6)^2 = 30$. Note that, as $p$ decreases, the variance increases rapidly. This corresponds to the increased spread of the geometric distribution as $p$ decreases (noted in Figure [fig 5.4]).
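These values are easy to check by simulation. Here is a minimal sketch, not part of the original text (the choice $p = 1/6$ and the number of repetitions are arbitrary):

```python
import random

def trials_until_first_success(p):
    """Simulate T, the number of Bernoulli(p) trials until the first success."""
    t = 1
    while random.random() >= p:
        t += 1
    return t

p, reps = 1 / 6, 100000
ts = [trials_until_first_success(p) for _ in range(reps)]
mean = sum(ts) / reps
var = sum((t - mean) ** 2 for t in ts) / reps
print(mean, var)  # theory: E(T) = 1/p = 6, V(T) = q/p^2 = 30
```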
Poisson Distribution
Just as in the case of expected values, it is easy to guess the variance of the Poisson distribution with parameter $\lambda$. We recall that the variance of a binomial distribution with parameters $n$ and $p$ equals $npq$. We also recall that the Poisson distribution could be obtained as a limit of binomial distributions, if $n$ goes to $\infty$ and $p$ goes to 0 in such a way that their product is kept fixed at the value $\lambda$. In this case, $npq = \lambda q$ approaches $\lambda$, since $q$ goes to 1. So, given a Poisson distribution with parameter $\lambda$, we should guess that its variance is $\lambda$. The reader is asked to show this in Exercise $29$.
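The limiting behavior of $npq$ is easy to see numerically. A short sketch (the value $\lambda = 3$ is an arbitrary choice for illustration):

```python
lam = 3.0
for n in [10, 100, 1000, 10000]:
    p = lam / n
    print(n, n * p * (1 - p))  # npq approaches lambda = 3 as n grows
```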
Exercises
Exercise $1$
A number is chosen at random from the set $S = \{-1,0,1\}$. Let $X$ be the number chosen. Find the expected value, variance, and standard deviation of $X$.
Exercise $2$
A random variable $X$ has the distribution $p_X = \pmatrix{ 0 & 1 & 2 & 4 \cr 1/3 & 1/3 & 1/6 & 1/6 \cr}\ .$ Find the expected value, variance, and standard deviation of $X$.
Exercise $3$
You place a 1-dollar bet on the number 17 at Las Vegas, and your friend places a 1-dollar bet on black (see Exercises 1.1.6 and 1.1.7). Let $X$ be your winnings and $Y$ be her winnings. Compare $E(X)$, $E(Y)$, and $V(X)$, $V(Y)$. What do these computations tell you about the nature of your winnings if you and your friend make a sequence of bets, with you betting each time on a number and your friend betting on a color?
Exercise $4$
$X$ is a random variable with $E(X) = 100$ and $V(X) = 15$. Find
1. $E(X^2)$.
2. $E(3X + 10)$.
3. $E(-X)$.
4. $V(-X)$.
5. $D(-X)$.
Exercise $5$
In a certain manufacturing process, the (Fahrenheit) temperature never varies by more than $2^\circ$ from $62^\circ$. The temperature is, in fact, a random variable $F$ with distribution $P_F = \pmatrix{ 60 & 61 & 62 & 63 & 64 \cr 1/10 & 2/10 & 4/10 & 2/10 & 1/10 \cr}\ .$
1. Find $E(F)$ and $V(F)$.
2. Define $T = F - 62$. Find $E(T)$ and $V(T)$, and compare these answers with those in part (a).
3. It is decided to report the temperature readings on a Celsius scale, that is, $C = (5/9)(F - 32)$. What is the expected value and variance for the readings now?
Exercise $6$
Write a computer program to calculate the mean and variance of a distribution which you specify as data. Use the program to compare the variances for the following densities, both having expected value 0: $p_X = \pmatrix{ -2 & -1 & 0 & 1 & 2 \cr 3/11 & 2/11 & 1/11 & 2/11 & 3/11 \cr}\ ;$ $p_Y = \pmatrix{ -2 & -1 & 0 & 1 & 2 \cr 1/11 & 2/11 & 5/11 & 2/11 & 1/11 \cr}\ .$
Exercise $7$
A coin is tossed three times. Let $X$ be the number of heads that turn up. Find $V(X)$ and $D(X)$.
Exercise $8$
A random sample of 2400 people are asked if they favor a government proposal to develop new nuclear power plants. If 40 percent of the people in the country are in favor of this proposal, find the expected value and the standard deviation for the number $S_{2400}$ of people in the sample who favored the proposal.
Exercise $9$
A die is loaded so that the probability of a face coming up is proportional to the number on that face. The die is rolled with outcome $X$. Find $V(X)$ and $D(X)$.
Exercise $10$
Prove the following facts about the standard deviation.
1. $D(X + c) = D(X)$.
2. $D(cX) = |c|D(X)$.
Exercise $11$
A number is chosen at random from the integers 1, 2, 3, …, $n$. Let $X$ be the number chosen. Show that $E(X) = (n + 1)/2$ and $V(X) = (n - 1)(n + 1)/12$. Hint: The following identity may be useful: $1^2 + 2^2 + \cdots + n^2 = \frac{(n)(n+1)(2n+1)}{6}\ .$
Exercise $12$
Let $X$ be a random variable with $\mu = E(X)$ and $\sigma^2 = V(X)$. Define $X^* = (X - \mu)/\sigma$. The random variable $X^*$ is called the standardized random variable associated with $X$. Show that this standardized random variable has expected value 0 and variance 1.
Exercise $13$
Peter and Paul play Heads or Tails (see Example [exam 1.3]). Let $W_n$ be Peter’s winnings after $n$ matches. Show that $E(W_n) = 0$ and $V(W_n) = n$.
Exercise $14$
Find the expected value and the variance for the number of boys and the number of girls in a royal family that has children until there is a boy or until there are three children, whichever comes first.
Exercise $15$
Suppose that $n$ people have their hats returned at random. Let $X_i = 1$ if the $i$th person gets his or her own hat back and 0 otherwise. Let $S_n = \sum_{i = 1}^n X_i$. Then $S_n$ is the total number of people who get their own hats back. Show that
1. $E(X_i^2) = 1/n$.
2. $E(X_i \cdot X_j) = 1/n(n - 1)$ for $i \ne j$.
3. $E(S_n^2) = 2$ (using (a) and (b)).
4. $V(S_n) = 1$.
Exercise $16$
Let $S_n$ be the number of successes in $n$ independent trials. Use the program BinomialProbabilities (Section [sec 3.2]) to compute, for given $n$, $p$, and $j$, the probability $P(-j\sqrt{npq} < S_n - np < j\sqrt{npq})\ .$
1. Let $p = .5$, and compute this probability for $j = 1$, 2, 3 and $n = 10$, 30, 50. Do the same for $p = .2$.
2. Show that the standardized random variable $S_n^* = (S_n - np)/\sqrt{npq}$ has expected value 0 and variance 1. What do your results from (a) tell you about this standardized quantity $S_n^*$?
Exercise $17$
Let $X$ be the outcome of a chance experiment with $E(X) = \mu$ and $V(X) = \sigma^2$. When $\mu$ and $\sigma^2$ are unknown, the statistician often estimates them by repeating the experiment $n$ times with outcomes $x_1$, $x_2$, …, $x_n$, estimating $\mu$ by the sample mean
$\bar{x} = \frac 1n \sum_{i = 1}^n x_i\ ,$
and $\sigma^2$ by the sample variance
$s^2 = \frac 1n \sum_{i = 1}^n (x_i - \bar x)^2\ .$
Then $s$ is the sample standard deviation. These formulas should remind the reader of the definitions of the theoretical mean and variance. (Many statisticians define the sample variance with the coefficient $1/n$ replaced by $1/(n-1)$. If this alternative definition is used, the expected value of $s^2$ is equal to $\sigma^2$. See Exercise 6.2.19, part (d).)
Write a computer program that will roll a die $n$ times and compute the sample mean and sample variance. Repeat this experiment several times for $n = 10$ and $n = 1000$. How well do the sample mean and sample variance estimate the true mean 7/2 and variance 35/12?
Exercise $18$
Show that, for the sample mean $\bar x$ and sample variance $s^2$ as defined in Exercise 17,
1. $E(\bar x) = \mu$.
2. $E\bigl((\bar x - \mu)^2\bigr) = \sigma^2/n$.
3. $E(s^2) = \frac {n-1}n\sigma^2$. Hint: For (c) write \begin{aligned} \sum_{i = 1}^n (x_i - \bar x)^2 &= \sum_{i = 1}^n \bigl((x_i - \mu) - (\bar x - \mu)\bigr)^2 \\ &= \sum_{i = 1}^n (x_i - \mu)^2 - 2(\bar x - \mu) \sum_{i = 1}^n (x_i - \mu) + n(\bar x - \mu)^2 \\ &= \sum_{i = 1}^n (x_i - \mu)^2 - n(\bar x - \mu)^2,\end{aligned} and take expectations of both sides, using part (b) when necessary.
4. Show that if, in the definition of $s^2$ in Exercise 17, we replace the coefficient $1/n$ by the coefficient $1/(n-1)$, then $E(s^2) = \sigma^2$. (This shows why many statisticians use the coefficient $1/(n-1)$. The number $s^2$ is used to estimate the unknown quantity $\sigma^2$. If an estimator has an average value which equals the quantity being estimated, then the estimator is said to be unbiased. Thus, the statement $E(s^2) = \sigma^2$ says that $s^2$ is an unbiased estimator of $\sigma^2$.)
Exercise $19$
Let $X$ be a random variable taking on values $a_1$, $a_2$, …, $a_r$ with probabilities $p_1$, $p_2$, …, $p_r$ and with $E(X) = \mu$. Define the spread of $X$ as follows: $\bar\sigma = \sum_{i = 1}^r |a_i - \mu|p_i\ .$ This, like the standard deviation, is a way to quantify the amount that a random variable is spread out around its mean. Recall that the variance of a sum of mutually independent random variables is the sum of the individual variances. The square of the spread corresponds to the variance in a manner similar to the correspondence between the spread and the standard deviation. Show by an example that it is not necessarily true that the square of the spread of the sum of two independent random variables is the sum of the squares of the individual spreads.
Exercise $20$
We have two instruments that measure the distance between two points. The measurements given by the two instruments are random variables $X_1$ and $X_2$ that are independent with $E(X_1) = E(X_2) = \mu$, where $\mu$ is the true distance. From experience with these instruments, we know the values of the variances $\sigma_1^2$ and $\sigma_2^2$. These variances are not necessarily the same. From two measurements, we estimate $\mu$ by the weighted average $\bar \mu = wX_1 + (1 - w)X_2$. Here $w$ is chosen in $[0,1]$ to minimize the variance of $\bar \mu$.
1. What is $E(\bar \mu)$?
2. How should $w$ be chosen in $[0,1]$ to minimize the variance of $\bar \mu$?
Exercise $21$
Let $X$ be a random variable with $E(X) = \mu$ and $V(X) = \sigma^2$. Show that the function $f(x)$ defined by $f(x) = \sum_\omega (X(\omega) - x)^2 p(\omega)$ has its minimum value when $x = \mu$.
Exercise $22$
Let $X$ and $Y$ be two random variables defined on the finite sample space $\Omega$. Assume that $X$, $Y$, $X + Y$, and $X - Y$ all have the same distribution. Prove that $P(X = Y = 0) = 1$.
Exercise $23$
If $X$ and $Y$ are any two random variables, then the covariance of $X$ and $Y$ is defined by Cov$(X,Y) = E((X - E(X))(Y - E(Y)))$. Note that Cov$(X,X) = V(X)$. Show that, if $X$ and $Y$ are independent, then Cov$(X,Y) = 0$; and show, by an example, that we can have Cov$(X,Y) = 0$ and $X$ and $Y$ not independent.
Exercise $24$
A professor wishes to make up a true-false exam with $n$ questions. She assumes that she can design the problems in such a way that a student will answer the $j$th problem correctly with probability $p_j$, and that the answers to the various problems may be considered independent experiments. Let $S_n$ be the number of problems that a student will get correct. The professor wishes to choose $p_j$ so that $E(S_n) = .7n$ and so that the variance of $S_n$ is as large as possible. Show that, to achieve this, she should choose $p_j = .7$ for all $j$; that is, she should make all the problems have the same difficulty.
Exercise $25$
(Lamperti${ }^{20}$) An urn contains exactly 5000 balls, of which an unknown number $X$ are white and the rest red, where $X$ is a random variable with a probability distribution on the integers 0, 1, 2, …, 5000.
1. Suppose we know that $E(X) = \mu$. Show that this is enough to allow us to calculate the probability that a ball drawn at random from the urn will be white. What is this probability?
2. We draw a ball from the urn, examine its color, replace it, and then draw another. Under what conditions, if any, are the results of the two drawings independent; that is, does $P(white,white) = P(white)^2 ?$
3. Suppose the variance of $X$ is $\sigma^2$. What is the probability of drawing two white balls in part (b)?
Exercise $26$
For a sequence of Bernoulli trials, let $X_1$ be the number of trials until the first success. For $j \geq 2$, let $X_j$ be the number of trials after the $(j - 1)$st success until the $j$th success. It can be shown that $X_1$, $X_2$, …is an independent trials process.
1. What is the common distribution, expected value, and variance for $X_j$?
2. Let $T_n = X_1 + X_2 + \cdots + X_n$. Then $T_n$ is the time until the $n$th success. Find $E(T_n)$ and $V(T_n)$.
3. Use the results of (b) to find the expected value and variance for the number of tosses of a coin until the $n$th occurrence of a head.
Exercise $27$
Referring to Exercise 6.1.30, find the variance for the number of boxes of Wheaties bought before getting half of the players’ pictures and the variance for the number of additional boxes needed to get the second half of the players’ pictures.
Exercise $28$
In Example 5.1.3, assume that the book in question has 1000 pages. Let $X$ be the number of pages with no mistakes. Show that $E(X) = 905$ and $V(X) = 86$. Using these results, show that the probability is ${} \leq .05$ that there will be more than 924 pages without errors or fewer than 866 pages without errors.
Exercise $29$
Let $X$ be Poisson distributed with parameter $\lambda$. Show that $V(X) = \lambda$.
In this section we consider the properties of the expected value and the variance of a continuous random variable. These quantities are defined just as for discrete random variables and share the same properties.
Expected Value
Definition: Term
Let $X$ be a real-valued random variable with density function $f(x)$. The expected value $\mu=E(X)$ is defined by
$\mu=E(X)=\int_{-\infty}^{+\infty} x f(x) d x,$
provided the integral
$\int_{-\infty}^{+\infty}|x| f(x) d x$
is finite.
The reader should compare this definition with the corresponding one for discrete random variables in Section 6.1. Intuitively, we can interpret $E(X)$, as we did in the previous sections, as the value that we should expect to obtain if we perform a large number of independent experiments and average the resulting values of $X$.
We can summarize the properties of $E(X)$ as follows (cf. Theorem 6.2).
Theorem $1$
If $X$ and $Y$ are real-valued random variables and $c$ is any constant, then
\begin{aligned} E(X+Y) & =E(X)+E(Y) \\ E(c X) & =c E(X) . \end{aligned}
The proof is very similar to the proof of Theorem 6.2 , and we omit it.
More generally, if $X_1, X_2, \ldots, X_n$ are $n$ real-valued random variables, and $c_1, c_2$, $\ldots, c_n$ are $n$ constants, then
$E\left(c_1 X_1+c_2 X_2+\cdots+c_n X_n\right)=c_1 E\left(X_1\right)+c_2 E\left(X_2\right)+\cdots+c_n E\left(X_n\right) .$
Example $1$
Let $X$ be uniformly distributed on the interval $[0,1]$. Then
$E(X)=\int_0^1 x d x=1 / 2$
It follows that if we choose a large number $N$ of random numbers from $[0,1]$ and take the average, then we can expect that this average should be close to the expected value of $1 / 2$.
Example $2$
Let $Z=(x, y)$ denote a point chosen uniformly and randomly from the unit disk, as in the dart game in Example 2.8 and let $X=\left(x^2+y^2\right)^{1 / 2}$ be the distance from $Z$ to the center of the disk. The density function of $X$ can easily be shown to equal $f(x)=2 x$, so by the definition of expected value,
\begin{aligned} E(X) & =\int_0^1 x f(x) d x \\ & =\int_0^1 x(2 x) d x \\ & =\frac{2}{3} . \end{aligned}
Example $3$
In the example of the couple meeting at the Inn (Example 2.16), each person arrives at a time which is uniformly distributed between 5:00 and 6:00 PM. The random variable $Z$ under consideration is the length of time the first person has to wait until the second one arrives. It was shown that
$f_Z(z)=2(1-z)$
for $0 \leq z \leq 1$. Hence,
$\begin{aligned} E(Z) & =\int_0^1 z f_Z(z) d z \\ & =\int_0^1 2 z(1-z) d z \\ & =\left[z^2-\frac{2}{3} z^3\right]_0^1 \\ & =\frac{1}{3} . \end{aligned}$
Expectation of a Function of a Random Variable
Suppose that $X$ is a real-valued random variable and $\phi(x)$ is a continuous function from $\mathbf{R}$ to $\mathbf{R}$. The following theorem is the continuous analogue of Theorem 6.1.
Theorem $2$
If $X$ is a real-valued random variable and if $\phi: \mathbf{R} \rightarrow \mathbf{R}$ is a continuous real-valued function with domain $[a, b]$, then
$E(\phi(X))=\int_{-\infty}^{+\infty} \phi(x) f_X(x) d x$
provided the integral exists.
For a proof of this theorem, see Ross. ${ }^{1}$
Expectation of the Product of Two Random Variables
In general, it is not true that $E(X Y)=E(X) E(Y)$, since the integral of a product is not the product of integrals. But if $X$ and $Y$ are independent, then the expectations multiply.
Theorem $3$
Let $X$ and $Y$ be independent real-valued continuous random variables with finite expected values. Then we have
$E(X Y)=E(X) E(Y) .$
Proof. We will prove this only in the case that the ranges of $X$ and $Y$ are contained in the intervals $[a, b]$ and $[c, d]$, respectively. Let the density functions of $X$ and $Y$ be denoted by $f_X(x)$ and $f_Y(y)$, respectively. Since $X$ and $Y$ are independent, the joint density function of $X$ and $Y$ is the product of the individual density functions. Hence
\begin{aligned} E(X Y) & =\int_a^b \int_c^d x y f_X(x) f_Y(y) d y d x \\ & =\int_a^b x f_X(x) d x \int_c^d y f_Y(y) d y \\ & =E(X) E(Y) . \end{aligned}
The proof in the general case involves using sequences of bounded random variables that approach $X$ and $Y$, and is somewhat technical, so we will omit it.
In the same way, one can show that if $X_1, X_2, \ldots, X_n$ are $n$ mutually independent real-valued random variables, then
$E\left(X_1 X_2 \cdots X_n\right)=E\left(X_1\right) E\left(X_2\right) \cdots E\left(X_n\right) .$
Example $4$
Let $Z=(X, Y)$ be a point chosen at random in the unit square. Let $A=X^2$ and $B=Y^2$. Then Theorem 4.3 implies that $A$ and $B$ are independent. Using Theorem $2$ , the expectations of $A$ and $B$ are easy to calculate:
\begin{aligned} E(A)=E(B) & =\int_0^1 x^2 d x \\ & =\frac{1}{3} . \end{aligned}
Using Theorem $3$, the expectation of $A B$ is just the product of $E(A)$ and $E(B)$, or $1 / 9$. The usefulness of this theorem is demonstrated by noting that it is quite a bit more difficult to calculate $E(A B)$ from the definition of expectation. One finds that the density function of $A B$ is
$f_{A B}(t)=\frac{-\log (t)}{4 \sqrt{t}},$
so
\begin{aligned} E(A B) & =\int_0^1 t f_{A B}(t) d t \\ & =\frac{1}{9} . \end{aligned}
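A Monte Carlo check of this value is straightforward. A minimal sketch, not part of the original text (the sample size is arbitrary):

```python
import random

reps = 1_000_000
total = sum((random.random() ** 2) * (random.random() ** 2) for _ in range(reps))
print(total / reps)  # should be close to 1/9 = 0.1111...
```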
Example $5$
Again let $Z=(X, Y)$ be a point chosen at random in the unit square, and let $W=X+Y$. Then $Y$ and $W$ are not independent, and we have
\begin{aligned} E(Y) & =\frac{1}{2}, \\ E(W) & =1, \\ E(Y W) & =E\left(X Y+Y^2\right)=E(X) E(Y)+\frac{1}{3}=\frac{7}{12} \neq E(Y) E(W) . \end{aligned}
We turn now to the variance.
Variance
Definition: Variance
Let $X$ be a real-valued random variable with density function $f(x)$. The variance $\sigma^2=V(X)$ is defined by
$\sigma^2=V(X)=E\left((X-\mu)^2\right) .$
The next result follows easily from Theorem $2$. There is another way to calculate the variance of a continuous random variable, which is usually slightly easier. It is given in Theorem $6$.
Theorem $4$
If $X$ is a real-valued random variable with $E(X)=\mu$, then
$\sigma^2=\int_{-\infty}^{\infty}(x-\mu)^2 f(x) d x .$
The properties listed in the next three theorems are all proved in exactly the same way that the corresponding theorems for discrete random variables were proved in Section 6.2.
Theorem $5$
If $X$ is a real-valued random variable defined on $\Omega$ and $c$ is any constant, then (cf. Theorem 6.2.7)
\begin{aligned} V(c X) & =c^2 V(X), \\ V(X+c) & =V(X) . \end{aligned}
Theorem $6$
If $X$ is a real-valued random variable with $E(X)=\mu$, then (cf. Theorem 6.2.6)
$V(X)=E\left(X^2\right)-\mu^2 .$
Theorem $7$
If $X$ and $Y$ are independent real-valued random variables on $\Omega$, then (cf. Theorem 6.2.8)
$V(X+Y)=V(X)+V(Y) .$
Example $6$
If $X$ is uniformly distributed on $[0,1]$, then, using Theorem $4$, we have
$V(X)=\int_0^1\left(x-\frac{1}{2}\right)^2 d x=\frac{1}{12} .$
Example $7$
Let $X$ be an exponentially distributed random variable with parameter $\lambda$. Then the density function of $X$ is
$f_X(x)=\lambda e^{-\lambda x} .$
From the definition of expectation and integration by parts, we have
\begin{aligned} E(X) & =\int_0^{\infty} x f_X(x) d x \\ & =\lambda \int_0^{\infty} x e^{-\lambda x} d x \\ & =-\left.x e^{-\lambda x}\right|_0 ^{\infty}+\int_0^{\infty} e^{-\lambda x} d x \\ & =0+\left.\frac{e^{-\lambda x}}{-\lambda}\right|_0 ^{\infty}=\frac{1}{\lambda} . \end{aligned}
Similarly, using Theorems $1$ and $6$ , we have
\begin{aligned} V(X) & =\int_0^{\infty} x^2 f_X(x) d x-\frac{1}{\lambda^2} \\ & =\lambda \int_0^{\infty} x^2 e^{-\lambda x} d x-\frac{1}{\lambda^2} \\ & =-\left.x^2 e^{-\lambda x}\right|_0 ^{\infty}+2 \int_0^{\infty} x e^{-\lambda x} d x-\frac{1}{\lambda^2} \\ & =-\left.x^2 e^{-\lambda x}\right|_0 ^{\infty}-\left.\frac{2 x e^{-\lambda x}}{\lambda}\right|_0 ^{\infty}-\left.\frac{2}{\lambda^2} e^{-\lambda x}\right|_0 ^{\infty}-\frac{1}{\lambda^2}=\frac{2}{\lambda^2}-\frac{1}{\lambda^2}=\frac{1}{\lambda^2} . \end{aligned}
In this case, both $E(X)$ and $V(X)$ are finite if $\lambda>0$.
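Both formulas are easy to verify by simulation. A minimal sketch, not part of the original text (the parameter $\lambda = 2$ and the sample size are arbitrary):

```python
import random

lam, reps = 2.0, 100000
xs = [random.expovariate(lam) for _ in range(reps)]
mean = sum(xs) / reps
var = sum((x - mean) ** 2 for x in xs) / reps
print(mean, 1 / lam)       # E(X) = 1/lambda = 0.5
print(var, 1 / lam ** 2)   # V(X) = 1/lambda^2 = 0.25
```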
Example $8$
Let $Z$ be a standard normal random variable with density function
$f_Z(x)=\frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} .$
Since this density function is symmetric with respect to the $y$-axis, it is easy to show that
$\int_{-\infty}^{\infty} x f_Z(x) d x$
has value 0 . The reader should recall however, that the expectation is defined to be the above integral only if the integral
$\int_{-\infty}^{\infty}|x| f_Z(x) d x$
is finite. This integral equals
$2 \int_0^{\infty} x f_Z(x) d x$
which one can easily show is finite. Thus, the expected value of $Z$ is 0 .
To calculate the variance of $Z$, we begin by applying Theorem $6$:
$V(Z)=\int_{-\infty}^{+\infty} x^2 f_Z(x) d x-\mu^2$
If we write $x^2$ as $x \cdot x$, and integrate by parts, we obtain
$\left.\frac{1}{\sqrt{2 \pi}}\left(-x e^{-x^2 / 2}\right)\right|_{-\infty} ^{+\infty}+\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{+\infty} e^{-x^2 / 2} d x .$
The first summand above can be shown to equal 0 , since as $x \rightarrow \pm \infty, e^{-x^2 / 2}$ gets small more quickly than $x$ gets large. The second summand is just the standard normal density integrated over its domain, so the value of this summand is 1 . Therefore, the variance of the standard normal density equals 1 .
Now let $X$ be a (not necessarily standard) normal random variable with parameters $\mu$ and $\sigma$. Then the density function of $X$ is
$f_X(x)=\frac{1}{\sqrt{2 \pi} \sigma} e^{-(x-\mu)^2 / 2 \sigma^2} .$
We can write $X=\sigma Z+\mu$, where $Z$ is a standard normal random variable. Since $E(Z)=0$ and $V(Z)=1$ by the calculation above, Theorems 6.10 and 6.14 imply that
\begin{aligned} E(X) & =E(\sigma Z+\mu)=\mu, \\ V(X) & =V(\sigma Z+\mu)=\sigma^2 . \end{aligned}
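A quick simulation check of the transformation $X = \sigma Z + \mu$ (the parameters $\mu = 5$, $\sigma = 2$ below are arbitrary, not from the text):

```python
import random

mu, sigma, reps = 5.0, 2.0, 100000
xs = [sigma * random.gauss(0, 1) + mu for _ in range(reps)]
mean = sum(xs) / reps
var = sum((x - mean) ** 2 for x in xs) / reps
print(mean, var)  # theory: E(X) = 5, V(X) = sigma^2 = 4
```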
Example $9$
Let $X$ be a continuous random variable with the Cauchy density function
$f_X(x)=\frac{a}{\pi} \frac{1}{a^2+x^2} .$
Then the expectation of $X$ does not exist, because the integral
$\frac{a}{\pi} \int_{-\infty}^{+\infty} \frac{|x| d x}{a^2+x^2}$
diverges. Thus the variance of $X$ also fails to exist. Densities whose variance is not defined, like the Cauchy density, behave quite differently in a number of important respects from those whose variance is finite. We shall see one instance of this difference in Section 8.2.
Independent Trials
Corollary $1$
If $X_1, X_2, \ldots, X_n$ is an independent trials process of real-valued random variables, with $E\left(X_i\right)=\mu$ and $V\left(X_i\right)=\sigma^2$, and if
$\begin{array}{l} S_n=X_1+X_2+\cdots+X_n, \\ A_n=\frac{S_n}{n}, \end{array}$
then
\begin{aligned} E\left(S_n\right) & =n \mu, \\ E\left(A_n\right) & =\mu, \\ V\left(S_n\right) & =n \sigma^2, \\ V\left(A_n\right) & =\frac{\sigma^2}{n} . \end{aligned}
It follows that if we set
$S_n^*=\frac{S_n-n \mu}{\sqrt{n \sigma^2}},$
then
$\begin{array}{l} E\left(S_n^*\right)=0, \\ V\left(S_n^*\right)=1 . \end{array}$
We say that $S_n^*$ is a standardized version of $S_n$ (see Exercise 12 in Section 6.2).
Queues
Example $10$
Let us consider again the queueing problem, that is, the problem of the customers waiting in a queue for service (see Example 5.7). We suppose again that customers join the queue in such a way that the time between arrivals is an exponentially distributed random variable $X$ with density function
$f_X(t)=\lambda e^{-\lambda t} .$
Then the expected value of the time between arrivals is simply $1 / \lambda$ (see Example 6.26), as was stated in Example 5.7. The reciprocal $\lambda$ of this expected value is often referred to as the arrival rate. The service time of an individual who is first in line is defined to be the amount of time that the person stays at the head of the line before leaving. We suppose that the customers are served in such a way that the service time is another exponentially distributed random variable $Y$ with density function
$f_Y(t)=\mu e^{-\mu t} .$
Then the expected value of the service time is
$E(Y)=\int_0^{\infty} t f_Y(t) d t=\frac{1}{\mu} .$
The reciprocal $\mu$ of this expected value is often referred to as the service rate.
We expect on grounds of our everyday experience with queues that if the service rate is greater than the arrival rate, then the average queue size will tend to stabilize, but if the service rate is less than the arrival rate, then the queue will tend to increase in length without limit (see Figure 5.7). The simulations in Example 5.7 tend to bear out our everyday experience. We can make this conclusion more precise if we introduce the traffic intensity as the product
$\rho=(\text { arrival rate })(\text { average service time })=\frac{\lambda}{\mu}=\frac{1 / \mu}{1 / \lambda} .$
The traffic intensity is also the ratio of the average service time to the average time between arrivals. If the traffic intensity is less than 1 the queue will perform reasonably, but if it is greater than 1 the queue will grow indefinitely large. In the critical case of $\rho=1$, it can be shown that the queue will become large but there will always be times at which the queue is empty. ${ }^{2}$
In the case that the traffic intensity is less than 1 we can consider the length of the queue as a random variable $Z$ whose expected value is finite,
$E(Z)=N .$
The time spent in the queue by a single customer can be considered as a random variable $W$ whose expected value is finite,
$E(W)=T .$
Then we can argue that, when a customer joins the queue, he expects to find $N$ people ahead of him, and when he leaves the queue, he expects to find $\lambda T$ people behind him. Since, in equilibrium, these should be the same, we would expect to find that
$N=\lambda T .$
This last relationship is called Little's law for queues. ${ }^{3}$ We will not prove it here. A proof may be found in Ross. ${ }^{4}$ Note that in this case we are counting the waiting time of all customers, even those that do not have to wait at all. In our simulation in Section 4.2, we did not consider these customers.
If we knew the expected queue length then we could use Little's law to obtain the expected waiting time, since
$T=\frac{N}{\lambda} .$
The queue length is a random variable with a discrete distribution. We can estimate this distribution by simulation, keeping track of the queue lengths at the times at which a customer arrives. We show the result of this simulation (using the program Queue) in Figure $1$.
We note that the distribution appears to be a geometric distribution. In the study of queueing theory it is shown that the distribution for the queue length in equilibrium is indeed a geometric distribution with
$s_j=(1-\rho) \rho^j \quad \text { for } j=0,1,2, \ldots,$
if $\rho<1$. The expected value of a random variable with this distribution is
$N=\frac{\rho}{(1-\rho)}$
(see Example 6.4). Thus by Little's result the expected waiting time is
$T=\frac{\rho}{\lambda(1-\rho)}=\frac{1}{\mu-\lambda},$
where $\mu$ is the service rate, $\lambda$ the arrival rate, and $\rho$ the traffic intensity.
In our simulation, the arrival rate is 1 and the service rate is 1.1. Thus, the traffic intensity is $1 / 1.1=10 / 11$, the expected queue size is
$\frac{10 / 11}{(1-10 / 11)}=10$
and the expected waiting time is
$\frac{1}{1.1-1}=10$
In our simulation the average queue size was 8.19 and the average waiting time was 7.37. In Figure $2$, we show the histogram for the waiting times. This histogram suggests that the density for the waiting times is exponential with parameter $\mu-\lambda$, and this is the case.
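Such a simulation is easy to reproduce. The following is a minimal single-server queue sketch, not the book's Queue program (the customer count is arbitrary); it estimates the expected time a customer spends in the queue and compares it with $1/(\mu - \lambda) = 10$:

```python
import random

def mm1_times_in_system(lam, mu, n_customers):
    """Simulate time in system for each customer of a single-server queue
    with exponential interarrival times (rate lam) and service times (rate mu)."""
    t_arrive, t_free = 0.0, 0.0
    times = []
    for _ in range(n_customers):
        t_arrive += random.expovariate(lam)       # next arrival
        start = max(t_arrive, t_free)             # service starts when server is free
        t_free = start + random.expovariate(mu)   # service completes
        times.append(t_free - t_arrive)           # waiting time plus service time
    return times

times = mm1_times_in_system(lam=1.0, mu=1.1, n_customers=200000)
print(sum(times) / len(times))  # theory: 1/(mu - lambda) = 10
```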
${ }^{1}$ Ross, A First Course in Probability (New York: Macmillan, 1984), pp. 241-245.
${ }^{2}$ L. Kleinrock, Queueing Systems, vol. 2 (New York: John Wiley and Sons, 1975).
${ }^{3}$ ibid., p. 17.
${ }^{4}$ S. M. Ross, Applied Probability Models with Optimization Applications, (San Francisco:Holden-Day, 1970)
Exercises
$1$
Let $X$ be a random variable with range $[-1,1]$ and let $f_X(x)$ be the density function of $X$. Find $\mu(X)$ and $\sigma^2(X)$ if, for $|x|<1$,
(a) $f_X(x)=1 / 2$.
(b) $f_X(x)=|x|$.
(c) $f_X(x)=1-|x|$.
(d) $f_X(x)=(3 / 2) x^2$.
$2$
Let $X$ be a random variable with range $[-1,1]$ and $f_X$ its density function. Find $\mu(X)$ and $\sigma^2(X)$ if, for $|x|>1, f_X(x)=0$, and for $|x|<1$,
(a) $f_X(x)=(3 / 4)\left(1-x^2\right)$.
(b) $f_X(x)=(\pi / 4) \cos (\pi x / 2)$.
(c) $f_X(x)=(x+1) / 2$.
(d) $f_X(x)=(3 / 8)(x+1)^2$.
$3$
The lifetime, measured in hours, of the ACME super light bulb is a random variable $T$ with density function $f_T(t)=\lambda^2 t e^{-\lambda t}$, where $\lambda=.05$. What is the expected lifetime of this light bulb? What is its variance?
$4$
Let $X$ be a random variable with range $[-1,1]$ and density function $f_X(x)=$ $a x+b$ if $|x|<1$.
(a) Show that if $\int_{-1}^{+1} f_X(x) d x=1$, then $b=1 / 2$.
(b) Show that if $f_X(x) \geq 0$, then $-1 / 2 \leq a \leq 1 / 2$.
(c) Show that $\mu=(2 / 3) a$, and hence that $-1 / 3 \leq \mu \leq 1 / 3$.
(d) Show that $\sigma^2(X)=(2 / 3) b-(4 / 9) a^2=1 / 3-(4 / 9) a^2$.
$5$
Let $X$ be a random variable with range $[-1,1]$ and density function $f_X(x)=$ $a x^2+b x+c$ if $|x|<1$ and 0 otherwise.
(a) Show that $2 a / 3+2 c=1$ (see Exercise 4).
(b) Show that $2 b / 3=\mu(X)$.
(c) Show that $2 a / 5+2 c / 3=\sigma^2(X)$.
(d) Find $a, b$, and $c$ if $\mu(X)=0, \sigma^2(X)=1 / 15$, and sketch the graph of $f_X$.
(e) Find $a, b$, and $c$ if $\mu(X)=0, \sigma^2(X)=1 / 2$, and sketch the graph of $f_X$.
$6$
Let $T$ be a random variable with range $[0, \infty)$ and $f_T$ its density function. Find $\mu(T)$ and $\sigma^2(T)$ if, for $t<0, f_T(t)=0$, and for $t>0$,
(a) $f_T(t)=3 e^{-3 t}$.
(b) $f_T(t)=9 t e^{-3 t}$.
(c) $f_T(t)=3 /(1+t)^4$.
$7$
Let $X$ be a random variable with density function $f_X$. Show, using elementary calculus, that the function
$\phi(a)=E\left((X-a)^2\right)$
takes its minimum value when $a=\mu(X)$, and in that case $\phi(a)=\sigma^2(X)$.
$8$
Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$. Let $Y=a X^2+$ $b X+c$. Find the expected value of $Y$.
$9$
Let $X, Y$, and $Z$ be independent random variables, each with mean $\mu$ and variance $\sigma^2$.
(a) Find the expected value and variance of $S=X+Y+Z$.
(b) Find the expected value and variance of $A=(1 / 3)(X+Y+Z)$.
(c) Find the expected value of $S^2$ and $A^2$.
$10$
Let $X$ and $Y$ be independent random variables with uniform density functions on $[0,1]$. Find
(a) $E(|X-Y|)$.
(b) $E(\max (X, Y))$.
(c) $E(\min (X, Y))$.
(d) $E\left(X^2+Y^2\right)$.
(e) $E\left((X+Y)^2\right)$.
$11$
The Pilsdorff Beer Company runs a fleet of trucks along the 100 mile road from Hangtown to Dry Gulch. The trucks are old, and are apt to break down at any point along the road with equal probability. Where should the company locate a garage so as to minimize the expected distance from a typical breakdown to the garage? In other words, if $X$ is a random variable giving the location of the breakdown, measured, say, from Hangtown, and $b$ gives the location of the garage, what choice of $b$ minimizes $E(|X-b|)$ ? Now suppose $X$ is not distributed uniformly over $[0,100]$, but instead has density function $f_X(x)=2 x / 10,000$. Then what choice of $b$ minimizes $E(|X-b|)$ ?
$12$
Find $E\left(X^Y\right)$, where $X$ and $Y$ are independent random variables which are uniform on $[0,1]$. Then verify your answer by simulation.
$13$
Let $X$ be a random variable that takes on nonnegative values and has distribution function $F(x)$. Show that
$E(X)=\int_0^{\infty}(1-F(x)) d x .$
Hint: Integrate by parts.
Illustrate this result by calculating $E(X)$ by this method if $X$ has an exponential distribution $F(x)=1-e^{-\lambda x}$ for $x \geq 0$, and $F(x)=0$ otherwise.
$14$
Let $X$ be a continuous random variable with density function $f_X(x)$. Show that if
$\int_{-\infty}^{+\infty} x^2 f_X(x) d x<\infty$
then
$\int_{-\infty}^{+\infty}|x| f_X(x) d x<\infty .$
Hint: Except on the interval $[-1,1]$, the first integrand is greater than the second integrand.
$15$
Let $X$ be a random variable distributed uniformly over $[0,20]$. Define a new random variable $Y$ by $Y=\lfloor X\rfloor$ (the greatest integer in $X$ ). Find the expected value of $Y$. Do the same for $Z=\lfloor X+.5\rfloor$. Compute $E(|X-Y|)$ and $E(|X-Z|)$. (Note that $Y$ is the value of $X$ rounded down to the nearest integer, while $Z$ is the value of $X$ rounded off to the nearest integer. Which method of rounding off is better? Why?)
$16$
Assume that the lifetime of a diesel engine part is a random variable $X$ with density $f_X$. When the part wears out, it is replaced by another with the same density. Let $N(t)$ be the number of parts that are used in time $t$. We want to study the random variable $N(t) / t$. Since parts are replaced on the average every $E(X)$ time units, we expect about $t / E(X)$ parts to be used in time $t$. That is, we expect that
$\lim _{t \rightarrow \infty} E\left(\frac{N(t)}{t}\right)=\frac{1}{E(X)} .$
This result is correct but quite difficult to prove. Write a program that will allow you to specify the density $f_X$, and the time $t$, and simulate this experiment to find $N(t) / t$. Have your program repeat the experiment 500 times and plot a bar graph for the random outcomes of $N(t) / t$. From this data, estimate $E(N(t) / t)$ and compare this with $1 / E(X)$. In particular, do this for $t=100$ with the following two densities:
(a) $f_X=e^{-t}$.
(b) $f_X=t e^{-t}$.
$17$
Let $X$ and $Y$ be random variables. The covariance $\operatorname{cov}(X, Y)$ is defined by (see Exercise 6.2.23)

$\operatorname{cov}(X, Y)=E\bigl((X-\mu(X))(Y-\mu(Y))\bigr) .$

(a) Show that $\operatorname{cov}(X, Y)=E(XY)-E(X)E(Y)$.
(b) Using (a), show that $\operatorname{cov}(X, Y)=0$, if $X$ and $Y$ are independent. (Caution: the converse is not always true.)
(c) Show that $V(X+Y)=V(X)+V(Y)+2 \operatorname{cov}(X, Y)$.
$18$
Let $X$ and $Y$ be random variables with positive variance. The correlation of $X$ and $Y$ is defined as
$\rho(X, Y)=\frac{\operatorname{cov}(X, Y)}{\sqrt{V(X) V(Y)}} .$
(a) Using Exercise 17(c), show that
$0 \leq V\left(\frac{X}{\sigma(X)}+\frac{Y}{\sigma(Y)}\right)=2(1+\rho(X, Y)) .$
(b) Now show that
$0 \leq V\left(\frac{X}{\sigma(X)}-\frac{Y}{\sigma(Y)}\right)=2(1-\rho(X, Y)) .$
(c) Using (a) and (b), show that
$-1 \leq \rho(X, Y) \leq 1 .$
$19$
Let $X$ and $Y$ be independent random variables with uniform densities in $[0,1]$. Let $Z=X+Y$ and $W=X-Y$. Find
(a) $\rho(X, Y)$ (see Exercise 18).
(b) $\rho(X, Z)$.
(c) $\rho(Y, W)$.
(d) $\rho(Z, W)$.
$20$
When studying certain physiological data, such as heights of fathers and sons, it is often natural to assume that these data (e.g., the heights of the fathers and the heights of the sons) are described by random variables with normal densities. These random variables, however, are not independent but rather are correlated. For example, a two-dimensional standard normal density for correlated random variables has the form
$f_{X, Y}(x, y)=\frac{1}{2 \pi \sqrt{1-\rho^2}} \cdot e^{-\left(x^2-2 \rho x y+y^2\right) / 2\left(1-\rho^2\right)} .$
(a) Show that $X$ and $Y$ each have standard normal densities.
(b) Show that the correlation of $X$ and $Y$ (see Exercise 18) is $\rho$.
$21$
For correlated random variables $X$ and $Y$ it is natural to ask for the expected value for $X$ given $Y$. For example, Galton calculated the expected value of the height of a son given the height of the father. He used this to show that tall men can be expected to have sons who are less tall on the average. Similarly, students who do very well on one exam can be expected to do less well on the next exam, and so forth. This is called regression on the mean. To define this conditional expected value, we first define a conditional density of $X$ given $Y=y$ by
$f_{X \mid Y}(x \mid y)=\frac{f_{X, Y}(x, y)}{f_Y(y)},$
where $f_{X, Y}(x, y)$ is the joint density of $X$ and $Y$, and $f_Y$ is the density for $Y$. Then the conditional expected value of $X$ given $Y$ is
$E(X \mid Y=y)=\int_a^b x f_{X \mid Y}(x \mid y) d x .$
For the normal density in Exercise 20, show that the conditional density of $f_{X \mid Y}(x \mid y)$ is normal with mean $\rho y$ and variance $1-\rho^2$. From this we see that if $X$ and $Y$ are positively correlated $(0<\rho<1)$, and if $y>E(Y)$, then the expected value for $X$ given $Y=y$ will be less than $y$ (i.e., we have regression on the mean).
$22$
A point $Y$ is chosen at random from $[0,1]$. A second point $X$ is then chosen from the interval $[0, Y]$. Find the density for $X$. Hint: Calculate $f_{X \mid Y}$ as in Exercise 21 and then use
$f_X(x)=\int_x^1 f_{X \mid Y}(x \mid y) f_Y(y) d y .$
Can you also derive your result geometrically?
$23$
Let $X$ and $V$ be two standard normal random variables. Let $\rho$ be a real number between -1 and 1 .
(a) Let $Y=\rho X+\sqrt{1-\rho^2} V$. Show that $E(Y)=0$ and $\operatorname{Var}(Y)=1$. We shall see later (see Example 7.5 and Example 10.17), that the sum of two independent normal random variables is again normal. Thus, assuming this fact, we have shown that $Y$ is standard normal.
(b) Using Exercises 17 and 18, show that the correlation of $X$ and $Y$ is $\rho$.
(c) In Exercise 20, the joint density function $f_{X, Y}(x, y)$ for the random variable $(X, Y)$ is given. Now suppose that we want to know the set of points $(x, y)$ in the $x y$-plane such that $f_{X, Y}(x, y)=C$ for some constant $C$. This set of points is called a set of constant density. Roughly speaking, a set of constant density is a set of points where the outcomes $(X, Y)$ are equally likely to fall. Show that for a given $C$, the set of points of constant density is a curve whose equation is
$x^2-2 \rho x y+y^2=D,$
where $D$ is a constant which depends upon $C$. (This curve is an ellipse.)
(d) One can plot the ellipse in part (c) by using the parametric equations
$\begin{array}{l} x=\frac{r \cos \theta}{\sqrt{2(1-\rho)}}+\frac{r \sin \theta}{\sqrt{2(1+\rho)}}, \\ y=\frac{r \cos \theta}{\sqrt{2(1-\rho)}}-\frac{r \sin \theta}{\sqrt{2(1+\rho)}} . \end{array}$
Write a program to plot 1000 pairs $(X, Y)$ for $\rho=-1 / 2,0,1 / 2$. For each plot, have your program plot the above parametric curves for $r=1,2,3$.
$24$
Following Galton, let us assume that the fathers and sons have heights that are dependent normal random variables. Assume that the average height is 68 inches, standard deviation is 2.7 inches, and the correlation coefficient is .5 (see Exercises 20 and 21). That is, assume that the heights of the fathers and sons have the form $2.7 X+68$ and $2.7 Y+68$, respectively, where $X$ and $Y$ are correlated standardized normal random variables, with correlation coefficient .5.
(a) What is the expected height for the son of a father whose height is 72 inches?
(b) Plot a scatter diagram of the heights of 1000 father and son pairs. Hint: You can choose standardized pairs as in Exercise 23 and then plot $(2.7 X+$ $68,2.7 Y+68)$.
$25$
When we have pairs of data $\left(x_i, y_i\right)$ that are outcomes of the pairs of dependent random variables $X, Y$ we can estimate the correlation coefficient $\rho$ by
$\bar{r}=\frac{\sum_i\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{(n-1) s_X s_Y},$
where $\bar{x}$ and $\bar{y}$ are the sample means for $X$ and $Y$, respectively, and $s_X$ and $s_Y$ are the sample standard deviations for $X$ and $Y$ (see Exercise 6.2.17). Write a program to compute the sample means, variances, and correlation for such dependent data. Use your program to compute these quantities for Galton's data on heights of parents and children given in Appendix B.
Plot the equal density ellipses as defined in Exercise 23 for $r=4,6$, and 8, and on the same graph print the values that appear in the table at the appropriate points. For example, print 12 at the point $(70.5,68.2)$, indicating that there were 12 cases where the parent's height was 70.5 and the child's was 68.2. See if Galton's data is consistent with the equal density ellipses.
$26$
(from Hamming ${ }^{25}$ ) Suppose you are standing on the bank of a straight river.
(a) Choose, at random, a direction which will keep you on dry land, and walk $1 \mathrm{~km}$ in that direction. Let $P$ denote your position. What is the expected distance from $P$ to the river?
(b) Now suppose you proceed as in part (a), but when you get to $P$, you pick a random direction (from among all directions) and walk $1 \mathrm{~km}$. What is the probability that you will reach the river before the second walk is completed?
$27$
(from Hamming ${ }^{26}$ ) A game is played as follows: A random number $X$ is chosen uniformly from $[0,1]$. Then a sequence $Y_1, Y_2, \ldots$ of random numbers is chosen independently and uniformly from $[0,1]$. The game ends the first time that $Y_i>X$. You are then paid $(i-1)$ dollars. What is a fair entrance fee for this game?
$28$
A long needle of length $L$ much bigger than 1 is dropped on a grid with horizontal and vertical lines one unit apart. Show that the average number $a$ of lines crossed is approximately
$a=\frac{4 L}{\pi} .$
6.R: References
1. H. Sagan, Math. Mag., vol. 54, no. 1 (1981), pp. 3-10.
2. Quoted in F. N. David, (London: Griffin, 1962), p. 231.
3. C. Huygens, translation attributed to John Arbuthnot (London, 1692), p. 34.
4. ibid., p. 35.
5. ibid., p. 47.
6. Quoted in I. Hacking, (Cambridge: Cambridge Univ. Press, 1975).
7. ibid., p. 108.
8. ibid., p. 109.
9. E. Halley, “An Estimate of The Degrees of Mortality of Mankind," vol. 17 (1693), pp. 596–610; 654–656.
10. E. Thorp, (New York: Random House, 1962).
11. P. Diaconis and R. Graham, “The Analysis of Sequential Experiments with Feedback to Subjects," vol. 9 (1981), pp. 3–23.
12. J. F. Box, (New York: John Wiley and Sons, 1978).
13. E. Thorp, (New York: Random House, 1962).
14. W. Feller, 3rd ed., vol. 1 (New York: John Wiley and Sons, 1968), p. 240.
15. H. Schultz, “An Expected Value Problem," vol. 10, no. 4 (1979), pp. 277–78.
16. W. Feller, Introduction to Probability Theory, vol. 1, p. 166.
17. B. Pittel, Problem #1195, vol. 58, no. 3 (May 1985), p. 183.
18. J. Propp, Problem #1159, vol. 57, no. 1 (Feb. 1984), p. 50.
19. H. Shultz and B. Leonard, “Unexpected Occurrences of the Number $e$," vol. 62, no. 4 (October, 1989), pp. 269-271.
20. Private communication.
• 7.1: Sums of Discrete Random Variables
In this chapter we turn to the important question of determining the distribution of a sum of independent random variables in terms of the distributions of the individual constituents. In this section we consider only sums of discrete random variables, reserving the case of continuous random variables for the next section.
• 7.2: Sums of Continuous Random Variables
In this section we consider the continuous version of the problem posed in the previous section: How are sums of independent random variables distributed?
Thumbnail: Visual comparison of convolution. (CC BY-SA 3.0; Cmglee via Wikipedia)
07: Sums of Random Variables
In this chapter we turn to the important question of determining the distribution of a sum of independent random variables in terms of the distributions of the individual constituents. In this section we consider only sums of discrete random variables, reserving the case of continuous random variables for the next section.
We consider here only random variables whose values are integers. Their distribution functions are then defined on these integers. We shall find it convenient to assume here that these distribution functions are defined for all integers, by defining them to be 0 where they are not otherwise defined.
Convolutions
Suppose $X$ and $Y$ are two independent discrete random variables with distribution functions $m_{1}(x)$ and $m_{2}(x)$. Let $Z=X+Y$. We would like to determine the distribution function $m_{3}(x)$ of $Z$. To do this, it is enough to determine the probability that $Z$ takes on the value $z$, where $z$ is an arbitrary integer. Suppose that $X=k$, where $k$ is some integer. Then $Z=z$ if and only if $Y=z-k$. So the event $Z=z$ is the union of the pairwise disjoint events
$(X=k) \text { and }(Y=z-k),$
where $k$ runs over the integers. Since these events are pairwise disjoint, we have
$P(Z=z)=\sum_{k=-\infty}^{\infty} P(X=k) \cdot P(Y=z-k)$
Thus, we have found the distribution function of the random variable $Z$. This leads to the following definition.
Definition: Term
Let $X$ and $Y$ be two independent integer-valued random variables, with distribution functions $m_{1}(x)$ and $m_{2}(x)$ respectively. Then the convolution of $m_{1}(x)$ and $m_{2}(x)$ is the distribution function $m_{3}=m_{1} * m_{2}$ given by
$m_{3}(j)=\sum_{k} m_{1}(k) \cdot m_{2}(j-k)$
for $j=\ldots,-2,-1,0,1,2, \ldots$. The function $m_{3}(x)$ is the distribution function of the random variable $Z=X+Y$.
It is easy to see that the convolution operation is commutative, and it is straightforward to show that it is also associative.
Now let $S_{n}=X_{1}+X_{2}+\cdots+X_{n}$ be the sum of $n$ independent random variables of an independent trials process with common distribution function $m$ defined on the integers. Then the distribution function of $S_{1}$ is $m$. We can write
$S_{n}=S_{n-1}+X_{n} .$
Thus, since we know the distribution function of $X_{n}$ is $m$, we can find the distribution function of $S_{n}$ by induction.
Example $1$
A die is rolled twice. Let $X_{1}$ and $X_{2}$ be the outcomes, and let $S_{2}=X_{1}+X_{2}$ be the sum of these outcomes. Then $X_{1}$ and $X_{2}$ have the common distribution function:
$m=\left(\begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6 \\ 1 / 6 & 1 / 6 & 1 / 6 & 1 / 6 & 1 / 6 & 1 / 6 \end{array}\right) .$
The distribution function of $S_{2}$ is then the convolution of this distribution with itself. Thus,
\begin{aligned} P\left(S_{2}=2\right) & =m(1) m(1) \\ & =\frac{1}{6} \cdot \frac{1}{6}=\frac{1}{36}, \\ P\left(S_{2}=3\right) & =m(1) m(2)+m(2) m(1) \\ & =\frac{1}{6} \cdot \frac{1}{6}+\frac{1}{6} \cdot \frac{1}{6}=\frac{2}{36}, \\ P\left(S_{2}=4\right) & =m(1) m(3)+m(2) m(2)+m(3) m(1) \\ & =\frac{1}{6} \cdot \frac{1}{6}+\frac{1}{6} \cdot \frac{1}{6}+\frac{1}{6} \cdot \frac{1}{6}=\frac{3}{36} . \end{aligned}
Continuing in this way we would find $P\left(S_{2}=5\right)=4 / 36, P\left(S_{2}=6\right)=5 / 36$, $P\left(S_{2}=7\right)=6 / 36, P\left(S_{2}=8\right)=5 / 36, P\left(S_{2}=9\right)=4 / 36, P\left(S_{2}=10\right)=3 / 36$, $P\left(S_{2}=11\right)=2 / 36$, and $P\left(S_{2}=12\right)=1 / 36$.
The distribution for $S_{3}$ would then be the convolution of the distribution for $S_{2}$ with the distribution for $X_{3}$. Thus
$\begin{aligned} P\left(S_{3}=3\right) & =P\left(S_{2}=2\right) P\left(X_{3}=1\right) \\ & =\frac{1}{36} \cdot \frac{1}{6}=\frac{1}{216}, \\ P\left(S_{3}=4\right) & =P\left(S_{2}=3\right) P\left(X_{3}=1\right)+P\left(S_{2}=2\right) P\left(X_{3}=2\right) \\ & =\frac{2}{36} \cdot \frac{1}{6}+\frac{1}{36} \cdot \frac{1}{6}=\frac{3}{216}, \end{aligned}$
and so forth.
This is clearly a tedious job, and a program should be written to carry out this calculation. To do this we first write a program to form the convolution of two densities $p$ and $q$ and return the density $r$. We can then write a program to find the density for the sum $S_{n}$ of $n$ independent random variables with a common density $p$, at least in the case that the random variables have a finite number of possible values.
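A minimal Python sketch of such a pair of programs is given below. The names convolve and nfold_convolution are stand-ins for the book's programs (densities are represented as dictionaries mapping integer values to probabilities):

```python
def convolve(p, q):
    """Convolution of two integer-valued densities given as {value: prob} dicts."""
    r = {}
    for x, px in p.items():
        for y, qy in q.items():
            r[x + y] = r.get(x + y, 0.0) + px * qy
    return r

def nfold_convolution(p, n):
    """Density of the sum of n independent random variables with common density p."""
    r = {0: 1.0}  # density concentrated at 0: the sum of zero terms
    for _ in range(n):
        r = convolve(r, p)
    return r

die = {k: 1 / 6 for k in range(1, 7)}
s2 = convolve(die, die)
print(s2[7])  # 6/36, about 0.1667
```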
Running this program for the example of rolling a die $n$ times for $n=10,20,30$ results in the distributions shown in Figure 7.1. We see that, as in the case of Bernoulli trials, the distributions become bell-shaped. We shall discuss in Chapter 9 a very general theorem called the Central Limit Theorem that will explain this phenomenon.
Example $2$
A well-known method for evaluating a bridge hand is: an ace is assigned a value of 4 , a king 3 , a queen 2 , and a jack 1 . All other cards are assigned a value of 0 . The point count of the hand is then the sum of the values of the cards in the hand. (It is actually more complicated than this, taking into account voids in suits, and so forth, but we consider here this simplified form of the point count.) If a card is dealt at random to a player, then the point count for this card has distribution
$p_{X}=\left(\begin{array}{ccccc} 0 & 1 & 2 & 3 & 4 \\ 36 / 52 & 4 / 52 & 4 / 52 & 4 / 52 & 4 / 52 \end{array}\right)$
Let us regard the total hand of 13 cards as 13 independent trials with this common distribution. (Again this is not quite correct because we assume here that we are always choosing a card from a full deck.) Then the distribution for the point count $C$ for the hand can be found from the program NFoldConvolution by using the distribution for a single card and choosing $n=13$. A player with a point count of 13 or more is said to have an opening bid. The probability of having an opening bid is then
$P(C \geq 13)$
Since we have the distribution of $C$, it is easy to compute this probability. Doing this we find that
$P(C \geq 13)=.2845$
so that about one in four hands should be an opening bid according to this simplified model. A more realistic discussion of this problem can be found in Epstein, The Theory of Gambling and Statistical Logic. ${ }^{1}$
${ }^{1}$ R. A. Epstein, The Theory of Gambling and Statistical Logic, rev. ed. (New York: Academic Press, 1977).
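Using the convolve and nfold_convolution sketch from earlier in this section, the point-count computation might look as follows (this reproduces the simplified model above, in which the 13 cards are treated as independent draws with replacement):

```python
# Assumes convolve and nfold_convolution from the sketch above are defined.
card = {0: 36 / 52, 1: 4 / 52, 2: 4 / 52, 3: 4 / 52, 4: 4 / 52}
hand = nfold_convolution(card, 13)
print(sum(prob for points, prob in hand.items() if points >= 13))  # about .2845
```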
For certain special distributions it is possible to find an expression for the distribution that results from convoluting the distribution with itself $n$ times.
The convolution of two binomial distributions, one with parameters $m$ and $p$ and the other with parameters $n$ and $p$, is a binomial distribution with parameters $(m+n)$ and $p$. This fact follows easily from a consideration of the experiment which consists of first tossing a coin $m$ times, and then tossing it $n$ more times.
The convolution of $k$ geometric distributions with common parameter $p$ is a negative binomial distribution with parameters $p$ and $k$. This can be seen by considering the experiment which consists of tossing a coin until the $k$ th head appears.
Exercises
Exercise $1$: A die is rolled three times. Find the probability that the sum of the outcomes is
(a) greater than 9.
(b) an odd number.
Exercise $2$: The price of a stock on a given trading day changes according to the distribution
$p_{X}=\left(\begin{array}{cccc} -1 & 0 & 1 & 2 \\ 1 / 4 & 1 / 2 & 1 / 8 & 1 / 8 \end{array}\right) .$
Find the distribution for the change in stock price after two (independent) trading days.
Exercise $3$: Let $X_{1}$ and $X_{2}$ be independent random variables with common distribution
$p_{X}=\left(\begin{array}{ccc} 0 & 1 & 2 \\ 1 / 8 & 3 / 8 & 1 / 2 \end{array}\right)$
Find the distribution of the sum $X_{1}+X_{2}$.
Exercise $4$: In one play of a certain game you win an amount $X$ with distribution
$p_{X}=\left(\begin{array}{ccc} 1 & 2 & 3 \\ 1 / 4 & 1 / 4 & 1 / 2 \end{array}\right)$
Using the program NFoldConvolution find the distribution for your total winnings after ten (independent) plays. Plot this distribution.
Exercise $5$: Consider the following two experiments: the first has outcome $X$ taking on the values 0,1 , and 2 with equal probabilities; the second results in an (independent) outcome $Y$ taking on the value 3 with probability $1 / 4$ and 4 with probability $3 / 4$. Find the distribution of
(a) $Y+X$.
(b) $Y-X$.
Exercise $6$: People arrive at a queue according to the following scheme: During each minute of time either 0 or 1 person arrives. The probability that 1 person arrives is $p$ and that no person arrives is $q=1-p$. Let $C_{r}$ be the number of customers arriving in the first $r$ minutes. Consider a Bernoulli trials process with a success if a person arrives in a unit time and failure if no person arrives in a unit time. Let $T_{r}$ be the number of failures before the $r$ th success.
(a) What is the distribution for $T_{r}$ ?
(b) What is the distribution for $C_{r}$ ?
(c) Find the mean and variance for the number of customers arriving in the first $r$ minutes.
Exercise $7$: (a) A die is rolled three times with outcomes $X_{1}, X_{2}$, and $X_{3}$. Let $Y_{3}$ be the maximum of the values obtained. Show that
$P\left(Y_{3} \leq j\right)=P\left(X_{1} \leq j\right)^{3} .$
Use this to find the distribution of $Y_{3}$. Does $Y_{3}$ have a bell-shaped distribution?
(b) Now let $Y_{n}$ be the maximum value when $n$ dice are rolled. Find the distribution of $Y_{n}$. Is this distribution bell-shaped for large values of $n$ ?
Exercise $8$: A baseball player is to play in the World Series. Based upon his season play, you estimate that if he comes to bat four times in a game the number of hits he will get has a distribution
$p_{X}=\left(\begin{array}{ccccc} 0 & 1 & 2 & 3 & 4 \\ .4 & .2 & .2 & .1 & .1 \end{array}\right)$
Assume that the player comes to bat four times in each game of the series.
(a) Let $X$ denote the number of hits that he gets in a series. Using the program NFoldConvolution, find the distribution of $X$ for each of the possible series lengths: four-game, five-game, six-game, seven-game.
(b) Using one of the distribution found in part (a), find the probability that his batting average exceeds .400 in a four-game series. (The batting average is the number of hits divided by the number of times at bat.)
(c) Given the distribution $p_{X}$, what is his long-term batting average?
Exercise $9$: Prove that you cannot load two dice in such a way that the probabilities for any sum from 2 to 12 are the same. (Be sure to consider the case where one or more sides turn up with probability zero.)
Exercise $10$: (Lévy${ }^{2}$) Assume that $n$ is an integer, not prime. Show that you can find two distributions $a$ and $b$ on the nonnegative integers such that the convolution of $a$ and $b$ is the equiprobable distribution on the set $0,1,2, \ldots, n-1$. If $n$ is prime this is not possible, but the proof is not so easy. (Assume that neither $a$ nor $b$ is concentrated at 0.)

${ }^{2}$ See M. Krasner and B. Ranulae, "Sur une Proprieté des Polynomes de la Division du Circle"; and the following note by J. Hadamard, in C. R. Acad. Sci., vol. 204 (1937), pp. 397-399.
Exercise $11$: Assume that you are playing craps with dice that are loaded in the following way: faces two, three, four, and five all come up with the same probability $(1 / 6)+r$. Faces one and six come up with probability $(1 / 6)-2 r$, with $0<$ $r<.02$. Write a computer program to find the probability of winning at craps with these dice, and using your program find which values of $r$ make craps a favorable game for the player with these dice. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/07%3A_Sums_of_Random_Variables/7.01%3A_Sums_of_Discrete_Random_Variables.txt |
In this section we consider the continuous version of the problem posed in the previous section: How are sums of independent random variables distributed?
Definition $1$: convolution
Let $X$ and $Y$ be two continuous random variables with density functions $f(x)$ and $g(y)$, respectively. Assume that both $f(x)$ and $g(y)$ are defined for all real numbers. Then the convolution $f ∗ g$ of $f$ and $g$ is the function given by
\begin{align*} (f*g)(z) &= \int_{-\infty}^\infty f(z-y)g(y)dy \\[4pt] &= \int_{-\infty}^\infty g(z-x)f(x)dx \end{align*}
This definition is analogous to the definition, given in Section 7.1, of the convolution of two distribution functions. Thus it should not be surprising that if X and Y are independent, then the density of their sum is the convolution of their densities. This fact is stated as a theorem below, and its proof is left as an exercise (see Exercise 1).
Theorem $1$
Let $X$ and $Y$ be two independent random variables with density functions $f_X(x)$ and $f_Y(y)$ defined for all $x$. Then the sum $Z = X + Y$ is a random variable with density function $f_Z(z)$, where $f_Z$ is the convolution of $f_X$ and $f_Y$.
To get a better understanding of this important result, we will look at some examples.
Sum of Two Independent Uniform Random Variables
Example $1$:
Suppose we choose independently two numbers at random from the interval [0, 1] with uniform probability density. What is the density of their sum? Let X and Y be random variables describing our choices and $Z = X + Y$ their sum. Then we have
$f_X(x) = f_Y(x) = \begin{cases} 1, & \text{if } 0 \leq x \leq 1 \\ 0, & \text{otherwise} \end{cases} \nonumber$
and the density function for the sum is given by
$f_Z(z) = \int_{-\infty}^\infty f_X(z-y)f_Y(y)dy. \nonumber$
Since $f_Y(y) = 1$ if $0 \leq y \leq 1$ and 0 otherwise, this becomes
$f_Z(z) = \int_{0}^1 f_X(z-y)dy. \nonumber$
Now the integrand is 0 unless 0 ≤ z − y ≤ 1 (i.e., unless z − 1 ≤ y ≤ z) and then it is 1. So if 0 ≤ z ≤ 1, we have
$f_Z (z) = \int_0^z dy = z , \nonumber$
while if 1 < z ≤ 2, we have
$f_Z(z) = \int_{z-1}^1 dy = 2-z, \nonumber$
and if $z < 0$ or $z > 2$ we have $f_Z(z) = 0$ (see Figure 7.2). Hence,
$f_Z(z) = \begin{cases} z, & \text{if } 0 \leq z \leq 1 \\ 2-z, & \text{if } 1 < z \leq 2 \\ 0, & \text{otherwise.} \end{cases} \nonumber$
Note that this result agrees with that of Example 2.4.
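A quick Monte Carlo check of this example is easy to carry out with an rnd-style random number function. The following sketch is ours (in Python, standard library only), not a program from the text; it draws many values of $Z = X + Y$ and compares bin frequencies with the triangular density just derived.

```python
import random

# Simulate Z = X + Y for X, Y independent and uniform on [0, 1], and
# compare empirical bin heights with the triangular density
# f_Z(z) = z on [0, 1] and f_Z(z) = 2 - z on (1, 2].
random.seed(0)
n = 100_000
samples = [random.random() + random.random() for _ in range(n)]

bins, width = 20, 2.0 / 20
counts = [0] * bins
for z in samples:
    counts[min(int(z / width), bins - 1)] += 1

def f_Z(z):
    return z if z <= 1 else 2 - z

for i in range(bins):
    mid = (i + 0.5) * width
    print(f"z = {mid:4.2f}  empirical = {counts[i] / (n * width):.3f}  exact = {f_Z(mid):.3f}")
```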
Sum of Two Independent Exponential Random Variables
Example $2$:
Suppose we choose two numbers at random from the interval [0, ∞) with an exponential density with parameter λ. What is the density of their sum? Let X, Y , and Z = X + Y denote the relevant random variables, and $f_X , f_Y ,$and $f_Z$ their densities. Then
$f_X(x) = f_Y(x) = \bigg\{ \begin{array}{cc} \lambda e^{-\lambda x}, & \text{if } x \geq 0 \ 0, & \text{otherwise} \end{array} \nonumber$
and so, if z > 0,
\begin{align*} f_Z(z) & = \int_{-\infty}^\infty f_X(z-y)f_Y(y)dy \[4pt] &= \int_0^z \lambda e^{-\lambda (z-y)} \lambda e^{-\lambda y} dy \[4pt] &= \int_0^z \lambda^2 e^{-\lambda z} dy \[4pt] &= \lambda^2 z e^{-\lambda z} \end{align*}
while if z < 0, $f_Z(z) = 0$ (see Figure 7.3). Hence,
$f_Z(z) = \begin{cases} \lambda^2 z e^{-\lambda z}, & \text{if } z \geq 0, \\ 0, & \text{otherwise.} \end{cases} \nonumber$
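The same kind of simulation confirms this formula. The sketch below is again ours, not a listing from the text; it takes $\lambda = 1$ and compares bin frequencies of simulated sums with $\lambda^2 z e^{-\lambda z}$.

```python
import math
import random

# Simulate Z = X + Y for X, Y independent exponential with rate 1 and
# compare bin heights with the density z * exp(-z) derived above.
random.seed(0)
n = 200_000
samples = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(n)]

width = 0.5
for i in range(10):
    lo = i * width
    mid = lo + width / 2
    emp = sum(lo <= z < lo + width for z in samples) / (n * width)
    print(f"z = {mid:4.2f}  empirical = {emp:.3f}  exact = {mid * math.exp(-mid):.3f}")
```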
Sum of Two Independent Normal Random Variables
Example $3$
It is an interesting and important fact that the convolution of two normal densities with means $\mu_1$ and $\mu_2$ and variances $\sigma_1^2$ and $\sigma_2^2$ is again a normal density, with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$. We will show this in the special case that both random variables are standard normal. The general case can be done in the same way, but the calculation is messier. Another way to show the general result is given in Example 10.17.
Suppose X and Y are two independent random variables, each with the standard normal density (see Example 5.8). We have
$f_X(x) = f_Y(y) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} \nonumber$
and so
\begin{align*} f_Z(z) &= (f_X * f_Y)(z) \\[4pt] &= \frac{1}{2\pi} \int_{-\infty}^\infty e^{-(z-y)^2/2} e^{-y^2/2}\,dy \\[4pt] &= \frac{1}{2\pi} e^{-z^2/4} \int_{-\infty}^\infty e^{-(y-z/2)^2}\,dy \\[4pt] &= \frac{1}{2\pi} e^{-z^2/4}\sqrt{\pi} \left[ \frac{1}{\sqrt{\pi}} \int_{-\infty}^{\infty} e^{-(y-z/2)^2}\,dy \right] \end{align*}
The expression in the brackets equals 1, since it is the integral of the normal density function with mean $z/2$ and standard deviation $1/\sqrt{2}$. So, we have
$f_Z(z) = \frac{1}{\sqrt{4\pi}}e^{-z^2/4} \nonumber$
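For readers who want to see the convolution integral itself at work, here is a small numerical check (our own sketch, not part of the text): a Riemann sum for $(f_X * f_Y)(z)$ reproduces the closed form $\frac{1}{\sqrt{4\pi}}e^{-z^2/4}$ to several decimal places.

```python
import math

# Approximate the convolution of two standard normal densities by a
# Riemann sum and compare with the closed form exp(-z^2/4)/sqrt(4*pi).
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def convolution_at(z, lo=-10.0, hi=10.0, steps=4000):
    h = (hi - lo) / steps
    return sum(phi(z - (lo + k * h)) * phi(lo + k * h) for k in range(steps)) * h

for z in [0.0, 0.5, 1.0, 2.0, 3.0]:
    exact = math.exp(-z * z / 4) / math.sqrt(4 * math.pi)
    print(f"z = {z:3.1f}  numeric = {convolution_at(z):.6f}  exact = {exact:.6f}")
```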
Sum of Two Independent Cauchy Random Variables
Example $4$:
Choose two numbers at random from the interval $(-\infty, \infty)$ with the Cauchy density with parameter $a = 1$ (see Example 5.10). Then
$f_X(x) = f_Y(y) = \frac{1}{\pi(1+x^2)} \nonumber$
and $Z = X +Y$ has density
$f_Z(z) = \frac{1}{\pi^2} \int_{-\infty}^\infty \frac{1}{1+(z-y)^2} \frac{1}{1+y^2}dy. \nonumber$
This integral requires some effort, and we give here only the result (see Section 10.3, or Dwass$^3$ ):
$f_Z(z) = \frac{2}{\pi(4+z^2)} \nonumber$
Now, suppose that we ask for the density function of the average
$A = (1/2)(X + Y ) \nonumber$
of X and Y . Then A = (1/2)Z. Exercise 5.2.19 shows that if U and V are two continuous random variables with density functions $f_U(x)$ and $f_V(x)$, respectively, and if $V = aU$, then
$f_V (x) = \bigg( \frac{1}{a}\bigg) f_U \bigg( \frac{x}{a} \bigg). \nonumber$
Thus, we have
$f_A(z) = 2f_Z(2z) = \frac{1}{\pi(1+z^2)} \nonumber$
Hence, the average of two random variables, each having a Cauchy density, again has a Cauchy density; this remarkable property is a peculiarity of the Cauchy density. One consequence is that if the error in a certain measurement process had a Cauchy density and you averaged a number of measurements, the average could not be expected to be any more accurate than any one of your individual measurements!
Rayleigh Density
Example $5$:
Suppose X and Y are two independent standard normal random variables. Now suppose we locate a point $P$ in the xy-plane with coordinates (X, Y ) and ask: What is the density of the square of the distance of P from the origin? (We have already simulated this problem in Example 5.9.) Here, with the preceding notation, we have
\begin{align*} f_X(x) &= f_Y(x) \\[4pt] &= \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \end{align*}
Moreover, if $X^2$ denotes the square of $X$, then (see Theorem 5.1 and the discussion following)
\begin{align*} f_{X^2}(r) &= \begin{cases} \dfrac{1}{2\sqrt{r}}\left(f_X(\sqrt{r}) + f_X(-\sqrt{r})\right), & \text{if } r > 0 \\ 0, & \text{otherwise} \end{cases} \\[4pt] &= \begin{cases} \dfrac{1}{\sqrt{2\pi r}}\,e^{-r/2}, & \text{if } r > 0 \\ 0, & \text{otherwise.} \end{cases} \end{align*}
This is a gamma density with $\lambda = 1/2$, $\beta = 1/2$ (see Example 7.4). Now let $R^2 = X^2 + Y^2$.

Then, since $X^2$ and $Y^2$ are independent, $f_{R^2}$ is the convolution of $f_{X^2}$ and $f_{Y^2}$: for $r \geq 0$,

$f_{R^2}(r) = \int_0^r f_{X^2}(s)\, f_{Y^2}(r-s)\, ds = \frac{e^{-r/2}}{2\pi} \int_0^r \frac{ds}{\sqrt{s(r-s)}} = \frac{1}{2} e^{-r/2}. \nonumber$
Hence, $R^2$ has a gamma density with λ = 1/2, β = 1. We can interpret this result as giving the density for the square of the distance of P from the center of a target if its coordinates are normally distributed. The density of the random variable R is obtained from that of $R^2$ in the usual way (see Theorem 5.1), and we find
$f_R(r) = \begin{cases} \frac{1}{2}e^{-r^2/2} \cdot 2r = re^{-r^2/2}, & \text{if } r \geq 0 \\ 0, & \text{otherwise.} \end{cases} \nonumber$
Physicists will recognize this as a Rayleigh density. Our result here agrees with our simulation in Example 5.9.
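A simulation along the lines of Example 5.9 bears this out. The sketch below is our own (Python, standard library only): the empirical distribution of $R = \sqrt{X^2+Y^2}$ for standard normal $X$ and $Y$ tracks the Rayleigh density $re^{-r^2/2}$.

```python
import math
import random

# Simulate R = sqrt(X^2 + Y^2) for X, Y standard normal and compare
# empirical bin heights with the Rayleigh density r * exp(-r^2 / 2).
random.seed(0)
n = 200_000
samples = [math.hypot(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

width = 0.4
for i in range(8):
    lo = i * width
    mid = lo + width / 2
    emp = sum(lo <= r < lo + width for r in samples) / (n * width)
    exact = mid * math.exp(-mid * mid / 2)
    print(f"r = {mid:4.2f}  empirical = {emp:.3f}  exact = {exact:.3f}")
```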
Chi-Squared Density
More generally, the same method shows that the sum of the squares of $n$ independent normally distributed random variables with mean 0 and standard deviation 1 has a gamma density with $\lambda = 1/2$ and $\beta = n/2$. Such a density is called a chi-squared density with $n$ degrees of freedom. This density was introduced in Section 4.3. In Example 5.10, we used this density to test the hypothesis that two traits were independent.
Another important use of the chi-squared density is in comparing experimental data with a theoretical discrete distribution, to see whether the data supports the theoretical model. More specifically, suppose that we have an experiment with a finite set of outcomes. If the set of outcomes is countable, we group them into finitely many sets of outcomes. We propose a theoretical distribution that we think will model the experiment well. We obtain some data by repeating the experiment a number of times. Now we wish to check how well the theoretical distribution fits the data.
Let $X$ be the random variable that represents a theoretical outcome in the model of the experiment, and let $m(x)$ be the distribution function of X. In a manner similar to what was done in Example 5.10, we calculate the value of the expression
$V = \sum_x \frac{(o_x - n\cdot m(x))^2}{n\cdot m(x)} \nonumber$

where the sum runs over all possible outcomes $x$, $n$ is the number of data points, and $o_x$ denotes the number of outcomes of type $x$ observed in the data. Then
Table $1$: Observed data.

| Outcome | Observed Frequency |
| :---: | :---: |
| 1 | 15 |
| 2 | 8 |
| 3 | 7 |
| 4 | 5 |
| 5 | 7 |
| 6 | 18 |
for moderate or large values of n, the quantity V is approximately chi-squared distributed, with ν−1 degrees of freedom, where ν represents the number of possible outcomes. The proof of this is beyond the scope of this book, but we will illustrate the reasonableness of this statement in the next example. If the value of V is very large, when compared with the appropriate chi-squared density function, then we would tend to reject the hypothesis that the model is an appropriate one for the experiment at hand. We now give an example of this procedure.
Example $6$: DieTest
Suppose we are given a single die. We wish to test the hypothesis that the die is fair. Thus, our theoretical distribution is the uniform distribution on the integers between 1 and 6. So, if we roll the die n times, the expected number of data points of each type is n/6. Thus, if $o_i$ denotes the actual number of data points of type $i$, for $1 ≤ i ≤ 6$, then the expression
$V = \sum_{i=1}^6 \frac{(o_i - n/6)^2}{n/6} \nonumber$
is approximately chi-squared distributed with 5 degrees of freedom.
Now suppose that we actually roll the die 60 times and obtain the data in Table 7.1. If we calculate V for this data, we obtain the value 13.6. The graph of the chi-squared density with 5 degrees of freedom is shown in Figure 7.4. One sees that values as large as 13.6 are rarely taken on by V if the die is fair, so we would reject the hypothesis that the die is fair. (When using this test, a statistician will reject the hypothesis if the data gives a value of V which is larger than 95% of the values one would expect to obtain if the hypothesis is true.)
In Figure 7.5, we show the results of rolling a die 60 times, then calculating V , and then repeating this experiment 1000 times. The program that performs these calculations is called DieTest. We have superimposed the chi-squared density with 5 degrees of freedom; one can see that the data values fit the curve fairly well, which supports the statement that the chi-squared density is the correct one to use.
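The listing of DieTest is not reproduced in the text; the following Python sketch is our own (the helper name V_statistic is ours), and it carries out the same experiment, estimating how often a fair die produces a value of $V$ at least as large as the 13.6 obtained from Table 1.

```python
import random

# Roll a fair die 60 times, compute the goodness-of-fit statistic V,
# and repeat many times to estimate P(V >= 13.6) under fairness.
random.seed(0)

def V_statistic(rolls_per_trial=60):
    counts = [0] * 6
    for _ in range(rolls_per_trial):
        counts[random.randrange(6)] += 1
    expected = rolls_per_trial / 6
    return sum((o - expected) ** 2 / expected for o in counts)

trials = 10_000
values = [V_statistic() for _ in range(trials)]
print("fraction of fair-die experiments with V >= 13.6:",
      sum(v >= 13.6 for v in values) / trials)
# For a chi-squared variable with 5 degrees of freedom, P(V >= 13.6)
# is about .018, so a fair die rarely produces data this extreme.
```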
So far we have looked at several important special cases for which the convolution integral can be evaluated explicitly. In general, the convolution of two continuous densities cannot be evaluated explicitly, and we must resort to numerical methods. Fortunately, these prove to be remarkably effective, at least for bounded densities.
Independent Trials
We now consider briefly the distribution of the sum of n independent random variables, all having the same density function. If $X_1, X_2, . . . , X_n$ are these random variables and $S_n = X_1 + X_2 + · · · + X_n$ is their sum, then we will have
$f_{S_n}(x) = (f_{X_1} * f_{X_2} * \cdots * f_{X_n})(x), \nonumber$
where the right-hand side is an n-fold convolution. It is possible to calculate this density for general values of n in certain simple cases.
Example $7$
Suppose the $X_i$ are uniformly distributed on the interval [0,1]. Then
$f_{X_i}(x) = \begin{cases} 1, & \text{if } 0\leq x \leq 1 \\ 0, & \text{otherwise} \end{cases} \nonumber$
and $f_{S_n}(x)$ is given by the formula $^4$
$f_{S_n}(x) = \begin{cases} \frac{1}{(n-1)!}\sum_{0\leq j \leq x}(-1)^j\binom{n}{j}(x-j)^{n-1}, & \text{if } 0\leq x \leq n \\ 0, & \text{otherwise} \end{cases} \nonumber$
The density $f_{S_n}(x)$ for $n = 2, 4, 6, 8, 10$ is shown in Figure 7.6.
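The formula is easy to evaluate numerically, and it is reassuring to check it against a simulation. The sketch below is ours (not the program footnoted in the text) and does this for $n = 4$.

```python
import math
import random

# Evaluate the closed-form density of S_n, the sum of n uniform [0,1]
# random variables, and compare with simulated bin frequencies (n = 4).
def f_S(n, x):
    if not 0 <= x <= n:
        return 0.0
    total = sum((-1) ** j * math.comb(n, j) * (x - j) ** (n - 1)
                for j in range(int(x) + 1))
    return total / math.factorial(n - 1)

random.seed(0)
n, trials = 4, 200_000
samples = [sum(random.random() for _ in range(n)) for _ in range(trials)]

width = 0.5
for i in range(8):
    lo = i * width
    mid = lo + width / 2
    emp = sum(lo <= s < lo + width for s in samples) / (trials * width)
    print(f"x = {mid:4.2f}  empirical = {emp:.3f}  formula = {f_S(n, mid):.3f}")
```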
If the $X_i$ are distributed normally, with mean 0 and variance 1, then (cf. Example 7.5)
$f_{X_i}(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \nonumber$
and
$f_{S_n}(x) = \frac{1}{\sqrt{2\pi n}}e^{-x^2/2n} \nonumber$
Here the density $f_{S_n}$ for $n = 5, 10, 15, 20, 25$ is shown in Figure 7.7.
If the $X_i$ are all exponentially distributed, with mean $1/\lambda$, then
$f_{X_i}(x) = \lambda e^{-\lambda x}. \nonumber$
and
$f_{S_n}(x) = \frac{\lambda e^{-\lambda x}(\lambda x)^{n-1}}{(n-1)!} \nonumber$
In this case the density $f_{S_n}$ for $n = 2, 4, 6, 8, 10$ is shown in Figure 7.8.
Exercises
Exercise $1$: Let $X$ and $Y$ be independent real-valued random variables with density functions $f_{X}(x)$ and $f_{Y}(y)$, respectively. Show that the density function of the sum $X+Y$ is the convolution of the functions $f_{X}(x)$ and $f_{Y}(y)$. Hint: Let $\bar{X}$ be the joint random variable $(X, Y)$. Then the joint density function of $\bar{X}$ is $f_{X}(x) f_{Y}(y)$, since $X$ and $Y$ are independent. Now compute the probability that $X+Y \leq z$, by integrating the joint density function over the appropriate region in the plane. This gives the cumulative distribution function of $Z$. Now differentiate this function with respect to $z$ to obtain the density function of $Z$.
Exercise $2$: Let $X$ and $Y$ be independent random variables defined on the space $\Omega$, with density functions $f_{X}$ and $f_{Y}$, respectively. Suppose that $Z=X+Y$. Find the density $f_{Z}$ of $Z$ if
(a)
$f_{X}(x)=f_{Y}(x)= \begin{cases}1 / 2, & \text { if }-1 \leq x \leq+1 \ 0, & \text { otherwise }\end{cases}$
(b)
$f_{X}(x)=f_{Y}(x)= \begin{cases}1 / 2, & \text { if } 3 \leq x \leq 5 \ 0, & \text { otherwise }\end{cases}$
(c)
$\begin{gathered} f_{X}(x)= \begin{cases}1 / 2, & \text { if }-1 \leq x \leq 1, \ 0, & \text { otherwise }\end{cases} \ f_{Y}(x)= \begin{cases}1 / 2, & \text { if } 3 \leq x \leq 5 \ 0, & \text { otherwise }\end{cases} \end{gathered}$
(d) What can you say about the set $E=\left\{z: f_{Z}(z)>0\right\}$ in each case?
Exercise $3$: Suppose again that $Z=X+Y$. Find $f_{Z}$ if
(a)
$f_{X}(x)=f_{Y}(x)= \begin{cases}x / 2, & \text { if } 0<x<2 \ 0, & \text { otherwise. }\end{cases}$
(b)
$f_{X}(x)=f_{Y}(x)= \begin{cases}(1 / 2)(x-3), & \text { if } 3<x<5 \ 0, & \text { otherwise }\end{cases}$
(c)
$f_{X}(x)= \begin{cases}1 / 2, & \text { if } 0<x<2 \ 0, & \text { otherwise }\end{cases}$
$f_{Y}(x)= \begin{cases}x / 2, & \text { if } 0<x<2, \ 0, & \text { otherwise. }\end{cases}$
(d) What can you say about the set $E=\left\{z: f_{Z}(z)>0\right\}$ in each case?
Exercise $4$: Let $X, Y$, and $Z$ be independent random variables with
$f_{X}(x)=f_{Y}(x)=f_{Z}(x)= \begin{cases}1, & \text { if } 0<x<1 \ 0, & \text { otherwise }\end{cases}$
Suppose that $W=X+Y+Z$. Find $f_{W}$ directly, and compare your answer with that given by the formula in Example 7.9. Hint: See Example 7.3.
Exercise $5$: Suppose that $X$ and $Y$ are independent and $Z=X+Y$. Find $f_{Z}$ if
(a)
\begin{aligned} & f_{X}(x)= \begin{cases}\lambda e^{-\lambda x}, & \text { if } x>0, \ 0, & \text { otherwise. }\end{cases} \ & f_{Y}(x)= \begin{cases}\mu e^{-\mu x}, & \text { if } x>0, \ 0, & \text { otherwise. }\end{cases} \end{aligned}
(b)
\begin{aligned} & f_{X}(x)= \begin{cases}\lambda e^{-\lambda x}, & \text { if } x>0, \ 0, & \text { otherwise. }\end{cases} \ & f_{Y}(x)= \begin{cases}1, & \text { if } 0<x<1 \ 0, & \text { otherwise }\end{cases} \end{aligned}
Exercise $6$: Suppose again that $Z=X+Y$. Find $f_{Z}$ if
\begin{aligned} & f_{X}(x)=\frac{1}{\sqrt{2 \pi} \sigma_{1}} e^{-\left(x-\mu_{1}\right)^{2} / 2 \sigma_{1}^{2}} \ & f_{Y}(x)=\frac{1}{\sqrt{2 \pi} \sigma_{2}} e^{-\left(x-\mu_{2}\right)^{2} / 2 \sigma_{2}^{2}} . \end{aligned}
Exercise $7$: Suppose that $R^{2}=X^{2}+Y^{2}$. Find $f_{R^{2}}$ and $f_{R}$ if
\begin{aligned} f_{X}(x) & =\frac{1}{\sqrt{2 \pi} \sigma_{1}} e^{-\left(x-\mu_{1}\right)^{2} / 2 \sigma_{1}^{2}} \ f_{Y}(x) & =\frac{1}{\sqrt{2 \pi} \sigma_{2}} e^{-\left(x-\mu_{2}\right)^{2} / 2 \sigma_{2}^{2}} . \end{aligned}
Exercise $8$: Suppose that $R^{2}=X^{2}+Y^{2}$. Find $f_{R^{2}}$ and $f_{R}$ if
$f_{X}(x)=f_{Y}(x)= \begin{cases}1 / 2, & \text { if }-1 \leq x \leq 1 \ 0, & \text { otherwise }\end{cases}$
Exercise $9$: Assume that the service time for a customer at a bank is exponentially distributed with mean service time 2 minutes. Let $X$ be the total service time for 10 customers. Estimate the probability that $X>22$ minutes.
Exercise $10$: Let $X_{1}, X_{2}, \ldots, X_{n}$ be $n$ independent random variables each of which has an exponential density with mean $\mu$. Let $M$ be the minimum value of the $X_{j}$. Show that the density for $M$ is exponential with mean $\mu / n$. Hint: Use cumulative distribution functions.
Exercise $11$: A company buys 100 lightbulbs, each of which has an exponential lifetime of 1000 hours. What is the expected time for the first of these bulbs to burn out? (See Exercise 10.)
Exercise $12$: An insurance company assumes that the time between claims from each of its homeowners' policies is exponentially distributed with mean $\mu$. It would like to estimate $\mu$ by averaging the times for a number of policies, but this is not very practical since the time between claims is about 30 years. At Galambos'5 suggestion the company puts its customers in groups of 50 and observes the time of the first claim within each group. Show that this provides a practical way to estimate the value of $\mu$.
Exercise $13$: Particles are subject to collisions that cause them to split into two parts with each part a fraction of the parent. Suppose that this fraction is uniformly distributed between 0 and 1. Following a single particle through several splittings we obtain a fraction of the original particle $Z_{n}=X_{1} \cdot X_{2} \cdot \ldots \cdot X_{n}$ where each $X_{j}$ is uniformly distributed between 0 and 1 . Show that the density for the random variable $Z_{n}$ is
$f_{n}(z)=\frac{1}{(n-1) !}(-\log z)^{n-1}$
Hint: Show that $Y_{k}=-\log X_{k}$ is exponentially distributed. Use this to find the density function for $S_{n}=Y_{1}+Y_{2}+\cdots+Y_{n}$, and from this the cumulative distribution and density of $Z_{n}=e^{-S_{n}}$.
Exercise $14$: Assume that $X_{1}$ and $X_{2}$ are independent random variables, each having an exponential density with parameter $\lambda$. Show that $Z=X_{1}-X_{2}$ has density
$f_{Z}(z)=(1 / 2) \lambda e^{-\lambda|z|}$
Exercise $15$: Suppose we want to test a coin for fairness. We flip the coin $n$ times and record the number of times $X_{0}$ that the coin turns up tails and the number of times $X_{1}=n-X_{0}$ that the coin turns up heads. Now we set
$Z=\sum_{i=0}^{1} \frac{\left(X_{i}-n / 2\right)^{2}}{n / 2} .$
Then for a fair coin $Z$ has approximately a chi-squared distribution with $2-1=1$ degree of freedom. Verify this by computer simulation first for a fair coin $(p=1 / 2)$ and then for a biased coin $(p=1 / 3)$.
${ }^{5}$ J. Galambos, Introductory Probability Theory (New York: Marcel Dekker, 1984), p. 159.
Exercise $16$: Verify your answers in Exercise 2(a) by computer simulation: Choose $X$ and $Y$ from $[-1,1]$ with uniform density and calculate $Z=X+Y$. Repeat this experiment 500 times, recording the outcomes in a bar graph on $[-2,2]$ with 40 bars. Does the density $f_{Z}$ calculated in Exercise 2(a) describe the shape of your bar graph? Try this for Exercises 2(b) and 2(c), too.
Exercise $17$: Verify your answers to Exercise 3 by computer simulation.
Exercise $18$: Verify your answer to Exercise 4 by computer simulation.
Exercise $19$: The support of a function $f(x)$ is defined to be the set
$\{x: f(x)>0\} .$
Suppose that $X$ and $Y$ are two continuous random variables with density functions $f_{X}(x)$ and $f_{Y}(y)$, respectively, and suppose that the supports of these density functions are the intervals $[a, b]$ and $[c, d]$, respectively. Find the support of the density function of the random variable $X+Y$.
Exercise $20$: Let $X_{1}, X_{2}, \ldots, X_{n}$ be a sequence of independent random variables, all having a common density function $f_{X}$ with support $[a, b]$ (see Exercise 19). Let $S_{n}=X_{1}+X_{2}+\cdots+X_{n}$, with density function $f_{S_{n}}$. Show that the support of $f_{S_{n}}$ is the interval $[n a, n b]$. Hint: Write $f_{S_{n}}=f_{S_{n-1}} * f_{X}$. Now use Exercise 19 to establish the desired result by induction.
Exercise $21$: Let $X_{1}, X_{2}, \ldots, X_{n}$ be a sequence of independent random variables, all having a common density function $f_{X}$. Let $A=S_{n} / n$ be their average. Find $f_{A}$ if
(a) $f_{X}(x)=(1 / \sqrt{2 \pi}) e^{-x^{2} / 2}$ (normal density).
(b) $f_{X}(x)=e^{-x}$ (exponential density).
Hint: Write $f_{A}(x)$ in terms of $f_{S_{n}}(x)$. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/07%3A_Sums_of_Random_Variables/7.02%3A_Sums_of_Continuous_Random_Variables.txt |
Thumbnail: Diffusion is an example of the law of large numbers. Initially, there are solute molecules on the left side of a barrier (magenta line) and none on the right. The barrier is removed, and the solute diffuses to fill the whole container. Top: With a single molecule, the motion appears to be quite random. Middle: With more molecules, there is clearly a trend where the solute fills the container more and more uniformly, but there are also random fluctuations. Bottom: With an enormous number of solute molecules (too many to see), the randomness is essentially gone: The solute appears to move smoothly and systematically from high-concentration areas to low-concentration areas. In realistic situations, chemists can describe diffusion as a deterministic macroscopic phenomenon, despite its underlying random nature. (Public Domain; Sbyrnes321 via Wikipedia).
08: Law of Large Numbers
We are now in a position to prove our first fundamental theorem of probability. We have seen that an intuitive way to view the probability of a certain outcome is as the frequency with which that outcome occurs in the long run, when the experiment is repeated a large number of times. We have also defined probability mathematically as a value of a distribution function for the random variable representing the experiment. The Law of Large Numbers, which is a theorem proved about the mathematical model of probability, shows that this model is consistent with the frequency interpretation of probability. This theorem is sometimes called the "Law of Averages." To find out what would happen if this law were not true, see the article by Robert M. Coates.$^1$
Chebyshev Inequality
To discuss the Law of Large Numbers, we first need an important inequality called the Chebyshev Inequality.
Theorem $1$ (Chebyshev Inequality)

Let $X$ be a discrete random variable with expected value $\mu = E(X)$, and let $\epsilon > 0$ be any positive real number. Then $P(|X - \mu| \geq \epsilon) \leq \frac {V(X)}{\epsilon^2}\ .$

Proof. Let $m(x)$ denote the distribution function of $X$. Then the probability that $X$ differs from $\mu$ by at least $\epsilon$ is given by $P(|X - \mu| \geq \epsilon) = \sum_{|x - \mu| \geq \epsilon} m(x)\ .$ We know that $V(X) = \sum_x (x - \mu)^2 m(x)\ ,$ and this is clearly at least as large as $\sum_{|x - \mu| \geq \epsilon} (x - \mu)^2 m(x)\ ,$ since all the summands are positive and we have restricted the range of summation in the second sum. But this last sum is at least \begin{aligned} \sum_{|x - \mu| \geq \epsilon} \epsilon^2 m(x) &= \epsilon^2 \sum_{|x - \mu| \geq \epsilon} m(x) \\ &= \epsilon^2 P(|X - \mu| \geq \epsilon)\ . \end{aligned} So, $P(|X - \mu| \geq \epsilon) \leq \frac {V(X)}{\epsilon^2}\ .$
Note that $X$ in the above theorem can be any discrete random variable, and $\epsilon$ any positive number.
Let $X$ be any random variable with $E(X) = \mu$ and $V(X) = \sigma^2$. Then, if $\epsilon = k\sigma$, Chebyshev’s Inequality states that $P(|X - \mu| \geq k\sigma) \leq \frac {\sigma^2}{k^2\sigma^2} = \frac 1{k^2}\ .$ Thus, for any random variable, the probability of a deviation from the mean of more than $k$ standard deviations is ${} \leq 1/k^2$. If, for example, $k = 5$, $1/k^2 = .04$.
Chebyshev’s Inequality is the best possible inequality in the sense that, for any $\epsilon > 0$, it is possible to give an example of a random variable for which Chebyshev’s Inequality is in fact an equality. To see this, given $\epsilon > 0$, choose $X$ with distribution $p_X = \pmatrix{ -\epsilon & +\epsilon \cr 1/2 & 1/2 \cr}\ .$ Then $E(X) = 0$, $V(X) = \epsilon^2$, and $P(|X - \mu| \geq \epsilon) = \frac {V(X)}{\epsilon^2} = 1\ .$
We are now prepared to state and prove the Law of Large Numbers.
Law of Large Numbers
Theorem $2$ (Law of Large Numbers)

Let $X_1$, $X_2$, …, $X_n$ be an independent trials process, with finite expected value $\mu = E(X_j)$ and finite variance $\sigma^2 = V(X_j)$. Let $S_n = X_1 + X_2 +\cdots+ X_n$. Then for any $\epsilon > 0$, $P\left( \left| \frac {S_n}n - \mu \right| \geq \epsilon \right) \to 0$ as $n \rightarrow \infty$. Equivalently, $P\left( \left| \frac {S_n}n - \mu \right| < \epsilon \right) \to 1$ as $n \rightarrow \infty$.

Proof. Since $X_1$, $X_2$, …, $X_n$ are independent and have the same distributions, we can apply Theorem 6.9. We obtain $V(S_n) = n\sigma^2\ ,$ and $V \left(\frac {S_n}n\right) = \frac {\sigma^2}n\ .$ Also we know that $E \left(\frac {S_n}n\right) = \mu\ .$ By Chebyshev’s Inequality, for any $\epsilon > 0$, $P\left( \left| \frac {S_n}n - \mu \right| \geq \epsilon \right) \leq \frac {\sigma^2}{n\epsilon^2}\ .$ Thus, for fixed $\epsilon$, $P\left( \left| \frac {S_n}n - \mu \right| \geq \epsilon \right) \to 0$ as $n \rightarrow \infty$, or equivalently, $P\left( \left| \frac {S_n}n - \mu \right| < \epsilon \right) \to 1$ as $n \rightarrow \infty$.
Law of Averages
Note that $S_n/n$ is an average of the individual outcomes, and one often calls the Law of Large Numbers the "law of averages." It is a striking fact that we can start with a random experiment about which little can be predicted and, by taking averages, obtain an experiment in which the outcome can be predicted with a high degree of certainty. The Law of Large Numbers, as we have stated it, is often called the "Weak Law of Large Numbers" to distinguish it from the "Strong Law of Large Numbers" described in Exercise 16.
Consider the important special case of Bernoulli trials with probability $p$ for success. Let $X_j = 1$ if the $j$th outcome is a success and 0 if it is a failure. Then $S_n = X_1 + X_2 +\cdots+ X_n$ is the number of successes in $n$ trials and $\mu = E(X_1) = p$. The Law of Large Numbers states that for any $\epsilon > 0$ $P\left( \left| \frac {S_n}n - p \right| < \epsilon \right) \to 1$ as $n \rightarrow \infty$. The above statement says that, in a large number of repetitions of a Bernoulli experiment, we can expect the proportion of times the event will occur to be near $p$. This shows that our mathematical model of probability agrees with our frequency interpretation of probability.
Coin Tossing
Let us consider the special case of tossing a coin $n$ times with $S_n$ the number of heads that turn up. Then the random variable $S_n/n$ represents the fraction of times heads turns up and will have values between 0 and 1. The Law of Large Numbers predicts that the outcomes for this random variable will, for large $n$, be near 1/2.
In Figure 8.1, we have plotted the distribution for this example for increasing values of $n$. We have marked the outcomes between .45 and .55 by dots at the top of the spikes. We see that as $n$ increases the distribution gets more and more concentrated around .5 and a larger and larger percentage of the total area is contained within the interval $(.45,.55)$, as predicted by the Law of Large Numbers.
Die Rolling
Consider $n$ rolls of a die. Let $X_j$ be the outcome of the $j$th roll. Then $S_n = X_1 + X_2 +\cdots+ X_n$ is the sum of the first $n$ rolls. This is an independent trials process with $E(X_j) = 7/2$. Thus, by the Law of Large Numbers, for any $\epsilon > 0$ $P\left( \left| \frac {S_n}n - \frac 72 \right| \geq \epsilon \right) \to 0$ as $n \rightarrow \infty$. An equivalent way to state this is that, for any $\epsilon > 0$, $P\left( \left| \frac {S_n}n - \frac 72 \right| < \epsilon \right) \to 1$ as $n \rightarrow \infty$.
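A few lines of code make this concrete. The sketch below is our own (Python, standard library only): the running average of simulated die rolls settles near $7/2$ as $n$ grows.

```python
import random

# Running average of die rolls: by the Law of Large Numbers,
# S_n / n should approach E(X_j) = 7/2 = 3.5 as n increases.
random.seed(0)
total, n = 0, 0
for checkpoint in [10, 100, 1_000, 10_000, 100_000]:
    while n < checkpoint:
        total += random.randint(1, 6)
        n += 1
    print(f"n = {n:6d}  S_n/n = {total / n:.4f}  (expected value 3.5)")
```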
Numerical Comparisons
It should be emphasized that, although Chebyshev’s Inequality proves the Law of Large Numbers, it is actually a very crude inequality for the probabilities involved. However, its strength lies in the fact that it is true for any random variable at all, and it allows us to prove a very powerful theorem.
In the following example, we compare the estimates given by Chebyshev’s Inequality with the actual values.
Let $X_1$, $X_2$, …, $X_n$ be a Bernoulli trials process with probability .3 for success and .7 for failure. Let $X_j = 1$ if the $j$th outcome is a success and 0 otherwise. Then, $E(X_j) = .3$ and $V(X_j) = (.3)(.7) = .21$. If $A_n = \frac {S_n}n = \frac {X_1 + X_2 +\cdots+ X_n}n$ is the average of the $X_i$, then $E(A_n) = .3$ and $V(A_n) = V(S_n)/n^2 = .21/n$. Chebyshev’s Inequality states that if, for example, $\epsilon = .1$, $P(|A_n - .3| \geq .1) \leq \frac {.21}{n(.1)^2} = \frac {21}n\ .$ Thus, if $n = 100$, $P(|A_{100} - .3| \geq .1) \leq .21\ ,$ or if $n = 1000$, $P(|A_{1000} - .3| \geq .1) \leq .021\ .$ These can be rewritten as \begin{aligned} P(.2 < A_{100} < .4) &\geq .79\ , \\ P(.2 < A_{1000} < .4) &\geq .979\ . \end{aligned} These values should be compared with the actual values, which are (to six decimal places) \begin{aligned} P(.2 < A_{100} < .4) &\approx .962549\ , \\ P(.2 < A_{1000} < .4) &\approx 1\ . \end{aligned} The program Law can be used to carry out the above calculations in a systematic way.
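The listing of Law is not given in the text; the sketch below is ours (the helper name exact_prob is hypothetical), and it computes both the Chebyshev bound and the exact binomial probability for the values of $n$ discussed above.

```python
from math import comb

# Compare the Chebyshev lower bound for P(.2 < A_n < .4) with the
# exact binomial probability, for Bernoulli trials with p = .3.
def exact_prob(n, p=0.3, eps=0.1):
    lo, hi = n * (p - eps), n * (p + eps)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if lo < k < hi)

for n in [100, 1000]:
    chebyshev_lower = 1 - 0.21 / (n * 0.1**2)
    print(f"n = {n:5d}  Chebyshev lower bound = {chebyshev_lower:.3f}  "
          f"exact = {exact_prob(n):.6f}")
```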
Historical Remarks
The Law of Large Numbers was first proved by the Swiss mathematician James Bernoulli in the fourth part of his work Ars Conjectandi, published posthumously in 1713.$^2$ As often happens with a first proof, Bernoulli’s proof was much more difficult than the proof we have presented using Chebyshev’s inequality. Chebyshev developed his inequality to prove a general form of the Law of Large Numbers (see Exercise 12). The inequality itself appeared much earlier in a work by Bienaymé, and in discussing its history Maistrov remarks that it was referred to as the Bienaymé-Chebyshev Inequality for a long time.$^3$
In Ars Conjectandi Bernoulli provides his reader with a long discussion of the meaning of his theorem with lots of examples. In modern notation he has an event that occurs with probability $p$ but he does not know $p$. He wants to estimate $p$ by the fraction $\bar{p}$ of the times the event occurs when the experiment is repeated a number of times. He discusses in detail the problem of estimating, by this method, the proportion of white balls in an urn that contains an unknown number of white and black balls. He would do this by drawing a sequence of balls from the urn, replacing the ball drawn after each draw, and estimating the unknown proportion of white balls in the urn by the proportion of the balls drawn that are white. He shows that, by choosing $n$ large enough he can obtain any desired accuracy and reliability for the estimate. He also provides a lively discussion of the applicability of his theorem to estimating the probability of dying of a particular disease, of different kinds of weather occurring, and so forth.
In speaking of the number of trials necessary for making a judgement, Bernoulli observes that the “man on the street" believes the “law of averages."
Further, it cannot escape anyone that for judging in this way about any event at all, it is not enough to use one or two trials, but rather a great number of trials is required. And sometimes the stupidest man—by some instinct of nature and by no previous instruction (this is truly amazing)—knows for sure that the more observations of this sort that are taken, the less the danger will be of straying from the mark.$^4$
But he goes on to say that he must contemplate another possibility.
Something further must be contemplated here which perhaps no one has thought about till now. It certainly remains to be inquired whether after the number of observations has been increased, the probability is increased of attaining the true ratio between the number of cases in which some event can happen and in which it cannot happen, so that this probability finally exceeds any given degree of certainty; or whether the problem has, so to speak, its own asymptote—that is, whether some degree of certainty is given which one can never exceed.$^5$
Bernoulli recognized the importance of this theorem, writing:
Therefore, this is the problem which I now set forth and make known after I have already pondered over it for twenty years. Both its novelty and its very great usefulness, coupled with its just as great difficulty, can exceed in weight and value all the remaining chapters of this thesis.$^6$
Bernoulli concludes his long proof with the remark:
Whence, finally, this one thing seems to follow: that if observations of all events were to be continued throughout all eternity, (and hence the ultimate probability would tend toward perfect certainty), everything in the world would be perceived to happen in fixed ratios and according to a constant law of alternation, so that even in the most accidental and fortuitous occurrences we would be bound to recognize, as it were, a certain necessity and, so to speak, a certain fate.
I do not know whether Plato wished to aim at this in his doctrine of the universal return of things, according to which he predicted that all things will return to their original state after countless ages have past.$^7$
Exercises
Exercise $1$
A fair coin is tossed 100 times. The expected number of heads is 50, and the standard deviation for the number of heads is $(100 \cdot 1/2 \cdot 1/2)^{1/2} = 5$. What does Chebyshev’s Inequality tell you about the probability that the number of heads that turn up deviates from the expected number 50 by three or more standard deviations (i.e., by at least 15)?
Exercise $2$
Write a program that uses the function $\mbox{binomial}(n,p,x)$ to compute the exact probability that you estimated in Exercise 1. Compare the two results.
Exercise $3$
Write a program to toss a coin 10,000 times. Let $S_n$ be the number of heads in the first $n$ tosses. Have your program print out, after every 1000 tosses, $S_n - n/2$. On the basis of this simulation, is it correct to say that you can expect heads about half of the time when you toss a coin a large number of times?
Exercise $4$
A 1-dollar bet on craps has an expected winning of $-.0141$. What does the Law of Large Numbers say about your winnings if you make a large number of 1-dollar bets at the craps table? Does it assure you that your losses will be small? Does it assure you that if $n$ is very large you will lose?
Exercise $5$
Let $X$ be a random variable with $E(X) =0$ and $V(X) = 1$. What integer value $k$ will assure us that $P(|X| \geq k) \leq .01$?
Exercise $6$
Let $S_n$ be the number of successes in $n$ Bernoulli trials with probability $p$ for success on each trial. Show, using Chebyshev’s Inequality, that for any $\epsilon > 0$ $P\left( \left| \frac {S_n}n - p \right| \geq \epsilon \right) \leq \frac {p(1 - p)}{n\epsilon^2}\ .$
Exercise $7$
Find the maximum possible value for $p(1 - p)$ if $0 < p < 1$. Using this result and Exercise [exer 8.1.6], show that the estimate $P\left( \left| \frac {S_n}n - p \right| \geq \epsilon \right) \leq \frac 1{4n\epsilon^2}$ is valid for any $p$.
Exercise $8$
A fair coin is tossed a large number of times. Does the Law of Large Numbers assure us that, if $n$ is large enough, with $\mbox {probability} > .99$ the number of heads that turn up will not deviate from $n/2$ by more than 100?
Exercise $9$
In Exercise 6.2.16, you showed that, for the hat check problem, the number $S_n$ of people who get their own hats back has $E(S_n) = V(S_n) = 1$. Using Chebyshev’s Inequality, show that $P(S_n \geq 11) \leq .01$ for any $n \geq 11$.
Exercise $10$
Let $X$ be any random variable which takes on values 0, 1, 2, …, $n$ and has $E(X) = V(X) = 1$. Show that, for any positive integer $k$, $P(X \geq k + 1) \leq \frac 1{k^2}\ .$
Exercise $11$
We have two coins: one is a fair coin and the other is a coin that produces heads with probability 3/4. One of the two coins is picked at random, and this coin is tossed $n$ times. Let $S_n$ be the number of heads that turns up in these $n$ tosses. Does the Law of Large Numbers allow us to predict the proportion of heads that will turn up in the long run? After we have observed a large number of tosses, can we tell which coin was chosen? How many tosses suffice to make us 95 percent sure?
Exercise $12$
(Chebyshev$^8$) Assume that $X_1$, $X_2$, …, $X_n$ are independent random variables with possibly different distributions and let $S_n$ be their sum. Let $m_k = E(X_k)$, $\sigma_k^2 = V(X_k)$, and $M_n = m_1 + m_2 +\cdots+ m_n$. Assume that $\sigma_k^2 < R$ for all $k$. Prove that, for any $\epsilon > 0$, $P\left( \left| \frac {S_n}n - \frac {M_n}n \right| < \epsilon \right) \to 1$ as $n \rightarrow \infty$.
Exercise $13$
A fair coin is tossed repeatedly. Before each toss, you are allowed to decide whether to bet on the outcome. Can you describe a betting system with infinitely many bets which will enable you, in the long run, to win more than half of your bets? (Note that we are disallowing a betting system that says to bet until you are ahead, then quit.) Write a computer program that implements this betting system. As stated above, your program must decide whether to bet on a particular outcome before that outcome is determined. For example, you might select only outcomes that come after there have been three tails in a row. See if you can get more than 50% heads by your “system."
Exercise $14$
Prove the following analogue of Chebyshev’s Inequality: $P(|X - E(X)| \geq \epsilon) \leq \frac 1\epsilon E(|X - E(X)|)\ .$
Exercise $15$
We have proved a theorem often called the "Weak Law of Large Numbers." Most people’s intuition and our computer simulations suggest that, if we toss a coin a sequence of times, the proportion of heads will really approach 1/2; that is, if $S_n$ is the number of heads in $n$ tosses, then we will have $A_n = \frac {S_n}n \to \frac 12$ as $n \to \infty$. Of course, we cannot be sure of this since we are not able to toss the coin an infinite number of times, and, if we could, the coin could come up heads every time. However, the "Strong Law of Large Numbers," proved in more advanced courses, states that $P\left( \frac {S_n}n \to \frac 12 \right) = 1\ .$ Describe a sample space $\Omega$ that would make it possible for us to talk about the event $E = \left\{\, \omega : \frac {S_n}n \to \frac 12\, \right\}\ .$ Could we assign the equiprobable measure to this space?
Exercise $16$
In this exercise, we shall construct an example of a sequence of random variables that satisfies the weak law of large numbers, but not the strong law. The distribution of $X_i$ will have to depend on $i$, because otherwise both laws would be satisfied. (This problem was communicated to us by David Maslen.) Suppose we have an infinite sequence of mutually independent events $A_1, A_2, \ldots$. Let $a_i = P(A_i)$, and let $r$ be a positive integer.
1. Find an expression for the probability that none of the $A_i$ with $i>r$ occur.
2. Use the fact that $1 - x \leq e^{-x}$ to show that $P(\text{no } A_i \text{ with } i > r \text{ occurs}) \leq e^{-\sum_{i=r}^{\infty} a_i}.$
3. (The first Borel-Cantelli lemma) Prove that if $\sum_{i=1}^{\infty} a_i$ diverges, then $P(\text{infinitely many } A_i \text{ occur}) = 1.$ Now, let $X_i$ be a sequence of mutually independent random variables such that for each positive integer $i \geq 2$, $P(X_i = i) = \frac{1}{2i\log i}, \quad P(X_i = -i) = \frac{1}{2i\log i}, \quad P(X_i =0) = 1 - \frac{1}{i \log i}.$ When $i=1$ we let $X_i=0$ with probability $1$. As usual we let $S_n = X_1 + \cdots + X_n$. Note that the mean of each $X_i$ is $0$.
4. Find the variance of $S_n$.
5. Show that the sequence $\langle X_i \rangle$ satisfies the Weak Law of Large Numbers, i.e. prove that for any $\epsilon > 0$ $P\biggl(\biggl|{\frac{S_n}{n}}\biggr| \geq \epsilon\biggr) \rightarrow 0\ ,$ as $n$ tends to infinity. We now show that $\{ X_i \}$ does not satisfy the Strong Law of Large Numbers. Suppose that $S_n / n \rightarrow 0$. Then because $\frac{X_n}{n} = \frac{S_n}{n} - \frac{n-1}{n} \frac{S_{n-1}}{n-1}\ ,$ we know that $X_n / n \rightarrow 0$. From the definition of limits, we conclude that the inequality $|X_i| \geq \frac{1}{2} i$ can only be true for finitely many $i$.
6. Let $A_i$ be the event $|X_i| \geq \frac{1}{2} i$. Find $P(A_i)$. Show that $\sum_{i=1}^{\infty} P(A_i)$ diverges (use the Integral Test).
7. Prove that $A_i$ occurs for infinitely many $i$.
8. Prove that $P\biggl(\frac{S_n}{n} \rightarrow 0\biggr) = 0,$ and hence that the Strong Law of Large Numbers fails for the sequence $\{ X_i \}$.
Exercise $17$
Let us toss a biased coin that comes up heads with probability $p$ and assume the validity of the Strong Law of Large Numbers as described in Exercise 16. Then, with probability 1, $\frac {S_n}n \to p$ as $n \to \infty$. If $f(x)$ is a continuous function on the unit interval, then we also have $f\left( \frac {S_n}n \right) \to f(p)\ .$
Finally, we could hope that $E\left(f\left( \frac {S_n}n \right)\right) \to E(f(p)) = f(p)\ .$ Show that, if all this is correct, as in fact it is, we would have proven that any continuous function on the unit interval is a limit of polynomial functions. This is a sketch of a probabilistic proof of an important theorem in mathematics called the Weierstrass Approximation Theorem.
In the previous section we discussed in some detail the Law of Large Numbers for discrete probability distributions. This law has a natural analogue for continuous probability distributions, which we consider somewhat more briefly here.
Chebyshev Inequality
Just as in the discrete case, we begin our discussion with the Chebyshev Inequality.
Theorem $1$
Let $X$ be a continuous random variable with density function $f(x)$. Suppose $X$ has a finite expected value $\mu=E(X)$ and finite variance $\sigma^{2}=V(X)$. Then for any positive number $\epsilon>0$ we have
$P(|X-\mu| \geq \epsilon) \leq \frac{\sigma^{2}}{\epsilon^{2}}$
The proof is completely analogous to the proof in the discrete case, and we omit it.
Note that this theorem says nothing if $\sigma^{2}=V(X)$ is infinite.
Example $1$
Let $X$ be any continuous random variable with $E(X)=\mu$ and $V(X)=\sigma^{2}$. Then, if $\epsilon=k \sigma=k$ standard deviations for some integer $k$, then
$P(|X-\mu| \geq k \sigma) \leq \frac{\sigma^{2}}{k^{2} \sigma^{2}}=\frac{1}{k^{2}}$
just as in the discrete case.
Law of Large Numbers
With the Chebyshev Inequality we can now state and prove the Law of Large Numbers for the continuous case.
Theorem $2$
Let $X_{1}, X_{2}, \ldots, X_{n}$ be an independent trials process with a continuous density function $f$, finite expected value $\mu$, and finite variance $\sigma^{2}$. Let $S_{n}=X_{1}+X_{2}+\cdots+X_{n}$ be the sum of the $X_{i}$. Then for any real number $\epsilon>0$ we have
$\lim _{n \rightarrow \infty} P\left(\left|\frac{S_{n}}{n}-\mu\right| \geq \epsilon\right)=0$
or equivalently,
$\lim _{n \rightarrow \infty} P\left(\left|\frac{S_{n}}{n}-\mu\right|<\epsilon\right)=1$
Note that this theorem is not necessarily true if $\sigma^{2}$ is infinite (see Example 8.8).
As in the discrete case, the Law of Large Numbers says that the average value of $n$ independent trials tends to the expected value as $n \rightarrow \infty$, in the precise sense that, given $\epsilon>0$, the probability that the average value and the expected value differ by more than $\epsilon$ tends to 0 as $n \rightarrow \infty$.
Once again, we suppress the proof, as it is identical to the proof in the discrete case.
Uniform Case
Example $2$
Suppose we choose at random $n$ numbers from the interval $[0,1]$ with uniform distribution. Then if $X_{i}$ describes the $i$ th choice, we have
\begin{aligned} \mu & =E\left(X_{i}\right)=\int_{0}^{1} x d x=\frac{1}{2}, \ \sigma^{2} & =V\left(X_{i}\right)=\int_{0}^{1} x^{2} d x-\mu^{2} \ & =\frac{1}{3}-\frac{1}{4}=\frac{1}{12} . \end{aligned}
Hence,
\begin{aligned} & E\left(\frac{S_{n}}{n}\right)=\frac{1}{2}, \ & V\left(\frac{S_{n}}{n}\right)=\frac{1}{12 n}, \end{aligned}
and for any $\epsilon>0$,
$P\left(\left|\frac{S_{n}}{n}-\frac{1}{2}\right| \geq \epsilon\right) \leq \frac{1}{12 n \epsilon^{2}}$
This says that if we choose $n$ numbers at random from $[0,1]$, then the chances are better than $1-1 /\left(12 n \epsilon^{2}\right)$ that the difference $\left|S_{n} / n-1 / 2\right|$ is less than $\epsilon$. Note that $\epsilon$ plays the role of the amount of error we are willing to tolerate: If we choose $\epsilon=0.1$, say, then the chances that $\left|S_{n} / n-1 / 2\right|$ is less than 0.1 are better than $1-100 /(12 n)$. For $n=100$, this is about .92, but if $n=1000$, this is better than .99 and if $n=10,000$, this is better than .999.
We can illustrate what the Law of Large Numbers says for this example graphically. The density for $A_{n}=S_{n} / n$ is determined by
$f_{A_{n}}(x)=n f_{S_{n}}(n x) \text {. }$
We have seen in Section 7.2, that we can compute the density $f_{S_{n}}(x)$ for the sum of $n$ uniform random variables. In Figure 8.2 we have used this to plot the density for $A_{n}$ for various values of $n$. We have shaded in the area for which $A_{n}$ would lie between .45 and .55 . We see that as we increase $n$, we obtain more and more of the total area inside the shaded region. The Law of Large Numbers tells us that we can obtain as much of the total area as we please inside the shaded region by choosing $n$ large enough (see also Figure 8.1).
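In numbers, the bound and the true behavior compare as follows. The sketch below is ours (Python, standard library only); it prints the Chebyshev lower bound $1 - 1/(12 n \epsilon^2)$ next to an empirical frequency obtained by simulation.

```python
import random

# For eps = .1, Chebyshev guarantees P(|S_n/n - 1/2| < .1) >= 1 - 100/(12n);
# simulation shows the actual probability is much closer to 1.
random.seed(0)
eps = 0.1
for n in [100, 1000]:
    trials = 2000
    hits = sum(abs(sum(random.random() for _ in range(n)) / n - 0.5) < eps
               for _ in range(trials))
    print(f"n = {n:5d}  bound = {1 - 1/(12*n*eps**2):.4f}  empirical = {hits/trials:.4f}")
```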
Normal Case
Example $3$
Suppose we choose $n$ real numbers at random, using a normal distribution with mean 0 and variance 1 . Then
\begin{aligned} \mu & =E\left(X_{i}\right)=0, \ \sigma^{2} & =V\left(X_{i}\right)=1 . \end{aligned}
Hence,
\begin{aligned} & E\left(\frac{S_{n}}{n}\right)=0 \ & V\left(\frac{S_{n}}{n}\right)=\frac{1}{n} \end{aligned}
and, for any $\epsilon>0$,
$P\left(\left|\frac{S_{n}}{n}-0\right| \geq \epsilon\right) \leq \frac{1}{n \epsilon^{2}} .$
In this case it is possible to compare the Chebyshev estimate for $P\left(\left|S_{n} / n-\mu\right| \geq \epsilon\right)$ in the Law of Large Numbers with exact values, since we know the density function for $S_{n} / n$ exactly (see Example 7.9). The comparison is shown in Table 8.1, for $\epsilon=.1$. The data in this table was produced by the program LawContinuous. We see here that the Chebyshev estimates are in general not very accurate.
Table $1$: Chebyshev estimates.

| $n$ | $P(\vert S_n/n \vert \geq .1)$ | Chebyshev |
| :---: | :---: | :---: |
| 100 | .31731 | 1.00000 |
| 200 | .15730 | .50000 |
| 300 | .08326 | .33333 |
| 400 | .04550 | .25000 |
| 500 | .02535 | .20000 |
| 600 | .01431 | .16667 |
| 700 | .00815 | .14286 |
| 800 | .00468 | .12500 |
| 900 | .00270 | .11111 |
| 1000 | .00157 | .10000 |
Monte Carlo Method
Here is a somewhat more interesting example.
Example $4$
Let $g(x)$ be a continuous function defined for $x \in[0,1]$ with values in $[0,1]$. In Section 2.1, we showed how to estimate the area of the region under the graph of $g(x)$ by the Monte Carlo method, that is, by choosing a large number of random values for $x$ and $y$ with uniform distribution and seeing what fraction of the points $P(x, y)$ fell inside the region under the graph (see Example 2.2).
Here is a better way to estimate the same area (see Figure 8.3). Let us choose a large number of independent values $X_{n}$ at random from $[0,1]$ with uniform density, set $Y_{n}=g\left(X_{n}\right)$, and find the average value of the $Y_{n}$. Then this average is our estimate for the area. To see this, note that if the density function for $X_{n}$ is uniform,
\begin{aligned} \mu & =E\left(Y_{n}\right)=\int_{0}^{1} g(x) f(x) d x \ & =\int_{0}^{1} g(x) d x \ & =\text { average value of } g(x), \end{aligned}
while the variance is
$\sigma^{2}=E\left(\left(Y_{n}-\mu\right)^{2}\right)=\int_{0}^{1}(g(x)-\mu)^{2} d x<1$
since for all $x$ in [0,1], $g(x)$ is in $[0,1]$, hence $\mu$ is in $[0,1]$, and so $|g(x)-\mu| \leq 1$. Now let $A_{n}=(1 / n)\left(Y_{1}+Y_{2}+\cdots+Y_{n}\right)$. Then by Chebyshev's Inequality, we have
$P\left(\left|A_{n}-\mu\right| \geq \epsilon\right) \leq \frac{\sigma^{2}}{n \epsilon^{2}}<\frac{1}{n \epsilon^{2}} .$
This says that to get within $\epsilon$ of the true value for $\mu=\int_{0}^{1} g(x) d x$ with probability at least $p$, we should choose $n$ so that $1 / n \epsilon^{2} \leq 1-p$ (i.e., so that $n \geq 1 / \epsilon^{2}(1-p)$ ). Note that this method tells us how large to take $n$ to get a desired accuracy.
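Here is a minimal sketch of the averaging method (ours, with $g(x) = x^2$ as an illustrative choice), whose true integral is $1/3$. By the bound just derived, taking $\epsilon = .01$ and $p = .99$ gives $n \geq 1/(\epsilon^2(1-p)) = 1{,}000{,}000$ samples.

```python
import random

# Monte Carlo integration by averaging: estimate the integral of
# g(x) = x^2 over [0, 1] (true value 1/3) from n uniform samples.
random.seed(0)
g = lambda x: x * x
n = 1_000_000
estimate = sum(g(random.random()) for _ in range(n)) / n
print(f"estimate = {estimate:.5f}  true value = {1/3:.5f}")
```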
The Law of Large Numbers requires that the variance $\sigma^{2}$ of the original underlying density be finite: $\sigma^{2}<\infty$. In cases where this fails to hold, the Law of Large Numbers may fail, too. An example follows.
Cauchy Case
Example $5$
Suppose we choose $n$ numbers from $(-\infty,+\infty)$ with a Cauchy density with parameter $a=1$. We know that for the Cauchy density the expected value and variance are undefined (see Example 6.28). In this case, the density function for
$A_{n}=\frac{S_{n}}{n}$
is given by (see Example 7.6)
$f_{A_{n}}(x)=\frac{1}{\pi\left(1+x^{2}\right)}$
that is, the density function for $A_{n}$ is the same for all $n$. In this case, as $n$ increases, the density function does not change at all, and the Law of Large Numbers does not hold.
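A simulation shows the failure vividly. The sketch below is ours; it generates Cauchy samples by inverting the distribution function, $F^{-1}(u) = \tan(\pi(u - 1/2))$, and the running averages never settle down.

```python
import math
import random

# Running averages of standard Cauchy samples do not converge:
# the average A_n has the same Cauchy density for every n.
random.seed(0)
total, n = 0.0, 0
for checkpoint in [100, 1_000, 10_000, 100_000, 1_000_000]:
    while n < checkpoint:
        total += math.tan(math.pi * (random.random() - 0.5))
        n += 1
    print(f"n = {n:7d}  average = {total / n:10.4f}")
```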
Exercises
Exercise $1$:
Let $X$ be a continuous random variable with mean $\mu=10$ and variance $\sigma^{2}=100 / 3$. Using Chebyshev's Inequality, find an upper bound for the following probabilities. (a) $P(|X-10| \geq 2)$.
(b) $P(|X-10| \geq 5)$.
(c) $P(|X-10| \geq 9)$.
(d) $P(|X-10| \geq 20)$.
Exercise $2$:

Let $X$ be a continuous random variable with values uniformly distributed over the interval $[0,20]$.
(a) Find the mean and variance of $X$.
(b) Calculate $P(|X-10| \geq 2), P(|X-10| \geq 5), P(|X-10| \geq 9)$, and $P(|X-10| \geq 20)$ exactly. How do your answers compare with those of Exercise 1? How good is Chebyshev's Inequality in this case?
Exercise $3$:
Let $X$ be the random variable of Exercise 2 .
(a) Calculate the function $f(x)=P(|X-10| \geq x)$.
(b) Now graph the function $f(x)$, and on the same axes, graph the Chebyshev function $g(x)=100 /\left(3 x^{2}\right)$. Show that $f(x) \leq g(x)$ for all $x>0$, but that $g(x)$ is not a very good approximation for $f(x)$.
Exercise $4$:
Let $X$ be a continuous random variable with values exponentially distributed over $[0, \infty)$ with parameter $\lambda=0.1$.
(a) Find the mean and variance of $X$.
(b) Using Chebyshev's Inequality, find an upper bound for the following probabilities: $P(|X-10| \geq 2), P(|X-10| \geq 5), P(|X-10| \geq 9)$, and $P(|X-10| \geq 20)$.
(c) Calculate these probabilities exactly, and compare with the bounds in (b).
Exercise $5$:
Let $X$ be a continuous random variable with values normally distributed over $(-\infty,+\infty)$ with mean $\mu=0$ and variance $\sigma^{2}=1$.
(a) Using Chebyshev's Inequality, find upper bounds for the following probabilities: $P(|X| \geq 1), P(|X| \geq 2)$, and $P(|X| \geq 3)$.
(b) The area under the normal curve between -1 and 1 is .6827 , between -2 and 2 is .9545 , and between -3 and 3 it is .9973 (see the table in Appendix A). Compare your bounds in (a) with these exact values. How good is Chebyshev's Inequality in this case?
Exercise $6$:
If $X$ is normally distributed, with mean $\mu$ and variance $\sigma^{2}$, find an upper bound for the following probabilities, using Chebyshev's Inequality.
(a) $P(|X-\mu| \geq \sigma)$.
(b) $P(|X-\mu| \geq 2 \sigma)$.
(c) $P(|X-\mu| \geq 3 \sigma)$. (d) $P(|X-\mu| \geq 4 \sigma)$.
Now find the exact value using the program NormalArea or the normal table in Appendix A, and compare.
Exercise $7$:
If $X$ is a random variable with mean $\mu \neq 0$ and variance $\sigma^{2}$, define the relative deviation $D$ of $X$ from its mean by
$D=\left|\frac{X-\mu}{\mu}\right|$
(a) Show that $P(D \geq a) \leq \sigma^{2} /\left(\mu^{2} a^{2}\right)$.
(b) If $X$ is the random variable of Exercise 1, find an upper bound for $P(D \geq$ $.2), P(D \geq .5), P(D \geq .9)$, and $P(D \geq 2)$.
Exercise $8$:
Let $X$ be a continuous random variable and define the standardized version $X^{*}$ of $X$ by:
$X^{*}=\frac{X-\mu}{\sigma} .$
(a) Show that $P\left(\left|X^{*}\right| \geq a\right) \leq 1 / a^{2}$.
(b) If $X$ is the random variable of Exercise 1, find bounds for $P\left(\left|X^{*}\right| \geq 2\right)$, $P\left(\left|X^{*}\right| \geq 5\right)$, and $P\left(\left|X^{*}\right| \geq 9\right)$.
Exercise $9$:
(a) Suppose a number $X$ is chosen at random from $[0,20]$ with uniform probability. Find a lower bound for the probability that $X$ lies between 8 and 12, using Chebyshev's Inequality.
(b) Now suppose 20 real numbers are chosen independently from $[0,20]$ with uniform probability. Find a lower bound for the probability that their average lies between 8 and 12 .
(c) Now suppose 100 real numbers are chosen independently from $[0,20]$. Find a lower bound for the probability that their average lies between 8 and 12.
Exercise $10$:
A student's score on a particular calculus final is a random variable with values of $[0,100]$, mean 70 , and variance 25 .
(a) Find a lower bound for the probability that the student's score will fall between 65 and 75 .
(b) If 100 students take the final, find a lower bound for the probability that the class average will fall between 65 and 75 .
Exercise $11$:
The Pilsdorff beer company runs a fleet of trucks along the 100 mile road from Hangtown to Dry Gulch, and maintains a garage halfway in between. Each of the trucks is apt to break down at a point $X$ miles from Hangtown, where $X$ is a random variable uniformly distributed over $[0,100]$.
(a) Find a lower bound for the probability $P(|X-50| \leq 10)$. (b) Suppose that in one bad week, 20 trucks break down. Find a lower bound for the probability $P\left(\left|A_{20}-50\right| \leq 10\right)$, where $A_{20}$ is the average of the distances from Hangtown at the time of breakdown.
Exercise $12$:
A share of common stock in the Pilsdorff beer company has a price $Y_{n}$ on the $n$th business day of the year. Finn observes that the price change $X_{n}=$ $Y_{n+1}-Y_{n}$ appears to be a random variable with mean $\mu=0$ and variance $\sigma^{2}=1 / 4$. If $Y_{1}=30$, find a lower bound for the following probabilities, under the assumption that the $X_{n}$ 's are mutually independent.
(a) $P\left(25 \leq Y_{2} \leq 35\right)$.
(b) $P\left(25 \leq Y_{11} \leq 35\right)$.
(c) $P\left(25 \leq Y_{101} \leq 35\right)$.
Exercise $13$:
Suppose one hundred numbers $X_{1}, X_{2}, \ldots, X_{100}$ are chosen independently at random from $[0,20]$. Let $S=X_{1}+X_{2}+\cdots+X_{100}$ be the sum, $A=S / 100$ the average, and $S^{*}=(S-1000) /(10 / \sqrt{3})$ the standardized sum. Find lower bounds for the probabilities
(a) $P(|S-1000| \leq 100)$.
(b) $P(|A-10| \leq 1)$.
(c) $P\left(\left|S^{*}\right| \leq \sqrt{3}\right)$.
Exercise $14$:
Let $X$ be a continuous random variable normally distributed on $(-\infty,+\infty)$ with mean 0 and variance 1. Using the normal table provided in Appendix A, or the program NormalArea, find values for the function $f(x)=P(|X| \geq x)$ as $x$ increases from 0 to 4.0 in steps of .25. Note that for $x \geq 0$ the table gives $NA(0, x)=P(0 \leq X \leq x)$ and thus $P(|X| \geq x)=2(.5-NA(0, x))$. Plot by hand the graph of $f(x)$ using these values, and the graph of the Chebyshev function $g(x)=1 / x^{2}$, and compare (see Exercise 3).
Exercise $15$:
Repeat Exercise 14, but this time with mean 10 and variance 3. Note that the table in Appendix A presents values for a standard normal variable. Find the standardized version $X^{*}$ for $X$, find values for $f^{*}(x)=P\left(\left|X^{*}\right| \geq x\right)$ as in Exercise 14, and then rescale these values for $f(x)=P(|X-10| \geq x)$. Graph and compare this function with the Chebyshev function $g(x)=3 / x^{2}$.
Exercise $16$:
Let $Z=X / Y$ where $X$ and $Y$ have normal densities with mean 0 and standard deviation 1 . Then it can be shown that $Z$ has a Cauchy density.
(a) Write a program to illustrate this result by plotting a bar graph of 1000 samples obtained by forming the ratio of two standard normal outcomes. Compare your bar graph with the graph of the Cauchy density. Depending upon which computer language you use, you may or may not need to tell the computer how to simulate a normal random variable. A method for doing this was described in Section 5.2. (b) We have seen that the Law of Large Numbers does not apply to the Cauchy density (see Example 8.8). Simulate a large number of experiments with Cauchy density and compute the average of your results. Do these averages seem to be approaching a limit? If so can you explain why this might be?
Exercise $17$:
Show that, if $X \geq 0$, then $P(X \geq a) \leq E(X) / a$.
Exercise $18$:

(Lamperti$^9$) Let $X$ be a non-negative random variable. What is the best upper bound you can give for $P(X \geq a)$ if you know
(a) $E(X)=20$.
(b) $E(X)=20$ and $V(X)=25$.
(c) $E(X)=20, V(X)=25$, and $X$ is symmetric about its mean.
${ }^{9}$ Private communication. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/08%3A_Law_of_Large_Numbers/8.02%3A_Continuous_Random_Variables.txt |
The second fundamental theorem of probability is the Central Limit Theorem. This theorem says that if $S_n$ is the sum of $n$ mutually independent random variables, then the distribution function of $S_n$ is well-approximated by a certain type of continuous function known as a normal density function, which is given by the formula
$f_{\mu,\sigma}(x) = \frac{1}{\sqrt {2\pi}\sigma}e^{-(x-\mu)^2/(2\sigma^2)}\ ,$
as we have seen in Chapter 5. In this section, we will deal only with the case that $\mu = 0$ and $\sigma = 1$. We will call this particular normal density function the standard normal density, and we will denote it by $\phi(x)$:
$\phi(x) = \frac {1}{\sqrt{2\pi}}e^{-x^2/2}\ .$ A graph of this function is given in Figure $1$. It can be shown that the area under any normal density equals 1.
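A sketch of the classical argument for this last fact: squaring the integral of $e^{-x^2/2}$ and passing to polar coordinates gives
$\left( \int_{-\infty}^{\infty} e^{-x^2/2}\, dx \right)^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2 + y^2)/2}\, dx\, dy = \int_0^{2\pi}\! \int_0^{\infty} e^{-r^2/2}\, r\, dr\, d\theta = 2\pi\ ,$
so $\int_{-\infty}^{\infty} \phi(x)\, dx = 1$; the substitution $u = (x - \mu)/\sigma$ then handles the general density $f_{\mu,\sigma}$.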
The Central Limit Theorem tells us, quite generally, what happens when we have the sum of a large number of independent random variables each of which contributes a small amount to the total. In this section we shall discuss this theorem as it applies to Bernoulli trials, and in the next section we shall consider more general processes. We will discuss the theorem in the case that the individual random variables are identically distributed, but the theorem is true, under certain conditions, even if the individual random variables have different distributions.
Bernoulli Trials
Consider a Bernoulli trials process with probability $p$ for success on each trial. Let $X_i = 1$ or 0 according as the $i$th outcome is a success or failure, and let $S_n = X_1 + X_2 +\cdots+ X_n$. Then $S_n$ is the number of successes in $n$ trials. We know that $S_n$ has as its distribution the binomial probabilities $b(n,p,j)$. In Section 3.2, we plotted these distributions for $p = .3$ and $p = .5$ for various values of $n$ (see Figure 3.8).
We note that the maximum values of the distributions appear near the expected value $np$, which causes the spike graphs to drift off to the right as $n$ increases. Moreover, these maximum values approach 0 as $n$ increases, which causes the spike graphs to flatten out.
Standardized Sums
We can prevent the drifting of these spike graphs by subtracting the expected number of successes $np$ from $S_n$, obtaining the new random variable $S_n - np$. Now the maximum values of the distributions will always be near 0.
To prevent the spreading of these spike graphs, we can normalize $S_n - np$ to have variance 1 by dividing by its standard deviation $\sqrt{npq}$ (see Exercise 6.2.13 and Exercise 6.2.16).
Definition: Standardized Sum
The standardized sum of $S_n$ is given by
$S_n^* = \frac {S_n - np}{\sqrt{npq}}\ .$
$S_n^*$ always has expected value 0 and variance 1.
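This follows directly from $E(S_n) = np$ and $V(S_n) = npq$:
$E(S_n^*) = \frac{E(S_n) - np}{\sqrt{npq}} = 0\ , \qquad V(S_n^*) = \frac{V(S_n)}{npq} = 1\ .$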
Suppose we plot a spike graph with the spikes placed at the possible values of $S_n^*$: $x_0$, $x_1$, …, $x_n$, where
$x_j = \frac {j - np}{\sqrt{npq}}\ .$
We make the height of the spike at $x_j$ equal to the distribution value $b(n, p, j)$. An example of this standardized spike graph, with $n = 270$ and $p = .3$, is shown in Figure $2$. This graph is beautifully bell-shaped. We would like to fit a normal density to this spike graph. The obvious choice to try is the standard normal density, since it is centered at 0, just as the standardized spike graph is. In this figure, we have drawn this standard normal density. The reader will note that a horrible thing has occurred: Even though the shapes of the two graphs are the same, the heights are quite different.
If we want the two graphs to fit each other, we must modify one of them; we choose to modify the spike graph. Since the shapes of the two graphs look fairly close, we will attempt to modify the spike graph without changing its shape. The reason for the differing heights is that the sum of the heights of the spikes equals 1, while the area under the standard normal density equals 1. If we were to draw a continuous curve through the top of the spikes, and find the area under this curve, we see that we would obtain, approximately, the sum of the heights of the spikes multiplied by the distance between consecutive spikes, which we will call $\epsilon$. Since the sum of the heights of the spikes equals one, the area under this curve would be approximately $\epsilon$. Thus, to change the spike graph so that the area under this curve has value 1, we need only multiply the heights of the spikes by $1/\epsilon$. It is easy to see from the equation for $x_j$ above that
$\epsilon = \frac {1}{\sqrt {npq}}\ .$
In Figure $2$ we show the standardized sum $S^*_n$ for $n = 270$ and $p = .3$, after correcting the heights, together with the standard normal density. (This figure was produced with the program CLTBernoulliPlot.) The reader will note that the standard normal fits the height-corrected spike graph extremely well. In fact, one version of the Central Limit Theorem (see Theorem $1$) says that as $n$ increases, the standard normal density will do an increasingly better job of approximating the height-corrected spike graphs corresponding to a Bernoulli trials process with $n$ summands.
Let us fix a value $x$ on the $x$-axis and let $n$ be a fixed positive integer. Then, using the equation for $x_j$ above, the point $x_j$ that is closest to $x$ has a subscript $j$ given by the formula $j = \langle np + x \sqrt{npq} \rangle\ ,$ where $\langle a \rangle$ means the integer nearest to $a$. Thus the height of the spike above $x_j$ will be $\sqrt{npq}\,b(n,p,j) = \sqrt{npq}\,b(n,p,\langle np + x \sqrt{npq} \rangle)\ .$ For large $n$, we have seen that the height of the spike is very close to the height of the normal density at $x$. This suggests the following theorem.
Theorem $1$
(Central Limit Theorem for Binomial Distributions) For the binomial distribution $b(n,p,j)$ we have $\lim_{n \to \infty} \sqrt{npq}\,b(n,p,\langle np + x\sqrt{npq} \rangle) = \phi(x)\ ,$ where $\phi(x)$ is the standard normal density.
Proof
The proof of this theorem can be carried out using Stirling's approximation from Section 3.1. We indicate this method of proof by considering the case $x = 0$. In this case, the theorem states that
$\lim_{n \to \infty} \sqrt{npq}\,b(n,p,\langle np \rangle) = \frac 1{\sqrt{2\pi}} = .3989\ldots\ .$
In order to simplify the calculation, we assume that $np$ is an integer, so that $\langle np \rangle = np$. Then
$\sqrt{npq}\,b(n,p,np) = \sqrt{npq}\,p^{np}q^{nq} \frac {n!}{(np)!\,(nq)!}\ .$
Recall that Stirling's formula (see Theorem 3.3) states that
$n! \sim \sqrt{2\pi n}\,n^n e^{-n} \qquad \mbox{as} \quad n \to \infty\ .$
Using this, we have
$\sqrt{npq}\,b(n,p,np) \sim \frac {\sqrt{npq}\,p^{np}q^{nq} \sqrt{2\pi n}\,n^n e^{-n}}{\sqrt{2\pi np} \sqrt{2\pi nq}\,(np)^{np} (nq)^{nq} e^{-np} e^{-nq}}\ ,$
which simplifies to $1/\sqrt{2\pi}$.
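This convergence is easy to check numerically. The following sketch (our own illustration, assuming a Python environment; it is not part of the book's software) evaluates $\sqrt{npq}\,b(n,p,np)$ for increasing $n$ with $p = 1/2$:

```python
# Check of the x = 0 case: sqrt(npq) * b(n, p, np) should approach
# 1/sqrt(2*pi) = .3989... as n grows.  Logarithms avoid overflow.
from math import lgamma, log, exp, sqrt, pi

def b(n, p, j):
    """Binomial probability b(n, p, j), computed via logarithms."""
    q = 1 - p
    logb = (lgamma(n + 1) - lgamma(j + 1) - lgamma(n - j + 1)
            + j * log(p) + (n - j) * log(q))
    return exp(logb)

for n in [10, 100, 1000, 10000]:
    print(n, sqrt(n * 0.25) * b(n, 0.5, n // 2))   # here <np> = np = n/2

print("limit:", 1 / sqrt(2 * pi))                  # 0.39894...
```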
Approximating Binomial Distributions
We can use Theorem $1$ to find approximations for the values of binomial distribution functions. If we wish to find an approximation for $b(n, p, j)$, we set $j = np + x\sqrt{npq}$ and solve for $x$, obtaining $x = {\frac{j-np}{\sqrt{npq}}}\ .$
Theorem $1$ then says that
$\sqrt{npq}\,b(n,p,j)$
is approximately equal to $\phi(x)$, so
$b(n,p,j) \approx \frac{\phi(x)}{\sqrt{npq}} = \frac{1}{\sqrt{npq}}\,\phi\biggl(\frac{j-np}{\sqrt{npq}}\biggr)\ .$
Example $1$
Let us estimate the probability of exactly 55 heads in 100 tosses of a coin. For this case $np = 100 \cdot 1/2 = 50$ and $\sqrt{npq} = \sqrt{100 \cdot 1/2 \cdot 1/2} = 5$. Thus $x_{55} = (55 - 50)/5 = 1$ and
$P(S_{100} = 55) \approx \frac{\phi(1)}{5} = \frac{1}{5} \left( \frac{1}{\sqrt{2\pi}}\,e^{-1/2} \right) \approx .0484\ .$
To four decimal places, the actual value is .0485, and so the approximation is very good.
The program CLTBernoulliLocal illustrates this approximation for any choice of $n$, $p$, and $j$. We have run this program for two examples. The first is the probability of exactly 50 heads in 100 tosses of a coin; the estimate is .0798, while the actual value, to four decimal places, is .0796. The second example is the probability of exactly eight sixes in 36 rolls of a die; here the estimate is .1196, while the actual value, to four decimal places, is .1093.
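A minimal stand-in for such a program, written in Python (the function name is ours; this is a sketch, not the book's CLTBernoulliLocal):

```python
# Compare the exact binomial probability b(n, p, j) with the normal
# approximation phi(x)/sqrt(npq), where x = (j - np)/sqrt(npq).
from math import comb, exp, sqrt, pi

def local_approx(n, p, j):
    """Return (exact b(n, p, j), normal approximation)."""
    q = 1 - p
    exact = comb(n, j) * p**j * q**(n - j)
    s = sqrt(n * p * q)
    x = (j - n * p) / s
    return exact, exp(-x * x / 2) / (s * sqrt(2 * pi))

print(local_approx(100, 0.5, 50))   # approximately (.0796, .0798)
print(local_approx(36, 1/6, 8))     # approximately (.1093, .1196)
```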
The individual binomial probabilities tend to 0 as $n$ tends to infinity. In most applications we are not interested in the probability that a specific outcome occurs, but rather in the probability that the outcome lies in a given interval, say the interval $[a, b]$. In order to find this probability, we add the heights of the spikes for values of $j$ between $a$ and $b$. This is the same as asking for the probability that the standardized sum $S_n^*$ lies between $a^*$ and $b^*$, where $a^*$ and $b^*$ are the standardized values of $a$ and $b$. As $n$ tends to infinity, this sum could be expected to approach the area under the standard normal density between $a^*$ and $b^*$. The Central Limit Theorem for Bernoulli Trials states that this does indeed happen.
Theorem $2$
(Central Limit Theorem for Bernoulli Trials) Let $S_n$ be the number of successes in $n$ Bernoulli trials with probability $p$ for success, and let $a$ and $b$ be two fixed real numbers. Then $\lim_{n \rightarrow \infty} P\biggl(a \le \frac{S_n - np}{\sqrt{npq}} \le b\biggr) = \int_a^b \phi(x)\,dx\ .$
Proof
This theorem can be proved by adding together the approximations to $b(n,p,k)$ given in Theorem $1$.
We know from calculus that the integral on the right side of this equation is equal to the area under the graph of the standard normal density $\phi(x)$ between $a$ and $b$. We denote this area by $NA(a^*, b^*)$. Unfortunately, there is no simple way to integrate the function $e^{-x^2/2}$, and so we must either use a table of values or else a numerical integration program. (See Table $1$ for values of $NA(0, z)$. A more extensive table is given in Appendix A.)
It is clear from the symmetry of the standard normal density that areas such as that between $-2$ and 3 can be found from this table by adding the area from 0 to 2 (same as that from $-2$ to 0) to the area from 0 to 3.
Approximation of Binomial Probabilities
Suppose that $S_n$ is binomially distributed with parameters $n$ and $p$. We have seen that the above theorem shows how to estimate a probability of the form $P(i \le S_n \le j)\ , \label{eq 9.2}$ where $i$ and $j$ are integers between 0 and $n$. As we have seen, the binomial distribution can be represented as a spike graph, with spikes at the integers between 0 and $n$, and with the height of the $k$th spike given by $b(n, p, k)$. For moderate-sized values of $n$, if we standardize this spike graph, and change the heights of its spikes, in the manner described above, the sum of the heights of the spikes is approximated by the area under the standard normal density between $i^*$ and $j^*$.
Table $1$: Values of NA(0, z), the area under the standard normal density from 0 to z.

 z    NA(0,z)     z    NA(0,z)     z    NA(0,z)     z    NA(0,z)
.0    .0000      1.0   .3413      2.0   .4772      3.0   .4987
.1    .0398      1.1   .3643      2.1   .4821      3.1   .4990
.2    .0793      1.2   .3849      2.2   .4861      3.2   .4993
.3    .1179      1.3   .4032      2.3   .4893      3.3   .4995
.4    .1554      1.4   .4192      2.4   .4918      3.4   .4997
.5    .1915      1.5   .4332      2.5   .4938      3.5   .4998
.6    .2257      1.6   .4452      2.6   .4953      3.6   .4998
.7    .2580      1.7   .4554      2.7   .4965      3.7   .4999
.8    .2881      1.8   .4641      2.8   .4974      3.8   .4999
.9    .3159      1.9   .4713      2.9   .4981      3.9   .5000
It turns out that a slightly more accurate approximation is afforded by the area under the standard normal density between the standardized values corresponding to $(i - 1/2)$ and $(j + 1/2)$; these values are
$i^* = \frac{i - 1/2 - np}{\sqrt {npq}} \qquad \mbox{and} \qquad j^* = \frac{j + 1/2 - np}{\sqrt {npq}}\ .$ Thus, $P(i \le S_n \le j) \approx NA\Biggl({\frac{i - \frac{1}{2} - np}{\sqrt {npq}}} , {\frac{j + {\frac{1}{2}} - np}{\sqrt {npq}}}\Biggr)\ .$
It should be stressed that the approximations obtained by using the Central Limit Theorem are only approximations, and sometimes they are not very close to the actual values (see Exercise 9.2.111).
We now illustrate this idea with some examples.
Example $2$
A coin is tossed 100 times. Estimate the probability that the number of heads lies between 40 and 60 (the word "between" in mathematics means inclusive of the endpoints). The expected number of heads is $100 \cdot 1/2 = 50$, and the standard deviation for the number of heads is $\sqrt{100 \cdot 1/2 \cdot 1/2} = 5$. Thus, since $n = 100$ is reasonably large, we have
\begin{aligned} P(40 \le S_n \le 60) &\approx P\left( \frac {39.5 - 50}5 \le S_n^* \le \frac {60.5 - 50}5 \right) \\ &= P(-2.1 \le S_n^* \le 2.1) \\ &\approx NA(-2.1,2.1) \\ &= 2\,NA(0,2.1) \\ &\approx .9642\ . \end{aligned}
The actual value is .96480, to five decimal places.
Note that in this case we are asking for the probability that the outcome will not deviate by more than two standard deviations from the expected value. Had we asked for the probability that the number of successes is between 35 and 65, this would have represented three standard deviations from the mean, and, using our 1/2 correction, our estimate would be the area under the standard normal curve between $-3.1$ and 3.1, or $2\,NA(0,3.1) = .9980$. The actual answer in this case, to five places, is .99821.
It is important to work a few problems by hand to understand the conversion from a given inequality to an inequality relating to the standardized variable. After this, one can then use a computer program that carries out this conversion, including the 1/2 correction. The program CLTBernoulliGlobal is such a program for estimating probabilities of the form $P(a \leq S_n \leq b)$.
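A minimal sketch of such a program in Python (the names below are ours, not the book's; the identity $NA(0,z) = \frac{1}{2}\,\mathrm{erf}(z/\sqrt{2})$ takes the place of the table lookup):

```python
# Estimate P(a <= S_n <= b) for Bernoulli trials, with the 1/2 correction.
from math import erf, sqrt

def NA(a, b):
    """Area under the standard normal density between a and b."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2)))
    return Phi(b) - Phi(a)

def clt_bernoulli_global(n, p, a, b):
    s = sqrt(n * p * (1 - p))
    return NA((a - 0.5 - n * p) / s, (b + 0.5 - n * p) / s)

print(clt_bernoulli_global(100, 0.5, 40, 60))   # ~ .9643 (exact .96480)
print(clt_bernoulli_global(100, 0.5, 35, 65))   # ~ .9981 (exact .99821)
```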
Example $3$
Dartmouth College would like to have 1050 freshmen. This college cannot accommodate more than 1060. Assume that each applicant accepts with probability .6 and that the acceptances can be modeled by Bernoulli trials. If the college accepts 1700, what is the probability that it will have too many acceptances?
If it accepts 1700 students, the expected number of students who matriculate is $.6 \cdot 1700 = 1020$. The standard deviation for the number who accept is $\sqrt{1700 \cdot .6 \cdot .4} \approx 20$. Thus we want to estimate the probability
\begin{aligned} P(S_{1700} > 1060) &= P(S_{1700} \ge 1061) \\ &= P\left( S_{1700}^* \ge \frac {1060.5 - 1020}{20} \right) \\ &= P(S_{1700}^* \ge 2.025)\ . \end{aligned}
From Table $1$, if we interpolate, we would estimate this probability to be $.5 - .4784 = .0216$. Thus, the college is fairly safe using this admission policy.
Applications to Statistics
There are many important questions in the field of statistics that can be answered using the Central Limit Theorem for independent trials processes. The following example is one that is encountered quite frequently in the news. Another example of an application of the Central Limit Theorem to statistics is given in the next section.
Example $4$
One frequently reads that a poll has been taken to estimate the proportion of people in a certain population who favor one candidate over another in a race with two candidates. (Similar models apply to races with more than two candidates, and to ballot propositions.) Clearly, it is not possible for pollsters to ask everyone for their preference. What is done instead is to pick a subset of the population, called a sample, and ask everyone in the sample for their preference. Let $p$ be the actual proportion of people in the population who are in favor of candidate $A$ and let $q = 1-p$. If we choose a sample of size $n$ from the population, the preferences of the people in the sample can be represented by random variables $X_1,\ X_2,\ \ldots,\ X_n$, where $X_i = 1$ if person $i$ is in favor of candidate $A$, and $X_i = 0$ if person $i$ is in favor of candidate $B$. Let $S_n = X_1 + X_2 + \cdots + X_n$. If each subset of size $n$ is chosen with the same probability, then $S_n$ is hypergeometrically distributed. If $n$ is small relative to the size of the population (which is typically true in practice), then $S_n$ is approximately binomially distributed, with parameters $n$ and $p$.
The pollster wants to estimate the value $p$. An estimate for $p$ is provided by the value $\bar p = S_n/n$, which is the proportion of people in the sample who favor candidate $A$. The Central Limit Theorem says that the random variable $\bar p$ is approximately normally distributed. (In fact, our version of the Central Limit Theorem says that the distribution function of the random variable
$S_n^* = \frac{S_n - np}{\sqrt{npq}}$
is approximated by the standard normal density.) But we have
$\bar p = \frac{S_n - np}{\sqrt {npq}}\sqrt{\frac{pq}{n}}+p\ ,$
i.e., $\bar p$ is just a linear function of $S_n^*$. Since the distribution of $S_n^*$ is approximated by the standard normal density, the distribution of the random variable $\bar p$ must also be bell-shaped. We also know how to write the mean and standard deviation of $\bar p$ in terms of $p$ and $n$. The mean of $\bar p$ is just $p$, and the standard deviation is
$\sqrt{\frac{pq}{n}}\ .$
Thus, it is easy to write down the standardized version of $\bar p$; it is
$\bar p^* = \frac{\bar p - p}{\sqrt{pq/n}}\ .$
Since the distribution of the standardized version of $\bar p$ is approximated by the standard normal density, we know, for example, that about 95% of its values will lie within two standard deviations of its mean, and the same is true of $\bar p$. So we have
$P\left(p - 2\sqrt{\frac{pq}{n}} < \bar p < p + 2\sqrt{\frac{pq}{n}}\right) \approx .954\ .$
Now the pollster does not know $p$ or $q$, but he can use $\bar p$ and $\bar q = 1 - \bar p$ in their place without too much danger. With this idea in mind, the above statement is equivalent to the statement
$P\left(\bar p - 2\sqrt{\frac{\bar p \bar q}{n}} < p < \bar p + 2\sqrt{\frac{\bar p \bar q}{n}}\right) \approx .954\ .$
The resulting interval
$\left( \bar p - \frac {2\sqrt{\bar p \bar q}}{\sqrt n},\ \bar p + \frac {2\sqrt{\bar p \bar q}}{\sqrt n} \right)$
is called the 95 percent confidence interval for the unknown value of $p$. The name is suggested by the fact that if we use this method to estimate $p$ in a large number of samples we should expect that in about 95 percent of the samples the true value of $p$ is contained in the confidence interval obtained from the sample. In Exercise $11$ you are asked to write a program to illustrate that this does indeed happen.
The pollster has control over the value of $n$. Thus, if he wants to create a 95% confidence interval with length 6%, then he should choose a value of $n$ so that
$\frac {2\sqrt{\bar p \bar q}}{\sqrt n} \le .03\ .$
Using the fact that $\bar p \bar q \le 1/4$, no matter what the value of $\bar p$ is, it is easy to show that if he chooses a value of $n$ so that
$\frac{1}{\sqrt n} \le .03\ ,$
he will be safe. This is equivalent to choosing
$n \ge 1111\ .$
So if the pollster chooses $n$ to be 1200, say, and calculates $\bar p$ using his sample of size 1200, then 19 times out of 20 (i.e., 95% of the time), his confidence interval, which is of length 6%, will contain the true value of $p$. This type of confidence interval is typically reported in the news as follows: this survey has a 3% margin of error. In fact, most of the surveys that one sees reported in the paper will have sample sizes around 1000. A somewhat surprising fact is that the size of the population has apparently no effect on the sample size needed to obtain a 95% confidence interval for $p$ with a given margin of error. To see this, note that the value of $n$ that was needed depended only on the number .03, which is the margin of error. In other words, whether the population is of size 100,000 or 100,000,000, the pollster needs only to choose a sample of size 1200 or so to get the same accuracy of estimate of $p$. (We did use the fact that the sample size was small relative to the population size in the statement that $S_n$ is approximately binomially distributed.)
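The arithmetic above, and the coverage experiment of Exercise $11$, can be sketched as follows (a hypothetical illustration in Python; the population parameters are our own choices, matching the simulation described next):

```python
# Sample size for a 3% margin of error, and a check of the 95% coverage claim.
import random
from math import sqrt

margin = 0.03
n_required = (1 / margin) ** 2       # from 1/sqrt(n) <= .03, using pq <= 1/4
print(n_required)                    # 1111.1..., consistent with n >= 1111

p, n, hits, trials = 0.54, 1200, 0, 1000
for _ in range(trials):
    pbar = sum(random.random() < p for _ in range(n)) / n
    half = 2 * sqrt(pbar * (1 - pbar) / n)    # half-width of the interval
    hits += (pbar - half < p < pbar + half)
print(hits / trials)                 # typically about .95
```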
In the accompanying figure, we show the results of simulating the polling process. The population is of size 100,000, and for the population, $p = .54$. The sample size was chosen to be 1200. The spike graph shows the distribution of $\bar p$ for 10,000 randomly chosen samples. For this simulation, the program kept track of the number of samples for which $\bar p$ was within 3% of .54. This number was 9648, which is close to 95% of the number of samples used.
Another way to see what the idea of confidence intervals means is shown in the accompanying figure, in which we show 100 confidence intervals, obtained by computing $\bar p$ for 100 different samples of size 1200 from the same population as before. The reader can see that most of these confidence intervals (96, to be exact) contain the true value of $p$.
The Gallup Poll has used these polling techniques in every Presidential election since 1936 (and in innumerable other elections as well). Table $1$ shows the results of their efforts. The reader will note that most of the approximations to $p$ are within 3% of the actual value of $p$. The sample sizes for these polls were typically around 1500. (In the table, both the predicted and actual percentages for the winning candidate refer to the percentage of the vote among the "major" political parties. In most elections, there were two major parties, but in several elections, there were three.)
Table $1$: Gallup Poll accuracy record.
Year   Winning Candidate   Gallup Final Survey   Election Result   Deviation
1936 Roosevelt 55.7% 62.5% 6.8%
1940 Roosevelt 52.0% 55.0% 3.0%
1944 Roosevelt 51.5% 53.3% 1.8%
1948 Truman 44.5% 49.9% 5.4%
1952 Eisenhower 51.0% 55.4% 4.4%
1956 Eisenhower 59.5% 57.8% 1.7%
1960 Kennedy 51.0% 50.1% 0.9%
1964 Johnson 64.0% 61.3% 2.7%
1968 Nixon 43.0% 43.5% 0.5%
1972 Nixon 62.0% 61.8% 0.2%
1976 Carter 48.0% 50.0% 2.0%
1980 Reagan 47.0% 50.8% 3.8%
1984 Reagan 59.0% 59.1% 0.1%
1988 Bush 56.0% 53.9% 2.1%
1992 Clinton 49.0% 43.2% 5.8%
1996 Clinton 52.0% 50.1% 1.9%
This technique also plays an important role in the evaluation of the effectiveness of drugs in the medical profession. For example, it is sometimes desired to know what proportion of patients will be helped by a new drug. This proportion can be estimated by giving the drug to a subset of the patients, and determining the proportion of this sample who are helped by the drug.
Historical Remarks
The Central Limit Theorem for Bernoulli trials was first proved by Abraham de Moivre and appeared in his book, The Doctrine of Chances, first published in 1718.2
De Moivre spent his years from age 18 to 21 in prison in France because of his Protestant background. When he was released he left France for England, where he worked as a tutor to the sons of noblemen. Newton had presented a copy of his Principia Mathematica to the Earl of Devonshire. The story goes that, while de Moivre was tutoring at the Earl's house, he came upon Newton's work and found that it was beyond him. It is said that he then bought a copy of his own and tore it into separate pages, learning it page by page as he walked around London to his tutoring jobs. De Moivre frequented the coffeehouses in London, where he started his probability work by calculating odds for gamblers. He also met Newton at such a coffeehouse and they became fast friends. De Moivre dedicated his book to Newton.
The Doctrine of Chances provides the techniques for solving a wide variety of gambling problems. In the midst of these gambling problems de Moivre rather modestly introduces his proof of the Central Limit Theorem, writing
A Method of approximating the Sum of the Terms of the Binomial $(a + b)^n$ expanded into a Series, from whence are deduced some practical Rules to estimate the Degree of Assent which is to be given to Experiments.3
De Moivre’s proof used the approximation to factorials that we now call Stirling’s formula. De Moivre states that he had obtained this formula before Stirling but without determining the exact value of the constant $\sqrt{2\pi}$. While he says it is not really necessary to know this exact value, he concedes that knowing it “has spread a singular Elegancy on the Solution."
The complete proof and an interesting discussion of the life of de Moivre can be found in the book by F. N. David.4
Exercises
Exercise $1$:
Let $S_{100}$ be the number of heads that turn up in 100 tosses of a fair coin. Use the Central Limit Theorem to estimate
1. $P(S_{100} \leq 45)$.
2. $P(45 < S_{100} < 55)$.
3. $P(S_{100} > 63)$.
4. $P(S_{100} < 57)$.
Exercise $2$:
Let $S_{200}$ be the number of heads that turn up in 200 tosses of a fair coin. Estimate
1. $P(S_{200} = 100)$.
2. $P(S_{200} = 90)$.
3. $P(S_{200} = 80)$.
Exercise $3$:
A true-false examination has 48 questions. June has probability 3/4 of answering a question correctly. April just guesses on each question. A passing score is 30 or more correct answers. Compare the probability that June passes the exam with the probability that April passes it.
Exercise $4$:
Let $S$ be the number of heads in 1,000,000 tosses of a fair coin. Use (a) Chebyshev’s inequality, and (b) the Central Limit Theorem, to estimate the probability that $S$ lies between 499,500 and 500,500. Use the same two methods to estimate the probability that $S$ lies between 499,000 and 501,000, and the probability that $S$ lies between 498,500 and 501,500.
Exercise $5$:
A rookie is brought to a baseball club on the assumption that he will have a .300 batting average. (Batting average is the ratio of the number of hits to the number of times at bat.) In the first year, he comes to bat 300 times and his batting average is .267. Assume that his at bats can be considered Bernoulli trials with probability .3 for success. Could such a low average be considered just bad luck or should he be sent back to the minor leagues? Comment on the assumption of Bernoulli trials in this situation.
Exercise $6$:
Once upon a time, there were two railway trains competing for the passenger traffic of 1000 people leaving from Chicago at the same hour and going to Los Angeles. Assume that passengers are equally likely to choose each train. How many seats must a train have to assure a probability of .99 or better of having a seat for each passenger?
Exercise $7$:
Dartmouth admits 1750 students. What is the probability of too many acceptances?
Exercise $8$:
A club serves dinner to members only. They are seated at 12-seat tables. The manager observes over a long period of time that 95 percent of the time there are between six and nine full tables of members, and the remainder of the time the numbers are equally likely to fall above or below this range. Assume that each member decides to come with a given probability $p$, and that the decisions are independent. How many members are there? What is $p$?
Exercise $9$:
Let $S_n$ be the number of successes in $n$ Bernoulli trials with probability .8 for success on each trial. Let $A_n = S_n/n$ be the average number of successes. In each case give the value for the limit, and give a reason for your answer.
1. $\lim_{n \to \infty} P(A_n = .8)$.
2. $\lim_{n \to \infty} P(.7n < S_n < .9n)$.
3. $\lim_{n \to \infty} P(S_n < .8n + .8\sqrt n)$.
4. $\lim_{n \to \infty} P(.79 < A_n < .81)$.
Exercise $10$:
Find the probability that among 10,000 random digits the digit 3 appears not more than 931 times.
Exercise $11$:
Write a computer program to simulate 10,000 Bernoulli trials with probability .3 for success on each trial. Have the program compute the 95 percent confidence interval for the probability of success based on the proportion of successes. Repeat the experiment 100 times and see how many times the true value of .3 is included within the confidence limits.
Exercise $12$:
A balanced coin is flipped 400 times. Determine the number $x$ such that the probability that the number of heads is between $200 - x$ and $200 + x$ is approximately .80.
Exercise $13$:
A noodle machine in Spumoni’s spaghetti factory makes about 5 percent defective noodles even when properly adjusted. The noodles are then packed in crates containing 1900 noodles each. A crate is examined and found to contain 115 defective noodles. What is the approximate probability of finding at least this many defective noodles if the machine is properly adjusted?
Exercise $14$:
A restaurant feeds 400 customers per day. On the average 20 percent of the customers order apple pie.
1. Give a range (called a 95 percent confidence interval) for the number of pieces of apple pie ordered on a given day such that you can be 95 percent sure that the actual number will fall in this range.
2. How many customers must the restaurant have, on the average, to be at least 95 percent sure that the number of customers ordering pie on that day falls in the 19 to 21 percent range?
Exercise $15$:
Recall that if $X$ is a random variable, the cumulative distribution function of $X$ is the function $F(x)$ defined by $F(x) = P(X \leq x)\ .$
1. Let $S_n$ be the number of successes in $n$ Bernoulli trials with probability $p$ for success. Write a program to plot the cumulative distribution for $S_n$.
2. Modify your program in (a) to plot the cumulative distribution $F_n^*(x)$ of the standardized random variable $S_n^* = \frac {S_n - np}{\sqrt{npq}}\ .$
3. Define the normal distribution function $N(x)$ to be the area under the normal curve up to the value $x$. Modify your program in (b) to plot the normal distribution as well, and compare it with the cumulative distribution of $S_n^*$. Do this for $n = 10, 50$, and $100$.
Exercise $16$:
In Example 3.12, we were interested in testing the hypothesis that a new form of aspirin is effective 80 percent of the time rather than the 60 percent of the time as reported for standard aspirin. The new aspirin is given to $n$ people. If it is effective in $m$ or more cases, we accept the claim that the new drug is effective 80 percent of the time and if not we reject the claim. Using the Central Limit Theorem, show that you can choose the number of trials $n$ and the critical value $m$ so that the probability that we reject the hypothesis when it is true is less than .01 and the probability that we accept it when it is false is also less than .01. Find the smallest value of $n$ that will suffice for this.
Exercise $17$:
In an opinion poll it is assumed that an unknown proportion $p$ of the people are in favor of a proposed new law and a proportion $1-p$ are against it. A sample of $n$ people is taken to obtain their opinion. The proportion ${\bar p}$ in favor in the sample is taken as an estimate of $p$. Using the Central Limit Theorem, determine how large a sample will ensure that the estimate will, with probability .95, be correct to within .01.
Exercise $18$:
A description of a poll in a certain newspaper says that one can be 95% confident that error due to sampling will be no more than plus or minus 3 percentage points. A poll in the New York Times taken in Iowa says that “according to statistical theory, in 19 out of 20 cases the results based on such samples will differ by no more than 3 percentage points in either direction from what would have been obtained by interviewing all adult Iowans." These are both attempts to explain the concept of confidence intervals. Do both statements say the same thing? If not, which do you think is the more accurate description? | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/09%3A_Central_Limit_Theorem/9.01%3A_Central_Limit_Theorem_for_Bernoulli_Trials.txt |
We have illustrated the Central Limit Theorem in the case of Bernoulli trials, but this theorem applies to a much more general class of chance processes. In particular, it applies to any independent trials process such that the individual trials have finite variance. For such a process, both the normal approximation for individual terms and the Central Limit Theorem are valid.
Let $S_n = X_1 + X_2 +\cdots+ X_n$ be the sum of $n$ independent discrete random variables of an independent trials process with common distribution function $m(x)$ defined on the integers, with mean $\mu$ and variance $\sigma^2$. As with Bernoulli trials, the spike graphs of these sums drift off to the right and flatten out as $n$ increases; we can prevent this just as we did for Bernoulli trials.
Standardized Sums
Consider the standardized random variable $S_n^* = \frac {S_n - n\mu}{\sqrt{n\sigma^2}}\ .$
This standardizes $S_n$ to have expected value 0 and variance 1. If $S_n = j$, then $S_n^*$ has the value $x_j$ with $x_j = \frac {j - n\mu}{\sqrt{n\sigma^2}}\ .$ We can construct a spike graph just as we did for Bernoulli trials. Each spike is centered at some $x_j$. The distance between successive spikes is $b = \frac 1{\sqrt{n\sigma^2}}\ ,$ and the height of the spike is $h = \sqrt{n\sigma^2} P(S_n = j)\ .$
The case of Bernoulli trials is the special case for which $X_j = 1$ if the $j$th outcome is a success and 0 otherwise; then $\mu = p$ and $\sigma^2 = pq$.
We now illustrate this process for two different discrete distributions. The first is the distribution $m$, given by $m = \pmatrix{ 1 & 2 & 3 & 4 & 5 \cr .2 & .2 & .2 & .2 & .2\cr}\ .$
In Figure $1$ we show the standardized sums for this distribution for the cases $n = 2$ and $n = 10$. Even for $n = 2$ the approximation is surprisingly good.
For our second discrete distribution, we choose $m = \pmatrix{ 1 & 2 & 3 & 4 & 5 \cr .4 & .3 & .1 & .1 & .1\cr}\ .$
This distribution is quite asymmetric and the approximation is not very good for $n = 3$, but by $n = 10$ we again have an excellent approximation (see Figure 9.1.5).
Approximation Theorem
As in the case of Bernoulli trials, these graphs suggest the following approximation theorem for the individual probabilities.
Theorem $1$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process and let $S_n = X_1 + X_2 +\cdots+ X_n$. Assume that the greatest common divisor of the differences of all the values that the $X_j$ can take on is 1. Let $E(X_j) = \mu$ and $V(X_j) = \sigma^2$. Then for $n$ large,
$P(S_n = j) \sim \frac {\phi(x_j)}{\sqrt{n\sigma^2}}\ ,$
where $x_j = (j - n\mu)/\sqrt{n\sigma^2}$, and $\phi(x)$ is the standard normal density.
The program CLTIndTrialsLocal implements this approximation. When we run this program for 6 rolls of a die, and ask for the probability that the sum of the rolls equals 21, we obtain an actual value of .09285, and a normal approximation value of .09537. If we run this program for 24 rolls of a die, and ask for the probability that the sum of the rolls is 72, we obtain an actual value of .01724 and a normal approximation value of .01705. These results show that the normal approximations are quite good.
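These comparisons are easy to reproduce. The sketch below (our own code, not the book's CLTIndTrialsLocal) builds the exact distribution of the sum of die rolls by repeated convolution:

```python
# Compare exact P(S_n = j) for die rolls with phi(x_j)/sqrt(n*sigma^2).
from math import exp, sqrt, pi

def die_sum_dist(n):
    """dist[s] = P(the sum of n fair die rolls equals s)."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for s, pr in dist.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0.0) + pr / 6
        dist = new
    return dist

def local_approx(n, j, mu=3.5, var=35/12):
    x = (j - n * mu) / sqrt(n * var)
    return exp(-x * x / 2) / sqrt(2 * pi * n * var)

print(die_sum_dist(6)[21], local_approx(6, 21))     # .09285 vs .09537
print(die_sum_dist(24)[72], local_approx(24, 72))   # .01724 vs .01705
```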
Central Limit Theorem for a Discrete Independent Trials Process
The Central Limit Theorem for a discrete independent trials process is as follows.
Theorem $2$
(Central Limit Theorem for a Discrete Independent Trials Process) Let $S_n = X_1 + X_2 +\cdots+ X_n$ be the sum of $n$ discrete independent random variables with common distribution having expected value $\mu$ and variance $\sigma^2$. Then, for $a < b$,
$\lim_{n \to \infty} P\left( a < \frac {S_n - n\mu}{\sqrt{n\sigma^2}} < b\right) = \frac 1{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx\ .$
Here we consider several examples.
Examples
Example $5$:
A die is rolled 420 times. What is the probability that the sum of the rolls lies between 1400 and 1550?
The sum is a random variable
$S_{420} = X_1 + X_2 +\cdots+ X_{420}\ ,$
where each $X_j$ has distribution
$m_X = \pmatrix{ 1 & 2 & 3 & 4 & 5 & 6 \cr 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \cr}$
We have seen that $\mu = E(X) = 7/2$ and $\sigma^2 = V(X) = 35/12$. Thus, $E(S_{420}) = 420 \cdot 7/2 = 1470$, $\sigma^2(S_{420}) = 420 \cdot 35/12 = 1225$, and $\sigma(S_{420}) = 35$. Therefore,
\begin{aligned} P(1400 \leq S_{420} \leq 1550) &\approx P\left(\frac {1399.5 - 1470}{35} \leq S_{420}^* \leq \frac {1550.5 - 1470}{35} \right) \\ &= P(-2.01 \leq S_{420}^* \leq 2.30) \\ &\approx \mbox{NA}(-2.01, 2.30) = .9670\ . \end{aligned}
We note that the program CLTIndTrialsGlobal could be used to calculate these probabilities.
Example $6$:
A student's grade point average is the average of his grades in 30 courses. The grades are based on 100 possible points and are recorded as integers. Assume that, in each course, the instructor makes an error in grading of $k$ with probability $p/|k|$, where $k = \pm1$, $\pm2$, $\pm3$, $\pm4$, $\pm5$. The probability of no error is then $1 - (137/30)p$. (The parameter $p$ represents the inaccuracy of the instructor's grading.) Thus, in each course, there are two grades for the student, namely the "correct" grade and the recorded grade. So there are two average grades for the student, namely the average of the correct grades and the average of the recorded grades.
We wish to estimate the probability that these two average grades differ by less than .05 for a given student. We now assume that $p = 1/20$. We also assume that the total error is the sum $S_{30}$ of 30 independent random variables each with distribution
$m_X = \pmatrix{ -5 & -4 & -3 & -2 & -1 & 0 & 1 & 2 & 3 & 4 & 5 \cr \frac{1}{100} & \frac{1}{80} & \frac{1}{60} & \frac{1}{40} & \frac{1}{20} & \frac{463}{600} & \frac{1}{20} & \frac{1}{40} & \frac{1}{60} & \frac{1}{80} & \frac{1}{100} \cr}\ .$
One can easily calculate that $E(X) = 0$ and $\sigma^2(X) = 1.5$. Then we have
\begin{aligned} P\left(-.05 \leq \frac {S_{30}}{30} \leq .05 \right) &= P(-1.5 \leq S_{30} \leq 1.5) \\ &= P\left( \frac{-1.5}{\sqrt{30\cdot1.5}} \leq S_{30}^* \leq \frac{1.5}{\sqrt{30\cdot1.5}} \right) \\ &= P(-.224 \leq S_{30}^* \leq .224) \\ &\approx \mbox{NA}(-.224, .224) = .1772\ . \end{aligned}
This means that there is only a 17.7% chance that a given student’s grade point average is accurate to within .05. (Thus, for example, if two candidates for valedictorian have recorded averages of 97.1 and 97.2, there is an appreciable probability that their correct averages are in the reverse order.) For a further discussion of this example, see the article by R. M. Kozelka.5
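A numerical check of this calculation (a sketch under the same assumptions; the exact distribution of $S_{30}$ is again built by convolution):

```python
# Exact P(-1.5 <= S_30 <= 1.5) for the grading-error distribution above,
# compared with the normal approximation NA(-.224, .224).
from math import erf, sqrt

m = {0: 463 / 600}
for k in range(1, 6):
    m[k] = m[-k] = 1 / (20 * k)       # P(error = +-k) = p/|k| with p = 1/20

dist = {0: 1.0}
for _ in range(30):                    # convolute 30 independent errors
    new = {}
    for s, pr in dist.items():
        for v, pv in m.items():
            new[s + v] = new.get(s + v, 0.0) + pr * pv
    dist = new

exact = sum(pr for s, pr in dist.items() if -1.5 <= s <= 1.5)
z = 1.5 / sqrt(30 * 1.5)
print(exact, erf(z / sqrt(2)))         # the approximation is ~ .1772
```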
A More General Central Limit Theorem
In Theorem $2$, the discrete random variables that were being summed were assumed to be independent and identically distributed. It turns out that the assumption of identical distributions can be substantially weakened. Much work has been done in this area, with an important contribution being made by J. W. Lindeberg. Lindeberg found a condition on the sequence $\{X_n\}$ which guarantees that the distribution of the sum $S_n$ is asymptotically normally distributed. Feller showed that Lindeberg's condition is necessary as well, in the sense that if the condition does not hold, then the sum $S_n$ is not asymptotically normally distributed. For a precise statement of Lindeberg's Theorem, we refer the reader to Feller.6 A sufficient condition that is stronger (but easier to state) than Lindeberg's condition, and is weaker than the condition in Theorem $2$, is given in the following theorem.
Theorem $3$
Let $X_1,\ X_2,\ \ldots,\ X_n,\ \ldots$ be a sequence of independent discrete random variables, and let $S_n = X_1 + X_2 +\cdots+ X_n$. For each $n$, denote the mean and variance of $X_n$ by $\mu_n$ and $\sigma^2_n$, respectively. Define the mean and variance of $S_n$ to be $m_n$ and $s_n^2$, respectively, and assume that $s_n \rightarrow \infty$. If there exists a constant $A$, such that $|X_n| \le A$ for all $n$, then for $a < b$,
$\lim_{n \to \infty} P\left( a < \frac {S_n - m_n}{s_n} < b\right) = \frac 1{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx\ .$
The condition that $|X_n| \le A$ for all $n$ is sometimes described by saying that the sequence $\{X_n\}$ is uniformly bounded. The condition that $s_n \rightarrow \infty$ is necessary (see Exercise $15$).
We illustrate this theorem by generating a sequence of $n$ random distributions on the interval $[a, b]$. We then convolute these distributions to find the distribution of the sum of $n$ independent experiments governed by these distributions. Finally, we standardize the distribution for the sum to have mean 0 and standard deviation 1 and compare it with the normal density. The program CLTGeneral carries out this procedure.
In Figure $2$ we show the result of running this program for $[a, b] = [-2, 4]$, and $n = 1,\ 4,$ and 10. We see that our first random distribution is quite asymmetric. By the time we choose the sum of ten such experiments we have a very good fit to the normal curve.
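A minimal sketch of this procedure in Python (our own code, not the book's CLTGeneral; it prints a few height-corrected spikes next to the corresponding values of $\phi$):

```python
# Convolute n random distributions on the integer points of [-2, 4],
# then standardize and compare spike heights with the normal density.
import random
from math import exp, sqrt, pi

def random_dist(points):
    """A random probability distribution on the given points."""
    w = [random.random() for _ in points]
    total_w = sum(w)
    return {x: wi / total_w for x, wi in zip(points, w)}

def convolve(d1, d2):
    out = {}
    for x, px in d1.items():
        for y, py in d2.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

points = list(range(-2, 5))
dists = [random_dist(points) for _ in range(10)]

total = {0: 1.0}
for d in dists:
    total = convolve(total, d)

mu = sum(x * p for d in dists for x, p in d.items())
var = sum(sum(x * x * p for x, p in d.items())
          - sum(x * p for x, p in d.items()) ** 2 for d in dists)

for s in range(round(mu) - 3, round(mu) + 4):      # a few central spikes
    x = (s - mu) / sqrt(var)
    height = sqrt(var) * total.get(s, 0.0)          # height-corrected spike
    print(s, round(height, 4), round(exp(-x * x / 2) / sqrt(2 * pi), 4))
```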
The above theorem essentially says that anything that can be thought of as being made up as the sum of many small independent pieces is approximately normally distributed. This brings us to one of the most important questions that was asked about genetics in the 1800’s.
The Normal Distribution and Genetics
When one looks at the distribution of heights of adults of one sex in a given population, one cannot help but notice that this distribution looks like the normal distribution. An example of this is shown in Figure $3$. This figure shows the distribution of heights of 9593 women between the ages of 21 and 74. These data come from the Health and Nutrition Examination Survey I (HANES I). For this survey, a sample of the U.S. civilian population was chosen. The survey was carried out between 1971 and 1974.
A natural question to ask is “How does this come about?". Francis Galton, an English scientist in the 19th century, studied this question, and other related questions, and constructed probability models that were of great importance in explaining the genetic effects on such attributes as height. In fact, one of the most important ideas in statistics, the idea of regression to the mean, was invented by Galton in his attempts to understand these genetic effects.
Galton was faced with an apparent contradiction. On the one hand, he knew that the normal distribution arises in situations in which many small independent effects are being summed. On the other hand, he also knew that many quantitative attributes, such as height, are strongly influenced by genetic factors: tall parents tend to have tall offspring. Thus in this case, there seem to be two large effects, namely the parents. Galton was certainly aware of the fact that non-genetic factors played a role in determining the height of an individual. Nevertheless, unless these non-genetic factors overwhelm the genetic ones, thereby refuting the hypothesis that heredity is important in determining height, it did not seem possible for sets of parents of given heights to have offspring whose heights were normally distributed.
One can express the above problem symbolically as follows. Suppose that we choose two specific positive real numbers $x$ and $y$, and then find all pairs of parents one of whom is $x$ units tall and the other of whom is $y$ units tall. We then look at all of the offspring of these pairs of parents. One can postulate the existence of a function $f(x, y)$ which denotes the genetic effect of the parents' heights on the heights of the offspring. One can then let $W$ denote the effects of the non-genetic factors on the heights of the offspring. Then, for a given set of heights $\{x, y\}$, the random variable which represents the heights of the offspring is given by $H = f(x, y) + W\ ,$ where $f$ is a deterministic function, i.e., it gives one output for a pair of inputs $\{x, y\}$. If we assume that the effect of $f$ is large in comparison with the effect of $W$, then the variance of $W$ is small. But since $f$ is deterministic, the variance of $H$ equals the variance of $W$, so the variance of $H$ is small. However, Galton observed from his data that the variance of the heights of the offspring of a given pair of parent heights is not small. This would seem to imply that inheritance plays a small role in the determination of the height of an individual. Later in this section, we will describe the way in which Galton got around this problem.
We will now consider the modern explanation of why certain traits, such as heights, are approximately normally distributed. In order to do so, we need to introduce some terminology from the field of genetics. The cells in a living organism that are not directly involved in the transmission of genetic material to offspring are called somatic cells, and the remaining cells are called germ cells. Organisms of a given species have their genetic information encoded in sets of physical entities, called chromosomes. The chromosomes are paired in each somatic cell. For example, human beings have 23 pairs of chromosomes in each somatic cell. The sex cells contain one chromosome from each pair. In sexual reproduction, two sex cells, one from each parent, contribute their chromosomes to create the set of chromosomes for the offspring.
Chromosomes contain many subunits, called genes. Genes consist of molecules of DNA, and one gene has, encoded in its DNA, information that leads to the regulation of proteins. In the present context, we will consider those genes containing information that has an effect on some physical trait, such as height, of the organism. The pairing of the chromosomes gives rise to a pairing of the genes on the chromosomes.
In a given species, each gene can be any one of several forms. These various forms are called alleles. One should think of the different alleles as potentially producing different effects on the physical trait in question. Of the two alleles that are found in a given gene pair in an organism, one of the alleles came from one parent and the other allele came from the other parent. The possible types of pairs of alleles (without regard to order) are called genotypes.
If we assume that the height of a human being is largely controlled by a specific gene, then we are faced with the same difficulty that Galton was. We are assuming that each parent has a pair of alleles which largely controls their heights. Since each parent contributes one allele of this gene pair to each of its offspring, there are four possible allele pairs for the offspring at this gene location. The assumption is that these pairs of alleles largely control the height of the offspring, and we are also assuming that genetic factors outweigh non-genetic factors. It follows that among the offspring we should see several modes in the height distribution of the offspring, one mode corresponding to each possible pair of alleles. This distribution does not correspond to the observed distribution of heights.
An alternative hypothesis, which does explain the observation of normally distributed heights in offspring of a given sex, is the multiple-gene hypothesis. Under this hypothesis, we assume that there are many genes that affect the height of an individual. These genes may differ in the amount of their effects. Thus, we can represent each gene pair by a random variable $X_i$, where the value of the random variable is the allele pair's effect on the height of the individual. Thus, for example, if each parent has two different alleles in the gene pair under consideration, then the offspring has one of four possible pairs of alleles at this gene location. Now the height of the offspring is a random variable, which can be expressed as $H = X_1 + X_2 + \cdots + X_n + W\ ,$ if there are $n$ genes that affect height. (Here, as before, the random variable $W$ denotes non-genetic effects.) Although $n$ is fixed, if it is fairly large, then Theorem $3$ implies that the sum $X_1 + X_2 + \cdots + X_n$ is approximately normally distributed. Now, if we assume that the $X_i$'s have a significantly larger cumulative effect than $W$ does, then $H$ is approximately normally distributed.
Another observed feature of the distribution of heights of adults of one sex in a population is that the variance does not seem to increase or decrease from one generation to the next. This was known at the time of Galton, and his attempts to explain this led him to the idea of regression to the mean. This idea will be discussed further in the historical remarks at the end of the section. (The reason that we only consider one sex is that human heights are clearly sex-linked, and in general, if we have two populations that are each normally distributed, then their union need not be normally distributed.)
Using the multiple-gene hypothesis, it is easy to explain why the variance should be constant from generation to generation. We begin by assuming that for a specific gene location, there are $k$ alleles, which we will denote by $A_1,\ A_2,\ \ldots,\ A_k$. We assume that the offspring are produced by random mating. By this we mean that given any offspring, it is equally likely that it came from any pair of parents in the preceding generation. There is another way to look at random mating that makes the calculations easier. We consider the set $S$ of all of the alleles (at the given gene location) in all of the germ cells of all of the individuals in the parent generation. In terms of the set $S$, by random mating we mean that each pair of alleles in $S$ is equally likely to reside in any particular offspring. (The reader might object to this way of thinking about random mating, as it allows two alleles from the same parent to end up in an offspring; but if the number of individuals in the parent population is large, then whether or not we allow this event does not affect the probabilities very much.)
For $1 \le i \le k$, we let $p_i$ denote the proportion of alleles in the parent population that are of type $A_i$. It is clear that this is the same as the proportion of alleles in the germ cells of the parent population, assuming that each parent produces roughly the same number of germ cells. Consider the distribution of alleles in the offspring. Since each germ cell is equally likely to be chosen for any particular offspring, the distribution of alleles in the offspring is the same as in the parents.
We next consider the distribution of genotypes in the two generations. We will prove the following fact: the distribution of genotypes in the offspring generation depends only upon the distribution of alleles in the parent generation (in particular, it does not depend upon the distribution of genotypes in the parent generation). Consider the possible genotypes; there are $k(k+1)/2$ of them. Under our assumptions, the genotype $A_iA_i$ will occur with frequency $p_i^2$, and the genotype $A_iA_j$, with $i \ne j$, will occur with frequency $2p_ip_j$. Thus, the frequencies of the genotypes depend only upon the allele frequencies in the parent generation, as claimed.
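As a quick check that these genotype frequencies form a probability distribution, note that
$\sum_{i=1}^k p_i^2 + \sum_{i < j} 2\,p_i p_j = \Bigl( \sum_{i=1}^k p_i \Bigr)^2 = 1\ .$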
This means that if we start with a certain generation, and a certain distribution of alleles, then in all generations after the one we started with, both the allele distribution and the genotype distribution will be fixed. This last statement is known as the Hardy-Weinberg Law.
We can describe the consequences of this law for the distribution of heights among adults of one sex in a population. We recall that the height of an offspring was given by a random variable $H$, where $H = X_1 + X_2 + \cdots + X_n + W\ ,$ with the $X_i$’s corresponding to the genes that affect height, and the random variable $W$ denoting non-genetic effects. The Hardy-Weinberg Law states that for each $X_i$, the distribution in the offspring generation is the same as the distribution in the parent generation. Thus, if we assume that the distribution of $W$ is roughly the same from generation to generation (or if we assume that its effects are small), then the distribution of $H$ is the same from generation to generation. (In fact, dietary effects are part of $W$, and it is clear that in many human populations, diets have changed quite a bit from one generation to the next in recent times. This change is thought to be one of the reasons that humans, on the average, are getting taller. It is also the case that the effects of $W$ are thought to be small relative to the genetic effects of the parents.)
Discussion
Generally speaking, the Central Limit Theorem contains more information than the Law of Large Numbers, because it gives us detailed information about the shape of the distribution of $S_n^*$; for large $n$ the shape is approximately the same as the shape of the standard normal density. More specifically, the Central Limit Theorem says that if we standardize and height-correct the distribution of $S_n$, then the normal density function is a very good approximation to this distribution when $n$ is large. Thus, we have a computable approximation for the distribution for $S_n$, which provides us with a powerful technique for generating answers for all sorts of questions about sums of independent random variables, even if the individual random variables have different distributions.
Historical Remarks
In the mid-1800’s, the Belgian mathematician Quetelet7 had shown empirically that the normal distribution occurred in real data, and had also given a method for fitting the normal curve to a given data set. Laplace8 had shown much earlier that the sum of many independent identically distributed random variables is approximately normal. Galton knew that certain physical traits in a population appeared to be approximately normally distributed, but he did not consider Laplace’s result to be a good explanation of how this distribution comes about. We give a quote from Galton that appears in the fascinating book by S. Stigler9 on the history of statistics:
First, let me point out a fact which Quetelet and all writers who have followed in his paths have unaccountably overlooked, and which has an intimate bearing on our work to-night. It is that, although characteristics of plants and animals conform to the law, the reason of their doing so is as yet totally unexplained. The essence of the law is that differences should be wholly due to the collective actions of a host of independent influences in various combinations...Now the processes of heredity...are not petty influences, but very important ones...The conclusion is...that the processes of heredity must work harmoniously with the law of deviation, and be themselves in some sense conformable to it.
Galton invented a device known as a quincunx (now commonly called a Galton board), which we used in Example 3.2.1 to show how to physically obtain a binomial distribution. Of course, the Central Limit Theorem says that for large values of the parameter $n$, the binomial distribution is approximately normal. Galton used the quincunx to explain how inheritance affects the distribution of a trait among offspring.
We consider, as Galton did, what happens if we interrupt, at some intermediate height, the progress of the shot that is falling in the quincunx. The reader is referred to the accompanying figure, which is a drawing of Karl Pearson,10 based upon Galton's notes. In this figure, the shot is being temporarily segregated into compartments at the line AB. (The line A$^{\prime}$B$^{\prime}$ forms a platform on which the shot can rest.) If the line AB is not too close to the top of the quincunx, then the shot will be approximately normally distributed at this line. Now suppose that one compartment is opened, as shown in the figure. The shot from that compartment will fall, forming a normal distribution at the bottom of the quincunx. If now all of the compartments are opened, all of the shot will fall, producing the same distribution as would occur if the shot were not temporarily stopped at the line AB. But the action of stopping the shot at the line AB, and then releasing the compartments one at a time, is just the same as convoluting two normal distributions. The normal distributions at the bottom, corresponding to each compartment at the line AB, are being mixed, with their weights being the number of shot in each compartment. On the other hand, it is already known that if the shot are unimpeded, the final distribution is approximately normal. Thus, this device shows that the convolution of two normal distributions is again normal.
Galton also considered the quincunx from another perspective. He segregated into seven groups, by weight, a set of 490 sweet pea seeds. He gave 10 seeds from each of the seven groups to each of seven friends, who grew the plants from the seeds. Galton found that each group produced seeds whose weights were normally distributed. (The sweet pea reproduces by self-pollination, so he did not need to consider the possibility of interaction between different groups.) In addition, he found that the variances of the weights of the offspring were the same for each group. This segregation into groups corresponds to the compartments at the line AB in the quincunx. Thus, the sweet peas were acting as though they were being governed by a convolution of normal distributions.
He now was faced with a problem. We have shown in Chapter 7, and Galton knew, that the convolution of two normal distributions produces a normal distribution with a larger variance than either of the original distributions. But his data on the sweet pea seeds showed that the variance of the offspring population was the same as the variance of the parent population. His answer to this problem was to postulate a mechanism that he called reversion, and that is now called regression to the mean. As Stigler puts it:11
The seven groups of progeny were normally distributed, but not about their parents’ weight. Rather they were in every case distributed about a value that was closer to the average population weight than was that of the parent. Furthermore, this reversion followed “the simplest possible law," that is, it was linear. The average deviation of the progeny from the population average was in the same direction as that of the parent, but only a third as great. The mean progeny reverted to type, and the increased variation was just sufficient to maintain the population variability.
Galton illustrated reversion with the diagram shown in Figure 9.63.12 The parent population is shown at the top of the figure, and the slanted lines are meant to correspond to the reversion effect. The offspring population is shown at the bottom of the figure.
Exercises
Exercise $1$:
A die is rolled 24 times. Use the Central Limit Theorem to estimate the probability that
1. the sum is greater than 84.
2. the sum is equal to 84.
Exercise $2$:
A random walker starts at 0 on the $x$-axis and at each time unit moves 1 step to the right or 1 step to the left with probability 1/2. Estimate the probability that, after 100 steps, the walker is more than 10 steps from the starting position.
Exercise $3$:
A piece of rope is made up of 100 strands. Assume that the breaking strength of the rope is the sum of the breaking strengths of the individual strands. Assume further that this sum may be considered to be the sum of an independent trials process with 100 experiments each having expected value of 10 pounds and standard deviation 1. Find the approximate probability that the rope will support a weight
1. of 1000 pounds.
2. of 970 pounds.
Exercise $4$:
Write a program to find the average of 1000 random digits 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. Have the program test to see if the average lies within three standard deviations of the expected value of 4.5. Modify the program so that it repeats this simulation 1000 times and keeps track of the number of times the test is passed. Does your outcome agree with the Central Limit Theorem?
Exercise $5$:
A die is thrown until the first time the total sum of the face values of the die is 700 or greater. Estimate the probability that, for this to happen,
1. more than 210 tosses are required.
2. less than 190 tosses are required.
3. between 180 and 210 tosses, inclusive, are required.
Exercise $6$:
A bank accepts rolls of pennies and gives 50 cents credit to a customer without counting the contents. Assume that a roll contains 49 pennies 30 percent of the time, 50 pennies 60 percent of the time, and 51 pennies 10 percent of the time.
1. Find the expected value and the variance for the amount that the bank loses on a typical roll.
2. Estimate the probability that the bank will lose more than 25 cents in 100 rolls.
3. Estimate the probability that the bank will lose exactly 25 cents in 100 rolls.
4. Estimate the probability that the bank will lose any money in 100 rolls.
5. How many rolls does the bank need to collect to have a 99 percent chance of a net loss?
Exercise $7$:
A surveying instrument makes an error of $-2$, $-1$, 0, 1, or 2 feet with equal probabilities when measuring the height of a 200-foot tower.
1. Find the expected value and the variance for the height obtained using this instrument once.
2. Estimate the probability that in 18 independent measurements of this tower, the average of the measurements is between 199 and 201, inclusive.
Exercise $8$:
For Example $9$ estimate $P(S_{30} = 0)$. That is, estimate the probability that the errors cancel out and the student’s grade point average is correct.
Exercise $9$:
Prove the Law of Large Numbers using the Central Limit Theorem.
Exercise $10$:
Peter and Paul match pennies 10,000 times. Describe briefly what each of the following theorems tells you about Peter’s fortune.
1. The Law of Large Numbers.
2. The Central Limit Theorem.
Exercise $11$:
A tourist in Las Vegas was attracted by a certain gambling game in which the customer stakes 1 dollar on each play; a win then pays the customer 2 dollars plus the return of her stake, although a loss costs her only her stake. Las Vegas insiders, and alert students of probability theory, know that the probability of winning at this game is 1/4. When driven from the tables by hunger, the tourist had played this game 240 times. Assuming that no near miracles happened, about how much poorer was the tourist upon leaving the casino? What is the probability that she lost no money?
Exercise $12$:
We have seen that, in playing roulette at Monte Carlo (Example 6.7), betting 1 dollar on red or 1 dollar on 17 amounts to choosing between the distributions $m_X = \pmatrix{ -1 & -1/2 & 1 \cr 18/37 & 1/37 & 18/37\cr }$ or $m_X = \pmatrix{ -1 & 35 \cr 36/37 & 1/37 \cr }$ You plan to choose one of these methods and use it to make 100 1-dollar bets. Using the Central Limit Theorem, estimate the probability of winning any money for each of the two games. Compare your estimates with the actual probabilities, which can be shown, from exact calculations, to equal .437 and .509 to three decimal places.
Exercise $13$:
In Example $9$ find the largest value of $p$ that gives probability .954 that the first decimal place is correct.
Exercise $14$:
It has been suggested that Example $9$ is unrealistic, in the sense that the probabilities of errors are too low. Make up your own (reasonable) estimate for the distribution $m(x)$, and determine the probability that a student’s grade point average is accurate to within .05. Also determine the probability that it is accurate to within .5.
Exercise $15$:
Find a sequence of uniformly bounded discrete independent random variables $\{X_n\}$ such that the variance of their sum does not tend to $\infty$ as $n \rightarrow \infty$, and such that their sum is not asymptotically normally distributed. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/09%3A_Central_Limit_Theorem/9.02%3A_Central_Limit_Theorem_for_Discrete_Independent_Trials.txt |
We have seen in the previous section that the distribution function for the sum of a large number $n$ of independent discrete random variables with mean $\mu$ and variance $\sigma^2$ tends to look like a normal density with mean $n\mu$ and variance $n\sigma^2$. What is remarkable about this result is that it holds for any distribution with finite mean and variance. We shall see in this section that the same result also holds true for continuous random variables having a common density function.
Let us begin by looking at some examples to see whether such a result is even plausible.
Standardized Sums
Example $1$
Suppose we choose $n$ random numbers from the interval $[0,1]$ with uniform density. Let $X_1$, $X_2$, …, $X_n$ denote these choices, and $S_n = X_1 + X_2 +\cdots+ X_n$ their sum.
We saw in Example 7.2.12 that the density function for $S_n$ tends to have a normal shape, but is centered at $n/2$ and is flattened out. In order to compare the shapes of these density functions for different values of $n$, we proceed as in the previous section: we standardize $S_n$ by defining $S_n^* = \frac {S_n - n\mu}{\sqrt n\, \sigma}\ .$ Then we see that for all $n$ we have \begin{aligned} E(S_n^*) &=& 0\ , \\ V(S_n^*) &=& 1\ .\end{aligned} The density function for $S_n^*$ is just a standardized version of the density function for $S_n$ (see Figure 9.7).
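To make this concrete, here is a minimal simulation sketch in Python (our own illustration; the function name standardized_uniform_sum is not from the text). It draws $n$ uniform random numbers, forms $S_n^*$, and checks empirically that the standardized sum has mean near 0 and variance near 1; a bar graph of the samples would show the normal shape.

```python
import math
import random

def standardized_uniform_sum(n):
    """One sample of S_n* for n uniform draws from [0, 1]."""
    mu = 0.5                        # mean of a single uniform draw
    sigma = math.sqrt(1.0 / 12.0)   # standard deviation of a single draw
    s_n = sum(random.random() for _ in range(n))
    return (s_n - n * mu) / (math.sqrt(n) * sigma)

# Empirical check that E(S_n*) is near 0 and V(S_n*) is near 1.
samples = [standardized_uniform_sum(10) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)
```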
Example $2$
Let us do the same thing, but now choose numbers from the interval $[0,+\infty)$ with an exponential density with parameter $\lambda$. Then (see Example 6.21)
\begin{aligned} \mu &=& E(X_j) = \frac 1\lambda\ , \\ \sigma^2 &=& V(X_j) = \frac 1{\lambda^2}\ .\end{aligned}
Here we know the density function for $S_n$ explicitly (see Section 7.2). We can use Corollary 5.1 to calculate the density function for $S_n^*$. We obtain
\begin{aligned} f_{S_n}(x) &=& \frac {\lambda e^{-\lambda x}(\lambda x)^{n - 1}}{(n - 1)!}\ , \\ f_{S_n^*}(x) &=& \frac {\sqrt n}\lambda\, f_{S_n} \left( \frac {\sqrt n\, x + n}\lambda \right)\ .\end{aligned} The graph of the density function for $S_n^*$ is shown in Figure 9.9.
These examples make it seem plausible that the density function for the normalized random variable $S_n^*$ for large $n$ will look very much like the normal density with mean 0 and variance 1 in the continuous case as well as in the discrete case. The Central Limit Theorem makes this statement precise.
Central Limit Theorem
Theorem $1$ (Central Limit Theorem)
Let $S_n = X_1 + X_2 +\cdots+ X_n$ be the sum of $n$ independent continuous random variables with common density function $p$ having expected value $\mu$ and variance $\sigma^2$. Let $S_n^* = (S_n - n\mu)/(\sqrt{n}\,\sigma)$. Then we have, for all $a < b$,
$\lim_{n \to \infty} P(a < S_n^* < b) = \frac 1{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx\ .$
We shall give a proof of this theorem in Section 10.3. We will now look at some examples.
Example $3$
Suppose a surveyor wants to measure a known distance, say of 1 mile, using a transit and some method of triangulation. He knows that because of possible motion of the transit, atmospheric distortions, and human error, any one measurement is apt to be slightly in error. He plans to make several measurements and take an average. He assumes that his measurements are independent random variables with a common distribution of mean $\mu = 1$ and standard deviation $\sigma = .0002$ (so, if the errors are approximately normally distributed, then his measurements are within 1 foot of the correct distance about 65% of the time). What can he say about the average?
He can say that if $n$ is large, the average $S_n/n$ has a density function that is approximately normal, with mean $\mu = 1$ mile, and standard deviation $\sigma = .0002/\sqrt n$ miles.
How many measurements should he make to be reasonably sure that his average lies within .0001 of the true value? The Chebyshev inequality says
$P\left(\left| \frac {S_n}n - \mu \right| \geq .0001 \right) \leq \frac {(.0002)^2}{n(10^{-8})} = \frac 4n\ ,$
so that we must have $n \ge 80$ before the probability that his error is less than .0001 exceeds .95.
We have already noticed that the estimate in the Chebyshev inequality is not always a good one, and here is a case in point. If we assume that $n$ is large enough so that the density for $S_n$ is approximately normal, then we have
\begin{aligned} P\left(\left| \frac {S_n}n - \mu \right| < .0001 \right) &=& P\bigl(-.5\sqrt{n} < S_n^* < +.5\sqrt{n}\bigr) \\ &\approx& \frac 1{\sqrt{2\pi}} \int_{-.5\sqrt{n}}^{+.5\sqrt{n}} e^{-x^2/2}\, dx\ ,\end{aligned}
and this last expression is greater than .95 if $.5\sqrt{n} \ge 2.$ This says that it suffices to take $n = 16$ measurements for the same results. This second calculation is stronger, but depends on the assumption that $n = 16$ is large enough to establish the normal density as a good approximation to $S_n^*$, and hence to $S_n$. The Central Limit Theorem here says nothing about how large $n$ has to be. In most cases involving sums of independent random variables, a good rule of thumb is that for $n \ge 30$, the approximation is a good one. In the present case, if we assume that the errors are approximately normally distributed, then the approximation is probably fairly good even for $n = 16$.
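For readers who want to reproduce these numbers, here is a short Python check (ours, not a program from the text): it evaluates the normal-approximation probability $P(|S_n^*| < .5\sqrt{n}\,) = \mbox{erf}(.5\sqrt{n}/\sqrt{2}\,)$ next to the Chebyshev lower bound $1 - 4/n$ for several values of $n$.

```python
import math

def normal_prob(n):
    """Normal approximation to P(|S_n/n - mu| < .0001): P(|Z| < .5*sqrt(n))."""
    return math.erf(0.5 * math.sqrt(n) / math.sqrt(2.0))

def chebyshev_bound(n):
    """Chebyshev's inequality guarantees at least 1 - 4/n for the same event."""
    return 1.0 - 4.0 / n

for n in (16, 36, 80):
    print(n, round(normal_prob(n), 4), round(chebyshev_bound(n), 4))
# n = 16 already gives about .9545 under the normal approximation,
# while Chebyshev needs n = 80 to guarantee .95.
```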
Estimating the Mean
Example $4$
(Continuation of Example $3$) Now suppose our surveyor is measuring an unknown distance with the same instruments under the same conditions. He takes 36 measurements and averages them. How sure can he be that his measurement lies within .0002 of the true value?
Again using the normal approximation, we get \begin{aligned} P\left(\left|\frac {S_n}n - \mu\right| < .0002 \right) &=& P\bigl(|S_n^*| < .5\sqrt n\bigr) \\ &\approx& \frac 2{\sqrt{2\pi}} \int_0^3 e^{-x^2/2}\, dx \\ &\approx& .997\ .\end{aligned}
This means that the surveyor can be 99.7 percent sure that his average is within .0002 of the true value. To improve his confidence, he can take more measurements, or require less accuracy, or improve the quality of his measurements (i.e., reduce the variance $\sigma^2$). In each case, the Central Limit Theorem gives quantitative information about the confidence of a measurement process, assuming always that the normal approximation is valid.
Now suppose the surveyor does not know the mean or standard deviation of his measurements, but assumes that they are independent. How should he proceed?
Again, he makes several measurements of a known distance and averages them. As before, the average error is approximately normally distributed, but now with unknown mean and variance.
Sample Mean
If he knows the standard deviation $\sigma$ of the error distribution is .0002, then he can estimate the mean $\mu$ by taking the average, or sample mean, of, say, 36 measurements:
$\bar \mu = \frac {x_1 + x_2 +\cdots+ x_n}n\ ,$
where $n = 36$. Then, as before, $E(\bar \mu) = \mu$. Moreover, the preceding argument shows that
$P(|\bar \mu - \mu| < .0002) \approx .997\ .$
The interval $(\bar \mu - .0002, \bar \mu + .0002)$ is called the 99.7 percent confidence interval for $\mu$ (see Example $1$).
Sample Variance
If he does not know the variance $\sigma^2$ of the error distribution, then he can estimate $\sigma^2$ by the sample variance: $\bar \sigma^2 = \frac {(x_1 - \bar \mu)^2 + (x_2 - \bar \mu)^2 +\cdots+ (x_n - \bar \mu)^2}n\ ,$ where $n = 36$. The Law of Large Numbers, applied to the random variables $(X_i - \bar \mu)^2$, says that for large $n$, the sample variance $\bar \sigma^2$ lies close to the variance $\sigma^2$, so that the surveyor can use $\bar \sigma^2$ in place of $\sigma^2$ in the argument above.
Experience has shown that, in most practical problems of this type, the sample variance is a good estimate for the variance, and can be used in place of the variance to determine confidence levels for the sample mean. This means that we can rely on the Law of Large Numbers for estimating the variance, and the Central Limit Theorem for estimating the mean.
We can check this in some special cases. Suppose we know that the error distribution is normal with unknown mean and variance. Then we can take a sample of $n$ measurements, find the sample mean $\bar \mu$ and sample variance $\bar \sigma^2$, and form $T_n^* = \frac {S_n - n\bar\mu}{\sqrt{n}\,\bar\sigma}\ ,$ where $n = 36$. We expect $T_n^*$ to be a good approximation for $S_n^*$ for large $n$.
$t$-Density
The statistician W. S. Gosset13 has shown that in this case $T_n^*$ has a density function that is not normal but rather a $t$-density with $n$ degrees of freedom. (The number $n$ of degrees of freedom is simply a parameter which tells us which $t$-density to use.) In this case we can use the $t$-density in place of the normal density to determine confidence levels for $\mu$. As $n$ increases, the $t$-density approaches the normal density. Indeed, even for $n = 8$ the $t$-density and normal density are practically the same (see Figure $3$).
Exercises
Notes on computer problems
(a) Simulation: Recall (see Corollary 5.2) that $X = F^{-1}(rnd)$ will simulate a random variable with density $f(x)$ and distribution $F(x) = \int_{-\infty}^x f(t)\, dt\ .$ In the case that $f(x)$ is a normal density function with mean $\mu$ and standard deviation $\sigma$, where neither $F$ nor $F^{-1}$ can be expressed in closed form, use instead
$X = \sigma\sqrt {-2\log(rnd)} \cos 2\pi(rnd) + \mu\ .$
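In Python, this recipe might look as follows (a minimal sketch of ours; rnd is modeled by random.random, and the function name normal_sample is our own):

```python
import math
import random

def rnd():
    """A uniform random number in [0, 1), standing in for the text's rnd."""
    return random.random()

def normal_sample(mu, sigma):
    """One draw from the normal density via the transformation quoted above
    (a variant of the Box-Muller method)."""
    return sigma * math.sqrt(-2.0 * math.log(rnd())) * math.cos(2.0 * math.pi * rnd()) + mu

samples = [normal_sample(0.0, 1.0) for _ in range(10000)]
print(sum(samples) / len(samples))   # should be near mu = 0
```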
(b) Bar graphs: you should aim for about 20 to 30 bars (of equal width) in your graph. You can achieve this by a good choice of the range $[x_{\rm min}, x_{\rm max}]$ and the number of bars (for instance, $[\mu - 3\sigma, \mu + 3\sigma]$ with 30 bars will work in many cases). Experiment!
Exercises $1$
Let $X$ be a continuous random variable with mean $\mu(X)$ and variance $\sigma^2(X)$, and let $X^* = (X - \mu)/\sigma$ be its standardized version. Verify directly that $\mu(X^*) = 0$ and $\sigma^2(X^*) = 1$.
Exercises $2$
Let $\{X_k\}$, $1 \leq k \leq n$, be a sequence of independent random variables, all with mean 0 and variance 1, and let $S_n$, $S_n^*$, and $A_n$ be their sum, standardized sum, and average, respectively. Verify directly that $S_n^* = S_n/\sqrt{n} = \sqrt{n} A_n$.
Exercises $3$
Let $\{X_k\}$, $1 \leq k \leq n$, be a sequence of random variables, all with mean $\mu$ and variance $\sigma^2$, and $Y_k = X_k^*$ be their standardized versions. Let $S_n$ and $T_n$ be the sum of the $X_k$ and $Y_k$, and $S_n^*$ and $T_n^*$ their standardized version. Show that $S_n^* = T_n^* = T_n/\sqrt{n}$.
Exercises $4$
Suppose we choose independently 25 numbers at random (uniform density) from the interval $[0,20]$. Write the normal densities that approximate the densities of their sum $S_{25}$, their standardized sum $S_{25}^*$, and their average $A_{25}$.
Exercises $5$
Write a program to choose independently 25 numbers at random from $[0,20]$, compute their sum $S_{25}$, and repeat this experiment 1000 times. Make a bar graph for the density of $S_{25}$ and compare it with the normal approximation of Exercise $4$. How good is the fit? Now do the same for the standardized sum $S_{25}^*$ and the average $A_{25}$.
Exercises $6$
In general, the Central Limit Theorem gives a better estimate than Chebyshev’s inequality for the average of a sum. To see this, let $A_{25}$ be the average calculated in Exercise $5$, and let $N$ be the normal approximation for $A_{25}$. Modify your program in Exercise $5$ to provide a table of the function $F(x) = P(|A_{25} - 10| \geq x) = {}$ fraction of the total of 1000 trials for which $|A_{25} - 10| \geq x$. Do the same for the function $f(x) = P(|N - 10| \geq x)$. (You can use the normal table, Table 9.1, or the procedure NormalArea for this.) Now plot on the same axes the graphs of $F(x)$, $f(x)$, and the Chebyshev function $g(x) = 4/(3x^2)$. How do $f(x)$ and $g(x)$ compare as estimates for $F(x)$?
Exercises $7$
The Central Limit Theorem says the sums of independent random variables tend to look normal, no matter what crazy distribution the individual variables have. Let us test this by a computer simulation. Choose independently 25 numbers from the interval $[0,1]$ with the probability density $f(x)$ given below, and compute their sum $S_{25}$. Repeat this experiment 1000 times, and make up a bar graph of the results. Now plot on the same graph the density $\phi(x) = \mbox {normal \,\,\,}(x,\mu(S_{25}),\sigma(S_{25}))$. How well does the normal density fit your bar graph in each case?
1. $f(x) = 1$.
2. $f(x) = 2x$.
3. $f(x) = 3x^2$.
4. $f(x) = 4|x - 1/2|$.
5. $f(x) = 2 - 4|x - 1/2|$.
Exercises $8$
Repeat the experiment described in Exercise $7$ but now choose the 25 numbers from $[0,\infty)$, using $f(x) = e^{-x}$.
Exercises $9$
How large must $n$ be before $S_n = X_1 + X_2 +\cdots+ X_n$ is approximately normal? This number is often surprisingly small. Let us explore this question with a computer simulation. Choose $n$ numbers from $[0,1]$ with probability density $f(x)$, where $n = 3$, 6, 12, 20, and $f(x)$ is each of the densities in Exercise $7$. Compute their sum $S_n$, repeat this experiment 1000 times, and make up a bar graph of 20 bars of the results. How large must $n$ be before you get a good fit?
Exercises $10$
A surveyor is measuring the height of a cliff known to be about 1000 feet. He assumes his instrument is properly calibrated and that his measurement errors are independent, with mean $\mu = 0$ and variance $\sigma^2 = 10$. He plans to take $n$ measurements and form the average. Estimate, using (a) Chebyshev’s inequality and (b) the normal approximation, how large $n$ should be if he wants to be 95 percent sure that his average falls within 1 foot of the true value. Now estimate, using (a) and (b), what value should $\sigma^2$ have if he wants to make only 10 measurements with the same confidence?
Exercises $11$
The price of one share of stock in the Pilsdorff Beer Company (see Exercise 8.2.12) is given by $Y_n$ on the $n$th day of the year. Finn observes that the differences $X_n = Y_{n + 1} - Y_n$ appear to be independent random variables with a common distribution having mean $\mu = 0$ and variance $\sigma^2 = 1/4$. If $Y_1 = 100$, estimate the probability that $Y_{365}$ is
1. ${} \geq 100$.
2. ${} \geq 110$.
3. ${} \geq 120$.
Exercises $12$
Test your conclusions in Exercise $11$ by computer simulation. First choose 364 numbers $X_i$ with density $f(x) = \mbox{normal}(x,0,1/4)$. Now form the sum $Y_{365} = 100 + X_1 + X_2 +\cdots+ X_{364}$, and repeat this experiment 200 times. Make up a bar graph on $[50,150]$ of the results, superimposing the graph of the approximating normal density. What does this graph say about your answers in Exercise $11$?
Exercises $13$
Physicists say that particles in a long tube are constantly moving back and forth along the tube, each with a velocity $V_k$ (in cm/sec) at any given moment that is normally distributed, with mean $\mu = 0$ and variance $\sigma^2 = 1$. Suppose there are $10^{20}$ particles in the tube.
1. Find the mean and variance of the average velocity of the particles.
2. What is the probability that the average velocity is ${} \geq 10^{-9}$ cm/sec?
Exercises $14$
An astronomer makes $n$ measurements of the distance between Jupiter and a particular one of its moons. Experience with the instruments used leads her to believe that for the proper units the measurements will be normally distributed with mean $d$, the true distance, and variance 16. She performs a series of $n$ measurements. Let $A_n = \frac {X_1 + X_2 +\cdots+ X_n}n$ be the average of these measurements.
1. Show that $P\left(A_n - \frac 8{\sqrt n} \leq d \leq A_n + \frac 8{\sqrt n}\right) \approx .95.$
2. When nine measurements were taken, the average of the distances turned out to be 23.2 units. Putting the observed values in (a) gives the for the unknown distance $d$. Compute this interval.
3. Why not say in (b) more simply that the probability is .95 that the value of $d$ lies in the computed confidence interval?
4. What changes would you make in the above procedure if you wanted to compute a 99 percent confidence interval?
Exercises $15$
Plot a bar graph similar to that in Figure 9.61 for the heights of the mid-parents in Galton’s data as given in Appendix B and compare this bar graph to the appropriate normal curve.
9.R: References
1. The Gallup Poll Monthly, November 1992, No. 326, p. 33. Supplemented with the help of Lydia K. Saab, The Gallup Organization.↩
2. A. de Moivre, The Doctrine of Chances, 3rd ed. (London: Millar, 1756).↩
3. ibid., p. 243.↩
4. F. N. David, Games, Gods and Gambling (London: Griffin, 1962).↩
5. R. M. Kozelka, “Grade-Point Averages and the Central Limit Theorem," American Mathematical Monthly, vol. 86 (Nov 1979), pp. 773-777.↩
6. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed. (New York: John Wiley & Sons, 1968), p. 254.↩
7. S. Stigler, The History of Statistics (Cambridge: Harvard University Press, 1986), p. 203.↩
8. ibid., p. 136↩
9. ibid., p. 281.↩
10. Karl Pearson, The Life, Letters and Labours of Francis Galton, vol. IIIB (Cambridge: Cambridge University Press, 1930), p. 466. Reprinted with permission.↩
11. ibid., p. 282.↩
12. Karl Pearson, The Life, Letters and Labours of Francis Galton, vol. IIIA (Cambridge: Cambridge University Press, 1930), p. 9. Reprinted with permission.↩
13. W. S. Gosset discovered the distribution we now call the $t$-distribution while working for the Guinness Brewery in Dublin. He wrote under the pseudonym “Student." The results discussed here first appeared in Student, “The Probable Error of a Mean," Biometrika, vol. 6 (1908), pp. 1-24.↩
So far we have considered in detail only the two most important attributes of a random variable, namely, the mean and the variance. We have seen how these attributes enter into the fundamental limit theorems of probability, as well as into all sorts of practical calculations. We have seen that the mean and variance of a random variable contain important information about the random variable, or, more precisely, about the distribution function of that variable. Now we shall see that the mean and variance do not contain all the available information about the density function of a random variable. To begin with, it is easy to give examples of different distribution functions which have the same mean and the same variance. For instance, suppose $X$ and $Y$ are random variables, with distributions
$p_X = \pmatrix{ 1 & 2 & 3 & 4 & 5 & 6\cr 0 & 1/4 & 1/2 & 0 & 0 & 1/4\cr},$ $p_Y = \pmatrix{ 1 & 2 & 3 & 4 & 5 & 6\cr 1/4 & 0 & 0 & 1/2 & 1/4 & 0\cr}.$
Then with these choices, we have $E(X) = E(Y) = 7/2$ and $V(X) = V(Y) = 9/4$, and yet certainly $p_X$ and $p_Y$ are quite different density functions.
This raises a question: If $X$ is a random variable with range $\{x_1, x_2, \ldots\}$ of at most countable size, and distribution function $p = p_X$, and if we know its mean $\mu = E(X)$ and its variance $\sigma^2 = V(X)$, then what else do we need to know to determine $p$ completely?
Moments
A nice answer to this question, at least in the case that $X$ has finite range, can be given in terms of the moments of $X$, which are numbers defined as follows: \begin{aligned} \mu_k &=& k\mbox{th}\,\,\mbox{moment~of}\,\, X \\ &=& E(X^k) \\ &=& \sum_{j = 1}^\infty (x_j)^k p(x_j)\ ,\end{aligned} provided the sum converges. Here $p(x_j) = P(X = x_j)$.
In terms of these moments, the mean $\mu$ and variance $\sigma^2$ of $X$ are given simply by
\begin{aligned} \mu &=& \mu_1\ , \\ \sigma^2 &=& \mu_2 - \mu_1^2\ ,\end{aligned} so that a knowledge of the first two moments of $X$ gives us its mean and variance. But a knowledge of all of the moments of $X$ does determine its distribution function $p$ completely, at least when $X$ has finite range.
Moment Generating Functions
To see how this comes about, we introduce a new variable $t$, and define a function $g(t)$ as follows:
\begin{aligned} g(t) &=& E(e^{tX}) \\ &=& E\left(\sum_{k = 0}^\infty \frac {X^k t^k}{k!} \right) \\ &=& \sum_{k = 0}^\infty \frac {\mu_k t^k}{k!} \\ &=& \sum_{j = 1}^\infty e^{tx_j} p(x_j)\ .\end{aligned} We call $g(t)$ the moment generating function for $X$, and think of it as a convenient bookkeeping device for describing the moments of $X$. Indeed, if we differentiate $g(t)$ $n$ times and then set $t = 0$, we get $\mu_n$:
\begin{aligned} \left. \frac {d^n}{dt^n} g(t) \right|_{t = 0} &=& g^{(n)}(0) \\ &=& \left. \sum_{k = n}^\infty \frac {k!\, \mu_k\, t^{k - n}} {(k - n)!\, k!} \right|_{t = 0} \\ &=& \mu_n\ .\end{aligned} It is easy to calculate the moment generating function for simple examples.
Examples
Example $1$:
Suppose $X$ has range $\{1,2,3,\ldots,n\}$ and $p_X(j) = 1/n$ for $1 \leq j \leq n$ (uniform distribution). Then
\begin{aligned} g(t) &=& \sum_{j = 1}^n \frac 1n e^{tj} \\ &=& \frac 1n (e^t + e^{2t} +\cdots+ e^{nt}) \\ &=& \frac {e^t (e^{nt} - 1)} {n (e^t - 1)}\ .\end{aligned} If we use the expression on the right-hand side of the second line above, then it is easy to see that
\begin{aligned} \mu_1 &=& g'(0) = \frac 1n (1 + 2 + 3 + \cdots + n) = \frac {n + 1}2\ , \\ \mu_2 &=& g''(0) = \frac 1n (1 + 4 + 9 + \cdots + n^2) = \frac {(n + 1)(2n + 1)}6\ ,\end{aligned} and that $\mu = \mu_1 = (n + 1)/2$ and $\sigma^2 = \mu_2 - \mu_1^2 = (n^2 - 1)/12$.
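As a sanity check, a few lines of sympy (our choice of tool; any computer algebra system would do) reproduce these moments for the case $n = 6$, a fair die:

```python
from sympy import symbols, exp, diff, simplify

t = symbols('t')
# g(t) for the uniform distribution on {1, ..., 6} (a fair die).
g = sum(exp(t * j) for j in range(1, 7)) / 6

mu1 = diff(g, t).subs(t, 0)      # 7/2, i.e. (n + 1)/2 with n = 6
mu2 = diff(g, t, 2).subs(t, 0)   # 91/6
print(mu1, simplify(mu2 - mu1**2))   # variance 35/12 = (n**2 - 1)/12
```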
Example $2$:
Suppose now that $X$ has range $\{0,1,2,3,\ldots,n\}$ and $p_X(j) = {n \choose j} p^j q^{n - j}$ for $0 \leq j \leq n$ (binomial distribution). Then \begin{aligned} g(t) &=& \sum_{j = 0}^n e^{tj} {n \choose j} p^j q^{n - j} \\ &=& \sum_{j = 0}^n {n \choose j} (pe^t)^j q^{n - j} \\ &=& (pe^t + q)^n\ .\end{aligned} Note that \begin{aligned} \mu_1 = g'(0) &=& \left. n(pe^t + q)^{n - 1}pe^t \right|_{t = 0} = np\ , \\ \mu_2 = g''(0) &=& n(n - 1)p^2 + np\ ,\end{aligned} so that $\mu = \mu_1 = np$, and $\sigma^2 = \mu_2 - \mu_1^2 = np(1 - p)$, as expected.
Example $3$:
Suppose $X$ has range $\{1,2,3,\ldots\}$ and $p_X(j) = q^{j - 1}p$ for all $j$ (geometric distribution). Then \begin{aligned} g(t) &=& \sum_{j = 1}^\infty e^{tj} q^{j - 1}p \\ &=& \frac {pe^t}{1 - qe^t}\ .\end{aligned} Here \begin{aligned} \mu_1 &=& g'(0) = \left. \frac {pe^t}{(1 - qe^t)^2} \right|_{t = 0} = \frac 1p\ , \\ \mu_2 &=& g''(0) = \left. \frac {pe^t + pqe^{2t}}{(1 - qe^t)^3} \right|_{t = 0} = \frac {1 + q}{p^2}\ ,\end{aligned} $\mu = \mu_1 = 1/p$, and $\sigma^2 = \mu_2 - \mu_1^2 = q/p^2$, as computed in Example 6.21.
Example $4$:
Let $X$ have range $\{0,1,2,3,\ldots\}$ and let $p_X(j) = e^{-\lambda}\lambda^j/j!$ for all $j$ (Poisson distribution with mean $\lambda$). Then \begin{aligned} g(t) &=& \sum_{j = 0}^\infty e^{tj} \frac {e^{-\lambda}\lambda^j}{j!} \\ &=& e^{-\lambda} \sum_{j = 0}^\infty \frac {(\lambda e^t)^j}{j!} \\ &=& e^{-\lambda} e^{\lambda e^t} = e^{\lambda(e^t - 1)}\ .\end{aligned} Then \begin{aligned} \mu_1 &=& g'(0) = \left. e^{\lambda(e^t - 1)}\lambda e^t \right|_{t = 0} = \lambda\ , \\ \mu_2 &=& g''(0) = \left. e^{\lambda(e^t - 1)} (\lambda^2 e^{2t} + \lambda e^t) \right|_{t = 0} = \lambda^2 + \lambda\ ,\end{aligned} $\mu = \mu_1 = \lambda$, and $\sigma^2 = \mu_2 - \mu_1^2 = \lambda$.
The variance of the Poisson distribution is easier to obtain in this way than directly from the definition (as was done in the exercises of Section 6.2).
Moment Problem
Using the moment generating function, we can now show, at least in the case of a discrete random variable with finite range, that its distribution function is completely determined by its moments.
Theorem $1$
Let $X$ be a discrete random variable with finite range $\{x_1,x_2,\ldots, x_n\}$, distribution function $p$, and moment generating function $g$. Then $g$ is uniquely determined by $p$, and conversely.
Proof
We know that $p$ determines $g$, since $g(t) = \sum_{j = 1}^n e^{tx_j} p(x_j)\ .$ Conversely, assume that $g(t)$ is known. We wish to determine the values of $x_j$ and $p(x_j)$, for $1 \le j \le n$. We assume, without loss of generality, that $p(x_j) > 0$ for $1 \le j \le n$, and that $x_1 < x_2 < \ldots < x_n\ .$ We note that $g(t)$ is differentiable for all $t$, since it is a finite linear combination of exponential functions. If we compute $g'(t)/g(t)$, we obtain $\frac{x_1 p(x_1) e^{tx_1} + \cdots + x_n p(x_n) e^{tx_n}}{p(x_1) e^{tx_1} + \cdots + p(x_n) e^{tx_n}}\ .$ Dividing both top and bottom by $e^{tx_n}$, we obtain the expression $\frac{x_1 p(x_1) e^{t(x_1 - x_n)} + \cdots + x_n p(x_n)}{p(x_1) e^{t(x_1 - x_n)} + \cdots + p(x_n)}\ .$ Since $x_n$ is the largest of the $x_j$’s, this expression approaches $x_n$ as $t$ goes to $\infty$. So we have shown that $x_n = \lim_{t \rightarrow \infty} {{g'(t)}\over{g(t)}}\ .$ To find $p(x_n)$, we simply divide $g(t)$ by $e^{tx_n}$ and let $t$ go to $\infty$. Once $x_n$ and $p(x_n)$ have been determined, we can subtract $p(x_n) e^{tx_n}$ from $g(t)$, and repeat the above procedure with the resulting function, obtaining, in turn, $x_{n-1}, \ldots, x_1$ and $p(x_{n-1}), \ldots, p(x_1)$.
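The proof is constructive, and the following Python sketch (with a made-up distribution of our own choosing) shows the recovery numerically: $g'(t)/g(t)$ approaches the largest value $x_n$, and $g(t)/e^{tx_n}$ approaches $p(x_n)$.

```python
import math

xs = [1, 2, 3, 6]          # hypothetical support, x_1 < ... < x_n
ps = [0.1, 0.4, 0.3, 0.2]  # hypothetical probabilities

def g(t):
    return sum(p * math.exp(t * x) for x, p in zip(xs, ps))

def g_prime(t):
    return sum(p * x * math.exp(t * x) for x, p in zip(xs, ps))

for t in (1.0, 5.0, 20.0):
    print(t, g_prime(t) / g(t))      # tends to x_n = 6
print(g(20.0) / math.exp(20.0 * 6))  # tends to p(x_n) = 0.2
```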
If we delete the hypothesis that $X$ have finite range in the above theorem, then the conclusion is no longer necessarily true.
Ordinary Generating Functions
In the special but important case where the $x_j$ are all nonnegative integers, $x_j = j$, we can prove this theorem in a simpler way.
In this case, we have $g(t) = \sum_{j = 0}^n e^{tj} p(j)\ ,$ and we see that $g(t)$ is a polynomial in $e^t$. If we write $z = e^t$, and define the function $h$ by $h(z) = \sum_{j = 0}^n z^j p(j)\ ,$ then $h(z)$ is a polynomial in $z$ containing the same information as $g(t)$, and in fact \begin{aligned} h(z) &=& g(\log z)\ , \\ g(t) &=& h(e^t)\ .\end{aligned} The function $h(z)$ is often called the ordinary generating function for $X$. Note that $h(1) = g(0) = 1$, $h'(1) = g'(0) = \mu_1$, and $h''(1) = g''(0) - g'(0) = \mu_2 - \mu_1$. It follows from all this that if we know $g(t)$, then we know $h(z)$, and if we know $h(z)$, then we can find the $p(j)$ by Taylor’s formula: \begin{aligned} p(j) &=& \mbox{coefficient~of}\,\, z^j \,\, \mbox{in}\,\, h(z) \\ &=& \frac{h^{(j)}(0)}{j!}\ .\end{aligned}
For example, suppose we know that the moments of a certain discrete random variable $X$ are given by \begin{aligned} \mu_0 &=& 1\ , \\ \mu_k &=& \frac12 + \frac{2^k}4\ , \qquad \mbox{for}\,\, k \geq 1\ .\end{aligned} Then the moment generating function $g$ of $X$ is \begin{aligned} g(t) &=& \sum_{k = 0}^\infty \frac{\mu_k t^k}{k!} \\ &=& 1 + \frac12 \sum_{k = 1}^\infty \frac{t^k}{k!} + \frac14 \sum_{k = 1}^\infty \frac{(2t)^k}{k!} \\ &=& \frac14 + \frac12 e^t + \frac14 e^{2t}\ .\end{aligned} This is a polynomial in $z = e^t$, and $h(z) = \frac14 + \frac12 z + \frac14 z^2\ .$ Hence, $X$ must have range $\{0,1,2\}$, and $p$ must have values $\{1/4,1/2,1/4\}$.
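A short symbolic check of this example (using sympy, our choice of tool) confirms that the $g(t)$ found above reproduces the given moments:

```python
from sympy import symbols, exp, diff, Rational

t = symbols('t')
g = Rational(1, 4) + exp(t) / 2 + exp(2 * t) / 4   # the g(t) found above

for k in range(1, 5):
    # Each moment should equal 1/2 + 2**k/4.
    print(k, diff(g, t, k).subs(t, 0), Rational(1, 2) + Rational(2**k, 4))
```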
Properties
Both the moment generating function $g$ and the ordinary generating function $h$ have many properties useful in the study of random variables, of which we can consider only a few here. In particular, if $X$ is any discrete random variable and $Y = X + a$, then \begin{aligned} g_Y(t) &=& E(e^{tY}) \\ &=& E(e^{t(X + a)}) \\ &=& e^{ta} E(e^{tX}) \\ &=& e^{ta} g_X(t)\ ,\end{aligned} while if $Y = bX$, then \begin{aligned} g_Y(t) &=& E(e^{tY}) \\ &=& E(e^{tbX}) \\ &=& g_X(bt)\ .\end{aligned} In particular, if $X^* = \frac{X - \mu}\sigma\ ,$ then (see Exercise $11$) $g_{X^*}(t) = e^{-\mu t/\sigma} g_X\left( \frac t\sigma \right)\ .$
If $X$ and $Y$ are independent random variables and $Z = X + Y$ is their sum, with $p_X$, $p_Y$, and $p_Z$ the associated distribution functions, then we have seen in Chapter 7 that $p_Z$ is the convolution of $p_X$ and $p_Y$, and we know that convolution involves a rather complicated calculation. But for the generating functions we have instead the simple relations \begin{aligned} g_Z(t) &=& g_X(t) g_Y(t)\ , \\ h_Z(z) &=& h_X(z) h_Y(z)\ ,\end{aligned} that is, $g_Z$ is simply the product of $g_X$ and $g_Y$, and similarly for $h_Z$.
To see this, first note that if $X$ and $Y$ are independent, then $e^{tX}$ and $e^{tY}$ are independent (see Exercise 5.2.38), and hence $E(e^{tX} e^{tY}) = E(e^{tX}) E(e^{tY})\ .$ It follows that \begin{aligned} g_Z(t) &=& E(e^{tZ}) = E(e^{t(X + Y)}) \\ &=& E(e^{tX}) E(e^{tY}) \\ &=& g_X(t) g_Y(t)\ ,\end{aligned} and, replacing $t$ by $\log z$, we also get $h_Z(z) = h_X(z) h_Y(z)\ .$
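In terms of coefficient lists, multiplying $h_X(z)$ by $h_Y(z)$ is exactly convolving the two distributions, as this small sketch (with hypothetical distributions of our own choosing) illustrates:

```python
def convolve(p, q):
    """Distribution of X + Y for independent X, Y on {0, 1, 2, ...}.
    This is precisely the coefficient list of h_X(z) * h_Y(z)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

p_X = [0.25, 0.5, 0.25]    # hypothetical distribution on {0, 1, 2}
p_Y = [0.5, 0.5]           # hypothetical distribution on {0, 1}
print(convolve(p_X, p_Y))  # [0.125, 0.375, 0.375, 0.125]
```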
Example $5$:
If $X$ and $Y$ are independent discrete random variables with range $\{0,1,2,\ldots,n\}$ and binomial distribution $p_X(j) = p_Y(j) = {n \choose j} p^j q^{n - j}\ ,$ and if $Z = X + Y$, then we know (cf. Section 7.1) that the range of $Z$ is $\{0,1,2,\ldots,2n\}$ and $Z$ has binomial distribution $p_Z(j) = (p_X * p_Y)(j) = {2n \choose j} p^j q^{2n - j}\ .$ Here we can easily verify this result by using generating functions. We know that \begin{aligned} g_X(t) = g_Y(t) &=& \sum_{j = 0}^n e^{tj} {n \choose j} p^j q^{n - j} \\ &=& (pe^t + q)^n\ ,\end{aligned} and $h_X(z) = h_Y(z) = (pz + q)^n\ .$ Hence, we have $g_Z(t) = g_X(t) g_Y(t) = (pe^t + q)^{2n}\ ,$ or, what is the same, \begin{aligned} h_Z(z) &=& h_X(z) h_Y(z) = (pz + q)^{2n} \\ &=& \sum_{j = 0}^{2n} {2n \choose j} (pz)^j q^{2n - j}\ ,\end{aligned} from which we can see that the coefficient of $z^j$ is just $p_Z(j) = {2n \choose j} p^j q^{2n - j}$.
Example $6$:
If $X$ and $Y$ are independent discrete random variables with the non-negative integers $\{0,1,2,3,\ldots\}$ as range, and with geometric distribution function $p_X(j) = p_Y(j) = q^j p\ ,$ then $g_X(t) = g_Y(t) = \frac p{1 - qe^t}\ ,$ and if $Z = X + Y$, then \begin{aligned} g_Z(t) &=& g_X(t) g_Y(t) \\ &=& \frac{p^2}{1 - 2qe^t + q^2 e^{2t}}\ .\end{aligned} If we replace $e^t$ by $z$, we get \begin{aligned} h_Z(z) &=& \frac{p^2}{(1 - qz)^2} \\ &=& p^2 \sum_{k = 0}^\infty (k + 1) q^k z^k\ ,\end{aligned} and we can read off the values of $p_Z(j)$ as the coefficient of $z^j$ in this expansion for $h(z)$, even though $h(z)$ is not a polynomial in this case. The distribution $p_Z$ is a negative binomial distribution (see Section 5.1).
Here is a more interesting example of the power and scope of the method of generating functions.
Heads or Tails
Example $7$:
In the coin-tossing game discussed in Example 1.3, we now consider the question “When is Peter first in the lead?”
Solution
Let $X_k$ describe the outcome of the $k$th trial in the game $X_k = \left \{ \matrix{ +1, &\mbox{if}\,\, k{\rm th}\,\, \mbox{toss~is~heads}, \cr -1, &\mbox{if}\,\, k{\rm th}\,\, \mbox{toss~is~tails.}\cr}\right.$ Then the $X_k$ are independent random variables describing a Bernoulli process. Let $S_0 = 0$, and, for $n \geq 1$, let $S_n = X_1 + X_2 + \cdots + X_n\ .$ Then $S_n$ describes Peter’s fortune after $n$ trials, and Peter is first in the lead after $n$ trials if $S_k \leq 0$ for $1 \leq k < n$ and $S_n = 1$.
Now this can happen when $n = 1$, in which case $S_1 = X_1 = 1$, or when $n > 1$, in which case $S_1 = X_1 = -1$. In the latter case, $S_k = 0$ for $k = n - 1$, and perhaps for other $k$ between 1 and $n$. Let $m$ be the least such value of $k$; then $S_m = 0$ and $S_k < 0$ for $1 \leq k < m$. In this case Peter loses on the first trial, regains his initial position in the next $m - 1$ trials, and gains the lead in the next $n - m$ trials.
Let $p$ be the probability that the coin comes up heads, and let $q = 1-p$. Let $r_n$ be the probability that Peter is first in the lead after $n$ trials. Then from the discussion above, we see that \begin{aligned} r_n &=& 0\ , \qquad \mbox{if}\,\, n\,\, \mbox{even}, \\ r_1 &=& p \qquad (= \mbox{probability~of~heads~in~a~single~toss)}, \\ r_n &=& q(r_1r_{n-2} + r_3r_{n-4} +\cdots+ r_{n-2}r_1)\ , \qquad \mbox{if}\ n > 1,\ n\ \mbox{odd}.\end{aligned} Now let $T$ describe the time (that is, the number of trials) required for Peter to take the lead. Then $T$ is a random variable, and since $P(T = n) = r_n$, $r$ is the distribution function for $T$.
We introduce the generating function $h_T(z)$ for $T$:
$h_T(z) = \sum_{n = 0}^\infty r_n z^n\ .$
Then, by using the relations above, we can verify the relation
$h_T(z) = pz + qz(h_T(z))^2\ .$
If we solve this quadratic equation for $h_T(z)$, we get $h_T(z) = \frac{1 \pm \sqrt{1 - 4pqz^2}}{2qz} = \frac{2pz}{1 \mp \sqrt{1 - 4pqz^2}}\ .$ Of these two solutions, we want the one that has a convergent power series in $z$ (i.e., that is finite for $z = 0$). Hence we choose $h_T(z) = \frac{1 - \sqrt{1 - 4pqz^2}}{2qz} = \frac{2pz}{1 + \sqrt{1 - 4pqz^2}}\ .$ Now we can ask: What is the probability that Peter is in the lead? This probability is given by (see Exercise $10$)
\begin{aligned} \sum_{n = 0}^\infty r_n &=& h_T(1) = \frac{1 - \sqrt{1 - 4pq}}{2q} \\ &=& \frac{1 - |p - q|}{2q} \\ &=& \left \{ \begin{array}{ll} p/q, & \mbox{if}\ p < q, \\ 1, & \mbox{if}\ p \geq q, \end{array}\right. \end{aligned} so that Peter is sure to be in the lead eventually if $p \geq q$.
How long will it take? That is, what is the expected value of $T$? This value is given by $E(T) = h_T'(1) = \left \{ \matrix { 1/(p - q), & \mbox{if}\,\, p > q, \cr \infty, & \mbox{if}\,\, p = q.\cr}\right.$ This says that if $p > q$, then Peter can expect to be in the lead by about $1/(p - q)$ trials, but if $p = q$, he can expect to wait a long time.
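A direct simulation (a sketch of ours, not part of the original example) confirms the formula $E(T) = 1/(p - q)$ when $p > q$:

```python
import random

def first_lead_time(p, max_steps=10**6):
    """Tosses until Peter's fortune S_n first reaches +1.
    For p > 1/2 this happens, with overwhelming probability,
    long before max_steps."""
    s = 0
    for n in range(1, max_steps + 1):
        s += 1 if random.random() < p else -1
        if s == 1:
            return n
    return max_steps   # essentially never reached when p > 1/2

p = 0.6
times = [first_lead_time(p) for _ in range(10000)]
print(sum(times) / len(times))   # close to 1/(p - q) = 1/0.2 = 5
```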
A related problem, known as the Gambler’s Ruin problem, is studied in Exercise 11.2.22 and in Section 12.2.
Exercises
Exercise $1$
Find the generating functions, both ordinary $h(z)$ and moment $g(t)$, for the following discrete probability distributions.
1. The distribution describing a fair coin.
2. The distribution describing a fair die.
3. The distribution describing a die that always comes up 3.
4. The uniform distribution on the set $\{n,n+1,n+2,\ldots,n+k\}$.
5. The binomial distribution on $\{n,n+1,n+2,\ldots,n+k\}$.
6. The geometric distribution on $\{0,1,2,\ldots\}$ with $p(j) = 2/3^{j + 1}$.
Exercise $2$
For each of the distributions (a) through (d) of Exercise $1$ calculate the first and second moments, $\mu_1$ and $\mu_2$, directly from their definition, and verify that $h(1) = 1$, $h'(1) = \mu_1$, and $h''(1) = \mu_2 - \mu_1$.
Exercise $3$
Let $p$ be a probability distribution on $\{0,1,2\}$ with moments $\mu_1 = 1$, $\mu_2 = 3/2$.
1. Find its ordinary generating function $h(z)$.
2. Using (a), find its moment generating function.
3. Using (b), find its first six moments.
4. Using (a), find $p_0$, $p_1$, and $p_2$.
Exercise $4$
In Exercise $3$ the probability distribution is completely determined by its first two moments. Show that this is always true for any probability distribution on $\{0,1,2\}$. Hint: Given $\mu_1$ and $\mu_2$, find $h(z)$ as in Exercise $3$ and use $h(z)$ to determine $p$.
Exercise $5$
Let $p$ and $p'$ be the two distributions
$p = \pmatrix{ 1 & 2 & 3 & 4 & 5 \cr 1/3 & 0 & 0 & 2/3 & 0 \cr}\ ,$
$p' = \pmatrix{ 1 & 2 & 3 & 4 & 5 \cr 0 & 2/3 & 0 & 0 & 1/3 \cr}\ .$
1. Show that $p$ and $p'$ have the same first and second moments, but not the same third and fourth moments.
2. Find the ordinary and moment generating functions for $p$ and $p'$.
Exercise $6$
Let $p$ be the probability distribution
$p = \pmatrix{ 0 & 1 & 2 \cr 0 & 1/3 & 2/3 \cr}\ ,$ and let $p_n = p * p * \cdots * p$ be the $n$-fold convolution of $p$ with itself.
1. Find $p_2$ by direct calculation (see Definition 7.1.1).
2. Find the ordinary generating functions $h(z)$ and $h_2(z)$ for $p$ and $p_2$, and verify that $h_2(z) = (h(z))^2$.
3. Find $h_n(z)$ from $h(z)$.
4. Find the first two moments, and hence the mean and variance, of $p_n$ from $h_n(z)$. Verify that the mean of $p_n$ is $n$ times the mean of $p$.
5. Find those integers $j$ for which $p_n(j) > 0$ from $h_n(z)$.
Exercise $7$
Let $X$ be a discrete random variable with values in $\{0,1,2,\ldots,n\}$ and moment generating function $g(t)$. Find, in terms of $g(t)$, the generating functions for
1. $-X$.
2. $X + 1$.
3. $3X$.
4. $aX + b$.
Exercise $8$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process, with values in $\{0,1\}$ and mean $\mu = 1/3$. Find the ordinary and moment generating functions for the distribution of
1. $S_1 = X_1$. Hint: First find the distribution of $X_1$ explicitly.
2. $S_2 = X_1 + X_2$.
3. $S_n = X_1 + X_2 +\cdots+ X_n$.
Exercise $9$
Let $X$ and $Y$ be random variables with values in $\{1,2,3,4,5,6\}$ with distribution functions $p_X$ and $p_Y$ given by \begin{aligned} p_X(j) &=& a_j\ , \\ p_Y(j) &=& b_j\ .\end{aligned}
1. Find the ordinary generating functions $h_X(z)$ and $h_Y(z)$ for these distributions.
2. Find the ordinary generating function $h_Z(z)$ for the distribution $Z = X + Y$.
3. Show that $h_Z(z)$ cannot ever have the form $h_Z(z) = \frac{z^2 + z^3 +\cdots+ z^{12}}{11}\ .$
Hint: $h_X$ and $h_Y$ must have at least one nonzero root, but $h_Z(z)$ in the form given has no nonzero real roots.
It follows from this observation that there is no way to load two dice so that the probability that a given sum will turn up when they are tossed is the same for all sums (i.e., that all outcomes are equally likely).
Exercise $10$
Show that if $h(z) = \frac{1 - \sqrt{1 - 4pqz^2}}{2qz}\ ,$ then $h(1) = \left \{ \begin{array}{ll} p/q, & \mbox{if}\ p \leq q, \\ 1, & \mbox{if}\ p \geq q, \end{array}\right.$ and $h'(1) = \left \{ \begin{array}{ll} 1/(p - q), & \mbox{if}\ p > q, \\ \infty, & \mbox{if}\ p = q. \end{array}\right.$
Exercise $11$
Show that if $X$ is a random variable with mean $\mu$ and variance $\sigma^2$, and if $X^* = (X - \mu)/\sigma$ is the standardized version of $X$, then $g_{X^*}(t) = e^{-\mu t/\sigma} g_X\left( \frac t\sigma \right)\ .$ | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/10%3A_Generating_Functions/10.01%3A_Generating_Functions_for_Discrete_Distributions.txt |
Historical Background
In this section we apply the theory of generating functions to the study of an important chance process called a branching process.
Until recently it was thought that the theory of branching processes originated with the following problem posed by Francis Galton in the Educational Times in 1873.1
Problem 4001: A large nation, of whom we will only concern ourselves with the adult males, $N$ in number, and who each bear separate surnames, colonise a district. Their law of population is such that, in each generation, $a_0$ per cent of the adult males have no male children who reach adult life; $a_1$ have one such male child; $a_2$ have two; and so on up to $a_5$ who have five.
Find (1) what proportion of the surnames will have become extinct after $r$ generations; and (2) how many instances there will be of the same surname being held by $m$ persons.
The first attempt at a solution was given by Reverend H. W. Watson. Because of a mistake in algebra, he incorrectly concluded that a family name would always die out with probability 1. However, the methods that he employed to solve the problems were, and still are, the basis for obtaining the correct solution.
Heyde and Seneta discovered an earlier communication by Bienaymé (1845) that anticipated Galton and Watson by 28 years. Bienaymé showed, in fact, that he was aware of the correct solution to Galton’s problem. Heyde and Seneta in their book2 give the following translation from Bienaymé’s paper:
If …the mean of the number of male children who replace the number of males of the preceding generation were less than unity, it would be easily realized that families are dying out due to the disappearance of the members of which they are composed. However, the analysis shows further that when this mean is equal to unity families tend to disappear, although less rapidly ….
The analysis also shows clearly that if the mean ratio is greater than unity, the probability of the extinction of families with the passing of time no longer reduces to certainty. It only approaches a finite limit, which is fairly simple to calculate and which has the singular characteristic of being given by one of the roots of the equation (in which the number of generations is made infinite) which is not relevant to the question when the mean ratio is less than unity.3
Although Bienaymé does not give his reasoning for these results, he did indicate that he intended to publish a special paper on the problem. The paper was never written, or at least has never been found. In his communication Bienaymé indicated that he was motivated by the same problem that occurred to Galton. The opening paragraph of his paper as translated by Heyde and Seneta says,
A great deal of consideration has been given to the possible multiplication of the numbers of mankind; and recently various very curious observations have been published on the fate which allegedly hangs over the aristocrary and middle classes; the families of famous men, etc. This fate, it is alleged, will inevitably bring about the disappearance of the so-called 4
A much more extensive discussion of the history of branching processes may be found in two papers by David G. Kendall.5
Branching processes have served not only as crude models for population growth but also as models for certain physical processes such as chemical and nuclear chain reactions.
Problem of Extinction
We turn now to the first problem posed by Galton (i.e., the problem of finding the probability of extinction for a branching process). We start in the 0th generation with 1 male parent. In the first generation we shall have 0, 1, 2, 3, … male offspring with probabilities $p_0$, $p_1$, $p_2$, $p_3$, …. If in the first generation there are $k$ offspring, then in the second generation there will be $X_1 + X_2 +\cdots+ X_k$ offspring, where $X_1$, $X_2$, …, $X_k$ are independent random variables, each with the common distribution $p_0$, $p_1$, $p_2$, …. This description enables us to construct a tree, and a tree measure, for any number of generations.
Example $1$
Assume that $p_0 = 1/2$, $p_1 = 1/4$, and $p_2 = 1/4$. Then the tree measure for the first two generations is shown in Figure $1$.
Solution
Note that we use the theory of sums of independent random variables to assign branch probabilities. For example, if there are two offspring in the first generation, the probability that there will be two in the second generation is
\begin{aligned} P(X_1 + X_2 = 2) &=& p_0p_2 + p_1p_1 + p_2p_0 \\ &=& \frac12\cdot\frac14 + \frac14\cdot\frac14 + \frac14\cdot\frac12 = \frac 5{16}\ .\end{aligned}
We now study the probability that our process dies out (i.e., that at some generation there are no offspring).
Let $d_m$ be the probability that the process dies out by the $m$th generation. Of course, $d_0 = 0$. In our example, $d_1 = 1/2$ and $d_2 = 1/2 + 1/8 + 1/16 = 11/16$ (see Figure $1$). Note that we must add the probabilities for all paths that lead to 0 by the $m$th generation. It is clear from the definition that $0 = d_0 \leq d_1 \leq d_2 \leq\cdots\leq 1\ .$ Hence, $d_m$ converges to a limit $d$, $0 \leq d \leq 1$, and $d$ is the probability that the process will ultimately die out. It is this value that we wish to determine. We begin by expressing the value $d_m$ in terms of all possible outcomes on the first generation. If there are $j$ offspring in the first generation, then to die out by the $m$th generation, each of these lines must die out in $m - 1$ generations. Since they proceed independently, this probability is $(d_{m - 1})^j$. Therefore
$d_m = p_0 + p_1d_{m - 1} + p_2(d_{m - 1})^2 + p_3(d_{m - 1})^3 +\cdots\ . \label{eq 10.2.1}$ Let $h(z)$ be the ordinary generating function for the $p_i$: $h(z) = p_0 + p_1z + p_2z^2 +\cdots\ .$ Using this generating function, we can rewrite Equation $1$ in the form
$d_m = h(d_{m - 1})\ .$
Since $d_m \to d$, by Equation $2$ we see that the value $d$ that we are looking for satisfies the equation
$d = h(d)\ .$
One solution of this equation is always $d = 1$, since $1 = p_0 + p_1 + p_2 +\cdots\ .$ This is where Watson made his mistake. He assumed that 1 was the only solution to Equation $3$. To examine this question more carefully, we first note that solutions to Equation $3$ represent intersections of the graphs of $y = z$ and $y = h(z) = p_0 + p_1z + p_2z^2 +\cdots\ .$ Thus we need to study the graph of $y = h(z)$. We note that $h(0) = p_0$. Also, $h'(z) = p_1 + 2p_2z + 3p_3z^2 +\cdots\ , \label{eq 10.2.4}$ and $h''(z) = 2p_2 + 3 \cdot 2p_3z + 4 \cdot 3p_4z^2 + \cdots\ .$ From this we see that for $z \geq 0$, $h'(z) \geq 0$ and $h''(z) \geq 0$. Thus for nonnegative $z$, $h(z)$ is an increasing function and is concave upward. Therefore the graph of $y = h(z)$ can intersect the line $y = z$ in at most two points. Since we know it must intersect the line $y = z$ at $(1,1)$, we know that there are just three possibilities, as shown in Figure $2$.
In case (a) the equation $d = h(d)$ has roots $\{d,1\}$ with $0 \leq d < 1$. In the second case (b) it has only the one root $d = 1$. In case (c) it has two roots $\{1,d\}$ where $1 < d$. Since we are looking for a solution $0 \leq d \leq 1$, we see in cases (b) and (c) that our only solution is 1. In these cases we can conclude that the process will die out with probability 1. However in case (a) we are in doubt. We must study this case more carefully.
From Equation $4$ we see that
$h'(1) = p_1 + 2p_2 + 3p_3 +\cdots= m\ ,$
where $m$ is the expected number of offspring produced by a single parent. In case (a) we have $h'(1) > 1$, in (b) $h'(1) = 1$, and in (c) $h'(1) < 1$. Thus our three cases correspond to $m > 1$, $m = 1$, and $m < 1$. We assume now that $m > 1$. Recall that $d_0 = 0$, $d_1 = h(d_0) = p_0$, $d_2 = h(d_1)$, …, and $d_n = h(d_{n - 1})$. We can construct these values geometrically, as shown in Figure $3$.
We can see geometrically, as indicated for $d_0$, $d_1$, $d_2$, and $d_3$ in Figure $3$, that the points $(d_i,h(d_i))$ will always lie above the line $y = z$. Hence, they must converge to the first intersection of the curves $y = z$ and $y = h(z)$ (i.e., to the root $d < 1$). This leads us to the following theorem.
Theorem $1$
Consider a branching process with generating function $h(z)$ for the number of offspring of a given parent. Let $d$ be the smallest root of the equation $z = h(z)$. If the mean number $m$ of offspring produced by a single parent is ${} \leq 1$, then $d = 1$ and the process dies out with probability 1. If $m > 1$ then $d < 1$ and the process dies out with probability $d$.
We shall often want to know the probability that a branching process dies out by a particular generation, as well as the limit of these probabilities. Let $d_n$ be the probability of dying out by the $n$th generation. Then we know that $d_1 = p_0$. We know further that $d_n = h(d_{n - 1})$ where $h(z)$ is the generating function for the number of offspring produced by a single parent. This makes it easy to compute these probabilities.
The program Branch calculates the values of $d_n$. We have run this program for 12 generations for the case that a parent can produce at most two offspring and the probabilities for the number produced are $p_0 = .2$, $p_1 = .5$, and $p_2 = .3$. The results are given in Table $1$.
Table $1$: Probability of dying out.
Generation Probability of dying out
1 .2
2 .312
3 .385203
4 .437116
5 .475879
6 .505878
7 .529713
8 .549035
9 .564949
10 .578225
11 .589416
12 .598931
We see that the probability of dying out by 12 generations is about .6. We shall see in the next example that the probability of eventually dying out is 2/3, so that even 12 generations is not enough to give an accurate estimate for this probability.
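The source code of Branch is not reproduced here, but the iteration it performs is just $d_n = h(d_{n-1})$ starting from $d_0 = 0$; a minimal Python sketch of ours that regenerates Table $1$ might read:

```python
def extinction_probs(p, generations):
    """Iterate d_n = h(d_{n-1}), d_0 = 0, where h(z) = sum_k p[k] * z**k."""
    h = lambda z: sum(pk * z**k for k, pk in enumerate(p))
    d, out = 0.0, []
    for _ in range(generations):
        d = h(d)
        out.append(d)
    return out

# Offspring distribution p_0 = .2, p_1 = .5, p_2 = .3, as above.
for n, d in enumerate(extinction_probs([0.2, 0.5, 0.3], 12), start=1):
    print(n, round(d, 6))
```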
We now assume that at most two offspring can be produced. Then $h(z) = p_0 + p_1z + p_2z^2\ .$ In this simple case the condition $z = h(z)$ yields the equation
$d = p_0 + p_1d + p_2d^2\ ,$
which is satisfied by $d = 1$ and $d = p_0/p_2$. Thus, in addition to the root $d = 1$ we have the second root $d = p_0/p_2$. The mean number $m$ of offspring produced by a single parent is
$m = p_1 + 2p_2 = 1 - p_0 - p_2 + 2p_2 = 1 - p_0 + p_2\ .$
Thus, if $p_0 > p_2$, $m < 1$ and the second root is ${} > 1$. If $p_0 = p_2$, we have a double root $d = 1$. If $p_0 < p_2$, $m > 1$ and the second root $d$ is less than 1 and represents the probability that the process will die out.
Example $2$
Keyfitz6 compiled and analyzed data on the continuation of the female family line among Japanese women. His estimates of the basic probability distribution for the number of female children born to Japanese women of ages 45–49 in 1960 are given in Table $2$.
Table $2$: Distribution of number of female children.
p0 = .2092
p1 = .2584
p2 = .2360
p3 = .1593
p4 = .0828
p5 = .0357
p6 = .0133
p7 = .0042
p8 = .0011
p9 = .0002
p10 = .0000
The expected number of girls in a family is then 1.837 so the probability $d$ of extinction is less than 1. If we run the program Branch, we can estimate that $d$ is in fact only about .324.
Distribution of Offspring
So far we have considered only the first of the two problems raised by Galton, namely the probability of extinction. We now consider the second problem, that is, the distribution of the number $Z_n$ of offspring in the $n$th generation. The exact form of the distribution is not known except in very special cases. We shall see, however, that we can describe the limiting behavior of $Z_n$ as $n \to \infty$.
We first show that the generating function $h_n(z)$ of the distribution of $Z_n$ can be obtained from $h(z)$ for any branching process.
We recall that the value of the generating function at the value $z$ for any random variable $X$ can be written as $h(z) = E(z^X) = p_0 + p_1z + p_2z^2 +\cdots\ .$ That is, $h(z)$ is the expected value of an experiment which has outcome $z^j$ with probability $p_j$.
Let $S_n = X_1 + X_2 +\cdots+ X_n$ where each $X_j$ has the same integer-valued distribution $(p_j)$ with generating function $k(z) = p_0 + p_1z + p_2z^2 +\cdots.$ Let $k_n(z)$ be the generating function of $S_n$. Then using one of the properties of ordinary generating functions discussed in Section 10.1, we have $k_n(z) = (k(z))^n\ ,$ since the $X_j$’s are independent and all have the same distribution.
Consider now the branching process $Z_n$. Let $h_n(z)$ be the generating function of $Z_n$. Then \begin{aligned} h_{n + 1}(z) &=& E(z^{Z_{n + 1}}) \\ &=& \sum_k E(z^{Z_{n + 1}} | Z_n = k) P(Z_n = k)\ .\end{aligned} If $Z_n = k$, then $Z_{n + 1} = X_1 + X_2 +\cdots+ X_k$ where $X_1$, $X_2$, …, $X_k$ are independent random variables with common generating function $h(z)$. Thus
$E(z^{Z_{n + 1}} | Z_n = k) = E(z^{X_1 + X_2 +\cdots+ X_k}) = (h(z))^k\ ,$
and
$h_{n + 1}(z) = \sum_k (h(z))^k P(Z_n = k)\ .$ But $h_n(z) = \sum_k P(Z_n = k) z^k\ .$
Thus, $h_{n + 1}(z) = h_n(h(z))\ . \label{eq 10.2.5}$
If we differentiate Equation $5$ and use the chain rule we have $h'_{n+1}(z) = h'_n(h(z))\, h'(z)\ .$
Putting $z = 1$ and using the fact that $h(1) = 1$, $h'(1) = m$, and $h_n'(1) = m_n$, the mean number of offspring in the $n$th generation, we have $m_{n + 1} = m_n \cdot m\ .$ Thus, $m_2 = m \cdot m = m^2$, $m_3 = m^2 \cdot m = m^3$, and in general $m_n = m^n\ .$
Thus, for a branching process with $m > 1$, the mean number of offspring grows exponentially at a rate $m$.
Example $3$
For the branching process of Example $1$ we have
\begin{aligned} h(z) &=& 1/2 + (1/4)z + (1/4)z^2\ , \\ h_2(z) &=& h(h(z)) \\ &=& 1/2 + (1/4)\bigl[1/2 + (1/4)z + (1/4)z^2\bigr] + (1/4)\bigl[1/2 + (1/4)z + (1/4)z^2\bigr]^2 \\ &=& 11/16 + (1/8)z + (9/64)z^2 + (1/32)z^3 + (1/64)z^4\ .\end{aligned}
The probabilities for the number of offspring in the second generation agree with those obtained directly from the tree measure (see Figure $1$).
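The composition $h_2(z) = h(h(z))$ is easy to reproduce symbolically; the following sympy fragment (our own check, not part of the text) expands it:

```python
from sympy import symbols, Rational, expand

z = symbols('z')
h = Rational(1, 2) + z / 4 + z**2 / 4   # offspring generating function

h2 = expand(h.subs(z, h))               # h_2(z) = h(h(z))
print(h2)   # 11/16 + z/8 + 9*z**2/64 + z**3/32 + z**4/64
```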
It is clear that even in the simple case of at most two offspring, we cannot easily carry out the calculation of $h_n(z)$ by this method. However, there is one special case in which this can be done.
Example $4$
Assume that the probabilities $p_1$, $p_2$, … form a geometric series: $p_k = bc^{k - 1}$, $k = 1$, 2, …, with $0 < b \leq 1 - c$ and $0 < c < 1$. Then we have
\begin{aligned} p_0 &=& 1 - p_1 - p_2 -\cdots \ &=& 1 - b - bc - bc^2 -\cdots \ &=& 1 - \frac b{1 - c}\ .\end{aligned}
The generating function $h(z)$ for this distribution is
\begin{aligned} h(z) &=& p_0 + p_1z + p_2z^2 +\cdots \ &=& 1 - \frac b{1 - c} + bz + bcz^2 + bc^2z^3 +\cdots \ &=& 1 - \frac b{1 - c} + \frac{bz}{1 - cz}\ .\end{aligned}
From this we find
$h'(z) = \frac{bcz}{(1 - cz)^2} + \frac b{1 - cz} = \frac b{(1 - cz)^2}$ and $m = h'(1) = \frac b{(1 - c)^2}\ .$
We know that if $m \leq 1$ the process will surely die out and $d = 1$. To find the probability $d$ when $m > 1$ we must find a root $d < 1$ of the equation
$z = h(z)\ ,$
or
$z = 1 - \frac b{1 - c} + \frac{bz}{1 - cz}.$
This leads us to a quadratic equation. We know that $z = 1$ is one solution. The other is found to be
$d = \frac{1 - b - c}{c(1 - c)}.$
It is easy to verify that $d < 1$ just when $m > 1$.
It is possible in this case to find the distribution of $Z_n$. This is done by first finding the generating function $h_n(z)$.${ }^7$ The result for $m \ne 1$ is:
$h_n(z) = 1 - m^n \left[\frac{1 - d}{m^n - d}\right] + \frac{m^n \left[\frac{1 - d}{m^n - d}\right]^2 z} {1 - \left[\frac{m^n - 1}{m^n - d}\right]z}\ .$
The coefficients of the powers of $z$ give the distribution for $Z_n$:
$P(Z_n = 0) = 1 - m^n\frac{1 - d}{m^n - d} = \frac{d(m^n - 1)}{m^n - d}$
and
$P(Z_n = j) = m^n \Bigl(\frac{1 - d}{m^n - d}\Bigr)^2 \cdot \Bigl(\frac{m^n - 1}{ m^n - d}\Bigr)^{j - 1},$
for $j \geq 1$.
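These formulas are easy to transcribe directly (a sketch; the function name is ours). As a consistency check, the probabilities should sum to 1:

```python
def z_n_distribution(n, m, d, jmax):
    # P(Z_n = j) for j = 0, 1, ..., jmax in the geometric case, m != 1
    mn = m**n
    probs = [d*(mn - 1)/(mn - d)]                       # j = 0
    for j in range(1, jmax + 1):
        probs.append(mn * ((1 - d)/(mn - d))**2
                        * ((mn - 1)/(mn - d))**(j - 1))
    return probs

# with the Keyfitz values m = 1.837 and d = .324 (see below):
print(sum(z_n_distribution(5, 1.837, 0.324, 200)))      # close to 1
```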
Example $5$
Let us re-examine the Keyfitz data to see if a distribution of the type considered in Example $4$ could reasonably be used as a model for this population. We would have to estimate from the data the parameters $b$ and $c$ for the formula $p_k = bc^{k - 1}$. Recall that
$m = \frac b{(1 - c)^2} \label{eq 10.2.7}$
and the probability $d$ that the process dies out is
$d = \frac{1 - b - c}{c(1 - c)}\ . \label{eq 10.2.8}$
Solving Equations $7$ and $8$ for $b$ and $c$ gives $c = \frac{m - 1}{m - d}$ and $b = m\Bigl(\frac{1 - d}{m - d}\Bigr)^2\ .$
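A two-line check of these formulas (a sketch, using the values for $m$ and $d$ quoted just below) reproduces the parameters used in the text:

```python
m, d = 1.837, 0.324
c = (m - 1)/(m - d)
b = m*((1 - d)/(m - d))**2
print(round(b, 3), round(c, 3))        # 0.367 0.553
```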
We shall use the value 1.837 for $m$ and .324 for $d$ that we found in the Keyfitz example. Using these values, we obtain $b = .3666$ and $c = .5533$. Note that $(1 - c)^2 < b < 1 - c$, as required. In Table $3$ we give for comparison the probabilities $p_0$ through $p_{10}$ as calculated by the geometric distribution versus the empirical values.
Table $3$: Comparison of observed and expected frequencies.

 j    Data    Geometric Model
 0    .2092   .1816
 1    .2584   .3666
 2    .2360   .2028
 3    .1593   .1122
 4    .0828   .0621
 5    .0357   .0344
 6    .0133   .0190
 7    .0042   .0105
 8    .0011   .0058
 9    .0002   .0032
10    .0000   .0018
The geometric model tends to favor the larger numbers of offspring but is similar enough to show that this modified geometric distribution might be appropriate to use for studies of this kind.
Recall that if $S_n = X_1 + X_2 +\cdots+ X_n$ is the sum of independent random variables with the same distribution then the Law of Large Numbers states that $S_n/n$ converges to a constant, namely $E(X_1)$. It is natural to ask if there is a similar limiting theorem for branching processes.
Consider a branching process with $Z_n$ representing the number of offspring after $n$ generations. Then we have seen that the expected value of $Z_n$ is $m^n$. Thus we can scale the random variable $Z_n$ to have expected value 1 by considering the random variable $W_n = \frac{Z_n}{m^n}\ .$
In the theory of branching processes it is proved that this random variable $W_n$ will tend to a limit as $n$ tends to infinity. However, unlike the case of the Law of Large Numbers where this limit is a constant, for a branching process the limiting value of the random variables $W_n$ is itself a random variable.
Although we cannot prove this theorem here we can illustrate it by simulation. This requires a little care. When a branching process survives, the number of offspring is apt to get very large. If in a given generation there are 1000 offspring, the offspring of the next generation are the result of 1000 chance events, and it will take a while to simulate these 1000 experiments. However, since the final result is the sum of 1000 independent experiments we can use the Central Limit Theorem to replace these 1000 experiments by a single experiment with normal density having the appropriate mean and variance. The program BranchingSimulation carries out this process.
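The program BranchingSimulation is likewise not listed here. The following sketch (our own Python; the cutoff of 1000 is an arbitrary choice) illustrates the device just described, replacing the exact sum of many offspring experiments by a single normal experiment with the matching mean and variance:

```python
import math, random

p = [.2092, .2584, .2360, .1593, .0828, .0357, .0133, .0042, .0011, .0002]
m = sum(j*pj for j, pj in enumerate(p))             # about 1.837
var = sum(j*j*pj for j, pj in enumerate(p)) - m*m   # offspring variance

def next_generation(z):
    # total offspring of z parents; normal approximation once z is large
    if z > 1000:
        return max(0, round(random.gauss(z*m, math.sqrt(z*var))))
    return sum(random.choices(range(len(p)), weights=p)[0] for _ in range(z))

z = 1
for _ in range(25):
    z = next_generation(z)
print(z / m**25)                                    # one observation of W_25
```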
We have run this program for the Keyfitz example, carrying out 10 simulations and graphing the results in Figure $4$.
The expected number of female offspring per female is 1.837, so that we are graphing the outcome for the random variables $W_n = Z_n/(1.837)^n$. For three of the simulations the process died out, which is consistent with the value $d \approx .324$ that we found for this example. For the other seven simulations the value of $W_n$ tends to a limiting value which is different for each simulation.
Example $6$
We now examine the random variable $Z_n$ more closely for the case $m > 1$ (see Example $4$). Fix a value $t > 0$; let $[tm^n]$ be the integer part of $tm^n$. Then
\begin{aligned} P(Z_n = [tm^n]) &=& m^n (\frac{1 - d}{m^n - d})^2 (\frac{m^n - 1}{m^n - d}) ^{[tm^n] - 1} \ &=& \frac1{m^n}(\frac{1 - d}{1 - d/m^n})^2 (\frac{1 - 1/m^n} {1 - d/m^n})^{tm^n + a}\ ,\end{aligned}
where $|a| \leq 2$. Thus, as $n \to \infty$,
$m^n P(Z_n = [tm^n]) \to (1 - d)^2 \frac{e^{-t}}{e^{-td}} = (1 - d)^2 e^{-t(1 - d)}\ .$ For $t = 0$, $P(Z_n = 0) \to d\ .$
We can compare this result with the Central Limit Theorem for sums $S_n$ of integer-valued independent random variables (see Theorem 9.3.5), which states that if $t$ is an integer and $u = (t - n\mu)/\sqrt{\sigma^2 n}$, then as $n \to \infty$, $\sqrt{\sigma^2 n}\, P(S_n = u\sqrt{\sigma^2 n} + \mu n) \to \frac1{\sqrt{2\pi}} e^{-u^2/2}\ .$ We see that the forms of these statements are quite similar. It is possible to prove a limit theorem for a general class of branching processes that states that under suitable hypotheses, as $n \to \infty$,
$m^n P(Z_n = [tm^n]) \to k(t)\ ,$
for $t > 0$, and
$P(Z_n = 0) \to d\ .$
However, unlike the Central Limit Theorem for sums of independent random variables, the function $k(t)$ will depend upon the basic distribution that determines the process. Its form is known for only a very few examples similar to the one we have considered here.
Chain Letter Problem
Example $7$
An interesting example of a branching process was suggested by Free Huizinga.${ }^8$ In 1978, a chain letter called the “Circle of Gold," believed to have started in California, found its way across the country to the theater district of New York. The chain required a participant to buy a letter containing a list of 12 names for 100 dollars. The buyer gives 50 dollars to the person from whom the letter was purchased and then sends 50 dollars to the person whose name is at the top of the list. The buyer then crosses off the name at the top of the list and adds her own name at the bottom of each letter before it is sold again.
Solution
Let us first assume that the buyer may sell the letter only to a single person. If you buy the letter you will want to compute your expected winnings. (We are ignoring here the fact that the passing on of chain letters through the mail is a federal offense with certain obvious resulting penalties.) Assume that each person involved has a probability $p$ of selling the letter. Then you will receive 50 dollars with probability $p$ and another 50 dollars if the letter is sold to 12 people, since then your name would have risen to the top of the list. This occurs with probability $p^{12}$, and so your expected winnings are $-100 + 50p + 50p^{12}$. Since $50p + 50p^{12} \leq 100$, with equality only when $p = 1$, these expected winnings are negative: the chain in this situation is a highly unfavorable game.
It would be more reasonable to allow each person involved to make a copy of the list and try to sell the letter to at least 2 other people. Then you would have a chance of recovering your 100 dollars on these sales, and if any of the letters is sold 12 times you will receive a bonus of 50 dollars for each of these cases. We can consider this as a branching process with 12 generations. The members of the first generation are the letters you sell. The second generation consists of the letters sold by members of the first generation, and so forth.
Let us assume that the probabilities that each individual sells letters to 0, 1, or 2 others are $p_0$, $p_1$, and $p_2$, respectively. Let $Z_1$, $Z_2$, …, $Z_{12}$ be the number of letters in the first 12 generations of this branching process. Then your expected winnings are
$50(E(Z_1) + E(Z_{12})) = 50m + 50m^{12}\ ,$
where $m = p_1 + 2p_2$ is the expected number of letters you sell. Thus, for the game to be favorable, we must have
$50m + 50m^{12} > 100\ ,$ or $m + m^{12} > 2\ .$
But this will be true if and only if $m > 1$. We have seen that this will occur in the quadratic case if and only if $p_2 > p_0$. Let us assume for example that $p_0 = .2$, $p_1 = .5$, and $p_2 = .3$. Then $m = 1.1$ and the chain would be a favorable game. Your expected profit would be $50(1.1 + 1.1^{12}) - 100 \approx 112\ .$
The probability that you receive at least one payment from the 12th generation is $1 - d_{12}$. We find from our program Branch that $d_{12} = .599$. Thus, $1 - d_{12} = .401$ is the probability that you receive some bonus. The maximum that you could receive from the chain would be $50(2 + 2^{12}) = 204{,}900$ dollars if everyone were to successfully sell two letters. Of course you cannot always expect to be so lucky. (What is the probability of this happening?)
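The value $d_{12}$ can be checked in a few lines (a sketch; Branch itself is not reproduced here), since $P(Z_{12} = 0) = h_{12}(0)$:

```python
def h(z):
    # offspring generating function with p0 = .2, p1 = .5, p2 = .3
    return 0.2 + 0.5*z + 0.3*z**2

z = 0.0
for _ in range(12):                    # h_12(0) = P(Z_12 = 0) = d_12
    z = h(z)
print(z, 1 - z)                        # approximately .599 and .401
```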
To simulate this game, we need only simulate a branching process for 12 generations. Using a slightly modified version of our program BranchingSimulation we carried out twenty such simulations, giving the results shown in Table $4$.
Note that we were quite lucky on a few runs, but we came out ahead only a little less than half the time. The process died out by the twelfth generation in 12 out of the 20 experiments, in good agreement with the probability $d_{12} = .599$ that we calculated using the program Branch.
Table $4$: Simulation of chain letter (finite distribution case). Columns $Z_1$ through $Z_{12}$ give the number of letters in each generation; the last column is the profit.

 Z1  Z2  Z3  Z4  Z5  Z6  Z7  Z8  Z9  Z10  Z11  Z12  Profit
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  1   1   2   3   2   3   2   1   2    3    3    6     250
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  2   4   4   2   3   4   4   3   2    2    1    1      50
  1   2   3   5   4   3   3   3   5    8    6    6     250
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  2   3   2   2   2   1   2   3   3    3    4    6     300
  1   2   1   1   1   1   2   1   0    0    0    0     -50
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  2   3   2   3   3   3   5   9  12   12   13   15     750
  1   1   1   0   0   0   0   0   0    0    0    0     -50
  1   2   2   3   3   0   0   0   0    0    0    0     -50
  1   1   1   1   2   2   3   4   4    6    4    5     200
  1   1   0   0   0   0   0   0   0    0    0    0     -50
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  1   1   2   3   3   4   2   3   3    3    3    2      50
  1   2   4   6   6   9  10  13  16   17   15   18     850
  1   0   0   0   0   0   0   0   0    0    0    0     -50
Let us modify the assumptions about our chain letter to let the buyer sell the letter to as many people as she can instead of to a maximum of two. We shall assume, in fact, that a person has a large number $N$ of acquaintances and a small probability $p$ of persuading any one of them to buy the letter. Then the distribution for the number of letters that she sells will be a binomial distribution with mean $m = Np$. Since $N$ is large and $p$ is small, we can assume that the probability $p_j$ that an individual sells the letter to $j$ people is given by the Poisson distribution $p_j = \frac{e^{-m} m^j}{j!}\ .$ The generating function for the Poisson distribution is \begin{aligned} h(z) &=& \sum_{j = 0}^\infty \frac{e^{-m} m^j z^j}{j!} \ &=& e^{-m} \sum_{j = 0}^\infty \frac{m^j z^j}{j!} \ &=& e^{-m} e^{mz} = e^{m(z - 1)}\ .\end{aligned}
The expected number of letters that an individual passes on is $m$, and again to be favorable we must have $m > 1$. Let us assume again that $m = 1.1$. Then we can find again the probability $1 - d_{12}$ of a bonus from Branch. The result is .232. Although the expected winnings are the same, the variance is larger in this case, and the buyer has a better chance for a reasonably large profit. We again carried out 20 simulations using the Poisson distribution with mean 1.1. The results are shown in Table $5$.
Table $5$: Simulation of chain letter (Poisson case). Columns $Z_1$ through $Z_{12}$ give the number of letters in each generation; the last column is the profit.

 Z1  Z2  Z3  Z4  Z5  Z6  Z7  Z8  Z9  Z10  Z11  Z12  Profit
  1   2   6   7   7   8  11   9   7    6    6    5     200
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  1   1   1   0   0   0   0   0   0    0    0    0     -50
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  1   1   1   1   1   1   2   4   9    7    9    7     300
  2   3   3   4   2   0   0   0   0    0    0    0       0
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  2   1   0   0   0   0   0   0   0    0    0    0       0
  3   3   4   7  11  17  14  11  11   10   16   25    1300
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  1   2   2   1   1   3   1   0   0    0    0    0     -50
  0   0   0   0   0   0   0   0   0    0    0    0    -100
  2   3   1   0   0   0   0   0   0    0    0    0       0
  3   1   0   0   0   0   0   0   0    0    0    0      50
  1   0   0   0   0   0   0   0   0    0    0    0     -50
  3   4   4   7  10  11   9  11  12   14   13   10     550
  1   3   3   4   9   5   7   9   8    8    6    3     100
  1   0   4   6   6   9  10  13   0    0    0    0     -50
  1   0   0   0   0   0   0   0   0    0    0    0     -50
We note that, as before, we came out ahead less than half the time, but we also had one large profit. In only 6 of the 20 cases did we receive any profit. This is again in reasonable agreement with our calculation of a probability .232 for this happening.
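The value .232 can be checked in the same way as before, now iterating the Poisson generating function $h(z) = e^{m(z - 1)}$ (a sketch):

```python
import math

m = 1.1
z = 0.0
for _ in range(12):                    # twelve iterations give h_12(0) = d_12
    z = math.exp(m*(z - 1))
print(1 - z)                           # approximately .232
```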
Exercises
Exercise $1$
Let $Z_1$, $Z_2$, …, $Z_N$ describe a branching process in which each parent has $j$ offspring with probability $p_j$. Find the probability $d$ that the process eventually dies out if
1. $p_0 = 1/2$, $p_1 = 1/4$, and $p_2 = 1/4$.
2. $p_0 = 1/3$, $p_1 = 1/3$, and $p_2 = 1/3$.
3. $p_0 = 1/3$, $p_1 = 0$, and $p_2 = 2/3$.
4. $p_j = 1/2^{j + 1}$, for $j = 0$, 1, 2, ….
5. $p_j = (1/3)(2/3)^j$, for $j = 0$, 1, 2, ….
6. $p_j = e^{-2} 2^j/j!$, for $j = 0$, 1, 2, … (estimate $d$ numerically).
Exercise $2$
Let $Z_1$, $Z_2$, …, $Z_N$ describe a branching process in which each parent has $j$ offspring with probability $p_j$. Find the probability $d$ that the process dies out if
1. $p_0 = 1/2$, $p_1 = p_2 = 0$, and $p_3 = 1/2$.
2. $p_0 = p_1 = p_2 = p_3 = 1/4$.
3. $p_0 = t$, $p_1 = 1 - 2t$, $p_2 = 0$, and $p_3 = t$, where $t \leq 1/2$.
Exercise $3$
In the chain letter problem (see Example $7$) find your expected profit if
1. $p_0 = 1/2$, $p_1 = 0$, and $p_2 = 1/2$.
2. $p_0 = 1/6$, $p_1 = 1/2$, and $p_2 = 1/3$.
Show that if $p_0 > 1/2$, you cannot expect to make a profit.
Exercise $4$
Let $S_N = X_1 + X_2 +\cdots+ X_N$, where the $X_i$’s are independent random variables with common distribution having generating function $f(z)$. Assume that $N$ is an integer-valued random variable independent of all of the $X_j$ and having generating function $g(z)$. Show that the generating function for $S_N$ is $h(z) = g(f(z))$. Hint: Use the fact that $h(z) = E(z^{S_N}) = \sum_k E(z^{S_N} | N = k) P(N = k)\ .$
Exercise $5$
We have seen that if the generating function for the offspring of a single parent is $f(z)$, then the generating function for the number of offspring after two generations is given by $h(z) = f(f(z))$. Explain how this follows from the result of Exercise $4$.
Exercise $6$
Consider a queueing process such that in each minute either 1 or 0 customers arrive with probabilities $p$ or $q = 1 - p$, respectively. (The number $p$ is called the arrival rate.) When a customer starts service she finishes in the next minute with probability $r$. (The number $r$ is called the service rate.) Thus when a customer begins being served she will finish being served in $j$ minutes with probability $(1 - r)^{j -1}r$, for $j = 1$, 2, 3, ….
Exercise $7$
Consider the queueing process of Exercise $6$ as a branching process by taking the offspring of a customer to be the customers who arrive while she is being served. Let $f(z)$ be the generating function for the number of customers who arrive in one minute, and let $g(z)$ be the generating function for the length of a customer's service time. Using Exercise $4$, show that the generating function for our customer branching process is $h(z) = g(f(z))$.
Exercise $8$
If we start the branching process with the arrival of the first customer, then the length of time until the branching process dies out will be the busy period for the server. Find a condition in terms of the arrival rate and service rate that will assure that the server will ultimately have a time when he is not busy.
Exercise $9$
Let $N$ be the expected total number of offspring in a branching process. Let $m$ be the mean number of offspring of a single parent. Show that $N = 1 + \left(\sum p_k \cdot k\right) N = 1 + mN$ and hence that $N$ is finite if and only if $m < 1$ and in that case $N = 1/(1 - m)$.
Exercise $10$
Consider a branching process such that the number of offspring of a parent is $j$ with probability $1/2^{j + 1}$ for $j = 0$, 1, 2, ….
1. Using the results of Example $4$ show that the probability that there are $j$ offspring in the $n$th generation is $p_j^{(n)} = \left \{ \begin{array}{ll} \frac{1}{n(n + 1)} \left(\frac {n}{n + 1}\right)^j, & \mbox{if $j \geq 1$}, \\ \frac {n}{n + 1}, & \mbox{if $j = 0$}.\end{array}\right.$
2. Show that the probability that the process dies out exactly at the $n$th generation is $1/n(n + 1)$.
3. Show that the expected lifetime is infinite even though $d = 1$.
In the previous section, we introduced the concepts of moments and moment generating functions for discrete random variables. These concepts have natural analogues for continuous random variables, provided some care is taken in arguments involving convergence.
Moments
If $X$ is a continuous random variable defined on the probability space $\Omega$, with density function $f_X$, then we define the $n$th moment of $X$ by the formula
$\mu_n = E(X^n) = \int_{-\infty}^{+\infty} x^n f_X(x)\, dx\ ,$
provided the integral
$\int_{-\infty}^{+\infty} |x|^n f_X(x)\, dx$
is finite. Then, just as in the discrete case, we see that $\mu_0 = 1$, $\mu_1 = \mu$, and $\mu_2 - \mu_1^2 = \sigma^2$.
Moment Generating Functions
Now we define the moment generating function $g(t)$ for $X$ by the formula
\begin{aligned} g(t) &=& \sum_{k = 0}^\infty \frac{\mu_k t^k}{k!} = \sum_{k = 0}^\infty \frac{E(X^k) t^k}{k!} \ &=& E(e^{tX}) = \int_{-\infty}^{+\infty} e^{tx} f_X(x)\, dx\ ,\end{aligned}
provided this series converges. Then, as before, we have
$\mu_n = g^{(n)}(0)\ .$
Examples
Example $1$
Let $X$ be a continuous random variable with range $[0,1]$ and density function $f_X(x) = 1$ for $0 \leq x \leq 1$ (uniform density). Then
$\mu_n = \int_0^1 x^n\, dx = \frac1{n + 1}\ ,$
and
\begin{aligned} g(t) &=& \sum_{k = 0}^\infty \frac{t^k}{(k+1)!}\ &=& \frac{e^t - 1}t\ .\end{aligned}
Here the series converges for all $t$. Alternatively, we have
\begin{aligned} g(t) &=& \int_{-\infty}^{+\infty} e^{tx} f_X(x)\, dx \ &=& \int_0^1 e^{tx}\, dx = \frac{e^t - 1}t\ .\end{aligned}
Then (by L’Hôpital’s rule)
\begin{aligned} \mu_0 &=& g(0) = \lim_{t \to 0} \frac{e^t - 1}t = 1\ , \ \mu_1 &=& g'(0) = \lim_{t \to 0} \frac{te^t - e^t + 1}{t^2} = \frac12\ , \ \mu_2 &=& g''(0) = \lim_{t \to 0} \frac{t^3e^t - 2t^2e^t + 2te^t - 2t}{t^4} = \frac13\ .\end{aligned}
In particular, we verify that $\mu = g'(0) = 1/2$ and
$\sigma^2 = g''(0) - (g'(0))^2 = \frac13 - \frac14 = \frac1{12}$
as before (see Example 6.1.5).
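These limits can also be checked symbolically. A short sketch using sympy (our choice of tool):

```python
import sympy as sp

t = sp.symbols('t')
g = (sp.exp(t) - 1)/t                  # MGF of the uniform density on [0, 1]
print([sp.limit(sp.diff(g, t, n), t, 0) for n in range(4)])
# [1, 1/2, 1/3, 1/4], i.e., mu_n = 1/(n + 1)
```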
Example $2$
Let $X$ have range $[\,0,\infty)$ and density function $f_X(x) = \lambda e^{-\lambda x}$ (exponential density with parameter $\lambda$). In this case
\begin{aligned} \mu_n &=& \int_0^\infty x^n \lambda e^{-\lambda x}\, dx = \lambda(-1)^n \frac{d^n}{d\lambda^n} \int_0^\infty e^{-\lambda x}\, dx \ &=& \lambda(-1)^n \frac{d^n}{d\lambda^n} [\frac1\lambda] = \frac{n!} {\lambda^n}\ ,\end{aligned}
and
\begin{aligned} g(t) &=& \sum_{k = 0}^\infty \frac{\mu_k t^k}{k!} \ &=& \sum_{k = 0}^\infty [\frac t\lambda]^k = \frac\lambda{\lambda - t}\ .\end{aligned}
Here the series converges only for $|t| < \lambda$. Alternatively, we have
\begin{aligned} g(t) &=& \int_0^\infty e^{tx} \lambda e^{-\lambda x}\, dx \ &=& \left.\frac{\lambda e^{(t - \lambda)x}}{t - \lambda}\right|_0^\infty = \frac\lambda{\lambda - t}\ .\end{aligned}
Now we can verify directly that $\mu_n = g^{(n)}(0) = \left.\frac{\lambda n!}{(\lambda - t)^{n + 1}}\right|_{t = 0} = \frac{n!}{\lambda^n}\ .$
Example $3$
Let $X$ have range $(-\infty,+\infty)$ and density function
$f_X(x) = \frac1{\sqrt{2\pi}} e^{-x^2/2}$ (normal density).
In this case we have
\begin{aligned} \mu_n &=& \frac1{\sqrt{2\pi}} \int_{-\infty}^{+\infty} x^n e^{-x^2/2}\, dx \ &=& \left \{ \begin{array}{ll} \frac{(2m)!}{2^{m} m!}, & \mbox{if $n = 2m$,} \\ 0, & \mbox{if $n = 2m+1$.}\end{array}\right.\end{aligned}
(These moments are calculated by integrating once by parts to show that $\mu_n = (n - 1)\mu_{n - 2}$, and observing that $\mu_0 = 1$ and $\mu_1 = 0$.) Hence,
\begin{aligned} g(t) &=& \sum_{n = 0}^\infty \frac{\mu_n t^n}{n!} \ &=& \sum_{m = 0}^\infty \frac{t^{2m}}{2^{m} m!} = e^{t^2/2}\ .\end{aligned} This series converges for all values of $t$. Again we can verify that $g^{(n)}(0) = \mu_n$.
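As a check, the moments can be read off from the derivatives of $e^{t^2/2}$ at 0 (a sympy sketch):

```python
import sympy as sp

t = sp.symbols('t')
g = sp.exp(t**2/2)                     # MGF of the standard normal density
print([sp.diff(g, t, n).subs(t, 0) for n in range(7)])
# [1, 0, 1, 0, 3, 0, 15]: odd moments vanish, mu_{2m} = (2m)!/(2^m m!)
```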
Let $X$ be a normal random variable with parameters $\mu$ and $\sigma$. It is easy to show that the moment generating function of $X$ is given by $e^{t\mu + (\sigma^2/2)t^2}\ .$ Now suppose that $X$ and $Y$ are two independent normal random variables with parameters $\mu_1$, $\sigma_1$, and $\mu_2$, $\sigma_2$, respectively. Then, the product of the moment generating functions of $X$ and $Y$ is
$e^{t(\mu_1 + \mu_2) + ((\sigma_1^2 + \sigma_2^2)/2)t^2}\ .$
This is the moment generating function for a normal random variable with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$. Thus, the sum of two independent normal random variables is again normal. (This was proved for the special case that both summands are standard normal in Example 7.1.5.)
In general, the series defining $g(t)$ will not converge for all $t$. But in the important special case where $X$ is bounded (i.e., where the range of $X$ is contained in a finite interval), we can show that the series does converge for all $t$.
Theorem $1$
Suppose $X$ is a continuous random variable with range contained in the interval $[-M,M]$. Then the series
$g(t) = \sum_{k = 0}^\infty \frac{\mu_k t^k}{k!}$
converges for all $t$ to an infinitely differentiable function $g(t)$, and $g^{(n)}(0) = \mu_n$.
Proof. We have
$\mu_k = \int_{-M}^{+M} x^k f_X(x)\, dx\ ,$ so \begin{aligned} |\mu_k| &\leq& \int_{-M}^{+M} |x|^k f_X(x)\, dx \ &\leq& M^k \int_{-M}^{+M} f_X(x)\, dx = M^k\ .\end{aligned}
Hence, for all $N$ we have
$\sum_{k = 0}^N \left|\frac{\mu_k t^k}{k!}\right| \leq \sum_{k = 0}^N \frac{(M|t|)^k}{k!} \leq e^{M|t|}\ ,$
which shows that the power series converges for all $t$. We know that the sum of a convergent power series is always differentiable.
Moment Problem
Theorem $2$
If $X$ is a bounded random variable, then the moment generating function $g_X(t)$ of $X$ determines the density function $f_X(x)$ uniquely.
Sketch of the Proof. We know that
\begin{aligned} g_X(t) &=& \sum_{k = 0}^\infty \frac{\mu_k t^k}{k!} \ &=& \int_{-\infty}^{+\infty} e^{tx} f(x)\, dx\ .\end{aligned}
If we replace $t$ by $i\tau$, where $\tau$ is real and $i = \sqrt{-1}$, then the series converges for all $\tau$, and we can define the function
$k_X(\tau) = g_X(i\tau) = \int_{-\infty}^{+\infty} e^{i\tau x} f_X(x)\, dx\ .$
The function $k_X(\tau)$ is called the characteristic function of $X$, and is defined by the above equation even when the series for $g_X$ does not converge. This equation says that $k_X$ is the Fourier transform of $f_X$. It is known that the Fourier transform has an inverse, given by the formula
$f_X(x) = \frac1{2\pi} \int_{-\infty}^{+\infty} e^{-i\tau x} k_X(\tau)\, d\tau\ ,$
suitably interpreted.${ }^9$ Here we see that the characteristic function $k_X$, and hence the moment generating function $g_X$, determines the density function $f_X$ uniquely under our hypotheses.
Sketch of the Proof of the Central Limit Theorem
With the above result in mind, we can now sketch a proof of the Central Limit Theorem for bounded continuous random variables (see Theorem 9.4.7). To this end, let $X$ be a continuous random variable with density function $f_X$, mean $\mu = 0$ and variance $\sigma^2 = 1$, and moment generating function $g(t)$ defined by its series for all $t$. Let $X_1$, $X_2$, …, $X_n$ be an independent trials process with each $X_i$ having density $f_X$, and let $S_n = X_1 + X_2 +\cdots+ X_n$, and $S_n^* = (S_n - n\mu)/\sqrt{n\sigma^2} = S_n/\sqrt n$. Then each $X_i$ has moment generating function $g(t)$, and since the $X_i$ are independent, the sum $S_n$, just as in the discrete case (see Section 10.1), has moment generating function
$g_n(t) = (g(t))^n\ ,$
and the standardized sum $S_n^*$ has moment generating function
$g_n^*(t) = \left(g\left(\frac t{\sqrt n}\right)\right)^n\ .$
We now show that, as $n \to \infty$, $g_n^*(t) \to e^{t^2/2}$, where $e^{t^2/2}$ is the moment generating function of the normal density $n(x) = (1/\sqrt{2\pi}) e^{-x^2/2}$ (see Example $3$).
To show this, we set $u(t) = \log g(t)$, and
\begin{aligned} u_n^*(t) &=& \log g_n^*(t) \ &=& n\log g\left(\frac t{\sqrt n}\right) = nu\left(\frac t{\sqrt n}\right)\ ,\end{aligned}
and show that $u_n^*(t) \to t^2/2$ as $n \to \infty$. First we note that
\begin{aligned} u(0) &=& \log g(0) = 0\ , \ u'(0) &=& \frac{g'(0)}{g(0)} = \frac{\mu_1}1 = 0\ , \ u''(0) &=& \frac{g''(0)g(0) - (g'(0))^2}{(g(0))^2} \ &=& \frac{\mu_2 - \mu_1^2}1 = \sigma^2 = 1\ .\end{aligned}
Now by using L’Hôpital’s rule twice, we get
\begin{aligned} \lim_{n \to \infty} u_n^*(t) &=& \lim_{s \to \infty} \frac{u(t/\sqrt s)}{s^{-1}}\ &=& \lim_{s \to \infty} \frac{u'(t/\sqrt s) t}{2s^{-1/2}} \ &=& \lim_{s \to \infty} u''\left(\frac t{\sqrt s}\right) \frac{t^2}2 = \sigma^2 \frac{t^2}2 = \frac{t^2}2\ .\end{aligned}
Hence, $g_n^*(t) \to e^{t^2/2}$ as $n \to \infty$. Now to complete the proof of the Central Limit Theorem, we must show that if $g_n^*(t) \to e^{t^2/2}$, then under our hypotheses the distribution functions $F_n^*(x)$ of the $S_n^*$ must converge to the distribution function $F_N^*(x)$ of the normal variable $N$; that is, that
$F_n^*(a) = P(S_n^* \leq a) \to \frac1{\sqrt{2\pi}} \int_{-\infty}^a e^{-x^2/2}\, dx\ ,$
and furthermore, that the density functions $f_n^*(x)$ of the $S_n^*$ must converge to the density function for $N$; that is, that
$f_n^*(x) \to \frac1{\sqrt{2\pi}} e^{-x^2/2}\ ,$ as $n \rightarrow \infty$.
Since the densities, and hence the distributions, of the $S_n^*$ are uniquely determined by their moment generating functions under our hypotheses, these conclusions are certainly plausible, but their proofs involve a detailed examination of characteristic functions and Fourier transforms, and we shall not attempt them here.
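Although we do not prove these statements, the convergence $g_n^*(t) \to e^{t^2/2}$ is easy to observe numerically. A sketch for the uniform density on $[0,1]$, which has $\mu = 1/2$ and $\sigma^2 = 1/12$ (here the general standardization is used, since this $\mu$ is not 0):

```python
import math

mu, var = 0.5, 1/12                    # uniform density on [0, 1]

def g(t):
    # its moment generating function, (e^t - 1)/t
    return (math.exp(t) - 1)/t if t != 0 else 1.0

def g_star(t, n):
    # MGF of the standardized sum S_n^* = (S_n - n*mu)/sqrt(n*var)
    s = math.sqrt(n*var)
    return math.exp(-t*n*mu/s) * g(t/s)**n

for n in (1, 10, 100, 1000):
    print(n, g_star(1.0, n))           # tends to e^{1/2} = 1.6487...
```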
In the same way, we can prove the Central Limit Theorem for bounded discrete random variables with integer values (see Theorem 9.3.6). Let $X$ be a discrete random variable with density function $p(j)$, mean $\mu = 0$, variance $\sigma^2 = 1$, and moment generating function $g(t)$, and let $X_1$, $X_2$, …, $X_n$ form an independent trials process with common density $p$. Let $S_n = X_1 + X_2 +\cdots+ X_n$ and $S_n^* = S_n/\sqrt n$, with densities $p_n$ and $p_n^*$, and moment generating functions $g_n(t)$ and $g_n^*(t) = \left(g(\frac t{\sqrt n})\right)^n.$ Then we have
$g_n^*(t) \to e^{t^2/2}\ ,$
just as in the continuous case, and this implies in the same way that the distribution functions $F_n^*(x)$ converge to the normal distribution; that is, that
$F_n^*(a) = P(S_n^* \leq a) \to \frac1{\sqrt{2\pi}} \int_{-\infty}^a e^{-x^2/2}\, dx\ ,$ as $n \rightarrow \infty$.
The corresponding statement about the densities $p_n^*$, however, requires a little extra care (see Theorem 9.3.5). The trouble arises because the distribution $p(x)$ is not defined for all $x$, but only for integer $x$. It follows that the distribution $p_n^*(x)$ is defined only for $x$ of the form $j/\sqrt n$, and these values change as $n$ changes.
We can fix this, however, by introducing the function $\bar p(x)$, defined by the formula
$\bar p(x) = \left \{ \begin{array}{ll} p(j), & \mbox{if $j - 1/2 \leq x < j + 1/2$,} \\ 0\ , & \mbox{otherwise}.\end{array}\right.$
Then $\bar p(x)$ is defined for all $x$, $\bar p(j) = p(j)$, and the graph of $\bar p(x)$ is the step function for the distribution $p(j)$ (see Figure 3 of Section 9.1).
In the same way we introduce the step function $\bar p_n(x)$ and $\bar p_n^*(x)$ associated with the distributions $p_n$ and $p_n^*$, and their moment generating functions $\bar g_n(t)$ and $\bar g_n^*(t)$. If we can show that $\bar g_n^*(t) \to e^{t^2/2}$, then we can conclude that
$\bar p_n^*(x) \to \frac1{\sqrt{2\pi}} e^{-x^2/2}\ ,$
as $n \rightarrow \infty$, for all $x$, a conclusion strongly suggested by Figure 9.1.2.
Now $\bar g(t)$ is given by
\begin{aligned} \bar g(t) &=& \int_{-\infty}^{+\infty} e^{tx} \bar p(x)\, dx \ &=& \sum_{j = -N}^{+N} \int_{j - 1/2}^{j + 1/2} e^{tx} p(j)\, dx\ &=& \sum_{j = -N}^{+N} p(j) e^{tj} \frac{e^{t/2} - e^{-t/2}}{t} \ &=& g(t) \frac{\sinh(t/2)}{t/2}\ ,\end{aligned}
where we have put $\sinh(t/2) = \frac{e^{t/2} - e^{-t/2}}2\ .$
In the same way, we find that
\begin{aligned} \bar g_n(t) &=& g_n(t) \frac{\sinh(t/2)}{t/2}\ , \ \bar g_n^*(t) &=& g_n^*(t) \frac{\sinh(t/2\sqrt n)}{t/2\sqrt n}\ .\end{aligned}
Now, as $n \to \infty$, we know that $g_n^*(t) \to e^{t^2/2}$, and, by L’Hôpital’s rule,
$\lim_{n \to \infty} \frac{\sinh(t/2\sqrt n)}{t/2\sqrt n} = 1\ .$
It follows that $\bar g_n^*(t) \to e^{t^2/2}\ ,$ and hence that
$\bar p_n^*(x) \to \frac1{\sqrt{2\pi}} e^{-x^2/2}\ ,$ as $n \rightarrow \infty$.
The astute reader will note that in this sketch of the proof of Theorem 9.3.5, we never made use of the hypothesis that the greatest common divisor of the differences of all the values that the $X_i$ can take on is 1. This is a technical point that we choose to ignore. A complete proof may be found in Gnedenko and Kolmogorov.${ }^{10}$
Cauchy Density
The characteristic function of a continuous density is a useful tool even in cases when the moment series does not converge, or even in cases when the moments themselves are not finite. As an example, consider the Cauchy density with parameter $a = 1$ (see Example 5.20)
$f(x) = \frac1{\pi(1 + x^2)}\ .$
If $X$ and $Y$ are independent random variables with Cauchy density $f(x)$, then the average $Z = (X + Y)/2$ also has Cauchy density $f(x)$, that is,
$f_Z(x) = f(x)\ .$
This is hard to check directly, but easy to check by using characteristic functions. Note first that
$\mu_2 = E(X^2) = \int_{-\infty}^{+\infty} \frac{x^2}{\pi(1 + x^2)}\, dx = \infty$
so that $\mu_2$ is infinite. Nevertheless, we can define the characteristic function $k_X(\tau)$ of $X$ by the formula
$k_X(\tau) = \int_{-\infty}^{+\infty} e^{i\tau x}\frac1{\pi(1 + x^2)}\, dx\ .$
This integral is easy to do by contour methods, and gives us
$k_X(\tau) = k_Y(\tau) = e^{-|\tau|}\ .$ Hence, $k_{X + Y}(\tau) = (e^{-|\tau|})^2 = e^{-2|\tau|}\ ,$
and since
$k_Z(\tau) = k_{X + Y}(\tau/2)\ ,$
we have
$k_Z(\tau) = e^{-2|\tau/2|} = e^{-|\tau|}\ .$
This shows that $k_Z = k_X = k_Y$, and leads to the conclusion that $f_Z = f_X = f_Y$.
It follows from this that if $X_1$, $X_2$, …, $X_n$ is an independent trials process with common Cauchy density, and if
$A_n = \frac{X_1 + X_2 + \cdots+ X_n}n$
is the average of the $X_i$, then $A_n$ has the same density as do the $X_i$. This means that the Law of Large Numbers fails for this process; the distribution of the average $A_n$ is exactly the same as for the individual terms. Our proof of the Law of Large Numbers fails in this case because the variance of $X_i$ is not finite.
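This failure of the Law of Large Numbers is easy to see in simulation. A sketch (samples are drawn by inverting the Cauchy distribution function):

```python
import math, random

def cauchy():
    # tan(pi(U - 1/2)) has density 1/(pi(1 + x^2)) when U is uniform on (0, 1)
    return math.tan(math.pi*(random.random() - 0.5))

for n in (10, 1000, 100000):
    a_n = sum(cauchy() for _ in range(n))/n
    print(n, a_n)                      # the averages A_n do not settle down
```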
Exercises
Exercise $1$
Let $X$ be a continuous random variable with values in $[\,0,2]$ and density $f_X$. Find the moment generating function $g(t)$ for $X$ if
1. $f_X(x) = 1/2$.
2. $f_X(x) = (1/2)x$.
3. $f_X(x) = 1 - (1/2)x$.
4. $f_X(x) = |1 - x|$.
5. $f_X(x) = (3/8)x^2$.
Hint: Use the integral definition, as in Examples $1$ and $2$.
Exercise $2$
For each of the densities in Exercise $1$ calculate the first and second moments, $\mu_1$ and $\mu_2$, directly from their definition and verify that $g(0) = 1$, $g'(0) = \mu_1$, and $g''(0) = \mu_2$.
Exercise $3$
Let $X$ be a continuous random variable with values in $[\,0,\infty)$ and density $f_X$. Find the moment generating functions for $X$ if
1. $f_X(x) = 2e^{-2x}$.
2. $f_X(x) = e^{-2x} + (1/2)e^{-x}$.
3. $f_X(x) = 4xe^{-2x}$.
4. $f_X(x) = \lambda(\lambda x)^{n - 1} e^{-\lambda x}/(n - 1)!$.
Exercise $4$
For each of the densities in Exercise $3$, calculate the first and second moments, $\mu_1$ and $\mu_2$, directly from their definition and verify that $g(0) = 1$, $g'(0) = \mu_1$, and $g''(0) = \mu_2$.
Exercise $5$
Find the characteristic function $k_X(\tau)$ for each of the random variables $X$ of Exercise $1$.
Exercise $6$
Let $X$ be a continuous random variable whose characteristic function $k_X(\tau)$ is $k_X(\tau) = e^{-|\tau|}, \qquad -\infty < \tau < +\infty\ .$ Show directly that the density $f_X$ of $X$ is $f_X(x) = \frac1{\pi(1 + x^2)}\ .$
Exercise $7$
Let $X$ be a continuous random variable with values in $[\,0,1]$, uniform density function $f_X(x) \equiv 1$ and moment generating function $g(t) = (e^t - 1)/t$. Find in terms of $g(t)$ the moment generating function for
1. $-X$.
2. $1 + X$.
3. $3X$.
4. $aX + b$.
Exercise $8$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process with uniform density. Find the moment generating function for
1. $X_1$.
2. $S_2 = X_1 + X_2$.
3. $S_n = X_1 + X_2 +\cdots+ X_n$.
4. $A_n = S_n/n$.
5. $S_n^* = (S_n - n\mu)/\sqrt{n\sigma^2}$.
Exercise $9$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process with normal density of mean 1 and variance 2. Find the moment generating function for
1. $X_1$.
2. $S_2 = X_1 + X_2$.
3. $S_n = X_1 + X_2 +\cdots+ X_n$.
4. $A_n = S_n/n$.
5. $S_n^* = (S_n - n\mu)/\sqrt{n\sigma^2}$.
Exercise $10$
Let $X_1$, $X_2$, …, $X_n$ be an independent trials process with density $f(x) = \frac12 e^{-|x|}, \qquad -\infty < x < +\infty\ .$
1. Find the mean and variance of $f(x)$.
2. Find the moment generating function for $X_1$, $S_n$, $A_n$, and $S_n^*$.
3. What can you say about the moment generating function of $S_n^*$ as $n \to \infty$?
4. What can you say about the moment generating function of $A_n$ as $n \to \infty$?
10.R: References
Footnotes
1. D. G. Kendall, “Branching Processes Since 1873," Journal of the London Mathematical Society, vol. 41 (1966), p. 386.↩
2. C. C. Heyde and E. Seneta, I. J. Bienaymé: Statistical Theory Anticipated (New York: Springer Verlag, 1977).↩
3. ibid., pp. 117–118.↩
4. ibid., p. 118.↩
5. D. G. Kendall, “Branching Processes Since 1873," pp. 385–406; and “The Genealogy of Genealogy: Branching Processes Before (and After) 1873," Bulletin of the London Mathematical Society, vol. 7 (1975), pp. 225–253.↩
6. N. Keyfitz, Introduction to the Mathematics of Population, rev. ed. (Reading, PA: Addison Wesley, 1977).↩
7. T. E. Harris, The Theory of Branching Processes (Berlin: Springer, 1963), p. 9.↩
8. Private communication.↩
9. H. Dym and H. P. McKean, Fourier Series and Integrals (New York: Academic Press, 1972).↩
10. B. V. Gnedenko and A. N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables (Reading: Addison-Wesley, 1968), p. 233.↩
Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student’s grades on a sequence of exams in a course. But to allow this much generality would make it very difficult to prove general results. In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment. This type of process is called a Markov chain.
Thumbnail: A diagram representing a two-state Markov process, with the states labeled E and A. Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. If the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6. (CC BY-SA 3.0; Joxemai4 via Wikipedia).
11: Markov Chains
Most of our study of probability has dealt with independent trials processes. These processes are the basis of classical probability theory and much of statistics. We have discussed two of the principal theorems for these processes: the Law of Large Numbers and the Central Limit Theorem.
We have seen that when a sequence of chance experiments forms an independent trials process, the possible outcomes for each experiment are the same and occur with the same probability. Further, knowledge of the outcomes of the previous experiments does not influence our predictions for the outcomes of the next experiment. The distribution for the outcomes of a single experiment is sufficient to construct a tree and a tree measure for a sequence of $n$ experiments, and we can answer any probability question about these experiments by using this tree measure.
Modern probability theory studies chance processes for which the knowledge of previous outcomes influences predictions for future experiments. In principle, when we observe a sequence of chance experiments, all of the past outcomes could influence our predictions for the next experiment. For example, this should be the case in predicting a student’s grades on a sequence of exams in a course. But to allow this much generality would make it very difficult to prove general results.
In 1907, A. A. Markov began the study of an important new type of chance process. In this process, the outcome of a given experiment can affect the outcome of the next experiment. This type of process is called a Markov chain.
Specifying a Markov Chain
Example $1$
According to Kemeny, Snell, and Thompson, ${ }^2$ the Land of $\mathrm{Oz}$ is blessed by many things, but not by good weather. They never have two nice days in a row. If they have a nice day, they are just as likely to have snow as rain the next day. If they have snow or rain, they have an even chance of having the same the next day. If there is change from snow or rain, only half of the time is this a change to a nice day. With this information we form a Markov chain as follows. We take as states the kinds of weather R, N, and S. From the above information we determine the transition probabilities. These are most conveniently represented in a square array as
$\mathbf{P} = \begin{matrix} & \begin{matrix} R & N & S \end{matrix} \\ \begin{matrix} R \\ N \\ S \end{matrix} & \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix} \end{matrix}$
Transition Matrix
The entries in the first row of the matrix $\mathbf{P}$ in Example 11.1 represent the probabilities for the various kinds of weather following a rainy day. Similarly, the entries in the second and third rows represent the probabilities for the various kinds of weather following nice and snowy days, respectively. Such a square array is called the matrix of transition probabilities, or the transition matrix.
We consider the question of determining the probability that, given the chain is in state $i$ today, it will be in state $j$ two days from now. We denote this probability by $p_{ij}^{(2)}$. In Example $1$, we see that if it is rainy today then the event that it is snowy two days from now is the disjoint union of the following three events: 1) it is rainy tomorrow and snowy two days from now, 2) it is nice tomorrow and snowy two days from now, and 3) it is snowy tomorrow and snowy two days from now. The probability of the first of these events is the product of the conditional probability that it is rainy tomorrow, given that it is rainy today, and the conditional probability that it is snowy two days from now, given that it is rainy tomorrow. Using the transition matrix $\mathbf{P}$, we can write this product as $p_{11} p_{13}$. The other two events also have probabilities that can be written as products of entries of $\mathbf{P}$. Thus, we have
$p_{13}^{(2)}=p_{11} p_{13}+p_{12} p_{23}+p_{13} p_{33} .$
This equation should remind the reader of a dot product of two vectors; we are dotting the first row of $\mathbf{P}$ with the third column of $\mathbf{P}$. This is just what is done in obtaining the 1,3-entry of the product of $\mathbf{P}$ with itself. In general, if a Markov chain has $r$ states, then
$p_{i j}^{(2)}=\sum_{k=1}^r p_{i k} p_{k j} .$
The following general theorem is easy to prove by using the above observation and induction.
Theorem $1$
Let $\mathbf{P}$ be the transition matrix of a Markov chain. The $ij$th entry $p_{ij}^{(n)}$ of the matrix $\mathbf{P}^n$ gives the probability that the Markov chain, starting in state $s_i$, will be in state $s_j$ after $n$ steps.
Proof. The proof of this theorem is left as an exercise (Exercise $17$).
Example $2$
(Example $1$ continued) Consider again the weather in the Land of $\mathrm{Oz}$. We know that the powers of the transition matrix give us interesting information about the process as it evolves. We shall be particularly interested in the state of the chain after a large number of steps. The program MatrixPowers computes the powers of $\mathbf{P}$.
We have run the program MatrixPowers for the Land of $\mathrm{Oz}$ example to compute the successive powers of $\mathbf{P}$ from 1 to 6. The results are shown in Table $1$. We note that after six days our weather predictions are, to three-decimal-place accuracy, independent of today's weather. The probabilities for the three types of weather, $\mathrm{R}$, $\mathrm{N}$, and $\mathrm{S}$, are .4, .2, and .4 no matter where the chain started. This is an example of a type of Markov chain called a regular Markov chain. For this type of chain, it is true that long-range predictions are independent of the starting state. Not all chains are regular, but this is an important class of chains that we shall study in detail later.
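MatrixPowers itself is not listed here; with any matrix library the computation is immediate. A Python/numpy sketch:

```python
import numpy as np

P = np.array([[.50, .25, .25],
              [.50, .00, .50],
              [.25, .25, .50]])        # Land of Oz transition matrix
for n in (1, 2, 6):
    print(np.round(np.linalg.matrix_power(P, n), 3))
# by n = 6 every row is (.4, .2, .4) to three decimal places
```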
We now consider the long-term behavior of a Markov chain when it starts in a state chosen by a probability distribution on the set of states, which we will call a probability vector. A probability vector with $r$ components is a row vector whose entries are non-negative and sum to 1. If $\mathbf{u}$ is a probability vector which represents the initial state of a Markov chain, then we think of the $i$th component of $\mathbf{u}$ as representing the probability that the chain starts in state $s_i$.
With this interpretation of random starting states, it is easy to prove the following theorem.
$\mathbf{P}^1 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .500 & .250 & .250 \\ .500 & .000 & .500 \\ .250 & .250 & .500 \end{pmatrix} \end{matrix}$

$\mathbf{P}^2 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .438 & .188 & .375 \\ .375 & .250 & .375 \\ .375 & .188 & .438 \end{pmatrix} \end{matrix}$

$\mathbf{P}^3 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .406 & .203 & .391 \\ .406 & .188 & .406 \\ .391 & .203 & .406 \end{pmatrix} \end{matrix}$

$\mathbf{P}^4 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .402 & .199 & .398 \\ .398 & .203 & .398 \\ .398 & .199 & .402 \end{pmatrix} \end{matrix}$

$\mathbf{P}^5 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .400 & .200 & .399 \\ .400 & .199 & .400 \\ .399 & .200 & .400 \end{pmatrix} \end{matrix}$

$\mathbf{P}^6 = \begin{matrix} & \begin{matrix} \text{Rain} & \text{Nice} & \text{Snow} \end{matrix} \\ \begin{matrix} \text{Rain} \\ \text{Nice} \\ \text{Snow} \end{matrix} & \begin{pmatrix} .400 & .200 & .400 \\ .400 & .200 & .400 \\ .400 & .200 & .400 \end{pmatrix} \end{matrix}$

Table $1$: Powers of the Land of $\mathrm{Oz}$ transition matrix.
Theorem $2$
Let $\mathbf{P}$ be the transition matrix of a Markov chain, and let $\mathbf{u}$ be the probability vector which represents the starting distribution. Then the probability that the chain is in state $s_i$ after $n$ steps is the $i$ th entry in the vector
$\mathbf{u}^{(n)}=\mathbf{u} \mathbf{P}^n$
Proof. The proof of this theorem is left as an exercise (Exercise $18$).
We note that if we want to examine the behavior of the chain under the assumption that it starts in a certain state $s_i$, we simply choose $\mathbf{u}$ to be the probability vector with $i$th entry equal to 1 and all other entries equal to 0.
Example $3$
In the Land of Oz example (Example $1$) let the initial probability vector $\mathbf{u}$ equal $(1/3, 1/3, 1/3)$. Then we can calculate the distribution of the states after three days using Theorem $2$ and our previous calculation of $\mathbf{P}^3$. We obtain
\begin{aligned} \mathbf{u}^{(3)} = \mathbf{u} \mathbf{P}^3 &= \left(\begin{array}{lll} 1/3, & 1/3, & 1/3 \end{array}\right)\left(\begin{array}{ccc} .406 & .203 & .391 \\ .406 & .188 & .406 \\ .391 & .203 & .406 \end{array}\right) \\ &= \left(\begin{array}{lll} .401, & .198, & .401 \end{array}\right) . \end{aligned}
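The same arithmetic in numpy (a sketch):

```python
import numpy as np

P = np.array([[.5, .25, .25], [.5, 0, .5], [.25, .25, .5]])
u = np.array([1/3, 1/3, 1/3])
print(np.round(u @ np.linalg.matrix_power(P, 3), 3))   # [0.401 0.198 0.401]
```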
Examples
The following examples of Markov chains will be used throughout the chapter for exercises.
Example $4$
The President of the United States tells person A his or her intention to run or not to run in the next election. Then A relays the news to B, who in turn relays the message to $\mathrm{C}$, and so forth, always to some new person. We assume that there is a probability $a$ that a person will change the answer from yes to no when transmitting it to the next person and a probability $b$ that he or she will change it from no to yes. We choose as states the message, either yes or no. The transition matrix is then
$\mathbf{P} = \begin{matrix} & \begin{matrix} \text{Yes} & \text{No} \end{matrix} \\ \begin{matrix} \text{Yes} \\ \text{No} \end{matrix} & \begin{pmatrix} 1 - a & a \\ b & 1 - b \end{pmatrix} \end{matrix}$
The initial state represents the President's choice.
Example $5$

Each time a certain horse runs in a three-horse race, he has probability $1/2$ of winning, $1/4$ of coming in second, and $1/4$ of coming in third, independent of the outcome of any previous race. We have an independent trials process, but it can also be considered from the point of view of Markov chain theory. The transition matrix is
$\mathbf{P} = \begin{matrix} & \begin{matrix} W & P & S \end{matrix} \\ \begin{matrix} W \\ P \\ S \end{matrix} & \begin{pmatrix} .5 & .25 & .25 \\ .5 & .25 & .25 \\ .5 & .25 & .25 \end{pmatrix} \end{matrix}$
Example $6$
In the Dark Ages, Harvard, Dartmouth, and Yale admitted only male students. Assume that, at that time, 80 percent of the sons of Harvard men went to Harvard and the rest went to Yale, 40 percent of the sons of Yale men went to Yale, and the rest split evenly between Harvard and Dartmouth; and of the sons of Dartmouth men, 70 percent went to Dartmouth, 20 percent to Harvard, and 10 percent to Yale. We form a Markov chain with transition matrix
$\mathbf{P} = \begin{matrix} & \begin{matrix} H & Y & D \end{matrix} \\ \begin{matrix} H \\ Y \\ D \end{matrix} & \begin{pmatrix} .8 & .2 & 0 \\ .3 & .4 & .3 \\ .2 & .1 & .7 \end{pmatrix} \end{matrix}$
Example $7$
Modify Example $6$ by assuming that the son of a Harvard man always went to Harvard. The transition matrix is now
$\mathbf{P} = \begin{matrix} & \begin{matrix} H & Y & D \end{matrix} \\ \begin{matrix} H \\ Y \\ D \end{matrix} & \begin{pmatrix} 1 & 0 & 0 \\ .3 & .4 & .3 \\ .2 & .1 & .7 \end{pmatrix} \end{matrix}$
Example $8$
(Ehrenfest Model) The following is a special case of a model, called the Ehrenfest model, ${ }^3$ that has been used to explain diffusion of gases. The general model will be discussed in detail in Section 11.5. We have two urns that, between them, contain four balls. At each step, one of the four balls is chosen at random and moved from the urn that it is in into the other urn. We choose, as states, the number of balls in the first urn. The transition matrix is then
$\mathbf{P} = \begin{matrix} & \begin{matrix} 0 & 1 & 2 & 3 & 4 \end{matrix} \\ \begin{matrix} 0 \\ 1 \\ 2 \\ 3 \\ 4 \end{matrix} & \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1/4 & 0 & 3/4 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 3/4 & 0 & 1/4 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix}$
${ }^3$ P. and T. Ehrenfest, "Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem," Physikalische Zeitschrift, vol. 8 (1907), pp. 311-314.
Example $9$
(Gene Model) The simplest type of inheritance of traits in animals occurs when a trait is governed by a pair of genes, each of which may be of two types, say $\mathrm{G}$ and $\mathrm{g}$. An individual may have a GG combination or Gg (which is genetically the same as $\mathrm{gG}$ ) or gg. Very often the GG and Gg types are indistinguishable in appearance, and then we say that the $\mathrm{G}$ gene dominates the g gene. An individual is called dominant if he or she has GG genes, recessive if he or she has gg, and hybrid with a Gg mixture.
In the mating of two animals, the offspring inherits one gene of the pair from each parent, and the basic assumption of genetics is that these genes are selected at random, independently of each other. This assumption determines the probability of occurrence of each type of offspring. The offspring of two purely dominant parents must be dominant, of two recessive parents must be recessive, and of one dominant and one recessive parent must be hybrid.
In the mating of a dominant and a hybrid animal, each offspring must get a $\mathrm{G}$ gene from the former and has an equal chance of getting $\mathrm{G}$ or $\mathrm{g}$ from the latter. Hence there is an equal probability for getting a dominant or a hybrid offspring. Again, in the mating of a recessive and a hybrid, there is an even chance for getting either a recessive or a hybrid. In the mating of two hybrids, the offspring has an equal chance of getting $\mathrm{G}$ or $\mathrm{g}$ from each parent. Hence the probabilities are $1 / 4$ for $\mathrm{GG}, 1 / 2$ for $\mathrm{Gg}$, and $1 / 4$ for gg.
Consider a process of continued matings. We start with an individual of known genetic character and mate it with a hybrid. We assume that there is at least one offspring. An offspring is chosen at random and is mated with a hybrid and this process repeated through a number of generations. The genetic type of the chosen offspring in successive generations can be represented by a Markov chain. The states are dominant, hybrid, and recessive, and indicated by GG, Gg, and gg respectively.
The transition probabilities are
$\mathbf{P} = \begin{matrix} & \begin{matrix} \text{GG} & \text{Gg} & \text{gg} \end{matrix} \\ \begin{matrix} \text{GG} \\ \text{Gg} \\ \text{gg} \end{matrix} & \begin{pmatrix} 1 & 0 & 0 \\ .25 & .5 & .25 \\ 0 & .5 & .5 \end{pmatrix} \end{matrix}$
Example $10$
Modify Example $9$ as follows: Instead of mating the oldest offspring with a hybrid, we mate it with a dominant individual. The transition matrix is
$\mathbf{P} = \begin{matrix} & \begin{matrix} \text{GG} & \text{Gg} & \text{gg} \end{matrix} \\ \begin{matrix} \text{GG} \\ \text{Gg} \\ \text{gg} \end{matrix} & \begin{pmatrix} 1 & 0 & 0 \\ .5 & .5 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{matrix}$
Example $11$
We start with two animals of opposite sex, mate them, select two of their offspring of opposite sex, and mate those, and so forth. To simplify the example, we will assume that the trait under consideration is independent of sex.
Here a state is determined by a pair of animals. Hence, the states of our process will be: $s_1 = (\mathrm{GG},\mathrm{GG})$, $s_2 = (\mathrm{GG},\mathrm{Gg})$, $s_3 = (\mathrm{GG},\mathrm{gg})$, $s_4 = (\mathrm{Gg},\mathrm{Gg})$, $s_5 = (\mathrm{Gg},\mathrm{gg})$, and $s_6 = (\mathrm{gg},\mathrm{gg})$.

We illustrate the calculation of transition probabilities in terms of the state $s_2$. When the process is in this state, one parent has GG genes, the other Gg. Hence, the probability of a dominant offspring is $1/2$. Then the probability of transition to $s_1$ (selection of two dominants) is $1/4$, transition to $s_2$ is $1/2$, and to $s_4$ is $1/4$. The other states are treated the same way. The transition matrix of this chain is:

$\mathbf{P} = \begin{matrix} & \begin{matrix} \text{GG,GG} & \text{GG,Gg} & \text{GG,gg} & \text{Gg,Gg} & \text{Gg,gg} & \text{gg,gg} \end{matrix} \\ \begin{matrix} \text{GG,GG} \\ \text{GG,Gg} \\ \text{GG,gg} \\ \text{Gg,Gg} \\ \text{Gg,gg} \\ \text{gg,gg} \end{matrix} & \begin{pmatrix} 1.000 & .000 & .000 & .000 & .000 & .000 \\ .250 & .500 & .000 & .250 & .000 & .000 \\ .000 & .000 & .000 & 1.000 & .000 & .000 \\ .062 & .250 & .125 & .250 & .250 & .062 \\ .000 & .000 & .000 & .250 & .500 & .250 \\ .000 & .000 & .000 & .000 & .000 & 1.000 \end{pmatrix} \end{matrix}$
Example $12$
(Stepping Stone Model) Our final example is another example that has been used in the study of genetics. It is called the stepping stone model. ${ }^4$ In this model we have an $n$-by- $n$ array of squares, and each square is initially any one of $k$ different colors. For each step, a square is chosen at random. This square then chooses one of its eight neighbors at random and assumes the color of that neighbor. To avoid boundary problems, we assume that if a square $S$ is on the left-hand boundary, say, but not at a corner, it is adjacent to the square $T$ on the right-hand boundary in the same row as $S$, and $S$ is also adjacent to the squares just above and below $T$. A similar assumption is made about squares on the upper and lower boundaries. The top left-hand corner square is adjacent to three obvious neighbors, namely the squares below it, to its right, and diagonally below and to the right. It has five other neighbors, which are as follows: the other three corner squares, the square below the upper right-hand corner, and the square to the right of the bottom left-hand corner. The other three corners also have, in a similar way, eight neighbors. (These adjacencies are much easier to understand if one imagines making the array into a cylinder by gluing the top and bottom edge together, and then making the cylinder into a doughnut by gluing the two circular boundaries together.) With these adjacencies, each square in the array is adjacent to exactly eight other squares.
A state in this Markov chain is a description of the color of each square. For this Markov chain the number of states is $k^{n^2}$, which for even a small array of squares is enormous. This is an example of a Markov chain that is easy to simulate but difficult to analyze in terms of its transition matrix. The program SteppingStone simulates this chain. We have started with a random initial configuration of two colors with $n=20$ and show the result after the process has run for some time in Figure $2$.

${ }^4$ S. Sawyer, "Results for The Stepping Stone Model for Migration in Population Genetics," Annals of Probability, vol. 4 (1979), pp. 699-728.
This is an example of an absorbing Markov chain. This type of chain will be studied in Section 11.2. One of the theorems proved in that section, applied to the present example, implies that with probability 1 , the stones will eventually all be the same color. By watching the program run, you can see that territories are established and a battle develops to see which color survives. At any time the probability that a particular color will win out is equal to the proportion of the array of this color. You are asked to prove this in Exercise 11.2.32.
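A sketch of such a simulation (our own Python; the grid size and number of colors are as in the text), with wrap-around indexing supplying the doughnut adjacencies:

```python
import random

n, k = 20, 2
grid = [[random.randrange(k) for _ in range(n)] for _ in range(n)]

neighbors = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1) if (a, b) != (0, 0)]

for _ in range(100000):
    i, j = random.randrange(n), random.randrange(n)   # choose a square at random
    di, dj = random.choice(neighbors)                 # choose one of its 8 neighbors
    grid[i][j] = grid[(i + di) % n][(j + dj) % n]     # assume the neighbor's color
```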
Exercises
Exercise $1$:
It is raining in the Land of $\mathrm{Oz}$. Determine a tree and a tree measure for the next three days' weather. Find $\mathbf{w}^{(1)}, \mathbf{w}^{(2)}$, and $\mathbf{w}^{(3)}$ and compare with the results obtained from $\mathbf{P}, \mathbf{P}^2$, and $\mathbf{P}^3$.
Exercise $2$:
In Example $4$, let $a=0$ and $b=1/2$. Find $\mathbf{P}, \mathbf{P}^2$, and $\mathbf{P}^3$. What would $\mathbf{P}^n$ be? What happens to $\mathbf{P}^n$ as $n$ tends to infinity? Interpret this result.
Exercise $3$:
In Example $5$, find $\mathbf{P}, \mathbf{P}^2$, and $\mathbf{P}^3$. What is $\mathbf{P}^n$?
Exercise $4$:
For Example $6$, find the probability that the grandson of a man from Harvard went to Harvard.
Exercise $5$:
In Example $7$, find the probability that the grandson of a man from Harvard went to Harvard.
Exercise $6$:
In Example $9$, assume that we start with a hybrid bred to a hybrid. Find $\mathbf{u}^{(1)}, \mathbf{u}^{(2)}$, and $\mathbf{u}^{(3)}$. What would $\mathbf{u}^{(n)}$ be?
Exercise $7$:
Find the matrices $\mathbf{P}^2, \mathbf{P}^3, \mathbf{P}^4$, and $\mathbf{P}^n$ for the Markov chain determined by the transition matrix $\mathbf{P}=\left(\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right)$. Do the same for the transition matrix $\mathbf{P}=\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)$. Interpret what happens in each of these processes.
Exercise $8$:
A certain calculating machine uses only the digits 0 and 1 . It is supposed to transmit one of these digits through several stages. However, at every stage, there is a probability $p$ that the digit that enters this stage will be changed when it leaves and a probability $q=1-p$ that it won't. Form a Markov chain to represent the process of transmission by taking as states the digits 0 and 1 . What is the matrix of transition probabilities?
Exercise $9$:
For the Markov chain in Exercise $8$, draw a tree and assign a tree measure assuming that the process begins in state 0 and moves through two stages of transmission. What is the probability that the machine, after two stages, produces the digit 0 (i.e., the correct digit)? What is the probability that the machine never changed the digit from 0? Now let $p=.1$. Using the program MatrixPowers, compute the 100th power of the transition matrix. Interpret the entries of this matrix. Repeat this with $p=.2$. Why do the 100th powers appear to be the same?
Exercise $10$:
Modify the program MatrixPowers so that it prints out the average $\mathbf{A}_n$ of the powers $\mathbf{P}^n$, for $n=1$ to $N$. Try your program on the Land of $\mathrm{Oz}$ example and compare $\mathbf{A}_n$ and $\mathbf{P}^n$.
Exercise $11$:
Assume that a man's profession can be classified as professional, skilled laborer, or unskilled laborer. Assume that, of the sons of professional men, 80 percent are professional, 10 percent are skilled laborers, and 10 percent are unskilled laborers. In the case of sons of skilled laborers, 60 percent are skilled laborers, 20 percent are professional, and 20 percent are unskilled. Finally, in the case of unskilled laborers, 50 percent of the sons are unskilled laborers, and 25 percent each are in the other two categories. Assume that every man has at least one son, and form a Markov chain by following the profession of a randomly chosen son of a given family through several generations. Set up the matrix of transition probabilities. Find the probability that a randomly chosen grandson of an unskilled laborer is a professional man.
Exercise $12$:
In Exercise $11$, we assumed that every man has a son. Assume instead that the probability that a man has at least one son is .8. Form a Markov chain with four states. If a man has a son, the probability that this son is in a particular profession is the same as in Exercise $11$. If there is no son, the process moves to state four which represents families whose male line has died out. Find the matrix of transition probabilities and find the probability that a randomly chosen grandson of an unskilled laborer is a professional man.
Exercise $13$:
Write a program to compute $\mathbf{u}^{(n)}$ given $\mathbf{u}$ and $\mathbf{P}$. Use this program to compute $\mathbf{u}^{(10)}$ for the Land of $\mathrm{Oz}$ example, with $\mathbf{u}=(0,1,0)$, and with $\mathbf{u}=(1 / 3,1 / 3,1 / 3)$.
Exercise $14$:
Using the program MatrixPowers, find $\mathbf{P}^1$ through $\mathbf{P}^6$ for Examples $9$ and $10$. See if you can predict the long-range probability of finding the process in each of the states for these examples.
Exercise $15$:
Write a program to simulate the outcomes of a Markov chain after $n$ steps, given the initial starting state and the transition matrix $\mathbf{P}$ as data (see Example $12$). Keep this program for use in later problems.
Exercise $16$:
Modify the program of Exercise $15$ so that it keeps track of the proportion of times in each state in $n$ steps. Run the modified program for different starting states for Example $1$ and Example $8$. Does the initial state affect the proportion of time spent in each of the states if $n$ is large?
Exercise $17$:
Prove Theorem $1$.
Exercise $18$:
Prove Theorem $2$.
Exercise $19$:
Consider the following process. We have two coins, one of which is fair, and the other of which has heads on both sides. We give these two coins to our friend, who chooses one of them at random (each with probability $1 / 2$ ). During the rest of the process, she uses only the coin that she chose. She now proceeds to toss the coin many times, reporting the results. We consider this process to consist solely of what she reports to us.
(a) Given that she reports a head on the $n$th toss, what is the probability that a head is thrown on the $(n+1)$ st toss?
(b) Consider this process as having two states, heads and tails. By computing the other three transition probabilities analogous to the one in part (a), write down a "transition matrix" for this process.
(c) Now assume that the process is in state "heads" on both the $(n-1)$ st and the $n$th toss. Find the probability that a head comes up on the $(n+1)$ st toss.
(d) Is this process a Markov chain?
The subject of Markov chains is best studied by considering special types of Markov chains. The first type that we shall study is called an absorbing Markov chain.
A state $s_i$ of a Markov chain is called absorbing if it is impossible to leave it (i.e., $p_{ii} = 1$). A Markov chain is called absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state (not necessarily in one step).
In an absorbing Markov chain, a state which is not absorbing is called transient.
Drunkard’s Walk
Example $1$
A man walks along a four-block stretch of Park Avenue (see Figure 11.1.3). If he is at corner 1, 2, or 3, then he walks to the left or right with equal probability. He continues until he reaches corner 4, which is a bar, or corner 0, which is his home. If he reaches either home or the bar, he stays there.
We form a Markov chain with states 0, 1, 2, 3, and 4. States 0 and 4 are absorbing states. The transition matrix is then
$\mathbf{P} = \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ \begin{matrix}0\\1\\2\\3\\4\end{matrix} & \begin{pmatrix}1 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1\end{pmatrix} \end{matrix}$
The states 1, 2, and 3 are transient states, and from any of these it is possible to reach the absorbing states 0 and 4. Hence the chain is an absorbing chain. When a process reaches an absorbing state, we shall say that it is absorbed.
The most obvious question that can be asked about such a chain is: What is the probability that the process will eventually reach an absorbing state? Other interesting questions include: (a) What is the probability that the process will end up in a given absorbing state? (b) On the average, how long will it take for the process to be absorbed? (c) On the average, how many times will the process be in each transient state? The answers to all these questions depend, in general, on the state from which the process starts as well as the transition probabilities.
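Before developing the matrix machinery that answers these questions exactly, it is instructive to estimate them by simulation. Here is a minimal sketch in Python (our own illustration; the function name `walk` and the trial count are arbitrary choices, not part of the text's programs):

```python
import random

def walk(start=2):
    """Run one Drunkard's Walk; return (absorbing state, number of steps)."""
    state, steps = start, 0
    while state not in (0, 4):
        state += random.choice((-1, 1))  # step left or right with probability 1/2
        steps += 1
    return state, steps

trials = 10000
results = [walk() for _ in range(trials)]
print("estimate of P(absorbed at 0):", sum(1 for s, _ in results if s == 0) / trials)
print("estimate of expected time to absorption:", sum(t for _, t in results) / trials)
```

Starting at corner 2, the two estimates should be near 1/2 and 4; the exact values are derived below.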
Canonical Form
Consider an arbitrary absorbing Markov chain. Renumber the states so that the transient states come first. If there are $r$ absorbing states and $t$ transient states, the transition matrix will have the following canonical form
$\mathbf{P} = \begin{matrix} & \begin{matrix}\text{TR.} & \text{ABS.}\end{matrix} \\ \begin{matrix}\text{TR.} \\ \text{ABS.}\end{matrix} & \left(\begin{array}{c|c} \mathbf{Q} & \mathbf{R} \\ \hline \mathbf{0} & \mathbf{I} \end{array}\right) \end{matrix}$
Here $\mathbf{I}$ is an $r$-by-$r$ identity matrix, $\mathbf{0}$ is an $r$-by-$t$ zero matrix, $\mathbf{R}$ is a nonzero $t$-by-$r$ matrix, and $\mathbf{Q}$ is a $t$-by-$t$ matrix. The first $t$ states are transient and the last $r$ states are absorbing.
In Section 11.1, we saw that the entry $p_{ij}^{(n)}$ of the matrix $\mathbf{P}^n$ is the probability of being in the state $s_j$ after $n$ steps, when the chain is started in state $s_i$. A standard matrix algebra argument shows that $\mathbf{P}^n$ is of the form
$\mathbf{P}^n = \begin{matrix} & \begin{matrix}\text{TR.} & \text{ABS.}\end{matrix} \\ \begin{matrix}\text{TR.} \\ \text{ABS.}\end{matrix} & \left(\begin{array}{c|c} \mathbf{Q}^n & * \\ \hline \mathbf{0} & \mathbf{I} \end{array}\right) \end{matrix}$
where the asterisk $*$ stands for the $t$-by-$r$ matrix in the upper right-hand corner of $\mathbf{P}^n$. (This submatrix can be written in terms of $\mathbf{Q}$ and $\mathbf{R}$, but the expression is complicated and is not needed at this time.) The form of $\mathbf{P}^n$ shows that the entries of $\mathbf{Q}^n$ give the probabilities for being in each of the transient states after $n$ steps for each possible transient starting state. For our first theorem we prove that the probability of being in the transient states after $n$ steps approaches zero. Thus every entry of $\mathbf{Q}^n$ must approach zero as $n$ approaches infinity (i.e., $\mathbf{Q}^n \to \mathbf{0}$).
Probability of Absorption
Theorem $1$
In an absorbing Markov chain, the probability that the process will be absorbed is 1 (i.e., $\mathbf{Q}^n \to \mathbf{0}$ as $n \to \infty$).
Proof. From each nonabsorbing state $s_j$ it is possible to reach an absorbing state. Let $m_j$ be the minimum number of steps required to reach an absorbing state, starting from $s_j$. Let $p_j$ be the probability that, starting from $s_j$, the process will not reach an absorbing state in $m_j$ steps. Then $p_j <1$. Let $m$ be the largest of the $m_j$ and let $p$ be the largest of $p_j$. The probability of not being absorbed in $m$ steps is less than or equal to $p$, in $2m$ steps less than or equal to $p^2$, etc. Since $p<1$ these probabilities tend to 0. Since the probability of not being absorbed in $n$ steps is monotone decreasing, these probabilities also tend to 0, hence $\lim_{n \rightarrow \infty } \mathbf{Q}^n = 0.$
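For the Drunkard's Walk this convergence is easy to watch numerically. A short check, assuming the numpy library (the matrix $\mathbf{Q}$ is the one computed in Example $2$ below):

```python
import numpy as np

# Q for the transient states 1, 2, 3 of the Drunkard's Walk
Q = np.array([[0, 0.5, 0],
              [0.5, 0, 0.5],
              [0, 0.5, 0]])
for n in (1, 10, 50):
    print(n, np.linalg.matrix_power(Q, n).max())  # largest entry of Q^n
```

The printed maxima shrink geometrically, so every entry of $\mathbf{Q}^n$ tends to 0, as the theorem asserts.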
The Fundamental Matrix
Theorem $2$
For an absorbing Markov chain the matrix $\mathbf{I} - \mathbf{Q}$ has an inverse $\mathbf{N}$ and $\mathbf{N}=\mathbf{I}+\mathbf{Q}+\mathbf{Q}^2+\cdots$. The $ij$-entry $n_{ij}$ of the matrix $\mathbf{N}$ is the expected number of times the chain is in state $s_j$, given that it starts in state $s_i$. The initial state is counted if $i=j$.
Proof. Let $(\mathbf{I} - \mathbf{Q})\mathbf{x} = \mathbf{0}$; that is, $\mathbf{x} = \mathbf{Q}\mathbf{x}$. Then, iterating this we see that $\mathbf{x} = \mathbf{Q}^{n}\mathbf{x}$. Since $\mathbf{Q}^{n} \rightarrow \mathbf{0}$, we have $\mathbf{Q}^n\mathbf{x} \rightarrow \mathbf{0}$, so $\mathbf{x} = \mathbf{0}$. Thus $(\mathbf{I} - \mathbf{Q})^{-1} = \mathbf{N}$ exists. Note next that
$(\mathbf{I} - \mathbf{Q}) (\mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^n) = \mathbf{I} - \mathbf{Q}^{n + 1}\ .$
Thus multiplying both sides by $\mathbf{N}$ gives $\mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^n = \mathbf{N} (\mathbf{I} - \mathbf{Q}^{n + 1})\ .$
Letting $n$ tend to infinity we have
$\mathbf{N} = \mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots\ .$
Let $s_i$ and $s_j$ be two transient states, and assume throughout the remainder of the proof that $i$ and $j$ are fixed. Let $X^{(k)}$ be a random variable which equals 1 if the chain is in state $s_j$ after $k$ steps, and equals 0 otherwise. For each $k$, this random variable depends upon both $i$ and $j$; we choose not to explicitly show this dependence in the interest of clarity. We have
$P(X^{(k)} = 1) = q_{ij}^{(k)}\ ,$ and $P(X^{(k)} = 0) = 1 - q_{ij}^{(k)}\ ,$
where $q_{ij}^{(k)}$ is the $ij$th entry of $\mathbf{Q}^k$. These equations hold for $k = 0$ since $\mathbf{Q}^0 = \mathbf{I}$. Therefore, since $X^{(k)}$ is a 0-1 random variable, $E(X^{(k)}) = q_{ij}^{(k)}$.
The expected number of times the chain is in state $s_j$ in the first $n$ steps, given that it starts in state $s_i$, is clearly
$E\Bigl(X^{(0)} + X^{(1)} + \cdots + X^{(n)} \Bigr) = q_{ij}^{(0)} + q_{ij}^{(1)} + \cdots + q_{ij}^{(n)}\ .$
Letting $n$ tend to infinity we have
$E\Bigl(X^{(0)} + X^{(1)} + \cdots \Bigr) = q_{ij}^{(0)} + q_{ij}^{(1)} + \cdots = n_{ij} \ .$
Definition: Fundamental Matrix
For an absorbing Markov chain $\mathbf{P}$, the matrix $\mathbf{N} = (\mathbf{I} - \mathbf{Q})^{-1}$ is called the fundamental matrix for $\mathbf{P}$. The entry $n_{ij}$ of $\mathbf{N}$ gives the expected number of times that the process is in the transient state $s_j$ if it is started in the transient state $s_i$.
Example $2$
(Example $1$ continued) In the Drunkard’s Walk example, the transition matrix in canonical form is
$\mathbf{P} = \begin{matrix} & \begin{matrix}1&&2&&3&&0&&4\end{matrix} \\ \begin{matrix}1\\2\\3\\0\\4\end{matrix} & \left(\begin{array}{ccc|cc} 0 & 1/2 & 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 0 & 1/2 \\ \hline 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array}\right) \end{matrix}$
From this we see that the matrix $\mathbf{Q}$ is
$\mathbf{Q}=\left(\begin{array}{ccc} 0 & 1/2 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/2 & 0 \end{array}\right)$
and
$\mathbf{I} - \mathbf{Q} = \pmatrix{ 1 & -1/2 & 0 \cr -1/2 & 1 & -1/2 \cr 0 & -1/2 & 1 \cr}\ .$
Computing $(\mathbf{I} - \mathbf{Q})^{-1}$, we find
$\mathbf{N}=(\mathbf{I}-\mathbf{Q})^{-1}= \begin{matrix} & \begin{matrix}1&&2&&3\end{matrix} \\ \begin{matrix}1\\2\\3\end{matrix} & \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \end{matrix}$
From the middle row of $\mathbf{N}$, we see that if we start in state 2, then the expected number of times in states 1, 2, and 3 before being absorbed are 1, 2, and 1.
Time to Absorption
We now consider the question: Given that the chain starts in state $s_i$, what is the expected number of steps before the chain is absorbed? The answer is given in the next theorem.
Theorem $3$
Let $t_i$ be the expected number of steps before the chain is absorbed, given that the chain starts in state $s_i$, and let $\mathbf{t}$ be the column vector whose $i$th entry is $t_i$. Then
$\mathbf{t} = \mathbf{N}\mathbf{c}\ ,$
where $\mathbf{c}$ is a column vector all of whose entries are 1.
Proof. If we add all the entries in the $i$th row of $\mathbf{N}$, we will have the expected number of times in any of the transient states for a given starting state $s_i$, that is, the expected time required before being absorbed. Thus, $t_i$ is the sum of the entries in the $i$th row of $\mathbf{N}$. If we write this statement in matrix form, we obtain the theorem.
Absorption Probabilities
Theorem $4$
Let $b_{ij}$ be the probability that an absorbing chain will be absorbed in the absorbing state $s_j$ if it starts in the transient state $s_i$. Let $\mathbf{B}$ be the matrix with entries $b_{ij}$. Then $\mathbf{B}$ is a $t$-by-$r$ matrix, and
$\mathbf{B} = \mathbf{N} \mathbf{R}\ ,$
where $\mathbf{N}$ is the fundamental matrix and $\mathbf{R}$ is as in the canonical form.
Proof. We have \begin{aligned} \mathbf{B}_{ij} &= \sum_n\sum_k q_{ik}^{(n)} r_{kj} \\ &= \sum_k \sum_n q_{ik}^{(n)} r_{kj} \\ &= \sum_k n_{ik}r_{kj} \\ &= (\mathbf{N}\mathbf{R})_{ij}\ .\end{aligned} This completes the proof.
Another proof of this is given in Exercise $34$.
Example $3$
(Example $2$ continued) In the Drunkard’s Walk example, we found that
$\mathbf{N} = \begin{matrix} & \begin{matrix}1&&2&&3\end{matrix} \\ \begin{matrix}1\\2\\3\end{matrix} & \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \end{matrix}$
Hence,
\begin{aligned} \mathbf{t} = \mathbf{N}\mathbf{c} &= \pmatrix{ 3/2 & 1 & 1/2 \cr 1 & 2 & 1 \cr 1/2 & 1 & 3/2 \cr } \pmatrix{ 1 \cr 1 \cr 1 \cr } \\ &= \pmatrix{ 3 \cr 4 \cr 3 \cr }\ .\end{aligned}
Thus, starting in states 1, 2, and 3, the expected times to absorption are 3, 4, and 3, respectively.
From the canonical form,
$\mathbf{R} = \begin{matrix} & \begin{matrix}0&&4\end{matrix} \\ \begin{matrix}1\\2\\3\end{matrix} & \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{pmatrix} \end{matrix}$
Hence,
$\mathbf{B}=\mathbf{N}\mathbf{R}=\left(\begin{array}{ccc} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{array}\right) \cdot \left(\begin{array}{cc} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{array}\right) = \begin{matrix} & \begin{matrix}0&&4\end{matrix} \\ \begin{matrix}1\\2\\3\end{matrix} & \begin{pmatrix} 3/4 & 1/4 \\ 1/2 & 1/2 \\ 1/4 & 3/4 \end{pmatrix} \end{matrix}$
Here the first row tells us that, starting from state $1$, there is probability 3/4 of absorption in state $0$ and 1/4 of absorption in state $4$.
Computation
The fact that we have been able to obtain these three descriptive quantities in matrix form makes it very easy to write a computer program that determines these quantities for a given absorbing chain matrix.
The program AbsorbingChain calculates the basic descriptive quantities of an absorbing Markov chain.
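In outline, the whole computation is a few lines of linear algebra. Here is a minimal sketch in Python with numpy (our own rendition; the interface of the actual AbsorbingChain program may differ):

```python
import numpy as np

def absorbing_chain(Q, R):
    """Return the fundamental matrix N, the times t = Nc, and B = NR."""
    t_states = Q.shape[0]
    N = np.linalg.inv(np.eye(t_states) - Q)   # fundamental matrix
    t = N @ np.ones(t_states)                 # expected steps to absorption
    B = N @ R                                 # absorption probabilities
    return N, t, B

# Drunkard's Walk with 5 blocks: transient states 1..4, absorbing states 0 and 5
Q = 0.5 * (np.eye(4, k=1) + np.eye(4, k=-1))
R = np.zeros((4, 2))
R[0, 0] = R[3, 1] = 0.5
N, t, B = absorbing_chain(Q, R)
print(N, t, B, sep="\n")
```

The output agrees with the matrices displayed next.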
We have run the program AbsorbingChain for the example of the drunkard’s walk (Example $1$) with 5 blocks. The results are as follows:
$\mathbf{Q} = \begin{matrix} & \begin{matrix}1&&2&&3&&4\end{matrix} \\ \begin{matrix}1\\2\\3\\4\end{matrix} & \begin{pmatrix} .00 & .50 & .00 & .00 \\ .50 & .00 & .50 & .00 \\ .00 & .50 & .00 & .50 \\ .00 & .00 & .50 & .00 \end{pmatrix} \end{matrix}$
$\mathbf{R} = \begin{matrix} & \begin{matrix}0&&5\end{matrix} \\ \begin{matrix}1\\2\\3\\4\end{matrix} & \begin{pmatrix} .50 & .00 \\ .00 & .00 \\ .00 & .00 \\ .00 & .50 \end{pmatrix} \end{matrix}$
$\mathbf{N} = \begin{matrix} & \begin{matrix}1&&2&&3&&4\end{matrix} \\ \begin{matrix}1\\2\\3\\4\end{matrix} & \begin{pmatrix} 1.60 & 1.20 & .80 & .40 \\ 1.20 & 2.40 & 1.60 & .80 \\ .80 & 1.60 & 2.40 & 1.20 \\ .40 & .80 & 1.20 & 1.60 \end{pmatrix} \end{matrix}$
$\mathbf{t} = \begin{matrix} \begin{matrix}1\\2\\3\\4\end{matrix} & \begin{pmatrix} 4.00 \\ 6.00 \\ 6.00 \\ 4.00 \end{pmatrix} \end{matrix}$
$\mathbf{B} = \begin{matrix} & \begin{matrix}0&&5\end{matrix} \\ \begin{matrix}1\\2\\3\\4\end{matrix} & \begin{pmatrix} .80 & .20 \\ .60 & .40 \\ .40 & .60 \\ .20 & .80 \end{pmatrix} \end{matrix}$
Note that the probability of reaching the bar before reaching home, starting at $x$, is $x/5$ (i.e., proportional to the distance of home from the starting point). (See Exercise $23$.)
Exercises
Exercise $1$
In Example 11.1.4, for what values of $a$ and $b$ do we obtain an absorbing Markov chain?
Exercise $2$
Show that Example 11.1.7 is an absorbing Markov chain.
Exercise $3$
Which of the genetics examples (Examples 11.1.9, 11.1.10, and 11.1.11) are absorbing?
Exercise $4$
Find the fundamental matrix $\mathbf{N}$ for Example 11.1.10.
Exercise $5$
For Example 11.1.11, verify that the following matrix is the inverse of $\mathbf{I} - \mathbf{Q}$ and hence is the fundamental matrix $\mathbf{N}$. $\mathbf{N} = \pmatrix{ 8/3 & 1/6 & 4/3 & 2/3 \cr 4/3 & 4/3 & 8/3 & 4/3 \cr 4/3 & 1/3 & 8/3 & 4/3 \cr 2/3 & 1/6 & 4/3 & 8/3 \cr}\ .$ Find $\mathbf{N} \mathbf{c}$ and $\mathbf{N} \mathbf{R}$. Interpret the results.
Exercise $6$
In the Land of Oz example (Example 11.1.1), change the transition matrix by making R an absorbing state. This gives
$\mathbf{N}= \begin{matrix} & \begin{matrix}R&&N&&S\end{matrix} \\begin{matrix}R\N\S\end{matrix} & \begin{pmatrix} 1 & 0 & 0 \ 1/2 & 0 & 1/2 \ 1/4&1/4&1/2 \end{pmatrix}\\end{matrix}$
Find the fundamental matrix $\mathbf{N}$, and also $\mathbf{Nc}$ and $\mathbf{NR}$. Interpret the results.
Exercise $7$
In Example 11.1.8, make states 0 and 4 into absorbing states. Find the fundamental matrix $\mathbf{N}$, and also $\mathbf{Nc}$ and $\mathbf{NR}$, for the resulting absorbing chain. Interpret the results.
Exercise $8$
In Example $1$ (Drunkard’s Walk) of this section, assume that the probability of a step to the right is 2/3, and a step to the left is 1/3. Find $\mathbf{N},~\mathbf{N}\mathbf{c}$, and $\mathbf{N}\mathbf{R}$. Compare these with the results of Example $3$.
Exercise $9$
A process moves on the integers 1, 2, 3, 4, and 5. It starts at 1 and, on each successive step, moves to an integer greater than its present position, moving with equal probability to each of the remaining larger integers. State five is an absorbing state. Find the expected number of steps to reach state five.
Exercise $10$
Using the result of Exercise $9$, make a conjecture for the form of the fundamental matrix if the process moves as in that exercise, except that it now moves on the integers from 1 to $n$. Test your conjecture for several different values of $n$. Can you conjecture an estimate for the expected number of steps to reach state $n$, for large $n$? (See Exercise $11$ for a method of determining this expected number of steps.)
Exercise $11$
Let $b_k$ denote the expected number of steps to reach $n$ from $n-k$, in the process described in Exercise $9$.
1. Define $b_0 = 0$. Show that for $k > 0$, we have $b_k = 1 + \frac 1k \bigl(b_{k-1} + b_{k-2} + \cdots + b_0\bigr)\ .$
2. Let $f(x) = b_0 + b_1 x + b_2 x^2 + \cdots\ .$ Using the recursion in part (a), show that $f(x)$ satisfies the differential equation $(1-x)^2 y' - (1-x) y - 1 = 0\ .$
3. Show that the general solution of the differential equation in part (b) is $y = \frac{-\log(1-x)}{1-x} + \frac c{1-x}\ ,$ where $c$ is a constant.
4. Use part (c) to show that $b_k = 1 + \frac 12 + \frac 13 + \cdots + \frac 1k\ .$ (A numerical check of this formula is sketched below.)
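The conjectured formula is easy to test numerically for several values of $n$ (compare Exercise $10$). A sketch assuming numpy; the chain construction here is our own:

```python
import numpy as np

n = 10                       # process moves on 1, ..., n; state n is absorbing
t_states = n - 1
Q = np.zeros((t_states, t_states))
for i in range(t_states):              # from state i+1, move uniformly upward
    for j in range(i + 1, t_states):   # targets that are still transient
        Q[i, j] = 1.0 / (n - (i + 1))
N = np.linalg.inv(np.eye(t_states) - Q)
t = N @ np.ones(t_states)              # expected steps to reach state n
harmonic = [sum(1.0 / m for m in range(1, k + 1)) for k in range(n - 1, 0, -1)]
print(np.allclose(t, harmonic))        # True: starting at n-k takes 1 + 1/2 + ... + 1/k steps
```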
Exercise $12$
Three tanks fight a three-way duel. Tank A has probability 1/2 of destroying the tank at which it fires, tank B has probability 1/3 of destroying the tank at which it fires, and tank C has probability 1/6 of destroying the tank at which it fires. The tanks fire together and each tank fires at the strongest opponent not yet destroyed. Form a Markov chain by taking as states the subsets of the set of tanks. Find $\mathbf{N},~\mathbf{N}\mathbf{c}$, and $\mathbf{N}\mathbf{R}$, and interpret your results. Hint: Take as states ABC, AC, BC, A, B, C, and none, indicating the tanks that could survive starting in state ABC. You can omit AB because this state cannot be reached from ABC.
Exercise $13$
Smith is in jail and has 3 dollars; he can get out on bail if he has 8 dollars. A guard agrees to make a series of bets with him. If Smith bets $A$ dollars, he wins $A$ dollars with probability .4 and loses $A$ dollars with probability .6. Find the probability that he wins 8 dollars before losing all of his money if
1. he bets 1 dollar each time (timid strategy).
2. he bets, each time, as much as possible but not more than necessary to bring his fortune up to 8 dollars (bold strategy).
3. Which strategy gives Smith the better chance of getting out of jail?
Exercise $14$
With the situation in Exercise $13$, consider the strategy such that for $i < 4$, Smith bets $\min(i,4 - i)$, and for $i \geq 4$, he bets according to the bold strategy, where $i$ is his current fortune. Find the probability that he gets out of jail using this strategy. How does this probability compare with that obtained for the bold strategy?
Exercise $15$
Consider the game of tennis when deuce is reached. If a player wins the next point, he has advantage. On the following point, he either wins the game or the game returns to deuce. Assume that for any point, player A has probability .6 of winning the point and player B has probability .4 of winning the point.
1. Set this up as a Markov chain with state 1: A wins; 2: B wins; 3: advantage A; 4: deuce; 5: advantage B.
2. Find the absorption probabilities.
3. At deuce, find the expected duration of the game and the probability that B will win.
Exercises $16$ and $17$ concern the inheritance of color-blindness, which is a sex-linked characteristic. There is a pair of genes, g and G, of which the former tends to produce color-blindness, the latter normal vision. The G gene is dominant. But a man has only one gene, and if this is g, he is color-blind. A man inherits one of his mother’s two genes, while a woman inherits one gene from each parent. Thus a man may be of type G or g, while a woman may be type GG or Gg or gg. We will study a process of inbreeding similar to that of Example 11.1.9 by constructing a Markov chain.
Exercise $16$
List the states of the chain. Hint: There are six. Compute the transition probabilities. Find the fundamental matrix $\mathbf{N}$, $\mathbf{N}\mathbf{c}$, and $\mathbf{N}\mathbf{R}$.
Exercise $17$
Show that in both Example 11.1.11 and the example just given, the probability of absorption in a state having genes of a particular type is equal to the proportion of genes of that type in the starting state. Show that this can be explained by the fact that a game in which your fortune is the number of genes of a particular type in the state of the Markov chain is a fair game.${}^5$
Exercise $18$
Assume that a student going to a certain four-year medical school in northern New England has, each year, a probability $q$ of flunking out, a probability $r$ of having to repeat the year, and a probability $p$ of moving on to the next year (in the fourth year, moving on means graduating).
1. Form a transition matrix for this process taking as states F, 1, 2, 3, 4, and G where F stands for flunking out and G for graduating, and the other states represent the year of study.
2. For the case $q = .1$, $r = .2$, and $p = .7$ find the time a beginning student can expect to be in the second year. How long should this student expect to be in medical school?
3. Find the probability that this beginning student will graduate.
Exercise $19$
(E. Brown${}^6$) Mary and John are playing the following game: They have a three-card deck marked with the numbers 1, 2, and 3 and a spinner with the numbers 1, 2, and 3 on it. The game begins by dealing the cards out so that the dealer gets one card and the other person gets two. A move in the game consists of a spin of the spinner. The person having the card with the number that comes up on the spinner hands that card to the other person. The game ends when someone has all the cards.
1. Set up the transition matrix for this absorbing Markov chain, where the states correspond to the number of cards that Mary has.
2. Find the fundamental matrix.
3. On the average, how many moves will the game last?
4. If Mary deals, what is the probability that John will win the game?
Exercise $20$
Assume that an experiment has $m$ equally probable outcomes. Show that the expected number of independent trials before the first occurrence of $k$ consecutive occurrences of one of these outcomes is $(m^k - 1)/(m - 1)$. Hint: Form an absorbing Markov chain with states 1, 2, …, $k$ with state $i$ representing the length of the current run. The expected time until a run of $k$ is 1 more than the expected time until absorption for the chain started in state 1. It has been found that, in the decimal expansion of pi, starting with the 24,658,601st digit, there is a run of nine 7’s. What would your result say about the expected number of digits necessary to find such a run if the digits are produced randomly?
Exercise $21$
(Roberts${}^7$) A city is divided into 3 areas 1, 2, and 3. It is estimated that amounts $u_1, u_2$, and $u_3$ of pollution are emitted each day from these three areas. A fraction $q_{ij}$ of the pollution from region $i$ ends up the next day at region $j$. A fraction $q_i = 1-\sum_j q_{ij} > 0$ goes into the atmosphere and escapes. Let $w_i^{(n)}$ be the amount of pollution in area $i$ after $n$ days.
(a) Show that $\mathbf{w}^{(n)}=\mathbf{u}+\mathbf{u} \mathbf{Q}+\cdots+\mathbf{u} \mathbf{Q}^{n-1}$.
(b) Show that $\mathbf{w}^{(n)} \rightarrow \mathbf{w}$, and show how to compute $\mathbf{w}$ from $\mathbf{u}$.
(c) The government wants to limit pollution levels to a prescribed level by prescribing w. Show how to determine the levels of pollution $\mathbf{u}$ which would result in a prescribed limiting value $\mathbf{w}$.
Exercise $22$
In the Leontief economic model,${}^8$ there are $n$ industries 1, 2, …, $n$. The $i$th industry requires an amount $0 \leq q_{ij} \leq 1$ of goods (in dollar value) from company $j$ to produce 1 dollar’s worth of goods. The outside demand on the industries, in dollar value, is given by the vector $\mathbf{d} = (d_1,d_2,\ldots,d_n)$. Let $\mathbf{Q}$ be the matrix with entries $q_{ij}$.
1. Show that if the industries produce total amounts given by the vector $\mathbf{x} = (x_1,x_2,\ldots,x_n)$ then the amounts of goods of each type that the industries will need just to meet their internal demands is given by the vector $\mathbf{x} \mathbf{Q}$.
2. Show that in order to meet the outside demand $\mathbf{d}$ and the internal demands the industries must produce total amounts given by a vector $\mathbf{x} = (x_1,x_2,\ldots,x_n)$ which satisfies the equation $\mathbf{x} = \mathbf{x} \mathbf{Q} + \mathbf{d}$.
3. Show that if $\mathbf{Q}$ is the $\mathbf{Q}$-matrix for an absorbing Markov chain, then it is possible to meet any outside demand $\mathbf{d}$.
4. Assume that the row sums of $\mathbf{Q}$ are less than or equal to 1. Give an economic interpretation of this condition. Form a Markov chain by taking the states to be the industries and the transition probabilities to be the $q_{ij}$. Add one absorbing state 0. Define $q_{i0} = 1 - \sum_j q_{ij}\ .$ Show that this chain will be absorbing if every company is either making a profit or ultimately depends upon a profit-making company.
5. Define $\mathbf{x} \mathbf{c}$ to be the gross national product. Find an expression for the gross national product in terms of the demand vector $\mathbf{d}$ and the vector $\mathbf{t}$ giving the expected time to absorption.
Exercise $23$
A gambler plays a game in which on each play he wins one dollar with probability $p$ and loses one dollar with probability $q = 1 - p$. The Gambler's Ruin problem is the problem of finding the probability $w_x$ of winning an amount $T$ before losing everything, starting with state $x$. Show that this problem may be considered to be an absorbing Markov chain with states 0, 1, 2, …, $T$ with 0 and $T$ absorbing states. Suppose that a gambler has probability $p = .48$ of winning on each play. Suppose, in addition, that the gambler starts with 50 dollars and that $T = 100$ dollars. Simulate this game 100 times and see how often the gambler is ruined. This estimates $w_{50}$.
Exercise $24$
Show that $w_x$ of Exercise $23$ satisfies the following conditions:
1. $w_x = pw_{x + 1} + qw_{x - 1}$ for $x = 1$, 2, …, $T - 1$.
2. $w_0 = 0$.
3. $w_T = 1$.
Show that these conditions determine $w_x$. Show that, if $p = q = 1/2$, then $w_x = \frac xT$ satisfies (a), (b), and (c) and hence is the solution. If $p \ne q$, show that $w_x = \frac{(q/p)^x - 1}{(q/p)^T - 1}$ satisfies these conditions and hence gives the probability of the gambler winning.
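These closed forms are easy to evaluate numerically (Exercise $25$ asks for such a program). A minimal sketch in Python, using the parameters of Exercise $23$:

```python
def win_probability(x, T, p):
    """Probability of reaching fortune T before 0, starting at x, win probability p."""
    if p == 0.5:
        return x / T
    r = (1 - p) / p                    # the ratio q/p
    return (r**x - 1) / (r**T - 1)

print(win_probability(50, 100, 0.50))  # 0.5 by symmetry
print(win_probability(50, 100, 0.48))  # roughly 0.018: a small edge is ruinous
```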
Exercise $25$
Write a program to compute the probability $w_x$ of Exercise $24$ for given values of $x$, $p$, and $T$. Study the probability that the gambler will ruin the bank in a game that is only slightly unfavorable, say $p = .49$, if the bank has significantly more money than the gambler.
Exercise $26$
We considered the two examples of the Drunkard’s Walk corresponding to the cases $n = 4$ and $n = 5$ blocks (see Example $1$). Verify that in these two examples the expected time to absorption, starting at $x$, is equal to $x(n - x)$. See if you can prove that this is true in general. Hint: Show that if $f(x)$ is the expected time to absorption then $f(0) = f(n) = 0$ and $f(x) = (1/2)f(x - 1) + (1/2)f(x + 1) + 1$ for $0 < x < n$. Show that if $f_1(x)$ and $f_2(x)$ are two solutions, then their difference $g(x)$ is a solution of the equation $g(x) = (1/2)g(x - 1) + (1/2)g(x + 1)\ .$ Also, $g(0) = g(n) = 0$. Show that it is not possible for $g(x)$ to have a strict maximum or a strict minimum at the point $i$, where $1 \le i \le n-1$. Use this to show that $g(i) = 0$ for all $i$. This shows that there is at most one solution. Then verify that the function $f(x) = x(n-x)$ is a solution.
Exercise $27$
Consider an absorbing Markov chain with state space $S$. Let $f$ be a function defined on $S$ with the property that $f(i) = \sum_{j \in S} p_{ij} f(j)\ ,$ or in vector form $\mathbf{f} = \mathbf{Pf}\ .$ Then $f$ is called a harmonic function for $\mathbf{P}$. If you imagine a game in which your fortune is $f(i)$ when you are in state $i$, then the harmonic condition means that the game is fair in the sense that your expected fortune after one step is the same as it was before the step.
1. Show that for $f$ harmonic $\mathbf{f} = \mathbf{P}^n \mathbf{f}$ for all $n$.
2. Show, using (a), that for $f$ harmonic $\mathbf{f} = \mathbf{P}^\infty \mathbf{f}\ ,$ where $\mathbf{P}^\infty = \lim_{n \to \infty} \mathbf{P}^n = \pmatrix{ \mathbf{0} & \mathbf{B} \cr \mathbf{0} & \mathbf{I} \cr}\ ,$ with the states ordered as in the canonical form.
3. Using (b), prove that when you start in a transient state $i$ your expected final fortune $\sum_k b_{ik} f(k)$ is equal to your starting fortune $f(i)$. In other words, a fair game on a finite state space remains fair to the end. (Fair games in general are called martingales. Fair games on infinite state spaces need not remain fair with an unlimited number of plays allowed. For example, consider the game of Heads or Tails (see Example 1.3). Let Peter start with 1 penny and play until he has 2. Then Peter will be sure to end up 1 penny ahead.)
Exercise $28$
A coin is tossed repeatedly. We are interested in finding the expected number of tosses until a particular pattern, say B = HTH, occurs for the first time. If, for example, the outcomes of the tosses are HHTTHTH we say that the pattern B has occurred for the first time after 7 tosses. Let $T^B$ be the time to obtain pattern B for the first time. Li${}^9$ gives the following method for determining $E(T^B)$.
We are in a casino and, before each toss of the coin, a gambler enters, pays 1 dollar to play, and bets that the pattern B = HTH will occur on the next three tosses. If H occurs, he wins 2 dollars and bets this amount that the next outcome will be T. If he wins, he wins 4 dollars and bets this amount that H will come up next time. If he wins, he wins 8 dollars and the pattern has occurred. If at any time he loses, he leaves with no winnings.
Let A and B be two patterns. Let AB be the amount the gamblers win who arrive while the pattern A occurs and bet that B will occur. For example, if A = HT and B = HTH then AB = 4 since the first gambler bet on H and won 2 dollars and then bet these 2 dollars on T and won, leaving him with 4 dollars; the second gambler bet on H and lost. If A = HH and B = HTH, then AB = 2 since the first gambler bet on H and won but then bet on T and lost and the second gambler bet on H and won. If A = B = HTH then AB = BB = 8 + 2 = 10.
Now for each gambler coming in, the casino takes in 1 dollar. Thus the casino takes in $T^B$ dollars. How much does it pay out? The only gamblers who go off with any money are those who arrive during the time the pattern B occurs and they win the amount BB. But since all the bets made are perfectly fair bets, it seems quite intuitive that the expected amount the casino takes in should equal the expected amount that it pays out. That is, $E(T^B)$ = BB.
Since we have seen that for B = HTH, BB = 10, the expected time to reach the pattern HTH for the first time is 10. If we had been trying to get the pattern B = HHH, then BB $= 8 + 4 + 2 = 14$ since all the last three gamblers are paid off in this case. Thus the expected time to get the pattern HHH is 14. To justify this argument, Li used a theorem from the theory of martingales (fair games).
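Both values are easy to check by direct simulation before setting up the Markov chain; a quick sketch (our own, using Python's random module):

```python
import random

def waiting_time(pattern):
    """Toss a fair coin until `pattern` appears; return the number of tosses."""
    seq = ""
    while not seq.endswith(pattern):
        seq += random.choice("HT")
    return len(seq)

for pattern in ("HTH", "HHH"):
    trials = 20000
    avg = sum(waiting_time(pattern) for _ in range(trials)) / trials
    print(pattern, round(avg, 2))   # near 10 and 14, respectively
```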
We can obtain these expectations by considering a Markov chain whose states are the possible initial segments of the sequence HTH; these states are HTH, HT, H, and $\emptyset$, where $\emptyset$ is the empty set. Then, for this example, the transition matrix is
$\mathbf{P} = \begin{matrix} & \begin{matrix}\text{HTH}&&\text{HT}&&\text{H}&&\emptyset\end{matrix} \\ \begin{matrix}\text{HTH}\\\text{HT}\\\text{H}\\\emptyset\end{matrix} & \begin{pmatrix} 1 & 0 & 0 & 0 \\ .5 & 0 & 0 & .5 \\ 0 & .5 & .5 & 0 \\ 0 & 0 & .5 & .5 \end{pmatrix} \end{matrix}$
Show, using the associated Markov chain, that the values $E(T^B)$ = 10 and $E(T^B)$ = 14 are correct for the expected time to reach the patterns HTH and HHH, respectively.
Exercise $29$
We can use the gambling interpretation given in Exercise $28$ to find the expected number of tosses required to reach pattern B when we start with pattern A. To be a meaningful problem, we assume that pattern A does not have pattern B as a subpattern. Let $E_A(T^B)$ be the expected time to reach pattern B starting with pattern A. We use our gambling scheme and assume that the first k coin tosses produced the pattern A. During this time, the gamblers made an amount AB. The total amount the gamblers will have made when the pattern B occurs is BB. Thus, the amount that the gamblers made after the pattern A has occurred is BB - AB. Again by the fair game argument, $E_A(T^B)$ = BB-AB.
For example, suppose that we start with pattern A = HT and are trying to get the pattern B = HTH. Then we saw in Exercise $28$ that AB = 4 and BB = 10 so $E_A(T^B)$ = BB - AB = 6.
Verify that this gambling interpretation leads to the correct answer for all starting states in the examples that you worked in Exercise $28$.
Exercise $30$
Here is an elegant method due to Guibas and Odlyzko${}^{10}$ to obtain the expected time to reach a pattern, say HTH, for the first time. Let $f(n)$ be the number of sequences of length $n$ which do not have the pattern HTH. Let $f_p(n)$ be the number of sequences that have the pattern for the first time after $n$ tosses. To each element of $f(n)$, add the pattern HTH. Then divide the resulting sequences into three subsets: the set where HTH occurs for the first time at time $n + 1$ (for this, the original sequence must have ended with HT); the set where HTH occurs for the first time at time $n + 2$ (this cannot happen for this pattern); and the set where the sequence HTH occurs for the first time at time $n + 3$ (the original sequence ended with anything except HT). Doing this, we have $f(n) = f_p(n + 1) + f_p(n + 3)\ .$ Thus, $\frac{f(n)}{2^n} = \frac{2f_p(n + 1)}{2^{n + 1}} + \frac{2^3f_p(n + 3)}{2^{n + 3}}\ .$ If $T$ is the time that the pattern occurs for the first time, this equality states that $P(T > n) = 2P(T = n + 1) + 8P(T = n + 3)\ .$ Show that if you sum this equality over all $n$ you obtain $\sum_{n = 0}^\infty P(T > n) = 2 + 8 = 10\ .$ Show that for any nonnegative integer-valued random variable $T$, $E(T) = \sum_{n = 0}^\infty P(T > n)\ ,$ and conclude that $E(T) = 10$. Note that this method of proof makes very clear that $E(T)$ is, in general, equal to the expected amount the casino pays out and avoids the martingale system theorem used by Li.
Exercise $31$
In Example 11.1.9, define $f(i)$ to be the proportion of G genes in state $i$. Show that $f$ is a harmonic function (see Exercise $27$). Why does this show that the probability of being absorbed in state $(\mbox{GG},\mbox{GG})$ is equal to the proportion of G genes in the starting state? (See Exercise $17$.)
Exercise $32$
Show that the stepping stone model (Example 11.1.12) is an absorbing Markov chain. Assume that you are playing a game with red and green squares, in which your fortune at any time is equal to the proportion of red squares at that time. Give an argument to show that this is a fair game in the sense that your expected winning after each step is just what it was before this step. Hint: Show that for every possible outcome in which your fortune will decrease by one there is another outcome of exactly the same probability where it will increase by one.
Use this fact and the results of Exercise $27$ to show that the probability that a particular color wins out is equal to the proportion of squares that are initially of this color.
Exercise $33$
Consider a random walker who moves on the integers 0, 1, …, $N$, moving one step to the right with probability $p$ and one step to the left with probability $q = 1 - p$. If the walker ever reaches 0 or $N$ he stays there. (This is the Gambler’s Ruin problem of Exercise $23$.) If $p = q$ show that the function
$f(i) = i$
is a harmonic function (see Exercise $27$), and if $p \ne q$ then
$f(i) = \biggl(\frac {q}{p}\biggr)^i$
is a harmonic function. Use this and the result of Exercise $27$ to show that the probability $b_{iN}$ of being absorbed in state $N$ starting in state $i$ is
$b_{iN}=\left\{\begin{array}{cl} \frac{i}{N}, & \text{if } p=q \\ \frac{\left(\frac{q}{p}\right)^i-1}{\left(\frac{q}{p}\right)^N-1}, & \text{if } p \neq q \end{array}\right.$
For an alternative derivation of these results see Exercise $24$.
Exercise $34$
Complete the following alternate proof of Theorem $4$. Let $s_i$ be a transient state and $s_j$ be an absorbing state. If we compute $b_{ij}$ in terms of the possibilities on the outcome of the first step, then we have the equation $b_{ij} = p_{ij} + \sum_k p_{ik} b_{kj}\ ,$ where the summation is carried out over all transient states $s_k$. Write this in matrix form, and derive from this equation the statement $\mathbf{B} = \mathbf{N}\mathbf{R}\ .$
Exercise $35$
In Monte Carlo roulette (see Example 6.1.5), under option (c), there are six states ($S$, $W$, $L$, $E$, $P_1$, and $P_2$). The reader is referred to Figure 6.1.5, which contains a tree for this option. Form a Markov chain for this option, and use the program AbsorbingChain to find the probabilities that you win, lose, or break even for a 1 franc bet on red. Using these probabilities, find the expected winnings for this bet. For a more general discussion of Markov chains applied to roulette, see the article of H. Sagan referred to in Example 6.7.
Exercise $36$
We consider next a game called Penney-ante by its inventor W. Penney.${}^{11}$ There are two players; the first player picks a pattern A of H’s and T’s, and then the second player, knowing the choice of the first player, picks a different pattern B. We assume that neither pattern is a subpattern of the other pattern. A coin is tossed a sequence of times, and the player whose pattern comes up first is the winner. To analyze the game, we need to find the probability $p_A$ that pattern A will occur before pattern B and the probability $p_B = 1- p_A$ that pattern B occurs before pattern A. To determine these probabilities we use the results of Exercises $28$ and $29$. There you were asked to show that the expected time to reach a pattern B for the first time is $E(T^B) = BB\ ,$ and, starting with pattern A, the expected time to reach pattern B is $E_A(T^B) = BB - AB\ .$
1. Show that $E(T^B) = E(T^{A\ {\rm or}\ B}) + p_A E_A(T^B)$, and thus $BB = E(T^{A\ {\rm or}\ B}) + p_A (BB - AB)\ .$ Interchange A and B to find a similar equation involving $p_B$. Finally, note that $p_A + p_B = 1\ .$ Use these equations to solve for $p_A$ and $p_B$.
2. Assume that both players choose a pattern of the same length $k$. Show that, if $k = 2$, this is a fair game, but, if $k = 3$, the second player has an advantage no matter what choice the first player makes. (It has been shown that, for $k \geq 3$, if the first player chooses $a_1$, $a_2$, …, $a_k$, then the optimal strategy for the second player is of the form $b$, $a_1$, …, $a_{k - 1}$ where $b$ is the better of the two choices H or T.${}^{13}$)
A second important kind of Markov chain we shall study in detail is an ergodic Markov chain, defined as follows.
Definition: ergodic chain
A Markov chain is called an ergodic Markov chain if it is possible to go from every state to every state (not necessarily in one move).
In many books, ergodic Markov chains are called irreducible.
Definition: regular chain

A Markov chain is called a regular chain if some power of the transition matrix has only positive elements.
In other words, for some $n$, it is possible to go from any state to any state in exactly $n$ steps. It is clear from this definition that every regular chain is ergodic. On the other hand, an ergodic chain is not necessarily regular, as the following examples show.
Example $1$
Let the transition matrix of a Markov chain be defined by
$\mathbf{P} = \begin{matrix} & \begin{matrix}1&&2\end{matrix} \\ \begin{matrix}1\\2\end{matrix} & \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \end{matrix}$
Then it is clear that it is possible to move from any state to any state, so the chain is ergodic. However, if $n$ is odd, then it is not possible to move from state 1 to state 1 in $n$ steps, and if $n$ is even, then it is not possible to move from state 1 to state 2 in $n$ steps, so the chain is not regular.
A more interesting example of an ergodic, non-regular Markov chain is provided by the Ehrenfest urn model.
Example $2$
Recall the Ehrenfest urn model (Example 11.1.8). The transition matrix for this example is
$\mathbf{P} = \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ \begin{matrix}0\\1\\2\\3\\4\end{matrix} & \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1/4 & 0 & 3/4 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 3/4 & 0 & 1/4 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix}$
In this example, if we start in state 0 we will, after any even number of steps, be in either state 0, 2 or 4, and after any odd number of steps, be in states 1 or 3. Thus this chain is ergodic but not regular.
Regular Markov Chains
Any transition matrix that has no zeros determines a regular Markov chain. However, it is possible for a regular Markov chain to have a transition matrix that has zeros. The transition matrix of the Land of Oz example of Section 11.1 has $p_{NN} = 0$ but the second power $\mathbf{P}^2$ has no zeros, so this is a regular Markov chain.
An example of a nonregular Markov chain is an absorbing chain. For example, let
$\mathbf{P} = \pmatrix{ 1 & 0 \cr 1/2 & 1/2 \cr}$
be the transition matrix of a Markov chain. Then all powers of $\mathbf{P}$ will have a 0 in the upper right-hand corner.
We shall now discuss two important theorems relating to regular chains.
Theorem $1$
Let $\mathbf{P}$ be the transition matrix for a regular chain. Then, as $n \to \infty$, the powers $\mathbf{P}^n$ approach a limiting matrix $\mathbf{W}$ with all rows the same vector $\mathbf{w}$. The vector $\mathbf{w}$ is a strictly positive probability vector (i.e., the components are all positive and they sum to one).
In the next section we give two proofs of this fundamental theorem. We give here the basic idea of the first proof.
We want to show that the powers $\mathbf{P}^n$ of a regular transition matrix tend to a matrix with all rows the same. This is the same as showing that $\mathbf{P}^n$ converges to a matrix with constant columns. Now the $j$th column of $\mathbf{P}^n$ is $\mathbf{P}^{n} \mathbf{y}$ where $\mathbf{y}$ is a column vector with $1$ in the $j$th entry and 0 in the other entries. Thus we need only prove that for any column vector $\mathbf{y}$, $\mathbf{P}^{n} \mathbf{y}$ approaches a constant vector as $n$ tends to infinity.
Since each row of $\mathbf{P}$ is a probability vector, $\mathbf{Py}$ replaces $\mathbf{y}$ by averages of its components. Here is an example:
$\pmatrix{1/2 & 1/4 & 1/4 \cr 1/3 & 1/3 & 1/3 \cr 1/2 & 1/2 & 0\cr} \pmatrix {1 \cr 2 \cr 3 \cr} = \pmatrix{1/2 \cdot 1+ 1/4 \cdot 2+ 1/4 \cdot 3\cr 1/3 \cdot 1+ 1/3 \cdot 2+ 1/3 \cdot 3\cr 1/2 \cdot 1+ 1/2 \cdot 2+ 0 \cdot 3\cr} =\pmatrix {7/4 \cr 2 \cr 3/2 \cr}\ .$
The result of the averaging process is to make the components of $\mathbf{Py}$ more similar than those of $\mathbf{y}$. In particular, the maximum component decreases (from 3 to 2) and the minimum component increases (from 1 to 3/2). Our proof will show that as we do more and more of this averaging to get $\mathbf{P}^{n} \mathbf{y}$, the difference between the maximum and minimum component will tend to 0 as $n \rightarrow \infty$. This means $\mathbf{P}^{n} \mathbf{y}$ tends to a constant vector.

The $ij$th entry of $\mathbf{P}^n$, $p_{ij}^{(n)}$, is the probability that the process will be in state $s_j$ after $n$ steps if it starts in state $s_i$. If we denote the common row of $\mathbf{W}$ by $\mathbf{w}$, then Theorem $1$ states that the probability of being in $s_j$ in the long run is approximately $w_j$, the $j$th entry of $\mathbf{w}$, and is independent of the starting state.
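The shrinking spread under repeated averaging can be watched directly; a small numerical sketch with numpy, using the matrix and vector of the example above:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/3, 1/3, 1/3],
              [1/2, 1/2, 0]])
y = np.array([1.0, 2.0, 3.0])
for n in range(6):
    print(n, y.round(4), "spread:", round(y.max() - y.min(), 4))
    y = P @ y   # one more round of averaging
```

The spread (maximum minus minimum component) decreases at every step and tends to 0.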
Example $3$
Recall that for the Land of Oz example of Section 11.1, the sixth power of the transition matrix $\mathbf{P}$ is, to three decimal places,
$\mathbf{P}^6 = \begin{matrix} & \begin{matrix}\text{R}&&\text{N}&&\text{S}\end{matrix} \\ \begin{matrix}\text{R}\\\text{N}\\\text{S}\end{matrix} & \begin{pmatrix} .4 & .2 & .4 \\ .4 & .2 & .4 \\ .4 & .2 & .4 \end{pmatrix} \end{matrix}$
Thus, to this degree of accuracy, the probability of rain six days after a rainy day is the same as the probability of rain six days after a nice day, or six days after a snowy day. Theorem $1$ predicts that, for large $n$, the rows of $\mathbf{P}^n$ approach a common vector. It is interesting that this occurs so soon in our example.
Theorem $2$
Let $\mathbf {P}$ be a regular transition matrix, let $\mathbf{W} = \lim_{n \rightarrow \infty} \mathbf{P}^n\ ,$ let $\mathbf{w}$ be the common row of $\mathbf{W}$, and let $\mathbf{c}$ be the column vector all of whose components are 1. Then
1. $\mathbf{w}\mathbf{P} = \mathbf{w}$, and any row vector $\mathbf{v}$ such that $\mathbf{v}\mathbf{P} = \mathbf{v}$ is a constant multiple of $\mathbf{w}$.
2. $\mathbf{P}\mathbf{c} = \mathbf{c}$, and any column vector $\mathbf{x}$ such that $\mathbf{P}\mathbf{x} = \mathbf{x}$ is a multiple of $\mathbf{c}$.
Proof. To prove part (a), we note that from Theorem $1$,
$\mathbf{P}^n \to \mathbf{W}\ .$
Thus,
$\mathbf{P}^{n + 1} = \mathbf{P}^n \cdot \mathbf{P} \to \mathbf{W}\mathbf{P}\ .$
But $\mathbf{P}^{n + 1} \to \mathbf{W}$, and so $\mathbf{W} = \mathbf{W}\mathbf{P}$, and $\mathbf{w} = \mathbf{w}\mathbf{P}$.
Let $\mathbf{v}$ be any vector with $\mathbf{v} \mathbf{P} = \mathbf{v}$. Then $\mathbf{v} = \mathbf{v} \mathbf{P}^n$, and passing to the limit, $\mathbf{v} = \mathbf{v} \mathbf{W}$. Let $r$ be the sum of the components of $\mathbf{v}$. Then it is easily checked that $\mathbf{v}\mathbf{W} = r\mathbf{w}$. So, $\mathbf{v} = r\mathbf{w}$.
To prove part (b), assume that $\mathbf{x} = \mathbf{P} \mathbf{x}$. Then $\mathbf{x} = \mathbf{P}^n \mathbf{x}$, and again passing to the limit, $\mathbf{x} = \mathbf{W}\mathbf{x}$. Since all rows of $\mathbf{W}$ are the same, the components of $\mathbf{W}\mathbf{x}$ are all equal, so $\mathbf{x}$ is a multiple of $\mathbf{c}$.
Note that an immediate consequence of Theorem $2$ is the fact that there is only one probability vector $\mathbf{v}$ such that $\mathbf{v}\mathbf{P} = \mathbf{v}$.
Fixed Vectors
Definition: Fixed Row and Column Vectors
A row vector $\mathbf{w}$ with the property $\mathbf{w}\mathbf{P} = \mathbf{w}$ is called a fixed row vector for $\mathbf{P}$. Similarly, a column vector $\mathbf{x}$ such that $\mathbf{P}\mathbf{x} = \mathbf{x}$ is called a fixed column vector for $\mathbf{P}$.
Thus, the common row of $\mathbf{W}$ is the unique vector $\mathbf{w}$ which is both a fixed row vector for $\mathbf{P}$ and a probability vector. Theorem $2$ shows that any fixed row vector for $\mathbf{P}$ is a multiple of $\mathbf{w}$ and any fixed column vector for $\mathbf{P}$ is a constant vector.
One can also state Definition $1$ in terms of eigenvalues and eigenvectors. A fixed row vector is a left eigenvector of the matrix $\mathbf{P}$ corresponding to the eigenvalue 1. A similar statement can be made about fixed column vectors.
We will now give several different methods for calculating the fixed row vector for a regular Markov chain.
Example $4$
By Theorem $1$ we can find the limiting vector $\mathbf{w}$ for the Land of Oz from the fact that
$w_1 + w_2 + w_3 = 1$
and
$\pmatrix{ w_1 & w_2 & w_3 } \pmatrix{ 1/2 & 1/4 & 1/4 \cr 1/2 & 0 & 1/2 \cr 1/4 & 1/4 & 1/2 \cr} = \pmatrix{ w_1 & w_2 & w_3 }\ .$
These relations lead to the following four equations in three unknowns:
\begin{aligned} w_1 + w_2 + w_3 &= 1\ , \\ (1/2)w_1 + (1/2)w_2 + (1/4)w_3 &= w_1\ , \\ (1/4)w_1 + (1/4)w_3 &= w_2\ , \\ (1/4)w_1 + (1/2)w_2 + (1/2)w_3 &= w_3\ .\end{aligned}
Our theorem guarantees that these equations have a unique solution. If the equations are solved, we obtain the solution
$\mathbf{w} = \pmatrix{ .4 & .2 & .4 }\ ,$
in agreement with that predicted from $\mathbf{P}^6$, given in Example $3$.
To calculate the fixed vector, we can assume that the value at a particular state, say state one, is 1, and then use all but one of the linear equations from $\mathbf{w}\mathbf{P} = \mathbf{w}$. This set of equations will have a unique solution and we can obtain $\mathbf{w}$ from this solution by dividing each of its entries by their sum to give the probability vector $\mathbf{w}$. We will now illustrate this idea for the above example.
Example $5$
(Example $4$ continued) We set $w_1 = 1$, and then solve the first and second linear equations from $\mathbf{w}\mathbf{P} = \mathbf{w}$. We have
\begin{aligned} (1/2) + (1/2)w_2 + (1/4)w_3 &= 1\ , \\ (1/4) + (1/4)w_3 &= w_2\ . \end{aligned}
If we solve these, we obtain
$\pmatrix{w_1&w_2&w_3} = \pmatrix{1&1/2&1}\ .$
Now we divide this vector by the sum of the components, to obtain the final answer:
$\mathbf{w} = \pmatrix{.4&.2&.4}\ .$
This method can be easily programmed to run on a computer.
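For instance, here is a sketch of this method in Python with numpy (our own code, not the FixedVector program itself): fix $w_1 = 1$, solve all but one of the equations from $\mathbf{w}\mathbf{P} = \mathbf{w}$, and normalize.

```python
import numpy as np

def fixed_vector(P):
    """Fixed probability vector of a regular chain via w_1 = 1 and wP = w."""
    n = P.shape[0]
    A = (P - np.eye(n)).T            # row j is the equation sum_i w_i (P - I)_{ij} = 0
    A = A[:-1]                       # drop one redundant equation
    w_rest = np.linalg.solve(A[:, 1:], -A[:, 0])   # impose w_1 = 1
    w = np.concatenate(([1.0], w_rest))
    return w / w.sum()               # normalize to a probability vector

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
print(fixed_vector(P))               # [0.4, 0.2, 0.4] for the Land of Oz
```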
As mentioned above, we can also think of the fixed row vector $\mathbf{w}$ as a left eigenvector of the transition matrix $\mathbf{P}$. Thus, if we write $\mathbf{I}$ to denote the identity matrix, then $\mathbf{w}$ satisfies the matrix equation
$\mathbf{w}\mathbf{P} = \mathbf{w}\mathbf{I}\ ,$
or equivalently,
$\mathbf{w}(\mathbf{P} - \mathbf{I}) = \mathbf{0}\ .$
Thus, $\mathbf{w}$ is in the left nullspace of the matrix $\mathbf{P} - \mathbf{I}$. Furthermore, Theorem $2$ states that this left nullspace has dimension 1. Certain computer programming languages can find nullspaces of matrices. In such languages, one can find the fixed row probability vector for a matrix $\mathbf{P}$ by computing the left nullspace and then normalizing a vector in the nullspace so the sum of its components is 1.
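Equivalently, one can ask a linear-algebra library for a left eigenvector belonging to the eigenvalue 1 and normalize it; a sketch with numpy:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
vals, vecs = np.linalg.eig(P.T)    # left eigenvectors of P
k = np.argmin(abs(vals - 1))       # index of the eigenvalue closest to 1
w = np.real(vecs[:, k])
print(w / w.sum())                 # [0.4, 0.2, 0.4]
```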
The program FixedVector uses one of the above methods (depending upon the language in which it is written) to calculate the fixed row probability vector for regular Markov chains.
So far we have always assumed that we started in a specific state. The following theorem generalizes Theorem $1$ to the case where the starting state is itself determined by a probability vector.
Theorem $3$
Let $\mathbf{P}$ be the transition matrix for a regular chain and $\mathbf{v}$ an arbitrary probability vector. Then
$\lim_{n \to \infty} \mathbf{v} \mathbf{P}^n = \mathbf{w}\ ,$
where $\mathbf{w}$ is the unique fixed probability vector for $\mathbf{P}$.
Proof. By Theorem $1$, $\lim_{n \to \infty} \mathbf{P}^n = \mathbf {W}\ .$ Hence, $\lim_{n \to \infty} \mathbf {v} \mathbf {P}^n = \mathbf {v} \mathbf {W}\ .$ But the entries in $\mathbf {v}$ sum to 1, and each row of $\mathbf {W}$ equals $\mathbf {w}$. From these statements, it is easy to check that $\mathbf {v} \mathbf {W} = \mathbf {w}\ .$
If we start a Markov chain with initial probabilities given by $\mathbf{v}$, then the probability vector $\mathbf{v} \mathbf{P}^n$ gives the probabilities of being in the various states after $n$ steps. Theorem $3$ then establishes the fact that, even in this more general class of processes, the probability of being in $s_j$ approaches $w_j$.
Equilibrium
We also obtain a new interpretation for $\mathbf{w}$. Suppose that our starting vector picks state $s_i$ as a starting state with probability $w_i$, for all $i$. Then the probability of being in the various states after $n$ steps is given by $\mathbf{w} \mathbf{P}^n = \mathbf{w}$, and is the same on all steps. This method of starting provides us with a process that is called "stationary." The fact that $\mathbf{w}$ is the only probability vector for which $\mathbf{w} \mathbf{P} = \mathbf{w}$ shows that we must have a starting probability vector of exactly the kind described to obtain a stationary process.
Many interesting results concerning regular Markov chains depend only on the fact that the chain has a unique fixed probability vector which is positive. This property holds for all ergodic Markov chains.
Theorem $4$
For an ergodic Markov chain, there is a unique probability vector $\mathbf{w}$ such that $\mathbf {w} \mathbf {P} = \mathbf {w}$ and $\mathbf{w}$ is strictly positive. Any row vector such that $\mathbf {v} \mathbf {P} = \mathbf {v}$ is a multiple of $\mathbf{w}$. Any column vector $\mathbf{x}$ such that $\mathbf {P} \mathbf {x} = \mathbf {x}$ is a constant vector.
Proof. This theorem states that Theorem $2$ is true for ergodic chains. The result follows easily from the fact that, if $\mathbf{P}$ is an ergodic transition matrix, then $\bar{\mathbf{P}} = (1/2)\mathbf{I} + (1/2)\mathbf{P}$ is a regular transition matrix with the same fixed vectors (see Exercises $25$–$28$).
For ergodic chains, the fixed probability vector has a slightly different interpretation. The following two theorems, which we will not prove here, furnish an interpretation for this fixed vector.
Theorem $5$
Let $\mathbf{P}$ be the transition matrix for an ergodic chain. Let $\mathbf {A}_n$ be the matrix defined by
$\mathbf {A}_n = \frac{\mathbf {I} + \mathbf {P} + \mathbf {P}^2 +\cdots + \mathbf {P}^n}{n + 1}\ .$
Then $\mathbf {A}_n \to \mathbf {W}$, where $\mathbf{W}$ is a matrix all of whose rows are equal to the unique fixed probability vector $\mathbf{w}$ for $\mathbf{P}$.
If $\mathbf{P}$ is the transition matrix of an ergodic chain, then Theorem $2$ states that there is only one fixed row probability vector for $\mathbf{P}$. Thus, we can use the same techniques that were used for regular chains to solve for this fixed vector. In particular, the program FixedVector works for ergodic chains.
To interpret Theorem $5$, let us assume that we have an ergodic chain that starts in state $s_i$. Let $X^{(m)} = 1$ if the $m$th step is to state $s_j$ and 0 otherwise. Then the average number of times in state $s_j$ in the first $n$ steps is given by
$H^{(n)} = \frac{X^{(0)} + X^{(1)} + X^{(2)} +\cdots+ X^{(n)}}{n + 1}\ .$
But $X^{(m)}$ takes on the value 1 with probability $p_{ij}^{(m)}$ and 0 otherwise. Thus $E(X^{(m)}) = p_{ij}^{(m)}$, and the $ij$th entry of $\mathbf {A}_n$ gives the expected value of $H^{(n)}$, that is, the expected proportion of times in state $s_j$ in the first $n$ steps if the chain starts in state $s_i$.
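Theorem $5$ is easy to watch numerically. Here is a minimal sketch, assuming Python with NumPy (our choice, not the text's software), that forms $\mathbf{A}_n$ for the Land of Oz matrix and shows its rows settling down to $\mathbf{w}$:

```python
import numpy as np

# Land of Oz transition matrix (states R, N, S).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

n = 200
A, Pk = np.eye(3), np.eye(3)
for _ in range(n):
    Pk = Pk @ P          # Pk runs through P, P^2, ..., P^n
    A += Pk
A /= n + 1               # A_n = (I + P + ... + P^n) / (n + 1)
print(A)                 # every row is close to w = (.4, .2, .4)
```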
If we call being in state $s_j$ success and being in any other state failure, we could ask if a theorem analogous to the law of large numbers for independent trials holds. The answer is yes and is given by the following theorem.
Theorem $6$ (Law of Large Numbers for Ergodic Markov Chains)
Let $H_j^{(n)}$ be the proportion of times in $n$ steps that an ergodic chain is in state $s_j$. Then for any $\epsilon > 0$, $P\Bigl(|H_j^{(n)} - w_j| > \epsilon\Bigr) \to 0\ ,$ independent of the starting state $s_i$.
We have observed that every regular Markov chain is also an ergodic chain. Hence, Theorems $5$ and $6$ apply also for regular chains. For example, this gives us a new interpretation for the fixed vector $\mathbf {w} = (.4,.2,.4)$ in the Land of Oz example. Theorem $5$ predicts that, in the long run, it will rain 40 percent of the time in the Land of Oz, be nice 20 percent of the time, and snow 40 percent of the time.
Simulation
We illustrate Theorem $6$ by writing a program to simulate the behavior of a Markov chain. SimulateChain is such a program.
Example $6$
In the Land of Oz, there are 525 days in a year. We have simulated the weather for one year in the Land of Oz, using the program SimulateChain. The results are shown in Table $2$.
SSRNRNSSSSSSNRSNSSRNSRNSSSNSRRRNSSSNRRSSSSNRSSNSRRRRRRNSSS
SSRRRSNSNRRRRSRSRNSNSRRNRRNRSSNSRNRNSSRRSRNSSSNRSRRSSNRSNR
RNSSSSNSSNSRSRRNSSNSSRNSSRRNRRRSRNRRRNSSSNRNSRNSNRNRSSSRSS
NRSSSNSSSSSSNSSSNSNSRRNRNRRRRSRRRSSSSNRRSSSSRSRRRNRRRSSSSR
RNRRRSRSSRRRRSSRNRRRRRRNSSRNRSSSNRNSNRRRRNRRRNRSNRRNSRRSNR
RRRSSSRNRRRNSNSSSSSRRRRSRNRSSRRRRSSSRRRNRNRRRSRSRNSNSSRRRR
RNSNRNSNRRNRRRRRRSSSNRSSRSNRSSSNSNRNSNSSSNRRSRRRNRRRRNRNRS
SSNSRSNRNRRSNRRNSRSSSRNSRRSSNSRRRNRRSNRRNSSSSSNRNSSSSSSSNR
NSRRRNSSRRRNSSSNRRSRNSSRRNRRNRSNRRRRRRRRRNSNRRRRRNSRRSSSSN
SNS
Table $2$: Weather in the Land of Oz.
State Times Fraction
R 217 .413
N 109 .208
S 199 .379
We note that the simulation gives a proportion of times in each of the states not too different from the long run predictions of .4, .2, and .4 assured by Theorem $6$. To get better results we have to simulate our chain for a longer time. We do this for 10,000 days without printing out each day’s weather. The results are shown in Table $3$. We see that the results are now quite close to the theoretical values of .4, .2, and .4.
Table $3$: Comparison of observed and predicted frequencies for the Land of Oz.
State Times Fraction
R 4010 .401
N 1902 .19
S 4088 .409
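Readers without the program SimulateChain can reproduce experiments of this kind with a short script. The following is a minimal sketch, assuming Python with NumPy; the seed and the number of steps are arbitrary choices of ours.

```python
import numpy as np

# Land of Oz transition matrix; states R (rain), N (nice), S (snow).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
states = ["R", "N", "S"]

rng = np.random.default_rng(0)
n_steps = 10_000
counts = np.zeros(3)
state = 0                                # the starting state does not matter, by Theorem 6
for _ in range(n_steps):
    state = rng.choice(3, p=P[state])    # one step of the chain
    counts[state] += 1

for s, c in zip(states, counts):
    print(s, c / n_steps)                # the fractions approach .4, .2, .4
```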
Examples of Ergodic Chains
The computation of the fixed vector $\mathbf{w}$ may be difficult if the transition matrix is very large. It is sometimes useful to guess the fixed vector on purely intuitive grounds. Here is a simple example to illustrate this kind of situation.
Example $7$
A white rat is put into the maze of Figure $1$.
There are nine compartments with connections between the compartments as indicated. The rat moves through the compartments at random. That is, if there are $k$ ways to leave a compartment, it chooses each of these with equal probability. We can represent the travels of the rat by a Markov chain process with transition matrix given by
$\mathbf{P}= \begin{matrix} & \begin{matrix}1&&2&&3&&4&&5&&6&&7&&8&&9\end{matrix} \\ \begin{matrix}1\\2\\3\\4\\5\\6\\7\\8\\9\end{matrix} & \begin{pmatrix} 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 & 0 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 & 0 & 1/3 \\ 0 & 1/4 & 0 & 1/4 & 0 & 1/4 & 0 & 1/4 & 0 \\ 1/3 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 \end{pmatrix}\end{matrix}$
That this chain is not regular can be seen as follows: From an odd-numbered state the process can go only to an even-numbered state, and from an even-numbered state it can go only to an odd-numbered state. Hence, starting in state $i$ the process will be alternately in even-numbered and odd-numbered states. Therefore, odd powers of $\mathbf{P}$ will have 0’s for the odd-numbered entries in row 1. On the other hand, a glance at the maze shows that it is possible to go from every state to every other state, so that the chain is ergodic.
To find the fixed probability vector for this matrix, we would have to solve ten equations in nine unknowns. However, it would seem reasonable that the times spent in each compartment should, in the long run, be proportional to the number of entries to each compartment. Thus, we try the vector whose $j$th component is the number of entries to the $j$th compartment:
$\mathbf {x} = \pmatrix{ 2 & 3 & 2 & 3 & 4 & 3 & 2 & 3 & 2}\ .$
It is easy to check that this vector is indeed a fixed vector so that the unique probability vector is this vector normalized to have sum 1:
$\mathbf {w} = \pmatrix{ \frac1{12} & \frac18 & \frac1{12} & \frac18 & \frac16 & \frac18 & \frac 1{12} & \frac18 & \frac1{12} }\ .$
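This check is also quick to carry out by machine. A minimal sketch, assuming Python with NumPy:

```python
import numpy as np

# Maze transition matrix of Example 7 (compartments 1 through 9).
P = np.array([
    [0,   1/2, 0,   0,   0,   1/2, 0,   0,   0  ],
    [1/3, 0,   1/3, 0,   1/3, 0,   0,   0,   0  ],
    [0,   1/2, 0,   1/2, 0,   0,   0,   0,   0  ],
    [0,   0,   1/3, 0,   1/3, 0,   0,   0,   1/3],
    [0,   1/4, 0,   1/4, 0,   1/4, 0,   1/4, 0  ],
    [1/3, 0,   0,   0,   1/3, 0,   1/3, 0,   0  ],
    [0,   0,   0,   0,   0,   1/2, 0,   1/2, 0  ],
    [0,   0,   0,   0,   1/3, 0,   1/3, 0,   1/3],
    [0,   0,   0,   1/2, 0,   0,   0,   1/2, 0  ],
])

x = np.array([2, 3, 2, 3, 4, 3, 2, 3, 2])   # number of entries to each compartment
print(np.allclose(x @ P, x))                 # True: x is a fixed vector
print(x / x.sum())                           # the normalized probability vector w above
```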
Example $8$
We recall the Ehrenfest urn model of Example 11.1.6. The transition matrix for this chain is as follows:
$\mathbf{P}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ \begin{matrix}0\\1\\2\\3\\4\end{matrix} & \begin{pmatrix} .000 & 1.000 & .000 & .000 & .000 \\ .250 & .000 & .750 & .000 & .000 \\ .000 & .500 & .000 & .500 & .000 \\ .000 & .000 & .750 & .000 & .250 \\ .000 & .000 & .000 & 1.000 & .000 \end{pmatrix}\end{matrix}$
If we run the program FixedVector for this chain, we obtain the vector
$\mathbf{w}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ & \begin{pmatrix} .0625 & .2500 & .3750 & .2500 & .0625 \end{pmatrix}\end{matrix}$
By Theorem $6$, we can interpret these values for $w_i$ as the proportion of times the process is in each of the states in the long run. For example, the proportion of times in state 0 is .0625 and the proportion of times in state 2 is .375. The astute reader will note that these numbers are the binomial distribution 1/16, 4/16, 6/16, 4/16, 1/16. We could have guessed this answer as follows: If we consider a particular ball, it simply moves randomly back and forth between the two urns. This suggests that the equilibrium state should be just as if we randomly distributed the four balls in the two urns. If we did this, the probability that there would be exactly $j$ balls in one urn would be given by the binomial distribution $b(n,p,j)$ with $n = 4$ and $p = 1/2$.
Exercises
Exercise $1$
Which of the following matrices are transition matrices for regular Markov chains?
1. $\mathbf {P} = \pmatrix{ .5 & .5 \cr .5 & .5 }$.
2. $\mathbf {P} = \pmatrix{ .5 & .5 \cr 1 & 0 }$.
3. $\mathbf {P} = \pmatrix{ 1/3 & 0 & 2/3 \cr 0 & 1 & 0 \cr 0 & 1/5 & 4/5}$.
4. $\mathbf {P} = \pmatrix{ 0 & 1 \cr 1 & 0}$.
5. $\mathbf {P} = \pmatrix{ 1/2 & 1/2 & 0 \cr 0 & 1/2 & 1/2 \cr 1/3 & 1/3 & 1/3}$.
Exercise $2$
Consider the Markov chain with transition matrix $\mathbf {P} = \pmatrix{ 1/2 & 1/3 & 1/6 \cr3/4 & 0 & 1/4 \cr 0 & 1 & 0}\ .$
1. Show that this is a regular Markov chain.
2. The process is started in state 1; find the probability that it is in state 3 after two steps.
3. Find the limiting probability vector $\mathbf{w}$.
Exercise $3$
Consider the Markov chain with general $2 \times 2$ transition matrix $\mathbf {P} = \pmatrix{ 1 - a & a \cr b & 1 - b}\ .$
1. Under what conditions is $\mathbf{P}$ absorbing?
2. Under what conditions is $\mathbf{P}$ ergodic but not regular?
3. Under what conditions is $\mathbf{P}$ regular?
Exercise $4$
Find the fixed probability vector $\mathbf{w}$ for the matrices in Exercise [exer 11.3.3] that are ergodic.
Exercise $5$
Find the fixed probability vector $\mathbf{w}$ for each of the following regular matrices.
1. $\mathbf {P} = \pmatrix{ .75 & .25 \cr .5 & .5}$.
2. $\mathbf {P} = \pmatrix{ .9 & .1 \cr .1 & .9}$.
3. $\mathbf {P} = \pmatrix{ 3/4 & 1/4 & 0 \cr 0 & 2/3 & 1/3 \cr 1/4 & 1/4 & 1/2}$.
Exercise $6$
Consider the Markov chain with transition matrix in Exercise $3$, with $a = b = 1$. Show that this chain is ergodic but not regular. Find the fixed probability vector and interpret it. Show that $\mathbf {P}^n$ does not tend to a limit, but that
$\mathbf {A}_n = \frac{\mathbf {I} + \mathbf {P} + \mathbf {P}^2 +\cdots + \mathbf {P}^n}{n + 1}$ does.
Exercise $7$
Consider the Markov chain with transition matrix of Exercise $3$, with $a = 0$ and $b = 1/2$. Compute directly the unique fixed probability vector, and use your result to prove that the chain is not ergodic.
Exercise $8$
Show that the matrix
$\mathbf {P} = \pmatrix{ 1 & 0 & 0 \cr 1/4 & 1/2 & 1/4 \cr 0 & 0 & 1}$
has more than one fixed probability vector. Find the matrix that $\mathbf {P}^n$ approaches as $n \to \infty$, and verify that it is not a matrix all of whose rows are the same.
Exercise $9$
Prove that, if a 3-by-3 transition matrix has the property that its sums are 1, then $(1/3, 1/3, 1/3)$ is a fixed probability vector. State a similar result for $n$-by-$n$ transition matrices. Interpret these results for ergodic chains.
Exercise $10$
Is the Markov chain in Example 11.1.8 ergodic?
Exercise $11$
Is the Markov chain in Example 11.1.9 ergodic?
Exercise $12$
Consider Example 11.2.1 (Drunkard’s Walk). Assume that if the walker reaches state 0, he turns around and returns to state 1 on the next step and, similarly, if he reaches 4 he returns on the next step to state 3. Is this new chain ergodic? Is it regular?
Exercise $13$
For Example 11.1.2 when $\mathbf{P}$ is ergodic, what is the proportion of people who are told that the President will run? Interpret the fact that this proportion is independent of the starting state.
Exercise $14$
Consider an independent trials process to be a Markov chain whose states are the possible outcomes of the individual trials. What is its fixed probability vector? Is the chain always regular? Illustrate this for Example 11.1.3.
Exercise $15$
Show that Example 11.1.6 is an ergodic chain, but not a regular chain. Show that its fixed probability vector $\mathbf{w}$ is a binomial distribution.
Exercise $16$
Show that Example 11.1.7 is regular and find the limiting vector.
Exercise $17$
Toss a fair die repeatedly. Let $S_n$ denote the total of the outcomes through the $n$th toss. Show that there is a limiting value for the proportion of the first $n$ values of $S_n$ that are divisible by 7, and compute the value for this limit. Hint: The desired limit is an equilibrium probability vector for an appropriate seven-state Markov chain.
Exercise $18$
Let $\mathbf{P}$ be the transition matrix of a regular Markov chain. Assume that there are $r$ states and let $N(r)$ be the smallest integer $n$ such that $\mathbf{P}$ is regular if and only if $\mathbf {P}^{N(r)}$ has no zero entries. Find a finite upper bound for $N(r)$. See if you can determine $N(3)$ exactly.
Exercise $19$
Define $f(r)$ to be the smallest integer $n$ such that for all regular Markov chains with $r$ states, the $n$th power of the transition matrix has all entries positive. It has been shown14 that $f(r) = r^2 - 2r + 2$.
1. Define the transition matrix of an $r$-state Markov chain as follows: For states $s_i$, with $i = 1$, 2, …, $r - 2$, $\mathbf {P}(i,i + 1) = 1$, $\mathbf {P}(r - 1,r) = \mathbf {P}(r - 1, 1) = 1/2$, and $\mathbf {P}(r,1) = 1$. Show that this is a regular Markov chain.
2. For $r = 3$, verify that the fifth power is the first power that has no zeros.
3. Show that, for general $r$, the smallest $n$ such that $\mathbf {P}^n$ has all entries positive is $n = f(r)$.
Exercise $20$
A discrete time queueing system of capacity $n$ consists of the person being served and those waiting to be served. The queue length $x$ is observed each second. If $0 < x < n$, then with probability $p$, the queue size is increased by one by an arrival and, independently, with probability $r$, it is decreased by one because the person being served finishes service. If $x = 0$, only an arrival (with probability $p$) is possible. If $x = n$, an arrival will depart without waiting for service, and so only the departure (with probability $r$) of the person being served is possible. Form a Markov chain with states given by the number of customers in the queue. Modify the program FixedVector so that you can input $n$, $p$, and $r$, and the program will construct the transition matrix and compute the fixed vector. The quantity $s = p/r$ is called the traffic intensity. Describe the differences in the fixed vectors according as $s < 1$, $s = 1$, or $s > 1$.
Exercise $21$
Write a computer program to simulate the queue in Exercise $20$. Have your program keep track of the proportion of the time that the queue length is $j$ for $j = 0$, 1, …, $n$ and the average queue length. Show that the behavior of the queue length is very different depending upon whether the traffic intensity $s$ has the property $s < 1$, $s = 1$, or $s > 1$.
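For readers attempting Exercises $20$ and $21$, here is one possible starting point (a sketch of ours, assuming Python with NumPy, not the text's program): it builds the transition matrix of the queue and solves for the fixed vector.

```python
import numpy as np

def queue_matrix(n, p, r):
    """Transition matrix for the queue of Exercise 20 (states 0, 1, ..., n)."""
    P = np.zeros((n + 1, n + 1))
    P[0, 1], P[0, 0] = p, 1 - p              # only an arrival is possible at x = 0
    P[n, n - 1], P[n, n] = r, 1 - r          # only a departure is possible at x = n
    for x in range(1, n):
        P[x, x + 1] = p * (1 - r)            # arrival and no departure
        P[x, x - 1] = r * (1 - p)            # departure and no arrival
        P[x, x] = 1 - P[x, x + 1] - P[x, x - 1]
    return P

def fixed_vector(P):
    """Solve wP = w with sum(w) = 1, replacing one equation by the normalization."""
    k = P.shape[0]
    A = np.vstack([(P.T - np.eye(k))[:-1], np.ones(k)])
    b = np.zeros(k)
    b[-1] = 1
    return np.linalg.solve(A, b)

print(fixed_vector(queue_matrix(10, 0.3, 0.4)))   # traffic intensity s = 3/4 < 1
```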
Exercise $22$
In the queueing problem of Exercise $20$, let $S$ be the total service time required by a customer and $T$ the time between arrivals of the customers.
1. Show that $P(S = j) = (1 - r)^{j - 1}r$ and $P(T = j) = (1 - p)^{j - 1}p$, for $j > 0$.
2. Show that $E(S) = 1/r$ and $E(T) = 1/p$.
Interpret the conditions $s < 1$, $s = 1$ and $s > 1$ in terms of these expected values.
Exercise $23$
In Exercise $20$ the service time $S$ has a geometric distribution with $E(S) = 1/r$. Assume that the service time is, instead, a constant time of $t$ seconds. Modify your computer program of Exercise $21$ so that it simulates a constant time service distribution. Compare the average queue length for the two types of distributions when they have the same expected service time (i.e., take $t = 1/r$). Which distribution leads to the longer queues on the average?
Exercise $24$
A certain experiment is believed to be described by a two-state Markov chain with the transition matrix $\mathbf{P}$, where $\mathbf {P} = \pmatrix{ .5 & .5 \cr p & 1 - p}$ and the parameter $p$ is not known. When the experiment is performed many times, the chain ends in state one approximately 20 percent of the time and in state two approximately 80 percent of the time. Compute a sensible estimate for the unknown parameter $p$ and explain how you found it.
Exercise $25$
Prove that, in an $r$-state ergodic chain, it is possible to go from any state to any other state in at most $r - 1$ steps.
Exercise $26$
Let $\mathbf{P}$ be the transition matrix of an $r$-state ergodic chain. Prove that, if the diagonal entries $p_{ii}$ are positive, then the chain is regular.
Exercise $27$
Prove that if $\mathbf{P}$ is the transition matrix of an ergodic chain, then $(1/2)(\mathbf {I} + \mathbf {P})$ is the transition matrix of a regular chain. Hint: Use Exercise $26$.
Exercise $28$
Prove that $\mathbf{P}$ and $(1/2)(\mathbf {I} + \mathbf {P})$ have the same fixed vectors.
Exercise $29$
In his book,15 A. Engle proposes an algorithm for finding the fixed vector for an ergodic Markov chain when the transition probabilities are rational numbers. Here is his algorithm: For each state $i$, let $a_i$ be the least common multiple of the denominators of the non-zero entries in the $i$th row. Engle describes his algorithm in terms of moving chips around on the states—indeed, for small examples, he recommends implementing the algorithm this way. Start by putting $a_i$ chips on state $i$ for all $i$. Then, at each state, redistribute the $a_i$ chips, sending $a_i p_{ij}$ to state $j$. The number of chips at state $i$ after this redistribution need not be a multiple of $a_i$. For each state $i$, add just enough chips to bring the number of chips at state $i$ up to a multiple of $a_i$. Then redistribute the chips in the same manner. This process will eventually reach a point where the number of chips at each state, after the redistribution, is the same as before redistribution. At this point, we have found a fixed vector. Here is an example:
$\mathbf{P}= \begin{matrix} & \begin{matrix}1&&2&&3\end{matrix} \\ \begin{matrix}1\\2\\3\end{matrix} & \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/2 & 1/4 & 1/4 \end{pmatrix}\end{matrix}$
We start with $\mathbf {a} = (4,2,4)$. The chips after successive redistributions are shown in Table $1$.
$\begin{array}{lll} (4 & 2 & 4)\\ (5 & 2 & 3)\\ (8 & 2 & 4)\\ (7 & 3 & 4)\\ (8 & 4 & 4)\\ (8 & 3 & 5)\\ (8 & 4 & 8)\\ (10 & 4 & 6)\\ (12 & 4 & 8)\\ (12 & 5 & 7)\\ (12 & 6 & 8)\\ (13 & 5 & 8)\\ (16 & 6 & 8)\\ (15 & 6 & 9)\\ (16 & 6 & 12)\\ (17 & 7 & 10)\\ (20 & 8 & 12)\\ (20 & 8 & 12)\ . \end{array}$
We find that $\mathbf {a} = (20,8,12)$ is a fixed vector.
1. Write a computer program to implement this algorithm. (One possible sketch follows this exercise, for comparison.)
2. Prove that the algorithm will stop. : Let $\mathbf{b}$ be a vector with integer components that is a fixed vector for $\mathbf{P}$ and such that each coordinate of the starting vector $\mathbf{a}$ is less than or equal to the corresponding component of $\mathbf{b}$. Show that, in the iteration, the components of the vectors are always increasing, and always less than or equal to the corresponding component of $\mathbf{b}$.
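For comparison with your own program for part 1, here is a minimal sketch of the chip-moving algorithm, assuming Python; the function name engel is ours. The Fraction type keeps the arithmetic exact.

```python
from fractions import Fraction
from math import gcd

def engel(P):
    """Engel's chip-moving algorithm; P is a square matrix of Fractions."""
    r = len(P)
    # a[i]: least common multiple of the denominators of the nonzero entries in row i
    a = []
    for row in P:
        m = 1
        for entry in row:
            if entry:
                m = m * entry.denominator // gcd(m, entry.denominator)
        a.append(m)
    chips = a[:]
    while True:
        # redistribute: state i sends chips[i] * P[i][j] chips to state j;
        # the counts are whole numbers because chips[i] is a multiple of a[i]
        new = [int(sum(chips[i] * P[i][j] for i in range(r))) for j in range(r)]
        if new == chips:          # redistribution leaves the counts unchanged
            return chips
        # add just enough chips to reach the next multiple of a[j]
        chips = [((x + aj - 1) // aj) * aj for x, aj in zip(new, a)]

half, quarter = Fraction(1, 2), Fraction(1, 4)
P = [[half, quarter, quarter],
     [half, 0,       half   ],
     [half, quarter, quarter]]
print(engel(P))   # [20, 8, 12], as in the table above
```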
Exercise $30$
(Coffman, Kaduta, and Shepp16) A computing center keeps information on a tape in positions of unit length. During each time unit there is one request to occupy a unit of tape. When this arrives the first free unit is used. Also, during each second, each of the units that are occupied is vacated with probability $p$. Simulate this process, starting with an empty tape. Estimate the expected number of sites occupied for a given value of $p$. If $p$ is small, can you choose the tape long enough so that there is a small probability that a new job will have to be turned away (i.e., that all the sites are occupied)? Form a Markov chain with states the number of sites occupied. Modify the program FixedVector to compute the fixed vector. Use this to check your conjecture by simulation.
Exercise $31$
(Alternate proof of Theorem $4$) Let $\mathbf{P}$ be the transition matrix of an ergodic Markov chain. Let $\mathbf{x}$ be any column vector such that $\mathbf{P} \mathbf{x} = \mathbf{ x}$. Let $M$ be the maximum value of the components of $\mathbf{x}$. Assume that $x_i = M$. Show that if $p_{ij} > 0$ then $x_j = M$. Use this to prove that $\mathbf{x}$ must be a constant vector.
Exercise $32$
Let $\mathbf{P}$ be the transition matrix of an ergodic Markov chain. Let $\mathbf{w}$ be a fixed probability vector (i.e., $\mathbf{w}$ is a row vector with $\mathbf {w}\mathbf {P} = \mathbf {w}$). Show that if $w_i = 0$ and $p_{ji} > 0$ then $w_j = 0$. Use this to show that the fixed probability vector for an ergodic chain cannot have any 0 entries.
Exercise $33$
Find a Markov chain that is neither absorbing nor ergodic.

11.4: Fundamental Limit Theorem for Regular Chains
The fundamental limit theorem for regular Markov chains states that if $\mathbf{P}$ is a regular transition matrix then $\lim_{n \to \infty} \mathbf {P}^n = \mathbf {W}\ ,$ where $\mathbf{W}$ is a matrix with each row equal to the unique fixed probability row vector $\mathbf{w}$ for $\mathbf{P}$. In this section we shall give two very different proofs of this theorem.
Our first proof is carried out by showing that, for any column vector $\mathbf{y}$, $\mathbf{P}^n \mathbf {y}$ tends to a constant vector. As indicated in Section 11.3, this will show that $\mathbf{P}^n$ converges to a matrix with constant columns or, equivalently, to a matrix with all rows the same.
The following lemma says that if an $r$-by-$r$ transition matrix has no zero entries, and $\mathbf {y}$ is any column vector with $r$ entries, then the vector $\mathbf {P}\mathbf{y}$ has entries which are “closer together" than the entries are in $\mathbf {y}$.
Lemma $1$
Let $\mathbf{P}$ be an $r$-by-$r$ transition matrix with no zero entries. Let $d$ be the smallest entry of the matrix. Let $\mathbf{y}$ be a column vector with $r$ components, the largest of which is $M_0$ and the smallest $m_0$. Let $M_1$ and $m_1$ be the largest and smallest component, respectively, of the vector $\mathbf {P} \mathbf {y}$. Then
$M_1 - m_1 \leq (1 - 2d)(M_0 - m_0)\ .$
Proof: In the discussion following Theorem 11.3.1, it was noted that each entry in the vector $\mathbf {P}\mathbf{y}$ is a weighted average of the entries in $\mathbf {y}$. The largest weighted average that could be obtained in the present case would occur if all but one of the entries of $\mathbf {y}$ have value $M_0$ and one entry has value $m_0$, and this one small entry is weighted by the smallest possible weight, namely $d$. In this case, the weighted average would equal $dm_0 + (1-d)M_0\ .$ Similarly, the smallest possible weighted average equals
$dM_0 + (1-d)m_0\ .$
Thus,
\begin{aligned} M_1 - m_1 &\le \Bigl(dm_0 + (1-d)M_0\Bigr) - \Bigl(dM_0 + (1-d)m_0\Bigr) \\ &= (1 - 2d)(M_0 - m_0)\ . \end{aligned}
This completes the proof of the lemma.
We turn now to the proof of the fundamental limit theorem for regular Markov chains.
Theorem $1$ (Fundamental Limit Theorem for Regular Chains)
If $\mathbf{P}$ is the transition matrix for a regular Markov chain, then
$\lim_{n \to \infty} \mathbf {P}^n = \mathbf {W}\ ,$
where $\mathbf {W}$ is a matrix with all rows equal. Furthermore, all entries in $\mathbf{W}$ are strictly positive.
Proof. We prove this theorem for the special case that $\mathbf{P}$ has no 0 entries. The extension to the general case is indicated in Exercise $5$. Let $\mathbf{y}$ be any $r$-component column vector, where $r$ is the number of states of the chain. We assume that $r > 1$, since otherwise the theorem is trivial. Let $M_n$ and $m_n$ be, respectively, the maximum and minimum components of the vector $\mathbf {P}^n \mathbf { y}$. The vector $\mathbf {P}^n \mathbf {y}$ is obtained from the vector $\mathbf {P}^{n - 1} \mathbf {y}$ by multiplying on the left by the matrix $\mathbf{P}$. Hence each component of $\mathbf {P}^n \mathbf {y}$ is an average of the components of $\mathbf {P}^{n - 1} \mathbf {y}$. Thus
$M_0 \geq M_1 \geq M_2 \geq\cdots$
and
$m_0 \leq m_1 \leq m_2 \leq\cdots\ .$
Each sequence is monotone and bounded:
$m_0 \leq m_n \leq M_n \leq M_0\ .$
Hence, each of these sequences will have a limit as $n$ tends to infinity.
Let $M$ be the limit of $M_n$ and $m$ the limit of $m_n$. We know that $m \leq M$. We shall prove that $M - m = 0$. This will be the case if $M_n - m_n$ tends to 0. Let $d$ be the smallest element of $\mathbf{P}$. Since all entries of $\mathbf{P}$ are strictly positive, we have $d > 0$. By our lemma
$M_n - m_n \leq (1 - 2d)(M_{n - 1} - m_{n - 1})\ .$
From this we see that
$M_n - m_n \leq (1 - 2d)^n(M_0 - m_0)\ .$
Since $r \ge 2$, we must have $d \leq 1/2$, so $0 \leq 1 - 2d < 1$, so the difference $M_n - m_n$ tends to 0 as $n$ tends to infinity. Since every component of $\mathbf {P}^n \mathbf {y}$ lies between $m_n$ and $M_n$, each component must approach the same number $u = M = m$. This shows that
$\lim_{n \to \infty} \mathbf {P}^n \mathbf {y} = \mathbf{u}\ , \label{eq 11.4.4}$
where $\mathbf{u}$ is a column vector all of whose components equal $u$.
Now let $\mathbf{y}$ be the vector with $j$th component equal to 1 and all other components equal to 0. Then $\mathbf {P}^n \mathbf {y}$ is the $j$th column of $\mathbf {P}^n$. Doing this for each $j$ proves that the columns of $\mathbf {P}^n$ approach constant column vectors. That is, the rows of $\mathbf {P}^n$ approach a common row vector $\mathbf{w}$, or, $\lim_{n \to \infty} \mathbf {P}^n = \mathbf {W}\ . \label{eq 11.4.5}$
It remains to show that all entries in $\mathbf{W}$ are strictly positive. As before, let $\mathbf{y}$ be the vector with $j$th component equal to 1 and all other components equal to 0. Then $\mathbf{P}\mathbf{y}$ is the $j$th column of $\mathbf{P}$, and this column has all entries strictly positive. The minimum component of the vector $\mathbf{P}\mathbf{y}$ was defined to be $m_1$, hence $m_1 > 0$. Since $m_1 \le m$, we have $m > 0$. Note finally that this value of $m$ is just the $j$th component of $\mathbf{w}$, so all components of $\mathbf{w}$ are strictly positive.
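The contraction at the heart of this proof is easy to watch numerically. A minimal sketch, assuming Python with NumPy, using the matrix of Exercise $1$ below (smallest entry $d = 1/4$):

```python
import numpy as np

# A transition matrix with no zero entries; d = 1/4, so 1 - 2d = 1/2.
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])
y = np.array([1.0, 0.0])
d = P.min()

v = y
for n in range(1, 9):
    v = P @ v                           # v = P^n y
    gap = v.max() - v.min()             # M_n - m_n
    print(n, gap, (1 - 2 * d) ** n)     # the gap stays below (1 - 2d)^n (M_0 - m_0)
```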
Doeblin’s Proof
We give now a very different proof of the main part of the fundamental limit theorem for regular Markov chains. This proof was first given by Doeblin,17 a brilliant young mathematician who was killed in his twenties in the Second World War.
Theorem $2$
Let $\mathbf {P}$ be the transition matrix for a regular Markov chain with fixed vector $\mathbf {w}$. Then for any initial probability vector $\mathbf {u}$, $\mathbf {uP}^n \rightarrow \mathbf {w}$ as $n \rightarrow \infty.$
Proof. Let $X_0,\ X_1,\ \ldots$ be a Markov chain with transition matrix $\mathbf {P}$ started in state $s_i$. Let $Y_0,\ Y_1,\ \ldots$ be a Markov chain with transition matrix $\mathbf {P}$ started with initial probabilities given by $\mathbf {w}$. The $X$ and $Y$ processes are run independently of each other.
We consider also a third Markov chain $\mathbf{P}^*$ which consists of watching both the $X$ and $Y$ processes. The states for $\mathbf{P}^*$ are pairs $(s_i, s_j)$. The transition probabilities are given by
$\mathbf{P}^{*}[(i,j),(k,l)] = \mathbf{P}(i,k) \cdot \mathbf{P}(j,l)\ .$
Since $\mathbf{P}$ is regular there is an $N$ such that $\mathbf{P}^{N}(i,j) > 0$ for all $i$ and $j$. Thus for the $\mathbf{P}^*$ chain it is also possible to go from any state $(s_i, s_j)$ to any other state $(s_k,s_l)$ in at most $N$ steps. That is $\mathbf{P}^*$ is also a regular Markov chain.
We know that a regular Markov chain will reach any state in a finite time. Let $T$ be the first time that the chain $\mathbf{P}^*$ is in a state of the form $(s_k,s_k)$. In other words, $T$ is the first time that the $X$ and the $Y$ processes are in the same state. Then we have shown that
$P[T > n] \rightarrow 0 \;\;\mbox{as}\;\; n \rightarrow \infty\ .$
If we watch the $X$ and $Y$ processes after the first time they are in the same state we would not predict any difference in their long range behavior. Since this will happen no matter how we started these two processes, it seems clear that the long range behavior should not depend upon the starting state. We now show that this is true.
We first note that if $n \ge T$, then since $X$ and $Y$ are both in the same state at time $T$,
$P(X_n = j\ |\ n \ge T) = P(Y_n = j\ |\ n \ge T)\ .$
If we multiply both sides of this equation by $P(n \ge T)$, we obtain
$P(X_n = j,\ n \ge T) = P(Y_n = j,\ n \ge T)\ . \label{eq 11.4.1}$
We know that for all $n$,
$P(Y_n = j) = w_j\ .$ But $P(Y_n = j) = P(Y_n = j,\ n \ge T) + P(Y_n = j,\ n < T)\ ,$
and the second summand on the right-hand side of this equation goes to 0 as $n$ goes to $\infty$, since $P(n < T)$ goes to 0 as $n$ goes to $\infty$. So,
$P(Y_n = j,\ n \ge T) \rightarrow w_j\ ,$
as $n$ goes to $\infty$. From Equation $1$, we see that $P(X_n = j,\ n \ge T) \rightarrow w_j\ ,$ as $n$ goes to $\infty$. But by similar reasoning to that used above, the difference between this last expression and $P(X_n = j)$ goes to 0 as $n$ goes to $\infty$. Therefore,
$P(X_n = j) \rightarrow w_j\ ,$ as $n$ goes to $\infty$.
This completes the proof.
In the above proof, we have said nothing about the rate at which the distributions of the $X_n$’s approach the fixed distribution $\mathbf {w}$. In fact, it can be shown18 that
$\sum ^{r}_{j = 1} \mid P(X_{n} = j) - w_j \mid \leq 2 P(T > n)\ .$
The left-hand side of this inequality can be viewed as the distance between the distribution of the Markov chain after $n$ steps, starting in state $s_i$, and the limiting distribution $\mathbf {w}$.
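The coupling time $T$ is easy to simulate. A rough sketch, assuming Python with NumPy, for the Land of Oz chain: the $Y$ process starts in equilibrium, and we estimate $P(T > n)$ for small $n$.

```python
import numpy as np

P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
w = np.array([0.4, 0.2, 0.4])      # fixed vector of the Land of Oz chain
rng = np.random.default_rng(1)

trials, horizon = 10_000, 10
exceed = np.zeros(horizon)         # exceed[n] estimates P(T > n)
for _ in range(trials):
    x = 0                          # the X process starts in state s_1
    y = rng.choice(3, p=w)         # the Y process starts in equilibrium
    for n in range(horizon):
        if x == y:                 # the processes have coupled: T <= n
            break
        exceed[n] += 1             # still uncoupled: T > n
        x = rng.choice(3, p=P[x])
        y = rng.choice(3, p=P[y])

print(exceed / trials)             # decays rapidly toward 0
```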
Exercises
Exercise $1$
Define $\mathbf{P}$ and $\mathbf{y}$ by
$\mathbf {P} = \pmatrix{ .5 & .5 \cr.25 & .75 }, \qquad \mathbf {y} = \pmatrix{ 1 \cr 0 }\ .$
Compute $\mathbf {P}\mathbf{y}$, $\mathbf {P}^2 \mathbf {y}$, and $\mathbf {P}^4 \mathbf {y}$ and show that the results are approaching a constant vector. What is this vector?
Exercise $2$
Let $\mathbf{P}$ be a regular $r \times r$ transition matrix and $\mathbf{y}$ any $r$-component column vector. Show that the value of the limiting constant vector for $\mathbf {P}^n \mathbf {y}$ is $\mathbf{w}\mathbf{y}$.
Exercise $3$
Let
$\mathbf {P} = \pmatrix{ 1 & 0 & 0 \cr .25 & 0 & .75 \cr 0 & 0 & 1 }$
be a transition matrix of a Markov chain. Find two fixed vectors of $\mathbf {P}$ that are linearly independent. Does this show that the Markov chain is not regular?
Exercise $4$
Describe the set of all fixed column vectors for the chain given in Exercise $3$.
Exercise $5$
The theorem that $\mathbf {P}^n \to \mathbf {W}$ was proved only for the case that $\mathbf{P}$ has no zero entries. Fill in the details of the following extension to the case that $\mathbf{P}$ is regular. Since $\mathbf{P}$ is regular, for some $N, \mathbf {P}^N$ has no zeros. Thus, the proof given shows that $M_{nN} - m_{nN}$ approaches 0 as $n$ tends to infinity. However, the difference $M_n - m_n$ can never increase. (Why?) Hence, if we know that the differences obtained by looking at every $N$th time tend to 0, then the entire sequence must also tend to 0.
Exercise $6$
Let $\mathbf{P}$ be a regular transition matrix and let $\mathbf{w}$ be the unique non-zero fixed vector of $\mathbf{P}$. Show that no entry of $\mathbf{w}$ is 0.
Exercise $7$
Here is a trick to try on your friends. Shuffle a deck of cards and deal them out one at a time. Count the face cards each as ten. Ask your friend to look at one of the first ten cards; if this card is a six, she is to look at the card that turns up six cards later; if this card is a three, she is to look at the card that turns up three cards later, and so forth. Eventually she will reach a point where she is to look at a card that turns up $x$ cards later but there are not $x$ cards left. You then tell her the last card that she looked at even though you did not know her starting point. You tell her you do this by watching her, and she cannot disguise the times that she looks at the cards. In fact you just do the same procedure and, even though you do not start at the same point as she does, you will most likely end at the same point. Why?
Exercise $8$
Write a program to play the game in Exercise $7$.
Exercise $9$
(Suggested by Peter Doyle) In the proof of Theorem $2$, we assumed the existence of a fixed vector $\mathbf{w}$. To avoid this assumption, beef up the coupling argument to show (without assuming the existence of a stationary distribution $\mathbf{w}$) that for appropriate constants $C$ and $r < 1$, the distance between $\alpha P^n$ and $\beta P^n$ is at most $C r^n$ for any starting distributions $\alpha$ and $\beta$. Apply this in the case where $\beta = \alpha P$ to conclude that the sequence $\alpha P^n$ is a Cauchy sequence, and that its limit is a matrix $W$ whose rows are all equal to a probability vector $w$ with $wP = w$. Note that the distance between $\alpha P^n$ and $w$ is at most $C r^n$, so in freeing ourselves from the assumption about having a fixed vector we’ve proved that the convergence to equilibrium takes place exponentially fast.

11.5: Mean First Passage Time for Ergodic Chains
In this section we consider two closely related descriptive quantities of interest for ergodic chains: the mean time to return to a state and the mean time to go from one state to another state.
Let $\mathbf{P}$ be the transition matrix of an ergodic chain with states $s_1$, $s_2$, …, $s_r$. Let $\mathbf {w} = (w_1,w_2,\ldots,w_r)$ be the unique probability vector such that $\mathbf {w} \mathbf {P} = \mathbf {w}$. Then, by the Law of Large Numbers for Markov chains, in the long run the process will spend a fraction $w_j$ of the time in state $s_j$. Thus, if we start in any state, the chain will eventually reach state $s_j$; in fact, it will be in state $s_j$ infinitely often.
Another way to see this is the following: Form a new Markov chain by making $s_j$ an absorbing state, that is, define $p_{jj} = 1$. If we start at any state other than $s_j$, this new process will behave exactly like the original chain up to the first time that state $s_j$ is reached. Since the original chain was an ergodic chain, it was possible to reach $s_j$ from any other state. Thus the new chain is an absorbing chain with a single absorbing state $s_j$ that will eventually be reached. So if we start the original chain at a state $s_i$ with $i \ne j$, we will eventually reach the state $s_j$.
Let $\mathbf{N}$ be the fundamental matrix for the new chain. The entries of $\mathbf{N}$ give the expected number of times in each state before absorption. In terms of the original chain, these quantities give the expected number of times in each of the states before reaching state $s_j$ for the first time. The $i$th component of the vector $\mathbf {N} \mathbf {c}$ gives the expected number of steps before absorption in the new chain, starting in state $s_i$. In terms of the old chain, this is the expected number of steps required to reach state $s_j$ for the first time starting at state $s_i$.
Mean First Passage Time
Definition: Mean First Passage Time
If an ergodic Markov chain is started in state $s_i$, the expected number of steps to reach state $s_j$ for the first time is called the mean first passage time from $s_i$ to $s_j$. It is denoted by $m_{ij}$. By convention $m_{ii} = 0$.
Example $1$
Let us return to the maze example (Example 11.3.3). We shall make this ergodic chain into an absorbing chain by making state 5 an absorbing state. For example, we might assume that food is placed in the center of the maze and once the rat finds the food, he stays to enjoy it (see Figure $1$).
The new transition matrix in canonical form is
$\mathbf{P}= \begin{matrix} & \begin{matrix}1&&2&&3&&4&&6&&7&&8&&9&&5\end{matrix} \\ \begin{matrix}1\\2\\3\\4\\6\\7\\8\\9\\5\end{matrix} & \left(\begin{array}{cccccccc|c} 0 & 1/2 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 0 & 0 & 0 & 0 & 1/3 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 0 & 0 & 0 & 0 & 1/3 & 1/3 \\ 1/3 & 0 & 0 & 0 & 0 & 1/3 & 0 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 1/3 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 1/2 & 0 & 0 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)\end{matrix}$
If we compute the fundamental matrix $\mathbf{N}$, we obtain
$\mathbf {N} = \frac18 \pmatrix{ 14 & 9 & 4 & 3 & 9 & 4 & 3 & 2 \cr 6 & 14 & 6 & 4 & 4 & 2 & 2 & 2 \cr 4 & 9 & 14 & 9 & 3 & 2 & 3 & 4 \cr 2 & 4 & 6 & 14 & 2 & 2 & 4 & 6 \cr 6 & 4 & 2 & 2 & 14 & 6 & 4 & 2 \cr 4 & 3 & 2 & 3 & 9 & 14 & 9 & 4 \cr 2 & 2 & 2 & 4 & 4 & 6 & 14 & 6 \cr 2 & 3 & 4 & 9 & 3 & 4 & 9 & 14 \cr}\ .$
The expected time to absorption for different starting states is given by the vector $\mathbf {N} \mathbf {c}$, where
$\mathbf {N} \mathbf {c} = \pmatrix{ 6 \cr 5 \cr 6 \cr 5 \cr 5 \cr 6 \cr 5 \cr 6 \cr}\ .$
We see that, starting from compartment 1, it will take on the average six steps to reach food. It is clear from symmetry that we should get the same answer for starting at state 3, 7, or 9. It is also clear that it should take one more step, starting at one of these states, than it would starting at 2, 4, 6, or 8. Some of the results obtained from $\mathbf{N}$ are not so obvious. For instance, we note that the expected number of times in the starting state is 14/8 regardless of the state in which we start.
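These computations are easy to reproduce. A minimal sketch, assuming Python with NumPy; $\mathbf{Q}$ is the upper-left block of the canonical form above (the transient states 1, 2, 3, 4, 6, 7, 8, 9):

```python
import numpy as np

# Transitions among the transient states 1, 2, 3, 4, 6, 7, 8, 9.
Q = np.array([
    [0,   1/2, 0,   0,   1/2, 0,   0,   0  ],
    [1/3, 0,   1/3, 0,   0,   0,   0,   0  ],
    [0,   1/2, 0,   1/2, 0,   0,   0,   0  ],
    [0,   0,   1/3, 0,   0,   0,   0,   1/3],
    [1/3, 0,   0,   0,   0,   1/3, 0,   0  ],
    [0,   0,   0,   0,   1/2, 0,   1/2, 0  ],
    [0,   0,   0,   0,   0,   1/3, 0,   1/3],
    [0,   0,   0,   1/2, 0,   0,   1/2, 0  ],
])

N = np.linalg.inv(np.eye(8) - Q)   # fundamental matrix
print(8 * N)                       # 8N, as displayed in the text
print(N @ np.ones(8))              # expected steps to the food: 6, 5, 6, 5, 5, 6, 5, 6
```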
Mean Recurrence Time
A quantity that is closely related to the mean first passage time is the mean recurrence time defined as follows. Assume that we start in state $s_i$; consider the length of time before we return to $s_i$ for the first time. It is clear that we must return, since we either stay at $s_i$ the first step or go to some other state $s_j$, and from any other state $s_j$, we will eventually reach $s_i$ because the chain is ergodic.
Definition: Mean Recurrence Time
If an ergodic Markov chain is started in state $s_i$, the expected number of steps to return to $s_i$ for the first time is called the mean recurrence time for $s_i$. It is denoted by $r_i$.
We need to develop some basic properties of the mean first passage time. Consider the mean first passage time from $s_i$ to $s_j$; assume that $i \ne j$. This may be computed as follows: take the expected number of steps required given the outcome of the first step, multiply by the probability that this outcome occurs, and add. If the first step is to $s_j$, the expected number of steps required is 1; if it is to some other state $s_k$, the expected number of steps required is $m_{kj}$ plus 1 for the step already taken. Thus,
$m_{ij} = p_{ij} + \sum_{k \ne j} p_{ik}(m_{kj} + 1)\ ,$
or, since $\sum_k p_{ik} = 1$,
$m_{ij} = 1 + \sum_{k \ne j}p_{ik} m_{kj}\ .\label{eq 11.5.1}$
Similarly, starting in $s_i$, it must take at least one step to return. Considering all possible first steps gives us
\begin{aligned} r_i &= \sum_k p_{ik}(m_{ki} + 1) \\ &= 1 + \sum_k p_{ik} m_{ki}\ . \label{eq 11.5.2} \end{aligned}
Mean First Passage Matrix and Mean Recurrence Matrix
Let us now define two matrices $\mathbf{M}$ and $\mathbf{D}$. The $ij$th entry $m_{ij}$ of $\mathbf{M}$ is the mean first passage time to go from $s_i$ to $s_j$ if $i \ne j$; the diagonal entries are 0. The matrix $\mathbf{M}$ is called the mean first passage matrix. The matrix $\mathbf{D}$ is the matrix with all entries 0 except the diagonal entries $d_{ii} = r_i$. The matrix $\mathbf{D}$ is called the mean recurrence matrix. Let $\mathbf {C}$ be an $r \times r$ matrix with all entries 1. Using Equation $3$ for the case $i \ne j$ and Equation $4$ for the case $i = j$, we obtain the matrix equation $\mathbf{M} = \mathbf{P} \mathbf{M} + \mathbf{C} - \mathbf{D}\ , \label{eq 11.5.3}$ or $(\mathbf{I} - \mathbf{P}) \mathbf{M} = \mathbf{C} - \mathbf{D}\ . \label{eq 11.5.4}$ Equation $6$ with $m_{ii} = 0$ implies Equations $3$ and $4$. We are now in a position to prove our first basic theorem.
Theorem $1$
For an ergodic Markov chain, the mean recurrence time for state $s_i$ is $r_i = 1/w_i$, where $w_i$ is the $i$th component of the fixed probability vector for the transition matrix.
Proof. Multiplying both sides of Equation $6$ by $\mathbf{w}$ and using the fact that
$\mathbf {w}(\mathbf {I} - \mathbf {P}) = \mathbf {0}$
gives
$\mathbf {w} \mathbf {C} - \mathbf {w} \mathbf {D} = \mathbf {0}\ .$
Here $\mathbf {w} \mathbf {C}$ is a row vector with all entries 1 and $\mathbf {w} \mathbf {D}$ is a row vector with $i$th entry $w_i r_i$. Thus
$(1,1,\ldots,1) = (w_1r_1,w_2r_2,\ldots,w_nr_n)$ and $r_i = 1/w_i\ ,$
as was to be proved.
Corollary $1$
For an ergodic Markov chain, the components of the fixed probability vector $\mathbf{w}$ are strictly positive.
Proof. We know that the values of $r_i$ are finite and so $w_i = 1/r_i$ cannot be 0.
Example $2$
In Section 11.3 we found the fixed probability vector for the maze example to be
$\mathbf {w} = \pmatrix{ \frac1{12} & \frac18 & \frac1{12} & \frac18 & \frac16 & \frac18 & \frac1{12} & \frac18 & \frac1{12}}\ .$
Hence, the mean recurrence times are given by the reciprocals of these probabilities. That is,
$\mathbf {r} = \pmatrix{ 12 & 8 & 12 & 8 & 6 & 8 & 12 & 8 & 12 }\ .$
Returning to the Land of Oz, we found that the weather in the Land of Oz could be represented by a Markov chain with states rain, nice, and snow. In Section 11.3 we found that the limiting vector was $\mathbf {w} = (2/5,1/5,2/5)$. From this we see that the mean number of days between rainy days is 5/2, between nice days is 5, and between snowy days is 5/2.
Fundamental Matrix
We shall now develop a fundamental matrix for ergodic chains that will play a role similar to that of the fundamental matrix $\mathbf {N} = (\mathbf {I} - \mathbf {Q})^{-1}$ for absorbing chains. As was the case with absorbing chains, the fundamental matrix can be used to find a number of interesting quantities involving ergodic chains. Using this matrix, we will give a method for calculating the mean first passage times for ergodic chains that is easier to use than the method given above. In addition, we will state (but not prove) the Central Limit Theorem for Markov Chains, the statement of which uses the fundamental matrix.
We begin by considering the case that $\mathbf{P}$ is the transition matrix of a regular Markov chain. Since there are no absorbing states, we might be tempted to try $\mathbf {Z} = (\mathbf {I} - \mathbf {P})^{-1}$ for a fundamental matrix. But $\mathbf {I} - \mathbf {P}$ does not have an inverse. To see this, recall that a matrix $\mathbf{R}$ has an inverse if and only if $\mathbf {R} \mathbf {x} = \mathbf {0}$ implies $\mathbf {x} = \mathbf {0}$. But since $\mathbf {P} \mathbf {c} = \mathbf {c}$ we have $(\mathbf {I} - \mathbf {P})\mathbf {c} = \mathbf {0}$, and so $\mathbf {I} - \mathbf {P}$ does not have an inverse.
We recall that if we have an absorbing Markov chain, and $\mathbf{Q}$ is the restriction of the transition matrix to the set of transient states, then the fundamental matrix could be written as
$\mathbf {N} = \mathbf {I} + \mathbf {Q} + \mathbf {Q}^2 + \cdots\ .$
The reason that this power series converges is that $\mathbf {Q}^n \rightarrow 0$, so this series acts like a convergent geometric series.
This idea might prompt one to try to find a similar series for regular chains. Since we know that $\mathbf {P}^n \rightarrow \mathbf {W}$, we might consider the series
$\mathbf {I} + (\mathbf {P} -\mathbf {W}) + (\mathbf {P}^2 - \mathbf{W}) + \cdots\ .\label{eq 11.5.8}$
We now use special properties of $\mathbf{P}$ and $\mathbf{W}$ to rewrite this series. The special properties are: 1) $\mathbf {P}\mathbf {W} = \mathbf {W}$, and 2) $\mathbf {W}^k = \mathbf {W}$ for all positive integers $k$. These facts are easy to verify, and are left as an exercise (see Exercise $28$). Using these facts, we see that
\begin{aligned} (\mathbf {P} - \mathbf {W})^n &= \sum_{i = 0}^n (-1)^i{n \choose i}\mathbf {P}^{n-i}\mathbf {W}^i \\ &= \mathbf {P}^n + \sum_{i = 1}^n (-1)^i{n \choose i} \mathbf {W}^i \\ &= \mathbf {P}^n + \sum_{i = 1}^n (-1)^i{n \choose i} \mathbf {W} \\ &= \mathbf {P}^n + \Biggl(\sum_{i = 1}^n (-1)^i{n \choose i}\Biggr) \mathbf {W}\ . \end{aligned}
If we expand the expression $(1-1)^n$, using the Binomial Theorem, we obtain the expression in parenthesis above, except that we have an extra term (which equals 1). Since $(1-1)^n = 0$, we see that the above expression equals -1. So we have
$(\mathbf {P} - \mathbf {W})^n = \mathbf {P}^n - \mathbf {W}\ ,$ for all $n \ge 1$.
We can now rewrite the series in Eq. $7$ as $\mathbf {I} + (\mathbf {P} - \mathbf {W}) + (\mathbf {P} - \mathbf {W})^2 + \cdots\ .$ Since the $n$th term in this series is equal to $\mathbf {P}^n - \mathbf {W}$, the $n$th term goes to 0 as $n$ goes to infinity. This is sufficient to show that this series converges, and sums to the inverse of the matrix $\mathbf {I} - \mathbf {P} + \mathbf {W}$. We call this inverse the fundamental matrix associated with the chain, and we denote it by $\mathbf{Z}$.
In the case that the chain is ergodic, but not regular, it is not true that $\mathbf {P}^n \rightarrow \mathbf {W}$ as $n \rightarrow \infty$. Nevertheless, the matrix $\mathbf {I} - \mathbf {P} + \mathbf {W}$ still has an inverse, as we will now show.
Proposition $1$
Let $\mathbf{P}$ be the transition matrix of an ergodic chain, and let $\mathbf{W}$ be the matrix all of whose rows are the fixed probability row vector $\mathbf{w}$ for $\mathbf{P}$. Then the matrix
$\mathbf {I} - \mathbf {P} + \mathbf {W}$
has an inverse.
Proof. Let $\mathbf{x}$ be a column vector such that
$(\mathbf {I} - \mathbf {P} + \mathbf {W})\mathbf {x} = \mathbf {0}\ .$
To prove the proposition, it is sufficient to show that $\mathbf{x}$ must be the zero vector. Multiplying this equation by $\mathbf{w}$ and using the fact that $\mathbf{w}(\mathbf{I} - \mathbf{ P}) = \mathbf{0}$ and $\mathbf{w} \mathbf{W} = \mathbf {w}$, we have
$\mathbf {w}(\mathbf {I} - \mathbf {P} + \mathbf {W})\mathbf {x} = \mathbf {w} \mathbf {x} = \mathbf {0}\ .$
Therefore,
$(\mathbf {I} - \mathbf {P})\mathbf {x} = \mathbf {0}\ .$
But this means that $\mathbf {x} = \mathbf {P} \mathbf {x}$, i.e., $\mathbf{x}$ is a fixed column vector for $\mathbf{P}$. By Theorem 11.3.4, this can only happen if $\mathbf{x}$ is a constant vector. Since $\mathbf {w}\mathbf {x} = 0$, and $\mathbf{w}$ has strictly positive entries, we see that $\mathbf {x} = \mathbf {0}$. This completes the proof.
As in the regular case, we will call the inverse of the matrix $\mathbf {I} - \mathbf {P} + \mathbf {W}$ the fundamental matrix for the ergodic chain with transition matrix $\mathbf{P}$, and we will use $\mathbf{Z}$ to denote this fundamental matrix.
Example $3$
Let $\mathbf{P}$ be the transition matrix for the weather in the Land of Oz. Then
\begin{aligned} \mathbf{I} - \mathbf{P} + \mathbf{W} &= \pmatrix{ 1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\cr} - \pmatrix{ 1/2 & 1/4 & 1/4\cr 1/2 & 0 & 1/2\cr 1/4 & 1/4 & 1/2\cr} + \pmatrix{ 2/5 & 1/5 & 2/5\cr 2/5 & 1/5 & 2/5\cr 2/5 & 1/5 & 2/5\cr} \\ &= \pmatrix{ 9/10 & -1/20 & 3/20\cr -1/10 & 6/5 & -1/10\cr 3/20 & -1/20 & 9/10\cr}\ , \end{aligned}
so
$\mathbf{Z} = (\mathbf{I} - \mathbf{P} + \mathbf{W})^{-1} = \pmatrix{ 86/75 & 1/25 & -14/75\cr 2/25 & 21/25 & 2/25\cr -14/75 & 1/25 & 86/75\cr}\ .$
Using the Fundamental Matrix to Calculate the Mean First Passage Matrix
We shall show how one can obtain the mean first passage matrix from the fundamental matrix for an ergodic Markov chain. Before stating the theorem which gives the first passage times, we need a few facts about $\mathbf{Z}$.
Lemma $1$
Let $\mathbf{Z} = (\mathbf{I} - \mathbf{P} + \mathbf{W})^{-1}$, and let $\mathbf{c}$ be a column vector of all 1’s. Then
$\mathbf{Z}\mathbf{c} = \mathbf{c}\ ,$ $\mathbf{w}\mathbf{Z} = \mathbf{w}\ ,$
and
$\mathbf{Z}(\mathbf{I} - \mathbf{P}) = \mathbf{I} - \mathbf{W}\ .$
Proof. Since $\mathbf{P}\mathbf{c} = \mathbf{c}$ and $\mathbf{W}\mathbf{c} = \mathbf{c}$,
$\mathbf{c} = (\mathbf{I} - \mathbf{P} + \mathbf{W}) \mathbf{c}\ .$
If we multiply both sides of this equation on the left by $\mathbf{Z}$, we obtain
$\mathbf{Z}\mathbf{c} = \mathbf{c}\ .$
Similarly, since $\mathbf{w}\mathbf{P} = \mathbf{w}$ and $\mathbf{w}\mathbf{W} = \mathbf{w}$,
$\mathbf{w} = \mathbf{w}(\mathbf{I} - \mathbf{P} + \mathbf{W})\ .$
If we multiply both sides of this equation on the right by $\mathbf{Z}$, we obtain
$\mathbf{w}\mathbf{Z} = \mathbf{w}\ .$
Finally, we have
\begin{aligned} (\mathbf{I} - \mathbf{P} + \mathbf{W})(\mathbf{I} - \mathbf{W}) &= \mathbf{I} - \mathbf{W} - \mathbf{P} + \mathbf{W} + \mathbf{W} - \mathbf{W} \\ &= \mathbf{I} - \mathbf{P}\ . \end{aligned}
Multiplying on the left by $\mathbf{Z}$, we obtain $\mathbf{I} - \mathbf{W} = \mathbf{Z}(\mathbf{I} - \mathbf{P})\ .$
This completes the proof.
The following theorem shows how one can obtain the mean first passage times from the fundamental matrix.
Theorem $2$
The mean first passage matrix $\mathbf{M}$ for an ergodic chain is determined from the fundamental matrix $\mathbf{Z}$ and the fixed row probability vector $\mathbf{w}$ by
$m_{ij} = \frac{z_{jj} - z_{ij}}{w_j}\ .$
Proof. We showed in Equation $6$ that
$(\mathbf {I} - \mathbf {P})\mathbf {M} = \mathbf {C} - \mathbf {D}\ .$
Thus,
$\mathbf {Z}(\mathbf {I} - \mathbf {P})\mathbf {M} = \mathbf {Z} \mathbf {C} - \mathbf {Z} \mathbf {D}\ ,$
and from Lemma $1$,
$\mathbf {Z}(\mathbf {I} - \mathbf {P})\mathbf {M} = \mathbf {C} - \mathbf {Z} \mathbf {D}\ .$
Again using Lemma $1$, we have
$\mathbf {M} - \mathbf{W}\mathbf {M} = \mathbf {C} - \mathbf {Z} \mathbf {D}$
or
$\mathbf {M} = \mathbf {C} - \mathbf {Z} \mathbf {D} + \mathbf{W}\mathbf {M}\ .$
From this equation, we see that
$m_{ij} = 1 - z_{ij}r_j + (\mathbf{w} \mathbf{M})_j\ . \label{eq 11.5.6}$
But $m_{jj} = 0$, and so
$0 = 1 - z_{jj}r_j + (\mathbf {w} \mathbf {M})_j\ ,$
or
$(\mathbf{w} \mathbf{M})_j = z_{jj}r_j - 1\ . \label{eq 11.5.7}$
From Equations $8$ and $9$, we have
$m_{ij} = (z_{jj} - z_{ij}) \cdot r_j\ .$
Since $r_j = 1/w_j$,
$m_{ij} = \frac{z_{jj} - z_{ij}}{w_j}\ .$
Example $4$
(Example $3$ continued) In the Land of Oz example, we find that
$\mathbf{Z} = (\mathbf{I} - \mathbf{P} + \mathbf{W})^{-1} = \pmatrix{ 86/75 & 1/25 & -14/75\cr 2/25 & 21/25 & 2/25\cr -14/75 & 1/25 & 86/75\cr}\ .$
We have also seen that $\mathbf {w} = (2/5,1/5, 2/5)$. So, for example,
\begin{aligned} m_{12} &= \frac{z_{22} - z_{12}}{w_2} \\ &= \frac{21/25 - 1/25}{1/5} \\ &= 4\ , \end{aligned}
by Theorem $2$. Carrying out the calculations for the other entries of $\mathbf{M}$, we obtain
$\mathbf {M} = \pmatrix{ 0 & 4 & 10/3\cr 8/3 & 0 & 8/3\cr 10/3 & 4 & 0\cr}\ .$
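Theorem $2$ translates directly into a few lines of code. A minimal sketch, assuming Python with NumPy, for the Land of Oz chain:

```python
import numpy as np

P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
w = np.array([0.4, 0.2, 0.4])
W = np.tile(w, (3, 1))                        # every row of W equals w

Z = np.linalg.inv(np.eye(3) - P + W)          # fundamental matrix
M = (np.diag(Z)[None, :] - Z) / w[None, :]    # m_ij = (z_jj - z_ij) / w_j
print(M)                                      # rows (0, 4, 10/3), (8/3, 0, 8/3), (10/3, 4, 0)
```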
Computation
The program ErgodicChain calculates the fundamental matrix, the fixed vector, the mean recurrence matrix $\mathbf{D}$, and the mean first passage matrix $\mathbf{M}$. We have run the program for the Ehrenfest urn model (Example [exam 11.1.6]). We obtain:
$\mathbf{P}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ \begin{matrix}0\\1\\2\\3\\4\end{matrix} & \begin{pmatrix} .0000 & 1.0000 & .0000 & .0000 & .0000 \\ .2500 & .0000 & .7500 & .0000 & .0000 \\ .0000 & .5000 & .0000 & .5000 & .0000 \\ .0000 & .0000 & .7500 & .0000 & .2500 \\ .0000 & .0000 & .0000 & 1.0000 & .0000 \end{pmatrix}\end{matrix}$
$\mathbf{w}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ & \begin{pmatrix} .0625 & .2500 & .3750 & .2500 & .0625 \end{pmatrix}\end{matrix}$
$\mathbf{r}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ & \begin{pmatrix} 16.0000 & 4.0000 & 2.6667 & 4.0000 & 16.0000 \end{pmatrix}\end{matrix}$
$\mathbf{M}= \begin{matrix} & \begin{matrix}0&&1&&2&&3&&4\end{matrix} \\ \begin{matrix}0\\1\\2\\3\\4\end{matrix} & \begin{pmatrix} .0000 & 1.0000 & 2.6667 & 6.3333 & 21.3333 \\ 15.0000 & .0000 & 1.6667 & 5.3333 & 20.3333 \\ 18.6667 & 3.6667 & .0000 & 3.6667 & 18.6667 \\ 20.3333 & 5.3333 & 1.6667 & .0000 & 15.0000 \\ 21.3333 & 6.3333 & 2.6667 & 1.0000 & .0000 \end{pmatrix}\end{matrix}$
From the mean first passage matrix, we see that the mean time to go from 0 balls in urn 1 to 2 balls in urn 1 is 2.6667 steps while the mean time to go from 2 balls in urn 1 to 0 balls in urn 1 is 18.6667. This reflects the fact that the model exhibits a central tendency. Of course, the physicist is interested in the case of a large number of molecules, or balls, and so we should consider this example for $n$ so large that we cannot compute it even with a computer.
Ehrenfest Model
Example $5$
Let us consider the Ehrenfest model (see Example 11.1.6) for gas diffusion for the general case of $2n$ balls. Every second, one of the $2n$ balls is chosen at random and moved from the urn it was in to the other urn. If there are $i$ balls in the first urn, then with probability $i/2n$ we take one of them out and put it in the second urn, and with probability $(2n - i)/2n$ we take a ball from the second urn and put it in the first urn. At each second we let the number $i$ of balls in the first urn be the state of the system. Then from state $i$ we can pass only to state $i - 1$ and $i + 1$, and the transition probabilities are given by
$p_{ij}=\left\{\begin{array}{cl} \dfrac{i}{2n}, & \text{if } j = i-1, \\ 1-\dfrac{i}{2n}, & \text{if } j = i+1, \\ 0, & \text{otherwise.} \end{array}\right.$
This defines the transition matrix of an ergodic, non-regular Markov chain (see Exercise $15$). Here the physicist is interested in long-term predictions about the state occupied. In Example 11.3.4, we gave an intuitive reason for expecting that the fixed vector $\mathbf{w}$ is the binomial distribution with parameters $2n$ and $1/2$. It is easy to check that this is correct. So,
$w_i = \frac{{2n \choose i}}{2^{2n}}\ .$
Thus the mean recurrence time for state $i$ is
$r_i = \frac{2^{2n}}{{2n \choose i}}\ .$
Consider in particular the central term $i = n$. We have seen that this term is approximately $1/\sqrt{\pi n}$. Thus we may approximate $r_n$ by $\sqrt{\pi n}$.
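The quality of this approximation is easy to check. A small sketch, assuming Python:

```python
from math import comb, pi, sqrt

for n in (10, 100, 1000):
    exact = 4**n / comb(2 * n, n)      # r_n = 2^(2n) / C(2n, n)
    print(n, exact, sqrt(pi * n))      # the two agree closely for large n
```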
This model was used to explain the concept of reversibility in physical systems. Assume that we let our system run until it is in equilibrium. At this point, a movie is made, showing the system’s progress. The movie is then shown to you, and you are asked to tell if the movie was shown in the forward or the reverse direction. It would seem that there should always be a tendency to move toward an equal proportion of balls so that the correct order of time should be the one with the most transitions from $i$ to $i - 1$ if $i > n$ and $i$ to $i + 1$ if $i < n$.
In Figure $2$ we show the results of simulating the Ehrenfest urn model for the case of $n = 50$ and 1000 time units, using the program EhrenfestUrn. The top graph shows these results graphed in the order in which they occurred and the bottom graph shows the same results but with time reversed. There is no apparent difference.
We note that if we had not started in equilibrium, the two graphs would typically look quite different.
Reversibility
If the Ehrenfest model is started in equilibrium, then the process has no apparent time direction. The reason for this is that this process has a property called reversibility. Define $X_n$ to be the number of balls in the left urn at step $n$. We can calculate, for a general ergodic chain, the reverse transition probability:
\begin{aligned} P(X_{n - 1} = j | X_n = i) &= \frac{P(X_{n - 1} = j,X_n = i)}{P(X_n = i)} \\ &= \frac{P(X_{n - 1} = j) P(X_n = i | X_{n - 1} = j)}{P(X_n = i)} \\ &= \frac{P(X_{n - 1} = j)\, p_{ji}}{P(X_n = i)}\ . \end{aligned}
In general, this will depend upon $n$, since $P(X_n = j)$ and also $P(X_{n - 1} = j)$ change with $n$. However, if we start with the vector $\mathbf{w}$ or wait until equilibrium is reached, this will not be the case. Then we can define $p_{ij}^* = \frac{w_j p_{ji}}{w_i}$ as a transition matrix for the process watched with time reversed.
Let us calculate a typical transition probability for the reverse chain $\mathbf{ P}^* = \{p_{ij}^*\}$ in the Ehrenfest model. For example,
\begin{aligned} p^*_{i,i-1} &= \frac{w_{i-1}\, p_{i-1,i}}{w_i} = \frac{{2n \choose i-1}}{2^{2n}} \times \frac{2n-i+1}{2n} \times \frac{2^{2n}}{{2n \choose i}} \\ &= \frac{(2n)!}{(i-1)!\,(2n-i+1)!} \times \frac{(2n-i+1)\, i!\,(2n-i)!}{2n\,(2n)!} \\ &= \frac{i}{2n} = p_{i,i-1}\ . \end{aligned}
Similar calculations for the other transition probabilities show that $\mathbf {P}^* = \mathbf {P}$. When this occurs the process is called reversible. Clearly, an ergodic chain is reversible if, and only if, for every pair of states $s_i$ and $s_j$, $w_i p_{ij} = w_j p_{ji}$. In particular, for the Ehrenfest model this means that $w_i p_{i,i - 1} = w_{i - 1} p_{i - 1,i}$. Thus, in equilibrium, the pairs $(i, i - 1)$ and $(i - 1, i)$ should occur with the same frequency. While many of the Markov chains that occur in applications are reversible, this is a very strong condition. In Exercise $12$ you are asked to find an example of a Markov chain which is not reversible.
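The reverse matrix is a one-line computation. A sketch, assuming Python with NumPy, for the Ehrenfest chain with $2n = 4$:

```python
import numpy as np
from math import comb

P = np.array([[0.00, 1.00, 0.00, 0.00, 0.00],
              [0.25, 0.00, 0.75, 0.00, 0.00],
              [0.00, 0.50, 0.00, 0.50, 0.00],
              [0.00, 0.00, 0.75, 0.00, 0.25],
              [0.00, 0.00, 0.00, 1.00, 0.00]])
w = np.array([comb(4, i) for i in range(5)]) / 16   # binomial fixed vector

P_star = P.T * w[None, :] / w[:, None]              # p*_ij = w_j p_ji / w_i
print(np.allclose(P_star, P))                       # True: the chain is reversible
```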
The Central Limit Theorem for Markov Chains
Suppose that we have an ergodic Markov chain with states $s_1, s_2, \ldots, s_k$. It is natural to consider the distribution of the random variables $S^{(n)}_j$, which denotes the number of times that the chain is in state $s_j$ in the first $n$ steps. The $j$th component $w_j$ of the fixed probability row vector $\mathbf{w}$ is the proportion of times that the chain is in state $s_j$ in the long run. Hence, it is reasonable to conjecture that the expected value of the random variable $S^{(n)}_j$, as $n \rightarrow \infty$, is asymptotic to $nw_j$, and it is easy to show that this is the case (see Exercise $29$).
It is also natural to ask whether there is a limiting distribution of the random variables $S^{(n)}_j$. The answer is yes, and in fact, this limiting distribution is the normal distribution. As in the case of independent trials, one must normalize these random variables. Thus, we must subtract from $S^{(n)}_j$ its expected value, and then divide by its standard deviation. In both cases, we will use the asymptotic values of these quantities, rather than the values themselves. Thus, in the first case, we will use the value $nw_j$. It is not so clear what we should use in the second case. It turns out that the quantity $\sigma_j^2 = 2w_jz_{jj} - w_j - w_j^2 \label{eq 11.5.9}$ represents the asymptotic variance. Armed with these ideas, we can state the following theorem.
Theorem $6$ (Central Limit Theorem for Markov Chains)
For an ergodic chain, for any real numbers $r < s$, we have
$P\left(r<\frac{S_j^{(n)}-n w_j}{\sqrt{n \sigma_j^2}}<s\right) \rightarrow \frac{1}{\sqrt{2 \pi}} \int_r^s e^{-x^2 / 2} d x$
as $n \rightarrow \infty$, for any choice of starting state, where $\sigma_j^2$ is the quantity defined above.
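As an illustration (our sketch, not part of the text; it assumes Python with NumPy), one can check the theorem by simulation on the two-state chain of Exercise 1 below, whose fixed vector is $\mathbf{w} = (1/3, 2/3)$. The normalized occupation counts should have mean near 0 and variance near 1:

```python
import numpy as np

P = np.array([[0.5, 0.5], [0.25, 0.75]])
w = np.array([1/3, 2/3])                        # fixed vector of P
W = np.tile(w, (2, 1))
Z = np.linalg.inv(np.eye(2) - P + W)            # fundamental matrix Z
j = 0
sigma2 = 2 * w[j] * Z[j, j] - w[j] - w[j] ** 2  # asymptotic variance

rng = np.random.default_rng(0)
n, trials, vals = 2000, 400, []
for _ in range(trials):
    state, count = 0, 0
    for _ in range(n):
        state = rng.choice(2, p=P[state])
        count += (state == j)
    vals.append((count - n * w[j]) / np.sqrt(n * sigma2))
print(np.mean(vals), np.var(vals))              # near 0 and 1, respectively
```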
Historical Remarks
Markov chains were introduced by Andreĭ Andreevich Markov (1856–1922) and were named in his honor. He was a talented undergraduate who received a gold medal for his undergraduate thesis at St. Petersburg University. Besides being an active research mathematician and teacher, he was also active in politics and participated in the liberal movement in Russia at the beginning of the twentieth century. In 1913, when the government celebrated the 300th anniversary of the House of Romanov, Markov organized a counter-celebration of the 200th anniversary of Bernoulli’s discovery of the Law of Large Numbers.
Markov was led to develop Markov chains as a natural extension of sequences of independent random variables. In his first paper, in 1906, he proved that for a Markov chain with positive transition probabilities and numerical states, the average of the outcomes converges to the expected value of the limiting distribution (the fixed vector). In a later paper he proved the central limit theorem for such chains. Writing about Markov, A. P. Youschkevitch remarks:
Markov arrived at his chains starting from the internal needs of probability theory, and he never wrote about their applications to physical science. For him the only real examples of the chains were literary texts, where the two states denoted the vowels and consonants.17
In a paper written in 1913,18 Markov chose a sequence of 20,000 letters from Pushkin’s Eugene Onegin to see if this sequence can be approximately considered a simple chain. He obtained the Markov chain with transition matrix
$\mathbf{P} = \begin{matrix} & \begin{matrix} \text{vowel} & \text{consonant} \end{matrix} \\ \begin{matrix} \text{vowel} \\ \text{consonant} \end{matrix} & \begin{pmatrix} .128 & .872 \\ .663 & .337 \end{pmatrix} \end{matrix}$
The fixed vector for this chain is $(.432, .568)$, indicating that we should expect about 43.2 percent vowels and 56.8 percent consonants in the novel, which was borne out by the actual count.
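The fixed vector is quick to verify. Here is a minimal sketch of ours (assuming Python with NumPy): we solve $\mathbf{w}\mathbf{P} = \mathbf{w}$ by taking the left eigenvector of $\mathbf{P}$ for eigenvalue 1, normalized so that its components sum to 1.

```python
import numpy as np

P = np.array([[0.128, 0.872],
              [0.663, 0.337]])
# Left eigenvector of P for eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
w /= w.sum()
print(w)   # approximately [0.432, 0.568]
```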
Claude Shannon considered an interesting extension of this idea in his book The Mathematical Theory of Communication,19 in which he developed the information-theoretic concept of entropy. Shannon considers a series of Markov chain approximations to English prose. He does this first by chains in which the states are letters and then by chains in which the states are words. For example, for the case of words he presents first a simulation where the words are chosen independently but with appropriate frequencies.
REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.
He then notes the increased resemblance to ordinary English text when the words are chosen as a Markov chain, in which case he obtains
THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
A simulation like the last one is carried out by opening a book and choosing the first word, say it is the. Then the book is read until the word the appears again, and the word after this is chosen as the second word, which turned out to be head. The book is then read until the word head appears again and the next word, and, is chosen, and so on.
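The procedure Shannon describes is easy to mimic on a computer. The following is a small sketch of ours (plain Python; the corpus string passed in is a stand-in for the book being read):

```python
import random
from collections import defaultdict

def markov_text(corpus, length, seed=0):
    """Generate text whose word-to-word transitions match the corpus frequencies."""
    rng = random.Random(seed)
    words = corpus.split()
    followers = defaultdict(list)
    for a, b in zip(words, words[1:]):
        followers[a].append(b)          # duplicates preserve transition frequencies
    out = [rng.choice(words)]
    while len(out) < length:
        choices = followers.get(out[-1]) or words   # restart at dead ends
        out.append(rng.choice(choices))
    return " ".join(out)
```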
Other early examples of the use of Markov chains occurred in Galton’s study of the problem of survival of family names in 1889 and in the Markov chain introduced by P. and T. Ehrenfest in 1907 for diffusion. Poincaré in 1912 discussed card shuffling in terms of an ergodic Markov chain defined on a permutation group. Brownian motion, a continuous time version of random walk, was introduced in 1900–1901 by L. Bachelier in his study of the stock market, and in 1905–1907 in the works of A. Einstein and M. Smoluchowsky in their study of physical processes.
One of the first systematic studies of finite Markov chains was carried out by M. Frechet.20 The treatment of Markov chains in terms of the two fundamental matrices that we have used was developed by Kemeny and Snell21 to avoid the use of eigenvalues that one of these authors found too complex. The fundamental matrix $\mathbf{N}$ occurred also in the work of J. L. Doob and others in studying the connection between Markov processes and classical potential theory. The fundamental matrix $\mathbf{Z}$ for ergodic chains appeared first in the work of Frechet, who used it to find the limiting variance for the central limit theorem for Markov chains.
Exercises
Exercise $1$
Consider the Markov chain with transition matrix
$\mathbf {P} = \begin{pmatrix} 1/2 & 1/2 \\ 1/4 & 3/4 \end{pmatrix}\ .$
Find the fundamental matrix $\mathbf{Z}$ for this chain. Compute the mean first passage matrix using $\mathbf{Z}$.
Exercise $2$
A study of the strengths of Ivy League football teams shows that if a school has a strong team one year it is equally likely to have a strong team or average team next year; if it has an average team, half the time it is average next year, and if it changes it is just as likely to become strong as weak; if it is weak it has 2/3 probability of remaining so and 1/3 of becoming average.
1. A school has a strong team. On the average, how long will it be before it has another strong team?
2. A school has a weak team; how long (on the average) must the alumni wait for a strong team?
Exercise $3$
Consider Example 11.1.4 with $a = .5$ and $b = .75$. Assume that the President says that he or she will run. Find the expected length of time before the first time the answer is passed on incorrectly.
Exercise $4$
Find the mean recurrence time for each state of Example 11.1.4 for $a = .5$ and $b = .75$. Do the same for general $a$ and $b$.
Exercise $5$
A die is rolled repeatedly. Show by the results of this section that the mean time between occurrences of a given number is 6.
Exercise $6$
For the Land of Oz example (Example 11.1.1), make rain into an absorbing state and find the fundamental matrix $\mathbf{N}$. Interpret the results obtained from this chain in terms of the original chain.
Exercise $7$
A rat runs through the maze shown in Figure $3$. At each step it leaves the room it is in by choosing at random one of the doors out of the room.
1. Give the transition matrix $\mathbf{P}$ for this Markov chain.
2. Show that it is an ergodic chain but not a regular chain.
3. Find the fixed vector.
4. Find the expected number of steps before reaching Room 5 for the first time, starting in Room 1.
Exercise $8$
Modify the program ErgodicChain so that you can compute the basic quantities for the queueing example of Exercise 11.3.20. Interpret the mean recurrence time for state 0.
Exercise $9$
Consider a random walk on a circle of circumference $n$. The walker takes one unit step clockwise with probability $p$ and one unit counterclockwise with probability $q = 1 - p$. Modify the program ErgodicChain to allow you to input $n$ and $p$ and compute the basic quantities for this chain.
1. For which values of $n$ is this chain regular? ergodic?
2. What is the limiting vector $\mathbf{w}$?
3. Find the mean first passage matrix for $n = 5$ and $p = .5$. Verify that $m_{ij} = d(n - d)$, where $d$ is the clockwise distance from $i$ to $j$.
Exercise $10$
Two players match pennies and have between them a total of 5 pennies. If at any time one player has all of the pennies, to keep the game going, he gives one back to the other player and the game will continue. Show that this game can be formulated as an ergodic chain. Study this chain using the program ErgodicChain.
Exercise $11$
Calculate the reverse transition matrix for the Land of Oz example (Example 11.1.1). Is this chain reversible?
Exercise $12$
Give an example of a three-state ergodic Markov chain that is not reversible.
Exercise $13$
Let $\mathbf{P}$ be the transition matrix of an ergodic Markov chain and $\mathbf{P}^*$ the reverse transition matrix. Show that they have the same fixed probability vector $\mathbf{w}$.
Exercise $14$
If $\mathbf{P}$ is a reversible Markov chain, is it necessarily true that the mean time to go from state $i$ to state $j$ is equal to the mean time to go from state $j$ to state $i$? Hint: Try the Land of Oz example (Example 11.1.1).
Exercise $15$
Show that any ergodic Markov chain with a symmetric transition matrix (i.e., $p_{ij} = p_{ji})$ is reversible.
Exercise $16$
(Crowell22) Let $\mathbf{P}$ be the transition matrix of an ergodic Markov chain. Show that
$(\mathbf {I} + \mathbf {P} +\cdots+ \mathbf {P}^{n - 1})(\mathbf {I} - \mathbf {P} + \mathbf {W}) = \mathbf {I} - \mathbf {P}^n + n\mathbf {W}\ ,$
and from this show that
$\frac{\mathbf {I} + \mathbf {P} +\cdots+ \mathbf {P}^{n - 1}}n \to \mathbf {W}\ ,$
as $n \rightarrow \infty$.
Exercise $17$
An ergodic Markov chain is started in equilibrium (i.e., with initial probability vector $\mathbf{w}$). The mean time until the next occurrence of state $s_i$ is $\bar{m_i} = \sum_k w_k m_{ki} + w_i r_i$. Show that $\bar {m_i} = z_{ii}/w_i$, by using the facts that $\mathbf {w}\mathbf {Z} = \mathbf {w}$ and $m_{ki} = (z_{ii} - z_{ki})/w_i$.
Exercise $18$
A perpetual craps game goes on at Charley’s. Jones comes into Charley’s on an evening when there have already been 100 plays. He plans to play until the next time that snake eyes (a pair of ones) are rolled. Jones wonders how many times he will play. On the one hand he realizes that the average time between snake eyes is 36 so he should play about 18 times as he is equally likely to have come in on either side of the halfway point between occurrences of snake eyes. On the other hand, the dice have no memory, and so it would seem that he would have to play for 36 more times no matter what the previous outcomes have been. Which, if either, of Jones’s arguments do you believe? Using the result of Exercise $17$, calculate the expected time to reach snake eyes, in equilibrium, and see if this resolves the apparent paradox. If you are still in doubt, simulate the experiment to decide which argument is correct. Can you give an intuitive argument which explains this result?
Exercise $19$
Show that, for an ergodic Markov chain (see Theorem $3$),
$\sum_j m_{ij} w_j = \sum_j z_{jj} - 1 = K\ .$
The second expression above shows that the number $K$ is independent of $i$. The number $K$ is called Kemeny's constant. A prize was offered to the first person to give an intuitively plausible reason for the above sum to be independent of $i$. (See also Exercise $24$.)
Exercise $20$
Consider a game played as follows: You are given a regular Markov chain with transition matrix $\mathbf P$, fixed probability vector $\mathbf{w}$, and a payoff function $\mathbf f$ which assigns to each state $s_i$ an amount $f_i$ which may be positive or negative. Assume that $\mathbf {w}\mathbf {f} = 0$. You watch this Markov chain as it evolves, and every time you are in state $s_i$ you receive an amount $f_i$. Show that your expected winning after $n$ steps can be represented by a column vector $\mathbf{g}^{(n)}$, with
$\mathbf{g}^{(n)} = (\mathbf {I} + \mathbf {P} + \mathbf {P}^2 +\cdots+ \mathbf {P}^n) \mathbf {f}.$
Show that as $n \to \infty$, $\mathbf {g}^{(n)} \to \mathbf {g}$ with $\mathbf {g} = \mathbf {Z} \mathbf {f}$.
Exercise $21$
A highly simplified game of “Monopoly" is played on a board with four squares as shown in Figure $4$. You start at GO. You roll a die and move clockwise around the board a number of squares equal to the number that turns up on the die. You collect or pay an amount indicated on the square on which you land. You then roll the die again and move around the board in the same manner from your last position. Using the result of Exercise $20$, estimate the amount you should expect to win in the long run playing this version of Monopoly.
Exercise $22$
Show that if $\mathbf P$ is the transition matrix of a regular Markov chain, and $\mathbf W$ is the matrix each of whose rows is the fixed probability vector corresponding to $\mathbf {P}$, then $\mathbf {P}\mathbf {W} = \mathbf {W}$, and $\mathbf {W}^k = \mathbf {W}$ for all positive integers $k$.
Exercise $23$
Assume that an ergodic Markov chain has states $s_1, s_2, \ldots, s_k$. Let $S^{(n)}_j$ denote the number of times that the chain is in state $s_j$ in the first $n$ steps. Let $\mathbf{w}$ denote the fixed probability row vector for this chain. Show that, regardless of the starting state, the expected value of $S^{(n)}_j$, divided by $n$, tends to $w_j$ as $n \rightarrow \infty$. Hint: If the chain starts in state $s_i$, then the expected value of $S^{(n)}_j$ is given by the expression
$\sum_{h = 0}^n p^{(h)}_{ij}\ .$
Exercise $24$
In the course of a walk with Snell along Minnehaha Avenue in Minneapolis in the fall of 1983, Peter Doyle23 suggested the following explanation for the constancy of Kemeny's constant (see Exercise $19$). Choose a target state according to the fixed vector $\mathbf{w}$. Start from state $i$ and wait until the time $T$ that the target state occurs for the first time. Let $K_i$ be the expected value of $T$. Observe that $K_i + w_i \cdot 1/w_i = \sum_j P_{ij} K_j + 1\ ,$ and hence $K_i = \sum_j P_{ij} K_j\ .$ By the maximum principle, $K_i$ is a constant. Should Peter have been given the prize?
11.R: Footnotes and References
Footnotes
1. R. A. Howard, Dynamic Probabilistic Systems, vol. 1 (New York: John Wiley and Sons, 1971).
2. J. G. Kemeny, J. L. Snell, G. L. Thompson, Introduction to Finite Mathematics, 3rd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1974).
3. P. and T. Ehrenfest, "Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem," Physikalische Zeitschrift, vol. 8 (1907), pp. 311-314.
4. S. Sawyer, "Results for The Stepping Stone Model for Migration in Population Genetics," Annals of Probability, vol. 4 (1979), pp. 699-728.
5. H. Gonshor, "An Application of Random Walk to a Problem in Population Genetics," American Math Monthly, vol. 94 (1987), pp. 668-671.
6. Private communication.
7. F. S. Roberts, Discrete Mathematical Models (Englewood Cliffs, NJ: Prentice Hall, 1976).
8. W. W. Leontief, Input-Output Economics (Oxford: Oxford University Press, 1966).
9. L. J. Guibas and A. M. Odlyzko, "String Overlaps, Pattern Matching, and Non-transitive Games," Journal of Combinatorial Theory, Series A, vol. 30 (1981), pp. 183-208.
10. W. Penney, "Problem: Penney-Ante," Journal of Recreational Math, vol. 2 (1969), p. 241.
11. M. Gardner, "Mathematical Games," Scientific American, vol. 10 (1974), pp. 120-125; Guibas and Odlyzko, op. cit.
12. E. Seneta, Non-Negative Matrices: An Introduction to Theory and Applications (New York: Wiley, 1973), pp. 52-54.
13. Engle, Wahrscheinlichkeitsrechnung und Statistik, vol. 2 (Stuttgart: Klett Verlag, 1976).
14. E. G. Coffman, J. T. Kaduta, and L. A. Shepp, "On the Asymptotic Optimality of First-Fit Storage Allocation," IEEE Trans. Software Engineering, vol. 11 (1985), pp. 235-239.
15. W. Doeblin, "Expose de la Theorie des Chaines Simples Constantes de Markov a un Nombre Fini d'Etats," Rev. Math. de l'Union Interbalkanique, vol. 2 (1937), pp. 77-105.
16. T. Lindvall, Lectures on the Coupling Method (New York: Wiley, 1992).
17. See Dictionary of Scientific Biography, ed. C. C. Gillespie (New York: Scribner's Sons, 1970), pp. 124-130.
18. A. A. Markov, "An Example of Statistical Analysis of the Text of Eugene Onegin Illustrating the Association of Trials into a Chain," Bulletin de l'Academie Imperiale des Sciences de St. Petersburg, ser. 6, vol. 7 (1913), pp. 153-162.
19. C. E. Shannon and W. Weaver, The Mathematical Theory of Communication (Urbana: Univ. of Illinois Press, 1964).
20. M. Frechet, "Théorie des événements en chaine dans le cas d'un nombre fini d'états possible," in Recherches théoriques Modernes sur le calcul des probabilités, vol. 2 (Paris, 1938).
21. J. G. Kemeny and J. L. Snell, Finite Markov Chains.
22. Private communication.
23. Private communication.
• 12.1: Random Walks in Euclidean Space**
In the last several chapters, we have studied sums of random variables with the goal being to describe the distribution and density functions of the sum. In this chapter, we shall look at sums of discrete random variables from a different perspective. We shall be concerned with properties which can be associated with the sequence of partial sums, such as the number of sign changes of this sequence, the number of terms in the sequence which equal 0, and the expected size of the maximum term in the sequence.
• 12.2: Gambler's Ruin
• 12.3: Arc Sine Laws**
Thumbnail: Random walk in two dimensions. (Public Domain; László Németh via Wikipedia).
12: Random Walks
In the last several chapters, we have studied sums of random variables with the goal being to describe the distribution and density functions of the sum. In this chapter, we shall look at sums of discrete random variables from a different perspective. We shall be concerned with properties which can be associated with the sequence of partial sums, such as the number of sign changes of this sequence, the number of terms in the sequence which equal 0, and the expected size of the maximum term in the sequence.
We begin with the following definition.
Definition: $1$
Let $\{X_k\}_{k = 1}^\infty$ be a sequence of independent, identically distributed discrete random variables. For each positive integer $n$, we let $S_n$ denote the sum $X_1 + X_2 + \cdots + X_n$. The sequence $\{S_n\}_{n = 1}^\infty$ is called a random walk. If the common range of the $X_k$’s is ${\mathbf R}^m$, then we say that $\{S_n\}$ is a random walk in ${\mathbf R}^m$.
We view the sequence of $X_k$’s as being the outcomes of independent experiments. Since the $X_k$’s are independent, the probability of any particular (finite) sequence of outcomes can be obtained by multiplying the probabilities that each $X_k$ takes on the specified value in the sequence. Of course, these individual probabilities are given by the common distribution of the $X_k$’s. We will typically be interested in finding probabilities for events involving the related sequence of $S_n$’s. Such events can be described in terms of the $X_k$’s, so their probabilities can be calculated using the above idea.
There are several ways to visualize a random walk. One can imagine that a particle is placed at the origin in ${\mathbf R}^m$ at time $n = 0$. The sum $S_n$ represents the position of the particle at the end of $n$ seconds. Thus, in the time interval $[n-1, n]$, the particle moves (or jumps) from position $S_{n-1}$ to $S_{n}$. The vector representing this motion is just $S_n - S_{n-1}$, which equals $X_n$. This means that in a random walk, the jumps are independent and identically distributed. If $m = 1$, for example, then one can imagine a particle on the real line that starts at the origin, and at the end of each second, jumps one unit to the right or the left, with probabilities given by the distribution of the $X_k$’s. If $m = 2$, one can visualize the process as taking place in a city in which the streets form square city blocks. A person starts at one corner (i.e., at an intersection of two streets) and goes in one of the four possible directions according to the distribution of the $X_k$’s. If $m = 3$, one might imagine being in a jungle gym, where one is free to move in any one of six directions (left, right, forward, backward, up, and down). Once again, the probabilities of these movements are given by the distribution of the $X_k$’s.
Another model of a random walk (used mostly in the case where the range is ${\mathbf R}^1$) is a game, involving two people, which consists of a sequence of independent, identically distributed moves. The sum $S_n$ represents the score of the first person, say, after $n$ moves, with the assumption that the score of the second person is $-S_n$. For example, two people might be flipping coins, with a match or non-match representing $+1$ or $-1$, respectively, for the first player. Or, perhaps one coin is being flipped, with a head or tail representing $+1$ or $-1$, respectively, for the first player.
Random Walks on the Real Line
We shall first consider the simplest non-trivial case of a random walk in ${\mathbf R}^1$, namely the case where the common distribution function of the random variables $X_n$ is given by $f_X(x) = \left \{ \begin{array}{ll} 1/2, & \mbox{if } x = \pm 1, \\ 0, & \mbox{otherwise.} \end{array} \right.$ This situation corresponds to a fair coin being flipped, with $S_n$ representing the number of heads minus the number of tails which occur in the first $n$ flips. We note that in this situation, all paths of length $n$ have the same probability, namely $2^{-n}$.
It is sometimes instructive to represent a random walk as a polygonal line, or path, in the plane, where the horizontal axis represents time and the vertical axis represents the value of $S_n$. Given a sequence $\{S_n\}$ of partial sums, we first plot the points $(n, S_n)$, and then for each $k < n$, we connect $(k, S_k)$ and $(k+1, S_{k+1})$ with a straight line segment. The length of a path is just the difference in the time values of the beginning and ending points on the path. The reader is referred to Figure $1$. This figure, and the process it illustrates, are identical with the example, given in Chapter 1, of two people playing heads or tails.
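Such paths are easy to generate by simulation. The following minimal sketch (ours, in plain Python) produces one walk of length $2m$ and records the times at which $S_n = 0$:

```python
import random

def simple_walk(m, seed=None):
    """One symmetric random walk of length 2m, with its returns to 0."""
    rng = random.Random(seed)
    s, path, zeros = 0, [], []
    for n in range(1, 2 * m + 1):
        s += rng.choice((1, -1))   # each step is +1 or -1 with probability 1/2
        path.append(s)
        if s == 0:
            zeros.append(n)        # a return to the origin
    return path, zeros
```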
Returns and First Returns
We say that an equalization has occurred, or there is a return to the origin, at time $n$, if $S_n = 0$. We note that this can only occur if $n$ is an even integer. To calculate the probability of an equalization at time $2m$, we need only count the number of paths of length $2m$ which begin and end at the origin. The number of such paths is clearly
${2m \choose m}\ .$
Since each path has probability $2^{-2m}$, we have the following theorem.
Theorem $1$
The probability of a return to the origin at time $2m$ is given by
$u_{2m} = {2m \choose m}2^{-2m}\ .$
The probability of a return to the origin at an odd time is 0.
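A quick Monte Carlo check of this formula (a sketch of ours, in plain Python):

```python
import random
from math import comb

m, trials = 10, 200_000
exact = comb(2 * m, m) / 4 ** m          # u_{2m} from Theorem 1
rng = random.Random(1)
hits = sum(sum(rng.choice((-1, 1)) for _ in range(2 * m)) == 0
           for _ in range(trials))
print(exact, hits / trials)              # both near 0.176 for m = 10
```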
A random walk is said to have a first return to the origin at time $2m$ if $m > 0$, and $S_{2k} \ne 0$ for all $k < m$. In Figure $1$, the first return occurs at time 2. We define $f_{2m}$ to be the probability of this event. (We also define $f_0 = 0$.) One can think of the expression $f_{2m}2^{2m}$ as the number of paths of length $2m$ between the points $(0, 0)$ and $(2m, 0)$ that do not touch the horizontal axis except at the endpoints. Using this idea, it is easy to prove the following theorem.
Theorem $2$
For $n \ge 1$, the probabilities $\{u_{2k}\}$ and $\{f_{2k}\}$ are related by the equation
$u_{2n} = f_0 u_{2n} + f_2 u_{2n-2} + \cdots + f_{2n}u_0\ .$
Proof. There are $u_{2n}2^{2n}$ paths of length $2n$ which have endpoints $(0, 0)$ and $(2n, 0)$. The collection of such paths can be partitioned into $n$ sets, depending upon the time of the first return to the origin. A path in this collection which has a first return to the origin at time $2k$ consists of an initial segment from $(0, 0)$ to $(2k, 0)$, in which no interior points are on the horizontal axis, and a terminal segment from $(2k, 0)$ to $(2n, 0)$, with no further restrictions on this segment. Thus, the number of paths in the collection which have a first return to the origin at time $2k$ is given by
$f_{2k}2^{2k}u_{2n-2k}2^{2n-2k} = f_{2k}u_{2n-2k}2^{2n}\ .$
If we sum over $k$, we obtain the equation
$u_{2n}2^{2n} = f_0u_{2n} 2^{2n} + f_2u_{2n-2}2^{2n} + \cdots + f_{2n}u_0 2^{2n}\ .$
Dividing both sides of this equation by $2^{2n}$ completes the proof.
The expression in the right-hand side of the above theorem should remind the reader of a sum that appeared in Definition 7.1.1 of the convolution of two distributions. The convolution of two sequences is defined in a similar manner. The above theorem says that the sequence $\{u_{2n}\}$ is the convolution of itself and the sequence $\{f_{2n}\}$. Thus, if we represent each of these sequences by an ordinary generating function, then we can use the above relationship to determine the value $f_{2n}$.
Theorem $3$
For $m \ge 1$, the probability of a first return to the origin at time $2m$ is given by
$f_{2m} = \frac{u_{2m}}{2m-1} = \frac{{2m \choose m}}{(2m-1)\, 2^{2m}}\ .$
Proof. We begin by defining the generating functions
$U(x) = \sum_{m = 0}^\infty u_{2m}x^m$ and $F(x) = \sum_{m = 0}^\infty f_{2m}x^m\ .$
Theorem $2$ says that
$U(x) = 1 + U(x)F(x)\ . \label{eq 12.1.1}$
(The presence of the 1 on the right-hand side is due to the fact that $u_0$ is defined to be 1, but Theorem $1$ only holds for $m \ge 1$.) We note that both generating functions certainly converge on the interval $(-1, 1)$, since all of the coefficients are at most 1 in absolute value. Thus, we can solve the above equation for $F(x)$, obtaining
$F(x) = \frac{U(x) - 1}{U(x)}\ .$
Now, if we can find a closed-form expression for the function $U(x)$, we will also have a closed-form expression for $F(x)$. From Theorem $1$, we have
$U(x) = \sum_{m = 0}^{\infty} {2m \choose m}\, 2^{-2m} x^m\ .$
In Wilf,1 we find that
${1\over{\sqrt {1 - 4x}}} = \sum_{m = 0}^\infty {2m \choose m} x^m\ .$
The reader is asked to prove this statement in Exercise $1$. If we replace $x$ by $x/4$ in the last equation, we see that
$U(x) = {1\over{\sqrt {1-x}}}\ .$
Therefore, we have
\begin{aligned} F(x) &= \frac{U(x)-1}{U(x)} \\ &= \frac{(1-x)^{-1/2} - 1}{(1-x)^{-1/2}} \\ &= 1 - (1-x)^{1/2}\ . \end{aligned}
Although it is possible to compute the value of $f_{2m}$ using the Binomial Theorem, it is easier to note that $F'(x) = U(x)/2$, so that the coefficients $f_{2m}$ can be found by integrating the series for $U(x)$. We obtain, for $m \ge 1$,
\begin{aligned} f_{2m} &= \frac{u_{2m-2}}{2m} \\ &= \frac{{2m-2 \choose m-1}}{m\, 2^{2m-1}} \\ &= \frac{{2m \choose m}}{(2m-1)\, 2^{2m}} \\ &= \frac{u_{2m}}{2m-1}\ , \end{aligned}
since
${2m-2 \choose m-1} = {m\over{2(2m-1)}}{2m\choose m}\ .$
This completes the proof of the theorem.
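Both the closed form just derived and the convolution identity of Theorem 2 are easy to confirm numerically (our sketch, in plain Python):

```python
from math import comb

def u(m):
    return comb(2 * m, m) / 4 ** m         # u_{2m}, Theorem 1

def f(m):
    return u(m) / (2 * m - 1) if m > 0 else 0.0   # f_{2m}, with f_0 = 0

for n in range(1, 10):
    convolution = sum(f(k) * u(n - k) for k in range(n + 1))
    assert abs(convolution - u(n)) < 1e-12
print("u_{2n} = sum_k f_{2k} u_{2n-2k} holds for n = 1, ..., 9")
```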
Probability of Eventual Return
In the symmetric random walk process in ${\mathbf R}^m$, what is the probability that the particle eventually returns to the origin? We first examine this question in the case that $m = 1$, and then we consider the general case. The results in the next two examples are due to Pólya.2
Example $1$
(Eventual Return in ${\mathbf R}^1$) One has to approach the idea of eventual return with some care, since the sample space seems to be the set of all walks of infinite length, and this set is non-denumerable. To avoid difficulties, we will define $w_n$ to be the probability that a first return has occurred no later than time $n$. Thus, $w_n$ concerns the sample space of all walks of length $n$, which is a finite set. In terms of the $w_n$’s, it is reasonable to define the probability that the particle eventually returns to the origin to be
$w_* = \lim_{n \rightarrow \infty} w_n\ .$
This limit clearly exists and is at most one, since the sequence $\{w_n\}_{n = 1}^\infty$ is an increasing sequence, and all of its terms are at most one.
In terms of the $f_n$ probabilities, we see that
$w_{2n} = \sum_{i = 1}^n f_{2i}\ .$
Thus,
$w_* = \sum_{i = 1}^\infty f_{2i}\ .$
In the proof of Theorem $3$, the generating function
$F(x) = \sum_{m = 0}^\infty f_{2m}x^m$
was introduced. There it was noted that this series converges for $x \in (-1, 1)$. In fact, it is possible to show that this series also converges for $x = \pm 1$ by using Exercise $4$, together with the fact that
$f_{2m} = \dfrac{u_{2m}}{2m-1} .$
(This fact was proved in the proof of Theorem $3$.) Since we also know that
$F(x) = 1 - (1-x)^{1/2}\ ,$ we see that $w_* = F(1) = 1\ .$
Thus, with probability one, the particle returns to the origin.
An alternative proof of the fact that $w_* = 1$ can be obtained by using the results in Exercise $2$.
Example $2$
(Eventual Return in ${\mathbf R}^m$) We now turn our attention to the case that the random walk takes place in more than one dimension. We define $f^{(m)}_{2n}$ to be the probability that the first return to the origin in ${\mathbf R}^m$ occurs at time $2n$. The quantity $u^{(m)}_{2n}$ is defined in a similar manner. Thus, $f^{(1)}_{2n}$ and $u^{(1)}_{2n}$ equal $f_{2n}$ and $u_{2n}$, which were defined earlier. If, in addition, we define $u^{(m)}_0 = 1$ and $f^{(m)}_0 = 0$, then one can mimic the proof of Theorem $2$, and show that for all $m \ge 1$,
$u^{(m)}_{2n} = f^{(m)}_0 u^{(m)}_{2n} + f^{(m)}_2 u^{(m)}_{2n-2} + \cdots + f^{(m)}_{2n}u^{(m)}_0\ . \label{eq 12.1.1.5}$
We continue to generalize previous work by defining
$U^{(m)}(x) = \sum_{n = 0}^\infty u^{(m)}_{2n} x^n$
and
$F^{(m)}(x) = \sum_{n = 0}^\infty f^{(m)}_{2n} x^n\ .$
Then, by using Equation $2$, we see that
$U^{(m)}(x) = 1 + U^{(m)}(x) F^{(m)}(x)\ ,$
as before. These functions will always converge in the interval $(-1, 1)$, since all of their coefficients are at most one in magnitude. In fact, since
$w^{(m)}_* = \sum_{n = 0}^\infty f^{(m)}_{2n} \le 1$
for all $m$, the series for $F^{(m)}(x)$ converges at $x = 1$ as well, and $F^{(m)}(x)$ is left-continuous at $x = 1$, i.e.,
$\lim_{x \uparrow 1} F^{(m)}(x) = F^{(m)}(1)\ .$
Thus, we have
$w^{(m)}_* = \lim_{x \uparrow 1} F^{(m)}(x) = \lim_{x \uparrow 1} \frac{U^{(m)}(x) - 1}{U^{(m)}(x)}\ , \label{eq 12.1.1.6}$
so to determine $w^{(m)}_*$, it suffices to determine
$\lim_{x \uparrow 1} U^{(m)}(x)\ .$ We let $u^{(m)}$ denote this limit.
We claim that
$u^{(m)} = \sum_{n = 0}^\infty u^{(m)}_{2n}\ .$
(This claim is reasonable; it says that to find out what happens to the function $U^{(m)}(x)$ at $x = 1$, just let $x = 1$ in the power series for $U^{(m)}(x)$.) To prove the claim, we note that the coefficients $u^{(m)}_{2n}$ are non-negative, so $U^{(m)}(x)$ increases monotonically on the interval $[0, 1)$. Thus, for each $K$, we have
$\sum_{n = 0}^K u^{(m)}_{2n} \le \lim_{x \uparrow 1} U^{(m)}(x) = u^{(m)} \le \sum_{n = 0}^\infty u^{(m)}_{2n}\ .$
By letting $K \rightarrow \infty$, we see that $u^{(m)} = \sum_{n = 0}^\infty u^{(m)}_{2n}\ .$
This establishes the claim.
From Equation $3$, we see that if $u^{(m)} < \infty$, then the probability of an eventual return is
$\frac {u^{(m)} - 1}{u^{(m)}}\ ,$
while if $u^{(m)} = \infty$, then the probability of eventual return is 1.
To complete the example, we must estimate the sum
$\sum_{n = 0}^\infty u^{(m)}_{2n}\ .$
In Exercise $12$, the reader is asked to show that
$u^{(2)}_{2n} = \frac 1 {4^{2n}} {{2n}\choose n}^2\ .$
Using Stirling’s Formula, it is easy to show that (see Exercise $13$)
${2n \choose n} \sim \frac{2^{2n}}{\sqrt{\pi n}}\ ,$
so
$u_{2 n}^{(2)} \sim \frac{1}{\pi n}$
From this it follows easily that
$\sum_{n = 0}^\infty u^{(2)}_{2n}$
diverges, so $w^{(2)}_* = 1$, i.e., in ${\mathbf R}^2$, the probability of an eventual return is 1.
When $m = 3$, Exercise $11$ shows that
$u^{(3)}_{2n} = \frac 1{2^{2n}}{{2n}\choose n} \sum_{j,k} \biggl(\frac 1{3^n}\frac{n!}{j!k!(n-j-k)!}\biggr)^2\ .$
Let $M$ denote the largest value of
$\frac 1{3^n}\frac {n!}{j!k!(n - j - k)!}\ ,$
over all non-negative values of $j$ and $k$ with $j + k \le n$. It is easy, using Stirling’s Formula, to show that
$M \sim \frac cn\ ,$
for some constant $c$. Thus, we have
$u^{(3)}_{2n} \le \frac 1{2^{2n}}{{2n}\choose n} \sum_{j,k} \biggl(\frac M{3^n}\frac{n!}{j!k!(n-j-k)!}\biggr)\ .$
Using Exercise $14$, one can show that the right-hand expression is at most
$\frac {c'}{n^{3/2}}\ ,$
where $c'$ is a constant. Thus,
$\sum_{n = 0}^\infty u^{(3)}_{2n}$
converges, so $w^{(3)}_*$ is strictly less than one. This means that in ${\mathbf R}^3$, the probability of an eventual return to the origin is strictly less than one (in fact, it is approximately .34).
One may summarize these results by stating that one should not get drunk in more than two dimensions.
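The value quoted above can be approximated directly from the formula of Exercise 11 (our sketch, in plain Python; the series converges slowly, so a truncated sum understates $u^{(3)}$ somewhat):

```python
from math import comb, factorial

def u3(n):
    """u^{(3)}_{2n} via the trinomial formula of Exercise 11."""
    tri = sum(((factorial(n) // (factorial(j) * factorial(k) * factorial(n - j - k)))
               / 3 ** n) ** 2
              for j in range(n + 1) for k in range(n - j + 1))
    return comb(2 * n, n) / 4 ** n * tri

u = sum(u3(n) for n in range(60))
print((u - 1) / u)   # about .31 here; the full sum gives approximately .34
```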
Expected Number of Equalizations
We now give another example of the use of generating functions to find a general formula for terms in a sequence, where the sequence is related by recursion relations to other sequences. Exercise $9$ gives still another example.
Example $3$
(Expected Number of Equalizations) In this example, we will derive a formula for the expected number of equalizations in a random walk of length $2m$. As in the proof of Theorem $3$, the method has four main parts. First, a recursion is found which relates the $m$th term in the unknown sequence to earlier terms in the same sequence and to terms in other (known) sequences. An example of such a recursion is given in Theorem $2$. Second, the recursion is used to derive a functional equation involving the generating functions of the unknown sequence and one or more known sequences. Equation $1$ is an example of such a functional equation. Third, the functional equation is solved for the unknown generating function. Last, using a device such as the Binomial Theorem, integration, or differentiation, a formula for the $m$th coefficient of the unknown generating function is found.
We begin by defining $g_{2m}$ to be the number of equalizations among all of the random walks of length $2m$. (For each random walk, we disregard the equalization at time 0.) We define $g_0 = 0$. Since the number of walks of length $2m$ equals $2^{2m}$, the expected number of equalizations among all such random walks is $g_{2m}/2^{2m}$. Next, we define the generating function $G(x)$:
$G(x) = \sum_{k = 0}^\infty g_{2k}x^k\ .$
Now we need to find a recursion which relates the sequence $\{g_{2k}\}$ to one or both of the known sequences $\{f_{2k}\}$ and $\{u_{2k}\}$. We consider $m$ to be a fixed positive integer, and consider the set of all paths of length $2m$ as the disjoint union
$E_2 \cup E_4 \cup \cdots \cup E_{2m} \cup H\ ,$
where $E_{2k}$ is the set of all paths of length $2m$ with first equalization at time $2k$, and $H$ is the set of all paths of length $2m$ with no equalization. It is easy to show (see Exercise $3$) that
$|E_{2k}| = f_{2k} 2^{2m}\ .$
We claim that the number of equalizations among all paths belonging to the set $E_{2k}$ is equal to
$|E_{2k}| + 2^{2k} f_{2k} g_{2m - 2k}\ . \label{eq 12.1.2}$
Each path in $E_{2k}$ has one equalization at time $2k$, so the total number of such equalizations is just $|E_{2k}|$. This is the first summand in expression Equation $4$. There are $2^{2k} f_{2k}$ different initial segments of length $2k$ among the paths in $E_{2k}$. Each of these initial segments can be augmented to a path of length $2m$ in $2^{2m-2k}$ ways, by adjoining all possible paths of length $2m - 2k$. The number of equalizations obtained by adjoining all of these paths to any one initial segment is $g_{2m - 2k}$, by definition. This gives the second summand in Equation $4$. Since $k$ can range from 1 to $m$, we obtain the recursion
$g_{2m} = \sum_{k = 1}^m \Bigl(|E_{2k}| + 2^{2k}f_{2k}g_{2m - 2k}\Bigr)\ . \label{eq 12.1.3}$
The second summand in the typical term above should remind the reader of a convolution. In fact, if we multiply the generating function $G(x)$ by the generating function
$F(4x) = \sum_{k = 0}^\infty 2^{2k}f_{2k} x^k\ ,$
the coefficient of $x^m$ equals
$\sum_{k = 0}^m 2^{2k}f_{2k}g_{2m-2k}\ .$
Thus, the product $G(x)F(4x)$ is part of the functional equation that we are seeking. The first summand in the typical term in Equation $5$ gives rise to the sum
$2^{2m}\sum_{k = 1}^m f_{2k}\ .$
From Exercise $2$, we see that this sum is just $(1 - u_{2m})2^{2m}$. Thus, we need to create a generating function whose $m$th coefficient is this term; this generating function is
$\sum_{m = 0}^\infty (1- u_{2m})2^{2m} x^m\ ,$
or
$\sum_{m = 0}^\infty 2^{2m} x^m - \sum_{m = 0}^\infty u_{2m}2^{2m} x^m\ .$
The first sum is just $(1-4x)^{-1}$, and the second sum is $U(4x)$. So, the functional equation which we have been seeking is
$G(x) = F(4x)G(x) + {1\over{1-4x}} - U(4x)\ .$
If we solve this recursion for $G(x)$, and simplify, we obtain
$G(x) = {1\over{(1-4x)^{3/2}}} - {1\over{(1-4x)}}\ . \label{eq 12.1.4}$
We now need to find a formula for the coefficient of $x^m$. The first summand in Equation $6$ is $(1/2)U'(4x)$, so the coefficient of $x^m$ in this function is
$u_{2m+2} 2^{2m+1}(m+1)\ .$
The second summand in Equation $6$ is the sum of a geometric series with common ratio $4x$, so the coefficient of $x^m$ is $2^{2m}$. Thus, we obtain
\begin{aligned} g_{2m} &= u_{2m+2}\, 2^{2m+1}(m+1) - 2^{2m} \\ &= \frac{1}{2}{2m+2 \choose m+1}(m+1) - 2^{2m}\ . \end{aligned}
We recall that the quotient $g_{2m}/2^{2m}$ is the expected number of equalizations among all paths of length $2m$. Using Exercise $4$, it is easy to show that
$\frac{g_{2 m}}{2^{2 m}} \sim \sqrt{\frac{2}{\pi}} \sqrt{2 m}$
In particular, this means that the average number of equalizations among all paths of length $4m$ is not twice the average number of equalizations among all paths of length $2m$. In order for the average number of equalizations to double, one must quadruple the lengths of the random walks.
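The exact expectation and its asymptotic form are easy to compare (our sketch, in plain Python); note how slowly the ratio approaches 1:

```python
from math import comb, pi, sqrt

def expected_equalizations(m):
    """g_{2m} / 2^{2m}, from the closed form derived above."""
    return (m + 1) * comb(2 * m + 2, m + 1) / 2 ** (2 * m + 1) - 1

for m in (10, 100, 1000):
    exact = expected_equalizations(m)
    asymptotic = sqrt(2 / pi) * sqrt(2 * m)
    print(m, exact, asymptotic)
```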
It is interesting to note that if we define
$M_n = \max_{0 \le k \le n} S_k\ ,$
then we have
$E(M_n) \sim \sqrt{2\over \pi}\sqrt n\ .$
This means that the expected number of equalizations and the expected maximum value for random walks of length $n$ are asymptotically equal as $n \rightarrow \infty$. (In fact, it can be shown that the two expected values differ by at most $1/2$ for all positive integers $n$. See Exercise $9$.)
Exercise $1$
Using the Binomial Theorem, show that
${1\over{\sqrt {1 - 4x}}} = \sum_{m = 0}^\infty {2m \choose m} x^m\ .$
What is the interval of convergence of this power series?
Exercise $2$
1. Show that for $m \ge 1$, $f_{2m} = u_{2m-2} - u_{2m}\ .$
2. Using part (a), find a closed-form expression for the sum $f_2 + f_4 + \cdots + f_{2m}\ .$
3. Using part (b), show that $\sum_{m = 1}^\infty f_{2m} = 1\ .$ (One can also obtain this statement from the fact that $F(x) = 1 - (1-x)^{1/2}\ .)$
4. Using parts (a) and (b), show that the probability of no equalization in the first $2m$ outcomes equals the probability of an equalization at time $2m$.
Exercise $3$
Using the notation of Example $3$, show that
$|E_{2k}| = f_{2k} 2^{2m}\ .$
Exercise $4$
Using Stirling’s Formula, show that
$u_{2m} \sim {1\over{\sqrt {\pi m}}}\ .$
Exercise $5$
A lead change in a random walk occurs at time $2k$ if $S_{2k-1}$ and $S_{2k+1}$ are of opposite sign.
1. Give a rigorous argument which proves that among all walks of length $2m$ that have an equalization at time $2k$, exactly half have a lead change at time $2k$.
2. Deduce that the total number of lead changes among all walks of length $2m$ equals ${1\over 2}(g_{2m} - u_{2m})\ .$
3. Find an asymptotic expression for the average number of lead changes in a random walk of length $2m$.
Exercise $6$
1. Show that the probability that a random walk of length $2m$ has a last return to the origin at time $2k$, where $0 \le k \le m$, equals $u_{2k}u_{2m - 2k}\ .$ (The case $k = 0$ consists of all paths that do not return to the origin at any positive time.) Hint: A path whose last return to the origin occurs at time $2k$ consists of two paths glued together, one path of which is of length $2k$ and which begins and ends at the origin, and the other path of which is of length $2m - 2k$ and which begins at the origin but never returns to the origin. Both types of paths can be counted using quantities which appear in this section.
2. Using part (a), show that if $m$ is odd, the probability that a walk of length $2m$ has no equalization in the last $m$ outcomes is equal to $1/2$, regardless of the value of $m$. Hint: The answer to part (a) is symmetric in $k$ and $m-k$.
Exercise $7$
Show that the probability of no equalization in a walk of length $2m$ equals $u_{2m}$.
Exercise $8$
Show that $P(S_1 \ge 0,\ S_2 \ge 0,\ \ldots,\ S_{2m} \ge 0) = u_{2m}\ .$ Hint: First explain why $P(S_1 > 0,\ S_2 > 0,\ \ldots,\ S_{2m} > 0) = {1\over 2}P(S_1 \ne 0,\ S_2 \ne 0,\ \ldots,\ S_{2m} \ne 0)\ .$ Then use Exercise $7$, together with the observation that if no equalization occurs in the first $2m$ outcomes, then the path goes through the point $(1,1)$ and remains on or above the horizontal line $x = 1$.
Exercise $9$
In Feller,3 one finds the following theorem: Let $M_n$ be the random variable which gives the maximum value of $S_k$, for $1 \le k \le n$. Define
$p_{n, r} = {n \choose (n+r)/2}\, 2^{-n}\ .$ If $r \ge 0$,
then
$P(M_n = r) = \left \{ \begin{array}{ll} p_{n, r}\,, & \mbox{if } r \equiv n \pmod{2}, \\ p_{n, r+1}\,, & \mbox{if } r \not\equiv n \pmod{2}. \end{array} \right.$
1. Using this theorem, show that $E(M_{2m}) = {1\over{2^{2m}}}\sum_{k = 1}^m (4k-1){2m \choose m+k}\ ,$ and if $n = 2m+1$, then $E(M_{2m+1}) = {1\over {2^{2m+1}}} \sum_{k = 0}^m (4k+1){2m+1\choose m+k+1}\ .$
2. For $m \ge 1$, define $r_m = \sum_{k = 1}^m k {2m\choose m+k}$ and $s_m = \sum_{k = 1}^m k {2m+1\choose m+k+1}\ .$ By using the identity ${n\choose k} = {n-1\choose k-1} + {n-1\choose k}\ ,$ show that $s_m = 2r_m - {1\over 2}\biggl(2^{2m} - {2m \choose m}\biggr)$ and $r_m = 2s_{m-1} + {1\over 2}2^{2m-1}\ ,$ if $m \ge 2$.
3. Define the generating functions $R(x) = \sum_{k = 1}^\infty r_k x^k$ and $S(x) = \sum_{k = 1}^\infty s_k x^k\ .$ Show that $S(x) = 2 R(x) - {1\over 2}\biggl({1\over{1- 4x}}\biggr) + {1\over 2}\biggl(\sqrt{1-4x}\biggr)$ and $R(x) = 2xS(x) + x\biggl({1\over{1-4x}}\biggr)\ .$
4. Show that $R(x) = {x\over{(1-4x)^{3/2}}}\ ,$ and $S(x) = {1\over 2}\biggl({1\over{(1- 4x)^{3/2}}}\biggr) - {1\over 2}\biggl({1\over{1- 4x}}\biggr)\ .$
5. Show that $r_m = m{2m-1\choose m-1}\ ,$ and $s_m = {1\over 2}(m+1){2m+1\choose m} - {1\over 2}(2^{2m})\ .$
6. Show that $E(M_{2m}) = {m\over{2^{2m-1}}}{2m\choose m} + {1\over{2^{2m+1}}}{2m\choose m} - {1\over 2}\ ,$ and $E(M_{2m+1}) = \frac{m+1}{2^{2m+1}}{2m+2\choose m+1} - {1\over 2}\ .$ The reader should compare these formulas with the expression for $g_{2m}/2^{2m}$ in Example $3$.
Exercise $10$
(from K. Levasseur4) A parent and his child play the following game. A deck of $2n$ cards, $n$ red and $n$ black, is shuffled. The cards are turned up one at a time. Before each card is turned up, the parent and the child guess whether it will be red or black. Whoever makes more correct guesses wins the game. The child is assumed to guess each color with the same probability, so she will have a score of $n$, on average. The parent keeps track of how many cards of each color have already been turned up. If more black cards, say, than red cards remain in the deck, then the parent will guess black, while if an equal number of each color remain, then the parent guesses each color with probability 1/2. What is the expected number of correct guesses that will be made by the parent? Hint: Each of the ${{2n}\choose n}$ possible orderings of red and black cards corresponds to a random walk of length $2n$ that returns to the origin at time $2n$. Show that between each pair of successive equalizations, the parent will be right exactly once more than he will be wrong. Explain why this means that the average number of correct guesses by the parent is greater than $n$ by exactly one-half the average number of equalizations. Now define the random variable $X_i$ to be 1 if there is an equalization at time $2i$, and 0 otherwise. Then, among all relevant paths, we have
$E(X_i) = P(X_i = 1) = \frac{{2i \choose i}{2n - 2i \choose n - i}}{{2n \choose n}}\ .$
Thus, the expected number of equalizations equals
$E\biggl(\sum_{i = 1}^n X_i\biggr) = \frac{1}{{2n \choose n}} \sum_{i = 1}^n {2i \choose i}{2n - 2i \choose n - i}\ .$
One can now use generating functions to find the value of the sum.
It should be noted that in a game such as this, a more interesting question than the one asked above is what is the probability that the parent wins the game? For this game, this question was answered by D. Zagier.5 He showed that the probability of winning is asymptotic (for large $n$) to the quantity $\frac 12 + \frac 1{2\sqrt 2}\ .$
Exercise $11$
Prove that
$u^{(2)}_{2n} = \frac 1{4^{2n}} \sum_{k = 0}^n \frac {(2n)!}{k!k!(n-k)!(n-k)!}\ ,$ and $u^{(3)}_{2n} = \frac 1{6^{2n}} \sum_{j,k} \frac {(2n)!}{j!j!k!k!(n-j-k)!(n-j-k)!}\ ,$
where the last sum extends over all non-negative $j$ and $k$ with $j+k \le n$. Also show that this last expression may be rewritten as
$\frac 1{2^{2n}}{{2n}\choose n} \sum_{j,k} \biggl(\frac 1{3^n}\frac{n!}{j!k!(n-j-k)!}\biggr)^2\ .$
Exercise $12$
Prove that if $n \ge 0$, then
$\sum_{k = 0}^n {n \choose k}^2 = {{2n} \choose n}\ .$
Write the sum as
$\sum_{k = 0}^n {n \choose k}{n \choose {n-k}}$
and explain why this is a coefficient in the product
$(1 + x)^n (1 + x)^n\ .$
Use this, together with Exercise $11$, to show that
$u^{(2)}_{2n} = \frac 1{4^{2n}} {{2n}\choose n}^2\ .$
Exercise $13$
Using Stirling’s Formula, prove that
${2n \choose n} \sim \frac{2^{2n}}{\sqrt{\pi n}}\ .$
Exercise $14$
Prove that
$\sum_{j,k} \biggl(\frac 1{3^n}\frac{n!}{j!k!(n-j-k)!}\biggr) = 1\ ,$
where the sum extends over all non-negative $j$ and $k$ such that $j + k \le n$. Hint: Count how many ways one can place $n$ labelled balls in 3 labelled urns.
Exercise $15$
Using the result proved for the random walk in ${\mathbf R}^3$ in Example $2$, explain why the probability of an eventual return in ${\mathbf R}^n$ is strictly less than one, for all $n \ge 3$. Hint: Consider a random walk in ${\mathbf R}^n$ and disregard all but the first three coordinates of the particle’s position.
In the last section, the simplest kind of symmetric random walk in ${\mathbf R}^1$ was studied. In this section, we remove the assumption that the random walk is symmetric. Instead, we assume that $p$ and $q$ are non-negative real numbers with $p+q = 1$, and that the common distribution function of the jumps of the random walk is
$f_X(x) = \left \{ \begin{array}{ll} p, & \mbox{if } x = 1, \\ q, & \mbox{if } x = -1. \end{array} \right.$
One can imagine the random walk as representing a sequence of tosses of a weighted coin, with a head appearing with probability $p$ and a tail appearing with probability $q$. An alternative formulation of this situation is that of a gambler playing a sequence of games against an adversary (sometimes thought of as another person, sometimes called “the house") where, in each game, the gambler has probability $p$ of winning.
The Gambler’s Ruin Problem
The above formulation of this type of random walk leads to a problem known as the Gambler’s Ruin problem. This problem was introduced in Exercise 12.1.23, but we will give the description of the problem again. A gambler starts with a “stake" of size $s$. She plays until her capital reaches the value $M$ or the value 0. In the language of Markov chains, these two values correspond to absorbing states. We are interested in studying the probability of occurrence of each of these two outcomes.
One can also assume that the gambler is playing against an “infinitely rich" adversary. In this case, we would say that there is only one absorbing state, namely when the gambler’s stake is 0. Under this assumption, one can ask for the probability that the gambler is eventually ruined.
We begin by defining $q_k$ to be the probability that the gambler’s stake reaches 0, i.e., she is ruined, before it reaches $M$, given that the initial stake is $k$. We note that $q_0 = 1$ and $q_M = 0$. The fundamental relationship among the $q_k$’s is the following:
$q_k = pq_{k+1} + qq_{k-1}\ ,$
where $1 \le k \le M-1$. This holds because if her stake equals $k$, and she plays one game, then her stake becomes $k+1$ with probability $p$ and $k-1$ with probability $q$. In the first case, the probability of eventual ruin is $q_{k+1}$ and in the second case, it is $q_{k-1}$. We note that since $p + q = 1$, we can write the above equation as
$p(q_{k+1} - q_k) = q(q_k - q_{k-1})\ ,$
or
$q_{k+1} - q_k = {q\over p}(q_k - q_{k-1})\ .$
From this equation, it is easy to see that
$q_{k+1} - q_k = \biggl({q\over p}\biggr)^k(q_1 - q_0)\ . \label{eq 12.2.2}$
We now use telescoping sums to obtain an equation in which the only unknown is $q_1$:
\begin{aligned} -1 &= q_M - q_0 \\ &= \sum_{k = 0}^{M-1} (q_{k+1} - q_k)\ , \end{aligned}
so
\begin{aligned} -1 &= \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k(q_1 - q_0) \\ &= (q_1 - q_0) \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k\ . \end{aligned}
If $p \ne q$, then the above expression equals
$(q_1 - q_0)\, \frac{(q/p)^M - 1}{(q/p) - 1}\ ,$
while if $p = q = 1/2$, then we obtain the equation
$-1 = (q_1 - q_0) M\ .$
For the moment we shall assume that $p \ne q$. Then we have
$q_1 - q_0 = -\frac{(q/p) - 1}{(q/p)^M - 1}\ .$
Now, for any $z$ with $1 \le z \le M$, we have
\begin{aligned} q_z - q_0 &= \sum_{k = 0}^{z-1} (q_{k+1} - q_k) \\ &= (q_1 - q_0)\sum_{k = 0}^{z-1} \biggl({q\over p}\biggr)^k \\ &= (q_1 - q_0)\, \frac{(q/p)^z - 1}{(q/p) - 1} \\ &= -\frac{(q/p)^z - 1}{(q/p)^M - 1}\ . \end{aligned}
Therefore,
\begin{aligned} q_z &= 1 - \frac{(q/p)^z - 1}{(q/p)^M - 1} \\ &= \frac{(q/p)^M - (q/p)^z}{(q/p)^M - 1}\ . \label{eq 12.2.3} \end{aligned}
Finally, if $p = q = 1/2$, it is easy to show that (see Exercise $11$)
$q_z = \frac{M - z}{M}\ .$
We note that both of these formulas hold if $z = 0$.
We define, for $0 \le z \le M$, the quantity $p_z$ to be the probability that the gambler’s stake reaches $M$ without ever having reached 0. Since the game might continue indefinitely, it is not obvious that $p_z + q_z = 1$ for all $z$. However, one can use the same method as above to show that if $p \ne q$, then
$p_z = \frac{(q/p)^z - 1}{(q/p)^M - 1}\ ,$
and if $p = q = 1/2$, then
$p_z = \frac{z}{M}\ .$
Thus, for all $z$, it is the case that $p_z + q_z = 1$, so the game ends with probability 1.
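A short simulation (our sketch, in plain Python) confirms the ruin probability for a specific case, here $z = 5$, $M = 10$, and $p = .48$:

```python
import random

def q_exact(z, M, p):
    """Exact ruin probability, valid for p != q."""
    r = (1 - p) / p
    return (r ** M - r ** z) / (r ** M - 1)

def is_ruined(z, M, p, rng):
    while 0 < z < M:
        z += 1 if rng.random() < p else -1   # win or lose one dollar
    return z == 0

rng = random.Random(3)
z, M, p, trials = 5, 10, 0.48, 100_000
estimate = sum(is_ruined(z, M, p, rng) for _ in range(trials)) / trials
print(q_exact(z, M, p), estimate)            # both near 0.60
```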
Infinitely Rich Adversaries
We now turn to the problem of finding the probability of eventual ruin if the gambler is playing against an infinitely rich adversary. This probability can be obtained by letting $M$ go to $\infty$ in the expression for $q_z$ calculated above. If $q < p$, then the expression approaches $(q/p)^z$, and if $q > p$, the expression approaches 1. In the case $p = q = 1/2$, we recall that $q_z = 1 - z/M$. Thus, if $M \rightarrow \infty$, we see that the probability of eventual ruin tends to 1.
Historical Remarks
In 1711, De Moivre, in his book De Mensura Sortis, gave an ingenious derivation of the probability of ruin. The following description of his argument is taken from David.6 The notation used is as follows: We imagine that there are two players, A and B, and the probabilities that they win a game are $p$ and $q$, respectively. The players start with $a$ and $b$ counters, respectively.
Imagine that each player starts with his counters before him in a pile, and that nominal values are assigned to the counters in the following manner. A’s bottom counter is given the nominal value $q/p$; the next is given the nominal value $(q/p)^2$, and so on until his top counter which has the nominal value $(q/p)^a$. B’s top counter is valued $(q/p)^{a+1}$, and so on downwards until his bottom counter which is valued $(q/p)^{a+b}$. After each game the loser’s top counter is transferred to the top of the winner’s pile, and it is always the top counter which is staked for the next game. Then B’s stake is always $q/p$ times A’s, so that at every game each player’s nominal expectation is nil. This remains true throughout the play; therefore A’s chance of winning all B’s counters, multiplied by his nominal gain if he does so, must equal B’s chance multiplied by B’s nominal gain. Thus,
$P_a\biggl(\Bigl({q\over p}\Bigr)^{a+1} + \cdots + \Bigl({q\over p}\Bigr)^{a+b}\biggr) = P_b\biggl(\Bigl({q\over p}\Bigr) + \cdots + \Bigl({q\over p}\Bigr)^a\biggr)\ . \label{eq 12.2.1}$
Using this equation, together with the fact that
$P_a + P_b = 1\ ,$
it can easily be shown that
$P_a = \frac{(q/p)^a - 1}{(q/p)^{a+b} - 1}\ ,$
if $p \ne q$, and
$P_a = {a\over{a+b}}\ ,$
if $p = q = 1/2$.
In terms of modern probability theory, de Moivre is changing the values of the counters to make an unfair game into a fair game, which is called a martingale. With the new values, the expected fortune of player A (that is, the sum of the nominal values of his counters) after each play equals his fortune before the play (and similarly for player B). (For a simpler martingale argument, see Exercise $10$.) De Moivre then uses the fact that when the game ends, it is still fair, thus Equation $1$ must be true. This fact requires proof, and is one of the central theorems in the area of martingale theory.
Exercise $2$
In the gambler’s ruin problem, assume that the gambler initial stake is 1 dollar, and assume that her probability of success on any one game is $p$. Let $T$ be the number of games until 0 is reached (the gambler is ruined). Show that the generating function for $T$ is
$h(z) = \frac{1 - \sqrt{1 - 4pqz^2}}{2pz}\ ,$
and that
$h(1) = \left \{ \begin{array}{ll} q/p, & \mbox{if } q \leq p, \\ 1, & \mbox{if } q \geq p, \end{array} \right.$
and
$h'(1) = \left \{ \begin{array}{ll} 1/(q - p), & \mbox{if } q > p, \\ \infty, & \mbox{if } q = p. \end{array} \right.$
Interpret your results in terms of the time $T$ to reach 0. (See also Example 10.1.7.)
Exercise $3$
Show that the Taylor series expansion for $\sqrt{1 - x}$ is
$\sqrt{1 - x} = \sum_{n = 0}^\infty {{1/2} \choose n} x^n\ ,$
where the binomial coefficient ${{1/2} \choose n}$ is
${{1/2} \choose n} = \frac{(1/2)(1/2 - 1) \cdots (1/2 - n + 1)}{n!}\ .$
Using this and the result of Exercise $2$, show that the probability that the gambler is ruined on the $n$th step is
$p_T(n) = \left \{ \begin{array}{ll} \frac{(-1)^{k - 1}}{2p} {{1/2} \choose k} (4pq)^k, & \mbox{if } n = 2k - 1, \\ 0, & \mbox{if } n = 2k. \end{array} \right.$
Exercise $4$
For the gambler’s ruin problem, assume that the gambler starts with $k$ dollars. Let $T_k$ be the time to reach 0 for the first time.
1. Show that the generating function $h_k(t)$ for $T_k$ is the $k$th power of the generating function for the time $T$ to ruin starting at 1. Hint: Let $T_k = U_1 + U_2 +\cdots+ U_k$, where $U_j$ is the time for the walk starting at $j$ to reach $j - 1$ for the first time.
2. Find $h_k(1)$ and $h_k'(1)$ and interpret your results.
The next three problems come from Feller.7
Exercise $5$
As in the text, assume that $M$ is a fixed positive integer.
1. Show that if a gambler starts with a stake of 0 (and is allowed to have a negative amount of money), then the probability that her stake reaches the value of $M$ before it returns to 0 equals $p(1 - q_1)$.
2. Show that if the gambler starts with a stake of $M$ then the probability that her stake reaches 0 before it returns to $M$ equals $qq_{M-1}$.
Exercise $6$
Suppose that a gambler starts with a stake of 0 dollars.
1. Show that the probability that her stake never reaches $M$ before returning to 0 equals $1 - p(1 - q_1)$.
2. Show that the probability that her stake reaches the value $M$ exactly $k$ times before returning to 0 equals $p(1-q_1)(1 - qq_{M-1})^{k-1}(qq_{M-1})$. Hint: Use Exercise $5$.
Exercise $7$
In the text, it was shown that if $q < p$, there is a positive probability that a gambler, starting with a stake of 0 dollars, will never return to the origin. Thus, we will now assume that $q \ge p$. Using Exercise $6$, show that if a gambler starts with a stake of 0 dollars, then the expected number of times her stake equals $M$ before returning to 0 equals $(p/q)^M$, if $q > p$ and 1, if $q = p$. (We quote from Feller: “The truly amazing implications of this result appear best in the language of fair games. A perfect coin is tossed until the first equalization of the accumulated numbers of heads and tails. The gambler receives one penny for every time that the accumulated number of heads exceeds the accumulated number of tails by $m$.”)
Exercise $8$
In the game in Exercise $7$, let $p = q = 1/2$ and $M = 10$. What is the probability that the gambler’s stake equals $M$ at least 20 times before it returns to 0?
Exercise $9$
Write a computer program which simulates the game in Exercise $7$ for the case $p = q = 1/2$, and $M = 10$.
Exercise $10$
In de Moivre’s description of the game, we can modify the definition of player A’s fortune in such a way that the game is still a martingale (and the calculations are simpler). We do this by assigning nominal values to the counters in the same way as de Moivre, but each player’s current fortune is defined to be just the value of the counter which is being wagered on the next game. So, if player A has $a$ counters, then his current fortune is $(q/p)^a$ (we stipulate this to be true even if $a = 0$). Show that under this definition, player A’s expected fortune after one play equals his fortune before the play, if $p \ne q$. Then, as de Moivre does, write an equation which expresses the fact that player A’s expected final fortune equals his initial fortune. Use this equation to find the probability of ruin of player A.
Exercise $11$
Assume in the gambler’s ruin problem that $p = q = 1/2$.
1. Using Equation 12.2.2, together with the facts that $q_0 = 1$ and $q_M = 0$, show that for $0 \le z \le M$, $q_z = {{M - z}\over M}\ .$
2. In Equation 12.2.3, let $p \rightarrow 1/2$ (and since $q = 1 - p$, $q \rightarrow 1/2$ as well). Show that in the limit,
$q_z = {{M - z}\over M}\ .$ Hint: Replace $q$ by $1-p$, and use L'Hôpital's rule.
Exercise $12$
In American casinos, the roulette wheels have the integers between 1 and 36, together with 0 and 00. Half of the non-zero numbers are red, the other half are black, and 0 and 00 are green. A common bet in this game is to bet a dollar on red. If a red number comes up, the bettor gets her dollar back, and also gets another dollar. If a black or green number comes up, she loses her dollar.
1. Suppose that someone starts with 40 dollars, and continues to bet on red until either her fortune reaches 50 or 0. Find the probability that her fortune reaches 50 dollars.
2. How much money would she have to start with, in order for her to have a 95% chance of winning 10 dollars before going broke?
3. A casino owner was once heard to remark that “If we took 0 and 00 off of the roulette wheel, we would still make lots of money, because people would continue to come in and play until they lost all of their money.” Do you think that such a casino would stay in business?
In Exercise 12.1.6, the distribution of the time of the last equalization in the symmetric random walk was determined. If we let $\alpha_{2k, 2m}$ denote the probability that a random walk of length $2m$ has its last equalization at time $2k$, then we have
$\alpha_{2k, 2m} = u_{2k}u_{2m-2k}\ .$
We shall now show how one can approximate the distribution of the $\alpha$’s with a simple function. We recall that
$u_{2k} \sim {1\over{\sqrt {\pi k}}}\ .$
Therefore, as both $k$ and $m$ go to $\infty$, we have
$\alpha_{2k, 2m} \sim {1\over{\pi \sqrt{k(m-k)}}}\ .$
This last expression can be written as
${1\over{\pi m \sqrt{(k/m)(1 - k/m)}}}\ .$
Thus, if we define
$f(x) = {1\over{\pi \sqrt{x(1-x)}}}\ ,$
for $0 < x < 1$, then we have
$\alpha_{2k, 2m} \approx {1\over m}f\biggl({k\over m}\biggr)\ .$
The reason for the $\approx$ sign is that we no longer require that $k$ get large. This means that we can replace the discrete $\alpha_{2k, 2m}$ distribution by the continuous density $f(x)$ on the interval $[0, 1]$ and obtain a good approximation. In particular, if $x$ is a fixed real number between 0 and 1, then we have
$\sum_{k < xm}\alpha_{2k, 2m} \approx \int_0^x f(t)\,dt\ .$
It turns out that $f(x)$ has a nice antiderivative, so we can write
$\sum_{k < xm}\alpha_{2k, 2m} \approx {2\over \pi}\arcsin \sqrt x\ .$
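Indeed, the antiderivative is easy to check with the substitution $t = \sin^2\theta$, for which $dt = 2\sin\theta\cos\theta\,d\theta$ and $\sqrt{t(1-t)} = \sin\theta\cos\theta$: $\int_0^x \frac{dt}{\pi\sqrt{t(1-t)}} = \int_0^{\arcsin\sqrt{x}} \frac{2}{\pi}\,d\theta = \frac{2}{\pi}\arcsin\sqrt{x}\ .$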
One can see from the graph of the density $f$ that it has a minimum at $x = 1/2$ and is symmetric about that point. As noted in the exercise, this implies that half of the walks of length $2m$ have no equalizations after time $m$, a fact which probably would not be guessed.
It turns out that the arc sine density comes up in the answers to many other questions concerning random walks on the line. Recall that in Section 12.1, a random walk could be viewed as a polygonal line connecting $(0,0)$ with $(m, S_m)$. Under this interpretation, we define $b_{2k, 2m}$ to be the probability that a random walk of length $2m$ has exactly $2k$ of its $2m$ polygonal line segments above the $t$-axis.
The probability $b_{2k, 2m}$ is frequently interpreted in terms of a two-player game. (The reader will recall the game Heads or Tails, in Example 12.1.4.) Player A is said to be in the lead at time $n$ if the random walk is above the $t$-axis at that time, or if the random walk is on the $t$-axis at time $n$ but above the $t$-axis at time $n-1$. (At time 0, neither player is in the lead.) One can ask what is the most probable number of times that player A is in the lead, in a game of length $2m$. Most people will say that the answer to this question is $m$. However, the following theorem says that $m$ is the least likely number of times that player A is in the lead, and the most likely number of times in the lead is 0 or $2m$.
Theorem $1$
If Peter and Paul play a game of Heads or Tails of length $2m$, the probability that Peter will be in the lead exactly $2k$ times is equal to
$\alpha_{2k, 2m}\ .$
Proof. To prove the theorem, we need to show that
$b_{2k, 2m} = \alpha_{2k, 2m}\ . \label{eq 12.3.1}$
Exercise 12.1.7 shows that $b_{2m, 2m} = u_{2m}$ and $b_{0, 2m} = u_{2m}$, so we only need to prove that Equation 12.3.1 holds for $1 \le k \le m-1$. We can obtain a recursion involving the $b$’s and the $f$’s (defined in Section 12.1) by counting the number of paths of length $2m$ that have exactly $2k$ of their segments above the $t$-axis, where $1 \le k \le m-1$. To count this collection of paths, we assume that the first return occurs at time $2j$, where $1 \le j \le m-1$. There are two cases to consider. Either during the first $2j$ outcomes the path is above the $t$-axis or below the $t$-axis. In the first case, it must be true that the path has exactly $(2k - 2j)$ line segments above the $t$-axis, between $t = 2j$ and $t = 2m$. In the second case, it must be true that the path has exactly $2k$ line segments above the $t$-axis, between $t = 2j$ and $t = 2m$.
We now count the number of paths of the various types described above. The number of paths of length $2j$ all of whose line segments lie above the $t$-axis and which return to the origin for the first time at time $2j$ equals $(1/2)2^{2j}f_{2j}$. This also equals the number of paths of length $2j$ all of whose line segments lie below the $t$-axis and which return to the origin for the first time at time $2j$. The number of paths of length $(2m - 2j)$ which have exactly $(2k - 2j)$ line segments above the $t$-axis is $b_{2k-2j, 2m-2j}$. Finally, the number of paths of length $(2m-2j)$ which have exactly $2k$ line segments above the $t$-axis is $b_{2k,2m-2j}$. Therefore, we have
$b_{2k,2m} = {1\over 2} \sum_{j = 1}^k f_{2j}b_{2k-2j, 2m-2j} + {1\over 2}\sum_{j = 1}^{m-k} f_{2j}b_{2k, 2m-2j}\ .$
We now assume that Equation 12.3.1 is true for $m < n$. Then we have
\begin{aligned} b_{2k, 2n} &= {1\over 2} \sum_{j = 1}^k f_{2j}\alpha_{2k-2j, 2n-2j} + {1\over 2}\sum_{j = 1}^{n-k} f_{2j}\alpha_{2k, 2n - 2j}\\ &= {1\over 2}\sum_{j = 1}^k f_{2j}u_{2k-2j}u_{2n-2k} + {1\over 2}\sum_{j = 1}^{n-k} f_{2j}u_{2k}u_{2n - 2j - 2k}\\ &= {1\over 2}u_{2n-2k}\sum_{j = 1}^k f_{2j}u_{2k - 2j} + {1\over 2}u_{2k}\sum_{j = 1}^{n-k} f_{2j}u_{2n - 2j - 2k}\\ &= {1\over 2}u_{2n - 2k}u_{2k} + {1\over 2}u_{2k}u_{2n - 2k}\ ,\end{aligned}
where the last equality follows from Theorem 12.1.2. Thus, we have $b_{2k, 2n} = \alpha_{2k, 2n}\ ,$ which completes the proof.
We illustrate the above theorem by simulating 10,000 games of Heads or Tails, with each game consisting of 40 tosses. The distribution of the number of times that Peter is in the lead is given in Figure $1$, together with the arc sine density.
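A simulation along these lines is easy to write. Here is a minimal Python sketch (our own, not the program used for the figure) that generates games of 40 tosses, records Peter's lead time using the convention for segments on the $t$-axis described above, and compares the empirical distribution with $\frac{2}{\pi}\arcsin\sqrt{x}$:

```python
import math
import random

def lead_time(m):
    """Number of the 2m unit time intervals during which Peter is in the lead."""
    s_prev, count = 0, 0
    for _ in range(2 * m):
        s = s_prev + random.choice([-1, 1])
        # the segment [k-1, k] counts as "in the lead" if S_k > 0,
        # or if the walk touches the axis from above (S_k = 0, S_{k-1} = 1)
        if s > 0 or (s == 0 and s_prev == 1):
            count += 1
        s_prev = s
    return count

m, games = 20, 10000
data = [lead_time(m) for _ in range(games)]
x = 0.25
print(sum(1 for c in data if c < 2 * m * x) / games)   # empirical P(lead time < 2mx)
print(2 / math.pi * math.asin(math.sqrt(x)))           # arc sine approximation
```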
We end this section by stating two other results in which the arc sine density appears. Proofs of these results may be found in Feller.8
Theorem $2$
Let $J$ be the random variable which, for a given random walk of length $2m$, gives the smallest subscript $j$ such that $S_{j} = S_{2m}$. (Such a subscript $j$ must be even, by parity considerations.) Let $\gamma_{2k, 2m}$ be the probability that $J = 2k$. Then we have
$\gamma_{2k, 2m} = \alpha_{2k, 2m}\ .$
The next theorem says that the arc sine density is applicable to a wide range of situations. A continuous distribution function $F(x)$ is said to be symmetric if $F(x) = 1 - F(-x)$. (If $X$ is a continuous random variable with a symmetric distribution function, then for any real $x$, we have $P(X \le x) = P(X \ge -x)$.) We imagine that we have a random walk of length $n$ in which each summand has the distribution $F(x)$, where $F$ is continuous and symmetric. The subscript of the first maximum of such a walk is the unique subscript $k$ such that
$S_k > S_0,\ \ldots,\ S_k > S_{k-1},\ S_k \ge S_{k+1},\ \ldots,\ S_k \ge S_n\ .$
We define the random variable $K_n$ to be the subscript of the first maximum. We can now state the following theorem concerning the random variable $K_n$.
Theorem $3$
Let $F$ be a symmetric continuous distribution function, and let $\alpha$ be a fixed real number strictly between 0 and 1. Then as $n \rightarrow \infty$, we have
$P(K_n < n\alpha) \rightarrow {2\over \pi} \arcsin\sqrt \alpha\ .$
A version of this theorem that holds for a symmetric random walk can also be found in Feller.
Exercises
Exercise $1$
For a random walk of length $2m$, define $\epsilon_k$ to equal 1 if $S_k > 0$, or if $S_{k-1} = 1$ and $S_k = 0$. Define $\epsilon_k$ to equal -1 in all other cases. Thus, $\epsilon_k$ gives the side of the $t$-axis that the random walk is on during the time interval $[k-1, k]$. A “law of large numbers" for the sequence $\{\epsilon_k\}$ would say that for any $\delta > 0$, we would have
$P\left(-\delta<\frac{\epsilon_1+\epsilon_2+\cdots+\epsilon_n}{n}<\delta\right) \rightarrow 1$
as $n \rightarrow \infty$. Even though the $\epsilon$’s are not independent, the above assertion certainly appears reasonable. Using Theorem 3, show that if $-1 \le x \le 1$, then
$\lim _{n \rightarrow \infty} P\left(\frac{\epsilon_1+\epsilon_2+\cdots+\epsilon_n}{n}<x\right)=\frac{2}{\pi} \arcsin \sqrt{\frac{1+x}{2}}$
Exercise $2$
Given a random walk $W$ of length $m$, with summands
$\{X_1, X_2, \ldots,X_m\}\ ,$
define the reversal of $W$ to be the walk $W^*$ with summands
$\{X_m, X_{m-1}, \ldots, X_1\}\ .$
1. Show that the $k$th partial sum $S^*_k$ satisfies the equation $S^*_k = S_m - S_{m-k}\ ,$ where $S_k$ is the $k$th partial sum for the random walk $W$.
2. Explain the geometric relationship between the graphs of a random walk and its reversal. (It is not in general true that one graph is obtained from the other by reflecting in a vertical line.)
3. Use parts (a) and (b) to prove Theorem 2.
12.R: References and Footnotes
1. H. S. Wilf, Generatingfunctionology, (Boston: Academic Press, 1990), p. 50.
2. G. Pólya, "Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz," Math. Ann., vol. 84 (1921), pp. 149-160.
3. W. Feller, An Introduction to Probability Theory and Its Applications, vol. I, 3rd ed. (New York: John Wiley \& Sons, 1968).
4. Levasseur, "How to Beat Your Kids at Their Own Game," Mathematics Magazine vol. 61, no. 5 (December, 1988), pp. 301-305.
5. "How Often Should You Beat Your Kids?" Mathematics Magazine vol. 63, no. 2 (April 1990), pp. 89-92.
6. F. N. David, Games, Gods and Gambling (London: Griffin, 1962).
7. W. Feller, op. cit., p. 367.
8. W. Feller, op. cit., pp. 93-94. | textbooks/stats/Probability_Theory/Book%3A_Introductory_Probability_(Grinstead_and_Snell)/12%3A_Random_Walks/12.03%3A_Arc_Sine_Laws.txt |
In this chapter we review several mathematical topics that form the foundation of probability and mathematical statistics. These include the algebra of sets and functions, general relations with special emphasis on equivalence relations and partial orders, counting measure, and some basic combinatorial structures such as permuations and combinations. We also discuss some advanced topics from topology and measure theory. You may wish to review the topics in this chapter as the need arises.
01: Foundations
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\C}{\mathbb{C}}$ $\newcommand{\A}{\mathbb{A}}$ $\newcommand{\D}{\mathbb{D}}$
Set theory is the foundation of probability and statistics, as it is for almost every branch of mathematics.
Sets and subsets
In this text, sets and their elements are primitive, self-evident concepts, an approach that is sometimes referred to as naive set theory.
A set is simply a collection of objects; the objects are referred to as elements of the set. The statement that $x$ is an element of set $S$ is written $x \in S$, and the negation that $x$ is not an element of $S$ is written as $x \notin S$. By definition, a set is completely determined by its elements; thus sets $A$ and $B$ are equal if they have the same elements: $A = B \text{ if and only if } x \in A \iff x \in B$
Our next definition is the subset relation, another very basic concept.
If $A$ and $B$ are sets then $A$ is a subset of $B$ if every element of $A$ is also an element of $B$: $A \subseteq B \text{ if and only if } x \in A \implies x \in B$
Concepts in set theory are often illustrated with small, schematic sketches known as Venn diagrams, named for John Venn. The Venn diagram in the picture below illustrates the subset relation.
As noted earlier, membership is a primitive, undefined concept in naive set theory. However, the following construction, known as Russell's paradox, after the mathematician and philosopher Bertrand Russell, shows that we cannot be too cavalier in the construction of sets.
Let $R$ be the set of all sets $A$ such that $A \notin A$. Then $R \in R$ if and only if $R \notin R$.
Proof
The contradiction follows from the definition of $R$: If $R \in R$, then by definition, $R \notin R$. If $R \notin R$, then by definition, $R \in R$. The net result, of course, is that $R$ is not a well-defined set.
Usually, the sets under discussion in a particular context are all subsets of a well-defined, specified set $S$, often called a universal set. The use of a universal set prevents the type of problem that arises in Russell's paradox. That is, if $S$ is a given set and $p(x)$ is a predicate on $S$ (that is, a valid mathematical statement that is either true or false for each $x \in S$), then $\{x \in S: p(x)\}$ is a valid subset of $S$. Defining a set in this way is known as predicate form. The other basic way to define a set is simply by listing its elements; this method is known as list form.
In contrast to a universal set, the empty set, denoted $\emptyset$, is the set with no elements.
$\emptyset \subseteq A$ for every set $A$.
Proof
$\emptyset \subseteq A$ means that $x \in \emptyset \implies x \in A$. Since the premise is false, the implication is true.
One step up from the empty set is a set with just one element. Such a set is called a singleton set. The subset relation is a partial order on the collection of subsets of $S$.
Suppose that $A$, $B$ and $C$ are subsets of a set $S$. Then
1. $A \subseteq A$ (the reflexive property).
2. If $A \subseteq B$ and $B \subseteq A$ then $A = B$ (the anti-symmetric property).
3. If $A \subseteq B$ and $B \subseteq C$ then $A \subseteq C$ (the transitive property).
Here are a couple of variations on the subset relation.
Suppose that $A$ and $B$ are sets.
1. If $A \subseteq B$ and $A \ne B$, then $A$ is a strict subset of $B$ and we sometimes write $A \subset B$.
2. If $\emptyset \subset A \subset B$, then $A$ is called a proper subset of $B$.
The collection of all subsets of a given set frequently plays an important role, particularly when the given set is the universal set.
If $S$ is a set, then the set of all subsets of $S$ is known as the power set of $S$ and is denoted $\mathscr{P}(S)$.
Special Sets
The following special sets are used throughout this text. Defining them will also give us practice using list and predicate form.
Special Sets
1. $\R$ denotes the set of real numbers and is the universal set for the other subsets in this list.
2. $\N = \{0, 1, 2, \ldots\}$ is the set of natural numbers
3. $\N_+ = \{1, 2, 3, \ldots\}$ is the set of positive integers
4. $\Z = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$ is the set of integers
5. $\Q = \{m / n: m \in \Z \text{ and } n \in \N_+ \}$ is the set of rational numbers
6. $\A = \{x \in \R: p(x) = 0 \text{ for some polynomial } p \text{ with integer coefficients}\}$ is the set of algebraic numbers.
Note that $\N_+ \subset \N \subset \Z \subset \Q \subset \A \subset \R$. We will also occasionally need the set of complex numbers $\C = \{x + i y: x, \, y \in \R\}$ where $i$ is the imaginary unit. The following special rational numbers turn out to be useful for various constructions.
For $n \in \N$, a rational number of the form $j / 2^n$ where $j \in \Z$ is odd is a dyadic rational (or binary rational) of rank $n$.
1. For $n \in \N$, the set of dyadic rationals of rank $n$ or less is $\D_n = \{j / 2^n: j \in \Z\}$.
2. The set of all dyadic rationals is $\D = \{j / 2^n: j \in \Z \text{ and } n \in \N\}$.
Note that $\D_0 = \Z$ and $\D_n \subset \D_{n+1}$ for $n \in \N$, and of course, $\D \subset \Q$. We use the usual notation for intervals of real numbers, but again the definitions provide practice with predicate notation.
Suppose that $a, \, b \in \R$ with $a \lt b$.
1. $[a, b] = \{x \in \R: a \le x \le b\}$. This interval is closed.
2. $(a, b) = \{x \in \R: a \lt x \lt b\}$. This interval is open.
3. $[a, b) = \{x \in \R: a \le x \lt b\}$. This interval is closed-open.
4. $(a, b] = \{x \in \R: a \lt x \le b\}$. This interval is open-closed.
The terms open and closed are actually topological concepts.
You may recall that $x \in \R$ is rational if and only if the decimal expansion of $x$ either terminates or forms a repeating block. The binary rationals have simple binary expansions (that is, expansions in the base 2 number system).
A number $x \in \R$ is a binary rational of rank $n \in \N_+$ if and only if the binary expansion of $x$ is finite, with $1$ in position $n$ (after the separator).
Proof
It suffices to consider $x \in (0, 1)$. The result is very simple so we just give the first few cases.
1. The number with rank 1 is $1/2$ with binary expansion 0.1
2. The numbers with rank 2 are $1/4$ with expansion 0.01 and $3/4$ with expansion 0.11
3. The numbers with rank 3 are $1/8$ with expansion 0.001, $3/8$ with expansion 0.011, $5/8$ with expansion 0.101, and $7/8$ with expansion 0.111.
Set Operations
We are now ready to review the basic operations of set theory. For the following definitions, suppose that $A$ and $B$ are subsets of a universal set, which we will denote by $S$.
The union of $A$ and $B$ is the set obtained by combining the elements of $A$ and $B$. $A \cup B = \{x \in S: x \in A \text{ or } x \in B\}$
The intersection of $A$ and $B$ is the set of elements common to both $A$ and $B$: $A \cap B = \{x \in S: x \in A \text{ and } x \in B\}$ If $A \cap B = \emptyset$ then $A$ and $B$ are disjoint.
So $A$ and $B$ are disjoint if the two sets have no elements in common.
The set difference of $B$ and $A$ is the set of elements that are in $B$ but not in $A$: $B \setminus A = \{x \in S: x \in B \text{ and } x \notin A\}$
Sometimes (particularly in older works and particularly when $A \subseteq B$), the notation $B - A$ is used instead of $B \setminus A$. When $A \subseteq B$, $B - A$ is known as proper set difference.
The complement of $A$ is the set of elements that are not in $A$: $A^c = \{ x \in S: x \notin A\}$
Note that union, intersection, and difference are binary set operations, while complement is a unary set operation.
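These operations correspond directly to the operators on Python's built-in set type, which can be helpful for experimentation. Here is a small sketch (the particular sets are arbitrary examples of ours); complement must be computed relative to an explicit universal set:

```python
S = set(range(10))              # universal set for this example
A = {0, 2, 4, 6, 8}
B = {n for n in S if n <= 5}    # predicate form

print(A | B)    # union
print(A & B)    # intersection
print(B - A)    # set difference B \ A
print(S - A)    # complement of A, relative to S
```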
In the Venn diagram app, select each of the following and note the shaded area in the diagram.
1. $A$
2. $B$
3. $A^c$
4. $B^c$
5. $A \cup B$
6. $A \cap B$
Basic Rules
In the following theorems, $A$, $B$, and $C$ are subsets of a universal set $S$. The proofs are straightforward, and just use the definitions and basic logic. Try the proofs yourself before reading the ones in the text.
$A \cap B \subseteq A \subseteq A \cup B$.
The identity laws:
1. $A \cup \emptyset = A$
2. $A \cap S = A$
So the empty set acts as an identity relative to the union operation, and the universal set acts as an identity relative to the intersection operation.
The idempotent laws:
1. $A \cup A = A$
2. $A \cap A = A$
The complement laws:
1. $A \cup A^c = S$
2. $A \cap A^c = \emptyset$
The double complement law: $(A^c)^c = A$
The commutative laws:
1. $A \cup B = B \cup A$
2. $A \cap B = B \cap A$
Proof
These results follows from the commutativity of the or and and logical operators.
The associative laws:
1. $A \cup (B \cup C) = (A \cup B) \cup C$
2. $A \cap (B \cap C) = (A \cap B) \cap C$
Proof
These results follow from the associativity of the or and and logical operators.
Thus, we can write $A \cup B \cup C$ without ambiguity. Note that $x$ is an element of this set if and only if $x$ is an element of at least one of the three given sets. Similarly, we can write $A \cap B \cap C$ without ambiguity. Note that $x$ is an element of this set if and only if $x$ is an element of all three of the given sets.
The distributive laws:
1. $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$
2. $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$
Proof
1. $x \in A \cap (B \cup C)$ if and only if $x \in A$ and $x \in B \cup C$ if and only if $x \in A$ and either $x \in B$ or $x \in C$ if and only if $x \in A$ and $x \in B$, or, $x \in A$ and $x \in C$ if and only if $x \in A \cap B$ or $x \in A \cap C$ if and only if $x \in (A \cap B) \cup (A \cap C)$.
2. The proof is exactly the same as (a), but with or and and interchanged.
So intersection distributes over union, and union distributes over intersection. It's interesting to compare the distributive properties of set theory with those of the real number system. If $x, \, y, \, z \in \R$, then $x (y + z) = (x y) + (x z)$, so multiplication distributes over addition, but it is not true that $x + (y z) = (x + y)(x + z)$, so addition does not distribute over multiplication. The following results are particularly important in probability theory.
DeMorgan's laws (named after Augustus De Morgan):
1. $(A \cup B)^c = A^c \cap B^c$
2. $(A \cap B)^c = A^c \cup B^c$.
Proof
1. $x \in (A \cup B)^c$ if and only if $x \notin A \cup B$ if and only if $x \notin A$ and $x \notin B$ if and only $x \in A^c$ and $x \in B^c$ if and only if $x \in A^c \cap B^c$
2. $x \in (A \cap B)^c$ if and only if $x \notin A \cap B$ if and only if $x \notin A$ or $x \notin B$ if and only $x \in A^c$ or $x \in B^c$ if and only if $x \in A^c \cup B^c$
The following result explores the connections between the subset relation and the set operations.
The following statements are equivalent:
1. $A \subseteq B$
2. $B^c \subseteq A^c$
3. $A \cup B = B$
4. $A \cap B = A$
5. $A \setminus B = \emptyset$
Proof
1. Recall that $A \subseteq B$ means that $x \in A \implies x \in B$.
2. $B^c \subseteq A^c$ means that $x \notin B \implies x \notin A$. This is the contrapositive of (a) and hence is equivalent to (a).
3. If $A \subseteq B$ then clearly $A \cup B = B$. Conversely suppose $A \cup B = B$. If $x \in A$ then $x \in A \cup B$ so $x \in B$. Hence $A \subseteq B$.
4. If $A \subseteq B$ then clearly $A \cap B = A$. Conversely suppose $A \cap B = A$. If $x \in A$ then $x \in A \cap B$ and so $x \in B$. Hence $A \subseteq B$.
5. Suppose $A \subseteq B$. If $x \in A$ then $x \in B$ and so by definition, $x \notin A \setminus B$. If $x \notin A$ then again by definition, $x \notin A \setminus B$. Thus $A \setminus B = \emptyset$. Conversely suppose that $A \setminus B = \emptyset$. If $x \in A$ then $x \notin A \setminus B$ so $x \in B$. Thus $A \subseteq B$.
In addition to the special sets defined earlier, we also have the following:
More special sets
1. $\R \setminus \Q$ is the set of irrational numbers
2. $\R \setminus \A$ is the set of transcendental numbers
Since $\Q \subset \A \subset \R$ it follows that $\R \setminus \A \subset \R \setminus \Q$, that is, every transcendental number is also irrational.
Set difference can be expressed in terms of complement and intersection. All of the other set operations (complement, union, and intersection) can be expressed in terms of difference.
Results for set difference:
1. $B \setminus A = B \cap A^c$
2. $A^c = S \setminus A$
3. $A \cap B = A \setminus (A \setminus B)$
4. $A \cup B = S \setminus \left\{(S \setminus A) \setminus \left[(S \setminus A) \setminus (S \setminus B)\right]\right\}$
Proof
1. This is clear from the definition: $B \setminus A = B \cap A^c = \{x \in S: x \in B \text{ and } x \notin A\}$.
2. This follows from (a) with $B = S$.
3. Using (a), DeMorgan's law, and the distributive law, the right side is $A \cap (A \cap B^c)^c = A \cap (A^c \cup B) = (A \cap A^c) \cup (A \cap B) = \emptyset \cup (A \cap B) = A \cap B$
4. Using (a), (b), DeMorgan's law, and the distributive law, the right side is $\left[A^c \cap (A^c \cap B)^c \right]^c = A \cup (A^c \cap B) = (A \cup A^c) \cap (A \cup B) = S \cap (A \cup B) = A \cup B$
So in principle, we could do all of set theory using the one operation of set difference. But as (c) and (d) suggest, the results would be hideous.
$(A \cup B) \setminus (A \cap B) = (A \setminus B) \cup (B \setminus A)$.
Proof
A direct proof is simple, but for practice let's give a proof using set algebra, in particular, DeMorgan's law, and the distributive law: \begin{align} (A \cup B) \setminus (A \cap B) & = (A \cup B) \cap (A \cap B)^c = (A \cup B) \cap (A^c \cup B^c) \\ & = (A \cap A^c) \cup (B \cap A^c) \cup (A \cap B^c) \cup (B \cap B^c) \\ & = \emptyset \cup (B \setminus A) \cup (A \setminus B) \cup \emptyset = (A \setminus B) \cup (B \setminus A) \end{align}
The set in the previous result is called the symmetric difference of $A$ and $B$, and is sometimes denoted $A \bigtriangleup B$. The elements of this set belong to one but not both of the given sets. Thus, the symmetric difference corresponds to exclusive or in the same way that union corresponds to inclusive or. That is, $x \in A \cup B$ if and only if $x \in A$ or $x \in B$ (or both); $x \in A \bigtriangleup B$ if and only if $x \in A$ or $x \in B$, but not both. On the other hand, the complement of the symmetric difference consists of the elements that belong to both or neither of the given sets:
$(A \bigtriangleup B)^c = (A \cap B) \cup (A^c \cap B^c) = (A^c \cup B) \cap (B^c \cup A)$
Proof
Again, a direct proof is simple, but let's give an algebraic proof for practice: \begin{align} (A \bigtriangleup B)^c & = \left[(A \cup B) \cap (A \cap B)^c \right]^c \\ & = (A \cup B)^c \cup (A \cap B) = (A^c \cap B^c) \cup (A \cap B) \\ & = (A^c \cup A) \cap (A^c \cup B) \cap (B^c \cup A) \cap (B^c \cup B) \\ & = S \cap (A^c \cup B) \cap (B^c \cup A) \cap S = (A^c \cup B) \cap (B^c \cup A) \end{align}
There are 16 different (in general) sets that can be constructed from two given sets $A$ and $B$.
Proof
$S$ is the union of 4 pairwise disjoint sets: $A \cap B$, $A \cap B^c$, $A^c \cap B$, and $A^c \cap B^c$. If $A$ and $B$ are in general position, these 4 sets are distinct. Every set that can be constructed from $A$ and $B$ is a union of some (perhaps none, perhaps all) of these 4 sets. There are $2^4 = 16$ sub-collections of the 4 sets.
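The proof suggests a computation: form the 4 pairwise disjoint pieces and take all possible unions. A short Python sketch (with arbitrary example sets of ours, chosen in general position) confirms the count of 16:

```python
from itertools import combinations

S = set(range(8))
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}
atoms = [A & B, A - B, B - A, S - (A | B)]    # the 4 pairwise disjoint pieces

constructible = set()
for r in range(len(atoms) + 1):
    for combo in combinations(atoms, r):
        constructible.add(frozenset().union(*combo))
print(len(constructible))    # 16, since A and B are in general position
```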
Open the Venn diagram app. This app lists the 16 sets that can be constructed from given sets $A$ and $B$ using the set operations.
1. Select each of the four subsets in the proof of the last exercise: $A \cap B$, $A \cap B^c$, $A^c \cap B$, and $A^c \cap B^c$. Note that these are disjoint and their union is $S$.
2. Select each of the other 12 sets and show how each is a union of some of the sets in (a).
General Operations
The operations of union and intersection can easily be extended to a finite or even an infinite collection of sets.
Definitions
Suppose that $\mathscr{A}$ is a nonempty collection of subsets of a universal set $S$. In some cases, the subsets in $\mathscr{A}$ may be naturally indexed by a nonempty index set $I$, so that $\mathscr{A} = \{A_i: i \in I\}$. (In a technical sense, any collection of subsets can be indexed.)
The union of the collection of sets $\mathscr{A}$ is the set obtained by combining the elements of the sets in $\mathscr{A}$: $\bigcup \mathscr{A} = \{x \in S: x \in A \text{ for some } A \in \mathscr{A}\}$
If $\mathscr{A} = \{A_i: i \in I\}$, so that the collection of sets is indexed, then we use the more natural notation: $\bigcup_{i \in I} A_i =\{x \in S: x \in A_i \text{ for some } i \in I\}$
The intersection of the collection of sets $\mathscr{A}$ is the set of elements common to all of the sets in $\mathscr{A}$: $\bigcap \mathscr{A} = \{x \in S: x \in A \text{ for all } A \in \mathscr{A}\}$
If $\mathscr{A} = \{A_i : i \in I\}$, so that the collection of sets is indexed, then we use the more natural notation: $\bigcap_{i \in I} A_i = \{x \in S: x \in A_i \text{ for all } i \in I\}$ Often the index set is an integer interval of $\N$. In such cases, an even more natural notation is to use the upper and lower limits of the index set. For example, if the collection is $\{A_i: i \in \N_+\}$ then we would write $\bigcup_{i=1}^\infty A_i$ for the union and $\bigcap_{i=1}^\infty A_i$ for the intersection. Similarly, if the collection is $\{A_i: i \in \{1, 2, \ldots, n\}\}$ for some $n \in \N_+$, we would write $\bigcup_{i=1}^n A_i$ for the union and $\bigcap_{i=1}^n A_i$ for the intersection.
A collection of sets $\mathscr{A}$ is pairwise disjoint if the intersection of any two sets in the collection is empty: $A \cap B = \emptyset$ for every $A, \; B \in \mathscr{A}$ with $A \ne B$.
A collection of sets $\mathscr{A}$ is said to partition a set $B$ if the collection $\mathscr{A}$ is pairwise disjoint and $\bigcup \mathscr{A} = B$.
Partitions are intimately related to equivalence relations. As an example, for $n \in \N$, the set $\mathscr{D}_n = \left\{\left[\frac{j}{2^n}, \frac{j + 1}{2^n}\right): j \in \Z\right\}$ is a partition of $\R$ into intervals of equal length $1 / 2^n$. Note that the endpoints are the dyadic rationals of rank $n$ or less, and that $\mathscr{D}_{n+1}$ can be obtained from $\mathscr{D}_n$ by dividing each interval into two equal parts. This sequence of partitions is one of the reasons that the dyadic rationals are important.
Basic Rules
In the following problems, $\mathscr{A} = \{A_i : i \in I\}$ is a collection of subsets of a universal set $S$, indexed by a nonempty set $I$, and $B$ is a subset of $S$.
The general distributive laws:
1. $\left(\bigcup_{i \in I} A_i \right) \cap B = \bigcup_{i \in I} (A_i \cap B)$
2. $\left(\bigcap_{i \in I} A_i \right) \cup B = \bigcap_{i \in I} (A_i \cup B)$
Restate the laws in the notation where the collection $\mathscr{A}$ is not indexed.
Proof
1. $x$ is an element of the set on the left or the right of the equation if and only if $x \in B$ and $x \in A_i$ for some $i \in I$.
2. $x$ is an element of the set on the left or the right of the equation if and only if $x \in B$ or $x \in A_i$ for every $i \in I$.
$\left( \bigcup \mathscr{A} \right) \cap B = \bigcup\{A \cap B: A \in \mathscr{A}\}$, $\left( \bigcap \mathscr{A} \right) \cup B = \bigcap\{A \cup B: A \in \mathscr{A}\}$
The general De Morgan's laws:
1. $\left(\bigcup_{i \in I} A_i \right)^c = \bigcap_{i \in I} A_i^c$
2. $\left(\bigcap_{i \in I} A_i \right)^c = \bigcup_{i \in I} A_i^c$
Restate the laws in the notation where the collection $\mathscr{A}$ is not indexed.
Proof
1. $x \in \left(\bigcup_{i \in I} A_i \right)^c$ if and only if $x \notin \bigcup_{i \in I} A_i$ if and only if $x \notin A_i$ for every $i \in I$ if and only if $x \in A_i^c$ for every $i \in I$ if and only if $x \in \bigcap_{i \in I} A_i^c$.
2. $x \in \left(\bigcap_{i \in I} A_i \right)^c$ if and only if $x \notin \bigcap_{i \in I} A_i$ if and only if $x \notin A_i$ for some $i \in I$ if and only if $x \in A_i^c$ for some $i \in I$ if and only if $x \in \bigcup_{i \in I} A_i^c$.
$\left( \bigcup \mathscr{A} \right)^c = \bigcap\{A^c: A \in \mathscr{A}\}$, $\left( \bigcap \mathscr{A} \right)^c = \bigcup\{A^c: A \in \mathscr{A}\}$
Suppose that the collection $\mathscr{A}$ partitions $S$. For any subset $B$, the collection $\{A \cap B: A \in \mathscr{A}\}$ partitions $B$.
Proof
Suppose $\mathscr{A} = \{A_i: i \in I\}$ where $I$ is an index set. If $i, \, j \in I$ with $i \ne j$ then $(A_i \cap B) \cap (A_j \cap B) = (A_i \cap A_j) \cap B = \emptyset \cap B = \emptyset$, so the collection $\{A_i \cap B: i \in I\}$ is disjoint. Moreover, by the distributive law, $\bigcup_{i \in I} (A_i \cap B) = \left(\bigcup_{i \in I} A_i\right) \cap B = S \cap B = B$
Suppose that $\{A_i: i \in \N_+\}$ is a collection of subsets of a universal set $S$
1. $\bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k = \left\{x \in S: x \in A_k \text{ for infinitely many } k \in \N_+\right\}$
2. $\bigcup_{n=1}^\infty \bigcap_{k=n}^\infty A_k = \left\{x \in S: x \in A_k \text{ for all but finitely many } k \in \N_+\right\}$
Proof
1. Note that $x \in \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k$ if and only if for every $n \in \N_+$ there exists $k \ge n$ such that $x \in A_k$. In turn, this occurs if and only if $x \in A_k$ for infinitely many $k \in \N_+$.
2. Note that $x \in \bigcup_{n=1}^\infty \bigcap_{k=n}^\infty A_k$ if and only if there exists $n \in \N_+$ such that $x \in A_k$ for every $k \ge n$. In turn, this occurs if and only if $x \in A_k$ for all but finitely many $k \in \N_+$.
The sets in the previous result, known respectively as the limit superior and limit inferior of the sequence $(A_1, A_2, \ldots)$, turn out to be important in the study of probability.
Product sets
Definitions
Product sets are sets of sequences. The defining property of a sequence, of course, is that order as well as membership is important.
Let us start with ordered pairs. In this case, the defining property is that $(a, b) = (c, d)$ if and only if $a = c$ and $b = d$. Interestingly, the structure of an ordered pair can be defined just using set theory. The construction in the result below is due to Kazimierz Kuratowski.
Define $(a, b) = \{\{a\}, \{a, b\}\}$. This definition captures the defining property of an ordered pair.
Proof
Suppose that $(a, b) = (c, d)$ so that $\{\{a\}, \{a, b\}\} = \{\{c\}, \{c, d\}\}$. In the case that $a = b$ note that $(a, b) = \{\{a\}\}$. Thus we must have $\{c\} = \{c, d\} = \{a\}$ and hence $c = d = a$, and in particular, $a = c$ and $b = d$. In the case that $a \ne b$, we must have $\{c\} = \{a\}$ and hence $c = a$. But we cannot have $\{c, d\} = \{a\}$ because then $(c, d) = \{\{a\}\}$ and hence $\{a, b\} = \{a\}$, which would force $a = b$, a contradiction. Thus we must have $\{c, d\} = \{a, b\}$. Since $c = a$ and $a \ne b$ we must have $d = b$. The converse is trivial: if $a = c$ and $b = d$ then $\{a\} = \{c\}$ and $\{a, b\} = \{c, d\}$ so $(a, b) = (c, d)$.
Of course, it's important not to confuse the ordered pair $(a, b)$ with the open interval $(a, b)$, since the same notation is used for both. Usually it's clear form context which type of object is referred to. For ordered triples, the defining property is $(a, b, c) = (d, e, f)$ if and only if $a = d$, $b = e$, and $c = f$. Ordered triples can be defined in terms of ordered pairs, which via the last result, uses only set theory.
Define $(a, b, c) = (a, (b, c))$. This definition captures the defining property of an ordered triple.
Proof
Suppose $(a, b, c) = (d, e, f)$. Then $(a, (b, c)) = (d, (e, f))$. Hence by the definition of an ordered pair, we must have $a = d$ and $(b, c) = (e, f)$. Using the definition again we have $b = e$ and $c = f$. Conversely, if $a = d$, $b = e$, and $c = f$, then $(b, c) = (e, f)$ and hence $(a, (b, c)) = (d, (e, f))$. Thus $(a, b, c) = (d, e, f)$.
All of this is just to show how complicated structures can be built from simpler ones, and ultimately from set theory. But enough of that! More generally, two ordered sequences of the same size (finite or infinite) are the same if and only if their corresponding coordinates agree. Thus for $n \in \N_+$, the definition for $n$-tuples is $(x_1, x_2, \ldots, x_n) = (y_1, y_2, \ldots, y_n)$ if and only if $x_i = y_i$ for all $i \in \{1, 2, \ldots, n\}$. For infinite sequences, $(x_1, x_2, \ldots) = (y_1, y_2, \ldots)$ if and only if $x_i = y_i$ for all $i \in \N_+$.
Suppose now that we have a sequence of $n$ sets, $(S_1, S_2, \ldots, S_n)$, where $n \in \N_+$. The Cartesian product of the sets is defined as follows: $S_1 \times S_2 \times \cdots \times S_n = \left\{\left(x_1, x_2, \ldots, x_n\right): x_i \in S_i \text{ for } i \in \{1, 2, \ldots, n\}\right\}$
Cartesian products are named for René Descartes. If $S_i = S$ for each $i$, then the Cartesian product set can be written compactly as $S^n$, a Cartesian power. In particular, recall that $\R$ denotes the set of real numbers so that $\R^n$ is $n$-dimensional Euclidean space, named after Euclid, of course. The elements of $\{0, 1\}^n$ are called bit strings of length $n$. As the name suggests, we sometimes represent elements of this product set as strings rather than sequences (that is, we omit the parentheses and commas). Since the coordinates just take two values, there is no risk of confusion.
Suppose that we have an infinite sequence of sets $(S_1, S_2, \ldots)$. The Cartesian product of the sets is defined by $S_1 \times S_2 \times \cdots = \left\{\left(x_1, x_2, \ldots\right): x_i \in S_i \text{ for each } i \in \{1, 2, \ldots\}\right\}$
When $S_i = S$ for $i \in \N_+$, the Cartesian product set is sometimes written as a Cartesian power as $S^\infty$ or as $S^{\N_+}$. An explanation for the last notation, as well as a much more general construction for products of sets, is given in the next section on functions. Also, notation similar to that of general union and intersection is often used for Cartesian product, with $\prod$ as the operator. So $\prod_{i=1}^n S_i = S_1 \times S_2 \times \cdots \times S_n, \quad \prod_{i=1}^\infty S_i = S_1 \times S_2 \times \cdots$
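For finite sets, Cartesian products can be generated mechanically with itertools.product in Python. The following sketch (our own illustration) builds the set of bit strings of length 3 and the product $\{1, 2, 3, 4\} \times \{1, 2, 3, 4, 5, 6\}$ used in the dice exercises below:

```python
from itertools import product

bits = list(product([0, 1], repeat=3))            # {0, 1}^3
dice = list(product(range(1, 5), range(1, 7)))    # {1,...,4} x {1,...,6}
print(len(bits), len(dice))    # 8 24
print(bits[:3])                # [(0, 0, 0), (0, 0, 1), (0, 1, 0)]
```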
Rules for Product Sets
We will now see how the set operations relate to the Cartesian product operation. Suppose that $S$ and $T$ are sets and that $A \subseteq S$, $B \subseteq S$ and $C \subseteq T$, $D \subseteq T$. The sets in the theorems below are subsets of $S \times T$.
The most important rules that relate Cartesian product with union, intersection, and difference are the distributive rules:
Distributive rules for product sets
1. $A \times (C \cup D) = (A \times C) \cup (A \times D)$
2. $(A \cup B) \times C = (A \times C) \cup (B \times C)$
3. $A \times (C \cap D) = (A \times C) \cap (A \times D)$
4. $(A \cap B) \times C = (A \times C) \cap (B \times C)$
5. $A \times (C \setminus D) = (A \times C) \setminus (A \times D)$
6. $(A \setminus B) \times C = (A \times C) \setminus (B \times C)$
Proof
1. $(x, y) \in A \times (C \cup D)$ if and only if $x \in A$ and $y \in C \cup D$ if and only if $x \in A$ and either $y \in C$ or $y \in D$ if and only if $x \in A$ and $y \in C$, or, $x \in A$ and $y \in D$ if and only if $(x, y) \in A \times C$ or $(x, y) \in A \times D$ if and only if $(x, y) \in (A \times C) \cup (A \times D)$.
2. Similar to (a), but with the roles of the coordinates reversed.
3. $(x, y) \in A \times (C \cap D)$ if and only if $x \in A$ and $y \in C \cap D$ if and only if $x \in A$ and $y \in C$ and $y \in D$ if and only if $(x, y) \in A \times C$ and $(x, y) \in A \times D$ if and only if $(x, y) \in (A \times C) \cap (A \times D)$.
4. Similar to (c) but with the roles of the coordinates reversed.
5. $(x, y) \in A \times (C \setminus D)$ if and only if $x \in A$ and $y \in C \setminus D$ if and only if $x \in A$ and $y \in C$ and $y \notin D$ if and only if $(x, y) \in A \times C$ and $(x, y) \notin A \times D$ if and only if $(x, y) \in (A \times C) \setminus (A \times D)$.
6. Similar to (e) but with the roles of the coordinates reversed.
In general, the product of unions is larger than the corresponding union of products.
$(A \cup B) \times (C \cup D) = (A \times C) \cup (A \times D) \cup (B \times C) \cup (B \times D)$
Proof
$(x, y) \in (A \cup B) \times (C \cup D)$ if and only if $x \in A \cup B$ and $y \in C \cup D$ if and only if at least one of the following is true: $x \in A$ and $y \in C$, $x \in A$ and $y \in D$, $x \in B$ and $y \in C$, $x \in B$ and $y \in D$ if and only if $(x, y) \in (A \times C) \cup (A \times D) \cup (B \times C) \cup (B \times D)$
So in particular it follows that $(A \times C) \cup (B \times D) \subseteq (A \cup B) \times (C \cup D)$. On the other hand, the product of intersections is the same as the corresponding intersection of products.
$(A \times C) \cap (B \times D) = (A \cap B) \times (C \cap D)$
Proof
$(x, y) \in (A \times C) \cap (B \times D)$ if and only if $(x, y) \in A \times C$ and $(x, y) \in B \times D$ if and only if $x \in A$ and $y \in C$ and $x \in B$ and $y \in D$ if and only if $x \in A \cap B$ and $y \in C \cap D$ if and only if $(x, y) \in (A \cap B) \times (C \cap D)$.
In general, the product of differences is smaller than the corresponding difference of products.
$(A \setminus B) \times (C \setminus D) = [(A \times C) \setminus (A \times D)] \setminus [(B \times C) \setminus (B \times D)]$
Proof
$(x, y) \in (A \setminus B) \times (C \setminus D)$ if and only if $x \in A \setminus B$ and $y \in C \setminus D$ if and only if $x \in A$ and $x \notin B$ and $y \in C$ and $y \notin D$. On the other hand, $(x, y) \in [(A \times C) \setminus (A \times D)] \setminus [(B \times C) \setminus (B \times D)]$ if and only if $(x, y) \in (A \times C) \setminus (A \times D)$ and $(x, y) \notin (B \times C) \setminus (B \times D)$. The first statement means that $x \in A$ and $y \in C$ and $y \notin D$. The second statement is the negation of $x \in B$ and $y \in C$ and $y \notin D$. The two statements both hold if and only if $x \in A$ and $x \notin B$ and $y \in C$ and $y \notin D$.
So in particular it follows that $(A \setminus B) \times (C \setminus D) \subseteq (A \times C) \setminus (B \times D)$.
Projections and Cross Sections
In this discussion, suppose again that $S$ and $T$ are nonempty sets, and that $C \subseteq S \times T$.
Cross Sections
1. The cross section of $C$ in the first coordinate at $x \in S$ is $C_x = \{y \in T: (x, y) \in C\}$
2. The cross section of $C$ in the second coordinate at $y \in T$ is $C^y = \{x \in S: (x, y) \in C\}$
Note that $C_x \subseteq T$ for $x \in S$ and $C^y \subseteq S$ for $y \in T$.
Projections
1. The projection of $C$ onto $T$ is $C_T = \{y \in T: (x, y) \in C \text{ for some } x \in S\}$.
2. The projection of $C$ onto $S$ is $C^S = \{x \in S: (x, y) \in C \text{ for some } y \in T\}$.
The projections are the unions of the appropriate cross sections.
Unions
1. $C_T = \bigcup_{x \in S} C_x$
2. $C^S = \bigcup_{y \in T} C^y$
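Cross sections and projections are also easy to compute for finite product sets. The sketch below (our own; the set $C$ is the dice set $\{(x, y): x + y = 7\}$ from the exercises) illustrates the definitions and the fact that the projection onto $T$ is the union of the cross sections:

```python
S, T = range(1, 5), range(1, 7)
C = {(x, y) for x in S for y in T if x + y == 7}    # subset of S x T

def cross_section(C, x):
    """The cross section C_x in the first coordinate."""
    return {y for (u, y) in C if u == x}

def projection_onto_T(C):
    """The projection C_T, the union of the cross sections C_x."""
    return {y for (x, y) in C}

print(cross_section(C, 2))      # {5}
print(projection_onto_T(C))     # {3, 4, 5, 6}
```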
Cross sections are preserved under the set operations. We state the result for cross sections at $x \in S$. By symmetry, of course, analogous results hold for cross sections at $y \in T$.
Suppose that $C, \, D \subseteq S \times T$. Then for $x \in S$,
1. $(C \cup D)_x = C_x \cup D_x$
2. $(C \cap D)_x = C_x \cap D_x$
3. $(C \setminus D)_x = C_x \setminus D_x$
Proof
1. $y \in (C \cup D)_x$ if and only if $(x, y) \in C \cup D$ if and only if $(x, y) \in C$ or $(x, y) \in D$ if and only if $y \in C_x$ or $y \in D_x$.
2. The proof is just like (a), with and replacing or.
3. The proof is just like (a), with and not replacing or.
For projections, the results are a bit more complicated. We give the results for projections onto $T$; naturally the results for projections onto $S$ are analogous.
Suppose again that $C, \, D \subseteq S \times T$. Then
1. $(C \cup D)_T = C_T \cup D_T$
2. $(C \cap D)_T \subseteq C_T \cap D_T$
3. $(C_T)^c \subseteq (C^c)_T$
Proof
1. Suppose that $y \in (C \cup D)_T$. Then there exists $x \in S$ such that $(x, y) \in C \cup D$. Hence $(x, y) \in C$ so $y \in C_T$, or $(x, y) \in D$ so $y \in D_T$. In either case, $y \in C_T \cup D_T$. Conversely, suppose that $y \in C_T \cup D_T$. Then $y \in C_T$ or $y \in D_T$. If $y \in C_T$ then there exists $x \in S$ such that $(x, y) \in C$. But then $(x, y) \in C \cup D$ so $y \in (C \cup D)_T$. Similarly if $y \in D_T$ then $y \in (C \cup D)_T$.
2. Suppose that $y \in (C \cap D)_T$. Then there exists $x \in S$ such that $(x, y) \in C \cap D$. Hence $(x, y) \in C$ so $y \in C_T$ and $(x, y) \in D$ so $y \in D_T$. Therefore $y \in C_T \cap D_T$.
3. Suppose that $y \in (C_T)^c$. Then $y \notin C_T$, so for every $x \in S$, $(x, y) \notin C$. Fix $x_0 \in S$. Then $(x_0, y) \notin C$ so $(x_0, y) \in C^c$ and therefore $y \in (C^c)_T$.
It's easy to see that equality does not hold in general in parts (b) and (c). In part (b) for example, suppose that $A_1, \; A_2 \subseteq S$ are nonempty and disjoint and $B \subseteq T$ is nonempty. Let $C = A_1 \times B$ and $D = A_2 \times B$. Then $C \cap D = \emptyset$ so $(C \cap D)_T = \emptyset$. But $C_T = D_T = B$. In part (c) for example, suppose that $A$ is a nonempty proper subset of $S$ and $B$ is a nonempty proper subset of $T$. Let $C = A \times B$. Then $C_T = B$ so $(C_T)^c = B^c$. On the other hand, $C^c = (A^c \times B) \cup (A \times B^c) \cup (A^c \times B^c)$, so $(C^c)_T = T$.
Cross sections and projections will be extended to very general product sets in the next section on Functions.
Computational Exercises
Subsets of $\R$
The universal set is $[0, \infty)$. Let $A = [0, 5]$ and $B = (3, 7)$. Express each of the following in terms of intervals:
1. $A \cap B$
2. $A \cup B$
3. $A \setminus B$
4. $B \setminus A$
5. $A^c$
Answer
1. $(3, 5]$
2. $[0, 7)$
3. $[0, 3]$
4. $(5, 7)$
5. $(5, \infty)$
The universal set is $\N$. Let $A = \{n \in \N: n \text{ is even}\}$ and let $B = \{n \in \N: n \le 9\}$. Give each of the following:
1. $A \cap B$ in list form
2. $A \cup B$ in predicate form
3. $A \setminus B$ in list form
4. $B \setminus A$ in list form
5. $A^c$ in predicate form
6. $B^c$ in list form
Answer
1. $\{0, 2, 4, 6, 8\}$
2. $\{n \in \N: n \text{ is even or } n \le 9\}$
3. $\{10, 12, 14, \ldots\}$
4. $\{1, 3, 5, 7, 9\}$
5. $\{n \in \N: n \text{ is odd}\}$
6. $\{10, 11, 12, \ldots\}$
Coins and Dice
Let $S = \{1, 2, 3, 4\} \times \{1, 2, 3, 4, 5, 6\}$. This is the set of outcomes when a 4-sided die and a 6-sided die are tossed. Further let $A = \{(x, y) \in S: x = 2\}$ and $B = \{(x, y) \in S: x + y = 7\}$. Give each of the following sets in list form:
1. $A$
2. $B$
3. $A \cap B$
4. $A \cup B$
5. $A \setminus B$
6. $B \setminus A$
Answer
1. $\{(2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6)\}$
2. $\{(1, 6), (2, 5), (3, 4), (4, 3)\}$
3. $\{(2, 5)\}$
4. $\{(2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6), (1, 6), (3, 4), (4, 3)\}$
5. $\{(2, 1), (2, 2), (2, 3), (2, 4), (2, 6)\}$
6. $\{(1, 6), (3, 4), (4, 3)\}$
Let $S = \{0, 1\}^3$. This is the set of outcomes when a coin is tossed 3 times (0 denotes tails and 1 denotes heads). Further let $A = \{(x_1, x_2, x_3) \in S: x_2 = 1\}$ and $B = \{(x_1, x_2, x_3) \in S: x_1 + x_2 + x_3 = 2\}$. Give each of the following sets in list form, using bit-string notation:
1. $S$
2. $A$
3. $B$
4. $A^c$
5. $B^c$
6. $A \cap B$
7. $A \cup B$
8. $A \setminus B$
9. $B \setminus A$
Answer
1. $\{000, 100, 010, 001, 110, 101, 011, 111\}$
2. $\{010, 110, 011, 111\}$
3. $\{110, 011, 101\}$
4. $\{000, 100, 001, 101\}$
5. $\{000, 100, 010, 001, 111\}$
6. $\{110, 011\}$
7. $\{010, 110, 011, 111, 101\}$
8. $\{010, 111\}$
9. $\{101\}$
Let $S = \{0, 1\}^2$. This is the set of outcomes when a coin is tossed twice (0 denotes tails and 1 denotes heads). Give $\mathscr{P}(S)$ in list form.
Answer
$\{\emptyset, \{00\}, \{01\}, \{10\}, \{11\}, \{00, 01\}, \{00, 10\}, \{00, 11\}, \{01, 10\}, \{01, 11\}, \{10, 11\}, \{00, 01, 10\}, \{00, 01, 11\}, \{00, 10, 11\}, \{01, 10, 11\}, \{00, 01, 10, 11\}\}$
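For small sets like this one, the power set can be generated mechanically. Here is a short Python sketch (our own) that enumerates all subsets by taking combinations of every size:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

print(len(power_set(['00', '01', '10', '11'])))    # 2^4 = 16
```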
Cards
A standard card deck can be modeled by the Cartesian product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, j, q, k\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate encodes the denomination or kind (ace, 2–10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $q \heartsuit$ for the queen of hearts). For the problems in this subsection, the card deck $D$ is the universal set.
Let $H$ denote the set of hearts and $F$ the set of face cards. Find each of the following:
1. $H \cap F$
2. $H \setminus F$
3. $F \setminus H$
4. $H \bigtriangleup F$
Answer
1. $\{j \heartsuit, q \heartsuit, k \heartsuit\}$
2. $\{1 \heartsuit, 2 \heartsuit, 3 \heartsuit, 4 \heartsuit, 5 \heartsuit, 6 \heartsuit, 7 \heartsuit, 8 \heartsuit, 9 \heartsuit, 10 \heartsuit\}$
3. $\{j \spadesuit, q \spadesuit, k \spadesuit, j \diamondsuit, q \diamondsuit, k \diamondsuit, j \clubsuit, q \clubsuit, k \clubsuit\}$
4. $\{1 \heartsuit, 2 \heartsuit, 3 \heartsuit, 4 \heartsuit, 5 \heartsuit, 6 \heartsuit, 7 \heartsuit, 8 \heartsuit, 9 \heartsuit, 10 \heartsuit, j \spadesuit, q \spadesuit, k \spadesuit, j \diamondsuit, q \diamondsuit, k \diamondsuit, j \clubsuit, q \clubsuit, k \clubsuit\}$
A bridge hand is a subset of $D$ with 13 cards. Often bridge hands are described by giving the cross sections by suit.
Suppose that $N$ is a bridge hand, held by a player named North, defined by $N^\clubsuit = \{2, 5, q\}, \, N^\diamondsuit = \{1, 5, 8, q, k\}, \, N^\heartsuit = \{8, 10, j, q\}, \, N^\spadesuit = \{1\}$ Find each of the following:
1. The nonempty cross sections of $N$ by denomination.
2. The projection of $N$ onto the set of suits.
3. The projection of $N$ onto the set of denominations
Answer
1. $N_1 = \{\diamondsuit, \spadesuit\}$, $N_2 = \{\clubsuit\}$, $N_5 = \{\clubsuit, \diamondsuit\}$, $N_8 = \{\diamondsuit, \heartsuit\}$, $N_{10} = \{\heartsuit\}$, $N_j = \{\heartsuit\}$, $N_q = \{\clubsuit, \diamondsuit, \heartsuit\}$, $N_k = \{\diamondsuit\}$
2. $\{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$
3. $\{1, 2, 5, 8, 10, j, q, k\}$
By contrast, it is usually more useful to describe a poker hand by giving the cross sections by denomination. In the usual version of draw poker, a hand is a subset of $D$ with 5 cards.
Suppose that $B$ is a poker hand, held by a player named Bill, with $B_1 = \{\clubsuit, \spadesuit\}, \, B_8 = \{\clubsuit, \spadesuit\}, \, B_q = \{\heartsuit\}$ Find each of the following:
1. The nonempty cross sections of $B$ by suit.
2. The projection of $B$ onto the set of suits.
3. The projection of $B$ onto the set of denominations
Answer
1. $B^\clubsuit = \{1, 8\}$, $B^\heartsuit = \{q\}$, $B^\spadesuit = \{1, 8\}$
2. $\{\clubsuit, \heartsuit, \spadesuit\}$
3. $\{1, 8, q\}$
The poker hand in the last exercise is known as a dead man's hand. Legend has it that Wild Bill Hickock held this hand at the time of his murder in 1876.
General unions and intersections
For the problems in this subsection, the universal set is $\R$.
Let $A_n = [0, 1 - \frac{1}{n}]$ for $n \in \N_+$. Find
1. $\bigcap_{n=1}^\infty A_n$
2. $\bigcup_{n=1}^\infty A_n$
3. $\bigcap_{n=1}^\infty A_n^c$
4. $\bigcup_{n=1}^\infty A_n^c$
Answer
1. $\{0\}$
2. $[0, 1)$
3. $(-\infty, 0) \cup [1, \infty)$
4. $\R - \{0\}$
Let $A_n = (2 - \frac{1}{n}, 5 + \frac{1}{n})$ for $n \in \N_+$. Find
1. $\bigcap_{n=1}^\infty A_n$
2. $\bigcup_{n=1}^\infty A_n$
3. $\bigcap_{n=1}^\infty A_n^c$
4. $\bigcup_{n=1}^\infty A_n^c$
Answer
1. $[2, 5]$
2. $(1, 6)$
3. $(-\infty, 1] \cup [6, \infty)$
4. $(-\infty, 2) \cup (5, \infty)$
Subsets of $\R^2$
Let $T$ be the closed triangular region in $\R^2$ with vertices $(0, 0)$, $(1, 0)$, and $(1, 1)$. Find each of the following:
1. The cross section $T_x$ for $x \in \R$
2. The cross section $T^y$ for $y \in \R$
3. The projection of $T$ onto the horizontal axis
4. The projection of $T$ onto the vertical axis
Answer
1. $T_x = [0, x]$ for $x \in [0, 1]$, $T_x = \emptyset$ otherwise
2. $T^y = [y, 1]$ for $y \in [0, 1]$, $T^y = \emptyset$ otherwise
3. $[0, 1]$
4. $[0, 1]$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/01%3A_Foundations/1.01%3A_Sets.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\range}{\text{range}}$
Functions play a central role in probability and statistics, as they do in every other branch of mathematics. For the most part, the proofs in this section are straightforward, so be sure to try them yourself before reading the ones in the text.
Definitions and Properties
Basic Definitions
We start with the formal, technical definition of a function. It's not very intuitive, but has the advantage that it only requires set theory.
A function $f$ from a set $S$ into a set $T$ is a subset of the product set $S \times T$ with the property that for each element $x \in S$, there exists a unique element $y \in T$ such that $(x, y) \in f$. If $f$ is a function from $S$ to $T$ we write $f: S \to T$. If $(x, y) \in f$ we write $y = f(x)$.
Less formally, a function $f$ from $S$ into $T$ is a rule (or procedure or algorithm) that assigns to each $x \in S$ a unique element $f(x) \in T$. The definition of a function as a set of ordered pairs is due to Kazimierz Kuratowski. The term map or mapping is also used in place of function, so we could say that $f$ maps $S$ into $T$.
The sets $S$ and $T$ in the definition are clearly important.
Suppose that $f: S \to T$.
1. The set $S$ is the domain of $f$.
2. The set $T$ is the range space or co-domain of $f$.
3. The range of $f$ is the set of function values. That is, $\range\left(f\right) = \left\{y \in T: y = f(x) \text{ for some } x \in S\right\}$.
The domain and range are completely specified by a function. That's not true of the co-domain: if $f$ is a function from $S$ into $T$, and $U$ is another set with $T \subseteq U$, then we can also think of $f$ as a function from $S$ into $U$. The following definitions are natural and important.
Suppose again that $f: S \to T$.
1. $f$ maps $S$ onto $T$ if $\range\left(f\right) = T$. That is, for each $y \in T$ there exists $x \in S$ such that $f(x) = y$.
2. $f$ is one-to-one if distinct elements in the domain are mapped to distinct elements in the range. That is, if $u, \, v \in S$ and $u \ne v$ then $f(u) \ne f(v)$.
Clearly a function always maps its domain onto its range. Note also that $f$ is one-to-one if $f(u) = f(v)$ implies $u = v$ for $u, \, v \in S$.
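For a finite function, both properties can be checked mechanically. In the Python sketch below (our own illustration), a function is represented as a dictionary from its domain, in the spirit of the set-of-ordered-pairs definition:

```python
def is_one_to_one(f):
    """A dict f is one-to-one if distinct keys map to distinct values."""
    return len(set(f.values())) == len(f)

def is_onto(f, T):
    """f maps its domain onto T if its range equals T."""
    return set(f.values()) == set(T)

f = {1: 'a', 2: 'b', 3: 'c'}
print(is_one_to_one(f), is_onto(f, {'a', 'b', 'c'}))    # True True
```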
Inverse functions
A function that is one-to-one and onto can be reversed in a sense.
If $f$ maps $S$ one-to-one onto $T$, the inverse of $f$ is the function $f^{-1}$ from $T$ onto $S$ given by $f^{-1}(y) = x \iff f(x) = y; \quad x \in S, \; y \in T$
If you like to think of a function as a set of ordered pairs, then $f^{-1} = \{(y, x) \in T \times S: (x, y) \in f\}$. The fact that $f$ is one-to-one and onto ensures that $f^{-1}$ is a valid function from $T$ onto $S$. Sets $S$ and $T$ are in one-to-one correspondence if there exists a one-to-one function from $S$ onto $T$. One-to-one correspondence plays an essential role in the study of cardinality.
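In the dictionary representation, inverting a one-to-one, onto function amounts to swapping the ordered pairs, as in this brief sketch (our own):

```python
f = {1: 'a', 2: 'b', 3: 'c'}             # one-to-one from {1, 2, 3} onto {'a', 'b', 'c'}
f_inv = {y: x for x, y in f.items()}     # swap pairs; valid since f is one-to-one and onto
print(f_inv['b'])                        # 2
print(all(f_inv[f[x]] == x for x in f))  # True: f^{-1}(f(x)) = x on the domain
```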
Restrictions
The domain of a function can be restricted to create a new function.
Suppose that $f: S \to T$ and that $A \subseteq S$. The function $f_A: A \to T$ defined by $f_A(x) = f(x)$ for $x \in A$ is the restriction of $f$ to $A$.
As a set of ordered pairs, note that $f_A = \{(x, y) \in f: x \in A\}$.
Composition
Composition is perhaps the most important way to combine two functions to create another function.
Suppose that $g: R \to S$ and $f: S \to T$. The composition of $f$ with $g$ is the function $f \circ g: R \to T$ defined by $\left(f \circ g\right)(x) = f\left(g(x)\right), \quad x \in R$
Composition is associative:
Suppose that $h: R \to S$, $g: S \to T$, and $f: T \to U$. Then $f \circ \left(g \circ h\right) = \left(f \circ g\right) \circ h$
Proof
Note that both functions map $R$ into $U$. Using the definition of composition, the value of both functions at $x \in R$ is $f\left(g\left(h(x)\right)\right)$.
Thus we can write $f \circ g \circ h$ without ambiguity. On the other hand, composition is not commutative. Indeed depending on the domains and co-domains, $f \circ g$ might be defined when $g \circ f$ is not. Even when both are defined, they may have different domains and co-domains, and so of course cannot be the same function. Even when both are defined and have the same domains and co-domains, the two compositions will not be the same in general. Examples of all of these cases are given in the computational exercises below.
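A simple Python sketch (our own) of composition makes the lack of commutativity concrete:

```python
def compose(f, g):
    """Return the composition f o g, defined by x -> f(g(x))."""
    return lambda x: f(g(x))

f = lambda y: y + 1
g = lambda x: 2 * x
print(compose(f, g)(3), compose(g, f)(3))    # 7 8, so f o g != g o f here
```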
Suppose that $g: R \to S$ and $f: S \to T$.
1. If $f$ and $g$ are one-to-one then $f \circ g$ is one-to-one.
2. If $f$ and $g$ are onto then $f \circ g$ is onto.
Proof
1. Suppose that $u, \, v \in R$ and $(f \circ g)(u) = (f \circ g)(v)$. Then $f\left(g(u)\right) = f\left(g(v)\right)$. Since $f$ is one-to-one, $g(u) = g(v)$. Since $g$ is one-to-one, $u = v$.
2. Suppose that $z \in T$. Since $f$ is onto, there exist $y \in S$ with $f(y) = z$. Since $g$ is onto, there exists $x \in R$ with $g(x) = y$. Then $(f \circ g)(x) = f\left(g(x)\right) = f(y) = z$.
The identity function on a set $S$ is the function $I_S$ from $S$ onto $S$ defined by $I_S(x) = x$ for $x \in S$
The identity function acts like an identity with respect to the operation of composition.
If $f: S \to T$ then
1. $f \circ I_S = f$
2. $I_T \circ f = f$
Proof
1. Note that $f \circ I_S: S \to T$. For $x \in S$, $(f \circ I_S)(x) = f\left(I_S(x)\right) = f(x)$.
2. Note that $I_T \circ f: S \to T$. For $x \in S$, $(I_T \circ f)(x) = I_T\left(f(x)\right) = f(x)$.
The inverse of a function is really the inverse with respect to composition.
Suppose that $f$ is a one-to-one function from $S$ onto $T$. Then
1. $f^{-1} \circ f = I_S$
2. $f \circ f^{-1} = I_T$
Proof
1. Note that $f^{-1} \circ f : S \to S$. For $x \in S$, $\left(f^{-1} \circ f\right)(x) = f^{-1}\left(f(x)\right) = x$.
2. Note that $f \circ f^{-1}: T \to T$. For $y \in T$, $\left(f \circ f^{-1}\right)(y) = f\left(f^{-1}(y)\right) = y$
An element $x \in S^n$ can be thought of as a function from $\{1, 2, \ldots, n\}$ into $S$. Similarly, an element $x \in S^\infty$ can be thought of as a function from $\N_+$ into $S$. For such a sequence $x$, of course, we usually write $x_i$ instead of $x(i)$. More generally, if $S$ and $T$ are sets, then the set of all functions from $S$ into $T$ is denoted by $T^S$. In particular, as we noted in the last section, $S^\infty$ is also (and more accurately) written as $S^{\N_+}$.
Suppose that $g$ is a one-to-one function from $R$ onto $S$ and that $f$ is a one-to-one function from $S$ onto $T$. Then $\left(f \circ g\right)^{-1} = g^{-1} \circ f^{-1}$.
Proof
Note that $(f \circ g)^{-1}: T \to R$ and $g^{-1} \circ f^{-1}: T \to R$. For $y \in T$, let $x = \left(f \circ g\right)^{-1}(y)$. Then $\left(f \circ g\right)(x) = y$ so that $f\left(g(x)\right) = y$ and hence $g(x) = f^{-1}(y)$ and finally $x = g^{-1}\left(f^{-1}(y)\right)$.
Inverse Images
Inverse images of a function play a fundamental role in probability, particularly in the context of random variables.
Suppose that $f: S \to T$. If $A \subseteq T$, the inverse image of $A$ under $f$ is the subset of $S$ given by $f^{-1}(A) = \{x \in S: f(x) \in A\}$
So $f^{-1}(A)$ is the subset of $S$ consisting of those elements that map into $A$.
Technically, the inverse images define a new function from $\mathscr{P}(T)$ into $\mathscr{P}(S)$. We use the same notation as for the inverse function, which is defined when $f$ is one-to-one and onto. These are very different functions, but usually no confusion results. The following important theorem shows that inverse images preserve all set operations.
Suppose that $f: S \to T$, and that $A, \, B \subseteq T$. Then
1. $f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B)$
2. $f^{-1}(A \cap B) = f^{-1}(A) \cap f^{-1}(B)$
3. $f^{-1}(A \setminus B) = f^{-1}(A) \setminus f^{-1}(B)$
4. If $A \subseteq B$ then $f^{-1}(A) \subseteq f^{-1}(B)$
5. If $A$ and $B$ are disjoint, so are $f^{-1}(A)$ and $f^{-1}(B)$
Proof
1. $x \in f^{-1}(A \cup B)$ if and only if $f(x) \in A \cup B$ if and only if $f(x) \in A$ or $f(x) \in B$ if and only if $x \in f^{-1}(A)$ or $x \in f^{-1}(B)$ if and only if $x \in f^{-1}(A) \cup f^{-1}(B)$
2. The proof is the same as (a), with intersection replacing union and with "and" replacing "or" throughout.
3. The proof is the same as (a), with set difference replacing union and with "and not" replacing "or" throughout.
4. Suppose $A \subseteq B$. If $x \in f^{-1}(A)$ then $f(x) \in A$ and hence $f(x) \in B$, so $x \in f^{-1}(B)$.
5. If $A$ and $B$ are disjoint, then from (b), $f^{-1}(A) \cap f^{-1}(B) = f^{-1}(A \cap B) = f^{-1}(\emptyset) = \emptyset$.
The result in part (a) holds for arbitrary unions, and the result in part (b) holds for arbitrary intersections. No new ideas are involved; only the notation is more complicated.
Suppose that $\{A_i: i \in I\}$ is a collection of subsets of $T$, where $I$ is a nonempty index set. Then
1. $f^{-1}\left(\bigcup_{i \in I} A_i\right) = \bigcup_{i \in I} f^{-1}(A_i)$
2. $f^{-1}\left(\bigcap_{i \in I} A_i\right) = \bigcap_{i \in I} f^{-1}(A_i)$
Proof
1. $x \in f^{-1}\left(\bigcup_{i \in I} A_i\right)$ if and only if $f(x) \in \bigcup_{i \in I} A_i$ if and only if $f(x) \in A_i$ for some $i \in I$ if and only if $x \in f^{-1}(A_i)$ for some $i \in I$ if and only if $x \in \bigcup_{i \in I} f^{-1}(A_i)$
2. The proof is the same as (a), with intersection replacing union and with "for every" replacing "for some".
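For a concrete check of these preservation properties, here is a minimal sketch (the function and the sets are our own choices) with $f(x) = x^2$ on a small finite domain:

```python
# f(x) = x^2 on S = {-5, ..., 5}, represented as a table.
S = set(range(-5, 6))
f = {x: x * x for x in S}

def preimage(A):
    """Inverse image f^{-1}(A) = {x in S : f(x) in A}."""
    return {x for x in S if f[x] in A}

A, B = {0, 1, 4}, {4, 9, 16}
assert preimage(A | B) == preimage(A) | preimage(B)   # unions preserved
assert preimage(A & B) == preimage(A) & preimage(B)   # intersections preserved
assert preimage(A - B) == preimage(A) - preimage(B)   # differences preserved
```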
Forward Images
Forward images of a function are a natural complement to inverse images.
Suppose again that $f: S \to T$. If $A \subseteq S$, the forward image of $A$ under $f$ is the subset of $T$ given by $f(A) = \left\{f(x): x \in A\right\}$
So $f(A)$ is the range of $f$ restricted to $A$.
Technically, the forward images define a new function from $\mathscr{P}(S)$ into $\mathscr{P}(T)$, but we use the same symbol $f$ for this new function as for the underlying function from $S$ into $T$ that we started with. Again, the two functions are very different, but usually no confusion results.
It might seem that forward images are more natural than inverse images, but in fact, the inverse images are much more important than the forward ones (at least in probability and measure theory). Fortunately, the inverse images are nicer as well: unlike inverse images, forward images do not preserve all of the set operations.
Suppose that $f: S \to T$, and that $A, \, B \subseteq S$. Then
1. $f(A \cup B) = f(A) \cup f(B)$.
2. $f(A \cap B) \subseteq f(A) \cap f(B)$. Equality holds if $f$ is one-to-one.
3. $f(A) \setminus f(B) \subseteq f(A \setminus B)$. Equality holds if $f$ is one-to-one.
4. If $A \subseteq B$ then $f(A) \subseteq f(B)$.
Proof
1. Suppose $y \in f(A \cup B)$. Then $y = f(x)$ for some $x \in A \cup B$. If $x \in A$ then $y \in f(A)$ and if $x \in B$ then $y \in f(B)$. In both cases $y \in f(A) \cup f(B)$. Conversely suppose $y \in f(A) \cup f(B)$. If $y \in f(A)$ then $y = f(x)$ for some $x \in A$. But then $x \in A \cup B$ so $y \in f(A \cup B)$. Similarly, if $y \in f(B)$ then $y = f(x)$ for some $x \in B$. But then $x \in A \cup B$ so $y \in f(A \cup B)$.
2. If $y \in f(A \cap B)$ then $y = f(x)$ for some $x \in A \cap B$. But then $x \in A$ so $y \in f(A)$ and $x \in B$ so $y \in f(B)$ and hence $y \in f(A) \cap f(B)$. Conversely, suppose that $y \in f(A) \cap f(B)$. Then $y \in f(A)$ and $y \in f(B)$, so there exists $x \in A$ with $f(x) = y$ and there exists $u \in B$ with $f(u) = y$. At this point, we can go no further. But if $f$ is one-to-one, then $u = x$ and hence $x \in A$ and $x \in B$. Thus $x \in A \cap B$ so $y \in f(A \cap B)$.
3. Suppose $y \in f(A) \setminus f(B)$. Then $y \in f(A)$ and $y \notin f(B)$. Hence $y = f(x)$ for some $x \in A$ and $y \ne f(u)$ for every $u \in B$. Thus, $x \notin B$ so $x \in A \setminus B$ and hence $y \in f(A \setminus B)$. Conversely, suppose $y \in f(A \setminus B)$. Then $y = f(x)$ for some $x \in A \setminus B$. Hence $x \in A$ so $y \in f(A)$. Again, the proof breaks down at this point. However, if $f$ is one-to-one and $f(u) = y$ for some $u \in B$, then $u = x$ so $x \in B$, a contradiction. Hence $f(u) \ne y$ for every $u \in B$ so $y \notin f(B)$. Thus $y \in f(A \setminus B)$.
4. Suppose $A \subseteq B$. If $y \in f(A)$ then $y = f(x)$ for some $x \in A$. But then $x \in B$ so $y \in f(B)$.
The result in part (a) holds for arbitrary unions, and the result in part (b) holds for arbitrary intersections. No new ideas are involved; only the notation is more complicated.
Suppose that $\{A_i: i \in I\}$ is a collection of subsets of $S$, where $I$ is a nonempty index set. Then
1. $f\left(\bigcup_{i \in I} A_i\right) = \bigcup_{i \in I} f(A_i)$.
2. $f\left(\bigcap_{i \in I} A_i\right) \subseteq \bigcap_{i \in I} f(A_i)$. Equality holds if $f$ is one-to-one.
Proof
1. $y \in f\left(\bigcup_{i \in I} A_i\right)$ if and only if $y = f(x)$ for some $x \in \bigcup_{i \in I} A_i$ if and only if $y = f(x)$ for some $x \in A_i$ and some $i \in I$ if and only if $y \in f(A_i)$ for some $i \in I$ if and only if $y \in \bigcup_{i \in I} f(A_i)$.
2. If $y \in f\left(\bigcap_{i \in I} A_i\right)$ then $y = f(x)$ for some $x \in \bigcap_{i \in I} A_i$. Hence $x \in A_i$ for every $i \in I$ so $y \in f(A_i)$ for every $i \in I$ and thus $y \in \bigcap_{i \in I} f(A_i)$. Conversely, suppose that $y \in \bigcap_{i \in I} f(A_i)$. Then $y \in f(A_i)$ for every $i \in I$. Hence for every $i \in I$ there exists $x_i \in A_i$ with $y = f(x_i)$. If $f$ is one-to-one, $x_i = x_j$ for all $i, \, j \in I$. Call the common value $x$. Then $x \in A_i$ for every $i \in I$ so $x \in \bigcap_{i \in I} A_i$ and therefore $y \in f\left(\bigcap_{i \in I} A_i\right)$.
Suppose again that $f: S \to T$. As noted earlier, the forward images of $f$ define a function from $\mathscr{P}(S)$ into $\mathscr{P}(T)$ and the inverse images define a function from $\mathscr{P}(T)$ into $\mathscr{P}(S)$. One might hope that these functions are inverses of one another, but alas no.
Suppose that $f: S \to T$.
1. $A \subseteq f^{-1}\left[f(A)\right]$ for $A \subseteq S$. Equality holds if $f$ is one-to-one.
2. $f\left[f^{-1}(B)\right] \subseteq B$ for $B \subseteq T$. Equality holds if $f$ is onto.
Proof
1. If $x \in A$ then $f(x) \in f(A)$ and hence $x \in f^{-1}\left[f(A)\right]$. Conversely suppose that $x \in f^{-1}\left[f(A)\right]$. Then $f(x) \in f(A)$ so $f(x) = f(u)$ for some $u \in A$. At this point we can go no further. But if $f$ is one-to-one, then $u = x$ and hence $x \in A$.
2. Suppose $y \in f\left[f^{-1}(B)\right]$. Then $y = f(x)$ for some $x \in f^{-1}(B)$. But then $y = f(x) \in B$. Conversely suppose that $f$ is onto and $y \in B$. There exist $x \in S$ with $f(x) = y$. Hence $x \in f^{-1}(B)$ and so $y \in f\left[f^{-1}(B)\right]$.
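A short sketch (again with $f(x) = x^2$, our illustrative non-one-to-one function) makes the failures concrete: forward images need not preserve intersections, and $A$ can be a proper subset of $f^{-1}\left[f(A)\right]$:

```python
S = {-2, -1, 0, 1, 2}
f = lambda x: x * x          # not one-to-one on S

def image(A):
    return {f(x) for x in A}

def preimage(B):
    return {x for x in S if f(x) in B}

A, B = {-2, -1}, {1, 2}
assert image(A & B) < image(A) & image(B)   # empty set vs {1, 4}: proper subset
assert A < preimage(image(A))               # {-2, -1} vs {-2, -1, 1, 2}
```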
Spaces of Real Functions
Real-valued functions on a given set $S$ are of particular importance. The usual arithmetic operations on such functions are defined pointwise.
Suppose that $f, \, g: S \to \R$ and $c \in \R$. Then $f + g, \, f - g, \, f g, \, c f, \, f / g: S \to \R$ are defined as follows for all $x \in S$.
1. $(f + g)(x) = f(x) + g(x)$
2. $(f - g)(x) = f(x) - g(x)$
3. $(f g)(x) = f(x) g(x)$
4. $(c f)(x) = c f(x)$
5. $(f/g)(x) = f(x) / g(x)$ assuming that $g(x) \ne 0$ for $x \in S$.
Now let $\mathscr V$ denote the collection of all functions from the given set $S$ into $\R$. A fact that is very important in probability as well as other branches of analysis is that $\mathscr V$, with addition and scalar multiplication as defined above, is a vector space. The zero function $\bs 0$ is defined, of course, by $\bs{0}(x) = 0$ for all $x \in S$.
$(\mathscr V, +, \cdot)$ is a vector space over $\R$. That is, for all $f, \, g, \, h \in \mathscr V$ and $a, \, b \in \R$
1. $f + g = g + f$, the commutative property of vector addition.
2. $f + (g + h) = (f + g) + h$, the associative property of vector addition.
3. $a(f + g) = a f + a g$, scalar multiplication distributes over vector addition.
4. $(a + b)f = a f + b f$, scalar multiplication distributes over scalar addition.
5. $f + \bs 0 = f$, the existence of a zero vector.
6. $f + (-f) = \bs 0$, the existence of additive inverses.
7. $1 \cdot f = f$, the unity property.
Proof
Each of these properties follows from the corresponding property in $\R$.
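As a sketch of the pointwise definitions (the helper names are ours, for illustration), real-valued functions can be represented as callables and combined pointwise; a few of the axioms are spot-checked at sample points:

```python
# Pointwise operations on functions from S into R.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

zero = lambda x: 0.0   # the zero function

f = lambda x: x * x
g = lambda x: 3.0 * x

for x in [-1.0, 0.0, 2.5]:
    assert add(f, g)(x) == add(g, f)(x)   # commutativity of vector addition
    assert add(f, zero)(x) == f(x)        # the zero vector
    assert scale(2.0, add(f, g))(x) == add(scale(2.0, f), scale(2.0, g))(x)
```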
Various subspaces of $\mathscr V$ are important in probability as well. We will return to the discussion of vector spaces of functions in the sections on partial orders and in the advanced sections on metric spaces and measure theory.
Indicator Functions
For our next discussion, suppose that $S$ is the universal set, so that all other sets mentioned are subsets of $S$.
Suppose that $A \subseteq S$. The indicator function of $A$ is the function $\bs{1}_A: S \to \{0, 1\}$ defined as follows: $\bs{1}_A(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A \end{cases}$
Thus, the indicator function of $A$ simply indicates whether or not $x \in A$ for each $x \in S$. Conversely, any function on $S$ that just takes the values 0 and 1 is an indicator function.
If $f: S \to \{0, 1\}$ then $f$ is the indicator function of the set $A = f^{-1}\{1\} = \{x \in S: f(x) = 1\}$.
Thus, there is a one-to-one correspondence between $\mathscr{P}(S)$, the power set of $S$, and the collection of indicator functions $\{0, 1\}^S$. The next result shows how the set algebra of subsets corresponds to the arithmetic algebra of the indicator functions.
Suppose that $A, \, B \subseteq S$. Then
1. $\bs{1}_{A \cap B} = \bs{1}_A \, \bs{1}_B = \min\left\{\bs{1}_A, \bs{1}_B\right\}$
2. $\bs{1}_{A \cup B} = 1 - \left(1 - \bs{1}_A\right)\left(1 - \bs{1}_B\right) = \max\left\{\bs{1}_A, \bs{1}_B\right\}$
3. $\bs{1}_{A^c} = 1 - \bs{1}_A$
4. $\bs{1}_{A \setminus B} = \bs{1}_A \left(1 - \bs{1}_B\right)$
5. $A \subseteq B$ if and only if $\bs{1}_A \le \bs{1}_B$
Proof
1. Note that both functions on the right just take the values 0 and 1. Moreover, $\bs{1}_A(x) \bs{1}_B(x) = \min\left\{\bs{1}_A(x), \bs{1}_B(x)\right\} = 1$ if and only if $x \in A$ and $x \in B$.
2. Note that both functions on the right just take the values 0 and 1. Moreover, $1 - \left(1 - \bs{1}_A(x)\right)\left(1 - \bs{1}_B(x)\right) = \max\left\{\bs{1}_A(x), \bs{1}_B(x)\right\} = 1$ if and only if $x \in A$ or $x \in B$.
3. Note that $1 - \bs{1}_A$ just takes the values 0 and 1. Moreover, $1 - \bs{1}_A(x) = 1$ if and only if $x \notin A$.
4. Note that $\bs{1}_{A \setminus B} = \bs{1}_{A \cap B^c} = \bs{1}_A \bs{1}_{B^c} = \bs{1}_A \left(1 - \bs{1}_B\right)$ by parts (a) and (c).
5. Since both functions just take the values 0 and 1, note that $\bs{1}_A \le \bs{1}_B$ if and only if $\bs{1}_A(x) = 1$ implies $\bs{1}_B(x) = 1$. But in turn, this is equivalent to $A \subseteq B$.
The result in part (a) extends to arbitrary intersections and the result in part (b) extends to arbitrary unions.
Suppose that $\{A_i: i \in I\}$ is a collection of subsets of $S$, where $I$ is a nonempty index set. Then
1. $\bs{1}_{\bigcap_{i \in I} A_i} = \prod_{i \in I} \bs{1}_{A_i} = \min\left\{\bs{1}_{A_i}: i \in I\right\}$
2. $\bs{1}_{\bigcup_{i \in I} A_i} = 1 - \prod_{i \in I}\left(1 - \bs{1}_{A_i}\right) = \max\left\{\bs{1}_{A_i}: i \in I\right\}$
Proof
In general, a product over an infinite index set may not make sense. But if all of the factors are either 0 or 1, as they are here, then we can simply define the product to be 1 if all of the factors are 1, and 0 otherwise.
1. The functions in the middle and on the right just take the values 0 and 1. Moreover, both take the value 1 at $x \in S$ if and only if $x \in A_i$ for every $i \in I$.
2. The functions in the middle and on the right just take the values 0 and 1. Moreover, both take the value 1 at $x \in S$ if and only if $x \in A_i$ for some $i \in I$.
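Here is a small computational check of the indicator algebra (the universal set and the subsets are our own choices):

```python
S = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

ind = lambda C: (lambda x: 1 if x in C else 0)   # the indicator function of C
iA, iB = ind(A), ind(B)

for x in S:
    assert ind(A & B)(x) == iA(x) * iB(x)                  # intersection
    assert ind(A | B)(x) == 1 - (1 - iA(x)) * (1 - iB(x))  # union
    assert ind(S - A)(x) == 1 - iA(x)                      # complement
```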
Multisets
A multiset is like an ordinary set, except that elements may be repeated. A multiset $A$ (with elements from a universal set $S$) can be uniquely associated with its multiplicity function $m_A: S \to \N$, where $m_A(x)$ is the number of times that element $x$ is in $A$ for each $x \in S$. So the multiplicity function of a multiset plays the same role that an indicator function does for an ordinary set. Multisets arise naturally when objects are sampled with replacement (but without regard to order) from a population. Various sampling models are explored in the section on Combinatorial Structures. We will not go into detail about the operations on multisets, but the definitions are straightforward generalizations of those for ordinary sets.
Suppose that $A$ and $B$ are multisets with elements from the universal set $S$. Then
1. $A \subseteq B$ if and only if $m_A \le m_B$
2. $m_{A \cup B} = \max\{m_A, m_B\}$
3. $m_{A \cap B} = \min\{m_A, m_B\}$
4. $m_{A + B} = m_A + m_B$
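In Python, `collections.Counter` models a multiset, with the counts playing the role of the multiplicity function; its operators `&`, `|`, and `+` compute exactly the min, max, and sum of multiplicities described above. A minimal sketch (our example multisets):

```python
from collections import Counter

m_A = Counter({'a': 2, 'b': 3, 'c': 1, 'e': 4})
m_B = Counter({'a': 1, 'b': 5, 'd': 2})

assert m_A & m_B == Counter({'a': 1, 'b': 3})        # intersection: min of multiplicities
assert (m_A | m_B)['b'] == max(m_A['b'], m_B['b'])   # union: max of multiplicities
assert (m_A + m_B)['a'] == m_A['a'] + m_B['a']       # sum of multisets
```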
Product Spaces
Using functions, we can generalize the Cartesian products studied earlier. In this discussion, we suppose that $S_i$ is a set for each $i$ in a nonempty index set $I$.
Define the product set $\prod_{i \in I} S_i = \left\{x: x \text{ is a function from } I \text{ into } \bigcup_{i \in I} S_i \text{ such that } x(i) \in S_i \text{ for each } i \in I\right\}$
Note that except for being nonempty, there are no assumptions on the cardinality of the index set $I$. Of course, if $I = \{1, 2, \ldots, n\}$ for some $n \in \N_+$, or if $I = \N_+$, then this construction reduces to $S_1 \times S_2 \times \cdots \times S_n$ and to $S_1 \times S_2 \times \cdots$, respectively. Since we want to make the notation more closely resemble that of simple Cartesian products, we will write $x_i$ instead of $x(i)$ for the value of the function $x$ at $i \in I$, and we sometimes refer to this value as the $i$th coordinate of $x$. Finally, note that if $S_i = S$ for each $i \in I$, then $\prod_{i \in I} S_i$ is simply the set of all functions from $I$ into $S$, which we denoted by $S^I$ above.
For $j \in I$ define the projection $p_j: \prod_{i \in I} S_i \to S_j$ by $p_j(x) = x_j$ for $x \in \prod_{i \in I} S_i$.
So $p_j(x)$ is just the $j$th coordinate of $x$. The projections are of basic importance for product spaces. In particular, we have a better way of looking at projections of a subset of a product set.
For $A \subseteq \prod_{i \in I} S_i$ and $j \in I$, the forward image $p_j(A)$ is the projection of $A$ onto $S_j$.
Proof
Note that $p_j(A) = \{p_j(x): x \in A\} = \{x_j: x \in A\}$, the set of all $j$th coordinates of the points in $A$.
So the properties of projection that we studied in the last section are just special cases of the properties of forward images. Projections also allow us to get coordinate functions in a simple way.
Suppose that $R$ is a set, and that $f: R \to \prod_{i \in I} S_i$. If $j \in I$ then $p_j \circ f : R \to S_j$ is the $j$th coordinate function of $f$.
Proof
Note that for $x \in R$, $(p_j \circ f)(x) = p_j[f(x)] = f_j(x)$, the $j$th coordinate of $f(x) \in \prod_{i \in I} S_i$.
This will look more familiar for a simple Cartesian product. If $f: R \to S_1 \times S_2 \times \cdots \times S_n$, then $f = (f_1, f_2, \ldots, f_n)$ where $f_j: R \to S_j$ is the $j$th coordinate function for $j \in \{1, 2, \ldots, n\}$.
Cross sections of a subset of a product set can be expressed in terms of inverse images of a function. First we need some additional notation. Suppose that our index set $I$ has at least two elements. For $j \in I$ and $u \in S_j$, define $j_u : \prod_{i \in I - \{j\}} S_i \to \prod_{i \in I} S_i$ by $j_u (x) = y$ where $y_i = x_i$ for $i \in I - \{j\}$ and $y_j = u$. In words, $j_u$ takes a point $x \in \prod_{i \in I - \{j\}} S_i$ and assigns $u$ to coordinate $j$ to produce the point $y \in \prod_{i \in I} S_i$.
In the setting above, if $j \in I$, $u \in S_j$ and $A \subseteq \prod_{i \in I} S_i$ then $j_u^{-1}(A)$ is the cross section of $A$ in the $j$th coordinate at $u$.
Proof
This follows from the definition of cross section: $j_u^{-1}(A)$ is the set of all $x \in \prod_{i \in I - \{j\}} S_i$ such that $y$ defined above is in $A$ and has $j$th coordinate $u$.
Let's look at this for the product of two sets $S$ and $T$. For $x \in S$, the function $1_x: T \to S \times T$ is given by $1_x(y) = (x, y)$. Similarly, for $y \in T$, the function $2_y: S \to S \times T$ is given by $2_y(x) = (x, y)$. Suppose now that $A \subseteq S \times T$. If $x \in S$, then $1_x^{-1}(A) = \{y \in T: (x, y) \in A\}$, the very definition of the cross section of $A$ in the first coordinate at $x$. Similarly, if $y \in T$, then $2_y^{-1}(A) = \{x \in S: (x, y) \in A\}$, the very definition of the cross section of $A$ in the second coordinate at $y$. This construction is not particularly important except to show that cross sections are inverse images. Thus the fact that cross sections preserve all of the set operations is a simple consequence of the fact that inverse images generally preserve set operations.
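A short sketch for the two-set case (the sets and the subset $A$ are illustrative choices) shows cross sections computed as inverse images of the functions $1_x$ and $2_y$:

```python
S = {1, 2, 3}
T = {'u', 'v'}
A = {(1, 'u'), (2, 'u'), (2, 'v')}   # a subset of S x T

# Inverse image of A under 1_x : y -> (x, y), the cross section at x.
def cross_section_first(x):
    return {y for y in T if (x, y) in A}

# Inverse image of A under 2_y : x -> (x, y), the cross section at y.
def cross_section_second(y):
    return {x for x in S if (x, y) in A}

assert cross_section_first(2) == {'u', 'v'}
assert cross_section_second('u') == {1, 2}
```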
Operators
Sometimes functions have special interpretations in certain settings.
Suppose that $S$ is a set.
1. A function $f: S \to S$ is sometimes called a unary operator on $S$.
2. A function $g: S \times S \to S$ is sometimes called a binary operator on $S$.
As the names suggest, a unary operator $f$ operates on an element $x \in S$ to produce a new element $f(x) \in S$. Similarly, a binary operator $g$ operates on a pair of elements $(x, y) \in S \times S$ to produce a new element $g(x, y) \in S$. The arithmetic operators are quintessential examples:
The following are operators on $\R$:
1. $\text{minus}(x) = -x$ is a unary operator.
2. $\text{sum}(x, y) = x + y$ is a binary operator.
3. $\text{product}(x, y) = x\,y$ is a binary operator.
4. $\text{difference}(x, y) = x - y$ is a binary operator.
For a fixed universal set $S$, the set operators studied in the section on Sets provide other examples.
For a given set $S$, the following are operators on $\mathscr P(S)$:
1. $\text{complement}(A) = A^c$ is a unary operator.
2. $\text{union}(A, B) = A \cup B$ is a binary operator.
3. $\text{intersect}(A, B) = A \cap B$ is a binary operator.
4. $\text{difference}(A, B) = A \setminus B$ is a binary operator.
As these examples illustrate, a binary operator is often written as $x \, f \, y$ rather than $f(x, y)$. Still, it is useful to know that operators are simply functions of a special type.
Suppose that $f$ is a unary operator on a set $S$, $g$ is a binary operator on $S$, and that $A \subseteq S$.
1. $A$ is closed under $f$ if $x \in A$ implies $f(x) \in A$.
2. $A$ is closed under $g$ if $(x, y) \in A \times A$ implies $g(x, y) \in A$.
Thus if $A$ is closed under the unary operator $f$, then $f$ restricted to $A$ is a unary operator on $A$. Similarly, if $A$ is closed under the binary operator $g$, then $g$ restricted to $A \times A$ is a binary operator on $A$. Let's return to our most basic example.
For the arithmetic operators on $\R$,
1. $\N$ is closed under sum and product, but not under minus and difference.
2. $\Z$ is closed under sum, product, minus, and difference.
3. $\Q$ is closed under sum, product, minus, and difference.
Many properties that you are familiar with for special operators (such as the arithmetic and set operators) can now be formulated generally.
Suppose that $f$ and $g$ are binary operators on a set $S$. In the following definitions, $x$, $y$, and $z$ are arbitrary elements of $S$.
1. $f$ is commutative if $f(x, y) = f(y, x)$, that is, $x \, f \, y = y \, f \, x$
2. $f$ is associative if $f(x, f(y, z)) = f(f(x, y), z)$, that is, $x \, f \, (y \, f \, z) = (x \, f \, y) \, f \, z$.
3. $g$ distributes over $f$ (on the left) if $g(x, f(y, z)) = f(g(x, y), g(x, z))$, that is, $x \, g \, (y \, f \, z) = (x \, g \, y) \, f \, (x \, g \, z)$
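On a finite set, these properties can be verified by brute force over all tuples. A minimal sketch (the helper names are ours) using the arithmetic operators:

```python
from itertools import product

def is_commutative(g, S):
    return all(g(x, y) == g(y, x) for x, y in product(S, repeat=2))

def is_associative(g, S):
    return all(g(x, g(y, z)) == g(g(x, y), z) for x, y, z in product(S, repeat=3))

def distributes_over(g, f, S):
    return all(g(x, f(y, z)) == f(g(x, y), g(x, z))
               for x, y, z in product(S, repeat=3))

S = range(-3, 4)
plus = lambda x, y: x + y
times = lambda x, y: x * y
minus = lambda x, y: x - y

assert is_commutative(plus, S) and not is_commutative(minus, S)
assert is_associative(plus, S) and not is_associative(minus, S)
assert distributes_over(times, plus, S)       # product distributes over sum
assert not distributes_over(plus, times, S)   # sum does not distribute over product
```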
The Axiom of Choice
Suppose that $\mathscr{S}$ is a collection of nonempty subsets of a set $S$. The axiom of choice states that there exists a function $f: \mathscr{S} \to S$ with the property that $f(A) \in A$ for each $A \in \mathscr{S}$. The function $f$ is known as a choice function.
Stripped of most of the mathematical jargon, the idea is very simple. Since each set $A \in \mathscr{S}$ is nonempty, we can select an element of $A$; we will call the element we select $f(A)$, and thus our selections define a function. In fact, you may wonder why we need an axiom at all. The problem is that we have not given a rule (or procedure or algorithm) for selecting the elements of the sets in the collection. Indeed, we may not know enough about the sets in the collection to define a specific rule, so in such a case, the axiom of choice simply guarantees the existence of a choice function. Some mathematicians, known as constructionists, do not accept the axiom of choice, and insist on well-defined rules for constructing functions.
A nice consequence of the axiom of choice is a type of duality between one-to-one functions and onto functions.
Suppose that $f$ is a function from a set $S$ onto a set $T$. There exists a one-to-one function $g$ from $T$ into $S$.
Proof
For each $y \in T$, the set $f^{-1}\{y\}$ is non-empty, since $f$ is onto. By the axiom of choice, we can select an element $g(y)$ from $f^{-1}\{y\}$ for each $y \in T$. The resulting function $g$ is one-to-one.
Suppose that $f$ is a one-to-one function from a set $S$ into a set $T$. There exists a function $g$ from $T$ onto $S$.
Proof
Fix a special element $x_0 \in S$. If $y \in \text{range}(f)$, there exists a unique $x \in S$ with $f(x) = y$. Define $g(y) = x$. If $y \notin \text{range}(f)$, define $g(y) = x_0$. The function $g$ is onto.
Computational Exercises
Some Elementary Functions
Each of the following rules defines a function from $\R$ into $\R$.
• $f(x) = x^2$
• $g(x) = \sin(x)$
• $h(x) = \lfloor x \rfloor$
• $u(x) = \frac{e^x}{1 + e^x}$
Find the range of the function and determine if the function is one-to-one in each of the following cases:
1. $f$
2. $g$
3. $h$
4. $u$
Answer
1. Range $[0, \infty)$. Not one-to-one.
2. Range $[-1, 1]$. Not one-to-one.
3. Range $\Z$. Not one-to-one.
4. Range $(0, 1)$. One-to-one.
Find the following inverse images:
1. $f^{-1}[4, 9]$
2. $g^{-1}\{0\}$
3. $h^{-1}\{2, 3, 4\}$
Answer
1. $[-3, -2] \cup [2, 3]$
2. $\{n \, \pi: n \in \Z\}$
3. $[2, 5)$
The function $u$ is one-to-one. Find (that is, give the domain and rule for) the inverse function $u^{-1}$.
Answer
$u^{-1}(p) = \ln\left(\frac{p}{1-p}\right)$ for $p \in (0, 1)$
Give the rule and find the range for each of the following functions:
1. $f \circ g$
2. $g \circ f$
3. $h \circ g \circ f$
Answer
1. $(f \circ g)(x) = \sin^2(x)$. Range $[0, 1]$
2. $(g \circ f)(x) = \sin\left(x^2\right)$. Range $[-1, 1]$
3. $(h \circ g \circ f)(x) = \lfloor \sin(x^2) \rfloor$. Range $\{-1, 0, 1\}$
Note that $f \circ g$ and $g \circ f$ are well-defined functions from $\R$ into $\R$, but $f \circ g \ne g \circ f$.
Dice
Let $S = \{1, 2, 3, 4, 5, 6\}^2$. This is the set of possible outcomes when a pair of standard dice are thrown. Let $f$, $g$, $u$, and $v$ be the functions from $S$ into $\Z$ defined by the following rules:
• $f(x, y) = x + y$
• $g(x, y) = y - x$
• $u(x, y) = \min\{x, y\}$
• $v(x, y) = \max\{x, y\}$
In addition, let $F$ and $U$ be the functions defined by $F = (f, g)$ and $U = (u, v)$.
Find the range of each of the following functions:
1. $f$
2. $g$
3. $u$
4. $v$
5. $U$
Answer
1. $\{2, 3, 4, \ldots, 12\}$
2. $\{-5, -4, \ldots, 4, 5\}$
3. $\{1, 2, 3, 4, 5, 6\}$
4. $\{1, 2, 3, 4, 5, 6\}$
5. $\left\{(i, j) \in \{1, 2, 3, 4, 5, 6\}^2: i \le j\right\}$
Give each of the following inverse images in list form:
1. $f^{-1}\{6\}$
2. $u^{-1}\{3\}$
3. $v^{-1}\{4\}$
4. $U^{-1}\{(3, 4)\}$
Answer
1. $\{(1,5), (2,4), (3,3), (4,2), (5,1)\}$
2. $\{(3,3), (3,4), (4,3), (3,5), (5,3), (3,6), (6,3)\}$
3. $\{(1,4), (4,1), (2,4), (4,2), (3,4), (4,3), (4,4)\}$
4. $\{(3,4), (4,3)\}$
Find each of the following compositions:
1. $f \circ U$
2. $g \circ U$
3. $u \circ F$
4. $v \circ F$
5. $F \circ U$
6. $U \circ F$
Answer
1. $f \circ U = f$
2. $g \circ U = \left|g\right|$
3. $u \circ F = g$
4. $v \circ F = f$
5. $F \circ U = \left(f, \left|g\right|\right)$
6. $U \circ F = (g, f)$
Note that while $f \circ U$ is well-defined, $U \circ f$ is not. Note also that $f \circ U = f$ even though $U$ is not the identity function on $S$.
Bit Strings
Let $n \in \N_+$ and let $S = \{0, 1\}^n$ and $T = \{0, 1, \ldots, n\}$. Recall that the elements of $S$ are bit strings of length $n$, and could represent the possible outcomes of $n$ tosses of a coin (where 1 means heads and 0 means tails). Let $f: S \to T$ be the function defined by $f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^n x_i$. Note that $f(\bs{x})$ is just the number of 1s in the bit string $\bs{x}$. Let $g: T \to S$ be the function defined by $g(k) = \bs{x}_k$ where $\bs{x}_k$ denotes the bit string with $k$ 1s followed by $n - k$ 0s.
Find each of the following
1. $f \circ g$
2. $g \circ f$
Answer
1. $f \circ g: T \to T$ and $\left(f \circ g\right)(k) = k$.
2. $g \circ f: S \to S$ and $\left(g \circ f\right)(\bs{x}) = \bs{x}_k$ where $k = f(\bs{x}) = \sum_{i=1}^n x_i$. In words, $\left(g \circ f\right)(\bs{x})$ is the bit string with the same number of 1s as $\bs{x}$, but rearranged so that all the 1s come first.
In the previous exercise, note that $f \circ g$ and $g \circ f$ are both well-defined, but have different domains, and so of course are not the same. Note also that $f \circ g$ is the identity function on $T$, but $f$ is not the inverse of $g$. Indeed $f$ is not one-to-one, and so does not have an inverse. However, $f$ restricted to $\left\{\bs{x}_k: k \in T\right\}$ (the range of $g$) is one-to-one and is the inverse of $g$.
Let $n = 4$. Give $f^{-1}(\{k\})$ in list form for each $k \in T$.
Answer
1. $f^{-1}(\{0\}) = \{0000\}$
2. $f^{-1}(\{1\}) = \{1000, 0100, 0010, 0001\}$
3. $f^{-1}(\{2\}) = \{1100, 1010, 1001, 0110, 0101, 0011\}$
4. $f^{-1}(\{3\}) = \{1110, 1101, 1011, 0111\}$
5. $f^{-1}(\{4\}) = \{1111\}$
Again let $n = 4$. Let $A = \{1000, 1010\}$ and $B = \{1000, 1100\}$. Give each of the following in list form:
1. $f(A)$
2. $f(B)$
3. $f(A \cap B)$
4. $f(A) \cap f(B)$
5. $f^{-1}\left(f(A)\right)$
Answer
1. $\{1, 2\}$
2. $\{1, 2\}$
3. $\{1\}$
4. $\{1, 2\}$
5. $\{1000, 0100, 0010, 0001, 1100, 1010, 1001, 0110, 0101, 0011\}$
In the previous exercise, note that $f(A \cap B) \subset f(A) \cap f(B)$ and $A \subset f^{-1}\left(f(A)\right)$.
Indicator Functions
Suppose that $A$ and $B$ are subsets of a universal set $S$. Express, in terms of $\bs{1}_A$ and $\bs{1}_B$, the indicator function of each of the 14 non-trivial sets that can be constructed from $A$ and $B$. Use the Venn diagram app to help.
Answer
1. $\bs{1}_A$
2. $\bs{1}_B$
3. $\bs{1}_{A^c} = 1 - \bs{1}_A$
4. $\bs{1}_{B^c} = 1 - \bs{1}_B$
5. $\bs{1}_{A \cap B} = \bs{1}_A \bs{1}_B$
6. $\bs{1}_{A \cup B} = \bs{1}_A + \bs{1}_B - \bs{1}_A \bs{1}_B$
7. $\bs{1}_{A \cap B^c} = \bs{1}_A - \bs{1}_A \bs{1}_B$
8. $\bs{1}_{B \cap A^c} = \bs{1}_B - \bs{1}_A \bs{1}_B$
9. $\bs{1}_{A \cup B^c} = 1 - \bs{1}_B + \bs{1}_A \bs{1}_B$
10. $\bs{1}_{B \cup A^c} = 1 - \bs{1}_A + \bs{1}_A \bs{1}_B$
11. $\bs{1}_{A^c \cap B^c} = 1 - \bs{1}_A - \bs{1}_B + \bs{1}_A \bs{1}_B$
12. $\bs{1}_{A^c \cup B^c} = 1 - \bs{1}_A \bs{1}_B$
13. $\bs{1}_{(A \cap B^c) \cup (B \cap A^c)} = \bs{1}_A + \bs{1}_B - 2 \bs{1}_A \bs{1}_B$
14. $\bs{1}_{(A \cap B) \cup (A^c \cap B^c)} = 1 - \bs{1}_A - \bs{1}_B + 2 \bs{1}_A \bs{1}_B$
Suppose that $A$, $B$, and $C$ are subsets of a universal set $S$. Give the indicator function of each of the following, in terms of $\bs{1}_A$, $\bs{1}_B$, and $\bs{1}_C$ in sum-product form:
1. $D = \{ x \in S: x \text{ is an element of exactly one of the given sets}\}$
2. $E = \{ x \in S: x \text{ is an element of exactly two of the given sets}\}$
Answer
1. $\bs{1}_D = \bs{1}_A + \bs{1}_B + \bs{1}_C - 2 \left(\bs{1}_A \bs{1}_B + \bs{1}_A \bs{1}_C + \bs{1}_B \bs{1}_C\right) + 3 \bs{1}_A \bs{1}_B \bs{1}_C$
2. $\bs{1}_E = \bs{1}_A \bs{1}_B + \bs{1}_A \bs{1}_C + \bs{1}_B \bs{1}_C - 3 \bs{1}_A \bs{1}_B \bs{1}_C$
Operators
Recall the standard arithmetic operators on $\R$ discussed above.
We all know that sum is commutative and associative, and that product distributes over sum.
1. Is difference commutative?
2. Is difference associative?
3. Does product distribute over difference?
4. Does sum distribute over product?
Answer
1. No. $x - y \ne y - x$
2. No. $x - (y - z) \ne (x - y) - z$
3. Yes. $x (y - z) = (x y) - (x z)$
4. No. $x + (y z) \ne (x + y) (x + z)$
Multisets
Express the multiset $A$ in list form that has the multiplicity function $m: \{a, b, c, d, e\} \to \N$ given by $m(a) = 2$, $m(b) = 3$, $m(c) = 1$, $m(d) = 0$, $m(e) = 4$.
Answer
$A = \{a, a, b, b, b, c, e, e, e, e\}$
Express the prime factors of 360 as a multiset in list form.
Answer
$\{2, 2, 2, 3, 3, 5\}$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$
Relations play a fundamental role in probability theory, as in most other areas of mathematics.
Definitions and Constructions
Suppose that $S$ and $T$ are sets. A relation $R$ from $S$ to $T$ is a subset of the product set $S \times T$.
1. The domain of $R$ is the set of first coordinates: $\text{domain}(R) = \left\{x \in S: (x, y) \in R \text{ for some } y \in T\right\}$.
2. The range of $R$ is the set of second coordinates: $\text{range}(R) = \left\{y \in T: (x, y) \in R \text{ for some } x \in S\right\}$.
A relation from a set $S$ to itself is a relation on $S$.
As the name suggests, a relation $R$ from $S$ into $T$ is supposed to define a relationship between the elements of $S$ and the elements of $T$, and so we usually use the more suggestive notation $x\,R\,y$ when $(x, y) \in R$. Note that the domain of $R$ is the projection of $R$ onto $S$ and the range of $R$ is the projection of $R$ onto $T$.
Basic Examples
Suppose that $S$ is a set and recall that $\mathscr{P}(S)$ denotes the power set of $S$, the collection of all subsets of $S$. The membership relation $\in$ from $S$ to $\mathscr{P}(S)$ is perhaps the most important and basic relationship in mathematics. Indeed, for us, it's a primitive (undefined) relationship—given $x$ and $A$, we assume that we understand the meaning of the statement $x \in A$.
Another basic primitive relation is the equality relation $=$ on a given set of objects $S$. That is, given two objects $x$ and $y$, we assume that we understand the meaning of the statement $x = y$.
Other basic relations that you have seen are
1. The subset relation $\subseteq$ on $\mathscr{P}(S)$.
2. The order relation $\le$ on $\R$
These two belong to a special class of relations known as partial orders that we will study in the next section. Note that a function $f$ from $S$ into $T$ is a special type of relation. To compare the two types of notation (relation and function), note that $x\,f\,y$ means that $y = f(x)$.
Constructions
Since a relation is just a set of ordered pairs, the set operations can be used to build new relations from existing ones.
If $Q$ and $R$ are relations from $S$ to $T$, then so are $Q \cup R$, $Q \cap R$, and $Q \setminus R$.
1. $x(Q \cup R)y$ if and only if $x\,Q\,y$ or $x\,R\,y$.
2. $x(Q \cap R)y$ if and only if $x\,Q\,y$ and $x\,R\,y$.
3. $x(Q \setminus R)y$ if and only if $x\,Q\,y$ but not $x\,R\,y$.
4. If $Q \subseteq R$ then $x\,Q\,y$ implies $x\,R\,y$.
If $R$ is a relation from $S$ to $T$ and $Q \subseteq R$, then $Q$ is a relation from $S$ to $T$.
The restriction of a relation defines a new relation.
If $R$ is a relation on $S$ and $A \subseteq S$ then $R_A = R \cap (A \times A)$ is a relation on $A$, called the restriction of $R$ to $A$.
The inverse of a relation also defines a new relation.
If $R$ is a relation from $S$ to $T$, the inverse of $R$ is the relation from $T$ to $S$ defined by $y\,R^{-1}\,x \text{ if and only if } x\,R\,y$
Equivalently, $R^{-1} = \{(y, x): (x, y) \in R\}$. Note that any function $f$ from $S$ into $T$ has an inverse relation, but only when $f$ is one-to-one is the inverse relation also a function (the inverse function). Composition is another natural way to create new relations from existing ones.
Suppose that $Q$ is a relation from $S$ to $T$ and that $R$ is a relation from $T$ to $U$. The composition $Q \circ R$ is the relation from $S$ to $U$ defined as follows: for $x \in S$ and $z \in U$, $x(Q \circ R)z$ if and only if there exists $y \in T$ such that $x\,Q\,y$ and $y\,R\,z$.
Note that the notation is inconsistent with the notation used for composition of functions, essentially because relations are read from left to right, while functions are read from right to left. Hopefully, the inconsistency will not cause confusion, since we will always use function notation for functions.
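Since relations are just sets of ordered pairs, composition is easy to compute directly. A minimal sketch (the relations are our own examples), read left to right as in the definition:

```python
Q = {(1, 'a'), (2, 'a'), (2, 'b')}   # a relation from S to T
R = {('a', 'X'), ('b', 'Y')}         # a relation from T to U

# x (Q o R) z iff there is some y with x Q y and y R z.
def compose(Q, R):
    return {(x, z) for (x, y) in Q for (w, z) in R if y == w}

assert compose(Q, R) == {(1, 'X'), (2, 'X'), (2, 'Y')}
```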
Basic Properties
The important classes of relations that we will study in the next couple of sections are characterized by certain basic properties. Here are the definitions:
Suppose that $R$ is a relation on $S$.
1. $R$ is reflexive if $x\,R\,x$ for all $x \in S$.
2. $R$ is irreflexive if no $x \in S$ satisfies $x\,R\,x$.
3. $R$ is symmetric if $x\,R\,y$ implies $y\,R\,x$ for all $x, \, y \in S$.
4. $R$ is anti-symmetric if $x\,R\,y$ and $y\,R\,x$ implies $x = y$ for all $x, \, y \in S$.
5. $R$ is transitive if $x\,R\,y$ and $y\,R\,z$ implies $x\,R\,z$ for all $x, \, y, \, z \in S$.
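For a relation on a finite set, given as a set of ordered pairs, each of these properties can be checked directly. A minimal sketch (using divisibility on $\{1, \ldots, 6\}$ as the example relation):

```python
S = range(1, 7)
R = {(x, y) for x in S for y in S if y % x == 0}   # x R y iff x divides y

reflexive     = all((x, x) in R for x in S)
irreflexive   = all((x, x) not in R for x in S)
symmetric     = all((y, x) in R for (x, y) in R)
antisymmetric = all(x == y for (x, y) in R if (y, x) in R)
transitive    = all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

assert reflexive and antisymmetric and transitive   # in fact, a partial order
assert not symmetric and not irreflexive
```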
The proofs of the following results are straightforward, so be sure to try them yourself before reading the ones in the text.
A relation $R$ on $S$ is reflexive if and only if the equality relation $=$ on $S$ is a subset of $R$.
Proof
This follows from the definitions. $R$ is reflexive if and only if $(x, x) \in R$ for all $x \in S$.
A relation $R$ on $S$ is symmetric if and only if $R^{-1} = R$.
Proof
Suppose that $R$ is symmetric. If $(x, y) \in R$ then $(y, x) \in R$ and hence $(x, y) \in R^{-1}$. If $(x, y) \in R^{-1}$ then $(y, x) \in R$ and hence $(x, y) \in R$. Thus $R = R^{-1}$. Conversely, suppose $R = R^{-1}$. If $(x, y) \in R$ then $(x, y) \in R^{-1}$ and hence $(y, x) \in R$.
A relation $R$ on $S$ is transitive if and only if $R \circ R \subseteq R$.
Proof
Suppose that $R$ is transitive. If $(x, z) \in R \circ R$ then there exists $y \in S$ such that $(x, y) \in R$ and $(y, z) \in R$. But then $(x, z) \in R$ by transitivity. Hence $R \circ R \subseteq R$. Conversely, suppose that $R \circ R \subseteq R$. If $(x, y) \in R$ and $(y, z) \in R$ then $(x, z) \in R \circ R$ and hence $(x, z) \in R$. Hence $R$ is transitive.
A relation $R$ on $S$ is antisymmetric if and only if $R \cap R^{-1}$ is a subset of the equality relation $=$ on $S$.
Proof
Restated, this result is that $R$ is antisymmetric if and only if $(x, y) \in R \cap R^{-1}$ implies $x = y$. Thus suppose that $R$ is antisymmetric. If $(x, y) \in R \cap R^{-1}$ then $(x, y) \in R$ and $(x, y) \in R^{-1}$. But then $(y, x) \in R$ so by antisymmetry, $x = y$. Conversely suppose that $(x, y) \in R \cap R^{-1}$ implies $x = y$. If $(x, y) \in R$ and $(y, x) \in R$ then $(x, y) \in R^{-1}$ and hence $(x, y) \in R \cap R^{-1}$. Thus $x = y$ so $R$ is antisymmetric.
Suppose that $Q$ and $R$ are relations on $S$. For each property below, if both $Q$ and $R$ have the property, then so does $Q \cap R$.
1. reflexive
2. symmetric
3. transitive
Proof
1. Suppose that $Q$ and $R$ are reflexive. Then $(x, x) \in Q$ and $(x, x) \in R$ for each $x \in S$ and hence $(x, x) \in Q \cap R$ for each $x \in S$. Thus $Q \cap R$ is reflexive.
2. Suppose that $Q$ and $R$ are symmetric. If $(x, y) \in Q \cap R$ then $(x, y) \in Q$ and $(x, y) \in R$. Hence $(y, x) \in Q$ and $(y, x) \in R$ so $(y, x) \in Q \cap R$. Hence $Q \cap R$ is symmetric.
3. Suppose that $Q$ and $R$ are transitive. If $(x, y) \in Q \cap R$ and $(y, z) \in Q \cap R$ then $(x, y) \in Q$, $(x, y) \in R$, $(y, z) \in Q$, and $(y, z) \in R$. Hence $(x, z) \in Q$ and $(x, z) \in R$ so $(x, z) \in Q \cap R$. Hence $Q \cap R$ is transitive.
Suppose that $R$ is a relation on a set $S$.
1. Give an explicit definition for the property $R$ is not reflexive.
2. Give an explicit definition for the property $R$ is not irreflexive.
3. Are any of the properties $R$ is reflexive, $R$ is not reflexive, $R$ is irreflexive, $R$ is not irreflexive equivalent?
Answer
1. $R$ is not reflexive if and only if there exists $x \in S$ such that $(x, x) \notin R$.
2. $R$ is not irreflexive if and only if there exists $x \in S$ such that $(x, x) \in R$.
3. Nope.
Suppose that $R$ is a relation on a set $S$.
1. Give an explicit definition for the property $R$ is not symmetric.
2. Give an explicit definition for the property $R$ is not antisymmetric.
3. Are any of the properties $R$ is symmetric, $R$ is not symmetric, $R$ is antisymmetric, $R$ is not antisymmetric equivalent?
Answer
1. $R$ is not symmetric if and only if there exist $x, \, y \in S$ such that $(x, y) \in R$ and $(y, x) \notin R$.
2. $R$ is not antisymmetric if and only if there exist distinct $x, \, y \in S$ such that $(x, y) \in R$ and $(y, x) \in R$.
3. Nope.
Computational Exercises
Let $R$ be the relation defined on $\R$ by $x\,R\,y$ if and only if $\sin(x) = \sin(y)$. Determine if $R$ has each of the following properties:
1. reflexive
2. symmetric
3. transitive
4. irreflexive
5. antisymmetric
Answer
1. yes
2. yes
3. yes
4. no
5. no
The relation $R$ in the previous exercise is a member of an important class of equivalence relations.
Let $R$ be the relation defined on $\R$ by $x\,R\,y$ if and only if $x^2 + y^2 \le 1$. Determine if $R$ has each of the following properties:
1. reflexive
2. symmetric
3. transitive
4. irreflexive
5. antisymmetric
Answer
1. no
2. yes
3. no
4. no
5. no | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/01%3A_Foundations/1.03%3A_Relations.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Partial orders are a special class of relations that play an important role in probability theory.
Basic Theory
Definitions
A partial order on a set $S$ is a relation $\preceq$ on $S$ that is reflexive, anti-symmetric, and transitive. The pair $(S, \preceq)$ is called a partially ordered set. So for all $x, \ y, \ z \in S$:
1. $x \preceq x$, the reflexive property
2. If $x \preceq y$ and $y \preceq x$ then $x = y$, the antisymmetric property
3. If $x \preceq y$ and $y \preceq z$ then $x \preceq z$, the transitive property
As the name and notation suggest, a partial order is a type of ordering of the elements of $S$. Partial orders occur naturally in many areas of mathematics, including probability. A partial order on a set naturally gives rise to several other relations on the set.
Suppose that $\preceq$ is a partial order on a set $S$. The relations $\succeq$, $\prec$, $\succ$, $\perp$, and $\parallel$ are defined as follows:
1. $x \succeq y$ if and only if $y \preceq x$.
2. $x \prec y$ if and only if $x \preceq y$ and $x \ne y$.
3. $x \succ y$ if and only if $y \prec x$.
4. $x \perp y$ if and only if $x \preceq y$ or $y \preceq x$.
5. $x \parallel y$ if and only if neither $x \preceq y$ nor $y \preceq x$.
Note that $\succeq$ is the inverse of $\preceq$, and $\succ$ is the inverse of $\prec$. Note also that $x \preceq y$ if and only if either $x \prec y$ or $x = y$, so the relation $\prec$ completely determines the relation $\preceq$. The relation $\prec$ is sometimes called a strict or strong partial order to distinguish it from the ordinary (weak) partial order $\preceq$. Finally, note that $x \perp y$ means that $x$ and $y$ are related in the partial order, while $x \parallel y$ means that $x$ and $y$ are unrelated in the partial order. Thus, the relations $\perp$ and $\parallel$ are complements of each other, as sets of ordered pairs. A total or linear order is a partial order in which there are no unrelated elements.
A partial order $\preceq$ on $S$ is a total order or linear order if for every $x, \ y \in S$, either $x \preceq y$ or $y \preceq x$.
Suppose that $\preceq_1$ and $\preceq_2$ are partial orders on a set $S$. Then $\preceq_1$ is a sub-order of $\preceq_2$, or equivalently $\preceq_2$ is an extension of $\preceq_1$, if and only if $x \preceq_1 y$ implies $x \preceq_2 y$ for $x, \ y \in S$.
Thus if $\preceq_1$ is a suborder of $\preceq_2$, then as sets of ordered pairs, $\preceq_1$ is a subset of $\preceq_2$. We need one more relation that arises naturally from a partial order.
Suppose that $\preceq$ is a partial order on a set $S$. For $x, \ y \in S$, $y$ is said to cover $x$ if $x \prec y$ but no element $z \in S$ satisfies $x \prec z \prec y$.
If $S$ is finite, the covering relation completely determines the partial order, by virtue of the transitive property.
Suppose that $\preceq$ is a partial order on a finite set $S$. The covering graph or Hasse graph of $(S, \preceq)$ is the directed graph with vertex set $S$ and directed edge set $E$, where $(x, y) \in E$ if and only if $y$ covers $x$.
Thus, $x \prec y$ if and only if there is a directed path in the graph from $x$ to $y$. Hasse graphs are named for the German mathematician Helmut Hasse. The graphs are often drawn with the edges directed upward. In this way, the directions can be inferred without having to actually draw arrows.
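On a finite set the covering relation is easy to compute from the strict order: $y$ covers $x$ when $x \prec y$ and no $z$ lies strictly between. A minimal sketch (using divisibility on the divisors of 12 as the example):

```python
S = [1, 2, 3, 4, 6, 12]                      # the divisors of 12
less = lambda x, y: x != y and y % x == 0    # the strict order: x properly divides y

# Edges of the Hasse graph: y covers x when nothing lies strictly between.
covers = {(x, y) for x in S for y in S
          if less(x, y) and not any(less(x, z) and less(z, y) for z in S)}

assert covers == {(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)}
```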
Basic Examples
Of course, the ordinary order $\le$ is a total order on the set of real numbers $\R$. The subset partial order is one of the most important in probability theory:
Suppose that $S$ is a set. The subset relation $\subseteq$ is a partial order on $\mathscr{P}(S)$, the power set of $S$.
Proof
We proved this result in the section on sets. To review, recall that for $A, \ B \in \mathscr{P}(S)$, $A \subseteq B$ means that $x \in A$ implies $x \in B$. Also $A = B$ means that $x \in A$ if and only if $x \in B$. Thus
1. $A \subseteq A$
2. $A \subseteq B$ and $B \subseteq A$ if and only if $A = B$
3. $A \subseteq B$ and $B \subseteq C$ imply $A \subseteq C$
Here is a partial order that arises naturally from arithmetic.
Let $\mid$ denote the division relation on the set of positive integers $\N_+$. That is, $m \mid n$ if and only if there exists $k \in \N_+$ such that $n = k m$. Then
1. $\mid$ is a partial order on $\N_+$.
2. $\mid$ is a sub-order of the ordinary order $\le$.
Proof
1. Clearly $n \mid n$ for $n \in \N_+$, since $n = 1 \cdot n$, so $\mid$ is reflexive. Suppose $m \mid n$ and $n \mid m$, where $m, \ n \in \N_+$. Then there exist $j, \ k \in \N_+$ such that $n = k m$ and $m = j n$. Substituting gives $n = j k n$, and hence $j = k = 1$. Thus $m = n$ so $\mid$ is antisymmetric. Finally, suppose $m \mid n$ and $n \mid p$, where $m, \ n, \ p \in \N_+$. Then there exists $j, \ k \in \N_+$ such that $n = j m$ and $p = k n$. Substituting gives $p = j k m$, so $m \mid p$. Thus $\mid$ is transitive.
2. If $m, \ n \in \N_+$ and $m \mid n$, then there exists $k \in \N_+$ such that $n = k m$. Since $k \ge 1$, $m \le n$.
The set of functions from a set into a partial ordered set can itself be partially ordered in a natural way.
Suppose that $S$ is a set and that $(T, \preceq_T)$ is a partially ordered set, and let $\mathscr{S}$ denote the set of functions $f: S \to T$. The relation $\preceq$ on $\mathscr{S}$ defined by $f \preceq g$ if and only $f(x) \preceq_T g(x)$ for all $x \in S$ is a partial order on $\mathscr{S}$.
Proof
Suppose that $f, \, g, \, h \in \mathscr{S}$.
1. $f(x) \preceq_T f(x)$ for all $x \in S$, so $f \preceq f$.
2. If $f \preceq g$ and $g \preceq f$ then $f(x) \preceq_T g(x)$ and $g(x) \preceq_T f(x)$ for all $x \in S$. Hence $f(x) = g(x)$ for all $x \in S$ so $f = g$.
3. If $f \preceq g$ and $g \preceq h$ then $f(x) \preceq_T g(x)$ and $g(x) \preceq_T h(x)$ for all $x \in S$. Hence $f(x) \preceq_T h(x)$ for all $x \in S$ so $f \preceq h$.
Note that we don't need a partial order on the domain $S$.
Basic Properties
The proofs of the following basic properties are straightforward. Be sure to try them yourself before reading the ones in the text.
The inverse of a partial order is also a partial order.
Proof
Clearly the reflexive, antisymmetric and transitive properties hold for $\succeq$.
If $\preceq$ is a partial order on $S$ and $A$ is a subset of $S$, then the restriction of $\preceq$ to $A$ is a partial order on $A$.
Proof
The reflexive, antisymmetric, and transitive properties given above hold for all $x, \ y, \ z \in S$ and hence hold for all $x, \ y, \ z \in A$.
The following theorem characterizes relations that correspond to strict order.
Let $S$ be a set. A relation $\preceq$ is a partial order on $S$ if and only if $\prec$ is transitive and irreflexive.
Proof
Suppose that $\preceq$ is a partial order on $S$. Recall that $\prec$ is defined by $x \prec y$ if and only if $x \preceq y$ and $x \ne y$. If $x \prec y$ and $y \prec z$ then $x \preceq y$ and $y \preceq z$, and so $x \preceq z$. On the other hand, if $x = z$ then $x \preceq y$ and $y \preceq x$ so $x = y$, a contradiction. Hence $x \ne z$ and so $x \prec z$. Therefore $\prec$ is transitive. If $x \prec y$ then $x \ne y$ by definition, so $\prec$ is irreflexive.
Conversely, suppose that $\prec$ is a transitive and irreflexive relation on $S$. Recall that $\preceq$ is defined by $x \preceq y$ if and only if $x \prec y$ or $x = y$. By definition then, $\preceq$ is reflexive: $x \preceq x$ for every $x \in S$. Next, suppose that $x \preceq y$ and $y \preceq x$. If $x \prec y$ and $y \prec x$ then $x \prec x$ by the transitive property of $\prec$. But this is a contradiction by the irreflexive property, so we must have $x = y$. Thus $\preceq$ is antisymmetric. Suppose $x \preceq y$ and $y \preceq z$. There are four cases:
1. If $x \prec y$ and $y \prec z$ then $x \prec z$ by the transitive property of $\prec$.
2. If $x = y$ and $y \prec z$ then $x \prec z$ by substitution.
3. If $x \prec y$ and $y = z$ then $x \prec z$ by substitution.
4. If $x = y$ and $y = z$ then $x = z$ by the transitive property of $=$.
In all cases we have $x \preceq z$ so $\preceq$ is transitive. Hence $\preceq$ is a partial order on $S$.
Monotone Sets and Functions
Partial orders form a natural setting for increasing and decreasing sets and functions. Here are the definitions:
Suppose that $\preceq$ is a partial order on a set $S$ and that $A \subseteq S$. In the following definitions, $x, \, y$ are arbitrary elements of $S$.
1. $A$ is increasing if $x \in A$ and $x \preceq y$ imply $y \in A$.
2. $A$ is decreasing if $y \in A$ and $x \preceq y$ imply $x \in A$.
Suppose that $S$ is a set with partial order $\preceq_S$, $T$ is a set with partial order $\preceq_T$, and that $f: S \to T$. In the following definitions, $x, \, y$ are arbitrary elements of $S$.
1. $f$ is increasing if and only if $x \preceq_S y$ implies $f(x) \preceq_T f(y)$.
2. $f$ is decreasing if and only if $x \preceq_S y$ implies $f(x) \succeq_T f(y)$.
3. $f$ is strictly increasing if and only if $x \prec_S y$ implies $f(x) \prec_T f(y)$.
4. $f$ is strictly decreasing if and only if $x \prec_S y$ implies $f(x) \succ_T f(y)$.
Recall the definition of the indicator function $\bs{1}_A$ associated with a subset $A$ of a universal set $S$: For $x \in S$, $\bs{1}_A(x) = 1$ if $x \in A$ and $\bs{1}_A(x) = 0$ if $x \notin A$.
Suppose that $\preceq$ is a partial order on a set $S$ and that $A \subseteq S$. Then
1. $A$ is increasing if and only if $\boldsymbol{1}_A$ is increasing.
2. $A$ is decreasing if and only if $\boldsymbol{1}_A$ is decreasing.
Proof
1. $A$ is increasing if and only if $x \in A$ and $x \preceq y$ implies $y \in A$ if and only if $\bs{1}_A(x) = 1$ and $x \preceq y$ implies $\bs{1}_A(y) = 1$ if and only if $\bs{1}_A$ is increasing.
2. $A$ is decreasing if and only if $y \in A$ and $x \preceq y$ implies $x \in A$ if and only if $\bs{1}_A(y) = 1$ and $x \preceq y$ implies $\bs{1}_A(x) = 1$ if and only if $\bs{1}_A$ is decreasing.
Isomorphism
Two partially ordered sets $(S, \preceq_S)$ and $(T, \preceq_T)$ are said to be isomorphic if there exists a one-to-one function $f$ from $S$ onto $T$ such that $x \preceq_S y$ if and only if $f(x) \preceq_T f(y)$, for all $x, \ y \in S$. The function $f$ is an isomorphism.
Generally, a mathematical space often consists of a set and various structures defined in terms of the set, such as relations, operators, or a collection of subsets. Loosely speaking, two mathematical spaces of the same type are isomorphic if there exists a one-to-one function from one of the sets onto the other that preserves the structures, and again, the function is called an isomorphism. The basic idea is that isomorphic spaces are mathematically identical, except for superficial matters of appearance. The word isomorphism is from the Greek and means equal shape.
Suppose that the partially ordered sets $(S, \preceq_S)$ and $(T, \preceq_T)$ are isomorphic, and that $f: S \to T$ is an isomorphism. Then $f$ and $f^{-1}$ are strictly increasing.
Proof
We need to show that for $x, \ y \in S$, $x \prec_S y$ if and only if $f(x) \prec_T f(y)$. If $x \prec_S y$ then by definition, $f(x) \preceq_T f(y)$. But if $f(x) = f(y)$ then $x = y$ since $f$ is one-to-one. This is a contradiction, so $f(x) \prec_T f(y)$. Similarly, if $f(x) \prec_T f(y)$ then by definition, $x \preceq_S y$. But if $x = y$ then $f(x) = f(y)$, a contradiction. Hence $x \prec_S y$.
In a sense, the subset partial order is universal—every partially ordered set is isomorphic to $(\mathscr{S}, \subseteq)$ for some collection of sets $\mathscr{S}$.
Suppose that $\preceq$ is a partial order on a set $S$. Then there exists $\mathscr{S} \subseteq \mathscr{P}(S)$ such that $(S, \preceq)$ is isomorphic to $(\mathscr{S}, \subseteq)$.
Proof
For each $x \in S$, let $A_x = \{u \in S: u \preceq x\}$, and then let $\mathscr{S} = \{A_x: x \in S\}$, so that $\mathscr{S} \subseteq \mathscr{P}(S)$. We will show that the function $x \mapsto A_x$ from $S$ onto $\mathscr{S}$ is one-to-one, and satisfies $x \preceq y \iff A_x \subseteq A_y$. First, suppose that $x, \ y \in S$ and $A_x = A_y$. Then $x \in A_x$ so $x \in A_y$ and hence $x \preceq y$. Similarly, $y \in A_y$ so $y \in A_x$ and hence $y \preceq x$. Thus $x = y$, so the mapping is one-to-one. Next, suppose that $x \preceq y$. If $u \in A_x$ then $u \preceq x$ so $u \preceq y$ by the transitive property, and hence $u \in A_y$. Thus $A_x \subseteq A_y$. Conversely, suppose $A_x \subseteq A_y$. As before, $x \in A_x$, so $x \in A_y$ and hence $x \preceq y$.
Extremal Elements
Various types of extremal elements play important roles in partially ordered sets. Here are the definitions:
Suppose that $\preceq$ is a partial order on a set $S$ and that $A \subseteq S$.
1. An element $a \in A$ is the minimum element of $A$ if and only if $a \preceq x$ for every $x \in A$.
2. An element $a \in A$ is a minimal element of $A$ if and only if no $x \in A$ satisfies $x \prec a$.
3. An element $b \in A$ is the maximum element of $A$ if and only if $b \succeq x$ for every $x \in A$.
4. An element $b \in A$ is a maximal element of $A$ if and only if no $x \in A$ satisfies $x \succ b$.
In general, a set can have several maximal and minimal elements (or none). On the other hand,
The minimum and maximum elements of $A$, if they exist, are unique. They are denoted $\min(A)$ and $\max(A)$, respectively.
Proof
Suppose that $a, \ b$ are minimum elements of $A$. Since $a, \ b \in A$ we have $a \preceq b$ and $b \preceq a$, so $a = b$ by the antisymmetric property. The proof for the maximum element is analogous.
Minimal, maximal, minimum, and maximum elements of a set must belong to that set. The following definitions relate to upper and lower bounds of a set, which do not have to belong to the set.
Suppose again that $\preceq$ is a partial order on a set $S$ and that $A \subseteq S$. Then
1. An element $u \in S$ is a lower bound for $A$ if and only if $u \preceq x$ for every $x \in A$.
2. An element $v \in S$ is an upper bound for $A$ if and only if $v \succeq x$ for every $x \in A$.
3. The greatest lower bound or infimum of $A$, if it exists, is the maximum of the set of lower bounds of $A$.
4. The least upper bound or supremum of $A$, if it exists, is the minimum of the set of upper bounds of $A$.
By (20), the greatest lower bound of $A$ is unique, if it exists. It is denoted $\text{glb}(A)$ or $\inf(A)$. Similarly, the least upper bound of $A$ is unique, if it exists, and is denoted $\text{lub}(A)$ or $\sup(A)$. Note that every element of $S$ is a lower bound and an upper bound for $\emptyset$, since the conditions in the definition hold vacuously.
The symbols $\wedge$ and $\vee$ are also used for infimum and supremum, respectively, so $\bigwedge A = \inf(A)$ and $\bigvee A = \sup(A)$ if they exist. In particular, for $x, \ y \in S$, operator notation is more commonly used, so $x \wedge y = \inf\{x, y\}$ and $x \vee y = \sup\{x, y\}$. Partially ordered sets for which these elements always exist are important, and have a special name.
Suppose that $\preceq$ is a partial order on a set $S$. Then $(S, \preceq)$ is a lattice if $x \wedge y$ and $x \vee y$ exist for every $x, \ y \in S$.
For the subset partial order, the inf and sup operators correspond to intersection and union, respectively:
Let $S$ be a set and consider the subset partial order $\subseteq$ on $\mathscr{P}(S)$, the power set of $S$. Let $\mathscr{A}$ be a nonempty subset of $\mathscr{P}(S)$, that is, a nonempty collection of subsets of $S$. Then
1. $\inf(\mathscr{A}) = \bigcap \mathscr{A}$
2. $\sup(\mathscr{A}) = \bigcup \mathscr{A}$
Proof
1. First, $\bigcap \mathscr{A} \subseteq A$ for every $A \in \mathscr{A}$ and hence $\bigcap \mathscr{A}$ is a lower bound of $\mathscr{A}$. If $B$ is a lower bound of $\mathscr{A}$ then $B \subseteq A$ for every $A \in \mathscr{A}$ and hence $B \subseteq \bigcap \mathscr{A}$. Therefore $\bigcap \mathscr{A}$ is the greatest lower bound.
2. First, $A \subseteq \bigcup \mathscr{A}$ for every $A \in \mathscr{A}$ and hence $\bigcup \mathscr{A}$ is an upper bound of $\mathscr{A}$. If $B$ is an upper bound of $\mathscr{A}$ then $A \subseteq B$ for every $A \in \mathscr{A}$ and hence $\bigcup \mathscr{A} \subseteq B$. Therefore $\bigcup \mathscr{A}$ is the least upper bound.
In particular, $A \wedge B = A \cap B$ and $A \vee B = A \cup B$, so $(\mathscr P(S), \subseteq)$ is a lattice.
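As a quick computational illustration (not part of the formal development), the following Python sketch computes the infimum and supremum of a small, arbitrarily chosen collection of subsets as intersection and union:

```python
from functools import reduce

# A nonempty collection of subsets of S = {1, 2, 3, 4}
collection = [{1, 2, 3}, {1, 2, 4}, {1, 2}]

# inf = intersection of the collection, sup = union of the collection
inf_A = reduce(set.intersection, collection)
sup_A = reduce(set.union, collection)

print(inf_A)  # {1, 2}
print(sup_A)  # {1, 2, 3, 4}
```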
Consider the division partial order $\mid$ on the set of positive integers $\N_+$ and let $A$ be a nonempty subset of $\N_+$.
1. $\inf(A)$ is the greatest common divisor of $A$, usually denoted $\gcd(A)$ in this context.
2. If $A$ is infinite then $\sup(A)$ does not exist. If $A$ is finite then $\sup(A)$ is the least common multiple of $A$, usually denoted $\text{lcm}(A)$ in this context.
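Here is a brief Python sketch of this correspondence, using the standard library's gcd and lcm (math.lcm assumes Python 3.9 or later); the set $A$ is the one that appears in the exercises below.

```python
from functools import reduce
from math import gcd, lcm  # math.lcm assumes Python 3.9+

A = [2, 3, 4, 6, 12]

print(reduce(gcd, A))  # 1  -- inf of A under | is gcd(A)
print(reduce(lcm, A))  # 12 -- sup of A under | is lcm(A), since A is finite
```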
Suppose that $S$ is a set and that $f: S \to S$. An element $z \in S$ is said to be a fixed point of $f$ if $f(z) = z$.
The following result explores a basic fixed point theorem for a partially ordered set. The theorem is important in the study of cardinality.
Suppose that $\preceq$ is a partial order on a set $S$ with the property that $\sup(A)$ exists for every $A \subseteq S$. If $f: S \to S$ is increasing, then $f$ has a fixed point.
Proof
Let $A = \{x \in S: x \preceq f(x)\}$ and let $z = \sup(A)$. If $x \in A$ then $x \preceq z$ so $x \preceq f(x) \preceq f(z)$. Hence $f(z)$ is an upper bound of $A$ so $z \preceq f(z)$. But then $f(z) \preceq f\left(f(z)\right)$ so $f(z) \in A$. Hence $f(z) \preceq z$. Therefore $f(z) = z$.
Note that the hypotheses of the theorem require that $\sup(\emptyset) = \min(S)$ exists. The set $A = \{x \in S: x \preceq f(x)\}$ is nonempty since $\min(S) \in A$.
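To make the construction in the proof concrete, here is a small Python sketch that runs it on the lattice $(\mathscr{P}(\{0, 1, 2\}), \subseteq)$, where every collection of subsets has a supremum (its union); the increasing map $f$ below is an arbitrary choice for illustration.

```python
from itertools import combinations

S = {0, 1, 2}
subsets = [set(c) for r in range(len(S) + 1) for c in combinations(sorted(S), r)]

def f(x):
    """An increasing (order-preserving) map on the power set, chosen arbitrarily."""
    return (x & {0, 1}) | {1}

# The construction from the proof: z = sup{x : x <= f(x)}, the union of all such x
A = [x for x in subsets if x <= f(x)]
z = set().union(*A)

print(z, f(z) == z)  # {0, 1} True: z is a fixed point of f
```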
If $\preceq$ is a total order on a set $S$ with the property that every nonempty subset of $S$ has a minimum element, then $S$ is said to be well ordered by $\preceq$. One of the most important examples is $\N_+$, which is well ordered by the ordinary order $\le$. On the other hand, the well ordering principle, which is equivalent to the axiom of choice, states that every nonempty set can be well ordered.
Orders on Product Spaces
Suppose that $S$ and $T$ are sets with partial orders $\preceq_S$ and $\preceq_T$ respectively. Define the relation $\preceq$ on $S \times T$ by $(x, y) \preceq (z, w)$ if and only if $x \preceq_S z$ and $y \preceq_T w$.
1. The relation $\preceq$ is a partial order on $S \times T$, called, appropriately enough, the product order.
2. Suppose that $(S, \preceq_S) = (T, \preceq_T)$. If $S$ has at least 2 elements, then $\preceq$ is not a total order on $S^2$.
Proof
Part (a) is a special case of the general product order result below. For part (b), suppose $a, \ b \in S$ are distinct. Then $(a, b) \preceq (b, a)$ would force $a \preceq_S b$ and $b \preceq_S a$, and hence $a = b$ by antisymmetry; similarly $(b, a) \preceq (a, b)$ is impossible. So $(a, b)$ and $(b, a)$ are not comparable.
Product order extends in a straightforward way to the Cartesian product of a finite or an infinite sequence of partially ordered spaces. For example, suppose that $S_i$ is a set with partial order $\preceq_i$ for each $i \in \{1, 2, \ldots, n\}$, where $n \in \N_+$. The product order $\preceq$ on the product set $S_1 \times S_2 \times \cdots \times S_n$ is defined as follows: for $\bs{x} = (x_1, x_2, \ldots, x_n)$ and $\bs{y} = (y_1, y_2, \ldots, y_n)$ in the product set, $\bs{x} \preceq \bs{y}$ if and only if $x_i \preceq_i y_i$ for each $i \in \{1, 2, \ldots, n\}$. We can generalize this further to arbitrary product sets. Suppose that $S_i$ is a set for each $i$ in a nonempty (but otherwise arbitrary) index set $I$. Recall that $\prod_{i \in I} S_i = \left\{x: x \text{ is a function from } I \text{ into } \bigcup_{i \in I } S_i \text{ such that } x(i) \in S_i \text{ for each } i \in I \right\}$ To make the notation look more like a simple Cartesian product, we will write $x_i$ instead of $x(i)$ for the value of a function $x$ in the product set at $i \in I$.
Suppose that $S_i$ is a set with partial order $\preceq_i$ for each $i$ in a nonempty index set $I$. Define the relation $\preceq$ on $\prod_{i \in I} S_i$ by $x \preceq y$ if and only if $x_i \preceq_i y_i$ for each $i \in I$. Then $\preceq$ is a partial order on the product set, known again as the product order.
Proof
In spite of the abstraction, the proof is perfectly straightforward. Suppose that $x, \, y, \, z \in \prod_{i \in I} S_i$.
1. $x_i \preceq_i x_i$ for every $i \in I$, and hence $x \preceq x$. Thus $\preceq$ is reflexive.
2. Suppose that $x \preceq y$ and $y \preceq x$. Then $x_i \preceq_i y_i$ and $y_i \preceq_i x_i$ for each $i \in I$. Hence $x_i = y_i$ for each $i \in I$ and so $x = y$. Thus $\preceq$ is antisymmetric.
3. Suppose that $x \preceq y$ and $y \preceq z$. Then $x_i \preceq_i y_i$ and $y_i \preceq_i z_i$ for each $i \in I$. Hence $x_i \preceq_i z_i$ for each $i \in I$, so $x \preceq z$. Thus $\preceq$ is transitive.
Note again that no assumptions are made on the index set $I$, other than it be nonempty. In particular, no order is necessary on $I$. The next result gives a very different type of order on a product space.
Suppose again that $S$ and $T$ are sets with partial orders $\preceq_S$ and $\preceq_T$ respectively. Define the relation $\preceq$ on $S \times T$ by $(x, y) \preceq (z, w)$ if and only if either $x \prec_S z$, or $x = z$ and $y \preceq_T w$.
1. The relation $\preceq$ is a partial order on $S \times T$, called the lexicographic order or dictionary order.
2. If $\preceq_S$ and $\preceq_T$ are total orders on $S$ and $T$, respectively, then $\preceq$ is a total order on $S \times T$.
Proof
Both parts are special cases of the general lexicographic order theorem below, with the (well ordered) index set $I = \{1, 2\}$.
As with the product order, the lexicographic order can be generalized to a collection of partially ordered spaces. However, we need the index set to be totally ordered.
Suppose that $S_i$ is a set with partial order $\preceq_i$ for each $i$ in a nonempty index set $I$. Suppose also that $\le$ is a total order on $I$. Define the relation $\preceq$ on the product set $\prod_{i \in I} S_i$ as follows: $x \prec y$ if and only if there exists $j \in I$ such that $x_i = y_i$ if $i \lt j$ and $x_j \prec_j y_j$. Then
1. $\preceq$ is a partial order on $\prod_{i \in I} S_i$, known again as the lexicographic order.
2. If $\preceq_i$ is a total order for each $i \in I$, and $I$ is well ordered by $\le$, then $\preceq$ is a total order on $\prod_{i \in I} S_i$.
Proof
1. By the result on strong orders, we need to show that $\prec$ is irreflexive and transitive. First, no $x \in \prod_{i \in I} S_i$ satisfies $x \prec x$ since $x_i = x_i$ for all $i \in I$. Hence $\prec$ is irreflexive. Next, suppose that $x, \ y, \ z \in \prod_{i \in I} S_i$ and that $x \prec y$ and $y \prec z$. Then there exists $j \in I$ such that $x_i = y_i$ if $i \lt j$ and $x_j \prec_j y_j$. Similarly, there exists $k \in I$ such that $y_i = z_i$ if $i \lt k$ and $y_k \prec_k z_k$. Again, since $I$ is totally ordered, either $j \lt k$ or $k \lt j$ or $j = k$. If $j \lt k$, then $x_i = y_i = z_i$ if $i \lt j$ and $x_j \prec_j y_j = z_j$. If $k \lt j$, then $x_i = y_i = z_i$ if $i \lt k$ and $x_k = y_k \prec_k z_k$. If $j = k$, then $x_i = y_i = z_i$ if $i \lt j$ and $x_j \prec_j y_j \prec_j z_j$. In all cases, $x \prec z$ so $\prec$ is transitive.
2. Suppose now that $\preceq_i$ is a total order on $S_i$ for each $i \in I$ and that $I$ is well ordered by $\le$. Let $x, \ y \in \prod_{i \in I} S_i$ with $x \ne y$. Let $J = \{i \in I: x_i \ne y_i\}$. Then $J \ne \emptyset$ by assumption, and hence has a minimum element $j$. If $i \lt j$ then $i \notin J$ and hence $x_i = y_i$. On the other hand, $x_j \ne y_j$ since $j \in J$ and therefore, since $\preceq_j$ is totally ordered, we must have either $x_j \prec_j y_j$ or $y_j \prec_j x_j$. In the first case, $x \prec y$ and in the second case $y \prec x$. Hence $\preceq$ is totally ordered.
The term lexicographic comes from the way that we order words alphabetically: We look at the first letter; if these are different, we know how to order the words. If the first letters are the same, we look at the second letter; if these are different, we know how to order the words. We continue in this way until we find letters that are different, and we can order the words. In fact, the lexicographic order is sometimes referred to as the first difference order. Note also that if $S_i$ is a set and $\preceq_i$ a total order on $S_i$ for $i \in I$, then by the well ordering principle, there exists a well ordering $\le$ of $I$, and hence there exists a lexicographic total order on the product space $\prod_{i \in I} S_i$. As a mathematical structure, the lexicographic order is not as obscure as you might think.
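As an aside, Python's built-in comparison of strings and tuples is exactly this first-difference order, so the definition can be explored directly:

```python
# Python compares strings and tuples lexicographically, i.e. by first difference.
print(sorted(["banana", "band", "ban", "apple"]))
# ['apple', 'ban', 'banana', 'band']

print((1, 9) < (2, 0))  # True: decided by the first coordinates, 1 < 2
print((1, 3) < (1, 7))  # True: first coordinates tie, so 3 < 7 decides
```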
$(\R, \le)$ is isomorphic to the lexicographic product of $(\Z, \le)$ with $\left([0, 1), \le\right)$, where $\le$ is the ordinary order for real numbers.
Proof
Every $x \in \R$ can be uniquely expressed in the form $x = n + t$ where $n = \lfloor x \rfloor \in \Z$ is the integer part and $t = x - n \in [0, 1)$ is the remainder. Thus $x \mapsto (n, t)$ is a one-to-one function from $\R$ onto $\Z \times [0, 1)$. For example, $5.3$ maps to $(5, 0.3)$, while $-6.7$ maps to $(-7, 0.3)$. Suppose that $x = m + s, \ y = n + t \in \R$, where of course $m, \, n \in \Z$ are the integer parts of $x$ and $y$, respectively, and $s, \, t \in [0, 1)$ are the corresponding remainders. Then $x \lt y$ if and only if $m \lt n$ or $m = n$ and $s \lt t$. Again, to illustrate with real numbers, we can tell that $5.3 \lt 7.8$ just by comparing the integer parts: $5 \lt 7$. We can ignore the remainders. On the other hand, to see that $6.4 \lt 6.7$ we need to compare the remainders: $0.4 \lt 0.7$ since the integer parts are the same.
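A short Python sketch of this decomposition (subject to the usual floating-point rounding) may help:

```python
import math

def decompose(x):
    """Map x to (integer part, remainder) in Z x [0, 1)."""
    n = math.floor(x)
    return (n, x - n)

print(decompose(5.3))   # (5, 0.3)  up to floating-point rounding
print(decompose(-6.7))  # (-7, 0.3) up to floating-point rounding

# Lexicographic comparison of the pairs agrees with the usual order on R
x, y = 6.4, 6.7
print((x < y) == (decompose(x) < decompose(y)))  # True
```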
Limits of Sequences of Real Numbers
Suppose that $(a_1, a_2, \ldots)$ is a sequence of real numbers.
The sequence $\inf\{a_n, a_{n+1}, \ldots\}$ is increasing in $n \in \N_+$.
Since the sequence of infimums in the last result is increasing, the limit exists in $\R \cup \{-\infty, \infty\}$, and is called the limit inferior of the original sequence: $\liminf_{n \to \infty} a_n = \lim_{n \to \infty} \inf \{a_n, a_{n+1}, \ldots\}$
The sequence $\sup\{a_n, a_{n+1}, \ldots \}$ is decreasing in $n \in \N_+$.
Since the sequence of supremums in the last result is decreasing, the limit exists in $\R \cup \{-\infty, \infty\}$, and is called the limit superior of the original sequence: $\limsup_{n \to \infty} a_n = \lim_{n \to \infty} \sup\{a_n, a_{n+1}, \ldots \}$ Note that $\liminf_{n \to \infty} a_n \le \limsup_{n \to \infty} a_n$ and equality holds if and only if $\lim_{n \to \infty} a_n$ exists (and is the common value).
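Numerically, one can only approximate these quantities, since they involve infinite tails. The following Python sketch approximates the limit inferior and superior of $a_n = (-1)^n (1 + 1/n)$, whose true values are $-1$ and $1$:

```python
# a_n = (-1)^n (1 + 1/n) oscillates; liminf = -1 and limsup = 1.
N = 10_000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

# Finite-tail approximations of inf{a_n, a_{n+1}, ...} and sup{a_n, a_{n+1}, ...}
tail = a[N // 2:]
print(min(tail))  # close to -1 (the limit inferior)
print(max(tail))  # close to  1 (the limit superior)
```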
Vector Spaces of Functions
Suppose that $S$ is a nonempty set, and recall that the set $\mathscr{V}$ of functions $f: S \to \R$ is a vector space, under the usual pointwise definition of addition and scalar multiplication. As noted in (9), $\mathscr{V}$ is also a partially ordered set, under the pointwise partial order: $f \preceq g$ if and only if $f(x) \le g(x)$ for all $x \in S$. Consistent with the definitions (19), $f \in \mathscr{V}$ is bounded if there exists $C \in (0, \infty)$ such that $\left|f(x)\right| \le C$ for all $x \in S$. Now let $\mathscr{U}$ denote the set of bounded functions $f: S \to \R$, and for $f \in \mathscr{U}$ define $\|f\| = \sup\{\left|f(x)\right|: x \in S\}$
$\mathscr{U}$ is a vector subspace of $\mathscr{V}$ and $\| \cdot \|$ is a norm on $\mathscr{U}$.
Proof
To show that $\mathscr{U}$ is a subspace, we just have to note that it is closed under addition and scalar multiplication. That is, if $f, \, g: S \to \R$ are bounded, and if $c \in \R$, then $f + g$ and $c f$ are bounded. Next we show that $\| \cdot \|$ satisfies the axioms of a norm. Again, let $f, \, g \in \mathscr{U}$ and $c \in \R$.
1. Clearly $\|f\| \ge 0$ and $\|f\| = 0$ if and only if $f(x) = 0$ for all $x \in S$ if and only if $f = \bs{0}$, the zero function on $S$.
2. $\| c f\| = \sup\left\{\left|c f(x) \right|: x \in S\right\} = \left|c\right| \sup\left\{\left|f(x)\right|: x \in S\right\} = \left|c\right| \|f\|$
3. By the usual triangle inequality on $\R$, $\left|f(x) + g(x) \right| \le \left|f(x)\right| + \left|g(x)\right|$ for $x \in S$. Hence $\sup\left\{\left|f(x) + g(x)\right|: x \in S\right\} \le \sup\left\{\left|f(x)\right| + \left|g(x)\right|: x \in S\right\} \le \sup\left\{\left|f(x)\right|: x \in S\right\} + \sup\left\{\left|g(x)\right|: x \in S\right\}$ That is, $\|f + g\| \le \|f\| + \|g\|$.
Recall that part (a) is the positive property, part (b) is the scaling property, and part (c) is the triangle inequality.
Appropriately enough, $\| \cdot \|$ is called the supremum norm on $\mathscr{U}$. Vector spaces of bounded, real-valued functions, with the supremum norm are especially important in probability and random processes. We will return to this discussion again in the advanced sections on metric spaces and measure theory.
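As a rough numerical illustration, the supremum norm can be approximated by maximizing over a fine grid; the functions and the grid in this Python sketch are arbitrary choices:

```python
import math

def sup_norm(f, grid):
    """Approximate ||f|| = sup{|f(x)|: x in S} by maximizing over a finite grid."""
    return max(abs(f(x)) for x in grid)

grid = [i / 10_000 for i in range(10_001)]  # a fine grid on S = [0, 1]
print(sup_norm(lambda x: x * (1 - x), grid))                # 0.25, attained at x = 1/2
print(sup_norm(lambda x: math.sin(2 * math.pi * x), grid))  # approximately 1.0
```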
Computational Exercises
Let $S = \{2, 3, 4, 6, 12\}$.
1. Sketch the Hasse graph corresponding to the ordinary order $\le$ on $S$.
2. Sketch the Hasse graph corresponding to the division partial order $\mid$ on $S$.
Answer
Consider the ordinary order $\le$ on the set of real numbers $\R$, and let $A = [a, b)$ where $a \lt b$. Find each of the following that exist:
1. The set of minimal elements of $A$
2. The set of maximal elements of $A$
3. $\min(A)$
4. $\max(A)$
5. The set of lower bounds of $A$
6. The set of upper bounds of $A$
7. $\inf(A)$
8. $\sup(A)$
Answer
1. $\{a\}$
2. $\emptyset$
3. $a$
4. Does not exist
5. $(-\infty, a]$
6. $[b, \infty)$
7. $a$
8. $b$
Again consider the division partial order $\mid$ on the set of positive integers $\N_+$ and let $A = \{2, 3, 4, 6, 12\}$. Find each of the following that exist:
1. The set of minimal elements of $A$
2. The set of maximal elements of $A$
3. $\min(A)$
4. $\max(A)$
5. The set of lower bounds of $A$
6. The set of upper bounds of $A$
7. $\inf(A)$
8. $\sup(A)$
Answer
1. $\{2, 3\}$
2. $\{12\}$
3. Does not exist
4. $12$
5. $\{1\}$
6. $\{12, 24, 36, \ldots\}$
7. $1$
8. $12$
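The answers above are easy to verify by brute force. In the following Python sketch, the lower and upper bounds are scanned over a finite window of $\N_+$ only, since the full bound sets are infinite:

```python
def divides(d, n):
    return n % d == 0

A = [2, 3, 4, 6, 12]

# Minimal: no other element of A divides it; maximal: it divides no other element
minimal = [a for a in A if not any(divides(x, a) for x in A if x != a)]
maximal = [a for a in A if not any(divides(a, x) for x in A if x != a)]
print(minimal, maximal)  # [2, 3] [12]

# The bound sets are infinite subsets of N+, so we scan a finite window only.
window = range(1, 100)
print([u for u in window if all(divides(u, a) for a in A)])  # [1]
print([v for v in window if all(divides(a, v) for a in A)])  # [12, 24, ..., 96]
```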
Let $S = \{a, b, c\}$.
1. Give $\mathscr{P}(S)$ in list form.
2. Describe the Hasse graph of $(\mathscr{P}(S), \subseteq)$
Answer
1. $\mathscr{P}(S) = \{\emptyset, \{a\}, \{b\}, \{c\}, \{a, b\}, \{a, c\}, \{b, c\}, S\}$
2. For $A \in \mathscr{P}(S)$ and $x \in S \setminus A$, there is a directed edge from $A$ to $A \cup \{x\}$
Note that the Hasse graph of $\supseteq$ looks the same as the graph of $\subseteq$, except for the labels on the vertices. This symmetry is because of the complement relationship.
Let $S = \{a, b, c, d\}$.
1. Give $\mathscr{P}(S)$ in list form.
2. Describe the Hasse graph of $(\mathscr{P}(S), \subseteq)$
Answer
1. $\mathscr{P}(S) = \{\emptyset, \{a\}, \{b\}, \{c\}, \{d\}, \{a, b\}, \{a, c\}, \{a, d\}, \{b, c\}, \{b, d\}, \{c, d\}, \{a, b, c\}, \{a, b, d\}, \{a, c, d\}, \{b, c, d\}, S\}$
2. For $A \in \mathscr{P}(S)$ and $x \in S \setminus A$, there is a directed edge from $A$ to $A \cup \{x\}$
Note again that the Hasse graph of $\supseteq$ looks the same as the graph of $\subseteq$, except for the labels on the vertices. This symmetry is because of the complement relationship.
Suppose that $A$ and $B$ are subsets of a universal set $S$. Let $\mathscr{A}$ denote the collection of the 16 subsets of $S$ that can be constructed from $A$ and $B$ using the set operations. Show that $(\mathscr{A}, \subseteq)$ is isomorphic to the partially ordered set in the previous exercise.
Proof
Let $a = A \cap B$, $b = A \cap B^c$, $c = A^c \cap B$, $d = A^c \cap B^c$. Our basic assumption is that $A$ and $B$ are in general position, so that $a, \, b, \, c, \, d$ are distinct and nonempty. Note also that $\{a, b, c, d\}$ partitions $S$. Now, map each subset $\mathscr{S}$ of $\{a, b, c, d\}$ to $\bigcup \mathscr{S}$. This function is an isomorphism from $\left(\mathscr{P}(\{a, b, c, d\}), \subseteq\right)$ to $(\mathscr{A}, \subseteq)$. That is, for $\mathscr{S}$ and $\mathscr{T}$ subsets of $\{a, b, c, d\}$, $\mathscr{S} \subseteq \mathscr{T}$ if and only if $\bigcup \mathscr{S} \subseteq \bigcup \mathscr{T}$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\range}{\text{range}}$
Basic Theory
Definitions
A relation $\approx$ on a nonempty set $S$ that is reflexive, symmetric, and transitive is an equivalence relation on $S$. Thus, for all $x, \, y, \, z \in S$,
1. $x \approx x$, the reflexive property.
2. If $x \approx y$ then $y \approx x$, the symmetric property.
3. If $x \approx y$ and $y \approx z$ then $x \approx z$, the transitive property.
As the name and notation suggest, an equivalence relation is intended to define a type of equivalence among the elements of $S$. Like partial orders, equivalence relations occur naturally in most areas of mathematics, including probability.
Suppose that $\approx$ is an equivalence relation on $S$. The equivalence class of an element $x \in S$ is the set of all elements that are equivalent to $x$, and is denoted $[x] = \{y \in S: y \approx x\}$
Results
The most important result is that an equivalence relation on a set $S$ defines a partition of $S$, by means of the equivalence classes.
Suppose that $\approx$ is an equivalence relation on a set $S$.
1. If $x \approx y$ then $[x] = [y]$.
2. If $x \not \approx y$ then $[x] \cap [y] = \emptyset$.
3. The collection of (distinct) equivalence classes is a partition of $S$ into nonempty sets.
Proof
1. Suppose that $x \approx y$. If $u \in [x]$ then $u \approx x$ and hence $u \approx y$ by the transitive property. Hence $u \in [y]$. Similarly, if $u \in [y]$ then $u \approx y$. But $y \approx x$ by the symmetric property, and hence $u \approx x$ by the transitive property. Hence $u \in [x]$.
2. Suppose that $x \not \approx y$. If $u \in [x] \cap [y]$, then $u \in [x]$ and $u \in [y]$, so $u \approx x$ and $u \approx y$. But then $x \approx u$ by the symmetric property, and then $x \approx y$ by the transitive property. This is a contradiction, so $[x] \cap [y] = \emptyset$.
3. From (a) and (b), the (distinct) equivalence classes are disjoint. If $x \in S$, then $x \approx x$ by the reflexive property, and hence $x \in [x]$. Therefore $\bigcup_{x \in S} [x] = S$.
Sometimes the set $\mathscr{S}$ of equivalence classes is denoted $S / \approx$. The idea is that the equivalence classes are new objects obtained by identifying elements in $S$ that are equivalent. Conversely, every partition of a set defines an equivalence relation on the set.
Suppose that $\mathscr{S}$ is a collection of nonempty sets that partition a given set $S$. Define the relation $\approx$ on $S$ by $x \approx y$ if and only if $x \in A$ and $y \in A$ for some $A \in \mathscr{S}$.
1. $\approx$ is an equivalence relation.
2. $\mathscr{S}$ is the set of equivalence classes.
Proof
1. If $x \in S$, then $x \in A$ for some $A \in \mathscr{S}$, since $\mathscr{S}$ partitions $S$. Hence $x \approx x$, and so the reflexive property holds. Next, $\approx$ is trivially symmetric by definition. Finally, suppose that $x \approx y$ and $y \approx z$. Then $x, \, y \in A$ for some $A \in \mathscr{S}$ and $y, \, z \in B$ for some $B \in \mathscr{S}$. But then $y \in A \cap B$. The sets in $\mathscr{S}$ are disjoint, so $A = B$. Hence $x, \, z \in A$, so $x \approx z$. Thus $\approx$ is transitive.
2. If $x \in S$, then $x \in A$ for a unique $A \in \mathscr{S}$, and then by definition, $[x] = A$.
Sometimes the equivalence relation $\approx$ associated with a given partition $\mathscr{S}$ is denoted $S / \mathscr{S}$. The idea, of course, is that elements in the same set of the partition are equivalent.
The process of forming a partition from an equivalence relation, and the process of forming an equivalence relation from a partition are inverses of each other.
1. If we start with an equivalence relation $\approx$ on $S$, form the associated partition, and then construct the equivalence relation associated with the partition, then we end up with the original equivalence relation. In modular notation, $S \big/ (S / \approx)$ is the same as $\approx$.
2. If we start with a partition $\mathscr{S}$ of $S$, form the associated equivalence relation, and then form the partition associated with the equivalence relation, then we end up with the original partition. In modular notation, $S \big/ (S / \mathscr{S})$ is the same as $\mathscr{S}$.
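The two constructions are easy to demonstrate on a small set. In this Python sketch (an illustration only, with an arbitrary partition), we start with a partition, form the associated equivalence relation, and recover the partition as the collection of equivalence classes:

```python
S = {1, 2, 3, 4, 5}
partition = [{1, 2}, {3}, {4, 5}]

def equivalent(x, y):
    """x ~ y iff x and y lie in the same block of the partition."""
    return any(x in block and y in block for block in partition)

# Recover the partition as the collection of equivalence classes [x]
classes = {frozenset(y for y in S if equivalent(x, y)) for x in S}
print(classes == {frozenset(block) for block in partition})  # True
```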
Suppose that $S$ is a nonempty set. The most basic equivalence relation on $S$ is the equality relation $=$. In this case $[x] = \{x\}$ for each $x \in S$. At the other extreme is the trivial relation $\approx$ defined by $x \approx y$ for all $x, \, y \in S$. In this case $S$ is the only equivalence class.
Every function $f$ defines an equivalence relation on its domain, known as the equivalence relation associated with $f$. Moreover, the equivalence classes have a simple description in terms of the inverse images of $f$.
Suppose that $f: S \to T$. Define the relation $\approx$ on $S$ by $x \approx y$ if and only if $f(x) = f(y)$.
1. The relation $\approx$ is an equivalence relation on $S$.
2. The set of equivalence classes is $\mathscr{S} = \left\{f^{-1}\{t\}: t \in \range(f)\right\}$.
3. The function $F: \mathscr{S} \to T$ defined by $F([x]) = f(x)$ is well defined and is one-to-one.
Proof
1. If $x \in S$ then trivially $f(x) = f(x)$, so $x \approx x$. Hence $\approx$ is reflexive. If $x \approx y$ then $f(x) = f(y)$ so trivially $f(y) = f(x)$ and hence $y \approx x$. Thus $\approx$ is symmetric. If $x \approx y$ and $y \approx z$ then $f(x) = f(y)$ and $f(y) = f(z)$, so trivially $f(x) = f(z)$ and so $x \approx z$. Hence $\approx$ is transitive.
2. Recall that $t \in \range(f)$ if and only if $f(x) = t$ for some $x \in S$. Then by definition, $[x] = f^{-1}\{t\} = \{y \in S: f(y) = t\} = \{ y \in S: f(y) = f(x)\}$
3. From (3), $[x] = [y]$ if and only if $x \approx y$ if and only if $f(x) = f(y)$. This shows both that $F$ is well defined, and that $F$ is one-to-one.
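Computationally, the equivalence classes of $f$ on a finite domain are found by grouping points with a common image, that is, by computing the inverse images $f^{-1}\{t\}$. Here is a Python sketch using $f(x) = x^2$ as an arbitrary example (compare the exercise below):

```python
from collections import defaultdict

# Equivalence classes of f(x) = x^2 on {-3, ..., 3}: the inverse images
# f^{-1}{t}, which here are the pairs {x, -x}.
classes = defaultdict(set)
for x in range(-3, 4):
    classes[x * x].add(x)

print(dict(classes))  # {9: {-3, 3}, 4: {-2, 2}, 1: {-1, 1}, 0: {0}}
```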
Suppose again that $f: S \to T$.
1. If $f$ is one-to-one then the equivalence relation associated with $f$ is the equality relation, and hence $[x] = \{x\}$ for each $x \in S$.
2. If $f$ is a constant function then the equivalence relation associated with $f$ is the trivial relation, and hence $S$ is the only equivalence class.
Proof
1. If $f$ is one-to-one, then $x \approx y$ if and only if $f(x) = f(y)$ if and only if $x = y$.
2. If $f$ is constant on $S$ then $f(x) = f(y)$ and hence $x \approx y$ for all $x, \, y \in S$.
Equivalence relations associated with functions are universal: every equivalence relation is of this form:
Suppose that $\approx$ is an equivalence relation on a set $S$. Define $f: S \to \mathscr{P}(S)$ by $f(x) = [x]$. Then $\approx$ is the equivalence relation associated with $f$.
Proof
From (6), $x \approx y$ if and only if $[x] = [y]$ if and only if $f(x) = f(y)$.
The intersection of two equivalence relations is another equivalence relation.
Suppose that $\approx$ and $\cong$ are equivalence relations on a set $S$. Let $\equiv$ denote the intersection of $\approx$ and $\cong$ (thought of as subsets of $S \times S$). Equivalently, $x \equiv y$ if and only if $x \approx y$ and $x \cong y$.
1. $\equiv$ is an equivalence relation on $S$.
2. $[x]_\equiv = [x]_\approx \cap [x]_\cong$.
Suppose that we have a relation that is reflexive and transitive, but fails to be a partial order because it's not antisymmetric. The relation and its inverse naturally lead to an equivalence relation, and then in turn, the original relation defines a true partial order on the equivalence classes. This is a common construction, and the details are given in the next theorem.
Suppose that $\preceq$ is a relation on a set $S$ that is reflexive and transitive. Define the relation $\approx$ on $S$ by $x \approx y$ if and only if $x \preceq y$ and $y \preceq x$.
1. $\approx$ is an equivalence relation on $S$.
2. If $A$ and $B$ are equivalence classes and $x \preceq y$ for some $x \in A$ and $y \in B$, then $u \preceq v$ for all $u \in A$ and $v \in B$.
3. Define the relation $\preceq$ on the collection of equivalence classes $\mathscr{S}$ by $A \preceq B$ if and only if $x \preceq y$ for some (and hence all) $x \in A$ and $y \in B$. Then $\preceq$ is a partial order on $\mathscr{S}$.
Proof
1. If $x \in S$ then $x \preceq x$ since $\preceq$ is reflexive. Hence $x \approx x$, so $\approx$ is reflexive. Clearly $\approx$ is symmetric by the symmetry of the definition. Suppose that $x \approx y$ and $y \approx z$. Then $x \preceq y$, $y \preceq z$, $z \preceq y$ and $y \preceq x$. Hence $x \preceq z$ and $z \preceq x$ since $\preceq$ is transitive. Therefore $x \approx z$ so $\approx$ is transitive.
2. Suppose that $A$ and $B$ are equivalence classes of $\approx$ and that $x \preceq y$ for some $x \in A$ and $y \in B$. If $u \in A$ and $v \in B$, then $x \approx u$ and $y \approx v$. Therefore $u \preceq x$ and $y \preceq v$. By transitivity, $u \preceq v$.
3. Suppose that $A \in \mathscr{S}$. If $x, \, y \in A$ then $x \approx y$ and hence $x \preceq y$. Therefore $A \preceq A$ and so $\preceq$ is reflexive. Next suppose that $A, \, B \in \mathscr{S}$ and that $A \preceq B$ and $B \preceq A$. If $x \in A$ and $y \in B$ then $x \preceq y$ and $y \preceq x$. Hence $x \approx y$ so $A = B$. Therefore $\preceq$ is antisymmetric. Finally, suppose that $A, \, B, \, C \in \mathscr{S}$ and that $A \preceq B$ and $B \preceq C$. Note that $B \ne \emptyset$ so let $y \in B$. If $x \in A, \, z \in C$ then $x \preceq y$ and $y \preceq z$. Hence $x \preceq z$ and therefore $A \preceq C$. So $\preceq$ is transitive.
A prime example of the construction in the previous theorem occurs when we have a function whose range space is partially ordered. We can construct a partial order on the equivalence classes in the domain that are associated with the function.
Suppose that $S$ and $T$ are sets and that $\preceq_T$ is a partial order on $T$. Suppose also that $f: S \to T$. Define the relation $\preceq_S$ on $S$ by $x \preceq_S y$ if and only if $f(x) \preceq_T f(y)$.
1. $\preceq_S$ is reflexive and transitive.
2. The equivalence relation on $S$ constructed in (10) is the equivalence relation associated with $f$, as in (6).
3. $\preceq_S$ can be extended to a partial order on the equivalence classes corresponding to $f$.
Proof
1. If $x \in S$ then $f(x) \preceq_T f(x)$ since $\preceq_T$ is reflexive, and hence $x \preceq_S x$. Thus $\preceq_S$ is reflexive. Suppose that $x, \, y, \, z \in S$ and that $x \preceq_S y$ and $y \preceq_S z$. Then $f(x) \preceq_T f(y)$ and $f(y) \preceq_T f(z)$. Hence $f(x) \preceq_T f(z)$ since $\preceq_T$ is transitive. Thus $\preceq_S$ is transitive.
2. For the equivalence relation $\approx$ on $S$ constructed in (10), $x \approx y$ if and only if $x \preceq_S y$ and $y \preceq_S x$ if and only if $f(x) \preceq_T f(y)$ and $f(y) \preceq_T f(x)$ if and only if $f(x) = f(y)$, since $\preceq_T$ is antisymmetric. Thus $\approx$ is the equivalence relation associated with $f$.
3. This follows immediately from (10) and parts (a) and (b). If $u, \, v \in \range(f)$, then $f^{-1}(\{u\}) \preceq_S f^{-1}(\{v\})$ if and only if $u \preceq_T v$.
Examples and Applications
Simple functions
Give the equivalence classes explicitly for the functions from $\R$ into $\R$ defined below:
1. $f(x) = x^2$.
2. $g(x) = \lfloor x \rfloor$.
3. $h(x) = \sin(x)$.
Answer
1. $[x] = \{x, -x\}$
2. $[x] = \left[ \lfloor x \rfloor, \lfloor x \rfloor + 1 \right)$
3. $[x] = \{x + 2 n \pi: n \in \Z\} \cup \{(2 n + 1) \pi - x: n \in \Z\}$
Calculus
Suppose that $I$ is a fixed interval of $\R$, and that $S$ is the set of differentiable functions from $I$ into $\R$. Consider the equivalence relation associated with the derivative operator $D$ on $S$, so that $D(f) = f^{\prime}$. For $f \in S$, give a simple description of $[f]$.
Answer
$[f] = \{f + c: c \in \R\}$
Congruence
Recall the division relation $\mid$ from $\N_+$ to $\Z$: For $d \in \N_+$ and $n \in \Z$, $d \mid n$ means that $n = k d$ for some $k \in \Z$. In words, $d$ divides $n$ or equivalently $n$ is a multiple of $d$. In the previous section, we showed that $\mid$ is a partial order on $\N_+$.
Fix $d \in \N_+$.
1. Define the relation $\equiv_d$ on $\Z$ by $m \equiv_d n$ if and only if $d \mid (n - m)$. The relation $\equiv_d$ is known as congruence modulo $d$.
2. Let $r_d: \Z \to \{0, 1, \ldots, d - 1\}$ be defined so that $r_d(n)$ is the remainder when $n$ is divided by $d$.
Recall that by the Euclidean division theorem, named for Euclid of course, $n \in \Z$ can be written uniquely in the form $n = k d + q$ where $k \in \Z$ and $q \in \{0, 1, \ldots, d - 1\}$, and then $r_d(n) = q$.
Congruence modulo $d$.
1. $\equiv_d$ is the equivalence relation associated with the function $r_d$.
2. There are $d$ distinct equivalence classes, given by $[q]_d = \{q + k d: k \in \Z\}$ for $q \in \{0, 1, \ldots, d - 1\}$.
Proof
1. Recall that for the equivalence relation associated with $r_d$, integers $m$ and $n$ are equivalent if and only if $r_d(m) = r_d(n)$. By the division theorem, $m = j d + p$ and $n = k d + q$, where $j, \, k \in \Z$ and $p, \, q \in \{0, 1, \ldots, d - 1\}$, and these representations are unique. Thus $n - m = (k - j) d + (q - p)$, and so $m \equiv_d n$ if and only if $d \mid (n - m)$ if and only if $p = q$ if and only if $r_d(m) = r_d(n)$.
2. Recall that the equivalence classes are $r_d^{-1}\{q\}$ for $q \in \range\left(r_d\right) = \{0, 1, \ldots, d - 1\}$. By the division theorem, $r_d^{-1}\{q\} = \{k d + q: k \in \Z\}$.
Explicitly give the equivalence classes for $\equiv_4$, congruence mod 4.
Answer
• $[0]_4 = \{0, 4, 8, 12, \ldots\} \cup \{-4, -8, -12, -16, \ldots\}$
• $[1]_4 = \{1, 5, 9, 13, \ldots\} \cup \{-3, -7, -11, -15, \ldots\}$
• $[2]_4 = \{2, 6, 10, 14, \ldots\} \cup \{-2, -6, -10, -14, \ldots\}$
• $[3]_4 = \{3, 7, 11, 15, \ldots\} \cup \{-1, -5, -9, -13, \ldots\}$
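Here is a Python sketch that reproduces these classes within a finite window of $\Z$ (note that Python's % operator returns the nonnegative remainder, matching $r_d$):

```python
# The classes [q]_4, listed within a finite window of Z.
d = 4
window = range(-16, 17)
for q in range(d):
    # Python's n % d is the nonnegative remainder, so this is r_d^{-1}{q}
    print(q, sorted(n for n in window if n % d == q))
```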
Linear Algebra
Linear algebra provides several examples of important and interesting equivalence relations. To set the stage, let $\R^{m \times n}$ denote the set of $m \times n$ matrices with real entries, for $m, \, n \in \N_+$.
Recall that the following are row operations on a matrix:
1. Multiply a row by a non-zero real number.
2. Interchange two rows.
3. Add a multiple of a row to another row.
Row operations are essential for inverting matrices and solving systems of linear equations.
Matrices $A, \, B \in \R^{m \times n}$ are row equivalent if $A$ can be transformed into $B$ by a finite sequence of row operations. Row equivalence is an equivalence relation on $\R^{m \times n}$.
Proof
If $A \in \R^{m \times n}$, then $A$ is row equivalent to itself: we can simply do nothing, or if you prefer, we can multiply the first row of $A$ by 1. For symmetry, the key is that each row operation can be reversed by another row operation: multiplying a row by $c \ne 0$ is reversed by multiplying the same row of the resulting matrix by $1 / c$. Interchanging two rows is reversed by interchanging the same two rows of the resulting matrix. Adding $c$ times row $i$ to row $j$ is reversed by adding $-c$ times row $i$ to row $j$ in the resulting matrix. Thus, if we can transform $A$ into $B$ by a finite sequence of row operations, then we can transform $B$ into $A$ by applying the reversed row operations in the reverse order. Transitivity is clear: If we can transform $A$ into $B$ by a sequence of row operations and $B$ into $C$ by another sequence of row operations, then we can transform $A$ into $C$ by putting the two sequences together.
Our next relation involves similarity, which is very important in the study of linear transformations, change of basis, and the theory of eigenvalues and eigenvectors.
Matrices $A, \, B \in \R^{n \times n}$ are similar if there exists an invertible $P \in \R^{n \times n}$ such that $P^{-1} A P = B$. Similarity is an equivalence relation on $\R^{n \times n}$.
Proof
If $A \in \R^{n \times n}$ then $A = I^{-1} A I$, where $I$ is the $n \times n$ identity matrix, so $A$ is similar to itself. Suppose that $A, \, B \in \R^{n \times n}$ and that $A$ is similar to $B$ so that $B = P^{-1} A P$ for some invertible $P \in \R^{n \times n}$. Then $A = P B P^{-1} = \left(P^{-1}\right)^{-1} B P^{-1}$ so $B$ is similar to $A$. Finally, suppose that $A, \, B, \, C \in \R^{n \times n}$ and that $A$ is similar to $B$ and that $B$ is similar to $C$. Then $B = P^{-1} A P$ and $C = Q^{-1} B Q$ for some invertible $P, \, Q \in \R^{n \times n}$. Then $C = Q^{-1} P^{-1} A P Q = (P Q)^{-1} A (P Q)$, so $A$ is similar to $C$.
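Similar matrices have the same characteristic polynomial and hence the same eigenvalues, which gives a quick numerical check. The following NumPy sketch uses arbitrary random matrices (a continuous random matrix is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # invertible with probability 1
B = np.linalg.inv(P) @ A @ P      # B is similar to A

def eigs(M):
    """Eigenvalues, sorted so the two spectra can be compared entrywise."""
    return np.sort_complex(np.linalg.eigvals(M))

print(np.allclose(eigs(A), eigs(B)))  # True: similar matrices share eigenvalues
```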
Next recall that for $A \in \R^{m \times n}$, the transpose of $A$ is the matrix $A^T \in \R^{n \times m}$ with the property that the $(i, j)$ entry of $A$ is the $(j, i)$ entry of $A^T$, for $i \in \{1, 2, \ldots, m\}$ and $j \in \{1, 2, \ldots, n\}$. Simply stated, $A^T$ is the matrix whose rows are the columns of $A$. For the theorem that follows, we need to remember that $(A B)^T = B^T A^T$ for $A \in \R^{m \times n}$ and $B \in \R^{n \times k}$, and $\left(A^T\right)^{-1} = \left(A^{-1}\right)^T$ if $A \in \R^{n \times n}$ is invertible.
Matrices $A, \, B \in \R^{n \times n}$ are congruent if there exists an invertible $P \in \R^{n \times n}$ such that $B = P^T A P$. Congruence is an equivalence relation on $\R^{n \times n}$.
Proof
If $A \in \R^{n \times n}$ then $A = I^T A I$, where again $I$ is the $n \times n$ identity matrix, so $A$ is congruent to itself. Suppose that $A, \, B \in \R^{n \times n}$ and that $A$ is congruent to $B$ so that $B = P^T A P$ for some invertible $P \in \R^{n \times n}$. Then $A = \left(P^T\right)^{-1} B P^{-1} = \left(P^{-1}\right)^T B P^{-1}$ so $B$ is congruent to $A$. Finally, suppose that $A, \, B, \, C \in \R^{n \times n}$ and that $A$ is congruent to $B$ and that $B$ is congruent to $C$. Then $B = P^T A P$ and $C = Q^T B Q$ for some invertible $P, \, Q \in \R^{n \times n}$. Then $C = Q^T P^T A P Q = (P Q)^T A (P Q)$, so $A$ is congruent to $C$.
Congruence is important in the study of orthogonal matrices and change of basis. Of course, the term congruence applied to matrices should not be confused with the same term applied to integers.
Number Systems
Equivalence relations play an important role in the construction of complex mathematical structures from simpler ones. Often the objects in the new structure are equivalence classes of objects constructed from the simpler structures, modulo an equivalence relation that captures the essential properties of the new objects.
The construction of number systems is a prime example of this general idea. The next exercise explores the construction of rational numbers from integers.
Define a relation $\approx$ on $\Z \times \N_+$ by $(j, k) \approx (m, n)$ if and only if $j\,n = k\,m$.
1. $\approx$ is an equivalence relation.
2. Define $\frac{m}{n} = [(m, n)]$, the equivalence class generated by $(m, n)$, for $m \in \Z$ and $n \in \N_+$. This definition captures the essential properties of the rational numbers.
Proof
1. For $(m, n) \in \Z \times \N_+$, $m n = n m$ of course, so $(m, n) \approx (m, n)$. Hence $\approx$ is reflexive. If $(j, k), \, (m, n) \in \Z \times \N_+$ and $(j, k) \approx (m, n)$, then $j n = k m$ so trivially $m k = n j$, and hence $(m, n) \approx (j, k)$. Thus $\approx$ is symmetric. Finally, suppose that $(j, k), \, (m, n), \, (p, q) \in \Z \times \N_+$ and that $(j, k) \approx (m, n)$ and $(m, n) \approx (p, q)$. Then $j n = k m$ and $m q = n p$, so $j n q = k m q = k n p$, and hence $j q = k p$ since $n \gt 0$. Hence $(j, k) \approx (p, q)$ so $\approx$ is transitive.
2. Suppose that $\frac{j}{k}$ and $\frac{m}{n}$ are rational numbers in the usual, informal sense, where $j, \, m \in \Z$ and $k, \, n \in \N_+$. Then $\frac{j}{k} = \frac{m}{n}$ if and only if $j n = k m$ if and only if $(j, k) \approx (m, n)$, so it makes sense to define $\frac{m}{n}$ as the equivalence class generated by $(m, n)$. Addition and multiplication are defined in the usual way: if $(j, k), \, (m, n) \in \Z \times \N_+$ then $\frac{j}{k} + \frac{m}{n} = \frac{j n + m k}{k n}, \ \ \frac{j}{k} \cdot \frac{m}{n} = \frac{j m}{k n}$ The definitions are consistent; that is, they do not depend on the particular representations of the equivalence classes.
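The cross-multiplication test is trivial to code, and Python's Fraction class realizes the same idea, reducing each pair to a canonical representative of its equivalence class. A brief sketch:

```python
from fractions import Fraction

def equiv(jk, mn):
    """(j, k) ~ (m, n) iff j*n == k*m."""
    (j, k), (m, n) = jk, mn
    return j * n == k * m

print(equiv((1, 2), (3, 6)))   # True: both pairs represent 1/2
print(equiv((1, 2), (2, 3)))   # False

# Fraction reduces each pair to a canonical representative of its class
print(Fraction(3, 6) == Fraction(1, 2))  # True
```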
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\A}{\mathbb{A}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Definitions
Suppose that $\mathscr{S}$ is a non-empty collection of sets. We define a relation $\approx$ on $\mathscr{S}$ by $A \approx B$ if and only if there exists a one-to-one function $f$ from $A$ onto $B$. The relation $\approx$ is an equivalence relation on $\mathscr{S}$. That is, for all $A, \, B, \, C \in \mathscr{S}$,
1. $A \approx A$, the reflexive property
2. If $A \approx B$ then $B \approx A$, the symmetric property
3. If $A \approx B$ and $B \approx C$ then $A \approx C$, the transitive property
Proof
1. The identity function $I_A$ on $A$, given by $I_A(x) = x$ for $x \in A$, maps $A$ one-to-one onto $A$. Hence $A \approx A$
2. If $A \approx B$ then there exists a one-to-one function $f$ from $A$ onto $B$. But then $f^{-1}$ is a one-to-one function from $B$ onto $A$, so $B \approx A$
3. Suppose that $A \approx B$ and $B \approx C$. Then there exists a one-to-one function $f$ from $A$ onto $B$ and a one-to-one function $g$ from $B$ onto $C$. But then $g \circ f$ is a one-to-one function from $A$ onto $C$, so $A \approx C$.
A one-to-one function $f$ from $A$ onto $B$ is sometimes called a bijection. Thus if $A \approx B$ then $A$ and $B$ are in one-to-one correspondence and are said to have the same cardinality. The equivalence classes under this equivalence relation capture the notion of having the same number of elements.
Let $\N_0 = \emptyset$, and for $k \in \N_+$, let $\N_k = \{0, 1, \ldots, k - 1\}$. As always, $\N = \{0, 1, 2, \ldots\}$ is the set of all natural numbers.
Suppose that $A$ is a set.
1. $A$ is finite if $A \approx \N_k$ for some $k \in \N$, in which case $k$ is the cardinality of $A$, and we write $\#(A) = k$.
2. $A$ is infinite if $A$ is not finite.
3. $A$ is countably infinite if $A \approx \N$.
4. $A$ is countable if $A$ is finite or countably infinite.
5. $A$ is uncountable if $A$ is not countable.
In part (a), think of $\N_k$ as a reference set with $k$ elements; any other set with $k$ elements must be equivalent to this one. We will study the cardinality of finite sets in the next two sections on Counting Measure and Combinatorial Structures. In this section, we will concentrate primarily on infinite sets. In part (d), a countable set is one that can be enumerated or counted by putting the elements into one-to-one correspondence with $\N_k$ for some $k \in \N$ or with all of $\N$. An uncountable set is one that cannot be so counted. Countable sets play a special role in probability theory, as in many other branches of mathematics. A priori, it's not clear that there are uncountable sets, but we will soon see examples.
Preliminary Examples
If $S$ is a set, recall that $\mathscr{P}(S)$ denotes the power set of $S$ (the set of all subsets of $S$). If $A$ and $B$ are sets, then $A^B$ is the set of all functions from $B$ into $A$. In particular, $\{0, 1\}^S$ denotes the set of functions from $S$ into $\{0, 1\}$.
If $S$ is a set then $\mathscr{P}(S) \approx \{0, 1\}^S$.
Proof
The mapping that takes a set $A \in \mathscr{P}(S)$ into its indicator function $\boldsymbol{1}_A \in \{0, 1\}^S$ is one-to-one and onto. Specifically, if $A, \, B \in \mathscr{P}(S)$ and $\bs{1}_A = \bs{1}_B$, then $A = B$, so the mapping is one-to-one. On the other hand, if $f \in \{0, 1\}^S$ then $f = \bs{1}_A$ where $A = \{x \in S: f(x) = 1\}$. Hence the mapping is onto.
Next are some examples of countably infinite sets.
The following sets are countably infinite:
1. The set of even natural numbers $E = \{0, 2, 4, \ldots\}$
2. The set of integers $\Z$
Proof
1. The function $f: \N \to E$ given by $f(n) = 2 n$ is one-to-one and onto.
2. The function $g: \N \to \Z$ given by $g(n) = \frac{n}{2}$ if $n$ is even and $g(n) = -\frac{n + 1}{2}$ if $n$ is odd, is one-to-one and onto.
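The two bijections in the proof are easy to tabulate; here is a small Python sketch checking them on initial segments:

```python
def f(n):
    """N -> E, f(n) = 2n."""
    return 2 * n

def g(n):
    """N -> Z: 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

print([f(n) for n in range(6)])   # [0, 2, 4, 6, 8, 10]
print([g(n) for n in range(7)])   # [0, -1, 1, -2, 2, -3, 3]
print(len({g(n) for n in range(1000)}))  # 1000 distinct values: g is one-to-one
```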
At one level, it might seem that $E$ has only half as many elements as $\N$ while $\Z$ has twice as many elements as $\N$. As the previous result shows, that point of view is incorrect: $\N$, $E$, and $\Z$ all have the same cardinality (and are countably infinite). The next example shows that there are indeed uncountable sets.
If $A$ is a set with at least two elements then $S = A^\N$, the set of all functions from $\N$ into $A$, is uncountable.
Proof
The proof is by contradiction, and uses a nice trick known as the diagonalization method. Suppose that $S$ is countably infinite (it's clearly not finite), so that the elements of $S$ can be enumerated: $S = \{f_0, f_1, f_2, \ldots\}$. Let $a$ and $b$ denote distinct elements of $A$ and define $g: \N \to A$ by $g(n) = b$ if $f_n(n) = a$ and $g(n) = a$ if $f_n(n) \ne a$. Note that $g \ne f_n$ for each $n \in \N$, so $g \notin S$. This contradicts the fact that $S$ is the set of all functions from $\N$ into $A$.
Subsets of Infinite Sets
Surely a set must be at least as large as any of its subsets, in terms of cardinality. On the other hand, by example (4), the set of natural numbers $\N$, the set of even natural numbers $E$ and the set of integers $\Z$ all have exactly the same cardinality, even though $E \subset \N \subset \Z$. In this subsection, we will explore some interesting and somewhat paradoxical results that relate to subsets of infinite sets. Along the way, we will see that the countable infinity is the smallest of the infinities.
If $S$ is an infinite set then $S$ has a countable infinite subset.
Proof
Select $a_0 \in S$. It's possible to do this since $S$ is infinite and therefore nonempty. Inductively, having chosen $\{a_0, a_1, \ldots, a_{k-1}\} \subseteq S$, select $a_k \in S \setminus \{a_0, a_1, \ldots, a_{k-1}\}$. Again, it's possible to do this since $S$ is not finite. Manifestly, $\{a_0, a_1, \ldots \}$ is a countably infinite subset of $S$.
A set $S$ is infinite if and only if $S$ is equivalent to a proper subset of $S$.
Proof
If $S$ is finite, then $S$ is not equivalent to a proper subset by the pigeonhole principle. If $S$ is infinite, then $S$ has countably infinite subset $\{a_0, a_1, a_2, \ldots\}$ by the previous result. Define the function $f: S \to S$ by $f \left(a_n\right) = a_{2 n}$ for $n \in \N$ and $f(x) = x$ for $x \in S \setminus \{a_0, a_1, a_2, \ldots\}$. Then $f$ maps $S$ one-to-one onto $S \setminus \{a_1, a_3, a_5, \ldots\}$.
When $S$ was infinite in the proof of the previous result, not only did we map $S$ one-to-one onto a proper subset, we actually threw away a countably infinite subset and still maintained equivalence. Similarly, we can add a countably infinite set to an infinite set $S$ without changing the cardinality.
If $S$ is an infinite set and $B$ is a countable set, then $S \approx S \cup B$.
Proof
Consider the most extreme case where $B$ is countably infinite and disjoint from $S$. Then $S$ has a countably infinite subset $A = \{a_0, a_1, a_2, \ldots\}$ by the result above, and $B$ can be enumerated, so $B = \{b_0, b_1, b_2, \ldots\}$. Define the function $f: S \to S \cup B$ by $f\left(a_n\right) = a_{n/2}$ if $n$ is even, $f\left(a_n\right) = b_{(n-1)/2}$ if $n$ is odd, and $f(x) = x$ if $x \in S \setminus \{a_0, a_1, a_2, \ldots\}$. Then $f$ maps $S$ one-to-one onto $S \cup B$.
In particular, if $S$ is uncountable and $B$ is countable then $S \cup B$ and $S \setminus B$ have the same cardinality as $S$, and in particular are uncountable. In terms of the dichotomies finite-infinite and countable-uncountable, a set is indeed at least as large as a subset. First we need a preliminary result.
If $S$ is countably infinite and $A \subseteq S$ then $A$ is countable.
Proof
It suffices to show that if $A$ is an infinite subset of $S$ then $A$ is countably infinite. Since $S$ is countably infinite, it can be enumerated: $S = \{x_0, x_1, x_2, \ldots\}$. Let $n_i$ be the $i$th smallest index such that $x_{n_i} \in A$. Then $A = \{x_{n_0}, x_{n_1}, x_{n_2}, \ldots\}$ and hence is countably infinite.
Suppose that $A \subseteq B$.
1. If $B$ is finite then $A$ is finite.
2. If $A$ is infinite then $B$ is infinite.
3. If $B$ is countable then $A$ is countable.
4. If $A$ is uncountable then $B$ is uncountable.
Proof
1. This is clear from the definition of a finite set.
2. This is the contrapositive of (a).
3. If $A$ is finite, then $A$ is countable. If $A$ is infinite, then $B$ is infinite by (b) and hence is countably infinite. But then $A$ is countably infinite by (9).
4. This is the contrapositive of (c).
Comparisons by one-to-one and onto functions
We will look deeper at the general question of when one set is at least as big as another, in the sense of cardinality. Not surprisingly, this will eventually lead to a partial order on the cardinality equivalence classes.
First note that if there exists a function that maps a set $A$ one-to-one into a set $B$, then in a sense, there is a copy of $A$ contained in $B$. Hence $B$ should be at least as large as $A$.
Suppose that $f: A \to B$ is one-to-one.
1. If $B$ is finite then $A$ is finite.
2. If $A$ is infinite then $B$ is infinite.
3. If $B$ is countable then $A$ is countable.
4. If $A$ is uncountable then $B$ is uncountable.
Proof
Note that $f$ maps $A$ one-to-one onto $f(A)$. Hence $A \approx f(A)$ and $f(A) \subseteq B$. The results now follow from (10):
1. If $B$ is finite then $f(A)$ is finite and hence $A$ is finite.
2. If $A$ is infinite then $f(A)$ is infinite and hence $B$ is infinite.
3. If $B$ is countable then $f(A)$ is countable and hence $A$ is countable.
4. If $A$ is uncountable then $f(A)$ is uncountable and hence $B$ is uncountable.
On the other hand, if there exists a function that maps a set $A$ onto a set $B$, then in a sense, there is a copy of $B$ contained in $A$. Hence $A$ should be at least as large as $B$.
Suppose that $f: A \to B$ is onto.
1. If $A$ is finite then $B$ is finite.
2. If $B$ is infinite then $A$ is infinite.
3. If $A$ is countable then $B$ is countable.
4. If $B$ is uncountable then $A$ is uncountable.
Proof
For each $y \in B$, select a specific $x \in A$ with $f(x) = y$ (if you are persnickety, you may need to invoke the axiom of choice). Let $C$ be the set of chosen points. Then $f$ maps $C$ one-to-one onto $B$, so $C \approx B$ and $C \subseteq A$. The results now follow from (11):
1. If $A$ is finite then $C$ is finite and hence $B$ is finite.
2. If $B$ is infinite then $C$ is infinite and hence $A$ is infinite.
3. If $A$ is countable then $C$ is countable and hence $B$ is countable.
4. If $B$ is uncountable then $C$ is uncountable and hence $A$ is uncountable.
The previous exercise also could be proved from the one before, since if there exists a function $f$ mapping $A$ onto $B$, then there exists a function $g$ mapping $B$ one-to-one into $A$. This duality is proven in the discussion of the axiom of choice. A simple and useful corollary of the previous two theorems is that if $B$ is a given countably infinite set, then a set $A$ is countable if and only if there exists a one-to-one function $f$ from $A$ into $B$, if and only if there exists a function $g$ from $B$ onto $A$.
If $A_i$ is a countable set for each $i$ in a countable index set $I$, then $\bigcup_{i \in I} A_i$ is countable.
Proof
Consider the most extreme case in which the index set $I$ is countably infinite. Since $A_i$ is countable, there exists a function $f_i$ that maps $\N$ onto $A_i$ for each $i \in \N$. Let $M = \left\{2^i 3^j: (i, j) \in \N \times \N\right\}$. Note that the points in $M$ are distinct, that is, $2^i 3^j \ne 2^m 3^n$ if $(i, j), \, (m, n) \in \N \times \N$ and $(i, j) \ne (m, n)$. Hence $M$ is infinite, and since $M \subset \N$, $M$ is countably infinite. The function $f$ given by $f\left(2^i 3^j\right) = f_i(j)$ for $(i, j) \in \N \times \N$ maps $M$ onto $\bigcup_{i \in I} A_i$, and hence this last set is countable.
If $A$ and $B$ are countable then $A \times B$ is countable.
Proof
There exists a function $f$ that maps $\N$ onto $A$, and there exists a function $g$ that maps $\N$ onto $B$. Again, let $M = \left\{2^i 3^j: (i, j) \in \N \times \N \right\}$ and recall that $M$ is countably infinite. Define $h: M \to A \times B$ by $h\left(2^i 3^j\right) = \left(f(i), g(j)\right)$. Then $h$ maps $M$ onto $A \times B$ and hence this last set is countable.
The last result could also be proven from the one before, by noting that
$A \times B = \bigcup_{a \in A} \{a\} \times B$
Both proofs work because the set $M$ is essentially a copy of $\N \times \N$, embedded inside of $\N$. The last theorem generalizes to the statement that a finite product of countable sets is still countable. But, from (5), a product of infinitely many sets (with at least 2 elements each) will be uncountable.
The set of rational numbers $\Q$ is countably infinite.
Proof
The sets $\Z$ and $\N_+$ are countably infinite and hence the set $\Z \times \N_+$ is countably infinite. The function $f: \Z \times \N_+ \to \Q$ given by $f(m, n) = \frac{m}{n}$ is onto.
A real number is algebraic if it is the root of a polynomial function (of degree 1 or more) with integer coefficients. Rational numbers are algebraic, as are rational roots of rational numbers (when defined). Moreover, the algebraic numbers are closed under addition, multiplication, and division. A real number is transcendental if it's not algebraic. The numbers $e$ and $\pi$ are transcendental, but we don't know very many other transcendental numbers by name. However, as we will see, most (in the sense of cardinality) real numbers are transcendental.
The set of algebraic numbers $\A$ is countably infinite.
Proof
Let $\Z_0 = \Z \setminus \{0\}$ and let $\Z_n = \Z^{n-1} \times \Z_0$ for $n \in \N_+$. The set $\Z_n$ is countably infinite for each $n$. Let $C = \bigcup_{n=1}^\infty \Z_n$. Think of $C$ as the set of coefficients and note that $C$ is countably infinite. Let $P$ denote the set of polynomials of degree 1 or more, with integer coefficients. The function $(a_0, a_1, \ldots, a_n) \mapsto a_0 + a_1\,x + \cdots + a_n\,x^n$ maps $C$ onto $P$, and hence $P$ is countable. For $p \in P$, let $A_p$ denote the set of roots of $p$. A polynomial of degree $n$ in $P$ has at most $n$ roots, by the fundamental theorem of algebra, so in particular $A_p$ is finite for each $p \in P$. Finally, note that $\A = \bigcup_{p \in P} A_p$ and so $\A$ is countable. Of course $\N \subset \A$, so $\A$ is countably infinite.
Now let's look at some uncountable sets.
The interval $[0, 1)$ is uncountable.
Proof
Recall that $\{0, 1\}^{\N_+}$ is the set of all functions from $\N_+$ into $\{0, 1\}$, which in this case, can be thought of as infinite sequences or bit strings: $\{0, 1\}^{\N_+} = \left\{\bs{x} = (x_1, x_2, \ldots): x_n \in \{0, 1\} \text{ for all } n \in \N_+\right\}$ By (5), this set is uncountable. Let $N = \left\{\bs{x} \in \{0, 1\}^{\N_+}: x_n = 1 \text{ for all but finitely many } n\right\}$, the set of bit strings that eventually terminate in all 1s. Note that $N = \bigcup_{n=1}^\infty N_n$ where $N_n = \left\{\bs{x} \in \{0, 1\}^{\N_+}: x_k = 1 \text{ for all } k \ge n\right\}$. Clearly $N_n$ is finite for all $n \in \N_+$, so $N$ is countable, and therefore $S = \{0, 1\}^{\N_+} \setminus N$ is uncountable. In fact, $S \approx \{0, 1\}^{\N_+}$. The function $\bs{x} \mapsto \sum_{n = 1}^\infty \frac{x_n}{2^n}$ maps $S$ one-to-one onto $[0, 1)$. In words every number in $[0, 1)$ has a unique binary expansion in the form of a sequence in $S$. Hence $[0, 1) \approx S$ and in particular, is uncountable. The reason for eliminating the bit strings that terminate in 1s is to ensure uniqueness, so that the mapping is one-to-one. The bit string $x_1 x_2 \cdots x_k 0 1 1 1\cdots$ corresponds to the same number in $[0, 1)$ as the bit string $x_1 x_2 \cdots x_k 1 0 0 0\cdots$.
The following sets have the same cardinality, and in particular all are uncountable:
1. $\R$, the set of real numbers.
2. Any interval $I$ of $\R$, as long as the interval is not empty or a single point.
3. $\R \setminus \Q$, the set of irrational numbers.
4. $\R \setminus \A$, the set of transcendental numbers.
5. $\mathscr{P}(\N)$, the power set of $\N$.
Proof
1. The mapping $x \mapsto \frac{2 x - 1}{x (1 - x)}$ maps $(0, 1)$ one-to-one onto $\R$ so $(0, 1) \approx \R$. But $(0, 1) = [0, 1) \setminus \{0\}$, so $[0, 1) \approx (0, 1) \approx \R$, and all of these sets are uncountable by the previous result.
2. Suppose $a, \, b \in \R$ and $a \lt b$. The mapping $x \mapsto a + (b - a) x$ maps $(0, 1)$ one-to-one onto $(a, b)$ and hence $(a, b) \approx (0, 1) \approx \R$. Also, $[a, b) = (a, b) \cup \{a\}$, $(a, b] = (a, b) \cup\{b\}$, and $[a, b] = (a, b) \cup \{a, b\}$, so $(a, b) \approx [a, b) \approx (a, b] \approx [a, b]\approx \R$. The function $x \mapsto e^x$ maps $\R$ one-to-one onto $(0, \infty)$, so $(0, \infty) \approx \R$. For $a \in \R$, the function $x \mapsto a + x$ maps $(0, \infty)$ one-to-one onto $(a, \infty)$ and the mapping $x \mapsto a - x$ maps $(0, \infty)$ one to one onto $(-\infty, a)$ so $(a, \infty) \approx (-\infty, a) \approx (0, \infty) \approx \R$. Next, $[a, \infty) = (a, \infty) \cup \{a\}$ and $(-\infty, a] = (-\infty, a) \cup \{a\}$, so $[a, \infty) \approx (-\infty, a] \approx \R$.
3. $\Q$ is countably infinite, so $\R \setminus \Q \approx \R$.
4. Similarly, $\A$ is countably infinite, so $\R \setminus \A \approx \R$.
5. If $S$ is countably infinite, then by the previous result and (a), $\mathscr{P}(S) \approx \mathscr{P}(\N_+) \approx \{0, 1\}^{\N_+} \approx [0, 1)$.
The Cardinality Partial Order
Suppose that $\mathscr{S}$ is a nonempty collection of sets. We define the relation $\preceq$ on $\mathscr{S}$ by $A \preceq B$ if and only if there exists a one-to-one function $f$ from $A$ into $B$, if and only if there exists a function $g$ from $B$ onto $A$. In light of the previous subsection, $A \preceq B$ should capture the notion that $B$ is at least as big as $A$, in the sense of cardinality.
The relation $\preceq$ is reflexive and transitive.
Proof
For $A \in \mathscr{S}$, the identity function $I_A: A \to A$ given by $I_A(x) = x$ is one-to-one (and also onto), so $A \preceq A$. Suppose that $A, \, B, \, C \in \mathscr{S}$ and that $A \preceq B$ and $B \preceq C$. Then there exist one-to-one functions $f: A \to B$ and $g: B \to C$. But then $g \circ f: A \to C$ is one-to-one, so $A \preceq C$.
Thus, we can use the construction in the section on on Equivalence Relations to first define an equivalence relation on $\mathscr{S}$, and then extend $\preceq$ to a true partial order on the collection of equivalence classes. The only question that remains is whether the equivalence relation we obtain in this way is the same as the one that we have been using in our study of cardinality. Rephrased, the question is this: If there exists a one-to-one function from $A$ into $B$ and a one-to-one function from $B$ into $A$, does there necessarily exist a one-to-one function from $A$ onto $B$? Fortunately, the answer is yes; the result is known as the Schröder-Bernstein Theorem, named for Ernst Schröder and Felix Bernstein.
If $A \preceq B$ and $B \preceq A$ then $A \approx B$.
Proof
Set inclusion $\subseteq$ is a partial order on $\mathscr{P}(A)$ (the power set of $A$) with the property that every subcollection of $\mathscr{P}(A)$ has a supremum (namely the union of the subcollection). Suppose that $f$ maps $A$ one-to-one into $B$ and $g$ maps $B$ one-to-one into $A$. Define the function $h: \mathscr{P}(A) \to \mathscr{P}(A)$ by $h(U) = A \setminus g[B \setminus f(U)]$ for $U \subseteq A$. Then $h$ is increasing: \begin{align} U \subseteq V & \implies f(U) \subseteq f(V) \implies B \setminus f(V) \subseteq B \setminus f(U) \\ & \implies g[B \setminus f(V)] \subseteq g[B \setminus f(U)] \implies A \setminus g[B \setminus f(U)] \subseteq A \setminus g[B \setminus f(V)] \end{align} From the fixed point theorem for partially ordered sets, there exists $U \subseteq A$ such that $h(U) = U$. Hence $U = A \setminus g[B \setminus f(U)]$ and therefore $A \setminus U = g[B \setminus f(U)]$. Now define $F: A \to B$ by $F(x) = f(x)$ if $x \in U$ and $F(x) = g^{-1}(x)$ if $x \in A \setminus U$.
Next we show that $F$ is one-to-one. Suppose that $x_1, \, x_2 \in A$ and $F(x_1) = F(x_2)$. If $x_1, \, x_2 \in U$ then $f(x_1) = f(x_2)$ so $x_1 = x_2$ since $f$ is one-to-one. If $x_1, \, x_2 \in A \setminus U$ then $g^{-1}(x_1) = g^{-1}(x_2)$ so $x_1 = x_2$ since $g^{-1}$ is one-to-one. If $x_1 \in U$ and $x_2 \in A \setminus U$, then $F(x_1) = f(x_1) \in f(U)$ while $F(x_2) = g^{-1}(x_2) \in B \setminus f(U)$, so $F(x_1) = F(x_2)$ is impossible.
Finally we show that $F$ is onto. Let $y \in B$. If $y \in f(U)$ then $y = f(x)$ for some $x \in U$ so $F(x) = y$. If $y \in B \setminus f(U)$ then $x = g(y) \in A \setminus U$ so $F(x) = g^{-1}(x) = y$.
We will write $A \prec B$ if $A \preceq B$, but $A \not \approx B$. That is, there exists a one-to-one function from $A$ into $B$, but there does not exist a function from $A$ onto $B$. Note that $\prec$ would have its usual meaning if applied to the equivalence classes. That is, $[A] \prec [B]$ if and only if $[A] \preceq [B]$ but $[A] \ne [B]$. Intuitively, of course, $A \prec B$ means that $B$ is strictly larger than $A$, in the sense of cardinality.
$A \prec B$ in each of the following cases:
1. $A$ and $B$ are finite and $\#(A) \lt \#(B)$.
2. $A$ is finite and $B$ is countably infinite.
3. $A$ is countably infinite and $B$ is uncountable.
We close our discussion with the observation that for any set, there is always a larger set.
If $S$ is a set then $S \prec \mathscr{P}(S)$.
Proof
First, it's trivial to map $S$ one-to-one into $\mathscr{P}(S)$; just map $x$ to $\{x\}$. Suppose now that $f$ maps $S$ onto $\mathscr{P}(S)$ and let $R = \{x \in S: x \notin f(x)\}$. Since $f$ is onto, there exists $t \in S$ such that $f(t) = R$. Note that $t \in f(t)$ if and only if $t \notin f(t)$, a contradiction. Hence no such $f$ exists.
The proof that a set cannot be mapped onto its power set is similar to the Russell paradox, named for Bertrand Russell.
The continuum hypothesis is the statement that there is no set whose cardinality is strictly between that of $\N$ and $\R$. The continuum hypothesis actually started out as the continuum conjecture, until it was shown to be consistent with the usual axioms of set theory (by Kurt Gödel in 1940), and independent of those axioms (by Paul Cohen in 1963).
Assuming the continuum hypothesis, if $S$ is uncountable then there exists $A \subseteq S$ such that $A$ and $A^c$ are uncountable.
Proof
Under the continuum hypothesis, if $S$ is uncountable then $[0, 1) \preceq S$. Hence there exists a one-to-one function $f: [0, 1) \to S$. Let $A$ be the image of $\left[0, \frac{1}{2}\right)$ under $f$. Then $A$ is uncountable, and since the image of $\left[\frac{1}{2}, 1\right)$ under $f$ is a subset of $A^c$, $A^c$ is uncountable as well.
There is a more complicated proof of the last result, without the continuum hypothesis and just using the axiom of choice.
$\newcommand{\N}{\mathbb{N}}$
Basic Theory
For our first discussion, we assume that the universal set $S$ is finite. Recall the following definition from the section on cardinality.
For $A \subseteq S$, the cardinality of $A$ is the number of elements in $A$, and is denoted $\#(A)$. The function $\#$ on $\mathscr{P}(S)$ is called counting measure.
Counting measure plays a fundamental role in discrete probability structures, and particularly those that involve sampling from a finite set. The set $S$ is typically very large, hence efficient counting methods are essential. The first combinatorial problem is attributed to the Greek mathematician Xenocrates.
In many cases, a set of objects can be counted by establishing a one-to-one correspondence between the given set and some other set. Naturally, the two sets have the same number of elements, but for various reasons, the second set may be easier to count.
The Addition Rule
The addition rule of combinatorics is simply the additivity axiom of counting measure.
If $\{A_1, A_2, \ldots, A_n\}$ is a collection of disjoint subsets of $S$ then $\#\left( \bigcup_{i=1}^n A_i \right) = \sum_{i=1}^n \#(A_i)$
The following counting rules are simple consequences of the addition rule. Be sure to try the proofs yourself before reading the ones in the text.
$\#(A^c) = \#(S) - \#(A)$. This is the complement rule.
Proof
Note that $A$ and $A^c$ are disjoint and their union is $S$. Hence $\#(A) + \#(A^c) = \#(S)$ by the addition rule.
$\#(B \setminus A) = \#(B) - \#(A \cap B)$. This is the difference rule.
Proof
Note that $A \cap B$ and $B \setminus A$ are disjoint and their union is $B$. Hence $\#(A \cap B) + \#(B \setminus A) = \#(B)$.
If $A \subseteq B$ then $\#(B \setminus A) = \#(B) - \#(A)$. This is the proper difference rule.
Proof
This follows from the difference rule, since $A \cap B = A$.
If $A \subseteq B$ then $\#(A) \le \#(B)$.
Proof
This follows from the proper difference rule: $\#(B) = \#(A) + \#(B \setminus A) \ge \#(A)$.
Thus, $\#$ is an increasing function, relative to the subset partial order $\subseteq$ on $\mathscr{P}(S)$, and the ordinary order $\le$ on $\N$.
Inequalities
Our next discussion concerns two inequalities that are useful for obtaining bounds on the number of elements in a set. The first is Boole's inequality (named after George Boole) which gives an upper bound on the cardinality of a union.
If $\{A_1, A_2, \ldots, A_n\}$ is a finite collection of subsets of $S$ then $\#\left( \bigcup_{i=1}^n A_i \right) \le \sum_{i=1}^n \#(A_i)$
Proof
Let $B_1 = A_1$ and $B_i = A_i \setminus (A_1 \cup \cdots \cup A_{i-1})$ for $i \in \{2, 3, \ldots, n\}$. Note that $\{B_1, B_2, \ldots, B_n\}$ is a pairwise disjoint collection and has the same union as $\{A_1, A_2, \ldots, A_n\}$. From the increasing property, $\#(B_i) \le \#(A_i)$ for each $i \in \{1, 2, \ldots, n\}$. Hence by the addition rule, $\#\left( \bigcup_{i=1}^n A_i \right) = \#\left(\bigcup_{i=1}^n B_i\right) \le \sum_{i=1}^n \#(A_i)$
Intuitively, Boole's inequality holds because parts of the union have been counted more than once in the expression on the right. The second inequality is Bonferroni's inequality (named after Carlo Bonferroni), which gives a lower bound on the cardinality of an intersection.
If $\{A_1, A_2, \ldots, A_n\}$ is a finite collection of subsets of $S$ then $\#\left( \bigcap_{i=1}^n A_i \right) \ge \#(S) - \sum_{i=1}^n [\#(S) - \#(A_i)]$
Proof
Using the complement rule, Boole's inequality, and DeMorgan's law, $\#\left(\bigcap_{i=1}^n A_i\right) = \#(S) - \#\left(\bigcup_{i=1}^n A_i^c\right) \ge \#(S) - \sum_{i=1}^n \#(A_i^c) = \#(S) - \sum_{i=1}^n [\#(S) - \#(A_i)]$
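Both inequalities are easy to check numerically on concrete sets. A brief Python sketch (the example sets are arbitrary choices of ours):

```python
S = set(range(20))
A = [{x for x in S if x % 2 == 0},   # even numbers
     {x for x in S if x % 3 == 0},   # multiples of 3
     {x for x in S if x < 7}]

# Boole's inequality: the union is no larger than the sum of the parts
assert len(set().union(*A)) <= sum(len(Ai) for Ai in A)

# Bonferroni's inequality: lower bound for the intersection
assert len(S.intersection(*A)) >= len(S) - sum(len(S) - len(Ai) for Ai in A)
```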
The Inclusion-Exclusion Formula
The inclusion-exclusion formula gives the cardinality of a union of sets in terms of the cardinality of the various intersections of the sets. The formula is useful because intersections are often easier to count. We start with the special cases of two sets and three sets. As usual, we assume that the sets are subsets of a finite universal set $S$.
If $A$ and $B$ are subsets of $S$ then $\#(A \cup B) = \#(A) + \#(B) - \#(A \cap B)$.
Proof
Note that $A \cup B = A \cup (B \setminus A)$ and that the two sets on the right are disjoint. By the addition rule and the difference rule, $\#(A \cup B) = \#(A) + \#(B \setminus A) = \#(A) + \#(B) - \#(A \cap B)$.
If $A$, $B$, $C$ are subsets of $S$ then $\#(A \cup B \cup C) = \#(A) + \#(B) + \#(C) - \#(A \cap B) - \#(A \cap C) - \#(B \cap C) + \#(A \cap B \cap C)$.
Proof
Apply the two-set rule to $A \cup B$ and $C$: $\#(A \cup B \cup C) = \#(A \cup B) + \#(C) - \#[(A \cup B) \cap C]$. Expand $\#(A \cup B)$ by the two-set rule, and note that $(A \cup B) \cap C = (A \cap C) \cup (B \cap C)$, so the two-set rule applies to this union as well. Simplifying gives the result.
The inclusion-exclusion rule for two and three sets can be generalized to a union of $n$ sets; the generalization is known as the (general) inclusion-exclusion formula.
Suppose that $\{A_i: i \in I\}$ is a collection of subsets of $S$ where $I$ is an index set with $\#(I) = n$. Then $\# \left( \bigcup_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \# \left( \bigcap_{j \in J} A_j \right)$
Proof
The proof is by induction on $n$. The formula holds for $n = 2$ sets by the result for two sets. Suppose the formula holds for $n \in \{2, 3, \ldots\}$, and suppose that $\{A_1, A_2, \ldots, A_n, A_{n+1}\}$ is a collection of $n + 1$ subsets of $S$. Then $\bigcup_{i=1}^{n+1} A_i = \left(\bigcup_{i=1}^n A_i\right) \cup \left[A_{n+1} \setminus \left(\bigcup_{i=1}^n A_i\right)\right]$ and the two sets connected by the central union are disjoint. Using the addition rule and the difference rule, \begin{align} \#\left(\bigcup_{i=1}^{n+1} A_i\right) & = \#\left(\bigcup_{i=1}^n A_i\right) + \#(A_{n+1}) - \#\left[A_{n+1} \cap \left(\bigcup_{i=1}^n A_i\right)\right] \\ & = \#\left(\bigcup_{i=1}^n A_i\right) + \#(A_{n+1}) - \#\left[\bigcup_{i=1}^n (A_i \cap A_{n+1})\right] \end{align} By the induction hypothesis, the formula holds for the two unions of $n$ sets in the last expression. The result then follows by simplification.
The general Bonferroni inequalities, named again for Carlo Bonferroni, state that if the sum on the right is truncated after $k$ terms ($k \lt n$), then the truncated sum is an upper bound for the cardinality of the union if $k$ is odd (so that the last term has a positive sign) and is a lower bound for the cardinality of the union if $k$ is even (so that the last term has a negative sign).
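The inclusion-exclusion formula can be verified by brute force for small collections of sets. A short sketch, with arbitrary example sets of ours:

```python
from itertools import combinations

def inclusion_exclusion(sets):
    """Cardinality of the union, computed from the inclusion-exclusion formula."""
    total = 0
    for k in range(1, len(sets) + 1):
        for J in combinations(sets, k):
            total += (-1) ** (k - 1) * len(set.intersection(*J))
    return total

A = [{x for x in range(60) if x % d == 0} for d in (2, 3, 5)]
assert inclusion_exclusion(A) == len(set().union(*A))
```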
The Multiplication Rule
The multiplication rule of combinatorics is based on the formulation of a procedure (or algorithm) that generates the objects to be counted.
Suppose that a procedure consists of $k$ steps, performed sequentially, and that for each $j \in \{1, 2, \ldots, k\}$, step $j$ can be performed in $n_j$ ways, regardless of the choices made on the previous steps. Then the number of ways to perform the entire procedure is $n_1 n_2 \cdots n_k$.
The key to a successful application of the multiplication rule to a counting problem is the clear formulation of an algorithm that generates the objects being counted, so that each object is generated once and only once. That is, we must neither overcount nor undercount. It's also important to notice that the set of choices available at step $j$ may well depend on the previous steps; the assumption is only that the number of choices available does not depend on the previous steps.
The first two results below give equivalent formulations of the multiplication principle.
Suppose that $S$ is a set of sequences of length $k$, and that we denote a generic element of $S$ by $(x_1, x_2, \ldots, x_k)$. Suppose that for each $j \in \{1, 2, \ldots, k\}$, $x_j$ has $n_j$ different values, regardless of the values of the previous coordinates. Then $\#(S) = n_1 n_2 \cdots n_k$.
Proof
A procedure that generates the sequences in $S$ consists of $k$ steps. Step $j$ is to select the $j$th coordinate.
Suppose that $T$ is an ordered tree with depth $k$ and that each vertex at level $i - 1$ has $n_i$ children for $i \in \{1, 2, \ldots, k\}$. Then the number of endpoints of the tree is $n_1 n_2 \cdots n_k$.
Proof
Each endpoint of the tree is uniquely associated with the path from the root vertex to the endpoint. Each such path is a sequence of length $k$, in which there are $n_j$ values for coordinate $j$ for each $j \in \{1, 2, \ldots, k\}$. Hence the result follows from the result above on sequences.
Product Sets
If $S_i$ is a set with $n_i$ elements for $i \in \{1, 2, \ldots, k\}$ then $\#(S_1 \times S_2 \times \cdots \times S_k) = n_1 n_2 \cdots n_k$
Proof
This is a corollary of the result above on sequences.
If $S$ is a set with $n$ elements, then $S^k$ has $n^k$ elements.
Proof
This is a corollary of the previous result.
In (16), note that the elements of $S^k$ can be thought of as ordered samples of size $k$ that can be chosen with replacement from a population of $n$ objects. Elements of $\{0, 1\}^n$ are sometimes called bit strings of length $n$. Thus, there are $2^n$ bit strings of length $n$.
Functions
The number of functions from a set $A$ of $m$ elements into a set $B$ of $n$ elements is $n^m$.
Proof
An algorithm for constructing a function $f: A \to B$ is to choose the value of $f(x) \in B$ for each $x \in A$. There are $n$ choices for each of the $m$ elements in the domain.
Recall that the set of functions from a set $A$ into a set $B$ (regardless of whether the sets are finite or infinite) is denoted $B^A$. This theorem is motivation for the notation. Note also that if $S$ is a set with $n$ elements, then the elements in the Cartesian power set $S^k$ can be thought of as functions from $\{1, 2, \ldots, k\}$ into $S$. So the counting formula for sequences can be thought of as a corollary of the counting formula for functions.
Subsets
Suppose that $S$ is a set with $n$ elements, where $n \in \N$. There are $2^n$ subsets of $S$.
Proof from the multiplication principle
An algorithm for constructing $A \subseteq S$, is to decide whether $x \in A$ or $x \notin A$ for each $x \in S$. There are 2 choices for each of the $n$ elements of $S$.
Proof using indicator functions
Recall that there is a one-to-one correspondence between subsets of $S$ and indicator functions on $S$. An indicator function is simply a function from $S$ into $\{0, 1\}$, and the number of such functions is $2^n$ by the previous result.
Suppose that $\{A_1, A_2, \ldots, A_k\}$ is a collection of $k$ subsets of a set $S$, where $k \in \N_+$. In general, there are $2^{2^k}$ different sets that can be constructed from the $k$ given sets, using the operations of union, intersection, and complement. These sets form the algebra generated by the given sets.
Proof
First note that there are $2^k$ pairwise disjoint sets of the form $B_1 \cap B_2 \cap \cdots \cap B_k$ where $B_i = A_i$ or $B_i = A_i^c$ for each $i$. Next, note that every set that can be constructed from $\{A_1, A_2, \ldots, A_k\}$ is a union of some (perhaps all, perhaps none) of these intersection sets.
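The proof suggests an algorithm: form the $2^k$ atoms, discard the empty ones, and take all possible unions. A Python sketch along these lines (the function name and example sets are ours):

```python
from itertools import combinations, product

def generated_algebra(S, sets):
    """All sets constructible from the given sets using union, intersection,
    and complement: enumerate the nonempty atoms, then all unions of atoms."""
    atoms = []
    for signs in product([True, False], repeat=len(sets)):
        atom = set(S)
        for Ai, keep in zip(sets, signs):
            atom &= Ai if keep else set(S) - Ai   # B_i = A_i or B_i = A_i^c
        if atom:
            atoms.append(frozenset(atom))
    return {frozenset().union(*choice)
            for r in range(len(atoms) + 1)
            for choice in combinations(atoms, r)}

S = set(range(8))
alg = generated_algebra(S, [{0, 1, 2, 3}, {2, 3, 4, 5}])
assert len(alg) == 2 ** (2 ** 2)   # all 4 atoms are nonempty here, so 16 sets
```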
Open the Venn diagram app.
1. Select each of the 4 disjoint sets $A \cap B$, $A \cap B^c$, $A^c \cap B$, $A^c \cap B^c$.
2. Select each of the 12 other subsets of $S$. Note how each is a union of some of the sets in (a).
Suppose that $S$ is a set with $n$ elements and that $A$ is a subset of $S$ with $k$ elements, where $n, \, k \in \N$ and $k \le n$. The number of subsets of $S$ that contain $A$ is $2^{n - k}$.
Proof
Note that a subset $B$ of $S$ that contains $A$ can be written uniquely in the form $B = A \cup C$ where $C \subseteq A^c$. Since $A^c$ has $n - k$ elements, there are $2^{n-k}$ subsets of $A^c$ by the general subset result.
Our last result in this discussion generalizes the basic subset result above.
Suppose that $n, \, k \in \N$ and that $S$ is a set with $n$ elements. The number of sequences of subsets $(A_1, A_2, \ldots, A_k)$ with $A_1 \subseteq A_2 \subseteq \cdots \subseteq A_k \subseteq S$ is $(k + 1)^n$.
Proof
To construct a sequence of the type in the theorem, we can use the following algorithm: For each $x \in S$, either $x$ is not in the sets, or $x$ occurs for the first time in set $A_i$ where $i \in \{1, 2, \ldots, k\}$. (That is, $x \notin A_j$ for $j \in \{1, \ldots, i - 1\}$ and $x \in A_j$ for $j \in \{i, \ldots, k\}$.) So there are $k + 1$ choices for each of the $n$ elements of $S$.
When $k = 1$ we get $2^n$ as the number of subsets of $S$, as before.
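The formula $(k + 1)^n$ is easy to confirm by brute-force enumeration for small $n$ and $k$. A sketch (the helper names are ours):

```python
from itertools import combinations

def all_subsets(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def count_chains(S, k):
    """Number of increasing chains A_1 <= A_2 <= ... <= A_k of subsets of S."""
    def extend(prev, steps_left):
        if steps_left == 0:
            return 1
        # the next set in the chain must contain the previous one
        return sum(extend(B, steps_left - 1) for B in all_subsets(S) if prev <= B)
    return extend(frozenset(), k)

S = {1, 2, 3}
for k in (1, 2, 3):
    assert count_chains(S, k) == (k + 1) ** len(S)
```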
Computational Exercises
Identification Numbers
A license number consists of two letters (uppercase) followed by five digits. How many different license numbers are there?
Answer
$26^2 \cdot 10^5 = 67 \, 600 \, 000$
Suppose that a Personal Identification Number (PIN) is a four-symbol code word in which each entry is either a letter (uppercase) or a digit. How many PINs are there?
Answer
$36^4 = 1 \, 679 \, 616$
Cards, Dice, and Coins
In the board game Clue, Mr. Boddy has been murdered. There are 6 suspects, 6 possible weapons, and 9 possible rooms for the murder.
1. The game includes a card for each suspect, each weapon, and each room. How many cards are there?
2. The outcome of the game is a sequence consisting of a suspect, a weapon, and a room (for example, Colonel Mustard with the knife in the billiard room). How many outcomes are there?
3. Once the three cards that constitute the outcome have been randomly chosen, the remaining cards are dealt to the players. Suppose that you are dealt 5 cards. In trying to guess the outcome, what hand of cards would be best?
Answer
1. $6 + 6 + 9 = 21$ cards
2. $6 \cdot 6 \cdot 9 = 324$ outcomes
3. The best hand would be the $5$ remaining weapons or the $5$ remaining suspects.
An experiment consists of rolling a standard die, drawing a card from a standard deck, and tossing a standard coin. How many outcomes are there?
Answer
$6 \cdot 52 \cdot 2 = 624$
A standard die is rolled 5 times and the sequence of scores recorded. How many outcomes are there?
Answer
$6^5 = 7776$
In the card game Set, each card has 4 properties: number (one, two, or three), shape (diamond, oval, or squiggle), color (red, blue, or green), and shading (solid, open, or striped). The deck has one card of each (number, shape, color, shading) configuration. A set in the game is defined as a set of three cards such that, for each property, the cards are either all the same or all different.
1. How many cards are in a deck?
2. How many sets are there?
Answer
1. $3^4 = 81$
2. $1080$
A coin is tossed 10 times and the sequence of scores recorded. How many sequences are there?
Answer
$2^{10} = 1024$
The die-coin experiment consists of rolling a die and then tossing a coin the number of times shown on the die. The sequence of coin results is recorded.
1. How many outcomes are there?
2. How many outcomes are there with all heads?
3. How many outcomes are there with exactly one head?
Answer
1. $\sum_{k=1}^6 2^k = 126$
2. $6$
3. $\sum_{k=1}^6 k = 21$
Run the die-coin experiment 100 times and observe the outcomes.
Consider a deck of cards as a set $D$ with 52 elements.
1. How many subsets of $D$ are there?
2. How many functions are there from $D$ into the set $\{1, 2, 3, 4\}$?
Answer
1. $2^{52} = 4 \, 503 \, 599 \, 627 \, 370 \, 496$
2. $4^{52} = 20 \, 282 \, 409 \, 603 \, 651 \, 670 \, 423 \, 947 \, 251 \, 286 \, 016$
Birthdays
Consider a group of 10 persons.
1. If we record the birth month of each person, how many outcomes are there?
2. If we record the birthday of each person (ignoring leap day), how many outcomes are there?
Answer
1. $12^{10} = 61 \, 917 \, 364 \, 224$
2. $365^{10} = 41 \, 969 \, 002 \, 243 \, 198 \, 805 \, 166 \, 015 \, 625$
Reliability
In the usual model of structural reliability, a system consists of components, each of which is either working or defective. The system as a whole is also either working or defective, depending on the states of the components and how the components are connected.
A string of lights has 20 bulbs, each of which may be good or defective. How many configurations are there?
Answer
$2^{20} = 1 \, 048 \, 576$
If the components are connected in series, then the system as a whole is working if and only if each component is working. If the components are connected in parallel, then the system as a whole is working if and only if at least one component is working.
A system consists of three subsystems with 6, 5, and 4 components, respectively. Find the number of component states for which the system is working in each of the following cases:
1. The components in each subsystem are in parallel and the subsystems are in series.
2. The components in each subsystem are in series and the subsystems are in parallel.
Answer
1. $(2^6 - 1)(2^5 - 1)(2^4 - 1) = 29 \, 295$
2. $2^{15} - (2^6 - 1)(2^5 - 1)(2^4 - 1) = 32\,768 - 29\,295 = 3\,473$. (The system fails if and only if every subsystem has at least one failed component.)
Menus
Suppose that a sandwich at a restaurant consists of bread, meat, cheese, and various toppings. There are 4 choices for the bread, 3 choices for the meat, 5 choices for the cheese, and 10 different toppings (each of which may be chosen). How many sandwiches are there?
Answer
$4 \cdot 3 \cdot 5 \cdot 2^{10} = 61 \, 440$
At a wedding dinner, there are three choices for the entrée, four choices for the beverage, and two choices for the dessert.
1. How many different meals are there?
2. If there are 50 guests at the wedding and we record the meal requested for each guest, how many possible outcomes are there?
Answer
1. $3 \cdot 4 \cdot 2 = 24$
2. $24^{50} \approx 1.02462 \times 10^{69}$
Braille
Braille is a tactile writing system used by people who are visually impaired. The system is named for the French educator Louis Braille and uses raised dots in a $3 \times 2$ grid to encode characters. How many meaningful Braille configurations are there?
Answer
Each of the 6 dot positions is either raised or not, but presumably the configuration with no raised dots is not meaningful. Hence there are $2^6 - 1 = 63$ configurations.
Personality Typing
The Myers-Briggs personality typing is based on four dichotomies: A person is typed as either extroversion (E) or introversion (I), either sensing (S) or intuition (N), either thinking (T) or feeling (F), and either judgement (J) or perception (P).
1. How many Myers-Briggs personality types are there? List them.
2. Suppose that we list the personality types of 10 persons. How many possible outcomes are there?
Answer
1. 16
2. $16^{10} = 1 \, 099 \, 511 \, 627 \, 776$
The Galton Board
The Galton Board, named after Francis Galton, is a triangular array of pegs. Galton, apparently too modest to name the device after himself, called it a quincunx from the Latin word for five twelfths (go figure). The rows are numbered, from the top down, by $(0, 1, \ldots )$. Row $n$ has $n + 1$ pegs that are labeled, from left to right by $(0, 1, \ldots, n)$. Thus, a peg can be uniquely identified by an ordered pair $(n, k)$ where $n$ is the row number and $k$ is the peg number in that row.
A ball is dropped onto the top peg $(0, 0)$ of the Galton board. In general, when the ball hits peg $(n, k)$, it either bounces to the left to peg $(n + 1, k)$ or to the right to peg $(n + 1, k + 1)$. The sequence of pegs that the ball hits is a path in the Galton board.
There is a one-to-one correspondence between each pair of the following three collections:
1. Bit strings of length $n$
2. Paths in the Galton board from $(0, 0)$ to any peg in row $n$.
3. Subsets of a set with $n$ elements.
Thus, each of these collections has $2^n$ elements.
Open the Galton board app.
1. Move the ball from $(0, 0)$ to $(10, 6)$ along a path of your choice. Note the corresponding bit string and subset.
2. Generate the bit string $0111001010$. Note the corresponding subset and path.
3. Generate the subset $\{2, 4, 5, 9, 10\}$. Note the corresponding bit string and path.
4. Generate all paths from $(0, 0)$ to $(4, 2)$. How many paths are there?
Answer
4. $\binom{4}{2} = 6$
$\newcommand{\N}{\mathbb{N}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\bs}{\boldsymbol}$
The purpose of this section is to study several combinatorial structures that are of basic importance in probability.
Permutations
Suppose that $D$ is a set with $n \in \N$ elements. A permutation of length $k \in \{0, 1, \ldots, n\}$ from $D$ is an ordered sequence of $k$ distinct elements of $D$; that is, a sequence of the form $(x_1, x_2, \ldots, x_k)$ where $x_i \in D$ for each $i$ and $x_i \ne x_j$ for $i \ne j$.
Statistically, a permutation of length $k$ from $D$ corresponds to an ordered sample of size $k$ chosen without replacement from the population $D$.
The number of permutations of length $k$ from an $n$ element set is $n^{(k)} = n (n - 1) \cdots (n - k + 1)$
Proof
This follows easily from the multiplication principle. There are $n$ ways to choose the first element, $n - 1$ ways to choose the second element, and so forth.
By convention, $n^{(0)} = 1$. Recall that, in general, a product over an empty index set is 1. Note that $n^{(k)}$ has $k$ factors, starting at $n$, and with each subsequent factor one less than the previous factor. Some alternate notations for the number of permutations of size $k$ from a set of $n$ objects are $P(n, k)$, $P_{n,k}$, and ${_n}P_k$.
The number of permutations of length $n$ from the $n$ element set $D$ (these are called simply permutations of $D$) is $n! = n^{(n)} = n (n - 1) \cdots (1)$
The function on $\N$ given by $n \mapsto n!$ is the factorial function. The general permutation formula in (2) can be written in terms of factorials:
For $n \in \N$ and $k \in \{0, 1, \ldots, n\}$ $n^{(k)} = \frac{n!}{(n - k)!}$
Although this formula is succinct, it's not always a good idea numerically. If $n$ and $n - k$ are large, $n!$ and $(n - k)!$ are enormous, and division of the first by the second can lead to significant round-off errors.
Note that the basic permutation formula in (2) is defined for every real number $n$ and nonnegative integer $k$. This extension is sometimes referred to as the generalized permutation formula. Actually, we will sometimes need an even more general formula of this type (particularly in the sections on Pólya's urn and the beta-Bernoulli process).
For $a \in \R$, $s \in \R$, and $k \in \N$, define $a^{(s, k)} = a (a + s) (a + 2 s) \cdots [a + (k - 1) s]$
1. $a^{(0, k)} = a^k$
2. $a^{(-1, k)} = a^{(k)}$
3. $a^{(1, k)} = a (a + 1) \cdots (a + k - 1)$
4. $1^{(1, k)} = k!$
The product $a^{(-1, k)} = a^{(k)}$ (our ordinary permutation formula) is sometimes called the falling power of $a$ of order $k$, while $a^{(1, k)}$ is called the rising power of $a$ of order $k$, and is sometimes denoted $a^{[k]}$. Note that $a^{(0, k)}$ is the ordinary $k$th power of $a$. In general, note that $a^{(s, k)}$ has $k$ factors, starting at $a$ and with each subsequent factor obtained by adding $s$ to the previous factor.
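The generalized permutation formula is straightforward to compute. Here is a short Python helper (our own, for illustration) that covers the falling, rising, and ordinary powers as special cases:

```python
def gen_perm(a, s, k):
    """a^(s, k) = a (a + s) (a + 2 s) ... (a + (k - 1) s); the empty product is 1."""
    prod = 1
    for i in range(k):
        prod *= a + i * s
    return prod

assert gen_perm(5, -1, 3) == 5 * 4 * 3    # falling power: 5^(3) = 60
assert gen_perm(5, 1, 3) == 5 * 6 * 7     # rising power: 5^[3] = 210
assert gen_perm(2, 0, 5) == 2 ** 5        # a^(0, k) is the ordinary power
assert gen_perm(1, 1, 4) == 24            # 1^(1, k) = k!
```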
Combinations
Consider again a set $D$ with $n \in \N$ elements. A combination of size $k \in \{0, 1, \ldots, n\}$ from $D$ is an (unordered) subset of $k$ distinct elements of $D$. Thus, a combination of size $k$ from $D$ has the form $\{x_1, x_2, \ldots, x_k\}$, where $x_i \in D$ for each $i$ and $x_i \ne x_j$ for $i \ne j$.
Statistically, a combination of size $k$ from $D$ corresponds to an unordered sample of size $k$ chosen without replacement from the population $D$. Note that for each combination of size $k$ from $D$, there are $k!$ distinct orderings of the elements of that combination. Each of these is a permutation of length $k$ from $D$. The number of combinations of size $k$ from an $n$-element set is denoted by $\binom{n}{k}$. Some alternate notations are $C(n, k)$, $C_{n,k}$, and ${_n}C_k$.
The number of combinations of size $k$ from an $n$ element set is $\binom{n}{k} = \frac{n^{(k)}}{k!} = \frac{n!}{k! (n - k)!}$
Proof
An algorithm for generating all permutations of size $k$ from $D$ is to first select a combination of size $k$ and then to select an ordering of the elements. From the multiplication principle it follows that $n^{(k)} = \binom{n}{k} k!$. Hence $\binom{n}{k} = n^{(k)} / k! = n! / [k! (n - k)!]$.
The number $\binom{n}{k}$ is called a binomial coefficient. Note that the formula makes sense for any real number $n$ and nonnegative integer $k$ since this is true of the generalized permutation formula $n^{(k)}$. With this extension, $\binom{n}{k}$ is called the generalized binomial coefficient. Note that if $n$ and $k$ are positive integers and $k \gt n$ then $\binom{n}{k} = 0$. By convention, we will also define $\binom{n}{k} = 0$ if $k \lt 0$. This convention sometimes simplifies formulas.
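The generalized binomial coefficient can be computed directly from the falling-power formula. A sketch using exact rational arithmetic (the helper name is ours; compare with the computational exercises at the end of the section):

```python
from fractions import Fraction
from math import factorial

def binom(n, k):
    """Generalized binomial coefficient n^(k) / k!; n may be any rational,
    k an integer, with binom(n, k) = 0 for k < 0 by the convention above."""
    if k < 0:
        return Fraction(0)
    falling = Fraction(1)
    for i in range(k):
        falling *= Fraction(n) - i    # the falling power n^(k)
    return falling / factorial(k)

assert binom(10, 3) == 120
assert binom(Fraction(1, 2), 3) == Fraction(1, 16)
assert binom(-5, 4) == 70    # agrees with (-1)^4 * binom(8, 4)
```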
Properties of Binomial Coefficients
For some of the identities below, there are two possible proofs. An algebraic proof, of course, should be based on (5). A combinatorial proof is constructed by showing that the left and right sides of the identity are two different ways of counting the same collection.
$\binom{n}{n} = \binom{n}{0} = 1$.
Algebraically, the last result is trivial. It also makes sense combinatorially: There is only one way to select a subset of $D$ with $n$ elements ($D$ itself), and there is only one way to select a subset of size 0 from $D$ (the empty set $\emptyset$).
If $n, \, k \in \N$ with $k \le n$ then $\binom{n}{k} = \binom{n}{n - k}$
Combinatorial Proof
Note that if we select a subset of size $k$ from a set of size $n$, then we leave a subset of size $n - k$ behind (the complement). Thus $A \mapsto A^c$ is a one-to-one correspondence between subsets of size $k$ and subsets of size $n - k$.
The next result is one of the most famous and most important. It's known as Pascal's rule and is named for Blaise Pascal.
If $n, \, k \in \N_+$ with $k \le n$ then $\binom{n}{k} = \binom{n - 1}{k - 1} + \binom{n - 1}{k}$
Combinatorial Proof
Suppose that we have $n$ persons, one named Fred, and that we want to select a committee of size $k$. There are $\binom{n}{k}$ different committees. On the other hand, there are $\binom{n - 1}{k - 1}$ committees with Fred as a member, and $\binom{n - 1}{k}$ committees without Fred as a member. The sum of these two numbers is also the number of committees.
Recall that the Galton board is a triangular array of pegs: the rows are numbered $n = 0, 1, \ldots$ and the pegs in row $n$ are numbered $k = 0, 1, \ldots, n$. If each peg in the Galton board is replaced by the corresponding binomial coefficient, the resulting table of numbers is known as Pascal's triangle, named again for Pascal. By (8), each interior number in Pascal's triangle is the sum of the two numbers directly above it.
The following result is the binomial theorem, and is the reason for the term binomial coefficient.
If $a, \, b \in \R$ and $n \in \N$ is a positive integer, then $(a + b)^n = \sum_{k=0}^n \binom{n}{k} a^k b^{n-k}$
Combinatorial Proof
Note that to get the term $a^k b^{n-k}$ in the expansion of $(a + b)^n$, we must select $a$ from $k$ of the factors and $b$ from the remaining $n - k$ factors. The number of ways to do this is $\binom{n}{k}$.
If $j, \, k, \, n \in \N_+$ with $j \le k \le n$ then $k^{(j)} \binom{n}{k} = n^{(j)} \binom{n - j}{k - j}$
Combinatorial Proof
Consider two procedures for selecting a committee of size $k$ from a group of $n$ persons, with $j$ distinct members of the committee as officers (chair, vice chair, etc.). For the first procedure, select the committee from the population and then select the members of the committee to be the officers. The number of ways to perform the first step is $\binom{n}{k}$ and the number of ways to perform the second step is $k^{(j)}$. So by the multiplication principle, the number of ways to choose the committee is the left side of the equation. For the second procedure, select the officers of the committee from the population and then select $k - j$ other committee members from the remaining $n - j$ members of the population. The number of ways to perform the first step is $n^{(j)}$ and the number of ways to perform the second step is $\binom{n - j}{k - j}$. So by the multiplication principle, the number of committees is the right side of the equation.
The following result is known as Vandermonde's identity, named for Alexandre-Théophile Vandermonde.
If $m, \, n, \, k \in \N$ with $k \le m + n$, then $\sum_{j=0}^k \binom{m}{j} \binom{n}{k - j} = \binom{m + n}{k}$
Combinatorial Proof
Suppose that a committee of size $k$ is chosen from a group of $m + n$ persons, consisting of $m$ men and $n$ women. The number of committees with exactly $j$ men and $k - j$ women is $\binom{m}{j} \binom{n}{k - j}$. The sum of this product over $j$ is the total number of committees, which is $\binom{m + n}{k}$.
The next result is a general identity for the sum of binomial coefficients.
If $m, \, n \in \N$ with $n \le m$ then $\sum_{j=n}^m \binom{j}{n} = \binom{m + 1}{n + 1}$
Combinatorial Proof
Suppose that we pick a subset of size $n + 1$ from the set $\{1, 2, \ldots m + 1\}$. For $j \in \{n, n + 1, \ldots, m\}$, the number of subsets in which the largest element is $j + 1$ is $\binom{j}{n}$. Hence the sum of these numbers over $j$ is the total number of subsets of size $n + 1$, which is also $\binom{m + 1}{n + 1}$.
For an even more general version of the last result, see the section on Order Statistics in the chapter on Finite Sampling Models. The following identity for the sum of the first $m$ positive integers is a special case of the last result.
If $m \in \N$ then $\sum_{j=1}^m j = \binom{m + 1}{2} = \frac{(m + 1) m}{2}$
Proof
Let $n = 1$ in the previous result.
There is a one-to-one correspondence between each pair of the following collections. Hence the number of objects in each of these collections is $\binom{n}{k}$.
1. Subsets of size $k$ from a set of $n$ elements.
2. Bit strings of length $n$ with exactly $k$ 1's.
3. Paths in the Galton board from $(0, 0)$ to $(n, k)$.
Proof
Let $S = \{x_1, x_2, \ldots, x_n\}$ be a set with $n$ elements. A one-to-one correspondence between the subsets $A$ of $S$ with $k$ elements and the bit strings $\bs{b} = b_1 b_2 \ldots b_n$ of length $n$ with $k$ 1's can be constructed by the rule that $x_i \in A$ if and only if $b_i = 1$. In turn, a one-to-one correspondence between the bit strings $\bs{b}$ in part (b) and the paths in Galton board in part (c) can be constructed by the rule that in row $i \in \{0, 1, \ldots, n - 1\}$, turn right if $b_{i+1} = 1$ and turn left if $b_{i+1} = 0$.
The following identity is known as the alternating sum identity for binomial coefficients. It turns out to be useful in the Irwin-Hall probability distribution. We give the identity in two equivalent forms, one for falling powers and one for ordinary powers.
If $n \in \N_+$, $j \in \{0, 1, \ldots n - 1\}$ then
1. $\sum_{k=0}^n \binom{n}{k}(-1)^k k^{(j)} = 0$
2. $\sum_{k=0}^n \binom{n}{k}(-1)^k k^j = 0$
Proof
1. We use the identity above and the binomial theorem: \begin{align*} \sum_{k=0}^n (-1)^k k^{(j)} \binom{n}{k} & = \sum_{k=j}^n (-1)^k k^{(j)} \binom{n}{k} = \sum_{k=j}^n (-1)^k n^{(j)} \binom{n - j}{k - j} \\ & = n^{(j)} (-1)^j \sum_{k=j}^n (-1)^{k-j} \binom{n - j}{k - j} = n^{(j)} (-1)^j \sum_{i=0}^{n-j} (-1)^i \binom{n - j}{i} \\ & = n^{(j)} (-1)^j (-1 + 1)^{n - j} = 0 \end{align*} Note that it's the last step where we need $j \lt n$.
2. This follows from (a), since $k^j$ is a linear combination of $k^{(i)}$ for $i \in \{0, 1, \ldots j\}$. That is, there exists $c_i \in \R$ for $i \in \{0, 1, \ldots, j\}$ such that $k^j = \sum_{i=0}^j c_i k^{(i)}$. Hence $\sum_{k=0}^n (-1)^k k^j \binom{n}{k} = \sum_{i=0}^j c_i \sum_{k=0}^n (-1)^k k^{(i)} \binom{n}{k} = 0$
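A quick numerical check of both forms of the identity (a sketch; `math.comb` is the standard-library binomial coefficient, and the falling-power helper is ours):

```python
from math import comb

def falling(k, j):
    """The falling power k^(j) = k (k - 1) ... (k - j + 1)."""
    prod = 1
    for i in range(j):
        prod *= k - i
    return prod

n = 7
for j in range(n):   # the identity requires j < n
    assert sum((-1) ** k * comb(n, k) * falling(k, j) for k in range(n + 1)) == 0
    assert sum((-1) ** k * comb(n, k) * k ** j for k in range(n + 1)) == 0
```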
Our next identity deals with a generalized binomial coefficient.
If $n, \, k \in \N$ then $\binom{-n}{k} = (-1)^k \binom{n + k - 1}{k}$
Proof
Note that \begin{align} \binom{-n}{k} & = \frac{(-n)^{(k)}}{k!} = \frac{(-n)(-n - 1) \cdots (-n - k + 1)}{k!} \\ & = (-1)^k \frac{(n + k - 1)(n + k - 2) \cdots (n)}{k!} = (-1)^k \frac{(n + k - 1)^{(k)}}{k!} = (-1)^k \binom{n + k - 1}{k} \end{align}
In particular, note that $\binom{-1}{k} = (-1)^k$. Our last result in this discussion concerns the binomial operator and its inverse.
The binomial operator takes a sequence of real numbers $\bs a = (a_0, a_1, a_2, \ldots)$ and returns the sequence of real numbers $\bs b = (b_0, b_1, b_2, \ldots)$ by means of the formula $b_n = \sum_{k=0}^n \binom{n}{k} a_k, \quad n \in \N$ The inverse binomial operator recovers the sequence $\bs a$ from the sequence $\bs b$ by means of the formula $a_n = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} b_k, \quad n \in \N$
Proof
Exponential generating functions can be used for an elegant proof. Exponential generating functions are the combinatorial equivalent of moment generating functions for discrete probability distributions on $\N$. So let $G$ and $H$ denote the exponential generating functions of the sequences $\bs a$ and $\bs b$, respectively. Then \begin{align*} H(x) & = \sum_{n=0}^\infty \frac{b_n}{n!} x^n = \sum_{n=0}^\infty \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} a_k x^n = \sum_{k=0}^\infty \sum_{n=k}^\infty \frac{1}{n!} \frac{n!}{k! (n - k)!} a_k x^n \\ & = \sum_{k=0}^\infty \frac{1}{k!} a_k x^k \sum_{n=k}^\infty \frac{1}{(n-k)!} x^{n-k} = e^x \sum_{k=0}^\infty \frac{1}{k!} a_k x^k = e^x G(x) \end{align*} So it follows that \begin{align*} G(x) & = e^{-x} H(x) = \sum_{k=0}^\infty \frac{1}{k!} b_k x^k \sum_{n=k}^\infty \frac{1}{(n - k)!} (-1)^{n-k} x^{n-k} \\ & = \sum_{k=0}^\infty \sum_{n=k}^\infty \frac{1}{n!} \frac{n!}{k! (n - k)!} (-1)^{n-k} b_k x^n = \sum_{n=0}^\infty \frac{1}{n!} \sum_{k = 0}^n \binom{n}{k} (-1)^{n-k} b_k x^n \end{align*} But by definition, $G(x) = \sum_{n=0}^\infty \frac{a_n}{n!} x^n$ and so the inverse formula follows.
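The binomial operator and its inverse translate directly into code, giving a quick sanity check that the two formulas really do invert one another (a sketch; the function names are ours):

```python
from math import comb

def binomial_op(a):
    """b_n = sum over k of binom(n, k) a_k."""
    return [sum(comb(n, k) * a[k] for k in range(n + 1)) for n in range(len(a))]

def inverse_binomial_op(b):
    """a_n = sum over k of (-1)^(n - k) binom(n, k) b_k."""
    return [sum((-1) ** (n - k) * comb(n, k) * b[k] for k in range(n + 1))
            for n in range(len(b))]

a = [3, 1, 4, 1, 5, 9]
assert inverse_binomial_op(binomial_op(a)) == a
```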
Samples
The experiment of drawing a sample from a population is basic and important. There are two essential attributes of samples: whether or not order is important, and whether or not a sampled object is replaced in the population before the next draw. Suppose now that the population $D$ contains $n$ objects and we are interested in drawing a sample of $k$ objects. Let's review what we know so far:
• If order is important and sampled objects are replaced, then the samples are just elements of the product set $D^k$. Hence, the number of samples is $n^k$.
• If order is important and sample objects are not replaced, then the samples are just permutations of size $k$ chosen from $D$. Hence the number of samples is $n^{(k)}$.
• If order is not important and sample objects are not replaced, then the samples are just combinations of size $k$ chosen from $D$. Hence the number of samples is $\binom{n}{k}$.
Thus, we have one case left to consider.
Unordered Samples With Replacement
An unordered sample chosen with replacement from $D$ is called a multiset. A multiset is like an ordinary set except that elements may be repeated.
There is a one-to-one correspondence between each pair of the following collections:
1. Multisets of size $k$ from a population $D$ of $n$ elements.
2. Bit strings of length $n + k - 1$ with exactly $k$ 1s.
3. Nonnegative integer solutions $(x_1, x_2, \ldots, x_n)$ of the equation $x_1 + x_2 + \cdots + x_n = k$.
Each of these collections has $\binom{n + k - 1}{k}$ members.
Proof
Suppose that $D = \{d_1, d_2, \ldots, d_n\}$. Consider a multiset of size $k$. Since order does not matter, we can list all of the occurrences of $d_1$ (if any) first, then the occurrences of $d_2$ (if any), and so forth, until we at last list the occurrences of $d_n$ (if any). If we know we are using this data structure, we don't have to list the elements themselves; we can simply use 1 as a placeholder with 0 as a separator. In the resulting bit string, 1 occurs $k$ times and 0 occurs $n - 1$ times. Conversely, any such bit string uniquely defines a multiset of size $k$. Next, given a multiset of size $k$ from $D$, let $x_i$ denote the number of times that $d_i$ occurs, for $i \in \{1, 2, \ldots, n\}$. Then $(x_1, x_2, \ldots, x_n)$ satisfies the conditions in (c). Conversely, any solution to the equation in (c) uniquely defines a multiset of size $k$ from $D$. We already know how to count the collection in (b): the number of bit strings of length $n + k - 1$ with 1 occurring $k$ times is $\binom{n + k - 1}{k} = \binom{n + k - 1}{n - 1}$.
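Python's standard library enumerates multisets directly, which makes the count easy to confirm (a brief sketch):

```python
from itertools import combinations_with_replacement
from math import comb

n, k = 4, 3
multisets = list(combinations_with_replacement(range(n), k))
assert len(multisets) == comb(n + k - 1, k)   # binom(6, 3) = 20
```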
Summary of Sampling Formulas
The following table summarizes the formulas for the number of samples of size $k$ chosen from a population of $n$ elements, based on the criteria of order and replacement.
Sampling formulas

| Number of samples | With order | Without order |
| --- | --- | --- |
| With replacement | $n^k$ | $\binom{n + k - 1}{k}$ |
| Without replacement | $n^{(k)}$ | $\binom{n}{k}$ |
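All four entries in the table can be checked against brute-force enumeration (a sketch; `math.perm` and `math.comb` are the standard-library counting functions):

```python
from itertools import (combinations, combinations_with_replacement,
                       permutations, product)
from math import comb, perm

n, k = 5, 3
D = range(n)
checks = [
    (len(list(product(D, repeat=k))), n ** k),                             # ordered, with replacement
    (len(list(permutations(D, k))), perm(n, k)),                           # ordered, without
    (len(list(combinations_with_replacement(D, k))), comb(n + k - 1, k)),  # unordered, with
    (len(list(combinations(D, k))), comb(n, k)),                           # unordered, without
]
for enumerated, formula in checks:
    assert enumerated == formula
```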
Multinomial Coefficients
Partitions of a Set
Recall that the binomial coefficient $\binom{n}{j}$ is the number of subsets of size $j$ from a set $S$ of $n$ elements. Note also that when we select a subset $A$ of size $j$ from $S$, we effectively partition $S$ into two disjoint subsets of sizes $j$ and $n - j$, namely $A$ and $A^c$. A natural generalization is to partition $S$ into a union of $k$ distinct, pairwise disjoint subsets $(A_1, A_2, \ldots, A_k)$ where $\#(A_i) = n_i$ for each $i \in \{1, 2, \ldots, k\}$. Of course we must have $n_1 + n_2 + \cdots + n_k = n$.
The number of ways to partition a set of $n$ elements into a sequence of $k$ sets of sizes $(n_1, n_2, \ldots, n_k)$ is $\binom{n}{n_1} \binom{n - n_1}{n_2} \cdots \binom{n - n_1 - \cdots - n_{k-1}}{n_k} = \frac{n!}{n_1! n_2! \cdots n_k!}$
Proof
The left side follows from the multiplication rule. There are $\binom{n}{n_1}$ ways to select the first set in the partition, $\binom{n - n_1}{n_2}$ ways to select the second set in the partition, and so forth. The right side follows by writing the binomial coefficients on the left in terms of factorials and simplifying.
The number in (18) is called a multinomial coefficient and is denoted by $\binom{n}{n_1, n_2, \cdots, n_k} = \frac{n!}{n_1! n_2! \cdots n_k!}$
If $n, \, k \in \N$ with $k \le n$ then $\binom{n}{k, n - k} = \binom{n}{k}$
Combinatorial Proof
As noted before, if we select a subset of size $k$ from an $n$ element set, then we partition the set into two subsets of sizes $k$ and $n - k$.
Sequences
Consider now the set $T = \{1, 2, \ldots, k\}^n$. Elements of this set are sequences of length $n$ in which each coordinate is one of $k$ values. Thus, these sequences generalize the bit strings of length $n$. Again, let $(n_1, n_2, \ldots, n_k)$ be a sequence of nonnegative integers with $\sum_{i=1}^k n_i = n$.
There is a one-to-one correspondence between the following collections:
1. Partitions of $S$ into pairwise disjoint subsets $(A_1, A_2, \ldots, A_k)$ where $\#(A_j) = n_j$ for each $j \in \{1, 2, \ldots, k\}$.
2. Sequences in $\{1, 2, \ldots, k\}^n$ in which $j$ occurs $n_j$ times for each $j \in \{1, 2, \ldots, k\}$.
Proof
Suppose that $S = \{s_1, s_2, \ldots, s_n\}$. A one-to-one correspondence between a partition $(A_1, A_2, \ldots, A_k)$ of the type in (a) and a sequence $\bs{x} = (x_1, x_2, \ldots, x_n)$ of the type in (b) can be constructed by the rule that $s_i \in A_j$ if and only if $x_i = j$.
It follows that the number of elements in both of these collections is $\binom{n}{n_1, n_2, \cdots, n_k} = \frac{n!}{n_1! n_2! \cdots n_k!}$
Permutations with Indistinguishable Objects
Suppose now that we have $n$ objects of $k$ different types, with $n_i$ elements of type $i$ for each $i \in \{1, 2, \ldots, k\}$. Moreover, objects of a given type are considered identical. There is a one-to-one correspondence between the following collections:
1. Sequences in $\{1, 2, \ldots, k\}^n$ in which $j$ occurs $n_j$ times for each $j \in \{1, 2, \ldots, k\}$.
2. Distinguishable permutations of the $n$ objects.
Proof
A one-to-one correspondence between a sequence $\bs{x} = (x_1, x_2, \ldots, x_n)$ of the type in (a) and a permutation of the $n$ objects can be constructed by the rule that we put an object of type $j$ in position $i$ if and only if $x_i = j$.
Once again, it follows that the number of elements in both collections is $\binom{n}{n_1, n_2, \cdots, n_k} = \frac{n!}{n_1! n_2! \cdots n_k!}$
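Multinomial coefficients are simple to compute from the factorial formula. A sketch (the helper name is ours), applied to one of the word problems in the exercises below:

```python
from collections import Counter
from math import factorial

def multinomial(counts):
    """n! / (n_1! n_2! ... n_k!) where counts = (n_1, n_2, ..., n_k)."""
    result = factorial(sum(counts))
    for c in counts:
        result //= factorial(c)
    return result

# Distinguishable arrangements of "mississippi" (m:1, i:4, s:4, p:2)
assert multinomial(Counter("mississippi").values()) == 34650
```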
The Multinomial Theorem
The following result is the multinomial theorem which is the reason for the name of the coefficients.
If $x_1, \, x_2, \ldots, x_k \in \R$ and $n \in \N$ then $(x_1 + x_2 + \cdots + x_k)^n = \sum \binom{n}{n_1, n_2, \cdots, n_k} x_1^{n_1} x_2^{n_2} \cdots x_k^{n_k}$ The sum is over sequences of nonnegative integers $(n_1, n_2, \ldots, n_k)$ with $n_1 + n_2 + \cdots + n_k = n$. There are $\binom{n + k - 1}{n}$ terms in this sum.
Combinatorial Proof
Note that to get $x_1^{n_1} x_2^{n_2} \cdots x_k^{n_k}$ in the expansion of $(x_1 + x_2 + \cdots + x_k)^n$, we must choose $x_i$ in $n_i$ of the factors, for each $i$. The number of ways to do this is the multinomial coefficient $\binom{n}{n_1, n_2, \ldots, n_k}$. The number of terms in the sum follows from the formula above.
Computational Exercises
Arrangements
In a race with 10 horses, the first, second, and third place finishers are noted. How many outcomes are there?
Answer
$720$
Eight persons, consisting of four male-female couples, are to be seated in a row of eight chairs. How many seating arrangements are there in each of the following cases:
1. There are no other restrictions.
2. The men must sit together and the women must sit together.
3. The men must sit together.
4. Each couple must sit together.
Answer
1. $40 \, 320$
2. $1152$
3. $2880$
4. $384$
Suppose that $n$ people are to be seated at a round table. How many seating arrangements are there? The mathematical significance of a round table is that there is no dedicated first chair.
Answer
$(n - 1)!$. Seat one distinguished person arbitrarily. Every seating arrangement can then be specified by giving the positions of the remaining persons (say clockwise) relative to the distinguished person.
Twelve books, consisting of 5 math books, 4 science books, and 3 history books are arranged on a bookshelf. Find the number of arrangements in each of the following cases:
1. There are no restrictions.
2. The books of each type must be together.
3. The math books must be together.
Answer
1. $479 \, 001 \, 600$
2. $103 \, 680$
3. $4 \, 838 \, 400$
Find the number of distinct arrangements of the letters in each of the following words:
1. statistics
2. probability
3. mississippi
4. tennessee
5. alabama
Answer
1. $50 \, 400$
2. $9\,979\,200$
3. $34\,650$
4. $3780$
5. $210$
A child has 12 blocks; 5 are red, 4 are green, and 3 are blue. In how many ways can the blocks be arranged in a line if blocks of a given color are considered identical?
Answer
$27\,720$
Code Words
A license tag consists of 2 capital letters and 5 digits. Find the number of tags in each of the following cases:
1. There are no other restrictions
2. The letters and digits are all different.
Answer
1. $67\,600\,000$
2. $19\,656\,000$
Committees
A club has 20 members; 12 are women and 8 are men. A committee of 6 members is to be chosen. Find the number of different committees in each of the following cases:
1. There are no other restrictions.
2. The committee must have 4 women and 2 men.
3. The committee must have at least 2 women and at least 2 men.
Answer
1. $38\,760$
2. $13\,860$
3. $30\,800$
Suppose that a club with 20 members plans to form 3 distinct committees with 6, 5, and 4 members, respectively. In how many ways can this be done?
Answer
$9\,777\,287\,520$. Note that the members not on a committee also form one of the sets in the partition.
Cards
A standard card deck can be modeled by the Cartesian product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, j, q, k\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate encodes the denomination or kind (ace, 2-10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $q \heartsuit$).
A poker hand (in draw poker) consists of 5 cards dealt without replacement and without regard to order from a deck of 52 cards. Find the number of poker hands in each of the following cases:
1. There are no restrictions.
2. The hand is a full house (3 cards of one kind and 2 of another kind).
3. The hand has 4 of a kind.
4. The cards are all in the same suit (so the hand is a flush or a straight flush).
Answer
1. $2\,598\,960$
2. $3744$
3. $624$
4. $5148$
The game of poker is studied in detail in the chapter on Games of Chance.
A bridge hand consists of 13 cards dealt without replacement and without regard to order from a deck of 52 cards. Find the number of bridge hands in each of the following cases:
1. There are no restrictions.
2. The hand has exactly 4 spades.
3. The hand has exactly 4 spades and 3 hearts.
4. The hand has exactly 4 spades, 3 hearts, and 2 diamonds.
Answer
1. $635\,013\,559\,600$
2. $151\,519\,319\,380$
3. $47\,079\,732\,700$
4. $11\,404\,407\,300$
A hand of cards that has no cards in a particular suit is said to be void in that suit. Use the inclusion-exclusion formula to find each of the following:
1. The number of poker hands that are void in at least one suit.
2. The number of bridge hands that are void in at least one suit.
Answer
1. $1\,913\,496$
2. $32\,427\,298\,180$
A bridge hand that has no honor cards (cards of denomination 10, jack, queen, king, or ace) is said to be a Yarborough, in honor of the Second Earl of Yarborough. Find the number of Yarboroughs.
Answer
$347\,373\,600$
A bridge deal consists of dealing 13 cards (a bridge hand) to each of 4 distinct players (generically referred to as north, south, east, and west) from a standard deck of 52 cards. Find the number of bridge deals.
Answer
$53\,644\,737\,765\,488\,792\,839\,237\,440\,000 \approx 5.36 \times 10^{28}$
This staggering number is about the same order of magnitude as the number of atoms in your body, and is one of the reasons that bridge is a rich and interesting game.
Find the number of permutations of the cards in a standard deck.
Answer
$52! \approx 8.0658 \times 10^{67}$
This number is even more staggering. Indeed if you perform the experiment of dealing all 52 cards from a well-shuffled deck, you may well generate a pattern of cards that has never been generated before, thereby ensuring your immortality. Actually, this experiment shows that, in a sense, rare events can be very common. By the way, Persi Diaconis has shown that it takes about seven standard riffle shuffles to thoroughly randomize a deck of cards.
Dice and Coins
Suppose that 5 distinct, standard dice are rolled and the sequence of scores recorded.
1. Find the number of sequences.
2. Find the number of sequences with the scores all different.
Answer
1. $7776$
2. $720$
Suppose that 5 identical, standard dice are rolled. How many outcomes are there?
Answer
$252$
A coin is tossed 10 times and the outcome is recorded as a bit string (where 1 denotes heads and 0 tails).
1. Find the number of outcomes.
2. Find the number of outcomes with exactly 4 heads.
3. Find the number of outcomes with at least 8 heads.
Answer
1. $1024$
2. $210$
3. $56$
Polynomial Coefficients
Find the coefficient of $x^3 \, y^4$ in $(2 \, x - 4 \, y)^7$.
Answer
$71\,680$
Find the coefficient of $x^5$ in $(2 + 3 \, x)^8$.
Answer
$108\,864$
Find the coefficient of $x^3 \, y^7 \, z^5$ in $(x + y + z)^{15}$.
Answer
$360\,360$
The Galton Board
Open the Galton board app.
1. Move the ball from $(0, 0)$ to $(10, 6)$ along a path of your choice. Note the corresponding bit string and subset.
2. Generate the bit string $0011101001$. Note the corresponding subset and path.
3. Generate the subset $\{1, 4, 5, 7, 8, 10\}$. Note the corresponding bit string and path.
4. Generate all paths from $(0, 0)$ to $(5, 3)$. How many paths are there?
Answer
4. $\binom{5}{3} = 10$
Generate Pascal's triangle up to $n = 10$.
Samples
A shipment contains 12 good and 8 defective items. A sample of 5 items is selected. Find the number of samples that contain exactly 3 good items.
Answer
$6160$
In the $(n, k)$ lottery, $k$ numbers are chosen without replacement from the set of integers from 1 to $n$ (where $n, \, k \in \N_+$ and $k \lt n$). Order does not matter.
1. Find the number of outcomes in the general $(n, k)$ lottery.
2. Explicitly compute the number of outcomes in the $(44, 6)$ lottery (a common format).
Answer
1. $\binom{n}{k}$
2. $7\,059\,052$
For more on this topic, see the section on Lotteries in the chapter on Games of Chance.
Explicitly compute each formula in the sampling table above when $n = 10$ and $k = 4$.
Answer
1. Ordered samples with replacement: $10\,000$
2. Ordered samples without replacement: $5040$
3. Unordered samples with replacement: $715$
4. Unordered samples without replacement: $210$
Greetings
Suppose there are $n$ people who shake hands with each other. How many handshakes are there?
Answer
$\binom{n}{2}$. Note that a handshake can be thought of as a subset of size 2 from the set of $n$ people.
There are $m$ men and $n$ women. The men shake hands with each other; the women hug each other; and each man bows to each woman.
1. How many handshakes are there?
2. How many hugs are there?
3. How many bows are there?
4. How many greetings are there?
Answer
1. $\binom{m}{2}$
2. $\binom{n}{2}$
3. $m n$
4. $\binom{m}{2} + \binom{n}{2} + m n = \binom{m + n}{2}$
Integer Solutions
Find the number of integer solutions of $x_1 + x_2 + x_3 = 10$ in each of the following cases:
1. $x_i \ge 0$ for each $i$.
2. $x_i \gt 0$ for each $i$.
Answer
1. $66$
2. $36$
Generalized Coefficients
Compute each of the following:
1. $(-5)^{(3)}$
2. $\left(\frac{1}{2}\right)^{(4)}$
3. $\left(-\frac{1}{3}\right)^{(5)}$
Answer
1. $-210$
2. $-\frac{15}{16}$
3. $-\frac{3640}{243}$
Compute each of the following:
1. $\binom{1/2}{3}$
2. $\binom{-5}{4}$
3. $\binom{-1/3}{5}$
Answer
1. $\frac{1}{16}$
2. $70$
3. $-\frac{91}{729}$
Birthdays
Suppose that $n$ persons are selected and their birthdays noted. (Ignore leap years, so that a year has 365 days.)
1. Find the number of outcomes.
2. Find the number of outcomes with distinct birthdays.
Answer
1. $365^n$.
2. $365^{(n)}$.
Chess
Note that the squares of a chessboard are distinct, and in fact are often identified with the Cartesian product set $\{a, b, c, d, e, f, g, h\} \times \{1, 2, 3, 4, 5, 6, 7, 8\}$
Find the number of ways of placing 8 rooks on a chessboard so that no rook can capture another in each of the following cases.
1. The rooks are distinguishable.
2. The rooks are indistinguishable.
Answer
1. $1\,625\,702\,400$
2. $40\,320$
Gifts
Suppose that 20 identical candies are distributed to 4 children. Find the number of distributions in each of the following cases:
1. There are no restrictions.
2. Each child must get at least one candy.
Answer
1. $1771$
2. $969$
In the song The Twelve Days of Christmas, find the number of gifts given to the singer by her true love. (Note that the singer starts afresh with gifts each day, so that for example, the true love gets a new partridge in a pear tree each of the 12 days.)
Answer
$364$
Teams
Suppose that 10 kids are divided into two teams of 5 each for a game of basketball. In how many ways can this be done in each of the following cases:
1. The teams are distinguishable (for example, one team is labeled Alabama and the other team is labeled Auburn).
2. The teams are not distinguishable.
Answer
1. $252$
2. $126$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\D}{\mathbb{D}}$ $\newcommand{\diam}{\text{diam}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$ $\newcommand{\int}{\text{int}}$
Topology is one of the major branches of mathematics, along with other such branches as algebra (in the broad sense of algebraic structures), and analysis. Topology deals with spatial concepts involving distance, closeness, separation, convergence, and continuity. Needless to say, entire series of books have been written about the subject. Our goal in this section and the next is simply to review the basic definitions and concepts of topology that we will need for our study of probability and stochastic processes. You may want to refer to this section as needed.
Basic Theory
Definitions
A topological space consists of a nonempty set $S$ and a collection $\mathscr{S}$ of subsets of $S$ that satisfy the following properties:
1. $S \in \mathscr{S}$ and $\emptyset \in \mathscr{S}$
2. If $\mathscr{A} \subseteq \mathscr{S}$ then $\bigcup \mathscr{A} \in \mathscr{S}$
3. If $\mathscr{A} \subseteq \mathscr{S}$ and $\mathscr{A}$ is finite, then $\bigcap \mathscr{A} \in \mathscr{S}$
If $A \in \mathscr{S}$, then $A$ is said to be open and $A^c$ is said to be closed. The collection $\mathscr{S}$ of open sets is a topology on $S$.
So the union of an arbitrary number of open sets is still open, as is the intersection of a finite number of open sets. The universal set $S$ and the empty set $\emptyset$ are both open and closed. There may or may not exist other subsets of $S$ with this property.
Suppose that $S$ is a nonempty set, and that $\mathscr{S}$ and $\mathscr{T}$ are topologies on $S$. If $\mathscr{S} \subseteq \mathscr{T}$ then $\mathscr{T}$ is finer than $\mathscr{S}$, and $\mathscr{S}$ is coarser than $\mathscr{T}$. Coarser than defines a partial order on the collection of topologies on $S$. That is, if $\mathscr R, \, \mathscr S, \, \mathscr T$ are topologies on $S$ then
1. $\mathscr R$ is coarser than $\mathscr R$, the reflexive property.
2. If $\mathscr R$ is coarser than $\mathscr S$ and $\mathscr S$ is coarser than $\mathscr R$ then $\mathscr R = \mathscr S$, the anti-symmetric property.
3. If $\mathscr R$ is coarser than $\mathscr S$ and $\mathscr S$ is coarser than $\mathscr T$ then $\mathscr R$ is coarser than $\mathscr T$, the transitive property.
A topology can be characterized just as easily by means of closed sets as open sets.
Suppose that $S$ is a nonempty set. A collection of subsets $\mathscr{C}$ is the collection of closed sets for a topology on $S$ if and only if
1. $S \in \mathscr{C}$ and $\emptyset \in \mathscr{C}$
2. If $\mathscr{A} \subseteq \mathscr{C}$ then $\bigcap \mathscr{A} \in \mathscr{C}$.
3. If $\mathscr{A} \subseteq \mathscr{C}$ and $\mathscr{A}$ is a finite then $\bigcup \mathscr{A} \in \mathscr{C}$.
Proof
The set $\mathscr{S} = \{A^c: A \in \mathscr{C}\}$ must satisfy the axioms of a topology. So the result follows from DeMorgan's laws: if $\mathscr{A}$ is a collection of subsets of $S$ then \begin{align*} \left(\bigcup \mathscr{A}\right)^c & = \bigcap\{A^c: A \in \mathscr{A}\} \\ \left(\bigcap \mathscr{A}\right)^c & = \bigcup\{A^c: A \in \mathscr{A}\} \end{align*}
Suppose that $(S, \mathscr{S})$ is a topological space, and that $x \in S$. A set $A \subseteq S$ is a neighborhood of $x$ if there exists $U \in \mathscr{S}$ with $x \in U \subseteq A$.
So a neighborhood of a point $x \in S$ is simply a set with an open subset that contains $x$. The idea is that points in a small neighborhood of $x$ are close to $x$ in a sense. An open set can be defined in terms of the neighborhoods of the points in the set.
Suppose again that $(S, \mathscr{S})$ is a topological space. A set $U \subseteq S$ is open if and only if $U$ is a neighborhood of every $x \in U$.
Proof
If $U$ is open, then clearly $U$ is a neighborhood of every point $x \in U$ and clearly satisfies the condition in the theorem. Conversely, suppose that $U$ is a neighborhood of every $x \in U$. Then by definition of neighborhood, for every $x \in U$ there exists an open set $U_x$ with $x \in U_x \subseteq U$. But then $\bigcup_{x \in U} U_x$ is open, and clearly this set is $U$.
Although the proof seems trivial, the neighborhood concept is how you should think of openness. A set $U$ is open if every point in $U$ has a set of nearby points that are also in $U$.
Our next three definitions deal with sets that are naturally associated with a given subset of a topological space.
Suppose again that $(S, \mathscr{S})$ is a topological space and that $A \subseteq S$. The closure of $A$ is the set $\cl(A) = \bigcap\{B \subseteq S: B \text{ is closed and } A \subseteq B\}$ This is the smallest closed set containing $A$:
1. $\cl(A)$ is closed.
2. $A \subseteq \cl(A)$.
3. If $B$ is closed and $A \subseteq B$ then $\cl(A) \subseteq B$
Proof
Note that $\mathscr{B} = \{B \subseteq S: B \text{ is closed and } A \subseteq B\}$ is nonempty since $S \in \mathscr{B}$.
1. The sets in $\mathscr{B}$ are closed so $\bigcap \mathscr{B}$ is closed.
2. By definition, $A \subseteq B$ for each $B \in \mathscr{B}$. Hence $A \subseteq \bigcap\mathscr{B}$.
3. If $B$ is closed and $A \subseteq B$ then $B \in \mathscr{B}$ so $\bigcap \mathscr{B} \subseteq B$.
Of course, if $A$ is closed then $A = \cl(A)$. Complementary to the closure of a set is the interior of the set.
Suppose again that $(S, \mathscr{S})$ is a topological space and that $A \subseteq S$. The interior of $A$ is the set $\int(A) = \bigcup\{U \subseteq S: U \text{ is open and } U \subseteq A\}$ This set is the largest open subset of $A$:
1. $\int(A)$ is open.
2. $\int(A) \subseteq A$.
3. If $U$ is open and $U \subseteq A$ then $U \subseteq \int(A)$
Proof
Note that $\mathscr{U} = \{U \subseteq S: U \text{ is open and } U \subseteq A\}$ is nonempty since $\emptyset \in \mathscr{U}$.
1. The sets in $\mathscr{U}$ are open so $\bigcup \mathscr{U}$ is open.
2. By definition, $U \subseteq A$ for each $U \in \mathscr{U}$. Hence $\bigcup \mathscr{U} \subseteq A$.
3. If $U$ is open and $U \subseteq A$ then $U \in \mathscr{U}$ so $U \subseteq \bigcup\mathscr{U}$.
Of course, if $A$ is open then $A = \int(A)$. The boundary of a set is the set difference between the closure and the interior.
Suppose again that $(S, \mathscr{S})$ is a topological space. The boundary of $A$ is $\partial(A) = \cl(A) \setminus \int(A)$. This set is closed.
Proof
By definition, $\partial(A) = \cl(A) \cap [\int(A)]^c$, the intersection of two closed sets.
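In a finite topological space, the closure and interior (and hence the boundary) can be computed directly from their definitions as an intersection and a union. Here is a short Python sketch under the same conventions as the earlier one; the helper names are our own.

```python
def closure(S, opens, A):
    """Intersection of all closed sets containing A (closed = complement of open)."""
    S, A = frozenset(S), frozenset(A)
    result = S
    for U in map(frozenset, opens):
        if A <= S - U:          # S - U is a closed superset of A
            result &= S - U
    return result

def interior(S, opens, A):
    """Union of all open subsets of A."""
    A, result = frozenset(A), frozenset()
    for U in map(frozenset, opens):
        if U <= A:
            result |= U
    return result

S = {1, 2, 3}
opens = [set(), {1}, {1, 2}, S]
A = {2}
print(closure(S, opens, A))                           # frozenset({2, 3})
print(interior(S, opens, A))                          # frozenset()
print(closure(S, opens, A) - interior(S, opens, A))   # boundary: frozenset({2, 3})
```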
A topology on a set induces a natural topology on any subset of the set.
Suppose that $(S, \mathscr{S})$ is a topological space and that $R$ is a nonempty subset of $S$. Then $\mathscr{R} = \{A \cap R: A \in \mathscr{S}\}$ is a topology on $R$, known as the relative topology induced by $\mathscr{S}$.
Proof
First $S \in \mathscr{S}$ and $S \cap R = R$, so $R \in \mathscr{R}$. Next, $\emptyset \in \mathscr{S}$ and $\emptyset \cap R = \emptyset$ so $\emptyset \in \mathscr{R}$. Suppose that $\mathscr{B} \subseteq \mathscr{R}$. For each $B \in \mathscr{B}$, select $A \in \mathscr{S}$ such that $B = A \cap R$. Let $\mathscr{A}$ denote the collection of sets selected (we need the axiom of choice to do this). Then $\bigcup \mathscr{A} \in \mathscr{S}$ and $\bigcup \mathscr{B} = \left(\bigcup \mathscr{A} \right) \cap R$, so $\bigcup \mathscr{B} \in \mathscr{R}$. Finally, suppose that $\mathscr{B} \subseteq \mathscr{R}$ is finite. Once again, for each $B \in \mathscr{B}$ there exists $A \in \mathscr{S}$ with $A \cap R = B$. Let $\mathscr{A}$ denote the collection of sets selected. Then $\mathscr{A}$ is finite so $\bigcap \mathscr{A} \in \mathscr{S}$. But $\bigcap \mathscr{B} = \left(\bigcap \mathscr{A}\right) \cap R$ so $\bigcap \mathscr{B} \in \mathscr{R}$.
In the context of the previous result, note that if $R$ is itself open, then the relative topology is $\mathscr{R} = \{A \in \mathscr{S}: A \subseteq R\}$, the subsets of $R$ that are open in the original topology.
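Continuing the finite illustrations, the relative topology is a one-line computation; this sketch and its names are, again, our own.

```python
def relative_topology(opens, R):
    """The relative topology {A ∩ R : A open} induced on a subset R."""
    R = frozenset(R)
    return {frozenset(A) & R for A in opens}

opens = [set(), {1}, {1, 2}, {1, 2, 3}]
print(sorted(relative_topology(opens, {2, 3}), key=len))
# [frozenset(), frozenset({2}), frozenset({2, 3})]
```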
Separation Properties
Separation properties refer to the ability to separate points or sets with disjoint open sets. Our first definition deals with separating two points.
Suppose that $(S, \mathscr{S})$ is a topological space and that $x, \, y$ are distinct points in $S$. Then $x$ and $y$ can be separated if there exist disjoint open sets $U$ and $V$ with $x \in U$ and $y \in V$. If every pair of distinct points in $S$ can be separated, then $(S, \mathscr{S})$ is called a Hausdorff space.
Hausdorff spaces are named for the German mathematician Felix Hausdorff. There are weaker separation properties. For example, there could be an open set $U$ that contains $x$ but not $y$, and an open set $V$ that contains $y$ but not $x$, but no disjoint open sets that contain $x$ and $y$. Clearly if every open set that contains one of the points also contains the other, then the points are indistinguishable from a topological viewpoint. In a Hausdorff space, singletons are closed.
Suppose that $(S, \mathscr{S})$ is a Hausdorff space. Then $\{x\}$ is closed for each $x \in S$.
Proof
The definition shows immediately that $\{x\}^c$ is open: if $y \in \{x\}^c$, there exists an open set $V$ with $y \in V \subseteq \{x\}^c$.
Our next definition deals with separating a point from a closed set.
Suppose again that $(S, \mathscr{S})$ is a topological space. A nonempty closed set $A \subseteq S$ and a point $x \in A^c$ can be separated if there exist disjoint open sets $U$ and $V$ with $A \subseteq U$ and $x \in V$. If every nonempty closed set $A$ and point $x \in A^c$ can be separated, then the space $(S, \mathscr{S})$ is regular.
Clearly if $(S, \mathscr{S})$ is a regular space and singleton sets are closed, then $(S, \mathscr{S})$ is a Hausdorff space.
Bases
Topologies, like other set structures, are often defined by first giving some basic sets that should belong to the collection, and then extending the collection so that the defining axioms are satisfied. This idea is the motivation for the following definition:
Suppose again that $(S, \mathscr{S})$ is a topological space. A collection $\mathscr{B} \subseteq \mathscr{S}$ is a base for $\mathscr{S}$ if every set in $\mathscr{S}$ can be written as a union of sets in $\mathscr{B}$.
So, a base is a smaller collection of open sets with the property that every other open set can be written as a union of basic open sets. But again, we often want to start with the basic open sets and extend this collection to a topology. The following theorem gives the conditions under which this can be done.
Suppose that $S$ is a nonempty set. A collection $\mathscr{B}$ of subsets of $S$ is a base for a topology on $S$ if and only if
1. $S = \bigcup \mathscr{B}$
2. If $A, \, B \in \mathscr{B}$ and $x \in A \cap B$, there exists $C \in \mathscr{B}$ with $x \in C \subseteq A \cap B$
Proof
Suppose that $\mathscr{B}$ is a base for a topology $\mathscr{S}$ on $S$. Since $S$ is open, $S$ is a union of sets in $\mathscr{B}$. Since every set in $\mathscr{B}$ is a subset of $S$, we must have $S = \bigcup \mathscr{B}$. Suppose that $A, \, B \in \mathscr{B}$ and that $x \in A \cap B$. Since $A \cap B$ is open, it's a union of sets in $\mathscr{B}$. The point $x$ must be in one of those sets, so there exists $C \in \mathscr{B}$ with $x \in C \subseteq A \cap B$.
Suppose now that $\mathscr{B}$ satisfies the two conditions in the theorem. Let $\mathscr{S}$ be the collection of all unions of sets in $\mathscr{B}$. Then $S \in \mathscr{S}$ by condition (a), and $\emptyset \in \mathscr{S}$ by taking a vacuous union. Suppose that $U_i \in \mathscr{S}$ for $i \in I$ where $I$ is an arbitrary index set. Then for each $i \in I$, there exists an index set $J_i$ such that $U_i = \bigcup_{j \in J_i} B_{i,j}$ where $B_{i,j} \in \mathscr{B}$ for each $j \in J_i$. But then $\bigcup_{i \in I} U_i = \bigcup_{i \in I} \bigcup_{j \in J_i} B_{i,j} \in \mathscr{S}$ Finally, suppose that $U, \, V \in \mathscr{S}$. Then there exist index sets $I$ and $J$ with $U = \bigcup_{i \in I} A_i$ and $V = \bigcup_{j \in J} B_j$ where $A_i \in \mathscr{B}$ for all $i \in I$ and $B_j \in \mathscr{B}$ for all $j \in J$. Then $U \cap V = \bigcup_{i \in I, j \in J} (A_i \cap B_j)$ By condition (b), for each $i \in I$, $j \in J$, and $x \in A_i \cap B_j$ there exists $C_{x,i,j} \in \mathscr{B}$ with $x \in C_{x,i,j} \subseteq A_i \cap B_j$. But then clearly $U \cap V = \bigcup\{C_{x,i,j}: i \in I, j \in J, x \in A_i \cap B_j\} \in \mathscr{S}$
Here is a stronger condition on the collection, but one that is often satisfied in practice and is easier to check.
Suppose that $S$ is a nonempty set. A collection $\mathscr{B}$ of subsets of $S$ that satisfies the following properties is a base for a topology on $S$:
1. $S = \bigcup \mathscr{B}$
2. If $A, \, B \in \mathscr{B}$ then $A \cap B \in \mathscr{B}$
Part (b) means that $\mathscr{B}$ is closed under finite intersections.
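In the finite setting, the topology generated by a base is just the collection of all unions of subcollections of the base, with the empty union contributing $\emptyset$. The Python sketch below is our own illustration; the nested base in the example satisfies the two conditions above.

```python
from itertools import chain, combinations

def topology_from_base(base):
    """All unions of subcollections of a finite base (the empty union gives the empty set)."""
    base = [frozenset(B) for B in base]
    opens = set()
    for r in range(len(base) + 1):
        for combo in combinations(base, r):
            opens.add(frozenset(chain.from_iterable(combo)))
    return opens

# A nested base on {1, 2, 3}: its union is S, and it is closed under intersection
base = [{3}, {2, 3}, {1, 2, 3}]
print(sorted(topology_from_base(base), key=len))
# [frozenset(), frozenset({3}), frozenset({2, 3}), frozenset({1, 2, 3})]
```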
Compactness
Our next discussion considers another very important type of set. Some additional terminology will make the discussion easier. Suppose that $S$ is a set and $A \subseteq S$. A collection of subsets $\mathscr{A}$ of $S$ is said to cover $A$ if $A \subseteq \bigcup \mathscr{A}$. So the word cover simply means a collection of sets whose union contains a given set. In a topological space, we can have an open cover (that is, a cover with open sets), a closed cover (that is, a cover with closed sets), and so forth.
Suppose again that $(S, \mathscr{S})$ is a topological space. A set $C \subseteq S$ is compact if every open cover of $C$ has a finite sub-cover. That is, if $\mathscr{A} \subseteq \mathscr{S}$ with $C \subseteq \bigcup \mathscr{A}$ then there exists a finite $\mathscr{B} \subseteq \mathscr{A}$ with $C \subseteq \bigcup \mathscr{B}$.
So intuitively, a compact set is compact in the ordinary sense of the word: no matter how small the open sets in a cover of $C$ may be, finitely many of them always suffice to cover $C$.
Suppose again that $(S, \mathscr{S})$ is a topological space and that $C \subseteq S$ is compact. If $B \subseteq C$ is closed, then $B$ is also compact.
Proof
Suppose that $\mathscr{A}$ is an open cover of $B$. Since $B$ is closed, $B^c$ is open, so $\mathscr{A} \cup \{B^c\}$ is an open cover of $C$. Since $C$ is compact, this last collection has a finite sub-cover of $C$, which is also a finite sub-cover of $B$.
Compactness is also preserved under finite unions.
Suppose again that $(S, \mathscr{S})$ is a topological space, and that $C_i \subseteq S$ is compact for each $i$ in a finite index set $I$. Then $C = \bigcup_{i \in I} C_i$ is compact.
Proof
Suppose that $\mathscr{A}$ is an open cover of $C$. Then trivially, $\mathscr{A}$ is also an open cover of $C_i$ for each $i \in I$. Hence there exists a finite subcover $\mathscr{A}_i \subseteq \mathscr{A}$ of $C_i$ for each $i \in I$. But then $\bigcup_{i \in I} \mathscr{A}_i$ is also finite and is a covering of $C$.
As we saw above, closed subsets of a compact set are themselves compact. In a Hausdorff space, a compact set is itself closed.
Suppose that $(S, \mathscr{S})$ is a Hausdorff space. If $C \subseteq S$ is compact then $C$ is closed.
Proof
We will show that $C^c$ is open, so fix $x \in C^c$. For each $y \in C$, the points $x$ and $y$ can be separated, so there exist disjoint open sets $U_y$ and $V_y$ such that $x \in U_y$ and $y \in V_y$. Trivially, the collection $\{V_y: y \in C\}$ is an open cover of $C$, and hence there exists a finite subset $B \subseteq C$ such that $\{V_y: y \in B\}$ covers $C$. But then $U = \bigcap_{y \in B} U_y$ is open and is disjoint from $\bigcup_{y \in B} V_y$. Hence also $U$ is disjoint from $C$. So to summarize, $U$ is open and $x \in U \subseteq C^c$.
Also in a Hausdorff space, a point can be separated from a compact set that does not contain the point.
Suppose that $(S, \mathscr{S})$ is a Hausdorff space. If $x \in S$, $C \subseteq S$ is compact, and $x \notin C$, then there exist disjoint open sets $U$ and $V$ with $x \in U$ and $C \subseteq V$
Proof
Since the space is Hausdorff, for each $y \in C$ there exist disjoint open sets $U_y$ and $V_y$ with $x \in U_y$ and $y \in V_y$. The collection $\{V_y: y \in C\}$ is an open cover of $C$, and hence there exists a finite set $B \subset C$ such that $\{V_y: y \in B\}$ covers $C$. Thus let $U = \bigcap_{y \in B} U_y$ and $V = \bigcup_{y \in B} V_y$. Then $U$ is open, since $B$ is finite, and $V$ is open. Moreover $U$ and $V$ are disjoint, and $x \in U$ and $C \subseteq V$.
In a Hausdorff space, if a point has a neighborhood with a compact boundary, then there is a smaller, closed neighborhood.
Suppose again that $(S, \mathscr{S})$ is a Hausdorff space. If $x \in S$ and $A$ is a neighborhood of $x$ with $\partial(A)$ compact, then there exists a closed neighborhood $B$ of $x$ with $B \subseteq A$.
Proof
By (20), there exist disjoint open sets $U$ and $V$ with $x \in U$ and $\partial(A) \subseteq V$. Hence $\cl(U)$ and $\partial(A)$ are disjoint. Let $B = \cl(A \cap U)$. Note that $B$ is closed, and is a neighborhood of $x$ since $U$ and $A$ are neighborhoods of $x$. Moreover, $B \subseteq \cl(A) \cap \cl(U) = [A \cup \partial(A)] \cap \cl(U) = [A \cap \cl(U)] \cup [\partial(A) \cap \cl(U)] = A \cap \cl(U) \subseteq A$
Generally, local properties in a topological space refer to properties that hold on the neighborhoods of a point $x \in S$.
A topological space $(S, \mathscr{S})$ is locally compact if every point $x \in S$ has a compact neighborhood.
This definition is important because many of the topological spaces that occur in applications (like probability) are not compact, but are locally compact. Locally compact Hausdorff spaces have a number of nice properties. In particular, in a locally compact Hausdorff space, there are arbitrarily small compact neighborhoods of a point.
Suppose that $(S, \mathscr{S})$ is a locally compact Hausdorff space. If $x \in S$ and $A$ is a neighborhood of $x$, then there exists a compact neighborhood $B$ of $x$ with $B \subseteq A$.
Proof
Since $S$ is locally compact, there exists a compact neighborhood $C$ of $x$. Hence $A \cap C$ is a neighborhood of $x$. Moreover, $\partial(A \cap C)$ is closed and is a subset of $C$ and hence is compact. From (21), there exists a closed neighborhood $B$ of $x$ with $B \subseteq A \cap C$. Since $B$ is closed and $B \subseteq C$, $B$ is compact. Of course also, $B \subseteq A$.
Countability Axioms
Our next discussion concerns topologies that can be countably constructed in a certain sense. Such axioms limit the size of the topology in a way, and are often satisfied by important topological spaces that occur in applications. We start with an important preliminary definition.
Suppose that $(S, \mathscr{S})$ is a topological space. A set $D \subseteq S$ is dense if $U \cap D$ is nonempty for every nonempty $U \in \mathscr{S}$.
Equivalently, $D$ is dense if every neighborhood of a point $x \in S$ contains an element of $D$. So in this sense, one can find elements of $D$ arbitrarily close to a point $x \in S$. Of course, the entire space $S$ is dense, but we are usually interested in topological spaces that have dense sets of limited cardinality.
Suppose again that $(S, \mathscr{S})$ is a topological space. A set $D \subseteq S$ is dense if and only if $\cl(D) = S$.
Proof
Suppose that $D$ is dense. Since $\cl(D)$ is closed, $[\cl(D)]^c$ is open. If this set were nonempty, it would contain a point of $D$, which is a contradiction since $D \subseteq \cl(D)$. Hence $[\cl(D)]^c = \emptyset$, so $\cl(D) = S$. Conversely, suppose that $\cl(D) = S$, and suppose that $U$ is a nonempty open set. Then $U^c$ is closed, and $U^c \ne S$. If $D \cap U = \emptyset$, then $D \subseteq U^c$, and hence $\cl(D) \subseteq U^c$, so $\cl(D) \ne S$, a contradiction.
Here is our first countability axiom:
A topological space $(S, \mathscr{S})$ is separable if there exists a countable dense subset.
So in a separable space, there is a countable set $D$ with the property that there are points in $D$ arbitrarily close to every $x \in S$. Unfortunately, the term separable is easily confused with the separation of points discussed above in the definition of a Hausdorff space, but the concepts are very different. Here is another important countability axiom.
A topological space $(S, \mathscr{S})$ is second countable if it has a countable base.
So in a second countable space, there is a countable collection of open sets $\mathscr{B}$ with the property that every other open set is a union of sets in $\mathscr{B}$. Here is how the two properties are related:
If a topological space $(S, \mathscr{S})$ is second countable then it is separable.
Proof
Suppose that $\mathscr{B} = \{U_i: i \in I\}$ is a base for $\mathscr{S}$, where $I$ is a countable index set. Select $x_i \in U_i$ for each $i \in I$, and let $D = \{x_i: i \in I\}$. Of course, $D$ is countable. If $U$ is open and nonempty, then $U = \bigcup_{j \in J} U_j$ for some nonempty $J \subseteq I$. But then $\{x_j: j \in J\} \subseteq U$, so $D$ is dense.
As the terminology suggests, there are other axioms of countability (such as first countable), but the two we have discussed are the most important.
Connected and Disconnected Spaces
This discussion deals with the situation in which a topological space falls into two or more separated pieces, in a sense.
A topological space $(S, \mathscr{S})$ is disconnected if there exist nonempty, disjoint, open sets $U$ and $V$ with $S = U \cup V$. If $(S, \mathscr{S})$ is not disconnected, then it is connected.
Since $U = V^c$, it follows that $U$ and $V$ are also closed. So the space is disconnected if and only if there exists a nonempty, proper subset $U$ that is both open and closed (sadly, such sets are sometimes called clopen). If $S$ is disconnected, then $S$ consists of two pieces $U$ and $V$, and the points in $U$ are not close to the points in $V$, in a sense. To study $S$ topologically, we could simply study $U$ and $V$ separately, with their relative topologies.
Convergence
There is a natural definition for a convergent sequence in a topological space, but the concept is not as useful as one might expect.
Suppose again that $(S, \mathscr{S})$ is a topological space. A sequence of points $(x_n: n \in \N_+)$ in $S$ converges to $x \in S$ if for every neighborhood $A$ of $x$ there exists $m \in \N_+$ such that $x_n \in A$ for $n \gt m$. We write $x_n \to x$ as $n \to \infty$.
So for every neighborhood of $x$, regardless of how small, all but finitely many of the terms of the sequence will be in the neighborhood. One would naturally hope that limits, when they exist, are unique, but this will only be the case if points in the space can be separated.
Suppose that $(S, \mathscr{S})$ is a Hausdorff space. If $(x_n: n \in \N_+)$ is a sequence of points in $S$ with $x_n \to x \in S$ as $n \to \infty$ and $x_n \to y \in S$ as $n \to \infty$, then $x = y$.
Proof
If $x \ne y$, there exist disjoint neighborhoods $A$ and $B$ of $x$ and $y$, respectively. There exist $k, \, m \in \N_+$ such that $x_n \in A$ for all $n \gt k$ and $x_n \in B$ for all $n \gt m$. But then if $n \gt \max\{k, m\}$, $x_n \in A$ and $x_n \in B$, a contradiction.
On the other hand, if distinct points $x, \, y \in S$ cannot be separated, then any sequence that converges to $x$ will also converge to $y$.
Continuity
Continuity of functions is one of the most important concepts to come out of general topology. The idea, of course, is that if two points are close together in the domain, then the functional values should be close together in the range. The abstract topological definition, based on inverse images is very simple, but not very intuitive at first.
Suppose that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces. A function $f: S \to T$ is continuous if $f^{-1}(A) \in \mathscr{S}$ for every $A \in \mathscr{T}$.
So a continuous function has the property that the inverse image of an open set (in the range space) is also open (in the domain space). Continuity can equivalently be expressed in terms of closed subsets.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces. A function $f: S \to T$ is continuous if and only if $f^{-1}(A)$ is a closed subset of $S$ for every closed subset $A$ of $T$.
Proof
Recall that $f^{-1}(A ^c) = \left[f^{-1}(A)\right]^c$ for $A \subseteq T$. The result follows directly from the definition and the fact that a set is open if and only if its complement is closed.
Continuity preserves limits.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces, and that $f: S \to T$ is continuous. If $(x_n: n \in \N_+)$ is a sequence of points in $S$ with $x_n \to x \in S$ as $n \to \infty$, then $f(x_n) \to f(x)$ as $n \to \infty$.
Proof
Suppose that $V \subseteq T$ is open and $f(x) \in V$. Then $f^{-1}(V)$ is open in $S$ and $x \in f^{-1}(V)$. Hence there exists $m \in \N_+$ such that $x_n \in f^{-1}(V)$ for every $n \gt m$. But then $f(x_n) \in V$ for $n \gt m$. So $f(x_n) \to f(x)$ as $n \to \infty$.
The converse of the last result is not true, so continuity of functions in a general topological space cannot be characterized in terms of convergent sequences. There are objects like sequences but more general, known as nets, that do characterize continuity, but we will not study these. Composition, the most important way to combine functions, preserves continuity.
Suppose that $(S, \mathscr{S})$, $(T, \mathscr{T})$, and $(U, \mathscr{U})$ are topological spaces. If $f: S \to T$ and $g: T \to U$ are continuous, then $g \circ f: S \to U$ is continuous.
Proof
If $A$ is open in $U$ then $g^{-1}(A)$ is open in $T$ and therefore $f^{-1}\left[g^{-1}(A)\right] = \left(f^{-1} \circ g^{-1}\right)(A)$ is open in $S$. But $(g \circ f)^{-1} = f^{-1} \circ g^{-1}$.
The next definition is very important. A recurring theme in mathematics is to recognize when two mathematical structures of a certain type are fundamentally the same, even though they may appear to be different.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces. A one-to-one function $f$ that maps $S$ onto $T$ with both $f$ and $f^{-1}$ continuous is a homeomorphism from $(S, \mathscr{S})$ to $(T, \mathscr{T})$. When such a function exists, the topological spaces are said to be homeomorphic.
Note that in this definition, $f^{-1}$ refers to the inverse function, not the mapping of inverse images. If $f$ is a homeomorphism, then $A$ is open in $S$ if and only if $f(A)$ is open in $T$. It follows that the topological spaces are essentially equivalent: any purely topological property can be characterized in terms of open sets and therefore any such property is shared by the two spaces.
Being homeomorphic is an equivalence relation on the collection of topological spaces. That is, for spaces $(S, \mathscr{S})$, $(T, \mathscr{T})$, and $(U, \mathscr{U})$,
1. $(S, \mathscr{S})$ is homeomorphic to $(S, \mathscr{S})$ (the reflexive property).
2. If $(S, \mathscr{S})$ is homeomorphic to $(T, \mathscr{T})$ then $(T, \mathscr{T})$ is homeomorphic to $(S, \mathscr{S})$ (the symmetric property).
3. If $(S, \mathscr{S})$ is homeomorphic to $(T, \mathscr{T})$ and $(T, \mathscr{T})$ is homeomorphic to $(U, \mathscr{U})$ then $(S, \mathscr{S})$ is homeomorphic to $(U, \mathscr{U})$ (the transitive property).
Proof
1. The identity function $I: S \to S$ defined by $I(x) = x$ for $x \in S$ is a homeomorphism from the space $(S, \mathscr{S})$ to itself.
2. If $f$ is a homeomorphism from $(S, \mathscr{S})$ to $(T, \mathscr{T})$ then $f^{-1}$ is a homeomorphism from $(T, \mathscr{T})$ to $(S, \mathscr{S})$.
3. If $f$ is a homeomorphism from $(S, \mathscr{S})$ to $(T, \mathscr{T})$ and $g$ is a homeomorphism from $(T, \mathscr{T})$ to $(U, \mathscr{U})$, then $g \circ f$ is a homeomorphism from $(S, \mathscr{S})$ to $(U, \mathscr{U})$.
Continuity can also be defined locally, by restricting attention to the neighborhoods of a point.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces, and that $x \in S$. A function $f: S \to T$ is continuous at $x$ if $f^{-1}(B)$ is a neighborhood of $x$ in $S$ whenever $B$ is a neighborhood of $f(x)$ in $T$. If $A \subseteq S$, then $f$ is continuous on $A$ if $f$ is continuous at each $x \in A$.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces, and that $f: S \to T$. Then $f$ is continuous if and only if $f$ is continuous at each $x \in S$.
Proof
Suppose that $f$ is continuous. Let $x \in S$ and let $B$ be a neighborhood of $f(x)$. Then there exists an open set $V$ in $T$ with $f(x) \in V \subseteq B$. But then $f^{-1}(V)$ is open in $S$, and $x \in f^{-1}(V) \subseteq f^{-1}(B)$, so $f^{-1}(B)$ is a neighborhood of $x$. Hence $f$ is continuous at $x$.
Conversely, suppose that $f$ is continuous at each $x \in S$, and suppose that $V \in \mathscr{T}$. If $V$ contains no points in the range of $f$, then $f^{-1}(V) = \emptyset \in \mathscr{S}$. Otherwise, there exists $x \in S$ with $f(x) \in V$. But then $V$ is a neighborhood of $f(x)$, so $U = f^{-1}(V)$ is a neighborhood of $x$. Let $y \in U$. Then $f(y) \in V$ also, so $U$ is also a neighborhood of $y$. Hence $U \in \mathscr{S}$.
Properties that are defined for a topological space can be applied to a subset of the space, with the relative topology. But one has to be careful.
Suppose again that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces and that $f: S \to T$. Suppose also that $A \subseteq S$, let $\mathscr{A}$ denote the relative topology on $A$ induced by $\mathscr{S}$, and let $f_A$ denote the restriction of $f$ to $A$. If $f$ is continuous on $A$ then $f_A$ is continuous relative to the spaces $(A, \mathscr{A})$ and $(T, \mathscr{T})$. The converse is not generally true.
Proof
Suppose that $V \in \mathscr{T}$. If $f(A) \cap V = \emptyset$ then $f_A^{-1}(V) = \emptyset \in \mathscr{A}$. Otherwise, suppose there exists $x \in A$ with $f(x) \in V$. Then $V$ is a neighborhood of $f(x)$ in $T$ so $f^{-1}(V)$ is a neighborhood of $x$ in $(S, \mathscr{S})$. Hence $f^{-1}(V) \cap A = f_A^{-1}(V)$ is a neighborhood of $x$ in $(A, \mathscr{A})$. Since $f_A$ is continuous (relative to $(A, \mathscr{A})$) at each $x \in A$, $f_A$ is continuous from the previous result.
For a simple counterexample, suppose that $f$ is not continuous at a particular $x \in S$. The set $\{x\}$ has the trivial relative topology $\{\emptyset, \{x\}\}$, and so $f$ restricted to $\{x\}$ is trivially continuous.
Product Spaces
Cartesian product sets are ubiquitous in mathematics, so a natural question is this: given topological spaces $(S, \mathscr{S})$ and $(T, \mathscr{T})$, what is a natural topology for $S \times T$? The answer is very simple using the concept of a base above.
Suppose that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are topological spaces. The collection $\mathscr{B} = \{A \times B: A \in \mathscr{S}, B \in \mathscr{T}\}$ is a base for a topology on $S \times T$, called the product topology associated with the given spaces.
Proof
Trivially, $S \times T = \bigcup \mathscr{B}$. In fact $S \times T \in \mathscr{B}$. Next if $A \times B \in \mathscr{B}$ and $C \times D \in \mathscr{B}$, so that $A, \, C$ are open in $S$ and $B, \, D$ are open in $T$, then $(A \times B) \cap (C \times D) = (A \cap C) \times (B \cap D) \in \mathscr{B}$ Hence $\mathscr{B}$ is a base for a topology on $S \times T$.
So basically, we want the product of open sets to be open in the product space. The product topology is the smallest topology that makes this happen. The definition above can be extended to very general product spaces, but to state the extension, let's recall how general product sets are constructed. Suppose that $S_i$ is a set for each $i$ in a nonempty index set $I$. Then the product set $\prod_{i \in I} S_i$ is the set of all functions $x: I \to \bigcup_{i \in I} S_i$ such that $x(i) \in S_i$ for $i \in I$.
Suppose that $(S_i, \mathscr{S}_i)$ is a topological space for each $i$ in a nonempty index set $I$. Then $\mathscr{B} = \left\{\prod_{i \in I} A_i: A_i \in \mathscr{S}_i \text{ for all } i \in I \text{ and } A_i = S_i \text{ for all but finitely many } i \in I\right\}$ is a base for a topology on $\prod_{i \in I} S_i$, known as the product topology associated with the given spaces.
Proof
The proof is just as before, except for the more complicated notation. Trivially $\prod_{i \in I} S_i = \bigcup \mathscr{B}$, and $\mathscr{B}$ is closed under finite intersections.
Suppose again that $S_i$ is a set for each $i$ in a nonempty index set $I$. For $j \in I$, recall that projection function $p_j: \prod_{i \in I} S_i \to S_j$ is defined by $p_j(x) = x(j)$.
Suppose again that $(S_i, \mathscr{S}_i)$ is a topological space for each $i \in I$, and give the product space $\prod_{i \in I} S_i$ the product topology. The projection function $p_j$ is continuous for each $j \in I$.
Proof
If $U$ is open in $S_j$ then $p_j^{-1}(U) = \prod_{i \in I} A_i$ where $A_i = S_i$ for $i \in I$ with $i \ne j$, and $A_j = U$, so clearly this inverse image is open in the product space.
As a special case of all this, suppose that $(S, \mathscr{S})$ is a topological space, and that $S_i = S$ for all $i \in I$. Then the product space $\prod_{i \in I} S_i$ is the set of all functions from $I$ to $S$, sometimes denoted $S^I$. In this case, the base for the product topology on $S^I$ is $\mathscr{B} = \left\{\prod_{i \in I} A_i: A_i \in \mathscr{S} \text{ for all } i \in I \text{ and } A_i = S \text{ for all but finitely many } i \in I\right\}$ For $j \in I$, the projection function $p_j$ just returns the value of a function $x: I \to S$ at $j$: $p_j(x) = x(j)$. This projection function is continuous. Note in particular that no topology is necessary on the domain $I$.
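For two finite spaces, the basic open sets of the product topology can be listed explicitly. The sketch below is our own illustration of the two-factor case described above; the helper name `product_base` is hypothetical.

```python
from itertools import product

def product_base(opens_S, opens_T):
    """Basic open sets A × B for the product topology on S × T."""
    return {frozenset(product(A, B))
            for A in map(frozenset, opens_S)
            for B in map(frozenset, opens_T)}

opens_S = [set(), {1}, {1, 2}]          # a topology on S = {1, 2}
opens_T = [set(), {'a'}, {'a', 'b'}]    # a topology on T = {'a', 'b'}
base = product_base(opens_S, opens_T)
print(frozenset({(1, 'a')}) in base)    # True: this is the basic set {1} × {'a'}
```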
Examples and Special Cases
The Trivial Topology
Suppose that $S$ is a nonempty set. Then $\{S, \emptyset\}$ is a topology on $S$, known as the trivial topology.
With the trivial topology, no two distinct points can be separated. So the topology cannot distinguish between points, in a sense, and all points in $S$ are close to each other. Clearly, this topology is not very interesting, except as a place to start. Since there is only one nonempty open set ($S$ itself), the space is connected, and every subset of $S$ is compact. A sequence in $S$ converges to every point in $S$.
Suppose that $S$ has the trivial topology and that $(T, \mathscr{T})$ is another topological space.
1. Every function from $T$ to $S$ is continuous.
2. If $(T, \mathscr{T})$ is a Hausdorff space then the only continuous functions from $S$ to $T$ are constant functions.
Proof
1. Suppose $f: T \to S$. Then $f^{-1}(S) = T \in \mathscr{T}$ and $f^{-1}(\emptyset) = \emptyset \in \mathscr{T}$, so $f$ is continuous.
2. Suppose that $f: S \to T$ is continuous and that $u, \, v$ are distinct elements in the range of $f$. There exist disjoint open sets $U, \, V \in \mathscr{T}$ with $u \in U$ and $v \in V$. But $f^{-1}(U)$ and $f^{-1}(V)$ are nonempty open subsets of $S$, and so each must equal $S$. Then for $x \in S$, $f(x) \in U$ and $f(x) \in V$, a contradiction.
The Discrete Topology
At the opposite extreme from the trivial topology, with the smallest collection of open sets, is the discrete topology, with the largest collection of open sets.
Suppose that $S$ is a nonempty set. The power set $\mathscr{P}(S)$ (consisting of all subsets of $S$) is a topology, known as the discrete topology.
So in the discrete topology, every set is both open and closed. All points are separated, and in a sense, widely so. No point is close to another point. With the discrete topology, $S$ is Hausdorff, disconnected, and the compact subsets are the finite subsets. A sequence in $S$ converges to $x \in S$, if and only if all but finitely many terms of the sequence are $x$.
Suppose that $S$ has the discrete topology and that $(T, \mathscr{T})$ is another topological space.
1. Every function from $S$ to $T$ is continuous.
2. If $(T, \mathscr{T})$ is connected, then the only continuous functions from $T$ to $S$ are constant functions.
Proof
1. Trivially, if $f: S \to T$, then $f^{-1}(U) \in \mathscr{P}(S)$ for $U \in \mathscr{T}$, so $f$ is continuous.
2. Suppose that $f: T \to S$ is continuous and that $x$ is in the range of $f$. Then $\{x\}$ is open and closed in $S$, so $f^{-1}\{x\}$ is open and closed in $T$. If $T$ is connected, this means that $f^{-1}\{x\} = T$.
Euclidean Spaces
The standard topologies used in the Euclidean spaces are the topologies built from the open sets that you are familiar with.
For the set of real numbers $\R$, let $\mathscr{B} = \{(a, b): a, \, b \in \R, \; a \lt b\}$, the collection of open intervals. Then $\mathscr{B}$ is a base for a topology $\mathscr{R}$ on $\R$, known as the Euclidean topology.
Proof
Clearly the conditions for $\mathscr{B}$ to be a base given above are satisfied. First $\R = \bigcup \mathscr{B}$. Next, if $(a, b) \in \mathscr{B}$ and $(c, d) \in \mathscr{B}$ and $x \in (a, b) \cap (c, d)$, then $x \in \left(\max\{a, c\}, \min\{b, d\}\right) \subseteq (a, b) \cap (c, d)$.
The space $(\R, \mathscr{R})$ satisfies many properties that are motivations for definitions in topology in the first place. The convergence of a sequence in $\R$, in the topological sense given above, is the same as the definition of convergence in calculus. The same statement holds for the continuity of a function $f$ from $\R$ to $\R$.
Before listing other topological properties, we give a characterization of compact sets, known as the Heine-Borel theorem, named for Eduard Heine and Émile Borel. Recall that $A \subseteq \R$ is bounded if $A \subseteq [a, b]$ for some $a, \, b \in \R$ with $a \lt b$.
A subset $C \subseteq \R$ is compact if and only if $C$ is closed and bounded.
So in particular, closed, bounded intervals of the form $[a, b]$ with $a, \, b \in \R$ and $a \lt b$ are compact.
The space $(\R, \mathscr{R})$ has the following properties:
1. Hausdorff.
2. Connected.
3. Locally compact.
4. Second countable.
Proof
1. Distinct points in $\R$ can be separated by open intervals.
2. $\R$ has no proper subset that is both open and closed.
3. If $A$ is a neighborhood of $x \in \R$, then there exist $a, \, b \in \R$ with $a \lt x \lt b$ such that $[a, b] \subseteq A$. The closed interval $[a, b]$ is then a compact neighborhood of $x$ contained in $A$.
4. The collection $\mathscr{Q} = \{(a, b): a, \, b \in \Q, \; a \lt b\}$ is a countable base for $\mathscr{R}$, where as usual, $\Q$ is the set of rational real numbers.
As noted in the proof, $\Q$, the set of rationals, is countable and is dense in $\R$. Another countable, dense subset is $\D = \{j / 2^n: n \in \N \text{ and } j \in \Z\}$, the set of dyadic rationals (or binary rationals). For the higher-dimensional Euclidean spaces, we can use the product topology based on the topology of the real numbers.
For $n \in \{2, 3, \ldots\}$, let $(\R^n, \mathscr{R}_n)$ be the $n$-fold product space corresponding to the space $(\R, \mathscr{R})$. Then $\mathscr{R}_n$ is the Euclidean topology on $\R^n$.
A subset $A \subseteq \R^n$ is bounded if there exist $a, \, b \in \R$ with $a \lt b$ such that $A \subseteq [a, b]^n$, so that $A$ fits inside of an $n$-dimensional block.
A subset $C \subseteq \R^n$ is compact if and only if $C$ is closed and bounded.
The space $(\R^n, \mathscr{R}_n)$ has the following properties:
1. Hausdorff.
2. Connected.
3. Locally compact.
4. Second countable. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/01%3A_Foundations/1.09%3A_Topological_Spaces.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\diam}{\text{diam}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
Basic Theory
Most of the important topological spaces that occur in applications (like probability) have an additional structure that gives a distance between points in the space.
Definitions
A metric space consists of a nonempty set $S$ and a function $d: S \times S \to [0, \infty)$ that satisfies the following axioms: For $x, \, y, \, z \in S$,
1. $d(x, y) = 0$ if and only if $x = y$.
2. $d(x, y) = d(y, x)$.
3. $d(x, z) \le d(x, y) + d(y, z)$.
The function $d$ is known as a metric or a distance function.
So as the name suggests, $d(x, y)$ is the distance between points $x, \, y \in S$. The axioms are intended to capture the essential properties of distance from geometry. Part (a) is the positive property; the distance is strictly positive if and only if the points are distinct. Part (b) is the symmetric property; the distance from $x$ to $y$ is the same as the distance from $y$ to $x$. Part (c) is the triangle inequality; going directly from $x$ to $z$ cannot be longer than going from $x$ to $z$ by way of a third point $y$.
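The axioms can be spot-checked numerically on a sample of points; of course a finite sample cannot prove that a function is a metric, but such a check catches errors quickly. The following Python sketch is our own (the function name `check_metric_axioms` is hypothetical); it tests the Euclidean metric on random points in the plane, using `math.dist` for Euclidean distance.

```python
import math, random

def check_metric_axioms(d, points, tol=1e-12):
    """Spot-check the metric axioms on a finite sample of points."""
    for x in points:
        assert d(x, x) == 0                                  # d(x, x) = 0
        for y in points:
            assert d(x, y) >= 0                              # nonnegative
            assert abs(d(x, y) - d(y, x)) <= tol             # symmetry
            if x != y:
                assert d(x, y) > 0                           # positive for x != y
            for z in points:
                assert d(x, z) <= d(x, y) + d(y, z) + tol    # triangle inequality
    return True

pts = [(random.random(), random.random()) for _ in range(15)]
print(check_metric_axioms(math.dist, pts))  # True
```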
Note that if $(S, d)$ is a metric space, and $A$ is a nonempty subset of $S$, then the set $A$ with $d$ restricted to $A \times A$ is also a metric space (known as a subspace). The next definitions also come naturally from geometry:
Suppose that $(S, d)$ is a metric space, and that $x \in S$ and $r \in (0, \infty)$.
1. $B(x, r) = \{y \in S: d(x, y) \lt r\}$ is the open ball with center $x$ and radius $r$.
2. $C(x, r) = \{y \in S: d(x, y) \le r\}$ is the closed ball with center $x$ and radius $r$.
A metric on a space induces a topology on the space in a natural way.
Suppose that $(S, d)$ is a metric space. By definition, a set $U \subseteq S$ is open if for every $x \in U$ there exists $r \in (0, \infty)$ such that $B(x, r) \subseteq U$. The collection $\mathscr S_d$ of open subsets of $S$ is a topology.
Proof
1. Trivially $S$ is open and vacuously $\emptyset$ is open.
2. Suppose that $A_i$ is open for $i$ in an arbitrary index set $I$, and let $A = \bigcup_{i \in I} A_i$. If $x \in A$ then $x \in A_i$ for some $i \in I$. Since $A_i$ is open, there exists $r \in (0, \infty)$ with $B(x, r) \subseteq A_i$. But then $B(x, r) \subseteq A$ so $A$ is open.
3. Suppose that $A_i$ is open for $i$ in a finite index set $I$, and let $A = \bigcap_{i \in I} A_i$. If $x \in A$ then $x \in A_i$ for every $i \in I$. Hence for each $i \in I$ there exist $r_i \in (0, \infty)$ such that $B(x, r_i) \subseteq A_i$. Let $r = \min\{r_i: i \in I\}$. Since $I$ is finite, $r \gt 0$ and $B(x, r) \subseteq B(x, r_i) \subseteq A_i$ for each $i \in I$. Hence $B(x, r) \subseteq A$, so $A$ is open.
As the names suggests, an open ball is in fact open and a closed ball is in fact closed.
Suppose again that $(S, d)$ is a metric space, and that $x \in S$ and $r \in (0, \infty)$. Then
1. $B(x, r)$ is open.
2. $C(x, r)$ is closed.
Proof
1. Let $y \in B(x, r)$, and let $a = d(x, y)$, so that $a \lt r$. If $z \in B(y, r - a)$ then we have $d(x, y) = a$ and $d(y, z) \lt r - a$, so by the triangle inequality, $d(x, z) \lt a + (r - a) = r$. Hence $z \in B(x, r)$. Thus $B(y, r - a) \subseteq B(x, r)$. It follows that $B(x, r)$ is open.
2. We show that $U = \left[C(x, r)\right]^c$ is open. Suppose that $y \in U$, and let $a = d(x, y)$, so that $a \gt r$. Let $z \in B(y, a - r)$ and suppose that $z \in C(x, r)$, so that $d(z, x) \le r$. By the triangle inequality again, $d(x, y) \le d(x, z) + d(z, y) \lt r + (a - r) = a$ a contradiction. Hence $z \in U$. So $B(y, a - r) \subseteq U$, and it follows that $U$ is open, so that $C(x, r)$ is closed.
Recall that for a general topological space, a neighborhood of a point $x \in S$ is a set $A \subseteq S$ with the property that there exists an open set $U$ with $x \in U \subseteq A$. It follows that in a metric space, $A \subseteq S$ is a neighborhood of $x$ if and only if there exists $r \gt 0$ such that $B(x, r) \subseteq A$. In words, a neighborhood of a point must contain an open ball about that point.
It's easy to construct new metrics from ones that we already have. Here's one such result.
Suppose that $S$ is a nonempty set, and that $d, \, e$ are metrics on $S$, and $c \in (0, \infty)$. Then the following are also metrics on $S$:
1. $c d$
2. $d + e$
Proof
1. Recall that $c d$ is the function defined by $(c d)(x, y) = c d(x, y)$ for $(x, y) \in S^2$. Since $c \gt 0$, it's easy to see that the axioms are satisfied.
2. Recall that $d + e$ is the function defined by $(d + e)(x, y) = d(x, y) + e(x, y)$ for $(x, y) \in S^2$. Again, it's easy to see that the axioms are satisfied.
Since a metric space produces a topological space, all of the definitions for general topological spaces apply to metric spaces as well. In particular, in a metric space, distinct points can always be separated.
A metric space $(S, d)$ is a Hausdorff space.
Proof
Let $x, \, y$ be distinct points in $S$. Then $r = d(x, y) \gt 0$. The sets $B(x, r/2)$ and $B(y, r/2)$ are open, and contain $x$ and $y$, respectively. Suppose that $z \in B(x, r/2) \cap B(y, r/2)$. By the triangle inequality, $d(x, y) \le d(x, z) + d(z, y) \lt \frac{r}{2} + \frac{r}{2} = r$ a contradiction. Hence $B(x, r/2)$ and $B(y, r/2)$ are disjoint.
Metrizable Spaces
Again, every metric space is a topological space, but not conversely. A non-Hausdorff space, for example, cannot correspond to a metric space. We know there are such spaces; a set $S$ with more than one point, and with the trivial topology $\mathscr S = \{S, \emptyset\}$ is non-Hausdorff.
Suppose that $(S, \mathscr S)$ is a topological space. If there exists a metric $d$ on $S$ such that $\mathscr S = \mathscr S_d$, then $(S, \mathscr S)$ is said to be metrizable.
It's easy to see that different metrics can induce the same topology. For example, if $d$ is a metric and $c \in (0, \infty)$, then the metrics $d$ and $c d$ induce the same topology.
Let $S$ be a nonempty set. Metrics $d$ and $e$ on $S$ are equivalent, and we write $d \equiv e$, if $\mathscr S_d = \mathscr S_e$. The relation $\equiv$ is an equivalence relation on the collection of metrics on $S$. That is, for metrics $d, \, e, \, f$ on $S$,
1. $d \equiv d$, the reflexive property.
2. If $d \equiv e$ then $e \equiv d$, the symmetric property.
3. If $d \equiv e$ and $e \equiv f$ then $d \equiv f$, the transitive property.
There is a simple condition that characterizes when the topology of one metric is finer than the topology of another metric, and then this in turn leads to a condition for equivalence of metrics.
Suppose again that $S$ is a nonempty set and that $d, \, e$ are metrics on $S$. Then $\mathscr S_e$ is finer than $\mathscr S_d$ if and only if every open ball relative to $d$ contains an open ball relative to $e$ with the same center.
Proof
Suppose that $\mathscr S_d \subseteq \mathscr S_e$ so that $\mathscr S_e$ is finer than $\mathscr S_d$. If $x \in S$ and $a \in (0, \infty)$, then the open ball $B_d(x, a)$ centered at $x$ of radius $a$ for the metric $d$ is in $\mathscr S_d$ and hence in $\mathscr S_e$. Thus there exists $b \in (0, \infty)$ such that $B_e(x, b) \subseteq B_d(x, a)$. Conversely, suppose that the condition in the theorem holds and suppose that $U \in \mathscr S_d$. If $x \in U$ there exists $a \in (0, \infty)$ such that $B_d(x, a) \subseteq U$. Hence there exists $b \in (0, \infty)$ such that $B_e(x, b) \subseteq B_d(x, a) \subseteq U$. So $U \in \mathscr S_e$.
It follows that metrics $d$ and $e$ on $S$ are equivalent if and only if every open ball relative to one of the metrics contains an open ball, with the same center, relative to the other metric.
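As a concrete example, consider the Euclidean metric $d_2$ and the maximum metric $d_\infty$ on $\R^2$. The pointwise inequalities $d_\infty \le d_2 \le \sqrt{2}\, d_\infty$ mean that every ball in one metric contains a ball in the other with the same center, so the two metrics are equivalent. The numerical check below is our own sketch, a spot-check rather than a proof.

```python
import math, random

d2   = lambda p, q: math.dist(p, q)                          # Euclidean metric
dmax = lambda p, q: max(abs(p[i] - q[i]) for i in (0, 1))    # maximum metric

# dmax <= d2 <= sqrt(2) * dmax at every pair of points, so
# B_2(x, r) contains B_max(x, r / sqrt(2)), and B_max(x, r) contains B_2(x, r)
for _ in range(1000):
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    q = (random.uniform(-1, 1), random.uniform(-1, 1))
    assert dmax(p, q) <= d2(p, q) <= math.sqrt(2) * dmax(p, q) + 1e-12
print("inequalities hold on the sample")
```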
So every metrizable topology on $S$ corresponds to an equivalence class of metrics that produce that topology. Sometimes we want to know that a topological space is metrizable, because of the nice properties that it will have, but we don't really need to use a specific metric that generates the topology. At any rate, it's important to have conditions that are sufficient for a topological space to be metrizable. The most famous such result is the Urysohn metrization theorem, named for the Russian mathematician Pavel Urysohn:
Suppose that $(S, \mathscr S)$ is a regular, second-countable, Hausdorff space. Then $(S, \mathscr S)$ is metrizable.
Review of the terms
Recall that regular means that every closed set and point not in the set can be separated by disjoint open sets. As discussed earlier, Hausdorff means that any two distinct points can be separated by disjoint open sets. Finally, second-countable means that there is a countable base for the topology, that is, there is a countable collection of open sets with the property that every other open set is a union of sets in the collection.
Convergence
With a distance function, the convergence of a sequence can be characterized in a manner that is just like calculus. Recall that for a general topological space $(S, \mathscr S)$, if $(x_n: n \in \N_+)$ is a sequence of points in $S$ and $x \in S$, then $x_n \to x$ as $n \to \infty$ means that for every neighborhood $U$ of $x$, there exists $m \in \N_+$ such that $x_n \in U$ for $n \gt m$.
Suppose that $(S, d)$ is a metric space, and that $(x_n: n \in \N_+)$ is a sequence of points in $S$ and $x \in S$. Then $x_n \to x$ as $n \to \infty$ if and only if for every $\epsilon \gt 0$ there exists $m \in \N_+$ such that if $n \gt m$ then $d(x_n, x) \lt \epsilon$. Equivalently, $x_n \to x$ as $n \to \infty$ if and only if $d(x_n, x) \to 0$ as $n \to \infty$ (in the usual calculus sense).
Proof
Suppose that $x_n \to x$ as $n \to \infty$, and let $\epsilon \gt 0$. Then $B(x, \epsilon)$ is a neighborhood of $x$, so there exists $m \in \N_+$ such that $x_n \in B(x, \epsilon)$ for $n \gt m$, which is the condition in the theorem. Conversely, suppose that condition in the theorem holds, and let $U$ be a neighborhood of $x$. Then there exists $\epsilon \gt 0$ such that $B(x, \epsilon) \subseteq U$. By assumption, there exists $m \in \N_+$ such that if $n \gt m$ then $x_n \in B(x, \epsilon) \subseteq U$.
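For readers who want to experiment, the $\epsilon$ criterion can be spot-checked numerically. The sketch below is our own (the helper name `tail_within` is hypothetical, and a finite check of course proves nothing); it examines the sequence $x_n = (1/n, 2^{-n})$ in $\R^2$, which converges to the origin in the Euclidean metric.

```python
import math

def tail_within(x, limit, eps, horizon=10**6, tail=1000):
    """Find an index m with d(x_m, limit) < eps, then spot-check a stretch of the tail."""
    m = next(n for n in range(1, horizon) if math.dist(x(n), limit) < eps)
    return all(math.dist(x(n), limit) < eps for n in range(m, m + tail))

x = lambda n: (1 / n, 2.0 ** (-n))   # a sequence in R^2 converging to the origin
print(all(tail_within(x, (0.0, 0.0), eps) for eps in (1e-1, 1e-3, 1e-5)))  # True
```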
So, no matter how tiny $\epsilon \gt 0$ may be, all but finitely many terms of the sequence are within $\epsilon$ distance of $x$. As one might hope, limits are unique.
Suppose again that $(S, d)$ is a metric space. Suppose also that $(x_n: n \in \N_+)$ is a sequence of points in $S$ and that $x, \, y \in S$. If $x_n \to x$ as $n \to \infty$ and $x_n \to y$ as $n \to \infty$ then $x = y$.
Proof
This follows immediately since a metric space is a Hausdorff space, and the limit of a sequence in a Hausdorff space is unique. Here's a direct proof: Let $\epsilon \gt 0$. Then there exists $k \in \N_+$ such that $d(x_n, x) \lt \epsilon / 2$ for $n \gt k$, and there exists $m \in \N_+$ such that $d(x_n, y) \lt \epsilon / 2$ for $n \gt m$. Let $n \gt \max\{k, m\}$. By the triangle inequality, $d(x, y) \le d(x, x_n) + d(x_n, y) \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$ So we have $d(x, y) \lt \epsilon$ for every $\epsilon \gt 0$ and hence $d(x, y) = 0$ and thus $x = y$.
Convergence of a sequence is a topological property, and so is preserved under equivalence of metrics.
Suppose that $d, \, e$ are equivalent metrics on $S$, and that $(x_n: n \in \N_+)$ is a sequence of points in $S$ and $x \in S$. Then $x_n \to x$ as $n \to \infty$ relative to $d$ if and only if $x_n \to x$ as $n \to \infty$ relative to $e$.
Closed subsets of a metric space have a simple characterization in terms of convergent sequences, and this characterization is more intuitive than the abstract axioms in a general topological space.
Suppose again that $(S, d)$ is a metric space. Then $A \subseteq S$ is closed if and only if whenever a sequence of points in $A$ converges, the limit is also in $A$.
Proof
Suppose that $A$ is closed and that $(x_n: n \in \N_+)$ is a sequence of points in $A$ with $x_n \to x \in S$ as $n \to \infty$. Suppose that $x \in A^c$. Since $A^c$ is open, $x_n \in A^c$ for $n$ sufficiently large, a contradiction. Hence $x \in A$. Conversely, suppose that $A$ has the sequential closure property, but that $A$ is not closed. Then $A^c$ is not open. This means that there exists $x \in A^c$ with the property that every neighborhood of $x$ has points in $A$. Specifically, for each $n \in \N_+$ there exists $x_n \in B(x, 1/n)$ with $x_n \in A$. But clearly $x_n \to x$ as $n \to \infty$, again a contradiction.
The following definition also shows up in standard calculus. The idea is to have a criterion for convergence of a sequence that does not require knowing the limit a priori. But for metric spaces, this definition takes on added importance.
Suppose again that $(S, d)$ is a metric space. A sequence of points $(x_n: n \in \N_+)$ in $S$ is a Cauchy sequence if for every $\epsilon \gt 0$ there exist $k \in \N_+$ such that if $m, \, n \in \N_+$ with $m \gt k$ and $n \gt k$ then $d(x_m, x_n) \lt \epsilon$.
Cauchy sequences are named for the ubiquitous Augustin Cauchy. So for a Cauchy sequence, no matter how tiny $\epsilon \gt 0$ may be, all but finitely many terms of the sequence will be within $\epsilon$ distance of each other. A convergent sequence is always Cauchy.
Suppose again that $(S, d)$ is a metric space. If a sequence of points $(x_n: n \in \N_+)$ in $S$ converges, then the sequence is Cauchy.
Proof
By assumption, there exists $x \in S$ such that $x_n \to x$ as $n \to \infty$. Let $\epsilon \gt 0$. There exists $k \in \N_+$ such that if $n \in \N_+$ and $n \gt k$ then $d(x_n, x) \lt \epsilon / 2$. Hence if $m, \, n \in \N_+$ with $m \gt k$ and $n \gt k$ then by the triangle inequality, $d(x_m, x_n) \le d(x_m, x) + d(x, x_n) \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$ So the sequence is Cauchy.
Conversely, one might think that a Cauchy sequence should converge, but it's relatively trivial to create a situation where this is false. Suppose that $(S, d)$ is a metric space, and that there is a point $x \in S$ that is the limit of a sequence of points in $S$ that are all distinct from $x$. Then the space $T = S \setminus \{x\}$ with the metric $d$ restricted to $T \times T$ has a Cauchy sequence that does not converge. Essentially, we have created a convergence hole. So our next definition is very natural and very important.
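The canonical example of this construction is $x_n = 1/n$ in the subspace $T = (0, 1]$ of $\R$: for $m, \, n \gt 2/\epsilon$ we have $|1/m - 1/n| \lt \epsilon$, so the sequence is Cauchy, but the only candidate limit, $0$, has been removed from the space. The numerical illustration below is our own sketch; the function name is hypothetical, and the random trials merely spot-check the tail bound.

```python
import random

def cauchy_tail_ok(eps, trials=10000):
    """Spot-check that pairs of terms x_n = 1/n beyond the index k = 2/eps are within eps."""
    k = int(2 / eps) + 1
    return all(abs(1 / random.randint(k, 10 * k) - 1 / random.randint(k, 10 * k)) < eps
               for _ in range(trials))

# The terms cluster together (Cauchy), yet the limit 0 lies outside T = (0, 1]
print(all(cauchy_tail_ok(eps) for eps in (1e-1, 1e-3, 1e-5)))  # True
```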
Suppose again that $(S, d)$ is metric space and that $A \subseteq S$. Then $A$ is complete if every Cauchy sequence in $A$ converges to a point in $A$.
Of course, completeness can be applied to the entire space $S$. Trivially, a complete set must be closed.
Suppose again that $(S, d)$ is a metric space, and that $A \subseteq S$. If $A$ is complete, then $A$ is closed.
Proof
Suppose that $\bs{x} = (x_n: n \in \N)$ is a sequence of points in $A$ and that $x_n \to x \in S$ as $n \to \infty$. Then $\bs{x}$ is a Cauchy sequence, and so by completeness, $x \in A$. Hence $A$ is closed by (12).
Completeness is such a crucial property that it is often imposed as an assumption on metric spaces that occur in applications. Even though a Cauchy sequence may not converge, here is a partial result that will be useful later: if a Cauchy sequence has a convergent subsequence, then the sequence itself converges.
Suppose again the $(S, d)$ is a metric space, and that $(x_n: n \in \N_+)$ is a Cauchy sequence in $S$. If there exists a subsequence $\left(x_{n_k}: k \in \N_+\right)$ such that $x_{n_k} \to x \in S$ as $k \to \infty$, then $x_n \to x$ as $n \to \infty$.
Proof
Recall that in the construction of a subsequence, the indices $(n_k: k \in \N_+)$ must be a strictly increasing sequence in $\N_+$. In particular, $n_k \to \infty$ as $k \to \infty$. So let $\epsilon \gt 0$. From the hypotheses, there exists $j \in \N_+$ such that if $k \gt j$ then $d\left(x_{n_k}, x\right) \lt \epsilon / 2$. There exists $N \in \N_+$ such that if $m \gt N$ and $p \gt N$ then $d(x_m, x_p) \lt \epsilon / 2$. Now let $m \gt N$. Pick $k \in \N_+$ such that $k \gt j$ and $n_k \gt N$. By the triangle inequality, $d(x_m, x) \le d\left(x_m, x_{n_k}\right) + d\left(x_{n_k}, x\right) \lt \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$
Continuity
In metric spaces, continuity of functions also has simple characterizations that are familiar from calculus. We start with local continuity. Recall that the general topological definition is that $f: S \to T$ is continuous at $x \in S$ if $f^{-1}(V)$ is a neighborhood of $x$ in $S$ for every neighborhood $V$ of $f(x)$ in $T$.
Suppose that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. The continuity of $f$ at $x \in S$ is equivalent to each of the following conditions:
1. If $(x_n: n \in \N_+)$ is a sequence in $S$ with $x_n \to x$ as $n \to \infty$ then $f(x_n) \to f(x)$ as $n \to \infty$.
2. For every $\epsilon \gt 0$, there exists $\delta \gt 0$ such that if $y \in S$ and $d(x, y) \lt \delta$ then $e[f(y), f(x)] \lt \epsilon$.
Proof
1. This condition is sequential continuity at $x$. Continuity at $x$ implies sequential continuity at $x$ for general topological spaces, and hence for metric spaces. Conversely, suppose that sequential continuity holds at $x \in S$, and let $V$ be a neighborhood of $f(x)$ in $T$. If $U = f^{-1}(V)$ is not a neighborhood of $x$ in $S$, then for every $n \in \N_+$, there exists $x_n \in B(x, 1/n)$ with $x_n \notin U$. But then clearly $x_n \to x$ as $n \to \infty$ but $f(x_n)$ does not converge to $f(x)$ as $n \to \infty$, a contradiction.
2. Suppose that $f$ is continuous at $x$. For $\epsilon \gt 0$, $B_T[f(x), \epsilon]$ is a neighborhood of $f(x)$, and hence $U = f^{-1}\left(B_T[f(x), \epsilon]\right)$ is a neighborhood of $x$. Hence there exists $\delta \gt 0$ such that $B_S(x, \delta) \subseteq U$. But this means that if $d(y, x) \lt \delta$ then $e[f(y), f(x)] \lt \epsilon$. Conversely suppose that the condition in (b) holds, and suppose that $V$ is a neighborhood of $f(x)$. Then there exists $\epsilon \gt 0$ such that $B_T[f(x), \epsilon] \subseteq V$. By assumption, there exists $\delta \gt 0$ such that if $y \in B_S(x, \delta)$ then $f(y) \in B_T[f(x), \epsilon] \subseteq V$. This means that $f^{-1}(V)$ is a neighborhood of $x$.
More generally, recall that $f$ continuous on $A \subseteq S$ means that $f$ is continuous at each $x \in A$, and that $f$ continuous means that $f$ is continuous on $S$. So general continuity can be characterized in terms of sequential continuity and the $\epsilon$-$\delta$ condition.
On a metric space, there are stronger versions of continuity.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces and that $f: S \to T$. Then $f$ is uniformly continuous if for every $\epsilon \gt 0$ there exists $\delta \gt 0$ such that if $x, \, y \in S$ with $d(x, y) \lt \delta$ then $e[f(x), f(y)] \le \epsilon$.
In the $\epsilon$-$\delta$ formulation of ordinary point-wise continuity above, $\delta$ depends on the point $x$ in addition to $\epsilon$. With uniform continuity, there exists a $\delta$ depending only on $\epsilon$ that works uniformly in $x \in S$.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. If $f$ is uniformly continuous then $f$ is continuous.
Here is an even stronger version of continuity.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. Then $f$ is Hölder continuous with exponent $\alpha \in (0, \infty)$ if there exists $C \in (0, \infty)$ such that $e[f(x), f(y)] \le C [d(x, y)]^\alpha$ for all $x, \, y \in S$.
The definition is named for Otto Hölder. The exponent $\alpha$ is more important than the constant $C$, which generally does not have a name. If $\alpha = 1$, $f$ is said to be Lipschitz continuous, named for the German mathematician Rudolf Lipschitz.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. If $f$ is Hölder continuous with exponent $\alpha \gt 0$ then $f$ is uniformly continuous.
The case where $\alpha = 1$ and $C \lt 1$ is particularly important.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces. A function $f: S \to T$ is a contraction if there exists $C \in (0, 1)$ such that $e[f(x), f(y)] \le C d(x, y), \quad x, \, y \in S$
So contractions shrink distance. By the result above, a contraction is uniformly continuous. Part of the importance of contraction maps is due to the famous Banach fixed-point theorem, named for Stefan Banach.
Suppose that $(S, d)$ is a complete metric space and that $f: S \to S$ is a contraction. Then $f$ has a unique fixed point. That is, there exists exactly one $x^* \in S$ with $f(x^*) = x^*$. Let $x_0 \in S$, and recursively define $x_n = f(x_{n-1})$ for $n \in \N_+$. Then $x_n \to x^*$ as $n \to \infty$.
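The theorem also gives an algorithm: iterate $f$ from any starting point, and the iterates converge to the fixed point at geometric rate $C$. Here is a sketch, our own illustration, using $f = \cos$ on $[0, 1]$, which maps $[0, 1]$ into itself and is a contraction there with constant $C = \sin(1) \approx 0.84$ by the mean value theorem.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until successive terms agree to within tol."""
    x = x0
    for _ in range(max_iter):
        y = f(x)
        if abs(y - x) < tol:
            return y
        x = y
    raise RuntimeError("no convergence within max_iter")

xstar = fixed_point(math.cos, 0.5)
print(xstar, math.cos(xstar) - xstar)  # about 0.7390851332, residual about 0
```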
Functions that preserve distance are particularly important. The term isometry means distance-preserving.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. Then $f$ is an isometry if $e[f(x), f(y)] = d(x, y)$ for every $x, \, y \in S$.
Suppose again that $(S, d)$ and $(T, e)$ are metric spaces, and that $f: S \to T$. If $f$ is an isometry, then $f$ is one-to-one and Lipschitz continuous.
Proof
If $x, \, y \in S$ with $x \ne y$, then $e[f(x), f(y)] = d(x, y) \gt 0$, so $f(x) \ne f(y)$. Hence $f$ is one-to-one. Directly from the definition, $f$ is Hölder continuous with exponent $\alpha = 1$ and constant multiple $C = 1$.
In particular, an isometry $f$ is uniformly continuous. If one metric space can be mapped isometrically onto another metric space, the spaces are essentially the same.
Metric spaces $(S, d)$ and $(T, e)$ are isometric if there exists an isometry $f$ that maps $S$ onto $T$. Isometry is an equivalence relation on metric spaces. That is, for metric spaces $(S, d)$, $(T, e)$, and $(U, \rho)$,
1. $(S, d)$ is isometric to $(S, d)$, the reflexive property.
2. If $(S, d)$ is isometric to $(T, e)$ then $(T, e)$ is isometric to $(S, d)$, the symmetric property.
3. If $(S, d)$ is isometric to $(T, e)$ and $(T, e)$ is isometric to $(U, \rho)$, then $(S, d)$ is isometric to $(U, \rho)$, the transitive property.
Proof
1. The identity function $I: S \to S$ defined by $I(x) = x$ for $x \in S$ is an isometry from $(S, d)$ onto $(S, d)$.
2. If $f$ is an isometry from $(S, d)$ onto $(T, e)$ then $f^{-1}$ is an isometry from $(T, e)$ onto $(S, d)$.
3. If $f$ is an isometry from $(S, d)$ onto $(T, e)$ and $g$ is an isometry from $(T, e)$ onto $(U, \rho)$, then $g \circ f$ is an isometry from $(S, d)$ onto $(U, \rho)$.
In particular, if metric spaces $(S, d)$ and $(T, e)$ are isometric, then as topological spaces, they are homeomorphic.
Compactness and Boundedness
In a metric space, various definitions related to a set being bounded are natural, and are related to the general concept of compactness.
Suppose again that $(S, d)$ is a metric space, and that $A \subseteq S$. Then $A$ is bounded if there exists $r \in (0, \infty)$ such that $d(x, y) \le r$ for all $x, \, y \in A$. The diameter of $A$ is $\diam(A) = \inf\{r \gt 0: d(x, y) \lt r \text{ for all } x, \, y \in A\}$
Additional details
Recall that $\inf(\emptyset) = \infty$, so $\diam(A) = \infty$ if $A$ is unbounded. In the bounded case, note that if the distance between points in $A$ is bounded by $r \in (0, \infty)$, then the distance is bounded by any $s \in [r, \infty)$. Hence the diameter definition makes sense.
So $A$ is bounded if and only if $\diam(A) \lt \infty$. Diameter is an increasing function relative to the subset partial order.
Suppose again that $(S, d)$ is a metric space, and that $A \subseteq B \subseteq S$. Then $\diam(A) \le \diam(B)$.
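For a finite set, the infimum in the definition reduces to the maximum pairwise distance, so the diameter is easy to compute. A small Python sketch (the helper name and the example points are ours):

```python
import itertools

def diameter(points, d):
    """Diameter of a finite set: the maximum pairwise distance, which agrees
    with the infimum definition above when the set is finite."""
    return max((d(x, y) for x, y in itertools.combinations(points, 2)), default=0.0)

d = lambda x, y: abs(x - y)             # the standard metric on R
A = [0.0, 0.25, 1.0]
B = [0.0, 0.25, 1.0, 3.0]               # A is a subset of B
print(diameter(A, d), diameter(B, d))   # 1.0 3.0, illustrating diam(A) <= diam(B)
```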
Our next definition is stronger, but first let's review some terminology that we used for general topological spaces: If $S$ is a set, $A$ a subset of $S$, and $\mathscr{A}$ a collection of subsets of $S$, then $\mathscr{A}$ is said to cover $A$ if $A \subseteq \bigcup \mathscr{A}$. So with this terminology, we can talk about open covers, closed covers, finite covers, disjoint covers, and so on.
Suppose again that $(S, d)$ is a metric space, and that $A \subseteq S$. Then $A$ is totally bounded if for every $r \gt 0$ there is a finite cover of $A$ with open balls of radius $r$.
Recall that for a general topological space, a set $A$ is compact if every open cover of $A$ has a finite subcover. So in a metric space, the term precompact is sometimes used instead of totally bounded: the set $A$ is totally bounded if, for every $r \gt 0$, every cover of $A$ with open balls of radius $r$ has a finite subcover.
Suppose again that $(S, d)$ is a metric space. If $A \subseteq S$ is totally bounded then $A$ is bounded.
Proof
There exists a finite cover of $A$ with open balls of radius 1. Let $C$ denote the set of centers of the balls, and let $c = \max\{d(u, v): u, \, v \in C\}$, the maximum distance between two centers. Since $C$ is finite, $c \lt \infty$. Now let $x, \, y \in A$. Since the balls cover $A$, there exist $u, \, v \in C$ with $x \in B(u, 1)$ and $y \in B(v, 1)$. By the triangle inequality (what else?) $d(x, y) \le d(x, u) + d(u, v) + d(v, y) \le 2 + c$ Hence $A$ is bounded.
Since a metric space is a Hausdorff space, a compact subset of a metric space is closed. Compactness also has a simple characterization in terms of convergence of sequences.
Suppose again that $(S, d)$ is a metric space. A subset $C \subseteq S$ is compact if and only if every sequence of points in $C$ has a subsequence that converges to a point in $C$.
Proof
The condition in the theorem is known as sequential compactness, so we want to show that sequential compactness is equivalent to compactness. The proof is harder than most of the others in this section, but the proof presented here is the nicest I have found, and is due to Anton Schep.
Suppose that $C$ is compact and that $\bs{x} = (x_n: n \in \N_+)$ is a sequence of points in $C$. Let $A = \{x_n: n \in \N_+\} \subseteq C$, the unordered set of distinct points in the sequence. If $A$ is finite, then some element $a \in A$ must occur infinitely many times in the sequence. In this case, we can construct a subsequence of $\bs{x}$ all of whose terms are $a$, and so this subsequence trivially converges to $a \in C$. Suppose next that $A$ is infinite. Since the space is Hausdorff, $C$ is closed, and therefore $\cl(A) \subseteq C$. Our next claim is that there exists $a \in \cl(A)$ such that for every $r \gt 0$, the set $A \cap B(a, r)$ is infinite. If the claim is false, then for each $a \in \cl(A)$ there exists $r_a \gt 0$ such that $A \cap B(a, r_a)$ is finite. It then follows that for each $a \in \cl(A)$, there exists $\epsilon_a \gt 0$ such that $A \cap B(a, \epsilon_a) \subseteq \{a\}$. But then $\mathscr{U} = \{B(a, \epsilon_a): a \in \cl(A)\} \cup \{[\cl(A)]^c\}$ is an open cover of $C$ that has no finite subcover, a contradiction. So the claim is true and for some $a \in \cl(A)$, the set $A \cap B(a, r)$ is infinite for each $r \gt 0$. We can construct a subsequence of $\bs{x}$ that converges to $a \in C$.
Conversely, suppose that $C$ is sequentially compact. If $\bs{x} = (x_n: n \in \N_+)$ is a Cauchy sequence in $C$, then by assumption, $\bs{x}$ has a subsequence that converges to some $x \in C$. But then by (17) the sequence $\bs{x}$ itself converges to $x$, so it follows that $C$ is complete. We next show that $C$ is totally bounded. Our goal is to show that $C$ can be covered by a finite number of balls of an arbitrary radius $r \gt 0$. Pick $x_1 \in C$. If $C \subseteq B(x_1, r)$ then we are done. Otherwise, pick $x_2 \in C \setminus B(x_1, r)$. If $C \subseteq B(x_1, r) \cup B(x_2, r)$ then again we are done. Otherwise there exists $x_3 \in C \setminus [B(x_1, r) \cup B(x_2, r)]$. This process must terminate in a finite number of steps, for otherwise we would have a sequence of points $(x_n: n \in \N_+)$ in $C$ with the property that $d(x_n, x_m) \ge r$ for all distinct $n, \, m \in \N_+$. Such a sequence does not have a convergent subsequence. Suppose now, to get a contradiction, that $\mathscr{U}$ is an open cover of $C$ with no finite subcover, and let $c = \diam(C)$. Then $C$ can be covered by a finite number of closed balls with centers in $C$ and with radius $c / 4$. It follows that at least one of these balls cannot be covered by finitely many sets from $\mathscr{U}$. Let $C_1$ denote the intersection of this ball with $C$. Then $C_1$ is closed and is sequentially compact with $\diam(C_1) \le c / 2$. Repeating the argument, we generate a nested sequence of closed sets $(C_1, C_2, \ldots)$ such that $\diam(C_n) \le c / 2^n$, and with the property that $C_n$ cannot be finitely covered by $\mathscr{U}$ for each $n \in \N_+$. Pick $x_n \in C_n$ for each $n \in \N_+$. Then $\bs{x} = (x_n: n \in \N_+)$ is a Cauchy sequence in $C$ and hence has a subsequence that converges to some $x \in C$. Then $x \in \bigcap_{n=1}^\infty C_n$ and since $\diam(C_n) \to 0$ as $n \to \infty$ it follows that in fact, $\bigcap_{n=1}^\infty C_n = \{x\}$. Now, since $\mathscr{U}$ covers $C$, there exists $U \in \mathscr{U}$ such that $x \in U$. Since $U$ is open, there exists $r \gt 0$ such that $B(x, r) \subseteq U$. Now let $n \in \N_+$ be sufficiently large that $d(x, x_n) \le r / 2$ and $\diam(C_n) \lt r / 2$. Then $C_n \subseteq B(x, r) \subseteq U$, which contradicts the fact that $C_n$ cannot be finitely covered by $\mathscr{U}$.
Hausdorff Measure and Dimension
Our last discussion is somewhat advanced, but is important for the study of certain random processes, particularly Brownian motion. The idea is to measure the size of a set in a metric space in a topological way, and then use this measure to define a type of dimension. We need a preliminary definition, using our convenient cover terminology. If $(S, d)$ is a metric space, $A \subseteq S$, and $\delta \in (0, \infty)$, then a countable $\delta$ cover of $A$ is a countable cover $\mathscr{B}$ of $A$ with the property that $\diam(B) \lt \delta$ for each $B \in \mathscr{B}$.
Suppose again that $(S, d)$ is a metric space and that $A \subseteq S$. For $\delta \in (0, \infty)$ and $k \in [0, \infty)$, define $H_\delta^k(A) = \inf\left\{\sum_{B \in \mathscr{B}} \left[\diam(B)\right]^k: \mathscr{B} \text{ is a countable } \delta \text{ cover of } A \right\}$ The $k$-dimensional Hausdorff measure of $A$ is $H^k(A) = \sup \left\{H_\delta^k(A): \delta \gt 0\right\} = \lim_{\delta \downarrow 0} H_\delta^k(A)$
Additional details
Note that if $\mathscr{B}$ is a countable $\delta$ cover of $A$ then it is also a countable $\epsilon$ cover of $A$ for every $\epsilon \gt \delta$. This means that $H_\delta^k(A)$ is decreasing in $\delta \in (0, \infty)$ for fixed $k \in [0, \infty)$. Hence $\sup \left\{H_\delta^k(A): \delta \gt 0\right\} = \lim_{\delta \downarrow 0} H_\delta^k(A)$
Note that the $k$-dimensional Hausdorff measure is defined for every $k \in [0, \infty)$, not just nonnegative integers. Nonetheless, the integer dimensions are interesting. The 0-dimensional measure of $A$ is the number of points in $A$. In Euclidean space, which we consider in (36), the measures of dimension 1, 2, and 3 are related to length, area, and volume, respectively.
Suppose again that $(S, d)$ is a metric space and that $A \subseteq S$. The Hausdorff dimension of $A$ is $\dim_H(A) = \inf\{k \in [0, \infty): H^k(A) = 0\}$
Of special interest, as before, is the case when $S = \R^n$ for some $n \in \N_+$ and $d$ is the standard Euclidean distance, reviewed in (36). As you might guess, the Hausdorff dimension of a point is 0, the Hausdorff dimension of a simple curve is 1, the Hausdorff dimension of a simple surface is 2, and so on. But there are also sets with fractional Hausdorff dimension, and the stochastic process Brownian motion provides some fascinating examples. The graph of standard Brownian motion has Hausdorff dimension $3/2$ while the set of zeros has Hausdorff dimension $1/2$.
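For a concrete feel for the definition, consider covering $[0, 1]$ with $n$ intervals of length $1/n$. The corresponding sum in the definition of $H_\delta^k$ is $n (1/n)^k = n^{1-k}$, which gives an upper bound on $H_\delta^k([0, 1])$ whenever $1/n \lt \delta$. The Python sketch below (using only this particular family of covers, so the sums are upper bounds, not the measures themselves) shows the three regimes:

```python
def cover_sum(n, k):
    """Sum of diam(B)^k over the cover of [0, 1] by n intervals of length 1/n.
    This is an upper bound for H_delta^k([0, 1]) once 1/n < delta."""
    return n * (1.0 / n) ** k

for k in (0.5, 1.0, 1.5):
    print(k, [cover_sum(n, k) for n in (10, 100, 1000)])
# k = 0.5: sums grow like n^(1/2), consistent with H^(1/2)([0,1]) = infinity
# k = 1.0: sums equal 1 for every n; H^1([0,1]) is the length of the interval
# k = 1.5: sums shrink like n^(-1/2), so H^(3/2)([0,1]) = 0
```

Since $H^k([0, 1]) = 0$ for every $k \gt 1$ while $H^1([0, 1]) = 1$, it follows that $\dim_H([0, 1]) = 1$, as expected.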
Examples and Special Cases
Normed Vector Spaces
A norm on a vector space generates a metric on the space in a very simple, natural way.
Suppose that $(S, +, \cdot)$ is a vector space, and that $\| \cdot \|$ is a norm on the space. Then $d$ defined by $d(x, y) = \|y - x\|$ for $x, \, y \in S$ is a metric on $S$.
Proof
The metric axioms follow easily from the norm axioms.
1. The positive property for $d$ follows since $\|x\| = 0$ if and only if $x = 0$.
2. The symmetric property for $d$ follows since $\|-x\| = \|x\|$.
3. The triangle inequality for $d$ follows from the triangle inequality for the norm: $\|x + y\| \le \|x\| + \|y\|$.
On $\R^n$, we have a variety of norms, and hence a variety of metrics.
For $n \in \N_+$ and $k \in [1, \infty)$, the function $d_k$ given below is a metric on $\R^n$: $d_k(\bs{x}, \bs{y}) = \left(\sum_{i=1}^n \left|x_i - y_i\right|^k\right)^{1/k}, \quad \bs{x} = (x_1, x_2, \ldots, x_n), \, \bs{y} = (y_1, y_2, \ldots, y_n) \in \R^n$
Proof
This follows from the general result above, since $\| \cdot \|_k$ defined below is a norm on $\R^n$: $\| \bs{x} \|_k = \left(\sum_{i=1}^n \left|x_i\right|^k \right)^{1/k}, \quad \bs{x} = (x_1, x_2, \ldots, x_n) \in \R^n$
The metric $d_2$ is the Euclidean distance, named of course for Euclid. This is the most important one, in a practical sense because it's the usual one that we use in the real world, and in a mathematical sense because the associated norm corresponds to the standard inner product on $\R^n$ given by $\langle \bs{x}, \bs{y} \rangle = \sum_{i=1}^n x_i y_i, \quad \bs{x} = (x_1, x_2, \ldots, x_n), \, \bs{y} = (y_1, y_2, \ldots, y_n) \in \R^n$
For $n \in \N_+$, the function $d_\infty$ defined below is a metric on $\R^n$: $d_\infty(\bs{x}, \bs{y}) = \max\{\left|x_i - y_i\right|: i \in \{1, 2, \ldots, n\}\}, \quad \bs{x} = (x_1, x_2, \ldots, x_n), \, \bs{y} = (y_1, y_2, \ldots, y_n) \in \R^n$
Proof
This follows from the general result above, since $\| \cdot \|_\infty$ defined below is a norm on $\R^n$: $\| \bs{x} \|_\infty = \max\{\left|x_i\right|: i \in \{1, 2, \ldots, n\}\}, \quad \bs{x} = (x_1, x_2, \ldots, x_n) \in \R^n$
To justify the notation, recall that $\| \bs{x} \|_k \to \|\bs{x}\|_\infty$ as $k \to \infty$ for $\bs{x} \in \R^n$, and hence $d_k(\bs{x}, \bs{y}) \to d_\infty(\bs{x}, \bs{y})$ as $k \to \infty$ for $\bs{x}, \bs{y} \in \R^n$.
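These metrics are simple to compute. A brief Python sketch (the helper names and the sample points are our choices) that also illustrates the limit $d_k \to d_\infty$:

```python
def d_k(x, y, k):
    """The metric on R^n induced by the k-norm, for k in [1, infinity)."""
    return sum(abs(a - b) ** k for a, b in zip(x, y)) ** (1.0 / k)

def d_inf(x, y):
    """The maximum metric on R^n."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (1.0, 5.0, 2.0), (4.0, 1.0, 2.5)   # coordinate differences 3, 4, 0.5
for k in (1, 2, 4, 16, 64):
    print(k, d_k(x, y, k))                 # decreases toward d_inf as k grows
print("inf", d_inf(x, y))                  # 4.0, the largest coordinate difference
```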
Suppose now that $S$ is a nonempty set. Recall that the collection $\mathscr{V}$ of all functions $f: S \to \R$ is a vector space under the usual pointwise definition of addition and scalar multiplication. That is, if $f, \, g \in \mathscr{V}$ and $c \in \R$, then $f + g \in \mathscr{V}$ and $c f \in \mathscr{V}$ are defined by $(f + g)(x) = f(x) + g(x)$ and $(c f)(x) = c f(x)$ for $x \in S$. Recall further that the collection $\mathscr{U}$ of bounded functions $f: S \to \R$ is a vector subspace of $\mathscr{V}$, and moreover, $\| \cdot \|$ defined by $\| f \| = \sup\{\left| f(x) \right|: x \in S\}$ is a norm on $\mathscr{U}$, known as the supremum norm. It follows that $\mathscr{U}$ is a metric space with the metric $d$ defined by $d(f, g) = \| f - g \| = \sup\{\left|f(x) - g(x)\right|: x \in S\}$ Vector spaces of bounded, real-valued functions with the supremum norm are very important in the study of probability and stochastic processes. Note that the supremum norm on $\mathscr{U}$ generalizes the maximum norm on $\R^n$, since we can think of a point in $\R^n$ as a function from $\{1, 2, \ldots, n\}$ into $\R$. Later, as part of our discussion on integration with respect to a positive measure, we will see how to generalize the $k$ norms on $\R^n$ to spaces of functions.
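On a finite domain the supremum is just a maximum, so the metric is easy to evaluate; on an infinite domain, a finite sample of points yields only a lower bound on $d(f, g)$. A minimal sketch (the grid and the functions are our choices):

```python
def sup_dist(f, g, domain):
    """Supremum-norm distance between real-valued functions on a common domain.
    On a finite domain the supremum is a maximum; sampling an infinite domain
    gives only a lower bound."""
    return max(abs(f(x) - g(x)) for x in domain)

grid = [i / 100 for i in range(101)]   # a finite grid standing in for [0, 1]
f = lambda x: x * x
g = lambda x: x
print(sup_dist(f, g, grid))            # 0.25, attained at x = 1/2
```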
Products of Metric Spaces
If we have a finite number of metric spaces, then we can combine the individual metrics, together with a norm on the vector space $\R^n$, to create a metric on the Cartesian product space.
Suppose $n \in \{2, 3, \ldots\}$, and that $(S_i, d_i)$ is a metric space for each $i \in \{1, 2, \ldots, n\}$. Suppose also that $\| \cdot \|$ is a norm on $\R^n$. Then the function $d$ given as follows is a metric on $S = S_1 \times S_2 \times \cdots \times S_n$: $d(\bs{x}, \bs{y}) = \left\|\left(d_1(x_1, y_1), d_2(x_2, y_2), \ldots, d_n(x_n, y_n)\right)\right\|, \quad \bs{x} = (x_1, x_2, \ldots, x_n), \, \bs{y} = (y_1, y_2, \ldots, y_n) \in S$
Proof
1. Note that $d(\bs{x}, \bs{y}) = 0$ if and only if $d_i(x_i, y_i) = 0$ for $i \in \{1, 2, \ldots, n\}$ if and only if $x_i = y_i$ for $i \in \{1, 2, \ldots, n\}$ if and only if $\bs{x} = \bs{y}$.
2. Since $d_i(x_i, y_i) = d_i(y_i, x_i)$ for $i \in \{1, 2, \ldots, n\}$, we have $d(\bs{x}, \bs{y}) = d(\bs{y}, \bs{x})$.
3. The triangle inequality follows from the triangle inequality for each metric, and the triangle inequality for the norm.
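The construction is mechanical enough to code directly. A sketch (the factory function and the examples are ours): given component metrics and a norm on $\R^n$, build the product metric of the result above.

```python
import math

def product_metric(metrics, norm):
    """Combine metrics d_1, ..., d_n with a norm on R^n, as in the result above:
    d(x, y) = || (d_1(x_1, y_1), ..., d_n(x_n, y_n)) ||."""
    def d(x, y):
        return norm([m(a, b) for m, a, b in zip(metrics, x, y)])
    return d

euclid = lambda v: math.sqrt(sum(t * t for t in v))   # the 2-norm on R^n
d1 = lambda a, b: abs(a - b)                          # standard metric on R
d2 = lambda a, b: 0.0 if a == b else 1.0              # discrete metric on any set

d = product_metric([d1, d2], euclid)
print(d((0.0, "red"), (3.0, "blue")))   # sqrt(3^2 + 1^2) = sqrt(10)
print(d((0.0, "red"), (4.0, "red")))    # sqrt(4^2 + 0^2) = 4.0
```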
Graphs
Recall that a graph (in the combinatorial sense) consists of a countable set $S$ of vertices and a set $E \subseteq S \times S$ of edges. In this discussion, we assume that the graph is undirected in the sense that $(x, y) \in E$ if and only if $(y, x) \in E$, and has no loops so that $(x, x) \notin E$ for $x \in S$. Finally, recall that a path of length $n \in \N_+$ from $x \in S$ to $y \in S$ is a sequence $(x_0, x_1, \ldots, x_n) \in S^{n+1}$ such that $x_0 = x$, $x_n = y$, and $(x_{i-1}, x_i) \in E$ for $i \in \{1, 2, \ldots, n\}$. The graph is connected if there exists a path of finite length between any two distinct vertices in $S$. Such a graph has a natural metric:
Suppose that $G = (S, E)$ is a connected graph. Then $d$ defined as follows is a metric on $S$: $d(x, x) = 0$ for $x \in S$, and $d(x, y)$ is the length of the shortest path from $x$ to $y$ for distinct $x, \, y \in S$.
Proof
1. The positive property follows from the definition: $d(x, y) = 0$ if and only if $x = y$.
2. The symmetric property follows since the graph is undirected: $d(x, y) = d(y, x)$ for all $x, \, y \in S$.
3. For the triangle inequality, suppose that $x, \, y, \, z \in S$, and that $m = d(x, y)$ and $n = d(y, z)$. Then there is a path of length $m$ from $x$ to $y$ and a path of length $n$ from $y$ to $z$. Concatenating the paths produces a path of length $m + n$ from $x$ to $z$. But $d(x, z)$ is the length of the shortest such path, so it follows that $d(x, z) \le m + n$.
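The graph metric can be computed by breadth-first search, which explores vertices in order of distance from the source. A minimal sketch (the edge list and the helper name are ours):

```python
from collections import deque

def graph_distance(edges, x, y):
    """Shortest-path distance between vertices x and y in a connected,
    undirected graph, computed by breadth-first search."""
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        if u == y:
            return dist[u]
        for w in adjacency.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    raise ValueError("x and y are not connected")

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]  # path a-b-c-d plus chord a-c
print(graph_distance(edges, "a", "d"))   # 2, along the path a - c - d
print(graph_distance(edges, "b", "b"))   # 0, by definition
```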
The Discrete Topology
Suppose that $S$ is a nonempty set. Recall that the discrete topology on $S$ is $\mathscr{P}(S)$, the power set of $S$, so that every subset of $S$ is open (and closed). The discrete topology is metrizable, and there are lots of metrics that generate this topology.
Suppose again that $S$ is a nonempty set. A metric $d$ on $S$ with the property that there exists $c \in (0, \infty)$ such that $d(x, y) \ge c$ for distinct $x, \, y \in S$ generates the discrete topology.
Proof
Note that $B(x, c) = \{x\}$ for $x \in S$. Hence $\{x\}$ is open for each $x \in S$.
So any metric that is bounded from below (for distinct points) generates the discrete topology. It's easy to see that there are such metrics.
Suppose again that $S$ is a nonempty set. The function $d$ on $S \times S$ defined by $d(x, x) = 0$ for $x \in S$ and $d(x, y) = 1$ for distinct $x, \, y \in S$ is a metric on $S$, known as the discrete metric. This metric generates the discrete topology.
Proof
Clearly $d(x, y) = 0$ if and only if $x = y$, and $d(x, y) = d(y, x)$ for $x, \, y \in S$, so the positive and symmetric properties hold. For the triangle inequality, suppose $x, \, y, \, z \in S$. The inequality trivially holds if the points are not distinct. If the points are distinct, then $d(x, z) = 1$ and $d(x, y) + d(y, z) = 2$.
In probability applications, the discrete topology is often appropriate when $S$ is countable. Note also that the discrete metric is the graph distance if $S$ is made into the complete graph, so that $(x, y)$ is an edge for every pair of distinct vertices $x, \, y \in S$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\D}{\mathbb{D}}$ $\newcommand{\bs}{\boldsymbol}$
In this section we discuss some topics from measure theory that are a bit more advanced than the topics in the early sections of this chapter. However, measure-theoretic ideas are essential for a deep understanding of probability, since probability is itself a measure. The most important of the definitions is the $\sigma$-algebra, a collection of subsets of a set with certain closure properties. Such collections play a fundamental role, even for applied probability, in encoding the state of information about a random experiment.
On the other hand, we won't be overly pedantic about measure-theoretic details in this text. Unless we say otherwise, we assume that all sets that appear are measurable (that is, members of the appropriate $\sigma$-algebras), and that all functions are measurable (relative to the appropriate $\sigma$-algebras).
Although this section is somewhat abstract, many of the proofs are straightforward. Be sure to try the proofs yourself before reading the ones in the text.
Algebras and $\sigma$-Algebras
Suppose that $S$ is a set, playing the role of a universal set for a particular mathematical model. It is sometimes impossible to include all subsets of $S$ in our model, particularly when $S$ is uncountable. In a sense, the more sets that we include, the harder it is to have consistent theories. However, we almost always want the collection of admissible subsets to be closed under the basic set operations. This leads to some important definitions.
Algebras of Sets
Suppose that $\mathscr S$ is a nonempty collection of subsets of $S$. Then $\mathscr S$ is an algebra (or field) if it is closed under complement and union:
1. If $A \in \mathscr S$ then $A^c \in \mathscr S$.
2. If $A \in \mathscr S$ and $B \in \mathscr S$ then $A \cup B \in \mathscr S$.
If $\mathscr S$ is an algebra of subsets of $S$ then
1. $S \in \mathscr S$
2. $\emptyset \in \mathscr S$
Proof
1. Since $\mathscr S$ is nonempty, there exists $A \in \mathscr S$. Hence $A^c \in \mathscr S$ so $S = A \cup A^c \in \mathscr S$.
2. $\emptyset = S^c \in \mathscr S$
Suppose that $\mathscr S$ is an algebra of subsets of $S$ and that $A_i \in \mathscr S$ for each $i$ in a finite index set $I$.
1. $\bigcup_{i \in I} A_i \in \mathscr S$
2. $\bigcap_{i \in I} A_i \in \mathscr S$
Proof
1. This follows by induction on the number of elements in $I$.
2. This follows from (a) and De Morgan's law. If $A_i \in \mathscr S$ for $i \in I$ then $A_i^c \in \mathscr S$ for $i \in I$. Therefore $\bigcup_{i \in I} A_i^c \in \mathscr S$ and hence $\bigcap_{i \in I} A_i = \left(\bigcup_{i \in I} A_i^c\right)^c \in \mathscr S$.
Thus it follows that an algebra of sets is closed under a finite number of set operations. That is, if we start with a finite number of sets in the algebra $\mathscr S$, and build a new set with a finite number of set operations (union, intersection, complement), then the new set is also in $\mathscr S$. However in many mathematical theories, probability in particular, this is not sufficient; we often need the collection of admissible subsets to be closed under a countable number of set operations.
$\sigma$-Algebras of Sets
Suppose that $\mathscr S$ is a nonempty collection of subsets of $S$. Then $\mathscr S$ is a $\sigma$-algebra (or $\sigma$-field) if the following axioms are satisfied:
1. If $A \in \mathscr S$ then $A^c \in \mathscr S$.
2. If $A_i \in \mathscr S$ for each $i$ in a countable index set $I$, then $\bigcup_{i \in I} A_i \in \mathscr S$.
Clearly a $\sigma$-algebra of subsets is also an algebra of subsets, so the basic results for algebras above still hold. In particular, $S \in \mathscr S$ and $\emptyset \in \mathscr S$.
If $A_i \in \mathscr S$ for each $i$ in a countable index set $I$, then $\bigcap_{i \in I} A_i \in \mathscr S$.
Proof
The proof is just like the one above for algebras. If $A_i \in \mathscr S$ for $i \in I$ then $A_i^c \in \mathscr S$ for $i \in I$. Therefore $\bigcup_{i \in I} A_i^c \in \mathscr S$ and hence $\bigcap_{i \in I} A_i = \left(\bigcup_{i \in I} A_i^c\right)^c \in \mathscr S$.
Thus a $\sigma$-algebra of subsets of $S$ is closed under countable unions and intersections. This is the reason for the symbol $\sigma$ in the name. As mentioned in the introductory paragraph, $\sigma$-algebras are of fundamental importance in mathematics generally and probability theory specifically, and thus deserve a special definition:
If $S$ is a set and $\mathscr S$ a $\sigma$-algebra of subsets of $S$, then the pair $(S, \mathscr S)$ is called a measurable space.
The term measurable space will make more sense in the next chapter, when we discuss positive measures (and in particular, probability measures) on such spaces.
Suppose that $S$ is a set and that $\mathscr S$ is a finite algebra of subsets of $S$. Then $\mathscr S$ is also a $\sigma$-algebra.
Proof
Any countable union of sets in $\mathscr S$ reduces to a finite union.
However, there are algebras that are not $\sigma$-algebras. Here is the classic example:
Suppose that $S$ is an infinite set. The collection of finite and co-finite subsets of $S$ defined below is an algebra of subsets of $S$, but not a $\sigma$-algebra: $\mathscr{F} = \{A \subseteq S: A \text{ is finite or } A^c \text{ is finite}\}$
Proof
$S \in \mathscr{F}$ since $S^c = \emptyset$ is finite. If $A \in \mathscr{F}$ then $A^c \in \mathscr{F}$ by the symmetry of the definition. Suppose that $A, \, B \in \mathscr{F}$. If $A$ and $B$ are both finite then $A \cup B$ is finite. If $A^c$ or $B^c$ is finite, then $(A \cup B)^c = A^c \cap B^c$ is finite. In either case, $A \cup B \in \mathscr{F}$. Thus $\mathscr{F}$ is an algebra of subsets of $S$.
Since $S$ is infinite, it contains a countably infinite subset $\{x_0, x_1, x_2, \ldots\}$. Let $A_n = \{x_{2 n}\}$ for $n \in \N$. Then $A_n$ is finite, so $A_n \in \mathscr{F}$ for each $n \in \N$. Let $E = \bigcup_{n=0}^\infty A_n = \{x_0, x_2, x_4, \ldots\}$. Then $E$ is infinite by construction. Also $\{x_1, x_3, x_5, \ldots\} \subseteq E^c$, so $E^c$ is infinite as well. Hence $E \notin \mathscr{F}$ and so $\mathscr{F}$ is not a $\sigma$-algebra.
General Constructions
Recall that $\mathscr{P}(S)$ denotes the collection of all subsets of $S$, called the power set of $S$. Trivially, $\mathscr{P}(S)$ is the largest $\sigma$-algebra of $S$. The power set is often the appropriate $\sigma$-algebra if $S$ is countable, but as noted above, is sometimes too large to be useful if $S$ is uncountable. At the other extreme, the smallest $\sigma$-algebra of $S$ is given in the following result:
The collection $\{\emptyset, S\}$ is a $\sigma$-algebra.
Proof
Clearly $\{\emptyset, S\}$ is a finite algebra: $S$ and $\emptyset$ are complements of each other, and $S \cup \emptyset = S$. Hence $\{S, \emptyset\}$ is a $\sigma$-algebra by the result above.
In many cases, we want to construct a $\sigma$-algebra that contains certain basic sets. The next two results show how to do this.
Suppose that $\mathscr S_i$ is a $\sigma$-algebra of subsets of $S$ for each $i$ in a nonempty index set $I$. Then $\mathscr S = \bigcap_{i \in I} \mathscr S_i$ is also a $\sigma$-algebra of subsets of $S$.
Proof
The proof is completely straightforward. First, $S \in \mathscr S_i$ for each $i \in I$ so $S \in \mathscr S$. If $A \in \mathscr S$ then $A \in \mathscr S_i$ for each $i \in I$ and hence $A^c \in \mathscr S_i$ for each $i \in I$. Therefore $A^c \in \mathscr S$. Finally suppose that $A_j \in \mathscr S$ for each $j$ in a countable index set $J$. Then $A_j \in \mathscr S_i$ for each $i \in I$ and $j \in J$ and therefore $\bigcup_{j \in J} A_j \in \mathscr S_i$ for each $i \in I$. It follows that $\bigcup_{j \in J} A_j \in \mathscr S$.
Note that no restrictions are placed on the index set $I$, other than it be nonempty, so in particular it may well be uncountable.
Suppose that $S$ is a set and that $\mathscr B$ is a collection of subsets of $S$. The $\sigma$-algebra generated by $\mathscr B$ is $\sigma(\mathscr B) = \bigcap \{\mathscr S: \mathscr S \text{ is a } \sigma\text{-algebra of subsets of } S \text{ and } \mathscr B \subseteq \mathscr S\}$ If $\mathscr B$ is countable then $\sigma(\mathscr B)$ is said to be countably generated.
So the $\sigma$-algebra generated by $\mathscr B$ is the intersection of all $\sigma$-algebras that contain $\mathscr B$, which by the previous result really is a $\sigma$-algebra. Note that the collection of $\sigma$-algebras in the intersection is not empty, since $\mathscr{P}(S)$ is in the collection. Think of the sets in $\mathscr B$ as basic sets that we want to be measurable, but do not form a $\sigma$-algebra.
The $\sigma$-algebra $\sigma(\mathscr B)$ is the smallest $\sigma$ algebra containing $\mathscr B$.
1. $\mathscr B \subseteq \sigma(\mathscr B)$
2. If $\mathscr S$ is a $\sigma$-algebra of subsets of $S$ and $\mathscr B \subseteq \mathscr S$ then $\sigma(\mathscr B) \subseteq \mathscr S$.
Proof
Both of these properties follows from the definition of $\sigma(\mathscr B)$ as the intersection of all $\sigma$-algebras that contain $\mathscr B$.
Note that the conditions in the last theorem completely characterize $\sigma(\mathscr B)$. If $\mathscr S_1$ and $\mathscr S_2$ satisfy the conditions, then by (a), $\mathscr B \subseteq \mathscr S_1$ and $\mathscr B \subseteq \mathscr S_2$. But then by (b), $\mathscr S_1 \subseteq \mathscr S_2$ and $\mathscr S_2 \subseteq \mathscr S_1$, so $\mathscr S_1 = \mathscr S_2$.
If $A$ is a subset of $S$ then $\sigma\{A\} = \{\emptyset, A, A^c, S\}$
Proof
Let $\mathscr S = \{\emptyset, A, A^c, S\}$. Clearly $\mathscr S$ is an algebra: $A$ and $A^c$ are complements of each other, as are $\emptyset$ and $S$. Also, \begin{align*} &A \cup A^c = A \cup S = A^c \cup S = S \cup S = \emptyset \cup S = S \\ &A \cup \emptyset = A \cup A = A \\ &A^c \cup \emptyset = A^c \cup A^c = A^c \\ &\emptyset \cup \emptyset = \emptyset \end{align*} Since $\mathscr S$ is finite, it is a $\sigma$-algebra by (7). Next, $A \in \mathscr S$. Conversely, if $\mathscr T$ is a $\sigma$-algebra and $A \in \mathscr T$ then of course $\emptyset, S, A^c \in \mathscr T$ so $\mathscr S \subseteq \mathscr T$. Hence $\mathscr S = \sigma\{A\}$.
We can generalize the previous result. Recall that a collection of subsets $\mathscr{A} = \{A_i: i \in I\}$ is a partition of $S$ if $A_i \cap A_j = \emptyset$ for $i, \; j \in I$ with $i \ne j$, and $\bigcup_{i \in I} A_i = S$.
Suppose that $\mathscr{A} = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty subsets. Then $\sigma(\mathscr{A})$ is the collection of all unions of sets in $\mathscr{A}$. That is, $\sigma(\mathscr{A}) = \left\{ \bigcup_{j \in J} A_j: J \subseteq I \right\}$
Proof
Let $\mathscr S = \left\{ \bigcup_{j \in J} A_j: J \subseteq I \right\}$. Note that $S \in \mathscr S$ since $S = \bigcup_{i \in I} A_i$. Next, suppose that $B \in \mathscr S$. Then $B = \bigcup_{j \in J} A_j$ for some $J \subseteq I$. But then $B^c = \bigcup_{j \in J^c} A_j$, so $B^c \in \mathscr S$. Next, suppose that $B_k \in \mathscr S$ for $k \in K$ where $K$ is a countable index set. Then for each $k \in K$ there exists $J_k \subseteq I$ such that $B_k = \bigcup_{j \in J_k} A_j$. But then $\bigcup_{k \in K} B_k = \bigcup_{k \in K} \bigcup_{j \in J_k} A_j = \bigcup_{j \in J} A_j$ where $J = \bigcup_{k \in K} J_k$. Hence $\bigcup_{k \in K} B_k \in \mathscr S$. Therefore $\mathscr S$ is a $\sigma$-algebra of subsets of $S$. Trivially, $\mathscr{A} \subseteq \mathscr S$. If $\mathscr T$ is a $\sigma$-algebra of subsets of $S$ and $\mathscr{A} \subseteq \mathscr T$, then clearly $\bigcup_{j \in J} A_j \in \mathscr T$ for every $J \subseteq I$. Hence $\mathscr S \subseteq \mathscr T$.
A $\sigma$-algebra of this form is said to be generated by a countable partition. Note that since $A_i \ne \emptyset$ for $i \in I$, the representation of a set in $\sigma(\mathscr{A})$ as a union of sets in $\mathscr{A}$ is unique. That is, if $J, \, K \subseteq I$ and $J \ne K$ then $\bigcup_{j \in J} A_j \ne \bigcup_{k \in K} A_k$. In particular, if there are $n$ nonempty sets in $\mathscr{A}$, so that $\#(I) = n$, then there are $2^n$ subsets of $I$ and hence $2^n$ sets in $\sigma(\mathscr{A})$.
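For a finite partition, the generated $\sigma$-algebra can be enumerated directly, which makes the $2^n$ count concrete. A minimal Python sketch (the function name and example partition are our choices):

```python
from itertools import combinations

def sigma_from_partition(partition):
    """The sigma-algebra generated by a finite partition: all unions of blocks."""
    blocks = [frozenset(b) for b in partition]
    return [frozenset().union(*choice)
            for r in range(len(blocks) + 1)
            for choice in combinations(blocks, r)]

partition = [{1, 2}, {3}, {4, 5, 6}]    # n = 3 nonempty blocks of S = {1, ..., 6}
sigma = sigma_from_partition(partition)
print(len(sigma))                        # 2^3 = 8 sets
for B in sorted(sigma, key=len):
    print(sorted(B))
```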
Suppose now that $\mathscr{A} = \{A_1, A_2, \ldots, A_n\}$ is a collection of $n$ subsets of $S$ (not necessarily disjoint). To describe the $\sigma$-algebra generated by $\mathscr{A}$ we need a bit more notation. For $x = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ (a bit string of length $n$), let $B_x = \bigcap_{i=1}^n A_i^{x_i}$ where $A_i^1 = A_i$ and $A_i^0 = A_i^c$.
In the setting above,
1. $\mathscr B = \{B_x: x \in \{0, 1\}^n\}$ partitions $S$.
2. $A_i = \bigcup\left\{B_x: x \in \{0, 1\}^n, \; x_i = 1\right\}$ for $i \in \{1, 2, \ldots, n\}$.
3. $\sigma(\mathscr{A}) = \sigma(\mathscr B) = \left\{\bigcup_{x \in J} B_x: J \subseteq \{0, 1\}^n\right\}$.
Proof
1. Suppose that $x, \; y \in \{0, 1\}^n$ and that $x \ne y$. Without loss of generality we can suppose that for some $j \in \{1, 2, \ldots, n\}$, $x_j = 0$ while $y_j = 1$. Then $B_x \subseteq A_j^c$ and $B_y \subseteq A_j$ so $B_x$ and $B_y$ are disjoint. Suppose that $s \in S$. Construct $x \in \{0, 1\}^n$ by $x_i = 1$ if $s \in A_i$ and $x_i = 0$ if $s \notin A_i$, for each $i \in \{1, 2, \ldots, n\}$. Then by definition, $s \in B_x$. Hence $\mathscr B$ partitions $S$.
2. Fix $i \in \{1, 2, \ldots, n\}$. Again if $x \in \{0, 1\}^n$ and $x_i = 1$ then $B_x \subseteq A_i$. Hence $\bigcup\left\{B_x: x \in \{0, 1\}^n, \; x_i = 1\right\} \subseteq A_i$. Conversely, suppose $s \in A_i$. Define $y \in \{0, 1\}^n$ by $y_j = 1$ if $s \in A_j$ and $y_j = 0$ if $s \notin A_j$ for each $j \in \{1, 2, \ldots, n\}$. Then $y_i = 1$ and $s \in B_y$. Hence $s \in \bigcup\left\{B_x: x \in \{0, 1\}^n, \; x_i = 1\right\}$.
3. Clearly, every $\sigma$-algebra of subsets of $S$ that contains $\mathscr{A}$ must also contain $\mathscr B$, and every $\sigma$-algebra of subsets of $S$ that contains $\mathscr B$ must also contain $\mathscr{A}$. It follows that $\sigma(\mathscr{A}) = \sigma(\mathscr B)$. The characterization in terms of unions now follows from the previous result.
Recall that there are $2^n$ bit strings of length $n$. The sets in $\mathscr{A}$ are said to be in general position if the sets in $\mathscr B$ are distinct (and hence there are $2^n$ of them) and are nonempty. In this case, there are $2^{2^n}$ sets in $\sigma(\mathscr{A})$.
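The atoms $B_x$ can likewise be computed for explicit sets, which also checks whether the sets are in general position. A short sketch (the universe and the sets are our choices):

```python
from itertools import product

def atoms(S, sets):
    """The sets B_x: for each bit string x, intersect A_i when x_i = 1 and the
    complement of A_i when x_i = 0; empty atoms are dropped."""
    S = frozenset(S)
    result = {}
    for x in product((0, 1), repeat=len(sets)):
        B = S
        for bit, A in zip(x, sets):
            B = B & frozenset(A) if bit else B - frozenset(A)
        if B:
            result[x] = B
    return result

A1, A2 = {1, 2, 3, 4}, {3, 4, 5, 6}
for x, B in sorted(atoms(range(1, 9), [A1, A2]).items()):
    print(x, sorted(B))
# all 2^2 = 4 atoms are nonempty, so A1 and A2 are in general position,
# and sigma{A1, A2} has 2^(2^2) = 16 sets
```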
Open the Venn diagram app. This app shows two subsets $A$ and $B$ of $S$ in general position, and lists the 16 sets in $\sigma\{A, B\}$.
1. Select each of the 4 sets that partition $S$: $A \cap B$, $A \cap B^c$, $A^c \cap B$, $A^c \cap B^c$.
2. Select each of the other 12 sets in $\sigma\{A, B\}$ and note how each is a union of some of the sets in (a).
Sketch a Venn diagram with sets $A_1, \, A_2, \, A_3$ in general position. Identify the set $B_x$ for each $x \in \{0, 1\}^3$.
If a $\sigma$-algebra is generated by a collection of basic sets, then each set in the $\sigma$-algebra is generated by a countable number of the basic sets.
Suppose that $S$ is a set and $\mathscr B$ a nonempty collection of subsets of $S$. Then
$\sigma(\mathscr B) = \{A \subseteq S: A \in \sigma(\mathscr{C}) \text{ for some countable } \mathscr{C} \subseteq \mathscr B\}$
Proof
Let $\mathscr S$ denote the collection on the right. We first show that $\mathscr S$ is a $\sigma$-algebra. First, pick $B \in \mathscr B$, which we can do since $\mathscr B$ is nonempty. Then $S \in \sigma\{B\}$ so $S \in \mathscr S$. Let $A \in \mathscr S$ so that $A \in \sigma(\mathscr{C})$ for some countable $\mathscr{C} \subseteq \mathscr B$. Then $A^c \in \sigma(\mathscr{C})$ so $A^c \in \mathscr S$. Finally, suppose that $A_i \in \mathscr S$ for $i$ in a countable index set $I$. Then for each $i \in I$, there exists a countable $\mathscr{C}_i \subseteq \mathscr B$ such that $A_i \in \sigma(\mathscr{C}_i)$. But then $\bigcup_{i \in I} \mathscr{C}_i$ is also countable and $\bigcup_{i \in I} A_i \in \sigma\left(\bigcup_{i \in I} \mathscr{C}_i \right)$. Hence $\bigcup_{i \in I} A_i \in \mathscr S$.
Next if $B \in \mathscr B$ then $B \in \sigma\{B\}$ so $B \in \mathscr S$. Hence $\sigma(\mathscr B) \subseteq \mathscr S$. Conversely, if $A \in \sigma(\mathscr{C})$ for some countable $\mathscr{C} \subseteq \mathscr B$ then trivially $A \in \sigma(\mathscr B)$.
A $\sigma$-algebra on a set naturally leads to a $\sigma$-algebra on a subset.
Suppose that $(S, \mathscr S)$ is a measurable space, and that $R \subseteq S$. Let $\mathscr{R} = \{A \cap R: A \in \mathscr S\}$. Then
1. $\mathscr{R}$ is a $\sigma$-algebra of subsets of $R$.
2. If $R \in \mathscr S$ then $\mathscr{R} = \{B \in \mathscr S: B \subseteq R\}$.
Proof
1. First, $S \in \mathscr S$ and $S \cap R = R$ so $R \in \mathscr{R}$. Next suppose that $B \in \mathscr{R}$. Then there exists $A \in \mathscr S$ such that $B = A \cap R$. But then $A^c \in \mathscr S$ and $R \setminus B = R \cap B^c = R \cap A^c$, so $R \setminus B \in \mathscr{R}$. Finally, suppose that $B_i \in \mathscr{R}$ for $i$ in a countable index set $I$. For each $i \in I$ there exists $A_i \in \mathscr S$ such that $B_i = A_i \cap R$. But then $\bigcup_{i \in I} A_i \in \mathscr S$ and $\bigcup_{i \in I} B_i = \left(\bigcup_{i \in I} A_i \right) \cap R$, so $\bigcup_{i \in I} B_i \in \mathscr{R}$.
2. Suppose that $R \in \mathscr S$. Then $A \cap R \in \mathscr S$ for every $A \in \mathscr S$, and of course, $A \cap R \subseteq R$. Conversely, if $B \in \mathscr S$ and $B \subseteq R$ then $B = B \cap R$ so $B \in \mathscr{R}$
The $\sigma$-algebra $\mathscr{R}$ is the $\sigma$-algebra on $R$ induced by $\mathscr S$. The following construction is useful for counterexamples. Compare this example with the one for finite and co-finite sets.
Let $S$ be a nonempty set. The collection of countable and co-countable subsets of $S$ is $\mathscr{C} = \{A \subseteq S: A \text{ is countable or } A^c \text{ is countable}\}$
1. $\mathscr{C}$ is a $\sigma$-algebra
2. $\mathscr{C} = \sigma\{\{x\}: x \in S\}$, the $\sigma$-algebra generated by the singleton sets.
Proof
1. First, $S \in \mathscr{C}$ since $S^c = \emptyset$ is countable. If $A \in \mathscr{C}$ then $A^c \in \mathscr{C}$ by the symmetry of the definition. Suppose that $A_i \in \mathscr{C}$ for each $i$ in a countable index set $I$. If $A_i$ is countable for each $i \in I$ then $\bigcup_{i \in I} A_i$ is countable. If $A_j^c$ is countable for some $j \in I$ then $\left(\bigcup_{i \in I} A_i \right)^c = \bigcap_{i \in I} A_i^c \subseteq A_j^c$ is countable. In either case, $\bigcup_{i \in I} A_i \in \mathscr{C}$.
2. Let $\mathscr{D} = \sigma\{\{x\}: x \in S\}$. Clearly $\{x\} \in \mathscr{C}$ for $x \in S$. Hence $\mathscr{D} \subseteq \mathscr{C}$. Conversely, suppose that $A \in \mathscr{C}$. If $A$ is countable, then $A = \bigcup_{x \in A} \{x\} \in \mathscr{D}$. If $A^c$ is countable, then by an identical argument, $A^c \in \mathscr{D}$ and hence $A \in \mathscr{D}$.
Of course, if $S$ is itself countable then $\mathscr{C} = \mathscr{P}(S)$. On the other hand, if $S$ is uncountable, then there exists $A \subseteq S$ such that $A$ and $A^c$ are uncountable. Thus, $A \notin \mathscr{C}$, but $A = \bigcup_{x \in A} \{x\}$, and of course $\{x\} \in \mathscr{C}$. Thus, we have an example of a $\sigma$-algebra that is not closed under general unions.
Topology and Measure
One of the most important ways to generate a $\sigma$-algebra is by means of topology. Recall that a topological space consists of a set $S$ and a topology $\mathscr S$, the collection of open subsets of $S$. Most spaces that occur in probability and stochastic processes are topological spaces, so it's crucial that the topological and measure-theoretic structures are compatible.
Suppose that $(S, \mathscr S)$ is a topological space. Then $\sigma(\mathscr S)$ is the Borel $\sigma$-algebra on $S$, and $(S, \sigma(\mathscr S))$ is a Borel measurable space.
So the Borel $\sigma$-algebra on $S$, named for Émile Borel is generated by the open subsets of $S$. Thus, a topological space $(S, \mathscr S)$ naturally leads to a measurable space $(S, \sigma(\mathscr S))$. Since a closed set is simply the complement of an open set, the Borel $\sigma$-algebra contains the closed sets as well (and in fact is generated by the closed sets). Here are some other sets that are in the Borel $\sigma$-algebra:
Suppose again that $(S, \mathscr S)$ is a topological space and that $I$ is a countable index set.
1. If $A_i$ is open for each $i \in I$ then $\bigcap_{i \in I} A_i \in \sigma(\mathscr S)$. Such sets are called $G_\delta$ sets.
2. If $A_i$ is closed for each $i \in I$ then $\bigcup_{i \in I} A_i \in \sigma(\mathscr S)$. Such sets are called $F_\sigma$ sets.
3. If $(S, \mathscr S)$ is Hausdorff then $\{x\} \in \sigma(\mathscr S)$ for every $x \in S$.
Proof
1. This follows directly from the closure property for intersections.
2. This follows from the definition.
3. This follows since $\{x\}$ is closed for each $x \in S$ if the topology is Hausdorff.
In terms of part (c), recall that a topological space is Hausdorff, named for Felix Hausdorff, if the topology can distinguish individual points. Specifically, if $x, \, y \in S$ are distinct then there exist disjoint open sets $U, \, V$ with $x \in U$ and $y \in V$. This is a very basic property possessed by almost all topological spaces that occur in applications. A simple corollary of (c) is that if the topological space $(S, \mathscr S)$ is Hausdorff then $A \in \sigma(\mathscr S)$ for every countable $A \subseteq S$.
Let's note the extreme cases. If $S$ has the discrete topology $\mathscr{P}(S)$, so that every set is open (and closed), then of course the Borel $\sigma$-algebra is also $\mathscr{P}(S)$. As noted above, this is often the appropriate $\sigma$-algebra if $S$ is countable, but is often too large if $S$ is uncountable. If $S$ has the trivial topology $\{S, \emptyset\}$, then the Borel $\sigma$-algebra is also $\{S, \emptyset\}$, and so is also trivial.
Recall that a base for a topological space $(S, \mathscr T)$ is a collection $\mathscr B \subseteq \mathscr T$ with the property that every set in $\mathscr T$ is a union of a collection of sets in $\mathscr B$. In short, every open set is a union of some of the basic open sets.
Suppose that $(S, \mathscr S)$ is a topological space with a countable base $\mathscr B$. Then $\sigma(\mathscr B) = \sigma(\mathscr S)$.
Proof
Since $\mathscr B \subseteq \mathscr S$ it follows trivially that $\sigma(\mathscr B) \subseteq \sigma(\mathscr S)$. Conversely, if $U \in \mathscr S$, there exists a collection of sets in $\mathscr B$ whose union is $U$. Since $\mathscr B$ is countable, $U$ is a countable union of sets in $\mathscr B$, and hence $U \in \sigma(\mathscr B)$. Therefore $\mathscr S \subseteq \sigma(\mathscr B)$ and so $\sigma(\mathscr S) \subseteq \sigma(\mathscr B)$.
The topological spaces that occur in probability and stochastic processes are usually assumed to have a countable base (along with other nice properties such as the Hausdorff property and local compactness). The $\sigma$-algebra used for such a space is usually the Borel $\sigma$-algebra, which by the previous result, is countably generated.
Measurable Functions
Recall that a set usually comes with a $\sigma$-algebra of admissible subsets. A natural requirement on a function is that the inverse image of an admissible set in the range space be admissible in the domain space. Here is the formal definition.
Suppose that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. A function $f: S \to T$ is measurable if $f^{-1}(A) \in \mathscr S$ for every $A \in \mathscr T$.
If the $\sigma$-algebra in the range space is generated by a collection of basic sets, then to check the measurability of a function, we need only consider inverse images of basic sets:
Suppose again that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces, and that $\mathscr T = \sigma(\mathscr B)$ for a collection of subsets $\mathscr B$ of $T$. Then $f: S \to T$ is measurable if and only if $f^{-1}(B) \in \mathscr S$ for every $B \in \mathscr B$.
Proof
First $\mathscr B \subseteq \mathscr T$, so if $f: S \to T$ is measurable then the condition in the theorem trivially holds. Conversely, suppose that the condition in the theorem holds, and let $\mathscr{U} = \{A \in \mathscr T: f^{-1}(A) \in \mathscr S\}$. Then $T \in \mathscr{U}$ since $f^{-1}(T) = S \in \mathscr S$. If $A \in \mathscr{U}$ then $f^{-1}(A^c) = \left[f^{-1}(A)\right]^c \in \mathscr S$, so $A^c \in \mathscr{U}$. If $A_i \in \mathscr{U}$ for $i$ in a countable index set $I$, then $f^{-1}\left(\bigcup_{i \in I} A_i\right) = \bigcup_{i \in I} f^{-1}(A_i) \in \mathscr S$, and hence $\bigcup_{i \in I} A_i \in \mathscr{U}$. Thus $\mathscr{U}$ is a $\sigma$-algebra of subsets of $T$. But $\mathscr B \subseteq \mathscr{U}$ by assumption, so $\mathscr T = \sigma(\mathscr B) \subseteq \mathscr{U}$. Of course $\mathscr{U} \subseteq \mathscr T$ by definition, so $\mathscr{U} = \mathscr T$ and hence $f$ is measurable.
If you have reviewed the section on topology then you may have noticed a striking parallel between the definition of continuity for functions on topological spaces and the definition of measurability for functions on measurable spaces: A function from one topological space to another is continuous if the inverse image of an open set in the range space is open in the domain space. A function from one measurable space to another is measurable if the inverse image of a measurable set in the range space is measurable in the domain space. If we start with topological spaces, which we often do, and use the Borel $\sigma$-algebras to get measurable spaces, then we get the following (hardly surprising) connection.
Suppose that $(S, \mathscr S)$ and $(T, \mathscr T)$ are topological spaces, and that we give $S$ and $T$ the Borel $\sigma$-algebras $\sigma(\mathscr S)$ and $\sigma(\mathscr T)$ respectively. If $f: S \to T$ is continuous, then $f$ is measurable.
Proof
If $V \in \mathscr T$ then $f^{-1}(V) \in \mathscr S \subseteq \sigma(\mathscr S)$. Hence $f$ is measurable by the previous theorem.
Measurability is preserved under composition, the most important method for combining functions.
Suppose that $(R, \mathscr{R})$, $(S, \mathscr S)$, and $(T, \mathscr T)$ are measurable spaces. If $f: R \to S$ is measurable and $g: S \to T$ is measurable, then $g \circ f: R \to T$ is measurable.
Proof
If $A \in \mathscr T$ then $g^{-1}(A) \in \mathscr S$ since $g$ is measurable, and hence $(g \circ f)^{-1}(A) = f^{-1}\left[g^{-1}(A)\right] \in \mathscr{R}$ since $f$ is measurable.
If $T$ is given the smallest possible $\sigma$-algebra or if $S$ is given the largest one, then any function from $S$ into $T$ is measurable.
Every function $f: S \to T$ is measurable in each of the following cases:
1. $\mathscr T = \{\emptyset, T\}$ and $\mathscr S$ is an arbitrary $\sigma$-algebra of subsets of $S$
2. $\mathscr S = \mathscr{P}(S)$ and $\mathscr T$ is an arbitrary $\sigma$-algebra of subsets of $T$.
Proof
1. Suppose that $\mathscr T = \{\emptyset, T\}$ and that $\mathscr S$ is an arbitrary $\sigma$-algebra on $S$. If $f: S \to T$, then $f^{-1}(T) = S \in \mathscr S$ and $f^{-1}(\emptyset) = \emptyset \in \mathscr S$ so $f$ is measurable.
2. Suppose that $\mathscr S = \mathscr{P}(S)$ and that $\mathscr T$ is an arbitrary $\sigma$-algebra on $T$. If $f: S \to T$, then trivially $f^{-1}(A) \in \mathscr S$ for every $A \in \mathscr T$ so $f$ is measurable.
When there are several $\sigma$-algebras for the same set, we use the phrase with respect to so that we can be precise. If a function is measurable with respect to a given $\sigma$-algebra on its domain, then it's measurable with respect to any larger $\sigma$-algebra on this space. If the function is measurable with respect to a $\sigma$-algebra on the range space, then it's measurable with respect to any smaller $\sigma$-algebra on this space.
Suppose that $S$ has $\sigma$-algebras $\mathscr{R}$ and $\mathscr S$ with $\mathscr{R} \subseteq \mathscr S$, and that $T$ has $\sigma$-algebras $\mathscr T$ and $\mathscr{U}$ with $\mathscr T \subseteq \mathscr{U}$. If $f: S \to T$ is measurable with respect to $\mathscr{R}$ and $\mathscr{U}$, then $f$ is measurable with respect to $\mathscr S$ and $\mathscr T$.
Proof
If $A \in \mathscr T$ then $A \in \mathscr{U}$. Hence $f^{-1}(A) \in \mathscr{R}$ so $f^{-1}(A) \in \mathscr S$.
The following construction is particularly important in probability theory:
Suppose that $S$ is a set and $(T, \mathscr T)$ is a measurable space. Suppose also that $f: S \to T$ and define $\sigma(f) = \left\{f^{-1}(A): A \in \mathscr T\right\}$. Then
1. $\sigma(f)$ is a $\sigma$-algebra on $S$.
2. $\sigma(f)$ is the smallest $\sigma$-algebra on $S$ that makes $f$ measurable.
Proof
1. The key to the proof is that the inverse image preserves all set operations. First, $S \in \sigma(f)$ since $T \in \mathscr T$ and $f^{-1}(T) = S$. If $B \in \sigma(f)$ then $B = f^{-1}(A)$ for some $A \in \mathscr T$. But then $A^c \in \mathscr T$ and hence $B^c = f^{-1}(A^c) \in \sigma(f)$. Finally, suppose that $B_i \in \sigma(f)$ for $i$ in a countable index set $I$. Then for each $i \in I$ there exists $A_i \in \mathscr T$ such that $B_i = f^{-1}(A_i)$. But then $\bigcup_{i \in I} A_i \in \mathscr T$ and $\bigcup_{i \in I} B_i = f^{-1}\left(\bigcup_{i \in I} A_i \right)$. Hence $\bigcup_{i \in I} B_i \in \sigma(f)$.
2. If $\mathscr S$ is a $\sigma$-algebra on $S$ and $f$ is measurable with respect to $\mathscr S$ and $\mathscr T$, then by definition $f^{-1}(A) \in \mathscr S$ for every $A \in \mathscr T$, so $\sigma(f) \subseteq \mathscr S$.
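When $S$ and $T$ are finite and $\mathscr T = \mathscr{P}(T)$, the $\sigma$-algebra $\sigma(f)$ can be listed exhaustively. A minimal sketch (the die-parity example is our choice): enumerate the inverse images of all subsets of $T$.

```python
from itertools import combinations

def sigma_of_f(S, T, f):
    """sigma(f) = { f^{-1}(A) : A subset of T }, for finite S and T with the
    power set of T as the sigma-algebra on the range space."""
    T = list(T)
    sigma = set()
    for r in range(len(T) + 1):
        for A in combinations(T, r):
            sigma.add(frozenset(x for x in S if f(x) in set(A)))
    return sigma

S = [1, 2, 3, 4, 5, 6]                  # outcomes of a standard die
f = lambda x: x % 2                     # parity, mapping into T = {0, 1}
for B in sorted(sigma_of_f(S, {0, 1}, f), key=len):
    print(sorted(B))
# empty set, evens, odds, and all of S: the events that parity can distinguish
```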
Appropriately enough, $\sigma(f)$ is called the $\sigma$-algebra generated by $f$. Often, $S$ will have a given $\sigma$-algebra $\mathscr S$ and $f$ will be measurable with respect to $\mathscr S$ and $\mathscr T$. In this case, $\sigma(f) \subseteq \mathscr S$. We can generalize to an arbitrary collection of functions on $S$.
Suppose $S$ is a set and that $(T_i, \mathscr T_i)$ is a measurable space for each $i$ in a nonempty index set $I$. Suppose also that $f_i: S \to T_i$ for each $i \in I$. The $\sigma$-algebra generated by this collection of functions is $\sigma\left\{f_i: i \in I\right\} = \sigma\left\{\sigma(f_i): i \in I\right\} = \sigma\left\{f_i^{-1}(A): i \in I, \, A \in \mathscr T_i\right\}$
Again, this is the smallest $\sigma$-algebra on $S$ that makes $f_i$ measurable for each $i \in I$.
Product Sets
Product sets arise naturally in the form of the higher-dimensional Euclidean spaces $\R^n$ for $n \in \{2, 3, \ldots\}$. In addition, product spaces are particularly important in probability, where they are used to describe the spaces associated with sequences of random variables. More general product spaces arise in the study of stochastic processes. We start with the product of two sets; the generalization to products of $n$ sets and to general products is straightforward, although the notation gets more complicated.
Suppose that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. The product $\sigma$-algebra on $S \times T$ is $\mathscr S \otimes \mathscr T = \sigma\{A \times B: A \in \mathscr S, \; B \in \mathscr T\}$
So the definition is natural: the product $\sigma$-algebra is generated by products of measurable sets. Our next goal is to consider the measurability of functions defined on, or mapping into, product spaces. Of basic importance are the projection functions. If $S$ and $T$ are sets, let $p_1: S \times T \to S$ and $p_2: S \times T \to T$ be defined by $p_1(x, y) = x$ and $p_2(x, y) = y$ for $(x, y) \in S \times T$. Recall that $p_1$ is the projection onto the first coordinate and $p_2$ is the projection onto the second coordinate. The product $\sigma$-algebra is the smallest $\sigma$-algebra that makes the projections measurable:
Suppose again that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. Then $\mathscr S \otimes \mathscr T = \sigma\{p_1, p_2\}$.
Proof
If $A \in \mathscr S$ then $p_1^{-1}(A) = A \times T \in \mathscr S \otimes \mathscr T$. Similarly, if $B \in \mathscr T$ then $p_2^{-1}(B) = S \times B \in \mathscr S \otimes \mathscr T$. Hence $p_1$ and $p_2$ are measurable, so $\sigma\{p_1, p_2\} \subseteq \mathscr S \otimes \mathscr T$. Conversely, if $A \in \mathscr S$ and $B \in \mathscr T$ then $A \times B = p_1^{-1}(A) \cap p_2^{-1}(B) \in \sigma\{p_1, p_2\}$. Since sets of this form generate the product $\sigma$-algebra, we have $\mathscr S \otimes \mathscr T \subseteq \sigma\{p_1, p_2\}$.
Projection functions make it easy to study functions mapping into a product space.
Suppose that $(R, \mathscr{R})$, $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces, and that $S \times T$ is given the product $\sigma$-algebra $\mathscr S \otimes \mathscr T$. Suppose also that $f: R \to S \times T$, so that $f(x) = \left(f_1(x), f_2(x)\right)$ for $x \in R$, where $f_1: R \to S$ and $f_2: R \to T$ are the coordinate functions. Then $f$ is measurable if and only if $f_1$ and $f_2$ are measurable.
Proof
Note that $f_1 = p_1 \circ f$ and $f_2 = p_2 \circ f$. So if $f$ is measurable then $f_1$ and $f_2$ are compositions of measurable functions, and hence are measurable. Conversely, suppose that $f_1$ and $f_2$ are measurable. If $A \in \mathscr S$ and $B \in \mathscr T$ then $f^{-1}(A \times B) = f_1^{-1}(A) \cap f_2^{-1}(B) \in \mathscr{R}$. Since products of measurable sets generate $\mathscr S \otimes \mathscr T$, it follows that $f$ is measurable.
Our next goal is to consider cross sections of sets in a product space and cross sections of functions defined on a product space. It will help to introduce some new functions, which in a sense are complementary to the projection functions.
Suppose again that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces, and that $S \times T$ is given the product $\sigma$-algebra $\mathscr S \otimes \mathscr T$.
1. For $x \in S$ the function $1_x : T \to S \times T$, defined by $1_x(y) = (x, y)$ for $y \in T$, is measurable.
2. For $y \in T$ the function $2_y: S \to S \times T$, defined by $2_y(x) = (x, y)$ for $x \in S$, is measurable.
Proof
To show that the functions are measurable, it suffices to consider inverse images of products of measurable sets, since such sets generate $\mathscr S \otimes \mathscr T$. Thus, let $A \in \mathscr S$ and $B \in \mathscr T$.
1. For $x \in S$ note that $1_x^{-1}(A \times B)$ is $B$ if $x \in A$ and is $\emptyset$ if $x \notin A$. In either case, $1_x^{-1}(A \times B) \in \mathscr T$.
2. Similarly, for $y \in T$ note that $2_y^{-1}(A \times B)$ is $A$ if $y \in B$ and is $\emptyset$ if $y \notin B$. In either case, $2_y^{-1}(A \times B) \in \mathscr S$.
Now our work is easy.
Suppose again that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces, and that $C \in \mathscr S \otimes \mathscr T$. Then
1. For $x \in S$, $\{y \in T: (x, y) \in C\} \in \mathscr T$.
2. For $y \in T$, $\{x \in S: (x, y) \in C\} \in \mathscr S$.
Proof
These results follow immediately from the measurability of the functions $1_x$ and $2_y$:
1. For $x \in S$, $1_x^{-1}(C) = \{y \in T: (x, y) \in C\}$.
2. For $y \in T$, $2_y^{-1}(C) = \{x \in S: (x, y) \in C\}$.
The set in (a) is the cross section of $C$ in the first coordinate at $x$, and the set in (b) is the cross section of $C$ in the second coordinate at $y$. As a simple corollary to the theorem, note that if $A \subseteq S$, $B \subseteq T$ and $A \times B \in \mathscr S \otimes \mathscr T$ then $A \in \mathscr S$ and $B \in \mathscr T$. That is, the only measurable product sets are products of measurable sets. Here is the measurability result for cross-sectional functions:
Suppose again that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces, and that $S \times T$ is given the product $\sigma$-algebra $\mathscr S \otimes \mathscr T$. Suppose also that $(U, \mathscr{U})$ is another measurable space, and that $f: S \times T \to U$ is measurable. Then
1. The function $y \mapsto f(x, y)$ from $T$ to $U$ is measurable for each $x \in S$.
2. The function $x \mapsto f(x, y)$ from $S$ to $U$ is measurable for each $y \in T$.
Proof
Note that the function in (a) is just $f \circ 1_x$, and the function in (b) is just $f \circ 2_y$; both are compositions of measurable functions.
The results for products of two spaces generalize in a completely straightforward way to a product of $n$ spaces.
Suppose $n \in \N_+$ and that $(S_i, \mathscr S_i)$ is a measurable space for each $i \in \{1, 2, \ldots, n\}$. The product $\sigma$-algebra on the Cartesian product set $S_1 \times S_2 \times \cdots \times S_n$ is $\mathscr S_1 \otimes \mathscr S_2 \otimes \cdots \otimes \mathscr S_n = \sigma\left\{ A_1 \times A_2 \times \cdots \times A_n: A_i \in \mathscr S_i \text{ for all } i \in \{1, 2, \ldots, n\}\right\}$
So again, the product $\sigma$-algebra is generated by products of measurable sets. Results analogous to the theorems above hold. In the special case that $(S_i, \mathscr S_i) = (S, \mathscr S)$ for $i \in \{1, 2, \ldots, n\}$, the Cartesian product becomes $S^n$ and the corresponding product $\sigma$-algebra is denoted $\mathscr S^n$. The notation is natural, but potentially confusing. Note that $\mathscr S^n$ is not the Cartesian product of $\mathscr S$ $n$ times, but rather the $\sigma$-algebra generated by sets of the form $A_1 \times A_2 \times \cdots \times A_n$ where $A_i \in \mathscr S$ for $i \in \{1, 2, \ldots, n\}$.
We can also extend these ideas to a general product. To recall the definition, suppose that $S_i$ is a set for each $i$ in a nonempty index set $I$. The product set $\prod_{i \in I} S_i$ consists of all functions $x: I \to \bigcup_{i \in I} S_i$ such that $x(i) \in S_i$ for each $i \in I$. To make the notation look more like a simple Cartesian product, we often write $x_i$ instead of $x(i)$ for the value of a function in the product set at $i \in I$. The next definition gives the appropriate $\sigma$-algebra for the product set.
Suppose that $(S_i, \mathscr S_i)$ is a measurable space for each $i$ in a nonempty index set $I$. The product $\sigma$-algebra on the product set $\prod_{i \in I} S_i$ is $\sigma\left\{\prod_{i \in I} A_i: A_i \in \mathscr S_i \text{ for each } i \in I \text{ and } A_i = S_i \text{ for all but finitely many } i \in I \right\}$
If you have reviewed the section on topology, the definition should look familiar. If the spaces were topological spaces instead of measurable spaces, with $\mathscr S_i$ the topology of $S_i$ for $i \in I$, then the set of products in the displayed expression above is a base for the product topology on $\prod_{i \in I} S_i$.
The definition can also be understood in terms of projections. Recall that the projection onto coordinate $j \in I$ is the function $p_j: \prod_{i \in I} S_i \to S_j$ given by $p_j(x) = x_j$. The product $\sigma$-algebra is the smallest $\sigma$-algebra on the product set that makes all of the projections measurable.
Suppose again that $(S_i, \mathscr S_i)$ is a measurable space for each $i$ in a nonempty index set $I$, and let $\mathfrak{S}$ denote the product $\sigma$-algebra on the product set $S_I = \prod_{i \in I} S_i$. Then $\mathfrak{S} = \sigma\{p_i: i \in I\}$.
Proof
Let $j \in I$ and $A \in \mathscr S_j$. Then $p_j^{-1}(A) = \prod_{i \in I} A_i$ where $A_i = S_i$ for $i \ne j$ and $A_j = A$. This set is in $\mathfrak{S}$ so $p_j$ is measurable. Hence $\sigma\{p_i: i \in I\} \subseteq \mathfrak{S}$. For the other direction, consider a product set $\prod_{i \in I} A_i$ where $A_i = S_i$ except for $i \in J$, where $J \subseteq I$ is finite. Then $\prod_{i \in I} A_i = \bigcap_{j \in J} p_j^{-1}(A_j)$. This set is in $\sigma\{p_i: i \in I\}$. Product sets of this form generate $\mathfrak{S}$ so it follows that $\mathfrak{S} \subseteq \sigma\{p_i: i \in I\}$.
In the special case that $(S, \mathscr S)$ is a fixed measurable space and $(S_i, \mathscr S_i) = (S, \mathscr S)$ for all $i \in I$, the product set $\prod_{i \in I} S$ is just the collection of functions from $I$ into $S$, often denoted $S^I$. The product $\sigma$-algebra is then denoted $\mathscr S^I$, a notation that is natural, but again potentially confusing. Here is the main measurability result for a function mapping into a product space.
Suppose that $(R, \mathscr{R})$ is a measurable space, and that $(S_i, \mathscr S_i)$ is a measurable space for each $i$ in a nonempty index set $I$. As before, let $\prod_{i \in I} S_i$ have the product $\sigma$-algebra. Suppose now that $f: R \to \prod_{i \in I} S_i$. For $i \in I$ let $f_i: R \to S_i$ denote the $i$th coordinate function of $f$, so that $f_i(x) = [f(x)]_i$ for $x \in R$. Then $f$ is measurable if and only if $f_i$ is measurable for each $i \in I$.
Proof
Suppose that $f$ is measurable. For $i \in I$ note that $f_i = p_i \circ f$ is a composition of measurable functions, and hence is measurable. Conversely, suppose that $f_i$ is measurable for each $i \in I$. To show the measurability of $f$, we need only consider inverse images of sets that generate the product $\sigma$-algebra. Thus, suppose that $A_j \in \mathscr S_j$ for $j$ in a finite subset $J \subseteq I$, and let $A_i = S_i$ for $i \in I - J$. Then $f^{-1}\left(\prod_{i \in I} A_i\right) = \bigcap_{j \in J} f_j^{-1}(A_j)$. This set is in $\mathscr{R}$ since the intersection is over a finite index set.
Just as with the product of two sets, cross-sectional sets and functions are measurable with respect to the product $\sigma$-algebra. Again, it's best to work with some special functions.
Suppose that $(S_i, \mathscr S_i)$ is a measurable space for each $i$ in an index set $I$ with at least two elements. For $j \in I$ and $u \in S_j$, define the function $j_u: \prod_{i \in I - \{j\}} S_i \to \prod_{i \in I} S_i$ by $j_u(x) = y$ where $y_i = x_i$ for $i \ne j$ and $y_j = u$. Then $j_u$ is measurable with respect to the product $\sigma$-algebras.
Proof
Once again, it suffices to consider the inverse image of the sets that generate the product $\sigma$-algebra. So suppose $A_i \in \mathscr S_i$ for $i \in I$ with $A_i = S_i$ for all but finitely many $i \in I$. Then $j_u^{-1}\left(\prod_{i \in I} A_i\right) = \prod_{i \in I - \{j\}} A_i$ if $u \in A_j$, and the inverse image is $\emptyset$ otherwise. In either case, $j_u^{-1}\left(\prod_{i \in I} A_i\right)$ is in the product $\sigma$-algebra on $\prod_{i \in I - \{j\}} S_i$.
In words, for $j \in I$ and $u \in S_j$, the function $j_u$ takes a point in the product set $\prod_{ i \in I - \{j\}} S_i$ and assigns $u$ to coordinate $j$ to give a point in $\prod_{i \in I} S_i$. If $A \subseteq \prod_{i \in I} S_i$, then $j_u^{-1}(A)$ is the cross section of $A$ in coordinate $j$ at $u$. So it follows immediately from the previous result that the cross sections of a measurable set are measurable. Cross sections of measurable functions are also measurable. Suppose that $(T, \mathscr T)$ is another measurable space, and that $f: \prod_{i \in I} S_i \to T$ is measurable. The cross section of $f$ in coordinate $j \in I$ at $u \in S_j$ is simply $f \circ j_u: S_{I - \{j\}} \to T$, a composition of measurable functions.
However, a non-measurable set can have measurable cross sections, even in a product of two spaces.
Suppose that $S$ is an uncountable set with the $\sigma$-algebra $\mathscr{C}$ of countable and co-countable sets as in (21). Consider $S \times S$ with the product $\sigma$-algebra $\mathscr{C} \otimes \mathscr{C}$. Let $D = \{(x, x): x \in S\}$, the diagonal of $S \times S$. Then $D$ has measurable cross sections, but $D$ is not measurable.
Proof
For $x \in S$, the cross section of $D$ in the first coordinate at $x$ is $\{y \in S: (x, y) \in D\} = \{x\} \in \mathscr{C}$. Similarly, for $y \in S$, the cross section of $D$ in the second coordinate at $y$ is $\{x \in S: (x, y) \in D\} = \{ y\} \in \mathscr{C}$. But $D$ cannot be generated by a countable collection of sets of the form $A \times B$ with $A, \, B \in \mathscr{C}$, so $D \notin \mathscr{C} \otimes \mathscr{C}$, by the result above.
Special Cases
Most of the sets encountered in applied probability are either countable, or subsets of $\R^n$ for some $n$, or more generally, subsets of a product of a countable number of sets of these types. In the study of stochastic processes, various spaces of functions play an important role. In this subsection, we will explore the most important special cases.
Discrete Spaces
If $S$ is countable and $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$, then $(S, \mathscr S)$ is a discrete measurable space.
Thus if $(S, \mathscr S)$ is discrete, all subsets of $S$ are measurable and every function from $S$ to another measurable space is measurable. The power set is also the discrete topology on $S$, so $\mathscr S$ is a Borel $\sigma$-algebra as well. As a topological space, $(S, \mathscr S)$ is complete, locally compact, Hausdorff, and since $S$ is countable, separable. Moreover, the discrete topology corresponds to the discrete metric $d$, defined by $d(x, x) = 0$ for $x \in S$ and $d(x, y) = 1$ for $x, \, y \in S$ with $x \ne y$.
Euclidean Spaces
Recall that for $n \in \N_+$, the Euclidean topology on $\R^n$ is generated by the standard Euclidean metric $d_n$ given by $d_n(\bs x, \bs y) = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}, \quad \bs x = (x_1, x_2, \ldots, x_n), \, \bs y = (y_1, y_2, \ldots, y_n) \in \R^n$ With this topology, $\R^n$ is complete, connected, locally compact, Hausdorff, and separable.
For $n \in \N_+$, the $n$-dimensional Euclidean measurable space is $(\R^n, \mathscr R_n)$ where $\mathscr R_n$ is the Borel $\sigma$-algebra corresponding to the standard Euclidean topology on $\R^n$.
The one-dimensional case is particularly important. In this case, the standard Euclidean metric $d$ is given by $d(x, y) = \left|x - y\right|$ for $x, \, y \in \R$. The Borel $\sigma$-algebra $\mathscr R$ can be generated by various collections of intervals.
Each of the following collections generates $\mathscr R$.
1. $\mathscr B_1 = \{I \subseteq \R: I \text{ is an interval} \}$
2. $\mathscr B_2 = \{(a, b]: a, \, b \in \R, \; a \lt b \}$
3. $\mathscr B_3 = \{(-\infty, b]: b \in \R \}$
Proof
The proof involves showing that each set in any one of the collections is in the $\sigma$-algebra of any other collection. Let $\mathscr S_i = \sigma(\mathscr B_i)$ for $i \in \{1, 2, 3\}$.
1. Clearly $\mathscr B_2 \subseteq \mathscr B_1$ and $\mathscr B_3 \subseteq \mathscr B_1$ so $\mathscr S_2 \subseteq \mathscr S_1$ and $\mathscr S_3 \subseteq \mathscr S_1$.
2. If $a, \, b \in \R$ with $a \le b$ then $[a, b] = \bigcap_{n=1}^\infty \left(a - \frac{1}{n}, b\right]$ and $(a, b) = \bigcup_{n=1}^\infty \left(a, b - \frac{1}{n}\right]$, so $[a, b], \, (a, b) \in \mathscr S_2$. Also $[a, b) = \bigcup_{n=1}^\infty \left[a, b - \frac{1}{n}\right]$ so $[a, b) \in \mathscr S_2$. Thus all bounded intervals are in $\mathscr S_2$. Next, $[a, \infty) = \bigcup_{n=1}^\infty [a, a + n)$, $(a, \infty) = \bigcup_{n=1}^\infty (a, a + n)$, $(-\infty, a] = \bigcup_{n=1}^\infty (a - n, a]$, and $(-\infty, a) = \bigcup_{n=1}^\infty (a - n, a)$, so each of these intervals is in $\mathscr S_2$. Of course $\R \in \mathscr S_2$, so we now have that $I \in \mathscr S_2$ for every interval $I$. Thus $\mathscr S_1 \subseteq \mathscr S_2$, and so from (a), $\mathscr S_2 = \mathscr S_1$.
3. If $a, \, b \in \R$ with $a \lt b$ then $(a, b] = (-\infty, b] - (-\infty, a]$ so $(a, b] \in \mathscr S_3$. Hence $\mathscr S_2 \subseteq \mathscr S_3$. But then from (a) and (b) it follows that $\mathscr S_3 = \mathscr S_1$.
Since the Euclidean topology has a countable base, $\mathscr R$ is countably generated. In fact each collection of intervals above, but with endpoints restricted to $\Q$, generates $\mathscr R$. Moreover, $\mathscr R$ can also be constructed from $\sigma$-algebras that are generated by countable partitions. First recall that for $n \in \N$, the set of dyadic rationals (or binary rationals) of rank $n$ or less is $\D_n = \{j / 2^n: j \in \Z\}$. Note that $\D_n$ is countable and $\D_n \subseteq \D_{n+1}$ for $n \in \N$. Moreover, the set $\D = \bigcup_{n \in \N} \D_n$ of all dyadic rationals is dense in $\R$. The dyadic rationals are often useful in various applications because $\D_n$ has the natural ordered enumeration $j \mapsto j / 2^n$ for each $n \in \N$. Now let $\mathscr{D}_n = \left\{\left(j / 2^n, (j + 1) / 2^n\right]: j \in \Z\right\}, \quad n \in \N$ Then $\mathscr{D}_n$ is a countable partition of $\R$ into nonempty intervals of equal size $1 / 2^n$, so $\mathscr{E}_n = \sigma(\mathscr{D}_n)$ consists of unions of sets in $\mathscr{D}_n$ as described above. Every set in $\mathscr{D}_n$ is the union of two sets in $\mathscr{D}_{n+1}$, so clearly $\mathscr{E}_n \subseteq \mathscr{E}_{n+1}$ for $n \in \N$. Finally, the Borel $\sigma$-algebra on $\R$ is $\mathscr{R} = \sigma\left(\bigcup_{n=0}^\infty \mathscr{E}_n\right) = \sigma\left(\bigcup_{n=0}^\infty \mathscr{D}_n\right)$. This construction turns out to be useful in a number of settings.
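To make the dyadic partitions concrete, the rank-$n$ interval containing a given point $x$ is easy to compute, since $x \in (j / 2^n, (j + 1) / 2^n]$ exactly when $j = \lceil x 2^n \rceil - 1$. A short Python sketch (the helper name is ours, and the arithmetic is exact only up to floating point at the interval endpoints):

```python
import math

def dyadic_interval(x, n):
    """The unique interval (j/2^n, (j+1)/2^n] in the partition
    D_n that contains the point x."""
    j = math.ceil(x * 2 ** n) - 1
    return (j / 2 ** n, (j + 1) / 2 ** n)

x = 0.3
for n in range(4):
    print(n, dyadic_interval(x, n))
# rank 0: (0, 1], rank 1: (0, 0.5], rank 2: (0.25, 0.5],
# rank 3: (0.25, 0.375] -- each interval is one of the two halves
# of its predecessor, reflecting the refinement from D_n to D_{n+1}
```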
For $n \in \{2, 3, \ldots\}$, the Euclidean topology on $\R^n$ is the $n$-fold product topology formed from the Euclidean topology on $\R$. So the Borel $\sigma$-algebra $\mathscr R_n$ is also the $n$-fold power $\sigma$-algebra $\mathscr R^n$ formed from $\mathscr R$. Finally, $\mathscr R_n$ can be generated by $n$-fold products of sets in any of the three collections in the previous theorem.
Space of Real Functions
Suppose that $(S, \mathscr S)$ is a measurable space. From our general discussion of functions, recall that the usual arithmetic operations on functions from $S$ into $\R$ are defined pointwise.
If $f: S \to \R$ and $g: S \to \R$ are measurable and $a \in \R$, then each of the following functions from $S$ into $\R$ is also measurable:
1. $f + g$
2. $f - g$
3. $f g$
4. $a f$
Proof
These results follow from the fact that the arithmetic operators are continuous, and hence measurable. That is, $(x, y) \mapsto x + y$, $(x, y) \mapsto x - y$, and $(x, y) \mapsto x y$ are continuous as functions from $\R^2$ into $\R$. Thus, if $f, \, g: S \to \R$ are measurable, then $(f, g): S \to \R^2$ is measurable by the result above. Then, $f + g$, $f - g$, $f g$ are the compositions, respectively, of $+$, $-$, $\cdot$ with $(f, g)$. Of course, (d) is a simple corollary of (c).
Similarly, if $f: S \to \R \setminus \{0\}$ is measurable, then so is $1 / f$. Recall that the set of functions from $S$ into $\R$ is a vector space, under the pointwise definitions of addition and scalar multiplication. But once again, we usually want to restrict our attention to measurable functions. Thus, it's nice to know that the measurable functions from $S$ into $\R$ also form a vector space. This follows immediately from the closure properties (a) and (d) of the previous theorem. Of particular importance in probability and stochastic processes is the vector space of bounded, measurable functions $f: S \to \R$, with the supremum norm $\|f\| = \sup\left\{\left|f(x)\right|: x \in S \right\}$
The elementary functions that we encounter in calculus and other areas of applied mathematics are functions from subsets of $\R$ into $\R$. The elementary functions include algebraic functions (which in turn include the polynomial and rational functions), the usual transcendental functions (exponential, logarithm, trigonometric), and the usual functions constructed from these by composition, the arithmetic operations, and by piecing together. As we might hope, all of the elementary functions are measurable.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
There are several other types of algebraic set structures that are weaker than $\sigma$-algebras. These are not particularly important in themselves, but are important for constructing $\sigma$-algebras and the measures on these $\sigma$-algebras. You may want to skip this section if you are not interested in questions of existence and uniqueness of positive measures.
Basic Theory
Definitions
Throughout this section, we assume that $S$ is a set and $\mathscr{S}$ is a nonempty collection of subsets of $S$. Here are the main definitions we will need.
$\mathscr{S}$ is a $\pi$-system if $\mathscr{S}$ is closed under finite intersections: if $A, \, B \in \mathscr{S}$ then $A \cap B \in \mathscr{S}$.
Closure under intersection is clearly a very simple property, but $\pi$-systems turn out to be useful enough to deserve a name.
$\mathscr{S}$ is a $\lambda$-system if it is closed under complements and countable disjoint unions.
1. If $A \in \mathscr{S}$ then $A^c \in \mathscr{S}$.
2. If $A_i \in \mathscr{S}$ for $i$ in a countable index set $I$ and $A_i \cap A_j = \emptyset$ for $i \ne j$ then $\bigcup_{i \in I} A_i \in \mathscr{S}$.
$\mathscr{S}$ is a semi-algebra if it is closed under intersection and if complements can be written as finite, disjoint unions:
1. If $A, \, B \in \mathscr{S}$ then $A \cap B \in \mathscr{S}$.
2. If $A \in \mathscr{S}$ then there exists a finite, disjoint collection $\{B_i: i \in I\} \subseteq \mathscr{S}$ such that $A^c = \bigcup_{i \in I} B_i$.
For our final structure, recall that a sequence $(A_1, A_2, \ldots)$ of subsets of $S$ is increasing if $A_n \subseteq A_{n+1}$ for all $n \in \N_+$. The sequence is decreasing if $A_{n+1} \subseteq A_n$ for all $n \in \N_+$. Of course, these are the standard meanings of increasing and decreasing relative to the ordinary order $\le$ on $\N_+$ and the subset partial order $\subseteq$ on $\mathscr{P}(S)$.
$\mathscr{S}$ is a monotone class if it is closed under increasing unions and decreasing intersections:
1. If $(A_1, A_2, \ldots)$ is an increasing sequence of sets in $\mathscr{S}$ then $\bigcup_{n=1}^\infty A_n \in \mathscr{S}$.
2. If $(A_1, A_2, \ldots)$ is a decreasing sequence of sets in $\mathscr{S}$ then $\bigcap_{n=1}^\infty A_n \in \mathscr{S}$.
If $(A_1, A_2, \ldots)$ is an increasing sequence of sets then we sometimes write $\bigcup_{n=1}^\infty A_n = \lim_{n \to \infty} A_n$. Similarly, if $(A_1, A_2, \ldots)$ is a decreasing sequence of sets we sometimes write $\bigcap_{n=1}^\infty A_n = \lim_{n \to \infty} A_n$. The reason for this notation will become clear in the section on Convergence in the chapter on Probability Spaces. With this notation, a monotone class $\mathscr{S}$ is defined by the condition that if $(A_1, A_2, \ldots)$ is an increasing or decreasing sequence of sets in $\mathscr{S}$ then $\lim_{n \to \infty} A_n \in \mathscr{S}$.
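For a concrete illustration of the limit notation (a standard example, not from the text): the sequence $A_n = \left[0, 1 - \frac{1}{n}\right]$ for $n \in \N_+$ is increasing, with $\lim_{n \to \infty} A_n = \bigcup_{n=1}^\infty A_n = [0, 1)$, while the sequence $A_n = \left(0, \frac{1}{n}\right)$ is decreasing, with $\lim_{n \to \infty} A_n = \bigcap_{n=1}^\infty A_n = \emptyset$.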
Basic Theorems
Our most important set structure, the $\sigma$-algebra, has all of the properties in the definitions above.
If $\mathscr{S}$ is a $\sigma$-algebra then $\mathscr{S}$ is a $\pi$-system, a $\lambda$-system, a semi-algebra, and a monotone class.
If $\mathscr{S}$ is a $\lambda$-system then $S \in \mathscr{S}$ and $\emptyset \in \mathscr{S}$.
Proof
The proof is just like the one for an algebra. There exists $A \in \mathscr{S}$ since $\mathscr{S}$ is non-empty. Hence $A^c \in \mathscr{S}$ and so $S = A \cup A^c \in \mathscr{S}$. Finally $\emptyset = S^c \in \mathscr{S}$.
Any type of algebraic structure on subsets of $S$ that is defined purely in terms of closure properties will be preserved under intersection. That is, we will have results analogous to those for generating $\sigma$-algebras from more basic sets, with completely straightforward proofs. In the following two theorems, the term system could mean $\pi$-system, $\lambda$-system, or monotone class of subsets of $S$.
If $\mathscr{S}_i$ is a system for each $i$ in an index set $I$ and $\bigcap_{i \in I} \mathscr{S}_i$ is nonempty, then $\bigcap_{i \in I} \mathscr{S}_i$ is a system of the same type.
The condition that $\bigcap_{i \in I} \mathscr{S}_i$ be nonempty is unnecessary for a $\lambda$-system, by the result above. Now suppose that $\mathscr{B}$ is a nonempty collection of subsets of $S$, thought of as basic sets of some sort. Then the system generated by $\mathscr{B}$ is the intersection of all systems that contain $\mathscr{B}$.
The system $\mathscr{S}$ generated by $\mathscr{B}$ is the smallest system containing $\mathscr{B}$, and is characterized by the following properties:
1. $\mathscr{B} \subseteq \mathscr{S}$.
2. If $\mathscr{T}$ is a system and $\mathscr{B} \subseteq \mathscr{T}$ then $\mathscr{S} \subseteq \mathscr{T}$.
Note, however, that the previous two results do not apply to semi-algebras, because a semi-algebra is not defined purely in terms of closure properties (the condition on $A^c$ is not a closure property).
If $\mathscr{S}$ is a monotone class and an algebra, then $\mathscr{S}$ is a $\sigma$-algebra.
Proof
All that is needed is to prove closure under countable unions. Thus, suppose that $A_i \in \mathscr{S}$ for $i \in \N_+$. Then $B_n = \bigcup_{i=1}^n A_i \in \mathscr{S}$ since $\mathscr{S}$ is an algebra. The sequence $(B_1, B_2, \ldots)$ is increasing, so $\bigcup_{n=1}^\infty B_n \in \mathscr{S}$, since $\mathscr{S}$ is a monotone class. But $\bigcup_{n=1}^\infty B_n = \bigcup_{i=1}^\infty A_i$.
By definition, a semi-algebra is a $\pi$-system. More importantly, a semi-algebra can be used to construct an algebra.
Suppose that $\mathscr{S}$ is a semi-algebra of subsets of $S$. Then the collection $\mathscr{S}^*$ of finite, disjoint unions of sets in $\mathscr{S}$ is an algebra.
Proof
Suppose that $A, \, B \in \mathscr{S}^*$. Then there exist finite, disjoint collections $\{A_i: i \in I\} \subseteq \mathscr{S}$ and $\{B_j: j \in J\} \subseteq \mathscr{S}$ such that $A = \bigcup_{i \in I} A_i$ and $B = \bigcup_{j \in J} B_j$. Hence $A \cap B = \bigcup_{(i, j) \in I \times J} (A_i \cap B_j)$ But $\{A_i \cap B_j: (i, j) \in I \times J\}$ is a finite, disjoint collection of sets in $\mathscr{S}$, so $A \cap B \in \mathscr{S}^*$. Suppose $A \in \mathscr{S}^*$, so that there exists a finite, disjoint collection $\{A_i: i \in I\}$ such that $A = \bigcup_{i \in I} A_i$. Then $A^c = \bigcap_{i \in I} A_i^c$. But $A_i^c \in \mathscr{S}^*$ by definition of semi-algebra, and we just showed that $\mathscr{S}^*$ is closed under finite intersections, so $A^c \in \mathscr{S}^*$.
We will say that our nonempty collection $\mathscr{S}$ is closed under proper set difference if $A, \, B \in \mathscr{S}$ and $A \subseteq B$ implies $B \setminus A \in \mathscr{S}$. The following theorem gives the basic relationship between $\lambda$-systems and monotone classes.
Suppose that $\mathscr{S}$ is a nonempty collection of subsets of $S$.
1. If $\mathscr{S}$ is a $\lambda$-system then $\mathscr{S}$ is a monotone class and is closed under proper set difference.
2. If $\mathscr{S}$ is a monotone class, is closed under proper set difference, and contains $S$, then $\mathscr{S}$ is a $\lambda$-system.
Proof
1. Suppose that $\mathscr{S}$ is a $\lambda$-system. Suppose that $A, \, B \in \mathscr{S}$ and $A \subseteq B$. Then $B^c \in \mathscr{S}$, and $A$ and $B^c$ are disjoint, so $A \cup B^c \in \mathscr{S}$. But then $(A \cup B^c)^c = B \cap A^c = B \setminus A \in \mathscr{S}$. Hence $\mathscr{S}$ is closed under proper set difference. Next suppose that $(A_1, A_2, \ldots)$ is an increasing sequence of sets in $\mathscr{S}$. Let $B_1 = A_1$ and $B_n = A_n \setminus A_{n-1}$ for $n \in \{2, 3, \ldots\}$. Then $B_i \in \mathscr{S}$ for each $i \in \N_+$. But the sequence $(B_1, B_2, \ldots)$ is disjoint and has the same union as $(A_1, A_2, \ldots)$. Hence $\bigcup_{i=1}^\infty A_i = \bigcup_{i=1}^\infty B_i \in \mathscr{S}$. Finally, suppose that $(A_1, A_2, \ldots)$ is a decreasing sequence of sets in $\mathscr{S}$. Then $A_i^c \in \mathscr{S}$ for each $i \in \N_+$ and $(A_1^c, A_2^c, \ldots)$ is increasing. Hence $\bigcup_{i=1}^\infty A_i^c \in \mathscr{S}$ and therefore $\left(\bigcup_{i=1}^\infty A_i^c\right)^c = \bigcap_{i=1}^\infty A_i \in \mathscr{S}$.
2. Suppose that $\mathscr{S}$ is a monotone class, is closed under proper set difference, and $S \in \mathscr{S}$. If $A \in \mathscr{S}$ then trivially $A \subseteq S$ so $A^c = S \setminus A \in \mathscr{S}$. Next, suppose that $A, \; B \in \mathscr{S}$ are disjoint. Then $A^c \in \mathscr{S}$ and $B \subseteq A^c$, so $A^c \setminus B = A^c \cap B^c \in \mathscr{S}$. Hence $A \cup B = (A^c \cap B^c)^c \in \mathscr{S}$. Finally, suppose that $(A_1, A_2, \ldots)$ is a disjoint sequence of sets in $\mathscr{S}$. We just showed that $\mathscr{S}$ is closed under finite, disjoint unions, so $B_n = \bigcup_{i=1}^n A_i \in \mathscr{S}$. But the sequence $(B_1, B_2, \ldots)$ is increasing, and hence $\bigcup_{n=1}^\infty B_n = \bigcup_{i=1}^\infty A_i \in \mathscr{S}$.
The following theorem is known as the monotone class theorem, and is due to the mathematician Paul Halmos.
Suppose that $\mathscr{A}$ is an algebra, $\mathscr{M}$ is a monotone class, and $\mathscr{A} \subseteq \mathscr{M}$. Then $\sigma(\mathscr{A}) \subseteq \mathscr{M}$.
Proof
First let $m(\mathscr{A})$ denote the monotone class generated by $\mathscr{A}$, as defined above. The outline of the proof is to show that $m(\mathscr{A})$ is an algebra, so that by (9), $m(\mathscr{A})$ is a $\sigma$-algebra. It then follows that $\sigma(\mathscr{A}) \subseteq m(\mathscr{A}) \subseteq \mathscr{M}$. To show that $m(\mathscr{A})$ is an algebra, we first show that it is closed under complements and then under simple union.
Since $m(\mathscr{A})$ is a monotone class, the collection $m^*(\mathscr{A}) = \{A \subseteq S: A^c \in m(\mathscr{A})\}$ is also a monotone class. Moreover, $\mathscr{A} \subseteq m^*(\mathscr{A})$ so it follows that $m(\mathscr{A}) \subseteq m^*(\mathscr{A})$. Hence if $A \in m(\mathscr{A})$ then $A \in m^*(\mathscr{A})$ so $A^c \in m(\mathscr{A})$. Thus $m(\mathscr{A})$ is closed under complements.
Let $\mathscr{M}_1 = \{A \subseteq S: A \cup B \in m(\mathscr{A}) \text{ for all } B \in \mathscr{A}\}$. Then $\mathscr{M}_1$ is a monotone class and $\mathscr{A} \subseteq \mathscr{M}_1$ so $m(\mathscr{A}) \subseteq \mathscr{M}_1$. Next let $\mathscr{M}_2 = \{A \subseteq S: A \cup B \in m(\mathscr{A}) \text{ for all } B \in m(\mathscr{A})\}$. Then $\mathscr{M}_2$ is also a monotone class. Let $A \in \mathscr{A}$. If $B \in m(\mathscr{A})$ then $B \in \mathscr{M}_1$ and hence $A \cup B \in m(\mathscr{A})$. Hence $A \in \mathscr{M}_2$. Thus we have $\mathscr{A} \subseteq \mathscr{M}_2$, so $m(\mathscr{A}) \subseteq \mathscr{M}_2$. Finally, let $A, \, B \in m(\mathscr{A})$. Then $A \in \mathscr{M}_2$ so $A \cup B \in m(\mathscr{A})$ and therefore $m(\mathscr{A})$ is closed under simple union.
As noted in (5), a $\sigma$-algebra is both a $\pi$-system and a $\lambda$-system. The converse is also true, and is one of the main reasons for studying these structures.
If $\mathscr{S}$ is a $\pi$-system and a $\lambda$-system then $\mathscr{S}$ is a $\sigma$-algebra.
Proof
$S \in \mathscr{S}$, and if $A \in \mathscr{S}$ then $A^c \in \mathscr{S}$ by definition of a $\lambda$-system. Thus, all that is left is to show closure under countable unions. Thus, suppose that $(A_1, A_2, \ldots)$ is a sequence of sets in $\mathscr{S}$. Then $A_i^c \in \mathscr{S}$ for each $i \in \N_+$. Since $\mathscr{S}$ is also a $\pi$-system, it follows that for each $n \in \N_+$, $B_n = A_n \cap A_1^c \cap \cdots \cap A_{n-1}^c \in \mathscr{S}$ (by convention $B_1 = A_1$). But the sequence $(B_1, B_2, \ldots)$ is disjoint and has the same union as $(A_1, A_2, \ldots)$. Hence $\bigcup_{i=1}^\infty A_i = \bigcup_{i=1}^\infty B_i \in \mathscr{S}$.
The importance of $\pi$-systems and $\lambda$-systems stems in part from Dynkin's $\pi$-$\lambda$ theorem given next. It's named for the mathematician Eugene Dynkin.
Suppose that $\mathscr{A}$ is a $\pi$-system of subsets of $S$, $\mathscr{B}$ is a $\lambda$-system of subsets of $S$, and $\mathscr{A} \subseteq \mathscr{B}$. Then $\sigma(\mathscr{A}) \subseteq \mathscr{B}$.
Proof
Let $\mathscr{L}$ denote the $\lambda$-system generated by $\mathscr{A}$. Then of course $\mathscr{A} \subseteq \mathscr{L} \subseteq \mathscr{B}$. For $A \in \mathscr{L}$, let $\mathscr{L}_A = \{B \subseteq S: B \cap A \in \mathscr{L}\}$ We will show that $\mathscr{L}_A$ is a $\lambda$-system. Note that $S \cap A = A \in \mathscr{L}$ and therefore $S \in \mathscr{L}_A$. Next, suppose that $B_1, \, B_2 \in \mathscr{L}_A$ and that $B_1 \subseteq B_2$. Then $B_1 \cap A \in \mathscr{L}$ and $B_2 \cap A \in \mathscr{L}$ and $B_1 \cap A \subseteq B_2 \cap A$. Hence $(B_2 \setminus B_1) \cap A = (B_2 \cap A) \setminus (B_1 \cap A) \in \mathscr{L}$. Hence $B_2 \setminus B_1 \in \mathscr{L}_A$. Finally, suppose that $\{B_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{L}_A$. Then $B_i \cap A \in \mathscr{L}$ for each $i \in I$, and $\{B_i \cap A: i \in I\}$ is also a disjoint collection. Therefore, $\bigcup_{i \in I} (B_i \cap A) = \left(\bigcup_{i \in I} B_i \right) \cap A \in \mathscr{L}$. Hence $\bigcup_{i \in I} B_i \in \mathscr{L}_A$.
Next fix $A \in \mathscr{A}$. If $B \in \mathscr{A}$ then $A \cap B \in \mathscr{A}$, so $A \cap B \in \mathscr{L}$ and hence $B \in \mathscr{L}_A$. But $\mathscr{L}$ is the smallest $\lambda$-system containing $\mathscr{A}$ so we have shown that $\mathscr{L} \subseteq \mathscr{L}_A$ for every $A \in \mathscr{A}$. Now fix $B \in \mathscr{L}$. If $A \in \mathscr{A}$ then $B \in \mathscr{L}_A$ so $A \cap B \in \mathscr{L}$ and therefore $A \in \mathscr{L}_B$. Again, $\mathscr{L}$ is the smallest $\lambda$-system containing $\mathscr{A}$ so we have now shown that $\mathscr{L} \subseteq \mathscr{L}_B$ for every $B \in \mathscr{L}$. Finally, let $B, \, C \in \mathscr{L}$. Then $C \in \mathscr{L}_B$ and hence $B \cap C \in \mathscr{L}$. It now follows that $\mathscr{L}$ is a $\pi$-system, as well as a $\lambda$-system, and therefore by the theorem above, $\mathscr{L}$ is a $\sigma$-algebra. But $\mathscr{A} \subseteq \mathscr{L}$ and hence $\sigma(\mathscr{A}) \subseteq \mathscr{L}$.
Examples and Special Cases
Suppose that $S$ is a set and $\mathscr{A}$ is a finite partition of $S$. Then $\mathscr{S} = \{\emptyset\} \cup \mathscr{A}$ is a semi-algebra of subsets of $S$.
Proof
If $A, \, B \in \mathscr{S}$ are distinct and nonempty then $A$ and $B$ are distinct sets in the partition, so $A \cap B = \emptyset \in \mathscr{S}$; trivially $A \cap A = A$ and $A \cap \emptyset = \emptyset$. If $A \in \mathscr{S}$ then $A^c = \bigcup\{B \in \mathscr{A}: B \neq A \}$, a finite, disjoint union of sets in $\mathscr{S}$.
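For a concrete instance (our own small example), let $S = \{1, 2, 3, 4, 5, 6\}$ and $\mathscr{A} = \{\{1, 2\}, \{3, 4\}, \{5, 6\}\}$. Then, for example, $\{1, 2\} \cap \{3, 4\} = \emptyset \in \mathscr{S}$ and $\{1, 2\}^c = \{3, 4\} \cup \{5, 6\}$, a finite, disjoint union of sets in $\mathscr{S}$.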
Euclidean Spaces
The following example is particularly important because it will be used to construct positive measures on $\R$. Let $\mathscr{B} = \{(a, b]: a, \, b \in \R, \; a \lt b\} \cup \{(-\infty, b]: b \in \R\} \cup \{(a, \infty): a \in \R \}$
$\mathscr{B}$ is a semi-algebra of subsets of $\R$.
Proof
Note that the intersection of two intervals of the type in $\mathscr{B}$ is another interval of this type. The complement of an interval of this type is either another interval of this type or the union of two disjoint intervals of this type.
It follows from the theorem above that the collection $\mathscr{A}$ of finite disjoint unions of intervals in $\mathscr{B}$ is an algebra. Recall also that $\sigma(\mathscr{B}) = \sigma(\mathscr{A})$ is the Borel $\sigma$-algebra of $\R$, named for Émile Borel. We can generalize all of this to $\R^n$ for $n \in \N_+$
The collection $\mathscr{B}_n = \left\{\prod_{i=1}^n A_i: A_i \in \mathscr{B} \text{ for each } i \in \{1, 2, \ldots, n\} \right\}$ is a semi-algebra of subsets of $\R^n$.
Recall also that $\sigma(\mathscr{B}_n)$ is the $\sigma$-algebra of Borel sets of $\R^n$.
Product Spaces
The examples in this discussion are important for constructing positive measures on product spaces.
Suppose that $\mathscr S$ is a semi-algebra of subsets of a set $S$ and that $\mathscr T$ is a semi-algebra of subsets of a set $T$. Then $\mathscr U = \{A \times B: A \in \mathscr S, B \in \mathscr T\}$ is a semi-algebra of subsets of $S \times T$.
Proof
1. Suppose that $A \times B, \, C \times D \in \mathscr U$, so that $A, \, C \in \mathscr S$ and $B, \, D \in \mathscr T$. Recall that $(A \times B) \cap (C \times D) = (A \cap C) \times (B \cap D)$. But $A \cap C \in \mathscr S$ and $B \cap D \in \mathscr T$ so $(A \times B) \cap (C \times D) \in \mathscr U$.
2. Suppose that $A \times B \in \mathscr U$ so that $A \in \mathscr S$ and $B \in \mathscr T$. Then $(A \times B)^c = (A^c \times B) \cup (A \times B^c) \cup (A^c \times B^c)$ There exists a finite, disjoint collection $\{A_i: i \in I\}$ of sets in $\mathscr S$ and a finite, disjoint collection $\{B_j: j \in J\}$ of sets in $\mathscr T$ such that $A^c = \bigcup_{i \in I} A_i$ and $B^c = \bigcup_{j \in J} B_j$. Hence $(A \times B)^c = \left[\bigcup_{i \in I} (A_i \times B)\right] \cup \left[\bigcup_{j \in J} (A \times B_j)\right] \cup \left[\bigcup_{i \in I} \bigcup_{j \in J} (A_i \times B_j)\right]$ All of the product sets in this union are in $\mathscr U$ and the product sets are disjoint.
This result extends in a completely straightforward way to a product of a finite number of sets.
Suppose that $n \in \N_+$ and that $\mathscr S_i$ is a semi-algebra of subsets of a set $S_i$ for $i \in \{1, 2, \ldots, n\}$. Then $\mathscr U = \left\{\prod_{i=1}^n A_i: A_i \in \mathscr S_i \text{ for all } i \in \{1, 2, \ldots, n\}\right\}$ is a semi-algebra of subsets of $\prod_{i=1}^n S_i$.
Note that the semi-algebra of products of intervals in $\R^n$ described above is a special case of this result. For the product of an infinite sequence of sets, the result is a bit more tricky.
Suppose that $\mathscr S_i$ is a semi-algebra of subsets of a set $S_i$ for $i \in \N_+$. Then $\mathscr U = \left\{\prod_{i=1}^\infty A_i: A_i \in \mathscr S_i \text{ for all } i \in \N_+ \text{ and } A_i = S_i \text{ for all but finitely many } i \in \N_+\right\}$ is a semi-algebra of subsets of $\prod_{i=1}^\infty S_i$.
Proof
The proof is very much like the previous ones.
1. Suppose that $A = \prod_{i=1}^\infty A_i \in \mathscr U$ and $B = \prod_{i=1}^\infty B_i \in \mathscr U$, so that $A_i, \, B_i \in \mathscr S_i$ for $i \in \N_+$ and $A_i = S_i$ for all but finitely many $i \in \N_+$ and $B_i = S_i$ for all but finitely many $i \in \N_+$. Then $A \cap B = \prod_{i=1}^\infty (A_i \cap B_i)$. Also, $A_i \cap B_i \in \mathscr S_i$ for $i \in \N_+$ and $A_i \cap B_i = S_i$ for all but finitely many $i \in \N_+$. So $A \cap B \in \mathscr U$.
2. Suppose that $A = \prod_{i=1}^\infty A_i \in \mathscr U$, where $A_i \in \mathscr S_i$ for $i \in \N_+$ and $A_i = S_i$ for $i \gt n$, for some $n \in \N_+$. Then $A^c = \bigcup_{j=1}^n B_j$ where $B_j = A_1 \times \cdots \times A_{j-1} \times A_j^c \times S_{j+1} \times S_{j+2} \times \cdots, \quad j \in \{1, 2, \ldots, n\}$ Note that the product sets in this union are disjoint. But for each $j \in \{1, 2, \ldots, n\}$ there exists a finite disjoint collection $\{C_{j,k}: k \in K_j\}$ such that $A_j^c = \bigcup_{k \in K_j} C_{j,k}$. Substituting and distributing then gives $A^c$ as a finite, disjoint union of sets in $\mathscr U$.
Note that this result would not be true with $\mathscr U = \left\{\prod_{i=1}^\infty A_i: A_i \in \mathscr S_i \text{ for all } i \in \N_+\right\}$. In general, the complement of a set in $\mathscr U$ cannot be written as a finite disjoint union of sets in $\mathscr U$.
The basic topics in this chapter are fundamental to probability theory, and should be accessible to new students of probability. We start with the paradigm of the random experiment and its mathematical model, the probability space. The main objects in this model are sample spaces, events, random variables, and probability measures. We also study several concepts of fundamental importance: conditional probability and independence.
The advanced topics can be skipped if you are a new student of probability, or can be studied later, as the need arises. These topics include the convergence of random variables, the measure-theoretic foundations of probability theory, and the existence and construction of probability measures and random processes.
02: Probability Spaces
$\newcommand{\N}{\mathbb{N}}$
Experiments
Probability theory is based on the paradigm of a random experiment; that is, an experiment whose outcome cannot be predicted with certainty, before the experiment is run. In classical or frequency-based probability theory, we also assume that the experiment can be repeated indefinitely under essentially the same conditions. The repetitions can be in time (as when we toss a single coin over and over again) or in space (as when we toss a bunch of similar coins all at once). The repeatability assumption is important because the classical theory is concerned with the long-term behavior as the experiment is replicated. By contrast, subjective or belief-based probability theory is concerned with measures of belief about what will happen when we run the experiment. In this view, repeatability is a less crucial assumption. In any event, a complete description of a random experiment requires a careful definition of precisely what information about the experiment is being recorded, that is, a careful definition of what constitutes an outcome.
The term parameter refers to a non-random quantity in a model that, once chosen, remains constant. Many probability models of random experiments have one or more parameters that can be adjusted to fit the physical experiment being modeled.
The subjects of probability and statistics have an inverse relationship of sorts. In probability, we start with a completely specified mathematical model of a random experiment. Our goal is to perform various computations that help us understand the random experiment and predict what will happen when we run it. In statistics, by contrast, we start with an incompletely specified mathematical model (one or more parameters may be unknown, for example). We run the experiment to collect data, and then use the data to draw inferences about the unknown factors in the mathematical model.
Compound Experiments
Suppose that we have $n$ experiments $(E_1, E_2, \ldots, E_n)$. We can form a new, compound experiment by performing the $n$ experiments in sequence, $E_1$ first, and then $E_2$ and so on, independently of one another. The term independent means, intuitively, that the outcome of one experiment has no influence over any of the other experiments. We will make the term mathematically precise later.
In particular, suppose that we have a basic experiment. A fixed number (or even an infinite number) of independent replications of the basic experiment is a new, compound experiment. Many experiments turn out to be compound experiments and moreover, as noted above, (classical) probability theory itself is based on the idea of replicating an experiment.
In particular, suppose that we have a simple experiment with two outcomes. Independent replications of this experiment are referred to as Bernoulli trials, named for Jacob Bernoulli. This is one of the simplest, but most important models in probability. More generally, suppose that we have a simple experiment with $k$ possible outcomes. Independent replications of this experiment are referred to as multinomial trials.
Sometimes an experiment occurs in well-defined stages, but in a dependent way, in the sense that the outcome of a given stage is influenced by the outcomes of the previous stages.
Sampling Experiments
In most statistical studies, we start with a population of objects of interest. The objects may be people, memory chips, or acres of corn, for example. Usually there are one or more numerical measurements of interest to us—the height and weight of a person, the lifetime of a memory chip, the amount of rain, amount of fertilizer, and yield of an acre of corn.
Although our interest is in the entire population of objects, this set is usually too large and too amorphous to study. Instead, we collect a random sample of objects from the population and record the measurements of interest for each object in the sample.
There are two basic types of sampling. If we sample with replacement, each item is replaced in the population before the next draw; thus, a single object may occur several times in the sample. If we sample without replacement, objects are not replaced in the population. The chapter on Finite Sampling Models explores a number of models based on sampling from a finite population.
Sampling with replacement can be thought of as a compound experiment, based on independent replications of the simple experiment of drawing a single object from the population and recording the measurements of interest. Conversely, a compound experiment that consists of $n$ independent replications of a simple experiment can usually be thought of as a sampling experiment. On the other hand, sampling without replacement is an experiment that consists of dependent stages, because the population changes with each draw.
Examples and Applications
Probability theory is often illustrated using simple devices from games of chance: coins, dice, cards, spinners, urns with balls, and so forth. Examples based on such devices are pedagogically valuable because of their simplicity and conceptual clarity. On the other hand, it would be a terrible shame if you were to think that probability is only about gambling and games of chance. Rather, try to see problems involving coins, dice, etc. as metaphors for more complex and realistic problems.
Coins and Dice
In terms of probability, the important fact about a coin is simply that when tossed it lands on one side or the other. Coins in Western societies, dating to antiquity, usually have the head of a prominent person engraved on one side and something of lesser importance on the other. In non-Western societies, coins often did not have a head on either side, but did have distinct engravings on the two sides, one typically more important than the other. Nonetheless, heads and tails are the ubiquitous terms used in probability theory to distinguish the front or obverse side of the coin from the back or reverse side of the coin.
Consider the coin experiment of tossing a coin $n$ times and recording the score (1 for heads or 0 for tails) for each toss.
1. Identify a parameter of the experiment.
2. Interpret the experiment as a compound experiment.
3. Interpret the experiment as a sampling experiment.
4. Interpret the experiment as $n$ Bernoulli trials.
Answer
1. The number of tosses $n$ is the parameter.
2. The experiment consists of $n$ independent replications of the simple experiment of tossing the coin one time.
3. The experiment can be thought of as selecting a sample of size $n$ with replacement from the population $\{0, 1\}$.
4. There are two outcomes on each toss and the tosses are independent.
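Readers without access to the applet can approximate the experiment with a few lines of Python (a minimal sketch; the function name and the use of the standard random module are our own choices):

```python
import random

def coin_experiment(n):
    """Toss a fair coin n times; record 1 for heads, 0 for tails."""
    return [1 if random.random() < 0.5 else 0 for _ in range(n)]

for _ in range(3):              # a few runs with n = 5
    print(coin_experiment(5))   # e.g. [0, 1, 1, 0, 1]
```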
In the simulation of the coin experiment, set $n = 5$. Run the simulation 100 times and observe the outcomes.
Dice are randomizing devices that, like coins, date to antiquity and come in a variety of sizes and shapes. Typically, the faces of a die have numbers or other symbols engraved on them. Again, the important fact is that when a die is thrown, a unique face is chosen (usually the upward face, but sometimes the downward one). For more on dice, see the introductory section in the chapter on Games of Chance.
Consider the dice experiment of throwing a $k$-sided die (with faces numbered 1 to $k$), $n$ times and recording the scores for each throw.
1. Identify the parameters of the experiment.
2. Interpret the experiment as a compound experiment.
3. Interpret the experiment as a sampling experiment.
4. Identify the experiment as $n$ multinomial trials.
Answer
1. The parameters are the number of dice $n$ and the number of faces $k$.
2. The experiment consists of $n$ independent replications of the simple experiment of throwing one die.
3. The experiment can be thought of as selecting a sample of size $n$ with replacement from the population $\{1, 2, \ldots, k\}$.
4. The same $k$ outcomes occur for each throw, and the throws are independent.
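A corresponding sketch for the dice experiment, with a tally that highlights the multinomial structure (again hypothetical Python, not part of the text's applets):

```python
import random
from collections import Counter

def dice_experiment(n, k=6):
    """Throw a fair k-sided die n times and record the scores."""
    return [random.randint(1, k) for _ in range(n)]

scores = dice_experiment(5)
print(scores)            # e.g. [3, 6, 3, 1, 5]
print(Counter(scores))   # counts of each face: the multinomial tally
```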
In reality, most dice are Platonic solids (named for Plato of course) with 4, 6, 8, 12, or 20 sides. The six-sided die is the standard die.
In the simulation of the dice experiment, set $n = 5$. Run the simulation 100 times and observe the outcomes.
In the die-coin experiment, a standard die is thrown and then a coin is tossed the number of times shown on the die. The sequence of coin scores is recorded (1 for heads and 0 for tails). Interpret the experiment as a compound experiment.
Answer
The first stage consists of rolling the die and the second stage consists of tossing the coin. The stages are dependent because the number of tosses depends on the outcome of the die throw.
Note that this experiment can be obtained by randomizing the parameter $n$ in the basic coin experiment in (1).
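The dependence between the stages shows up plainly in a simulation sketch (assuming Python; the score conventions follow the exercise):

```python
import random

def die_coin_experiment():
    """Stage 1: throw a standard die. Stage 2: toss a coin that
    many times, recording 1 for heads and 0 for tails."""
    n = random.randint(1, 6)
    return [random.randint(0, 1) for _ in range(n)]

print(die_coin_experiment())   # e.g. [1, 0, 0, 1] after throwing a 4
```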
Run the simulation of the die-coin experiment 100 times and observe the outcomes.
In the coin-die experiment, a coin is tossed. If the coin lands heads, a red die is thrown and if the coin lands tails, a green die is thrown. The coin score (1 for heads and 0 for tails) and the die score are recorded. Interpret the experiment as a compound experiment.
Answer
The first stage consists of tossing the coin and the second stage consists of rolling the die. The stages are dependent because different dice (that may behave differently) are thrown, depending on the outcome of the coin toss.
Run the simulation of the coin-die experiment 100 times and observe the outcomes.
Cards
Playing cards, like coins and dice, date to antiquity. From the point of view of probability, the important fact is that a playing card encodes a number of properties or attributes on the front of the card that are hidden on the back of the card. (Later in this chapter, these properties will become random variables.) In particular, a standard card deck can be modeled by the Cartesian product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, j, q, k \} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit \}$ where the first coordinate encodes the denomination or kind (ace, 2–10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $q \heartsuit$ rather than $(q, \heartsuit)$ for the queen of hearts). Some other properties, derived from the two main ones, are color (diamonds and hearts are red, clubs and spades are black), face (jacks, queens, and kings have faces, the other cards do not), and suit order (from least to highest rank: $(\clubsuit, \diamondsuit, \heartsuit, \spadesuit)$).
Consider the card experiment that consists of dealing $n$ cards from a standard deck (without replacement).
1. Identify a parameter of the experiment.
2. Interpret the experiment as a compound experiment.
3. Interpret the experiment as a sampling experiment.
Answer
1. The parameter is $n$, the number of cards dealt.
2. At each stage, we draw a card from a deck, but the deck changes from one draw to the next, so the stages are dependent.
3. The experiment is to select a sample of size $n$ from the population $D$, without replacement.
In the simulation of the card experiment, set $n = 5$. Run the simulation 100 times and observe the outcomes.
The special case $n = 5$ is the poker experiment and the special case $n = 13$ is the bridge experiment.
Open each of the following to see depictions of card playing in some famous paintings.
1. Cheat with the Ace of Clubs by Georges de La Tour
2. The Cardsharps by Michelangelo Carravagio
3. The Card Players by Paul Cézanne
4. His Station and Four Aces by CM Coolidge
5. Waterloo by CM Coolidge
Urn Models
Urn models are often used in probability as simple metaphors for sampling from a finite population.
An urn contains $m$ distinct balls, labeled from 1 to $m$. The experiment consists of selecting $n$ balls from the urn, without replacement, and recording the sequence of ball numbers.
1. Identify the parameters of the experiment.
2. Interpret the experiment as a compound experiment.
3. Interpret the experiment as a sampling experiment.
Answer
1. The parameters are the number of balls $m$ and the sample size $n$.
2. At each stage, we draw a ball from the urn, but the contents of the urn change from one draw to the next, so the stages are dependent.
3. The experiment is to select a sample of size $n$ from the balls in the urn (the population), without replacement.
Consider the basic urn model of the previous exercise. Suppose that $r$ of the $m$ balls are red and the remaining $m - r$ balls are green. Identify an additional parameter of the model. This experiment is a metaphor for sampling from a general dichotomous population.
Answer
The parameters are the population size $m$, the sample size $n$, and the number of red balls $r$.
In the simulation of the urn experiment, set $m = 100$, $r = 40$, and $n = 25$. Run the experiment 100 times and observe the results.
An urn initially contains $m$ balls; $r$ are red and $m - r$ are green. A ball is selected from the urn and removed, and then replaced with $k$ balls of the same color. The process is repeated. This is known as Pólya's urn model, named after George Pólya.
1. Identify the parameters of the experiment.
2. Interpret the case $k = 0$ as a sampling experiment.
3. Interpret the case $k = 1$ as a sampling experiment.
Answer
1. The parameters are the population size $m$, the initial number of red balls $r$, and the number of new balls added, $k$.
2. When $k = 0$, each ball drawn is removed and no new balls are added, so the experiment is to select a sample of size $n$ from the urn, without replacement.
3. When $k = 1$, each ball drawn is replaced with another ball of the same color. So at least in terms of the colors of the balls, the experiment is equivalent to selecting a sample of size $n$ from the urn, with replacement.
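A sketch of Pólya's urn in Python (tracking only the number of red balls and the total, which determine the color probabilities at each draw; the parameter names follow the exercise):

```python
import random

def polya_urn(m, r, k, n):
    """Perform n draws from Polya's urn: initially r red balls out of
    m total; each drawn ball is removed and replaced by k of its color."""
    red, total, colors = r, m, []
    for _ in range(n):
        is_red = random.random() < red / total
        colors.append('R' if is_red else 'G')
        if is_red:
            red += k - 1    # net change: remove 1 ball, add k
        total += k - 1
    return colors

print(polya_urn(m=10, r=4, k=2, n=5))   # e.g. ['G', 'R', 'R', 'G', 'R']
```

With $k = 0$ the counts simply decrease, matching sampling without replacement, and with $k = 1$ they never change, matching sampling with replacement.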
Open the image of the painting Allegory of Fortune by Dosso Dossi. Presumably the young man has chosen lottery tickets from an urn.
Buffon's Coin Experiment
Buffon's coin experiment consists of tossing a coin with radius $r \leq \frac{1}{2}$ on a floor covered with square tiles of side length 1. The coordinates of the center of the coin are recorded, relative to axes through the center of the square, parallel to the sides. The experiment is named for comte de Buffon.
1. Identify a parameter of the experiment.
2. Interpret the experiment as a compound experiment.
3. Interpret the experiment as a sampling experiment.
Answer
1. The parameter is the coin radius $r$.
2. The experiment can be thought of as selecting the coordinates of the coin center independently of one another.
3. The experiment is equivalent to selecting a sample of size 2 from the population $\left[-\frac{1}{2}, \frac{1}{2}\right]$, with replacement.
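A Monte Carlo sketch of the experiment (assuming Python). Since the coin touches a tile edge unless its center lands in the inner square of side $1 - 2r$, the probability that the coin avoids the edges is $(1 - 2r)^2$, and the simulation can check this:

```python
import random

def buffon_coin(r, trials=100_000):
    """Estimate the probability that a coin of radius r, tossed on
    unit square tiles, does not touch any tile edge."""
    inside = 0
    for _ in range(trials):
        x = random.uniform(-0.5, 0.5)   # coordinates of the coin center
        y = random.uniform(-0.5, 0.5)
        if abs(x) < 0.5 - r and abs(y) < 0.5 - r:
            inside += 1
    return inside / trials

print(buffon_coin(0.1))     # estimate, close to...
print((1 - 2 * 0.1) ** 2)   # ...the exact value 0.64
```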
In the simulation of Buffon's coin experiment, set $r = 0.1$. Run the experiment 100 times and observe the outcomes.
Reliability
In the usual model of structural reliability, a system consists of $n$ components, each of which is either working or failed. The states of the components are uncertain, and hence define a random experiment. The system as a whole is also either working or failed, depending on the states of the components and how the components are connected. For example, a series system works if and only if each component works, while a parallel system works if and only if at least one component works. More generally, a $k$ out of $n$ system works if at least $k$ components work.
Consider the $k$ out of $n$ reliability model.
1. Identify two parameters.
2. What value of $k$ gives a series system?
3. What value of $k$ gives a parallel system?
Answer
1. The parameters are $k$ and $n$
2. $k = n$ gives a series system.
3. $k = 1$ gives a parallel system.
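The structure function of the $k$ out of $n$ model reduces to counting working components; a minimal sketch (our own helper name):

```python
def system_works(states, k):
    """states is a list of component states (1 = working, 0 = failed);
    the k out of n system works iff at least k components work."""
    return sum(states) >= k

states = [1, 0, 1, 1]             # n = 4 components, one failed
print(system_works(states, k=4))  # series system (k = n): False
print(system_works(states, k=1))  # parallel system (k = 1): True
```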
The reliability model above is a static model. It can be extended to a dynamic model by assuming that each component is initially working, but has a random time until failure. The system as a whole would also have a random time until failure that would depend on the component failure times and the structure of the system.
Genetics
In ordinary sexual reproduction, the genetic material of a child is a random combination of the genetic material of the parents. Thus, the birth of a child is a random experiment with respect to outcomes such as eye color, hair type, and many other physical traits. We are often particularly interested in the random transmission of traits and the random transmission of genetic disorders.
For example, let's consider an overly simplified model of an inherited trait that has two possible states (phenotypes), say a pea plant whose pods are either green or yellow. The term allele refers to alternate forms of a particular gene, so we are assuming that there is a gene that determines pod color, with two alleles: $g$ for green and $y$ for yellow. A pea plant has two alleles for the trait (one from each parent), so the possible genotypes are
• $gg$, alleles for green pods from each parent.
• $gy$, an allele for green pods from one parent and an allele for yellow pods from the other (we usually cannot observe which parent contributed which allele).
• $yy$, alleles for yellow pods from each parent.
The genotypes $gg$ and $yy$ are called homozygous because the two alleles are the same, while the genotype $gy$ is called heterozygous because the two alleles are different. Typically, one of the alleles of the inherited trait is dominant and the other recessive. Thus, for example, if $g$ is the dominant allele for pod color, then a plant with genotype $gg$ or $gy$ has green pods, while a plant with genotype $yy$ has yellow pods. Genes are passed from parent to child in a random manner, so each new plant is a random experiment with respect to pod color.
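The random transmission of the pod-color gene is easy to simulate (a hypothetical Python sketch; each parent contributes one of its two alleles, chosen at random):

```python
import random

def child_genotype(parent1, parent2):
    """Each parent passes one of its two alleles at random."""
    return random.choice(parent1) + random.choice(parent2)

def pod_color(genotype):
    """Green (g) is dominant, yellow (y) recessive."""
    return 'green' if 'g' in genotype else 'yellow'

# Crossing two heterozygous (gy) plants: about 1/4 of the
# offspring should have yellow pods
child = child_genotype('gy', 'gy')
print(child, pod_color(child))
```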
Pod color in peas was actually one of the first examples of an inherited trait studied by Gregor Mendel, who is considered the father of modern genetics. Mendel also studied the color of the flowers (yellow or purple), the length of the stems (short or long), and the texture of the seeds (round or wrinkled).
For another example, the $ABO$ blood type in humans is controlled by three alleles: $a$, $b$, and $o$. Thus, the possible genotypes are $aa$, $ab$, $ao$, $bb$, $bo$ and $oo$. The alleles $a$ and $b$ are co-dominant and $o$ is recessive. Thus there are four possible blood types (phenotypes):
• Type $A$: genotype $aa$ or $ao$
• Type $B$: genotype $bb$ or $bo$
• Type $AB$: genotype $ab$
• Type $O$: genotype $oo$
Of course, blood may be typed in much more extensive ways than the simple $ABO$ typing. The RH factor (positive or negative) is the most well-known example.
For our third example, consider a sex-linked hereditary disorder in humans. This is a disorder due to a defect on the X chromosome (one of the two chromosomes that determine gender). Suppose that $h$ denotes the healthy allele and $d$ the defective allele for the gene linked to the disorder. Women have two X chromosomes, and typically $d$ is recessive. Thus, a woman with genotype $hh$ is completely normal with respect to the condition; a woman with genotype $hd$ does not have the disorder, but is a carrier, since she can pass the defective allele to her children; and a woman with genotype $dd$ has the disorder. A man has only one X chromosome (his other sex chromosome, the Y chromosome, typically plays no role in the disorder). A man with genotype $h$ is normal and a man with genotype $d$ has the disorder. Examples of sex-linked hereditary disorders are dichromatism, the most common form of color-blindness, and hemophilia, a bleeding disorder. Again, genes are passed from parent to child in a random manner, so the birth of a child is a random experiment in terms of the disorder.
Point Processes
There are a number of important processes that generate random points in time. Often the random points are referred to as arrivals. Here are some specific examples:
• times that a piece of radioactive material emits elementary particles
• times that customers arrive at a store
• times that requests arrive at a web server
• failure times of a device
To formalize an experiment, we might record the number of arrivals during a specified interval of time or we might record the times of successive arrivals.
There are other processes that produce random points in space. For example,
• flaws in a piece of sheet metal
• errors in a string of symbols (in a computer program, for example)
• raisins in a cake
• misprints on a page
• stars in a region of space
Again, to formalize an experiment, we might record the number of points in a given region of space.
Statistical Experiments
In 1879, Albert Michelson constructed an experiment for measuring the speed of light with an interferometer. The velocity of light data set contains the results of 100 repetitions of Michelson's experiment. Explore the data set and explain, in a general way, the variability of the data.
Answer
The variability is due to measurement and other experimental errors beyond the control of Michelson.
In 1998, two students at the University of Alabama in Huntsville designed the following experiment: purchase a bag of M&Ms (of a specified advertised size) and record the counts for red, green, blue, orange, and yellow candies, and the net weight (in grams). Explore the M&M data set and explain, in a general way, the variability of the data.
Answer
The variability in weight is due to measurement error on the part of the students and to manufacturing errors on the part of the company. The variability in color counts is less clear and may be due to purposeful randomness on the part of the company.
In 1999, two researchers at Belmont University designed the following experiment: capture a cicada in the Middle Tennessee area, and record the body weight (in grams), the wing length, wing width, and body length (in millimeters), the gender, and the species type. The cicada data set contains the results of 104 repetitions of this experiment. Explore the cicada data and explain, in a general way, the variability of the data.
Answer
The variability in body measurements is due to differences in the three species, to all sorts of environmental factors, and to measurement errors by the researchers.
On June 6, 1761, James Short made 53 measurements of the parallax of the sun, based on the transit of Venus. Explore the Short data set and explain, in a general way, the variability of the data.
Answer
The variability is due to measurement and other experimental errors beyond the control of Short.
In 1954, two massive field trials were conducted in an attempt to determine the effectiveness of the new vaccine developed by Jonas Salk for the prevention of polio. In both trials, a treatment group of children were given the vaccine while a control group of children were not. The incidence of polio in each group was measured. Explore the polio field trial data set and explain, in a general way, the underlying random experiment.
Answer
The basic random experiment is to observe whether a given child, in the treatment group or control group, comes down with polio in a specified period of time. Presumably, a lower incidence of polio in the treatment group compared with the control group would be evidence that the vaccine was effective.
Each year from 1969 to 1972 a lottery was held in the US to determine who would be drafted for military service. Essentially, the lottery was a ball and urn model and became famous because many believed that the process was not sufficiently random. Explore the Vietnam draft lottery data set and speculate on how one might judge the degree of randomness.
Answer
This is a difficult problem, but presumably in a sufficiently random lottery, one would not expect to see dates in the same month clustered too closely together. Observing such clustering, then, would be evidence that the lottery was not random.
Deterministic Versus Probabilistic Models
One could argue that some of the examples discussed above are inherently deterministic. In tossing a coin, for example, if we know the initial conditions (involving position, velocity, rotation, etc.), the forces acting on the coin (gravity, air resistance, etc.), and the makeup of the coin (shape, mass density, center of mass, etc.), then the laws of physics should allow us to predict precisely how the coin will land. This is true in a technical, theoretical sense, but false in a very real sense. Coins, dice, and many more complicated and important systems are chaotic in the sense that the outcomes of interest depend in a very sensitive way on the initial conditions and other parameters. In such situations, it might well be impossible to ever know the initial conditions and forces accurately enough to use deterministic methods.
In the coin experiment, for example, even if we strip away most of the real world complexity, we are still left with an essentially random experiment. Joseph Keller in his article The Probability of Heads deterministically analyzed the toss of a coin under a number of ideal assumptions:
1. The coin is a perfect circle and has negligible thickness.
2. The center of gravity of the coin is the geometric center.
3. The coin is initially heads up and is given an initial upward velocity $u$ and angular velocity $\omega$.
4. In flight, the coin rotates about a horizontal axis along a diameter of the coin.
5. In flight, the coin is governed only by the force of gravity. All other possible forces (air resistance or wind, for example) are neglected.
6. The coin does not bounce or roll after landing (as might be the case if it lands in sand or mud).
Of course, few of these ideal assumptions are valid for real coins tossed by humans. Let $t = u / g$ where $g$ is the acceleration of gravity (in appropriate units). Note that $t$ has units of time (seconds) and hence is independent of how distance is measured. The scaled parameter $t$ represents the time required for the coin to reach its maximum height.
Keller showed that the regions of the parameter space $(t, \omega)$ where the coin lands either heads up or tails up are separated by the curves $\omega = \left( 2n \pm \frac{1}{2} \right) \frac{\pi}{2t}, \quad n \in \N$. The parameter $n$ is the total number of revolutions in the toss. A plot of some of these curves is given below. The largest region, in the lower left corner, corresponds to the event that the coin does not complete even one rotation, and so of course lands heads up, just as it started. The next region corresponds to one rotation, with the coin landing tails up. In general, the regions alternate between heads and tails.
The important point, of course, is that for even moderate values of $t$ and $\omega$, the curves are very close together, so that a small change in the initial conditions can easily shift the outcome from heads up to tails up or conversely. As noted in Keller's article, the probabilist and statistician Persi Diaconis determined experimentally that typical values of the initial conditions for a real coin toss are $t = \frac{1}{4}$ seconds and $\omega = 76 \pi \approx 238.8$ radians per second. These values correspond to $n = 19$ revolutions in the toss. Of course, this parameter point is far beyond the region shown in our graph, in a region where the curves are exquisitely close together.
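Keller's boundary curves are simple enough to explore numerically. The following sketch (ours, in Python; not part of Keller's article) classifies a toss from the parameter point $(t, \omega)$, using the fact that the curves occur at half-integer values of $x = 2 t \omega / \pi$, with the bands alternating between heads and tails:

```python
from math import floor, pi

def keller_outcome(t, omega):
    """Classify a Keller coin toss from t = u / g (seconds) and the
    angular velocity omega (radians per second). The boundary curves
    omega = (2n +/- 1/2) pi / (2t) occur at half-integer values of
    x = 2 t omega / pi, so the bands alternate as x crosses 1/2, 3/2, ...
    """
    x = 2 * t * omega / pi
    band = floor(x + 0.5)   # which band contains (t, omega); counts half-turns
    outcome = "heads" if band % 2 == 0 else "tails"
    return outcome, band / 2   # band / 2 is the number of revolutions

# Diaconis's typical initial conditions
print(keller_outcome(0.25, 76 * pi))   # ('heads', 19.0)
```

Changing $\omega$ slightly, say to $78 \pi$, shifts the outcome to tails, illustrating the sensitivity to initial conditions discussed above.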
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The purpose of this section is to study two basic types of objects that form part of the model of a random experiment. If you are a new student of probability, just ignore the measure-theoretic terminology and skip the technical details.
Sample Spaces
The Set of Outcomes
Recall that in a random experiment, the outcome cannot be predicted with certainty, before the experiment is run. On the other hand:
We assume that we can identify a fixed set $S$ that includes all possible outcomes of a random experiment. This set plays the role of the universal set when modeling the experiment.
For simple experiments, $S$ may be precisely the set of possible outcomes. More often, for complex experiments, $S$ is a mathematically convenient set that includes the possible outcomes and perhaps other elements as well. For example, if the experiment is to throw a standard die and record the score that occurs, we would let $S = \{1, 2, 3, 4, 5, 6\}$, the set of possible outcomes. On the other hand, if the experiment is to capture a cicada and measure its body weight (in milligrams), we might conveniently take $S = [0, \infty)$, even though most elements of this set are impossible (we hope!). The problem is that we may not know exactly the outcomes that are possible. Can a light bulb burn without failure for one thousand hours? For one thousand days? For one thousand years?
Often the outcome of a random experiment consists of one or more real measurements, and thus $S$ consists of all possible measurement sequences, a subset of $\R^n$ for some $n \in \N_+$. More generally, suppose that we have $n$ experiments and that $S_i$ is the set of outcomes for experiment $i \in \{1, 2, \ldots, n\}$. Then the Cartesian product $S_1 \times S_2 \times \cdots \times S_n$ is the natural set of outcomes for the compound experiment that consists of performing the $n$ experiments in sequence. In particular, if we have a basic experiment with $S$ as the set of outcomes, then $S^n$ is the natural set of outcomes for the compound experiment that consists of $n$ replications of the basic experiment. Similarly, if we have an infinite sequence of experiments and $S_i$ is the set of outcomes for experiment $i \in \N_+$, then $S_1 \times S_2 \times \cdots$ is the natural set of outcomes for the compound experiment that consists of performing the given experiments in sequence. In particular, the set of outcomes for the compound experiment that consists of indefinite replications of a basic experiment is $S^\infty = S \times S \times \cdots$. This is an essential special case, because (classical) probability theory is based on the idea of replicating a given experiment.
Events
Consider again a random experiment with $S$ as the set of outcomes. Certain subsets of $S$ are referred to as events. Suppose that $A \subseteq S$ is a given event, and that the experiment is run, resulting in outcome $s \in S$.
1. If $s \in A$ then we say that $A$ occurs.
2. If $s \notin A$ then we say that $A$ does not occur.
Intuitively, you should think of an event as a meaningful statement about the experiment: every such statement translates into an event, namely the set of outcomes for which the statement is true. In particular, $S$ itself is an event; by definition it always occurs. At the other extreme, the empty set $\emptyset$ is also an event; by definition it never occurs.
For a note on terminology, recall that a mathematical space consists of a set together with other mathematical structures defined on the set. An example you may be familiar with is a vector space, which consists of a set (the vectors) together with the operations of addition and scalar multiplication. In probability theory, many authors use the term sample space for the set of outcomes of a random experiment, but here is the more careful definition:
The sample space of an experiment is $(S, \mathscr S)$ where $S$ is the set of outcomes and $\mathscr S$ is the collection of events.
Details
Sometimes not every subset of $S$ can be allowed as an event, but the collection of events $\mathscr S$ is required to be a $\sigma$-algebra, so that the sample space $(S, \mathscr S)$ is a measurable space. The axioms of a $\sigma$-algebra ensure that new sets that are constructed in a reasonable way from given events, using the set operations, are themselves valid events. Most of the sample spaces that occur in elementary probability fall into two general categories.
1. Discrete: $S$ is countable and $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$. In this case, the sample space $(S, \mathscr S)$ is discrete.
2. Euclidean: $S$ is a measurable subset of $\R^n$ for some $n \in \N_+$ and $\mathscr S$ is the collection of measurable subsets of $S$.
In (b), the measurable subsets of $\R^n$ include all of the sets encountered in calculus and in standard applications of probability theory, and many more besides. Nonetheless, for technical reasons, certain very weird subsets must be excluded. Typically $S$ is a set defined by a finite number of inequalities involving elementary functions.
The Algebra of Events
The standard algebra of sets leads to a grammar for discussing random experiments and allows us to construct new events from given events. In the following results, suppose that $S$ is the set of outcomes of a random experiment, and that $A$ and $B$ are events.
$A \subseteq B$ if and only if the occurrence of $A$ implies the occurrence of $B$.
Proof
Recall that $\subseteq$ is the subset relation. So by definition, $A \subseteq B$ means that $s \in A$ implies $s \in B$.
$A \cup B$ is the event that occurs if and only if $A$ occurs or $B$ occurs.
Proof
Recall that $A \cup B$ is the union of $A$ and $B$. So by definition, $s \in A \cup B$ if and only if $s \in A$ or $s \in B$.
$A \cap B$ is the event that occurs if and only if $A$ occurs and $B$ occurs.
Proof
Recall that $A \cap B$ is the intersection of $A$ and $B$. So by definition, $s \in A \cap B$ if and only if $s \in A$ and $s \in B$.
$A$ and $B$ are disjoint if and only if they are mutually exclusive; they cannot both occur on the same run of the experiment.
Proof
By definition, $A$ and $B$ disjoint means that $A \cap B = \emptyset$.
$A^c$ is the event that occurs if and only if $A$ does not occur.
Proof
Recall that $A^c$ is the complement of $A$, so $s \in A^c$ if and only if $s \notin A$.
$A \setminus B$ is the event that occurs if and only if $A$ occurs and $B$ does not occur.
Proof
Recall that $A \setminus B = A \cap B^c$. Hence $s \in A \setminus B$ if and only if $s \in A$ and $s \notin B$.
$(A \cap B^c) \cup (B \cap A^c)$ is the event that occurs if and only if one but not both of the given events occurs.
Proof
The events in the union are disjoint. So $s$ is in the given event if and only if either $s \in A$ and $s \notin B$, or $s \in B$ and $s \notin A$.
Recall that the event in the previous result is the symmetric difference of $A$ and $B$, and is sometimes denoted $A \Delta B$. This event corresponds to exclusive or, as opposed to the ordinary union $A \cup B$, which corresponds to inclusive or.
$(A \cap B) \cup (A^c \cap B^c)$ is the event that occurs if and only if both or neither of the given events occurs.
Proof
The events in the union are disjoint. Thus $s$ is in the given event if and only if either $s \in A$ and $s \in B$, or $s \notin A$ and $s \notin B$.
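For finite sample spaces, this grammar maps directly onto set operations in most programming languages. Here is a small Python illustration of our own (the sample space, two throws of a four-sided die, is just a toy example):

```python
# Event algebra on a toy sample space: two throws of a four-sided die.
S = {(i, j) for i in range(1, 5) for j in range(1, 5)}
A = {s for s in S if s[0] == 1}           # first score is 1
B = {s for s in S if s[0] + s[1] == 4}    # sum of the scores is 4

union        = A | B                   # A or B occurs
intersection = A & B                   # A and B both occur
complement   = S - A                   # A does not occur
difference   = A - B                   # A occurs but B does not
sym_diff     = A ^ B                   # exactly one of A, B occurs
both_or_none = (A & B) | (S - A - B)   # both occur or neither occurs

assert sym_diff == (A - B) | (B - A)   # the symmetric difference identity
```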
In the Venn diagram app, observe the diagram of each of the 16 events that can be constructed from $A$ and $B$.
Suppose now that $\mathscr{A} = \{A_i: i \in I\}$ is a collection of events for the random experiment, where $I$ is a countable index set.
$\bigcup \mathscr{A} = \bigcup_{i \in I} A_i$ is the event that occurs if and only if at least one event in the collection occurs.
Proof
Note that $s \in \bigcup_{i \in I} A_i$ if and only if $s \in A_i$ for some $i \in I$.
$\bigcap \mathscr{A} = \bigcap_{i \in I} A_i$ is the event that occurs if and only if every event in the collection occurs.
Proof
Note that $s \in \bigcap_{i \in I} A_i$ if and only if $s \in A_i$ for every $i \in I$.
$\mathscr{A}$ is a pairwise disjoint collection if and only if the events are mutually exclusive; at most one of the events could occur on a given run of the experiment.
Proof
By definition, $A_i \cap A_j = \emptyset$ for distinct $i, \, j \in I$.
Suppose now that $(A_1, A_2, \ldots)$ is an infinite sequence of events.
$\bigcap_{n=1}^\infty \bigcup_{i=n}^\infty A_i$ is the event that occurs if and only if infinitely many of the given events occur. This event is sometimes called the limit superior of $(A_1, A_2, \ldots)$.
Proof
Note that $s$ is in the given event if and only if for every $n \in \N_+$ there exists $i \in \N_+$ with $i \ge n$ such that $s \in A_i$. In turn this means that $s \in A_i$ for infinitely many $i \in \N_+$.
$\bigcup_{n=1}^\infty \bigcap_{i=n}^\infty A_i$ is the event that occurs if and only if all but finitely many of the given events occur. This event is sometimes called the limit inferior of $(A_1, A_2, \ldots)$.
Proof
Note that $s$ is in the given event if and only if there exists $n \in \N_+$ such that $s \in A_i$ for every $i \in \N_+$ with $i \ge n$. In turn, this means that $s \in A_i$ for all but finitely many $i \in \N_+$.
Limit superiors and inferiors are discussed in more detail in the section on convergence.
Random Variables
Intuitively, a random variable is a measurement of interest in the context of the experiment. Simple examples include the number of heads when a coin is tossed several times, the sum of the scores when a pair of dice are thrown, the lifetime of a device subject to random stress, the weight of a person chosen from a population. Many more examples are given in the exercises below. Mathematically, a random variable is a function defined on the set of outcomes.
A function $X$ from $S$ into a set $T$ is a random variable for the experiment with values in $T$.
Details
The set $T$ will also come with a $\sigma$-algebra $\mathscr T$ of admissible subsets, so that $(T, \mathscr T)$ is a measurable space, just like $(S, \mathscr S)$. The function $X$ is required to be measurable, an assumption which ensures that meaningful statements involving $X$ define events. In the discussion below, all subsets of $T$ are assumed to be in $\mathscr T$.
Probability has its own notation, very different from other branches of mathematics. As a case in point, random variables, even though they are functions, are usually denoted by capital letters near the end of the alphabet. The use of a letter near the end of the alphabet is intended to emphasize the idea that the object is a variable in the context of the experiment. The use of a capital letter is intended to emphasize the fact that it is not an ordinary algebraic variable to which we can assign a specific value, but rather a random variable whose value is indeterminate until we run the experiment. Specifically, when we run the experiment an outcome $s \in S$ occurs, and random variable $X$ takes the value $X(s) \in T$.
If $B \subseteq T$, we use the notation $\{X \in B\}$ for the inverse image $\{s \in S: X(s) \in B\}$, rather than $X^{-1}(B)$. Again, the notation is more natural since we think of $X$ as a variable in the experiment. Think of $\{X \in B\}$ as a statement about $X$, which then translates into the event $\{s \in S: X(s) \in B\}$.
Again, every statement about a random variable $X$ with values in $T$ translates into an inverse image of the form $\{X \in B\}$ for some $B \in \mathscr T$. So, for example, if $x \in T$ then $\{X = x\} = \{X \in \{x\}\} = \left\{s \in S: X(s) = x\right\}$. If $X$ is a real-valued random variable and $a, \, b \in \R$ with $a \lt b$ then $\{ a \leq X \leq b\} = \left\{ X \in [a, b]\right\} = \{s \in S: a \leq X(s) \leq b\}$.
Suppose that $X$ is a random variable taking values in $T$, and that $A, \, B \subseteq T$. Then
1. $\{X \in A \cup B\} = \{X \in A\} \cup \{X \in B\}$
2. $\{X \in A \cap B\} = \{X \in A\} \cap \{X \in B\}$
3. $\{X \in A \setminus B\} = \{X \in A\} \setminus \{X \in B\}$
4. $A \subseteq B \implies \{X \in A\} \subseteq \{X \in B\}$
5. If $A$ and $B$ are disjoint, then so are $\{X \in A\}$ and $\{X \in B\}$.
Proof
This is a restatement of the fact that inverse images of a function preserve the set operations; only the notation changes (and is simpler).
1. $s \in \{X \in A \cup B\}$ if and only if $X(s) \in A \cup B$ if and only if $X(s) \in A$ or $X(s) \in B$ if and only if $s \in \{X \in A\}$ or $s \in \{X \in B\}$ if and only if $s \in \{X \in A\} \cup \{X \in B\}$.
2. The proof is exactly the same as (a), with and replacing or.
3. The proof is also exactly the same as (a), with but not replacing or.
4. If $s \in \{X \in A\}$ then $X(s) \in A$ so $X(s) \in B$ and hence $s \in \{X \in B\}$.
5. This follows from part (b).
As with a general function, the result in part (a) holds for the union of a countable collection of subsets, and the result in part (b) holds for the intersection of a countable collection of subsets. No new ideas are involved; only the notation is more complicated.
Often, a random variable takes values in a subset $T$ of $\R^k$ for some $k \in \N_+$. We might express such a random variable as $\bs{X} = (X_1, X_2, \ldots, X_k)$ where $X_i$ is a real-valued random variable for each $i \in \{1, 2, \ldots, k\}$. In this case, we usually refer to $\bs{X}$ as a random vector, to emphasize its higher-dimensional character. A random variable can have an even more complicated structure. For example, if the experiment is to select $n$ objects from a population and record various real measurements for each object, then the outcome of the experiment is a vector of vectors: $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object. There are other possibilities; a random variable could be an infinite sequence, or could be set-valued. Specific examples are given in the computational exercises below. However, the important point is simply that a random variable is a function defined on the set of outcomes $S$.
The outcome of the experiment itself can be thought of as a random variable. Specifically, let $T = S$ and let $X$ denote the identity function on $S$ so that $X(s) = s$ for $s \in S$. Then trivially $X$ is a random variable, and the events that can be defined in terms of $X$ are simply the original events of the experiment. That is, if $A$ is an event then $\{X \in A\} = A$. Conversely, every random variable effectively defines a new random experiment.
In the general setting above, a random variable $X$ defines a new random experiment with $T$ as the new set of outcomes and subsets of $T$ as the new collection of events.
Details
Technically, the $\sigma$-algebra $\mathscr T$ would be the new collection of events.
In fact, often a random experiment is modeled by specifying the random variables of interest, in the language of the experiment. Then, a mathematical definition of the random variables specifies the sample space. A function (or transformation) of a random variable defines a new random variable.
Suppose that $X$ is a random variable for the experiment with values in $T$ and that $g$ is a function from $T$ into another set $U$. Then $Y = g(X)$ is a random variable with values in $U$.
Details
Technically, $T$ and $U$ both come with $\sigma$-algebras of admissible subsets $\mathscr T$ and $\mathscr U$, respectively. The function $g$, just like the function $X$, is required to be measurable. This assumption ensures that $Y = g(X)$ is a measurable function from $S$ into $U$, and hence is a valid random variable.
Note that, as functions, $g(X) = g \circ X$, the composition of $g$ with $X$. But again, thinking of $X$ and $Y$ as variables in the context of the experiment, the notation $Y = g(X)$ is much more natural.
Indicator Variables
For an event $A$, the indicator function of $A$ is called the indicator variable of $A$.
The value of this random variable tells us whether or not $A$ has occurred: $\bs{1}_A = \begin{cases} 1, & A \text{ occurs} \\ 0, & A \text{ does not occur} \end{cases}$ That is, as a function on $S$, $\bs{1}_A(s) = \begin{cases} 1, & s \in A \\ 0, & s \notin A \end{cases}$
If $X$ is a random variable that takes values 0 and 1, then $X$ is the indicator variable of the event $\{X = 1\}$.
Proof
Note that for $s \in S$, $X(s) = 1$ if $s \in \{X = 1\}$ and $X(s) = 0$ otherwise.
Recall also that the set algebra of events translates into the arithmetic algebra of indicator variables.
Suppose that $A$ and $B$ are events.
1. $\bs{1}_{A \cap B} = \bs{1}_A \bs{1}_B = \min\left\{\bs{1}_A, \bs{1}_B\right\}$
2. $\bs{1}_{A \cup B} = 1 - \left(1 - \bs{1}_A\right)\left(1 - \bs{1}_B\right) = \max\left\{\bs{1}_A, \bs{1}_B\right\}$
3. $\bs{1}_{B \setminus A} = \bs{1}_B \left(1 - \bs{1}_A\right)$
4. $\bs{1}_{A^c} = 1 - \bs{1}_A$
5. $A \subseteq B$ if and only if $\bs{1}_A \leq \bs{1}_B$
The result in part (a) extends to arbitrary intersections and the result in part (b) extends to arbitrary unions. If the event $A$ has a complicated description, sometimes we use $\bs 1 (A)$ for the indicator variable rather than $\bs 1_A$.
Examples and Applications
Recall that probability theory is often illustrated using simple devices from games of chance: coins, dice, cards, spinners, urns with balls, and so forth. Examples based on such devices are pedagogically valuable because of their simplicity and conceptual clarity. On the other hand, remember that probability is not only about gambling and games of chance. Rather, try to see problems involving coins, dice, etc. as metaphors for more complex and realistic problems.
Coins and Dice
The basic coin experiment consists of tossing a coin $n$ times and recording the sequence of scores $(X_1, X_2, \ldots, X_n)$ (where 1 denotes heads and 0 denotes tails). This experiment is a generic example of $n$ Bernoulli trials, named for Jacob Bernoulli.
Consider the coin experiment with $n = 4$, and Let $Y$ denote the number of heads.
1. Give the set of outcomes $S$ in list form.
2. Give the event $\{Y = k\}$ in list form for each $k \in \{0, 1, 2, 3, 4\}$.
Answer
To simplify the notation, we represent outcomes as bit strings rather than ordered sequences.
1. $S = \{1111, 1110, 1101, 1011, 0111, 1100, 1010, 1001, 0110, 0101, 0011, 1000, 0100, 0010, 0001, 0000\}$
2. \begin{align} \{Y = 0\} & = \{0000\} \\ \{Y = 1\} & = \{1000, 0100, 0010, 0001\} \\ \{Y = 2\} & = \{1100, 1010, 1001, 0110, 0101, 0011\} \\ \{Y = 3\} & = \{1110, 1101, 1011, 0111\} \\ \{Y = 4\} & = \{1111\} \end{align}
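Answers like this one are easy to check by brute-force enumeration. A possible Python sketch (ours, not part of the text's simulation apps):

```python
from itertools import product

# Enumerate S = {0, 1}^4 and group the outcomes by Y, the number of heads.
S = list(product([0, 1], repeat=4))
events = {k: [s for s in S if sum(s) == k] for k in range(5)}
for k, event in events.items():
    print(k, len(event))   # sizes 1, 4, 6, 4, 1: the binomial coefficients C(4, k)
```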
In the simulation of the coin experiment, set $n = 4$. Run the experiment 100 times and count the number of times that the event $\{Y = 2\}$ occurs.
Now consider the general coin experiment with the coin tossed $n$ times, and let $Y$ denote the number of heads.
1. Give the set of outcomes $S$ in Cartesian product form, and give the cardinality of $S$.
2. Express $Y$ as a function on $S$.
3. Find $\#\{Y = k\}$ (as a subset of $S$) for $k \in \{0, 1, \ldots, n\}$
Answer
1. $S = \{0, 1\}^n$ and $\#(S) = 2^n$.
2. $Y(x_1, x_2, \ldots, x_n) = x_1 + x_2 + \cdots + x_n$. The set of possible values is $\{0, 1, \ldots, n\}$
3. $\#\{Y = k\} = \binom{n}{k}$
The basic dice experiment consists of throwing $n$ distinct $k$-sided dice (with faces numbered from 1 to $k$) and recording the sequence of scores $(X_1, X_2, \ldots, X_n)$. This experiment is a generic example of $n$ multinomial trials. The special case $k = 6$ corresponds to standard dice.
Consider the dice experiment with $n = 2$ standard dice. Let $S$ denote the set of outcomes, $A$ the event that the first die score is 1, and $B$ the event that the sum of the scores is 7. Give each of the following events in the form indicated:
1. $S$ in Cartesian product form
2. $A$ in list form
3. $B$ in list form
4. $A \cup B$ in list form
5. $A \cap B$ in list form
6. $A^c \cap B^c$ in predicate form
Answer
1. $S = \{1, 2, 3, 4, 5, 6\}^2$
2. $A = \{(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)\}$
3. $B = \{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$
4. $A \cup B = \{(1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$
5. $A \cap B = \{(1,6)\}$
6. $A^c \cap B^c = \{(x, y) \in S: x + y \ne 7 \text{ and } x \ne 1\}$
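Again, a few lines of Python (our own sketch) reproduce these lists by enumeration:

```python
from itertools import product

S = set(product(range(1, 7), repeat=2))       # two standard dice
A = {(x, y) for (x, y) in S if x == 1}        # first die score is 1
B = {(x, y) for (x, y) in S if x + y == 7}    # sum of the scores is 7

print(sorted(A | B), A & B)   # 11 outcomes in the union; {(1, 6)} in the intersection
print(len(S - A - B))         # 25 outcomes in the complement of the union
```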
In the simulation of the dice experiment, set $n = 2$. Run the experiment 100 times and count the number of times each event in the previous exercise occurs.
Consider the dice experiment with $n = 2$ standard dice, and let $S$ denote the set of outcomes, $Y$ the sum of the scores, $U$ the minimum score, and $V$ the maximum score.
1. Express $Y$ as a function on $S$ and give the set of possible values in list form.
2. Express $U$ as a function on $S$ and give the set of possible values in list form.
3. Express $V$ as a function on $S$ and give the set of possible values in list form.
4. Give the set of possible values of $(U, V)$ in predicate form.
Answer
Note that $S = \{1, 2, 3, 4, 5, 6\}^2$. The following functions are defined on $S$.
1. $Y(x_1, x_2) = x_1 + x_2$. The set of values is $\{2, 3, \ldots, 12\}$
2. $U(x_1, x_2) = \min\{x_1, x_2\}$. The set of values is $\{1, 2, \ldots, 6\}$
3. $V(x_1, x_2) = \max\{x_1, x_2\}$. The set of values is $\{1, 2, \ldots, 6\}$
4. $\left\{(u, v) \in \{1, 2, 3, 4, 5, 6\}^2: u \le v\right\}$
Consider again the dice experiment with $n = 2$ standard dice, and let $S$ denote the set of outcomes, $Y$ the sum of the scores, $U$ the minimum score, and $V$ the maximum score. Give each of the following as subsets of $S$, in list form.
1. $\{X_1 \lt 3, X_2 \gt 4\}$
2. $\{Y = 7\}$
3. $\{U = 2\}$
4. $\{V = 4\}$
5. $\{U = V\}$
Answer
1. $\{(1,5), (2,5), (1,6), (2,6)\}$
2. $\{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$
3. $\{(2,2), (2,3), (3,2), (2,4), (4,2), (2,5), (5,2), (2,6), (6,2)\}$
4. $\{(4,1), (1,4), (2,4), (4,2), (4,3), (3,4), (4,4)\}$
5. $\{(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)\}$
In the dice experiment, set $n = 2$. Run the experiment 100 times. Count the number of times each event in the previous exercise occurred.
In the general dice experiment with $n$ distinct $k$-sided dice, let $Y$ denote the sum of the scores, $U$ the minimum score, and $V$ the maximum score.
1. Give the set of outcomes $S$ and find $\#(S)$.
2. Express $Y$ as a function on $S$, and give the set of possible values in list form.
3. Express $U$ as a function on $S$, and give the set of possible values in list form.
4. Express $V$ as a function on $S$, and give the set of possible values in list form.
5. Give the set of possible values of $(U, V)$ in predicate form.
Answer
1. $S = \{1, 2, \ldots, k\}^n$ and $\#(S) = k^n$
2. $Y(x_1, x_2, \ldots, x_n) = x_1 + x_2 + \cdots + x_n$. The set of possible values is $\{n, n + 1, \ldots, n k\}$
3. $U(x_1, x_2, \ldots, x_n) = \min\{x_1, x_2, \ldots, x_n\}$. The set of possible values is $\{1, 2, \ldots, k\}$.
4. $V(x_1, x_2, \ldots, x_n) = \max\{x_1, x_2, \ldots, x_n\}$. The set of possible values is $\{1, 2, \ldots, k\}$
5. $\left\{(u, v) \in \{1, 2, \ldots, k\}^2: u \le v\right\}$
The set of outcomes of a random experiment depends of course on what information is recorded. The following exercise is an illustration.
An experiment consists of throwing a pair of standard dice repeatedly until the sum of the two scores is either 5 or 7. Let $A$ denote the event that the sum is 5 rather than 7 on the final throw. Experiments of this type arise in the casino game craps.
1. Suppose that the pair of scores on each throw is recorded. Define the set of outcomes of the experiment and describe $A$ as a subset of this set.
2. Suppose that the pair of scores on the final throw is recorded. Define the set of outcomes of the experiment and describe $A$ as a subset of this set.
Answer
Let $D_5 = \{(1,4), (2,3), (3,2), (4,1)\}$, $D_7 = \{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$, $D = D_5 \cup D_7$, and $C = D^c$
1. $S = D \cup (C \times D) \cup (C^2 \times D) \cup \cdots$, $A = D_5 \cup (C \times D_5) \cup (C^2 \times D_5) \cup \cdots$
2. $S = D$, $A = D_5$
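A simulation makes the difference between the two descriptions concrete: we can generate the full sequence of throws but record only the final one. The sketch below (ours) estimates the relative frequency of $A$; since $D_5$ has 4 outcomes and $D_7$ has 6, the frequency should settle near 0.4, anticipating the probability model of the next section.

```python
import random

def final_throw():
    """Throw a pair of dice until the sum is 5 or 7; return the final pair."""
    while True:
        pair = (random.randint(1, 6), random.randint(1, 6))
        if sum(pair) in (5, 7):
            return pair

runs = 10_000
freq_A = sum(1 for _ in range(runs) if sum(final_throw()) == 5) / runs
print(freq_A)   # typically close to 0.4
```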
Suppose that 3 standard dice are rolled and the sequence of scores $(X_1, X_2, X_3)$ is recorded. A person pays $1 to play. If some of the dice come up 6, then the player receives her $1 back, plus $1 for each 6. Otherwise she loses her $1. Let $W$ denote the person's net winnings. This is the game of chuck-a-luck and is treated in more detail in the chapter on Games of Chance.
1. Give the set of outcomes $S$ in Cartesian product form.
2. Express $W$ as a function on $S$ and give the set of possible values in list form.
Answer
1. $S = \{1, 2, 3, 4, 5, 6\}^3$
2. $W(x_1, x_2, x_3) = \bs{1}\left(x_1 = 6\right) + \bs{1}\left(x_2 = 6\right) + \bs{1}\left(x_3 = 6\right) - \bs{1}\left(x_1 \ne 6, x_2 \ne 6, x_3 \ne 6\right)$. The set of possible values is $\{-1, 1, 2, 3\}$
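Since $S$ has only $6^3 = 216$ outcomes, the counts for each value of $W$ can be found by complete enumeration. A minimal Python sketch (ours):

```python
from itertools import product
from collections import Counter

def W(x1, x2, x3):
    """Net winnings in chuck-a-luck for the outcome (x1, x2, x3)."""
    sixes = [x1, x2, x3].count(6)
    return sixes if sixes > 0 else -1

counts = Counter(W(*s) for s in product(range(1, 7), repeat=3))
print(counts)   # Counter({-1: 125, 1: 75, 2: 15, 3: 1}); note 125 + 75 + 15 + 1 = 216
```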
Play the chuck-a-luck experiment a few times and see how you do.
In the die-coin experiment, a standard die is rolled and then a coin is tossed the number of times shown on the die. The sequence of coin scores $\bs{X}$ is recorded (0 for tails, 1 for heads). Let $N$ denote the die score and $Y$ the number of heads.
1. Give the set of outcomes $S$ in terms of Cartesian powers and find $\#(S)$.
2. Express $N$ as a function on $S$ and give the set of possible values in list form.
3. Express $Y$ as a function on $S$ and give the set of possible values in list form.
4. Give the event $A$ that all tosses result in heads in list form.
Answer
1. $S = \bigcup_{n=1}^6 \{0, 1\}^n$, $\#(S) = 126$
2. $N(x_1, x_2, \ldots, x_n) = n$ for $(x_1, x_2, \ldots, x_n) \in S$. The set of values is $\{1, 2, 3, 4, 5, 6\}$.
3. $Y(x_1, x_2, \ldots, x_n) = \sum_{i=1}^n x_i$ for $(x_1, x_2, \ldots, x_n) \in S$. The set of possible values is $\{0, 1, 2, 3, 4, 5, 6\}$.
4. $A = \{1, 11, 111, 1111, 11111, 111111\}$
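The union of Cartesian powers in part (a) is easy to build explicitly; the following Python sketch (ours) verifies the cardinality and the event $A$:

```python
from itertools import chain, product

# S is the union of the Cartesian powers {0, 1}^n for n = 1, ..., 6.
S = list(chain.from_iterable(product([0, 1], repeat=n) for n in range(1, 7)))
print(len(S))   # 2 + 4 + 8 + 16 + 32 + 64 = 126

N = {s: len(s) for s in S}   # the die score is the length of the toss sequence
Y = {s: sum(s) for s in S}   # the number of heads
A = [s for s in S if all(x == 1 for x in s)]
print(A)   # the six all-heads sequences, one of each length
```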
Run the simulation of the die-coin experiment 10 times. For each run, give the values of the random variables $\bs{X}$, $N$, and $Y$ of the previous exercise. Count the number of times the event $A$ occurs.
In the coin-die experiment, we have a coin and two distinct dice, say one red and one green. First the coin is tossed, and then if the result is heads the red die is thrown, while if the result is tails the green die is thrown. The coin score $X$ and the score of the chosen die $Y$ are recorded. Suppose now that the red die is a standard 6-sided die, and the green die a 4-sided die.
1. Give the set of outcomes $S$ in list form.
2. Express $X$ as a function on $S$.
3. Express $Y$ as a function on $S$.
4. Give the event $\{Y \ge 3\}$ as a subset of $S$ in list form.
Answer
1. $\{(0,1), (0,2), (0,3), (0,4), (1,1), (1,2), (1,3), (1,4), (1,5), (1,6)\}$
2. $X(i, j) = i$ for $(i, j) \in S$
3. $Y(i, j) = j$ for $(i, j) \in S$
4. $\{(0,3), (0,4), (1,3), (1,4), (1,5), (1,6)\}$
Run the coin-die experiment 100 times, with various types of dice.
Sampling Models
Recall that many random experiments can be thought of as sampling experiments. For the general finite sampling model, we start with a population $D$ with $m$ (distinct) objects. We select a sample of $n$ objects from the population. If the sampling is done in a random way, then we have a random experiment with the sample as the basic outcome. Thus, the set of outcomes of the experiment is literally the set of samples; this is the historical origin of the term sample space. There are four common types of sampling from a finite population, based on the criteria of order and replacement. Recall the following facts from the section on combinatorial structures:
Samples of size $n$ chosen from a population with $m$ elements.
1. If the sampling is with replacement and with regard to order, then the set of samples is the Cartesian power $D^n$. The number of samples is $m^n$.
2. If the sampling is without replacement and with regard to order, then the set of samples is the set of all permutations of size $n$ from $D$. The number of samples is $m^{(n)} = m (m - 1) \cdots [m - (n - 1)]$.
3. If the sampling is without replacement and without regard to order, then the set of samples is the set of all combinations (or subsets) of size $n$ from $D$. The number of samples is $\binom{m}{n}$.
4. If the sampling is with replacement and without regard to order, then the set of samples is the set of all multisets of size $n$ from $D$. The number of samples is $\binom{m + n - 1}{n}$.
If we sample with replacement, the sample size $n$ can be any positive integer. If we sample without replacement, the sample size cannot exceed the population size, so we must have $n \in \{1, 2, \ldots, m\}$.
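The four counting formulas are available directly in Python 3.8+ through math.comb and math.perm; here is a small sketch of ours collecting them in one function:

```python
from math import comb, perm

def sample_counts(m, n):
    """Number of samples of size n from a population of m objects,
    for each combination of order and replacement."""
    return {
        "ordered, with replacement":      m ** n,
        "ordered, without replacement":   perm(m, n),          # the falling power m^(n)
        "unordered, without replacement": comb(m, n),
        "unordered, with replacement":    comb(m + n - 1, n),
    }

print(sample_counts(50, 10))   # the four counts in the urn exercise below
```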
The basic coin and dice experiments are examples of sampling with replacement. If we toss a coin $n$ times and record the sequence of scores (where as usual, 0 denotes tails and 1 denotes heads), then we generate an ordered sample of size $n$ with replacement from the population $\{0, 1\}$. If we throw $n$ (distinct) standard dice and record the sequence of scores, then we generate an ordered sample of size $n$ with replacement from the population $\{1, 2, 3, 4, 5, 6\}$.
Suppose that the sampling is without replacement (the most common case). If we record the ordered sample $\bs{X} = (X_1, X_2, \ldots, X_n)$, then the unordered sample $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ is a random variable (that is, a function of $\bs{X}$). On the other hand, if we just record the unordered sample $\bs{W}$ in the first place, then we cannot recover the ordered sample. Note also that the number of ordered samples of size $n$ is simply $n!$ times the number of unordered samples of size $n$. No such simple relationship exists when the sampling is with replacement. This will turn out to be an important point when we study probability models based on random samples, in the next section.
Consider a sample of size $n = 3$ chosen without replacement from the population $\{a, b, c, d, e\}$.
1. Give $T$, the set of unordered samples in list form.
2. Give in list form the set of all ordered samples that correspond to the unordered sample $\{b, c, e\}$.
3. Note that for every unordered sample, there are 6 ordered samples.
4. Give the cardinality of $S$, the set of ordered samples.
Answer
1. $T = \left\{\{a,b,c\}, \{a,b,d\}, \{a,b,e\}, \{a,c,d\}, \{a,c,e\}, \{a,d,e\}, \{b,c,d\}, \{b,c,e\}, \{b,d,e\}, \{c,d,e\}\right\}$
2. $\{(b,c,e), (b,e,c), (c,b,e), (c,e,b), (e,b,c), (e,c,b)\}$
4. $\#(S) = 60$
Traditionally in probability theory, an urn containing balls is often used as a metaphor for a finite population.
Suppose that an urn contains 50 (distinct) balls. A sample of 10 balls is selected from the urn. Find the number of samples in each of the following cases:
1. Ordered samples with replacement
2. Ordered samples without replacement
3. Unordered samples without replacement
4. Unordered samples with replacement
Answer
1. $97\,656\,250\,000\,000\,000$
2. $37\,276\,043\,023\,296\,000$
3. $10\,272\,278\,170$
4. $62\,828\,356\,305$
Suppose again that we have a population $D$ with $m$ (distinct) objects, but suppose now that each object is one of two types—either type 1 or type 0. Such populations are said to be dichotomous. Here are some specific examples:
• The population consists of persons, each either male or female.
• The population consists of voters, each either democrat or republican.
• The population consists of devices, each either good or defective.
• The population consists of balls, each either red or green.
Suppose that the population $D$ has $r$ type 1 objects and hence $m - r$ type 0 objects. Of course, we must have $r \in \{0, 1, \ldots, m\}$. Now suppose that we select a sample of size $n$ without replacement from the population. Note that this model has three parameters: the population size $m$, the number of type 1 objects in the population $r$, and the sample size $n$.
Let $Y$ denote the number of type 1 objects in the sample. Then
1. $\#\{Y = k\} = \binom{n}{k} r^{(k)} (m - r)^{(n - k)}$ for each $k \in \{0, 1, \ldots, n\}$, if the event is considered as a subset of $S$, the set of ordered samples.
2. $\#\{Y = k\} = \binom{r}{k} \binom{m - r}{n - k}$ for each $k \in \{0, 1, \ldots, n\}$, if the event is considered as a subset of $T$, the set of unordered samples.
3. The expression in (a) is $n!$ times the expression in (b).
Proof
1. $\binom{n}{k}$ is the number of ways to pick the coordinates (in the ordered sample) where the type 1 objects will go, $r^{(k)}$ is the number of ways to select a permutation of $k$ type 1 objects, and $(m - r)^{(n-k)}$ is the number of ways to select a permutation of $n - k$ type 0 objects. The result follows from the multiplication principle.
2. $\binom{r}{k}$ is the number of ways to select a combination of $k$ type 1 objects and $\binom{m - r}{n - k}$ is the number of ways to select a combination of $n - k$ type 0 objects. The result again follows from the multiplication principle.
3. This result can be shown algebraically, but a combinatorial argument is better. For every combination of size $n$ there are $n!$ permutations of those objects.
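The identity in part (c) is easy to confirm numerically; the sketch below (ours) uses the parameters of the component-batch exercise that follows:

```python
from math import comb, perm, factorial

m, r, n = 50, 10, 5   # population size, type 1 objects, sample size
for k in range(n + 1):
    ordered   = comb(n, k) * perm(r, k) * perm(m - r, n - k)   # count in S
    unordered = comb(r, k) * comb(m - r, n - k)                # count in T
    assert ordered == factorial(n) * unordered
    print(k, unordered)
```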
A batch of 50 components consists of 40 good components and 10 defective components. A sample of 5 components is selected, without replacement. Let $Y$ denote the number of defectives in the sample.
1. Let $S$ denote the set of ordered samples. Find $\#(S)$.
2. Let $T$ denote the set of unordered samples. Find $\#(T)$.
3. As a subset of $T$, find $\#\{Y = k\}$ for each $k \in \{0, 1, 2, 3, 4, 5\}$.
Answer
1. $254\,251\,200$
2. $2\,118\,760$
3. $\#\{Y = 0\} = 658\,008$, $\#\{Y = 1\} = 913\,900$, $\#\{Y = 2\} = 444\,600$, $\#\{Y = 3\} = 93\,600$, $\#\{Y = 4\} = 8\,400$, $\#\{Y = 5\} = 252$
Run the simulation of the ball and urn experiment 100 times for the parameter values in the last exercise: $m = 50$, $r = 10$, $n = 5$. Note the values of the random variable $Y$.
Cards
Recall that a standard card deck can be modeled by the Cartesian product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, j, q, k\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate encodes the denomination or kind (ace, 2–10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $q \heartsuit$ rather than $(q, \heartsuit)$ for the queen of hearts).
Most card games involve sampling without replacement from the deck $D$, which plays the role of the population. Thus, the basic card experiment consists of dealing $n$ cards from a standard deck without replacement; in this special context, the sample of cards is often referred to as a hand. Just as in the general sampling model, if we record the ordered hand $\bs{X} = (X_1, X_2, \ldots, X_n)$, then the unordered hand $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ is a random variable (that is, a function of $\bs{X}$). On the other hand, if we just record the unordered hand $\bs{W}$ in the first place, then we cannot recover the ordered hand. Finally, recall that $n = 5$ is the poker experiment and $n = 13$ is the bridge experiment. The game of poker is treated in more detail in the chapter on Games of Chance.
Suppose that a single card is dealt from a standard deck. Let $Q$ denote the event that the card is a queen and $H$ the event that the card is a heart. Give each of the following events in list form:
1. $Q$
2. $H$
3. $Q \cup H$
4. $Q \cap H$
5. $Q \setminus H$
Answer
1. $Q = \{q \clubsuit, q \diamondsuit, q \heartsuit, q \spadesuit\}$
2. $H = \{1 \heartsuit, 2 \heartsuit, \ldots, 10 \heartsuit, j \heartsuit, q \heartsuit, k \heartsuit\}$
3. $Q \cup H = \{1 \heartsuit, 2 \heartsuit, \ldots, 10 \heartsuit, j \heartsuit, q \heartsuit, k \heartsuit, q \clubsuit, q \diamondsuit, q \spadesuit\}$
4. $Q \cap H = \{q \heartsuit\}$
5. $Q \setminus H = \{q \clubsuit, q \diamondsuit, q \spadesuit\}$
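Encoding the deck as a Cartesian product makes events like these one-line computations. In the Python sketch below (ours), the suit symbols are replaced by plain letters:

```python
from itertools import product

ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 'j', 'q', 'k']
suits = ['C', 'D', 'H', 'S']   # clubs, diamonds, hearts, spades
D = set(product(ranks, suits))

Q = {card for card in D if card[0] == 'q'}   # the card is a queen
H = {card for card in D if card[1] == 'H'}   # the card is a heart

print(len(Q | H), Q & H, Q - H)   # 16 outcomes; {('q', 'H')}; the other three queens
```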
In the card experiment, set $n = 1$. Run the experiment 100 times and count the number of times each event in the previous exercise occurs.
Suppose that two cards are dealt from a standard deck and the sequence of cards recorded. Let $S$ denote the set of outcomes, and let $Q_i$ denote the event that the $i$th card is a queen and $H_i$ the event that the $i$th card is a heart for $i \in \{1, 2\}$. Find the number of outcomes in each of the following events:
1. $S$
2. $H_1$
3. $H_2$
4. $H_1 \cap H_2$
5. $Q_1 \cap H_1$
6. $Q_1 \cap H_2$
7. $H_1 \cup H_2$
Answer
1. 2652
2. 663
3. 663
4. 156
5. 51
6. 51
7. 1170
Consider the general card experiment in which $n$ cards are dealt from a standard deck, and the ordered hand $\bs{X}$ is recorded.
1. Give the cardinality of $S$, the set of values of the ordered hand $\bs{X}$.
2. Give the cardinality of $T$, the set of values of the unordered hand $\bs{W}$.
3. How many ordered hands correspond to a given unordered hand?
4. Explicitly compute the numbers in (a) and (b) when $n = 5$ (poker).
5. Explicitly compute the numbers in (a) and (b) when $n = 13$ (bridge).
Answer
1. $\#(S) = 52^{(n)}$
2. $\#(T) = \binom{52}{n}$
3. $n!$
4. $311\,875\,200$, $2\,598\,960$
5. $3\,954\,242\,643\,911\,239\,680\,000$, $635\,013\,559\,600$
Consider the bridge experiment of dealing 13 cards from a deck and recording the unordered hand. In the most common point counting system, an ace is worth 4 points, a king 3 points, a queen 2 points, and a jack 1 point. The other cards are worth 0 points. Let $S$ denote the set of outcomes of the experiment and $V$ the point value of the hand.
1. Find the set of possible values of $V$.
2. Find the cardinality of the event $\{V = 0\}$ as a subset of $S$.
Answer
1. $\{0, 1, \ldots, 37\}$
2. $\#\{V = 0\} = 2\,310\,789\,600$
In the card experiment, set $n = 13$ and run the experiment 100 times. For each run, compute the value of each of the random variable $V$ in the previous exercise.
Consider the poker experiment of dealing 5 cards from a deck. Find the cardinality of each of the events below, as a subset of the set of unordered hands.
1. $A$: the event that the hand is a full house (3 cards of one kind and 2 of another kind).
2. $B$: the event that the hand has 4 of a kind (4 cards of one kind and 1 of another kind).
3. $C$: the event that all cards in the hand are in the same suit (the hand is a flush or a straight flush).
Answer
1. $\#(A) = 3744$
2. $\#(B) = 624$
3. $\#(C) = 5148$
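These counts follow from the multiplication principle, and the arithmetic is easy to reproduce; a sketch of ours in Python:

```python
from math import comb

full_house   = 13 * comb(4, 3) * 12 * comb(4, 2)   # kind of the triple, then the pair
four_of_kind = 13 * comb(4, 4) * 48                # kind of the quad, then the odd card
flush        = 4 * comb(13, 5)                     # suit, then 5 ranks (straight flushes included)
print(full_house, four_of_kind, flush)             # 3744, 624, 5148
```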
Run the poker experiment 1000 times. Note the number of times that the events $A$, $B$, and $C$ in the previous exercise occurred.
Consider the bridge experiment of dealing 13 cards from a standard deck. Let $S$ denote the set of unordered hands, $Y$ the number of hearts in the hand, and $Z$ the number of queens in the hand.
1. Find the cardinality of the event $\{Y = y\}$ as a subset of $S$ for each $y \in \{0, 1, \ldots, 13\}$.
2. Find the cardinality of the event $\{Z = z\}$ as a subset of $S$ for each $z \in \{0, 1, 2, 3, 4\}$.
Answer
1. $\#(Y = y) = \binom{13}{y} \binom{39}{13 - y}$ for $y \in \{0, 1, \ldots, 13\}$
2. $\#(Z = z) = \binom{4}{z} \binom{48}{4 - z}$ for $z \in \{0, 1, 2, 3, 4\}$
Geometric Models
In the experiments that we have considered so far, the sample spaces have all been discrete (so that the set of outcomes is finite or countably infinite). In this subsection, we consider Euclidean sample spaces where the set of outcomes $S$ is continuous in a sense that we will make clear later. The experiments we consider are sometimes referred to as geometric models because they involve selecting a point at random from a Euclidean set.
We first consider Buffon's coin experiment, which consists of tossing a coin with radius $r \le \frac{1}{2}$ randomly on a floor covered with square tiles of side length 1. The coordinates $(X, Y)$ of the center of the coin are recorded relative to axes through the center of the square in which the coin lands. Buffon's experiments are studied in more detail in the chapter on Geometric Models and are named for the Comte de Buffon.
In Buffon's coin experiment, let $S$ denote the set of outcomes, $A$ the event that the coin does not touch the sides of the square, and let $Z$ denote the distance from the center of the coin to the center of the square.
1. Describe $S$ as a Cartesian product.
2. Describe $A$ as a subset of $S$.
3. Describe $A^c$ as a subset of $S$.
4. Express $Z$ as a function on $S$.
5. Express the event $\{X \lt Y\}$ as a subset of $S$.
6. Express the event $\left\{Z \leq \frac{1}{2}\right\}$ as a subset of $S$.
Answer
1. $S = \left[-\frac{1}{2}, \frac{1}{2}\right]^2$
2. $A = \left[r - \frac{1}{2}, \frac{1}{2} - r\right]^2$
3. $A^c = \left\{(x, y) \in S: x \lt r - \frac{1}{2} \text{ or } x \gt \frac{1}{2} - r \text{ or } y \lt r - \frac{1}{2} \text{ or } y \gt \frac{1}{2} - r\right\}$
4. $Z(x, y) = \sqrt{x^2 + y^2}$ for $(x, y) \in S$
5. $\{X \lt Y\} = \{(x, y) \in S: x \lt y\}$
6. $\left\{Z \le \frac{1}{2}\right\} = \left\{(x, y) \in S: x^2 + y^2 \le \frac{1}{4}\right\}$
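If, anticipating the next section, we model the center of the coin as uniformly distributed over the square, the experiment is easy to simulate; the sketch below (ours, with $r = 0.2$ as in the app exercise that follows) records the relative frequencies of $A$ and of $\left\{Z \le \frac{1}{2}\right\}$:

```python
import random
from math import sqrt

def buffon_coin(r=0.2, runs=10_000):
    """Simulate Buffon's coin experiment; return the relative frequencies
    of A (coin misses the sides) and of {Z <= 1/2}."""
    count_A = count_Z = 0
    for _ in range(runs):
        x = random.uniform(-0.5, 0.5)   # center of the coin, relative to
        y = random.uniform(-0.5, 0.5)   # the center of the square
        if abs(x) <= 0.5 - r and abs(y) <= 0.5 - r:
            count_A += 1
        if sqrt(x**2 + y**2) <= 0.5:
            count_Z += 1
    return count_A / runs, count_Z / runs

print(buffon_coin())   # the first value should be near the area (1 - 2r)^2 = 0.36
```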
Run Buffon's coin experiment 100 times with $r = 0.2$. For each run, note whether event $A$ occurs and compute the value of random variable $Z$.
A point $(X, Y)$ is chosen at random in the circular region of radius 1 in $\R^2$ centered at the origin. Let $S$ denote the set of outcomes. Let $A$ denote the event that the point is in the inscribed square region centered at the origin, with sides parallel to the coordinate axes. Let $B$ denote the event that the point is in the inscribed square with vertices $(\pm 1, 0)$, $(0, \pm 1)$.
1. Describe $S$ mathematically and sketch the set.
2. Describe $A$ mathematically and sketch the set.
3. Describe $B$ mathematically and sketch the set.
4. Sketch $A \cup B$
5. Sketch $A \cap B$
6. Sketch $A \cap B^c$
Answer
1. $S = \left\{(x, y): x^2 + y^2 \le 1\right\}$
2. $A = \left\{(x, y): -\frac{1}{\sqrt{2}} \le x \le \frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}} \le y \le \frac{1}{\sqrt{2}}\right\}$
3. $B = \left\{(x, y) \in S: \left|x + y\right| \le 1, \left|x - y\right| \le 1\right\}$
Reliability
In the simple model of structural reliability, a system is composed of $n$ components, each of which is either working or failed. The state of component $i$ is an indicator random variable $X_i$, where 1 means working and 0 means failure. Thus, $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a vector of indicator random variables that specifies the states of all of the components, and therefore the set of outcomes of the experiment is $S = \{0, 1\}^n$. The system as a whole is also either working or failed, depending only on the states of the components and how the components are connected together. Thus, the state of the system is also an indicator random variable and is a function of $\bs X$. The state of the system (working or failed) as a function of the states of the components is the structure function.
A series system is working if and only if each component is working. The state of the system is $U = X_1 X_2 \cdots X_n = \min\left\{X_1, X_2, \ldots, X_n\right\}$
A parallel system is working if and only if at least one component is working. The state of the system is $V = 1 - \left(1 - X_1\right)\left(1 - X_2\right) \cdots \left(1 - X_n\right) = \max\left\{X_1, X_2, \ldots, X_n\right\}$
More generally, a $k$ out of $n$ system is working if and only if at least $k$ of the $n$ components are working. Note that a parallel system is a 1 out of $n$ system and a series system is an $n$ out of $n$ system. A $k$ out of $2k - 1$ system is a majority rules system.
The state of the $k$ out of $n$ system is $U_{n,k} = \bs 1\left(\sum_{i=1}^n X_i \ge k\right)$. The structure function can also be expressed as a polynomial in the variables.
Explicitly give the state of the $k$ out of 3 system, as a polynomial function of the component states $(X_1, X_2, X_3)$, for each $k \in \{1, 2, 3\}$.
Answer
1. $U_{3,1} = X_1 + X_2 + X_3 - X_1 X_2 - X_1 X_3 - X_2 X_3 + X_1 X_2 X_3$
2. $U_{3,2} = X_1 X_2 + X_1 X_3 + X_2 X_3 - 2 \, X_1 X_2 X_3$
3. $U_{3,3} = X_1 X_2 X_3$
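Polynomial answers like these can be verified against the indicator form by checking all $2^3$ component states. A short Python check of ours:

```python
from itertools import product

def U(k, x):
    """Indicator form of the k out of n structure function."""
    return 1 if sum(x) >= k else 0

def U31(x1, x2, x3):   # 1 out of 3 (parallel) as a polynomial
    return x1 + x2 + x3 - x1*x2 - x1*x3 - x2*x3 + x1*x2*x3

def U32(x1, x2, x3):   # 2 out of 3 (majority rules) as a polynomial
    return x1*x2 + x1*x3 + x2*x3 - 2*x1*x2*x3

for x in product([0, 1], repeat=3):
    assert U31(*x) == U(1, x) and U32(*x) == U(2, x)
```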
In some cases, the system can be represented as a graph or network. The edges represent the components and the vertices the connections between the components. The system functions if and only if there is a working path between two designated vertices, which we will denote by $a$ and $b$.
Find the state of the Wheatstone bridge network shown below, as a function of the component states. The network is named for Charles Wheatstone.
Answer
Not every function $u: \{0, 1\}^n \to \{0, 1\}$ makes sense as a structure function. Explain why the following properties might be desirable:
1. $u(0, 0, \ldots, 0) = 0$ and $u(1, 1, \ldots, 1) = 1$
2. $u$ is an increasing function, where $\{0, 1\}$ is given the ordinary order and $\{0, 1\}^n$ the corresponding product order.
3. For each $i \in \{1, 2, \ldots, n\}$, there exist $\bs{x}$ and $\bs{y}$ in $\{0, 1\}^n$ all of whose coordinates agree, except $x_i = 0$ and $y_i = 1$, and $u(\bs{x}) = 0$ while $u(\bs{y}) = 1$.
Answer
1. This means that if all components have failed then the system has failed, and if all components are working then the system is working.
2. This means that if a particular component is changed from failed to working, then the system may also go from failed to working, but not from working to failed. That is, the system can only improve.
3. This means that every component is relevant to the system, that is, there exists a configuration in which changing component $i$ from failed to working changes the system from failed to working.
The model just discussed is a static model. We can extend it to a dynamic model by assuming that component $i$ is initially working, but has a random time to failure $T_i$, taking values in $[0, \infty)$, for each $i \in \{1, 2, \ldots, n\}$. Thus, the basic outcome of the experiment is the random vector of failure times $(T_1, T_2, \ldots, T_n)$, and so the set of outcomes is $[0, \infty)^n$.
Consider the dynamic reliability model for a system with structure function $u$ (valid in the sense of the previous exercise).
1. The state of component $i$ at time $t \ge 0$ is $X_i(t) = \bs{1}\left(T_i \gt t\right)$.
2. The state of the system at time $t$ is $X(t) = u\left[X_1(t), X_2(t), \ldots, X_n(t)\right]$.
3. The time to failure of the system is $T = \min\{t \ge 0: X(t) = 0\}$.
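For the basic structures, the failure time $T$ has a simple closed form in terms of the component failure times; the sketch below (ours) records the standard cases:

```python
def series_failure_time(times):
    # A series system fails at the first component failure.
    return min(times)

def parallel_failure_time(times):
    # A parallel system fails at the last component failure.
    return max(times)

def k_out_of_n_failure_time(times, k):
    # The system fails when fewer than k components remain working,
    # that is, at the (n - k + 1)st failure.
    n = len(times)
    return sorted(times)[n - k]

T = (3.2, 1.7, 5.0)
print(series_failure_time(T), parallel_failure_time(T), k_out_of_n_failure_time(T, 2))
# 1.7 5.0 3.2
```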
Suppose that we have two devices and that we record $(X, Y)$, where $X$ is the failure time of device 1 and $Y$ is the failure time of device 2. Both variables take values in the interval $[0, \infty)$, where the units are in hundreds of hours. Sketch each of the following events:
1. The set of outcomes $S$
2. $\{X \lt Y\}$
3. $\{X + Y \gt 2\}$
Answer
1. $S = [0, \infty)^2$, the first quadrant of the coordinate plane.
2. $\{X \lt Y\} = \{(x, y) \in S: x \lt y\}$. This is the region below the diagonal line $x = y$.
3. $\{X + Y \gt 2\} = \{(x, y) \in S: x + y \gt 2\}$. This is the region above (or to the right) of the line $x + y = 2$.
Genetics
Please refer to the discussion of genetics in the section on random experiments if you need to review some of the definitions in this section.
Recall first that the ABO blood type in humans is determined by three alleles: $a$, $b$, and $o$. Furthermore, $o$ is recessive and $a$ and $b$ are co-dominant.
Suppose that a person is chosen at random and his genotype is recorded. Give each of the following in list form.
1. The set of outcomes S
2. The event that the person is type $A$
3. The event that the person is type $B$
4. The event that the person is type $AB$
5. The event that the person is type $O$
Answer
1. $S = \{aa, ab, ao, bb, bo, oo\}$
2. $A = \{aa, ao\}$
3. $B = \{bb, bo\}$
4. $AB = \{ab\}$
5. $O = \{oo\}$
Suppose next that pod color in a certain type of pea plant is determined by a gene with two alleles: $g$ for green and $y$ for yellow, and that $g$ is dominant.
Suppose that $n$ (distinct) pea plants are collected and the sequence of pod color genotypes is recorded.
1. Give the set of outcomes $S$ in Cartesian product form and find $\#(S)$.
2. Let $N$ denote the number of plants with green pods. Find $\#(N = k)$ (as a subset of $S$) for each $k \in \{0, 1, \ldots, n\}$.
Answer
1. $S = \{gg, gy, yy\}^n$, $\#(S) = 3^n$
2. $\binom{n}{k} 2^k$
Next consider a sex-linked hereditary disorder in humans (such as colorblindness or hemophilia). Let $h$ denote the healthy allele and $d$ the defective allele for the gene linked to the disorder. Recall that $d$ is recessive for women.
Suppose that $n$ women are sampled and the sequence of genotypes is recorded.
1. Give the set of outcomes $S$ in Cartesian product form and find $\#(S)$.
2. Let $N$ denote the number of women who are completely healthy (genotype $hh$). Find $\#(N = k)$ (as a subset of $S$) for each $k \in \{0, 1, \ldots, n\}$.
Answer
1. $S = \{hh, hd, dd\}^n$, $\#(S) = 3^n$
2. $\binom{n}{k} 2^{n-k}$
Radioactive Emissions
The emission of elementary particles from a sample of radioactive material occurs in a random way. Suppose that the time of emission of the $i$th particle is a random variable $T_i$ taking values in $(0, \infty)$. If we measure these arrival times, then the basic outcome vector is $(T_1, T_2, \ldots)$ and so the set of outcomes is $S = \{(t_1, t_2, \ldots): 0 \lt t_1 \lt t_2 \lt \cdots\}$.
Run the simulation of the gamma experiment in single-step mode for different values of the parameters. Observe the arrival times.
Now let $N_t$ denote the number of emissions in the interval $(0, t]$. Then
1. $N_t = \max\left\{n \in \N_+: T_n \le t\right\}$.
2. $N_t \ge n$ if and only if $T_n \le t$.
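The duality in part (b) is worth checking numerically. In the sketch below (ours), we assume for illustration that the times between emissions are independent exponential variables, as in the gamma and Poisson apps; this distributional assumption is not part of the general model above.

```python
import random
from itertools import accumulate

random.seed(1)
gaps = [random.expovariate(1.0) for _ in range(100)]   # assumed inter-arrival times
T = list(accumulate(gaps))                             # arrival times T_1 < T_2 < ...

t = 5.0
N_t = sum(1 for s in T if s <= t)                      # emissions in (0, t]
for n in range(1, len(T) + 1):
    assert (N_t >= n) == (T[n - 1] <= t)               # N_t >= n iff T_n <= t
```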
Run the simulation of the Poisson experiment in single-step mode for different parameter values. Observe the arrivals in the specified time interval.
Statistical Experiments
In the basic cicada experiment, a cicada in the Middle Tennessee area is captured and the following measurements recorded: body weight (in grams), wing length, wing width, and body length (in millimeters), species type, and gender. The cicada data set gives the results of 104 repetitions of this experiment.
1. Define the set of outcomes $S$ for the basic experiment.
2. Let $F$ be the event that a cicada is female. Describe $F$ as a subset of $S$. Determine whether $F$ occurs for each cicada in the data set.
3. Let $V$ denote the ratio of wing length to wing width. Compute $V$ for each cicada.
4. Give the set of outcomes for the compound experiment that consists of 104 repetitions of the basic experiment.
Answer
For gender, let 0 denote female and 1 male, for species, let 1 denote tredecula, 2 tredecim, and 3 tredecassini.
1. $S = (0, \infty)^4 \times \{0, 1\} \times \{1, 2, 3\}$
2. $F = \{(x_1, x_2, x_3, x_4, y, z) \in S: y = 0\}$
4. $S^{104}$
In the basic M&M experiment, a bag of M&Ms (of a specified size) is purchased and the following measurements recorded: the number of red, green, blue, yellow, orange, and brown candies, and the net weight (in grams). The M&M data set gives the results of 30 repetitions of this experiment.
1. Define the set of outcomes $S$ for the basic experiment.
2. Let $A$ be the event that a bag contains at least 57 candies. Describe $A$ as a subset of $S$.
3. Determine whether $A$ occurs for each bag in the data set.
4. Let $N$ denote the total number of candies. Compute $N$ for each bag in the data set.
5. Give the set of outcomes for the compound experiment that consists of 30 repetitions of the basic experiment.
Answer
1. $S = \N^6 \times (0, \infty)$
2. $A = \{(n_1, n_2, n_3, n_4, n_5, n_6, w) \in S: n_1 + n_2 + \cdots + n_6 \ge 57\}$
5. $S^{30}$
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
This section contains the final and most important ingredient in the basic model of a random experiment. If you are a new student of probability, skip the technical details.
Definitions and Interpretations
Suppose that we have a random experiment with sample space $(S, \mathscr S)$, so that $S$ is the set of outcomes of the experiment and $\mathscr S$ is the collection of events. When we run the experiment, a given event $A$ either occurs or does not occur, depending on whether the outcome of the experiment is in $A$ or not. Intuitively, the probability of an event is a measure of how likely the event is to occur when we run the experiment. Mathematically, probability is a function on the collection of events that satisfies certain axioms.
Definition
A probability measure (or probability distribution) $\P$ on the sample space $(S, \mathscr S)$ is a real-valued function defined on the collection of events $\mathscr S$ that satisfies the following axioms:
1. $\P(A) \ge 0$ for every event $A$.
2. $\P(S) = 1$.
3. If $\{A_i: i \in I\}$ is a countable, pairwise disjoint collection of events then $\P\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I}\P(A_i)$
Details
Recall that the collection of events $\mathscr S$ is required to be a $\sigma$-algebra, which guarantees that the union of the events in (c) is itself an event. A probability measure is a special case of a positive measure.
Axiom (c) is known as countable additivity, and states that the probability of a union of a finite or countably infinite collection of disjoint events is the sum of the corresponding probabilities. The axioms are known as the Kolmogorov axioms, in honor of Andrei Kolmogorov who was the first to formalize probability theory in an axiomatic way. More informally, we say that $\P$ is a probability measure (or distribution) on $S$, the collection of events $\mathscr S$ usually being understood.
Axioms (a) and (b) are really just a matter of convention; we choose to measure the probability of an event with a number between 0 and 1 (as opposed, say, to a number between $-5$ and $7$). Axiom (c) however, is fundamental and inescapable. It is required for probability for precisely the same reason that it is required for other measures of the size of a set, such as cardinality for finite sets, length for subsets of $\R$, area for subsets of $\R^2$, and volume for subsets of $\R^3$. In all these cases, the size of a set that is composed of countably many disjoint pieces is the sum of the sizes of the pieces.
On the other hand, uncountable additivity (the extension of axiom (c) to an uncountable index set $I$) is unreasonable for probability, just as it is for other measures. For example, an interval of positive length in $\R$ is a union of uncountably many points, each of which has length 0.
We have now defined the three essential ingredients for the model of a random experiment:
A probability space $(S, \mathscr S, \P)$ consists of
1. A set of outcomes $S$
2. A collection of events $\mathscr{S}$
3. A probability measure $\P$ on the sample space $(S, \mathscr S)$
Details
Again, the collection of events $\mathscr S$ is a $\sigma$-algebra, so that the sample space $(S, \mathscr S)$ is a measurable space. The probability space $(S, \mathscr S, \P)$ is a special case of a positive measure space.
The Law of Large Numbers
Intuitively, the probability of an event is supposed to measure the long-term relative frequency of the event—in fact, this concept was taken as the definition of probability by Richard von Mises. Here are the relevant definitions:
Suppose that the experiment is repeated indefinitely, and that $A$ is an event. For $n \in \N_+$,
1. Let $N_n(A)$ denote the number of times that $A$ occurred. This is the frequency of $A$ in the first $n$ runs.
2. Let $P_n(A) = N_n(A) / n$. This is the relative frequency or empirical probability of $A$ in the first $n$ runs.
Note that repeating the original experiment indefinitely creates a new, compound experiment, and that $N_n(A)$ and $P_n(A)$ are random variables for the new experiment. In particular, the values of these variables are uncertain until the experiment is run $n$ times. The basic idea is that if we have chosen the correct probability measure for the experiment, then in some sense we expect that the relative frequency of an event should converge to the probability of the event. That is, $P_n(A) \to \P(A) \text{ as } n \to \infty, \quad A \in \mathscr S$ regardless of the uncertainty of the relative frequencies on the left. The precise statement of this is the law of large numbers or law of averages, one of the fundamental theorems in probability. To emphasize the point, note that in general there will be lots of possible probability measures for an experiment, in the sense of the axioms. However, only the probability measure that models the experiment correctly will satisfy the law of large numbers.
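The convergence is easy to see empirically. The following sketch (a hypothetical illustration, not part of the formal development) simulates throwing a fair die and tracks the relative frequency of the event that the score is 6; the seed and the run counts are arbitrary choices.

```python
import random

random.seed(17)  # arbitrary seed, for reproducibility of the illustration only

def relative_frequency(n):
    """Empirical probability P_n(A) of A = {score is 6} in n simulated throws."""
    hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    return hits / n

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, relative_frequency(n))  # should approach P(A) = 1/6 ≈ 0.1667
```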
Given the data from $n$ runs of the experiment, the empirical probability function $P_n$ is a probability measure on $S$.
Proof
If we run the experiment $n$ times, we generate $n$ points in $S$ (although of course, some of these points may be the same). The function $A \mapsto N_n(A)$ for $A \subseteq S$ is simply counting measure corresponding to the $n$ points. Clearly $P_n(A) \ge 0$ for an event $A$ and $P_n(S) = n / n = 1$. Countable additivity holds by the addition rule for counting measure.
The Distribution of a Random Variable
Suppose now that $X$ is a random variable for the experiment, taking values in a set $T$. Recall that mathematically, $X$ is a function from $S$ into $T$, and $\{X \in B\}$ denotes the event $\{s \in S: X(s) \in B\}$ for $B \subseteq T$. Intuitively, $X$ is a variable of interest for the experiment, and every meaningful statement about $X$ defines an event.
The function $B \mapsto \P(X \in B)$ for $B \subseteq T$ defines a probability measure on $T$.
Proof
Clearly $\P(X \in B) \ge 0$ for $B \subseteq T$, and $\P(X \in T) = \P(S) = 1$. If $\{B_i: i \in I\}$ is a countable, pairwise disjoint collection of subsets of $T$, then the events $\{X \in B_i\}$ for $i \in I$ are also pairwise disjoint, and $\left\{X \in \bigcup_{i \in I} B_i\right\} = \bigcup_{i \in I} \{X \in B_i\}$. Countable additivity then gives $\P\left(X \in \bigcup_{i \in I} B_i\right) = \sum_{i \in I} \P(X \in B_i)$.
The probability measure in (5) is called the probability distribution of $X$, so we have all of the ingredients for a new probability space.
A random variable $X$ with values in $T$ defines a new probability space:
1. $T$ is the set of outcomes.
2. Subsets of $T$ are the events.
3. The probability distribution of $X$ is the probability measure on $T$.
This probability space corresponds to the new random experiment in which the outcome is $X$. Moreover, recall that the outcome of the experiment itself can be thought of as a random variable. Specifically, let $T = S$ and let $X$ be the identity function on $S$, so that $X(s) = s$ for $s \in S$. Then $X$ is a random variable with values in $S$ and $\P(X \in A) = \P(A)$ for each event $A$. Thus, every probability measure can be thought of as the distribution of a random variable.
Constructions
Measures
How can we construct probability measures? As noted briefly above, there are other measures of the size of sets; in many cases, these can be converted into probability measures. First, a positive measure $\mu$ on the sample space $(S, \mathscr S)$ is a real-valued function defined on $\mathscr{S}$ that satisfies axioms (a) and (c) in (1), and then $(S, \mathscr S, \mu)$ is a measure space. In general, $\mu(A)$ is allowed to be infinite. However, if $\mu(S)$ is positive and finite (so that $\mu$ is a finite positive measure), then $\mu$ can easily be re-scaled into a probability measure.
If $\mu$ is a positive measure on $S$ with $0 \lt \mu(S) \lt \infty$ then $\P$ defined below is a probability measure. $\P(A) = \frac{\mu(A)}{\mu(S)}, \quad A \in \mathscr S$
Proof
1. $\P(A) \ge 0$ since $\mu(A) \ge 0$ and $0 \lt \mu(S) \lt \infty$.
2. $\P(S) = \mu(S) \big/ \mu(S) = 1$
3. If $\{A_i: i \in I\}$ is a countable collection of disjoint events then $\P\left(\bigcup_{i \in I} A_i \right) = \frac{1}{\mu(S)} \mu\left(\bigcup_{i \in I} A_i \right) = \frac{1}{\mu(S)} \sum_{i \in I} \mu(A_i) = \sum_{i \in I} \frac{\mu(A_i)}{\mu(S)} = \sum_{i \in I} \P(A_i)$
In this context, $\mu(S)$ is called the normalizing constant. In the next two subsections, we consider some very important special cases.
Discrete Distributions
In this discussion, we assume that the sample space $(S, \mathscr S)$ is discrete. Recall that this means that the set of outcomes $S$ is countable and that $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$, so that every subset is an event. The standard measure on a discrete space is counting measure $\#$, so that $\#(A)$ is the number of elements in $A$ for $A \subseteq S$. When $S$ is finite, the probability measure corresponding to counting measure, as constructed above, is particularly important in combinatorial and sampling experiments.
Suppose that $S$ is a finite, nonempty set. The discrete uniform distribution on $S$ is given by $\P(A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$
The underlying model is referred to as the classical probability model, because historically the very first problems in probability (involving coins and dice) fit this model.
In the general discrete case, if $\P$ is a probability measure on $S$, then since $S$ is countable, it follows from countable additivity that $\P$ is completely determined by its values on the singleton events. Specifically, if we define $f(x) = \P\left(\{x\}\right)$ for $x \in S$, then $\P(A) = \sum_{x \in A} f(x)$ for every $A \subseteq S$. By axiom (a), $f(x) \ge 0$ for $x \in S$ and by axiom (b), $\sum_{x \in S} f(x) = 1$. Conversely, we can give a general construction for defining a probability measure on a discrete space.
Suppose that $g: S \to [0, \infty)$. Then $\mu$ defined by $\mu(A) = \sum_{x \in A} g(x)$ for $A \subseteq S$ is a positive measure on $S$. If $0 \lt \mu(S) \lt \infty$ then $\P$ defined as follows is a probability measure on $S$. $\P(A) = \frac{\mu(A)}{\mu(S)} = \frac{\sum_{x \in A} g(x)}{\sum_{x \in S} g(x)}, \quad A \subseteq S$
Proof
Trivially $\mu(A) \ge 0$ for $A \subseteq S$ since $g$ is nonnegative. The countable additivity property holds since the terms in a sum of nonnegative numbers can be rearranged in any way without altering the sum. Thus let $\{A_i: i \in I\}$ be a countable collection of disjoint subsets of $S$, and let $A = \bigcup_{i \in I} A_i$. Then $\mu(A) = \sum_{x \in A} g(x) = \sum_{i \in I} \sum_{x \in A_i} g(x) = \sum_{i \in I} \mu(A_i)$ If $0 \lt \mu(S) \lt \infty$ then $\P$ is a probability measure by the scaling result above.
In the context of our previous remarks, $f(x) = g(x) \big/ \mu(S) = g(x) \big/ \sum_{y \in S} g(y)$ for $x \in S$. Distributions of this type are said to be discrete. Discrete distributions are studied in detail in the chapter on Distributions.
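As a concrete illustration of this construction (the weight function $g(x) = x^2$ and the five-point space are arbitrary choices for the example), normalizing amounts to dividing by the constant $\mu(S)$:

```python
S = [1, 2, 3, 4, 5]            # a small finite sample space
g = {x: x ** 2 for x in S}     # an arbitrary nonnegative weight function g
mu_S = sum(g.values())         # the normalizing constant mu(S) = 55

def P(A):
    """Probability of a subset A of S under the normalized measure."""
    return sum(g[x] for x in A) / mu_S

print(P({1, 2}))    # mu({1, 2}) / mu(S) = 5/55
print(P(set(S)))    # 1.0, as required by axiom (b)
```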
If $S$ is finite and $g$ is a constant function, then the probability measure $\P$ associated with $g$ is the discrete uniform distribution on $S$.
Proof
Suppose that $g(x) = c$ for $x \in S$ where $c \gt 0$. Then $\mu(A) = c \#(A)$ and hence $\P(A) = \mu(A) \big/ \mu(S) = \#(A) \big/ \#(S)$ for $A \subseteq S$.
Continuous Distributions
The probability distributions that we will construct next are continuous distributions on $\R^n$ for $n \in \N_+$ and require some calculus.
For $n \in \N_+$, the standard measure $\lambda_n$ on $\R^n$ is given by $\lambda_n(A) = \int_A 1 \, dx, \quad A \subseteq \R^n$ In particular, $\lambda_1(A)$ is the length of $A \subseteq \R$, $\lambda_2(A)$ is the area of $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A \subseteq \R^3$.
Details
Technically, $\lambda_n$ is Lebesgue measure on the measurable subsets of $\R^n$, named for Henri Lebesgue. The representation above in terms of the ordinary Riemann integral of calculus is valid for the subsets that typically occur in applications. As usual, all subsets of $\R^n$ in the discussion below are assumed to be measurable.
When $n \gt 3$, $\lambda_n(A)$ is sometimes called the $n$-dimensional volume of $A \subseteq \R^n$. The probability measure associated with $\lambda_n$ on a set with positive, finite $n$-dimensional volume is particularly important.
Suppose that $S \subseteq \R^n$ with $0 \lt \lambda_n(S) \lt \infty$. The continuous uniform distribution on $S$ is defined by $\P(A) = \frac{\lambda_n(A)}{\lambda_n(S)}, \quad A \subseteq S$
Note that the continuous uniform distribution is analogous to the discrete uniform distribution defined in (8), but with Lebesgue measure $\lambda_n$ replacing counting measure $\#$. We can generalize this construction to produce many other distributions.
Suppose again that $S \subseteq \R^n$ and that $g: S \to [0, \infty)$. Then $\mu$ defined by $\mu(A) = \int_A g(x) \, dx$ for $A \subseteq S$ is a positive measure on $S$. If $0 \lt \mu(S) \lt \infty$, then $\P$ defined as follows is a probability measure on $S$. $\P(A) = \frac{\mu(A)}{\mu(S)} = \frac{\int_A g(x) \, dx}{\int_S g(x) \, dx}, \quad A \subseteq S$
Proof
Technically, the integral in the definition of $\mu(A)$ is the Lebesgue integral, but this integral agrees with the ordinary Riemann integral of calculus when $g$ and $A$ are sufficiently nice. The function $g$ is assumed to be measurable and is the density function of $\mu$ with respect to $\lambda_n$. Technicalities aside, the proof is straightforward:
1. $\mu(A) \ge 0$ for $A \subseteq S$ since $g$ is nonnegative.
2. If $\{A_i: i \in I\}$ is a countable disjoint collection of subsets of $S$ and $A = \bigcup_{i \in I} A_i$, then by a basic property of the integral, $\mu(A) = \int_A g(x) \, dx = \sum_{i \in I} \int_{A_i} g(x) \, dx = \sum_{i \in I} \mu(A_i)$
If $0 \lt \mu(S) \lt \infty$ then $\P$ is a probability measure on $S$ by the scaling result above.
Distributions of this type are said to be continuous. Continuous distributions are studied in detail in the chapter on Distributions. Note that the continuous distribution above is analogous to the discrete distribution in (9), but with integrals replacing sums. The general theory of integration allows us to unify these two special cases, and many others besides.
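Here is a numerical sketch of the continuous construction, with an arbitrary density shape $g(x) = x$ on $S = [0, 2]$ and a crude Riemann sum standing in for the integral:

```python
def riemann(g, a, b, steps=10_000):
    """Midpoint Riemann-sum approximation of the integral of g over [a, b]."""
    dx = (b - a) / steps
    return sum(g(a + (i + 0.5) * dx) for i in range(steps)) * dx

g = lambda x: x                 # arbitrary nonnegative function on S = [0, 2]
mu_S = riemann(g, 0.0, 2.0)     # normalizing constant; exactly 2 here

def P(a, b):
    """P(A) for an interval A = [a, b] contained in S."""
    return riemann(g, a, b) / mu_S

print(P(0.0, 1.0))   # ≈ 1/4, since (1²/2) / 2 = 1/4
print(P(0.0, 2.0))   # ≈ 1.0
```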
Rules of Probability
Basic Rules
Suppose again that we have a random experiment modeled by a probability space $(S, \mathscr S, \P)$, so that $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure. In the following theorems, $A$ and $B$ are events. The results follow easily from the axioms of probability in (1), so be sure to try the proofs yourself before reading the ones in the text.
$\P(A^c) = 1 - \P(A)$. This is known as the complement rule.
Proof
The events $A$ and $A^c$ are disjoint and their union is $S$, so axioms (b) and (c) give $\P(A) + \P(A^c) = \P(S) = 1$.
$\P(\emptyset) = 0$.
Proof
This follows from the complement rule applied to $A = S$.
$\P(B \setminus A) = \P(B) - \P(A \cap B)$. This is known as the difference rule.
Proof
Note that $B = (A \cap B) \cup (B \setminus A)$, and that the events on the right are disjoint. By the additivity axiom, $\P(B) = \P(A \cap B) + \P(B \setminus A)$.
If $A \subseteq B$ then $\P(B \setminus A) = \P(B) - \P(A)$.
Proof
This result is a corollary of the difference rule. Note that $A \cap B = A$.
Recall that if $A \subseteq B$ we sometimes write $B - A$ for the set difference, rather than $B \setminus A$. With this notation, the difference rule has the nice form $\P(B - A) = \P(B) - \P(A)$.
If $A \subseteq B$ then $\P(A) \le \P(B)$.
Proof
This result is a corollary of the previous result. Note that $\P(B \setminus A) \ge 0$ and hence $\P(B) - \P(A) \ge 0$.
Thus, $\P$ is an increasing function, relative to the subset partial order on the collection of events $\mathscr S$, and the ordinary order on $\R$. In particular, it follows that $\P(A) \le 1$ for any event $A$.
Suppose that $A \subseteq B$.
1. If $\P(B) = 0$ then $\P(A) = 0$.
2. If $\P(A) = 1$ then $\P(B) = 1$.
Proof
This follows immediately from the increasing property in the last theorem.
The Boole and Bonferroni Inequalities
The next result is known as Boole's inequality, named after George Boole. The inequality gives a simple upper bound on the probability of a union.
If $\{A_i: i \in I\}$ is a countable collection of events then $\P\left( \bigcup_{i \in I} A_i \right) \le \sum_{i \in I} \P(A_i)$
Proof
Assume that $I = \{1, 2, \ldots\}$ and let $B_i = A_i \setminus (A_1 \cup \cdots \cup A_{i-1})$ for $i \in I$. Then the events $B_i$ are pairwise disjoint, have the same union as the events $A_i$, and satisfy $B_i \subseteq A_i$. By countable additivity and the increasing property, $\P\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \P(B_i) \le \sum_{i \in I} \P(A_i)$.
Intuitively, Boole's inequality holds because parts of the union have been measured more than once in the sum of the probabilities on the right. Of course, the sum of the probabilities may be greater than 1, in which case Boole's inequality is not helpful. The following result is a simple consequence of Boole's inequality.
If $\{A_i: i \in I\}$ is a countable collection of events with $\P(A_i) = 0$ for each $i \in I$, then $\P\left( \bigcup_{i \in I} A_i \right) = 0$
An event $A$ with $\P(A) = 0$ is said to be null. Thus, a countable union of null events is still a null event.
The next result is known as Bonferroni's inequality, named after Carlo Bonferroni. The inequality gives a simple lower bound for the probability of an intersection.
If $\{A_i: i \in I\}$ is a countable collection of events then $\P\left( \bigcap_{i \in I} A_i \right) \ge 1 - \sum_{i \in I}\left[1 - \P(A_i)\right]$
Proof
By De Morgan's law, $\left(\bigcap_{i \in I} A_i\right)^c = \bigcup_{i \in I} A_i^c$. Hence by Boole's inequality, $\P\left[\left(\bigcap_{i \in I} A_i\right)^c\right] \le \sum_{i \in I} \P(A_i^c) = \sum_{i \in I} \left[1 - \P(A_i)\right]$ Using the complement rule again gives Bonferroni's inequality.
Of course, the lower bound in Bonferroni's inequality may be less than or equal to 0, in which case it's not helpful. The following result is a simple consequence of Bonferroni's inequality.
If $\{A_i: i \in I\}$ is a countable collection of events with $\P(A_i) = 1$ for each $i \in I$, then $\P\left( \bigcap_{i \in I} A_i \right) = 1$
An event $A$ with $\P(A) = 1$ is sometimes called almost sure or almost certain. Thus, a countable intersection of almost sure events is still almost sure.
Suppose that $A$ and $B$ are events in an experiment.
1. If $\P(A) = 0$, then $\P(A \cup B) = \P(B)$.
2. If $\P(A) = 1$, then $\P(A \cap B) = \P(B)$.
Proof
1. Using the increasing property and Boole's inequality we have $\P(B) \le \P(A \cup B) \le \P(A) + \P(B) = \P(B)$
2. Using the increasing property and Bonferroni's inequality we have $\P(B) = \P(A) + \P(B) - 1 \le \P(A \cap B) \le \P(B)$
The Partition Rule
Suppose that $\{A_i: i \in I\}$ is a countable collection of events that partition $S$. Recall that this means that the events are disjoint and their union is $S$. For any event $B$, $\P(B) = \sum_{i \in I} \P(A_i \cap B)$
Proof
The events $\{A_i \cap B: i \in I\}$ are disjoint and their union is $B$, so the result follows from countable additivity.
Naturally, this result is useful when the probabilities of the intersections are known. Partitions usually arise in connection with a random variable. Suppose that $X$ is a random variable taking values in a countable set $T$, and that $B$ is an event. Then $\P(B) = \sum_{x \in T} \P(X = x, B)$ In this formula, note that the comma acts like the intersection symbol in the previous formula.
The Inclusion-Exclusion Rule
The inclusion-exclusion formulas provide a method for computing the probability of a union of events in terms of the probabilities of the various intersections of the events. The formula is useful because often the probabilities of the intersections are easier to compute. Interestingly, however, the same formula works for computing the probability of an intersection of events in terms of the probabilities of the various unions of the events. This version is rarely stated, because it's simply not that useful. We start with two events.
If $A, \, B$ are events then $\P(A \cup B) = \P(A) + \P(B) - \P(A \cap B)$.
Proof
Note that $A \cup B = A \cup (B \setminus A)$, and that the events on the right are disjoint. By the additivity axiom and the difference rule, $\P(A \cup B) = \P(A) + \P(B \setminus A) = \P(A) + \P(B) - \P(A \cap B)$.
Here is the complementary result for the intersection in terms of unions:
If $A, \, B$ are events then $\P(A \cap B) = \P(A) + \P(B) - \P(A \cup B)$.
Proof
This follows immediately from the previous formula by rearranging the terms.
Next we consider three events.
If $A, \, B, \, C$ are events then $\P(A \cup B \cup C) = \P(A) + \P(B) + \P(C) - \P(A \cap B) - \P(A \cap C) - \P(B \cap C) + \P(A \cap B \cap C)$.
Analytic Proof
First note that $A \cup B \cup C = (A \cup B) \cup [C \setminus (A \cup B)]$. The event in parentheses and the event in square brackets are disjoint. Thus, using the additivity axiom and the difference rule, $\P(A \cup B \cup C) = \P(A \cup B) + \P(C) - \P\left[C \cap (A \cup B)\right] = \P(A \cup B) + \P(C) - \P\left[(C \cap A) \cup (C \cap B)\right]$ Using the inclusion-exclusion rule for two events (twice) we have $\P(A \cup B \cup C) = \P(A) + \P(B) - \P(A \cap B) + \P(C) - \left[\P(C \cap A) + \P(C \cap B) - \P(A \cap B \cap C)\right]$
Proof by accounting
As in a Venn diagram, $A \cup B \cup C$ is the union of 7 disjoint pieces, and each piece is measured a net of one time by the right-hand side; this is the special case $n = 3$ of the general accounting proof given below.
Here is the complementary result for the probability of an intersection in terms of the probabilities of the unions:
If $A, \, B, \, C$ are events then $\P(A \cap B \cap C) = \P(A) + \P(B) + \P(C) - \P(A \cup B) - \P(A \cup C) - \P(B \cup C) + \P(A \cup B \cup C)$.
Proof
This follows from solving for $\P(A \cap B \cap C)$ in the previous result, and then using the result for two events on $\P(A \cap B)$, $\P(B \cap C)$, and $\P(A \cap C)$.
The inclusion-exclusion formulas for two and three events can be generalized to $n$ events. For the remainder of this discussion, suppose that $\{A_i: i \in I\}$ is a collection of events where $I$ is an index set with $\#(I) = n$.
The general inclusion-exclusion formula for the probability of a union. $\P\left( \bigcup_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \P\left( \bigcap_{j \in J} A_j \right)$
Proof by induction
The proof is by induction on $n$. We have already established the formula for $n = 2$ and $n = 3$. Thus, suppose that the inclusion-exclusion formula holds for a given $n$, and suppose that $(A_1, A_2, \ldots, A_{n+1})$ is a sequence of $n + 1$ events. Then $\bigcup_{i=1}^{n + 1} A_i = \left(\bigcup_{i=1}^n A_i \right) \cup \left[ A_{n+1} \setminus \left(\bigcup_{i=1}^n A_i\right) \right]$ As before, the event in parentheses and the event in square brackets are disjoint. Thus using the additivity axiom, the difference rule, and the distributive rule we have $\P\left(\bigcup_{i=1}^{n+1} A_i\right) = \P\left(\bigcup_{i=1}^n A_i\right) + \P(A_{n+1}) - \P\left(\bigcup_{i=1}^n (A_{n+1} \cap A_i) \right)$ By the induction hypothesis, the inclusion-exclusion formula holds for each union of $n$ events on the right. Applying the formula and simplifying gives the inclusion-exclusion formula for $n + 1$ events.
Proof by accounting
This is the general version of the same argument we used above for 3 events. $\bigcup_{i \in I} A_i$ is the union of the disjoint events of the form $\left(\bigcap_{i \in K} A_i\right) \cap \left(\bigcap_{i \in K^c} A_i^c\right)$ where $K$ is a nonempty subset of the index set $I$. In the inclusion-exclusion formula, the event corresponding to a given $K$ is measured in $\P\left(\bigcap_{j \in J} A_j\right)$ for every nonempty $J \subseteq K$. Suppose that $\#(K) = k$. Accounting for the positive and negative signs, the net measurement is $\sum_{j=1}^k (-1)^{j-1} \binom{k}{j} = 1$.
Here is the complementary result for the probability of an intersection in terms of the probabilities of the various unions:
The general inclusion-exclusion formula for the probability of an intersection. $\P\left( \bigcap_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \P\left( \bigcup_{j \in J} A_j \right)$
The general inclusion-exclusion formulas are not worth remembering in detail, but only in pattern. For the probability of a union, we start with the sum of the probabilities of the events, then subtract the probabilities of all of the paired intersections, then add the probabilities of the third-order intersections, and so forth, alternating signs, until we get to the probability of the intersection of all of the events.
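The pattern is easy to check by brute force on a small finite space. The sketch below implements the inclusion-exclusion sum for an arbitrary list of events (represented as sets, under a uniform probability) and compares it with the probability of the union computed directly; the particular events are arbitrary choices.

```python
from itertools import combinations

def union_prob(events, prob):
    """Inclusion-exclusion: P(union) from the probabilities of intersections."""
    total = 0.0
    for k in range(1, len(events) + 1):
        for J in combinations(events, k):
            total += (-1) ** (k - 1) * prob(set.intersection(*J))
    return total

S = set(range(10))                    # uniform space with 10 points
prob = lambda A: len(A) / len(S)
A, B, C = {0, 1, 2, 3}, {2, 3, 4, 5}, {0, 5, 6}

print(union_prob([A, B, C], prob))    # inclusion-exclusion: 0.7
print(prob(A | B | C))                # direct computation:  0.7
```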
The general Bonferroni inequalities (for a union) state that if the sum on the right in the general inclusion-exclusion formula is truncated, then the truncated sum is an upper bound or a lower bound for the probability on the left, depending on whether the last term has a positive or negative sign. Here is the result stated explicitly:
Suppose that $m \in \{1, 2, \ldots, n - 1\}$. Then
1. $\P\left( \bigcup_{i \in I} A_i \right) \le \sum_{k = 1}^m (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \P\left( \bigcap_{j \in J} A_j \right)$ if $m$ is odd.
2. $\P\left( \bigcup_{i \in I} A_i \right) \ge \sum_{k = 1}^m (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \P\left( \bigcap_{j \in J} A_j \right)$ if $m$ is even.
Proof
Let $P_k = \sum_{J \subseteq I, \; \#(J) = k} \P\left( \bigcap_{j \in J} A_j \right)$, the absolute value of the $k$th term in the inclusion-exclusion formula. The result follows since the inclusion-exclusion formula is an alternating series, and $P_k$ is decreasing in $k$.
More elegant proofs of the inclusion-exclusion formula and the Bonferroni inequalities can be constructed using expected value.
Note that there is a probability term in the inclusion-exclusion formulas for every nonempty subset $J$ of the index set $I$, with either a positive or negative sign, and hence there are $2^n - 1$ such terms. These probabilities suffice to compute the probability of any event that can be constructed from the given events, not just the union or the intersection.
The probability of any event that can be constructed from $\{A_i: i \in I\}$ can be computed from either of the following collections of $2^n - 1$ probabilities:
1. $\P\left(\bigcap_{j \in J} A_j\right)$ where $J$ is a nonempty subset of $I$.
2. $\P\left(\bigcup_{j \in J} A_j\right)$ where $J$ is a nonempty subset of $I$.
Remark
If you go back and look at your proofs of the rules of probability above, you will see that they hold for any finite measure $\mu$, not just probability. The only change is that the number 1 is replaced by $\mu(S)$. In particular, the inclusion-exclusion rule is as important in combinatorics (the study of counting measure) as it is in probability.
Examples and Applications
Probability Rules
Suppose that $A$ and $B$ are events in an experiment with $\P(A) = \frac{1}{3}$, $\P(B) = \frac{1}{4}$, $\P(A \cap B) = \frac{1}{10}$. Express each of the following events in the language of the experiment and find its probability:
1. $A \setminus B$
2. $A \cup B$
3. $A^c \cup B^c$
4. $A^c \cap B^c$
5. $A \cup B^c$
Answer
1. $A$ occurs but not $B$. $\frac{7}{30}$
2. $A$ or $B$ occurs. $\frac{29}{60}$
3. One of the events does not occur. $\frac{9}{10}$
4. Neither event occurs. $\frac{31}{60}$
5. Either $A$ occurs or $B$ does not occur. $\frac{17}{20}$
Suppose that $A$, $B$, and $C$ are events in an experiment with $\P(A) = 0.3$, $\P(B) = 0.2$, $\P(C) = 0.4$, $\P(A \cap B) = 0.04$, $\P(A \cap C) = 0.1$, $\P(B \cap C) = 0.1$, $\P(A \cap B \cap C) = 0.01$. Express each of the following events in set notation and find its probability:
1. At least one of the three events occurs.
2. None of the three events occurs.
3. Exactly one of the three events occurs.
4. Exactly two of the three events occur.
Answer
1. $\P(A \cup B \cup C) = 0.67$
2. $\P[(A \cup B \cup C)^c] = 0.37$
3. $\P[(A \cap B^c \cap C^c) \cup (A^c \cap B \cap C^c) \cup (A^c \cap B^c \cap C)] = 0.45$
4. $\P[(A \cap B \cap C^c) \cup (A \cap B^c \cap C) \cup (A^c \cap B \cap C)] = 0.21$
Suppose that $A$ and $B$ are events in an experiment with $\P(A \setminus B) = \frac{1}{6}$, $\P(B \setminus A) = \frac{1}{4}$, and $\P(A \cap B) = \frac{1}{12}$. Find the probability of each of the following events:
1. $A$
2. $B$
3. $A \cup B$
4. $A^c \cup B^c$
5. $A^c \cap B^c$
Answer
1. $\frac{1}{4}$
2. $\frac{1}{3}$
3. $\frac{1}{2}$
4. $\frac{11}{12}$
5. $\frac{1}{2}$
Suppose that $A$ and $B$ are events in an experiment with $\P(A) = \frac{2}{5}$, $\P(A \cup B) = \frac{7}{10}$, and $\P(A \cap B) = \frac{1}{6}$. Find the probability of each of the following events:
1. $B$
2. $A \setminus B$
3. $B \setminus A$
4. $A^c \cup B^c$
5. $A^c \cap B^c$
Answer
1. $\frac{7}{15}$
2. $\frac{7}{30}$
3. $\frac{3}{10}$
4. $\frac{5}{6}$
5. $\frac{3}{10}$
Suppose that $A$, $B$, and $C$ are events in an experiment with $\P(A) = \frac{1}{3}$, $\P(B) = \frac{1}{4}$, $\P(C) = \frac{1}{5}$.
1. Use Boole's inequality to find an upper bound for $\P(A \cup B \cup C)$.
2. Use Bonferroni's inequality to find a lower bound for $\P(A \cap B \cap C)$.
Answer
1. $\frac{47}{60}$
2. $-\frac{83}{60}$, not helpful.
Open the simple probability experiment.
1. Note the 16 events that can be constructed from $A$ and $B$ using the set operations of union, intersection, and complement.
2. Given $\P(A)$, $\P(B)$, and $\P(A \cap B)$ in the table, use the rules of probability to verify the probabilities of the other events.
3. Run the experiment 1000 times and compare the relative frequencies of the events with the probabilities of the events.
Suppose that $A$, $B$, and $C$ are events in a random experiment with $\P(A) = 1/4$, $\P(B) = 1/3$, $\P(C) = 1/6$, $\P(A \cap B) = 1/18$, $\P(A \cap C) = 1/16$, $\P(B \cap C) = 1/12$, and $\P(A \cap B \cap C) = 1/24$. Find the probabilities of the various unions:
1. $A \cup B$
2. $A \cup C$
3. $B \cup C$
4. $A \cup B \cup C$
Answer
1. $19/36$
2. $17/48$
3. $5/12$
4. $85/144$
Suppose that $A$, $B$, and $C$ are events in a random experiment with $\P(A) = 1/4$, $\P(B) = 1/4$, $\P(C) = 5/16$, $\P(A \cup B) = 7/16$, $\P(A \cup C) = 23/48$, $\P(B \cup C) = 11/24$, and $\P(A \cup B \cup C) = 7/12$. Find the probabilities of the various intersections:
1. $A \cap B$
2. $A \cap C$
3. $B \cap C$
4. $A \cap B \cap C$
Answer
1. $1/16$
2. $1/12$
3. $5/48$
4. $1/48$
Suppose that $A$, $B$, and $C$ are events in a random experiment. Explicitly give all of the Bonferroni inequalities for $\P(A \cup B \cup C)$.
Proof
1. $\P(A \cup B \cup C) \le \P(A) + \P(B) + \P(C)$
2. $\P(A \cup B \cup C) \ge \P(A) + \P(B) + \P(C) - \P(A \cap B) - \P(A \cap C) - \P(B \cap C)$
3. $\P(A \cup B \cup C) = \P(A) + \P(B) + \P(C) - \P(A \cap B) - \P(A \cap C) - \P(B \cap C) + \P(A \cap B \cap C)$
Coins
Consider the random experiment of tossing a coin $n$ times and recording the sequence of scores $\bs{X} = (X_1, X_2, \ldots, X_n)$ (where 1 denotes heads and 0 denotes tails). This experiment is a generic example of $n$ Bernoulli trials, named for Jacob Bernoulli. Note that the set of outcomes is $S = \{0, 1\}^n$, the set of bit strings of length $n$. If the coin is fair, then presumably, by the very meaning of the word, we have no reason to prefer one point in $S$ over another. Thus, as a modeling assumption, it seems reasonable to give $S$ the uniform probability distribution in which all outcomes are equally likely.
Suppose that a fair coin is tossed 3 times and the sequence of coin scores is recorded. Let $A$ be the event that the first coin is heads and $B$ the event that there are exactly 2 heads. Give each of the following events in list form, and then compute the probability of the event:
1. $A$
2. $B$
3. $A \cap B$
4. $A \cup B$
5. $A^c \cup B^c$
6. $A^c \cap B^c$
7. $A \cup B^c$
Answer
1. $\{100, 101, 110, 111\}$, $\frac{1}{2}$
2. $\{110, 101, 011\}$, $\frac{3}{8}$
3. $\{110, 101\}$, $\frac{1}{4}$
4. $\{100, 101, 110, 111, 011\}$, $\frac{5}{8}$
5. $\{000, 001, 010, 100, 011, 111\}$, $\frac{3}{4}$
6. $\{000, 001, 010\}$, $\frac{3}{8}$
7. $\{100, 101, 110, 111, 000, 010, 001\}$, $\frac{7}{8}$
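Since the sample space has only 8 points, the probabilities above can be verified by brute-force enumeration; the sketch below hard-codes the two events of this exercise.

```python
from itertools import product

S = set(product([0, 1], repeat=3))      # the 8 outcomes, equally likely
A = {s for s in S if s[0] == 1}         # first toss is heads
B = {s for s in S if sum(s) == 2}       # exactly 2 heads
P = lambda E: len(E) / len(S)

print(P(A), P(B), P(A & B), P(A | B))   # 1/2, 3/8, 1/4, 5/8
```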
In the Coin experiment, select 3 coins. Run the experiment 1000 times, updating after every run, and compute the empirical probability of each event in the previous exercise.
Suppose that a fair coin is tossed 4 times and the sequence of scores is recorded. Let $Y$ denote the number of heads. Give the event $\{Y = k\}$ (as a subset of the sample space) in list form, for each $k \in \{0, 1, 2, 3, 4\}$, and then give the probability of the event.
Answer
1. $\{Y = 0\} = \{0000\}$, $\P(Y = 0) = \frac{1}{16}$
2. $\{Y = 1\} = \{1000, 0100, 0010, 0001\}$, $\P(Y = 1) = \frac{4}{16}$
3. $\{Y = 2\} = \{1100, 1010, 1001, 0110, 0101, 0011\}$, $\P(Y = 2) = \frac{6}{16}$
4. $\{Y = 3\} = \{1110, 1101, 1011, 0111\}$, $\P(Y = 3) = \frac{4}{16}$
5. $\{Y = 4\} = \{1111\}$, $\P(Y = 4) = \frac{1}{16}$
Suppose that a fair coin is tossed $n$ times and the sequence of scores is recorded. Let $Y$ denote the number of heads.
$\P(Y = k) = \binom{n}{k} \left( \frac{1}{2} \right)^n, \quad k \in \{0, 1, \ldots, n\}$
Proof
The number of bit strings of length $n$ is $2^n$, and since the coin is fair, these are equally likely. The number of bit strings of length $n$ with exactly $k$ 1's is $\binom{n}{k}$. Hence the probability of 1 occurring exactly $k$ times is $\binom{n}{k} \big/ 2^n$.
The distribution of $Y$ in the last exercise is a special case of the binomial distribution. The binomial distribution is studied in more detail in the chapter on Bernoulli Trials.
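The formula is a one-liner with the standard library's binomial coefficient; checking it against the $n = 4$ table in the previous exercise:

```python
from math import comb

def p_heads(n, k):
    """P(Y = k) = C(n, k) / 2^n for the number of heads Y in n fair tosses."""
    return comb(n, k) / 2 ** n

print([p_heads(4, k) for k in range(5)])   # 1/16, 4/16, 6/16, 4/16, 1/16
```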
Dice
Consider the experiment of throwing $n$ distinct, $k$-sided dice (with faces numbered from 1 to $k$) and recording the sequence of scores $\bs{X} = (X_1, X_2, \ldots, X_n)$. We can record the outcome as a sequence because of the assumption that the dice are distinct; you can think of the dice as somehow labeled from 1 to $n$, or perhaps with different colors. The special case $k = 6$ corresponds to standard dice. In general, note that the set of outcomes is $S = \{1, 2, \ldots, k\}^n$. If the dice are fair, then again, by the very meaning of the word, we have no reason to prefer one point in $S$ over another, so as a modeling assumption it seems reasonable to give $S$ the uniform probability distribution.
Suppose that two fair, standard dice are thrown and the sequence of scores recorded. Let $A$ denote the event that the first die score is less than 3 and $B$ the event that the sum of the dice scores is 6. Give each of the following events in list form and then find the probability of the event.
1. $A$
2. $B$
3. $A \cap B$
4. $A \cup B$
5. $B \setminus A$
Answer
1. $\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(2,1),(2,2),(2,3),(2,4),(2,5),(2,6)\}$, $\frac{12}{36}$
2. $\{(1,5),(5,1),(2,4),(4,2),(3,3)\}$, $\frac{5}{36}$
3. $\{(1,5), (2,4)\}$, $\frac{2}{36}$
4. $\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(2,1),(2,2),(2,3),(2,4),(2,5),(2,6),(5,1),(4,2),(3,3)\}$, $\frac{15}{36}$
5. $\{(5,1), (4,2), (3,3)\}$, $\frac{3}{36}$
In the dice experiment, set $n = 2$. Run the experiment 100 times and compute the empirical probability of each event in the previous exercise.
Consider again the dice experiment with $n = 2$ fair dice. Let $S$ denote the set of outcomes, $Y$ the sum of the scores, $U$ the minimum score, and $V$ the maximum score.
1. Express $Y$ as a function on $S$ and give the set of values.
2. Find $\P(Y = y)$ for each $y$ in the set in part (a).
3. Express $U$ as a function on $S$ and give the set of values.
4. Find $\P(U = u)$ for each $u$ in the set in part (c).
5. Express $V$ as a function on $S$ and give the set of values.
6. Find $\P(V = v)$ for each $v$ in the set in part (e).
7. Find the set of values of $(U, V)$.
8. Find $\P(U = u, V = v)$ for each $(u, v)$ in the set in part (g).
Answer
Note that $S = \{1, 2, 3, 4, 5, 6\}^2$.
1. $Y(x_1, x_2) = x_1 + x_2$ for $(x_1, x_2) \in S$. The set of values is $\{2, 3, \ldots, 12\}$
2. $y$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Y = y)$ $\frac{1}{36}$ $\frac{2}{36}$ $\frac{3}{36}$ $\frac{4}{36}$ $\frac{5}{36}$ $\frac{6}{36}$ $\frac{5}{36}$ $\frac{4}{36}$ $\frac{3}{36}$ $\frac{2}{36}$ $\frac{1}{36}$
3. $U(x_1, x_2) = \min\{x_1, x_2\}$ for $(x_1, x_2) \in S$. The set of values is $\{1, 2, 3, 4, 5, 6\}$
4. $u$ 1 2 3 4 5 6
$\P(U = u)$ $\frac{11}{36}$ $\frac{9}{36}$ $\frac{7}{36}$ $\frac{5}{36}$ $\frac{3}{36}$ $\frac{1}{36}$
5. $V(x_1, x_2) = \max\{x_1, x_2\}$ for $(x_1, x_2) \in S$. The set of values is $\{1, 2, 3, 4, 5, 6\}$
6. $v$ 1 2 3 4 5 6
$\P(V = v)$ $\frac{1}{36}$ $\frac{3}{36}$ $\frac{5}{36}$ $\frac{7}{36}$ $\frac{9}{36}$ $\frac{11}{36}$
7. $\left\{(u, v) \in S: u \le v\right\}$
8. $\P(U = u, V = v) = \begin{cases} \frac{2}{36}, & u \lt v \\ \frac{1}{36}, & u = v \end{cases}$
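The distributions of $Y$, $U$, and $V$ above can be reproduced by enumerating the 36 equally likely outcomes directly; a minimal sketch:

```python
from itertools import product
from collections import Counter

S = list(product(range(1, 7), repeat=2))        # the 36 outcomes

dist_Y = Counter(x1 + x2 for x1, x2 in S)       # sum of the scores
dist_U = Counter(min(x1, x2) for x1, x2 in S)   # minimum score
dist_V = Counter(max(x1, x2) for x1, x2 in S)   # maximum score

for name, dist in [("Y", dist_Y), ("U", dist_U), ("V", dist_V)]:
    print(name, {k: f"{c}/36" for k, c in sorted(dist.items())})
```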
In the previous exercise, note that $(U, V)$ could serve as the outcome vector for the experiment of rolling two standard, fair dice if we do not bother to distinguish the dice (so that we might as well record the smaller score first and then the larger score). Note that this random vector does not have a uniform distribution. On the other hand, we might have chosen at the beginning to just record the unordered set of scores and, as a modeling assumption, imposed the uniform distribution on the corresponding set of outcomes. Both models cannot be right, so which model (if either) describes real dice in the real world? It turns out that for real (fair) dice, the ordered sequence of scores is uniformly distributed, so real dice behave as distinct objects, whether you can tell them apart or not. In the early history of probability, gamblers sometimes got the wrong answers for events involving dice because they mistakenly applied the uniform distribution to the set of unordered scores. It's an important moral. If we are to impose the uniform distribution on a sample space, we need to make sure that it's the right sample space.
A pair of fair, standard dice are thrown repeatedly until the sum of the scores is either 5 or 7. Let $A$ denote the event that the sum of the scores on the last throw is 5 rather than 7. Events of this type are important in the game of craps.
1. Suppose that we record the pair of scores on each throw. Give the set of outcomes $S$ and express $A$ as a subset of $S$.
2. Compute the probability of $A$ in the setting of part (a).
3. Now suppose that we just record the pair of scores on the last throw. Give the set of outcomes $T$ and express $A$ as a subset of $T$.
4. Compute the probability of $A$ in the setting of part (c).
Answer
Let $D_5 = \{(1,4), (2,3), (3,2), (4,1)\}$, $D_7 = \{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$, $D = D_5 \cup D_7$, $C = \{1, 2, 3, 4, 5, 6\}^2 \setminus D$
1. $S = D \cup (C \times D) \cup (C^2 \times D) \cup \cdots$, $A = D_5 \cup (C \times D_5) \cup (C^2 \times D_5) \cup \cdots$
2. $\frac{2}{5}$
3. $T = D$, $A = D_5$
4. $\frac{2}{5}$
The previous problem shows the importance of defining the set of outcomes appropriately. Sometimes a clever choice of this set (and appropriate modeling assumptions) can turn a difficult problem into an easy one.
Sampling Models
Recall that many random experiments can be thought of as sampling experiments. For the general finite sampling model, we start with a population $D$ with $m$ (distinct) objects. We select a sample of $n$ objects from the population, so that the sample space $S$ is the set of possible samples. If we select a sample at random then the outcome $\bs{X}$ (the random sample) is uniformly distributed on $S$: $\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$ Recall from the section on Combinatorial Structures that there are four common types of sampling from a finite population, based on the criteria of order and replacement.
• If the sampling is with replacement and with regard to order, then the set of samples is the Cartesian power $D^n$. The number of samples is $m^n$.
• If the sampling is without replacement and with regard to order, then the set of samples is the set of all permutations of size $n$ from $D$. The number of samples is $m^{(n)} = m (m - 1) \cdots (m - n + 1)$.
• If the sampling is without replacement and without regard to order, then the set of samples is the set of all combinations (or subsets) of size $n$ from $D$. The number of samples is $\binom{m}{n} = m^{(n)} / n!$.
• If the sampling is with replacement and without regard to order, then the set of samples is the set of all multisets of size $n$ from $D$. The number of samples is $\binom{m + n - 1}{n}$.
If we sample with replacement, the sample size $n$ can be any positive integer. If we sample without replacement, the sample size cannot exceed the population size, so we must have $n \in \{1, 2, \ldots, m\}$.
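The four sample counts are available directly in Python's standard library; the population size and sample size below are arbitrary choices for illustration.

```python
from math import comb, perm

m, n = 10, 4   # arbitrary population and sample sizes

print(m ** n)               # ordered, with replacement
print(perm(m, n))           # ordered, without replacement: m^(n)
print(comb(m, n))           # unordered, without replacement
print(comb(m + n - 1, n))   # unordered, with replacement (multisets)
```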
The basic coin and dice experiments are examples of sampling with replacement. If we toss a fair coin $n$ times and record the sequence of scores $\bs{X}$ (where as usual, 0 denotes tails and 1 denotes heads), then $\bs{X}$ is a random sample of size $n$ chosen with order and with replacement from the population $\{0, 1\}$. Thus, $\bs{X}$ is uniformly distributed on $\{0, 1\}^n$. If we throw $n$ (distinct) standard fair dice and record the sequence of scores, then we generate a random sample $\bs{X}$ of size $n$ with order and with replacement from the population $\{1, 2, 3, 4, 5, 6\}$. Thus, $\bs{X}$ is uniformly distributed on $\{1, 2, 3, 4, 5, 6\}^n$. An analogous result holds for fair, $k$-sided dice.
Suppose that the sampling is without replacement (the most common case). If we record the ordered sample $\bs{X} = (X_1, X_2, \ldots, X_n)$, then the unordered sample $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ is a random variable (that is, a function of $\bs{X}$). On the other hand, if we just record the unordered sample $\bs{W}$ in the first place, then we cannot recover the ordered sample.
Suppose that $\bs{X}$ is a random sample of size $n$ chosen with order and without replacement from $D$, so that $\bs{X}$ is uniformly distributed on the space of permutations of size $n$ from $D$. Then $\bs{W}$, the unordered sample, is uniformly distributed on the space of combinations of size $n$ from $D$. Thus, $\bs{W}$ is also a random sample.
Proof
Let $\bs{w}$ be a combination of size $n$ from $D$. Then there are $n!$ permutations of the elements in $\bs{w}$. If $A$ denotes this set of permutations, then $\P(\bs{W} = \bs{w}) = \P(\bs{X} \in A) = n! \big/ m^{(n)} = 1 \big/ \binom{m}{n}$.
The result in the last exercise does not hold if the sampling is with replacement (recall the exercise above and the discussion that follows). When sampling with replacement, there is no simple relationship between the number of ordered samples and the number of unordered samples.
Sampling From a Dichotomous Population
Suppose again that we have a population $D$ with $m$ (distinct) objects, but suppose now that each object is one of two types—either type 1 or type 0. Such populations are said to be dichotomous. Here are some specific examples:
• The population consists of persons, each either male or female.
• The population consists of voters, each either democrat or republican.
• The population consists of devices, each either good or defective.
• The population consists of balls, each either red or green.
Suppose that the population $D$ has $r$ type 1 objects and hence $m - r$ type 0 objects. Of course, we must have $r \in \{0, 1, \ldots, m\}$. Now suppose that we select a sample of size $n$ at random from the population. Note that this model has three parameters: the population size $m$, the number of type 1 objects in the population $r$, and the sample size $n$. Let $Y$ denote the number of type 1 objects in the sample.
If the sampling is without replacement then $\P(Y = y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
Proof
Recall that the unordered sample is uniformly distributed over the set of combinations of size $n$ chosen from the population. There are $\binom{m}{n}$ such samples. By the multiplication principle, the number of samples with exactly $y$ type 1 objects and $n - y$ type 0 objects is $\binom{r}{y} \binom{m - r}{n - y}$.
In the previous exercise, the random variable $Y$ has the hypergeometric distribution with parameters $m$, $r$, and $n$. The hypergeometric distribution is studied in more detail in the chapter on Finite Sampling Models.
If the sampling is with replacement then $\P(Y = y) = \binom{n}{y} \frac{r^y (m - r)^{n-y}}{m^n} = \binom{n}{y} \left( \frac{r}{m}\right)^y \left( 1 - \frac{r}{m} \right)^{n - y}, \quad y \in \{0, 1, \ldots, n\}$
Proof
Recall that the ordered sample is uniformly distributed over the set $D^n$ and there are $m^n$ elements in this set. To count the number of samples with exactly $y$ type 1 objects, we use a three-step procedure: first, select the coordinates for the type 1 objects; there are $\binom{n}{y}$ choices. Next select the $y$ type 1 objects for these coordinates; there are $r^y$ choices. Finally select the $n - y$ type 0 objects for the remaining coordinates of the sample; there are $(m - r)^{n - y}$ choices. The result now follows from the multiplication principle.
In the last exercise, the random variable $Y$ has the binomial distribution with parameters $n$ and $p = \frac{r}{m}$. The binomial distribution is studied in more detail in the chapter on Bernoulli Trials.
Suppose that a group of voters consists of 40 democrats and 30 republicans. A sample of 10 voters is chosen at random. Find the probability that the sample contains at least 4 democrats and at least 4 republicans in each of the following cases:
1. The sampling is without replacement.
2. The sampling is with replacement.
Answer
1. $\frac{1\,391\,351\,589}{2\,176\,695\,188} \approx 0.6382$
2. $\frac{24\,509\,952}{40\,353\,607} \approx 0.6074$
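Both answers follow by summing the hypergeometric and binomial formulas above over $y \in \{4, 5, 6\}$ (at least 4 republicans means at most 6 democrats); a sketch:

```python
from math import comb

m, r, n = 70, 40, 10   # population size, democrats, sample size

# Without replacement: hypergeometric
hyper = sum(comb(r, y) * comb(m - r, n - y) for y in range(4, 7)) / comb(m, n)

# With replacement: binomial with p = r/m
p = r / m
binom = sum(comb(n, y) * p ** y * (1 - p) ** (n - y) for y in range(4, 7))

print(hyper, binom)    # ≈ 0.6382 and ≈ 0.6074
```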
Look for other specialized sampling situations in the exercises below.
Urn Models
Drawing balls from an urn is a standard metaphor in probability for sampling from a finite population.
Consider an urn with 30 balls; 10 are red and 20 are green. A sample of 5 balls is chosen at random, without replacement. Let $Y$ denote the number of red balls in the sample. Explicitly compute $\P(Y = y)$ for each $y \in \{0, 1, 2, 3, 4, 5\}$.
Answer
$y$ 0 1 2 3 4 5
$\P(Y = y)$ $\frac{2584}{23751}$ $\frac{8075}{23751}$ $\frac{8550}{23751}$ $\frac{3800}{23751}$ $\frac{700}{23751}$ $\frac{42}{23751}$
In the simulation of the ball and urn experiment, select 30 balls with 10 red and 20 green, sample size 5, and sampling without replacement. Run the experiment 1000 times and compare the empirical probabilities with the true probabilities that you computed in the previous exercise.
Consider again an urn with 30 balls; 10 are red and 20 are green. A sample of 5 balls is chosen at random, with replacement. Let $Y$ denote the number of red balls in the sample. Explicitly compute $\P(Y = y)$ for each $y \in \{0, 1, 2, 3, 4, 5\}$.
Answer
$y$ 0 1 2 3 4 5
$\P(Y = y)$ $\frac{32}{243}$ $\frac{80}{243}$ $\frac{80}{243}$ $\frac{40}{243}$ $\frac{10}{243}$ $\frac{1}{243}$
In the simulation of the ball and urn experiment, select 30 balls with 10 red and 20 green, sample size 5, and sampling with replacement. Run the experiment 1000 times and compare the empirical probabilities with the true probabilities that you computed in the previous exercise.
An urn contains 15 balls: 6 are red, 5 are green, and 4 are blue. Three balls are chosen at random, without replacement.
1. Find the probability that the chosen balls are all the same color.
2. Find the probability that the chosen balls are all different colors.
Answer
1. $\frac{34}{455}$
2. $\frac{120}{455}$
Suppose again that an urn contains 15 balls: 6 are red, 5 are green, and 4 are blue. Three balls are chosen at random, with replacement.
1. Find the probability that the chosen balls are all the same color.
2. Find the probability that the chosen balls are all different colors.
Answer
1. $\frac{405}{3375}$
2. $\frac{720}{3375}$
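All four answers in the last two exercises reduce to the counting arguments above; a sketch that computes them:

```python
from math import comb

counts = [6, 5, 4]                  # red, green, blue
total = sum(counts)                 # 15 balls

# Without replacement (unordered samples, C(15, 3) of them)
same = sum(comb(c, 3) for c in counts) / comb(total, 3)    # 34/455
diff = (6 * 5 * 4) / comb(total, 3)                        # 120/455

# With replacement (ordered samples, 15^3 of them)
same_r = sum(c ** 3 for c in counts) / total ** 3          # 405/3375
diff_r = 6 * (6 * 5 * 4) / total ** 3   # 3! orderings, one ball of each color

print(same, diff, same_r, diff_r)
```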
Cards
Recall that a standard card deck can be modeled by the product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, j, q, k\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate encodes the denomination or kind (ace, 2–10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $q \heartsuit$ for the queen of hearts).
Card games involve choosing a random sample without replacement from the deck $D$, which plays the role of the population. Thus, the basic card experiment consists of dealing $n$ cards from a standard deck without replacement; in this special context, the sample of cards is often referred to as a hand. Just as in the general sampling model, if we record the ordered hand $\bs{X} = (X_1, X_2, \ldots, X_n)$, then the unordered hand $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ is a random variable (that is, a function of $\bs{X}$). On the other hand, if we just record the unordered hand $\bs{W}$ in the first place, then we cannot recover the ordered hand. Finally, recall that $n = 5$ is the poker experiment and $n = 13$ is the bridge experiment. The game of poker is treated in more detail in the chapter on Games of Chance. By the way, it takes about 7 standard riffle shuffles to randomize a deck of cards.
Suppose that 2 cards are dealt from a well-shuffled deck and the sequence of cards is recorded. For $i \in \{1, 2\}$, let $H_i$ denote the event that card $i$ is a heart. Find the probability of each of the following events.
1. $H_1$
2. $H_1 \cap H_2$
3. $H_2 \setminus H_1$
4. $H_2$
5. $H_1 \setminus H_2$
6. $H_1 \cup H_2$
Answer
1. $\frac{1}{4}$
2. $\frac{1}{17}$
3. $\frac{13}{68}$
4. $\frac{1}{4}$
5. $\frac{13}{68}$
6. $\frac{15}{34}$
Think about the results in the previous exercise, and suppose that we continue dealing cards. Note that in computing the probability of $H_i$, you could conceptually reduce the experiment to dealing a single card. Note also that the probabilities do not depend on the order in which the cards are dealt. For example, the probability of an event involving the 1st, 2nd and 3rd cards is the same as the probability of the corresponding event involving the 25th, 17th, and 40th cards. Technically, the cards are exchangeable. Here's another way to think of this concept: Suppose that the cards are dealt onto a table in some pattern, but you are not allowed to view the process. Then no experiment that you can devise will give you any information about the order in which the cards were dealt.
In the card experiment, set $n = 2$. Run the experiment 100 times and compute the empirical probability of each event in the previous exercise
In the poker experiment, find the probability of each of the following events:
1. The hand is a full house (3 cards of one kind and 2 cards of another kind).
2. The hand has four of a kind (4 cards of one kind and 1 of another kind).
3. The cards are all in the same suit (thus, the hand is either a flush or a straight flush).
Answer
1. $\frac{3744}{2\,598\,960} \approx 0.001441$
2. $\frac{624}{2\,598\,960} \approx 0.000240$
3. $\frac{5148}{2\,598\,960} \approx 0.001981$
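These probabilities follow from the multiplication principle; a sketch that reproduces the counts and probabilities above:

```python
from math import comb

hands = comb(52, 5)                              # 2,598,960 poker hands

full_house = 13 * comb(4, 3) * 12 * comb(4, 2)   # kind of the triple, then the pair
four_kind  = 13 * comb(4, 4) * 48                # the quadruple, then any 5th card
same_suit  = 4 * comb(13, 5)                     # flushes, including straight flushes

for count in (full_house, four_kind, same_suit):
    print(count, count / hands)
```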
Run the poker experiment 10000 times, updating every 10 runs. Compute the empirical probability of each event in the previous problem.
Find the probability that a bridge hand will contain no honor cards; that is, no cards of denomination 10, jack, queen, king, or ace. Such a hand is called a Yarborough, in honor of the second Earl of Yarborough.
Answer
$\frac{347\,373\,600}{635\,013\,559\,600} \approx 0.000547$
Find the probability that a bridge hand will contain
1. Exactly 4 hearts.
2. Exactly 4 hearts and 3 spades.
3. Exactly 4 hearts, 3 spades, and 2 clubs.
Answer
1. $\frac{151\,519\,319\,380}{635\,013\,559\,600} \approx 0.2386$
2. $\frac{47\,079\,732\,700}{635\,013\,559\,600} \approx 0.0741$
3. $\frac{11\,404\,407\,300}{635\,013\,559\,600} \approx 0.0179$
A card hand that contains no cards in a particular suit is said to be void in that suit. Use the inclusion-exclusion rule to find the probability of each of the following events:
1. A poker hand is void in at least one suit.
2. A bridge hand is void in at least one suit.
Answer
1. $\frac{1\,913\,496}{2\,598\,960} \approx 0.7363$
2. $\frac{32\,427\,298\,180}{635\,013\,559\,600} \approx 0.051$
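Both parts are direct applications of the inclusion-exclusion rule, since the events void in suit $i$ overlap; a sketch (a hand cannot be void in all four suits, so the sum stops at $k = 3$):

```python
from math import comb

def void_prob(n):
    """P(a hand of n cards is void in at least one suit), by inclusion-exclusion:
    sum over k of (-1)^(k-1) * C(4, k) * C(52 - 13k, n) / C(52, n)."""
    total = sum((-1) ** (k - 1) * comb(4, k) * comb(52 - 13 * k, n)
                for k in range(1, 4))
    return total / comb(52, n)

print(void_prob(5))    # poker:  ≈ 0.7363
print(void_prob(13))   # bridge: ≈ 0.051
```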
Birthdays
The following problem is known as the birthday problem, and is famous because the results are rather surprising at first.
Suppose that $n$ persons are selected and their birthdays recorded (we will ignore leap years). Let $A$ denote the event that the birthdays are distinct, so that $A^c$ is the event that there is at least one duplication in the birthdays.
1. Define an appropriate sample space and probability measure. State the assumptions you are making.
2. Find $\P(A)$ and $\P(A^c)$ in terms of the parameter $n$.
3. Explicitly compute $\P(A)$ and $\P(A^c)$ for $n \in \{10, 20, 30, 40, 50\}$.
Answer
1. The set of outcomes is $S = D^n$ where $D$ is the set of days of the year. We assume that the outcomes are equally likely, so that $S$ has the uniform distribution.
2. $\#(A) = 365^{(n)}$, so $\P(A) = 365^{(n)} / 365^n$ and $\P(A^c) = 1 - 365^{(n)} / 365^n$
3. $n$ $\P(A)$ $\P(A^c)$
10 0.883 0.117
20 0.589 0.411
30 0.294 0.706
40 0.109 0.891
50 0.006 0.994
The small value of $\P(A)$ for relatively small sample sizes $n$ is striking, but is due mathematically to the fact that $365^n$ grows much faster than $365^{(n)}$ as $n$ increases. The birthday problem is treated in more generality in the chapter on Finite Sampling Models.
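The table is easy to reproduce with the falling power formula; the same function with `days=12` handles the birth-month exercise that follows.

```python
def p_distinct(n, days=365):
    """P(all n birthdays distinct) = days^(n) / days^n."""
    p = 1.0
    for i in range(n):
        p *= (days - i) / days
    return p

for n in (10, 20, 30, 40, 50):
    print(n, round(p_distinct(n), 3), round(1 - p_distinct(n), 3))

print(p_distinct(4, days=12))   # birth months: 11880/20736 ≈ 0.573
```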
Suppose that 4 persons are selected and their birth months recorded. Assuming an appropriate uniform distribution, find the probability that the months are distinct.
Answer
$\frac{11880}{20736} \approx 0.573$
Continuous Uniform Distributions
Recall that in Buffon's coin experiment, a coin with radius $r \le \frac{1}{2}$ is tossed randomly on a floor with square tiles of side length 1, and the coordinates $(X, Y)$ of the center of the coin are recorded, relative to axes through the center of the square in which the coin lands (with the axes parallel to the sides of the square, of course). Let $A$ denote the event that the coin does not touch the sides of the square.
1. Define the set of outcomes $S$ mathematically and sketch $S$.
2. Argue that $(X, Y)$ is uniformly distributed on $S$.
3. Express $A$ in terms of the outcome variables $(X, Y)$ and sketch $A$.
4. Find $\P(A)$.
5. Find $\P(A^c)$.
Answer
1. $S = \left[-\frac{1}{2}, \frac{1}{2}\right]^2$
2. Since the coin is tossed randomly, no region of $S$ should be preferred over any other.
3. $\left\{r - \frac{1}{2} \lt X \lt \frac{1}{2} - r, r - \frac{1}{2} \lt Y \lt \frac{1}{2} - r\right\}$
4. $\P(A) = (1 - 2 \, r)^2$
5. $\P(A^c) = 1 - (1 - 2 \, r)^2$
In Buffon's coin experiment, set $r = 0.2$. Run the experiment 100 times and compute the empirical probability of each event in the previous exercise.
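A Monte Carlo sketch of the same experiment (the seed and the number of runs are arbitrary choices) should give an estimate close to the exact value $(1 - 2r)^2$:

```python
import random

random.seed(3)   # arbitrary seed

def buffon(r, runs=100_000):
    """Estimate P(A), the probability that the coin misses the sides."""
    hits = 0
    for _ in range(runs):
        x = random.uniform(-0.5, 0.5)
        y = random.uniform(-0.5, 0.5)
        if abs(x) < 0.5 - r and abs(y) < 0.5 - r:
            hits += 1
    return hits / runs

r = 0.2
print(buffon(r), (1 - 2 * r) ** 2)   # estimate vs. exact 0.36
```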
A point $(X, Y)$ is chosen at random in the circular region $S \subset \R^2$ of radius 1, centered at the origin. Let $A$ denote the event that the point is in the inscribed square region centered at the origin, with sides parallel to the coordinate axes, and let $B$ denote the event that the point is in the inscribed square with vertices $(\pm 1, 0)$, $(0, \pm 1)$. Sketch each of the following events as a subset of $S$, and find the probability of the event.
1. $A$
2. $B$
3. $A \cap B^c$
4. $B \cap A^c$
5. $A \cap B$
6. $A \cup B$
Answer
1. $2 / \pi$
2. $2 / \pi$
3. $(6 - 4 \sqrt{2}) \big/ \pi$
4. $(6 - 4 \sqrt{2}) \big/ \pi$
5. $4(\sqrt{2} - 1) \big/ \pi$
6. $4(2 - \sqrt{2}) \big/ \pi$
Suppose a point $(X, Y)$ is chosen at random in the circular region $S \subseteq \R^2$ of radius 12, centered at the origin. Let $R$ denote the distance from the origin to the point. Sketch each of the following events as a subset of $S$, and compute the probability of the event. Is $R$ uniformly distributed on the interval $[0, 12]$?
1. $\{R \le 3\}$
2. $\{3 \lt R \le 6\}$
3. $\{6 \lt R \le 9\}$
4. $\{9 \lt R \le 12\}$
Answer
No, $R$ is not uniformly distributed on $[0, 12]$.
1. $\frac{1}{16}$
2. $\frac{3}{16}$
3. $\frac{5}{16}$
4. $\frac{7}{16}$
In the simple probability experiment, points are generated according to the uniform distribution on a rectangle. Move and resize the events $A$ and $B$ and note how the probabilities of the various events change. Create each of the following configurations. In each case, run the experiment 1000 times and compare the relative frequencies of the events to the probabilities of the events.
1. $A$ and $B$ in general position
2. $A$ and $B$ disjoint
3. $A \subseteq B$
4. $B \subseteq A$
Genetics
Please refer to the discussion of genetics in the section on random experiments if you need to review some of the definitions in this subsection.
Recall first that the ABO blood type in humans is determined by three alleles: $a$, $b$, and $o$. Furthermore, $a$ and $b$ are co-dominant and $o$ is recessive. Suppose that the probability distribution for the set of blood genotypes in a certain population is given in the following table:
Genotype $aa$ $ab$ $ao$ $bb$ $bo$ $oo$
Probability 0.050 0.038 0.310 0.007 0.116 0.479
A person is chosen at random from the population. Let $A$, $B$, $AB$, and $O$ be the events that the person is type $A$, type $B$, type $AB$, and type $O$ respectively. Let $H$ be the event that the person is homozygous and $D$ the event that the person has an $o$ allele. Find the probability of the following events:
1. $A$
2. $B$
3. $AB$
4. $O$
5. $H$
6. $D$
7. $H \cup D$
8. $D^c$
Answer
1. 0.360
2. 0.123
3. 0.038
4. 0.479
5. 0.536
6. 0.905
7. 0.962
8. 0.095
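Each of these events is a union of genotypes, so its probability is a sum over the table; a sketch:

```python
genotype_prob = {"aa": 0.050, "ab": 0.038, "ao": 0.310,
                 "bb": 0.007, "bo": 0.116, "oo": 0.479}

P = lambda E: sum(genotype_prob[g] for g in E)

A = {"aa", "ao"}                              # blood type A
B = {"bb", "bo"}                              # blood type B
H = {"aa", "bb", "oo"}                        # homozygous
D = {g for g in genotype_prob if "o" in g}    # has an o allele

print(P(A), P(B), P(H), P(D), P(H | D), 1 - P(D))
```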
Suppose next that pod color in a certain type of pea plant is determined by a gene with two alleles: $g$ for green and $y$ for yellow, and that $g$ is dominant.
Let $G$ be the event that a child plant has green pods. Find $\P(G)$ in each of the following cases:
1. At least one parent is type $gg$.
2. Both parents are type $yy$.
3. Both parents are type $gy$.
4. One parent is type $yy$ and the other is type $gy$.
Answer
1. $1$
2. $0$
3. $\frac{3}{4}$
4. $\frac{1}{2}$
Next consider a sex-linked hereditary disorder in humans (such as colorblindness or hemophilia). Let $h$ denote the healthy allele and $d$ the defective allele for the gene linked to the disorder. Recall that $d$ is recessive for women.
Let $B$ be the event that a son has the disorder, $C$ the event that a daughter is a healthy carrier, and $D$ the event that a daughter has the disease. Find $\P(B)$, $\P(C)$ and $\P(D)$ in each of the following cases:
1. The mother and father are normal.
2. The mother is a healthy carrier and the father is normal.
3. The mother is normal and the father has the disorder.
4. The mother is a healthy carrier and the father has the disorder.
5. The mother has the disorder and the father is normal.
6. The mother and father both have the disorder.
Answer
1. $0$, $0$, $0$
2. $1/2$, 0, $1/2$
3. $0$, $1/2$, $0$
4. $1/2$, $1/2$, $1/2$
5. $1$, $1/2$, $0$
6. $1$, $0$, $1$
From this exercise, note that transmission of the disorder to a daughter can only occur if the mother is at least a carrier and the father has the disorder. In ordinary large populations, this is an unusual intersection of events, and thus sex-linked hereditary disorders are typically much less common in women than in men. In brief, women are protected by the extra X chromosome.
Radioactive Emissions
Suppose that $T$ denotes the time between emissions (in milliseconds) for a certain type of radioactive material, and that $T$ has the following probability distribution, defined for measurable $A \subseteq [0, \infty)$ by $\P(T \in A) = \int_A e^{-t} \, dt$
1. Show that this really does define a probability distribution.
2. Find $\P(T \gt 3)$.
3. Find $\P(2 \lt T \lt 4)$.
Answer
1. Note that $\int_0^\infty e^{-t} \, dt = 1$
2. $e^{-3}$
3. $e^{-2} - e^{-4}$
Suppose that $N$ denotes the number of emissions in a one millisecond interval for a certain type of radioactive material, and that $N$ has the following probability distribution: $\P(N \in A) = \sum_{n \in A} \frac{e^{-1}}{n!}, \quad A \subseteq \N$
1. Show that this really does define a probability distribution.
2. Find $\P(N \ge 3)$.
3. Find $\P(2 \le N \le 4)$.
Answer
1. Note that $\sum_{n=0}^\infty \frac{e^{-1}}{n!} = 1$
2. $1 - \frac{5}{2} e^{-1}$
3. $\frac{17}{24} e^{-1}$
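Both answers are easy to verify numerically. The Python sketch below (standard library only) simulates the exponential emission times by the inversion method $T = -\ln(1 - U)$ and evaluates the Poisson sums directly:

```python
import math
import random

# Exponential times via inversion: T = -ln(1 - U), U uniform on [0, 1)
n = 100_000
hits = sum(-math.log(1.0 - random.random()) > 3 for _ in range(n))
print("simulated P(T > 3):", hits / n, " exact:", math.exp(-3))

# Poisson(1) emission counts: P(N = k) = e^{-1} / k!
p = [math.exp(-1) / math.factorial(k) for k in range(5)]
print("P(N >= 3):", 1 - sum(p[:3]), " exact:", 1 - 2.5 * math.exp(-1))
print("P(2 <= N <= 4):", sum(p[2:5]), " exact:", 17 / 24 * math.exp(-1))
```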
The probability distribution that governs the time between emissions is a special case of the exponential distribution, while the probability distribution that governs the number of emissions is a special case of the Poisson distribution, named for Simeon Poisson. The exponential distribution and the Poisson distribution are studied in more detail in the chapter on the Poisson process.
Matching
Suppose that an absent-minded secretary prepares 4 letters and matching envelopes to send to 4 different persons, but then stuffs the letters into the envelopes randomly. Find the probability of the event $A$ that at least one letter is in the proper envelope.
Solution
Note first that the set of outcomes $S$ can be taken to be the set of permutations of $\{1, 2, 3, 4\}$. For $\bs{x} \in S$, $x_i$ is the number of the envelope containing the $i$th letter. Clearly $S$ should be given the uniform probability distribution. Next note that $A = A_1 \cup A_2 \cup A_3 \cup A_4$ where $A_i$ is the event that the $i$th letter is inserted into the $i$th envelope. Using the inclusion-exclusion rule gives $\P(A) = \frac{5}{8}$.
This exercise is an example of the matching problem, originally formulated and studied by Pierre Remond Montmort. A complete analysis of the matching problem is given in the chapter on Finite Sampling Models.
In the simulation of the matching experiment select $n = 4$. Run the experiment 1000 times and compute the relative frequency of the event that at least one match occurs.
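If you prefer code to the app, a Monte Carlo version of the matching experiment takes only a few lines. A minimal Python sketch (the function name is ours):

```python
import random

def has_match(n):
    # A random stuffing is a uniformly random permutation;
    # a match is a fixed point of the permutation.
    perm = list(range(n))
    random.shuffle(perm)
    return any(perm[i] == i for i in range(n))

runs = 1000
freq = sum(has_match(4) for _ in range(runs)) / runs
print("relative frequency:", freq, " true probability:", 5 / 8)
```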
Data Analysis Exercises
For the M&M data set, let $R$ denote the event that a bag has at least 10 red candies, $T$ the event that a bag has at least 57 candies total, and $W$ the event that a bag weighs at least 50 grams. Find the empirical probability of each of the following events:
1. $R$
2. $T$
3. $W$
4. $R \cap T$
5. $T \setminus W$
Answer
1. $\frac{13}{30}$
2. $\frac{19}{30}$
3. $\frac{9}{30}$
4. $\frac{9}{30}$
5. $\frac{11}{30}$
For the cicada data, let $W$ denote the event that a cicada weighs at least 0.20 grams, $F$ the event that a cicada is female, and $T$ the event that a cicada is type tredecula. Find the empirical probability of each of the following:
1. $W$
2. $F$
3. $T$
4. $W \cap F$
5. $F \cup T \cup W$
Answer
1. $\frac{37}{104}$
2. $\frac{59}{104}$
3. $\frac{44}{104}$
4. $\frac{34}{104}$
5. $\frac{85}{104}$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The purpose of this section is to study how probabilities are updated in light of new information, clearly an absolutely essential topic. If you are a new student of probability, you may want to skip the technical details.
Definitions and Interpretations
The Basic Definition
As usual, we start with a random experiment modeled by a probability space $(S, \mathscr S, \P)$. Thus, $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. Suppose now that we know that an event $B$ has occurred. In general, this information should clearly alter the probabilities that we assign to other events. In particular, if $A$ is another event then $A$ occurs if and only if $A$ and $B$ occur; effectively, the sample space has been reduced to $B$. Thus, the probability of $A$, given that we know $B$ has occurred, should be proportional to $\P(A \cap B)$.
However, conditional probability, given that $B$ has occurred, should still be a probability measure, that is, it must satisfy the axioms of probability. This forces the proportionality constant to be $1 \big/ \P(B)$. Thus, we are led inexorably to the following definition:
Let $A$ and $B$ be events with $\P(B) \gt 0$. The conditional probability of $A$ given $B$ is defined to be $\P(A \mid B) = \frac{\P(A \cap B)}{\P(B)}$
The Law of Large Numbers
The definition above was based on the axiomatic definition of probability. Let's explore the idea of conditional probability from the less formal and more intuitive notion of relative frequency (the law of large numbers). Thus, suppose that we run the experiment repeatedly. For $n \in \N_+$ and an event $E$, let $N_n(E)$ denote the number of times $E$ occurs (the frequency of $E$) in the first $n$ runs. Note that $N_n(E)$ is a random variable in the compound experiment that consists of replicating the original experiment. In particular, its value is unknown until we actually run the experiment $n$ times.
If $N_n(B)$ is large, the conditional probability that $A$ has occurred, given that $B$ has occurred, should be close to the conditional relative frequency of $A$ given $B$, namely the relative frequency of $A$ for the runs on which $B$ occurred: $N_n(A \cap B) / N_n(B)$. But note that $\frac{N_n(A \cap B)}{N_n(B)} = \frac{N_n(A \cap B) / n}{N_n(B) / n}$ The numerator and denominator of the main fraction on the right are the relative frequencies of $A \cap B$ and $B$, respectively. So by the law of large numbers again, $N_n(A \cap B) / n \to \P(A \cap B)$ as $n \to \infty$ and $N_n(B) / n \to \P(B)$ as $n \to \infty$. Hence $\frac{N_n(A \cap B)}{N_n(B)} \to \frac{\P(A \cap B)}{\P(B)} \text{ as } n \to \infty$ and we are led again to the definition above.
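The convergence is easy to watch empirically. As an illustration (the particular events are our choice), the Python sketch below rolls two fair dice repeatedly, takes $B$ to be the event that the sum is 5 and $A$ the event that the first die shows 3, and tracks the conditional relative frequency, which should settle near $\P(A \mid B) = 1/4$:

```python
import random

n_b = n_ab = 0
for n in range(1, 100_001):
    x1 = random.randint(1, 6)
    x2 = random.randint(1, 6)
    if x1 + x2 == 5:          # B occurred on this run
        n_b += 1
        n_ab += (x1 == 3)     # A and B both occurred
    if n % 20_000 == 0:
        print(f"n = {n}: N(A and B)/N(B) = {n_ab / n_b:.4f}")
# the printed values should approach P(A | B) = 1/4
```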
In some cases, conditional probabilities can be computed directly, by effectively reducing the sample space to the given event. In other cases, the formula in the mathematical definition is better. In some cases, conditional probabilities are known from modeling assumptions, and then are used to compute other probabilities. We will see examples of all of these situations in the computational exercises below.
It's very important that you not confuse $\P(A \mid B)$, the probability of $A$ given $B$, with $\P(B \mid A)$, the probability of $B$ given $A$. Making that mistake is known as the fallacy of the transposed conditional.
Conditional Distributions
Suppose that $X$ is a random variable for the experiment with values in $T$. Mathematically, $X$ is a function from $S$ into $T$, and $\{X \in A\}$ denotes the event $\{s \in S: X(s) \in A\}$ for $A \subseteq T$. Intuitively, $X$ is a variable of interest in the experiment, and every meaningful statement about $X$ defines an event. Recall that the probability distribution of $X$ is the probability measure on $T$ given by $A \mapsto \P(X \in A), \quad A \subseteq T$ This has a natural extension to a conditional distribution, given an event.
If $B$ is an event with $\P(B) \gt 0$, then the conditional distribution of $X$ given $B$ is the probability measure on $T$ given by $A \mapsto \P(X \in A \mid B), \quad A \subseteq T$
Details
Recall that $T$ will come with a $\sigma$-algebra of admissible subsets so that $(T, \mathscr T)$ is a measurable space, just like the sample space $(S, \mathscr S)$. Random variable $X$ is required to be measurable as a function from $S$ into $T$. This ensures that $\{X \in A\}$ is a valid event for each $A \in \mathscr T$, so that the definition makes sense.
Basic Theory
Preliminary Results
Our first result is of fundamental importance, and indeed was a crucial part of the argument for the definition of conditional probability.
Suppose again that $B$ is an event with $\P(B) \gt 0$. Then $A \mapsto \P(A \mid B)$ is a probability measure on $S$.
Proof
Clearly $\P(A \mid B) \ge 0$ for every event $A$, and $\P(S \mid B) = 1$. Thus, suppose that $\{A_i: i \in I\}$ is a countable collection of pairwise disjoint events. Then $\P\left(\bigcup_{i \in I} A_i \biggm| B\right) = \frac{1}{\P(B)} \P\left[\left(\bigcup_{i \in I} A_i\right) \cap B\right] = \frac{1}{\P(B)} \P\left(\bigcup_{i \in I} (A_i \cap B)\right)$ But the collection of events $\{A_i \cap B: i \in I\}$ is also pairwise disjoint, so $\P\left(\bigcup_{i \in I} A_i \biggm| B\right) = \frac{1}{\P(B)} \sum_{i \in I} \P(A_i \cap B) = \sum_{i \in I} \frac{\P(A_i \cap B)}{\P(B)} = \sum_{i \in I} \P(A_i \mid B)$
It's hard to overstate the importance of the last result because this theorem means that any result that holds for probability measures in general holds for conditional probability, as long as the conditioning event remains fixed. In particular the basic probability rules in the section on Probability Measure have analogs for conditional probability. To give two examples, \begin{align} \P\left(A^c \mid B\right) & = 1 - \P(A \mid B) \ \P\left(A_1 \cup A_2 \mid B\right) & = \P\left(A_1 \mid B\right) + \P\left(A_2 \mid B\right) - \P\left(A_1 \cap A_2 \mid B\right) \end{align} By the same token, it follows that the conditional distribution of a random variable with values in $T$, given in above, really does define a probability distribution on $T$. No further proof is necessary. Our next results are very simple.
Suppose that $A$ and $B$ are events with $\P(B) \gt 0$.
1. If $B \subseteq A$ then $\P(A \mid B) = 1$.
2. If $A \subseteq B$ then $\P(A \mid B) = \P(A) / \P(B)$.
3. If $A$ and $B$ are disjoint then $\P(A \mid B) = 0$.
Proof
These results follow directly from the definition of conditional probability. In part (a), note that $A \cap B = B$. In part (b) note that $A \cap B = A$. In part (c) note that $A \cap B = \emptyset$.
Parts (a) and (c) certainly make sense. Suppose that we know that event $B$ has occurred. If $B \subseteq A$ then $A$ becomes a certain event. If $A \cap B = \emptyset$ then $A$ becomes an impossible event. A conditional probability can be computed relative to a probability measure that is itself a conditional probability measure. The following result is a consistency condition.
Suppose that $A$, $B$, and $C$ are events with $\P(B \cap C) \gt 0$. The probability of $A$ given $B$, relative to $\P(\cdot \mid C)$, is the same as the probability of $A$ given $B$ and $C$ (relative to $\P$). That is, $\frac{\P(A \cap B \mid C)}{\P(B \mid C)} = \P(A \mid B \cap C)$
Proof
From the definition, $\frac{\P(A \cap B \mid C)}{\P(B \mid C)} = \frac{\P(A \cap B \cap C) \big/ \P(C)}{\P(B \cap C) \big/ \P(C)} = \frac{\P(A \cap B \cap C)}{\P(B \cap C)} = \P(A \mid B \cap C)$
Correlation
Our next discussion concerns an important concept that deals with how two events are related, in a probabilistic sense.
Suppose that $A$ and $B$ are events with $\P(A) \gt 0$ and $\P(B) \gt 0$.
1. $\P(A \mid B) \gt \P(A)$ if and only if $\P(B \mid A) \gt \P(B)$ if and only if $\P(A \cap B) \gt \P(A) \P(B)$. In this case, $A$ and $B$ are positively correlated.
2. $\P(A \mid B) \lt \P(A)$ if and only if $\P(B \mid A) \lt \P(B)$ if and only if $\P(A \cap B) \lt \P(A) \P(B)$. In this case, $A$ and $B$ are negatively correlated.
3. $\P(A \mid B) = \P(A)$ if and only if $\P(B \mid A) = \P(B)$ if and only if $\P(A \cap B) = \P(A) \P(B)$. In this case, $A$ and $B$ are uncorrelated or independent.
Proof
These properties follow directly from the definition of conditional probability and simple algebra. Recall that multiplying or dividing an inequality by a positive number preserves the inequality.
Intuitively, if $A$ and $B$ are positively correlated, then the occurrence of either event means that the other event is more likely. If $A$ and $B$ are negatively correlated, then the occurrence of either event means that the other event is less likely. If $A$ and $B$ are uncorrelated, then the occurrence of either event does not change the probability of the other event. Independence is a fundamental concept that can be extended to more than two events and to random variables; these generalizations are studied in the next section on Independence. A much more general version of correlation, for random variables, is explored in the section on Covariance and Correlation in the chapter on Expected Value.
Suppose that $A$ and $B$ are events. Note from (4) that if $A \subseteq B$ or $B \subseteq A$ then $A$ and $B$ are positively correlated. If $A$ and $B$ are disjoint then $A$ and $B$ are negatively correlated.
Suppose that $A$ and $B$ are events in a random experiment.
1. $A$ and $B$ have the same correlation (positive, negative, or zero) as $A^c$ and $B^c$.
2. $A$ and $B$ have the opposite correlation as $A$ and $B^c$ (that is, positive-negative, negative-positive, or 0-0).
Proof
1. Using DeMorgan's law and the complement law. $\P(A^c \cap B^c) - \P(A^c) \P(B^c) = \P\left[(A \cup B)^c\right] - \P(A^c) \P(B^c) = \left[1 - \P(A \cup B)\right] - \left[1 - \P(A)\right]\left[1 - \P(B)\right]$ Using the inclusion-exclusion law and algebra, $\P(A^c \cap B^c) - \P(A^c) \P(B^c) = \P(A \cap B) - \P(A) \P(B)$
2. Using the difference rule and the complement law: $\P(A \cap B^c) - \P(A) \P(B^c) = \P(A) - \P(A \cap B) - \P(A) \left[1 - \P(B)\right] = -\left[\P(A \cap B) - \P(A) \P(B)\right]$
The Multiplication Rule
Sometimes conditional probabilities are known and can be used to find the probabilities of other events. Note first that if $A$ and $B$ are events with positive probability, then by the very definition of conditional probability, $\P(A \cap B) = \P(A) \P(B \mid A) = \P(B) \P(A \mid B)$ The following generalization is known as the multiplication rule of probability. As usual, we assume that any event conditioned on has positive probability.
Suppose that $(A_1, A_2, \ldots, A_n)$ is a sequence of events. Then $\P\left(A_1 \cap A_2 \cap \cdots \cap A_n\right) = \P\left(A_1\right) \P\left(A_2 \mid A_1\right) \P\left(A_3 \mid A_1 \cap A_2\right) \cdots \P\left(A_n \mid A_1 \cap A_2 \cap \cdots \cap A_{n-1}\right)$
Proof
The product on the right is a collapsing product in which only the probability of the intersection of all $n$ events survives. The product of the first two factors is $\P\left(A_1 \cap A_2\right)$, and hence the product of the first three factors is $\P\left(A_1 \cap A_2 \cap A_3\right)$, and so forth. The proof can be made more rigorous by induction on $n$.
The multiplication rule is particularly useful for experiments that consist of dependent stages, where $A_i$ is an event in stage $i$. Compare the multiplication rule of probability with the multiplication rule of combinatorics.
As with any other result, the multiplication rule can be applied to a conditional probability measure. In the context above, if $E$ is another event, then $\P\left(A_1 \cap A_2 \cap \cdots \cap A_n \mid E\right) = \P\left(A_1 \mid E\right) \P\left(A_2 \mid A_1 \cap E\right) \P\left(A_3 \mid A_1 \cap A_2 \cap E\right) \cdots \P\left(A_n \mid A_1 \cap A_2 \cap \cdots \cap A_{n-1} \cap E\right)$
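For a concrete instance of the multiplication rule (anticipating a card exercise later in this section), the probability that three cards dealt from a standard deck are all hearts can be computed stage by stage. A quick Python check with exact fractions:

```python
from fractions import Fraction as F

# P(H1) * P(H2 | H1) * P(H3 | H1 and H2)
p = F(13, 52) * F(12, 51) * F(11, 50)
print(p)  # 11/850
```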
Conditioning and Bayes' Theorem
Suppose that $\mathscr{A} = \{A_i: i \in I\}$ is a countable collection of events that partition the sample space $S$, and that $\P(A_i) \gt 0$ for each $i \in I$.
The following theorem is known as the law of total probability.
If $B$ is an event then $\P(B) = \sum_{i \in I} \P(A_i) \P(B \mid A_i)$
Proof
Recall that $\{A_i \cap B: i \in I\}$ is a partition of $B$. Hence $\P(B) = \sum_{i \in I} \P(A_i \cap B) = \sum_{i \in I} \P(A_i) \P(B \mid A_i)$
The following theorem is known as Bayes' Theorem, named after Thomas Bayes:
If $B$ is an event then $\P(A_j \mid B) = \frac{\P(A_j) \P(B \mid A_j)}{\sum_{i \in I}\P(A_i) \P(B \mid A_i)}, \quad j \in I$
Proof
Again the numerator is $\P(A_j \cap B)$ while the denominator is $\P(B)$ by the law of total probability.
These two theorems are most useful, of course, when we know $\P(A_i)$ and $\P(B \mid A_i)$ for each $i \in I$. When we compute the probability of $\P(B)$ by the law of total probability, we say that we are conditioning on the partition $\mathscr{A}$. Note that we can think of the sum as a weighted average of the conditional probabilities $\P(B \mid A_i)$ over $i \in I$, where $\P(A_i)$, $i \in I$ are the weight factors. In the context of Bayes theorem, $\P(A_j)$ is the prior probability of $A_j$ and $\P(A_j \mid B)$ is the posterior probability of $A_j$ for $j \in I$. We will study more general versions of conditioning and Bayes theorem in the section on Discrete Distributions in the chapter on Distributions, and again in the section on Conditional Expected Value in the chapter on Expected Value.
Once again, the law of total probability and Bayes' theorem can be applied to a conditional probability measure. So, if $E$ is another event with $\P(A_i \cap E) \gt 0$ for $i \in I$ then \begin{align} \P(B \mid E) & = \sum_{i \in I} \P(A_i \mid E) \P(B \mid A_i \cap E) \ \P(A_j \mid B \cap E) & = \frac{\P(A_j \mid E) \P(B \mid A_j \cap E)}{\sum_{i \in I}\P(A_i \mid E) \P(B \mid A_i \cap E)}, \quad j \in I \end{align}
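Since the law of total probability and Bayes' theorem drive many of the exercises below, a small generic helper is worth sketching. This is illustrative Python (the function names are ours); `priors` and `likelihoods` are parallel sequences indexed by the partition:

```python
def total_probability(priors, likelihoods):
    # P(B) = sum over i of P(A_i) * P(B | A_i)
    return sum(p * l for p, l in zip(priors, likelihoods))

def bayes(priors, likelihoods):
    # posterior P(A_j | B) for each j in the partition
    pb = total_probability(priors, likelihoods)
    return [p * l / pb for p, l in zip(priors, likelihoods)]

# Example: the memory chip plant in the reliability exercise below
print(bayes([0.5, 0.3, 0.2], [0.04, 0.05, 0.01]))
# [0.5405..., 0.4054..., 0.0540...]
```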
Examples and Applications
Basic Rules
Suppose that $A$ and $B$ are events in an experiment with $\P(A) = \frac{1}{3}$, $\P(B) = \frac{1}{4}$, $\P(A \cap B) = \frac{1}{10}$. Find each of the following:
1. $\P(A \mid B)$
2. $\P(B \mid A)$
3. $\P(A^c \mid B)$
4. $\P(B^c \mid A)$
5. $\P(A^c \mid B^c)$
Answer
1. $\frac{2}{5}$
2. $\frac{3}{10}$
3. $\frac{3}{5}$
4. $\frac{7}{10}$
5. $\frac{31}{45}$
Suppose that $A$, $B$, and $C$ are events in a random experiment with $\P(A \mid C) = \frac{1}{2}$, $\P(B \mid C) = \frac{1}{3}$, and $\P(A \cap B \mid C) = \frac{1}{4}$. Find each of the following:
1. $\P(B \setminus A \mid C)$
2. $\P(A \cup B \mid C)$
3. $\P(A^c \cap B^c \mid C)$
4. $\P(A^c \cup B^c \mid C)$
5. $\P(A^c \cup B \mid C)$
6. $\P(A \mid B \cap C)$
Answer
1. $\frac{1}{12}$
2. $\frac{7}{12}$
3. $\frac{5}{12}$
4. $\frac{3}{4}$
5. $\frac{3}{4}$
6. $\frac{3}{4}$
Suppose that $A$ and $B$ are events in a random experiment with $\P(A) = \frac{1}{2}$, $\P(B) = \frac{1}{3}$, and $\P(A \mid B) =\frac{3}{4}$.
1. Find $\P(A \cap B)$
2. Find $\P(A \cup B)$
3. Find $\P(B \cup A^c)$
4. Find $\P(B \mid A)$
5. Are $A$ and $B$ positively correlated, negatively correlated, or independent?
Answer
1. $\frac{1}{4}$
2. $\frac{7}{12}$
3. $\frac{3}{4}$
4. $\frac{1}{2}$
5. positively correlated.
Open the conditional probability experiment.
1. Given $\P(A)$, $\P(B)$, and $\P(A \cap B)$ in the table, verify all of the other probabilities in the table.
2. Run the experiment 1000 times and compare the probabilities with the relative frequencies.
Simple Populations
In a certain population, 30% of the persons smoke cigarettes and 8% have COPD (Chronic Obstructive Pulmonary Disease). Moreover, 12% of the persons who smoke have COPD.
1. What percentage of the population smoke and have COPD?
2. What percentage of the population with COPD also smoke?
3. Are smoking and COPD positively correlated, negatively correlated, or independent?
Answer
1. 3.6%
2. 45%
3. positively correlated.
A company has 200 employees: 120 are women and 80 are men. Of the 120 female employees, 30 are classified as managers, while 20 of the 80 male employees are managers. Suppose that an employee is chosen at random.
1. Find the probability that the employee is female.
2. Find the probability that the employee is a manager.
3. Find the conditional probability that the employee is a manager given that the employee is female.
4. Find the conditional probability that the employee is female given that the employee is a manager.
5. Are the events female and manager positively correlated, negatively correlated, or independent?
Answer
1. $\frac{120}{200}$
2. $\frac{50}{200}$
3. $\frac{30}{120}$
4. $\frac{30}{50}$
5. independent
Dice and Coins
Consider the experiment that consists of rolling 2 standard, fair dice and recording the sequence of scores $\bs{X} = (X_1, X_2)$. Let $Y$ denote the sum of the scores. For each of the following pairs of events, find the probability of each event and the conditional probability of each event given the other. Determine whether the events are positively correlated, negatively correlated, or independent.
1. $\{X_1 = 3\}$, $\{Y = 5\}$
2. $\{X_1 = 3\}$, $\{Y = 7\}$
3. $\{X_1 = 2\}$, $\{Y = 5\}$
4. $\{X_1 = 3\}$, $\{X_1 = 2\}$
Answer
In each case below, the answers are for $\P(A)$, $\P(B)$, $\P(A \mid B)$, and $\P(B \mid A)$
1. $\frac{1}{6}$, $\frac{1}{9}$, $\frac{1}{4}$, $\frac{1}{6}$. Positively correlated.
2. $\frac{1}{6}$, $\frac{1}{6}$, $\frac{1}{6}$, $\frac{1}{6}$. Independent.
3. $\frac{1}{6}$, $\frac{1}{9}$, $\frac{1}{4}$, $\frac{1}{6}$. Positively correlated.
4. $\frac{1}{6}$, $\frac{1}{6}$, $0$, $0$. Negatively correlated.
Note that positive correlation is not a transitive relation. From the previous exercise, for example, note that $\{X_1 = 3\}$ and $\{Y = 5\}$ are positively correlated, $\{Y = 5\}$ and $\{X_1 = 2\}$ are positively correlated, but $\{X_1 = 3\}$ and $\{X_1 = 2\}$ are negatively correlated (in fact, disjoint).
In the dice experiment, set $n = 2$. Run the experiment 1000 times. Compute the empirical conditional probabilities corresponding to the conditional probabilities in the last exercise.
Consider again the experiment that consists of rolling 2 standard, fair dice and recording the sequence of scores $\bs{X} = (X_1, X_2)$. Let $Y$ denote the sum of the scores, $U$ the minimum score, and $V$ the maximum score.
1. Find $\P(U = u \mid V = 4)$ for the appropriate values of $u$.
2. Find $\P(Y = y \mid V = 4)$ for the appropriate values of $y$.
3. Find $\P(V = v \mid Y = 8)$ for appropriate values of $v$.
4. Find $\P(U = u \mid Y = 8)$ for the appropriate values of $u$.
5. Find $\P[(X_ 1, X_2) = (x_1, x_2) \mid Y = 8]$ for the appropriate values of $(x_1, x_2)$.
Answer
1. $\frac{2}{7}$ for $u \in \{1, 2, 3\}$, $\frac{1}{7}$ for $u = 4$
2. $\frac{2}{7}$ for $y \in \{5, 6, 7\}$, $\frac{1}{7}$ for $y = 8$
3. $\frac{1}{5}$ for $v = 4$, $\frac{2}{5}$ for $v \in \{5, 6\}$
4. $\frac{2}{5}$ for $u \in \{2, 3\}$, $\frac{1}{5}$ for $u = 4$
5. $\frac{1}{5}$ for $(x_1, x_2) \in \{(2,6), (6,2), (3,5), (5,3), (4,4)\}$
In the die-coin experiment, a standard, fair die is rolled and then a fair coin is tossed the number of times showing on the die. Let $N$ denote the die score and $H$ the event that all coin tosses result in heads.
1. Find $\P(H)$.
2. Find $\P(N = n \mid H)$ for $n \in \{1, 2, 3, 4, 5, 6\}$.
3. Compare the results in (b) with $\P(N = n)$ for $n \in \{1, 2, 3, 4, 5, 6\}$. In each case, note whether the events $H$ and $\{N = n\}$ are positively correlated, negatively correlated, or independent.
Answer
1. $\frac{21}{128}$
2. $\frac{64}{63} \frac{1}{2^n}$ for $n \in \{1, 2, 3, 4, 5, 6\}$
3. positively correlated for $n \in \{1, 2\}$ and negatively correlated for $n \in \{3, 4, 5, 6\}$
Run the die-coin experiment 1000 times. Let $H$ and $N$ be as defined in the previous exercise.
1. Compute the empirical probability of $H$. Compare with the true probability in the previous exercise.
2. Compute the empirical probability of $\{N = n\}$ given $H$, for $n \in \{1, 2, 3, 4, 5, 6\}$. Compare with the true probabilities in the previous exercise.
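A code version of the die-coin experiment works just as well as the app; the illustrative Python sketch below estimates $\P(H)$ and $\P(N = n \mid H)$ and compares them with the exact values from the previous exercise:

```python
import random

runs = 200_000
h = 0
n_given_h = [0] * 7   # indexed by die score 1..6

for _ in range(runs):
    n = random.randint(1, 6)                          # die score
    if all(random.random() < 0.5 for _ in range(n)):  # all tosses heads
        h += 1
        n_given_h[n] += 1

print("P(H) ~", h / runs, " exact:", 21 / 128)
for n in range(1, 7):
    print(f"P(N={n} | H) ~ {n_given_h[n] / h:.4f}  exact: {64 / 63 / 2 ** n:.4f}")
```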
Suppose that a bag contains 12 coins: 5 are fair, 4 are biased with probability of heads $\frac{1}{3}$; and 3 are two-headed. A coin is chosen at random from the bag and tossed.
1. Find the probability that the coin is heads.
2. Given that the coin is heads, find the conditional probability of each coin type.
Answer
1. $\frac{41}{72}$
2. $\frac{15}{41}$ that the coin is fair, $\frac{8}{41}$ that the coin is biased, $\frac{18}{41}$ that the coin is two-headed
Compare the die-coin experiment and the bag of coins experiment. In the die-coin experiment, we toss a coin with a fixed probability of heads a random number of times. In the bag of coins experiment, we effectively toss a coin with a random probability of heads a fixed number of times. The random experiment of tossing a coin with a fixed probability of heads $p$ a fixed number of times $n$ is known as the binomial experiment with parameters $n$ and $p$. This is a very basic and important experiment that is studied in more detail in the section on the binomial distribution in the chapter on Bernoulli Trials. Thus, the die-coin and bag of coins experiments can be thought of as modifications of the binomial experiment in which a parameter has been randomized. In general, interesting new random experiments can often be constructed by randomizing one or more parameters in another random experiment.
In the coin-die experiment, a fair coin is tossed. If the coin lands tails, a fair die is rolled. If the coin lands heads, an ace-six flat die is tossed (faces 1 and 6 have probability $\frac{1}{4}$ each, while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each). Let $H$ denote the event that the coin lands heads, and let $Y$ denote the score when the chosen die is tossed.
1. Find $\P(Y = y)$ for $y \in \{1, 2, 3, 4, 5, 6\}$.
2. Find $\P(H \mid Y = y)$ for $y \in \{1, 2, 3, 4, 5, 6,\}$.
3. Compare each probability in part (b) with $\P(H)$. In each case, note whether the events $H$ and $\{Y = y\}$ are positively correlated, negatively correlated, or independent.
Answer
1. $\frac{5}{24}$ for $y \in \{1, 6\}$, $\frac{7}{48}$ for $y \in \{2, 3, 4, 5\}$
2. $\frac{3}{5}$ for $y \in \{1, 6\}$, $\frac{3}{7}$ for $y \in \{2, 3, 4, 5\}$
3. Positively correlated for $y \in \{1, 6\}$, negatively correlated for $y \in \{2, 3, 4, 5\}$
Run the coin-die experiment 1000 times. Let $H$ and $Y$ be as defined in the previous exercise.
1. Compute the empirical probability of $\{Y = y\}$, for each $y$, and compare with the true probability in the previous exercise
2. Compute the empirical probability of $H$ given $\{Y = y\}$ for each $y$, and compare with the true probability in the previous exercise.
Cards
Consider the card experiment that consists of dealing 2 cards from a standard deck and recording the sequence of cards dealt. For $i \in \{1, 2\}$, let $Q_i$ be the event that card $i$ is a queen and $H_i$ the event that card $i$ is a heart. For each of the following pairs of events, compute the probability of each event, and the conditional probability of each event given the other. Determine whether the events are positively correlated, negatively correlated, or independent.
1. $Q_1$, $H_1$
2. $Q_1$, $Q_2$
3. $Q_2$, $H_2$
4. $Q_1$, $H_2$
Answer
The answers below are for $\P(A)$, $\P(B)$, $\P(A \mid B)$, and $\P(B \mid A)$ where $A$ and $B$ are the given events
1. $\frac{1}{13}$, $\frac{1}{4}$, $\frac{1}{13}$, $\frac{1}{4}$, independent.
2. $\frac{1}{13}$, $\frac{1}{13}$, $\frac{3}{51}$, $\frac{3}{51}$, negatively correlated.
3. $\frac{1}{13}$, $\frac{1}{4}$, $\frac{1}{13}$, $\frac{1}{4}$, independent.
4. $\frac{1}{13}$, $\frac{1}{4}$, $\frac{1}{13}$, $\frac{1}{4}$, independent.
In the card experiment, set $n = 2$. Run the experiment 500 times. Compute the conditional relative frequencies corresponding to the conditional probabilities in the last exercise.
Consider the card experiment that consists of dealing 3 cards from a standard deck and recording the sequence of cards dealt. Find the probability of the following events:
1. All three cards are hearts.
2. The first two cards are hearts and the third is a spade.
3. The first and third cards are hearts and the second is a spade.
Answer
1. $\frac{11}{850}$
2. $\frac{13}{850}$
3. $\frac{13}{850}$
In the card experiment, set $n = 3$ and run the simulation 1000 times. Compute the empirical probability of each event in the previous exercise and compare with the true probability.
Bivariate Uniform Distributions
Recall that Buffon's coin experiment consists of tossing a coin with radius $r \le \frac{1}{2}$ randomly on a floor covered with square tiles of side length 1. The coordinates $(X, Y)$ of the center of the coin are recorded relative to axes through the center of the square, parallel to the sides. Since the coin is tossed randomly, the basic modeling assumption is that $(X, Y)$ is uniformly distributed on the square $[-1/2, 1/2]^2$.
In Buffon's coin experiment,
1. Find $\P(Y \gt 0 \mid X \lt Y)$
2. Find the conditional distribution of $(X, Y)$ given that the coin does not touch the sides of the square.
Answer
1. $\frac{3}{4}$
2. Given $(X, Y) \in [r - \frac{1}{2}, \frac{1}{2} - r]^2$, $(X, Y)$ is uniformly distributed on this set.
Run Buffon's coin experiment 500 times. Compute the empirical probability that $Y \gt 0$ given that $X \lt Y$ and compare with the probability in the last exercise.
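Outside the app, Buffon's coin experiment reduces to sampling a uniform point on the square. A minimal Python sketch for part (a) of the exercise above:

```python
import random

runs = 100_000
n_b = n_ab = 0
for _ in range(runs):
    x = random.uniform(-0.5, 0.5)
    y = random.uniform(-0.5, 0.5)
    if x < y:                # the conditioning event {X < Y}
        n_b += 1
        n_ab += (y > 0)
print("P(Y > 0 | X < Y) ~", n_ab / n_b, " exact:", 3 / 4)
```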
In the conditional probability experiment, the random points are uniformly distributed on the rectangle $S$. Move and resize events $A$ and $B$ and note how the probabilities change. For each of the following configurations, run the experiment 1000 times and compare the relative frequencies with the true probabilities.
1. $A$ and $B$ in general position
2. $A$ and $B$ disjoint
3. $A \subseteq B$
4. $B \subseteq A$
Reliability
A plant has 3 assembly lines that produce memory chips. Line 1 produces 50% of the chips and has a defective rate of 4%; line 2 produces 30% of the chips and has a defective rate of 5%; line 3 produces 20% of the chips and has a defective rate of 1%. A chip is chosen at random from the plant.
1. Find the probability that the chip is defective.
2. Given that the chip is defective, find the conditional probability for each line.
Answer
1. 0.037
2. 0.541 for line 1, 0.405 for line 2, 0.054 for line 3
Suppose that a bit (0 or 1) is sent through a noisy communications channel. Because of the noise, the bit sent may be received incorrectly as the complementary bit. Specifically, suppose that if 0 is sent, then the probability that 0 is received is 0.9 and the probability that 1 is received is 0.1. If 1 is sent, then the probability that 1 is received is 0.8 and the probability that 0 is received is 0.2. Finally, suppose that 1 is sent with probability 0.6 and 0 is sent with probability 0.4. Find the probability that
1. 1 was sent given that 1 was received
2. 0 was sent given that 0 was received
Answer
1. $12/13$
2. $3/4$
Suppose that $T$ denotes the lifetime of a light bulb (in 1000 hour units), and that $T$ has the following exponential distribution, defined for measurable $A \subseteq [0, \infty)$:
$\P(T \in A) = \int_A e^{-t} dt$
1. Find $\P(T \gt 3)$
2. Find $\P(T \gt 5 \mid T \gt 2)$
Answer
1. $e^{-3}$
2. $e^{-3}$
Suppose again that $T$ denotes the lifetime of a light bulb (in 1000 hour units), but that $T$ is uniformly distributed on the interval $[0, 10]$.
1. Find $\P(T \gt 3)$
2. Find $\P(T \gt 5 \mid T \gt 2)$
Answer
1. $\frac{7}{10}$
2. $\frac{5}{8}$
Genetics
Please refer to the discussion of genetics in the section on random experiments if you need to review some of the definitions in this section.
Recall first that the ABO blood type in humans is determined by three alleles: $a$, $b$, and $o$. Furthermore, $a$ and $b$ are co-dominant and $o$ is recessive. Suppose that the probability distribution for the set of blood genotypes in a certain population is given in the following table:
Genotype $aa$ $ab$ $ao$ $bb$ $bo$ $oo$
Probability 0.050 0.038 0.310 0.007 0.116 0.479
Suppose that a person is chosen at random from the population. Let $A$, $B$, $AB$, and $O$ be the events that the person is type $A$, type $B$, type $AB$, and type $O$ respectively. Let $H$ be the event that the person is homozygous, and let $D$ denote the event that the person has an $o$ allele. Find each of the following:
1. $\P(A)$, $\P(B)$, $\P(AB)$, $\P(O)$, $\P(H)$, $\P(D)$
2. $P(A \cap H)$, $P(A \mid H)$, $P(H \mid A)$. Are the events $A$ and $H$ positively correlated, negatively correlated, or independent?
3. $P(B \cap H)$, $P(B \mid H)$, $P(H \mid B)$. Are the events $B$ and $H$ positively correlated, negatively correlated, or independent?
4. $P(A \cap D)$, $P(A \mid D)$, $P(D \mid A)$. Are the events $A$ and $D$ positively correlated, negatively correlated, or independent?
5. $P(B \cap D)$, $P(B \mid D)$, $P(D \mid B)$. Are the events $B$ and $D$ positively correlated, negatively correlated, or independent?
6. $P(H \cap D)$, $P(H \mid D)$, $P(D \mid H)$. Are the events $H$ and $D$ positively correlated, negatively correlated, or independent?
Answer
1. 0.360, 0.123, 0.038, 0.479, 0.536, 0.905
2. 0.050, 0.093, 0.139. $A$ and $H$ are negatively correlated.
3. 0.007, 0.013, 0.057. $B$ and $H$ are negatively correlated.
4. 0.310, 0.343, 0.861. $A$ and $D$ are negatively correlated.
5. 0.116, 0.128, 0.943. $B$ and $D$ are positively correlated.
6. 0.479, 0.529, 0.894. $H$ and $D$ are negatively correlated.
Suppose next that pod color in a certain type of pea plant is determined by a gene with two alleles: $g$ for green and $y$ for yellow, and that $g$ is dominant and $y$ recessive.
Suppose that a green-pod plant and a yellow-pod plant are bred together. Suppose further that the green-pod plant has a $\frac{1}{4}$ chance of carrying the recessive yellow-pod allele.
1. Find the probability that a child plant will have green pods.
2. Given that a child plant has green pods, find the updated probability that the green-pod parent has the recessive allele.
Answer
1. $\frac{7}{8}$
2. $\frac{1}{7}$
Suppose that two green-pod plants are bred together. Suppose further that with probability $\frac{1}{3}$ neither plant has the recessive allele, with probability $\frac{1}{2}$ one plant has the recessive allele, and with probability $\frac{1}{6}$ both plants have the recessive allele.
1. Find the probability that a child plant has green pods.
2. Given that a child plant has green pods, find the updated probability that both parents have the recessive gene.
Answer
1. $\frac{23}{24}$
2. $\frac{3}{23}$
Next consider a sex-linked hereditary disorder in humans (such as colorblindness or hemophilia). Let $h$ denote the healthy allele and $d$ the defective allele for the gene linked to the disorder. Recall that $h$ is dominant and $d$ recessive for women.
Suppose that in a certain population, 50% are male and 50% are female. Moreover, suppose that 10% of males are color blind but only 1% of females are color blind.
1. Find the percentage of color blind persons in the population.
2. Find the percentage of color blind persons that are male.
Answer
1. 5.5%
2. 90.9%
Since color blindness is a sex-linked hereditary disorder, note that it's reasonable in the previous exercise that the probability that a female is color blind is the square of the probability that a male is color blind. If $p$ is the probability of the defective allele on the $X$ chromosome, then $p$ is also the probability that a male will be color blind. But since the defective allele is recessive, a woman would need two copies of the defective allele to be color blind, and assuming independence, the probability of this event is $p^2$.
A man and a woman do not have a certain sex-linked hereditary disorder, but the woman has a $\frac{1}{3}$ chance of being a carrier.
1. Find the probability that a son born to the couple will be normal.
2. Find the probability that a daughter born to the couple will be a carrier.
3. Given that a son born to the couple is normal, find the updated probability that the mother is a carrier.
Answer
1. $\frac{5}{6}$
2. $\frac{1}{6}$
3. $\frac{1}{5}$
Urn Models
Urn 1 contains 4 red and 6 green balls while urn 2 contains 7 red and 3 green balls. An urn is chosen at random and then a ball is chosen at random from the selected urn.
1. Find the probability that the ball is green.
2. Given that the ball is green, find the conditional probability that urn 1 was selected.
Answer
1. $\frac{9}{20}$
2. $\frac{2}{3}$
Urn 1 contains 4 red and 6 green balls while urn 2 contains 6 red and 3 green balls. A ball is selected at random from urn 1 and transferred to urn 2. Then a ball is selected at random from urn 2.
1. Find the probability that the ball from urn 2 is green.
2. Given that the ball from urn 2 is green, find the conditional probability that the ball from urn 1 was green.
Answer
1. $\frac{9}{25}$
2. $\frac{2}{3}$
An urn initially contains 6 red and 4 green balls. A ball is chosen at random from the urn and its color is recorded. It is then replaced in the urn and 2 new balls of the same color are added to the urn. The process is repeated. Find the probability of each of the following events:
1. Balls 1 and 2 are red and ball 3 is green.
2. Balls 1 and 3 are red and ball 2 is green.
3. Ball 1 is green and balls 2 and 3 are red.
4. Ball 2 is red.
5. Ball 1 is red given that ball 2 is red.
Answer
1. $\frac{4}{35}$
2. $\frac{4}{35}$
3. $\frac{4}{35}$
4. $\frac{3}{5}$
5. $\frac{2}{3}$
Think about the results in the previous exercise. Note in particular that the answers to parts (a), (b), and (c) are the same, and that the probability that the second ball is red in part (d) is the same as the probability that the first ball is red. More generally, the probabilities of events do not depend on the order of the draws. For example, the probability of an event involving the first, second, and third draws is the same as the probability of the corresponding event involving the seventh, tenth, and fifth draws. Technically, the sequence of events $(R_1, R_2, \ldots)$ is exchangeable. The random process described in this exercise is a special case of Pólya's urn scheme, named after George Pólya. We will study Pólya's urn in more detail in the chapter on Finite Sampling Models.
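Exchangeability is also easy to probe numerically. The Python sketch below (an illustration, with our own function name) simulates Pólya's urn starting with 6 red and 4 green balls, adding $c = 2$ new balls of the drawn color after each draw, and estimates $\P(R_1)$ and $\P(R_2)$, both of which equal $\frac{3}{5}$ by the exercise above:

```python
import random

def polya(red, green, c, k):
    """Simulate k draws from Pólya's urn; each drawn ball is replaced
    together with c new balls of the same color. True means red."""
    draws = []
    for _ in range(k):
        is_red = random.random() < red / (red + green)
        draws.append(is_red)
        if is_red:
            red += c
        else:
            green += c
    return draws

runs = 100_000
r1 = r2 = 0
for _ in range(runs):
    d = polya(6, 4, 2, 2)
    r1 += d[0]
    r2 += d[1]
print("P(R1) ~", r1 / runs, " P(R2) ~", r2 / runs, " both exactly 0.6")
```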
An urn initially contains 6 red and 4 green balls. A ball is chosen at random from the urn and its color is recorded. It is then replaced in the urn and two new balls of the other color are added to the urn. The process is repeated. Find the probability of each of the following events:
1. Balls 1 and 2 are red and ball 3 is green.
2. Balls 1 and 3 are red and ball 2 is green.
3. Ball 1 is green and balls 2 and 3 are red.
4. Ball 2 is red.
5. Ball 1 is red given that ball 2 is red.
Answer
1. $\frac{6}{35}$
2. $\frac{6}{35}$
3. $\frac{16}{105}$
4. $\frac{17}{30}$
5. $\frac{9}{17}$
Think about the results in the previous exercise, and compare with Pólya's urn. Note that the answers to parts (a), (b), and (c) are not all the same, and that the probability that the second ball is red in part (d) is not the same as the probability that the first ball is red. In short, the sequence of events $(R_1, R_2, \ldots)$ is not exchangeable.
Diagnostic Testing
Suppose that we have a random experiment with an event $A$ of interest. When we run the experiment, of course, event $A$ will either occur or not occur. However, suppose that we are not able to observe the occurrence or non-occurrence of $A$ directly. Instead we have a diagnostic test designed to indicate the occurrence of event $A$; thus the test can be either positive for $A$ or negative for $A$. The test also has an element of randomness, and in particular can be in error. Here are some typical examples of the type of situation we have in mind:
• The event is that a person has a certain disease and the test is a blood test for the disease.
• The event is that a woman is pregnant and the test is a home pregnancy test.
• The event is that a person is lying and the test is a lie-detector test.
• The event is that a device is defective and the test consists of a sensor reading.
• The event is that a missile is in a certain region of airspace and the test consists of radar signals.
• The event is that a person has committed a crime, and the test is a jury trial with evidence presented for and against the event.
Let $T$ be the event that the test is positive for the occurrence of $A$. The conditional probability $\P(T \mid A)$ is called the sensitivity of the test. The complementary probability $\P(T^c \mid A) = 1 - \P(T \mid A)$ is the false negative probability. The conditional probability $\P(T^c \mid A^c)$ is called the specificity of the test. The complementary probability $\P(T \mid A^c) = 1 - \P(T^c \mid A^c)$ is the false positive probability. In many cases, the sensitivity and specificity of the test are known, as a result of the development of the test. However, the user of the test is interested in the opposite conditional probabilities, namely $\P(A \mid T)$, the probability of the event of interest, given a positive test, and $\P(A^c \mid T^c)$, the probability of the complementary event, given a negative test. Of course, if we know $\P(A \mid T)$ then we also have $\P(A^c \mid T) = 1 - \P(A \mid T)$, the probability of the complementary event given a positive test. Similarly, if we know $\P(A^c \mid T^c)$ then we also have $\P(A \mid T^c)$, the probability of the event given a negative test. Computing the probabilities of interest is simply a special case of Bayes' theorem.
The probability that the event occurs, given a positive test is $\P(A \mid T) = \frac{\P(A) \P(T \mid A)}{\P(A) \P(T \mid A) + \P(A^c) \P(T \mid A^c)}$ The probability that the event does not occur, given a negative test is $\P(A^c \mid T^c) = \frac{\P(A^c) \P(T^c \mid A^c)}{\P(A) \P(T^c \mid A) + \P(A^c) \P(T^c \mid A^c)}$
There is often a trade-off between sensitivity and specificity. An attempt to make a test more sensitive may result in the test being less specific, and an attempt to make a test more specific may result in the test being less sensitive. As an extreme example, consider the worthless test that always returns positive, no matter what the evidence. Then $T = S$ so the test has sensitivity 1, but specificity 0. At the opposite extreme is the worthless test that always returns negative, no matter what the evidence. Then $T = \emptyset$ so the test has specificity 1 but sensitivity 0. In between these extremes are helpful tests that are actually based on evidence of some sort.
Suppose that the sensitivity $a = \P(T \mid A) \in (0, 1)$ and the specificity $b = \P(T^c \mid A^c) \in (0, 1)$ are fixed. Let $p = \P(A)$ denote the prior probability of the event $A$ and $P = \P(A \mid T)$ the posterior probability of $A$ given a positive test.
$P$ as a function of $p$ is given by $P = \frac{a p}{(a + b - 1) p + (1 - b)}, \quad p \in [0, 1]$
1. $P$ increases continuously from 0 to 1 as $p$ increases from 0 to 1.
2. $P$ is concave downward if $a + b \gt 1$. In this case $A$ and $T$ are positively correlated.
3. $P$ is concave upward if $a + b \lt 1$. In this case $A$ and $T$ are negatively correlated.
4. $P = p$ if $a + b = 1$. In this case, $A$ and $T$ are uncorrelated (independent).
Proof
The formula for $P$ in terms of $p$ follows from (42) and algebra. For part (a), note that $\frac{dP}{dp} = \frac{a (1 - b)}{[(a + b - 1) p + (1 - b)]^2} \gt 0$ For parts (b)-(d), note that $\frac{d^2 P}{dp^2} = \frac{-2 a (1 - b)(a + b - 1)}{[(a + b - 1)p + (1 - b)]^3}$ If $a + b \gt 1$, $d^2P/dp^2 \lt 0$ so $P$ is concave downward on $[0, 1]$ and hence $P \gt p$ for $0 \lt p \lt 1$. If $a + b \lt 1$, $d^2P/dp^2 \gt 0$ so $P$ is concave upward on $[0, 1]$ and hence $P \lt p$ for $0 \lt p \lt 1$. Trivially if $a + b = 1$, $P = p$ for $0 \le p \le 1$.
Of course, part (b) is the typical case, where the test is useful. In fact, we would hope that the sensitivity and specificity are close to 1. In case (c), the test is worse than useless since it gives the wrong information about $A$. But this case could be turned into a useful test by simply reversing the roles of positive and negative. In case (d), the test is worthless and gives no information about $A$. It's interesting that the broad classification above depends only on the sum of the sensitivity and specificity.
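The function $p \mapsto P$ is also convenient to compute directly. The Python sketch below implements the formula above with the sensitivity and specificity of the next exercise, and reproduces its answers:

```python
def posterior(p, a, b):
    # P(A | T) as a function of the prior p, with sensitivity a, specificity b
    return a * p / ((a + b - 1) * p + (1 - b))

for p in (0.001, 0.01, 0.2, 0.5, 0.7, 0.9):
    print(f"P(A) = {p}: P(A | T) = {posterior(p, 0.99, 0.95):.4f}")
```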
Suppose that a diagnostic test has sensitivity 0.99 and specificity 0.95. Find $\P(A \mid T)$ for each of the following values of $\P(A)$:
1. 0.001
2. 0.01
3. 0.2
4. 0.5
5. 0.7
6. 0.9
Answer
1. 0.0194
2. 0.1667
3. 0.8319
4. 0.9519
5. 0.9788
6. 0.9944
With sensitivity 0.99 and specificity 0.95, the test in the last exercise superficially looks good. However the small value of $\P(A \mid T)$ for small values of $\P(A)$ is striking (but inevitable given the properties above). The moral, of course, is that $\P(A \mid T)$ depends critically on $\P(A)$ not just on the sensitivity and specificity of the test. Moreover, the correct comparison is $\P(A \mid T)$ with $\P(A)$, as in the exercise, not $\P(A \mid T)$ with $\P(T \mid A)$—Beware of the fallacy of the transposed conditional! In terms of the correct comparison, the test does indeed work well; $\P(A \mid T)$ is significantly larger than $\P(A)$ in all cases.
A woman initially believes that there is an even chance that she is or is not pregnant. She takes a home pregnancy test with sensitivity 0.95 and specificity 0.90 (which are reasonable values for a home pregnancy test). Find the updated probability that the woman is pregnant in each of the following cases.
1. The test is positive.
2. The test is negative.
Answer
1. 0.905
2. 0.053
Suppose that 70% of defendants brought to trial for a certain type of crime are guilty. Moreover, historical data show that juries convict guilty persons 80% of the time and convict innocent persons 10% of the time. Suppose that a person is tried for a crime of this type. Find the updated probability that the person is guilty in each of the following cases:
1. The person is convicted.
2. The person is acquitted.
Answer
1. 0.949
2. 0.341
The Check Engine light on your car has turned on. Without the information from the light, you believe that there is a 10% chance that your car has a serious engine problem. You learn that if the car has such a problem, the light will come on with probability 0.99, but if the car does not have a serious problem, the light will still come on, under circumstances similar to yours, with probability 0.3. Find the updated probability that you have an engine problem.
Answer
0.268
The standard test for HIV is the ELISA (Enzyme-Linked Immunosorbent Assay) test. It has sensitivity and specificity of 0.999. Suppose that a person is selected at random from a population in which 1% are infected with HIV, and given the ELISA test. Find the probability that the person has HIV in each of the following cases:
1. The test is positive.
2. The test is negative.
Answer
1. 0.9098
2. 0.00001
The ELISA test for HIV is a very good one. Let's look at another test, this one for prostate cancer, that's rather bad.
The PSA test for prostate cancer is based on a blood marker known as the Prostate Specific Antigen. An elevated level of PSA is evidence for prostate cancer. To have a diagnostic test, in the sense that we are discussing here, we must decide on a definite level of PSA, above which we declare the test to be positive. A positive test would typically lead to other more invasive tests (such as biopsy) which, of course, carry risks and cost. The PSA test with cutoff 2.6 ng/ml has sensitivity 0.40 and specificity 0.81. The overall incidence of prostate cancer among males is 156 per 100000. Suppose that a man, with no particular risk factors, has the PSA test. Find the probability that the man has prostate cancer in each of the following cases:
1. The test is positive.
2. The test is negative.
Answer
1. 0.00328
2. 0.00116
Diagnostic testing is closely related to a general statistical procedure known as hypothesis testing. A separate chapter on hypothesis testing explores this procedure in detail.
Data Analysis Exercises
For the M&M data set, find the empirical probability that a bag has at least 10 reds, given that the weight of the bag is at least 48 grams.
Answer
$\frac{10}{23}$.
Consider the Cicada data.
1. Find the empirical probability that a cicada weighs at least 0.25 grams given that the cicada is male.
2. Find the empirical probability that a cicada weighs at least 0.25 grams given that the cicada is the tredecula species.
Answer
1. $\frac{2}{45}$
2. $\frac{7}{44}$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\bs}{\boldsymbol}$
In this section, we will discuss independence, one of the fundamental concepts in probability theory. Independence is frequently invoked as a modeling assumption, and moreover, (classical) probability itself is based on the idea of independent replications of the experiment. As usual, if you are a new student of probability, you may want to skip the technical details.
Basic Theory
As usual, our starting point is a random experiment modeled by a probability space $(S, \mathscr S, \P)$ so that $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. We will define independence for two events, then for collections of events, and then for collections of random variables. In each case, the basic idea is the same.
Independence of Two Events
Two events $A$ and $B$ are independent if $\P(A \cap B) = \P(A) \P(B)$
If both of the events have positive probability, then independence is equivalent to the statement that the conditional probability of one event given the other is the same as the unconditional probability of the event: $\P(A \mid B) = \P(A) \iff \P(B \mid A) = \P(B) \iff \P(A \cap B) = \P(A) \P(B)$ This is how you should think of independence: knowledge that one event has occurred does not change the probability assigned to the other event. Independence of two events was discussed in the last section in the context of correlation. In particular, for two events, independent and uncorrelated mean the same thing.
The terms independent and disjoint sound vaguely similar but they are actually very different. First, note that disjointness is purely a set-theory concept while independence is a probability (measure-theoretic) concept. Indeed, two events can be independent relative to one probability measure and dependent relative to another. But most importantly, two disjoint events can never be independent, except in the trivial case that one of the events is null.
Suppose that $A$ and $B$ are disjoint events, each with positive probability. Then $A$ and $B$ are dependent, and in fact are negatively correlated.
Proof
Note that $\P(A \cap B) = \P(\emptyset) = 0$ but $\P(A) \P(B) \gt 0$.
If $A$ and $B$ are independent events then intuitively it seems clear that any event that can be constructed from $A$ should be independent of any event that can be constructed from $B$. This is the case, as the next result shows. Moreover, this basic idea is essential for the generalization of independence that we will consider shortly.
If $A$ and $B$ are independent events, then each of the following pairs of events is independent:
1. $A^c$, $B$
2. $A$, $B^c$
3. $A^c$, $B^c$
Proof
Suppose that $A$ and $B$ are independent. Then by the difference rule and the complement rule, $\P(A^c \cap B) = \P(B) - \P(A \cap B) = \P(B) - \P(A) \, \P(B) = \P(B)\left[1 - \P(A)\right] = \P(B) \P(A^c)$ Hence $A^c$ and $B$ are independent. Parts (b) and (c) follow from (a).
An event that is essentially deterministic, that is, has probability 0 or 1, is independent of any other event, even itself.
Suppose that $A$ and $B$ are events.
1. If $\P(A) = 0$ or $\P(A) = 1$, then $A$ and $B$ are independent.
2. $A$ is independent of itself if and only if $\P(A) = 0$ or $\P(A) = 1$.
Proof
1. Recall that if $\P(A) = 0$ then $\P(A \cap B) = 0$, and if $\P(A) = 1$ then $\P(A \cap B) = \P(B)$. In either case we have $\P(A \cap B) = \P(A) \P(B)$.
2. The independence of $A$ with itself gives $\P(A) = [\P(A)]^2$ and hence either $\P(A) = 0$ or $\P(A) = 1$.
General Independence of Events
To extend the definition of independence to more than two events, we might think that we could just require pairwise independence, the independence of each pair of events. However, this is not sufficient for the strong type of independence that we have in mind. For example, suppose that we have three events $A$, $B$, and $C$. Mutual independence of these events should not only mean that each pair is independent, but also that an event that can be constructed from $A$ and $B$ (for example $A \cup B^c$) should be independent of $C$. Pairwise independence does not achieve this; an exercise below gives three events that are pairwise independent, but the intersection of two of the events is related to the third event in the strongest possible sense.
Another possible generalization would be to simply require the probability of the intersection of the events to be the product of the probabilities of the events. However, this condition does not even guarantee pairwise independence. An exercise below gives an example. However, the definition of independence for two events does generalize in a natural way to an arbitrary collection of events.
Suppose that $A_i$ is an event for each $i$ in an index set $I$. Then the collection $\mathscr{A} = \{A_i: i \in I\}$ is independent if for every finite $J \subseteq I$, $\P\left(\bigcap_{j \in J} A_j \right) = \prod_{j \in J} \P(A_j)$
Independence of a collection of events is much stronger than mere pairwise independence of the events in the collection. The basic inheritance property in the following result follows immediately from the definition.
Suppose that $\mathscr{A}$ is a collection of events.
1. If $\mathscr{A}$ is independent, then $\mathscr{B}$ is independent for every $\mathscr{B} \subseteq \mathscr{A}$.
2. If $\mathscr{B}$ is independent for every finite $\mathscr{B} \subseteq \mathscr{A}$ then $\mathscr{A}$ is independent.
For a finite collection of events, the number of conditions required for mutual independence grows exponentially with the number of events.
There are $2^n - n - 1$ non-trivial conditions in the definition of the independence of $n$ events.
1. Explicitly give the 4 conditions that must be satisfied for events $A$, $B$, and $C$ to be independent.
2. Explicitly give the 11 conditions that must be satisfied for events $A$, $B$, $C$, and $D$ to be independent.
Answer
There are $2^n$ subcollections of the $n$ events. One is empty and $n$ involve a single event. The remaining $2^n - n - 1$ subcollections involve two or more events and correspond to non-trivial conditions.
1. $A$, $B$, $C$ are independent if and only if \begin{align*} & \P(A \cap B) = \P(A) \P(B)\ &\P(A \cap C) = \P(A) \P(C)\ & \P(B \cap C) = \P(B) \P(C)\ & \P(A \cap B \cap C) = \P(A) \P(B) \P(C) \end{align*}
2. $A$, $B$, $C$, $D$ are independent if and only if \begin{align*} & \P(A \cap B) = \P(A) \P(B)\ & \P(A \cap C) = \P(A) \P(C)\ & \P(A \cap D) = \P(A) \P(D)\ & \P(B \cap C) = \P(B) \P(C)\ & \P(B \cap D) = \P(B) \P(D)\ & \P(C \cap D) = \P(C) \P(D)\ & \P(A \cap B \cap C) = \P(A) \P(B) \P(C)\ & \P(A \cap B \cap D) = \P(A) \P(B) \P(D)\ & \P(A \cap C \cap D) = \P(A) \P(C) \P(D) \ & \P(B \cap C \cap D) = \P(B) \P(C) \P(D)\ & \P(A \cap B \cap C \cap D) = \P(A) \P(B) \P(C) \P(D) \end{align*}
If the events $A_1, A_2, \ldots, A_n$ are independent, then it follows immediately from the definition that $\P\left(\bigcap_{i=1}^n A_i\right) = \prod_{i=1}^n \P(A_i)$ This is known as the multiplication rule for independent events. Compare this with the general multiplication rule for conditional probability.
The collection of essentially deterministic events $\mathscr{D} = \{A \in \mathscr{S}: \P(A) = 0 \text{ or } \P(A) = 1\}$ is independent.
Proof
Suppose that $\{A_1, A_2, \ldots, A_n\} \subseteq \mathscr{D}$. If $\P(A_i) = 0$ for some $i \in \{1, 2, \ldots, n\}$ then $\P(A_1 \cap A_2 \cap \cdots \cap A_n) = 0$. If $\P(A_i) = 1$ for every $i \in \{1, 2, \ldots, n\}$ then $\P(A_1 \cap A_2 \cap \cdots \cap A_n) = 1$. In either case, $\P(A_1 \cap A_2 \cdots \cap A_n) = \P(A_1) \P(A_2) \cdots \P(A_n)$.
The next result generalizes the theorem above on the complements of two independent events.
Suppose that $\mathscr A = \{A_i: i \in I\}$ and $\mathscr B = \{B_i: i \in I\}$ are two collections of events with the property that for each $i \in I$, either $B_i = A_i$ or $B_i = A_i^c$. Then $\mathscr A$ is independent if and only if $\mathscr B$ is independent.
Proof
The proof is actually very similar to the proof for two events, except for more complicated notation. First, by the symmetry of the relation between $\mathscr A$ and $\mathscr B$, it suffices to show $\mathscr A$ independent implies $\mathscr B$ independent. Next, by the inheritance property, it suffices to consider the case where the index set $I$ is finite.
1. Fix $k \in I$ and define $B_k = A_k^c$ and $B_i = A_i$ for $i \in I \setminus \{k\}$. Suppose now that $J \subseteq I$. If $k \notin J$ then trivially, $\P\left(\bigcap_{j \in J} B_j\right) = \prod_{j \in J} \P(B_j)$. If $k \in J$, then using the difference rule, \begin{align*} \P\left(\bigcap_{j \in J} B_j\right) &= \P\left(\bigcap_{j \in J \setminus \{k\}} A_j\right) - \P\left(\bigcap_{j \in J} A_j\right) \ & = \prod_{j \in J \setminus\{k\}} \P(A_j) - \prod_{j \in J} \P(A_j) = \left[\prod_{j \in J \setminus\{k\}} \P(A_j)\right][1 - \P(A_k)] = \prod_{j \in J} \P(B_j) \end{align*} Hence $\{B_i: i \in I\}$ is a collection of independent events.
2. Suppose now that $\mathscr B = \{B_i: i \in I\}$ is a general collection of events where $B_i = A_i$ or $B_i = A_i^c$ for each $i \in I$. Then $\mathscr B$ can be obtained from $\mathscr A$ by a finite sequence of complement changes of the type in (a), each of which preserves independence.
The last theorem in turn leads to the type of strong independence that we want. The following exercise gives examples.
If $A$, $B$, $C$, and $D$ are independent events, then
1. $A \cup B$, $C^c$, $D$ are independent.
2. $A \cup B^c$, $C^c \cup D^c$ are independent.
Proof
We will give proofs that use the complement theorem, but to do so, some additional notation is helpful. If $E$ is an event, let $E^1 = E$ and $E^0 = E^c$.
1. Note that $A \cup B = \bigcup_{(i, j) \in I} A^i \cap B^j$ where $I = \{(1, 0), (0, 1), (1, 1)\}$ and note that the events in the union are disjoint. By the distributive property, $(A \cup B) \cap C^c = \bigcup_{(i, j) \in I} A^i \cap B^j \cap C^0$ and again the events in the union are disjoint. By additivity and the complement theorem, $\P[(A \cup B) \cap C^c] = \sum_{(i, j) \in I} \P(A^i) \P(B^j) \P(C^0) = \left(\sum_{(i,j) \in I} \P(A^i) \P(B^j)\right) \P(C^0) = \P(A \cup B) \P(C^c)$ By exactly the same type of argument, $\P[(A \cup B) \cap D] = \P(A \cup B) \P(D)$ and $\P[(A \cup B) \cap C^c \cap D] = \P(A \cup B) \P(C^c) \P(D)$. Directly from the result above on complements, $\P(C^c \cap D) = \P(C^c) \P(D)$.
2. Note that $A \cup B^c = \bigcup_{(i, j) \in I} A^i \cap B^j$ where $I = \{(0, 0), (1, 0), (1, 1)\}$ and note that the events in the union are disjoint. Similarly $C^c \cup D^c = \bigcup_{(k, l) \in J} C^k \cap D^l$ where $J = \{(0, 0), (1, 0), (0, 1)\}$, and again the events in the union are disjoint. By the distributive rule for set operations, $(A \cup B^c) \cap (C^c \cup D^c) = \bigcup_{(i, j, k ,l) \in I \times J} A^i \cap B^j \cap C^k \cap D^l$ and once again, the events in the union are disjoint. By additivity and the complement theorem, $\P[(A \cup B^c) \cap (C^c \cup D^c)] = \sum_{(i, j, k ,l) \in I \times J} \P(A^i ) \P(B^j) \P(C^k) \P(D^l)$ But also by additivity, the complement theorem, and the distributive property of arithmetic, $\P(A \cup B^c) \P(C^c \cup D^c) = \left(\sum_{(i,j) \in I} \P(A^i) \P(B^j)\right) \left(\sum_{(k,l) \in J} \P(C^k) \P(D^l)\right) = \sum_{(i, j, k ,l) \in I \times J} \P(A^i ) \P(B^j) \P(C^k) \P(D^l)$
The complete generalization of these results is a bit complicated, but roughly means that if we start with a collection of independent events, and form new events from disjoint subcollections (using the set operations of union, intersection, and complement), then the new events are independent. For a precise statement, see the section on measure spaces. The importance of the complement theorem lies in the fact that any event that can be defined in terms of a finite collection of events $\{A_i: i \in I\}$ can be written as a disjoint union of events of the form $\bigcap_{i \in I} B_i$ where $B_i = A_i$ or $B_i = A_i^c$ for each $i \in I$.
Another consequence of the general complement theorem is a formula for the probability of the union of a collection of independent events that is much nicer than the inclusion-exclusion formula.
If $A_1, A_2, \ldots, A_n$ are independent events, then $\P\left(\bigcup_{i=1}^n A_i\right) = 1 - \prod_{i=1}^n \left[1 - \P(A_i)\right]$
Proof
From DeMorgan's law and the independence of $A_1^c, A_2^c, \ldots, A_n^c$ we have $\P\left(\bigcup_{i=1}^n A_i \right) = 1 - \P\left( \bigcap_{i=1}^n A_i^c \right) = 1 - \prod_{i=1}^n \P(A_i^c) = 1 - \prod_{i=1}^n \left[1 - \P(A_i)\right]$
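This formula is easy to compute and to check by simulation. Here is a minimal Python sketch (the function names are ours, chosen for illustration only) that evaluates the formula and compares it with a Monte Carlo estimate:

```python
import random

def prob_union(probs):
    """P(at least one event occurs) for independent events, via the complement formula."""
    product = 1.0
    for p in probs:
        product *= 1.0 - p
    return 1.0 - product

def simulate_union(probs, runs=100_000):
    """Monte Carlo estimate: each event occurs independently with its own probability."""
    hits = sum(1 for _ in range(runs) if any(random.random() < p for p in probs))
    return hits / runs

probs = [0.3, 0.4, 0.8]
print(prob_union(probs))      # exact: 1 - 0.7 * 0.6 * 0.2 = 0.916
print(simulate_union(probs))  # should be close to 0.916
```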
Independence of Random Variables
Suppose now that $X_i$ is a random variable for the experiment with values in a set $T_i$ for each $i$ in a nonempty index set $I$. Mathematically, $X_i$ is a function from $S$ into $T_i$, and recall that $\{X_i \in B\}$ denotes the event $\{s \in S: X_i(s) \in B\}$ for $B \subseteq T_i$. Intuitively, $X_i$ is a variable of interest in the experiment, and every meaningful statement about $X_i$ defines an event. Intuitively, the random variables are independent if information about some of the variables tells us nothing about the other variables. Mathematically, independence of a collection of random variables can be reduced to the independence of collections of events.
The collection of random variables $\mathscr{X} = \{X_i: i \in I\}$ is independent if the collection of events $\left\{\{X_i \in B_i\}: i \in I\right\}$ is independent for every choice of $B_i \subseteq T_i$ for $i \in I$. Equivalently then, $\mathscr{X}$ is independent if for every finite $J \subseteq I$, and for every choice of $B_j \subseteq T_j$ for $j \in J$ we have $\P\left(\bigcap_{j \in J} \{X_j \in B_j\} \right) = \prod_{j \in J} \P(X_j \in B_j)$
Details
Recall that $T_i$ will have a $\sigma$-algebra $\mathscr T_i$ of admissible subsets so that $(T_i, \mathscr T_i)$ is a measurable space just like the sample space $(S, \mathscr S)$ for each $i \in I$. Also $X_i$ is measurable as a function from $S$ into $T_i$ for each $i \in I$. These technical assumptions ensure that the definition makes sense.
Suppose that $\mathscr{X}$ is a collection of random variables.
1. If $\mathscr{X}$ is independent, then $\mathscr{Y}$ is independent for every $\mathscr{Y} \subseteq \mathscr{X}$
2. If $\mathscr{Y}$ is independent for every finite $\mathscr{Y} \subseteq \mathscr{X}$ then $\mathscr{X}$ is independent.
It would seem almost obvious that if a collection of random variables is independent, and we transform each variable in deterministic way, then the new collection of random variables should still be independent.
Suppose now that $g_i$ is a function from $T_i$ into a set $U_i$ for each $i \in I$. If $\{X_i: i \in I\}$ is independent, then $\{g_i(X_i): i \in I\}$ is also independent.
Proof
Except for the abstract setting, the proof of independence is easy. Suppose that $C_i \subseteq U_i$ for each $i \in I$. Then $\left\{g_i(X_i) \in C_i\right\} = \left\{X_i \in g_i^{-1}(C_i)\right\}$ for $i \in I$. By the independence of $\{X_i: i \in I\}$, the collection of events $\left\{\left\{X_i \in g_i^{-1}(C_i)\right\}: i \in I\right\}$ is independent.
Technically, the set $U_i$ will have a $\sigma$-algebra $\mathscr U_i$ of admissible subsets so that $(U_i, \mathscr U_i)$ is a measurable space just like $(T_i, \mathscr T_i)$ and just like the sample space $(S, \mathscr S)$. The function $g_i$ is required to be measurable as a function from $T_i$ into $U_i$ just as $X_i$ is measurable as a function from $S$ into $T_i$. In the proof above, $C_i \in \mathscr U_i$ so that $g_i^{-1}(C_i) \in \mathscr T_i$ and hence $\{X_i \in g_i^{-1}(C_i)\} \in \mathscr S$.
As with events, the (mutual) independence of random variables is a very strong property. If a collection of random variables is independent, then any subcollection is also independent. New random variables formed from disjoint subcollections are independent. For a simple example, suppose that $X$, $Y$, and $Z$ are independent real-valued random variables. Then
1. $\sin(X)$, $\cos(Y)$, and $e^Z$ are independent.
2. $(X, Y)$ and $Z$ are independent.
3. $X^2 + Y^2$ and $\arctan(Z)$ are independent.
4. $X$ and $Z$ are independent.
5. $Y$ and $Z$ are independent.
In particular, note that statement 2 in the list above is much stronger than the conjunction of statements 4 and 5. Contrapositively, if $X$ and $Z$ are dependent, then $(X, Y)$ and $Z$ are also dependent. Independence of random variables subsumes independence of events.
A collection of events $\mathscr{A}$ is independent if and only if the corresponding collection of indicator variables $\left\{\bs{1}_A: A \in \mathscr{A}\right\}$ is independent.
Proof
Let $\mathscr A = \{A_i: i \in I\}$ where $I$ is a nonempty index set. For $i \in I$, the only non-trivial events that can be defined in terms of $\bs 1_{A_i}$ are $\left\{\bs 1_{A_i} = 1\right\} = A_i$ and $\left\{\bs 1_{A_i} = 0\right\} = A_i^c$. So $\left\{\bs 1_{A_i}: i \in I\right\}$ is independent if and only if every collection of the form $\{B_i: i \in I\}$ is independent, where for each $i \in I$, either $B_i = A_i$ or $B_i = A_i^c$. But by the complement theorem, this is equivalent to the independence of $\{A_i: i \in I\}$.
Many of the concepts that we have been using informally can now be made precise. A compound experiment that consists of independent stages is essentially just an experiment whose outcome is a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots)$ where $X_i$ is the outcome of the $i$th stage.
In particular, suppose that we have a basic experiment with outcome variable $X$. By definition, the outcome of the experiment that consists of independent replications of the basic experiment is a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots)$ each with the same probability distribution as $X$. This is fundamental to the very concept of probability, as expressed in the law of large numbers. From a statistical point of view, suppose that we have a population of objects and a vector of measurements $X$ of interest for the objects in the population. The sequence $\bs{X}$ above corresponds to sampling from the distribution of $X$; that is, $X_i$ is the vector of measurements for the $i$th object in the sample. When we sample from a finite population, sampling with replacement generates independent random variables while sampling without replacement generates dependent random variables.
Conditional Independence and Conditional Probability
As noted at the beginning of our discussion, independence of events or random variables depends on the underlying probability measure. Thus, suppose that $B$ is an event with positive probability. A collection of events or a collection of random variables is conditionally independent given $B$ if the collection is independent relative to the conditional probability measure $A \mapsto \P(A \mid B)$. For example, a collection of events $\{A_i: i \in I\}$ is conditionally independent given $B$ if for every finite $J \subseteq I$, $\P\left(\bigcap_{j \in J} A_j \biggm| B \right) = \prod_{j \in J} \P(A_j \mid B)$ Note that the definitions and theorems of this section would still be true, but with all probabilities conditioned on $B$.
Conversely, conditional probability has a nice interpretation in terms of independent replications of the experiment. Thus, suppose that we start with a basic experiment with $S$ as the set of outcomes. We let $X$ denote the outcome random variable, so that mathematically $X$ is simply the identity function on $S$. In particular, if $A$ is an event then trivially, $\P(X \in A) = \P(A)$. Suppose now that we replicate the experiment independently. This results in a new, compound experiment with a sequence of independent random variables $(X_1, X_2, \ldots)$, each with the same distribution as $X$. That is, $X_i$ is the outcome of the $i$th repetition of the experiment.
Suppose now that $A$ and $B$ are events in the basic experiment with $\P(B) \gt 0$. In the compound experiment, the event that when $B$ occurs for the first time, $A$ also occurs has probability $\frac{\P(A \cap B)}{\P(B)} = \P(A \mid B)$
Proof
In the compound experiment, if we record $(X_1, X_2, \ldots)$ then the new set of outcomes is $S^\infty = S \times S \times \cdots$. The event that when $B$ occurs for the first time, $A$ also occurs is $\bigcup_{n=1}^\infty \left\{X_1 \notin B, X_2 \notin B, \ldots, X_{n-1} \notin B, X_n \in A \cap B\right\}$ The events in the union are disjoint. Also, since $(X_1, X_2, \ldots)$ is a sequence of independent variables, each with the distribution of $X$ we have $\P\left(X_1 \notin B, X_2 \notin B, \ldots, X_{n-1} \notin B, X_n \in A \cap B\right) = \left[\P\left(B^c\right)\right]^{n-1} \P(A \cap B) = \left[1 - \P(B)\right]^{n-1} \P(A \cap B)$ Hence, using geometric series, the probability of the union is $\sum_{n=1}^\infty \left[1 - \P(B)\right]^{n-1} \P(A \cap B) = \frac{\P(A \cap B)}{1 - \left[1 - \P(B)\right]} = \frac{\P(A \cap B)}{\P(B)}$
Heuristic Argument
Suppose that we create a new experiment by repeating the basic experiment until $B$ occurs for the first time, and then record the outcome of just the last repetition of the basic experiment. Now the set of outcomes is simply $B$ and the appropriate probability measure on the new experiment is $A \mapsto \P(A \mid B)$.
Suppose that $A$ and $B$ are disjoint events in a basic experiment with $\P(A) \gt 0$ and $\P(B) \gt 0$. In the compound experiment obtained by replicating the basic experiment, the event that $A$ occurs before $B$ has probability $\frac{\P(A)}{\P(A) + \P(B)}$
Proof
Note that the event that $A$ occurs before $B$ is the same as the event that when $A \cup B$ occurs for the first time, $A$ occurs. Applying the previous result with $A \cup B$ in place of $B$, this event has probability $\frac{\P[A \cap (A \cup B)]}{\P(A \cup B)} = \frac{\P(A)}{\P(A) + \P(B)}$ since $A$ and $B$ are disjoint.
Examples and Applications
Basic Rules
Suppose that $A$, $B$, and $C$ are independent events in an experiment with $\P(A) = 0.3$, $\P(B) = 0.4$, and $\P(C) = 0.8$. Express each of the following events in set notation and find its probability:
1. All three events occur.
2. None of the three events occurs.
3. At least one of the three events occurs.
4. At least one of the three events does not occur.
5. Exactly one of the three events occurs.
6. Exactly two of the three events occurs.
Answer
1. $\P(A \cap B \cap C) = 0.096$
2. $\P(A^c \cap B^c \cap C^c) = 0.084$
3. $\P(A \cup B \cup C) = 0.916$
4. $\P(A^c \cup B^c \cup C^c) = 0.904$
5. $\P[(A \cap B^c \cap C^c) \cup (A^c \cap B \cap C^c) \cup (A^c \cap B^c \cap C)] = 0.428$
6. $\P[(A \cap B \cap C^c) \cup (A \cap B^c \cap C) \cup (A^c \cap B \cap C)] = 0.392$
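All six answers can be checked mechanically. The following Python sketch (ours, not part of the text) enumerates the eight atoms $A^i \cap B^j \cap C^k$; by independence and the complement theorem, each atom has probability equal to the product of the corresponding marginal probabilities:

```python
from itertools import product

pA, pB, pC = 0.3, 0.4, 0.8

def atom_prob(i, j, k):
    """Probability of the atom A^i ∩ B^j ∩ C^k (exponent 1 = event, 0 = complement)."""
    return (pA if i else 1 - pA) * (pB if j else 1 - pB) * (pC if k else 1 - pC)

atoms = {(i, j, k): atom_prob(i, j, k) for i, j, k in product((0, 1), repeat=3)}

print(atoms[1, 1, 1])                                   # all three occur: 0.096
print(atoms[0, 0, 0])                                   # none occurs: 0.084
print(sum(p for a, p in atoms.items() if sum(a) >= 1))  # at least one: 0.916
print(sum(p for a, p in atoms.items() if sum(a) <= 2))  # at least one fails: 0.904
print(sum(p for a, p in atoms.items() if sum(a) == 1))  # exactly one: 0.428
print(sum(p for a, p in atoms.items() if sum(a) == 2))  # exactly two: 0.392
```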
Suppose that $A$, $B$, and $C$ are independent events for an experiment with $\P(A) = \frac{1}{3}$, $\P(B) = \frac{1}{4}$, and $\P(C) = \frac{1}{5}$. Find the probability of each of the following events:
1. $(A \cap B) \cup C$
2. $A \cup B^c \cup C$
3. $(A^c \cap B^c) \cup C^c$
Answer
1. $\frac{4}{15}$
2. $\frac{13}{15}$
3. $\frac{9}{10}$
Simple Populations
A small company has 100 employees; 40 are men and 60 are women. There are 6 male executives. How many female executives should there be if gender and rank are independent? The underlying experiment is to choose an employee at random.
Answer
9
Suppose that a farm has four orchards that produce peaches, and that peaches are classified by size as small, medium, and large. The table below gives total number of peaches in a recent harvest by orchard and by size. Fill in the body of the table with counts for the various intersections, so that orchard and size are independent variables. The underlying experiment is to select a peach at random from the farm.
| Orchard \ Size | Small | Medium | Large | Total |
| --- | --- | --- | --- | --- |
| 1 |  |  |  | 400 |
| 2 |  |  |  | 600 |
| 3 |  |  |  | 300 |
| 4 |  |  |  | 700 |
| Total | 400 | 1000 | 600 | 2000 |
Answer
| Orchard \ Size | Small | Medium | Large | Total |
| --- | --- | --- | --- | --- |
| 1 | 80 | 200 | 120 | 400 |
| 2 | 120 | 300 | 180 | 600 |
| 3 | 60 | 150 | 90 | 300 |
| 4 | 140 | 350 | 210 | 700 |
| Total | 400 | 1000 | 600 | 2000 |
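Independence forces each cell count to be the product of its row and column totals divided by the grand total, since $\P(\text{row } i \text{ and column } j) = \P(\text{row } i) \P(\text{column } j)$. A short Python sketch (ours) reproduces the table body:

```python
row_totals = [400, 600, 300, 700]  # orchards 1 through 4
col_totals = [400, 1000, 600]      # small, medium, large
grand = 2000

# Under independence, each cell count is row_total * col_total / grand total.
for r in row_totals:
    print([r * c // grand for c in col_totals])
```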
Note from the last two exercises that you cannot see independence in a Venn diagram. Again, independence is a measure-theoretic concept, not a set-theoretic concept.
Bernoulli Trials
A Bernoulli trials sequence is a sequence $\bs{X} = (X_1, X_2, \ldots)$ of independent, identically distributed indicator variables. Random variable $X_i$ is the outcome of trial $i$, where in the usual terminology of reliability theory, 1 denotes success and 0 denotes failure. The canonical example is the sequence of scores when a coin (not necessarily fair) is tossed repeatedly. Another basic example arises whenever we start with a basic experiment and an event $A$ of interest, and then repeat the experiment. In this setting, $X_i$ is the indicator variable for event $A$ on the $i$th run of the experiment. The Bernoulli trials process is named for Jacob Bernoulli, and has a single basic parameter $p = \P(X_i = 1)$. This random process is studied in detail in the chapter on Bernoulli trials.
For $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$, $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = p^{x_1 + x_2 + \cdots + x_n} (1 - p)^{n - (x_1 + x_2 + \cdots + x_n)}$
Proof
If $X$ is a generic Bernoulli trial, then by definition, $\P(X = 1) = p$ and $\P(X = 0) = 1 - p$. Equivalently, $\P(X = x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$. Thus the result follows by independence.
Note that the sequence of indicator random variables $\bs{X}$ is exchangeable. That is, if the sequence $(x_1, x_2, \ldots, x_n)$ in the previous result is permuted, the probability does not change. On the other hand, there are exchangeable sequences of indicator random variables that are dependent, as Pólya's urn model so dramatically illustrates.
Let $Y$ denote the number of successes in the first $n$ trials. Then $\P(Y = y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
Proof
Note that $Y = \sum_{i=1}^n X_i$, where $X_i$ is the outcome of trial $i$, as in the previous result. For $y \in \{0, 1, \ldots, n\}$, the event $\{Y = y\}$ occurs if and only if exactly $y$ of the $n$ trials result in success (1). The number of ways to choose the $y$ trials that result in success is $\binom{n}{y}$, and by the previous result, the probability of any particular sequence of $y$ successes and $n - y$ failures is $p^y (1 - p)^{n-y}$. Thus the result follows by the additivity of probability.
The distribution of $Y$ is called the binomial distribution with parameters $n$ and $p$. The binomial distribution is studied in more detail in the chapter on Bernoulli Trials.
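A quick way to watch the binomial distribution emerge from Bernoulli trials is to simulate. Here is a minimal Python sketch (ours, with arbitrary illustrative parameters $n = 10$ and $p = 0.3$) comparing empirical frequencies with the formula:

```python
import random
from math import comb

def binomial_pmf(n, p, y):
    """P(Y = y) for the number of successes in n Bernoulli trials."""
    return comb(n, y) * p**y * (1 - p)**(n - y)

def simulate_successes(n, p):
    """Run one Bernoulli trials sequence and return the number of successes."""
    return sum(1 for _ in range(n) if random.random() < p)

n, p, runs = 10, 0.3, 100_000
counts = [0] * (n + 1)
for _ in range(runs):
    counts[simulate_successes(n, p)] += 1

for y in range(n + 1):
    print(y, counts[y] / runs, binomial_pmf(n, p, y))  # empirical vs. exact
```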
More generally, a multinomial trials sequence is a sequence $\bs{X} = (X_1, X_2, \ldots)$ of independent, identically distributed random variables, each taking values in a finite set $S$. The canonical example is the sequence of scores when a $k$-sided die (not necessarily fair) is thrown repeatedly. Multinomial trials are also studied in detail in the chapter on Bernoulli trials.
Cards
Consider the experiment that consists of dealing 2 cards at random from a standard deck and recording the sequence of cards dealt. For $i \in \{1, 2\}$, let $Q_i$ be the event that card $i$ is a queen and $H_i$ the event that card $i$ is a heart. Compute the appropriate probabilities to verify the following results. Reflect on these results.
1. $Q_1$ and $H_1$ are independent.
2. $Q_2$ and $H_2$ are independent.
3. $Q_1$ and $Q_2$ are negatively correlated.
4. $H_1$ and $H_2$ are negatively correlated.
5. $Q_1$ and $H_2$ are independent.
6. $H_1$ and $Q_2$ are independent.
Answer
1. $\P(Q_1) = \P(Q_1 \mid H_1) = \frac{1}{13}$
2. $\P(Q_2) = \P(Q_2 \mid H_2) = \frac{1}{13}$
3. $\P(Q_1) = \frac{1}{13}$, $\P(Q_1 \mid Q_2) = \frac{1}{17}$
4. $\P(H_1) = \frac{1}{4}$, $\P(H_1 \mid H_2) = \frac{4}{17}$
5. $\P(Q_1) = \P(Q_1 \mid H_2) = \frac{1}{13}$
6. $\P(Q_2) = \P(Q_2 \mid H_1) = \frac{1}{13}$
In the card experiment, set $n = 2$. Run the simulation 500 times. For each pair of events in the previous exercise, compute the product of the empirical probabilities and the empirical probability of the intersection. Compare the results.
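For readers without access to the card applet, a minimal Python stand-in follows (the deck encoding is our own); it compares the product of the empirical probabilities of $Q_1$ and $H_2$ with the empirical probability of $Q_1 \cap H_2$:

```python
import random

# Rank 12 encodes a queen; suit 'H' encodes hearts (our own encoding).
deck = [(rank, suit) for rank in range(1, 14) for suit in 'CDHS']

runs = 500
n_q1 = n_h2 = n_both = 0
for _ in range(runs):
    card1, card2 = random.sample(deck, 2)  # deal 2 cards without replacement
    q1 = card1[0] == 12                    # first card is a queen
    h2 = card2[1] == 'H'                   # second card is a heart
    n_q1 += q1
    n_h2 += h2
    n_both += q1 and h2

# Product of empirical probabilities vs. empirical probability of the intersection
print((n_q1 / runs) * (n_h2 / runs), n_both / runs)
```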
Dice
The following exercise gives three events that are pairwise independent, but not (mutually) independent.
Consider the dice experiment that consists of rolling 2 standard, fair dice and recording the sequence of scores. Let $A$ denote the event that first score is 3, $B$ the event that the second score is 4, and $C$ the event that the sum of the scores is 7. Then
1. $A$, $B$, $C$ are pairwise independent.
2. $A \cap B$ implies (is a subset of) $C$ and hence these events are dependent in the strongest possible sense.
Answer
Note that $A \cap B = A \cap C = B \cap C = \{(3, 4)\}$, and the probability of the common intersection is $\frac{1}{36}$. On the other hand, $\P(A) = \P(B) = \P(C) = \frac{6}{36} = \frac{1}{6}$.
In the dice experiment, set $n = 2$. Run the experiment 500 times. For each pair of events in the previous exercise, compute the product of the empirical probabilities and the empirical probability of the intersection. Compare the results.
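Again, a minimal Python stand-in for the applet (ours) makes the contrast concrete: the pairwise intersection frequencies track the products of the marginal frequencies, while the triple intersection does not:

```python
import random

runs = 500
n_a = n_b = n_c = n_ab = n_abc = 0
for _ in range(runs):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    a, b, c = d1 == 3, d2 == 4, d1 + d2 == 7
    n_a += a; n_b += b; n_c += c
    n_ab += a and b
    n_abc += a and b and c

# Pairwise independence: intersection frequency near the product of marginals.
print(n_ab / runs, (n_a / runs) * (n_b / runs))
# But the triple intersection is near 1/36, not (1/6)^3 = 1/216.
print(n_abc / runs, (n_a / runs) * (n_b / runs) * (n_c / runs))
```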
The following exercise gives an example of three events with the property that the probability of the intersection is the product of the probabilities, but the events are not pairwise independent.
Suppose that we throw a standard, fair die one time. Let $A = \{1, 2, 3, 4\}$, $B = C = \{4, 5, 6\}$. Then
1. $\P(A \cap B \cap C) = \P(A) \P(B) \P(C)$.
2. $B$ and $C$ are the same event, and hence are dependent in the strongest possible sense.
Answer
Note that $A \cap B \cap C = \{4\}$, so $\P(A \cap B \cap C) = \frac{1}{6}$. On the other hand, $\P(A) = \frac{4}{6}$ and $\P(B) = \P(C) = \frac{3}{6}$.
Suppose that a standard, fair die is thrown 4 times. Find the probability of the following events.
1. Six does not occur.
2. Six occurs at least once.
3. The sum of the first two scores is 5 and the sum of the last two scores is 7.
Answer
1. $\left(\frac{5}{6}\right)^4 \approx 0.4823$
2. $1 - \left(\frac{5}{6}\right)^4 \approx 0.5177$
3. $\frac{1}{54}$
Suppose that a pair of standard, fair dice are thrown 8 times. Find the probability of each of the following events.
1. Double six does not occur.
2. Double six occurs at least once.
3. Double six does not occur on the first 4 throws but occurs at least once in the last 4 throws.
Answer
1. $\left(\frac{35}{36}\right)^8 \approx 0.7982$
2. $1 - \left(\frac{35}{36}\right)^8 \approx 0.2018$
3. $\left(\frac{35}{36}\right)^4 \left[1 - \left(\frac{35}{36}\right)^4\right] \approx 0.0952$
Consider the dice experiment that consists of rolling $n$ $k$-sided dice and recording the sequence of scores $\bs{X} = (X_1, X_2, \ldots, X_n)$. The following conditions are equivalent (and correspond to the assumption that the dice are fair):
1. $\bs{X}$ is uniformly distributed on $\{1, 2, \ldots, k\}^n$.
2. $\bs{X}$ is a sequence of independent variables, and $X_i$ is uniformly distributed on $\{1, 2, \ldots, k\}$ for each $i$.
Proof
Let $S = \{1, 2, \ldots, k\}$ and note that $S^n$ has $k^n$ points. Suppose that $\bs{X}$ is uniformly distributed on $S^n$. Then $\P(\bs{X} = \bs{x}) = 1 / k^n$ for each $\bs{x} \in S^n$ so $\P(X_i = x) = k^{n-1}/k^n = 1 / k$ for each $x \in S$. Hence $X_i$ is uniformly distributed on $S$. Moreover, $\P(\bs{X} = \bs{x}) = \P(X_1 = x_1) \P(X_2 = x_2) \cdots \P(X_n = x_n), \quad \bs{x} = (x_1, x_2, \ldots, x_n) \in S^n$ so $\bs{X}$ is an independent sequence. Conversely, if $\bs{X}$ is an independent sequence and $X_i$ is uniformly distributed on $S$ for each $i$ then $\P(X_i = x) = 1/k$ for each $x \in S$ and hence $\P(\bs{X} = \bs{x}) = 1/k^n$ for each $\bs{x} \in S^n$. Thus $\bs{X}$ is uniformly distributed on $S^n$.
A pair of standard, fair dice are thrown repeatedly. Find the probability of each of the following events.
1. A sum of 4 occurs before a sum of 7.
2. A sum of 5 occurs before a sum of 7.
3. A sum of 6 occurs before a sum of 7.
4. When a sum of 8 occurs the first time, it occurs the hard way as $(4, 4)$.
Answer
1. $\frac{3}{9}$
2. $\frac{4}{10}$
3. $\frac{5}{11}$
4. $\frac{1}{5}$
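These values follow from the result above on the event that $A$ occurs before $B$; for a sum of 4 versus a sum of 7, for example, $\P(A) = 3/36$ and $\P(B) = 6/36$, giving $3/9$. A simulation sketch in Python (the function name is ours) confirms the values:

```python
import random

def a_before_b(target_a, target_b, runs=100_000):
    """Empirical probability that a sum of target_a occurs before a sum of
    target_b when a pair of fair dice is thrown repeatedly."""
    wins = 0
    for _ in range(runs):
        while True:
            s = random.randint(1, 6) + random.randint(1, 6)
            if s == target_a:
                wins += 1
                break
            if s == target_b:
                break
    return wins / runs

print(a_before_b(4, 7))  # exact value 3/9
print(a_before_b(5, 7))  # exact value 4/10
print(a_before_b(6, 7))  # exact value 5/11
```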
Problems of the type in the last exercise are important in the game of craps. Craps is studied in more detail in the chapter on Games of Chance.
Coins
A biased coin with probability of heads $\frac{1}{3}$ is tossed 5 times. Let $\bs{X}$ denote the outcome of the tosses (encoded as a bit string) and let $Y$ denote the number of heads. Find each of the following:
1. $\P(\bs{X} = \bs{x})$ for each $\bs{x} \in \{0, 1\}^5$.
2. $\P(Y = y)$ for each $y \in \{0, 1, 2, 3, 4, 5\}$.
3. $\P(1 \le Y \le 3)$
Answer
1. $\frac{32}{243}$ if $\bs{x} = 00000$, $\frac{16}{243}$ if $\bs{x}$ has exactly one 1 (there are 5 of these), $\frac{8}{243}$ if $\bs{x}$ has exactly two 1s (there are 10 of these), $\frac{4}{243}$ if $\bs{x}$ has exactly three 1s (there are 10 of these), $\frac{2}{243}$ if $\bs{x}$ has exactly four 1s (there are 5 of these), $\frac{1}{243}$ if $\bs{x} = 11111$
2. $\frac{32}{243}$ if $y = 0$, $\frac{80}{243}$ if $y = 1$, $\frac{80}{243}$ if $y = 2$, $\frac{40}{243}$ if $y = 3$, $\frac{10}{243}$ if $y = 4$, $\frac{1}{243}$ if $y = 5$
3. $\frac{200}{243}$
A box contains a fair coin and a two-headed coin. A coin is chosen at random from the box and tossed repeatedly. Let $F$ denote the event that the fair coin is chosen, and let $H_i$ denote the event that the $i$th toss results in heads. Then
1. $(H_1, H_2, \ldots)$ are conditionally independent given $F$, with $\P(H_i \mid F) = \frac{1}{2}$ for each $i$.
2. $(H_1, H_2, \ldots)$ are conditionally independent given $F^c$, with $\P(H_i \mid F^c) = 1$ for each $i$.
3. $\P(H_i) = \frac{3}{4}$ for each $i$.
4. $\P(H_1 \cap H_2 \cap \cdots \cap H_n) = \frac{1}{2^{n+1}} + \frac{1}{2}$.
5. $(H_1, H_2, \ldots)$ are dependent.
6. $\P(F \mid H_1 \cap H_2 \cap \cdots \cap H_n) = \frac{1}{2^n + 1}$.
7. $\P(F \mid H_1 \cap H_2 \cap \cdots \cap H_n) \to 0$ as $n \to \infty$.
Proof
Parts (a) and (b) are essentially modeling assumptions, based on the design of the experiment. If we know what kind of coin we have, then the tosses are independent. Parts (c) and (d) follow by conditioning on the type of coin and using parts (a) and (b). Part (e) follows from (c) and (d). Note that the expression in (d) is not $(3/4)^n$. Part (f) follows from part (d) and Bayes' theorem. Finally part (g) follows from part (f).
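A simulation makes part (f) vivid: the longer the run of heads, the less likely the fair coin. Here is a minimal Python sketch (ours) estimating $\P(F \mid H_1 \cap \cdots \cap H_n)$ and comparing it with $1/(2^n + 1)$:

```python
import random

def posterior_fair(n, runs=200_000):
    """Empirical P(fair coin | first n tosses are all heads)."""
    fair_and_heads = all_heads = 0
    for _ in range(runs):
        fair = random.random() < 0.5    # choose a coin at random from the box
        p_heads = 0.5 if fair else 1.0  # the two-headed coin always lands heads
        if all(random.random() < p_heads for _ in range(n)):
            all_heads += 1
            fair_and_heads += fair
    return fair_and_heads / all_heads

for n in (1, 3, 5):
    print(n, posterior_fair(n), 1 / (2**n + 1))  # empirical vs. exact
```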
Consider again the box in the previous exercise, but we change the experiment as follows: a coin is chosen at random from the box and tossed and the result recorded. The coin is returned to the box and the process is repeated. As before, let $H_i$ denote the event that toss $i$ results in heads. Then
1. $(H_1, H_2, \ldots)$ are independent.
2. $\P(H_i) = \frac{3}{4}$ for each $i$.
3. $\P(H_1 \cap H_2 \cap \cdots H_n) = \left(\frac{3}{4}\right)^n$.
Proof
Again, part (a) is essentially a modeling assumption. Since we return the coin and draw a new coin at random each time, the results of the tosses should be independent. Part (b) follows by conditioning on the type of the $i$th coin. Part (c) follows from parts (a) and (b).
Think carefully about the results in the previous two exercises, and the differences between the two models. Tossing a coin produces independent random variables if the probability of heads is fixed (that is, non-random even if unknown). Tossing a coin with a random probability of heads generally does not produce independent random variables; the result of a toss gives information about the probability of heads which in turn gives information about subsequent tosses.
Uniform Distributions
Recall that Buffon's coin experiment consists of tossing a coin with radius $r \le \frac{1}{2}$ randomly on a floor covered with square tiles of side length 1. The coordinates $(X, Y)$ of the center of the coin are recorded relative to axes through the center of the square in which the coin lands. The following conditions are equivalent:
1. $(X, Y)$ is uniformly distributed on $\left[-\frac{1}{2}, \frac{1}{2}\right]^2$.
2. $X$ and $Y$ are independent and each is uniformly distributed on $\left[-\frac{1}{2}, \frac{1}{2}\right]$.
Proof
Let $S = \left[-\frac{1}{2}, \frac{1}{2}\right]$, and let $\lambda_1$ denote length measure on $S$ and $\lambda_2$ area measure on $S^2$. Note that $\lambda_1(S) = \lambda_2(S^2) = 1$. Suppose that $(X, Y)$ is uniformly distributed on $S^2$, so that $\P\left[(X, Y) \in C\right] = \lambda_2(C)$ for $C \subseteq S^2$. For $A \subseteq S$, $\P(X \in A) = \P\left[(X, Y) \in A \times S\right] = \lambda_2(A \times S) = \lambda_1(A)$ Hence $X$ is uniformly distributed on $S$. By a similar argument, $Y$ is also uniformly distributed on $S$. Moreover, for $A \subseteq S$ and $B \subseteq S$, $\P(X \in A, Y \in B) = \P[(X, Y) \in A \times B] = \lambda_2(A \times B) = \lambda_1(A) \lambda_1(B) = \P(X \in A) \P(Y \in B)$ so $X$ and $Y$ are independent. Conversely, if $X$ and $Y$ are independent and each is uniformly distributed on $S$, then for $A \subseteq S$ and $B \subseteq S$, $\P\left[(X, Y) \in A \times B\right] = \P(X \in A) \P(Y \in B) = \lambda_1(A) \lambda_1(B) = \lambda_2(A \times B)$ It then follows that $\P\left[(X, Y) \in C\right] = \lambda_2(C)$ for every $C \subseteq S^2$. For more details about this last step, see the advanced section on existence and uniqueness of measures.
Compare this result with the result above for fair dice.
In Buffon's coin experiment, set $r = 0.3$. Run the simulation 500 times. For the events $\{X \gt 0\}$ and $\{Y \lt 0\}$, compute the product of the empirical probabilities and the empirical probability of the intersection. Compare the results.
The arrival time $X$ of the $A$ train is uniformly distributed on the interval $(0, 30)$, while the arrival time $Y$ of the $B$ train is uniformly distributed on the interval $(15, 30)$. (The arrival times are in minutes, after 8:00 AM). Moreover, the arrival times are independent. Find the probability of each of the following events:
1. The $A$ train arrives first.
2. Both trains arrive sometime after 20 minutes.
Answer
1. $\frac{3}{4}$
2. $\frac{2}{9}$
Reliability
Recall the simple model of structural reliability in which a system is composed of $n$ components. Suppose in addition that the components operate independently of each other. As before, let $X_i$ denote the state of component $i$, where 1 means working and 0 means failure. Thus, our basic assumption is that the state vector $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent indicator random variables. We assume that the state of the system (either working or failed) depends only on the states of the components. Thus, the state of the system is an indicator random variable $Y = y(X_1, X_2, \ldots, X_n)$ where $y: \{0, 1\}^n \to \{0, 1\}$ is the structure function. Generally, the probability that a device is working is the reliability of the device. Thus, we will denote the reliability of component $i$ by $p_i = \P(X_i = 1)$ so that the vector of component reliabilities is $\bs{p} = (p_1, p_2, \ldots, p_n)$. By independence, the system reliability $r$ is a function of the component reliabilities: $r(p_1, p_2, \ldots, p_n) = \P(Y = 1)$ Appropriately enough, this function is known as the reliability function. Our challenge is usually to find the reliability function, given the structure function. When the components all have the same probability $p$ then of course the system reliability $r$ is just a function of $p$. In this case, the state vector $\bs{X} = (X_1, X_2, \ldots, X_n)$ forms a sequence of Bernoulli trials.
Comment on the independence assumption for real systems, such as your car or your computer.
Recall that a series system is working if and only if each component is working.
1. The state of the system is $U = X_1 X_2 \cdots X_n = \min\{X_1, X_2, \ldots, X_n\}$.
2. The reliability is $\P(U = 1) = p_1 p_2 \cdots p_n$.
Recall that a parallel system is working if and only if at least one component is working.
1. The state of the system is $V = 1 - (1 - X_1)(1 - X_2) \cdots (1 - X_n) = \max\{X_1, X_2, \ldots, X_n\}$.
2. The reliability is $\P(V = 1) = 1 - (1 - p_1) (1 - p_2) \cdots (1 - p_n)$.
Recall that a $k$ out of $n$ system is working if and only if at least $k$ of the $n$ components are working. Thus, a parallel system is a 1 out of $n$ system and a series system is an $n$ out of $n$ system. A $k$ out of $2 k - 1$ system is a majority rules system. The reliability function of a general $k$ out of $n$ system is a mess. However, if the component reliabilities are the same, the function has a reasonably simple form.
For a $k$ out of $n$ system with common component reliability $p$, the system reliability is $r(p) = \sum_{i = k}^n \binom{n}{i} p^i (1 - p)^{n - i}$
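The reliability function is straightforward to compute. A minimal Python sketch (ours) follows; its printed values anticipate the answers to the next exercise:

```python
from math import comb

def reliability(k, n, p):
    """Reliability of a k out of n system with common component reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.8
print(reliability(1, 3, p))  # parallel system: 0.992
print(reliability(2, 3, p))  # 2 out of 3 system: 0.896
print(reliability(3, 3, p))  # series system: 0.512
```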
Consider a system of 3 independent components with common reliability $p = 0.8$. Find the reliability of each of the following:
1. The parallel system.
2. The 2 out of 3 system.
3. The series system.
Answer
1. 0.992
2. 0.896
3. 0.512
Consider a system of 3 independent components with reliabilities $p_1 = 0.8$, $p_2 = 0.8$, $p_3 = 0.7$. Find the reliability of each of the following:
1. The parallel system.
2. The 2 out of 3 system.
3. The series system.
Answer
1. 0.994
2. 0.902
3. 0.504
Consider an airplane with an odd number of engines, each with reliability $p$. Suppose that the airplane is a majority rules system, so that the airplane needs a majority of working engines in order to fly.
1. Find the reliability of a 3 engine plane as a function of $p$.
2. Find the reliability of a 5 engine plane as a function of $p$.
3. For what values of $p$ is a 5 engine plane preferable to a 3 engine plane?
Answer
1. $r_3(p) = 3 \, p^2 - 2 \, p^3$
2. $r_5(p) = 6 \, p^5 - 15 \, p^4 + 10 p^3$
3. The 5-engine plane would be preferable if $p \gt \frac{1}{2}$ (which one would hope would be the case). The 3-engine plane would be preferable if $p \lt \frac{1}{2}$. If $p = \frac{1}{2}$, the 3-engine and 5-engine planes are equally reliable.
The graph below is known as the Wheatstone bridge network and is named for Charles Wheatstone. The edges represent components, and the system works if and only if there is a working path from vertex $a$ to vertex $b$.
1. Find the structure function.
2. Find the reliability function.
Answer
1. $Y = X_3 (X_1 + X_2 - X_1 X_2)(X_4 + X_5 - X_4 X_5) + (1 - X_3)(X_1 X_4 + X_2 X_5 - X_1 X_2 X_4 X_5)$
2. $r(p_1, p_2, p_3, p_4, p_5) = p_3 (p_1 + p_2 - p_1 p_2)(p_4 + p_5 - p_4 p_5) + (1 - p_3)(p_1 p_4 + p_2 p_5 - p_1 p_2 p_4 p_5)$
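The reliability function can be checked by brute force: enumerate all $2^5$ component states, decide for each whether the system works by conditioning on the bridge component, and sum the probabilities of the working states. A Python sketch (ours; the component reliabilities are arbitrary test values, and the component labeling follows the answer above):

```python
from itertools import product

def works(x1, x2, x3, x4, x5):
    """Structure function of the bridge, by conditioning on bridge component 3."""
    if x3:
        return (x1 or x2) and (x4 or x5)
    return (x1 and x4) or (x2 and x5)

def bridge_reliability(p):
    """Sum the probabilities of all working states (components independent)."""
    total = 0.0
    for x in product((0, 1), repeat=5):
        if works(*x):
            prob = 1.0
            for xi, pi in zip(x, p):
                prob *= pi if xi else 1 - pi
            total += prob
    return total

p1, p2, p3, p4, p5 = p = (0.9, 0.8, 0.7, 0.8, 0.9)
formula = (p3 * (p1 + p2 - p1*p2) * (p4 + p5 - p4*p5)
           + (1 - p3) * (p1*p4 + p2*p5 - p1*p2*p4*p5))
print(bridge_reliability(p), formula)  # the two values should agree
```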
A system consists of 3 components, connected in parallel. Because of environmental factors, the components do not operate independently, so our usual assumption does not hold. However, we will assume that under low stress conditions, the components are independent, each with reliability 0.9; under medium stress conditions, the components are independent with reliability 0.8; and under high stress conditions, the components are independent, each with reliability 0.7. The probability of low stress is 0.5, of medium stress is 0.3, and of high stress is 0.2.
1. Find the reliability of the system.
2. Given that the system works, find the conditional probability of each stress level.
Answer
1. 0.9917. Condition on the stress level.
2. 0.5037 for low, 0.3001 for medium, 0.1962 for high. Use Bayes' theorem and part (a).
Suppose that bits are transmitted across a noisy communications channel. Each bit that is sent, independently of the others, is received correctly with probability 0.9 and changed to the complementary bit with probability 0.1. Using redundancy to improve reliability, suppose that a given bit will be sent 3 times. We naturally want to compute the probability that we correctly identify the bit that was sent. Assume we have no prior knowledge of the bit, so we assign probability $\frac{1}{2}$ each to the event that 000 was sent and the event that 111 was sent. Now find the conditional probability that 111 was sent given each of the 8 possible bit strings received.
Answer
Let $\bs{X}$ denote the string sent and $\bs{Y}$ the string received.
| $\bs{y}$ | $\P(\bs{X} = 111 \mid \bs{Y} = \bs{y})$ |
| --- | --- |
| 111 | $729/730$ |
| 110 | $9/10$ |
| 101 | $9/10$ |
| 011 | $9/10$ |
| 100 | $1/10$ |
| 010 | $1/10$ |
| 001 | $1/10$ |
| 000 | $1/730$ |
Diagnostic Testing
Recall the discussion of diagnostic testing in the section on Conditional Probability. Thus, we have an event $A$ for a random experiment whose occurrence or non-occurrence we cannot observe directly. Suppose now that we have $n$ tests for the occurrence of $A$, labeled from 1 to $n$. We will let $T_i$ denote the event that test $i$ is positive for $A$. The tests are independent in the following sense:
• If $A$ occurs, then $(T_1, T_2, \ldots, T_n)$ are (conditionally) independent and test $i$ has sensitivity $a_i = \P(T_i \mid A)$.
• If $A$ does not occur, then $(T_1, T_2, \ldots, T_n)$ are (conditionally) independent and test $i$ has specificity $b_i = \P(T_i^c \mid A^c)$.
Note that unconditionally, it is not reasonable to assume that the tests are independent. For example, a positive result for a given test presumably is evidence that the condition $A$ has occurred, which in turn is evidence that a subsequent test will be positive. In short, we expect that $T_i$ and $T_j$ should be positively correlated.
We can form a new, compound test by giving a decision rule in terms of the individual test results. In other words, the event $T$ that the compound test is positive for $A$ is a function of $(T_1, T_2, \ldots, T_n)$. The typical decision rules are very similar to the reliability structures discussed above. A special case of interest is when the $n$ tests are independent applications of a given basic test. In this case, $a_i = a$ and $b_i = b$ for each $i$.
Consider the compound test that is positive for $A$ if and only if each of the $n$ tests is positive for $A$.
1. $T = T_1 \cap T_2 \cap \cdots \cap T_n$
2. The sensitivity is $\P(T \mid A) = a_1 a_2 \cdots a_n$.
3. The specificity is $\P(T^c \mid A^c) = 1 - (1 - b_1) (1 - b_2) \cdots (1 - b_n)$
Consider the compound test that is positive for $A$ if and only if at least one of the $n$ tests is positive for $A$.
1. $T = T_1 \cup T_2 \cup \cdots \cup T_n$
2. The sensitivity is $\P(T \mid A) = 1 - (1 - a_1) (1 - a_2) \cdots (1 - a_n)$.
3. The specificity is $\P(T^c \mid A^c) = b_1 b_2 \cdots b_n$.
More generally, we could define the compound $k$ out of $n$ test that is positive for $A$ if and only if at least $k$ of the individual tests are positive for $A$. The series test is the $n$ out of $n$ test, while the parallel test is the 1 out of $n$ test. The $k$ out of $2 k - 1$ test is the majority rules test.
Suppose that a woman initially believes that there is an even chance that she is or is not pregnant. She buys three identical pregnancy tests with sensitivity 0.95 and specificity 0.90. Tests 1 and 3 are positive and test 2 is negative.
1. Find the updated probability that the woman is pregnant.
2. Can we just say that tests 2 and 3 cancel each other out? Find the probability that the woman is pregnant given just one positive test, and compare the answer with the answer to part (a).
Answer
1. 0.834
2. No: 0.905.
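The updating is a direct application of Bayes' theorem with conditionally independent tests. A minimal Python sketch (ours) reproduces both answers:

```python
def posterior(prior, results, a=0.95, b=0.90):
    """P(A | test results), where results[i] is True for a positive test;
    a = sensitivity, b = specificity, tests conditionally independent."""
    like_a = like_not_a = 1.0
    for positive in results:
        like_a *= a if positive else 1 - a
        like_not_a *= (1 - b) if positive else b
    return prior * like_a / (prior * like_a + (1 - prior) * like_not_a)

print(posterior(0.5, [True, False, True]))  # two positive, one negative: ~0.834
print(posterior(0.5, [True]))               # a single positive test: ~0.905
```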
Suppose that 3 independent, identical tests for an event $A$ are applied, each with sensitivity $a$ and specificity $b$. Find the sensitivity and specificity of the following tests:
1. 1 out of 3 test
2. 2 out of 3 test
3. 3 out of 3 test
Answer
1. sensitivity $1 - (1 - a)^3$, specificity $b^3$
2. sensitivity $a^3 + 3 \, a^2 (1 - a)$, specificity $b^3 + 3 \, b^2 (1 - b)$
3. sensitivity $a^3$, specificity $1 - (1 - b)^3$
In a criminal trial, the defendant is convicted if and only if all 6 jurors vote guilty. Assume that if the defendant really is guilty, the jurors vote guilty, independently, with probability 0.95, while if the defendant is really innocent, the jurors vote not guilty, independently with probability 0.8. Suppose that 70% of defendants brought to trial are guilty.
1. Find the probability that the defendant is convicted.
2. Given that the defendant is convicted, find the probability that the defendant is guilty.
3. Comment on the assumption that the jurors act independently.
Answer
1. 0.5148
2. 0.99996
3. The independence assumption is not reasonable since jurors collaborate.
Genetics
Please refer to the discussion of genetics in the section on random experiments if you need to review some of the definitions in this section.
Recall first that the ABO blood type in humans is determined by three alleles: $a$, $b$, and $o$. Furthermore, $a$ and $b$ are co-dominant and $o$ is recessive. Suppose that in a certain population, the proportion of $a$, $b$, and $o$ alleles are $p$, $q$, and $r$ respectively. Of course we must have $p \gt 0$, $q \gt 0$, $r \gt 0$ and $p + q + r = 1$.
Suppose that the blood genotype in a person is the result of independent alleles, chosen with probabilities $p$, $q$, and $r$ as above.
1. The probability distribution of the genotypes is given in the following table:
| Genotype | $aa$ | $ab$ | $ao$ | $bb$ | $bo$ | $oo$ |
| --- | --- | --- | --- | --- | --- | --- |
| Probability | $p^2$ | $2 p q$ | $2 p r$ | $q^2$ | $2 q r$ | $r^2$ |
2. The probability distribution of the blood types is given in the following table:
| Blood type | $A$ | $B$ | $AB$ | $O$ |
| --- | --- | --- | --- | --- |
| Probability | $p^2 + 2 p r$ | $q^2 + 2 q r$ | $2 p q$ | $r^2$ |
Proof
Part (a) follows from the independence assumption and basic rules of probability. Even though genotypes are listed as unordered pairs, note that there are two ways that a heterozygous genotype can occur, since either parent could contribute either of the two distinct alleles. Part (b) follows from part (a) and basic rules of probability.
The discussion above is related to the Hardy-Weinberg model of genetics. The model is named for the English mathematician Godfrey Hardy and the German physician Wilhelm Weinberg.
Suppose that the probability distribution for the set of blood types in a certain population is given in the following table:
| Blood type | $A$ | $B$ | $AB$ | $O$ |
| --- | --- | --- | --- | --- |
| Probability | 0.360 | 0.123 | 0.038 | 0.479 |
Find $p$, $q$, and $r$.
Answer
$p = 0.224$, $q = 0.084$, $r = 0.692$
Suppose next that pod color in a certain type of pea plant is determined by a gene with two alleles: $g$ for green and $y$ for yellow, and that $g$ is dominant and $y$ recessive.
Suppose that 2 green-pod plants are bred together. Suppose further that each plant, independently, has the recessive yellow-pod allele with probability $\frac{1}{4}$.
1. Find the probability that 3 offspring plants will have green pods.
2. Given that the 3 offspring plants have green pods, find the updated probability that both parents have the recessive allele.
Answer
1. $\frac{987}{1024}$
2. $\frac{27}{987}$
Next consider a sex-linked hereditary disorder in humans (such as colorblindness or hemophilia). Let $h$ denote the healthy allele and $d$ the defective allele for the gene linked to the disorder. Recall that $h$ is dominant and $d$ recessive for women.
Suppose that a healthy woman initially has a $\frac{1}{2}$ chance of being a carrier. (This would be the case, for example, if her mother and father are healthy but she has a brother with the disorder, so that her mother must be a carrier).
1. Find the probability that the first two sons of the woman will be healthy.
2. Given that the first two sons are healthy, compute the updated probability that she is a carrier.
3. Given that the first two sons are healthy, compute the conditional probability that the third son will be healthy.
Answer
1. $\frac{5}{8}$
2. $\frac{1}{5}$
3. $\frac{9}{10}$
Laplace's Rule of Succession
Suppose that we have $m + 1$ coins, labeled $0, 1, \ldots, m$. Coin $i$ lands heads with probability $\frac{i}{m}$ for each $i$. The experiment is to choose a coin at random (so that each coin is equally likely to be chosen) and then toss the chosen coin repeatedly.
1. The probability that the first $n$ tosses are all heads is $p_{m,n} = \frac{1}{m+1} \sum_{i=0}^m \left(\frac{i}{m}\right)^n$
2. $p_{m,n} \to \frac{1}{n+1}$ as $m \to \infty$
3. The conditional probability that toss $n + 1$ is heads given that the previous $n$ tosses were all heads is $\frac{p_{m,n+1}}{p_{m,n}}$
4. $\frac{p_{m,n+1}}{p_{m,n}} \to \frac{n+1}{n+2}$ as $m \to \infty$
Proof
Part (a) follows by conditioning on the chosen coin. For part (b), note that $p_{m,n}$ is an approximating sum for $\int_0^1 x^n \, dx = \frac{1}{n + 1}$. Part (c) follows from the definition of conditional probability, and part (d) is a trivial consequence of (b), (c).
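The convergence in parts (b) and (d) is easy to observe numerically. A short Python sketch (ours, with an arbitrary large value of $m$):

```python
def p(m, n):
    """Probability that the first n tosses are all heads, with m + 1 coins."""
    return sum((i / m)**n for i in range(m + 1)) / (m + 1)

m = 10_000
for n in (1, 2, 10):
    print(n, p(m, n), 1 / (n + 1))                      # part (b): limit 1/(n+1)
    print(n, p(m, n + 1) / p(m, n), (n + 1) / (n + 2))  # part (d): rule of succession
```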
Note that coin 0 is two-tailed, the probability of heads increases with $i$, and coin $m$ is two-headed. The limiting conditional probability in part (d) is called Laplace's Rule of Succession, named after Pierre-Simon Laplace. This rule was used by Laplace and others as a general principle for estimating the conditional probability that an event will occur at time $n + 1$, given that the event has occurred $n$ times in succession.
Suppose that a missile has had 10 successful tests in a row. Compute Laplace's estimate that the 11th test will be successful. Does this make sense?
Answer
$\frac{11}{12}$. No, not really.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\bs}{\boldsymbol}$
This is the first of several sections in this chapter that are more advanced than the basic topics in the first five sections. In this section we discuss several topics related to convergence of events and random variables, a subject of fundamental importance in probability theory. In particular the results that we obtain will be important for:
• Properties of distribution functions,
• The weak law of large numbers,
• The strong law of large numbers.
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr{F}, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the $\sigma$-algebra of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$.
Basic Theory
Sequences of events
Our first discussion deals with sequences of events and various types of limits of such sequences. The limits are also events. We start with two simple definitions.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events.
1. The sequence is increasing if $A_n \subseteq A_{n+1}$ for every $n \in \N_+$.
2. The sequence is decreasing if $A_{n+1} \subseteq A_n$ for every $n \in \N_+$.
Note that these are the standard definitions of increasing and decreasing, relative to the ordinary total order $\le$ on the index set $\N_+$ and the subset partial order $\subseteq$ on the collection of events. The terminology is also justified by the corresponding indicator variables.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events, and let $I_n = \bs 1_{A_n}$ denote the indicator variable of the event $A_n$ for $n \in \N_+$.
1. The sequence of events is increasing if and only if the sequence of indicator variables is increasing in the ordinary sense. That is, $I_n \le I_{n+1}$ for each $n \in \N_+$.
2. The sequence of events is decreasing if and only if the sequence of indicator variables is decreasing in the ordinary sense. That is, $I_{n+1} \le I_n$ for each $n \in \N_+$.
Proof
If a sequence of events is either increasing or decreasing, we can define the limit of the sequence in a way that turns out to be quite natural.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events.
1. If the sequence is increasing, we define $\lim_{n \to \infty} A_n = \bigcup_{n=1}^\infty A_n$.
2. If the sequence is decreasing, we define $\lim_{n \to \infty} A_n = \bigcap_{n=1}^\infty A_n$.
Once again, the terminology is clarified by the corresponding indicator variables.
Suppose again that $(A_1, A_2, \ldots)$ is a sequence of events, and let $I_n = \bs 1_{A_n}$ denote the indicator variable of $A_n$ for $n \in \N_+$.
1. If the sequence of events is increasing, then $\lim_{n \to \infty} I_n$ is the indicator variable of $\bigcup_{n = 1}^\infty A_n$
2. If the sequence of events is decreasing, then $\lim_{n \to \infty} I_n$ is the indicator variable of $\bigcap_{n = 1}^\infty A_n$
Proof
1. If $s \in \bigcup_{n=1}^\infty A_n$ then $s \in A_k$ for some $k \in \N_+$. Since the events are increasing, $s \in A_n$ for every $n \ge k$. In this case, $I_n(s) = 1$ for every $n \ge k$ and hence $\lim_{n \to \infty} I_n(s) = 1$. On the other hand, if $s \notin \bigcup_{n=1}^\infty A_n$ then $s \notin A_n$ for every $n \in \N_+$. In this case, $I_n(s) = 0$ for every $n \in \N_+$ and hence $\lim_{n \to \infty} I_n(s) = 0$.
2. If $s \in \bigcap_{n=1}^\infty A_n$ then $s \in A_n$ for each $n \in \N_+$. In this case, $I_n(s) = 1$ for each $n \in \N_+$ and hence $\lim_{n \to \infty} I_n(s) = 1$. If $s \notin \bigcap_{n=1}^\infty A_n$ then $s \notin A_k$ for some $k \in \N_+$. Since the events are decreasing, $s \notin A_n$ for all $n \ge k$. In this case, $I_n(s) = 0$ for $n \ge k$ and hence $\lim_{n \to \infty} I_n(s) = 0$.
An arbitrary union of events can always be written as a union of increasing events, and an arbitrary intersection of events can always be written as an intersection of decreasing events:
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Then
1. $\bigcup_{i = 1}^ n A_i$ is increasing in $n \in \N_+$ and $\bigcup_{i = 1}^\infty A_i = \lim_{n \to \infty} \bigcup_{i = 1}^n A_i$.
2. $\bigcap_{i=1}^n A_i$ is decreasing in $n \in \N_+$ and $\bigcap_{i=1}^\infty A_i = \lim_{n \to \infty} \bigcap_{i=1}^n A_i$.
Proof
1. Trivially $\bigcup_{i=1}^n A_i \subseteq \bigcup_{i=1}^{n+1} A_i$. The second statement simply means that $\bigcup_{n=1}^\infty \bigcup_{i = 1}^n A_i = \bigcup_{i=1}^\infty A_i$.
2. Trivially $\bigcap_{i=1}^{n+1} A_i \subseteq \bigcap_{i=1}^n A_i$. The second statement simply means that $\bigcap_{n=1}^\infty \bigcap_{i=1}^n A_i = \bigcap_{i=1}^\infty A_i$.
There is a more interesting and useful way to generate increasing and decreasing sequences from an arbitrary sequence of events, using the tail segment of the sequence rather than the initial segment.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Then
1. $\bigcup_{i=n}^\infty A_i$ is decreasing in $n \in \N_+$.
2. $\bigcap_{i=n}^\infty A_i$ is increasing in $n \in \N_+$.
Proof
1. Clearly $\bigcup_{i=n+1}^\infty A_i \subseteq \bigcup_{i=n}^\infty A_i$
2. Clearly $\bigcap_{i=n}^\infty A_i \subseteq \bigcap_{i=n+1}^\infty A_i$
Since the new sequences defined in the previous results are decreasing and increasing, respectively, we can take their limits. These are the limit superior and limit inferior, respectively, of the original sequence.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Define
1. $\limsup_{n \to \infty} A_n = \lim_{n \to \infty} \bigcup_{i=n}^\infty A_i = \bigcap_{n=1}^\infty \bigcup_{i=n}^\infty A_i$. This is the event that occurs if and only if $A_n$ occurs for infinitely many values of $n$.
2. $\liminf_{n \to \infty} A_n = \lim_{n \to \infty} \bigcap_{i=n}^\infty A_i = \bigcup_{n=1}^\infty \bigcap_{i=n}^\infty A_i$. This is the event that occurs if and only if $A_n$ occurs for all but finitely many values of $n$.
Proof
1. From the definition, the event $\limsup_{n \to \infty} A_n$ occurs if and only if for each $n \in \N_+$ there exists $i \ge n$ such that $A_i$ occurs.
2. From the definition, the event $\liminf_{n \to \infty} A_n$ occurs if and only if there exists $n \in \N_+$ such that $A_i$ occurs for every $i \ge n$.
Once again, the terminology and notation are clarified by the corresponding indicator variables. You may need to review limit inferior and limit superior for sequences of real numbers in the section on Partial Orders.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events, and let $I_n = \bs 1_{A_n}$ denote the indicator variable of $A_n$ for $n \in \N_+$. Then
1. $\limsup_{n \to \infty} I_n$ is the indicator variable of $\limsup_{n \to \infty} A_n$.
2. $\liminf_{n \to \infty} I_n$ is the indicator variable of $\liminf_{n \to \infty} A_n$.
Proof
1. By the result above, $\lim_{n \to \infty} \bs 1\left(\bigcup_{i=n}^\infty A_i\right)$ is the indicator variable of $\limsup_{n \to \infty} A_n$. But $\bs 1\left(\bigcup_{i=n}^\infty A_i\right) = \max\{I_i: i \ge n\}$ and hence $\lim_{n \to \infty} \bs 1\left(\bigcup_{i=n}^\infty A_i\right) = \limsup_{n \to \infty} I_n$.
2. By the result above, $\lim_{n \to \infty} \bs 1\left(\bigcap_{i=n}^\infty A_i\right)$ is the indicator variable of $\liminf_{n \to \infty} A_n$. But $\bs 1\left(\bigcap_{i=n}^\infty A_i\right) = \min\{I_i: i \ge n\}$ and hence $\lim_{n \to \infty} \bs 1\left(\bigcap_{i=n}^\infty A_i\right) = \liminf_{n \to \infty} I_n$.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Then $\liminf_{n \to \infty} A_n \subseteq \limsup_{n \to \infty} A_n$.
Proof
If $A_n$ occurs for all but finitely many $n \in \N_+$ then certainly $A_n$ occurs for infinitely many $n \in \N_+$.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Then
1. $\left( \limsup_{n \to \infty} A_n \right)^c = \liminf_{n \to \infty} A_n^c$
2. $\left( \liminf_{n \to \infty} A_n \right)^c = \limsup_{n \to \infty} A_n^c$.
Proof
These results follows from DeMorgan's laws.
The Continuity Theorems
Generally speaking, a function is continuous if it preserves limits. Thus, the following results are the continuity theorems of probability. Part (a) is the continuity theorem for increasing events and part (b) the continuity theorem for decreasing events.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events.
1. If the sequence is increasing then $\lim_{n \to \infty} \P(A_n) = \P\left( \lim_{n \to \infty} A_n \right) = \P\left(\bigcup_{n=1}^\infty A_n\right)$
2. If the sequence is decreasing then $\lim_{n \to \infty} \P(A_n) = \P\left( \lim_{n \to \infty} A_n \right) = \P\left(\bigcap_{n=1}^\infty A_n\right)$
Proof
1. Let $B_1 = A_1$ and let $B_i = A_i \setminus A_{i-1}$ for $i \in \{2, 3, \ldots\}$. Note that the collection of events $\{B_1, B_2, \ldots \}$ is pairwise disjoint and has the same union as $\{A_1, A_2, \ldots \}$. From countable additivity and the definition of infinite series, $\P\left(\bigcup_{i=1}^\infty A_i\right) = \P\left(\bigcup_{i=1}^\infty B_i\right) = \sum_{i = 1}^\infty \P(B_i) = \lim_{n \to \infty} \sum_{i = 1}^n \P(B_i)$ But $\P(B_1) = \P(A_1)$ and $\P(B_i) = \P(A_i) - \P(A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Therefore $\sum_{i=1}^n \P(B_i) = \P(A_n)$ and hence we have $\P\left(\bigcup_{i=1}^\infty A_i\right) = \lim_{n \to \infty} \P(A_n)$.
2. The sequence of complements $\left(A_1^c, A_2^c, \ldots\right)$ is increasing. Hence using part (a), DeMorgan's law, and the complement rule we have $\P\left(\bigcap_{i=1}^\infty A_i \right) = 1 - \P\left(\bigcup_{i=1}^\infty A_i^c\right) = 1 - \lim_{n \to \infty} \P(A_n^c) = \lim_{n \to \infty} \left[1 - \P\left(A_n^c\right)\right] = \lim_{n \to \infty} \P(A_n)$
The continuity theorems can be applied to the increasing and decreasing sequences that we constructed earlier from an arbitrary sequence of events.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events.
1. $\P\left( \bigcup_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \P\left( \bigcup_{i = 1}^n A_i \right)$
2. $\P\left( \bigcap_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \P\left( \bigcap_{i = 1}^n A_i \right)$
Proof
These results follow immediately from the continuity theorems.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. Then
1. $\P\left(\limsup_{n \to \infty} A_n\right) = \lim_{n \to \infty} \P\left(\bigcup_{i=n}^\infty A_i\right)$
2. $\P\left(\liminf_{n \to \infty} A_n\right) = \lim_{n \to \infty} \P\left(\bigcap_{i=n}^\infty A_i\right)$
Proof
These results follows directly from the definitions, and the continuity theorems.
The next result shows that the countable additivity axiom for a probability measure is equivalent to finite additivity and the continuity property for increasing events.
Temporarily, suppose that $\P$ is only finitely additive, but satisfies the continuity property for increasing events. Then $\P$ is countably additive.
Proof
Suppose that $(A_1, A_2, \ldots)$ is a sequence of pairwise disjoint events. Since we are assuming that $\P$ is finitely additive we have $\P\left(\bigcup_{i=1}^n A_i\right) = \sum_{i=1}^n \P(A_i)$ If we let $n \to \infty$, the left side converges to $\P\left(\bigcup_{i=1}^\infty A_i\right)$ by the continuity assumption and the result above, while the right side converges to $\sum_{i=1}^\infty \P(A_i)$ by the definition of an infinite series.
There are a few mathematicians who reject the countable additivity axiom of probability measure in favor of the weaker finite additivity axiom. Whatever the philosophical arguments may be, life is certainly much harder without the continuity theorems.
The Borel-Cantelli Lemmas
The Borel-Cantelli Lemmas, named after Emil Borel and Francessco Cantelli, are very important tools in probability theory. The first lemma gives a condition that is sufficient to conclude that infinitely many events occur with probability 0.
First Borel-Cantelli Lemma. Suppose that $(A_1, A_2, \ldots)$ is a sequence of events. If $\sum_{n=1}^\infty \P(A_n) \lt \infty$ then $\P\left(\limsup_{n \to \infty} A_n\right) = 0$.
Proof
From the result above on limit superiors, we have $\P\left(\limsup_{n \to \infty} A_n\right) = \lim_{n \to \infty} \P\left(\bigcup_{i = n}^\infty A_i \right)$. But from Boole's inequality, $\P\left(\bigcup_{i = n}^\infty A_i \right) \le \sum_{i = n}^\infty \P(A_i)$. Since $\sum_{i = 1}^\infty \P(A_i) \lt \infty$, we have $\sum_{i = n}^\infty \P(A_i) \to 0$ as $n \to \infty$.
The second lemma gives a condition that is sufficient to conclude that infinitely many independent events occur with probability 1.
Second Borel-Cantelli Lemma. Suppose that $(A_1, A_2, \ldots)$ is a sequence of independent events. If $\sum_{n=1}^\infty \P(A_n) = \infty$ then $\P\left( \limsup_{n \to \infty} A_n \right) = 1$.
Proof
Note first that $1 - x \le e^{-x}$ for every $x \in \R$, and hence $1 - \P(A_i) \le \exp\left[-\P(A_i)\right]$ for each $i \in \N_+$. From the results above on limit superiors and complements, $\P\left[\left(\limsup_{n \to \infty} A_n\right)^c\right] = \P\left(\liminf_{n \to \infty} A_n^c\right) = \lim_{n \to \infty} \P \left(\bigcap_{i = n}^\infty A_i^c\right)$ But by independence and the inequality above, $\P\left(\bigcap_{i = n}^\infty A_i^c\right) = \prod_{i = n}^\infty \P\left(A_i^c\right) = \prod_{i = n}^\infty \left[1 - \P(A_i)\right] \le \prod_{i = n}^\infty \exp\left[-\P(A_i)\right] = \exp\left(-\sum_{i = n}^\infty \P(A_i) \right) = 0$
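The two lemmas are easy to see empirically. In the Python sketch below (ours), each event $A_n$ occurs independently with the stated probability; when $\P(A_n) = 1/n^2$ (convergent sum) the occurrences stop early in a typical run, while when $\P(A_n) = 1/n$ (divergent sum) they persist all the way out to the horizon:

```python
import random

def occurrences(prob, n_max=10_000):
    """Indices n <= n_max at which the independent event A_n occurs,
    where P(A_n) = prob(n)."""
    return [n for n in range(1, n_max + 1) if random.random() < prob(n)]

# sum 1/n^2 < infinity: by the first lemma only finitely many events occur,
# and the last occurrence in a typical run is small.
print(max(occurrences(lambda n: 1 / n**2), default=0))

# sum 1/n = infinity: by the second lemma infinitely many events occur with
# probability 1, so occurrences keep appearing near the horizon n_max.
print(max(occurrences(lambda n: 1 / n), default=0))
```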
For independent events, both Borel-Cantelli lemmas apply of course, and lead to a zero-one law.
If $(A_1, A_2, \ldots)$ is a sequence of independent events then $\limsup_{n \to \infty} A_n$ has probability 0 or 1:
1. If $\sum_{n=1}^\infty \P(A_n) \lt \infty$ then $\P\left( \limsup_{n \to \infty} A_n \right) = 0$.
2. If $\sum_{n=1}^\infty \P(A_n) = \infty$ then $\P\left( \limsup_{n \to \infty} A_n \right) = 1$.
This result is actually a special case of a more general zero-one law, known as the Kolmogorov zero-one law, and named for Andrei Kolmogorov. This law is studied in the more advanced section on measure. Also, we can use the zero-one law to derive a calculus theorem that relates infinite series and infinite products. This derivation is an example of the probabilistic method—the use of probability to obtain results, seemingly unrelated to probability, in other areas of mathematics.
Suppose that $p_i \in (0, 1)$ for each $i \in \N_+$. Then $\prod_{i=1}^\infty p_i \gt 0 \text{ if and only if } \sum_{i=1}^\infty (1 - p_i) \lt \infty$
Proof
We can easily construct a probability space with a sequence of independent events $(A_1, A_2, \ldots)$ such that $\P(A_i) = 1 - p_i$ for each $i \in \N_+$. By independence, $\P\left(\bigcap_{i=n}^\infty A_i^c\right) = \prod_{i=n}^\infty p_i$ for $n \in \N_+$, and the proofs of the two Borel-Cantelli lemmas show that these probabilities are positive for large $n$, or identically 0, according as $\sum_{i=1}^\infty (1 - p_i)$ converges or diverges. Since $p_i \gt 0$ for each $i$, $\prod_{i=1}^\infty p_i \gt 0$ if and only if $\prod_{i=n}^\infty p_i \gt 0$ for some (equivalently every) $n \in \N_+$.
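The criterion is also easy to check numerically. Here is a small sketch; the particular sequences $p_i = 1 - 1/(i+1)^2$ (summable deficits) and $p_i = 1 - 1/(i+1)$ (non-summable deficits) are chosen purely for illustration.

```python
# A numerical illustration (not a proof) of the product/series criterion.
# With p_i = 1 - 1/(i+1)^2 the deficits are summable and the partial products
# stabilize at a positive limit (in fact 1/2). With p_i = 1 - 1/(i+1) the
# deficits are not summable and the partial products telescope down to 0.
def partial_product(p, n):
    prod = 1.0
    for i in range(1, n + 1):
        prod *= p(i)
    return prod

for n in [10, 100, 1000, 10000]:
    conv = partial_product(lambda i: 1 - 1 / (i + 1) ** 2, n)
    div = partial_product(lambda i: 1 - 1 / (i + 1), n)
    print(f"n = {n:5d}: summable case {conv:.6f}, non-summable case {div:.6f}")
```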
Our next result is a simple application of the second Borel-Cantelli lemma to independent replications of a basic experiment.
Suppose that $A$ is an event in a basic random experiment with $\P(A) \gt 0$. In the compound experiment that consists of independent replications of the basic experiment, the event that $A$ occurs infinitely often has probability 1.
Proof
Let $p$ denote the probability of $A$ in the basic experiment. In the compound experiment, we have a sequence of independent events $(A_1, A_2, \ldots)$ with $\P(A_n) = p$ for each $n \in \N_+$ (these are independent copies of $A$). But $\sum_{n=1}^\infty \P(A_n) = \infty$ since $p \gt 0$ so the result follows from the second Borel-Cantelli lemma.
Convergence of Random Variables
Our next discussion concerns two ways that a sequence of random variables defined for our experiment can converge. These are fundamentally important concepts, since some of the deepest results in probability theory are limit theorems involving random variables. The most important special case is when the random variables are real valued, but the proofs are essentially the same for variables with values in a metric space, so we will use the more general setting.
Thus, suppose that $(S, d)$ is a metric space, and that $\mathscr S$ is the corresponding Borel $\sigma$-algebra (that is, the $\sigma$-algebra generated by the topology), so that our measurable space is $(S, \mathscr S)$. Here is the most important special case:
For $n \in \N_+$, the $n$-dimensional Euclidean space is $(\R^n, d_n)$ where $d_n(\bs x, \bs y) = \sqrt{\sum_{i=1}^n (y_i - x_i)^2}, \quad \bs x = (x_1, x_2, \ldots, x_n), \, \bs y = (y_1, y_2, \ldots, y_n) \in \R^n$
Euclidean spaces are named for Euclid, of course. As noted above, the one-dimensional case where $d(x, y) = |y - x|$ for $x, \, y \in \R$ is particularly important. Returning to the general metric space, recall that if $(x_1, x_2, \ldots)$ is a sequence in $S$ and $x \in S$, then $x_n \to x$ as $n \to \infty$ means that $d(x_n, x) \to 0$ as $n \to \infty$ (in the usual calculus sense). For the rest of our discussion, we assume that $(X_1, X_2, \ldots)$ is a sequence of random variables with values in $S$ and $X$ is a random variable with values in $S$, all defined on the probability space $(\Omega, \mathscr F, \P)$.
We say that $X_n \to X$ as $n \to \infty$ with probability 1 if the event that $X_n \to X$ as $n \to \infty$ has probability 1. That is, $\P\{\omega \in \Omega: X_n(\omega) \to X(\omega) \text{ as } n \to \infty\} = 1$
Details
We need to make sure that the definition makes sense, in that the statement that $X_n$ converges to $X$ as $n \to \infty$ defines a valid event. Note that $X_n$ does not converge to $X$ as $n \to \infty$ if and only if for some $\epsilon \gt 0$, $d(X_n, X) \gt \epsilon$ for infinitely many $n \in \N_+$. Note that if this condition holds for a given $\epsilon \gt 0$ then it holds for all smaller $\epsilon \gt 0$. Moreover, there are arbitrarily small rational $\epsilon \gt 0$ so $X_n$ does not converge to $X$ as $n \to \infty$ if and only if for some rational $\epsilon \gt 0$, $d(X_n, X) \gt \epsilon$ for infinitely many $n \in \N_+$. Hence $\left\{X_n \to X \text{ as } n \to \infty\right\}^c = \bigcup_{\epsilon \in \Q_+} \limsup_{n \to \infty} \left\{d(X_n, X) \gt \epsilon\right\}$ where $\Q_+$ is the set of positive rational numbers. A critical point to remember is that this set is countable. So, building a little at a time, note that $\left\{d(X_n, X) \gt \epsilon\right\}$ is an event for each $\epsilon \in \Q_+$ and $n \in \N_+$ since $X_n$ and $X$ are random variables. Next, the limit superior of a sequence of events is an event. Finally, a countable union of events is an event.
As good probabilists, we usually suppress references to the sample space and write the definition simply as $\P(X_n \to X \text{ as } n \to \infty) = 1$. The statement that an event has probability 1 is usually the strongest affirmative statement that we can make in probability theory. Thus, convergence with probability 1 is the strongest form of convergence. The phrases almost surely and almost everywhere are sometimes used instead of the phrase with probability 1.
Recall that metrics $d$ and $e$ on $S$ are equivalent if they generate the same topology on $S$. Recall also that convergence of a sequence is a topological property. That is, if $(x_1, x_2, \ldots)$ is a sequence in $S$ and $x \in S$, and if $d, \, e$ are equivalent metrics on $S$, then $x_n \to x$ as $n \to \infty$ relative to $d$ if and only if $x_n \to x$ as $n \to \infty$ relative to $e$. So for our random variables as defined above, it follows that $X_n \to X$ as $n \to \infty$ with probability 1 relative to $d$ if and only if $X_n \to X$ as $n \to \infty$ with probability 1 relative to $e$.
The following statements are equivalent:
1. $X_n \to X$ as $n \to \infty$ with probability 1.
2. $\P\left[d(X_n, X) \gt \epsilon \text{ for infinitely many } n \in \N_+\right] = 0$ for every rational $\epsilon \gt 0$.
3. $\P\left[d(X_n, X) \gt \epsilon \text{ for infinitely many } n \in \N_+\right] = 0$ for every $\epsilon \gt 0$.
4. $\P\left[d(X_k, X) \gt \epsilon \text{ for some } k \ge n\right] \to 0$ as $n \to \infty$ for every $\epsilon \gt 0$.
Proof
From the details in the definition above, $\P(X_n \to X \text{ as } n \to \infty) = 1$ if and only if $\P\left(\bigcup_{\epsilon \in \Q_+} \left\{d(X_n, X) \gt \epsilon \text{ for infinitely many } n \in \N_+\right\} \right) = 0$ where again $\Q_+$ is the set of positive rational numbers. But by Boole's inequality, a countable union of events has probability 0 if and only if every event in the union has probability 0. Thus, (a) is equivalent to (b). Statement (b) is clearly equivalent to (c) since there are arbitrarily small positive rational numbers. Finally, (c) is equivalent to (d) by the continuity result above.
Our next result gives a fundamental criterion for convergence with probability 1:
If $\sum_{n=1}^\infty \P\left[d(X_n, X) \gt \epsilon\right] \lt \infty$ for every $\epsilon \gt 0$ then $X_n \to X$ as $n \to \infty$ with probability 1.
Proof
By the first Borel-Cantelli lemma, if $\sum_{n=1}^\infty \P\left[d(X_n, X) \gt \epsilon\right] \lt \infty$ then $\P\left[d(X_n, X) \gt \epsilon \text{ for infinitely many } n \in \N_+\right] = 0$. Hence the result follows from the previous theorem.
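For a concrete sketch of the criterion, suppose that $X = 0$ and that the $X_n$ are independent indicator variables with $\P(X_n = 1) = 1/n^2$, so that $\P\left[d(X_n, X) \gt \epsilon\right] = 1/n^2$ is summable for $\epsilon \in (0, 1)$. The simulation below is an illustration only, with an arbitrary truncation point; since the tail sum past index 100 is about 0.01, roughly 99% of the simulated paths should have no deviation after the first 100 indices.

```python
# A simulation sketch of the summability criterion (illustration only).
# Assumed setup: X = 0 and independent X_n with P(X_n = 1) = 1/n^2.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 5000, 2000
deviate = rng.random((trials, N)) < 1.0 / np.arange(1, N + 1) ** 2

any_dev = deviate.any(axis=1)
# index of the last deviation in each path (-1 if the path never deviates)
last = np.where(any_dev, N - 1 - np.argmax(deviate[:, ::-1], axis=1), -1)
print("fraction of paths with no deviation after the first 100 indices:",
      (last < 100).mean())
```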
Here is our next mode of convergence.
We say that $X_n \to X$ as $n \to \infty$ in probability if $\P\left[d(X_n, X) \gt \epsilon\right] \to 0 \text{ as } n \to \infty \text{ for each } \epsilon \gt 0$
The phrase in probability sounds superficially like the phrase with probability 1. However, as we will soon see, convergence in probability is much weaker than convergence with probability 1. Indeed, convergence with probability 1 is often called strong convergence, while convergence in probability is often called weak convergence.
If $X_n \to X$ as $n \to \infty$ with probability 1 then $X_n \to X$ as $n \to \infty$ in probability.
Proof
Let $\epsilon \gt 0$. Then $\P\left[d(X_n, X) \gt \epsilon\right] \le \P\left[d(X_k, X) \gt \epsilon \text{ for some } k \ge n\right]$. But if $X_n \to X$ as $n \to \infty$ with probability 1, then the expression on the right converges to 0 as $n \to \infty$ by part (d) of the result above. Hence $X_n \to X$ as $n \to \infty$ in probability.
The converse fails with a passion. A simple counterexample is given below. However, there is a partial converse that is very useful.
If $X_n \to X$ as $n \to \infty$ in probability, then there exists a subsequence $(n_1, n_2, n_3, \ldots)$ of $\N_+$ such that $X_{n_k} \to X$ as $k \to \infty$ with probability 1.
Proof
Suppose that $X_n \to X$ as $n \to \infty$ in probability. Then for each $k \in \N_+$ there exists $n_k \in \N_+$ such that $\P\left[d\left(X_{n_k}, X \right) \gt 1 / k \right] \lt 1 / k^2$. We can make the choices so that $n_k \lt n_{k+1}$ for each $k$. It follows that $\sum_{k=1}^\infty \P\left[d\left(X_{n_k}, X\right) \gt \epsilon \right] \lt \infty$ for every $\epsilon \gt 0$. By the result above, $X_{n_k} \to X$ as $k \to \infty$ with probability 1.
Note that the proof works because $1 / k \to 0$ as $k \to \infty$ and $\sum_{k=1}^\infty 1 / k^2 \lt \infty$. Any two sequences with these properties would work just as well.
There are two other modes of convergence that we will discuss later:
• Convergence in distribution.
• Convergence in mean.
Examples and Applications
Coins
Suppose that we have an infinite sequence of coins labeled $1, 2, \ldots$ Moreover, coin $n$ has probability of heads $1 / n^a$ for each $n \in \N_+$, where $a \gt 0$ is a parameter. We toss each coin in sequence one time. In terms of $a$, find the probability of the following events:
1. infinitely many heads occur
2. infinitely many tails occur
Answer
Let $H_n$ be the event that toss $n$ results in heads, and $T_n$ the event that toss $n$ results in tails.
1. If $a \in (0, 1]$ then $\P\left(\limsup_{n \to \infty} H_n\right) = 1$ and $\P\left(\limsup_{n \to \infty} T_n\right) = 1$, since $\sum_{n=1}^\infty 1/n^a = \infty$ and $\sum_{n=1}^\infty (1 - 1/n^a) = \infty$.
2. If $a \in (1, \infty)$ then $\P\left(\limsup_{n \to \infty} H_n\right) = 0$ and $\P\left(\limsup_{n \to \infty} T_n\right) = 1$, since $\sum_{n=1}^\infty 1/n^a \lt \infty$ while still $\sum_{n=1}^\infty (1 - 1/n^a) = \infty$.
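No simulation can verify an infinitely often statement, of course, but counting heads among the first $N$ coins hints at the dichotomy, as in the sketch below; the horizon and the parameter values are arbitrary.

```python
# A simulation sketch of the coin sequence (illustration only).
# Assumed setup: coin n has P(heads) = 1/n^a. For a <= 1 the head count grows
# without bound, like the partial sums of 1/n^a; for a > 1 it stays bounded.
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
n = np.arange(1, N + 1)
for a in [0.5, 1.0, 2.0]:
    heads = int((rng.random(N) < 1.0 / n ** a).sum())
    print(f"a = {a}: {heads} heads in {N} tosses "
          f"(expected {np.sum(1.0 / n ** a):.1f})")
```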
The following exercise gives a simple example of a sequence of random variables that converge in probability but not with probability 1. Naturally, we are assuming the standard metric on $\R$.
Suppose again that we have a sequence of coins labeled $1, 2, \ldots$, and that coin $n$ lands heads up with probability $\frac{1}{n}$ for each $n$. We toss the coins in order to produce a sequence $(X_1, X_2, \ldots)$ of independent indicator random variables with $\P(X_n = 1) = \frac{1}{n}, \; \P(X_n = 0) = 1 - \frac{1}{n}; \quad n \in \N_+$
1. $\P(X_n = 0 \text{ for infinitely many } n) = 1$, so that infinitely many tails occur with probability 1.
2. $\P(X_n = 1 \text{ for infinitely many } n) = 1$, so that infinitely many heads occur with probability 1.
3. $\P(X_n \text{ does not converge as } n \to \infty) = 1$.
4. $X_n \to 0$ as $n \to \infty$ in probability.
Proof
1. This follows from the second Borel-Cantelli lemma, since $\sum_{n = 1}^\infty \P(X_n = 0) = \infty$.
2. This also follows from the second Borel-Cantelli lemma, since $\sum_{n = 1}^\infty \P(X_n = 1) = \infty$.
3. This follows from parts (a) and (b). Recall that the intersection of two events with probability 1 still has probability 1.
4. Suppose $0 \lt \epsilon \lt 1$. Then $\P\left(\left|X_n - 0\right| \gt \epsilon\right) = \P(X_n = 1) = \frac{1}{n} \to 0$ as $n \to \infty$.
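A short simulation makes the contrast vivid. In the sketch below, an illustration only with an arbitrary horizon, the chance that $X_n = 1$ at a fixed large index is tiny, yet most paths still contain a 1 far out in the sequence: by a telescoping product, the probability of no 1 in $(m, N]$ is exactly $m/N$.

```python
# A simulation sketch of convergence in probability without convergence with
# probability 1 (illustration only). Assumed setup: independent indicators
# with P(X_n = 1) = 1/n, truncated at N. The probability of no 1 in (m, N] is
# m/N, so with m = 1000 and N = 100000 about 99% of paths have a late 1.
import numpy as np

rng = np.random.default_rng(3)
N, trials = 100_000, 500
X = rng.random((trials, N)) < 1.0 / np.arange(1, N + 1)

print("estimate of P(X_N = 1):", X[:, -1].mean())      # about 1/N, tiny
print("fraction of paths with a 1 past n = 1000:",
      X[:, 1000:].any(axis=1).mean())                  # about 0.99
```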
Discrete Spaces
Recall that a measurable space $(S, \mathscr S)$ is discrete if $S$ is countable and $\mathscr S$ is the collection of all subsets of $S$ (the power set of $S$). Moreover, $\mathscr S$ is the Borel $\sigma$-algebra corresponding to the discrete metric $d$ on $S$ given by $d(x, x) = 0$ for $x \in S$ and $d(x, y) = 1$ for distinct $x, \, y \in S$. How do convergence with probability 1 and convergence in probability work for the discrete metric?
Suppose that $(S, \mathscr S)$ is a discrete space. Suppose further that $(X_1, X_2, \ldots)$ is a sequence of random variables with values in $S$ and $X$ is a random variable with values in $S$, all defined on the probability space $(\Omega, \mathscr F, \P)$. Relative to the discrete metric $d$,
1. $X_n \to X$ as $n \to \infty$ with probability 1 if and only if $\P(X_n = X \text{ for all but finitely many } n \in \N_+) = 1$.
2. $X_n \to X$ as $n \to \infty$ in probability if and only if $\P(X_n \ne X) \to 0$ as $n \to \infty$.
Proof
1. If $(x_1, x_2, \ldots)$ is a sequence of points in $S$ and $x \in S$, then relative to metric $d$, $x_n \to x$ as $n \to \infty$ if and only if $x_n = x$ for all but finitely many $n \in \N_+$.
2. If $\epsilon \ge 1$ then $\P[d(X_n, X) \gt \epsilon] = 0$. If $\epsilon \in (0, 1)$ then $\P[d(X_n, X) \gt \epsilon] = \P(X_n \ne X)$.
Of course, it's important to realize that a discrete space can be the Borel space for metrics other than the discrete metric. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/02%3A_Probability_Spaces/2.06%3A_Convergence.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\supp}{\text{supp}}$
In this section we discuss positive measure spaces (which include probability spaces) from a more advanced point of view. The sections on Measure Theory and Special Set Structures in the chapter on Foundations are essential prerequisites. On the other hand, if you are not interested in the measure-theoretic aspects of probability, you can safely skip this section.
Positive Measure
Definitions
Suppose that $S$ is a set, playing the role of a universal set for a mathematical theory. As we have noted before, $S$ usually comes with a $\sigma$-algebra $\mathscr S$ of admissible subsets of $S$, so that $(S, \mathscr S)$ is a measurable space. In particular, this is the case for the model of a random experiment, where $S$ is the set of outcomes and $\mathscr S$ the $\sigma$-algebra of events, so that the measurable space $(S, \mathscr S)$ is the sample space of the experiment. A probability measure is a special case of a more general object known as a positive measure.
A positive measure on $(S, \mathscr S)$ is a function $\mu: \mathscr S \to [0, \infty]$ that satisfies the following axioms:
1. $\mu(\emptyset) = 0$
2. If $\{A_i: i \in I\}$ is a countable, pairwise disjoint collection of sets in $\mathscr S$ then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$
The triple $(S, \mathscr S, \mu)$ is a measure space.
Axiom (b) is called countable additivity, and is the essential property. The measure of a set that consists of a countable union of disjoint pieces is the sum of the measures of the pieces. Note also that since the terms in the sum are nonnegative, there is no issue with the order of the terms in the sum, although of course, $\infty$ is a possible value.
So perhaps the term measurable space for $(S, \mathscr S)$ makes a little more sense now—a measurable space is one that can have a positive measure defined on it.
Suppose that $(S, \mathscr S, \mu)$ is a measure space.
1. If $\mu(S) \lt \infty$ then $(S, \mathscr S, \mu)$ is a finite measure space.
2. If $\mu(S) = 1$ then $(S, \mathscr S, \mu)$ is a probability space.
So probability measures are positive measures, but positive measures are important beyond the application to probability. The standard measures on the Euclidean spaces are all positive measures: the extension of length for measurable subsets of $\R$, the extension of area for measurable subsets of $\R^2$, the extension of volume for measurable subsets of $\R^3$, and the higher dimensional analogues. We will actually construct these measures in the next section on Existence and Uniqueness. In addition, counting measure $\#$ is a positive measure on the subsets of a set $S$. Even more general measures that can take positive and negative values are explored in the chapter on Distributions.
Properties
The following results give some simple properties of a positive measure space $(S, \mathscr S, \mu)$. The proofs are essentially identical to the proofs of the corresponding properties of probability, except that the measure of a set may be infinite so we must be careful to avoid the dreaded indeterminate form $\infty - \infty$.
If $A, \, B \in \mathscr S$, then $\mu(B) = \mu(A \cap B) + \mu(B \setminus A)$.
Proof
Note that $B = (A \cap B) \cup (B \setminus A)$, and the sets in the union are disjoint.
If $A, \, B \in \mathscr S$ and $A \subseteq B$ then
1. $\mu(B) = \mu(A) + \mu(B \setminus A)$
2. $\mu(A) \le \mu(B)$
Proof
Part (a) follows from the previous theorem, since $A \cap B = A$. Part (b) follows from part (a).
Thus $\mu$ is an increasing function, relative to the subset partial order $\subseteq$ on $\mathscr S$ and the ordinary order $\le$ on $[0, \infty]$. In particular, if $\mu$ is a finite measure, then $\mu(A) \lt \infty$ for every $A \in \mathscr S$. Note also that if $A, \, B \in \mathscr S$ and $\mu(B) \lt \infty$ then $\mu(B \setminus A) = \mu(B) - \mu(A \cap B)$. In the special case that $A \subseteq B$, this becomes $\mu(B \setminus A) = \mu(B) - \mu(A)$. In particular, these results hold for a finite measure and are just like the difference rules for probability. If $\mu$ is a finite measure, then $\mu(A^c) = \mu(S) - \mu(A)$. This is the analogue of the complement rule in probability, but with $\mu(S)$ replacing 1.
The following result is the analogue of Boole's inequality for probability. For a general positive measure, the result is referred to as the subadditive property.
Suppose that $A_i \in \mathscr S$ for $i$ in a countable index set $I$. Then $\mu\left(\bigcup_{i \in I} A_i \right) \le \sum_{i \in I} \mu(A_i)$
Proof
The proof is exactly like the one for Boole's inequality. Assume that $I = \N_+$. Let $B_1 = A_1$ and $B_i = A_i \setminus (A_1 \cup \ldots \cup A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Then $\{B_i: i \in I\}$ is a disjoint collection of sets in $\mathscr S$ with the same union as $\{A_i: i \in I\}$. Also $B_i \subseteq A_i$ for each $i$ so $\mu(B_i) \le \mu(A_i)$. Hence $\mu\left(\bigcup_{i \in I} A_i \right) = \mu\left(\bigcup_{i \in I} B_i \right) = \sum_{i \in I} \mu(B_i) \le \sum_{i \in I} \mu(A_i)$
For a union of sets with finite measure, the inclusion-exclusion formula holds, and the proof is just like the one for probability.
Suppose that $A_i \in \mathscr S$ for each $i \in I$ where $\#(I) = n$, and that $\mu(A_i) \lt \infty$ for $i \in I$. Then $\mu \left( \bigcup_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \mu \left( \bigcap_{j \in J} A_j \right)$
Proof
The proof is by induction on $n$. The proof for $n = 2$ is simple: $A_1 \cup A_2 = A_1 \cup (A_2 \setminus A_1)$. The union on the right is disjoint, so using additivity and the difference rule, $\mu(A_1 \cup A_2) = \mu (A_1) + \mu(A_2 \setminus A_1) = \mu(A_1) + \mu(A_2) - \mu(A_1 \cap A_2)$ Suppose now that the inclusion-exclusion formula holds for a given $n \in \N_+$, and consider the case $n + 1$. Then $\bigcup_{i=1}^{n + 1} A_i = \left(\bigcup_{i=1}^n A_i \right) \cup \left[ A_{n+1} \setminus \left(\bigcup_{i=1}^n A_i\right) \right]$ As before, the set in parentheses and the set in square brackets are disjoint. Thus using the additivity axiom, the difference rule, and the distributive rule we have $\mu\left(\bigcup_{i=1}^{n+1} A_i\right) = \mu\left(\bigcup_{i=1}^n A_i\right) + \mu(A_{n+1}) - \mu\left(\bigcup_{i=1}^n (A_{n+1} \cap A_i) \right)$ By the induction hypothesis, the inclusion-exclusion formula holds for each union of $n$ sets on the right. Applying the formula and simplifying gives the inclusion-exclusion formula for $n + 1$ sets.
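The formula is easy to test with counting measure, under which the measure of a finite set is simply its number of elements. The sketch below uses arbitrary random subsets of $\{0, 1, \ldots, 19\}$ and checks that the signed sum over nonempty index sets matches the size of the union.

```python
# A sketch checking inclusion-exclusion for counting measure (illustration only).
# Four random 8-element subsets of {0, ..., 19}; the signed sum over nonempty
# index sets J must equal the number of elements in the union.
from itertools import combinations
import random

random.seed(0)
sets = [set(random.sample(range(20), 8)) for _ in range(4)]

signed_sum = 0
for k in range(1, len(sets) + 1):
    for J in combinations(range(len(sets)), k):
        inter = set.intersection(*(sets[j] for j in J))
        signed_sum += (-1) ** (k - 1) * len(inter)

print(len(set.union(*sets)), signed_sum)   # the two numbers agree
```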
The continuity theorem for increasing sets holds for a positive measure. The continuity theorem for decreasing events holds also, if the sets have finite measure. Again, the proofs are similar to the ones for a probability measure, except for considerations of infinite measure.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of sets in $\mathscr S$.
1. If the sequence is increasing then $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \mu(A_n)$.
2. If sequence is decreasing and $\mu(A_1) \lt \infty$ then $\mu\left(\bigcap_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \mu(A_n)$.
Proof
1. Note that if $\mu(A_k) = \infty$ for some $k$ then $\mu(A_n) = \infty$ for $n \ge k$ and $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \infty$. Thus, suppose that $\mu(A_i) \lt \infty$ for each $i$. Let $B_1 = A_1$ and $B_i = A_i \setminus A_{i-1}$ for $i \in \{2, 3, \ldots\}$. Then $(B_1, B_2, \ldots)$ is a disjoint sequence with the same union as $(A_1, A_2, \ldots)$. Also, $\mu(B_1) = \mu(A_1)$ and by the proper difference rule, $\mu(B_i) = \mu(A_i) - \mu(A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Hence $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \mu \left(\bigcup_{i=1}^\infty B_i \right) = \sum_{i=1}^\infty \mu(B_i) = \lim_{n \to \infty} \sum_{i=1}^n \mu(B_i)$ But $\sum_{i=1}^n \mu(B_i) = \mu(A_1) + \sum_{i=2}^n [\mu(A_i) - \mu(A_{i-1})] = \mu(A_n)$.
2. Note that $A_1 \setminus A_n$ is increasing in $n$. Hence using the continuity result for increasing sets, \begin{align} \mu \left(\bigcap_{i=1}^\infty A_i \right) & = \mu\left[A_1 \setminus \bigcup_{i=1}^\infty (A_1 \setminus A_i) \right] = \mu(A_1) - \mu\left[\bigcup_{i=1}^\infty (A_1 \setminus A_i)\right] \\ & = \mu(A_1) - \lim_{n \to \infty} \mu(A_1 \setminus A_n) = \mu(A_1) - \lim_{n \to \infty} \left[\mu(A_1) - \mu(A_n)\right] = \lim_{n \to \infty} \mu(A_n) \end{align}
Recall that if $(A_1, A_2, \ldots)$ is increasing, $\bigcup_{i=1}^\infty A_i$ is denoted $\lim_{n \to \infty} A_n$, and if $(A_1, A_2, \ldots)$ is decreasing, $\bigcap_{i=1}^\infty A_i$ is denoted $\lim_{n \to \infty} A_n$. In both cases, the continuity theorem has the form $\mu\left(\lim_{n \to \infty} A_n\right) = \lim_{n \to \infty} \mu(A_n)$. The continuity theorem for decreasing events fails without the additional assumption of finite measure. A simple counterexample is given below.
The following corollary of the inclusion-exclusion law gives a condition for countable additivity that does not require that the sets be disjoint, but only that the intersections have measure 0. The result is used below in the theorem on completion.
Suppose that $A_i \in \mathscr S$ for each $i$ in a countable index set $I$ and that $\mu(A_i) \lt \infty$ for $i \in I$ and $\mu(A_i \cap A_j) = 0$ for distinct $i, \, j \in I$. Then $\mu\left(\bigcup_{i \in I} A_i \right) = \sum_{i \in I} \mu(A_i)$
Proof
We will assume that $I = \N_+$. For $n \in \N_+$, $\mu\left(\bigcup_{i=1}^n A_i\right) = \sum_{i=1}^n \mu(A_i)$ as an immediate consequence of the inclusion-exclusion law, under the assumption that $\mu(A_i \cap A_j) = 0$ for distinct $i, j \in \{1, 2, \ldots, n\}$. Next $\bigcup_{i=1}^n A_i \uparrow \bigcup_{i=1}^\infty A_i$ as $n \to \infty$, and hence by the continuity theorem for increasing events, $\mu\left(\bigcup_{i=1}^n A_i\right) \to \mu\left(\bigcup_{i=1}^\infty A_i\right)$ as $n \to \infty$. On the other hand, $\sum_{i=1}^n \mu(A_i) \to \sum_{i=1}^\infty \mu(A_i)$ as $n \to \infty$ by the definition of an infinite series of nonnegative terms.
More Definitions
If a positive measure is not finite, then the following definition gives the next best thing.
The measure space $(S, \mathscr S, \mu)$ is $\sigma$-finite if there exists a countable collection $\{A_i: i \in I\} \subseteq \mathscr S$ with $\bigcup_{i \in I} A_i = S$ and $\mu(A_i) \lt \infty$ for each $i \in I$.
So of course, if $\mu$ is a finite measure on $(S, \mathscr S)$ then $\mu$ is $\sigma$-finite, but not conversely in general. On the other hand, for $i \in I$, let $\mathscr S_i = \{A \in \mathscr S: A \subseteq A_i\}$. Then $\mathscr S_i$ is a $\sigma$-algebra of subsets of $A_i$ and $\mu$ restricted to $\mathscr S_i$ is a finite measure. The point of this (and the reason for the definition) is that often nice properties of finite measures can be extended to $\sigma$-finite measures. In particular, $\sigma$-finite measure spaces play a crucial role in the construction of product measure spaces, and for the completion of a measure space considered below.
Suppose that $(S, \mathscr S, \mu)$ is a $\sigma$-finite measure space.
1. There exists an increasing sequence satisfying the $\sigma$-finite definition.
2. There exists a disjoint sequence satisfying the $\sigma$-finite definition.
Proof
Without loss of generality, we can take $\N_+$ as the index set in the definition. So there exists $A_n \in \mathscr S$ for $n \in \N_+$ such that $\mu(A_n) \lt \infty$ for each $n \in \N_+$ and $S = \bigcup_{n=1}^\infty A_n$. The proof uses some of the same tricks that we have seen before.
1. Let $B_n = \bigcup_{i = 1}^n A_i$. Then $B_n \in \mathscr S$ for $n \in \N_+$ and this sequence is increasing. Moreover, $\mu(B_n) \le \sum_{i=1}^n \mu(A_i) \lt \infty$ for $n \in \N_+$ and $\bigcup_{n=1}^\infty B_n = \bigcup_{n=1}^\infty A_n = S$.
2. Let $C_1 = A_1$ and let $C_n = A_n \setminus \bigcup_{i=1}^{n-1} A_i$ for $n \in \{2, 3, \ldots\}$. Then $C_n \in \mathscr S$ for each $n \in \N_+$ and this sequence is disjoint. Moreover, $C_n \subseteq A_n$ so $\mu(C_n) \le \mu(A_n) \lt \infty$ and $\bigcup_{n=1}^\infty C_n = \bigcup_{n=1}^\infty A_n = S$.
Our next definition concerns sets where a measure is concentrated, in a certain sense.
Suppose that $(S, \mathscr S, \mu)$ is a measure space. An atom of the space is a set $A \in \mathscr S$ with the following properties:
1. $\mu(A) \gt 0$
2. If $B \in \mathscr S$ and $B \subseteq A$ then either $\mu(B) = \mu(A)$ or $\mu(B) = 0$.
A measure space that has no atoms is called non-atomic or diffuse.
In probability theory, we are often particularly interested in atoms that are singleton sets. Note that $\{x\} \in \mathscr S$ is an atom if and only if $\mu(\{x\}) \gt 0$, since the only subsets of $\{x\}$ are $\{x\}$ itself and $\emptyset$.
Constructions
There are several simple ways to construct new positive measures from existing ones. As usual, we start with a measurable space $(S, \mathscr S)$.
Suppose that $(R, \mathscr R)$ is a measurable subspace of $(S, \mathscr S)$. If $\mu$ is a positive measure on $(S, \mathscr S)$ then $\mu$ restricted to $\mathscr R$ is a positive measure on $(R, \mathscr R)$. If $\mu$ is a finite measure on $(S, \mathscr S)$ then $\mu$ is a finite measure on $(R, \mathscr R)$.
Proof
The assumption is that $\mathscr R$ is a $\sigma$-algebra of subsets of $R$ and $\mathscr R \subseteq \mathscr S$. In particular $R \in \mathscr S$. Since the additivity property of $\mu$ holds for a countable, disjoint collection of events in $\mathscr S$, it trivially holds for a countable, disjoint collection of events in $\mathscr R$. Finally, by the increasing property, $\mu(R) \le \mu(S)$ so if $\mu(S) \lt \infty$ then $\mu(R) \lt \infty$.
However, if $\mu$ is $\sigma$-finite on $(S, \mathscr S)$, it is not necessarily true that $\mu$ is $\sigma$-finite on $(R, \mathscr R)$. A counterexample is given below. The previous theorem would apply, in particular, when $R = S$ so that $\mathscr R$ is a sub $\sigma$-algebra of $\mathscr S$. Next, a positive multiple of a positive measure gives another positive measure.
If $\mu$ is a positive measure on $(S, \mathscr S)$ and $c \in (0, \infty)$, then $c \mu$ is also a positive measure on $(S, \mathscr S)$. If $\mu$ is finite ($\sigma$-finite) then $c \mu$ is finite ($\sigma$-finite) respectively.
Proof
Clearly $c \mu: \mathscr S \to [0, \infty]$. Also $(c \mu)(\emptyset) = c \mu(\emptyset) = 0$. Next if $\{A_i: i \in I\}$ is a countable, disjoint collection of events in $\mathscr S$ then $(c \mu)\left(\bigcup_{i \in I} A_i\right) = c \mu\left(\bigcup_{i \in I} A_i\right) = c \sum_{i \in I} \mu(A_i) = \sum_{i \in I} c \mu(A_i)$ Finally, since $\mu(A) \lt \infty$ if and only if $(c \mu)(A) \lt \infty$ for $A \in \mathscr S$, the finiteness and $\sigma$-finiteness properties are trivially preserved.
A nontrivial finite positive measure $\mu$ is practically just like a probability measure, and in fact can be re-scaled into a probability measure $\P$, as was done in the section on Probability Measures:
Suppose that $\mu$ is a positive measure on $(S, \mathscr S)$ with $0 \lt \mu(S) \lt \infty$. Then $\P$ defined by $\P(A) = \mu(A) / \mu(S)$ for $A \in \mathscr S$ is a probability measure on $(S, \mathscr S)$.
Proof
$\P$ is a measure by the previous result, and trivially $\P(S) = 1$.
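For a discrete measure given by point masses, the rescaling is just a normalization, as in the following minimal sketch; the masses are arbitrary.

```python
# A minimal sketch of rescaling a finite measure into a probability measure
# (illustration only). mu is a dict of point masses and P(A) = mu(A) / mu(S).
mu = {"a": 2.0, "b": 3.0, "c": 5.0}
total = sum(mu.values())                      # mu(S) = 10.0
P = {x: m / total for x, m in mu.items()}     # normalized point masses
print(P, sum(P.values()))                     # the masses now sum to 1
```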
Sums of positive measures are also positive measures.
If $\mu_i$ is a positive measure on $(S, \mathscr S)$ for each $i$ in a countable index set $I$ then $\mu = \sum_{i \in I} \mu_i$ is also a positive measure on $(S, \mathscr S)$.
1. If $I$ is finite and $\mu_i$ is finite for each $i \in I$ then $\mu$ is finite.
2. If $I$ is finite and $\mu_i$ is $\sigma$-finite for each $i \in I$ then $\mu$ is $\sigma$-finite.
Proof
Clearly $\mu: \mathscr S \to [0, \infty]$. First $\mu(\emptyset) = \sum_{i \in I} \mu_i(\emptyset) = 0$. Next if $\{A_j: j \in J\}$ is a countable, disjoint collection of events in $\mathscr S$ then $\mu\left(\bigcup_{j \in J} A_j\right) = \sum_{i \in I} \mu_i \left(\bigcup_{j \in J} A_j\right) = \sum_{i \in I} \sum_{j \in J} \mu_i(A_j) = \sum_{j \in J} \sum_{i \in I} \mu_i(A_j) = \sum_{j \in J} \mu(A_j)$ The interchange of sums is permissible since the terms are nonnegative. Suppose now that $I$ is finite.
1. If $\mu_i$ is finite for each $i \in I$ then $\mu(S) = \sum_{i \in I} \mu_i(S) \lt \infty$ so $\mu$ is finite.
2. Suppose that $\mu_i$ is $\sigma$-finite for each $i \in I$. Then for each $i \in I$ there exists a collection $\mathscr A_i = \{A_{i j}: j \in \N\} \subseteq \mathscr S$ such that $\bigcup_{j=1}^\infty A_{i j} = S$ and $\mu_i(A_{i j}) \lt \infty$ for each $j \in \N$. By the result above, we can assume that the sequence $(A_{i 1}, A_{i 2}, \ldots)$ is increasing for each $i \in I$. For $j \in \N$, let $B_j = \bigcap_{i \in I} A_{i j}$. Then $B_j \in \mathscr S$ for each $j \in \N$, and since $I$ is finite and each sequence is increasing, $\bigcup_{j=1}^\infty B_j = \bigcup_{j=1}^\infty \bigcap_{i \in I} A_{i j} = \bigcap_{i \in I} \bigcup_{j=1}^\infty A_{i j} = \bigcap_{i \in I} S = S$ Moreover, $\mu(B_j) = \sum_{i \in I} \mu_i(B_j) \le \sum_{i \in I} \mu_i(A_{i j}) \lt \infty$ so $\mu$ is $\sigma$-finite.
In the context of the last result, if $I$ is countably infinite and $\mu_i$ is finite for each $i \in I$, then $\mu$ is not necessarily $\sigma$-finite. A counterexample is given below. In this case, $\mu$ is said to be $s$-finite, but we've had enough definitions, so we won't pursue this one. From scaling and sum properties, note that a positive linear combination of positive measures is a positive measure. The next method is sometimes referred to as a change of variables.
Suppose that $(S, \mathscr S, \mu)$ is a measure space. Suppose also that $(T, \mathscr T)$ is another measurable space and that $f: S \to T$ is measurable. Then $\nu$ defined as follows is a positive measure on $(T, \mathscr T)$ $\nu(B) = \mu\left[f^{-1}(B)\right], \quad B \in \mathscr T$ If $\mu$ is finite then $\nu$ is finite.
Proof
Clearly $\nu: \mathscr T \to [0, \infty]$. The proof is easy since inverse images preserve all set operations. First $f^{-1}(\emptyset) = \emptyset$ so $\nu(\emptyset) = 0$. Next, if $\left\{B_i: i \in I\right\}$ is a countable, disjoint collection of sets in $\mathscr T$, then $\left\{f^{-1}(B_i): i \in I\right\}$ is a countable, disjoint collection of sets in $\mathscr S$, and $f^{-1}\left(\bigcup_{i \in I} B_i\right) = \bigcup_{i \in I} f^{-1}(B_i)$. Hence $\nu\left(\bigcup_{i \in I} B_i\right) = \mu\left[f^{-1}\left(\bigcup_{i \in I} B_i\right)\right] = \mu\left[\bigcup_{i \in I} f^{-1}(B_i)\right] = \sum_{i \in I} \mu\left[f^{-1}(B_i)\right] = \sum_{i \in I} \nu(B_i)$ Finally, if $\mu$ is finite then $\nu(T) = \mu[f^{-1}(T)] = \mu(S) \lt \infty$ so $\nu$ is finite.
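When $\mu$ is a discrete measure given by point masses, the construction simply pushes each mass through $f$, as in the following sketch; the masses and the map are arbitrary.

```python
# A sketch of the change-of-variables construction for a discrete measure
# (illustration only). nu(B) = mu(f^{-1}(B)) is computed by routing each point
# mass of mu through the map f.
from collections import defaultdict

def pushforward(mu, f):
    """Image measure of a discrete measure mu (dict of point masses) under f."""
    nu = defaultdict(float)
    for x, mass in mu.items():
        nu[f(x)] += mass
    return dict(nu)

mu = {1: 0.5, 2: 1.5, 3: 2.0, 4: 1.0}    # point masses on S = {1, 2, 3, 4}
nu = pushforward(mu, lambda x: x % 2)    # f: S -> T = {0, 1}, f(x) = x mod 2
print(nu)                                # {1: 2.5, 0: 2.5}; total mass preserved
```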
In the context of the last result, if $\mu$ is $\sigma$-finite on $(S, \mathscr S)$, it is not necessarily true that $\nu$ is $\sigma$-finite on $(T, \mathscr T)$, even if $f$ is one-to-one. A counterexample is given below. The takeaway is that $\sigma$-finiteness of $\nu$ depends very much on the nature of the $\sigma$-algebra $\mathscr T$. Our next result shows that it's easy to explicitly construct a positive measure on a countably generated $\sigma$-algebra, that is, a $\sigma$-algebra generated by a countable partition. Such $\sigma$-algebras are important for counterexamples and to gain insight, and also because many $\sigma$-algebras that occur in applications can be constructed from them.
Suppose that $\mathscr A = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty sets, and that $\mathscr S = \sigma(\mathscr{A})$, the $\sigma$-algebra generated by the partition. For $i \in I$, define $\mu(A_i) \in [0, \infty]$ arbitrarily. For $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$, define $\mu(A) = \sum_{j \in J} \mu(A_j)$ Then $\mu$ is a positive measure on $(S, \mathscr S)$.
1. The atoms of the measure are the sets of the form $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$ and where $\mu(A_j) \gt 0$ for one and only one $j \in J$.
2. If $\mu(A_i) \lt \infty$ for $i \in I$ and $I$ is finite then $\mu$ is finite.
3. If $\mu(A_i) \lt \infty$ for $i \in I$ and $I$ is countably infinite then $\mu$ is $\sigma$-finite.
Proof
Recall that every $A \in \mathscr S$ has a unique representation of the form $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$. In particular, $J = \emptyset$ in this representation gives $A = \emptyset$. The sum over an empty index set is 0, so $\mu(\emptyset) = 0$. Next suppose that $\{B_k: k \in K\}$ is a countable, disjoint collection of sets in $\mathscr S$. Then there exists a disjoint collection $\{J_k: k \in K\}$ of subsets of $I$ such that $B_k = \bigcup_{j \in J_k} A_j$. Hence $\mu\left(\bigcup_{k \in K} B_k\right) = \mu\left(\bigcup_{k \in K} \bigcup_{j \in J_k} A_j\right) = \sum_{k \in K}\sum_{j \in J_k} \mu(A_j) = \sum_{k \in K} \mu(B_k)$ The fact that the terms are all nonnegative means that we do not have to worry about the order of summation.
1. Again, every $A \in \mathscr S$ has the unique representation $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$. The subsets of $A$ that are in $\mathscr S$ are $\bigcup_{k \in K} A_k$ where $K \subseteq J$. Hence $A$ is an atom if and only if $\mu(A_j) \gt 0$ for one and only one $j \in J$.
2. If $I$ is finite and $\mu(A_i) \lt \infty$ then $\mu(S) = \sum_{i \in I} \mu(A_i) \lt \infty$, so $\mu$ is finite.
3. If $I$ is countably infinite and $\mu(A_i) \lt \infty$ for $i \in I$ then $\mathscr A$ satisfies the condition for $\mu$ to be $\sigma$-finite.
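Here is a sketch of the construction for a small concrete case, with three arbitrary blocks and arbitrary assigned measures, one of them infinite; a set is measurable exactly when it is a union of blocks.

```python
# A sketch of the partition construction (illustration only). S = {0, ..., 5}
# is partitioned into three blocks with assigned measures; the measure of a
# union of blocks is the sum of the block measures.
blocks = [{0, 1}, {2}, {3, 4, 5}]
block_measure = [1.0, 0.0, float("inf")]

def measure(A):
    """Measure of A, which must be a union of the partition blocks."""
    total = 0.0
    for B, m in zip(blocks, block_measure):
        if B <= A:                 # block contained in A: contribute its measure
            total += m
        elif B & A:                # block partially meets A: A is not measurable
            raise ValueError("A is not a union of blocks")
    return total

print(measure({0, 1, 2}))          # 1.0
print(measure({2, 3, 4, 5}))       # inf
```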
One of the most general ways to construct new measures from old ones is via the theory of integration with respect to a positive measure, which is explored in the chapter on Distributions. The construction of positive measures more or less from scratch is considered in the next section on Existence and Uniqueness. We close this discussion with a simple result that is useful for counterexamples.
Suppose that the measure space $(S, \mathscr S, \mu)$ has an atom $A \in \mathscr S$ with $\mu(A) = \infty$. Then the space is not $\sigma$-finite.
Proof
Suppose on the contrary that the space is $\sigma$-finite, so that there exists a countable disjoint collection $\{A_i: i \in I\}$ of sets in $\mathscr S$ that partitions $S$, with $\mu(A_i) \lt \infty$ for each $i \in I$. Then $\{A \cap A_i: i \in I\}$ partitions $A$. Since $\mu(A) = \sum_{i \in I} \mu(A \cap A_i) = \infty$, we must have $\mu(A \cap A_i) \gt 0$ for some $i \in I$. Since $A$ is an atom and $A \cap A_i \subseteq A$, it follows that $\mu(A \cap A_i) = \mu(A) = \infty$, and hence $\mu(A_i) = \infty$, a contradiction.
Measure and Topology
Often the spaces that occur in probability and stochastic processes are topological spaces. Recall that a topological space $(S, \mathscr T)$ consists of a set $S$ and a topology $\mathscr T$ on $S$ (the collection of open sets). The topology as well as the measure theory plays an important role, so it's natural to want these two types of structures to be compatible. We have already seen the most important step in this direction: Recall that $\mathscr S = \sigma(\mathscr T)$, the $\sigma$-algebra generated by the topology, is the Borel $\sigma$-algebra on $S$, named for Émile Borel. Since the complement of an open set is a closed set, $\mathscr S$ is also the $\sigma$-algebra generated by the collection of closed sets. Moreover, $\mathscr S$ contains countable intersections of open sets (called $G_\delta$ sets) and countable unions of closed sets (called $F_\sigma$ sets).
Suppose that $(S, \mathscr T)$ is a topological space and let $\mathscr S = \sigma(\mathscr T)$ be the Borel $\sigma$-algebra. A positive measure $\mu$ on $(S, \mathscr S)$ is called a Borel measure, and then $(S, \mathscr S, \mu)$ is a Borel measure space.
The next definition concerns the subset on which a Borel measure is concentrated, in a certain sense.
Suppose that $(S, \mathscr S, \mu)$ is a Borel measure space. The support of $\mu$ is $\supp(\mu) = \{x \in S: \mu(U) \gt 0 \text{ for every open neighborhood } U \text{ of } x\}$ The set $\supp(\mu)$ is closed.
Proof
Let $A = \supp(\mu)$. For $x \in A^c$, there exists an open neighborhood $V_x$ of $x$ such that $\mu(V_x) = 0$. If $y \in V_x$, then $V_x$ is also an open neighborhood of $y$, so $y \in A^c$. Hence $V_x \subseteq A^c$ for every $x \in A^c$ and so $A^c$ is open.
The term Borel measure has different definitions in the literature. Often the topological space is required to be locally compact, Hausdorff, and with a countable base (LCCB). Then a Borel measure $\mu$ is required to have the additional condition that $\mu(C) \lt \infty$ if $C \subseteq S$ is compact. In this text, we use the term Borel measure in this more restricted sense.
Suppose that $(S, \mathscr S, \mu)$ is a Borel measure space corresponding to an LCCB topology. Then the space is $\sigma$-finite.
Proof
Since the topological space is locally compact and has a countable base, $S = \bigcup_{i \in I} C_i$ where $\{C_i: i \in I\}$ is a countable collection of compact sets. Since $\mu$ is a Borel measure, $\mu(C_i) \lt \infty$ and hence $\mu$ is $\sigma$-finite.
Here are a couple of other definitions that are important for Borel measures, again linking topology and measure in natural ways.
Suppose again that $(S, \mathscr S, \mu)$ is a Borel measure space.
1. $\mu$ is inner regular if $\mu(A) = \sup\{\mu(C): C \text{ is compact and } C \subseteq A\}$ for $A \in \mathscr S$.
2. $\mu$ is outer regular if $\mu(A) = \inf\{\mu(U): U \text{ is open and } A \subseteq U\}$ for $A \in \mathscr S$.
3. $\mu$ is regular if it is both inner regular and outer regular.
The measure spaces that occur in probability and stochastic processes are usually regular Borel spaces associated with LCCB topologies.
Null Sets and Equivalence
Sets of measure 0 in a measure space turn out to be very important precisely because we can often ignore the differences between mathematical objects on such sets. In this discussion, we assume that we have a fixed measure space $(S, \mathscr S, \mu)$.
A set $A \in \mathscr S$ is null if $\mu(A) = 0$.
Consider a measurable statement with $x \in S$ as a free variable. (Technically, such a statement is a predicate on $S$.) If the statement is true for all $x \in S$ except for $x$ in a null set, we say that the statement holds almost everywhere on $S$. This terminology is used often in measure theory and captures the importance of the definition.
Let $\mathscr D = \{A \in \mathscr S: \mu(A) = 0 \text{ or } \mu(A^c) = 0\}$, the collection of null and co-null sets. Then $\mathscr D$ is a sub $\sigma$-algebra of $\mathscr S$.
Proof
Trivially $S \in \mathscr D$ since $S^c = \emptyset$ and $\mu(\emptyset) = 0$. Next if $A \in \mathscr D$ then $A^c \in \mathscr D$ by the symmetry of the definition. Finally, suppose that $A_i \in \mathscr D$ for $i \in I$ where $I$ is a countable index set. If $\mu(A_i) = 0$ for every $i \in I$ then $\mu\left(\bigcup_{i \in I} A_i \right) \le \sum_{i \in I} \mu(A_i) = 0$ by the subadditive property. On the other hand, if $\mu(A_j^c) = 0$ for some $j \in I$ then $\mu\left[\left(\bigcup_{i \in I} A_i \right)^c\right] = \mu\left(\bigcap_{i \in I} A_i^c\right) \le \mu(A_j^c) = 0$. In either case, $\bigcup_{i \in I} A_i \in \mathscr D$.
Of course $\mu$ restricted to $\mathscr D$ is not very interesting since $\mu(A) = 0$ or $\mu(A) = \mu(S)$ for every $A \in \mathscr D$. Our next definition is a type of equivalence between sets in $\mathscr S$. To make this precise, recall first that the symmetric difference between subsets $A$ and $B$ of $S$ is $A \bigtriangleup B = (A \setminus B) \cup (B \setminus A)$. This is the set that consists of points in one of the two sets, but not both, and corresponds to exclusive or.
Sets $A, \, B \in \mathscr S$ are equivalent if $\mu(A \bigtriangleup B) = 0$, and we denote this by $A \equiv B$.
Thus $A \equiv B$ if and only if $\mu(A \bigtriangleup B) = \mu(A \setminus B) + \mu(B \setminus A) = 0$ if and only if $\mu(A \setminus B) = \mu(B \setminus A) = 0$. In the predicate terminology mentioned above, the statement $x \in A \text{ if and only if } x \in B$ is true for almost every $x \in S$. As the name suggests, the relation $\equiv$ really is an equivalence relation on $\mathscr S$ and hence $\mathscr S$ is partitioned into disjoint classes of mutually equivalent sets. Two sets in the same equivalence class differ by a set of measure 0.
The relation $\equiv$ is an equivalence relation on $\mathscr S$. That is, for $A, \, B, \, C \in \mathscr S$,
1. $A \equiv A$ (the reflexive property).
2. If $A \equiv B$ then $B \equiv A$ (the symmetric property).
3. If $A \equiv B$ and $B \equiv C$ then $A \equiv C$ (the transitive property).
Proof
1. The reflexive property is trivial since $A \bigtriangleup A = \emptyset$.
2. The symmetric property is also trivial since $A \bigtriangleup B = B \bigtriangleup A$.
3. For the transitive property, suppose that $A \equiv B$ and $B \equiv C$. Note that $A \setminus C \subseteq (A \setminus B) \cup (B \setminus C)$, and hence $\mu(A \setminus C) = 0$. By a symmetric argument, $\mu(C \setminus A) = 0$.
Equivalence is preserved under the standard set operations.
If $A, \, B \in \mathscr S$ and $A \equiv B$ then $A^c \equiv B^c$.
Proof
Note that $A^c \setminus B^c = B \setminus A$ and $B^c \setminus A^c = A \setminus B$, so $A^c \bigtriangleup B^c = A \bigtriangleup B$.
Suppose that $A_i, \, B_i \in \mathscr S$ and that $A_i \equiv B_i$ for $i$ in a countable index set $I$. Then
1. $\bigcup_{i \in I} A_i \equiv \bigcup_{i \in I} B_i$
2. $\bigcap_{i \in I} A_i \equiv \bigcap_{i \in I} B_i$
Proof
1. Note that $\left(\bigcup_{i \in I} A_i\right) \bigtriangleup \left(\bigcup_{i \in I} B_i\right) \subseteq \bigcup_{i \in I} (A_i \bigtriangleup B_i)$ To see this, note that if $x$ is in the set on the left then either $x \in A_j$ for some $j \in I$ and $x \notin B_i$ for every $i \in I$, or $x \notin A_i$ for every $i \in I$ and $x \in B_j$ for some $j \in I$. In either case, $x \in A_j \bigtriangleup B_j$ for some $j \in I$.
2. Similarly $\left(\bigcap_{i \in I} A_i\right) \bigtriangleup \left(\bigcap_{i \in I} B_i\right) \subseteq \bigcup_{i \in I} (A_i \bigtriangleup B_i)$ If $x$ is in the set on the left then $x \in A_i$ for every $i \in I$ and $x \notin B_j$ for some $j \in I$, or $x \in B_i$ for every $i \in I$ and $x \notin A_j$ for some $j \in I$. In either case, $x \in A_j \bigtriangleup B_j$ for some $j \in I$.
In both parts, the proof is completed by noting that the common set on the right in the displayed equations is null: $\mu\left[\bigcup_{i \in I} (A_i \bigtriangleup B_i) \right] \le \sum_{i \in I} \mu(A_i \bigtriangleup B_i) = 0$
Equivalent sets have the same measure.
If $A, \, B \in \mathscr S$ and $A \equiv B$ then $\mu(A) = \mu(B)$.
Proof
Note again that $A = (A \cap B) \cup (A \setminus B)$. If $A \equiv B$ then $\mu(A) = \mu(A \cap B)$. By a symmetric argument, $\mu(B) = \mu(A \cap B)$.
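A tiny discrete sketch: with point masses, some of them zero, sets that differ only on zero-mass points are equivalent and so receive the same measure. The masses below are arbitrary.

```python
# A sketch of set equivalence for a discrete measure (illustration only).
# The points 1 and 3 carry zero mass, so sets that differ only there are
# equivalent; ^ is the symmetric difference operator for Python sets.
mu = {0: 1.0, 1: 0.0, 2: 2.0, 3: 0.0, 4: 0.5}

def meas(A):
    return sum(mu[x] for x in A)

A, B = {0, 1, 2}, {0, 2, 3}        # differ only on the null points 1 and 3
print(meas(A ^ B) == 0)            # True: A and B are equivalent
print(meas(A), meas(B))            # 3.0 3.0, equal as the result asserts
```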
The converse trivially fails, and a counterexample is given below. However, the collection of null sets and the collection of co-null sets do form equivalence classes.
Suppose that $A \in \mathscr S$.
1. If $\mu(A) = 0$ then $A \equiv B$ if and only if $\mu(B) = 0$.
2. If $\mu(A^c) = 0$ then $A \equiv B$ if and only if $\mu(B^c) = 0$.
Proof
1. Suppose that $\mu(A) = 0$ and $A \equiv B$. Then $\mu(B) = 0$ by the result above. Conversely, note that $A \setminus B \subseteq A$ and $B \setminus A \subseteq B$ so if $\mu(A) = \mu(B) = 0$ then $\mu(A \bigtriangleup B) = 0$ so $A \equiv B$.
2. Part (b) follows from part (a) and the result above on complements.
We can extend the notion of equivalence to measurable functions with a common range space. Thus suppose that $(T, \mathscr T)$ is another measurable space. If $f, \, g: S \to T$ are measurable, then $(f, g): S \to T \times T$ is measurable with respect to the usual product $\sigma$-algebra $\mathscr T \otimes \mathscr T$. We assume that the diagonal set $D = \{(y, y): y \in T\} \in \mathscr T \otimes \mathscr T$, which is almost always true in applications.
Measurable functions $f, \, g: S \to T$ are equivalent if $\mu\{x \in S: f(x) \ne g(x)\} = 0$. Again we write $f \equiv g$.
Details
Note that $\{x \in S: f(x) \ne g(x)\} = \{x \in S: (f(x), g(x)) \in D\}^c \in \mathscr S$ by our assumption, so the definition makes sense.
In the terminology discussed earlier, $f \equiv g$ means that $f(x) = g(x)$ almost everywhere on $S$. As with measurable sets, the relation $\equiv$ really does define an equivalence relation on the collection of measurable functions from $S$ to $T$. Thus, the collection of such functions is partitioned into disjoint classes of mutually equivalent functions.
The relation $\equiv$ is an equivalence relation on the collection of measurable functions from $S$ to $T$. That is, for measurable $f, \, g, \, h: S \to T$,
1. $f \equiv f$ (the reflexive property).
2. If $f \equiv g$ then $g \equiv f$ (the symmetric property).
3. If $f \equiv g$ and $g \equiv h$ then $f \equiv h$ (the transitive property).
Proof
Parts (a) and (b) are trivial. For (c) note that $f(x) = g(x)$ and $g(x) = h(x)$ implies $f(x) = h(x)$ for $x \in S$. Negating this statement gives $f(x) \ne h(x)$ implies $f(x) \ne g(x)$ or $g(x) \ne h(x)$. So $\{x \in S: f(x) \ne h(x)\} \subseteq \{x \in S: f(x) \ne g(x)\} \cup \{ x \in S: g(x) \ne h(x)\}$ Since $f \equiv g$ and $g \equiv h$, the two sets on the right have measure 0. Hence, so does the set on the left.
Suppose again that $f, \, g: S \to T$ are measurable and that $f \equiv g$. Then $f^{-1}(B) \equiv g^{-1}(B)$ for every $B \in \mathscr T$.
Proof
Note that $f^{-1}(B) \bigtriangleup g^{-1}(B) \subseteq \{x \in S: f(x) \ne g(x)\}$.
Thus if $f, \, g: S \to T$ are measurable and $f \equiv g$, then by the previous result, $\nu_f = \nu_g$ where $\nu_f, \, \nu_g$ are the measures on $(T, \mathscr T)$ associated with $f$ and $g$, as above. Again, the converse fails with a passion.
It often happens that a definition for functions subsumes the corresponding definition for sets, by considering the indicator functions of the sets. So it is with equivalence. In the following result, we can take $T = \{0, 1\}$ with $\mathscr T$ the collection of all subsets.
Suppose that $A, \, B \in \mathscr S$. Then $A \equiv B$ if and only if $\bs{1}_A \equiv \bs{1}_B$.
Proof
Note that $\left\{x \in S: \bs{1}_A(x) \ne \bs{1}_B(x) \right\} = A \bigtriangleup B$.
Equivalence is preserved under composition. For the next result, suppose that $(U, \mathscr U)$ is yet another measurable space.
Suppose that $f, \, g: S \to T$ are measurable and that $h: T \to U$ is measurable. If $f \equiv g$ then $h \circ f \equiv h \circ g$.
Proof
Note that $\{x \in S: h[f(x)] \ne h[g(x)]\} \subseteq \{x \in S: f(x) \ne g(x)\}$.
Suppose again that $(S, \mathscr S, \mu)$ is a measure space. Let $\mathscr V$ denote the collection of all measurable real-valued functions from $S$ into $\R$. (As usual, $\R$ is given the Borel $\sigma$-algebra.) From our previous discussion of measure theory, we know that with the usual definitions of addition and scalar multiplication, $(\mathscr V, +, \cdot)$ is a vector space. However, in measure theory, we often do not want to distinguish between functions that are equivalent, so it's nice to know that the vector space structure is preserved when we identify equivalent functions. Formally, let $[f]$ denote the equivalence class generated by $f \in \mathscr V$, and let $\mathscr W$ denote the collection of all such equivalence classes. In quotient notation, $\mathscr W = \mathscr V \big/ \equiv$. We define addition and scalar multiplication on $\mathscr W$ by $[f] + [g] = [f + g], \; c [f] = [c f]; \quad f, \, g \in \mathscr V, \; c \in \R$
$(\mathscr W, +, \cdot)$ is a vector space.
Proof
All that we have to show is that addition and scalar multiplication are well defined. That is, we must show that the definitions do not depend on the particular representatives of the equivalence classes. Then the other properties that define a vector space are inherited from $(\mathscr V, +, \cdot)$. Thus we must show that if $f_1 \equiv f_2$ and $g_1 \equiv g_2$, and if $c \in \R$, then $f_1 + g_1 \equiv f_2 + g_2$ and $c f_1 \equiv c f_2$. For the first problem, note that $(f_1, g_1)$ and $(f_2, g_2)$ are measurable functions from $S$ to $\R^2$. ($\R^2$ is given the product $\sigma$-algebra, which also happens to be the Borel $\sigma$-algebra corresponding to the standard Euclidean topology.) Moreover, $(f_1, g_1) \equiv (f_2, g_2)$ since $\{x \in S: (f_1(x), g_1(x)) \ne (f_2(x), g_2(x))\} = \{x \in S: f_1(x) \ne f_2(x)\} \cup \{x \in S: g_1(x) \ne g_2(x)\}$ But the function $(a, b) \mapsto a + b$ from $\R^2$ into $\R$ is measurable and hence from the composition property, it follows that $f_1 + g_1 \equiv f_2 + g_2$. The second problem is easier. The function $a \mapsto c a$ from $\R$ into $\R$ is measurable so again it follows from the composition property that $c f_1 \equiv c f_2$.
Often we don't bother to use the special notation for the equivalence class associated with a function. Rather, it's understood that equivalent functions represent the same object. Spaces of functions in a measure space are studied further in the chapter on Distributions.
Completion
Suppose that $(S, \mathscr S, \mu)$ is a measure space and let $\mathscr N = \{A \in \mathscr S: \mu(A) = 0\}$ denote the collection of null sets of the space. If $A \in \mathscr N$ and $B \in \mathscr S$ is a subset of $A$, then we know that $\mu(B) = 0$ so $B \in \mathscr N$ also. However, in general there might be subsets of $A$ that are not in $\mathscr S$. This leads naturally to the following definition.
The measure space $(S, \mathscr S, \mu)$ is complete if $A \in \mathscr N$ and $B \subseteq A$ imply $B \in \mathscr S$ (and hence $B \in \mathscr N$).
Our goal in this discussion is to show that if $(S, \mathscr S, \mu)$ is a $\sigma$-finite measure space that is not complete, then it can be completed. That is, $\mu$ can be extended to a $\sigma$-algebra that includes all of the sets in $\mathscr S$ and all subsets of null sets. The first step is to extend the equivalence relation defined in our previous discussion to $\mathscr P(S)$.
For $A, \, B \subseteq S$, define $A \equiv B$ if and only if there exists $N \in \mathscr N$ such that $A \bigtriangleup B \subseteq N$. The relation $\equiv$ is an equivalence relation on $\mathscr{P}(S)$: For $A, \, B, \, C \subseteq S$,
1. $A \equiv A$ (the reflexive property).
2. If $A \equiv B$ then $B \equiv A$ (the symmetric property).
3. If $A \equiv B$ and $B \equiv C$ then $A \equiv C$ (the transitive property).
Proof
1. Note that $A \bigtriangleup A = \emptyset$ and $\emptyset \in \mathscr N$.
2. Suppose that $A \bigtriangleup B \subseteq N$ where $N \in \mathscr N$. Then $B \bigtriangleup A = A \bigtriangleup B \subseteq N$.
3. Suppose that $A \bigtriangleup B \subseteq N_1$ and $B \bigtriangleup C \subseteq N_2$ where $N_1, \; N_2 \in \mathscr N$. Then $A \bigtriangleup C \subseteq (A \bigtriangleup B) \cup (B \bigtriangleup C) \subseteq N_1 \cup N_2$, and $N_1 \cup N_2 \in \mathscr N$.
So the equivalence relation $\equiv$ partitions $\mathscr P(S)$ into mutually disjoint equivalence classes. Two sets in an equivalence class differ by a subset of a null set. In particular, $A \equiv \emptyset$ if and only if $A \subseteq N$ for some $N \in \mathscr N$. The extended relation $\equiv$ is preserved under the set operations, just as before. Our next step is to enlarge the $\sigma$-algebra $\mathscr S$ by adding any set that is equivalent to a set in $\mathscr S$.
Let $\mathscr S_0 = \{A \subseteq S: A \equiv B \text{ for some } B \in \mathscr S \}$. Then $\mathscr S_0$ is a $\sigma$-algebra of subsets of $S$, and in fact is the $\sigma$-algebra generated by $\mathscr S \cup \{A \subseteq S: A \equiv \emptyset\}$.
Proof
Note that if $A \in \mathscr S$ then $A \equiv A$ so $A \in \mathscr S_0$. In particular, $S \in \mathscr S_0$. Also, $\emptyset \in \mathscr S$ so if $A \equiv \emptyset$ then $A \in \mathscr S_0$. Suppose that $A \in \mathscr S_0$ so that $A \equiv B$ for some $B \in \mathscr S$. Then $B^c \in \mathscr S$ and $A^c \equiv B^c$ so $A^c \in \mathscr S_0$. Next suppose that $A_i \in \mathscr S_0$ for $i$ in a countable index set $I$. Then for each $i \in I$ there exists $B_i \in \mathscr S$ such that $A_i \equiv B_i$. But then $\bigcup_{i \in I} B_i \in \mathscr S$ and $\bigcup_{i \in I} A_i \equiv \bigcup_{i \in I} B_i$, so $\bigcup_{i \in I} A_i \in \mathscr S_0$. Therefore $\mathscr S_0$ is a $\sigma$-algebra of subsets of $S$. Finally, suppose that $\mathscr T$ is a $\sigma$-algebra of subsets of $S$ and that $\mathscr S \cup \{A \subseteq S: A \equiv \emptyset\} \subseteq \mathscr T$. We need to show that $\mathscr S_0 \subseteq \mathscr T$. Thus, suppose that $A \in \mathscr S_0$. Then there exists $B \in \mathscr S$ such that $A \equiv B$. But $B \in \mathscr T$ and $A \bigtriangleup B \in \mathscr T$ so $A \cap B = B \setminus (A \bigtriangleup B) \in \mathscr T$. Also $A \setminus B \in \mathscr T$, so $A = (A \cap B) \cup (A \setminus B) \in \mathscr T$.
Our last step is to extend $\mu$ to a positive measure on the enlarged $\sigma$-algebra $\mathscr S_0$.
Suppose that $A \in \mathscr S_0$ so that $A \equiv B$ for some $B \in \mathscr S$. Define $\mu_0(A) = \mu(B)$. Then
1. $\mu_0$ is well defined.
2. $\mu_0(A) = \mu(A)$ for $A \in \mathscr S$.
3. $\mu_0$ is a positive measure on $\mathscr S_0$.
The measure space $(S, \mathscr S_0, \mu_0)$ is complete and is known as the completion of $(S, \mathscr S, \mu)$.
Proof
1. Suppose that $A \in \mathscr S_0$ and that $A \equiv B_1$ and $A \equiv B_2$ where $B_1, \, B_2 \in \mathscr S$. Then $B_1 \equiv B_2$ so by the result above $\mu(B_1) = \mu(B_2)$. Thus, $\mu_0$ is well-defined.
2. Next, if $A \in \mathscr S$ then of course $A \equiv A$ so $\mu_0(A) = \mu(A)$.
3. Trivially $\mu_0(A) \ge 0$ for $A \in \mathscr S_0$. Thus we just need to show the countable additivity property. To understand the proof you need to keep several facts in mind: the functions $\mu$ and $\mu_0$ agree on $\mathscr S$ (property (b)); equivalence is preserved under set operations; equivalent sets have the same value under $\mu_0$ (property (a)). Since the measure space $(S, \mathscr S, \mu)$ is $\sigma$-finite, there exists a countable disjoint collection $\{C_i: i \in I\}$ of sets in $\mathscr S$ such that $S = \bigcup_{i \in I} C_i$ and $\mu(C_i) \lt \infty$ for each $i \in I$. Suppose first that $A \in \mathscr S_0$, so that there exists $B \in \mathscr S$ with $A \equiv B$. Then $\mu_0(A) = \mu_0\left[\bigcup_{i \in I} (A \cap C_i)\right] = \mu\left[\bigcup_{i \in I} (B \cap C_i)\right] = \sum_{i \in I} \mu(B \cap C_i) = \sum_{i \in I} \mu_0(A \cap C_i)$ Suppose next that $(A_1, A_2, \ldots)$ is a sequence of pairwise disjoint sets in $\mathscr S_0$ so that there exists a sequence $(B_1, B_2, \ldots)$ of sets in $\mathscr S$ such that $A_i \equiv B_i$ for each $i \in \N_+$. For fixed $i \in I$, $\mu_0\left[\bigcup_{n=1}^\infty (A_n \cap C_i)\right] = \mu_0\left[\bigcup_{n=1}^\infty (B_n \cap C_i)\right] = \mu\left[\bigcup_{n=1}^\infty (B_n \cap C_i)\right] = \sum_{n=1}^\infty \mu(B_n \cap C_i) = \sum_{n=1}^\infty \mu_0(A_n \cap C_i)$ The next-to-last equality uses the corollary of the inclusion-exclusion law above, since we don't know (and it's probably not true) that the sequence $(B_1, B_2, \ldots)$ is disjoint. The use of that result is why we need $(S, \mathscr S, \mu)$ to be $\sigma$-finite. Finally, using the previous displayed equations, \begin{align*} \mu_0\left(\bigcup_{n=1}^\infty A_n\right) & = \sum_{i \in I} \mu_0\left[\left(\bigcup_{n=1}^\infty A_n\right) \cap C_i\right] = \sum_{i \in I} \mu_0\left[\bigcup_{n=1}^\infty (A_n \cap C_i) \right] \\ & = \sum_{i \in I} \sum_{n=1}^\infty \mu_0(A_n \cap C_i) = \sum_{n=1}^\infty \sum_{i \in I} \mu_0(A_n \cap C_i) = \sum_{n=1}^\infty \mu_0(A_n) \end{align*}
Examples and Exercises
As always, be sure to try the computational exercises and proofs yourself before reading the answers and proofs in the text. Recall that a discrete measure space consists of a countable set, with the $\sigma$-algebra of all subsets, and with counting measure $\#$.
Counterexamples
The continuity theorem for decreasing events can fail if the events do not have finite measure.
Consider $\Z$ with counting measure $\#$ on the $\sigma$-algebra of all subsets. Let $A_n = \{ z \in \Z: z \le -n\}$ for $n \in \N_+$. The continuity theorem fails for $(A_1, A_2, \ldots)$.
Proof
The sequence is decreasing and $\#(A_n) = \infty$ for each $n$, but $\# \left(\bigcap_{i=1}^\infty A_i\right) = \#(\emptyset) = 0$.
Equal measure certainly does not imply equivalent sets.
Suppose that $(S, \mathscr S, \mu)$ is a measure space with the property that there exist disjoint sets $A, \, B \in \mathscr S$ such that $\mu(A) = \mu(B) \gt 0$. Then $A$ and $B$ are not equivalent.
Proof
Note that $A \bigtriangleup B = A \cup B$ and $\mu(A \cup B) \gt 0$.
For a concrete example, we could take $S = \{0, 1\}$ with counting measure $\#$ on the $\sigma$-algebra of all subsets, and $A = \{0\}$, $B = \{1\}$.
The $\sigma$-finite property is not necessarily inherited by a sub-measure space. To set the stage for the counterexample, let $\mathscr R$ denote the Borel $\sigma$-algebra of $\R$, that is, the $\sigma$-algebra generated by the standard Euclidean topology. There exists a positive measure $\lambda$ on $(\R, \mathscr R)$ that generalizes length. The measure $\lambda$, known as Lebesgue measure, is constructed in the section on Existence. Next let $\mathscr C$ denote the $\sigma$-algebra of countable and co-countable sets: $\mathscr C = \{A \subseteq \R: A \text{ is countable or } A^c \text{ is countable}\}$ That $\mathscr C$ is a $\sigma$-algebra was shown in the section on measure theory in the chapter on foundations.
$(\R, \mathscr C)$ is a subspace of $(\R, \mathscr R)$. Moreover, $(\R, \mathscr R, \lambda)$ is $\sigma$-finite but $(\R, \mathscr C, \lambda)$ is not.
Proof
If $x \in \R$, then the singleton $\{x\}$ is closed and hence is in $\mathscr R$. A countable set is a countable union of singletons, so if $A$ is countable then $A \in \mathscr R$. It follows that $\mathscr C \subset \mathscr R$. Next, let $I_n$ denote the interval $[n, n + 1)$ for $n \in \Z$. Then $\lambda(I_n) = 1$ for $n \in \Z$ and $\R = \bigcup_{n \in \Z} I_n$, so $(\R, \mathscr R, \lambda)$ is $\sigma$-finite. On the other hand, $\lambda\{x\} = 0$ for $x \in \R$ (since the set is an interval of length 0). Therefore $\lambda(A) = 0$ if $A$ is countable and $\lambda(A) = \infty$ if $A^c$ is countable. It follows that $\R$ cannot be written as a countable union of sets in $\mathscr C$, each with finite measure.
A sum of finite measures may not be $\sigma$-finite.
Let $S$ be a nonempty, finite set with the $\sigma$-algebra $\mathscr S$ of all subsets. Let $\mu_n = \#$ be counting measure on $(S, \mathscr S)$ for $n \in \N_+$. Then $\mu_n$ is a finite measure for each $n \in \N_+$, but $\mu = \sum_{n \in \N_+} \mu_n$ is not $\sigma$-finite.
Proof
Note that $\mu$ is the trivial measure on $(S, \mathscr S)$ given by $\mu(A) = \infty$ if $A \ne \emptyset$ (and of course $\mu(\emptyset) = 0$).
Basic Properties
In the following problems, $\mu$ is a positive measure on the measurable space $(S, \mathscr S)$.
Suppose that $\mu(S) = 20$ and that $A, B \in \mathscr S$ with $\mu(A) = 5$, $\mu(B) = 6$, $\mu(A \cap B) = 2$. Find the measure of each of the following sets:
1. $A \setminus B$
2. $A \cup B$
3. $A^c \cup B^c$
4. $A^c \cap B^c$
5. $A \cup B^c$
Answer
1. 3
2. 9
3. 18
4. 11
5. 16
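The answers follow from the difference, complement, and inclusion-exclusion rules. As a quick arithmetic check, here is a Python sketch (the variable names are our own):

```python
# Given data for the exercise above
mu_S, mu_A, mu_B, mu_AB = 20, 5, 6, 2          # mu_AB = mu(A ∩ B)

print(mu_A - mu_AB)                  # mu(A \ B) = 3, difference rule
print(mu_A + mu_B - mu_AB)           # mu(A ∪ B) = 9, inclusion-exclusion
print(mu_S - mu_AB)                  # mu(A^c ∪ B^c) = 18, De Morgan, complement rule
print(mu_S - (mu_A + mu_B - mu_AB))  # mu(A^c ∩ B^c) = 11, De Morgan, complement rule
print(mu_S - (mu_B - mu_AB))         # mu(A ∪ B^c) = 16, complement of B \ A
```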
Suppose that $\mu(S) = \infty$ and that $A, \, B \in \mathscr S$ with $\mu(A \setminus B) = 2$, $\mu(B \setminus A) = 3$, and $\mu(A \cap B) = 4$. Find the measure of each of the following sets:
1. $A$
2. $B$
3. $A \cup B$
4. $A^c \cap B^c$
5. $A^c \cup B^c$
Answer
1. 6
2. 7
3. 9
4. $\infty$
5. $\infty$
Suppose that $\mu(S) = 10$ and that $A, \, B \in \mathscr S$ with $\mu(A) = 3$, $\mu(A \cup B) = 7$, and $\mu(A \cap B) = 2$. Find the measure of each of the following sets:
1. $B$
2. $A \setminus B$
3. $B \setminus A$
4. $A^c \cup B^c$
5. $A^c \cap B^c$
Answer
1. 6
2. 1
3. 4
4. 8
5. 3
Suppose that $A, \, B, \, C \in \mathscr S$ with $\mu(A) = 12$, $\mu(B) = 12$, $\mu(C) = 15$, $\mu(A \cap B) = 3$, $\mu(A \cap C) = 4$, $\mu(B \cap C) = 5$, and $\mu(A \cap B \cap C) = 1$. Find the measures of the various unions:
1. $A \cup B$
2. $A \cup C$
3. $B \cup C$
4. $A \cup B \cup C$
Answer
1. 21
2. 23
3. 22
4. 28
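The last exercise is a direct application of the inclusion-exclusion formula. The following Python sketch (the helper function and data layout are our own) evaluates the formula for an arbitrary finite union, given the measures of all nonempty intersections:

```python
from itertools import combinations

# Measures of the intersections, keyed by frozensets of indices,
# using the data of the exercise above
m = {
    frozenset({1}): 12, frozenset({2}): 12, frozenset({3}): 15,
    frozenset({1, 2}): 3, frozenset({1, 3}): 4, frozenset({2, 3}): 5,
    frozenset({1, 2, 3}): 1,
}

def union_measure(indices):
    """Inclusion-exclusion: the measure of the union of the indexed sets."""
    total = 0
    for k in range(1, len(indices) + 1):
        for J in combinations(indices, k):
            total += (-1) ** (k - 1) * m[frozenset(J)]
    return total

print(union_measure([1, 2]))      # 21
print(union_measure([1, 3]))      # 23
print(union_measure([2, 3]))      # 22
print(union_measure([1, 2, 3]))   # 28
```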
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\length}{\text{length}}$
Suppose that $S$ is a set and $\mathscr{S}$ a $\sigma$-algebra of subsets of $S$, so that $(S, \mathscr{S})$ is a measurable space. In many cases, it is impossible to define a positive measure $\mu$ on $\mathscr{S}$ explicitly, by giving a formula for computing $\mu(A)$ for each $A \in \mathscr{S}$. Rather, we often know how the measure $\mu$ should work on some class of sets $\mathscr{B}$ that generates $\mathscr{S}$. We would then like to know that $\mu$ can be extended to a positive measure on $\mathscr{S}$, and that this extension is unique. The purpose of this section is to discuss the basic results on this topic. To understand this section you will need to review the sections on Measure Theory and Special Set Structures in the chapter on Foundations, and the section on Measure Spaces in this chapter. If you are not interested in questions of existence and uniqueness of positive measures, you can safely skip this section.
Basic Theory
Positive Measures on Algebras
Suppose first that $\mathscr A$ is an algebra of subsets of $S$. Recall that this means that $\mathscr A$ is a collection of subsets that contains $S$ and is closed under complements and finite unions (and hence also finite intersections). Here is our first definition:
A positive measure on $\mathscr A$ is a function $\mu: \mathscr A \to [0, \infty]$ that satisfies the following properties:
1. $\mu(\emptyset) = 0$
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr A$ and if $\bigcup_{i \in I} A_i \in \mathscr A$ then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$
Clearly the definition of a positive measure on an algebra is very similar to the definition for a $\sigma$-algebra. If the collection of sets in (b) is finite, then $\bigcup_{i \in I} A_i$ must be in the algebra $\mathscr A$. Thus, $\mu$ is finitely additive. If the collection is countably infinite, then there is no guarantee that the union is in $\mathscr A$. If it is however, then $\mu$ must be additive over this collection. Given the similarity, it is not surprising that $\mu$ shares many of the basic properties of a positive measure on a $\sigma$-algebra, with proofs that are almost identical.
If $A, \, B \in \mathscr A$, then $\mu(B) = \mu(A \cap B) + \mu(B \setminus A)$.
Proof
Note that $B = (A \cap B) \cup (B \setminus A)$, and the sets in the union are in the algebra $\mathscr A$ and are disjoint.
If $A, \, B \in \mathscr A$ and $A \subseteq B$ then
1. $\mu(B) = \mu(A) + \mu(B \setminus A)$
2. $\mu(A) \le \mu(B)$
Proof
Part (a) follows from the previous theorem, since $A \cap B = A$. Part (b) follows from part (a).
Thus $\mu$ is increasing, relative to the subset partial order $\subseteq$ on $\mathscr A$ and the ordinary order $\le$ on $[0, \infty]$. Note also that if $A, \, B \in \mathscr A$ and $\mu(B) \lt \infty$ then $\mu(B \setminus A) = \mu(B) - \mu(A \cap B)$. In the special case that $A \subseteq B$, this becomes $\mu(B \setminus A) = \mu(B) - \mu(A)$. If $\mu(S) \lt \infty$ then $\mu(A^c) = \mu(S) - \mu(A)$. These are the familiar difference and complement rules.
The following result is the subadditive property for a positive measure $\mu$ on an algebra $\mathscr A$.
Suppose that $\{A_i: i \in I \}$ is a countable collection of sets in $\mathscr A$ and that $\bigcup_{i \in I} A_i \in \mathscr A$. Then $\mu\left(\bigcup_{i \in I} A_i \right) \le \sum_{i \in I} \mu(A_i)$
Proof
The proof is just like before. Assume that $I = \N_+$. Let $B_1 = A_1$ and $B_i = A_i \setminus (A_1 \cup \ldots \cup A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Then $\{B_i: i \in I\}$ is a disjoint collection of sets in $\mathscr A$ with the same union as $\{A_i: i \in I\}$. Also $B_i \subseteq A_i$ for each $i$ so $\mu(B_i) \le \mu(A_i)$. Hence if the union is in $\mathscr A$ then $\mu\left(\bigcup_{i \in I} A_i \right) = \mu\left(\bigcup_{i \in I} B_i \right) = \sum_{i \in I} \mu(B_i) \le \sum_{i \in I} \mu(A_i)$
For a finite union of sets with finite measure, the inclusion-exclusion formula holds, and the proof is just like the one for a probability measure.
Suppose that $\{A_i: i \in I\}$ is a finite collection of sets in $\mathscr A$ where $\#(I) = n \in \N_+$, and that $\mu(A_i) \lt \infty$ for $i \in I$. Then $\mu \left( \bigcup_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \mu \left( \bigcap_{j \in J} A_j \right)$
The continuity theorems hold for a positive measure $\mu$ on an algebra $\mathscr A$, just as for a positive measure on a $\sigma$-algebra, assuming that the appropriate union and intersection are in the algebra. The proofs are just as before.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of sets in $\mathscr A$.
1. If the sequence is increasing, so that $A_n \subseteq A_{n+1}$ for each $n \in \N_+$, and $\bigcup_{i = 1}^\infty A_i \in \mathscr A$, then $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \mu(A_n)$.
2. If the sequence is decreasing, so that $A_{n+1} \subseteq A_n$ for each $n \in \N_+$, and $\mu(A_1) \lt \infty$ and $\bigcap_{i=1}^\infty A_i \in \mathscr A$, then $\mu\left(\bigcap_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \mu(A_n)$.
Proof
1. Note that if $\mu(A_k) = \infty$ for some $k$ then $\mu(A_n) = \infty$ for $n \ge k$ and $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \infty$ if this union is in $\mathscr A$. Thus, suppose that $\mu(A_i) \lt \infty$ for each $i$. Let $B_1 = A_1$ and $B_i = A_i \setminus A_{i-1}$ for $i \in \{2, 3, \ldots\}$. Then $(B_1, B_2, \ldots)$ is a disjoint sequence in $\mathscr A$ with the same union as $(A_1, A_2, \ldots)$. Also, $\mu(B_1) = \mu(A_1)$ and $\mu(B_i) = \mu(A_i) - \mu(A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Hence if the union is in $\mathscr A$, $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \mu \left(\bigcup_{i=1}^\infty B_i \right) = \sum_{i=1}^\infty \mu(B_i) = \lim_{n \to \infty} \sum_{i=1}^n \mu(B_i)$ But $\sum_{i=1}^n \mu(B_i) = \mu(A_1) + \sum_{i=2}^n [\mu(A_i) - \mu(A_{i-1})] = \mu(A_n)$.
2. Note that $A_1 \setminus A_n \in \mathscr A$ and this sequence is increasing. Moreover, $\bigcup_{n=1}^\infty (A_1 \setminus A_n) = \left(\bigcap_{n=1}^\infty A_n \right)^c \cap A_1$. Hence if $\bigcap_{n=1}^\infty A_n \in \mathscr A$ then $\bigcup_{n=1}^\infty (A_1 \setminus A_n) \in \mathscr A$. Thus using the continuity result for increasing sets, \begin{align} \mu \left(\bigcap_{i=1}^\infty A_i \right) & = \mu\left[A_1 \setminus \bigcup_{i=1}^\infty (A_1 \setminus A_i) \right] = \mu(A_1) - \mu\left[\bigcup_{i=1}^\infty (A_1 \setminus A_i)\right] \\ & = \mu(A_1) - \lim_{n \to \infty} \mu(A_1 \setminus A_n) = \mu(A_1) - \lim_{n \to \infty} [\mu(A_1) - \mu(A_n)] = \lim_{n \to \infty} \mu(A_n) \end{align}
Recall that if the sequence $(A_1, A_2, \ldots)$ is increasing, then we define $\lim_{n \to \infty} A_n = \bigcup_{n=1}^\infty A_n$, and if the sequence is decreasing then we define $\lim_{n \to \infty} A_n = \bigcap_{n=1}^\infty A_n$. Thus the conclusion of both parts of the continuity theorem is $\mu\left(\lim_{n \to \infty} A_n\right) = \lim_{n \to \infty} \mu(A_n)$ Finite additivity and continuity for increasing sets imply countable additivity:
If $\mu: \mathscr A \to [0, \infty]$ satisfies the properties below then $\mu$ is a positive measure on $\mathscr A$.
1. $\mu(\emptyset) = 0$
2. $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$ if $\{A_i: i \in I\}$ is a finite disjoint collection of sets in $\mathscr A$
3. $\mu\left(\bigcup_{i=1}^\infty A_i \right) = \lim_{n \to \infty} \mu(A_n)$ if $(A_1, A_2, \ldots)$ is an increasing sequence of sets in $\mathscr A$ and $\bigcup_{i=1}^\infty A_i \in \mathscr A$.
Proof
All that is left to prove is additivity over a countably infinite collection of sets in $\mathscr A$ when the union is also in $\mathscr A$. Thus suppose that $\{A_n: n \in \N_+\}$ is a disjoint collection of sets in $\mathscr A$ with $\bigcup_{n=1}^\infty A_n \in \mathscr A$. Let $B_n = \bigcup_{i=1}^n A_i$ for $n \in \N_+$. Then $B_n \in \mathscr A$ and $\bigcup_{n=1}^\infty B_n = \bigcup_{n=1}^\infty A_n$. Hence using the finite additivity and the continuity property we have $\mu\left(\bigcup_{n = 1}^\infty A_n\right) = \mu\left(\bigcup_{n=1}^\infty B_n\right) = \lim_{n \to \infty} \mu(B_n) = \lim_{n \to \infty} \sum_{i=1}^n \mu(A_i) = \sum_{i=1}^\infty \mu(A_i)$
Many of the basic theorems in measure theory require that the measure not be too far removed from being finite. This leads to the following definition, which is just like the one for a positive measure on a $\sigma$-algebra.
A measure $\mu$ on an algebra $\mathscr A$ of subsets of $S$ is $\sigma$-finite if there exists a sequence of sets $(A_1, A_2, \ldots)$ in $\mathscr A$ such that $\bigcup_{n=1}^\infty A_n = S$ and $\mu(A_n) \lt \infty$ for each $n \in \N_+$. The sequence is called a $\sigma$-finite sequence for $\mu$.
Suppose that $\mu$ is a $\sigma$-finite measure on an algebra $\mathscr A$ of subsets of $S$.
1. There exists an increasing $\sigma$-finite sequence.
2. There exists a disjoint $\sigma$-finite sequence.
Proof
We use the same tricks that we have used before. Suppose that $(A_1, A_2, \ldots)$ is a $\sigma$-finite sequence for $\mu$.
1. Let $B_n = \bigcup_{i = 1}^n A_i$. Then $B_n \in \mathscr A$ for $n \in \N_+$ and this sequence is increasing. Moreover, $\mu(B_n) \le \sum_{i=1}^n \mu(A_i) \lt \infty$ for $n \in \N_+$ and $\bigcup_{n=1}^\infty B_n = \bigcup_{n=1}^\infty A_n = S$.
2. Let $C_1 = A_1$ and let $C_n = A_n \setminus \bigcup_{i=1}^{n-1} A_i$ for $n \in \{2, 3, \ldots\}$. Then $C_n \in \mathscr A$ for each $n \in \N_+$ and this sequence is disjoint. Moreover, $C_n \subseteq A_n$ so $\mu(C_n) \le \mu(A_n) \lt \infty$ and $\bigcup_{n=1}^\infty C_n = \bigcup_{n=1}^\infty A_n = S$.
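The two constructions in this proof are the standard tricks that appear repeatedly in this chapter, and they are simple enough to state as an algorithm. Here is a minimal Python sketch (the function name and toy sequence are ours, for illustration only):

```python
def increasing_and_disjoint(A):
    """Given a list of sets A[0], A[1], ..., return (B, C) where
    B[n] is the union of A[0], ..., A[n] (an increasing sequence) and
    C[n] is A[n] minus the union of the earlier sets (a disjoint sequence);
    both sequences have the same union as the original."""
    B, C, acc = [], [], set()
    for An in A:
        C.append(An - acc)   # new points contributed by A[n]
        acc = acc | An
        B.append(set(acc))   # running union so far
    return B, C

A = [{0, 1}, {1, 2, 3}, {0, 4}]
B, C = increasing_and_disjoint(A)
print(B)  # [{0, 1}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}]
print(C)  # [{0, 1}, {2, 3}, {4}]
```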
Extension and Uniqueness Theorems
The fundamental theorem on measures states that a positive, $\sigma$-finite measure $\mu$ on an algebra $\mathscr A$ can be uniquely extended to $\sigma(\mathscr A)$. The extension part is sometimes referred to as the Carathéodory extension theorem, and is named for the Greek mathematician Constantin Carathéodory.
If $\mu$ is a positive, $\sigma$-finite measure on an algebra $\mathscr A$, then $\mu$ can be extended to a positive measure on $\mathscr{S} = \sigma(\mathscr A)$.
Proof
The proof is complicated, but here is a broad outline. First, for $A \subseteq S$, we define a cover of $A$ to be a countable collection $\{A_i: i \in I\}$ of sets in $\mathscr A$ such that $A \subseteq \bigcup_{i \in I} A_i$. Next, we define a new set function $\mu^*$, the outer measure, on all subsets of $S$: $\mu^*(A) = \inf \left\{ \sum_{i \in I} \mu(A_i): \{A_i: i \in I\} \text{ is a cover of } A \right\}, \quad A \subseteq S$ Outer measure satisfies the following properties.
1. $\mu^*(A) \ge 0$ for $A \subseteq S$, so $\mu^*$ is nonnegative.
2. $\mu^*(A) = \mu(A)$ for $A \in \mathscr A$, so $\mu^*$ extends $\mu$.
3. If $A \subseteq B$ then $\mu^*(A) \le \mu^*(B)$, so $\mu^*$ is increasing.
4. If $A_i \subseteq S$ for each $i$ in a countable index set $I$ then $\mu^*\left(\bigcup_{i \in I} A_i\right) \le \sum_{i \in I} \mu^*(A_i)$, so $\mu^*$ is countably subadditive.
Next, $A \subseteq S$ is said to be measurable if $\mu^*(B) = \mu^*(B \cap A) + \mu^*(B \setminus A), \quad B \subseteq S$ Thus, $A$ is measurable if $\mu^*$ is additive with respect to the partition of $B$ induced by $\{A, A^c\}$, for every $B \subseteq S$. We let $\mathscr{M}$ denote the collection of measurable subsets of $S$. The proof is finished by showing that $\mathscr A \subseteq \mathscr{M}$, $\mathscr{M}$ is a $\sigma$-algebra of subsets of $S$, and $\mu^*$ is a positive measure on $\mathscr{M}$. It follows that $\sigma(\mathscr A) = \mathscr{S} \subseteq \mathscr{M}$ and hence $\mu^*$ is a measure on $\mathscr{S}$ that extends $\mu$.
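Computing the infimum over all covers is intractable in general, but for special sets good covers are easy to exhibit. For instance, the middle-thirds Cantor set is covered at stage $n$ by $2^n$ intervals of length $3^{-n}$, so its (Lebesgue) outer measure is at most $(2/3)^n$ for every $n$, and hence is 0. The following Python sketch of these covers is a numerical illustration only, not part of the construction above:

```python
from fractions import Fraction

def cantor_cover(n):
    """Stage-n cover of the middle-thirds Cantor set: 2**n intervals
    of length 3**(-n), obtained by repeatedly removing middle thirds."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        new = []
        for a, b in intervals:
            third = (b - a) / 3
            new.append((a, a + third))        # left third
            new.append((b - third, b))        # right third
        intervals = new
    return intervals

for n in range(1, 6):
    cover = cantor_cover(n)
    total = sum(b - a for a, b in cover)
    print(n, len(cover), total)   # total length (2/3)**n, decreasing to 0
```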
Our next goal is the basic uniqueness result, which serves as the complement to the basic extension result. But first we need another variation of the term $\sigma$-finite.
Suppose that $\mu$ is a measure on a $\sigma$-algebra $\mathscr{S}$ of subsets of $S$ and $\mathscr{B} \subseteq \mathscr{S}$. Then $\mu$ is $\sigma$-finite on $\mathscr{B}$ if there exists a countable collection $\{B_i: i \in I\} \subseteq \mathscr{B}$ such that $\mu(B_i) \lt \infty$ for $i \in I$ and $\bigcup_{i \in I} B_i = S$.
The next result is the uniqueness theorem. The proof, like others that we have seen, uses Dynkin's $\pi$-$\lambda$ theorem, named for Eugene Dynkin.
Suppose that $\mathscr{B}$ is a $\pi$-system and that $\mathscr{S} = \sigma(\mathscr{B})$. If $\mu_1$ and $\mu_2$ are positive measures on $\mathscr{S}$ and are $\sigma$-finite on $\mathscr{B}$, and if $\mu_1(A) = \mu_2(A)$ for all $A \in \mathscr{B}$, then $\mu_1(A) = \mu_2(A)$ for all $A \in \mathscr{S}$.
Proof
Suppose that $B \in \mathscr{B}$ and that $\mu_1(B) = \mu_2(B) \lt \infty$. Let $\mathscr{L}_B = \{A \in \mathscr{S}: \mu_1(A \cap B) = \mu_2(A \cap B) \}$. Then $S \in \mathscr{L}_B$ since $\mu_1(B) = \mu_2(B)$. If $A \in \mathscr{L}_B$ then $\mu_1(A \cap B) = \mu_2(A \cap B)$ so $\mu_1(A^c \cap B) = \mu_1(B) - \mu_1(A \cap B) = \mu_2(B) - \mu_2(A \cap B) = \mu_2(A^c \cap B)$ and hence $A^c \in \mathscr{L}_B$. Finally, suppose that $\{A_j: j \in J\}$ is a countable, disjoint collection of sets in $\mathscr{L}_B$. Then $\mu_1(A_j \cap B) = \mu_2(A_j \cap B)$ for each $j \in J$ and hence \begin{align} \mu_1\left[ \left(\bigcup_{j \in J} A_j \right) \cap B \right] & = \mu_1 \left(\bigcup_{j \in J} (A_j \cap B) \right) = \sum_{j \in J} \mu_1(A_j \cap B) \\ & = \sum_{j \in J} \mu_2(A_j \cap B) = \mu_2\left(\bigcup_{j \in J} (A_j \cap B) \right) = \mu_2 \left[ \left(\bigcup_{j \in J} A_j \right) \cap B \right] \end{align} Therefore $\bigcup_{j \in J} A_j \in \mathscr{L}_B$, and so $\mathscr{L}_B$ is a $\lambda$-system. By assumption, $\mathscr{B} \subseteq \mathscr{L}_B$ and therefore by the $\pi$-$\lambda$ theorem, $\mathscr{S} = \sigma(\mathscr{B}) \subseteq \mathscr{L}_B$.
Next, by assumption there exists $B_i \in \mathscr{B}$ with $\mu_1(B_i) = \mu_2(B_i) \lt \infty$ for each $i \in \N_+$ and $S = \bigcup_{i=1}^\infty B_i$. If $A \in \mathscr{S}$ then the inclusion-exclusion rule can be applied to $\mu_k\left[\left(\bigcup_{i=1}^n B_i\right) \cap A \right] = \mu_k\left[\bigcup_{i=1}^n (A \cap B_i) \right]$ where $k \in \{1, 2\}$ and $n \in \N_+$. But the inclusion-exclusion formula only has terms of the form $\mu_k \left[ \bigcap_{j \in J} (A \cap B_j) \right] = \mu_k \left[ A \cap \left(\bigcap_{j \in J} B_j\right) \right]$ where $J \subseteq \{1, 2, \ldots, n\}$. But $\bigcap_{j \in J} B_j \in \mathscr{B}$ since $\mathscr{B}$ is a $\pi$-system, so by the previous paragraph, $\mu_1 \left[ \bigcap_{j \in J} (A \cap B_j) \right] = \mu_2 \left[ \bigcap_{j \in J} (A \cap B_j) \right]$. It then follows that for each $n \in \N_+$ $\mu_1\left[\left(\bigcup_{i=1}^n B_i\right) \cap A \right] = \mu_2\left[\left(\bigcup_{i=1}^n B_i\right) \cap A \right]$ Finally, letting $n \to \infty$ and using the continuity theorem for increasing sets gives $\mu_1(A) = \mu_2(A)$.
An algebra $\mathscr A$ of subsets of $S$ is trivially a $\pi$-system. Hence, if $\mu_1$ and $\mu_2$ are positive measures on $\mathscr{S} = \sigma(\mathscr A)$ and are $\sigma$-finite on $\mathscr A$, and if $\mu_1(A) = \mu_2(A)$ for $A \in \mathscr A$, then $\mu_1(A) = \mu_2(A)$ for $A \in \mathscr{S}$. This completes the second part of the fundamental theorem.
Of course, the results of this subsection hold for probability measures. Formally, a probability measure $\P$ on an algebra $\mathscr A$ of subsets of $S$ is a positive measure on $\mathscr A$ with the additional requirement that $\P(S) = 1$. Probability measures are trivially $\sigma$-finite, so a probability measure $\P$ on an algebra $\mathscr A$ can be uniquely extended to $\mathscr{S} = \sigma(\mathscr A)$.
However, usually we start with a collection that is more primitive than an algebra. The next result combines the definition with the main theorem associated with the definition. For a proof see the section on Special Set Structures in the chapter on Foundations.
Suppose that $\mathscr{B}$ is a nonempty collection of subsets of $S$ and let $\mathscr A = \left\{\bigcup_{i \in I} B_i: \{B_i: i \in I\} \text{ is a finite, disjoint collection of sets in } \mathscr{B}\right\}$ If the following conditions are satisfied, then $\mathscr{B}$ is a semi-algebra of subsets of $S$, and $\mathscr A$ is the algebra generated by $\mathscr{B}$.
1. If $B_1, \, B_2 \in \mathscr{B}$ then $B_1 \cap B_2 \in \mathscr{B}$.
2. If $B \in \mathscr{B}$ then $B^c \in \mathscr A$.
Suppose now that we know how a measure $\mu$ should work on a semi-algebra $\mathscr{B}$ that generates an algebra $\mathscr A$ and then a $\sigma$-algebra $\mathscr{S} = \sigma(\mathscr A) = \sigma(\mathscr{B})$. That is, we know $\mu(B) \in [0, \infty]$ for each $B \in \mathscr{B}$. Because of the additivity property, there is no question as to how we should extend $\mu$ to $\mathscr A$. We must have $\mu(A) = \sum_{i \in I} \mu(B_i)$ if $A = \bigcup_{i \in I} B_i$ for some finite, disjoint collection $\{B_i: i \in I\}$ of sets in $\mathscr{B}$ (so that $A \in \mathscr A$). However, we cannot assign the values $\mu(B)$ for $B \in \mathscr{B}$ arbitrarily. The following extension theorem states that, subject just to some essential consistency conditions, the extension of $\mu$ from the semi-algebra $\mathscr{B}$ to the algebra $\mathscr A$ does in fact produce a measure on $\mathscr A$. The consistency conditions are that $\mu$ be finitely additive and countably subadditive on $\mathscr{B}$.
Suppose that $\mathscr{B}$ is a semi-algebra of subsets of $S$ and that $\mathscr A$ is the algebra of subsets of $S$ generated by $\mathscr{B}$. A function $\mu: \mathscr{B} \to [0, \infty]$ can be uniquely extended to a measure on $\mathscr A$ if and only if $\mu$ satisfies the following properties:
1. If $\emptyset \in \mathscr{B}$ then $\mu(\emptyset) = 0$.
2. If $\{B_i: i \in I\}$ is a finite, disjoint collection of sets in $\mathscr{B}$ and $B = \bigcup_{i \in I} B_i \in \mathscr{B}$ then $\mu(B) = \sum_{i \in I} \mu(B_i)$.
3. If $B \in \mathscr{B}$ and $B \subseteq \bigcup_{i \in I} B_i$ where $\{B_i: i \in I\}$ is a countable collection of sets in $\mathscr{B}$ then $\mu(B) \le \sum_{i \in I} \mu(B_i)$
If the measure $\mu$ on the algebra $\mathscr A$ is $\sigma$-finite, then the extension theorem and the uniqueness theorem apply, so $\mu$ can be extended uniquely to a measure on the $\sigma$-algebra $\mathscr{S} = \sigma(\mathscr A) = \sigma(\mathscr{B})$. This chain of extensions, starting with a semi-algebra $\mathscr{B}$, is often how measures are constructed.
Examples and Applications
Product Spaces
Suppose that $(S, \mathscr{S})$ and $(T, \mathscr{T})$ are measurable spaces. For the Cartesian product set $S \times T$, recall that the product $\sigma$-algebra is $\mathscr{S} \otimes \mathscr{T} = \sigma\{A \times B: A \in \mathscr{S}, B \in \mathscr{T}\}$ the $\sigma$-algebra generated by the Cartesian products of measurable sets, sometimes referred to as measurable rectangles.
Suppose that $(S, \mathscr S, \mu)$ and $(T, \mathscr T, \nu)$ are $\sigma$-finite measure spaces. Then there exists a unique $\sigma$-finite measure $\mu \otimes \nu$ on $(S \times T, \mathscr{S} \otimes \mathscr{T})$ such that $(\mu \otimes \nu)(A \times B) = \mu(A) \nu(B); \quad A \in \mathscr{S}, \; B \in \mathscr{T}$ The measure space $(S \times T, \mathscr{S} \otimes \mathscr{T}, \mu \otimes \nu)$ is the product measure space associated with $(S, \mathscr{S}, \mu)$ and $(T, \mathscr{T}, \nu)$.
Proof
Recall that the collection $\mathscr{B} = \{A \times B: A \in \mathscr{S}, B \in \mathscr{T}\}$ is a semi-algebra: the intersection of two product sets is another product set, and the complement of a product set is the union of two disjoint product sets. We define $\rho: \mathscr{B} \to [0, \infty]$ by $\rho(A \times B) = \mu(A) \nu(B)$. The consistency conditions hold, so $\rho$ can be extended to a measure on the algebra $\mathscr A$ generated by $\mathscr{B}$. The algebra $\mathscr A$ is the collection of all finite, disjoint unions of products of measurable sets. We will now show that the extended measure $\rho$ is $\sigma$-finite on $\mathscr A$. Since $\mu$ is $\sigma$-finite, there exists an increasing sequence $(A_1, A_2, \ldots)$ of sets in $\mathscr{S}$ with $\mu(A_i) \lt \infty$ and $\bigcup_{i = 1}^\infty A_i = S$. Similarly, there exists an increasing sequence $(B_1, B_2, \ldots)$ of sets in $\mathscr{T}$ with $\nu(B_j) \lt \infty$ and $\bigcup_{j = 1}^\infty B_j = T$. Then $\rho(A_i \times B_j) = \mu(A_i) \nu(B_j) \lt \infty$, and since the sets are increasing, $\bigcup_{(i, j) \in \N_+ \times \N_+} A_i \times B_j = S \times T$. The standard extension theorem and uniqueness theorem now apply, so $\rho$ can be extended uniquely to a measure on $\sigma(\mathscr A) = \mathscr{S} \otimes \mathscr{T}$.
Recall that for $C \subseteq S \times T$, the cross section of $C$ in the first coordinate at $x \in S$ is $C_x = \{ y \in T: (x, y) \in C\}$. Similarly, the cross section of $C$ in the second coordinate at $y \in T$ is $C^y = \{ x \in S: (x, y) \in C\}$. We know that the cross sections of a measurable set are measurable. The following result shows that the measures of the cross sections of a measurable set form measurable functions.
Suppose again that $(S, \mathscr S, \mu)$ and $(T, \mathscr T, \nu)$ are $\sigma$-finite measure spaces. If $C \in \mathscr{S} \otimes \mathscr{T}$ then
1. $x \mapsto \nu(C_x)$ is a measurable function from $S$ to $[0, \infty]$.
2. $y \mapsto \mu(C^y)$ is a measurable function from $T$ to $[0, \infty]$.
Proof
We prove part (a), since of course the proof for part (b) is symmetric. Suppose first that the measure spaces are finite. Let $\mathscr{R} = \{A \times B: A \in \mathscr{S}, B \in \mathscr{T}\}$ denote the set of measurable rectangles. Let $\mathscr{C} = \{C \in \mathscr{S} \otimes \mathscr{T}: x \mapsto \nu(C_x) \text{ is measurable}\}$. If $A \times B \in \mathscr{R}$, then $A \times B \in \mathscr{C}$, since $\nu[(A \times B)_x] = \nu(B) \bs{1}_A(x)$. Next, suppose $C \in \mathscr{C}$. Then $(C^c)_x = (C_x)^c$, so $\nu[(C^c)_x] = \nu(T) - \nu(C_x)$ and this is a measurable function of $x \in S$. Hence $C^c \in \mathscr{C}$. Next, suppose that $\{C_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{C}$ and let $C = \bigcup_{i \in I} C_i$. Then $\{(C_i)_x: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{T}$, and $C_x = \bigcup_{i \in I} (C_i)_x$. Hence $\nu(C_x) = \sum_{i \in I} \nu[(C_i)_x]$, and this is a measurable function of $x \in S$. Hence $C \in \mathscr{C}$. It follows that $\mathscr{C}$ is a $\lambda$-system that contains $\mathscr{R}$, which in turn is a $\pi$-system. It follows from Dynkin's $\pi$-$\lambda$ theorem that $\mathscr{S} \otimes \mathscr{T} = \sigma(\mathscr{R}) \subseteq \mathscr{C}$. Thus $\mathscr{C} = \mathscr{S} \otimes \mathscr{T}$.
Consider now the general case where the measure spaces are $\sigma$-finite. There exists an increasing sequence of sets $C_n \in \mathscr{S} \otimes \mathscr{T}$ for $n \in \N_+$ with $(\mu \otimes \nu)(C_n) \lt \infty$ for $n \in \N_+$ and $\bigcup_{n=1}^\infty C_n = S \times T$. If $C \in \mathscr{S} \otimes \mathscr{T}$, then $C \cap C_n$ is increasing in $n \in \N_+$, and $C = \bigcup_{n=1}^\infty (C \cap C_n)$. Hence, for $x \in S$, $(C \cap C_n)_x$ is increasing in $n \in \N_+$ and $C_x = \bigcup_{n=1}^\infty (C \cap C_n)_x$. Therefore $\nu(C_x) = \lim_{n \to \infty} \nu[(C \cap C_n)_x]$. But $x \mapsto \nu[(C \cap C_n)_x]$ is a measurable function of $x \in S$ for each $n \in \N_+$ by the previous argument, so $x \mapsto \nu(C_x)$ is a measurable function of $x \in S$.
In the next chapter, where we study integration with respect to a measure, we will see that for $C \in \mathscr{S} \otimes \mathscr{T}$, the product measure $(\mu \otimes \nu)(C)$ can be computed by integrating $\nu(C_x)$ over $x \in S$ with respect to $\mu$ or by integrating $\mu(C^y)$ over $y \in T$ with respect to $\nu$. These results, generalizing the definition of the product measure, are special cases of Fubini's theorem, named for the Italian mathematician Guido Fubini.
Except for more complicated notation, these results extend in a perfectly straightforward way to the product of a finite number of $\sigma$-finite measure spaces.
Suppose that $n \in \N_+$ and that $(S_i, \mathscr S_i, \mu_i)$ is a $\sigma$-finite measure space for $i \in \{1, 2, \ldots, n\}$. Let $S = \prod_{i=1}^n S_i$ and let $\mathscr S$ denote the corresponding product $\sigma$-algebra. There exists a unique $\sigma$-finite measure $\mu$ on $(S, \mathscr{S})$ satisfying $\mu\left(\prod_{i=1}^n A_i\right) = \prod_{i=1}^n \mu_i(A_i), \quad A_i \in \mathscr{S}_i \text{ for } i \in \{1, 2, \ldots, n\}$ The measure space $(S, \mathscr S, \mu)$ is the product measure space associated with the given measure spaces.
Lebesgue Measure
The next discussion concerns our most important and essential application. Recall that the Borel $\sigma$-algebra on $\R$, named for Émile Borel, is the $\sigma$-algebra $\mathscr{R}$ generated by the standard Euclidean topology on $\R$. Equivalently, $\mathscr{R} = \sigma(\mathscr{I})$ where $\mathscr{I}$ is the collection of intervals of $\R$ (of all types—bounded and unbounded, with any type of closure, and including single points and the empty set). Next recall how the length of an interval is defined. For $a, \, b \in \R$ with $a \le b$, each of the intervals $(a, b)$, $[a, b)$, $(a, b]$, and $[a, b]$ has length $b - a$. For $a \in \R$, each of the intervals $(a, \infty)$, $[a, \infty)$, $(-\infty, a)$, $(-\infty, a]$ has length $\infty$, as does $\R$ itself. The standard measure on $\mathscr{R}$ generalizes the length measurement for intervals.
There exists a unique measure $\lambda$ on $\mathscr{R}$ such that $\lambda(I) = \length(I)$ for $I \in \mathscr{I}$. The measure $\lambda$ is Lebesgue measure on $(\R, \mathscr R)$.
Proof
Recall that $\mathscr{I}$ is a semi-algebra: the intersection of two intervals is another interval, and the complement of an interval is either another interval or the union of two disjoint intervals. Define $\lambda$ on $\mathscr{I}$ by $\lambda(I) = \length(I)$ for $I \in \mathscr{I}$. Then $\lambda$ satisfies the consistency conditions and hence $\lambda$ can be extended to a measure on the algebra $\mathscr{J}$ generated by $\mathscr{I}$, namely the collection of finite, disjoint unions of intervals. The measure $\lambda$ on $\mathscr{J}$ is clearly $\sigma$-finite, since $\R$ can be written as a countably infinite union of bounded intervals. Hence the standard extension theorem and uniqueness theorem apply, so $\lambda$ can be extended to a measure on $\mathscr{R} = \sigma(\mathscr{I})$.
The name is in honor of Henri Lebesgue, of course. Since $\lambda$ is $\sigma$-finite, the $\sigma$-algebra of Borel sets $\mathscr{R}$ can be completed with respect to $\lambda$.
The completion of the Borel $\sigma$-algebra $\mathscr R$ with respect to $\lambda$ is the Lebesgue $\sigma$-algebra $\mathscr R^*$.
Recall that completed means that if $A \in \mathscr{R}^*$, $\lambda(A) = 0$ and $B \subseteq A$, then $B \in \mathscr{R}^*$ (and then $\lambda(B) = 0$). The Lebesgue measure $\lambda$ on $\R$, with either the Borel $\sigma$-algebra $\mathscr{R}$, or its completion $\mathscr{R}^*$ is the standard measure that is used for the real numbers. Other properties of the measure space $(\R, \mathscr R, \lambda)$ are given below, in the discussion of Lebesgue measure on $\R^n$.
For $n \in \N_+$, let $\mathscr R_n$ denote the Borel $\sigma$-algebra corresponding to the standard Euclidean topology on $\R^n$, so that $(\R^n, \mathscr R_n)$ is the $n$-dimensional Euclidean measurable space. The $\sigma$-algebra $\mathscr{R}_n$ is also the $n$-fold power of $\mathscr{R}$, the Borel $\sigma$-algebra of $\R$. That is, $\mathscr{R}_n = \mathscr{R} \otimes \mathscr{R} \otimes \cdots \otimes \mathscr{R}$ ($n$ times). It is also the $\sigma$-algebra generated by the products of intervals: $\mathscr{R}_n = \sigma\left\{I_1 \times I_2 \times \cdots \times I_n: I_j \in \mathscr{I} \text{ for } j \in \{1, 2, \ldots, n\}\right\}$ As above, let $\lambda$ denote Lebesgue measure on $(\R, \mathscr R)$.
For $n \in \N_+$ the $n$-fold power of $\lambda$, denoted $\lambda_n$, is Lebesgue measure on $(\R^n, \mathscr R_n)$. In particular, $\lambda_n(A_1 \times A_2 \times \cdots \times A_n) = \lambda(A_1) \lambda(A_2) \cdots \lambda(A_n); \quad A_1, \, \ldots, A_n \in \mathscr{R}$
Specializing further, if $I_j \in \mathscr{I}$ is an interval for $j \in \{1, 2, \ldots, n\}$ then $\lambda_n\left(I_1 \times I_2 \times \cdots \times I_n\right) = \length(I_1) \length(I_2) \cdots \length(I_n)$ In particular, $\lambda_2$ extends the area measure on $\mathscr{R}_2$ and $\lambda_3$ extends the volume measure on $\mathscr{R}_3$. In general, $\lambda_n(A)$ is sometimes referred to as $n$-dimensional volume of $A \in \mathscr{R}_n$. As in the one-dimensional case, $\mathscr{R}_n$ can be completed with respect to $\lambda_n$, essentially adding all subsets of sets of measure 0 to $\mathscr{R}_n$. The completed $\sigma$-algebra is the $\sigma$-algebra of Lebesgue measurable sets. Since $\lambda_n(U) \gt 0$ if $U \subseteq \R^n$ is open, the support of $\lambda_n$ is all of $\R^n$. In addition, Lebesgue measure has the regularity properties that are concerned with approximating the measure of a set, from below with the measure of a compact set, and from above with the measure of an open set.
The measure space $(\R^n, \mathscr R_n, \lambda_n)$ is regular. That is, for $A \in \mathscr R_n$,
1. $\lambda_n(A) = \sup\{\lambda_n(C): C \text{ is compact and } C \subseteq A\}$ (inner regularity).
2. $\lambda_n(A) = \inf\{\lambda_n(U): U \text{ is open and } A \subseteq U\}$ (outer regularity).
The following theorem describes how the measure of a set is changed under certain basic transformations. These are essential properties of Lebesgue measure. To set up the notation, suppose that $n \in \N_+$, $A \subseteq \R^n$, $x \in \R^n$, $c \in (0, \infty)$ and that $T$ is an $n \times n$ matrix. Define $A + x = \{a + x: a \in A\}, \quad c A = \{c a: a \in A\}, \quad TA = \{T a: a \in A\}$
Suppose that $A \in \mathscr R_n$.
1. If $x \in \R^n$ then $\lambda_n(A + x) = \lambda_n(A)$ (translation invariance)
2. If $c \in (0, \infty)$ then $\lambda_n(c A) = c^n \lambda_n(A)$ (dilation property)
3. If $T$ is an $n \times n$ matrix then $\lambda_n(T A) = |\det(T)| \lambda_n(A)$ (the scaling property)
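The scaling property in part (c) can be checked numerically with a Monte Carlo estimate of $\lambda_2(T A)$. The following sketch uses numpy, with a matrix and sample size chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
T = np.array([[2.0, 1.0],
              [0.0, 1.5]])           # an invertible 2x2 matrix

# A = unit square [0,1]^2, so lambda_2(A) = 1 and the scaling property
# predicts lambda_2(T A) = |det T| = 3.
Tinv = np.linalg.inv(T)

# Bounding box of T A, from the images of the corners of the square
corners = T @ np.array([[0, 1, 0, 1], [0, 0, 1, 1]], dtype=float)
lo, hi = corners.min(axis=1), corners.max(axis=1)
box_area = np.prod(hi - lo)

n = 1_000_000
pts = rng.uniform(lo, hi, size=(n, 2))           # uniform points in the box
pre = pts @ Tinv.T                               # apply T^{-1} to each point
inside = np.all((pre >= 0) & (pre <= 1), axis=1) # point lies in T A?
print(inside.mean() * box_area)   # approximately 3.0
print(abs(np.linalg.det(T)))      # exactly 3.0
```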
Lebesgue-Stieltjes Measures on $\R$
The construction of Lebesgue measure on $\R$ can be generalized. Here is the definition that we will need.
A function $F: \R \to \R$ that satisfies the following properties is a distribution function on $\R$:
1. $F$ is increasing: if $x \le y$ then $F(x) \le F(y)$.
2. $F$ is continuous from the right: $\lim_{t \downarrow x} F(t) = F(x)$ for all $x \in \R$.
Since $F$ is increasing, the limit from the left at $x \in \R$ exists in $\R$ and is denoted $F(x^-) = \lim_{t \uparrow x} F(t)$. Similarly $F(\infty) = \lim_{x \to \infty} F(x)$ exists, as a real number or $\infty$, and $F(-\infty) = \lim_{x \to -\infty} F(x)$ exists, as a real number or $-\infty$.
If $F$ is a distribution function on $\R$, then there exists a unique measure $\mu$ on $\mathscr{R}$ that satisfies $\mu(a, b] = F(b) - F(a), \quad -\infty \le a \le b \le \infty$
The measure $\mu$ is called the Lebesgue-Stieltjes measure associated with $F$, named for Henri Lebesgue and Thomas Joannes Stieltjes. Distribution functions and the measures associated with them are studied in more detail in the chapter on Distributions. When the function $F$ takes values in $[0, 1]$, the associated measure $\P$ is a probability measure, and the function $F$ is the probability distribution function of $\P$. Probability distribution functions are also studied in much more detail (but with less technicality) in the chapter on Distributions.
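As a simple illustration, consider a hypothetical distribution function with a continuous piece and a jump; the defining property says that the Lebesgue-Stieltjes measure of an interval $(a, b]$ is just $F(b) - F(a)$. A minimal Python sketch:

```python
def F(x):
    """A distribution function with a jump of size 1/2 at 0:
    F(x) = 0 for x < 0, F(x) = 1/2 + x/2 for 0 <= x < 1, F(x) = 1 for x >= 1.
    It is increasing and continuous from the right."""
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5 + x / 2
    return 1.0

def mu(a, b):
    """Lebesgue-Stieltjes measure of the interval (a, b]."""
    return F(b) - F(a)

print(mu(-1, 0))    # 0.5: the jump F(0) - F(0-) concentrated at the point 0
print(mu(0, 1))     # 0.5: the continuous part on (0, 1]
print(mu(-10, 10))  # 1.0: a probability measure, since F takes values in [0, 1]
```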
Note that the identity function $x \mapsto x$ for $x \in \R$ is a distribution function, and the measure associated with this function is ordinary Lebesgue measure on $\R$ constructed in (15).
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\supp}{\text{supp}}$
In this section we discuss probability spaces from the more advanced point of view of measure theory. The previous two sections on positive measures and existence and uniqueness are prerequisites. The discussion is divided into two parts: first those concepts that are shared rather equally between probability theory and general measure theory, and second those concepts that are for the most part unique to probability theory. In particular, it's a mistake to think of probability theory as a mere branch of measure theory. Probability has its own notation, terminology, point of view, and applications that makes it an incredibly rich subject on its own.
Basic Concepts
Our first discussion concerns topics that were discussed in the section on positive measures. So no proofs are necessary, but you will notice that the notation, and in some cases the terminology, is very different.
Definitions
We can now give a precise definition of the probability space, the mathematical model of a random experiment.
A probability space $(S, \mathscr S, \P)$ consists of three essential parts:
1. A set of outcomes $S$.
2. A $\sigma$-algebra of events $\mathscr S$.
3. A probability measure $\P$ on the sample space $(S, \mathscr S)$.
Often the special notation $(\Omega, \mathscr{F}, \P)$ is used for a probability space in the literature—the symbol $\Omega$ for the set of outcomes is intended to remind us that these are all possible outcomes. However in this text, we don't insist on the special notation, and use whatever notation seems most appropriate in a given context.
In probability, $\sigma$-algebras are not just important for theoretical and foundational purposes, but are important for practical purposes as well. A $\sigma$-algebra can be used to specify partial information about an experiment—a concept of fundamental importance. Specifically, suppose that $\mathscr{A}$ is a collection of events in the experiment, and that we know whether or not $A$ occurred for each $A \in \mathscr{A}$. Then in fact, we can determine whether or not $A$ occurred for each $A \in \sigma(\mathscr{A})$, the $\sigma$-algebra generated by $\mathscr{A}$.
Technically, a random variable for our experiment is a measurable function from the sample space into another measurable space.
Suppose that $(S, \mathscr S, \P)$ is a probability space and that $(T, \mathscr T)$ is another measurable space. A random variable $X$ with values in $T$ is a measurable function from $S$ into $T$.
1. The probability distribution of $X$ is the mapping on $\mathscr T$ given by $B \mapsto \P(X \in B)$.
2. The collection of events $\{\{X \in B\}: B \in \mathscr T\}$ is a sub $\sigma$-algebra of $\mathscr S$, and is the $\sigma$-algebra generated by $X$, denoted $\sigma(X)$.
Details
The measurability of $X$ ensures that $\{X \in B\} = X^{-1}(B) \in \mathscr S$ for every $B \in \mathscr T$, so the probabilities in (a) are well defined. Part (b) follows since inverse images preserve the set operations: $X^{-1}(B^c) = \left[X^{-1}(B)\right]^c$ and $X^{-1}\left(\bigcup_i B_i\right) = \bigcup_i X^{-1}(B_i)$.
If we observe the value of $X$, then we know whether or not each event in $\sigma(X)$ has occurred. More generally, we can construct the $\sigma$-algebra associated with any collection of random variables.
Suppose that $(T_i, \mathscr T_i)$ is a measurable space for each $i$ in an index set $I$, and that $X_i$ is a random variable taking values in $T_i$ for each $i \in I$. The $\sigma$-algebra generated by $\{X_i: i \in I\}$ is $\sigma\{X_i: i \in I\} = \sigma\left\{\{X_i \in B_i\}: B_i \in \mathscr T_i, \; i \in I\right\}$
If we observe the value of $X_i$ for each $i \in I$ then we know whether or not each event in $\sigma\{X_i: i \in I\}$ has occurred. This idea is very important in the study of stochastic processes.
Null Events, Almost Sure Events, and Equivalence
Suppose that $(S, \mathscr S, \P)$ is a probability space.
Define the following collections of events:
1. $\mathscr{N} = \{A \in \mathscr S: \P(A) = 0\}$, the collection of null events
2. $\mathscr{M} = \{A \in \mathscr S: \P(A) = 1\}$, the collection of almost sure events
3. $\mathscr{D} = \mathscr{N} \cup \mathscr{M} = \{A \in \mathscr S: \P(A) = 0 \text{ or } \P(A) = 1 \}$, the collection of essentially deterministic events
The collection of essentially deterministic events $\mathscr D$ is a sub $\sigma$-algebra of $\mathscr S$.
In the section on independence, we showed that $\mathscr{D}$ is also a collection of independent events.
Intuitively, equivalent events or random variables are those that are indistinguishable from a probabilistic point of view. Recall first that the symmetric difference between events $A$ and $B$ is $A \bigtriangleup B = (A \setminus B) \cup (B \setminus A)$; it is the event that occurs if and only if one of the events occurs, but not the other, and corresponds to exclusive or. Here is the definition for events:
Events $A$ and $B$ are equivalent if $A \bigtriangleup B \in \mathscr{N}$, and we denote this by $A \equiv B$. The relation $\equiv$ is an equivalence relation on $\mathscr S$. That is, for $A, \, B, \, C \in \mathscr S$,
1. $A \equiv A$ (the reflexive property).
2. If $A \equiv B$ then $B \equiv A$ (the symmetric property).
3. If $A \equiv B$ and $B \equiv C$ then $A \equiv C$ (the transitive property).
Thus $A \equiv B$ if and only if $\P(A \bigtriangleup B) = \P(A \setminus B) + \P(B \setminus A) = 0$ if and only if $\P(A \setminus B) = \P(B \setminus A) = 0$. The equivalence relation $\equiv$ partitions $\mathscr S$ into disjoint classes of mutually equivalent events. Equivalence is preserved under the set operations.
Suppose that $A, \, B \in \mathscr S$. If $A \equiv B$ then $A^c \equiv B^c$.
Suppose that $A_i, \, B_i \in \mathscr S$ for $i$ in a countable index set $I$. If $A_i \equiv B_i$ for $i \in I$ then
1. $\bigcup_{i \in I} A_i \equiv \bigcup_{i \in I} B_i$
2. $\bigcap_{i \in I} A_i \equiv \bigcap_{i \in I} B_i$
Equivalent events have the same probability.
If $A, \, B \in \mathscr S$ and $A \equiv B$ then $\P(A) = \P(B)$.
The converse trivially fails, and a counterexample is given below. However, the null and almost sure events do form equivalence classes.
Suppose that $A \in \mathscr S$.
1. If $A \in \mathscr{N}$ then $A \equiv B$ if and only if $B \in \mathscr{N}$.
2. If $A \in \mathscr{M}$ then $A \equiv B$ if and only if $B \in \mathscr{M}$.
We can extend the notion of equivalence to random variables taking values in the same space. Thus suppose that $(T, \mathscr T)$ is another measurable space. If $X$ and $Y$ are random variables with values in $T$, then $(X, Y)$ is a random variable with values in $T \times T$, which is given the usual product $\sigma$-algebra $\mathscr T \otimes \mathscr T$. We assume that the diagonal set $D = \{(x, x): x \in T\} \in \mathscr T \otimes \mathscr T$, which is almost always true in applications.
Random variables $X$ and $Y$ taking values in $T$ are equivalent if $\P(X = Y) = 1$. Again we write $X \equiv Y$. The relation $\equiv$ is an equivalence relation on the collection of random variables that take values in $T$. That is, for random variables $X$, $Y$, and $Z$ with values in $T$,
1. $X \equiv X$ (the reflexive property).
2. If $X \equiv Y$ then $Y \equiv X$ (the symmetric property).
3. If $X \equiv Y$ and $Y \equiv Z$ then $X \equiv Z$ (the transitive property).
So the collection of random variables with values in $T$ is partitioned into disjoint classes of mutually equivalent variables.
Suppose that $X$ and $Y$ are random variables taking values in $T$ and that $X \equiv Y$. Then
1. $\{X \in B\} \equiv \{Y \in B\}$ for every $B \in \mathscr T$.
2. $X$ and $Y$ have the same probability distribution on $(T, \mathscr T)$.
Again, the converse to part (b) fails with a passion, and a counterexample is given below. It often happens that a definition for random variables subsumes the corresponding definition for events, by considering the indicator variables of the events. So it is with equivalence.
Suppose that $A, \, B \in \mathscr S$. Then $A \equiv B$ if and only if $\bs 1_A \equiv \bs 1_B$.
Equivalence is preserved under a deterministic transformation of the variables. For the next result, suppose that $(U, \mathscr U)$ is yet another measurable space, along with $(T, \mathscr T)$.
Suppose $X, \, Y$ are random variables with values in $T$ and that $g: T \to U$ is measurable. If $X \equiv Y$ then $g(X) \equiv g(Y)$.
Suppose again that $(S, \mathscr S, \P)$ is a probability space corresponding to a random experiment. Let $\mathscr V$ denote the collection of all real-valued random variables for the experiment, that is, all measurable functions from $S$ into $\R$. With the usual definitions of addition and scalar multiplication, $(\mathscr V, +, \cdot)$ is a vector space. However, in probability theory, we often do not want to distinguish between random variables that are equivalent, so it's nice to know that the vector space structure is preserved when we identify equivalent random variables. Formally, let $[X]$ denote the equivalence class generated by a real-valued random variable $X \in \mathscr V$, and let $\mathscr W$ denote the collection of all such equivalence classes. In modular notation, $\mathscr W$ is the set $\mathscr V \big/ \equiv$. We define addition and scalar multiplication on $\mathscr W$ by $[X] + [Y] = [X + Y], \; c [X] = [c X]; \quad [X], \; [Y] \in \mathscr W, \; c \in \R$
$(\mathscr W, +, \cdot)$ is a vector space.
Often we don't bother to use the special notation for the equivalence class associated with a random variable. Rather, it's understood that equivalent random variables represent the same object. Spaces of functions in a general measure space are studied in the chapter on Distributions, and spaces of random variables are studied in more detail in the chapter on Expected Value.
Completion
Suppose again that $(S, \mathscr S, \P)$ is a probability space, and that $\mathscr N$ denotes the collection of null events, as above. Suppose that $A \in \mathscr N$ so that $\P(A) = 0$. If $B \subseteq A$ and $B \in \mathscr S$, then we know that $\P(B) = 0$ so $B \in \mathscr{N}$ also. However, in general there might be subsets of $A$ that are not in $\mathscr S$. This leads naturally to the following definition.
The probability space $(S, \mathscr S, \P)$ is complete if $A \in \mathscr N$ and $B \subseteq A$ imply $B \in \mathscr S$ (and hence $B \in \mathscr{N}$).
So the probability space is complete if every subset of an event with probability 0 is also an event (and hence also has probability 0). We know from our work on positive measures that every $\sigma$-finite measure space that is not complete can be completed. So in particular a probability space that is not complete can be completed. To review the construction, recall that the equivalence relation $\equiv$ that we used above on $\mathscr S$ is extended to $\mathscr{P}(S)$ (the power set of $S$).
For $A, \, B \subseteq S$, define $A \equiv B$ if and only if there exists $N \in \mathscr{N}$ such that $A \bigtriangleup B \subseteq N$. The relation $\equiv$ is an equivalence relation on $\mathscr{P}(S)$.
Here is how the probability space is completed:
Let $\mathscr S_0 = \{A \subseteq S: A \equiv B \text{ for some } B \in \mathscr S \}$. For $A \in \mathscr S_0$, define $\P_0(A) = \P(B)$ where $B \in \mathscr S$ and $A \equiv B$. Then
1. $\mathscr S_0$ is a $\sigma$-algebra of subsets of $S$ and $\mathscr S \subseteq \mathscr S_0$.
2. $\P_0$ is a probability measure on $(S, \mathscr S_0)$.
3. $(S, \mathscr S_0, \P_0)$ is complete, and is the completion of $(S, \mathscr S, \P)$.
Product Spaces
Our next discussion concerns the construction of probability spaces that correspond to specified distributions. To set the stage, suppose that $(S, \mathscr S, \P)$ is a probability space. If we let $X$ denote the identity function on $S$, so that $X(x) = x$ for $x \in S$, then $\{X \in A\} = A$ for $A \in \mathscr S$ and hence $\P(X \in A) = \P(A)$. That is, $\P$ is the probability distribution of $X$. We have seen this before—every probability measure can be thought of as the distribution of a random variable. The next result shows how to construct a probability space that corresponds to a sequence of independent random variables with specified distributions.
Suppose $n \in \N_+$ and that $(S_i, \mathscr S_i, \P_i)$ is a probability space for $i \in \{1, 2, \ldots, n\}$. The corresponding product measure space $(S, \mathscr S, \P)$ is a probability space. If $X_i: S \to S_i$ is the $i$th coordinate function on $S$ so that $X_i(\bs x) = x_i$ for $\bs x = (x_1, x_2, \ldots, x_n) \in S$ then $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables on $(S, \mathscr{S}, \P)$, and $X_i$ has distribution $\P_i$ on $(S_i, \mathscr S_i)$ for each $i \in \{1, 2, \ldots, n \}$.
Proof
Of course, the existence of the product space $(S, \mathscr S, \P)$ follows immediately from the more general result for products of positive measure spaces. Recall that $S = \prod_{i=1}^n S_i$ and that $\mathscr S$ is the $\sigma$-algebra generated by sets of the form $\prod_{i=1}^n A_i$ where $A_i \in \mathscr S_i$ for each $i \in \{1, 2, \ldots, n\}$. Finally, $\P$ is the unique positive measure on $(S, \mathscr S)$ satisfying $\P\left(\prod_{i=1}^n A_i\right) = \prod_{i=1}^n \P_i(A_i)$ where again, $A_i \in \mathscr S_i$ for each $i \in \{1, 2, \ldots, n\}$. Clearly $\P$ is a probability measure since $\P(S) = \prod_{i=1}^n \P_i(S_i) = 1$. Suppose that $A_i \in \mathscr S_i$ for $i \in \{1, 2, \ldots, n\}$. Then $\{X_1 \in A_1, X_2 \in A_2 \ldots, X_n \in A_n\} = \prod_{i=1}^n A_i \in \mathscr S$. Hence $\P(X_1 \in A_1, X_2 \in A_2, \ldots, X_n \in A_n) = \prod_{i=1}^n \P_i(A_i)$ If we fix $i \in \{1, 2, \ldots, n\}$ and let $A_j = S_j$ for $j \ne i$, then the displayed equation gives $\P(X_i \in A_i) = \P_i(A_i)$, so $X_i$ has distribution $\P_i$ on $(S_i, \mathscr S_i)$. Returning to the displayed equation we have $\P(X_1 \in A_1, X_2 \in A_2, \ldots, X_n \in A_n) = \prod_{i=1}^n \P(X_i \in A_i)$ so $(X_1, X_2, \ldots, X_n)$ are independent.
Intuitively, the given probability spaces correspond to $n$ random experiments. The product space then is the probability space that corresponds to the experiments performed independently. When modeling a random experiment, if we say that we have a finite sequence of independent random variables with specified distributions, we can rest assured that there actually is a probability space that supports this statement.
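The product construction is easy to illustrate by simulation: sample the coordinate variables independently and check that joint probabilities factor. Here is a minimal sketch with numpy; the two distributions and the events are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 1_000_000

# Two independent coordinates with different distributions, as in the
# product construction: X1 uniform on [0, 1], X2 exponential with rate 1.
X1 = rng.uniform(0, 1, size=n)
X2 = rng.exponential(1.0, size=n)

A1 = X1 < 0.3                 # the event {X1 in [0, 0.3)}
A2 = X2 > 1.0                 # the event {X2 in (1, infinity)}

print((A1 & A2).mean())       # ~ 0.3 * exp(-1) ≈ 0.110, the product of marginals
print(A1.mean() * A2.mean())  # product of the empirical marginal probabilities
```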
We can extend the last result to an infinite sequence of probability spaces. Suppose that $(S_i, \mathscr S_i)$ is a measurable space for each $i \in \N_+$. Recall that the product space $\prod_{i=1}^\infty S_i$ consists of all sequences $\bs x = (x_1, x_2, \ldots)$ such that $x_i \in S_i$ for each $i \in \N_+$. The corresponding product $\sigma$-algebra $\mathscr S$ is generated by the collection of cylinder sets. That is, $\mathscr S = \sigma(\mathscr B)$ where $\mathscr{B} = \left\{\prod_{i=1}^\infty A_i: A_i \in \mathscr S_i \text{ for each } i \in \N_+ \text{ and } A_i = S_i \text{ for all but finitely many } i \in \N_+\right\}$
Suppose that $(S_i, \mathscr{S}_i, \P_i)$ is a probability space for $i \in \N_+$. Let $(S, \mathscr S)$ denote the product measurable space so that $\mathscr S = \sigma(\mathscr B)$ where $\mathscr B$ is the collection of cylinder sets. Then there exists a unique probability measure $\P$ on $(S, \mathscr S)$ that satisfies $\P\left(\prod_{i=1}^\infty A_i\right) = \prod_{i=1}^\infty \P_i(A_i), \quad \prod_{i=1}^\infty A_i \in \mathscr B$ If $X_i: S \to S_i$ is the $i$th coordinate function on $S$ for $i \in \N_+$, so that $X_i(\bs x) = x_i$ for $\bs x = (x_1, x_2, \ldots) \in S$, then $(X_1, X_2, \ldots)$ is a sequence of independent random variables on $(S, \mathscr{S}, \P)$, and $X_i$ has distribution $\P_i$ on $(S_i, \mathscr S_i)$ for each $i \in \N_+$.
Proof
The proof is similar to the one in for positive measure spaces in the section on existence and uniqueness. First recall that the collection of cylinder sets $\mathscr B$ is a semi-algebra. We define $\P: \mathscr{B} \to [0, 1]$ as in the statement of the theorem. Note that all but finitely many factors are 1. The consistency conditions are satisfied, so $\P$ can be extended to a probability measure on the algebra $\mathscr A$ generated by $\mathscr{B}$. That is, $\mathscr A$ is the collection of all finite, disjoint unions of cylinder sets. The standard extension theorem and uniqueness theorem now apply, so $\P$ can be extended uniquely to a measure on $\mathscr S = \sigma(\mathscr A)$. The proof that $(X_1, X_2, \ldots)$ are independent and that $X_i$ has distribution $\P_i$ for each $i \in \N_+$ is just as in the previous theorem.
Once again, if we model a random process by starting with an infinite sequence of independent random variables, we can be sure that there exists a probability space that supports this sequence. The particular probability space constructed in the last theorem is called the canonical probability space associated with the sequence of random variables. Note also that it was important that we had probability measures rather than just general positive measures in the construction, since the infinite product $\prod_{i=1}^\infty \P_i(A_i)$ is always well defined. The next section on Stochastic Processes continues the discussion of how to construct probability spaces that correspond to a collection of random variables with specified distributional properties.
Probability Concepts
Our next discussion concerns topics that are unique to probability theory and do not have simple analogies in general measure theory.
Independence
As usual, suppose that $(S, \mathscr S, \P)$ is a probability space. We have already studied the independence of collections of events and the independence of collections of random variables. A more complete and general treatment results if we define the independence of collections of collections of events, and most importantly, the independence of collections of $\sigma$-algebras. This extension actually occurred already, when we went from independence of a collection of events to independence of a collection of random variables, but we did not note it at the time. In spite of the layers of set theory, the basic idea is the same.
Suppose that $\mathscr{A}_i$ is a collection of events for each $i$ in an index set $I$. Then $\mathscr{A} = \{\mathscr{A}_i: i \in I\}$ is independent if and only if for every choice of $A_i \in \mathscr{A}_i$ for $i \in I$, the collection of events $\{ A_i: i \in I\}$ is independent. That is, for every finite $J \subseteq I$, $\P\left(\bigcap_{j \in J} A_j\right) = \prod_{j \in J} \P(A_j)$
As noted above, independence of random variables, as we defined previously, is a special case of our new definition.
Suppose that $(T_i, \mathscr T_i)$ is a measurable space for each $i$ in an index set $I$, and that $X_i$ is a random variable taking values in a set $T_i$ for each $i \in I$. The independence of $\{X_i: i \in I\}$ is equivalent to the independence of $\{\sigma(X_i): i \in I\}$.
Independence of events is also a special case of the new definition, and thus our new definition really does subsume our old one.
Suppose that $A_i$ is an event for each $i \in I$. The independence of $\{A_i: i \in I\}$ is equivalent to the independence of $\{\mathscr{A}_i: i \in I\}$ where $\mathscr{A}_i = \sigma\{A_i\} = \{S, \emptyset, A_i, A_i^c\}$ for each $i \in I$.
For every collection of objects that we have considered (collections of events, collections of random variables, collections of collections of events), the notion of independence has the basic inheritance property.
Suppose that $\mathscr{A}$ is a collection of collections of events.
1. If $\mathscr{A}$ is independent then $\mathscr{B}$ is independent for every $\mathscr{B} \subseteq \mathscr{A}$.
2. If $\mathscr{B}$ is independent for every finite $\mathscr{B} \subseteq \mathscr{A}$ then $\mathscr{A}$ is independent.
Our most important collections are $\sigma$-algebras, and so we are most interested in the independence of a collection of $\sigma$-algebras. The next result allows us to go from the independence of certain types of collections to the independence of the $\sigma$-algebras generated by these collections. To understand the result, you will need to review the definitions and theorems concerning $\pi$-systems and $\lambda$-systems. The proof uses Dynkin's $\pi$-$\lambda$ theorem, named for Eugene Dynkin.
Suppose that $\mathscr{A}_i$ is a collection of events for each $i$ in an index set $I$, and that $\mathscr{A_i}$ is a $\pi$-system for each $i \in I$. If $\left\{\mathscr{A}_i: i \in I\right\}$ is independent, then $\left\{\sigma(\mathscr{A}_i): i \in I\right\}$ is independent.
Proof
In light of the previous result, it suffices to consider a finite set of collections. Thus, suppose that $\{\mathscr{A}_1, \mathscr{A}_2, \ldots, \mathscr{A}_n\}$ is independent. Now, fix $A_i \in \mathscr{A}_i$ for $i \in \{2, 3, \ldots, n\}$ and let $E = \bigcap_{i=2}^n A_i$. Let $\mathscr{L} = \{B \in \mathscr S: \P(B \cap E) = \P(B) \P(E)\}$. Trivially $S \in \mathscr{L}$ since $\P(S \cap E) = \P(E) = \P(S) \P(E)$. Next suppose that $A \in \mathscr{L}$. Then $\P(A^c \cap E) = \P(E) - \P(A \cap E) = \P(E) - \P(A) \P(E) = [1 - \P(A)] \P(E) = \P(A^c) \P(E)$ Thus $A^c \in \mathscr{L}$. Finally, suppose that $\{A_j: j \in J\}$ is a countable collection of disjoint sets in $\mathscr{L}$. Then $\P\left[\left(\bigcup_{j \in J} A_j \right) \cap E \right] = \P\left[ \bigcup_{j \in J} (A_j \cap E) \right] = \sum_{j \in J} \P(A_j \cap E) = \sum_{j \in J} \P(A_j) \P(E) = \P(E) \sum_{j \in J} \P(A_j) = \P(E) \P\left(\bigcup_{j \in J} A_j \right)$ Therefore $\bigcup_{j \in J} A_j \in \mathscr{L}$ and so $\mathscr{L}$ is a $\lambda$-system. Trivially $\mathscr{A_1} \subseteq \mathscr{L}$ by the original independence assumption, so by the $\pi$-$\lambda$ theorem, $\sigma(\mathscr{A}_1) \subseteq \mathscr{L}$. Thus, we have that for every $A_1 \in \sigma(\mathscr{A}_1)$ and $A_i \in \mathscr{A}_i$ for $i \in \{2, 3, \ldots, n\}$, $\P\left(\bigcap_{i=1}^n A_i \right) = \prod_{i=1}^n \P(A_i)$ Thus we have shown that $\left\{\sigma(\mathscr{A}_1), \mathscr{A}_2, \ldots, \mathscr{A}_n\right\}$ is independent. Repeating the argument $n - 1$ additional times, we get that $\{\sigma(\mathscr{A}_1), \sigma(\mathscr{A}_2), \ldots, \sigma(\mathscr{A}_n)\}$ is independent.
The next result is a rigorous statement of the strong independence that is implied by the independence of a collection of events.
Suppose that $\mathscr{A}$ is an independent collection of events, and that $\left\{\mathscr{B}_j: j \in J\right\}$ is a partition of $\mathscr{A}$. That is, $\mathscr{B}_j \cap \mathscr{B}_k = \emptyset$ for $j \ne k$ and $\bigcup_{j \in J} \mathscr{B}_j = \mathscr{A}$. Then $\left\{\sigma(\mathscr{B}_j): j \in J\right\}$ is independent.
Proof
Let $\mathscr{B}_j^*$ denote the set of all finite intersections of sets in $\mathscr{B}_j$, for each $j \in J$. Then clearly $\mathscr{B}_j^*$ is a $\pi$-system for each $j$, and $\left\{\mathscr{B}_j^*: j \in J\right\}$ is independent. By the previous theorem, $\left\{\sigma(\mathscr{B}_j^*): j \in J\right\}$ is independent. But clearly $\sigma(\mathscr{B}_j^*) = \sigma(\mathscr{B}_j)$ for $j \in J$.
Let's bring the result down to earth. Suppose that $A, B, C, D$ are independent events. In our elementary discussion of independence, you were asked to show, for example, that $A \cup B^c$ and $C^c \cup D^c$ are independent. This is a consequence of the much stronger statement that the $\sigma$-algebras $\sigma\{A, B\}$ and $\sigma\{C, D\}$ are independent.
Exchangeability
As usual, suppose that $(S, \mathscr S, \P)$ is a probability space corresponding to a random experiment. Roughly speaking, a sequence of events or a sequence of random variables is exchangeable if the probability law that governs the sequence is unchanged when the order of the events or variables is changed. Exchangeable variables arise naturally in sampling experiments and many other settings, and are a natural generalization of a sequence of independent, identically distributed (IID) variables. Conversely, it turns out that any exchangeable sequence of variables can be constructed from an IID sequence. First we give the definition for events:
Suppose that $\mathscr A = \{A_i: i \in I\}$ is a collection of events, where $I$ is a nonempty index set. Then $\mathscr A$ is exchangeable if the probability of the intersection of a finite number of the events depends only on the number of events. That is, if $J$ and $K$ are finite subsets of $I$ and $\#(J) = \#(K)$ then $\P\left( \bigcap_{j \in J} A_j\right) = \P \left( \bigcap_{k \in K} A_k\right)$
Exchangeability has the same basic inheritance property that we have seen before.
Suppose that $\mathscr A$ is a collection of events.
1. If $\mathscr A$ is exchangeable then $\mathscr B$ is exchangeable for every $\mathscr B \subseteq \mathscr A$.
2. Conversely, if $\mathscr B$ is exchangeable for every finite $\mathscr B \subseteq \mathscr A$ then $\mathscr A$ is exchangeable.
For a collection of exchangeable events, the inclusion-exclusion law for the probability of a union is much simpler than the general version.
Suppose that $\{A_1, A_2, \ldots, A_n\}$ is an exchangeable collection of events. For $J \subseteq \{1, 2, \ldots, n\}$ with $\#(J) = k$, let $p_k = \P\left( \bigcap_{j \in J} A_j\right)$. Then $\P\left(\bigcup_{i = 1}^n A_i\right) = \sum_{k=1}^n (-1)^{k-1} \binom{n}{k} p_k$
Proof
The inclusion-exclusion rule gives $\P \left( \bigcup_{i = 1}^n A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq \{1, 2, \ldots, n\}, \; \#(J) = k} \P \left( \bigcap_{j \in J} A_j \right)$ But $p_k = \P\left( \bigcap_{j \in J} A_j\right)$ for every $J \subseteq \{1, 2, \ldots, n\}$ with $\#(J) = k$, and there are $\binom{n}{k}$ such subsets.
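As a concrete check of this identity, consider draws without replacement from an urn, a standard source of exchangeable events. The following Python sketch compares the inclusion-exclusion formula with a direct Monte Carlo estimate; the urn composition and sample sizes are our own illustrative choices, not part of the text.

```python
from math import comb
import random

# Urn with r red and b black balls, m = r + b; draw n balls without
# replacement, and let A_i be the event that draw i is red. These events
# are exchangeable, with p_k = P(A_1 ∩ ... ∩ A_k) = (r/m)((r-1)/(m-1))...
r, b, n = 5, 7, 4
m = r + b

def p(k):
    prob = 1.0
    for j in range(k):
        prob *= (r - j) / (m - j)
    return prob

# Inclusion-exclusion in the exchangeable form of the theorem above
union_prob = sum((-1) ** (k - 1) * comb(n, k) * p(k) for k in range(1, n + 1))

# Monte Carlo estimate of P(at least one of A_1, ..., A_n occurs)
random.seed(1)
balls = ['red'] * r + ['black'] * b
trials = 100_000
hits = sum('red' in random.sample(balls, n) for _ in range(trials))
print(union_prob, hits / trials)  # both should be approximately 0.9293
```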
The concept of exchangeability can be extended to random variables in the natural way. Suppose that $(T, \mathscr T)$ is a measurable space.
Suppose that $\mathscr A$ is a collection of random variables, each taking values in $T$. The collection $\mathscr A$ is exchangeable if for any $\{X_1, X_2, \ldots, X_n\} \subseteq \mathscr A$, the distribution of the random vector $(X_1, X_2, \ldots, X_n)$ depends only on $n$.
Thus, the distribution of the random vector is unchanged if the coordinates are permuted. Once again, exchangeability has the same basic inheritance property as a collection of independent variables.
Suppose that $\mathscr{A}$ is a collection of random variables, each taking values in $T$.
1. If $\mathscr A$ is exchangeable then $\mathscr B$ is exchangeable for every $\mathscr B \subseteq \mathscr A$.
2. Conversely, if $\mathscr B$ is exchangeable for every finite $\mathscr B \subseteq \mathscr A$ then $\mathscr A$ is exchangeable.
Suppose that $\mathscr A$ is a collection of random variables, each taking values in $T$, and that $\mathscr A$ is exchangeable. Then trivially the variables are identically distributed: if $X, \, Y \in \mathscr A$ and $A \in \mathscr T$, then $\P(X \in A) = \P(Y \in A)$. Also, the definition of exchangeable variables subsumes the definition for events:
Suppose that $\mathscr A$ is a collection of events, and let $\mathscr B = \{\bs 1_A: A \in \mathscr A \}$ denote the corresponding collection of indicator random variables. Then $\mathscr A$ is exchangeable if and only if $\mathscr B$ is exchangeable.
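For a quick numerical illustration of exchangeable (but dependent) variables, consider two draws without replacement from a small population: permuting the coordinates leaves the joint distribution unchanged. The Python sketch below uses an illustrative four-element population of our own choosing.

```python
import random
from collections import Counter

# (X_1, X_2) = two draws without replacement: exchangeable but not
# independent. Each ordered pair of distinct values has probability 1/12.
random.seed(2)
population = [1, 2, 3, 4]
c12, c21 = Counter(), Counter()
for _ in range(120_000):
    x1, x2 = random.sample(population, 2)
    c12[(x1, x2)] += 1   # empirical distribution of (X_1, X_2)
    c21[(x2, x1)] += 1   # empirical distribution of (X_2, X_1)
print(sorted(c12.items()))
print(sorted(c21.items()))  # the two distributions agree: roughly 10000 per pair
```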
Tail Events and Variables
Suppose again that we have a random experiment modeled by a probability space $(S, \mathscr S, \P)$.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of random variables. The tail $\sigma$-algebra of the sequence is $\mathscr T = \bigcap_{n=1}^\infty \sigma\{X_n, X_{n+1}, \ldots\}$
1. An event $B \in \mathscr T$ is a tail event for the sequence.
2. A random variable $Y$ that is measurable with respect to $\mathscr T$ is a tail random variable for the sequence.
Informally, a tail event (random variable) is an event (random variable) that can be defined in terms of $\{X_n, X_{n+1}, \ldots\}$ for each $n \in \N_+$. The tail $\sigma$-algebra for a sequence of events $(A_1, A_2, \ldots)$ is defined analogously (or simply let $X_k = \bs{1}(A_k)$, the indicator variable of $A_k$, for each $k$). For the following results, you may need to review some of the definitions in the section on Convergence.
Suppose that $(A_1, A_2, \ldots)$ is a sequence of events.
1. If the sequence is increasing then $\lim_{n \to \infty} A_n = \bigcup_{n=1}^\infty A_n$ is a tail event of the sequence.
2. If the sequence is decreasing then $\lim_{n \to \infty} A_n = \bigcap_{n=1}^\infty A_n$ is a tail event of the sequence.
Proof
1. If the sequence is increasing then $\bigcup_{n=1}^\infty A_n = \bigcup_{n=k}^\infty A_n \in \sigma\{A_k, A_{k+1}, \ldots\}$ for every $k \in \N_+$.
2. If the sequence is decreasing then $\bigcap_{n=1}^\infty A_n = \bigcap_{n=k}^\infty A_n \in \sigma\{A_k, A_{k+1}, \ldots\}$ for every $k \in \N_+$.
Suppose again that $(A_1, A_2, \ldots)$ is a sequence of events. Each of the following is a tail event of the sequence:
1. $\limsup_{n \to \infty} A_n = \bigcap_{n=1}^\infty \bigcup_{i=n}^\infty A_i$
2. $\liminf_{n \to \infty} A_n = \bigcup_{n=1}^\infty \bigcap_{i=n}^\infty A_i$
Proof
1. The events $\bigcup_{i=n}^\infty A_i$ are decreasing in $n$ and hence $\limsup_{n \to \infty} A_n = \lim_{n \to \infty} \bigcup_{i=n}^\infty A_i \in \mathscr T$ by the previous result.
2. The events $\bigcap_{i=n}^\infty A_i$ are increasing in $n$ and hence $\liminf_{n \to \infty} A_n = \lim_{n \to \infty} \bigcap_{i=n}^\infty A_i \in \mathscr T$ by the previous result.
Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of real-valued random variables.
1. $\{X_n \text{ converges as } n \to \infty\}$ is a tail event for $\bs X$.
2. $\liminf_{n \to \infty} X_n$ is a tail random variable for $\bs X$.
3. $\limsup_{n \to \infty} X_n$ is a tail random variable for $\bs X$.
Proof
1. The Cauchy criterion for convergence (named for Augustin Cauchy of course) states that $X_n$ converges as $n \to \infty$ if and only if for every $\epsilon > 0$ there exists $N \in \N_+$ (depending on $\epsilon$) such that if $m, \, n \ge N$ then $\left|X_n - X_m\right| \lt \epsilon$. In this criterion, we can without loss of generality take $\epsilon$ to be rational, and for a given $k \in \N_+$ we can insist that $m, \, n \ge k$. With these restrictions, the Cauchy criterion is a countable intersection of events, each of which is in $\sigma\{X_k, X_{k+1}, \ldots\}$.
2. Recall that $\liminf_{n \to \infty} X_n = \lim_{n \to \infty} \inf\{X_k: k \ge n\}$.
3. Similarly, recall that $\limsup_{n \to \infty} X_n = \lim_{n \to \infty} \sup\{X_k: k \ge n\}$.
The random variable in part (b) may take the value $-\infty$, and the random variable in (c) may take the value $\infty$. From parts (b) and (c) together, note that if $X_n \to X_\infty$ as $n \to \infty$ on the sample space $S$, then $X_\infty$ is a tail random variable for $\bs X$.
There are a number of zero-one laws in probability. These are theorems that give conditions under which an event will be essentially deterministic; that is, have probability 0 or probability 1. Interestingly, it can sometimes be difficult to determine which of these extremes is actually the case. The following result is the Kolmogorov zero-one law, named for Andrey Kolmogorov. It states that an event in the tail $\sigma$-algebra of an independent sequence will have probability 0 or 1.
Suppose that $\bs X = (X_1, X_2, \ldots)$ is an independent sequence of random variables.
1. If $B$ is a tail event for $\bs X$ then $\P(B) = 0$ or $\P(B) = 1$.
2. If $Y$ is a real-valued tail random variable for $\bs X$ then $Y$ is constant with probability 1.
Proof
1. By definition $B \in \sigma\{X_{n+1}, X_{n+2}, \ldots\}$ for each $n \in \N_+$, and hence $\{X_1, X_2, \ldots, X_n, \bs{1}_B\}$ is an independent set of random variables. Thus $\{X_1, X_2, \ldots, \bs{1}_B\}$ is an independent set of random variables. But $B \in \sigma\{X_1, X_2, \ldots\}$, so it follows that the event $B$ is independent of itself. Therefore $\P(B) = 0$ or $\P(B) = 1$.
2. The function $y \mapsto \P(Y \le y)$ on $\R$ is the (cumulative) distribution function of $Y$. This function is clearly increasing. Moreover, simple applications of the continuity theorems show that it is right continuous and that $\P(Y \le y) \to 0$ as $y \to -\infty$ and $\P(Y \le y) \to 1$ as $y \to \infty$. (Explicit proofs are given in the section on distribution functions in the chapter on Distributions.) But since $Y$ is a tail random variable, $\{Y \le y\}$ is a tail event and hence $\P(Y \le y) \in \{0, 1\}$ for each $y \in \R$. It follows that there exists $c \in \R$ such that $\P(Y \le y) = 0$ for $y \lt c$ and $\P(Y \le y) = 1$ for $y \ge c$. Hence $\P(Y = c) = 1$.
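For a concrete illustration of part (b), the limit of the sample means of an IID sequence (when it exists) is a tail random variable, and so must be almost surely constant. The following Python sketch, with illustrative parameters of our own choosing, shows independent runs all producing the same limiting value:

```python
import random

# (X_1 + ... + X_n)/n changes if finitely many terms change, but its limit
# does not, so the limit is a tail random variable of the IID sequence and
# hence a.s. constant by the zero-one law (here the constant is 1/2, the
# common mean, by the strong law of large numbers).
random.seed(3)
n = 200_000
for run in range(3):
    total = sum(random.random() for _ in range(n))  # IID uniform(0, 1)
    print(run, total / n)                           # every run gives about 0.5
```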
From the Kolmogorov zero-one law and the result above, note that if $(A_1, A_2, \ldots)$ is a sequence of independent events, then $\limsup_{n \to \infty} A_n$ must have probability 0 or 1. The Borel-Cantelli lemmas give conditions that determine which of these extremes holds:
Suppose that $(A_1, A_2, \ldots)$ is a sequence of independent events.
1. If $\sum_{i=1}^\infty \P(A_i) \lt \infty$ then $\P\left(\limsup_{n \to \infty} A_n\right) = 0$.
2. If $\sum_{i=1}^\infty \P(A_i) = \infty$ then $\P\left(\limsup_{n \to \infty} A_n\right) = 1$.
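Here is a small simulation of the dichotomy; the probabilities $1/n^2$ (summable) and $1/n$ (not summable) are illustrative choices. In the first case the index of the last event to occur typically stabilizes at a small value; in the second, events keep occurring all the way out to the simulation horizon.

```python
import random

random.seed(4)
N = 100_000  # simulation horizon (an illustrative truncation of infinity)

def last_occurrence(p):
    # Simulate independent events A_1, ..., A_N with P(A_n) = p(n) and
    # return the largest index at which an event occurs.
    last = 0
    for n in range(1, N + 1):
        if random.random() < p(n):
            last = n
    return last

print(last_occurrence(lambda n: 1 / n ** 2))  # summable case: typically small
print(last_occurrence(lambda n: 1 / n))       # divergent case: typically of order N
```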
Another proof of the Kolmogorov zero-one law will be given using the martingale convergence theorem.
Examples and Exercises
As always, be sure to try the computational exercises and proofs yourself before reading the answers and proofs in the text.
Counterexamples
Equal probability certainly does not imply equivalent events.
Consider the simple experiment of tossing a fair coin. The event that the coin lands heads and the event that the coin lands tails have the same probability, but are not equivalent.
Proof
Let $S$ denote the sample space, and $H$ the event of heads, so that $H^c$ is the event of tails. Since the coin is fair, $\P(H) = \P(H^c) = \frac{1}{2}$. But $H \bigtriangleup H^c = S$, so $\P(H \bigtriangleup H^c) = 1$, and hence $H$ and $H^c$ are as far from equivalent as possible.
Similarly, equality of distributions does not imply equivalence of random variables.
Consider the experiment of rolling a standard, fair die. Let $X$ denote the score and $Y = 7 - X$. Then $X$ and $Y$ have the same distribution but are not equivalent.
Proof
Since the die is fair, $X$ is uniformly distributed on $S = \{1, 2, 3, 4, 5, 6\}$. Also $\P(Y = k) = \P(X = 7 - k) = \frac{1}{6}$ for $k \in S$, so $Y$ also has the uniform distribution on $S$. But $\P(X = Y) = \P\left(X = \frac{7}{2}\right) = 0$, so $X$ and $Y$ are as far from equivalent as possible.
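A short simulation makes the point vivid (a sketch; the sample size is an arbitrary choice):

```python
import random
from collections import Counter

random.seed(5)
rolls = [random.randint(1, 6) for _ in range(60_000)]  # X = die score
print(Counter(rolls))                  # X: roughly 10000 of each score
print(Counter(7 - x for x in rolls))   # Y = 7 - X: the same uniform distribution
print(any(x == 7 - x for x in rolls))  # False: X = Y would force X = 7/2
```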
Consider the experiment of rolling two standard, fair dice and recording the sequence of scores $(X, Y)$. Then $X$ and $Y$ are independent and have the same distribution, but are not equivalent.
Proof
Since the dice are fair, $(X, Y)$ has the uniform distribution on $\{1, 2, 3, 4, 5, 6\}^2$. Equivalently, $X$ and $Y$ are independent, and each has the uniform distribution on $\{1, 2, 3, 4, 5, 6\}$. But $\P(X = Y) = \frac{1}{6}$, so $X$ and $Y$ are not equivalent.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Introduction
This section requires measure theory, so you may need to review the advanced sections in the chapter on Foundations and in this chapter. In particular, recall that a set $E$ almost always comes with a $\sigma$-algebra $\mathscr E$ of admissible subsets, so that $(E, \mathscr E)$ is a measurable space. Usually in fact, $E$ has a topology and $\mathscr E$ is the corresponding Borel $\sigma$-algebra, that is, the $\sigma$-algebra generated by the topology. If $E$ is countable, we almost always take $\mathscr E$ to be the collection of all subsets of $E$, and in this case $(E, \mathscr E)$ is a discrete space. The other common case is when $E$ is an uncountable measurable subset of $\R^n$ for some $n \in \N$, in which case $\mathscr E$ is the collection of measurable subsets of $E$. If $(E_1, \mathscr E_1), \, (E_2, \mathscr E_2), \ldots, (E_n, \mathscr E_n)$ are measurable spaces for some $n \in \N_+$, then the Cartesian product $E_1 \times E_2 \times \cdots \times E_n$ is given the product $\sigma$-algebra $\mathscr E_1 \otimes \mathscr E_2 \otimes \cdots \otimes \mathscr E_n$. As a special case, the Cartesian power $E^n$ is given the corresponding power $\sigma$-algebra $\mathscr E^n$.
With these preliminary remarks out of the way, suppose that $(\Omega, \mathscr F, \P)$ is a probability space, so that $\Omega$ is the set of outcomes, $\mathscr F$ the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose also that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. Here is our main definition:
A random process or stochastic process on $(\Omega, \mathscr F, \P)$ with state space $(S, \mathscr S)$ and index set $T$ is a collection of random variables $\bs X = \{X_t: t \in T\}$ such that $X_t$ takes values in $S$ for each $t \in T$.
Sometimes it's notationally convenient to write $X(t)$ instead of $X_t$ for $t \in T$. Often $T = \N$ or $T = [0, \infty)$ and the elements of $T$ are interpreted as points in time (discrete time in the first case and continuous time in the second). So then $X_t \in S$ is the state of the random process at time $t \in T$, and the index space $(T, \mathscr T)$ becomes the time space.
Since $X_t$ is itself a function from $\Omega$ into $S$, it follows that ultimately, a stochastic process is a function from $\Omega \times T$ into $S$. Stated another way, $t \mapsto X_t$ is a random function on the probability space $(\Omega, \mathscr F, \P)$. To make this precise, recall that $S^T$ is the notation sometimes used for the collection of functions from $T$ into $S$. Recall also that a natural $\sigma$-algebra used for $S^T$ is the one generated by sets of the form $\left\{f \in S^T: f(t) \in A_t \text{ for all } t \in T\right\}, \text{ where } A_t \in \mathscr S \text{ for every } t \in T \text{ and } A_t = S \text{ for all but finitely many } t \in T$ This $\sigma$-algebra, denoted $\mathscr S^T$, generalizes the ordinary power $\sigma$-algebra $\mathscr S^n$ mentioned in the opening paragraph and will be important in the discussion of existence below.
Suppose that $\bs X = \{X_t: t \in T\}$ is a stochastic process on the probability space $(\Omega, \mathscr F, \P)$ with state space $(S, \mathscr S)$ and index set $T$. Then the mapping that takes $\omega$ into the function $t \mapsto X_t(\omega)$ is measurable with respect to $(\Omega, \mathscr F)$ and $(S^T, \mathscr S^T)$.
Proof
Recall that a mapping with values in $S^T$ is measurable if and only if each of its coordinate functions is measurable. In the present context that means that we must show that the function $X_t$ is measurable with respect to $(\Omega, \mathscr F)$ and $(S, \mathscr S)$ for each $t \in T$. But of course, that follows from the very meaning of the term random variable.
For $\omega \in \Omega$, the function $t \mapsto X_t(\omega)$ is known as a sample path of the process. So $S^T$, the set of functions from $T$ into $S$, can be thought of as a set of outcomes of the stochastic process $\bs X$, a point we will return to in our discussion of existence below.
As noted in the proof of the last theorem, $X_t$ is a measurable function from $\Omega$ into $S$ for each $t \in T$, by the very meaning of the term random variable. But it does not follow in general that $(\omega, t) \mapsto X_t(\omega)$ is measurable as a function from $\Omega \times T$ into $S$. In fact, the $\sigma$-algebra on $T$ has played no role in our discussion so far. Informally, a statement about $X_t$ for a fixed $t \in T$ or even a statement about $X_t$ for countably many $t \in T$ defines an event. But it does not follow that a statement about $X_t$ for uncountably many $t \in T$ defines an event. We often want to make such statements, so the following definition is inevitable:
A stochastic process $\bs X = \{X_t: t \in T\}$ defined on the probability space $(\Omega, \mathscr F, \P)$ and with index space $(T, \mathscr T)$ and state space $(S, \mathscr S)$ is measurable if $(\omega, t) \mapsto X_t(\omega)$ is a measurable function from $\Omega \times T$ into $S$.
Every stochastic process indexed by a countable set $T$ is measurable, so the definition is only important when $T$ is uncountable, and in particular for $T = [0, \infty)$.
Equivalent Processes
Our next goal is to study different ways that two stochastic processes, with the same state and index spaces, can be equivalent. We will assume that the diagonal $D = \{(x, x): x \in S\} \in \mathscr S^2$, an assumption that almost always holds in applications, and in particular for the discrete and Euclidean spaces that are most important to us. Sufficient conditions are that $\mathscr S$ have a sub $\sigma$-algebra that is countably generated and contains all of the singleton sets, properties that hold for the Borel $\sigma$-algebra when the topology on $S$ is locally compact, Hausdorff, and has a countable base.
First, we often feel that we understand a random process $\bs X = \{X_t: t \in T\}$ well if we know the finite dimensional distributions, that is, if we know the distribution of $\left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right)$ for every choice of $n \in \N_+$ and $(t_1, t_2, \ldots, t_n) \in T^n$. Thus, we can compute $\P\left[\left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right) \in A\right]$ for every $n \in \N_+$, $(t_1, t_2, \ldots, t_n) \in T^n$, and $A \in \mathscr S^n$. Using various rules of probability, we can compute the probabilities of many events involving infinitely many values of the index parameter $t$ as well. With this idea in mind, we have the following definition:
Random processes $\bs X = \{X_t: t \in T\}$ and $\bs{Y} = \{Y_t: t \in T\}$ with state space $(S, \mathscr S)$ and index set $T$ are equivalent in distribution if they have the same finite dimensional distributions. This defines an equivalence relation on the collection of stochastic processes with this state space and index set. That is, if $\bs X$, $\bs Y$, and $\bs Z$ are such processes then
1. $\bs X$ is equivalent in distribution to $\bs X$ (the reflexive property)
2. If $\bs X$ is equivalent in distribution to $\bs{Y}$ then $\bs{Y}$ is equivalent in distribution to $\bs X$ (the symmetric property)
3. If $\bs X$ is equivalent in distribution to $\bs{Y}$ and $\bs{Y}$ is equivalent in distribution to $\bs{Z}$ then $\bs X$ is equivalent in distribution to $\bs{Z}$ (the transitive property)
Note that since only the finite-dimensional distributions of the processes $\bs X$ and $\bs Y$ are involved in the definition, the processes need not be defined on the same probability space. Thus, equivalence in distribution partitions the collection of all random processes with a given state space and index set into mutually disjoint equivalence classes. But of course, we already know that two random variables can have the same distribution but be very different as variables (functions on the sample space). Clearly, the same statement applies to random processes.
Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p = \frac{1}{2}$. Let $Y_n = 1 - X_n$ for $n \in \N_+$. Then $\bs{Y} = (Y_1, Y_2, \ldots)$ is equivalent in distribution to $\bs X$ but $\P(X_n \ne Y_n \text{ for every } n \in \N_+) = 1$
Proof
By the meaning of Bernoulli trials, $\bs X$ is a sequence of independent indicator random variables with $\P(X_n = 1) = \frac{1}{2}$ for each $n \in \N_+$. It follows that $\bs{Y}$ is also a Bernoulli trials sequence with success parameter $\frac{1}{2}$, so $\bs X$ and $\bs{Y}$ are equivalent in distribution. Also, of course, the state set is $\{0, 1\}$ and $Y_n = 1$ if and only if $X_n = 0$.
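The following Python sketch (with an arbitrary run length) checks both claims empirically: the finite dimensional distributions of $\bs X$ and $\bs{Y}$ agree, while the two paths differ at every index.

```python
import random
from collections import Counter

random.seed(6)
counts_x, counts_y = Counter(), Counter()
for _ in range(80_000):
    x = tuple(random.randint(0, 1) for _ in range(3))  # (X_1, X_2, X_3)
    y = tuple(1 - xi for xi in x)                      # (Y_1, Y_2, Y_3)
    counts_x[x] += 1
    counts_y[y] += 1
    assert all(xi != yi for xi, yi in zip(x, y))  # the paths disagree everywhere
print(sorted(counts_x.items()))  # both empirical distributions are
print(sorted(counts_y.items()))  # roughly uniform on {0,1}^3
```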
Motivated by this example, let's look at another, stronger way that random processes can be equivalent. First recall that random variables $X$ and $Y$ on $(\Omega, \mathscr F, \P)$, with values in $S$, are equivalent if $\P(X = Y) = 1$.
Suppose that $\bs X = \{X_t: t \in T\}$ and $\bs{Y} = \{Y_t: t \in T\}$ are stochastic processes defined on the same probability space $(\Omega, \mathscr F, \P)$ and both with state space $(S, \mathscr S)$ and index set $T$. Then $\bs{Y}$ is a version of $\bs X$ if $Y_t$ is equivalent to $X_t$ (so that $\P(X_t = Y_t) = 1$) for every $t \in T$. This defines an equivalence relation on the collection of stochastic processes on the same probability space and with the same state space and index set. That is, if $\bs X$, $\bs Y$, and $\bs Z$ are such processes then
1. $\bs X$ is a version of $\bs X$ (the reflexive property)
2. If $\bs X$ is a version of $\bs{Y}$ then $\bs{Y}$ is a version of $\bs X$ (the symmetric property)
3. If $\bs X$ is a version of $\bs{Y}$ and $\bs{Y}$ is a version of $\bs{Z}$ then $\bs X$ is a version of $\bs{Z}$ (the transitive property)
Proof
Note that $(X_t, Y_t)$ is a random variable with values in $S^2$ (and so the function $\omega \mapsto (X_t(\omega), Y_t(\omega))$ is measurable). The event $\{X_t = Y_t\}$ is the inverse image of the diagonal $D \in \mathscr S^2$ under this mapping, and so the definition makes sense.
So the version of relation partitions the collection of stochastic processes on a given probability space and with a given state space and index set into mutually disjoint equivalence classes.
Suppose again that $\bs X = \{X_t: t \in T\}$ and $\bs{Y} = \{Y_t: t \in T\}$ are random processes on $(\Omega, \mathscr F, \P)$ with state space $(S, \mathscr S)$ and index set $T$. If $\bs{Y}$ is a version of $\bs X$ then $\bs{Y}$ and $\bs X$ are equivalent in distribution.
Proof
Suppose that $(t_1, t_2, \ldots, t_n) \in T^n$ and that $A \in \mathscr S^n$. Recall that the intersection of a finite (or even countably infinite) collection of events with probability 1 still has probability 1. Hence \begin{align} \P\left[\left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right) \in A\right] & = \P\left[\left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right) \in A, \, X_{t_1} = Y_{t_1}, X_{t_2} = Y_{t_2}, \ldots, X_{t_n} = Y_{t_n} \right] \\ & = \P\left[\left(Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}\right) \in A, \, X_{t_1} = Y_{t_1}, X_{t_2} = Y_{t_2}, \ldots, X_{t_n} = Y_{t_n} \right] = \P\left[\left(Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}\right) \in A\right] \end{align}
As noted in the proof, a countable intersection of events with probability 1 still has probability 1. Hence if $T$ is countable and random processes $\bs X$ is a version of $\bs{Y}$ then $\P(X_t = Y_t \text{ for all } t \in T) = 1$ so $\bs X$ and $\bs{Y}$ really are essentially the same random process. But when $T$ is uncountable the result in the displayed equation may not be true, and $\bs X$ and $\bs{Y}$ may be very different as random functions on $T$. Here is a simple example:
Suppose that $\Omega = T = [0, \infty)$, $\mathscr F = \mathscr T$ is the $\sigma$-algebra of Borel measurable subsets of $[0, \infty)$, and $\P$ is any continuous probability measure on $(\Omega, \mathscr F)$. Let $S = \{0, 1\}$ (with all subsets measurable, of course). For $t \in T$ and $\omega \in \Omega$, define $X_t(\omega) = \bs{1}_t(\omega)$ and $Y_t(\omega) = 0$. Then $\bs X = \{X_t: t \in T\}$ is a version of $\bs{Y} = \{Y_t: t \in T\}$, but $\P(X_t = Y_t \text{ for all } t \in T) = 0$.
Proof
For $t \in [0, \infty)$, $\P(X_t \ne Y_t) = \P\{t\} = 0$ since $\P$ is a continuous measure. But $\{\omega \in \Omega: X_t(\omega) = Y_t(\omega) \text{ for all } t \in T\} = \emptyset$.
Motivated by this example, we have our strongest form of equivalence:
Suppose that $\bs X = \{X_t: t \in T\}$ and $\bs{Y} = \{Y_t: t \in T\}$ are measurable random processes on the probability space $(\Omega, \mathscr F, \P)$ and with state space $(S, \mathscr S)$ and index space $(T, \mathscr T)$. Then $\bs X$ is indistinguishable from $\bs{Y}$ if $\P(X_t = Y_t \text{ for all } t \in T) = 1$. This defines an equivalence relation on the collection of measurable stochastic processes defined on the same probability space and with the same state and index spaces. That is, if $\bs X$, $\bs Y$, and $\bs Z$ are such processes then
1. $\bs X$ is indistinguishable from $\bs X$ (the reflexive property)
2. If $\bs X$ is indistinguishable from $\bs{Y}$ then $\bs{Y}$ is indistinguishable from $\bs X$ (the symmetric property)
3. If $\bs X$ is indistinguishable from $\bs{Y}$ and $\bs{Y}$ is indistinguishable from $\bs{Z}$ then $\bs X$ is indistinguishable from $\bs{Z}$ (the transitive property)
Details
The measurability requirement for the stochastic processes is needed to ensure that $\{X_t = Y_t \text{ for all } t \in T\}$ is a valid event. To see this, note that $(\omega, t) \mapsto (X_t(\omega), Y_t(\omega))$ is measurable, as a function from $\Omega \times T$ into $S^2$. As before, let $D = \{(x, x): x \in S\}$ denote the diagonal. Then $D^c \in \mathscr S^2$ and the inverse image of $D^c$ under our mapping is $\{(\omega, t) \in \Omega \times T: X_t(\omega) \ne Y_t(\omega)\} \in \mathscr F \otimes \mathscr T$ The projection of this set onto $\Omega$ is $\{\omega \in \Omega: X_t(\omega) \ne Y_t(\omega) \text{ for some } t \in T\} \in \mathscr F$ since the projection of a measurable set in the product space is also measurable. Hence the complementary event $\{\omega \in \Omega: X_t(\omega) = Y_t(\omega) \text{ for all } t \in T\} \in \mathscr F$
So the indistinguishable from relation partitions the collection of measurable stochastic processes on a given probability space and with given state space and index space into mutually disjoint equivalence classes. Trivially, if $\bs X$ is indistinguishable from $\bs{Y}$, then $\bs X$ is a version of $\bs{Y}$. As noted above, when $T$ is countable, the converse is also true, but not, as our previous example shows, when $T$ is uncountable. So to summarize, indistinguishable from implies version of implies equivalent in distribution, but none of the converse implications hold in general.
The Kolmogorov Construction
In applications, a stochastic process is often modeled by giving various distributional properties that the process should satisfy. So the basic existence problem is to construct a process that has these properties. More specifically, how can we construct random processes with specified finite dimensional distributions? Let's start with the simplest case, one that we have seen several times before, and build up from there. Our simplest case is to construct a single random variable with a specified distribution.
Suppose that $(S, \mathscr S, P)$ is a probability space. Then there exists a random variable $X$ on a probability space $(\Omega, \mathscr F, \P)$ such that $X$ takes values in $S$ and has distribution $P$.
Proof
The proof is utterly trivial. Let $(\Omega, \mathscr F, \P) = (S, \mathscr S, P)$ and define $X: \Omega \to S$ by $X(\omega) = \omega$, so that $X$ is the identity function. Then $\{X \in A\} = A$ and so $\P(X \in A) = P(A)$ for $A \in \mathscr S$.
In spite of its triviality the last result contains the seeds of everything else we will do in this discussion. Next, let's see how to construct a sequence of independent random variables with specified distributions.
Suppose that $P_i$ is a probability measure on the measurable space $(S, \mathscr S)$ for $i \in \N_+$. Then there exists an independent sequence of random variables $(X_1, X_2, \ldots)$ on a probability space $(\Omega, \mathscr F, \P)$ such that $X_i$ takes values in $S$ and has distribution $P_i$ for $i \in \N_+$.
Proof
Let $\Omega = S^\infty = S \times S \times \cdots$. Next let $\mathscr F = \mathscr S^\infty$, the corresponding product $\sigma$-algebra. Recall that this is the $\sigma$-algebra generated by sets of the form $A_1 \times A_2 \times \cdots \text{ where } A_i \in \mathscr S \text{ for each } i \in \N_+ \text{ and } A_i = S \text{ for all but finitely many } i \in \N_+$ Finally, let $\P = P_1 \otimes P_2 \otimes \cdots$, the corresponding product measure on $(\Omega, \mathscr F)$. Recall that this is the unique probability measure that satisfies $\P(A_1 \times A_2 \times \cdots) = P_1(A_1) P_2(A_2) \cdots$ where $A_1 \times A_2 \times \cdots$ is a set of the type in the first displayed equation. Now define $X_i$ on $\Omega$ by $X_i(\omega_1, \omega_2, \ldots) = \omega_i$, for $i \in \N_+$, so that $X_i$ is simply the coordinate function for index $i$. If $A_1 \times A_2 \times \cdots$ is a set of the type in the first displayed equation then $\{X_1 \in A_1, X_2 \in A_2, \ldots\} = A_1 \times A_2 \times \cdots$ and so by the definition of the product measure, $\P(X_1 \in A_1, X_2 \in A_2, \cdots) = P_1(A_1) P_2(A_2) \cdots$ It follows that $(X_1, X_2, \ldots)$ is a sequence of independent variables and that $X_i$ has distribution $P_i$ for $i \in \N_+$.
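In computational terms, the product construction amounts to simulating each coordinate of $\omega$ independently from its factor distribution. Here is a minimal Python sketch; the two factor distributions are illustrative choices, not part of the text.

```python
import random

random.seed(7)

def sample_omega():
    # One point of Omega = S x S x ...; we simulate just two coordinates.
    x1 = 1 if random.random() < 0.3 else 0  # P_1 = Bernoulli(0.3)
    x2 = random.randint(1, 6)               # P_2 = uniform on {1, ..., 6}
    return (x1, x2)

# X_i is the coordinate function; the product measure makes the
# coordinates independent: P(X_1 = 1, X_2 = 3) = P_1({1}) P_2({3}).
n = 200_000
joint = sum(1 for _ in range(n) if sample_omega() == (1, 3)) / n
print(joint, 0.3 * (1 / 6))  # both approximately 0.05
```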
If you looked at the proof of the last two results you might notice that the last result can be viewed as a special case of the one before, since $\bs X = (X_1, X_2, \ldots)$ is simply the identity function on $\Omega = S^\infty$. The important step is the existence of the product measure $\P$ on $(\Omega, \mathscr F)$.
The full generalization of these results is known as the Kolmogorov existence theorem (named for Andrey Kolmogorov). We start with the state space $(S, \mathscr S)$ and the index set $T$. The theorem states that if we specify the finite dimensional distributions in a consistent way, then there exists a stochastic process defined on a suitable probability space that has the given finite dimensional distributions. The consistency condition is a bit clunky to state in full generality, but the basic idea is very easy to understand. Suppose that $s$ and $t$ are distinct elements in $T$ and that we specify the distribution (probability measure) $P_s$ of $X_s$, $P_t$ of $X_t$, $P_{s,t}$ of $(X_s, X_t)$, and $P_{t,s}$ of $(X_t, X_s)$. Then clearly we must specify these so that $P_s(A) = P_{s,t}(A \times S), \quad P_t(B) = P_{s,t}(S \times B)$ for all $A, \, B \in \mathscr S$. Clearly we also must have $P_{s,t}(C) = P_{t,s}(C^\prime)$ for all $C \in \mathscr S^2$, where $C^\prime = \{(y, x): (x, y) \in C\}$.
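For a finite state space the two-index consistency conditions are easy to check by hand. In the Python sketch below, the joint distribution $P_{s,t}$ on $S = \{0, 1\}$ is an arbitrary illustrative matrix; the marginals are row and column sums, and $P_{t,s}$ is the transpose.

```python
# P_{s,t} as a matrix: rows index the value of X_s, columns the value of X_t.
P_st = [[0.125, 0.25],
        [0.375, 0.25]]

P_s = [sum(row) for row in P_st]                             # P_s(A) = P_{s,t}(A x S)
P_t = [sum(P_st[i][j] for i in range(2)) for j in range(2)]  # P_t(B) = P_{s,t}(S x B)
P_ts = [[P_st[j][i] for j in range(2)] for i in range(2)]    # P_{t,s}(C') = P_{s,t}(C)

print(P_s, P_t)  # [0.375, 0.625] [0.5, 0.5]
# Consistency: the marginals of P_{t,s} are (P_t, P_s), in that order.
print([sum(row) for row in P_ts] == P_t,
      [sum(P_ts[i][j] for i in range(2)) for j in range(2)] == P_s)
```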
To state the consistency conditions in general, we need some notation. For $n \in \N_+$, let $T^{(n)} \subset T^n$ denote the set of $n$-tuples of distinct elements of $T$, and let $\bs{T} = \bigcup_{n=1}^\infty T^{(n)}$ denote the set of all finite sequences of distinct elements of $T$. If $n \in \N_+$, $\bs t = (t_1, t_2, \ldots, t_n) \in T^{(n)}$ and $\pi$ is a permutation of $\{1, 2, \ldots, n\}$, let $\bs t \pi$ denote the element of $T^{(n)}$ with coordinates $(\bs t \pi)_i = t_{\pi(i)}$. That is, we permute the coordinates of $\bs t$ according to $\pi$. If $C \in \mathscr S^n$, let $\pi C = \left\{(x_1, x_2, \ldots, x_n) \in S^n: \left(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)}\right) \in C\right\} \in \mathscr S^n$ Finally, if $n \gt 1$, let $\bs t_-$ denote the vector $(t_1, t_2, \ldots, t_{n-1}) \in T^{(n-1)}$.
Now suppose that $P_\bs t$ is a probability measure on $(S^n, \mathscr S^n)$ for each $n \in \N_+$ and $\bs t \in T^{(n)}$. The idea, of course, is that we want the collection $\mathscr P = \{P_\bs t: \bs t \in \bs{T}\}$ to be the finite dimensional distributions of a random process with index set $T$ and state space $(S, \mathscr S)$. Here is the critical definition:
The collection of probability distributions $\mathscr P$ relative to $T$ and $(S, \mathscr S)$ is consistent if
1. $P_{\bs t \pi}(C) = P_\bs t(\pi C)$ for every $n \in \N_+$, $\bs t \in T^{(n)}$, permutation $\pi$ of $\{1, 2, \ldots, n\}$, and measurable $C \subseteq S^n$.
2. $P_{\bs t_-}(C) = P_\bs t(C \times S)$ for every $n > 1$, $\bs t \in T^{(n)}$, and measurable $C \subseteq S^{n-1}$
With the proper definition of consistency, we can state the fundamental theorem.
Kolmogorov Existence Theorem. If $\mathscr P$ is a consistent collection of probability distributions relative to the index set $T$ and the state space $(S, \mathscr S)$, then there exists a probability space $(\Omega, \mathscr F, \P)$ and a stochastic process $\bs X = \{X_t: t \in T\}$ on this probability space such that $\mathscr P$ is the collection of finite dimensional distributions of $\bs X$.
Proof sketch
Let $\Omega = S^T$, the set of functions from $T$ to $S$. Such functions are the outcomes of the stochastic process. Let $\mathscr F = \mathscr S^T$, the product $\sigma$-algebra, generated by sets of the form $B = \{\omega \in \Omega: \omega(t) \in A_t \text{ for all } t \in T\}$ where $A_t \in \mathscr S$ for all $t \in T$ and $A_t = S$ for all but finitely many $t \in T$. We know how our desired probability measure $\P$ should work on the sets that generate $\mathscr F$. Specifically, suppose that $B$ is a set of the type in the displayed equation, and $A_t = S$ except for $\bs t = (t_1, t_2, \ldots, t_n) \in T^{(n)}$. Then we want $\P(B) = P_\bs t(A_{t_1} \times A_{t_2} \times \cdots \times A_{t_n})$ Basic existence and uniqueness theorems in measure theory that we discussed earlier, and the consistency of $\mathscr P$, guarantee that $\P$ can be extended to a probability measure on all of $\mathscr F$. Finally, for $t \in T$ we define $X_t: \Omega \to S$ by $X_t(\omega) = \omega(t)$ for $\omega \in \Omega$, so that $X_t$ is simply the coordinate function of index $t$. Thus, we have a stochastic process $\bs X = \{X_t: t \in T\}$ with state space $(S, \mathscr S)$, defined on the probability space $(\Omega, \mathscr F, \P)$, with $\mathscr P$ as the collection of finite dimensional distributions.
Note that except for the more complicated notation, the construction is very similar to the one for a sequence of independent variables. Again, $\bs X$ is essentially the identity function on $\Omega = S^T$. The important and more difficult part is the construction of the probability measure $\P$ on $(\Omega, \mathscr F)$.
Applications
Our last discussion is a summary of the stochastic processes that are studied in this text. All are classics and are immensely important in applications.
Random processes associated with Bernoulli trials include
1. the Bernoulli trials sequence itself
2. the sequence of binomial variables
3. the sequence of geometric variables
4. the sequence of negative binomial variables
5. the simple random walk
Construction
The Bernoulli trials sequence in (a) is a sequence of independent, identically distributed indicator random variables, and so can be constructed as in the theorem above on independent sequences. The random processes in (b)–(e) are constructed from the Bernoulli trials sequence.
Random processes associated with the Poisson model include
1. the sequence of inter-arrival times
2. the sequence of arrival times
3. the counting process on $[0, \infty)$, both in the homogeneous and non-homogeneous cases
4. a compound Poisson process
5. the counting process on a general measure space
Constructions
The random process in (a) is a sequence of independent random variables with a common exponential distribution, and so can be constructed as in the theorem above on independent sequences. The processes in (b) and (c) can be constructed from the sequence in (a).
Random processes associated with renewal theory include
1. the sequence of inter-arrival times
2. the sequence of arrival times
3. the counting process on $[0, \infty)$
Markov chains form a very important family of random processes, as do Brownian motion and related processes. We will study these in subsequent chapters.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Introduction
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process with state space $(S, \mathscr{S})$ defined on an underlying probability space $(\Omega, \mathscr{F}, \P)$. To review, $\Omega$ is the set of outcomes, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on $(\Omega, \mathscr{F})$. Also $S$ is the set of states, and $\mathscr{S}$ the $\sigma$-algebra of admissible subsets of $S$. Usually, $S$ is a topological space and $\mathscr{S}$ the Borel $\sigma$-algebra generated by the open subsets of $S$. A standard set of assumptions is that the topology is locally compact, Hausdorff, and has a countable base, which we will abbreviate by LCCB. For the index set, we assume that either $T = \N$ or that $T = [0, \infty)$ and as usual in these cases, we interpret the elements of $T$ as points of time. The set $T$ is also given a topology, the discrete topology in the first case and the standard Euclidean topology in the second case, and then the Borel $\sigma$-algebra $\mathscr{T}$. So in discrete time with $T = \N$, $\mathscr{T} = \mathscr{P}(T)$, the power set of $T$, so every subset of $T$ is measurable, as is every function from $T$ into another measurable space. Finally, $X_t$ is a random variable and so by definition is measurable with respect to $\mathscr{F}$ and $\mathscr{S}$ for each $t \in T$. We interpret $X_t$ as the state of some random system at time $t \in T$. Many important concepts involving $\bs{X}$ are based on how the future behavior of the process depends on the past behavior, relative to a given current time.
For $t \in T$, let $\mathscr{F}_t = \sigma\left\{X_s: s \in T, \; s \le t\right\}$, the $\sigma$-algebra of events that can be defined in terms of the process up to time $t$. Roughly speaking, for a given $A \in \mathscr{F}_t$, we can tell whether or not $A$ has occurred if we are allowed to observe the process up to time $t$. The family of $\sigma$-algebras $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ has two critical properties: the family is increasing in $t \in T$, relative to the subset partial order, and all of the $\sigma$-algebras are sub $\sigma$-algebras of $\mathscr{F}$. That is for $s, \, t \in T$ with $s \le t$, we have $\mathscr{F}_s \subseteq \mathscr{F}_t \subseteq \mathscr{F}$.
Filtrations
Basic Definitions
Sometimes we need $\sigma$-algebras that are a bit larger than the ones in the last paragraph. For example, there may be other random variables that we get to observe, as time goes by, besides the variables in $\bs{X}$. Sometimes, particularly in continuous time, there are technical reasons for somewhat different $\sigma$-algebras. Finally, we may want to describe how our information grows, as a family of $\sigma$-algebras, without reference to a random process. For the remainder of this section, we have a fixed measurable space $(\Omega, \mathscr{F})$ which we again think of as a sample space, and the time space $(T, \mathscr{T})$ as described above.
A family of $\sigma$-algebras $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ if $s, \, t \in T$ and $s \le t$ imply $\mathscr{F}_s \subseteq \mathscr{F}_t \subseteq \mathscr{F}$. The object $\left(\Omega, \mathscr{F}, \mathfrak{F}\right)$ is a filtered sample space. If $\P$ is a probability measure on $(\Omega, \mathscr{F})$, then $\left(\Omega, \mathscr{F}, \mathfrak{F}, \P\right)$ is a filtered probability space.
So a filtration is simply an increasing family of sub-$\sigma$-algebras of $\mathscr{F}$, indexed by $T$. We think of $\mathscr{F}_t$ as the $\sigma$-algebra of events up to time $t \in T$. The larger the $\sigma$-algebras in a filtration, the more events that are available, so the following relation on filtrations is natural.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$ are filtrations on $(\Omega, \mathscr{F})$. We say that $\mathfrak{F}$ is coarser than $\mathfrak{G}$ and $\mathfrak{G}$ is finer than $\mathfrak{F}$, and we write $\mathfrak{F} \preceq \mathfrak{G}$, if $\mathscr{F}_t \subseteq \mathscr{G}_t$ for all $t \in T$. The relation $\preceq$ is a partial order on the collection of filtrations on $(\Omega, \mathscr{F})$. That is, if $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$, and $\mathfrak{H} = \{\mathscr{H}_t: t \in T\}$ are filtrations then
1. $\mathfrak{F} \preceq \mathfrak{F}$, the reflexive property.
2. If $\mathfrak{F} \preceq \mathfrak{G}$ and $\mathfrak{G} \preceq \mathfrak{F}$ then $\mathfrak{F} = \mathfrak{G}$, the antisymmetric property.
3. If $\mathfrak{F} \preceq \mathfrak{G}$ and $\mathfrak{G} \preceq \mathfrak{H}$ then $\mathfrak{F} \preceq \mathfrak{H}$, the transitive property.
Proof
The proof is a simple consequence of the fact that the subset relation defines a partial order.
1. $\mathscr{F}_t \subseteq \mathscr{F}_t$ for each $t \in T$ so $\mathfrak{F} \preceq \mathfrak{F}$.
2. If $\mathfrak{F} \preceq \mathfrak{G}$ and $\mathfrak{G} \preceq \mathfrak{F}$ then $\mathscr{F}_t \subseteq \mathscr{G}_t$ and $\mathscr{G}_t \subseteq \mathscr{F}_t$ for each $t \in T$. Hence $\mathscr{F}_t = \mathscr{G}_t$ for each $t \in T$ and so $\mathfrak{F} = \mathfrak{G}$.
3. If $\mathfrak{F} \preceq \mathfrak{G}$ and $\mathfrak{G} \preceq \mathfrak{H}$ then $\mathscr{F}_t \subseteq \mathscr{G}_t$ and $\mathscr{G}_t \subseteq \mathscr{H}_t$ for each $t \in T$. Hence $\mathscr{F}_t \subseteq \mathscr{H}_t$ for each $t \in T$ and so $\mathfrak{F} \preceq \mathfrak{H}$
So the coarsest filtration on $(\Omega, \mathscr{F})$ is the one where $\mathscr{F}_t = \{\Omega, \emptyset\}$ for every $t \in T$ while the finest filtration is the one where $\mathscr{F}_t = \mathscr{F}$ for every $t \in T$. In the first case, we gain no information as time evolves, and in the second case, we have complete information from the beginning of time. Usually neither of these is realistic.
It's also natural to consider the $\sigma$-algebra that encodes our information over all time.
For a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ on $(\Omega, \mathscr{F})$, define $\mathscr{F}_\infty = \sigma \left( \bigcup\left\{\mathscr{F}_t: t \in T\right\} \right)$. Then
1. $\mathscr{F}_\infty = \sigma \left( \bigcup\left\{\mathscr{F}_t: t \in T, t \ge s\right\} \right)$ for $s \in T$.
2. $\mathscr{F}_t \subseteq \mathscr{F}_\infty$ for $t \in T$.
Proof
These results follows since the $\sigma$-algebras in a filtration are increasing in time.
Of course, it may be the case that $\mathscr{F}_\infty = \mathscr{F}$, but not necessarily. Recall that the intersection of a collection of $\sigma$-algebras on $(\Omega, \mathscr{F})$ is another $\sigma$-algebra. We can use this to create new filtrations from a collection of given filtrations.
Suppose that $\mathfrak{F}_i = \left\{\mathscr{F}^i_t: t \in T\right\}$ is a filtration on $(\Omega, \mathscr{F})$ for each $i$ in a nonempty index set $I$. Then $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ where $\mathscr{F}_t = \bigcap_{i \in I} \mathscr{F}^i_t$ for $t \in T$ is also a filtration on $(\Omega, \mathscr{F})$. This filtration is sometimes denoted $\mathfrak{F} = \bigwedge_{i \in I} \mathfrak{F}_i$, and is the finest filtration that is coarser than $\mathfrak{F}_i$ for every $i \in I$.
Proof
Suppose $s, \, t \in T$ with $s \le t$. Then $\mathscr{F}^i_s \subseteq \mathscr{F}^i_t \subseteq \mathscr{F}$ for each $i \in I$ so it follows that $\bigcap_{i \in I} \mathscr{F}^i_s \subseteq \bigcap_{i \in I} \mathscr{F}^i_t \subseteq \mathscr{F}$.
Unions of $\sigma$-algebras are not in general $\sigma$-algebras, but we can construct a new filtration from a given collection of filtrations using unions in a natural way.
Suppose again that $\mathfrak{F}_i = \left\{\mathscr{F}^i_t: t \in T\right\}$ is a filtration on $(\Omega, \mathscr{F})$ for each $i$ in a nonempty index set $I$. Then $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ where $\mathscr{F}_t = \sigma\left(\bigcup_{i \in I} \mathscr{F}^i_t\right)$ for $t \in T$ is also a filtration on $(\Omega, \mathscr{F})$. This filtration is sometimes denoted $\mathfrak{F} = \bigvee_{i \in I} \mathfrak{F}_i$, and is the coarsest filtration that is finer than $\mathfrak{F}_i$ for every $i \in I$.
Proof
Suppose $s, \, t \in T$ with $s \le t$. Then $\mathscr{F}^i_s \subseteq \mathscr{F}^i_t \subseteq \mathscr{F}$ for each $i \in I$ so it follows that $\bigcup_{i \in I} \mathscr{F}^i_s \subseteq \bigcup_{i \in I} \mathscr{F}^i_t \subseteq \mathscr{F}$, and hence $\sigma\left(\bigcup_{i \in I} \mathscr{F}^i_s\right) \subseteq \sigma\left(\bigcup_{i \in I} \mathscr{F}^i_t\right) \subseteq \mathscr{F}$.
Stochastic Processes
Note again that we can have a filtration without an underlying stochastic process in the background. However, we usually do have a stochastic process $\bs{X} = \{X_t: t \in T\}$, and in this case the filtration $\mathfrak{F}^0 = \{\mathscr{F}^0_t: t \in T\}$ where $\mathscr{F}^0_t = \sigma\{X_s: s \in T, \, s \le t\}$ is the natural filtration associated with $\bs{X}$. More generally, the following definition is appropriate.
A stochastic process $\bs{X} = \{X_t: t \in T\}$ on $(\Omega, \mathscr{F})$ is adapted to a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ on $(\Omega, \mathscr{F})$ if $X_t$ is measurable with respect to $\mathscr{F}_t$ for each $t \in T$.
Equivalently, $\bs{X}$ is adapted to $\mathfrak{F}$ if $\mathfrak{F}$ is finer than $\mathfrak{F}^0$, the natural filtration associated with $\bs{X}$. That is, $\sigma\{X_s: s \in T, \; s \le t\} \subseteq \mathscr{F}_t$ for each $t \in T$. So clearly, if $\bs{X}$ is adapted to a filtration, then it is adapted to any finer filtration, and $\mathfrak{F}^0$ is the coarsest filtration to which $\bs{X}$ is adapted. The basic idea behind the definition is that if the filtration $\mathfrak{F}$ encodes our information as time goes by, then the process $\bs{X}$ is observable. In discrete time, there is a related definition.
Suppose that $T = \N$. A stochastic process $\bs{X} = \{X_n: n \in \N\}$ is predictable by the filtration $\mathfrak{F} = \{\mathscr{F}_n: n \in \N\}$ if $X_{n +1}$ is measurable with respect to $\mathscr{F}_n$ for all $n \in \N$.
Clearly if $\bs{X}$ is predictable by $\mathfrak{F}$ then $\bs{X}$ is adapted to $\mathfrak{F}$. But predictable is better than adapted, in the sense that if $\mathfrak{F}$ encodes our information as time goes by, then we can look one step into the future in terms of $\bs{X}$: at time $n$ we can determine $X_{n+1}$. The concept of predictability can be extended to continuous time, but the definition is much more complicated.
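In discrete time with a finite sample space, filtrations can be made completely concrete: a $\sigma$-algebra corresponds to a partition of $\Omega$, and the natural filtration of a process corresponds to the increasingly fine partitions determined by the observed values. Here is a small Python sketch for three coin tosses; the setup is our own illustrative choice.

```python
from itertools import product

# Omega = {0,1}^3, the outcomes of three coin tosses, with X_n(omega) the
# n-th coordinate. The sigma-algebra F_n of the natural filtration is
# generated by the partition of Omega according to the first n tosses.
Omega = list(product([0, 1], repeat=3))

def partition(n):
    blocks = {}
    for omega in Omega:
        blocks.setdefault(omega[:n], []).append(omega)
    return list(blocks.values())

for n in range(4):
    print(n, len(partition(n)), partition(n))
# n = 0 gives one block (the trivial sigma-algebra); each step refines the
# previous partition, so the corresponding sigma-algebras increase, which
# is exactly the filtration property. The process is adapted since X_n is
# constant on each block of the n-th partition.
```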
Note that ultimately, a stochastic process $\bs{X} = \{X_t: t \in T\}$ with sample space $(\Omega, \mathscr{F})$ and state space $(S, \mathscr{S})$ can be viewed as a function from $\Omega \times T$ into $S$, so $X_t(\omega) \in S$ is the state at time $t \in T$ corresponding to the outcome $\omega \in \Omega$. By definition, $\omega \mapsto X_t(\omega)$ is measurable for each $t \in T$, but it is often necessary for the process to be jointly measurable in $\omega$ and $t$.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process with sample space $(\Omega, \mathscr{F})$ and state space $(S, \mathscr{S})$. Then $\bs{X}$ is measurable if $\bs{X}: \Omega \times T \to S$ is measurable with respect to $\mathscr{F} \otimes \mathscr{T}$ and $\mathscr{S}$.
When we have a filtration, as we usually do, there is a stronger condition that is natural. Let $T_t = \{s \in T: s \le t\}$ for $t \in T$, and let $\mathscr{T}_t = \{A \cap T_t: A \in \mathscr{T}\}$ be the corresponding induced $\sigma$-algebra.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process with sample space $(\Omega, \mathscr{F})$ and state space $(S, \mathscr{S})$, and that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration. Then $\bs{X}$ is progressively measurable relative to $\mathfrak{F}$ if $\bs{X}: \Omega \times T_t \to S$ is measurable with respect to $\mathscr{F}_t \otimes \mathscr{T}_t$ and $\mathscr{S}$ for each $t \in T$.
Clearly if $\bs{X}$ is progressively measurable with respect to a filtration, then it is progressively measurable with respect to any finer filtration. Of course, when $T$ is discrete, any process $\bs{X}$ is measurable, and any process $\bs{X}$ adapted to $\mathfrak{F}$ is progressively measurable, so these definitions are only of interest in the case of continuous time.
Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process with sample space $(\Omega, \mathscr{F})$ and state space $(S, \mathscr{S})$, and that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration. If $\bs{X}$ is progressively measurable relative to $\mathfrak{F}$ then
1. $\bs{X}$ is measurable.
2. $\bs{X}$ is adapted to $\mathfrak{F}$.
Proof
Suppose that $\bs{X}$ is progressively measurable relative to $\mathfrak{F}$.
1. If $A \in \mathscr{S}$ then $\bs{X}^{-1}(A) = \{(\omega, t) \in \Omega \times T: X_t(\omega) \in A\} = \bigcup_{n=1}^\infty \{(\omega, t) \in \Omega \times T_n: X_t(\omega) \in A\}$ By assumption, the $n$th term in the union is in $\mathscr{F}_n \otimes \mathscr{T}_n \subseteq \mathscr{F} \otimes \mathscr{T}$, so the union is in $\mathscr{F} \otimes \mathscr{T}$.
2. Suppose that $t \in T$. Then $\bs{X}: \Omega \times T_t \to S$ is measurable with respect to $\mathscr{F}_t \otimes \mathscr{T}_t$ and $\mathscr{S}$. But $X_t: \Omega \to S$ is just the cross section of this function at $t$ and hence is measurable with respect to $\mathscr{F}_t$ and $\mathscr{S}$.
When the state space is a topological space (which is usually the case), then as you might guess, there is a natural link between continuity of the sample paths and progressive measurability.
Suppose that $S$ has an LCCB topology and that $\mathscr{S}$ is the $\sigma$-algebra of Borel sets. Suppose also that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is right continuous. Then $\bs{X}$ is progressively measurable relative to the natural filtration $\mathfrak{F}^0$.
So if $\bs{X}$ is right continuous, then $\bs{X}$ is progressively measurable with respect to any filtration to which $\bs{X}$ is adapted. Recall that in the previous section, we studied different ways that two stochastic processes can be equivalent. The following example illustrates some of the subtleties of processes in continuous time.
Suppose that $\Omega = T = [0, \infty)$, $\mathscr{F} = \mathscr{T}$ is the $\sigma$-algebra of Borel measurable subsets of $[0, \infty)$, and $\P$ is any continuous probability measure on $(\Omega, \mathscr{F})$. Let $S = \{0, 1\}$ and $\mathscr{S} = \mathscr{P}(S) = \{\emptyset, \{0\}, \{1\}, \{0, 1\}\}$. For $t \in T$ and $\omega \in \Omega$, define $X_t(\omega) = \bs{1}_t(\omega)$ and $Y_t(\omega) = 0$. Then
1. $\bs{X} = \{X_t: t \in T\}$ is a version of $\bs{Y} = \{Y_t: t \in T\}$
2. $\bs{X}$ is not adapted to the natural filtration of $\bs{Y}$.
Proof
1. This was shown in the previous section, but here it is again: For $t \in T$, $\P(X_t \ne Y_t) = \P(\{t\}) = 0$.
2. Trivially, $\sigma(Y_t) = \{\emptyset, \Omega\}$ for every $t \in T$, so $\sigma\{Y_s: 0 \le s \le t\} = \{\emptyset, \Omega\}$. But $\sigma(X_t) = \{\emptyset, \{t\}, \Omega \setminus \{t\}, \Omega\}$.
Completion
Suppose now that $P$ is a probability measure on $(\Omega, \mathscr{F})$. Recall that $\mathscr{F}$ is complete with respect to $P$ if $A \in \mathscr{F}$, $B \subseteq A$, and $P(A) = 0$ imply $B \in \mathscr{F}$ (and hence $P(B) = 0$). That is, if $A$ is an event with probability 0 and $B \subseteq A$, then $B$ is also an event (and also has probability 0). For a filtration, the following definition is appropriate.
The filtration $\{\mathscr{F}_t: t \in T\}$ is complete with respect to a probability measure $P$ on $(\Omega, \mathscr{F})$ if
1. $\mathscr{F}$ is complete with respect to $P$
2. If $A \in \mathscr{F}$ and $P(A) = 0$ then $A \in \mathscr{F}_0$.
Suppose $P$ is a probability measure on $(\Omega, \mathscr{F})$ and that the filtration $\{\mathscr{F}_t: t \in T\}$ is complete with respect to $P$. If $A \in \mathscr{F}$ is a null event ($P(A) = 0$) or an almost certain event ($P(A) = 1$) then $A \in \mathscr{F}_t$ for every $t \in T$.
Proof
This follows since almost certain events are complements of null events and since the $\sigma$-algebras are increasing in $t \in T$.
Recall that if $P$ is a probability measure on $(\Omega, \mathscr{F})$, but $\mathscr{F}$ is not complete with respect to $P$, then $\mathscr{F}$ can always be completed. Here's a review of how this is done: Let $\mathscr{N} = \{A \subseteq \Omega: \text{ there exists } N \in \mathscr{F} \text{ with } P(N) = 0 \text{ and } A \subseteq N\}$ So $\mathscr{N}$ is the collection of null sets. Then we let $\mathscr{F}^P = \sigma(\mathscr{F} \cup \mathscr{N})$ and extend $P$ to $\mathscr{F}^P$ in the natural way: if $A \in \mathscr{F}^P$ and $A$ differs from $B \in \mathscr{F}$ by a null set, then $P(A) = P(B)$. Filtrations can also be completed.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $P$ is a probability measure on $(\Omega, \mathscr{F})$. As above, let $\mathscr{N}$ denote the collection of null subsets of $\Omega$, and for $t \in T$, let $\mathscr{F}^P_t = \sigma(\mathscr{F}_t \cup \mathscr{N})$. Then $\mathfrak{F}^P = \{\mathscr{F}^P_t: t \in T\}$ is a filtration on $\left(\Omega, \mathscr{F}^P\right)$ that is finer than $\mathfrak{F}$ and is complete relative to $P$.
Proof
If $s, \, t \in T$ with $s \le t$ then $\mathscr{F}_s \subseteq \mathscr{F}_t \subseteq \mathscr{F}$ and hence $\sigma(\mathscr{F}_s \cup \mathscr{N}) \subseteq \sigma(\mathscr{F}_t \cup \mathscr{N}) \subseteq \sigma(\mathscr{F} \cup \mathscr{N})$ and so $\mathscr{F}^P_s \subseteq \mathscr{F}^P_t \subseteq \mathscr{F}^P$. The probability measure $P$ can be extended to $\mathscr{F}^P$ as described above, and hence is defined on $\mathscr{F}^P_t$ for each $t \in T$. By construction, if $A \in \mathscr{F}^P$ and $P(A) = 0$ then $A \in \mathscr{F}^P_0$ so $\mathfrak{F}^P$ is complete with respect to $P$.
Naturally, $\mathfrak{F}^P$ is the completion of $\mathfrak{F}$ with respect to $P$. Sometimes we need to consider all probability measures on $(\Omega, \mathscr{F})$.
Let $\mathscr{P}$ denote the collection of probability measures on $(\Omega, \mathscr{F})$, and suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$. Let $\mathscr{F}^* = \bigcap \{\mathscr{F}^P: P \in \mathscr{P}\}$, and let $\mathfrak{F}^* = \bigwedge \{\mathfrak{F}^P: P \in \mathscr{P}\}$. Then $\mathfrak{F}^*$ is a filtration on $(\Omega, \mathscr{F}^*)$, known as the universal completion of $\mathfrak{F}$.
Proof
Note that $\mathfrak{F}^P$ is a filtration on $(\Omega, \mathscr{F}^P)$ for each $P \in \mathscr{P}$, so $\mathfrak{F}^*$ is a filtration on $(\Omega, \mathscr{F}^*)$.
The last definition must seem awfully obscure, but it does have a place. In the theory of Markov processes, we usually allow arbitrary initial distributions, which in turn produces a large collection of distributions on the sample space.
Right Continuity
In continuous time, we sometimes need to refine a given filtration somewhat.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is a filtration on $(\Omega, \mathscr{F})$. For $t \in [0, \infty)$, define $\mathscr{F}_{t+} = \bigcap \{\mathscr{F}_s: s \in (t, \infty)\}$. Then $\mathfrak{F}_+ = \{\mathscr{F}_{t+}: t \in T\}$ is also a filtration on $(\Omega, \mathscr{F})$ and is finer than $\mathfrak{F}$.
Proof
For $t \in [0, \infty)$ note that $\mathscr{F}_{t+}$ is a $\sigma$-algebra since it is the intersection of $\sigma$-algebras, and clearly $\mathscr{F}_{t+} \subseteq \mathscr{F}$. Next, if $s, \, t \in [0, \infty)$ with $s \le t$, then $\{\mathscr{F}_r: r \in (t, \infty)\} \subseteq \{\mathscr{F}_r: r \in (s, \infty)\}$, so it follows that $\mathscr{F}_{s+} = \bigcap\{\mathscr{F}_r: r \in (s, \infty)\} \subseteq \bigcap\{\mathscr{F}_r: r \in (t, \infty)\} = \mathscr{F}_{t+}$ Finally, for $t \in [0, \infty)$, $\mathscr{F}_t \subseteq \mathscr{F}_s$ for every $s \in (t, \infty)$ so $\mathscr{F}_t \subseteq \bigcap\{\mathscr{F}_s: s \in (t, \infty)\} = \mathscr{F}_{t+}$.
Since the $\sigma$-algebras in a filtration are increasing, it follows that for $t \in [0, \infty)$, $\mathscr{F}_{t+} = \bigcap\{\mathscr{F}_s: s \in (t, t + \epsilon)\}$ for every $\epsilon \in (0, \infty)$. So if the filtration $\mathfrak{F}$ encodes the information available as time goes by, then the filtration $\mathfrak{F}_+$ allows an infinitesimal peek into the future at each $t \in [0, \infty)$. In light of the previous result, the next definition is natural.
A filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is right continuous if $\mathfrak{F}_+ = \mathfrak{F}$, so that $\mathscr{F}_{t+} = \mathscr{F}_t$ for every $t \in [0, \infty)$.
Right continuous filtrations have some nice properties, as we will see later. If the original filtration is not right continuous, the slightly refined filtration is:
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is a filtration. Then $\mathfrak{F}_+$ is a right continuous filtration.
Proof
For $t \in T$ $\mathscr{F}_{t++} = \bigcap\{\mathscr{F}_{s+}: s \in (t, \infty)\} = \bigcap\left\{\bigcap\{\mathscr{F}_r: r \in (s, \infty)\}: s \in (t, \infty)\right\} = \bigcap\{\mathscr{F}_u: u \in (t, \infty)\} = \mathscr{F}_{t+}$
For a stochastic process $\bs{X} = \{X_t: t \in [0, \infty)\}$ in continuous time, often the filtration $\mathfrak{F}$ that is most useful is the right-continuous refinement of the natural filtration. That is, $\mathfrak{F} = \mathfrak{F}^0_+$, so that $\mathscr{F}_t = \mathscr{F}^0_{t+} = \bigcap_{s \gt t} \sigma\{X_r: r \in [0, s]\}$ for $t \in [0, \infty)$.
Stopping Times
Basic Properties
Suppose again that we have a fixed sample space $(\Omega, \mathscr{F})$. Random variables taking values in the time set $T$ are important, but often, as we will see, it's necessary to allow such variables to take the value $\infty$ as well as finite times. So let $T_\infty = T \cup \{\infty\}$. We extend the order to $T_\infty$ by the obvious rule that $t \lt \infty$ for every $t \in T$. We also extend the topology on $T$ to $T_\infty$ by the rule that for each $s \in T$, the set $\{t \in T_\infty: t \gt s\}$ is an open neighborhood of $\infty$. That is, $T_\infty$ is the one-point compactification of $T$. The reason for this is to preserve the meaning of time converging to infinity. That is, if $(t_1, t_2, \ldots)$ is a sequence in $T_\infty$ then $t_n \to \infty$ as $n \to \infty$ if and only if, for every $t \in T$ there exists $m \in \N_+$ such that $t_n \gt t$ for $n \gt m$. We then give $T_\infty$ the Borel $\sigma$-algebra $\mathscr{T}_\infty$ as before. In discrete time, this is once again the discrete $\sigma$-algebra, so that all subsets are measurable. In both cases, we now have an enhanced time space $(T_\infty, \mathscr{T}_\infty)$. A random variable $\tau$ taking values in $T_\infty$ is called a random time.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$. A random time $\tau$ is a stopping time relative to $\mathfrak{F}$ if $\{\tau \le t\} \in \mathscr{F}_t$ for each $t \in T$.
In a sense, a stopping time is a random time that does not require that we see into the future. That is, we can tell whether or not $\tau \le t$ from our information at time $t$. The term stopping time comes from gambling. Consider a gambler betting on games of chance. The gambler's decision to stop gambling at some point in time and accept his fortune must define a stopping time. That is, the gambler can base his decision to stop gambling on all of the information that he has at that point in time, but not on what will happen in the future. The terms Markov time and optional time are sometimes used instead of stopping time. If $\tau$ is a stopping time relative to a filtration, then it is also a stopping time relative to any finer filtration:
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$ are filtrations on $(\Omega, \mathscr{F})$, and that $\mathfrak{G}$ is finer than $\mathfrak{F}$. If a random time $\tau$ is a stopping time relative to $\mathfrak{F}$ then $\tau$ is a stopping time relative to $\mathfrak{G}$.
Proof
This is very simple. If $t \in T$ then $\{\tau \le t\} \in \mathscr{F}_t$ and hence $\{\tau \le t\} \in \mathscr{G}_t$ since $\mathscr{F}_t \subseteq \mathscr{G}_t$.
So, the finer the filtration, the larger the collection of stopping times. In fact, every random time is a stopping time relative to the finest filtration $\mathfrak{F}$ where $\mathscr{F}_t = \mathscr{F}$ for every $t \in T$. But this filtration corresponds to having complete information from the beginning of time, which of course is usually not sensible. At the other extreme, for the coarsest filtration $\mathfrak{F}$ where $\mathscr{F}_t = \{\Omega, \emptyset\}$ for every $t \in T$, the only stopping times are constants. That is, random times of the form $\tau(\omega) = t$ for every $\omega \in \Omega$, for some $t \in T_\infty$.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$. A random time $\tau$ is a stopping time relative to $\mathfrak{F}$ if and only if $\{\tau \gt t\} \in \mathscr{F}_t$ for each $t \in T$.
Proof
This result is trivial since $\{\tau \gt t\} = \{\tau \le t\}^c$ for $t \in T$.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$, and that $\tau$ is a stopping time relative to $\mathfrak{F}$. Then
1. $\{\tau \lt t\} \in \mathscr{F}_t$ for every $t \in T$.
2. $\{\tau \ge t\} \in \mathscr{F}_t$ for every $t \in T$.
3. $\{\tau = t\} \in \mathscr{F}_t$ for every $t \in T$.
Proof
1. Suppose first that $T = \N$. Then $\{\tau \lt 0\} = \emptyset \in \mathscr{F}_0$, and for $t \in \N_+$, $\{\tau \lt t\} = \{\tau \le t - 1\} \in \mathscr{F}_{t-1} \subseteq \mathscr{F}_t$. Next suppose that $T = [0, \infty)$. Again $\{\tau \lt 0\} = \emptyset \in \mathscr{F}_0$, so fix $t \in (0, \infty)$ and let $(s_1, s_2, \ldots)$ be a strictly increasing sequence in $[0, \infty)$ with $s_n \uparrow t$ as $n \to \infty$. Then $\{\tau \lt t\} = \bigcup_{n=1}^\infty \{\tau \le s_n\}$. But $\{\tau \le s_n\} \in \mathscr{F}_{s_n} \subseteq \mathscr{F}_t$ for each $n$, so $\{\tau \lt t\} \in \mathscr{F}_t$.
2. This follows from (a) since $\{\tau \ge t\} = \{\tau \lt t\}^c$ for $t \in T$.
3. For $t \in T$ note that $\{\tau = t\} = \{\tau \le t\} \setminus \{\tau \lt t\}$. Both events in the set difference are in $\mathscr{F}_t$.
Note that when $T = \N$, we actually showed that $\{\tau \lt t\} \in \mathscr{F}_{t-1}$ and $\{\tau \ge t\} \in \mathscr{F}_{t-1}$. The converse to part (a) (or equivalently (b)) is not true, but in continuous time there is a connection to the right-continuous refinement of the filtration.
Suppose that $T = [0, \infty)$ and that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is a filtration on $(\Omega, \mathscr{F})$. A random time $\tau$ is a stopping time relative to $\mathfrak{F}_+$ if and only if $\{\tau \lt t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$.
Proof
So restated, we need to show that $\{\tau \le t\} \in \mathscr{F}_{t+}$ for every $t \in [0, \infty)$ if and only if $\{\tau \lt t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$. (Note by the way that this is not the same as the statement that for every $t \in T$, $\{\tau \le t\} \in \mathscr{F}_{t+}$ if and only if $\{\tau \lt t\} \in \mathscr{F}_t$, which is not true.) Suppose first that $\{\tau \lt t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$. Fix $t \in [0, \infty)$ and let $(t_1, t_2, \ldots)$ be a strictly decreasing sequence in $[0, \infty)$ with $t_n \downarrow t$ as $n \to \infty$. Then for each $k \in \N_+$, $\{\tau \le t\} = \bigcap_{n=k}^\infty \{\tau \lt t_n\}$. If $s \gt t$ then there exists $k \in \N_+$ such that $t_n \lt s$ for each $n \ge k$. Hence $\{\tau \lt t_n\} \in \mathscr{F}_{t_n} \subseteq \mathscr{F}_s$ for $n \ge k$, and so it follows that $\{\tau \le t\} \in \mathscr{F}_s$. Since this is true for every $s \gt t$ it follows that $\{\tau \le t\} \in \mathscr{F}_{t+}$. Conversely, suppose that $\{\tau \le t\} \in \mathscr{F}_{t+}$ for every $t \in [0, \infty)$. Trivially $\{\tau \lt 0\} = \emptyset \in \mathscr{F}_0$, so fix $t \in (0, \infty)$ and let $(t_1, t_2, \ldots)$ be a strictly increasing sequence in $(0, \infty)$ with $t_n \uparrow t$ as $n \to \infty$. Then $\bigcup_{n=1}^\infty \{\tau \le t_n\} = \{\tau \lt t\}$. But for every $n \in \N_+$, $\{\tau \le t_n\} \in \mathscr{F}_{t_n+} = \bigcap\left\{\mathscr{F}_s: s \in (t_n, t)\right\} \subseteq \mathscr{F}_t$ Hence $\{\tau \lt t \} \in \mathscr{F}_t$.
If $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is a filtration and $\tau$ is a random time that satisfies $\{\tau \lt t \} \in \mathscr{F}_t$ for every $t \in T$, then some authors call $\tau$ a weak stopping time or say that $\tau$ is weakly optional for the filtration $\mathfrak{F}$. But to me, the increase in jargon is not worthwhile, and it's better to simply say that $\tau$ is a stopping time for the filtration $\mathfrak{F}_+$. The following corollary now follows.
Suppose that $T = [0, \infty)$ and that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is a right-continuous filtration. A random time $\tau$ is a stopping time relative to $\mathfrak{F}$ if and only if $\{\tau \lt t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$.
The converse to part (c) of the result above holds in discrete time.
Suppose that $T = \N$ and that $\mathfrak{F} = \{\mathscr{F}_n: n \in \N\}$ is a filtration on $(\Omega, \mathscr{F})$. A random time $\tau$ is a stopping time for $\mathfrak{F}$ if and only if $\{\tau = n\} \in \mathscr{F}_n$ for every $n \in \N$.
Proof
If $\tau$ is a stopping time then as shown above, $\{\tau = n\} \in \mathscr{F}_n$ for every $n \in \N$. Conversely, suppose that this condition holds. For $n \in \N$, $\{\tau \le n\} = \bigcup_{k=0}^n \{\tau = k\}$. But $\{\tau = k\} \in \mathscr{F}_k \subseteq \mathscr{F}_n$ for $k \in \{0, 1, \ldots, n\}$ so $\{\tau \le n\} \in \mathscr{F}_n$.
Basic Constructions
As noted above, a constant element of $T_\infty$ is a stopping time, but not a very interesting one.
Suppose $s \in T_\infty$ and that $\tau(\omega) = s$ for all $\omega \in \Omega$. Then $\tau$ is a stopping time relative to any filtration on $(\Omega, \mathscr{F})$.
Proof
For $t \in T$ note that $\{\tau \le t\} = \Omega$ if $s \le t$ and $\{\tau \le t\} = \emptyset$ if $s \gt t$.
If the filtration $\{\mathscr{F}_t: t \in T\}$ is complete, then a random time that is almost certainly a constant is also a stopping time. The following theorems give some basic ways of constructing new stopping times from ones we already have.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $\tau_1$ and $\tau_2$ are stopping times relative to $\mathfrak{F}$. Then each of the following is also a stopping time relative to $\mathfrak{F}$:
1. $\tau_1 \vee \tau_2 = \max\{\tau_1, \tau_2\}$
2. $\tau_1 \wedge \tau_2 = \min\{\tau_1, \tau_2\}$
3. $\tau_1 + \tau_2$
Proof
1. Note that $\{\tau_1 \vee \tau_2 \le t\} = \{\tau_1 \le t\} \cap \{\tau_2 \le t\} \in \mathscr{F}_t$ for $t \in T$, so the result follows from the definition.
2. Note that $\{\tau_1 \wedge \tau_2 \gt t\} = \{\tau_1 \gt t\} \cap \{\tau_2 \gt t\} \in \mathscr{F}_t$ for $t \in T$, so the result follows from the result above.
3. This is simple when $T = \N$. In this case, $\{\tau_1 + \tau_2 \le t\} = \bigcup_{n=0}^t \{\tau_1 = n\} \cap \{\tau_2 \le t - n\}$. But for $n \le t$, $\{\tau_1 = n\} \in \mathscr{F}_n \subseteq \mathscr{F}_t$ and $\{\tau_2 \le t - n\} \in \mathscr{F}_{t - n} \subseteq \mathscr{F}_t$. Hence $\{\tau_1 + \tau_2 \le t\} \in \mathscr{F}_t$. Suppose instead that $T = [0, \infty)$ and $t \in T$. Then $\tau_1 + \tau_2 \gt t$ if and only if either $\tau_1 \le t$ and $\tau_2 \gt t - \tau_1$, or $\tau_1 \gt t$. Of course $\{\tau_1 \gt t\} \in \mathscr{F}_t$ so we just need to show that the first event is also in $\mathscr{F}_t$. Note that $\tau_1 \le t$ and $\tau_2 \gt t - \tau_1$ if and only if there exists a rational $q \in [0, t]$ such that $q \le \tau_1 \le t$ and $\tau_2 \gt t - q$. Each of these events is in $\mathscr{F}_t$ and hence so is the union of the events over the countable collection of rational $q \in [0, t]$.
It follows that if $(\tau_1, \tau_2, \ldots, \tau_n)$ is a finite sequence of stopping times relative to $\mathfrak{F}$, then each of the following is also a stopping time relative to $\mathfrak{F}$:
• $\tau_1 \vee \tau_2 \vee \cdots \vee \tau_n$
• $\tau_1 \wedge \tau_2 \wedge \cdots \wedge \tau_n$
• $\tau_1 + \tau_2 + \cdots + \tau_n$
We have to be careful when we try to extend these results to infinite sequences.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$, and that $(\tau_n: n \in \N_+)$ is a sequence of stopping times relative to $\mathfrak{F}$. Then $\sup\{\tau_n: n \in \N_+\}$ is also a stopping time relative to $\mathfrak{F}$.
Proof
Let $\tau = \sup\{\tau_n: n \in \N_+\}$. Note that $\tau$ exists in $T_\infty$ and is a random time. For $t \in T$, $\{\tau \le t\} = \bigcap_{n=1}^\infty \{\tau_n \le t\}$. But each event in the intersection is in $\mathscr{F}_t$ and hence so is the intersection.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$, and that $(\tau_n: n \in \N_+)$ is an increasing sequence of stopping times relative to $\mathfrak{F}$. Then $\lim_{n \to \infty} \tau_n$ is a stopping time relative to $\mathfrak{F}$.
Proof
This is a corollary of the previous theorem. Since the sequence is increasing, $\lim_{n \to \infty} \tau_n = \sup\{\tau_n: n \in \N_+\}$.
Suppose that $T = [0, \infty)$ and that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$. If $(\tau_n: n \in \N_+)$ is a sequence of stopping times relative to $\mathfrak{F}$, then each of the following is a stopping time relative to $\mathfrak{F}_+$:
1. $\inf\left\{\tau_n: n \in \N_+\right\}$
2. $\liminf_{n \to \infty} \tau_n$
3. $\limsup_{n \to \infty} \tau_n$
Proof
1. Let $\tau = \inf\left\{\tau_n: n \in \N_+\right\}$. Then $\{\tau \ge t\} = \bigcap_{n=1}^\infty\{\tau_n \ge t\} \in \mathscr{F}_t$ for $t \in T$. Hence $\tau$ is a stopping time relative to $\mathfrak{F}_+$ by the result above.
2. Recall that $\liminf_{n \to \infty} \tau_n = \sup\left\{\inf\{\tau_k: k \ge n\}: n \in \N_+\right\}$ and so this is a stopping time relative to $\mathfrak{F}_+$ by part (a) and the result above on supremums.
3. Similarly note that $\limsup_{n \to \infty} \tau_n = \inf\left\{\sup\{\tau_k: k \ge n\}: n \in \N_+\right\}$ and so this is a stopping time relative to $\mathfrak{F}_+$ by part (a) and the result above on supremums.
As a simple corollary, we have the following results:
Suppose that $T = [0, \infty)$ and that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a right-continuous filtration on $(\Omega, \mathscr{F})$. If $(\tau_n: n \in \N_+)$ is a sequence of stopping times relative to $\mathfrak{F}$, then each of the following is also a stopping time relative to $\mathfrak{F}$:
1. $\inf\left\{\tau_n: n \in \N_+\right\}$
2. $\liminf_{n \to \infty} \tau_n$
3. $\limsup_{n \to \infty} \tau_n$
The $\sigma$-Algebra of a Stopping Time
Consider again the general setting of a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ on the sample space $(\Omega, \mathscr{F})$, and suppose that $\tau$ is a stopping time relative to $\mathfrak{F}$. We want to define the $\sigma$-algebra $\mathscr{F}_\tau$ of events up to the random time $\tau$, analogous to $\mathscr{F}_t$, the $\sigma$-algebra of events up to a fixed time $t \in T$. Here is the appropriate definition:
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $\tau$ is a stopping time relative to $\mathfrak{F}$. Define $\mathscr{F}_\tau = \left\{A \in \mathscr{F}: A \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \in T\right\}$. Then $\mathscr{F}_\tau$ is a $\sigma$-algebra.
Proof
First $\Omega \in \mathscr{F}_\tau$ since $\Omega \cap \{\tau \le t\} = \{\tau \le t\} \in \mathscr{F}_t$ for $t \in T$. If $A \in \mathscr{F}_\tau$ then $A^c \cap \{\tau \le t\} = \{\tau \le t \} \setminus \left(A \cap \{\tau \le t\}\right) \in \mathscr{F}_t$ for $t \in T$. Finally, suppose that $A_i \in \mathscr{F}_\tau$ for $i$ in a countable index set $I$. Then $\left(\bigcup_{i \in I} A_i\right) \cap \{\tau \le t\} = \bigcup_{i \in I} \left(A_i \cap \{\tau \le t\}\right) \in \mathscr{F}_t$ for $t \in T$.
Thus, an event $A$ is in $\mathscr{F}_\tau$ if we can determine whether both $A$ and $\{\tau \le t\}$ occurred given our information at time $t$. If $\tau$ is constant, then $\mathscr{F}_\tau$ reduces to the corresponding member of the original filtration, which clearly should be the case, and provides additional motivation for the definition.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$. Fix $s \in T$ and define $\tau(\omega) = s$ for all $\omega \in \Omega$. Then $\mathscr{F}_\tau = \mathscr{F}_s$.
Proof
Suppose that $A \in \mathscr{F}_s$. Then $A \in \mathscr{F}$ and for $t \in T$, $A \cap \{\tau \le t\} = A$ if $s \le t$ and $A \cap \{\tau \le t\} = \emptyset$ if $s \gt t$. In either case, $A \cap \{\tau \le t\} \in \mathscr{F}_t$ and hence $A \in \mathscr{F}_\tau$. Conversely, suppose that $A \in \mathscr{F}_\tau$. Then $A = A \cap \{\tau \le s\} \in \mathscr{F}_s$.
Clearly, if we have the information available in $\mathscr{F}_\tau$, then we should know the value of $\tau$ itself. This is also true:
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $\tau$ is a stopping time relative to $\mathfrak{F}$. Then $\tau$ is measurable with respect to $\mathscr{F}_\tau$.
Proof
It suffices to show that $\{\tau \le s\} \in \mathscr{F}_\tau$ for each $s \in T$. For $s, \, t \in T$, $\{\tau \le t\} \cap \{\tau \le s\} = \{\tau \le s \wedge t\} \in \mathscr{F}_{s \wedge t} \subseteq \mathscr{F}_t$ Hence $\{\tau \le s\} \in \mathscr{F}_\tau$.
Here are other results that relate the $\sigma$-algebra of a stopping time to the original filtration.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $\tau$ is a stopping time relative to $\mathfrak{F}$. If $A \in \mathscr{F}_\tau$ then for $t \in T$,
1. $A \cap \{\tau \lt t\} \in \mathscr{F}_t$
2. $A \cap \{\tau = t\} \in \mathscr{F}_t$
Proof
1. By definition, $A \cap \{\tau \le t\} \in \mathscr{F}_t$. But $\{\tau \lt t\} \subseteq \{\tau \le t\}$ and $\{\tau \lt t\} \in \mathscr{F}_t$. Hence $A \cap \{\tau \lt t\} = A \cap \{\tau \le t\} \cap \{\tau \lt t\} \in \mathscr{F}_t$.
2. Similarly, $\{\tau = t\} \subseteq \{\tau \le t\}$ and $\{\tau = t\} \in \mathscr{F}_t$. Hence $A \cap \{\tau = t\} = A \cap \{\tau \le t\} \cap \{\tau = t\} \in \mathscr{F}_t$.
The $\sigma$-algebra of a stopping time relative to a filtration is related to the $\sigma$-algebra of the stopping time relative to a finer filtration in the natural way.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$ are filtrations on $(\Omega, \mathscr{F})$ and that $\mathfrak{G}$ is finer than $\mathfrak{F}$. If $\tau$ is a stopping time relative to $\mathfrak{F}$ then $\mathscr{F}_\tau \subseteq \mathscr{G}_\tau$.
Proof
From the result above, $\tau$ is also a stopping time relative to $\mathfrak{G}$, so the statement makes sense. If $A \in \mathscr{F}_\tau$ then for $t \in T$, $A \cap \{\tau \le t\} \in \mathscr{F}_t \subseteq \mathscr{G}_t$, so $A \in \mathscr{G}_\tau$.
When two stopping times are ordered, their $\sigma$-algebras are also ordered.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$ and that $\rho$ and $\tau$ are stopping times for $\mathfrak{F}$ with $\rho \le\tau$. Then $\mathscr{F}_\rho \subseteq \mathscr{F}_\tau$.
Proof
Suppose that $A \in \mathscr{F}_\rho$ and $t \in T$. Note that $\{\tau \le t\} \subseteq \{\rho \le t\}$. By definition, $A \cap \{\rho \le t\} \in \mathscr{F}_t$ and $\{\tau \le t\} \in \mathscr{F}_t$. Hence $A \cap \{\tau \le t\} = A \cap \{\rho \le t\} \cap \{\tau \le t\} \in \mathscr{F}_t$, so $A \in \mathscr{F}_\tau$.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$, and that $\rho$ and $\tau$ are stopping times for $\mathfrak{F}$. Then each of the following events is in $\mathscr{F}_\tau$ and in $\mathscr{F}_\rho$.
1. $\{\rho \lt \tau\}$
2. $\{\rho = \tau\}$
3. $\{\rho \gt \tau\}$
4. $\{\rho \le \tau\}$
5. $\{\rho \ge \tau\}$
Proof
The proofs are easy when $T = \N$.
1. Let $t \in T$. Then $\{\rho \lt \tau\} \cap \{\tau \le t\} = \bigcup_{n=0}^t \bigcup_{k=0}^{n-1} \{\tau = n, \rho = k\}$ But each event in the union is in $\mathscr{F}_t$.
2. Similarly, let $t \in T$. Then $\{\rho = \tau\} \cap \{\tau \le t\} = \bigcup_{n=0}^t \{\rho = n, \tau = n\}$ and again each event in the union is in $\mathscr{F}_t$.
3. This follows from symmetry, reversing the roles of $\rho$ and $\tau$ in part (a).
4. Note that $\{\rho \le \tau\} = \{\rho \lt \tau\} \cup \{\rho = \tau\} \in \mathscr{F}_\tau$.
5. Similarly, note that $\{\rho \ge \tau\} = \{\rho \gt \tau\} \cup \{\rho = \tau\} \in \mathscr{F}_\tau$.
We can stop a filtration at a stopping time. In the next subsection, we will stop a stochastic process in the same way.
Suppose again that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on $(\Omega, \mathscr{F})$, and that $\tau$ is a stopping time for $\mathfrak{F}$. For $t \in T$ define $\mathscr{F}^\tau_t = \mathscr{F}_{t \wedge \tau}$. Then $\mathfrak{F}^\tau = \{\mathscr{F}^\tau_t: t \in T\}$ is a filtration and is coarser than $\mathfrak{F}$.
Proof
The random time $t \wedge \tau$ is a stopping time for each $t \in T$ by the result above, so $\mathscr{F}^\tau_t$ is a sub $\sigma$-algebra of $\mathscr{F}$. If $t \in T$, then by definition, $A \in \mathscr{F}^\tau_t$ if and only if $A \cap \{t \wedge \tau \le r\} \in \mathscr{F}_r$ for every $r \in T$. But for $r \in T$, $\{t \wedge \tau \le r\} = \Omega$ if $r \ge t$ and $\{t \wedge \tau \le r\} = \{\tau \le r\}$ if $r \lt t$. Hence $A \in \mathscr{F}^\tau_t$ if and only if $A \cap \{\tau \le r\} \in \mathscr{F}_r$ for $r \lt t$ and $A \in \mathscr{F}_t$. So in particular, $\mathfrak{F}^\tau$ is coarser than $\mathfrak{F}$. Further, suppose $s, \, t \in T$ with $s \le t$, and that $A \in \mathscr{F}^\tau_s$. Let $r \in T$. If $r \lt s$ then $A \cap \{\tau \le r\} \in \mathscr{F}_r$. If $s \le r \lt t$ then $A \in \mathscr{F}_s \subseteq \mathscr{F}_r$ and $\{\tau \le r\} \in \mathscr{F}_r$ so again $A \cap \{\tau \le r\} \in \mathscr{F}_r$. Finally if $r \ge t$ then $A \in \mathscr{F}_s \subseteq \mathscr{F}_t$. Hence $A \in \mathscr{F}^\tau_t$.
Stochastic Processes
As usual, the most common setting is when we have a stochastic process $\bs{X} = \{X_t: t \in T\}$ defined on our sample space $(\Omega, \mathscr{F})$ and with state space $(S, \mathscr{S})$. If $\tau$ is a random time, we are often interested in the state $X_\tau$ at the random time. But there are two issues. First, $\tau$ may take the value infinity, in which case $X_\tau$ is not defined. The usual solution is to introduce a new death state $\delta$, and define $X_\infty = \delta$. The $\sigma$-algebra $\mathscr{S}$ on $S$ is extended to $S_\delta = S \cup \{\delta\}$ in the natural way, namely $\mathscr{S}_\delta = \sigma\left(\mathscr{S} \cup \{\{\delta\}\}\right)$, so that $A$ and $A \cup \{\delta\}$ belong to $\mathscr{S}_\delta$ for every $A \in \mathscr{S}$.
Our other problem is that we naturally expect $X_\tau$ to be a random variable (that is, measurable), just as $X_t$ is a random variable for a deterministic $t \in T$. Moreover, if $\bs{X}$ is adapted to a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, then we would naturally also expect $X_\tau$ to be measurable with respect to $\mathscr{F}_\tau$, just as $X_t$ is measurable with respect to $\mathscr{F}_t$ for deterministic $t \in T$. But this is not obvious, and in fact is not true without additional assumptions. Note that $X_\tau$ is a random state at a random time, and so depends on an outcome $\omega \in \Omega$ in two ways: $X_{\tau(\omega)}(\omega)$.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on the sample space $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$, and that $\bs{X}$ is measurable. If $\tau$ is a finite random time, then $X_\tau$ is measurable. That is, $X_\tau$ is a random variable with values in $S$.
Proof
Note that $X_\tau: \Omega \to S$ is the composition of the function $\omega \mapsto (\omega, \tau(\omega))$ from $\Omega$ to $\Omega \times T$ with the function $(\omega, t) \mapsto X_t(\omega)$ from $\Omega \times T$ to $S$. The first function is measurable because the two coordinate functions are measurable. The second function is measurable by assumption.
This result is one of the main reasons for the definition of a measurable process in the first place. Sometimes we literally want to stop the random process at a random time $\tau$. As you might guess, this is the origin of the term stopping time.
Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on the sample space $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$, and that $\bs{X}$ is measurable. If $\tau$ is a random time, then the process $\bs{X}^\tau = \{X^\tau_t: t \in T\}$ defined by $X^\tau_t = X_{t \wedge \tau}$ for $t \in T$ is the process $\bs{X}$ stopped at $\tau$.
Proof
For each $t \in T$, note that $t \wedge \tau$ is a finite random time, and hence $X_{t \wedge \tau}$ is measurable by the previous result. Thus $\bs{X}^\tau$ is a well-defined stochastic process on $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$.
When the original process is progressively measurable, so is the stopped process.
Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on the sample space $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$, and that $\bs{X}$ is progressively measurable with respect to a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$. If $\tau$ is a stopping time relative to $\mathfrak{F}$, then the stopped process $\bs{X}^\tau = \{X^\tau_t: t \in T\}$ is progressively measurable with respect to the stopped filtration $\mathfrak{F}^\tau$.
Since $\mathfrak{F}$ is finer than $\mathfrak{F}^\tau$, it follows that $\bs{X}^\tau$ is also progressively measurable with respect to $\mathfrak{F}$.
Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on the sample space $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$, and that $\bs{X}$ is progressively measurable with respect to a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ on $(\Omega, \mathscr{F})$. If $\tau$ is a finite stopping time relative to $\mathfrak{F}$ then $X_\tau$ is measurable with respect to $\mathscr{F}_\tau$.
For many random processes, the first time that the process enters or hits a set of states is particularly important. In the discussion that follows, let $T_+ = \{t \in T: t \gt 0\}$, the set of positive times.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$. For $A \in \mathscr{S}$, define
1. $\rho_A = \inf\{t \in T: X_t \in A\}$, the first entry time to $A$.
2. $\tau_A = \inf\{t \in T_+: X_t \in A\}$, the first hitting time to $A$.
As usual, $\inf(\emptyset) = \infty$ so $\rho_A = \infty$ if $X_t \notin A$ for all $t \in T$, so that the process never enters $A$, and $\tau_A = \infty$ if $X_t \notin A$ for all $t \in T_+$, so that the process never hits $A$. In discrete time, it's easy to see that these are stopping times.
Suppose that $\{X_n: n \in \N\}$ is a stochastic process on $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$. If $A \in \mathscr{S}$ then $\tau_A$ and $\rho_A$ are stopping times relative to the natural filtration $\mathfrak{F}^0$.
Proof
Let $n \in \N$. Note that $\{\rho_A \gt n\} = \{X_0 \notin A, X_1 \notin A, \ldots, X_n \notin A\} \in \sigma\{X_0, X_1, \ldots, X_n\}$. Similarly, $\{\tau_A \gt n\} = \{X_1 \notin A, X_2 \notin A, \ldots, X_n \notin A \} \in \sigma\{X_0, X_1, \ldots, X_n\}$.
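As a concrete illustration of the discrete-time case, here is a minimal Python sketch (our own illustration; the random walk and the target set are hypothetical choices, not part of the text). It computes the first entry time $\rho_A$ for a simulated path, and the point to notice is that deciding whether $\rho_A \le n$ uses only the values $X_0, X_1, \ldots, X_n$, which is precisely the stopping time property.

```python
import random

def first_entry_time(path, A):
    """First index n with path[n] in A, or float('inf') if the path never enters A.
    Whether the returned time is <= n can be decided from path[0], ..., path[n] alone."""
    for n, x in enumerate(path):
        if x in A:
            return n
    return float('inf')

# Hypothetical example: a simple symmetric random walk on the integers, started at 0.
rng = random.Random(42)
path = [0]
for _ in range(100):
    path.append(path[-1] + rng.choice([-1, 1]))

print(first_entry_time(path, A={5}))   # first entry time to the set {5}
```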
So of course in discrete time, $\tau_A$ and $\rho_A$ are stopping times relative to any filtration $\mathfrak{F}$ to which $\bs{X}$ is adapted. You might think that $\tau_A$ and $\rho_A$ should always be stopping times, since $\tau_A \le t$ if and only if $X_s \in A$ for some $s \in T_+$ with $s \le t$, and $\rho_A \le t$ if and only if $X_s \in A$ for some $s \in T$ with $s \le t$. It would seem that these events are known if one is allowed to observe the process up to time $t$. The problem is that when $T = [0, \infty)$, these are uncountable unions, so we need to make additional assumptions on the stochastic process $\bs{X}$ or the filtration $\mathfrak{F}$, or both.
Suppose that $S$ has an LCCB topology, and that $\mathscr{S}$ is the $\sigma$-algebra of Borel sets. Suppose also that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is right continuous and has left limits. Then $\tau_A$ and $\rho_A$ are stopping times relative to $\mathfrak{F}^0_+$ for every open $A \in \mathscr{S}$.
Here is another result that requires less of the stochastic process $\bs{X}$, but more of the filtration $\mathfrak{F}$.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a stochastic process on $(\Omega, \mathscr{F})$ with state space $(S, \mathscr{S})$ that is progressively measurable relative to a complete, right-continuous filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$. If $A \in \mathscr{S}$ then $\rho_A$ and $\tau_A$ are stopping times relative to $\mathfrak{F}$.
Recall that a probability distribution is just another name for a probability measure. Most distributions are associated with random variables, and in fact every distribution can be associated with a random variable. In this chapter we explore the basic types of probability distributions (discrete, continuous, mixed), and the ways that distributions can be defined using density functions, distribution functions, and quantile functions. We also study the relationship between the distribution of a random vector and the distributions of its components, conditional distributions, and how the distribution of a random variable changes when the variable is transformed.
In the advanced sections, we study convergence in distribution, one of the most important types of convergence. We also construct the abstract integral with respect to a positive measure and study the basic properties of the integral. This leads in turn to general (signed) measures, absolute continuity and singularity, and the existence of density functions. Finally, we study various vector spaces of functions that are defined by integral properties.
03: Distributions
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Definitions and Basic Properties
As usual, our starting point is a random experiment modeled by a probability space $(S, \mathscr S, \P)$. So to review, $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. We use the terms probability measure and probability distribution synonymously in this text. Also, since we use a general definition of random variable, every probability measure can be thought of as the probability distribution of a random variable, so we can always take this point of view if we like. Indeed, most probability measures naturally have random variables associated with them.
Recall that the sample space $(S, \mathscr S)$ is discrete if $S$ is countable and $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$. In this case, $\P$ is a discrete distribution and $(S, \mathscr S, \P)$ is a discrete probability space.
For the remainder of our discussion we assume that $(S, \mathscr S, \P)$ is a discrete probability space. In the picture below, the blue dots are intended to represent points of positive probability.
It's very simple to describe a discrete probability distribution with the function that assigns probabilities to the individual points in $S$.
The function $f$ on $S$ defined by $f(x) = \P(\{x\})$ for $x \in S$ is the probability density function of $\P$, and satisfies the following properties:
1. $f(x) \ge 0, \; x \in S$
2. $\sum_{x \in S} f(x) = 1$
3. $\sum_{x \in A} f(x) = \P(A)$ for $A \subseteq S$
Proof
These properties follow from the axioms of a probability measure.
1. $f(x) = \P(\{x\}) \ge 0$ since probabilities are nonnegative.
2. $\sum_{x \in S} f(x) = \sum_{x \in S} \P(\{x\}) = \P(S) = 1$ by the countable additivity axiom.
3. $\sum_{x \in A} f(x) = \sum_{x \in A} \P(\{x\}) = \P(A)$ for $A \subseteq S$ again, by the countable additivity axiom.
Property (c) is particularly important since it shows that a discrete probability distribution is completely determined by its probability density function. Conversely, any function that satisfies properties (a) and (b) can be used to construct a discrete probability distribution on $S$ via property (c).
A nonnegative function $f$ on $S$ that satisfies $\sum_{x \in S} f(x) = 1$ is a (discrete) probability density function on $S$, and then $\P$ defined as follows is a probability measure on $S$. $\P(A) = \sum_{x \in A} f(x), \quad A \subseteq S$
Proof
1. $\P(A) = \sum_{x \in A} f(x) \ge 0$ since $f$ is nonnegative.
2. $\P(S) = \sum_{x \in S} f(x) = 1$ by the assumption on $f$.
3. Suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of subsets of $S$, and let $A = \bigcup_{i \in I} A_i$. Then $\P(A) = \sum_{x \in A} f(x) = \sum_{i \in I} \sum_{x \in A_i} f(x) = \sum_{i \in I} \P(A_i)$ Note that since $f$ is nonnegative, the order of the terms in the sum does not matter.
Technically, $f$ is the density of $\P$ relative to counting measure $\#$ on $S$. The technicalities are discussed in detail in the advanced section on absolute continuity and density functions.
The set of outcomes $S$ is often a countable subset of some larger set, such as $\R^n$ for some $n \in \N_+$. But not always. We might want to consider a random variable with values in a deck of cards, or a set of words, or some other discrete population of objects. Of course, we can always map a countable set $S$ one-to-one into a Euclidean set, but it might be contrived or unnatural to do so. In any event, if $S$ is a subset of a larger set, we can always extend a probability density function $f$, if we want, to the larger set by defining $f(x) = 0$ for $x \notin S$. Sometimes this extension simplifies formulas and notation. Put another way, the set of values is often a convenience set that includes the points with positive probability, but perhaps other points as well.
Suppose that $f$ is a probability density function on $S$. Then $\{x \in S: f(x) \gt 0\}$ is the support set of the distribution.
Values of $x$ that maximize the probability density function are important enough to deserve a name.
Suppose again that $f$ is a probability density function on $S$. An element $x \in S$ that maximizes $f$ is a mode of the distribution.
When there is only one mode, it is sometimes used as a measure of the center of the distribution.
A discrete probability distribution defined by a probability density function $f$ is equivalent to a discrete mass distribution, with total mass 1. In this analogy, $S$ is the (countable) set of point masses, and $f(x)$ is the mass of the point at $x \in S$. Property (c) in (2) above simply means that the mass of a set $A$ can be found by adding the masses of the points in $A$.
But let's consider a probabilistic interpretation, rather than one from physics. We start with a basic random variable $X$ for an experiment, defined on a probability space $(\Omega, \mathscr F, \P)$. Suppose that $X$ has a discrete distribution on $S$ with probability density function $f$. So in this setting, $f(x) = \P(X = x)$ for $x \in S$. We create a new, compound experiment by conducting independent repetitions of the original experiment. So in the compound experiment, we have a sequence of independent random variables $(X_1, X_2, \ldots)$ each with the same distribution as $X$; in statistical terms, we are sampling from the distribution of $X$. Define $f_n(x) = \frac{1}{n} \#\left\{ i \in \{1, 2, \ldots, n\}: X_i = x\right\} = \frac{1}{n} \sum_{i=1}^n \bs{1}(X_i = x), \quad x \in S$ Note that $f_n(x)$ is the relative frequency of outcome $x \in S$ in the first $n$ runs. Note also that $f_n(x)$ is a random variable for the compound experiment for each $x \in S$. By the law of large numbers, $f_n(x)$ should converge to $f(x)$, in some sense, as $n \to \infty$. The function $f_n$ is called the empirical probability density function, and it is in fact a (random) probability density function, since it satisfies properties (a) and (b) of (2). Empirical probability density functions are displayed in most of the simulation apps that deal with discrete variables.
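To make the law of large numbers statement concrete, here is a minimal Python sketch (a hypothetical example of our own, not part of the text) that samples from a fixed discrete distribution and compares the empirical density $f_n$ with the true density $f$ as $n$ grows.

```python
import random
from collections import Counter

# Hypothetical true density f on S = {1, 2, 3, 4}.
S = [1, 2, 3, 4]
f = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

random.seed(1)
for n in (100, 100_000):
    sample = random.choices(S, weights=[f[x] for x in S], k=n)
    counts = Counter(sample)
    f_n = {x: counts[x] / n for x in S}   # relative frequencies: the empirical density
    print(n, {x: round(f_n[x], 3) for x in S})
```

For large $n$, the printed relative frequencies should be close to the true values $0.1, 0.2, 0.3, 0.4$.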
It's easy to construct discrete probability density functions from other nonnegative functions defined on a countable set.
Suppose that $g$ is a nonnegative function defined on $S$, and let $c = \sum_{x \in S} g(x)$ If $0 \lt c \lt \infty$, then the function $f$ defined by $f(x) = \frac{1}{c} g(x)$ for $x \in S$ is a discrete probability density function on $S$.
Proof
Clearly $f(x) \ge 0$ for $x \in S$. Also, $\sum_{x \in S} f(x) = \frac{1}{c} \sum_{x \in S} g(x) = \frac{c}{c} = 1$
Note that since we are assuming that $g$ is nonnegative, $c = 0$ if and only if $g(x) = 0$ for every $x \in S$. At the other extreme, $c = \infty$ could only occur if $S$ is infinite (and the infinite series diverges). When $0 \lt c \lt \infty$ (so that we can construct the probability density function $f$), $c$ is sometimes called the normalizing constant. This result is useful for constructing probability density functions with desired functional properties (domain, shape, symmetry, and so on).
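Here is a minimal sketch of the construction in Python (the function $g$ is a hypothetical example of ours), using exact rational arithmetic so that the density sums to exactly 1.

```python
from fractions import Fraction

# Hypothetical example: g(x) = x^2 on S = {1, ..., 5}.
S = range(1, 6)
g = {x: Fraction(x * x) for x in S}

c = sum(g.values())               # the normalizing constant, here 55
f = {x: g[x] / c for x in S}      # the density f = g / c

print(c, sum(f.values()))         # 55 1
```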
Conditional Densities
Suppose again that $X$ is a random variable on a probability space $(\Omega, \mathscr F, \P)$ and that $X$ takes values in our discrete set $S$. The distribution of $X$ (and hence the probability density function of $X$) is based on the underlying probability measure on the sample space $(\Omega, \mathscr F)$. This measure could be a conditional probability measure, conditioned on a given event $E \in \mathscr F$ (with $\P(E) \gt 0$). The probability density function in this case is $f(x \mid E) = \P(X = x \mid E), \quad x \in S$ Except for notation, no new concepts are involved. Therefore, all results that hold for discrete probability density functions in general have analogues for conditional discrete probability density functions.
For fixed $E \in \mathscr F$ with $\P(E) \gt 0$, the function $x \mapsto f(x \mid E)$ is a discrete probability density function on $S$. That is,
1. $f(x \mid E) \ge 0$ for $x \in S$.
2. $\sum_{x \in S} f(x \mid E) = 1$
3. $\sum_{x \in A} f(x \mid E) = \P(X \in A \mid E)$ for $A \subseteq S$
Proof
This is a consequence of the fact that $A \mapsto \P(A \mid E)$ is a probability measure on $(\Omega, \mathscr F)$. The function $x \mapsto f(x \mid E)$ plays the same role for the conditional probability measure that $f$ does for the original probability measure $\P$.
In particular, the event $E$ could be an event defined in terms of the random variable $X$ itself.
Suppose that $B \subseteq S$ and $\P(X \in B) \gt 0$. The conditional probability density function of $X$ given $X \in B$ is the function on $B$ defined by $f(x \mid X \in B) = \frac{f(x)}{\P(X \in B)} = \frac{f(x)}{\sum_{y \in B} f(y)}, \quad x \in B$
Proof
This follows from the previous theorem. $f(x \mid X \in B) = \P(X = x, X \in B) \big/ \P(X \in B)$. The numerator is $f(x)$ if $x \in B$ and is 0 if $x \notin B$.
Note that the denominator is simply the normalizing constant for $f$ restricted to $B$. Of course, $f(x \mid X \in B) = 0$ for $x \in B^c$.
Conditioning and Bayes' Theorem
Suppose again that $X$ is a random variable defined on a probability space $(\Omega, \mathscr F, \P)$ and that $X$ has a discrete distribution on $S$, with probability density function $f$. We assume that $f(x) \gt 0$ for $x \in S$ so that the distribution has support $S$. The versions of the law of total probability and Bayes' theorem given in the following theorems follow immediately from the corresponding results in the section on Conditional Probability. Only the notation is different.
Law of Total Probability. If $E \in \mathscr F$ is an event then $\P(E) = \sum_{x \in S} f(x) \P(E \mid X = x)$
Proof
Note that $\{\{X = x\}: x \in S\}$ is a countable partition of the sample space $\Omega$. That is, these events are disjoint and their union is the entire sample space $\Omega$. Hence $\P(E) = \sum_{x \in S} \P(E \cap \{X = x\}) = \sum_{x \in S} \P(X = x) \P(E \mid X = x) = \sum_{x \in S} f(x) \P(E \mid X = x)$
This result is useful, naturally, when the distribution of $X$ and the conditional probability of $E$ given the values of $X$ are known. When we compute $\P(E)$ in this way, we say that we are conditioning on $X$. Note that $\P(E)$, as expressed by the formula, is a weighted average of $\P(E \mid X = x)$, with weight factors $f(x)$, over $x \in S$.
Bayes' Theorem. If $E \in \mathscr F$ is an event with $\P(E) \gt 0$ then $f(x \mid E) = \frac{f(x) \P(E \mid X = x)}{\sum_{y \in S} f(y) \P(E \mid X = y)}, \quad x \in S$
Proof
Note that the numerator of the fraction on the right is $\P(X = x) \P(E \mid X = x) = \P(\{X = x\} \cap E)$. The denominator is $\P(E)$ by the previous theorem. Hence the ratio is $\P(X = x \mid E) = f(x \mid E)$.
Bayes' theorem, named for Thomas Bayes, is a formula for the conditional probability density function of $X$ given $E$. Again, it is useful when the quantities on the right are known. In the context of Bayes' theorem, the (unconditional) distribution of $X$ is referred to as the prior distribution and the conditional distribution as the posterior distribution. Note that the denominator in Bayes' formula is $\P(E)$ and is simply the normalizing constant for the function $x \mapsto f(x) \P(E \mid X = x)$.
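As a numerical sketch of both results, consider the following hypothetical two-urn setup (our own example, not from the text): urn 1 has 2 red and 2 green balls, urn 2 has 3 red and 1 green, the urn $X$ is chosen at random, and $E$ is the event that a ball drawn from the chosen urn is red.

```python
from fractions import Fraction

prior = {1: Fraction(1, 2), 2: Fraction(1, 2)}        # f(x): each urn equally likely
likelihood = {1: Fraction(2, 4), 2: Fraction(3, 4)}   # P(E | X = x)

# Law of total probability: P(E) = sum over x of f(x) P(E | X = x)
p_E = sum(prior[x] * likelihood[x] for x in prior)

# Bayes' theorem: the posterior density f(x | E)
posterior = {x: prior[x] * likelihood[x] / p_E for x in prior}

print(p_E)        # 5/8
print(posterior)  # {1: Fraction(2, 5), 2: Fraction(3, 5)}
```

Note that the denominator $5/8$ is exactly the normalizing constant for $x \mapsto f(x) \P(E \mid X = x)$, as observed above.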
Examples and Special Cases
We start with some simple (albeit somewhat artificial) discrete distributions. After that, we study three special parametric models—the discrete uniform distribution, hypergeometric distributions, and Bernoulli trials. These models are very important, so when working the computational problems that follow, try to see if the problem fits one of these models. As always, be sure to try the problems yourself before looking at the answers and proofs in the text.
Simple Discrete Distributions
Let $g$ be the function defined by $g(n) = n (10 - n)$ for $n \in \{1, 2, \ldots, 9\}$.
1. Find the probability density function $f$ that is proportional to $g$, as in the result above.
2. Sketch the graph of $f$ and find the mode of the distribution.
3. Find $\P(3 \le N \le 6)$ where $N$ has probability density function $f$.
Answer
1. $f(n) = \frac{1}{165} n (10 - n)$ for $n \in \{1, 2, \ldots, 9\}$
2. mode $n = 5$
3. $\frac{94}{165}$
Let $g$ be the function defined by $g(n) = n^2 (10 - n)$ for $n \in \{1, 2, \ldots, 10\}$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Sketch the graph of $f$ and find the mode of the distribution.
3. Find $\P(3 \le N \le 6)$ where $N$ has probability density function $f$.
Answer
1. $f(n) = \frac{1}{825} n^2 (10 - n)$ for $n \in \{1, 2, \ldots, 9\}$
2. mode $n = 7$
3. $\frac{428}{825}$
Let $g$ be the function defined by $g(x, y) = x + y$ for $(x, y) \in \{1, 2, 3\}^2$.
1. Sketch the domain of $g$.
2. Find the probability density function $f$ that is proportional to $g$.
3. Find the mode of the distribution.
4. Find $\P(X \gt Y)$ where $(X, Y)$ has probability density function $f$.
Answer
2. $f(x,y) = \frac{1}{36} (x + y)$ for $(x,y) \in \{1, 2, 3\}^2$
3. mode $(3, 3)$
4. $\frac{1}{3}$
Let $g$ be the function defined by $g(x, y) = x y$ for $(x, y) \in \{(1, 1), (1,2), (1, 3), (2, 2), (2, 3), (3, 3)\}$.
1. Sketch the domain of $g$.
2. Find the probability density function $f$ that is proportional to $g$.
3. Find the mode of the distribution.
4. Find $\P\left[(X, Y) \in \left\{(1, 2), (1, 3), (2, 2), (2, 3)\right\}\right]$ where $(X, Y)$ has probability density function $f$.
Answer
2. $f(x,y) = \frac{1}{25} x y$ for $(x,y) \in \{(1,1), (1,2), (1,3), (2,2), (2,3), (3,3)\}$
3. mode $(3,3)$
4. $\frac{3}{5}$
Consider the following game: An urn initially contains one red and one green ball. At each stage, a ball is selected at random; if the ball is green, the game is over, and if the ball is red, the ball is returned to the urn, another red ball is added, and the game continues. Let $X$ denote the length of the game (that is, the number of selections required to obtain a green ball). Find the probability density function of $X$.
Solution
Note that $X$ takes values in $\N_+$. Using the multiplication rule for conditional probabilities, the PDF $f$ of $X$ is given by $f(1) = \frac{1}{2} = \frac{1}{1 \cdot 2}, \; f(2) = \frac{1}{2} \frac{1}{3} = \frac{1}{2 \cdot 3}, \; f(3) = \frac{1}{2} \frac{2}{3} \frac{1}{4} = \frac{1}{3 \cdot 4}$ and in general, $f(x) = \frac{1}{x (x + 1)}$ for $x \in \N_+$. By partial fractions, $f(x) = \frac{1}{x} - \frac{1}{x + 1}$ for $x \in \N_+$ so we can check that $f$ is a valid PDF: $\sum_{x=1}^\infty \left(\frac{1}{x} - \frac{1}{x+1}\right) = \lim_{n \to \infty} \sum_{x=1}^n \left(\frac{1}{x} - \frac{1}{x+1}\right) = \lim_{n \to \infty} \left(1 - \frac{1}{n+1}\right) = 1$
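A simulation provides a quick sanity check of this derivation. The following Python sketch (our own, hypothetical) plays the game many times and compares the relative frequencies with $f(x) = 1 / x(x+1)$.

```python
import random
from collections import Counter

def play_game(rng):
    """One play: return the number of draws needed to get the green ball."""
    reds = 1
    draws = 0
    while True:
        draws += 1
        if rng.random() < 1 / (reds + 1):   # one green among reds + 1 balls
            return draws
        reds += 1

rng = random.Random(0)
n = 100_000
counts = Counter(play_game(rng) for _ in range(n))
for x in range(1, 6):
    print(x, round(counts[x] / n, 4), round(1 / (x * (x + 1)), 4))
```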
Discrete Uniform Distributions
An element $X$ is chosen at random from a finite set $S$. The distribution of $X$ is the discrete uniform distribution on $S$.
1. $X$ has probability density function $f$ given by $f(x) = 1 \big/ \#(S)$ for $x \in S$.
2. $\P(X \in A) = \#(A) \big/ \#(S)$ for $A \subseteq S$.
Proof
The phrase at random means that all outcomes are equally likely.
Many random variables that arise in sampling or combinatorial experiments are transformations of uniformly distributed variables. The next few exercises review the standard methods of sampling from a finite population. The parameters $m$ and $n$ are positive integers.
Suppose that $n$ elements are chosen at random, with replacement from a set $D$ with $m$ elements. Let $\bs{X}$ denote the ordered sequence of elements chosen. Then $\bs{X}$ is uniformly distributed on the Cartesian power set $S = D^n$, and has probability density function $f$ given by $f(\bs{x}) = \frac{1}{m^n}, \quad \bs{x} \in S$
Proof
Recall that $\#(D^n) = m^n$.
Suppose that $n$ elements are chosen at random, without replacement from a set $D$ with $m$ elements (so $n \le m$). Let $\bs{X}$ denote the ordered sequence of elements chosen. Then $\bs{X}$ is uniformly distributed on the set $S$ of permutations of size $n$ chosen from $D$, and has probability density function $f$ given by $f(\bs{x}) = \frac{1}{m^{(n)}}, \quad \bs{x} \in S$
Proof
Recall that the number of permutations of size $n$ from $D$ is $m^{(n)}$.
Suppose that $n$ elements are chosen at random, without replacement, from a set $D$ with $m$ elements (so $n \le m$). Let $\bs{W}$ denote the unordered set of elements chosen. Then $\bs{W}$ is uniformly distributed on the set $T$ of combinations of size $n$ chosen from $D$, and has probability density function $f$ given by $f(\bs{w}) = \frac{1}{\binom{m}{n}}, \quad \bs{w} \in T$
Proof
Recall that the number of combinations of size $n$ from $D$ is $\binom{m}{n}$.
Suppose that $X$ is uniformly distributed on a finite set $S$ and that $B$ is a nonempty subset of $S$. Then the conditional distribution of $X$ given $X \in B$ is uniform on $B$.
Proof
From (7), the conditional probability density function of $X$ given $X \in B$ is $f(x \mid B) = \frac{f(x)}{\P(X \in B)} = \frac{1 \big/ \#(S)}{\#(B) \big/ \#(S)} = \frac{1}{\#(B)}, \quad x \in B$
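The result has a simple simulation interpretation: rejection sampling. In the hypothetical Python sketch below (our own example), $X$ is uniform on $\{1, \ldots, 6\}$, and repeatedly sampling until $X \in B = \{2, 4, 6\}$ produces the uniform distribution on $B$.

```python
import random
from collections import Counter

rng = random.Random(2)
B = {2, 4, 6}

def sample_given_B():
    # Rejection sampling: draw uniformly from {1, ..., 6} until the value lands in B.
    while True:
        x = rng.randrange(1, 7)
        if x in B:
            return x

n = 60_000
counts = Counter(sample_given_B() for _ in range(n))
print({x: round(counts[x] / n, 3) for x in sorted(B)})   # each near 1/3
```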
Hypergeometric Models
Suppose that a dichotomous population consists of $m$ objects of two different types: $r$ of the objects are type 1 and $m - r$ are type 0. Here are some typical examples:
• The objects are persons, each either male or female.
• The objects are voters, each either a democrat or a republican.
• The objects are devices of some sort, each either good or defective.
• The objects are fish in a lake, each either tagged or untagged.
• The objects are balls in an urn, each either red or green.
A sample of $n$ objects is chosen at random (without replacement) from the population. Recall that this means that the samples, either ordered or unordered, are equally likely. Note that this probability model has three parameters: the population size $m$, the number of type 1 objects $r$, and the sample size $n$. Each is a nonnegative integer with $r \le m$ and $n \le m$. Now, suppose that we keep track of order, and let $X_i$ denote the type of the $i$th object chosen, for $i \in \{1, 2, \ldots, n\}$. Thus, $X_i$ is an indicator variable (that is, a variable that just takes values 0 and 1).
$\bs{X} = (X_1, X_2, \ldots, X_n)$ has probability density function $f$ given by $f(x_1, x_2, \ldots, x_n) = \frac{r^{(y)} (m - r)^{(n-y)}}{m^{(n)}}, \quad (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n \text{ where } y = x_1 + x_2 + \cdots + x_n$
Proof
Recall again that the ordered samples are equally likely, and there are $m^{(n)}$ such samples. The number of ways to select the $y$ type 1 objects and place them in the positions where $x_i = 1$ is $r^{(y)}$. The number of ways to select the $n - y$ type 0 objects and place them in the positions where $x_i = 0$ is $(m - r)^{(n - y)}$. Thus the result follows from the multiplication principle.
Note that the value of $f(x_1, x_2, \ldots, x_n)$ depends only on $y = x_1 + x_2 + \cdots + x_n$, and hence is unchanged if $(x_1, x_2, \ldots, x_n)$ is permuted. This means that $(X_1, X_2, \ldots, X_n)$ is exchangeable. In particular, the distribution of $X_i$ is the same as the distribution of $X_1$, so $\P(X_i = 1) = \frac{r}{m}$. Thus, the variables are identically distributed. Also the distribution of $(X_i, X_j)$ is the same as the distribution of $(X_1, X_2)$, so $\P(X_i = 1, X_j = 1) = \frac{r (r - 1)}{m (m - 1)}$. Thus, $X_i$ and $X_j$ are not independent, and in fact are negatively correlated.
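These claims are easy to verify by brute force for small parameters. The Python sketch below (a hypothetical check of ours) enumerates all equally likely ordered samples for $m = 5$, $r = 2$, $n = 3$ and confirms that $\P(X_i = 1) = r / m$ for each $i$ and that $\P(X_i = 1, X_j = 1) = r (r - 1) / m (m - 1)$.

```python
from fractions import Fraction
from itertools import permutations

m, r, n = 5, 2, 3
population = [1] * r + [0] * (m - r)

# All ordered samples without replacement; positions are distinct, so all are equally likely.
samples = list(permutations(population, n))

def prob(event):
    return Fraction(sum(1 for s in samples if event(s)), len(samples))

print(prob(lambda s: s[0] == 1), prob(lambda s: s[2] == 1))   # both 2/5 = r/m
print(prob(lambda s: s[0] == 1 and s[1] == 1))                # 1/10 = r(r-1)/[m(m-1)]
```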
Now let $Y$ denote the number of type 1 objects in the sample. Note that $Y = \sum_{i=1}^n X_i$. Any counting variable can be written as a sum of indicator variables.
$Y$ has probability density function $g$ given by
$g(y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
1. $g(y - 1) \lt g(y)$ if and only if $y \lt t$ where $t = (r + 1) (n + 1) / (m + 2)$.
2. If $t$ is not a positive integer, there is a single mode at $\lfloor t \rfloor$.
3. If $t$ is a positive integer, then there are two modes, at $t - 1$ and $t$.
Proof
Recall again that the unordered samples of size $n$ chosen from the population are equally likely. By the multiplication principle, the number of samples with exactly $y$ type 1 objects and $n - y$ type 0 objects is $\binom{r}{y} \binom{m - r}{n - y}$. The total number of samples is $\binom{m}{n}$.
1. Note that $g(y - 1) \lt g(y)$ if and only if $\binom{r}{y - 1} \binom{m - r}{n + 1 - y} \lt \binom{r}{y} \binom{m - r}{n - y}$. Writing the binomial coefficients in terms of factorials and canceling terms gives $g(y - 1) \lt g(y)$ if and only if $y \lt t$, where $t$ is given above.
2. By the same argument, $g(y - 1) = g(y)$ if and only if $y = t$. If $t$ is not an integer then this cannot happen. Letting $z = \lfloor t \rfloor$, it follows from (a) that $g(y) \lt g(z)$ if $y \lt z$ or $y \gt z$.
3. If $t$ is a positive integer, then by (b), $g(t - 1) = g(t)$ and by (a) $g(y) \lt g(t - 1)$ if $y \lt t - 1$ and $g(y) \lt g(t)$ if $y \gt t$.
The distribution defined by the probability density function in the last result is the hypergeometric distribution with parameters $m$, $r$, and $n$. The term hypergeometric comes from a certain class of special functions, but is not particularly helpful in terms of remembering the model. Nonetheless, we are stuck with it. The set of values $\{0, 1, \ldots, n\}$ is a convenience set: it contains all of the values that have positive probability, but depending on the parameters, some of the values may have probability 0. Recall our convention for binomial coefficients: for $j, \; k \in \N_+$, $\binom{k}{j} = 0$ if $j \gt k$. Note also that the hypergeometric distribution is unimodal: the probability density function increases and then decreases, with either a single mode or two adjacent modes.
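For computations, the density and the mode formula are easy to implement. The following Python sketch (our own, with hypothetical parameters $m = 50$, $r = 20$, $n = 10$) relies on the convention that $\binom{k}{j} = 0$ for $j \gt k$, which math.comb satisfies.

```python
from math import comb, floor

def hypergeom_pmf(y, m, r, n):
    """g(y) for the hypergeometric distribution with parameters m, r, n."""
    return comb(r, y) * comb(m - r, n - y) / comb(m, n)

m, r, n = 50, 20, 10
pmf = [hypergeom_pmf(y, m, r, n) for y in range(n + 1)]
print(round(sum(pmf), 12))                                 # 1.0: a valid density

t = (r + 1) * (n + 1) / (m + 2)                            # threshold from the theorem
print(floor(t), max(range(n + 1), key=lambda y: pmf[y]))   # both 4: the mode
```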
We can extend the hypergeometric model to a population of three types. Thus, suppose that our population consists of $m$ objects; $r$ of the objects are type 1, $s$ are type 2, and $m - r - s$ are type 0. Here are some examples:
• The objects are voters, each a democrat, a republican, or an independent.
• The objects are cicadas, each one of three species: tredecula, tredecassini, or tredecim.
• The objects are peaches, each classified as small, medium, or large.
• The objects are faculty members at a university, each an assistant professor, or an associate professor, or a full professor.
Once again, a sample of $n$ objects is chosen at random (without replacement). The probability model now has four parameters: the population size $m$, the type sizes $r$ and $s$, and the sample size $n$. All are nonnegative integers with $r + s \le m$ and $n \le m$. Moreover, we now need two random variables to keep track of the counts for the three types in the sample. Let $Y$ denote the number of type 1 objects in the sample and $Z$ the number of type 2 objects in the sample.
$(Y, Z)$ has probability density function $h$ given by $h(y, z) = \frac{\binom{r}{y} \binom{s}{z} \binom{m - r - s}{n - y - z}}{\binom{m}{n}}, \quad (y, z) \in \{0, 1, \ldots, n\}^2 \text{ with } y + z \le n$
Proof
Once again, by the multiplication principle, the number of samples of size $n$ from the population with exactly $y$ type 1 objects, $z$ type 2 objects, and $n - y - z$ type 0 objects is $\binom{r}{y} \binom{s}{z} \binom{m - r - s}{n - y - z}$. The total number of samples of size $n$ is $\binom{m}{n}$.
The distribution defined by the density function in the last exercise is the bivariate hypergeometric distribution with parameters $m$, $r$, $s$, and $n$. Once again, the domain given is a convenience set; it includes the set of points with positive probability, but depending on the parameters, may include points with probability 0. Clearly, the same general pattern applies to populations with even more types. However, because of all of the parameters, the formulas are not worth remembering in detail; rather, just note the pattern, and remember the combinatorial meaning of the binomial coefficient. The hypergeometric model will be revisited later in this chapter, in the section on joint distributions and in the section on conditional distributions. The hypergeometric distribution and the multivariate hypergeometric distribution are studied in detail in the chapter on Finite Sampling Models. That chapter contains a variety of distributions that are based on discrete uniform distributions.
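To illustrate the bivariate case, here is a companion sketch (again our own illustrative Python, with arbitrary parameter values) that evaluates $h$ and checks that it sums to 1 over the domain:

```python
from math import comb

def bivariate_hypergeometric_pdf(m, r, s, n, y, z):
    """P(Y = y, Z = z): y type 1 and z type 2 objects in a sample of
    size n drawn without replacement from a population of m objects,
    r of type 1, s of type 2, and m - r - s of type 0."""
    if y + z > n:
        return 0.0                    # outside the domain
    return comb(r, y) * comb(s, z) * comb(m - r - s, n - y - z) / comb(m, n)

m, r, s, n = 120, 50, 40, 10
total = sum(bivariate_hypergeometric_pdf(m, r, s, n, y, z)
            for y in range(n + 1) for z in range(n + 1))
print(round(total, 12))               # 1.0
```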
Bernoulli Trials
A Bernoulli trials sequence is a sequence $(X_1, X_2, \ldots)$ of independent, identically distributed indicator variables. Random variable $X_i$ is the outcome of trial $i$, where in the usual terminology of reliability, 1 denotes success while 0 denotes failure. The process is named for Jacob Bernoulli. Let $p = \P(X_i = 1) \in [0, 1]$ denote the success parameter of the process. Note that the indicator variables in the hypergeometric model satisfy one of the assumptions of Bernoulli trials (identical distributions) but not the other (independence).
$\bs{X} = (X_1, X_2, \ldots, X_n)$ has probability density function $f$ given by $f(x_1, x_2, \ldots, x_n) = p^y (1 - p)^{n - y}, \quad (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n, \text{ where } y = x_1 + x_2 + \cdots + x_n$
Proof
By definition, $\P(X_i = 1) = p$ and $\P(X_i = 0) = 1 - p$. Equivalently, $\P(X_i = x) = p^x (1 - p)^{1-x}$ for $x \in \{0, 1\}$. The formula for $f$ then follows by independence.
Now let $Y$ denote the number of successes in the first $n$ trials. Note that $Y = \sum_{i=1}^n X_i$, so we see again that a complicated random variable can be written as a sum of simpler ones. In particular, a counting variable can always be written as a sum of indicator variables.
$Y$ has probability density function $g$ given by $g(y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
1. $g(y - 1) \lt g(y)$ if and only if $y \lt t$, where $t = (n + 1) p$.
2. If $t$ is not a positive integer, there is a single mode at $\lfloor t \rfloor$.
3. If $t$ is a positive integer, then there are two modes, at $t - 1$ and $t$.
Proof
From the previous result, any particular sequence of $n$ Bernoulli trials with $y$ successes and $n - y$ failures has probability $p^y (1 - p)^{n - y}$. The number of such sequences is $\binom{n}{y}$, so the formula for $g$ follows by the additivity of probability.
1. Note that $g(y - 1) \lt g(y)$ if and only if $\binom{n}{y - 1} p^{y-1} (1 - p)^{n + 1 - y} \lt \binom{n}{y} p^y (1 - p)^{n-y}$. Writing the binomial coefficients in terms of factorials and canceling gives $g(y - 1) \lt g(y)$ if and only if $y \lt t$ where $t = (n + 1) p$.
2. By the same argument, $g(y - 1) = g(y)$ if and only if $y = t$. If $t$ is not an integer, this cannot happen. Letting $z = \lfloor t \rfloor$, it follows from (a) that $g(y) \lt g(z)$ if $y \lt z$ or $y \gt z$.
3. If $t$ is a positive integer, then by (b), $g(t - 1) = g(t)$ and by (a) $g(y) \lt g(t - 1)$ if $y \lt t - 1$ and $g(y) \lt g(t)$ if $y \gt t$.
The distribution defined by the probability density function in the last theorem is called the binomial distribution with parameters $n$ and $p$. The distribution is unimodal: the probability density function at first increases and then decreases, with either a single mode or two adjacent modes. The binomial distribution is studied in detail in the chapter on Bernoulli Trials.
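As with the hypergeometric distribution, the binomial PDF and its mode are easy to check numerically. A short illustrative sketch (our own Python, arbitrary parameter values):

```python
from math import comb, floor

def binomial_pdf(n, p, y):
    """P(Y = y) for the number of successes in n Bernoulli trials."""
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

n, p = 10, 0.3
pdf = [binomial_pdf(n, p, y) for y in range(n + 1)]
t = (n + 1) * p
# both print 3: since t = 3.3 is not an integer, the single mode is floor(t)
print(pdf.index(max(pdf)), floor(t))
```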
Suppose that $p \gt 0$ and let $N$ denote the trial number of the first success. Then $N$ has probability density function $h$ given by $h(n) = (1 - p)^{n-1} p, \quad n \in \N_+$ The probability density function $h$ is decreasing and the mode is $n = 1$.
Proof
For $n \in \N_+$, the event $\{N = n\}$ means that the first $n - 1$ trials were failures and trial $n$ was a success. Each trial results in failure with probability $1 - p$ and success with probability $p$, and the trials are independent, so $\P(N = n) = (1 - p)^{n - 1} p$. Using geometric series, we can check that $\sum_{n=1}^\infty h(n) = \sum_{n=1}^\infty p (1 - p)^{n-1} = \frac{p}{1-(1-p)} = 1$
The distribution defined by the probability density function in the last exercise is the geometric distribution on $\N_+$ with parameter $p$. The geometric distribution is studied in detail in the chapter on Bernoulli Trials.
Sampling Problems
In the following exercises, be sure to check if the problem fits one of the general models above.
An urn contains 30 red and 20 green balls. A sample of 5 balls is selected at random, without replacement. Let $Y$ denote the number of red balls in the sample.
1. Compute the probability density function of $Y$ explicitly and identify the distribution by name and parameter values.
2. Graph the probability density function and identify the mode(s).
3. Find $\P(Y \gt 3)$.
Answer
1. $f(0) = 0.0073$, $f(1) = 0.0686$, $f(2) = 0.2341$, $f(3) = 0.3641$, $f(4) = 0.2587$, $f(5) = 0.0673$. Hypergeometric with $m = 50$, $r = 30$, $n = 5$
2. mode: $y = 3$
3. $\P(Y \gt 3) = 0.3260$
In the ball and urn experiment, select sampling without replacement and set $m = 50$, $r = 30$, and $n = 5$. Run the experiment 1000 times and note the agreement between the empirical density function of $Y$ and the probability density function.
An urn contains 30 red and 20 green balls. A sample of 5 balls is selected at random, with replacement. Let $Y$ denote the number of red balls in the sample.
1. Compute the probability density function of $Y$ explicitly and identify the distribution by name and parameter values.
2. Graph the probability density function and identify the mode(s).
3. Find $\P(Y \gt 3)$.
Answer
1. $f(0) = 0.0102$, $f(1) = 0.0768$, $f(2) = 0.2304$, $f(3) = 0.3456$, $f(4) = 0.2592$, $f(5) = 0.0778$. Binomial with $n = 5$, $p = 3/5$
2. mode: $y = 3$
3. $\P(Y \gt 3) = 0.3370$
In the ball and urn experiment, select sampling with replacement and set $m = 50$, $r = 30$, and $n = 5$. Run the experiment 1000 times and note the agreement between the empirical density function of $Y$ and the probability density function.
A group of voters consists of 50 democrats, 40 republicans, and 30 independents. A sample of 10 voters is chosen at random, without replacement. Let $X$ denote the number of democrats in the sample and $Y$ the number of republicans in the sample.
1. Give the probability density function of $X$.
2. Give the probability density function of $Y$.
3. Give the probability density function of $(X, Y)$.
4. Find the probability that the sample has at least 4 democrats and at least 4 republicans.
Answer
1. $g(x) = \frac{\binom{50}{x} \binom{70}{10-x}}{\binom{120}{10}}$ for $x \in \{0, 1, \ldots, 10\}$. This is the hypergeometric distribution with parameters $m = 120$, $r = 50$ and $n = 10$.
2. $h(y) = \frac{\binom{40}{y} \binom{80}{10-y}}{\binom{120}{10}}$ for $y \in \{0, 1, \ldots, 10\}$. This is the hypergeometric distribution with parameters $m = 120$, $r = 40$ and $n = 10$.
3. $f(x,y) = \frac{\binom{50}{x} \binom{40}{y} \binom{30}{10 - x - y}}{\binom{120}{10}}$ for $(x,y) \in \{0, 1, \ldots, 10\}^2$ with $x + y \le 10$. This is the bivariate hypergeometric distribution with parameters $m = 120$, $r = 50$, $s = 40$ and $n = 10$.
4. $\P(X \ge 4, Y \ge 4) = \frac{15\,137\,200}{75\,597\,113} \approx 0.200$
The Math Club at Enormous State University (ESU) has 20 freshmen, 40 sophomores, 30 juniors, and 10 seniors. A committee of 8 club members is chosen at random, without replacement to organize $\pi$-day activities. Let $X$ denote the number of freshman in the sample, $Y$ the number of sophomores, and $Z$ the number of juniors.
1. Give the probability density function of $X$.
2. Give the probability density function of $Y$.
3. Give the probability density function of $Z$.
4. Give the probability density function of $(X, Y)$.
5. Give the probability density function of $(X, Y, Z)$.
6. Find the probability that the committee has no seniors.
Answer
1. $f_X(x) = \frac{\binom{20}{x} \binom{80}{8-x}}{\binom{100}{8}}$ for $x \in \{0, 1, \ldots, 8\}$. This is the hypergeometric distribution with parameters $m = 100$, $r = 20$, and $n = 8$.
2. $f_Y(y) = \frac{\binom{40}{y} \binom{60}{8-y}}{\binom{100}{8}}$ for $y \in \{0, 1, \ldots, 8\}$. This is the hypergeometric distribution with parameters $m = 100$, $r = 40$, and $n = 8$.
3. $f_Z(z) = \frac{\binom{30}{z} \binom{70}{8-z}}{\binom{100}{8}}$ for $z \in \{0, 1, \ldots, 8\}$. This is the hypergeometric distribution with parameters $m = 100$, $r = 30$, and $n = 8$.
4. $f_{X,Y}(x,y) = \frac{\binom{20}{x} \binom{40}{y} \binom{40}{8-x-y}}{\binom{100}{8}}$ for $(x,y) \in \{0, 1, \ldots, 8\}^2$ with $x + y \le 8$. This is the bivariate hypergeometric distribution with parameters $m = 100$, $r = 20$, $s = 40$ and $n = 8$.
5. $f_{X,Y,Z}(x,y,z) = \frac{\binom{20}{x} \binom{40}{y} \binom{30}{z} \binom{10}{8-x-y-z}}{\binom{100}{8}}$ for $(x,y,z) \in \{0, 1, \ldots, 8\}^3$ with $x + y + z \le 8$. This is the tri-variate hypergeometric distribution with parameters $m = 100$, $r = 20$, $s = 40$, $t = 30$, and $n = 8$.
6. $\P(X + Y + Z = 8) = \frac{156\,597\,013}{375\,935\,140} \approx 0.417$
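As a check on part (f), the committee has no seniors precisely when all 8 members come from the 90 non-seniors, so the probability is $\binom{90}{8} \big/ \binom{100}{8}$. A quick verification with Python's standard library (our own illustrative snippet):

```python
from math import comb
from fractions import Fraction

p = Fraction(comb(90, 8), comb(100, 8))  # Fraction reduces automatically
print(p, float(p))                       # 156597013/375935140, about 0.417
```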
Coins and Dice
Suppose that a coin with probability of heads $p$ is tossed repeatedly, and the sequence of heads and tails is recorded.
1. Identify the underlying probability model by name and parameter.
2. Let $Y$ denote the number of heads in the first $n$ tosses. Give the probability density function of $Y$ and identify the distribution by name and parameters.
3. Let $N$ denote the number of tosses needed to get the first head. Give the probability density function of $N$ and identify the distribution by name and parameter.
Answer
1. Bernoulli trials with success parameter $p$.
2. $f(k) = \binom{n}{k} p^k (1 - p)^{n-k}$ for $k \in \{0, 1, \ldots, n\}$. This is the binomial distribution with trial parameter $n$ and success parameter $p$.
3. $g(n) = p (1 - p)^{n-1}$ for $n \in \N_+$. This is the geometric distribution with success parameter $p$.
Suppose that a coin with probability of heads $p = 0.4$ is tossed 5 times. Let $Y$ denote the number of heads.
1. Compute the probability density function of $Y$ explicitly.
2. Graph the probability density function and identify the mode.
3. Find $\P(Y \gt 3)$.
Answer
1. $f(0) = 0.0778$, $f(1) = 0.2592$, $f(2) = 0.3456$, $f(3) = 0.2304$, $f(4) = 0.0768$, $f(5) = 0.0102$
2. mode: $k = 2$
3. $\P(Y \gt 3) = 0.0870$
In the binomial coin experiment, set $n = 5$ and $p = 0.4$. Run the experiment 1000 times and compare the empirical density function of $Y$ with the probability density function.
Suppose that a coin with probability of heads $p = 0.2$ is tossed until heads occurs. Let $N$ denote the number of tosses.
1. Find the probability density function of $N$.
2. Find $\P(N \le 5)$.
Answer
1. $f(n) = (0.8)^{n-1} 0.2$ for $n \in \N_+$
2. $\P(N \le 5) = 0.67232$
In the negative binomial experiment, set $k = 1$ and $p = 0.2$. Run the experiment 1000 times and compare the empirical density function with the probability density function.
Suppose that two fair, standard dice are tossed and the sequence of scores $(X_1, X_2)$ recorded. Let $Y = X_1 + X_2$ denote the sum of the scores, $U = \min\{X_1, X_2\}$ the minimum score, and $V = \max\{X_1, X_2\}$ the maximum score.
1. Find the probability density function of $(X_1, X_2)$. Identify the distribution by name.
2. Find the probability density function of $Y$.
3. Find the probability density function of $U$.
4. Find the probability density function of $V$.
5. Find the probability density function of $(U, V)$.
Answer
We denote the PDFs by $f$, $g$, $h_1$, $h_2$, and $h$ respectively.
1. $f(x_1, x_2) = \frac{1}{36}$ for $(x_1,x_2) \in \{1, 2, 3, 4, 5, 6\}^2$. This is the uniform distribution on $\{1, 2, 3, 4, 5, 6\}^2$.
2. $g(2) = g(12) = \frac{1}{36}$, $g(3) = g(11) = \frac{2}{36}$, $g(4) = g(10) = \frac{3}{36}$, $g(5) = g(9) = \frac{4}{36}$, $g(6) = g(8) = \frac{5}{36}$, $g(7) = \frac{6}{36}$
3. $h_1(1) = \frac{11}{36}$, $h_1(2) = \frac{9}{36}$, $h_1(3) = \frac{7}{36}$, $h_1(4) = \frac{5}{36}$, $h_1(5) = \frac{3}{36}$, $h_1(6) = \frac{1}{36}$
4. $h_2(1) = \frac{1}{36}$, $h_2(2) = \frac{3}{36}$, $h_2(3) = \frac{5}{36}$, $h_2(4) = \frac{7}{36}$, $h_2(5) = \frac{9}{36}$, $h_2(6) = \frac{11}{36}$
5. $h(u,v) = \frac{2}{36}$ if $u \lt v$, $h(u, v) = \frac{1}{36}$ if $u = v$ where $(u, v) \in \{1, 2, 3, 4, 5, 6\}^2$ with $u \le v$
Note that $(U, V)$ in the last exercise could serve as the outcome of the experiment that consists of throwing two standard dice if we did not bother to record order. Note from the previous exercise that this random vector does not have a uniform distribution when the dice are fair. The mistaken idea that this vector should have the uniform distribution was the cause of difficulties in the early development of probability.
In the dice experiment, select $n = 2$ fair dice. Select the following random variables and note the shape and location of the probability density function. Run the experiment 1000 times. For each of the following variables, compare the empirical density function with the probability density function.
1. $Y$, the sum of the scores.
2. $U$, the minimum score.
3. $V$, the maximum score.
In the die-coin experiment, a fair, standard die is rolled and then a fair coin is tossed the number of times showing on the die. Let $N$ denote the die score and $Y$ the number of heads.
1. Find the probability density function of $N$. Identify the distribution by name.
2. Find the probability density function of $Y$.
Answer
1. $g(n) = \frac{1}{6}$ for $n \in \{1, 2, 3, 4, 5, 6\}$. This is the uniform distribution on $\{1, 2, 3, 4, 5, 6\}$.
2. $h(0) = \frac{63}{384}$, $h(1) = \frac{120}{384}$, $h(2) = \frac{90}{384}$, $h(3) = \frac{64}{384}$, $h(4) = \frac{29}{384}$, $h(5) = \frac{8}{384}$, $h(6) = \frac{1}{384}$
Run the die-coin experiment 1000 times. For the number of heads, compare the empirical density function with the probability density function.
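The PDF of $Y$ in the die-coin experiment comes from conditioning on the die score: $\P(Y = y) = \sum_{n=1}^6 \P(N = n) \binom{n}{y} \left(\frac{1}{2}\right)^n$. Here is an illustrative Python sketch (ours, not part of the experiment's software) that reproduces the values above exactly, using rational arithmetic:

```python
from math import comb
from fractions import Fraction

def die_coin_pdf(y):
    """P(Y = y), conditioning on the die score N; given N = n,
    Y is binomial with parameters n and 1/2."""
    return sum(Fraction(1, 6) * comb(n, y) * Fraction(1, 2) ** n
               for n in range(1, 7))

pdf = [die_coin_pdf(y) for y in range(7)]
print(pdf)        # reduced fractions: 21/128 = 63/384, 5/16 = 120/384, and so on
print(sum(pdf))   # 1
```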
Suppose that a bag contains 12 coins: 5 are fair, 4 are biased with probability of heads $\frac{1}{3}$, and 3 are two-headed. A coin is chosen at random from the bag and tossed 5 times. Let $V$ denote the probability of heads of the selected coin and let $Y$ denote the number of heads.
1. Find the probability density function of $V$.
2. Find the probability density function of $Y$.
Answer
1. $g(1/2) = 5/12$, $g(1/3) = 4/12$, $g(1) = 3/12$
2. $h(0) = 5311/93312$, $h(1) = 16315/93312$, $h(2) = 22390/93312$, $h(3) = 17270/93312$, $h(4) = 7355/93312$, $h(5) = 24671/93312$
Compare the die-coin experiment with the bag of coins experiment. In the first experiment, we toss a coin with a fixed probability of heads a random number of times. In the second experiment, we effectively toss a coin with a random probability of heads a fixed number of times. In both cases, we can think of starting with a binomial distribution and randomizing one of the parameters.
In the coin-die experiment, a fair coin is tossed. If the coin lands tails, a fair die is rolled. If the coin lands heads, an ace-six flat die is tossed (faces 1 and 6 have probability $\frac{1}{4}$ each, while faces 2, 3, 4, 5 have probability $\frac{1}{8}$ each). Find the probability density function of the die score $Y$.
Answer
$f(y) = 5/24$ for $y \in \{1,6\}$, $f(y) = 7/24$ for $y \in \{2, 3, 4, 5\}$
Run the coin-die experiment 1000 times, with the settings in the previous exercise. Compare the empirical density function with the probability density function.
Suppose that a standard die is thrown 10 times. Let $Y$ denote the number of times an ace or a six occurred. Give the probability density function of $Y$ and identify the distribution by name and parameter values in each of the following cases:
1. The die is fair.
2. The die is an ace-six flat.
Answer
1. $f(k) = \binom{10}{k} \left(\frac{1}{3}\right)^k \left(\frac{2}{3}\right)^{10-k}$ for $k \in \{0, 1, \ldots, 10\}$. This is the binomial distribution with trial parameter $n = 10$ and success parameter $p = \frac{1}{3}$
2. $f(k) = \binom{10}{k} \left(\frac{1}{2}\right)^{10}$ for $k \in \{0, 1, \ldots, 10\}$. This is the binomial distribution with trial parameter $n = 10$ and success parameter $p = \frac{1}{2}$
Suppose that a standard die is thrown until an ace or a six occurs. Let $N$ denote the number of throws. Give the probability density function of $N$ and identify the distribution by name and parameter values in each of the following cases:
1. The die is fair.
2. The die is an ace-six flat.
Answer
1. $g(n) = \left(\frac{2}{3}\right)^{n-1} \frac{1}{3}$ for $n \in \N_+$. This is the geometric distribution with success parameter $p = \frac{1}{3}$
2. $g(n) = \left(\frac{1}{2}\right)^n$ for $n \in \N_+$. This is the geometric distribution with success parameter $p = \frac{1}{2}$
Fred and Wilma take turns tossing a coin with probability of heads $p \in (0, 1)$: Fred first, then Wilma, then Fred again, and so forth. The first person to toss heads wins the game. Let $N$ denote the number of tosses, and $W$ the event that Wilma wins.
1. Give the probability density function of $N$ and identify the distribution by name.
2. Compute $\P(W)$ and sketch the graph of this probability as a function of $p$.
3. Find the conditional probability density function of $N$ given $W$.
Answer
1. $f(n) = p(1 - p)^{n-1}$ for $n \in \N_+$. This is the geometric distribution with success parameter $p$.
2. $\P(W) = \frac{1-p}{2-p}$
3. $f(n \mid W) = p (2 - p) (1 - p)^{n-2}$ for $n \in \{2, 4, \ldots\}$
The alternating coin tossing game is studied in more detail in the section on The Geometric Distribution in the chapter on Bernoulli trials.
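Part (b) of the last exercise is also easy to sanity-check by simulation. A hedged sketch (our own Python, with an arbitrary value of $p$ and an arbitrary seed):

```python
import random

def wilma_wins(p, rng):
    """Simulate one game; Wilma tosses on the even-numbered turns."""
    toss = 0
    while True:
        toss += 1
        if rng.random() < p:          # heads ends the game
            return toss % 2 == 0

p, games = 0.3, 100_000
rng = random.Random(12345)
frac = sum(wilma_wins(p, rng) for _ in range(games)) / games
print(frac, (1 - p) / (2 - p))        # empirical vs. exact, about 0.412
```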
Suppose that $k$ players each have a coin with probability of heads $p$, where $k \in \{2, 3, \ldots\}$ and where $p \in (0, 1)$.
1. Suppose that the players toss their coins at the same time. Find the probability that there is an odd man, that is, one player with a different outcome than all the rest.
2. Suppose now that the players repeat the procedure in part (a) until there is an odd man. Find the probability density function of $N$, the number of rounds played, and identify the distribution by name.
Answer
1. The probability is $2 p (1 - p)$ if $k = 2$, and is $k p (1 - p)^{k-1} + k p^{k-1} (1 - p)$ if $k \gt 2$.
2. Let $r_k$ denote the probability in part (a). $N$ has PDF $f(n) = (1 - r_k)^{n-1} r_k$ for $n \in \N_+$, and has the geometric distribution with parameter $r_k$.
The odd man out game is treated in more detail in the section on the Geometric Distribution in the chapter on Bernoulli Trials.
Cards
Recall that a poker hand consists of 5 cards chosen at random and without replacement from a standard deck of 52 cards. Let $X$ denote the number of spades in the hand and $Y$ the number of hearts in the hand. Give the probability density function of each of the following random variables, and identify the distribution by name:
1. $X$
2. $Y$
3. $(X, Y)$
Answer
1. $g(x) = \frac{\binom{13}{x} \binom{39}{5-x}}{\binom{52}{5}}$ for $x \in \{0, 1, 2, 3, 4, 5\}$. This is the hypergeometric distribution with population size $m = 52$, type parameter $r = 13$, and sample size $n = 5$
2. $h(y) = \frac{\binom{13}{y} \binom{39}{5-y}}{\binom{52}{5}}$ for $y \in \{0, 1, 2, 3, 4, 5\}$. This is the same hypergeometric distribution as in part (a).
3. $f(x, y) = \frac{\binom{13}{x} \binom{13}{y} \binom{26}{5-x-y}}{\binom{52}{5}}$ for $(x,y) \in \{0, 1, 2, 3, 4, 5\}^2$ with $x + y \le 5$. This is a bivariate hypergeometric distribution.
Recall that a bridge hand consists of 13 cards chosen at random and without replacement from a standard deck of 52 cards. An honor card is a card of denomination ace, king, queen, jack or 10. Let $N$ denote the number of honor cards in the hand.
1. Find the probability density function of $N$ and identify the distribution by name.
2. Find the probability that the hand has no honor cards. A hand of this kind is known as a Yarborough, in honor of the Second Earl of Yarborough.
Answer
1. $f(n) = \frac{\binom{20}{n} \binom{32}{13-n}}{\binom{52}{13}}$ for $n \in \{0, 1, \ldots, 13\}$. This is the hypergeometric distribution with population size $m = 52$, type parameter $r = 20$ and sample size $n = 13$.
2. 0.000547
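For part (b), all 13 cards must come from the 32 non-honor cards, so the probability is $\binom{32}{13} \big/ \binom{52}{13}$. A quick check (our own snippet):

```python
from math import comb

p = comb(32, 13) / comb(52, 13)
print(p)   # about 0.000547, roughly 1 chance in 1828
```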
In the most common high card point system in bridge, an ace is worth 4 points, a king is worth 3 points, a queen is worth 2 points, and a jack is worth 1 point. Find the probability density function of $V$, the point value of a random bridge hand.
Reliability
Suppose that in a batch of 500 components, 20 are defective and the rest are good. A sample of 10 components is selected at random and tested. Let $X$ denote the number of defectives in the sample.
1. Find the probability density function of $X$ and identify the distribution by name and parameter values.
2. Find the probability that the sample contains at least one defective component.
Answer
1. $f(x) = \frac{\binom{20}{x} \binom{480}{10-x}}{\binom{500}{10}}$ for $x \in \{0, 1, \ldots, 10\}$. This is the hypergeometric distribution with population size $m = 500$, type parameter $r = 20$, and sample size $n = 10$.
2. $\P(X \ge 1) = 1 - \frac{\binom{480}{10}}{\binom{500}{10}} \approx 0.3377$
A plant has 3 assembly lines that produce a certain type of component. Line 1 produces 50% of the components and has a defective rate of 4%; line 2 produces 30% of the components and has a defective rate of 5%; line 3 produces 20% of the components and has a defective rate of 1%. A component is chosen at random from the plant and tested.
1. Find the probability that the component is defective.
2. Given that the component is defective, find the conditional probability density function of the line that produced the component.
Answer
Let $D$ denote the event that the item is defective, and $f(\cdot \mid D)$ the PDF of the line number given $D$.
1. $\P(D) = 0.037$
2. $f(1 \mid D) = 0.541$, $f(2 \mid D) = 0.405$, $f(3 \mid D) = 0.054$
Recall that in the standard model of structural reliability, a system consists of $n$ components, each of which, independently of the others, is either working or failed. Let $X_i$ denote the state of component $i$, where 1 means working and 0 means failed. Thus, the state vector is $\bs{X} = (X_1, X_2, \ldots, X_n)$. The system as a whole is also either working or failed, depending only on the states of the components. Thus, the state of the system is an indicator random variable $U = u(\bs{X})$ that depends on the states of the components according to a structure function $u: \{0,1\}^n \to \{0, 1\}$. In a series system, the system works if and only if every component works. In a parallel system, the system works if and only if at least one component works. In a $k$ out of $n$ system, the system works if and only if at least $k$ of the $n$ components work.
The reliability of a device is the probability that it is working. Let $p_i = \P(X_i = 1)$ denote the reliability of component $i$, so that $\bs{p} = (p_1, p_2, \ldots, p_n)$ is the vector of component reliabilities. Because of the independence assumption, the system reliability depends only on the component reliabilities, according to a reliability function $r(\bs{p}) = \P(U = 1)$. Note that when all component reliabilities have the same value $p$, the states of the components form a sequence of $n$ Bernoulli trials. In this case, the system reliability is, of course, a function of the common component reliability $p$.
Suppose that the component reliabilities all have the same value $p$. Let $\bs{X}$ denote the state vector and $Y$ denote the number of working components.
1. Give the probability density function of $\bs{X}$.
2. Give the probability density function of $Y$ and identify the distribution by name and parameter.
3. Find the reliability of the $k$ out of $n$ system.
Answer
1. $f(x_1, x_2, \ldots, x_n) = p^y (1 - p)^{n-y}$ for $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $y = x_1 + x_2 + \cdots + x_n$
2. $g(y) = \binom{n}{y} p^y (1 - p)^{n-y}$ for $y \in \{0, 1, \ldots, n\}$. This is the binomial distribution with trial parameter $n$ and success parameter $p$.
3. $r(p) = \sum_{i=k}^n \binom{n}{i} p^i (1 - p)^{n-i}$
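The reliability function in part (c) is a one-line sum. An illustrative Python sketch (ours), which also recovers the parallel ($k = 1$) and series ($k = n$) systems as special cases:

```python
from math import comb

def k_out_of_n_reliability(n, k, p):
    """P(at least k of n independent components work), common reliability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

for k in range(1, 5):
    print(k, round(k_out_of_n_reliability(4, k, 0.8), 4))
# 1 0.9984, 2 0.9728, 3 0.8192, 4 0.4096; compare the next exercise
```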
Suppose that we have 4 independent components, with common reliability $p = 0.8$. Let $Y$ denote the number of working components.
1. Find the probability density function of $Y$ explicitly.
2. Find the reliability of the parallel system.
3. Find the reliability of the 2 out of 4 system.
4. Find the reliability of the 3 out of 4 system.
5. Find the reliability of the series system.
Answer
1. $g(0) = 0.0016$, $g(1) = 0.0256$, $g(2) = 0.1536$, $g(3) = g(4) = 0.4096$
2. $r_{4,1} = 0.9984$
3. $r_{4,2} = 0.9728$
4. $r_{4,3} = 0.8192$
5. $r_{4,4} = 0.4096$
Suppose that we have 4 independent components, with reliabilities $p_1 = 0.6$, $p_2 = 0.7$, $p_3 = 0.8$, and $p_4 = 0.9$. Let $Y$ denote the number of working components.
1. Find the probability density function of $Y$.
2. Find the reliability of the parallel system.
3. Find the reliability of the 2 out of 4 system.
4. Find the reliability of the 3 out of 4 system.
5. Find the reliability of the series system.
Answer
1. $g(0) = 0.0024$, $g(1) = 0.0404$, $g(2) = 0.2144$, $g(3) = 0.4404$, $g(4) = 0.3024$
2. $r_{4,1} = 0.9976$
3. $r_{4,2} = 0.9572$
4. $r_{4,3} = 0.7428$
5. $r_{4,4} = 0.3024$
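With unequal component reliabilities, the number of working components no longer has a binomial distribution, but its PDF can be computed by processing the components one at a time and updating the distribution of the running count. A sketch of this dynamic program (our own illustrative Python), which reproduces part (a) of the last exercise:

```python
def working_count_pdf(probs):
    """PDF of the number of working components, for independent
    components with the given reliabilities."""
    pdf = [1.0]                        # distribution of the count so far
    for p in probs:
        new = [0.0] * (len(pdf) + 1)
        for y, q in enumerate(pdf):
            new[y] += q * (1 - p)      # this component failed
            new[y + 1] += q * p        # this component works
        pdf = new
    return pdf

g = working_count_pdf([0.6, 0.7, 0.8, 0.9])
print([round(x, 4) for x in g])  # [0.0024, 0.0404, 0.2144, 0.4404, 0.3024]
```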
The Poisson Distribution
Suppose that $a \gt 0$. Define $f$ by $f(n) = e^{-a} \frac{a^n}{n!}, \quad n \in \N$
1. $f$ is a probability density function.
2. $f(n - 1) \lt f(n)$ if and only if $n \lt a$.
3. If $a$ is not a positive integer, there is a single mode at $\lfloor a \rfloor$
4. If $a$ is a positive integer, there are two modes at $a - 1$ and $a$.
Proof
1. Recall the exponential series from calculus: $\sum_{n=0}^\infty \frac{a^n}{n!} = e^a$. Hence $f$ is a probability density function.
2. Note that $f(n - 1) \lt f(n)$ if and only if $\frac{a^{n-1}}{(n - 1)!} \lt \frac{a^n}{n!}$ if and only if $1 \lt \frac{a}{n}$.
3. By the same argument, $f(n - 1) = f(n)$ if and only if $a = n$. If $a$ is not a positive integer this cannot happen. Hence, letting $k = \lfloor a \rfloor$, it follows from (b) that $f(n) \lt f(k)$ if $n \lt k$ or $n \gt k$.
4. If $a$ is a positive integer, then $f(a - 1) = f(a)$. From (b), $f(n) \lt f(a - 1)$ if $n \lt a - 1$ and $f(n) \lt f(a)$ if $n \gt a$.
The distribution defined by the probability density function in the previous exercise is the Poisson distribution with parameter $a$, named after Simeon Poisson. Note that like the other named distributions we studied above (hypergeometric and binomial), the Poisson distribution is unimodal: the probability density function at first increases and then decreases, with either a single mode or two adjacent modes. The Poisson distribution is studied in detail in the Chapter on Poisson Processes, and is used to model the number of random points in a region of time or space, under certain ideal conditions. The parameter $a$ is proportional to the size of the region of time or space.
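The Poisson PDF and its modes are easy to explore numerically. An illustrative Python sketch (ours, with an arbitrary integer parameter chosen to show the two-mode case):

```python
from math import exp, factorial

def poisson_pdf(a, n):
    """P(N = n) for the Poisson distribution with parameter a."""
    return exp(-a) * a ** n / factorial(n)

a = 4
pdf = [poisson_pdf(a, n) for n in range(20)]
print(pdf.index(max(pdf)))   # 3, the first of the two modes 3 and 4 (a is an integer)
print(round(poisson_pdf(a, 3), 6) == round(poisson_pdf(a, 4), 6))  # True
```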
Suppose that customers arrive at a service station according to the Poisson model, at an average rate of 4 per hour. Thus, the number of customers $N$ who arrive in a 2-hour period has the Poisson distribution with parameter 8.
1. Find the modes.
2. Find $\P(N \ge 6)$.
Answer
1. modes: 7, 8
2. $\P(N \ge 6) = 0.8088$
In the Poisson experiment, set $r = 4$ and $t = 2$. Run the simulation 1000 times and compare the empirical density function to the probability density function.
Suppose that the number of flaws $N$ in a piece of fabric of a certain size has the Poisson distribution with parameter 2.5.
1. Find the mode.
2. Find $\P(N \gt 4)$.
Answer
1. mode: 2
2. $\P(N \gt 4) = 0.1088$
Suppose that the number of raisins $N$ in a piece of cake has the Poisson distribution with parameter 10.
1. Find the modes.
2. Find $\P(8 \le N \le 12)$.
Answer
1. modes: 9, 10
2. $\P(8 \le N \le 12) = 0.5713$
A Zeta Distribution
Let $g$ be the function defined by $g(n) = \frac{1}{n^2}$ for $n \in \N_+$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find the mode of the distribution.
3. Find $\P(N \le 5)$ where $N$ has probability density function $f$.
Answer
1. $f(n) = \frac{6}{\pi^2 n^2}$ for $n \in \N_+$. Recall that $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$
2. Mode $n = 1$
3. $\P(N \le 5) = \frac{5269}{600 \pi^2}$
The distribution defined in the previous exercise is a member of the zeta family of distributions. Zeta distributions are used to model sizes or ranks of certain types of objects, and are studied in more detail in the chapter on Special Distributions.
Benford's Law
Let $f$ be the function defined by $f(d) = \log(d + 1) - \log(d) = \log\left(1 + \frac{1}{d}\right)$ for $d \in \{1, 2, \ldots, 9\}$. (The logarithm function is the base 10 common logarithm, not the base $e$ natural logarithm.)
1. Show that $f$ is a probability density function.
2. Compute the values of $f$ explicitly, and sketch the graph.
3. Find $\P(X \le 3)$ where $X$ has probability density function $f$.
Answer
1. Note that $\sum_{d=1}^9 f(d) = \log(10) = 1$. The sum collapses.
2. $f(1) = 0.3010$, $f(2) = 0.1761$, $f(3) = 0.1249$, $f(4) = 0.0969$, $f(5) = 0.0792$, $f(6) = 0.0669$, $f(7) = 0.0580$, $f(8) = 0.0512$, $f(9) = 0.0458$
3. $\log(4) \approx 0.6020$
The distribution defined in the previous exercise is known as Benford's law, and is named for the American physicist and engineer Frank Benford. This distribution governs the leading digit in many real sets of data. Benford's law is studied in more detail in the chapter on Special Distributions.
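The computations in the last exercise take only a few lines. An illustrative sketch (our own Python):

```python
from math import log10

f = {d: log10(1 + 1 / d) for d in range(1, 10)}
print({d: round(p, 4) for d, p in f.items()})   # matches the values above
print(round(sum(f.values()), 12))               # 1.0: the sum telescopes to log(10)
print(round(f[1] + f[2] + f[3], 4))             # P(X <= 3) = log(4), about 0.6021
```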
Data Analysis Exercises
In the M&M data, let $R$ denote the number of red candies and $N$ the total number of candies. Compute and graph the empirical probability density function of each of the following:
1. $R$
2. $N$
3. $R$ given $N \gt 57$
Answer
We denote the PDF of $R$ by $f$ and the PDF of $N$ by $g$.
1. $f(3) = \frac{1}{30}$, $f(4) = \frac{3}{30}$, $f(5) = \frac{3}{30}$, $f(6) = \frac{2}{30}$, $f(8) = \frac{4}{30}$, $f(9) = \frac{5}{30}$, $f(10) = \frac{2}{30}$, $f(11) = \frac{1}{30}$, $f(12) = \frac{3}{30}$, $f(14) = \frac{3}{30}$, $f(15) = \frac{3}{30}$, $f(20) = \frac{1}{30}$
2. $g(50) = \frac{1}{30}$, $g(53) = \frac{1}{30}$, $g(54) = \frac{1}{30}$, $g(55) = \frac{4}{30}$, $g(56) = \frac{4}{30}$, $g(57) = \frac{3}{30}$, $g(58) = \frac{9}{30}$, $g(59) = \frac{3}{30}$, $g(60) = \frac{2}{30}$, $g(61) = \frac{2}{30}$
3. $f(3 \mid N \gt 57) = \frac{1}{16}$, $f(4 \mid N \gt 57) = \frac{1}{16}$, $f(6 \mid N \gt 57) = \frac{1}{16}$, $f(8 \mid N \gt 57) = \frac{3}{16}$, $f(9 \mid N \gt 57) = \frac{3}{16}$, $f(11 \mid N \gt 57) = \frac{1}{16}$, $f(12 \mid N \gt 57) = \frac{1}{16}$, $f(14 \mid N \gt 57) = \frac{3}{16}$, $f(15 \mid N \gt 57) = \frac{2}{16}$
In the Cicada data, let $G$ denote gender, $S$ species type, and $W$ body weight (in grams). Compute the empirical probability density function of each of the following:
1. $G$
2. $S$
3. $(G, S)$
4. $G$ given $W \gt 0.20$ grams.
Answer
We denote the PDF of $G$ by $g$, the PDF of $S$ by $h$ and the PDF of $(G, S)$ by $f$.
1. $g(0) = \frac{59}{104}$, $g(1) = \frac{45}{104}$
2. $h(0) = \frac{44}{104}$, $h(1) = \frac{6}{104}$, $h(2) = \frac{54}{104}$
3. $f(0, 0) = \frac{16}{104}$, $f(0, 1) = \frac{3}{104}$, $f(0, 2) = \frac{40}{104}$, $f(1, 0) = \frac{28}{104}$, $f(1, 1) = \frac{3}{104}$, $f(1, 2) = \frac{14}{104}$
4. $g(0 \mid W \gt 0.2) = \frac{31}{73}$, $g(1 \mid W \gt 0.2) = \frac{42}{73}$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/03%3A_Distributions/3.01%3A_Discrete_Distributions.txt |
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
In the previous section, we considered discrete distributions. In this section, we study a complementary type of distribution. As usual, if you are a new student of probability, you may want to skip the technical details.
Basic Theory
Definitions and Basic Properties
As usual, our starting point is a random experiment modeled by a probability space $(S, \mathscr S, \P)$. So to review, $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. We use the terms probability measure and probability distribution synonymously in this text. Also, since we use a general definition of random variable, every probability measure can be thought of as the probability distribution of a random variable, so we can always take this point of view if we like. Indeed, most probability measures naturally have random variables associated with them.
In this section, we assume that $S \subseteq \R^n$ for some $n \in \N_+$.
Details
Technically, $S$ is a measurable subset of $\R^n$ and $\mathscr S$ is the $\sigma$-algebra of measurable subsets of $S$. Typically in applications, $S$ is defined by a finite number of inequalities involving elementary functions.
Here is our first fundamental definition.
The probability measure $\P$ is continuous if $\P(\{x\}) = 0$ for all $x \in S$.
The fact that each point is assigned probability 0 might seem impossible or paradoxical at first, but soon we will see very familiar analogies.
If $\P$ is a continuous distribution then $\P(C) = 0$ for every countable $C \subseteq S$.
Proof
Since $C$ is countable, it follows from the additivity axiom of probability that
$\P(C) = \sum_{x \in C} \P(\{x\}) = 0$
Thus, continuous distributions are in complete contrast with discrete distributions, for which all of the probability mass is concentrated on the points in a discrete set. For a continuous distribution, the probability mass is continuously spread over $S$ in some sense. In the picture below, the light blue shading is intended to suggest a continuous distribution of probability.
Typically, $S$ is a region of $\R^n$ defined by inequalities involving elementary functions, for example an interval in $\R$, a circular region in $\R^2$, and a conical region in $\R^3$. Suppose that $\P$ is a continuous probability measure on $S$. The fact that each point in $S$ has probability 0 is conceptually the same as the fact that an interval of $\R$ can have positive length even though it is composed of points each of which has 0 length. Similarly, a region of $\R^2$ can have positive area even though it is composed of points (or curves) each of which has area 0. In the one-dimensional case, continuous distributions are used to model random variables that take values in intervals of $\R$, variables that can, in principle, be measured with any degree of accuracy. Such variables abound in applications and include
• length, area, volume, and distance
• time
• mass and weight
• charge, voltage, and current
• resistance, capacitance, and inductance
• velocity and acceleration
• energy, force, and work
A continuous distribution can usually be described by a certain type of function.
Suppose again that $\P$ is a continuous distribution on $S$. A function $f: S \to [0, \infty)$ is a probability density function for $\P$ if $\P(A) = \int_A f(x) \, dx, \quad A \in \mathscr S$
Details
Technically, $f$ must be measurable and is a probability density function of $\P$ with respect to Lebesgue measure, the standard measure on $\R^n$. Moreover, the integral is the Lebesgue integral, but the ordinary Riemann integral of calculus suffices for the sets that occur in typical applications.
So the probability distribution $\P$ is completely determined by the probability density function $f$. As a special case, note that $\int_S f(x) \, dx = \P(S) = 1$. Conversely, a nonnegative function on $S$ with this property defines a probability measure.
A function $f: S \to [0, \infty)$ that satisfies $\int_S f(x) \, dx = 1$ is a probability density function on $S$ and then $\P$ defined as follows is a continuous probability measure on $S$: $\P(A) = \int_A f(x) \, dx, \quad A \in \mathscr S$
Proof
The countable additivity of $\P$ follows from basic properties of the integral, and $\P(S) = \int_S f(x) \, dx = 1$ by assumption, so $\P$ is a probability measure. Moreover, the integral of $f$ over a single point is 0, so $\P$ is a continuous distribution.
Note that we can always extend $f$ to a probability density function on a subset of $\R^n$ that contains $S$, or to all of $\R^n$, by defining $f(x) = 0$ for $x \notin S$. This extension sometimes simplifies notation. Put another way, we can be a bit sloppy about the set of values of the random variable. So for example if $a, \, b \in \R$ with $a \lt b$ and $X$ has a continuous distribution on the interval $[a, b]$, then we could also say that $X$ has a continuous distribution on $(a, b)$ or $[a, b)$, or $(a, b]$.
The points $x \in S$ that maximize the probability density function $f$ are important, just as in the discrete case.
Suppose that $\P$ is a continuous distribution on $S$ with probability density function $f$. An element $x \in S$ that maximizes $f$ is a mode of the distribution.
If there is only one mode, it is sometimes used as a measure of the center of the distribution.
You have probably noticed that probability density functions for continuous distributions are analogous to probability density functions for discrete distributions, with integrals replacing sums. However, there are essential differences. First, every discrete distribution has a unique probability density function $f$ given by $f(x) = \P(\{x\})$ for $x \in S$. For a continuous distribution, the existence of a probability density function is not guaranteed. The advanced section on absolute continuity and density functions has several examples of continuous distribution that do not have density functions, and gives conditions that are necessary and sufficient for the existence of a probability density function. Even if a probability density function $f$ exists, it is never unique. Note that the values of $f$ on a finite (or even countably infinite) set of points could be changed to other nonnegative values and the new function would still be a probability density function for the same distribution. The critical fact is that only integrals of $f$ are important. Second, the values of the PDF $f$ for a discrete distribution are probabilities, and in particular $f(x) \le 1$ for $x \in S$. For a continuous distribution the values are not probabilities and in fact it's possible that $f(x) \gt 1$ for some or even all $x \in S$. Further, $f$ can be unbounded on $S$. In the typical calculus interpretation, $f(x)$ really is probability density at $x$. That is, $f(x) \, dx$ is approximately the probability of a small region of size $dx$ about $x$.
Constructing Probability Density Functions
Just as in the discrete case, a nonnegative function on $S$ can often be scaled to produce a probability density function.
Suppose that $g: S \to [0, \infty)$ and let $c = \int_S g(x) \, dx$ If $0 \lt c \lt \infty$ then $f$ defined by $f(x) = \frac{1}{c} g(x)$ for $x \in S$ defines a probability density function for a continuous distribution on $S$.
Proof
Technically, the function $g$ is measurable. Technicalities aside, the proof is trivial. Clearly $f(x) \ge 0$ for $x \in S$ and $\int_S f(x) \, dx = \frac{1}{c} \int_S g(x) \, dx = \frac{c}{c} = 1$
Note again that $f$ is just a scaled version of $g$. So this result can be used to construct probability density functions with desired properties (domain, shape, symmetry, and so on). The constant $c$ is sometimes called the normalizing constant of $g$.
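When the normalizing constant has no convenient closed form, it can be approximated numerically. The following is a minimal sketch (our own Python, using a composite Simpson rule rather than any particular library), applied to $g(x) = x (1 - x)$ on $[0, 1]$, whose exact normalizing constant is $\frac{1}{6}$; the normalized function is the beta PDF $6 x (1 - x)$ that appears later in this section.

```python
def simpson(g, a, b, m=10_000):
    """Composite Simpson approximation of the integral of g over [a, b];
    m (the number of subintervals) must be even."""
    h = (b - a) / m
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, m, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, m, 2))
    return s * h / 3

g = lambda x: x * (1 - x)
c = simpson(g, 0, 1)                            # normalizing constant, about 1/6
f = lambda x: g(x) / c                          # the normalized PDF
print(round(c, 6), round(simpson(f, 0, 1), 6))  # 0.166667 and 1.0
```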
Conditional Densities
Suppose now that $X$ is a random variable defined on a probability space $(\Omega, \mathscr F, \P)$ and that $X$ has a continuous distribution on $S$. A probability density function for $X$ is based on the underlying probability measure on the sample space $(\Omega, \mathscr F)$. This measure could be a conditional probability measure, conditioned on a given event $E \in \mathscr F$ with $\P(E) \gt 0$. Assuming that the conditional probability density function exists, the usual notation is $f(x \mid E), \quad x \in S$ Note, however, that except for notation, no new concepts are involved. The defining property is $\int_A f(x \mid E) \, dx = \P(X \in A \mid E), \quad A \in \mathscr S$ and all results that hold for probability density functions in general hold for conditional probability density functions. The event $E$ could be an event described in terms of the random variable $X$ itself:
Suppose that $X$ has a continuous distribution on $S$ with probability density function $f$ and that $B \in \mathscr S$ with $\P(X \in B) \gt 0$. The conditional probability density function of $X$ given $X \in B$ is the function on $B$ given by $f(x \mid X \in B) = \frac{f(x)}{\P(X \in B)}, \quad x \in B$
Proof
For $A \in \mathscr S$ with $A \subseteq B$, $\int_A \frac{f(x)}{\P(X \in B)} \, dx = \frac{1}{\P(X \in B)} \int_A f(x) \, dx = \frac{\P(X \in A)}{\P(X \in B)} = \P(X \in A \mid X \in B)$
Of course, $\P(X \in B) = \int_B f(x) \, dx$ and hence is the normalizing constant for the restriction of $f$ to $B$, as in (8).
Examples and Applications
As always, try the problems yourself before looking at the answers.
The Exponential Distribution
Let $f$ be the function defined by $f(t) = r e^{-r t}$ for $t \in [0, \infty)$, where $r \in (0, \infty)$ is a parameter.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Proof
1. Note that $f(t) \gt 0$ for $t \ge 0$. Also $\int_0^\infty e^{-r t} \, dt = \frac{1}{r}$ so $f$ is a PDF.
2. $f$ is decreasing and concave upward, so the mode is $t = 0$. $f(t) \to 0$ as $t \to \infty$.
The distribution defined by the probability density function in the previous exercise is called the exponential distribution with rate parameter $r$. This distribution is frequently used to model random times, under certain assumptions. Specifically, in the Poisson model of random points in time, the times between successive arrivals have independent exponential distributions, and the parameter $r$ is the average rate of arrivals. The exponential distribution is studied in detail in the chapter on Poisson Processes.
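Simulating an exponential variable is a standard inverse transform computation: solving $u = 1 - e^{-r t}$ for $t$ gives $t = -\ln(1 - u) / r$. A hedged sketch (our own Python, not the simulation apps used in the text):

```python
import random
from math import exp, log

def exponential_sample(r, rng):
    """Inverse transform: -ln(1 - U) / r is exponential with rate r
    when U is uniform on [0, 1)."""
    return -log(1 - rng.random()) / r

r, rng = 0.5, random.Random(2024)
sample = [exponential_sample(r, rng) for _ in range(100_000)]
print(sum(t > 2 for t in sample) / len(sample), exp(-1))  # both about 0.368
```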
The lifetime $T$ of a certain device (in 1000 hour units) has the exponential distribution with parameter $r = \frac{1}{2}$. Find
1. $\P(T \gt 2)$
2. $\P(T \gt 3 \mid T \gt 1)$
Answer
1. $e^{-1} \approx 0.3679$
2. $e^{-1} \approx 0.3679$
In the gamma experiment, set $n = 1$ to get the exponential distribution. Vary the rate parameter $r$ and note the shape of the probability density function. For various values of $r$, run the simulation 1000 times and compare the empirical density function with the probability density function.
A Random Angle
In Bertrand's problem, a certain random angle $\Theta$ has probability density function $f$ given by $f(\theta) = \sin \theta$ for $\theta \in \left[0, \frac{\pi}{2}\right]$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
3. Find $\P\left(\Theta \lt \frac{\pi}{4}\right)$.
Answer
1. Note that $\sin \theta \ge 0$ for $0 \le \theta \le \frac{\pi}{2}$ and $\int_0^{\pi/2} \sin \theta \, d\theta = 1$.
2. $f$ is increasing and concave downward so the mode is $\frac{\pi}{2}$.
3. $1 - \frac{1}{\sqrt{2}} \approx 0.2929$
Bertrand's problem is named for Joseph Louis Bertrand and is studied in more detail in the chapter on Geometric Models.
In Bertrand's experiment, select the model with uniform distance. Run the simulation 1000 times and compute the empirical probability of the event $\left\{\Theta \lt \frac{\pi}{4}\right\}$. Compare with the true probability in the previous exercise.
Gamma Distributions
Let $g_n$ be the function defined by $g_n(t) = e^{-t} \frac{t^n}{n!}$ for $t \in [0, \infty)$ where $n \in \N$ is a parameter.
1. Show that $g_n$ is a probability density function for each $n \in \N$.
2. Draw a careful sketch of the graph of $g_n$, and state the important qualitative features.
Proof
1. Note that $g_n(t) \ge 0$ for $t \ge 0$. Also, $g_0$ is the probability density function of the exponential distribution with parameter 1. For $n \in \N_+$, integration by parts with $u = t^n / n!$ and $dv = e^{-t} dt$ gives $\int_0^\infty g_n(t) \, dt = \int_0^\infty g_{n-1}(t) \, dt$. Hence it follows by induction that $g_n$ is a PDF for each $n \in \N_+$.
2. $g_0$ is decreasing and concave upward, with mode $t = 0$. For $n \gt 0$, $g_n$ increases and then decreases, with mode $t = n$. $g_1$ is concave downward and then upward, with inflection point at $t = 2$. For $n \gt 1$, $g_n$ is concave upward, then downward, then upward again, with inflection points at $n \pm \sqrt{n}$. For all $n \in \N$, $g_n(t) \to 0$ as $t \to \infty$.
Interestingly, we showed in the last section on discrete distributions, that $f_t(n) = g_n(t)$ is a probability density function on $\N$ for each $t \ge 0$ (it's the Poisson distribution with parameter $t$). The distribution defined by the probability density function $g_n$ belongs to the family of Erlang distributions, named for Agner Erlang; $n + 1$ is known as the shape parameter. The Erlang distribution is studied in more detail in the chapter on the Poisson Process. In turn the Erlang distribution belongs to the more general family of gamma distributions. The gamma distribution is studied in more detail in the chapter on Special Distributions.
In the gamma experiment, keep the default rate parameter $r = 1$. Vary the shape parameter and note the shape and location of the probability density function. For various values of the shape parameter, run the simulation 1000 times and compare the empirical density function with the probability density function.
Suppose that the lifetime of a device $T$ (in 1000 hour units) has the gamma distribution above with $n = 2$. Find each of the following:
1. $\P(T \gt 3)$.
2. $\P(T \le 2)$
3. $\P(1 \le T \le 4)$
Answer
1. $\frac{17}{2} e^{-3} \approx 0.4232$
2. $1 - 5 e^{-2} \approx 0.3233$
3. $\frac{5}{2} e^{-1} - 13 e^{-4} \approx 0.6816$
Beta Distributions
Let $f$ be the function defined by $f(x) = 6 x (1 - x)$ for $x \in [0, 1]$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(x) \ge 0$ for $x \in [0, 1]$. Also, $\int_0^1 x (1 - x) \, dx = \frac{1}{6}$, so $f$ is a PDF
2. $f$ increases and then decreases, with mode at $x = \frac{1}{2}$. $f$ is concave downward. $f$ is symmetric about $x = \frac{1}{2}$ (in fact, the graph is a parabola).
Let $f$ be the function defined by $f(x) = 12 x^2 (1 - x)$ for $x \in [0, 1]$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(x) \ge 0$ for $0 \le x \le 1$. Also $\int_0^1 x^2 (1 - x) \, dx = \frac{1}{12}$, so $f$ is a PDF.
2. $f$ increases and then decreases, with mode at $x = \frac{2}{3}$. $f$ is concave upward and then downward, with inflection point at $x = \frac{1}{3}$.
The distributions defined in the last two exercises are examples of beta distributions. These distributions are widely used to model random proportions and probabilities, and physical quantities that take values in bounded intervals (which, after a change of units, can be taken to be $[0, 1]$). Beta distributions are studied in detail in the chapter on Special Distributions.
In the special distribution simulator, select the beta distribution. For the following parameter values, note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function with the probability density function.
1. $a = 2$, $b = 2$. This gives the first beta distribution above.
2. $a = 3$, $b = 2$. This gives the second beta distribution above.
Suppose that $P$ is a random proportion. Find $\P\left(\frac{1}{4} \le P \le \frac{3}{4}\right)$ in each of the following cases:
1. $P$ has the first beta distribution above.
2. $P$ has the second beta distribution above.
Answer
1. $\frac{11}{16}$
2. $\frac{11}{16}$
Let $f$ be the function defined by $f(x) = \frac{1}{\pi \sqrt{x (1 - x)}}, \quad x \in (0, 1)$
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(x) \gt 0$ for $0 \lt x \lt 1$. Using the substitution $u = \sqrt{x}$ gives $\int_0^1 \frac{1}{\sqrt{x (1 - x)}} \, dx = \int_0^1 \frac{2}{\sqrt{1 - u^2}} \, du = 2 \arcsin u \biggm|_0^1 = \pi$ Thus $f$ is a PDF.
2. $f$ is symmetric about $x = \frac{1}{2}$. $f$ decreases and then increases, with minimum at $x = \frac{1}{2}$. $f(x) \to \infty$ as $x \downarrow 0$ and as $x \uparrow 1$ so the distribution has no mode. $f$ is concave upward.
The distribution defined in the last exercise is also a member of the beta family of distributions. But it is also known as the (standard) arcsine distribution, because of the arcsine function that arises in the proof that $f$ is a probability density function. The arcsine distribution has applications to a very important random process known as Brownian motion, named for the Scottish botanist Robert Brown. Arcsine distributions are studied in more generality in the chapter on Special Distributions.
In the special distribution simulator, select the (continuous) arcsine distribution and keep the default parameter values. Run the simulation 1000 times and compare the empirical density function with the probability density function.
Suppose that $X_t$ represents the change in the price of a stock at time $t$, relative to the value at an initial reference time 0. We treat $t$ as a continuous variable measured in weeks. Let $T = \max\left\{t \in [0, 1]: X_t = 0\right\}$, the last time during the first week that the stock price was unchanged over its initial value. Under certain ideal conditions, $T$ will have the arcsine distribution. Find each of the following:
1. $\P\left(T \lt \frac{1}{4}\right)$
2. $\P\left(T \ge \frac{1}{2}\right)$
3. $\P\left(T \le \frac{3}{4}\right)$
Answer
1. $\frac{1}{3}$
2. $\frac{1}{2}$
3. $\frac{2}{3}$
Open the Brownian motion experiment and select the last zero variable. Run the experiment in single step mode a few times. The random process that you observe models the price of the stock in the previous exercise. Now run the experiment 1000 times and compute the empirical probability of each event in the previous exercise.
The Pareto Distribution
Let $g$ be the function defined by $g(x) = 1 /x^b$ for $x \in [1, \infty)$, where $b \in (0, \infty)$ is a parameter.
1. Draw a careful sketch of the graph of $g$, and state the important qualitative features.
2. Find the values of $b$ for which there exists a probability density function $f$ proportional to $g$, as in (8). Identify the mode.
Answer
1. $g$ is decreasing and concave upward, with $g(x) \to 0$ as $x \to \infty$.
2. Note that if $b \ne 1$, $\int_1^\infty x^{-b} \, dx = \frac{x^{1 - b}}{1 - b} \biggm|_1^\infty = \begin{cases} \infty, & 0 \lt b \lt 1 \\ \frac{1}{b - 1}, & 1 \lt b \lt \infty \end{cases}$ When $b = 1$ we have $\int_1^\infty x^{-1} \, dx = \ln x \biggm|_1^\infty = \infty$. Thus, when $0 \lt b \le 1$, there is no PDF proportional to $g$. When $b \gt 1$, the PDF proportional to $g$ is $f(x) = \frac{b - 1}{x^b}$ for $x \in [1, \infty)$. The mode is 1.
Note that the qualitative features of $g$ are the same, regardless of the value of the parameter $b \gt 0$, but only when $b \gt 1$ can $g$ be normalized into a probability density function. In this case, the distribution is known as the Pareto distribution, named for Vilfredo Pareto. The parameter $a = b - 1$, so that $a \gt 0$, is known as the shape parameter. Thus, the Pareto distribution with shape parameter $a$ has probability density function $f(x) = \frac{a}{x^{a+1}}, \quad x \in [1, \infty)$ The Pareto distribution is widely used to model certain economic variables and is studied in detail in the chapter on Special Distributions.
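Since the Pareto CDF is $F(x) = 1 - x^{-a}$ for $x \ge 1$, the inverse transform method applies here as well: $X = (1 - U)^{-1/a}$ has the Pareto distribution with shape parameter $a$ when $U$ is uniform on $[0, 1)$. An illustrative sketch (our own Python):

```python
import random

def pareto_sample(a, rng):
    """Inverse transform for the Pareto distribution with shape a."""
    return (1 - rng.random()) ** (-1 / a)

a, rng = 2, random.Random(7)
sample = [pareto_sample(a, rng) for _ in range(100_000)]
print(sum(x > 2 for x in sample) / len(sample))  # about 1/4, since P(X > 2) = 2^(-a)
```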
In the special distribution simulator, select the Pareto distribution. Leave the scale parameter fixed, but vary the shape parameter, and note the shape of the probability density function. For various values of the shape parameter, run the simulation 1000 times and compare the empirical density function with the probability density function.
Suppose that the income $X$ (in appropriate units) of a person randomly selected from a population has the Pareto distribution with shape parameter $a = 2$. Find each of the following:
1. $\P(X \gt 2)$
2. $\P(X \le 4)$
3. $\P(3 \le X \le 5)$
Answer
1. $\frac{1}{4}$
2. $\frac{15}{16}$
3. $\frac{16}{225}$
The Cauchy Distribution
Let $f$ be the function defined by $f(x) = \frac{1}{\pi (x^2 + 1)}, \quad x \in \R$
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(x) \gt 0$ for $x \in \R$. Also $\int_{-\infty}^\infty \frac{1}{1 + x^2} \, dx = \arctan x \biggm|_{-\infty}^\infty = \pi$ and hence $f$ is a PDF.
2. $f$ increases and then decreases, with mode $x = 0$. $f$ is concave upward, then downward, then upward again, with inflection points at $x = \pm \frac{1}{\sqrt{3}}$. $f$ is symmetric about $x = 0$.
The distribution constructed in the previous exercise is known as the (standard) Cauchy distribution, named after Augustin Cauchy. It might also be called the arctangent distribution, because of the appearance of the arctangent function in the proof that $f$ is a probability density function. In this regard, note the similarity to the arcsine distribution above. The Cauchy distribution is studied in more generality in the chapter on Special Distributions. Note also that the Cauchy distribution is obtained by normalizing the function $x \mapsto \frac{1}{1 + x^2}$; the graph of this function is known as the witch of Agnesi, in honor of Maria Agnesi.
In the special distribution simulator, select the Cauchy distribution with the default parameter values. Run the simulation 1000 times and compare the empirical density function with the probability density function.
A light source is 1 meter away from position 0 on an infinite, straight wall. The angle $\Theta$ that the light beam makes with the perpendicular to the wall is randomly chosen from the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. The position $X = \tan(\Theta)$ of the light beam on the wall has the standard Cauchy distribution. Find each of the following:
1. $\P(-1 \lt X \lt 1)$.
2. $\P\left(X \ge \frac{1}{\sqrt{3}}\right)$
3. $\P(X \le \sqrt{3})$
Answer
1. $\frac{1}{2}$
2. $\frac{1}{3}$
3. $\frac{5}{6}$
The Cauchy experiment (with the default parameter values) is a simulation of the experiment in the last exercise.
1. Run the experiment a few times in single step mode.
2. Run the experiment 1000 times and compare the empirical density function with the probability density function.
3. Using the data from (b), compute the relative frequency of each event in the previous exercise, and compare with the true probability.
The Standard Normal Distribution
Let $\phi$ be the function defined by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}$ for $z \in \R$.
1. Show that $\phi$ is a probability density function.
2. Draw a careful sketch of the graph of $\phi$, and state the important qualitative features.
Proof
1. Note that $\phi(z) \gt 0$ for $z \in \R$. Let $c = \int_{-\infty}^\infty e^{-z^2 / 2} \, dz$. Then $c^2 = \int_{-\infty}^\infty e^{-x^2/2} \, dx \int_{-\infty}^\infty e^{-y^2/2} \, dy = \int_{-\infty}^\infty \int_{-\infty}^\infty e^{-(x^2 + y^2) / 2} \, dx \, dy$ Change to polar coordinates: $x = r \cos \theta$, $y = r \sin \theta$ where $r \in [0, \infty)$ and $\theta \in [0, 2 \pi)$. Then $x^2 + y^2 = r^2$ and $dx \, dy = r \, dr \, d\theta$. Hence $c^2 = \int_0^{2 \pi} \int_0^\infty r e^{-r^2 / 2} \, dr \, d\theta$ Using the simple substitution $u = r^2 / 2$, the inner integral is $\int_0^\infty e^{-u} du = 1$. Then the outer integral is $\int_0^{2\pi} 1 \, d\theta = 2 \pi$. Hence $c = \sqrt{2 \pi}$ and so $\phi$ is a PDF.
2. Note that $\phi$ is symmetric about 0. $\phi$ increases and then decreases, with mode $z = 0$. $\phi$ is concave upward, then downward, then upward again, with inflection points at $z = \pm 1$. $\phi(z) \to 0$ as $z \to \infty$ and as $z \to -\infty$.
The distribution defined in the last exercise is the standard normal distribution, perhaps the most important distribution in probability and statistics. Its importance stems largely from the central limit theorem, one of the fundamental theorems in probability. In particular, normal distributions are widely used to model physical measurements that are subject to small, random errors. The family of normal distributions is studied in more generality in the chapter on Special Distributions.
In the special distribution simulator, select the normal distribution and keep the default parameter values. Run the simulation 1000 times and compare the empirical density function and the probability density function.
The function $z \mapsto e^{-z^2 / 2}$ is a notorious example of an integrable function that does not have an antiderivative that can be expressed in closed form in terms of other elementary functions. (That's why we had to resort to the polar coordinate trick to show that $\phi$ is a probability density function.) So probabilities involving the normal distribution are usually computed using mathematical or statistical software.
Suppose that the error $Z$ in the length of a certain machined part (in millimeters) has the standard normal distribution. Use mathematical software to approximate each of the following:
1. $\P(-1 \le Z \le 1)$
2. $\P(Z \gt 2)$
3. $\P(Z \lt -3)$
Answer
1. 0.6827
2. 0.0228
3. 0.0013
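For example, in Python the SciPy library (assuming it is available) exposes the standard normal CDF, and the answers above follow in one line each:

```python
from scipy.stats import norm  # assumes SciPy is installed

print(norm.cdf(1) - norm.cdf(-1))  # P(-1 <= Z <= 1) ≈ 0.6827
print(norm.sf(2))                  # P(Z > 2) ≈ 0.0228 (sf is the right tail)
print(norm.cdf(-3))                # P(Z < -3) ≈ 0.0013
```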
The Extreme Value Distribution
Let $f$ be the function defined by $f(x) = e^{-x} e^{-e^{-x}}$ for $x \in \R$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
3. Find $\P(X \gt 0)$, where $X$ has probability density function $f$.
Answer
1. Note that $f(x) \gt 0$ for $x \in \R$. Using the substitution $u = e^{-x}$, $\int_{-\infty}^\infty e^{-x} e^{-e^{-x}} \, dx = \int_0^\infty e^{-u} \, du = 1$ (note that the integrand in the last integral is the exponential PDF with parameter 1).
2. $f$ increases and then decreases, with mode $x = 0$. $f$ is concave upward, then downward, then upward again, with inflection points at $x = \pm \ln\left[\left(3 + \sqrt{5}\right)\middle/2\right]$. Note however that $f$ is not symmetric about 0. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
3. $1 - e^{-1} \approx 0.6321$
The distribution in the last exercise is the (standard) type 1 extreme value distribution, also known as the Gumbel distribution in honor of Emil Gumbel. Extreme value distributions are studied in more generality in the chapter on Special Distributions.
In the special distribution simulator, select the extreme value distribution. Keep the default parameter values and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function with the probability density function.
The Logistic Distribution
Let $f$ be the function defined by $f(x) = \frac{e^x}{(1 + e^x)^2}, \quad x \in \R$
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
3. Find $\P(X \gt 1)$, where $X$ has probability density function $f$.
Answer
1. Note that $f(x) \gt 0$ for $x \in \R$. The substitution $u = e^x$ gives $\int_{-\infty}^\infty f(x) \, dx = \int_0^\infty \frac{1}{(1 + u)^2} \, du = 1$
2. $f$ is symmetric about 0. $f$ increases and then decreases with mode $x = 0$. $f$ is concave upward, then downward, then upward again, with inflection points at $x = \pm \ln\left(2 + \sqrt{3}\right)$. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
3. $\frac{1}{1 + e} \approx 0.2689$
The distribution in the last exercise is the (standard) logistic distribution. Logistic distributions are studied in more generality in the chapter on Special Distributions.
In the special distribution simulator, select the logistic distribution. Keep the default parameter values and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function with the probability density function.
Weibull Distributions
Let $f$ be the function defined by $f(t) = 2 t e^{-t^2}$ for $t \in [0, \infty)$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(t) \ge 0$ for $t \ge 0$. The substitution $u = t^2$ gives $\int_0^\infty f(t) \, dt = \int_0^\infty e^{-u} \, du = 1$.
2. $f$ increases and then decreases, with mode $t = 1/\sqrt{2}$. $f$ is concave downward and then upward, with inflection point at $t = \sqrt{3/2}$. $f(t) \to 0$ as $t \to \infty$.
Let $f$ be the function defined by $f(t) = 3 t^2 e^{-t^3}$ for $t \ge 0$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
Answer
1. Note that $f(t) \ge 0$ for $t \ge 0$. The substitution $u = t^3$ gives $\int_0^\infty f(t) \, dt = \int_0^\infty e^{-u} \, du = 1$
2. $f$ increases and then decreases, with mode $t = \left(\frac{2}{3}\right)^{1/3}$. $f$ is concave upward, then downward, then upward again, with inflection points at $t = \left(1 \pm \frac{1}{3}\sqrt{7}\right)^{1/3}$. $f(t) \to 0$ as $t \to \infty$.
The distributions in the last two exercises are examples of Weibull distributions, named for Waloddi Weibull. Weibull distributions are studied in more generality in the chapter on Special Distributions. They are often used to model random failure times of devices (in appropriately scaled units).
In the special distribution simulator, select the Weibull distribution. For each of the following values of the shape parameter $k$, note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function with the probability density function.
1. $k = 2$. This gives the first Weibull distribution above.
2. $k = 3$. This gives the second Weibull distribution above.
Suppose that $T$ is the failure time of a device (in 1000 hour units). Find $\P\left(T \gt \frac{1}{2}\right)$ in each of the following cases:
1. $T$ has the first Weibull distribution above.
2. $T$ has the second Weibull distribution above.
Answer
1. $e^{-1/4} \approx 0.7788$
2. $e^{-1/8} \approx 0.8825$
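Weibull variables are also easy to simulate by inverting the right-tail probability: for the density $k t^{k-1} e^{-t^k}$, $\P(T \gt t) = e^{-t^k}$, so $T = [-\ln(1 - U)]^{1/k}$ for a random number $U$. A sketch in Python (the seed, sample size, and function name are ours):

```python
import math, random

def sim_weibull(k):
    # Invert the right-tail probability P(T > t) = e^(-t^k):
    # T = (-ln(1 - U))^(1/k) for a random number U
    return (-math.log(1 - random.random())) ** (1 / k)

random.seed(11)
n = 100_000
for k, exact in [(2, math.exp(-1 / 4)), (3, math.exp(-1 / 8))]:
    est = sum(sim_weibull(k) > 0.5 for _ in range(n)) / n
    print(k, est, exact)  # empirical vs. exact P(T > 1/2)
```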
Additional Examples
Let $f$ be the function defined by $f(x) = -\ln x$ for $x \in (0, 1]$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and state the important qualitative features.
3. Find $\P\left(\frac{1}{3} \le X \le \frac{1}{2}\right)$ where $X$ has the probability density function in (a).
Answer
1. Note that $-\ln x \ge 0$ for $0 \lt x \le 1$. Integration by parts with $u = -\ln x$ and $dv = dx$ gives $\int_0^1 -\ln x \, dx = -x \ln x \biggm|_0^1 + \int_0^1 1 \, dx = 1$
2. $f$ is decreasing and concave upward, with $f(x) \to \infty$ as $x \downarrow 0$, so there is no mode.
3. $\frac{1}{2} \ln 2 - \frac{1}{3} \ln 3 + \frac{1}{6} \approx 0.147$
Let $f$ be the function defined by $f(x) = 2 e^{-x} (1 - e^{-x})$ for $x \in [0, \infty)$.
1. Show that $f$ is a probability density function.
2. Draw a careful sketch of the graph of $f$, and give the important qualitative features.
3. Find $\P(X \ge 1)$ where $X$ has the probability density function in (a).
Answer
1. Note that $f(x) \gt 0$ for $0 \lt x \lt \infty$. Also, $\int_0^\infty \left(e^{-x} - e^{-2 x}\right) \, dx = \frac{1}{2}$ and hence $\int_0^\infty f(x) \, dx = 1$, so $f$ is a PDF.
2. $f$ increases and then decreases, with mode $x = \ln(2)$. $f$ is concave downward and then upward, with an inflection point at $x = \ln(4)$. $f(x) \to 0$ as $x \to \infty$.
3. $2 e^{-1} - e^{-2} \approx 0.6004$
The following problems deal with two and three dimensional random vectors having continuous distributions. The idea of normalizing a function to form a probability density function is important for some of the problems. The relationship between the distribution of a vector and the distribution of its components will be discussed later, in the section on joint distributions.
Let $f$ be the function defined by $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Show that $f$ is a probability density function, and identify the mode.
2. Find $\P(Y \ge X)$ where $(X, Y)$ has the probability density function in (a).
3. Find the conditional density of $(X, Y)$ given $\left\{X \lt \frac{1}{2}, Y \lt \frac{1}{2}\right\}$.
Answer
1. mode $(1, 1)$
2. $\frac{1}{2}$
3. $f\left(x, y \bigm| X \lt \frac{1}{2}, Y \lt \frac{1}{2}\right) = 8 (x + y)$ for $0 \lt x \lt \frac{1}{2}$, $0 \lt y \lt \frac{1}{2}$
Let $g$ be the function defined by $g(x, y) = x + y$ for $0 \le x \le y \le 1$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find $\P(Y \ge 2 X)$ where $(X, Y)$ has the probability density function in (a).
Answer
1. $f(x,y) = 2(x + y)$, $0 \le x \le y \le 1$
2. $\frac{5}{12}$
Let $g$ be the function defined by $g(x, y) = x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find $\P(Y \ge X)$ where $(X, Y)$ has the probability density function in (a).
Answer
1. $f(x,y) = 6 x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$
2. $\frac{2}{5}$
Let $g$ be the function defined by $g(x, y) = x^2 y$ for $0 \le x \le y \le 1$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find $\P(Y \ge 2 X)$ where $(X, Y)$ has the probability density function in (a).
Answer
1. $f(x,y) = 15 x^2 y$ for $0 \le x \le y \le 1$
2. $\frac{1}{8}$
Let $g$ be the function defined by $g(x, y, z) = x + 2 y + 3 z$ for $0 \le x \le 1$, $0 \le y \le 1$, $0 \le z \le 1$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find $\P(X \le Y \le Z)$ where $(X, Y, Z)$ has the probability density function in (a).
Answer
1. $f(x, y, z) = \frac{1}{3}(x + 2 y + 3 z)$ for $0 \le x \le 1$, $0 \le y \le 1$, $0 \le z \le 1$
2. $\frac{7}{36}$
Let $g$ be the function defined by $g(x, y) = e^{-x} e^{-y}$ for $0 \le x \le y \lt \infty$.
1. Find the probability density function $f$ that is proportional to $g$.
2. Find $\P(X + Y \lt 1)$ where $(X, Y)$ has the probability density function in (a).
Answer
1. $f(x,y) = 2 e^{-x} e^{-y}$, $0 \lt x \lt y \lt \infty$
2. $1 - 2 e^{-1} \approx 0.2642$
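A Monte Carlo check of the last answer is possible if we can sample from $f(x, y) = 2 e^{-x} e^{-y}$ on $0 \lt x \lt y$. One way (a standard order-statistics fact, used here without proof): if $U$ and $V$ are independent exponential variables with parameter 1, then $\left(\min\{U, V\}, \max\{U, V\}\right)$ has exactly this density. A sketch in Python (the seed and sample size are ours):

```python
import math, random

random.seed(5)
n = 200_000
count = 0
for _ in range(n):
    u = random.expovariate(1.0)
    v = random.expovariate(1.0)
    x, y = min(u, v), max(u, v)  # (x, y) has density 2 e^(-x) e^(-y) on x < y
    if x + y < 1:
        count += 1

print(count / n, 1 - 2 / math.e)  # estimate vs. exact 1 - 2 e^(-1) ≈ 0.2642
```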
Continuous Uniform Distributions
Our next discussion will focus on an important class of continuous distributions that are defined purely in terms of geometry. We need a preliminary definition.
For $n \in \N_+$, the standard measure $\lambda_n$ on $\R^n$ is given by $\lambda_n(A) = \int_A 1 \, dx, \quad A \subseteq \R^n$ In particular, $\lambda_1(A)$ is the length of $A \subseteq \R$, $\lambda_2(A)$ is the area of $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A \subseteq \R^3$.
Details
Technically, $\lambda_n$ is Lebesgue measure on the $\sigma$-algebra of measurable subsets of $\R^n$. The name is in honor of Henri Lebesgue. The representation above in terms of the standard Riemann integral of calculus works for the sets that occur in typical applications. For the remainder of this discussion, we assume that all subsets of $\R^n$ that are mentioned are measurable.
Note that if $n \gt 1$, the integral above is a multiple integral. Generally, $\lambda_n(A)$ is referred to as the $n$-dimensional volume of $A \subseteq \R^n$.
Suppose that $S \subseteq \R^n$ for some $n \in \N_+$ with $0 \lt \lambda_n(S) \lt \infty$.
1. The function $f$ defined by $f(x) = 1 \big/ \lambda_n(S)$ for $x \in S$ is a probability density function on $S$.
2. The probability measure associated with $f$ is given by $\P(A) = \lambda_n(A) \big/ \lambda_n(S)$ for $A \subseteq S$, and is known as the uniform distribution on $S$.
Proof
The proof is simple: Clearly $f(x) \gt 0$ for $x \in S$ and $\int_A f(x) \, dx = \frac{1}{\lambda_n(S)} \int_A 1 \, dx = \frac{\lambda_n(A)}{\lambda_n(S)}, \quad A \subseteq S$ In particular, when $A = S$ we have $\int_S f(x) \, dx = 1$.
Note that the probability assigned to a set $A \subseteq \R^n$ is proportional to the size of $A$, as measured by $\lambda_n$. Note also that in both the discrete and continuous cases, the uniform distribution on a set $S$ has constant probability density function on $S$. The uniform distribution on a set $S$ governs a point $X$ chosen at random from $S$, and in the continuous case, such distributions play a fundamental role in various Geometric Models. Uniform distributions are studied in more generality in the chapter on Special Distributions.
The most important special case is the uniform distribution on an interval $[a, b]$ where $a, b \in \R$ and $a \lt b$. In this case, the probability density function is $f(x) = \frac{1}{b - a}, \quad a \le x \le b$ This distribution models a point chosen at random from the interval. In particular, the uniform distribution on $[0, 1]$ is known as the standard uniform distribution, and is very important because of its simplicity and the fact that it can be transformed into a variety of other probability distributions on $\R$. Almost all computer languages have procedures for simulating independent, standard uniform variables, which are called random numbers in this context.
Conditional distributions corresponding to a uniform distribution are also uniform.
Suppose that $R \subseteq S \subseteq \R^n$ for some $n \in \N_+$, and that $\lambda_n(R) \gt 0$ and $\lambda_n(S) \lt \infty$. If $\P$ is the uniform distribution on $S$, then the conditional distribution given $R$ is uniform on $R$.
Proof
The proof is very simple: For $A \subseteq R$,
$\P(A \mid R) = \frac{\P(A \cap R)}{\P(R)} = \frac{\P(A)}{\P(R)} = \frac{\lambda_n(A) \big/ \lambda_n(S)}{\lambda_n(R) \big/ \lambda_n(S)} = \frac{\lambda_n(A)}{\lambda_n(R)}$
The last theorem has important implications for simulations. If we can simulate a random variable that is uniformly distributed on a set, we can simulate a random variable that is uniformly distributed on a subset.
Suppose again that $R \subseteq S \subseteq \R^n$ for some $n \in \N_+$, and that $\lambda_n(R) \gt 0$ and $\lambda_n(S) \lt \infty$. Suppose further that $\bs X = (X_1, X_2, \ldots)$ is a sequence of independent random variables, each uniformly distributed on $S$. Let $N = \min\{k \in \N_+: X_k \in R\}$. Then
1. $N$ has the geometric distribution on $\N_+$ with success parameter $p = \lambda_n(R) \big/ \lambda_n(S)$.
2. $X_N$ is uniformly distributed on $R$.
Proof
1. Since the variables are uniformly distributed on $S$, $\P(X_k \in R) = \lambda_n(R) / \lambda_n(S)$ for each $k \in \N_+$. Since the variables are independent, each point is in $R$ or not independently. Hence $N$, the index of the first point to fall in $R$, has the geometric distribution on $\N_+$ with success probability $p = \lambda_n(R) / \lambda_n(S)$. That is, $\P(N = k) = (1 - p)^{k-1} p$ for $k \in \N_+$.
2. Note that $p \in (0, 1]$, so $\P(N \in \N_+) = 1$ and hence $X_N$ is well defined. We know from our work on independence and conditional probability that the distribution of $X_N$ is the same as the conditional distribution of $X$ given $X \in R$, which by the previous theorem, is uniformly distributed on $R$.
Suppose in particular that $S$ is a Cartesian product of $n$ bounded intervals. It turns out to be quite easy to simulate a sequence of independent random variables $\bs X = (X_1, X_2, \ldots)$ each of which is uniformly distributed on $S$. Thus, the last theorem gives an algorithm for simulating a random variable that is uniformly distributed on an irregularly shaped region $R \subseteq S$ (assuming that we have an algorithm for recognizing when a point $x \in \R^n$ falls in $R$). This method of simulation is known as the rejection method, and as we will see in subsequent sections, is more important than it might first appear.
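A minimal sketch of the rejection method in Python, for the disk of radius $r$ inscribed in the square $S = [-r, r]^2$ (the function name and seed are ours):

```python
import random

def uniform_in_disk(r=1.0):
    # Rejection method: sample uniformly from the bounding square [-r, r]^2
    # and keep the first point that falls in the disk of radius r.
    # The expected number of trials per accepted point is 1/p = 4/pi ≈ 1.27.
    while True:
        x, y = random.uniform(-r, r), random.uniform(-r, r)
        if x * x + y * y <= r * r:
            return (x, y)

random.seed(1)
print([uniform_in_disk() for _ in range(3)])  # three points, uniform in the disk
```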
In the simple probability experiment, random points are uniformly distributed on the rectangular region $S$. Move and resize the events $A$ and $B$ and note how the probabilities of the 16 events that can be constructed from $A$ and $B$ change. Run the experiment 1000 times and note the agreement between the relative frequencies of the events and the probabilities of the events.
Suppose that $(X, Y)$ is uniformly distributed on the circular region of radius 5, centered at the origin. We can think of $(X, Y)$ as the position of a dart thrown randomly at a target. Let $R = \sqrt{X^2 + Y^2}$, the distance from the center to $(X, Y)$.
1. Give the probability density function of $(X, Y)$.
2. Find $\P(n \le R \le n + 1)$ for $n \in \{0, 1, 2, 3, 4\}$.
Answer
1. $f(x, y) = \frac{1}{25 \pi}$ for $(x, y)$ in the disk $\left\{(x, y) \in \R^2: x^2 + y^2 \le 25\right\}$
2. $\P(n \le R \le n + 1) = \frac{2 n + 1}{25}$ for $n \in \{0, 1, 2, 3, 4\}$
Suppose that $(X, Y, Z)$ is uniformly distributed on the cube $S = [0, 1]^3$. Find $\P(X \lt Y \lt Z)$ in two ways:
1. Using the probability density function.
2. Using a combinatorial argument.
Answer
1. $\P(X \lt Y \lt Z) = \int_0^1 \int_0^z \int_0^y 1 \, dx \, dy \, dz = \frac{1}{6}$
2. Each of the 6 strict orderings of $(X, Y, Z)$ is equally likely, so $\P(X \lt Y \lt Z) = \frac{1}{6}$
The time $T$ (in minutes) required to perform a certain job is uniformly distributed over the interval $[15, 60]$.
1. Find the probability that the job requires more than 30 minutes.
2. Given that the job is not finished after 30 minutes, find the probability that the job will require more than 15 additional minutes.
Answer
1. $\frac{2}{3}$
2. $\frac{1}{2}$
Data Analysis Exercises
If $D$ is a data set from a variable $X$ with a continuous distribution, then an empirical density function can be computed by partitioning the data range into subsets of small size, and then computing the probability density of points in each subset. Empirical probability density functions are studied in more detail in the chapter on Random Samples.
For the cicada data, $BW$ denotes body weight (in grams), $BL$ body length (in millimeters), and $G$ gender (0 for female and 1 for male). Construct an empirical density function for each of the following and display each as a bar graph:
1. $BW$
2. $BL$
3. $BW$ given $G = 0$
Answer
1. BW $(0, 0.1]$ $(0.1, 0.2]$ $(0.2, 0.3]$ $(0.3, 0.4]$
Density 0.8654 5.8654 3.0769 0.1923
2. BL $(15, 20]$ $(20, 25]$ $(25, 30]$ $(30, 35]$
Density 0.0058 0.1577 0.0346 0.0019
3. BW $(0, 0.1]$ $(0.1, 0.2]$ $(0.2, 0.3]$ $(0.3, 0.4]$
Density given $G = 0$ 0.3390 4.4068 5.0847 0.1695 | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/03%3A_Distributions/3.02%3A_Continuous_Distributions.txt |
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
In the previous two sections, we studied discrete probability measures and continuous probability measures. In this section, we will study probability measures that are combinations of the two types. Once again, if you are a new student of probability you may want to skip the technical details.
Basic Theory
Definitions and Basic Properties
Our starting point is a random experiment modeled by a probability space $(S, \mathscr S, \P)$. So to review, $S$ is the set of outcomes, $\mathscr S$ the collection of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. We use the terms probability measure and probability distribution synonymously in this text. Also, since we use a general definition of random variable, every probability measure can be thought of as the probability distribution of a random variable, so we can always take this point of view if we like. Indeed, most probability measures naturally have random variables associated with them. Here is the main definition:
The probability measure $\P$ is of mixed type if $S$ can be partitioned into events $D$ and $C$ with the following properties:
1. $D$ is countable, $0 \lt \P(D) \lt 1$ and $\P(\{x\}) \gt 0$ for every $x \in D$.
2. $C \subseteq \R^n$ for some $n \in \N_+$ and $\P(\{x\}) = 0$ for every $x \in C$.
Details
Recall that the term partition means that $D$ and $C$ are disjoint and $S = D \cup C$. As always, the collection of events $\mathscr S$ is required to be a $\sigma$-algebra. The set $C$ is a measurable subset of $\R^n$ and then the elements of $\mathscr S$ have the form $A \cup B$ where $A \subseteq D$ and $B$ is a measurable subset of $C$. Typically in applications, $C$ is defined by a finite number of inequalities involving elementary functions.
Often the discrete set $D$ is a subset of $\R^n$ also, but that's not a requirement. Note that since $D$ and $C$ are complements, $0 \lt \P(C) \lt 1$ also. Thus, part of the distribution is concentrated at points in a discrete set $D$; the rest of the distribution is continuously spread over $C$. In the picture below, the light blue shading is intended to represent a continuous distribution of probability while the darker blue dots are intended to represent points of positive probability.
The following result is essentially equivalent to the definition.
Suppose that $\P$ is a probability measure on $S$ of mixed type as in (1).
1. The conditional probability measure $A \mapsto \P(A \mid D) = \P(A) / \P(D)$ for $A \subseteq D$ is a discrete distribution on $D$.
2. The conditional probability measure $A \mapsto \P(A \mid C) = \P(A) / \P(C)$ for $A \subseteq C$ is a continuous distribution on $C$.
Proof
In general, conditional probability measures really are probability measures, so the results are obvious since $\P(\{x\} \mid D) \gt 0$ for $x$ in the countable set $D$, and $\P(\{x\} \mid C) = 0$ for $x \in C$. From another point of view, $\P$ restricted to subsets of $D$ and $\P$ restricted to subsets of $C$ are both finite measures and so can be normalized to produce probability measures.
Note that $\P(A) = \P(D) \P(A \mid D) + \P(C) \P(A \mid C), \quad A \in \mathscr S$ Thus, the probability measure $\P$ really is a mixture of a discrete distribution and a continuous distribution. Mixtures are studied in more generality in the section on conditional distributions. We can define a function on $D$ that is a partial probability density function for the discrete part of the distribution.
Suppose that $\P$ is a probability measure on $S$ of mixed type as in (1). Let $g$ be the function defined by $g(x) = \P(\{x\})$ for $x \in D$. Then
1. $g(x) \ge 0$ for $x \in D$
2. $\sum_{x \in D} g(x) = \P(D)$
3. $\P(A) = \sum_{x \in A} g(x)$ for $A \subseteq D$
Proof
These results follow from the axioms of probability.
1. $g(x) = \P(\{x\}) \ge 0$ since probabilities are nonnegative.
2. $\sum_{x \in D} g(x) = \sum_{x \in D} \P(\{x\}) = \P(D)$ by countable additivity.
3. $\sum_{x \in A} g(x) = \sum_{x \in A} \P(\{x\}) = \P(A)$ for $A \subseteq D$, again by countable additivity.
Technically, $g$ is a density function with respect to counting measure $\#$ on $D$, the standard measure used for discrete spaces.
Clearly, the normalized function $x \mapsto g(x) / \P(D)$ is the probability density function of the conditional distribution given $D$, discussed in (2). Often, the continuous part of the distribution is also described by a partial probability density function.
A partial probability density function for the continuous part of $\P$ is a nonnegative function $h: C \to [0, \infty)$ such that $\P(A) = \int_A h(x) \, dx, \quad A \subseteq C$
Details
Technically, $h$ is required to be measurable, and is a density function with respect to Lebesgue measure $\lambda_n$ on $C$, the standard measure on $\R^n$.
Clearly, the normalized function $x \mapsto h(x) / \P(C)$ is the probability density function of the conditional distribution given $C$ discussed in (2). As with purely continuous distributions, the existence of a probability density function for the continuous part of a mixed distribution is not guaranteed. And when it does exist, a density function for the continuous part is not unique. Note that the values of $h$ could be changed to other nonnegative values on a countable subset of $C$, and the displayed equation above would still hold, because only integrals of $h$ are important. The probability measure $\P$ is completely determined by the partial probability density functions.
Suppose that $\P$ has partial probability density functions $g$ and $h$ for the discrete and continuous parts, respectively. Then $\P(A) = \sum_{x \in A \cap D} g(x) + \int_{A \cap C} h(x) \, dx, \quad A \in \mathscr S$
Proof
This follows from countable additivity: since $D$ and $C$ partition $S$, $\P(A) = \P(A \cap D) + \P(A \cap C)$ for $A \in \mathscr S$, and the two terms have the stated forms by the defining properties of the partial density functions $g$ and $h$ above.
Truncated Variables
Distributions of mixed type occur naturally when a random variable with a continuous distribution is truncated in a certain way. For example, suppose that $T$ is the random lifetime of a device, and has a continuous distribution with probability density function $f$ that is positive on $[0, \infty)$. In a test of the device, we can't wait forever, so we might select a positive constant $a$ and record the random variable $U$, defined by truncating $T$ at $a$, as follows: $U = \begin{cases} T, & T \lt a \ a, & T \ge a \end{cases}$
$U$ has a mixed distribution. In the notation above,
1. $D = \{a\}$ and $g(a) = \int_a^\infty f(t) \, dt$
2. $C = [0, a)$ and $h(t)= f(t)$ for $t \in [0, a)$
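Simulation makes the mixed structure visible: the continuous part of the distribution of $U$ spreads over $[0, a)$, while an atom of probability $\P(T \ge a)$ sits at $a$. A minimal sketch in Python, truncating a standard exponential lifetime at $a = 2$ (this anticipates an exercise below; the seed and sample size are ours):

```python
import math, random

random.seed(4)
a = 2.0
n = 100_000
us = [min(random.expovariate(1.0), a) for _ in range(n)]  # U = min(T, a)

print(sum(u < 1 for u in us) / n, 1 - math.exp(-1))  # continuous part: P(U < 1)
print(sum(u == a for u in us) / n, math.exp(-a))     # atom at a: P(U = a) = e^(-2)
```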
Suppose next that random variable $X$ has a continuous distribution on $\R$, with probability density function $f$ that is positive on $\R$. Suppose also that $a, \, b \in \R$ with $a \lt b$. The variable is truncated on the interval $[a, b]$ to create a new random variable $Y$ as follows: $Y = \begin{cases} a, & X \le a \ X, & a \lt X \lt b \ b, & X \ge b \end{cases}$
$Y$ has a mixed distribution. In the notation above,
1. $D = \{a, b\}$, $g(a) = \int_{-\infty}^a f(x) \, dx$, $g(b) = \int_b^\infty f(x) \, dx$
2. $C = (a, b)$ and $h(x) = f(x)$ for $x \in (a, b)$
Another way that a mixed probability distribution can occur is when we have a pair of random variables $(X, Y)$ for our experiment, one with a discrete distribution and the other with a continuous distribution. This setting is explored in the next section on Joint Distributions.
Examples and Applications
Suppose that $X$ has probability $\frac{1}{2}$ uniformly distributed on the set $\{1, 2, \ldots, 8\}$ and has probability $\frac{1}{2}$ uniformly distributed on the interval $[0, 10]$. Find $\P(X \gt 6)$.
Answer
$\frac{13}{40}$
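The mixture form suggests a direct simulation: choose the discrete or the continuous part with a fair coin flip, exactly as in the decomposition $\P(A) = \P(D) \P(A \mid D) + \P(C) \P(A \mid C)$ above. A sketch in Python (the seed and names are ours):

```python
import random

def sim_x():
    # Choose the discrete or the continuous part with probability 1/2 each
    if random.random() < 0.5:
        return random.randint(1, 8)   # uniform on {1, 2, ..., 8}
    return random.uniform(0, 10)      # uniform on [0, 10]

random.seed(2)
n = 100_000
print(sum(sim_x() > 6 for _ in range(n)) / n)  # near 13/40 = 0.325
```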
Suppose that $(X, Y)$ has probability $\frac{1}{3}$ uniformly distributed on $\{0, 1, 2\}^2$ and has probability $\frac{2}{3}$ uniformly distributed on $[0, 2]^2$. Find $\P(Y \gt X)$.
Answer
$\frac{4}{9}$
Suppose that the lifetime $T$ of a device (in 1000 hour units) has the exponential distribution with probability density function $f(t) = e^{-t}$ for $0 \le t \lt \infty$. A test of the device is terminated after 2000 hours; the truncated lifetime $U$ is recorded. Find each of the following:
1. $\P(U \lt 1)$
2. $\P(U = 2)$
Answer
1. $1 - e^{-1} \approx 0.6321$
2. $e^{-2} \approx 0.1353$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/03%3A_Distributions/3.03%3A_Mixed_Distributions.txt |
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The purpose of this section is to study how the distribution of a pair of random variables is related to the distributions of the variables individually. If you are a new student of probability you may want to skip the technical details.
Basic Theory
Joint and Marginal Distributions
As usual, we start with a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ is the collection of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose now that $X$ and $Y$ are random variables for the experiment, and that $X$ takes values in $S$ while $Y$ takes values in $T$. We can think of $(X, Y)$ as a random variable taking values in the product set $S \times T$. The purpose of this section is to study how the distribution of $(X, Y)$ is related to the distributions of $X$ and $Y$ individually.
Recall that
1. The distribution of $(X, Y)$ is the probability measure on $S \times T$ given by $\P\left[(X, Y) \in C\right]$ for $C \subseteq S \times T$.
2. The distribution of $X$ is the probability measure on $S$ given by $\P(X \in A)$ for $A \subseteq S$.
3. The distribution of $Y$ is the probability measure on $T$ given by $\P(Y \in B)$ for $B \subseteq T$.
In this context, the distribution of $(X, Y)$ is called the joint distribution, while the distributions of $X$ and of $Y$ are referred to as marginal distributions.
Details
The sets $S$ and $T$ come with $\sigma$-algebras of admissible subsets $\mathscr S$ and $\mathscr T$, respectively, just as the collection of events $\mathscr F$ is a $\sigma$-algebra. The Cartesian product set $S \times T$ is given the product $\sigma$-algebra $\mathscr S \otimes \mathscr T$ generated by products $A \times B$ where $A \in \mathscr S$ and $B \in \mathscr T$. The random variables $X$ and $Y$ are measurable, which ensures that $(X, Y)$ is also a random variable (that is, measurable). Moreover, the distribution of $(X, Y)$ is uniquely determined by probabilities of the form $\P[(X, Y) \in A \times B] = \P(X \in A, Y \in B)$ where $A \in \mathscr S$ and $B \in \mathscr T$. As usual the spaces $(S, \mathscr S)$ and $(T, \mathscr T)$ each fall into the two classes we have studied in the previous sections:
1. Discrete: the set is countable and the $\sigma$-algebra consists of all subsets.
2. Euclidean: the set is a measurable subset of $\R^n$ for some $n \in \N_+$ and the $\sigma$-algebra consists of the measurable subsets.
The first simple but very important point, is that the marginal distributions can be obtained from the joint distribution.
Note that
1. $\P(X \in A) = \P\left[(X, Y) \in A \times T\right]$ for $A \subseteq S$
2. $\P(Y \in B) = \P\left[(X, Y) \in S \times B\right]$ for $B \subseteq T$
The converse does not hold in general. The joint distribution contains much more information than the marginal distributions separately. However, the converse does hold if $X$ and $Y$ are independent, as we will show below.
Joint and Marginal Densities
Recall that probability distributions are often described in terms of probability density functions. Our goal is to study how the probability density functions of $X$ and $Y$ individually are related to probability density function of $(X, Y)$. But first we need to make sure that we understand our starting point.
We assume that $(X, Y)$ has density function $f: S \times T \to [0, \infty)$ in the following sense:
1. If $X$ and $Y$ have discrete distributions on the countable sets $S$ and $T$ respectively, then $f$ is defined by $f(x, y) = \P(X = x, Y = y), \quad (x, y) \in S \times T$
2. If $X$ and $Y$ have continuous distributions on $S \subseteq \R^j$ and $T \subseteq \R^k$ respectively, then $f$ is defined by the condition $\P[(X, Y) \in C] = \int_C f(x, y) \, d(x, y), \quad C \subseteq S \times T$
3. In the mixed case where $X$ has a discrete distribution on the countable set $S$ and $Y$ has a continuous distribution on $T \subseteq \R^k$, then $f$ is defined by the condition $\P(X = x, Y \in B) = \int_B f(x, y) \, dy, \quad x \in S, \; B \subseteq T$
4. In the mixed case where $X$ has a continuous distribution on $S \subseteq \R^j$ and $Y$ has a discrete distribution on the countable set $T$, then $f$ is defined by the condition $\P(X \in A, Y = y) = \int_A f(x, y) \, dx, \quad A \subseteq S, \; y \in T$
Details
1. In this case, $(X, Y)$ has a discrete distribution on the countable set $S \times T$ and $f$ is the density function with respect to counting measure $\#$ on $S \times T$.
2. In this case, $(X, Y)$ has a continuous distribution on $S \times T \subseteq \R^{j+k}$ and $f$ is the density function with respect to Lebesgue measure $\lambda_{j+k}$ on $S \times T$. Lebesgue measure, named for Henri Lebesgue is the standard measure on Euclidean spaces.
3. In this case, $(X, Y)$ actually has a continuous distribution: $\P[(X, Y) = (x, y)] = \P(X = x, Y = y) \le \P(Y = y) = 0, \quad (x, y) \in S \times T$ The function $f$ is the density function with respect to the product measure formed from counting measure $\#$ on $S$ and Lebesgue measure $\lambda_k$ on $T$.
4. This case is just like (c) but with the roles of $S$ and $T$ reversed. Once again, $(X, Y)$ has a continuous distribution and $f$ is the density function with respect to the product measure on $S \times T$ formed by Lebesgue measure $\lambda_j$ on $S$ and counting measure $\#$ on $T$.
In cases (b), (c), and (d), the existence of a probability density function is not guaranteed, but is an assumption that we are making. All four cases (and many others) can be unified under the general theories of measure and integration.
First we will see how to obtain the probability density function of one variable when the other has a discrete distribution.
Suppose that $(X, Y)$ has probability density function $f$ as described above.
1. If $Y$ has a discrete distribution on the countable set $T$, then $X$ has probability density function $g$ given by $g(x) = \sum_{y \in T} f(x, y)$ for $x \in S$
2. If $X$ has a discrete distribution on the countable set $S$, then $Y$ has probability density function $h$ given by $h(y) = \sum_{x \in S} f(x, y), \quad y \in T$
Proof
The two results are symmetric, so we will prove (a). The main tool is the countable additivity property of probability. Suppose first that $X$ also has a discrete distribution on the countable set $S$. Then for $x \in S$, $g(x) = \P(X = x) = \P(X = x, Y \in T) = \sum_{y \in T} \P(X = x, Y = y) = \sum_{y \in T} f(x, y)$ Suppose next that $X$ has a continuous distribution on $S \subseteq \R^j$. Then for $A \subseteq \R^j$, $\P(X \in A) = \P(X \in A, Y \in T) = \sum_{y \in T} \P(X \in A, Y = y) = \sum_{y \in T} \int_A f(x, y) \, dx = \int_A \sum_{y \in T} f(x, y) \, dx$ The interchange of sum and integral is allowed since $f$ is nonnegative. By the meaning of the term, $X$ has probability density function $g$ given by $g(x) = \sum_{y \in T} f(x, y)$ for $x \in S$
Next we will see how to obtain the probability density function of one variable when the other has a continuous distribution.
Suppose again that $(X, Y)$ has probability density function $f$ as described above.
1. If $Y$ has a continuous distribution on $T \subseteq \R^k$ then $X$ has probability density function $g$ given by $g(x) = \int_T f(x, y) \, dy, \quad x \in S$
2. If $X$ has a continuous distribution on $S \subseteq \R^k$ then $Y$ has probability density function $h$ given by $h(y) = \int_S f(x, y) \, dx, \quad y \in T$
Proof
Again, the results are symmetric, so we show (a). Suppose first that $X$ has a discrete distribution on the countable set $S$. Then for $x \in S$ $g(x) = \P(X = x) = \P(X = x, Y \in T) = \int_T f(x, y) \, dy$ Next suppose that $X$ has a continuous distribution on $S \subseteq \R^j$. Then for $A \subseteq S$, $\P(X \in A) = \P(X \in A, Y \in T) = \P\left[(X, Y) \in A \times T\right] = \int_{A \times T} f(x, y) \, d(x, y) = \int_A \int_T f(x, y) \, dy \, dx$ Hence by the very meaning of the term, $X$ has probability density function $g$ given by $g(x) = \int_T f(x, y) \, dy$ for $x \in S$. Writing the double integral as an iterated integral is a special case of Fubini's theorem, named for Guido Fubini.
In the context of the previous two theorems, $f$ is called the joint probability density function of $(X, Y)$, while $g$ and $h$ are called the marginal density functions of $X$ and of $Y$, respectively. Some of the computational exercises below will make the term marginal clear.
Independence
When the variables are independent, the marginal distributions determine the joint distribution.
If $X$ and $Y$ are independent, then the distribution of $X$ and the distribution of $Y$ determine the distribution of $(X, Y)$.
Proof
If $X$ and $Y$ are independent then, $\P\left[(X, Y) \in A \times B\right] = \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) \quad A \in \mathscr S, \, B \in \mathscr T$ and as noted in the details for (1), this completely determines the distribution $(X, Y)$ on $S \times T$.
When the variables are independent, the joint density is the product of the marginal densities.
Suppose that $X$ and $Y$ are independent and have probability density function $g$ and $h$ respectively. Then $(X, Y)$ has probability density function $f$ given by $f(x, y) = g(x) h(y), \quad (x, y) \in S \times T$
Proof
The main tool is the fact that an event defined in terms of $X$ is independent of an event defined in terms of $Y$.
1. Suppose that $X$ and $Y$ have discrete distributions on the countable sets $S$ and $T$ respectively. Then for $(x, y) \in S \times T$, $\P\left[(X, Y) = (x, y)\right] = \P(X = x, Y = y) = \P(X = x) \P(Y = y) = g(x) h(y)$
2. Suppose next that $X$ and $Y$ have continuous distributions on $S \subseteq \R^j$ and $T \subseteq \R^k$ respectively. Then for $A \subseteq S$ and $B \subseteq T$. $\P\left[(X, Y) \in A \times B\right] = \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) = \int_A g(x) \, dx \, \int_B h(y) \, dy = \int_{A \times B} g(x) h(y) \, d(x, y)$ As noted in the details for (1), a probability measure on $S \times T$ is completely determined by its values on product sets, so it follows that $\P\left[(X, Y) \in C\right] = \int_C f(x, y) \, d(x, y)$ for general $C \subseteq S \times T$. Hence $(X, Y)$ has PDF $f$.
3. Suppose next that $X$ has a discrete distribution on the countable set $S$ and that $Y$ has a continuous distribution on $T \subseteq \R^k$. If $x \in S$ and $B \subseteq T$, $\P(X = x, Y \in B) = \P(X = x) \P(Y \in B) = g(x) \int_B h(y) \, dy = \int_B g(x) h(y) \, dy$ so again it follows that $(X, Y)$ has PDF $f$. The case where $X$ has a continuous distribution on $S \subseteq \R^j$ and $Y$ has a discrete distribution on the countable set $T$ is analogous.
The following result gives a converse to the last result. If the joint probability density factors into a function of $x$ only and a function of $y$ only, then $X$ and $Y$ are independent, and we can almost identify the individual probability density functions just from the factoring.
Factoring Theorem. Suppose that $(X, Y)$ has probability density function $f$ of the form $f(x, y) = u(x) v(y), \quad (x, y) \in S \times T$ where $u: S \to [0, \infty)$ and $v: T \to [0, \infty)$. Then $X$ and $Y$ are independent, and there exists a positive constant $c$ such that $X$ and $Y$ have probability density functions $g$ and $h$, respectively, given by \begin{align} g(x) = & c \, u(x), \quad x \in S \ h(y) = & \frac{1}{c} v(y), \quad y \in T \end{align}
Proof
Note that the proofs in the various cases are essentially the same, except for sums in the discrete case and integrals in the continuous case.
1. Suppose that $X$ and $Y$ have discrete distributions on the countable sets $S$ and $T$, respectively, so that $(X, Y)$ has a discrete distribution on $S \times T$. In this case, the assumption is $\P(X = x, Y = y) = u(x) v(y), \quad (x, y) \in S \times T$ Summing over $y \in T$ in the displayed equation gives $g(x) = \P(X = x) = c u(x)$ for $x \in S$ where $c = \sum_{y \in T} v(y)$. Similarly, summing over $x \in S$ in the displayed equation gives $h(y) = \P(Y = y) = k v(y)$ for $y \in T$ where $k = \sum_{x \in S} u(x)$. Summing over $(x, y) \in S \times T$ in the displayed equation gives $1 = c k$ so $k = 1 / c$. Finally, substituting gives $\P(X = x, Y = y) = \P(X = x) \P(Y = y)$ for $(x, y) \in S \times T$ so $X$ and $Y$ are independent.
2. Suppose next that $X$ and $Y$ have continuous distributions on $S \subseteq \R^j$ and $T \subseteq \R^k$ respectively. For $A \subseteq S$ and $B \subseteq T$, $\P(X \in A, Y \in B) = \P\left[(X, Y) \in A \times B\right] = \int_{A \times B} f(x, y) \, d(x, y) = \int_A u(x) \, dx \, \int_B v(y) dy$ Letting $B = T$ in the displayed equation gives $\P(X \in A) = \int_A c \, u(x) \, dx$ for $A \subseteq S$, where $c = \int_T v(y) \, dy$. By definition, $X$ has PDF $g = c \, u$. Next, letting $A = S$ in the displayed equation gives $\P(Y \in B) = \int_B k \, v(y) \, dy$ for $B \subseteq T$, where $k = \int_S u(x) \, dx$. Thus, $Y$ has PDF $h = k \, v$. Next, letting $A = S$ and $B = T$ in the displayed equation gives $1 = c \, k$, so $k = 1 / c$. Now note that the displayed equation holds with $u$ replaced by $g$ and $v$ replaced by $h$, and this in turn gives $\P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B)$, so $X$ and $Y$ are independent.
3. Suppose next that $X$ has a discrete distribution on the countable set $S$ and that $Y$ has a continuous distribution on $T \subseteq \R^k$. For $x \in S$ and $B \subseteq T$, $\P(X = x, Y \in B) = \int_B f(x, y) \, dy = u(x) \int_B v(y) \, dy$ Letting $B = T$ in the displayed equation gives $\P(X = x) = c \, u(x)$ for $x \in S$, where $c = \int_T v(y) \, dy$. So $X$ has PDF $g = c \, u$. Next, summing over $x \in S$ in the displayed equation gives $\P(Y \in B) = k \int_B v(y) \, dy$ for $B \subseteq T$, where $k = \sum_{x \in S} u(x)$. Thus, $Y$ has PDF $h = k \, v$. Next, summing over $x \in S$ and letting $B = T$ in the displayed equation gives $1 = c \, k$, so $k = 1 / c$. Now note that the displayed equation holds with $u$ replaced by $g$ and $v$ replaced by $h$, and this in turn gives $\P(X = x, Y \in B) = \P(X = x) \P(Y \in B)$, so $X$ and $Y$ are independent. The case where $X$ has a continuous distribution on $S \subseteq \R^j$ and $Y$ has a discrete distribution on the countable set $T$ is analogous.
The last two results extend to more than two random variables, because $X$ and $Y$ themselves may be random vectors. Here is the explicit statement:
Suppose that $X_i$ is a random variable taking values in a set $R_i$ with probability density function $g_i$ for $i \in \{1, 2, \ldots, n\}$, and that the random variables are independent. Then the random vector $\bs X = (X_1, X_2, \ldots, X_n)$ taking values in $S = R_1 \times R_2 \times \cdots \times R_n$ has probability density function $f$ given by $f(x_1, x_2, \ldots, x_n) = g_1(x_1) g_2(x_2) \cdots g_n(x_n), \quad (x_1, x_2, \ldots, x_n) \in S$
The special case where the distributions are all the same is particularly important.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each taking values in a set $R$ and with common probability density function $g$. Then the probability density function $f$ of $\bs X$ on $S = R^n$ is given by $f(x_1, x_2, \ldots, x_n) = g(x_1) g(x_2) \cdots g(x_n), \quad (x_1, x_2, \ldots, x_n) \in S$
In probability jargon, $\bs X$ is a sequence of independent, identically distributed variables, a phrase that comes up so often that it is often abbreviated as IID. In statistical jargon, $\bs X$ is a random sample of size $n$ from the common distribution. As is evident from the special terminology, this situation is very important in both branches of mathematics. In statistics, the joint probability density function $f$ plays an important role in procedures such as maximum likelihood and the identification of uniformly best estimators.
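As a small illustration, the joint density of an IID sample is just the product of the common density evaluated at each coordinate. A sketch in Python (the helper is ours; in numerical work one sums logarithms rather than multiplying densities directly, to avoid underflow):

```python
import math

def joint_pdf(sample, g):
    # Joint PDF of an IID sample: the product g(x_1) g(x_2) ... g(x_n).
    # Logs are summed and exponentiated to avoid numerical underflow;
    # this assumes g(x) > 0 at each sample point.
    return math.exp(sum(math.log(g(x)) for x in sample))

g = lambda x: math.exp(-x)            # exponential density with parameter 1
print(joint_pdf([0.5, 1.2, 0.3], g))  # e^(-2.0) ≈ 0.1353
```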
Recall that (mutual) independence of random variables is a very strong property. If a collection of random variables is independent, then any subcollection is also independent. New random variables formed from disjoint subcollections are independent. For a simple example, suppose that $X$, $Y$, and $Z$ are independent real-valued random variables. Then
1. $\sin(X)$, $\cos(Y)$, and $e^Z$ are independent.
2. $(X, Y)$ and $Z$ are independent.
3. $X^2 + Y^2$ and $\arctan(Z)$ are independent.
4. $X$ and $Z$ are independent.
5. $Y$ and $Z$ are independent.
In particular, note that statement 2 in the list above is much stronger than the conjunction of statements 4 and 5. Restated, if $X$ and $Z$ are dependent, then $(X, Y)$ and $Z$ are also dependent.
Examples and Applications
Dice
Recall that a standard die is an ordinary six-sided die, with faces numbered from 1 to 6. The answers in the next couple of exercises give the joint distribution in the body of a table, with the marginal distributions literally in the margins. Such tables are the reason for the term marginal distribution.
Suppose that two standard, fair dice are rolled and the sequence of scores $(X_1, X_2)$ recorded. Our usual assumption is that the variables $X_1$ and $X_2$ are independent. Let $Y = X_1 + X_2$ and $Z = X_1 - X_2$ denote the sum and difference of the scores, respectively.
1. Find the probability density function of $(Y, Z)$.
2. Find the probability density function of $Y$.
3. Find the probability density function of $Z$.
4. Are $Y$ and $Z$ independent?
Answer
Let $f$ denote the PDF of $(Y, Z)$, $g$ the PDF of $Y$, and $h$ the PDF of $Z$. The PDFs are given in the following table. Random variables $Y$ and $Z$ are dependent.
$f(y, z)$ $y = 2$ 3 4 5 6 7 8 9 10 11 12 $h(z)$
$z = -5$ 0 0 0 0 0 $\frac{1}{36}$ 0 0 0 0 0 $\frac{1}{36}$
$-4$ 0 0 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 0 0 $\frac{2}{36}$
$-3$ 0 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 0 $\frac{3}{36}$
$-2$ 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 $\frac{4}{36}$
$-1$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{5}{36}$
0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ $\frac{6}{36}$
1 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{5}{36}$
2 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 $\frac{4}{36}$
3 0 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 0 $\frac{3}{36}$
4 0 0 0 0 $\frac{1}{36}$ 0 $\frac{1}{36}$ 0 0 0 0 $\frac{2}{36}$
5 0 0 0 0 0 $\frac{1}{36}$ 0 0 0 0 0 $\frac{1}{36}$
$g(y)$ $\frac{1}{36}$ $\frac{2}{36}$ $\frac{3}{36}$ $\frac{4}{36}$ $\frac{5}{36}$ $\frac{6}{36}$ $\frac{5}{36}$ $\frac{4}{36}$ $\frac{3}{36}$ $\frac{2}{36}$ $\frac{1}{36}$ 1
Suppose that two standard, fair dice are rolled and the sequence of scores $(X_1, X_2)$ recorded. Let $U = \min\{X_1, X_2\}$ and $V = \max\{X_1, X_2\}$ denote the minimum and maximum scores, respectively.
1. Find the probability density function of $(U, V)$.
2. Find the probability density function of $U$.
3. Find the probability density function of $V$.
4. Are $U$ and $V$ independent?
Answer
Let $f$ denote the PDF of $(U, V)$, $g$ the PDF of $U$, and $h$ the PDF of $V$. The PDFs are given in the following table. Random variables $U$ and $V$ are dependent.
$f(u, v)$ $u = 1$ 2 3 4 5 6 $h(v)$
$v = 1$ $\frac{1}{36}$ 0 0 0 0 0 $\frac{1}{36}$
2 $\frac{2}{36}$ $\frac{1}{36}$ 0 0 0 0 $\frac{3}{36}$
3 $\frac{2}{36}$ $\frac{2}{36}$ $\frac{1}{36}$ 0 0 0 $\frac{5}{36}$
4 $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{1}{36}$ 0 0 $\frac{7}{36}$
5 $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{1}{36}$ 0 $\frac{9}{36}$
6 $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{2}{36}$ $\frac{1}{36}$ $\frac{11}{36}$
$g(u)$ $\frac{11}{36}$ $\frac{9}{36}$ $\frac{7}{36}$ $\frac{5}{36}$ $\frac{3}{36}$ $\frac{1}{36}$ 1
The previous two exercises show clearly how little information is given with the marginal distributions compared to the joint distribution. With the marginal PDFs alone, you could not even determine the support set of the joint distribution, let alone the values of the joint PDF.
Simple Continuous Distributions
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = x + \frac{1}{2}$ for $0 \le x \le 1$
2. $Y$ has PDF $h$ given by $h(y) = y + \frac{1}{2}$ for $0 \le y \le 1$
3. $X$ and $Y$ are dependent.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 2 ( x + y)$ for $0 \le x \le y \le 1$.
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = (1 + 3 x)(1 - x)$ for $0 \le x \le 1$.
2. $Y$ has PDF $h$ given by $h(y) = 3 y^2$ for $0 \le y \le 1$.
3. $X$ and $Y$ are dependent.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 6 x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = 3 x^2$ for $0 \le x \le 1$.
2. $Y$ has PDF $h$ given by $h(y) = 2 y$ for $0 \le y \le 1$.
3. $X$ and $Y$ are independent.
The last exercise is a good illustration of the factoring theorem. Without any work at all, we can tell that the PDF of $X$ is proportional to $x \mapsto x^2$ on the interval $[0, 1]$, the PDF of $Y$ is proportional to $y \mapsto y$ on the interval $[0, 1]$, and that $X$ and $Y$ are independent. The only thing unclear is how the constant 6 factors.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$.
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = \frac{15}{2} x^2 \left(1 - x^2\right)$ for $0 \le x \le 1$.
2. $Y$ has PDF $h$ given by $h(y) = 5 y^4$ for $0 \le y \le 1$.
3. $X$ and $Y$ are dependent.
Note that in the last exercise, the factoring theorem does not apply. Random variables $X$ and $Y$ each take values in $[0, 1]$, but the joint PDF factors only on part of $[0, 1]^2$.
Suppose that $(X, Y, Z)$ has probability density function $f$ given by $f(x, y, z) = 2 (x + y) z$ for $0 \le x \le 1$, $0 \le y \le 1$, $0 \le z \le 1$.
1. Find the probability density function of each pair of variables.
2. Find the probability density function of each variable.
3. Determine the dependency relationships between the variables.
Answer
1. $(X, Y)$ has PDF $f_{1,2}$ given by $f_{1,2}(x,y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$.
2. $(X, Z)$ has PDF $f_{1,3}$ given by $f_{1,3}(x,z) = 2 z \left(x + \frac{1}{2}\right)$ for $0 \le x \le 1$, $0 \le z \le 1$.
3. $(Y, Z)$ has PDF $f_{2,3}$ given by $f_{2,3}(y,z) = 2 z \left(y + \frac{1}{2}\right)$ for $0 \le y \le 1$, $0 \le z \le 1$.
4. $X$ has PDF $f_1$ given by $f_1(x) = x + \frac{1}{2}$ for $0 \le x \le 1$.
5. $Y$ has PDF $f_2$ given by $f_2(y) = y + \frac{1}{2}$ for $0 \le y \le 1$.
6. $Z$ has PDF $f_3$ given by $f_3(z) = 2 z$ for $0 \le z \le 1$.
7. $Z$ and $(X, Y)$ are independent; $X$ and $Y$ are dependent.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 2 e^{-x} e^{-y}$ for $0 \le x \le y \lt \infty$.
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = 2 e^{-2 x}$ for $0 \le x \lt \infty$.
2. $Y$ has PDF $h$ given by $h(y) = 2 \left(e^{-y} - e^{-2y}\right)$ for $0 \le y \lt \infty$.
3. $X$ and $Y$ are dependent.
In the previous exercise, $X$ has an exponential distribution with rate parameter 2. Recall that exponential distributions are widely used to model random times, particularly in the context of the Poisson model.
Suppose that $X$ and $Y$ are independent, and that $X$ has probability density function $g$ given by $g(x) = 6 x (1 - x)$ for $0 \le x \le 1$, and that $Y$ has probability density function $h$ given by $h(y) = 12 y^2 (1 - y)$ for $0 \le y \le 1$.
1. Find the probability density function of $(X, Y)$.
2. Find $\P(X + Y \le 1)$.
Answer
1. $(X, Y)$ has PDF $f$ given by $f(x, y) = 72 x (1 - x) y^2 (1 - y)$ for $0 \le x \le 1$, $0 \le y \le 1$.
2. $\P(X + Y \le 1) = \frac{13}{35}$
In the previous exercise, $X$ and $Y$ have beta distributions, which are widely used to model random probabilities and proportions. Beta distributions are studied in more detail in the chapter on Special Distributions.
Suppose that $\Theta$ and $\Phi$ are independent random angles, with common probability density function $g$ given by $g(t) = \sin(t)$ for $0 \le t \le \frac{\pi}{2}$.
1. Find the probability density function of $(\Theta, \Phi)$.
2. Find $\P(\Theta \le \Phi)$.
Answer
1. $(\Theta, \Phi)$ has PDF $f$ given by $f(\theta, \phi) = \sin(\theta) \sin(\phi)$ for $0 \le \theta \le \frac{\pi}{2}$, $0 \le \phi \le \frac{\pi}{2}$.
2. $\P(\Theta \le \Phi) = \frac{1}{2}$
The common distribution of $X$ and $Y$ in the previous exercise governs a random angle in Bertrand's problem.
Suppose that $X$ and $Y$ are independent, and that $X$ has probability density function $g$ given by $g(x) = \frac{2}{x^3}$ for $1 \le x \lt \infty$, and that $Y$ has probability density function $h$ given by $h(y) = \frac{3}{y^4}$ for $1 \le y \lt\infty$.
1. Find the probability density function of $(X, Y)$.
2. Find $\P(X \le Y)$.
Answer
1. $(X, Y)$ has PDF $f$ given by $f(x, y) = \frac{6}{x^3 y^4}$ for $1 \le x \lt \infty$, $1 \le y \lt \infty$.
2. $\P(X \le Y) = \frac{2}{5}$
Both $X$ and $Y$ in the previous exercise have Pareto distributions, named for Vilfredo Pareto. Recall that Pareto distributions are used to model certain economic variables and are studied in more detail in the chapter on Special Distributions.
Suppose that $(X, Y)$ has probability density function $g$ given by $g(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$, and that $Z$ has probability density function $h$ given by $h(z) = 4 z^3$ for $0 \le z \le 1$, and that $(X, Y)$ and $Z$ are independent.
1. Find the probability density function of $(X, Y, Z)$.
2. Find the probability density function of $(X, Z)$.
3. Find the probability density function of $(Y, Z)$.
4. Find $\P(Z \le X Y)$.
Answer
1. $(X, Y, Z)$ has PDF $f$ given by $f(x, y, z) = 60 x^2 y z^3$ for $0 \le x \le y \le 1$, $0 \le z \le 1$.
2. $(X, Z)$ has PDF $f_{1,3}$ given by $f_{1,3}(x, z) = 30 x^2 \left(1 - x^2\right) z^3$ for $0 \le x \le 1$, $0 \le z \le 1$.
3. $(Y, Z)$ has PDF $f_{2,3}$ given by $f_{2,3}(y, z) = 20 y^4 z^3$ for $0 \le y \le 1$, $0 \le z \le 1$.
4. $\P(Z \le X Y) = \frac{15}{91}$
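Since answers like the last one are easy to get wrong, a Monte Carlo check is worthwhile. We can sample $(X, Y)$ by a rejection argument: if $X$ and $Y$ are independent with densities $3 x^2$ and $2 y$ (so joint density $6 x^2 y$, with $\P(X \le Y) = \frac{2}{5}$ from an exercise above), then conditioning on $X \le Y$ yields the density $15 x^2 y$; each coordinate is simulated by inverting its CDF. A sketch in Python (the seed and names are ours):

```python
import random

random.seed(6)
n = 500_000
count = accepted = 0
while accepted < n:
    x = random.random() ** (1 / 3)  # density 3 x^2 on [0, 1] (inverted CDF x^3)
    y = random.random() ** (1 / 2)  # density 2 y on [0, 1] (inverted CDF y^2)
    if x > y:
        continue                    # reject: conditioning on x <= y gives 15 x^2 y
    accepted += 1
    z = random.random() ** (1 / 4)  # density 4 z^3 on [0, 1] (inverted CDF z^4)
    if z <= x * y:
        count += 1

print(count / n, 15 / 91)           # estimate vs. exact 15/91 ≈ 0.1648
```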
Multivariate Uniform Distributions
Multivariate uniform distributions give a geometric interpretation of some of the concepts in this section.
Recall first that for $n \in \N_+$, the standard measure on $\R^n$ is $\lambda_n(A) = \int_A 1 dx, \quad A \subseteq \R^n$ In particular, $\lambda_1(A)$ is the length of $A \subseteq \R$, $\lambda_2(A)$ is the area of $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A \subseteq \R^3$.
Details
Technically $\lambda_n$ is Lebesgue measure on the measurable subsets of $\R^n$. The integral representation is valid for the types of sets that occur in applications. In the discussion below, all subsets are assumed to be measurable.
Suppose now that $X$ takes values in $\R^j$, $Y$ takes values in $\R^k$, and that $(X, Y)$ is uniformly distributed on a set $R \subseteq \R^{j+k}$. So $0 \lt \lambda_{j+k}(R) \lt \infty$ and the joint probability density function $f$ of $(X, Y)$ is given by $f(x, y) = 1 \big/ \lambda_{j+k}(R)$ for $(x, y) \in R$. Recall that uniform distributions always have constant density functions. Now let $S$ and $T$ be the projections of $R$ onto $\R^j$ and $\R^k$ respectively, defined as follows: $S = \left\{x \in \R^j: (x, y) \in R \text{ for some } y \in \R^k\right\}, \; T = \left\{y \in \R^k: (x, y) \in R \text{ for some } x \in \R^j\right\}$ Note that $R \subseteq S \times T$. Next we denote the cross sections at $x \in S$ and at $y \in T$, respectively by $T_x = \{t \in T: (x, t) \in R\}, \; S_y = \{s \in S: (s, y) \in R\}$
$X$ takes values in $S$ and $Y$ takes values in $T$. The probability density functions $g$ and $h$ of $X$ and $Y$ are proportional to the cross-sectional measures:
1. $g(x) = \lambda_k\left(T_x\right) \big/ \lambda_{j+k}(R)$ for $x \in S$
2. $h(y) = \lambda_j\left(S_y\right) \big/ \lambda_{j+k}(R)$ for $y \in T$
Proof
From our general theory, $X$ has PDF $g$ given by $g(x) = \int_{T_x} f(x, y) \, dy = \int_{T_x} \frac{1}{\lambda_{j+k}(R)} \, dy = \frac{\lambda_k\left(T_x\right)}{\lambda_{j+k}(R)}, \quad x \in S$ Technically, it's possible that $\lambda_k(T_x) = \infty$ for some $x \in S$, but the set of such $x$ has measure 0. That is, $\lambda_j\{x \in S: \lambda_k(T_x) = \infty\} = 0$. The result for $Y$ is analogous.
In particular, note from the previous theorem that $X$ and $Y$ are neither independent nor uniformly distributed in general. However, these properties do hold if $R$ is a Cartesian product set.
Suppose that $R = S \times T$.
1. $X$ is uniformly distributed on $S$.
2. $Y$ is uniformly distributed on $T$.
3. $X$ and $Y$ are independent.
Proof
In this case, $T_x = T$ and $S_y = S$ for every $x \in S$ and $y \in T$. Also, $\lambda_{j+k}(R) = \lambda_j(S) \lambda_k(T)$, so for $x \in S$ and $y \in T$, $f(x, y) = 1 \big/ [\lambda_j(S) \lambda_k(T)]$, $g(x) = 1 \big/ \lambda_j(S)$, $h(y) = 1 \big/ \lambda_k(T)$.
In each of the following cases, find the joint and marginal probability density functions, and determine if $X$ and $Y$ are independent.
1. $(X, Y)$ is uniformly distributed on the square $R = [-6, 6]^2$.
2. $(X, Y)$ is uniformly distributed on the triangle $R = \{(x, y): -6 \le y \le x \le 6\}$.
3. $(X, Y)$ is uniformly distributed on the circle $R = \left\{(x, y): x^2 + y^2 \le 36\right\}$.
Answer
In the following, $f$ is the PDF of $(X, Y)$, $g$ the PDF of $X$, and $h$ the PDF of $Y$.
1. For the square:
• $f(x, y) = \frac{1}{144}$ for $-6 \le x \le 6$, $-6 \le y \le 6$
• $g(x) = \frac{1}{12}$ for $-6 \le x \le 6$
• $h(y) = \frac{1}{12}$ for $-6 \le y \le 6$
• $X$ and $Y$ are independent.
2. For the triangle:
• $f(x, y) = \frac{1}{72}$ for $-6 \le y \le x \le 6$
• $g(x) = \frac{1}{72}(x + 6)$ for $-6 \le x \le 6$
• $h(y) = \frac{1}{72}(6 - y)$ for $-6 \le y \le 6$
• $X$ and $Y$ are dependent.
3. For the circle:
• $f(x, y) = \frac{1}{36 \pi}$ for $x^2 + y^2 \le 36$
• $g(x) = \frac{1}{18 \pi} \sqrt{36 - x^2}$ for $-6 \le x \le 6$
• $h(y) = \frac{1}{18 \pi} \sqrt{36 - y^2}$ for $-6 \le y \le 6$
• $X$ and $Y$ are dependent.
In the bivariate uniform experiment, run the simulation 1000 times for each of the following cases. Watch the points in the scatter plot and the graphs of the marginal distributions. Interpret what you see in the context of the discussion above.
1. square
2. triangle
3. circle
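If the app is not available, the following Python sketch (an assumed stand-in, not the app itself) simulates the triangular case by rejection from the enclosing square and compares the empirical marginal density of $X$ with the theoretical density found above.

```python
import random

# Simulate (X, Y) uniformly on the triangle {(x, y): -6 <= y <= x <= 6}
# by drawing uniform points from the square [-6, 6]^2 and rejecting
# those that fall outside the triangle.
def sample_triangle():
    while True:
        x = random.uniform(-6, 6)
        y = random.uniform(-6, 6)
        if y <= x:
            return x, y

random.seed(1)
n = 100_000
xs = [sample_triangle()[0] for _ in range(n)]

# Compare the empirical density of X on subintervals of length 2 with
# the theoretical marginal g(x) = (x + 6) / 72; since g is linear, its
# average over [a, b] is (g(a) + g(b)) / 2.
for a in range(-6, 6, 2):
    b = a + 2
    emp = sum(1 for x in xs if a <= x < b) / (n * (b - a))
    theo = ((a + 6) + (b + 6)) / (2 * 72)
    print(f"[{a:3d}, {b:3d}): empirical {emp:.4f}, theoretical {theo:.4f}")
```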
Suppose that $(X, Y, Z)$ is uniformly distributed on the cube $[0, 1]^3$.
1. Give the joint probability density function of $(X, Y, Z)$.
2. Find the probability density function of each pair of variables.
3. Find the probability density function of each variable.
4. Determine the dependency relationships between the variables.
Answer
1. $(X, Y, Z)$ has PDF $f$ given by $f(x, y, z) = 1$ for $0 \le x \le 1$, $0 \le y \le 1$, $0 \le z \le 1$ (the uniform distribution on $[0, 1]^3$)
2. $(X, Y)$, $(X, Z)$, and $(Y, Z)$ have common PDF $g$ given by $g(u, v) = 1$ for $0 \le u \le 1$, $0 \le v \le 1$ (the uniform distribution on $[0, 1]^2$)
3. $X$, $Y$, and $Z$ have common PDF $h$ given by $h(u) = 1$ for $0 \le u \le 1$ (the uniform distribution on $[0, 1]$)
4. $X$, $Y$, $Z$ are independent.
Suppose that $(X, Y, Z)$ is uniformly distributed on $\{(x, y, z): 0 \le x \le y \le z \le 1\}$.
1. Give the joint density function of $(X, Y, Z)$.
2. Find the probability density function of each pair of variables.
3. Find the probability density function of each variable.
4. Determine the dependency relationships between the variables.
Answer
1. $(X, Y, Z)$ has PDF $f$ given by $f(x, y, z) = 6$ for $0 \le x \le y \le z \le 1$
2. The PDFs of the pairs:
• $(X, Y)$ has PDF $f_{1,2}$ given by $f_{1,2}(x, y) = 6 (1 - y)$ for $0 \le x \le y \le 1$
• $(X, Z)$ has PDF $f_{1,3}$ given by $f_{1,3}(x, z) = 6 (z - x)$ for $0 \le x \le z \le 1$
• $(Y, Z)$ has PDF $f_{2,3}$ given by $f_{2,3}(y, z) = 6 y$ for $0 \le y \le z \le 1$
3. The PDFs of the individual variables:
• $X$ has PDF $f_1$ given by $f_1(x) = 3 (1 - x)^2$ for $0 \le x \le 1$
• $Y$ has PDF $f_2$ given by $f_2(y) = 6 y (1 - y)$ for $0 \le y \le 1$
• $Z$ has PDF $f_3$ given by $f_3(z) = 3 z^2$ for $0 \le z \le 1$
4. Each pair of variables is dependent.
The Rejection Method
The following result shows how an arbitrary continuous distribution can be obtained from a uniform distribution. This result is useful for simulating certain continuous distributions, as we will see. To set up the basic notation, suppose that $g$ is a probability density function for a continuous distribution on $S \subseteq \R^n$. Let $R = \{(x, y): x \in S \text{ and } 0 \le y \le g(x)\} \subseteq \R^{n+1}$
If $(X, Y)$ is uniformly distributed on $R$, then $X$ has probability density function $g$.
Proof
Note that since $g$ is a probability density function on $S$, $\lambda_{n+1}(R) = \int_R 1 \, d(x, y) = \int_S \int_0^{g(x)} 1 \, dy \, dx = \int_S g(x) \, dx = 1$ Hence the probability density function $f$ of $(X, Y)$ is given by $f(x, y) = 1$ for $(x, y) \in R$. Thus, the probability density function of $X$ is $x \mapsto \int_0^{g(x)} 1 \, dy = g(x)$ for $x \in S$.
In the case $n = 1$, $R$ is simply the region between the graph of $g$ and the horizontal axis.
The next result gives the rejection method for simulating a random variable with the probability density function $g$.
Suppose now that $R \subseteq T$ where $T \subseteq \R^{n+1}$ with $\lambda_{n+1}(T) \lt \infty$ and that $\left((X_1, Y_1), (X_2, Y_2), \ldots\right)$ is a sequence of independent random variables with $X_k \in \R^n$, $Y_k \in \R$, and $\left(X_k, Y_k\right)$ uniformly distributed on $T$ for each $k \in \N_+$. Let $N = \min\left\{k \in \N_+: \left(X_k, Y_k\right) \in R\right\} = \min\left\{k \in \N_+: X_k \in S, \; 0 \le Y_k \le g\left(X_k\right)\right\}$
1. $N$ has the geometric distribution on $\N_+$ with success parameter $p = 1 / \lambda_{n+1}(T)$.
2. $\left(X_N, Y_N\right)$ is uniformly distributed on $R$.
3. $X_N$ has probability density function $g$.
Proof
Since $g$ is a probability density function, $\lambda_{n+1}(R) = 1$ by the previous result. Hence each trial point $\left(X_k, Y_k\right)$ falls in $R$ with probability $\lambda_{n+1}(R) \big/ \lambda_{n+1}(T) = 1 \big/ \lambda_{n+1}(T)$, independently from trial to trial. Part (a) follows since $N$ is the trial number of the first success in a sequence of Bernoulli trials. For part (b), if $A \subseteq R$ then $\P\left[\left(X_N, Y_N\right) \in A\right] = \sum_{k=1}^\infty (1 - p)^{k-1} \frac{\lambda_{n+1}(A)}{\lambda_{n+1}(T)} = \frac{1}{p} \frac{\lambda_{n+1}(A)}{\lambda_{n+1}(T)} = \frac{\lambda_{n+1}(A)}{\lambda_{n+1}(R)}$ so $\left(X_N, Y_N\right)$ is uniformly distributed on $R$. Part (c) then follows from part (b) and the previous theorem.
The point of the theorem is that if we can simulate a sequence of independent variables that are uniformly distributed on $T$, then we can simulate a random variable with the given probability density function $g$. Suppose in particular that $R$ is bounded as a subset of $\R^{n+1}$, which would mean that the domain $S$ is bounded as a subset of $\R^n$ and that the probability density function $g$ is bounded on $S$. In this case, we can find $T$ that is the Cartesian product of $n + 1$ bounded intervals with $R \subseteq T$. It turns out to be very easy to simulate a sequence of independent variables, each uniformly distributed on such a product set, so the rejection method always works in this case. As you might guess, the rejection method works best if the size of $T$, namely $\lambda_{n+1}(T)$, is small, so that the success parameter $p$ is large.
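As a concrete illustration, here is a minimal Python sketch of the rejection method for the (assumed example) density $g(x) = 6 x (1 - x)$ on $S = [0, 1]$. Since $g$ is bounded by $\frac{3}{2}$, we can take $T = [0, 1] \times \left[0, \frac{3}{2}\right]$, so the success parameter is $p = \frac{2}{3}$: on average $\frac{3}{2}$ candidate points are needed per accepted value.

```python
import random

def g(x):
    # Target density on S = [0, 1]; bounded by 3/2, so we can take
    # T = [0, 1] x [0, 3/2] as the enclosing product set.
    return 6 * x * (1 - x)

def rejection_sample():
    # Generate points (X_k, Y_k) uniform on T until one lands in
    # R = {(x, y): 0 <= y <= g(x)}; return its first coordinate.
    while True:
        x = random.uniform(0, 1)
        y = random.uniform(0, 3 / 2)
        if y <= g(x):
            return x

random.seed(2)
sample = [rejection_sample() for _ in range(100_000)]
# The target distribution has mean 1/2; the sample mean should be close.
print(sum(sample) / len(sample))
```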
The rejection method app simulates a number of continuous distributions. For each of the following distributions, vary the parameters and note the shape and location of the probability density function. Then run the experiment 1000 times and observe the results.
1. The beta distribution
2. The semicircle distribution
3. The triangle distribution
4. The U-power distribution
The Multivariate Hypergeometric Distribution
Suppose that a population consists of $m$ objects, and that each object is one of four types. There are $a$ type 1 objects, $b$ type 2 objects, $c$ type 3 objects and $m - a - b - c$ type 0 objects. We sample $n$ objects from the population at random, and without replacement. The parameters $m$, $a$, $b$, $c$, and $n$ are nonnegative integers with $a + b + c \le m$ and $n \le m$. Denote the number of type 1, 2, and 3 objects in the sample by $X$, $Y$, and $Z$, respectively. Hence, the number of type 0 objects in the sample is $n - X - Y - Z$. In the problems below, the variables $x$, $y$, and $z$ take values in $\N$.
$(X, Y, Z)$ has a (multivariate) hypergeometric distribution with probability density function $f$ given by $f(x, y, z) = \frac{\binom{a}{x} \binom{b}{y} \binom{c}{z} \binom{m - a - b - c}{n - x - y - z}}{\binom{m}{n}}, \quad x + y + z \le n$
Proof
From the basic theory of combinatorics, the numerator is the number of ways to select an unordered sample of size $n$ from the population with $x$ objects of type 1, $y$ objects of type 2, $z$ objects of type 3, and $n - x - y - z$ objects of type 0. The denominator is the total number of ways to select the unordered sample.
$(X, Y)$ also has a (multivariate) hypergeometric distribution, with the probability density function $g$ given by $g(x, y) = \frac{\binom{a}{x} \binom{b}{y} \binom{m - a - b}{n - x - y}}{\binom{m}{n}}, \quad x + y \le n$
Proof
This result could be obtained by summing the joint PDF over $z$ for fixed $(x, y)$. However, there is a much nicer combinatorial argument. Note that we are selecting a random sample of size $n$ from a population of $m$ objects, with $a$ objects of type 1, $b$ objects of type 2, and $m - a - b$ objects of other types.
$X$ has an ordinary hypergeometric distribution, with probability density function $h$ given by $h(x) = \frac{\binom{a}{x} \binom{m - a}{n - x}}{\binom{m}{n}}, \quad x \le n$
Proof
Again, the result could be obtained by summing the joint PDF for $(X, Y, Z)$ over $(y, z)$ for fixed $x$, or by summing the joint PDF for $(X, Y)$ over $y$ for fixed $x$. But as before, there is a much more elegant combinatorial argument. Note that we are selecting a random sample of size $n$ from a population of $m$ objects, with $a$ objects of type 1 and $m - a$ objects of other types.
These results generalize in a straightforward way to a population with any number of types. In brief, if a random vector has a hypergeometric distribution, then any sub-vector also has a hypergeometric distribution. In other words, all of the marginal distributions of a hypergeometric distribution are themselves hypergeometric. Note however, that it's not a good idea to memorize the formulas above explicitly. It's better to just note the patterns and recall the combinatorial meaning of the binomial coefficient. The hypergeometric distribution and the multivariate hypergeometric distribution are studied in more detail in the chapter on Finite Sampling Models.
Suppose that a population of voters consists of 50 democrats, 40 republicans, and 30 independents. A sample of 10 voters is chosen at random from the population (without replacement, of course). Let $X$ denote the number of democrats in the sample and $Y$ the number of republicans in the sample. Find the probability density function of each of the following:
1. $(X, Y)$
2. $X$
3. $Y$
Answer
In the formulas for the PDFs below, the variables $x$ and $y$ are nonnegative integers.
1. $(X, Y)$ has PDF $f$ given by $f(x, y) = \frac{1}{\binom{120}{10}} \binom{50}{x} \binom{40}{y} \binom{30}{10 - x - y}$ for $x + y \le 10$
2. $X$ has PDF $g$ given by $g(x) = \frac{1}{\binom{120}{10}} \binom{50}{x} \binom{70}{10 - x}$ for $x \le 10$
3. $Y$ has PDF $h$ given by $h(y) = \frac{1}{\binom{120}{10}} \binom{40}{y} \binom{80}{10 - y}$ for $y \le 10$
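As a numerical sanity check (a sketch, not part of the exercise), the PDFs above can be evaluated exactly with integer binomial coefficients; summing the joint PDF over $y$ should reproduce the marginal PDF of $X$.

```python
from math import comb

def f(x, y):
    # Joint PDF of (X, Y): democrats and republicans in the sample
    return comb(50, x) * comb(40, y) * comb(30, 10 - x - y) / comb(120, 10)

def g(x):
    # Marginal PDF of X (ordinary hypergeometric)
    return comb(50, x) * comb(70, 10 - x) / comb(120, 10)

# The marginal obtained by summing the joint PDF over y should agree
# with the hypergeometric formula, term by term.
for x in range(11):
    total = sum(f(x, y) for y in range(11 - x))
    print(x, round(total, 6), round(g(x), 6))
```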
Suppose that the Math Club at Enormous State University (ESU) has 50 freshmen, 40 sophomores, 30 juniors and 20 seniors. A sample of 10 club members is chosen at random to serve on the $\pi$-day committee. Let $X$ denote the number of freshmen on the committee, $Y$ the number of sophomores, and $Z$ the number of juniors.
1. Find the probability density function of $(X, Y, Z)$
2. Find the probability density function of each pair of variables.
3. Find the probability density function of each individual variable.
Answer
In the formulas for the PDFs below, the variables $x$, $y$, and $z$ are nonnegative integers.
1. $(X, Y, Z)$ has PDF $f$ given by $f(x, y, z) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{40}{y} \binom{30}{z} \binom{20}{10 - x - y - z}$ for $x + y + z \le 10$.
• $(X, Y)$ has PDF $f_{1,2}$ given by $f_{1,2}(x, y) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{40}{y} \binom{50}{10 - x - y}$ for $x + y \le 10$.
• $(X, Z)$ has PDF $f_{1,3}$ given by $f_{1,3}(x, z) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{30}{z} \binom{60}{10 - x - z}$ for $x + z \le 10$.
• $(Y, Z)$ has PDF $f_{2,3}$ given by $f_{2,3}(y, z) = \frac{1}{\binom{140}{10}} \binom{40}{y} \binom{30}{z} \binom{70}{10 - y - z}$ for $y + z \le 10$.
• $X$ has PDF $f_1$ given by $f_1(x) = \frac{1}{\binom{140}{10}} \binom{50}{x} \binom{90}{10 - x}$ for $x \le 10$.
• $Y$ has PDF $f_2$ given by $f_2(y) = \frac{1}{\binom{140}{10}} \binom{40}{y} \binom{100}{10 - y}$ for $y \le 10$.
• $Z$ has PDF $f_3$ given by $f_3(z) = \frac{1}{\binom{140}{10}} \binom{30}{z} \binom{110}{10 - z}$ for $z \le 10$.
Multinomial Trials
Suppose that we have a sequence of $n$ independent trials, each with 4 possible outcomes. On each trial, outcome 1 occurs with probability $p$, outcome 2 with probability $q$, outcome 3 with probability $r$, and outcome 0 occurs with probability $1 - p - q - r$. The parameters $p$, $q$, and $r$ are nonnegative numbers with $p + q + r \le 1$, and $n \in \N_+$. Denote the number of times that outcome 1, outcome 2, and outcome 3 occurred in the $n$ trials by $X$, $Y$, and $Z$ respectively. Of course, the number of times that outcome 0 occurs is $n - X - Y - Z$. In the problems below, the variables $x$, $y$, and $z$ take values in $\N$.
$(X, Y, Z)$ has a multinomial distribution with probability density function $f$ given by $f(x, y, z) = \binom{n}{x, \, y, \, z} p^x q^y r^z (1 - p - q - r)^{n - x - y - z}, \quad x + y + z \le n$
Proof
The multinomial coefficient is the number of sequences of length $n$ with 1 occurring $x$ times, 2 occurring $y$ times, 3 occurring $z$ times, and 0 occurring $n - x - y - z$ times. The result then follows by independence.
$(X, Y)$ also has a multinomial distribution with the probability density function $g$ given by $g(x, y) = \binom{n}{x, \, y} p^x q^y (1 - p - q)^{n - x - y}, \quad x + y \le n$
Proof
This result could be obtained from the joint PDF above, by summing over $z$ for fixed $(x, y)$. However there is a much better direct argument. Note that we have $n$ independent trials, and on each trial, outcome 1 occurs with probability $p$, outcome 2 with probability $q$, and some other outcome with probability $1 - p - q$.
$X$ has a binomial distribution, with the probability density function $h$ given by $h(x) = \binom{n}{x} p^x (1 - p)^{n - x}, \quad x \le n$
Proof
Again, the result could be obtained by summing the joint PDF for $(X, Y, Z)$ over $(y, z)$ for fixed $x$ or by summing the joint PDF for $(X, Y)$ over $y$ for fixed $x$. But as before, there is a much better direct argument. Note that we have $n$ independent trials, and on each trial, outcome 1 occurs with probability $p$ and some other outcome with probability $1 - p$.
These results generalize in a completely straightforward way to multinomial trials with any number of trial outcomes. In brief, if a random vector has a multinomial distribution, then any sub-vector also has a multinomial distribution. In other terms, all of the marginal distributions of a multinomial distribution are themselves multinomial. The binomial distribution and the multinomial distribution are studied in more detail in the chapter on Bernoulli Trials.
Suppose that a system consists of 10 components that operate independently. Each component is working with probability $\frac{1}{2}$, idle with probability $\frac{1}{3}$, or failed with probability $\frac{1}{6}$. Let $X$ denote the number of working components and $Y$ the number of idle components. Give the probability density function of each of the following:
1. $(X, Y)$
2. $X$
3. $Y$
Answer
In the formulas below, the variables $x$ and $y$ are nonnegative integers.
1. $(X, Y)$ has PDF $f$ given by $f(x, y) = \binom{10}{x, \; y} \left(\frac{1}{2}\right)^x \left(\frac{1}{3}\right)^y \left(\frac{1}{6}\right)^{10 - x - y}$ for $x + y \le 10$.
2. $X$ has PDF $g$ given by $g(x) = \binom{10}{x} \left(\frac{1}{2}\right)^{10}$ for $x \le 10$.
3. $Y$ has PDF $h$ given by $h(y) = \binom{10}{y} \left(\frac{1}{3}\right)^y \left(\frac{2}{3}\right)^{10 - y}$ for $y \le 10$.
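Here is a similar sketch for this exercise, using the factorization $\binom{n}{x, \, y} = \binom{n}{x} \binom{n - x}{y}$ of the multinomial coefficient; again, summing the joint PDF over $y$ should reproduce the marginal PDF of $X$.

```python
from math import comb

def f(x, y):
    # Joint PDF of (X, Y): multinomial with n = 10, p = 1/2, q = 1/3
    return (comb(10, x) * comb(10 - x, y)
            * (1 / 2) ** x * (1 / 3) ** y * (1 / 6) ** (10 - x - y))

def g(x):
    # Marginal PDF of X: binomial with n = 10, p = 1/2
    return comb(10, x) * (1 / 2) ** 10

for x in range(11):
    total = sum(f(x, y) for y in range(11 - x))
    print(x, round(total, 6), round(g(x), 6))
```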
Suppose that in a crooked, four-sided die, face $i$ occurs with probability $\frac{i}{10}$ for $i \in \{1, 2, 3, 4\}$. The die is thrown 12 times; let $X$ denote the number of times that score 1 occurs, $Y$ the number of times that score 2 occurs, and $Z$ the number of times that score 3 occurs.
1. Find the probability density function of $(X, Y, Z)$
2. Find the probability density function of each pair of variables.
3. Find the probability density function of each individual variable.
Answer
In the formulas for the PDFs below, the variables $x$, $y$ and $z$ are nonnegative integers.
1. $(X, Y, Z)$ has PDF $f$ given by $f(x, y, z) = \binom{12}{x, \, y, \, z} \left(\frac{1}{10}\right)^x \left(\frac{2}{10}\right)^y \left(\frac{3}{10}\right)^z \left(\frac{4}{10}\right)^{12 - x - y - z}$ for $x + y + z \le 12$.
• $(X, Y)$ has PDF $f_{1,2}$ given by $f_{1,2}(x, y) = \binom{12}{x, \; y} \left(\frac{1}{10}\right)^{x} \left(\frac{2}{10}\right)^y \left(\frac{7}{10}\right)^{12 - x - y}$ for $x + y \le 12$.
• $(X, Z)$ has PDF $f_{1,3}$ given by $f_{1,3}(x, z) = \binom{12}{x, \; z} \left(\frac{1}{10}\right)^{x} \left(\frac{3}{10}\right)^z \left(\frac{6}{10}\right)^{12 - x - z}$ for $x + z \le 12$.
• $(Y, Z)$ has PDF $f_{2,3}$ given by $f_{2,3}(y, z) = \binom{12}{y, \; z} \left(\frac{2}{10}\right)^{y} \left(\frac{3}{10}\right)^z \left(\frac{5}{10}\right)^{12 - y - z}$ for $y + z \le 12$.
• $X$ has PDF $f_1$ given by $f_1(x) = \binom{12}{x} \left(\frac{1}{10}\right)^x \left(\frac{9}{10}\right)^{12 - x}$ for $x \le 12$.
• $Y$ has PDF $f_2$ given by $f_2(y) = \binom{12}{y} \left(\frac{2}{10}\right)^y \left(\frac{8}{10}\right)^{12 - y}$ for $y \le 12$.
• $Z$ has PDF $f_3$ given by $f_3(z) = \binom{12}{z} \left(\frac{3}{10}\right)^z \left(\frac{7}{10}\right)^{12 - z}$ for $z \le 12$.
Bivariate Normal Distributions
Suppose that $(X, Y)$ has probability density function $f$ given below: $f(x, y) = \frac{1}{12 \pi} \exp\left[-\left(\frac{x^2}{8} + \frac{y^2}{18}\right)\right], \quad (x, y) \in \R^2$
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = \frac{1}{2 \sqrt{2 \pi}} e^{-x^2 / 8}$ for $x \in \R$.
2. $Y$ has PDF $h$ given by $h(y) = \frac{1}{3 \sqrt{2 \pi}} e^{-y^2 /18}$ for $y \in \R$.
3. $X$ and $Y$ are independent.
Suppose that $(X, Y)$ has probability density function $f$ given below:
$f(x, y) = \frac{1}{\sqrt{3} \pi} \exp\left[-\frac{2}{3}\left(x^2 - x y + y^2\right)\right], \quad(x, y) \in \R^2$
1. Find the density function of $X$.
2. Find the density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2}$ for $x \in \R$.
2. $Y$ has PDF $h$ given by $h(y) = \frac{1}{\sqrt{2 \pi}} e^{-y^2 / 2}$ for $y \in \R$.
3. $X$ and $Y$ are dependent.
The joint distributions in the last two exercises are examples of bivariate normal distributions. Normal distributions are widely used to model physical measurements subject to small, random errors. In both exercises, the marginal distributions of $X$ and $Y$ also have normal distributions, and this turns out to be true in general. The multivariate normal distribution is studied in more detail in the chapter on Special Distributions.
Exponential Distributions
Recall that the exponential distribution has probability density function $f(t) = r e^{-r t}, \quad t \in [0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. The exponential distribution is widely used to model random times, and is studied in more detail in the chapter on the Poisson Process.
Suppose $X$ and $Y$ have exponential distributions with parameters $a \in (0, \infty)$ and $b \in (0, \infty)$, respectively, and are independent. Then $\P(X \lt Y) = \frac{a}{a + b}$.
Suppose $X$, $Y$, and $Z$ have exponential distributions with parameters $a \in (0, \infty)$, $b \in (0, \infty)$, and $c \in (0, \infty)$, respectively, and are independent. Then
1. $\P(X \lt Y \lt Z) = \frac{a}{a + b + c} \frac{b}{b + c}$
2. $\P(X \lt Y, X \lt Z) = \frac{a}{a + b + c}$
If $X$, $Y$, and $Z$ are the lifetimes of devices that act independently, then the results in the previous two exercises give probabilities of various failure orders. Results of this type are also very important in the study of continuous-time Markov processes. We will continue this discussion in the section on transformations of random variables.
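These formulas are also easy to check by simulation. The following sketch, with arbitrarily chosen rates, estimates $\P(X \lt Y \lt Z)$ and compares the estimate with the first formula above.

```python
import random

a, b, c = 1.0, 2.0, 3.0  # arbitrary rate parameters for the check
random.seed(3)
n = 1_000_000
count = 0
for _ in range(n):
    x = random.expovariate(a)
    y = random.expovariate(b)
    z = random.expovariate(c)
    if x < y < z:
        count += 1

print("simulated:", count / n)
print("formula:  ", (a / (a + b + c)) * (b / (b + c)))
```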
Mixed Coordinates
Suppose $X$ takes values in the finite set $\{1, 2, 3\}$, $Y$ takes values in the interval $[0, 3]$, and that $(X, Y)$ has probability density function $f$ given by $f(x, y) = \begin{cases} \frac{1}{3}, & \quad x = 1, \; 0 \le y \le 1 \\ \frac{1}{6}, & \quad x = 2, \; 0 \le y \le 2 \\ \frac{1}{9}, & \quad x = 3, \; 0 \le y \le 3 \end{cases}$
1. Find the probability density function of $X$.
2. Find the probability density function of $Y$.
3. Are $X$ and $Y$ independent?
Answer
1. $X$ has PDF $g$ given by $g(x) = \frac{1}{3}$ for $x \in \{1, 2, 3\}$ (the uniform distribution on $\{1, 2, 3\}$).
2. $Y$ has PDF $h$ given by $h(y) = \begin{cases} \frac{11}{18}, & 0 \lt y \lt 1 \\ \frac{5}{18}, & 1 \lt y \lt 2 \\ \frac{2}{18}, & 2 \lt y \lt 3 \end{cases}$
3. $X$ and $Y$ are dependent.
Suppose that $P$ takes values in the interval $[0, 1]$, $X$ takes values in the finite set $\{0, 1, 2, 3\}$, and that $(P, X)$ has probability density function $f$ given by $f(p, x) = 6 \binom{3}{x} p^{x + 1} (1 - p)^{4 - x}, \quad (p, x) \in [0, 1] \times \{0, 1, 2, 3\}$
1. Find the probability density function of $P$.
2. Find the probability density function of $X$.
3. Are $P$ and $X$ independent?
Answer
1. $P$ has PDF $g$ given by $g(p) = 6 p (1 - p)$ for $0 \le p \le 1$.
2. $X$ has PDF $h$ given by $h(0) = h(3) = \frac{1}{5}$, $h(1) = h(2) = \frac{3}{10}$
3. $P$ and $X$ are dependent.
As we will see in the section on conditional distributions, the distribution in the last exercise models the following experiment: a random probability $P$ is selected, and then a coin with this probability of heads is tossed 3 times; $X$ is the number of heads. Note that $P$ has a beta distribution.
Random Samples
Recall that the Bernoulli distribution with parameter $p \in [0, 1]$ has probability density function $g$ given by $g(x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$. Let $\bs X = (X_1, X_2, \ldots, X_n)$ be a random sample of size $n \in \N_+$ from the distribution. Give the probability density function of $\bs X$ in simplified form.
Answer
$\bs X$ has PDF $f$ given by $f(x_1, x_2, \ldots, x_n) = p^y (1 - p)^{n-y}$ for $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$, where $y = x_1 + x_2 + \cdots + x_n$
The Bernoulli distribution is named for Jacob Bernoulli, and governs an indicator random variable. Hence if $\bs X$ is a random sample of size $n$ from the distribution then $\bs X$ is a sequence of $n$ Bernoulli trials. A separate chapter studies Bernoulli trials in more detail.
Recall that the geometric distribution on $\N_+$ with parameter $p \in (0, 1)$ has probability density function $g$ given by $g(x) = p (1 - p)^{x - 1}$ for $x \in \N_+$. Let $\bs X = (X_1, X_2, \ldots, X_n)$ be a random sample of size $n \in \N_+$ from the distribution. Give the probability density function of $\bs X$ in simplified form.
Answer
$\bs X$ has PDF $f$ given by $f(x_1, x_2, \ldots, x_n) = p^n (1 - p)^{y-n}$ for $(x_1, x_2, \ldots, x_n) \in \N_+^n$, where $y = x_1 + x_2 + \cdots + x_n$.
The geometric distribution governs the trial number of the first success in a sequence of Bernoulli trials. Hence the variables in the random sample can be interpreted as the number of trials between successive successes.
Recall that the Poisson distribution with parameter $a \in (0, \infty)$ has probability density function $g$ given by $g(x) = e^{-a} \frac{a^x}{x!}$ for $x \in \N$. Let $\bs X = (X_1, X_2, \ldots, X_n)$ be a random sample of size $n \in \N_+$ from the distribution. Give the probability density function of $\bs X$ in simplified form.
Answer
$\bs X$ has PDF $f$ given by $f(x_1, x_2, \ldots, x_n) = \frac{1}{x_1! x_2! \cdots x_n!} e^{-n a} a^y$ for $(x_1, x_2, \ldots, x_n) \in \N^n$, where $y = x_1 + x_2 + \cdots + x_n$.
The Poisson distribution is named for Simeon Poisson, and governs the number of random points in a region of time or space under appropriate circumstances. The parameter $a$ is proportional to the size of the region. The Poisson distribution is studied in more detail in the chapter on the Poisson process.
Recall again that the exponential distribution with rate parameter $r \in (0, \infty)$ has probability density function $g$ given by $g(x) = r e^{-r x}$ for $x \in (0, \infty)$. Let $\bs X = (X_1, X_2, \ldots, X_n)$ be a random sample of size $n \in \N_+$ from the distribution. Give the probability density function of $\bs X$ in simplified form.
Answer
$\bs X$ has PDF $f$ given by $f(x_1, x_2, \ldots, x_n) = r^n e^{-r y}$ for $(x_1, x_2, \ldots, x_n) \in [0, \infty)^n$, where $y = x_1 + x_2 + \cdots + x_n$.
The exponential distribution governs failure times and other types of arrival times under appropriate circumstances. The exponential distribution is studied in more detail in the chapter on the Poisson process. The variables in the random sample can be interpreted as the times between successive arrivals in the Poisson process.
Recall that the standard normal distribution has probability density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2}$ for $z \in \R$. Let $\bs Z = (Z_1, Z_2, \ldots, Z_n)$ be a random sample of size $n \in \N_+$ from the distribution. Give the probability density function of $\bs Z$ in simplified form.
Answer
$\bs Z$ has PDF $f$ given by $f(z_1, z_2, \ldots, z_n) = \frac{1}{(2 \pi)^{n/2}} e^{-\frac{1}{2} w^2}$ for $(z_1, z_2, \ldots, z_n) \in \R^n$, where $w^2 = z_1^2 + z_2^2 + \cdots + z_n^2$.
The standard normal distribution governs physical quantities, properly scaled and centered, subject to small, random errors. The normal distribution is studied in more generality in the chapter on Special Distributions.
Data Analysis Exercises
For the cicada data, $G$ denotes gender and $S$ denotes species type.
1. Find the empirical density of $(G, S)$.
2. Find the empirical density of $G$.
3. Find the empirical density of $S$.
4. Do you believe that $S$ and $G$ are independent?
Answer
The empirical joint and marginal densities are given in the table below. Gender and species are probably dependent (compare the joint density with the product of the marginal densities).
$f(i, j)$ $i = 0$ 1 $h(j)$
$j = 0$ $\frac{16}{104}$ $\frac{28}{104}$ $\frac{44}{104}$
1 $\frac{3}{104}$ $\frac{3}{104}$ $\frac{6}{104}$
2 $\frac{40}{104}$ $\frac{14}{104}$ $\frac{54}{104}$
$g(i)$ $\frac{59}{104}$ $\frac{45}{104}$ 1
For the cicada data, let $W$ denote body weight (in grams) and $L$ body length (in millimeters).
1. Construct an empirical density for $(W, L)$.
2. Find the corresponding empirical density for $W$.
3. Find the corresponding empirical density for $L$.
4. Do you believe that $W$ and $L$ are independent?
Answer
The empirical joint and marginal densities, based on simple partitions of the body weight and body length ranges, are given in the table below. Body weight and body length are almost certainly dependent.
Density $(W, L)$ $w \in (0, 0.1]$ $(0.1, 0.2]$ $(0.2, 0.3]$ $(0.3, 0.4]$ Density $L$
$l \in (15, 20]$ 0 0.0385 0.0192 0 0.0058
$(20, 25]$ 0.1731 0.9808 0.4231 0 0.1577
$(25, 30]$ 0 0.1538 0.1731 0.0192 0.0346
$(30, 35]$ 0 0 0 0.0192 0.0019
Density $W$ 0.8654 5.8654 3.0769 0.1923
For the cicada data, let $G$ denote gender and $W$ body weight (in grams).
1. Construct an empirical density for $(W, G)$.
2. Find the empirical density for $G$.
3. Find the empirical density for $W$.
4. Do you believe that $G$ and $W$ are independent?
Answer
The empirical joint and marginal densities, based on a simple partition of the body weight range, are given in the table below. Body weight and gender are almost certainly dependent.
Density $(W, G)$ $w \in (0, 0.1]$ $(0.1, 0.2]$ $(0.2, 0.3]$ $(0.3, 0.4]$ Density $G$
$g = 0$ 0.1923 2.5000 2.8846 0.0962 0.5673
1 0.6731 3.3654 0.1923 0.0962 0.4327
Density $W$ 0.8654 5.8654 3.0769 0.1923 | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/03%3A_Distributions/3.04%3A_Joint_Distributions.txt |
In this section, we study how a probability distribution changes when a given random variable has a known, specified value. This is an essential topic, since it deals with how probability measures should be updated in light of new information. As usual, if you are a new student of probability, you may want to skip the technical details.
Basic Theory
Our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ is the collection of events, and $\P$ is the probability measure on the underlying sample space $(\Omega, \mathscr F)$.
Suppose that $X$ is a random variable defined on the sample space (that is, defined for the experiment), taking values in a set $S$.
Details
Technically, the collection of events $\mathscr F$ is a $\sigma$-algebra, so that the sample space $(\Omega, \mathscr F)$ is a measurable space. Similarly, $S$ will have a $\sigma$-algebra of admissible subsets, so that $(S, \mathscr S)$ is also a measurable space. Random variable $X$ is measurable, so that $\{X \in A\} \in \mathscr F$ for every $A \in \mathscr S$. The distribution of $X$ is the probability measure $A \mapsto \P(X \in A)$ for $A \in \mathscr S$.
The purpose of this section is to study the conditional probability measure given $X = x$ for $x \in S$. That is, if $E$ is an event, we would like to define and study the probability of $E$ given $X = x$, denoted $\P(E \mid X = x)$. If $X$ has a discrete distribution, the conditioning event has positive probability, so no new concepts are involved, and the simple definition of conditional probability suffices. When $X$ has a continuous distribution, however, the conditioning event has probability 0, so a fundamentally new approach is needed.
The Discrete Case
Suppose first that $X$ has a discrete distribution with probability density function $g$. Thus $S$ is countable and we can assume that $g(x) = \P(X = x) \gt 0$ for $x \in S$.
If $E$ is an event and $x \in S$ then $\P(E \mid X = x) = \frac{\P(E, X = x)}{g(x)}$
Proof
The meaning of discrete distribution is that $S$ is countable and $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$. Technically, $g$ is the probability density function of $X$ with respect to counting measure $\#$ on $S$, the standard measure for discrete spaces. In the displayed equation above, the comma separating the events in the numerator of the fraction means and, and thus functions just like the intersection symbol. This result follows immediately from the definition of conditional probability: $\P(E \mid X = x) = \frac{\P(E, X = x)}{\P(X = x)} = \frac{\P(E, X = x)}{g(x)}$
The next result is a special case of the law of total probability, and will be the key to the definition when $X$ has a continuous distribution.
If $E$ is an event then $\P(E, X \in A) = \sum_{x \in A} g(x) \P(E \mid X = x), \quad A \subseteq S$ Conversely, this condition uniquely determines $\P(E \mid X = x)$.
Proof
As noted, the displayed equation is just a special case of the law of total probability. For $A \subseteq S$, the countable collection of events $\left\{\{X = x\}: x \in A\right\}$ partitions $\{X \in A\}$ so $\P(E, X \in A) = \sum_{x \in A} \P(E, X = x) = \sum_{x \in A} \P(E \mid X = x) \P(X = x) = \sum_{x \in A} \P(E \mid X = x) g(x)$ Conversely, suppose that the function $Q(x, E)$, defined for $x \in S$ and $E \in \mathscr F$, satisfies $\P(E, X \in A) = \sum_{x \in A} g(x) Q(x, E), \quad A \subseteq S$ Letting $A = \{x\}$ for $x \in S$ gives $\P(E, X = x) = g(x) Q(x, E)$, so $Q(x, E) = \P(E, X = x) \big/ g(x) = \P(E \mid X = x)$.
The Continuous Case
Suppose now that $X$ has a continuous distribution on $S \subseteq \R^n$ for some $n \in \N_+$, with probability density function $g$. We assume that $g(x) \gt 0$ for $x \in S$. Unlike the discrete case, we cannot use simple conditional probability to define $\P(E \mid X = x)$ for an event $E$ and $x \in S$ because the conditioning event has probability 0. Nonetheless, the concept should make sense. If we actually run the experiment, $X$ will take on some value $x$ (even though a priori, this event occurs with probability 0), and surely the information that $X = x$ should in general alter the probabilities that we assign to other events. A natural approach is to use the results obtained in the discrete case as definitions in the continuous case.
If $E$ is an event and $x \in S$, the conditional probability $\P(E \mid X = x)$ is defined by the requirement that $\P(E, X \in A) = \int_A g(x) \P(E \mid X = x) \, dx, \quad A \subseteq S$
Details
Technically, $S$ is a measurable subset of $\R^n$ and the $\sigma$-algebra $\mathscr S$ consists of the subsets of $S$ that are also measurable as subsets of $\R^n$. The function $g$ is also required to be measurable, and is the density function of $X$ with respect to Lebesgue measure $\lambda_n$. Lebesgue measure is named for Henri Lebesgue and is the standard measure on $\R^n$.
We will accept the fact that $\P(E \mid X = x)$ can be defined uniquely, up to a set of measure 0, by the condition above, but we will return to this point in the section on Conditional Expectation in the chapter on Expected Value. Essentially the condition means that $\P(E \mid X = x)$ is defined so that $x \mapsto g(x) \P(E \mid X = x)$ is a density function for the finite measure $A \mapsto \P(E, X \in A)$.
Conditioning and Bayes' Theorem
Suppose again that $X$ is a random variable with values in $S$ and probability density function $g$, as described above. Our discussions above in the discrete and continuous cases lead to basic formulas for computing the probability of an event by conditioning on $X$.
The law of total probability. If $E$ is an event, then $\P(E)$ can be computed as follows:
1. If $X$ has a discrete distribution then $\P(E) = \sum_{x \in S} g(x) \P(E \mid X = x)$
2. If $X$ has a continuous distribution then $\P(E) = \int_S g(x) \P(E \mid X = x) \, dx$
Proof
1. This follows from the discrete theorem with $A = S$.
2. This follows from the fundamental definition with $A = S$.
Naturally, the law of total probability is useful when $\P(E \mid X = x)$ and $g(x)$ are known for $x \in S$. Our next result is Bayes' theorem, named for Thomas Bayes.
Bayes' Theorem. Suppose that $E$ is an event with $\P(E) \gt 0$. The conditional probability density function $x \mapsto g(x \mid E)$ of $X$ given $E$ can be computed as follows:
1. If $X$ has a discrete distribution then $g(x \mid E) = \frac{g(x) \P(E \mid X = x)}{\sum_{s \in S} g(s) \P(E \mid X = s)}, \quad x \in S$
2. If $X$ has a continuous distribution then $g(x \mid E) = \frac{g(x) \P(E \mid X = x)}{\int_S g(s) \P(E \mid X = s) \, ds}, \quad x \in S$
Proof
1. In the discrete case, as usual, the ordinary simple definition of conditional probability suffices. The numerator in the displayed equation is $\P(X = x) \P(E \mid X = x) = \P(E, X = x)$. The denominator is $\P(E)$ by part (a) of the law of total probability. Hence the fraction is $\P(E, X = x) / \P(E) = \P(X = x \mid E)$.
2. In the continuous case, as usual, the argument is more subtle. We need to show that the expression in the displayed equation satisfies the defining property of a PDF for the conditional distribution. Once again, the denominator is $\P(E)$ by part (b) of the law of total probability. If $A \subseteq S$ then using the fundamental definition, $\int_A g(x \mid E) \, dx = \frac{1}{\P(E)} \int_A g(x) \P(E \mid X = x) \, dx = \frac{\P(E, X \in A)}{\P(E)} = \P(X \in A \mid E)$ By the meaning of the term, $x \mapsto g(x \mid E)$ is the conditional probability density function of $X$ given $E$.
In the context of Bayes' theorem, $g$ is called the prior probability density function of $X$ and $x \mapsto g(x \mid E)$ is the posterior probability density function of $X$ given $E$. Note also that the conditional probability density function of $X$ given $E$ is proportional to the function $x \mapsto g(x) \P(E \mid X = x)$; the sum or integral of this function that occurs in the denominator is simply the normalizing constant. As with the law of total probability, Bayes' theorem is useful when $\P(E \mid X = x)$ and $g(x)$ are known for $x \in S$.
Conditional Probability Density Functions
The definitions and results above apply, of course, if $E$ is an event defined in terms of another random variable for our experiment. Here is the setup:
Suppose that $X$ and $Y$ are random variables on the probability space, with values in sets $S$ and $T$, respectively, so that $(X, Y)$ is a random variable with values in $S \times T$. We assume that $(X, Y)$ has probability density function $f$, as discussed in the section on Joint Distributions. Recall that $X$ has probability density function $g$ defined as follows:
1. If $Y$ has a discrete distribution on the countable set $T$ then $g(x) = \sum_{y \in T} f(x, y), \quad x \in S$
2. If $Y$ has a continuous distribution on $T \subseteq \R^k$ then $g(x) = \int_T f(x, y) dy, \quad x \in S$
Similarly, the probability density function $h$ of $Y$ can be obtained by summing $f$ over $x \in S$ if $X$ has a discrete distribution or integrating $f$ over $S$ if $X$ has a continuous distribution.
Suppose that $x \in S$ and that $g(x) \gt 0$. The function $y \mapsto h(y \mid x)$ defined below is a probability density function on $T$: $h(y \mid x) = \frac{f(x, y)}{g(x)}, \quad y \in T$
Proof
The result is simple, since $g(x)$ is the normalizing constant for $y \mapsto h(y \mid x)$. Specifically, fix $x \in S$. Then $h(y \mid x) \ge 0$. If $Y$ has a discrete distribution then $\sum_{y \in T} h(y \mid x) = \frac{1}{g(x)} \sum_{y \in T} f(x, y) = \frac{g(x)}{g(x)} = 1$ Similarly, if $Y$ has a continuous distribution then $\int_T h(y \mid x) \, dy = \frac{1}{g(x)} \int_T f(x, y) \, dy = \frac{g(x)}{g(x)} = 1$
The distribution that corresponds to this probability density function is what you would expect:
For $x \in S$, the function $y \mapsto h(y \mid x)$ is the conditional probability density function of $Y$ given $X = x$. That is,
1. If $Y$ has a discrete distribution then $\P(Y \in B \mid X = x) = \sum_{y \in B} h(y \mid x), \quad B \subseteq T$
2. If $Y$ has a continuous distribution then $\P(Y \in B \mid X = x) = \int_B h(y \mid x) \, dy, \quad B \subseteq T$
Proof
There are four cases, depending on the type of distribution of $X$ and $Y$, but the computations are identical, except for sums in the discrete case and integrals in the continuous case. The main tool is the basic theorem when $X$ has a discrete distribution and the fundamental definition when $X$ has a continuous distribution, with the event $E$ replaced by $\{Y \in B\}$ for $B \subseteq T$. The other main element is the fact that $f$ is the PDF of the (joint) distribution of $(X, Y)$.
1. Suppose that $Y$ has a discrete distribution on the countable set $T$. If $X$ also has a discrete distribution on the countable set $S$ then $\sum_{x \in A} g(x) \sum_{y \in B} h(y \mid x) = \sum_{x \in A} \sum_{y \in B} g(x) h(y \mid x) = \sum_{x \in A} \sum_{y \in B} f(x, y) = \P(X \in A, Y \in B), \quad A \subseteq S$ In this jointly discrete case, there is a simpler argument of course: $h(y \mid x) = \frac{f(x, y)}{g(x)} = \frac{\P(X = x, Y = y)}{\P(X = x)} = \P(Y = y \mid X = x), \quad y \in T$ If $X$ has a continuous distribution on $S \subseteq \R^j$ then $\int_A g(x) \sum_{y \in B} h(y \mid x) \, dx = \int_A \sum_{y \in B} g(x) h(y \mid x) \, dx = \int_A \sum_{y \in B} f(x, y) \, dx = \P(X \in A, Y \in B), \quad A \subseteq S$
2. Suppose that $Y$ has a continuous distribution on $T \subseteq \R^k$. If $X$ has a discrete distribution on the countable set $S$ then $\sum_{x \in A} g(x) \int_B h(y \mid x) \, dy = \sum_{x \in A} \int_B g(x) h(y \mid x) \, dy = \sum_{x \in A} \int_B f(x, y) \, dy = \P(X \in A, Y \in B), \quad A \subseteq S$ If $X$ has a continuous distribution on $S \subseteq \R^j$ then $\int_A g(x) \int_B h(y \mid x) \, dy \, dx = \int_A \int_B g(x) h(y \mid x) \, dy \, dx = \int_A \int_B f(x, y) \, dy \, dx = \P(X \in A, Y \in B), \quad A \subseteq S$
The following theorem gives Bayes' theorem for probability density functions. We use the notation established above.
Bayes' Theorem. For $y \in T$, the conditional probability density function $x \mapsto g(x \mid y)$ of $X$ given $Y = y$ can be computed as follows:
1. If $X$ has a discrete distribution then $g(x \mid y) = \frac{g(x) h(y \mid x)}{\sum_{s \in S} g(s) h(y \mid s)}, \quad x \in S$
2. If $X$ has a continuous distribution then $g(x \mid y) = \frac{g(x) h(y \mid x)}{\int_S g(s) h(y \mid s) ds}, \quad x \in S$
Proof
In both cases the numerator is $f(x, y)$ while the denominator is $h(y)$.
In the context of Bayes' theorem, $g$ is the prior probability density function of $X$ and $x \mapsto g(x \mid y)$ is the posterior probability density function of $X$ given $Y = y$ for $y \in T$. Note that the posterior probability density function $x \mapsto g(x \mid y)$ is proportional to the function $x \mapsto g(x) h(y \mid x)$. The sum or integral in the denominator is the normalizing constant.
Independence
Intuitively, $X$ and $Y$ should be independent if and only if the conditional distributions are the same as the corresponding unconditional distributions.
The following conditions are equivalent:
1. $X$ and $Y$ are independent.
2. $f(x, y) = g(x) h(y)$ for $x \in S$, $y \in T$
3. $h(y \mid x) = h(y)$ for $x \in S$, $y \in T$
4. $g(x \mid y) = g(x)$ for $x \in S$, $y \in T$
Proof
The equivalence of (a) and (b) was established in the section on joint distributions. Parts (c) and (d) are equivalent to (b). Recall that for a continuous distribution, a probability density function is not unique: the values of a PDF can be changed on a set of measure 0 and the resulting function is still a PDF. So if $X$ or $Y$ has a continuous distribution, the equations above have to be interpreted as holding for $x$ or $y$, respectively, except on a set of measure 0.
Examples and Applications
In the exercises that follow, look for special models and distributions that we have studied. A special distribution may be embedded in a larger problem, as a conditional distribution, for example. In particular, a conditional distribution sometimes arises when a parameter of a standard distribution is randomized.
A couple of special distributions will occur frequently in the exercises. First, recall that the discrete uniform distribution on a finite, nonempty set $S$ has probability density function $f$ given by $f(x) = 1 \big/ \#(S)$ for $x \in S$. This distribution governs an element selected at random from $S$.
Recall also that Bernoulli trials (named for Jacob Bernoulli) are independent trials, each with two possible outcomes generically called success and failure. The probability of success $p \in [0, 1]$ is the same for each trial, and is the basic parameter of the random process. The number of successes in $n \in \N_+$ Bernoulli trials has the binomial distribution with parameters $n$ and $p$. This distribution has probability density function $f$ given by $f(x) = \binom{n}{x} p^x (1 - p)^{n - x}$ for $x \in \{0, 1, \ldots, n\}$. The binomial distribution is studied in more detail in the chapter on Bernoulli trials.
Coins and Dice
Suppose that two standard, fair dice are rolled and the sequence of scores $(X_1, X_2)$ is recorded. Let $U = \min\{X_1, X_2\}$ and $V = \max\{X_1, X_2\}$ denote the minimum and maximum scores, respectively.
1. Find the conditional probability density function of $U$ given $V = v$ for each $v \in \{1, 2, 3, 4, 5, 6\}$.
2. Find the conditional probability density function of $V$ given $U = u$ for each $u \in \{1, 2, 3, 4, 5, 6\}$.
Answer
1. $g(u \mid v)$ $u = 1$ 2 3 4 5 6
$v = 1$ 1 0 0 0 0 0
2 $\frac{2}{3}$ $\frac{1}{3}$ 0 0 0 0
3 $\frac{2}{5}$ $\frac{2}{5}$ $\frac{1}{5}$ 0 0 0
4 $\frac{2}{7}$ $\frac{2}{7}$ $\frac{2}{7}$ $\frac{1}{7}$ 0 0
5 $\frac{2}{9}$ $\frac{2}{9}$ $\frac{2}{9}$ $\frac{2}{9}$ $\frac{1}{9}$ 0
6 $\frac{2}{11}$ $\frac{2}{11}$ $\frac{2}{11}$ $\frac{2}{11}$ $\frac{2}{11}$ $\frac{1}{11}$
2. $h(v \mid u)$ $u = 1$ 2 3 4 5 6
$v = 1$ $\frac{1}{11}$ 0 0 0 0 0
2 $\frac{2}{11}$ $\frac{1}{9}$ 0 0 0 0
3 $\frac{2}{11}$ $\frac{2}{9}$ $\frac{1}{7}$ 0 0 0
4 $\frac{2}{11}$ $\frac{2}{9}$ $\frac{2}{7}$ $\frac{1}{5}$ 0 0
5 $\frac{2}{11}$ $\frac{2}{9}$ $\frac{2}{7}$ $\frac{2}{5}$ $\frac{1}{3}$ 0
6 $\frac{2}{11}$ $\frac{2}{9}$ $\frac{2}{7}$ $\frac{2}{5}$ $\frac{2}{3}$ $1$
In the die-coin experiment, a standard, fair die is rolled and then a fair coin is tossed the number of times showing on the die. Let $N$ denote the die score and $Y$ the number of heads.
1. Find the joint probability density function of $(N, Y)$.
2. Find the probability density function of $Y$.
3. Find the conditional probability density function of $N$ given $Y = y$ for each $y \in \{0, 1, 2, 3, 4, 5, 6\}$.
Answer
1. and 2.
$f(n, y)$ $n = 1$ 2 3 4 5 6 $h(y)$
$y = 0$ $\frac{1}{12}$ $\frac{1}{24}$ $\frac{1}{48}$ $\frac{1}{96}$ $\frac{1}{192}$ $\frac{1}{384}$ $\frac{63}{384}$
1 $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{16}$ $\frac{1}{24}$ $\frac{5}{192}$ $\frac{1}{64}$ $\frac{120}{384}$
2 0 $\frac{1}{24}$ $\frac{1}{16}$ $\frac{1}{16}$ $\frac{5}{96}$ $\frac{5}{128}$ $\frac{99}{384}$
3 0 0 $\frac{1}{48}$ $\frac{1}{24}$ $\frac{5}{96}$ $\frac{5}{96}$ $\frac{64}{384}$
4 0 0 0 $\frac{1}{96}$ $\frac{5}{192}$ $\frac{5}{128}$ $\frac{29}{384}$
5 0 0 0 0 $\frac{1}{192}$ $\frac{1}{64}$ $\frac{8}{384}$
6 0 0 0 0 0 $\frac{1}{384}$ $\frac{1}{384}$
$g(n)$ $\frac{1}{6}$ $\frac{1}{6}$ $\frac{1}{6}$ $\frac{1}{6}$ $\frac{1}{6}$ $\frac{1}{6}$ 1
3. $g(n \mid y)$ $n = 1$ 2 3 4 5 6
$y = 0$ $\frac{32}{63}$ $\frac{16}{63}$ $\frac{8}{63}$ $\frac{4}{63}$ $\frac{2}{63}$ $\frac{1}{63}$
1 $\frac{16}{60}$ $\frac{16}{60}$ $\frac{12}{60}$ $\frac{8}{60}$ $\frac{5}{60}$ $\frac{3}{60}$
2 0 $\frac{16}{99}$ $\frac{24}{99}$ $\frac{24}{99}$ $\frac{20}{99}$ $\frac{15}{99}$
3 0 0 $\frac{2}{16}$ $\frac{4}{16}$ $\frac{5}{16}$ $\frac{5}{16}$
4 0 0 0 $\frac{4}{29}$ $\frac{10}{29}$ $\frac{15}{29}$
5 0 0 0 0 $\frac{1}{4}$ $\frac{3}{4}$
6 0 0 0 0 0 1
In the die-coin experiment, select the fair die and coin.
1. Run the simulation 1000 times and compare the empirical density function of $Y$ with the true probability density function in the previous exercise.
2. Run the simulation 1000 times and compute the empirical conditional density function of $N$ given $Y = 3$. Compare with the conditional probability density functions in the previous exercise.
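If the app is not available, a rough Python stand-in for part 2 is sketched below; the empirical conditional PDF of $N$ given $Y = 3$ should be close to the row for $y = 3$ in the previous exercise, namely $\left(0, 0, \frac{2}{16}, \frac{4}{16}, \frac{5}{16}, \frac{5}{16}\right)$.

```python
import random

random.seed(4)
counts = {n: 0 for n in range(1, 7)}
for _ in range(100_000):
    n = random.randint(1, 6)                          # die score N
    y = sum(random.random() < 0.5 for _ in range(n))  # heads among n tosses
    if y == 3:
        counts[n] += 1

total = sum(counts.values())
# Empirical conditional PDF of N given Y = 3
print({n: round(c / total, 4) for n, c in counts.items()})
```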
In the coin-die experiment, a fair coin is tossed. If the coin is tails, a standard, fair die is rolled. If the coin is heads, a standard, ace-six flat die is rolled (faces 1 and 6 have probability $\frac{1}{4}$ each and faces 2, 3, 4, 5 have probability $\frac{1}{8}$ each). Let $X$ denote the coin score (0 for tails and 1 for heads) and $Y$ the die score.
1. Find the joint probability density function of $(X, Y)$.
2. Find the probability density function of $Y$.
3. Find the conditional probability density function of $X$ given $Y = y$ for each $y \in \{1, 2, 3, 4, 5, 6\}$.
Answer
1. and 2.
$f(x, y)$ $y = 1$ 2 3 4 5 6 $g(x)$
$x = 0$ $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{12}$ $\frac{1}{2}$
1 $\frac{1}{8}$ $\frac{1}{16}$ $\frac{1}{16}$ $\frac{1}{16}$ $\frac{1}{16}$ $\frac{1}{8}$ $\frac{1}{2}$
$h(y)$ $\frac{5}{24}$ $\frac{7}{48}$ $\frac{7}{48}$ $\frac{7}{48}$ $\frac{7}{48}$ $\frac{5}{24}$ 1
3. $g(x \mid y)$ $y = 1$ 2 3 4 5 6
$x = 0$ $\frac{2}{5}$ $\frac{4}{7}$ $\frac{4}{7}$ $\frac{4}{7}$ $\frac{4}{7}$ $\frac{2}{5}$
1 $\frac{3}{5}$ $\frac{3}{7}$ $\frac{3}{7}$ $\frac{3}{7}$ $\frac{3}{7}$ $\frac{3}{5}$
In the coin-die experiment, select the settings of the previous exercise.
1. Run the simulation 1000 times and compare the empirical density function of $Y$ with the true probability density function in the previous exercise.
2. Run the simulation 100 times and compute the empirical conditional probability density function of $X$ given $Y = 2$. Compare with the conditional probability density function in the previous exercise.
Suppose that a box contains 12 coins: 5 are fair, 4 are biased so that heads comes up with probability $\frac{1}{3}$, and 3 are two-headed. A coin is chosen at random and tossed 2 times. Let $P$ denote the probability of heads of the selected coin, and $X$ the number of heads.
1. Find the joint probability density function of $(P, X)$.
2. Find the probability density function of $X$.
3. Find the conditional probability density function of $P$ given $X = x$ for $x \in \{0, 1, 2\}$.
Answer
1. and 2.
$f(p, x)$ $x = 0$ 1 2 $g(p)$
$p = \frac{1}{2}$ $\frac{5}{48}$ $\frac{10}{48}$ $\frac{5}{48}$ $\frac{5}{12}$
$\frac{1}{3}$ $\frac{4}{27}$ $\frac{4}{27}$ $\frac{1}{27}$ $\frac{4}{12}$
1 0 0 $\frac{1}{4}$ $\frac{3}{12}$
$h(x)$ $\frac{109}{432}$ $\frac{154}{432}$ $\frac{169}{432}$ 1
3. $g(p \mid x)$ $x = 0$ 1 2
$p = \frac{1}{2}$ $\frac{45}{109}$ $\frac{45}{77}$ $\frac{45}{169}$
$\frac{1}{3}$ $\frac{64}{109}$ $\frac{32}{77}$ $\frac{16}{169}$
1 0 0 $\frac{108}{169}$
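The posterior column for $x = 2$ (or any other column) can be reproduced by applying Bayes' theorem directly to the discrete prior, as in the following sketch with exact rational arithmetic.

```python
from fractions import Fraction as F
from math import comb

# Prior over the probability of heads: 5 fair, 4 biased, 3 two-headed coins
prior = {F(1, 2): F(5, 12), F(1, 3): F(4, 12), F(1, 1): F(3, 12)}
x, n = 2, 2  # observed heads and number of tosses

def likelihood(p):
    # Binomial probability of x heads in n tosses with heads probability p
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Posterior g(p | x) is proportional to prior(p) * likelihood(p)
joint = {p: prior[p] * likelihood(p) for p in prior}
total = sum(joint.values())
for p, v in joint.items():
    print(p, v / total)  # should print 45/169, 16/169, 108/169
```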
Compare the die-coin experiment with the box of coins experiment. In the first experiment, we toss a coin with a fixed probability of heads a random number of times. In the second experiment, we effectively toss a coin with a random probability of heads a fixed number of times.
Suppose that $P$ has probability density function $g(p) = 6 p (1 - p)$ for $p \in [0, 1]$. Given $P = p$, a coin with probability of heads $p$ is tossed 3 times. Let $X$ denote the number of heads.
1. Find the joint probability density function of $(P, X)$.
2. Find the probability density of function of $X$.
3. Find the conditional probability density of $P$ given $X = x$ for $x \in \{0, 1, 2, 3\}$. Graph these on the same axes.
Answer
1. $f(p, x) = 6 \binom{3}{x} p^{x+1}(1 - p)^{4-x}$ for $p \in [0, 1]$ and $x \in \{0, 1, 2, 3\}$
2. $h(0) = h(3) = \frac{1}{5}$, $h(1) = h(2) = \frac{3}{10}$.
3. $g(p \mid 0) = 30 p (1 - p)^4$, $g(p \mid 1) = 60 p^2 (1 - p)^3$, $g(p \mid 2) = 60 p^3 (1 - p)^2$, $g(p \mid 3) = 30 p^4 (1 - p)$, in each case for $p \in [0, 1]$
Compare the box of coins experiment with the last experiment. In the second experiment, we effectively choose a coin from a box with a continuous infinity of coin types. The prior distribution of $P$ and each of the posterior distributions of $P$ in part (c) are members of the family of beta distributions, one of the reasons for the importance of the beta family. Beta distributions are studied in more detail in the chapter on Special Distributions.
In the simulation of the beta coin experiment, set $a = b = 2$ and $n = 3$ to get the experiment studied in the previous exercise. For various true values of $p$, run the experiment in single step mode a few times and observe the posterior probability density function on each run.
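As a quick numerical check of the previous exercise (a sketch, independent of the app), each posterior density in part 3 should integrate to 1; a midpoint Riemann sum confirms this.

```python
# Posterior densities g(p | x) from part 3; the constants 30, 60, 60, 30
# are the normalizing constants of the corresponding beta densities.
def posterior(p, x):
    consts = {0: 30, 1: 60, 2: 60, 3: 30}
    return consts[x] * p ** (x + 1) * (1 - p) ** (4 - x)

m = 100_000
for x in range(4):
    s = sum(posterior((k + 0.5) / m, x) for k in range(m)) / m
    print(x, round(s, 6))  # each value should be very close to 1
```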
Simple Mixed Distributions
Recall that the exponential distribution with rate parameter $r \in (0, \infty)$ has probability density function $f$ given by $f(t) = r e^{-r t}$ for $t \in [0, \infty)$. The exponential distribution is often used to model random times, under certain assumptions. The exponential distribution is studied in more detail in the chapter on the Poisson Process. Recall also that for $a, \, b \in \R$ with $a \lt b$, the continuous uniform distribution on the interval $[a, b]$ has probability density function $f$ given by $f(x) = \frac{1}{b - a}$ for $x \in [a, b]$. This distribution governs a point selected at random from the interval.
Suppose that there are 5 light bulbs in a box, labeled 1 to 5. The lifetime of bulb $n$ (in months) has the exponential distribution with rate parameter $n$. A bulb is selected at random from the box and tested.
1. Find the probability that the selected bulb will last more than one month.
2. Given that the bulb lasts more than one month, find the conditional probability density function of the bulb number.
Answer
Let $N$ denote the bulb number and $T$ the lifetime.
1. $\P(T \gt 1) = 0.1156$
2. $n$ 1 2 3 4 5
$g(n \mid T \gt 1)$ 0.6364 0.2341 0.0861 0.0317 0.0117
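Both answers follow from the law of total probability and Bayes' theorem with the uniform prior on the bulb number, as the following minimal sketch shows.

```python
from math import exp

# P(T > 1 | N = n) = e^{-n} for an exponential lifetime with rate n
prior = 1 / 5
p_total = sum(prior * exp(-n) for n in range(1, 6))
print("P(T > 1) =", round(p_total, 4))  # about 0.1156

# Posterior PDF of the bulb number given T > 1 (Bayes' theorem)
for n in range(1, 6):
    print(n, round(prior * exp(-n) / p_total, 4))
```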
Suppose that $X$ is uniformly distributed on $\{1, 2, 3\}$, and given $X = x \in \{1, 2, 3\}$, random variable $Y$ is uniformly distributed on the interval $[0, x]$.
1. Find the joint probability density function of $(X, Y)$.
2. Find the probability density function of $Y$.
3. Find the conditional probability density function of $X$ given $Y = y$ for $y \in [0, 3]$.
Answer
1. $f(x, y) = \frac{1}{3 x}$ for $y \in [0, x]$ and $x \in \{1, 2, 3\}$.
2. $h(y) = \begin{cases} \frac{11}{18}, & 0 \le y \le 1 \\ \frac{5}{18}, & 1 \lt y \le 2 \\ \frac{2}{18}, & 2 \lt y \le 3 \end{cases}$
3. For $y \in [0, 1]$, $g(1 \mid y) = \frac{6}{11}$, $g(2 \mid y) = \frac{3}{11}$, $g(3 \mid y) = \frac{2}{11}$
For $y \in (1, 2]$, $g(1 \mid y) = 0$, $g(2 \mid y) = \frac{3}{5}$, $g(3 \mid y) = \frac{2}{5}$
For $y \in (2, 3]$, $g(1 \mid y) = g(2 \mid y) = 0$, $g(3 \mid y) = 1$.
The Poisson Distribution
Recall that the Poisson distribution with parameter $a \in (0, \infty)$ has probability density function $g(n) = e^{-a} \frac{a^n}{n!}$ for $n \in \N$. This distribution is widely used to model the number of random points in a region of time or space; the parameter $a$ is proportional to the size of the region. The Poisson distribution is named for Simeon Poisson, and is studied in more detail in the chapter on the Poisson Process.
Suppose that $N$ is the number of elementary particles emitted by a sample of radioactive material in a specified period of time, and has the Poisson distribution with parameter $a$. Each particle emitted, independently of the others, is detected by a counter with probability $p \in (0, 1)$ and missed with probability $1 - p$. Let $Y$ denote the number of particles detected by the counter.
1. For $n \in \N$, argue that the conditional distribution of $Y$ given $N = n$ is binomial with parameters $n$ and $p$.
2. Find the joint probability density function of $(N, Y)$.
3. Find the probability density function of $Y$.
4. For $y \in \N$, find the conditional probability density function of $N$ given $Y = y$.
Answer
1. Each particle, independently, is detected (success) with probability $p$. This is the very definition of Bernoulli trials, so given $N = n$, the number of detected particles has the binomial distribution with parameters $n$ and $p$
2. The PDF $f$ of $(N, Y)$ is defined by $f(n, y) = e^{-a} a^n \frac{p^y}{y!} \frac{(1-p)^{n-y}}{(n-y)!}, \quad n \in \N, \; y \in \{0, 1, \ldots, n\}$
3. The PDF $h$ of $Y$ is defined by $h(y) = e^{-p a} \frac{(p a)^y}{y!}, \quad y \in \N$ This is the Poisson distribution with parameter $p a$.
4. The conditional PDF of $N$ given $Y = y$ is defined by $g(n \mid y) = e^{-(1-p)a} \frac{[(1 - p) a]^{n-y}}{(n - y)!}, \quad n \in \{y, y+1, \ldots\}$ This is the Poisson distribution with parameter $(1 - p)a$, shifted to start at $y$.
The fact that $Y$ also has a Poisson distribution is an interesting and characteristic property of the distribution. This property is explored in more depth in the section on thinning the Poisson process.
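The thinning property is also easy to see in simulation. The following sketch, with arbitrarily chosen parameter values, compares the empirical PDF of $Y$ with the Poisson PDF with parameter $p a$; the Poisson sampler simply inverts the CDF and is meant only as a simple illustration.

```python
import math
import random

a, p = 4.0, 0.3  # arbitrary parameter values for this check
random.seed(5)

def poisson_sample(rate):
    # Sample from the Poisson distribution by inverting the CDF
    u = random.random()
    k, prob = 0, math.exp(-rate)
    cum = prob
    while u > cum:
        k += 1
        prob *= rate / k
        cum += prob
    return k

n = 100_000
counts = {}
for _ in range(n):
    emitted = poisson_sample(a)                                  # N
    detected = sum(random.random() < p for _ in range(emitted))  # Y
    counts[detected] = counts.get(detected, 0) + 1

# Empirical PDF of Y versus the Poisson PDF with parameter p * a
for y in range(6):
    emp = counts.get(y, 0) / n
    theo = math.exp(-p * a) * (p * a) ** y / math.factorial(y)
    print(y, round(emp, 4), round(theo, 4))
```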
Simple Continuous Distributions
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = x + y$ for $(x, y) \in (0, 1)^2$.
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, 1)$
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in (0, 1)$
3. Find $\P\left(\frac{1}{4} \le Y \le \frac{3}{4} \bigm| X = \frac{1}{3}\right)$.
4. Are $X$ and $Y$ independent?
Answer
1. For $y \in (0, 1)$, $g(x \mid y) = \frac{x + y}{y + 1/2}$ for $x \in (0, 1)$
2. For $x \in (0, 1)$, $h(y \mid x) = \frac{x + y}{x + 1/2}$ for $y \in (0, 1)$
3. $\frac{1}{2}$
4. $X$ and $Y$ are dependent.
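The answer in part 3 can be double-checked numerically by integrating the conditional density $h\left(y \bigm| \frac{1}{3}\right) = \left(\frac{1}{3} + y\right) \big/ \left(\frac{1}{3} + \frac{1}{2}\right)$ over $\left[\frac{1}{4}, \frac{3}{4}\right]$, as in this short sketch.

```python
# Midpoint Riemann sum for the integral of h(y | 1/3) over [1/4, 3/4]
x = 1 / 3
m = 100_000
width = (3 / 4 - 1 / 4) / m
total = sum((x + (1 / 4 + (k + 0.5) * width)) / (x + 1 / 2) * width
            for k in range(m))
print(round(total, 6))  # should be very close to 1/2
```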
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 2 (x + y)$ for $0 \lt x \lt y \lt 1$.
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, 1)$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in (0, 1)$.
3. Find $\P\left(Y \ge \frac{3}{4} \bigm| X = \frac{1}{2}\right)$.
4. Are $X$ and $Y$ independent?
Answer
1. For $y \in (0, 1)$, $g(x \mid y) = \frac{2(x + y)}{3 y^2}$ for $x \in (0, y)$.
2. For $x \in (0, 1)$, $h(y \mid x) = \frac{2(x + y)}{(1 + 3 x)(1 - x)}$ for $y \in (x, 1)$.
3. $\frac{11}{20}$
4. $X$ and $Y$ are dependent.
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 15 x^2 y$ for $0 \lt x \lt y \lt 1$.
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, 1)$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in (0, 1)$.
3. Find $\P\left(X \le \frac{1}{4} \bigm| Y = \frac{1}{3}\right)$.
4. Are $X$ and $Y$ independent?
Answer
1. For $y \in (0, 1)$, $g(x \mid y) = \frac{3 x^2}{y^3}$ for $x \in (0, y)$.
2. For $x \in (0, 1)$, $h(y \mid x) = \frac{2 y}{1 - x^2}$ for $y \in (x, 1)$.
3. $\frac{27}{64}$
4. $X$ and $Y$ are dependent.
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 6 x^2 y$ for $0 \lt x \lt 1$ and $0 \lt y \lt 1$.
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, 1)$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in (0, 1)$.
3. Are $X$ and $Y$ independent?
Answer
1. For $y \in (0, 1)$, $g(x \mid y) = 3 x^2$ for $x \in (0, 1)$.
2. For $x \in (0, 1)$, $h(y \mid x) = 2 y$ for $y \in (0, 1)$.
3. $X$ and $Y$ are independent.
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 2 e^{-x} e^{-y}$ for $0 \lt x \lt y \lt \infty$.
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, \infty)$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in (0, \infty)$.
3. Are $X$ and $Y$ independent?
Answer
1. For $y \in (0, \infty)$, $g(x \mid y) = \frac{e^{-x}}{1 - e^{-y}}$ for $x \in (0, y)$.
2. For $x \in (0, \infty)$, $h(y \mid x) = e^{x-y}$ for $y \in (x, \infty)$.
3. $X$ and $Y$ are dependent.
Suppose that $X$ is uniformly distributed on the interval $(0, 1)$, and that given $X = x$, $Y$ is uniformly distributed on the interval $(0, x)$.
1. Find the joint probability density function of $(X, Y)$.
2. Find the probability density function of $Y$.
3. Find the conditional probability density function of $X$ given $Y = y$ for $y \in (0, 1)$.
4. Are $X$ and $Y$ independent?
Answer
1. $f(x, y) = \frac{1}{x}$ for $0 \lt y \lt x \lt 1$
2. $h(y) = -\ln y$ for $y \in (0, 1)$
3. For $y \in (0, 1)$, $g(x \mid y) = -\frac{1}{x \ln y}$ for $x \in (y, 1)$.
4. $X$ and $Y$ are dependent.
Suppose that $X$ has probability density function $g$ defined by $g(x) = 3 x^2$ for $x \in (0, 1)$. The conditional probability density function of $Y$ given $X = x$ is $h(y \mid x) = \frac{3 y^2}{x^3}$ for $y \in (0, x)$.
1. Find the joint probability density function of $(X, Y)$.
2. Find the probability density function of $Y$.
3. Find the conditional probability density function of $X$ given $Y = y$.
4. Are $X$ and $Y$ independent?
Answer
1. $f(x, y) = \frac{9 y^2}{x}$ for $0 \lt y \lt x \lt 1$.
2. $h(y) = -9 y^2 \ln y$ for $y \in (0, 1)$.
3. For $y \in (0, 1)$, $g(x \mid y) = - \frac{1}{x \ln y}$ for $x \in (y, 1)$.
4. $X$ and $Y$ are dependent.
Multivariate Uniform Distributions
Multivariate uniform distributions give a geometric interpretation of some of the concepts in this section.
Recall that for $n \in \N_+$, the standard measure $\lambda_n$ on $\R^n$ is given by $\lambda_n(A) = \int_A 1 \, dx, \quad A \subseteq \R^n$ In particular, $\lambda_1(A)$ is the length of $A \subseteq \R$, $\lambda_2(A)$ is the area of $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A \subseteq \R^3$.
Details
Technically, $\lambda_n$ is Lebesgue measure defined on the $\sigma$-algebra of measurable subsets of $\R^n$. In the discussion below, we assume that all sets are measurable. The integral representation is valid for the sets that occur in typical applications.
Suppose now that $X$ takes values in $\R^j$, $Y$ takes values in $\R^k$, and that $(X, Y)$ is uniformly distributed on a set $R \subseteq \R^{j+k}$ with $0 \lt \lambda_{j+k}(R) \lt \infty$. The joint probability density function $f$ of $(X, Y)$ is then given by $f(x, y) = 1 \big/ \lambda_{j+k}(R)$ for $(x, y) \in R$. Now let $S$ and $T$ be the projections of $R$ onto $\R^j$ and $\R^k$ respectively, defined as follows: $S = \left\{x \in \R^j: (x, y) \in R \text{ for some } y \in \R^k\right\}, \quad T = \left\{y \in \R^k: (x, y) \in R \text{ for some } x \in \R^j\right\}$ Note that $R \subseteq S \times T$. Next we denote the cross sections at $x \in S$ and at $y \in T$, respectively, by $T_x = \{t \in T: (x, t) \in R\}, \quad S_y = \{s \in S: (s, y) \in R\}$
In the last section on Joint Distributions, we saw that even though $(X, Y)$ is uniformly distributed, the marginal distributions of $X$ and $Y$ are not uniform in general. However, as the next theorem shows, the conditional distributions are always uniform.
Suppose that $(X, Y)$ is uniformly distributed on $R$. Then
1. The conditional distribution of $Y$ given $X = x$ is uniform on $T_x$ for each $x \in S$.
2. The conditional distribution of $X$ given $Y = y$ is uniform on $S_y$ for each $y \in T$.
Proof
The results are symmetric, so we will prove (a). Recall that $X$ has PDF $g$ given by $g(x) = \int_{T_x} f(x, y) \, dy = \int_{T_x} \frac{1}{\lambda_{j+k}(R)} \, dy = \frac{\lambda_k(T_x)}{\lambda_{j+k}(R)}, \quad x \in S$ Hence for $x \in S$, the conditional PDF of $Y$ given $X = x$ is $h(y \mid x) = \frac{f(x, y)}{g(x)} = \frac{1}{\lambda_k(T_x)}, \quad y \in T_x$ and this is the PDF of the uniform distribution on $T_x$.
Find the conditional density of each variable given a value of the other, and determine if the variables are independent, in each of the following cases:
1. $(X, Y)$ is uniformly distributed on the square $R = (-6, 6)^2$.
2. $(X, Y)$ is uniformly distributed on the triangle $R = \{(x, y) \in \R^2: -6 \lt y \lt x \lt 6\}$.
3. $(X, Y)$ is uniformly distributed on the circle $R = \{(x, y) \in \R^2: x^2 + y^2 \lt 36\}$.
Answer
The conditional PDF of $X$ given $Y = y$ is denoted $x \mapsto g(x \mid y)$. The conditional PDF of $Y$ given $X = x$ is denoted $y \mapsto h(y \mid x)$.
• For $y \in (-6, 6)$, $g(x \mid y) = \frac{1}{12}$ for $x \in (-6, 6)$.
• For $x \in (-6, 6)$, $h(y \mid x) = \frac{1}{12}$ for $y \in (-6, 6)$.
• $X$, $Y$ are independent.
• For $y \in (-6, 6)$, $g(x \mid y) = \frac{1}{6 - y}$ for $x \in (y, 6)$
• For $x \in (-6, 6)$, $h(y \mid x) = \frac{1}{x + 6}$ for $y \in (-6, x)$
• $X$, $Y$ are dependent.
• For $y \in (-6, 6)$, $g(x \mid y) = \frac{1}{2 \sqrt{36 - y^2}}$ for $x \in \left(-\sqrt{36 - y^2}, \sqrt{36 - y^2}\right)$
• For $x \in (-6, 6)$, $h(y \mid x) = \frac{1}{2 \sqrt{36 - x^2}}$ for $y \in \left(-\sqrt{36 - x^2}, \sqrt{36 - x^2}\right)$
• $X$, $Y$ are dependent.
In the bivariate uniform experiment, run the simulation 1000 times in each of the following cases. Watch the points in the scatter plot and the graphs of the marginal distributions.
1. square
2. triangle
3. circle
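If the simulation app is not available, the triangle case can be mimicked with a short Python sketch (our stand-in; the seed, slice width, and sample size are arbitrary). Rejection sampling produces uniform points on the triangle; the marginal distribution of $X$ is visibly non-uniform, while a thin slice near a fixed value of $X$ shows an approximately uniform conditional distribution for $Y$.
```python
import numpy as np

rng = np.random.default_rng(7)

# Rejection sampling: keep uniform points on the square that land in
# the triangle -6 < y < x < 6
pts = rng.uniform(-6, 6, size=(200_000, 2))
x, y = pts[:, 0], pts[:, 1]
x, y = x[y < x], y[y < x]

# The marginal of X is not uniform: its density (x + 6)/72 increases with x
print("P(X < 0):", np.mean(x < 0), "(true value 1/4, not 1/2)")

# But given X near 3, Y is approximately uniform on (-6, 3)
sel = np.abs(x - 3) < 0.05
print("conditional mean of Y:", y[sel].mean(), "(uniform on (-6,3) gives -1.5)")
```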
Suppose that $(X, Y, Z)$ is uniformly distributed on $R = \{(x, y, z) \in \R^3: 0 \lt x \lt y \lt z \lt 1\}$.
1. Find the conditional density of each pair of variables given a value of the third variable.
2. Find the conditional density of each variable given values of the other two.
Answer
The subscripts 1, 2, and 3 correspond to the variables $X$, $Y$, and $Z$, respectively. Note that the conditions on $(x, y, z)$ in each case are those in the definition of the domain $R$. They are stated differently to emphasize the domain of the conditional PDF as opposed to the given values, which function as parameters. Note also that each distribution is uniform on the appropriate region.
1. For $0 \lt z \lt 1$, $f_{1, 2 \mid 3}(x, y \mid z) = \frac{2}{z^2}$ for $0 \lt x \lt y \lt z$
2. For $0 \lt y \lt 1$, $f_{1, 3 \mid 2}(x, z \mid y) = \frac{1}{y (1 - y)}$ for $0 \lt x \lt y$ and $y \lt z \lt 1$
3. For $0 \lt x \lt 1$, $f_{2, 3 \mid 1}(y, z \mid x) = \frac{2}{(1 - x)^2}$ for $x \lt y \lt z \lt 1$
4. For $0 \lt y \lt z \lt 1$, $f_{1 \mid 2, 3}(x \mid y, z) = \frac{1}{y}$ for $0 \lt x \lt y$
5. For $0 \lt x \lt z \lt 1$, $f_{2 \mid 1, 3}(y \mid x, z) = \frac{1}{z - x}$ for $x \lt y \lt z$
6. For $0 \lt x \lt y \lt 1$, $f_{3 \mid 1, 2}(z \mid x, y) = \frac{1}{1 - y}$ for $y \lt z \lt 1$
The Multivariate Hypergeometric Distribution
Recall the discussion of the (multivariate) hypergeometric distribution given in the last section on joint distributions. As in that discussion, suppose that a population consists of $m$ objects, and that each object is one of four types. There are $a$ objects of type 1, $b$ objects of type 2, and $c$ objects of type 3, and $m - a - b - c$ objects of type 0. We sample $n$ objects from the population at random, and without replacement. The parameters $a$, $b$, $c$, and $n$ are nonnegative integers with $a + b + c \le m$ and $n \le m$. Denote the number of type 1, 2, and 3 objects in the sample by $X$, $Y$, and $Z$, respectively. Hence, the number of type 0 objects in the sample is $n - X - Y - Z$. In the following exercises, $x, \, y, \, z \in \N$.
Suppose that $z \le c$ and $n - m + c \le z \le n$. Then the conditional distribution of $(X, Y)$ given $Z = z$ is hypergeometric, and has the probability density function defined by $g(x, y \mid z) = \frac{\binom{a}{x} \binom{b}{y} \binom{m - a - b - c}{n - x - y - z}}{\binom{m - c}{n - z}}, \quad x + y \le n - z$
Proof
This result can be proved analytically, but a combinatorial argument is better. The essence of the argument is that we are selecting a random sample of size $n - z$ without replacement from a population of size $m - c$, with $a$ objects of type 1, $b$ objects of type 2, and $m - a - b - c$ objects of type 0. The conditions on $z$ ensure that $\P(Z = z) \gt 0$, or equivalently, that the new parameters make sense.
Suppose that $y \le b$, $z \le c$, and $n - m + b \le y + z \le n$. Then the conditional distribution of $X$ given $Y = y$ and $Z = z$ is hypergeometric, and has the probability density function defined by $g(x \mid y, z) = \frac{\binom{a}{x} \binom{m - a - b - c}{n - x - y - z}}{\binom{m - b - c}{n - y - z}}, \quad x \le n - y - z$
Proof
Again, this result can be proved analytically, but a combinatorial argument is better. The essence of the argument is that we are selecting a random sample of size $n - y - z$ from a population of size $m - b - c$, with $a$ objects of type 1 and $m - a - b - c$ objects of type 0. The conditions on $y$ and $z$ ensure that $\P(Y = y, Z = z) \gt 0$, or equivalently that the new parameters make sense.
These results generalize in a completely straightforward way to a population with any number of types. In brief, if a random vector has a hypergeometric distribution, then the conditional distribution of some of the variables, given values of the other variables, is also hypergeometric. Moreover, it is clearly not necessary to remember the hideous formulas in the previous two theorems. You just need to recognize the problem as sampling without replacement from a multi-type population, and then identify the number of objects of each type and the sample size. The hypergeometric distribution and the multivariate hypergeometric distribution are studied in more detail in the chapter on Finite Sampling Models.
In a population of 150 voters, 60 are democrats, 50 are republicans, and 40 are independents. A sample of 15 voters is selected at random, without replacement. Let $X$ denote the number of democrats in the sample and $Y$ the number of republicans in the sample. Give the probability density function of each of the following:
1. $(X, Y)$
2. $X$
3. $Y$ given $X = 5$
Answer
1. $f(x, y) = \frac{1}{\binom{150}{15}} \binom{60}{x} \binom{50}{y} \binom{40}{15 - x - y}$ for $x + y \le 15$
2. $g(x) = \frac{1}{\binom{150}{15}} \binom{60}{x} \binom{90}{15 - x}$ for $x \le 15$
3. $h(y \mid 5) = \frac{1}{\binom{90}{10}} \binom{50}{y} \binom{40}{10 - y}$ for $y \le 10$
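Part (c) can also be checked by brute-force simulation. The Python sketch below (arbitrary seed and number of repetitions) repeatedly draws samples without replacement, conditions on exactly 5 democrats, and compares the empirical distribution of the republican count with the hypergeometric PDF in part (c).
```python
import random
from math import comb
from collections import Counter

random.seed(0)

# 60 democrats (type 1), 50 republicans (type 2), 40 independents (type 0)
population = [1] * 60 + [2] * 50 + [0] * 40
counts, total = Counter(), 0

for _ in range(50_000):
    sample = random.sample(population, 15)
    if sample.count(1) == 5:                 # condition on X = 5
        counts[sample.count(2)] += 1
        total += 1

# Compare with h(y | 5) = C(50,y) C(40,10-y) / C(90,10)
for y in range(2, 8):
    theory = comb(50, y) * comb(40, 10 - y) / comb(90, 10)
    print(y, round(counts[y] / total, 4), round(theory, 4))
```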
Recall that a bridge hand consists of 13 cards selected at random and without replacement from a standard deck of 52 cards. Let $X$, $Y$, and $Z$ denote the number of spades, hearts, and diamonds, respectively, in the hand. Find the probability density function of each of the following:
1. $(X, Y, Z)$
2. $(X, Y)$
3. $X$
4. $(X, Y)$ given $Z = 3$
5. $X$ given $Y = 3$ and $Z = 2$
Answer
1. $f(x, y, z) = \frac{1}{\binom{52}{13}} \binom{13}{x} \binom{13}{y} \binom{13}{z} \binom{13}{13 - x - y - z}$ for $x + y + z \le 13$.
2. $g(x, y) = \frac{1}{\binom{52}{13}} \binom{13}{x} \binom{13}{y} \binom{26}{13 - x - y}$ for $x + y \le 13$
3. $h(x) = \frac{1}{\binom{52}{13}} \binom{13}{x} \binom{39}{13 - x}$ for $x \le 13$
4. $g(x, y \mid 3) = \frac{1}{\binom{39}{10}} \binom{13}{x} \binom{13}{y} \binom{13}{10 - x - y}$ for $x + y \le 10$
5. $h(x \mid 3, 2) = \frac{1}{\binom{26}{8}} \binom{13}{x} \binom{13}{8 - x}$ for $x \le 8$
Multinomial Trials
Recall the discussion of multinomial trials in the last section on joint distributions. As in that discussion, suppose that we have a sequence of $n$ independent trials, each with 4 possible outcomes. On each trial, outcome 1 occurs with probability $p$, outcome 2 with probability $q$, outcome 3 with probability $r$, and outcome 0 with probability $1 - p - q - r$. The parameters $p, \, q, \, r \in (0, 1)$, with $p + q + r \lt 1$, and $n \in \N_+$. Denote the number of times that outcome 1, outcome 2, and outcome 3 occurs in the $n$ trials by $X$, $Y$, and $Z$ respectively. Of course, the number of times that outcome 0 occurs is $n - X - Y - Z$. In the following exercises, $x, \, y, \, z \in \N$.
For $z \le n$, the conditional distribution of $(X, Y)$ given $Z = z$ is also multinomial, with probability density function
$g(x, y \mid z) = \binom{n - z}{x, \, y} \left(\frac{p}{1 - r}\right)^x \left(\frac{q}{1 - r}\right)^y \left(1 - \frac{p}{1 - r} - \frac{q}{1 - r}\right)^{n - x - y - z}, \quad x + y \le n - z$
Proof
This result can be proved analytically, but a probability argument is better. First, let $I$ denote the outcome of a generic trial. Then $\P(I = 1 \mid I \ne 3) = \P(I = 1) / \P(I \ne 3) = p \big/ (1 - r)$. Similarly, $\P(I = 2 \mid I \ne 3) = q \big/ (1 - r)$ and $\P(I = 0 \mid I \ne 3) = (1 - p - q - r) \big/ (1 - r)$. Now, the essence of the argument is that effectively, we have $n - z$ independent trials, and on each trial, outcome 1 occurs with probability $p \big/ (1 - r)$ and outcome 2 with probability $q \big/ (1 - r)$.
For $y + z \le n$, the conditional distribution of $X$ given $Y = y$ and $Z = z$ is binomial, with the probability density function
$h(x \mid y, z) = \binom{n - y - z}{x} \left(\frac{p}{1 - q - r}\right)^x \left(1 - \frac{p}{1 - q - r}\right)^{n - x - y - z},\quad x \le n - y - z$
Proof
Again, this result can be proved analytically, but a probability argument is better. As before, let $I$ denote the outcome of a generic trial. Then $\P(I = 1 \mid I \notin \{2, 3\}) = p \big/ (1 - q - r)$ and $\P(I = 0 \mid I \notin \{2, 3\}) = (1 - p - q - r) \big/ (1 - q - r)$. Thus, the essence of the argument is that effectively, we have $n - y - z$ independent trials, and on each trial, outcome 1 occurs with probability $p \big/ (1 - q - r)$.
These results generalize in a completely straightforward way to multinomial trials with any number of trial outcomes. In brief, if a random vector has a multinomial distribution, then the conditional distribution of some of the variables, given values of the other variables, is also multinomial. Moreover, it is clearly not necessary to remember the specific formulas in the previous two exercises. You just need to recognize a problem as one involving independent trials, and then identify the probability of each outcome and the number of trials. The binomial distribution and the multinomial distribution are studied in more detail in the chapter on Bernoulli Trials.
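A simulation sketch in Python makes the conditioning argument concrete (the parameters $n = 30$, $p = 0.2$, $q = 0.3$, $r = 0.1$ and the conditioning event are arbitrary illustrations): conditioning the simulated multinomial counts on $Y$ and $Z$ should reproduce the binomial distribution in the last theorem.
```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, p, q, r = 30, 0.2, 0.3, 0.1          # illustrative parameters
counts = rng.multinomial(n, [p, q, r, 1 - p - q - r], size=500_000)
x, y, z = counts[:, 0], counts[:, 1], counts[:, 2]

# Condition on Y = 9 and Z = 3; theory: X ~ binomial(18, p / (1 - q - r))
sel = (y == 9) & (z == 3)
m, pp = n - 9 - 3, p / (1 - q - r)       # 18 trials, success probability 1/3
for k in range(4, 9):
    emp = np.mean(x[sel] == k)
    theory = comb(m, k) * pp**k * (1 - pp)**(m - k)
    print(k, round(emp, 4), round(theory, 4))
```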
Suppose that peaches from an orchard are classified as small, medium, or large. Each peach, independently of the others, is small with probability $\frac{3}{10}$, medium with probability $\frac{1}{2}$, and large with probability $\frac{1}{5}$. In a sample of 20 peaches from the orchard, let $X$ denote the number of small peaches and $Y$ the number of medium peaches. Give the probability density function of each of the following:
1. $(X, Y)$
2. $X$
3. $Y$ given $X = 5$
Answer
1. $f(x, y) = \binom{20}{x, \, y} \left(\frac{3}{10}\right)^x \left(\frac{1}{2}\right)^y \left(\frac{1}{5}\right)^{20 - x - y}$ for $x + y \le 20$
2. $g(x) = \binom{20}{x} \left(\frac{3}{10}\right)^x \left(\frac{7}{10}\right)^{20 - x}$ for $x \le 20$
3. $h(y \mid 5) = \binom{15}{y} \left(\frac{5}{7}\right)^y \left(\frac{2}{7}\right)^{15 - y}$ for $y \le 15$
For a certain crooked, 4-sided die, face 1 has probability $\frac{2}{5}$, face 2 has probability $\frac{3}{10}$, face 3 has probability $\frac{1}{5}$, and face 4 has probability $\frac{1}{10}$. Suppose that the die is thrown 50 times. Let $X$, $Y$, and $Z$ denote the number of times that scores 1, 2, and 3 occur, respectively. Find the probability density function of each of the following:
1. $(X, Y, Z)$
2. $(X, Y)$
3. $X$
4. $(X, Y)$ given $Z = 5$
5. $X$ given $Y = 10$ and $Z = 5$
Answer
1. $f(x, y, z) = \binom{50}{x, \, y, \, z} \left(\frac{2}{5}\right)^x \left(\frac{3}{10}\right)^y \left(\frac{1}{5}\right)^z \left(\frac{1}{10}\right)^{50 - x - y - z}$ for $x + y + z \le 50$
2. $g(x, y) = \binom{50}{x, \, y} \left(\frac{2}{5}\right)^x \left(\frac{3}{10}\right)^y \left(\frac{3}{10}\right)^{50 - x - y}$ for $x + y \le 50$
3. $h(x) = \binom{50}{x} \left(\frac{2}{5}\right)^x \left(\frac{3}{5}\right)^{50 - x}$ for $x \le 50$
4. $g(x, y \mid 5) = \binom{45}{x, \, y} \left(\frac{1}{2}\right)^x \left(\frac{3}{8}\right)^y \left(\frac{1}{8}\right)^{45 - x - y}$ for $x + y \le 45$
5. $h(x \mid 10, 5) = \binom{35}{x} \left(\frac{4}{5}\right)^x \left(\frac{1}{5}\right)^{35 - x}$ for $x \le 35$
Bivariate Normal Distributions
The joint distributions in the next two exercises are examples of bivariate normal distributions. The conditional distributions are also normal, an important property of the bivariate normal distribution. In general, normal distributions are widely used to model physical measurements subject to small, random errors. The bivariate normal distribution is studied in more detail in the chapter on Special Distributions.
Suppose that $(X, Y)$ has the bivariate normal distribution with probability density function $f$ defined by $f(x, y) = \frac{1}{12 \pi} \exp\left[-\left(\frac{x^2}{8} + \frac{y^2}{18}\right)\right], \quad (x, y) \in \R^2$
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in \R$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in \R$.
3. Are $X$ and $Y$ independent?
Answer
1. For $y \in \R$, $g(x \mid y) = \frac{1}{2 \sqrt{2 \pi}} e^{-x^2 / 8}$ for $x \in \R$. This is the PDF of the normal distribution with mean 0 and variance 4.
2. For $x \in \R$, $h(y \mid x) = \frac{1}{3 \sqrt{2 \pi}} e^{-y^2 / 18}$ for $y \in \R$. This is the PDF of the normal distribution with mean 0 and variance 9.
3. $X$ and $Y$ are independent.
Suppose that $(X, Y)$ has the bivariate normal distribution with probability density function $f$ defined by $f(x, y) = \frac{1}{\sqrt{3} \pi} \exp\left[-\frac{2}{3} (x^2 - x y + y^2)\right], \quad (x, y) \in \R^2$
1. Find the conditional probability density function of $X$ given $Y = y$ for $y \in \R$.
2. Find the conditional probability density function of $Y$ given $X = x$ for $x \in \R$.
3. Are $X$ and $Y$ independent?
Answer
1. For $y \in \R$, $g(x \mid y) = \sqrt{\frac{2}{3 \pi}} e^{-\frac{2}{3} (x - y / 2)^2}$ for $x \in \R$. This is the PDF of the normal distribution with mean $y/2$ and variance $3/4$.
2. For $x \in \R$, $h(y \mid x) = \sqrt{\frac{2}{3 \pi}} e^{-\frac{2}{3} (y - x / 2)^2}$ for $y \in \R$. This is the PDF of the normal distribution with mean $x/2$ and variance $3/4$.
3. $X$ and $Y$ are dependent.
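For a numerical check, note that the density in the last exercise is that of the standard bivariate normal distribution with correlation $\frac{1}{2}$ (an identification we make here only for the purpose of the sketch; the bivariate normal family is treated in the chapter on Special Distributions). Simulating from this distribution and conditioning on a thin slice of $Y$ values recovers the conditional mean $y / 2$ and variance $\frac{3}{4}$.
```python
import numpy as np

rng = np.random.default_rng(5)
cov = [[1.0, 0.5], [0.5, 1.0]]     # covariance matrix of (X, Y) for this f
sample = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x, y = sample[:, 0], sample[:, 1]

# Theory: given Y = y, X is normal with mean y/2 and variance 3/4
sel = np.abs(y - 1.0) < 0.02       # condition on Y near 1
print("conditional mean of X:", x[sel].mean(), "(theory 0.5)")
print("conditional var of X :", x[sel].var(), "(theory 0.75)")
```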
Mixtures of Distributions
With our usual sets $S$ and $T$, as above, suppose that $P_x$ is a probability measure on $T$ for each $x \in S$. Suppose also that $g$ is a probability density function on $S$. We can obtain a new probability measure on $T$ by averaging (or mixing) the given distributions according to $g$.
First suppose that $g$ is the probability density function of a discrete distribution on the countable set $S$. Then the function $\P$ defined below is a probability measure on $T$: $\P(B) = \sum_{x \in S} g(x) P_x(B), \quad B \subseteq T$
Proof
Clearly $\P(B) \ge 0$ for $B \subseteq T$ and $\P(T) = \sum_{x \in S} g(x) \, 1 = 1$. Suppose that $\{B_i: i \in I\}$ is a countable, disjoint collection of subsets of $T$. Then $\P\left(\bigcup_{i \in I} B_i\right) = \sum_{x \in S} g(x) P_x\left(\bigcup_{i \in I} B_i\right) = \sum_{x \in S} g(x) \sum_{i \in I} P_x(B_i) = \sum_{i \in I} \sum_{x \in S} g(x) P_x(B_i) = \sum_{i \in I} \P(B_i)$ Reversing the order of summation is justified since the terms are nonnegative.
In the setting of the previous theorem, suppose that $P_x$ has probability density function $h_x$ for each $x \in S$. Then $\P$ has probability density function $h$ given by $h(y) = \sum_{x \in S} g(x) h_x(y), \quad y \in T$
Proof
As usual, we will consider the discrete and continuous cases for the distributions on $T$ separately.
1. Suppose that $T$ is countable so that $P_x$ is a discrete probability measure for each $x \in S$. By definition, for each $x \in S$, $h_x(y) = P_x(\{y\})$ for $y \in T$. So the probability density function $h$ of $\P$ is given by $h(y) = \P(\{y\}) = \sum_{x \in S} g(x) P_x(\{y\}) = \sum_{x \in S} g(x) h_x(y), \quad y \in T$
2. Suppose now that $P_x$ has a continuous distribution on $T \subseteq \R^k$, with PDF $h_x$ for each $x \in S$. For $B \subseteq T$, $\P(B) = \sum_{x \in S} g(x) P_x(B) = \sum_{x \in S} g(x) \int_B h_x(y) \, dy = \int_B \sum_{x \in S} g(x) h_x(y) \, dy = \int_B h(y) \, dy$ So by definition, $h$ is the PDF of $\P$. The interchange of the sum and the integral is justified because the terms are nonnegative. Technically, we also need $y \mapsto h_x(y)$ to be measurable for $x \in S$ so that the integral makes sense.
Conversely, given a probability density function $g$ on $S$ and a probability density function $h_x$ on $T$ for each $x \in S$, the function $h$ defined in the previous theorem is a probability density function on $T$.
Suppose now that $g$ is the probability density function of a continuous distribution on $S \subseteq \R^j$. Then the function $\P$ defined below is a probability measure on $T$: $\P(B) = \int_S g(x) P_x(B) dx, \quad B \subseteq T$
Proof
The proof is just like the proof of the discrete case above, with integrals over $S$ replacing the sums over $S$. Clearly $\P(B) \ge 0$ for $B \subseteq T$ and $\P(T) = \int_S g(x) P_x(T) \, dx = \int_S g(x) \, dx = 1$. Suppose that $\{B_i: i \in I\}$ is a countable, disjoint collection of subsets of $T$. Then $\P\left(\bigcup_{i \in I} B_i\right) = \int_S g(x) P_x\left(\bigcup_{i \in I} B_i\right) \, dx = \int_S g(x) \sum_{i \in I} P_x(B_i) \, dx = \sum_{i \in I} \int_S g(x) P_x(B_i) \, dx = \sum_{i \in I} \P(B_i)$ Reversing the integral and the sum is justified since the terms are nonnegative. Technically, we need the subsets of $T$ and the mapping $x \mapsto P_x(B)$ to be measurable.
In the setting of the previous theorem, suppose that $P_x$ is a discrete (respectively continuous) distribution with probability density function $h_x$ for each $x \in S$. Then $\P$ is also discrete (respectively continuous) with probability density function $h$ given by $h(y) = \int_S g(x) h_x(y) dx, \quad y \in T$
Proof
The proof is just like the proof of the discrete case above, with integrals over $S$ replacing the sums over $S$.
1. Suppose that $T$ is countable so that $P_x$ is a discrete probability measure for each $x \in S$. By definition, for each $x \in S$, $h_x(y) = P_x(\{y\})$ for $y \in T$. So the probability density function $h$ of $\P$ is given by $h(y) = \P(\{y\}) = \int_S g(x) P_x(\{y\}) \, dx = \int_S g(x) h_x(y) \, dx, \quad y \in T$ Technically, we need $x \mapsto P_x(\{y\}) = h_x(y)$ to be measurable for $y \in T$.
2. Suppose now that $P_x$ has a continuous distribution on $T \subseteq \R^k$, with PDF $h_x$ for each $x \in S$. For $B \subseteq T$, $\P(B) = \int_S g(x) P_x(B) \, dx = \int_S g(x) \int_B h_x(y) \, dy \, dx = \int_B \int_S g(x) h_x(y) \, dx \, dy = \int_B h(y) \, dy$ So by definition, $h$ is the PDF of $\P$. The interchange of the two integrals is justified because the functions are nonnegative. Technically, we also need $(x, y) \mapsto h_x(y)$ to be measurable so that the integral makes sense.
In both cases, the distribution $\P$ is said to be a mixture of the set of distributions $\{P_x: x \in S\}$, with mixing density $g$.
One can have a mixture of distributions, without having random variables defined on a common probability space. However, mixtures are intimately related to conditional distributions. Returning to our usual setup, suppose that $X$ and $Y$ are random variables for an experiment, taking values in $S$ and $T$ respectively, and that $X$ has probability density function $g$. The following result is simply a restatement of the law of total probability.
The distribution of $Y$ is a mixture of the conditional distributions of $Y$ given $X = x$, over $x \in S$, with mixing density $g$.
Proof
Only the notation is different.
1. If $X$ has a discrete distribution on the countable set $S$ then $\P(Y \in B) = \sum_{x \in S} g(x) \P(Y \in B \mid X = x), \quad B \subseteq T$
2. If $X$ has a continuous distribution on $S \subseteq \R^j$ then $\P(Y \in B) = \int_S g(x) \P(Y \in B \mid X = x) \, dx, \quad B \subseteq T$
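The theorem also suggests a two-stage simulation scheme for a mixture: draw $x$ from the mixing density, then draw $y$ from $P_x$. The Python sketch below (arbitrary seed and sample size) does this for a pair from an earlier exercise, where $X$ is uniform on $(0, 1)$ and, given $X = x$, $Y$ is uniform on $(0, x)$, so that the mixture density of $Y$ is $h(y) = -\ln y$ with distribution function $H(y) = y - y \ln y$.
```python
import numpy as np

rng = np.random.default_rng(9)

# Stage 1: X uniform on (0, 1); stage 2: Y | X = x uniform on (0, x)
x = rng.uniform(0, 1, size=500_000)
y = rng.uniform(0, x)              # elementwise uniform on (0, x)

# Compare the empirical CDF of Y with H(y) = y - y ln(y)
for t in (0.1, 0.3, 0.5, 0.9):
    print(t, round(np.mean(y <= t), 4), round(t - t * np.log(t), 4))
```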
Finally we note that a mixed distribution (with discrete and continuous parts) really is a mixture, in the sense of this discussion.
Suppose that $\P$ is a mixed distribution on a set $T$. Then $\P$ is a mixture of a discrete distribution and a continuous distribution.
Proof
Recall that mixed distribution means that $T$ can be partitioned into a countable set $D$ and a set $C \subseteq \R^n$ for some $n \in \N_+$ with the properties that $\P(\{x\}) \gt 0$ for $x \in D$, $\P(\{x\}) = 0$ for $x \in C$, and $p = \P(D) \in (0, 1)$. Let $S = \{d, c\}$ and define the PDF $g$ on $S$ by $g(d) = p$ and $g(c) = 1 - p$. Recall that the conditional distribution $P_d$ defined by $P_d(A) = \P(A \cap D) / \P(D)$ for $A \subseteq T$ is a discrete distribution on $T$ and similarly the conditional distribution $P_c$ defined by $P_c(A) = \P(A \cap C) / \P(C)$ for $A \subseteq T$ is a continuous distribution on $T$. Clearly with this setup, $\P(A) = g(c) P_c(A) + g(d) P_d(A), \quad A \subseteq T$
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ is the collection of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. In this section, we will study two types of functions that can be used to specify the distribution of a real-valued random variable.
Distribution Functions
Definition
Suppose that $X$ is a random variable with values in $\R$. The (cumulative) distribution function of $X$ is the function $F: \R \to [0, 1]$ defined by $F(x) = \P(X \le x), \quad x \in \R$
The distribution function is important because it makes sense for any type of random variable, regardless of whether the distribution is discrete, continuous, or even mixed, and because it completely determines the distribution of $X$. In the picture below, the light shading is intended to represent a continuous distribution of probability, while the darker dots represent points of positive probability; $F(x)$ is the total probability mass to the left of (and including) $x$.
Basic Properties
A few basic properties completely characterize distribution functions. Notationally, it will be helpful to abbreviate the limits of $F$ from the left and right at $x \in \R$, and at $\infty$ and $-\infty$ as follows: $F(x^+) = \lim_{t \downarrow x} F(t), \; F(x^-) = \lim_{t \uparrow x} F(t), \; F(\infty) = \lim_{t \to \infty} F(t), \; F(-\infty) = \lim_{t \to -\infty} F(t)$
Suppose that $F$ is the distribution function of a real-valued random variable $X$.
1. $F$ is increasing: if $x \le y$ then $F(x) \le F(y)$.
2. $F(x^+) = F(x)$ for $x \in \R$. Thus, $F$ is continuous from the right.
3. $F(x^-) = \P(X \lt x)$ for $x \in \R$. Thus, $F$ has limits from the left.
4. $F(-\infty) = 0$.
5. $F(\infty) = 1$.
Proof
1. If $x \le y$ then $\{X \le x\} \subseteq \{X \le y\}$, so $F(x) \le F(y)$ by the increasing property of probability.
2. As $t \downarrow x$, the events $\{X \le t\}$ decrease to $\{X \le x\}$, so $F(t) \to F(x)$ by the continuity theorem for decreasing events.
3. As $t \uparrow x$, the events $\{X \le t\}$ increase to $\{X \lt x\}$, so $F(t) \to \P(X \lt x)$ by the continuity theorem for increasing events.
4. As $t \to -\infty$, the events $\{X \le t\}$ decrease to $\emptyset$, so $F(t) \to 0$.
5. As $t \to \infty$, the events $\{X \le t\}$ increase to the sample space $\Omega$, so $F(t) \to 1$.
The following result shows how the distribution function can be used to compute the probability that $X$ is in an interval. Recall that a probability distribution on $\R$ is completely determined by the probabilities of intervals; thus, the distribution function determines the distribution of $X$.
Suppose again that $F$ is the distribution function of a real-valued random variable $X$. If $a, \, b \in \R$ with $a \lt b$ then
1. $\P(X = a) = F(a) - F(a^-)$
2. $\P(a \lt X \le b) = F(b) - F(a)$
3. $\P(a \lt X \lt b) = F(b^-) - F(a)$
4. $\P(a \le X \le b) = F(b) - F(a^-)$
5. $\P(a \le X \lt b) = F(b^-) - F(a^-)$
Proof
These results follow from the definition, the basic properties, and the difference rule: $\P(B \setminus A) = \P(B) - \P(A)$ if $A, \, B$ are events and $A \subseteq B$.
1. $\{X = a\} = \{X \le a\} \setminus \{X \lt a\}$, so $\P(X = a) = \P(X \le a) - \P(X \lt a) = F(a) - F(a^-)$.
2. $\{a \lt X \le b\} = \{X \le b\} \setminus \{X \le a\}$, so $\P(a \lt X \le b) = \P(X \le b) - \P(X \le a) = F(b) - F(a)$.
3. $\{a \lt X \lt b\} = \{X \lt b\} \setminus \{X \le a\}$, so $\P(a \lt X \lt b) = \P(X \lt b) - \P(X \le a) = F(b^-) - F(a)$.
4. $\{a \le X \le b\} = \{X \le b\} \setminus \{X \lt a\}$, so $\P(a \le X \le b) = \P(X \le b) - \P(X \lt a) = F(b) - F(a^-)$.
5. $\{a \le X \lt b\} = \{X \lt b\} \setminus \{X \lt a\}$, so $\P(a \le X \lt b) = \P(X \lt b) - \P(X \lt a) = F(b^-) - F(a^-)$.
Conversely, if a function $F: \R \to [0, 1]$ satisfies the basic properties, then the formulas above define a probability distribution on $\R$, with $F$ as the distribution function. For more on this point, read the section on Existence and Uniqueness.
If $X$ has a continuous distribution, then the distribution function $F$ is continuous.
Proof
If $X$ has a continuous distribution, then by definition, $\P(X = x) = 0$ so $\P(X \lt x) = \P(X \le x)$ for $x \in \R$. Hence from part (a) of the previous theorem, $F(x^-) = F(x^+) = F(x)$.
Thus, the two meanings of continuous come together: continuous distribution and continuous function in the calculus sense. Next recall that the distribution of a real-valued random variable $X$ is symmetric about a point $a \in \R$ if the distribution of $X - a$ is the same as the distribution of $a - X$.
Suppose that $X$ has a continuous distribution on $\R$ that is symmetric about a point $a$. Then the distribution function $F$ satisfies $F(a - t) = 1 - F(a + t)$ for $t \in \R$.
Proof
Since $X - a$ and $a - X$ have the same distribution, $F(a - t) = \P(X \le a - t) = \P(X - a \le -t) = \P(a - X \le -t) = \P(X \ge a + t) = 1 - F(a + t)$
Relation to Density Functions
There are simple relationships between the distribution function and the probability density function. Recall that if $X$ takes value in $S \subseteq \R$ and has probability density function $f$, we can extend $f$ to all of $\R$ by the convention that $f(x) = 0$ for $x \in S^c$. As in Definition (1), it's customary to define the distribution function $F$ on all of $\R$, even if the random variable takes values in a subset.
Suppose that $X$ has a discrete distribution on a countable subset $S \subseteq \R$. Let $f$ denote the probability density function and $F$ the distribution function.
1. $F(x) = \sum_{t \in S, \, t \le x} f(t)$ for $x \in \R$
2. $f(x) = F(x) - F(x^-)$ for $x \in S$
Proof
1. This follows from the definition of the PDF of $X$, $f(t) = \P(X = t)$ for $t \in S$, and the additivity of probability.
2. This is a restatement of part (a) of the theorem above.
Thus, $F$ is a step function with jumps at the points in $S$; the size of the jump at $x$ is $f(x)$.
There is an analogous result for a continuous distribution with a probability density function.
Suppose that $X$ has a continuous distribution on $\R$ with probability density function $f$ and distribution function $F$.
1. $F(x) = \int_{-\infty}^x f(t) dt$ for $x \in \R$.
2. $f(x) = F^\prime(x)$ if $f$ is continuous at $x$.
Proof
1. This is essentially the definition of the probability density function $f$ for a continuous distribution: $\P(X \in A) = \int_A f(t) \, dt$ for $A \subseteq \R$, and in particular for $A = (-\infty, x]$.
2. This follows from part (a) and the fundamental theorem of calculus.
The last result is the basic probabilistic version of the fundamental theorem of calculus. For mixed distributions, we have a combination of the results in the last two theorems.
Suppose that $X$ has a mixed distribution, with discrete part on a countable subset $D \subseteq \R$, and continuous part on $\R \setminus D$. Let $g$ denote the partial probability density function of the discrete part and assume that the continuous part has partial probability density function $h$. Let $F$ denote the distribution function.
1. $F(x) = \sum_{t \in D, \, t \le x} g(t) + \int_{-\infty}^x h(t) dt$ for $x \in \R$
2. $g(x) = F(x) - F(x^-)$ for $x \in D$
3. $h(x) = F^\prime (x)$ if $x \notin D$ and $h$ is continuous at $x$
Go back to the graph of a general distribution function. At a point of positive probability, the probability is the size of the jump. At a smooth point of the graph, the continuous probability density is the slope.
Recall that the existence of a probability density function is not guaranteed for a continuous distribution, but of course the distribution function always makes perfect sense. The advanced section on absolute continuity and density functions has an example of a continuous distribution on the interval $(0, 1)$ that has no probability density function. The distribution function is continuous and strictly increases from 0 to 1 on the interval, but has derivative 0 at almost every point!
Naturally, the distribution function can be defined relative to any of the conditional distributions we have discussed. No new concepts are involved, and all of the results above hold.
Reliability
Suppose again that $X$ is a real-valued random variable with distribution function $F$. The function in the following definition clearly gives the same information as $F$.
The function $F^c$ defined by $F^c(x) = 1 - F(x) = \P(X \gt x), \quad x \in \R$ is the right-tail distribution function of $X$. Give the mathematical properties of $F^c$ analogous to the properties of $F$ in (2).
Answer
1. $F^c$ is decreasing.
2. $F^c(t) \to F^c(x)$ as $t \downarrow x$ for $x \in \R$, so $F^c$ is continuous from the right.
3. $F^c(t) \to \P(X \ge x)$ as $t \uparrow x$ for $x \in \R$, so $F^c$ has left limits.
4. $F^c(x) \to 0$ as $x \to \infty$.
5. $F^c(x) \to 1$ as $x \to -\infty$.
So $F$ might be called the left-tail distribution function. But why have two distribution functions that give essentially the same information? The right-tail distribution function, and related functions, arise naturally in the context of reliability theory. For the remainder of this subsection, suppose that $T$ is a random variable with values in $[0, \infty)$ and that $T$ has a continuous distribution with probability density function $f$. Here are the important definitions:
Suppose that $T$ represents the lifetime of a device.
1. The right tail distribution function $F^c$ is the reliability function of $T$.
2. The function $h$ defined by $h(t) = f(t) \big/ F^c(t)$ for $t \ge 0$ is the failure rate function of $T$.
To interpret the reliability function, note that $F^c(t) = \P(T \gt t)$ is the probability that the device lasts at least $t$ time units. To interpret the failure rate function, note that if $dt$ is small then $\P(t \lt T \lt t + dt \mid T \gt t) = \frac{\P(t \lt T \lt t + dt)}{\P(T \gt t)} \approx \frac{f(t) \, dt}{F^c(t)} = h(t) \, dt$ So $h(t) \, dt$ is the approximate probability that the device will fail in the interval $(t, t + dt)$, given survival up to time $t$. Moreover, like the distribution function and the reliability function, the failure rate function also completely determines the distribution of $T$.
The reliability function can be expressed in terms of the failure rate function by $F^c(t) = \exp\left(-\int_0^t h(s) \, ds\right), \quad t \ge 0$
Proof
At the points of continuity of $f$ we have $\frac{d}{dt}F^c(t) = - f(t)$. Hence
$\int_0^t h(s) \, ds = \int_0^t \frac{f(s)}{F^c(s)} \, ds = \int_0^t -\frac{\frac{d}{ds}F^c(s)}{F^c(s)} \, ds = -\ln\left[F^c(t)\right]$
The failure rate function $h$ satisfies the following properties:
1. $h(t) \ge 0$ for $t \ge 0$
2. $\int_0^\infty h(t) \, dt = \infty$
Proof
1. This follows from the definition.
2. This follows from the previous result and the fact that $F^c(t) \to 0$ as $t \to \infty$.
Conversely, a function that satisfies these properties is the failure rate function for a continuous distribution on $[0, \infty)$:
Suppose that $h: [0, \infty) \to [0, \infty)$ is piecewise continuous and $\int_0^\infty h(t) \, dt = \infty$. Then the function $F^c$ defined by $F^c(t) = \exp\left(-\int_0^t h(s) \, ds\right), \quad t \ge 0$ is the reliability function for a continuous distribution on $[0, \infty)$.
Proof
The function $F^c$ is continuous, decreasing, and satisfies $F^c(0) = 1$ and $F^c(t) \to 0$ as $t \to \infty$. Hence $F = 1 - F^c$ is the distribution function for a continuous distribution on $[0, \infty)$.
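Numerically, a reliability function can be recovered from any failure rate function by cumulative quadrature. Here is a Python sketch (our illustration) using the failure rate $h(t) = 2 t$, for which $F^c(t) = e^{-t^2}$ exactly, so the approximation can be checked.
```python
import numpy as np

h = lambda t: 2 * t                # illustrative failure rate; F^c(t) = exp(-t^2)

t = np.linspace(0, 3, 301)
# cumulative trapezoidal approximation of the integral of h from 0 to t
H = np.concatenate([[0.0], np.cumsum((h(t[1:]) + h(t[:-1])) / 2 * np.diff(t))])
Fc = np.exp(-H)

for i in (50, 100, 200):           # t = 0.5, 1.0, 2.0
    print(t[i], round(Fc[i], 5), round(np.exp(-t[i] ** 2), 5))
```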
Multivariate Distribution Functions
Suppose now that $X$ and $Y$ are real-valued random variables for an experiment (that is, defined on the same probability space), so that $(X, Y)$ is a random vector taking values in a subset of $\R^2$.
The distribution function of $(X, Y)$ is the function $F$ defined by $F(x, y) = \P(X \le x, Y \le y), \quad (x, y) \in \R^2$
In the graph above, the light shading is intended to suggest a continuous distribution of probability, while the darker dots represent points of positive probability. Thus, $F(x, y)$ is the total probability mass below and to the left (that is, southwest) of the point $(x, y)$. As in the single variable case, the distribution function of $(X, Y)$ completely determines the distribution of $(X, Y)$.
Suppose that $a, \, b, \, c, \, d \in \R$ with $a \lt b$ and $c \lt d$. Then $\P(a \lt X \le b, c \lt Y \le d) = F(b, d) - F(a, d) - F(b, c) + F(a, c)$
Proof
Note that $\{X \le a, Y \le d\} \cup \{X \le b, Y \le c\} \cup \{a \lt X \le b, c \lt Y \le d\} = \{X \le b, Y \le d\}$. The intersection of the first two events is $\{X \le a, Y \le c\}$ while the first and third events and the second and third events are disjoint. Thus, from the inclusion-exclusion rule we have $F(a, d) + F(b, c) + \P(a \lt X \le b, c \lt Y \le d) - F(a, c) = F(b, d)$
A probability distribution on $\R^2$ is completely determined by its values on rectangles of the form $(a, b] \times (c, d]$, so just as in the single variable case, it follows that the distribution function of $(X, Y)$ completely determines the distribution of $(X, Y)$. See the advanced section on existence and uniqueness of positive measures in the chapter on Probability Measures for more details.
In the setting of the previous result, give the appropriate formula on the right for all possible combinations of weak and strong inequalities on the left.
The joint distribution function determines the individual (marginal) distribution functions.
Let $F$ denote the distribution function of $(X, Y)$, and let $G$ and $H$ denote the distribution functions of $X$ and $Y$, respectively. Then
1. $G(x) = F(x, \infty)$ for $x \in \R$
2. $H(y) = F(\infty, y)$ for $y \in \R$
Proof
These results follow from the continuity theorem for increasing events. For example, in (a) $\P(X \le x) = \P(X \le x, Y \lt \infty) = \lim_{y \to \infty} \P(X \le x, Y \le y) = \lim_{y \to \infty} F(x, y)$
On the other hand, we cannot recover the distribution function of $(X, Y)$ from the individual distribution functions, except when the variables are independent.
Random variables $X$ and $Y$ are independent if and only if $F(x, y) = G(x) H(y), \quad (x, y) \in \R^2$
Proof
If $X$ and $Y$ are independent then $F(x, y) = \P(X \le x, Y \le y) = \P(X \le x) \P(Y \le y) = G(x) H(y)$ for $(x, y) \in \R^2$. Conversely, suppose $F(x, y) = G(x) H(y)$ for $(x, y) \in \R^2$. If $a, \, b, \, c, \, d \in \R$ with $a \lt b$ and $c \lt d$ then from (15), \begin{align} \P(a \lt X \le b, c \lt Y \le d) & = G(b)H(d) - G(a)H(d) -G(b)H(c) + G(a)H(c) \ & = [G(b) - G(a)][H(d) - H(c)] = \P(a \lt X \le b) \P(c \lt Y \le d) \end{align} so it follows that $X$ and $Y$ are independent. (Recall again that a probability distribution on $\R^2$ is completely determined by its values on rectangles.)
All of the results of this subsection generalize in a straightforward way to $n$-dimensional random vectors. Only the notation is more complicated.
The Empirical Distribution Function
Suppose now that $X$ is a real-valued random variable for a basic random experiment and that we repeat the experiment $n$ times independently. This generates (for the new compound experiment) a sequence of independent variables $(X_1, X_2, \ldots, X_n)$ each with the same distribution as $X$. In statistical terms, this sequence is a random sample of size $n$ from the distribution of $X$. In statistical inference, the observed values $(x_1, x_2, \ldots, x_n)$ of the random sample form our data.
The empirical distribution function, based on the data $(x_1, x_2, \ldots, x_n)$, is defined by $F_n(x) = \frac{1}{n} \#\left\{i \in \{1, 2, \ldots, n\}: x_i \le x\right\} = \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i \le x), \quad x \in \R$
Thus, $F_n(x)$ gives the proportion of values in the data set that are less than or equal to $x$. The function $F_n$ is a statistical estimator of $F$, based on the given data set. This concept is explored in more detail in the section on the sample mean in the chapter on Random Samples. In addition, the empirical distribution function is related to the Brownian bridge stochastic process which is studied in the chapter on Brownian motion.
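The empirical distribution function is simple to compute. Here is a minimal Python sketch (the data values are made up for illustration): sorting the data once allows a binary search to count the values that are at most $x$.
```python
import numpy as np

def ecdf(data):
    """Return the function x -> F_n(x) for the given data set."""
    data = np.sort(np.asarray(data, dtype=float))
    n = len(data)
    # side='right' counts the data values that are <= x
    return lambda x: np.searchsorted(data, x, side="right") / n

F_n = ecdf([2.1, 0.4, 3.3, 0.4, 1.7])
print(F_n(0.4))   # 0.4, since two of the five values are <= 0.4
print(F_n(2.0))   # 0.6
print(F_n(5.0))   # 1.0
```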
Quantile Functions
Definitions
Suppose again that $X$ is a real-valued random variable with distribution function $F$.
For $p \in (0, 1)$, a value of $x$ such that $F(x^-) = \P(X \lt x) \le p$ and $F(x) = \P(X \le x) \ge p$ is called a quantile of order $p$ for the distribution.
Roughly speaking, a quantile of order $p$ is a value where the graph of the distribution function crosses (or jumps over) $p$. For example, in the picture below, $a$ is the unique quantile of order $p$ and $b$ is the unique quantile of order $q$. On the other hand, the quantiles of order $r$ form the interval $[c, d]$, and moreover, $d$ is a quantile for all orders in the interval $[r, s]$. Note also that if $X$ has a continuous distribution (so that $F$ is continuous) and $x$ is a quantile of order $p \in (0, 1)$, then $F(x) = p$.
Note that there is an inverse relation of sorts between the quantiles and the cumulative distribution values, but the relation is more complicated than that of a function and its ordinary inverse function, because the distribution function is not one-to-one in general. For many purposes, it is helpful to select a specific quantile for each order; to do this requires defining a generalized inverse of the distribution function $F$.
The quantile function $F^{-1}$ of $X$ is defined by $F^{-1}(p) = \min\{x \in \R: F(x) \ge p\}, \quad p \in (0, 1)$
$F^{-1}$ is well defined
Since $F$ is right continuous and increasing, $\{x \in \R: F(x) \ge p\}$ is an interval of the form $[a, \infty)$. Thus, the minimum of the set is $a$.
Note that if $F$ strictly increases from 0 to 1 on an interval $S$ (so that the underlying distribution is continuous and is supported on $S$), then $F^{-1}$ is the ordinary inverse of $F$. We do not usually define the quantile function at the endpoints 0 and 1. If we did, note that $F^{-1}(0)$ would always be $-\infty$.
Properties
The following exercise justifies the name: $F^{-1}(p)$ is the minimum of the quantiles of order $p$.
Let $p \in (0, 1)$.
1. $F^{-1}(p)$ is a quantile of order $p$.
2. If $x$ is a quantile of order $p$ then $F^{-1}(p) \le x$.
Proof
Let $y = F^{-1}(p)$.
1. Note that $F(y) \ge p$ by definition, and if $x \lt y$ then $F(x) \lt p$. Hence $F(y^-) \le p$. Therefore $y$ is a quantile of order $p$.
2. Suppose that $x$ is a quantile of order $p$. Then $F(x) \ge p$ so by definition, $y \le x$.
Other basic properties of the quantile function are given in the following theorem.
$F^{-1}$ satisfies the following properties:
1. $F^{-1}$ is increasing on $(0, 1)$.
2. $F^{-1}\left[F(x)\right] \le x$ for any $x \in \R$ with $F(x) \lt 1$.
3. $F\left[F^{-1}(p)\right] \ge p$ for any $p \in (0, 1)$.
4. $F^{-1}\left(p^-\right) = F^{-1}(p)$ for $p \in (0, 1)$. Thus $F^{-1}$ is continuous from the left.
5. $F^{-1}\left(p^+\right) = \inf\{x \in \R: F(x) \gt p\}$ for $p \in (0, 1)$. Thus $F^{-1}$ has limits from the right.
Proof
1. Note that if $p, \; q \in (0, 1)$ with $p \le q$, then $\{x \in \R: F(x) \ge q\} \subseteq \{x \in \R: F(x) \ge p\}$.
2. This follows from the definition: $F^{-1}\left[F(x)\right]$ is the smallest $y \in \R$ with $F(y) \ge F(x)$.
3. This also follows from the definition: $F^{-1}(p)$ is a value $y \in \R$ satisfying $F(y) \ge p$.
4. This follows from the fact that $F$ is continuous from the right.
5. This follows from the fact that $F$ has limits from the left.
As always, the inverse of a function is obtained essentially by reversing the roles of independent and dependent variables. In the graphs below, note that jumps of $F$ become flat portions of $F^{-1}$ while flat portions of $F$ become jumps of $F^{-1}$. For $p \in (0, 1)$, the set of quantiles of order $p$ is the closed, bounded interval $\left[F^{-1}(p), F^{-1}(p^+)\right]$. Thus, $F^{-1}(p)$ is the smallest quantile of order $p$, as we noted earlier, while $F^{-1}(p^+)$ is the largest quantile of order $p$.
The following basic property will be useful in simulating random variables, a topic explored in the section on transformations of random variables.
For $x \in \R$ and $p \in (0, 1)$, $F^{-1}(p) \le x$ if and only if $p \le F(x)$.
Proof
Suppose that $F^{-1}(p) \le x$. Then, since $F$ is increasing, $F\left[F^{-1}(p)\right] \le F(x)$. But $p \le F\left[F^{-1}(p)\right]$ by part (c) of the previous result, so $p \le F(x)$. Conversely, suppose that $p \le F(x)$. Then, since $F^{-1}$ is increasing, $F^{-1}(p) \le F^{-1}[F(x)]$. But $F^{-1}[F(x)] \le x$ by part (b) of the previous result, so $F^{-1}(p) \le x$.
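In particular, this result is the basis of the inverse transform method: if $U$ is uniformly distributed on $(0, 1)$ then $\P\left[F^{-1}(U) \le x\right] = \P[U \le F(x)] = F(x)$, so $F^{-1}(U)$ has distribution function $F$. A minimal Python sketch (arbitrary seed; the exponential quantile function used here is derived in an exercise below):
```python
import numpy as np

rng = np.random.default_rng(11)

# Exponential distribution with rate r: F^(-1)(p) = -ln(1 - p) / r
r = 2.0
u = rng.uniform(0, 1, size=200_000)
x = -np.log(1 - u) / r

print("sample mean:", x.mean(), "(theory 1/r = 0.5)")
print("P(X > 1)  :", np.mean(x > 1), "(theory exp(-2) = 0.1353...)")
```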
Special Quantiles
Certain quantiles are important enough to deserve special names.
Suppose that $X$ is a real-valued random variable.
1. A quantile of order $\frac{1}{4}$ is a first quartile of the distribution.
2. A quantile of order $\frac{1}{2}$ is a median or second quartile of the distribution.
3. A quantile of order $\frac{3}{4}$ is a third quartile of the distribution.
When there is only one median, it is frequently used as a measure of the center of the distribution, since it divides the set of values of $X$ in half, by probability. More generally, the quartiles can be used to divide the set of values into fourths, by probability.
Assuming uniqueness, let $q_1$, $q_2$, and $q_3$ denote the first, second, and third quartiles of $X$, respectively, and let $a = F^{-1}\left(0^+\right)$ and $b = F^{-1}(1)$.
1. The interquartile range is defined to be $q_3 - q_1$.
2. The five parameters $(a, q_1, q_2, q_3, b)$ are referred to as the five number summary of the distribution.
Note that the interval $[q_1, q_3]$ roughly gives the middle half of the distribution, so the interquartile range, the length of the interval, is a natural measure of the dispersion of the distribution about the median. Note also that $a$ and $b$ are essentially the minimum and maximum values of $X$, respectively, although of course, it's possible that $a = -\infty$ or $b = \infty$ (or both). Collectively, the five parameters give a great deal of information about the distribution in terms of the center, spread, and skewness. Graphically, the five numbers are often displayed as a boxplot or box and whisker plot, which consists of a line extending from the minimum value $a$ to the maximum value $b$, with a rectangular box from $q_1$ to $q_3$, and whiskers at $a$, the median $q_2$, and $b$. Roughly speaking, the five numbers separate the set of values of $X$ into 4 intervals of approximate probability $\frac{1}{4}$ each.
Suppose that $X$ has a continuous distribution that is symmetric about a point $a \in \R$. If $a + t$ is a quantile of order $p \in (0, 1)$ then $a - t$ is a quantile of order $1 - p$.
Proof
Note that this is the quantile function version of the symmetry result for the distribution function. If $a + t$ is a quantile of order $p$ then (since $X$ has a continuous distribution) $F(a + t) = p$. But then $F(a - t) = 1 - F(a + t) = 1 - p$ so $a - t$ is a quantile of order $1 - p$.
Examples and Applications
Distributions of Different Types
Let $F$ be the function defined by $F(x) = \begin{cases} 0, & x \lt 1\ \frac{1}{10}, & 1 \le x \lt \frac{3}{2}\ \frac{3}{10}, & \frac{3}{2} \le x \lt 2\ \frac{6}{10}, & 2 \le x \lt \frac{5}{2}\ \frac{9}{10}, & \frac{5}{2} \le x \lt 3\ 1, & x \ge 3 \end{cases}$
1. Sketch the graph of $F$ and show that $F$ is the distribution function for a discrete distribution.
2. Find the corresponding probability density function $f$ and sketch the graph.
3. Find $\P(2 \le X \lt 3)$ where $X$ has this distribution.
4. Find the quantile function and sketch the graph.
5. Find the five number summary and sketch the boxplot.
Answer
1. Note that $F$ increases from 0 to 1, is a step function, and is right continuous.
2. $f(x) = \begin{cases} \frac{1}{10}, & x = 1 \ \frac{1}{5}, & x = \frac{3}{2} \ \frac{3}{10}, & x = 2 \ \frac{3}{10}, & x = \frac{5}{2} \ \frac{1}{10}, & x = 3 \end{cases}$
3. $\P(2 \le X \lt 3) = \frac{3}{5}$
4. $F^{-1}(p) = \begin{cases} 1, & 0 \lt p \le \frac{1}{10} \ \frac{3}{2}, & \frac{1}{10} \lt p \le \frac{3}{10} \ 2, & \frac{3}{10} \lt p \le \frac{6}{10} \ \frac{5}{2}, & \frac{6}{10} \lt p \le \frac{9}{10} \ 3, & \frac{9}{10} \lt p \le 1 \end{cases}$
5. $\left(1, \frac{3}{2}, 2, \frac{5}{2}, 3\right)$
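The quantile function in part (d) can be computed mechanically from the definition $F^{-1}(p) = \min\{x \in \R: F(x) \ge p\}$. A short Python sketch for this distribution:
```python
from bisect import bisect_left

support = [1, 1.5, 2, 2.5, 3]
cum = [0.1, 0.3, 0.6, 0.9, 1.0]           # F evaluated at the support points

def quantile(p):
    # first support point whose cumulative probability is at least p
    return support[bisect_left(cum, p)]

print(quantile(0.25))   # 1.5
print(quantile(0.5))    # 2, the median
print(quantile(0.95))   # 3
```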
Let $F$ be the function defined by
$F(x) = \begin{cases} 0, & x \lt 0\ \frac{x}{x + 1}, & x \ge 0 \end{cases}$
1. Sketch the graph of $F$ and show that $F$ is the distribution function for a continuous distribution.
2. Find the corresponding probability density function $f$ and sketch the graph.
3. Find $\P(2 \le X \lt 3)$ where $X$ has this distribution.
4. Find the quantile function and sketch the graph.
5. Find the five number summary and sketch the boxplot.
Answer
1. Note that $F$ is continuous and increases from 0 to 1.
2. $f(x) = \frac{1}{(x + 1)^2}, \quad x \gt 0$
3. $\P(2 \le X \lt 3) = \frac{1}{12}$
4. $F^{-1}(p) = \frac{p}{1 - p}, \quad 0 \lt p \lt 1$
5. $\left(0, \frac{1}{3}, 1, 3, \infty\right)$
The expression $\frac{p}{1 - p}$ that occurs in the quantile function in the last exercise is known as the odds ratio associated with $p$, particularly in the context of gambling.
Let $F$ be the function defined by
$F(x) = \begin{cases} 0, & x \lt 0\ \frac{1}{4} x, & 0 \le x \lt 1\ \frac{1}{3} + \frac{1}{4} (x - 1)^2, & 1 \le x \lt 2\ \frac{2}{3} + \frac{1}{4} (x - 2)^3, & 2 \le x \lt 3\ 1, & x \ge 3 \end{cases}$
1. Sketch the graph of $F$ and show that $F$ is the distribution function of a mixed distribution.
2. Find the partial probability density function of the discrete part and sketch the graph.
3. Find the partial probability density function of the continuous part and sketch the graph.
4. Find $\P(2 \le X \lt 3)$ where $X$ has this distribution.
5. Find the quantile function and sketch the graph.
6. Find the five number summary and sketch the boxplot.
Answer
1. Note that $F$ is piecewise continuous, increases from 0 to 1, and is right continuous.
2. $g(1) = g(2) = g(3) = \frac{1}{12}$
3. $h(x) = \begin{cases} \frac{1}{4}, & 0 \lt x \lt 1 \ \frac{1}{2}(x - 1), & 1 \lt x \lt 2 \ \frac{3}{4}(x - 2)^2, & 2 \lt x \lt 3 \end{cases}$
4. $\P(2 \le X \lt 3) = \frac{1}{3}$
5. $F^{-1}(p) = \begin{cases} 4 p, & 0 \lt p \le \frac{1}{4} \ 1, & \frac{1}{4} \lt p \le \frac{1}{3} \ 1 + \sqrt{4(p - \frac{1}{3})}, & \frac{1}{3} \lt p \le \frac{7}{12} \ 2, & \frac{7}{12} \lt p \le \frac{2}{3} \ 2 + \sqrt[3]{4 (p - \frac{2}{3})}, & \frac{2}{3} \lt p \le \frac{11}{12} \ 3, & \frac{11}{12} \lt p \le 1 \end{cases}$
6. $\left(0, 1, 1 + \sqrt{\frac{2}{3}}, 2 + \sqrt[3]{\frac{1}{3}}, 3\right)$
The Uniform Distribution
Suppose that $X$ has probability density function $f(x) = \frac{1}{b - a}$ for $x \in [a, b]$, where $a, \, b \in \R$ and $a \lt b$.
1. Find the distribution function and sketch the graph.
2. Find the quantile function and sketch the graph.
3. Compute the five-number summary.
4. Sketch the graph of the probability density function with the boxplot on the horizontal axis.
Answer
1. $F(x) = \frac{x - a}{b - a}, \quad a \le x \lt b$
2. $F^{-1}(p) = a + (b - a) p, \quad 0 \le p \le 1$
3. $\left(a, \frac{3 a + b}{4}, \frac{a + b}{2}, \frac{a + 3 b}{4}, b\right)$
The distribution in the last exercise is the uniform distribution on the interval $[a, b]$. The left endpoint $a$ is the location parameter and the length of the interval $w = b - a$ is the scale parameter. The uniform distribution models a point chosen at random from the interval, and is studied in more detail in the chapter on Special Distributions.
In the special distribution calculator, select the continuous uniform distribution. Vary the location and scale parameters and note the shape of the probability density function and the distribution function.
The Exponential Distribution
Suppose that $T$ has probability density function $f(t) = r e^{-r t}$ for $0 \le t \lt \infty$, where $r \gt 0$ is a parameter.
1. Find the distribution function and sketch the graph.
2. Find the reliability function and sketch the graph.
3. Find the failure rate function and sketch the graph.
4. Find the quantile function and sketch the graph.
5. Compute the five-number summary.
6. Sketch the graph of the probability density function with the boxplot on the horizontal axis.
Answer
1. $F(t) = 1 - e^{-r t}, \quad 0 \le t \lt \infty$
2. $F^c(t) = e^{-r t}, \quad 0 \le t \lt \infty$
3. $h(t) = r, \quad 0 \le t \lt \infty$
4. $F^{-1}(p) = -\frac{1}{r} \ln(1 - p), \quad 0 \le p \lt 1$
5. $\left(0, \frac{1}{r}[\ln 4 - \ln 3], \frac{1}{r} \ln 2, \frac{1}{r} \ln 4 , \infty\right)$
The distribution in the last exercise is the exponential distribution with rate parameter $r$. Note that this distribution is characterized by the fact that it has constant failure rate (and this is the reason for referring to $r$ as the rate parameter). The reciprocal of the rate parameter is the scale parameter. The exponential distribution is used to model failure times and other random times under certain conditions, and is studied in detail in the chapter on The Poisson Process.
In the special distribution calculator, select the exponential distribution. Vary the scale parameter $b$ and note the shape of the probability density function and the distribution function.
The Pareto Distribution
Suppose that $X$ has probability density function $f(x) = \frac{a}{x^{a+1}}$ for $1 \le x \lt \infty$ where $a \gt 0$ is a parameter.
1. Find the distribution function.
2. Find the reliability function.
3. Find the failure rate function.
4. Find the quantile function.
5. Compute the five-number summary.
6. In the case $a = 2$, sketch the graph of the probability density function with the boxplot on the horizontal axis.
Answer
1. $F(x) = 1 - \frac{1}{x^a}, \quad 1 \le x \lt \infty$
2. $F^c(x) = \frac{1}{x^a}, \quad 1 \le x \lt \infty$
3. $h(x) = \frac{a}{x}, \quad 1 \le x \lt \infty$
4. $F^{-1}(p) = (1 - p)^{-1/a}, \quad 0 \le p \lt 1$
5. $\left(1, \left(\frac{3}{4}\right)^{-1 / a}, \left(\frac{1}{2}\right)^{-1/a}, \left(\frac{1}{4}\right)^{-1/a}, \infty \right)$
The distribution in the last exercise is the Pareto distribution with shape parameter $a$, named after Vilfredo Pareto. The Pareto distribution is a heavy-tailed distribution that is sometimes used to model income and certain other economic variables. It is studied in detail in the chapter on Special Distributions.
In the special distribution calculator, select the Pareto distribution. Keep the default value for the scale parameter, but vary the shape parameter and note the shape of the density function and the distribution function.
The Cauchy Distribution
Suppose that $X$ has probability density function $f(x) = \frac{1}{\pi (1 + x^2)}$ for $x \in \R$.
1. Find the distribution function and sketch the graph.
2. Find the quantile function and sketch the graph.
3. Compute the five-number summary and the interquartile range.
4. Sketch the graph of the probability density function with the boxplot on the horizontal axis.
Answer
1. $F(x) = \frac{1}{2} + \frac{1}{\pi} \arctan x, \quad x \in \R$
2. $F^{-1}(p) = \tan\left[\pi\left(p - \frac{1}{2}\right)\right], \quad 0 \lt p \lt 1$
3. $(-\infty, -1, 0, 1, \infty)$, $\text{IQR} = 2$
The distribution in the last exercise is the Cauchy distribution, named after Augustin Cauchy. The Cauchy distribution is studied in more generality in the chapter on Special Distributions.
In the special distribution calculator, select the Cauchy distribution and keep the default parameter values. Note the shape of the density function and the distribution function.
The Weibull Distribution
Let $h(t) = k t^{k - 1}$ for $0 \lt t \lt \infty$ where $k \gt 0$ is a parameter.
1. Sketch the graph of $h$ in the cases $0 \lt k \lt 1$, $k = 1$, $1 \lt k \lt 2$, $k = 2$, and $k \gt 2$.
2. Show that $h$ is a failure rate function.
3. Find the reliability function and sketch the graph.
4. Find the distribution function and sketch the graph.
5. Find the probability density function and sketch the graph.
6. Find the quantile function and sketch the graph.
7. Compute the five-number summary.
Answer
1. $h$ is decreasing and concave upward if $0 \lt k \lt 1$; $h = 1$ (constant) if $k = 1$; $h$ is increasing and concave downward if $1 \lt k \lt 2$; $h(t) = 2 t$ (linear) if $k = 2$; $h$ is increasing and concave upward if $k \gt 2$.
2. $h(t) \gt 0$ for $0 \lt t \lt \infty$ and $\int_0^\infty h(t) \, dt = \infty$
3. $F^c(t) = \exp\left(-t^k\right), \quad 0 \le t \lt \infty$
4. $F(t) = 1 - \exp\left(-t^k\right), \quad 0 \le t \lt \infty$
5. $f(t) = k t^{k-1} \exp\left(-t^k\right), \quad 0 \le t \lt \infty$
6. $F^{-1}(p) = [-\ln(1 - p)]^{1/k}, \quad 0 \le p \lt 1$
7. $\left(0, [\ln 4 - \ln 3]^{1/k}, [\ln 2]^{1/k}, [\ln 4]^{1/k}, \infty\right)$
The distribution in the previous exercise is the Weibull distribution with shape parameter $k$, named after Waloddi Weibull. The Weibull distribution is studied in detail in the chapter on Special Distributions. Since this family includes increasing, decreasing, and constant failure rates, it is widely used to model the lifetimes of various types of devices.
In the special distribution calculator, select the Weibull distribution. Keep the default scale parameter, but vary the shape parameter and note the shape of the density function and the distribution function.
Beta Distributions
Suppose that $X$ has probability density function $f(x) = 12 x^2 (1 - x)$ for $0 \le x \le 1$.
1. Find the distribution function of $X$ and sketch the graph.
2. Find $\P\left(\frac{1}{4} \le X \le \frac{1}{2}\right)$.
3. Compute the five number summary and the interquartile range. You will have to approximate the quantiles.
4. Sketch the graph of the density function with the boxplot on the horizontal axis.
Answer
1. $F(x) = 4 x^3 - 3 x^4, \quad 0 \le x \le 1$
2. $\P\left(\frac{1}{4} \le X \le \frac{1}{2}\right) = \frac{67}{256}$
3. $(0, 0.4563, 0.6143, 0.7570, 1)$, $\text{IQR} = 0.3007$
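The approximate quantiles in part (c) can be found by bisection, since $F$ is continuous and strictly increasing on $[0, 1]$. A minimal Python sketch:

```python
def F(x):
    # distribution function from part (a)
    return 4 * x**3 - 3 * x**4

def quantile(p, lo=0.0, hi=1.0, tol=1e-10):
    # bisection search for F^{-1}(p); valid because F is increasing on [0, 1]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < p else (lo, mid)
    return (lo + hi) / 2

print([round(quantile(p), 4) for p in (0.25, 0.5, 0.75)])
# approximately [0.4563, 0.6143, 0.757]
```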
Suppose that $X$ has probability density function $f(x) = \frac{1}{\pi \sqrt{x (1 - x)}}$ for $0 \lt x \lt 1$.
1. Find the distribution function of $X$ and sketch the graph.
2. Compute $\P\left(\frac{1}{3} \le X \le \frac{2}{3}\right)$.
3. Find the quantile function and sketch the graph.
4. Compute the five number summary and the interquartile range.
5. Sketch the graph of the probability density function with the boxplot on the horizontal axis.
Answer
1. $F(x) = \frac{2}{\pi} \arcsin\left(\sqrt{x}\right), \quad 0 \le x \le 1$
2. $\P\left(\frac{1}{3} \le X \le \frac{2}{3}\right) = 0.2163$
3. $F^{-1}(p) = \sin^2\left(\frac{\pi}{2} p\right), \quad 0 \lt p \lt 1$
4. $\left(0, \frac{1}{2} - \frac{\sqrt{2}}{4}, \frac{1}{2}, \frac{1}{2} + \frac{\sqrt{2}}{4}, 1\right)$, $\text{IQR} = \frac{\sqrt{2}}{2}$
The distributions in the last two exercises are examples of beta distributions. The particular beta distribution in the last exercise is also known as the arcsine distribution; the distribution function explains the name. Beta distributions are used to model random proportions and probabilities, and certain other types of random variables, and are studied in detail in the chapter on Special Distributions.
In the special distribution calculator, select the beta distribution. For each of the following parameter values, note the location and shape of the density function and the distribution function.
1. $a = 3$, $b = 2$. This gives the first beta distribution above.
2. $a = b = \frac{1}{2}$. This gives the arcsine distribution above.
Logistic Distribution
Let $F(x) = \frac{e^x}{1 + e^x}$ for $x \in \R$.
1. Show that $F$ is a distribution function for a continuous distribution, and sketch the graph.
2. Compute $\P(-1 \le X \le 1)$ where $X$ is a random variable with distribution function $F$.
3. Find the quantile function and sketch the graph.
4. Compute the five-number summary and the interquartile range.
5. Find the probability density function and sketch the graph with the boxplot on the horizontal axis.
Answer
1. Note that $F$ is continuous, and increases from 0 to 1.
2. $\P(-1 \le X \le 1) = 0.4621$
3. $F^{-1}(p) = \ln \left(\frac{p}{1 - p}\right), \quad 0 \lt p \lt 1$
4. $(-\infty, -\ln 3, 0, \ln 3, \infty)$
5. $f(x) = \frac{e^x}{(1 + e^x)^2}, \quad x \in \R$
The distribution in the last exercise is the standard logistic distribution and the quantile function is known as the logit function. The logistic distribution is studied in detail in the chapter on Special Distributions.
In the special distribution calculator, select the logistic distribution and keep the default parameter values. Note the shape of the probability density function and the distribution function.
Extreme Value Distribution
Let $F(x) = e^{-e^{-x}}$ for $x \in \R$.
1. Show that $F$ is a distribution function for a continuous distribution, and sketch the graph.
2. Compute $\P(-1 \le X \le 1)$ where $X$ is a random variable with distribution function $F$.
3. Find the quantile function and sketch the graph.
4. Compute the five-number summary.
5. Find the probability density function and sketch the graph with the boxplot on the horizontal axis.
Answer
1. Note that $F$ is continuous, and increases from 0 to 1.
2. $\P(-1 \le X \le 1) = 0.6262$
3. $F^{-1}(p) = -\ln(-\ln p), \quad 0 \lt p \lt 1$
4. $\left(-\infty, -\ln(\ln 4), -\ln(\ln 2), -\ln(\ln 4 - \ln 3), \infty\right)$
5. $f(x) = e^{-e^{-x}} e^{-x}, \quad x \in \R$
The distribution in the last exercise is the type 1 extreme value distribution, also known as the Gumbel distribution in honor of Emil Gumbel. Extreme value distributions are studied in detail in the chapter on Special Distributions.
In the special distribution calculator, select the extreme value distribution and keep the default parameter values. Note the shape and location of the probability density function and the distribution function.
The Standard Normal Distribution
Recall that the standard normal distribution has probability density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$ This distribution models physical measurements of all sorts subject to small, random errors, and is one of the most important distributions in probability. The normal distribution is studied in more detail in the chapter on Special Distributions. The distribution function $\Phi$, of course, can be expressed as $\Phi(z) = \int_{-\infty}^z \phi(x) \, dx, \quad z \in \R$ but $\Phi$ and the quantile function $\Phi^{-1}$ cannot be expressed, in closed form, in terms of elementary functions. Because of the importance of the normal distribution, $\Phi$ and $\Phi^{-1}$ are themselves considered special functions, like $\sin$, $\ln$, and many others. Approximate values of these functions can be computed using most mathematical and statistical software packages. Because the distribution is symmetric about 0, $\Phi(-z) = 1 - \Phi(z)$ for $z \in \R$, and equivalently, $\Phi^{-1}(1 - p) = -\Phi^{-1}(p)$. In particular, the median is 0.
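Since $\Phi$ and $\Phi^{-1}$ are treated as special functions, most software exposes them directly. For example, Python's standard library (version 3.8 or later) provides both through the statistics module; the sketch below computes the quartiles and illustrates the symmetry property:

```python
from statistics import NormalDist

Z = NormalDist()   # standard normal distribution: mu = 0, sigma = 1
print(Z.cdf(0.0))                          # Phi(0) = 0.5, so the median is 0
print(Z.inv_cdf(0.25), Z.inv_cdf(0.75))    # quartiles: about -0.6745 and 0.6745
print(Z.inv_cdf(0.1), Z.inv_cdf(0.9))      # quantiles of order 0.1 and 0.9: about -1.2816 and 1.2816
```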
Open the special distribution calculator and choose the normal distribution. Keep the default parameter values and select CDF view. Note the shape and location of the distribution/quantile function. Compute each of the following:
1. The first and third quartiles
2. The quantiles of order 0.9 and 0.1
3. The quantiles of order 0.95 and 0.05
Miscellaneous Exercises
Suppose that $X$ has probability density function $f(x) = -\ln x$ for $0 \lt x \le 1$.
1. Sketch the graph of $f$.
2. Find the distribution function $F$ and sketch the graph.
3. Find $\P\left(\frac{1}{3} \le X \le \frac{1}{2}\right)$.
Answer
2. $F(x) = x - x \ln x, \quad 0 \lt x \lt 1$
3. $\P\left(\frac{1}{3} \le X \le \frac{1}{2}\right) = \frac{1}{6} + \frac{1}{2} \ln 2 - \frac{1}{3} \ln 3$
Suppose that a pair of fair dice are rolled and the sequence of scores $(X_1, X_2)$ is recorded.
1. Find the distribution function of $Y = X_1 + X_2$, the sum of the scores.
2. Find the distribution function of $V = \max \{X_1, X_2\}$, the maximum score.
3. Find the conditional distribution function of $Y$ given $V = 5$.
Answer
The random variables are discrete, so the CDFs are step functions, with jumps at the values of the variables. The following tables give the values of the CDFs at the values of the random variables.
1. $y$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Y \le y)$ $\frac{1}{36}$ $\frac{3}{36}$ $\frac{6}{36}$ $\frac{10}{36}$ $\frac{15}{36}$ $\frac{21}{36}$ $\frac{26}{36}$ $\frac{30}{36}$ $\frac{33}{36}$ $\frac{35}{36}$ 1
2. $v$ 1 2 3 4 5 6
$\P(V \le v)$ $\frac{1}{36}$ $\frac{4}{36}$ $\frac{9}{36}$ $\frac{16}{36}$ $\frac{25}{36}$ 1
3. $y$ 6 7 8 9 10
$\P(Y \le y \mid V = 5)$ $\frac{2}{9}$ $\frac{4}{9}$ $\frac{6}{9}$ $\frac{8}{9}$ 1
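The tables above can be verified by brute-force enumeration of the 36 equally likely outcomes. A Python sketch (the fractions are printed in lowest terms):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # the 36 equally likely pairs

def cdf(values):
    # CDF at the jump points, as exact fractions
    return {v: Fraction(sum(1 for w in values if w <= v), len(values)) for v in sorted(set(values))}

print(cdf([x1 + x2 for x1, x2 in outcomes]))      # CDF of Y, the sum
print(cdf([max(x1, x2) for x1, x2 in outcomes]))  # CDF of V, the maximum

# conditional CDF of Y given V = 5: restrict to the 9 outcomes with maximum score 5
print(cdf([x1 + x2 for x1, x2 in outcomes if max(x1, x2) == 5]))
```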
Suppose that $(X, Y)$ has probability density function $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find the distribution function of $(X, Y)$.
2. Compute $\P\left(\frac{1}{4} \le X \le \frac{1}{2}, \frac{1}{3} \le Y \le \frac{2}{3}\right)$.
3. Find the distribution function of $X$.
4. Find the distribution function of $Y$.
5. Find the conditional distribution function of $X$ given $Y = y$ for $0 \lt y \lt 1$.
6. Find the conditional distribution function of $Y$ given $X = x$ for $0 \lt x \lt 1$.
7. Are $X$ and $Y$ independent?
Answer
1. $F(x, y) = \frac{1}{2}\left(x y^2 + y x^2\right); \quad 0 \lt x \lt 1, \; 0 \lt y \lt 1$
2. $\P\left(\frac{1}{4} \le X \le \frac{1}{2}, \frac{1}{3} \le Y \le \frac{2}{3}\right) = \frac{7}{96}$
3. $G(x) = \frac{1}{2}\left(x + x^2\right), \quad 0 \lt x \lt 1$
4. $H(y) = \frac{1}{2}\left(y + y^2\right), \quad 0 \lt y \lt 1$
5. $G(x \mid y) = \frac{x^2 / 2 + x y}{y + 1/2}; \quad 0 \lt x \lt 1, \; 0 \lt y \lt 1$
6. $H(y \mid x) = \frac{y^2 / 2 + x y}{x + 1/2}; \quad 0 \lt x \lt 1, \; 0 \lt y \lt 1$
7. No. The joint density $f(x, y) = x + y$ does not factor into a function of $x$ times a function of $y$; equivalently, $F(x, y) \ne G(x) H(y)$. So $X$ and $Y$ are dependent.
Statistical Exercises
For the M&M data, compute the empirical distribution function of the total number of candies.
Answer
Let $N$ denote the total number of candies. The empirical distribution function of $N$ is a step function; the following table gives the values of the function at the jump points.
$n$ 50 53 54 55 56 57 58 59 60 61
$\P(N \le n)$ $\frac{1}{30}$ $\frac{2}{30}$ $\frac{3}{30}$ $\frac{7}{30}$ $\frac{11}{30}$ $\frac{14}{30}$ $\frac{23}{30}$ $\frac{26}{30}$ $\frac{28}{30}$ 1
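An empirical distribution function is computed from data in exactly this way: its value at $n$ is the proportion of observations that are at most $n$. The Python sketch below uses a hypothetical sample whose counts are chosen to match the table above, since the raw M&M data are not reproduced here:

```python
from fractions import Fraction

def empirical_cdf(sample):
    # proportion of observations <= v, at each observed value v (fractions in lowest terms)
    n = len(sample)
    return {v: Fraction(sum(1 for w in sample if w <= v), n) for v in sorted(set(sample))}

# hypothetical sample of 30 candy counts, consistent with the table above
sample = [50, 53, 54] + [55]*4 + [56]*4 + [57]*3 + [58]*9 + [59]*3 + [60]*2 + [61]*2
print(empirical_cdf(sample))
```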
For the cicada data, let $BL$ denote body length and let $G$ denote gender. Compute the empirical distribution function of the following variables:
1. $BL$
2. $BL$ given $G = 1$ (male)
3. $BL$ given $G = 0$ (female).
4. Do you believe that $BL$ and $G$ are independent?
For statistical versions of some of the topics in this section, see the chapter on Random Samples, and in particular, the sections on empirical distributions and order statistics.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$
This section studies how the distribution of a random variable changes when the variable is transformed in a deterministic way. If you are a new student of probability, you should skip the technical details.
Basic Theory
The Problem
As usual, we start with a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ is the collection of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose now that we have a random variable $X$ for the experiment, taking values in a set $S$, and a function $r$ from $S$ into another set $T$. Then $Y = r(X)$ is a new random variable taking values in $T$. If the distribution of $X$ is known, how do we find the distribution of $Y$? This is a very basic and important question, and in a superficial sense, the solution is easy. But first recall that for $B \subseteq T$, $r^{-1}(B) = \{x \in S: r(x) \in B\}$ is the inverse image of $B$ under $r$.
$\P(Y \in B) = \P\left[X \in r^{-1}(B)\right]$ for $B \subseteq T$.
Proof
However, frequently the distribution of $X$ is known either through its distribution function $F$ or its probability density function $f$, and we would similarly like to find the distribution function or probability density function of $Y$. This is a difficult problem in general, because as we will see, even simple transformations of variables with simple distributions can lead to variables with complex distributions. We will solve the problem in various special cases.
Transformed Variables with Discrete Distributions
When the transformed variable $Y$ has a discrete distribution, the probability density function of $Y$ can be computed using basic rules of probability.
Suppose that $X$ has a discrete distribution on a countable set $S$, with probability density function $f$. Then $Y$ has a discrete distribution with probability density function $g$ given by $g(y) = \sum_{x \in r^{-1}\{y\}} f(x), \quad y \in T$
Proof
Suppose that $X$ has a continuous distribution on a subset $S \subseteq \R^n$ with probability density function $f$, and that $T$ is countable. Then $Y$ has a discrete distribution with probability density function $g$ given by $g(y) = \int_{r^{-1}\{y\}} f(x) \, dx, \quad y \in T$
Proof
So the main problem is often computing the inverse images $r^{-1}\{y\}$ for $y \in T$. The formulas above in the discrete and continuous cases are not worth memorizing explicitly; it's usually better to just work each problem from scratch. The main step is to write the event $\{Y = y\}$ in terms of $X$, and then find the probability of this event using the probability density function of $X$.
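In computational terms, the discrete formula simply accumulates the mass of $f$ over each preimage. A brief Python sketch, using a fair die and the arbitrary illustrative transformation $r(x) = (x - 3)^2$:

```python
from fractions import Fraction

f = {x: Fraction(1, 6) for x in range(1, 7)}   # PDF of a fair die

def transformed_pdf(f, r):
    # g(y) = sum of f(x) over x in the preimage r^{-1}{y}
    g = {}
    for x, p in f.items():
        g[r(x)] = g.get(r(x), Fraction(0)) + p
    return g

print(transformed_pdf(f, lambda x: (x - 3) ** 2))
# g assigns 1/3 to the values 1 and 4, and 1/6 to 0 and 9: mass accumulates where r is many-to-one
```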
Transformed Variables with Continuous Distributions
Suppose that $X$ has a continuous distribution on a subset $S \subseteq \R^n$ and that $Y = r(X)$ has a continuous distributions on a subset $T \subseteq \R^m$. Suppose also that $X$ has a known probability density function $f$. In many cases, the probability density function of $Y$ can be found by first finding the distribution function of $Y$ (using basic rules of probability) and then computing the appropriate derivatives of the distribution function. This general method is referred to, appropriately enough, as the distribution function method.
Suppose that $Y$ is real valued. The distribution function $G$ of $Y$ is given by
$G(y) = \int_{r^{-1}(-\infty, y]} f(x) \, dx, \quad y \in \R$
Proof
Again, this follows from the definition of $f$ as a PDF of $X$. For $y \in \R$, $G(y) = \P(Y \le y) = \P\left[r(X) \in (-\infty, y]\right] = \P\left[X \in r^{-1}(-\infty, y]\right] = \int_{r^{-1}(-\infty, y]} f(x) \, dx$
As in the discrete case, the formula in (4) is not much help, and it's usually better to work each problem from scratch. The main step is to write the event $\{Y \le y\}$ in terms of $X$, and then find the probability of this event using the probability density function of $X$.
The Change of Variables Formula
When the transformation $r$ is one-to-one and smooth, there is a formula for the probability density function of $Y$ directly in terms of the probability density function of $X$. This is known as the change of variables formula. Note that since $r$ is one-to-one, it has an inverse function $r^{-1}$.
We will explore the one-dimensional case first, where the concepts and formulas are simplest. Thus, suppose that random variable $X$ has a continuous distribution on an interval $S \subseteq \R$, with distribution function $F$ and probability density function $f$. Suppose that $Y = r(X)$ where $r$ is a differentiable function from $S$ onto an interval $T$. As usual, we will let $G$ denote the distribution function of $Y$ and $g$ the probability density function of $Y$.
Suppose that $r$ is strictly increasing on $S$. For $y \in T$,
1. $G(y) = F\left[r^{-1}(y)\right]$
2. $g(y) = f\left[r^{-1}(y)\right] \frac{d}{dy} r^{-1}(y)$
Proof
1. $G(y) = \P(Y \le y) = \P[r(X) \le y] = \P\left[X \le r^{-1}(y)\right] = F\left[r^{-1}(y)\right]$ for $y \in T$. Note that the inequality is preserved since $r$ is increasing.
2. This follows from part (a) by taking derivatives with respect to $y$ and using the chain rule. Recall that $F^\prime = f$.
Suppose that $r$ is strictly decreasing on $S$. For $y \in T$,
1. $G(y) = 1 - F\left[r^{-1}(y)\right]$
2. $g(y) = -f\left[r^{-1}(y)\right] \frac{d}{dy} r^{-1}(y)$
Proof
1. $G(y) = \P(Y \le y) = \P[r(X) \le y] = \P\left[X \ge r^{-1}(y)\right] = 1 - F\left[r^{-1}(y)\right]$ for $y \in T$. Note that the inequality is reversed since $r$ is decreasing.
2. This follows from part (a) by taking derivatives with respect to $y$ and using the chain rule. Recall again that $F^\prime = f$.
The formulas for the probability density functions in the increasing case and the decreasing case can be combined:
If $r$ is strictly increasing or strictly decreasing on $S$ then the probability density function $g$ of $Y$ is given by $g(y) = f\left[ r^{-1}(y) \right] \left| \frac{d}{dy} r^{-1}(y) \right|$
Letting $x = r^{-1}(y)$, the change of variables formula can be written more compactly as $g(y) = f(x) \left| \frac{dx}{dy} \right|$ Although succinct and easy to remember, the formula is a bit less clear. It must be understood that $x$ on the right should be written in terms of $y$ via the inverse function. The images below give a graphical interpretation of the formula in the two cases where $r$ is increasing and where $r$ is decreasing.
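The formula is easy to check by simulation. In the sketch below, $X$ is uniform on $(0, 1)$ and $Y = X^2$, so $r$ is strictly increasing and the formula gives $g(y) = 1 / (2 \sqrt{y})$ on $(0, 1)$; the empirical probability of an interval should be close to the integral of $g$ over that interval:

```python
import math
import random

n = 100_000
sample = [random.random() ** 2 for _ in range(n)]   # Y = X^2 with X uniform on (0, 1)

a, b = 0.25, 0.5
empirical = sum(1 for y in sample if a < y < b) / n
exact = math.sqrt(b) - math.sqrt(a)   # integral of 1 / (2 sqrt(y)) over (a, b)
print(empirical, exact)               # the two values should be close
```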
The generalization of this result from $\R$ to $\R^n$ is basically a theorem in multivariate calculus. First we need some notation. Suppose that $r$ is a one-to-one differentiable function from $S \subseteq \R^n$ onto $T \subseteq \R^n$. The first derivative of the inverse function $\bs x = r^{-1}(\bs y)$ is the $n \times n$ matrix of first partial derivatives: $\left( \frac{d \bs x}{d \bs y} \right)_{i j} = \frac{\partial x_i}{\partial y_j}$ The Jacobian (named in honor of Karl Gustav Jacobi) of the inverse function is the determinant of the first derivative matrix $\det \left( \frac{d \bs x}{d \bs y} \right)$ With this compact notation, the multivariate change of variables formula is easy to state.
Suppose that $\bs X$ is a random variable taking values in $S \subseteq \R^n$, and that $\bs X$ has a continuous distribution with probability density function $f$. Suppose also that $\bs Y = r(\bs X)$ where $r$ is a one-to-one differentiable function from $S$ onto $T \subseteq \R^n$. Then the probability density function $g$ of $\bs Y$ is given by $g(\bs y) = f(\bs x) \left| \det \left( \frac{d \bs x}{d \bs y} \right) \right|, \quad \bs y \in T$
Proof
The result follows from the multivariate change of variables formula in calculus. If $B \subseteq T$ then $\P(\bs Y \in B) = \P[r(\bs X) \in B] = \P[\bs X \in r^{-1}(B)] = \int_{r^{-1}(B)} f(\bs x) \, d\bs x$ Using the change of variables $\bs x = r^{-1}(\bs y)$, $d\bs x = \left|\det \left( \frac{d \bs x}{d \bs y} \right)\right|\, d\bs y$ we have $\P(\bs Y \in B) = \int_B f[r^{-1}(\bs y)] \left|\det \left( \frac{d \bs x}{d \bs y} \right)\right|\, d \bs y$ So it follows that $g$ defined in the theorem is a PDF for $\bs Y$.
The Jacobian is the infinitesimal scale factor that describes how $n$-dimensional volume changes under the transformation.
Special Transformations
Linear Transformations
Linear transformations (or more technically affine transformations) are among the most common and important transformations. Moreover, this type of transformation leads to simple applications of the change of variable theorems. Suppose first that $X$ is a random variable taking values in an interval $S \subseteq \R$ and that $X$ has a continuous distribution on $S$ with probability density function $f$. Let $Y = a + b \, X$ where $a \in \R$ and $b \in \R \setminus\{0\}$. Note that $Y$ takes values in $T = \{y = a + b x: x \in S\}$, which is also an interval.
$Y$ has probability density function $g$ given by $g(y) = \frac{1}{\left|b\right|} f\left(\frac{y - a}{b}\right), \quad y \in T$
Proof
The transformation is $y = a + b \, x$. Hence the inverse transformation is $x = (y - a) / b$ and $dx / dy = 1 / b$. The result now follows from the change of variables theorem.
When $b \gt 0$ (which is often the case in applications), this transformation is known as a location-scale transformation; $a$ is the location parameter and $b$ is the scale parameter. Scale transformations arise naturally when physical units are changed (from feet to meters, for example). Location transformations arise naturally when the physical reference point is changed (measuring time relative to 9:00 AM as opposed to 8:00 AM, for example). The change of temperature measurement from Fahrenheit to Celsius is a location and scale transformation. Location-scale transformations are studied in more detail in the chapter on Special Distributions.
The multivariate version of this result has a simple and elegant form when the linear transformation is expressed in matrix-vector form. Thus suppose that $\bs X$ is a random variable taking values in $S \subseteq \R^n$ and that $\bs X$ has a continuous distribution on $S$ with probability density function $f$. Let $\bs Y = \bs a + \bs B \bs X$ where $\bs a \in \R^n$ and $\bs B$ is an invertible $n \times n$ matrix. Note that $\bs Y$ takes values in $T = \{\bs a + \bs B \bs x: \bs x \in S\} \subseteq \R^n$.
$\bs Y$ has probability density function $g$ given by $g(\bs y) = \frac{1}{\left| \det(\bs B)\right|} f\left[ \bs B^{-1}(\bs y - \bs a) \right], \quad \bs y \in T$
Proof
The transformation $\bs y = \bs a + \bs B \bs x$ maps $\R^n$ one-to-one and onto $\R^n$. The inverse transformation is $\bs x = \bs B^{-1}(\bs y - \bs a)$. The Jacobian of the inverse transformation is the constant function $\det (\bs B^{-1}) = 1 / \det(\bs B)$. The result now follows from the multivariate change of variables theorem.
Sums and Convolution
Simple addition of random variables is perhaps the most important of all transformations. Suppose that $X$ and $Y$ are random variables on a probability space, taking values in $R \subseteq \R$ and $S \subseteq \R$, respectively, so that $(X, Y)$ takes values in a subset of $R \times S$. Our goal is to find the distribution of $Z = X + Y$. Note that $Z$ takes values in $T = \{z \in \R: z = x + y \text{ for some } x \in R, y \in S\}$. For $z \in T$, let $D_z = \{x \in R: z - x \in S\}$.
Suppose that $(X, Y)$ has probability density function $f$.
1. If $(X, Y)$ has a discrete distribution then $Z = X + Y$ has a discrete distribution with probability density function $u$ given by $u(z) = \sum_{x \in D_z} f(x, z - x), \quad z \in T$
2. If $(X, Y)$ has a continuous distribution then $Z = X + Y$ has a continuous distribution with probability density function $u$ given by $u(z) = \int_{D_z} f(x, z - x) \, dx, \quad z \in T$
Proof
1. $\P(Z = z) = \P\left(X = x, Y = z - x \text{ for some } x \in D_z\right) = \sum_{x \in D_z} f(x, z - x)$
2. For $A \subseteq T$, let $C = \{(u, v) \in R \times S: u + v \in A\}$. Then $\P(Z \in A) = \P(X + Y \in A) = \int_C f(u, v) \, d(u, v)$ Now use the change of variables $x = u, \; z = u + v$. Then the inverse transformation is $u = x, \; v = z - x$ and the Jacobian is 1. Using the change of variables theorem (8) we have $\P(Z \in A) = \int_A \int_{D_z} f(x, z - x) \, dx \, dz$ It follows that $Z$ has probability density function $z \mapsto \int_{D_z} f(x, z - x) \, dx$.
In the discrete case, $R$ and $S$ are countable, so $T$ is also countable as is $D_z$ for each $z \in T$. In the continuous case, $R$ and $S$ are typically intervals, so $T$ is also an interval as is $D_z$ for $z \in T$. In both cases, determining $D_z$ is often the most difficult step. By far the most important special case occurs when $X$ and $Y$ are independent.
Suppose that $X$ and $Y$ are independent and have probability density functions $g$ and $h$ respectively.
1. If $X$ and $Y$ have discrete distributions then $Z = X + Y$ has a discrete distribution with probability density function $g * h$ given by $(g * h)(z) = \sum_{x \in D_z} g(x) h(z - x), \quad z \in T$
2. If $X$ and $Y$ have continuous distributions then $Z = X + Y$ has a continuous distribution with probability density function $g * h$ given by $(g * h)(z) = \int_{D_z} g(x) h(z - x) \, dx, \quad z \in T$
In both cases, the probability density function $g * h$ is called the convolution of $g$ and $h$.
Proof
Both results follows from the previous result above since $f(x, y) = g(x) h(y)$ is the probability density function of $(X, Y)$.
As before, determining this set $D_z$ is often the most challenging step in finding the probability density function of $Z$. However, there is one case where the computations simplify significantly.
Suppose again that $X$ and $Y$ are independent random variables with probability density functions $g$ and $h$, respectively.
1. In the discrete case, suppose $X$ and $Y$ take values in $\N$. Then $Z$ has probability density function $(g * h)(z) = \sum_{x = 0}^z g(x) h(z - x), \quad z \in \N$
2. In the continuous case, suppose that $X$ and $Y$ take values in $[0, \infty)$. Then $Z$ has probability density function $(g * h)(z) = \int_0^z g(x) h(z - x) \, dx, \quad z \in [0, \infty)$
Proof
1. In this case, $D_z = \{0, 1, \ldots, z\}$ for $z \in \N$.
2. In this case, $D_z = [0, z]$ for $z \in [0, \infty)$.
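Discrete convolution is straightforward to compute directly from the definition. A small Python sketch, applied to two fair dice (fractions print in lowest terms):

```python
from fractions import Fraction

def convolve(g, h):
    # convolution of two discrete PDFs, each given as a dict {value: probability}
    u = {}
    for x, p in g.items():
        for y, q in h.items():
            u[x + y] = u.get(x + y, Fraction(0)) + p * q
    return u

die = {x: Fraction(1, 6) for x in range(1, 7)}
print(convolve(die, die))   # PDF of the sum of two fair dice: 1/36, 2/36, ..., 6/36, ..., 1/36
```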
Convolution is a very important mathematical operation that occurs in areas of mathematics outside of probability, and so involving functions that are not necessarily probability density functions. The following result gives some simple properties of convolution.
Convolution (either discrete or continuous) satisfies the following properties, where $f$, $g$, and $h$ are probability density functions of the same type.
1. $f * g = g * f$ (the commutative property)
2. $(f * g) * h = f * (g * h)$ (the associative property)
Proof
An analytic proof is possible, based on the definition of convolution, but a probabilistic proof, based on sums of independent random variables is much better. Thus, suppose that $X$, $Y$, and $Z$ are independent random variables with PDFs $f$, $g$, and $h$, respectively.
1. The commutative property of convolution follows from the commutative property of addition: $X + Y = Y + X$.
2. The associative property of convolution follows from the associative property of addition: $(X + Y) + Z = X + (Y + Z)$.
Thus, in part (b) we can write $f * g * h$ without ambiguity. Of course, the constant 0 is the additive identity so $X + 0 = 0 + X = X$ for every random variable $X$. Also, a constant is independent of every other random variable. It follows that the probability density function $\delta$ of 0 (given by $\delta(0) = 1$) is the identity with respect to convolution (at least for discrete PDFs). That is, $f * \delta = \delta * f = f$. The next result is a simple corollary of the convolution theorem, but is important enough to be highlighted.
Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of independent and identically distributed real-valued random variables, with common probability density function $f$. Then $Y_n = X_1 + X_2 + \cdots + X_n$ has probability density function $f^{*n} = f * f * \cdots * f$, the $n$-fold convolution power of $f$, for $n \in \N$.
In statistical terms, $\bs X$ corresponds to sampling from the common distribution. By convention, $Y_0 = 0$, so naturally we take $f^{*0} = \delta$. When appropriately scaled and centered, the distribution of $Y_n$ converges to the standard normal distribution as $n \to \infty$. The precise statement of this result is the central limit theorem, one of the fundamental theorems of probability. The central limit theorem is studied in detail in the chapter on Random Samples. Clearly convolution power satisfies the law of exponents: $f^{*n} * f^{*m} = f^{*(n + m)}$ for $m, \; n \in \N$.
Convolution can be generalized to sums of independent variables that are not of the same type, but this generalization is usually done in terms of distribution functions rather than probability density functions.
Products and Quotients
While not as important as sums, products and quotients of real-valued random variables also occur frequently. We will limit our discussion to continuous distributions.
Suppose that $(X, Y)$ has a continuous distribution on $\R^2$ with probability density function $f$.
1. Random variable $V = X Y$ has probability density function $v \mapsto \int_{-\infty}^\infty f(x, v / x) \frac{1}{|x|} dx$
2. Random variable $W = Y / X$ has probability density function $w \mapsto \int_{-\infty}^\infty f(x, w x) |x| dx$
Proof
We introduce the auxiliary variable $U = X$ so that we have bivariate transformations and can use our change of variables formula.
1. We have the transformation $u = x$, $v = x y$ and so the inverse transformation is $x = u$, $y = v / u$. Hence $\frac{\partial(x, y)}{\partial(u, v)} = \left[\begin{matrix} 1 & 0 \\ -v/u^2 & 1/u \end{matrix} \right]$ and so the Jacobian is $1/u$. Using the change of variables theorem, the joint PDF of $(U, V)$ is $(u, v) \mapsto f(u, v / u) \frac{1}{|u|}$. Hence the PDF of $V$ is $v \mapsto \int_{-\infty}^\infty f(u, v / u) \frac{1}{|u|} du$
2. We have the transformation $u = x$, $w = y / x$ and so the inverse transformation is $x = u$, $y = u w$. Hence $\frac{\partial(x, y)}{\partial(u, w)} = \left[\begin{matrix} 1 & 0 \\ w & u \end{matrix} \right]$ and so the Jacobian is $u$. Using the change of variables formula, the joint PDF of $(U, W)$ is $(u, w) \mapsto f(u, u w) |u|$. Hence the PDF of $W$ is $w \mapsto \int_{-\infty}^\infty f(u, u w) |u| du$
If $(X, Y)$ takes values in a subset $D \subseteq \R^2$, then for a given $v \in \R$, the integral in (a) is over $\{x \in \R: (x, v / x) \in D\}$, and for a given $w \in \R$, the integral in (b) is over $\{x \in \R: (x, w x) \in D\}$. As usual, the most important special case of this result is when $X$ and $Y$ are independent.
Suppose that $X$ and $Y$ are independent random variables with continuous distributions on $\R$ having probability density functions $g$ and $h$, respectively.
1. Random variable $V = X Y$ has probability density function $v \mapsto \int_{-\infty}^\infty g(x) h(v / x) \frac{1}{|x|} dx$
2. Random variable $W = Y / X$ has probability density function $w \mapsto \int_{-\infty}^\infty g(x) h(w x) |x| dx$
Proof
These results follow immediately from the previous theorem, since $f(x, y) = g(x) h(y)$ for $(x, y) \in \R^2$.
If $X$ takes values in $S \subseteq \R$ and $Y$ takes values in $T \subseteq \R$, then for a given $v \in \R$, the integral in (a) is over $\{x \in S: v / x \in T\}$, and for a given $w \in \R$, the integral in (b) is over $\{x \in S: w x \in T\}$. As with convolution, determining the domain of integration is often the most challenging step.
Minimum and Maximum
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent real-valued random variables. The minimum and maximum transformations $U = \min\{X_1, X_2, \ldots, X_n\}, \quad V = \max\{X_1, X_2, \ldots, X_n\}$ are very important in a number of applications. For example, recall that in the standard model of structural reliability, a system consists of $n$ components that operate independently. Suppose that $X_i$ represents the lifetime of component $i \in \{1, 2, \ldots, n\}$. Then $U$ is the lifetime of the series system which operates if and only if each component is operating. Similarly, $V$ is the lifetime of the parallel system which operates if and only if at least one component is operating.
A particularly important special case occurs when the random variables are identically distributed, in addition to being independent. In this case, the sequence of variables is a random sample of size $n$ from the common distribution. The minimum and maximum variables are the extreme examples of order statistics. Order statistics are studied in detail in the chapter on Random Samples.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent real-valued random variables and that $X_i$ has distribution function $F_i$ for $i \in \{1, 2, \ldots, n\}$.
1. $V = \max\{X_1, X_2, \ldots, X_n\}$ has distribution function $H$ given by $H(x) = F_1(x) F_2(x) \cdots F_n(x)$ for $x \in \R$.
2. $U = \min\{X_1, X_2, \ldots, X_n\}$ has distribution function $G$ given by $G(x) = 1 - \left[1 - F_1(x)\right] \left[1 - F_2(x)\right] \cdots \left[1 - F_n(x)\right]$ for $x \in \R$.
Proof
1. Note that since $V$ is the maximum of the variables, $\{V \le x\} = \{X_1 \le x, X_2 \le x, \ldots, X_n \le x\}$. Hence by independence, $H(x) = \P(V \le x) = \P(X_1 \le x) \P(X_2 \le x) \cdots \P(X_n \le x) = F_1(x) F_2(x) \cdots F_n(x), \quad x \in \R$
2. Note that since $U$ is the minimum of the variables, $\{U \gt x\} = \{X_1 \gt x, X_2 \gt x, \ldots, X_n \gt x\}$. Hence by independence, \begin{align*} G(x) & = \P(U \le x) = 1 - \P(U \gt x) = 1 - \P(X_1 \gt x) \P(X_2 \gt x) \cdots \P(X_n \gt x) \\ & = 1 - [1 - F_1(x)][1 - F_2(x)] \cdots [1 - F_n(x)], \quad x \in \R \end{align*}
From part (a), note that the product of $n$ distribution functions is another distribution function. From part (b), the product of $n$ right-tail distribution functions is a right-tail distribution function. In the reliability setting, where the random variables are nonnegative, the last statement means that the product of $n$ reliability functions is another reliability function. If $X_i$ has a continuous distribution with probability density function $f_i$ for each $i \in \{1, 2, \ldots, n\}$, then $U$ and $V$ also have continuous distributions, and their probability density functions can be obtained by differentiating the distribution functions in parts (a) and (b) of the last theorem. The computations are straightforward using the product rule for derivatives, but the results are a bit of a mess.
The formulas in the last theorem are particularly nice when the random variables are identically distributed, in addition to being independent.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent real-valued random variables, with common distribution function $F$.
1. $V = \max\{X_1, X_2, \ldots, X_n\}$ has distribution function $H$ given by $H(x) = F^n(x)$ for $x \in \R$.
2. $U = \min\{X_1, X_2, \ldots, X_n\}$ has distribution function $G$ given by $G(x) = 1 - \left[1 - F(x)\right]^n$ for $x \in \R$.
In particular, it follows that a positive integer power of a distribution function is a distribution function. More generally, it's easy to see that every positive power of a distribution function is a distribution function. How could we construct a non-integer power of a distribution function in a probabilistic way?
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent real-valued random variables, with a common continuous distribution that has probability density function $f$.
1. $V = \max\{X_1, X_2, \ldots, X_n\}$ has probability density function $h$ given by $h(x) = n F^{n-1}(x) f(x)$ for $x \in \R$.
2. $U = \min\{X_1, X_2, \ldots, X_n\}$ has probability density function $g$ given by $g(x) = n\left[1 - F(x)\right]^{n-1} f(x)$ for $x \in \R$.
Coordinate Systems
For our next discussion, we will consider transformations that correspond to common distance-angle based coordinate systems—polar coordinates in the plane, and cylindrical and spherical coordinates in 3-dimensional space. First, for $(x, y) \in \R^2$, let $(r, \theta)$ denote the standard polar coordinates corresponding to the Cartesian coordinates $(x, y)$, so that $r \in [0, \infty)$ is the radial distance and $\theta \in [0, 2 \pi)$ is the polar angle.
It's best to give the inverse transformation: $x = r \cos \theta$, $y = r \sin \theta$. As we all know from calculus, the Jacobian of the transformation is $r$. Hence the following result is an immediate consequence of our change of variables theorem:
Suppose that $(X, Y)$ has a continuous distribution on $\R^2$ with probability density function $f$, and that $(R, \Theta)$ are the polar coordinates of $(X, Y)$. Then $(R, \Theta)$ has probability density function $g$ given by $g(r, \theta) = f(r \cos \theta , r \sin \theta ) r, \quad (r, \theta) \in [0, \infty) \times [0, 2 \pi)$
Next, for $(x, y, z) \in \R^3$, let $(r, \theta, z)$ denote the standard cylindrical coordinates, so that $(r, \theta)$ are the standard polar coordinates of $(x, y)$ as above, and coordinate $z$ is left unchanged. Given our previous result, the one for cylindrical coordinates should come as no surprise.
Suppose that $(X, Y, Z)$ has a continuous distribution on $\R^3$ with probability density function $f$, and that $(R, \Theta, Z)$ are the cylindrical coordinates of $(X, Y, Z)$. Then $(R, \Theta, Z)$ has probability density function $g$ given by $g(r, \theta, z) = f(r \cos \theta , r \sin \theta , z) r, \quad (r, \theta, z) \in [0, \infty) \times [0, 2 \pi) \times \R$
Finally, for $(x, y, z) \in \R^3$, let $(r, \theta, \phi)$ denote the standard spherical coordinates corresponding to the Cartesian coordinates $(x, y, z)$, so that $r \in [0, \infty)$ is the radial distance, $\theta \in [0, 2 \pi)$ is the azimuth angle, and $\phi \in [0, \pi]$ is the polar angle. (In spite of our use of the word standard, different notations and conventions are used in different subjects.)
Once again, it's best to give the inverse transformation: $x = r \sin \phi \cos \theta$, $y = r \sin \phi \sin \theta$, $z = r \cos \phi$. As we remember from calculus, the absolute value of the Jacobian is $r^2 \sin \phi$. Hence the following result is an immediate consequence of the change of variables theorem (8):
Suppose that $(X, Y, Z)$ has a continuous distribution on $\R^3$ with probability density function $f$, and that $(R, \Theta, \Phi)$ are the spherical coordinates of $(X, Y, Z)$. Then $(R, \Theta, \Phi)$ has probability density function $g$ given by $g(r, \theta, \phi) = f(r \sin \phi \cos \theta , r \sin \phi \sin \theta , r \cos \phi) r^2 \sin \phi, \quad (r, \theta, \phi) \in [0, \infty) \times [0, 2 \pi) \times [0, \pi]$
Sign and Absolute Value
Our next discussion concerns the sign and absolute value of a real-valued random variable.
Suppose that $X$ has a continuous distribution on $\R$ with distribution function $F$ and probability density function $f$.
1. $\left|X\right|$ has distribution function $G$ given by $G(y) = F(y) - F(-y)$ for $y \in [0, \infty)$.
2. $\left|X\right|$ has probability density function $g$ given by $g(y) = f(y) + f(-y)$ for $y \in [0, \infty)$.
Proof
1. $\P\left(\left|X\right| \le y\right) = \P(-y \le X \le y) = F(y) - F(-y)$ for $y \in [0, \infty)$.
2. This follows from part (a) by taking derivatives with respect to $y$.
Recall that the sign function on $\R$ (not to be confused, of course, with the sine function) is defined as follows:
$\sgn(x) = \begin{cases} -1, & x \lt 0 \\ 0, & x = 0 \\ 1, & x \gt 0 \end{cases}$
Suppose again that $X$ has a continuous distribution on $\R$ with distribution function $F$ and probability density function $f$, and suppose in addition that the distribution of $X$ is symmetric about 0. Then
1. $\left|X\right|$ has distribution function $G$ given by $G(y) = 2 F(y) - 1$ for $y \in [0, \infty)$.
2. $\left|X\right|$ has probability density function $g$ given by $g(y) = 2 f(y)$ for $y \in [0, \infty)$.
3. $\sgn(X)$ is uniformly distributed on $\{-1, 1\}$.
4. $\left|X\right|$ and $\sgn(X)$ are independent.
Proof
1. This follows from the previous theorem, since $F(-y) = 1 - F(y)$ for $y \gt 0$ by symmetry.
2. This follows from part (a) by taking derivatives.
3. Note that $\P\left[\sgn(X) = 1\right] = \P(X \gt 0) = \frac{1}{2}$ and so $\P\left[\sgn(X) = -1\right] = \frac{1}{2}$ also.
4. If $A \subseteq (0, \infty)$ then $\P\left[\left|X\right| \in A, \sgn(X) = 1\right] = \P(X \in A) = \int_A f(x) \, dx = \frac{1}{2} \int_A 2 \, f(x) \, dx = \P[\sgn(X) = 1] \P\left(\left|X\right| \in A\right)$
Examples and Applications
This subsection contains computational exercises, many of which involve special parametric families of distributions. It is always interesting when a random variable from one parametric family can be transformed into a variable from another family. It is also interesting when a parametric family is closed or invariant under some transformation on the variables in the family. Often, such properties are what make the parametric families special in the first place. Please note these properties when they occur.
Dice
Recall that a standard die is an ordinary 6-sided die, with faces labeled from 1 to 6 (usually in the form of dots). A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 occur with probability $\frac{1}{4}$ each and the other faces with probability $\frac{1}{8}$ each.
Suppose that two six-sided dice are rolled and the sequence of scores $(X_1, X_2)$ is recorded. Find the probability density function of $Y = X_1 + X_2$, the sum of the scores, in each of the following cases:
1. Both dice are standard and fair.
2. Both dice are ace-six flat.
3. The first die is standard and fair, and the second is ace-six flat.
4. The dice are both fair, but the first die has faces labeled 1, 2, 2, 3, 3, 4 and the second die has faces labeled 1, 3, 4, 5, 6, 8.
Answer
Let $Y = X_1 + X_2$ denote the sum of the scores.
1. $y$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Y = y)$ $\frac{1}{36}$ $\frac{2}{36}$ $\frac{3}{36}$ $\frac{4}{36}$ $\frac{5}{36}$ $\frac{6}{36}$ $\frac{5}{36}$ $\frac{4}{36}$ $\frac{3}{36}$ $\frac{2}{36}$ $\frac{1}{36}$
2. $y$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Y = y)$ $\frac{1}{16}$ $\frac{1}{16}$ $\frac{5}{64}$ $\frac{3}{32}$ $\frac{7}{64}$ $\frac{3}{16}$ $\frac{7}{64}$ $\frac{3}{32}$ $\frac{5}{64}$ $\frac{1}{16}$ $\frac{1}{16}$
3. $y$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Y = y)$ $\frac{2}{48}$ $\frac{3}{48}$ $\frac{4}{48}$ $\frac{5}{48}$ $\frac{6}{48}$ $\frac{8}{48}$ $\frac{6}{48}$ $\frac{5}{48}$ $\frac{4}{48}$ $\frac{3}{48}$ $\frac{2}{48}$
4. The distribution is the same as for two standard, fair dice in (a).
In the dice experiment, select two dice and select the sum random variable. Run the simulation 1000 times and compare the empirical density function to the probability density function for each of the following cases:
1. fair dice
2. ace-six flat dice
Suppose that $n$ standard, fair dice are rolled. Find the probability density function of the following variables:
1. the minimum score
2. the maximum score.
Answer
Let $U$ denote the minimum score and $V$ the maximum score.
1. $f(u) = \left(1 - \frac{u-1}{6}\right)^n - \left(1 - \frac{u}{6}\right)^n, \quad u \in \{1, 2, 3, 4, 5, 6\}$
2. $g(v) = \left(\frac{v}{6}\right)^n - \left(\frac{v - 1}{6}\right)^n, \quad v \in \{1, 2, 3, 4, 5, 6\}$
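The formulas in the answer can be verified by enumerating all $6^n$ rolls, at least for small $n$. A Python sketch with the arbitrary choice $n = 3$:

```python
from fractions import Fraction
from itertools import product

n = 3   # small enough to enumerate all 6^n rolls
rolls = list(product(range(1, 7), repeat=n))

checks = [
    (min, lambda u: Fraction(7 - u, 6) ** n - Fraction(6 - u, 6) ** n),   # part (a)
    (max, lambda v: Fraction(v, 6) ** n - Fraction(v - 1, 6) ** n),       # part (b)
]
for stat, formula in checks:
    for k in range(1, 7):
        empirical = Fraction(sum(1 for roll in rolls if stat(roll) == k), 6 ** n)
        assert empirical == formula(k)
print("both formulas verified for n =", n)
```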
In the dice experiment, select fair dice and select each of the following random variables. Vary $n$ with the scroll bar and note the shape of the density function. With $n = 4$, run the simulation 1000 times and note the agreement between the empirical density function and the probability density function.
1. minimum score
2. maximum score.
Uniform Distributions
Recall that for $n \in \N_+$, the standard measure of the size of a set $A \subseteq \R^n$ is $\lambda_n(A) = \int_A 1 \, dx$ In particular, $\lambda_1(A)$ is the length of $A$ for $A \subseteq \R$, $\lambda_2(A)$ is the area of $A$ for $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A$ for $A \subseteq \R^3$. See the technical details in (1) for more advanced information.
Now if $S \subseteq \R^n$ with $0 \lt \lambda_n(S) \lt \infty$, recall that the uniform distribution on $S$ is the continuous distribution with constant probability density function $f$ defined by $f(x) = 1 \big/ \lambda_n(S)$ for $x \in S$. Uniform distributions are studied in more detail in the chapter on Special Distributions.
Let $Y = X^2$. Find the probability density function of $Y$ and sketch the graph in each of the following cases:
1. $X$ is uniformly distributed on the interval $[0, 4]$.
2. $X$ is uniformly distributed on the interval $[-2, 2]$.
3. $X$ is uniformly distributed on the interval $[-1, 3]$.
Answer
1. $g(y) = \frac{1}{8 \sqrt{y}}, \quad 0 \lt y \lt 16$
2. $g(y) = \frac{1}{4 \sqrt{y}}, \quad 0 \lt y \lt 4$
3. $g(y) = \begin{cases} \frac{1}{4 \sqrt{y}}, & 0 \lt y \lt 1 \\ \frac{1}{8 \sqrt{y}}, & 1 \lt y \lt 9 \end{cases}$
Compare the distributions in the last exercise. In part (c), note that even a simple transformation of a simple distribution can produce a complicated distribution. In this particular case, the complexity is caused by the fact that $x \mapsto x^2$ is one-to-one on part of the domain $\{0\} \cup (1, 3]$ and two-to-one on the other part $[-1, 1] \setminus \{0\}$.
On the other hand, the uniform distribution is preserved under a linear transformation of the random variable.
Suppose that $\bs X$ has the continuous uniform distribution on $S \subseteq \R^n$. Let $\bs Y = \bs a + \bs B \bs X$, where $\bs a \in \R^n$ and $\bs B$ is an invertible $n \times n$ matrix. Then $\bs Y$ is uniformly distributed on $T = \{\bs a + \bs B \bs x: \bs x \in S\}$.
Proof
This follows directly from the general result on linear transformations in (10). Note that the PDF $g$ of $\bs Y$ is constant on $T$.
For the following three exercises, recall that the standard uniform distribution is the uniform distribution on the interval $[0, 1]$.
Suppose that $X$ and $Y$ are independent and that each has the standard uniform distribution. Let $U = X + Y$, $V = X - Y$, $W = X Y$, $Z = Y / X$. Find the probability density function of each of the following:
1. $(U, V)$
2. $U$
3. $V$
4. $W$
5. $Z$
Answer
1. $g(u, v) = \frac{1}{2}$ for $(u, v)$ in the square region $T \subset \R^2$ with vertices $\{(0,0), (1,1), (2,0), (1,-1)\}$. So $(U, V)$ is uniformly distributed on $T$.
2. $g_1(u) = \begin{cases} u, & 0 \lt u \lt 1 \\ 2 - u, & 1 \lt u \lt 2 \end{cases}$
3. $g_2(v) = \begin{cases} 1 - v, & 0 \lt v \lt 1 \\ 1 + v, & -1 \lt v \lt 0 \end{cases}$
4. $h_1(w) = -\ln w$ for $0 \lt w \le 1$
5. $h_2(z) = \begin{cases} \frac{1}{2}, & 0 \le z \le 1 \\ \frac{1}{2 z^2}, & 1 \le z \lt \infty \end{cases}$
Suppose that $X$, $Y$, and $Z$ are independent, and that each has the standard uniform distribution. Find the probability density function of $(U, V, W) = (X + Y, Y + Z, X + Z)$.
Answer
$g(u, v, w) = \frac{1}{2}$ for $(u, v, w)$ in the parallelepiped region $T \subset \R^3$ with vertices $\{(0,0,0), (1,0,1), (1,1,0), (0,1,1), (2,1,1), (1,1,2), (1,2,1), (2,2,2)\}$. So $(U, V, W)$ is uniformly distributed on $T$.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each with the standard uniform distribution. Find the distribution function and probability density function of the following variables.
1. $U = \min\{X_1, X_2 \ldots, X_n\}$
2. $V = \max\{X_1, X_2, \ldots, X_n\}$
Answer
1. $G(t) = 1 - (1 - t)^n$ and $g(t) = n(1 - t)^{n-1}$, both for $t \in [0, 1]$
2. $H(t) = t^n$ and $h(t) = n t^{n-1}$, both for $t \in [0, 1]$
Both distributions in the last exercise are beta distributions. More generally, all of the order statistics from a random sample of standard uniform variables have beta distributions, one of the reasons for the importance of this family of distributions. Beta distributions are studied in more detail in the chapter on Special Distributions.
In the order statistic experiment, select the uniform distribution.
1. Set $k = 1$ (this gives the minimum $U$). Vary $n$ with the scroll bar and note the shape of the probability density function. With $n = 5$, run the simulation 1000 times and note the agreement between the empirical density function and the true probability density function.
2. Vary $n$ with the scroll bar, set $k = n$ each time (this gives the maximum $V$), and note the shape of the probability density function. With $n = 5$ run the simulation 1000 times and compare the empirical density function and the probability density function.
Let $f$ denote the probability density function of the standard uniform distribution.
1. Compute $f^{*2}$
2. Compute $f^{*3}$
3. Graph $f$, $f^{*2}$, and $f^{*3}$ on the same set of axes.
Answer
1. $f^{*2}(z) = \begin{cases} z, & 0 \lt z \lt 1 \\ 2 - z, & 1 \lt z \lt 2 \end{cases}$
2. $f^{*3}(z) = \begin{cases} \frac{1}{2} z^2, & 0 \lt z \lt 1 \\ 1 - \frac{1}{2}(z - 1)^2 - \frac{1}{2}(2 - z)^2, & 1 \lt z \lt 2 \\ \frac{1}{2} (3 - z)^2, & 2 \lt z \lt 3 \end{cases}$
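The convolution powers can also be approximated numerically by discretizing the density on a fine grid and convolving the grids. The sketch below checks the closed forms at $z = 1.5$; the grid width of $0.001$ is an arbitrary choice:

```python
import numpy as np

dx = 0.001
f = np.ones(int(1 / dx))        # the standard uniform PDF, discretized on [0, 1)

f2 = np.convolve(f, f) * dx     # approximates f^{*2} on [0, 2]
f3 = np.convolve(f2, f) * dx    # approximates f^{*3} on [0, 3]

z = 1.5
print(f2[int(z / dx)], 2 - z)                                   # f^{*2}(1.5) = 0.5
print(f3[int(z / dx)], 1 - (z - 1)**2 / 2 - (2 - z)**2 / 2)     # f^{*3}(1.5) = 0.75
```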
In the last exercise, you can see the behavior predicted by the central limit theorem beginning to emerge. Recall that if $(X_1, X_2, X_3)$ is a sequence of independent random variables, each with the standard uniform distribution, then $f$, $f^{*2}$, and $f^{*3}$ are the probability density functions of $X_1$, $X_1 + X_2$, and $X_1 + X_2 + X_3$, respectively. More generally, if $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each with the standard uniform distribution, then the distribution of $\sum_{i=1}^n X_i$ (which has probability density function $f^{*n}$) is known as the Irwin-Hall distribution with parameter $n$. The Irwin-Hall distributions are studied in more detail in the chapter on Special Distributions.
Open the Special Distribution Simulator and select the Irwin-Hall distribution. Vary the parameter $n$ from 1 to 3 and note the shape of the probability density function. (These are the density functions in the previous exercise.) For each value of $n$, run the simulation 1000 times and compare the empirical density function and the probability density function.
Simulations
A remarkable fact is that the standard uniform distribution can be transformed into almost any other distribution on $\R$. This is particularly important for simulations, since many computer languages have an algorithm for generating random numbers, which are simulations of independent variables, each with the standard uniform distribution. Conversely, any continuous distribution supported on an interval of $\R$ can be transformed into the standard uniform distribution.
Suppose first that $F$ is a distribution function for a distribution on $\R$ (which may be discrete, continuous, or mixed), and let $F^{-1}$ denote the quantile function.
Suppose that $U$ has the standard uniform distribution. Then $X = F^{-1}(U)$ has distribution function $F$.
Proof
The critical property satisfied by the quantile function (regardless of the type of distribution) is $F^{-1}(p) \le x$ if and only if $p \le F(x)$ for $p \in (0, 1)$ and $x \in \R$. Hence for $x \in \R$, $\P(X \le x) = \P\left[F^{-1}(U) \le x\right] = \P[U \le F(x)] = F(x)$.
Assuming that we can compute $F^{-1}$, the previous exercise shows how we can simulate a distribution with distribution function $F$. To rephrase the result, we can simulate a variable with distribution function $F$ by simply computing a random quantile. Most of the apps in this project use this method of simulation. The first image below shows the graph of the distribution function of a rather complicated mixed distribution, represented in blue on the horizontal axis. In the second image, note how the uniform distribution on $[0, 1]$, represented by the thick red line, is transformed, via the quantile function, into the given distribution.
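As a concrete illustration of the random quantile method, the sketch below simulates the Pareto distribution with shape parameter $a$, whose quantile function $F^{-1}(p) = (1 - p)^{-1/a}$ has a simple closed form; the choice $a = 2$ is arbitrary:

```python
import random

def pareto_quantile(p, a):
    # quantile function of the Pareto distribution: F^{-1}(p) = (1 - p)^(-1/a)
    return (1.0 - p) ** (-1.0 / a)

a = 2.0   # arbitrary shape parameter
print([pareto_quantile(random.random(), a) for _ in range(5)])   # five simulated values
```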
There is a partial converse to the previous result, for continuous distributions.
Suppose that $X$ has a continuous distribution on an interval $S \subseteq \R$, with distribution function $F$. Then $U = F(X)$ has the standard uniform distribution.
Proof
For $u \in (0, 1)$ recall that $F^{-1}(u)$ is a quantile of order $u$. Since $X$ has a continuous distribution, $\P(U \ge u) = \P[F(X) \ge u] = \P[X \ge F^{-1}(u)] = 1 - F[F^{-1}(u)] = 1 - u$ Hence $U$ is uniformly distributed on $(0, 1)$.
Show how to simulate the uniform distribution on the interval $[a, b]$ with a random number. Using your calculator, simulate 5 values from the uniform distribution on the interval $[2, 10]$.
Answer
$X = a + U(b - a)$ where $U$ is a random number.
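In Python, for example, with the endpoints of the exercise:

```python
import random

a, b = 2, 10
print([a + (b - a) * random.random() for _ in range(5)])   # five simulated values from uniform [2, 10]
```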
Beta Distributions
Suppose that $X$ has the probability density function $f$ given by $f(x) = 3 x^2$ for $0 \le x \le 1$. Find the probability density function of each of the following:
1. $U = X^2$
2. $V = \sqrt{X}$
3. $W = \frac{1}{X}$
Answer
1. $g(u) = \frac{3}{2} u^{1/2}$, for $0 \lt u \le 1$
2. $h(v) = 6 v^5$ for $0 \le v \le 1$
3. $k(w) = \frac{3}{w^4}$ for $1 \le w \lt \infty$
Random variables $X$, $U$, and $V$ in the previous exercise have beta distributions, the same family of distributions that we saw in the exercise above for the minimum and maximum of independent standard uniform variables. In general, beta distributions are widely used to model random proportions and probabilities, as well as physical quantities that take values in closed bounded intervals (which after a change of units can be taken to be $[0, 1]$). On the other hand, $W$ has a Pareto distribution, named for Vilfredo Pareto. The family of beta distributions and the family of Pareto distributions are studied in more detail in the chapter on Special Distributions.
Suppose that the radius $R$ of a sphere has a beta distribution, with probability density function $f$ given by $f(r) = 12 r^2 (1 - r)$ for $0 \le r \le 1$. Find the probability density function of each of the following:
1. The circumference $C = 2 \pi R$
2. The surface area $A = 4 \pi R^2$
3. The volume $V = \frac{4}{3} \pi R^3$
Answer
1. $g(c) = \frac{3}{4 \pi^4} c^2 (2 \pi - c)$ for $0 \le c \le 2 \pi$
2. $h(a) = \frac{3}{8 \pi^2} \sqrt{a}\left(2 \sqrt{\pi} - \sqrt{a}\right)$ for $0 \le a \le 4 \pi$
3. $k(v) = \frac{3}{\pi} \left[1 - \left(\frac{3}{4 \pi}\right)^{1/3} v^{1/3} \right]$ for $0 \le v \le \frac{4}{3} \pi$
Suppose that the grades on a test are described by the random variable $Y = 100 X$ where $X$ has the beta distribution with probability density function $f$ given by $f(x) = 12 x (1 - x)^2$ for $0 \le x \le 1$. The grades are generally low, so the teacher decides to curve the grades using the transformation $Z = 10 \sqrt{Y} = 100 \sqrt{X}$. Find the probability density function of
1. $Y$
2. $Z$
Answer
1. $g(y) = \frac{3}{25} \left(\frac{y}{100}\right)\left(1 - \frac{y}{100}\right)^2$ for $0 \le y \le 100$.
2. $h(z) = \frac{3}{1250} z \left(\frac{z^2}{10\,000}\right)\left(1 - \frac{z^2}{10\,000}\right)^2$ for $0 \le z \le 100$
Bernoulli Trials
Recall that a Bernoulli trials sequence is a sequence $(X_1, X_2, \ldots)$ of independent, identically distributed indicator random variables. In the usual terminology of reliability theory, $X_i = 0$ means failure on trial $i$, while $X_i = 1$ means success on trial $i$. The basic parameter of the process is the probability of success $p = \P(X_i = 1)$, so $p \in [0, 1]$. The random process is named for Jacob Bernoulli and is studied in detail in the chapter on Bernoulli trials.
For $i \in \N_+$, the probability density function $f$ of the trial variable $X_i$ is $f(x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$.
Proof
By definition, $f(0) = 1 - p$ and $f(1) = p$. These can be combined succinctly with the formula $f(x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$.
Now let $Y_n$ denote the number of successes in the first $n$ trials, so that $Y_n = \sum_{i=1}^n X_i$ for $n \in \N$.
$Y_n$ has the probability density function $f_n$ given by $f_n(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\}$
Proof
We have seen this derivation before. The number of bit strings of length $n$ with 1 occurring exactly $y$ times is $\binom{n}{y}$ for $y \in \{0, 1, \ldots, n\}$. By the Bernoulli trials assumptions, the probability of each such bit string is $p^y (1 - p)^{n-y}$.
The distribution of $Y_n$ is the binomial distribution with parameters $n$ and $p$. The binomial distribution is studied in more detail in the chapter on Bernoulli trials.
For $m, \, n \in \N$
1. $f_n = f^{*n}$.
2. $f_m * f_n = f_{m + n}$.
Proof
Part (a) can be proved directly from the definition of convolution, but the result also follows simply from the fact that $Y_n = X_1 + X_2 + \cdots + X_n$. Part (b) then follows from (a), since $f_m * f_n = f^{*m} * f^{*n} = f^{*(m+n)} = f_{m+n}$.
From part (b) it follows that if $Y$ and $Z$ are independent variables, $Y$ has the binomial distribution with parameters $n \in \N$ and $p \in [0, 1]$, and $Z$ has the binomial distribution with parameters $m \in \N$ and $p$, then $Y + Z$ has the binomial distribution with parameters $m + n$ and $p$.
Find the probability density function of the difference between the number of successes and the number of failures in $n \in \N$ Bernoulli trials with success parameter $p \in [0, 1]$.
Answer
$f(k) = \binom{n}{(n+k)/2} p^{(n+k)/2} (1 - p)^{(n-k)/2}$ for $k \in \{-n, 2 - n, \ldots, n - 2, n\}$
The Poisson Distribution
Recall that the Poisson distribution with parameter $t \in (0, \infty)$ has probability density function $f_t$ given by $f_t(n) = e^{-t} \frac{t^n}{n!}, \quad n \in \N$ This distribution is named for Simeon Poisson and is widely used to model the number of random points in a region of time or space; the parameter $t$ is proportional to the size of the region. The Poisson distribution is studied in detail in the chapter on The Poisson Process.
If $a, \, b \in (0, \infty)$ then $f_a * f_b = f_{a+b}$.
Proof
Let $z \in \N$. Using the definition of convolution and the binomial theorem we have \begin{align} (f_a * f_b)(z) & = \sum_{x = 0}^z f_a(x) f_b(z - x) = \sum_{x = 0}^z e^{-a} \frac{a^x}{x!} e^{-b} \frac{b^{z - x}}{(z - x)!} = e^{-(a + b)} \frac{1}{z!} \sum_{x=0}^z \frac{z!}{x!(z - x)!} a^{x} b^{z - x} \\ & = e^{-(a+b)} \frac{1}{z!} \sum_{x=0}^z \binom{z}{x} a^x b^{z-x} = e^{-(a + b)} \frac{(a + b)^z}{z!} = f_{a+b}(z) \end{align}
The last result means that if $X$ and $Y$ are independent variables, and $X$ has the Poisson distribution with parameter $a \gt 0$ while $Y$ has the Poisson distribution with parameter $b \gt 0$, then $X + Y$ has the Poisson distribution with parameter $a + b$. In terms of the Poisson model, $X$ could represent the number of points in a region $A$ and $Y$ the number of points in a region $B$ (of the appropriate sizes so that the parameters are $a$ and $b$ respectively). The independence of $X$ and $Y$ corresponds to the regions $A$ and $B$ being disjoint. Then $X + Y$ is the number of points in $A \cup B$.
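The additivity property is easy to check numerically. Here is a minimal Python sketch that compares the convolution sum with the Poisson density $f_{a+b}$ directly; the parameter values $a = 2$ and $b = 3$ are arbitrary choices for illustration.

```python
import math

def poisson_pdf(t, n):
    """Poisson probability density function with parameter t, evaluated at n."""
    return math.exp(-t) * t ** n / math.factorial(n)

a, b = 2.0, 3.0  # arbitrary parameters for the check
for z in range(10):
    conv = sum(poisson_pdf(a, x) * poisson_pdf(b, z - x) for x in range(z + 1))
    direct = poisson_pdf(a + b, z)
    assert abs(conv - direct) < 1e-12
print("f_a * f_b agrees with f_(a+b) for z = 0, 1, ..., 9")
```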
The Exponential Distribution
Recall that the exponential distribution with rate parameter $r \in (0, \infty)$ has probability density function $f$ given by $f(t) = r e^{-r t}$ for $t \in [0, \infty)$. This distribution is often used to model random times such as failure times and lifetimes. In particular, the times between arrivals in the Poisson model of random points in time have independent, identically distributed exponential distributions. The exponential distribution is studied in more detail in the chapter on the Poisson Process.
Show how to simulate, with a random number, the exponential distribution with rate parameter $r$. Using your calculator, simulate 5 values from the exponential distribution with parameter $r = 3$.
Answer
$X = -\frac{1}{r} \ln(1 - U)$ where $U$ is a random number. Since $1 - U$ is also a random number, a simpler solution is $X = -\frac{1}{r} \ln U$.
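Here is a minimal Python sketch of this simulation, with the standard library function random.random() playing the role of the random number $U$. Using $1 - U$ rather than $U$ avoids taking the logarithm of 0, since random.random() can return 0 but never 1.

```python
import random
import math

def sim_exponential(r):
    """One value from the exponential distribution with rate r,
    via the random quantile method: X = -ln(1 - U) / r."""
    u = random.random()              # U is uniform on [0, 1)
    return -math.log(1.0 - u) / r    # 1 - U lies in (0, 1]

# five simulated values with rate parameter r = 3
print([round(sim_exponential(3), 4) for _ in range(5)])
```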
For the next exercise, recall that the floor and ceiling functions on $\R$ are defined by $\lfloor x \rfloor = \max\{n \in \Z: n \le x\}, \; \lceil x \rceil = \min\{n \in \Z: n \ge x\}, \quad x \in \R$
Suppose that $T$ has the exponential distribution with rate parameter $r \in (0, \infty)$. Find the probability density function of each of the following random variables:
1. $Y = \lfloor T \rfloor$
2. $Z = \lceil T \rceil$
Answer
1. $\P(Y = n) = e^{-r n} \left(1 - e^{-r}\right)$ for $n \in \N$
2. $\P(Z = n) = e^{-r(n-1)} \left(1 - e^{-r}\right)$ for $n \in \N$
Note that the distributions in the previous exercise are geometric distributions on $\N$ and on $\N_+$, respectively. In many respects, the geometric distribution is a discrete version of the exponential distribution.
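This correspondence is easy to check by simulation. The following Python sketch, with an arbitrarily chosen rate $r = 0.5$, compares the empirical distribution of $Y = \lfloor T \rfloor$ with the geometric density in part (a).

```python
import random, math

r, trials = 0.5, 100_000    # arbitrary rate parameter and sample size
counts = {}
for _ in range(trials):
    t = -math.log(1.0 - random.random()) / r   # exponential with rate r
    n = int(t)                                  # Y = floor(T), since t >= 0
    counts[n] = counts.get(n, 0) + 1

for n in range(5):
    empirical = counts.get(n, 0) / trials
    exact = math.exp(-r * n) * (1.0 - math.exp(-r))
    print(n, round(empirical, 4), round(exact, 4))
```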
Suppose that $T$ has the exponential distribution with rate parameter $r \in (0, \infty)$. Find the probability density function of each of the following random variables:
1. $X = T^2$
2. $Y = e^{T}$
3. $Z = \ln T$
Answer
1. $g(x) = r e^{-r \sqrt{x}} \big/ 2 \sqrt{x}$ for $0 \lt x \lt \infty$
2. $h(y) = r y^{-(r+1)}$ for $1 \lt y \lt \infty$
3. $k(z) = r \exp\left(-r e^z\right) e^z$ for $z \in \R$
In the previous exercise, $Y$ has a Pareto distribution while $Z$ has an extreme value distribution. Both of these are studied in more detail in the chapter on Special Distributions.
Suppose that $X$ and $Y$ are independent random variables, each having the exponential distribution with parameter 1. Let $Z = \frac{Y}{X}$.
1. Find the distribution function of $Z$.
2. Find the probability density function of $Z$.
Answer
1. $G(z) = 1 - \frac{1}{1 + z}, \quad 0 \lt z \lt \infty$
2. $g(z) = \frac{1}{(1 + z)^2}, \quad 0 \lt z \lt \infty$
Suppose that $X$ has the exponential distribution with rate parameter $a \gt 0$, $Y$ has the exponential distribution with rate parameter $b \gt 0$, and that $X$ and $Y$ are independent. Find the probability density function of $Z = X + Y$ in each of the following cases.
1. $a = b$
2. $a \ne b$
Answer
1. $h(z) = a^2 z e^{-a z}$ for $0 \lt z \lt \infty$
2. $h(z) = \frac{a b}{b - a} \left(e^{-a z} - e^{-b z}\right)$ for $0 \lt z \lt \infty$
Suppose that $(T_1, T_2, \ldots, T_n)$ is a sequence of independent random variables, and that $T_i$ has the exponential distribution with rate parameter $r_i \gt 0$ for each $i \in \{1, 2, \ldots, n\}$.
1. Find the probability density function of $U = \min\{T_1, T_2, \ldots, T_n\}$.
2. Find the distribution function of $V = \max\{T_1, T_2, \ldots, T_n\}$.
3. Find the probability density function of $V$ in the special case that $r_i = r$ for each $i \in \{1, 2, \ldots, n\}$.
Answer
1. $g(t) = a e^{-a t}$ for $0 \le t \lt \infty$ where $a = r_1 + r_2 + \cdots + r_n$
2. $H(t) = \left(1 - e^{-r_1 t}\right) \left(1 - e^{-r_2 t}\right) \cdots \left(1 - e^{-r_n t}\right)$ for $0 \le t \lt \infty$
3. $h(t) = n r e^{-r t} \left(1 - e^{-r t}\right)^{n-1}$ for $0 \le t \lt \infty$
Note that the minimum $U$ in part (a) has the exponential distribution with parameter $r_1 + r_2 + \cdots + r_n$. In particular, suppose that a series system has independent components, each with an exponentially distributed lifetime. Then the lifetime of the system is also exponentially distributed, and the failure rate of the system is the sum of the component failure rates.
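Here is a Python sketch that checks this fact by simulation; the component rates are arbitrary choices. The empirical distribution function of the minimum is compared with the exponential distribution function with the summed rate.

```python
import random, math

rates = [1.0, 2.0, 3.0]    # arbitrary component failure rates
a = sum(rates)             # rate parameter of the minimum
trials = 100_000

def sim_exponential(r):
    return -math.log(1.0 - random.random()) / r

mins = [min(sim_exponential(r) for r in rates) for _ in range(trials)]
for t in [0.1, 0.25, 0.5]:
    empirical = sum(1 for m in mins if m <= t) / trials
    exact = 1.0 - math.exp(-a * t)
    print(t, round(empirical, 3), round(exact, 3))
```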
In the order statistic experiment, select the exponential distribution.
1. Set $k = 1$ (this gives the minimum $U$). Vary $n$ with the scroll bar and note the shape of the probability density function. With $n = 5$, run the simulation 1000 times and compare the empirical density function and the probability density function.
2. Vary $n$ with the scroll bar and set $k = n$ each time (this gives the maximum $V$). Note the shape of the density function. With $n = 5$, run the simulation 1000 times and compare the empirical density function and the probability density function.
Suppose again that $(T_1, T_2, \ldots, T_n)$ is a sequence of independent random variables, and that $T_i$ has the exponential distribution with rate parameter $r_i \gt 0$ for each $i \in \{1, 2, \ldots, n\}$. Then $\P\left(T_i \lt T_j \text{ for all } j \ne i\right) = \frac{r_i}{\sum_{j=1}^n r_j}$
Proof
When $n = 2$, the result was shown in the section on joint distributions. Returning to the case of general $n$, note that $T_i \lt T_j$ for all $j \ne i$ if and only if $T_i \lt \min\left\{T_j: j \ne i\right\}$. Note that the minimum on the right is independent of $T_i$ and by the result above, has an exponential distribution with parameter $\sum_{j \ne i} r_j$.
The result in the previous exercise is very important in the theory of continuous-time Markov chains. If we have a bunch of independent alarm clocks, with exponentially distributed alarm times, then the probability that clock $i$ is the first one to sound is $r_i \big/ \sum_{j = 1}^n r_j$.
The Gamma Distribution
Recall that the (standard) gamma distribution with shape parameter $n \in \N_+$ has probability density function $g_n(t) = e^{-t} \frac{t^{n-1}}{(n - 1)!}, \quad 0 \le t \lt \infty$ With a positive integer shape parameter, as we have here, it is also referred to as the Erlang distribution, named for Agner Erlang. This distribution is widely used to model random times under certain basic assumptions. In particular, the $n$th arrival time in the Poisson model of random points in time has the gamma distribution with parameter $n$. The Erlang distribution is studied in more detail in the chapter on the Poisson Process, and in greater generality, the gamma distribution is studied in the chapter on Special Distributions.
Let $g = g_1$, and note that this is the probability density function of the exponential distribution with parameter 1, which was the topic of our last discussion.
If $m, \, n \in \N_+$ then
1. $g_n = g^{*n}$
2. $g_m * g_n = g_{m+n}$
Proof
Part (a) holds trivially when $n = 1$. Also, for $t \in [0, \infty)$, $g_n * g(t) = \int_0^t g_n(s) g(t - s) \, ds = \int_0^t e^{-s} \frac{s^{n-1}}{(n - 1)!} e^{-(t - s)} \, ds = e^{-t} \int_0^t \frac{s^{n-1}}{(n - 1)!} \, ds = e^{-t} \frac{t^n}{n!} = g_{n+1}(t)$ so part (a) follows by induction. Part (b) follows from (a), since $g_m * g_n = g^{*m} * g^{*n} = g^{*(m+n)} = g_{m+n}$.
Part (b) means that if $X$ has the gamma distribution with shape parameter $m$ and $Y$ has the gamma distribution with shape parameter $n$, and if $X$ and $Y$ are independent, then $X + Y$ has the gamma distribution with shape parameter $m + n$. In the context of the Poisson model, part (a) means that the $n$th arrival time is the sum of the $n$ independent interarrival times, which have a common exponential distribution.
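Part (a) also gives a simple way to simulate the Erlang distribution. The following Python sketch generates a value with shape parameter $n$ as the sum of $n$ independent standard exponential variables; the shape $n = 3$ and sample size are arbitrary choices.

```python
import random, math

def sim_gamma(n):
    """Sum of n independent standard exponential variables, which has the
    gamma (Erlang) distribution with shape parameter n."""
    return sum(-math.log(1.0 - random.random()) for _ in range(n))

n, trials = 3, 100_000
sample = [sim_gamma(n) for _ in range(trials)]
print(sum(sample) / trials)   # sample mean; should be close to n = 3
```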
Suppose that $T$ has the gamma distribution with shape parameter $n \in \N_+$. Find the probability density function of $X = \ln T$.
Answer
$h(x) = \frac{1}{(n-1)!} \exp\left(-e^x\right) e^{n x}$ for $x \in \R$
The Pareto Distribution
Recall that the Pareto distribution with shape parameter $a \in (0, \infty)$ has probability density function $f$ given by $f(x) = \frac{a}{x^{a+1}}, \quad 1 \le x \lt \infty$ Members of this family have already come up in several of the previous exercises. The Pareto distribution, named for Vilfredo Pareto, is a heavy-tailed distribution often used for modeling income and other financial variables. The Pareto distribution is studied in more detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a$. Find the probability density function of each of the following random variables:
1. $U = X^2$
2. $V = \frac{1}{X}$
3. $Y = \ln X$
Answer
1. $g(u) = \frac{a / 2}{u^{a / 2 + 1}}$ for $1 \le u \lt \infty$
2. $h(v) = a v^{a-1}$ for $0 \lt v \lt 1$
3. $k(y) = a e^{-a y}$ for $0 \le y \lt \infty$
In the previous exercise, $U$ also has a Pareto distribution, but with shape parameter $\frac{a}{2}$; $V$ has the beta distribution with parameters $a$ and $b = 1$; and $Y$ has the exponential distribution with rate parameter $a$.
Show how to simulate, with a random number, the Pareto distribution with shape parameter $a$. Using your calculator, simulate 5 values from the Pareto distribution with shape parameter $a = 2$.
Answer
Using the random quantile method, $X = \frac{1}{(1 - U)^{1/a}}$ where $U$ is a random number. More simply, $X = \frac{1}{U^{1/a}}$, since $1 - U$ is also a random number.
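Here is a minimal Python sketch of the random quantile method for the Pareto distribution, using random.random() as the random number.

```python
import random

def sim_pareto(a):
    """One value from the Pareto distribution with shape parameter a,
    via the random quantile method: X = 1 / U^(1/a)."""
    u = 1.0 - random.random()     # lies in (0, 1], avoiding division by zero
    return 1.0 / u ** (1.0 / a)

# five simulated values with shape parameter a = 2
print([round(sim_pareto(2), 4) for _ in range(5)])
```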
The Normal Distribution
Recall that the standard normal distribution has probability density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$
Suppose that $Z$ has the standard normal distribution, and that $\mu \in (-\infty, \infty)$ and $\sigma \in (0, \infty)$.
1. Find the probability density function $f$ of $X = \mu + \sigma Z$
2. Sketch the graph of $f$, noting the important qualitative features.
Answer
1. $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2} \left(\frac{x - \mu}{\sigma}\right)^2\right]$ for $x \in \R$
2. $f$ is symmetric about $x = \mu$. $f$ increases and then decreases, with mode $x = \mu$. $f$ is concave upward, then downward, then upward again, with inflection points at $x = \mu \pm \sigma$. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
Random variable $X$ has the normal distribution with location parameter $\mu$ and scale parameter $\sigma$. The normal distribution is perhaps the most important distribution in probability and mathematical statistics, primarily because of the central limit theorem, one of the fundamental theorems. It is widely used to model physical measurements of all types that are subject to small, random errors. The normal distribution is studied in detail in the chapter on Special Distributions.
Suppose that $Z$ has the standard normal distribution. Find the probability density function of $Z^2$ and sketch the graph.
Answer
$g(v) = \frac{1}{\sqrt{2 \pi v}} e^{-\frac{1}{2} v}$ for $0 \lt v \lt \infty$
Random variable $V$ has the chi-square distribution with 1 degree of freedom. Chi-square distributions are studied in detail in the chapter on Special Distributions.
Suppose that $X$ and $Y$ are independent random variables, each with the standard normal distribution, and let $(R, \Theta)$ be the standard polar coordinates of $(X, Y)$. Find the probability density function of
1. $(R, \Theta)$
2. $R$
3. $\Theta$
Answer
Note that the joint PDF of $(X, Y)$ is $f(x, y) = \phi(x) \phi(y) = \frac{1}{2 \pi} e^{-\frac{1}{2}\left(x^2 + y^2\right)}, \quad (x, y) \in \R^2$ From the result above on polar coordinates, the PDF of $(R, \Theta)$ is $g(r, \theta) = f(r \cos \theta , r \sin \theta) r = \frac{1}{2 \pi} r e^{-\frac{1}{2} r^2}, \quad (r, \theta) \in [0, \infty) \times [0, 2 \pi)$ From the factorization theorem for joint PDFs, it follows that $R$ has probability density function $h(r) = r e^{-\frac{1}{2} r^2}$ for $0 \le r \lt \infty$, $\Theta$ is uniformly distributed on $[0, 2 \pi)$, and that $R$ and $\Theta$ are independent.
The distribution of $R$ is the (standard) Rayleigh distribution, and is named for John William Strutt, Lord Rayleigh. The Rayleigh distribution is studied in more detail in the chapter on Special Distributions.
The standard normal distribution does not have a simple, closed form quantile function, so the random quantile method of simulation does not work well. However, the last exercise points the way to an alternative method of simulation.
Show how to simulate a pair of independent, standard normal variables with a pair of random numbers. Using your calculator, simulate 6 values from the standard normal distribution.
Answer
The Rayleigh distribution in the last exercise has CDF $H(r) = 1 - e^{-\frac{1}{2} r^2}$ for $0 \le r \lt \infty$, and hence quantile function $H^{-1}(p) = \sqrt{-2 \ln(1 - p)}$ for $0 \le p \lt 1$. Thus we can simulate the polar radius $R$ with a random number $U$ by $R = \sqrt{-2 \ln(1 - U)}$, or a bit more simply by $R = \sqrt{-2 \ln U}$, since $1 - U$ is also a random number. We can simulate the polar angle $\Theta$ with a random number $V$ by $\Theta = 2 \pi V$. Then, a pair of independent, standard normal variables can be simulated by $X = R \cos \Theta$, $Y = R \sin \Theta$.
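Here is a minimal Python sketch of this polar method, producing pairs of independent standard normal values from pairs of random numbers.

```python
import random, math

def sim_normal_pair():
    """Two independent standard normal values from two random numbers:
    R = sqrt(-2 ln(1 - U)) is the Rayleigh radius, Theta = 2 pi V the angle."""
    r = math.sqrt(-2.0 * math.log(1.0 - random.random()))
    theta = 2.0 * math.pi * random.random()
    return r * math.cos(theta), r * math.sin(theta)

# six simulated standard normal values (three pairs)
values = [z for _ in range(3) for z in sim_normal_pair()]
print([round(z, 4) for z in values])
```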
The Cauchy Distribution
Suppose that $X$ and $Y$ are independent random variables, each with the standard normal distribution. Find the probability density function of $T = X / Y$.
Answer
As usual, let $\phi$ denote the standard normal PDF, so that $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}$ for $z \in \R$. Using the theorem on quotients above, the PDF $f$ of $T$ is given by $f(t) = \int_{-\infty}^\infty \phi(x) \phi(t x) |x| dx = \frac{1}{2 \pi} \int_{-\infty}^\infty e^{-(1 + t^2) x^2/2} |x| dx, \quad t \in \R$ Using symmetry and a simple substitution, $f(t) = \frac{1}{\pi} \int_0^\infty x e^{-(1 + t^2) x^2/2} dx = \frac{1}{\pi (1 + t^2)}, \quad t \in \R$
Random variable $T$ has the (standard) Cauchy distribution, named after Augustin Cauchy. The Cauchy distribution is studied in detail in the chapter on Special Distributions.
Suppose that a light source is 1 unit away from position 0 on an infinite straight wall. We shine the light at the wall an angle $\Theta$ to the perpendicular, where $\Theta$ is uniformly distributed on $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Find the probability density function of the position of the light beam $X = \tan \Theta$ on the wall.
Answer
The PDF of $\Theta$ is $f(\theta) = \frac{1}{\pi}$ for $-\frac{\pi}{2} \le \theta \le \frac{\pi}{2}$. The transformation is $x = \tan \theta$ so the inverse transformation is $\theta = \arctan x$. Recall that $\frac{d\theta}{dx} = \frac{1}{1 + x^2}$, so by the change of variables formula, $X$ has PDF $g$ given by $g(x) = \frac{1}{\pi \left(1 + x^2\right)}, \quad x \in \R$
Thus, $X$ also has the standard Cauchy distribution. Clearly we can simulate a value of the Cauchy distribution by $X = \tan\left(-\frac{\pi}{2} + \pi U\right)$ where $U$ is a random number. This is the random quantile method.
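Here is a minimal Python sketch of this random quantile method for the standard Cauchy distribution.

```python
import random, math

def sim_cauchy():
    """One value from the standard Cauchy distribution,
    via the random quantile method: X = tan(-pi/2 + pi U)."""
    return math.tan(-math.pi / 2.0 + math.pi * random.random())

# five simulated values
print([round(sim_cauchy(), 4) for _ in range(5)])
```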
Open the Cauchy experiment, which is a simulation of the light problem in the previous exercise. Keep the default parameter values and run the experiment in single step mode a few times. Then run the experiment 1000 times and compare the empirical density function and the probability density function. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/03%3A_Distributions/3.07%3A_Transformations_of_Random_Variables.txt |
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\cl}{\text{cl}}$ $\newcommand{\interior}{\text{int}}$ $\newcommand{\bs}{\boldsymbol}$
This section is concerned with the convergence of probability distributions, a topic of basic importance in probability theory. Since we will be almost exclusively concerned with the convergence of sequences of various kinds, it's helpful to introduce the notation $\N_+^* = \N_+ \cup \{\infty\} = \{1, 2, \ldots\} \cup \{\infty\}$.
Distributions on $(\R, \mathscr R)$
Definition
We start with the most important and basic setting, the measurable space $(\R, \mathscr R)$, where $\R$ is the set of real numbers of course, and $\mathscr R$ is the Borel $\sigma$-algebra of subsets of $\R$. Recall that if $P$ is a probability measure on $(\R, \mathscr R)$, then the function $F: \R \to [0, 1]$ defined by $F(x) = P(-\infty, x]$ for $x \in \R$ is the (cumulative) distribution function of $P$. Recall also that $F$ completely determines $P$. Here is the definition for convergence of probability measures in this setting:
Suppose $P_n$ is a probability measure on $(\R, \mathscr R)$ with distribution function $F_n$ for each $n \in \N_+^*$. Then $P_n$ converges (weakly) to $P_\infty$ as $n \to \infty$ if $F_n(x) \to F_\infty(x)$ as $n \to \infty$ for every $x \in \R$ where $F_\infty$ is continuous. We write $P_n \Rightarrow P_\infty$ as $n \to \infty$.
Recall that a distribution function $F$ is continuous at $x \in \R$ if and only if $\P(X = x) = 0$, so that $x$ is not an atom of the distribution (a point of positive probability). We will see shortly why this condition on $F_\infty$ is appropriate. Of course, a probability measure on $(\R, \mathscr R)$ is usually associated with a real-valued random variable for some random experiment that is modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ is the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. If $X$ is a real-valued random variable defined on the probability space, then the distribution of $X$ is the probability measure $P$ on $(\R, \mathscr R)$ defined by $P(A) = \P(X \in A)$ for $A \in \mathscr R$, and then of course, the distribution function of $X$ is the function $F$ defined by $F(x) = \P(X \le x)$ for $x \in \R$. Here is the convergence terminology used in this setting:
Suppose that $X_n$ is a real-valued random variable with distribution $P_n$ for each $n \in \N_+^*$. If $P_n \Rightarrow P_\infty$ as $n \to \infty$ then we say that $X_n$ converges in distribution to $X_\infty$ as $n \to \infty$. We write $X_n \to X_\infty$ as $n \to \infty$ in distribution.
So if $F_n$ is the distribution function of $X_n$ for $n \in \N_+^*$, then $X_n \to X_\infty$ as $n \to \infty$ in distribution if $F_n(x) \to F_\infty(x)$ at every point $x \in \R$ where $F_\infty$ is continuous. On the one hand, the terminology and notation are helpful, since again most probability measures are associated with random variables (and every probability measure can be). On the other hand, the terminology and notation can be a bit misleading since the random variables, as functions, do not converge in any sense, and indeed the random variables need not be defined on the same probability spaces. It is only the distributions that converge. However, often the random variables are defined on the same probability space $(\Omega, \mathscr F, \P)$, in which case we can compare convergence in distribution with the other modes of convergence that we have studied or will study:
• Convergence with probability 1
• Convergence in probability
• Convergence in mean
We will show, in fact, that convergence in distribution is the weakest of all of these modes of convergence. However, strength of convergence should not be confused with importance. Convergence in distribution is one of the most important modes of convergence; the central limit theorem, one of the two fundamental theorems of probability, is a theorem about convergence in distribution.
Preliminary Examples
The examples below show why the definition is given in terms of distribution functions, rather than probability density functions, and why convergence is only required at the points of continuity of the limiting distribution function. Note that the distributions considered are probability measures on $(\R, \mathscr R)$, even though the support of the distribution may be a much smaller subset. For the first example, note that if a deterministic sequence converges in the ordinary calculus sense, then naturally we want the sequence (thought of as random variables) to converge in distribution. Expand the proof to understand the example fully.
Suppose that $x_n \in \R$ for $n \in \N_+^*$. Define random variable $X_n = x_n$ with probability 1 for each $n \in \N_+^*$. Then $x_n \to x_\infty$ as $n \to \infty$ if and only if $X_n \to X_\infty$ as $n \to \infty$ in distribution.
Proof
For $n \in \N_+^*$, the CDF $F_n$ of $X_n$ is given by $F_n(x) = 0$ for $x \lt x_n$ and $F_n(x) = 1$ for $x \ge x_n$.
1. Suppose that $x_n \to x_\infty$ as $n \to \infty$. If $x \lt x_\infty$ then $x \lt x_n$, and hence $F_n(x) = 0$, for all but finitely many $n \in \N_+$, and so $F_n(x) \to 0$ as $n \to \infty$. If $x \gt x_\infty$ then $x \gt x_n$, and hence $F_n(x) = 1$, for all but finitely many $n \in \N_+$, and so $F_n(x) \to 1$ as $n \to \infty$. Nothing can be said about the limiting behavior of $F_n(x_\infty)$ as $n \to \infty$ without more information. For example, if $x_n \le x_\infty$ for all but finitely many $n \in \N_+$ then $F_n(x_\infty) \to 1$ as $n \to \infty$. If $x_n \gt x_\infty$ for all but finitely many $n \in \N_+$ then $F_n(x_\infty) \to 0$ as $n \to \infty$. If $x_n \lt x_\infty$ for infinitely many $n \in \N_+$ and $x_n \gt x_\infty$ for infinitely many $n \in \N_+$ then $F_n(x_\infty)$ does not have a limit as $n \to \infty$. But regardless, we have $F_n(x) \to F_\infty(x)$ as $n \to \infty$ for every $x \in \R$ except perhaps $x_\infty$, the one point of discontinuity of $F_\infty$. Hence $X_n \to X_\infty$ as $n \to \infty$ in distribution.
2. Conversely, suppose that $X_n \to X_\infty$ as $n \to \infty$ in distribution. If $x \lt x_\infty$ then $F_n(x) \to 0$ as $n \to \infty$ and hence $x \lt x_n$ for all but finitely many $n \in \N_+$. If $x \gt x_\infty$ then $F_n(x) \to 1$ as $n \to \infty$ and hence $x \ge x_n$ for all but finitely many $n \in \N_+$. So, for every $\epsilon \gt 0$, $x_n \in (x_\infty - \epsilon, x_\infty + \epsilon)$ for all but finitely many $n \in \N_+$, and hence $x_n \to x_\infty$ as $n \to \infty$.
The proof is finished, but let's look at the probability density functions to see that these are not the proper objects of study. For $n \in \N_+^*$, the PDF $f_n$ of $X_n$ is given by $f_n(x_n) = 1$ and $f_n(x) = 0$ for $x \in \R \setminus \{x_n\}$. Only when $x_n = x_\infty$ for all but finitely many $n \in \N_+$ do we have $f_n(x) \to f_\infty(x)$ for $x \in \R$.
For the example below, recall that $\Q$ denotes the set of rational numbers. Once again, expand the proof to understand the example fully.
For $n \in \N_+$, let $P_n$ denote the discrete uniform distribution on $\left\{\frac{1}{n}, \frac{2}{n}, \ldots \frac{n-1}{n}, 1\right\}$ and let $P_\infty$ denote the continuous uniform distribution on the interval $[0, 1]$. Then
1. $P_n \Rightarrow P_\infty$ as $n \to \infty$
2. $P_n(\Q) = 1$ for each $n \in \N_+$ but $P_\infty(\Q) = 0$.
Proof
As usual, let $F_n$ denote the CDF of $P_n$ for $n \in \N_+^*$.
1. For $n \in \N_+$ note that $F_n$ is given by $F_n(x) = \lfloor n \, x \rfloor / n$ for $x \in [0, 1]$. But $n \, x - 1 \le \lfloor n \, x \rfloor \le n \, x$ so $\lfloor n \, x \rfloor / n \to x$ as $n \to \infty$ for $x \in [0, 1]$. Of course, $F_n(x) = 0$ for $x \lt 0$ and $F_n(x) = 1$ for $x \gt 1$. So $F_n(x) \to F_\infty(x)$ as $n \to \infty$ for all $x \in \R$.
2. Note that, by definition, $P_n(\Q) = 1$ for $n \in \N_+$. On the other hand, $P_\infty$ is a continuous distribution and $\Q$ is countable, so $P_\infty(\Q) = 0$.
The proof is finished, but let's look at the probability density functions. For $n \in \N_+$, the PDF $f_n$ of $P_n$ is given by $f_n(x) = \frac{1}{n}$ for $x \in \left\{\frac{1}{n}, \frac{2}{n}, \ldots \frac{n-1}{n}, 1\right\}$ and $f_n(x) = 0$ otherwise. Hence $0 \le f_n(x) \le \frac{1}{n}$ for $n \in \N_+$ and $x \in \R$, so $f_n(x) \to 0$ as $n \to \infty$ for every $x \in \R$.
The point of the example is that it's reasonable for the discrete uniform distribution on $\left\{\frac{1}{n}, \frac{2}{n}, \ldots \frac{n-1}{n}, 1\right\}$ to converge to the continuous uniform distribution on $[0, 1]$, but once again, the probability density functions are evidently not the correct objects of study.
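The convergence of the distribution functions is easy to see numerically. Here is a Python sketch evaluating $F_n(x) = \lfloor n x \rfloor / n$ at an arbitrarily chosen point for increasing $n$; the values approach $F_\infty(x) = x$.

```python
import math

def discrete_uniform_cdf(x, n):
    """CDF of the uniform distribution on {1/n, 2/n, ..., 1} at x in [0, 1]."""
    return math.floor(n * x) / n

x = 0.371   # an arbitrary point in [0, 1]
for n in [10, 100, 1000, 10_000]:
    print(n, discrete_uniform_cdf(x, n))   # converges to x
```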
Probability Density Functions
As the previous example shows, it is quite possible to have a sequence of discrete distributions converge to a continuous distribution (or the other way around). Recall that probability density functions have very different meanings in the discrete and continuous cases: density with respect to counting measure in the first case, and density with respect to Lebesgue measure in the second case. This is another indication that distribution functions, rather than density functions, are the correct objects of study. However, if probability density functions of a fixed type converge then the distributions converge. Recall again that we are thinking of our probability distributions as measures on $(\R, \mathscr R)$ even when supported on a smaller subset.
Convergence in distribution in terms of probability density functions.
1. Suppose that $f_n$ is a probability density function for a discrete distribution $P_n$ on a countable set $S \subseteq \R$ for each $n \in \N_+^*$. If $f_n(x) \to f_\infty(x)$ as $n \to \infty$ for each $x \in S$ then $P_n \Rightarrow P_\infty$ as $n \to \infty$.
2. Suppose that $f_n$ is a probability density function for a continuous distribution $P_n$ on $\R$ for each $n \in \N_+^*$. If $f_n(x) \to f_\infty(x)$ as $n \to \infty$ for all $x \in \R$ (except perhaps on a set with Lebesgue measure 0) then $P_n \Rightarrow P_\infty$ as $n \to \infty$.
Proof
1. Fix $x \in \R$. Then $P_n(-\infty, x] = \sum_{y \in S, \, y \le x} f_n(y)$ for $n \in \N_+$ and $P_\infty(-\infty, x] = \sum_{y \in S, \, y \le x} f_\infty(y)$. It follows from Scheffé's theorem with the measure space $(S, \mathscr P(S), \#)$ that $P_n(-\infty, x] \to P_\infty(-\infty, x]$ as $n \to \infty$.
2. Fix $x \in \R$. Then $P_n(-\infty, x] = \int_{-\infty}^x f_n(y) \, dy$ for $n \in \N_+$ and $P_\infty(-\infty, x] = \int_{-\infty}^x f_\infty(y) \, dy$. It follows from Scheffé's theorem with the measure space $(\R, \mathscr R, \lambda)$ that $P_n(-\infty, x] \to P_\infty(-\infty, x]$ as $n \to \infty$.
Convergence in Probability
Naturally, we would like to compare convergence in distribution with other modes of convergence we have studied.
Suppose that $X_n$ is a real-valued random variable for each $n \in \N_+^*$, all defined on the same probability space. If $X_n \to X_\infty$ as $n \to \infty$ in probability then $X_n \to X_\infty$ as $n \to \infty$ in distribution.
Proof
Let $F_n$ denote the distribution function of $X_n$ for $n \in \N_+^*$. Fix $\epsilon \gt 0$. Note first that $\P(X_n \le x) = \P(X_n \le x, X_\infty \le x + \epsilon) + \P(X_n \le x, X_\infty \gt x + \epsilon)$. Hence $F_n(x) \le F_\infty(x + \epsilon) + \P\left(\left|X_n - X_\infty\right| \gt \epsilon\right)$. Next, note that $\P(X_\infty \le x - \epsilon) = \P(X_\infty \le x - \epsilon, X_n \le x) + \P(X_\infty \le x - \epsilon, X_n \gt x)$. Hence $F_\infty(x - \epsilon) \le F_n(x) + \P\left(\left|X_n - X_\infty\right| \gt \epsilon\right)$. From the last two results it follows that $F_\infty(x - \epsilon) - \P\left(\left|X_n - X_\infty\right| \gt \epsilon\right) \le F_n(x) \le F_\infty(x + \epsilon) + \P\left(\left|X_n - X_\infty\right| \gt \epsilon\right)$ Letting $n \to \infty$ and using convergence in probability gives $F_\infty(x - \epsilon) \le \liminf_{n \to \infty} F_n(x) \le \limsup_{n \to \infty} F_n(x) \le F_\infty(x + \epsilon)$ Finally, letting $\epsilon \downarrow 0$ we see that if $F_\infty$ is continuous at $x$ then $F_n(x) \to F_\infty(x)$ as $n \to \infty$.
Our next example shows that even when the variables are defined on the same probability space, a sequence can converge in distribution, but not in any other way.
Let $X$ be an indicator variable with $\P(X = 0) = \P(X = 1) = \frac{1}{2}$, so that $X$ is the result of tossing a fair coin. Let $X_n = 1 - X$ for $n \in \N_+$. Then
1. $X_n \to X$ as $n \to \infty$ in distribution.
2. $\P(X_n \text{ does not converge to } X \text{ as } n \to \infty) = 1$.
3. $X_n$ does not converge to $X$ as $n \to \infty$ in probability.
4. $X_n$ does not converge to $X$ as $n \to \infty$ in mean.
Proof
1. This trivially holds since $1 - X$ has the same distribution as $X$.
2. This follows since $\left|X_n - X\right| = 1$ for every $n \in \N_+$.
3. This follows since $\P\left(\left|X_n - X\right| \gt \frac{1}{2}\right) = 1$ for each $n \in \N_+$.
4. This follows since $\E\left(\left|X_n - X\right|\right) = 1$ for each $n \in \N_+$.
The critical fact that makes this counterexample work is that $1 - X$ has the same distribution as $X$. Any random variable with this property would work just as well, so if you prefer a counterexample with continuous distributions, let $X$ have probability density function $f$ given by $f(x) = 6 x (1 - x)$ for $0 \le x \le 1$. The distribution of $X$ is an example of a beta distribution.
The following summary gives the implications for the various modes of convergence; no other implications hold in general.
Suppose that $X_n$ is a real-valued random variable for each $n \in \N_+^*$, all defined on a common probability space.
1. If $X_n \to X_\infty$ as $n \to \infty$ with probability 1 then $X_n \to X_\infty$ as $n \to \infty$ in probability.
2. If $X_n \to X_\infty$ as $n \to \infty$ in mean then $X_n \to X_\infty$ as $n \to \infty$ in probability.
3. If $X_n \to X_\infty$ as $n \to \infty$ in probability then $X_n \to X_\infty$ as $n \to \infty$ in distribution.
It follows that convergence with probability 1, convergence in probability, and convergence in mean all imply convergence in distribution, so the latter mode of convergence is indeed the weakest. However, our next theorem gives an important converse to part (c) of the summary above, when the limiting variable is a constant. Of course, a constant can be viewed as a random variable defined on any probability space.
Suppose that $X_n$ is a real-valued random variable for each $n \in \N_+$, defined on the same probability space, and that $c \in \R$. If $X_n \to c$ as $n \to \infty$ in distribution then $X_n \to c$ as $n \to \infty$ in probability.
Proof
Assume that the probability space is $(\Omega, \mathscr F, \P)$. Note first that $\P(X_n \le x) \to 0$ as $n \to \infty$ if $x \lt c$ and $\P(X_n \le x) \to 1$ as $n \to \infty$ if $x \gt c$. It follows that $\P\left(\left|X_n - c\right| \le \epsilon\right) \to 1$ as $n \to \infty$ for every $\epsilon \gt 0$.
The Skorohod Representation
As noted in the summary above, convergence in distribution does not imply convergence with probability 1, even when the random variables are defined on the same probability space. However, the next theorem, known as the Skorohod representation theorem, gives an important partial result in this direction.
Suppose that $P_n$ is a probability measure on $(\R, \mathscr R)$ for each $n \in \N_+^*$ and that $P_n \Rightarrow P_\infty$ as $n \to \infty$. Then there exist real-valued random variables $X_n$ for $n \in \N_+^*$, defined on the same probability space, such that
1. $X_n$ has distribution $P_n$ for $n \in \N_+^*$.
2. $X_n \to X_\infty$ as $n \to \infty$ with probability 1.
Proof
Let $(\Omega, \mathscr F, \P)$ be a probability space and $U$ a random variable defined on this space that is uniformly distributed on the interval $(0, 1)$. For a specific construction, we could take $\Omega = (0, 1)$, $\mathscr F$ the $\sigma$-algebra of Borel measurable subsets of $(0, 1)$, and $\P$ Lebesgue measure on $(\Omega, \mathscr F)$ (the uniform distribution on $(0, 1)$). Then let $U$ be the identity function on $\Omega$, so that $U(\omega) = \omega$ for $\omega \in \Omega$ and hence $U$ has probability distribution $\P$. We have seen this construction many times before.
1. For $n \in \N_+^*$, let $F_n$ denote the distribution function of $P_n$ and define $X_n = F_n^{-1}(U)$ where $F_n^{-1}$ is the quantile function of $F_n$. Recall that $X_n$ has distribution function $F_n$ and therefore $X_n$ has distribution $P_n$ for $n \in \N_+^*$. Of course, these random variables are also defined on $(\Omega, \mathscr F, \P)$.
2. Let $\epsilon \gt 0$ and let $u \in (0, 1)$. Pick a continuity point $x$ of $F_\infty$ such that $F_\infty^{-1}(u) - \epsilon \lt x \lt F_\infty^{-1}(u)$. Then $F_\infty(x) \lt u$ and hence $F_n(x) \lt u$ for all but finitely many $n \in \N_+$. It follows that $F_\infty^{-1}(u) - \epsilon \lt x \lt F_n^{-1}(u)$ for all but finitely many $n \in \N_+$. Let $n \to \infty$ and $\epsilon \downarrow 0$ to conclude that $F_\infty^{-1}(u) \le \liminf_{n \to \infty} F_n^{-1}(u)$. Next, let $v$ satisfy $0 \lt u \lt v \lt 1$ and let $\epsilon \gt 0$. Pick a continuity point $x$ of $F_\infty$ such that $F_\infty^{-1}(v) \lt x \lt F_\infty^{-1}(v) + \epsilon$. Then $u \lt v \lt F_\infty(x)$ and hence $u \lt F_n(x)$ for all but finitely many $n \in \N_+$. It follows that $F_n^{-1}(u) \le x \lt F_\infty^{-1}(v) + \epsilon$ for all but finitely many $n \in \N_+$. Let $n \to \infty$ and $\epsilon \downarrow 0$ to conclude that $\limsup_{n \to \infty} F_n^{-1}(u) \le F_\infty^{-1}(v)$. Letting $v \downarrow u$ it follows that $\limsup_{n \to \infty} F_n^{-1}(u) \le F_\infty^{-1}(u)$ if $u$ is a point of continuity of $F_\infty^{-1}$. Therefore $F_n^{-1}(u) \to F_\infty^{-1}(u)$ as $n \to \infty$ if $u$ is a point of continuity of $F_\infty^{-1}$. Recall from analysis that since $F_\infty^{-1}$ is increasing, the set $D \subseteq (0, 1)$ of discontinuities of $F_\infty^{-1}$ is countable. Since $U$ has a continuous distribution, $\P(U \in D) = 0$. Finally, it follows that $\P(X_n \to X_\infty \text{ as } n \to \infty) = 1$.
The following theorem illustrates the value of the Skorohod representation and the usefulness of random variable notation for convergence in distribution. The theorem is also quite intuitive, since a basic idea is that continuity should preserve convergence.
Suppose that $X_n$ is a real-valued random variable for each $n \in \N_+^*$ (not necessarily defined on the same probability space). Suppose also that $g: \R \to \R$ is measurable, and let $D_g$ denote the set of discontinuities of $g$, and $P_\infty$ the distribution of $X_\infty$. If $X_n \to X_\infty$ as $n \to \infty$ in distribution and $P_\infty(D_g) = 0$, then $g(X_n) \to g(X_\infty)$ as $n \to \infty$ in distribution.
Proof
By Skorohod's theorem, there exist random variables $Y_n$ for $n \in \N_+^*$, defined on the same probability space $(\Omega, \mathscr F, \P)$, such that $Y_n$ has the same distribution as $X_n$ for $n \in \N_+^*$, and $Y_n \to Y_\infty$ as $n \to \infty$ with probability 1. Since $\P(Y_\infty \in D_g) = P_\infty(D_g) = 0$ it follows that $g(Y_n) \to g(Y_\infty)$ as $n \to \infty$ with probability 1. Hence by the theorem above, $g(Y_n) \to g(Y_\infty)$ as $n \to \infty$ in distribution. But $g(Y_n)$ has the same distribution as $g(X_n)$ for each $n \in \N_+^*$.
As a simple corollary, if $X_n$ converges to $X_\infty$ as $n \to \infty$ in distribution, and if $a, \, b \in \R$ then $a + b X_n$ converges to $a + b X_\infty$ as $n \to \infty$ in distribution. But we can do a little better:
Suppose that $X_n$ is a real-valued random variable and that $a_n, \, b_n \in \R$ for each $n \in \N_+^*$. If $X_n \to X_\infty$ as $n \to \infty$ in distribution and if $a_n \to a_\infty$ and $b_n \to b_\infty$ as $n \to \infty$, then $a_n + b_n X_n \to a_\infty + b_\infty X_\infty$ as $n \to \infty$ in distribution.
Proof
Again by Skorohod's theorem, there exist random variables $Y_n$ for $n \in \N_+^*$, defined on the same probability space $(\Omega, \mathscr F, \P)$ such that $Y_n$ has the same distribution as $X_n$ for $n \in \N_+^*$ and $Y_n \to Y_\infty$ as $n \to \infty$ with probability 1. Hence also $a_n + b_n Y_n \to a_\infty + b_\infty Y_\infty$ as $n \to \infty$ with probability 1. By the result above, $a_n + b_n Y_n \to a_\infty + b_\infty Y_\infty$ as $n \to \infty$ in distribution. But $a_n + b_n Y_n$ has the same distribution as $a_n + b_n X_n$ for $n \in \N_+^*$.
The definition of convergence in distribution requires that the sequence of probability measures converge on sets of the form $(-\infty, x]$ for $x \in \R$ when the limiting distribution has probability 0 at $x$. It turns out that the probability measures will converge on lots of other sets as well, and this result points the way to extending convergence in distribution to more general spaces. To state the result, recall that if $A$ is a subset of a topological space, then the boundary of $A$ is $\partial A = \cl(A) \setminus \interior(A)$ where $\cl(A)$ is the closure of $A$ (the smallest closed set that contains $A$) and $\interior(A)$ is the interior of $A$ (the largest open set contained in $A$).
Suppose that $P_n$ is a probability measure on $(\R, \mathscr R)$ for $n \in \N_+^*$. Then $P_n \Rightarrow P_\infty$ as $n \to \infty$ if and only if $P_n(A) \to P_\infty(A)$ as $n \to \infty$ for every $A \in \mathscr R$ with $P_\infty(\partial A) = 0$.
Proof
Suppose that $P_n \Rightarrow P_\infty$ as $n \to \infty$. Let $X_n$ be a random variable with distribution $P_n$ for $n \in \N_+^*$. (We don't care about the underlying probability spaces.) If $A \in \mathscr R$ then the set of discontinuities of $\bs 1_A$, the indicator function of $A$, is $\partial A$. So, suppose $P_\infty(\partial A) = 0$. By the continuity theorem above, $\bs 1_A(X_n) \to \bs 1_A(X_\infty)$ as $n \to \infty$ in distribution. Let $G_n$ denote the CDF of $\bs 1_A(X_n)$ for $n \in \N_+^*$. The only possible points of discontinuity of $G_\infty$ are 0 and 1. Hence $G_n\left(\frac 1 2\right) \to G_\infty\left(\frac 1 2\right)$ as $n \to \infty$. But $G_n\left(\frac 1 2\right) = P_n(A^c)$ for $n \in \N_+^*$. Hence $P_n(A^c) \to P_\infty(A^c)$ and so also $P_n(A) \to P_\infty(A)$ as $n \to \infty$.
Conversely, suppose that the condition in the theorem holds. If $x \in \R$, then the boundary of $(-\infty, x]$ is $\{x\}$, so if $P_\infty\{x\} = 0$ then $P_n(-\infty, x] \to P_\infty(-\infty, x]$ as $n \to \infty$. So by definition, $P_n \Rightarrow P_\infty$ as $n \to \infty$.
In the context of this result, suppose that $a, \, b \in \R$ with $a \lt b$. If $P_\infty\{a\} = P_\infty\{b\} = 0$, then as $n \to \infty$ we have $P_n(a, b) \to P_\infty(a, b)$, $P_n[a, b) \to P_\infty[a, b)$, $P_n(a, b] \to P_\infty(a, b]$, and $P_n[a, b] \to P_\infty[a, b]$. Of course, the limiting values are all the same.
Examples and Applications
Next we will explore several interesting examples of the convergence of distributions on $(\R, \mathscr R)$. There are several important cases where a special distribution converges to another special distribution as a parameter approaches a limiting value. Indeed, such convergence results are part of the reason why such distributions are special in the first place.
The Hypergeometric Distribution
Recall that the hypergeometric distribution with parameters $m$, $r$, and $n$ is the distribution that governs the number of type 1 objects in a sample of size $n$, drawn without replacement from a population of $m$ objects with $r$ objects of type 1. It has discrete probability density function $f$ given by $f(k) = \frac{\binom{r}{k} \binom{m - r}{n - k}}{\binom{m}{n}}, \quad k \in \{0, 1, \ldots, n\}$ The parameters $m$, $r$, and $n$ are positive integers with $n \le m$ and $r \le m$. The hypergeometric distribution is studied in more detail in the chapter on Finite Sampling Models.
Recall next that Bernoulli trials are independent trials, each with two possible outcomes, generically called success and failure. The probability of success $p \in [0, 1]$ is the same for each trial. The binomial distribution with parameters $n \in \N_+$ and $p$ is the distribution of the number of successes in $n$ Bernoulli trials. This distribution has probability density function $g$ given by $g(k) = \binom{n}{k} p^k (1 - p)^{n - k}, \quad k \in \{0, 1, \ldots, n\}$ The binomial distribution is studied in more detail in the chapter on Bernoulli Trials. Note that the binomial distribution with parameters $n$ and $p = r / m$ is the distribution that governs the number of type 1 objects in a sample of size $n$, drawn with replacement from a population of $m$ objects with $r$ objects of type 1. This fact is motivation for the following result:
Suppose that $r_m \in \{0, 1, \ldots, m\}$ for each $m \in \N_+$ and that $r_m / m \to p$ as $m \to \infty$. For fixed $n \in \N_+$, the hypergeometric distribution with parameters $m$, $r_m$, and $n$ converges to the binomial distribution with parameters $n$ and $p$ as $m \to \infty$.
Proof
Recall that for $a \in \R$ and $j \in \N$, we let $a^{(j)} = a \, (a - 1) \cdots [a - (j - 1)]$ denote the falling power of $a$ of order $j$. The hypergeometric PDF can be written as $f_m(k) = \binom{n}{k} \frac{r_m^{(k)} (m - r_m)^{(n - k)}}{m^{(n)}}, \quad k \in \{0, 1, \ldots, n\}$ In the fraction above, the numerator and denominator both have $n$ factors. Suppose that we group the $k$ factors in $r_m^{(k)}$ with the first $k$ factors of $m^{(n)}$ and the $n - k$ factors of $(m - r_m)^{(n-k)}$ with the last $n - k$ factors of $m^{(n)}$ to form a product of $n$ fractions. The first $k$ fractions have the form $(r_m - j) \big/ (m - j)$ for some $j$ that does not depend on $m$. Each of these converges to $p$ as $m \to \infty$. The last $n - k$ fractions have the form $(m - r_m - j) \big/ (m - k - j)$ for some $j$ that does not depend on $m$. Each of these converges to $1 - p$ as $m \to \infty$. Hence $f_m(k) \to \binom{n}{k} p^k (1 - p)^{n-k} \text{ as } m \to \infty \text{ for each } k \in \{0, 1, \ldots, n\}$ The result now follows from the theorem above on density functions.
From a practical point of view, the last result means that if the population size $m$ is large compared to sample size $n$, then the hypergeometric distribution with parameters $m$, $r$, and $n$ (which corresponds to sampling without replacement) is well approximated by the binomial distribution with parameters $n$ and $p = r / m$ (which corresponds to sampling with replacement). This is often a useful result, not computationally, but rather because the binomial distribution has fewer parameters than the hypergeometric distribution (and often in real problems, the parameters may only be known approximately). Specifically, in the limiting binomial distribution, we do not need to know the population size $m$ and the number of type 1 objects $r$ individually, but only in the ratio $r / m$.
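Here is a Python sketch of the approximation, comparing the two probability density functions directly; the sample size $n = 5$, the ratio $p = 0.3$, and the population sizes are arbitrary choices.

```python
from math import comb

def hypergeom_pdf(k, m, r, n):
    """Hypergeometric PDF: k type 1 objects in a sample of size n."""
    return comb(r, k) * comb(m - r, n - k) / comb(m, n)

def binom_pdf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 5, 0.3
for m in [20, 100, 1000, 10_000]:
    r = int(p * m)   # so that r / m = p
    print(m, [round(hypergeom_pdf(k, m, r, n), 4) for k in range(n + 1)])
print("binomial", [round(binom_pdf(k, n, p), 4) for k in range(n + 1)])
```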
In the ball and urn experiment, set $m = 100$ and $r = 30$. For each of the following values of $n$ (the sample size), switch between sampling without replacement (the hypergeometric distribution) and sampling with replacement (the binomial distribution). Note the difference in the probability density functions. Run the simulation 1000 times for each sampling mode and compare the relative frequency function to the probability density function.
1. 10
2. 20
3. 30
4. 40
5. 50
The Binomial Distribution
Recall again that the binomial distribution with parameters $n \in \N_+$ and $p \in [0, 1]$ is the distribution of the number of successes in $n$ Bernoulli trials, when $p$ is the probability of success on a trial. This distribution has probability density function $f$ given by $f(k) = \binom{n}{k} p^k (1 - p)^{n - k}, \quad k \in \{0, 1, \ldots, n\}$ Recall also that the Poisson distribution with parameter $r \in (0, \infty)$ has probability density function $g$ given by $g(k) = e^{-r} \frac{r^k}{k!}, \quad k \in \N$ The distribution is named for Simeon Poisson and governs the number of random points in a region of time or space, under certain ideal conditions. The parameter $r$ is proportional to the size of the region of time or space. The Poisson distribution is studied in more detail in the chapter on the Poisson Process.
Suppose that $p_n \in [0, 1]$ for $n \in \N_+$ and that $n p_n \to r \in (0, \infty)$ as $n \to \infty$. Then the binomial distribution with parameters $n$ and $p_n$ converges to the Poisson distribution with parameter $r$ as $n \to \infty$.
Proof
For $k, \, n \in \N$ with $k \le n$, the binomial PDF can be written as $f_n(k) = \frac{n^{(k)}}{k!} p_n^k (1 - p_n)^{n - k} = \frac{1}{k!} (n p_n) \left[(n - 1) p_n\right] \cdots \left[(n - k + 1) p_n\right] (1 - p_n)^{n - k}$ First, $(n - j) p_n \to r$ as $n \to \infty$ for fixed $j \in \{0, 1, \ldots, k - 1\}$. Next, by a famous limit from calculus, $(1 - p_n)^n = (1 - n p_n / n)^n \to e^{-r}$ as $n \to \infty$. Hence also $(1 - p_n)^{n-k} \to e^{-r}$ as $n \to \infty$ for fixed $k \in \N$. Therefore $f_n(k) \to e^{-r} r^k / k!$ as $n \to \infty$ for each $k \in \N$. The result now follows from the theorem above on density functions.
From a practical point of view, the convergence of the binomial distribution to the Poisson means that if the number of trials $n$ is large and the probability of success $p$ small, so that $n p^2$ is small, then the binomial distribution with parameters $n$ and $p$ is well approximated by the Poisson distribution with parameter $r = n p$. This is often a useful result, again not computationally, but rather because the Poisson distribution has fewer parameters than the binomial distribution (and often in real problems, the parameters may only be known approximately). Specifically, in the approximating Poisson distribution, we do not need to know the number of trials $n$ and the probability of success $p$ individually, but only in the product $n p$. As we will see in the next chapter, the condition that $n p^2$ be small means that the variance of the binomial distribution, namely $n p (1 - p) = n p - n p^2$ is approximately $r = n p$, which is the variance of the approximating Poisson distribution.
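The following Python sketch illustrates the approximation, comparing the binomial densities with $n p = 5$ fixed to the Poisson density with parameter $r = 5$ (these parameter values are arbitrary choices).

```python
import math
from math import comb

def binom_pdf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pdf(k, r):
    return math.exp(-r) * r ** k / math.factorial(k)

r = 5.0
for n in [10, 100, 1000, 10_000]:
    p = r / n   # so that n p = r
    print(n, [round(binom_pdf(k, n, p), 4) for k in range(4)])
print("Poisson", [round(poisson_pdf(k, r), 4) for k in range(4)])
```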
In the binomial timeline experiment, set the parameter values as follows, and observe the graph of the probability density function. (Note that $n p = 5$ in each case.) Run the experiment 1000 times in each case and compare the relative frequency function and the probability density function. Note also the successes represented as random points in discrete time.
1. $n = 10$, $p = 0.5$
2. $n = 20$, $p = 0.25$
3. $n = 100$, $p = 0.05$
In the Poisson experiment, set $r = 5$ and $t = 1$, to get the Poisson distribution with parameter 5. Note the shape of the probability density function. Run the experiment 1000 times and compare the relative frequency function to the probability density function. Note the similarity between this experiment and the one in the previous exercise.
The Geometric Distribution
Recall that the geometric distribution on $\N_+$ with success parameter $p \in (0, 1]$ has probability density function $f$ given by $f(k) = p (1 - p)^{k-1}, \quad k \in \N_+$ The geometric distribution governs the trial number of the first success in a sequence of Bernoulli trials.
Suppose that $U$ has the geometric distribution on $\N_+$ with success parameter $p \in (0, 1]$. For $n \in \N_+$, the conditional distribution of $U$ given $U \le n$ converges to the uniform distribution on $\{1, 2, \ldots, n\}$ as $p \downarrow 0$.
Proof
The CDF $F$ of $U$ is given by $F(k) = 1 - (1 - p)^k$ for $k \in \N_+$. Hence for $n \in \N_+$, the conditional CDF of $U$ given $U \le n$ is $F_n(k) = \P(U \le k \mid U \le n) = \frac{\P(U \le k)}{\P(U \le n)} = \frac{1 - (1 - p)^k}{1 - (1 - p)^n}, \quad k \in \{1, 2, \ldots n\}$ Using L'Hospital's rule gives $F_n(k) \to k / n$ as $p \downarrow 0$ for $k \in \{1, 2, \ldots, n\}$. As a function of $k$ this is the CDF of the uniform distribution on $\{1, 2, \ldots, n\}$.
Next, recall that the exponential distribution with rate parameter $r \in (0, \infty)$ has distribution function $G$ given by $G(t) = 1 - e^{-r t}, \quad 0 \le t \lt \infty$ The exponential distribution governs the time between arrivals in the Poisson model of random points in time.
Suppose that $U_n$ has the geometric distribution on $\N_+$ with success parameter $p_n \in (0, 1]$ for $n \in \N_+$, and that $n p_n \to r \in (0, \infty)$ as $n \to \infty$. The distribution of $U_n / n$ converges to the exponential distribution with parameter $r$ as $n \to \infty$.
Proof
Let $F_n$ denote the CDF of $U_n / n$. Then for $x \in [0, \infty)$ $F_n(x) = \P\left(\frac{U_n}{n} \le x\right) = \P(U_n \le n x) = \P\left(U_n \le \lfloor n x \rfloor\right) = 1 - \left(1 - p_n\right)^{\lfloor n x \rfloor}$ We showed in the proof of the convergence of the binomial distribution that $(1 - p_n)^n \to e^{-r}$ as $n \to \infty$, and hence $\left(1 - p_n\right)^{n x} \to e^{-r x}$ as $n \to \infty$. But by definition, $\lfloor n x \rfloor \le n x \lt \lfloor n x \rfloor + 1$ or equivalently, $n x - 1 \lt \lfloor n x \rfloor \le n x$ so it follows from the squeeze theorem that $\left(1 - p_n \right)^{\lfloor n x \rfloor} \to e^{- r x}$ as $n \to \infty$. Hence $F_n(x) \to 1 - e^{-r x}$ as $n \to \infty$. As a function of $x \in [0, \infty)$, this is the CDF of the exponential distribution with parameter $r$.
Note that the limiting condition on $n$ and $p$ in the last result is precisely the same as the condition for the convergence of the binomial distribution to the Poisson distribution. For a deeper interpretation of both of these results, see the section on the Poisson distribution.
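Here is a Python sketch of the convergence, evaluating the CDF of $U_n / n$ at an arbitrarily chosen point with $n p_n = r$ fixed; the values approach the exponential CDF $1 - e^{-r x}$.

```python
import math

r, x = 5.0, 0.3   # arbitrary rate parameter and evaluation point
for n in [10, 100, 1000, 10_000]:
    p = r / n                                    # so that n p -> r
    cdf = 1.0 - (1.0 - p) ** math.floor(n * x)   # P(U_n / n <= x)
    print(n, round(cdf, 5))
print("limit", round(1.0 - math.exp(-r * x), 5))
```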
In the negative binomial experiment, set $k = 1$ to get the geometric distribution. Then decrease the value of $p$ and note the shape of the probability density function. With $p = 0.5$ run the experiment 1000 times and compare the relative frequency function to the probability density function.
In the gamma experiment, set $k = 1$ to get the exponential distribution, and set $r = 5$. Note the shape of the probability density function. Run the experiment 1000 times and compare the empirical density function and the probability density function. Compare this experiment with the one in the previous exercise, and note the similarity, up to a change in scale.
The Matching Distribution
For $n \in \N_+$, consider a random permutation $(X_1, X_2, \ldots, X_n)$ of the elements in the set $\{1, 2, \ldots, n\}$. We say that a match occurs at position $i$ if $X_i = i$.
$\P\left(X_i = i\right) = \frac{1}{n}$ for each $i \in \{1, 2, \ldots, n\}$.
Proof
The number of permutations of $\{1, 2, \ldots, n\}$ is $n!$. For $i \in \{1, 2, \ldots, n\}$, the number of such permutations with $i$ in position $i$ is $(n - 1)!$. Hence $\P(X_i = i) = (n - 1)! / n! = 1 / n$. A more direct argument is that $i$ is no more or less likely to end up in position $i$ as any other number.
So the matching events all have the same probability, which varies inversely with the number of trials.
$\P\left(X_i = i, X_j = j\right) = \frac{1}{n (n - 1)}$ for $i, \, j \in \{1, 2, \ldots, n\}$ with $i \ne j$.
Proof
Again, the number of permutations of $\{1, 2, \ldots, n\}$ is $n!$. For distinct $i, \, j \in \{1, 2, \ldots, n\}$, the number of such permutations with $i$ in position $i$ and $j$ in position $j$ is $(n - 2)!$. Hence $\P(X_i = i, X_j = j) = (n - 2)! / n! = 1 / n (n - 1)$.
So the matching events are dependent, and in fact are positively correlated. In particular, the matching events do not form a sequence of Bernoulli trials. The matching problem is studied in detail in the chapter on Finite Sampling Models. In that section we show that the number of matches $N_n$ has probability density function $f_n$ given by: $f_n(k) = \frac{1}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!}, \quad k \in \{0, 1, \ldots, n\}$
The distribution of $N_n$ converges to the Poisson distribution with parameter 1 as $n \to \infty$.
Proof
For $k \in \N$, $f_n(k) = \frac{1}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!} \to \frac{1}{k!} \sum_{j=0}^\infty \frac{(-1)^j}{j!} = \frac{1}{k!} e^{-1}$ As a function of $k \in \N$, this is the PDF of the Poisson distribution with parameter 1. So the result follows from the theorem above on density functions.
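The rapid convergence is visible numerically, as in the following Python sketch comparing $f_n$ to the Poisson density with parameter 1 for modest values of $n$.

```python
import math

def matching_pdf(k, n):
    """PDF of the number of matches in a random permutation of {1, ..., n}."""
    return sum((-1) ** j / math.factorial(j) for j in range(n - k + 1)) / math.factorial(k)

for n in [5, 10, 20]:
    print(n, [round(matching_pdf(k, n), 5) for k in range(4)])
print("Poisson(1)", [round(math.exp(-1) / math.factorial(k), 5) for k in range(4)])
```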
In the matching experiment, increase $n$ and note the apparent convergence of the probability density function for the number of matches. With selected values of $n$, run the experiment 1000 times and compare the relative frequency function and the probability density function.
The Extreme Value Distribution
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent random variables, each with the standard exponential distribution (parameter 1). Thus, recall that the common distribution function $G$ is given by $G(x) = 1 - e^{-x}, \quad 0 \le x \lt \infty$
As $n \to \infty$, the distribution of $Y_n = \max\{X_1, X_2, \ldots, X_n\} - \ln n$ converges to the distribution with distribution function $F$ given by $F(x) = e^{-e^{-x}}, \quad x \in \R$
Proof
Let $X_{(n)} = \max\{X_1, X_2, \ldots, X_n\}$ and recall that $X_{(n)}$ has CDF $G^n$. Let $F_n$ denote the CDF of $Y_n$. For $x \in \R$ $F_n(x) = \P(Y_n \le x) = \P\left(X_{(n)} \le x + \ln n \right) = G^n(x + \ln n) = \left[1 - e^{-(x + \ln n) }\right]^n = \left(1 - \frac{e^{-x}}{n} \right)^n$ By our famous limit from calculus again, $F_n(x) \to e^{-e^{-x}}$ as $n \to \infty$.
The limiting distribution in the previous exercise is the standard extreme value distribution, also known as the standard Gumbel distribution in honor of Emil Gumbel. Extreme value distributions are studied in detail in the chapter on Special Distributions.
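Here is a Python sketch that illustrates the convergence by simulation, comparing the empirical distribution function of $Y_n$ with the Gumbel distribution function $F$; the sample size and number of replications are arbitrary choices.

```python
import random, math

n, trials = 500, 10_000   # arbitrary sample size and number of replications
sample = [max(-math.log(1.0 - random.random()) for _ in range(n)) - math.log(n)
          for _ in range(trials)]
for x in [-1.0, 0.0, 1.0, 2.0]:
    empirical = sum(1 for y in sample if y <= x) / trials
    exact = math.exp(-math.exp(-x))
    print(x, round(empirical, 3), round(exact, 3))
```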
The Pareto Distribution
Recall that the Pareto distribution with shape parameter $a \in (0, \infty)$ has distribution function $F$ given by $F(x) = 1 - \frac{1}{x^a}, \quad 1 \le x \lt \infty$ The Pareto distribution, named for Vilfredo Pareto, is a heavy-tailed distribution sometimes used to model financial variables. It is studied in more detail in the chapter on Special Distributions.
Suppose that $X_n$ has the Pareto distribution with parameter $n$ for each $n \in \N_+$. Then
1. $X_n \to 1$ as $n \to \infty$ in distribution (and hence also in probability).
2. The distribution of $Y_n = nX_n - n$ converges to the standard exponential distribution as $n \to \infty$.
Proof
1. The CDF of $X_n$ is $F_n(x) = 1 - 1 / x^n$ for $x \ge 1$. Hence $F_n(x) = 0$ for $n \in \N_+$ and $x \le 1$ while $F_n(x) \to 1$ as $n \to \infty$ for $x \gt 1$. Thus the limit of $F_n$ agrees with the CDF of the constant 1, except at $x = 1$, the point of discontinuity.
2. Let $G_n$ denote the CDF of $Y_n$. For $x \ge 0$, $G_n(x) = \P(Y_n \le x) = \P(X_n \le 1 + x / n) = 1 - \frac{1}{(1 + x / n)^n}$ By our famous theorem from calculus again, it follows that $G_n(x) \to 1 - 1 / e^x = 1 - e^{-x}$ as $n \to \infty$. As a function of $x \in [0, \infty)$, this is the CDF of the standard exponential distribution.
Fundamental Theorems
The two fundamental theorems of basic probability theory, the law of large numbers and the central limit theorem, are studied in detail in the chapter on Random Samples. For this reason we will simply state the results in this section. So suppose that $(X_1, X_2, \ldots)$ is a sequence of independent, identically distributed, real-valued random variables (defined on the same probability space) with mean $\mu \in (-\infty, \infty)$ and standard deviation $\sigma \in (0, \infty)$. For $n \in \N_+$, let $Y_n = \sum_{i=1}^n X_i$ denote the sum of the first $n$ variables, $M_n = Y_n \big/ n$ the average of the first $n$ variables, and $Z_n = (Y_n - n \mu) \big/ (\sqrt{n} \sigma)$ the standard score of $Y_n$.
The fundamental theorems of probability
1. $M_n \to \mu$ as $n \to \infty$ with probability 1 (and hence also in probability and in distribution). This is the law of large numbers.
2. The distribution of $Z_n$ converges to the standard normal distribution as $n \to \infty$. This is the central limit theorem.
In part (a), convergence with probability 1 is the strong law of large numbers while convergence in probability and in distribution are the weak laws of large numbers.
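A brief simulation sketch of both theorems (our own, illustrative only), using uniform variables on $[0, 1]$ so that $\mu = 1/2$ and $\sigma = 1/\sqrt{12}$:

```python
import math
import random

mu, sigma = 0.5, 1 / math.sqrt(12)  # mean and sd of the uniform distribution on [0, 1]
n, runs = 1000, 5000

# Law of large numbers: M_n should be close to mu
m_n = sum(random.random() for _ in range(n)) / n
print(f"M_n = {m_n:.4f}  (mu = {mu})")

# Central limit theorem: P(Z_n <= 1) should be close to Phi(1) ~ 0.8413
hits = sum(
    1 for _ in range(runs)
    if (sum(random.random() for _ in range(n)) - n * mu) / (math.sqrt(n) * sigma) <= 1
)
print(f"P(Z_n <= 1) ~ {hits / runs:.4f}  (Phi(1) ~ 0.8413)")
```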
General Spaces
Our next goal is to define convergence of probability distributions on more general measurable spaces. For this discussion, you may need to refer to other sections in this chapter: the integral with respect to a positive measure, properties of the integral, and density functions. In turn, these sections depend on measure theory developed in the chapters on Foundations and Probability Measures.
Definition and Basic Properties
First we need to define the type of measurable spaces that we will use in this subsection.
We assume that $(S, d)$ is a complete, separable metric space and let $\mathscr S$ denote the Borel $\sigma$-algebra of subsets of $S$, that is, the $\sigma$-algebra generated by the topology. The standard spaces that we often use are special cases of the measurable space $(S, \mathscr S)$:
1. Discrete: $S$ is countable and is given the discrete metric so $\mathscr S$ is the collection of all subsets of $S$.
2. Euclidean: $\R^n$ is given the standard Euclidean metric so $\mathscr R_n$ is the usual $\sigma$-algebra of Borel measurable subsets of $\R^n$.
Additional details
Recall that the metric space $(S, d)$ is complete if every Cauchy sequence in $S$ converges to a point in $S$. The space is separable if there exists a countable subset that is dense. A complete, separable metric space is sometimes called a Polish space because such spaces were extensively studied by a group of Polish mathematicians in the 1930s, including Kazimierz Kuratowski.
As suggested by our setup, the definition for convergence in distribution involves both measure theory and topology. The motivation is the theorem above for the one-dimensional Euclidean space $(\R, \mathscr R)$.
Convergence in distribution:
1. Suppose that $P_n$ is a probability measure on $(S, \mathscr S)$ for each $n \in \N_+^*$. Then $P_n$ converges (weakly) to $P_\infty$ as $n \to \infty$ if $P_n(A) \to P_\infty(A)$ as $n \to \infty$ for every $A \in \mathscr S$ with $P_\infty(\partial A) = 0$. We write $P_n \Rightarrow P_\infty$ as $n \to \infty$.
2. Suppose that $X_n$ is a random variable with distribution $P_n$ on $(S, \mathscr S)$ for each $n \in \N_+^*$. Then $X_n$ converges in distribution to $X_\infty$ as $n \to \infty$ if $P_n \Rightarrow P_\infty$ as $n \to \infty$. We write $X_n \to X_\infty$ as $n \to \infty$ in distribution.
Notes
1. The definition makes sense since $A \in \mathscr S$ implies $\partial A \in \mathscr S$. Specifically, $\cl(A) \in \mathscr S$ because $\cl(A)$ is closed, and $\interior(A) \in \mathscr S$ because $\interior(A)$ is open.
2. The random variables need not be defined on the same probability space.
Let's consider our two special cases. In the discrete case, as usual, the measure theory and topology are not really necessary.
Suppose that $P_n$ is a probability measure on a discrete space $(S, \mathscr S)$ for each $n \in \N_+^*$. Then $P_n \Rightarrow P_\infty$ as $n \to \infty$ if and only if $P_n(A) \to P_\infty(A)$ as $n \to \infty$ for every $A \subseteq S$.
Proof
This follows from the definition. Every subset is both open and closed so $\partial A = \emptyset$ for every $A \subseteq S$.
In the Euclidean case, it suffices to consider distribution functions, as in the one-dimensional case. If $P$ is a probability measure on $(\R^n, \mathscr R_n)$, recall that the distribution function $F$ of $P$ is given by $F(x_1, x_2, \ldots, x_n) = P\left((-\infty, x_1] \times (-\infty, x_2] \times \cdots \times (-\infty, x_n]\right), \quad (x_1, x_2, \ldots, x_n) \in \R^n$
Suppose that $P_n$ is a probability measure on $(\R^n, \mathscr R_n)$ with distribution function $F_n$ for each $n \in \N_+^*$. Then $P_n \Rightarrow P_\infty$ as $n \to \infty$ if and only if $F_n(\bs x) \to F_\infty(\bs x)$ as $n \to \infty$ for every $\bs x \in \R^n$ where $F_\infty$ is continuous.
Convergence in Probability
As in the case of $(\R, \mathscr R)$, convergence in probability implies convergence in distribution.
Suppose that $X_n$ is a random variable with values in $S$ for each $n \in \N_+^*$, all defined on the same probability space. If $X_n \to X_\infty$ as $n \to \infty$ in probability then $X_n \to X_\infty$ as $n \to \infty$ in distribution.
Notes
Assume that the common probability space is $(\Omega, \mathscr F, \P)$. Recall that convergence in probability means that $\P[d(X_n, X_\infty) \gt \epsilon] \to 0$ as $n \to \infty$ for every $\epsilon \gt 0$.
So as before, convergence with probability 1 implies convergence in probability which in turn implies convergence in distribution.
Skorohod's Representation Theorem
As you might guess, Skorohod's theorem for the one-dimensional Euclidean space $(\R, \mathscr R)$ can be extended to the more general spaces. However the proof is not nearly as straightforward, because we no longer have the quantile function for constructing random variables on a common probability space.
Suppose that $P_n$ is a probability measure on $(S, \mathscr S)$ for each $n \in \N_+^*$ and that $P_n \Rightarrow P_\infty$ as $n \to \infty$. Then there exists a random variable $X_n$ with values in $S$ for each $n \in \N_+^*$, defined on a common probability space, such that
1. $X_n$ has distribution $P_n$ for $n \in \N_+^*$
2. $X_n \to X_\infty$ as $n \to \infty$ with probability 1.
One of the main consequences of Skorohod's representation, the preservation of convergence in distribution under continuous functions, is still true and has essentially the same proof. For the general setup, suppose that $(S, d, \mathscr S)$ and $(T, e, \mathscr T)$ are spaces of the type described above.
Suppose that $X_n$ is a random variable with values in $S$ for each $n \in \N_+^*$ (not necessarily defined on the same probability space). Suppose also that $g: S \to T$ is measurable, and let $D_g$ denote the set of discontinuities of $g$, and $P_\infty$ the distribution of $X_\infty$. If $X_n \to X_\infty$ as $n \to \infty$ in distribution and $P_\infty(D_g) = 0$, then $g(X_n) \to g(X_\infty)$ as $n \to \infty$ in distribution.
Proof
By Skorohod's theorem, there exist random variables $Y_n$ with values in $S$ for $n \in \N_+^*$, defined on the same probability space $(\Omega, \mathscr F, \P)$, such that $Y_n$ has the same distribution as $X_n$ for $n \in \N_+^*$, and $Y_n \to Y_\infty$ as $n \to \infty$ with probability 1. Since $\P(Y_\infty \in D_g) = P_\infty(D_g) = 0$ it follows that $g(Y_n) \to g(Y_\infty)$ as $n \to \infty$ with probability 1. Hence $g(Y_n) \to g(Y_\infty)$ as $n \to \infty$ in distribution. But $g(Y_n)$ has the same distribution as $g(X_n)$ for each $n \in \N_+^*$.
A simple consequence of the continuity theorem is that if a sequence of random vectors in $\R^n$ converges in distribution, then each coordinate sequence also converges in distribution. Let's just consider the two-dimensional case to keep the notation simple.
Suppose that $(X_n, Y_n)$ is a random variable with values in $\R^2$ for $n \in \N_+^*$ and that $(X_n, Y_n) \to (X_\infty, Y_\infty)$ as $n \to \infty$ in distribution. Then
1. $X_n \to X_\infty$ as $n \to \infty$ in distribution.
2. $Y_n \to Y_\infty$ as $n \to \infty$ in distribution.
Scheffé's Theorem
Our next discussion concerns an important result known as Scheffé's theorem, named after Henry Scheffé. To state our theorem, suppose that $(S, \mathscr S, \mu)$ is a measure space, so that $S$ is a set, $\mathscr S$ is a $\sigma$-algebra of subsets of $S$, and $\mu$ is a positive measure on $(S, \mathscr S)$. Further, suppose that $P_n$ is a probability measure on $(S, \mathscr S)$ that has density function $f_n$ with respect to $\mu$ for each $n \in \N_+$, and that $P$ is a probability measure on $(S, \mathscr S)$ that has density function $f$ with respect to $\mu$.
If $f_n(x) \to f(x)$ as $n \to \infty$ for almost all $x \in S$ (with respect to $\mu$) then $P_n(A) \to P(A)$ as $n \to \infty$ uniformly in $A \in \mathscr S$.
Proof
From basic properties of the integral it follows that for $A \in \mathscr S$, $\left|P(A) - P_n(A)\right| = \left|\int_A f \, d\mu - \int_A f_n \, d\mu \right| = \left| \int_A (f - f_n) \, d\mu\right| \le \int_A \left|f - f_n\right| \, d\mu \le \int_S \left|f - f_n\right| \, d\mu$ Let $g_n = f - f_n$, and let $g_n^+$ denote the positive part of $g_n$ and $g_n^-$ the negative part of $g_n$. Note that $g_n^+ \le f$ and $g_n^+ \to 0$ as $n \to \infty$ almost everywhere on $S$. Since $f$ is a probability density function, it is trivially integrable, so by the dominated convergence theorem, $\int_S g_n^+ \, d\mu \to 0$ as $n \to \infty$. But $\int_S g_n \, d\mu = 0$ so $\int_S g_n^+ \, d\mu = \int_S g_n^- \, d\mu$. Therefore $\int_S \left|g_n\right| \, d\mu = 2 \int_S g_n^+ \, d\mu \to 0$ as $n \to \infty$. Hence $P_n(A) \to P(A)$ as $n \to \infty$ uniformly in $A \in \mathscr S$.
Of course, the most important special cases of Scheffé's theorem are to discrete distributions and to continuous distributions on a subset of $\R^n$, as in the theorem above on density functions.
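As a concrete illustration of Scheffé's theorem (our own example, not from the text), take $P_n$ to be the binomial distribution with parameters $n$ and $1/n$ and $P$ the Poisson distribution with parameter 1, both with densities relative to counting measure on $\N$. The uniform error $\sup_A \left|P_n(A) - P(A)\right|$ equals half the $L_1$ distance between the densities, which the sketch below approximates (the Poisson tail beyond $n$ is negligible for the values of $n$ shown):

```python
import math

def binom_pdf(k, n, p):
    """Binomial(n, p) PDF, computed in log space to avoid overflow for large n."""
    log_f = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_f)

def poisson_pdf(k):
    """Poisson(1) PDF e^{-1} / k!, also in log space."""
    return math.exp(-1 - math.lgamma(k + 1))

# sup_A |P_n(A) - P(A)| = (1/2) sum_k |f_n(k) - f(k)|
for n in (10, 100, 1000):
    l1 = sum(abs(binom_pdf(k, n, 1 / n) - poisson_pdf(k)) for k in range(n + 1))
    print(f"n = {n}: sup_A |P_n(A) - P(A)| ~ {l1 / 2:.6f}")
```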
Expected Value
Generating functions are studied in the chapter on Expected Value. In part, the importance of generating functions stems from the fact that ordinary (pointwise) convergence of a sequence of generating functions corresponds to the convergence of the distributions in the sense of this section. Often it is easier to show convergence in distribution using generating functions than directly from the definition.
In addition, convergence in distribution has elegant characterizations in terms of the convergence of the expected values of certain types of functions of the underlying random variables.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\E}{\mathbb{E}}$
Our goal in this section is to define and study functions that play the same role for positive measures on $\R$ that (cumulative) distribution functions do for probability measures on $\R$. Of course probability measures on $\R$ are usually associated with real-valued random variables. These general distribution functions are useful for constructing measures on $\R$ and will appear in our study of integrals with respect to a measure in the next section, as well as non-homogeneous Poisson processes and general renewal processes.
Basic Theory
Throughout this section, our basic measurable space is $(\R, \mathscr{R})$, where $\mathscr{R}$ is the $\sigma$-algebra of Borel measurable subsets of $\R$, and as usual, we will let $\lambda$ denote Lebesgue measure on $(\R, \mathscr{R})$. As with cumulative distribution functions, it's convenient to have compact notation for the limits of a function $F: \R \to \R$ from the left and right at $x \in \R$, and at $\infty$ and $-\infty$ (assuming of course that these limits exist): $F(x^+) = \lim_{t \downarrow x}F(t), \; F(x^-) = \lim_{t \uparrow x} F(t), \; F(\infty) = \lim_{t \to \infty} F(t), \; F(-\infty) = \lim_{t \to -\infty} F(t)$
Distribution Functions and Their Measures
A function $F: \R \to \R$ that satisfies the following properties is a distribution function on $\R$
1. $F$ is increasing: if $x \le y$ then $F(x) \le F(y)$.
2. $F$ is continuous from the right: $F(x^+) = F(x)$ for all $x \in \R$.
Since $F$ is increasing, $F(x^-)$ exists in $\R$. Similarly $F(\infty)$ exists, as a real number or $\infty$, and $F(-\infty)$ exists, as a real number or $-\infty$.
If $F$ is a distribution function on $\R$, then there exists a unique positive measure $\mu$ on $\mathscr{R}$ that satisfies $\mu(a, b] = F(b) - F(a), \quad a, \, b \in \R, \; a \le b$
Proof
Let $\mathscr{I}$ denote the collection of subsets of $\R$ consisting of intervals of the form $(a, b]$ where $a, \, b \in \R$ with $a \le b$, and intervals of the form $(-\infty, a]$ and $(a, \infty)$ where $a \in \R$. Then $\mathscr{I}$ is a semi-algebra. That is, if $A, \, B \in \mathscr{I}$ then $A \cap B \in \mathscr{I}$, and if $A \in \mathscr{I}$ then $A^c$ is the union of a finite number (actually one or two) of sets in $\mathscr{I}$. We define $\mu$ on $\mathscr{I}$ by $\mu(a, b] = F(b) - F(a)$, $\mu(-\infty, a] = F(a) - F(-\infty)$ and $\mu(a, \infty) = F(\infty) - F(a)$. Note that $\mathscr{I}$ contains the empty set via intervals of the form $(a, a]$ where $a \in \R$, but the definition gives $\mu(\emptyset) = 0$. Next, $\mu$ is finitely additive on $\mathscr{I}$. That is, if $\{A_i: i \in I\}$ is a finite, disjoint collection of sets in $\mathscr{I}$ and $\bigcup_{i \in I} A_i \in \mathscr{I}$, then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$ Next, $\mu$ is countably subadditive on $\mathscr{I}$. That is, if $A \in \mathscr{I}$ and $A \subseteq \bigcup_{i \in I} A_i$ where $\{A_i: i \in I\}$ is a countable collection of sets in $\mathscr{I}$ then $\mu(A) \le \sum_{i \in I} \mu(A_i)$ Finally, $\mu$ is clearly $\sigma$-finite on $\mathscr{I}$ since $\mu(a, b] \lt \infty$ for $a, \, b \in \R$ with $a \lt b$, and $\R$ is a countable, disjoint union of intervals of this form. Hence it follows from the basic extension and uniqueness theorems that $\mu$ can be extended uniquely to a measure on the $\sigma$-algebra $\mathscr{R} = \sigma(\mathscr{I})$.
For the final uniqueness part, suppose that $\mu$ is a measure on $\mathscr{R}$ satisfying $\mu(a, b] = F(b) - F(a)$ for $a, \, b \in \R$ with $a \lt b$. Then by the continuity theorem for increasing sets, $\mu(-\infty, a] = F(a) - F(-\infty)$ and $\mu(a, \infty) = F(\infty) - F(a)$ for $a \in \R$. Hence $\mu$ is the unique measure constructed above.
The measure $\mu$ is called the Lebesgue-Stieltjes measure associated with $F$, named for Henri Lebesgue and Thomas Joannes Stieltjes. A very rich variety of measures on $\R$ can be constructed in this way. In particular, when the function $F$ takes values in $[0, 1]$, the associated measure $\P$ is a probability measure. Another special case of interest is the distribution function defined by $F(x) = x$ for $x \in \R$, in which case $\mu(a, b]$ is the length of the interval $(a, b]$ and therefore $\mu = \lambda$, Lebesgue measure on $\mathscr{R}$. But although the measure associated with a distribution function is unique, the distribution function itself is not. Note that if $c \in \R$ then the distribution function defined by $F(x) = x + c$ for $x \in \R$ also generates Lebesgue measure. This example captures the general situation.
Suppose that $F$ and $G$ are distribution functions that generate the same measure $\mu$ on $\R$. Then there exists $c \in \R$ such that $G = F + c$.
Proof
For $x \in \R$, note that $F(x) - F(0) = G(x) - G(0)$. The common value is $\mu(0, x]$ if $x \ge 0$ and $-\mu(x, 0]$ if $x \lt 0$. Thus $G(x) = F(x) - F(0) + G(0)$ for $x \in \R$.
Returning to the case of a probability measure $\P$ on $\R$, the cumulative distribution function $F$ that we studied in this chapter is the unique distribution function satisfying $F(-\infty) = 0$. More generally, having constructed a measure from a distribution function, let's now consider the complementary problem of finding a distribution function for a given measure. The proof of the last theorem points the way.
Suppose that $\mu$ is a positive measure on $(\R, \mathscr{R})$ with the property that $\mu(A) \lt \infty$ if $A$ is bounded. Then there exists a distribution function that generates $\mu$.
Proof
Define $F$ on $\R$ by $F(x) = \begin{cases} \mu(0, x], & x \ge 0 \\ -\mu(x, 0], & x \lt 0 \end{cases}$ Then $F: \R \to \R$ by the assumption on $\mu$. Also $F$ is increasing: if $0 \le x \le y$ then $\mu(0, x] \le \mu(0, y]$ by the increasing property of a positive measure. Similarly, if $x \le y \le 0$, then $\mu(x, 0] \ge \mu(y, 0]$, so $-\mu(x, 0] \le -\mu(y, 0]$. Finally, if $x \le 0 \le y$, then $-\mu(x, 0] \le 0$ and $\mu(0, y] \ge 0$. Next, $F$ is continuous from the right: Suppose that $x_n \in \R$ for $n \in \N_+$ and $x_n \downarrow x$ as $n \to \infty$. If $x \ge 0$ then $\mu(0, x_n] \downarrow \mu(0, x]$ by the continuity theorem for decreasing sets, which applies since the measures are finite. If $x \lt 0$ then $\mu(x_n, 0] \uparrow \mu(x, 0]$ by the continuity theorem for increasing sets. So in both cases, $F(x_n) \downarrow F(x)$ as $n \to \infty$. Hence $F$ is a distribution function, and it remains to show that it generates $\mu$. Let $a, \, b \in \R$ with $a \le b$. If $a \ge 0$ then $\mu(a, b] = \mu(0, b] - \mu(0, a] = F(b) - F(a)$ by the difference property of a positive measure. Similarly, if $b \le 0$ then $\mu(a, b] = \mu(a, 0] - \mu(b, 0] = -F(a) + F(b)$. Finally, if $a \le 0$ and $b \ge 0$, then $\mu(a, b] = \mu(a, 0] + \mu(0, b] = -F(a) + F(b)$.
In the proof of the last theorem, the use of 0 as a reference point is arbitrary, of course. Any other point in $\R$ would do as well, and would produce a distribution function that differs from the one in the proof by a constant. If $\mu$ has the property that $\mu(-\infty, x] \lt \infty$ for $x \in \R$, then it's easy to see that $F$ defined by $F(x) = \mu(-\infty, x]$ for $x \in \R$ is a distribution function that generates $\mu$, and is the unique distribution function with $F(-\infty) = 0$. Of course, in the case of a probability measure, this is the cumulative distribution function, as noted above.
Properties
General distribution functions enjoy many of the same properties as the cumulative distribution function (but not all because of the lack of uniqueness). In particular, we can easily compute the measure of any interval from the distribution function.
Suppose that $F$ is a distribution function and $\mu$ is the positive measure on $(\R, \mathscr{R})$ associated with $F$. For $a, \, b \in \R$ with $a \lt b$,
1. $\mu[a, b] = F(b) - F(a^-)$
2. $\mu\{a\} = F(a) - F(a^-)$
3. $\mu(a, b) = F(b^-) - F(a)$
4. $\mu[a, b) = F(b^-) - F(a^-)$
Proof
All of these results follow from the continuity theorems for a positive measure. Suppose that $(x_1, x_2, \ldots)$ is a sequence of distinct points in $\R$.
1. If $x_n \uparrow a$ as $n \to \infty$ then $(x_n, b] \uparrow [a, b]$ so $\mu(x_n, b] \uparrow \mu[a, b]$ as $n \to \infty$. But also $\mu(x_n, b] = F(b) - F(x_n) \to F(b) - F(a^-)$ as $n \to \infty$.
2. This follows from (a) by taking $a = b$.
3. If $x_n \uparrow b$ as $n \to \infty$ then $(a, x_n] \uparrow (a, b)$ so $\mu(a, x_n] \uparrow \mu(a, b)$ as $n \to \infty$. But also $\mu(a, x_n] = F(x_n) - F(a) \to F(b^-) - F(a)$ as $n \to \infty$.
4. From (a) and (b) and the difference rule, $\mu[a, b) = \mu[a, b] - \mu\{b\} = F(b) - F(a^-) - \left[F(b) - F(b^-)\right] = F(b^-) - F(a^-)$
Note that $F$ is continuous at $x \in \R$ if and only if $\mu\{x\} = 0$. In particular, $\mu$ is a continuous measure (recall that this means that $\mu\{x\} = 0$ for all $x \in \R$) if and only if $F$ is continuous on $\R$. On the other hand, $F$ is discontinuous at $x \in \R$ if and only if $\mu\{x\} \gt 0$, so that $\mu$ has an atom at $x$. So $\mu$ is a discrete measure (recall that this means that $\mu$ has countable support) if and only if $F$ is a step function.
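A quick numerical illustration of these interval formulas (our own example): take $F(x) = x$ plus a unit jump at $x = 1$, so the associated measure is Lebesgue measure plus a point mass at 1. The left limits are approximated numerically, so the printed values carry an error of order the chosen $\epsilon$:

```python
def F(x):
    """Distribution function: F(x) = x plus a unit jump at x = 1 (right continuous)."""
    return x + (1.0 if x >= 1 else 0.0)

def F_left(x, eps=1e-9):
    """Numerical approximation of the left limit F(x^-) for this piecewise example."""
    return F(x - eps)

a, b = 0.5, 1.5
print("mu(a, b] =", F(b) - F(a))         # length 1 plus the atom at 1: 2.0
print("mu[a, b] =", F(b) - F_left(a))    # same, since F is continuous at a
print("mu{1}    =", F(1) - F_left(1))    # the atom: 1.0
print("mu(a, b) =", F_left(b) - F(a))    # same, since F is continuous at b
```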
Suppose again that $F$ is a distribution function and $\mu$ is the positive measure on $(\R, \mathscr{R})$ associated with $F$. If $a \in \R$ then
1. $\mu(a, \infty) = F(\infty) - F(a)$
2. $\mu[a, \infty) = F(\infty) - F(a^-)$
3. $\mu(-\infty, a] = F(a) - F(-\infty)$
4. $\mu(-\infty, a) = F(a^-) - F(-\infty)$
5. $\mu(\R) = F(\infty) - F(-\infty)$
Proof
The proofs, as before, just use the continuity theorems. Suppose that $(x_1, x_2, \ldots)$ is a sequence of distinct points in $\R$.
1. If $x_n \uparrow \infty$ as $n \to \infty$ then $(a, x_n] \uparrow (a, \infty)$ so $\mu(a, x_n] \uparrow \mu(a, \infty)$ as $n \to \infty$. But also $\mu(a, x_n] = F(x_n) - F(a) \to F(\infty) - F(a)$ as $n \to \infty$
2. Similarly, if $x_n \uparrow \infty$ as $n \to \infty$ then $[a, x_n] \uparrow [a, \infty)$ so $\mu[a, x_n] \uparrow \mu[a, \infty)$ as $n \to \infty$. But also $\mu[a, x_n] = F(x_n) - F(a^-) \to F(\infty) - F(a^-)$ as $n \to \infty$
3. If $x_n \downarrow -\infty$ as $n \to \infty$ then $(x_n, a] \uparrow (-\infty, a]$ so $\mu(x_n, a] \uparrow \mu(-\infty, a]$ as $n \to \infty$. But also $\mu(x_n, a] = F(a) - F(x_n) \to F(a) - F(-\infty)$ as $n \to \infty$
4. Similarly, if $x_n \downarrow -\infty$ as $n \to \infty$ then $(x_n, a) \uparrow (-\infty, a)$ so $\mu(x_n, a) \uparrow \mu(-\infty, a)$ as $n \to \infty$. But also $\mu(x_n, a) = F(a^-) - F(x_n) \to F(a^-) - F(-\infty)$ as $n \to \infty$
5. $\mu(\R) = \mu(-\infty, 0] + \mu(0, \infty) = \left[F(0) - F(-\infty)\right] + \left[F(\infty) - F(0)\right] = F(\infty) - F(-\infty)$.
Distribution Functions on $[0, \infty)$
Positive measures and distribution functions on $[0, \infty)$ are particularly important in renewal theory and Poisson processes, because they model random times.
The discrete case. Suppose that $G$ is discrete, so that there exists a countable set $C \subset [0, \infty)$ with $G\left(C^c\right) = 0$. Let $g(t) = G\{t\}$ for $t \in C$ so that $g$ is the density function of $G$ with respect to counting measure on $C$. If $u: [0, \infty) \to \R$ is locally bounded then $\int_0^t u(s) \, dG(s) = \sum_{s \in C \cap [0, t]} u(s) g(s)$
In the discrete case, the distribution is often arithmetic. Recall that this means that the countable set $C$ is of the form $\{n d: n \in \N\}$ for some $d \in (0, \infty)$.
The continuous case. Suppose that $G$ is absolutely continuous with respect to Lebesgue measure on $[0, \infty)$ with density function $g: [0, \infty) \to [0, \infty)$. If $u: [0, \infty) \to \R$ is locally bounded then $\int_0^t u(s) \, dG(s) = \int_0^t u(s) g(s) \, ds$
The mixed case. Suppose that there exists a countable set $C \subset [0, \infty)$ with $G(C) \gt 0$ and $G\left(C^c\right) \gt 0$, and that $G$ restricted to subsets of $C^c$ is absolutely continuous with respect to Lebesgue measure. Let $g(t) = G\{t\}$ for $t \in C$ and let $h$ be a density with respect to Lebesgue measure of $G$ restricted to subsets of $C^c$. If $u: [0, \infty) \to \R$ is locally bounded then, $\int_0^t u(s) \, dG(s) = \sum_{s \in C \cap [0, t]} u(s) g(s) + \int_0^t u(s) h(s) \, ds$
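As an illustration (a hypothetical example, not from the text), the sketch below approximates $\int_0^t u(s) \, dG(s)$ in the mixed case, with a single atom of mass $1/2$ at $s = 1$, continuous density $h(s) = \frac{1}{2} e^{-s}$, and $u(s) = s^2$; the continuous part is approximated by a midpoint Riemann sum:

```python
import math

atoms = {1.0: 0.5}                  # discrete part: mass 1/2 at s = 1
h = lambda s: 0.5 * math.exp(-s)    # continuous part: density on [0, infinity)
u = lambda s: s ** 2                # a locally bounded integrand

def stieltjes_integral(t, steps=100_000):
    """Approximate the integral of u over [0, t] with respect to dG: the sum over
    the atoms plus a midpoint Riemann sum for the continuous part."""
    discrete = sum(u(s) * g for s, g in atoms.items() if s <= t)
    ds = t / steps
    continuous = sum(u((i + 0.5) * ds) * h((i + 0.5) * ds) for i in range(steps)) * ds
    return discrete + continuous

# atom term 1/2 plus (1/2) * int_0^2 s^2 e^{-s} ds = 0.5 + (2 - 10 e^{-2}) / 2
print(stieltjes_integral(2.0))
```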
The three special cases do not exhaust the possibilities, but are by far the most common cases in applied problems.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\range}{\text{range}}$
Probability density functions have very different interpretations for discrete distributions as opposed to continuous distributions. For a discrete distribution, the probability of an event is computed by summing the density function over the outcomes in the event, while for a continuous distribution, the probability is computed by integrating the density function over the outcomes. For a mixed distribution, we have partial discrete and continuous density functions, and the probability of an event is computed by summing and integrating. The various types of density functions can be unified under a general theory of integration, which is the subject of this section. This theory has enormous importance in probability, far beyond just density functions. Expected value, which we consider in the next chapter, can be interpreted as an integral with respect to a probability measure. Beyond probability, the general theory of integration is of fundamental importance in many areas of mathematics.
Basic Theory
Definitions
Our starting point is a measure space $(S, \mathscr{S}, \mu)$. That is, $S$ is a set, $\mathscr{S}$ is a $\sigma$-algebra of subsets of $S$, and $\mu$ is a positive measure on $\mathscr{S}$. As usual, the most important special cases are
• Euclidean space: $S = \R^n$ for some $n \in \N_+$, $\mathscr{S} = \mathscr R_n$, the $\sigma$-algebra of Lebesgue measurable subsets of $\R^n$, and $\mu = \lambda_n$, standard $n$-dimensional Lebesgue measure.
• Discrete space: $S$ is a countable set, $\mathscr{S}$ is the collection of all subsets of $S$, and $\mu = \#$, counting measure.
• Probability space: $S$ is the set of outcomes of a random experiment, $\mathscr{S}$ is the $\sigma$-algebra of events, and $\mu = \P$, a probability measure.
The following definition reflects the fact that in measure theory, sets of measure 0 are often considered unimportant.
Consider a statement with $x \in S$ as a free variable. Technically such a statement is a predicate on $S$. Suppose that $A \in \mathscr{S}$.
1. The statement holds on $A$ if it is true for every $x \in A$.
2. The statement holds almost everywhere on $A$ (with respect to $\mu$) if there exists $B \in \mathscr{S}$ with $B \subseteq A$ such that the statement holds on $B$ and $\mu(A \setminus B) = 0$.
A typical statement that we have in mind is an equation or an inequality with $x \in S$ as a free variable. Our goal is to define the integral of certain measurable functions $f: S \to \R$, with respect to the measure $\mu$. The integral may exist as a number in $\R$ (in which case we say that $f$ is integrable), or may exist as $\infty$ or $-\infty$, or may not exist at all. When it exists, the integral is denoted variously by $\int_S f \, d\mu, \; \int_S f(x) \, d\mu(x), \; \int_S f(x) \mu(dx)$ We will use the first two.
Since the set of extended real numbers $\R^* = \R \cup \{-\infty, \infty\}$ plays an important role in the theory, we need to recall the arithmetic of $\infty$ and $-\infty$. Here are the conventions that are appropriate for integration:
Arithmetic on $\R^*$
1. If $a \in (0, \infty]$ then $a \cdot \infty = \infty$ and $a \cdot (-\infty) = -\infty$
2. If $a \in [-\infty, 0)$ then $a \cdot \infty = -\infty$ and $a \cdot (-\infty) = \infty$
3. $0 \cdot \infty = 0$ and $0 \cdot (-\infty) = 0$
4. If $a \in \R$ then $a + \infty = \infty$ and $a + (-\infty) = -\infty$
5. $\infty + \infty = \infty$
6. $-\infty + (-\infty) = -\infty$
However, $\infty - \infty$ is not defined (because it does not make consistent sense) and we must be careful never to produce this indeterminate form. You might recall from calculus that $0 \cdot \infty$ is also an indeterminate form. However, for the theory of integration, the convention that $0 \cdot \infty = 0$ is convenient and consistent. In terms of order of course, $-\infty \lt a \lt \infty$ for $a \in \R$.
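IEEE floating point does not follow the convention $0 \cdot \infty = 0$ (it yields nan), so numerical code that mimics the theory must enforce the convention explicitly. A tiny sketch of such a helper (our own):

```python
import math

def ext_mul(a, b):
    """Multiplication on the extended reals with the integration convention
    0 * (+/-)infinity = 0; IEEE floats give nan instead."""
    if a == 0 or b == 0:
        return 0.0
    return a * b

print(0.0 * math.inf)          # nan under IEEE arithmetic
print(ext_mul(0.0, math.inf))  # 0.0 under the convention used here
```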
We also need to extend topology and measure to $\R^*$. In terms of the first, $(a, \infty]$ is an open neighborhood of $\infty$ and $[-\infty, a)$ is an open neighborhood of $-\infty$ for every $a \in \R$. This ensures that if $x_n \in \R$ for $n \in \N_+$ then $x_n \to \infty$ or $x_n \to -\infty$ as $n \to \infty$ has its usual calculus meaning. Technically this topology results in the two-point compactification of $\R$. Now we can give $\R^*$ the Borel $\sigma$-algebra $\mathscr R^*$, that is, the $\sigma$-algebra generated by the topology. Basically, this simply means that if $A \in \mathscr R$ then $A \cup \{\infty\}$, $A \cup \{-\infty\}$, and $A \cup \{-\infty, \infty\}$ are all in $\mathscr R^*$.
Desired Properties
As motivation for the definition, every version of integration should satisfy some basic properties. First, the integral of the indicator function of a measurable set should simply be the size of the set, as measured by $\mu$. This gives our first definition:
If $A \in \mathscr{S}$ then $\int_S \bs{1}_A \, d\mu = \mu(A)$.
This definition hints at the intimate relationship between measure and integration. We will construct the integral from the measure $\mu$ in this section, but this first property shows that if we started with the integral, we could recover the measure. This property also shows why we need $\infty$ as a possible value of the integral, and coupled with some of the properties below, why $-\infty$ is also needed. Here is a simple corollary of our first definition.
$\int_S 0 \, d\mu = 0$
Proof
Note that $\int_S 0 \, d\mu = \int_S \bs{1}_\emptyset \, d\mu = \mu(\emptyset) = 0$.
We give three more essential properties that we want. First are the linearity properties in two parts—part (a) is the additive property and part (b) is the scaling property.
If $f, \; g: S \to \R$ are measurable functions whose integrals exist, and $c \in \R$, then
1. $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$ as long as the right side is not of the form $\infty - \infty$
2. $\int_S c f \, d\mu = c \int_S f \, d\mu$.
The additive property almost implies the scaling property
The steps below do not constitute a proof because questions of the existence of the integrals are ignored and because the limit interchange in the last step is not justified. Still, the argument shows the close relationship between the additive property and the scaling property.
1. If $n \in \N_+$, then by (a) and induction, $\int_S n f \, d\mu = n \int_S f \, d\mu$.
2. From step (1), if $n \in \N_+$ then $\int_S f \, d\mu = \int_S n \frac{1}{n} f \, d\mu = n \int_S \frac{1}{n} f \, d\mu$ so $\int_S \frac{1}{n} f \, d\mu = \frac{1}{n} \int_S f \, d\mu$.
3. If $m, \; n \in \N_+$ then from steps (1) and (2) $\int_S \frac{m}{n} f \, d\mu = m \int_S \frac{1}{n} f \, d\mu = \frac{m}{n} \int_S f \, d\mu$.
4. $0 = \int_S 0 \, d\mu = \int_S (f - f) \, d\mu = \int_S f \, d\mu + \int_S - f \, d\mu$ so $\int_S -f \, d\mu = -\int_S f \, d\mu$.
5. By steps (3) and (4), $\int_S c f \, d\mu = c \int_S f \, d\mu$ for every $c \in \Q$ (the set of rational real numbers).
6. If $c \in \R$ there exists $c_n \in \Q$ for $n \in \N_+$ with $c_n \to c$ as $n \to \infty$. By step (5), $\int_S c_n f \, d\mu = c_n \int_S f \, d\mu$.
7. Taking limits in step (6) suggests $\int_S c f \, d\mu = c \int_S f \, d\mu$.
To be more explicit, we want the additivity property (a) to hold if at least one of the integrals on the right is finite, or if both are $\infty$ or if both are $-\infty$. What is ruled out are the two cases where one integral is $\infty$ and the other is $-\infty$, and this is what is meant by the indeterminate form $\infty - \infty$. Our next essential properties are the order properties, again in two parts—part (a) is the positive property and part (b) is the increasing property.
Suppose that $f, \, g: S \to \R$ are measurable.
1. If $f \ge 0$ on $S$ then $\int_S f \, d\mu \ge 0$.
2. If the integrals of $f$ and $g$ exist and $f \le g$ on $S$ then $\int_S f \, d\mu \le \int_S g \, d\mu$
The positive property and the additive property imply the increasing property
Implicit in part (a) is that the integral of a nonnegative, measurable function always exists in $[0, \infty]$. Suppose that the integrals of $f$ and $g$ exist and $f \le g$ on $S$. Then $g - f \ge 0$ on $S$ and $g = f + (g - f)$. If $\int_S f \, d\mu = -\infty$, then trivially $\int_S f \, d\mu \le \int_S g \, d\mu$. Otherwise, by the additivity property, $\int_S g \, d\mu = \int_S f \, d\mu + \int_S (g - f) \, d\mu$ But $\int_S (g - f) \, d\mu \ge 0$ (so in particular the right side is not $-\infty + \infty$), and hence $\int_S g \, d\mu \ge \int_S f \, d\mu$.
Our last essential property is perhaps the least intuitive, but is a type of continuity property of integration, and is closely related to the continuity property of positive measure. The official name is the monotone convergence theorem.
Suppose that $f_n: S \to [0, \infty)$ is measurable for $n \in \N_+$ and that $f_n$ is increasing in $n$. Then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n d \mu$
Note that since $f_n$ is increasing in $n$, $\lim_{n \to \infty} f_n(x)$ exists in $\R \cup \{\infty\}$ for each $x \in \R$ (and the limit defines a measurable function). This property shows that it is sometimes convenient to allow nonnegative functions to take the value $\infty$. Note also that by the increasing property, $\int_S f_n \, d\mu$ is increasing in $n$ and hence also has a limit in $\R \cup \{\infty\}$.
To see the connection with measure, suppose that $(A_1, A_2, \ldots)$ is an increasing sequence of sets in $\mathscr{S}$, and let $A = \bigcup_{i=1}^\infty A_i$. Note that $\bs{1}_{A_n}$ is increasing in $n \in \N_+$ and $\bs{1}_{A_n} \to \bs{1}_{A}$ as $n \to \infty$. For this reason, the union $A$ is sometimes called the limit of $A_n$ as $n \to \infty$. The continuity theorem of positive measure states that $\mu(A_n) \to \mu(A)$ as $n \to \infty$. Equivalently, $\int_S \bs{1}_{A_n} \, d\mu \to \int_S \bs{1}_A \, d\mu$ as $n \to \infty$, so the continuity theorem of positive measure is a special case of the monotone convergence theorem.
Armed with the properties that we want, the definition of the integral is fairly straightforward, and proceeds in stages. We give the definition successively for
1. Nonnegative simple functions
2. Nonnegative measurable functions
3. Measurable real-valued functions
Of course, each definition should agree with the previous one on the functions that are in both collections.
Simple Functions
A simple function on $S$ is simply a measurable, real-valued function with finite range. Simple functions are usually expressed as linear combinations of indicator functions.
Representations of simple functions
1. Suppose that $I$ is a finite index set, $a_i \in \R$ for each $i \in I$, and $\{A_i: i \in I\}$ is a collection of sets in $\mathscr{S}$ that partition $S$. Then $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ is a simple function. Expressing a simple function in this form is a representation of $f$.
2. A simple function $f$ has a unique representation as $f = \sum_{j \in J} b_j \bs{1}_{B_j}$ where $J$ is a finite index set, $\{b_j: j \in J\}$ is a set of distinct real numbers, and $\{B_j: j \in J\}$ is a collection of nonempty sets in $\mathscr{S}$ that partition $S$. This representation is known as the canonical representation.
Proof
1. Note that $f$ is measurable since $A_i \in \mathscr{S}$ for each $i \in I$. Also $f$ has finite range since $I$ is finite. Specifically, the range of $f$ consists of the distinct $a_i$ for $i \in I$ with $A_i \ne \emptyset$.
2. Suppose that $f$ is simple. Let $\{b_j: j \in J\}$ denote the (distinct) values in the range of $f$ and let $B_j = f^{-1}\{b_j\}$ for $j \in J$. Then $J$ is finite, $\{B_j: j \in J\}$ is a collection of nonempty sets in $\mathscr{S}$ that partition $S$, and $f = \sum_{j \in J} b_j \bs{1}_{B_j}$. Conversely, suppose that $f$ has a representation of this form. Then $\{b_j: j \in J\}$ is the range of $f$ and $B_j = f^{-1}\{b_j\}$ so the representation is unique.
You might wonder why we don't just always use the canonical representation for simple functions. The problem is that even if we start with canonical representations, when we combine simple functions in various ways, the resulting representations may not be canonical. The collection of simple functions is closed under the basic arithmetic operations, and in particular, forms a vector space.
Suppose that $f$ and $g$ are simple functions with representations $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ and $g = \sum_{j \in J} b_j \bs{1}_{B_j}$, and that $c \in \R$. Then
1. $f + g$ is simple, with representation $f + g = \sum_{(i, j) \in I \times J} (a_i + b_j) \bs{1}_{A_i \cap B_j}$.
2. $f g$ is simple, with representation $f g = \sum_{(i, j) \in I \times J} (a_i b_j) \bs{1}_{A_i \cap B_j}$.
3. $c f$ is simple, with representation $c f = \sum_{i \in I} c a_i \bs{1}_{A_i}$.
Proof
Since $f$ and $g$ are measurable, so are $f + g$, $f g$, and $c f$. Moreover, since $f$ and $g$ have finite range, so do $f + g$, $f g$, and $c f$. For the representations in parts (a) and (b), note that $I \times J$ is finite, $\left\{A_i \cap B_j: (i, j) \in I \times J\right\}$ is a collection of sets in $\mathscr{S}$ that partition $S$, and on $A_i \cap B_j$, $f + g = a_i + b_j$ and $f g = a_i b_j$.
As we alluded to earlier, note that even if the representations of $f$ and $g$ are canonical, the representations for $f + g$ and $f g$ may not be. The next result treats composition, and will be important for the change of variables theorem in the next section.
Suppose that $(T, \mathscr{T})$ is another measurable space, and that $f: S \to T$ is measurable. If $g$ is a simple function on $T$ with representation $g = \sum_{i \in I} b_i \bs{1}_{B_i}$, then $g \circ f$ is a simple function on $S$ with representation $g \circ f = \sum_{i \in I} b_i \bs{1}_{f^{-1}(B_i)}$.
Proof
Recall that $g \circ f : S \to \R$ and $\range(g \circ f) \subseteq \range(g)$ so $g \circ f$ has finite range. $f$ is measurable, and inverse images preserve all set operations, so $\left\{f^{-1}(B_i): i \in I\right\}$ is a measurable partition of $S$. Finally, if $x \in f^{-1}(B_i)$ then $f(x) \in B_i$ so $g\left[f(x)\right] = b_i$.
Given the definition of the integral of an indicator function in (3) and that we want the linearity property (5) to hold, there is no question as to how we should define the integral of a nonnegative simple function.
Suppose that $f$ is a nonnegative simple function, with the representation $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ where $a_i \ge 0$ for $i \in I$. We define $\int_S f \, d\mu = \sum_{i \in I} a_i \mu(A_i)$
The definition is consistent
Consistency refers to the fact that a simple function can have more than one representation as a linear combination of indicator functions, and hence we must show that all such representations lead to the same value for the integral. Let $\{b_j: j \in J\}$ denote the set of distinct elements among the numbers $a_i$ where $i \in I$ and $A_i \neq \emptyset$. For $j \in J$, let $I_j = \{i \in I: a_i = b_j\}$ and let $B_j = \bigcup_{i \in I_j} A_i$. Thus, $f = \sum_{j \in J} b_j \bs{1}_{B_j}$, and this is the canonical representation. Note that $\sum_{i \in I} a_i \mu(A_i) = \sum_{j \in J} \sum_{i \in I_j} a_i \mu(A_i) = \sum_{j \in J} b_j \sum_{i \in I_j} \mu(A_i) = \sum_{j \in J} b_j \mu(B_j)$ The first sum is the integral defined in terms of the general representation $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ while the last sum is the integral defined in terms of the unique canonical representation $f = \sum_{j \in J} b_j \bs{1}_{B_j}$. Thus, any representation of a simple function $f$ leads to the same value for the integral.
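The definition is easy to compute with directly. In the sketch below (our own representation choices), a simple function is a list of (value, set) pairs over a finite space, the measure is a dictionary of point masses, and the integral is the sum $\sum_{i} a_i \mu(A_i)$:

```python
S = set(range(10))
mu = {x: 1 for x in S}  # counting measure on S, stored as point masses

def measure(A):
    return sum(mu[x] for x in A)

def integral_simple(rep):
    """Integral of a nonnegative simple function sum_i a_i 1_{A_i}: sum_i a_i mu(A_i)."""
    return sum(a * measure(A) for a, A in rep)

# f = 2 on {0,...,4} and f = 5 on {5,...,9}
f = [(2, set(range(5))), (5, set(range(5, 10)))]
print(integral_simple(f))  # 2 * 5 + 5 * 5 = 35
```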
Note that if $f$ is a nonnegative simple function, then $\int_S f \, d\mu$ exists in $[0, \infty]$, so the order properties holds. We next show that the linearity properties are satisfied for nonnegative simple functions.
Suppose that $f$ and $g$ are nonnegative simple functions, and that $c \in [0, \infty)$. Then
1. $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$
2. $\int_S c f \, d\mu = c \int_S f \, d\mu$
Proof
Suppose that $f$ and $g$ are nonnegative simple functions with the representations $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ and $g = \sum_{j \in J} b_j \bs{1}_{B_j}$. Thus $a_i \ge 0$ for $i \in I$, $b_j \ge 0$ for $j \in J$, and $\int_S f \, d\mu = \sum_{i \in I} a_i \mu(A_i)$ and $\int_S g \, d\mu = \sum_{j \in J} b_j \mu(B_j)$.
1. As noted above, $f + g$ has the representation $f + g = \sum_{(i, j) \in I \times J} (a_i + b_j) \bs{1}_{A_i \cap B_j}$ Note that $\{A_i \cap B_j: j \in J\}$ is a partition of $A_i$ for each $i \in I$, and similarly $\{A_i \cap B_j: i \in I\}$ is a partition of $B_j$ for each $j \in J$. Hence \begin{align} \int_S (f + g) \, d\mu & = \sum_{(i, j) \in I \times J} (a_i + b_j) \mu(A_i \cap B_j) \\ & = \sum_{i \in I} \sum_{j \in J} a_i \mu(A_i \cap B_j) + \sum_{j \in J} \sum_{i \in I} b_j \mu(A_i \cap B_j) \\ & = \sum_{i \in I} a_i \mu(A_i) + \sum_{j \in J} b_j \mu(B_j) = \int_S f \, d\mu + \int_S g \, d\mu \end{align} Note that all the terms are nonnegative (although some may be $\infty$), so there are no problems with rearranging the order of the terms.
2. This part is easier. For $c \in [0, \infty)$, recall that $c f$ has the representation $c f = \sum_{i \in I} c a_i \bs{1}_{A_i}$ so $\int_S c f \, d\mu = \sum_{i \in I} c a_i \mu(A_i) = c \sum_{i \in I} a_i \mu(A_i) = c \int_S f \, d\mu$
The increasing property holds for nonnegative simple functions.
Suppose that $f$ and $g$ are nonnegative simple functions and $f \le g$ on $S$. Then $\int_S f \, d\mu \le \int_S g \, d\mu$
Proof
The proof from the additive property above works. Note that $g - f$ is a nonnegative simple function, and $g = f + (g - f)$. By the additivity property, $\int_S g \, d\mu = \int_S f \, d\mu + \int_S (g - f) \, d\mu \ge \int_S f \, d\mu$.
Next we give a version of the continuity theorem in (7) for simple functions. It's not completely general, but will be needed for the next subsection where we do prove the general version.
Suppose that $f$ is a nonnegative simple function and that $(A_1, A_2, \ldots)$ is an increasing sequence of sets in $\mathscr{S}$ with $A = \bigcup_{n=1}^\infty A_n$. Then $\int_S \bs{1}_{A_n} f \, d\mu \to \int_S \bs{1}_A f \, d\mu \text{ as } n \to \infty$
Proof
Suppose that $f$ has the representation $f = \sum_{i \in I} b_i \bs{1}_{B_i}$. Then $\bs{1}_{A_n} f = \sum_{i \in I} b_i \bs{1}_{A_n} \bs{1}_{B_i} = \sum_{i \in I} b_i \bs{1}_{A_n \cap B_i}$ and similarly, $\bs{1}_A f = \sum_{i \in I} b_i \bs{1}_{A \cap B_i}$. But for each $i \in I$, $B_i \cap A_n$ is increasing in $n \in \N_+$ and $\bigcup_{n=1}^\infty (B_i \cap A_n) = B_i \cap A$. By the continuity theorem for positive measures, $\mu(B_i \cap A_n) \to \mu(B_i \cap A)$ as $n \to \infty$ for each $i \in I$. Since $I$ is finite, $\int_{A_n} f \, d\mu = \sum_{i \in I} b_i \mu(A_n \cap B_i) \to \sum_{i \in I} b_i \mu(A \cap B_i) = \int_A f \, d\mu \text{ as } n \to \infty$
Note that $\bs{1}_{A_n} f$ is increasing in $n \in \N_+$ and $\bs{1}_{A_n} f \to \bs{1}_A f$ as $n \to \infty$, so this really is a special case of the monotone convergence theorem.
Nonnegative Functions
Next we will consider nonnegative measurable functions on $S$. First we note that a function of this type is the limit of nonnegative simple functions.
Suppose that $f: S \to [0, \infty)$ is measurable. Then there exists an increasing sequence $\left(f_1, f_2, \ldots\right)$ of nonnegative simple functions with $f_n \to f$ on $S$ as $n \to \infty$.
Proof
For $n \in \N_+$ and $k \in \left\{1, 2, \ldots, n 2^n\right\}$, let $I_{n,k} = \left[(k - 1) \big/ 2^n, k \big/ 2^n\right)$ and $I_n = [n, \infty)$. Note that
1. $\left\{I_{n,k}: k = 1, \ldots, n 2^n\right\} \cup \left\{I_n\right\}$ is a partition of $[0, \infty)$ for each $n \in \N_+$.
2. $I_{n, k} = I_{n + 1, 2 k - 1} \cup I_{n + 1, 2 k}$ for $k \in \{1, 2, \ldots, n 2^n\}$.
3. $I_n = \left(\bigcup_{k = n 2^{n + 1} + 1}^{(n+1)2^{n+1}} I_{n+1,k} \right) \cup I_{n+1}$ for $n \in \N_+$.
Note that the $n$th partition divides the interval $[0, n)$ into $n 2^n$ subintervals of length $1 \big/ 2^n$. Thus, (b) follows because the $(n + 1)$st partition divides each of the first $2^n$ intervals of the $n$th partition in half, and (c) follows because the $(n + 1)$st partition divides the interval $[n, n + 1)$ into subintervals of length $1 \big/ 2^{n + 1}$. Now let $A_{n,k} = f^{-1}\left(I_{n,k}\right)$ and $A_n = f^{-1}\left(I_n\right)$ for $n \in \N_+$ and $k \in \left\{1, 2, \ldots, n 2^n\right\}$. Since inverse images preserve all set operations, (a), (b), and (c) hold with $A$ replacing $I$ everywhere, and $S$ replacing $[0, \infty)$ in (a). Moreover, since $f$ is measurable, $A_n \in \mathscr{S}$ and $A_{n, k} \in \mathscr{S}$ for each $n$ and $k$. Now, define $f_n = \sum_{k = 1}^{ n 2^n} \frac{k - 1}{2^n} \bs{1}_{A_{n, k}} + n \bs{1}_{A_n}$ Then $f_n$ is a simple function and $0 \le f_n \le f$ for each $n \in \N_+$. To show convergence, fix $x \in S$. If $n \gt f(x)$ then $\left|f(x) - f_n(x)\right| \le 2^{-n}$ and hence $f_n(x) \to f(x)$ as $n \to \infty$. All that remains is to show that $f_n$ is increasing in $n$. Let $x \in S$ and $n \in \N_+$. If $x \in A_{n,k}$ for some $k \in \left\{1, 2, \ldots, n 2^n\right\}$, then $f_n(x) = (k - 1) \big/ 2^n$. But either $f_{n + 1}(x) = (2 k - 2) \big/ 2^{n + 1}$ or $f_{n + 1}(x) = (2 k - 1) \big/ 2^{n + 1}$. If $x \in A_n$ then $f_n(x) = n$. But either $f_{n+1}(x) = (k - 1) \big/ 2^{n + 1}$ for some $k \in \left\{n 2^{n+1} + 1, \ldots, (n + 1) 2^{n+1}\right\}$ or $f_{n+1}(x) = n + 1$. In all cases, $f_{n + 1}(x) \ge f_n(x)$.
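The approximating functions in the proof have a convenient closed form: $f_n(x) = \lfloor 2^n f(x) \rfloor / 2^n$ where $f(x) \lt n$, and $f_n(x) = n$ where $f(x) \ge n$. A short sketch (our own, exact up to floating point) that exhibits the increasing convergence:

```python
def simple_approx(f, n):
    """The n-th approximating simple function from the proof:
    f_n(x) = floor(2^n f(x)) / 2^n where f(x) < n, and f_n(x) = n where f(x) >= n."""
    def f_n(x):
        y = f(x)
        return n if y >= n else int(y * 2 ** n) / 2 ** n
    return f_n

f = lambda x: x * x
for n in (1, 2, 4, 8):
    f_n = simple_approx(f, n)
    # values increase in n and approach f(x) = 0.09, 0.49, 2.56
    print(n, [f_n(x) for x in (0.3, 0.7, 1.6)])
```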
The last result points the way towards the definition of the integral of a measurable function $f: S \to [0, \infty)$ in terms of the integrals of simple functions. If $g$ is a nonnegative simple function with $g \le f$, then by the order property, we need $\int_S g \, d\mu \le \int_S f \, d\mu$. On the other hand, there exists a sequence of nonnegative simple function converging to $f$. Thus the continuity property suggests the following definition:
If $f: S \to [0, \infty)$ is measurable, define $\int_S f \, d\mu = \sup\left\{ \int_S g \, d\mu: g \text{ is simple and } 0 \le g \le f \right\}$
Note that $\int_S f \, d\mu$ exists in $[0, \infty]$ so the positive property holds. Note also that if $f$ is simple, the new definition agrees with the old one. As always, we need to establish the essential properties. First, the increasing property holds.
If $f, \, g: S \to [0, \infty)$ are measurable and $f \le g$ on $S$ then $\int_S f \, d\mu \le \int_S g \, d\mu$.
Proof
Note that $\{h: h \text{ is simple and } 0 \le h \le f\} \subseteq \{ h: h \text{ is simple and } 0 \le h \le g\}$. Therefore $\int_S f \, d\mu = \sup\left\{\int_S h \, d\mu: h \text{ is simple and } 0 \le h \le f \right\} \le \sup\left\{\int_S h \, d\mu: h \text{ is simple and } 0 \le h \le g\right\} = \int_S g \, d\mu$
We can now prove the continuity property known as the monotone convergence theorem in full generality.
Suppose that $f_n: S \to [0, \infty)$ is measurable for $n \in \N_+$ and that $f_n$ is increasing in $n$. Then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n d \mu$
Proof
Let $f = \lim_{n \to \infty} f_n$. By the order property, note that $\int_S f_n \, d\mu$ is increasing in $n \in \N_+$ and hence has a limit in $\R^*$, which we will denote by $c$. Note that $f_n \le f$ on $S$ for $n \in \N_+$, so by the order property again, $\int_S f_n \, d\mu \le \int_S f \, d\mu$ for $n \in \N_+$. Letting $n \to \infty$ gives $c \le \int_S f \, d\mu$. To show that $c \ge \int_S f \, d\mu$ we need to show that $c \ge \int_S g \, d\mu$ for every simple function $g$ with $0 \le g \le f$. Fix $a \in (0, 1)$ and let $A_n = \{ x \in S: f_n(x) \ge a g(x)\}$. Since $f_n$ is increasing in $n$, $A_n \subseteq A_{n+1}$. Moreover, since $f_n \to f$ as $n \to \infty$ on $S$ and $g \le f$ on $S$, $\bigcup_{n=1}^\infty A_n = S$. But by definition, $a g \le f_n$ on $A_n$ so $a \int_S \bs{1}_{A_n} g \, d\mu = \int_S a \bs{1}_{A_n} g \, d\mu \le \int_S \bs{1}_{A_n} f_n \, d\mu \le \int_S f_n \, d\mu$ Letting $n \to \infty$ in the extreme parts of the displayed inequality and using the version of the monotone convergence theorem for simple functions, we have $a \int_S g \, d\mu \le c$ for every $a \in (0, 1)$. Finally, letting $a \uparrow 1$ gives $\int_S g \, d\mu \le c$.
If $f: S \to [0, \infty)$ is measurable, then by the theorem above, there exists an increasing sequence $\left(f_1, f_2, \ldots\right)$ of simple functions with $f_n \to f$ as $n \to \infty$. By the monotone convergence theorem in (18), $\int_S f_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$. These two facts can be used to establish other properties of the integral of a nonnegative function based on our knowledge that the properties hold for simple functions. This type of argument is known as bootstrapping. We use bootstrapping to show that the linearity properties hold:
If $f, \, g: S \to [0, \infty)$ are measurable and $c \in [0, \infty)$, then
1. $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$
2. $\int_S c f \, d\mu = c \int_S f \, d\mu$
Proof
1. Let $\left(f_1, f_2, \ldots\right)$ and $\left(g_1, g_2, \ldots\right)$ be increasing sequences of nonnegative simple functions with $f_n \to f$ and $g_n \to g$ as $n \to \infty$. Then $(f_1 + g_1, f_2 + g_2, \ldots)$ is also an increasing sequence of simple functions, and $f_n + g_n \to f + g$ as $n \to \infty$. By the monotone convergence theorem, $\int_S f_n \, d\mu \to \int_S f \, d\mu$, $\int_S g_n \, d\mu \to \int_S g \, d\mu$, and $\int_S (f_n + g_n) \, d\mu \to \int_S (f + g) \, d\mu$ as $n \to \infty$. But $\int_S (f_n + g_n) \, d\mu = \int_S f_n \, d\mu + \int_S g_n \, d\mu$ for each $n \in \N_+$ so taking limits gives $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$.
2. Similarly, $(c f_1, c f_2, \ldots)$ is an increasing sequence of nonnegative simple functions with $c f_n \to c f$ as $n \to \infty$. Again, by the MCT, $\int_S f_n \, d\mu \to \int_S f \, d\mu$ and $\int_S c f_n \, d\mu \to \int_S c f \, d\mu$ as $n \to \infty$. But $\int_S c f_n \, d\mu = c \int_S f_n \, d\mu$ so taking limits gives $\int_S c f \, d\mu = c \int_S f \, d\mu$.
General Functions
Our final step is to define the integral of a measurable function $f: S \to \R$. First, recall the positive and negative parts of $x \in \R$: $x^+ = \max\{x, 0\}, \; x^- = \max\{-x, 0\}$ Note that $x^+ \ge 0$, $x^- \ge 0$, $x = x^+ - x^-$, and $\left|x\right| = x^+ + x^-$. Given that we want the integral to have the linearity properties in (5), there is no question as to how we should define the integral of $f$ in terms of the integrals of $f^+$ and $f^-$, which being nonnegative, are defined by the previous subsection.
If $f: S \to \R$ is measurable, we define $\int_S f \, d\mu = \int_S f^+ \, d\mu - \int_S f^- \, d\mu$ assuming that at least one of the integrals on the right is finite. If both are finite, then $f$ is said to be integrable.
Assuming that either the integral of the positive part or the integral of the negative part is finite ensures that we do not get the dreaded indeterminate form $\infty - \infty$.
Suppose that $f: S \to \R$ is measurable. Then $f$ is integrable if and only if $\int_S \left|f \right| \, d\mu \lt \infty$.
Proof
Suppose that $f$ is integrable. Recall that $\left| f \right| = f^+ + f^-$. By the additive property for nonnegative functions, $\int_S \left| f \right| \, d\mu = \int_S f^+ \, d\mu + \int_S f^- \, d\mu \lt \infty$. Conversely, suppose that $\int_S \left| f \right| \, d\mu \lt \infty$. Then $f^+ \le \left| f \right|$ and $f^- \le \left| f \right|$ so by the increasing property for nonnegative functions, $\int_S f^+ \, d\mu \le \int_S \left| f \right| \, d\mu \lt \infty$ and $\int_S f^- \, d\mu \le \int_S \left| f \right| \, d\mu \lt \infty$.
Note that if $f$ is nonnegative, then our new definition agrees with our old one, since $f^+ = f$ and $f^- = 0$. For simple functions the integral has the same basic form as for nonnegative simple functions:
Suppose that $f$ is a simple function with the representation $f = \sum_{i \in I} a_i \bs{1}_{A_i}$. Then $\int_S f \, d\mu = \sum_{i \in I} a_i \mu(A_i)$ assuming that the sum does not have both $\infty$ and $-\infty$ terms.
Proof
Note that $f^+$ and $f^-$ are also simple, with the representations $f^+ = \sum_{i \in I} a_i^+ \bs{1}_{A_i}$ and $f^- = \sum_{i \in I} a_i^- \bs{1}_{A_i}$. Hence $\int_S f \, d\mu = \sum_{i \in I} a_i^+ \mu(A_i) - \sum_{i \in I} a_i^- \mu(A_i)$ as long as one of the sums is finite. Given that this is the case, we can recombine the sums to get $\int_S f \, d\mu = \sum_{i \in I} a_i \mu(A_i)$
Once again, we need to establish the essential properties. Our first result is an intermediate step towards linearity.
If $f, \, g: S \to [0, \infty)$ are measurable then $\int_S (f - g) \, d\mu = \int_S f \, d\mu - \int_S g \, d\mu$ as long as at least one of the integrals on the right is finite.
Proof
We take cases. Suppose first that $\int_S f \, d\mu \lt \infty$ and $\int_S g \, d\mu \lt \infty$. Note that $(f - g)^+ \le f$ and $(f - g)^- \le g$. By the increasing property for nonnegative functions, $\int_S (f - g)^+ \, d\mu \le \int_S f \, d\mu \lt \infty$ and $\int_S (f - g)^- \, d\mu \le \int_S g \, d\mu \lt \infty$. Thus $f - g$ is integrable. Next we have $f - g = (f - g)^+ - (f - g)^-$ and therefore $f + (f - g)^- = g + (f - g)^+$. All four of the functions in the last equation are nonnegative, and therefore by the additivity property for nonnegative functions, we have $\int_S f \, d\mu + \int_S (f - g)^- \, d\mu = \int_S g \, d\mu + \int_S (f - g)^+ \, d\mu$ All of these integrals are finite, and hence $\int_S (f - g) \, d\mu = \int_S (f - g)^+ \, d\mu - \int_S (f - g)^- \, d\mu = \int_S f \, d\mu - \int_S g \, d\mu$
Next suppose that $\int_S f \, d\mu = \infty$ and $\int_S g \, d\mu \lt \infty$. Then $f - g \le (f - g)^+$ and hence $f \le (f - g)^+ + g$. Using the additivity and increasing properties for nonnegative functions, we have $\infty = \int_S f \, d\mu \le \int_S (f - g)^+ \, d\mu + \int_S g \, d\mu$. Since $\int_S g \, d\mu \lt \infty$ we must have $\int_S (f - g)^+ \, d\mu = \infty$. On the other hand, $(f - g)^- \le g$ so $\int_S (f - g)^- \, d\mu \le \int_S g \, d\mu \lt \infty$. Hence $\int_S (f - g) \, d\mu = \infty = \int_S f \, d\mu - \int_S g \, d\mu$
Finally, suppose that $\int_S f \, d\mu \lt \infty$ and $\int_S g \, d\mu = \infty$. By the argument in the last paragraph, we have $\int_S (g - f)^+ \, d\mu = \infty$ and $\int_S (g - f)^- \, d\mu \lt \infty$. Equivalently, $\int_S (f - g)^+ \, d\mu \lt \infty$ and $\int_S (f - g)^- \, d\mu = \infty$. Hence $\int_S (f - g) \, d\mu = -\infty = \int_S f \, d\mu - \int_S g \, d\mu$.
We finally have the linearity properties in full generality.
If $f, \, g: S \to \R$ are measurable functions whose integrals exist, and $c \in \R$, then
1. $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$ as long as the right side is not of the form $\infty - \infty$.
2. $\int_S c f \, d \mu = c \int_S f \, d\mu$
Proof
1. Note that $f + g = (f^+ - f^-) + (g^+ - g^-) = (f^+ + g^+) - (f^- + g^-)$ and the two functions in parentheses in the last expression are nonnegative. By the previous lemma and the additivity property for nonnegative functions, we have \begin{align} \int_S (f + g) \, d\mu & = \int_S (f^+ + g^+) \, d\mu - \int_S (f^- + g^-) \, d\mu \\ & = \left(\int_S f^+ \, d\mu + \int_S g^+ \, d\mu\right) - \left(\int_S f^- \, d\mu + \int_S g^- \, d\mu \right) \end{align} assuming that either both integrals in the first parentheses are finite or both integrals in the second parentheses are finite. In either case, we can group the terms (without worrying about the dreaded $\infty - \infty$) to get $\int_S (f + g) \, d\mu = \left(\int_S f^+ \, d\mu - \int_S f^- \, d\mu \right) + \left(\int_S g^+ \, d\mu - \int_S g^- \, d\mu \right) = \int_S f \, d\mu + \int_S g \, d\mu$
2. Note that if $c \ge 0$ then $(c f)^+ = c f^+$ and $(c f)^- = c f^-$. Hence using the scaling property for nonnegative functions, $\int_S c f \, d\mu = \int_S (c f)^+ \, d\mu - \int_S (c f)^- \, d\mu = \int_S c f^+ \, d\mu - \int _S c f^- \, d\mu = c \int_S f^+ \, d\mu - c \int_S f^- \, d\mu = c \int_S f \, d\mu$ On the other hand, if $c \lt 0$, $(c f)^+ = - c f^-$ and $(c f)^- = - c f^+$. Again using the scaling property for nonnegative functions, $\int_S c f \, d\mu = \int_S (c f)^+ \, d\mu - \int_S (c f)^- \, d\mu = \int_S - c f^- \, d\mu - \int _S -c f^+ \, d\mu = - c \int_S f^- \, d\mu + c \int_S f^+ \, d\mu = c \int_S f \, d\mu$
In particular, note that if $f$ and $g$ are integrable, then so are $f + g$ and $c f$ for $c \in \R$. Thus, the set of integrable functions on $(S, \mathscr{S}, \mu)$ forms a vector space, which is denoted $\mathscr{L}(S, \mathscr{S}, \mu)$. The $\mathscr{L}$ is in honor of Henri Lebesgue, who first developed the theory. This vector space, and other related ones, will be studied in more detail in the section on function spaces.
We also have the increasing property in full generality.
If $f, \, g: S \to \R$ are measurable functions whose integrals exist, and if $f \le g$ on $S$ then $\int_S f \, d\mu \le \int_S g \, d\mu$
Proof
We can use the proof based on the additive property from (6). First $g = f + (g - f)$ and $g - f \ge 0$ on $S$. If $\int_S f \, d\mu = -\infty$ then trivially, $\int_S f \, d\mu \le \int_S g \, d\mu$. Otherwise $\int_S (g - f) \, d\mu \ge 0$ and therefore $\int_S g \, d\mu = \int_S f \, d\mu + \int_S (g - f) \, d\mu \ge \int_S f \, d\mu$.
The Integral Over a Set
Now that we have defined the integral of a measurable function $f$ over all of $S$, there is a natural extension to the integral of $f$ over a measurable subset $A \in \mathscr{S}$.
If $f: S \to \R$ is measurable and $A \in \mathscr{S}$, we define $\int_A f \, d\mu = \int_S \bs{1}_A f \, d\mu$ assuming that the integral on the right exists.
If $f: S \to \R$ is a measurable function whose integral exists and $A \in \mathscr{S}$, then the integral of $f$ over $A$ exists.
Proof
Note that $\left(\bs{1}_A f\right)^+ = \bs{1}_A f^+$ and $\left(\bs{1}_A f\right)^- = \bs{1}_A f^-$. Also $\bs{1}_A f^+ \le f^+$ and $\bs{1}_A f^- \le f^-$. If $\int_S f \, d\mu$ exists, then either $\int_S f^+ \, d\mu \lt \infty$ or $\int_S f^- \, d\mu \lt \infty$. By the increasing property, it follows that either $\int_S \bs{1}_A f^+ \, d\mu \lt \infty$ or $\int_S \bs{1}_A f^- \, d\mu \lt \infty$, so $\int_A f \, d\mu$ exists.
On the other hand, it's clearly possible for $\int_A f \, d\mu$ to exist for some $A \in \mathscr{S}$, but not $\int_S f \, d\mu$.
We could also simply think of $\int_A f \, d\mu$ as the integral of a measurable function $f: A \to \R$ over the measure space $(A, \mathscr{S}_A, \mu_A)$, where $\mathscr{S}_A = \{ B \in \mathscr{S}: B \subseteq A\} = \{C \cap A: C \in \mathscr{S}\}$ is the $\sigma$-algebra of measurable subsets of $A$, and where $\mu_A$ is the restriction of $\mu$ to $\mathscr{S}_A$. It follows that all of the essential properties hold for integrals over $A$: the linearity properties, the order properties, and the monotone convergence theorem. The following property is a simple consequence of the general additive property, and is known as the additive property for disjoint domains.
Suppose that $f: S \to \R$ is a measurable function whose integral exists, and that $A, \, B \in \mathscr{S}$ are disjoint. Then $\int_{A \cup B} f \, d\mu = \int_A f \, d\mu + \int_B f \, d\mu$
Proof
Recall that $\bs{1}_{A \cup B} = \bs{1}_A + \bs{1}_B$. Hence by the additive property and the previous result, $\int_{A \cup B} f \, d\mu = \int_S \bs{1}_{A \cup B} f \, d\mu = \int_S \left(\bs{1}_A f + \bs{1}_B f\right) \, d\mu = \int_S \bs{1}_A f \, d\mu + \int_S \bs{1}_B f \, d\mu = \int_A f \, d\mu + \int_B f \, d\mu$
By induction, the additive property holds for a finite collection of disjoint domains. The extension to a countably infinite collection of disjoint domains will be considered in the next section on properties of the integral.
Special Cases
Discrete Spaces
Recall again that the measure space $(S, \mathscr S, \#)$ is discrete if $S$ is countable, $\mathscr S$ is the collection of all subsets of $S$, and $\#$ is counting measure on $\mathscr{S}$. Thus all functions $f: S \to \R$ are measurable, and as we will see, integrals with respect to $\#$ are simply sums.
If $f: S \to \R$ then $\int_S f \, d\# = \sum_{x \in S} f(x)$ as long as either the sum of the positive terms or the sum of the negative terms is finite.
Proof
The proof is a bootstrapping argument.
1. Suppose first that $S$ is finite. In this case, every function $f: S \to \R$ is simple and has the representation $f = \sum_{x \in S} f(x) \bs{1}_x$ where $\bs{1}_x$ is an abbreviation of $\bs{1}_{\{x\}}$. Thus the result follows from the definition of the integral.
2. Next suppose that $S$ is countably infinite and $f: S \to [0, \infty)$. Let $(A_1, A_2, \ldots)$ be an increasing sequence of finite subsets of $S$ with $\bigcup_{i=1}^\infty A_i = S$. Define $f_n = \sum_{x \in A_n} f(x) \bs{1}_x$. Then $(f_1, f_2, \ldots)$ is an increasing sequence of simple functions with $f_n \to f$ as $n \to \infty$. Thus $\int_S f \, d\# = \lim_{n \to \infty} \int_S f_n \, d\# = \lim_{n \to \infty} \sum_{x \in A_n} f(x)$ But by definition, the last limit on the right is just $\sum_{x \in S} f(x)$.
3. Finally consider the general case where $S$ is countable and $f: S \to \R$. In this case the result follows from the definition of the integral as $\int_S f \, d\# = \int_S f^+ \, d\# - \int_S f^- \, d\#$ as long as one of the integrals on the right is finite. By (b), $\int_S f^+ \, d\#$ is the sum of the positive terms and $-\int_S f^- \, d\#$ is the sum of the negative terms.
If the sum of the positive terms and the sum of the negative terms are both finite, then $f$ is integrable with respect to $\#$, but the usual term from calculus is that the series $\sum_{x \in S} f(x)$ is absolutely convergent. The result will look more familiar in the special case $S = \N_+$. Functions on $S$ are simply sequences, so we can use the more familiar notation $a_i$ rather than $a(i)$ for a function $a: S \to \R$. Part (b) of the proof (with $A_n = \{1, 2, \ldots, n\}$) is just the definition of an infinite series of nonnegative terms as the limit of the partial sums: $\sum_{i=1}^\infty a_i = \lim_{n \to \infty} \sum_{i=1}^n a_i$ Part (c) of the proof is just the definition of a general infinite series $\sum_{i=1}^\infty a_i = \sum_{i=1}^\infty a_i^+ - \sum_{i=1}^\infty a_i^-$ as long as one of the series on the right is finite. Again, when both are finite, the series is absolutely convergent. In calculus we also consider conditionally convergent series. This means that $\sum_{i=1}^\infty a_i^+ = \infty$, $\sum_{i=1}^\infty a_i^- = \infty$, but $\lim_{n \to \infty} \sum_{i=1}^n a_i$ exists in $\R$. Such series have no place in general integration theory. Also, you may recall that such series are pathological in the sense that, given any number in $\R^*$, there exists a rearrangement of the terms so that the rearranged series converges to the given number.
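The rearrangement pathology is easy to see numerically. The following Python sketch (the function names are ours, purely illustrative) compares partial sums of the alternating harmonic series, which converges to $\ln 2$, with the rearrangement that takes two positive terms for each negative term, which converges to $\frac{3}{2} \ln 2$.

```python
# Illustration: a conditionally convergent series can be rearranged to
# converge to a different value.
import math

def alternating_harmonic(n_terms):
    # Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (converges to ln 2)
    return sum((-1) ** (i + 1) / i for i in range(1, n_terms + 1))

def rearranged(n_blocks):
    # Same terms, new order: two positive (odd-denominator) terms,
    # then one negative (even-denominator) term, repeated.
    total = 0.0
    pos, neg = 1, 2
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(alternating_harmonic(100000), math.log(2))      # ~0.69315
print(rearranged(100000), 1.5 * math.log(2))          # ~1.03972
```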
The Lebesgue and Riemann Integrals on $\R$
Consider the one-dimensional Euclidean space $(\R, \mathscr{R}, \lambda)$ where $\mathscr{R}$ is the usual $\sigma$-algebra of Lebesgue measurable sets and $\lambda$ is Lebesgue measure. The theory developed above applies, of course, for the integral $\int_A f \, d\mu$ of a measurable function $f: \R \to \R$ over a set $A \in \mathscr{R}$. It's not surprising that in this special case, the theory of integration is referred to as Lebesgue integration in honor of our good friend Henri Lebesgue, who first developed the theory.
On the other hand, we already have a theory of integration on $\R$, namely the Riemann integral of calculus, named for our other good friend Georg Riemann. For a suitable function $f$ and domain $A$ this integral is denoted $\int_A f(x) \, dx$, as we all remember from calculus. How are the two integrals related? As we will see, the Lebesgue integral generalizes the Riemann integral.
To understand the connection we need to review the definition of the Riemann integral. Consider first the standard case where the domain of integration is a closed, bounded interval. Here are the preliminary definitions that we will need.
Suppose that $f: [a, b] \to \R$, where $a, \, b \in \R$ and $a \lt b$.
1. A partition $\mathscr{A} = \{A_i : i \in I\}$ of $[a, b]$ is a finite collection of disjoint subintervals whose union is $[a, b]$.
2. The norm of a partition $\mathscr{A}$ is $\|\mathscr{A}\| = \max\{\lambda(A_i): i \in I\}$, the length of the largest subinterval of $\mathscr{A}$.
3. A set of points $B = \{x_i: i \in I\}$ where $x_i \in A_i$ for each $i \in I$ is said to be associated with the partition $\mathscr{A}$.
4. The Riemann sum of $f$ corresponding to a partition $\mathscr{A}$ and a set $B$ associated with $\mathscr{A}$ is $R\left(f, \mathscr{A}, B\right) = \sum_{i \in I} f(x_i) \lambda(A_i)$
Note that the Riemann sum is simply the integral of the simple function $g = \sum_{i \in I} f(x_i) \bs{1}_{A_i}$. Since $A_i$ is an interval for each $i \in I$, $g$ is a step function: it is constant on each of a finite collection of disjoint intervals. Moreover, $\lambda(A_i)$ is simply the length of the subinterval $A_i$, so measure theory per se is not needed for Riemann integration. Now for the definition from calculus:
$f$ is Riemann integrable on $[a, b]$ if there exists $r \in \R$ with the property that for every $\epsilon \gt 0$ there exists $\delta \gt 0$ such that if $\mathscr{A}$ is a partition of $[a, b]$ with $\|\mathscr{A}\| \lt \delta$ then $\left| r - R\left(f, \mathscr{A}, B\right) \right| \lt \epsilon$ for every set of points $B$ associated with $\mathscr{A}$. Then of course we define the integral by $\int_a^b f(x) \, dx = r$
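To make the definition concrete, here is a minimal Python sketch (names and parameters are ours, purely illustrative) that computes the Riemann sum $R(f, \mathscr{A}, B)$ for a uniform partition of $[a, b]$ with the midpoints as the associated set. As the norm $(b - a)/n$ shrinks, the sums approach the Riemann integral.

```python
# Riemann sum for a uniform partition with midpoints as associated points
def riemann_sum(f, a, b, n):
    h = (b - a) / n                              # lambda(A_i) for each i
    midpoints = [a + (i + 0.5) * h for i in range(n)]
    return sum(f(x) * h for x in midpoints)      # sum of f(x_i) lambda(A_i)

f = lambda x: x ** 2
for n in [10, 100, 1000]:
    print(n, riemann_sum(f, 0.0, 1.0, n))        # approaches 1/3
```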
Here is our main theorem of this subsection.
If $f: [a, b] \to \R$ is Riemann integrable on $[a, b]$ then $f$ is Lebesgue integrable on $[a, b]$ and $\int_{[a, b]} f \, d\lambda = \int_a^b f(x) \, dx$
On the other hand, there are lots of functions that are Lebesgue integrable but not Riemann integrable. In fact there are indicator functions of this type, the simplest of functions from the point of view of Lebesgue integration.
Consider the function $\bs{1}_\Q$ where as usual, $\Q$ is the set of rational numbers in $\R$. Then
1. $\int_\R \bs{1}_\Q \, d\lambda = 0$.
2. $\bs{1}_\Q$ is not Riemann integrable on any interval $[a, b]$ with $a \lt b$.
Proof
Part (a) follows from the definition of the Lebesgue integral: $\int_\R \bs{1}_\Q \, d\lambda = \lambda(\Q) = 0$ For part (b), note that there are rational and irrational numbers in every interval of $\R$ of positive length (the rational numbers and the irrational numbers are dense in $\R$). Thus, given any partition $\mathscr{A} = \{A_i: i \in I\}$ of $[a, b]$, no matter how small the norm, there are Riemann sums that are 0 (take $x_i \in A_i$ irrational for each $i \in I$), and Riemann sums that are $b - a$ (take $x_i \in A_i$ rational for each $i \in I$).
The following fundamental theorem completes the picture.
$f: [a, b] \to \R$ is Riemann integrable on $[a, b]$ if and only if $f$ is bounded on $[a, b]$ and $f$ is continuous almost everywhere on $[a, b]$.
Now that the Riemann integral is defined for a closed bounded interval, it can be extended to other domains.
Extensions of the Riemann integral.
1. If $f$ is defined on $[a, b)$ and Riemann integrable on $[a, t]$ for $a \lt t \lt b$, we define $\int_a^b f(x) \, dx = \lim_{t \uparrow b} \int_a^t f(x) \, dx$ if the limit exists in $\R^*$.
2. If $f$ is defined on $(a, b]$ and Riemann integrable on $[t, b]$ for $a \lt t \lt b$, we define $\int_a^b f(x) \, dx = \lim_{t \downarrow a} \int_t^b f(x) \, dx$ if the limit exists in $\R^*$.
3. If $f$ is defined on $(a, b)$, we select $c \in (a, b)$ and define $\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx$ if the integrals on the right exist in $\R^*$ by (a) and (b), and are not of the form $\infty - \infty$.
4. If $f$ is defined on $[a, \infty)$ and Riemann integrable on $[a, t]$ for $a \lt t \lt \infty$ we define $\int_a^\infty f(x) \, dx = \lim_{t \to \infty} \int_a^t f(x) \, dx$ if the limit exists in $\R^*$.
5. If $f$ is defined on $(-\infty, b]$ and Riemann integrable on $[t, b]$ for $-\infty \lt t \lt b$ we define $\int_{-\infty}^b f(x) \, dx = \lim_{t \to -\infty} \int_t^b f(x) \, dx$ if the limit exists in $\R^*$.
6. If $f$ is defined on $\R$ we select $c \in \R$ and define $\int_{-\infty}^\infty f(x) \, dx = \int_{-\infty}^c f(x) \, dx + \int_c^\infty f(x) \, dx$ if both integrals on the right exist by (d) and (e), and are not of the form $\infty - \infty$.
7. The integral is then defined for a domain that is the union of a finite collection of disjoint intervals by the requirement that the integral be additive over disjoint domains.
As another indication of its superiority, note that none of these convolutions is necessary for the Lebesgue integral. Once and for all, we have defined $\int_A f(x) \, dx$ for a general measurable function $f: \R \to \R$ and a general domain $A \in \mathscr{R}$.
The Lebesgue-Stieltjes Integral
Consider again the measurable space $(\R, \mathscr{R})$ where $\mathscr{R}$ is the usual $\sigma$-algebra of Lebesgue measurable subsets of $\R$. Suppose that $F: \R \to \R$ is a general distribution function, so that by definition, $F$ is increasing and continuous from the right. Recall that the Lebesgue-Stieltjes measure $\mu$ associated with $F$ is the unique measure on $\mathscr{R}$ that satisfies $\mu(a, b] = F(b) - F(a); \quad a, \, b \in \R, \; a \lt b$ Recall that $F$ satisfies some, but not necessarily all of the properties of a probability distribution function. The properties not necessarily satisfied are the normalizing properties:
• $F(x) \to 0$ as $x \to -\infty$
• $F(x) \to 1$ as $x \to \infty$
If $F$ does satisfy these two additional properties, then $\mu$ is a probability measure and $F$ its probability distribution function.
The integral with respect to the measure $\mu$ is, appropriately enough, referred to as the Lebesgue-Stieltjes integral with respect to $F$, and like the measure, is named for the ubiquitous Henri Lebesgue and for Thomas Stieltjes. In addition to our usual notation $\int_S f \, d\mu$, the Lebesgue-Stieltjes integral is also denoted $\int_S f \, dF$ and $\int_S f(x) \, dF(x)$.
Probability Spaces
Suppose that $(S, \mathscr{S}, \P)$ is a probability space, so that $S$ is the set of outcomes of a random experiment, $\mathscr{S}$ is the $\sigma$-algebra of events, and $\P$ the probability measure on the sample space $(S, \mathscr S)$. A measurable, real-valued function $X$ on $S$ is, of course, a real-valued random variable. The integral with respect to $\P$, if it exists, is the expected value of $X$ and is denoted $\E(X) = \int_S X \, d\P$ This concept is of fundamental importance in probability theory and is studied in detail in a separate chapter on Expected Value, mostly from an elementary point of view that does not involve abstract integration. However an advanced section treats expected value as an integral over the underlying probability measure, as above.
Suppose next that $(T, \mathscr T, \#)$ is a discrete space and that $X$ is a random variable for the experiment, taking values in $T$. In this case $X$ has a discrete distribution and the probability density function $f$ of $X$ is given by $f(x) = \P(X = x)$ for $x \in T$. More generally, $\P(X \in A) = \sum_{x \in A} f(x) = \int_A f \, d\#, \quad A \subseteq T$ On the other hand, suppose that $X$ is a random variable with values in $\R^n$, where as usual, $(\R^n, \mathscr R_n, \lambda_n)$ is $n$-dimensional Euclidean space. If $X$ has a continuous distribution, then $f: \R^n \to [0, \infty)$ is a probability density function of $X$ if $\P(X \in A) = \int_A f \, d\lambda_n, \quad A \in \mathscr R_n$ Technically, $f$ is the density function of $X$ with respect to counting measure $\#$ in the discrete case, and $f$ is the density function of $X$ with respect to Lebesgue measure $\lambda_n$ in the continuous case. In both cases, the probability of an event $A$ is computed by integrating the density function, with respect to the appropriate measure, over $A$. There are still differences, however. In the discrete case, the existence of the density function with respect to counting measure is guaranteed, and indeed we have an explicit formula for it. In the continuous case, the existence of a density function with respect to Lebesgue measure is not guaranteed, and indeed there might not be one. More generally, suppose that we have a measure space $(T, \mathscr{T}, \mu)$ and a random variable $X$ with values in $T$. A measurable function $f: T \to [0, \infty)$ is a probability density function of $X$ (or more precisely, the distribution of $X$) with respect to $\mu$ if $\P(X \in A) = \int_A f \, d\mu, \quad A \in \mathscr{T}$ This fundamental question of the existence of a density function will be clarified in the section on absolute continuity and density functions.
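Both cases can be illustrated numerically. In the sketch below (assuming SciPy is available; the particular distributions are just examples), $\P(X \in A)$ is computed by summing the density against counting measure in the discrete case, and by quadrature against Lebesgue measure in the continuous case.

```python
import math
from scipy.integrate import quad

# Discrete case: X ~ Poisson(2), density w.r.t. counting measure on N
f_disc = lambda x: math.exp(-2) * 2 ** x / math.factorial(x)
A = {0, 1, 2}
print(sum(f_disc(x) for x in A))         # P(X <= 2) ~ 0.6767

# Continuous case: X standard normal, density w.r.t. Lebesgue measure
f_cont = lambda x: math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)
prob, _ = quad(f_cont, -1.0, 1.0)
print(prob)                              # P(-1 <= X <= 1) ~ 0.6827
```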
Suppose again that $X$ is a real-valued random variable with distribution function $F$. Then, by definition, the distribution of $X$ is the Lebesgue-Stieltjes measure associated with $F$: $\P(a \lt X \le b) = F(b) - F(a), \quad a, \, b \in \R, \; a \lt b$ regardless of whether the distribution is discrete, continuous, or mixed. Trivially, $\P(X \in A) = \int_\R \bs{1}_A \, dF$ for $A \in \mathscr R$ and the expected value of $X$ defined above can also be written as $\E(X) = \int_\R x \, dF(x)$. Again, all of this will be explained in much more detail in the next chapter on Expected Value.
Computational Exercises
Let $g(x) = \frac{1}{1 + x^2}$ for $x \in \R$.
1. Find $\int_{-\infty}^\infty g(x) \, dx$.
2. Show that $\int_{-\infty}^\infty x g(x) \, dx$ does not exist.
Answer
1. $\int_{-\infty}^\infty g(x) \, dx = \pi$
2. $\int_0^\infty x g(x) \, dx = \infty$, $\int_{-\infty}^0 x g(x) \, dx = -\infty$
You may recall that the function $g$ in the last exercise is important in the study of the Cauchy distribution, named for Augustin Cauchy. You may also remember that the graph of $g$ is known as the witch of Agnesi, named for Maria Agnesi.
Let $g(x) = \frac{1}{x^b}$ for $x \in [1, \infty)$ where $b \gt 0$ is a parameter. Find $\int_1^\infty g(x) \, dx$
Answer
$\int_1^\infty g(x) \, dx = \begin{cases} \infty, & 0 \lt b \le 1 \ \frac{1}{b - 1}, & b \gt 1 \end{cases}$
You may recall that the function $g$ in the last exercise is important in the study of the Pareto distribution, named for Vilfredo Pareto.
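As a numerical check of the last two exercises (a sketch assuming SciPy is available; the divergent cases $0 \lt b \le 1$ cannot be verified by quadrature):

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: 1.0 / (1.0 + x ** 2)               # witch of Agnesi
val, _ = quad(g, -np.inf, np.inf)
print(val, np.pi)                                # both ~3.14159

for b in [1.5, 2.0, 3.0]:                        # Pareto-type integrand
    val, _ = quad(lambda x, b=b: x ** (-b), 1.0, np.inf)
    print(b, val, 1.0 / (b - 1.0))               # val matches 1/(b - 1)
```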
Suppose that $f(x) = 0$ if $x \in \Q$ and $f(x) = \sin(x)$ if $x \in \R - \Q$.
1. Find $\int_{[0, \pi]} f(x) \, d\lambda(x)$
2. Does $\int_0^\pi f(x) \, dx$ exist?
Answer
1. 2
2. No
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\range}{\text{range}}$
Basic Theory
Again our starting point is a measure space $(S, \mathscr{S}, \mu)$. That is, $S$ is a set, $\mathscr{S}$ is a $\sigma$-algebra of subsets of $S$, and $\mu$ is a positive measure on $\mathscr{S}$.
Definition
In the last section we defined the integral of certain measurable functions $f: S \to \R$ with respect to the measure $\mu$. Recall that the integral, denoted $\int_S f \, d\mu$, may exist as a number in $\R$ (in which case $f$ is integrable), or may exist as $\infty$ or $-\infty$, or may fail to exist. Here is a review of how the definition is built up in stages:
Definition of the integral
1. If $f$ is a nonnegative simple function, so that $f = \sum_{i \in I} a_i \bs{1}_{A_i}$ where $I$ is a finite index set, $a_i \in [0, \infty)$ for $i \in I$, and $\{A_i: i \in I\}$ is a measurable partition of $S$, then $\int_S f \, d\mu = \sum_{i \in I} a_i \mu(A_i)$
2. If $f: S \to [0, \infty)$ is measurable, then $\int_S f \, d\mu = \sup\left\{\int_S g \, d\mu: g \text{ is simple and } 0 \le g \le f\right\}$
3. If $f: S \to \R$ is measurable, then $\int_S f \, d\mu = \int_S f^+ \, d\mu - \int_S f^- \, d\mu$ as long as the right side is not of the form $\infty - \infty$, and where $f^+$ and $f^-$ denote the positive and negative parts of $f$.
4. If $f:S \to \R$ is measurable and $A \in \mathscr{S}$, then the integral of $f$ over $A$ is defined by $\int_A f \, d\mu = \int_S \bs{1}_A f \, d\mu$ assuming that the integral on the right exists.
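As a concrete illustration of stage (a), here is a minimal Python sketch in which a finite measure on a finite set is stored as a dict of point masses, and the integral of a nonnegative simple function is computed as $\sum_{i \in I} a_i \mu(A_i)$. All names here are illustrative.

```python
# A finite measure mu on S = {"a", "b", "c"}, stored as point masses
mu = {"a": 0.5, "b": 1.5, "c": 2.0}

def measure(A):
    # mu(A) for A a subset of S, by (finite) additivity
    return sum(mu[x] for x in A)

def integral_simple(terms):
    # terms: list of (a_i, A_i) pairs, {A_i} a measurable partition of S
    return sum(a * measure(A) for a, A in terms)

# f = 2 on {a, b} and f = 7 on {c}: integral is 2*mu({a,b}) + 7*mu({c})
print(integral_simple([(2.0, {"a", "b"}), (7.0, {"c"})]))   # 18.0
```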
Consider a statement on the elements of $S$, for example an equation or an inequality with $x \in S$ as a free variable. (Technically such a statement is a predicate on $S$.) For $A \in \mathscr{S}$, we say that the statement holds on $A$ if it is true for every $x \in A$. We say that the statement holds almost everywhere on $A$ (with respect to $\mu$) if there exists $B \in \mathscr{S}$ with $B \subseteq A$ such that the statement holds on $B$ and $\mu(A \setminus B) = 0$.
Basic Properties
A few properties of the integral that were essential to the motivation of the definition were given in the last section. In this section, we extend some of those properties and we study a number of new ones. As a review, here is what we know so far.
Properties of the integral
1. If $f, \, g: S \to \R$ are measurable functions whose integrals exist, then $\int_S (f + g) \, d\mu = \int_S f \, d\mu + \int_S g \, d\mu$ as long as the right side is not of the form $\infty - \infty$.
2. If $f: S \to \R$ is a measurable function whose integral exists and $c \in \R$, then $\int_S c f \, d\mu = c \int_S f \, d\mu$.
3. If $f: S \to \R$ is measurable and $f \ge 0$ on $S$ then $\int_S f \, d\mu \ge 0$.
4. If $f, \, g: S \to \R$ are measurable functions whose integrals exist and $f \le g$ on $S$ then $\int_S f \, d\mu \le \int_S g \, d\mu$
5. If $f_n: S \to [0, \infty)$ is measurable for $n \in \N_+$ and $f_n$ is increasing in $n$ on $S$ then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$.
6. If $f: S \to \R$ is measurable and the integral of $f$ on $A \cup B$ exists, where $A, \, B \in \mathscr{S}$ are disjoint, then $\int_{A \cup B} f \, d\mu = \int_A f \, d\mu + \int_B f \, d\mu$.
Parts (a) and (b) are the linearity properties; part (a) is the additivity property and part (b) is the scaling property. Parts (c) and (d) are the order properties; part (c) is the positive property and part (d) is the increasing property. Part (e) is a continuity property known as the monotone convergence theorem. Part (f) is the additive property for disjoint domains. Properties (a)–(e) hold with $S$ replaced by $A \in \mathscr{S}$.
Equality and Order
Our first new results are extensions dealing with equality and order. The integral of a function over a null set is 0:
Suppose that $f: S \to \R$ is measurable and $A \in \mathscr{S}$ with $\mu(A) = 0$. Then $\int_A f \, d\mu = 0$.
Proof
The proof proceeds in stages via the definition of the integral.
1. Suppose that $g$ is a nonnegative simple function with $g = 0$ on $A^c$. Then $g$ has the representation $g = \sum_{i \in I} a_i \bs{1}_{A_i}$ where $a_i \in (0, \infty)$ and $A_i \subseteq A$ for $i \in I$. But $\mu(A_i) = 0$ for each $i \in I$ and so $\int_S g \, d\mu = \sum_{i \in I} a_i \mu(A_i) = 0$
2. Suppose that $f: S \to [0, \infty)$ is measurable. If $g$ is a nonnegative simple function with $g \le \bs{1}_A f$, then $g = 0$ on $A^c$ so by (a), $\int_S g \, d\mu = 0$. Hence by part (b) of (1), $\int_A f \, d\mu = \int_S \bs{1}_A f \, d\mu = 0$.
3. Finally, suppose that $f: S \to \R$ is measurable. Then $\int_A f \, d\mu = \int_A f^+ \, d\mu - \int_A f^- \, d\mu$. But both integrals on the right are 0 by part (b).
Two functions that are indistinguishable from the point of view of $\mu$ must have the same integral.
Suppose that $f: S \to \R$ is a measurable function whose integral exists. If $g: S \to \R$ is measurable and $g = f$ almost everywhere on $S$, then $\int_S g \, d\mu = \int_S f \, d\mu$.
Proof
Note that $g = f$ if and only if $g^+ = f^+$ and $g^- = f^-$. Let $A = \{x \in S: g^+(x) = f^+(x)\}$. Then $A \in \mathscr{S}$ and $\mu(A^c) = 0$. Hence by the additivity property and (3), $\int_S g^+ \, d\mu = \int_A g^+ \, d\mu + \int_{A^c} g^+ \, d\mu = \int_A f^+ \, d\mu + 0 = \int_A f^+ \, d\mu + \int_{A^c} f^+ \, d\mu = \int_S f^+ \, d\mu$ Similarly $\int_S g^- \, d\mu = \int_S f^- \, d\mu$. Hence the integral of $g$ exists and $\int_S g \, d\mu = \int_S f \, d\mu$
Next we have a simple extension of the positive property.
Suppose that $f: S \to \R$ is measurable and $f \ge 0$ almost everywhere on $S$. Then
1. $\int_S f \, d\mu \ge 0$
2. $\int_S f \, d\mu = 0$ if and only if $f = 0$ almost everywhere on $S$.
Proof
1. Let $A = \{x \in S: f(x) \ge 0\}$. Then $A \in \mathscr{S}$ and $\mu(A^c) = 0$. By the additivity of the integral over disjoint sets we have $\int_S f \, d\mu = \int_A f \, d\mu + \int_{A^c} f \, d\mu$ But $\int_A f \, d\mu \ge 0$ by the positive property and $\int_{A^c} f \, d\mu = 0$ by the null property, so $\int_S f \, d\mu \ge 0$.
2. Suppose first that $f = 0$ almost everywhere on $S$. Then $f$ agrees with the zero function almost everywhere, so by the equality property above, $\int_S f \, d\mu = 0$. For the converse, let $B_n = \left\{x \in S: f(x) \ge \frac{1}{n}\right\}$ for $n \in \N_+$ and $B = \{x \in S: f(x) \gt 0\}$. Then $B_n$ is increasing in $n$ and $\bigcup_{n=1}^\infty B_n = B$. If $\mu(B) \gt 0$ then $\mu(B_n) \gt 0$ for some $n \in \N_+$. But $f \ge \frac{1}{n} \bs{1}_{B_n}$ on $A$, so by the increasing property, $\int_S f \, d\mu = \int_A f \, d\mu \ge \int_A \frac{1}{n} \bs{1}_{B_n} \, d\mu = \frac{1}{n} \mu(B_n) \gt 0$.
So, if $f \ge 0$ almost everywhere on $S$ then $\int_S f \, d\mu \gt 0$ if and only if $\mu\{x \in S: f(x) \gt 0\} \gt 0$. The simple extension of the positive property in turn leads to a simple extension of the increasing property.
Suppose that $f, \, g: S \to \R$ are measurable functions whose integrals exist, and that $f \le g$ almost everywhere on $S$. Then
1. $\int_S f \, d\mu \le \int_S g \, d\mu$
2. Except in the case that both integrals are $\infty$ or both $-\infty$, $\int_S f \, d\mu = \int_S g \, d\mu$ if and only if $f = g$ almost everywhere on $S$.
Proof
1. Note that $g = f + (g - f)$ and $g - f \ge 0$ almost everywhere on $S$. If $\int_S f \, d\mu = -\infty$ then trivially $\int_S f \, d\mu \le \int_S g \, d\mu$. Otherwise, by the additive property, $\int_S g \, d\mu = \int_S f \, d\mu + \int_S (g - f) \, d\mu$ By the positive property, $\int_S (g - f) \, d\mu \ge 0$ so $\int_S g \, d\mu \ge \int_S f \, d\mu$.
2. Except in the case that both integrals are $\infty$ or both are $-\infty$ we have $\int_S g \, d\mu - \int_S f \, d\mu = \int_S (g - f) \, d\mu$ By assumption $g - f \ge 0$ almost everywhere on $S$, and hence by the positive property, the integral on the right is 0 if and only if $g - f = 0$ almost everywhere on $S$.
So if $f \le g$ almost everywhere on $S$ then, except in the two cases mentioned, $\int_S f \, d\mu \lt \int_S g \, d\mu$ if and only if $\mu\{x \in S: f(x) \lt g(x)\} \gt 0$. The exclusion when both integrals are $\infty$ or $-\infty$ is important. A counterexample when this condition does not hold is given below. The next result is the absolute value inequality.
Suppose that $f: S \to \R$ is a measurable function whose integral exists. Then $\left| \int_S f \, d\mu \right| \le \int_S \left|f \right| \, d\mu$ If $f$ is integrable, then equality holds if and only if $f \ge 0$ almost everywhere on $S$ or $f \le 0$ almost everywhere on $S$.
Proof
First note that $-\left|f\right| \le f \le \left|f\right|$ on $S$. The integrals of all three functions exist, so the increasing property and scaling properties give $-\int_S \left|f\right| \, d\mu \le \int_S f \, d\mu \le \int_S \left|f \right| \, d\mu$ which is equivalent to the inequality above. If $f$ is integrable, then by the increasing property, equality holds if and only if $f = -\left|f\right|$ almost everywhere on $S$ or $f = \left|f\right|$ almost everywhere on $S$. In the first case, $f \le 0$ almost everywhere on $S$ and in the second case, $f \ge 0$ almost everywhere on $S$.
Change of Variables
Suppose that $(T, \mathscr{T})$ is another measurable space and that $u: S \to T$ is measurable. As we saw in our first study of positive measures, $\nu$ defined by $\nu(B) = \mu\left[u^{-1}(B)\right], \quad B \in \mathscr{T}$ is a positive measure on $(T, \mathscr{T})$. The following result is known as the change of variables theorem.
If $f: T \to \R$ is measurable then, assuming that the integrals exist, $\int_T f \, d\nu = \int_S (f \circ u) \, d\mu$
Proof
We will show that if either of the integrals exist then they both do, and are equal. The proof is a classical bootstrapping argument that parallels the definition of the integral.
1. Suppose first that $f$ is a nonnegative simple function on $T$ with the representation $f = \sum_{i \in I} b_i \bs{1}_{B_i}$ where $I$ is a finite index set, $\{B_i: i \in I\}$ is a measurable partition of $T$, and $b_i \in [0, \infty)$ for $i \in I$. Recall that $f \circ u$ is a nonnegative simple function on $S$, with representation $f \circ u = \sum_{i \in I} b_i \bs{1}_{u^{-1}(B_i)}$. Hence $\int_T f \, d\nu = \sum_{i \in I} b_i \nu(B_i) = \sum_{i \in I} b_i \mu\left[u^{-1}(B_i)\right] = \int_S (f \circ u) \, d\mu$
2. Next suppose that $f: T \to [0, \infty)$ is measurable, so that $f \circ u: S \to [0, \infty)$ is also measurable. There exists an increasing sequence $(f_1, f_2, \ldots)$ of nonnegative simple functions on $T$ with $f_n \to f$ as $n \to \infty$. Then $(f_1 \circ u, f_2 \circ u, \ldots)$ is an increasing sequence of simple functions on $S$ with $f_n \circ u \to f \circ u$ as $n \to \infty$. By step (a), $\int_T f_n \, d\nu = \int_S (f_n \circ u) \, d\mu$ for each $n \in \N_+$. But by the monotone convergence theorem, $\int_T f_n \, d\nu \to \int_T f \, d\nu$ as $n \to \infty$ and $\int_S (f_n \circ u) \, d\mu \to \int_S (f \circ u) \, d\mu$ so we conclude that $\int_T f \, d\nu = \int_S (f \circ u) \, d\mu$
3. Finally, suppose that $f: T \to \R$ is measurable, so that $f \circ u: S \to \R$ is also measurable. Note that $(f \circ u)^+ = f^+ \circ u$ and $(f \circ u)^- = f^- \circ u$. By part (b), \begin{align} \int_T f^+ \, d\nu & = \int_S (f^+ \circ u) \, d\mu = \int_S (f \circ u)^+ \, d\mu \ \int_T f^- \, d\nu & = \int_S (f^- \circ u) \, d\mu = \int_S (f \circ u)^- \, d\mu \end{align} Assuming that at least one of the integrals in the displayed equations is finite, we have $\int_T f \, d\nu = \int_T f^+ \, d\nu - \int_T f^- \, d\nu = \int_S (f \circ u)^+ \, d\mu - \int_S (f \circ u)^- \, d\mu = \int_S (f \circ u) \, d\mu$
The change of variables theorem will look more familiar if we give the variables explicitly. Thus, suppose that we want to evaluate $\int_S f\left[u(x)\right] \, d\mu(x)$ where again, $u: S \to T$ and $f: T \to \R$. One way is to use the substitution $u = u(x)$, find the new measure $\nu$, and then evaluate $\int_T f(u) \, d\nu(u)$.
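For a discrete measure the change of variables theorem can be checked directly. In the sketch below (the measure $\mu$, the map $u$, and the function $f$ are illustrative toy data), both sides of $\int_T f \, d\nu = \int_S (f \circ u) \, d\mu$ are computed, with $\nu$ built as the image measure $\nu(B) = \mu\left[u^{-1}(B)\right]$.

```python
mu = {1: 0.2, 2: 0.3, 3: 0.5}      # point masses of mu on S = {1, 2, 3}
u = {1: "x", 2: "y", 3: "x"}       # a map u: S -> T = {"x", "y"}
f = {"x": 10.0, "y": -4.0}         # f: T -> R

# Image measure nu(B) = mu(u^{-1}(B)), built pointwise
nu = {}
for s, m in mu.items():
    nu[u[s]] = nu.get(u[s], 0.0) + m

lhs = sum(f[t] * nu[t] for t in nu)        # int_T f dnu
rhs = sum(f[u[s]] * mu[s] for s in mu)     # int_S (f o u) dmu
print(lhs, rhs)                            # both 5.8
```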
Convergence Properties
We start with a simple but important corollary of the monotone convergence theorem that extends the additivity property to a countably infinite sum of nonnegative functions.
Suppose that $f_n: S \to [0, \infty)$ is measurable for $n \in \N_+$. Then $\int_S \sum_{n=1}^\infty f_n \, d\mu = \sum_{n=1}^\infty \int_S f_n \, d\mu$
Proof
Let $g_n = \sum_{i=1}^n f_i$ for $n \in \N_+$. Then $g_n: S \to [0, \infty)$ is measurable and $g_n$ is increasing in $n$. Moreover, by definition, $g_n \to \sum_{i=1}^\infty f_i$ as $n \to \infty$. Hence by the MCT, $\int_S g_n \, d\mu \to \int_S \sum_{i=1}^\infty f_i \, d\mu$ as $n \to \infty$. But we know the additivity property holds for finite sums, so $\int_S g_n \, d\mu = \sum_{i=1}^n \int_S f_i \, d\mu$ and again, by definition, this sum converges to $\sum_{i=1}^\infty \int_S f_i \, d\mu$ as $n \to \infty$.
A theorem below gives a related result that relaxes the assumption that the functions $f_n$ be nonnegative, but imposes a stricter integrability requirement. Our next result is the additivity of the integral over a countably infinite collection of disjoint domains.
Suppose that $f: S \to \R$ is a measurable function whose integral exists, and that $\{A_n: n \in \N_+\}$ is a disjoint collection of sets in $\mathscr{S}$. Let $A = \bigcup_{n=1}^\infty A_n$. Then $\int_A f \, d\mu = \sum_{n=1}^\infty \int_{A_n} f \, d\mu$
Proof
Suppose first that $f$ is nonnegative. Note that $\bs{1}_A = \sum_{n=1}^\infty \bs{1}_{A_n}$ and hence $\bs{1}_A f = \sum_{n=1}^\infty \bs{1}_{A_n} f$. Thus from the theorem above, $\int_A f \, d\mu = \int_S \bs{1}_A f \, d\mu = \int_S \sum_{n=1}^\infty \bs{1}_{A_n} f \, d\mu = \sum_{n=1}^\infty \int_S \bs{1}_{A_n} f \, d\mu = \sum_{n=1}^\infty \int_{A_n} f \, d\mu$ Suppose now that $f: S \to \R$ is measurable and $\int_S f \, d\mu$ exists. Note that for $B \in \mathscr{S}$, $\left(\bs{1}_B f\right)^+ = \bs{1}_B f^+$ and $\left(\bs{1}_B f\right)^- = \bs{1}_B f^-$. Hence from the previous argument, $\int_A f^+ \, d\mu = \sum_{n=1}^\infty \int_{A_n} f^+ \, d\mu, \quad \int_A f^- \, d\mu = \sum_{n=1}^\infty \int_{A_n} f^- \, d\mu$ Both of these are sums of nonnegative terms, and one of the sums, at least, is finite. Hence we can group the terms to get $\int_A f \, d\mu = \int_A f^+ \, d\mu - \int_A f^- \, d\mu = \sum_{n=1}^\infty \int_{A_n} (f^+ - f^-) \, d\mu = \sum_{n=1}^\infty \int_{A_n} f \, d\mu$
Of course, the previous theorem applies if $f$ is nonnegative or if $f$ is integrable. Next we give a minor extension of the monotone convergence theorem that relaxes the assumption that the functions be nonnegative.
Monotone Convergence Theorem. Suppose that $f_n: S \to \R$ is a measurable function whose integral exists for each $n \in \N_+$ and that $f_n$ is increasing in $n$ on $S$. If $\int_S f_1 \, d\mu \gt -\infty$ then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$
Proof
Let $f(x) = \lim_{n \to \infty} f_n(x)$ for $x \in S$ which exists in $\R \cup \{\infty\}$ since $f_n(x)$ is increasing in $n \in \N_+$. If $\int_S f_1 \, d\mu = \infty$, then by the increasing property, $\int_S f_n \, d\mu = \infty$ for all $n \in \N_+$ and $\int_S f \, d\mu = \infty$, so the conclusion of the MCT trivially holds. Thus suppose that $f_1$ is integrable. Let $g_n = f_n - f_1$ for $n \in \N_+$ and let $g = f - f_1$. Then $g_n$ is nonnegative and increasing in $n$ on $S$, and $g_n \to g$ as $n \to \infty$ on $S$. By the ordinary MCT, $\int_S g_n \, d\mu \to \int_S g \, d\mu$ as $n \to \infty$. But since $\int_S f_1 \, d\mu$ is finite, $\int_S g_n \, d\mu = \int_S f_n \, d\mu - \int_S f_1 \, d\mu$ and $\int_S g \, d\mu = \int_S f \, d\mu - \int_S f_1 \, d\mu$. Again since $\int_S f_1 \, d\mu$ is finite, it follows that $\int_S f_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$.
Here is the complementary result for decreasing functions.
Suppose that $f_n: S \to \R$ is a measurable function whose integral exists for each $n \in \N_+$ and that $f_n$ is decreasing in $n$ on $S$. If $\int_S f_1 \, d\mu \lt \infty$ then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$
Proof
The functions $-f_n$ for $n \in \N_+$ satisfy the hypotheses of the MCT for increasing functions and hence $\int_S \lim_{n \to \infty} -f_n \, d\mu = \lim_{n \to \infty} -\int_S f_n \, d\mu$. By the scaling property, $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$.
The additional assumptions on the integral of $f_1$ in the last two extensions of the monotone convergence theorem are necessary. An example is given below.
Our next result is also a consequence of the monotone convergence theorem, and is called Fatou's lemma in honor of Pierre Fatou. Its usefulness stems from the fact that no assumptions are placed on the integrand functions, except that they be nonnegative and measurable.
Fatou's Lemma. Suppose that $f_n: S \to [0, \infty)$ is measurable for $n \in \N_+$. Then $\int_S \liminf_{n \to \infty} f_n \, d\mu \le \liminf_{n \to \infty} \int_S f_n \, d\mu$
Proof
Let $g_n = \inf\left\{f_k: k \in \{n, n + 1, \ldots \}\right\}$ for $n \in \N_+$. Then $g_n: S \to [0, \infty)$ is measurable for $n \in \N_+$, $g_n$ is increasing in $n$, and by definition, $\lim_{n \to \infty} g_n = \liminf_{n \to \infty} f_n$. By the MCT, $\int_S \liminf_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S g_n \, d\mu$ But $g_n \le f_k$ on $S$ for $n \in \N_+$ and $k \in \{n, n + 1, \ldots\}$ so by the increasing property, $\int_S g_n \, d\mu \le \int_S f_k \, d\mu$ for $n \in \N_+$ and $k \in \{n, n + 1, \ldots\}$. Hence $\int_S g_n \, d\mu \le \inf\left\{\int_S f_k \, d\mu: k \in \{n, n+1, \ldots\}\right\}$ for $n \in \N_+$ and therefore $\lim_{n \to \infty} \int_S g_n \, d\mu \le \liminf_{n \to \infty} \int_S f_n \, d\mu$
Given the weakness of the hypotheses, it's hardly surprising that strict inequality can easily occur in Fatou's lemma. An example is given below.
Our next convergence result is one of the most important and is known as the dominated convergence theorem. It's sometimes also known as Lebesgue's dominated convergence theorem in honor of Henri Lebesgue, who first developed all of this stuff in the context of $\R^n$. The dominated convergence theorem gives a basic condition under which we may interchange the limit and integration operators.
Dominated Convergence Theorem. Suppose that $f_n: S \to \R$ is measurable for $n \in \N_+$ and that $\lim_{n \to \infty} f_n$ exists on $S$. Suppose also that $\left|f_n\right| \le g$ for $n \in \N_+$ where $g: S \to [0, \infty)$ is integrable. Then $\int_S \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$
Proof
First note that by the increasing property, $\int_S \left|f_n\right| \, d\mu \le \int_S g \, d\mu \lt \infty$ and hence $f_n$ is integrable for $n \in \N_+$. Let $f = \lim_{n \to \infty} f_n$. Then $f$ is measurable, and by the increasing property again, $\int_S \left| f \right| \, d\mu \le \int_S g \, d\mu \lt \infty$, so $f$ is integrable.
Now for $n \in \N_+$, let $u_n = \inf\left\{f_k: k \in \{n, n + 1, \ldots\}\right\}$ and let $v_n = \sup\left\{f_k: k \in \{n, n + 1, \ldots\}\right\}$. Then $u_n \le f_n \le v_n$ for $n \in \N_+$, $u_n$ is increasing in $n$, $v_n$ is decreasing in $n$, and $u_n \to f$ and $v_n \to f$ as $n \to \infty$. Moreover, $\int_S u_1 \, d\mu \ge - \int_S g \, d\mu \gt -\infty$ so by the version of the MCT above, $\int_S u_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$. Similarly, $\int_S v_1 \, d\mu \le \int_S g \, d\mu \lt \infty$, so by the MCT in (11), $\int_S v_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$. But by the increasing property, $\int_S u_n \, d\mu \le \int_S f_n \, d\mu \le \int_S v_n \, d\mu$ for $n \in \N_+$ so by the squeeze theorem for limits, $\int_S f_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$.
As you might guess, the assumption that $\left| f_n \right|$ is uniformly bounded in $n$ by an integrable function is critical. A counterexample when this assumption is missing is given below. The dominated convergence theorem remains true if $\lim_{n \to \infty} f_n$ exists almost everywhere on $S$. The following corollary of the dominated convergence theorem gives a condition for the interchange of infinite sum and integral.
Suppose that $f_i: S \to \R$ is measurable for $i \in \N_+$ and that $\sum_{i=1}^\infty \left| f_i \right|$ is integrable. Then $\int_S \sum_{i=1}^\infty f_i \, d\mu = \sum_{i=1}^\infty \int_S f_i \, d\mu$
Proof
The assumption that $g = \sum_{i=1}^\infty \left| f_i \right|$ is integrable implies that $g \lt \infty$ almost everywhere on $S$. In turn, this means that $\sum_{i=1}^\infty f_i$ is absolutely convergent almost everywhere on $S$. Let $f(x) = \sum_{i=1}^\infty f_i(x)$ if $g(x) \lt \infty$, and for completeness, let $f(x) = 0$ if $g(x) = \infty$. Since only the integral of $f$ appears in the theorem, it doesn't matter how we define $f$ on the null set where $g = \infty$. Now let $g_n = \sum_{i=1}^n f_i$. Then $g_n \to f$ as $n \to \infty$ almost everywhere on $S$ and $\left| g_n \right| \le g$ on $S$. Hence by the dominated convergence theorem, $\int_S g_n \, d\mu \to \int_S f \, d\mu$ as $n \to \infty$. But we know the additivity property holds for finite sums, so $\int_S g_n \, d\mu = \sum_{i=1}^n \int_S f_i \, d\mu$, and in turn this converges to $\sum_{i=1}^\infty \int_S f_i \, d\mu$ as $n \to \infty$. Thus we have $\sum_{i=1}^\infty \int_S f_i \, d\mu = \int_S f \, d\mu$.
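As a numerical illustration (a sketch assuming SciPy is available), take $f_i(x) = x^i$ on $[0, 1/2]$ for $i \in \N$, so that $\sum_i \left|f_i\right| = 1/(1 - x)$ is integrable there. The integral of the sum and the sum of the integrals both equal $\ln 2$.

```python
import math
from scipy.integrate import quad

# Integral of the sum: int_0^{1/2} 1/(1 - x) dx
lhs, _ = quad(lambda x: 1.0 / (1.0 - x), 0.0, 0.5)

# Sum of the integrals: sum_i int_0^{1/2} x^i dx = sum_i (1/2)^(i+1)/(i+1)
rhs = sum(0.5 ** (i + 1) / (i + 1) for i in range(200))

print(lhs, rhs, math.log(2))               # all ~0.69315
```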
The following corollary of the dominated convergence theorem is known as the bounded convergence theorem.
Bounded Convergence Theorem. Suppose that $f_n: S \to \R$ is measurable for $n \in \N_+$ and there exists $A \in \mathscr{S}$ such that $\mu(A) \lt \infty$, $\lim_{n \to \infty} f_n$ exists on $A$, and $\left| f_n \right|$ is bounded in $n \in \N_+$ on $A$. Then $\int_A \lim_{n \to \infty} f_n \, d\mu = \lim_{n \to \infty} \int_A f_n \, d\mu$
Proof
Suppose that $\left|f_n\right|$ is bounded in $n$ on $A$ by $c \in (0, \infty)$. The constant $c$ is integrable on $A$ since $\int_A c \, d\mu = c \mu(A) \lt \infty$, and $\left|f_n\right| \le c$ on $A$ for $n \in \N_+$. Thus the result follows from the dominated convergence theorem.
Again, the bounded convergence theorem remains true if $\lim_{n \to \infty} f_n$ exists almost everywhere on $A$. For a finite measure space (and in particular for a probability space), the condition that $\mu(A) \lt \infty$ automatically holds.
Product Spaces
Suppose now that $(S, \mathscr{S}, \mu)$ and $(T, \mathscr{T}, \nu)$ are $\sigma$-finite measure spaces. Please recall the basic facts about the product $\sigma$-algebra $\mathscr{S} \otimes \mathscr{T}$ of subsets of $S \times T$, and the product measure $\mu \otimes \nu$ on $\mathscr{S} \otimes \mathscr{T}$. The product measure space $(S \times T, \mathscr{S} \otimes \mathscr{T}, \mu \otimes \nu)$ is the standard one that we use for product spaces. If $f: S \times T \to \R$ is measurable, there are three integrals we might consider. First, of course, is the integral of $f$ with respect to the product measure $\mu \otimes \nu$ $\int_{S \times T} f(x, y) \, d(\mu \otimes \nu)(x, y)$ sometimes called a double integral in this context. But also we have the nested or iterated integrals where we integrate with respect to one variable at a time: $\int_S \left(\int_T f(x, y) \, d\nu(y)\right) \, d\mu(x), \quad \int_T \left(\int_S f(x, y) d\mu(x)\right) \, d\nu(y)$ How are these integrals related? Well, just as in calculus with ordinary Riemann integrals, under mild conditions the three integrals are the same. The resulting important theorem is known as Fubini's Theorem in honor of the Italian mathematician Guido Fubini.
Fubini's Theorem. Suppose that $f: S \times T \to \R$ is measurable. If the double integral on the left exists, then $\int_{S \times T} f(x, y) \, d(\mu \otimes \nu)(x, y) = \int_S \int_T f(x, y) \, d\nu(y) \, d\mu(x) = \int_T \int_S f(x, y) \, d\mu(x) \, d\nu(y)$
Proof
We will show that $\int_{S \times T} f(x, y) \, d(\mu \otimes \nu)(x, y) = \int_S \int_T f(x, y) \, d\nu(y) \, d\mu(x)$ The proof with the other iterated integral is symmetric. The proof proceeds in stages, paralleling the definition of the integral.
1. Suppose that $f = \bs{1}_{A \times B}$ where $A \in \mathscr{S}$ and $B \in \mathscr{T}$. The equation holds by definition of the product measure, since the double integral is $(\mu \otimes \nu)(A \times B)$ and the iterated integral is $\int_S \int_T \bs{1}_{A \times B} (x, y) \, d\nu(y) \, d\mu(x) = \int_S \int_T \bs{1}_A(x) \bs{1}_B(y) \, d\nu(y) \, d\mu(x) = \int_S \bs{1}_A(x) \nu(B) \, d\mu(x) = \mu(A) \nu(B)$
2. Consider $f = \bs{1}_C$ where $C \in \mathscr{S} \otimes \mathscr{T}$. The double integral is $(\mu \otimes \nu)(C)$, and so as a function of $C \in \mathscr{S} \otimes \mathscr{T}$ defines the measure $\mu \otimes \nu$. On the other hand, the iterated integral is $\int_S \int_T \bs{1}_C(x, y) \, d\nu(y) \, d\mu(x) = \int_S \int_T \bs{1}_{C_x}(y) \, d\nu(y) \, d\mu(x) = \int_S \nu(C_x) \, d\mu(x)$ where $C_x = \{y \in T: (x, y) \in C\}$ is the cross-section of $C$ at $x \in S$. Recall that $x \mapsto \nu(C_x)$ is a nonnegative, measurable function of $x$, so $C \mapsto \int_S \nu(C_x) \, d\mu(x)$ makes sense. Moreover, as a function of $C \in \mathscr{S} \otimes \mathscr{T}$, this integral also forms a measure: If $\{C^i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S} \otimes \mathscr{T}$, then $\{C_x^i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{T}$. Cross-sections preserve set operations, so if $C = \bigcup_{i \in I} C^i$ then $C_x = \bigcup_{i \in I} C_x^i$. By the additivity of the measure $\nu$ and the integral we have $\int_S \nu(C_x) \, d\mu(x) = \int_S \nu\left(\bigcup_{i \in I} C_x^i \right) \, d\mu(x) = \int_S \sum_{i \in I} \nu\left(C_x^i\right) \, d\mu(x) = \sum_{i \in I} \int_S \nu\left(C_x^i\right) \, d\mu(x)$ To summarize, the double integral and the iterated integral define positive measures on $\mathscr{S} \otimes \mathscr{T}$. By (a), these measures agree on the measurable rectangles. By the uniqueness theorem, they must be the same measure. Thus the double integral and the iterated integral agree with integrand $f = \bs{1}_C$ for every $C \in \mathscr{S} \otimes \mathscr{T}$.
3. Suppose $f = \sum_{i \in I} c_i \bs{1}_{C_i}$ is a nonnegative simple function on $S \times T$. Thus, $I$ is a finite index set, $c_i \in [0, \infty)$ for $i \in I$, and $\{C_i: i \in I\}$ is a disjoint collection of sets in $\mathscr{S} \otimes \mathscr{T}$. The double integral and the iterated integral satisfy the linearity properties, and hence by (b), agree with integrand $f$.
4. Suppose that $f: S \times T \to [0, \infty)$ is measurable. Then there exists a sequence of nonnegative simple functions $g_n, \; n \in \N_+$ such that $g_n$ is increasing in $n \in \N_+$ on $S \times T$, and $g_n \to f$ as $n \to \infty$ on $S \times T$. By the monotone convergence theorem, $\int_{S \times T} g_n \, d(\mu \otimes \nu) \to \int_{S \times T} f \, d(\mu \otimes \nu)$. But for fixed $x \in S$, $y \mapsto g_n(x, y)$ is increasing in $n$ on $T$ and has limit $f(x, y)$ as $n \to \infty$. By another application of the monotone convergence theorem, $\int_T g_n(x, y) \, d\nu(y) \to \int_T f(x, y) \, d\nu(y)$ as $n \to \infty$. But $x \mapsto \int_T g_n(x, y) \, d\nu(y)$ is measurable and is increasing in $n \in \N_+$ on $S$, so by yet another application of the monotone convergence theorem, $\int_S \int_T g_n(x, y) \, d\nu(y) \, d\mu(x) \to \int_S \int_T f(x, y) \, d\nu(y) \, d\mu(x)$ as $n \to \infty$. But the double integral and the iterated integral agree with integrand $g_n$ by (c) for each $n \in \N_+$, so it follows that the double integral and the iterated integral agree with integrand $f$.
5. Suppose that $f: S \times T \to \R$ is measurable. By (d), the double integral and the iterated integral agree with integrand functions $f^+$ and $f^-$. Assuming that at least one of these is finite, then by the additivity property, they agree with integrand function $f = f^+ - f^-$.
Of course, the double integral exists, and so Fubini's theorem applies, if either $f$ is nonnegative or integrable with respect to $\mu \otimes \nu$. When $f$ is nonnegative, the result is sometimes called Tonelli's theorem in honor of another Italian mathematician, Leonida Tonelli. On the other hand, the iterated integrals may exist, and may be different, when the double integral does not exist. A counterexample and a second counterexample are given below.
A special case of Fubini's theorem (and indeed part of the proof) is that we can compute the measure of a set in the product space by integrating the cross-sectional measures.
If $C \in \mathscr{S} \otimes \mathscr{T}$ then $(\mu \otimes \nu)(C) = \int_S \nu\left(C_x\right) \, d\mu(x) = \int_T \mu\left(C^y\right) \, d\nu(y)$ where $C_x = \{y \in T: (x, y) \in C\}$ for $x \in S$, and $C^y = \{x \in S: (x, y) \in C\}$ for $y \in T$.
In particular, if $C, \; D \in \mathscr{S} \otimes \mathscr{T}$ have the property that $\nu(C_x) = \nu(D_x)$ for all $x \in S$, or $\mu\left(C^y\right) = \mu\left(D^y\right)$ for all $y \in T$ (that is, $C$ and $D$ have the same cross-sectional measures with respect to one of the variables), then $(\mu \otimes \nu)(C) = (\mu \otimes \nu)(D)$. In $\R^2$ with area, and in $\R^3$ with volume (Lebesgue measure in both cases), this is known as Cavalieri's principle, named for Bonaventura Cavalieri, yet a third Italian mathematician. Clearly, Italian mathematicians cornered the market on theorems of this sort.
A simple corollary of Fubini's theorem is that the double integral of a product function over a product set is the product of the integrals. This result has important applications to independent random variables.
Suppose that $g: S \to \R$ and $h: T \to \R$ are measurable, and are either nonnegative or integrable with respect to $\mu$ and $\nu$, respectively. Then $\int_{S \times T} g(x) h(y) d(\mu \otimes \nu)(x, y) = \left(\int_S g(x) \, d\mu(x)\right) \left(\int_T h(y) \, d\nu(y)\right)$
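Here is a numerical sketch of the product rule (assuming SciPy is available), with $g(x) = e^{-x}$ on $[0, \infty)$ and $h(y) = 1/(1 + y^2)$ on $\R$. The iterated integral of $g(x) h(y)$ and the product of the single integrals agree (both equal $\pi$, since $\int_0^\infty e^{-x} \, dx = 1$).

```python
import numpy as np
from scipy.integrate import quad

g = lambda x: np.exp(-x)
h = lambda y: 1.0 / (1.0 + y ** 2)

# Iterated integral: integrate g(x)h(y) in y for each fixed x, then in x
double = quad(lambda x: quad(lambda y: g(x) * h(y), -np.inf, np.inf)[0],
              0, np.inf)[0]
product = quad(g, 0, np.inf)[0] * quad(h, -np.inf, np.inf)[0]
print(double, product, np.pi)              # all ~3.14159
```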
Recall that a discrete measure space consists of a countable set with the $\sigma$-algebra of all subsets and with counting measure. In such a space, integrals are simply sums and so Fubini's theorem allows us to rearrange the order of summation in a double sum.
Suppose that $I$ and $J$ are countable and that $a_{i j} \in \R$ for $i \in I$ and $j \in J$. If the sum of the positive terms or the sum of the negative terms is finite, then $\sum_{(i, j) \in I \times J} a_{i j} = \sum_{i \in I} \sum_{j \in J} a_{i j} = \sum_{j \in J} \sum_{i \in I} a_{i j}$
Often $I = J = \N_+$, and in this case, $a_{i j}$ can be viewed as an infinite array, with $i \in \N_+$ the row number and $j \in \N_+$ the column number:
$a_{11}$ $a_{12}$ $a_{13}$ $\ldots$
$a_{21}$ $a_{22}$ $a_{23}$ $\ldots$
$a_{31}$ $a_{32}$ $a_{33}$ $\ldots$
$\vdots$ $\vdots$ $\vdots$ $\vdots$
The significant point is that $\N_+$ is totally ordered. While there is no implied order of summation in the double sum $\sum_{(i, j) \in \N_+^2} a_{i j}$, the iterated sum $\sum_{i=1}^\infty \sum_{j=1}^\infty a_{i j}$ is obtained by summing over the rows in order and then summing the results by column in order, while the iterated sum $\sum_{j=1}^\infty \sum_{i=1}^\infty a_{i j}$ is obtained by summing over the columns in order and then summing the results by row in order.
Of course, only one of the product spaces might be discrete. Theorems (9) and (15) which give conditions for the interchange of sum and integral can be viewed as applications of Fubini's theorem, where one of the measure spaces is $(S, \mathscr{S}, \mu)$ and the other is $\N_+$ with counting measure.
Examples and Applications
Probability Spaces
Suppose that $(\Omega, \mathscr{F}, \P)$ is a probability space, so that $\Omega$ is the set of outcomes of a random experiment, $\mathscr{F}$ is the $\sigma$-algebra of events, and $\P$ is a probability measure on the sample space $(\Omega, \mathscr F)$. Suppose also that $(S, \mathscr{S})$ is another measurable space, and that $X$ is a random variable for the experiment, taking values in $S$. Of course, this simply means that $X$ is a measurable function from $\Omega$ to $S$. Recall that the probability distribution of $X$ is the probability measure $P_X$ on $(S, \mathscr{S})$ defined by $P_X(A) = \P(X \in A), \quad A \in \mathscr{S}$ Since $\{X \in A\}$ is just probability notation for the inverse image of $A$ under $X$, $P_X$ is simply a special case of constructing a new positive measure from a given positive measure via a change of variables. Suppose now that $r: S \to \R$ is measurable, so that $r(X)$ is a real-valued random variable. The integral of $r(X)$ (assuming that it exists) is known as the expected value of $r(X)$ and is of fundamental importance. We will study expected values in detail in the next chapter. Here, we simply note different ways to write the integral. By the change of variables formula (8) we have $\int_\Omega r\left[X(\omega)\right] \, d\P(\omega) = \int_S r(x) \, dP_X(x)$ Now let $F_Y$ denote the distribution function of $Y = r(X)$. By another change of variables, $Y$ has a probability distribution $P_Y$ on $\R$, which is also a Lebesgue-Stieltjes measure, named for Henri Lebesgue and Thomas Stieltjes. Recall that this probability measure is characterized by $P_Y(a, b] = \P(a \lt Y \le b) = F_Y(b) - F_Y(a); \quad a, \, b \in \R, \; a \lt b$ With another application of our change of variables theorem, we can add to our chain of integrals: $\int_\Omega r\left[X(\omega)\right] \, d\P(\omega) = \int_S r(x) \, dP_X(x) = \int_\R y \, dP_Y(y) = \int_\R y \, dF_Y(y)$ Of course, the last two integrals are simply different notations for exactly the same thing. In the section on absolute continuity and density functions, we will see other ways to write the integral.
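This chain of integrals is what justifies estimating an expected value by simulation. A minimal sketch: with $X$ uniformly distributed on $[0, 1)$ and $r(x) = x^2$, the sample mean of $r(X)$ approximates $\E\left[r(X)\right] = \int_0^1 x^2 \, dx = 1/3$.

```python
import random

r = lambda x: x ** 2
n = 100_000
sample_mean = sum(r(random.random()) for _ in range(n)) / n
print(sample_mean)                         # ~0.3333
```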
Counterexamples
In the first three exercises below, $(\R, \mathscr R, \lambda)$ is the standard one-dimensional Euclidean space, so $\mathscr R$ is the $\sigma$-algebra of Lebesgue measurable sets and $\lambda$ is Lebesgue measure.
Let $f = \bs{1}_{[1, \infty)}$ and $g = \bs{1}_{[0, \infty)}$. Show that
1. $f \le g$ on $\R$
2. $\lambda\{x \in \R: f(x) \lt g(x)\} = 1$
3. $\int_\R f \, d\lambda = \int_\R g \, d\lambda = \infty$
This example shows that the strict increasing property can fail when the integrals are infinite.
Let $f_n = \bs{1}_{[n, \infty)}$ for $n \in \N_+$. Show that
1. $f_n$ is decreasing in $n \in \N_+$ on $\R$.
2. $f_n \to 0$ as $n \to \infty$ on $\R$.
3. $\int_\R f_n \, d\lambda = \infty$ for each $n \in \N_+$.
This example shows that the monotone convergence theorem can fail if the first integral is infinite. It also illustrates strict inequality in Fatou's lemma.
Let $f_n = \bs{1}_{[n, n + 1]}$ for $n \in \N_+$. Show that
1. $\lim_{n \to \infty} f_n = 0$ on $\R$ so $\int_\R \lim_{n \to \infty} f_n \, d\lambda = 0$
2. $\int_\R f_n \, d\lambda = 1$ for $n \in \N_+$ so $\lim_{n \to \infty} \int_\R f_n \, d\lambda = 1$
3. $\sup\{f_n: n \in \N_+\} = \bs{1}_{[1, \infty)}$ on $\R$
This example shows that the dominated convergence theorem can fail if $\left|f_n\right|$ is not bounded by an integrable function. It also shows that strict inequality can hold in Fatou's lemma.
Consider the product space $[0, 1]^2$ with the usual Lebesgue measurable subsets and Lebesgue measure. Let $f: [0, 1]^2 \to \R$ be defined by $f(x, y) = \frac{x^2 - y^2}{(x^2 + y^2)^2}$ Show that
1. $\int_{[0, 1]^2} f(x, y) \, d(x, y)$ does not exist.
2. $\int_0^1 \int_0^1 f(x, y) \, dx \, dy = -\frac{\pi}{4}$
3. $\int_0^1 \int_0^1 f(x, y) \, dy \, dx = \frac{\pi}{4}$
This example shows that Fubini's theorem can fail if the double integral does not exist.
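The two iterated integrals can be checked numerically (a sketch assuming SciPy is available). For fixed $y \gt 0$ the inner integral has the closed form $\int_0^1 f(x, y) \, dx = -1/(1 + y^2)$, and since $f(y, x) = -f(x, y)$, also $\int_0^1 f(x, y) \, dy = 1/(1 + x^2)$ for fixed $x \gt 0$. Integrating these gives the two different iterated integrals.

```python
import numpy as np
from scipy.integrate import quad

# int_0^1 [int_0^1 f(x, y) dx] dy = int_0^1 -1/(1 + y^2) dy
dx_then_dy = quad(lambda y: -1.0 / (1.0 + y ** 2), 0, 1)[0]
# int_0^1 [int_0^1 f(x, y) dy] dx = int_0^1 +1/(1 + x^2) dx
dy_then_dx = quad(lambda x: 1.0 / (1.0 + x ** 2), 0, 1)[0]
print(dx_then_dy, dy_then_dx, np.pi / 4)   # ~ -0.7854, +0.7854, 0.7854
```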
For $i, \, j \in \N_+$ define the sequence $a_{i j}$ as follows: $a_{i i} = 1$ and $a_{i + 1, i} = -1$ for $i \in \N_+$, $a_{i j} = 0$ otherwise.
1. Give $a_{i j}$ in array form with $i \in \N_+$ as the row number and $j \in \N_+$ as the column number
2. Show that $\sum_{(i, j) \in \N_+^2} a_{i j}$ does not exist
3. Show that $\sum_{i = 1}^\infty \sum_{j = 1}^\infty a_{i j} = 1$
4. Show that $\sum_{j=1}^\infty \sum_{i=1}^\infty a_{i j} = 0$
This example shows that the iterated sums can exist and be different when the double sum does not exist, a counterexample to the corollary to Fubini's theorem for sums when the hypotheses are not satisfied.
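Since each row and each column of the array has only finitely many nonzero entries, the complete row and column sums can be computed exactly, and a short sketch confirms the two different iterated sums.

```python
# a(i, j) for the array above: 1 on the diagonal, -1 just below it
def a(i, j):
    if i == j:
        return 1.0
    if i == j + 1:
        return -1.0
    return 0.0

N = 100   # rows/columns 1..N; each row/column sum below is exact
row_sums = [sum(a(i, j) for j in range(1, i + 2)) for i in range(1, N + 1)]
col_sums = [sum(a(i, j) for i in range(1, j + 2)) for j in range(1, N + 1)]
print(sum(row_sums), sum(col_sums))        # 1.0 and 0.0
```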
Computational Exercises
Compute $\int_D f(x, y) \, d(x,y)$ in each case below for the given $D \subseteq \R^2$ and $f: D \to \R$.
1. $f(x, y) = e^{-2 x} e^{-3 y}$, $D = [0, \infty) \times [0, \infty)$
2. $f(x, y) = e^{-2 x} e^{-3 y}$, $D = \{(x, y) \in \R^2: 0 \le x \le y \lt \infty\}$
Integrals of the type in the last exercise are useful in the study of exponential distributions.
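A numerical check of both parts with nested quadrature (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x, y: np.exp(-2 * x) * np.exp(-3 * y)

# (a) D = [0, inf) x [0, inf): integrate x over [0, inf) for each y
a = quad(lambda y: quad(lambda x: f(x, y), 0, np.inf)[0], 0, np.inf)[0]

# (b) D = {0 <= x <= y < inf}: integrate x over [0, y] for each y
b = quad(lambda y: quad(lambda x: f(x, y), 0, y)[0], 0, np.inf)[0]

print(a, b)        # a = 1/6 ~ 0.1667, b = 1/15 ~ 0.0667
```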
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\C}{\mathbb{C}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Our starting point in this section is a measurable space $(S, \mathscr{S})$. That is, $S$ is a set and $\mathscr{S}$ is a $\sigma$-algebra of subsets of $S$. So far, we have only considered positive measures on such spaces. Positive measures have applications, as we know, to length, area, volume, mass, probability, counting, and similar concepts of the nonnegative size of a set. Moreover, we have defined the integral of a measurable function $f: S \to \R$ with respect to a positive measure, and we have studied properties of the integral.
Definition
But now we will consider measures that can take negative values as well as positive values. These measures have applications to electric charge, monetary value, and other similar concepts of the content of a set that might be positive or negative. Also, this generalization will help in our study of density functions in the next section. The definition is exactly the same as for a positive measure, except that values in $\R^* = \R \cup \{-\infty, \infty\}$ are allowed.
A measure on $(S, \mathscr{S})$ is a function $\mu: \mathscr{S} \to \R^*$ that satisfies the following properties:
1. $\mu(\emptyset) = 0$
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$ then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$
As before, (b) is known as countable additivity and is the critical assumption: the measure of a set that consists of a countable number of disjoint pieces is the sum of the measures of the pieces. Implicit in the statement of this assumption is that the sum in (b) exists for every countable disjoint collection $\{A_i: i \in I\}$. That is, either the sum of the positive terms is finite or the sum of the negative terms is finite. In turn, this means that the order of the terms in the sum does not matter (a good thing, since there is no implied order). The term signed measure is used by many, but we will just use the simple term measure, and add appropriate adjectives for the special cases. Note that if $\mu(A) \ge 0$ for all $A \in \mathscr{S}$, then $\mu$ is a positive measure, the kind we have already studied (and so the new definition really is a generalization). In this case, the sum in (b) always exists in $[0, \infty]$. If $\mu(A) \in \R$ for all $A \in \mathscr{S}$ then $\mu$ is a finite measure. Note that in this case, the sum in (b) is absolutely convergent for every countable disjoint collection $\{A_i: i \in I\}$. If $\mu$ is a positive measure and $\mu(S) = 1$ then $\mu$ is a probability measure, our favorite kind. Finally, as with positive measures, $\mu$ is $\sigma$-finite if there exists a countable collection $\{A_i: i \in I\}$ of sets in $\mathscr{S}$ such that $S = \bigcup_{i \in I} A_i$ and $\mu(A_i) \in \R$ for $i \in I$.
Basic Properties
We give a few simple properties of general measures; hopefully many of these will look familiar. Throughout, we assume that $\mu$ is a measure on $(S, \mathscr{S})$. Our first result is that although $\mu$ can take the value $\infty$ or $-\infty$, it turns out that it cannot take both of these values.
Either $\mu(A) \gt -\infty$ for all $A \in \mathscr{S}$ or $\mu(A) \lt \infty$ for all $A \in \mathscr{S}$.
Proof
Suppose that there exist $A, \, B \in \mathscr{S}$ with $\mu(A) = \infty$ and $\mu(B) = -\infty$. Then $A = (A \cap B) \cup (A \setminus B)$ and the sets in the union are disjoint. By the additivity assumption, $\mu(A) = \mu(A \cap B) + \mu(A \setminus B)$. Similarly, $\mu(B) = \mu(A \cap B) + \mu(B \setminus A)$. The only way that both of these equations can make sense is for $\mu(A \setminus B) = \infty$, $\mu(B \setminus A) = -\infty$, and $\mu(A \cap B) \in \R$. But then $\mu(A \bigtriangleup B) = \mu(A \setminus B) + \mu(B \setminus A)$ is undefined, and so we have a contradiction.
We will say that two measures are of the same type if neither takes the value $\infty$ or if neither takes the value $-\infty$. Being of the same type is trivially an equivalence relation on the collection of measures on $(S, \mathscr{S})$.
The difference rule holds, as long as the sets have finite measure:
Suppose that $A, \, B \in \mathscr{S}$. If $\mu(B) \in \R$ then $\mu(B \setminus A) = \mu(B) - \mu(A \cap B)$.
Proof
Note that $B = (A \cap B) \cup (B \setminus A)$ and the sets in the union are disjoint. Thus $\mu(B) = \mu(A \cap B) + \mu(B \setminus A)$. Since $\mu(B) \in \R$, we must have $\mu(A \cap B) \in \R$ and $\mu(B \setminus A) \in \R$ also, and then the difference rule holds by subtraction.
The following corollary is the difference rule for subsets, and will be needed below.
Suppose that $A, \, B \in \mathscr{S}$ and $A \subseteq B$. If $\mu(B) \in \R$ then $\mu(A) \in \R$ and $\mu(B \setminus A) = \mu(B) - \mu(A)$.
Proof
Note that $B = A \cup (B \setminus A)$ and the sets in the union are disjoint. Thus $\mu(B) = \mu(A) + \mu(B \setminus A)$. Since $\mu(B) \in \R$, we must have $\mu(A) \in \R$ and $\mu(B \setminus A) \in \R$ also, and then the difference rule holds by subtraction.
As a consequence, suppose that $A, \, B \in \mathscr{S}$ and $A \subseteq B$. If $\mu(A) = \infty$, then by the infinity rule we cannot have $\mu(B) = -\infty$ and by the difference rule we cannot have $\mu(B) \in \R$, so we must have $\mu(B) = \infty$. Similarly, if $\mu(A) = -\infty$ then $\mu(B) = -\infty$. The inclusion-exclusion rules hold for general measures, as long as the sets have finite measure.
Suppose that $A_i \in \mathscr{S}$ for each $i \in I$ where $\#(I) = n$, and that $\mu(A_i) \in \R$ for $i \in I$. Then
$\mu \left( \bigcup_{i \in I} A_i \right) = \sum_{k = 1}^n (-1)^{k - 1} \sum_{J \subseteq I, \; \#(J) = k} \mu \left( \bigcap_{j \in J} A_j \right)$
Proof
For $n = 2$, note that $A_1 \cup A_2 = A_1 \cup (A_2 \setminus A_1)$ and the sets in the last union are disjoint. By the additivity axiom and the difference rule (3), $\mu(A_1 \cup A_2) = \mu(A_1) + \mu(A_2 \setminus A_1) = \mu(A_1) + \mu(A_2) - \mu(A_1 \cap A_2)$ The general result then follows by induction, just like the proof for probability measures.
The continuity properties hold for general measures. Part (a) is the continuity property for increasing sets, and part (b) is the continuity property for decreasing sets.
Suppose that $A_n \in \mathscr{S}$ for $n \in \N_+$.
1. If $A_n \subseteq A_{n+1}$ for $n \in \N_+$ then $\lim_{n \to \infty} \mu(A_n) = \mu\left(\bigcup_{i=1}^\infty A_i\right)$.
2. If $A_{n+1} \subseteq A_n$ for $n \in \N_+$ and $\mu(A_1) \in \R$, then $\lim_{n \to \infty} \mu(A_n) = \mu\left(\bigcap_{i=1}^\infty A_i\right)$
Proof
The proofs are almost the same as for positive measures, except for technicalities involving $\infty$ and $-\infty$.
1. Let $A = \bigcup_{i=1}^\infty A_i$. From the infinity rule and the difference rule, if $\mu(A_m) = \infty$ (respectively $-\infty$) for some $m \in \N_+$, then $\mu(A_n) = \infty$ ($-\infty$) for $n \ge m$ and $\mu(A) = \infty$ ($-\infty$), so the result trivially holds. Thus, assume that $\mu(A_n) \in \R$ for all $n \in \N_+$. Let $B_1 = A_1$ and let $B_i = A_i \setminus A_{i-1}$ for $i \in \{2, 3, \ldots\}$. Then $\{B_i: i \in \N_+\}$ is a disjoint collection of sets and also has union $A$. Moreover, from the difference rule, $\mu(B_i) = \mu(A_i) - \mu(A_{i-1})$ for $i \in \{2, 3, \ldots\}$. Thus $\mu(A) = \sum_{i=1}^\infty \mu(B_i) = \lim_{n \to \infty} \sum_{i=1}^n \mu(B_i) = \lim_{n \to \infty} \left(\mu(A_1) + \sum_{i=2}^n [\mu(A_i) - \mu(A_{i-1})]\right) = \lim_{n \to \infty} \mu(A_n)$
2. Let $C_n = A_1 \setminus A_n$ for $n \in \N_+$. Then $C_n \subseteq C_{n+1}$ for $n \in \N_+$ and $\bigcup_{i=1}^\infty C_i = A_1 \setminus \bigcap_{i=1}^\infty A_i$. Part (a) applies, so $\lim_{n \to \infty} \mu(C_n) = \mu\left(\bigcup_{i=1}^\infty C_i \right)$. But by the difference rule, $\mu(C_n) = \mu(A_1) - \mu(A_n)$ for $n \in \N_+$ and $\mu\left(\bigcup_{i=1}^\infty C_i\right) = \mu(A_1) - \mu\left(\bigcap_{i=1}^\infty A_i\right)$. All of these are real numbers, so subtracting $\mu(A_1)$ gives the result.
Recall that a positive measure is an increasing function, relative to the subset partial order on $\mathscr{S}$ and the ordinary order on $[0, \infty]$, and this property follows from the difference rule. But for general measures, the increasing property fails, and so do other properties that flow from it, including the subadditive property (Boole's inequality in probability) and the Bonferroni inequalities.
Constructions
It's easy to construct general measures as differences of positive measures.
Suppose that $\mu$ and $\nu$ are positive measures on $(S, \mathscr{S})$ and that at least one of them is finite. Then $\delta = \mu - \nu$ is a measure.
Proof
Suppose that $\nu$ is a finite measure; the proof when $\mu$ is finite is similar. First, $\delta(\emptyset) = \mu(\emptyset) - \nu(\emptyset) = 0$. Suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$ and let $A = \bigcup_{i \in I} A_i$. Then $\delta(A) = \mu(A) - \nu(A) = \sum_{i \in I} \mu(A_i) - \sum_{i \in I} \nu(A_i)$ Since $\nu(A_i) \lt \infty$ for $i \in I$, we can combine terms to get $\delta(A) = \sum_{i \in I} [\mu(A_i) - \nu(A_i)] = \sum_{i \in I} \delta(A_i)$
The collection of measures on our space is closed under scalar multiplication.
If $\mu$ is a measure on $(S, \mathscr{S})$ and $c \in \R$, then $c \mu$ is a measure on $(S, \mathscr{S})$
Proof
First, $(c \mu)(\emptyset) = c \mu(\emptyset) = c \cdot 0 = 0$. Next suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr S$. Then $(c \mu) \left(\bigcup_{i \in I} A_i \right) = c \mu \left(\bigcup_{i \in I} A_i\right) = c \sum_{i \in I} \mu(A_i) = \sum_{i \in I} c \mu(A_i) = \sum_{i \in I} (c \mu)(A_i)$ The last step is the important one, and holds since the sum exists.
If $\mu$ is a finite measure, then so is $c \mu$ for $c \in \R$. If $\mu$ is not finite then $\mu$ and $c \mu$ are of the same type if $c \gt 0$ and are of opposite types if $c \lt 0$. We can add two measures to get another measure, as long as they are of the same type. In particular, the collection of finite measures is closed under addition as well as scalar multiplication, and hence forms a vector space.
If $\mu$ and $\nu$ are measures on $(S, \mathscr{S})$ of the same type then $\mu + \nu$ is a measure on $(S, \mathscr{S})$.
Proof
First, $(\mu + \nu)(\emptyset) = \mu(\emptyset) + \nu(\emptyset) = 0 + 0 = 0$. Next suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr S$. Then \begin{align*} (\mu + \nu) \left(\bigcup_{i \in I} A_i \right) & = \mu \left(\bigcup_{i \in I} A_i\right) + \nu\left(\bigcup_{i \in I} A_i\right)\\ & = \sum_{i \in I} \mu(A_i) + \sum_{i \in I} \nu(A_i) = \sum_{i \in I} [\mu(A_i) + \nu(A_i)] = \sum_{i \in I} (\mu + \nu)(A_i) \end{align*} The sums can be combined because the measures are of the same type. That is, either the sum of all of the positive terms is finite or the sum of all the negative terms is finite. In short, we don't have to worry about the dreaded indeterminate form $\infty - \infty$.
Finally, it is easy to explicitly construct measures on a $\sigma$-algebra generated by a countable partition. Such $\sigma$-algebras are important for counterexamples and to gain insight, and also because many $\sigma$-algebras that occur in applications can be constructed from them.
Suppose that $\mathscr{A} = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty sets, and that $\mathscr{S} = \sigma(\mathscr{A})$. For $i \in I$, define $\mu(A_i) \in \R^*$ arbitrarily, subject only to the condition that the sum of the positive terms is finite, or the sum of the negative terms is finite. For $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$, define $\mu(A) = \sum_{j \in J} \mu(A_j)$ Then $\mu$ is a measure on $(S, \mathscr{S})$.
Proof
Recall that every $A \in \mathscr{S}$ has a unique representation of the form $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$.
1. $J = \emptyset$ in the representation gives $A = \emptyset$. The sum over an empty index set is 0, so $\mu(\emptyset) = 0$.
2. Suppose that $\{B_k: k \in K\}$ is a countable, disjoint collection of sets in $\mathscr{S}$. Then for each $k \in K$ there exists $J_k \subseteq I$ and $\left\{A^k_j: j \in J_k\right\} \subseteq \mathscr{A}$ such that $B_k = \bigcup_{j \in J_k} A^k_j$. Hence $\mu\left(\bigcup_{k \in K} B_k\right) = \mu\left(\bigcup_{k \in K} \bigcup_{j \in J_k} A^k_j\right) = \sum_{k \in K}\sum_{j \in J_k} \mu(A^k_j) = \sum_{k \in K} \mu(B_k)$ The fact that either the sum of all positive terms is finite or the sum of all the negative terms is finite means that we do not have to worry about the order of summation.
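As a computational illustration of this construction, here is a minimal Python sketch (the partition, the atom names, and the atom values are hypothetical): a measure on $\sigma(\mathscr{A})$ is represented by its values on the atoms, and the measure of a set is the sum over the atoms it contains.

```python
# A sketch of the construction above: a (signed) measure on the sigma-algebra
# generated by a partition, represented by the values assigned to the atoms.
# The partition is finite here for simplicity.
atom_measure = {'A1': 2.0, 'A2': -1.5, 'A3': 0.0, 'A4': 3.0}

def mu(J):
    """Measure of the set A = union of the atoms A_j for j in J."""
    return sum(atom_measure[j] for j in J)

# Additivity is automatic: disjoint sets in sigma(A) correspond to
# disjoint index sets.
print(mu({'A1', 'A2'}))           # 0.5
print(mu({'A1'}) + mu({'A2'}))    # 0.5, the same by additivity
print(mu(set()))                  # 0.0, the measure of the empty set
```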
Positive, Negative, and Null Sets
To understand the structure of general measures, we need some basic definitions and properties. As before, we assume that $\mu$ is a measure on $(S, \mathscr{S})$.
Definitions
1. $A \in \mathscr{S}$ is a positive set for $\mu$ if $\mu(B) \ge 0$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
2. $A \in \mathscr{S}$ is a negative set for $\mu$ if $\mu(B) \le 0$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
3. $A \in \mathscr{S}$ is a null set for $\mu$ if $\mu(B) = 0$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
Note that positive and negative are used in the weak sense (just as we use the terms increasing and decreasing in this text). Of course, if $\mu$ is a positive measure, then every $A \in \mathscr{S}$ is positive for $\mu$, and $A \in \mathscr{S}$ is negative for $\mu$ if and only if $A$ is null for $\mu$ if and only if $\mu(A) = 0$. For a general measure, $A \in \mathscr{S}$ is both positive and negative for $\mu$ if and only if $A$ is null for $\mu$. In particular, $\emptyset$ is null for $\mu$. A set $A \in \mathscr{S}$ is a support set for $\mu$ if and only if $A^c$ is a null set for $\mu$. A support set is a set where the measure lives in a sense. Positive, negative, and null sets for $\mu$ have a basic inheritance property that is essentially equivalent to the definition.
Suppose $A \in \mathscr{S}$.
1. If $A$ is positive for $\mu$ then $B$ is positive for $\mu$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
2. If $A$ is negative for $\mu$ then $B$ is negative for $\mu$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
3. If $A$ is null for $\mu$ then $B$ is null for $\mu$ for every $B \in \mathscr{S}$ with $B \subseteq A$.
The collections of positive sets, negative sets, and null sets for $\mu$ are closed under countable unions.
Suppose that $\{A_i: i \in I\}$ is a countable collection of sets in $\mathscr{S}$.
1. If $A_i$ is positive for $\mu$ for $i \in I$ then $\bigcup_{i \in I} A_i$ is positive for $\mu$.
2. If $A_i$ is negative for $\mu$ for $i \in I$ then $\bigcup_{i \in I} A_i$ is negative for $\mu$.
3. If $A_i$ is null for $\mu$ for $i \in I$ then $\bigcup_{i \in I} A_i$ is null for $\mu$.
Proof
We will prove (a); the proofs for (b) and (c) are analogous. Without loss of generality, we can suppose that $I = \N_+$. Let $A = \bigcup_{n=1}^\infty A_n$. Now let $B_1 = A_1$ and $B_n = A_n \setminus \left(\bigcup_{i=1}^{n-1} A_i\right)$ for $n \in \{2, 3, \ldots\}$. Then $\{B_n: n \in \N_+\}$ is a countable, disjoint collection in $\mathscr{S}$, and $\bigcup_{n=1}^\infty B_n = A$. If $C \in \mathscr{S}$ and $C \subseteq A$ then $C = \bigcup_{n=1}^\infty (C \cap B_n)$ and the sets in this union are disjoint. Hence by additivity, $\mu(C) = \sum_{n=1}^\infty \mu(C \cap B_n)$. But $C \cap B_n \subseteq B_n \subseteq A_n$ so $\mu(C \cap B_n) \ge 0$. Hence $\mu(C) \ge 0$.
It's easy to see what happens to the positive, negative, and null sets when a measure is multiplied by a non-zero constant.
Suppose that $\mu$ is a measure on $(S, \mathscr{S})$, $c \in \R$, and $A \in \mathscr{S}$.
1. If $c \gt 0$ then $A$ is positive (negative) for $\mu$ if and only if $A$ is positive (negative) for $c \mu$.
2. If $c \lt 0$ then $A$ is positive (negative) for $\mu$ if and only if $A$ is negative (positive) for $c \mu$.
3. If $c \ne 0$ then $A$ is null for $\mu$ if and only if $A$ is null for $c \mu$.
Positive, negative, and null sets are also preserved under countable sums, assuming that the sum is a well-defined measure.
Suppose that $\mu_i$ is a measure on $(S, \mathscr{S})$ for each $i$ in a countable index set $I$, and that $\mu = \sum_{i \in I} \mu_i$ is a well-defined measure on $(S, \mathscr{S})$. Let $A \in \mathscr{S}$.
1. If $A$ is positive for $\mu_i$ for every $i \in I$ then $A$ is positive for $\mu$.
2. If $A$ is negative for $\mu_i$ for every $i \in I$ then $A$ is negative for $\mu$.
3. If $A$ is null for $\mu_i$ for every $i \in I$ then $A$ is null for $\mu$.
In particular, note that $\mu = \sum_{i \in I} \mu_i$ is a well-defined measure if $\mu_i$ is a positive measure for each $i \in I$, or if $I$ is finite and $\mu_i$ is a finite measure for each $i \in I$. It's easy to understand the positive, negative, and null sets for a $\sigma$-algebra generated by a countable partition.
Suppose that $\mathscr{A} = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty sets, and that $\mathscr{S} = \sigma(\mathscr{A})$. Suppose that $\mu$ is a measure on $(S, \mathscr{S})$. Define $I_+ = \{i \in I: \mu(A_i) \gt 0\}, \; I_- = \{i \in I: \mu(A_i) \lt 0\}, \; I_0 = \{i \in I: \mu(A_i) = 0\}$ Let $A \in \mathscr{S}$, so that $A = \bigcup_{j \in J} A_j$ for some $J \subseteq I$ (and this representation is unique). Then
1. $A$ is positive for $\mu$ if and only if $J \subseteq I_+ \cup I_0$.
2. $A$ is negative for $\mu$ if and only if $J \subseteq I_- \cup I_0$.
3. $A$ is null for $\mu$ if and only if $J \subseteq I_0$.
The Hahn Decomposition
The fundamental results in this section and the next are two decomposition theorems that show precisely the relationship between general measures and positive measures. First we show that if a set has finite, positive measure, then it has a positive subset with at least that measure.
If $A \in \mathscr{S}$ and $0 \le \mu(A) \lt \infty$ then there exists $P \in \mathscr{S}$ with $P \subseteq A$ such that $P$ is positive for $\mu$ and $\mu(P) \ge \mu(A)$.
Proof
The proof is recursive, and works by successively removing sets of negative measure from $A$. For the initialization step, let $A_0 = A$. Then trivially, $A_0 \subseteq A$ and $\mu(A_0) \ge \mu(A)$. For the recursive step, suppose that $A_n \in \mathscr{S}$ has been defined with $A_n \subseteq A$ and $\mu(A_n) \ge \mu(A)$. If $A_n$ is positive for $\mu$, let $P = A_n$. Otherwise let $a_n = \inf\{\mu(B): B \in \mathscr{S}, B \subseteq A_n, \mu(B) \lt 0\}$. Note that since $A_n$ is not positive for $\mu$, the set in the infimum is nonempty and hence $a_n \lt 0$ (and possibly $-\infty$). Let $b_n = a_n / 2$ if $-\infty \lt a_n \lt 0$ and let $b_n = -1$ if $a_n = -\infty$. Since $b_n \gt a_n$, by definition of the infimum, there exists $B_n \in \mathscr{S}$ with $B_n \subseteq A_n$ and $\mu(B_n) \le b_n$. Let $A_{n+1} = A_n \setminus B_n$. Then $A_{n+1} \subseteq A_n \subseteq A$ and $\mu(A_{n+1}) = \mu(A_n) - \mu(B_n) \ge \mu(A_n) - b_n \ge \mu(A_n) \ge \mu(A)$ Now, if the recursive process terminates after a finite number of steps, $P$ is well defined and is positive for $\mu$. Otherwise, we have a disjoint sequence of sets $(B_1, B_2, \ldots)$. Let $P = A \setminus \left(\bigcup_{i=1}^\infty B_i\right)$. Then $P \subseteq A$, and by countable additivity and the difference rule, $\mu(P) = \mu(A) - \sum_{n=1}^\infty \mu(B_n) \ge \mu(A) - \sum_{n=1}^\infty b_n \ge \mu(A)$ Suppose that $B \in \mathscr{S}$ with $B \subseteq P$ and $\mu(B) \lt 0$. Then $B \subseteq A_n$ and by definition, $a_n \le \mu(B)$ for every $n \in \N_+$. It follows that $b_n \le \frac{1}{2} \mu(B)$ or $b_n = -1$ for every $n \in \N_+$. Hence $\sum_{n=1}^\infty b_n = -\infty$ and therefore $\mu(P) = \infty$, a contradiction since $\mu(A) \lt \infty$. Hence we must have $\mu(B) \ge 0$ and thus $P$ is positive for $\mu$.
The assumption that $\mu(A) \lt \infty$ is critical; a counterexample is given below. Our first decomposition result is the Hahn decomposition theorem, named for the Austrian mathematician Hans Hahn. It states that $S$ can be partitioned into a positive set and a negative set, and this decomposition is essentially unique.
Hahn Decomposition Theorem. There exists $P \in \mathscr{S}$ such that $P$ is positive for $\mu$ and $P^c$ is negative for $\mu$. The pair $(P, P^c)$ is a Hahn decomposition of $S$. If $(Q, Q^c)$ is another Hahn decomposition, then $P \bigtriangleup Q$ is null for $\mu$.
Proof
Suppose first that $\mu$ does not take the value $\infty$. As with the previous result, the proof is recursive. For the initialization step, let $P_0 = \emptyset$. Then trivially, $P_0$ is positive for $\mu$. For the recursive step, suppose that $P_n \in \mathscr{S}$ is positive for $\mu$. If $P_n^c$ is negative for $\mu$, let $P = P_n$. Otherwise let $a_n = \sup\{\mu(A): A \in \mathscr{S}, A \subseteq P_n^c\}$. Since $P_n^c$ is not negative for $\mu$, it follows that $a_n \gt 0$ (and possibly $\infty$). Let $b_n = a_n / 2$ if $0 \lt a_n \lt \infty$ and $b_n = 1$ if $a_n = \infty$. Then $b_n \lt a_n$ so there exists $B_n \in \mathscr{S}$ with $B_n \subseteq P_n^c$ and $\mu(B_n) \ge b_n \gt 0$. By the previous lemma, there exists $A_n \in \mathscr{S}$ with $A_n \subseteq B_n$, $A_n$ positive for $\mu$, and $\mu(A_n) \ge \mu(B_n)$. Let $P_{n+1} = P_n \cup A_n$. Then $P_{n+1} \in \mathscr{S}$ is positive for $\mu$.
If the recursive process ends after a finite number of steps, then $P$ is well-defined and $(P, P^c)$ is a Hahn decomposition. Otherwise we generate an infinite sequence $(A_1, A_2, \ldots)$ of disjoint sets in $\mathscr{S}$, each positive for $\mu$. Let $P = \bigcup_{n=1}^\infty A_n$. Then $P \in \mathscr{S}$ is positive for $\mu$ by the closure result above. Let $A \subseteq P^c$. If $\mu(A) \gt 0$ then $\mu(A) \le a_n$ for every $n \in \N_+$. Hence $b_n \ge \frac{1}{2} \mu(A)$ or $b_n = 1$ for every $n \in \N_+$. But then $\mu(P) = \sum_{n=1}^\infty \mu(A_n) \ge \sum_{n=1}^\infty \mu(B_n) \ge \sum_{n=1}^\infty b_n = \infty$ a contradiction. Hence $\mu(A) \le 0$ so $P^c$ is negative for $\mu$ and thus $(P, P^c)$ is a Hahn decomposition.
Suppose that $(Q, Q^c)$ is another Hahn decomposition of $S$. Then $P \cap Q^c$ and $Q \cap P^c$ are both positive and negative for $\mu$ and hence are null for $\mu$. Hence $P \bigtriangleup Q = (P \cap Q^c) \cup (Q \cap P^c)$ is null for $\mu$.
Finally, suppose that $\mu$ takes the value $\infty$. Then $\mu$ does not take the value $-\infty$ by the infinity rule and hence $-\mu$ does not take the value $\infty$. By our proof so far, there exists a Hahn decomposition $(P, P^c)$ for $-\mu$ that is essentially unique. But then $(P^c, P)$ is a Hahn decomposition for $\mu$.
It's easy to see the Hahn decomposition for a measure on a $\sigma$-algebra generated by a countable partition.
Suppose that $\mathscr{A} = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty sets, and that $\mathscr{S} = \sigma(\mathscr{A})$. Suppose that $\mu$ is a measure on $(S, \mathscr{S})$. Let $I_+ = \{i \in I: \mu(A_i) \gt 0\}$ and $I_0 = \{i \in I: \mu(A_i) = 0\}$. Then $(P, P^c)$ is a Hahn decomposition of $\mu$ if and only if the positive set $P$ has the form $P = \bigcup_{j \in J} A_j$ where $J = I_+ \cup K$ and $K \subseteq I_0$.
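The following hypothetical Python sketch illustrates this result for a finite partition: the canonical positive set collects the atoms of positive measure, and adjoining null atoms yields the other Hahn decompositions.

```python
# A sketch of the Hahn decomposition for a measure on a sigma-algebra
# generated by a finite partition: collect the atoms of positive measure.
atom_measure = {'A1': 2.0, 'A2': -1.5, 'A3': 0.0, 'A4': 3.0}

P = {i for i, m in atom_measure.items() if m > 0}            # I_+
null_atoms = {i for i, m in atom_measure.items() if m == 0}  # I_0

# P is positive for mu and its complement is negative for mu; moving any
# subset of the null atoms between P and P^c gives another Hahn
# decomposition, differing from this one by a null set.
P_complement = set(atom_measure) - P
print(P)             # {'A1', 'A4'}
print(P_complement)  # {'A2', 'A3'}
```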
The Jordan Decomposition
The Hahn decomposition leads to another decomposition theorem called the Jordan decomposition theorem, named for the French mathematician Camille Jordan. This one shows that every measure is the difference of positive measures. Once again we assume that $\mu$ is a measure on $(S, \mathscr{S})$.
Jordan Decomposition Theorem. The measure $\mu$ can be written uniquely in the form $\mu = \mu_+ - \mu_-$ where $\mu_+$ and $\mu_-$ are positive measures, at least one finite, and with the property that if $(P, P^c)$ is any Hahn decomposition of $S$, then $P^c$ is a null set of $\mu_+$ and $P$ is a null set of $\mu_-$. The pair $(\mu_+, \mu_-)$ is the Jordan decomposition of $\mu$.
Proof
Let $(P, P^c)$ be a Hahn decomposition of $S$ relative to $\mu$. Define $\mu_+(A) = \mu(A \cap P)$ and $\mu_-(A) = -\mu(A \cap P^c)$ for $A \in \mathscr{S}$. Then $\mu_+$ and $\mu_-$ are positive measures and $\mu = \mu_+ - \mu_-$. Moreover, since $\mu$ cannot take both $\infty$ and $-\infty$ as values by the infinity rule, one of these two positive measures is finite.
Suppose that $(Q, Q^c)$ is an arbitrary Hahn decomposition. If $A \subseteq Q^c$, then $\mu_+(A) = \mu(P \cap A) = 0$ since $P \cap Q^c$ is a null set of $\mu$ by the Hahn decomposition theorem. Similarly if $A \subseteq Q$ then $\mu_-(A) = \mu(P^c \cap A) = 0$ since $P^c \cap Q$ is a null set of $\mu$.
Suppose that $\mu = \nu_+ - \nu_-$ is another decomposition with the same properties. If $A \in \mathscr{S}$ then $\mu_+(A) = \mu(A \cap P) = [\nu_+(A \cap P) - \nu_-(A \cap P)] = \nu_+(A \cap P)$. But also $\nu_+(A) = \nu_+(A \cap P) + \nu_+(A \cap P^c) = \nu_+(A \cap P)$. Hence $\nu_+ = \mu_+$ and therefore also $\nu_- = \mu_-$.
The Jordan decomposition leads to an important set of new definitions.
Suppose that $\mu$ has Jordan decomposition $\mu = \mu_+ - \mu_-$.
1. The positive measure $\mu_+$ is called the positive variation measure of $\mu$.
2. The positive measure $\mu_-$ is called the negative variation measure of $\mu$.
3. The positive measure $\left| \mu \right| = \mu_+ + \mu_-$ is called the total variation measure of $\mu$.
4. $\| \mu \| = \left|\mu\right|(S)$ is the total variation of $\mu$.
Note that, in spite of the similarity in notation, $\mu_+(A)$ and $\mu_-(A)$ are not simply the positive and negative parts of the (extended) real number $\mu(A)$, nor is $\left| \mu \right|(A)$ the absolute value of $\mu(A)$. Also, be careful not to confuse the total variation of $\mu$, a number in $[0, \infty]$, with the total variation measure. The positive, negative, and total variation measures can be written directly in terms of $\mu$.
For $A \in \mathscr{S}$,
1. $\mu_+(A) = \sup\{\mu(B): B \in \mathscr{S}, B \subseteq A\}$
2. $\mu_-(A) = -\inf\{\mu(B): B \in \mathscr{S}, B \subseteq A\}$
3. $\left| \mu \right|(A) = \sup\left\{ \sum_{i \in I} \left|\mu(A_i)\right|: \{A_i: i \in I\} \text{ is a finite, measurable partition of } A \right\}$
4. $\left\| \mu \right\| = \sup\left\{ \sum_{i \in I} \left|\mu(A_i)\right|: \{A_i: i \in I\} \text{ is a finite, measurable partition of } S \right\}$
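For a measure on a $\sigma$-algebra generated by a finite partition, the variation measures can be computed directly from the atom values, as in this minimal Python sketch (the atom values are hypothetical).

```python
# A sketch of the Jordan decomposition and total variation for a measure on
# a sigma-algebra generated by a finite partition: on atoms, the positive
# and negative variations are the positive and negative parts of the values.
atom_measure = {'A1': 2.0, 'A2': -1.5, 'A3': 0.0, 'A4': 3.0}

mu_plus  = {i: max(m, 0.0) for i, m in atom_measure.items()}
mu_minus = {i: max(-m, 0.0) for i, m in atom_measure.items()}
total_variation_measure = {i: mu_plus[i] + mu_minus[i] for i in atom_measure}

total_variation = sum(total_variation_measure.values())  # ||mu|| = |mu|(S)
print(total_variation)  # 6.5 = 2.0 + 1.5 + 0.0 + 3.0
```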
The total variation measure is related to sum and scalar multiples of measures in a natural way.
Suppose that $\mu$ and $\nu$ are measures of the same type and that $c \in \R$. Then
1. $\left| \mu \right| = 0$ if and only if $\mu = 0$ (the zero measure).
2. $\left| c \mu \right| = \left|c\right| \left| \mu \right|$
3. $\left| \mu + \nu \right| \le \left| \mu \right| + \left| \nu \right|$
Proof
1. Since $\mu_+$, $\mu_-$ and $|\mu| = \mu_+ + \mu_-$ are positive measures, $|\mu| = 0$ if and only if $\mu_+ = \mu_- = 0$ if and only if $\mu = 0$.
2. If $c \gt 0$ then $(c \mu)_+ = c \mu_+$ and $(c \mu)_- = c \mu_-$. If $c \lt 0$ then $(c \mu)_+ = -c \mu_-$ and $(c \mu)_- = - c \mu_+$. Of course, if $c = 0$ then $(c \mu)_+ = (c \mu)_- = 0$. In all cases, $|c \mu| = (c \mu)_+ + (c \mu)_- = |c| (\mu_+ + \mu_-) = |c| |\mu|$
3. From the theorem above, $(\mu + \nu)_+ \le \mu_+ + \nu_+$ and $(\mu + \nu)_- \le \mu_- + \nu_-$. So \begin{align*} |\mu + \nu| & = (\mu + \nu)_+ + (\mu + \nu)_- \le (\mu_+ + \nu_+) + (\mu_- + \nu_-)\\ & = (\mu_+ + \mu_-) + (\nu_+ + \nu_-) = |\mu| + |\nu| \end{align*}
You may have noticed that the properties in the last result look a bit like norm properties. In fact, total variation really is a norm on the vector space of finite measures on $(S, \mathscr{S})$:
Suppose that $\mu$ and $\nu$ are measures of the same type and that $c \in \R$. Then
1. $\| \mu \| = 0$ if and only if $\mu = 0$ (the zero property)
2. $\| c \mu \| = \left|c\right| \| \mu \|$ (the scaling property)
3. $\| \mu + \nu \| \le \| \mu \| + \| \nu \|$ (the triangle inequality)
Proof
1. Since $|\mu|$ is a positive measure, $\|\mu\| = |\mu|(S) = 0$ if and only if $|\mu| = 0$. From part (a) of the previous theorem, $|\mu| = 0$ if and only if $\mu = 0$.
2. From part (b) of the previous theorem, $\|c \mu\| = |c \mu|(S) = |c| |\mu|(S) = |c| \|\mu\|$.
3. From part (c) of the previous theorem, $\|\mu + \nu\| = |\mu + \nu|(S) \le |\mu|(S) + |\nu|(S) = \|\mu\| + \|\nu\|$.
Every norm on a vector space leads to a corresponding measure of distance (a metric). Let $\mathscr{M}$ denote the collection of finite measures on $(S, \mathscr{S})$. Then $\mathscr{M}$, under the usual definition of addition and scalar multiplication of measures, is a vector space, and as the last theorem shows, $\| \cdot \|$ is a norm on $\mathscr{M}$. Here are the corresponding metric space properties:
Suppose that $\mu, \, \nu, \, \rho \in \mathscr{M}$. Then
1. $\| \mu - \nu \| = \| \nu - \mu\|$, the symmetric property
2. $\| \mu - \nu \| = 0$ if and only if $\mu = \nu$, the zero property
3. $\| \mu - \rho\| \le \| \mu - \nu\| + \|\nu - \rho\|$, the triangle inequality
Now that we have a metric, we have a corresponding criterion for convergence.
Suppose that $\mu_n \in \mathscr{M}$ for $n \in \N_+$ and $\mu \in \mathscr{M}$. We say that $\mu_n \to \mu$ as $n \to \infty$ in total variation if $\|\mu_n - \mu\| \to 0$ as $n \to \infty$.
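For probability measures on a finite set, with densities (probability mass functions) relative to counting measure, the norm defined above reduces to the sum of the pointwise absolute differences; the following Python sketch, with hypothetical mass functions, illustrates convergence in total variation.

```python
# A sketch of convergence in total variation for probability measures on a
# finite set. Each measure is given by its probability mass function, and
# with the norm above, ||mu - nu|| is the sum of |p(x) - q(x)| over x.
def tv_norm(p, q):
    """Total variation norm of mu - nu for pmfs p, q on the same finite set."""
    return sum(abs(p[x] - q[x]) for x in p)

# mu_n puts mass 1/2 + 1/(2n) on 0 and 1/2 - 1/(2n) on 1; mu is uniform.
mu = {0: 0.5, 1: 0.5}
for n in [1, 10, 100, 1000]:
    mu_n = {0: 0.5 + 1/(2*n), 1: 0.5 - 1/(2*n)}
    print(n, tv_norm(mu_n, mu))  # the norm is 1/n, so mu_n -> mu in total variation
```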
Of course, $\mathscr{M}$ includes the probability measures on $(S, \mathscr{S})$, so we have a new notion of convergence to go along with the others we have studied or will study. Here is a list:
• convergence with probability 1
• convergence in probability
• convergence in distribution
• convergence in $k$th mean
• convergence in total variation
The Integral
Armed with the Jordan decomposition, the integral can be extended to general measures in a natural way.
Suppose that $\mu$ is a measure on $(S, \mathscr{S})$ and that $f: S \to \R$ is measurable. We define $\int_S f \, d\mu = \int_S f \, d\mu_+ - \int_S f \, d\mu_-$ assuming that the integrals on the right exist and that the right side is not of the form $\infty - \infty$.
We will not pursue this extension, but as you might guess, the essential properties of the integral hold.
Complex Measures
Again, suppose that $(S, \mathscr{S})$ is a measurable space. The same axioms that work for general measures can be used to define complex measures. Recall that $\C = \{x + i y: x, \, y \in \R\}$ denotes the set of complex numbers, where $i$ is the imaginary unit.
A complex measure on $(S, \mathscr{S})$ is a function $\mu: \mathscr{S} \to \C$ that satisfies the following properties:
1. $\mu(\emptyset) = 0$
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$ then $\mu\left(\bigcup_{i \in I} A_i\right) = \sum_{i \in I} \mu(A_i)$
Clearly a complex measure $\mu$ can be decomposed as $\mu = \nu + i \rho$ where $\nu$ and $\rho$ are finite (real) measures on $(S, \mathscr{S})$. We will have no use for complex measures in this text, but from the decomposition into finite measures, it's easy to see how to develop the theory.
Computational Exercises
Counterexamples
The lemma needed for the Hahn decomposition theorem can fail without the assumption that $\mu(A) \lt \infty$.
Let $S$ be a set with subsets $A$ and $B$ satisfying $\emptyset \subset B \subset A \subset S$. Let $\mathscr{S} = \sigma\{A, B\}$ be the $\sigma$-algebra generated by $\{A, B\}$. Define $\mu(B) = -1$, $\mu(A \setminus B) = \infty$, $\mu(A^c) = 1$.
1. Draw the Venn diagram of $A$, $B$, $S$.
2. List the sets in $\mathscr{S}$.
3. Using additivity, give the value of $\mu$ on each set in $\mathscr{S}$.
4. Show that $A$ does not have a positive subset $P \in \mathscr{S}$ with $\mu(P) \ge \mu(A)$.
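For parts 2 and 3, a small Python sketch can enumerate $\mathscr{S}$ and apply additivity; the atoms of $\sigma\{A, B\}$ are $B$, $A \setminus B$, and $A^c$, and `float('inf')` is used here only as a stand-in for the value $\infty$.

```python
# A computational sketch of parts 2 and 3: enumerate sigma{A, B} via its
# atoms B, A \ B, and A^c, and compute mu on each set by additivity.
from itertools import combinations

atoms = {'B': -1.0, 'A\\B': float('inf'), 'A^c': 1.0}

for r in range(len(atoms) + 1):
    for J in combinations(atoms, r):
        # Each set in sigma{A, B} is a union of atoms; for example, A is
        # the union of B and A \ B, so mu(A) = -1 + infinity = infinity.
        print(set(J) or 'empty set', sum(atoms[j] for j in J))
```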
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\ms}{\mathscr}$ $\newcommand{\range}{\text{range}}$
Basic Theory
Our starting point is a measurable space $(S, \ms{S})$. That is, $S$ is a set and $\ms{S}$ is a $\sigma$-algebra of subsets of $S$. In the last section, we discussed general measures on $(S, \ms{S})$ that can take positive and negative values. Special cases are positive measures, finite measures, and our favorite kind, probability measures. In particular, we studied properties of general measures, ways to construct them, special sets (positive, negative, and null), and the Hahn and Jordan decompositions.
In this section, we see how to construct a new measure from a given positive measure using a density function, and we answer the fundamental question of when a measure has a density function relative to the given positive measure.
Relations on Measures
The answer to the question involves two important relations on the collection of measures on $(S, \ms{S})$ that are defined in terms of null sets. Recall that $A \in \ms{S}$ is null for a measure $\mu$ on $(S, \ms{S})$ if $\mu(B) = 0$ for every $B \in \ms{S}$ with $B \subseteq A$. At the other extreme, $A \in \ms S$ is a support set for $\mu$ if $A^c$ is a null set. Here are the basic definitions:
Suppose that $\mu$ and $\nu$ are measures on $(S, \ms{S})$.
1. $\nu$ is absolutely continuous with respect to $\mu$ if every null set of $\mu$ is also a null set of $\nu$. We write $\nu \ll \mu$.
2. $\mu$ and $\nu$ are mutually singular if there exists $A \in \ms{S}$ such that $A$ is null for $\mu$ and $A^c$ is null for $\nu$. We write $\mu \perp \nu$.
Thus $\nu \ll \mu$ if every support set of $\mu$ is a support set of $\nu$. At the opposite end, $\mu \perp \nu$ if $\mu$ and $\nu$ have disjoint support sets.
Suppose that $\mu$, $\nu$, and $\rho$ are measures on $(S, \ms{S})$. Then
1. $\mu \ll \mu$, the reflexive property.
2. If $\mu \ll \nu$ and $\nu \ll \rho$ then $\mu \ll \rho$, the transitive property.
Recall that every relation that is reflexive and transitive leads to an equivalence relation, and then in turn, the original relation can be extended to a partial order on the collection of equivalence classes. This general theorem on relations leads to the following two results.
Measures $\mu$ and $\nu$ on $(S, \ms{S})$ are equivalent if $\mu \ll \nu$ and $\nu \ll \mu$, and we write $\mu \equiv \nu$. The relation $\equiv$ is an equivalence relation on the collection of measures on $(S, \ms S)$. That is, if $\mu$, $\nu$, and $\rho$ are measures on $(S, \ms{S})$ then
1. $\mu \equiv \mu$, the reflexive property
2. If $\mu \equiv \nu$ then $\nu \equiv \mu$, the symmetric property
3. If $\mu \equiv \nu$ and $\nu \equiv \rho$ then $\mu \equiv \rho$, the transitive property
Thus, $\mu$ and $\nu$ are equivalent if they have the same null sets and thus the same support sets. This equivalence relation is rather weak: equivalent measures have the same support sets, but the values assigned to these sets can be very different. As usual, we will write $[\mu]$ for the equivalence class of a measure $\mu$ on $(S, \ms{S})$, under the equivalence relation $\equiv$.
If $\mu$ and $\nu$ are measures on $(S, \ms{S})$, we write $[\mu] \preceq [\nu]$ if $\mu \ll \nu$. The definition is consistent, and defines a partial order on the collection of equivalence classes. That is, if $\mu$, $\nu$, and $\rho$ are measures on $(S, \ms{S})$ then
1. $[\mu] \preceq [\mu]$, the reflexive property.
2. If $[\mu] \preceq [\nu]$ and $[\nu] \preceq [\mu]$ then $[\mu] = [\nu]$, the antisymmetric property.
3. If $[\mu] \preceq [\nu]$ and $[\nu] \preceq [\rho]$ then $[\mu] \preceq [\rho]$, the transitive property
The singularity relation is trivially symmetric and is almost anti-reflexive.
Suppose that $\mu$ and $\nu$ are measures on $(S, \ms{S})$. Then
1. If $\mu \perp \nu$ then $\nu \perp \mu$, the symmetric property.
2. $\mu \perp \mu$ if and only if $\mu = \bs 0$, the zero measure.
Proof
Part (a) is trivial from the symmetry of the definition. For part (b), note that $S$ is null for $0$ and $\emptyset$ is null for $0$, so $0 \perp 0$. Conversely, suppose that $\mu$ is a measure and $\mu \perp \mu$. Then there exists $A \in \ms{S}$ such that $A$ is null for $\mu$ and $A^c$ is null for $\mu$. But then $S = A \cup A^c$ is null for $\mu$, so $\mu(B) = 0$ for every $B \in \ms{S}$.
Absolute continuity and singularity are preserved under multiplication by nonzero constants.
Suppose that $\mu$ and $\nu$ are measures on $(S, \ms{S})$ and that $a, \, b \in \R \setminus \{0\}$. Then
1. $\nu \ll \mu$ if and only if $a \nu \ll b \mu$.
2. $\nu \perp \mu$ if and only if $a \nu \perp b \mu$.
Proof
Recall that if $c \ne 0$, then $A \in \ms{S}$ is null for $\mu$ if and only if $A$ is null for $c \mu$.
There is a corresponding result for sums of measures.
Suppose that $\mu$ is a measure on $(S, \ms{S})$ and that $\nu_i$ is a measure on $(S, \ms{S})$ for each $i$ in a countable index set $I$. Suppose also that $\nu = \sum_{i \in I} \nu_i$ is a well-defined measure on $(S, \ms{S})$.
1. If $\nu_i \ll \mu$ for every $i \in I$ then $\nu \ll \mu$.
2. If $\nu_i \perp \mu$ for every $i \in I$ then $\nu \perp \mu$.
Proof
Recall that if $A \in \ms{S}$ is null for $\nu_i$ for each $i \in I$, then $A$ is null for $\nu = \sum_{i \in I} \nu_i$, assuming that this is a well-defined measure.
As before, note that $\nu = \sum_{i \in I} \nu_i$ is well-defined if $\nu_i$ is a positive measure for each $i \in I$ or if $I$ is finite and $\nu_i$ is a finite measure for each $i \in I$. We close this subsection with a couple of results that involve both the absolute continuity relation and the singularity relation.
Suppose that $\mu$, $\nu$, and $\rho$ are measures on $(S, \ms{S})$. If $\nu \ll \mu$ and $\mu \perp \rho$ then $\nu \perp \rho$.
Proof
Since $\mu \perp \rho$, there exists $A \in \ms{S}$ such that $A$ is null for $\mu$ and $A^c$ is null for $\rho$. But $\nu \ll \mu$ so $A$ is null for $\nu$. Hence $\nu \perp \rho$.
Suppose that $\mu$ and $\nu$ are measures on $(S, \ms{S})$. If $\nu \ll \mu$ and $\nu \perp \mu$ then $\nu = \bs 0$.
Proof
From the previous theorem (with $\rho = \nu$) we have $\nu \perp \nu$ and hence by (5), $\nu = \bs 0$.
Density Functions
We are now ready for our study of density functions. Throughout this subsection, we assume that $\mu$ is a positive, $\sigma$-finite measure on our measurable space $(S, \ms{S})$. Recall that if $f: S \to \R$ is measurable, then the integral of $f$ with respect to $\mu$ may exist as a number in $\R^* = \R \cup \{-\infty, \infty\}$ or may fail to exist.
Suppose that $f: S \to \R$ is a measurable function whose integral with respect to $\mu$ exists. Then the function $\nu$ defined by $\nu(A) = \int_A f \, d\mu, \quad A \in \ms{S}$ is a $\sigma$-finite measure on $(S, \ms{S})$ that is absolutely continuous with respect to $\mu$. The function $f$ is a density function of $\nu$ relative to $\mu$.
Proof
To say that the integral exists means that either $\int_S f^+ \, d \mu \lt \infty$ or $\int_S f^- \, d\mu \lt \infty$, where as usual, $f^+$ and $f^-$ are the positive and negative parts of $f$. So $\nu(A) = \nu_+(A) - \nu_-(A)$ for $A \in \ms S$ where $\nu_+(A) = \int_A f^+ \, d\mu$ and $\nu_-(A) = \int_A f^- \, d\mu$. Both $\nu_+$ and $\nu_-$ are positive measures by basic properties of the integral: Generically, suppose $g: S \to [0, \infty)$ is measurable. The integral over the empty set is always 0, so $\int_\emptyset g \, d\mu = 0$. Next, if $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\ms{S}$ and $A = \bigcup_{i \in I} A_i$, then by the additivity property of the integral over disjoint domains, $\int_A g \, d\mu = \sum_{i \in I} \int_{A_i} g \, d\mu$ By the assumption that the integral exists, either $\nu_+$ or $\nu_-$ is a finite positive measure, and hence $\nu$ is a measure. As you might guess, $\nu_+$ and $\nu_-$ form the Jordan decomposition of $\nu$, a point that we will revisit below.
Again, either $\nu_+$ or $\nu_-$ is a finite measure. By symmetry, let's suppose that $\nu_-$ is finite. Then to show that $\nu$ is $\sigma$-finite, we just need to show that $\nu_+$ is $\sigma$-finite. Since $\mu$ has this property, there exists a collection $\{A_n: n \in \N_+\}$ with $A_n \in \ms S$, $\mu(A_n) \lt \infty$, and $\bigcup_{n=1}^\infty A_n = S$. Let $B_n = \{x \in S: f^+(x) \le n\}$ for $n \in \N_+$. Then $B_n \in \ms S$ for $n \in \N_+$ and $\bigcup_{n=1}^\infty B_n = S$. Hence $\{A_m \cap B_n: (m, n) \in \N_+^2\}$ is a countable collection of measurable sets whose union is also $S$. Moreover, $\nu_+(A_m \cap B_n) = \int_{A_m \cap B_n} f^+ d\mu \le n \mu(A_m \cap B_n) \lt \infty$ Finally, suppose $A \in \ms{S}$ is a null set of $\mu$. If $B \in \ms{S}$ and $B \subseteq A$ then $\mu(B) = 0$ so $\nu(B) = \int_B f \, d\mu = 0$. Hence $\nu \ll \mu$.
The following three special cases are the most important:
1. If $f$ is nonnegative (so that the integral exists in $\R \cup \{\infty\}$) then $\nu$ is a positive measure since $\nu(A) \ge 0$ for $A \in \ms{S}$.
2. If $f$ is integrable (so that the integral exists in $\R$), then $\nu$ is a finite measure since $\nu(A) \in \R$ for $A \in \ms{S}$.
3. If $f$ is nonnegative and $\int_S f \, d\mu = 1$ then $\nu$ is a probability measure since $\nu(A) \ge 0$ for $A \in \ms{S}$ and $\nu(S) = 1$.
In case 3, $f$ is the probability density function of $\nu$ relative to $\mu$, our favorite kind of density function. When they exist, density functions are essentially unique.
Suppose that $\nu$ is a $\sigma$-finite measure on $(S, \ms{S})$ and that $\nu$ has density function $f$ with respect to $\mu$. Then $g: S \to \R$ is a density function of $\nu$ with respect to $\mu$ if and only if $f = g$ almost everywhere on $S$ with respect to $\mu$.
Proof
These results also follow from basic properties of the integral. Suppose that $f, \, g: S \to \R$ are measurable functions whose integrals with respect to $\mu$ exist. If $g = f$ almost everywhere on $S$ with respect to $\mu$ then $\int_A f \, d\mu = \int_A g \, d\mu$ for every $A \in \ms{S}$. Hence if $f$ is a density function for $\nu$ with respect to $\mu$ then so is $g$. For the converse, if $\int_A f \, d\mu = \int_A g \, d\mu$ for every $A \in \ms{S}$, then since $\mu$ is $\sigma$-finite, it follows that $f = g$ almost everywhere on $S$ with respect to $\mu$.
The essential uniqueness of density functions can fail if the positive measure space $(S, \ms S, \mu)$ is not $\sigma$-finite. A simple example is given below. Our next result answers the question of when a measure has a density function with respect to $\mu$, and is the fundamental theorem of this section. The theorem is in two parts: Part (a) is the Lebesgue decomposition theorem, named for our old friend Henri Lebesgue. Part (b) is the Radon-Nikodym theorem, named for Johann Radon and Otto Nikodym. We combine the theorems because our proofs of the two results are inextricably linked.
Suppose that $\nu$ is a $\sigma$-finite measure on $(S, \ms{S})$.
1. Lebesgue Decomposition Theorem. $\nu$ can be uniquely decomposed as $\nu = \nu_c + \nu_s$ where $\nu_c \ll \mu$ and $\nu_s \perp \mu$.
2. Radon-Nikodym Theorem. $\nu_c$ has a density function with respect to $\mu$.
Proof
The proof proceeds in stages. We first prove the result for finite, positive measures, then for $\sigma$-finite, positive measures, and finally for general $\sigma$-finite measures. The first stage is the most complicated.
Part 1. Suppose that $\mu$ and $\nu$ are positive, finite measures. Let $\ms{F}$ denote the collection of measurable functions $g: S \to [0, \infty)$ with $\int_A g \, d\mu \le \nu(A)$ for all $A \in \ms{S}$. Note that $\ms{F} \ne \emptyset$ since the constant function $0$ is in $\ms{F}$. The proof works by finding a maximal element of $\ms{F}$ and using this function as the density function of the absolutely continuous part of $\nu$.
Our first step is to show that $\ms{F}$ is closed under the max operator. Let $g_1, \; g_2 \in \ms{F}$. For $A \in \ms{S}$, let $A_1 = \{x \in A: g_1(x) \ge g_2(x)\}$ and $A_2 = \{x \in A: g_1(x) \lt g_2(x)\}$. Then $A_1, \; A_2 \in \ms{S}$ partition $A$ so $\int_A \max\{g_1, g_2\} \, d\mu = \int_{A_1} \max\{g_1, g_2\} \, d\mu + \int_{A_2} \max\{g_1, g_2\} d\mu = \int_{A_1} g_1 \, d\mu + \int_{A_2} g_2 \, d\mu \le \nu(A_1) + \nu(A_2) = \nu(A)$ Hence $\max\{g_1, g_2\} \in \ms{F}$.
Our next step is to show that $\ms{F}$ is closed with respect to increasing limits. Thus suppose that $g_n \in \ms{F}$ for $n \in \N_+$ and that $g_n$ is increasing in $n$ on $S$. Let $g = \lim_{n \to \infty} g_n$. Then $g: S \to [0, \infty]$ is measurable, and by the monotone convergence theorem, $\int_A g \, d\mu = \lim_{n \to \infty} \int_A g_n \, d\mu$ for every $A \in \ms{S}$. But $\int_A g_n \, d\mu \le \nu(A)$ for every $n \in \N_+$ so $\int_A g \, d\mu \le \nu(A)$. In particular, $\int_S g \, d\mu \le \nu(S) \lt \infty$ so $g \lt \infty$ almost everywhere on $S$ with respect to $\mu$. Thus, by redefining $g$ on a $\mu$-null set if necessary, we can assume $g \lt \infty$ on $S$. Hence $g \in \ms{F}$.
Now let $\alpha = \sup\left\{\int_S g \, d\mu: g \in \ms{F}\right\}$. Note that $\alpha \le \nu(S) \lt \infty$. By definition of the supremum, for each $n \in \N_+$ there exist $g_n \in \ms{F}$ such that $\int_S g_n \, d\mu \gt \alpha - \frac{1}{n}$. Now let $f_n = \max\{g_1, g_2, \ldots, g_n\}$ for $n \in \N_+$. Then $f_n \in \ms{F}$ and $f_n$ is increasing in $n \in \N_+$ on $S$. Hence $f = \lim_{n \to \infty} f_n \in \ms{F}$ and $\int_S f \, d\mu = \lim_{n \to \infty} \int_S f_n \, d\mu$. But $\int_S f_n \, d\mu \ge \int_S g_n \, d\mu \gt \alpha - \frac{1}{n}$ for each $n \in \N_+$ and hence $\int_S f \, d\mu \ge \alpha$.
Define $\nu_c(A) = \int_A f \, d\mu$ and $\nu_s(A) = \nu(A) - \nu_c(A)$ for $A \in \ms{S}$. Then $\nu_c$ and $\nu_s$ are finite, positive measures and by our previous theorem, $\nu_c$ is absolutely continuous with respect to $\mu$ and has density function $f$. Our next step is to show that $\nu_s$ is singular with respect to $\mu$. For $n \in \N_+$, let $(P_n, P_n^c)$ denote a Hahn decomposition of the measure $\nu_s - \frac{1}{n} \mu$. Then $\int_A \left(f + \frac{1}{n} \bs{1}_{P_n}\right) \, d\mu = \nu_c(A) + \frac{1}{n} \mu(P_n \cap A) = \nu(A) - \left[\nu_s(A) - \frac{1}{n} \mu(P_n \cap A)\right]$ But $\nu_s(A) - \frac{1}{n} \mu(P_n \cap A) \ge \nu_s(A \cap P_n) - \frac{1}{n} \mu(A \cap P_n) \ge 0$ since $\nu_s$ is a positive measure and $P_n$ is positive for $\nu_s - \frac{1}{n} \mu$. Thus we have $\int_A \left(f + \frac{1}{n} \bs{1}_{P_n} \right) \, d\mu \le \nu(A)$ for every $A \in \ms{S}$, so $f + \frac{1}{n} \bs{1}_{P_n} \in \ms{F}$ for every $n \in \N_+$. If $\mu(P_n) \gt 0$ then $\int_S \left(f + \frac{1}{n} \bs{1}_{P_n}\right) \, d\mu = \alpha + \frac{1}{n} \mu(P_n) \gt \alpha$, which contradicts the definition of $\alpha$. Hence we must have $\mu(P_n) = 0$ for every $n \in \N_+$. Now let $P = \bigcup_{n=1}^\infty P_n$. Then $\mu(P) = 0$. If $\nu_s(P^c) \gt 0$ then $\nu_s(P^c) - \frac{1}{n} \mu(P^c) \gt 0$ for $n$ sufficiently large. But this is a contradiction since $P^c \subseteq P_n^c$ which is negative for $\nu_s - \frac{1}{n} \mu$ for every $n \in \N_+$. Thus we must have $\nu_s(P^c) = 0$, so $\mu$ and $\nu_s$ are singular.
Part 2. Suppose that $\mu$ and $\nu$ are $\sigma$-finite, positive measures. Then there exists a countable partition $\{S_i: i \in I\}$ of $S$ where $S_i \in \ms{S}$ for $i \in I$, and $\mu(S_i) \lt \infty$ and $\nu(S_i) \lt \infty$ for $i \in I$. Let $\mu_i(A) = \mu(A \cap S_i)$ and $\nu_i(A) = \nu(A \cap S_i)$ for $i \in I$. Then $\mu_i$ and $\nu_i$ are finite, positive measures for $i \in I$, and $\mu = \sum_{i \in I} \mu_i$ and $\nu = \sum_{i \in I} \nu_i$. By part 1, for each $i \in I$, there exists a measurable function $f_i: S \to [0, \infty)$ such that $\nu_i = \nu_{i,c} + \nu_{i,s}$ where $\nu_{i, c}(A) = \int_A f_i \, d\mu$ for $A \in \ms{S}$ and $\nu_{i,s} \perp \mu$. Let $f = \sum_{i \in I} \bs{1}_{S_i} f_i$. Then $f: S \to [0, \infty)$ is measurable. Define $\nu_c(A) = \int_A f \, d\mu$ and $\nu_s(A) = \nu(A) - \nu_c(A)$ for $A \in \ms{S}$. Note that $\nu_c = \sum_{i \in I} \nu_{i,c}$ and $\nu_s = \sum_{i \in I} \nu_{i,s}$. Then $\nu_c \ll \mu$ and has density function $f$ and $\nu_s \perp \mu$.
Part 3. Suppose that $\nu$ is a $\sigma$-finite measure (not necessarily positive). By the Jordan decomposition theorem, $\nu = \nu_+ - \nu_-$ where $\nu_+$ and $\nu_-$ are $\sigma$-finite, positive measures, and at least one is finite. By part 2, there exist measurable functions $f_+: S \to [0, \infty)$ and $f_-: S \to [0, \infty)$ such that $\nu_+ = \nu_{+,c} + \nu_{+,s}$ and $\nu_- = \nu_{-,c} + \nu_{-,s}$ where $\nu_{+,c}(A) = \int_A f_+ \, d\mu$, $\nu_{-,c}(A) = \int_A f_- \, d\mu$ for $A \in \ms{S}$, and $\nu_{+,s} \perp \mu$, $\nu_{-,s} \perp \mu$. Let $f = f_+ - f_-$, $\nu_c(A) = \int_A f \, d\mu$, $\nu_s(A) = \nu(A) - \nu_c(A)$ for $A \in \ms{S}$. Then $\nu = \nu_c + \nu_s$ and $\nu_s = \nu_{+,s} - \nu_{-,s} \perp \mu$.
Uniqueness. Suppose that $\nu = \nu_{c,1} + \nu_{s,1} = \nu_{c,2} + \nu_{s,2}$ where $\nu_{c,i} \ll \mu$ and $\nu_{s,i} \perp \mu$ for $i \in \{1, 2\}$. Then $\nu_{c,1} - \nu_{c,2} = \nu_{s,2} - \nu_{s,1}$. But $\nu_{c,1} - \nu_{c,2} \ll \mu$ and $\nu_{s,2} - \nu_{s,1} \perp \mu$ so $\nu_{c,1} - \nu_{c,2} = \nu_{s,2} - \nu_{s,1} = \bs 0$ by the theorem above.
In particular, a measure $\nu$ on $(S, \ms{S})$ has a density function with respect to $\mu$ if and only if $\nu \ll \mu$. The density function in this case is also referred to as the Radon-Nikodym derivative of $\nu$ with respect to $\mu$ and is sometimes written in derivative notation as $d\nu / d\mu$. This notation, however, can be a bit misleading because we need to remember that a density function is unique only up to a $\mu$-null set. Also, the Radon-Nikodym theorem can fail if the positive measure space $(S, \ms S, \mu)$ is not $\sigma$-finite. A couple of examples are given below. Next we characterize the Hahn decomposition and the Jordan decomposition of $\nu$ in terms of the density function.
Suppose that $\nu$ is a measure on $(S, \ms{S})$ with $\nu \ll \mu$, and that $\nu$ has density function $f$ with respect to $\mu$. Let $P = \{x \in S: f(x) \ge 0\}$, and let $f^+$ and $f^-$ denote the positive and negative parts of $f$.
1. A Hahn decomposition of $\nu$ is $(P, P^c)$.
2. The Jordan decomposition is $\nu = \nu_+ - \nu_-$ where $\nu_+(A) = \int_A f^+ \, d\mu$ and $\nu_-(A) = \int_A f^- \, d\mu$, for $A \in \ms{S}$.
Proof
Of course $P^c = \{x \in S: f(x) \lt 0\}$. The proofs are simple.
1. Suppose that $A \in \ms S$. If $A \subseteq P$ then $f(x) \ge 0$ for $x \in A$ and hence $\nu(A) = \int_A f \, d\mu \ge 0$. If $A \subseteq P^c$ then $\nu(A) = \int_A f \, d\mu \le 0$.
2. This follows immediately from (a) and the Jordan decomposition theorem, since $\nu_+(A) = \nu(A \cap P)$ and $\nu_-(A) = -\nu(A \cap P^c)$ for $A \in \ms S$. Note that $f^+ = \bs 1_P f$ and $f^- = -\bs 1_{P^c} f$.
The following result is a basic change of variables theorem for integrals.
Suppose that $\nu$ is a positive measure on $(S, \ms{S})$ with $\nu \ll \mu$ and that $\nu$ has density function $f$ with respect to $\mu$. If $g: S \to \R$ is a measurable function whose integral with respect to $\nu$ exists, then $\int_S g \, d\nu = \int_S g f \, d\mu$
Proof
The proof is a classical bootstrapping argument. Suppose first that $g = \sum_{i \in I} a_i \bs{1}_{A_i}$ is a nonnegative simple function. That is, $I$ is a finite index set, $a_i \in [0, \infty)$ for $i \in I$, and $\{A_i: i \in I\}$ is a disjoint collection of sets in $\ms{S}$. Then $\int_S g \, d\nu = \sum_{i \in I} a_i \nu(A_i)$. But $\nu(A_i) = \int_{A_i} f \, d\mu = \int_S \bs{1}_{A_i} f \, d\mu$ for each $i \in I$ so $\int_S g \, d\nu = \sum_{i \in I} a_i \int_S \bs{1}_{A_i} f \, d\mu = \int_S \left(\sum_{i \in I} a_i \bs{1}_{A_i}\right) f \, d\mu = \int_S g f \, d\mu$ Suppose next that $g: S \to [0, \infty)$ is measurable. There exists a sequence of nonnegative simple functions $(g_1, g_2, \ldots)$ such that $g_n$ is increasing in $n \in \N_+$ on $S$ and $g_n \to g$ as $n \to \infty$ on $S$. Since $f$ is nonnegative, $g_n f$ is increasing in $n \in \N_+$ on $S$ and $g_n f \to g f$ as $n \to \infty$ on $S$. By the first step, $\int_S g_n \, d\nu = \int_S g_n f \, d\mu$ for each $n \in \N_+$. But by the monotone convergence theorem, $\int_S g_n \, d\nu \to \int_S g \, d\nu$ and $\int_S g_n f \, d\mu \to \int_S g f \, d\mu$ as $n \to \infty$. Hence $\int_S g \, d\nu = \int_S g f \, d\mu$.
Finally, suppose that $g: S \to \R$ is a measurable function whose integral with respect to $\nu$ exists. By the previous step, $\int_S g^+ \, d\nu = \int_S g^+ f \, d\mu$ and $\int_S g^- \, d\nu = \int_S g^- f \, d\mu$, and at least one of these integrals is finite. Hence by the additive property $\int_S g \, d\nu = \int_S g^+ \, d\nu - \int_S g^- \, d\nu = \int_S g^+ f \, d\mu - \int_S g^- f \, d\mu = \int_S (g^+ - g^-) f \, d\mu = \int_S g f \, d\mu$
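The following minimal Python sketch, with hypothetical weights, illustrates the change of variables theorem in the simplest discrete setting, where all of the integrals reduce to finite sums.

```python
# A numerical sketch of the change of variables theorem on a finite set:
# mu is a positive measure given by atom weights w, nu has density f with
# respect to mu, and the nu-integral of g equals the mu-integral of g * f.
S = [0, 1, 2, 3]
w = {0: 1.0, 1: 2.0, 2: 0.5, 3: 4.0}    # mu{x}
f = {0: 0.5, 1: 1.0, 2: 2.0, 3: 0.25}   # density of nu with respect to mu

nu = {x: f[x] * w[x] for x in S}        # nu{x} = integral of f over {x}

g = {0: 3.0, 1: -1.0, 2: 4.0, 3: 8.0}
lhs = sum(g[x] * nu[x] for x in S)        # integral of g with respect to nu
rhs = sum(g[x] * f[x] * w[x] for x in S)  # integral of g * f with respect to mu
print(lhs, rhs)  # both 11.5
```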
In differential notation, the change of variables theorem has the familiar form $d\nu = f \, d\mu$, and this is really the justification for the derivative notation $f = d\nu / d\mu$ in the first place. The following result gives the scalar multiple rule for density functions.
Suppose that $\nu$ is a measure on $(S, \ms{S})$ with $\nu \ll \mu$ and that $\nu$ has density function $f$ with respect to $\mu$. If $c \in \R$, then $c \nu$ has density function $c f$ with respect to $\mu$.
Proof
If $A \in \ms{S}$ then $\int_A c f \, d\mu = c \int_A f \, d\mu = c \nu(A)$.
Of course, we already knew that $\nu \ll \mu$ implies $c \nu \ll \mu$ for $c \in \R$, so the new information is the relation between the density functions. In derivative notation, the scalar multiple rule has the familiar form $\frac{d(c \nu)}{d\mu} = c \frac{d\nu}{d\mu}$
The following result gives the sum rule for density functions. Recall that two measures are of the same type if neither takes the value $\infty$ or if neither takes the value $-\infty$.
Suppose that $\nu$ and $\rho$ are measures on $(S, \ms{S})$ of the same type with $\nu \ll \mu$ and $\rho \ll \mu$, and that $\nu$ and $\rho$ have density functions $f$ and $g$ with respect to $\mu$, respectively. Then $\nu + \rho$ has density function $f + g$ with respect to $\mu$.
Proof
If $A \in \ms{S}$ then $\int_A (f + g) \, d\mu = \int_A f \, d\mu + \int_A g \, d\mu = \nu(A) + \rho(A)$ The additive property holds because we know that the integrals in the middle of the displayed equation are not of the form $\infty - \infty$.
Of course, we already knew that $\nu \ll \mu$ and $\rho \ll \mu$ imply $\nu + \rho \ll \mu$, so the new information is the relation between the density functions. In derivative notation, the sum rule has the familiar form $\frac{d(\nu + \rho)}{d\mu} = \frac{d\nu}{d\mu} + \frac{d\rho}{d\mu}$ The following result is the chain rule for density functions.
Suppose that $\nu$ is a positive measure on $(S, \ms{S})$ with $\nu \ll \mu$ and that $\nu$ has density function $f$ with respect to $\mu$. Suppose $\rho$ is a measure on $(S, \ms{S})$ with $\rho \ll \nu$ and that $\rho$ has density function $g$ with respect to $\nu$. Then $\rho$ has density function $g f$ with respect to $\mu$.
Proof
This is a simple consequence of the change of variables theorem above. If $A \in \ms{S}$ then $\rho(A) = \int_A g \, d\nu = \int_A g f \, d\mu$.
Of course, we already knew that $\nu \ll \mu$ and $\rho \ll \nu$ imply $\rho \ll \mu$, so once again the new information is the relation between the density functions. In derivative notation, the chain rule has the familiar form $\frac{d\rho}{d\mu} = \frac{d\rho}{d\nu} \frac{d\nu}{d\mu}$ The following related result is the inverse rule for density functions.
Suppose that $\nu$ is a positive measure on $(S, \ms{S})$ with $\nu \ll \mu$ and $\mu \ll \nu$ (so that $\nu \equiv \mu$). If $\nu$ has density function $f$ with respect to $\mu$ then $\mu$ has density function $1 / f$ with respect to $\nu$.
Proof
Let $f$ be a density function of $\nu$ with respect to $\mu$ and let $Z = \{x \in S: f(x) = 0\}$. Then $\nu(Z) = \int_Z f \, d\mu = 0$ so $Z$ is a null set of $\nu$ and hence is also a null set of $\mu$. Thus, we can assume that $f \ne 0$ on $S$. Let $g$ be a density of $\mu$ with respect to $\nu$. Since $\mu \ll \nu \ll \mu$, it follows from the chain rule that $f g$ is a density of $\mu$ with respect to $\mu$. But of course the constant function $1$ is also a density of $\mu$ with respect to itself so we have $f g = 1$ almost everywhere on $S$. Thus $1 / f$ is a density of $\mu$ with respect to $\nu$.
In derivative notation, the inverse rule has the familiar form $\frac{d\mu}{d\nu} = \frac{1}{d\nu / d\mu}$
Examples and Special Cases
Discrete Spaces
Recall that a discrete measure space $(S, \ms S, \#)$ consists of a countable set $S$ with the $\sigma$-algebra $\ms{S} = \ms{P}(S)$ of all subsets of $S$, and with counting measure $\#$. Of course $\#$ is a positive measure and is trivially $\sigma$-finite since $S$ is countable. Note also that $\emptyset$ is the only set that is null for $\#$. If $\nu$ is a $\sigma$-finite measure on $S$, then by definition, $\nu(\emptyset) = 0$, so $\nu$ is absolutely continuous relative to $\#$. Thus, by the Radon-Nikodym theorem, $\nu$ can be written in the form $\nu(A) = \sum_{x \in A} f(x), \quad A \subseteq S$ for a unique $f: S \to \R$. Of course, this is obvious by a direct argument. If we define $f(x) = \nu\{x\}$ for $x \in S$ then the displayed equation follows by the countable additivity of $\nu$.
Spaces Generated by Countable Partitions
We can generalize the last discussion to spaces generated by countable partitions. Suppose that $S$ is a set and that $\ms{A} = \{A_i: i \in I\}$ is a countable partition of $S$ into nonempty sets. Let $\ms{S} = \sigma(\ms{A})$ and recall that every $A \in \ms{S}$ has a unique representation of the form $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$. Suppose now that $\mu$ is a positive measure on $\ms{S}$ with $0 \lt \mu(A_i) \lt \infty$ for every $i \in I$. Then once again, the measure space $(S, \ms{S}, \mu)$ is $\sigma$-finite and $\emptyset$ is the only null set. Hence if $\nu$ is a $\sigma$-finite measure on $(S, \ms{S})$ then $\nu$ is absolutely continuous with respect to $\mu$ and hence has a unique density function $f$ with respect to $\mu$: $\nu(A) = \int_A f \, d\mu, \quad A \in \ms{S}$ Once again, we can construct the density function explicitly.
In the setting above, define $f: S \to \R$ by $f(x) = \nu(A_i) / \mu(A_i)$ for $x \in A_i$ and $i \in I$. Then $f$ is the density of $\nu$ with respect to $\mu$.
Proof
Suppose that $A \in \ms{S}$ so that $A = \bigcup_{j \in J} A_j$ for some $J \subseteq I$. Then $\int_A f \, d\mu = \sum_{j \in J} \int_{A_j} f \, d\mu = \sum_{j \in J} \frac{\nu(A_j)}{\mu(A_j)} \mu(A_j) = \sum_{j \in J} \nu(A_j) = \nu(A)$
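Here is a minimal Python sketch of this construction for a finite partition (the atom values are hypothetical): the density is constant on each atom, and integrating it against $\mu$ recovers $\nu$.

```python
# A sketch of the density construction above: both measures are given by
# their values on the atoms of a finite partition, and the density is
# constant on each atom, equal to nu(A_i) / mu(A_i).
mu_atoms = {'A1': 2.0, 'A2': 0.5, 'A3': 4.0}   # positive and finite on atoms
nu_atoms = {'A1': 1.0, 'A2': -1.5, 'A3': 2.0}

density = {i: nu_atoms[i] / mu_atoms[i] for i in mu_atoms}

def nu_from_density(J):
    """Recover nu(A) for A = union of atoms A_j, j in J, by integrating f."""
    return sum(density[j] * mu_atoms[j] for j in J)

print(nu_from_density({'A1', 'A2'}))     # -0.5
print(nu_atoms['A1'] + nu_atoms['A2'])   # -0.5, matching nu directly
```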
Often positive measure spaces that occur in applications can be decomposed into spaces generated by countable partitions. In the section on Convergence in the chapter on Martingales, we show that more general density functions can be obtained as limits of density functions of the type in the last theorem.
Probability Spaces
Suppose that $(\Omega, \ms{F}, \P)$ is a probability space and that $X$ is a random variable taking values in a measurable space $(S, \ms{S})$. Recall that the distribution of $X$ is the probability measure $P_X$ on $(S, \ms{S})$ given by $P_X(A) = \P(X \in A), \quad A \in \ms{S}$ If $\mu$ is a positive, $\sigma$-finite measure on $(S, \ms{S})$, then the theory of this section applies, of course. The Radon-Nikodym theorem tells us precisely when (the distribution of) $X$ has a probability density function with respect to $\mu$: we need the distribution to be absolutely continuous with respect to $\mu$, so that $\mu(A) = 0$ implies $P_X(A) = \P(X \in A) = 0$ for $A \in \ms{S}$.
Suppose that $r: S \to \R$ is measurable, so that $r(X)$ is a real-valued random variable. The integral of $r(X)$ (assuming that it exists) is of fundamental importance, and is known as the expected value of $r(X)$. We will study expected values in detail in the next chapter, but here we just note different ways to write the integral. By the change of variables theorem in the last section we have $\int_\Omega r[X(\omega)] d\P(\omega) = \int_S r(x) dP_X(x)$ Assuming that $P_X$, the distribution of $X$, is absolutely continuous with respect to $\mu$, with density function $f$, we can add to our chain of integrals using Theorem (14): $\int_\Omega r[X(\omega)] d\P(\omega) = \int_S r(x) dP_X(x) = \int_S r(x) f(x) d\mu(x)$
Specializing, suppose that $(S, \ms S, \#)$ is a discrete measure space. Thus $X$ has a discrete distribution and (as noted in the previous subsection), the distribution of $X$ is absolutely continuous with respect to $\#$, with probability density function $f$ given by $f(x) = \P(X = x)$ for $x \in S$. In this case the integral simplifies: $\int_\Omega r[X(\omega)] d\P(\omega) = \sum_{x \in S} r(x) f(x)$
Recall next that for $n \in \N_+$, the $n$-dimensional Euclidean measure space is $(\R^n, \ms R_n, \lambda_n)$ where $\ms R_n$ is the $\sigma$-algebra of Lebesgue measurable sets and $\lambda_n$ is Lebesgue measure. Suppose now that $S \in \ms R_n$ and that $\ms{S}$ is the $\sigma$-algebra of Lebesgue measurable subsets of $S$, and that once again, $X$ is a random variable with values in $S$. By definition, $X$ has a continuous distribution if $\P(X = x) = 0$ for $x \in S$. But we now know that this is not enough to ensure that the distribution of $X$ has a density function with respect to $\lambda_n$. We need the distribution to be absolutely continuous, so that if $\lambda_n(A) = 0$ then $\P(X \in A) = 0$ for $A \in \ms{S}$. Of course $\lambda_n\{x\} = 0$ for $x \in S$, so absolute continuity implies continuity; the converse fails, and in fact continuity is a (much) weaker condition than absolute continuity. If the distribution of $X$ is continuous but not absolutely continuous, then it has no density function with respect to $\lambda_n$.
For example, suppose that $\lambda_n(S) = 0$. Then the distribution of $X$ and $\lambda_n$ are mutually singular since $\P(X \in S) = 1$ and so $X$ will not have a density function with respect to $\lambda_n$. This will always be the case if $S$ is countable, so that the distribution of $X$ is discrete. But it is also possible for $X$ to have a continuous distribution on an uncountable set $S \in \ms R_n$ with $\lambda_n(S) = 0$. In such a case, the continuous distribution of $X$ is said to be degenerate. There are a couple of natural ways in which this can happen that are illustrated in the following exercises.
Suppose that $\Theta$ is uniformly distributed on the interval $[0, 2 \pi)$. Let $X = \cos \Theta$, $Y = \sin \Theta$.
1. $(X, Y)$ has a continuous distribution on the circle $C = \{(x, y): x^2 + y^2 = 1\}$.
2. The distribution of $(X, Y)$ and $\lambda_2$ are mutually singular.
3. Find $\P(Y \gt X)$.
Solution
1. If $(x, y) \in C$ then there exists a unique $\theta \in [0, 2 \pi)$ with $x = \cos \theta$ and $y = \sin \theta$. Hence $\P[(X, Y) = (x, y)] = \P(\Theta = \theta) = 0$.
2. $\P[(X, Y) \in C] = 1$ but $\lambda_2(C) = 0$.
3. $\frac{1}{2}$
The last example is artificial since $(X, Y)$ has a one-dimensional distribution in a sense, in spite of taking values in $\R^2$. And of course $\Theta$ has a probability density function $f$ with respect to $\lambda_1$ given by $f(\theta) = 1 / (2 \pi)$ for $\theta \in [0, 2 \pi)$.
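A quick Monte Carlo sketch (in Python, using only the standard library) of part (c) of the exercise above: we sample $\Theta$ uniformly, form $(X, Y) = (\cos \Theta, \sin \Theta)$, and estimate $\P(Y \gt X)$, which should be close to the exact value $\frac{1}{2}$:
```python
import math
import random

# Monte Carlo sketch of part (c): Theta uniform on [0, 2*pi),
# (X, Y) = (cos Theta, sin Theta); estimate P(Y > X), exactly 1/2.
random.seed(0)
n = 100_000
count = 0
for _ in range(n):
    theta = random.uniform(0.0, 2.0 * math.pi)
    if math.sin(theta) > math.cos(theta):
        count += 1
print(count / n)  # approximately 0.5
```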
Suppose that $X$ is uniformly distributed on the set $\{0, 1, 2\}$, $Y$ is uniformly distributed on the interval $[0, 2]$, and that $X$ and $Y$ are independent.
1. $(X, Y)$ has a continuous distribution on the product set $S = \{0, 1, 2\} \times [0, 2]$.
2. The distribution of $(X, Y)$ and $\lambda_2$ are mutually singular.
3. Find $\P(Y \gt X)$.
Solution
1. The variables are independent and $Y$ has a continuous distribution so $\P[(X, Y) = (x, y)] = \P(X = x) \P(Y = y) = 0$ for $(x, y) \in S$.
2. $\P[(X, Y) \in S] = 1$ but $\lambda_2(S) = 0$.
3. $\frac{1}{2}$
The last exercise is artificial since $X$ has a discrete distribution on $\{0, 1, 2\}$ (with all subsets measurable and with counting measure $\#$), and $Y$ has a continuous distribution on the Euclidean space $[0, 2]$ (with the Lebesgue measurable subsets and with Lebesgue measure $\lambda$). Both are absolutely continuous: $X$ has density function $g$ given by $g(x) = 1/3$ for $x \in \{0, 1, 2\}$, and $Y$ has density function $h$ given by $h(y) = 1 / 2$ for $y \in [0, 2]$. So really, the proper measure space on $S$ is the product measure space formed from these two spaces. Relative to this product space, $(X, Y)$ has a density $f$ given by $f(x, y) = 1/6$ for $(x, y) \in S$.
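Similarly, a short simulation sketch (in Python) of part (c) of the last exercise, sampling $X$ from $\{0, 1, 2\}$ and $Y$ from $[0, 2]$ independently:
```python
import random

# Monte Carlo sketch of part (c): X uniform on {0, 1, 2} and Y uniform
# on [0, 2], independent; estimate P(Y > X), exactly 1/2.
random.seed(0)
n = 100_000
count = sum(1 for _ in range(n)
            if random.uniform(0.0, 2.0) > random.choice([0, 1, 2]))
print(count / n)  # approximately 0.5
```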
It is also possible to have a continuous distribution on $S \subseteq \R^n$ with $\lambda_n(S) \gt 0$, yet still with no probability density function, a much more interesting situation. We will give a classical construction. Let $(X_1, X_2, \ldots)$ be a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. We will indicate the dependence of the probability measure $\P$ on the parameter $p$ with a subscript. Thus, we have a sequence of independent indicator variables with
$\P_p(X_i = 1) = p, \quad \P_p(X_i = 0) = 1 - p$
We interpret $X_i$ as the $i$th binary digit (bit) of a random variable $X$ taking values in $(0, 1)$. That is, $X = \sum_{i=1}^\infty X_i / 2^i$. Conversely, recall that every number $x \in (0, 1)$ can be written in binary form as $x = \sum_{i=1}^\infty x_i / 2^i$ where $x_i \in \{0, 1\}$ for each $i \in \N_+$. This representation is unique except when $x$ is a binary rational of the form $x = k / 2^n$ for $n \in \N_+$ and $k \in \{1, 3, \ldots, 2^n - 1\}$. In this case, there are two representations, one in which the bits are eventually 0 and one in which the bits are eventually 1. Note, however, that the set of binary rationals is countable. Finally, note that the uniform distribution on $(0, 1)$ is the same as Lebesgue measure on $(0, 1)$.
$X$ has a continuous distribution on $(0, 1)$ for every value of the parameter $p \in (0, 1)$. Moreover,
1. If $p, \, q \in (0, 1)$ and $p \ne q$ then the distribution of $X$ with parameter $p$ and the distribution of $X$ with parameter $q$ are mutually singular.
2. If $p = \frac{1}{2}$, $X$ has the uniform distribution on $(0, 1)$.
3. If $p \ne \frac{1}{2}$, then the distribution of $X$ is singular with respect to Lebesgue measure on $(0, 1)$, and hence has no probability density function in the usual sense.
Proof
If $x \in (0, 1)$ is not a binary rational, then $\P_p(X = x) = \P_p(X_i = x_i \text{ for all } i \in \N_+) = \lim_{n \to \infty} \P_p(X_i = x_i \text{ for } i = 1, 2, \ldots, n) = \lim_{n \to \infty} p^y (1 - p)^{n - y}$ where $y = \sum_{i=1}^n x_i$. Let $q = \max\{p, 1 - p\}$. Then $p^y (1 - p)^{n - y} \le q^n \to 0$ as $n \to \infty$. Hence, $\P_p(X = x) = 0$. If $x \in (0, 1)$ is a binary rational, then there are two bit strings that represent $x$, say $(x_1, x_2, \ldots)$ (with bits eventually 0) and $(y_1, y_2, \ldots)$ (with bits eventually 1). Hence $\P_p(X = x) = \P_p(X_i = x_i \text{ for all } i \in \N_+) + \P_p(X_i = y_i \text{ for all } i \in \N_+)$. But both of these probabilities are 0 by the same argument as before.
Next, we define the set of numbers for which the limiting relative frequency of 1's is $p$. Let $C_p = \left\{ x \in (0, 1): \frac{1}{n} \sum_{i = 1}^n x_i \to p \text{ as } n \to \infty \right\}$. Note that since limits are unique, $C_p \cap C_q = \emptyset$ for $p \ne q$. Next, by the strong law of large numbers, $\P_p(X \in C_p) = 1$. Although we have not yet studied the law of large numbers, the basic idea is simple: in a sequence of Bernoulli trials with success probability $p$, the long-term relative frequency of successes is $p$. Thus the distributions of $X$, as $p$ varies from 0 to 1, are mutually singular; that is, as $p$ varies, $X$ takes values with probability 1 in mutually disjoint sets.
Let $F$ denote the distribution function of $X$, so that $F(x) = \P_p(X \le x) = \P_p(X \lt x)$ for $x \in (0, 1)$. If $x \in (0, 1)$ is not a binary rational, then $X \lt x$ if and only if there exists $n \in \N_+$ such that $X_i = x_i$ for $i \in \{1, 2, \ldots, n - 1\}$ and $X_n = 0$ while $x_n = 1$. Hence $\P_{1/2}(X \lt x) = \sum_{n=1}^\infty \frac{x_n}{2^n} = x$. Since the distribution function of a continuous distribution is continuous, it follows that $F(x) = x$ for all $x \in [0, 1]$. This means that $X$ has the uniform distribution on $(0, 1)$. If $p \ne \frac{1}{2}$, the distribution of $X$ and the uniform distribution are mutually singular, so in particular, $X$ does not have a probability density function with respect to Lebesgue measure.
For an application of some of the ideas in this example, see Bold Play in the game of Red and Black.
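Although the distribution of $X$ is singular for $p \ne \frac{1}{2}$ and so has no density, it is easy to sample from approximately, by truncating the bit expansion. The following sketch (in Python, truncating at 50 bits) illustrates how the bit frequencies concentrate near $p$, so that the samples land in the null set $C_p$:
```python
import random

# Sketch of the bit construction: X = sum_i X_i / 2^i with X_i Bernoulli(p),
# truncated to 50 bits. For each sample we also record the relative
# frequency of 1-bits; by the law of large numbers it concentrates near p,
# so for p != 1/2 the samples fall in the Lebesgue-null set C_p.
def sample(p, bits=50):
    xs = [1 if random.random() < p else 0 for _ in range(bits)]
    x = sum(b / 2 ** (i + 1) for i, b in enumerate(xs))
    return x, sum(xs) / bits

random.seed(0)
for p in (0.5, 0.3):
    freqs = [sample(p)[1] for _ in range(10_000)]
    print(p, sum(freqs) / len(freqs))  # average bit frequency is close to p
```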
Counterexamples
The essential uniqueness of density functions can fail if the underlying positive measure $\mu$ is not $\sigma$-finite. Here is a trivial counterexample:
Suppose that $S$ is a nonempty set and that $\ms{S} = \{S, \emptyset\}$ is the trivial $\sigma$-algebra. Define the positive measure $\mu$ on $(S, \ms{S})$ by $\mu(\emptyset) = 0$, $\mu(S) = \infty$. Let $\nu_c$ denote the measure on $(S, \ms{S})$ with constant density function $c \in \R$ with respect to $\mu$.
1. $(S, \ms{S}, \mu)$ is not $\sigma$-finite.
2. $\nu_c = \mu$ for every $c \in (0, \infty)$.
The Radon-Nikodym theorem can fail if the measure $\mu$ is not $\sigma$-finite, even if $\nu$ is finite. Here are a couple of standard counterexamples:
Suppose that $S$ is an uncountable set and $\ms{S}$ is the $\sigma$-algebra of countable and co-countable sets: $\ms{S} = \{A \subseteq S: A \text{ is countable or } A^c \text{ is countable} \}$ As usual, let $\#$ denote counting measure on $\ms{S}$, and define $\nu$ on $\ms{S}$ by $\nu(A) = 0$ if $A$ is countable and $\nu(A) = 1$ if $A^c$ is countable. Then
1. $(S, \ms{S}, \#)$ is not $\sigma$-finite.
2. $\nu$ is a finite, positive measure on $(S, \ms{S})$.
3. $\nu$ is absolutely continuous with respect to $\#$.
4. $\nu$ does not have a density function with respect to $\#$.
Proof
1. Recall that a countable union of countable sets is countable, and so $S$ cannot be written as such a union.
2. Note that $\nu(\emptyset) = 0$. Suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\ms{S}$. If $A_i$ is countable for every $i \in I$ then $\bigcup_{i \in I} A_i$ is countable. Hence $\nu\left(\bigcup_{i \in I} A_i\right) = 0$ and $\nu(A_i) = 0$ for every $i \in I$. Next suppose that $A_j^c$ and $A_k^c$ are countable for distinct $j, \; k \in I$. Since $A_j \cap A_k = \emptyset$, we have $A_j^c \cup A_k^c = S$. But then $S$ would be countable, which is a contradiction. Hence it is only possible to have $A_j^c$ countable for a single $j \in I$. In this case, $\nu(A_j) = 1$ and $\nu(A_i) = 0$ for $i \ne j$. But also $\left(\bigcup_{i \in I} A_i\right)^c = \bigcap_{i \in I} A_i^c$ is countable, so $\nu\left(\bigcup_{i \in I} A_i\right) = 1$. Hence in all cases, $\nu\left(\bigcup_{i \in I} A_i \right) = \sum_{i \in I} \nu(A_i)$ so $\nu$ is a measure on $(S, \ms{S})$. It is clearly positive and finite.
3. Recall that any measure is absolutely continuous with respect to counting measure, since $\#(A) = 0$ if and only if $A = \emptyset$.
4. Suppose that $\nu$ has density function $f$ with respect to $\#$. Then $0 = \nu\{x\} = \int_{\{x\}} f \, d\# = f(x)$ for every $x \in S$. But then $\nu(S) = \int_S f \, d\# = 0$, which is a contradiction.
Let $\ms R$ denote the standard Borel $\sigma$-algebra on $\R$. Let $\#$ and $\lambda$ denote counting measure and Lebesgue measure on $(\R, \ms R)$, respectively. Then
1. $(\R, \ms R, \#)$ is not $\sigma$-finite.
2. $\lambda$ is absolutely continuous with respect to $\#$.
3. $\lambda$ does not have a density function with respect to $\#$.
Proof
1. $\R$ is uncountable and hence cannot be written as a countable union of finite sets.
2. Since $\emptyset$ is the only null set of $\#$, $\lambda \ll \#$.
3. Suppose that $\lambda$ has density function $f$ with respect to $\#$. Then $0 = \lambda\{x\} = \int_{\{x\}} f \, d\# = f(x), \quad x \in \R$ But then also $\lambda(\R) = \int_\R f \, d\# = 0$, a contradiction.
Basic Theory
Our starting point is a positive measure space $(S, \mathscr{S}, \mu)$. That is, $S$ is a set, $\mathscr{S}$ is a $\sigma$-algebra of subsets of $S$, and $\mu$ is a positive measure on $(S, \mathscr{S})$. As usual, the most important special cases are
• Euclidean space: $S$ is a Lebesgue measurable subset of $\R^n$ for some $n \in \N_+$, $\mathscr{S}$ is the $\sigma$-algebra of Lebesgue measurable subsets of $S$, and $\mu = \lambda_n$ is $n$-dimensional Lebesgue measure.
• Discrete space: $S$ is a countable set, $\mathscr{S} = \mathscr P(S)$ is the collection of all subsets of $S$, and $\mu = \#$ is counting measure.
• Probability space: $S$ is the set of outcomes of a random experiment, $\mathscr{S}$ is the $\sigma$-algebra of events, and $\mu = \P$ is a probability measure.
In previous sections, we defined the integral of certain measurable functions $f: S \to \R$ with respect to $\mu$, and we studied properties of the integral. In this section, we will study vector spaces of functions that are defined in terms of certain integrability conditions. These function spaces are of fundamental importance in all areas of analysis, including probability. In particular, the results of this section will reappear in the form of spaces of random variables in our study of expected value.
Definitions and Basic Properties
Consider a statement on the elements of $S$, for example an equation or an inequality with $x \in S$ as a free variable. (Technically such a statement is a predicate on $S$.) For $A \in \mathscr{S}$, we say that the statement holds on $A$ if it is true for every $x \in A$. We say that the statement holds almost everywhere on $A$ (with respect to $\mu$) if there exists $B \in \mathscr{S}$ with $B \subseteq A$ such that the statement holds on $B$ and $\mu(A \setminus B) = 0$.
Measurable functions $f, \, g: S \to \R$ are equivalent if $f = g$ almost everywhere on $S$, in which case we write $f \equiv g$. The relation $\equiv$ is an equivalence relation on the collection of measurable functions from $S$ to $\R$. That is, if $f, \, g, \, h: S \to \R$ are measurable then
1. $f \equiv f$, the reflexive property.
2. If $f \equiv g$ then $g \equiv f$, the symmetric property.
3. If $f \equiv g$ and $g \equiv h$ then $f \equiv h$, the transitive property.
Thus, equivalent functions are indistinguishable from the point of view of the measure $\mu$. As with any equivalence relation, $\equiv$ partitions the underlying set (in this case the collection of real-valued measurable functions on $S$) into equivalence classes of mutually equivalent elements. As we will see, we often view these equivalence classes as the basic objects of study. Our next task is to define measures of the size of a function; these will become norms in our spaces.
Suppose that $f: S \to \R$ is measurable. For $p \in (0, \infty)$ we define $\|f\|_p = \left(\int_S \left|f\right|^p \, d\mu\right)^{1/p}$ We also define $\|f\|_\infty = \inf\left\{b \in [0, \infty]: |f| \le b \text{ almost everywhere on } S \right\}$.
Since $\left|f\right|^p$ is a nonnegative, measurable function for $p \in (0, \infty)$, $\int_S \left|f\right|^p \, d\mu$ exists in $[0, \infty]$, and hence so does $\|f\|_p$. Clearly $\|f\|_\infty$ also exists in $[0, \infty]$ and is known as the essential supremum of $f$. A number $b \in [0, \infty]$ such that $|f| \le b$ almost everywhere on $S$ is an essential bound of $f$ and so, appropriately enough, the essential supremum of $f$ is the infimum of the essential bounds of $f$. Thus, we have defined $\|f\|_p$ for all $p \in (0, \infty]$. The definition for $p = \infty$ is special, but we will see that it's the appropriate one.
For $p \in (0, \infty]$, let $L^p$ denote the collection of measurable functions $f: S \to \R$ such that $\|f\|_p \lt \infty$.
So for $p \in (0, \infty)$, $f \in L^p$ if and only if $|f|^p$ is integrable. The symbol $L$ is in honor of Henri Lebesgue, who first developed the theory. If we want to indicate the dependence on the underlying measure space, we write $L^p(S, \mathscr{S}, \mu)$. Of course, $L^1$ is simply the collection of functions that are integrable with respect to $\mu$. Our goal is to study the spaces $L^p$ for $p \in (0, \infty]$. We start with some simple properties.
Suppose that $f: S \to \R$ is measurable. Then for $p \in (0, \infty]$,
1. $\|f\|_p \ge 0$
2. $\|f\|_p = 0$ if and only if $f = 0$ almost everywhere on $S$, so that $f \equiv 0$.
Proof
1. This is obvious from the definitions.
2. For $p \in (0, \infty)$, this follows from properties of the integral that we already have. First of course, $\int_S 0^p \, d\mu = \int_S 0 \, d\mu = 0$ so $\|0\|_p = 0$. Conversely if $\|f\|_p = 0$ then $\int_S \left|f\right|^p \, d\mu = 0$ and hence $\left|f\right|^p = 0$ almost everywhere on $S$ and so $f = 0$ almost everywhere on $S$. Suppose $p = \infty$. Clearly $\|0\|_\infty = 0$. Conversely suppose that $\|f\|_\infty = 0$. Then for each $n \in \N_+$ there exists $b_n \in [0, \infty)$ with $b_n \to 0$ as $n \to \infty$ and $|f| \le b_n$ almost everywhere on $S$. Hence $f = 0$ almost everywhere on $S$.
Suppose that $f: S \to \R$ is measurable and $c \in \R$. Then $\|c f\|_p = |c| \|f\|_p$ for $p \in (0, \infty]$.
Proof
Again, when $p \in (0, \infty)$, this result follows easily from properties of the integral that we already have: $\int_S \left|c f\right|^p \, d\mu = \left|c\right|^p \int_S \left|f\right|^p \, d\mu$ Taking the $p$th root of both sides gives the result. For $p = \infty$, the result is trivially true if $c = 0$. For $c \ne 0$, note that $b \in [0, \infty]$ is an essential bound of $|f|$ if and only if $|c| b$ is an essential bound of $|c f|$.
In particular, if $f \in L^p$ and $c \in \R$ then $c f \in L^p$.
Conjugate Indices and Hölder's inequality
Certain pairs of our function spaces turn out to be dual or complimentary to one another in a sense. To understand this, we need the following definition.
Indices $p, \, q \in (1, \infty)$ are said to be conjugate if $1/p + 1/q = 1$. In addition, $1$ and $\infty$ are conjugate indices.
For justification of the last case, note that if $p \in (1, \infty)$, then the index conjugate to $p$ is $q = \frac{1}{1 - 1/p}$ and $q \uparrow \infty$ as $p \downarrow 1$. Note that $p = q = 2$ are conjugate indices, and this is the only case where the indices are the same. Ultimately, the importance of conjugate indices stems from the following inequality:
If $x, \, y \in (0, \infty)$ and if $p, \, q \in (1, \infty)$ are conjugate indices, then $x y \le \frac{1}{p} x^p + \frac{1}{q} y^q$ Moreover, equality occurs if and only if $x^p = y^q$.
Proof 1
From properties of the natural logarithm function, $\ln(x y) = \ln(x) + \ln(y) = \frac{1}{p}\ln\left(x^p\right) + \frac{1}{q} \ln\left(y^q\right)$ But the natural logarithm function is concave and $1/p + 1/q = 1$ so $\ln(x y) = \frac{1}{p}\ln\left(x^p\right) + \frac{1}{q}\ln\left(y^q\right) \le \ln\left(\frac{1}{p} x^p + \frac{1}{q} y^q\right)$ Taking exponentials we have $x y \le \frac{1}{p} x^p + \frac{1}{q} y^q$
Proof 2
Fix $y \in (0, \infty)$ and define $f: (0, \infty) \to \R$ by $f(x) = \frac{1}{p} x^p + \frac{1}{q} y^q - x y, \quad x \in (0, \infty)$ Then $f^\prime(x) = x^{p-1} - y$ and $f^{\prime\prime}(x) = (p - 1) x^{p-2}$ for $x \in (0, \infty)$. Hence $f$ has a single critical point at $x = y^{1/(p-1)} = y^{q/p}$ and $f^{\prime\prime}(x) \gt 0$ for $x \in (0, \infty)$. It follows that the minimum value of $f$ on $(0, \infty)$ occurs at $y^{q/p}$ and $f\left(y^{q/p}\right) = 0$. Hence $f(x) \ge 0$ for $x \in (0, \infty)$ with equality only at $x = y^{q/p}$ (that is, $x^p = y^q$).
Our next major result is Hölder's inequality, named for Otto Hölder, which clearly indicates the importance of conjugate indices.
Suppose that $f, \, g: S \to \R$ are measurable and that $p$ and $q$ are conjugate indices. Then $\|f g\|_1 \le \|f\|_p \|g\|_q$
Proof
The result is obvious if $\|f\|_p = \infty$ or $\|g\|_q = \infty$, so suppose that $f \in L^p$ and $g \in L^q$. For our first case, suppose that $p = 1$ and $q = \infty$. Note that $\left|g\right| \le \|g\|_\infty$ almost everywhere on $S$. Hence $\int_S \left|fg\right| \, d\mu = \int_S \left|f\right| \left|g\right| \, d\mu \le \|g\|_\infty \int_S \left|f\right| \, d\mu = \|f\|_1 \|g\|_\infty$ For the second case, suppose $p, \, q \in (1, \infty)$. By part (b) of the positive property, the result holds if $\|f\|_p = 0$ or $\|g\|_q = 0$, so assume that $\|f\|_p \gt 0$ and $\|g\|_q \gt 0$. By the additivity of the integral over disjoint domains, we can restrict the integrals to the set $\{x \in S: f(x) \ne 0, g(x) \ne 0\}$, or simply assume that $f \ne 0$ and $g \ne 0$ on $S$. From the basic inequality, $\left|f g\right| \le \frac{1}{p} \left|f\right|^p + \frac{1}{q} \left|g\right|^q$ Suppose first that $\|f\|_p = \|g\|_q = 1$. From the increasing and linearity properties of the integral, $\int_S \left|f g\right| \, d\mu \le \frac{1}{p} \int_S \left|f\right|^p \, d\mu + \frac{1}{q} \int_S \left|g\right|^q \, d\mu = \frac{1}{p} + \frac{1}{q} = 1$ For the general case where $\|f\|_p \gt 0$ and $\|g\|_q \gt 0$, let $f_1 = f \big/ \|f\|_p$ and $g_1 = g \big/ \|g\|_q$. Then $\left\|f_1\right\|_p = \left\|g_1\right\|_q = 1$ so $\left\|f_1 g_1 \right\|_1 \le 1$. So by the scaling property, $\left\|f_1 g_1\right\|_1 = \frac{\|f g\|_1}{\|f\|_p \|g\|_q} \le 1$
In particular, if $f \in L^p$ and $g \in L^q$ then $f g \in L^1$. The most important special case of Hölder's inequality is when $p = q = 2$, in which case we have the Cauchy-Schwarz inequality, named for Augustin Louis Cauchy and Karl Hermann Schwarz: $\|f g\|_1 \le \|f\|_2 \|g\|_2$
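Hölder's inequality is easy to test numerically. The following sketch (in Python, with randomly generated finite sequences, so that the underlying measure is counting measure and integrals are sums) checks the inequality for several conjugate pairs:
```python
import random

# Numerical sanity check of Hoelder's inequality with counting measure on
# {0, ..., 19}, so that integrals are sums: ||f g||_1 <= ||f||_p ||g||_q.
def p_norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

random.seed(0)
f = [random.uniform(-1.0, 1.0) for _ in range(20)]
g = [random.uniform(-1.0, 1.0) for _ in range(20)]
for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)  # conjugate index, so 1/p + 1/q = 1
    lhs = sum(abs(a * b) for a, b in zip(f, g))
    print(p, lhs <= p_norm(f, p) * p_norm(g, q))  # True for every p
```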
Minkowski's Inequality
Our next major result is Minkowski's inequality, named for Hermann Minkowski. This inequality will help show that $L^p$ is a vector space and that $\| \cdot \|_p$ is a norm (up to equivalence) when $p \ge 1$.
Suppose that $f, \, g: S \to \R$ are measurable and that $p \in [1, \infty]$. Then $\|f + g\|_p \le \|f\|_p + \|g\|_p$
Proof
Again, the result is trivial if $\|f\|_p = \infty$ or $\|g\|_p = \infty$, so assume that $f, \, g \in L^p$. When $p = 1$, the result is the simple triangle inequality for the integral: $\|f + g\|_1 = \int_S \left|f + g\right| \, d\mu \le \int_S \left(\left|f\right| + \left|g\right|\right) \, d\mu = \int_S \left|f\right| \, d\mu + \int_S \left|g\right| \, d\mu = \|f\|_1 + \|g\|_1$ For the case $p = \infty$, note that if $a \in [0, \infty]$ is an essential bound for $f$ and $b \in [0, \infty]$ is an essential bound for $g$ then $a + b$ is an essential bound for $f + g$. Hence $\|f + g\|_\infty \le \|f\|_\infty + \|g\|_\infty$. For the last case, suppose that $p \in (1, \infty)$ and let $q$ be the index conjugate to $p$. Then $\left|f + g\right|^p = \left|f + g\right|^{p-1} \left|f + g\right| \le \left|f + g\right|^{p-1}\left(\left|f\right| + \left|g\right|\right) = \left|f + g\right|^{p-1}\left|f\right| + \left|f+g\right|^{p-1} \left|g\right|$ Integrating over $S$ and using the additive and increasing properties of the integral gives $\|f + g\|_p^p \le \int_S \left|f + g\right|^{p-1} \left|f\right| \, d\mu + \int_S \left|f + g\right|^{p-1} \left|g\right| \, d\mu$ But by Hölder's inequality, $\int_S \left|f + g\right|^{p-1} \left|f\right| \, d\mu \le \|\left|f+g\right|^{p-1}\|_q \|f\|_p, \; \int_S \left|f + g\right|^{p-1} \left|g\right| \, d\mu \le \|\left|f+g\right|^{p-1}\|_q \|g\|_p$ Combining this with the previous inequality we have $\|f+g\|_p^p \le \|\left|f + g\right|^{p-1}\|_q \left(\|f\|_p + \|g\|_p\right)$ But $(p - 1) q = p$ and $1/q = (p - 1) / p$ so $\|\left|f+g\right|^{p-1}\|_q = \left(\int_S \left|f + g\right|^{(p-1)q} \, d\mu \right)^{1/q} = \left(\int_S \left|f + g\right|^p \, d\mu\right)^{(p - 1)/p} = \|f + g\|_p^{p-1}$ Hence we have $\|f+g\|_p^p \le \|f + g\|_p^{p-1} \left(\|f\|_p + \|g\|_p\right)$. Note that $\|f + g\|_p \lt \infty$, since $\left|f + g\right|^p \le 2^p \left(\left|f\right|^p + \left|g\right|^p\right)$; and the inequality is trivial if $\|f + g\|_p = 0$. Dividing by $\|f + g\|_p^{p-1}$ therefore gives $\|f + g\|_p \le \|f\|_p + \|g\|_p$.
Vector Spaces
We can now discuss various vector spaces of functions. First, we know from our previous work with measure spaces, that the set $\mathscr{V}$ of all measurable functions $f: S \to \R$ is a vector space under our standard (pointwise) definitions of sum and scalar multiple. The spaces we are studying in this section are subspaces:
$L^p$ is a subspace of $\mathscr{V}$ for every $p \in [1, \infty]$.
Proof
We just need to show that $L^p$ is closed under addition and scalar multiplication. From the positive property, if $f \in L^p$ and $c \in \R$ then $c f \in L^p$. From Minkowski's inequality, if $f, \, g \in L^p$ then $f + g \in L^p$.
However, we usually want to identify functions that are equal almost everywhere on $S$ (with respect to $\mu$). Recalling the equivalence relation $\equiv$ defined above, here are the definitions:
Let $[f]$ denote the equivalence class of $f \in \mathscr{V}$ under the equivalence relation $\equiv$, and let $\mathscr{U} = \left\{[f]: f \in \mathscr{V}\right\}$. If $f, \, g \in \mathscr{V}$ and $c \in \R$ we define
1. $[f] + [g] = [f + g]$
2. $c [f] = [c f]$
Then $\mathscr{U}$ is a vector space.
Proof
We know from our previous work that these definitions are consistent in the sense that they do not depend on the particular representatives of the equivalence classes. That is, if $f_1 \equiv f$ and $g_1 \equiv g$ then $f_1 + g_1 \equiv f + g$ and $c f_1 \equiv c f$. That $\mathscr{U}$ is a vector space then follows from the fact that $\mathscr{V}$ is a vector space.
Now we can define the Lebesgue vector spaces precisely.
For $p \in [1, \infty]$, let $\mathscr{L}^p = \left\{[f]: f \in L^p\right\}$. For $f \in \mathscr{V}$ define $\left\|[f]\right\|_p = \|f\|_p$. Then $\mathscr{L}^p$ is a subspace of $\mathscr{U}$ and $\| \cdot \|_p$ is a norm on $\mathscr{L}^p$. That is, for $f, g \in L^p$ and $c \in \R$
1. $\|f\|_p \ge 0$ and $\|f\|_p = 0$ if and only if $f \equiv 0$, the positive property
2. $\| c f \|_p = \left|c\right| \|f\|_p$, the scaling property
3. $\|f + g\|_p \le \|f\|_p + \|g\|_p$, the triangle inequality
Proof
That $\mathscr{L}^p$ is a subspace of $\mathscr{U}$ follows immediately from the fact that $L^p$ is a subspace of $\mathscr{V}$. The fact that $\| \cdot \|_p$ is a norm on $\mathscr{L}^p$ also follows from our previous work.
We have stated these results precisely, but on the other hand, we don't want to be overly pedantic. It's more natural and intuitive to simply work with the space $\mathscr{V}$ and the subspaces $L^p$ for $p \in [1, \infty]$, and just remember that functions that are equal almost everywhere on $S$ are regarded as the same vector. This will be our point of view for the rest of this section.
Every norm on a vector space naturally leads to a metric. That is, we measure the distance between vectors as the norm of their difference. Stated in terms of the norm $\| \cdot \|_p$, here are the properties of the metric on $L^p$.
For $f, \, g, \, h \in L^p$,
1. $\|f - g\|_p \ge 0$ and $\|f - g\|_p = 0$ if and only if $f \equiv g$, the positive property
2. $\|f - g\|_p = \|g - f\|_p$, the symmetric property
3. $\|f - h\|_p \le \|f - g\|_p + \|g - h\|_p$, the triangle inequality
Once we have a metric, we naturally have a criterion for convergence.
Suppose that $f_n \in L^p$ for $n \in \N_+$ and $f \in L^p$. Then by definition, $f_n \to f$ as $n \to \infty$ in $L^p$ if and only if $\|f_n - f\|_p \to 0$ as $n \to \infty$.
Limits are unique, up to equivalence. (That is, limits are unique in $\mathscr{L}^p$.)
Suppose again that $f_n \in L^p$ for $n \in \N_+$. Recall that this sequence is said to be a Cauchy sequence if for every $\epsilon \gt 0$ there exists $N \in \N_+$ such that if $n \gt N$ and $m \gt N$ then $\|f_n - f_m\|_p \lt \epsilon$. Needless to say, the Cauchy criterion is named for our ubiquitous friend Augustin Cauchy. A metric space in which every Cauchy sequence converges (to an element of the space) is said to be complete. Intuitively, one expects a Cauchy sequence to converge, so a complete space is literally one that is not missing any elements that should be there. A complete, normed vector space is called a Banach space, after the Polish mathematician Stefan Banach. Banach spaces are of fundamental importance in analysis, in large part because of the following result:
$L^p$ is a Banach space for every $p \in [1, \infty]$.
The Space $L^2$
The norm $\| \cdot \|_2$ is special because it corresponds to an inner product.
For $f, \, g \in L^2$, define $\langle f, g \rangle = \int_S f g \, d\mu$
Note that the integral is well-defined by the Cauchy-Schwarz inequality. As with all of our other definitions, this one is consistent with the equivalence relation. That is, if $f \equiv f_1$ and $g \equiv g_1$ then $f g \equiv f_1 g_1$ so $\int_S f g \, d\mu = \int_S f_1 g_1 \, d\mu$ and hence $\langle f, g \rangle = \langle f_1, g_1 \rangle$. Note also that $\langle f, f \rangle = \|f\|_2^2$ for $f \in L^2$, so this definition generates the 2-norm.
$L^2$ is an inner product space. That is, if $f, \, g, \, h \in L^2$ and $c \in \R$ then
1. $\langle f, f \rangle \ge 0$ and $\langle f, f \rangle = 0$ if and only if $f \equiv 0$, the positive property
2. $\langle f, g \rangle = \langle g, f \rangle$, the symmetric property
3. $\langle c f, g \rangle = c \langle f, g \rangle$, the scaling property
4. $\langle f + g, h \rangle = \langle f, h \rangle + \langle g, h \rangle$, the additive property
Proof
Part (a) is a restatement of the positive property of the norm $\| \cdot \|_2$. Part (b) is obvious and parts (c) and (d) follow from the linearity of the integral.
From parts (c) and (d), the inner product is linear in the first argument, with the second argument fixed. By the symmetric property (b), it follows that the inner product is also linear in the second argument with the first argument fixed. That is, the inner product is bilinear. A complete inner product space is known as a Hilbert space, named for the German mathematician David Hilbert. Thus, the following result follows immediately from the previous two.
$L^2$ is a Hilbert space.
All inner product spaces lead naturally to the concept of orthogonality; $L^2$ is no exception.
Functions $f, \, g \in L^2$ are orthogonal if $\langle f, g \rangle = 0$, in which case we write $f \perp g$. Equivalently $f \perp g$ if $\int_S f g \, d\mu = 0$
Of course, all of the basic theorems of general inner product spaces hold in $L^2$. For example, the following result is the Pythagorean theorem, named of course for Pythagoras.
If $f, \, g \in L^2$ and $f \perp g$ then $\|f + g\|_2^2 = \|f\|_2^2 + \|g\|_2^2$.
Proof
The proof just uses the basic properties of the inner product in (17). No special properties of $L^2$ are used. If $f, \, g \in L^2$ and $f \perp g$ then $\|f + g\|^2 = \langle f + g, f + g \rangle = \langle f, f \rangle + 2 \langle f, g \rangle + \langle g, g \rangle = \| f \|^2 + \| g \|^2$
Examples and Special Cases
Discrete Spaces
Recall again that the measure space $(S, \mathscr S, \#)$ is discrete if $S$ is countable, $\mathscr S = \mathscr P(S)$ is the $\sigma$-algebra of all subsets of $S$, and of course, $\#$ is counting measure. In this case, recall that integrals are sums. The exposition will look more familiar if we use the notation of sequences rather than functions. Thus, let $x: S \to \R$, and denote the value of $x$ at $i \in S$ by $x_i$ rather than $x(i)$. For $p \in [1, \infty)$, the $p$-norm is $\|x\|_p = \left(\sum_{i \in S} \left|x_i\right|^p\right)^{1/p}$ On the other hand, $\|x\|_\infty = \sup\{\left|x_i\right|: i \in S\}$. The only null set for $\#$ is $\emptyset$, so the equivalence relation $\equiv$ is simply equality, and so the spaces $L^p$ and $\mathscr{L}^p$ are the same. For $p \in [1, \infty)$, $x \in L^p$ if and only if $\sum_{i \in S} \left|x_i\right|^p \lt \infty$ When $p \in \N_+$ (as is often the case), this condition means that $\sum_{i \in S} x_i^p$ is absolutely convergent. On the other hand, $x \in L^\infty$ if and only if $x$ is bounded. When $S = \N_+$, the space $L^p$ is often denoted $l^p$. The inner product on $L^2$ is $\langle x, y \rangle = \sum_{i \in S} x_i y_i, \quad x, \, y \in L^2$ When $S = \{1, 2, \ldots, n\}$, $L^2$ is simply the vector space $\R^n$ with the usual addition, scalar multiplication, inner product, and norm that we study in elementary linear algebra. Orthogonal vectors are perpendicular in the usual sense.
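For a finite sequence, the $p$-norms decrease to the supremum norm as $p \to \infty$ (a standard fact, easy to verify, though not stated above). A small numerical sketch (in Python, with an arbitrary vector):
```python
# Sketch: p-norms of a finite sequence under counting measure. For a finite
# sequence the p-norms decrease to the supremum norm as p grows.
x = [3.0, -1.0, 4.0, 1.0, -5.0]
for p in (1, 2, 4, 8, 16, 64):
    print(p, sum(abs(t) ** p for t in x) ** (1.0 / p))
print("sup", max(abs(t) for t in x))  # the p-norms approach 5.0
```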
Probability Spaces
Suppose that $(S, \mathscr S, \P)$ is a probability space, so that $S$ is the set of outcomes of a random experiment, $\mathscr{S}$ is the $\sigma$-algebra of events, and $\P$ is a probability measure on the sample space $(S, \mathscr{S})$. Of course, a measurable function $X: S \to \R$ is simply a real-valued random variable. For $p \in [1, \infty)$, the integral $\int_S \left|X\right|^p \, d\P$ is the expected value of $\left|X\right|^p$, and is denoted $\E\left(\left|X\right|^p\right)$. Thus in this case, $L^p$ is the collection of real-valued random variables $X$ with $\E\left(\left|X\right|^p\right) \lt \infty$. We will study these spaces in more detail in the chapter on expected value.
Expected value is one of the fundamental concepts in probability, in a sense more general than probability itself. The expected value of a real-valued random variable gives a measure of the center of the distribution of the variable. More importantly, by taking the expected value of various functions of a general random variable, we can measure many interesting features of its distribution, including spread, skewness, kurtosis, and correlation. Generating functions are certain types of expected value that completely determine the distribution of the variable. Conditional expected value, which incorporates known information into the computation, is also of fundamental importance.
In the advanced topics, we define expected value as an integral with respect to the underlying probability measure. We also revisit conditional expected value from a measure-theoretic point of view. We study vector spaces of random variables with certain expected values as the norms of the spaces, which in turn leads to modes of convergence for random variables.
04: Expected Value
Expected value is one of the most important concepts in probability. The expected value of a real-valued random variable gives the center of the distribution of the variable, in a special sense. Additionally, by computing expected values of various real transformations of a general random variable, we can extract a number of interesting characteristics of the distribution of the variable, including measures of spread, symmetry, and correlation. In a sense, expected value is a more general concept than probability itself.
Basic Concepts
Definitions
As usual, we start with a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. In the following definitions, we assume that $X$ is a random variable for the experiment, taking values in $S \subseteq \R$.
If $X$ has a discrete distribution with probability density function $f$ (so that $S$ is countable), then the expected value of $X$ is defined as follows (assuming that the sum is well defined): $\E(X) = \sum_{x \in S} x f(x)$
The sum defining the expected value makes sense if either the sum over the positive $x \in S$ is finite or the sum over the negative $x \in S$ is finite (or both). This ensures that the entire sum exists (as an extended real number) and does not depend on the order of the terms. So as we will see, it's possible for $\E(X)$ to be a real number or $\infty$ or $-\infty$ or to simply not exist. Of course, if $S$ is finite the expected value always exists as a real number.
If $X$ has a continuous distribution with probability density function $f$ (and so $S$ is typically an interval or a union of disjoint intervals), then the expected value of $X$ is defined as follows (assuming that the integral is well defined): $\E(X) = \int_S x f(x) \, dx$
The probability density functions in basic applied probability that describe continuous distributions are piecewise continuous. So the integral above makes sense if the integral over positive $x \in S$ is finite or the integral over negative $x \in S$ is finite (or both). This ensures that the entire integral exists (as an extended real number). So as in the discrete case, it's possible for $\E(X)$ to exist as a real number or as $\infty$ or as $-\infty$ or to not exist at all. As you might guess, the definition for a mixed distribution is a combination of the definitions for the discrete and continuous cases.
Suppose that $X$ has a mixed distribution, with partial discrete density $g$ on $D$ and partial continuous density $h$ on $C$, where $D$ and $C$ are disjoint, $D$ is countable, $C$ is typically an interval, and $S = D \cup C$. Then the expected value of $X$ is defined as follows (assuming that the expression on the right is well defined): $\E(X) = \sum_{x \in D} x g(x) + \int_C x h(x) \, dx$
For the expected value above to make sense, the sum must be well defined, as in the discrete case, the integral must be well defined, as in the continuous case, and we must avoid the dreaded indeterminate form $\infty - \infty$. In the next section on additional properties, we will see that the various definitions given here can be unified into a single definition that works regardless of the type of distribution of $X$. An even more general definition is given in the advanced section on expected value as an integral.
Interpretation
The expected value of $X$ is also called the mean of the distribution of $X$ and is frequently denoted $\mu$. The mean is the center of the probability distribution of $X$ in a special sense. Indeed, if we think of the distribution as a mass distribution (with total mass 1), then the mean is the center of mass as defined in physics. The two pictures below show discrete and continuous probability density functions; in each case the mean $\mu$ is the center of mass, the balance point.
Recall the other measures of the center of a distribution that we have studied:
• A mode is any $x \in S$ that maximizes $f$.
• A median is any $x \in \R$ that satisfies $\P(X \lt x) \le \frac{1}{2}$ and $\P(X \le x) \ge \frac{1}{2}$.
To understand expected value in a probabilistic way, suppose that we create a new, compound experiment by repeating the basic experiment over and over again. This gives a sequence of independent random variables $(X_1, X_2, \ldots)$, each with the same distribution as $X$. In statistical terms, we are sampling from the distribution of $X$. The average value, or sample mean, after $n$ runs is $M_n = \frac{1}{n} \sum_{i=1}^n X_i$ Note that $M_n$ is a random variable in the compound experiment. The important fact is that the average value $M_n$ converges to the expected value $\E(X)$ as $n \to \infty$. The precise statement of this is the law of large numbers, one of the fundamental theorems of probability. You will see the law of large numbers at work in many of the simulation exercises given below.
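The law of large numbers is easy to see in a simple simulation. A minimal sketch (in Python), where the basic experiment is rolling a fair die with mean $\frac{7}{2}$:
```python
import random

# Sketch of the law of large numbers: sample means M_n of repeated fair
# die scores drift toward the expected value E(X) = 7/2.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, sum(rolls[:n]) / n)  # approaches 3.5
```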
Extensions
If $a \in \R$ and $n \in \N$, the moment of $X$ about $a$ of order $n$ is defined to be $\E\left[(X - a)^n\right]$ (assuming of course that this expected value exists).
The moments about 0 are simply referred to as moments (or sometimes raw moments). The moments about $\mu$ are the central moments. The second central moment is particularly important, and is studied in detail in the section on variance. In some cases, if we know all of the moments of $X$, we can determine the entire distribution of $X$. This idea is explored in the section on generating functions.
The expected value of a random variable $X$ is based, of course, on the probability measure $\P$ for the experiment. This probability measure could be a conditional probability measure, conditioned on a given event $A \in \mathscr F$ for the experiment (with $\P(A) \gt 0$). The usual notation is $\E(X \mid A)$, and this expected value is computed by the definitions given above, except that the conditional probability density function $x \mapsto f(x \mid A)$ replaces the ordinary probability density function $f$. It is very important to realize that, except for notation, no new concepts are involved. All results that we obtain for expected value in general have analogues for these conditional expected values. On the other hand, we will study a more general notion of conditional expected value in a later section.
Basic Properties
The purpose of this subsection is to study some of the essential properties of expected value. Unless otherwise noted, we will assume that the indicated expected values exist, and that the various sets and functions that we use are measurable. We start with two simple but still essential results.
Simple Variables
First, recall that a constant $c \in \R$ can be thought of as a random variable (on any probability space) that takes only the value $c$ with probability 1. The corresponding distribution is sometimes called point mass at $c$.
If $c$ is a constant random variable, then $\E(c) = c$.
Proof
As a random variable, $c$ has a discrete distribution, so $\E(c) = c \cdot 1 = c$.
Next recall that an indicator variable is a random variable that takes only the values 0 and 1.
If $X$ is an indicator variable then $\E(X) = \P(X = 1)$.
Proof
$X$ is discrete so by definition, $\E(X) = 1 \cdot \P(X = 1) + 0 \cdot \P(X = 0) = \P(X = 1)$.
In particular, if $\bs{1}_A$ is the indicator variable of an event $A$, then $\E\left(\bs{1}_A\right) = \P(A)$, so in a sense, expected value subsumes probability. For a book that takes expected value, rather than probability, as the fundamental starting concept, see the book Probability via Expectation, by Peter Whittle.
Change of Variables Theorem
The expected value of a real-valued random variable gives the center of the distribution of the variable. This idea is much more powerful than might first appear. By finding expected values of various functions of a general random variable, we can measure many interesting features of its distribution.
Thus, suppose that $X$ is a random variable taking values in a general set $S$, and suppose that $r$ is a function from $S$ into $\R$. Then $r(X)$ is a real-valued random variable, and so it makes sense to compute $\E\left[r(X)\right]$ (assuming as usual that this expected value exists). However, to compute this expected value from the definition would require that we know the probability density function of the transformed variable $r(X)$ (a difficult problem, in general). Fortunately, there is a much better way, given by the change of variables theorem for expected value. This theorem is sometimes referred to as the law of the unconscious statistician, presumably because it is so basic and natural that it is often used without the realization that it is a theorem, and not a definition.
If $X$ has a discrete distribution on a countable set $S$ with probability density function $f$, then $\E\left[r(X)\right] = \sum_{x \in S} r(x) f(x)$
Proof
Note that $r(X)$ has a discrete distribution on the countable set $T = r(S)$, with probability density function $g$ given by $g(y) = \sum_{x \in r^{-1}\{y\}} f(x)$ for $y \in T$. Since the sets $r^{-1}\{y\}$ for $y \in T$ partition $S$, $\E\left[r(X)\right] = \sum_{y \in T} y \, g(y) = \sum_{y \in T} \sum_{x \in r^{-1}\{y\}} r(x) f(x) = \sum_{x \in S} r(x) f(x)$
The next result is the change of variables theorem when $X$ has a continuous distribution. We will prove the continuous version in stages: first, below, when $r$ has discrete range, and then, in the next section, in full generality. Even though the complete proof is delayed, we will use the change of variables theorem in the proofs of many of the other properties of expected value.
Suppose that $X$ has a continuous distribution on $S \subseteq \R^n$ with probability density function $f$, and that $r: S \to \R$. Then $\E\left[r(X)\right] = \int_S r(x) f(x) \, dx$
Proof when $r$ has discrete range
Suppose that $r$ has countable range $T \subseteq \R$, so that $r(X)$ has a discrete distribution on $T$ with probability density function $g$ given by $g(y) = \P\left[r(X) = y\right] = \int_{r^{-1}\{y\}} f(x) \, dx$ for $y \in T$. The sets $r^{-1}\{y\}$ for $y \in T$ partition $S$, so by the definition of expected value for discrete distributions and the additivity of the integral over a countable disjoint union, $\E\left[r(X)\right] = \sum_{y \in T} y \, g(y) = \sum_{y \in T} \int_{r^{-1}\{y\}} r(x) f(x) \, dx = \int_S r(x) f(x) \, dx$
The results below give basic properties of expected value. These properties are true in general, but we will restrict the proofs primarily to the continuous case. The proofs for the discrete case are analogous, with sums replacing integrals. The change of variables theorem is the main tool we will need. In these theorems $X$ and $Y$ are real-valued random variables for an experiment (that is, defined on an underlying probability space) and $c$ is a constant. As usual, we assume that the indicated expected values exist. Be sure to try the proofs yourself before reading the ones in the text.
Linearity
Our first property is the additive property.
$\E(X + Y) = \E(X) + \E(Y)$
Proof
We apply the change of variables theorem with the function $r(x, y) = x + y$. Suppose that $(X, Y)$ has a continuous distribution with PDF $f$, and that $X$ takes values in $S \subseteq \R$ and $Y$ takes values in $T \subseteq \R$. Recall that $X$ has PDF $g$ given by $g(x) = \int_T f(x, y) \, dy$ for $x \in S$ and $Y$ has PDF $h$ given by $h(y) = \int_S f(x, y) \, dx$ for $y \in T$. Thus \begin{align} \E(X + Y) & = \int_{S \times T} (x + y) f(x, y) \, d(x, y) = \int_{S \times T} x f(x, y) \, d(x, y) + \int_{S \times T} y f(x, y) \, d(x, y) \\ & = \int_S x \left( \int_T f(x, y) \, dy \right) \, dx + \int_T y \left( \int_S f(x, y) \, dx \right) \, dy = \int_S x g(x) \, dx + \int_T y h(y) \, dy = \E(X) + \E(Y) \end{align} Writing the double integrals as iterated integrals is a special case of Fubini's theorem. The proof in the discrete case is the same, with sums replacing integrals.
Our next property is the scaling property.
$\E(c X) = c \, \E(X)$
Proof
We apply the change of variables formula with the function $r(x) = c x$. Suppose that $X$ has a continuous distribution on $S \subseteq \R$ with PDF $f$. Then $\E(c X) = \int_S c \, x f(x) \, dx = c \int_S x f(x) \, dx = c \E(X)$ Again, the proof in the discrete case is the same, with sums replacing integrals.
Here is the linearity of expected value in full generality. It's a simple corollary of the previous two results.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of real-valued random variables defined on the underlying probability space and that $(a_1, a_2, \ldots, a_n)$ is a sequence of constants. Then $\E\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i \E(X_i)$
Thus, expected value is a linear operation on the collection of real-valued random variables for the experiment. The linearity of expected value is so basic that it is important to understand this property on an intuitive level. Indeed, it is implied by the interpretation of expected value given in the law of large numbers.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of real-valued random variables with common mean $\mu$.
1. Let $Y = \sum_{i=1}^n X_i$, the sum of the variables. Then $\E(Y) = n \mu$.
2. Let $M = \frac{1}{n} \sum_{i=1}^n X_i$, the average of the variables. Then $\E(M) = \mu$.
Proof
1. By the additive property, $\E(Y) = \E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \E(X_i) = \sum_{i=1}^n \mu = n \mu$
2. Note that $M = Y / n$. Hence from the scaling property and part (a), $\E(M) = \E(Y) / n = \mu$.
If the random variables in the previous result are also independent and identically distributed, then in statistical terms, the sequence is a random sample of size $n$ from the common distribution, and $M$ is the sample mean.
In several important cases, a random variable from a special distribution can be decomposed into a sum of simpler random variables, and then part (a) of the last theorem can be used to compute the expected value.
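Here is a numerical sketch (in Python) of this idea, using the number of successes in $n$ Bernoulli trials: the sum of $n$ indicator variables, each with mean $p$, so that by the additive property the mean of the sum is $n p$.
```python
import random

# Sketch: the number of successes in n Bernoulli trials is a sum of n
# indicator variables, each with mean p, so its mean is n * p by additivity.
random.seed(0)
n, p, runs = 10, 0.3, 100_000
total = sum(sum(1 for _ in range(n) if random.random() < p)
            for _ in range(runs))
print(total / runs, n * p)  # both close to 3.0
```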
Inequalities
The following exercises give some basic inequalities for expected value. The first, known as the positive property is the most obvious, but is also the main tool for proving the others.
Suppose that $\P(X \ge 0) = 1$. Then
1. $\E(X) \ge 0$
2. If $\P(X \gt 0) \gt 0$ then $\E(X) \gt 0$.
Proof
1. This result follows from the definition, since we can take the set of values $S$ of $X$ to be a subset of $[0, \infty)$.
2. Suppose that $\P(X \gt 0) \gt 0$ (in addition to $\P(X \ge 0) = 1$). By the continuity theorem for increasing events, there exists $\epsilon \gt 0$ such that $\P(X \ge \epsilon) \gt 0$. Therefore $X - \epsilon \bs{1}(X \ge \epsilon) \ge 0$ (with probability 1). By part (a), linearity, and Theorem 2, $\E(X) - \epsilon \P(X \ge \epsilon) \ge 0$ so $\E(X) \ge \epsilon \P(X \ge \epsilon) \gt 0$.
Next is the increasing property, perhaps the most important property of expected value, after linearity.
Suppose that $\P(X \le Y) = 1$. Then
1. $\E(X) \le \E(Y)$
2. If $\P(X \lt Y) \gt 0$ then $\E(X) \lt \E(Y)$.
Proof
1. The assumption is equivalent to $\P(Y - X \ge 0) = 1$. Thus $\E(Y - X) \ge 0$ by part (a) of the positive property. But then $\E(Y) - \E(X) \ge 0$ by the linearity of expected value.
2. Similarly, this result follows from part (b) of the positive property.
Absolute value inequalities:
1. $\left|\E(X)\right| \le \E\left(\left|X\right|\right)$
2. If $\P(X \gt 0) \gt 0$ and $\P(X \lt 0) \gt 0$ then $\left|\E(X)\right| \lt \E\left(\left|X\right|\right)$.
Proof
1. Note that $-\left|X\right| \le X \le \left|X\right|$ (with probability 1) so by part (a) of the increasing property, $\E\left(-\left|X\right|\right) \le \E(X) \le \E\left(\left|X\right|\right)$. By linearity, $-\E\left(\left|X\right|\right) \le \E(X) \le \E\left(\left|X\right|\right)$ which implies $\left|\E(X)\right| \le \E\left(\left|X\right|\right)$.
2. If $\P(X \gt 0) \gt 0$ then $\P\left(-\left|X\right| \lt X\right) \gt 0$, and if $\P(X \lt 0) \gt 0$ then $\P\left(X \lt \left|X\right|\right) \gt 0$. Hence by part (b) of the increasing property, $-\E\left(\left|X\right|\right) \lt \E(X) \lt \E\left(\left|X\right|\right)$ and therefore $\left|\E(X)\right| \lt \E\left(\left|X\right|\right)$.
Only in Lake Woebegone are all of the children above average:
If $\P\left[X \ne \E(X)\right] \gt 0$ then
1. $\P\left[X \gt \E(X)\right] \gt 0$
2. $\P\left[X \lt \E(X)\right] \gt 0$
Proof
1. We prove the contrapositive. Thus suppose that $\P\left[X \gt \E(X)\right] = 0$ so that $\P\left[X \le \E(X)\right] = 1$. If $\P\left[X \lt \E(X)\right] \gt 0$ then by the increasing property we have $\E(X) \lt \E(X)$, a contradiction. Thus $\P\left[X = \E(X)\right] = 1$.
2. Similarly, if $\P\left[X \lt \E(X)\right] = 0$ then $\P\left[X = \E(X)\right] = 1$.
Thus, if $X$ is not a constant (with probability 1), then $X$ must take values greater than its mean with positive probability and values less than its mean with positive probability.
Symmetry
Again, suppose that $X$ is a random variable taking values in $\R$. The distribution of $X$ is symmetric about $a \in \R$ if the distribution of $a - X$ is the same as the distribution of $X - a$.
Suppose that the distribution of $X$ is symmetric about $a \in \R$. If $\E(X)$ exists, then $\E(X) = a$.
Proof
By assumption, the distribution of $X - a$ is the same as the distribution of $a - X$. Since $\E(X)$ exists we have $\E(a - X) = \E(X - a)$ so by linearity $a - \E(X) = \E(X) - a$. Equivalently $2 \E(X) = 2 a$.
The previous result applies if $X$ has a continuous distribution on $\R$ with a probability density $f$ that is symmetric about $a$; that is, $f(a + x) = f(a - x)$ for $x \in \R$.
Independence
If $X$ and $Y$ are independent real-valued random variables then $\E(X Y) = \E(X) \E(Y)$.
Proof
Suppose that $X$ has a continuous distribution on $S \subseteq \R$ with PDF $g$ and that $Y$ has a continuous distribution on $T \subseteq \R$ with PDF $h$. Then $(X, Y)$ has PDF $f(x, y) = g(x) h(y)$ on $S \times T$. We apply the change of variables theorem with the function $r(x, y) = x y$. $\E(X Y) = \int_{S \times T} x y f(x, y) \, d(x, y) = \int_{S \times T} x y g(x) h(y) \, d(x, y) = \int_S x g(x) \, dx \int_T y h(y) \, dy = \E(X) \E(Y)$ The proof in the discrete case is similar with sums replacing integrals.
It follows from the last result that independent random variables are uncorrelated (a concept that we will study in a later section). Moreover, this result is more powerful than might first appear. Suppose that $X$ and $Y$ are independent random variables taking values in general spaces $S$ and $T$ respectively, and that $u: S \to \R$ and $v: T \to \R$. Then $u(X)$ and $v(Y)$ are independent, real-valued random variables and hence $\E\left[u(X) v(Y)\right] = \E\left[u(X)\right] \E\left[v(Y)\right]$
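A quick numerical sketch (in Python) of the product property, with $X$ and $Y$ independent and uniform on $(0, 1)$, so that $\E(X) \E(Y) = \frac{1}{4}$:
```python
import random

# Sketch of the product property for independent variables: X, Y
# independent and uniform on (0, 1), so E(X Y) = E(X) E(Y) = 1/4.
random.seed(0)
n = 100_000
print(sum(random.random() * random.random() for _ in range(n)) / n)  # ~0.25
```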
Examples and Applications
As always, be sure to try the proofs and computations yourself before reading the proof and answers in the text.
Uniform Distributions
Discrete uniform distributions are widely used in combinatorial probability, and model a point chosen at random from a finite set.
Suppose that $X$ has the discrete uniform distribution on a finite set $S \subseteq \R$.
1. $\E(X)$ is the arithmetic average of the numbers in $S$.
2. If the points in $S$ are evenly spaced with endpoints $a, \, b$, then $\E(X) = \frac{a + b}{2}$, the average of the endpoints.
Proof
1. Let $n = \#(S)$, the number of points in $S$. Then $X$ has PDF $f(x) = 1 / n$ for $x \in S$ so $\E(X) = \sum_{x \in S} x \frac{1}{n} = \frac{1}{n} \sum_{x \in S} x$
2. Suppose that $S = \{a, a + h, a + 2 h, \ldots, a + (n - 1) h\}$ and let $b = a + (n - 1) h$, the right endpoint. As in (a), $S$ has $n$ points so using (a) and the formula for the sum of the first $n - 1$ positive integers, we have $\E(X) = \frac{1}{n} \sum_{i=0}^{n-1} (a + i h) = \frac{1}{n}\left(n a + h \frac{(n - 1) n}{2}\right) = a + \frac{(n - 1) h}{2} = \frac{a + b}{2}$
The previous results are easy to see if we think of $\E(X)$ as the center of mass, since the discrete uniform distribution corresponds to a finite set of points with equal mass.
Open the special distribution simulator, and select the discrete uniform distribution. This is the uniform distribution on $n$ points, starting at $a$, evenly spaced at distance $h$. Vary the parameters and note the location of the mean in relation to the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean to the distribution mean.
Next, recall that the continuous uniform distribution on a bounded interval corresponds to selecting a point at random from the interval. Continuous uniform distributions arise in geometric probability and a variety of other applied problems.
Suppose that $X$ has the continuous uniform distribution on an interval $[a, b]$, where $a, \, b \in \R$ and $a \lt b$.
1. $\E(X) = \frac{a + b}{2}$, the midpoint of the interval.
2. $\E\left(X^n\right) = \frac{1}{n + 1}\left(a^n + a^{n-1} b + \cdots + a b^{n-1} + b^n\right)$ for $n \in \N$.
Proof
1. Recall that $X$ has PDF $f(x) = \frac{1}{b - a}$. Hence $\E(X) = \int_a^b x \frac{1}{b - a} \, dx = \frac{1}{b - a} \frac{b^2 - a^2}{2} = \frac{a + b}{2}$
2. By the change of variables formula, $\E\left(X^n\right) = \int_a^b \frac{1}{b - a} x^n \, dx = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)} = \frac{1}{n + 1}\left(a^n + a^{n-1} b + \cdots + a b^{n-1} + b^n\right)$
Part (a) is easy to see if we think of the mean as the center of mass, since the uniform distribution corresponds to a uniform distribution of mass on the interval.
Open the special distribution simulator, and select the continuous uniform distribution. This is the uniform distribution on the interval $[a, a + w]$. Vary the parameters and note the location of the mean in relation to the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean to the distribution mean.
Next, the average value of a function on an interval, as defined in calculus, has a nice interpretation in terms of the uniform distribution.
Suppose that $X$ is uniformly distributed on the interval $[a, b]$, and that $g$ is an integrable function from $[a, b]$ into $\R$. Then $\E\left[g(X)\right]$ is the average value of $g$ on $[a, b]$: $\E\left[g(X)\right] = \frac{1}{b - a} \int_a^b g(x) dx$
Proof
This result follows immediately from the change of variables theorem, since $X$ has PDF $f(x) = 1 / (b - a)$ for $a \le x \le b$.
Find the average value of the following functions on the given intervals:
1. $f(x) = x$ on $[2, 4]$
2. $g(x) = x^2$ on $[0, 1]$
3. $h(x) = \sin(x)$ on $[0, \pi]$.
Answer
1. $3$
2. $\frac{1}{3}$
3. $\frac{2}{\pi}$
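The answers above are easy to confirm by Monte Carlo, using the interpretation of the average value as $\E[g(X)]$ with $X$ uniform on the interval. Here is a sketch for part (c) (NumPy assumed; the sample size is arbitrary):
```python
import numpy as np

# Average value of sin on [0, pi] as E[sin(X)] with X uniform on [0, pi]
rng = np.random.default_rng(0)
x = rng.uniform(0.0, np.pi, size=100_000)
print("Monte Carlo:", np.sin(x).mean())   # should be close to 2 / pi
print("exact:      ", 2 / np.pi)
```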
The next exercise illustrates the value of the change of variables theorem in computing expected values.
Suppose that $X$ is uniformly distributed on $[-1, 3]$.
1. Give the probability density function of $X$.
2. Find the probability density function of $X^2$.
3. Find $E\left(X^2\right)$ using the probability density function in (b).
4. Find $\E\left(X^2\right)$ using the change of variables theorem.
Answer
1. $f(x) = \frac{1}{4}$ for $-1 \le x \le 3$
2. $g(y) = \begin{cases} \frac{1}{4} y^{-1/2}, & 0 \lt y \lt 1 \\ \frac{1}{8} y^{-1/2}, & 1 \lt y \lt 9 \end{cases}$
3. $\int_0^9 y g(y) \, dy = \frac{7}{3}$
4. $\int_{-1}^3 x^2 f(x) \, dx = \frac{7}{3}$
The discrete uniform distribution and the continuous uniform distribution are studied in more detail in the chapter on Special Distributions.
Dice
Recall that a standard die is a six-sided die. A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 have probability $\frac{1}{4}$ each, and faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each.
Two standard, fair dice are thrown, and the scores $(X_1, X_2)$ recorded. Find the expected value of each of the following variables.
1. $Y = X_1 + X_2$, the sum of the scores.
2. $M = \frac{1}{2} (X_1 + X_2)$, the average of the scores.
3. $Z = X_1 X_2$, the product of the scores.
4. $U = \min\{X_1, X_2\}$, the minimum score
5. $V = \max\{X_1, X_2\}$, the maximum score.
Answer
1. $7$
2. $\frac{7}{2}$
3. $\frac{49}{4}$
4. $\frac{91}{36}$
5. $\frac{161}{36}$
In the dice experiment, select two fair dice. Note the shape of the probability density function and the location of the mean for the sum, minimum, and maximum variables. Run the experiment 1000 times and compare the sample mean and the distribution mean for each of these variables.
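A simulation along the lines of the dice experiment can be sketched as follows (NumPy assumed; 1000 throws, with the seed an arbitrary choice):
```python
import numpy as np

# Two standard, fair dice: empirical means of the sum, minimum, and maximum
rng = np.random.default_rng(0)
x1 = rng.integers(1, 7, size=1000)   # scores of the first die
x2 = rng.integers(1, 7, size=1000)   # scores of the second die

print("sum:", (x1 + x2).mean(), "vs", 7)
print("min:", np.minimum(x1, x2).mean(), "vs", 91 / 36)
print("max:", np.maximum(x1, x2).mean(), "vs", 161 / 36)
```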
Two standard, ace-six flat dice are thrown, and the scores $(X_1, X_2)$ recorded. Find the expected value of each of the following variables.
1. $Y = X_1 + X_2$, the sum of the scores.
2. $M = \frac{1}{2} (X_1 + X_2)$, the average of the scores.
3. $Z = X_1 X_2$, the product of the scores.
4. $U = \min\{X_1, X_2\}$, the minimum score
5. $V = \max\{X_1, X_2\}$, the maximum score.
Answer
1. $7$
2. $\frac{7}{2}$
3. $\frac{49}{4}$
4. $\frac{77}{32}$
5. $\frac{147}{32}$
In the dice experiment, select two ace-six flat dice. Note the shape of the probability density function and the location of the mean for the sum, minimum, and maximum variables. Run the experiment 1000 times and compare the sample mean and the distribution mean for each of these variables.
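For the flat dice, weighted sampling is the only change needed. A sketch (NumPy assumed), using the face probabilities given above:
```python
import numpy as np

# Two ace-six flat dice, sampled with the stated face probabilities
rng = np.random.default_rng(0)
faces = np.arange(1, 7)
probs = [1/4, 1/8, 1/8, 1/8, 1/8, 1/4]
x1 = rng.choice(faces, size=1000, p=probs)
x2 = rng.choice(faces, size=1000, p=probs)

print("min:", np.minimum(x1, x2).mean(), "vs", 77 / 32)
print("max:", np.maximum(x1, x2).mean(), "vs", 147 / 32)
```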
Bernoulli Trials
Recall that a Bernoulli trials process is a sequence $\bs{X} = (X_1, X_2, \ldots)$ of independent, identically distributed indicator random variables. In the usual language of reliability, $X_i$ denotes the outcome of trial $i$, where 1 denotes success and 0 denotes failure. The probability of success $p = \P(X_i = 1) \in [0, 1]$ is the basic parameter of the process. The process is named for Jacob Bernoulli. A separate chapter on the Bernoulli Trials explores this process in detail.
For $n \in \N_+$, the number of successes in the first $n$ trials is $Y = \sum_{i=1}^n X_i$. Recall that this random variable has the binomial distribution with parameters $n$ and $p$, and has probability density function $f$ given by $f(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\}$
If $Y$ has the binomial distribution with parameters $n$ and $p$ then $\E(Y) = n p$
Proof from the definition
The critical tools that we need involve binomial coefficients: the identity $y \binom{n}{y} = n \binom{n - 1}{y - 1}$ for $y, \, n \in \N_+$, and the binomial theorem: \begin{align} \E(Y) & = \sum_{y=0}^n y \binom{n}{y} p^y (1 - p)^{n-y} = \sum_{y=1}^n n \binom{n - 1}{y - 1} p^y (1 - p)^{n-y} \\ & = n p \sum_{y=1}^{n} \binom{n - 1}{y - 1} p^{y-1}(1 - p)^{(n-1) - (y - 1)} = n p [p + (1 - p)]^{n-1} = n p \end{align}
Proof using the additive property
Since $Y = \sum_{i=1}^n X_i$, the result follows immediately from the expected value of an indicator variable and the additive property, since $\E(X_i) = p$ for each $i \in \N_+$.
Note the superiority of the second proof to the first. The result also makes intuitive sense: in $n$ trials with success probability $p$, we expect $n p$ successes.
In the binomial coin experiment, vary $n$ and $p$ and note the shape of the probability density function and the location of the mean. For selected values of $n$ and $p$, run the experiment 1000 times and compare the sample mean to the distribution mean.
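A quick check of $\E(Y) = n p$ by simulation (NumPy assumed; the parameter values are arbitrary):
```python
import numpy as np

# Number of successes in n Bernoulli trials with success probability p
rng = np.random.default_rng(0)
n, p = 20, 0.3
sample = rng.binomial(n, p, size=1000)
print("empirical mean:   ", sample.mean())
print("distribution mean:", n * p)
```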
Suppose that $p \in (0, 1]$, and let $N$ denote the trial number of the first success. This random variable has the geometric distribution on $\N_+$ with parameter $p$, and has probability density function $g$ given by $g(n) = p (1 - p)^{n-1}, \quad n \in \N_+$
If $N$ has the geometric distribution on $\N_+$ with parameter $p \in (0, 1]$ then $\E(N) = 1 / p$.
Proof
The key is the formula for the derivative of a geometric series: $\E(N) = \sum_{n=1}^\infty n p (1 - p)^{n-1} = -p \frac{d}{dp} \sum_{n=0}^\infty (1 - p)^n = -p \frac{d}{dp} \frac{1}{p} = p \frac{1}{p^2} = \frac{1}{p}$
Again, the result makes intuitive sense. Since $p$ is the probability of success, we expect a success to occur after $1 / p$ trials.
In the negative binomial experiment, select $k = 1$ to get the geometric distribution. Vary $p$ and note the shape of the probability density function and the location of the mean. For selected values of $p$, run the experiment 1000 times and compare the sample mean to the distribution mean.
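Similarly, $\E(N) = 1 / p$ can be checked by simulation. A sketch (NumPy's geometric sampler counts the trial number of the first success, matching the distribution on $\N_+$ used here):
```python
import numpy as np

# Trial number of the first success in Bernoulli trials
rng = np.random.default_rng(0)
p = 0.2
sample = rng.geometric(p, size=1000)   # supported on {1, 2, ...}
print("empirical mean:   ", sample.mean())
print("distribution mean:", 1 / p)
```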
The Hypergeometric Distribution
Suppose that a population consists of $m$ objects; $r$ of the objects are type 1 and $m - r$ are type 0. A sample of $n$ objects is chosen at random, without replacement. The parameters $m, \, r, \, n \in \N$ with $r \le m$ and $n \le m$. Let $X_i$ denote the type of the $i$th object selected. Recall that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of identically distributed (but not independent) indicator random variables with $\P(X_i = 1) = r / m$ for each $i \in \{1, 2, \ldots, n\}$.
Let $Y$ denote the number of type 1 objects in the sample, so that $Y = \sum_{i=1}^n X_i$. Recall that $Y$ has the hypergeometric distribution, which has probability density function $f$ given by $f(y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
If $Y$ has the hypergeometric distribution with parameters $m$, $n$, and $r$ then $\E(Y) = n \frac{r}{m}$.
Proof from the definition
Using the hypergeometric PDF, $\E(Y) = \sum_{y=0}^n y \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}$ Note that the $y = 0$ term is 0. For the other terms, we can use the identity $y \binom{r}{y} = r \binom{r-1}{y-1}$ to get $\E(Y) = \frac{r}{\binom{m}{n}} \sum_{y=1}^n \binom{r - 1}{y - 1} \binom{m - r}{n - y}$ But substituting $k = y - 1$ and using another fundamental identity, $\sum_{y=1}^n \binom{r - 1}{y - 1} \binom{m - r}{n - y} = \sum_{k=0}^{n-1} \binom{r - 1}{k} \binom{m - r}{n - 1 - k} = \binom{m - 1}{n - 1}$ So substituting and doing a bit of algebra gives $\E(Y) = n \frac{r}{m}$.
Proof using the additive property
A much better proof uses the additive property and the representation of $Y$ as a sum of indicator variables. The result follows immediately since $\E(X_i) = r / m$ for each $i \in \{1, 2, \ldots, n\}$.
In the ball and urn experiment, vary $n$, $r$, and $m$ and note the shape of the probability density function and the location of the mean. For selected values of the parameters, run the experiment 1000 times and compare the sample mean to the distribution mean.
Note that if we select the objects with replacement, then $\bs{X}$ would be a sequence of Bernoulli trials, and hence $Y$ would have the binomial distribution with parameters $n$ and $p = \frac{r}{m}$. Thus, the mean would still be $\E(Y) = n \frac{r}{m}$.
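The identity $\E(Y) = n r / m$ is easy to check by simulation; a sketch (NumPy's hypergeometric sampler takes the number of type 1 objects, the number of type 0 objects, and the sample size):
```python
import numpy as np

# Number of type 1 objects in a sample of size n, drawn without
# replacement from m objects of which r are type 1
rng = np.random.default_rng(0)
m, r, n = 50, 20, 10
sample = rng.hypergeometric(r, m - r, n, size=1000)
print("empirical mean:   ", sample.mean())
print("distribution mean:", n * r / m)
```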
The Poisson Distribution
Recall that the Poisson distribution has probability density function $f$ given by $f(n) = e^{-a} \frac{a^n}{n!}, \quad n \in \N$ where $a \in (0, \infty)$ is a parameter. The Poisson distribution is named after Simeon Poisson and is widely used to model the number of random points in a region of time or space; the parameter $a$ is proportional to the size of the region. The Poisson distribution is studied in detail in the chapter on the Poisson Process.
If $N$ has the Poisson distribution with parameter $a$ then $\E(N) = a$. Thus, the parameter of the Poisson distribution is the mean of the distribution.
Proof
The proof depends on the standard series for the exponential function
$\E(N) = \sum_{n=0}^\infty n e^{-a} \frac{a^n}{n!} = e^{-a} \sum_{n=1}^\infty \frac{a^n}{(n - 1)!} = e^{-a} a \sum_{n=1}^\infty \frac{a^{n-1}}{(n-1)!} = e^{-a} a e^a = a.$
In the Poisson experiment, the parameter is $a = r t$. Vary the parameter and note the shape of the probability density function and the location of the mean. For various values of the parameter, run the experiment 1000 times and compare the sample mean to the distribution mean.
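A minimal simulation check that the Poisson parameter is the mean (NumPy assumed; the parameter value is arbitrary):
```python
import numpy as np

# Poisson distribution with parameter a
rng = np.random.default_rng(0)
a = 4.0
sample = rng.poisson(a, size=1000)
print("empirical mean:   ", sample.mean())
print("distribution mean:", a)
```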
The Exponential Distribution
Recall that the exponential distribution is a continuous distribution with probability density function $f$ given by $f(t) = r e^{-r t}, \quad t \in [0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other arrival times; in particular, the distribution governs the time between arrivals in the Poisson model. The exponential distribution is studied in detail in the chapter on the Poisson Process.
Suppose that $T$ has the exponential distribution with rate parameter $r$. Then $\E(T) = 1 / r$.
Proof
This result follows from the definition and an integration by parts:
$\E(T) = \int_0^\infty t r e^{-r t} \, dt = -t e^{-r t} \bigg|_0^\infty + \int_0^\infty e^{-r t} \, dt = 0 - \frac{1}{r} e^{-rt} \bigg|_0^\infty = \frac{1}{r}$
Recall that the mode of $T$ is 0 and the median of $T$ is $\ln 2 / r$. Note how these measures of center are ordered: $0 \lt \ln 2 / r \lt 1 / r$
In the gamma experiment, set $n = 1$ to get the exponential distribution. This app simulates the first arrival in a Poisson process. Vary $r$ with the scroll bar and note the position of the mean relative to the graph of the probability density function. For selected values of $r$, run the experiment 1000 times and compare the sample mean to the distribution mean.
Suppose again that $T$ has the exponential distribution with rate parameter $r$ and suppose that $t \gt 0$. Find $\E(T \mid T \gt t)$.
Answer
$t + \frac{1}{r}$
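Both the mean and the conditional mean in the last exercise are easy to check by simulation; the conditional mean reflects the memoryless property of the exponential distribution. A sketch (NumPy parameterizes the exponential distribution by the scale $1 / r$):
```python
import numpy as np

# Exponential distribution with rate r: check E(T) and E(T | T > t)
rng = np.random.default_rng(0)
r, t = 0.5, 2.0
sample = rng.exponential(scale=1 / r, size=100_000)

print("E(T):        ", sample.mean(), "vs", 1 / r)
tail = sample[sample > t]    # restrict to the event T > t
print("E(T | T > t):", tail.mean(), "vs", t + 1 / r)
```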
The Gamma Distribution
Recall that the gamma distribution is a continuous distribution with probability density function $f$ given by $f(t) = r^n \frac{t^{n-1}}{(n - 1)!} e^{-r t}, \quad t \in [0, \infty)$ where $n \in \N_+$ is the shape parameter and $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other arrival times, and in particular, models the $n$th arrival in the Poisson process. Thus it follows that if $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each having the exponential distribution with rate parameter $r$, then $T = \sum_{i=1}^n X_i$ has the gamma distribution with shape parameter $n$ and rate parameter $r$. The gamma distribution is studied in more generality, with non-integer shape parameters, in the chapter on the Special Distributions.
Suppose that $T$ has the gamma distribution with shape parameter $n$ and rate parameter $r$. Then $\E(T) = n / r$.
Proof from the definition
The proof is by induction on $n$, so let $\mu_n$ denote the mean when the shape parameter is $n \in \N_+$. When $n = 1$, we have the exponential distribution with rate parameter $r$, so we know $\mu_1 = 1 / r$ by our result above. Suppose that $\mu_n = n / r$ for a given $n \in \N_+$. Then $\mu_{n+1} = \int_0^\infty t r^{n + 1} \frac{t^n}{n!} e^{-r t} \, dt = \int_0^\infty r^{n+1} \frac{t^{n+1}}{n!} e^{-r t} \, dt$ Integrate by parts with $u = \frac{t^{n+1}}{n!}$, $dv = r^{n+1} e^{-r t} \, dt$ so that $du = (n + 1) \frac{t^n}{n!} \, dt$ and $v = -r^n e^{-r t}$. Then $\mu_{n+1} = (n + 1) \int_0^\infty r^n \frac{t^n}{n!} e^{-r t } \, dt = \frac{n+1}{n} \int_0^\infty t r^n \frac{t^{n-1}}{(n - 1)!} e^{-r t} \, dt$ But the last integral is $\mu_n$, so by the induction hypothesis, $\mu_{n+1} = \frac{n + 1}{n} \frac{n}{r} = \frac{n + 1}{r}$.
Proof using the additive property
The result follows immediately from the additive property and the fact that $T$ can be represented in the form $T = \sum_{i=1}^n X_i$ where $X_i$ has the exponential distribution with parameter $r$ for each $i \in \{1, 2, \ldots, n\}$.
Note again how much easier and more intuitive the second proof is than the first.
Open the gamma experiment, which simulates the arrival times in the Poisson process. Vary the parameters and note the position of the mean relative to the graph of the probability density function. For selected parameter values, run the experiment 1000 times and compare the sample mean to the distribution mean.
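The representation of the gamma variable as a sum of independent exponential variables also gives a direct simulation method. A sketch (NumPy assumed; the parameter values are arbitrary):
```python
import numpy as np

# Gamma(n, r) variable as the sum of n independent exponential(r) variables
rng = np.random.default_rng(0)
n, r = 5, 2.0
arrivals = rng.exponential(scale=1 / r, size=(1000, n)).sum(axis=1)
print("empirical mean:   ", arrivals.mean())
print("distribution mean:", n / r)
```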
Beta Distributions
The distributions in this subsection belong to the family of beta distributions, which are widely used to model random proportions and probabilities. The beta distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has probability density function $f$ given by $f(x) = 3 x^2$ for $x \in [0, 1]$.
1. Find the mean of $X$.
2. Find the mode of $X$.
3. Find the median of $X$.
4. Sketch the graph of $f$ and show the location of the mean, median, and mode on the $x$-axis.
Answer
1. $\frac{3}{4}$
2. $1$
3. $\left(\frac{1}{2}\right)^{1/3}$
In the special distribution simulator, select the beta distribution and set $a = 3$ and $b = 1$ to get the distribution in the last exercise. Run the experiment 1000 times and compare the sample mean to the distribution mean.
Suppose that a sphere has a random radius $R$ with probability density function $f$ given by $f(r) = 12 r^2 (1 - r)$ for $r \in [0, 1]$. Find the expected value of each of the following:
1. The circumference $C = 2 \pi R$
2. The surface area $A = 4 \pi R^2$
3. The volume $V = \frac{4}{3} \pi R^3$
Answer
1. $\frac{6}{5} \pi$
2. $\frac{8}{5} \pi$
3. $\frac{8}{21} \pi$
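The radius density $f(r) = 12 r^2 (1 - r)$ is the beta density with left parameter 3 and right parameter 2, so the answers can be checked by sampling from a beta distribution. A sketch (NumPy assumed):
```python
import numpy as np

# The radius R has the beta distribution with parameters a = 3, b = 2,
# since the beta(3, 2) density is 12 r^2 (1 - r)
rng = np.random.default_rng(0)
R = rng.beta(3, 2, size=100_000)

print("E(C):", (2 * np.pi * R).mean(), "vs", 6 * np.pi / 5)
print("E(A):", (4 * np.pi * R ** 2).mean(), "vs", 8 * np.pi / 5)
print("E(V):", (4 / 3 * np.pi * R ** 3).mean(), "vs", 8 * np.pi / 21)
```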
Suppose that $X$ has probability density function $f$ given by $f(x) = \frac{1}{\pi \sqrt{x (1 - x)}}$ for $x \in (0, 1)$.
1. Find the mean of $X$.
2. Find the median of $X$.
3. Note that $f$ is unbounded, so $X$ does not have a mode.
4. Sketch the graph of $f$ and show the location of the mean and median on the $x$-axis.
Answer
1. $\frac{1}{2}$
2. $\frac{1}{2}$
The particular beta distribution in the last exercise is also known as the (standard) arcsine distribution. It governs the last time that the Brownian motion process hits 0 during the time interval $[0, 1]$. The arcsine distribution is studied in more generality in the chapter on Special Distributions.
Open the Brownian motion experiment and select the last zero. Run the simulation 1000 times and compare the sample mean to the distribution mean.
Suppose that the grades on a test are described by the random variable $Y = 100 X$ where $X$ has the beta distribution with probability density function $f$ given by $f(x) = 12 x (1 - x)^2$ for $x \in [0, 1]$. The grades are generally low, so the teacher decides to curve the grades using the transformation $Z = 10 \sqrt{Y} = 100 \sqrt{X}$. Find the expected value of each of the following variables:
1. $X$
2. $Y$
3. $Z$
Answer
1. $\E(X) = \frac{2}{5}$
2. $\E(Y) = 40$
3. $\E(Z) = \frac{1280}{21} \approx 60.95$
The Pareto Distribution
Recall that the Pareto distribution is a continuous distribution with probability density function $f$ given by
$f(x) = \frac{a}{x^{a + 1}}, \quad x \in [1, \infty)$
where $a \in (0, \infty)$ is a parameter. The Pareto distribution is named for Vilfredo Pareto. It is a heavy-tailed distribution that is widely used to model certain financial variables. The Pareto distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a$. Then
1. $\E(X) = \infty$ if $0 \lt a \le 1$
2. $\E(X) = \frac{a}{a - 1}$ if $a \gt 1$
Proof
1. If $0 \lt a \lt 1$, $\E(X) = \int_1^\infty x \frac{a}{x^{a+1}} \, dx = \int_1^\infty \frac{a}{x^a} \, dx = \frac{a}{-a + 1} x^{-a + 1} \bigg|_1^\infty = \infty$ since the exponent $-a + 1 \gt 0$. If $a = 1$, $\E(X) = \int_1^\infty x \frac{1}{x^2} \, dx = \int_1^\infty \frac{1}{x} \, dx = \ln x \bigg|_1^\infty = \infty$.
2. If $a \gt 1$ then $\E(X) = \int_1^\infty x \frac{a}{x^{a+1}} \, dx = \int_1^\infty \frac{a}{x^a} \, dx = \frac{a}{-a + 1} x^{-a + 1} \bigg|_1^\infty = \frac{a}{a - 1}$
The previous exercise gives us our first example of a distribution whose mean is infinite.
In the special distribution simulator, select the Pareto distribution. Note the shape of the probability density function and the location of the mean. For the following values of the shape parameter $a$, run the experiment 1000 times and note the behavior of the empirical mean.
1. $a = 1$
2. $a = 2$
3. $a = 3$.
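The behavior in the simulation can be reproduced in code by sampling via the Pareto quantile function $F^{-1}(p) = (1 - p)^{-1/a}$. When $a = 1$ the empirical mean is erratic and drifts upward as the sample grows; for $a = 2$ and $a = 3$ it settles near $a / (a - 1)$. A sketch (NumPy assumed):
```python
import numpy as np

# Pareto samples via the quantile function F^{-1}(p) = (1 - p)^(-1/a)
rng = np.random.default_rng(0)
for a in (1.0, 2.0, 3.0):
    u = rng.uniform(size=100_000)
    x = (1 - u) ** (-1 / a)
    mean = np.inf if a <= 1 else a / (a - 1)
    print(f"a = {a}: empirical mean {x.mean():.3f}, distribution mean {mean}")
```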
The Cauchy Distribution
Recall that the (standard) Cauchy distribution has probability density function $f$ given by $f(x) = \frac{1}{\pi \left(1 + x^2\right)}, \quad x \in \R$ This distribution is named for Augustin Cauchy. The Cauchy distribution is studied in detail in the chapter on Special Distributions.
If $X$ has the Cauchy distribution then $\E(X)$ does not exist.
Proof
By definition, $\E(X) = \int_{-\infty}^\infty x \frac{1}{\pi (1 + x^2)} \, dx = \frac{1}{2 \pi} \ln\left(1 + x^2\right) \bigg|_{-\infty}^\infty$ which evaluates to the meaningless expression $\infty - \infty$.
Note that the graph of $f$ is symmetric about 0 and is unimodal. Thus, the mode and median of $X$ are both 0. By the symmetry result, if $X$ had a mean, the mean would be 0 also, but alas the mean does not exist. Moreover, the non-existence of the mean is not just a pedantic technicality. If we think of the probability distribution as a mass distribution, then the moment to the right of $a$ is $\int_a^\infty (x - a) f(x) \, dx = \infty$ and the moment to the left of $a$ is $\int_{-\infty}^a (x - a) f(x) \, dx = -\infty$ for every $a \in \R$. The center of mass simply does not exist. Probabilistically, the law of large numbers fails, as you can see in the following simulation exercise:
In the Cauchy experiment (with the default parameter values), a light source is 1 unit from position 0 on an infinite straight wall. The angle that the light makes with the perpendicular is uniformly distributed on the interval $\left(\frac{-\pi}{2}, \frac{\pi}{2}\right)$, so that the position of the light beam on the wall has the Cauchy distribution. Run the simulation 1000 times and note the behavior of the empirical mean.
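The failure of the law of large numbers is easy to see in code: the tangent of a uniform angle on $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ has the Cauchy distribution, and the running sample mean never settles down. A sketch (NumPy assumed):
```python
import numpy as np

# Cauchy variable as the tangent of a uniform angle; the running
# sample mean does not converge as the sample size grows
rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=10_000)
x = np.tan(theta)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
print(running_mean[[99, 999, 9999]])   # means after 100, 1000, 10000 spins
```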
The Normal Distribution
Recall that the standard normal distribution is a continuous distribution with density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$
Normal distributions are widely used to model physical measurements subject to small, random errors and are studied in detail in the chapter on Special Distributions.
If $Z$ has the standard normal distribution then $\E(Z) = 0$.
Proof
Using a simple change of variables, we have
$\E(Z) = \int_{-\infty}^\infty z \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2} \, dz = - \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2} \bigg|_{-\infty}^\infty = 0 - 0 = 0$
The standard normal distribution is unimodal and symmetric about $0$. Thus, the median, mean, and mode all agree. More generally, for $\mu \in (-\infty, \infty)$ and $\sigma \in (0, \infty)$, recall that $X = \mu + \sigma Z$ has the normal distribution with location parameter $\mu$ and scale parameter $\sigma$. $X$ has probability density function $f$ given by $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ The location parameter is the mean of the distribution:
If $X$ has the normal distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$, then $\E(X) = \mu$
Proof
Of course we could use the definition, but a proof using linearity and the representation in terms of the standard normal distribution is trivial: $\E(X) = \mu + \sigma \E(Z) = \mu$.
In the special distribution simulator, select the normal distribution. Vary the parameters and note the location of the mean. For selected parameter values, run the simulation 1000 times and compare the sample mean to the distribution mean.
Additional Exercises
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = x + y$ for $(x, y) \in [0, 1] \times [0, 1]$. Find the following expected values:
1. $\E(X)$
2. $\E\left(X^2 Y\right)$
3. $\E\left(X^2 + Y^2\right)$
4. $\E(X Y \mid Y \gt X)$
Answer
1. $\frac{7}{12}$
2. $\frac{17}{72}$
3. $\frac{5}{6}$
4. $\frac{1}{3}$
Suppose that $N$ has a discrete distribution with probability density function $f$ given by $f(n) = \frac{1}{50} n^2 (5 - n)$ for $n \in \{1, 2, 3, 4\}$. Find each of the following:
1. The median of $N$.
2. The mode of $N$
3. $\E(N)$.
4. $\E\left(N^2\right)$
5. $\E(1 / N)$.
6. $\E\left(1 / N^2\right)$.
Answer
1. 3
2. 3
3. $\frac{73}{25}$
4. $\frac{47}{5}$
5. $\frac{2}{5}$
6. $\frac{1}{5}$
Suppose that $X$ and $Y$ are real-valued random variables with $\E(X) = 5$ and $\E(Y) = -2$. Find $\E(3 X + 4 Y - 7)$.
Answer
0
Suppose that $X$ and $Y$ are real-valued, independent random variables, and that $\E(X) = 5$ and $\E(Y) = -2$. Find $\E\left[(3 X - 4) (2 Y + 7)\right]$.
Answer
33
Suppose that there are 5 duck hunters, each a perfect shot. A flock of 10 ducks fly over, and each hunter selects one duck at random and shoots. Find the expected number of ducks killed.
Solution
Number the ducks from 1 to 10. For $k \in \{1, 2, \ldots, 10\}$, let $X_k$ be the indicator variable that takes the value 1 if duck $k$ is killed and 0 otherwise. Duck $k$ is killed if at least one of the hunters selects her, so $\E(X_k) = \P(X_k = 1) = 1 - \left(\frac{9}{10}\right)^5$. The number of ducks killed is $N = \sum_{k=1}^{10} X_k$ so $\E(N) = 10 \left[1 - \left(\frac{9}{10}\right)^5\right] \approx 4.0951$
For a more complete analysis of the duck hunter problem, see The Number of Distinct Sample Values in the chapter on Finite Sampling Models.
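The duck hunter solution is also a nice candidate for simulation: each run draws 5 uniform choices from the 10 ducks and counts the distinct ones. A sketch (NumPy assumed):
```python
import numpy as np

# 5 hunters each pick one of 10 ducks uniformly at random;
# the number of ducks killed is the number of distinct choices
rng = np.random.default_rng(0)
kills = np.array([
    np.unique(rng.integers(0, 10, size=5)).size
    for _ in range(10_000)
])
print("empirical:", kills.mean())
print("exact:    ", 10 * (1 - 0.9 ** 5))
```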
Consider the following game: An urn initially contains one red and one green ball. A ball is selected at random, and if the ball is green, the game is over. If the ball is red, the ball is returned to the urn, another red ball is added, and the game continues. At each stage, a ball is selected at random, and if the ball is green, the game is over. If the ball is red, the ball is returned to the urn, another red ball is added, and the game continues. Let $X$ denote the length of the game (that is, the number of selections required to obtain a green ball). Find $\E(X)$.
Solution
The probability density function $f$ of $X$ was found in the section on discrete distributions: $f(x) = \frac{1}{x (x + 1)}$ for $x \in \N_+$. The expected length of the game is infinite: $\E(X) = \sum_{x=1}^\infty x \frac{1}{x (x + 1)} = \sum_{x=1}^\infty \frac{1}{x + 1} = \infty$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/04%3A_Expected_Value/4.01%3A_Definitions_and_Basic_Properties.txt |
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
In this section, we study some properties of expected value that are a bit more specialized than the basic properties considered in the previous section. Nonetheless, the new results are also very important. They include two fundamental inequalities as well as special formulas for the expected value of a nonnegative variable. As usual, unless otherwise noted, we assume that the referenced expected values exist.
Basic Theory
Markov's Inequality
Our first result is known as Markov's inequality (named after Andrei Markov). It gives an upper bound for the tail probability of a nonnegative random variable in terms of the expected value of the variable.
If $X$ is a nonnegative random variable, then $\P(X \ge x) \le \frac{\E(X)}{x}, \quad x \gt 0$
Proof
For $x \gt 0$, note that $x \cdot \bs{1}(X \ge x) \le X$. Taking expected values through this inequality gives $x \P(X \ge x) \le \E(X)$.
The upper bound in Markov's inequality may be rather crude. In fact, it's quite possible that $\E(X) \big/ x \ge 1$, in which case the bound is worthless. However, the real value of Markov's inequality lies in the fact that it holds with no assumptions whatsoever on the distribution of $X$ (other than that $X$ be nonnegative). Also, as an example below shows, the inequality is tight in the sense that equality can hold for a given $x$. Here is a simple corollary of Markov's inequality.
If $X$ is a real-valued random variable and $k \in (0, \infty)$ then $\P(\left|X\right| \ge x) \le \frac{\E\left(\left|X\right|^k\right)}{x^k}, \quad x \gt 0$
Proof
Since $k \gt 0$, the function $x \mapsto x^k$ is strictly increasing on $[0, \infty)$. Hence using Markov's inequality, $\P(\left|X\right| \ge x) = \P\left(\left|X\right|^k \ge x^k\right) \le \frac{\E\left(\left|X\right|^k\right)}{x^k}$
In this corollary of Markov's inequality, we could try to find $k \gt 0$ so that $\E\left( \left|X\right|^k\right) \big/ x^k$ is minimized, thus giving the tightest bound on $\P\left(\left|X\right| \ge x\right)$.
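To get a feel for how crude the Markov bound can be, compare it with the tail probability of, say, an exponential variable with mean 1. A sketch (NumPy assumed; the sample size and cutoffs are arbitrary):
```python
import numpy as np

# Markov bound E(X)/x versus the tail probability P(X >= x)
# for an exponential variable with mean 1
rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)
for x in (1.0, 2.0, 4.0):
    print(f"x = {x}: P(X >= x) ~ {(sample >= x).mean():.4f}, bound {1 / x:.4f}")
```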
Right Distribution Function
Our next few results give alternative ways to compute the expected value of a nonnegative random variable by means of the right-tail distribution function. This function is also known as the reliability function if the variable represents the lifetime of a device.
If $X$ is a nonnegative random variable then $\E(X) = \int_0^\infty \P(X \gt x) \, dx$
Proof
A proof can be constructed by expressing $\P(X \gt x)$ in terms of the probability density function of $X$, as a sum in the discrete case or an integral in the continuous case. Then in the expression $\int_0^\infty \P(X \gt x) \, dx$ interchange the integral and the sum (in the discrete case) or the two integrals (in the continuous case). There is a much more elegant proof if we use the fact that we can interchange expected values and integrals when the integrand is nonnegative: $\int_0^\infty \P(X \gt x) \, dx = \int_0^\infty \E\left[\bs{1}(X \gt x)\right] \, dx = \E \left(\int_0^\infty \bs{1}(X \gt x) \, dx \right) = \E\left( \int_0^X 1 \, dx \right) = \E(X)$ This interchange is a special case of Fubini's theorem, named for the Italian mathematician Guido Fubini. See the advanced section on expected value as an integral for more details.
Here is a slightly more general result:
If $X$ is a nonnegative random variable and $k \in (0, \infty)$ then $\E(X^k) = \int_0^\infty k x^{k-1} \P(X \gt x) \, dx$
Proof
The same basic proof works: $\int_0^\infty k x^{k-1} \P(X \gt x) \, dx = \int_0^\infty k x^{k-1} \E\left[\bs{1}(X \gt x)\right] \, dx = \E \left(\int_0^\infty k x^{k-1} \bs{1}(X \gt x) \, dx \right) = \E\left( \int_0^X k x^{k-1} \, dx \right) = \E(X^k)$
The following result is similar to the theorem above, but is specialized to nonnegative integer valued variables:
Suppose that $N$ has a discrete distribution, taking values in $\N$. Then $\E(N) = \sum_{n=0}^\infty \P(N \gt n) = \sum_{n=1}^\infty \P(N \ge n)$
Proof
First, the two sums on the right are equivalent by a simple change of variables. A proof can be constructed by expressing $\P(N \gt n)$ as a sum in terms of the probability density function of $N$. Then in the expression $\sum_{n=0}^\infty \P(N \gt n)$ interchange the two sums. Here is a more elegant proof: $\sum_{n=1}^\infty \P(N \ge n) = \sum_{n=1}^\infty \E\left[\bs{1}(N \ge n)\right] = \E\left(\sum_{n=1}^\infty \bs{1}(N \ge n) \right) = \E\left(\sum_{n=1}^N 1 \right) = \E(N)$ This interchange is a special case of a general rule that allows the interchange of expected value and an infinite series, when the terms are nonnegative. See the advanced section on expected value as an integral for more details.
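The tail-sum identity is easy to verify empirically for a particular discrete distribution. Here is a sketch for a Poisson variable (NumPy assumed; the truncation point 60 is chosen so that the neglected tail is negligible):
```python
import numpy as np

# Check E(N) = sum over n of P(N > n) for a Poisson variable
rng = np.random.default_rng(0)
sample = rng.poisson(4.0, size=100_000)
tail_sum = sum((sample > n).mean() for n in range(60))   # empirical P(N > n)
print("sum of tail probabilities:", tail_sum)
print("sample mean:              ", sample.mean())
```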
A General Definition
The special expected value formula for nonnegative variables can be used as the basis of a general formulation of expected value that would work for discrete, continuous, or even mixed distributions, and would not require the assumption of the existence of probability density functions. First, the special formula is taken as the definition of $\E(X)$ if $X$ is nonnegative.
If $X$ is a nonnegative random variable, define $\E(X) = \int_0^\infty \P(X \gt x) \, dx$
Next, for $x \in \R$, recall that the positive and negative parts of $x$ are $x^+ = \max\{x, 0\}$ and $x^- = \max\{0, -x\}$.
For $x \in \R$,
1. $x^+ \ge 0$, $x^- \ge 0$
2. $x = x^+ - x^-$
3. $\left|x\right| = x^+ + x^-$
Now, if $X$ is a real-valued random variable, then $X^+$ and $X^-$, the positive and negative parts of $X$, are nonnegative random variables, so their expected values are defined as above. The definition of $\E(X)$ is then natural, anticipating of course the linearity property.
If $X$ is a real-valued random variable, define $\E(X) = \E\left(X^+\right) - \E\left(X^-\right)$, assuming that at least one of the expected values on the right is finite.
The usual formulas for expected value in terms of the probability density function, for discrete, continuous, or mixed distributions, would now be proven as theorems. We will not go further in this direction, however, since the most complete and general definition of expected value is given in the advanced section on expected value as an integral.
The Change of Variables Theorem
Suppose that $X$ takes values in $S$ and has probability density function $f$. Suppose also that $r: S \to \R$, so that $r(X)$ is a real-valued random variable. The change of variables theorem gives a formula for computing $\E\left[r(X)\right]$ without having to first find the probability density function of $r(X)$. If $S$ is countable, so that $X$ has a discrete distribution, then $\E\left[r(X)\right] = \sum_{x \in S} r(x) f(x)$ If $S \subseteq \R^n$ and $X$ has a continuous distribution on $S$ then $\E\left[r(X)\right] = \int_S r(x) f(x) \, dx$ In both cases, of course, we assume that the expected values exist. In the previous section on basic properties, we proved the change of variables theorem when $X$ has a discrete distribution and when $X$ has a continuous distribution but $r$ has countable range. Now we can finally finish our proof in the continuous case.
Suppose that $X$ has a continuous distribution on $S$ with probability density function $f$, and $r: S \to \R$. Then $\E\left[r(X)\right] = \int_S r(x) f(x) \, dx$
Proof
Suppose first that $r$ is nonnegative. From the theorem above, $\E\left[r(X)\right] = \int_0^\infty \P\left[r(X) \gt t\right] \, dt = \int_0^\infty \int_{r^{-1}(t, \infty)} f(x) \, dx \, dt = \int_S \int_0^{r(x)} f(x) \, dt \, dx = \int_S r(x) f(x) \, dx$ For general $r$, we decompose into positive and negative parts, and use the result just established. \begin{align} \E\left[r(X)\right] & = \E\left[r^+(X) - r^-(X)\right] = \E\left[r^+(X)\right] - \E\left[r^-(X)\right] \\ & = \int_S r^+(x) f(x) \, dx - \int_S r^-(x) f(x) \, dx = \int_S \left[r^+(x) - r^-(x)\right] f(x) \, dx = \int_S r(x) f(x) \, dx \end{align}
Jensen's Inequality
Our next sequence of exercises will establish an important inequality known as Jensen's inequality, named for Johan Jensen. First we need a definition.
A real-valued function $g$ defined on an interval $S \subseteq \R$ is said to be convex (or concave upward) on $S$ if for each $t \in S$, there exist numbers $a$ and $b$ (that may depend on $t$), such that
1. $a + b t = g(t)$
2. $a + bx \le g(x)$ for all $x \in S$
The graph of $x \mapsto a + b x$ is called a supporting line for $g$ at $t$.
Thus, a convex function has at least one supporting line at each point in the domain.
You may be more familiar with convexity in terms of the following theorem from calculus: if $g$ has a continuous, non-negative second derivative on $S$, then $g$ is convex on $S$ (since the tangent line at $t$ is a supporting line at $t$ for each $t \in S$). The next result is the single variable version of Jensen's inequality.
If $X$ takes values in an interval $S$ and $g: S \to \R$ is convex on $S$, then $\E\left[g(X)\right] \ge g\left[\E(X)\right]$
Proof
Note that $\E(X) \in S$ so let $y = a + b x$ be a supporting line for $g$ at $\E(X)$. Thus $a + b \E(X) = g[\E(X)]$ and $a + b \, X \le g(X)$. Taking expected values through the inequality gives
$a + b \, \E(X) = g\left[\E(X)\right] \le \E\left[g(X)\right]$
Jensen's inequality extends easily to higher dimensions. The 2-dimensional version is particularly important, because it will be used to derive several special inequalities in the section on vector spaces of random variables. We need two definitions.
A set $S \subseteq \R^n$ is convex if for every pair of points in $S$, the line segment connecting those points also lies in $S$. That is, if $\bs x, \, \bs y \in S$ and $p \in [0, 1]$ then $p \bs x + (1 - p) \bs y \in S$.
Suppose that $S \subseteq \R^n$ is convex. A function $g: S \to \R$ on $S$ is convex (or concave upward) if for each $\bs t \in S$, there exist $a \in \R$ and $\bs b \in \R^n$ (depending on $\bs t$) such that
1. $a + \bs b \cdot \bs t = g(\bs t)$
2. $a + \bs b \cdot \bs x \le g(\bs x)$ for all $\bs x \in S$
The graph of $\bs x \mapsto a + \bs b \cdot \bs x$ is called a supporting hyperplane for $g$ at $\bs t$.
In $\R^2$ a supporting hyperplane is an ordinary plane. From calculus, if $g$ has continuous second derivatives on $S$ and has a nonnegative definite (positive semi-definite) second derivative matrix, then $g$ is convex on $S$. Suppose now that $\bs X = (X_1, X_2, \ldots, X_n)$ takes values in $S \subseteq \R^n$, and let $\E(\bs X ) = (\E(X_1), \E(X_2), \ldots, \E(X_n))$. The following result is the general version of Jensen's inequality.
If $S$ is convex and $g: S \to \R$ is convex on $S$ then
$\E\left[g(\bs X)\right] \ge g\left[\E(\bs X)\right]$
Proof
First $\E(\bs X) \in S$, so let $y = a + \bs b \cdot \bs x$ be a supporting hyperplane for $g$ at $\E(\bs X)$. Thus $a + \bs b \cdot \E(\bs X) = g[\E(\bs X)]$ and $a + \bs b \cdot \bs X \le g(\bs X)$. Taking expected values through the inequality gives $a + \bs b \cdot \E(\bs X ) = g\left[\E(\bs X)\right] \le \E\left[g(\bs X)\right]$
We will study the expected value of random vectors and matrices in more detail in a later section. In both the one and $n$-dimensional cases, a function $g: S \to \R$ is concave (or concave downward) if the inequality in the definition is reversed. Jensen's inequality also reverses.
Expected Value in Terms of the Quantile Function
If $X$ has a continuous distribution with support on an interval of $\R$, then there is a simple (but not well known) formula for the expected value of $X$ as the integral of the quantile function of $X$. Here is the general result:
Suppose that $X$ has a continuous distribution with support on an interval $(a, b) \subseteq \R$. Let $F$ denote the cumulative distribution function of $X$ so that $F^{-1}$ is the quantile function of $X$. If $g: (a, b) \to \R$ then (assuming that the expected value exists) $\E[g(X)] = \int_0^1 g\left[F^{-1}(p)\right] dp$
Proof
Suppose that $X$ has probability density function $f$, although the theorem is true without this assumption. Under the assumption that $X$ has a continuous distribution with support on the interval $(a, b)$, the distribution function $F$ is strictly increasing on $(a, b)$, and the quantile function $F^{-1}$ is the ordinary inverse of $F$. Substituting $p = F(x)$, $dp = F^\prime(x) \, dx = f(x) \, dx$ we have $\int_0^1 g\left[F^{-1}(p)\right] d p = \int_a^b g\left(F^{-1}[F(x)]\right) f(x) \, dx = \int_a^b g(x) f(x) \, dx = \E[g(X)]$
So in particular, $\E(X) = \int_0^1 F^{-1}(p) \, dp$.
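The quantile formula is also easy to verify numerically for a specific distribution, say the exponential distribution with rate $r$, whose quantile function is $F^{-1}(p) = -\ln(1 - p) / r$. A sketch using a simple midpoint rule on $(0, 1)$ (NumPy assumed):
```python
import numpy as np

# E(X) as the integral of the quantile function over (0, 1),
# for the exponential distribution with rate r
r = 0.5
p = (np.arange(10_000) + 0.5) / 10_000   # midpoints of a grid on (0, 1)
quantiles = -np.log(1 - p) / r
print("integral of quantile function:", quantiles.mean())
print("exact mean:                   ", 1 / r)
```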
Examples and Applications
Let $a \in (0, \infty)$ and let $\P(X = a) = 1$, so that $X$ is a constant random variable. Show that Markov's inequality is in fact equality at $x = a$.
Solution
Of course $\E(X) = a$. Hence $\P(X \ge a) = 1$ and $\E(X) / a = 1$.
The Exponential Distribution
Recall that the exponential distribution is a continuous distribution with probability density function $f$ given by $f(t) = r e^{-r t}, \quad t \in [0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other arrival times; in particular, the distribution governs the time between arrivals in the Poisson model. The exponential distribution is studied in detail in the chapter on the Poisson Process.
Suppose that $X$ has exponential distribution with rate parameter $r$.
1. Find $\E(X)$ using the right distribution formula.
2. Find $\E(X)$ using the quantile function formula.
3. Compute both sides of Markov's inequality.
Answer
1. $\int_0^\infty e^{-r t} \, dt = \frac{1}{r}$
2. $\int_0^1 -\frac{1}{r} \ln(1 - p) \, dp = \frac{1}{r}$
3. $e^{-r t} \lt \frac{1}{r t}$ for $t \gt 0$
Open the gamma experiment. Keep the default value of the stopping parameter ($n = 1$), which gives the exponential distribution. Vary the rate parameter $r$ and note the shape of the probability density function and the location of the mean. For various values of the rate parameter, run the experiment 1000 times and compare the sample mean with the distribution mean.
The Geometric Distribution
Recall that Bernoulli trials are independent trials each with two outcomes, which in the language of reliability, are called success and failure. The probability of success on each trial is $p \in [0, 1]$. A separate chapter on Bernoulli Trials explores this random process in more detail. It is named for Jacob Bernoulli. If $p \in (0, 1)$, the trial number $N$ of the first success has the geometric distribution on $\N_+$ with success parameter $p$. The probability density function $f$ of $N$ is given by $f(n) = p (1 - p)^{n - 1}, \quad n \in \N_+$
Suppose that $N$ has the geometric distribution on $\N_+$ with parameter $p \in (0, 1)$.
1. Find $\E(N)$ using the right distribution function formula.
2. Compute both sides of Markov's inequality.
3. Find $\E(N \mid N \text{ is even })$.
Answer
1. $\sum_{n=0}^\infty (1 - p)^n = \frac{1}{p}$
2. $(1 - p)^{n-1} \lt \frac{1}{n p}, \quad n \in \N_+$
3. $\frac{2}{p (2 - p)}$
Open the negative binomial experiment. Keep the default value of the stopping parameter ($k = 1$), which gives the geometric distribution. Vary the success parameter $p$ and note the shape of the probability density function and the location of the mean. For various values of the success parameter, run the experiment 1000 times and compare the sample mean with the distribution mean.
The Pareto Distribution
Recall that the Pareto distribution is a continuous distribution with probability density function $f$ given by $f(x) = \frac{a}{x^{a + 1}}, \quad x \in [1, \infty)$ where $a \in (0, \infty)$ is a parameter. The Pareto distribution is named for Vilfredo Pareto. It is a heavy-tailed distribution that is widely used to model certain financial variables. The Pareto distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with parameter $a \gt 1$.
1. Find $\E(X)$ using the right distribution function formula.
2. Find $\E(X)$ using the quantile function formula.
3. Find $\E(1 / X)$.
4. Show that $x \mapsto 1 / x$ is convex on $(0, \infty)$.
5. Verify Jensen's inequality by comparing $\E(1 / X)$ and $1 \big/ \E(X)$.
Answer
1. $\int_0^1 1 \, dx + \int_1^\infty x^{-a} \, dx = \frac{a}{a - 1}$
2. $\int_0^1 (1 - p)^{-1/a} dp = \frac{a}{a - 1}$
3. $\frac{a}{a + 1}$
4. The convexity of $1 / x$ is clear from the graph. Note also that $\frac{d^2}{dx^2} \frac{1}{x} = \frac{2}{x^3} \gt 0$ for $x \gt 0$.
5. $\frac{a}{a + 1} \gt \frac{a -1}{a}$
Open the special distribution simulator and select the Pareto distribution. Keep the default value of the scale parameter. Vary the shape parameter and note the shape of the probability density function and the location of the mean. For various values of the shape parameter, run the experiment 1000 times and compare the sample mean with the distribution mean.
A Bivariate Distribution
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 2 (x + y)$ for $0 \le x \le y \le 1$.
1. Show that the domain of $f$ is a convex set.
2. Show that $(x, y) \mapsto x^2 + y^2$ is convex on the domain of $f$.
3. Compute $\E\left(X^2 + Y^2\right)$.
4. Compute $\left[\E(X)\right]^2 + \left[\E(Y)\right]^2$.
5. Verify Jensen's inequality by comparing (b) and (c).
Answer
1. Note that the domain is a triangular region.
2. The second derivative matrix is $\left[\begin{matrix} 2 & 0 \\ 0 & 2\end{matrix}\right]$.
3. $\frac{5}{6}$
4. $\frac{53}{72}$
5. $\frac{5}{6} \gt \frac{53}{72}$
The Arithmetic and Geometric Means
Suppose that $\{x_1, x_2, \ldots, x_n\}$ is a set of positive numbers. The arithmetic mean is at least as large as the geometric mean: $\left(\prod_{i=1}^n x_i \right)^{1/n} \le \frac{1}{n}\sum_{i=1}^n x_i$
Proof
Let $X$ be uniformly distributed on $\{x_1, x_2, \ldots, x_n\}$. We apply Jensen's inequality with the natural logarithm function, which is concave on $(0, \infty)$: $\E\left(\ln X \right) = \frac{1}{n} \sum_{i=1}^n \ln x_i = \ln \left[ \left(\prod_{i=1}^n x_i \right)^{1/n} \right] \le \ln\left[\E(X)\right] = \ln \left(\frac{1}{n}\sum_{i=1}^n x_i \right)$ Taking exponentials of each side gives the inequality. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/04%3A_Expected_Value/4.02%3A_Additional_Properties.txt |
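A quick numeric illustration of the inequality (any set of positive numbers will do; these values are arbitrary):
```python
import numpy as np

# Arithmetic mean versus geometric mean for a set of positive numbers
x = np.array([1.0, 4.0, 9.0, 16.0])
print("geometric mean: ", x.prod() ** (1 / x.size))
print("arithmetic mean:", x.mean())
```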
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\mse}{\text{mse}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
Recall the expected value of a real-valued random variable is the mean of the variable, and is a measure of the center of the distribution. Recall also that by taking the expected value of various transformations of the variable, we can measure other interesting characteristics of the distribution. In this section, we will study expected values that measure the spread of the distribution about the mean.
Basic Theory
Definitions and Interpretations
As usual, we start with a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose that $X$ is a random variable for the experiment, taking values in $S \subseteq \R$. Recall that $\E(X)$, the expected value (or mean) of $X$ gives the center of the distribution of $X$.
The variance and standard deviation of $X$ are defined by
1. $\var(X) = \E\left(\left[X - \E(X)\right]^2\right)$
2. $\sd(X) = \sqrt{\var(X)}$
Implicit in the definition is the assumption that the mean $\E(X)$ exists, as a real number. If this is not the case, then $\var(X)$ (and hence also $\sd(X)$) are undefined. Even if $\E(X)$ does exist as a real number, it's possible that $\var(X) = \infty$. For the remainder of our discussion of the basic theory, we will assume that expected values that are mentioned exist as real numbers.
The variance and standard deviation of $X$ are both measures of the spread of the distribution about the mean. Variance (as we will see) has nicer mathematical properties, but its physical unit is the square of that of $X$. Standard deviation, on the other hand, is not as nice mathematically, but has the advantage that its physical unit is the same as that of $X$. When the random variable $X$ is understood, the standard deviation is often denoted by $\sigma$, so that the variance is $\sigma^2$.
Recall that the second moment of $X$ about $a \in \R$ is $\E\left[(X - a)^2\right]$. Thus, the variance is the second moment of $X$ about the mean $\mu = \E(X)$, or equivalently, the second central moment of $X$. In general, the second moment of $X$ about $a \in \R$ can also be thought of as the mean square error if the constant $a$ is used as an estimate of $X$. In addition, second moments have a nice interpretation in physics. If we think of the distribution of $X$ as a mass distribution in $\R$, then the second moment of $X$ about $a \in \R$ is the moment of inertia of the mass distribution about $a$. This is a measure of the resistance of the mass distribution to any change in its rotational motion about $a$. In particular, the variance of $X$ is the moment of inertia of the mass distribution about the center of mass $\mu$.
The mean square error (or equivalently the moment of inertia) about $a$ is minimized when $a = \mu$:
Let $\mse(a) = \E\left[(X - a)^2\right]$ for $a \in \R$. Then $\mse$ is minimized when $a = \mu$, and the minimum value is $\sigma^2$.
Proof
Expanding the square and using linearity gives $\mse(a) = \E\left(X^2\right) - 2 a \E(X) + a^2$, a quadratic function of $a$ whose graph is a parabola opening upward. The minimum occurs when the derivative $2 a - 2 \E(X)$ is 0, that is, when $a = \E(X) = \mu$. The minimum value is $\E\left(X^2\right) - \mu^2 = \E\left[(X - \mu)^2\right] = \sigma^2$.
The relationship between measures of center and measures of spread is studied in more detail in the advanced section on vector spaces of random variables.
Properties
The following exercises give some basic properties of variance, which in turn rely on basic properties of expected value. As usual, be sure to try the proofs yourself before reading the ones in the text. Our first results are computational formulas based on the change of variables formula for expected value.
Let $\mu = \E(X)$.
1. If $X$ has a discrete distribution with probability density function $f$, then $\var(X) = \sum_{x \in S} (x - \mu)^2 f(x)$.
2. If $X$ has a continuous distribution with probability density function $f$, then $\var(X) = \int_S (x - \mu)^2 f(x) dx$
Proof
1. This follows from the discrete version of the change of variables formula.
2. Similarly, this follows from the continuous version of the change of variables formula.
Our next result is a variance formula that is usually better than the definition for computational purposes.
$\var(X) = \E(X^2) - [\E(X)]^2$.
Proof
Let $\mu = \E(X)$. Using the linearity of expected value we have $\var(X) = \E[(X - \mu)^2] = \E(X^2 - 2 \mu X + \mu^2) = \E(X^2) - 2 \mu \E(X) + \mu^2 = \E(X^2) - 2 \mu^2 + \mu^2 = \E(X^2) - \mu^2$
Of course, by the change of variables formula, $\E\left(X^2\right) = \sum_{x \in S} x^2 f(x)$ if $X$ has a discrete distribution, and $\E\left(X^2\right) = \int_S x^2 f(x) \, dx$ if $X$ has a continuous distribution. In both cases, $f$ is the probability density function of $X$.
Variance is always nonnegative, since it's the expected value of a nonnegative random variable. Moreover, any random variable that really is random (not a constant) will have strictly positive variance.
The nonnegative property.
1. $\var(X) \ge 0$
2. $\var(X) = 0$ if and only if $\P(X = c) = 1$ for some constant $c$ (and then of course, $\E(X) = c$).
Proof
These results follow from the basic positive property of expected value. Let $\mu = \E(X)$. First $(X - \mu)^2 \ge 0$ with probability 1 so $\E\left[(X - \mu)^2\right] \ge 0$. In addition, $\E\left[(X - \mu)^2\right] = 0$ if and only if $\P(X = \mu) = 1$.
Our next result shows how the variance and standard deviation are changed by a linear transformation of the random variable. In particular, note that variance, unlike general expected value, is not a linear operation. This is not really surprising since the variance is the expected value of a nonlinear function of the variable: $x \mapsto (x - \mu)^2$.
If $a, \, b \in \R$ then
1. $\var(a + b X) = b^2 \var(X)$
2. $\sd(a + b X) = \left|b\right| \sd(X)$
Proof
1. Let $\mu = \E(X)$. By linearity, $\E(a + b X) = a + b \mu$. Hence $\var(a + b X) = \E\left([(a + b X) - (a + b \mu)]^2\right) = \E\left[b^2 (X - \mu)^2\right] = b^2 \var(X)$.
2. This result follows from (a) by taking square roots.
Recall that when $b \gt 0$, the linear transformation $x \mapsto a + b x$ is called a location-scale transformation and often corresponds to a change of location and change of scale in the physical units. For example, the change from inches to centimeters in a measurement of length is a scale transformation, and the change from Fahrenheit to Celsius in a measurement of temperature is both a location and scale transformation. The previous result shows that when a location-scale transformation is applied to a random variable, the standard deviation does not depend on the location parameter, but is multiplied by the scale factor. There is a particularly important location-scale transformation.
Suppose that $X$ is a random variable with mean $\mu$ and variance $\sigma^2$. The random variable $Z$ defined as follows is the standard score of $X$. $Z = \frac{X - \mu}{\sigma}$
1. $\E(Z) = 0$
2. $\var(Z) = 1$
Proof
1. From the linearity of expected value, $\E(Z) = \frac{1}{\sigma} [\E(X) - \mu] = 0$
2. From the scaling property, $\var(Z) = \frac{1}{\sigma^2} \var(X) = 1$.
Since $X$ and its mean and standard deviation all have the same physical units, the standard score $Z$ is dimensionless. It measures the directed distance from $\E(X)$ to $X$ in terms of standard deviations.
Let $Z$ denote the standard score of $X$, and suppose that $Y = a + b X$ where $a, \, b \in \R$ and $b \ne 0$.
1. If $b \gt 0$, the standard score of $Y$ is $Z$.
2. If $b \lt 0$, the standard score of $Y$ is $-Z$.
Proof
$\E(Y) = a + b \E(X)$ and $\sd(Y) = \left|b\right| \, \sd(X)$. Hence $\frac{Y - \E(Y)}{\sd(Y)} = \frac{b}{\left|b\right|} \frac{X - \E(X)}{\sd(X)}$
As just noted, when $b \gt 0$, the variable $Y = a + b X$ is a location-scale transformation and often corresponds to a change of physical units. Since the standard score is dimensionless, it's reasonable that the standard scores of $X$ and $Y$ are the same. Here is another standardized measure of dispersion:
Suppose that $X$ is a random variable with $\E(X) \ne 0$. The coefficient of variation is the ratio of the standard deviation to the mean: $\text{cv}(X) = \frac{\sd(X)}{\E(X)}$
The coefficient of variation is also dimensionless, and is sometimes used to compare variability for random variables with different means. We will learn how to compute the variance of the sum of two random variables in the section on covariance.
Chebyshev's Inequality
Chebyshev's inequality (named after Pafnuty Chebyshev) gives an upper bound on the probability that a random variable will be more than a specified distance from its mean. This is often useful in applied problems where the distribution is unknown, but the mean and variance are known (at least approximately). In the following two results, suppose that $X$ is a real-valued random variable with mean $\mu = \E(X) \in \R$ and standard deviation $\sigma = \sd(X) \in (0, \infty)$.
Chebyshev's inequality 1. $\P\left(\left|X - \mu\right| \ge t\right) \le \frac{\sigma^2}{t^2}, \quad t \gt 0$
Proof
Since $(X - \mu)^2$ is a nonnegative random variable, Markov's inequality gives $\P\left(\left|X - \mu\right| \ge t\right) = \P\left[(X - \mu)^2 \ge t^2\right] \le \frac{\E\left[(X - \mu)^2\right]}{t^2} = \frac{\sigma^2}{t^2}$
Here's an alternate version, with the distance in terms of standard deviation.
Chebyshev's inequality 2. $\P\left(\left|X - \mu\right| \ge k \sigma\right) \le \frac{1}{k^2}, \quad k \gt 0$
Proof
Let $t = k \sigma$ in the first version of Chebyshev's inequality.
The usefulness of the Chebyshev inequality comes from the fact that it holds for any distribution (assuming only that the mean and variance exist). The tradeoff is that for many specific distributions, the Chebyshev bound is rather crude. Note in particular that the first inequality is useless when $t \le \sigma$, and the second inequality is useless when $k \le 1$, since 1 is an upper bound for the probability of any event. On the other hand, it's easy to construct a distribution for which Chebyshev's inequality is sharp for a specified value of $t \in (0, \infty)$. Such a distribution is given in an exercise below.
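For a concrete comparison, the sketch below computes the exact tail probability and the Chebyshev bound for the uniform distribution on $[0, 1]$, which has mean $\frac{1}{2}$ and variance $\frac{1}{12}$ (the values of $t$ are arbitrary choices with $t \le \frac{1}{2}$):
```python
# Chebyshev bound versus the exact tail probability for the
# uniform distribution on [0, 1]: mean 1/2, variance 1/12
sigma2 = 1 / 12
for t in (0.3, 0.4, 0.45):
    exact = 1 - 2 * t   # P(|X - 1/2| >= t) for t <= 1/2
    print(f"t = {t}: exact {exact:.3f}, Chebyshev bound {sigma2 / t ** 2:.3f}")
```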
Examples and Applications
As always, be sure to try the problems yourself before looking at the solutions and answers.
Indicator Variables
Suppose that $X$ is an indicator variable with $p = \P(X = 1)$, where $p \in [0, 1]$. Then
1. $\E(X) = p$
2. $\var(X) = p (1 - p)$
Proof
1. We proved this in the section on basic properties, although the result is so simple that we can do it again: $\E(X) = 1 \cdot p + 0 \cdot (1 - p) = p$.
2. Note that $X^2 = X$ since $X$ only takes values 0 and 1. Hence $\E\left(X^2\right) = p$ and therefore $\var(X) = p - p^2 = p (1 - p)$.
The graph of $\var(X)$ as a function of $p$ is a parabola, opening downward, with roots at 0 and 1. Thus the minimum value of $\var(X)$ is 0, and occurs when $p = 0$ and $p = 1$ (when $X$ is deterministic, of course). The maximum value is $\frac{1}{4}$ and occurs when $p = \frac{1}{2}$.
Uniform Distributions
Discrete uniform distributions are widely used in combinatorial probability, and model a point chosen at random from a finite set. The mean and variance have simple forms for the discrete uniform distribution on a set of evenly spaced points (sometimes referred to as a discrete interval):
Suppose that $X$ has the discrete uniform distribution on $\{a, a + h, \ldots, a + (n - 1) h\}$ where $a \in \R$, $h \in (0, \infty)$, and $n \in \N_+$. Let $b = a + (n - 1) h$, the right endpoint. Then
1. $\E(X) = \frac{1}{2}(a + b)$.
2. $\var(X) = \frac{1}{12}(b - a)(b - a + 2 h)$.
Proof
1. We proved this in the section on basic properties. Here it is again, using the formula for the sum of the first $n - 1$ positive integers: $\E(X) = \frac{1}{n} \sum_{i=0}^{n-1} (a + i h) = \frac{1}{n}\left(n a + h \frac{(n - 1) n}{2}\right) = a + \frac{(n - 1) h}{2} = \frac{a + b}{2}$
2. Note that $\E\left(X^2\right) = \frac{1}{n} \sum_{i=0}^{n-1} (a + i h)^2 = \frac{1}{n} \sum_{i=0}^{n-1} \left(a^2 + 2 a h i + h^2 i^2\right)$ Using the formulas for the sum of the first $n - 1$ positive integers, and the sum of the squares of the first $n - 1$ positive integers, we have $\E\left(X^2\right) = \frac{1}{n}\left[ n a^2 + 2 a h \frac{(n-1) n}{2} + h^2 \frac{(n - 1) n (2 n -1)}{6}\right]$ Using the computational formula and simplifying gives the result.
Note that the mean is simply the average of the endpoints, while the variance depends only on the difference between the endpoints and the step size.
Open the special distribution simulator, and select the discrete uniform distribution. Vary the parameters and note the location and size of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Next, recall that the continuous uniform distribution on a bounded interval corresponds to selecting a point at random from the interval. Continuous uniform distributions arise in geometric probability and a variety of other applied problems.
Suppose that $X$ has the continuous uniform distribution on the interval $[a, b]$ where $a, \, b \in \R$ with $a \lt b$. Then
1. $\E(X) = \frac{1}{2}(a + b)$
2. $\var(X) = \frac{1}{12}(b - a)^2$
Proof
1. $\E(X) = \int_a^b x \frac{1}{b - a} \, dx = \frac{b^2 - a^2}{2 (b - a)} = \frac{a + b}{2}$
2. $\E(X^2) = \int_a^b x^2 \frac{1}{b - a} \, dx = \frac{b^3 - a^3}{3 (b - a)}$. The variance result then follows from (a), the computational formula and simple algebra.
Note that the mean is the midpoint of the interval and the variance depends only on the length of the interval. Compare this with the results in the discrete case.
Open the special distribution simulator, and select the continuous uniform distribution. This is the uniform distribution on the interval $[a, a + w]$. Vary the parameters and note the location and size of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Dice
Recall that a fair die is one in which the faces are equally likely. In addition to fair dice, there are various types of crooked dice. Here are three:
• An ace-six flat die is a six-sided die in which faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each.
• A two-five flat die is a six-sided die in which faces 2 and 5 have probability $\frac{1}{4}$ each while faces 1, 3, 4, and 6 have probability $\frac{1}{8}$ each.
• A three-four flat die is a six-sided die in which faces 3 and 4 have probability $\frac{1}{4}$ each while faces 1, 2, 5, and 6 have probability $\frac{1}{8}$ each.
A flat die, as the name suggests, is a die that is not a cube, but rather is shorter in one of the three directions. The particular probabilities that we use ($\frac{1}{4}$ and $\frac{1}{8}$) are fictitious, but the essential property of a flat die is that the opposite faces on the shorter axis have slightly larger probabilities than the other four faces. Flat dice are sometimes used by gamblers to cheat. In the following problems, you will compute the mean and variance for each of the various types of dice. Be sure to compare the results.
A standard, fair die is thrown and the score $X$ is recorded. Sketch the graph of the probability density function and compute each of the following:
1. $\E(X)$
2. $\var(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{35}{12}$
An ace-six flat die is thrown and the score $X$ is recorded. Sketch the graph of the probability density function and compute each of the following:
1. $\E(X)$
2. $\var(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{15}{4}$
A two-five flat die is thrown and the score $X$ is recorded. Sketch the graph of the probability density function and compute each of the following:
1. $\E(X)$
2. $\var(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{11}{4}$
A three-four flat die is thrown and the score $X$ is recorded. Sketch the graph of the probability density function and compute each of the following:
1. $\E(X)$
2. $\var(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{9}{4}$
In the dice experiment, select one die. For each of the following cases, note the location and size of the mean $\pm$ standard deviation bar in relation to the probability density function. Run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
1. Fair die
2. Ace-six flat die
3. Two-five flat die
4. Three-four flat die
The Poisson Distribution
Recall that the Poisson distribution is a discrete distribution on $\N$ with probability density function $f$ given by $f(n) = e^{-a} \, \frac{a^n}{n!}, \quad n \in \N$ where $a \in (0, \infty)$ is a parameter. The Poisson distribution is named after Simeon Poisson and is widely used to model the number of random points in a region of time or space; the parameter $a$ is proportional to the size of the region. The Poisson distribution is studied in detail in the chapter on the Poisson Process.
Suppose that $N$ has the Poisson distribution with parameter $a$. Then
1. $\E(N) = a$
2. $\var(N) = a$
Proof
1. We did this computation in the previous section. Here it is again: $\E(N) = \sum_{n=0}^\infty n e^{-a} \frac{a^n}{n!} = e^{-a} \sum_{n=1}^\infty \frac{a^n}{(n - 1)!} = e^{-a} a \sum_{n=1}^\infty \frac{a^{n-1}}{(n-1)!} = e^{-a} a e^a = a.$
2. First we compute the second factorial moment: $\E[N (N - 1)] = \sum_{n=1}^\infty n (n - 1) e^{-a} \frac{a^n}{n!} = \sum_{n=2}^\infty e^{-a} \frac{a^n}{(n - 2)!} = e^{-a} a^2 \sum_{n=2}^\infty \frac{a^{n-2}}{(n - 2)!} = a^2 e^{-a} e^a = a^2$ Hence, $\E\left(N^2\right) = \E[N(N - 1)] + \E(N) = a^2 + a$ and so $\var(N) = (a^2 + a) - a^2 = a$.
Thus, the parameter of the Poisson distribution is both the mean and the variance of the distribution.
In the Poisson experiment, the parameter is $a = r t$. Vary the parameter and note the size and location of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected values of the parameter, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
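As an alternative to the applet, here is a quick numerical check, assuming numpy is available; the value $a = 4$ is hypothetical.

```python
# Sketch: the empirical mean and variance of a Poisson sample should both
# be close to the parameter a.
import numpy as np

rng = np.random.default_rng(0)
a = 4.0
sample = rng.poisson(a, size=100_000)
print(sample.mean(), sample.var())   # both should be close to a
```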
The Geometric Distribution
Recall that Bernoulli trials are independent trials each with two outcomes, which in the language of reliability, are called success and failure. The probability of success on each trial is $p \in [0, 1]$. A separate chapter on Bernoulli Trials explores this random process in more detail. It is named for Jacob Bernoulli. If $p \in (0, 1]$, the trial number $N$ of the first success has the geometric distribution on $\N_+$ with success parameter $p$. The probability density function $f$ of $N$ is given by $f(n) = p (1 - p)^{n - 1}, \quad n \in \N_+$
Suppose that $N$ has the geometric distribution on $\N_+$ with success parameter $p \in (0, 1]$. Then
1. $\E(N) = \frac{1}{p}$
2. $\var(N) = \frac{1 - p}{p^2}$
Proof
1. We proved this in the section on basic properties. Here it is again: $\E(N) = \sum_{n=1}^\infty n p (1 - p)^{n-1} = -p \frac{d}{dp} \sum_{n=0}^\infty (1 - p)^n = -p \frac{d}{dp} \frac{1}{p} = p \frac{1}{p^2} = \frac{1}{p}$
2. First we compute the second factorial moment: $\E[N(N - 1)] = \sum_{n = 2}^\infty n (n - 1) (1 - p)^{n-1} p = p(1 - p) \frac{d^2}{dp^2} \sum_{n=0}^\infty (1 - p)^n = p (1 - p) \frac{d^2}{dp^2} \frac{1}{p} = p (1 - p) \frac{2}{p^3} = \frac{2 (1 - p)}{p^2}$ Hence $\E(N^2) = \E[N(N - 1)] + \E(N) = 2 / p^2 - 1 / p$ and so $\var(N) = 2 / p^2 - 1 / p - 1 / p^2 = 1 / p^2 - 1 / p = (1 - p) / p^2$.
Note that the variance is 0 when $p = 1$, not surprising since $N$ is deterministic in this case.
In the negative binomial experiment, set $k = 1$ to get the geometric distribution. Vary $p$ with the scroll bar and note the size and location of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected values of $p$, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Suppose that $N$ has the geometric distribution with parameter $p = \frac{3}{4}$. Compute the true value and the Chebyshev bound for the probability that $N$ is at least 2 standard deviations away from the mean.
Answer
1. $\frac{1}{16}$
2. $\frac{1}{4}$
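A sketch verifying these answers: with $p = \frac{3}{4}$ the mean is $\frac{4}{3}$ and the standard deviation is $\frac{2}{3}$, so the event in question is $N \ge 3$, with exact probability $(1 - p)^2 = \frac{1}{16}$, while Chebyshev's inequality gives only the cruder bound $\frac{1}{2^2} = \frac{1}{4}$.

```python
# Sketch: exact probability that N is at least 2 standard deviations from
# its mean when p = 3/4, versus the Chebyshev bound 1/4.
p = 3 / 4
mean = 1 / p                          # 4/3
sd = ((1 - p) / p ** 2) ** 0.5        # 2/3
exact = sum(p * (1 - p) ** (n - 1)    # sum P(N = n) over the tail event,
            for n in range(1, 1000)   # truncated at n = 1000
            if abs(n - mean) >= 2 * sd)
print(exact, 1 / 16, 1 / 4)
```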
The Exponential Distribution
Recall that the exponential distribution is a continuous distribution on $[0, \infty)$ with probability density function $f$ given by $f(t) = r e^{-r t}, \quad t \in [0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other arrival times. The exponential distribution is studied in detail in the chapter on the Poisson Process.
Suppose that $T$ has the exponential distribution with rate parameter $r$. Then
1. $\E(T) = \frac{1}{r}$.
2. $\var(T) = \frac{1}{r^2}$.
Proof
1. We proved this in the section on basic properties. Here it is again, using integration by parts: $\E(T) = \int_0^\infty t r e^{-r t} \, dt = -t e^{-r t} \bigg|_0^\infty + \int_0^\infty e^{-r t} \, dt = 0 - \frac{1}{r} e^{-rt} \bigg|_0^\infty = \frac{1}{r}$
2. Integrating by parts again and using (a), we have $\E\left(T^2\right) = \int_0^\infty t^2 r e^{-r t} \, dt = -t^2 e^{-r t} \bigg|_0^\infty + \int_0^\infty 2 t e^{-r t} \, dt = 0 + \frac{2}{r^2}$ Hence $\var(T) = \frac{2}{r^2} - \frac{1}{r^2} = \frac{1}{r^2}$
Thus, for the exponential distribution, the mean and standard deviation are the same.
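Here is a simulation sketch of this fact, assuming numpy; the rate $r = 2$ is a hypothetical choice.

```python
# Sketch: for the exponential distribution, the empirical mean and standard
# deviation should both be close to 1/r.
import numpy as np

rng = np.random.default_rng(1)
r = 2.0
sample = rng.exponential(scale=1 / r, size=100_000)
print(sample.mean(), sample.std())   # both should be close to 1/r = 0.5
```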
In the gamma experiment, set $k = 1$ to get the exponential distribution. Vary $r$ with the scroll bar and note the size and location of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected values of $r$, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Suppose that $X$ has the exponential distribution with rate parameter $r \gt 0$. Compute the true value and the Chebyshev bound for the probability that $X$ is at least $k$ standard deviations away from the mean.
Answer
1. $e^{-(k+1)}$
2. $\frac{1}{k^2}$
The Pareto Distribution
Recall that the Pareto distribution is a continuous distribution on $[1, \infty)$ with probability density function $f$ given by $f(x) = \frac{a}{x^{a + 1}}, \quad x \in [1, \infty)$ where $a \in (0, \infty)$ is a parameter. The Pareto distribution is named for Vilfredo Pareto. It is a heavy-tailed distribution that is widely used to model financial variables such as income. The Pareto distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a$. Then
1. $\E(X) = \infty$ if $0 \lt a \le 1$ and $\E(X) = \frac{a}{a - 1}$ if $1\lt a \lt \infty$
2. $\var(X)$ is undefined if $0 \lt a \le 1$, $\var(X) = \infty$ if $1 \lt a \le 2$, and $\var(X) = \frac{a}{(a - 1)^2 (a - 2)}$ if $2 \lt a \lt \infty$
Proof
1. We proved this in the section on basic properties. Here it is again: $\E(X) = \int_1^\infty x \frac{a}{x^{a+1}} \, dx = \int_1^\infty \frac{a}{x^a} \, dx = \frac{a}{-a + 1} x^{-a + 1} \bigg|_1^\infty = \begin{cases} \infty, & 0 \lt a \lt 1 \ \frac{a}{a - 1}, & a \gt 1 \end{cases}$ When $a = 1$, $\E(X) = \int_1^\infty \frac{1}{x} \, dx = \ln x \bigg|_1^\infty = \infty$
2. If $0 \lt a \le 1$ then $\E(X) = \infty$ and so $\var(X)$ is undefined. On the other hand, $\E\left(X^2\right) = \int_1^\infty x^2 \frac{a}{x^{a+1}} \, dx = \int_1^\infty \frac{a}{x^{a-1}} \, dx = \frac{a}{2 - a} x^{-a + 2} \bigg|_1^\infty = \begin{cases} \infty, & 0 \lt a \lt 2 \ \frac{a}{a - 2}, & a \gt 2 \end{cases}$ When $a = 2$, $\E\left(X^2\right) = \int_1^\infty \frac{2}{x} \, dx = \infty$. Hence $\var(X) = \infty$ if $1 \lt a \le 2$ and $\var(X) = \frac{a}{a - 2} - \left(\frac{a}{a - 1}\right)^2$ if $a \gt 2$.
In the special distribution simulator, select the Pareto distribution. Vary $a$ with the scroll bar and note the size and location of the mean $\pm$ standard deviation bar. For each of the following values of $a$, run the experiment 1000 times and note the behavior of the empirical mean and standard deviation.
1. $a = 1$
2. $a = 2$
3. $a = 3$
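A simulation sketch of what you should see, assuming numpy: for $a = 1$ the mean is infinite and for $a = 2$ the variance is infinite, so the corresponding empirical quantities do not settle down. The sampler uses the inverse transform method: if $U$ is uniform on $(0, 1]$ then $U^{-1/a}$ has the Pareto distribution with shape parameter $a$.

```python
# Sketch of the heavy-tail behavior via inverse transform sampling.
import numpy as np

rng = np.random.default_rng(2)
for a in (1.0, 2.0, 3.0):
    u = 1.0 - rng.random(100_000)          # uniform on (0, 1]
    x = u ** (-1 / a)                      # Pareto with shape a
    print(f"a = {a}: mean = {x.mean():.3f}, sd = {x.std():.3f}")
# Only for a = 3 should these settle near a/(a - 1) = 1.5 and
# sqrt(a / ((a - 1)^2 (a - 2))) ≈ 0.866.
```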
The Normal Distribution
Recall that the standard normal distribution is a continuous distribution on $\R$ with probability density function $\phi$ given by
$\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$
Normal distributions are widely used to model physical measurements subject to small, random errors and are studied in detail in the chapter on Special Distributions.
Suppose that $Z$ has the standard normal distribution. Then
1. $\E(Z) = 0$
2. $\var(Z) = 1$
Proof
1. We proved this in the section on basic properties. Here it is again: $\E(Z) = \int_{-\infty}^\infty z \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2} \, dz = - \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2} \bigg|_{-\infty}^\infty = 0 - 0 = 0$
2. From (a), $\var(Z) = \E(Z^2) = \int_{-\infty}^\infty z^2 \phi(z) \, dz$. Integrate by parts with $u = z$ and $dv = z \phi(z) \, dz$. Thus, $du = dz$ and $v = -\phi(z)$. Hence $\var(Z) = -z \phi(z) \bigg|_{-\infty}^\infty + \int_{-\infty}^\infty \phi(z) \, dz = 0 + 1$
More generally, for $\mu \in \R$ and $\sigma \in (0, \infty)$, recall that the normal distribution with location parameter $\mu$ and scale parameter $\sigma$ is a continuous distribution on $\R$ with probability density function $f$ given by $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ Moreover, if $Z$ has the standard normal distribution, then $X = \mu + \sigma Z$ has the normal distribution with location parameter $\mu$ and scale parameter $\sigma$. As the notation suggests, the location parameter is the mean of the distribution and the scale parameter is the standard deviation.
Suppose that $X$ has the normal distribution with location parameter $\mu$ and scale parameter $\sigma$. Then
1. $\E(X) = \mu$
2. $\var(X) = \sigma^2$
Proof
We could use the probability density function, of course, but it's much better to use the representation of $X$ in terms of the standard normal variable $Z$, and use properties of expected value and variance.
1. $\E(X) = \mu + \sigma \E(Z) = \mu + 0 = \mu$
2. $\var(X) = \sigma^2 \var(Z) = \sigma^2 \cdot 1 = \sigma^2$.
So to summarize, if $X$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$, then its standard score $Z = (X - \mu) / \sigma$ has the standard normal distribution.
In the special distribution simulator, select the normal distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar in relation to the probability density function. For selected parameter values, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Beta Distributions
The distributions in this subsection belong to the family of beta distributions, which are widely used to model random proportions and probabilities. The beta distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has a beta distribution with probability density function $f$. In each case below, graph $f$ below and compute the mean and variance.
1. $f(x) = 6 x (1 - x)$ for $x \in [0, 1]$
2. $f(x) = 12 x^2 (1 - x)$ for $x \in [0, 1]$
3. $f(x) = 12 x (1 - x)^2$ for $x \in [0, 1]$
Answer
1. $\E(X) = \frac{1}{2}$, $\var(X) = \frac{1}{20}$
2. $\E(X) = \frac{3}{5}$, $\var(X) = \frac{1}{25}$
3. $\E(X) = \frac{2}{5}$, $\var(X) = \frac{1}{25}$
In the special distribution simulator, select the beta distribution. The parameter values below give the distributions in the previous exercise. In each case, note the location and size of the mean $\pm$ standard deviation bar. Run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
1. $a = 2$, $b = 2$
2. $a = 3$, $b = 2$
3. $a = 2$, $b = 3$
Suppose that a sphere has a random radius $R$ with probability density function $f$ given by $f(r) = 12 r ^2 (1 - r)$ for $r \in [0, 1]$. Find the mean and standard deviation of each of the following:
1. The circumference $C = 2 \pi R$
2. The surface area $A = 4 \pi R^2$
3. The volume $V = \frac{4}{3} \pi R^3$
Answer
1. $\frac{6}{5} \pi$, $\frac{2}{5} \pi$
2. $\frac{8}{5} \pi$, $\frac{2}{5} \sqrt{\frac{38}{7}} \pi$
3. $\frac{8}{21} \pi$, $\frac{8}{3} \sqrt{\frac{19}{1470}} \pi$
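A simulation check of these answers, assuming numpy: the density $12 r^2 (1 - r)$ is the beta density with parameters 3 and 2, so $R$ can be sampled directly.

```python
# Sketch: empirical mean and sd of the circumference, surface area, and
# volume of a sphere whose radius R has the beta(3, 2) distribution.
import numpy as np

rng = np.random.default_rng(3)
R = rng.beta(3, 2, size=1_000_000)
for name, W in [("C", 2 * np.pi * R),
                ("A", 4 * np.pi * R ** 2),
                ("V", 4 / 3 * np.pi * R ** 3)]:
    print(name, W.mean(), W.std())
# Expect means 6 pi / 5 ≈ 3.770, 8 pi / 5 ≈ 5.027, 8 pi / 21 ≈ 1.197.
```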
Suppose that $X$ has probability density function $f$ given by $f(x) = \frac{1}{\pi \sqrt{x (1 - x)}}$ for $x \in (0, 1)$. Find
1. $\E(X)$
2. $\var(X)$
Answer
1. $\frac{1}{2}$
2. $\frac{1}{8}$
The particular beta distribution in the last exercise is also known as the (standard) arcsine distribution. It governs the last time that the Brownian motion process hits 0 during the time interval $[0, 1]$. The arcsine distribution is studied in more generality in the chapter on Special Distributions.
Open the Brownian motion experiment and select the last zero. Note the location and size of the mean $\pm$ standard deviation bar in relation to the probability density function. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Suppose that the grades on a test are described by the random variable $Y = 100 X$ where $X$ has the beta distribution with probability density function $f$ given by $f(x) = 12 x (1 - x)^2$ for $x \in [0, 1]$. The grades are generally low, so the teacher decides to curve the grades using the transformation $Z = 10 \sqrt{Y} = 100 \sqrt{X}$. Find the mean and standard deviation of each of the following variables:
1. $X$
2. $Y$
3. $Z$
Answer
1. $\E(X) = \frac{2}{5}$, $\sd(X) = \frac{1}{5}$
2. $\E(Y) = 40$, $\sd(Y) = 20$
3. $\E(Z) = 60.95$, $\sd(Z) = 16.88$
Exercises on Basic Properties
Suppose that $X$ is a real-valued random variable with $\E(X) = 5$ and $\var(X) = 4$. Find each of the following:
1. $\var(3 X - 2)$
2. $\E(X^2)$
Answer
1. $36$
2. $29$
Suppose that $X$ is a real-valued random variable with $\E(X) = 2$ and $\E\left[X(X - 1)\right] = 8$. Find each of the following:
1. $\E(X^2)$
2. $\var(X)$
Answer
1. $10$
2. $6$
The expected value $\E\left[X(X - 1)\right]$ is an example of a factorial moment.
Suppose that $X_1$ and $X_2$ are independent, real-valued random variables with $\E(X_i) = \mu_i$ and $\var(X_i) = \sigma_i^2$ for $i \in \{1, 2\}$. Then
1. $\E\left(X_1 X_2\right) = \mu_1 \mu_2$
2. $\var\left(X_1 X_2\right) = \sigma_1^2 \sigma_2^2 + \sigma_1^2 \mu_2^2 + \sigma_2^2 \mu_1^2$
Proof
1. This is an important, basic result that was proved in the section on basic properties.
2. Since $X_1^2$ and $X_2^2$ are also independent, we have $\E\left(X_1^2 X_2^2\right) = \E\left(X_1^2\right) \E\left(X_2^2\right) = (\sigma_1^2 + \mu_1^2) (\sigma_2^2 + \mu_2^2)$. The result then follows from the computational formula and algebra.
Marilyn Vos Savant has an IQ of 228. Assuming that the distribution of IQ scores has mean 100 and standard deviation 15, find Marilyn's standard score.
Answer
$z = 8.53$
Fix $t \in (0, \infty)$. Suppose that $X$ is the discrete random variable with probability density function defined by $\P(X = t) = \P(X = -t) = p$, $\P(X = 0) = 1 - 2 p$, where $p \in (0, \frac{1}{2})$. Then equality holds in Chebyshev's inequality at $t$.
Proof
Note that $\E(X) = 0$ and $\var(X) = \E(X^2) = 2 p t^2$. So $\P(|X| \ge t) = 2 p$ and $\sigma^2 / t^2 = 2 p$.
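A numeric instance of the equality case; the values $t = 2$ and $p = 0.1$ are hypothetical choices with $0 \lt p \lt \frac{1}{2}$.

```python
# Sketch: P(|X| >= t) coincides with the Chebyshev bound var(X) / t^2.
t, p = 2.0, 0.1
var = 2 * p * t ** 2       # E(X) = 0, so var(X) = E(X^2) = 2 p t^2
prob = 2 * p               # P(|X - E(X)| >= t) = P(X = t) + P(X = -t)
print(prob, var / t ** 2)  # both equal 2p: Chebyshev's bound is attained
```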
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
As usual, our starting point is a random experiment, modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose that $X$ is a real-valued random variable for the experiment. Recall that the mean of $X$ is a measure of the center of the distribution of $X$. Furthermore, the variance of $X$ is the second moment of $X$ about the mean, and measures the spread of the distribution of $X$ about the mean. The third and fourth moments of $X$ about the mean also measure interesting (but more subtle) features of the distribution. The third moment measures skewness, the lack of symmetry, while the fourth moment measures kurtosis, roughly a measure of the fatness in the tails. The actual numerical measures of these characteristics are standardized to eliminate the physical units, by dividing by an appropriate power of the standard deviation. As usual, we assume that all expected values given below exist, and we will let $\mu = \E(X)$ and $\sigma^2 = \var(X)$. We assume that $\sigma \gt 0$, so that the random variable is really random.
Basic Theory
Skewness
The skewness of $X$ is the third moment of the standard score of $X$: $\skw(X) = \E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right]$ The distribution of $X$ is said to be positively skewed, negatively skewed or unskewed depending on whether $\skw(X)$ is positive, negative, or 0.
In the unimodal case, if the distribution is positively skewed then the probability density function has a long tail to the right, and if the distribution is negatively skewed then the probability density function has a long tail to the left. A symmetric distribution is unskewed.
Suppose that the distribution of $X$ is symmetric about $a$. Then
1. $\E(X) = a$
2. $\skw(X) = 0$.
Proof
By assumption, the distribution of $a - X$ is the same as the distribution of $X - a$. We proved part (a) in the section on properties of expected value. Thus, $\skw(X) = \E\left[(X - a)^3\right] \big/ \sigma^3$. But by symmetry and linearity, $\E\left[(X - a)^3\right] = \E\left[(a - X)^3\right] = - \E\left[(X - a)^3\right]$, so it follows that $\E\left[(X - a)^3\right] = 0$.
The converse is not true—a non-symmetric distribution can have skewness 0. Examples are given in Exercises (30) and (31) below.
$\skw(X)$ can be expressed in terms of the first three moments of $X$. $\skw(X) = \frac{\E\left(X^3\right) - 3 \mu \E\left(X^2\right) + 2 \mu^3}{\sigma^3} = \frac{\E\left(X^3\right) - 3 \mu \sigma^2 - \mu^3}{\sigma^3}$
Proof
Note that $(X - \mu)^3 = X^3 - 3 X^2 \mu + 3 X \mu^2 - \mu^3$. From the linearity of expected value we have $\E\left[(X - \mu)^3\right] = \E\left(X^3\right) - 3 \mu \E\left(X^2\right) + 3 \mu^2 \E(X) - \mu^3 = \E\left(X^3\right) - 3 \mu \E\left(X^2\right) + 2 \mu^3$ The second expression follows from substituting $\E\left(X^2\right) = \sigma^2 + \mu^2$.
Since skewness is defined in terms of an odd power of the standard score, it's invariant under a linear transformation with positive slope (a location-scale transformation of the distribution). On the other hand, if the slope is negative, skewness changes sign.
Suppose that $a \in \R$ and $b \in \R \setminus \{0\}$. Then
1. $\skw(a + b X) = \skw(X)$ if $b \gt 0$
2. $\skw(a + b X) = - \skw(X)$ if $b \lt 0$
Proof
Let $Z = (X - \mu) / \sigma$, the standard score of $X$. Recall from the section on variance that the standard score of $a + b X$ is $Z$ if $b \gt 0$ and is $-Z$ if $b \lt 0$.
Recall that location-scale transformations often arise when physical units are changed, such as inches to centimeters, or degrees Fahrenheit to degrees Celsius.
Kurtosis
The kurtosis of $X$ is the fourth moment of the standard score: $\kur(X) = \E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right]$
Kurtosis comes from the Greek word for bulging. Kurtosis is always positive, since we have assumed that $\sigma \gt 0$ (the random variable really is random), and therefore $\P(X \ne \mu) \gt 0$. In the unimodal case, the probability density function of a distribution with large kurtosis has fatter tails, compared with the probability density function of a distribution with smaller kurtosis.
$\kur(X)$ can be expressed in terms of the first four moments of $X$. $\kur(X) = \frac{\E\left(X^4\right) - 4 \mu \E\left(X^3\right) + 6 \mu^2 \E\left(X^2\right) - 3 \mu^4}{\sigma^4} = \frac{\E\left(X^4\right) - 4 \mu \E\left(X^3\right) + 6 \mu^2 \sigma^2 + 3 \mu^4}{\sigma^4}$
Proof
Note that $(X - \mu)^4 = X^4 - 4 X^3 \mu + 6 X^2 \mu^2 - 4 X \mu^3 + \mu^4$. From linearity of expected value, we have $\E\left[(X - \mu)^4\right] = \E\left(X^4\right) - 4 \mu \E\left(X^3\right) + 6 \mu^2 \E\left(X^2\right) - 4 \mu^3 \E(X) + \mu^4 = \E(X^4) - 4 \mu \E(X^3) + 6 \mu^2 \E(X^2) - 3 \mu^4$ The second expression follows from the substitution $\E\left(X^2\right) = \sigma^2 + \mu^2$.
Since kurtosis is defined in terms of an even power of the standard score, it's invariant under linear transformations.
Suppose that $a \in \R$ and $b \in \R \setminus\{0\}$. Then $\kur(a + b X) = \kur(X)$.
Proof
As before, let $Z = (X - \mu) / \sigma$ denote the standard score of $X$. Then the standard score of $a + b X$ is $Z$ if $b \gt 0$ and is $-Z$ if $b \lt 0$.
We will show below that the kurtosis of the standard normal distribution is 3. Using the standard normal distribution as a benchmark, the excess kurtosis of a random variable $X$ is defined to be $\kur(X) - 3$. Some authors use the term kurtosis to mean what we have defined as excess kurtosis.
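Before turning to the exercises, here is a small Python sketch implementing the computational formulas above for skewness and kurtosis in terms of the first four raw moments. It is tested on the indicator (Bernoulli) distribution treated in the next subsection, for which $\E(X^n) = p$ for all $n$.

```python
# A sketch of the computational formulas: skewness and kurtosis in terms
# of the raw moments m1 = E(X), ..., m4 = E(X^4).
def skew_kurt(m1, m2, m3, m4):
    var = m2 - m1 ** 2
    sd = var ** 0.5
    skew = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / sd ** 3
    kurt = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / sd ** 4
    return skew, kurt

# Test on an indicator variable with p = 1/4, where E(X^n) = p for all n;
# expect ((1 - 2p)/sqrt(p(1 - p)), (1 - 3p + 3p^2)/(p(1 - p))).
p = 0.25
print(skew_kurt(p, p, p, p))   # (1.1547..., 2.3333...)
```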
Computational Exercises
As always, be sure to try the exercises yourself before expanding the solutions and answers in the text.
Indicator Variables
Recall that an indicator random variable is one that just takes the values 0 and 1. Indicator variables are the building blocks of many counting random variables. The corresponding distribution is known as the Bernoulli distribution, named for Jacob Bernoulli.
Suppose that $X$ is an indicator variable with $\P(X = 1) = p$ where $p \in (0, 1)$. Then
1. $\E(X) = p$
2. $\var(X) = p (1 - p)$
3. $\skw(X) = \frac{1 - 2 p}{\sqrt{p (1 - p)}}$
4. $\kur(X) = \frac{1 - 3 p + 3 p^2}{p (1 - p)}$
Proof
Parts (a) and (b) have been derived before. All four parts follow easily from the fact that $X^n = X$ and hence $\E\left(X^n\right) = p$ for $n \in \N_+$.
Open the binomial coin experiment and set $n = 1$ to get an indicator variable. Vary $p$ and note the change in the shape of the probability density function.
Dice
Recall that a fair die is one in which the faces are equally likely. In addition to fair dice, there are various types of crooked dice. Here are three:
• An ace-six flat die is a six-sided die in which faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each.
• A two-five flat die is a six-sided die in which faces 2 and 5 have probability $\frac{1}{4}$ each while faces 1, 3, 4, and 6 have probability $\frac{1}{8}$ each.
• A three-four flat die is a six-sided die in which faces 3 and 4 have probability $\frac{1}{4}$ each while faces 1, 2, 5, and 6 have probability $\frac{1}{8}$ each.
A flat die, as the name suggests, is a die that is not a cube, but rather is shorter in one of the three directions. The particular probabilities that we use ($\frac{1}{4}$ and $\frac{1}{8}$) are fictitious, but the essential property of a flat die is that the opposite faces on the shorter axis have slightly larger probabilities than the other four faces. Flat dice are sometimes used by gamblers to cheat.
A standard, fair die is thrown and the score $X$ is recorded. Compute each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{35}{12}$
3. $0$
4. $\frac{303}{175}$
An ace-six flat die is thrown and the score $X$ is recorded. Compute each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{15}{4}$
3. $0$
4. $\frac{37}{25}$
A two-five flat die is thrown and the score $X$ is recorded. Compute each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{11}{4}$
3. $0$
4. $\frac{197}{121}$
A three-four flat die is thrown and the score $X$ is recorded. Compute each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{7}{2}$
2. $\frac{9}{4}$
3. $0$
4. $\frac{59}{27}$
All four die distributions above have the same mean $\frac{7}{2}$ and are symmetric (and hence have skewness 0), but differ in variance and kurtosis.
Open the dice experiment and set $n = 1$ to get a single die. Select each of the following, and note the shape of the probability density function in comparison with the computational results above. In each case, run the experiment 1000 times and compare the empirical density function to the probability density function.
1. fair
2. ace-six flat
3. two-five flat
4. three-four flat
Uniform Distributions
Recall that the continuous uniform distribution on a bounded interval corresponds to selecting a point at random from the interval. Continuous uniform distributions arise in geometric probability and a variety of other applied problems.
Suppose that $X$ has the uniform distribution on the interval $[a, b]$, where $a, \, b \in \R$ and $a \lt b$. Then
1. $\E(X) = \frac{1}{2}(a + b)$
2. $\var(X) = \frac{1}{12}(b - a)^2$
3. $\skw(X) = 0$
4. $\kur(X) = \frac{9}{5}$
Proof
Parts (a) and (b) we have seen before. For parts (c) and (d), recall that $X = a + (b - a)U$ where $U$ has the uniform distribution on $[0, 1]$ (the standard uniform distribution). Hence it follows from the formulas for skewness and kurtosis under linear transformations that $\skw(X) = \skw(U)$ and $\kur(X) = \kur(U)$. Since $\E(U^n) = 1/(n + 1)$ for $n \in \N_+$, it's easy to compute the skewness and kurtosis of $U$ from the computational formulas for skewness and kurtosis. Of course, the fact that $\skw(X) = 0$ also follows trivially from the symmetry of the distribution of $X$ about the mean.
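The moment computation in the proof is easy to carry out exactly, for instance with Python's `fractions` module (a sketch, not one of the text's applets).

```python
# Sketch of the proof's computation: E(U^n) = 1/(n + 1) for the standard
# uniform distribution gives skewness 0 and kurtosis 9/5 exactly.
from fractions import Fraction

m = [Fraction(1, n + 1) for n in range(5)]   # m[n] = E(U^n)
var = m[2] - m[1] ** 2
skew_num = m[3] - 3 * m[1] * m[2] + 2 * m[1] ** 3
kurt = (m[4] - 4 * m[1] * m[3] + 6 * m[1] ** 2 * m[2] - 3 * m[1] ** 4) / var ** 2
print(skew_num == 0, kurt)                   # True 9/5
```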
Open the special distribution simulator, and select the continuous uniform distribution. Vary the parameters and note the shape of the probability density function in comparison with the moment results in the last exercise. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The Exponential Distribution
Recall that the exponential distribution is a continuous distribution on $[0, \infty)$ with probability density function $f$ given by $f(t) = r e^{-r t}, \quad t \in [0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other arrival times. The exponential distribution is studied in detail in the chapter on the Poisson Process.
Suppose that $X$ has the exponential distribution with rate parameter $r \gt 0$. Then
1. $\E(X) = \frac{1}{r}$
2. $\var(X) = \frac{1}{r^2}$
3. $\skw(X) = 2$
4. $\kur(X) = 9$
Proof
These results follow from the computational formulas for skewness and kurtosis and the general moment formula $\E\left(X^n\right) = n! / r^n$ for $n \in \N$.
Note that the skewness and kurtosis do not depend on the rate parameter $r$. That's because $1 / r$ is a scale parameter for the exponential distribution.
Open the gamma experiment and set $n = 1$ to get the exponential distribution. Vary the rate parameter and note the shape of the probability density function in comparison to the moment results in the last exercise. For selected values of the parameter, run the experiment 1000 times and compare the empirical density function to the true probability density function.
Pareto Distribution
Recall that the Pareto distribution is a continuous distribution on $[1, \infty)$ with probability density function $f$ given by $f(x) = \frac{a}{x^{a + 1}}, \quad x \in [1, \infty)$ where $a \in (0, \infty)$ is a parameter. The Pareto distribution is named for Vilfredo Pareto. It is a heavy-tailed distribution that is widely used to model financial variables such as income. The Pareto distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a \gt 0$. Then
1. $\E(X) = \frac{a}{a - 1}$ if $a \gt 1$
2. $\var(X) = \frac{a}{(a - 1)^2 (a - 2)}$ if $a \gt 2$
3. $\skw(X) = \frac{2 (1 + a)}{a - 3} \sqrt{1 - \frac{2}{a}}$ if $a \gt 3$
4. $\kur(X) = \frac{3 (a - 2)(3 a^2 + a + 2)}{a (a - 3)(a - 4)}$ if $a \gt 4$
Proof
These results follow from the standard computational formulas for skewness and kurtosis and the general moment formula $\E\left(X^n\right) = \frac{a}{a - n}$ if $n \in \N$ and $n \lt a$.
Open the special distribution simulator and select the Pareto distribution. Vary the shape parameter and note the shape of the probability density function in comparison to the moment results in the last exercise. For selected values of the parameter, run the experiment 1000 times and compare the empirical density function to the true probability density function.
The Normal Distribution
Recall that the standard normal distribution is a continuous distribution on $\R$ with probability density function $\phi$ given by
$\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$
Normal distributions are widely used to model physical measurements subject to small, random errors and are studied in detail in the chapter on Special Distributions.
Suppose that $Z$ has the standard normal distribution. Then
1. $\E(Z) = 0$
2. $\var(Z) = 1$
3. $\skw(Z) = 0$
4. $\kur(Z) = 3$
Proof
Parts (a) and (b) were derived in the previous sections on expected value and variance. Part (c) follows from symmetry. For part (d), recall that $\E(Z^4) = 3 \E(Z^2) = 3$.
More generally, for $\mu \in \R$ and $\sigma \in (0, \infty)$, recall that the normal distribution with mean $\mu$ and standard deviation $\sigma$ is a continuous distribution on $\R$ with probability density function $f$ given by $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ However, we also know that $\mu$ and $\sigma$ are location and scale parameters, respectively. That is, if $Z$ has the standard normal distribution then $X = \mu + \sigma Z$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$.
If $X$ has the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$, then
1. $\skw(X) = 0$
2. $\kur(X) = 3$
Proof
The results follow immediately from the formulas for skewness and kurtosis under linear transformations and the previous result.
Open the special distribution simulator and select the normal distribution. Vary the parameters and note the shape of the probability density function in comparison to the moment results in the last exercise. For selected values of the parameters, run the experiment 1000 times and compare the empirical density function to the true probability density function.
The Beta Distribution
The distributions in this subsection belong to the family of beta distributions, which are continuous distributions on $[0, 1]$ widely used to model random proportions and probabilities. The beta distribution is studied in detail in the chapter on Special Distributions.
Suppose that $X$ has probability density function $f$ given by $f(x) = 6 x (1 - x)$ for $x \in [0, 1]$. Find each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{1}{2}$
2. $\frac{1}{20}$
3. $0$
4. $\frac{15}{7}$
Suppose that $X$ has probability density function $f$ given by $f(x) = 12 x^2 (1 - x)$ for $x \in [0, 1]$. Find each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{3}{5}$
2. $\frac{1}{25}$
3. $-\frac{2}{7}$
4. $\frac{33}{14}$
Suppose that $X$ has probability density function $f$ given by $f(x) = 12 x (1 - x)^2$ for $x \in [0, 1]$. Find each of the following:
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{2}{5}$
2. $\frac{1}{25}$
3. $\frac{2}{7}$
4. $\frac{33}{14}$
Open the special distribution simulator and select the beta distribution. Select the parameter values below to get the distributions in the last three exercises. In each case, note the shape of the probability density function in relation to the calculated moment results. Run the simulation 1000 times and compare the empirical density function to the probability density function.
1. $a = 2$, $b = 2$
2. $a = 3$, $b = 2$
3. $a = 2$, $b = 3$
Suppose that $X$ has probability density function $f$ given by $f(x) = \frac{1}{\pi \sqrt{x (1 - x)}}$ for $x \in (0, 1)$. Find
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. $\frac{1}{2}$
2. $\frac{1}{8}$
3. 0
4. $\frac{3}{2}$
The particular beta distribution in the last exercise is also known as the (standard) arcsine distribution. It governs the last time that the Brownian motion process hits 0 during the time interval $[0, 1]$. The arcsine distribution is studied in more generality in the chapter on Special Distributions.
Open the Brownian motion experiment and select the last zero. Note the shape of the probability density function in relation to the moment results in the last exercise. Run the simulation 1000 times and compare the empirical density function to the probability density function.
Counterexamples
The following exercise gives a simple example of a discrete distribution that is not symmetric but has skewness 0.
Suppose that $X$ is a discrete random variable with probability density function $f$ given by $f(-3) = \frac{1}{10}$, $f(-1) = \frac{1}{2}$, $f(2) = \frac{2}{5}$. Find each of the following and then show that the distribution of $X$ is not symmetric.
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Answer
1. 0
2. 3
3. 0
4. $\frac{5}{3}$
The PDF $f$ is clearly not symmetric about 0, and the mean is the only possible point of symmetry.
The following exercise gives a more complicated continuous distribution that is not symmetric but has skewness 0. It is one of a collection of distributions constructed by Erik Meijer.
Suppose that $U$, $V$, and $I$ are independent random variables, and that $U$ is normally distributed with mean $\mu = -2$ and variance $\sigma^2 = 1$, $V$ is normally distributed with mean $\nu = 1$ and variance $\tau^2 = 2$, and $I$ is an indicator variable with $\P(I = 1) = p = \frac{1}{3}$. Let $X = I U + (1 - I) V$. Find each of the following and then show that the distribution of $X$ is not symmetric.
1. $\E(X)$
2. $\var(X)$
3. $\skw(X)$
4. $\kur(X)$
Solution
The distribution of $X$ is a mixture of normal distributions. The PDF is $f = p g + (1 - p) h$ where $g$ is the normal PDF of $U$ and $h$ is the normal PDF of $V$. However, it's best to work with the random variables. For $n \in \N_+$, note that $I^n = I$ and $(1 - I)^n = 1 - I$ and note also that the random variable $I (1 - I)$ just takes the value 0. It follows that $X^n = I U^n + (1 - I) V^n, \quad n \in \N_+$ So now, using standard results for the normal distribution,
1. $\E(X) = p \mu + (1 - p) \nu = 0$.
2. $\var(X) = \E(X^2) = p (\sigma^2 + \mu^2) + (1 - p) (\tau^2 + \nu^2) = \frac{11}{3}$
3. $\E(X^3) = p (3 \mu \sigma^2 + \mu^3) + (1 - p)(3 \nu \tau^2 + \nu^3) = 0$ so $\skw(X) = 0$
4. $\E(X^4) = p(3 \sigma^4 + 6 \sigma^2 \mu^2 + \mu^4) + (1 - p) (3 \tau^4 + 6 \tau^2 \nu^2 + \nu^4) = 31$ so $\kur(X) = \frac{279}{121} \approx 2.306$
The graph of the PDF $f$ of $X$ is given below. Note that $f$ is not symmetric about 0. (Again, the mean is the only possible point of symmetry.)
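A simulation sketch of this counterexample, assuming numpy: the sample skewness of the mixture should be near 0 even though the distribution is not symmetric, and the sample kurtosis near $279/121 \approx 2.306$.

```python
# Sketch of Meijer's mixture: X = I U + (1 - I) V with the parameters above.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
I = rng.random(n) < 1 / 3
X = np.where(I, rng.normal(-2, 1, n), rng.normal(1, np.sqrt(2), n))
Z = (X - X.mean()) / X.std()
print(np.mean(Z ** 3), np.mean(Z ** 4))   # approx 0 and 2.306
```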
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\mse}{\text{mse}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
Recall that by taking the expected value of various transformations of a random variable, we can measure many interesting characteristics of the distribution of the variable. In this section, we will study an expected value that measures a special type of relationship between two real-valued variables. This relationship is very important both in probability and statistics.
Basic Theory
Definitions
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. Unless otherwise noted, we assume that all expected values mentioned in this section exist. Suppose now that $X$ and $Y$ are real-valued random variables for the experiment (that is, defined on the probability space) with means $\E(X)$, $\E(Y)$ and variances $\var(X)$, $\var(Y)$, respectively.
The covariance of $(X, Y)$ is defined by $\cov(X, Y) = \E\left(\left[X - \E(X)\right]\left[Y - \E(Y)\right]\right)$ and, assuming the variances are positive, the correlation of $(X, Y)$ is defined by $\cor(X, Y) = \frac{\cov(X, Y)}{\sd(X) \sd(Y)}$
1. If $\cov(X, Y) \gt 0$ then $X$ and $Y$ are positively correlated.
2. If $\cov(X, Y) \lt 0$ then $X$ and $Y$ are negatively correlated.
3. If $\cov(X, Y) = 0$ then $X$ and $Y$ are uncorrelated.
Correlation is a scaled version of covariance; note that the two parameters always have the same sign (positive, negative, or 0). Note also that correlation is dimensionless, since the numerator and denominator have the same physical units, namely the product of the units of $X$ and $Y$.
As these terms suggest, covariance and correlation measure a certain kind of dependence between the variables. One of our goals is a deeper understanding of this dependence. As a start, note that $\left(\E(X), \E(Y)\right)$ is the center of the joint distribution of $(X, Y)$, and the vertical and horizontal lines through this point separate $\R^2$ into four quadrants. The function $(x, y) \mapsto \left[x - \E(X)\right]\left[y - \E(Y)\right]$ is positive on the first and third quadrants and negative on the second and fourth.
Properties of Covariance
The following theorems give some basic properties of covariance. The main tool that we will need is the fact that expected value is a linear operation. Other important properties will be derived below, in the subsection on the best linear predictor. As usual, be sure to try the proofs yourself before reading the ones in the text. Once again, we assume that the random variables are defined on the common sample space, are real-valued, and that the indicated expected values exist (as real numbers).
Our first result is a formula that is better than the definition for computational purposes, but gives less insight.
$\cov(X, Y) = \E(X Y) - \E(X) \E(Y)$.
Proof
Let $\mu = \E(X)$ and $\nu = \E(Y)$. Then
$\cov(X, Y) = \E\left[(X - \mu)(Y - \nu)\right] = \E(X Y - \mu Y - \nu X + \mu \nu) = \E(X Y) - \mu \E(Y) - \nu \E(X) + \mu \nu = \E(X Y) - \mu \nu$
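The computational formula is also the natural way to compute covariance from data. Here is a minimal sketch, assuming numpy; the relationship between `X` and `Y` is a hypothetical example.

```python
# Sketch: cov(X, Y) computed as E(XY) - E(X)E(Y) agrees with numpy's
# covariance (with ddof=0, the population form).
import numpy as np

rng = np.random.default_rng(5)
X = rng.random(100_000)
Y = 2 * X + rng.normal(0, 0.1, X.size)   # a hypothetical pair of variables
cov = np.mean(X * Y) - X.mean() * Y.mean()
print(cov, np.cov(X, Y, ddof=0)[0, 1])   # the two values agree
```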
From (2), we see that $X$ and $Y$ are uncorrelated if and only if $\E(X Y) = \E(X) \E(Y)$, so here is a simple but important corollary:
If $X$ and $Y$ are independent, then they are uncorrelated.
Proof
We showed in Section 1 that if $X$ and $Y$ are independent then $\E(X Y) = \E(X) \E(Y)$.
However, the converse fails with a passion: Exercise (31) gives an example of two variables that are functionally related (the strongest form of dependence), yet uncorrelated. The computational exercises give other examples of dependent yet uncorrelated variables also. Note also that if one of the variables has mean 0, then the covariance is simply the expected product.
Trivially, covariance is a symmetric operation.
$\cov(X, Y) = \cov(Y, X)$.
As the name suggests, covariance generalizes variance.
$\cov(X, X) = \var(X)$.
Proof
Let $\mu = \E(X)$. Then $\cov(X, X) = \E\left[(X - \mu)^2\right] = \var(X)$.
Covariance is a linear operation in the first argument, if the second argument is fixed.
If $X$, $Y$, $Z$ are random variables, and $c$ is a constant, then
1. $\cov(X + Y, Z) = \cov(X, Z) + \cov(Y, Z)$
2. $\cov(c X, Y) = c \, \cov(X, Y)$
Proof
We use the computational formula in (2)
1. \begin{align} \cov(X + Y, Z) & = \E\left[(X + Y) Z\right] - \E(X + Y) \E(Z) = \E(X Z + Y Z) - \left[\E(X) + \E(Y)\right] \E(Z) \ & = \left[\E(X Z) - \E(X) \E(Z)\right] + \left[\E(Y Z) - \E(Y) \E(Z)\right] = \cov(X, Z) + \cov(Y, Z) \end{align}
2. $\cov(c X, Y) = \E(c X Y) - \E(c X) \E(Y) = c \E(X Y) - c \E(X) \E(Y) = c \left[\E(X Y) - \E(X) \E(Y)\right] = c \, \cov(X, Y)$
By symmetry, covariance is also a linear operation in the second argument, with the first argument fixed. Thus, the covariance operator is bi-linear. The general version of this property is given in the following theorem.
Suppose that $(X_1, X_2, \ldots, X_n)$ and $(Y_1, Y_2, \ldots, Y_m)$ are sequences of random variables, and that $(a_1, a_2, \ldots, a_n)$ and $(b_1, b_2, \ldots, b_m)$ are constants. Then $\cov\left(\sum_{i=1}^n a_i \, X_i, \sum_{j=1}^m b_j \, Y_j\right) = \sum_{i=1}^n \sum_{j=1}^m a_i \, b_j \, \cov(X_i, Y_j)$
The following result shows how covariance is changed under a linear transformation of one of the variables. This is simply a special case of the basic properties, but is worth stating.
If $a, \, b \in \R$ then $\cov(a + bX, Y) = b \, \cov(X, Y)$.
Proof
A constant is independent of any random variable. Hence $\cov(a + b X, Y) = \cov(a, Y) + b \, \cov(X, Y) = b \, \cov(X, Y)$.
Of course, by symmetry, the same property holds in the second argument. Putting the two together we have that if $a, \, b, \, c, \, d \in \R$ then $\cov(a + b X, c + d Y) = b d \, \cov(X, Y)$.
Properties of Correlation
Next we will establish some basic properties of correlation. Most of these follow easily from corresponding properties of covariance above. We assume that $\var(X) \gt 0$ and $\var(Y) \gt 0$, so that the random variables really are random, and hence the correlation is well defined.
The correlation between $X$ and $Y$ is the covariance of the corresponding standard scores: $\cor(X, Y) = \cov\left(\frac{X - \E(X)}{\sd(X)}, \frac{Y - \E(Y)}{\sd(Y)}\right) = \E\left(\frac{X - \E(X)}{\sd(X)} \frac{Y - \E(Y)}{\sd(Y)}\right)$
Proof
From the definitions and the linearity of expected value, $\cor(X, Y) = \frac{\cov(X, Y)}{\sd(X) \sd(Y)} = \frac{\E\left(\left[X - \E(X)\right]\left[Y - \E(Y)\right]\right)}{\sd(X) \sd(Y)} = \E\left(\frac{X - \E(X)}{\sd(X)} \frac{Y - \E(Y)}{\sd(Y)}\right)$ Since the standard scores have mean 0, this is also the covariance of the standard scores.
This shows again that correlation is dimensionless, since of course, the standard scores are dimensionless. Also, correlation is symmetric:
$\cor(X, Y) = \cor(Y, X)$.
Under a linear transformation of one of the variables, the correlation is unchanged if the slope is positive and changes sign if the slope is negative:
If $a, \, b \in \R$ and $b \ne 0$ then
1. $\cor(a + b X, Y) = \cor(X, Y)$ if $b \gt 0$
2. $\cor(a + b X, Y) = - \cor(X, Y)$ if $b \lt 0$
Proof
Let $Z$ denote the standard score of $X$. If $b \gt 0$, the standard score of $a + b X$ is also $Z$. If $b \lt 0$, the standard score of $a + b X$ is $-Z$. Hence the result follows from the result above for standard scores.
This result reinforces the fact that correlation is a standardized measure of association, since multiplying the variable by a positive constant is equivalent to a change of scale, and adding a constant to a variable is equivalent to a change of location. For example, in the Challenger data, the underlying variables are temperature at the time of launch (in degrees Fahrenheit) and O-ring erosion (in millimeters). The correlation between these two variables is of fundamental importance. If we decide to measure temperature in degrees Celsius and O-ring erosion in inches, the correlation is unchanged. Of course, the same property holds in the second argument, so if $a, \, b, \, c, \, d \in \R$ with $b \ne 0$ and $d \ne 0$, then $\cor(a + b X, c + d Y) = \cor(X, Y)$ if $b d \gt 0$ and $\cor(a + b X, c + d Y) = -\cor(X, Y)$ if $b d \lt 0$.
The most important properties of covariance and correlation will emerge from our study of the best linear predictor below.
The Variance of a Sum
We will now show that the variance of a sum of variables is the sum of the pairwise covariances. This result is very useful since many random variables with special distributions can be written as sums of simpler random variables (see in particular the binomial distribution and hypergeometric distribution below).
If $(X_1, X_2, \ldots, X_n)$ is a sequence of real-valued random variables then $\var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j) = \sum_{i=1}^n \var(X_i) + 2 \sum_{\{(i, j): i \lt j\}} \cov(X_i, X_j)$
Proof
From the variance property (5) and the linear property (7), $\var\left(\sum_{i=1}^n X_i\right) = \cov\left(\sum_{i=1}^n X_i, \sum_{j=1}^n X_j\right) = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j)$ The second expression follows since $\cov(X_i, X_i) = \var(X_i)$ for each $i$ and $\cov(X_i, X_j) = \cov(X_j, X_i)$ for $i \ne j$ by the symmetry property (4).
Note that the variance of a sum can be larger than, smaller than, or equal to the sum of the variances, depending on the signs of the covariance terms. As a special case of (12), when $n = 2$, we have $\var(X + Y) = \var(X) + \var(Y) + 2 \, \cov(X, Y)$ The following corollary is very important.
If $(X_1, X_2, \ldots, X_n)$ is a sequence of pairwise uncorrelated, real-valued random variables then $\var\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \var(X_i)$
Proof
This follows immediately from (12), since $\cov(X_i, X_j) = 0$ for $i \ne j$.
Note that the last result holds, in particular, if the random variables are independent. We close this discussion with a couple of minor corollaries.
If $X$ and $Y$ are real-valued random variables then $\var(X + Y) + \var(X - Y) = 2 \, [\var(X) + \var(Y)]$.
Proof
From (12), $\var(X + Y) = \var(X) + \var(Y) + 2 \cov(X, Y)$ Similarly, $\var(X - Y) = \var(X) + \var(-Y) + 2 \cov(X, - Y) = \var(X) + \var(Y) - 2 \cov(X, Y)$ Adding gives the result.
If $X$ and $Y$ are real-valued random variables with $\var(X) = \var(Y)$ then $X + Y$ and $X - Y$ are uncorrelated.
Proof
From the linear property (7) and the symmetry property (4), $\cov(X + Y, X - Y) = \cov(X, X) - \cov(X, Y) + \cov(Y, X) - \cov(Y, Y) = \var(X) - \var(Y)$
Random Samples
In the following exercises, suppose that $(X_1, X_2, \ldots)$ is a sequence of independent, real-valued random variables with a common distribution that has mean $\mu$ and standard deviation $\sigma \gt 0$. In statistical terms, the variables form a random sample from the common distribution.
For $n \in \N_+$, let $Y_n = \sum_{i=1}^n X_i$.
1. $\E\left(Y_n\right) = n \mu$
2. $\var\left(Y_n\right) = n \sigma^2$
Proof
1. This follows from the additive property of expected value.
2. This follows from the additive property of variance (13), since independent variables are uncorrelated.
For $n \in \N_+$, let $M_n = Y_n \big/ n = \frac{1}{n} \sum_{i=1}^n X_i$, so that $M_n$ is the sample mean of $(X_1, X_2, \ldots, X_n)$.
1. $\E\left(M_n\right) = \mu$
2. $\var\left(M_n\right) = \sigma^2 / n$
3. $\var\left(M_n\right) \to 0$ as $n \to \infty$
4. $\P\left(\left|M_n - \mu\right| \gt \epsilon\right) \to 0$ as $n \to \infty$ for every $\epsilon \gt 0$.
Proof
1. This follows from part (a) of (16) and the scaling property of expected value.
2. This follows from part (b) of (16) and the scaling property of variance.
3. This is an immediate consequence of (b).
4. This follows from (c) and Chebyshev's inequality: $\P\left(\left|M_n - \mu\right| \gt \epsilon\right) \le \var(M_n) \big/ \epsilon^2 \to 0$ as $n \to \infty$
Part (c) of (17) means that $M_n \to \mu$ as $n \to \infty$ in mean square. Part (d) means that $M_n \to \mu$ as $n \to \infty$ in probability. These are both versions of the weak law of large numbers, one of the fundamental theorems of probability.
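Here is a simulation sketch of the weak law, assuming numpy: for an exponential sample with mean 1, the probability that the sample mean deviates from 1 by more than 0.1 shrinks as $n$ grows.

```python
# Sketch: estimate P(|M_n - mu| > 0.1) from 1000 replications of the
# sample mean, for increasing sample sizes n.
import numpy as np

rng = np.random.default_rng(6)
for n in (10, 100, 10_000):
    means = rng.exponential(1.0, size=(1000, n)).mean(axis=1)
    print(n, np.mean(np.abs(means - 1.0) > 0.1))
```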
The standard score of the sum $Y_n$ and the standard score of the sample mean $M_n$ are the same: $Z_n = \frac{Y_n - n \, \mu}{\sqrt{n} \, \sigma} = \frac{M_n - \mu}{\sigma / \sqrt{n}}$
1. $\E(Z_n) = 0$
2. $\var(Z_n) = 1$
Proof
The equality of the standard score of $Y_n$ and of $Z_n$ is a result of simple algebra. But recall more generally that the standard score of a variable is unchanged by a linear transformation of the variable with positive slope (a location-scale transformation of the distribution). Of course, parts (a) and (b) are true for any standard score.
The central limit theorem, the other fundamental theorem of probability, states that the distribution of $Z_n$ converges to the standard normal distribution as $n \to \infty$.
Events
If $A$ and $B$ are events in our random experiment then the covariance and correlation of $A$ and $B$ are defined to be the covariance and correlation, respectively, of their indicator random variables.
If $A$ and $B$ are events, define $\cov(A, B) = \cov(\bs 1_A, \bs 1_B)$ and $\cor(A, B) = \cor(\bs 1_A, \bs 1_B)$. Equivalently,
1. $\cov(A, B) = \P(A \cap B) - \P(A) \P(B)$
2. $\cor(A, B) = \left[\P(A \cap B) - \P(A) \P(B)\right] \big/ \sqrt{\P(A)\left[1 - \P(A)\right] \P(B)\left[1 - \P(B)\right]}$
Proof
Recall that if $X$ is an indicator variable with $\P(X = 1) = p$, then $\E(X) = p$ and $\var(X) = p (1 - p)$. Also, if $X$ and $Y$ are indicator variables then $X Y$ is an indicator variable and $\P(X Y = 1) = \P(X = 1, Y = 1)$. The results then follow from the definitions.
In particular, note that $A$ and $B$ are positively correlated, negatively correlated, or independent, respectively (as defined in the section on conditional probability) if and only if the indicator variables of $A$ and $B$ are positively correlated, negatively correlated, or uncorrelated, as defined in this section.
If $A$ and $B$ are events then
1. $\cov(A, B^c) = -\cov(A, B)$
2. $\cov(A^c, B^c) = \cov(A, B)$
Proof
These results follow from linear property (7) and the fact that that $\bs 1_{A^c} = 1 - \bs 1_A$.
If $A$ and $B$ are events with $A \subseteq B$ then
1. $\cov(A, B) = \P(A)[1 - \P(B)]$
2. $\cor(A, B) = \sqrt{\P(A)\left[1 - \P(B)\right] \big/ \P(B)\left[1 - \P(A)\right]}$
Proof
These results follow from (19), since $A \cap B = A$.
In the language of the experiment, $A \subseteq B$ means that $A$ implies $B$. In such a case, the events are positively correlated, which is not surprising.
The Best Linear Predictor
What linear function of $X$ (that is, a function of the form $a + b X$ where $a, \, b \in \R$) is closest to $Y$ in the sense of minimizing mean square error? The question is fundamentally important in the case where random variable $X$ (the predictor variable) is observable and random variable $Y$ (the response variable) is not. The linear function can be used to estimate $Y$ from an observed value of $X$. Moreover, the solution will have the added benefit of showing that covariance and correlation measure the linear relationship between $X$ and $Y$. To avoid trivial cases, let us assume that $\var(X) \gt 0$ and $\var(Y) \gt 0$, so that the random variables really are random. The solution to our problem turns out to be the linear function of $X$ with the same expected value as $Y$, and whose covariance with $X$ is the same as that of $Y$.
The random variable $L(Y \mid X)$ defined as follows is the only linear function of $X$ satisfying properties (a) and (b). $L(Y \mid X) = \E(Y) + \frac{\cov(X, Y)}{\var(X)} \left[X - \E(X)\right]$
1. $\E\left[L(Y \mid X)\right] = \E(Y)$
2. $\cov\left[X, L(Y \mid X) \right] = \cov(X, Y)$
Proof
By the linearity of expected value, $\E\left[L(Y \mid X)\right] = \E(Y) + \frac{\cov(X, Y)}{\var(X)} \left[\E(X) - \E(X)\right] = \E(Y)$ Next, by the linearity of covariance and the fact that a constant is independent (and hence uncorrelated) with any random variable, $\cov\left[X, L(Y \mid X)\right] = \frac{\cov(X, Y)}{\var(X)} \cov(X, X) = \frac{\cov(X, Y)}{\var(X)} \var(X) = \cov(X, Y)$ Conversely, suppose that $U = a + b X$ satisfies $\E(U) = \E(Y)$ and $\cov(X, U) = \cov(Y, U)$. Again using linearity of covariance and the uncorrelated property of constants, the second equation gives $b \, \cov(X, X) = \cov(X, Y)$ so $b = \cov(X, Y) \big/ \var(X)$. Then the first equation gives $a = \E(Y) - b \E(X)$, so $U = L(Y \mid X)$.
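Here is a sketch computing $L(Y \mid X)$ from simulated data, assuming numpy; the model for $Y$ is hypothetical, and the computed intercept and slope agree with ordinary least squares.

```python
# Sketch of the best linear predictor: slope cov(X, Y)/var(X) and intercept
# E(Y) - slope * E(X), estimated from data.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(0, 1, 100_000)
Y = 3 + 2 * X + rng.normal(0, 1, X.size)    # hypothetical model for Y
b = np.cov(X, Y, ddof=0)[0, 1] / X.var()    # cov(X, Y) / var(X)
a = Y.mean() - b * X.mean()                 # E(Y) - b E(X)
print(a, b)                                 # close to 3 and 2
print(np.polyfit(X, Y, 1))                  # [slope, intercept] from least squares
```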
Note that in the presence of part (a), part (b) is equivalent to $\E\left[X L(Y \mid X)\right] = \E(X Y)$. Here is another minor variation, but one that will be very useful: $L(Y \mid X)$ is the only linear function of $X$ with the same mean as $Y$ and with the property that $Y - L(Y \mid X)$ is uncorrelated with every linear function of $X$.
$L(Y \mid X)$ is the only linear function of $X$ that satisfies
1. $\E\left[L(Y \mid X)\right] = \E(Y)$
2. $\cov\left[Y - L(Y \mid X), U\right] = 0$ for every linear function $U$ of $X$.
Proof
Of course part (a) is the same as part (a) of (22). Suppose that $U = a + b X$ where $a, \, b \in \R$. From basic properties of covariance and the previous result, $\cov\left[Y - L(Y \mid X), U\right] = b \, \cov\left[Y - L(Y \mid X), X\right] = b \left(\cov(Y, X) - \cov\left[L(Y \mid X), X\right]\right) = 0$ Conversely, suppose that $V$ is a linear function of $X$ and that $\E(V) = \E(Y)$ and $\cov(Y - V, U) = 0$ for every linear function $U$ of $X$. Letting $U = X$ we have $\cov(Y - V, X) = 0$ so $\cov(V, X) = \cov(Y, X)$. Hence $V = L(Y \mid X)$ by (22).
The variance of $L(Y \mid X)$ and its covariance with $Y$ turn out to be the same.
Additional properties of $L(Y \mid X)$:
1. $\var\left[L(Y \mid X)\right] = \cov^2(X, Y) \big/ \var(X)$
2. $\cov\left[L(Y \mid X), Y\right] = \cov^2(X, Y) \big/ \var(X)$
Proof
1. From basic properties of variance, $\var\left[L(Y \mid X)\right] = \left[\frac{\cov(X, Y)}{\var(X)}\right]^2 \var(X) = \frac{\cov^2(X, Y)}{\var(X)}$
2. From basic properties of covariance, $\cov\left[L(Y \mid X), Y\right] = \frac{\cov(X, Y)}{\var(X)} \cov(X, Y) = \frac{\cov^2(X, Y)}{\var(X)}$
We can now prove the fundamental result that $L(Y \mid X)$ is the linear function of $X$ that is closest to $Y$ in the mean square sense. We give two proofs; the first is more straightforward, but the second is more interesting and elegant.
Suppose that $U$ is a linear function of $X$. Then
1. $\E\left(\left[Y - L(Y \mid X)\right]^2\right) \le \E\left[(Y - U)^2\right]$
2. Equality occurs in (a) if and only if $U = L(Y \mid X)$ with probability 1.
Proof from calculus
Let $\mse(a, b)$ denote the mean square error when $U = a + b \, X$ is used as an estimator of $Y$, as a function of the parameters $a, \, b \in \R$: $\mse(a, b) = \E\left(\left[Y - (a + b \, X)\right]^2 \right)$ Expanding the square and using the linearity of expected value gives $\mse(a, b) = a^2 + b^2 \E(X^2) + 2 a b \E(X) - 2 a \E(Y) - 2 b \E(X Y) + \E(Y^2)$ In terms of the variables $a$ and $b$, the first three terms are the second-order terms, the next two are the first-order terms, and the last is the zero-order term. The second-order terms define a quadratic form whose standard symmetric matrix is $\left[\begin{matrix} 1 & \E(X) \ \E(X) & \E(X^2) \end{matrix} \right]$ The determinant of this matrix is $\E(X^2) - [\E(X)]^2 = \var(X)$ and the diagonal terms are positive. All of this means that the graph of $\mse$ is a paraboloid opening upward, so the minimum of $\mse$ will occur at the unique critical point. Setting the first derivatives of $\mse$ to 0 we have \begin{align} -2 \E(Y) + 2 b \E(X) + 2 a & = 0 \ -2 \E(X Y) + 2 b \E\left(X^2\right) + 2 a \E(X) & = 0 \end{align} Solving the first equation for $a$ gives $a = \E(Y) - b \E(X)$. Substituting this into the second equation and solving gives $b = \cov(X, Y) \big/ \var(X)$.
Proof using properties
1. We abbreviate $L(Y \mid X)$ by $L$ for simplicity. Suppose that $U$ is a linear function of $X$. Then $\E\left[(Y - U)^2\right] = \E\left(\left[(Y - L) + (L - U)\right]^2\right) = \E\left[(Y - L)^2\right] + 2 \E\left[(Y - L)(L - U)\right] + \E\left[(L - U)^2\right]$ Since $Y - L$ has mean 0, the middle term is $\cov(Y - L, L - U)$. But $L$ and $U$ are linear functions of $X$ and hence so is $L - U$. Thus $\cov(Y - L, L - U) = 0$ by (23). Hence $\E\left[(Y - U)^2\right] = \E\left[(Y - L)^2\right] + \E\left[(L - U)^2\right] \ge \E\left[(Y - L)^2\right]$
2. Equality occurs in (a) if and only if $\E\left[(L - U)^2\right] = 0$, if and only if $\P(L = U) = 1$.
The mean square error when $L(Y \mid X)$ is used as a predictor of $Y$ is $\E\left(\left[Y - L(Y \mid X)\right]^2 \right) = \var(Y)\left[1 - \cor^2(X, Y)\right]$
Proof
Again, let $L = L(Y \mid X)$ for convenience. Since $Y - L$ has mean 0, $\E\left[(Y - L)^2\right] = \var(Y - L) = \var(Y) - 2 \cov(L, Y) + \var(L)$ But $\cov(L, Y) = \var(L) = \cov^2(X, Y) \big/ \var(X)$ by (24). Hence $\E\left[(Y - L)^2\right] = \var(Y) - \frac{\cov^2(X, Y)}{\var(X)} = \var(Y) \left[1 - \frac{\cov^2(X, Y)}{\var(X) \var(Y)}\right] = \var(Y) \left[1 - \cor^2(X, Y)\right]$
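The identity above is easy to check numerically. Here is a minimal Monte Carlo sketch (not from the text; the linear-plus-noise model $Y = 2X + Z$, with $X$ and $Z$ independent standard normal, is a hypothetical example) that estimates the mean square error of $L(Y \mid X)$ and compares it with $\var(Y)\left[1 - \cor^2(X, Y)\right]$:

```python
# Monte Carlo sketch: mean square error of the best linear predictor
# versus var(Y) * (1 - cor(X, Y)^2). The model Y = 2X + Z is a
# hypothetical example, not anything from the text.
import numpy as np

rng = np.random.default_rng(17)
n = 1_000_000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)

b = np.cov(x, y, ddof=0)[0, 1] / np.var(x)   # slope cov(X, Y) / var(X)
a = y.mean() - b * x.mean()                  # intercept E(Y) - b E(X)
mse = np.mean((y - (a + b * x)) ** 2)        # E([Y - L(Y|X)]^2)

rho = np.corrcoef(x, y)[0, 1]
print(mse, np.var(y) * (1 - rho ** 2))       # the two agree up to sampling error
```

For this hypothetical model, $\var(Y) = 5$ and $\cor^2(X, Y) = 4/5$, so both printed values should be close to 1.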
Our solution to the best linear predictor problem yields important properties of covariance and correlation.
Additional properties of covariance and correlation:
1. $-1 \le \cor(X, Y) \le 1$
2. $-\sd(X) \sd(Y) \le \cov(X, Y) \le \sd(X) \sd(Y)$
3. $\cor(X, Y) = 1$ if and only if, with probability 1, $Y$ is a linear function of $X$ with positive slope.
4. $\cor(X, Y) = - 1$ if and only if, with probability 1, $Y$ is a linear function of $X$ with negative slope.
Proof
Since mean square error is nonnegative, it follows from (26) that $\cor^2(X, Y) \le 1$. This gives parts (a) and (b). For parts (c) and (d), note that if $\cor^2(X, Y) = 1$ then $Y = L(Y \mid X)$ with probability 1, and that the slope in $L(Y \mid X)$ has the same sign as $\cor(X, Y)$.
The last two results clearly show that $\cov(X, Y)$ and $\cor(X, Y)$ measure the linear association between $X$ and $Y$. The equivalent inequalities (a) and (b) above are referred to as the correlation inequality. They are also versions of the Cauchy-Schwarz inequality, named for Augustin Cauchy and Karl Schwarz.
Recall from our previous discussion of variance that the best constant predictor of $Y$, in the sense of minimizing mean square error, is $\E(Y)$ and the minimum value of the mean square error for this predictor is $\var(Y)$. Thus, the difference between the variance of $Y$ and the mean square error above for $L(Y \mid X)$ is the reduction in the variance of $Y$ when the linear term in $X$ is added to the predictor: $\var(Y) - \E\left(\left[Y - L(Y \mid X)\right]^2\right) = \var(Y) \, \cor^2(X, Y)$ Thus $\cor^2(X, Y)$ is the proportion of reduction in $\var(Y)$ when $X$ is included as a predictor variable. This quantity is called the (distribution) coefficient of determination. Now let
$L(Y \mid X = x) = \E(Y) + \frac{\cov(X, Y)}{\var(X)}\left[x - \E(X)\right], \quad x \in \R$
The function $x \mapsto L(Y \mid X = x)$ is known as the distribution regression function for $Y$ given $X$, and its graph is known as the distribution regression line. Note that the regression line passes through $\left(\E(X), \E(Y)\right)$, the center of the joint distribution.
However, the choice of predictor variable and response variable is crucial.
The regression line for $Y$ given $X$ and the regression line for $X$ given $Y$ are not the same line, except in the trivial case where the variables are perfectly correlated. However, the coefficient of determination is the same, regardless of which variable is the predictor and which is the response.
Proof
The two regression lines are \begin{align} y - \E(Y) & = \frac{\cov(X, Y)}{\var(X)}\left[x - \E(X)\right] \\ x - \E(X) & = \frac{\cov(X, Y)}{\var(Y)}\left[y - \E(Y)\right] \end{align} The two lines are the same if and only if $\cov^2(X, Y) = \var(X) \var(Y)$. But this is equivalent to $\cor^2(X, Y) = 1$.
Suppose that $A$ and $B$ are events with $0 \lt \P(A) \lt 1$ and $0 \lt \P(B) \lt 1$. Then
1. $\cor(A, B) = 1$ if and only if $\P(A \setminus B) + \P(B \setminus A) = 0$. (That is, $A$ and $B$ are equivalent events.)
2. $\cor(A, B) = - 1$ if and only if $\P(A \setminus B^c) + \P(B^c \setminus A) = 0$. (That is, $A$ and $B^c$ are equivalent events.)
Proof
Recall from (19) that $\cor(A, B) = \cor(\bs 1_A, \bs 1_B)$, so if $\cor^2(A, B) = 1$ then from (27), $\bs 1_B = L(\bs 1_B \mid \bs 1_A)$ with probability 1. But $\bs 1_A$ and $\bs 1_B$ each takes values 0 and 1 only. Hence the only possible regression lines are $y = 0$, $y = 1$, $y = x$ and $y = 1 - x$. The first two correspond to $\P(B) = 0$ and $\P(B) = 1$, respectively, which are excluded by the hypotheses.
1. In this case, the slope is positive, so the regression line is $y = x$. That is, $\bs 1_B = \bs 1_A$ with probability 1.
2. In this case, the slope is negative, so the regression line is $y = 1 - x$. That is, $\bs 1_B = 1 - \bs 1_A = \bs 1_{A^c}$ with probability 1.
The concept of best linear predictor is more powerful than might first appear, because it can be applied to transformations of the variables. Specifically, suppose that $X$ and $Y$ are random variables for our experiment, taking values in general spaces $S$ and $T$, respectively. Suppose also that $g$ and $h$ are real-valued functions defined on $S$ and $T$, respectively. We can find $L\left[h(Y) \mid g(X)\right]$, the linear function of $g(X)$ that is closest to $h(Y)$ in the mean square sense. The results of this subsection apply, of course, with $g(X)$ replacing $X$ and $h(Y)$ replacing $Y$. Of course, we must be able to compute the appropriate means, variances, and covariances.
We close this subsection with two additional properties of the best linear predictor, the linearity properties.
Suppose that $X$, $Y$, and $Z$ are random variables and that $c$ is a constant. Then
1. $L(Y + Z \mid X) = L(Y \mid X) + L(Z \mid X)$
2. $L(c Y \mid X) = c L(Y \mid X)$
Proof from the definitions
These results follow easily from the linearity of expected value and covariance.
1. \begin{align} L(Y + Z \mid X) & = \E(Y + Z) + \frac{\cov(X, Y + Z)}{\var(X)}\left[X - \E(X)\right] \\ &= \left(\E(Y) + \frac{\cov(X, Y)}{\var(X)} \left[X - \E(X)\right]\right) + \left(\E(Z) + \frac{\cov(X, Z)}{\var(X)}\left[X - \E(X)\right]\right) \\ & = L(Y \mid X) + L(Z \mid X) \end{align}
2. $L(c Y \mid X) = \E(c Y) + \frac{\cov(X, cY)}{\var(X)}\left[X - \E(X)\right] = c \E(Y) + c \frac{\cov(X, Y)}{\var(X)}\left[X - \E(X)\right] = c L(Y \mid X)$
Proof by characterizing properties
1. We show that $L(Y \mid X) + L(Z \mid X)$ satisfies the properties that characterize $L(Y + Z \mid X)$. \begin{align} \E\left[L(Y \mid X) + L(Z \mid X)\right] & = \E\left[L(Y \mid X)\right] + \E\left[L(Z \mid X)\right] = \E(Y) + \E(Z) = \E(Y + Z) \\ \cov\left[X, L(Y \mid X) + L(Z \mid X)\right] & = \cov\left[X, L(Y \mid X)\right] + \cov\left[X, L(Z \mid X)\right] = \cov(X, Y) + \cov(X, Z) = \cov(X, Y + Z) \end{align}
2. Similarly, we show that $c L(Y \mid X)$ satisfies the properties that characterize $L(c Y \mid X)$. \begin{align} \E\left[ c L(Y \mid X)\right] & = c \E\left[L(Y \mid X)\right] = c \E(Y) = \E(c Y) \\ \cov\left[X, c L(Y \mid X)\right] & = c \, \cov\left[X, L(Y \mid X)\right] = c \, \cov(X, Y) = \cov(X, c Y) \end{align}
There are several extensions and generalizations of the ideas in this subsection:
• The corresponding statistical problem of estimating $a$ and $b$, when these distribution parameters are unknown, is considered in the section on Sample Covariance and Correlation.
• The problem of finding the function of $X$ that is closest to $Y$ in the mean square error sense (using all reasonable functions, not just linear functions) is considered in the section on Conditional Expected Value.
• The best linear prediction problem when the predictor and response variables are random vectors is considered in the section on Expected Value and Covariance Matrices.
The use of characterizing properties will play a crucial role in these extensions.
Examples and Applications
Uniform Distributions
Suppose that $X$ is uniformly distributed on the interval $[-1, 1]$ and $Y = X^2$. Then $X$ and $Y$ are uncorrelated even though $Y$ is a function of $X$ (the strongest form of dependence).
Proof
Note that $\E(X) = 0$ and $\E(Y) = \E\left(X^2\right) = 1 / 3$ and $\E(X Y) = \E\left(X^3\right) = 0$. Hence $\cov(X, Y) = \E(X Y) - \E(X) \E(Y) = 0$.
Suppose that $(X, Y)$ is uniformly distributed on the region $S \subseteq \R^2$. Find $\cov(X, Y)$ and $\cor(X, Y)$ and determine whether the variables are independent in each of the following cases:
1. $S = [a, b] \times [c, d]$ where $a \lt b$ and $c \lt d$, so $S$ is a rectangle.
2. $S = \left\{(x, y) \in \R^2: -a \le y \le x \le a\right\}$ where $a \gt 0$, so $S$ is a triangle.
3. $S = \left\{(x, y) \in \R^2: x^2 + y^2 \le r^2\right\}$ where $r \gt 0$, so $S$ is a circular disk.
Answer
1. $\cov(X, Y) = 0$, $\cor(X, Y) = 0$. $X$ and $Y$ are independent.
2. $\cov(X, Y) = \frac{a^2}{9}$, $\cor(X, Y) = \frac{1}{2}$. $X$ and $Y$ are dependent.
3. $\cov(X, Y) = 0$, $\cor(X, Y) = 0$. $X$ and $Y$ are dependent.
In the bivariate uniform experiment, select each of the regions below in turn. For each region, run the simulation 2000 times and note the value of the correlation and the shape of the cloud of points in the scatterplot. Compare with the results in the last exercise.
1. Square
2. Triangle
3. Circle
Suppose that $X$ is uniformly distributed on the interval $(0, 1)$ and that given $X = x \in (0, 1)$, $Y$ is uniformly distributed on the interval $(0, x)$. Find each of the following:
1. $\cov(X, Y)$
2. $\cor(X, Y)$
3. $L(Y \mid X)$
4. $L(X \mid Y)$
Answer
1. $\frac{1}{24}$
2. $\sqrt{\frac{3}{7}}$
3. $\frac{1}{2} X$
4. $\frac{2}{7} + \frac{6}{7} Y$
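These answers are easy to corroborate by simulation. Here is a minimal Python sketch (not part of the text), using the fact that $Y$ can be generated as $X$ times an independent uniform variable:

```python
# Simulation sketch for the exercise above: X uniform on (0, 1) and,
# given X = x, Y uniform on (0, x).
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.random(n)
y = x * rng.random(n)    # Y = X * U with U uniform(0, 1) independent of X

print(np.cov(x, y, ddof=0)[0, 1], 1 / 24)          # covariance, ~0.0417
print(np.corrcoef(x, y)[0, 1], np.sqrt(3 / 7))     # correlation, ~0.6547
print(np.cov(x, y, ddof=0)[0, 1] / np.var(x))      # slope of L(Y | X), ~1/2
```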
Dice
Recall that a standard die is a six-sided die. A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 have probability $\frac{1}{4}$ each, and faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each.
A pair of standard, fair dice are thrown and the scores $(X_1, X_2)$ recorded. Let $Y = X_1 + X_2$ denote the sum of the scores, $U = \min\{X_1, X_2\}$ the minimum score, and $V = \max\{X_1, X_2\}$ the maximum score. Find the covariance and correlation of each of the following pairs of variables:
1. $(X_1, X_2)$
2. $(X_1, Y)$
3. $(X_1, U)$
4. $(U, V)$
5. $(U, Y)$
Answer
1. $0$, $0$
2. $\frac{35}{12}$, $\frac{1}{\sqrt{2}} = 0.7071$
3. $\frac{35}{24}$, $0.6082$
4. $\frac{1225}{1296}$, $\frac{1225}{2555} \approx 0.4794$
5. $\frac{35}{12}$, $0.8601$
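A simulation provides a quick sanity check on these values. The following Python sketch (not from the text) estimates the covariance and correlation of the minimum and maximum in part (d):

```python
# Simulation sketch: cov(U, V) and cor(U, V) for the minimum U and
# maximum V of two fair dice, compared with 1225/1296 and 1225/2555.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
x1 = rng.integers(1, 7, size=n)
x2 = rng.integers(1, 7, size=n)
u, v = np.minimum(x1, x2), np.maximum(x1, x2)

print(np.cov(u, v, ddof=0)[0, 1], 1225 / 1296)   # ~0.9452
print(np.corrcoef(u, v)[0, 1], 1225 / 2555)      # ~0.4794
```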
Suppose that $n$ fair dice are thrown. Find the mean and variance of each of the following variables:
1. $Y_n$, the sum of the scores.
2. $M_n$, the average of the scores.
Answer
1. $\E\left(Y_n\right) = \frac{7}{2} n$, $\var\left(Y_n\right) = \frac{35}{12} n$
2. $\E\left(M_n\right) = \frac{7}{2}$, $\var\left(M_n\right) = \frac{35}{12 n}$
In the dice experiment, select fair dice, and select the following random variables. In each case, increase the number of dice and observe the size and location of the probability density function and the mean $\pm$ standard deviation bar. With $n = 20$ dice, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
1. The sum of the scores.
2. The average of the scores.
Suppose that $n$ ace-six flat dice are thrown. Find the mean and variance of each of the following variables:
1. $Y_n$, the sum of the scores.
2. $M_n$, the average of the scores.
Answer
1. $n \frac{7}{2}$, $n \frac{15}{4}$
2. $\frac{7}{2}$, $\frac{15}{4 n}$
In the dice experiment, select ace-six flat dice, and select the following random variables. In each case, increase the number of dice and observe the size and location of the probability density function and the mean $\pm$ standard deviation bar. With $n = 20$ dice, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
1. The sum of the scores.
2. The average of the scores.
A pair of fair dice are thrown and the scores $(X_1, X_2)$ recorded. Let $Y = X_1 + X_2$ denote the sum of the scores, $U = \min\{X_1, X_2\}$ the minimum score, and $V = \max\{X_1, X_2\}$ the maximum score. Find each of the following:
1. $L(Y \mid X_1)$
2. $L(U \mid X_1)$
3. $L(V \mid X_1)$
Answer
1. $\frac{7}{2} + X_1$
2. $\frac{7}{9} + \frac{1}{2} X_1$
3. $\frac{49}{18} + \frac{1}{2} X_1$
Bernoulli Trials
Recall that a Bernoulli trials process is a sequence $\boldsymbol{X} = (X_1, X_2, \ldots)$ of independent, identically distributed indicator random variables. In the usual language of reliability, $X_i$ denotes the outcome of trial $i$, where 1 denotes success and 0 denotes failure. The probability of success $p = \P(X_i = 1)$ is the basic parameter of the process. The process is named for Jacob Bernoulli. A separate chapter on the Bernoulli Trials explores this process in detail.
For $n \in \N_+$, the number of successes in the first $n$ trials is $Y_n = \sum_{i=1}^n X_i$. Recall that this random variable has the binomial distribution with parameters $n$ and $p$, which has probability density function $f_n$ given by
$f_n(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\}$
The mean and variance of $Y_n$ are
1. $\E(Y_n) = n p$
2. $\var(Y_n) = n p (1 - p)$
Proof
These results could be derived from the PDF of $Y_n$, of course, but a derivation based on the sum of IID variables is much better. Recall that $\E(X_i) = p$ and $\var(X_i) = p (1 - p)$ so the results follow immediately from theorem (16).
In the binomial coin experiment, select the number of heads. Vary $n$ and $p$ and note the shape of the probability density function and the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
For $n \in \N_+$, the proportion of successes in the first $n$ trials is $M_n = Y_n / n$. This random variable is sometimes used as a statistical estimator of the parameter $p$, when the parameter is unknown.
The mean and variance of $M_n$ are
1. $\E(M_n) = p$
2. $\var(M_n) = p (1 - p) / n$
Proof
Recall that $\E(X_i) = p$ and $\var(X_i) = p (1 - p)$ so the results follow immediately from theorem (17).
In the binomial coin experiment, select the proportion of heads. Vary $n$ and $p$ and note the shape of the probability density function and the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
As a special case of (17) note that $M_n \to p$ as $n \to \infty$ in mean square and in probability.
The Hypergeometric Distribution
Suppose that a population consists of $m$ objects; $r$ of the objects are type 1 and $m - r$ are type 0. A sample of $n$ objects is chosen at random, without replacement. The parameters $m, \, n \in \N_+$ and $r \in \N$ with $n \le m$ and $r \le m$. For $i \in \{1, 2, \ldots, n\}$, let $X_i$ denote the type of the $i$th object selected. Recall that $(X_1, X_2, \ldots, X_n)$ is a sequence of identically distributed (but not independent) indicator random variables.
Let $Y$ denote the number of type 1 objects in the sample, so that $Y = \sum_{i=1}^n X_i$. Recall that this random variable has the hypergeometric distribution, which has probability density function $f$ given by
$f(y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
For distinct $i, \, j \in \{1, 2, \ldots, n\}$,
1. $\E(X_i) = \frac{r}{m}$
2. $\var(X_i) = \frac{r}{m} \left(1 - \frac{r}{m}\right)$
3. $\cov(X_i, X_j) = -\frac{r}{m}\left(1 - \frac{r}{m}\right) \frac{1}{m - 1}$
4. $\cor(X_i, X_j) = -\frac{1}{m - 1}$
Proof
Recall that $\E(X_i) = \P(X_i = 1) = \frac{r}{m}$ for each $i$ and $\E(X_i X_j) = \P(X_i = 1, X_j = 1) = \frac{r}{m} \frac{r - 1}{m - 1}$ for each $i \ne j$. Technically, the sequence of indicator variables is exchangeable. The results now follow from the definitions and simple algebra.
Note that the event of a type 1 object on draw $i$ and the event of a type 1 object on draw $j$ are negatively correlated, but the correlation depends only on the population size and not on the number of type 1 objects. Note also that the correlation is perfectly negative if $m = 2$. Think about these results intuitively.
The mean and variance of $Y$ are
1. $\E(Y) = n \frac{r}{m}$
2. $\var(Y) = n \frac{r}{m}\left(1 - \frac{r}{m}\right) \frac{m - n}{m - 1}$
Proof
Again, a derivation from the representation of $Y$ as a sum of indicator variables is far preferable to a derivation based on the PDF of $Y$. These results follow immediately from (45), the additive property of expected value, and Theorem (12).
Note that if the sampling were with replacement, $Y$ would have a binomial distribution, and so in particular $\E(Y) = n \frac{r}{m}$ and $\var(Y) = n \frac{r}{m} \left(1 - \frac{r}{m}\right)$. The additional factor $\frac{m - n}{m - 1}$ that occurs in the variance of the hypergeometric distribution is sometimes called the finite population correction factor. Note that for fixed $m$, $\frac{m - n}{m - 1}$ is decreasing in $n$, and is 0 when $n = m$. Of course, we know that we must have $\var(Y) = 0$ if $n = m$, since we would be sampling the entire population, and so deterministically, $Y = r$. On the other hand, for fixed $n$, $\frac{m - n}{m - 1} \to 1$ as $m \to \infty$. More generally, the hypergeometric distribution is well approximated by the binomial when the population size $m$ is large compared to the sample size $n$. These ideas are discussed more fully in the section on the hypergeometric distribution in the chapter on Finite Sampling Models.
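The effect of the finite population correction factor is easy to see numerically. The following Python sketch (not from the text; the parameter values are arbitrary) compares the simulated variance of $Y$ with the hypergeometric and binomial formulas:

```python
# Sketch: variance of the hypergeometric distribution versus the binomial
# variance, with arbitrary parameters m = 100, r = 40, n = 30.
import numpy as np

m, r, n = 100, 40, 30
p = r / m
rng = np.random.default_rng(0)
y = rng.hypergeometric(ngood=r, nbad=m - r, nsample=n, size=1_000_000)

print(y.var())                                # simulated variance
print(n * p * (1 - p) * (m - n) / (m - 1))    # ~5.091, hypergeometric formula
print(n * p * (1 - p))                        # 7.2, binomial (with replacement)
```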
In the ball and urn experiment, select sampling without replacement. Vary $m$, $r$, and $n$ and note the shape of the probability density function and the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
Exercises on Basic Properties
Suppose that $X$ and $Y$ are real-valued random variables with $\cov(X, Y) = 3$. Find $\cov(2 X - 5, 4 Y + 2)$.
Answer
24
Suppose $X$ and $Y$ are real-valued random variables with $\var(X) = 5$, $\var(Y) = 9$, and $\cov(X, Y) = - 3$. Find
1. $\cor(X, Y)$
2. $\var(2 X + 3 Y - 7)$
3. $\cov(5 X + 2 Y - 3, 3 X - 4 Y + 2)$
4. $\cor(5 X + 2 Y - 3, 3 X - 4 Y + 2)$
Answer
1. $-\frac{1}{\sqrt{5}} \approx -0.4472$
2. 65
3. 45
4. $\frac{15}{\sqrt{2929}} \approx 0.2772$
Suppose that $X$ and $Y$ are independent, real-valued random variables with $\var(X) = 6$ and $\var(Y) = 8$. Find $\var(3 X - 4 Y + 5)$.
Answer
182
Suppose that $A$ and $B$ are events in an experiment with $\P(A) = \frac{1}{2}$, $\P(B) = \frac{1}{3}$, and $\P(A \cap B) = \frac{1}{8}$. Find each of the following:
1. $\cov(A, B)$
2. $\cor(A, B)$
Answer
1. $-\frac{1}{24}$
2. $-\sqrt 2 / 8$
Suppose that $X$, $Y$, and $Z$ are real-valued random variables for an experiment, and that $L(Y \mid X) = 2 - 3 X$ and $L(Z \mid X) = 5 + 4 X$. Find $L(6 Y - 2 Z \mid X)$.
Answer
$2 - 26 X$
Suppose that $X$ and $Y$ are real-valued random variables for an experiment, and that $\E(X) = 3$, $\var(X) = 4$, and $L(Y \mid X) = 5 - 2 X$. Find each of the following:
1. $\E(Y)$
2. $\cov(X, Y)$
Answer
1. $-1$
2. $-8$
Simple Continuous Distributions
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$. Find each of the following:
1. $\cov(X, Y)$
2. $\cor(X, Y)$
3. $L(Y \mid X)$
4. $L(X \mid Y)$
Answer
1. $-\frac{1}{144}$
2. $-\frac{1}{11} \approx -0.0909$
3. $\frac{7}{11} - \frac{1}{11} X$
4. $\frac{7}{11} - \frac{1}{11} Y$
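Answers like these can be corroborated by numerical integration. The following Python sketch (not part of the text, using scipy) computes the covariance and the slope of $L(Y \mid X)$ for this density:

```python
# Numerical check for f(x, y) = x + y on the unit square. Note that
# dblquad's integrand takes arguments (y, x), with y integrated first.
from scipy.integrate import dblquad

def E(g):
    """Expected value of g(x, y) under the density x + y on [0, 1]^2."""
    return dblquad(lambda y, x: g(x, y) * (x + y),
                   0, 1, lambda x: 0, lambda x: 1)[0]

ex, ey = E(lambda x, y: x), E(lambda x, y: y)
cov = E(lambda x, y: x * y) - ex * ey
var_x = E(lambda x, y: x * x) - ex ** 2
print(cov, -1 / 144)    # covariance, ~ -0.00694
print(cov / var_x)      # slope of L(Y | X), ~ -1/11
```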
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 2 (x + y)$ for $0 \le x \le y \le 1$. Find each of the following:
1. $\cov(X, Y)$
2. $\cor(X, Y)$
3. $L(Y \mid X)$
4. $L(X \mid Y)$
Answer
1. $\frac{1}{48}$
2. $\frac{5}{\sqrt{129}} \approx 0.4402$
3. $\frac{26}{43} + \frac{15}{43} X$
4. $\frac{5}{9} Y$
Suppose again that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 2 (x + y)$ for $0 \le x \le y \le 1$.
1. Find $\cov\left(X^2, Y\right)$.
2. Find $\cor\left(X^2, Y\right)$.
3. Find $L\left(Y \mid X^2\right)$.
4. Which predictor of $Y$ is better, the one based on $X$ or the one based on $X^2$?
Answer
1. $\frac{7}{360}$
2. $0.448$
3. $\frac{1255}{1902} + \frac{245}{634} X^2$
4. The predictor based on $X^2$ is slightly better.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 6 x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$. Find each of the following:
1. $\cov(X, Y)$
2. $\cor(X, Y)$
3. $L(Y \mid X)$
4. $L(X \mid Y)$
Answer
Note that $X$ and $Y$ are independent.
1. $0$
2. $0$
3. $\frac{2}{3}$
4. $\frac{3}{4}$
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$. Find each of the following:
1. $\cov(X, Y)$
2. $\cor(X, Y)$
3. $L(Y \mid X)$
4. $L(X \mid Y)$
Answer
1. $\frac{5}{336}$
2. $\frac{5}{\sqrt{85}} \approx 0.5423$
3. $\frac{30}{51} + \frac{20}{51} X$
4. $\frac{3}{4} Y$
Suppose again that $(X, Y)$ has probability density function $f$ given by $f(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$.
1. Find $\cov\left(\sqrt{X}, Y\right)$.
2. Find $\cor\left(\sqrt{X}, Y\right)$.
3. Find $L\left(Y \mid \sqrt{X}\right)$.
4. Which of the predictors of $Y$ is better, the one based on $X$ or the one based on $\sqrt{X}$?
Answer
1. $\frac{10}{1001}$
2. $\frac{24}{169} \sqrt{14}$
3. $\frac{5225}{13\;182} + \frac{1232}{2197} \sqrt{X}$
4. The predictor based on $X$ is slightly better.
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. A generating function of a real-valued random variable is an expected value of a certain transformation of the random variable involving another (deterministic) variable. Most generating functions share four important properties:
1. Under mild conditions, the generating function completely determines the distribution of the random variable.
2. The generating function of a sum of independent variables is the product of the generating functions.
3. The moments of the random variable can be obtained from the derivatives of the generating function.
4. Ordinary (pointwise) convergence of a sequence of generating functions corresponds to the special convergence of the corresponding distributions.
Property 1 is perhaps the most important. Often a random variable is shown to have a certain distribution by showing that the generating function has a certain form. The process of recovering the distribution from the generating function is known as inversion. Property 2 is frequently used to determine the distribution of a sum of independent variables. By contrast, recall that the probability density function of a sum of independent variables is the convolution of the individual density functions, a much more complicated operation. Property 3 is useful because often computing moments from the generating function is easier than computing the moments directly from the probability density function. The last property is known as the continuity theorem. Often it is easier to show the convergence of the generating functions than to prove convergence of the distributions directly.
The numerical value of the generating function at a particular value of the free variable is of no interest, and so generating functions can seem rather unintuitive at first. But the important point is that the generating function as a whole encodes all of the information in the probability distribution in a very useful way. Generating functions are important and valuable tools in probability, as they are in other areas of mathematics, from combinatorics to differential equations.
We will study three generating functions, in increasing order of generality: the probability generating function, the moment generating function, and the characteristic function. The first is the most restrictive, but also by far the simplest, since the theory reduces to basic facts about power series that you will remember from calculus. The third is the most general and the one for which the theory is most complete and elegant, but it also requires basic knowledge of complex analysis. The one in the middle, the moment generating function, is perhaps the one most commonly used, and suffices for most distributions in applied probability.
We will also study the characteristic function for multivariate distributions, although analogous results hold for the other two types. In the basic theory below, be sure to try the proofs yourself before reading the ones in the text.
Basic Theory
The Probability Generating Function
For our first generating function, assume that $N$ is a random variable taking values in $\N$.
The probability generating function $P$ of $N$ is defined by $P(t) = \E\left(t^N\right)$ for all $t \in \R$ for which the expected value exists in $\R$.
That is, $P(t)$ is defined when $\E\left(|t|^N\right) \lt \infty$. The probability generating function can be written nicely in terms of the probability density function.
Suppose that $N$ has probability density function $f$ and probability generating function $P$. Then $P(t) = \sum_{n=0}^\infty f(n) t^n, \quad t \in (-r, r)$ where $r \in [1, \infty]$ is the radius of convergence of the series.
Proof
The expansion follows from the discrete change of variables theorem for expected value. Note that the series is a power series in $t$, and hence by basic calculus, converges absolutely for $t \in (-r, r)$ where $r \in [0, \infty]$ is the radius of convergence. But since $\sum_{n=0}^\infty f(n) = 1$ we must have $r \ge 1$, and the series converges absolutely at least for $t \in [-1, 1]$.
In the language of combinatorics, $P$ is the ordinary generating function of $f$. Of course, if $N$ just takes a finite set of values in $\N$ then $r = \infty$. Recall from calculus that a power series can be differentiated term by term, just like a polynomial. Each derivative series has the same radius of convergence as the original series (but may behave differently at the endpoints of the interval of convergence). We denote the derivative of order $n$ by $P^{(n)}$. Recall also that if $n \in \N$ and $k \in \N$ with $k \le n$, then the number of permutations of size $k$ chosen from a population of $n$ objects is $n^{(k)} = n (n - 1) \cdots (n - k + 1)$ The following theorem is the inversion result for probability generating functions: the generating function completely determines the distribution.
Suppose again that $N$ has probability density function $f$ and probability generating function $P$. Then $f(k) = \frac{P^{(k)}(0)}{k!}, \quad k \in \N$
Proof
This is a standard result from the theory of power series. Differentiating $k$ times gives $P^{(k)}(t) = \sum_{n=k}^\infty n^{(k)} f(n) t^{n-k}$ for $t \in (-r, r)$. Hence $P^{(k)}(0) = k^{(k)} f(k) = k! f(k)$
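Symbolic software makes this inversion mechanical. As a small illustration (not from the text), the following Python/sympy sketch recovers the probability density function of the binomial distribution with parameters $3$ and $\frac{1}{2}$ from its probability generating function $P(t) = \left(\frac{1}{2} + \frac{1}{2} t\right)^3$:

```python
# Sketch: recover f(k) = P^(k)(0) / k! from a PGF, using sympy.
import sympy as sp

t = sp.symbols('t')
P = (sp.Rational(1, 2) + sp.Rational(1, 2) * t) ** 3   # PGF of binomial(3, 1/2)
f = [sp.diff(P, t, k).subs(t, 0) / sp.factorial(k) for k in range(4)]
print(f)   # [1/8, 3/8, 3/8, 1/8]
```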
Our next result is not particularly important, but has a certain curiosity.
$\P(N \text{ is even}) = \frac{1}{2}\left[1 + P(-1)\right]$.
Proof
Note that $P(1) + P(-1) = \sum_{n=0}^\infty f(n) + \sum_{n=0}^\infty (-1)^n f(n) = 2 \sum_{k=0}^\infty f(2 k) = 2 \P(N \text{ is even})$ We can combine the two sums since we know that the series converge absolutely at $1$ and $-1$.
Recall that the factorial moment of $N$ of order $k \in \N$ is $\E\left[N^{(k)}\right]$. The factorial moments can be computed from the derivatives of the probability generating function. The factorial moments, in turn, determine the ordinary moments about 0 (sometimes referred to as raw moments).
Suppose that the radius of convergence $r \gt 1$. Then $P^{(k)}(1) = \E\left[N^{(k)}\right]$ for $k \in \N$. In particular, $N$ has finite moments of all orders.
Proof
As before, $P^{(k)}(t) = \sum_{n=k}^\infty n^{(k)} f(n) t^{n-k}$ for $t \in (-r, r)$. Hence if $r \gt 1$ then $P^{(k)}(1) = \sum_{n=k}^\infty n^{(k)} f(n) = \E\left[N^{(k)}\right]$
Suppose again that $r \gt 1$. Then
1. $\E(N) = P^\prime(1)$
2. $\var(N) = P^{\prime \prime}(1) + P^\prime(1)\left[1 - P^\prime(1)\right]$
Proof
1. $\E(N) = \E\left[N^{(1)}\right] = P^\prime(1)$.
2. $\E\left(N^2\right) = \E[N (N - 1)] + \E(N) = \E\left[N^{(2)}\right] + \E(N) = P^{\prime\prime}(1) + P^\prime(1)$. Hence from (a), $\var(N) = P^{\prime\prime}(1) + P^\prime(1) - \left[P^\prime(1)\right]^2$.
Suppose that $N_1$ and $N_2$ are independent random variables taking values in $\N$, with probability generating functions $P_1$ and $P_2$ having radii of convergence $r_1$ and $r_2$, respectively. Then the probability generating function $P$ of $N_1 + N_2$ is given by $P(t) = P_1(t) P_2(t)$ for $\left|t\right| \lt r_1 \wedge r_2$.
Proof
Recall that the expected product of independent variables is the product of the expected values. Hence $P(t) = \E\left(t^{N_1 + N_2}\right) = \E\left(t^{N_1} t^{N_2}\right) = \E\left(t^{N_1}\right) \E\left(t^{N_2}\right) = P_1(t) P_2(t), \quad \left|t\right| \lt r_1 \wedge r_2$
The Moment Generating Function
Our next generating function is defined more generally, so in this discussion we assume that the random variables are real-valued.
The moment generating function of $X$ is the function $M$ defined by $M(t) = \E\left(e^{tX}\right), \quad t \in \R$
Note that since $e^{t X} \ge 0$ with probability 1, $M(t)$ exists, as a real number or $\infty$, for any $t \in \R$. But as we will see, our interest will be in the domain where $M(t) \lt \infty$.
Suppose that $X$ has a continuous distribution on $\R$ with probability density function $f$. Then $M(t) = \int_{-\infty}^\infty e^{t x} f(x) \, dx$
Proof
This follows from the change of variables theorem for expected value.
Thus, the moment generating function of $X$ is closely related to the Laplace transform of the probability density function $f$. The Laplace transform is named for Pierre Simon Laplace, and is widely used in many areas of applied mathematics, particularly differential equations. The basic inversion theorem for moment generating functions (similar to the inversion theorem for Laplace transforms) states that if $M(t) \lt \infty$ for $t$ in an open interval about 0, then $M$ completely determines the distribution of $X$. Thus, if two distributions on $\R$ have moment generating functions that are equal (and finite) in an open interval about 0, then the distributions are the same.
Suppose that $X$ has moment generating function $M$ that is finite in an open interval $I$ about 0. Then $X$ has moments of all orders and $M(t) = \sum_{n=0}^\infty \frac{\E\left(X^n\right)}{n!} t^n, \quad t \in I$
Proof
Under the hypotheses, the expected value operator can be interchanged with the infinite series for the exponential function: $M(t) = \E\left(e^{t X}\right) = \E\left(\sum_{n=0}^\infty \frac{X^n}{n!} t^n\right) = \sum_{n=0}^\infty \frac{\E(X^n)}{n!} t^n, \quad t \in I$ The interchange is a special case of Fubini's theorem, named for Guido Fubini. For more details see the advanced section on properties of the integral in the chapter on Distributions.
So under the finite assumption in the last theorem, the moment generating function, like the probability generating function, is a power series in $t$.
Suppose again that $X$ has moment generating function $M$ that is finite in an open interval about 0. Then $M^{(n)}(0) = \E\left(X^n\right)$ for $n \in \N$
Proof
This follows by the same argument as above for the PGF: $M^{(n)}(0) / n!$ is the coefficient of order $n$ in the power series above, namely $\E\left(X^n\right) / n!$. Hence $M^{(n)}(0) = \E\left(X^n\right)$.
Thus, the derivatives of the moment generating function at 0 determine the moments of the variable (hence the name). In the language of combinatorics, the moment generating function is the exponential generating function of the sequence of moments. Thus, a random variable that does not have finite moments of all orders cannot have a finite moment generating function. Even when a random variable does have moments of all orders, the moment generating function may not exist. A counterexample is constructed below.
For nonnegative random variables (which are very common in applications), the domain where the moment generating function is finite is easy to understand.
Suppose that $X$ takes values in $[0, \infty)$ and has moment generating function $M$. If $M(t) \lt \infty$ for $t \in \R$ then $M(s) \lt \infty$ for $s \le t$.
Proof
Since $X \ge 0$, if $s \le t$ then $s X \le t X$ and hence $e^{s X} \le e^{t X}$. Hence $\E\left(e^{s X}\right) \le \E\left(e^{t X} \right)$.
So for a nonnegative random variable, either $M(t) \lt \infty$ for all $t \in \R$ or there exists $r \in (0, \infty)$ such that $M(t) \lt \infty$ for $t \lt r$. Of course, there are complementary results for non-positive random variables, but such variables are much less common. Next we consider what happens to the moment generating function under some simple transformations of the random variables.
Suppose that $X$ has moment generating function $M$ and that $a, \, b \in \R$. The moment generating function $N$ of $Y = a + b X$ is given by $N(t) = e^{a t} M(b t)$ for $t \in \R$.
Proof
$\E\left[e^{t (a + b X)}\right] = \E\left(e^{t a} e^{t b X}\right) = e^{t a} \E\left[e^{(t b) X}\right] = e^{a t} M(b t)$ for $t \in \R$.
Recall that if $a \in \R$ and $b \in (0, \infty)$ then the transformation $a + b X$ is a location-scale transformation on the distribution of $X$, with location parameter $a$ and scale parameter $b$. Location-scale transformations frequently arise when units are changed, such as length changed from inches to centimeters or temperature from degrees Fahrenheit to degrees Celsius.
Suppose that $X_1$ and $X_2$ are independent random variables with moment generating functions $M_1$ and $M_2$ respectively. The moment generating function $M$ of $Y = X_1 + X_2$ is given by $M(t) = M_1(t) M_2(t)$ for $t \in \R$.
Proof
As with the PGF, the proof for the MGF relies on the law of exponents and the fact that the expected value of a product of independent variables is the product of the expected values: $\E\left[e^{t (X_1 + X_2)}\right] = \E\left(e^{t X_1} e^{t X_2}\right) = \E\left(e^{t X_1}\right) \E\left(e^{t X_2}\right) = M_1(t) M_2(t), \quad t \in \R$
The probability generating function of a variable can easily be converted into the moment generating function of the variable.
Suppose that $X$ is a random variable taking values in $\N$ with probability generating function $G$ having radius of convergence $r$. The moment generating function $M$ of $X$ is given by $M(t) = G\left(e^t\right)$ for $t \lt \ln(r)$.
Proof
$M(t) = \E\left(e^{t X}\right) = \E\left[\left(e^t\right)^X\right] = G\left(e^t\right)$ for $e^t \lt r$.
The following theorem gives the Chernoff bounds, named for the mathematician Herman Chernoff. These are upper bounds on the tail events of a random variable.
If $X$ has moment generating function $M$ then
1. $\P(X \ge x) \le e^{-t x} M(t)$ for $t \gt 0$
2. $\P(X \le x) \le e^{-t x} M(t)$ for $t \lt 0$
Proof
1. From Markov's inequality, $\P(X \ge x) = \P\left(e^{t X} \ge e^{t x}\right) \le \E\left(e^{t X}\right) \big/ e^{t x} = e^{-t x} M(t)$ if $t \gt 0$.
2. Similarly, $\P(X \le x) = \P\left(e^{t X} \ge e^{t x}\right) \le e^{-t x} M(t)$ if $t \lt 0$.
Naturally, the best Chernoff bound (in either (a) or (b)) is obtained by finding $t$ that minimizes $e^{-t x} M(t)$.
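As a concrete illustration (not from the text), consider the standard normal distribution, whose moment generating function $M(t) = e^{t^2/2}$ is derived later in this section. Minimizing $e^{-t x} M(t)$ over $t \gt 0$ gives $t = x$, so the Chernoff bound is $\P(X \ge x) \le e^{-x^2/2}$. The Python sketch below checks the minimization numerically:

```python
# Sketch: the Chernoff bound e^{-tx} M(t) for a standard normal variable,
# minimized numerically over t and compared with the exact tail probability.
import numpy as np
from scipy.stats import norm

x = 2.0
ts = np.linspace(0.01, 6.0, 1000)
bounds = np.exp(-ts * x + ts ** 2 / 2)     # e^{-tx} M(t) with M(t) = e^{t^2/2}
print(bounds.min(), np.exp(-x ** 2 / 2))   # numerical vs analytic minimum, ~0.135
print(norm.sf(x))                          # exact P(X >= 2), ~0.0228
```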
The Characteristic Function
Our last generating function is the nicest from a mathematical point of view. Once again, we assume that our random variables are real-valued.
The characteristic function of $X$ is the function $\chi$ defined by $\chi(t) = \E\left(e^{i t X}\right) = \E\left[\cos(t X)\right] + i \E\left[\sin(t X)\right], \quad t \in \R$
Note that $\chi$ is a complex valued function, and so this subsection requires some basic knowledge of complex analysis. The function $\chi$ is defined for all $t \in \R$ because the random variable in the expected value is bounded in magnitude. Indeed, $\left|e^{i t X}\right| = 1$ for all $t \in \R$. Many of the properties of the characteristic function are more elegant than the corresponding properties of the probability or moment generating functions, because the characteristic function always exists.
If $X$ has a continuous distribution on $\R$ with probability density function $f$ and characteristic function $\chi$ then $\chi(t) = \int_{-\infty}^{\infty} e^{i t x} f(x) dx, \quad t \in \R$
Proof
This follows from the change of variables theorem for expected value, albeit a complex version.
Thus, the characteristic function of $X$ is closely related to the Fourier transform of the probability density function $f$. The Fourier transform is named for Joseph Fourier, and is widely used in many areas of applied mathematics.
As with other generating functions, the characteristic function completely determines the distribution. That is, random variables $X$ and $Y$ have the same distribution if and only if they have the same characteristic function. Indeed, the general inversion formula given next is a formula for computing certain combinations of probabilities from the characteristic function.
Suppose again that $X$ has characteristic function $\chi$. If $a, \, b \in \R$ and $a \lt b$ then $\int_{-n}^n \frac{e^{-i a t} - e^{- i b t}}{2 \pi i t} \chi(t) \, dt \to \P(a \lt X \lt b) + \frac{1}{2}\left[\P(X = b) - \P(X = a)\right] \text{ as } n \to \infty$
The probability combinations on the right side completely determine the distribution of $X$. A special inversion formula holds for continuous distributions:
Suppose that $X$ has a continuous distribution with probability density function $f$ and characteristic function $\chi$. At every point $x \in \R$ where $f$ is differentiable, $f(x) = \frac{1}{2 \pi} \int_{-\infty}^\infty e^{-i t x} \chi(t) \, dt$
This formula is essentially the inverse Fourier transform. As with the other generating functions, the characteristic function can be used to find the moments of $X$. Moreover, this can be done even when only some of the moments exist.
Suppose again that $X$ has characteristic function $\chi$. If $n \in \N_+$ and $\E\left(\left|X^n\right|\right) \lt \infty$, then $\chi(t) = \sum_{k=0}^n \frac{\E\left(X^k\right)}{k!} (i t)^k + o(t^n)$ and therefore $\chi^{(n)}(0) = i^n \E\left(X^n\right)$.
Details
Recall that the last term is a generic remainder that satisfies $o(t^n) / t^n \to 0$ as $t \to 0$.
Next we consider how the characteristic function is changed under some simple transformations of the variables.
Suppose that $X$ has characteristic function $\chi$ and that $a, \, b \in \R$. The characteristic function $\psi$ of $Y = a + b X$ is given by $\psi(t) = e^{i a t} \chi(b t)$ for $t \in \R$.
Proof
The proof is just like the one for the MGF: $\psi(t) = \E\left[e^{i t (a + b X)}\right] = \E\left(e^{i t a} e^{i t b X}\right) = e^{i t a} \E\left[e^{i (t b) X}\right] = e^{i a t} \chi(b t)$ for $t \in \R$.
Suppose that $X_1$ and $X_2$ are independent random variables with characteristic functions $\chi_1$ and $\chi_2$ respectively. The characteristic function $\chi$ of $Y = X_1 + X_2$ is given by $\chi(t) = \chi_1(t) \chi_2(t)$ for $t \in \R$.
Proof
Again, the proof is just like the one for the MGF: $\chi(t) = \E\left[e^{i t (X_1 + X_2)}\right] = \E\left(e^{i t X_1} e^{i t X_2}\right) = \E\left(e^{i t X_1}\right) \E\left(e^{i t X_2}\right) = \chi_1(t) \chi_2(t), \quad t \in \R$
The characteristic function of a random variable can be obtained from the moment generating function, under the basic existence condition that we saw earlier.
Suppose that $X$ has moment generating function $M$ that satisfies $M(t) \lt \infty$ for $t$ in an open interval $I$ about 0. Then the characteristic function $\chi$ of $X$ satisfies $\chi(t) = M(i t)$ for $t \in I$.
The final important property of characteristic functions that we will discuss relates to convergence in distribution. Suppose that $(X_1, X_2, \ldots)$ is a sequence of real-valued random variables with characteristic functions $(\chi_1, \chi_2, \ldots)$ respectively. Since we are only concerned with distributions, the random variables need not be defined on the same probability space.
The Continuity Theorem
1. If the distribution of $X_n$ converges to the distribution of a random variable $X$ as $n \to \infty$ and $X$ has characteristic function $\chi$, then $\chi_n(t) \to \chi(t)$ as $n \to \infty$ for all $t \in \R$.
2. Conversely, if $\chi_n(t)$ converges to a function $\chi(t)$ as $n \to \infty$ for $t$ in an open interval about 0, and if $\chi$ is continuous at 0, then $\chi$ is the characteristic function of a random variable $X$, and the distribution of $X_n$ converges to the distribution of $X$ as $n \to \infty$.
There are analogous versions of the continuity theorem for probability generating functions and moment generating functions. The continuity theorem can be used to prove the central limit theorem, one of the fundamental theorems of probability. Also, the continuity theorem has a straightforward generalization to distributions on $\R^n$.
The Joint Characteristic Function
All of the generating functions that we have discussed have multivariate extensions. However, we will discuss the extension only for the characteristic function, the most important and versatile of the generating functions. There are analogous results for the other generating functions. So in this discussion, we assume that $(X, Y)$ is a random vector for our experiment, taking values in $\R^2$.
The (joint) characteristic function $\chi$ of $(X, Y)$ is defined by $\chi(s, t) = \E\left[\exp(i s X + i t Y)\right], \quad (s, t) \in \R^2$
Once again, the most important fact is that $\chi$ completely determines the distribution: two random vectors taking values in $\R^2$ have the same characteristic function if and only if they have the same distribution.
The joint moments can be obtained from the derivatives of the characteristic function.
Suppose that $(X, Y)$ has characteristic function $\chi$. If $m, \, n \in \N$ and $\E\left(\left|X^m Y^n\right|\right) \lt \infty$ then $\chi^{(m, n)}(0, 0) = i^{m + n} \E\left(X^m Y^n\right)$
The marginal characteristic functions and the characteristic function of the sum can be easily obtained from the joint characteristic function:
Suppose again that $(X, Y)$ has characteristic function $\chi$, and let $\chi_1$, $\chi_2$, and $\chi_+$ denote the characteristic functions of $X$, $Y$, and $X + Y$, respectively. For $t \in \R$
1. $\chi(t, 0) = \chi_1(t)$
2. $\chi(0, t) = \chi_2(t)$
3. $\chi(t, t) = \chi_+(t)$
Proof
All three results follow immediately from the definitions.
Suppose again that $\chi_1$, $\chi_2$, and $\chi$ are the characteristic functions of $X$, $Y$, and $(X, Y)$ respectively. Then $X$ and $Y$ are independent if and only if $\chi(s, t) = \chi_1(s) \chi_2(t)$ for all $(s, t) \in \R^2$.
Naturally, the results for bivariate characteristic functions have analogies in the general multivariate case. Only the notation is more complicated.
Examples and Applications
As always, be sure to try the computational problems yourself before expanding the solutions and answers in the text.
Dice
Recall that an ace-six flat die is a six-sided die for which faces numbered 1 and 6 have probability $\frac{1}{4}$ each, while faces numbered 2, 3, 4, and 5 have probability $\frac{1}{8}$ each. Similarly, a 3-4 flat die is a six-sided die for which faces numbered 3 and 4 have probability $\frac{1}{4}$ each, while faces numbered 1, 2, 5, and 6 have probability $\frac{1}{8}$ each.
Suppose that an ace-six flat die and a 3-4 flat die are rolled. Use probability generating functions to find the probability density function of the sum of the scores.
Solution
Let $X$ and $Y$ denote the score on the ace-six die and 3-4 flat die, respectively. Then $X$ and $Y$ have PGFs $P$ and $Q$ given by \begin{align*} P(t) &= \frac{1}{4} t + \frac{1}{8} t^2 + \frac{1}{8} t^3 + \frac{1}{8} t^4 + \frac{1}{8} t^5 + \frac{1}{4} t^6, \quad t \in \R \\ Q(t) &= \frac{1}{8} t + \frac{1}{8} t^2 + \frac{1}{4} t^3 + \frac{1}{4} t^4 + \frac{1}{8} t^5 + \frac{1}{8} t^6, \quad t \in \R \end{align*} Hence $X + Y$ has PGF $P Q$. Expanding (a computer algebra program helps) gives $P(t) Q(t) = \frac{1}{32} t^2 + \frac{3}{64} t^3 + \frac{3}{32} t^4 + \frac{1}{8} t^5 + \frac{1}{8} t^6 + \frac{5}{32} t^7 + \frac{1}{8} t^8 + \frac{1}{8} t^9 + \frac{3}{32} t^{10} + \frac{3}{64} t^{11} + \frac{1}{32} t^{12}, \quad t \in \R$ Thus the PDF $f$ of $X + Y$ is given by $f(2) = f(12) = \frac{1}{32}$, $f(3) = f(11) = \frac{3}{64}$, $f(4) = f(10) = \frac{3}{32}$, $f(5) = f(6) = f(8) = f(9) = \frac{1}{8}$, and $f(7) = \frac{5}{32}$.
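Multiplying the two PGFs is just a convolution of the coefficient arrays, so the expansion can be done in a few lines of Python (a sketch, not part of the text):

```python
# Sketch: the PDF of the sum of an ace-six flat die and a 3-4 flat die,
# as the convolution of the PGF coefficient arrays (index = power of t).
import numpy as np

p = np.array([0, 2, 1, 1, 1, 1, 2]) / 8   # ace-six flat die
q = np.array([0, 1, 1, 2, 2, 1, 1]) / 8   # 3-4 flat die
print(np.convolve(p, q))                  # f(0), f(1), ..., f(12)
```

The entry at index 7, for instance, is $5/32 = 0.15625$, matching the expansion above.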
Two fair, 6-sided dice are rolled. One has faces numbered $(0, 1, 2, 3, 4, 5)$ and the other has faces numbered $(0, 6, 12, 18, 24, 30)$. Use probability generating functions to find the probability density function of the sum of the scores, and identify the distribution.
Solution
Let $X$ and $Y$ denote the score on the first die and the second die described, respectively. Then $X$ and $Y$ have PGFs $P$ and $Q$ given by \begin{align*} P(t) &= \frac{1}{6} \sum_{k=0}^5 t^k, \quad t \in \R \\ Q(t) &= \frac{1}{6} \sum_{j=0}^5 t^{6 j}, \quad t \in \R \end{align*} Hence $X + Y$ has PGF $P Q$. Simplifying gives $P(t) Q(t) = \frac{1}{36} \sum_{j=0}^5 \sum_{k=0}^5 t^{6 j + k} = \frac{1}{36} \sum_{n=0}^{35} t^n, \quad t \in \R$ Hence $X + Y$ is uniformly distributed on $\{0, 1, 2, \ldots, 35\}$.
Suppose that random variable $Y$ has probability generating function $P$ given by $P(t) = \left(\frac{2}{5} t + \frac{3}{10} t^2 + \frac{1}{5} t^3 + \frac{1}{10} t^4\right)^5, \quad t \in \R$
1. Interpret $Y$ in terms of rolling dice.
2. Use the probability generating function to find the first two factorial moments of $Y$.
3. Use (b) to find the variance of $Y$.
Answer
1. A four-sided die has faces numbered $(1, 2, 3, 4)$ with respective probabilities $\left(\frac{2}{5}, \frac{3}{10}, \frac{1}{5}, \frac{1}{10}\right)$. $Y$ is the sum of the scores when the die is rolled 5 times.
2. $\E(Y) = P^\prime(1) = 10$, $\E[Y(Y - 1)] = P^{\prime \prime}(1) = 95$
3. $\var(Y) = 5$
Bernoulli Trials
Suppose $X$ is an indicator random variable with $p = \P(X = 1)$, where $p \in [0, 1]$ is a parameter. Then $X$ has probability generating function $P(t) = 1 - p + p t$ for $t \in \R$.
Proof
$P(t) = \E\left(t^X\right) = t^0 (1 - p) + t^1 p = 1 - p + p t$ for $t \in \R$.
Recall that a Bernoulli trials process is a sequence $(X_1, X_2, \ldots)$ of independent, identically distributed indicator random variables. In the usual language of reliability, $X_i$ denotes the outcome of trial $i$, where 1 denotes success and 0 denotes failure. The probability of success $p = \P(X_i = 1)$ is the basic parameter of the process. The process is named for Jacob Bernoulli. A separate chapter on the Bernoulli Trials explores this process in more detail.
For $n \in \N_+$, the number of successes in the first $n$ trials is $Y_n = \sum_{i=1}^n X_i$. Recall that this random variable has the binomial distribution with parameters $n$ and $p$, which has probability density function $f_n$ given by $f_n(y) = \binom{n}{y} p^y (1 - p)^{n - y}, \quad y \in \{0, 1, \ldots, n\}$
Random variable $Y_n$ has probability generating function $P_n$ given by $P_n(t) = (1 - p + p t)^n$ for $t \in \R$.
Proof
This follows immediately from the PGF of an indicator variable and the result for sums of independent variables.
Random variable $Y_n$ has the following parameters:
1. $\E\left[Y_n^{(k)}\right] = n^{(k)} p^k$
2. $\E\left(Y_n\right) = n p$
3. $\var\left(Y_n\right) = n p (1 - p)$
4. $\P(Y_n \text{ is even}) = \frac{1}{2}\left[1 + (1 - 2 p)^n\right]$
Proof
1. Repeated differentiation gives $P_n^{(k)}(t) = n^{(k)} p^k (1 - p + p t)^{n-k}$. Hence $P_n^{(k)}(1) = n^{(k)} p^k$, which is $\E\left[Y_n^{(k)} \right]$ by the moment result above.
2. This follows from the formula for mean.
3. This follows from the formula for variance.
4. This follows from the even value formula.
Suppose that $U$ has the binomial distribution with parameters $m \in \N_+$ and $p \in [0, 1]$, $V$ has the binomial distribution with parameters $n \in \N_+$ and $q \in [0, 1]$, and that $U$ and $V$ are independent.
1. If $p = q$ then $U + V$ has the binomial distribution with parameters $m + n$ and $p$.
2. If $p \ne q$ then $U + V$ does not have a binomial distribution.
Proof
From the result for sums of independent variables and the PGF of the binomial distribution, note that the probability generating function of $U + V$ is $P(t) = (1 - p + p t)^m (1 - q + q t)^n$ for $t \in \R$.
1. If $p = q$ then $U + V$ has PGF $P(t) = (1 - p + p t)^{m + n}$, which is the PGF of the binomial distribution with parameters $m + n$ and $p$.
2. On the other hand, if $p \ne q$, the PGF $P$ does not have the functional form of a binomial PGF.
Suppose now that $p \in (0, 1]$. The trial number $N$ of the first success in the sequence of Bernoulli trials has the geometric distribution on $\N_+$ with success parameter $p$. The probability density function $h$ is given by $h(n) = p (1 - p)^{n-1}, \quad n \in \N_+$ The geometric distribution is studied in more detail in the chapter on Bernoulli trials.
Let $Q$ denote the probability generating function of $N$. Then
1. $Q(t) = \frac{p t}{1 - (1 - p)t}$ for $-\frac{1}{1 - p} \lt t \lt \frac{1}{1 - p}$
2. $\E\left[N^{(k)}\right] = k! \frac{(1 - p)^{k-1}}{p^k}$ for $k \in \N_+$
3. $\E(N) = \frac{1}{p}$
4. $\var(N) = \frac{1 - p}{p^2}$
5. $\P(N \text{ is even}) = \frac{1 - p}{2 - p}$
Proof
1. Using the formula for the sum of a geometric series, $Q(t) = \sum_{n=1}^\infty (1 - p)^{n-1} p t^n = p t \sum_{n=1}^\infty [(1 - p) t]^{n-1} = \frac{p t}{1 - (1 - p) t}, \quad \left|(1 - p) t\right| \lt 1$
2. Repeated differentiation gives $Q^{(k)}(t) = k! \, p (1 - p)^{k-1} \left[1 - (1 - p) t\right]^{-(k+1)}$, and then the result follows from the factorial moment formula, evaluating at $t = 1$.
3. This follows from (b) and the formula for mean.
4. This follows from (b) and the formula for variance.
5. This follows from even value formula.
The probability that $N$ is even comes up in the alternating coin tossing game with two players.
The Poisson Distribution
Recall that the Poisson distribution has probability density function $f$ given by $f(n) = e^{-a} \frac{a^n}{n!}, \quad n \in \N$ where $a \in (0, \infty)$ is a parameter. The Poisson distribution is named after Simeon Poisson and is widely used to model the number of random points in a region of time or space; the parameter is proportional to the size of the region of time or space. The Poisson distribution is studied in more detail in the chapter on the Poisson Process.
Suppose that $N$ has Poisson distribution with parameter $a \in (0, \infty)$. Let $P_a$ denote the probability generating function of $N$. Then
1. $P_a(t) = e^{a (t - 1)}$ for $t \in \R$
2. $\E\left[N^{(k)}\right] = a^k$
3. $\E(N) = a$
4. $\var(N) = a$
5. $\P(N \text{ is even}) = \frac{1}{2}\left(1 + e^{-2 a}\right)$
Proof
1. Using the exponential series, $P_a(t) = \sum_{n=0}^\infty e^{-a} \frac{a^n}{n!} t^n = e^{-a} \sum_{n=0}^\infty \frac{(a t)^n}{n!} = e^{-a} e^{a t}, \quad t \in \R$
2. Repeated differentiation gives $P_a^{(k)}(t) = e^{a (t - 1)} a^k$, so the result follows from the factorial moment formula, evaluating at $t = 1$.
3. This follows from (b) and the formula for mean.
4. This follows from (b) and the formula for variance.
5. This follows from even value formula.
The Poisson family of distributions is closed with respect to sums of independent variables, a very important property.
Suppose that $X, \, Y$ have Poisson distributions with parameters $a, \, b \in (0, \infty)$, respectively, and that $X$ and $Y$ are independent. Then $X + Y$ has the Poisson distribution with parameter $a + b$.
Proof
In the notation of the previous result, note that $P_a P_b = P_{a+b}$.
The right distribution function of the Poisson distribution does not have a simple, closed-form expression. The following exercise gives an upper bound.
Suppose that $N$ has the Poisson distribution with parameter $a \gt 0$. Then $\P(N \ge n) \le e^{n - a} \left(\frac{a}{n}\right)^n, \quad n \gt a$
Proof
The PGF of $N$ is $P(t) = e^{a (t - 1)}$ and hence the MGF is $P\left(e^t\right) = \exp\left(a e^t - a\right)$. From the Chernoff bounds we have $\P(N \ge n) \le e^{-t n} \exp\left(a e^t - a\right) = \exp\left(a e^t - a - tn\right)$ If $n \gt a$ the expression on the right is minimized when $t = \ln(n / a)$. Substituting gives the upper bound.
The following theorem gives an important convergence result that is explored in more detail in the chapter on the Poisson process.
Suppose that $p_n \in (0, 1)$ for $n \in \N_+$ and that $n p_n \to a \in (0, \infty)$ as $n \to \infty$. Then the binomial distribution with parameters $n$ and $p_n$ converges to the Poisson distribution with parameter $a$ as $n \to \infty$.
Proof
Let $P_n$ denote the probability generating function of the binomial distribution with parameters $n$ and $p_n$. From the PGF of the binomial distribution we have $P_n(t) = \left[1 + p_n (t - 1)\right]^n = \left[1 + \frac{n p_n (t - 1)}{n}\right]^n, \quad t \in \R$ Using a famous theorem from calculus, $P_n(t) \to e^{a (t - 1)}$ as $n \to \infty$. But this is the PGF of the Poisson distribution with parameter $a$, so the result follows from the continuity theorem for PGFs.
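The convergence can also be seen numerically. The following Python sketch (not from the text, with $a = 2$ as an arbitrary choice) compares binomial probabilities with parameters $n$ and $a / n$ to Poisson probabilities as $n$ grows:

```python
# Sketch: binomial(n, a/n) probabilities approach Poisson(a) probabilities.
from scipy.stats import binom, poisson

a = 2.0
for n in (10, 100, 1000):
    err = max(abs(binom.pmf(k, n, a / n) - poisson.pmf(k, a)) for k in range(20))
    print(n, err)   # the maximum discrepancy shrinks as n grows
```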
The Exponential Distribution
Recall that the exponential distribution is a continuous distribution on $[0, \infty)$ with probability density function $f$ given by $f(t) = r e^{-r t}, \quad t \in (0, \infty)$ where $r \in (0, \infty)$ is the rate parameter. This distribution is widely used to model failure times and other random times, and in particular governs the time between arrivals in the Poisson model. The exponential distribution is studied in more detail in the chapter on the Poisson Process.
Suppose that $T$ has the exponential distribution with rate parameter $r \in (0, \infty)$ and let $M$ denote the moment generating function of $T$. Then
1. $M(s) = \frac{r}{r - s}$ for $s \in (-\infty, r)$.
2. $\E(T^n) = n! / r^n$ for $n \in \N$
Proof
1. $M(s) = \E\left(e^{s T}\right) = \int_0^\infty e^{s t} r e^{-r t} \, dt = \int_0^\infty r e^{(s - r) t} \, dt = \frac{r}{r - s}$ for $s \lt r$.
2. $M^{(n)}(s) = \frac{r \, n!}{(r - s)^{n+1}}$ for $n \in \N$, so $\E\left(T^n\right) = M^{(n)}(0) = n! \big/ r^n$
Suppose that $(T_1, T_2, \ldots)$ is a sequence of independent random variables, each having the exponential distribution with rate parameter $r \in (0, \infty)$. For $n \in \N_+$, the moment generating function $M_n$ of $U_n = \sum_{i=1}^n T_i$ is given by $M_n(s) = \left(\frac{r}{r - s}\right)^n, \quad s \in (-\infty, r)$
Proof
This follows from the previous result and the result for sums of independent variables.
Random variable $U_n$ has the Erlang distribution with shape parameter $n$ and rate parameter $r$, named for Agner Erlang. This distribution governs the $n$th arrival time in the Poisson model. The Erlang distribution is a special case of the gamma distribution and is studied in more detail in the chapter on the Poisson Process.
Uniform Distributions
Suppose that $a, \, b \in \R$ and $a \lt b$. Recall that the continuous uniform distribution on the interval $[a, b]$ has probability density function $f$ given by $f(x) = \frac{1}{b - a}, \quad x \in [a, b]$ The distribution corresponds to selecting a point at random from the interval. Continuous uniform distributions arise in geometric probability and a variety of other applied problems.
Suppose that $X$ is uniformly distributed on the interval $[a, b]$ and let $M$ denote the moment generating function of $X$. Then
1. $M(t) = \frac{e^{b t} - e^{a t}}{(b - a)t}$ if $t \ne 0$ and $M(0) = 1$
2. $\E\left(X^n\right) = \frac{b^{n+1} - a^{n + 1}}{(n + 1)(b - a)}$ for $n \in \N$
Proof
1. $M(t) = \int_a^b e^{t x} \frac{1}{b - a} \, dx = \frac{e^{b t} - e^{a t}}{(b - a)t}$ if $t \ne 0$. Trivially $M(0) = 1$
2. This is a case where the MGF is not helpful, and it's much easier to compute the moments directly: $\E\left(X^n\right) = \int_a^b x^n \frac{1}{b - a} \, dx = \frac{b^{n+1} - a^{n + 1}}{(n + 1)(b - a)}$
Suppose that $(X, Y)$ is uniformly distributed on the triangle $T = \{(x, y) \in \R^2: 0 \le x \le y \le 1\}$. Compute each of the following:
1. The joint moment generating function of $(X, Y)$.
2. The moment generating function of $X$.
3. The moment generating function of $Y$.
4. The moment generating function of $X + Y$.
Answer
1. $M(s, t) = 2 \frac{e^{s+t} - 1}{s (s + t)} - 2 \frac{e^t - 1}{s t}$ if $s \ne 0, \; t \ne 0$. $M(0, 0) = 1$
2. $M_1(s) = 2 \left(\frac{e^s}{s^2} - \frac{1}{s^2} - \frac{1}{s}\right)$ if $s \ne 0$. $M_1(0) = 1$
3. $M_2(t) = 2 \frac{t e^t - e^t + 1}{t^2}$ if $t \ne 0$. $M_2(0) = 1$
4. $M_+(t) = \frac{e^{2 t} - 1}{t^2} - 2 \frac{e^t - 1}{t^2}$ if $t \ne 0$. $M_+(0) = 1$
A Bivariate Distribution
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = x + y$ for $(x, y) \in [0, 1]^2$. Compute each of the following:
1. The joint moment generating function of $(X, Y)$.
2. The moment generating function of $X$.
3. The moment generating function of $Y$.
4. The moment generating function of $X + Y$.
Answer
1. $M(s, t) = \frac{e^{s+t}(2 s t - s - t) + (e^s + e^t)(s + t - s t) - (s + t)}{s^2 t^2}$ if $s \ne 0, \, t \ne 0$. $M(0, 0) = 1$
2. $M_1(s) = \frac{3 s e^s - 2 e^s - s + 2}{2 s^2}$ if $s \ne 0$. $M_1(0) = 1$
3. $M_2(t) = \frac{3 t e^t - 2 e^t - t + 2}{2 t^2}$ if $t \ne 0$. $M_2(0) = 1$
4. $M_+(t) = \frac{2\left[e^{2 t}(t - 1) + e^t (2 - t) - 1\right]}{t^3}$ if $t \ne 0$. $M_+(0) = 1$
The Normal Distribution
Recall that the standard normal distribution is a continuous distribution on $\R$ with probability density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$ Normal distributions are widely used to model physical measurements subject to small, random errors and are studied in more detail in the chapter on Special Distributions.
Suppose that $Z$ has the standard normal distribution and let $M$ denote the moment generating function of $Z$. Then
1. $M(t) = e^{\frac{1}{2} t^2}$ for $t \in \R$
2. $\E\left(Z^n\right) = 1 \cdot 3 \cdots (n - 1)$ if $n$ is even and $\E\left(Z^n\right) = 0$ if $n$ is odd.
Proof
1. First, $M(t) = \E\left(e^{t Z}\right) = \int_{-\infty}^\infty e^{t z} \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \, dz = \int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp\left(-\frac{z^2}{2} + t z\right) \, dz$ Completing the square in $z$ gives $\exp\left(-\frac{z^2}{2} + t z\right) = \exp\left[\frac{1}{2} t^2 - \frac{1}{2}(z - t)^2 \right] = e^{\frac{1}{2} t^2} \exp\left[-\frac{1}{2} (z - t)^2\right]$. Hence $M(t) = e^{\frac{1}{2} t^2} \int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp\left[-\frac{1}{2} (z - t)^2\right] \, dz = e^{\frac{1}{2} t^2}$ because the function of $z$ in the last integral is the probability density function for the normal distribution with mean $t$ and variance 1.
2. Note that $M^\prime(t) = t M(t)$. Thus, repeated differentiation gives $M^{(n)}(t) = p_n(t) M(t)$ for $n \in \N$, where $p_n$ is a polynomial of degree $n$ satisfying $p_{n+1}(t) = t p_n(t) + p_n^\prime(t)$. Since $p_0 = 1$, it's easy to see that $p_n$ has only even or only odd terms, depending on whether $n$ is even or odd, respectively. Thus, $\E\left(Z^n\right) = M^{(n)}(0) = p_n(0)$. This is 0 if $n$ is odd, and is the constant term $1 \cdot 3 \cdots (n - 1)$ if $n$ is even. Of course, we can also see that the odd order moments must be 0 by symmetry.
More generally, for $\mu \in \R$ and $\sigma \in (0, \infty)$, recall that the normal distribution with mean $\mu$ and standard deviation $\sigma$ is a continuous distribution on $\R$ with probability density function $f$ given by $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ Moreover, if $Z$ has the standard normal distribution, then $X = \mu + \sigma Z$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. Thus, we can easily find the moment generating function of $X$:
Suppose that $X$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. The moment generating function of $X$ is $M(t) = \exp\left(\mu t + \frac{1}{2} \sigma^2 t^2\right), \quad t \in \R$
Proof
This follows easily from the previous result and the result for linear transformations: $X = \mu + \sigma Z$ where $Z$ has the standard normal distribution. Hence $M(t) = \E\left(e^{t X}\right) = e^{\mu t} \E\left(e^{\sigma t Z}\right) = e^{\mu t} e^{\frac{1}{2} \sigma^2 t ^2}, \quad t \in \R$
So the normal family of distributions is closed under location-scale transformations. The family is also closed with respect to sums of independent variables:
If $X$ and $Y$ are independent, normally distributed random variables then $X + Y$ has a normal distribution.
Proof
Suppose that $X$ has the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$, and that $Y$ has the normal distribution with mean $\nu \in \R$ and standard deviation $\tau \in (0, \infty)$. By (14), the MGF of $X + Y$ is $M_{X+Y}(t) = M_X(t) M_Y(t) = \exp\left(\mu t + \frac{1}{2} \sigma^2 t^2\right) \exp\left(\nu t + \frac{1}{2} \tau^2 t^2\right) = \exp\left[(\mu + \nu) t + \frac{1}{2}\left(\sigma^2 + \tau^2\right) t^2 \right]$ which we recognize as the MGF of the normal distribution with mean $\mu + \nu$ and variance $\sigma^2 + \tau^2$. Of course, we already knew that $\E(X + Y) = \E(X) + \E(Y)$, and since $X$ and $Y$ are independent, $\var(X + Y) = \var(X) + \var(Y)$, so the new information is that the distribution is also normal.
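A quick simulation makes the closure property concrete. The sketch below (assuming NumPy; the parameter values are arbitrary choices of ours) compares the empirical mean, standard deviation, and MGF of $X + Y$ with the formulas in the proof.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, nu, tau = 1.0, 2.0, -3.0, 1.5
X = rng.normal(mu, sigma, 10**6)
Y = rng.normal(nu, tau, 10**6)
S = X + Y

print(S.mean(), mu + nu)                    # mean mu + nu = -2
print(S.std(), np.sqrt(sigma**2 + tau**2))  # sd sqrt(sigma^2 + tau^2) = 2.5
t = 0.3
print(np.exp(t * S).mean(),                 # empirical MGF at t
      np.exp((mu + nu) * t + 0.5 * (sigma**2 + tau**2) * t**2))
```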
The Pareto Distribution
Recall that the Pareto distribution is a continuous distribution on $[1, \infty)$ with probability density function $f$ given by $f(x) = \frac{a}{x^{a + 1}}, \quad x \in [1, \infty)$ where $a \in (0, \infty)$ is the shape parameter. The Pareto distribution is named for Vilfredo Pareto. It is a heavy-tailed distribution that is widely used to model financial variables such as income. The Pareto distribution is studied in more detail in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a$, and let $M$ denote the moment generating function of $X$. Then
1. $\E\left(X^n\right) = \frac{a}{a - n}$ if $n \lt a$ and $\E\left(X^n\right) = \infty$ if $n \ge a$
2. $M(t) = \infty$ for $t \gt 0$
Proof
1. We have seen this computation before. $\E\left(X^n\right) = \int_1^\infty x^n \frac{a}{x^{a+1}} \, dx = \int_1^\infty x^{n - a - 1} \, dx$. The integral evaluates to $\frac{a}{a - n}$ if $n \lt a$ and $\infty$ if $n \ge a$.
2. This follows from part (a). Since $X \ge 1$, $M(t)$ is increasing in $t$. Thus $M(t) \le 1$ if $t \lt 0$. If $M(t) \lt \infty$ for some $t \gt 0$, then $M(t)$ would be finite for $t$ in an open interval about 0, in which case $X$ would have finite moments of all orders. Of course, it's also easy to see directly from the integral that $M(t) = \infty$ for $t \gt 0$.
On the other hand, like all distributions on $\R$, the Pareto distribution has a characteristic function. However, the characteristic function of the Pareto distribution does not have a simple, closed form.
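The divergence of the higher moments is easy to see in simulation. In the sketch below (assuming NumPy; the shape parameter $a = 5/2$ is an arbitrary choice of ours), the Pareto variable is sampled by the inverse-CDF method $X = U^{-1/a}$: the sample moments converge for $n \lt a$ but are unstable for $n \ge a$.

```python
import numpy as np

rng = np.random.default_rng(3)
a = 2.5
X = rng.random(10**6) ** (-1 / a)   # inverse-CDF sampling: X = U^(-1/a) is Pareto(a)

for n in (1, 2):                    # n < a, so E(X^n) = a/(a - n) is finite
    print(n, (X**n).mean(), a / (a - n))
print(3, (X**3).mean())             # n >= a: the true moment is infinite, so the sample
                                    # mean keeps growing as the sample size increases
```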
The Cauchy Distribution
Recall that the (standard) Cauchy distribution is a continuous distribution on $\R$ with probability density function $f$ given by $f(x) = \frac{1}{\pi \left(1 + x^2\right)}, \quad x \in \R$ and is named for Augustin Cauchy. The Cauchy distribution is studied in more generality in the chapter on Special Distributions. The graph of $f$ is known as the Witch of Agnesi, named for Maria Agnesi.
Suppose that $X$ has the standard Cauchy distribution, and let $M$ denote the moment generating function of $X$. Then
1. $\E(X)$ does not exist.
2. $M(t) = \infty$ for $t \ne 0$.
Proof
1. We have seen this computation before. $\int_a^\infty \frac{x}{\pi (1 + x^2)} \, dx = \infty$ and $\int_{-\infty}^a \frac{x}{\pi (1 + x^2)} \, dx = -\infty$ for every $a \in \R$, so $\int_{-\infty}^\infty \frac{x}{\pi (1 + x^2)} \, dx$ does not exist.
2. Note that $\int_0^\infty \frac{e^{t x}}{\pi (1 + x^2) } \, dx = \infty$ if $t \gt 0$ and $\int_{-\infty}^0 \frac{e^{t x}}{\pi (1 + x^2)} \, dx = \infty$ if $t \lt 0$.
Once again, all distributions on $\R$ have characteristic functions, and the standard Cauchy distribution has a particularly simple one.
Let $\chi$ denote the characteristic function of $X$. Then $\chi(t) = e^{-\left|t\right|}$ for $t \in \R$.
Proof
The proof of this result requires contour integrals in the complex plane, and is given in the section on the Cauchy distribution in the chapter on special distributions.
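Although the proof requires complex analysis, the formula itself is easy to check numerically. In the sketch below (assuming NumPy), symmetry of the Cauchy distribution gives $\chi(t) = \E[\cos(t X)]$, which is bounded and hence can be estimated by simulation in spite of the heavy tails.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_cauchy(10**6)
for t in (0.5, 1.0, 2.0):
    # chi(t) = E[cos(t X)] by symmetry; compare with e^{-|t|}
    print(t, np.cos(t * X).mean(), np.exp(-abs(t)))
```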
Counterexample
For the Pareto distribution, only some of the moments are finite; so of course, the moment generating function cannot be finite in an interval about 0. We will now give an example of a distribution for which all of the moments are finite, yet still the moment generating function is not finite in any interval about 0. Furthermore, we will see two different distributions that have the same moments of all orders.
Suppose that Z has the standard normal distribution and let $X = e^Z$. The distribution of $X$ is known as the (standard) lognormal distribution. The lognormal distribution is studied in more generality in the chapter on Special Distributions. This distribution has finite moments of all orders, but infinite moment generating function.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{\sqrt{2 \pi} x} \exp\left(-\frac{1}{2} \ln^2(x)\right), \quad x \gt 0$
1. $\E\left(X^n\right) = e^{\frac{1}{2}n^2}$ for $n \in \N$.
2. $\E\left(e^{t X}\right) = \infty$ for $t \gt 0$.
Proof
We use the change of variables theorem. The transformation is $x = e^z$ so the inverse transformation is $z = \ln x$ for $x \in (0, \infty)$ and $z \in \R$. Letting $\phi$ denote the PDF of $Z$, it follows that the PDF of $X$ is $f(x) = \phi(z) \, dz / dx = \phi\left(\ln x\right) \big/ x$ for $x \gt 0$.
1. We use the moment generating function of the standard normal distribution given above: $\E\left(X^n\right) = \E\left(e^{n Z}\right) = e^{n^2 / 2}$.
2. Note that $\E\left(e^{t X}\right) = \E\left[\sum_{n=0}^\infty \frac{(t X)^n}{n!}\right] = \sum_{n=0}^\infty \frac{\E(X^n)}{n!} t^n = \sum_{n=0}^\infty \frac{e^{n^2 / 2}}{n!} t^n = \infty, \quad t \gt 0$ The interchange of expected value and sum is justified since $X$ is nonnegative. See the advanced section on properties of the integral in the chapter on Distributions for more details.
Next we construct a different distribution with the same moments as $X$.
Let $h$ be the function defined by $h(x) = \sin\left(2 \pi \ln x \right)$ for $x \gt 0$ and let $g$ be the function defined by $g(x) = f(x)\left[1 + h(x)\right]$ for $x \gt 0$. Then
1. $g$ is a probability density function.
2. If $Y$ has probability density function $g$ then $\E\left(Y^n\right) = e^{\frac{1}{2} n^2}$ for $n \in \N$
Proof
Substituting $x = e^z$ gives $\int_0^\infty x^n f(x) h(x) \, dx = \int_{-\infty}^\infty e^{n z} \phi(z) \sin(2 \pi z) \, dz$ for $n \in \N$. Since $e^{n z} \phi(z) = e^{n^2 / 2} \phi(z - n)$, the substitution $u = z - n$ and the fact that $\sin[2 \pi (u + n)] = \sin(2 \pi u)$ give $\int_0^\infty x^n f(x) h(x) \, dx = e^{n^2/2} \int_{-\infty}^\infty \phi(u) \sin(2 \pi u) \, du = 0$ since $\phi$ is even and the sine function is odd.
1. Taking $n = 0$ shows that $\int_0^\infty g(x) \, dx = \int_0^\infty f(x) \, dx = 1$. Also $g \ge 0$ since $\left|h\right| \le 1$, so $g$ is a probability density function.
2. For $n \in \N$, $\E\left(Y^n\right) = \int_0^\infty x^n g(x) \, dx = \int_0^\infty x^n f(x) \, dx = \E\left(X^n\right) = e^{\frac{1}{2} n^2}$
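The moment identities can also be verified numerically. The sketch below (assuming Python with NumPy and SciPy) integrates in the variable $z = \ln x$, where the $n$th moment of $g$ becomes $\int e^{n z} \phi(z)[1 + \sin(2 \pi z)]\,dz$; the case $n = 0$ confirms that $g$ is a density.

```python
import numpy as np
from scipy.integrate import quad

phi = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

# Integrate over [-20, 20]; the integrand is negligible outside this range for n <= 3
for n in range(4):
    moment = quad(lambda z: np.exp(n * z) * phi(z) * (1 + np.sin(2 * np.pi * z)), -20, 20)[0]
    print(n, moment, np.exp(n**2 / 2))  # n = 0 gives 1; otherwise e^{n^2/2}, same as for f
```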
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\mse}{\text{MSE}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. Suppose next that $X$ is a random variable taking values in a set $S$ and that $Y$ is a random variable taking values in $T \subseteq \R$. We assume that either $Y$ has a discrete distribution, so that $T$ is countable, or that $Y$ has a continuous distribution so that $T$ is an interval (or perhaps a union of intervals). In this section, we will study the conditional expected value of $Y$ given $X$, a concept of fundamental importance in probability. As we will see, the expected value of $Y$ given $X$ is the function of $X$ that best approximates $Y$ in the mean square sense. Note that $X$ is a general random variable, not necessarily real-valued, but as usual, we will assume that either $X$ has a discrete distribution, so that $S$ is countable or that $X$ has a continuous distribution on $S \subseteq \R^n$ for some $n \in \N_+$. In the latter case, $S$ is typically a region defined by inequalities involving elementary functions. We will also assume that all expected values that are mentioned exist (as real numbers).
Basic Theory
Definitions
Note that we can think of $(X, Y)$ as a random variable that takes values in the Cartesian product set $S \times T$. We need to recall some basic facts from our work with joint distributions and conditional distributions.
We assume that $(X, Y)$ has joint probability density function $f$ and we let $g$ denote the (marginal) probability density function of $X$. Recall that if $Y$ has a discrete distribution then $g(x) = \sum_{y \in T} f(x, y), \quad x \in S$ and if $Y$ has a continuous distribution then $g(x) = \int_T f(x, y) \, dy, \quad x \in S$ In either case, for $x \in S$, the conditional probability density function of $Y$ given $X = x$ is defined by $h(y \mid x) = \frac{f(x, y)}{g(x)}, \quad y \in T$
We are now ready for the basic definitions:
For $x \in S$, the conditional expected value of $Y$ given $X = x$ is simply the mean computed relative to the conditional distribution. So if $Y$ has a discrete distribution then $\E(Y \mid X = x) = \sum_{y \in T} y h(y \mid x), \quad x \in S$ and if $Y$ has a continuous distribution then $\E(Y \mid X = x) = \int_T y h(y \mid x) \, dy, \quad x \in S$
1. The function $v: S \to \R$ defined by $v(x) = \E(Y \mid X = x)$ for $x \in S$ is the regression function of $Y$ based on $X$.
2. The random variable $v(X)$ is called the conditional expected value of $Y$ given $X$ and is denoted $\E(Y \mid X)$.
Intuitively, we treat $X$ as known, and therefore not random, and we then average $Y$ with respect to the probability distribution that remains. The advanced section on conditional expected value gives a much more general definition that unifies the definitions given here for the various distribution types.
Properties
The most important property of the random variable $\E(Y \mid X)$ is given in the following theorem. In a sense, this result states that $\E(Y \mid X)$ behaves just like $Y$ in terms of other functions of $X$, and is essentially the only function of $X$ with this property.
The fundamental property
1. $\E\left[r(X) \E(Y \mid X)\right] = \E\left[r(X) Y\right]$ for every function $r: S \to \R$.
2. If $u: S \to \R$ satisfies $\E[r(X) u(X)] = \E[r(X) Y]$ for every $r: S \to \R$ then $\P\left[u(X) = \E(Y \mid X)\right] = 1$.
Proof
We give the proof in the continuous case. The discrete case is analogous, with sums replacing integrals.
1. From the change of variables theorem for expected value, \begin{align} \E\left[r(X) \E(Y \mid X)\right] & = \int_S r(x) \E(Y \mid X = x) g(x) \, dx = \int_S r(x) \left(\int_T y h(y \mid x) \, dy \right) g(x) \, dx \\ & = \int_S \int_T r(x) y h(y \mid x) g(x) \, dy \, dx = \int_{S \times T} r(x) y f(x, y) \, d(x, y) = \E[r(X) Y] \end{align}
2. Suppose that $u_1: S \to \R$ and $u_2: S \to \R$ satisfy the condition in (b). Define $r: S \to \R$ by $r(x) = \bs 1[u_1(x) \gt u_2(x)]$. Then by assumption, $\E\left[r(X) u_1(X)\right] = \E\left[r(X) Y\right] = \E\left[r(X) u_2(X)\right]$ But if $\P\left[u_1(X) \gt u_2(X)\right] \gt 0$ then $\E\left[r(X) u_1(X)\right] \gt \E\left[r(X) u_2(X)\right]$, a contradiction. Hence we must have $\P\left[u_1(X) \gt u_2(X)\right] = 0$ and by a symmetric argument, $\P[u_1(X) \lt u_2(X)] = 0$.
Two random variables that are equal with probability 1 are said to be equivalent. We often think of equivalent random variables as being essentially the same object, so the fundamental property above essentially characterizes $\E(Y \mid X)$. That is, we can think of $\E(Y \mid X)$ as any random variable that is a function of $X$ and satisfies this property. Moreover the fundamental property can be used as a definition of conditional expected value, regardless of the type of the distribution of $(X, Y)$. If you are interested, read the more advanced treatment of conditional expected value.
Suppose that $X$ is also real-valued. Recall that the best linear predictor of $Y$ based on $X$ was characterized by property (a), but with just two functions: $r(x) = 1$ and $r(x) = x$. Thus the characterization in the fundamental property is certainly reasonable, since (as we show below) $\E(Y \mid X)$ is the best predictor of $Y$ among all functions of $X$, not just linear functions.
The basic property is also very useful for establishing other properties of conditional expected value. Our first consequence is the fact that $Y$ and $\E(Y \mid X)$ have the same mean.
$\E\left[\E(Y \mid X)\right] = \E(Y)$.
Proof
Let $r$ be the constant function 1 in the basic property.
Aside from the theoretical interest, this theorem is often a good way to compute $\E(Y)$ when we know the conditional distribution of $Y$ given $X$. We say that we are computing the expected value of $Y$ by conditioning on $X$.
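Here is a small simulation illustrating the technique (assuming NumPy; the particular model, $X$ exponential with rate 1 and $Y$ conditionally Poisson with mean $X$, is an arbitrary choice of ours): since $\E(Y \mid X) = X$, the theorem gives $\E(Y) = \E(X) = 1$.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.exponential(1.0, 10**6)
Y = rng.poisson(X)           # given X, Y is Poisson with mean X
print(Y.mean(), X.mean())    # both close to E(Y) = E[E(Y | X)] = E(X) = 1
```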
For many basic properties of ordinary expected value, there are analogous results for conditional expected value. We start with two of the most important, linearity and monotonicity, critical properties that every type of expected value must satisfy. In the following two theorems, the random variables $Y$ and $Z$ are real-valued, and as before, $X$ is a general random variable.
Linear Properties
1. $\E(Y + Z \mid X) = \E(Y \mid X) + \E(Z \mid X)$.
2. $\E(c \, Y \mid X) = c \, \E(Y \mid X)$
Proof
1. Note that $\E(Y \mid X) + \E(Z \mid X)$ is a function of $X$. If $r: S \to \R$ then $\E\left(r(X) \left[\E(Y \mid X) + \E(Z \mid X)\right]\right) = \E\left[r(X) \E(Y \mid X)\right] + \E\left[r(X) \E(Z \mid X)\right] = \E\left[r(X) Y\right] + \E\left[r(X) Z\right] = \E\left[r(X) (Y + Z)\right]$ Hence the result follows from the basic property.
2. Note that $c \E(Y \mid X)$ is a function of $X$. If $r: S \to \R$ then $\E\left[r(X) c \E(Y \mid X)\right] = c \E\left[r(X) \E(Y \mid X)\right] = c \E\left[r(X) Y\right] = \E\left[r(X) (c Y)\right]$ Hence the result follows from the basic property.
Part (a) is the additive property and part (b) is the scaling property. The scaling property will be significantly generalized below in (8).
Positive and Increasing Properties
1. If $Y \ge 0$ then $\E(Y \mid X) \ge 0$.
2. If $Y \le Z$ then $\E(Y \mid X) \le \E(Z \mid X)$.
3. $\left|\E(Y \mid X)\right| \le \E\left(\left|Y\right| \mid X\right)$
Proof
1. This follows directly from the definition.
2. Note that if $Y \le Z$ then $Z - Y \ge 0$ so by (a) and linearity, $\E(Z - Y \mid X) = \E(Z \mid X) - \E(Y \mid X) \ge 0$
3. Note that $-\left|Y\right| \le Y \le \left|Y\right|$ and hence by (b) and linearity, $-\E\left(\left|Y\right| \mid X \right) \le \E(Y \mid X) \le \E\left(\left|Y\right| \mid X\right)$.
Our next few properties relate to the idea that $\E(Y \mid X)$ is the expected value of $Y$ given $X$. The first property is essentially a restatement of the fundamental property.
If $r: S \to \R$, then $Y - \E(Y \mid X)$ and $r(X)$ are uncorrelated.
Proof
Note that $Y - \E(Y \mid X)$ has mean 0 by the mean property. Hence, by the basic property, $\cov\left[Y - \E(Y \mid X), r(X)\right] = \E\left\{\left[Y - \E(Y \mid X)\right] r(X)\right\} = \E\left[Y r(X)\right] - \E\left[\E(Y \mid X) r(X)\right] = 0$
The next result states that any (deterministic) function of $X$ acts like a constant in terms of the conditional expected value with respect to $X$.
If $s: S \to \R$ then $\E\left[s(X)\,Y \mid X\right] = s(X)\,\E(Y \mid X)$
Proof
Note that $s(X) \E(Y \mid X)$ is a function of $X$. If $r: S \to \R$ then $\E\left[r(X) s(X) \E(Y \mid X)\right] = \E\left[r(X) s(X) Y\right]$ So the result now follows from the basic property.
The following rule generalizes theorem (8) and is sometimes referred to as the substitution rule for conditional expected value.
If $s: S \times T \to \R$ then $\E\left[s(X, Y) \mid X = x\right] = \E\left[s(x, Y) \mid X = x\right]$
In particular, it follows from (8) that $\E[s(X) \mid X] = s(X)$. At the opposite extreme, we have the next result: If $X$ and $Y$ are independent, then knowledge of $X$ gives no information about $Y$ and so the conditional expected value with respect to $X$ reduces to the ordinary (unconditional) expected value of $Y$.
If $X$ and $Y$ are independent then $\E(Y \mid X) = \E(Y)$
Proof
Trivially, $\E(Y)$ is a (constant) function of $X$. If $r: S \to \R$ then $\E\left[\E(Y) r(X)\right] = \E(Y) \E\left[r(X)\right] = \E\left[Y r(X)\right]$, the last equality by independence. Hence the result follows from the basic property.
Suppose now that $Z$ is real-valued and that $X$ and $Y$ are random variables (all defined on the same probability space, of course). The following theorem gives a consistency condition of sorts. Iterated conditional expected values reduce to a single conditional expected value with respect to the minimum amount of information. For simplicity, we write $\E(Z \mid X, Y)$ rather than $\E\left[Z \mid (X, Y)\right]$.
Consistency
1. $\E\left[\E(Z \mid X, Y) \mid X\right] = \E(Z \mid X)$
2. $\E\left[\E(Z \mid X) \mid X, Y\right] = \E(Z \mid X)$
Proof
1. Suppose that $X$ takes values in $S$ and $Y$ takes values in $T$, so that $(X, Y)$ takes values in $S \times T$. By definition, $\E(Z \mid X)$ is a function of $X$. If $r: S \to \R$ then trivially $r$ can be thought of as a function on $S \times T$ as well. Hence $\E\left[r(X) \E(Z \mid X)\right] = \E\left[r(X) Z\right] = \E\left[r(X) \E(Z \mid X, Y)\right]$ It follows from the basic property that $\E\left[\E(Z \mid X, Y) \mid X\right] = \E(Z \mid X)$.
2. Note that since $\E(Z \mid X)$ is a function of $X$, it is trivially a function of $(X, Y)$. Hence from (8), $\E\left[\E(Z \mid X) \mid X, Y\right] = \E(Z \mid X)$.
Finally we show that $\E(Y \mid X)$ has the same covariance with $X$ as does $Y$, not surprising since again, $\E(Y \mid X)$ behaves just like $Y$ in its relations with $X$.
$\cov\left[X, \E(Y \mid X)\right] = \cov(X, Y)$.
Proof
$\cov\left[X, \E(Y \mid X)\right] = \E\left[X \E(Y \mid X)\right] - \E(X) \E\left[\E(Y \mid X)\right]$. But $\E\left[X \E(Y \mid X)\right] = \E(X Y)$ by basic property, and $\E\left[\E(Y \mid X)\right] = \E(Y)$ by the mean property. Hence $\cov\left[X, \E(Y \mid X)\right] = \E(X Y) - \E(X) \E(Y) = \cov(X, Y)$.
Conditional Probability
The conditional probability of an event $A$, given random variable $X$ (as above), can be defined as a special case of the conditional expected value. As usual, let $\bs 1_A$ denote the indicator random variable of $A$.
If $A$ is an event, define $\P(A \mid X) = \E\left(\bs{1}_A \mid X\right)$
Here is the fundamental property for conditional probability:
The fundamental property
1. $\E\left[r(X) \P(A \mid X)\right] = \E\left[r(X) \bs{1}_A\right]$ for every function $r: S \to \R$.
2. If $u: S \to \R$ and $u(X)$ satisfies $\E[r(X) u(X)] = \E\left[r(X) \bs 1_A\right]$ for every function $r: S \to \R$, then $\P\left[u(X) = \P(A \mid X)\right] = 1$.
For example, suppose that $X$ has a discrete distribution on a countable set $S$ with probability density function $g$. Then (a) becomes $\sum_{x \in S} r(x) \P(A \mid X = x) g(x) = \sum_{x \in S} r(x) \P(A, X = x)$ But this is obvious since $\P(A \mid X = x) = \P(A, X = x) \big/ \P(X = x)$ and $g(x) = \P(X = x)$. Similarly, if $X$ has a continuous distribution on $S \subseteq \R^n$ then (a) states that $\E\left[r(X) \bs{1}_A\right] = \int_S r(x) \P(A \mid X = x) g(x) \, dx$
The properties above for conditional expected value, of course, have special cases for conditional probability.
$\P(A) = \E\left[\P(A \mid X)\right]$.
Proof
This is a direct result of the mean property, since $\E(\bs{1}_A) = \P(A)$.
Again, the result in the previous exercise is often a good way to compute $\P(A)$ when we know the conditional probability of $A$ given $X$. We say that we are computing the probability of $A$ by conditioning on $X$. This is a very compact and elegant version of the conditioning result given first in the section on Conditional Probability in the chapter on Probability Spaces and later in the section on Discrete Distributions in the Chapter on Distributions.
The following result gives the conditional version of the axioms of probability.
Axioms of probability
1. $\P(A \mid X) \ge 0$ for every event $A$.
2. $\P(\Omega \mid X) = 1$
3. If $\{A_i: i \in I\}$ is a countable collection of disjoint events then $\P\left(\bigcup_{i \in I} A_i \bigm| X\right) = \sum_{i \in I} \P(A_i \mid X)$.
Details
There are some technical issues involving the countable additivity property (c). The conditional probabilities are random variables, and so for a given collection $\{A_i: i \in I\}$, the left and right sides are the same with probability 1. We will return to this point in the more advanced section on conditional expected value.
From the last result, it follows that other standard probability rules hold for conditional probability given $X$. These results include
• the complement rule
• the increasing property
• Boole's inequality
• Bonferroni's inequality
• the inclusion-exclusion laws
The Best Predictor
The next result shows that, of all functions of $X$, $\E(Y \mid X)$ is closest to $Y$, in the sense of mean square error. This is fundamentally important in statistical problems where the predictor vector $X$ can be observed but not the response variable $Y$. In this subsection and the next, we assume that the real-valued random variables have finite variance.
If $u: S \to \R$, then
1. $\E\left(\left[\E(Y \mid X) - Y\right]^2\right) \le \E\left(\left[u(X) - Y\right]^2\right)$
2. Equality holds in (a) if and only if $u(X) = \E(Y \mid X)$ with probability 1.
Proof
1. Note that \begin{align} \E\left(\left[Y - u(X)\right]^2\right) & = \E\left(\left[Y - \E(Y \mid X) + \E(Y \mid X) - u(X)\right]^2\right) \\ & = \E\left(\left[Y - \E(Y \mid X)\right]^2 \right) + 2 \E\left(\left[Y - \E(Y \mid X)\right] \left[\E(Y \mid X) - u(X)\right]\right) + \E\left(\left[\E(Y \mid X) - u(X)\right]^2\right) \end{align} But $Y - \E(Y \mid X)$ has mean 0, so the middle term on the right is $2 \cov\left[Y - \E(Y \mid X), \E(Y \mid X) - u(X)\right]$. Moreover, $\E(Y \mid X) - u(X)$ is a function of $X$ and hence is uncorrelated with $Y - \E(Y \mid X)$ by the general uncorrelated property. Hence the middle term is 0, so $\E\left(\left[Y - u(X)\right]^2\right) = \E\left(\left[Y - \E(Y \mid X)\right]^2 \right) + \E\left(\left[\E(Y \mid X) - u(X)\right]^2\right)$ and therefore $\E\left(\left[Y - \E(Y \mid X)\right]^2 \right) \le \E\left(\left[Y - u(X)\right]^2\right)$.
2. Equality holds if and only if $\E\left(\left[\E(Y \mid X) - u(X)\right]^2\right) = 0$, if and only if $\P\left[u(X) = \E(Y \mid X)\right] = 1$.
Suppose now that $X$ is real-valued. In the section on covariance and correlation, we found that the best linear predictor of $Y$ given $X$ is
$L(Y \mid X) = \E(Y) + \frac{\cov(X,Y)}{\var(X)} \left[X - \E(X)\right]$
On the other hand, $\E(Y \mid X)$ is the best predictor of $Y$ among all functions of $X$. It follows that if $\E(Y \mid X)$ happens to be a linear function of $X$ then it must be the case that $\E(Y \mid X) = L(Y \mid X)$. However, we will give a direct proof also:
If $\E(Y \mid X) = a + b X$ for constants $a$ and $b$ then $\E(Y \mid X) = L(Y \mid X)$; that is,
1. $b = \cov(X,Y) \big/ \var(X)$
2. $a = \E(Y) - \E(X) \cov(X,Y) \big/ \var(X)$
Proof
First, $\E(Y) = \E\left[\E(Y \mid X)\right] = a + b \E(X)$, so $a = \E(Y) - b \E(X)$. Next, $\cov(X, Y) = \cov\left[X, \E(Y \mid X)\right] = \cov(X, a + b X) = b \var(X)$ and therefore $b = \cov(X, Y) \big/ \var(X)$.
Conditional Variance
The conditional variance of $Y$ given $X$ is defined like the ordinary variance, but with all expected values conditioned on $X$.
The conditional variance of $Y$ given $X$ is defined as $\var(Y \mid X) = \E\left(\left[Y - \E(Y \mid X)\right]^2 \biggm| X \right)$
Thus, $\var(Y \mid X)$ is a function of $X$, and in particular, is a random variable. Our first result is a computational formula that is analogous to the one for standard variance—the variance is the mean of the square minus the square of the mean, but now with all expected values conditioned on $X$:
$\var(Y \mid X) = \E\left(Y^2 \mid X\right) - \left[\E(Y \mid X)\right]^2$.
Proof
Expanding the square in the definition and using basic properties of conditional expectation, we have
\begin{align} \var(Y \mid X) & = \E\left(Y^2 - 2 Y \E(Y \mid X) + \left[\E(Y \mid X)\right]^2 \biggm| X \right) = \E\left(Y^2 \mid X\right) - 2 \E\left[Y \E(Y \mid X) \mid X\right] + \E\left(\left[\E(Y \mid X)\right]^2 \mid X\right) \\ & = \E\left(Y^2 \mid X\right) - 2 \E(Y \mid X) \E(Y \mid X) + \left[\E(Y \mid X)\right]^2 = \E\left(Y^2 \mid X\right) - \left[\E(Y \mid X)\right]^2 \end{align}
Our next result shows how to compute the ordinary variance of $Y$ by conditioning on $X$.
$\var(Y) = \E\left[\var(Y \mid X)\right] + \var\left[\E(Y \mid X)\right]$.
Proof
From the previous theorem and properties of conditional expected value we have $\E\left[\var(Y \mid X)\right] = \E\left(Y^2\right) - \E\left(\left[\E(Y \mid X)\right]^2\right)$. But $\E\left(Y^2\right) = \var(Y) + \left[\E(Y)\right]^2$ and similarly, $\E\left(\left[\E(Y \mid X)\right]^2\right) = \var\left[\E(Y \mid X)\right] + \left(\E\left[\E(Y \mid X)\right]\right)^2$. But also, $\E\left[\E(Y \mid X)\right] = \E(Y)$ so substituting we get $\E\left[\var(Y \mid X)\right] = \var(Y) - \var\left[\E(Y \mid X)\right]$.
Thus, the variance of $Y$ is the expected conditional variance plus the variance of the conditional expected value. This result is often a good way to compute $\var(Y)$ when we know the conditional distribution of $Y$ given $X$. With the help of (21) we can give a formula for the mean square error when $\E(Y \mid X)$ is used as a predictor of $Y$.
Mean square error $\E\left(\left[Y - \E(Y \mid X)\right]^2\right) = \var(Y) - \var\left[\E(Y \mid X)\right]$
Proof
From the definition of conditional variance, and using the mean property and the variance formula, we have $\E\left(\left[Y - \E(Y \mid X)\right]^2\right) = \E\left[\var(Y \mid X)\right] = \var(Y) - \var\left[\E(Y \mid X)\right]$
Let us return to the study of predictors of the real-valued random variable $Y$, and compare the three predictors we have studied in terms of mean square error.
Suppose that $Y$ is a real-valued random variable.
1. The best constant predictor of $Y$ is $\E(Y)$ with mean square error $\var(Y)$.
2. If $X$ is another real-valued random variable, then the best linear predictor of $Y$ given $X$ is $L(Y \mid X) = \E(Y) + \frac{\cov(X,Y)}{\var(X)}\left[X - \E(X)\right]$ with mean square error $\var(Y)\left[1 - \cor^2(X,Y)\right]$.
3. If $X$ is a general random variable, then the best overall predictor of $Y$ given $X$ is $\E(Y \mid X)$ with mean square error $\var(Y) - \var\left[\E(Y \mid X)\right]$.
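The following simulation sketch compares the three predictors above in a model where the regression function is genuinely nonlinear (assuming NumPy; the model $Y = X^2 + \text{noise}$ with $X$ uniform is our choice for illustration, so that $\E(Y \mid X) = X^2$).

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10**6
X = rng.random(n)
Y = X**2 + rng.normal(0, 0.1, n)     # E(Y | X) = X^2

mse = lambda pred: np.mean((Y - pred)**2)
b = np.cov(X, Y, bias=True)[0, 1] / X.var()
a = Y.mean() - b * X.mean()
print(mse(Y.mean()))     # best constant predictor: MSE = var(Y)
print(mse(a + b * X))    # best linear predictor: smaller
print(mse(X**2))         # best overall predictor E(Y | X): smallest, about 0.01
```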
Conditional Covariance
Suppose that $Y$ and $Z$ are real-valued random variables, and that $X$ is a general random variable, all defined on our underlying probability space. Analogous to variance, the conditional covariance of $Y$ and $Z$ given $X$ is defined like the ordinary covariance, but with all expected values conditioned on $X$.
The conditional covariance of $Y$ and $Z$ given $X$ is defined as $\cov(Y, Z \mid X) = \E\left([Y - \E(Y \mid X)] [Z - \E(Z \mid X)] \biggm| X \right)$
Thus, $\cov(Y, Z \mid X)$ is a function of $X$, and in particular, is a random variable. Our first result is a computational formula that is analogous to the one for standard covariance—the covariance is the mean of the product minus the product of the means, but now with all expected values conditioned on $X$:
$\cov(Y, Z \mid X) = \E\left(Y Z \mid X\right) - \E(Y \mid X) \E(Z \mid X)$.
Proof
Expanding the product in the definition and using basic properties of conditional expectation, we have
\begin{align} \cov(Y, Z \mid X) & = \E\left(Y Z - Y \E(Z \mid X) - Z \E(Y \mid X) + \E(Y \mid X) \E(Z \mid X) \biggm| X \right) = \E(Y Z \mid X) - \E\left[Y \E(Z \mid X) \mid X\right] - \E\left[Z \E(Y \mid X) \mid X\right] + \E\left[\E(Y \mid X) \E(Z \mid X) \mid X\right] \\ & = \E\left(Y Z \mid X\right) - \E(Y \mid X) \E(Z \mid X) - \E(Y \mid X) \E(Z \mid X) + \E(Y \mid X) \E(Z \mid X) = \E\left(Y Z \mid X\right) - \E(Y \mid X) \E(Z \mid X) \end{align}
Our next result shows how to compute the ordinary covariance of $Y$ and $Z$ by conditioning on $X$.
$\cov(Y, Z) = \E\left[\cov(Y, Z \mid X)\right] + \cov\left[\E(Y \mid X), \E(Z \mid X) \right]$.
Proof
From (25) and properties of conditional expected value we have $\E\left[\cov(Y, Z \mid X)\right] = \E(Y Z) - \E\left[\E(Y\mid X) \E(Z \mid X) \right]$ But $\E(Y Z) = \cov(Y, Z) + \E(Y) \E(Z)$ and similarly, $\E\left[\E(Y \mid X) \E(Z \mid X)\right] = \cov\left[\E(Y \mid X), \E(Z \mid X)\right] + \E\left[\E(Y\mid X)\right] \E\left[\E(Z \mid X)\right]$ But also, $\E[\E(Y \mid X)] = \E(Y)$ and $\E[\E(Z \mid X)] = \E(Z)$ so substituting we get $\E\left[\cov(Y, Z \mid X)\right] = \cov(Y, Z) - \cov\left[\E(Y \mid X), \E(Z \mid X)\right]$
Thus, the covariance of $Y$ and $Z$ is the expected conditional covariance plus the covariance of the conditional expected values. This result is often a good way to compute $\cov(Y, Z)$ when we know the conditional distribution of $(Y, Z)$ given $X$.
Examples and Applications
As always, be sure to try the proofs and computations yourself before reading the ones in the text.
Simple Continuous Distributions
Suppose that $(X,Y)$ has probability density function $f$ defined by $f(x,y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find $L(Y \mid X)$.
2. Find $\E(Y \mid X)$.
3. Graph $L(Y \mid X = x)$ and $\E(Y \mid X = x)$ as functions of $x$, on the same axes.
4. Find $\var(Y)$.
5. Find $\var(Y)\left[1 - \cor^2(X, Y)\right]$.
6. Find $\var(Y) - \var\left[\E(Y \mid X)\right]$.
Answer
1. $\frac{7}{11} - \frac{1}{11} X$
2. $\frac{3 X + 2}{6 X + 3}$
4. $\frac{11}{144} = 0.0764$
5. $\frac{5}{66} = 0.0758$
6. $\frac{1}{12} - \frac{1}{144} \ln 3 = 0.0757$
Suppose that $(X,Y)$ has probability density function $f$ defined by $f(x,y) = 2 (x + y)$ for $0 \le x \le y \le 1$.
1. Find $L(Y \mid X)$.
2. Find $\E(Y \mid X)$.
3. Graph $L(Y \mid X = x)$ and $\E(Y \mid X = x)$ as functions of $x$, on the same axes.
4. Find $\var(Y)$.
5. Find $\var(Y)\left[1 - \cor^2(X, Y)\right]$.
6. Find $\var(Y) - \var\left[\E(Y \mid X)\right]$.
Answer
1. $\frac{26}{43} + \frac{15}{43} X$
2. $\frac{5 X^2 + 5 X + 2}{9 X + 3}$
4. $\frac{3}{80} = 0.0375$
5. $\frac{13}{430} = 0.0302$
6. $\frac{1837}{21\;870} - \frac{512}{6561} \ln(2) = 0.0299$
Suppose that $(X,Y)$ has probability density function $f$ defined by $f(x,y) = 6 x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$.
1. Find $L(Y \mid X)$.
2. Find $\E(Y \mid X)$.
3. Graph $L(Y \mid X = x)$ and $\E(Y \mid X = x)$ as functions of $x$, on the same axes.
4. Find $\var(Y)$.
5. Find $\var(Y)\left[1 - \cor^2(X, Y)\right]$.
6. Find $\var(Y) - \var\left[\E(Y \mid X)\right]$.
Answer
Note that $X$ and $Y$ are independent.
1. $\frac{2}{3}$
2. $\frac{2}{3}$
4. $\frac{1}{18}$
5. $\frac{1}{18}$
6. $\frac{1}{18}$
Suppose that $(X,Y)$ has probability density function $f$ defined by $f(x,y) = 15 x^2 y$ for $0 \le x \le y \le 1$.
1. Find $L(Y \mid X)$.
2. Find $\E(Y \mid X)$.
3. Graph $L(Y \mid X = x)$ and $\E(Y \mid X = x)$ as functions of $x$, on the same axes.
4. Find $\var(Y)$.
5. Find $\var(Y)\left[1 - \cor^2(X, Y)\right]$.
6. Find $\var(Y) - \var\left[\E(Y \mid X)\right]$.
Answer
1. $\frac{30}{51} + \frac{20}{51}X$
2. $\frac{2(X^2 + X + 1)}{3(X + 1)}$
4. $\frac{5}{252} = 0.0198$
5. $\frac{5}{357} = 0.0140$
6. $\frac{292}{63} - \frac{20}{3} \ln(2) = 0.0139$
Exercises on Basic Properties
Suppose that $X$, $Y$, and $Z$ are real-valued random variables with $\E(Y \mid X) = X^3$ and $\E(Z \mid X) = \frac{1}{1 + X^2}$. Find $\E\left(Y\,e^X - Z\,\sin X \mid X\right)$.
Answer
$X^3 e^X - \frac{\sin X}{1 + X^2}$
Uniform Distributions
As usual, continuous uniform distributions can give us some geometric insight.
Recall first that for $n \in \N_+$, the standard measure on $\R^n$ is $\lambda_n(A) = \int_A 1 dx, \quad A \subseteq \R^n$ In particular, $\lambda_1(A)$ is the length of $A \subseteq \R$, $\lambda_2(A)$ is the area of $A \subseteq \R^2$, and $\lambda_3(A)$ is the volume of $A \subseteq \R^3$.
Details
Technically $\lambda_n$ is Lebesgue measure on the measurable subsets of $\R^n$. The integral representation is valid for the types of sets that occur in applications. In the discussion below, all subsets are assumed to be measurable.
With our usual setup, suppose that $X$ takes values in $S \subseteq \R^n$, $Y$ takes values in $T \subseteq \R$, and that $(X, Y)$ is uniformly distributed on $R \subseteq S \times T \subseteq \R^{n+1}$. So $0 \lt \lambda_{n+1}(R) \lt \infty$, and the joint probability density function $f$ of $(X, Y)$ is given by $f(x, y) = 1 / \lambda_{n+1}(R)$ for $(x, y) \in R$. Recall that uniform distributions, whether discrete or continuous, always have constant densities. Finally, recall that the cross section of $R$ at $x \in S$ is $T_x = \{y \in T: (x, y) \in R\}$.
In the setting above, suppose that $T_x$ is a bounded interval with midpoint $m(x)$ and length $l(x)$ for each $x \in S$. Then
1. $\E(Y \mid X) = m(X)$
2. $\var(Y \mid X) = \frac{1}{12}l^2(X)$
Proof
This follows immediately from the fact that the conditional distribution of $Y$ given $X = x$ is uniformly distributed on $T_x$ for each $x \in S$.
So in particular, the regression curve $x \mapsto \E(Y \mid X = x)$ follows the midpoints of the cross-sectional intervals.
In each case below, suppose that $(X,Y)$ is uniformly distributed on the given region. Find $\E(Y \mid X)$ and $\var(Y \mid X)$.
1. The rectangular region $R = [a, b] \times [c, d]$ where $a \lt b$ and $c \lt d$.
2. The triangular region $T = \left\{(x,y) \in \R^2: -a \le x \le y \le a\right\}$ where $a \gt 0$.
3. The circular region $C = \left\{(x, y) \in \R^2: x^2 + y^2 \le r^2\right\}$ where $r \gt 0$.
Answer
1. $\E(Y \mid X) = \frac{1}{2}(c + d)$, $\var(Y \mid X) = \frac{1}{12}(d - c)^2$. Note that $X$ and $Y$ are independent.
2. $\E(Y \mid X) = \frac{1}{2}(a + X)$, $\var(Y \mid X) = \frac{1}{12}(a - X)^2$
3. $\E(Y \mid X) = 0$, $\var(Y \mid X) = \frac{1}{3}\left(r^2 - X^2\right)$
In the bivariate uniform experiment, select each of the following regions. In each case, run the simulation 2000 times and note the relationship between the cloud of points and the graph of the regression function.
1. square
2. triangle
3. circle
Suppose that $X$ is uniformly distributed on the interval $(0, 1)$, and that given $X$, random variable $Y$ is uniformly distributed on $(0, X)$. Find each of the following:
1. $\E(Y \mid X)$
2. $\E(Y)$
3. $\var(Y \mid X)$
4. $\var(Y)$
Answer
1. $\frac{1}{2} X$
2. $\frac{1}{4}$
3. $\frac{1}{12} X^2$
4. $\frac{7}{144}$
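A two-stage simulation confirms these answers (a sketch assuming NumPy): generate $X$ uniformly on $(0, 1)$ and then, given $X$, generate $Y$ uniformly on $(0, X)$.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.random(10**6)
Y = X * rng.random(10**6)   # given X, Y is uniform on (0, X)
print(Y.mean(), 1/4)        # E(Y) = E[E(Y | X)] = E(X/2)
print(Y.var(), 7/144)       # var(Y) = E[var(Y | X)] + var[E(Y | X)]
```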
The Hypergeometric Distribution
Suppose that a population consists of $m$ objects, and that each object is one of three types. There are $a$ objects of type 1, $b$ objects of type 2, and $m - a - b$ objects of type 0. The parameters $a$ and $b$ are positive integers with $a + b \lt m$. We sample $n$ objects from the population at random, and without replacement, where $n \in \{0, 1, \ldots, m\}$. Denote the number of type 1 and 2 objects in the sample by $X$ and $Y$, so that the number of type 0 objects in the sample is $n - X - Y$. In the chapter on Distributions, we showed that the joint, marginal, and conditional distributions of $X$ and $Y$ are all hypergeometric—only the parameters change. Here is the relevant result for this section:
In the setting above,
1. $\E(Y \mid X) = \frac{b}{m - a}(n - X)$
2. $\var(Y \mid X) = \frac{b (m - a - b)}{(m - a)^2 (m - a - 1)} (n - X) (m - a - n + X)$
3. $\E\left([Y - \E(Y \mid X)]^2\right) = \frac{n(m - n)b(m - a - b)}{m (m - 1)(m - a)}$
Proof
Recall that $(X, Y)$ has the (multivariate) hypergeometric distribution with parameters $m$, $a$, $b$, and $n$. Marginally, $X$ has the hypergeometric distribution with parameters $m$, $a$, and $n$, and $Y$ has the hypergeometric distribution with parameters $m$, $b$, and $n$. Given $X = x \in \{0, 1, \ldots, n\}$, the remaining $n - x$ objects are chosen at random from a population of $m - a$ objects, of which $b$ are type 2 and $m - a - b$ are type 0. Hence, the conditional distribution of $Y$ given $X = x$ is hypergeometric with parameters $m - a$, $b$, and $n - x$. Parts (a) and (b) then follow from the standard formulas for the mean and variance of the hypergeometric distribution, as functions of the parameters. Part (c) is the mean square error, and in this case can be computed most easily as $\var(Y) - \var[\E(Y \mid X)] = \var(Y) - \left(\frac{b}{m - a}\right)^2 \var(X) = n \frac{b}{m} \frac{m - b}{m} \frac{m - n}{m - 1} - \left(\frac{b}{m - a}\right)^2 n \frac{a}{m} \frac{m - a}{m} \frac{m - n}{m - 1}$ Simplifying gives the result.
Note that $\E(Y \mid X)$ is a linear function of $X$ and hence $\E(Y \mid X) = L(Y \mid X)$.
In a collection of 120 objects, 50 are classified as good, 40 as fair and 30 as poor. A sample of 20 objects is selected at random and without replacement. Let $X$ denote the number of good objects in the sample and $Y$ the number of poor objects in the sample. Find each of the following:
1. $\E(Y \mid X)$
2. $\var(Y \mid X)$
3. The predicted value of $Y$ given $X = 8$
Answer
1. $\E(Y \mid X) = \frac{60}{7} - \frac{3}{7} X$
2. $\var(Y \mid X) = \frac{4}{1127}(20 - X)(50 + X)$
3. $\frac{36}{7}$
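These answers can be checked by simulating the sampling experiment. Here is a sketch assuming NumPy; the vectorized random-subset trick and the number of replications are implementation choices of ours.

```python
import numpy as np

rng = np.random.default_rng(8)
pop = np.array([1] * 50 + [2] * 40 + [0] * 30)         # 1 = good, 2 = fair, 0 = poor
reps = 50_000
idx = rng.random((reps, 120)).argsort(axis=1)[:, :20]  # a random 20-subset per row
draws = pop[idx]
X = (draws == 1).sum(axis=1)   # good objects in the sample
Y = (draws == 0).sum(axis=1)   # poor objects in the sample
print(Y[X == 8].mean(), 36/7)  # conditional mean of Y given X = 8
```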
The Multinomial Trials Model
Suppose that we have a sequence of $n$ independent trials, and that each trial results in one of three outcomes, denoted 0, 1, and 2. On each trial, the probability of outcome 1 is $p$, the probability of outcome 2 is $q$, so that the probability of outcome 0 is $1 - p - q$. The parameters $p, \, q \in (0, 1)$ with $p + q \lt 1$, and of course $n \in \N_+$. Let $X$ denote the number of trials that resulted in outcome 1, $Y$ the number of trials that resulted in outcome 2, so that $n - X - Y$ is the number of trials that resulted in outcome 0. In the chapter on Distributions, we showed that the joint, marginal, and conditional distributions of $X$ and $Y$ are all multinomial—only the parameters change. Here is the relevant result for this section:
In the setting above,
1. $\E(Y \mid X) = \frac{q}{1 - p}(n - X)$
2. $\var(Y \mid X) = \frac{q (1 - p - q)}{(1 - p)^2}(n - X)$
3. $\E\left([Y - \E(Y \mid X)]^2\right) = \frac{q (1 - p - q)}{1 - p} n$
Proof
Recall that $(X, Y)$ has the multinomial distribution with parameters $n$, $p$, and $q$. Marginally, $X$ has the binomial distribution with parameters $n$ and $p$, and $Y$ has the binomial distribution with parameters $n$ and $q$. Given $X = x \in \{0, 1, \ldots, n\}$, the remaining $n - x$ trials are independent, but with just two outcomes: outcome 2 occurs with probability $q / (1 - p)$ and outcome 0 occurs with probability $1 - q / (1 - p)$. (These are the conditional probabilities of outcomes 2 and 0, respectively, given that outcome 1 did not occur.) Hence the conditional distribution of $Y$ given $X = x$ is binomial with parameters $n - x$ and $q / (1 - p)$. Parts (a) and (b) then follow from the standard formulas for the mean and variance of the binomial distribution, as functions of the parameters. Part (c) is the mean square error and in this case can be computed most easily from $\E[\var(Y \mid X)] = \frac{q (1 - p - q)}{(1 - p)^2} [n - \E(X)] = \frac{q (1 - p - q)}{(1 - p)^2} (n - n p) = \frac{q (1 - p - q)}{1 - p} n$
Note again that $\E(Y \mid X)$ is a linear function of $X$ and hence $\E(Y \mid X) = L(Y \mid X)$.
Suppose that a fair, 12-sided die is thrown 50 times. Let $X$ denote the number of throws that resulted in a number from 1 to 5, and $Y$ the number of throws that resulted in a number from 6 to 9. Find each of the following:
1. $\E(Y \mid X)$
2. $\var(Y \mid X)$
3. The predicted value of $Y$ given $X = 20$
Answer
1. $\E(Y \mid X) = \frac{4}{7}(50 - X)$
2. $\var(Y \mid X) = \frac{12}{49}(50 - X)$
3. $\frac{120}{7}$
The Poisson Distribution
Recall that the Poisson distribution, named for Simeon Poisson, is widely used to model the number of random points in a region of time or space, under certain ideal conditions. The Poisson distribution is studied in more detail in the chapter on the Poisson Process. The Poisson distribution with parameter $r \in (0, \infty)$ has probability density function $f$ defined by $f(x) = e^{-r} \frac{r^x}{x!}, \quad x \in \N$ The parameter $r$ is the mean and variance of the distribution.
Suppose that $X$ and $Y$ are independent random variables, and that $X$ has the Poisson distribution with parameter $a \in (0, \infty)$ and $Y$ has the Poisson distribution with parameter $b \in (0, \infty)$. Let $N = X + Y$. Then
1. $\E(X \mid N) = \frac{a}{a + b}N$
2. $\var(X \mid N) = \frac{a b}{(a + b)^2} N$
3. $\E\left([X - \E(X \mid N)]^2\right) = \frac{a b}{a + b}$
Proof
We have shown before that the distribution of $N$ is also Poisson, with parameter $a + b$, and that the conditional distribution of $X$ given $N = n \in \N$ is binomial with parameters $n$ and $a / (a + b)$. Hence parts (a) and (b) follow from the standard formulas for the mean and variance of the binomial distribution, as functions of the parameters. Part (c) is the mean square error, and in this case can be computed most easily as $\E[\var(X \mid N)] = \frac{a b}{(a + b)^2} \E(N) = \frac{ab}{(a + b)^2} (a + b) = \frac{a b}{a + b}$
Once again, $\E(X \mid N)$ is a linear function of $N$ and so $\E(X \mid N) = L(X \mid N)$. If we reverse the roles of the variables, the conditional expected value is trivial from our basic properties: $\E(N \mid X) = \E(X + Y \mid X) = X + b$
Coins and Dice
A pair of fair dice are thrown, and the scores $(X_1, X_2)$ recorded. Let $Y = X_1 + X_2$ denote the sum of the scores and $U = \min\left\{X_1, X_2\right\}$ the minimum score. Find each of the following:
1. $\E\left(Y \mid X_1\right)$
2. $\E\left(U \mid X_1\right)$
3. $\E\left(Y \mid U\right)$
4. $\E\left(X_2 \mid X_1\right)$
Answer
1. $\frac{7}{2} + X_1$
2. $\E(U \mid X_1 = x)$ for $x = 1, 2, 3, 4, 5, 6$: $1, \; \frac{11}{6}, \; \frac{5}{2}, \; 3, \; \frac{10}{3}, \; \frac{7}{2}$
3. $\E(Y \mid U = u)$ for $u = 1, 2, 3, 4, 5, 6$: $\frac{52}{11}, \; \frac{56}{9}, \; \frac{54}{7}, \; \frac{46}{5}, \; \frac{32}{3}, \; 12$
4. $\frac{7}{2}$
A box contains 10 coins, labeled 0 to 9. The probability of heads for coin $i$ is $\frac{i}{9}$. A coin is chosen at random from the box and tossed. Find the probability of heads.
Answer
$\frac{1}{2}$
This problem is an example of Laplace's rule of succession, named for Pierre Simon Laplace.
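The experiment is simple to simulate (a sketch assuming NumPy): choose a coin label $I$ uniformly from $\{0, 1, \ldots, 9\}$ and toss the chosen coin, so that the probability of heads is $\E(I/9) = \frac{1}{2}$.

```python
import numpy as np

rng = np.random.default_rng(9)
reps = 10**6
i = rng.integers(0, 10, reps)      # coin label, uniform on {0, ..., 9}
heads = rng.random(reps) < i / 9   # toss coin i: P(heads) = i/9
print(heads.mean())                # close to 1/2
```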
Random Sums of Random Variables
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent and identically distributed real-valued random variables. We will denote the common mean, variance, and moment generating function, respectively, by $\mu = \E(X_i)$, $\sigma^2 = \var(X_i)$, and $G(t) = \E\left(e^{t\,X_i}\right)$. Let $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ so that $(Y_0, Y_1, \ldots)$ is the partial sum process associated with $\bs{X}$. Suppose now that $N$ is a random variable taking values in $\N$, independent of $\bs{X}$. Then $Y_N = \sum_{i=1}^N X_i$ is a random sum of random variables; the terms in the sum are random, and the number of terms is random. This type of variable occurs in many different contexts. For example, $N$ might represent the number of customers who enter a store in a given period of time, and $X_i$ the amount spent by customer $i$, so that $Y_N$ is the total revenue of the store during the period.
The conditional and ordinary expected value of $Y_N$ are
1. $\E\left(Y_N \mid N\right) = N \mu$
2. $\E\left(Y_N\right) = \E(N) \mu$
Proof
1. Using the substitution rule and the independence of $N$ and $\bs{X}$ we have $\E\left(Y_N \mid N = n\right) = \E\left(Y_n \mid N = n\right) = \E(Y_n) = \sum_{i=1}^n \E(X_i) = n \mu$ so $\E\left(Y_N \mid N\right) = N \mu$.
2. From (a) and conditioning, $\E\left(Y_N\right) = \E\left[\E\left(Y_N \mid N\right)\right] = \E(N \mu) = \E(N) \mu$.
Wald's equation, named for Abraham Wald, is a generalization of the previous result to the case where $N$ is not necessarily independent of $\bs{X}$, but rather is a stopping time for $\bs{X}$. Roughly, this means that the event $N = n$ depends only on $(X_1, X_2, \ldots, X_n)$. Wald's equation is discussed in the chapter on Random Samples. An elegant proof of Wald's equation is given in the chapter on Martingales. The advanced section on stopping times is in the chapter on Probability Measures.
The conditional and ordinary variance of $Y_N$ are
1. $\var\left(Y_N \mid N\right) = N \sigma^2$
2. $\var\left(Y_N\right) = \E(N) \sigma^2 + \var(N) \mu^2$
Proof
1. Using the substitution rule, the independence of $N$ and $\bs{X}$, and the fact that $\bs{X}$ is an IID sequence, we have $\var\left(Y_N \mid N = n\right) = \var\left(Y_n \mid N = n\right) = \var\left(Y_n\right) = \sum_{i=1}^n \var(X_i) = n \sigma^2$ so $\var\left(Y_N \mid N\right) = N \sigma^2$.
2. From (a) and the previous result, $\var\left(Y_N\right) = \E\left[\var\left(Y_N \mid N\right)\right] + \var\left[\E(Y_N \mid N)\right] = \E(\sigma^2 N) + \var(\mu N) = \E(N) \sigma^2 + \mu^2 \var(N)$
Let $H$ denote the probability generating function of $N$. The conditional and ordinary moment generating function of $Y_N$ are
1. $\E\left(e^{t Y_N} \mid N\right) = \left[G(t)\right]^N$
2. $\E\left(e^{t Y_N}\right) = H\left(G(t)\right)$
Proof
1. Using the substitution rule, the independence of $N$ and $\bs{X}$, and the fact that $\bs{X}$ is an IID sequence, we have $\E\left(e^{t Y_N} \mid N = n\right) = \E\left(e^{t Y_n} \mid N = n\right) = \E\left(e^{t Y_n}\right) = \left[G(t)\right]^n$ (Recall that the MGF of the sum of independent variables is the product of the individual MGFs.)
2. From (a) and conditioning, $\E\left(e^{t Y_N}\right) = \E\left[\E\left(e^{t Y_N} \mid N\right)\right] = \E\left(G(t)^N\right) = H(G(t))$.
Thus the moment generating function of $Y_N$ is $H \circ G$, the composition of the probability generating function of $N$ with the common moment generating function of $\bs{X}$, a simple and elegant result.
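As a concrete check, take $N$ Poisson with parameter $\lambda$ (so $H(s) = e^{\lambda(s - 1)}$) and let the terms be exponential with rate $r$ (so $G(t) = r/(r - t)$ for $t \lt r$). The sketch below (assuming NumPy; the parameter values are arbitrary choices of ours) uses the fact that the sum of $n$ independent exponential variables with rate $r$ has the gamma distribution with shape $n$ and scale $1/r$.

```python
import numpy as np

rng = np.random.default_rng(10)
lam, r, reps = 3.0, 2.0, 10**6
N = rng.poisson(lam, reps)
# Y = sum of N iid exponential(r) terms: gamma(shape=N, scale=1/r) when N > 0, else 0
Y = rng.gamma(shape=np.where(N > 0, N, 1), scale=1/r) * (N > 0)

mu, sig2 = 1/r, 1/r**2
print(Y.mean(), lam * mu)                  # E(N) mu
print(Y.var(), lam * sig2 + lam * mu**2)   # E(N) sigma^2 + var(N) mu^2
t = 0.5
print(np.exp(t * Y).mean(),                # empirical MGF at t
      np.exp(lam * (r / (r - t) - 1)))     # H(G(t)) = exp(lambda [r/(r-t) - 1])
```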
In the die-coin experiment, a fair die is rolled and then a fair coin is tossed the number of times showing on the die. Let $N$ denote the die score and $Y$ the number of heads. Find each of the following:
1. The conditional distribution of $Y$ given $N$.
2. $\E\left(Y \mid N\right)$
3. $\var\left(Y \mid N\right)$
4. $\E\left(Y\right)$
5. $\var(Y)$
Answer
1. Binomial with parameters $N$ and $p = \frac{1}{2}$
2. $\frac{1}{2} N$
3. $\frac{1}{4} N$
4. $\frac{7}{4}$
5. $\frac{77}{48}$
Run the die-coin experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The number of customers entering a store in a given hour is a random variable with mean 20 and standard deviation 3. Each customer, independently of the others, spends a random amount of money with mean \$50 and standard deviation \$5. Find the mean and standard deviation of the amount of money spent during the hour.
Answer
1. $1000$
2. $151.66$
A coin has a random probability of heads $V$ and is tossed a random number of times $N$. Suppose that $V$ is uniformly distributed on $[0, 1]$; $N$ has the Poisson distribution with parameter $a \gt 0$; and $V$ and $N$ are independent. Let $Y$ denote the number of heads. Compute the following:
1. $\E(Y \mid N, V)$
2. $\E(Y \mid N)$
3. $\E(Y \mid V)$
4. $\E(Y)$
5. $\var(Y \mid N, V)$
6. $\var(Y)$
Answer
1. $N V$
2. $\frac{1}{2} N$
3. $a V$
4. $\frac{1}{2} a$
5. $N V (1 - V)$
6. $\frac{1}{12} a^2 + \frac{1}{2} a$
Mixtures of Distributions
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of real-valued random variables. Denote the mean, variance, and moment generating function of $X_i$ by $\mu_i = \E(X_i)$, $\sigma_i^2 = \var(X_i)$, and $M_i(t) = \E\left(e^{t\,X_i}\right)$, for $i \in \N_+$. Suppose also that $N$ is a random variable taking values in $\N_+$, independent of $\bs{X}$. Denote the probability density function of $N$ by $p_n = \P(N = n)$ for $n \in \N_+$. The distribution of the random variable $X_N$ is a mixture of the distributions of $\bs{X} = (X_1, X_2, \ldots)$, with the distribution of $N$ as the mixing distribution.
The conditional and ordinary expected value of $X_N$ are
1. $\E(X_N \mid N) = \mu_N$
2. $\E(X_N) = \sum_{n=1}^\infty p_n\,\mu_n$
Proof
1. Using the substitution rule and the independence of $N$ and $\bs{X}$, we have $\E(X_N \mid N = n) = \E(X_n \mid N = n) = \E(X_n) = \mu_n$
2. From (a) and the conditioning rule, $\E\left(X_N\right) = \E\left[\E\left(X_N \mid N\right)\right] = \E\left(\mu_N\right) = \sum_{n=1}^\infty p_n \mu_n$
The conditional and ordinary variance of $X_N$ are
1. $\var\left(X_N \mid N\right) = \sigma_N^2$
2. $\var(X_N) = \sum_{n=1}^\infty p_n (\sigma_n^2 + \mu_n^2) - \left(\sum_{n=1}^\infty p_n\,\mu_n\right)^2$.
Proof
1. Using the substitution rule and the independence of $N$ and $\bs{X}$, we have $\var\left(X_N \mid N = n\right) = \var\left(X_n \mid N = n\right) = \var\left(X_n\right) = \sigma_n^2$
2. From (a) we have \begin{align} \var\left(X_N\right) & = \E\left[\var\left(X_N \mid N\right)\right] + \var\left[\E\left(X_N \mid N\right)\right] = \E\left(\sigma_N^2\right) + \var\left(\mu_N\right) = \E\left(\sigma_N^2\right) + \E\left(\mu_N^2\right) - \left[\E\left(\mu_N\right)\right]^2 \\ & = \sum_{n=1}^\infty p_n \sigma_n^2 + \sum_{n=1}^\infty p_n \mu_n^2 - \left(\sum_{n=1}^\infty p_n \mu_n\right)^2 \end{align}
The conditional and ordinary moment generating function of $X_N$ are
1. $\E\left(e^{t X_N} \mid N\right) = M_N(t)$
2. $\E\left(e^{tX_N}\right) = \sum_{n=1}^\infty p_n M_n(t)$.
Proof
1. Using the substitution rule and the independence of $N$ and $\bs{X}$, we have $\E\left(e^{t X_N} \mid N = n\right) = \E\left(e^{t X_n} \mid N = n\right) = \E\left(e^{t X_n}\right) = M_n(t)$
2. From (a) and the conditioning rule, $\E\left(e^{t X_N}\right) = \E\left[\E\left(e^{t X_N} \mid N\right)\right] = \E\left[M_N(t)\right] = \sum_{n=1}^\infty p_n M_n(t)$
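A short simulation illustrates the moment formulas for mixtures (a sketch assuming NumPy; the two-component normal mixture and its parameters are our arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(11)
reps = 10**6
p = np.array([0.3, 0.7])    # mixing distribution
mu = np.array([0.0, 5.0])   # component means
sd = np.array([1.0, 2.0])   # component standard deviations

N = rng.choice(2, size=reps, p=p)   # choose a component
X = rng.normal(mu[N], sd[N])        # sample from the chosen component
print(X.mean(), (p * mu).sum())
print(X.var(), (p * (sd**2 + mu**2)).sum() - (p * mu).sum()**2)
```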
In the coin-die experiment, a biased coin is tossed with probability of heads $\frac{1}{3}$. If the coin lands tails, a fair die is rolled; if the coin lands heads, an ace-six flat die is rolled (faces 1 and 6 have probability $\frac{1}{4}$ each, and faces 2, 3, 4, 5 have probability $\frac{1}{8}$ each). Find the mean and standard deviation of the die score.
Answer
1. $\frac{7}{2}$
2. $1.7873$
Run the coin-die experiment 1000 times and note the apparent convergence of the empirical mean and standard deviation to the distribution mean and standard deviation. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/04%3A_Expected_Value/4.07%3A_Conditional_Expected_Value.txt |
$\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\vc}{\text{vc}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The main purpose of this section is a discussion of expected value and covariance for random matrices and vectors. These topics are somewhat specialized, but are particularly important in multivariate statistical models and for the multivariate normal distribution. This section requires some prerequisite knowledge of linear algebra.
We assume that the various indices $m, \, n, p, k$ that occur in this section are positive integers. Also we assume that expected values of real-valued random variables that we reference exist as real numbers, although extensions to cases where expected values are $\infty$ or $-\infty$ are straightforward, as long as we avoid the dreaded indeterminate form $\infty - \infty$.
Basic Theory
Linear Algebra
We will follow our usual convention of denoting random variables by upper case letters and nonrandom variables and constants by lower case letters. In this section, that convention leads to notation that is a bit nonstandard, since the objects that we will be dealing with are vectors and matrices. On the other hand, the notation we will use works well for illustrating the similarities between results for random matrices and the corresponding results in the one-dimensional case. Also, we will try to be careful to explicitly point out the underlying spaces where various objects live.
Let $\R^{m \times n}$ denote the space of all $m \times n$ matrices of real numbers. The $(i, j)$ entry of $\bs{a} \in \R^{m \times n}$ is denoted $a_{i j}$ for $i \in \{1, 2, \ldots, m\}$ and $j \in \{1, 2, \ldots, n\}$. We will identify $\R^n$ with $\R^{n \times 1}$, so that an ordered $n$-tuple can also be thought of as an $n \times 1$ column vector. The transpose of a matrix $\bs{a} \in \R^{m \times n}$ is denoted $\bs{a}^T$—the $n \times m$ matrix whose $(i, j)$ entry is the $(j, i)$ entry of $\bs{a}$. Recall the definitions of matrix addition, scalar multiplication, and matrix multiplication. Recall also the standard inner product (or dot product) of $\bs{x}, \, \bs{y} \in \R^n$: $\langle \bs{x}, \bs{y} \rangle = \bs{x} \cdot \bs{y} = \bs{x}^T \bs{y} = \sum_{i=1}^n x_i y_i$ The outer product of $\bs{x}$ and $\bs{y}$ is $\bs{x} \bs{y}^T$, the $n \times n$ matrix whose $(i, j)$ entry is $x_i y_j$. Note that the inner product is the trace (sum of the diagonal entries) of the outer product. Finally recall the standard norm on $\R^n$, given by $\|\bs{x}\| = \sqrt{\langle \bs{x}, \bs{x}\rangle} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$ Recall that inner product is bilinear, that is, linear (preserving addition and scalar multiplication) in each argument separately. As a consequence, for $\bs{x}, \, \bs{y} \in \R^n$, $\|\bs{x} + \bs{y}\|^2 = \|\bs{x}\|^2 + \|\bs{y}\|^2 + 2 \langle \bs{x}, \bs{y} \rangle$
Expected Value of a Random Matrix
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$. So to review, $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. It's natural to define the expected value of a random matrix in a component-wise manner.
Suppose that $\bs{X}$ is an $m \times n$ matrix of real-valued random variables, whose $(i, j)$ entry is denoted $X_{i j}$. Equivalently, $\bs{X}$ is a random $m \times n$ matrix, that is, a random variable with values in $\R^{m \times n}$. The expected value $\E(\bs{X})$ is defined to be the $m \times n$ matrix whose $(i, j)$ entry is $\E\left(X_{i j}\right)$, the expected value of $X_{i j}$.
Many of the basic properties of expected value of random variables have analogous results for expected value of random matrices, with matrix operations replacing the ordinary ones. Our first two properties are the critically important linearity properties. The first part is the additive property: the expected value of a sum is the sum of the expected values.
$\E(\bs{X} + \bs{Y}) = \E(\bs{X}) + \E(\bs{Y})$ if $\bs{X}$ and $\bs{Y}$ are random $m \times n$ matrices.
Proof
This is true by definition of the matrix expected value and the ordinary additive property. Note that $\E\left(X_{i j} + Y_{i j}\right) = \E\left(X_{i j}\right) + \E\left(Y_{i j}\right)$. The left side is the $(i, j)$ entry of $\E(\bs{X} + \bs{Y})$ and the right side is the $(i, j)$ entry of $\E(\bs{X}) + \E(\bs{Y})$.
The next part of the linearity properties is the scaling property—a nonrandom matrix factor can be pulled out of the expected value.
Suppose that $\bs{X}$ is a random $n \times p$ matrix.
1. $\E(\bs{a} \bs{X}) = \bs{a} \E(\bs{X})$ if $\bs{a} \in \R^{m \times n}$.
2. $\E(\bs{X} \bs{a}) = \E(\bs{X}) \bs{a}$ if $\bs{a} \in \R^{p \times n}$.
Proof
1. By the ordinary linearity and scaling properties, $\E\left(\sum_{j=1}^n a_{i j} X_{j k}\right) = \sum_{j=1}^n a_{i j} \E\left(X_{j k}\right)$. The left side is the $(i, k)$ entry of $\E(\bs{a} \bs{X})$ and the right side is the $(i, k)$ entry of $\bs{a} \E(\bs{X})$.
2. The proof is similar to (a).
Recall that for independent, real-valued variables, the expected value of the product is the product of the expected values. Here is the analogous result for random matrices.
$\E(\bs{X} \bs{Y}) = \E(\bs{X}) \E(\bs{Y})$ if $\bs{X}$ is a random $m \times n$ matrix, $\bs{Y}$ is a random $n \times p$ matrix, and $\bs{X}$ and $\bs{Y}$ are independent.
Proof
By the ordinary linearity properties and by the independence assumption, $\E\left(\sum_{j=1}^n X_{i j} Y_{j k}\right) = \sum_{j=1}^n \E\left(X_{i j} Y_{j k}\right) = \sum_{j=1}^n \E\left(X_{i j}\right) \E\left(Y_{j k}\right)$ The left side is the $(i, k)$ entry of $\E(\bs{X} \bs{Y})$ and the right side is the $(i, k)$ entry of $\E(\bs{X}) \E(\bs{Y})$.
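These matrix rules are easy to check numerically. Below is a minimal Monte Carlo sketch in Python with NumPy; the matrix shapes, the exponential and normal distributions, the seed, and the sample size are all arbitrary choices made for illustration, not part of the theory.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 200_000  # Monte Carlo sample size (arbitrary)

# X: random 2x3 matrices; Y: independent random 3x2 matrices
X = rng.exponential(scale=1.0, size=(N, 2, 3))
Y = rng.normal(loc=1.0, scale=0.5, size=(N, 3, 2))
a = np.array([[1.0, -2.0], [0.5, 3.0]])  # nonrandom 2x2 matrix

EX = X.mean(axis=0)   # entrywise Monte Carlo estimate of E(X)
EY = Y.mean(axis=0)

# Scaling property: E(aX) = a E(X)
print(np.allclose((a @ X).mean(axis=0), a @ EX, atol=0.05))

# Product rule for independent matrices: E(XY) = E(X) E(Y)
print(np.allclose((X @ Y).mean(axis=0), EX @ EY, atol=0.05))
```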
Actually the previous result holds if $\bs{X}$ and $\bs{Y}$ are simply uncorrelated in the sense that $X_{i j}$ and $Y_{j k}$ are uncorrelated for each $i \in \{1, 2, \ldots, m\}$, $j \in \{1, 2, \ldots, n\}$ and $k \in \{1, 2, \ldots, p\}$. We will study covariance of random vectors in the next subsection.
Covariance Matrices
Our next goal is to define and study the covariance of two random vectors.
Suppose that $\bs{X}$ is a random vector in $\R^m$ and $\bs{Y}$ is a random vector in $\R^n$.
1. The covariance matrix of $\bs{X}$ and $\bs{Y}$ is the $m \times n$ matrix $\cov(\bs{X}, \bs{Y})$ whose $(i,j)$ entry is $\cov\left(X_i, Y_j\right)$ the ordinary covariance of $X_i$ and $Y_j$.
2. Assuming that the coordinates of $\bs{X}$ and $\bs{Y}$ have positive variance, the correlation matrix of $\bs{X}$ and $\bs{Y}$ is the $m \times n$ matrix $\cor(\bs{X}, \bs{Y})$ whose $(i, j)$ entry is $\cor\left(X_i, Y_j\right)$, the ordinary correlation of $X_i$ and $Y_j$.
Many of the standard properties of covariance and correlation for real-valued random variables have extensions to random vectors. For the following three results, $\bs X$ is a random vector in $\R^m$ and $\bs Y$ is a random vector in $\R^n$.
$\cov(\bs{X}, \bs{Y}) = \E\left(\left[\bs{X} - \E(\bs{X})\right]\left[\bs{Y} - \E(\bs{Y})\right]^T\right)$
Proof
By the definition of the expected value of a random vector and by the definition of matrix multiplication, the $(i, j)$ entry of $\left[\bs{X} - \E(\bs{X})\right]\left[\bs{Y} - \E(\bs{Y})\right]^T$ is simply $\left[X_i - \E\left(X_i\right)\right] \left[Y_j - \E\left(Y_j\right)\right]$. The expected value of this entry is $\cov\left(X_i, Y_j\right)$, which in turn, is the $(i, j)$ entry of $\cov(\bs{X}, \bs{Y})$.
Thus, the covariance of $\bs{X}$ and $\bs{Y}$ is the expected value of the outer product of $\bs{X} - \E(\bs{X})$ and $\bs{Y} - \E(\bs{Y})$. Our next result is the computational formula for covariance: the expected value of the outer product of $\bs{X}$ and $\bs{Y}$ minus the outer product of the expected values.
$\cov(\bs{X},\bs{Y}) = \E\left(\bs{X} \bs{Y}^T\right) - \E(\bs{X}) \left[\E(\bs{Y})\right]^T$.
Proof
The $(i, j)$ entry of $\E\left(\bs{X} \bs{Y}^T\right) - \E(\bs{X}) \left[\E(\bs{Y})\right]^T$ is $\E\left(X_i Y_j\right) - \E\left(X_i\right) \E\left(Y_j\right)$, which by the standard computational formula, is $\cov\left(X_i, Y_j\right)$, which in turn is the $(i, j)$ entry of $\cov(\bs{X}, \bs{Y})$.
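The computational formula translates directly into code. The following sketch (NumPy, with an arbitrary synthetic joint distribution) checks that the definition of $\cov(\bs{X}, \bs{Y})$ as an expected outer product of centered vectors agrees with the computational formula; the agreement is exact on any sample, since the two expressions are algebraically identical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 100_000

# A correlated pair: X in R^3, Y in R^2, built from shared noise Z
Z = rng.normal(size=(N, 3))
X = Z
Y = np.column_stack([Z[:, 0] + Z[:, 1], Z[:, 2] + rng.normal(size=N)])

# Definition: expected outer product of the centered vectors
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
cov_def = (Xc[:, :, None] * Yc[:, None, :]).mean(axis=0)   # 3x2 matrix

# Computational formula: E(X Y^T) - E(X) E(Y)^T
cov_comp = (X[:, :, None] * Y[:, None, :]).mean(axis=0) \
    - np.outer(X.mean(axis=0), Y.mean(axis=0))

print(np.allclose(cov_def, cov_comp))   # True, up to floating point
```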
The next result is the matrix version of the symmetry property.
$\cov(\bs{Y}, \bs{X}) = \left[\cov(\bs{X}, \bs{Y})\right]^T$.
Proof
The $(i, j)$ entry of $\cov(\bs{X}, \bs{Y})$ is $\cov\left(X_i, Y_j\right)$, which is the $(j, i)$ entry of $\cov(\bs{Y}, \bs{X})$.
In the following result, $\bs{0}$ denotes the $m \times n$ zero matrix.
$\cov(\bs{X}, \bs{Y}) = \bs{0}$ if and only if $\cov\left(X_i, Y_j\right) = 0$ for each $i$ and $j$, so that each coordinate of $\bs{X}$ is uncorrelated with each coordinate of $\bs{Y}$.
Proof
This follows immediately from the definition of $\cov(\bs{X}, \bs{Y})$.
Naturally, when $\cov(\bs{X}, \bs{Y}) = \bs{0}$, we say that the random vectors $\bs{X}$ and $\bs{Y}$ are uncorrelated. In particular, if the random vectors are independent, then they are uncorrelated. The following results establish the bilinear properties of covariance.
The additive properties.
1. $\cov(\bs{X} + \bs{Y}, \bs{Z}) = \cov(\bs{X}, \bs{Z}) + \cov(\bs{Y}, \bs{Z})$ if $\bs{X}$ and $\bs{Y}$ are random vectors in $\R^m$ and $\bs{Z}$ is a random vector in $\R^n$.
2. $\cov(\bs{X}, \bs{Y} + \bs{Z}) = \cov(\bs{X}, \bs{Y}) + \cov(\bs{X}, \bs{Z})$ if $\bs{X}$ is a random vector in $\R^m$, and $\bs{Y}$ and $\bs{Z}$ are random vectors in $\R^n$.
Proof
1. From the ordinary additive property of covariance, $\cov\left(X_i + Y_i, Z_j\right) = \cov\left(X_i, Z_j\right) + \cov\left(Y_i, Z_j\right)$. The left side is the $(i, j)$ entry of $\cov(\bs{X} + \bs{Y}, \bs{Z})$ and the right side is the $(i, j)$ entry of $\cov(\bs{X}, \bs{Z}) + \cov(\bs{Y}, \bs{Z})$.
2. The proof is similar to (a), using the additivity of covariance in the second argument.
The scaling properties.
1. $\cov(\bs{a} \bs{X}, \bs{Y}) = \bs{a} \cov(\bs{X}, \bs{Y})$ if $\bs{X}$ is a random vector in $\R^n$, $\bs{Y}$ is a random vector in $\R^p$, and $\bs{a} \in \R^{m \times n}$.
2. $\cov(\bs{X}, \bs{a} \bs{Y}) = \cov(\bs{X}, \bs{Y}) \bs{a}^T$ if $\bs{X}$ is a random vector in $\R^m$, $\bs{Y}$ is a random vector in $\R^n$, and $\bs{a} \in \R^{k \times n}$.
Proof
1. Using the ordinary linearity properties of covariance in the first argument, we have $\cov\left(\sum_{j=1}^n a_{i j} X_j, Y_k\right) = \sum_{j=1}^n a_{i j} \cov\left(X_j, Y_k\right)$ The left side is the $(i, k)$ entry of $\cov(\bs{a} \bs{X}, \bs{Y})$ and the right side is the $(i, k)$ entry of $\bs{a} \cov(\bs{X}, \bs{Y})$.
2. The proof is similar to (a), using the linearity of covariance in the second argument.
Variance-Covariance Matrices
Suppose that $\bs{X}$ is a random vector in $\R^n$. The covariance matrix of $\bs{X}$ with itself is called the variance-covariance matrix of $\bs{X}$: $\vc(\bs{X}) = \cov(\bs{X}, \bs{X}) = \E\left(\left[\bs{X} - \E(\bs{X})\right]\left[\bs{X} - \E(\bs{X})\right]^T\right)$
Recall that for an ordinary real-valued random variable $X$, $\var(X) = \cov(X, X)$. Thus the variance-covariance matrix of a random vector in some sense plays the same role that variance does for a random variable.
$\vc(\bs{X})$ is a symmetric $n \times n$ matrix with $\left(\var(X_1), \var(X_2), \ldots, \var(X_n)\right)$ on the diagonal.
Proof
Recall that $\cov\left(X_i, X_j\right) = \cov\left(X_j, X_i\right)$. Also, the $(i, i)$ entry of $\vc(\bs{X})$ is $\cov\left(X_i, X_i\right) = \var\left(X_i\right)$.
The following result is the formula for the variance-covariance matrix of a sum, analogous to the formula for the variance of a sum of real-valued variables.
$\vc(\bs{X} + \bs{Y}) = \vc(\bs{X}) + \cov(\bs{X}, \bs{Y}) + \cov(\bs{Y}, \bs{X}) + \vc(\bs{Y})$ if $\bs{X}$ and $\bs{Y}$ are random vectors in $\R^n$.
Proof
This follows from the additive property of covariance: $\vc(\bs{X} + \bs{Y}) = \cov(\bs{X} + \bs{Y}, \bs{X} + \bs{Y}) = \cov(\bs{X}, \bs{X}) + \cov(\bs{X}, \bs{Y}) + \cov(\bs{Y}, \bs{X}) + \cov(\bs{Y}, \bs{Y})$
Recall that $\var(a X) = a^2 \var(X)$ if $X$ is a real-valued random variable and $a \in \R$. Here is the analogous result for the variance-covariance matrix of a random vector.
$\vc(\bs{a} \bs{X}) = \bs{a} \vc(\bs{X}) \bs{a}^T$ if $\bs{X}$ is a random vector in $\R^n$ and $\bs{a} \in \R^{m \times n}$.
Proof
This follows from the scaling property of covariance: $\vc(\bs{a} \bs{X}) = \cov(\bs{a} \bs{X}, \bs{a} \bs{X}) = \bs{a} \cov(\bs{X}, \bs{X}) \bs{a}^T$
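Since sample covariance matrices satisfy the same algebraic identity, the scaling property can be verified exactly on simulated data. A minimal sketch, assuming an arbitrary trivariate normal distribution for $\bs{X}$ and an arbitrary $2 \times 3$ matrix $\bs{a}$:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000

X = rng.multivariate_normal(mean=[0, 1, 2],
                            cov=[[2, 1, 0], [1, 3, 1], [0, 1, 1]],
                            size=N)                  # random vector in R^3
a = np.array([[1.0, 2.0, -1.0], [0.0, 1.0, 1.0]])    # 2x3 matrix

vc_X = np.cov(X, rowvar=False)           # sample estimate of vc(X)
vc_aX = np.cov(X @ a.T, rowvar=False)    # sample estimate of vc(aX)

print(np.allclose(vc_aX, a @ vc_X @ a.T))   # True, up to floating point
```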
Recall that if $X$ is a random variable, then $\var(X) \ge 0$, and $\var(X) = 0$ if and only if $X$ is a constant (with probability 1). Here is the analogous result for a random vector:
Suppose that $\bs{X}$ is a random vector in $\R^n$.
1. $\vc(\bs{X})$ is either positive semi-definite or positive definite.
2. $\vc(\bs{X})$ is positive semi-definite but not positive definite if and only if there exist nonzero $\bs{a} \in \R^n$ and $c \in \R$ such that, with probability 1, $\bs{a}^T \bs{X} = \sum_{i=1}^n a_i X_i = c$
Proof
1. From the previous result, $0 \le \var\left(\bs{a}^T \bs{X}\right) = \vc\left(\bs{a}^T \bs{X}\right) = \bs{a}^T \vc(\bs{X}) \bs{a}$ for every $\bs{a} \in \R^n$. Thus, by definition, $\vc(\bs{X})$ is either positive semi-definite or positive definite.
2. In light of (a), $\vc(\bs{X})$ is positive semi-definite but not positive definite if and only if there exists nonzero $\bs{a} \in \R^n$ such that $\bs{a}^T \vc(\bs{X}) \bs{a} = \var\left(\bs{a}^T \bs{X}\right) = 0$. But in turn, this is true if and only if $\bs{a}^T \bs{X}$ is constant with probability 1.
Recall that since $\vc(\bs{X})$ is either positive semi-definite or positive definite, the eigenvalues and the determinant of $\vc(\bs{X})$ are nonnegative. Moreover, if $\vc(\bs{X})$ is positive semi-definite but not positive definite, then one of the coordinates of $\bs{X}$ can be written as a linear transformation of the other coordinates (and hence can usually be eliminated in the underlying model). By contrast, if $\vc(\bs{X})$ is positive definite, then this cannot happen; $\vc(\bs{X})$ has positive eigenvalues and determinant and is invertible.
Best Linear Predictor
Suppose that $\bs{X}$ is a random vector in $\R^m$ and that $\bs{Y}$ is a random vector in $\R^n$. We are interested in finding the function of $\bs{X}$ of the form $\bs{a} + \bs{b} \bs{X}$, where $\bs{a} \in \R^n$ and $\bs{b} \in \R^{n \times m}$, that is closest to $\bs{Y}$ in the mean square sense. Functions of this form are analogous to linear functions in the single variable case. However, unless $\bs{a} = \bs{0}$, such functions are not linear transformations in the sense of linear algebra, so the correct term is affine function of $\bs{X}$. This problem is of fundamental importance in statistics when the random vector $\bs{X}$ (the predictor vector) is observable, but the random vector $\bs{Y}$ (the response vector) is not. Our discussion here generalizes the one-dimensional case, when $X$ and $Y$ are random variables. That problem was solved in the section on Covariance and Correlation. We will assume that $\vc(\bs{X})$ is positive definite, so that $\vc(\bs{X})$ is invertible, and none of the coordinates of $\bs{X}$ can be written as an affine function of the other coordinates. We write $\vc^{-1}(\bs{X})$ for the inverse instead of the clunkier $\left[\vc(\bs{X})\right]^{-1}$.
As with the single variable case, the solution turns out to be the affine function that has the same expected value as $\bs{Y}$, and whose covariance with $\bs{X}$ is the same as that of $\bs{Y}$.
Define $L(\bs{Y} \mid \bs{X}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{X} - \E(\bs{X})\right]$. Then $L(\bs{Y} \mid \bs{X})$ is the only affine function of $\bs{X}$ in $\R^n$ satisfying
1. $\E\left[L(\bs{Y} \mid \bs{X})\right] = \E(\bs{Y})$
2. $\cov\left[L(\bs{Y} \mid \bs{X}), \bs{X}\right] = \cov(\bs{Y}, \bs{X})$
Proof
From linearity, $\E\left[L(\bs{Y} \mid \bs{X})\right] = \E(\bs{Y}) + \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\left[\E(\bs{X}) - \E(\bs{X})\right] = \E(\bs{Y})$ From linearity and the fact that a constant vector is independent of (and hence uncorrelated with) any random vector, $\cov\left[L(\bs{Y} \mid \bs{X}), \bs{X}\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{X}) = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \vc(\bs{X}) = \cov(\bs{Y}, \bs{X})$ Conversely, suppose that $\bs{U} = \bs{a} + \bs{b} \bs{X}$ for some $\bs{a} \in \R^n$ and $\bs{b} \in \R^{n \times m}$, and that $\E(\bs{U}) = \E(\bs{Y})$ and $\cov(\bs{U}, \bs{X}) = \cov(\bs{Y}, \bs{X})$. From the second equation, again using linearity and the uncorrelated property of constant vectors, we get $\bs{b} \cov(\bs{X}, \bs{X}) = \cov(\bs{Y}, \bs{X})$ and therefore $\bs{b} = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})$. Then from the first equation, $\bs{a} + \bs{b} \E(\bs{X}) = \E(\bs{Y})$ so $\bs{a} = \E(\bs{Y}) - \bs{b} \E(\bs{X})$.
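The formulas $\bs{b} = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})$ and $\bs{a} = \E(\bs{Y}) - \bs{b} \E(\bs{X})$ translate directly into code. Here is a minimal sketch (NumPy) that recovers the coefficients of an assumed affine model from sample moments; the model $\bs{Y} = \bs{c} + \bs{d} \bs{X} + \text{noise}$, the seed, and the sample size are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 200_000

# Synthetic predictor X in R^2 and response Y in R^2, chosen only
# so that the best affine predictor is known: Y = c + d X + noise
X = rng.normal(size=(N, 2))
d = np.array([[1.0, -1.0], [2.0, 0.5]])
c = np.array([3.0, -2.0])
Y = c + X @ d.T + 0.1 * rng.normal(size=(N, 2))

# Sample versions of the ingredients of L(Y | X)
EX, EY = X.mean(axis=0), Y.mean(axis=0)
vc_X = np.cov(X, rowvar=False)                            # vc(X), 2x2
cov_YX = np.cov(np.hstack([Y, X]), rowvar=False)[:2, 2:]  # cov(Y, X), 2x2

b = cov_YX @ np.linalg.inv(vc_X)   # the matrix coefficient
a = EY - b @ EX                    # the constant vector

print(np.round(b, 2))   # approximately d
print(np.round(a, 2))   # approximately c
```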
A simple corollary is that $\bs{Y} - L(\bs{Y} \mid \bs{X})$ is uncorrelated with any affine function of $\bs{X}$:
If $\bs{U}$ is an affine function of $\bs{X}$ then
1. $\cov\left[\bs{Y} - L(\bs{Y} \mid \bs{X}), \bs{U}\right] = \bs{0}$
2. $\E\left(\langle \bs{Y} - L(\bs{Y} \mid \bs{X}), \bs{U}\rangle\right) = 0$
Proof
Suppose that $\bs{U} = \bs{a} + \bs{b} \bs{X}$ where $\bs{a} \in \R^n$ and $\bs{b} \in \R^{n \times m}$. For simplicity, let $\bs{L} = L(\bs{Y} \mid \bs{X})$.
1. From the previous result, $\cov(\bs{Y}, \bs{X}) = \cov(\bs{L}, \bs{X})$. Hence using linearity, $\cov\left(\bs{Y} - \bs{L}, \bs{U}\right) = \cov(\bs{Y} - \bs{L}, \bs{a}) + \cov(\bs{Y} - \bs{L}, \bs{X}) \bs{b}^T = \bs{0} + \left[\cov(\bs{Y}, \bs{X}) - \cov(\bs{L}, \bs{X})\right] \bs{b}^T = \bs{0}$
2. Note that $\langle \bs{Y} - \bs{L}, \bs{U}\rangle$ is the trace of the outer product $(\bs{Y} - \bs{L}) \bs{U}^T$. Since $\E(\bs{Y} - \bs{L}) = \bs{0}$, the expected outer product $\E\left[(\bs{Y} - \bs{L}) \bs{U}^T\right]$ equals $\cov(\bs{Y} - \bs{L}, \bs{U})$, which is $\bs{0}$ by part (a). Hence the expected inner product is 0.
The variance-covariance matrix of $L(\bs{Y} \mid \bs{X})$, and its covariance matrix with $\bs{Y}$ turn out to be the same, again analogous to the single variable case.
Additional properties of $L(\bs{Y} \mid \bs{X})$:
1. $\cov\left[\bs{Y}, L(\bs{Y} \mid \bs{X})\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$
2. $\vc\left[L(\bs{Y} \mid \bs{X})\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$
Proof
Recall that $L(\bs{Y} \mid \bs{X}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{X} - \E(\bs{X})\right]$
1. Using basic properties of covariance, $\cov\left[\bs{Y}, L(\bs{Y} \mid \bs{X})\right] = \cov\left[\bs{Y}, \bs{X} - \E(\bs{X})\right] \left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\right]^T = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$
2. Using basic properties of variance-covariance, $\vc\left[L(\bs{Y} \mid \bs{X})\right] = \vc\left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \bs{X} \right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \vc(\bs{X}) \left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\right]^T = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$
Next is the fundamental result that $L(\bs{Y} \mid \bs{X})$ is the affine function of $\bs{X}$ that is closest to $\bs{Y}$ in the mean square sense.
Suppose that $\bs{U} \in \R^n$ is an affine function of $\bs{X}$. Then
1. $\E\left(\|\bs{Y} - L(\bs{Y} \mid \bs{X})\|^2\right) \le \E\left(\|\bs{Y} - \bs{U}\|^2\right)$
2. Equality holds in (a) if and only if $\bs{U} = L(\bs{Y} \mid \bs{X})$ with probability 1.
Proof
Again, let $\bs{L} = L(\bs{Y} \mid \bs{X})$ for simplicity and let $\bs{U} \in \R^n$ be an affine function of $\bs{X}$.
1. Using the linearity of expected value, note that $\E\left(\|\bs{Y} - \bs{U}\|^2\right) = \E\left[\|(\bs{Y} - \bs{L}) + (\bs{L} - \bs{U})\|^2\right] = \E\left(\|\bs{Y} - \bs{L}\|^2\right) + 2 \E(\langle \bs{Y} - \bs{L}, \bs{L} - \bs{U}\rangle) + \E\left(\|\bs{L} - \bs{U}\|^2\right)$ But $\bs{L} - \bs{U}$ is an affine function of $\bs{X}$ and hence the middle term is 0 by our previous corollary. Hence $\E\left(\|\bs{Y} - \bs{U}\|^2\right) = \E\left(\|\bs{L} - \bs{Y}\|^2\right) + \E\left(\|\bs{L} - \bs{U}\|^2\right) \ge \E\left(\|\bs{L} - \bs{Y}\|^2\right)$
2. From (a), equality holds in the inequality if and only if $\E\left(\|\bs{L} - \bs{U}\|^2\right) = 0$ if and only if $\P(\bs{L} = \bs{U}) = 1$.
The variance-covariance matrix of the difference between $\bs{Y}$ and the best affine approximation is given in the next theorem.
$\vc\left[\bs{Y} - L(\bs{Y} \mid \bs{X})\right] = \vc(\bs{Y}) - \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$
Proof
Again, we abbreviate $L(\bs{Y} \mid \bs{X})$ by $\bs{L}$. Using basic properties of variance-covariance matrices, $\vc(\bs{Y} - \bs{L}) = \vc(\bs{Y}) - \cov(\bs{Y}, \bs{L}) - \cov(\bs{L}, \bs{Y}) + \vc(\bs{L})$ But $\cov(\bs{Y}, \bs{L}) = \cov(\bs{L}, \bs{Y}) = \vc(\bs{L}) = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})$. Substituting gives the result.
The actual mean square error when we use $L(\bs{Y} \mid \bs{X})$ to approximate $\bs{Y}$, namely $\E\left(\left\|\bs{Y} - L(\bs{Y} \mid \bs{X})\right\|^2\right)$, is the trace (sum of the diagonal entries) of the variance-covariance matrix above. The function of $\bs{x}$ given by $L(\bs{Y} \mid \bs{X} = \bs{x}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{x} - \E(\bs{X})\right]$ is known as the (distribution) linear regression function. If we observe $\bs{x}$ then $L(\bs{Y} \mid \bs{X} = \bs{x})$ is our best affine prediction of $\bs{Y}$.
Multiple linear regression is more powerful than it may at first appear, because it can be applied to non-linear transformations of the random vectors. That is, if $g: \R^m \to \R^j$ and $h: \R^n \to \R^k$ then $L\left[h(\bs{Y}) \mid g(\bs{X})\right]$ is the affine function of $g(\bs{X})$ that is closest to $h(\bs{Y})$ in the mean square sense. Of course, we must be able to compute the appropriate means, variances, and covariances.
Moreover, non-linear regression with a single, real-valued predictor variable can be thought of as a special case of multiple linear regression. Thus, suppose that $X$ is the predictor variable, $Y$ is the response variable, and that $(g_1, g_2, \ldots, g_n)$ is a sequence of real-valued functions. We can apply the results of this section to find the linear function of $\left(g_1(X), g_2(X), \ldots, g_n(X)\right)$ that is closest to $Y$ in the mean square sense. We just replace $X_i$ with $g_i(X)$ for each $i$. Again, we must be able to compute the appropriate means, variances, and covariances to do this.
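As a concrete illustration of this idea, the following sketch fits the best cubic (in $X$) predictor of $Y$ by applying the multivariate formulas to the predictor vector $\left(X, X^2, X^3\right)$; the particular distributions of $X$ and $Y$, the seed, and the sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
N = 200_000

# Single real predictor X; response depends on X non-linearly
X = rng.uniform(0, 1, size=N)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=N)

# Treat (g1(X), g2(X), g3(X)) = (X, X^2, X^3) as the predictor vector
G = np.column_stack([X, X**2, X**3])

vc_G = np.cov(G, rowvar=False)
cov_YG = np.array([np.cov(Y, G[:, j])[0, 1] for j in range(3)])

b = cov_YG @ np.linalg.inv(vc_G)
a = Y.mean() - b @ G.mean(axis=0)
print(a, b)   # coefficients of the best cubic (in X) predictor of Y
```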
Examples and Applications
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$. Find each of the following:
1. $\E(X, Y)$
2. $\vc(X, Y)$
Answer
1. $\left(\frac{7}{12}, \frac{7}{12}\right)$
2. $\left[\begin{matrix} \frac{11}{144} & -\frac{1}{144} \\ -\frac{1}{144} & \frac{11}{144}\end{matrix}\right]$
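These answers can be checked by simulation. Here is a minimal sketch using rejection sampling from $f$: proposals are uniform on the unit square and are accepted with probability $f(x, y) / 2$, since $f \le 2$. The sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Rejection sampling from f(x, y) = x + y on the unit square (f <= 2)
M = 2_000_000
x, y, u = rng.uniform(size=(3, M))
keep = u < (x + y) / 2.0
X, Y = x[keep], y[keep]

print(X.mean(), Y.mean())   # both approximately 7/12 = 0.5833
print(np.cov(X, Y))         # approx [[11/144, -1/144], [-1/144, 11/144]]
```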
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 2 (x + y)$ for $0 \le x \le y \le 1$. Find each of the following:
1. $\E(X, Y)$
2. $\vc(X, Y)$
Answer
1. $\left(\frac{5}{12}, \frac{3}{4}\right)$
2. $\left[\begin{matrix} \frac{43}{720} & \frac{1}{48} \\ \frac{1}{48} & \frac{3}{80} \end{matrix} \right]$
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 6 x^2 y$ for $0 \le x \le 1$, $0 \le y \le 1$. Find each of the following:
1. $\E(X, Y)$
2. $\vc(X, Y)$
Answer
Note that $X$ and $Y$ are independent.
1. $\left(\frac{3}{4}, \frac{2}{3}\right)$
2. $\left[\begin{matrix} \frac{3}{80} & 0 \\ 0 & \frac{1}{18} \end{matrix} \right]$
Suppose that $(X, Y)$ has probability density function $f$ defined by $f(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$. Find each of the following:
1. $\E(X, Y)$
2. $\vc(X, Y)$
3. $L(Y \mid X)$
4. $L\left[Y \mid \left(X, X^2\right)\right]$
5. Sketch the regression curves on the same set of axes.
Answer
1. $\left( \frac{5}{8}, \frac{5}{6} \right)$
2. $\left[ \begin{matrix} \frac{17}{448} & \frac{5}{336} \\ \frac{5}{336} & \frac{5}{252} \end{matrix} \right]$
3. $\frac{10}{17} + \frac{20}{51} X$
4. $\frac{49}{76} + \frac{10}{57} X + \frac{7}{38} X^2$
Suppose that $(X, Y, Z)$ is uniformly distributed on the region $\left\{(x, y, z) \in \R^3: 0 \le x \le y \le z \le 1\right\}$. Find each of the following:
1. $\E(X, Y, Z)$
2. $\vc(X, Y, Z)$
3. $L\left[Z \mid (X, Y)\right]$
4. $L\left[Y \mid (X, Z)\right]$
5. $L\left[X \mid (Y, Z)\right]$
6. $L\left[(Y, Z) \mid X\right]$
Answer
1. $\left(\frac{1}{4}, \frac{1}{2}, \frac{3}{4}\right)$
2. $\left[\begin{matrix} \frac{3}{80} & \frac{1}{40} & \frac{1}{80} \\ \frac{1}{40} & \frac{1}{20} & \frac{1}{40} \\ \frac{1}{80} & \frac{1}{40} & \frac{3}{80} \end{matrix}\right]$
3. $\frac{1}{2} + \frac{1}{2} Y$. Note that there is no $X$ term.
4. $\frac{1}{2} X + \frac{1}{2} Z$. Note that this is the midpoint of the interval $[X, Z]$.
5. $\frac{1}{2} Y$. Note that there is no $Z$ term.
6. $\left[\begin{matrix} \frac{1}{3} + \frac{2}{3} X \\ \frac{2}{3} + \frac{1}{3} X \end{matrix}\right]$
Suppose that $X$ is uniformly distributed on $(0, 1)$, and that given $X$, random variable $Y$ is uniformly distributed on $(0, X)$. Find each of the following:
1. $\E(X, Y)$
2. $\vc(X, Y)$
Answer
1. $\left(\frac{1}{2}, \frac{1}{4}\right)$
2. $\left[\begin{matrix} \frac{1}{12} & \frac{1}{24} \\ \frac{1}{24} & \frac{7}{144} \end{matrix} \right]$
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
In the introductory section, we defined expected value separately for discrete, continuous, and mixed distributions, using density functions. In the section on additional properties, we showed how these definitions can be unified, by first defining expected value for nonnegative random variables in terms of the right-tail distribution function. However, by far the best and most elegant definition of expected value is as an integral with respect to the underlying probability measure. This definition and a review of the properties of expected value are the goals of this section. No proofs are necessary (you will be happy to know), since all of the results follow from the general theory of integration. However, to understand the exposition, you will need to review the advanced sections on the integral with respect to a positive measure and the properties of the integral. If you are a new student of probability, or are not interested in the measure-theoretic detail of the subject, you can safely skip this section.
Definitions
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr{F}, \P)$. So $\Omega$ is the set of outcomes, $\mathscr{F}$ is the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$.
Recall that a random variable $X$ for the experiment is simply a measurable function from $(\Omega, \mathscr{F})$ into another measurable space $(S, \mathscr{S})$. When $S \subseteq \R^n$, we assume that $S$ is Lebesgue measurable, and we take $\mathscr{S}$ to be the $\sigma$-algebra of Lebesgue measurable subsets of $S$. As noted above, here is the measure-theoretic definition:
If $X$ is a real-valued random variable on the probability space, the expected value of $X$ is defined as the integral of $X$ with respect to $\P$, assuming that the integral exists: $\E(X) = \int_\Omega X \, d\P$
Let's review how the integral is defined in stages, but now using the notation of probability theory.
Let $S$ denote the support set of $X$, so that $S$ is a measurable subset of $\R$.
1. If $S$ is finite, then $\E(X) = \sum_{x \in S} x \, \P(X = x)$.
2. If $S \subseteq [0, \infty)$, then $\E(X) = \sup\left\{\E(Y): Y \text{ has finite range and } 0 \le Y \le X\right\}$
3. For general $S \subseteq \R$, $\E(X) = \E\left(X^+\right) - \E\left(X^-\right)$ as long as the right side is not of the form $\infty - \infty$, and where $X^+$ and $X^-$ denote the positive and negative parts of $X$.
4. If $A \in \mathscr{F}$, then $\E(X; A) = \E\left(X \bs{1}_A \right)$, assuming that the expected value on the right exists.
Thus, as with integrals generally, an expected value can exist as a number in $\R$ (in which case $X$ is integrable), can exist as $\infty$ or $-\infty$, or can fail to exist. In reference to part (a), a random variable with a finite set of values in $\R$ is a simple function in the terminology of general integration. In reference to part (b), note that the expected value of a nonnegative random variable always exists in $[0, \infty]$. In reference to part (c), $\E(X)$ exists if and only if either $\E\left(X^+\right) \lt \infty$ or $\E\left(X^-\right) \lt \infty$.
Our next goal is to restate the basic theorems and properties of integrals, but in the notation of probability. Unless otherwise noted, all random variables are assumed to be real-valued.
Basic Properties
The Linear Properties
Perhaps the most important and basic properties are the linear properties. Part (a) is the additive property and part (b) is the scaling property.
Suppose that $X$ and $Y$ are random variables whose expected values exist, and that $c \in \R$. Then
1. $\E(X + Y) = \E(X) + \E(Y)$ as long as the right side is not of the form $\infty - \infty$.
2. $\E(c X) = c \E(X)$
Thus, part (a) holds if at least one of the expected values on the right is finite, or if both are $\infty$, or if both are $-\infty$. What is ruled out are the two cases where one expected value is $\infty$ and the other is $-\infty$, and this is what is meant by the indeterminate form $\infty - \infty$.
Equality and Order
Our next set of properties deal with equality and order. First, the expected value of a random variable over a null set is 0.
If $X$ is a random variable and $A$ is an event with $\P(A) = 0$, then $\E(X; A) = 0$.
Random variables that are equivalent have the same expected value.
If $X$ is a random variable whose expected value exists, and $Y$ is a random variable with $\P(X = Y) = 1$, then $\E(X) = \E(Y)$.
Our next result is the positive property of expected value.
Suppose that $X$ is a random variable and $\P(X \ge 0) = 1$. Then
1. $\E(X) \ge 0$
2. $\E(X) = 0$ if and only if $\P(X = 0) = 1$.
So, if $X$ is a nonnegative random variable then $\E(X) \gt 0$ if and only if $\P(X \gt 0) \gt 0$. The next result is the increasing property of expected value, perhaps the most important property after linearity.
Suppose that $X, Y$ are random variables whose expected values exist, and that $\P(X \le Y) = 1$. Then
1. $\E(X) \le \E(Y)$
2. Except in the case that both expected values are $\infty$ or both $-\infty$, $\E(X) = \E(Y)$ if and only if $\P(X = Y) = 1$.
So if $X \le Y$ with probability 1 then, except in the two cases mentioned, $\E(X) \lt \E(Y)$ if and only if $\P(X \lt Y) \gt 0$. The next result is the absolute value inequality.
Suppose that $X$ is a random variable whose expected value exists. Then
1. $\left| \E(X) \right| \le \E \left(\left| X \right| \right)$
2. If $\E(X)$ is finite, then equality holds in (a) if and only if $\P(X \ge 0) = 1$ or $\P(X \le 0) = 1$.
Change of Variables and Density Functions
The Change of Variables Theorem
Suppose now that $X$ is a general random variable on the probability space $(\Omega, \mathscr F, \P)$, taking values in a measurable space $(S, \mathscr{S})$. Recall that the probability distribution of $X$ is the probability measure $P$ on $(S, \mathscr{S})$ given by $P(A) = \P(X \in A)$ for $A \in \mathscr{S}$. This is a special case of a new positive measure induced by a given positive measure and a measurable function. If $g: S \to \R$ is measurable, then $g(X)$ is a real-valued random variable. The following result shows how to compute the expected value of $g(X)$ as an integral with respect to the distribution of $X$, and is known as the change of variables theorem.
If $g: S \to \R$ is measurable then, assuming that the expected value exists, $\E\left[g(X)\right] = \int_S g(x) \, dP(x)$
So, using the original definition and the change of variables theorem, and giving the variables explicitly for emphasis, we have $\E\left[g(X)\right] = \int_\Omega g\left[X(\omega)\right] \, d\P(\omega) = \int_S g(x) \, dP(x)$
The Radon-Nikodym Theorem
Suppose now $\mu$ is a positive measure on $(S, \mathscr{S})$, and that the distribution of $X$ is absolutely continuous with respect to $\mu$. Recall that this means that $\mu(A) = 0$ implies $P(A) = \P(X \in A) = 0$ for $A \in \mathscr{S}$. By the Radon-Nikodym theorem, named for Johann Radon and Otto Nikodym, $X$ has a probability density function $f$ with respect to $\mu$. That is, $P(A) = \P(X \in A) = \int_A f \, d\mu, \quad A \in \mathscr{S}$ In this case, we can write the expected value of $g(X)$ as an integral with respect to the probability density function.
If $g: S \to \R$ is measurable then, assuming that the expected value exists, $\E\left[g(X)\right] = \int_S g f \, d\mu$
Again, giving the variables explicitly for emphasis, we have the following chain of integrals: $\E\left[g(X)\right] = \int_\Omega g\left[X(\omega)\right] \, d\P(\omega) = \int_S g(x) \, dP(x) = \int_S g(x) f(x) \, d\mu(x)$
There are two critically important special cases.
Discrete Distributions
Suppose first that $(S, \mathscr S, \#)$ is a discrete measure space, so that $S$ is countable, $\mathscr{S} = \mathscr{P}(S)$ is the collection of all subsets of $S$, and $\#$ is counting measure on $(S, \mathscr S)$. Thus, $X$ has a discrete distribution on $S$, and this distribution is always absolutely continuous with respect to $\#$. Specifically, $\#(A) = 0$ if and only if $A = \emptyset$ and of course $\P(X \in \emptyset) = 0$. The probability density function $f$ of $X$ with respect to $\#$, as we know, is simply $f(x) = \P(X = x)$ for $x \in S$. Moreover, integrals with respect to $\#$ are sums, so $\E\left[g(X)\right] = \sum_{x \in S} g(x) f(x)$ assuming that the expected value exists. Existence in this case means that either the sum of the positive terms is finite or the sum of the negative terms is finite, so that the sum makes sense (and in particular does not depend on the order in which the terms are added). Specializing further, if $X$ itself is real-valued and $g$ is the identity function ($g(x) = x$), we have $\E(X) = \sum_{x \in S} x f(x)$ which was our original definition of expected value in the discrete case.
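As a tiny numerical illustration of the discrete case, here is a sketch that computes $\E[g(X)]$ both as a sum against the density and by Monte Carlo; the support set, density, and function $g$ are arbitrary choices.

```python
import numpy as np

# A discrete distribution on S = {0, 1, 2, 3} (arbitrary choices)
S = np.array([0, 1, 2, 3])
f = np.array([0.1, 0.2, 0.3, 0.4])   # density with respect to counting measure

g = lambda x: x ** 2

# E[g(X)] as an integral with respect to counting measure, i.e. a sum
print(np.sum(g(S) * f))   # 0*0.1 + 1*0.2 + 4*0.3 + 9*0.4 = 5.0

# Monte Carlo check
rng = np.random.default_rng(seed=5)
X = rng.choice(S, p=f, size=100_000)
print(np.mean(g(X)))      # approximately 5.0
```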
Continuous Distributions
For the second special case, suppose that $(S, \mathscr S, \lambda_n)$ is a Euclidean measure space, so that $S$ is a Lebesgue measurable subset of $\R^n$ for some $n \in \N_+$, $\mathscr S$ is the $\sigma$-algebra of Lebesgue measurable subsets of $S$, and $\lambda_n$ is Lebesgue measure on $(S, \mathscr S)$. The distribution of $X$ is absolutely continuous with respect to $\lambda_n$ if $\lambda_n(A) = 0$ implies $\P(X \in A) = 0$ for $A \in \mathscr{S}$. If this is the case, then a probability density function $f$ of $X$ has its usual meaning. Thus, $\E\left[g(X)\right] = \int_S g(x) f(x) \, d\lambda_n(x)$ assuming that the expected value exists. When $g$ is a sufficiently nice function, this integral reduces to an ordinary $n$-dimensional Riemann integral of calculus. Specializing further, if $X$ is itself real-valued and $g$ is the identity function ($g(x) = x$), then $\E(X) = \int_S x f(x) \, dx$ which was our original definition of expected value in the continuous case.
Interchange Properties
In this subsection, we review properties that allow the interchange of expected value and other operations: limits of sequences, infinite sums, and integrals. We assume again that the random variables are real-valued unless otherwise specified.
Limits
Our first set of convergence results deals with the interchange of expected value and limits. We start with the expected value version of Fatou's lemma, named in honor of Pierre Fatou. Its usefulness stems from the fact that no assumptions are placed on the random variables, except that they be nonnegative.
Suppose that $X_n$ is a nonnegative random variable for $n \in \N_+$. Then $\E\left( \liminf_{n \to \infty} X_n \right) \le \liminf_{n \to \infty} \E(X_n)$
Our next set of results gives conditions for the interchange of expected value and limits.
Suppose that $X_n$ is a random variable for each $n \in \N_+$. then $\E\left(\lim_{n \to \infty} X_n\right) = \lim_{n \to \infty} \E\left(X_n\right)$ in each of the following cases:
1. $X_n$ is nonnegative for each $n \in \N_+$ and $X_n$ is increasing in $n$.
2. $\E(X_n)$ exists for each $n \in \N_+$, $\E(X_1) \gt -\infty$, and $X_n$ is increasing in $n$.
3. $\E(X_n)$ exists for each $n \in \N_+$, $\E(X_1) \lt \infty$, and $X_n$ is decreasing in $n$.
4. $\lim_{n \to \infty} X_n$ exists, and $\left|X_n\right| \le Y$ for $n \in \N$ where $Y$ is a nonnegative random variable with $\E(Y) \lt \infty$.
5. $\lim_{n \to \infty} X_n$ exists, and $\left|X_n\right| \le c$ for $n \in \N$ where $c$ is a positive constant.
Statements about the random variables in the theorem above (nonnegative, increasing, existence of limit, etc.) need only hold with probability 1. Part (a) is the monotone convergence theorem, one of the most important convergence results and in a sense, essential to the definition of the integral in the first place. Parts (b) and (c) are slight generalizations of the monotone convergence theorem. In parts (a), (b), and (c), note that $\lim_{n \to \infty} X_n$ exists (with probability 1), although the limit may be $\infty$ in parts (a) and (b) and $-\infty$ in part (c) (with positive probability). Part (d) is the dominated convergence theorem, another of the most important convergence results. It's sometimes also known as Lebesgue's dominated convergence theorem in honor of Henri Lebesgue. Part (e) is a corollary of the dominated convergence theorem, and is known as the bounded convergence theorem.
Infinite Series
Our next results involve the interchange of expected value and an infinite sum, so these results generalize the basic additivity property of expected value.
Suppose that $X_n$ is a random variable for $n \in \N_+$. Then $\E\left( \sum_{n=1}^\infty X_n\right) = \sum_{n=1}^\infty \E\left(X_n\right)$ in each of the following cases:
1. $X_n$ is nonnegative for each $n \in \N_+$.
2. $\E\left(\sum_{n=1}^\infty \left| X_n \right|\right) \lt \infty$
Part (a) is a consequence of the monotone convergence theorem, and part (b) is a consequence of the dominated convergence theorem. In (b), note that $\sum_{n=1}^\infty \left| X_n \right| \lt \infty$ and hence $\sum_{n=1}^\infty X_n$ is absolutely convergent with probability 1. Our next result is the additivity of the expected value over a countably infinite collection of disjoint events.
Suppose that $X$ is a random variable whose expected value exists, and that $\{A_n: n \in \N_+\}$ is a disjoint collection of events. Let $A = \bigcup_{n=1}^\infty A_n$. Then $\E(X; A) = \sum_{n=1}^\infty \E(X; A_n)$
Of course, the previous theorem applies in particular if $X$ is nonnegative.
Integrals
Suppose that $(T, \mathscr{T}, \mu)$ is a $\sigma$-finite measure space, and that $X_t$ is a real-valued random variable for each $t \in T$. Thus we can think of $\left\{X_t: t \in T\right\}$ as a stochastic process indexed by $T$. We assume that $(\omega, t) \mapsto X_t(\omega)$ is measurable, as a function from the product space $(\Omega \times T, \mathscr{F} \otimes \mathscr{T})$ into $\R$. Our next result involves the interchange of expected value and integral, and is a consequence of Fubini's theorem, named for Guido Fubini.
Under the assumptions above, $\E\left[\int_T X_t \, d\mu(t)\right] = \int_T \E\left(X_t\right) \, d\mu(t)$ in each of the following cases:
1. $X_t$ is nonnegative for each $t \in T$.
2. $\int_T \E\left(\left|X_t\right|\right) \, d\mu(t) \lt \infty$
Fubini's theorem actually states that the two iterated integrals above equal the joint integral $\int_{\Omega \times T} X_t(\omega) \, d(\P \otimes \mu)(\omega, t)$ where of course, $\P \otimes \mu$ is the product measure on $(\Omega \times T, \mathscr{F} \otimes \mathscr{T})$. However, our interest is usually in evaluating the iterated integral above on the left in terms of the iterated integral on the right. Part (a) is the expected value version of Tonelli's theorem, named for Leonida Tonelli.
Examples and Exercises
You may have worked some of the computational exercises before, but try to see them in a new light, in terms of the general theory of integration.
The Cauchy Distribution
Recall that the Cauchy distribution, named for Augustin Cauchy, is a continuous distribution with probability density function $f$ given by $f(x) = \frac{1}{\pi \left(1 + x^2\right)}, \quad x \in \R$ The Cauchy distribution is studied in more generality in the chapter on Special Distributions.
Suppose that $X$ has the Cauchy distribution.
1. Show that $\E(X)$ does not exist.
2. Find $\E\left(X^2\right)$
Answer
1. $\E\left(X^+\right) = \E\left(X^-\right) = \infty$
2. $\infty$
Open the Cauchy Experiment and keep the default parameters. Run the experiment 1000 times and note the behavior of the sample mean.
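If you prefer, the same phenomenon can be seen in a few lines of code: the running sample means of simulated Cauchy variables wander erratically rather than converging, consistent with the nonexistence of $\E(X)$. The seed and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=6)

# Running sample means of standard Cauchy variables do not settle down
X = rng.standard_cauchy(size=10_000)
running_mean = np.cumsum(X) / np.arange(1, X.size + 1)
print(running_mean[[99, 999, 4999, 9999]])   # erratic, no convergence
```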
The Pareto Distribution
Recall that the Pareto distribution, named for Vilfredo Pareto, is a continuous distribution with probability density function $f$ given by $f(x) = \frac{a}{x ^{a+1}}, \quad x \in [1, \infty)$ where $a \gt 0$ is the shape parameter. The Pareto distribution is studied in more generality in the chapter on Special Distributions.
Suppose that $X$ has the Pareto distribution with shape parameter $a$. Find $\E(X)$ in the following cases:
1. $0 \lt a \le 1$
2. $a \gt 1$
Answer
1. $\infty$
2. $\frac{a}{a - 1}$
Open the special distribution simulator and select the Pareto distribution. Vary the shape parameter and note the shape of the probability density function and the location of the mean. For various values of the parameter, run the experiment 1000 times and compare the sample mean with the distribution mean.
Suppose that $X$ has the Pareto distribution with shape parameter $a$. Find $\E\left(1 / X^n \right)$ for $n \in \N_+$.
Answer
$\frac{a}{a + n}$
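Both Pareto results are easy to check by simulation, using the inverse transform $X = U^{-1/a}$ where $U$ is uniform on $(0, 1)$. A minimal sketch with the arbitrary choices $a = 3$ and $n = 2$:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
a, n = 3.0, 2

U = rng.uniform(size=1_000_000)
X = U ** (-1.0 / a)   # inverse transform: X is Pareto with shape a

print(X.mean(), a / (a - 1))              # E(X) = 3/2 for a = 3
print(np.mean(1 / X ** n), a / (a + n))   # E(1/X^n) = 3/5
```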
Special Results for Nonnegative Variables
For a nonnegative variable, the moments can be obtained from integrals of the right-tail distribution function.
If $X$ is a nonnegative random variable then $\E\left(X^n\right) = \int_0^\infty n x^{n-1} \P(X \gt x) \, dx$
Proof
By Fubini's theorem we can interchange an expected value and integral when the integrand is nonnegative. Hence $\int_0^\infty n x^{n-1} \P(X \gt x) \, dx = \int_0^\infty n x^{n-1} \E\left[\bs{1}(X \gt x)\right] \, dx = \E \left(\int_0^\infty n x^{n-1} \bs{1}(X \gt x) \, dx \right) = \E\left( \int_0^X n x^{n-1} \, dx \right) = \E\left(X^n\right)$
When $n = 1$ we have $\E(X) = \int_0^\infty \P(X \gt x) \, dx$. We saw this result before in the section on additional properties of expected value, but now we can understand the proof in terms of Fubini's theorem.
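As a quick numerical check of the cases $n = 1$ and $n = 2$, here is a sketch for the exponential distribution with rate 1, where $\P(X \gt x) = e^{-x}$, $\E(X) = 1$, and $\E(X^2) = 2$; the integrals are approximated by simple Riemann sums on an arbitrary truncated grid.

```python
import numpy as np

# For X exponential with rate 1: P(X > x) = e^{-x}
x = np.linspace(0, 50, 500_001)
dx = x[1] - x[0]
tail = np.exp(-x)

print(np.sum(tail) * dx)           # approximately 1 = E(X)
print(np.sum(2 * x * tail) * dx)   # approximately 2 = E(X^2)
```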
For a random variable taking nonnegative integer values, the moments can be computed from sums involving the right-tail distribution function.
Suppose that $X$ has a discrete distribution, taking values in $\N$. Then $\E\left(X^n\right) = \sum_{k=1}^\infty \left[k^n - (k - 1)^n\right] \P(X \ge k)$
Proof
By the theorem above, we can interchange expected value and infinite series when the terms are nonnegative. Hence $\sum_{k=1}^\infty \left[k^n - (k - 1)^n\right] \P(X \ge k) = \sum_{k=1}^\infty \left[k^n - (k - 1)^n\right] \E\left[\bs{1}(X \ge k)\right] = \E\left(\sum_{k=1}^\infty \left[k^n - (k - 1)^n\right] \bs{1}(X \ge k) \right) = \E\left(\sum_{k=1}^X \left[k^n - (k - 1)^n\right] \right) = \E\left(X^n\right)$
When $n = 1$ we have $\E(X) = \sum_{k=1}^\infty \P(X \ge k)$. We saw this result before in the section on additional properties of expected value, but now we can understand the proof in terms of the interchange of sum and expected value.
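As a numerical check, consider the geometric distribution on $\N_+$ with success parameter $p$, for which $\P(X \ge k) = (1 - p)^{k - 1}$, $\E(X) = 1 / p$, and $\E(X^2) = (2 - p) / p^2$. A minimal sketch (the value of $p$ and the truncation point are arbitrary):

```python
import numpy as np

# X geometric on {1, 2, ...} with success probability p
p = 0.3
k = np.arange(1, 200)
tail = (1 - p) ** (k - 1)   # P(X >= k)

# Tail-sum formula with n = 1: E(X) = 1/p
print(np.sum(tail), 1 / p)   # both 10/3

# Tail-sum formula with n = 2: E(X^2) = (2 - p)/p^2
print(np.sum((k**2 - (k - 1)**2) * tail))
print((2 - p) / p**2)
```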
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\mse}{\text{MSE}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
Conditional expected value is much more important than one might at first think. In fact, conditional expected value is at the core of modern probability theory because it provides the basic way of incorporating known information into a probability measure.
Basic Theory
Definition
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$, so that $\Omega$ is the set of outcomes, $\mathscr F$ is the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. In our first elementary discussion, we studied the conditional expected value of a real-valued random variable $X$ given a general random variable $Y$. The more general approach is to condition on a sub $\sigma$-algebra $\mathscr G$ of $\mathscr F$. The sections on $\sigma$-algebras and measure theory are essential prerequisites for this section.
Before we get to the definition, we need some preliminaries. First, all random variables mentioned are assumed to be real-valued. Next, the notion of equivalence plays a fundamental role in this section: recall that random variables $X_1$ and $X_2$ are equivalent if $\P(X_1 = X_2) = 1$. Equivalence really does define an equivalence relation on the collection of random variables defined on the sample space, and we often regard equivalent random variables as being essentially the same object. More precisely, from this point of view, the objects of our study are not individual random variables but rather equivalence classes of random variables under this equivalence relation. Finally, for $A \in \mathscr F$, recall the notation for the expected value of $X$ on the event $A$: $\E(X; A) = \E(X \bs{1}_A)$ assuming of course that the expected value exists. For the remainder of this subsection, suppose that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$.
Suppose that $X$ is a random variable with $\E(|X|) \lt \infty$. The conditional expected value of $X$ given $\mathscr G$ is the random variable $\E(X \mid \mathscr G)$ defined by the following properties:
1. $\E(X \mid \mathscr G)$ is measurable with respect to $\mathscr G$.
2. If $A \in \mathscr G$ then $\E[\E(X \mid \mathscr G); A] = \E(X; A)$
The basic idea is that $\E(X \mid \mathscr G)$ is the expected value of $X$ given the information in the $\sigma$-algebra $\mathscr G$. Hopefully this idea will become clearer during our study. The conditions above uniquely define $\E(X \mid \mathscr G)$ up to equivalence. The proof of this fact is a simple application of the Radon-Nikodym theorem, named for Johann Radon and Otto Nikodym.
Suppose again that $X$ is a random variable with $\E(|X|) \lt \infty$.
1. There exists a random variable $V$ satisfying the definition.
2. If $V_1$ and $V_2$ satisfy the definition, then $\P(V_1 = V_2) = 1$ so that $V_1$ and $V_2$ are equivalent.
Proof
1. Note that $\nu(A) = \E(X; A)$ for $A \in \mathscr G$ defines a (signed) measure on $\mathscr G$. Moreover, if $A \in \mathscr G$ and $\P(A) = 0$ then $\nu(A) = 0$. Hence $\nu$ is absolutely continuous with respect to the restriction of $\P$ to $\mathscr G$. By the Radon-Nikodym theorem, there exists a random variable $V$ that is measurable with respect to $\mathscr G$ such that $\nu(A) = \E(V; A)$ for $A \in \mathscr G$. That is, $V$ is the density or derivative of $\nu$ with respect to $\P$ on $\mathscr G$.
2. This follows from the uniqueness of the Radon-Nikodym derivative, up to equivalence.
The following characterization might seem stronger but is in fact equivalent to the definition.
Suppose again that $X$ is a random variable with $\E(|X|) \lt \infty$. Then $\E(X \mid \mathscr G)$ is characterized by the following properties:
1. $\E(X \mid \mathscr G)$ is measurable with respect to $\mathscr G$
2. If $U$ is measurable with respect to $\mathscr G$ and $\E(|U X|) \lt \infty$ then $\E[U \E(X \mid \mathscr G)] = \E(U X)$.
Proof
We have to show that part (b) in the definition is equivalent to part (b) here. First, (b) here implies (b) in the definition since $\bs{1}_A$ is $\mathscr G$-measurable if $A \in \mathscr G$. Conversely, suppose that (b) in the definition holds. We will show that (b) here holds by a classical bootstrapping argument. First, $\E[U \E(X \mid \mathscr G)] = \E(U X)$ if $U = \bs{1}_A$ for some $A \in \mathscr G$. Next suppose that $U$ is a simple random variable that is $\mathscr G$-measurable. That is, $U = \sum_{i \in I} a_i \bs{1}_{A_i}$ where $I$ is a finite index set, $a_i \ge 0$ for $i \in I$, and $A_i \in \mathscr G$ for $i \in I$. Then $\E[U \E(X \mid \mathscr G)] = \E\left[\sum_{i \in I} a_i \bs{1}_{A_i} \E(X \mid \mathscr G)\right] = \sum_{i \in I} a_i \E[\bs{1}_{A_i} \E(X \mid \mathscr G)] = \sum_{i \in I} a_i \E(\bs{1}_{A_i} X) = \E\left(\sum_{i \in I} a_i \bs{1}_{A_i} X\right) = \E(U X)$ Next suppose that $U$ is nonnegative and $\mathscr G$-measurable. Then there exists a sequence of simple $\mathscr G$-measurable random variables $(U_1, U_2, \ldots)$ with $U_n \uparrow U$ as $n \to \infty$. Then by the previous step, $\E[U_n \E(X \mid \mathscr G)] = \E(U_n X)$ for each $n$. Letting $n \to \infty$ and using the monotone convergence theorem we have $\E[U \E(X \mid \mathscr G)] = \E(U X)$. Finally, suppose that $U$ is a general $\mathscr G$-measurable random variable. Then $U = U^+ - U^-$ where $U^+$ and $U^-$ are the usual positive and negative parts of $U$. These parts are nonnegative and $\mathscr G$-measurable, so by the previous step, $\E[U^+ \E(X \mid \mathscr G)] = \E(U^+ X)$ and $\E[U^- \E(X \mid \mathscr G)] = \E(U^- X)$. Hence $\E[U \E(X \mid \mathscr G)] = \E[(U^+ - U^-) \E(X \mid \mathscr G)] = \E[U^+ \E(X \mid \mathscr G)] - \E[U^- \E(X \mid \mathscr G)] = \E(U^+ X) - \E(U^- X) = \E(U X)$
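To make the definition concrete, suppose that $\mathscr G$ is generated by a finite partition $\{A_1, A_2, \ldots, A_k\}$ of $\Omega$ with $\P(A_i) \gt 0$ for each $i$. Then $\E(X \mid \mathscr G)$ is constant on each cell $A_i$, equal to the average $\E(X; A_i) / \P(A_i)$, and both defining properties are easy to verify. Here is a minimal sketch on a finite sample space; the outcome probabilities, the variable $X$, and the partition are arbitrary choices.

```python
import numpy as np

# Finite sample space with 6 equally likely outcomes (arbitrary choice)
probs = np.full(6, 1 / 6)
X = np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0])   # an arbitrary random variable

# G is generated by the partition {0,1}, {2,3,4}, {5}
partition = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]

cond_exp = np.empty(6)
for cell in partition:
    # E(X | G) on a cell A is E(X; A) / P(A), the conditional average
    cond_exp[cell] = np.sum(X[cell] * probs[cell]) / np.sum(probs[cell])

print(cond_exp)   # [2.5 2.5 5.  5.  5.  7. ]

# Defining property (b) with A = Omega: E[E(X | G)] = E(X)
print(np.sum(cond_exp * probs), np.sum(X * probs))   # both 4.5
```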
Properties
Our next discussion concerns some fundamental properties of conditional expected value. All equalities and inequalities are understood to hold modulo equivalence, that is, with probability 1. Note also that many of the proofs work by showing that the right hand side satisfies the properties in the definition for the conditional expected value on the left side. Once again we assume that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$.
Our first property is a simple consequence of the definition: $X$ and $\E(X \mid \mathscr G)$ have the same mean.
Suppose that $X$ is a random variable with $\E(|X|) \lt \infty$. Then $\E[\E(X \mid \mathscr G)] = \E(X)$.
Proof
This follows immediately by letting $A = \Omega$ in the definition.
The result above can often be used to compute $\E(X)$, by choosing the $\sigma$-algebra $\mathscr G$ in a clever way. We say that we are computing $\E(X)$ by conditioning on $\mathscr G$. Our next properties are fundamental: every version of expected value must satisfy the linearity properties. The first part is the additive property and the second part is the scaling property.
Suppose that $X$ and $Y$ are random variables with $\E(|X|) \lt \infty$ and $\E(|Y|) \lt \infty$, and that $c \in \R$. Then
1. $\E(X + Y \mid \mathscr G) = \E(X \mid \mathscr G) + \E(Y \mid \mathscr G)$
2. $\E(c X \mid \mathscr G) = c \E(X \mid \mathscr G)$
Proof
1. Note that $\E(|X + Y|) \le \E(|X|) + \E(|Y|) \lt \infty$ so $\E(X + Y \mid \mathscr G)$ is defined. We show that $\E(X \mid \mathscr G) + \E(Y \mid \mathscr G)$ satisfies the conditions in the definition for $\E(X + Y \mid \mathscr G)$. Note first that $\E(X \mid \mathscr G) + \E(Y \mid \mathscr G)$ is $\mathscr G$-measurable since both terms are. If $A \in \mathscr G$ then $\E\{[\E(X \mid \mathscr G) + \E(Y \mid \mathscr G)]; A\} = \E[\E(X \mid \mathscr G); A] + \E[\E(Y \mid \mathscr G); A] = \E(X; A) + \E(Y; A) = \E[X + Y; A]$
2. Note that $\E(|c X|) = |c| \E(|X|) \lt \infty$ so $\E(c X \mid \mathscr G)$ is defined. We show that $c \E(X \mid \mathscr G)$ satisfy the conditions in the definition for $\E(c X \mid \mathscr G)$. Note first that $c \E(X \mid \mathscr G)$ is $\mathscr G$-measurable since the second factor is. If $A \in \mathscr G$ then $\E[c \E(X \mid \mathscr G); A] = c \E[\E(X \mid \mathscr G); A] = c \E(X; A) = \E(c X; A)$
The next set of properties are also fundamental to every notion of expected value. The first part is the positive property and the second part is the increasing property.
Suppose again that $X$ and $Y$ are random variables with $\E(|X|) \lt \infty$ and $\E(|Y|) \lt \infty$.
1. If $X \ge 0$ then $\E(X \mid \mathscr G) \ge 0$
2. If $X \le Y$ then $\E(X \mid \mathscr G) \le \E(Y \mid \mathscr G)$
Proof
1. Let $A = \{\E(X \mid \mathscr G) \lt 0\}$. Note that $A \in \mathscr G$ and hence $\E(X; A) = \E[\E(X \mid \mathscr G); A]$. Since $X \ge 0$ with probability 1 we have $\E(X; A) \ge 0$. On the other hand, if $\P(A) \gt 0$ then $\E[\E(X \mid \mathscr G); A] \lt 0$ which is a contradiction. Hence we must have $\P(A) = 0$.
2. Note that if $X \le Y$ then $Y - X \ge 0$. Hence by (a) and the additive property, $\E(Y - X \mid \mathscr G) = \E(Y \mid \mathscr G) - \E(X \mid \mathscr G) \ge 0$ so $\E(Y \mid \mathscr G) \ge \E(X \mid \mathscr G)$.
The next few properties relate to the central idea that $\E(X \mid \mathscr G)$ is the expected value of $X$ given the information in the $\sigma$-algebra $\mathscr G$.
Suppose that $X$ and $V$ are random variables with $\E(|X|) \lt \infty$ and $\E(|X V|) \lt \infty$ and that $V$ is measurable with respect to $\mathscr G$. Then $\E(V X \mid \mathscr G) = V \E(X \mid \mathscr G)$.
Proof
We show that $V \E(X \mid \mathscr G)$ satisfies the properties that characterize $\E(V X \mid \mathscr G)$. First, $V \E(X \mid \mathscr G)$ is $\mathscr G$-measurable since both factors are. If $U$ is $\mathscr G$-measurable with $\E(|U V X|) \lt \infty$ then $U V$ is also $\mathscr G$-measurable and hence $\E[U V \E(X \mid \mathscr G)] = \E(U V X) = \E[U (V X)]$
Compare this result with the scaling property. If $V$ is measurable with respect to $\mathscr G$ then $V$ is like a constant in terms of the conditional expected value given $\mathscr G$. On the other hand, note that this result implies the scaling property, since a constant can be viewed as a random variable, and as such, is measurable with respect to any $\sigma$-algebra. As a corollary to this result, note that if $X$ itself is measurable with respect to $\mathscr G$ then $\E(X \mid \mathscr G) = X$. The following result gives the other extreme.
Suppose that $X$ is a random variable with $\E(|X|) \lt \infty$. If $X$ and $\mathscr G$ are independent then $\E(X \mid \mathscr G) = \E(X)$.
Proof
We show that $\E(X)$ satisfies the properties in the definition for $\E(X \mid \mathscr G)$. First of course, $\E(X)$ is $\mathscr G$-measurable as a constant random variable. If $A \in \mathscr G$ then $X$ and $\bs{1}_A$ are independent and hence $\E(X; A) = \E(X) \P(A) = \E[\E(X); A]$
Every random variable $X$ is independent of the trivial $\sigma$-algebra $\{\emptyset, \Omega\}$ so it follows that $\E(X \mid \{\emptyset, \Omega\}) = \E(X)$.
The next properties are consistency conditions, also known as the tower properties. When conditioning twice, with respect to nested $\sigma$-algebras, the smaller one (representing the least amount of information) always prevails.
Suppose that $X$ is a random variable with $\E(|X|) \lt \infty$ and that $\mathscr H$ is a sub $\sigma$-algebra of $\mathscr G$. Then
1. $\E[\E(X \mid \mathscr H) \mid \mathscr G] = \E(X \mid \mathscr H)$
2. $\E[\E(X \mid \mathscr G) \mid \mathscr H] = \E(X \mid \mathscr H)$
Proof
1. Note first that $\E(X \mid \mathscr H)$ is $\mathscr H$-measurable and hence also $\mathscr G$-measurable. Thus by (7), $\E[\E(X \mid \mathscr H) \mid \mathscr G] = \E(X \mid \mathscr H)$.
2. We show that $\E(X \mid \mathscr H)$ satisfies the conditions in the definition for $\E[\E(X \mid \mathscr G) \mid \mathscr H]$. Note again that $\E(X \mid \mathscr H)$ is $\mathscr H$-measurable. If $A \in \mathscr H$ then $A \in \mathscr G$ and hence $\E[\E(X \mid \mathscr G); A] = \E(X; A) = \E[\E(X \mid \mathscr H); A]$
The next result gives Jensen's inequality for conditional expected value, named for Johan Jensen.
Suppose that $X$ takes values in an interval $S \subseteq \R$ and that $g: S \to \R$ is convex. If $\E(|X|) \lt \infty$ and $\E(|g(X)|) \lt \infty$ then $\E[g(X) \mid \mathscr G] \ge g[\E(X \mid \mathscr G)]$
Proof
As with Jensen's inequality for ordinary expected value, the best proof uses the characterization of convex functions in terms of supporting lines: For each $t \in S$ there exist numbers $a$ and $b$ (depending on $t$) such that
• $a + b t = g(t)$
• $a + b x \le g(x)$ for $x \in S$
Random variables $X$ and $\E(X \mid \mathscr G)$ take values in $S$. We can construct a random supporting line at $\E(X \mid \mathscr G)$. That is, there exist random variables $A$ and $B$, measurable with respect to $\mathscr G$, such that
1. $A + B \E(X \mid \mathscr G) = g[\E(X \mid \mathscr G)]$
2. $A + B X \le g(X)$
We take conditional expected value through the inequality in (b) and then use properties of conditional expected value and property (a): $\E[g(X) \mid \mathscr G] \ge \E(A + B X \mid \mathscr G) = A + B \E(X \mid \mathscr G) = g[\E(X \mid \mathscr G)]$ Note that the second step uses the fact that $A$ and $B$ are measurable with respect to $\mathscr G$.
Conditional Probability
For our next discussion, suppose as usual that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$. The conditional probability of an event $A$ given $\mathscr G$ can be defined as a special case of conditional expected value. As usual, let $\bs{1}_A$ denote the indicator random variable of $A$.
For $A \in \mathscr F$ we define $\P(A \mid \mathscr G) = \E(\bs 1_A \mid \mathscr G)$
Thus, we have the following characterizations of conditional probability, which are special cases of the definition and the alternate version:
If $A \in \mathscr F$ then $\P(A \mid \mathscr G)$ is characterized (up to equivalence) by the following properties
1. $\P(A \mid \mathscr G)$ is measurable with respect to $\mathscr G$.
2. If $B \in \mathscr G$ then $\E[\P(A \mid \mathscr G); B] = \P(A \cap B)$
Proof
For part (b), note that $\E[\bs{1}_B \P(A \mid \mathscr G)] = \E[\bs{1}_B \E(\bs{1}_A \mid \mathscr G)] = \E(\bs{1}_A \bs{1}_B) = \E(\bs{1}_{A \cap B}) = \P(A \cap B)$
If $A \in \mathscr F$ then $\P(A \mid \mathscr G)$ is characterized (up to equivalence) by the following properties
1. $\P(A \mid \mathscr G)$ is measurable with respect to $\mathscr G$.
2. If $U$ is measurable with respect to $\mathscr G$ and $\E(|U|) \lt \infty$ then $\E[U \P(A \mid \mathscr G)] = \E(U; A)$
The properties above for conditional expected value, of course, have special cases for conditional probability. In particular, we can compute the probability of an event by conditioning on a $\sigma$-algebra:
If $A \in \mathscr F$ then $\P(A) = \E[\P(A \mid \mathscr G)]$.
Proof
This is a direct result of the mean property since $\E(\bs{1}_A) = \P(A)$.
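Continuing the finite-partition sketch from the previous subsection, here is a numerical illustration of this identity, which is exactly the law of total probability; the event $A$ is an arbitrary choice.

```python
import numpy as np

probs = np.full(6, 1 / 6)
partition = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]

# Indicator of the (arbitrary) event A = {1, 3, 5}
A = np.isin(np.arange(6), [1, 3, 5]).astype(float)

cond_prob = np.empty(6)
for cell in partition:
    # P(A | G) on a cell B of the partition is P(A and B) / P(B)
    cond_prob[cell] = np.sum(A[cell] * probs[cell]) / np.sum(probs[cell])

# P(A) = E[P(A | G)]
print(np.sum(cond_prob * probs), np.sum(A * probs))   # both 0.5
```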
Again, the last theorem is often a good way to compute $\P(A)$ when we know the conditional probability of $A$ given $\mathscr G$. This is a very compact and elegant version of the law of total probability given first in the section on Conditional Probability in the chapter on Probability Spaces and later in the section on Discrete Distributions in the Chapter on Distributions. The following theorem gives the conditional version of the axioms of probability.
The following properties hold (as usual, modulo equivalence):
1. $\P(A \mid \mathscr G) \ge 0$ for every $A \in \mathscr F$
2. $\P(\Omega \mid \mathscr G) = 1$
3. If $\{A_i: i \in I\}$ is a countable disjoint subset of $\mathscr F$ then $\P(\bigcup_{i \in I} A_i \bigm| \mathscr G) = \sum_{i \in I} \P(A_i \mid \mathscr G)$
Proof
1. This is a direct consequence of (6).
2. This is trivial since $\bs{1}_\Omega = 1$.
3. We show that the right side satisfies the conditions in (11) that define the left side. Note that $\sum_{i \in I} \P(A_i \mid \mathscr G)$ is $\mathscr G$-measurable since each term in the sum has this property. Let $B \in \mathscr G$. Then $\E\left[\sum_{i \in I} \P(A_i \mid \mathscr G); B\right] = \sum_{i \in I} \E[\P(A_i \mid \mathscr G); B] = \sum_{i \in I} \P(A_i \cap B) = \P\left(B \cap \bigcup_{i \in I} A_i\right)$
From the last result, it follows that other standard probability rules hold for conditional probability given $\mathscr G$ (as always, modulo equivalence). These results include
• the complement rule
• the increasing property
• Boole's inequality
• Bonferroni's inequality
• the inclusion-exclusion laws
However, it is not correct to state that $A \mapsto \P(A \mid \mathscr G)$ is a probability measure, because the conditional probabilities are only defined up to equivalence, and so the mapping does not make sense. We would have to specify a particular version of $\P(A \mid \mathscr G)$ for each $A \in \mathscr F$ for the mapping to make sense. Even if we do this, the mapping may not define a probability measure. In part (c), the left and right sides are random variables and the equation defines an event that has probability 1. However this event depends on the collection $\{A_i: i \in I\}$. In general, there will be uncountably many such collections in $\mathscr F$, and the intersection of all of the corresponding events may well have probability less than 1 (if it's measurable at all). It turns out that if the underlying probability space $(\Omega, \mathscr F, \P)$ is sufficiently nice (and most probability spaces that arise in applications are nice), then there does in fact exist a regular conditional probability. That is, for each $A \in \mathscr F$, there exists a random variable $\P(A \mid \mathscr G)$ satisfying the conditions in (12) and such that with probability 1, $A \mapsto \P(A \mid \mathscr G)$ is a probability measure.
The following theorem gives a version of Bayes' theorem, named for the inimitable Thomas Bayes.
Suppose that $A \in \mathscr G$ and $B \in \mathscr F$. Then $\P(A \mid B) = \frac{\E[\P(B \mid \mathscr G); A]}{\E[\P(B \mid \mathscr G)]}$
Proof
The proof is absolutely trivial. By definition of conditional probability given $\mathscr G$, the numerator is $\P(A \cap B)$ and the denominator is $\P(B)$. Nonetheless, Bayes' theorem is useful in settings where the expected values in the numerator and denominator can be computed directly.
Basic Examples
The purpose of this discussion is to tie the general notions of conditional expected value that we are studying here to the more elementary concepts that you have seen before. Suppose that $A$ is an event (that is, a member of $\mathscr F$) with $\P(A) \gt 0$. If $B$ is another event, then of course, the conditional probability of $B$ given $A$ is $\P(B \mid A) = \frac{\P(A \cap B)}{\P(A)}$ If $X$ is a random variable then the conditional distribution of $X$ given $A$ is the probability measure on $\R$ given by $R \mapsto \P(X \in R \mid A) = \frac{\P(\{X \in R\} \cap A)}{\P(A)} \text{ for measurable } R \subseteq \R$ If $\E(|X|) \lt \infty$ then the conditional expected value of $X$ given $A$, denoted $\E(X \mid A)$, is simply the mean of this conditional distribution.
Suppose now that $\mathscr{A} = \{A_i: i \in I\}$ is a countable partition of the sample space $\Omega$ into events with positive probability. To review the jargon, $\mathscr A \subseteq \mathscr F$; the index set $I$ is countable; $A_i \cap A_j = \emptyset$ for distinct $i, \, j \in I$; $\bigcup_{i \in I} A_i = \Omega$; and $\P(A_i) \gt 0$ for $i \in I$. Let $\mathscr G = \sigma(\mathscr{A})$, the $\sigma$-algebra generated by $\mathscr{A}$. The elements of $\mathscr G$ are of the form $\bigcup_{j \in J} A_j$ for $J \subseteq I$. Moreover, the random variables that are measurable with respect to $\mathscr G$ are precisely the variables that are constant on $A_i$ for each $i \in I$. The $\sigma$-algebra $\mathscr G$ is said to be countably generated.
If $B \in \mathscr F$ then $\P(B \mid \mathscr G)$ is the random variable whose value on $A_i$ is $\P(B \mid A_i)$ for each $i \in I$.
Proof
Let $U$ denote the random variable that takes the value $\P(B \mid A_i)$ on $A_i$ for each $i \in I$. First, $U$ is measurable with respect to $\mathscr G$ since $U$ is constant on $A_i$ for each $i \in I$. So we just need to show that $\E(U; A) = \P(A \cap B)$ for each $A \in \mathscr G$. Thus, let $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$. Then $\E(U; A) = \sum_{j \in J} \E(U; A_j) = \sum_{j \in J} \P(B \mid A_j) \P(A_j) = \P(A \cap B)$
In this setting, the version of Bayes' theorem in (15) reduces to the usual elementary formulation: For $i \in I$, $\E[\P(B \mid \mathscr G); A_i] = \P(A_i) \P(B \mid A_i)$ and $\E[\P(B \mid \mathscr G)] = \sum_{j \in I} \P(A_j) \P(B \mid A_j)$. Hence $\P(A_i \mid B) = \frac{\P(A_i) \P(B \mid A_i)}{\sum_{j \in I} \P(A_j) \P(B \mid A_j)}$
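The elementary formulation lends itself to direct computation. The following sketch (plain Python; the prior and likelihood values are hypothetical, chosen only for illustration) computes the denominator $\E[\P(B \mid \mathscr G)] = \P(B)$ and the posterior probabilities $\P(A_i \mid B)$ for a three-part partition.

```python
prior = [0.5, 0.3, 0.2]          # P(A_i), illustrative values
likelihood = [0.1, 0.4, 0.8]     # P(B | A_i), illustrative values

p_b = sum(p * q for p, q in zip(prior, likelihood))          # E[P(B | G)] = P(B)
posterior = [p * q / p_b for p, q in zip(prior, likelihood)] # P(A_i | B)
print(p_b, posterior)            # 0.33 and [0.1515..., 0.3636..., 0.4848...]
```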
If $X$ is a random variable with $\E(|X|) \lt \infty$, then $\E(X \mid \mathscr G)$ is the random variable whose value on $A_i$ is $\E(X \mid A_i)$ for each $i \in I$.
Proof
Let $U$ denote the random variable that takes the value $\E(X \mid A_i)$ on $A_i$ for each $i \in I$. First, $U$ is measurable with respect to $\mathscr G$ since $U$ is constant on $A_i$ for each $i \in I$. So we just need to show that $\E(U; A) = \E(X; A)$ for each $A \in \mathscr G$. Thus, let $A = \bigcup_{j \in J} A_j$ where $J \subseteq I$. Then $\E(U; A) = \sum_{j \in J} \E(U; A_j) = \sum_{j \in J} \E(X \mid A_j) \P(A_j) = \E(X; A)$
The previous examples would apply to $\mathscr G = \sigma(Y)$ if $Y$ is a discrete random variable taking values in a countable set $T$. In this case, the partition is simply $\mathscr{A} = \{ \{Y = y\}: y \in T\}$. On the other hand, suppose that $Y$ is a random variable taking values in a general set $T$ with $\sigma$-algebra $\mathscr{T}$. The real-valued random variables that are measurable with respect to $\mathscr G = \sigma(Y)$ are (up to equivalence) the measurable, real-valued functions of $Y$.
Specializing further, suppose that $X$ takes values in $S \subseteq \R$, $Y$ takes values in $T \subseteq \R^n$ (where $S$ and $T$ are Lebesgue measurable) and that $(X, Y)$ has a joint continuous distribution with probability density function $f$. Then $Y$ has probability density function $h$ given by $h(y) = \int_S f(x, y) \, dx, \quad y \in T$ Assume that $h(y) \gt 0$ for $y \in T$. Then for $y \in T$, a conditional probability density function of $X$ given $Y = y$ is defined by $g(x \mid y) =\frac{f(x, y)}{h(y)}, \quad x \in S$ This is precisely the setting of our elementary discussion of conditional expected value. If $\E(|X|) \lt \infty$ then we usually write $\E(X \mid Y)$ instead of the clunkier $\E[X \mid \sigma(Y)]$.
In the setting above, suppose that $\E(|X|) \lt \infty$. Then $\E(X \mid Y) = \int_S x g(x \mid Y) \, dx$
Proof
Once again, we show that the integral on the right satisfies the properties in the definition for $\E(X \mid Y) = \E[X \mid \sigma(Y)]$. First, $y \mapsto \int_S x g(x \mid y) \, dx$ is measurable as a function from $T$ into $\R$ and hence the random variable $\int_S x g(x \mid Y) \, dx$ is a measurable function of $Y$ and so is measurable with respect to $\sigma(Y)$. Next suppose that $B \in \sigma(Y)$. Then $B = \{Y \in A\}$ for some measurable $A \subseteq T$. Then \begin{align*} \E\left[\int_S x g(x \mid Y) \, dx; B\right] & = \E\left[\int_S x g(x \mid Y) \, dx; Y \in A\right] = \int_A \left[\int_S x \frac{f(x, y)}{h(y)} \, dx\right] h(y) \, dy \ & = \int_{S \times A} x f(x, y) \, d(x, y) = \E(X; Y \in A) = \E(X; B) \end{align*}
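For a concrete check of this formula, consider the illustrative joint density $f(x, y) = x + y$ on the unit square (the same density used in the computational exercises below). Then $h(y) = y + 1/2$, and a direct calculation gives $\E(X \mid Y = y) = (1/3 + y/2)/(y + 1/2)$. The sketch below (Python, assuming NumPy and SciPy are available) verifies this by numerical integration.

```python
from scipy import integrate

def cond_mean(y):
    # numerically compute E(X | Y = y) as the integral of x f(x, y) / h(y) over [0, 1]
    h = y + 0.5                                      # marginal density of Y
    val, _ = integrate.quad(lambda x: x * (x + y) / h, 0, 1)
    return val

for y in (0.0, 0.5, 1.0):
    print(y, cond_mean(y), (1/3 + y/2) / (y + 0.5))  # the two columns agree
```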
Best Predictor
In our elementary treatment of conditional expected value, we showed that the conditional expected value of a real-valued random variable $X$ given a general random variable $Y$ is the best predictor of $X$, in the least squares sense, among all real-valued functions of $Y$. A more careful statement is that $\E(X \mid Y)$ is the best predictor of $X$ among all real-valued random variables that are measurable with respect to $\sigma(Y)$. Thus, it should come as no surprise that if $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$, then $\E(X \mid \mathscr G)$ is the best predictor of $X$, in the least squares sense, among all real-valued random variables that are measurable with respect to $\mathscr G$. We will show that this is indeed the case in this subsection. The proofs are very similar to the ones given in the elementary section. For the rest of this discussion, we assume that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$ and that all random variables mentioned are real valued.
Suppose that $X$ and $U$ are random variables with $\E(|X|) \lt \infty$ and $\E(|X U|) \lt \infty$ and that $U$ is measurable with respect to $\mathscr G$. Then $X - \E(X \mid \mathscr G)$ and $U$ are uncorrelated.
Proof
Note that $X - \E(X \mid \mathscr G)$ has mean 0 by the mean property. Using the properties that characterize $\E(X \mid \mathscr G)$ we have $\cov[X - \E(X \mid \mathscr G), U] = \E(U [X - \E(X \mid \mathscr G)]) = \E(U X) - \E[U \E(X \mid \mathscr G)] = \E(U X) - \E(U X) = 0$
The next result is the main one: $\E(X \mid \mathscr G)$ is closer to $X$ in the mean square sense than any other random variable that is measurable with respect to $\mathscr G$. Thus, if $\mathscr G$ represents the information that we have, then $\E(X \mid \mathscr G)$ is the best we can do in estimating $X$.
Suppose that $X$ and $U$ are random variables with $\E(X^2) \lt \infty$ and $\E(U^2) \lt \infty$ and that $U$ is measurable with respect to $\mathscr G$. Then
1. $\E([X - \E(X \mid \mathscr G)]^2) \le \E[(X - U)^2]$.
2. Equality holds if and only if $\P[U = \E(X \mid \mathscr G)] = 1$, so $U$ and $\E(X \mid \mathscr G)$ are equivalent.
Proof
1. Note that \begin{align} \E[(X - U)^2] & = \E([X - \E(X \mid \mathscr G) + \E(X \mid \mathscr G) - U]^2) \ & = \E([X - \E(X \mid \mathscr G)]^2) + 2 \E([X - \E(X \mid \mathscr G)][\E(X \mid \mathscr G) - U]) + \E([\E(X \mid \mathscr G) - U]^2 ) \end{align} By the mean property, $X - \E(X \mid \mathscr G)$ has mean 0, so the middle term in the displayed equation is $2 \cov[X - \E(X \mid \mathscr G), \E(X \mid \mathscr G) - U]$. But $\E(X \mid \mathscr G) - U$ is $\mathscr G$-measurable and hence this covariance is 0 by the uncorrelated property. Therefore $\E[(X - U)^2] = \E([X - \E(X \mid \mathscr G)]^2) + \E([\E(X \mid \mathscr G) - U]^2 ) \ge \E([X - \E(X \mid \mathscr G)]^2)$
2. Equality holds if and only if $\E([\E(X \mid \mathscr G) - U]^2 ) = 0$ if and only if $\P[U = \E(X \mid \mathscr G)] = 1$
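A simulation makes the best-predictor property tangible. The sketch below (Python with NumPy) uses an illustrative model, not one from the text: $Y$ is uniform on $\{0, 1, 2\}$ and $X = Y^2 + Z$ with $Z$ standard normal, so that $\E(X \mid Y) = Y^2$. Any other function of $Y$ has a strictly larger mean square error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
y = rng.integers(0, 3, size=n)
x = y.astype(float) ** 2 + rng.standard_normal(n)

mse_best = np.mean((x - y ** 2) ** 2)           # predictor E(X | Y) = Y^2
mse_other = np.mean((x - (2 * y - 0.5)) ** 2)   # some other function of Y
print(mse_best, mse_other)                      # about 1.0 versus about 1.25
```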
Conditional Variance
Once again, we assume that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$ and that all random variables mentioned are real valued, unless otherwise noted. It's natural to define the conditional variance of a random variable given $\mathscr G$ in the same way as ordinary variance, but with all expected values conditioned on $\mathscr G$.
Suppose that $X$ is a random variable with $\E(X^2) \lt \infty$. The conditional variance of $X$ given $\mathscr G$ is $\var(X \mid \mathscr G) = \E\left([X - \E(X \mid \mathscr G)]^2 \biggm| \mathscr G\right)$
Like all conditional expected values relative to $\mathscr G$, $\var(X \mid \mathscr G)$ is a random variable that is measurable with respect to $\mathscr G$ and is unique up to equivalence. The first property is analogous to the computational formula for ordinary variance.
Suppose again that $X$ is a random variable with $\E(X^2) \lt \infty$. Then $\var(X \mid \mathscr G) = \E(X^2 \mid \mathscr G) - [\E(X \mid \mathscr G)]^2$
Proof
Expanding the square in the definition and using basic properties of conditional expectation, we have
\begin{align} \var(X \mid \mathscr G) & = \E(X^2 - 2 X \E(X \mid \mathscr G) + [\E(X \mid \mathscr G)]^2 \biggm| \mathscr G ) = \E(X^2 \mid \mathscr G) - 2 \E[X \E(X \mid \mathscr G) \mid \mathscr G] + \E([\E(X \mid \mathscr G)]^2 \mid \mathscr G) \ & = \E(X^2 \mid \mathscr G) - 2 \E(X \mid \mathscr G) \E(X \mid \mathscr G) + [\E(X \mid \mathscr G)]^2 = \E(X^2 \mid \mathscr G) - [\E(X \mid \mathscr G)]^2 \end{align}
Next is a formula for the ordinary variance in terms of conditional variance and expected value.
Suppose again that $X$ is a random variable with $\E(X^2) \lt \infty$. Then $\var(X) = \E[\var(X \mid \mathscr G)] + \var[\E(X \mid \mathscr G)]$
Proof
From the previous theorem and properties of conditional expected value we have $\E[\var(X \mid \mathscr G)] = \E(X^2) - \E([\E(X \mid \mathscr G)]^2)$. But $\E(X^2) = \var(X) + [\E(X)]^2$ and similarly, $\E([\E(X \mid \mathscr G)]^2) = \var[\E(X \mid \mathscr G)] + (\E[\E(X \mid \mathscr G)])^2$. But also, $\E[\E(X \mid \mathscr G)] = \E(X)$ so substituting we get $\E[\var(X \mid \mathscr G)] = \var(X) - \var[\E(X \mid \mathscr G)]$.
So the variance of $X$ is the expected conditional variance plus the variance of the conditional expected value. This result is often a good way to compute $\var(X)$ when we know the conditional distribution of $X$ given $\mathscr G$. In turn, this property leads to a formula for the mean square error when $\E(X \mid \mathscr G)$ is thought of as a predictor of $X$.
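Here is a quick numerical check of the variance decomposition (Python with NumPy), using an illustrative two-component normal mixture: $Y$ is Bernoulli(1/2), and given $Y$, $X$ is normal with mean 0 and variance 1, or mean 3 and variance 4. Then $\E[\var(X \mid Y)] = (1 + 4)/2 = 2.5$ and $\var[\E(X \mid Y)] = 2.25$, so $\var(X) = 4.75$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
y = rng.integers(0, 2, size=n)
x = np.where(y == 0, rng.normal(0.0, 1.0, n), rng.normal(3.0, 2.0, n))
print(np.var(x), 2.5 + 2.25)    # both approximately 4.75
```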
Suppose again that $X$ is a random variable with $\E(X^2) \lt \infty$. Then $\E([X - \E(X \mid \mathscr G)]^2) = \var(X) - \var[\E(X \mid \mathscr G)]$
Proof
From the definition and from the mean property and variance formula, $\E([X - \E(X \mid \mathscr G)]^2) = \E[\var(X \mid \mathscr G)] = \var(X) - \var[\E(X \mid \mathscr G)]$
Let us return to the study of predictors of the real-valued random variable $X$, and compare them in terms of mean square error.
Suppose again that $X$ is a random variable with $\E(X^2) \lt \infty$.
1. The best constant predictor of $X$ is $\E(X)$ with mean square error $\var(X)$.
2. If $Y$ is another random variable with $\E(Y^2) \lt \infty$, then the best predictor of $X$ among linear functions of $Y$ is $L(X \mid Y) = \E(X) + \frac{\cov(X,Y)}{\var(Y)}[Y - \E(Y)]$ with mean square error $\var(X)[1 - \cor^2(X,Y)]$.
3. If $Y$ is a (general) random variable, then the best predictor of $X$ among all real-valued functions of $Y$ with finite variance is $\E(X \mid Y)$ with mean square error $\var(X) - \var[\E(X \mid Y)]$.
4. If $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$, then the best predictor of $X$ among random variables with finite variance that are measurable with respect to $\mathscr G$ is $\E(X \mid \mathscr G)$ with mean square error $\var(X) - \var[\E(X \mid \mathscr G)]$.
Of course, (a) is a special case of (d) with $\mathscr G = \{\emptyset, \Omega\}$ and (c) is a special case of (d) with $\mathscr G = \sigma(Y)$. Only (b), the linear case, cannot be interpreted in terms of conditioning with respect to a $\sigma$-algebra.
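The following simulation sketch (Python with NumPy) compares the mean square errors in parts (a), (b), and (c) under an illustrative nonlinear model: $Y$ standard normal and $X = Y^2 + Z$ with $Z$ standard normal. Here $\cov(X, Y) = \E(Y^3) = 0$, so the best linear predictor collapses to the constant $\E(X)$, while $\E(X \mid Y) = Y^2$ does much better.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
y = rng.standard_normal(n)
x = y ** 2 + rng.standard_normal(n)

mse_constant = np.mean((x - x.mean()) ** 2)              # var(X), about 3
b = np.cov(x, y)[0, 1] / np.var(y)                       # slope of L(X | Y), about 0
mse_linear = np.mean((x - (x.mean() + b * (y - y.mean()))) ** 2)
mse_conditional = np.mean((x - y ** 2) ** 2)             # var(Z) = 1
print(mse_constant, mse_linear, mse_conditional)
```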
Conditional Covariance
Suppose again that $\mathscr G$ is a sub $\sigma$-algebra of $\mathscr F$. The conditional covariance of two random variables is defined like the ordinary covariance, but with all expected values conditioned on $\mathscr G$.
Suppose that $X$ and $Y$ are random variables with $\E(X^2) \lt \infty$ and $\E(Y^2) \lt \infty$. The conditional covariance of $X$ and $Y$ given $\mathscr G$ is defined as $\cov(X, Y \mid \mathscr G) = \E\left([X - \E(X \mid \mathscr G)] [Y - \E(Y \mid \mathscr G)] \biggm| \mathscr G \right)$
So $\cov(X, Y \mid \mathscr G)$ is a random variable that is measurable with respect to $\mathscr G$ and is unique up to equivalence. As should be the case, conditional covariance generalizes conditional variance.
Suppose that $X$ is a random variable with $\E(X^2) \lt \infty$. Then $\cov(X, X \mid \mathscr G) = \var(X \mid \mathscr G)$.
Proof
This follows immediately from the two definitions.
Our next result is a computational formula that is analogous to the one for standard covariance—the covariance is the mean of the product minus the product of the means, but now with all expected values conditioned on $\mathscr G$:
Suppose again that $X$ and $Y$ are random variables with $\E(X^2) \lt \infty$ and $\E(Y^2) \lt \infty$. Then $\cov(X, Y \mid \mathscr G) = \E(X Y \mid \mathscr G) - \E(X \mid \mathscr G) \E(Y \mid \mathscr G)$
Proof
Expanding the product in the definition and using basic properties of conditional expectation, we have
\begin{align} \cov(X, Y \mid \mathscr G) & = \E\left(X Y - X \E(Y \mid \mathscr G) - Y \E(X \mid \mathscr G) + \E(X \mid \mathscr G) \E(Y \mid \mathscr G) \biggm| \mathscr G \right) = \E(X Y \mid \mathscr G) - \E\left[X \E(Y \mid \mathscr G) \mid \mathscr G\right] - \E\left[Y \E(X \mid \mathscr G) \mid \mathscr G\right] + \E\left[\E(X \mid \mathscr G) \E(Y \mid \mathscr G) \mid \mathscr G\right] \ & = \E\left(X Y \mid \mathscr G\right) - \E(X \mid \mathscr G) \E(Y \mid \mathscr G) - \E(X \mid \mathscr G) \E(Y \mid \mathscr G) + \E(X \mid \mathscr G) \E(Y \mid \mathscr G) = \E\left(X Y \mid \mathscr G\right) - \E(X \mid \mathscr G) \E(Y \mid \mathscr G) \end{align}
Our next result shows how to compute the ordinary covariance of $X$ and $Y$ by conditioning on $\mathscr G$.
Suppose again that $X$ and $Y$ are random variables with $\E(X^2) \lt \infty$ and $\E(Y^2) \lt \infty$. Then $\cov(X, Y) = \E\left[\cov(X, Y \mid \mathscr G)\right] + \cov\left[\E(X \mid \mathscr G), \E(Y \mid \mathscr G) \right]$
Proof
From (29) and properties of conditional expected value we have $\E\left[\cov(X, Y \mid \mathscr G)\right] = \E(X Y) - \E\left[\E(X\mid \mathscr G) \E(Y \mid \mathscr G) \right]$ But $\E(X Y) = \cov(X, Y) + \E(X) \E(Y)$ and similarly, $\E\left[\E(X \mid \mathscr G) \E(Y \mid \mathscr G)\right] = \cov\left[\E(X \mid \mathscr G), \E(Y \mid \mathscr G)\right] + \E[\E(X\mid \mathscr G)] \E[\E(Y \mid \mathscr G)]$ But also, $\E\left[\E(X \mid \mathscr G)\right] = \E(X)$ and $\E[\E(Y \mid \mathscr G)] = \E(Y)$ so substituting we get $\E\left[\cov(X, Y \mid \mathscr G)\right] = \cov(X, Y) - \cov\left[\E(X \mid \mathscr G), \E(Y \mid \mathscr G)\right]$
Thus, the covariance of $X$ and $Y$ is the expected conditional covariance plus the covariance of the conditional expected values. This result is often a good way to compute $\cov(X, Y)$ when we know the conditional distribution of $(X, Y)$ given $\mathscr G$.
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$
Basic Theory
Many of the concepts in this chapter have elegant interpretations if we think of real-valued random variables as vectors in a vector space. In particular, variance and higher moments are related to the concept of norm and distance, while covariance is related to inner product. These connections can help unify and illuminate some of the ideas in the chapter from a different point of view. Of course, real-valued random variables are simply measurable, real-valued functions defined on the sample space, so much of the discussion in this section is a special case of our discussion of function spaces in the chapter on Distributions, but recast in the notation of probability.
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr{F}, \P)$. Thus, $\Omega$ is the set of outcomes, $\mathscr{F}$ is the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. Our basic vector space $\mathscr V$ consists of all real-valued random variables defined on $(\Omega, \mathscr{F}, \P)$ (that is, defined for the experiment). Recall that random variables $X_1$ and $X_2$ are equivalent if $\P(X_1 = X_2) = 1$, in which case we write $X_1 \equiv X_2$. We consider two such random variables as the same vector, so that technically, our vector space consists of equivalence classes under this equivalence relation. The addition operator corresponds to the usual addition of two real-valued random variables, and the operation of scalar multiplication corresponds to the usual multiplication of a real-valued random variable by a real (non-random) number. These operations are compatible with the equivalence relation in the sense that if $X_1 \equiv X_2$ and $Y_1 \equiv Y_2$ then $X_1 + Y_1 \equiv X_2 + Y_2$ and $c X_1 \equiv c X_2$ for $c \in \R$. In short, the vector space $\mathscr V$ is well-defined.
Norm
Suppose that $k \in [1, \infty)$. The $k$ norm of $X \in \mathscr V$ is defined by
$\|X\|_k = \left[\E\left(\left|X\right|^k\right)\right]^{1 / k}$
Thus, $\|X\|_k$ is a measure of the size of $X$ in a certain sense, and of course it's possible that $\|X\|_k = \infty$. The following theorems establish the fundamental properties. The first is the positive property.
Suppose again that $k \in [1, \infty)$. For $X \in \mathscr V$,
1. $\|X\|_k \ge 0$
2. $\|X\|_k = 0$ if and only if $\P(X = 0) = 1$ (so that $X \equiv 0$).
Proof
These results follow from the basic inequality properties of expected value. First $\left|X\right|^k \ge 0$ with probability 1, so $\E\left(\left|X\right|^k\right) \ge 0$. In addition, $\E\left(\left|X\right|^k\right) = 0$ if and only if $\P(X = 0) = 1$.
The next result is the scaling property.
Suppose again that $k \in [1, \infty)$. Then $\|c X\|_k = \left|c\right| \, \|X\|_k$ for $X \in \mathscr V$ and $c \in \R$.
Proof
$\| c X \|_k = \left[\E\left(\left|c X\right|^k\right)\right]^{1 / k} = \left[\E\left(\left|c\right|^k \left|X\right|^k\right)\right]^{1/k} = \left[\left|c\right|^k \E\left(\left|X\right|^k\right)\right]^{1/k} = \left|c\right| \left[\E\left(\left|X\right|^k\right)\right]^{1/k} = \left|c\right| \|X\|_k$
The next result is Minkowski's inequality, named for Hermann Minkowski, and also known as the triangle inequality.
Suppose again that $k \in [1, \infty)$. Then $\|X + Y\|_k \le \|X\|_k + \|Y\|_k$ for $X, \, Y \in \mathscr V$.
Proof
The first quadrant $S = \left\{(x, y) \in \R^2: x \ge 0, \; y \ge 0\right\}$ is a convex set and $g(x, y) = \left(x ^{1/k} + y^{1/k}\right)^k$ is concave on $S$. From Jensen's inequality, if $U$ and $V$ are nonnegative random variables, then $\E\left[(U^{1/k} + V^{1/k})^k\right] \le \left(\left[\E(U)\right]^{1/k} + \left[\E(V)\right]^{1/k}\right)^k$ Letting $U = \left|X\right|^k$ and $V = \left|Y\right|^k$ and simplifying gives the result. To show that $g$ really is concave on $S$, we can compute the second partial derivatives. Let $h(x, y) = x^{1/k} + y^{1/k}$ so that $g = h^k$. Then \begin{align} g_{xx} & = \frac{k-1}{k} h^{k-2} x^{1/k - 2}\left(x^{1/k} - h\right) \ g_{yy} & = \frac{k-1}{k} h^{k-2} y^{1/k - 2}\left(y^{1/k} - h\right) \ g_{xy} & = \frac{k-1}{k} h^{k-2} x^{1/k - 1} y^{1/k - 1} \end{align} Clearly $h(x, y) \ge x^{1/k}$ and $h(x, y) \ge y^{1/k}$ for $x \ge 0$ and $y \ge 0$, so $g_{xx}$ and $g_{yy}$, the diagonal entries of the second derivative matrix, are nonpositive on $S$. A little algebra shows that the determinant of the second derivative matrix $g_{xx} g_{yy} - g_{xy}^2 = 0$ on $S$. Thus, the second derivative matrix of $g$ is negative semi-definite.
It follows from the last three results that the set of random variables (again, modulo equivalence) with finite $k$ norm forms a subspace of our parent vector space $\mathscr V$, and that the $k$ norm really is a norm on this vector space.
For $k \in [1, \infty)$, $\mathscr L_k$ denotes the vector space of $X \in \mathscr V$ with $\|X\|_k \lt \infty$, and with norm $\| \cdot \|_k$.
In analysis, $p$ is often used as the index rather than $k$ as we have used here, but $p$ seems too much like a probability, so we have broken with tradition on this point. The $\mathscr L$ is in honor of Henri Lebesgue, who developed much of this theory. Sometimes, when we need to indicate the dependence on the underlying $\sigma$-algebra $\mathscr{F}$, we write $\mathscr L_k(\mathscr{F})$. Our next result is Lyapunov's inequality, named for Aleksandr Lyapunov. This inequality shows that the $k$-norm of a random variable is increasing in $k$.
Suppose that $j, \, k \in [1, \infty)$ with $j \le k$. Then $\|X\|_j \le \|X\|_k$ for $X \in \mathscr V$.
Proof
Note that $S = \{x \in \R: x \ge 0\}$ is convex and $g(x) = x^{k/j}$ is convex on $S$. From Jensen's inequality, if $U$ is a nonnegative random variable then $\left[\E(U)\right]^{k/j} \le \E\left(U^{k/j}\right)$. Letting $U = \left|X\right|^j$ and simplifying gives the result.
Lyapunov's inequality shows that if $1 \le j \le k$ and $\|X\|_k \lt \infty$ then $\|X\|_j \lt \infty$. Thus, $\mathscr L_k$ is a subspace of $\mathscr L_j$.
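Lyapunov's inequality is easy to see numerically. The sketch below (Python with NumPy) computes sample $k$-norms of an exponential variable, an illustrative choice for which the exact $k$-norm is $[\Gamma(k + 1)]^{1/k} = (k!)^{1/k}$, an increasing function of $k$.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(4)
x = rng.exponential(size=1_000_000)
for k in (1, 2, 3, 4):
    # sample k-norm versus the exact value Gamma(k + 1)^(1/k)
    print(k, np.mean(np.abs(x) ** k) ** (1 / k), gamma(k + 1) ** (1 / k))
```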
Metric
The $k$ norm, like any norm on a vector space, can be used to define a metric, or distance function; we simply compute the norm of the difference between two vectors.
For $k \in [1, \infty)$, the $k$ distance (or $k$ metric) between $X, \, Y \in \mathscr V$ is defined by $d_k(X, Y) = \|X - Y\|_k = \left[\E\left(\left|X - Y\right|^k\right)\right]^{1/k}$
The following properties are analogous to the norm properties above (and thus very little additional work is required for the proofs). These properties show that the $k$ metric really is a metric on $\mathscr L_k$ (as always, modulo equivalence). The first is the positive property.
Suppose again that $k \in [1, \infty)$ and $X, \; Y \in \mathscr V$. Then
1. $d_k(X, Y) \ge 0$
2. $d_k(X, Y) = 0$ if and only if $\P(X = Y) = 1$ (so that $X \equiv Y$).
Proof
These results follow directly from the positive property.
Next is the obvious symmetry property:
$d_k(X, Y) = d_k(Y, X)$ for $X, \; Y \in \mathscr V$.
Next is the distance version of the triangle inequality.
$d_k(X, Z) \le d_k(X, Y) + d_k(Y, Z)$ for $X, \; Y, \; Z \in \mathscr V$
Proof
From Minkowski's inequality, $d_k(X, Z) = \|X - Z\|_k = \|(X - Y) + (Y - Z) \|_k \le \|X - Y\|_k + \|Y - Z\|_k = d_k(X, Y) + d_k(Y, Z)$
The last three properties mean that $d_k$ is indeed a metric on $\mathscr L_k$ for $k \ge 1$. In particular, note that the standard deviation is simply the 2-distance from $X$ to its mean $\mu = \E(X)$: $\sd(X) = d_2(X, \mu) = \|X - \mu\|_2 = \sqrt{\E\left[(X - \mu)^2\right]}$ and the variance is the square of this. More generally, the $k$th moment of $X$ about $a$ is simply the $k$th power of the $k$-distance from $X$ to $a$. The 2-distance is especially important for reasons that will become clear below, in the discussion of inner product. This distance is also called the root mean square distance.
Center and Spread Revisited
Measures of center and measures of spread are best thought of together, in the context of a measure of distance. For a real-valued random variable $X$, we first try to find the constants $t \in \R$ that are closest to $X$, as measured by the given distance; any such $t$ is a measure of center relative to the distance. The minimum distance itself is the corresponding measure of spread.
Let us apply this procedure to the 2-distance.
For $X \in \mathscr L_2$, define the root mean square error function by $d_2(X, t) = \|X - t\|_2 = \sqrt{\E\left[(X - t)^2\right]}, \quad t \in \R$
For $X \in \mathscr L_2$, $d_2(X, t)$ is minimized when $t = \E(X)$ and the minimum value is $\sd(X)$.
Proof
Note that the minimum value of $d_2(X, t)$ occurs at the same points as the minimum value of $d_2^2(X, t) = \E\left[(X - t)^2\right]$ (this is the mean square error function). Expanding and taking expected values term by term gives $\E\left[(X - t)^2\right] = \E\left(X^2\right) - 2 t \E(X) + t^2$ This is a quadratic function of $t$ and hence the graph is a parabola opening upward. The minimum occurs at $t = \E(X)$, and the minimum value is $\var(X)$. Hence the minimum value of $t \mapsto d_2(X, t)$ also occurs at $t = \E(X)$ and the minimum value is $\sd(X)$.
We have seen this computation several times before. The best constant predictor of $X$ is $\E(X)$, with mean square error $\var(X)$. The physical interpretation of this result is that the moment of inertia of the mass distribution of $X$ about $t$ is minimized when $t = \mu$, the center of mass. Next, let us apply our procedure to the 1-distance.
For $X \in \mathscr L_1$, define the mean absolute error function by $d_1(X, t) = \|X - t\|_1 = \E\left[\left|X - t\right|\right], \quad t \in \R$
We will show that $d_1(X, t)$ is minimized when $t$ is any median of $X$. (Recall that the set of medians of $X$ forms a closed, bounded interval.) We start with a discrete case, because it's easier and has special interest.
Suppose that $X \in \mathscr L_1$ has a discrete distribution with values in a finite set $S \subseteq \R$. Then $d_1(X, t)$ is minimized when $t$ is any median of $X$.
Proof
Note first that $\E\left(\left|X - t\right|\right) = \E(t - X, \, X \le t) + \E(X - t, \, X \gt t)$. Hence $\E\left(\left|X - t\right|\right) = a_t \, t + b_t$, where $a_t = 2 \, \P(X \le t) - 1$ and where $b_t = \E(X) - 2 \, \E(X, \, X \le t)$. Note that $\E\left(\left|X - t\right|\right)$ is a continuous, piecewise linear function of $t$, with corners at the values in $S$. That is, the function is a linear spline. Let $m$ be the smallest median of $X$. If $t \lt m$ and $t \notin S$, then the slope of the linear piece at $t$ is negative. Let $M$ be the largest median of $X$. If $t \gt M$ and $t \notin S$, then the slope of the linear piece at $t$ is positive. If $t \in (m, M)$ then the slope of the linear piece at $t$ is 0. Thus $\E\left(\left|X - t\right|\right)$ is minimized for every $t$ in the median interval $[m, M]$.
The last result shows that mean absolute error has a couple of basic deficiencies as a measure of error:
• The function may not be smooth (differentiable).
• The function may not have a unique minimizing value of $t$.
Indeed, when $X$ does not have a unique median, there is no compelling reason to choose one value in the median interval, as the measure of center, over any other value in the interval.
Suppose now that $X \in \mathscr L_1$ has a general distribution on $\R$. Then $d_1(X, t)$ is minimized when $t$ is any median of $X$.
Proof
Let $s, \, t \in \R$. Suppose first that $s \lt t$. Computing the expected value over the events $X \le s$, $s \lt X \le t$, and $X \gt t$, and simplifying gives $\E\left(\left|X - t\right|\right) = \E\left(\left|X - s\right|\right) + (t - s) \, \left[2 \, \P(X \le s) - 1\right] + 2 \, \E(t - X, \, s \lt X \le t)$ Suppose next that $t \lt s$. Using similar methods gives $\E\left(\left|X - t\right|\right) = \E\left(\left|X - s\right|\right) + (t - s) \, \left[2 \, \P(X \lt s) - 1\right] + 2 \, \E(X - t, \, t \le X \lt s)$ Note that the last terms on the right in these equations are nonnegative. If we take $s$ to be a median of $X$, then the middle terms on the right in the equations are also nonnegative. Hence if $s$ is a median of $X$ and $t$ is any other number then $\E\left(\left|X - t\right|\right) \ge \E\left(\left|X - s\right|\right)$.
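The two error functions are easy to compare on simulated data. In the sketch below (Python with NumPy; the exponential sample is an illustrative choice), the root mean square error is minimized near the sample mean 1, while the mean absolute error is minimized near the sample median $\ln 2 \approx 0.693$.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(size=100_000)
ts = np.linspace(0, 3, 601)
d2 = [np.sqrt(np.mean((x - t) ** 2)) for t in ts]   # root mean square error
d1 = [np.mean(np.abs(x - t)) for t in ts]           # mean absolute error
print(ts[np.argmin(d2)], x.mean())                  # both near 1 (the mean)
print(ts[np.argmin(d1)], np.median(x))              # both near ln 2 (the median)
```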
Convergence
Whenever we have a measure of distance, we automatically have a criterion for convergence.
Suppose that $X_n \in \mathscr L_k$ for $n \in \N_+$ and that $X \in \mathscr L_k$, where $k \in [1, \infty)$. Then $X_n \to X$ as $n \to \infty$ in $k$th mean if $X_n \to X$ as $n \to \infty$ in the vector space $\mathscr L_k$. That is, $d_k(X_n, X) = \|X_n - X\|_k \to 0 \text{ as } n \to \infty$ or equivalently $\E\left(\left|X_n - X\right|^k\right) \to 0$ as $n \to \infty$.
When $k = 1$, we simply say that $X_n \to X$ as $n \to \infty$ in mean; when $k = 2$, we say that $X_n \to X$ as $n \to \infty$ in mean square. These are the most important special cases.
Suppose that $1 \le j \le k$. If $X_n \to X$ as $n \to \infty$ in $k$th mean then $X_n \to X$ as $n \to \infty$ in $j$th mean.
Proof
This follows from Lyapunov's inequality. Note that $0 \le d_j(X_n, X) \le d_k(X_n, X) \to 0$ as $n \to \infty$.
Convergence in $k$th mean implies that the $k$ norms converge.
Suppose that $X_n \in \mathscr L_k$ for $n \in \N_+$ and that $X \in \mathscr L_k$, where $k \in [1, \infty)$. If $X_n \to X$ as $n \to \infty$ in $k$th mean then $\|X_n\|_k \to \|X\|_k$ as $n \to \infty$. Equivalently, if $\E(|X_n - X|^k) \to 0$ as $n \to \infty$ then $\E(|X_n|^k) \to \E(|X|^k)$ as $n \to \infty$.
Proof
This is a simple consequence of the reverse triangle inequality, which holds in any normed vector space. The general result is that if a sequence of vectors in a normed vector space converge then the norms converge. In our notation here, $\left|\|X_n\|_k - \|X\|_k\right| \le \|X_n - X\|_k$ so if the right side converges to 0 as $n \to \infty$, then so does the left side.
The converse is not true; a counterexample is given below. Our next result shows that convergence in mean is stronger than convergence in probability.
Suppose that $X_n \in \mathscr L_1$ for $n \in \N_+$ and that $X \in \mathscr L_1$. If $X_n \to X$ as $n \to \infty$ in mean, then $X_n \to X$ as $n \to \infty$ in probability.
Proof
This follows from Markov's inequality. For $\epsilon \gt 0$, $0 \le \P\left(\left|X_n - X\right| \gt \epsilon\right) \le \E\left(\left|X_n - X\right|\right) \big/ \epsilon \to 0$ as $n \to \infty$.
The converse is not true. That is, convergence with probability 1 does not imply convergence in $k$th mean; a counterexample is given below. Also convergence in $k$th mean does not imply convergence with probability 1; a counterexample to this is given below. In summary, the implications in the various modes of convergence are shown below; no other implications hold in general.
• Convergence with probability 1 implies convergence in probability.
• Convergence in $k$th mean implies convergence in $j$th mean if $j \le k$.
• Convergence in $k$th mean implies convergence in probability.
• Convergence in probability implies convergence in distribution.
However, the next section on uniformly integrable variables gives a condition under which convergence in probability implies convergence in mean.
Inner Product
The vector space $\mathscr L_2$ of real-valued random variables on $(\Omega, \mathscr{F}, \P)$ (modulo equivalence of course) with finite second moment is special, because it's the only one in which the norm corresponds to an inner product.
The inner product of $X, \, Y \in \mathscr L_2$ is defined by $\langle X, Y \rangle = \E(X Y)$
The following results are analogous to the basic properties of covariance, and show that this definition really does give an inner product on the vector space $\mathscr L_2$.
For $X, \, Y, \, Z \in \mathscr L_2$ and $a \in \R$,
1. $\langle X, Y \rangle = \langle Y, X \rangle$, the symmetric property.
2. $\langle X, X \rangle \ge 0$ and $\langle X, X \rangle = 0$ if and only if $\P(X = 0) = 1$ (so that $X \equiv 0$), the positive property.
3. $\langle a X, Y \rangle = a \langle X, Y \rangle$, the scaling property.
4. $\langle X + Y, Z \rangle = \langle X, Z \rangle + \langle Y, Z \rangle$, the additive property.
Proof
1. This property is trivial from the definition.
2. Note that $\E(X^2) \ge 0$ and $\E(X^2) = 0$ if and only if $\P(X = 0) = 1$.
3. This follows from the scaling property of expected value: $\E(a X Y) = a \E(X Y)$
4. This follows from the additive property of expected value: $\E[(X + Y) Z] = \E(X Z) + \E(Y Z)$.
From parts (a), (c), and (d) it follows that inner product is bi-linear, that is, linear in each variable with the other fixed. Of course bi-linearity holds for any inner product on a vector space. Covariance and correlation can easily be expressed in terms of this inner product. The covariance of two random variables is the inner product of the corresponding centered variables. The correlation is the inner product of the corresponding standard scores.
For $X, \, Y \in \mathscr L_2$,
1. $\cov(X, Y) = \langle X - \E(X), Y - \E(Y) \rangle$
2. $\cor(X, Y) = \left \langle [X - \E(X)] \big/ \sd(X), [Y - \E(Y)] / \sd(Y) \right \rangle$
Proof
1. This is simply a restatement of the definition of covariance.
2. This is a restatement of the fact that the correlation of two variables is the covariance of their corresponding standard scores.
Thus, real-valued random variables $X$ and $Y$ are uncorrelated if and only if the centered variables $X - \E(X)$ and $Y - \E(Y)$ are perpendicular or orthogonal as elements of $\mathscr L_2$.
For $X \in \mathscr L_2$, $\langle X, X \rangle = \|X\|_2^2 = \E\left(X^2\right)$.
Thus, the norm associated with the inner product is the 2-norm studied above, and corresponds to the root mean square operation on a random variable. This fact is a fundamental reason why the 2-norm plays such a special, honored role; of all the $k$-norms, only the 2-norm corresponds to an inner product. In turn, this is one of the reasons that root mean square difference is of fundamental importance in probability and statistics. Technically, the vector space $\mathscr L_2$ is a Hilbert space, named for David Hilbert.
The next result is Hölder's inequality, named for Otto Hölder.
Suppose that $j, \, k \in [1, \infty)$ and $\frac{1}{j} + \frac{1}{k} = 1$. For $X \in \mathscr L_j$ and $Y \in \mathscr L_k$, $\langle \left|X\right|, \left|Y\right| \rangle \le \|X\|_j \|Y\|_k$
Proof
Note that $S = \left\{(x, y) \in \R^2: x \ge 0, \; y \ge 0\right\}$ is a convex set and $g(x, y) = x^{1/j} y^{1/k}$ is concave on $S$. From Jensen's inequality, if $U$ and $V$ are nonnegative random variables then $\E\left(U^{1/j} V^{1/k}\right) \le \left[\E(U)\right]^{1/j} \left[\E(V)\right]^{1/k}$. Substituting $U = \left|X\right|^j$ and $V = \left|Y\right|^k$ gives the result.
To show that $g$ really is concave on $S$, we compute the second derivative matrix:
$\left[ \begin{matrix} (1 / j)(1 / j - 1) x^{1 / j - 2} y^{1 / k} & (1 / j)(1 / k) x^{1 / j - 1} y^{1 / k - 1} \ (1 / j)(1 / k) x^{1 / j - 1} y^{1 / k - 1} & (1 / k)(1 / k - 1) x^{1 / j} y^{1 / k - 2} \end{matrix} \right]$
Since $1 / j \lt 1$ and $1 / k \lt 1$, the diagonal entries are negative on $S$. The determinant simplifies to
$(1 / j)(1 / k) x^{2 / j - 2} y^{2 / k - 2} [1 - (1 / j + 1 / k)] = 0$
In the context of the last theorem, $j$ and $k$ are called conjugate exponents. If we let $j = k = 2$ in Hölder's inequality, then we get the Cauchy-Schwarz inequality, named for Augustin Cauchy and Karl Schwarz: For $X, \, Y \in \mathscr L_2$, $\E\left(\left|X\right| \left|Y\right|\right) \le \sqrt{\E\left(X^2\right)} \sqrt{\E\left(Y^2\right)}$ In turn, the Cauchy-Schwarz inequality is equivalent to the basic inequalities for covariance and correlations: For $X, \, Y \in \mathscr L_2$, $\left| \cov(X, Y) \right| \le \sd(X) \sd(Y), \quad \left|\cor(X, Y)\right| \le 1$
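A direct numerical check of the Cauchy-Schwarz inequality is given below (Python with NumPy); the correlated normal pair is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.standard_normal((2, 500_000))
x, y = z[0], 0.8 * z[0] + 0.6 * z[1]       # a correlated pair, each N(0, 1)
lhs = np.mean(np.abs(x) * np.abs(y))       # E(|X||Y|)
rhs = np.sqrt(np.mean(x ** 2)) * np.sqrt(np.mean(y ** 2))
print(lhs, rhs, lhs <= rhs)                # lhs is below rhs, which is about 1
```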
If $j, \, k \in [1, \infty)$ are conjugate exponents then
1. $k = \frac{j}{j - 1}$.
2. $k \downarrow 1$ as $j \uparrow \infty$.
The following result is an equivalent to the identity $\var(X + Y) + \var(X - Y) = 2\left[\var(X) + \var(Y)\right]$ that we studied in the section on covariance and correlation. In the context of vector spaces, the result is known as the parallelogram rule:
If $X, \, Y \in \mathscr L_2$ then $\|X + Y\|_2^2 + \|X - Y\|_2^2 = 2 \|X\|_2^2 + 2 \|Y\|_2^2$
Proof
This result follows from the bi-linearity of inner product: \begin{align} \|X + Y\|_2^2 + \|X - Y\|_2^2 & = \langle X + Y, X + Y \rangle + \langle X - Y, X - Y\rangle \ & = \left(\langle X, X \rangle + 2 \langle X, Y \rangle + \langle Y, Y \rangle\right) + \left(\langle X, X \rangle - 2 \langle X, Y \rangle + \langle Y, Y \rangle\right) = 2 \|X\|_2^2 + 2 \|Y\|_2^2 \end{align}
The following result is equivalent to the statement that the variance of the sum of uncorrelated variables is the sum of the variances, which again we proved in the section on covariance and correlation. In the context of vector spaces, the result is the famous Pythagorean theorem, named for Pythagoras of course.
If $(X_1, X_2, \ldots, X_n)$ is a sequence of random variables in $\mathscr L_2$ with $\langle X_i, X_j \rangle = 0$ for $i \ne j$ then $\left \| \sum_{i=1}^n X_i \right \|_2^2 = \sum_{i=1}^n \|X_i\|_2^2$
Proof
Again, this follows from the bi-linearity of inner product: $\left \| \sum_{i=1}^n X_i \right \|_2^2 = \left\langle \sum_{i=1}^n X_i, \sum_{j=1}^n X_j\right\rangle = \sum_{i=1}^n \sum_{j=1}^n \langle X_i, X_j \rangle$ The terms with $i \ne j$ are 0 by the orthogonality assumption, so $\left \| \sum_{i=1}^n X_i \right \|_2^2 = \sum_{i=1}^n \langle X_i, X_i \rangle = \sum_{i=1}^n \|X_i\|_2^2$
Projections
The best linear predictor studied in the section on covariance and correlation, and conditional expected value, have nice interpretations in terms of projections onto subspaces of $\mathscr L_2$. First let's review the concepts. Recall that $\mathscr U$ is a subspace of $\mathscr L_2$ if $\mathscr U \subseteq \mathscr L_2$ and $\mathscr U$ is also a vector space (under the same operations of addition and scalar multiplication). To show that $\mathscr U \subseteq \mathscr L_2$ is a subspace, we just need to show the closure properties (the other axioms of a vector space are inherited).
• If $U, \; V \in \mathscr U$ then $U + V \in \mathscr U$.
• If $U \in \mathscr U$ and $c \in \R$ then $c U \in \mathscr U$.
Suppose now that $\mathscr U$ is a subspace of $\mathscr L_2$ and that $X \in \mathscr L_2$. Then the projection of $X$ onto $\mathscr U$ (if it exists) is the vector $V \in \mathscr U$ with the property that $X - V$ is perpendicular to $\mathscr U$: $\langle X - V, U \rangle = 0, \quad U \in \mathscr U$
The projection has two critical properties: It is unique (if it exists) and it is the vector in $\mathscr U$ closest to $X$. If you look at the proofs of these results, you will see that they are essentially the same as the ones used for the best predictors of $X$ mentioned at the beginning of this subsection. Moreover, the proofs use only vector space concepts—the fact that our vectors are random variables on a probability space plays no special role.
The projection of $X$ onto $\mathscr U$ (if it exists) is unique.
Proof
Suppose that $V_1$ and $V_2$ satisfy the definition. Then $\left\|V_1 - V_2\right\|_2^2 = \langle V_1 - V_2, V_1 - V_2 \rangle = \langle V_1 - X + X - V_2, V_1 - V_2 \rangle = \langle V_1 - X, V_1 - V_2 \rangle + \langle X - V_2, V_1 - V_2 \rangle = 0$ Hence $V_1 \equiv V_2$. The last equality in the displayed equation holds by assumption and the fact that $V_1 - V_2 \in \mathscr U$.
Suppose that $V$ is the projection of $X$ onto $\mathscr U$. Then
1. $\left\|X - V\right\|_2^2 \le \left\|X - U\right\|_2^2$ for all $U \in \mathscr U$.
2. Equality holds in (a) if and only if $U \equiv V$
Proof
1. If $U \in \mathscr U$ then $\left\| X - U \right\|_2^2 = \left\| X - V + V - U \right\|_2^2 = \left\| X - V \right\|_2^2 + 2 \langle X - V, V - U \rangle + \left\| V - U \right\|_2^2$ But the middle term is 0 so $\left\| X - U \right\|_2^2 = \left\| X - V \right\|_2^2 + \left\| V - U \right\|_2^2 \ge \left\| X - V \right\|_2^2$
2. Equality holds if and only if $\left\| V - U \right\|_2^2 = 0$, if and only if $V \equiv U$.
Now let's return to our study of best predictors of a random variable.
If $X \in \mathscr L_2$ then the set $\mathscr W_X = \{a + b X: a \in \R, \; b \in \R\}$ is a subspace of $\mathscr L_2$. In fact, it is the subspace generated by $X$ and 1.
Proof
Note that $\mathscr W_X$ is the set of all linear combinations of the vectors $1$ and $X$. If $U, \, V \in \mathscr W_X$ then $U + V \in \mathscr W_X$. If $U \in \mathscr W_X$ and $c \in \R$ then $c U \in \mathscr W_X$.
Recall that for $X, \, Y \in \mathscr L_2$, the best linear predictor of $Y$ based on $X$ is $L(Y \mid X) = \E(Y) + \frac{\cov(X, Y)}{\var(X)} \left[X - \E(X)\right]$ Here is the meaning of the predictor in the context of our vector spaces.
If $X, \, Y \in \mathscr L_2$ then $L(Y \mid X)$ is the projection of $Y$ onto $\mathscr W_X$.
Proof
Note first that $L(Y \mid X) \in \mathscr W_X$. Thus, we just need to show that $Y - L(Y \mid X)$ is perpendicular to $\mathscr W_X$. For this, it suffices to show
1. $\left\langle Y - L(Y \mid X), X \right\rangle = 0$
2. $\left\langle Y - L(Y \mid X), 1 \right\rangle = 0$
We have already done this in the earlier sections, but for completeness, we do it again. Note that $\E\left(X \left[X - \E(X)\right]\right) = \var(X)$. Hence $\E\left[X L(Y \mid X)\right] = \E(X) \E(Y) + \cov(X, Y) = \E(X Y)$. This gives (a). By linearity, $\E\left[L(Y \mid X)\right] = \E(Y)$ so (b) holds as well.
The previous result is actually just the random variable version of the standard formula for the projection of a vector onto a space spanned by two other vectors. Note that $1$ is a unit vector and that $X_0 = X - \E(X) = X - \langle X, 1 \rangle 1$ is perpendicular to $1$. Thus, $L(Y \mid X)$ is just the sum of the projections of $Y$ onto $1$ and $X_0$: $L(Y \mid X) = \langle Y, 1 \rangle 1 + \frac{\langle Y, X_0 \rangle}{\langle X_0, X_0\rangle} X_0$
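The projection formula can be checked on simulated data using sample inner products $\langle U, V \rangle = \E(U V)$. The sketch below (Python with NumPy) uses an illustrative linear model and verifies the two orthogonality conditions from the proof above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + rng.standard_normal(n)   # an illustrative linear model

x0 = x - x.mean()                            # X_0 = X - E(X), perpendicular to 1
proj = y.mean() + (np.mean(y * x0) / np.mean(x0 * x0)) * x0   # L(Y | X)
print(np.mean(y * x0) / np.mean(x0 * x0))    # slope, approximately 1.5
print(np.mean((y - proj) * x), np.mean(y - proj))   # both near 0: orthogonality
```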
Suppose now that $\mathscr{G}$ is a sub $\sigma$-algebra of $\mathscr{F}$. Of course if $X: \Omega \to \R$ is $\mathscr{G}$-measurable then $X$ is $\mathscr{F}$-measurable, so $\mathscr L_2(\mathscr{G})$ is a subspace of $\mathscr L_2(\mathscr{F})$.
If $X \in \mathscr L_2(\mathscr{F})$ then $\E(X \mid \mathscr{G})$ is the projection of $X$ onto $\mathscr L_2(\mathscr{G})$.
Proof
This is essentially the definition of $\E(X \mid \mathscr{G})$ as the only (up to equivalence) random variable in $\mathscr L_2(\mathscr{G})$ with $\E\left[\E(X \mid \mathscr{G}) U\right] = \E(X U)$ for every $U \in \mathscr L_2(\mathscr{G})$.
But remember that $\E(X \mid \mathscr{G})$ is defined more generally for $X \in \mathscr L_1(\mathscr{F})$. Our final result in this discussion concerns convergence.
Suppose that $k \in [1, \infty)$ and that $\mathscr{G}$ is a sub $\sigma$-algebra of $\mathscr{F}$.
1. If $X \in \mathscr L_k(\mathscr{F})$ then $\E(X \mid \mathscr{G}) \in \mathscr L_k(\mathscr{G})$
2. If $X_n \in \mathscr L_k(\mathscr{F})$ for $n \in \N_+$, $X \in \mathscr L_k(\mathscr{F})$, and $X_n \to X$ as $n \to \infty$ in $\mathscr L_k(\mathscr{F})$ then $\E(X_n \mid \mathscr{G}) \to \E(X \mid \mathscr{G})$ as $n \to \infty$ in $\mathscr L_k(\mathscr{G})$
Proof
1. Note that $|\E(X \mid \mathscr{G})| \le \E(|X| \mid \mathscr{G})$. Since $t \mapsto t^k$ is increasing and convex on $[0, \infty)$ we have $|\E(X \mid \mathscr{G})|^k \le [\E(|X| \mid \mathscr{G})]^k \le \E\left(|X|^k \mid \mathscr{G}\right)$ The last step uses Jensen's inequality. Taking expected values gives $\E[|\E(X \mid \mathscr{G})|^k] \le \E(|X|^k) \lt \infty$
2. Using the same ideas, $\E\left[\left|\E(X_n \mid \mathscr{G}) - \E(X \mid \mathscr{G})\right|^k\right] = \E\left[\left|\E(X_n - X \mid \mathscr{G})\right|^k\right] \le \E\left[|X_n - X|^k\right]$ By assumption, the right side converges to 0 as $n \to \infty$ and hence so does the left side.
Examples and Applications
App Exercises
In the error function app, select the root mean square error function. Click on the $x$-axis to generate an empirical distribution, and note the shape and location of the graph of the error function.
In the error function app, select the mean absolute error function. Click on the $x$-axis to generate an empirical distribution, and note the shape and location of the graph of the error function.
Computational Exercises
Suppose that $X$ is uniformly distributed on the interval $[0, 1]$.
1. Find $\|X\|_k$ for $k \in [1, \infty)$.
2. Graph $\|X\|_k$ as a function of $k \in [1, \infty)$.
3. Find $\lim_{k \to \infty} \|X\|_k$.
Answer
1. $\frac{1}{(k + 1)^{1/k}}$
3. 1
Suppose that $X$ has probability density function $f(x) = \frac{a}{x^{a+1}}$ for $1 \le x \lt \infty$, where $a \gt 0$ is a parameter. Thus, $X$ has the Pareto distribution with shape parameter $a$.
1. Find $\|X\|_k$ for $k \in [1, \infty)$.
2. Graph $\|X\|_k$ as a function of $k \in (1, a)$.
3. Find $\lim_{k \uparrow a} \|X\|_k$.
Answer
1. $\left(\frac{a}{a -k}\right)^{1/k}$ if $k \lt a$, $\infty$ if $k \ge a$
3. $\infty$
Suppose that $(X, Y)$ has probability density function $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$. Verify Minkowski's inequality.
Answer
1. $\|X + Y\|_k = \left(\frac{2^{k+3} - 2}{(k + 2)(k + 3)}\right)^{1/k}$
2. $\|X\|_k + \|Y\|_k = 2 \left(\frac{1}{k + 2} + \frac{1}{2(k + 1)}\right)^{1/k}$
Let $X$ be an indicator random variable with $\P(X = 1) = p$, where $0 \le p \le 1$. Graph $\E\left(\left|X - t\right|\right)$ as a function of $t \in \R$ in each of the cases below. In each case, find the minimum value of the function and the values of $t$ where the minimum occurs.
1. $p \lt \frac{1}{2}$
2. $p = \frac{1}{2}$
3. $p \gt \frac{1}{2}$
Answer
1. The minimum is $p$ and occurs at $t = 0$.
2. The minimum is $\frac{1}{2}$ and occurs for $t \in [0, 1]$
3. The minimum is $1 - p$ and occurs at $t = 1$
Suppose that $X$ is uniformly distributed on the interval $[0, 1]$. Find $d_1(X, t) = \E\left(\left|X - t\right|\right)$ as a function of $t$ and sketch the graph. Find the minimum value of the function and the value of $t$ where the minimum occurs.
Suppose that $X$ is uniformly distributed on the set $[0, 1] \cup [2, 3]$. Find $d_1(X, t) = \E\left(\left|X - t\right|\right)$ as a function of $t$ and sketch the graph. Find the minimum value of the function and the values of $t$ where the minimum occurs.
Suppose that $(X, Y)$ has probability density function $f(x, y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$. Verify Hölder's inequality in the following cases:
1. $j = k = 2$
2. $j = 3$, $k = \frac{3}{2}$
Answer
1. $\|X\|_2 \|Y\|_2 = \frac{5}{12}$
2. $\|X\|_3 \, \|Y\|_{3/2} \approx 0.4248$
Counterexamples
The following exercise shows that convergence with probability 1 does not imply convergence in mean.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent random variables with $\P\left(X_n = n^3\right) = \frac{1}{n^2}, \; \P(X_n = 0) = 1 - \frac{1}{n^2}; \quad n \in \N_+$
1. $X_n \to 0$ as $n \to \infty$ with probability 1.
2. $X_n \to 0$ as $n \to \infty$ in probability.
3. $\E(X_n) \to \infty$ as $n \to \infty$.
Proof
1. This follows from the basic characterization of convergence with probability 1: $\sum_{n=1}^\infty \P(X_n \gt \epsilon) = \sum_{n=1}^\infty 1 / n^2 \lt \infty$ for $0 \lt \epsilon \lt 1$.
2. This follows since convergence with probability 1 implies convergence in probability.
3. Note that $\E(X_n) = n^3 / n^2 = n$ for $n \in \N_+$.
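A simulation sketch of this counterexample (Python with NumPy): on almost every sample path $X_n = 0$ for all large $n$, yet the sample mean of $X_n$ tracks $\E(X_n) = n$.

```python
import numpy as np

rng = np.random.default_rng(8)
paths, N = 50_000, 200
n = np.arange(1, N + 1)
x = np.where(rng.random((paths, N)) < 1.0 / n ** 2, n.astype(float) ** 3, 0.0)

# convergence with probability 1: the fraction of paths with X_n != 0 for some
# n > 20 is at most the sum over n > 20 of 1/n^2, about 0.05
print(np.mean((x[:, 20:] == 0).all(axis=1)))        # close to 1
# no convergence in mean: sample means of X_10 and X_20 are near 10 and 20
print(x[:, 9].mean(), x[:, 19].mean())
```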
The following exercise shows that convergence in mean does not imply convergence with probability 1.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent indicator random variables with $\P(X_n = 1) = \frac{1}{n}, \; \P(X_n = 0) = 1 - \frac{1}{n}; \quad n \in \N_+$
1. $\P(X_n = 0 \text{ for infinitely many } n) = 1$.
2. $\P(X_n = 1 \text{ for infinitely many } n) = 1$.
3. $\P(X_n \text{ does not converge as } n \to \infty) = 1$.
4. $X_n \to 0$ as $n \to \infty$ in $k$th mean for every $k \ge 1$.
Proof
1. This follows from the second Borel-Cantelli lemma since $\sum_{n=1}^\infty \P(X_n = 0) = \sum_{n=1}^\infty (1 - 1 / n) = \infty$
2. This also follows from the second Borel-Cantelli lemma since $\sum_{n=1}^\infty \P(X_n = 1) = \sum_{n=1}^\infty 1 / n = \infty$.
3. This follows from parts (a) and (b).
4. Note that $\E(|X_n|^k) = \E(X_n) = 1 / n \to 0$ as $n \to \infty$.
The following exercise shows that convergence of the $k$th means does not imply convergence in $k$th mean.
Suppose that $U$ has the Bernoulli distribution with parameter $\frac{1}{2}$, so that $\P(U = 1) = \P(U = 0) = \frac{1}{2}$. Let $X_n = U$ for $n \in \N_+$ and let $X = 1 - U$. Let $k \in [1, \infty)$. Then
1. $\E(X_n^k) = \E(X^k) = \frac{1}{2}$ for $n \in \N_+$, so $\E(X_n^k) \to \E(X^k)$ as $n \to \infty$
2. $\E(|X_n - X|^k) = 1$ for $n \in \N_+$, so $X_n$ does not converge to $X$ as $n \to \infty$ in $\mathscr L_k$.
Proof
1. Note that $X_n^k = U^k = U$ for $n \in \N_+$, since $U$ just takes values 0 and 1. Also, $U$ and $1 - U$ have the same distribution so $\E(U) = \E(1 - U) = \frac{1}{2}$.
2. Note that $X_n - X = U - (1 - U) = 2 U - 1$ for $n \in \N_+$. Again, $U$ just takes values 0 and 1, so $|2 U - 1| = 1$.
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
Two of the most important modes of convergence in probability theory are convergence with probability 1 and convergence in mean. As we have noted several times, neither mode of convergence implies the other. However, if we impose an additional condition on the sequence of variables, convergence with probability 1 will imply convergence in mean. The purpose of this brief, but advanced section, is to explore the additional condition that is needed. This section is particularly important for the theory of martingales.
Basic Theory
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr{F}, \P)$. So $\Omega$ is the set of outcomes, $\mathscr{F}$ is the $\sigma$-algebra of events, and $\P$ is the probability measure on the sample space $(\Omega, \mathscr F)$. In this section, all random variables that are mentioned are assumed to be real valued, unless otherwise noted. Next, recall from the section on vector spaces that for $k \in [1, \infty)$, $\mathscr{L}_k$ is the vector space of random variables $X$ with $\E(|X|^k) \lt \infty$, endowed with the norm $\|X\|_k = \left[\E\left(|X|^k\right)\right]^{1/k}$. In particular, $X \in \mathscr{L}_1$ simply means that $\E(|X|) \lt \infty$ so that $\E(X)$ exists as a real number. From the section on expected value as an integral, recall the following notation, assuming of course that the expected value makes sense: $\E(X; A) = \E(X \bs{1}_A) = \int_A X \, d\P$
Definition
The following result is motivation for the main definition in this section.
If $X$ is a random variable then $\E(|X|) \lt \infty$ if and only if $\E(|X|; |X| \ge x) \to 0$ as $x \to \infty$.
Proof
Note that $|X| \bs{1}(|X| \le x)$ is nonnegative, increasing in $x \in [0, \infty)$, and $|X| \bs{1}(|X| \le x) \to |X|$ as $x \to \infty$. From the monotone convergence theorem, $\E(|X|; |X| \le x) \to \E(|X|)$ as $x \to \infty$. On the other hand, $\E(|X|) = \E(|X|; |X| \le x) + \E(|X|; |X| \gt x)$ If $\E(|X|) \lt \infty$ then taking limits in the displayed equation shows that $\E(|X|; |X| \gt x) \to 0$ as $x \to \infty$. On the other hand, $\E(|X|; |X| \le x) \le x$. So if $\E(|X|) = \infty$ then $\E(|X|; |X| \gt x) = \infty$ for every $x \in [0, \infty)$.
Suppose now that $X_i$ is a random variable for each $i$ in a nonempty index set $I$ (not necessarily countable). The critical definition for this section is to require the convergence in the previous theorem to hold uniformly for the collection of random variables $\bs X = \{X_i: i \in I\}$.
The collection $\bs X = \{X_i: i \in I\}$ is uniformly integrable if for each $\epsilon \gt 0$ there exists $x \gt 0$ such that for all $i \in I$, $\E(|X_i|; |X_i| \gt x) \lt \epsilon$ Equivalently $\E(|X_i|; |X_i| \gt x) \to 0$ as $x \to \infty$ uniformly in $i \in I$.
Properties
Our next discussion centers on conditions that ensure that the collection of random variables $\bs X = \{X_i: i \in I\}$ is uniformly integrable. Here is an equivalent characterization:
The collection $\bs X = \{X_i: i \in I\}$ is uniformly integrable if and only if the following conditions hold:
1. $\{\E(|X_i|): i \in I\}$ is bounded.
2. For each $\epsilon \gt 0$ there exists $\delta \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|X_i|; A) \lt \epsilon$ for all $i \in I$.
Proof
Suppose that $\bs X$ is uniformly integrable. With $\epsilon = 1$ there exists $x \gt 0$ such that $\E(|X_i|; |X_i| \gt x) \lt 1$ for all $i \in I$. Hence $\E(|X_i|) = \E(|X_i|; |X_i| \le x) + \E(|X_i|; |X_i| \gt x) \le x + 1, \quad i \in I$ so (a) holds. For (b), let $\epsilon \gt 0$. There exists $x \gt 0$ such that $\E(|X_i|; |X_i| \gt x) \lt \epsilon / 2$ for all $i \in I$. Let $\delta = \epsilon / (2 x)$. If $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|X_i|; A) = \E(|X_i|; A \cap \{|X_i| \le x\}) + \E(|X_i|; A \cap \{|X_i| \gt x\}) \le x \P(A) + \E(|X_i|; |X_i| \gt x) \lt \epsilon / 2 + \epsilon / 2 = \epsilon$ Conversely, suppose that (a) and (b) hold. By (a), there exists $c \gt 0$ such that $\E(|X_i|) \le c$ for all $i \in I$. Let $\epsilon \gt 0$. By (b) there exists $\delta \gt 0$ such that if $A \in \mathscr{F}$ with $\P(A) \lt \delta$ then $\E(|X_i|; A) \lt \epsilon$ for all $i \in I$. Next, by Markov's inequality, $\P(|X_i| \gt x) \le \frac{\E(|X_i|)}{x} \le \frac{c}{x}, \quad i \in I$ Pick $x \gt 0$ such that $c / x \lt \delta$, so that $\P(|X_i| \gt x) \lt \delta$ for each $i \in I$. Then for each $j \in I$, $\E(|X_i|; |X_j| \gt x) \lt \epsilon$ for all $i \in I$ and so in particular, $\E(|X_i|; |X_i| \gt x) \lt \epsilon$ for all $i \in I$. Hence $\bs X$ is uniformly integrable.
Condition (a) means that $\bs X$ is bounded (in norm) as a subset of the vector space $\mathscr{L}_1$. Trivially, a finite collection of integrable random variables is uniformly integrable.
Suppose that $I$ is finite and that $\E(|X_i|) \lt \infty$ for each $i \in I$. Then $\bs X = \{X_i: i \in I\}$ is uniformly integrable.
A subset of a uniformly integrable set of variables is also uniformly integrable.
If $\{X_i: i \in I\}$ is uniformly integrable and $J$ is a nonempty subset of $I$, then $\{X_j: j \in J\}$ is uniformly integrable.
If the random variables in the collection are dominated in absolute value by a random variable with finite mean, then the collection is uniformly integrable.
Suppose that $Y$ is a nonnegative random variable with $\E(Y) \lt \infty$ and that $|X_i| \le Y$ for each $i \in I$. Then $\bs X = \{X_i: i \in I\}$ is uniformly integrable.
Proof
Clearly $\E(|X_i|; |X_i| \gt x) \le \E(Y; Y \gt x)$ for $x \in [0, \infty)$ and for all $i \in I$. The right side is independent of $i \in I$, and by the theorem above, converges to 0 as $x \to \infty$. Hence $\bs X$ is uniformly integrable.
The following result is more general, but essentially the same proof works.
Suppose that $\bs Y = \{Y_j: j \in J\}$ is uniformly integrable, and $\bs X = \{X_i: i \in I\}$ is a set of variables with the property that for each $i \in I$ there exists $j \in J$ such that $|X_i| \le |Y_j|$. Then $\bs X$ is uniformly integrable.
As a simple corollary, if the variables are bounded in absolute value then the collection is uniformly integrable.
If there exists $c \gt 0$ such that $|X_i| \le c$ for all $i \in I$ then $\bs X = \{X_i: i \in I\}$ is uniformly integrable.
Just having $\E(|X_i|)$ bounded in $i \in I$ (condition (a) in the characterization above) is not sufficient for $\bs X = \{X_i: i \in I\}$ to be uniformly integrable; a counterexample is given below. However, if $\E(|X_i|^k)$ is bounded in $i \in I$ for some $k \gt 1$, then $\bs X$ is uniformly integrable. This condition means that $\bs X$ is bounded (in norm) as a subset of the vector space $\mathscr{L}_k$.
If $\left\{\E(|X_i|^k): i \in I\right\}$ is bounded for some $k \gt 1$, then $\{X_i: i \in I\}$ is uniformly integrable.
Proof
Suppose that for some $k \gt 1$ and $c \gt 0$, $\E(|X_i|^k) \le c$ for all $i \in I$. Then $k - 1 \gt 0$ and so $t \mapsto t^{k-1}$ is increasing on $(0, \infty)$. So if $|X_i| \gt x$ for $x \gt 0$ then $|X_i|^k = |X_i| |X_i|^{k-1} \ge |X_i| x^{k-1}$ Hence $|X_i| \le |X_i|^k / x^{k-1}$ on the event $|X_i| \gt x$. Therefore $\E(|X_i|; |X_i| \gt x) \le \E\left(\frac{|X_i|^k}{x^{k-1}}; |X_i| \gt x\right) \le \frac{\E(|X_i|^k)}{x^{k-1}} \le \frac{c}{x^{k-1}}$ The last expression is independent of $i \in I$ and converges to 0 as $x \to \infty$. Hence $\bs X$ is uniformly integrable.
Uniform integrability is preserved under the operations of addition and scalar multiplication.
Suppose that $\bs X = \{X_i: i \in I\}$ and $\bs Y = \{Y_i: i \in I\}$ are uniformly integrable and that $c \in \R$. Then each of the following collections is also uniformly integrable.
1. $\bs X + \bs Y = \{X_i + Y_i: i \in I\}$
2. $c \bs X = \{c X_i: i \in I\}$
Proof
We use the characterization above. The proofs use standard techniques, so try them yourself.
1. There exist $a, \, b \in (0, \infty)$ such that $\E(|X_i|) \le a$ and $\E(|Y_i|) \le b$ for all $i \in I$. Hence $\E(|X_i + Y_i|) \le \E(|X_i| + |Y_i|) \le \E(|X_i|) + \E(|Y_i|) \le a + b, \quad i \in I$ Next let $\epsilon \gt 0$. There exists $\delta_1 \gt 0$ such that if $A \in \mathscr{F}$ with $\P(A) \lt \delta_1$ then $\E(|X_i|; A) \lt \epsilon / 2$ for all $i \in I$, and similarly, there exists $\delta_2 \gt 0$ such that if $A \in \mathscr{F}$ with $\P(A) \lt \delta_2$ then $\E(|Y_i|; A) \lt \epsilon / 2$ for all $i \in I$. Hence if $A \in \mathscr{F}$ with $\P(A) \lt \delta_1 \wedge \delta_2$ then $\E(|X_i + Y_i|; A) \le \E(|X_i| + |Y_i|; A) = \E(|X_i|; A) + \E(|Y_i|; A) \lt \epsilon / 2 + \epsilon / 2 = \epsilon, \quad i \in I$
2. There exists $a \in (0, \infty)$ such that $\E(|X_i|) \le a$ for all $i \in I$. Hence $\E(|c X_i|) = |c| \E(|X_i|) \le |c| a, \quad i \in I$ The second condition is trivial if $c = 0$, so suppose $c \ne 0$. For $\epsilon \gt 0$ there exists $\delta \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|X_i|; A) \lt \epsilon / |c|$ for all $i \in I$. Hence $\E(|c X_i|; A) = |c| \E(|X_i|; A) \lt \epsilon$.
The following corollary is trivial, but will be needed in our discussion of convergence below.
Suppose that $\{X_i: i \in I\}$ is uniformly integrable and that $X$ is a random variable with $\E(|X|) \lt \infty$. Then $\{X_i - X: i \in I\}$ is uniformly integrable.
Proof
Let $Y_i = X$ for each $i \in I$. Then $\{Y_i: i \in I\}$ is uniformly integrable, so the result follows from the previous theorem.
Convergence
We now come to the main results, and the reason for the definition of uniform integrability in the first place. To set up the notation, suppose that $X_n$ is a random variable for $n \in \N_+$ and that $X$ is a random variable. We know that if $X_n \to X$ as $n \to \infty$ in mean then $X_n \to X$ as $n \to \infty$ in probability, but not conversely. Uniform integrability is exactly the missing ingredient: convergence in mean implies uniform integrability, and uniform integrability together with convergence in probability implies convergence in mean. Here is the first half:
If $X_n \to X$ as $n \to \infty$ in mean, then $\{X_n: n \in \N_+\}$ is uniformly integrable.
Proof
The hypothesis means that $X_n \to X$ as $n \to \infty$ in the vector space $\mathscr{L}_1$. That is, $\E(|X_n|) \lt \infty$ for $n \in \N_+$, $\E(|X|) \lt \infty$, and $\E(|X_n - X|) \to 0$ as $n \to \infty$. From the last section, we know that this implies that $\E(|X_n|) \to \E(|X|)$ as $n \to \infty$, so $\E(|X_n|)$ is bounded in $n \in \N_+$. Let $\epsilon \gt 0$. Then there exists $N \in \N_+$ such that if $n \gt N$ then $\E(|X_n - X|) \lt \epsilon/2$. Since all of our variables are in $\mathscr{L}_1$, for each $n \in \N_+$ there exists $\delta_n \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta_n$ then $\E(|X_n - X|; A) \lt \epsilon / 2$. Similarly, there exists $\delta_0 \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta_0$ then $\E(|X|; A) \lt \epsilon / 2$. Let $\delta = \min\{\delta_n: n \in \{0, 1, \ldots, N\}\}$ so $\delta \gt 0$. If $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|X_n|; A) = \E(|X_n - X + X|; A) \le \E(|X_n - X|; A) + \E(|X|; A), \quad n \in \N_+$ If $n \le N$ then $\E(|X_n - X|; A) \lt \epsilon / 2$ since $\delta \le \delta_n$. If $n \gt N$ then $\E(|X_n - X|; A) \le \E(|X_n - X|) \lt \epsilon / 2$. For all $n$, $\E(|X|; A) \lt \epsilon / 2$ since $\delta \le \delta_0$. So for all $n \in \N_+$, $\E(|X_n|; A) \lt \epsilon$ and hence $\{X_n: n \in \N_+\}$ is uniformly integrable.
Here is the more important half, known as the uniform integrability theorem:
If $\{X_n: n \in \N_+\}$ is uniformly integrable and $X_n \to X$ as $n \to \infty$ in probability, then $X_n \to X$ as $n \to \infty$ in mean.
Proof
Since $X_n \to X$ as $n \to \infty$ in probability, we know that there exists a subsequence $\left(X_{n_k}: k \in \N_+\right)$ of $(X_n: n \in \N_+)$ such that $X_{n_k} \to X$ as $k \to \infty$ with probability 1. By the uniform integrability, $\E(|X_n|)$ is bounded in $n \in \N_+$. Hence by Fatou's lemma $\E(|X|) = \E\left(\liminf_{k \to \infty} \left|X_{n_k}\right|\right) \le \liminf_{k \to \infty} \E\left(\left|X_{n_k}\right|\right) \le \limsup_{k \to \infty} \E\left(\left|X_{n_k}\right|\right) \lt \infty$ Let $Y_n = X_n - X$ for $n \in \N_+$. From the corollary above, we know that $\{Y_n: n \in \N_+\}$ is uniformly integrable, and we also know that $Y_n$ converges to 0 as $n \to \infty$ in probability. Hence we need to show that $Y_n \to 0$ as $n \to \infty$ in mean. Let $\epsilon \gt 0$. By uniform integrability, there exists $\delta \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|Y_n|; A) \lt \epsilon / 2$ for all $n \in \N_+$. Since $Y_n \to 0$ as $n \to \infty$ in probability, there exists $N \in \N_+$ such that if $n \gt N$ then $\P(|Y_n| \gt \epsilon / 2) \lt \delta$. Hence if $n \gt N$ then $\E(|Y_n|) = \E(|Y_n|; |Y_n| \le \epsilon / 2) + \E(|Y_n|; |Y_n| \gt \epsilon / 2) \lt \epsilon / 2 + \epsilon / 2 = \epsilon$ Hence $Y_n \to 0$ as $n \to \infty$ in mean.
As a corollary, recall that if $X_n \to X$ as $n \to \infty$ with probability 1, then $X_n \to X$ as $n \to \infty$ in probability. Hence if $\bs X = \{X_n: n \in \N_+\}$ is uniformly integrable then $X_n \to X$ as $n \to \infty$ in mean.
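As a numerical illustration of these convergence results, here is a simulation sketch in Python with NumPy (our own construction, not part of the text). It uses the hypothetical sequence $X_n = X + Z / \sqrt{n}$ where $X$ and $Z$ are independent standard normal variables: the sequence is bounded in $\mathscr{L}_2$, hence uniformly integrable, and $X_n \to X$ as $n \to \infty$ in probability, so by the uniform integrability theorem, $\E(|X_n - X|) \to 0$.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
m = 10**6                            # sample size for each Monte Carlo estimate

x = rng.standard_normal(m)           # simulated values of X
z = rng.standard_normal(m)           # independent noise Z

for n in [1, 10, 100, 1000]:
    xn = x + z / np.sqrt(n)          # X_n = X + Z / sqrt(n)
    # estimate of E|X_n - X|, which is sqrt(2 / (pi n)) exactly
    print(n, np.mean(np.abs(xn - x)))
```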
Examples
Our first example shows that bounded $\mathscr{L}_1$ norm is not sufficient for uniform integrability.
Suppose that $U$ is uniformly distributed on the interval $(0, 1)$ (so $U$ has the standard uniform distribution). For $n \in \N_+$, let $X_n = n \bs{1}(U \le 1 / n)$. Then
1. $\E(|X_n|) = 1$ for all $n \in \N_+$
2. $\E(|X_n|; |X_n| \gt x) = 1$ for $x \gt 0$, $n \in \N_+$ with $n \gt x$
Proof
First note that $|X_n| = X_n$ since $X_n \ge 0$.
1. By definition, $\E(X_n) = n \P(U \le 1 / n) = n / n = 1$ for $n \in \N_+$.
2. If $n \gt x \gt 0$ then $X_n \gt x$ if and only if $X_n = n$ if and only if $U \le 1/n$. Hence $\E(X_n; X_n \gt x) = n \P(U \le 1/n) = 1$ as before.
By part (b), $\E(|X_n|; |X_n| \gt x)$ does not converge to 0 as $x \to \infty$ uniformly in $n \in \N_+$, so $\bs X = \{X_n: n \in \N_+\}$ is not uniformly integrable.
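The failure of uniform integrability is easy to see numerically. Here is a minimal simulation sketch in Python with NumPy (our own illustration; the truncation level $x = 50$ is an arbitrary choice) that estimates $\E(|X_n|; |X_n| \gt x)$ for several values of $n$:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
m = 10**6                        # number of simulated values of U

def tail_expectation(n, x):
    """Estimate E(X_n; X_n > x) where X_n = n * 1(U <= 1/n)."""
    u = rng.random(m)            # U is uniform on (0, 1)
    xn = n * (u <= 1 / n)        # X_n takes only the values 0 and n
    return np.mean(xn * (xn > x))

x = 50.0
for n in [10, 100, 1000, 10000]:
    print(n, tail_expectation(n, x))
# The estimate is 0 for n = 10 but approximately 1 for each n > 50,
# so the tail expectations do not converge to 0 uniformly in n.
```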
The next example gives an important application to conditional expected value. Recall that if $X$ is a random variable with $\E(|X|) \lt \infty$ and $\mathscr{G}$ is a sub $\sigma$-algebra of $\mathscr{F}$ then $\E(X \mid \mathscr{G})$ is the expected value of $X$ given the information in $\mathscr{G}$, and is the $\mathscr{G}$-measurable random variable closest to $X$ in a sense. Indeed if $X \in \mathscr{L}_2(\mathscr{F})$ then $\E(X \mid \mathscr{G})$ is the projection of $X$ onto $\mathscr{L}_2(\mathscr{G})$. The collection of all conditional expected values of $X$ is uniformly integrable:
Suppose that $X$ is a real-valued random variable with $\E(|X|) \lt \infty$. Then $\{\E(X \mid \mathscr{G}): \mathscr{G} \text{ is a sub }\sigma\text{-algebra of } \mathscr{F}\}$ is uniformly integrable.
Proof
We use the characterization above. Let $\mathscr{G}$ be a sub $\sigma$-algebra of $\mathscr{F}$. Recall that $\left|\E(X \mid \mathscr{G})\right| \le \E(|X| \mid \mathscr{G})$ and hence $\E[|\E(X \mid \mathscr{G})|] \le \E[\E(|X| \mid \mathscr{G})] = \E(|X|)$ So property (a) holds. Next let $\epsilon \gt 0$. Since $\E(|X|) \lt \infty$, there exists $\delta \gt 0$ such that if $A \in \mathscr{F}$ and $\P(A) \lt \delta$ then $\E(|X|; A) \lt \epsilon$. Suppose that $A \in \mathscr{G}$ with $\P(A) \lt \delta$. Then $|\E(X \mid \mathscr{G})| \bs{1}_A \le \E(|X| \mid \mathscr{G}) \bs{1}_A$ so $\E[|\E(X \mid \mathscr{G})|; A] \le \E[\E(|X| \mid \mathscr{G}); A] = \E[\E(|X| \bs{1}_A \mid \mathscr{G})] = \E(|X|; A) \lt \epsilon$ So condition (b) holds. Note that the first equality in the displayed equation holds since $A \in \mathscr{G}$.
Note that the collection of sub $\sigma$-algebras of $\mathscr{F}$, and so also the collection of conditional expected values above, might well be uncountable. The conditional expected values range from $\E(X)$, when $\mathscr{G} = \{\Omega, \emptyset\}$, to $X$ itself, when $\mathscr{G} = \mathscr{F}$.
$\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The goal of this section is to study a type of mathematical object that arises naturally in the context of conditional expected value and parametric distributions, and is of fundamental importance in the study of stochastic processes, particularly Markov processes. In a sense, the main object of study in this section is a generalization of a matrix, and the operations are generalizations of matrix operations. If you keep this in mind, this section may seem less abstract.
Basic Theory
Definitions
Recall that a measurable space $(S, \mathscr S)$ consists of a set $S$ and a $\sigma$-algebra $\mathscr S$ of subsets of $S$. If $\mu$ is a positive measure on $(S, \mathscr S)$, then $(S, \mathscr S, \mu)$ is a measure space. The two most important special cases that we have studied frequently are
1. Discrete: $S$ is countable, $\mathscr S = \mathscr P(S)$ is the collection of all subsets of $S$, and $\mu = \#$ is counting measure on $(S, \mathscr S)$.
2. Euclidean: $S$ is a measurable subset of $\R^n$ for some $n \in \N_+$, $\mathscr S$ is the collection of subsets of $S$ that are also measurable, and $\mu = \lambda_n$ is $n$-dimensional Lebesgue measure on $(S, \mathscr S)$.
More generally, $S$ usually comes with a topology that is locally compact, Hausdorff, with a countable base (LCCB), and $\mathscr S$ is the Borel $\sigma$-algebra, the $\sigma$-algebra generated by the topology (the collection of open subsets of $S$). The measure $\mu$ is usually a Borel measure, and so satisfies $\mu(C) \lt \infty$ if $C \subseteq S$ is compact. A discrete measure space is of this type, corresponding to the discrete topology. A Euclidean measure space is also of this type, corresponding to the Euclidean topology, if $S$ is open or closed (which is usually the case). In the discrete case, every function from $S$ to another measurable space is measurable, and every function from $S$ to another topological space is continuous, so the measure theory is not really necessary.
Recall also that the measure space $(S, \mathscr S, \mu)$ is $\sigma$-finite if there exists a countable collection $\{A_i: i \in I\} \subseteq \mathscr S$ such that $\mu(A_i) \lt \infty$ for $i \in I$ and $S = \bigcup_{i \in I} A_i$. If $(S, \mathscr S, \mu)$ is a Borel measure space corresponding to an LCCB topology, then it is $\sigma$-finite.
If $f: S \to \R$ is measurable, define $\| f \| = \sup\{\left|f(x)\right|: x \in S\}$. Of course we may well have $\|f\| = \infty$. Let $\mathscr B(S)$ denote the collection of bounded measurable functions $f: S \to \R$. Under the usual operations of pointwise addition and scalar multiplication, $\mathscr B(S)$ is a vector space, and $\| \cdot \|$ is the natural norm on this space, known as the supremum norm. This vector space plays an important role.
In this section, it is sometimes more natural to write integrals with respect to the positive measure $\mu$ with the differential before the integrand, rather than after. However, rest assured that this is mere notation; the meaning of the integral is the same. So if $f: S \to \R$ is measurable then we may write the integral of $f$ with respect to $\mu$ in operator notation as $\mu f = \int_S \mu(dx) f(x)$ assuming, as usual, that the integral exists. This will be the case if $f$ is nonnegative, although $\infty$ is a possible value. More generally, the integral exists in $\R \cup \{-\infty, \infty\}$ if $\mu f^+ \lt \infty$ or $\mu f^- \lt \infty$ where $f^+$ and $f^-$ are the positive and negative parts of $f$. If both are finite, the integral exists in $\R$ (and $f$ is integrable with respect to $\mu$). If $\mu$ is a probability measure and we think of $(S, \mathscr S)$ as the sample space of a random experiment, then we can think of $f$ as a real-valued random variable, in which case our new notation is not too far from our traditional expected value $\E(f)$. Our main definition comes next.
Suppose that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. A kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ is a function $K: S \times \mathscr T \to [0, \infty]$ such that
1. $x \mapsto K(x, A)$ is a measurable function from $S$ into $[0, \infty]$ for each $A \in \mathscr T$.
2. $A \mapsto K(x, A)$ is a positive measure on $\mathscr T$ for each $x \in S$.
If $(T, \mathscr T) = (S, \mathscr S)$, then $K$ is said to be a kernel on $(S, \mathscr S)$.
There are several classes of kernels that deserve special names.
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. Then
1. $K$ is $\sigma$-finite if the measure $K(x, \cdot)$ is $\sigma$-finite for every $x \in S$.
2. $K$ is finite if $K(x, T) \lt \infty$ for every $x \in S$.
3. $K$ is bounded if $K(x, T)$ is bounded in $x \in S$.
4. $K$ is a probability kernel if $K(x, T) = 1$ for every $x \in S$.
Define $\|K\| = \sup\{K(x, T): x \in S\}$, so that $\|K\| \lt \infty$ if $K$ is a bounded kernel and $\|K\| = 1$ if $K$ is a probability kernel.
So a probability kernel is bounded, a bounded kernel is finite, and a finite kernel is $\sigma$-finite. The terms stochastic kernel and Markov kernel are also used for probability kernels. The terms are consistent with terms used for measures: $K$ is a finite kernel if and only if $K(x, \cdot)$ is a finite measure for each $x \in S$, and $K$ is a probability kernel if and only if $K(x, \cdot)$ is a probability measure for each $x \in S$. Note that $\|K\|$ is simply the supremum norm of the function $x \mapsto K(x, T)$.
A kernel defines two natural integral operators, by operating on the left with measures, and by operating on the right with functions. As usual, we are often a bit casual with questions of existence; basically, in this section we assume that any integrals mentioned exist.
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$.
1. If $\mu$ is a positive measure on $(S, \mathscr S)$, then $\mu K$ defined as follows is a positive measure on $(T, \mathscr T)$: $\mu K(A) = \int_S \mu(dx) K(x, A), \quad A \in \mathscr T$
2. If $f: T \to \R$ is measurable, then $K f: S \to \R$ defined as follows is measurable (assuming that the integrals exist in $\R$): $K f(x) = \int_T K(x, dy) f(y), \quad x \in S$
Proof
1. Clearly $\mu K(A) \ge 0$ for $A \in \mathscr T$. Suppose that $\{A_j: j \in J\}$ is a countable collection of disjoint sets in $\mathscr T$ and $A = \bigcup_{j \in J} A_j$. Then \begin{align*} \mu K(A) & = \int_S \mu(dx) K(x, A) = \int_S \mu(dx) \left(\sum_{j \in J} K(x, A_j) \right) \\ & = \sum_{j \in J} \int_S \mu(dx) K(x, A_j) = \sum_{j \in J} \mu K(A_j) \end{align*} The interchange of sum and integral is justified since the terms are nonnegative.
2. The measurability of $K f$ follows from the measurability of $f$ and of $x \mapsto K(x, A)$ for $A \in \mathscr T$, and from basic properties of the integral.
Thus, a kernel transforms measures on $(S, \mathscr S)$ into measures on $(T, \mathscr T)$, and transforms certain measurable functions from $T$ to $\R$ into measurable functions from $S$ to $\R$. Again, part (b) assumes that $f$ is integrable with respect to the measure $K(x, \cdot)$ for every $x \in S$. In particular, the last statement will hold in the following important special case:
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ and that $f \in \mathscr B(T)$.
1. If $K$ is finite then $Kf$ is defined and $\|Kf\| \le \|K\| \|f\|$.
2. If $K$ is bounded then $Kf \in \mathscr B(S)$.
Proof
1. If $K$ is finite then $K \left|f\right|(x) = \int_T K(x, dy) \left|f(y)\right| \le \int_T K(x, dy) \|f\| = \|f\| K(x, T) \lt \infty \quad x \in S$ Hence $f$ is integrable with respect to $K(x, \cdot)$ for each $x \in S$ so $Kf$ is defined. Continuing with our inequalities, we have $|K f(x)| \le K |f|(x) \le \|f\| K(x, T) \le \|f\| \|K\|$ so $\|Kf\| \le \|K\| \|f\|$. Moreover equality holds when $f = \bs{1}_T$, the constant function 1 on $T$.
2. If $K$ is bounded then $\|K\| \lt \infty$, so from (a), $\|K f\| \lt \infty$ and hence $Kf \in \mathscr B(S)$.
The identity kernel $I$ on the measurable space $(S, \mathscr S)$ is defined by $I(x, A) = \bs{1}(x \in A)$ for $x \in S$ and $A \in \mathscr S$.
Thus, $I(x, A) = 1$ if $x \in A$ and $I(x, A) = 0$ if $x \notin A$. So $x \mapsto I(x, A)$ is the indicator function of $A \in \mathscr S$, while $A \mapsto I(x, A)$ is point mass at $x \in S$. Clearly the identity kernel is a probability kernel. If we need to indicate the dependence on the particular space, we will add a subscript. The following result justifies the name.
Let $I$ denote the identity kernel on $(S, \mathscr S)$.
1. If $\mu$ is a positive measure on $(S, \mathscr S)$ then $\mu I = \mu$.
2. If $f: S \to \R$ is measurable, then $I f = f$.
Constructions
We can create a new kernel from two given kernels, by the usual operations of addition and scalar multiplication.
Suppose that $K$ and $L$ are kernels from $(S, \mathscr S)$ to $(T, \mathscr T)$, and that $c \in [0, \infty)$. Then $c K$ and $K + L$ defined below are also kernels from $(S, \mathscr S)$ to $(T, \mathscr T)$.
1. $(c K)(x, A) = c K(x, A)$ for $x \in S$ and $A \in \mathscr T$.
2. $(K + L)(x, A) = K(x, A) + L(x, A)$ for $x \in S$ and $A \in \mathscr T$.
If $K$ and $L$ are $\sigma$-finite (finite) (bounded) then $c K$ and $K + L$ are $\sigma$-finite (finite) (bounded), respectively.
Proof
These results are simple.
1. Since $x \mapsto K(x, A)$ is measurable for $A \in \mathscr T$, so is $x \mapsto c K(x, A)$. Since $A \mapsto K(x, A)$ is a positive measure on $(T, \mathscr T)$ for $x \in S$, so is $A \mapsto c K(x, A)$ since $c \ge 0$.
2. Since $x \mapsto K(x, A)$ and $x \mapsto L(x, A)$ are measurable for $A \in \mathscr T$, so is $x \mapsto K(x, A) + L(x, A)$. Since $A \mapsto K(x, A)$ and $A \mapsto L(x, A)$ are positive measures on $(T, \mathscr T)$ for $x \in S$, so is $A \mapsto K(x, A) + L(x, A)$.
A simple corollary of the last result is that if $a, \, b \in [0, \infty)$ then $a K + b L$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. In particular, if $K, \, L$ are probability kernels and $p \in (0, 1)$ then $p K + (1 - p) L$ is a probability kernel. A more interesting and important way to form a new kernel from two given kernels is via a multiplication operation.
Suppose that $K$ is a kernel from $(R, \mathscr R)$ to $(S, \mathscr S)$ and that $L$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. Then $K L$ defined as follows is a kernel from $(R, \mathscr R)$ to $(T, \mathscr T)$: $K L(x, A) = \int_S K(x, dy) L(y, A), \quad x \in R, \, A \in \mathscr T$
1. If $K$ is finite and $L$ is bounded then $K L$ is finite.
2. If $K$ and $L$ are bounded then $K L$ is bounded.
3. If $K$ and $L$ are stochastic then $K L$ is stochastic.
Proof
The measurability of $x \mapsto (K L)(x, A)$ for $A \in \mathscr T$ follows from basic properties of the integral. For the second property, fix $x \in R$. Clearly $K L(x, A) \ge 0$ for $A \in \mathscr T$. Suppose that $\{A_j: j \in J\}$ is a countable collection of disjoint sets in $\mathscr T$ and $A = \bigcup_{j \in J} A_j$. Then \begin{align*} K L(x, A) & = \int_S K(x, dy) L(y, A) = \int_S K(x, dy) \left(\sum_{j \in J} L(y, A_j)\right) \\ & = \sum_{j \in J} \int_S K(x, dy) L(y, A_j) = \sum_{j \in J} K L(x, A_j) \end{align*} The interchange of sum and integral is justified since the terms are nonnegative.
Once again, the identity kernel lives up to its name:
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. Then
1. $I_S K = K$
2. $K I_T = K$
The next several results show that the operations are associative whenever they make sense.
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, $\mu$ is a positive measure on $\mathscr S$, $c \in [0, \infty)$, and $f: T \to \R$ is measurable. Then, assuming that the appropriate integrals exist,
1. $c (\mu K) = (c \mu) K$
2. $c (K f) = (c K) f$
3. $(\mu K) f = \mu (K f)$
Proof
These results follow easily from the definitions.
1. The common measure on $\mathscr T$ is $c \mu K(A) = c \int_S \mu(dx) K(x, A)$ for $A \in \mathscr T$.
2. The common function from $S$ to $\R$ is $c K f(x) = c \int_T K(x, dy) f(y)$ for $x \in S$, assuming that the integral exists for $x \in S$.
3. The common real number is $\mu K f = \int_S \mu(dx) \int_T K(x, dy) f(y)$, assuming that the integrals exist.
Suppose that $K$ is a kernel from $(R, \mathscr R)$ to $(S, \mathscr S)$ and $L$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. Suppose also that $\mu$ is a positive measure on $(R, \mathscr R)$, $f: T \to \R$ is measurable, and $c \in [0, \infty)$. Then, assuming that the appropriate integrals exist,
1. $(\mu K) L = \mu (K L)$
2. $K ( L f) = (K L) f$
3. $c (K L) = (c K) L$
Proof
These results follow easily from the definitions.
1. The common measure on $(T, \mathscr T)$ is $\mu K L(A) = \int_R \mu(dx) \int_S K(x, dy) L(y, A)$ for $A \in \mathscr T$.
2. The common measurable function from $R$ to $\R$ is $K L f(x) = \int_S K(x, dy) \int_T L(y, dz) f(z)$ for $x \in R$, assuming that the integral exists for each $x \in R$.
3. The common kernel from $(R, \mathscr R)$ to $(T, \mathscr T)$ is $c K L(x, A) = c \int_S K(x, dy) L(y, A)$ for $x \in R$ and $A \in \mathscr T$.
Suppose that $K$ is a kernel from $(R, \mathscr R)$ to $(S, \mathscr S)$, $L$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, and $M$ is a kernel from $(T, \mathscr T)$ to $(U, \mathscr U)$. Then $(K L) M = K (L M)$.
Proof
This result follows easily from the definitions. The common kernel from $(R, \mathscr R)$ to $(U, \mathscr U)$ is $K L M(x, A) = \int_S K(x, dy) \int_T L(y, dz) M(z, A), \quad x \in R, \, A \in \mathscr U$
The next several results show that the distributive property holds whenever the operations make sense.
Suppose that $K$ and $L$ are kernels from $(R, \mathscr R)$ to $(S, \mathscr S)$ and that $M$ and $N$ are kernels from $(S, \mathscr S)$ to $(T, \mathscr T)$. Suppose also that $\mu$ is a positive measure on $(R, \mathscr R)$ and that $f: S \to \R$ is measurable. Then, assuming that the appropriate integrals exist,
1. $(K + L) M = K M + L M$
2. $K (M + N) = K M + K N$
3. $\mu (K + L) = \mu K + \mu L$
4. $(K + L) f = K f + L f$
Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, and that $\mu$ and $\nu$ are positive measures on $(S, \mathscr S)$, and that $f$ and $g$ are measurable functions from $T$ to $\R$. Then, assuming that the appropriate integrals exist,
1. $(\mu + \nu) K = \mu K + \nu K$
2. $K(f + g) = K f + K g$
3. $\mu(f + g) = \mu f + \mu g$
4. $(\mu + \nu) f = \mu f + \nu f$
In particular, note that if $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, then the transformation $\mu \mapsto \mu K$ defined for positive measures on $(S, \mathscr S)$, and the transformation $f \mapsto K f$ defined for measurable functions $f: T \to \R$ (for which $K f$ exists), are both linear operators. If $\mu$ is a positive measure on $(S, \mathscr S)$, then the integral operator $f \mapsto \mu f$ defined for measurable $f: S \to \R$ (for which $\mu f$ exists) is also linear, but of course, we already knew that. Finally, note that the operator $f \mapsto K f$ is positive: if $f \ge 0$ then $K f \ge 0$. Here is the important summary of our results when the kernel is bounded.
If $K$ is a bounded kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, then $f \mapsto K f$ is a bounded, linear transformation from $\mathscr B(T)$ to $\mathscr B(S)$ and $\|K\|$ is the norm of the transformation.
The commutative property for the product of kernels fails with a passion. If $K$ and $L$ are kernels, then depending on the measurable spaces, $K L$ may be well defined, but not $L K$. Even if both products are defined, they may be kernels from or to different measurable spaces. Even if both are defined from and to the same measurable spaces, it may well happen that $K L \neq L K$. Some examples are given below.
If $K$ is a kernel on $(S, \mathscr S)$ and $n \in \N$, we let $K^n = K K \cdots K$, the $n$-fold power of $K$. By convention, $K^0 = I$, the identity kernel on $S$.
Fixed points of the operators associated with a kernel turn out to be very important.
Suppose that $K$ is a kernel on $(S, \mathscr S)$.
1. A positive measure $\mu$ on $(S, \mathscr S)$ such that $\mu K = \mu$ is said to be invariant for $K$.
2. A measurable function $f: S \to \R$ such that $K f = f$ is said to be invariant for $K$.
So in the language of linear algebra (or functional analysis), an invariant measure is a left eigenvector of the kernel, while an invariant function is a right eigenvector of the kernel, both corresponding to the eigenvalue 1. By our results above, if $\mu$ and $\nu$ are invariant measures and $c \in [0, \infty)$, then $\mu + \nu$ and $c \mu$ are also invariant. Similarly, if $f$ and $g$ are invariant functions and $c \in \R$, then $f + g$ and $c f$ are also invariant.
Of course we are particularly interested in probability kernels.
Suppose that $P$ is a probability kernel from $(R, \mathscr R)$ to $(S, \mathscr S)$ and that $Q$ is a probability kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$. Suppose also that $\mu$ is a probability measure on $(R, \mathscr R)$. Then
1. $P Q$ is a probability kernel from $(R, \mathscr R)$ to $(T, \mathscr T)$.
2. $\mu P$ is a probability measure on $(S, \mathscr S)$.
Proof
1. We know that $P Q$ is a kernel from $(R, \mathscr R)$ to $(T, \mathscr T)$. So we just need to note that $P Q(x, T) = \int_S P(x, dy) Q(y, T) = \int_S P(x, dy) = P(x, S) = 1, \quad x \in R$
2. We know that $\mu P$ is a positive measure on $(S, \mathscr S)$. So we just need to note that $\mu P(S) = \int_R \mu(dx) P(x, S) = \int_R \mu(dx) = \mu(R) = 1$
As a corollary, it follows that if $P$ is a probability kernel on $(S, \mathscr S)$, then so is $P^n$ for $n \in \N$.
The operators associated with a kernel are of fundamental importance, and we can easily recover the kernel from the operators. Suppose that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, and let $x \in S$ and $A \in \mathscr T$. Then trivially, $K \bs{1}_A(x) = K(x, A)$ where as usual, $\bs{1}_A$ is the indicator function of $A$. Trivially also $\delta_x K(A) = K(x, A)$ where $\delta_x$ is point mass at $x$.
Kernel Functions
Usually our measurable spaces are in fact measure spaces, with natural measures associated with the spaces, as in the special cases described in (1). When we start with measure spaces, kernels are usually constructed from density functions in much the same way that positive measures are defined from density functions.
Suppose that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are measure spaces. As usual, $S \times T$ is given the product $\sigma$-algebra $\mathscr S \otimes \mathscr T$. If $k: S \times T \to [0, \infty)$ is measurable, then the function $K$ defined as follows is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$: $K(x, A) = \int_A k(x, y) \mu(dy), \quad x \in S, \, A \in \mathscr T$
Proof
The measurability of $x \mapsto K(x, A) = \int_A k(x, y) \mu(dy)$ for $A \in \mathscr T$ follows from a basic property of the integral. The fact that $A \mapsto K(x, A) = \int_A k(x, y) \mu(dy)$ is a positive measure on $\mathscr T$ for $x \in S$ also follows from a basic property of the integral. In fact, $y \mapsto k(x, y)$ is the density of this measure with respect to $\mu$.
Clearly the kernel $K$ depends on the positive measure $\mu$ on $(T, \mathscr T)$ as well as the function $k$, while the measure $\lambda$ on $(S, \mathscr S)$ plays no role (and so is not even necessary). But again, our point of view is that the spaces have fixed, natural measures. Appropriately enough, the function $k$ is called a kernel density function (with respect to $\mu$), or simply a kernel function.
Suppose again that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are measure spaces. Suppose also $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ with kernel function $k$. If $f: T \to \R$ is measurable, then, assuming that the integrals exist, $K f(x) = \int_T k(x, y) f(y) \mu(dy), \quad x \in S$
Proof
This follows since the function $y \mapsto k(x, y)$ is the density of the measure $A \mapsto K(x, A)$ with respect to $\mu$: $K f(x) = \int_T K(x, dy) f(y) = \int_T k(x, y) f(y) \mu(dy), \quad x \in S$
A kernel function defines an operator on the left with functions on $S$ in a completely analogous way to the operator on the right above with functions on $T$.
Suppose again that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are measure spaces, and that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ with kernel function $k$. If $f: S \to \R$ is measurable, then the function $f K: T \to \R$ defined as follows is also measurable, assuming that the integrals exist: $f K(y) = \int_S \lambda(dx) f(x) k(x, y), \quad y \in T$
The operator defined above depends on the measure $\lambda$ on $(S, \mathscr S)$ as well as the kernel function $k$, while the measure $\mu$ on $(T, \mathscr T)$ plays no role (and so is not even necessary). But again, our point of view is that the spaces have fixed, natural measures. Here is how our new operation on the left with functions relates to our old operation on the left with measures.
Suppose again that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are measure spaces, and that $K$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ with kernel function $k$. Suppose also that $f: S \to [0, \infty)$ is measurable, and let $\rho$ denote the measure on $(S, \mathscr S)$ that has density $f$ with respect to $\lambda$. Then $f K$ is the density of the measure $\rho K$ with respect to $\mu$.
Proof
The main tool, as usual, is an interchange of integrals. For $B \in \mathscr T$, \begin{align*} \rho K(B) & = \int_S \rho(dx) K(x, B) = \int_S f(x) K(x, B) \lambda(dx) = \int_S f(x) \left[\int_B k(x, y) \mu(dy)\right] \lambda(dx) \\ & = \int_B \left[\int_S f(x) k(x, y) \lambda(dx)\right] \mu(dy) = \int_B f K(y) \mu(dy) \end{align*}
As always, we are particularly interested in stochastic kernels. With a kernel function, we can have doubly stochastic kernels.
Suppose again that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are measure spaces and that $k: S \times T \to [0, \infty)$ is measurable. Then $k$ is a doubly stochastic kernel function if
1. $\int_T k(x, y) \mu(dy) = 1$ for $x \in S$
2. $\int_S \lambda(dx) k(x, y) = 1$ for $y \in T$
Of course, condition (a) simply means that the kernel associated with $k$ is a stochastic kernel according to our original definition.
The most common and important special case is when the two spaces are the same. Thus, if $(S, \mathscr S, \lambda)$ is a measure space and $k : S \times S \to [0, \infty)$ is measurable, then we have an operator $K$ that operates on the left and on the right with measurable functions $f: S \to \R$: \begin{align*} f K(y) & = \int_S \lambda(dx) f(x) k(x, y), \quad y \in S \\ K f(x) & = \int_S k(x, y) f(y) \lambda(dy), \quad x \in S \end{align*} If $f$ is nonnegative and $\mu$ is the measure on $(S, \mathscr S)$ with density function $f$, then $f K$ is the density function of the measure $\mu K$ (both with respect to $\lambda$).
Suppose again that $(S, \mathscr S, \lambda)$ is a measure space and $k : S \times S \to [0, \infty)$ is measurable. Then $k$ is symmetric if $k(x, y) = k(y, x)$ for all $(x, y) \in S^2$.
Of course, if $k$ is a symmetric, stochastic kernel function on $(S, \mathscr S, \lambda)$ then $k$ is doubly stochastic, but the converse is not true.
Suppose that $(R, \mathscr R, \lambda)$, $(S, \mathscr S, \mu)$, and $(T, \mathscr T, \rho)$ are measure spaces. Suppose also that $K$ is a kernel from $(R, \mathscr R)$ to $(S, \mathscr S)$ with kernel function $k$, and that $L$ is a kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$ with kernel function $l$. Then the kernel $K L$ from $(R, \mathscr R)$ to $(T, \mathscr T)$ has density $k l$ given by $k l(x, z) = \int_S k(x, y) l(y, z) \mu(dy), \quad (x, z) \in R \times T$
Proof
Once again, the main tool is an interchange of integrals via Fubini's theorem. Let $x \in R$ and $B \in \mathscr T$. Then \begin{align*} K L(x, B) & = \int_S K(x, dy) L(y, B) = \int_S k(x, y) L(y, B) \mu(dy) \\ & = \int_S k(x, y) \left[\int_B l(y, z) \rho(dz) \right] \mu(dy) = \int_B \left[\int_S k(x, y) l(y, z) \mu(dy) \right] \rho(dz) = \int_B k l(x, z) \rho(dz) \end{align*}
Examples and Special Cases
The Discrete Case
In this subsection, we assume that the measure spaces are discrete, as described in (1). Since the $\sigma$-algebra (all subsets) and the measure (counting measure) are understood, we don't need to reference them. Recall that integrals with respect to counting measure are sums. Suppose now that $K$ is a kernel from the discrete space $S$ to the discrete space $T$. For $x \in S$ and $y \in T$, let $K(x, y) = K(x, \{y\})$. Then more generally, $K(x, A) = \sum_{y \in A} K(x, y), \quad x \in S, \, A \subseteq T$ The function $(x, y) \mapsto K(x, y)$ is simply the kernel function of the kernel $K$, as defined above, but in this case we usually don't bother with using a different symbol for the function as opposed to the kernel. The function $K$ can be thought of as a matrix, with rows indexed by $S$ and columns indexed by $T$ (and so an infinite matrix if $S$ or $T$ is countably infinite). With this interpretation, all of the operations defined above can be thought of as matrix operations. If $f: T \to \R$ and $f$ is thought of as a column vector indexed by $T$, then $K f$ is simply the ordinary product of the matrix $K$ and the vector $f$; the product is a column vector indexed by $S$: $K f(x) = \sum_{y \in T} K(x, y) f(y), \quad x \in S$ Similarly, if $f: S \to \R$ and $f$ is thought of as a row vector indexed by $S$, then $f K$ is simply the ordinary product of the vector $f$ and the matrix $K$; the product is a row vector indexed by $T$: $f K(y) = \sum_{x \in S} f(x) K(x, y), \quad y \in T$ If $L$ is another kernel from $T$ to another discrete space $U$, then as functions, $K L$ is simply the matrix product of $K$ and $L$: $K L(x, z) = \sum_{y \in T} K(x, y) L(y, z), \quad (x, z) \in S \times U$
Let $S = \{1, 2, 3\}$ and $T = \{1, 2, 3, 4\}$. Define the kernel $K$ from $S$ to $T$ by $K(x, y) = x + y$ for $(x, y) \in S \times T$. Define the function $f$ on $S$ by $f(x) = x!$ for $x \in S$, and define the function $g$ on $T$ by $g(y) = y^2$ for $y \in T$. Compute each of the following using matrix algebra:
1. $f K$
2. $K g$
Answer
In matrix form, $K = \left[\begin{matrix} 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 6 \\ 4 & 5 & 6 & 7 \end{matrix} \right], \quad f = \left[\begin{matrix} 1 & 2 & 6 \end{matrix} \right], \quad g = \left[\begin{matrix} 1 \\ 4 \\ 9 \\ 16 \end{matrix} \right]$
1. As a row vector indexed by $T$, the product is $f K = \left[\begin{matrix} 32 & 41 & 50 & 59\end{matrix}\right]$
2. As a column vector indexed by $S$, $K g = \left[\begin{matrix} 130 \\ 160 \\ 190 \end{matrix}\right]$
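These matrix computations are easy to verify numerically. Here is a sketch in Python with NumPy (our choice of tool, not part of the text):

```python
import numpy as np

K = np.array([[2, 3, 4, 5],
              [3, 4, 5, 6],
              [4, 5, 6, 7]])   # K(x, y) = x + y; rows indexed by S, columns by T
f = np.array([1, 2, 6])        # f(x) = x!, as a row vector indexed by S
g = np.array([1, 4, 9, 16])    # g(y) = y^2, as a column vector indexed by T

print(f @ K)                   # fK: [32 41 50 59]
print(K @ g)                   # Kg: [130 160 190]
```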
Let $R = \{0, 1\}$, $S = \{a, b\}$, and $T = \{1, 2, 3\}$. Define the kernel $K$ from $R$ to $S$, the kernel $L$ from $S$ to $S$ and the kernel $M$ from $S$ to $T$ in matrix form as follows: $K = \left[\begin{matrix} 1 & 4 \\ 2 & 3\end{matrix}\right], \; L = \left[\begin{matrix} 2 & 2 \\ 1 & 5 \end{matrix}\right], \; M = \left[\begin{matrix} 1 & 0 & 2 \\ 0 & 3 & 1 \end{matrix} \right]$ Compute each of the following kernels, or explain why the operation does not make sense:
1. $K L$
2. $L K$
3. $K^2$
4. $L^2$
5. $K M$
6. $L M$
Answer
Note that these are not just abstract matrices, but rather have rows and columns indexed by the appropriate spaces. So the products make sense only when the spaces match appropriately; it's not just a matter of the number of rows and columns.
1. $K L$ is the kernel from $R$ to $S$ given by $K L = \left[\begin{matrix} 6 & 22 \\ 7 & 19 \end{matrix} \right]$
2. $L K$ is not defined since the column space $S$ of $L$ is not the same as the row space $R$ of $K$.
3. $K^2$ is not defined since the row space $R$ is not the same as the column space $S$.
4. $L^2$ is the kernel from $S$ to $S$ given by $L^2 = \left[\begin{matrix} 6 & 14 \\ 7 & 27 \end{matrix}\right]$
5. $K M$ is the kernel from $R$ to $T$ given by $K M = \left[\begin{matrix} 1 & 12 & 6 \\ 2 & 9 & 7 \end{matrix} \right]$
6. $L M$ is the kernel from $S$ to $T$ given by $L M = \left[\begin{matrix} 2 & 6 & 6 \\ 1 & 15 & 7 \end{matrix}\right]$
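Here is the same exercise as a NumPy sketch (our choice of tool). One caveat: NumPy checks only the shapes of the matrices, so a product like `L @ K` runs without error even though $L K$ is undefined as a kernel; the spaces indexing the rows and columns, not the shapes alone, determine which products make sense.

```python
import numpy as np

K = np.array([[1, 4], [2, 3]])        # kernel from R to S
L = np.array([[2, 2], [1, 5]])        # kernel from S to S
M = np.array([[1, 0, 2], [0, 3, 1]])  # kernel from S to T

print(K @ L)    # KL, from R to S: [[6 22] [7 19]]
print(L @ L)    # L^2, on S: [[6 14] [7 27]]
print(K @ M)    # KM, from R to T: [[1 12 6] [2 9 7]]
print(L @ M)    # LM, from S to T: [[2 6 6] [1 15 7]]
```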
Conditional Probability
An important class of probability kernels arises from the distribution of one random variable, conditioned on the value of another random variable. In this subsection, suppose that $(\Omega, \mathscr{F}, \P)$ is a probability space, and that $(S, \mathscr S)$ and $(T, \mathscr T)$ are measurable spaces. Further, suppose that $X$ and $Y$ are random variables defined on the probability space, with $X$ taking values in $S$ and that $Y$ taking values in $T$. Informally, $X$ and $Y$ are random variables defined on the same underlying random experiment.
The function $P$ defined as follows is a probability kernel from $(S, \mathscr S)$ to $(T, \mathscr T)$, known as the conditional probability kernel of $Y$ given $X$. $P(x, A) = \P(Y \in A \mid X = x), \quad x \in S, \, A \in \mathscr T$
Proof
Recall that for $A \in \mathscr T$, the conditional probability $\P(Y \in A \mid X)$ is itself a random variable, and is measurable with respect to $\sigma(X)$. That is, $\P(Y \in A \mid X) = P(X, A)$ for some measurable function $x \mapsto P(x, A)$ from $S$ to $[0, 1]$. Then, by definition, $\P(Y \in A \mid X = x) = P(x, A)$. Trivially, of course, $A \mapsto P(x, A)$ is a probability measure on $(T, \mathscr T)$ for $x \in S$.
The operators associated with this kernel have natural interpretations.
Let $P$ be the conditional probability kernel of $Y$ given $X$.
1. If $f: T \to \R$ is measurable, then $Pf(x) = \E[f(Y) \mid X = x]$ for $x \in S$ (assuming as usual that the expected value exists).
2. If $\mu$ is the probability distribution of $X$ then $\mu P$ is the probability distribution of $Y$.
Proof
These are basic results that we have already studied, dressed up in new notation.
1. Since $A \mapsto P(x, A)$ is the conditional distribution of $Y$ given $X = x$, $\E[f(Y) \mid X = x] = \int_S P(x, dy) f(y) = P f(x)$
2. Let $A \in \mathscr T$. Conditioning on $X$ gives $\P(Y \in A) = \E[\P(Y \in A \mid X)] = \int_S \mu(dx) \P(Y \in A \mid X = x) = \int_S \mu(dx) P(x, A) = \mu P(A)$
As in the general discussion above, the measurable spaces $(S, \mathscr S)$ and $(T, \mathscr T)$ are usually measure spaces with natural measures attached. So the conditional probability distributions are often given via conditional probability density functions, which then play the role of kernel functions. The next two exercises give examples.
Suppose that $X$ and $Y$ are random variables for an experiment, taking values in $\R$. For $x \in \R$, the conditional distribution of $Y$ given $X = x$ is normal with mean $x$ and standard deviation 1. Use the notation and operations of this section for the following computations:
1. Give the kernel function for the conditional distribution of $Y$ given $X$.
2. Find $\E\left(Y^2 \bigm| X = x\right)$.
3. Suppose that $X$ has the standard normal distribution. Find the probability density function of $Y$.
Answer
1. The kernel function (with respect to Lebesgue measure, of course) is $p(x, y) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} (y - x)^2}, \quad x, \, y \in \R$
2. Let $g(y) = y^2$ for $y \in \R$. Then $\E\left(Y^2 \bigm| X = x\right) = P g(x) = 1 + x^2$ for $x \in \R$
3. The standard normal PDF $f$ is given by $f(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}$ for $x \in \R$. Thus $Y$ has PDF $f P$: $f P(y) = \int_{-\infty}^\infty f(x) p(x, y) dx = \frac{1}{2 \sqrt{\pi}} e^{-\frac{1}{4} y^2}, \quad y \in \R$ This is the PDF of the normal distribution with mean 0 and variance 2.
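Part (c) is easy to check by simulation. Here is a sketch in Python (our own illustration): generate $X$ from the standard normal distribution and then $Y$ from the conditional distribution, normal with mean $X$ and standard deviation 1; the sample mean and variance of $Y$ should be close to 0 and 2.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
m = 10**6

x = rng.standard_normal(m)       # X has the standard normal distribution
y = x + rng.standard_normal(m)   # given X = x, Y is normal with mean x, sd 1

print(np.mean(y), np.var(y))     # approximately 0 and 2
```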
Suppose that $X$ and $Y$ are random variables for an experiment, with $X$ taking values in $\{a, b, c\}$ and $Y$ taking values in $\{1, 2, 3, 4\}$. The kernel function of $Y$ given $X$ is as follows: $P(a, y) = 1/4$, $P(b, y) = y / 10$, and $P(c, y) = y^2/30$, each for $y \in \{1, 2, 3, 4\}$.
1. Give the kernel $P$ in matrix form and verify that it is a probability kernel.
2. Find $f P$ where $f(a) = f(b) = f(c) = 1/3$. The result is the density function of $Y$ given that $X$ is uniformly distributed.
3. Find $P g$ where $g(y) = y$ for $y \in \{1, 2, 3, 4\}$. The resulting function is $\E(Y \mid X = x)$ for $x \in \{a, b, c\}$.
Answer
1. $P$ is given in matrix form below. Note that the row sums are 1. $P = \left[\begin{matrix} \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{10} & \frac{2}{10} & \frac{3}{10} & \frac{4}{10} \\ \frac{1}{30} & \frac{4}{30} & \frac{9}{30} & \frac{16}{30} \end{matrix} \right]$
2. In matrix form, $f = \left[\begin{matrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{matrix} \right]$ and $f P = \left[\begin{matrix} \frac{23}{180} & \frac{35}{180} & \frac{51}{180} & \frac{71}{180} \end{matrix} \right]$.
3. In matrix form, $g = \left[\begin{matrix} 1 \\ 2 \\ 3 \\ 4 \end{matrix} \right], \quad P g = \left[\begin{matrix} \frac{5}{2} \\ 3 \\ \frac{10}{3} \end{matrix} \right]$
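The same computations as a NumPy sketch (our choice of tool):

```python
import numpy as np

P = np.array([[1/4, 1/4, 1/4, 1/4],
              [1/10, 2/10, 3/10, 4/10],
              [1/30, 4/30, 9/30, 16/30]])  # rows indexed by {a, b, c}

print(P.sum(axis=1))           # each row sums to 1, so P is a probability kernel

f = np.array([1/3, 1/3, 1/3])  # the uniform distribution for X
print(f @ P)                   # density of Y: [23/180, 35/180, 51/180, 71/180]

g = np.array([1, 2, 3, 4])     # g(y) = y
print(P @ g)                   # E(Y | X = x): [5/2, 3, 10/3]
```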
Parametric Distributions
A parametric probability distribution also defines a probability kernel in a natural way, with the parameter playing the role of the kernel variable, and the distribution playing the role of the measure. Such distributions are usually defined in terms of a parametric density function which then defines a kernel function, again with the parameter playing the role of the first argument and the variable the role of the second argument. If the parameter is thought of as a given value of another random variable, as in Bayesian analysis, then there is considerable overlap with the previous subsection. In most cases (and in particular in the examples below), the spaces involved are either discrete or Euclidean, as described in (1).
Consider the parametric family of exponential distributions. Let $f$ denote the identity function on $(0, \infty)$.
1. Give the probability density function as a probability kernel function $p$ on $(0, \infty)$.
2. Find $P f$.
3. Find $f P$.
4. Find $p^2$, the kernel function corresponding to the product kernel $P^2$.
Answer
1. $p(r, x) = r e^{-r x}$ for $r, \, x \in (0, \infty)$.
2. For $r \in (0, \infty)$, $P f(r) = \int_0^\infty p(r, x) f(x) \, dx = \int_0^\infty x r e^{-r x} dx = \frac{1}{r}$ This is the mean of the exponential distribution.
3. For $x \in (0, \infty)$, $f P(x) = \int_0^\infty f(r) p(r, x) \, dr = \int_0^\infty r^2 e^{-r x} dr = \frac{2}{x^3}$
4. For $r, \, y \in (0, \infty)$, $p^2(r, y) = \int_0^\infty p(r, x) p(x, y) \, dx = \int_0^\infty r x e^{-(r + y) x} dx = \frac{r}{(r + y)^2}$
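These integrals can be checked by numerical quadrature. Here is a sketch in Python with SciPy (our choice of tool; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def p(r, x):
    """Exponential kernel function p(r, x) = r * exp(-r x)."""
    return r * np.exp(-r * x)

r, y = 2.5, 1.7

Pf, _ = quad(lambda x: p(r, x) * x, 0, np.inf)        # P f(r)
print(Pf, 1 / r)                                      # both 0.4

p2, _ = quad(lambda x: p(r, x) * p(x, y), 0, np.inf)  # p^2(r, y)
print(p2, r / (r + y) ** 2)                           # both approximately 0.1417
```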
Consider the parametric family of Poisson distributions. Let $f$ be the identity function on $\N$ and let $g$ be the identity function on $(0, \infty)$.
1. Give the probability density function $p$ as a probability kernel function from $(0, \infty)$ to $\N$.
2. Show that $P f = g$.
3. Show that $g P = f + 1$.
Answer
1. $p(r, n) = e^{-r} \frac{r^n}{n!}$ for $r \in (0, \infty)$ and $n \in \N$.
2. For $r \in (0, \infty)$, $P f(r)$ is the mean of the Poisson distribution with parameter $r$: $P f(r) = \sum_{n=0}^\infty p(r, n) f(n) = \sum_{n=0}^\infty n e^{-r} \frac{r^n}{n!} = r$
3. For $n \in \N$, $g P(n) = \int_0^\infty g(r) p(r, n) \, dr = \int_0^\infty e^{-r} \frac{r^{n+1}}{n!} dr = \frac{(n + 1)!}{n!} = n + 1$
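Again, a numerical check is straightforward. Here is a sketch in Python with SciPy (our own illustration, with arbitrary parameter values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson

r = 3.7
# P f(r) is the Poisson mean, estimated here by a truncated sum
Pf = sum(k * poisson.pmf(k, r) for k in range(200))
print(Pf, r)                     # both 3.7

n = 4
# g P(n) is the integral over r of r * e^{-r} r^n / n!, which is n + 1
gP, _ = quad(lambda r: r * poisson.pmf(n, r), 0, np.inf)
print(gP, n + 1)                 # both 5
```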
Clearly the Poisson distribution has some very special and elegant properties. The next family of distributions also has some very special properties. Compare this exercise with the exercise (30).
Consider the family of normal distributions, parameterized by the mean and with variance 1.
1. Give the probability density function as a probability kernel function $p$ on $\R$.
2. Show that $p$ is symmetric.
3. Let $f$ be the identity function on $\R$. Show that $P f = f$ and $f P = f$.
4. For $n \in \N$, find $p^n$ the kernel function for the operator $P^n$.
Answer
1. For $\mu, \, x \in \R$, $p(\mu, x) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}(x - \mu)^2}$ That is, $x \mapsto p(\mu, x)$ is the normal probability density function with mean $\mu$ and variance 1.
2. Note that $p(\mu, x) = p(x, \mu)$ for $\mu, \, x \in \R$. So $\mu \mapsto p(\mu, x)$ is the normal probability density function with mean $x$ and variance 1.
3. Since $f(x) = x$ for $x \in \R$, this follows from the previous two parts: $P f(\mu) = \mu$ for $\mu \in \R$ and $f P(x) = x$ for $x \in \R$
4. For $\mu, \, x \in \R$, $p^2(\mu, x) = \int_{-\infty}^\infty p(\mu, t) p(t, x) \, dt = \frac{1}{\sqrt{4 \pi}} e^{-\frac{1}{4}(x - \mu)^2}$ so that $x \mapsto p^2(\mu, x)$ is the normal PDF with mean $\mu$ and variance 2. By induction, $p^n(\mu, x) = \frac{1}{\sqrt{2 \pi n}} e^{-\frac{1}{2 n}(x - \mu)^2}$ for $n \in \N_+$ and $\mu, \, x \in \R$. Thus $x \mapsto p^n(\mu, x)$ is the normal PDF with mean $\mu$ and variance $n$.
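The formula for $p^2$ can also be checked by simulation: running the kernel twice amounts to adding two independent standard normal steps, so the result should be normal with mean $\mu$ and variance 2. A sketch in Python (our own illustration, with an arbitrary value of $\mu$):

```python
import numpy as np

rng = np.random.default_rng(seed=4)
m = 10**6
mu = 1.5

t = mu + rng.standard_normal(m)   # one step of the kernel: normal(mu, 1)
x = t + rng.standard_normal(m)    # a second step: normal(t, 1) given t

print(np.mean(x), np.var(x))      # approximately mu and 2
```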
For each of the following special distributions, express the probability density function as a probability kernel function. Be sure to specify the parameter spaces.
1. The general normal distribution on $\R$.
2. The beta distribution on $(0, 1)$.
3. The negative binomial distribution on $\N$.
Answer
1. The normal distribution with mean $\mu$ and standard deviation $\sigma$ defines a kernel function $p$ from $\R \times (0, \infty)$ to $\R$ given by $p[(\mu, \sigma), x] = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right]$
2. The beta distribution with left parameter $a$ and right parameter $b$ defines a kernel function $p$ from $(0, \infty)^2$ to $(0, 1)$ given by $p[(a, b), x] = \frac{1}{B(a, b)} x^{a - 1} (1 - x)^{b - 1}$ where $B$ is the beta function.
3. The negative binomial distribution with stopping parameter $k$ and success parameter $\alpha$ defines a kernel function $p$ from $(0, \infty) \times (0, 1)$ to $\N$ given by $p[(k, \alpha), n] = \binom{n + k - 1}{n} \alpha^k (1 - \alpha)^n$
In this chapter, we study several general families of probability distributions and a number of special parametric families of distributions. Unlike the other expository chapters in this text, the sections are not linearly ordered and so this chapter serves primarily as a reference. You may want to study these topics as the need arises.
First, we need to discuss what makes a probability distribution special in the first place. In some cases, a distribution may be important because it is connected with other special distributions in interesting ways (via transformations, limits, conditioning, etc.). In some cases, a parametric family may be important because it can be used to model a wide variety of random phenomena. This may be the case because of fundamental underlying principles, or simply because the family has a rich collection of probability density functions with a small number of parameters (usually 3 or less). As a general philosophical principle, we try to model a random process with as few parameters as possible; this is sometimes referred to as the principle of parsimony of parameters. In turn, this is a special case of Ockham's razor, named in honor of William of Ockham, the principle that states that one should use the simplest model that adequately describes a given phenomenon. Parsimony is important because often the parameters are not known and must be estimated.
In many cases, a special parametric family of distributions will have one or more distinguished standard members, corresponding to specified values of some of the parameters. Usually the standard distributions will be mathematically simplest, and often other members of the family can be constructed from the standard distributions by simple transformations on the underlying standard random variable.
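For example, in a location-scale family the general member is obtained from the standard variable $Z$ by the transformation $X = \mu + \sigma Z$. Here is a minimal sketch in Python (our own illustration, with arbitrary parameter values and the normal family as the example):

```python
import numpy as np

rng = np.random.default_rng(seed=5)
mu, sigma = 2.0, 3.0            # hypothetical location and scale parameters

z = rng.standard_normal(10**6)  # Z has the standard normal distribution
x = mu + sigma * z              # X is normal with mean mu and sd sigma

print(np.mean(x), np.std(x))    # approximately 2 and 3
```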
An incredible variety of special distributions have been studied over the years, and new ones are constantly being added to the literature. To truly deserve the adjective special, a distribution should have a certain level of mathematical elegance and economy, and should arise in interesting and diverse applications.
• 5.1: Location-Scale Families
As usual, our starting point is a random experiment modeled by a probability space (Ω,F,P), so that Ω is the set of outcomes, F the collection of events, and P the probability measure on the sample space (Ω,F). In this section, we assume that we have a fixed random variable Z defined on the probability space, taking values in R.
• 5.2: General Exponential Families
Many of the special distributions studied in this chapter are general exponential families, at least with respect to some of their parameters. On the other hand, most commonly, a parametric family fails to be a general exponential family because the support set depends on the parameter. The following theorems give a number of examples. Proofs will be provided in the individual sections.
• 5.3: Stable Distributions
Stable distributions are an important general class of probability distributions on R that are defined in terms of location-scale transformations. Stable distributions occur as limits (in distribution) of scaled and centered sums of independent, identically distributed variables. Such limits generalize the central limit theorem, and so stable distributions generalize the normal distribution in a sense. The pioneering work on stable distributions was done by Paul Lévy.
• 5.4: Infinitely Divisible Distributions
A number of special distributions are infinitely divisible. Proofs of the results stated below are given in the individual sections.
• 5.5: Power Series Distributions
Power Series Distributions are discrete distributions on (a subset of) N constructed from power series. This class of distributions is important because most of the special, discrete distributions are power series distributions.
• 5.6: The Normal Distribution
The normal distribution holds an honored role in probability and statistics, mostly because of the central limit theorem, one of the fundamental theorems that forms a bridge between the two subjects. In addition, as we will see, the normal distribution has many nice mathematical properties. The normal distribution is also called the Gaussian distribution, in honor of Carl Friedrich Gauss, who was among the first to use the distribution.
• 5.7: The Multivariate Normal Distribution
The multivariate normal distribution is among the most important of multivariate distributions, particularly in statistical inference and the study of Gaussian processes such as Brownian motion. The distribution arises naturally from linear transformations of independent normal variables. In this section, we consider the bivariate normal distribution first, because explicit results can be given and because graphical interpretations are possible.
• 5.8: The Gamma Distribution
In this section we will study a family of distributions that has special importance in probability and statistics. In particular, the arrival times in the Poisson process have gamma distributions, and the chi-square distribution in statistics is a special case of the gamma distribution. Also, the gamma distribution is widely used to model physical quantities that take positive values.
• 5.9: Chi-Square and Related Distribution
In this section we will study a distribution, and some relatives, that have special importance in statistics. In particular, the chi-square distribution will arise in the study of the sample variance when the underlying distribution is normal and in goodness of fit tests.
• 5.10: The Student t Distribution
In this section we will study a distribution that has special importance in statistics. In particular, this distribution will arise in the study of a standardized version of the sample mean when the underlying distribution is normal.
• 5.11: The F Distribution
In this section we will study a distribution that has special importance in statistics. In particular, this distribution arises from ratios of sums of squares when sampling from a normal distribution, and so is important in estimation and hypothesis testing in the two-sample normal model.
• 5.12: The Lognormal Distribution
The lognormal distribution is a continuous distribution on (0,∞) and is used to model random quantities when the distribution is believed to be skewed, such as certain income and lifetime variables.
• 5.13: The Folded Normal Distribution
The folded normal distribution is the distribution of the absolute value of a random variable with a normal distribution. As has been emphasized before, the normal distribution is perhaps the most important in probability and is used to model an incredible variety of random phenomena.
• 5.14: The Rayleigh Distribution
The Rayleigh distribution, named for William Strutt, Lord Rayleigh, is the distribution of the magnitude of a two-dimensional random vector whose coordinates are independent, identically distributed, mean 0 normal variables. The distribution has a number of applications in settings where magnitudes of normal variables are important.
• 5.15: The Maxwell Distribution
The Maxwell distribution, named for James Clerk Maxwell, is the distribution of the magnitude of a three-dimensional random vector whose coordinates are independent, identically distributed, mean 0 normal variables. The distribution has a number of applications in settings where magnitudes of normal variables are important, particularly in physics. The Maxwell distribution is closely related to the Rayleigh distribution.
• 5.16: The Lévy Distribution
The Lévy distribution, named for the French mathematician Paul Lévy, is important in the study of Brownian motion, and is one of only three stable distributions whose probability density function can be expressed in a simple, closed form.
• 5.17: The Beta Distribution
In this section, we will study the beta distribution, the most important distribution that has bounded support. But before we can study the beta distribution we must study the beta function.
• 5.18: The Beta Prime Distribution
The beta prime distribution is the distribution of the odds ratio associated with a random variable with the beta distribution. Since variables with beta distributions are often used to model random probabilities and proportions, the corresponding odds ratios occur naturally as well.
• 5.19: The Arcsine Distribution
The arcsine distribution is important in the study of Brownian motion and prime numbers, among other applications.
• 5.20: General Uniform Distributions
This section explores uniform distributions in an abstract setting. If you are a new student of probability, or are not familiar with measure theory, you may want to skip this section and read the sections on the uniform distribution on an interval and the discrete uniform distributions.
• 5.21: The Uniform Distribution on an Interval
The continuous uniform distribution on an interval of R is one of the simplest of all probability distributions, but nonetheless very important. In particular, continuous uniform distributions are the basic tools for simulating other probability distributions. The uniform distribution corresponds to picking a point at random from the interval.
• 5.22: Discrete Uniform Distributions
The discrete uniform distribution is a special case of the general uniform distribution with respect to a measure, in this case counting measure. The distribution corresponds to picking an element of S at random. Most classical, combinatorial probability models are based on underlying discrete uniform distributions. The chapter on Finite Sampling Models explores a number of such models.
• 5.23: The Semicircle Distribution
• 5.24: The Triangle Distribution
Like the semicircle distribution, the triangle distribution is based on a simple geometric shape. The distribution arises naturally when uniformly distributed random variables are transformed in various ways.
• 5.25: The Irwin-Hall Distribution
The Irwin-Hall distribution, named for Joseph Irwin and Philip Hall, is the distribution that governs the sum of independent random variables, each with the standard uniform distribution. It is also known as the uniform sum distribution. Since the standard uniform is one of the simplest and most basic distributions (and corresponds in computer science to a random number), the Irwin-Hall is a natural family of distributions. It also serves as a nice example of the central limit theorem.
• 5.26: The U-Power Distribution
The U-power distribution is a U-shaped family of distributions based on a simple family of power functions.
• 5.27: The Sine Distribution
The sine distribution is a simple probability distribution based on a portion of the sine curve. It is also known as Gilbert's sine distribution, named for the American geologist Grove Karl (GK) Gilbert who used the distribution in 1892 to study craters on the moon.
• 5.28: The Laplace Distribution
The Laplace distribution, named for Pierre Simon Laplace, arises naturally as the distribution of the difference of two independent, identically distributed exponential variables. For this reason, it is also called the double exponential distribution.
• 5.29: The Logistic Distribution
The logistic distribution is used for various growth models, and is used in a certain type of regression, known appropriately as logistic regression.
• 5.30: The Extreme Value Distribution
Extreme value distributions arise as limiting distributions for maximums or minimums (extreme values) of a sample of independent, identically distributed random variables, as the sample size increases. Thus, these distributions are important in probability and mathematical statistics.
• 5.31: The Hyperbolic Secant Distribution
The hyperbolic secant distribution is a location-scale family with a number of interesting parallels to the normal distribution. As the name suggests, the hyperbolic secant function plays an important role in the distribution, so we should first review some definitions.
• 5.32: The Cauchy Distribution
The Cauchy distribution, named of course for the ubiquitous Augustin Cauchy, is interesting for a couple of reasons. First, it is a simple family of distributions for which the expected value (and other moments) do not exist. Second, the family is closed under the formation of sums of independent variables, and hence is an infinitely divisible family of distributions.
• 5.33: The Exponential-Logarithmic Distribution
The exponential-logarithmic distribution arises when the rate parameter of the exponential distribution is randomized by the logarithmic distribution. The exponential-logarithmic distribution has applications in reliability theory in the context of devices or organisms that improve with age, due to hardening or immunity.
• 5.34: The Gompertz Distribution
The Gompertz distribution, named for Benjamin Gompertz, is a continuous probability distribution on [0,∞) that has exponentially increasing failure rate. Unfortunately, the death rate of adult humans increases exponentially, so the Gompertz distribution is widely used in actuarial science.
• 5.35: The Log-Logistic Distribution
As the name suggests, the log-logistic distribution is the distribution of a variable whose logarithm has the logistic distribution. The log-logistic distribution is often used to model random lifetimes, and hence has applications in reliability.
• 5.36: The Pareto Distribution
The Pareto distribution is a skewed, heavy-tailed distribution that is sometimes used to model the distribution of incomes and other financial variables.
• 5.37: The Wald Distribution
The Wald distribution, named for Abraham Wald, is important in the study of Brownian motion. Specifically, the distribution governs the first time that a Brownian motion with positive drift hits a fixed, positive value. In Brownian motion, the random position at a fixed time has a normal (Gaussian) distribution, and thus the Wald distribution, which governs the random time at a fixed position, is sometimes called the inverse Gaussian distribution.
• 5.38: The Weibull Distribution
The Weibull distribution is named for Waloddi Weibull. Weibull was not the first person to use the distribution, but was the first to study it extensively and recognize its wide use in applications. The standard Weibull distribution is the same as the standard exponential distribution. But as we will see, every Weibull random variable can be obtained from a standard Weibull variable by a simple deterministic transformation, so the terminology is justified.
• 5.39: Benford's Law
Benford's law refers to probability distributions that seem to govern the significant digits in real data sets. The law is named for the American physicist and engineer Frank Benford, although the law was actually discovered earlier by the astronomer and mathematician Simon Newcomb.
• 5.40: The Zeta Distribution
The zeta distribution is used to model the size or ranks of certain types of objects randomly chosen from certain types of populations. Typical examples include the frequency of occurrence of a word randomly chosen from a text, or the population rank of a city randomly chosen from a country. The zeta distribution is also known as the Zipf distribution, in honor of the American linguist George Zipf.
• 5.41: The Logarithmic Series Distribution
The logarithmic series distribution, as the name suggests, is based on the standard power series expansion of the natural logarithm function. It is also sometimes known more simply as the logarithmic distribution.
05: Special Distributions
5.1: Location-Scale Families
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
General Theory
As usual, our starting point is a random experiment modeled by a probability space $(\Omega, \mathscr F, \P)$, so that $\Omega$ is the set of outcomes, $\mathscr F$ the collection of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. In this section, we assume that we have a fixed random variable $Z$ defined on the probability space, taking values in $\R$.
Definition
For $a \in \R$ and $b \in (0, \infty)$, let $X = a + b \, Z$. The two-parameter family of distributions associated with $X$ is called the location-scale family associated with the given distribution of $Z$. Specifically, $a$ is the location parameter and $b$ the scale parameter.
Thus a linear transformation, with positive slope, of the underlying random variable $Z$ creates a location-scale family for the underlying distribution. In the special case that $b = 1$, the one-parameter family is called the location family associated with the given distribution, and in the special case that $a = 0$, the one-parameter family is called the scale family associated with the given distribution. Scale transformations, as the name suggests, occur naturally when physical units are changed. For example, if a random variable represents the length of an object, then a change of units from meters to inches corresponds to a scale transformation. Location transformations often occur when the zero reference point is changed, in measuring distance or time, for example. Location-scale transformations can also occur with a change of physical units. For example, if a random variable represents the temperature of an object, then a change of units from Fahrenheit to Celsius corresponds to a location-scale transformation.
Distribution Functions
Our goal is to relate various functions that determine the distribution of $X = a + b Z$ to the corresponding functions for $Z$. First we consider the (cumulative) distribution function.
If $Z$ has distribution function $G$ then $X$ has distribution function $F$ given by $F(x) = G \left( \frac{x - a}{b} \right), \quad x \in \R$
Proof
For $x \in \R$ $F(x) = \P(X \le x) = \P(a + b Z \le x) = \P\left(Z \le \frac{x - a}{b}\right) = G\left(\frac{x - a}{b}\right)$
Next we consider the probability density function. The results are a bit different for discrete distributions and continuous distributions, which is not surprising since the density function has different meanings in these two cases.
If $Z$ has a discrete distribution with probability density function $g$ then $X$ also has a discrete distribution, with probability density function $f$ given by $f(x) = g\left(\frac{x - a}{b}\right), \quad x \in \R$
Proof
$Z$ takes values in a countable subset $S \subset \R$ and hence $X$ takes values in $T = \{a + b z: z \in S\}$, which is also countable. Moreover $f(x) = \P(X = x) = \P\left(Z = \frac{x - a}{b}\right) = g\left(\frac{x - a}{b}\right), \quad x \in \R$
If $Z$ has a continuous distribution with probability density function $g$, then $X$ also has a continuous distribution, with probability density function $f$ given by
$f(x) = \frac{1}{b} \, g \left( \frac{x - a}{b} \right), \quad x \in \R$
1. For the location family associated with $g$, the graph of $f$ is obtained by shifting the graph of $g$, $a$ units to the right if $a \gt 0$ and $-a$ units to the left if $a \lt 0$.
2. For the scale family associated with $g$, if $b \gt 1$, the graph of $f$ is obtained from the graph of $g$ by stretching horizontally and compressing vertically, by a factor of $b$. If $0 \lt b \lt 1$, the graph of $f$ is obtained from the graph of $g$ by compressing horizontally and stretching vertically, by a factor of $b$.
Proof
First note that $\P(X = x) = \P\left(Z = \frac{x - a}{b}\right) = 0$, so $X$ has a continuous distribution. Typically, $Z$ takes values in an interval of $\R$ and thus so does $X$. The formula for the density function follows by taking derivatives of the distribution function above, since $f = F^\prime$ and $g = G^\prime$.
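The distribution function and density function relations are exactly how numerical libraries implement location-scale families. As a minimal sketch (assuming numpy and scipy are available), the loc and scale arguments of a scipy.stats distribution play the roles of $a$ and $b$; here we verify both identities for the normal family at illustrative parameter values.

```python
import numpy as np
from scipy import stats

# X = a + b Z, with Z standard normal; a and b are illustrative values
a, b = 2.0, 3.0
Z = stats.norm()                 # distribution of Z, with CDF G and PDF g
X = stats.norm(loc=a, scale=b)   # distribution of X, with CDF F and PDF f

x = np.linspace(-10.0, 14.0, 25)
assert np.allclose(X.cdf(x), Z.cdf((x - a) / b))      # F(x) = G((x - a) / b)
assert np.allclose(X.pdf(x), Z.pdf((x - a) / b) / b)  # f(x) = (1/b) g((x - a) / b)
print("location-scale relations verified")
```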
If $Z$ has a mode at $z$, then $X$ has a mode at $x = a + b z$.
Proof
This follows from the density function in the discrete case or the density function in the continuous case: if $g$ has a maximum at $z$ then $f$ has a maximum at $x = a + b z$.
Next we relate the quantile functions of $Z$ and $X$.
If $G$ and $F$ are the distribution functions of $Z$ and $X$, respectively, then
1. $F^{-1}(p) = a + b \, G^{-1}(p)$ for $p \in (0, 1)$
2. If $z$ is a quantile of order $p$ for $Z$ then $x = a + b \, z$ is a quantile of order $p$ for $X$.
Proof
These results follow from the distribution function above.
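The quantile relation can be checked the same way. The sketch below (again assuming numpy and scipy) uses the exponential distribution as the underlying distribution of $Z$; the ppf method is scipy's name for the quantile function $F^{-1}$.

```python
import numpy as np
from scipy import stats

a, b = 1.0, 2.0                    # illustrative location and scale
Z = stats.expon()                  # standard exponential
X = stats.expon(loc=a, scale=b)    # X = a + b Z

p = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
assert np.allclose(X.ppf(p), a + b * Z.ppf(p))   # F^{-1}(p) = a + b G^{-1}(p)
```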
Suppose now that $Z$ has a continuous distribution on $[0, \infty)$, and that we think of $Z$ as the failure time of a device (or the time of death of an organism). Let $X = b Z$ where $b \in (0, \infty)$, so that the distribution of $X$ is the scale family associated with the distribution of $Z$. Then $X$ also has a continuous distribution on $[0, \infty)$ and can also be thought of as the failure time of a device (perhaps in different units).
Let $G^c$ and $F^c$ denote the reliability functions of $Z$ and $X$ respectively, and let $r$ and $R$ denote the failure rate functions of $Z$ and $X$, respectively. Then
1. $F^c(x) = G^c(x / b)$ for $x \in [0, \infty)$
2. $R(x) = \frac{1}{b} r\left(\frac{x}{b}\right)$ for $x \in [0, \infty)$
Proof
Recall that $G^c = 1 - G$, $F^c = 1 - F$, $r = g / G^c$, and $R = f / F^c$. Thus the results follow from the distribution function and the density function above.
Moments
The following theorem relates the mean, variance, and standard deviation of $Z$ and $X$.
As before, suppose that $X = a + b \, Z$. Then
1. $\E(X) = a + b \, \E(Z)$
2. $\var(X) = b^2 \, \var(Z)$
3. $\sd(X) = b \, \sd(Z)$
Proof
These results follow immediately from basic properties of expected value and variance.
Recall that the standard score of a random variable is obtained by subtracting the mean and dividing by the standard deviation. The standard score is dimensionless (that is, has no physical units) and measures the distance from the mean to the random variable in standard deviations. Since location-scale families essentially correspond to a change of units, it's not surprising that the standard score is unchanged by a location-scale transformation.
The standard scores of $X$ and $Z$ are the same:
$\frac{X - \E(X)}{\sd(X)} = \frac{Z - \E(Z)}{\sd(Z)}$
Proof
From the mean and variance above:
$\frac{X - \E(X)}{\sd(X)} = \frac{a + b Z - [a + b \E(Z)]}{b \sd(Z)} = \frac{Z - \E(Z)}{\sd(Z)}$
Recall that the skewness and kurtosis of a random variable are the third and fourth moments, respectively, of the standard score. Thus it follows from the previous result that skewness and kurtosis are unchanged by location-scale transformations: $\skw(X) = \skw(Z)$, $\kur(X) = \kur(Z)$.
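A quick simulation makes the moment relations and the invariance properties concrete. The sketch below (assuming numpy) uses a skewed underlying distribution, the standard exponential, so that the invariance of skewness is visible; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = -5.0, 4.0
Z = rng.exponential(size=1_000_000)
X = a + b * Z

print(X.mean(), a + b * Z.mean())    # E(X) = a + b E(Z)
print(X.var(), b**2 * Z.var())       # var(X) = b^2 var(Z)

def skew(w):
    # sample skewness: third moment of the standard score
    return np.mean(((w - w.mean()) / w.std()) ** 3)

print(skew(X), skew(Z))              # approximately equal (both near 2)
```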
We can relate the moments of $X$ (about 0) to those of $Z$ by means of the binomial theorem: $\E\left(X^n\right) = \sum_{k=0}^n \binom{n}{k} b^k a^{n - k} \E\left(Z^k\right), \quad n \in \N$ Of course, the moments of $X$ about the location parameter $a$ have a simple representation in terms of the moments of $Z$ about 0: $\E\left[(X - a)^n\right] = b^n \E\left(Z^n\right), \quad n \in \N$ The following result relates the moment generating functions of $Z$ and $X$.
If $Z$ has moment generating function $m$ then $X$ has moment generating function $M$ given by
$M(t) = e^{a t} m(b t)$
Proof $M(t) = \E\left(e^{tX}\right) = \E\left[e^{t(a + bZ)}\right] = e^{ta} \E\left(e^{t b Z}\right) = e^{a t} m(b t)$
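For the standard normal distribution, $m(t) = e^{t^2/2}$ in closed form, so the relation can be checked directly: $M(t) = e^{a t} m(b t) = e^{a t + b^2 t^2 / 2}$, the moment generating function of the normal distribution with mean $a$ and variance $b^2$. A small numerical confirmation (assuming numpy, with illustrative parameter values):

```python
import numpy as np

a, b = 1.5, 0.5                                   # illustrative parameters
m = lambda t: np.exp(t ** 2 / 2)                  # MGF of Z, standard normal
M = lambda t: np.exp(a * t + (b * t) ** 2 / 2)    # MGF of X = a + b Z

t = np.linspace(-2, 2, 9)
assert np.allclose(M(t), np.exp(a * t) * m(b * t))  # M(t) = e^{a t} m(b t)
```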
Type
As we noted earlier, two probability distributions that are related by a location-scale transformation can be thought of as governing the same underlying random quantity, but in different physical units. This relationship is important enough to deserve a name.
Suppose that $P$ and $Q$ are probability distributions on $\R$ with distribution functions $F$ and $G$, respectively. Then $P$ and $Q$ are of the same type if there exist constants $a \in \R$ and $b \in (0, \infty)$ such that $F(x) = G \left( \frac{x - a}{b} \right), \quad x \in \R$
Being of the same type is an equivalence relation on the collection of probability distributions on $\R$. That is, if $P$, $Q$, and $R$ are probability distributions on $\R$ then
1. $P$ is the same type as $P$ (the reflexive property).
2. If $P$ is the same type as $Q$ then $Q$ is the same type as $P$ (the symmetric property).
3. If $P$ is the same type as $Q$, and $Q$ is the same type as $R$, then $P$ is the same type as $R$ (the transitive property).
Proof
Let $F$, $G$, and $H$ denote the distribution functions of $P$, $Q$, and $R$ respectively.
1. This is trivial, of course, since we can take $a = 0$ and $b = 1$.
2. Suppose there exists $a \in \R$ and $b \in (0, \infty)$ such that $F(x) = G\left(\frac{x - a}{b}\right)$ for $x \in \R$. Then $G(x) = F(a + b x) = F\left(\frac{x - (-a/b)}{1/b}\right)$ for $x \in \R$.
3. Suppose there exists $a, \, c \in \R$ and $b, \, d \in (0, \infty)$ such that $F(x) = G\left(\frac{x - a}{b}\right)$ and $G(x) = H\left(\frac{x - c}{d}\right)$ for $x \in \R$. Then $F(x) = H\left(\frac{x - (a + bc)}{bd}\right)$ for $x \in \R$.
So, the collection of probability distributions on $\R$ is partitioned into mutually exclusive equivalence classes, where the distributions in each class are all of the same type.
Examples and Applications
Special Distributions
Many of the special parametric families of distributions studied in this chapter and elsewhere in this text are location and/or scale families.
The arcsine distribution is a location-scale family.
The Cauchy distribution is a location-scale family.
The exponential distribution is a scale family.
The exponential-logarithmic distribution is a scale family for each value of the shape parameter.
The extreme value distribution is a location-scale family.
The gamma distribution is a scale family for each value of the shape parameter.
The Gompertz distribution is a scale family for each value of the shape parameter.
The half-normal distribution is a scale family.
The hyperbolic secant distribution is a location-scale family.
The Lévy distribution is a location-scale family.
The logistic distribution is a location-scale family.
The log-logistic distribution is a scale family for each value of the shape parameter.
The Maxwell distribution is a scale family.
The normal distribution is a location-scale family.
The Pareto distribution is a scale family for each value of the shape parameter.
The Rayleigh distribution is a scale family.
The semicircle distribution is a location-scale family.
The triangle distribution is a location-scale family for each value of the shape parameter.
The uniform distribution on an interval is a location-scale family.
The U-power distribution is a location-scale family for each value of the shape parameter.
The Weibull distribution is a scale family for each value of the shape parameter.
The Wald distribution is a scale family, although in the usual formulation, neither of the parameters is a scale parameter.
5.2: General Exponential Families
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Definition
We start with a probability space $(\Omega, \mathscr F, \P)$ as a model for a random experiment. So as usual, $\Omega$ is the set of outcomes, $\mathscr F$ the $\sigma$-algebra of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr F)$. For the general formulation that we want in this section, we need two additional spaces, a measure space $(S, \mathscr S, \mu)$ (where the probability distributions will live) and a measurable space $(T, \mathscr T)$ (serving the role of a parameter space). Typically, these spaces fall into our two standard categories. Specifically, the measure space is usually one of the following:
• Discrete. $S$ is countable, $\mathscr S$ is the collection of all subsets of $S$, and $\mu = \#$ is counting measure.
• Euclidean. $S$ is a sufficiently nice Borel measurable subset of $\R^n$ for some $n \in \N_+$, $\mathscr S$ is the $\sigma$-algebra of Borel measurable subsets of $S$, and $\mu = \lambda_n$ is $n$-dimensional Lebesgue measure.
Similarly, the parameter space $(T, \mathscr T)$ is usually either discrete, so that $T$ is countable and $\mathscr T$ the collection of all subsets of $T$, or Euclidean so that $T$ is a sufficiently nice Borel measurable subset of $\R^m$ for some $m \in \N_+$ and $\mathscr T$ is the $\sigma$-algebra of Borel measurable subsets of $T$.
Suppose now that $X$ is a random variable defined on the probability space, taking values in $S$, and that the distribution of $X$ depends on a parameter $\theta \in T$. For $\theta \in T$ we assume that the distribution of $X$ has probability density function $f_\theta$ with respect to $\mu$.
For $k \in \N_+$, the family of distributions of $X$ is a $k$-parameter exponential family if $f_\theta(x) = \alpha(\theta) \, g(x) \, \exp \left( \sum_{i=1}^k \beta_i(\theta) \, h_i(x) \right); \quad x \in S, \, \theta \in T$ where $\alpha$ and $\left(\beta_1, \beta_2, \ldots, \beta_k\right)$ are measurable functions from $T$ into $\R$, and where $g$ and $\left(h_1, h_2, \ldots, h_k\right)$ are measurable functions from $S$ into $\R$. Moreover, $k$ is assumed to be the smallest such integer.
1. The parameters $\left(\beta_1(\theta), \beta_2(\theta), \ldots, \beta_k(\theta)\right)$ are called the natural parameters of the distribution.
2. The random variables $\left(h_1(X), h_2(X), \ldots, h_k(X)\right)$ are called the natural statistics of the distribution.
Although the definition may look intimidating, exponential families are useful because many important theoretical results in statistics hold for exponential families, and because many special parametric families of distributions turn out to be exponential families. It's important to emphasize that the representation of $f_\theta(x)$ given in the definition must hold for all $x \in S$ and $\theta \in T$. If the representation only holds for a set of $x \in S$ that depends on the particular $\theta \in T$, then the family of distributions is not a general exponential family.
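A concrete factorization helps. The Poisson distribution with parameter $\theta$ has probability density function $f_\theta(x) = e^{-\theta} \theta^x / x!$, which fits the definition with $k = 1$, $\alpha(\theta) = e^{-\theta}$, $g(x) = 1/x!$, $\beta(\theta) = \ln(\theta)$, and $h(x) = x$. The sketch below (assuming numpy and scipy are available) checks this factorization numerically at an illustrative parameter value.

```python
import numpy as np
from math import factorial
from scipy import stats

theta = 2.7                # illustrative parameter value
alpha = np.exp(-theta)     # alpha(theta) = e^{-theta}
beta = np.log(theta)       # natural parameter beta(theta) = ln(theta)

for x in range(10):
    g = 1 / factorial(x)   # g(x) = 1 / x!
    h = x                  # natural statistic h(x) = x
    assert np.isclose(alpha * g * np.exp(beta * h), stats.poisson(theta).pmf(x))
print("Poisson exponential-family factorization verified")
```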
The next result shows that if we sample from the distribution of an exponential family, then the distribution of the random sample is itself an exponential family with the same natural parameters.
Suppose that the distribution of random variable $X$ is a $k$-parameter exponential family with natural parameters $(\beta_1(\theta), \beta_2(\theta), \ldots, \beta_k(\theta))$, and natural statistics $(h_1(X), h_2(X), \ldots, h_k(X))$. Let $\bs X = (X_1, X_2, \ldots, X_n)$ be a sequence of $n$ independent random variables, each with the same distribution as $X$. Then $\bs X$ is a $k$-parameter exponential family with natural parameters $(\beta_1(\theta), \beta_2(\theta), \ldots, \beta_k(\theta))$, and natural statistics $u_j(\boldsymbol{X}) = \sum_{i=1}^n h_j(X_i), \quad j \in \{1, 2, \ldots, k\}$
Proof
Let $f_\theta$ denote the PDF of $X$ corresponding to the parameter value $\theta \in T$, so that $f_\theta(x)$ has the representation given in the definition for $x \in S$ and $\theta \in T$. Then for $\theta \in T$, $\bs X = (X_1, X_2, \ldots, X_n)$ has PDF $g_\theta$ given by $g_\theta(x_1, x_2, \ldots, x_n) = f_\theta(x_1) f_\theta(x_2) \cdots f_\theta(x_n), \quad (x_1, x_2, \ldots, x_n) \in S^n$ Substituting and simplifying gives the result.
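The practical meaning of the theorem is that the joint density of an IID sample depends on the data only through the natural statistics. For a Poisson sample, for example, the likelihood ratio between two parameter values is determined by $\sum_i x_i$ alone, as this sketch (assuming scipy) illustrates with two hypothetical data sets that share the same sum.

```python
import numpy as np
from scipy import stats

def loglik(theta, x):
    # log of the joint pmf of an IID Poisson(theta) sample
    return stats.poisson(theta).logpmf(x).sum()

x1 = np.array([0, 1, 2, 3, 4])   # sum = 10
x2 = np.array([2, 2, 2, 2, 2])   # sum = 10

r1 = loglik(3.0, x1) - loglik(1.0, x1)
r2 = loglik(3.0, x2) - loglik(1.0, x2)
assert np.isclose(r1, r2)        # same natural statistic, same likelihood ratio
```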
Examples and Special Cases
Special Distributions
Many of the special distributions studied in this chapter are general exponential families, at least with respect to some of their parameters. On the other hand, most commonly, a parametric family fails to be a general exponential family because the support set depends on the parameter. The following theorems give a number of examples. Proofs will be provided in the individual sections.
The Bernoulli distribution is a one-parameter exponential family in the success parameter $p \in (0, 1)$.
The beta distribution is a two-parameter exponential family in the shape parameters $a \in (0, \infty)$, $b \in (0, \infty)$.
The beta prime distribution is a two-parameter exponential family in the shape parameters $a \in (0, \infty)$, $b \in (0, \infty)$.
The binomial distribution is a one-parameter exponential family in the success parameter $p \in (0, 1)$ for a fixed value of the trial parameter $n \in \N_+$.
The chi-square distribution is a one-parameter exponential family in the degrees of freedom $n \in (0, \infty)$.
The exponential distribution is a one-parameter exponential family (appropriately enough), in the rate parameter $r \in (0, \infty)$.
The gamma distribution is a two-parameter exponential family in the shape parameter $k \in (0, \infty)$ and the scale parameter $b \in (0, \infty)$.
The geometric distribution is a one-parameter exponential family in the success probability $p \in (0, 1)$.
The half-normal distribution is a one-parameter exponential family in the scale parameter $\sigma \in (0, \infty)$.
The Laplace distribution is a one-parameter exponential family in the scale parameter $b \in (0, \infty)$ for a fixed value of the location parameter $a \in \R$.
The Lévy distribution is a one-parameter exponential family in the scale parameter $b \in (0, \infty)$ for a fixed value of the location parameter $a \in \R$.
The logarithmic distribution is a one-parameter exponential family in the shape parameter $p \in (0, 1)$.
The lognormal distribution is a two-parameter exponential family in the shape parameters $\mu \in \R$, $\sigma \in (0, \infty)$.
The Maxwell distribution is a one-parameter exponential family in the scale parameter $b \in (0, \infty)$.
The $k$-dimensional multinomial distribution is a $k$-parameter exponential family in the probability parameters $(p_1, p_2, \ldots, p_k)$ for a fixed value of the trial parameter $n \in \N_+$.
The $k$-dimensional multivariate normal distribution is a $\frac{1}{2}(k^2 + 3 k)$-parameter exponential family with respect to the mean vector $\bs{\mu}$ and the variance-covariance matrix $\bs{V}$.
The negative binomial distribution is a one-parameter exponential family in the success parameter $p \in (0, 1)$ for a fixed value of the stopping parameter $k \in \N_+$.
The normal distribution is a two-parameter exponential family in the mean $\mu \in \R$ and the standard deviation $\sigma \in (0, \infty)$.
The Pareto distribution is a one-parameter exponential family in the shape parameter for a fixed value of the scale parameter.
The Poisson distribution is a one-parameter exponential family.
The Rayleigh distribution is a one-parameter exponential family.
The U-power distribution is a one-parameter exponential family in the shape parameter, for fixed values of the location and scale parameters.
The Weibull distribution is a one-parameter exponential family in the scale parameter for a fixed value of the shape parameter.
The zeta distribution is a one-parameter exponential family.
The Wald distribution is a two-parameter exponential family.
5.3: Stable Distributions
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\sgn}{\,\text{sgn}}$ $\newcommand{\bs}{\boldsymbol}$
This section discusses a theoretical topic that you may want to skip if you are a new student of probability.
Basic Theory
Stable distributions are an important general class of probability distributions on $\R$ that are defined in terms of location-scale transformations. Stable distributions occur as limits (in distribution) of scaled and centered sums of independent, identically distributed variables. Such limits generalize the central limit theorem, and so stable distributions generalize the normal distribution in a sense. The pioneering work on stable distributions was done by Paul Lévy.
Definition
In this section, we consider real-valued random variables whose distributions are not degenerate (that is, not concentrated at a single value). After all, a random variable with a degenerate distribution is not really random, and so is not of much interest.
Random variable $X$ has a stable distribution if the following condition holds: If $n \in \N_+$ and $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the same distribution as $X$, then $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$ for some $a_n \in \R$ and $b_n \in (0, \infty)$. If $a_n = 0$ for $n \in \N_+$ then the distribution of $X$ is strictly stable.
1. The parameters $a_n$ for $n \in \N_+$ are the centering parameters.
2. The parameters $b_n$ for $n \in \N_+$ are the norming parameters.
Details
Since the distribution of $X$ is not degenerate, note that if the distribution of $a + b X$ is the same as the distribution of $c + d X$ for some $a, \, c \in \R$ and $b, \, d \in (0, \infty)$, then $a = c$ and $b = d$. Thus, the centering parameters $a_n$ and the norming parameters $b_n$ are uniquely defined for $n \in \N_+$.
Recall that two distributions on $\R$ that are related by a location-scale transformation are said to be of the same type, and that being of the same type defines an equivalence relation on the class of distributions on $\R$. With this terminology, the definition of stability has a more elegant expression: $X$ has a stable distribution if the sum of a finite number of independent copies of $X$ is of the same type as $X$. As we will see, the norming parameters are more important than the centering parameters, and in fact, only certain norming parameters can occur.
Basic Properties
We start with some very simple results that follow easily from the definition, before moving on to the deeper results.
Suppose that $X$ has a stable distribution with mean $\mu$ and finite variance. Then the norming parameters are $\sqrt{n}$ and the centering parameters are $\left(n - \sqrt{n}\right) \mu$ for $n \in \N_+$.
Proof
As usual, let $a_n$ and $b_n$ denote the centering and norming parameters of $X$ for $n \in \N_+$, and let $\sigma^2$ denote the (finite) variance of $X$. Suppose that $n \in \N_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the distribution of $X$. Then $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$. Taking variances gives $n \sigma^2 = b_n^2 \sigma^2$ and hence $b_n = \sqrt{n}$. Taking expected values now gives $n \mu = a_n + \sqrt{n} \mu$.
It will turn out that the only stable distribution with finite variance is the normal distribution, but the result above is useful as an intermediate step. Next, it seems fairly clear from the definition that the family of stable distributions is itself a location-scale family.
Suppose that the distribution of $X$ is stable, with centering parameters $a_n \in \R$ and norming parameters $b_n \in (0, \infty)$ for $n \in \N_+$. If $c \in \R$ and $d \in (0, \infty)$, then the distribution of $Y = c + d X$ is also stable, with centering parameters $d a_n + (n - b_n) c$ and norming parameters $b_n$ for $n \in \N_+$.
Proof
Suppose that $n \in \N_+$ and that $(Y_1, Y_2, \ldots, Y_n)$ is a sequence of independent variables, each with the same distribution as $Y$. Then $Y_1 + Y_2 + \cdots + Y_n$ has the same distribution $n c + d(X_1 + X_2 + \cdots + X_n)$ where $(X_1, X_2, \ldots)$ is a sequence of independent variables, each with the same distribution as $X$. By stability, $X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$. Hence $Y_1 + Y_2 + \cdots + Y_n$ has the same distribution as $(n c + d a_n) + d b_n X$, which in turn has the same distribution as $[d a_n + (n - b_n) c] + b_n Y$.
An important point is that the norming parameters are unchanged under a location-scale transformation.
Suppose that the distribution of $X$ is stable, with centering parameters $a_n \in \R$ and norming parameters $b_n \in (0, \infty)$ for $n \in \N_+$. Then the distribution of $-X$ is stable, with centering parameters $-a_n$ and norming parameters $b_n$ for $n \in \N_+$.
Proof
If $n \in \N_+$ and $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the same distribution as $X$ then $(-X_1, -X_2, \ldots, -X_n)$ is a sequence of independent variables each with the same distribution as $-X$. By stability, $-\sum_{i=1}^n X_i$ has the same distribution as $-(a_n + b_n X) = - a_n + b_n (-X)$.
From the last two results, if $X$ has a stable distribution, then so does $c + d X$, with the same norming parameters, for every $c, \, d \in \R$ with $d \neq 0$. Stable distributions are also closed under convolution (corresponding to sums of independent variables) if the norming parameters are the same.
Suppose that $X$ and $Y$ are independent variables. Assume also that $X$ has a stable distribution with centering parameters $a_n \in \R$ and norming parameters $b_n \in (0, \infty)$ for $n \in \N_+$, and that $Y$ has a stable distribution with centering parameters $c_n \in \R$ and the same norming parameters $b_n$ for $n \in \N_+$. Then $Z = X + Y$ has a stable distribution with centering parameters $a_n + c_n$ and norming parameters $b_n$ for $n \in \N_+$.
Proof
Suppose that $n \in \N_+$ and that $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the same distribution as $Z$. Then $\sum_{i=1}^n Z_i$ has the same distribution as $\sum_{i=1}^n X_i + \sum_{i=1}^n Y_i$ where $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the same distribution as $X$, and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a sequence of independent variables, each with the same distribution as $Y$, and where $\bs{X}$ and $\bs{Y}$ are independent. By stability, this is the same as the distribution of $(a_n + b_n X) + (c_n + b_n Y) = (a_n + c_n) + b_n (X + Y)$.
We can now give another characterization of stability that just involves two independent copies of $X$.
Random variable $X$ has a stable distribution if and only if the following condition holds: If $X_1, \, X_2$ are independent variables, each with the same distribution as $X$, and $d_1, d_2 \in (0, \infty)$, then $d_1 X_1 + d_2 X_2$ has the same distribution as $a + b X$ for some $a \in \R$ and $b \in (0, \infty)$.
Proof
Suppose that the condition in the theorem holds. We will show by induction that the condition in the definition holds. For $n = 2$, the stability condition is a special case of the condition in the theorem, with $d_1 = d_2 = 1$. Suppose that the stability condition holds for a given $n \in \N_+$. Suppose that $(X_1, X_2, \ldots, X_n, X_{n+1})$ is a sequence of independent random variables, each with the distribution of $X$. By the induction hypothesis, $Y_n = X_1 + X_2 + \cdots + X_n$ has the same distribution as $a_n + b_n X$ for some $a_n \in \R$ and $b_n \in (0, \infty)$. By independence, $Y_{n+1} = X_1 + X_2 + \cdots + X_n + X_{n+1}$ has the same distribution as $a_n + b_n X_1 + X_{n+1}$. By another application of the condition above, $b_n X_1 + X_{n+1}$ has the same distribution as $c + b_{n+1} X$ for some $c \in \R$ and $b_{n+1} \in (0, \infty)$. But then $Y_{n+1}$ has the same distribution as $(a_n + c) + b_{n+1} X$.
As a corollary of a couple of the results above, we have the following:
Suppose that $X$ and $Y$ are independent with the same stable distribution. Then the distribution of $X - Y$ is strictly stable, with the same norming parameters.
Note that the distribution of $X - Y$ is symmetric (about 0). The last result is useful because it allows us to get rid of the centering parameters when proving facts about the norming parameters. Here is the most important of those facts:
Suppose that $X$ has a stable distribution. Then the norming parameters have the form $b_n = n^{1/\alpha}$ for $n \in \N_+$, for some $\alpha \in (0, 2]$. The parameter $\alpha$ is known as the index or characteristic exponent of the distribution.
Proof
The proof is in several steps, and is based on the proof in An Introduction to Probability Theory and Its Applications, Volume II, by William Feller. The proof uses the basic trick of writing a sum of independent copies of $X$ in different ways in order to obtain relationships between the norming constants $b_n$.
First we can assume from our last result that the distribution of $X$ is symmetric and strictly stable. Let $(X_1, X_2, \ldots)$ be a sequence of independent variables, each with the distribution of $X$. Let $Y_n = \sum_{i=1}^n X_i$ for $n \in \N_+$. Now let $n, \, m \in \N_+$ and consider $Y_{m n}$. Directly from stability, $Y_{m n}$ has the same distribution as $b_{m n} X$. On the other hand, $Y_{m n}$ can be thought of as a sum of $m$ blocks, where each block is a sum of $n$ independent copies of $X$. Each block has the same distribution as $b_n X$, and since the blocks are independent, it follows that $Y_{m n}$ has the same distribution as $b_n X_1 + b_n X_2 + \cdots + b_n X_m = b_n (X_1 + X_2 + \cdots + X_m)$ But by another application of stability, the random variable on the right has the same distribution as $b_n b_m X$. It then follows that $b_{m n} = b_m b_n$ for all $m, \, n \in \N_+$ which in turn leads to $b_{n^k} = b_n^k$ for all $n, \, k \in \N_+$.
We use the same trick again, this time with a sum. Let $m, \, n \in \N_+$ and consider $Y_{m+n}$. Directly from stability, $Y_{m + n}$ has the same distribution as $b_{m+n} X$. On the other hand, $Y_{m+n}$ can be thought of as the sum of two blocks. The first is the sum of $m$ independent copies of $X$ and hence has the same distribution as $b_m X$, while the second is the sum of $n$ independent copies of $X$ and hence has the same distribution as $b_n X$. Since the blocks are independent, it follows that $b_{m+n} X$ has the same distribution as $b_m X_1 + b_n X_2$, or equivalently, $X$ has the same distribution as $U = \frac{b_m}{b_{m+n}} X_1 + \frac{b_n}{b_{m+n}} X_2$ Next note that for $x \gt 0$, $\left\{X_1 \ge 0, X_2 \gt \frac{b_{m+n}}{b_n} x\right\} \subseteq \{U \gt x\}$ and so by independence, $\P(U \gt x) \ge \P\left(X_1 \ge 0, X_2 \gt \frac{b_{m + n}}{b_n} x \right) = \P(X_1 \ge 0) \P\left(X_2 \gt \frac{b_{m+n}}{b_n} x \right)$ But by symmetry, $\P(X_1 \ge 0) \ge \frac{1}{2}$. Also $X_2$ and $U$ have the same distribution as $X$, so we conclude that $\P(X \gt x) \ge \frac{1}{2} \P\left(X \gt \frac{b_{m+n}}{b_n} x\right), \quad x \gt 0$ It follows that the ratios $b_n \big/ b_{m+n}$ are bounded for $m, \, n \in \N_+$. If that were not the case, we could find a sequence of integers $m, \, n$ with $b_{m+n} \big/ b_n \to 0$, in which case the displayed equation above would give the contradiction $\P(X \gt x) \ge \frac{1}{4}$ for all $x \gt 0$. Restating, the ratios $b_k / b_n$ are bounded for $k, \, n \in \N_+$ with $k \lt n$.
Fix $r \in \N_+$. There exists a unique $\alpha \in (0, \infty)$ with $b_r = r^{1/\alpha}$. It then follows from step 1 above that $b_n = n^{1/\alpha}$ for every $n = r^j$ with $j \in \N_+$. Similarly, if $s \in \N_+$, there exists $\beta \in (0, \infty)$ with $b_s = s^{1/\beta}$ and then $b_m = m^{1/\beta}$ for every $m = s^k$ with $k \in \N_+$. For our next step, we show that $\alpha = \beta$ and it then follows that $b_n = n^{1/\alpha}$ for every $n \in \N_+$. Towards that end, note that if $m = s^k$ with $k \in \N_+$ there exists $n = r^j$ with $j \in \N_+$ with $n \le m \le r n$. Hence $b_m = m^{1/\beta} \le (r n)^{1/\beta} = r^{1/\beta} b_n^{\alpha / \beta}$ Therefore $\frac{b_m}{b_n} \le r^{1/\beta} b_n^{\alpha/ \beta - 1}$ Since the coefficients $b_n$ are unbounded in $n \in \N_+$, but the ratios $b_n / b_m$ are bounded for $m, \, n \in \N_+$ with $m \gt n$, the last inequality implies that $\beta \le \alpha$. Reversing the roles of $m$ and $n$ then gives $\alpha \le \beta$ and hence $\alpha = \beta$.
All that remains to show is that $\alpha \le 2$. We will do this by showing that if $\alpha \gt 2$, then $X$ must have finite variance, in which case the finite variance property above leads to the contradiction $\alpha = 2$. Since $X^2$ is nonnegative, $\E\left(X^2\right) = \int_0^\infty \P\left(X^2 \gt x\right) dx = \int_0^\infty \P\left(\left|X\right| \gt \sqrt{x}\right) dx = \sum_{k=1}^\infty \int_{2^{k-1}}^{2^k} \P\left(\left|X\right| \gt \sqrt{x}\right) dx$ So the idea is to find bounds on the integrals on the right so that the sum converges. Towards that end, note that for $t \gt 0$ and $n \in \N_+$ $\P(\left|Y_n\right| \gt t b_n) = \P(b_n \left|X\right| \gt t b_n) = \P(\left|X\right| \gt t)$ Hence we can choose $t$ so that $\P(\left|Y_n\right| \gt t b_n) \le \frac{1}{4}$. On the other hand, using a special inequality for symmetric distributions, $\frac{1}{2}\left(1 - \exp\left[-n \P\left(\left|X\right| \gt t b_n\right)\right]\right) \le \P(\left|Y_n\right| \gt t b_n)$ This implies that $n \P\left(\left|X\right| \gt t b_n\right)$ is bounded in $n$ or otherwise the two inequalities together would lead to $\frac{1}{2} \le \frac{1}{4}$. Substituting $x = t b_n = t n^{1/\alpha}$ leads to $\P(\left|X\right| \gt x) \le M x^{-\alpha}$ for some $M \gt 0$. It then follows that $\int_{2^{k-1}}^{2^k} \P\left(\left|X\right| \gt \sqrt{x}\right) dx \le M 2^{k(1 - \alpha / 2)}$ If $\alpha \gt 2$, the series with the terms on the right converges and we have $\E(X^2) \lt \infty$.
Every stable distribution is continuous.
Proof
As in the proof of the previous theorem, suppose that $X$ has a symmetric stable distribution with norming parameters $b_n$ for $n \in \N_+$. As a special case of the last proof, for $n \in \N_+$, $X$ has the same distribution as $\frac{1}{b_{n + 1}} X_1 + \frac{b_n}{b_{n + 1}} X_2$ where $X_1$ and $X_2$ are independent and also have this distribution. Suppose now that $\P(X = x) = p$ for some $x \ne 0$ where $p \gt 0$. Then $\P\left(X = \frac{1 + b_n}{b_{1 + n}} x\right) \ge \P(X_1 = x) \P(X_2 = x) = p^2 \gt 0$ If the index $\alpha \ne 1$, the points $\frac{(1 + b_n)}{b_{1 + n}} x = \frac{1 + n^{1/\alpha}}{(1 + n)^{1/\alpha}} x, \quad n \in \N_+$ are distinct, which gives us infinitely many atoms, each with probability at least $p^2$—clearly a contradiction.
Next, suppose that the only atom is $x = 0$ and that $\P(X = 0) = p$ where $p \in (0, 1)$. Then $X_1 + X_2$ has the same distribution as $b_2 X$. But $P(X_1 + X_2 = 0) = \P(X_1 = 0) \P(X_2 = 0) = p^2$ while $\P(b_2 X = 0) = \P(X = 0) = p$, another contradiction.
The next result is a precise statement of the limit theorem alluded to in the introductory paragraph.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent, identically distributed random variables, and let $Y_n = \sum_{i=1}^n X_i$ for $n \in \N_+$. If there exist constants $a_n \in \R$ and $b_n \in (0, \infty)$ for $n \in \N_+$ such that $(Y_n - a_n) \big/ b_n$ has a (non-degenerate) limiting distribution as $n \to \infty$, then the limiting distribution is stable.
The following theorem completely characterizes stable distributions in terms of the characteristic function.
Suppose that $X$ has a stable distribution. The characteristic function of $X$ has the following form, for some $\alpha \in (0, 2]$, $\beta \in [-1, 1]$, $c \in \R$, and $d \in (0, \infty)$ $\chi(t) = \E\left(e^{i t X}\right) = \exp\left(i t c - d^\alpha \left|t\right|^\alpha \left[1 + i \beta \sgn(t) u_\alpha(t)\right]\right), \quad t \in \R$ where $\sgn$ is the usual sign function, and where $u_\alpha(t) = \begin{cases} \tan\left(\frac{\pi \alpha}{2}\right), & \alpha \ne 1 \\ \frac{2}{\pi} \ln(|t|), & \alpha = 1 \end{cases}$
1. The parameter $\alpha$ is the index, as before.
2. The parameter $\beta$ is the skewness parameter.
3. The parameter $c$ is the location parameter.
4. The parameter $d$ is the scale parameter.
Thus, the family of stable distributions is a four-parameter family. The index parameter $\alpha$ and the skewness parameter $\beta$ can be considered shape parameters. When the location parameter $c = 0$ and the scale parameter $d = 1$, we get the standard form of the stable distributions, with characteristic function $\chi(t) = \E\left(e^{i t X}\right) = \exp\left(-\left|t\right|^\alpha \left[1 + i \beta \sgn(t) u_\alpha(t)\right]\right), \quad t \in \R$
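Characteristic functions of stable distributions are easy to explore empirically: the empirical characteristic function of a sample is the sample mean of $e^{i t X}$. The sketch below (assuming numpy) checks the standard form for the case $\alpha = 1$, $\beta = 0$, where $\chi(t) = e^{-|t|}$; as we will see below, this is the standard Cauchy distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_cauchy(size=200_000)

t = np.array([0.5, 1.0, 2.0])
ecf = np.array([np.mean(np.exp(1j * s * X)) for s in t])  # empirical char. function
print(np.real(ecf))          # imaginary parts vanish by symmetry
print(np.exp(-np.abs(t)))    # theoretical values, close agreement
```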
The characteristic function gives another proof that stable distributions are closed under convolution (corresponding to sums of independent variables), if the index is fixed.
Suppose that $X_1$ and $X_2$ are independent random variables, and that $X_1$ and $X_2$ have the stable distribution with common index $\alpha \in (0, 2]$, skewness parameter $\beta_k \in [-1, 1]$, location parameter $c_k \in \R$, and scale parameter $d_k \in (0, \infty)$. Then $X_1 + X_2$ has the stable distribution with index $\alpha$, location parameter $c = c_1 + c_2$, scale parameter $d = \left(d_1^\alpha + d_2^\alpha\right)^{1/\alpha}$, and skewness parameter $\beta = \frac{\beta_1 d_1^\alpha + \beta_2 d_2^\alpha}{d_1^\alpha + d_2^\alpha}$
Proof
Let $\chi_k$ denote the characteristic function of $X_k$ for $k \in \{1, 2\}$. Then $X_1 + X_2$ has characteristic function $\chi = \chi_1 \chi_2$. The result follows from using the form of the characteristic function above and some algebra.
Special Cases
Three special parametric families of distributions studied in this chapter are stable. In the proofs in this subsection, we use the definition of stability and various important properties of the distributions. These properties, in turn, are verified in the sections devoted to the distributions. We also give proofs based on the characteristic function, which allows us to identify the skewness parameter.
The normal distribution is stable with index $\alpha = 2$. There is no skewness parameter.
Proof
If $n \in \N_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the standard normal distribution, then $Z_1 + Z_2 + \cdots + Z_n$ has the normal distribution with mean 0 and variance $n$. But this is also the distribution of $\sqrt{n} Z$ where $Z$ has the standard normal distribution. Hence the standard normal distribution is strictly stable, with index $\alpha = 2$. The normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$ is the distribution of $\mu + \sigma Z$. From our basic properties above, this distribution is stable with index $\alpha = 2$ and centering parameters $\left(n - \sqrt{n}\right) \mu$ for $n \in \N_+$.
In terms of the characteristic function, note that if $\alpha = 2$ then $u_\alpha(t) = \tan(\pi) = 0$ so the skewness parameter $\beta$ drops out completely. The characteristic function in standard form is $\chi(t) = e^{-t^2}$ for $t \in \R$, which is the characteristic function of the normal distribution with mean 0 and variance 2.
Of course, the normal distribution has finite variance, so once we know that it is stable, it follows from the finite variance property above that the index must be 2. Moreover, the characteristic function shows that the normal distribution is the only stable distribution with index 2, and hence the only stable distribution with finite variance.
Open the special distribution simulator and select the normal distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
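In place of the simulation app, a short script (assuming numpy and scipy) can verify the stability property directly: the sum of $n$ independent standard normal variables should be indistinguishable from $\sqrt{n} Z$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 10, 100_000
sums = rng.standard_normal((reps, n)).sum(axis=1)   # X_1 + ... + X_n
scaled = np.sqrt(n) * rng.standard_normal(reps)     # b_n Z with b_n = sqrt(n)

# Kolmogorov-Smirnov two-sample test; a p-value that is not small is
# consistent with the two samples coming from the same distribution
print(stats.ks_2samp(sums, scaled).pvalue)
```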
The Cauchy distribution is stable with index $\alpha = 1$ and skewness parameter $\beta = 0$.
Proof
If $n \in \N_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the standard Cauchy distribution, then $Z_1 + Z_2 + \cdots + Z_n$ has the Cauchy distribution with scale parameter $n$. By definition this is the same as the distribution of $n Z$ where $Z$ has the standard Cauchy distribution. Hence the standard Cauchy distribution is strictly stable, with index $\alpha = 1$. The Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$ is the distribution of $a + b Z$. From our basic properties above, this distribution is strictly stable with index $\alpha = 1$.
When $\alpha = 1$ and $\beta = 0$ the characteristic function in standard form is $\chi(t) = \exp\left(-\left|t\right|\right)$ for $t \in \R$, which is the characteristic function of the standard Cauchy distribution.
Open the special distribution simulator and select the Cauchy distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
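Stability with index $\alpha = 1$ has a striking consequence: since $b_n = n$, the average of $n$ standard Cauchy variables is again standard Cauchy, so the sample mean never settles down. A quantile comparison (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 50, 100_000
means = rng.standard_cauchy((reps, n)).mean(axis=1)   # averages of n Cauchy variables
single = rng.standard_cauchy(reps)                    # a single Cauchy variable

q = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(means, q))
print(np.quantile(single, q))   # nearly identical quantiles
```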
The Lévy distribution is stable with index $\alpha = \frac{1}{2}$ and skewness parameter $\beta = 1$.
Proof
If $n \in \N_+$ and $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the standard Lévy distribution, then $Z_1 + Z_2 + \cdots + Z_n$ has the Lévy distribution with scale parameter $n^2$. By definition this is the same as the distribution of $n^2 Z$ where $Z$ has the standard Lévy distribution. Hence the standard Lévy distribution is strictly stable, with index $\alpha = \frac{1}{2}$. The Lévy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$ is the distribution of $a + b Z$. From our basic properties above, this distribution is stable with index $\alpha = \frac{1}{2}$ and centering parameters $(n - n^2) a$ for $n \in \N_+$.
When $\alpha = \frac{1}{2}$ note that $u_\alpha(t) = \tan\left(\frac{\pi}{4}\right) = 1$. So the characteristic function in standard form with $\alpha = \frac{1}{2}$ and $\beta = 1$ is $\chi(t) = \exp\left(-\left|t\right|^{1/2}\left[1 + i \sgn(t)\right]\right)$ which is the characteristic function of the standard Lévy distribution.
Open the special distribution simulator and select the Lévy distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
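Again, a short simulation (assuming numpy and scipy, where scipy.stats.levy is the standard Lévy distribution) confirms the norming parameter $b_n = n^2$ for index $\alpha = \frac{1}{2}$. The comparison avoids the extreme upper quantiles, where the very heavy tail makes sample quantiles noisy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps = 5, 100_000
sums = stats.levy.rvs(size=(reps, n), random_state=rng).sum(axis=1)
scaled = n ** 2 * stats.levy.rvs(size=reps, random_state=rng)   # n^2 Z

q = [0.1, 0.25, 0.5, 0.75]
print(np.quantile(sums, q))
print(np.quantile(scaled, q))   # nearly identical quantiles
```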
The normal, Cauchy, and Lévy distributions are the only stable distributions for which the probability density function is known in closed form.
5.4: Infinitely Divisible Distributions
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\sgn}{\,\text{sgn}}$ $\newcommand{\bs}{\boldsymbol}$
This section discusses a theoretical topic that you may want to skip if you are a new student of probability.
Basic Theory
Infinitely divisible distributions form an important class of distributions on $\R$ that includes the stable distributions, the compound Poisson distributions, as well as several of the most important special parametric families of distribtions. Basically, the distribution of a real-valued random variable is infinitely divisible if for each $n \in \N_+$, the variable can be decomposed into the sum of $n$ independent copies of another variable. Here is the precise definition.
The distribution of a real-valued random variable $X$ is infinitely divisible if for every $n \in \N_+$, there exists a sequence of independent, identically distributed variables $(X_1, X_2, \ldots, X_n)$ such that $X_1 + X_2 + \cdots + X_n$ has the same distribution as $X$.
If the distribution of $X$ is stable then the distribution is infinitely divisible.
Proof
Let $n \in \N_+$ and let $(X_1, X_2, \ldots, X_n)$ be a sequence of independent variables, each with the same distribution as $X$. By the definition of stability, there exists $a_n \in \R$ and $b_n \in (0, \infty)$ such that $\sum_{i=1}^n X_i$ has the same distribution as $a_n + b_n X$. But then $\frac{1}{b_n} \left(\sum_{i=1}^n X_i - a_n\right) = \sum_{i=1}^n \frac{X_i - a_n/n}{b_n}$ has the same distribution as $X$. But $\left(\frac{X_i - a_n/n}{b_n}: i \in \{1, 2, \ldots, n\} \right)$ is an IID sequence, and hence the distribution of $X$ is infinitely divisible.
Suppose now that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent, identically distributed random variables, and that $N$ has a Poisson distribution and is independent of $\bs{X}$. Recall that the distribution of $\sum_{i=1}^N X_i$ is said to be compound Poisson. Like the stable distributions, the compound Poisson distributions form another important class of infinitely divisible distributions.
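Compound Poisson variables are easy to simulate, and the standard moment identities $\E(Y) = \lambda \E(X)$ and $\var(Y) = \lambda \E(X^2)$ give a quick sanity check. The sketch below (assuming numpy) uses standard exponential summands, for which $\E(X) = 1$ and $\E(X^2) = 2$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, reps = 3.0, 50_000
N = rng.poisson(lam, size=reps)                           # Poisson number of terms
Y = np.array([rng.exponential(size=k).sum() for k in N])  # compound Poisson sums

print(Y.mean(), lam * 1.0)   # E(Y) = lam * E(X)
print(Y.var(), lam * 2.0)    # var(Y) = lam * E(X^2)
```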
Suppose that $Y$ is a random variable.
1. If $Y$ is compound Poisson then $Y$ is infinitely divisible.
2. If $Y$ is infinitely divisible and takes values in $\N$ then $Y$ is compound Poisson.
Proof
1. Suppose that $Y$ is compound Poisson, so that we can take $Y = \sum_{i=1}^N X_i$ where $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent, identically distributed random variables with common characteristic function $\phi$, and where $N$ is independent of $\bs{X}$ and has the Poisson distribution with parameter $\lambda \in (0, \infty)$. The characteristic function $\chi$ of $Y$ is given by $\chi(t) = \exp(\lambda [\phi(t) - 1])$ for $t \in \R$. But then for $n \in \N_+$, $\chi(t) = \left[\exp\left(\frac{\lambda}{n}[\phi(t) - 1]\right)\right]^n, \quad t \in \R$ But $t \mapsto \exp\left(\frac{\lambda}{n}[\phi(t) - 1]\right)$ is the characteristic function of the compound Poisson distribution corresponding to $\bs{X}$ but with Poisson parameter $\lambda / n$. Restated in terms of random variables, $Y = \sum_{i=1}^n Y_i$ where $Y_i$ has the compound Poisson distribution corresponding to $\bs{X}$ with Poisson parameter $\lambda / n$.
2. The proof is from An Introduction to Probability Theory and Its Applications by William Feller, and requires some additional notation. Recall that the symbol $\asymp$ is used to connect functions that are asymptotically the same in the sense that the ratio converges to 1. Suppose now that $Y$ takes values in $\N$ and is infinitely divisible. In this case we can use probability generating functions rather than characteristic functions, so let $P$ denote the PGF of $Y$. By definition, $P(t) = \sum_{k=0}^\infty p_k t^k$ where $p_k = \P(Y = k)$ for $k \in \N$. Since $Y$ is infinitely divisible, $P^{1/n}$ is also a PGF for every $n \in \N_+$, so let $P^{1/n}(t) = \sum_{k=0}^\infty p_{n k} t^k$ where $p_{n k} \ge 0$ for $k \in \N$ and $n \in \N_+$ and $\sum_{k=0}^\infty p_{n k} = 1$ for $n \in \N_+$. As with all PGFs, the series for $P(t)$ and for $P^{1/n}(t)$ converge at least for $t \in [0, 1]$, and this interval is sufficient for a PGF to completely determine the underlying distribution. For $n \in \N_+$, we have $\sum_{k=0}^\infty p_k t^k = \left(\sum_{k=0}^\infty p_{n k} t^k\right)^n$ Expanding the series on the right and then equating coefficients of the two series term by term, we see that if $p_0 = 0$ then $p_{n 0} = 0$ which in turn would imply $p_1 = \cdots = p_{n-1} = 0$. Since this is true for all $n \in \N_+$, we would have $P$ identically 0, which is a contradiction. Hence $p_0 \gt 0$ and so $P(t) \gt 0$ for $t \in [0, 1]$ and therefore $\left[P(t) \big/ p_0\right]^{1/n} \to 1$ as $n \to \infty$ for $t \in [0, 1]$. Next recall that $\ln(1 + x) \asymp x$ as $x \downarrow 0$. It follows that for $t \in [0, 1]$, $\ln\left(\left[\frac{P(t)}{p_0}\right]^{1/n}\right) = \ln\left\{1 + \left(\left[\frac{P(t)}{p_0}\right]^{1/n} - 1\right)\right\} \asymp \left[\frac{P(t)}{p_0}\right]^{1/n} - 1 \text{ as } n \to \infty$ As a special case, when $t = 1$, we have $\ln\left[\left(1 / p_0\right)^{1/n}\right] \asymp \left(1 / p_0\right)^{1/n} - 1$ as $n \to \infty$. Hence using properties of logarithms and a bit of algebra, $\frac{\ln[P(t)] - \ln(p_0)}{-\ln(p_0)} = \frac{\ln\left(\left[P(t) \big/ p_0\right]^{1/n}\right)}{\ln\left[\left(1 / p_0\right)^{1/n}\right]} \asymp \frac{P^{1/n}(t) - p_0^{1/n}}{1 - p_0^{1/n}} \text{ as } n \to \infty$ The power series (about 0) for the expression on the right has positive coefficients, and the expression takes the value 1 when $t = 1$. Thus, the expression on the right is a PGF for each $n \in \N_+$. By the continuity theorem for convergence in distribution, it follows that the left side, which we will denote by $Q(t)$, is also a PGF. Solving, we have $P(t) = \exp(\lambda [Q(t) - 1]), \quad t \in [0, 1]$ where $\lambda = -\ln(p_0)$. This is the PGF of the distribution obtained by compounding the distribution with PGF $Q$ with the Poisson distribution with parameter $\lambda$.
Special Cases
A number of special distributions are infinitely divisible. Proofs of the results stated below are given in the individual sections.
Stable Distributions
First, the normal distribution, the Cauchy distribution, and the Lévy distribution are stable, so they are infinitely divisible. However, direct arguments give more information, because we can identify the distribution of the component variables.
The normal distribution is infinitely divisible. If $X$ has the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the normal distribution with mean $\mu/n$ and standard deviation $\sigma/\sqrt{n}$ for each $i \in \{1, 2, \ldots, n\}$.
The Cauchy distribution is infinitely divisible. If $X$ has the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the Cauchy distribution with location parameter $a/n$ and scale parameter $b/n$ for each $i \in \{1, 2, \ldots, n\}$.
Other Special Distributions
On the other hand, there are distributions that are infinitely divisible but not stable.
The gamma distribution is infinitely divisible. If $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the gamma distribution with shape parameter $k / n$ and scale parameter $b$ for each $i \in \{1, 2, \ldots, n\}$.
The chi-square distribution is infinitely divisible. If $X$ has the chi-square distribution with $k \in (0, \infty)$ degrees of freedom, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the chi-square distribution with $k / n$ degrees of freedom for each $i \in \{1, 2, \ldots, n\}$.
The Poisson distribution is infinitely divisible. If $X$ has the Poisson distribution with rate parameter $\lambda \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the Poisson distribution with rate parameter $\lambda/n$ for each $i \in \{1, 2, \ldots, n\}$.
The general negative binomial distribution on $\N$ is infinitely divisible. If $X$ has the negative binomial distribution on $\N$ with parameters $k \in (0, \infty)$ and $p \in (0, 1)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the negative binomial distribution on $\N$ with parameters $k / n$ and $p$ for each $i \in \{1, 2, \ldots, n\}$.
Since the Poisson distribution and the negative binomial distributions are distributions on $\N$, it follows from the characterization above that these distributions must be compound Poisson. Of course it is completely trivial that the Poisson distribution is compound Poisson, but it's far from obvious that the negative binomial distribution has this property. It turns out that the negative binomial distribution can be obtained by compounding the logarithmic series distribution with the Poisson distribution.
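This compounding result is easy to check numerically. In the notation of the proof above, for the negative binomial distribution with parameters $k$ and $p$ we have $p_0 = p^k$, so the Poisson parameter is $\lambda = -\ln(p_0) = -k \ln(p)$. The following sketch (assuming scipy is available; the parameter values are arbitrary) samples $N$ from the Poisson distribution with this parameter, adds $N$ independent logarithmic series variables with parameter $1 - p$, and compares the empirical frequencies with the negative binomial probability density function.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, p = 2.0, 0.4                 # negative binomial parameters
theta = 1 - p                   # logarithmic series parameter
lam = -k * np.log(p)            # Poisson parameter

size = 50_000
counts = rng.poisson(lam, size=size)
samples = np.array([stats.logser.rvs(theta, size=m, random_state=rng).sum() if m > 0 else 0
                    for m in counts])

# Empirical frequencies versus the negative binomial PDF.
for j in range(6):
    print(j, round((samples == j).mean(), 4), round(stats.nbinom.pmf(j, k, p), 4))
```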
The Wald distribution is infinitely divisible. If $X$ has the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and $X_i$ has the Wald distribution with shape parameter $\lambda / n^2$ and mean $\mu / n$ for each $i \in \{1, 2, \ldots, n\}$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\bs}{\boldsymbol}$
Power Series Distributions are discrete distributions on (a subset of) $\N$ constructed from power series. This class of distributions is important because most of the special, discrete distributions are power series distributions.
Basic Theory
Power Series
Suppose that $\bs{a} = (a_0, a_1, a_2, \ldots)$ is a sequence of nonnegative real numbers. We are interested in the power series with $\bs{a}$ as the sequence of coefficients. Recall first that the partial sum of order $n \in \N$ is $g_n(\theta) = \sum_{k=0}^n a_k \theta^k, \quad \theta \in \R$ The power series $g$ is then defined by $g(\theta) = \lim_{n \to \infty} g_n(\theta)$ for $\theta \in \R$ for which the limit exists, and is denoted $g(\theta) = \sum_{n=0}^\infty a_n \theta^n$ Note that the series converges when $\theta = 0$, and $g(0) = a_0$. Beyond this trivial case, recall that there exists $r \in [0, \infty]$ such that the series converges absolutely for $\left|\theta\right| \lt r$ and diverges for $\left|\theta\right| \gt r$. The number $r$ is the radius of convergence. From now on, we assume that $r \gt 0$. If $r \lt \infty$, the series may converge (absolutely) or may diverge to $\infty$ at the endpoint $r$. At $-r$, the series may converge absolutely, may converge conditionally, or may diverge.
Distributions
From now on, we restrict $\theta$ to the interval $[0, r)$; this interval is our parameter space. Some of the results below may hold when $r \lt \infty$ and $\theta = r$, but dealing with this case explicitly makes the exposition unnecessarily cumbersome.
Suppose that $N$ is a random variable with values in $\N$. Then $N$ has the power series distribution associated with the function $g$ (or equivalently with the sequence $\bs{a}$) and with parameter $\theta \in [0, r)$ if $N$ has probability density function $f_\theta$ given by $f_\theta(n) = \frac{a_n \theta^n}{g(\theta)}, \quad n \in \N$
Proof
To show that $f_\theta$ is a valid discrete probability density function, note that $a_n \theta^n$ is nonnegative for each $n \in \N$ and $g(\theta)$, by definition, is the normalizing constant for the sequence $\left(a_n \theta^n: n \in \N\right)$.
Note that when $\theta = 0$, the distribution is simply the point mass distribution at $0$; that is, $f_0(0) = 1$.
The distribution function $F_\theta$ is given by $F_\theta(n) = \frac{g_n(\theta)}{g(\theta)}, \quad n \in \N$
Proof
This follows immediately from the definitions since $F_\theta(n) = \sum_{k=0}^n f_\theta(k)$ for $n \in \N$
Of course, the probability density function $f_\theta$ is most useful when the power series $g(\theta)$ can be given in closed form, and similarly the distribution function $F_\theta$ is most useful when the power series and the partial sums can be given in closed form.
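When no closed form is available, $f_\theta$ can still be computed numerically by truncating the series, since the terms $a_n \theta^n$ are nonnegative and summable for $\theta \lt r$. A minimal sketch in Python (the function name and truncation point are our own choices):

```python
import numpy as np
from math import factorial

def power_series_pdf(a, theta, n_max=200):
    """Approximate f_theta(n) = a_n theta^n / g(theta) for n = 0, 1, ..., n_max,
    with g(theta) replaced by the partial sum of order n_max.  `a` is a function
    returning the coefficient a_n; theta must be inside the interval of convergence."""
    terms = np.array([a(n) * theta ** n for n in range(n_max + 1)])
    return terms / terms.sum()

# Example: a_n = 1/n! gives g(theta) = e^theta, the Poisson distribution.
f = power_series_pdf(lambda n: 1 / factorial(n), theta=2.5)
print(f[:4])   # compare with exp(-2.5) * 2.5**n / n! for n = 0, 1, 2, 3
```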
Moments
The moments of $N$ can be expressed in terms of the underlying power series function $g$, and the nicest expression is for the factorial moments. Recall that the permutation formula is $t^{(k)} = t (t - 1) \cdots (t - k + 1)$ for $t \in \R$ and $k \in \N$, and the factorial moment of $N$ of order $k \in \N$ is $\E\left(N^{(k)}\right)$.
For $\theta \in [0, r)$, the factorial moments of $N$ are as follows, where $g^{(k)}$ is the $k$th derivative of $g$. $\E\left(N^{(k)}\right) = \frac{\theta^k g^{(k)}(\theta)}{g(\theta)}, \quad k \in \N$
Proof
Recall that a power series is infinitely differentiable in the open interval of convergence, and that the derivatives can be taken term by term. Thus $\E\left(N^{(k)}\right) = \sum_{n=0}^\infty n^{(k)} \frac{a_n \theta^n}{g(\theta)} = \frac{\theta^k}{g(\theta)} \sum_{n=k}^\infty a_n n^{(k)} \theta^{n-k} = \frac{\theta^k}{g(\theta)} g^{(k)}(\theta)$
The mean and variance of $N$ are
1. $\E(N) = \theta g^\prime(\theta) \big/ g(\theta)$
2. $\var(N) = \theta^2 \left(g^{\prime\prime}(\theta) \big/ g(\theta) - \left[g^\prime(\theta) \big/ g(\theta)\right]^2\right)$
Proof
1. This follows from the previous result on factorial moments with $k = 1$.
2. This also follows from the previous result since $\var(N) = \E\left(N^{(2)}\right) + \E(N)[1 - \E(N)]$.
The probability generating function of $N$ also has a simple expression in terms of $g$.
For $\theta \in (0, r)$, the probability generating function $P$ of $N$ is given by $P(t) = \E\left(t^N\right) = \frac{g(\theta t)}{g(\theta)}, \quad t \lt \frac{r}{\theta}$
Proof
For $t \in (0, r / \theta)$, $P(t) = \sum_{n=0}^\infty t^n f_\theta(n) = \frac{1}{g(\theta)} \sum_{n=0}^\infty a_n (t \theta)^n = \frac{g(t \theta)}{g(\theta)}$
Relations
Power series distributions are closed with respect to sums of independent variables.
Suppose that $N_1$ and $N_2$ are independent, and have power series distributions relative to the functions $g_1$ and $g_2$, respectively, each with parameter value $\theta \lt \min\{r_1, r_2\}$. Then $N_1 + N_2$ has the power series distribution relative to the function $g_1 g_2$, with parameter value $\theta$.
Proof
A direct proof is possible, but there is an easy proof using probability generating functions. Recall that the PGF of the sum of independent variables is the product of the PGFs. Hence the PGF of $N_1 + N_2$ is $P(t) = P_1(t) P_2(t) = \frac{g_1(\theta t)}{g_1(\theta)} \frac{g_2(\theta t)}{g_2(\theta)} = \frac{g_1(\theta t) g_2(\theta t)}{g_1(\theta) g_2(\theta)}, \quad t \lt \min\left\{\frac{r_1}{\theta}, \frac{r_2}{\theta}\right\}$ The last expression is the PGF of the power series distribution relative to the function $g_1 g_2$, at $\theta$.
Here is a simple corollary.
Suppose that $(N_1, N_2, \ldots, N_k)$ is a sequence of independent variables, each with the same power series distribution, relative to the function $g$ and with parameter value $\theta \lt r$. Then $N_1 + N_2 + \cdots + N_k$ has the power series distribution relative to the function $g^k$ and with parameter $\theta$.
In the context of this result, recall that $(N_1, N_2, \ldots, N_k)$ is a random sample of size $k$ from the common distribution.
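For a concrete check, anticipating the geometric and negative binomial examples below: the geometric distribution on $\N$ is the power series distribution for $g(\theta) = 1/(1 - \theta)$, so a sum of $k$ independent copies should have the power series distribution for $g^k(\theta) = 1/(1 - \theta)^k$, which is negative binomial. A quick numerical sketch, assuming scipy (note that scipy's geom is supported on $\{1, 2, \ldots\}$, so we shift by 1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
k, theta = 3, 0.6           # three geometric summands; theta = 1 - p
p = 1 - theta

# Geometric on N = {0, 1, 2, ...}: shift scipy's geom down by 1.
samples = (stats.geom.rvs(p, size=(k, 100_000), random_state=rng) - 1).sum(axis=0)

# Empirical frequencies versus the negative binomial PDF with parameters k and p.
for j in range(5):
    print(j, round((samples == j).mean(), 4), round(stats.nbinom.pmf(j, k, p), 4))
```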
Examples and Special Cases
Special Distributions
The Poisson distribution with rate parameter $\lambda \in [0, \infty)$ is a power series distribution relative to the function $g(\lambda) = e^\lambda$ for $\lambda \in [0, \infty)$.
Proof
This follows directly from the definition, since the PDF of the Poisson distribution with parameter $\lambda$ is $f(n) = e^{-\lambda} \lambda^n / n!$ for $n \in \N$.
The geometric distribution on $\N$ with success parameter $p \in (0, 1]$ is a power series distribution relative to the function $g(\theta) = 1 \big/ (1 - \theta)$ for $\theta \in [0, 1)$, where $\theta = 1 - p$.
Proof
This follows directly from the definition, since the PDF of the geometric distribution on $\N$ is $f(n) = (1 - p)^n p = (1 - \theta) \theta^n$ for $n \in \N$.
For fixed $k \in (0, \infty)$, the negative binomial distribution on $\N$ with stopping parameter $k$ and success parameter $p \in (0, 1]$ is a power series distribution relative to the function $g(\theta) = 1 \big/ (1 - \theta)^k$ for $\theta \in [0, 1)$, where $\theta = 1 - p$.
Proof
This follows from the result above on sums of IID variables, but can also be seen directly, since the PDF is $f(n) = \binom{n + k - 1}{k - 1} p^k (1 - p)^{n} = (1 - \theta)^k \binom{n + k - 1}{k - 1} \theta^n, \quad n \in \N$
For fixed $n \in \N_+$, the binomial distribution with trial parameter $n$ and success parameter $p \in [0, 1)$ is a power series distribution relative to the function $g(\theta) = \left(1 + \theta\right)^n$ for $\theta \in [0, \infty)$, where $\theta = p \big/ (1 - p)$.
Proof
Note that the PDF of the binomial distribution is $f(k) = \binom{n}{k} p^k (1 - p)^{n - k} = (1 - p)^n \binom{n}{k} \left(\frac{p}{1 - p}\right)^k = \frac{1}{(1 + \theta)^n} \binom{n}{k} \theta^k, \quad k \in \{0, 1, \ldots, n\}$ where $\theta = p / (1 - p)$. This shows that the distribution is a power series distribution corresponding to the function $g(\theta) = (1 + \theta)^n$.
The logarithmic distribution with parameter $p \in [0, 1)$ is a power series distribution relative to the function $g(p) = -\ln(1 - p)$ for $p \in [0, 1)$.
Proof
This follows directly from the definition, since the PDF is $f(n) = \frac{1}{-\ln(1 - p)} \frac{1}{n} p^n, \quad n \in \N_+$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The normal distribution holds an honored role in probability and statistics, mostly because of the central limit theorem, one of the fundamental theorems that forms a bridge between the two subjects. In addition, as we will see, the normal distribution has many nice mathematical properties. The normal distribution is also called the Gaussian distribution, in honor of Carl Friedrich Gauss, who was among the first to use the distribution.
The Standard Normal Distribution
Distribution Functions
The standard normal distribution is a continuous distribution on $\R$ with probability density function $\phi$ given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2}, \quad z \in \R$
Proof that $\phi$ is a probability density function
Let $c = \int_{-\infty}^{\infty} e^{-z^2 / 2} dz$. We need to show that $c = \sqrt{2 \pi}$. That is, $\sqrt{2 \pi}$ is the normalizing constant for the function $z \mapsto e^{-z^2 / 2}$. The proof uses a nice trick: $c^2 = \int_{-\infty}^\infty e^{-x^2 / 2} \, dx \int_{-\infty}^\infty e^{-y^2 / 2} \, dy = \int_{\R^2} e^{-(x^2 + y^2) / 2} \, d(x, y)$ We now convert the double integral to polar coordinates: $x = r \cos \theta$, $y = r \sin \theta$ where $r \in [0, \infty)$ and $\theta \in [0, 2 \pi)$. So, $x^2 + y^2 = r^2$ and $d(x, y) = r \, d(r, \theta)$. Thus, converting back to iterated integrals, $c^2 = \int_0^{2 \pi} \int_0^\infty r e^{-r^2 / 2} \, dr \, d\theta$ Substituting $u = r^2 / 2$ in the inner integral gives $\int_0^\infty e^{-u} \, du = 1$ and then the outer integral is $\int_0^{2 \pi} 1 \, d\theta = 2 \pi$. Thus, $c^2 = 2 \pi$ and so $c = \sqrt{2 \pi}$.
The standard normal probability density function has the famous bell shape that is known to just about everyone.
The standard normal density function $\phi$ satisfies the following properties:
1. $\phi$ is symmetric about $z = 0$.
2. $\phi$ increases and then decreases, with mode $z = 0$.
3. $\phi$ is concave upward and then downward and then upward again, with inflection points at $z = \pm 1$.
4. $\phi(z) \to 0$ as $z \to \infty$ and as $z \to -\infty$.
Proof
These results follow from standard calculus. Note that $\phi^\prime(z) = - z \phi(z)$ (which gives (b)) and hence also $\phi^{\prime \prime}(z) = (z^2 - 1) \phi(z)$ (which gives (c)).
In the Special Distribution Simulator, select the normal distribution and keep the default settings. Note the shape and location of the standard normal density function. Run the simulation 1000 times, and compare the empirical density function to the probability density function.
The standard normal distribution function $\Phi$, given by $\Phi(z) = \int_{-\infty}^z \phi(t) \, dt = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-t^2 / 2} \, dt$ and its inverse, the quantile function $\Phi^{-1}$, cannot be expressed in closed form in terms of elementary functions. However, approximate values of these functions can be obtained from the special distribution calculator, and from most mathematics and statistics software. Indeed these functions are so important that they are considered special functions of mathematics.
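For example, $\Phi$ can be written in terms of the error function, a standard special function, as $\Phi(z) = \frac{1}{2}\left[1 + \operatorname{erf}\left(z \big/ \sqrt{2}\right)\right]$. A short check in Python, assuming scipy is available:

```python
import math
from scipy import stats

def Phi(z):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(Phi(1.96), stats.norm.cdf(1.96))   # both approximately 0.9750
print(stats.norm.ppf(0.975))             # the quantile function, approximately 1.96
```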
The standard normal distribution function $\Phi$ satisfies the following properties:
1. $\Phi(-z) = 1 - \Phi(z)$ for $z \in \R$
2. $\Phi^{-1}(p) = -\Phi^{-1}(1 - p)$ for $p \in (0, 1)$
3. $\Phi(0) = \frac{1}{2}$, so the median is 0.
Proof
Part (a) follows from the symmetry of $\phi$. Part (b) follows from part (a). Part (c) follows from part (a) with $z = 0$.
In the special distribution calculator, select the normal distribution and keep the default settings.
1. Note the shape of the density function and the distribution function.
2. Find the first and third quartiles.
3. Compute the interquartile range.
In the special distribution calculator, select the normal distribution and keep the default settings. Find the quantiles of the following orders for the standard normal distribution:
1. $p = 0.001$, $p = 0.999$
2. $p = 0.05$, $p = 0.95$
3. $p = 0.1$, $p = 0.9$
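For comparison, these quantiles can also be computed in software; a sketch using scipy's implementation of $\Phi^{-1}$:

```python
from scipy import stats

for p in [0.001, 0.05, 0.1, 0.9, 0.95, 0.999]:
    print(p, stats.norm.ppf(p))
# By the symmetry result above, ppf(p) = -ppf(1 - p).
```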
Moments
Suppose that random variable $Z$ has the standard normal distribution.
The mean and variance of $Z$ are
1. $\E(Z) = 0$
2. $\var(Z) = 1$
Proof
1. Of course, by symmetry, if $Z$ has a mean, the mean must be 0, but we have to argue that the mean exists. Actually it's not hard to compute the mean directly. Note that $\E(Z) = \int_{-\infty}^\infty z \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \, dz = \int_{-\infty}^0 z \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \, dz + \int_0^\infty z \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \, dz$ The integrals on the right can be evaluated explicitly using the simple substitution $u = z^2 / 2$. The result is $\E(Z) = -1/\sqrt{2 \pi} + 1/\sqrt{2 \pi} = 0$.
2. Note that $\var(Z) = \E(Z^2) = \int_{-\infty}^\infty z^2 \phi(z) \, dz$ Integrate by parts, using the parts $u = z$ and $dv = z \phi(z) \, dz$. Thus $du = dz$ and $v = -\phi(z)$. Note that $z \phi(z) \to 0$ as $z \to \infty$ and as $z \to -\infty$. Thus, the integration by parts formula gives $\var(Z) = \int_{-\infty}^\infty \phi(z) \, dz = 1$.
In the Special Distribution Simulator, select the normal distribution and keep the default settings. Note the shape and size of the mean $\pm$ standard deviation bar. Run the simulation 1000 times, and compare the empirical mean and standard deviation to the true mean and standard deviation.
More generally, we can compute all of the moments. The key is the following recursion formula.
For $n \in \N_+$, $\E\left(Z^{n+1}\right) = n \E\left(Z^{n-1}\right)$
Proof
First we use the differential equation in the proof of the PDF properties above, namely $\phi^\prime(z) = - z \phi(z)$. $\E\left(Z^{n+1}\right) = \int_{-\infty}^\infty z^{n+1} \phi(z) \, dz = \int_{-\infty}^\infty z^n z \phi(z) \, dz = - \int_{-\infty}^\infty z^n \phi^\prime(z) \, dz$ Now we integrate by parts, with $u = z^n$ and $dv = \phi^\prime(z) \, dz$ to get $\E\left(Z^{n+1}\right) = -z^n \phi(z) \bigg|_{-\infty}^\infty + \int_{-\infty}^\infty n z^{n-1} \phi(z) \, dz = 0 + n \E\left(Z^{n-1}\right)$
The moments of the standard normal distribution are now easy to compute.
For $n \in \N$,
1. $\E \left( Z^{2 n} \right) = 1 \cdot 3 \cdots (2n - 1) = (2 n)! \big/ (n! 2^n)$
2. $\E \left( Z^{2 n + 1} \right) = 0$
Proof
The result follows from the mean and variance and recursion relation above.
1. Since $\E(Z) = 0$ it follows that $\E\left(Z^n\right) = 0$ for every odd $n \in \N$.
2. Since $\E\left(Z^2\right) = 1$, it follows that $\E\left(Z^4\right) = 1 \cdot 3$ and then $\E\left(Z^6\right) = 1 \cdot 3 \cdot 5$, and so forth. You can use induction, if you like, for a more formal proof.
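A numerical check of the even-moment formula, assuming scipy is available (the moment method computes raw moments of the standard normal distribution):

```python
from math import factorial
from scipy import stats

for n in range(1, 5):
    exact = factorial(2 * n) / (factorial(n) * 2 ** n)   # E(Z^{2n}) from the formula
    print(2 * n, exact, stats.norm.moment(2 * n))        # 1, 3, 15, 105
```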
Of course, the fact that the odd-order moments are 0 also follows from the symmetry of the distribution. The following theorem gives the skewness and kurtosis of the standard normal distribution.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = 3$
Proof
1. This follows immediately from the symmetry of the distribution. Directly, since $Z$ has mean 0 and variance 1, $\skw(Z) = \E\left(Z^3\right) = 0$.
2. Since $\E(Z) = 0$ and $\var(Z) = 1$, $\kur(Z) = \E\left(Z^4\right) = 3$.
Because of the last result (and the use of the standard normal distribution literally as a standard), the excess kurtosis of a random variable is defined to be the ordinary kurtosis minus 3. Thus, the excess kurtosis of the normal distribution is 0.
Many other important properties of the normal distribution are most easily obtained using the moment generating function or the characteristic function.
The moment generating function $m$ and characteristic function $\chi$ of $Z$ are given by
1. $m(t) = e^{t^2 / 2}$ for $t \in \R$.
2. $\chi(t) = e^{-t^2 / 2}$ for $t \in \R$.
Proof
1. Note that $m(t) = \E(e^{t Z}) = \int_{-\infty}^\infty e^{t z} \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \, dz = \int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp\left(-\frac{1}{2} z^2 + t z\right) \, dz$ We complete the square in $z$ to get $-\frac{1}{2} z^2 + t z = -\frac{1}{2}(z - t)^2 + \frac{1}{2} t^2$. Thus we have $\E(e^{t Z}) = e^{\frac{1}{2} t^2} \int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp\left[-\frac{1}{2}(z - t)^2\right] \, dz$ In the integral, if we use the simple substitution $u = z - t$ then the integral becomes $\int_{-\infty}^\infty \phi(u) \, du = 1$. Hence $\E\left(e^{t Z}\right) = e^{\frac{1}{2} t^2}$.
2. This follows from (a) since $\chi(t) = m(i t)$.
Thus, the standard normal distribution has the curious property that the characteristic function is a multiple of the probability density function: $\chi = \sqrt{2 \pi} \phi$ The moment generating function can be used to give another derivation of the moments of $Z$, since we know that $\E\left(Z^n\right) = m^{(n)}(0)$.
The General Normal Distribution
The general normal distribution is the location-scale family associated with the standard normal distribution.
Suppose that $\mu \in \R$ and $\sigma \in (0, \infty)$ and that $Z$ has the standard normal distribution. Then $X = \mu + \sigma Z$ has the normal distribution with location parameter $\mu$ and scale parameter $\sigma$.
Distribution Functions
Suppose that $X$ has the normal distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. The basic properties of the density function and distribution function of $X$ follow from general results for location-scale families.
The probability density function $f$ of $X$ is given by $f(x) = \frac{1}{\sigma} \phi\left(\frac{x - \mu}{\sigma}\right) = \frac{1}{\sqrt{2 \, \pi} \, \sigma} \exp \left[ -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right], \quad x \in \R$
Proof
This follows from the change of variables formula corresponding to the transformation $x = \mu + \sigma z$.
The probability density function $f$ satisfies the following properties:
1. $f$ is symmetric about $x = \mu$.
2. $f$ increases and then decreases with mode $x = \mu$.
3. $f$ is concave upward then downward then upward again, with inflection points at $x = \mu \pm \sigma$.
4. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
Proof
These properties follow from the corresponding properties of $\phi$.
In the special distribution simulator, select the normal distribution. Vary the parameters and note the shape and location of the probability density function. With your choice of parameter settings, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Let $F$ denote the distribution function of $X$, and as above, let $\Phi$ denote the standard normal distribution function.
The distribution function $F$ and quantile function $F^{-1}$ satisfy the following properties:
1. $F(x) = \Phi \left( \frac{x - \mu}{\sigma} \right)$ for $x \in \R$.
2. $F^{-1}(p) = \mu + \sigma \, \Phi^{-1}(p)$ for $p \in (0, 1)$.
3. $F(\mu) = \frac{1}{2}$ so the median occurs at $x = \mu$.
Proof
Part (a) follows since $X = \mu + \sigma Z$. Parts (b) and (c) follow from (a).
In the special distribution calculator, select the normal distribution. Vary the parameters and note the shape of the density function and the distribution function.
Moments
Suppose again that $X$ has the normal distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. As the notation suggests, the location and scale parameters are also the mean and standard deviation, respectively.
The mean and variance of $X$ are
1. $\E(X) = \mu$
2. $\var(X) = \sigma^2$
Proof
This follows from the representation $X = \mu + \sigma Z$ and basic properties of expected value and variance.
So the parameters of the normal distribution are usually referred to as the mean and standard deviation rather than location and scale. The central moments of $X$ can be computed easily from the moments of the standard normal distribution. The ordinary (raw) moments of $X$ can be computed from the central moments, but the formulas are a bit messy.
For $n \in \N$,
1. $\E \left[ (X - \mu)^{2 n} \right] = 1 \cdot 3 \cdots (2n - 1) \sigma^{2n} = (2 n)! \sigma^{2n} \big/ (n! 2^n)$
2. $\E \left[ (X - \mu)^{2 \, n + 1} \right] = 0$
All of the odd central moments of $X$ are 0, a fact that also follows from the symmetry of the probability density function.
In the special distribution simulator select the normal distribution. Vary the mean and standard deviation and note the size and location of the mean/standard deviation bar. With your choice of parameter settings, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The following exercise gives the skewness and kurtosis.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = 3$
Proof
The skewness and kurtosis of a variable are defined in terms of the standard score, so these results follow from the corresponding result for $Z$.
The moment generating function $M$ and characteristic function $\chi$ of $X$ are given by
1. $M(t) = \exp \left( \mu t + \frac{1}{2} \sigma^2 t^2 \right)$ for $t \in \R$.
2. $\chi(t) =\exp \left( i \mu t - \frac{1}{2} \sigma^2 t^2 \right)$ for $t \in \R$
Proof
1. This follows from the representation $X = \mu + \sigma Z$, basic properties of expected value, and the MGF of $Z$ in (12): $\E\left(e^{t X}\right) = \E\left(e^{t \mu + t \sigma Z}\right) = e^{t \mu} \E\left(e^{t \sigma Z}\right) = e^{t \mu} e^{\frac{1}{2} t^2 \sigma^2} = e^{t \mu + \frac{1}{2} \sigma^2 t^2}$
2. This follows from (a) since $\chi(t) = M(i t)$.
Relations
The normal family of distributions satisfies two very important properties: invariance under linear transformations of the variable and invariance with respect to sums of independent variables. The first property is essentially a restatement of the fact that the normal distribution is a location-scale family.
Suppose that $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$. If $a \in \R$ and $b \in \R \setminus \{0\}$, then $a + b X$ is normally distributed with mean $a + b \mu$ and variance $b^2 \sigma^2$.
Proof
The MGF of $a + b X$ is $\E\left[e^{t (a + b X)}\right] = e^{ta} \E\left[e^{(t b) X}\right] = e^{ta} e^{\mu (t b) + \sigma^2 (t b)^2 / 2} = e^{(a + b \mu)t + b^2 \sigma^2 t^2 / 2}$ which we recognize as the MGF of the normal distribution with mean $a + b \mu$ and variance $b^2 \sigma^2$.
Recall that in general, if $X$ is a random variable with mean $\mu$ and standard deviation $\sigma \gt 0$, then $Z = (X - \mu) / \sigma$ is the standard score of $X$. A corollary of the last result is that if $X$ has a normal distribution then the standard score $Z$ has a standard normal distribution. Conversely, any normally distributed variable can be constructed from a standard normal variable.
Standard score.
1. If $X$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$ then $Z = \frac{X - \mu}{\sigma}$ has the standard normal distribution.
2. If $Z$ has the standard normal distribution and if $\mu \in \R$ and $\sigma \in (0, \infty)$, then $X = \mu + \sigma Z$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$.
Suppose that $X_1$ and $X_2$ are independent random variables, and that $X_i$ is normally distributed with mean $\mu_i$ and variance $\sigma_i^2$ for $i \in \{1, 2\}$. Then $X_1 + X_2$ is normally distributed with
1. $\E(X_1 + X_2) = \mu_1 + \mu_2$
2. $\var(X_1 + X_2) = \sigma_1^2 + \sigma_2^2$
Proof
The MGF of $X_1 + X_2$ is the product of the MGFs, so $\E\left(\exp\left[t (X_1 + X_2)\right]\right) = \exp\left(\mu_1 t + \sigma_1^2 t^2 / 2\right) \exp\left(\mu_2 t + \sigma_2^2 t^2 / 2\right) = \exp\left[\left(\mu_1 + \mu_2\right)t + \left(\sigma_1^2 + \sigma_2^2\right) t^2 / 2\right]$ which we recognize as the MGF of the normal distribution with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$.
This theorem generalizes to a sum of $n$ independent normal variables. The important part is that the sum is still normal; the expressions for the mean and variance are standard results that hold for the sum of independent variables generally. As a consequence of this result and the one for linear transformations, it follows that the normal distribution is stable.
The normal distribution is stable. Specifically, suppose that $X$ has the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$. If $(X_1, X_2, \ldots, X_n)$ are independent copies of $X$, then $X_1 + X_2 + \cdots + X_n$ has the same distribution as $\left(n - \sqrt{n}\right) \mu + \sqrt{n} X$, namely normal with mean $n \mu$ and variance $n \sigma^2$.
Proof
As a consequence of the result for sums, $X_1 + X_2 + \cdots + X_n$ has the normal distribution with mean $n \mu$ and variance $n \sigma^2$. As a consequence of the result for linear transformations, $\left(n - \sqrt{n}\right) \mu + \sqrt{n} X$ has the normal distribution with mean $\left(n - \sqrt{n}\right) \mu + \sqrt{n} \mu = n \mu$ and variance $\left(\sqrt{n}\right)^2 \sigma^2 = n \sigma^2$.
All stable distributions are infinitely divisible, so the normal distribution belongs to this family as well. For completeness, here is the explicit statement:
The normal distribution is infinitely divisible. Specifically, if $X$ has the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$, then for $n \in \N_+$, $X$ has the same distribution as $X_1 + X_2 + \cdots + X_n$ where $(X_1, X_2, \ldots, X_n)$ are independent, and each has the normal distribution with mean $\mu / n$ and variance $\sigma^2 / n$.
Finally, the normal distribution belongs to the family of general exponential distributions.
Suppose that $X$ has the normal distribution with mean $\mu$ and variance $\sigma^2$. The distribution is a two-parameter exponential family with natural parameters $\left( \frac{\mu}{\sigma^2}, -\frac{1}{2 \, \sigma^2} \right)$, and natural statistics $\left(X, X^2\right)$.
Proof
Expanding the square, the normal PDF can be written in the form $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{\mu^2}{2 \sigma^2}\right) \exp\left(\frac{\mu}{\sigma^2} x - \frac{1}{2 \sigma^2} x^2 \right), \quad x \in \R$ so the result follows from the definition of the general exponential family.
A number of other special distributions studied in this chapter are constructed from normally distributed variables. These include
• The lognormal distribution
• The folded normal distribution, which includes the half normal distribution as a special case
• The Rayleigh distribution
• The Maxwell distribution
• The Lévy distribution
Also, as mentioned at the beginning of this section, the importance of the normal distribution stems in large part from the central limit theorem, one of the fundamental theorems of probability. By virtue of this theorem, the normal distribution is connected to many other distributions, by means of limits and approximations, including the special distributions in the following list. Details are given in the individual sections.
• The binomial distribution
• The negative binomial distribution
• The Poisson distribution
• The gamma distribution
• The chi-square distribution
• The Student $t$ distribution
• The Irwin-Hall distribution
Computational Exercises
Suppose that the volume of beer in a bottle of a certain brand is normally distributed with mean 0.5 liter and standard deviation 0.01 liter.
1. Find the probability that a bottle will contain at least 0.48 liter.
2. Find the volume that corresponds to the 95th percentile.
Answer
Let $X$ denote the volume of beer in liters.
1. $\P(X \gt 0.48) = 0.9772$
2. $x_{0.95} = 0.51645$
A metal rod is designed to fit into a circular hole on a certain assembly. The radius of the rod is normally distributed with mean 1 cm and standard deviation 0.002 cm. The radius of the hole is normally distributed with mean 1.01 cm and standard deviation 0.003 cm. The machining processes that produce the rod and the hole are independent. Find the probability that the rod is too big for the hole.
Answer
Let $X$ denote the radius of the rod and $Y$ the radius of the hole. $\P(Y - X \lt 0) = 0.0028$
The weight of a peach from a certain orchard is normally distributed with mean 8 ounces and standard deviation 1 ounce. Find the probability that the combined weight of 5 peaches exceeds 45 ounces.
Answer
Let $X$ denote the combined weight of the 5 peaches, in ounces. $\P(X \gt 45) = 0.0127$
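The answers above are easy to reproduce in software. A sketch using scipy's normal distribution functions:

```python
from math import sqrt
from scipy import stats

# Beer bottles: volume ~ N(0.5, 0.01)
print(1 - stats.norm.cdf(0.48, loc=0.5, scale=0.01))    # P(X > 0.48), about 0.9772
print(stats.norm.ppf(0.95, loc=0.5, scale=0.01))        # 95th percentile, about 0.5164

# Rod and hole: Y - X ~ N(0.01, sqrt(0.002^2 + 0.003^2)), by the results on
# linear transformations and sums of independent normal variables.
print(stats.norm.cdf(0, loc=0.01, scale=sqrt(0.002**2 + 0.003**2)))   # about 0.0028

# Five peaches: combined weight ~ N(40, sqrt(5))
print(1 - stats.norm.cdf(45, loc=40, scale=sqrt(5)))    # about 0.0127
```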
A Further Generalization
In some settings, it's convenient to consider a constant as having a normal distribution (with mean being the constant and variance 0, of course). This convention simplifies the statements of theorems and definitions in these settings. Of course, the formulas for the probability density function and the distribution function do not hold for a constant, but the other results involving the moment generating function, linear transformations, and sums are still valid. Moreover, the result for linear transformations would hold for all $a$ and $b$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\vc}{\text{vc}}$ $\newcommand{\bs}{\boldsymbol}$
The multivariate normal distribution is among the most important of multivariate distributions, particularly in statistical inference and the study of Gaussian processes such as Brownian motion. The distribution arises naturally from linear transformations of independent normal variables. In this section, we consider the bivariate normal distribution first, because explicit results can be given and because graphical interpretations are possible. Then, with the aid of matrix notation, we discuss the general multivariate distribution.
The Bivariate Normal Distribution
The Standard Distribution
Recall that the probability density function $\phi$ of the standard normal distribution is given by $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}, \quad z \in \R$ The corresponding distribution function is denoted $\Phi$ and is considered a special function in mathematics: $\Phi(z) = \int_{-\infty}^z \phi(x) \, dx = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-x^2/2} \, dx, \quad z \in \R$ Finally, the moment generating function $m$ is given by $m(t) = \E\left(e^{t Z}\right) = \exp\left[\frac{1}{2} \var(t Z)\right] = e^{t^2/2}, \quad t \in \R$
Suppose that $Z$ and $W$ are independent random variables, each with the standard normal distribution. The distribution of $(Z, W)$ is known as the standard bivariate normal distribution.
The basic properties of the standard bivariate normal distribution follow easily from independence and properties of the (univariate) normal distribution. Recall first that the graph of a function $f: \R^2 \to \R$ is a surface. For $c \in \R$, the set of points $\left\{(x, y) \in \R^2: f(x, y) = c\right\}$ is the level curve of $f$ at level $c$. The graph of $f$ can be understood by means of the level curves.
The probability density function $\phi_2$ of the standard bivariate normal distribution is given by $\phi_2(z, w) = \frac{1}{2 \pi} e^{-\frac{1}{2}\left(z^2 + w^2\right)}, \quad (z, w) \in \R^2$
1. The level curves of $\phi_2$ are circles centered at the origin.
2. The mode of the distribution is $(0, 0)$.
3. $\phi_2$ is concave downward on $\left\{(z, w) \in \R^2: z^2 + w^2 \lt 1\right\}$
Proof
By independence, $\phi_2(z, w) = \phi(z) \phi(w)$ for $(z, w) \in \R^2$. Parts (a) and (b) are clear. For part (c), the second derivative matrix of $\phi_2$ is $\left[\begin{matrix} \phi_2(z, w)\left(z^2 - 1\right) & \phi_2(z, w) z w \ \phi_2(z, w) z w & \phi_2(z, w)\left(w^2 - 1\right) \end{matrix}\right]$ with determinant $\phi_2^2(z, w) \left(1 - z^2 - w^2\right)$. The determinant is positive and the diagonal entries negative on the circular region $\left\{(z, w) \in \R^2: z^2 + w^2 \lt 1\right\}$, so the matrix is negative definite on this region.
Clearly $\phi_2$ has a number of symmetry properties as well: $\phi_2(z, w)$ is symmetric in $z$ about 0 so that $\phi_2(-z, w) = \phi_2(z, w)$; $\phi_2(z, w)$ is symmetric in $w$ about 0 so that $\phi_2(z, -w) = \phi_2(z, w)$; $\phi_2(z, w)$ is symmetric in $(z, w)$ so that $\phi_2(z, w) = \phi_2(w, z)$. In short, $\phi_2$ has the classical bell shape that we associate with normal distributions.
Open the bivariate normal experiment and keep the default settings to get the standard bivariate normal distribution. Run the experiment 1000 times. Observe the cloud of points in the scatterplot, and compare the empirical density functions to the probability density functions.
Suppose that $(Z, W)$ has the standard bivariate normal distribution. The moment generating function $m_2$ of $(Z, W)$ is given by $m_2(s, t) = \E\left[\exp(s Z + t W)\right] = \exp\left[\frac{1}{2} \var(s Z + t W)\right] = \exp\left[\frac{1}{2}\left(s^2 + t^2\right)\right], \quad (s, t) \in \R^2$
Proof
By independence, $m_2(s, t) = m(s) m(t)$ for $(s, t) \in \R^2$ where $m$ is the standard normal MGF.
The General Distribution
The general bivariate normal distribution can be constructed by means of an affine transformation on a standard bivariate normal vector. The distribution has 5 parameters. As we will see, two are location parameters, two are scale parameters, and one is a correlation parameter.
Suppose that $(Z, W)$ has the standard bivariate normal distribution. Let $\mu, \, \nu \in \R$; $\sigma, \, \tau \in (0, \infty)$; and $\rho \in (-1, 1)$, and let $X$ and $Y$ be new random variables defined by \begin{align} X & = \mu + \sigma \, Z \ Y & = \nu + \tau \rho Z + \tau \sqrt{1 - \rho^2} W \end{align} The joint distribution of $(X, Y)$ is called the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$.
We can use the change of variables formula to find the joint probability density function.
Suppose that $(X, Y)$ has the bivariate normal distribution with the parameters $(\mu, \nu, \sigma, \tau, \rho)$ as specified above. The joint probability density function $f$ of $(X, Y)$ is given by $f(x, y) = \frac{1}{2 \pi \sigma \tau \sqrt{1 - \rho^2}} \exp \left\{ -\frac{1}{2 (1 - \rho^2)} \left[ \frac{(x - \mu)^2}{\sigma^2} - 2 \rho \frac{(x - \mu)(y - \nu)}{\sigma \tau} + \frac{(y - \nu)^2}{\tau^2} \right] \right\}, \quad (x, y) \in \R^2$
1. The level curves of $f$ are ellipses centered at $(\mu, \nu)$.
2. The mode of the distribution is $(\mu, \nu)$.
Proof
Consider the transformation that defines $(x, y)$ from $(z, w)$ in the definition. The inverse transformation is given by \begin{align} z & = \frac{x - \mu}{\sigma} \ w & = \frac{y - \nu}{\tau \, \sqrt{1 - \rho^2}} - \rho \frac{x - \mu}{\sigma \, \sqrt{1 - \rho^2}} \end{align} The Jacobian of the inverse transformation is $\frac{\partial(z, w)}{\partial(x, y)} = \frac{1}{\sigma \tau \sqrt{1 - \rho^2}}$ Note that the Jacobian is a constant, because the transformation is affine. The result now follows from the independence of $Z$ and $W$, and the change of variables formula.
1. Note that $f$ has the form $f(x, y) = a \exp\left[- b g(x, y)\right]$ where $a$ and $b$ are positive constants and $g(x, y) = \frac{(x - \mu)^2}{\sigma^2} - 2 \rho \frac{(x - \mu)(y - \nu)}{\sigma \tau} + \frac{(y - \nu)^2}{\tau^2}, \quad (x, y) \in \R^2$ The graph of $g$ is a paraboloid opening upward. The level curves of $f$ are the same as the level curves of $g$ (but at different levels of course).
2. The maximum of $f$ occurs at the minimum of $g$, at the point $(\mu, \nu)$.
The following theorem gives fundamental properties of the bivariate normal distribution.
Suppose that $(X, Y)$ has the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$ as specified above. Then
1. $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$.
2. $Y$ is normally distributed with mean $\nu$ and standard deviation $\tau$.
3. $\cor(X, Y) = \rho$.
4. $X$ and $Y$ are independent if and only if $\rho = 0$.
Proof
These results can be proved from the probability density function, but it's easier and more helpful to use the transformation definition. So, assume that $(X, Y)$ is defined in terms of the standard bivariate normal pair $(Z, W)$ as in the definition.
1. $X = \mu + \sigma Z$ so $X$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. This is a basic property of the normal distribution, and indeed is the way that the general normal variable is constructed from a standard normal variable.
2. Since $Z$ and $W$ are independent and each has the standard normal distribution, $Y = \nu + \tau \rho Z + \tau \sqrt{1 - \rho^2} W$ is normally distributed by another basic property. Because $Z$ and $W$ have mean 0, it follows from the linear property of expected value that $\E(Y) = \nu$. Similarly, since $Z$ and $W$ have variance 1, it follows from basic properties of variance that $\var(Y) = \tau^2 \rho^2 + \tau^2 (1 - \rho^2) = \tau^2$.
3. Using the bi-linear property of covariance and independence we have $\cov(X, Y) = \rho \tau \sigma \, \cov(Z, Z) = \rho \tau \sigma$, and hence from (a) and (b), $\cor(X, Y) = \rho$.
4. As a general property, recall that if $X$ and $Y$ are independent then $\cor(X, Y) = 0$. Conversely, if $\rho = 0$ then $X = \mu + \sigma Z$ and $Y = \nu + \tau W$. Since $Z$ and $W$ are independent, so are $X$ and $Y$.
Thus, two random variables with a joint normal distribution are independent if and only if they are uncorrelated.
In the bivariate normal experiment, change the standard deviations of $X$ and $Y$ with the scroll bars. Watch the change in the shape of the probability density functions. Now change the correlation with the scroll bar and note that the probability density functions do not change. For various values of the parameters, run the experiment 1000 times. Observe the cloud of points in the scatterplot, and compare the empirical density functions to the probability density functions.
In the case of perfect correlation ($\rho = 1$ or $\rho = -1$), the distribution of $(X, Y)$ is also said to be bivariate normal, but degenerate. In this case, we know from our study of covariance and correlation that $(X, Y)$ takes values on the regression line $\left\{(x, y) \in \R^2: y = \nu + \rho \frac{\tau}{\sigma} (x - \mu)\right\}$, and hence does not have a probability density function (with respect to Lebesgue measure on $\R^2$). Degenerate normal distributions will be discussed in more detail below.
In the bivariate normal experiment, run the experiment 1000 times with the values of $\rho$ given below and selected values of $\sigma$ and $\tau$. Observe the cloud of points in the scatterplot and compare the empirical density functions to the probability density functions.
1. $\rho \in \{0, 0.3, 0.5, 0.7, 1\}$
2. $\rho \in \{-0.3, -0.5, -0.7, -1\}$
The conditional distributions are also normal.
Suppose that $(X, Y)$ has the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$ as specified above.
1. For $x \in \R$, the conditional distribution of $Y$ given $X = x$ is normal with mean $\E(Y \mid X = x) = \nu + \rho \frac{\tau}{\sigma} (x - \mu)$ and variance $\var(Y \mid X = x) = \tau^2 \left(1 - \rho^2\right)$.
2. For $y \in \R$, the conditional distribution of $X$ given $Y = y$ is normal with mean $\E(X \mid Y = y) = \mu + \rho \frac{\sigma}{\tau} (y - \nu)$ and variance $\var(X \mid Y = y) = \sigma^2 \left(1 - \rho^2\right)$.
Proof from density functions
By symmetry, we need only prove (a). The conditional PDF of $Y$ given $X = x$ is $y \mapsto f(x, y) \big/ g(x)$ where $f$ is the joint PDF, and where $g$ is the PDF of $X$, namely the normal PDF with mean $\mu$ and standard deviation $\sigma$. The result then follows after some algebra.
Proof from random variables
Again, we only need to prove (a). We can assume that $(X, Y)$ is defined in terms of a standard normal pair $(Z, W)$ as in the definition. Hence $Y = \nu + \rho \tau \frac{X - \mu}{\sigma} + \tau \sqrt{1 - \rho^2} W$ Since that $X$ and $W$ are independent, the conditional distribution of $Y$ given $X = x$ is the distribution of $\nu + \rho \tau \frac{x - \mu}{\sigma} + \tau \sqrt{1 - \rho^2} W$. The latter distribution is normal, with mean and variance specified in the theorem.
Note that the conditional variances do not depend on the value of the given variable.
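These conditional moments are easy to check by simulation: generate a large sample of bivariate normal pairs, keep the pairs with $X$ near a fixed value $x_0$, and compare the sample mean and variance of the corresponding $Y$ values with the formulas above. A sketch in Python with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, nu, sigma, tau, rho = 0.0, 1.0, 2.0, 0.5, 0.7

# Construct (X, Y) from a standard bivariate normal pair, as in the definition.
z, w = rng.standard_normal((2, 1_000_000))
x = mu + sigma * z
y = nu + tau * rho * z + tau * np.sqrt(1 - rho**2) * w

x0 = 1.0
sel = np.abs(x - x0) < 0.05                # condition on X near x0
print(y[sel].mean(), nu + rho * (tau / sigma) * (x0 - mu))    # conditional mean
print(y[sel].var(), tau**2 * (1 - rho**2))                    # conditional variance
```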
In the bivariate normal experiment, set the standard deviation of $X$ to 1.5, the standard deviation of $Y$ to 0.5, and the correlation to 0.7.
1. Run the experiment 100 times.
2. For each run, compute $\E(Y \mid X = x)$ the predicted value of $Y$ for the given the value of $X$.
3. Over all 100 runs, compute the square root of the average of the squared errors between the predicted value of $Y$ and the true value of $Y$.
You may be perplexed by the lack of symmetry in how $(X, Y)$ is defined in terms of $(Z, W)$ in the original definition. Note however that the distribution is completely determined by the 5 parameters. If we define $X^\prime = \mu + \sigma \rho Z + \sigma \sqrt{1 - \rho^2} W$ and $Y^\prime = \nu + \tau Z$ then $(X^\prime, Y^\prime)$ has the same distribution as $(X, Y)$, namely the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$ (although, of course $(X^\prime, Y^\prime)$ and $(X, Y)$ are different random vectors). There are other ways to define the same distribution as an affine transformation of $(Z, W)$—the situation will be clarified in the next subsection.
Suppose that $(X, Y)$ has the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$. Then $(X, Y)$ has moment generating function $M$ given by $M(s, t) = \E\left[\exp\left(sX + tY\right)\right] = \exp\left[\E(s X + t Y) + \frac{1}{2} \var(s X + t Y)\right] = \exp\left[\mu s + \nu t + \frac{1}{2} \left( \sigma^2 s^2 + 2 \rho \sigma \tau s t + \tau^2 t^2\right) \right], \quad (s, t) \in \R^2$
Proof
Using the representation of $(X, Y)$ in terms of the standard bivariate normal vector $(Z, W)$ in the definition and collecting terms gives $M(s, t) = \E\left(\exp\left[(\mu s + \nu t) + (\sigma s + \rho \tau t) Z + \tau \sqrt{1 - \rho^2} t W\right] \right)$ Hence from independence we have $M(s, t) = \exp(\mu s + \nu t) m(\sigma s + \rho \tau t) m\left(\tau \sqrt{1 - \rho^2} t\right)$ where $m$ is the standard normal MGF. Substituting and simplifying gives the result.
We showed above that if $(X, Y)$ has a bivariate normal distribution then the marginal distributions of $X$ and $Y$ are also normal. The converse is not true.
Suppose that $(X, Y)$ has probability density function $f$ given by $f(x, y) = \frac{1}{2 \pi} e^{-(x^2 + y^2)/2} \left[1 + x y e^{-(x^2 + y^2 - 2)/2}\right], \quad (x, y) \in \R^2$
1. $X$ and $Y$ each have standard normal distributions.
2. $(X, Y)$ does not have a bivariate normal distribution.
Proof
Note that $f(x, y) = \phi_2(x, y) [1 + u(x) u(y)]$ for $(x, y) \in \R^2$, where $\phi_2$ is the bivariate standard normal PDF and where $u$ is given by $u(t) = t e^{-(t^2 - 1) / 2}$ for $t \in \R$. From simple calculus, $u$ is an odd function, has a local maximum at $t = 1$, and $u(t) \to 0$ as $t \to \infty$. In particular, $|u(t)| \le 1$ for $t \in \R$ and hence $f(x, y) \ge 0$ for $(x, y) \in \R^2$. Next, a helpful trick is that we can write integrals of $f$ as expected values of functions of a standard normal pair $(Z, W)$. In particular, $\int_{\R^2} f(x, y) d(x, y) = \E[1 + u(Z) u(W)] = 1 + \E[u(Z)] \E[u(W)] = 1$ since $\E[u(Z)] = \E[u(W)] = 0$ by the symmetry of the standard normal distribution and the fact that $u$ is odd. Hence $f$ is a valid PDF on $\R^2$. Suppose now that $(X, Y)$ has PDF $f$.
1. The PDF of $X$ at $x \in \R$ is $\int_\R f(x, y) dy = \int_\R \phi_2(x, y) dy + u(x) \phi(x) \E[u(W)] = \phi(x)$ where as usual, $\phi$ is the standard normal PDF on $\R$. By symmetry, $Y$ also has the standard normal distribution.
2. $f$ does not have the form of a bivariate normal PDF and hence $(X, Y)$ does not have a bivariate normal distribution.
Transformations
Like its univariate counterpart, the family of bivariate normal distributions is preserved under two types of transformations on the underlying random vector: affine transformations and sums of independent vectors. We start with a preliminary result on affine transformations that should help clarify the original definition. Throughout this discussion, we assume that the parameter vector $(\mu, \nu, \sigma, \tau, \rho)$ satisfies the usual conditions: $\mu, \, \nu \in \R$, and $\sigma, \, \tau \in (0, \infty)$, and $\rho \in (-1, 1)$.
Suppose that $(Z, W)$ has the standard bivariate normal distribution. Let $X = a_1 + b_1 Z + c_1 W$ and $Y = a_2 + b_2 Z + c_2 W$ where the coefficients are in $\R$ and $b_1 c_2 - c_1 b_2 \ne 0$. Then $(X, Y)$ has a bivariate normal distribution with parameters given by
1. $\E(X) = a_1$
2. $\E(Y) = a_2$
3. $\var(X) = b_1^2 + c_1^2$
4. $\var(Y) = b_2^2 + c_2^2$
5. $\cov(X, Y) = b_1 b_2 + c_1 c_2$
Proof
A direct proof using the change of variables formula is possible, but our goal is to show that $(X, Y)$ can be written in the form given above in the definition. First, parts (a)–(e) follow from basic properties of expected value, variance, and covariance. So, in the notation used in the definition, we have $\mu = a_1$, $\nu = a_2$, $\sigma = \sqrt{b_1^2 + c_1^2}$, $\tau = \sqrt{b_2^2 + c_2^2}$, and $\rho = \frac{b_1 b_2 + c_1 c_2}{\sqrt{b_1^2 + c_1^2} \sqrt{b_2^2 + c_2^2}}$ (Note from the assumption on the coefficients that $b_1^2 + c_1^2 \gt 0$ and $b_2^2 + c_2^2 \gt 0$). Simple algebra shows that $\sqrt{1 - \rho^2} = \frac{\left|b_1 c_2 - c_1 b_2\right|}{\sqrt{b_1^2 + c_1^2} \sqrt{b_2^2 + c_2^2}}$ Next we define \begin{align} U & = \frac{b_1 Z + c_1 W}{\sqrt{b_1^2 + c_1^2}} \ V & = \frac{c_1 Z - b_1 W}{\sqrt{b_1^2 + c_1^2}} \end{align} The transformation that defines $(u, v)$ from $(z, w)$ is its own inverse, and its Jacobian has absolute value 1. Hence it follows that $(U, V)$ has the same joint distribution as $(Z, W)$, namely the standard bivariate normal distribution. Simple algebra shows that \begin{align} X & = a_1 + \sqrt{b_1^2 + c_1^2} U = \mu + \sigma U \ Y & = a_2 + \frac{b_1 b_2 + c_1 c_2}{\sqrt{b_1^2 + c_1^2}} U + \frac{c_1 b_2 - b_1 c_2}{\sqrt{b_1^2 + c_1^2}} V = \nu + \tau \rho U \pm \tau \sqrt{1 - \rho^2} V \end{align} where the sign of the last term is the sign of $c_1 b_2 - b_1 c_2$. Since $(U, -V)$ has the same joint distribution as $(U, V)$, we can replace $V$ with $-V$ if necessary and take the $+$ sign. This is the form given in the definition, so it follows that $(X, Y)$ has a bivariate normal distribution.
Now it is easy to show more generally that the bivariate normal distribution is closed with respect to affine transformations.
Suppose that $(X, Y)$ has the bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$. Define $U = a_1 + b_1 X + c_1 Y$ and $V = a_2 + b_2 X + c_2 Y$, where the coefficients are in $\R$ and $b_1 c_2 - c_1 b_2 \ne 0$. Then $(U, V)$ has a bivariate normal distribution with parameters as follows:
1. $\E(U) = a_1 + b_1 \mu + c_1 \nu$
2. $\E(V) = a_2 + b_2 \mu + c_2 \nu$
3. $\var(U) = b_1^2 \sigma^2 + c_1^2 \tau^2 + 2 b_1 c_1 \rho \sigma \tau$
4. $\var(V) = b_2^2 \sigma^2 + c_2^2 \tau^2 + 2 b_2 c_2 \rho \sigma \tau$
5. $\cov(U, V) = b_1 b_2 \sigma^2 + c_1 c_2 \tau^2 + (b_1 c_2 + b_2 c_1) \rho \sigma \tau$
Proof
From our original construction, we can write $X = \mu + \sigma Z$ and $Y = \nu + \tau \rho Z + \tau \sqrt{1 - \rho^2} W$ where $(Z, W)$ has the standard bivariate normal distribution. Then by simple substitution, $U = A_1 + B_1 Z + C_1 W$ and $V = A_2 + B_2 Z + C_2 W$ where $A_i = a_i + b_i \mu + c_i\nu$, $B_i = b_i \sigma + c_i \tau \rho$, $C_i = c_i \tau \sqrt{1 - \rho^2}$ for $i \in \{1, 2\}$. By simple algebra, $B_1 C_2 - C_1 B_2 = \sigma \tau \sqrt{1 - \rho^2}(b_1 c_2 - c_1 b_2) \ne 0$ Hence $(U, V)$ has a bivariate normal distribution from the previous theorem. Parts (a)–(e) follow from basic properties of expected value, variance, and covariance.
The bivariate normal distribution is preserved with respect to sums of independent variables.
Suppose that $(X_i, Y_i)$ has the bivariate normal distribution with parameters $(\mu_i, \nu_i, \sigma_i, \tau_i, \rho_i)$ for $i \in \{1, 2\}$, and that $(X_1, Y_1)$ and $(X_2, Y_2)$ are independent. Then $(X_1 + X_2, Y_1 + Y_2)$ has the bivariate normal distribution with parameters given by
1. $\E(X_1 + X_2) = \mu_1 + \mu_2$
2. $\E(Y_1 + Y_2) = \nu_1 + \nu_2$
3. $\var(X_1 + X_2) = \sigma_1^2 + \sigma_2^2$
4. $\var(Y_1 + Y_2) = \tau_1^2 + \tau_2^2$
5. $\cov(X_1 + X_2, Y_1 + Y_2)$ = $\rho_1 \sigma_1 \tau_1 + \rho_2 \sigma_2 \tau_2$
Proof
Let $M_i$ denote the MGF of $(X_i, Y_i)$ for $i \in \{1, 2\}$ and let $M$ denote the MGF of $(X_1 + X_2, Y_1 + Y_2)$. By independence, $M(s, t) = M_1(s, t) M_2(s, t)$ for $(s, t) \in \R^2$. Using the bivariate normal MGF, and basic properties of the exponential function, $M(s, t) = \exp\left(\E\left[s(X_1 + X_2) + t(Y_1 + Y_2)\right] + \frac{1}{2} \var\left[s(X_1 + X_2) + t(Y_1 + Y_2)\right]\right), \quad (s, t) \in \R^2$ Of course from basic properties of expected value, variance, and covariance, \begin{align} \E\left[s(X_1 + X_2) + t(Y_1 + Y_2)\right] & = s(\mu_1 + \mu_2) + t(\nu_1 + \nu_2) \ \var\left[s(X_1 + X_2) + t(Y_1 + Y_2)\right] & = s^2(\sigma_1^2 + \sigma_2^2) + t^2(\tau_1^2 + \tau_2^2) + 2 s t (\rho_1 \sigma_1 \tau_1 + \rho_2 \sigma_2 \tau_2) \end{align} Substituting gives the result.
The following result is important in the simulation of normal variables.
Suppose that $(Z, W)$ has the standard bivariate normal distribution. Define the polar coordinates $(R, \Theta)$ of $(Z, W)$ by the equations $Z = R \, \cos \Theta$, $W = R \, \sin \Theta$ where $R \ge 0$ and $0 \le \Theta \lt 2 \, \pi$. Then
1. $R$ has probability density function $g$ given by $g(r) = r \, e^{-\frac{1}{2} r^2}$ for $r \in [0, \infty)$.
2. $\Theta$ is uniformly distributed on $[0, 2 \pi)$.
3. $R$ and $\Theta$ are independent.
Proof
The Jacobian of the polar coordinate transformation that gives $(z, w)$ from $(r, \theta)$ is $r$, as we all remember from calculus. Hence by the change of variables theorem, the joint PDF $h$ of $(R, \Theta)$ in terms of the standard bivariate normal PDF $\phi_2$ is given by $h(r, \theta) = \phi_2(r \cos \theta, r \sin \theta) r = \frac{1}{2 \pi} r e^{-r^2 /2}, \quad (r, \theta) \in [0, \infty) \times [0, 2 \pi)$ The result then follows from the factorization theorem for independent random variables.
The distribution of $R$ is known as the standard Rayleigh distribution, named for William Strutt, Lord Rayleigh. The Rayleigh distribution is studied in more detail in a separate section.
Since the quantile function $\Phi^{-1}$ of the normal distribution cannot be given in a simple, closed form, we cannot use the usual random quantile method of simulating a normal random variable. However, the quantile method works quite well to simulate a Rayleigh variable, and of course simulating uniform variables is trivial. Hence we have a way of simulating a standard bivariate normal vector with a pair of random numbers (which, you will recall, are independent random variables, each with the standard uniform distribution, that is, the uniform distribution on $[0, 1)$).
Suppose that $U$ and $V$ are independent random variables, each with the standard uniform distribution. Let $R = \sqrt{-2 \ln U}$ and $\Theta = 2 \pi V$. Define $Z = R \cos \Theta$ and $W = R \sin \Theta$. Then $(Z, W)$ has the standard bivariate normal distribution.
Proof
The Rayleigh distribution function $F$ is given by $F(r) = 1 - e^{-r^2 / 2}$ for $r \in [0, \infty)$ and hence the quantile function is given by $F^{-1}(p) = \sqrt{-2 \ln(1 - p)}$ for $p \in [0, 1)$. Hence if $U$ has the standard uniform distribution, then $\sqrt{-2 \ln (1 - U)}$ has the Rayleigh distribution. But $1 - U$ also has the standard uniform distribution so $R = \sqrt{-2 \ln U }$ also has the Rayleigh distribution. If $V$ has the standard uniform distribution then of course $2 \pi V$ is uniformly distributed on $[0, 2 \pi)$. If $U$ and $V$ are independent, then so are $R$ and $\Theta$. By the previous theorem, if $Z = R \cos \Theta$ and $W = R \sin \Theta$, then $(Z, W)$ has the standard bivariate normal distribution.
Of course, if we can simulate $(Z, W)$ with a standard bivariate normal distribution, then we can simulate $(X, Y)$ with the general bivariate normal distribution, with parameter $(\mu, \nu, \sigma, \tau, \rho)$ by definition (5), namely $X = \mu + \sigma Z$, $Y = \nu + \tau \rho Z + \tau \sqrt{1 - \rho^2} W$.
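The Box-Muller construction above translates directly into code. Here is a minimal sketch using NumPy (the function names, the seed, and the parameter values are ours, chosen for illustration): first the standard bivariate normal pair from two random numbers, then the general bivariate normal via the affine map just described.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seed chosen only for reproducibility

def standard_bivariate_normal(size):
    """Box-Muller: a standard bivariate normal pair from two random numbers."""
    u = rng.random(size)               # U uniform on [0, 1); U = 0 has probability 0
    v = rng.random(size)               # V uniform on [0, 1)
    r = np.sqrt(-2 * np.log(u))        # R has the standard Rayleigh distribution
    theta = 2 * np.pi * v              # Theta is uniform on [0, 2 pi)
    return r * np.cos(theta), r * np.sin(theta)

def bivariate_normal(mu, nu, sigma, tau, rho, size):
    """General bivariate normal: X = mu + sigma Z, Y = nu + tau rho Z + tau sqrt(1 - rho^2) W."""
    z, w = standard_bivariate_normal(size)
    x = mu + sigma * z
    y = nu + tau * rho * z + tau * np.sqrt(1 - rho**2) * w
    return x, y

x, y = bivariate_normal(1.0, -2.0, 2.0, 1.0, 0.5, size=100_000)
print(x.mean(), y.mean(), np.corrcoef(x, y)[0, 1])  # approximately 1, -2, 0.5
```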
The General Multivariate Normal Distribution
The general multivariate normal distribution is a natural generalization of the bivariate normal distribution studied above. The exposition is very compact and elegant using expected value and covariance matrices, and would be horribly complex without these tools. Thus, this section requires some prerequisite knowledge of linear algebra. In particular, recall that $\boldsymbol{A}^T$ denotes the transpose of a matrix $\boldsymbol{A}$ and that we identify a vector in $\R^n$ with the corresponding $n \times 1$ column vector.
The Standard Distribution
Suppose that $\bs Z = (Z_1, Z_2, \ldots, Z_n)$ is a vector of independent random variables, each with the standard normal distribution. Then $\bs Z$ is said to have the $n$-dimensional standard normal distribution.
1. $\E(\bs Z) = \bs{0}$ (the zero vector in $\R^n$).
2. $\vc(\bs Z) = I$ (the $n \times n$ identity matrix).
$\bs Z$ has probability density function $\phi_n$ given by $\phi_n(\bs z) = \frac{1}{(2 \pi)^{n/2}} \exp \left( -\frac{1}{2} \bs z \cdot \bs z \right) = \frac{1}{(2 \pi)^{n/2}} \exp\left(-\frac{1}{2} \sum_{i=1}^n z_i^2\right), \quad \bs z = (z_1, z_2, \ldots, z_n) \in \R^n$ where as usual, $\phi$ is the standard normal PDF.
Proof
By independence, $\phi_n(\bs z) = \phi(z_1) \phi(z_2) \cdots \phi(z_n)$.
$\bs Z$ has moment generating function $m_n$ given by $m_n(\bs t) = \E\left[\exp(\bs t \cdot \bs Z)\right] = \exp\left[\frac{1}{2} \var(\bs t \cdot \bs Z)\right] = \exp \left( \frac{1}{2} \bs t \cdot \bs t \right) = \exp\left(\frac{1}{2} \sum_{i=1}^n t_i^2\right), \quad \bs t = (t_1, t_2, \ldots, t_n) \in \R^n$
Proof
By independence, $\E\left[\exp(\bs t \cdot \bs Z)\right] = m(t_1) m(t_2) \cdots m(t_n)$ where $m$ is the standard normal MGF.
The General Distribution
Suppose that $\bs Z$ has the $n$-dimensional standard normal distribution. Suppose also that $\bs \mu \in \R^n$ and that $\bs A \in \R^{n \times n}$ is invertible. The random vector $\bs X = \bs \mu + \bs A \bs Z$ is said to have an $n$-dimensional normal distribution.
1. $\E(\bs X) = \bs \mu$.
2. $\vc(\bs X) = \bs A \, \bs A^T$.
Proof
1. From the linear property of expected value, $\E(\bs X) = \bs \mu + \bs A \E(\bs Z) = \bs \mu$.
2. From basic properties of the variance-covariance matrix, $\vc(\bs X) = \bs A \, \vc(\bs Z) \, \bs A^T = \bs A \bs A^T$.
In the context of this result, recall that the variance-covariance matrix $\vc(\bs X) = \bs A \bs A^T$ is symmetric and positive definite (and hence also invertible). We will now see that the multivariate normal distribution is completely determined by the expected value vector $\bs \mu$ and the variance-covariance matrix $\bs V$, and hence these give the basic parameters of the distribution.
Suppose that $\bs X$ has an $n$-dimensional normal distribution with expected value vector $\bs \mu$ and variance-covariance matrix $\bs V$. The probability density function $f$ of $\bs X$ is given by
$f(\bs x) = \frac{1}{(2 \pi)^{n/2} \sqrt{\det(\bs V)}} \exp \left[ -\frac{1}{2} (\bs x - \bs \mu) \cdot \bs V^{-1} (\bs x - \bs \mu) \right], \quad \bs x \in \R^n$
Proof
From the definition we can assume that $\bs X = \bs \mu + \bs A \bs Z$ where $\bs A \in \R^{n \times n}$ is invertible and $\bs Z$ has the $n$-dimensional standard normal distribution, so that $\bs V = \bs A \bs A^T$. The inverse of the transformation $\bs x = \bs \mu + \bs A \bs z$ is $\bs z = \bs A^{-1}(\bs x - \bs \mu)$ and hence the Jacobian of the inverse transformation is $\det\left(\bs A^{-1}\right) = 1 \big/ \det(\bs A)$. Using the multivariate change of variables theorem, $f(\bs x) = \frac{1}{\left|\det(\bs A)\right|} \phi_n\left[\bs A^{-1}(\bs x - \bs \mu)\right] = \frac{1}{(2 \pi)^{n/2} \left|\det(\bs A)\right|} \exp\left[-\frac{1}{2} \bs A^{-1}(\bs x - \bs \mu) \cdot \bs A^{-1}(\bs x - \bs \mu)\right], \quad \bs x \in \R^n$ But $\det(\bs V) = \det\left(\bs A \bs A^T\right) = \det(\bs A) \det\left(\bs A^T\right) = \left[\det(\bs A)\right]^2$ and hence $\left|\det(\bs A)\right| = \sqrt{\det(\bs V)}$. Also, \begin{align} \bs A^{-1}(\bs x - \bs \mu) \cdot \bs A^{-1}(\bs x - \bs \mu) & = \left[\bs A^{-1}(\bs x - \bs \mu)\right]^T \bs A^{-1}(\bs x - \bs \mu) = (\bs x - \bs \mu)^T \left(\bs A^{-1}\right)^T \bs A^{-1} (\bs x - \bs \mu) \\ & = (\bs x - \bs \mu)^T \left(\bs A^T\right)^{-1} \bs A^{-1} (\bs x - \bs \mu) = (\bs x - \bs \mu)^T \left(\bs A \bs A^T\right)^{-1} (\bs x - \bs \mu) \\ & = (\bs x - \bs \mu)^T \bs V^{-1} (\bs x - \bs \mu) = (\bs x - \bs \mu) \cdot \bs V^{-1} (\bs x - \bs \mu) \end{align}
Suppose again that $\bs X$ has an $n$-dimensional normal distribution with expected value vector $\bs \mu$ and variance-covariance matrix $\bs V$. The moment generating function $M$ of $\bs X$ is given by $M(\bs t) = \E\left[\exp(\bs t \cdot \bs X)\right] = \exp\left[\E(\bs t \cdot \bs X) + \frac{1}{2} \var(\bs t \cdot \bs X)\right] = \exp \left( \bs t \cdot \bs \mu + \frac{1}{2} \bs t \cdot \bs V \bs t \right), \quad \bs t \in \R^n$
Proof
Once again we start with the definition and assume that $\bs X = \bs \mu + \bs A \bs Z$ where $\bs A \in \R^{n \times n}$ is invertible. We have $\E\left[\exp(\bs t \cdot \bs X)\right] = \exp(\bs t \cdot \bs \mu) \E\left[\exp(\bs t \cdot \bs A \bs Z)\right]$. But $\bs t \cdot \bs A \bs Z = \left(\bs A^T \bs t\right) \cdot \bs Z$ so using the MGF of $\bs Z$ we have $\E\left[\exp(\bs t \cdot \bs A \bs Z)\right] = \exp\left[\frac{1}{2} \left(\bs A^T \bs t\right) \cdot \left(\bs A^T \bs t\right)\right] = \exp\left[\frac{1}{2} \bs t^T \bs A \bs A^T \bs t\right] = \exp\left[\frac{1}{2} \bs t \cdot \bs V \bs t\right]$
Of course, the moment generating function completely determines the distribution. Thus, if a random vector $\bs X$ in $\R^n$ has a moment generating function of the form given above, for some $\bs \mu \in \R^n$ and symmetric, positive definite $\bs V \in \R^{n \times n}$, then $\bs X$ has the $n$-dimensional normal distribution with mean $\bs \mu$ and variance-covariance matrix $\bs V$.
Note again that in the representation $\bs X = \bs \mu + \bs A \bs Z$, the distribution of $\bs X$ is uniquely determined by the expected value vector $\bs \mu$ and the variance-covariance matrix $\bs V = \bs A \bs A^T$, but not by $\bs \mu$ and $\bs A$. In general, for a given positive definite matrix $\bs V$, there are many invertible matrices $\bs A$ such that $\bs V = \bs A \bs A^T$ (the matrix $\bs A$ is a bit like a square root of $\bs V$). A theorem in matrix theory states that there is a unique lower triangular matrix $\bs L$ with positive diagonal entries that has this property; the factorization $\bs V = \bs L \bs L^T$ is known as the Cholesky decomposition of $\bs V$. The representation $\bs X = \bs \mu + \bs L \bs Z$ is known as the canonical representation of $\bs X$.
If $\bs X = (X, Y)$ has bivariate normal distribution with parameters $(\mu, \nu, \sigma, \tau, \rho)$, then the lower triangular matrix $\bs L$ such that $\bs L \bs L^T = \vc(\bs X)$ is $\bs L = \left[\begin{matrix} \sigma & 0 \\ \tau \rho & \tau \sqrt{1 - \rho^2} \end{matrix} \right]$
Proof
Note that $\bs L \bs L^T = \left[\begin{matrix} \sigma^2 & \sigma \tau \rho \\ \sigma \tau \rho & \tau^2 \end{matrix}\right] = \vc(X, Y)$
Note that the matrix $\bs L$ above gives the canonical representation of $(X, Y)$ in terms of the standard normal vector $(Z, W)$ in the original definition, namely $X = \mu + \sigma Z$, $Y = \nu + \tau \rho Z + \tau \sqrt{1 - \rho^2} W$.
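In practice, the canonical representation is exactly how multivariate normal vectors are simulated: factor $\bs V$ and apply the affine map to a standard normal vector. Here is a minimal sketch using NumPy (the mean vector and covariance matrix are illustrative values of ours):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

mu = np.array([1.0, -2.0, 0.5])          # mean vector (illustrative)
V = np.array([[4.0, 1.0, 0.5],           # variance-covariance matrix:
              [1.0, 3.0, 0.8],           # symmetric and positive definite
              [0.5, 0.8, 2.0]])

L = np.linalg.cholesky(V)                # lower triangular L with L L^T = V

Z = rng.standard_normal((100_000, len(mu)))  # rows are standard normal vectors
X = mu + Z @ L.T                             # each row is mu + L z

print(X.mean(axis=0))                    # approximately mu
print(np.cov(X, rowvar=False))           # approximately V
```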
If the matrix $\bs A \in \R^{n \times n}$ in the definition is not invertible, then the variance-covariance matrix $\bs V = \bs A \bs A^T$ is symmetric, but only positive semi-definite. The random vector $\bs X = \bs \mu + \bs A \bs Z$ takes values in a lower dimensional affine subspace of $\R^n$ that has measure 0 relative to $n$-dimensional Lebesgue measure $\lambda_n$. Thus, $\bs X$ does not have a probability density function relative to $\lambda_n$, and so the distribution is degenerate. However, the formula for the moment generating function still holds. Degenerate normal distributions are discussed in more detail below.
Transformations
The multivariate normal distribution is invariant under two basic types of transformations on the underlying random vectors: affine transformations (with linearly independent rows), and concatenation of independent vectors. As simple corollaries of these two results, the normal distribution is also invariant with respect to subsequences of the random vector, re-orderings of the terms in the random vector, and sums of independent random vectors. The main tool that we will use is the moment generating function. We start with the first main result on affine transformations.
Suppose that $\bs X$ has the $n$-dimensional normal distribution with mean vector $\bs \mu$ and variance-covariance matrix $\bs V$. Suppose also that $\bs{a} \in \R^m$ and that $\bs A \in \R^{m \times n}$ has linearly independent rows (thus, $m \le n$). Then $\bs{Y} = \bs{a} + \bs A \bs X$ has an $m$-dimensional normal distribution, with
1. $\E(\bs{Y}) = \bs{a} + \bs A \bs \mu$
2. $\vc(\bs{Y}) = \bs A \bs V \bs A^T$
Proof
For $\bs t \in \R^m$, $\E\left[\exp(\bs t \cdot \bs{Y})\right] = \exp(\bs t \cdot \bs{a}) \E\left[\exp(\bs t \cdot \bs A \bs X)\right]$. But $\bs t \cdot \bs A \bs X = \left(\bs A^T \bs t\right) \cdot \bs X$, so using the MGF of $\bs X$ we have $\E\left[\exp(\bs t \cdot \bs A \bs X)\right] = \exp\left[\left(\bs A^T \bs t\right) \cdot \bs \mu + \frac{1}{2}\left(\bs A^T \bs t\right) \cdot \bs V \left(\bs A^T \bs t\right)\right]$ But $\left(\bs A^T \bs t\right) \cdot \bs \mu = \bs t \cdot \bs A \bs \mu$ and $\left(\bs A^T \bs t\right) \cdot \bs V \left(\bs A^T \bs t\right) = \bs t \cdot \left(\bs A \bs V \bs A^T\right) \bs t$, so letting $\bs{b} = \bs{a} + \bs A \bs \mu$ and $\bs{U} = \bs A \bs V \bs A^T$ and putting the pieces together, we have $\E\left[\exp( \bs t \cdot \bs{Y})\right] = \exp\left[ \bs{b} \cdot \bs t + \frac{1}{2} \bs t \cdot \bs{U} \bs t \right]$.
A clearly important special case is $m = n$, which generalizes the definition. Thus, if $\bs{a} \in \R^n$ and $\bs A \in \R^{n \times n}$ is invertible, then $\bs{Y} = \bs{a} + \bs A \bs X$ has an $n$-dimensional normal distribution. Here are some other corollaries:
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ has an $n$-dimensional normal distribution. If $\{i_1, i_2, \ldots, i_m\}$ is a set of distinct indices, then $\bs{Y} = \left(X_{i_1}, X_{i_2}, \ldots, X_{i_m}\right)$ has an $m$-dimensional normal distribution.
Proof
Let $\bs A \in \R^{m \times n}$ be the matrix defined by the condition that for $j \in \{1, 2, \ldots, m\}$, row $j$ has 1 in position $i_j$ and has 0 in all other positions. Then $\bs A$ has linearly independent rows (since the indices $i_j$ are distinct) and $\bs{Y} = \bs A \bs X$. Thus the result follows from the general theorem on affine transformations.
In the context of the previous result, if $\bs X$ has mean vector $\bs \mu$ and variance-covariance matrix $\bs V$, then $\bs{Y}$ has mean vector $\bs A \bs \mu$ and variance-covariance matrix $\bs A \bs V \bs A^T$, where $\bs A$ is the 0-1 matrix defined in the proof. As simple corollaries, note that if $\bs X = (X_1, X_2, \ldots, X_n)$ has an $n$-dimensional normal distribution, then any permutation of the coordinates of $\bs X$ also has an $n$-dimensional normal distribution, and $(X_1, X_2, \ldots, X_m)$ has an $m$-dimensional normal distribution for any $m \le n$. Here is a slight extension of the last statement.
Suppose that $\bs X$ is a random vector in $\R^m$, $\bs{Y}$ is a random vector in $\R^n$, and that $(\bs X, \bs{Y})$ has an $(m + n)$-dimensional normal distribution. Then
1. $\bs X$ has an $m$-dimensional normal distribution.
2. $\bs{Y}$ has an $n$-dimensional normal distribution.
3. $\bs X$ and $\bs{Y}$ are independent if and only if $\cov(\bs X, \bs{Y}) = \bs{0}$ (the $m \times n$ zero matrix).
Proof
As we already noted, parts (a) and (b) are a simple consequence of the previous theorem. Thus, we just need to verify (c). In block form, note that $\vc(\bs X, \bs{Y}) = \left[\begin{matrix} \vc(\bs X) & \cov(\bs X, \bs{Y}) \\ \cov(\bs{Y}, \bs X) & \vc(\bs{Y})\end{matrix} \right]$ Now let $M$ denote the moment generating function of $(\bs X, \bs{Y})$, $M_1$ the MGF of $\bs X$, and $M_2$ the MGF of $\bs{Y}$. From the form of the MGF, note that $M(\bs{s}, \bs t) = M_1(\bs{s}) M_2(\bs t)$ for all $\bs{s} \in \R^m$, $\bs t \in \R^n$ if and only if $\cov(\bs X, \bs{Y}) = \bs{0}$, the $m \times n$ zero matrix.
Next is the converse to part (c) of the previous result: concatenating independent normally distributed vectors produces another normally distributed vector.
Suppose that $\bs X$ has the $m$-dimensional normal distribution with mean vector $\bs \mu$ and variance-covariance matrix $\bs{U}$, $\bs{Y}$ has the $n$-dimensional normal distribution with mean vector $\bs{\nu}$ and variance-covariance matrix $\bs V$, and that $\bs X$ and $\bs{Y}$ are independent. Then $\bs Z = (\bs X, \bs{Y})$ has the $m + n$-dimensional normal distribution with
1. $\E(\bs X, \bs{Y}) = (\bs \mu, \bs{\nu})$
2. $\vc(\bs X, \bs{Y}) = \left[\begin{matrix} \vc(\bs X) & \bs{0} \\ \bs{0}^T & \vc(\bs{Y})\end{matrix}\right]$ where $\bs{0}$ is the $m \times n$ zero matrix.
Proof
For $\bs t \in \R^{m + n}$, write $\bs t$ in block form as $\bs t = (\bs{r}, \bs{s})$ where $\bs{r} \in \R^m$ and $\bs{s} \in \R^n$. By independence, the MGF of $(\bs X, \bs{Y})$ is $\E\left(\exp\left[\bs t \cdot (\bs X, \bs{Y})\right]\right) = \E\left[\exp(\bs{r} \cdot \bs X + \bs{s} \cdot \bs{Y})\right] = \E\left[\exp(\bs{r} \cdot \bs X)\right] \E\left[\exp(\bs{s} \cdot \bs{Y})\right]$ Using the formula for the normal MGF we have $\E\left(\exp\left[\bs t \cdot (\bs X, \bs{Y})\right]\right) = \exp \left( \bs{r} \cdot \bs \mu + \frac{1}{2} \bs{r} \cdot \bs{U} \, \bs{r} \right) \exp \left( \bs{s} \cdot \bs{\nu} + \frac{1}{2} \bs{s} \cdot \bs V \, \bs{s} \right) = \exp\left[(\bs{r} \cdot \bs \mu + \bs{s} \cdot \bs{\nu}) + \frac{1}{2} (\bs{r} \cdot \bs{U} \bs{r} + \bs{s} \cdot \bs V \bs{s})\right]$ But $\bs{r} \cdot \bs \mu + \bs{s} \cdot \bs{\nu} = \bs t \cdot (\bs \mu, \bs{\nu})$ and $\bs{r} \cdot \bs{U} \bs{r} + \bs{s} \cdot \bs V \bs{s} = \bs t \cdot \left[\begin{matrix} \vc(\bs X) & \bs{0} \\ \bs{0}^T & \vc(\bs{Y})\end{matrix}\right] \bs t$ so the proof is complete.
Just as in the univariate case, the normal family of distributions is closed with respect to sums of independent variables. The proof follows easily from the previous result.
Suppose that $\bs X$ has the $n$-dimensional normal distribution with mean vector $\bs \mu$ and variance-covariance matrix $\bs{U}$, $\bs{Y}$ has the $n$-dimensional normal distribution with mean vector $\bs{\nu}$ and variance-covariance matrix $\bs V$, and that $\bs X$ and $\bs{Y}$ are independent. Then $\bs X + \bs{Y}$ has the $n$-dimensional normal distribution with
1. $\E(\bs X + \bs{Y}) = \bs \mu + \bs{\nu}$
2. $\vc(\bs X + \bs{Y}) = \bs{U} + \bs V$
Proof
From the previous result $(\bs X, \bs{Y})$ has a $2 n$-dimensional normal distribution. Moreover, $\bs X + \bs{Y} = \bs A(\bs X, \bs{Y})$ where $\bs A$ is the $n \times 2 n$ matrix defined by the condition that for $i \in \{1, 2, \ldots, n\}$, row $i$ has 1 in positions $i$ and $n + i$ and $0$ in all other positions. The matrix $\bs A$ has linearly independent rows and thus the result follows from the general theorem on affine transformations. Parts (a) and (b) are standard results for sums of independent random vectors.
We close with a trivial corollary to the general result on affine transformation, but this corollary points the way to a further generalization of the multivariate normal distribution that includes the degenerate distributions.
Suppose that $\bs X$ has an $n$-dimensional normal distribution with mean vector $\bs \mu$ and variance-covariance matrix $\bs V$, and that $\bs{a} \in \R^n$ with $\bs{a} \ne \bs{0}$. Then $Y = \bs{a} \cdot \bs X$ has a (univariate) normal distribution with
1. $\E(Y) = \bs{a} \cdot \bs \mu$
2. $\var(Y) = \bs{a} \cdot \bs V \bs{a}$
Proof
Note again that $\bs{a} \cdot \bs X = \bs{a}^T \bs X$. Since $\bs{a} \ne \bs{0}$, the single row of $\bs{a}^T$ is linearly independent and hence the result follows from the general theorem on affine transformations.
A Further Generalization
The last result can be used to give a simple, elegant definition of the multivariate normal distribution that includes the degenerate distributions as well as the ones we have considered so far. First we will adopt our general definition of the univariate normal distribution that includes constant random variables.
A random vector $\bs X$ that takes values in $\R^n$ has an $n$-dimensional normal distribution if and only if $\bs{a} \cdot \bs X$ has a univariate normal distribution for every $\bs{a} \in \R^n$.
Although an $n$-dimensional normal distribution may not have a probability density function with respect to $n$-dimensional Lebesgue measure $\lambda_n$, the form of the moment generating function is unchanged.
Suppose that $\bs X$ has mean vector $\bs \mu$ and variance-covariance matrix $\bs V$, and that $\bs X$ has an $n$-dimensional normal distribution. The moment generating function of $\bs X$ is given by $\E\left[\exp(\bs t \cdot \bs X)\right] = \exp\left[\E(\bs t \cdot \bs X) + \frac{1}{2} \var(\bs t \cdot \bs X)\right] = \exp \left( \bs t \cdot \bs \mu + \frac{1}{2} \bs t \cdot \bs V \, \bs t \right), \quad \bs t \in \R^n$
Proof
If $\bs t \in \R^n$, then by definition, $\bs t \cdot \bs X$ has a univariate normal distribution. Thus $\E\left[\exp(\bs t \cdot \bs X)\right]$ is simply the moment generating function of $\bs t \cdot \bs X$, evaluated at the argument 1. The results then follow from the univariate MGF.
Our new general definition really is a generalization.
Suppose that $\bs X$ has an $n$-dimensional normal distribution in the sense of the general definition, and that the distribution of $\bs X$ has a probability density function on $\R^n$ with respect to Lebesgue measure $\lambda_n$. Then $\bs X$ has an $n$-dimensional normal distribution in the sense of our original definition.
Proof
This follows from our previous results, since both the MGF and the PDF completely determine the distribution of $\bs X$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.07%3A_The_Multivariate_Normal_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In this section we will study a family of distributions that has special importance in probability and statistics. In particular, the arrival times in the Poisson process have gamma distributions, and the chi-square distribution in statistics is a special case of the gamma distribution. Also, the gamma distribution is widely used to model physical quantities that take positive values.
The Gamma Function
Before we can study the gamma distribution, we need to introduce the gamma function, a special function whose values will play the role of the normalizing constants.
Definition
The gamma function $\Gamma$ is defined as follows $\Gamma(k) = \int_0^\infty x^{k-1} e^{-x} \, dx, \quad k \in (0, \infty)$ The function is well defined, that is, the integral converges for any $k \gt 0$. On the other hand, the integral diverges to $\infty$ for $k \le 0$.
Proof
Note that $\int_0^\infty x^{k-1} e^{-x} \, dx = \int_0^1 x^{k-1} e^{-x} \, dx + \int_1^\infty x^{k-1} e^{-x} \, dx$ For the first integral on the right, $\int_0^1 x^{k-1} e^{-x} \, dx \le \int_0^1 x^{k-1} \, dx = \frac{1}{k}$ For the second integral, let $n = \lceil k \rceil$. Then $\int_1^\infty x^{k-1} e^{-x} \, dx \le \int_1^\infty x^{n-1} e^{-x} \, dx$ The last integral can be evaluated explicitly by integrating by parts, and is finite for every $n \in \N_+$.
Finally, if $k \le 0$, note that $\int_0^1 x^{k-1} e^{-x} \, dx \ge e^{-1} \int_0^1 x^{k-1} \, dx = \infty$
The gamma function was first introduced by Leonhard Euler.
The (lower) incomplete gamma function is defined by $\Gamma(k, x) = \int_0^x t^{k-1} e^{-t} \, dt, \quad k, x \in (0, \infty)$
Properties
Here are a few of the essential properties of the gamma function. The first is the fundamental identity.
$\Gamma(k + 1) = k \, \Gamma(k)$ for $k \in (0, \infty)$.
Proof
This follows from integrating by parts, with $u = x^k$ and $dv = e^{-x} \, dx$: $\Gamma(k + 1) = \int_0^\infty x^k e^{-x} \, dx = \left(-x^k e^{-x}\right)_0^\infty + \int_0^\infty k x^{k-1} e^{-x} \, dx = 0 + k \, \Gamma(k)$
Applying this result repeatedly gives $\Gamma(k + n) = k (k + 1) \cdots (k + n - 1) \Gamma(k), \quad n \in \N_+$ It's clear that the gamma function is a continuous extension of the factorial function.
$\Gamma(k + 1) = k!$ for $k \in \N$.
Proof
This follows from the fundamental identity and the fact that $\Gamma(1) = 1$.
The values of the gamma function for non-integer arguments generally cannot be expressed in simple, closed forms. However, there are exceptions.
$\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$.
Proof
By definition, $\Gamma\left(\frac{1}{2}\right) = \int_0^\infty x^{-1/2} e^{-x} \, dx$ Substituting $x = z^2 / 2$ gives $\Gamma\left(\frac{1}{2}\right) = \int_0^\infty \sqrt{2} e^{-z^2/2} \, dz = 2 \sqrt{\pi} \int_0^\infty \frac{1}{\sqrt{2 \pi}} e^{-z^2/2} \, dz$ But the last integrand is the PDF of the standard normal distribution, so the last integral evaluates to $\frac{1}{2}$. Hence $\Gamma\left(\frac{1}{2}\right) = 2 \sqrt{\pi} \cdot \frac{1}{2} = \sqrt{\pi}$.
We can generalize the last result to odd multiples of $\frac{1}{2}$.
For $n \in \N$, $\Gamma\left(\frac{2 n + 1}{2}\right) = \frac{1 \cdot 3 \cdots (2 n - 1)}{2^n} \sqrt{\pi} = \frac{(2 n)!}{4^n n!} \sqrt{\pi}$
Proof
This follows from the previous result and the fundamental identity.
Stirling's Approximation
One of the most famous asymptotic formulas for the gamma function is Stirling's formula, named for James Stirling. First we need to recall a definition.
Suppose that $f, \, g: D \to (0, \infty)$ where $D = (0, \infty)$ or $D = \N_+$. Then $f(x) \approx g(x)$ as $x \to \infty$ means that $\frac{f(x)}{g(x)} \to 1 \text{ as } x \to \infty$
Stirling's formula $\Gamma(x + 1) \approx \left( \frac{x}{e} \right)^x \sqrt{2 \pi x} \text{ as } x \to \infty$
As a special case, Stirling's result gives an asymptotic formula for the factorial function: $n! \approx \left( \frac{n}{e} \right)^n \sqrt{2 \pi n} \text{ as } n \to \infty$
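A quick numerical check of Stirling's formula, using only the Python standard library (the values of $n$ are arbitrary choices of ours):

```python
import math

# The ratio n! / [(n / e)^n * sqrt(2 pi n)] should tend to 1 as n grows.
for n in [1, 5, 10, 50, 100]:
    stirling = (n / math.e)**n * math.sqrt(2 * math.pi * n)
    print(n, math.factorial(n) / stirling)
```

The ratio is about 1.084 at $n = 1$ and is already within one part in a thousand by $n = 100$.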
The Standard Gamma Distribution
Distribution Functions
The standard gamma distribution with shape parameter $k \in (0, \infty)$ is a continuous distribution on $(0, \infty)$ with probability density function $f$ given by $f(x) = \frac{1}{\Gamma(k)} x^{k-1} e^{-x}, \quad x \in (0, \infty)$
Clearly $f$ is a valid probability density function, since $f(x) \gt 0$ for $x \gt 0$, and by definition, $\Gamma(k)$ is the normalizing constant for the function $x \mapsto x^{k-1} e^{-x}$ on $(0, \infty)$. The following theorem shows that the gamma density has a rich variety of shapes, and shows why $k$ is called the shape parameter.
The gamma probability density function $f$ with shape parameter $k \in (0, \infty)$ satisfies the following properties:
1. If $0 \lt k \lt 1$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $k = 1$, $f$ is decreasing with $f(0) = 1$.
3. If $k \gt 1$, $f$ increases and then decreases, with mode at $k - 1$.
4. If $0 \lt k \le 1$, $f$ is concave upward.
5. If $1 \lt k \le 2$, $f$ is concave downward and then upward, with inflection point at $k - 1 + \sqrt{k - 1}$.
6. If $k \gt 2$, $f$ is concave upward, then downward, then upward again, with inflection points at $k - 1 \pm \sqrt{k - 1}$.
Proof
These results follow from standard calculus. For $x \gt 0$, \begin{align*} f^\prime(x) &= \frac{1}{\Gamma(k)} x^{k-2} e^{-x}[(k - 1) - x] \\ f^{\prime \prime}(x) &= \frac{1}{\Gamma(k)} x^{k-3} e^{-x} \left[(k - 1)(k - 2) - 2 (k - 1) x + x^2\right] \end{align*}
The special case $k = 1$ gives the standard exponential distribution. When $k \ge 1$, the distribution is unimodal.
In the special distribution simulator, select the gamma distribution. Vary the shape parameter and note the shape of the density function. For various values of $k$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
The distribution function and the quantile function do not have simple, closed representations for most values of the shape parameter. However, the distribution function has a trivial representation in terms of the incomplete and complete gamma functions.
The distribution function $F$ of the standard gamma distribution with shape parameter $k \in (0, \infty)$ is given by $F(x) = \frac{\Gamma(k, x)}{\Gamma(k)}, \quad x \in (0, \infty)$
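In software, $F$ is computed from the regularized incomplete gamma function. Here is a short sketch using SciPy, where scipy.special.gammainc is precisely the ratio $\Gamma(k, x) / \Gamma(k)$ in the notation above (the parameter values are ours):

```python
from scipy.special import gammainc   # regularized lower incomplete gamma function
from scipy.stats import gamma

k, x = 2.5, 3.0                      # illustrative shape parameter and argument
print(gammainc(k, x))                # F(x) = Gamma(k, x) / Gamma(k)
print(gamma.cdf(x, k))               # same value, from the distribution directly
```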
Approximate values of the distribution and quantile functions can be obtained from the special distribution calculator, and from most mathematical and statistical software packages.
Using the special distribution calculator, find the median, the first and third quartiles, and the interquartile range in each of the following cases:
1. $k = 1$
2. $k = 2$
3. $k = 3$
Moments
Suppose that $X$ has the standard gamma distribution with shape parameter $k \in (0, \infty)$. The mean and variance are both simply the shape parameter.
The mean and variance of $X$ are
1. $\E(X) = k$
2. $\var(X) = k$
Proof
1. From the fundamental identity, $\E(X) = \int_0^\infty x \frac{1}{\Gamma(k)} x^{k-1} e^{-x} \, dx = \frac{\Gamma(k + 1)}{\Gamma(k)} = k$
2. From the fundamental identity again $\E\left(X^2\right) = \int_0^\infty x^2 \frac{1}{\Gamma(k)} x^{k-1} e^{-x} \, dx = \frac{\Gamma(k + 2)}{\Gamma(k)} = (k + 1) k$ and hence $\var(X) = \E\left(X^2\right) - [\E(X)]^2 = k$
In the special distribution simulator, select the gamma distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For selected values of $k$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
More generally, the moments can be expressed easily in terms of the gamma function:
The moments of $X$ are
1. $\E(X^a) = \Gamma(a + k) \big/ \Gamma(k)$ if $a \gt -k$
2. $\E(X^n) = k^{[n]} = k (k + 1) \cdots (k + n - 1)$ if $n \in \N$
Proof
1. For $a \gt -k$, $\E(X^a) = \int_0^\infty x^a \frac{1}{\Gamma(k)} x^{k-1} e^{-x} \, dx = \frac{1}{\Gamma(k)} \int_0^\infty x^{a + k} e^{-x} \, dx = \frac{\Gamma(a + k)}{\Gamma(k)}$
2. If $n \in \N$, then by the fundamental identity, $\Gamma(k + n) = k (k + 1) \cdots (k + n - 1) \Gamma(k)$, so the result follows from (a).
Note also that $\E(X^a) = \infty$ if $a \le -k$. We can now also compute the skewness and the kurtosis.
The skewness and kurtosis of $X$ are
1. $\skw(X) = \frac{2}{\sqrt{k}}$
2. $\kur(X) = 3 + \frac{6}{k}$
Proof
These results follow from the previous moment results and the computational formulas for skewness and kurtosis.
In particular, note that $\skw(X) \to 0$ and $\kur(X) \to 3$ as $k \to \infty$. Note also that the excess kurtosis $\kur(X) - 3 \to 0$ as $k \to \infty$.
In the special distribution simulator, select the gamma distribution. Increase the shape parameter and note the shape of the density function in light of the previous results on skewness and kurtosis. For various values of $k$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
The following theorem gives the moment generating function.
The moment generating function of $X$ is given by $\E\left(e^{t X}\right) = \frac{1}{(1 - t)^k}, \quad t \lt 1$
Proof
For $t \lt 1$, $\E\left(e^{t X}\right) = \int_0^\infty e^{t x} \frac{1}{\Gamma(k)} x^{k-1} e^{-x} \, dx = \int_0^\infty \frac{1}{\Gamma(k)} x^{k-1} e^{-x(1 - t)} \, dx$ Substituting $u = x(1 - t)$ so that $x = u \big/ (1 - t)$ and $dx = du \big/ (1 - t)$ gives $\E\left(e^{t X}\right) = \frac{1}{(1 - t)^k} \int_0^\infty \frac{1}{\Gamma(k)} u^{k-1} e^{-u} \, du = \frac{1}{(1 - t)^k}$
The General Gamma Distribution
The gamma distribution is usually generalized by adding a scale parameter.
If $Z$ has the standard gamma distribution with shape parameter $k \in (0, \infty)$ and if $b \in (0, \infty)$, then $X = b Z$ has the gamma distribution with shape parameter $k$ and scale parameter $b$.
The reciprocal of the scale parameter, $r = 1 / b$ is known as the rate parameter, particularly in the context of the Poisson process. The gamma distribution with parameters $k = 1$ and $b$ is called the exponential distribution with scale parameter $b$ (or rate parameter $r = 1 / b$). More generally, when the shape parameter $k$ is a positive integer, the gamma distribution is known as the Erlang distribution, named for the Danish mathematician Agner Erlang. The exponential distribution governs the time between arrivals in the Poisson model, while the Erlang distribution governs the actual arrival times.
Basic properties of the general gamma distribution follow easily from corresponding properties of the standard distribution and basic results for scale transformations.
Distribution Functions
Suppose that $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x/b}, \quad x \in (0, \infty)$
Proof
Recall that if $g$ is the PDF of the standard gamma distribution with shape parameter $k$ then $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ for $x \gt 0$.
Recall that the inclusion of a scale parameter does not change the shape of the density function, but simply scales the graph horizontally and vertically. In particular, we have the same basic shapes as for the standard gamma density function.
The probability density function $f$ of $X$ satisfies the following properties:
1. If $0 \lt k \lt 1$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $k = 1$, $f$ is decreasing with $f(0) = 1$.
3. If $k \gt 1$, $f$ increases and then decreases, with mode at $(k - 1) b$.
4. If $0 \lt k \le 1$, $f$ is concave upward.
5. If $1 \lt k \le 2$, $f$ is concave downward and then upward, with inflection point at $b \left(k - 1 + \sqrt{k - 1}\right)$.
6. If $k \gt 2$, $f$ is concave upward, then downward, then upward again, with inflection points at $b \left(k - 1 \pm \sqrt{k - 1}\right)$.
In the special distribution simulator, select the gamma distribution. Vary the shape and scale parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Once again, the distribution function and the quantile function do not have simple, closed representations for most values of the shape parameter. However, the distribution function has a simple representation in terms of the incomplete and complete gamma functions.
The distribution function $F$ of $X$ is given by $F(x) = \frac{\Gamma(k, x/b)}{\Gamma(k)}, \quad x \in (0, \infty)$
Proof
From the definition we can take $X = b Z$ where $Z$ has the standard gamma distribution with shape parameter $k$. Then $\P(X \le x) = \P(Z \le x/b)$ for $x \in (0, \infty)$, so the result follows from the distribution function of $Z$.
Approximate values of the distribution and quantile functions can be obtained from the special distribution calculator, and from most mathematical and statistical software packages.
Open the special distribution calculator. Vary the shape and scale parameters and note the shape and location of the distribution and quantile functions. For selected values of the parameters, find the median and the first and third quartiles.
Moments
Suppose again that $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
The mean and variance of $X$ are
1. $\E(X) = b k$
2. $\var(X) = b^2 k$
Proof
From the definition, we can take $X = b Z$ where $Z$ has the standard gamma distribution with shape parameter $k$. Then using the mean and variance of $Z$,
1. $\E(X) = b \E(Z) = b k$
2. $\var(X) = b^2 \var(Z) = b^2 k$
In the special distribution simulator, select the gamma distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The moments of $X$ are
1. $\E(X^a) = b^a \Gamma(a + k) \big/ \Gamma(k)$ for $a \gt -k$
2. $\E(X^n) = b^n k^{[n]} = b^n k (k + 1) \cdots (k + n - 1)$ if $n \in \N$
Proof
Again, from the definition, we can take $X = b Z$ where $Z$ has the standard gamma distribution with shape parameter $k$. The results follow from the moment results for $Z$, since $\E(X^a) = b^a \E(Z^a)$.
Note also that $\E(X^a) = \infty$ if $a \le -k$. Recall that skewness and kurtosis are defined in terms of the standard score, and hence are unchanged by the addition of a scale parameter.
The skewness and kurtosis of $X$ are
1. $\skw(X) = \frac{2}{\sqrt{k}}$
2. $\kur(X) = 3 + \frac{6}{k}$
The moment generating function of $X$ is given by $\E\left(e^{t X}\right) = \frac{1}{(1 - b t)^k}, \quad t \lt \frac{1}{b}$
Proof
From the definition, we can take $X = b Z$ where $Z$ has the standard gamma distribution with shape parameter $k$. Then $\E\left(e^{t X}\right) = \E\left[e^{(t b )Z}\right]$, so the result follows from the moment generating function of $Z$.
Relations
Our first result is simply a restatement of the meaning of the term scale parameter.
Suppose that $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. If $c \in (0, \infty)$, then $c X$ has the gamma distribution with shape parameter $k$ and scale parameter $b c$.
Proof
From the definition, we can take $X = b Z$ where $Z$ has the standard gamma distribution with shape parameter $k$. Then $c X = c b Z$.
More importantly, if the scale parameter is fixed, the gamma family is closed with respect to sums of independent variables.
Suppose that $X_1$ and $X_2$ are independent random variables, and that $X_i$ has the gamma distribution with shape parameter $k_i \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ for $i \in \{1, 2\}$. Then $X_1 + X_2$ has the gamma distribution with shape parameter $k_1 + k_2$ and scale parameter $b$.
Proof
Recall that the MGF of $X = X_1 + X_2$ is the product of the MGFs of $X_1$ and $X_2$, so $\E\left(e^{t X}\right) = \frac{1}{(1 - b t)^{k_1}} \frac{1}{(1 - b t)^{k_2}} = \frac{1}{(1 - b t)^{k_1 + k_2}}, \quad t \lt \frac{1}{b}$
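A quick simulation is consistent with this closure property. Here is a sketch using NumPy and SciPy (the parameter values and sample size are ours); the Kolmogorov-Smirnov test compares the simulated sum to the claimed gamma distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
k1, k2, b = 2.0, 3.5, 1.5            # illustrative shape and scale parameters

x1 = rng.gamma(shape=k1, scale=b, size=50_000)
x2 = rng.gamma(shape=k2, scale=b, size=50_000)

# Test the sum against the gamma distribution with shape k1 + k2 and scale b.
result = stats.kstest(x1 + x2, 'gamma', args=(k1 + k2, 0, b))
print(result.pvalue)                 # a large p-value is consistent with the claim
```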
From the previous result, it follows that the gamma distribution is infinitely divisible:
Suppose that $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. For $n \in \N_+$, $X$ has the same distribution as $\sum_{i=1}^n X_i$, where $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each with the gamma distribution with with shape parameter $k / n$ and scale parameter $b$.
From the sum result and the central limit theorem, it follows that if $k$ is large, the gamma distribution with shape parameter $k$ and scale parameter $b$ can be approximated by the normal distribution with mean $k b$ and variance $k b^2$. Here is the precise statement:
Suppose that $X_k$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and fixed scale parameter $b \in (0, \infty)$. Then the distribution of the standardized variable below converges to the standard normal distribution as $k \to \infty$: $Z_k = \frac{X_k - k b}{\sqrt{k} b}$
In the special distribution simulator, select the gamma distribution. For various values of the scale parameter, increase the shape parameter and note the increasingly normal shape of the density function. For selected values of the parameters, run the experiment 1000 times and compare the empirical density function to the true probability density function.
The gamma distribution is a member of the general exponential family of distributions:
The gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a two-parameter exponential family with natural parameters $(k - 1, -1 / b)$, and natural statistics $(\ln X, X)$.
Proof
This follows from the definition of the general exponential family. The gamma PDF can be written as $f(x) = \frac{1}{b^k \Gamma(k)} \exp\left[(k - 1) \ln x - \frac{1}{b} x\right], \quad x \in (0, \infty)$
For $n \in (0, \infty)$, the gamma distribution with shape parameter $n/2$ and scale parameter 2 is known as the chi-square distribution with $n$ degrees of freedom. The chi-square distribution is important enough to deserve a separate section.
Computational Exercise
Suppose that the lifetime of a device (in hours) has the gamma distribution with shape parameter $k = 4$ and scale parameter $b = 100$.
1. Find the probability that the device will last more than 300 hours.
2. Find the mean and standard deviation of the lifetime.
Answer
Let $X$ denote the lifetime in hours.
1. $\P(X \gt 300) = 13 e^{-3} \approx 0.6472$
2. $\E(X) = 400$, $\sd(X) = 200$
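As a check, a short SciPy sketch reproduces these answers:

```python
from scipy.stats import gamma

X = gamma(a=4, scale=100)            # lifetime distribution
print(X.sf(300))                     # P(X > 300) = 13 e^{-3}, about 0.6472
print(X.mean(), X.std())             # 400.0, 200.0
```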
Suppose that $Y$ has the gamma distribution with parameters $k = 10$ and $b = 2$. For each of the following, compute the true value using the special distribution calculator and then compute the normal approximation. Compare the results.
1. $\P(18 \lt Y \lt 25)$
2. The 80th percentile of $Y$
Answer
1. $\P(18 \lt Y \lt 25) = 0.3860$, $\P(18 \lt Y \lt 25) \approx 0.4095$
2. $y_{0.8} = 25.038$, $y_{0.8} \approx 25.325$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.08%3A_The_Gamma_Distribution.txt |
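Again, a short SciPy sketch reproduces both the exact values and the normal approximations:

```python
from scipy.stats import gamma, norm

Y = gamma(a=10, scale=2)
approx = norm(loc=20, scale=40**0.5)    # normal approximation: mean k b, variance k b^2

print(Y.cdf(25) - Y.cdf(18))            # about 0.3860
print(approx.cdf(25) - approx.cdf(18))  # about 0.4095
print(Y.ppf(0.8), approx.ppf(0.8))      # about 25.038 and 25.325
```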
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\bs}{\boldsymbol}$
In this section we will study a distribution, and some relatives, that have special importance in statistics. In particular, the chi-square distribution will arise in the study of the sample variance when the underlying distribution is normal and in goodness of fit tests.
The Chi-Square Distribution
Distribution Functions
For $n \in (0, \infty)$, the gamma distribution with shape parameter $n / 2$ and scale parameter 2 is called the chi-square distribution with $n$ degrees of freedom. The probability density function $f$ is given by $f(x) = \frac{1}{2^{n/2} \Gamma(n/2)} x^{n/2 - 1} e^{-x/2}, \quad x \in (0, \infty)$
So the chi-square distribution is a continuous distribution on $(0, \infty)$. For reasons that will be clear later, $n$ is usually a positive integer, although technically this is not a mathematical requirement. When $n$ is a positive integer, the gamma function in the normalizing constant can be given explicitly.
If $n \in \N_+$ then
1. $\Gamma(n/2) = (n/2 - 1)!$ if $n$ is even.
2. $\Gamma(n/2) = \frac{(n - 1)!}{2^{n-1} (n/2 - 1/2)!} \sqrt{\pi}$ if $n$ is odd.
The chi-square distribution has a rich collection of shapes.
The chi-square probability density function with $n \in (0, \infty)$ degrees of freedom satisfies the following properties:
1. If $0 \lt n \lt 2$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $n = 2$, $f$ is decreasing with $f(0) = \frac{1}{2}$.
3. If $n \gt 2$, $f$ increases and then decreases with mode at $n - 2$.
4. If $0 \lt n \le 2$, $f$ is concave upward.
5. If $2 \lt n \le 4$, $f$ is concave downward and then upward, with inflection point at $n - 2 + \sqrt{2 n - 4}$
6. If $n \gt 4$ then $f$ is concave upward then downward and then upward again, with inflection points at $n - 2 \pm \sqrt{2 n - 4}$
In the special distribution simulator, select the chi-square distribution. Vary $n$ with the scroll bar and note the shape of the probability density function. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
The distribution function and the quantile function do not have simple, closed-form representations for most values of the parameter. However, the distribution function can be given in terms of the complete and incomplete gamma functions.
Suppose that $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom. The distribution function $F$ of $X$ is given by $F(x) = \frac{\Gamma(n/2, x/2)}{\Gamma(n/2)}, \quad x \in (0, \infty)$
Approximate values of the distribution and quantile functions can be obtained from the special distribution calculator, and from most mathematical and statistical software packages.
In the special distribution calculator, select the chi-square distribution. Vary the parameter and note the shape of the probability density, distribution, and quantile functions. In each of the following cases, find the median, the first and third quartiles, and the interquartile range.
1. $n = 1$
2. $n = 2$
3. $n = 5$
4. $n = 10$
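For the cases above, the quartiles can also be computed with SciPy, whose chi2.ppf is the quantile function (a minimal sketch):

```python
from scipy.stats import chi2

for n in [1, 2, 5, 10]:
    q1, med, q3 = chi2.ppf([0.25, 0.5, 0.75], df=n)
    print(n, q1, med, q3, q3 - q1)   # first quartile, median, third quartile, IQR
```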
Moments
The mean, variance, moments, and moment generating function of the chi-square distribution can be obtained easily from general results for the gamma distribution.
If $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom then
1. $\E(X) = n$
2. $\var(X) = 2 n$
In the special distribution simulator, select the chi-square distribution. Vary $n$ with the scroll bar and note the size and location of the mean $\pm$ standard deviation bar. For selected values of $n$, run the simulation 1000 times and compare the empirical moments to the distribution moments.
The skewness and kurtosis of the chi-square distribution are given next.
If $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, then
1. $\skw(X) = 2 \sqrt{2 / n}$
2. $\kur(X) = 3 + 12/n$
Note that $\skw(X) \to 0$ and $\kur(X) \to 3$ as $n \to \infty$. In particular, the excess kurtosis $\kur(X) - 3 \to 0$ as $n \to \infty$.
In the special distribution simulator, select the chi-square distribution. Increase $n$ with the scroll bar and note the shape of the probability density function in light of the previous results on skewness and kurtosis. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
The next result gives the general moments of the chi-square distribution.
If $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, then for $k \gt -n/2$, $\E\left(X^k\right) = 2^k \frac{\Gamma(n/2 + k)}{\Gamma(n/2)}$
In particular, if $k \in \N_+$ then $\E\left(X^k\right) = 2^k \left(\frac{n}{2}\right)\left(\frac{n}{2} + 1\right) \cdots \left(\frac{n}{2} + k - 1\right)$ Note also $\E\left(X^k\right) = \infty$ if $k \le -n/2$.
If $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, then $X$ has moment generating function $\E\left(e^{t X}\right) = \frac{1}{(1 - 2 t)^{n / 2}}, \quad t \lt \frac{1}{2}$
Relations
The chi-square distribution is connected to a number of other special distributions. Of course, the most important relationship is the definition—the chi-square distribution with $n$ degrees of freedom is a special case of the gamma distribution, corresponding to shape parameter $n/2$ and scale parameter 2. On the other hand, any gamma distributed variable can be re-scaled into a variable with a chi-square distribution.
If $X$ has the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ then $Y = \frac{2}{b} X$ has the chi-square distribution with $2 k$ degrees of freedom.
Proof
Since the gamma distribution is a scale family, $Y$ has a gamma distribution with shape parameter $k$ and scale parameter $b \frac{2}{b} = 2$. Hence $Y$ has the chi-square distribution with $2 k$ degrees of freedom.
The chi-square distribution with 2 degrees of freedom is the exponential distribution with scale parameter 2.
Proof
The chi-square distribution with 2 degrees of freedom is the gamma distribution with shape parameter 1 and scale parameter 2, which we already know is the exponential distribution with scale parameter 2.
If $Z$ has the standard normal distribution then $X = Z^2$ has the chi-square distribution with 1 degree of freedom.
Proof
As usual, let $\phi$ and $\Phi$ denote the PDF and CDF of the standard normal distribution, respectively. Then for $x \gt 0$, $\P(X \le x) = \P(-\sqrt{x} \le Z \le \sqrt{x}) = 2 \Phi\left(\sqrt{x}\right) - 1$ Differentiating with respect to $x$ gives the density function $f$ of $X$: $f(x) = \phi\left(\sqrt{x}\right) x^{-1/2} = \frac{1}{\sqrt{2 \pi}} x^{-1/2} e^{-x / 2}, \quad x \in (0, \infty)$ which we recognize as the chi-square PDF with 1 degree of freedom.
Recall that if we add independent gamma variables with a common scale parameter, the resulting random variable also has a gamma distribution, with the common scale parameter and with shape parameter that is the sum of the shape parameters of the terms. Specializing to the chi-square distribution, we have the following important result:
If $X$ has the chi-square distribution with $m \in (0, \infty)$ degrees of freedom, $Y$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, and $X$ and $Y$ are independent, then $X + Y$ has the chi-square distribution with $m + n$ degrees of freedom.
The last two results lead to the following theorem, which is fundamentally important in statistics.
Suppose that $n \in \N_+$ and that $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent standard normal variables. Then the sum of the squares $V = \sum_{i=1}^n Z_i^2$ has the chi-square distribution with $n$ degrees of freedom.
This theorem is the reason that the chi-square distribution deserves a name of its own, and the reason that the degrees of freedom parameter is usually a positive integer. Sums of squares of independent normal variables occur frequently in statistics.
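A quick simulation illustrates the theorem; here is a sketch using NumPy and SciPy (the degrees of freedom and sample size are ours):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
n = 5                                                 # illustrative degrees of freedom

# 100,000 replications of the sum of squares of n standard normal variables.
v = (rng.standard_normal((100_000, n))**2).sum(axis=1)

print(v.mean(), v.var())                              # approximately n and 2n
print(stats.kstest(v, 'chi2', args=(n,)).pvalue)      # consistent with chi-square(n)
```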
From the central limit theorem, and previous results for the gamma distribution, it follows that if $n$ is large, the chi-square distribution with $n$ degrees of freedom can be approximated by the normal distribution with mean $n$ and variance $2 n$. Here is the precise statement:
If $X_n$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, then the distribution of the standard score $Z_n = \frac{X_n - n}{\sqrt{2 n}}$ converges to the standard normal distribution as $n \to \infty$.
In the special distribution simulator, select the chi-square distribution. Start with $n = 1$ and increase $n$. Note the shape of the probability density function in light of the previous theorem. For selected values of $n$, run the experiment 1000 times and compare the empirical density function to the true density function.
Like the gamma distribution, the chi-square distribution is infinitely divisible:
Suppose that $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom. For $k \in \N_+$, $X$ has the same distribution as $\sum_{i=1}^k X_i$, where $(X_1, X_2, \ldots, X_k)$ is a sequence of independent random variables, each with the chi-square distribution with $n / k$ degrees of freedom.
Also like the gamma distribution, the chi-square distribution is a member of the general exponential family of distributions:
The chi-square distribution with with $n \in (0, \infty)$ degrees of freedom is a one-parameter exponential family with natural parameter $n/2 - 1$, and natural statistic $\ln X$.
Proof
This follows from the definition of the general exponential family. The PDF can be written as $f(x) = \frac{e^{-x/2}}{2^{n/2} \Gamma(n/2)} \exp\left[(n/2 - 1) \ln x\right], \quad x \in (0, \infty)$
The Chi Distribution
The chi distribution, appropriately enough, is the distribution of the square root of a variable with the chi-square distribution.
Suppose that $X$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom. Then $U = \sqrt{X}$ has the chi distribution with $n$ degrees of freedom.
So like the chi-square distribution, the chi distribution is a continuous distribution on $(0, \infty)$.
Distribution Functions
The distribution function $G$ of the chi distribution with $n \in (0, \infty)$ degrees of freedom is given by $G(u) = \frac{\Gamma(n/2, u^2/2)}{\Gamma(n/2)}, \quad u \in (0, \infty)$
Proof
Suppose that $U$ has the chi distribution with $n$ degrees of freedom so that $X = U^2$ has the chi-square distribution with $n$ degrees of freedom. For $u \in (0, \infty)$, $G(u) = \P(U \le u) = \P(U^2 \le u^2) = \P(X \le u^2) = F(u^2)$ where $F$ is the chi-square distribution function with $n$ degrees of freedom.
The probability density function $g$ of the chi distribution with $n \in (0, \infty)$ degrees of freedom is given by $g(u) = \frac{1}{2^{n/2 - 1} \Gamma(n/2)} u^{n-1} e^{-u^2/2}, \quad u \in (0, \infty)$
Proof
Suppose again that $U$ has the chi distribution with $n$ degrees of freedom so that $X = U^2$ has the chi-square distribution with $n$ degrees of freedom. The transformation $u = \sqrt{x}$ maps $(0, \infty)$ one-to-one onto $(0, \infty)$. The inverse transformation is $x = u^2$ with $dx/du = 2 u$. Hence by the standard change of variables formula, $g(u) = f(x) \frac{dx}{du} = f(u^2) 2 u$ where $f$ is the chi-square PDF.
The chi probability density function also has a variety of shapes.
The chi probability density function with $n \in (0, \infty)$ degrees of freedom satisfies the following properties:
1. If $0 \lt n \lt 1$, $g$ is decreasing with $g(u) \to \infty$ as $u \downarrow 0$.
2. If $n = 1$, $g$ is decreasing with $g(0) = \sqrt{2 / \pi}$.
3. If $n \gt 1$, $g$ increases and then decreases with mode $u = \sqrt{n - 1}$
4. If $0 \lt n \lt 1$, $g$ is concave upward.
5. If $1 \le n \le 2$, $g$ is concave downward and then upward with inflection point at $u = \sqrt{\frac{1}{2}[2 n - 1 + \sqrt{8 n - 7}]}$
6. If $n \gt 2$, $g$ is concave upward then downward then upward again with inflection points at $u = \sqrt{\frac{1}{2}[2 n - 1 \pm \sqrt{8 n - 7}]}$
Moments
The raw moments of the chi distribution are easy to compute in terms of the gamma function.
Suppose that $U$ has the chi distribution with $n \in (0, \infty)$ degrees of freedom. Then $\E(U^k) = 2^{k/2} \frac{\Gamma[(n + k) / 2]}{\Gamma(n/2)}, \quad k \in (0, \infty)$
Proof
By definition $\E(U^k) = \int_0^\infty u^k g(u) \, du = \frac{1}{2^{n/2-1} \Gamma(n/2)} \int_0^\infty u^{n+k-1} e^{-u^2/2} du$ The change of variables $v = u^2/2$, so that $u = 2^{1/2} v^{1/2}$ and $du = 2^{-1/2} v^{-1/2} \, dv$, gives (after simplification) $\E(U^k) = \frac{2^{k/2}}{\Gamma(n/2)} \int_0^\infty v^{(n+k)/2 - 1} e^{-v} dv$ The last integral is $\Gamma[(n + k) / 2]$.
Curiously, the second moment is simply the degrees of freedom parameter.
Suppose again that $U$ has the chi distribution with $n \in (0, \infty)$ degrees of freedom. Then
1. $\E(U) = 2^{1/2} \frac{\Gamma[(n+1)/2]}{\Gamma(n/2)}$
2. $\E(U^2) = n$
3. $\var(U) = n - 2 \frac{\Gamma^2[(n+1)/2]}{\Gamma^2(n/2)}$
Proof
For part (b), using the fundamental identity of the gamma function we have $\E(U^2) = 2 \frac{\Gamma(n/2 + 1)}{\Gamma(n/2)} = 2 \frac{(n/2) \Gamma(n/2)}{\Gamma(n/2)} = n$ The other parts follow from direct substitution.
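A short sketch using SciPy's gamma function, with a simulation check (the degrees of freedom are an illustrative choice of ours):

```python
import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(seed=42)
n = 3

mean_u = np.sqrt(2) * gamma_fn((n + 1) / 2) / gamma_fn(n / 2)  # E(U)
var_u = n - mean_u**2                                          # since E(U^2) = n
print(mean_u, var_u)

u = np.sqrt(rng.chisquare(n, size=100_000))                    # U = sqrt(X)
print(u.mean(), u.var())                                       # simulation check
```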
Relations
The fundamental relationship of course is the one between the chi distribution and the chi-square distribution given in the definition. In turn, this leads to a fundamental relationship between the chi distribution and the normal distribution.
Suppose that $n \in \N_+$ and that $(Z_1, Z_2, \ldots, Z_n)$ is a sequence of independent variables, each with the standard normal distribution. Then $U = \sqrt{Z_1^2 + Z_2^2 + \cdots + Z_n^2}$ has the chi distribution with $n$ degrees of freedom.
Note that the random variable $U$ in the last result is the standard Euclidean norm of $(Z_1, Z_2, \ldots, Z_n)$, thought of as a vector in $\R^n$. Note also that the chi distribution with 1 degree of freedom is the distribution of $\left|Z\right|$, the absolute value of a standard normal variable, which is known as the standard half-normal distribution.
The Non-Central Chi-Square Distribution
Much of the importance of the chi-square distribution stems from the fact that it is the distribution that governs the sum of squares of independent, standard normal variables. A natural generalization, and one that is important in statistical applications, is to consider the distribution of a sum of squares of independent normal variables, each with variance 1 but with different means.
Suppose that $n \in \N_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, where $X_k$ has the normal distribution with mean $\mu_k \in \R$ and variance 1 for $k \in \{1, 2, \ldots, n\}$. The distribution of $Y = \sum_{k=1}^n X_k^2$ is the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda = \sum_{k=1}^n \mu_k^2$.
Note that the degrees of freedom is a positive integer while the non-centrality parameter $\lambda \in [0, \infty)$, but we will soon generalize the degrees of freedom.
Distribution Functions
Like the chi-square and chi distributions, the non-central chi-square distribution is a continuous distribution on $(0, \infty)$. The probability density function and distribution function do not have simple, closed expressions, but there is a fascinating connection to the Poisson distribution. To set up the notation, let $f_k$ and $F_k$ denote the probability density and distribution functions of the chi-square distribution with $k \in (0, \infty)$ degrees of freedom. Suppose that $Y$ has the non-central chi-square distribution with $n \in \N_+$ degrees of freedom and non-centrality parameter $\lambda \in [0, \infty)$. The following fundamental theorem gives the probability density function of $Y$ as an infinite series, and shows that the distribution does in fact depend only on $n$ and $\lambda$.
The probability density function $g$ of $Y$ is given by $g(y) = \sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{n + 2 k}(y), \quad y \in (0, \infty)$
Proof
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, where $X_i$ has the normal distribution with mean $\mu_i$ and variance 1, and where $\lambda = \sum_{i=1}^n \mu_i^2$. So by definition, $Y = \sum_{i=1}^n X_i^2$ has the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda$. The random vector $\bs{X}$ has a multivariate normal distribution with mean vector $\bs{\mu} = (\mu_1, \mu_2, \ldots, \mu_n)$ and variance-covariance matrix $I$ (the $n \times n$ identity matrix). The (joint) PDF $h$ of $\bs{X}$ is symmetric about $\bs{\mu}$: $h(\bs{\mu} - \bs{x}) = h(\bs{\mu} + \bs{x})$ for $\bs{x} \in \R^n$. Because of this symmetry, the distribution of $Y$ depends on $\bs{\mu}$ only through the parameter $\lambda$. It follows that $Y$ has the same distribution as $\sum_{i=1}^n U_i^2$ where $(U_1, U_2, \ldots, U_n)$ are independent, $U_1$ has the normal distribution with mean $\sqrt{\lambda}$ and variance 1, and $(U_2, U_3, \ldots, U_n)$ are standard normal.
The distribution of $U_1^2$ is found by the usual change of variables methods. Let $\phi$ and $\Phi$ denote the standard normal PDF and CDF, respectively, so that $U_1$ has CDF given by $\P(U_1 \le x) = \Phi\left(x - \sqrt{\lambda}\right)$ for $x \in \R$. Thus, $\P\left(U_1^2 \le x\right) = \P\left(-\sqrt{x} \le U_1 \le \sqrt{x}\right) = \Phi\left(\sqrt{x} - \sqrt{\lambda}\right) - \Phi\left(-\sqrt{x} - \sqrt{\lambda}\right), \quad x \in (0, \infty)$ Taking derivatives, the PDF $g$ of $U_1^2$ is given by $g(x) = \frac{1}{2 \sqrt{x}}\left[\phi\left(\sqrt{x} - \sqrt{\lambda}\right) + \phi\left(-\sqrt{x} - \sqrt{\lambda}\right)\right] = \frac{1}{2 \sqrt{x}}\left[\phi\left(\sqrt{x} - \sqrt{\lambda}\right) + \phi\left(\sqrt{x} + \sqrt{\lambda}\right)\right], \quad x \in (0, \infty)$ But $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}$ for $z \in \R$, so substituting and simplifying gives $g(x) = \frac{1}{\sqrt{2 \pi x}} e^{-\frac{1}{2}(x + \lambda)} \frac{1}{2} \left(e^{\sqrt{\lambda x}} + e^{- \sqrt{\lambda x}} \right) = \frac{1}{\sqrt{2 \pi x}} e^{-\frac{1}{2}(x + \lambda)} \cosh\left(\sqrt{\lambda x}\right), \quad x \in (0, \infty)$ Next, recall that the Taylor series for the hyperbolic cosine function is $\cosh(x) = \sum_{k=0}^\infty \frac{x^{2 k}}{(2 k)!}, \quad x \in \R$ which leads to $g(x) = \sum_{k=0}^\infty \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}(x + \lambda)} \frac{\lambda^k x^{k - 1/2}}{(2 k)!}, \quad x \in (0, \infty)$ After a bit more algebra, we get the representation in the theorem, with $n = 1$. That is, $g(x) = \sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} \frac{1}{2^{(2 k + 1) / 2} \Gamma[(2 k + 1) / 2]} x^{(2 k + 1)/2 - 1} e^{-x/2}, \quad x \in (0, \infty)$ Or in functional form, $g = \sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{2 k + 1}$.
To complete the proof, we know that $\sum_{j=2}^n U_j^2$ has the chi-square distribution with $n - 1$ degrees of freedom, and hence has PDF $f_{n-1}$, and is independent of $U_1$. Therefore the distribution of $\sum_{j=1}^n U_j^2$ is $g * f_{n-1} = \left(\sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{2 k + 1}\right) * f_{n-1} = \sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{2 k + n}$ where $*$ denotes convolution as usual, and where we have used the fundamental result above on the sum of independent chi-square variables.
The function $k \mapsto e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!}$ on $\N$ is the probability density function of the Poisson distribution with parameter $\lambda / 2$. So it follows that if $N$ has the Poisson distribution with parameter $\lambda / 2$ and the conditional distribution of $Y$ given $N$ is chi-square with $n + 2 N$ degrees of freedom, then $Y$ has the distribution discussed here: non-central chi-square with $n$ degrees of freedom and non-centrality parameter $\lambda$. Moreover, it's clear that $g$ is a valid probability density function for any $n \in (0, \infty)$, so we can generalize our definition a bit.
For $n \in (0, \infty)$ and $\lambda \in [0, \infty)$, the distribution with probability density function $g$ above is the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda$.
The distribution function $G$ is given by $G(y) = \sum_{k=0}^\infty e^{-\lambda/2} \frac{(\lambda / 2)^k}{k!} F_{n + 2 k}(y), \quad y \in (0, \infty)$
Proof
This follows immediately from the result for the PDF, since $G(0) = 0$ and $G^\prime = g$.
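The series for $g$ and $G$ converge quickly, since the Poisson weights decay factorially, so truncating after a moderate number of terms gives accurate values. Here is a minimal sketch in Python, assuming NumPy and SciPy are available; the function name `ncx2_pdf_series` is ours, and `scipy.stats.ncx2` is used only as an independent check.

```python
import numpy as np
from scipy.stats import chi2, ncx2, poisson

def ncx2_pdf_series(y, n, lam, terms=100):
    """Truncated Poisson-weighted series: g(y) = sum_k P(N = k) f_{n + 2k}(y)."""
    k = np.arange(terms)
    w = poisson.pmf(k, lam / 2)   # e^{-lam/2} (lam/2)^k / k!
    return np.sum(w[:, None] * chi2.pdf(y, n + 2 * k[:, None]), axis=0)

y = np.linspace(0.5, 30, 5)
print(ncx2_pdf_series(y, n=3, lam=4.0))  # truncated series
print(ncx2.pdf(y, 3, 4.0))               # SciPy's implementation, for comparison
```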
Moments
In this discussion, we assume again that $Y$ has the non-central chi-square distribution with $n \in (0, \infty)$ degrees of freedom and non-centrality parameter $\lambda \in [0, \infty)$.
The moment generating function $M$ of $Y$ is given by $M(t) = \E\left(e^{t Y}\right) = \frac{1}{(1 - 2 t)^{n/2}} \exp\left(\frac{\lambda t}{1 - 2 t}\right), \quad t \in (-\infty, 1/2)$
Proof
We will use the fundamental relationship mentioned above. Thus, suppose that $N$ has the Poisson distribution with parameter $\lambda / 2$, and that given $N$, $Y$ has the chi-square distribution with $n + 2 N$ degrees of freedom. Conditioning and using the MGF of the chi-square distribution above gives $\E\left(e^{t Y}\right) = \E\left[\E\left(e^{t Y} \mid N\right)\right] = \E \left(\frac{1}{(1 - 2 t)^{(n + 2 N) / 2}}\right) = \frac{1}{(1 - 2 t)^{n/2}} \E\left[\left(\frac{1}{1 - 2 t}\right)^{N}\right]$ The last expected value is the probability generating function of $N$, evaluated at $\frac{1}{1 - 2 t}$. Hence $\E\left(e^{t Y}\right) = \frac{1}{(1 - 2 t)^{n/2}} \exp\left[\frac{\lambda}{2}\left(\frac{1}{1 - 2 t} - 1\right)\right] = \frac{1}{(1 - 2 t)^{n/2}} \exp\left(\frac{\lambda t}{1 - 2 t}\right)$
The mean and variance of $Y$ are
1. $\E(Y) = n + \lambda$
2. $\var(Y) = 2(n + 2 \lambda)$
Proof
These results can be obtained by taking derivatives of the MGF, but the derivation using the connection with the Poisson distribution is more interesting. So suppose again that $N$ has the Poisson distribution with parameter $\lambda / 2$ and that the conditional distribution of $Y$ given $N$ is chi-square with $n + 2 N$ degrees of freedom. Conditioning and using the means and variances of the chi-square and Poisson distributions, we have
1. $\E(Y) = \E[\E(Y \mid N)] = \E(n + 2 N) = n + 2 (\lambda / 2) = n + \lambda$
2. $\var(Y) = \E[\var(Y \mid N)] + \var[\E(Y \mid N)] = \E[2 (n + 2 N)] + \var(n + 2 N) = 2 n + 4 (\lambda / 2) + 4 \lambda / 2 = 2 n + 4 \lambda$
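These formulas are easy to check by simulation, using the original definition of $Y$ as a sum of squares of independent, unit-variance normal variables. A minimal sketch in Python (NumPy assumed); the means chosen below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([1.0, -0.5, 2.0])       # arbitrary means; lam = sum of squares
n, lam = len(mu), float(np.sum(mu ** 2))

# Y = sum of squares of independent normal(mu_i, 1) variables
Y = np.sum(rng.normal(loc=mu, scale=1.0, size=(100_000, n)) ** 2, axis=1)

print(Y.mean(), n + lam)              # E(Y) = n + lam
print(Y.var(), 2 * (n + 2 * lam))     # var(Y) = 2(n + 2 lam)
```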
The skewness and kurtosis of $Y$ are
1. $\skw(Y) = 2^{3/2} \frac{n + 3 \lambda}{(n + 2 \lambda)^{3/2}}$
2. $\kur(Y) = 3 + 12 \frac{n + 4 \lambda}{(n + 2 \lambda)^2}$
Note that $\skw(Y) \to 0$ as $n \to \infty$ or as $\lambda \to \infty$. Note also that the excess kurtosis is $\kur(Y) - 3 = 12 \frac{n + 4 \lambda}{(n + 2 \lambda)^2}$. So $\kur(Y) \to 3$ (the kurtosis of the normal distribution) as $n \to \infty$ or as $\lambda \to \infty$.
Relations
Trivially of course, the ordinary chi-square distribution is a special case of the non-central chi-square distribution, with non-centrality parameter 0. The most important relation is the original definition above. The non-central chi-square distribution with $n \in \N_+$ degrees of freedom and non-centrality parameter $\lambda \in [0, \infty)$ is the distribution of the sum of the squares of $n$ independent normal variables with variance 1 and whose means satisfy $\sum_{k=1}^n \mu_k^2 = \lambda$. The next most important relation is the one that arose in the probability density function and was so useful for computing moments. We state this one again for emphasis.
Suppose that $N$ has the Poisson distribution with parameter $\lambda / 2$, where $\lambda \in (0, \infty)$, and that the conditional distribution of $Y$ given $N$ is chi-square with $n + 2 N$ degrees of freedom, where $n \in (0, \infty)$. Then the (unconditional) distribution of $Y$ is non-central chi-square with $n$ degrees of freedom and non-centrality parameter $\lambda$.
Proof
For $j \in (0, \infty)$, let $f_j$ denote the chi-square PDF with $j$ degrees of freedom. Then from the assumptions, the PDF $g$ of $Y$ is given by $g(y) = \sum_{k=0}^\infty \P(N = k) f_{n + 2 k}(y) = \sum_{k=0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{n + 2 k}(y), \quad y \in (0, \infty)$ which is the PDF of the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda$, derived above.
As the asymptotic results for the skewness and kurtosis suggest, there is also a central limit theorem.
Suppose that $Y$ has the non-central chi-square distribution with $n \in (0, \infty)$ degrees of freedom and non-centrality parameter $\lambda \in (0, \infty)$. Then the distribution of the standard score $\frac{Y - (n + \lambda)}{\sqrt{2(n + 2 \lambda)}}$ converges to the standard normal distribution as $n \to \infty$ or as $\lambda \to \infty$.
Computational Exercises
Suppose that a missile is fired at a target at the origin of a plane coordinate system, with units in meters. The missile lands at $(X, Y)$ where $X$ and $Y$ are independent and each has the normal distribution with mean 0 and variance 100. The missile will destroy the target if it lands within 20 meters of the target. Find the probability of this event.
Answer
Let $Z$ denote the distance from the missile to the target. Then $Z^2 = X^2 + Y^2$, so $Z^2 / 100$ has the chi-square distribution with 2 degrees of freedom. Hence $\P(Z \lt 20) = \P\left(Z^2 / 100 \lt 4\right) = 1 - e^{-2} \approx 0.8647$
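As a check, here is a short Monte Carlo version of this exercise in Python (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 10.0, size=1_000_000)   # sd 10, so variance 100
y = rng.normal(0.0, 10.0, size=1_000_000)
print(np.mean(x ** 2 + y ** 2 < 20 ** 2))   # simulated P(Z < 20)
print(1 - np.exp(-2))                       # exact value, about 0.8647
```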
Suppose that $X$ has the chi-square distribution with $n = 18$ degrees of freedom. For each of the following, compute the true value using the special distribution calculator and then compute the normal approximation. Compare the results.
1. $\P(15 \lt X \lt 20)$
2. The 75th percentile of $X$.
Answer
1. $\P(15 \lt X \lt 20) = 0.3252$, $\P(15 \lt X \lt 20) \approx 0.3221$
2. $x_{0.75} = 21.605$, $x_{0.75} \approx 22.044$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.09%3A_Chi-Square_and_Related_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In this section we will study a distribution that has special importance in statistics. In particular, this distribution will arise in the study of a standardized version of the sample mean when the underlying distribution is normal.
Basic Theory
Definition
Suppose that $Z$ has the standard normal distribution, $V$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, and that $Z$ and $V$ are independent. Random variable $T = \frac{Z}{\sqrt{V / n}}$ has the student $t$ distribution with $n$ degrees of freedom.
The student $t$ distribution is well defined for any $n \gt 0$, but in practice, only positive integer values of $n$ are of interest. This distribution was first studied by William Gosset, who published under the pseudonym Student.
Distribution Functions
Suppose that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom. Then $T$ has a continuous distribution on $\R$ with probability density function $f$ given by $f(t) = \frac{\Gamma[(n + 1) / 2]}{\sqrt{n \pi} \, \Gamma(n / 2)} \left( 1 + \frac{t^2}{n} \right)^{-(n + 1) / 2}, \quad t \in \R$
Proof
For $v \gt 0$, the conditional distribution of $T$ given $V = v$ is normal with mean 0 and variance $n / v$. By definition, $V$ has the chi-square distribution with $n$ degrees of freedom. Hence, the joint PDF of $(T, V)$ is $g(t, v) = \sqrt{\frac{v}{2 \pi n}} e^{-v t^2 / 2 n} \frac{1}{2^{n/2} \Gamma(n/2)} v^{n/2-1} e^{-v/2} = \frac{1}{2^{(n+1)/2} \sqrt{n \pi} \, \Gamma(n/2)} v^{(n+1)/2 - 1} e^{-v(1 + t^2/n)/2}, \quad t \in \R, \, v \in (0, \infty)$ The PDF of $T$ is $f(t) = \int_0^\infty g(t, v) \, dv = \frac{1}{2^{(n+1)/2} \sqrt{n \pi} \, \Gamma(n/2)} \int_0^\infty v^{(n+1)/2 - 1} e^{-v(1 + t^2/n)/2} \, dv, \quad t \in \R$ Except for the missing normalizing constant, the integrand is the gamma PDF with shape parameter $(n + 1)/2$ and scale parameter $2 \big/ (1 + t^2/n)$. Hence $f(t) = \frac{1}{2^{(n+1)/2} \sqrt{n \pi} \, \Gamma(n/2)} \Gamma\left[(n + 1)/2\right] \left(\frac{2}{1 + t^2/n}\right)^{(n+1)/2}, \quad t \in \R$ Simplifying gives the result.
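The closed form is easy to evaluate numerically. Here is a sketch in Python, assuming NumPy and SciPy are available; `scipy.stats.t` serves as an independent check of the formula.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t as student_t

def t_pdf(x, n):
    """Student t density from the closed form above (log-gamma for stability)."""
    logc = gammaln((n + 1) / 2) - gammaln(n / 2) - 0.5 * np.log(n * np.pi)
    return np.exp(logc) * (1 + x ** 2 / n) ** (-(n + 1) / 2)

x = np.linspace(-4, 4, 9)
print(t_pdf(x, 5))
print(student_t.pdf(x, 5))   # should agree to machine precision
```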
The proof of this theorem provides a good way of thinking of the $t$ distribution: the distribution arises when the variance of a mean 0 normal distribution is randomized in a certain way.
In the special distribution simulator, select the student $t$ distribution. Vary $n$ and note the shape of the probability density function. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
The Student probability density function $f$ with $n \in (0, \infty)$ degrees of freedom has the following properties:
1. $f$ is symmetric about $t = 0$.
2. $f$ is increasing and then decreasing with mode $t = 0$.
3. $f$ is concave upward, then downward, then upward again with inflection points at $\pm \sqrt{n / (n + 2)}$.
4. $f(t) \to 0$ as $t \to \infty$ and as $t \to -\infty$.
In particular, the distribution is unimodal with mode and median at $t = 0$. Note also that the inflection points converge to $\pm 1$ as $n \to \infty$.
The distribution function and the quantile function of the general $t$ distribution do not have simple, closed-form representations. Approximate values of these functions can be obtained from the special distribution calculator, and from most mathematical and statistical software packages.
In the special distribution calculator, select the student distribution. Vary the parameter and note the shape of the probability density, distribution, and quantile functions. In each of the following cases, find the first and third quartiles:
1. $n = 2$
2. $n = 5$
3. $n = 10$
4. $n = 20$
Moments
Suppose that $T$ has a $t$ distribution. The representation in the definition can be used to find the mean, variance and other moments of $T$. The main point to remember in the proofs that follow is that since $V$ has the chi-square distribution with $n$ degrees of freedom, $\E\left(V^k\right) = \infty$ if $k \le -\frac{n}{2}$, while if $k \gt -\frac{n}{2}$, $\E\left(V^k\right) = 2^k \frac{\Gamma(k + n / 2)}{\Gamma(n/2)}$
Suppose that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom. Then
1. $\E(T)$ is undefined if $0 \lt n \le 1$
2. $\E(T) = 0$ if $1 \lt n \lt \infty$
Proof
By independence, $\E(T) = \sqrt{n} \E\left(V^{-1/2}\right) \E(Z)$. Of course $\E(Z) = 0$. On the other hand, $\E\left(V^{-1/2}\right) = \infty$ if $n \le 1$ and $\E\left(V^{-1/2}\right) \lt \infty$ if $n \gt 1$.
Suppose again that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom. Then
1. $\var(T)$ is undefined if $0 \lt n \le 1$
2. $\var(T) = \infty$ if $1 \lt n \le 2$
3. $\var(T) = \frac{n}{n - 2}$ if $2 \lt n \lt \infty$
Proof
By independence, $\E\left(T^2\right) = n \E\left(Z^2\right) \E\left(V^{-1}\right)$. Of course $\E\left(Z^2\right) = 1$. On the other hand, $\E\left(V^{-1}\right) = \infty$ if $n \le 2$ and $\E\left(V^{-1}\right) = 1 \big/ (n - 2)$ if $n \gt 2$. The results now follow from the previous result on the mean.
Note that $\var(T) \to 1$ as $n \to \infty$.
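A quick numerical check of the variance formula, in Python with SciPy (assumed available):

```python
from scipy.stats import t as student_t

# var(T) = n / (n - 2) for n > 2, and tends to 1 as n grows
for n in [3, 5, 10, 30, 100]:
    print(n, student_t(n).var(), n / (n - 2))
```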
In the simulation of the special distribution simulator, select the student $t$ distribution. Vary $n$ and note the location and shape of the mean $\pm$ standard deviation bar. For selected values of $n$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Next we give the general moments of the $t$ distribution.
Suppose again that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom and $k \in \N$. Then
1. $\E\left(T^k\right)$ is undefined if $k$ is odd and $k \ge n$
2. $\E\left(T^k\right) = \infty$ if $k$ is even and $k \ge n$
3. $\E\left(T^k\right) = 0$ if $k$ is odd and $k \lt n$
4. If $k$ is even and $k \lt n$ then $\E\left(T^k\right) = \frac{n^{k/2} 1 \cdot 3 \cdots (k - 1) \Gamma\left((n - k) \big/ 2\right)}{2^{k/2} \Gamma(n/2)} = \frac{n^{k/2} k! \Gamma\left((n - k)\big/2\right)}{2^k (k/2)! \Gamma(n/2)}$
Proof
By independence, $\E\left(T^k\right) = n^{k/2} \E\left(Z^k\right) \E\left(V^{-k/2}\right)$. Recall that $\E\left(Z^k\right) = 0$ if $k$ is odd, while $\E\left(Z^k\right) = 1 \cdot 3 \cdots (k - 1) = \frac{k!}{(k/2)! 2^{k/2}}$ if $k$ is even. Also, $\E\left(V^{-k/2}\right) = \infty$ if $k \ge n$, while $\E\left(V^{-k/2}\right) = \frac{2^{-k/2} \Gamma\left((n - k) \big/ 2\right)}{\Gamma(n/2)}$ if $k \lt n$. The results now follow by considering the various cases.
From the general moments, we can compute the skewness and kurtosis of $T$.
Suppose again that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom. Then
1. $\skw(T) = 0$ if $n \gt 3$
2. $\kur(T) = 3 + \frac{6}{n - 4}$ if $n \gt 4$
Proof
1. This follows from the symmetry of the distribution of $T$, although $\skw(T)$ only exists if $\E\left(T^3\right)$ exists.
2. For $n \gt 4$, $\kur(T) = \frac{\E(T^4)}{\left[\E\left(T^2\right)\right]^2} = \frac{3 n^2 \Gamma\left[(n - 4) / 2\right] \big/ 4 \Gamma(n/2)}{\left(n \big/ (n - 2) \right)^2} = \frac{3 (n - 2)^2 \Gamma\left[(n - 4) / 2\right]}{4 \Gamma(n/2)}$ But $\Gamma(n/2) = (n/2 - 1) (n/2 - 2) \Gamma(n/2 - 2)$. Simplifying gives the result.
Note that $\kur(T) \to 3$ as $n \to \infty$ and hence the excess kurtosis $\kur(T) - 3 \to 0$ as $n \to \infty$.
In the special distribution simulator, select the student $t$ distribution. Vary $n$ and note the shape of the probability density function in light of the previous results on skewness and kurtosis. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Since $T$ does not have moments of all orders, there is no interval about 0 on which the moment generating function of $T$ is finite. The characteristic function exists, of course, but has no simple representation, except in terms of special functions.
Relations
The $t$ distribution with 1 degree of freedom is known as the Cauchy distribution. The probability density function is $f(t) = \frac{1}{\pi (1 + t^2)}, \quad t \in \R$
The Cauchy distribution is named after Augustin Cauchy and is studied in more detail in a separate section.
You probably noticed that, qualitatively at least, the $t$ probability density function is very similar to the standard normal probability density function. The similarity is quantitative as well:
Let $f_n$ denote the $t$ probability density function with $n \in (0, \infty)$ degrees of freedom. Then for fixed $t \in \R$, $f_n(t) \to \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} t^2} \text{ as } n \to \infty$
Proof
From a basic limit theorem in calculus, $\left( 1 + \frac{t^2}{n} \right)^{-(n + 1) / 2} \to e^{-t^2/2} \text{ as } n \to \infty$ An application of Stirling's approximation shows that $\frac{\Gamma[(n + 1) / 2]}{\sqrt{n \pi} \, \Gamma(n / 2)} \to \frac{1}{\sqrt{2 \pi}} \text{ as } n \to \infty$
Note that the function on the right is the probability density function of the standard normal distribution. We can also get convergence of the $t$ distribution to the standard normal distribution from the basic random variable representation in the definition.
Suppose that $T_n$ has the $t$ distribution with $n \in \N_+$ degrees of freedom, so that we can represent $T_n$ as $T_n = \frac{Z}{\sqrt{V_n / n}}$ where $Z$ has the standard normal distribution, $V_n$ has the chi-square distribution with $n$ degrees of freedom, and $Z$ and $V_n$ are independent. Then $T_n \to Z$ as $n \to \infty$ with probability 1.
Proof
We can represent $V_n$ as $V_n = Z_1^2 + Z_2^2 + \cdots + Z_n^2$ where $(Z_1, Z_2, \ldots, Z_n)$ are independent, standard normal variables, independent of $Z$. Note that $V_n / n \to 1$ as $n \to \infty$ with probability 1 by the strong law of large numbers.
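The convergence of the density functions is also easy to see numerically. A sketch in Python (SciPy assumed):

```python
from scipy.stats import norm, t as student_t

t0 = 1.5   # any fixed point works
for n in [1, 5, 25, 125, 625]:
    print(n, student_t.pdf(t0, n))
print("limit:", norm.pdf(t0))   # standard normal density at t0
```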
The $t$ distribution has more probability in the tails, and consequently less probability near 0, compared to the standard normal distribution.
The Non-Central $t$ Distribution
One natural way to generalize the student $t$ distribution is to replace the standard normal variable $Z$ in the definition above with a normal variable having an arbitrary mean (but still unit variance). The reason this particular generalization is important is because it arises in hypothesis tests about the mean based on a random sample from the normal distribution, when the null hypothesis is false. For details see the sections on tests in the normal model and tests in the bivariate normal model in the chapter on Hypothesis Testing.
Suppose that $Z$ has the standard normal distribution, $\mu \in \R$, $V$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, and that $Z$ and $V$ are independent. Random variable $T = \frac{Z + \mu}{\sqrt{V / n}}$ has the non-central student $t$ distribution with $n$ degrees of freedom and non-centrality parameter $\mu$.
The standard functions that characterize a distribution—the probability density function, distribution function, and quantile function—do not have simple representations for the non-central $t$ distribution, but can only be expressed in terms of other special functions. Similarly, the moments do not have simple, closed form expressions either. For the beginning student of statistics, the most important fact is that the probability density function of the non-central $t$ distribution is similar (but not exactly the same) as that of the standard $t$ distribution (with the same degrees of freedom), but shifted and scaled. The density function is shifted to the right or left, depending on whether $\mu \gt 0$ or $\mu \lt 0$.
Computational Exercises
Suppose that $T$ has the $t$ distribution with $n = 10$ degrees of freedom. For each of the following, compute the true value using the special distribution calculator and then compute the normal approximation. Compare the results.
1. $\P(-0.8 \lt T \lt 1.2)$
2. The 90th percentile of $T$.
Answer
1. $\P(-0.8 \lt T \lt 1.2) = 0.650$, $\P(-0.8 \lt T \lt 1.2) \approx 0.673$
2. $x_{0.90} = 1.372$, $x_{0.90} \approx 1.281$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.10%3A_The_Student_t_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In this section we will study a distribution that has special importance in statistics. In particular, this distribution arises from ratios of sums of squares when sampling from a normal distribution, and so is important in estimation and in hypothesis testing in the two-sample normal model.
Basic Theory
Definition
Suppose that $U$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom, $V$ has the chi-square distribution with $d \in (0, \infty)$ degrees of freedom, and that $U$ and $V$ are independent. The distribution of $X = \frac{U / n}{V / d}$ is the $F$ distribution with $n$ degrees of freedom in the numerator and $d$ degrees of freedom in the denominator.
The $F$ distribution was first derived by George Snedecor, and is named in honor of Sir Ronald Fisher. In practice, the parameters $n$ and $d$ are usually positive integers, but this is not a mathematical requirement.
Distribution Functions
Suppose that $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator. Then $X$ has a continuous distribution on $(0, \infty)$ with probability density function $f$ given by $f(x) = \frac{\Gamma(n/2 + d/2)}{\Gamma(n / 2) \Gamma(d / 2)} \frac{n}{d} \frac{[(n/d) x]^{n/2 - 1}}{\left[1 + (n / d) x\right]^{n/2 + d/2}}, \quad x \in (0, \infty)$ where $\Gamma$ is the gamma function.
Proof
The trick, once again, is conditioning. The conditional distribution of $X$ given $V = v \in (0, \infty)$ is gamma with shape parameter $n/2$ and scale parameter $2 d / n v$. Hence the conditional PDF is $x \mapsto \frac{1}{\Gamma(n/2) \left(2 d / n v\right)^{n/2}} x^{n/2 - 1} e^{-x(nv /2d)}$ By definition, $V$ has the chi-square distribution with $d$ degrees of freedom, and so has PDF $v \mapsto \frac{1}{\Gamma(d/2) 2^{d/2}} v^{d/2 - 1} e^{-v/2}$ The joint PDF of $(X, V)$ is the product of these functions: $g(x, v) = \frac{1}{\Gamma(n/2) \Gamma(d/2) 2^{(n+d)/2}} \left(\frac{n}{d}\right)^{n/2} x^{n/2 - 1} v^{(n+d)/2 - 1} e^{-v( n x / d + 1)/2}; \quad x, \, v \in (0, \infty)$ The PDF of $X$ is therefore $f(x) = \int_0^\infty g(x, v) \, dv = \frac{1}{\Gamma(n/2) \Gamma(d/2) 2^{(n+d)/2}} \left(\frac{n}{d}\right)^{n/2} x^{n/2 - 1} \int_0^\infty v^{(n+d)/2 - 1} e^{-v( n x / d + 1)/2} \, dv$ Except for the normalizing constant, the integrand in the last integral is the gamma PDF with shape parameter $(n + d)/2$ and scale parameter $2 d \big/ (n x + d)$. Hence the integral evaluates to $\Gamma\left(\frac{n + d}{2}\right) \left(\frac{2 d}{n x + d}\right)^{(n + d)/2}$ Simplifying gives the result.
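The random variable representation in the definition also gives a direct simulation check of this density. A minimal sketch in Python, assuming NumPy and SciPy are available; we compare the empirical distribution of $(U/n)/(V/d)$ with `scipy.stats.f`.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(11)
n, d = 5, 10
U = rng.chisquare(n, size=500_000)
V = rng.chisquare(d, size=500_000)
X = (U / n) / (V / d)

for x in [0.5, 1.0, 2.0, 4.0]:
    print(x, np.mean(X <= x), f_dist.cdf(x, n, d))   # empirical vs exact CDF
```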
Recall that the beta function $B$ can be written in terms of the gamma function by $B(a, b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a + b)}, \quad a, \, b \in (0, \infty)$ Hence the probability density function of the $F$ distribution above can also be written as $f(x) = \frac{1}{B(n/2, d/2)} \frac{n}{d} \frac{[(n/d) x]^{n/2 - 1}}{\left[1 + (n / d) x\right]^{n/2 + d/2}}, \quad x \in (0, \infty)$ When $n \ge 2$, the probability density function is defined at $x = 0$, so the support interval is $[0, \infty)$ in this case.
In the special distribution simulator, select the $F$ distribution. Vary the parameters with the scroll bars and note the shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Both parameters influence the shape of the $F$ probability density function, but some of the basic qualitative features depend only on the numerator degrees of freedom. For the remainder of this discussion, let $f$ denote the $F$ probability density function with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator.
Probability density function $f$ satisfies the following properties:
1. If $0 \lt n \lt 2$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $n = 2$, $f$ is decreasing with mode at $x = 0$.
3. If $n \gt 2$, $f$ increases and then decreases, with mode at $x = \frac{(n - 2) d}{n (d + 2)}$.
Proof
These properties follow from standard calculus. The first derivative of $f$ is $f^\prime(x) = \frac{1}{B(n/2, d/2)} \left(\frac{n}{d}\right)^2 \frac{[(n/d)x]^{n/2-2}}{[1 + (n/d)x]^{n/2 + d/2 + 1}} [(n/2 - 1) - (n/d)(d/2 + 1)x], \quad x \in (0, \infty)$
Qualitatively, the second order properties of $f$ also depend only on $n$, with transitions at $n = 2$ and $n = 4$.
For $n \gt 2$, define \begin{align} x_1 & = \frac{d}{n} \frac{(n - 2)(d + 4) - \sqrt{2 (n - 2)(d + 4)(n + d)}}{(d + 2)(d + 4)} \ x_2 & = \frac{d}{n} \frac{(n - 2)(d + 4) + \sqrt{2 (n - 2)(d + 4)(n + d)}}{(d + 2)(d + 4)} \end{align} The probability density function $f$ satisfies the following properties:
1. If $0 \lt n \le 2$, $f$ is concave upward.
2. If $2 \lt n \le 4$, $f$ is concave downward and then upward, with inflection point at $x_2$.
3. If $n \gt 4$, $f$ is concave upward, then downward, then upward again, with inflection points at $x_1$ and $x_2$.
Proof
These results follow from standard calculus. The second derivative of $f$ is $f^{\prime\prime}(x) = \frac{1}{B(n/2, d/2)} \left(\frac{n}{d}\right)^3 \frac{[(n/d)x]^{n/2-3}}{[1 + (n/d)x]^{n/2 + d/2 + 2}}\left[(n/2 - 1)(n/2 - 2) - 2 (n/2 - 1)(d/2 + 2) (n/d) x + (d/2 + 1)(d/2 + 2)(n/d)^2 x^2\right], \quad x \in (0, \infty)$
The distribution function and the quantile function do not have simple, closed-form representations. Approximate values of these functions can be obtained from the special distribution calculator and from most mathematical and statistical software packages.
In the special distribution calculator, select the $F$ distribution. Vary the parameters and note the shape of the probability density function and the distribution function. In each of the following cases, find the median, the first and third quartiles, and the interquartile range.
1. $n = 5$, $d = 5$
2. $n = 5$, $d = 10$
3. $n = 10$, $d = 5$
4. $n = 10$, $d = 10$
The general probability density function of the $F$ distribution is a bit complicated, but it simplifies in a couple of special cases.
Special cases.
1. If $n = 2$, $f(x) = \frac{1}{(1 + 2 x / d)^{1 + d / 2}}, \quad x \in (0, \infty)$
2. If $n = d \in (0, \infty)$, $f(x) = \frac{\Gamma(n)}{\Gamma^2(n/2)} \frac{x^{n/2-1}}{(1 + x)^n}, \quad x \in (0, \infty)$
3. If $n = d = 2$, $f(x) = \frac{1}{(1 + x)^2}, \quad x \in (0, \infty)$
4. If $n = d = 1$, $f(x) = \frac{1}{\pi \sqrt{x}(1 + x)}, \quad x \in (0, \infty)$
Moments
The random variable representation in the definition, along with the moments of the chi-square distribution can be used to find the mean, variance, and other moments of the $F$ distribution. For the remainder of this discussion, suppose that $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator.
Mean
1. $\E(X) = \infty$ if $0 \lt d \le 2$
2. $\E(X) = \frac{d}{d - 2}$ if $d \gt 2$
Proof
By independence, $\E(X) = \frac{d}{n} \E(U) \E\left(V^{-1}\right)$. Recall that $\E(U) = n$. Similarly if $d \le 2$, $\E\left(V^{-1}\right) = \infty$ while if $d \gt 2$, $\E\left(V^{-1}\right) = \frac{\Gamma(d/2 - 1)}{2 \Gamma(d/2)} = \frac{1}{d - 2}$
Thus, the mean depends only on the degrees of freedom in the denominator.
Variance
1. $\var(X)$ is undefined if $0 \lt d \le 2$
2. $\var(X) = \infty$ if $2 \lt d \le 4$
3. If $d \gt 4$ then $\var(X) = 2 \left(\frac{d}{d - 2} \right)^2 \frac{n + d - 2}{n (d - 4)}$
Proof
By independence, $\E\left(X^2\right) = \frac{d^2}{n^2} \E\left(U^2\right) \E\left(V^{-2}\right)$. Recall that $\E\left(U^2\right) = 4 \frac{\Gamma(n/2 + 2)}{\Gamma(n/2)} = (n + 2) n$ Similarly if $d \le 4$, $\E\left(V^{-2}\right) = \infty$ while if $d \gt 4$, $\E\left(V^{-2}\right) = \frac{\Gamma(d/2 - 2)}{4 \Gamma(d/2)} = \frac{1}{(d - 2)(d - 4)}$ Hence $\E\left(X^2\right) = \infty$ if $d \le 4$ while if $d \gt 4$, $\E\left(X^2\right) = \frac{(n + 2) d^2}{n (d - 2)(d - 4)}$ The results now follow from the previous result on the mean and the computational formula $\var(X) = \E\left(X^2\right) - \left[\E(X)\right]^2$.
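These moment formulas can be checked against SciPy's implementation (a sketch, SciPy assumed):

```python
from scipy.stats import f as f_dist

n, d = 5, 10
mean, var = f_dist.stats(n, d, moments="mv")
print(mean, d / (d - 2))                                          # E(X)
print(var, 2 * (d / (d - 2)) ** 2 * (n + d - 2) / (n * (d - 4)))  # var(X)
```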
In the simulation of the special distribution simulator, select the $F$ distribution. Vary the parameters with the scroll bar and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
General moments. For $k \gt 0$,
1. $\E\left(X^k\right) = \infty$ if $0 \lt d \le 2 k$
2. If $d \gt 2 k$ then $\E\left(X^k\right) = \left( \frac{d}{n} \right)^k \frac{\Gamma(n/2 + k) \, \Gamma(d/2 - k)}{\Gamma(n/2) \Gamma(d/2)}$
Proof
By independence, $\E\left(X^k\right) = \left(\frac{d}{n}\right)^k \E\left(U^k\right) \E\left(V^{-k}\right)$. Recall that $\E\left(U^k\right) = \frac{2^k \Gamma(n/2 + k)}{\Gamma(n/2)}$ On the other hand, $\E\left(V^{-k}\right) = \infty$ if $d/2 \le k$ while if $d/2 \gt k$, $\E\left(V^{-k}\right) = \frac{2^{-k} \Gamma(d/2 - k)}{\Gamma(d/2)}$
If $k \in \N$, then using the fundamental identity of the gamma distribution and some algebra, $\E\left(X^{k}\right) = \left(\frac{d}{n}\right)^k \frac{n (n + 2) \cdots [n + 2(k - 1)]}{(d - 2)(d - 4) \cdots (d - 2k)}$ From the general moment formula, we can compute the skewness and kurtosis of the $F$ distribution.
Skewness and kurtosis
1. If $d \gt 6$, $\skw(X) = \frac{(2 n + d - 2) \sqrt{8 (d - 4)}}{(d - 6) \sqrt{n (n + d - 2)}}$
2. If $d \gt 8$, $\kur(X) = 3 + 12 \frac{n (5 d - 22)(n + d - 2) + (d - 4)(d-2)^2}{n(d - 6)(d - 8)(n + d - 2)}$
Proof
These results follow from the formulas for $\E\left(X^k\right)$ for $k \in \{1, 2, 3, 4\}$ and the standard computational formulas for skewness and kurtosis.
Not surprisingly, the $F$ distribution is positively skewed. Recall that the excess kurtosis is $\kur(X) - 3 = 12 \frac{n (5 d - 22)(n + d - 2) + (d - 4)(d-2)^2}{n(d - 6)(d - 8)(n + d - 2)}$
In the simulation of the special distribution simulator, select the $F$ distribution. Vary the parameters with the scroll bar and note the shape of the probability density function in light of the previous results on skewness and kurtosis. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Relations
The most important relationship is the one in the definition, between the $F$ distribution and the chi-square distribution. In addition, the $F$ distribution is related to several other special distributions.
Suppose that $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator. Then $1 / X$ has the $F$ distribution with $d$ degrees of freedom in the numerator and $n$ degrees of freedom in the denominator.
Proof
This follows easily from the random variable interpretation in the definition. We can write $X = \frac{U/n}{V/d}$ where $U$ and $V$ are independent and have chi-square distributions with $n$ and $d$ degrees of freedom, respectively. Hence $\frac{1}{X} = \frac{V/d}{U/n}$
Suppose that $T$ has the $t$ distribution with $n \in (0, \infty)$ degrees of freedom. Then $X = T^2$ has the $F$ distribution with 1 degree of freedom in the numerator and $n$ degrees of freedom in the denominator.
Proof
This follows easily from the random variable representations of the $t$ and $F$ distributions. We can write $T = \frac{Z}{\sqrt{V/n}}$ where $Z$ has the standard normal distribution, $V$ has the chi-square distribution with $n$ degrees of freedom, and $Z$ and $V$ are independent. Hence $T^2 = \frac{Z^2}{V/n}$ Recall that $Z^2$ has the chi-square distribution with 1 degree of freedom.
Our next relationship is between the $F$ distribution and the exponential distribution.
Suppose that $X$ and $Y$ are independent random variables, each with the exponential distribution with rate parameter $r \in (0, \infty)$. Then $Z = X / Y$ has the $F$ distribution with $2$ degrees of freedom in both the numerator and denominator.
Proof
We first find the distribution function $F$ of $Z$ by conditioning on $X$: $F(z) = \P(Z \le z) = \P(Y \ge X / z) = \E\left[\P(Y \ge X / z \mid X)\right]$ But $\P(Y \ge y) = e^{-r y}$ for $y \ge 0$ so $F(z) = \E\left(e^{-r X / z}\right)$. Also, $X$ has PDF $g(x) = r e^{-r x}$ for $x \ge 0$ so $F(z) = \int_0^\infty e^{- r x / z} r e^{-r x} \, dx = \int_0^\infty r e^{-r x (1 + 1/z)} \, dx = \frac{1}{1 + 1/z} = \frac{z}{1 + z}, \quad z \in (0, \infty)$ Differentiating gives the PDF of $Z$ $f(z) = \frac{1}{(1 + z)^2}, \quad z \in (0, \infty)$ which we recognize as the PDF of the $F$ distribution with 2 degrees of freedom in the numerator and the denominator.
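Here is a quick simulation of this result in Python (NumPy and SciPy assumed); note that the common rate $r$ drops out of the ratio.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(5)
r = 2.5                                        # any common rate works
X = rng.exponential(scale=1 / r, size=500_000)
Y = rng.exponential(scale=1 / r, size=500_000)
Z = X / Y

for z in [0.5, 1.0, 3.0]:
    print(z, np.mean(Z <= z), z / (1 + z), f_dist.cdf(z, 2, 2))
```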
A simple transformation can change a variable with the $F$ distribution into a variable with the beta distribution, and conversely.
Connections between the $F$ distribution and the beta distribution.
1. If $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator, then $Y = \frac{(n/d) X}{1 + (n/d) X}$ has the beta distribution with left parameter $n/2$ and right parameter $d/2$.
2. If $Y$ has the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ then $X = \frac{b Y}{a(1 - Y)}$ has the $F$ distribution with $2 a$ degrees of freedom in the numerator and $2 b$ degrees of freedom in the denominator.
Proof
The two statements are equivalent and follow from the standard change of variables formula. The function $y = \frac{(n/d) x}{1 + (n/d) x}$ maps $(0, \infty)$ one-to-one onto (0, 1), with inverse $x = \frac{d}{n}\frac{y}{1 - y}$ Let $f$ denote the PDF of the $F$ distribution with $n$ degrees of freedom in the numerator and $d$ degrees of freedom in the denominator, and let $g$ denote the PDF of the beta distribution with left parameter $n/2$ and right parameter $d/2$. Then $f$ and $g$ are related by
1. $g(y) = f(x) \frac{dx}{dy}$
2. $f(x) = g(y) \frac{dy}{dx}$
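A simulation check of part (a), in Python (NumPy and SciPy assumed; recent SciPy versions accept a NumPy `Generator` as `random_state`):

```python
import numpy as np
from scipy.stats import beta, f as f_dist

rng = np.random.default_rng(13)
n, d = 6, 8
X = f_dist.rvs(n, d, size=500_000, random_state=rng)
Y = (n / d) * X / (1 + (n / d) * X)   # should be beta(n/2, d/2)

for y in [0.2, 0.5, 0.8]:
    print(y, np.mean(Y <= y), beta.cdf(y, n / 2, d / 2))
```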
The $F$ distribution is closely related to the beta prime distribution by a simple scale transformation.
Connections with the beta prime distributions.
1. If $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator, then $Y = \frac{n}{d} X$ has the beta prime distribution with parameters $n/2$ and $d/2$.
2. If $Y$ has the beta prime distribution with parameters $a \in (0, \infty)$ and $b \in (0, \infty)$ then $X = \frac{b}{a} Y$ has the $F$ distribution with $2 a$ degrees of freedom in the numerator and $2 b$ degrees of freedom in the denominator.
Proof
Let $f$ denote the PDF of $X$ and $g$ the PDF of $Y$.
1. By the change of variables formula, $g(y) = \frac{d}{n} f\left(\frac{d}{n} y\right), \quad y \in (0, \infty)$ Substituting the $F$ PDF for $f$ shows that $Y$ has the appropriate beta prime distribution.
2. Again using the change of variables formula, $f(x) = \frac{a}{b} g\left(\frac{a}{b} x\right), \quad x \in (0, \infty)$ Substituting into the beta prime PDF shows that $X$ has the appropriate $F$ PDF.
The Non-Central $F$ Distribution
The $F$ distribution can be generalized in a natural way by replacing the ordinary chi-square variable in the numerator in the definition above with a variable having a non-central chi-square distribution. This generalization is important in analysis of variance.
Suppose that $U$ has the non-central chi-square distribution with $n \in (0, \infty)$ degrees of freedom and non-centrality parameter $\lambda \in [0, \infty)$, $V$ has the chi-square distribution with $d \in (0, \infty)$ degrees of freedom, and that $U$ and $V$ are independent. The distribution of $X = \frac{U / n}{V / d}$ is the non-central $F$ distribution with $n$ degrees of freedom in the numerator, $d$ degrees of freedom in the denominator, and non-centrality parameter $\lambda$.
One of the most interesting and important results for the non-central chi-square distribution is that it is a Poisson mixture of ordinary chi-square distributions. This leads to a similar result for the non-central $F$ distribution.
Suppose that $N$ has the Poisson distribution with parameter $\lambda / 2$, and that the conditional distribution of $X$ given $N$ is the $F$ distribution with $n + 2 N$ degrees of freedom in the numerator and $d$ degrees of freedom in the denominator, where $\lambda \in [0, \infty)$ and $n, \, d \in (0, \infty)$. Then $X$ has the non-central $F$ distribution with $n$ degrees of freedom in the numerator, $d$ degrees of freedom in the denominator, and non-centrality parameter $\lambda$.
Proof
As in the theorem, let $N$ have the Poisson distribution with parameter $\lambda / 2$, and suppose also that the conditional distribution of $U$ given $N$ is chi-square with $n + 2 N$ degrees of freedom, and that $V$ has the chi-square distribution with $d$ degrees of freedom and is independent of $(N, U)$. Let $X = (U / n) \big/ (V / d)$. Since $V$ is independent of $(N, U)$, the variable $X$ satisfies the condition in the theorem; that is, the conditional distribution of $X$ given $N$ is the $F$ distribution with $n + 2 N$ degrees of freedom in the numerator and $d$ degrees of freedom in the denominator. But then also, (unconditionally) $U$ has the non-central chi-square distribution with $n$ degrees of freedom and non-centrality parameter $\lambda$, $V$ has the chi-square distribution with $d$ degrees of freedom, and $U$ and $V$ are independent. So by definition, $X$ has the non-central $F$ distribution with $n$ degrees of freedom in the numerator, $d$ degrees of freedom in the denominator, and non-centrality parameter $\lambda$.
From the last result, we can express the probability density function and distribution function of the non-central $F$ distribution as a series in terms of ordinary $F$ density and distribution functions. To set up the notation, for $j, k \in (0, \infty)$ let $f_{j k}$ be the probability density function and $F_{j k}$ the distribution function of the $F$ distribution with $j$ degrees of freedom in the numerator and $k$ degrees of freedom in the denominator. For the rest of this discussion, $\lambda \in [0, \infty)$ and $n, \, d \in (0, \infty)$ as usual.
The probability density function $g$ of the non-central $F$ distribution with $n$ degrees of freedom in the numerator, $d$ degrees of freedom in the denominator, and non-centrality parameter $\lambda$ is given by $g(x) = \sum_{k = 0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} f_{n + 2 k, d}(x), \quad x \in (0, \infty)$
The distribution function $G$ of the non-central $F$ distribution with $n$ degrees of freedom in the numerator, $d$ degrees of freedom in the denominator, and non-centrality parameter $\lambda$ is given by $G(x) = \sum_{k = 0}^\infty e^{-\lambda / 2} \frac{(\lambda / 2)^k}{k!} F_{n + 2 k, d}(x), \quad x \in (0, \infty)$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.11%3A_The_F_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
Definition
Suppose that $Y$ has the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$. Then $X = e^Y$ has the lognormal distribution with parameters $\mu$ and $\sigma$.
1. The parameter $\sigma$ is the shape parameter of the distribution.
2. The parameter $e^\mu$ is the scale parameter of the distribution.
If $Z$ has the standard normal distribution then $W = e^Z$ has the standard lognormal distribution.
So equivalently, if $X$ has a lognormal distribution then $\ln X$ has a normal distribution, hence the name. The lognormal distribution is a continuous distribution on $(0, \infty)$ and is used to model random quantities when the distribution is believed to be skewed, such as certain income and lifetime variables. It's easy to write a general lognormal variable in terms of a standard lognormal variable. Suppose that $Z$ has the standard normal distribution and let $W = e^Z$ so that $W$ has the standard lognormal distribution. If $\mu \in \R$ and $\sigma \in (0, \infty)$ then $Y = \mu + \sigma Z$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$ and hence $X = e^Y$ has the lognormal distribution with parameters $\mu$ and $\sigma$. But $X = e^Y = e^{\mu + \sigma Z} = e^\mu \left(e^Z\right)^\sigma = e^\mu W^\sigma$
Distribution Functions
Suppose that $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$.
The probability density function $f$ of $X$ is given by $f(x) = \frac{1}{\sqrt{2 \pi} \sigma x} \exp \left[-\frac{\left(\ln x - \mu\right)^2}{2 \sigma^2} \right], \quad x \in (0, \infty)$
1. $f$ increases and then decreases with mode at $x = \exp\left(\mu - \sigma^2\right)$.
2. $f$ is concave upward then downward then upward again, with inflection points at $x = \exp\left(\mu - \frac{3}{2} \sigma^2 \pm \frac{1}{2} \sigma \sqrt{\sigma^2 + 4}\right)$
3. $f(x) \to 0$ as $x \downarrow 0$ and as $x \to \infty$.
Proof
The form of the PDF follows from the change of variables theorem. Let $g$ denote the PDF of the normal distribution with mean $\mu$ and standard deviation $\sigma$, so that $g(y) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2\right], \quad y \in \R$ The mapping $x = e^y$ maps $\R$ one-to-one onto $(0, \infty)$ with inverse $y = \ln x$. Hence the PDF $f$ of $X = e^Y$ is $f(x) = g(y) \frac{dy}{dx} = g\left(\ln x\right) \frac{1}{x}$ Substituting gives the result. The remaining parts follow from standard calculus.
In the special distribution simulator, select the lognormal distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Let $\Phi$ denote the standard normal distribution function, so that $\Phi^{-1}$ is the standard normal quantile function. Recall that values of $\Phi$ and $\Phi^{-1}$ can be obtained from the special distribution calculator, as well as standard mathematical and statistical software packages, and in fact these functions are considered to be special functions in mathematics. The following two results show how to compute the lognormal distribution function and quantiles in terms of the standard normal distribution function and quantiles.
The distribution function $F$ of $X$ is given by $F(x) = \Phi \left( \frac{\ln x - \mu}{\sigma} \right), \quad x \in (0, \infty)$
Proof
Once again, write $X = e^{\mu + \sigma Z}$ where $Z$ has the standard normal distribution. For $x \gt 0$, $F(x) = \P(X \le x) = \P\left(Z \le \frac{\ln x - \mu}{\sigma}\right) = \Phi \left( \frac{\ln x - \mu}{\sigma} \right)$
The quantile function of $X$ is given by $F^{-1}(p) = \exp\left[\mu + \sigma \Phi^{-1}(p)\right], \quad p \in (0, 1)$
Proof
This follows by solving $p = F(x)$ for $x$ in terms of $p$.
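Both formulas translate directly into code. A sketch in Python, assuming NumPy and SciPy are available; note that `scipy.stats.lognorm` parameterizes the distribution by the shape $s = \sigma$ and the scale $e^\mu$, which we use as an independent check.

```python
import numpy as np
from scipy.stats import norm, lognorm

mu, sigma = 2.0, 1.0
x = np.array([5.0, 10.0, 20.0])

print(norm.cdf((np.log(x) - mu) / sigma))          # F(x) = Phi((ln x - mu)/sigma)
print(lognorm.cdf(x, s=sigma, scale=np.exp(mu)))   # SciPy, for comparison

p = 0.75
print(np.exp(mu + sigma * norm.ppf(p)))            # quantile formula
print(lognorm.ppf(p, s=sigma, scale=np.exp(mu)))
```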
In the special distribution calculator, select the lognormal distribution. Vary the parameters and note the shape and location of the probability density function and the distribution function. With $\mu = 0$ and $\sigma = 1$, find the median and the first and third quartiles.
Moments
The moments of the lognormal distribution can be computed from the moment generating function of the normal distribution. Once again, we assume that $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$.
For $t \in \R$, $\E\left(X^t\right) = \exp \left( \mu t + \frac{1}{2} \sigma^2 t^2 \right)$
Proof
Recall that if $Y$ has the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$, then $Y$ has moment generating function given by $\E\left(e^{t Y}\right) = \exp\left(\mu t + \frac{1}{2} \sigma^2 t^2\right), \quad t \in \R$ Hence the result follows immediately since $\E\left(X^t\right) = \E\left(e^{t Y}\right)$.
In particular, the mean and variance of $X$ are
1. $\E(X) = \exp\left(\mu + \frac{1}{2} \sigma^2\right)$
2. $\var(X) = \exp\left[2 (\mu + \sigma^2)\right] - \exp\left(2 \mu + \sigma^2\right)$
In the simulation of the special distribution simulator, select the lognormal distribution. Vary the parameters and note the shape and location of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical moments to the true moments.
From the general formula for the moments, we can also compute the skewness and kurtosis of the lognormal distribution.
The skewness and kurtosis of $X$ are
1. $\skw(X) = \left(e^{\sigma^2} + 2\right) \sqrt{e^{\sigma^2} - 1}$
2. $\kur(X) = e^{4 \sigma^2} + 2 e^{3 \sigma^2} + 3 e^{2 \sigma^2} - 3$
Proof
These results follow from the first four moments of the lognormal distribution and the standard computational formulas for skewness and kurtosis.
The fact that the skewness and kurtosis do not depend on $\mu$ is due to the fact that $e^\mu$ is a scale parameter. Recall that skewness and kurtosis are defined in terms of the standard score and so are independent of location and scale parameters. Naturally, the lognormal distribution is positively skewed. Finally, note that the excess kurtosis is $\kur(X) - 3 = e^{4 \sigma^2} + 2 e^{3 \sigma^2} + 3 e^{2 \sigma^2} - 6$
Even though the lognormal distribution has finite moments of all orders, the moment generating function is infinite at any positive number. This property is one of the reasons for the fame of the lognormal distribution.
$\E\left(e^{t X}\right) = \infty$ for every $t \gt 0$.
Proof
By definition, $X = e^Y$ where $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. Using the change of variables formula for expected value we have $\E\left(e^{t X}\right) = \E\left(e^{t e^Y}\right) = \int_{-\infty}^\infty \exp(t e^y) \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{y - \mu}{\sigma}\right)^2\right] dy = \frac{1}{\sqrt{2 \pi} \sigma} \int_{-\infty}^\infty \exp\left[t e^y - \frac{1}{2} \left(\frac{y - \mu}{\sigma}\right)^2\right] dy$ If $t \gt 0$ the integrand in the last integral diverges to $\infty$ as $y \to \infty$, so there is no hope that the integral converges.
Related Distributions
The most important relations are the ones between the lognormal and normal distributions in the definition: if $X$ has a lognormal distribution then $\ln X$ has a normal distribution; conversely if $Y$ has a normal distribution then $e^Y$ has a lognormal distribution. The lognormal distribution is also a scale family.
Suppose that $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$ and that $c \in (0, \infty)$. Then $c X$ has the lognormal distribution with parameters $\mu + \ln c$ and $\sigma$.
Proof
From the definition, we can write $X = e^Y$ where $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. Hence $c X = c e^Y = e^{\ln c} e^Y = e^{\ln c + Y}$ But $\ln c + Y$ has the normal distribution with mean $\ln c + \mu$ and standard deviation $\sigma$.
The reciprocal of a lognormal variable is also lognormal.
If $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$ then $1 / X$ has the lognormal distribution with parameters $-\mu$ and $\sigma$.
Proof
Again from the definition, we can write $X = e^Y$ where $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. Hence $1 / X = e^{-Y}$. But $-Y$ has the normal distribution with mean $-\mu$ and standard deviation $\sigma$.
The lognormal distribution is closed under non-zero powers of the underlying variable. In particular, this generalizes the previous result.
Suppose that $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$ and that $a \in \R \setminus \{0\}$. Then $X^a$ has the lognormal distribution with parameters $a \mu$ and $|a| \sigma$.
Proof
Again from the definition, we can write $X = e^Y$ where $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. Hence $X^a = e^{a Y}$. But $a Y$ has the normal distribution with mean $a \mu$ and standard deviation $|a| \sigma$.
Since the normal distribution is closed under sums of independent variables, it's not surprising that the lognormal distribution is closed under products of independent variables.
Suppose that $n \in \N_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, where $X_i$ has the lognormal distribution with parameters $\mu_i \in \R$ and $\sigma_i \in (0, \infty)$ for $i \in \{1, 2, \ldots, n\}$. Then $\prod_{i=1}^n X_i$ has the lognormal distribution with parameters $\mu$ and $\sigma$ where $\mu = \sum_{i=1}^n \mu_i$ and $\sigma^2 = \sum_{i=1}^n \sigma_i^2$.
Proof
Again from the definition, we can write $X_i = e^{Y_i}$ where $Y_i$ has the normal distribution with mean $\mu_i$ and standard deviation $\sigma_i$ for $i \in \{1, 2, \ldots, n\}$ and where $(Y_1, Y_2, \ldots, Y_n)$ is an independent sequence. Hence $\prod_{i=1}^n X_i = \exp\left(\sum_{i=1}^n Y_i\right)$. But $\sum_{i=1}^n Y_i$ has the normal distribution with mean $\sum_{i=1}^n \mu_i$ and variance $\sum_{i=1}^n \sigma_i^2$.
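A quick simulation check in Python (NumPy assumed): the log of the product should be normal with mean $\sum_i \mu_i$ and variance $\sum_i \sigma_i^2$.

```python
import numpy as np

rng = np.random.default_rng(17)
mus = [0.5, 1.0, -0.25]
sigmas = [1.0, 0.5, 2.0]

# product of independent lognormals, built as X_i = exp(Y_i)
Y = np.array([rng.normal(m, s, size=200_000) for m, s in zip(mus, sigmas)])
P = np.exp(Y).prod(axis=0)

print(np.log(P).mean(), sum(mus))                    # mean of ln(P)
print(np.log(P).var(), sum(s ** 2 for s in sigmas))  # variance of ln(P)
```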
Finally, the lognormal distribution belongs to the family of general exponential distributions.
Suppose that $X$ has the lognormal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$. The distribution of $X$ is a 2-parameter exponential family with natural parameters and natural statistics, respectively, given by
1. $\left( -1 / 2 \sigma^2, \mu / \sigma^2 \right)$
2. $\left(\ln^2(X), \ln X\right)$
Proof
This follows from the definition of the general exponential family, since we can write the lognormal PDF in the form $f(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(-\frac{\mu^2}{2 \sigma^2}\right) \frac{1}{x} \exp\left[-\frac{1}{2 \sigma^2} \ln^2(x) + \frac{\mu}{\sigma^2} \ln x\right], \quad x \in (0, \infty)$
Computational Exercises
Suppose that the income $X$ of a randomly chosen person in a certain population (in \$1000 units) has the lognormal distribution with parameters $\mu = 2$ and $\sigma = 1$. Find $\P(X \gt 20)$.

Answer

$\P(X \gt 20) = 1 - \Phi(\ln 20 - 2) \approx 0.1597$

Suppose that the income $X$ of a randomly chosen person in a certain population (in \$1000 units) has the lognormal distribution with parameters $\mu = 2$ and $\sigma = 1$. Find each of the following:
1. $\E(X)$
2. $\var(X)$
Answer
1. $\E(X) = e^{5/2} \approx 12.1825$
2. $\var(X) = e^6 - e^5 \approx 255.02$, so $\sd(X) = \sqrt{e^6 - e^5} \approx 15.9692$
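The computations in these two exercises are one-liners in Python (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 2.0, 1.0
print(1 - norm.cdf((np.log(20) - mu) / sigma))   # P(X > 20), about 0.1597
print(np.exp(mu + sigma ** 2 / 2))               # E(X) = e^{5/2}
print(np.sqrt(np.exp(2 * (mu + sigma ** 2)) - np.exp(2 * mu + sigma ** 2)))  # sd(X)
```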
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The General Folded Normal Distribution
Introduction
The folded normal distribution is the distribution of the absolute value of a random variable with a normal distribution. As has been emphasized before, the normal distribution is perhaps the most important in probability and is used to model an incredible variety of random phenomena. Since one may only be interested in the magnitude of a normally distributed variable, the folded normal arises in a very natural way. The name stems from the fact that the probability measure of the normal distribution on $(-\infty, 0]$ is folded over to $[0, \infty)$. Here is the formal definition:
Suppose that $Y$ has a normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$. Then $X = |Y|$ has the folded normal distribution with parameters $\mu$ and $\sigma$.
So in particular, the folded normal distribution is a continuous distribution on $[0, \infty)$.
Distribution Functions
Suppose that $Z$ has the standard normal distribution. Recall that $Z$ has probability density function $\phi$ and distribution function $\Phi$ given by \begin{align} \phi(z) & = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2}, \quad z \in \R \ \Phi(z) & = \int_{-\infty}^z \phi(x) \, dx = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} \, dx, \quad z \in \R \end{align} The standard normal distribution is so important that $\Phi$ is considered a special function and can be computed using most mathematical and statistical software. If $\mu \in \R$ and $\sigma \in (0, \infty)$, then $Y = \mu + \sigma Z$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$, and therefore $X = |Y| = |\mu + \sigma Z|$ has the folded normal distribution with parameters $\mu$ and $\sigma$. For the remainder of this discussion we assume that $X$ has this folded normal distribution.
$X$ has distribution function $F$ given by \begin{align} F(x) & = \Phi\left(\frac{x - \mu}{\sigma}\right) - \Phi\left(\frac{-x - \mu}{\sigma}\right) = \Phi\left(\frac{x - \mu}{\sigma}\right) + \Phi\left(\frac{x + \mu}{\sigma}\right) - 1 \ & = \int_0^x \frac{1}{\sigma \sqrt{2 \pi}} \left\{\exp\left[-\frac{1}{2}\left(\frac{y + \mu}{\sigma}\right)^2\right] + \exp\left[-\frac{1}{2} \left(\frac{y - \mu}{\sigma}\right)^2\right] \right\} \, dy, \quad x \in [0, \infty) \end{align}
Proof
For $x \in [0, \infty)$, \begin{align} F(x) & = \P(X \le x) = \P(|Y| \le x) = \P(|\mu + \sigma Z| \le x) = \P(-x \le \mu + \sigma Z \le x) \ & = \P\left(\frac{-x - \mu}{\sigma} \le Z \le \frac{x - \mu}{\sigma}\right) = \Phi\left(\frac{x - \mu}{\sigma}\right) - \Phi\left(\frac{-x - \mu}{\sigma} \right) \end{align} which gives the first expression. The second expression follows since $\Phi(-z) = 1 - \Phi(z)$ for $z \in \R$. Finally, the integral formula follows from the form of $\Phi$ given above and simple substitution.
We cannot compute the quantile function $F^{-1}$ in closed form, but values of this function can be approximated.
Open the special distribution calculator and select the folded normal distribution, and set the view to CDF. Vary the parameters and note the shape of the distribution function. For selected values of the parameters, compute the median and the first and third quartiles.
$X$ has probability density function $f$ given by \begin{align} f(x) & = \frac{1}{\sigma} \left[\phi\left(\frac{x - \mu}{\sigma}\right) + \phi\left(\frac{x + \mu}{\sigma}\right)\right] \ & = \frac{1}{\sigma \sqrt{2 \pi}} \left\{\exp\left[-\frac{1}{2}\left(\frac{x + \mu}{\sigma}\right)^2\right] + \exp\left[-\frac{1}{2} \left(\frac{x - \mu}{\sigma}\right)^2\right] \right\}, \quad x \in [0, \infty) \end{align}
Proof
This follows from differentiating the CDF with respect to $x$, since $F^\prime(x) = f(x)$ and $\Phi^\prime(z) = \phi(z)$.
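Both $F$ and $f$ are simple to code from the standard normal functions, and the definition gives a direct simulation check. A sketch in Python (NumPy and SciPy assumed):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0

def folded_cdf(x):
    """F(x) = Phi((x - mu)/sigma) + Phi((x + mu)/sigma) - 1."""
    return norm.cdf((x - mu) / sigma) + norm.cdf((x + mu) / sigma) - 1

rng = np.random.default_rng(19)
X = np.abs(rng.normal(mu, sigma, size=500_000))   # |Y| with Y normal(mu, sigma)

for x in [0.5, 1.5, 3.0, 5.0]:
    print(x, np.mean(X <= x), folded_cdf(x))      # empirical vs exact CDF
```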
Open the special distribution simulator and select the folded normal distribution. Vary the parameters $\mu$ and $\sigma$ and note the shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Note that the folded normal distribution is unimodal for some values of the parameters and decreasing for other values. Note also that $\mu$ is not a location parameter nor is $\sigma$ a scale parameter; both influence the shape of the probability density function.
Moments
We cannot compute the mean of the folded normal distribution in closed form, but the mean can at least be given in terms of $\Phi$. Once again, we assume that $X$ has the folded normal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$.
The first two moments of $X$ are
1. $\E(X) = \mu [1 - 2 \Phi(-\mu / \sigma)] + \sigma \sqrt{2 / \pi} \exp(-\mu^2 / 2 \sigma^2)$
2. $\E(X^2) = \mu^2 + \sigma^2$
Proof
From the definition, we can assume $X = |\mu + \sigma Z|$ where $Z$ has the standard normal distribution. Then \begin{align} \E(X) & = \E(|\mu + \sigma Z|) = \E(\mu + \sigma Z; Z \ge -\mu / \sigma) - \E(\mu + \sigma Z; Z \le -\mu / \sigma) \ & = \E(\mu + \sigma Z) - 2 \E(\mu + \sigma Z; Z \le - \mu / \sigma) = \mu - 2 \mu \Phi(-\mu / \sigma) - 2 \sigma \E(Z; Z \le -\mu / \sigma) \end{align} So we just need to compute the last expected value. Using the change of variables $u = z^2 / 2$ we get $\E(Z; Z \le -\mu / \sigma) = \int_{-\infty}^{-\mu/\sigma} z \frac{1}{\sqrt{2 \pi}} e^{-z^2/2} \, dz = -\int_{\mu^2/2\sigma^2}^\infty \frac{1}{\sqrt{2\pi}} e^{-u} \, du = -\frac{1}{\sqrt{2 \pi}} e^{-\mu^2/2 \sigma^2}$ Substituting gives the result in (a). For (b), let $Y$ have the normal distribution with mean $\mu$ and standard deviation $\sigma$ so that we can take $X = |Y|$. Then $\E(X^2) = \E(Y^2) = \var(Y) + [\E(Y)]^2 = \sigma^2 + \mu^2$.
In particular, the variance of $X$ is
$\var(X) = \mu^2 + \sigma^2 - \left\{\mu \left[1 - 2 \Phi\left(-\frac{\mu}{\sigma}\right)\right] + \sigma \sqrt{\frac{2}{\pi}} \exp\left(-\frac{\mu^2}{2 \sigma^2}\right) \right\}^2$
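The mean and variance formulas are easy to check by simulation. The following Python sketch (assumptions: $\mu = 1.5$, $\sigma = 2$, a fixed seed, and one million samples, all arbitrary choices) compares empirical moments of $|\mu + \sigma Z|$ with the formulas above.

```python
# A quick Monte Carlo check (not from the text) of the mean and variance
# formulas for the folded normal distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = np.abs(mu + sigma * rng.standard_normal(1_000_000))

mean_formula = (mu * (1 - 2 * norm.cdf(-mu / sigma))
                + sigma * np.sqrt(2 / np.pi) * np.exp(-mu**2 / (2 * sigma**2)))
var_formula = mu**2 + sigma**2 - mean_formula**2

print(x.mean(), mean_formula)  # should agree to two or three decimal places
print(x.var(), var_formula)
```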
Open the special distribution simulator and select the folded normal distribution. Vary the parameters and note the size and location of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
Related Distributions
The most important relation is the one between the folded normal distribution and the normal distribution in the definition: If $Y$ has a normal distribution then $X = |Y|$ has a folded normal distribution. The folded normal distribution is also related to itself through a symmetry property that is perhaps not completely obvious from the initial definition:
For $\mu \in \R$ and $\sigma \in (0, \infty)$, the folded normal distribution with parameters $-\mu$ and $\sigma$ is the same as the folded normal distribution with parameters $\mu$ and $\sigma$.
Proof 1
The PDF is unchanged if $\mu$ is replaced with $-\mu$.
Proof 2
Suppose that $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$ so that $|Y|$ has the folded normal distribution with parameters $\mu$ and $\sigma$. Then $-Y$ has the normal distribution with mean $-\mu$ and standard deviation $\sigma$ so that $|-Y|$ has the folded normal distribution with parameters $-\mu$ and $\sigma$. But $|Y| = |-Y|$.
The folded normal distribution is also closed under scale transformations.
Suppose that $X$ has the folded normal distribution with parameters $\mu \in \R$ and $\sigma \in (0, \infty)$ and that $b \in (0, \infty)$. Then $b X$ has the folded normal distribution with parameters $b \mu$ and $b \sigma$.
Proof
Once again from the definition, we can assume $X = |Y|$ where $Y$ has the normal distribution with mean $\mu$ and standard deviation $\sigma$. But then $b X = b |Y| = |b Y|$, and $b Y$ has the normal distribution with mean $b \mu$ and standard deviation $b \sigma$.
The Half-Normal Distribution
When $\mu = 0$, results for the folded normal distribution are much simpler, and fortunately this special case is the most important one. We are more likely to be interested in the magnitude of a normally distributed variable when the mean is 0, and moreover, this distribution arises in the study of Brownian motion.
Suppose that $Z$ has the standard normal distribution and that $\sigma \in (0, \infty)$. Then $X = \sigma |Z|$ has the half-normal distribution with scale parameter $\sigma$. If $\sigma = 1$ so that $X = |Z|$, then $X$ has the standard half-normal distribution.
Distribution Functions
For our next discussion, suppose that $X$ has the half-normal distribution with parameter $\sigma \in (0, \infty)$. Once again, $\Phi$ and $\Phi^{-1}$ denote the distribution function and quantile function, respectively, of the standard normal distribution.
The distribution function $F$ and quantile function $F^{-1}$ of $X$ are \begin{align} F(x) & = 2 \Phi\left(\frac{x}{\sigma}\right) - 1 = \int_0^x \frac{1}{\sigma} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{y^2}{2 \sigma^2}\right) \, dy, \quad x \in [0, \infty) \ F^{-1}(p) & = \sigma \Phi^{-1}\left(\frac{1 + p}{2}\right), \quad p \in [0, 1) \end{align}
Proof
The result for the CDF follows from the CDF of the folded normal distribution with $\mu = 0$. Recall that $\Phi(-z) = 1 - \Phi(z)$ for $z \in \R$. The result for the quantile function follows from the result for the CDF and simple algebra.
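Since the half-normal quantile function is in closed form, it is easy to verify numerically. Here is a sketch (assumptions: $\sigma = 2$, a fixed seed, and 500,000 samples, all arbitrary choices) comparing $\sigma \Phi^{-1}[(1 + p)/2]$ with empirical quantiles of $\sigma |Z|$.

```python
# Comparing the closed-form half-normal quantile with empirical quantiles
# of sigma * |Z|; sigma, the seed, and the sample size are arbitrary.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 2.0
x = sigma * np.abs(rng.standard_normal(500_000))

for p in (0.25, 0.5, 0.75):
    print(p, sigma * norm.ppf((1 + p) / 2), np.quantile(x, p))
```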
Open the special distribution calculator and select the folded normal distribution. Select CDF view and keep $\mu = 0$. Vary $\sigma$ and note the shape of the CDF. For various values of $\sigma$, compute the median and the first and third quartiles.
The probability density function $f$ of $X$ is given by $f(x) = \frac{2}{\sigma} \phi\left(\frac{x}{\sigma}\right) = \frac{1}{\sigma} \sqrt{\frac{2}{\pi}} \exp\left(-\frac{x^2}{2 \sigma^2}\right), \quad x \in [0, \infty)$
1. $f$ is decreasing with mode at $x = 0$.
2. $f$ is concave downward and then upward, with inflection point at $x = \sigma$.
Proof
The formula for $f$ follows from differentiating the CDF above. Properties (a) and (b) follow from standard calculus.
Open the special distribution simulator and select the folded normal distribution. Keep $\mu = 0$ and vary $\sigma$, and note the shape of the probability density function. For selected values of $\sigma$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Moments
The moments of the half-normal distribution can be computed explicitly. Once again we assume that $X$ has the half-normal distribution with parameter $\sigma \in (0, \infty)$.
For $n \in \N$ \begin{align} \E(X^{2n}) & = \sigma^{2n} \frac{(2n)!}{n! 2^n}\ \E(X^{2n+1}) & = \sigma^{2n+1} 2^n \sqrt{\frac{2}{\pi}} n! \end{align}
Proof
As in the definition, we can take $X = \sigma |Z|$ where $Z$ has the standard normal distribution. The even order moments of $X$ are the same as the even order moments of $\sigma Z$. These were computed in the section on the normal distribution. For the odd order moments we again use the simple substitution $z = x^2 / 2$ to get $\E(X^{2n+1}) = \sigma^{2n+1} \int_0^\infty x^{2n+1} \sqrt{\frac{2}{\pi}} e^{-x^2/2} \, dx = \sigma^{2n+1} 2^n \sqrt{\frac{2}{\pi}} \int_0^\infty z^n e^{-z} \, dz = \sigma^{2n+1} 2^n \sqrt{\frac{2}{\pi}} n!$
In particular, we have $\E(X) = \sigma \sqrt{2/\pi}$ and $\var(X) = \sigma^2(1 - 2 / \pi)$
Open the special distribution simulator and select the folded normal distribution. Keep $\mu = 0$ and vary $\sigma$, and note the size and location of the mean$\pm$standard deviation bar. For selected values of $\sigma$, run the simulation 1000 times and compare the mean and standard deviation to the true mean and standard deviation.
Next are the skewness and kurtosis of the half-normal distribution.
Skewness and kurtosis
1. The skewness of $X$ is $\skw(X) = \frac{\sqrt{2 / \pi} (4 / \pi - 1)}{(1 - 2 / \pi)^{3/2}} \approx 0.99527$
2. The kurtosis of $X$ is $\kur(X) = \frac{3 - 4 / \pi - 12 / \pi^2}{(1 - 2 / \pi)^2} \approx 3.8692$
Proof
Skewness and kurtosis are functions of the standard score and so do not depend on the scale parameter $\sigma$. The results then follow by letting $\sigma = 1$ and using the standard computational formulas for skewness and kurtosis in terms of the moments of the half-normal distribution.
Related Distributions
Once again, the most important relation is the one in the definition: If $Y$ has a normal distribution with mean 0 then $X = |Y|$ has a half-normal distribution. Since the half-normal distribution is a scale family, it is trivially closed under scale transformations.
Suppose that $X$ has the half-normal distribution with parameter $\sigma$ and that $b \in (0, \infty)$. Then $b X$ has the half-normal distribution with parameter $b \sigma$.
Proof
As in the definition, let $X = \sigma |Z|$ where $Z$ is standard normal. Then $b X = b \sigma |Z|$.
The standard half-normal distribution is also a special case of the chi distribution.
The standard half-normal distribution is the chi distribution with 1 degree of freedom.
Proof
If $Z$ is a standard normal variable, then $Z^2$ has the chi-square distribution with 1 degree of freedom, and hence $\left|Z\right| = \sqrt{Z^2}$ has the chi distribution with 1 degree of freedom.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Rayleigh distribution, named for John William Strutt, Lord Rayleigh, is the distribution of the magnitude of a two-dimensional random vector whose coordinates are independent, identically distributed, mean 0 normal variables. The distribution has a number of applications in settings where magnitudes of normal variables are important.
The Standard Rayleigh Distribution
Definition
Suppose that $Z_1$ and $Z_2$ are independent random variables with standard normal distributions. The magnitude $R = \sqrt{Z_1^2 + Z_2^2}$ of the vector $(Z_1, Z_2)$ has the standard Rayleigh distribution.
So in this definition, $(Z_1, Z_2)$ has the standard bivariate normal distribution.
Distribution Functions
We give five functions that completely characterize the standard Rayleigh distribution: the distribution function, the probability density function, the quantile function, the reliability function, and the failure rate function. For the remainder of this discussion, we assume that $R$ has the standard Rayleigh distribution.
$R$ has distribution function $G$ given by $G(x) = 1 - e^{-x^2/2}$ for $x \in [0, \infty)$.
Proof
$(Z_1, Z_2)$ has joint PDF $(z_1, z_2) \mapsto \frac{1}{2 \pi} e^{-(z_1^2 + z_2^2)/2}$ on $\R^2$. Hence $\P(R \le x) = \int_{C_x} \frac{1}{2 \pi} e^{-(z_1^2 + z_2^2)/2} d(z_1, z_2)$ where $C_x = \{(z_1, z_2) \in \R^2: z_1^2 + z_2^2 \le x^2\}$. Convert to polar coordinates with $z_1 = r \cos \theta$, $z_2 = r \sin \theta$ to get $\P(R \le x) = \int_0^{2\pi} \int_0^x \frac{1}{2 \pi} e^{-r^2/2} r \, dr \, d\theta$ The result now follows by simple integration.
$R$ has probability density function $g$ given by $g(x) = x e^{-x^2 / 2}$ for $x \in [0, \infty)$.
1. $g$ increases and then decreases with mode at $x = 1$.
2. $g$ is concave downward and then upward with inflection point at $x = \sqrt{3}$.
Proof
The formula for the PDF follows immediately from the distribution function since $g(x) = G^\prime(x)$.
1. $g^\prime(x) = e^{-x^2 / 2}(1 - x^2)$
2. $g^{\prime\prime}(x) = x e^{-x^2/2}(x^2 - 3)$.
Open the Special Distribution Simulator and select the Rayleigh distribution. Keep the default parameter value and note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
$R$ has quantile function $G^{-1}$ given by $G^{-1}(p) = \sqrt{-2 \ln(1 - p)}$ for $p \in [0, 1)$. In particular, the quartiles of $R$ are
1. $q_1 = \sqrt{4 \ln 2 - 2 \ln 3} \approx 0.7585$, the first quartile
2. $q_2 = \sqrt{2 \ln 2} \approx 1.1774$, the median
3. $q_3 = \sqrt{4 \ln 2} \approx 1.6651$, the third quartile
Proof
The formula for the quantile function follows immediately from the distribution function by solving $p = G(x)$ for $x$ in terms of $p \in [0, 1)$.
Open the Special Distribution Calculator and select the Rayleigh distribution. Keep the default parameter value. Note the shape and location of the distribution function. Compute selected values of the distribution function and the quantile function.
$R$ has reliability function $G^c$ given by $G^c(x) = e^{-x^2/2}$ for $x \in [0, \infty)$.
Proof
Recall that the reliability function is simply the right-tail distribution function, so $G^c(x) = 1 - G(x)$.
$R$ has failure rate function $h$ given by $h(x) = x$ for $x \in [0, \infty)$. In particular, $R$ has increasing failure rate.
Proof
Recall that the failure rate function is $h(x) = g(x) \big/ G^c(x)$.
Moments
Once again we assume that $R$ has the standard Rayleigh distribution. We can express the moment generating function of $R$ in terms of the standard normal distribution function $\Phi$. Recall that $\Phi$ is so commonly used that it is a special function of mathematics.
$R$ has moment generating function $m$ given by $m(t) = \E(e^{tR}) = 1 + \sqrt{2 \pi} t e^{t^2/2} \Phi(t), \quad t \in \R$
Proof
By definition $m(t) = \int_0^\infty e^{t x} x e^{-x^2/2} dx$. Combining the exponential and completing the square in $x$ gives $m(t) = e^{t^2/2} \int_0^\infty x e^{-(x - t)^2/2} dx = \sqrt{2 \pi} \int_0^\infty \frac{1}{\sqrt{2 \pi}} x e^{-(x - t)^2/2} dx$ But $x \mapsto \frac{1}{\sqrt{2 \pi}} e^{-(x - t)^2/2}$ is the PDF of the normal distribution with mean $t$ and variance 1. The rest of the derivation follows from basic calculus.
The mean and variance of $R$ are
1. $\E(R) = \sqrt{\pi / 2} \approx 1.2533$
2. $\var(R) = 2 - \pi/2$
Proof
1. Note that $\E(R) = \int_0^\infty x^2 e^{-x^2/2} dx = \sqrt{2 \pi} \int_0^\infty x^2 \frac{1}{\sqrt{2 \pi}}e^{-x^2/2} dx$ But $x \mapsto \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}$ is the PDF of the standard normal distribution. Hence the second integral is $\frac{1}{2}$ (since the variance of the standard normal distribution is 1).
2. An integration by parts gives $\E\left(R^2\right) = \int_0^\infty x^3 e^{-x^2/2} dx = 0 + 2 \int_0^\infty x e^{-x^2/2} dx = 2$
Numerically, $\E(R) \approx 1.2533$ and $\sd(R) \approx 0.6551$.
Open the Special Distribution Simulator and select the Rayleigh distribution. Keep the default parameter value. Note the size and location of the mean$\pm$standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The general moments of $R$ can be expressed in terms of the gamma function $\Gamma$.
$\E(R^n) = 2^{n/2} \Gamma(1 + n/2)$ for $n \in \N$.
Proof
The substitution $u = x^2/2$ gives $\E(R^n) = \int_0^\infty x^n x e^{-x^2/2} dx = \int_0^\infty (2 u)^{n/2} e^{-u} du = 2^{n/2} \int_0^\infty u^{n/2} e^{-u} du$ The last integral is $\Gamma(1 + n/2)$ by definition.
Of course, the formula for the general moments gives an alternate derivation of the mean and variance above, since $\Gamma(3/2) = \sqrt{\pi} / 2$ and $\Gamma(2) = 1$. On the other hand, the moment generating function can also be used to derive the formula for the general moments.
The skewness and kurtosis of $R$ are
1. $\skw(R) = 2 \sqrt{\pi}(\pi - 3) \big/ (4 - \pi)^{3/2} \approx 0.6311$
2. $\kur(R) = (32 - 3 \pi^2) \big/ (4 - \pi)^2 \approx 3.2451$
Proof
These results follow from the standard formulas for the skewness and kurtosis in terms of the moments, since $\E(R) = \sqrt{\pi/2}$, $\E\left(R^2\right) = 2$, $\E\left(R^3\right) = 3 \sqrt{\pi / 2}$, and $\E\left(R^4\right) = 8$.
Related Distributions
The fundamental connection between the standard Rayleigh distribution and the standard normal distribution is given in the very definition of the standard Rayleigh, as the distribution of the magnitude of a point with independent, standard normal coordinates.
Connections to the chi-square distribution.
1. If $R$ has the standard Rayleigh distribution then $R^2$ has the chi-square distribution with 2 degrees of freedom.
2. If $V$ has the chi-square distribution with 2 degrees of freedom then $\sqrt{V}$ has the standard Rayleigh distribution.
Proof
This follows directly from the definition of the standard Rayleigh variable $R = \sqrt{Z_1^2 + Z_2^2}$, where $Z_1$ and $Z_2$ are independent standard normal variables.
Recall also that the chi-square distribution with 2 degrees of freedom is the same as the exponential distribution with scale parameter 2.
Since the quantile function is in closed form, the standard Rayleigh distribution can be simulated by the random quantile method.
Connections between the standard Rayleigh distribution and the standard uniform distribution.
1. If $U$ has the standard uniform distribution (a random number) then $R = G^{-1}(U) = \sqrt{-2 \ln(1 - U)}$ has the standard Rayleigh distribution.
2. If $R$ has the standard Rayleigh distribution then $U = G(R) = 1 - \exp(-R^2/2)$ has the standard uniform distribution.
In part (a), note that $1 - U$ has the same distribution as $U$ (the standard uniform). Hence $R = \sqrt{-2 \ln U}$ also has the standard Rayleigh distribution.
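The random quantile method just described is easy to implement. Here is a short Python sketch (the seed and sample size are arbitrary choices) that simulates the standard Rayleigh distribution and checks the empirical moments against the mean and variance given above.

```python
# Random quantile method for the standard Rayleigh distribution:
# R = sqrt(-2 ln U) with U a random number, using 1 - U =d U.
import numpy as np

rng = np.random.default_rng(0)
r = np.sqrt(-2 * np.log(rng.random(100_000)))

# Compare with E(R) = sqrt(pi/2) and var(R) = 2 - pi/2 from above
print(r.mean(), np.sqrt(np.pi / 2))
print(r.var(), 2 - np.pi / 2)
```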
Open the random quantile simulator and select the Rayleigh distribution with the default parameter value (standard). Run the simulation 1000 times and compare the empirical density function to the true density function.
There is another connection with the uniform distribution that leads to the most common method of simulating a pair of independent standard normal variables. We have seen this before, but it's worth repeating. The result is closely related to the definition of the standard Rayleigh variable as the magnitude of a standard bivariate normal pair, but with the addition of the polar coordinate angle.
Suppose that $R$ has the standard Rayleigh distribution, $\Theta$ is uniformly distributed on $[0, 2 \pi)$, and that $R$ and $\Theta$ are independent. Let $Z = R \cos \Theta$, $W = R \sin \Theta$. Then $(Z, W)$ have the standard bivariate normal distribution.
Proof
By independence, the joint PDF $f$ of $(R, \Theta)$ is given by $f(r, \theta) = r e^{-r^2/2} \frac{1}{2 \pi}, \quad r \in [0, \infty), \, \theta \in [0, 2 \pi)$ As we recall from calculus, the Jacobian of the transformation $z = r \cos \theta$, $w = r \sin \theta$ is $r$, and hence the Jacobian of the inverse transformation that takes $(z, w)$ into $(r, \theta)$ is $1 / r$. Moreover, $r = \sqrt{z^2 + w^2}$. From the change of variables theorem, the PDF $g$ of $(Z, W)$ is given by $g(z, w) = f(r, \theta) \frac{1}{r}$. This leads to $g(z, w) = \frac{1}{2 \pi} e^{-(z^2 + w^2) / 2} = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2} \frac{1}{\sqrt{2 \pi}} e^{-w^2 / 2}, \quad z \in \R, \, w \in \R$ Hence $(Z, W)$ has the standard bivariate normal distribution.
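This result is the basis of the polar simulation method for normal variables. The following Python sketch (seed and sample size are arbitrary choices) generates a Rayleigh magnitude and an independent uniform angle, and checks that the resulting pair behaves like independent standard normal variables.

```python
# The polar method implied by the result above: a Rayleigh magnitude and an
# independent uniform angle yield two independent standard normal variables.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r = np.sqrt(-2 * np.log(rng.random(n)))      # standard Rayleigh
theta = 2 * np.pi * rng.random(n)            # uniform on [0, 2 pi)
z, w = r * np.cos(theta), r * np.sin(theta)  # standard bivariate normal

print(z.mean(), z.std(), w.mean(), w.std())  # approximately 0, 1, 0, 1
print(np.corrcoef(z, w)[0, 1])               # approximately 0
```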
The General Rayleigh Distribution
Definition
The standard Rayleigh distribution is generalized by adding a scale parameter.
If $R$ has the standard Rayleigh distribution and $b \in (0, \infty)$ then $X = b R$ has the Rayleigh distribution with scale parameter $b$.
Equivalently, the Rayleigh distribution is the distribution of the magnitude of a two-dimensional vector whose coordinates are independent, identically distributed, mean 0 normal variables.
If $U_1$ and $U_2$ are independent normal variables with mean 0 and standard deviation $\sigma \in (0, \infty)$ then $X = \sqrt{U_1^2 + U_2^2}$ has the Rayleigh distribution with scale parameter $\sigma$.
Proof
We can take $U_1 = \sigma Z_1$ and $U_2 = \sigma Z_2$ where $Z_1$ and $Z_2$ are independent standard normal variables. Then $X = \sigma \sqrt{Z_1^2 + Z_2^2} = \sigma R$ where $R$ has the standard Rayleigh distribution.
Distribution Functions
In this section, we assume that $X$ has the Rayleigh distribution with scale parameter $b \in (0, \infty)$.
$X$ has cumulative distribution function $F$ given by $F(x) = 1 - \exp \left(-\frac{x^2}{2 b^2}\right)$ for $x \in [0, \infty)$.
Proof
Recall that $F(x) = G(x / b)$ where $G$ is the standard Rayleigh CDF.
$X$ has probability density function $f$ given by $f(x) = \frac{x}{b^2} \exp\left(-\frac{x^2}{2 b^2}\right)$ for $x \in [0, \infty)$.
1. $f$ increases and then decreases with mode at $x = b$.
2. $f$ is concave downward and then upward with inflection point at $x = \sqrt{3} b$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ where $g$ is the standard Rayleigh PDF.
Open the Special Distribution Simulator and select the Rayleigh distribution. Vary the scale parameter and note the shape and location of the probability density function. For various values of the scale parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = b \sqrt{-2 \ln(1 - p)}$ for $p \in [0, 1)$. In particular, the quartiles of $X$ are
1. $q_1 = b \sqrt{4 \ln 2 - 2 \ln 3}$, the first quartile
2. $q_2 = b \sqrt{2 \ln 2}$, the median
3. $q_3 = b \sqrt{4 \ln 2}$, the third quartile
Proof
Recall that $F^{-1}(p) = b G^{-1}(p)$ where $G^{-1}$ is the standard Rayleigh quantile function.
Open the Special Distribution Calculator and select the Rayleigh distribution. Vary the scale parameter and note the location and shape of the distribution function. For various values of the scale parameter, compute selected values of the distribution function and the quantile function.
$X$ has reliability function $F^c$ given by $F^c(x) = \exp\left(-\frac{x^2}{2 b^2}\right)$ for $x \in [0, \infty)$.
Proof
Recall that $F^c(x) = 1 - F(x)$.
$X$ has failure rate function $h$ given by $h(x) = x / b^2$ for $x \in [0, \infty)$. In particular, $X$ has increasing failure rate.
Proof
Recall that $h(x) = f(x) \big/ F^c(x)$.
Moments
Again, we assume that $X$ has the Rayleigh distribution with scale parameter $b$, and recall that $\Phi$ denotes the standard normal distribution function.
$X$ has moment generating function $M$ given by $M(t) = \E(e^{t X}) = 1 + \sqrt{2 \pi} b t \exp\left(\frac{b^2 t^2}{2}\right) \Phi(b t), \quad t \in \R$
Proof
Recall that $M(t) = m(b t)$ where $m$ is the standard Rayleigh MGF.
The mean and variance of $X$ are
1. $\E(X) = b \sqrt{\pi / 2}$
2. $\var(X) = b^2 (2 - \pi/2)$
Proof
These results follow from the standard mean and variance and basic properties of expected value and variance.
Open the Special Distribution Simulator and select the Rayleigh distribution. Vary the scale parameter and note the size and location of the mean$\pm$standard deviation bar. For various values of the scale parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
Again, the general moments can be expressed in terms of the gamma function $\Gamma$.
$\E(X^n) = b^n 2^{n/2} \Gamma(1 + n/2)$ for $n \in \N$.
Proof
This follows from the standard moments and basic properties of expected value.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 2 \sqrt{\pi}(\pi - 3) \big/ (4 - \pi)^{3/2} \approx 0.6311$
2. $\kur(X) = (32 - 3 \pi^2) \big/ (4 - \pi)^2 \approx 3.2451$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are unchanged by a scale transformation. Thus the results follow from the standard skewness and kurtosis.
Related Distributions
The fundamental connection between the Rayleigh distribution and the normal distribution is the definition, and of course, is the primary reason that the Rayleigh distribution is special in the first place. By construction, the Rayleigh distribution is a scale family, and so is closed under scale transformations.
If $X$ has the Rayleigh distribution with scale parameter $b \in (0, \infty)$ and if $c \in (0, \infty)$ then $c X$ has the Rayleigh distribution with scale parameter $b c$.
The Rayleigh distribution is a special case of the Weibull distribution.
The Rayleigh distribution with scale parameter $b \in (0, \infty)$ is the Weibull distribution with shape parameter $2$ and scale parameter $\sqrt{2} b$.
The following result generalizes the connection between the standard Rayleigh and chi-square distributions.
If $X$ has the Rayleigh distribution with scale parameter $b \in (0, \infty)$ then $X^2$ has the exponential distribution with scale parameter $2 b^2$.
Proof
We can take $X = b R$ where $R$ has the standard Rayleigh distribution. Then $X^2 = b^2 R^2$, and $R^2$ has the exponential distribution with scale parameter 2. Hence $X^2$ has the exponential distribution with scale parameter $2 b^2$.
Since the quantile function is in closed form, the Rayleigh distribution can be simulated by the random quantile method.
Suppose that $b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution (a random number) then $X = F^{-1}(U) = b \sqrt{-2 \ln(1 - U)}$ has the Rayleigh distribution with scale parameter $b$.
2. If $X$ has the Rayleigh distribution with scale parameter $b$ then $U = F(X) = 1 - \exp[-X^2 / (2 b^2)]$ has the standard uniform distribution.
In part (a), note that $1 - U$ has the same distribution as $U$ (the standard uniform). Hence $X = b \sqrt{-2 \ln U}$ also has the Rayleigh distribution with scale parameter $b$.
Open the random quantile simulator and select the Rayleigh distribution. For selected values of the scale parameter, run the simulation 1000 times and compare the empirical density function to the true density function.
Finally, the Rayleigh distribution is a member of the general exponential family.
If $X$ has the Rayleigh distribution with scale parameter $b \in (0, \infty)$ then $X$ has a one-parameter exponential distribution with natural parameter $-1/b^2$ and natural statistic $X^2 / 2$.
Proof
This follows directly from the definition of the general exponential distribution.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Maxwell distribution, named for James Clerk Maxwell, is the distribution of the magnitude of a three-dimensional random vector whose coordinates are independent, identically distributed, mean 0 normal variables. The distribution has a number of applications in settings where magnitudes of normal variables are important, particularly in physics. It is also called the Maxwell-Boltzmann distribution in honor also of Ludwig Boltzmann. The Maxwell distribution is closely related to the Rayleigh distribution, which governs the magnitude of a two-dimensional random vector whose coordinates are independent, identically distributed, mean 0 normal variables.
The Standard Maxwell Distribution
Definition
Suppose that $Z_1$, $Z_2$, and $Z_3$ are independent random variables with standard normal distributions. The magnitude $R = \sqrt{Z_1^2 + Z_2^2 + Z_3^2}$ of the vector $(Z_1, Z_2, Z_3)$ has the standard Maxwell distribution.
So in the context of the definition, $(Z_1, Z_2, Z_3)$ has the standard trivariate normal distribution. The Maxwell distribution is a continuous distribution on $[0, \infty)$.
Distribution Functions
In this discussion, we assume that $R$ has the standard Maxwell distribution. The distribution function of $R$ can be expressed in terms of the standard normal distribution function $\Phi$. Recall that $\Phi$ occurs so frequently that it is considered a special function in mathematics.
$R$ has distribution function $G$ given by $G(x) = 2 \Phi(x) - \sqrt{\frac{2}{\pi}} x e^{-x^2/2} - 1, \quad x \in [0, \infty)$
Proof
$(Z_1, Z_2, Z_3)$ has joint PDF $(z_1, z_2, z_3) \mapsto \frac{1}{(2 \pi)^{3/2}} e^{-(z_1^2 + z_2^2 + z_3^2)/2}$ on $\R^3$. Hence $\P(R \le x) = \int_{B_x} \frac{1}{(2 \pi)^{3/2}} e^{-(z_1^2 + z_2^2 + z_3^2)/2} d(z_1, z_2, z_3), \quad x \in [0, \infty)$ where $B_x = \left\{(z_1, z_2, z_3) \in \R^3: z_1^2 + z_2^2 + z_3^2 \le x^2\right\}$, the spherical region of radius $x$ centered at the origin. Convert to spherical coordinates with $z_1 = \rho \sin \phi \cos \theta$, $z_2 = \rho \sin \phi \sin \theta$, $z_3 = \rho \cos \phi$ to get $\P(R \le x) = \int_0^\pi \int_0^{2 \pi} \int_0^x \frac{1}{(2 \pi)^{3/2}} e^{-\rho^2/2} \rho^2 \sin \phi \, d \rho \, d \theta \, d\phi, \quad x \in [0, \infty)$ The result now follows by simple integration.
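The distribution function is also easy to check by simulation. Here is a Monte Carlo sketch (the seed, sample size, and evaluation points are arbitrary choices) comparing the empirical CDF of $\sqrt{Z_1^2 + Z_2^2 + Z_3^2}$ with $G$.

```python
# A Monte Carlo check (not from the text) of the Maxwell CDF formula above.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.standard_normal((500_000, 3))
r = np.sqrt((z**2).sum(axis=1))

def maxwell_cdf(x):
    return 2 * norm.cdf(x) - np.sqrt(2 / np.pi) * x * np.exp(-x**2 / 2) - 1

for x in (0.5, 1.0, 2.0):
    print(x, (r <= x).mean(), maxwell_cdf(x))
```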
$R$ has probability density function $g$ given by $g(x) = \sqrt{\frac{2}{\pi}} x^2 e^{-x^2 / 2}, \quad x \in [0, \infty)$
1. $g$ increases and then decreases with mode at $x = \sqrt{2}$.
2. $g$ is concave upward, then downward, then upward again, with inflection points at $x_1 = \sqrt{(5 - \sqrt{17})/2} \approx 0.6622$ and $x_2 = \sqrt{(5 + \sqrt{17})/2} \approx 2.1358$
Proof
The formula for the PDF follows immediately from the distribution function since $g(x) = G^\prime(x)$.
1. $g^\prime(x) = \sqrt{2 / \pi} x e^{-x^2 / 2}(2 - x^2)$
2. $g^{\prime\prime}(x) = \sqrt{2 / \pi} e^{-x^2/2}(x^4 - 5 x^2 + 2)$
Open the Special Distribution Simulator and select the Maxwell distribution. Keep the default parameter value and note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function has no simple closed-form expression.
Open the Special Distribution Calculator and select the Maxwell distribution. Keep the default parameter value. Find approximate values of the median and the first and third quartiles.
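Since the quantile function must be approximated numerically, a root finder applied to the distribution function above does the job. A sketch follows; the bracket $[0, 10]$ is an arbitrary but safe choice since $G$ increases from 0 to 1.

```python
# Approximating standard Maxwell quantiles by root-finding on the CDF above.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def maxwell_cdf(x):
    return 2 * norm.cdf(x) - np.sqrt(2 / np.pi) * x * np.exp(-x**2 / 2) - 1

for p in (0.25, 0.5, 0.75):
    print(p, brentq(lambda x: maxwell_cdf(x) - p, 0.0, 10.0))
# The median should come out to approximately 1.54
```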
Moments
Suppose again that $R$ has the standard Maxwell distribution. The moment generating function of $R$, like the distribution function, can be expressed in terms of the standard normal distribution function $\Phi$.
$R$ has moment generating function $m$ given by $m(t) = \E\left(e^{tR}\right) = \sqrt{\frac{2}{\pi}} t + 2(1 + t^2) e^{t^2/2} \Phi(t), \quad t \in \R$
Proof
Completing the square in $x$ gives $m(t) = \int_0^\infty \sqrt{\frac{2}{\pi}} x^2 e^{-x^2/2} e^{tx} dx = \sqrt{\frac{2}{\pi}} e^{t^2/2} \int_0^\infty x^2 e^{-(x - t)^2/2} dx$ The substitution $z = x - t$ gives $m(t) = \sqrt{\frac{2}{\pi}} e^{t^2/2} \int_{-t}^\infty (z + t)^2 e^{-z^2/2} dz = \sqrt{\frac{2}{\pi}} e^{t^2/2} \int_{-t}^\infty (z^2 + 2 t z + t^2) e^{-z^2/2} dz$ Integrating by parts or by simple substitution, using the fact that $z \mapsto \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}$ is the standard normal PDF, and that $1 - \Phi(-t) = \Phi(t)$ we have \begin{align} \int_{-t}^\infty z^2 e^{-z^2/2} dz & = -t e^{-t^2/2} + \sqrt{2 \pi} \Phi(t) \ \int_{-t}^\infty 2 t z e^{-z^2/2} dz & = 2 t e^{-t^2/2} \ \int_{-t}^\infty t^2 e^{-z^2/2} dz & = t^2 \sqrt{2 \pi} \Phi(t) \end{align} Simplifying gives the result.
The mean and variance of $R$ can be found from the moment generating function, but direct computations are also easy.
The mean and variance of $R$ are
1. $\E(R) = 2 \sqrt{2 / \pi}$
2. $\var(R) = 3 - 8/\pi$
Proof
The integration methods are by parts and by simple substitution. $\E(R) = \int_0^\infty \sqrt{\frac{2}{\pi}} x^3 e^{-x^2/2} dx = 2 \sqrt{\frac{2}{\pi}} \int_0^\infty x e^{-x^2/2} dx = 2 \sqrt{\frac{2}{\pi}}$ $\E\left(R^2\right) = \int_0^\infty \sqrt{\frac{2}{\pi}} x^4 e^{-x^2/2} dx = 3 \int_0^\infty \sqrt{\frac{2}{\pi}} x^2 e^{-x^2/2} dx = 3$
Numerically, $\E(R) \approx 1.5958$ and $\sd(R) \approx 0.6734$.
Open the Special Distribution Simulator and select the Maxwell distribution. Keep the default parameter value. Note the size and location of the mean$\pm$standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The general moments of $R$ can be expressed in terms of the gamma function $\Gamma$.
For $n \in \N_+$, $\E(R^n) = \frac{2^{n/2 + 1}}{\sqrt{\pi}} \Gamma\left(\frac{n + 3}{2}\right)$
Proof
The substitution $u = x^2/2$ gives $\E(R^n) = \int_0^\infty \sqrt{\frac{2}{\pi}} x^{n + 1} x e^{-x^2/2} dx = \int_0^\infty \sqrt{\frac{2}{\pi}} (2 u)^{(n+1)/2} e^{-u} du = \frac{2^{n/2 + 1}}{\sqrt{\pi}} \int_0^\infty u^{(n+1)/2} e^{-u} du$ The last integral is $\Gamma[(n+3)/2]$ by definition.
Of course, the formula for the general moments gives an alternate derivation for the mean and variance above since $\Gamma(2) = 1$ and $\Gamma(5/2) = 3 \sqrt{\pi} / 4$. On the other hand, the moment generating function can also be used to derive the formula for the general moments. Finally, we give the skewness and kurtosis of $R$.
The skewness and kurtosis of $R$ are
1. $\skw(R) = 2 \sqrt{2}(16 - 5 \pi) \big/ (3 \pi - 8)^{3/2} \approx 0.4857$
2. $\kur(R) = (15 \pi^2 +16 \pi -192) \big/ (3 \pi - 8)^2 \approx 3.1082$
Proof
These results follow from the standard formulas for the skewness and kurtosis in terms of the moments, since $\E(R) = 2 \sqrt{2 / \pi}$, $\E\left(R^2\right) = 3$, $\E\left(R^3\right) = 8 \sqrt{2/\pi}$, and $\E\left(R^4\right) = 15$.
Related Distributions
The fundamental connection between the standard Maxwell distribution and the standard normal distribution is given in the very definition of the standard Maxwell, as the distribution of the magnitude of a vector in $\R^3$ with independent, standard normal coordinates.
Connections to the chi-square distribution.
1. If $R$ has the standard Maxwell distribution then $R^2$ has the chi-square distribution with 3 degrees of freedom.
2. If $V$ has the chi-square distribution with 3 degrees of freedom then $\sqrt{V}$ has the standard Maxwell distribution.
Proof
This follows directly from the definition of the standard Maxwell variable $R = \sqrt{Z_1^2 + Z_2^2 + Z_3^2}$, where $Z_1$, $Z_2$, and $Z_3$ are independent standard normal variables.
Equivalently, the Maxwell distribution is simply the chi distribution with 3 degrees of freedom.
The General Maxwell Distribution
Definition
The standard Maxwell distribution is generalized by adding a scale parameter.
If $R$ has the standard Maxwell distribution and $b \in (0, \infty)$ then $X = b R$ has the Maxwell distribution with scale parameter $b$.
Equivalently, the Maxwell distribution is the distribution of the magnitude of a three-dimensional vector whose coordinates are independent, identically distributed, mean 0 normal variables.
If $U_1$, $U_2$ and $U_3$ are independent normal variables with mean 0 and standard deviation $\sigma \in (0, \infty)$ then $X = \sqrt{U_1^2 + U_2^2 + U_3^2}$ has the Maxwell distribution with scale parameter $\sigma$.
Proof
We can take $U_i = \sigma Z_i$ for $i \in \{1, 2, 3\}$ where $Z_1$, $Z_2$, and $Z_3$ are independent standard normal variables. Then $X = \sigma \sqrt{Z_1^2 + Z_2^2 + Z_3^2} = \sigma R$ where $R$ has the standard Maxwell distribution.
Distribution Functions
In this section, we assume that $X$ has the Maxwell distribution with scale parameter $b \in (0, \infty)$. We can give the distribution function of $X$ in terms of the standard normal distribution function $\Phi$.
$X$ has distribution function $F$ given by $F(x) = 2 \Phi\left(\frac{x}{b}\right) - \frac{1}{b}\sqrt{\frac{2}{\pi}} x \exp\left(-\frac{x^2}{2 b^2}\right) - 1, \quad x \in [0, \infty)$
Proof
Recall that $F(x) = G(x / b)$ where $G$ is the standard Maxwell CDF.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{b^3}\sqrt{\frac{2}{\pi}} x^2 \exp\left(-\frac{x^2}{2 b^2}\right), \quad x \in [0, \infty)$
1. $f$ increases and then decreases with mode at $x = b \sqrt{2}$.
2. $f$ is concave upward, then downward, then upward again, with inflection points at $x = b \sqrt{(5 \pm \sqrt{17})/2}$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ where $g$ is the standard Maxwell PDF.
Open the Special Distribution Simulator and select the Maxwell distribution. Vary the scale parameter and note the shape and location of the probability density function. For various values of the scale parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
Again, the quantile function does not have a simple, closed-form expression.
Open the Special Distribution Calculator and select the Maxwell distribution. For various values of the scale parameter, compute the median and the first and third quartiles.
Moments
Again, we assume that $X$ has the Maxwell distribution with scale parameter $b \in (0, \infty)$. As before, the moment generating function of $X$ can be written in terms of the standard normal distribution function $\Phi$.
$X$ has moment generating function $M$ given by $M(t) = \E\left(e^{t X}\right) = \sqrt{\frac{2}{\pi}} b t + 2(1 + b^2 t^2) \exp\left(\frac{b^2 t^2}{2}\right) \Phi(b t), \quad t \in \R$
Proof
Recall that $M(t) = m(b t)$ where $m$ is the standard Maxwell MGF.
The mean and variance of $X$ are
1. $\E(X) = 2 b \sqrt{2 / \pi}$
2. $\var(X) = b^2 (3 - 8/\pi)$
Proof
These results follow from the standard mean and variance and basic properties of expected value and variance.
Open the Special Distribution Simulator and select the Maxwell distribution. Vary the scale parameter and note the size and location of the mean$\pm$standard deviation bar. For various values of the scale parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
As before, the general moments can be expressed in terms of the gamma function $\Gamma$.
For $n \in \N$, $\E(X^n) = b^n \frac{2^{n/2 + 1}}{\sqrt{\pi}} \Gamma\left(\frac{n + 3}{2}\right)$
Proof
This follows from the standard moments and basic properties of expected value.
Finally, the skewness and kurtosis are unchanged.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 2 \sqrt{2}(16 - 5 \pi) \big/ (3 \pi - 8)^{3/2} \approx 0.4857$
2. $\kur(X) = (15 \pi^2 +16 \pi -192) \big/ (3 \pi - 8)^2 \approx 3.1082$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are unchanged by a scale transformation. Thus the results follow from the standard skewness and kurtosis.
Related Distributions
The fundamental connection between the Maxwell distribution and the normal distribution is given in the definition, and of course, is the primary reason that the Maxwell distribution is special in the first place.
By construction, the Maxwell distribution is a scale family, and so is closed under scale transformations.
If $X$ has the Maxwell distribution with scale parameter $b \in (0, \infty)$ and if $c \in (0, \infty)$ then $c X$ has the Maxwell distribution with scale parameter $b c$.
Proof
By definition, we can assume that $X = b R$ where $R$ has the standard Maxwell distribution. Hence $c X = (c b) R$ has the Maxwell distribution with scale parameter $b c$.
The Maxwell distribution is a general exponential distribution.
If $X$ has the Maxwell distribution with scale parameter $b \in (0, \infty)$ then $X$ is a one-parameter exponential family with natural parameter $-1/b^2$ and natural statistic $X^2 / 2$.
Proof
This follows directly from the definition of the general exponential distribution and the form of the PDF.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\sgn}{\,\text{sgn}}$
The Lévy distribution, named for the French mathematician Paul Lévy, is important in the study of Brownian motion, and is one of only three stable distributions whose probability density function can be expressed in a simple, closed form.
The Standard Lévy Distribution
Definition
If $Z$ has the standard normal distribution then $U = 1 / Z^2$ has the standard Lévy distribution.
So the standard Lévy distribution is a continuous distribution on $(0, \infty)$.
Distribution Functions
We assume that $U$ has the standard Lévy distribution. The distribution function of $U$ has a simple expression in terms of the standard normal distribution function $\Phi$, not surprising given the definition.
$U$ has distribution function $G$ given by $G(u) = 2\left[1 - \Phi\left(\frac{1}{\sqrt{u}}\right)\right], \quad u \in (0, \infty)$
Proof
For $u \in (0, \infty)$, $\P\left(\frac{1}{Z^2} \le u \right) = \P\left(Z^2 \ge \frac{1}{u}\right) = \P\left(Z \ge \frac{1}{\sqrt{u}}\right) + \P\left(Z \le -\frac{1}{\sqrt{u}}\right) = 2\left[1 - \Phi\left(\frac{1}{\sqrt{u}}\right)\right]$
Similarly, the quantile function of $U$ has a simple expression in terms of the standard normal quantile function $\Phi^{-1}$.
$U$ has quantile function $G^{-1}$ given by $G^{-1}(p) = \frac{1}{\left[\Phi^{-1}\left(1 - p / 2\right)\right]^2}, \quad p \in [0, 1)$ The quartiles of $U$ are
1. $q_1 = \left[\Phi^{-1}\left(\frac{7}{8}\right)\right]^{-2} \approx 0.7557$, the first quartile.
2. $q_2 = \left[\Phi^{-1}\left(\frac{3}{4}\right)\right]^{-2} \approx 2.1980$, the median.
3. $q_3 = \left[\Phi^{-1}\left(\frac{5}{8}\right)\right]^{-2} \approx 9.8516$, the third quartile.
Proof
The quantile function can be obtained from the distribution function by solving $p = G(u)$ for $u = G^{-1}(p)$.
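Since the quantile function is expressed through $\Phi^{-1}$, the quartiles above are easy to reproduce numerically. A short Python sketch:

```python
# Computing the standard Levy quartiles from the quantile function above,
# G^{-1}(p) = 1 / [Phi^{-1}(1 - p/2)]^2.
from scipy.stats import norm

def levy_quantile(p):
    return 1.0 / norm.ppf(1 - p / 2) ** 2

for p in (0.25, 0.5, 0.75):
    print(p, levy_quantile(p))  # approximately 0.7557, 2.1980, 9.8516
```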
Open the Special Distribution Calculator and select the Lévy distribution. Keep the default parameter values. Note the shape and location of the distribution function. Compute a few values of the distribution function and the quantile function.
Finally, the probability density function of $U$ has a simple closed expression.
$U$ has probability density function $g$ given by $g(u) = \frac{1}{\sqrt{2 \pi}} \frac{1}{u^{3/2}} \exp\left(-\frac{1}{2 u}\right), \quad u \in (0, \infty)$
1. $g$ increases and then decreases with mode at $u = \frac{1}{3}$.
2. $g$ is concave upward, then downward, then upward again, with inflection points at $u = \frac{1}{3} - \frac{\sqrt{10}}{15} \approx 0.1225$ and at $u = \frac{1}{3} + \frac{\sqrt{10}}{15} \approx 0.5442$.
Proof
The formula for $g$ follows from differentiating the CDF given above: $g(u) = -2 \Phi^\prime(u^{-1/2}) \left(-\frac{1}{2} u^{-3/2}\right), \quad u \in (0, \infty)$ But $\Phi^\prime(z) = \phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2 / 2}$, the standard normal PDF. Substitution and simplification then gives the results. Parts (a) and (b) also follow from standard calculus: \begin{align} g^\prime(u) & = \frac{1}{2 \sqrt{2 \pi}} u^{-7/2} e^{-u^{-1}/2} (-3 u + 1) \ g^{\prime\prime}(u) & = \frac{1}{4 \sqrt{2 \pi}} u^{-11/2} e^{-u^{-1}/2} (15 u^2 - 10 u + 1) \end{align}
Open the Special Distribution Simulator and select the Lévy distribution. Keep the default parameter values. Note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
Moments
We assume again that $U$ has the standard Lévy distribution. After exploring the graphs of the probability density function and distribution function above, you probably noticed that the Lévy distribution has a very heavy tail. The 99th percentile is about 6400, for example. The following result is not surprising.
$\E(U) = \infty$
Proof
Note that $u \mapsto e^{-1/(2u)}$ is increasing, so $e^{-1/(2u)} \ge e^{-1/2}$ for $u \ge 1$. Hence $\E(U) = \int_0^\infty u \frac{1}{\sqrt{2 \pi} u^{3/2}} e^{-1/(2u)} du \gt \int_1^\infty \frac{1}{\sqrt{2 \pi}} u^{-1/2} e^{-1/2} du = \infty$
Of course, the higher-order moments are infinite as well, and the variance, skewness, and kurtosis do not exist. The moment generating function is infinite at every positive value, and so is of no use. On the other hand, the characteristic function of the standard Lévy distribution is very useful. For the following result, recall that the sign function $\sgn$ is given by $\sgn(t) = 1$ for $t \gt 0$, $\sgn(t) = - 1$ for $t \lt 0$, and $\sgn(0) = 0$.
$U$ has characteristic function $\chi_0$ given by $\chi_0(t) = \E\left(e^{i t U}\right) = \exp\left(-\left|t\right|^{1/2}\left[1 + i \sgn(t)\right]\right), \quad t \in \R$
Related Distributions
The most important relationship is the one in the definition: If $Z$ has the standard normal distribution then $U = 1 / Z^2$ has the standard Lévy distribution. The following result is basically the converse.
If $U$ has the standard Lévy distribution, then $V = 1/\sqrt{U}$ has the standard half-normal distribution.
Proof
From the definition, we can take $U = 1 / Z^2$ where $Z$ has the standard normal distribution. Then $1/\sqrt{U} = \left|Z\right|$, and $\left|Z\right|$ has the standard half-normal distribution.
The General Lévy Distribution
Like so many other standard distributions, the standard Lévy distribution is generalized by adding location and scale parameters.
Definition
Suppose that $U$ has the standard Lévy distribution, and $a \in \R$ and $b \in (0, \infty)$. Then $X = a + b U$ has the Lévy distribution with location parameter $a$ and scale parameter $b$.
Note that $X$ has a continuous distribution on the interval $(a, \infty)$.
Distribution Functions
Suppose that $X$ has the Lévy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. As before, the distribution function of $X$ has a simple expression in terms of the standard normal distribution function $\Phi$.
$X$ has distribution function $F$ given by $F(x) = 2\left[1 - \Phi\left(\sqrt{\frac{b}{x - a}}\right)\right], \quad x \in (a, \infty)$
Proof
Recall that $F(x) = G\left(\frac{x - a}{b}\right)$ where $G$ is the standard Lévy CDF.
Similarly, the quantile function of $X$ has a simple expression in terms of the standard normal quantile function $\Phi^{-1}$.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = a + \frac{b}{\left[\Phi^{-1}\left(1 - p/2\right)\right]^2}, \quad p \in [0, 1)$ The quartiles of $X$ are
1. $q_1 = a + b \left[\Phi^{-1}\left(\frac{7}{8}\right)\right]^{-2}$, the first quartile.
2. $q_2 = a + b \left[\Phi^{-1}\left(\frac{3}{4}\right)\right]^{-2}$, the median.
3. $q_3 = a + b \left[\Phi^{-1}\left(\frac{5}{8}\right)\right]^{-2}$, the third quartile.
Proof
Recall that $F^{-1}(p) = a + b G^{-1}(p)$, where $G^{-1}$ is the standard Lévy quantile function.
Open the Special Distribution Calculator and select the Lévy distribution. Vary the parameter values and note the shape of the graph of the distribution function function. For various values of the parameters, compute a few values of the distribution function and the quantile function.
Finally, the probability density function of $X$ has a simple closed expression.
$X$ has probability density function $f$ given by $f(x) = \sqrt{\frac{b}{2 \pi}} \frac{1}{(x - a)^{3/2}} \exp\left[-\frac{b}{2 (x - a)}\right], \quad x \in (a, \infty)$
1. $f$ increases and then decreases with mode at $x = a + \frac{1}{3} b$.
2. $f$ is concave upward, then downward, then upward again with inflection points at $x = a + \left(\frac{1}{3} \pm \frac{\sqrt{10}}{15}\right) b$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right)$ where $g$ is the standard Lévy PDF, so the formula for $f$ follow from the definition of $g$ and simple algebra. Parts (a) and (b) follow from the corresponding results for $g$.
Open the Special Distribution Simulator and select the Lévy distribution. Vary the parameters and note the shape and location of the probability density function. For various parameter values, run the simulation 1000 times and compare the empirical density function to the probability density function.
Moments
Assume again that $X$ has the Lévy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. Of course, since the standard Lévy distribution has infinite mean, so does the general Lévy distribution.
$\E(X) = \infty$
Also as before, the variance, skewness, and kurtosis of $X$ are undefined. On the other hand, the characteristic function of $X$ is very important.
$X$ has characteristic function $\chi$ given by $\chi(t) = \E\left(e^{i t X}\right) = \exp\left(i t a - b^{1/2} \left|t\right|^{1/2}[1 + i \sgn(t)]\right), \quad t \in \R$
Proof
This follows from the standard characteristic function since $\chi(t) = e^{i t a} \chi_0(b t)$. Note that $\sgn(b t) = \sgn(t)$ since $b \gt 0$.
Related Distributions
Since the Lévy distribution is a location-scale family, it is trivially closed under location-scale transformations.
Suppose that $X$ has the Lévy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, and that $c \in \R$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the Lévy distribution with location parameter $c + a d$ and scale parameter $b d$.
Proof
From the definition, we can take $X = a + b U$ where $U$ has the standard Lévy distribution. Hence $Y = c + d X = (c + a d) + (b d) U$ has the Lévy distribution with location parameter $c + a d$ and scale parameter $b d$.
Of more interest is the fact that the Lévy distribution is closed under convolution (corresponding to sums of independent variables).
Suppose that $X_1$ and $X_2$ are independent, and that, $X_k$ has the Lévy distribution with location parameter $a_k \in \R$ and scale parameter $b_k \in (0, \infty)$ for $k \in \{1, 2\}$. Then $X_1 + X_2$ has the Lévy distribution with location parameter $a_1 + a_2$ and scale parameter $(b_1^{1/2} + b_2^{1/2})^2$.
Proof
The characteristic function of $X_k$ is $\chi_k(t) = \exp\left(i t a_k - b_k^{1/2} \left| t \right|^{1/2}[1 + i \sgn(t)]\right), \quad t \in \R$ for $k \in \{1, 2\}$. Hence the characteristic function of $X_1 + X_2$ is \begin{align*} \chi(t) = \chi_1(t) \chi_2(t) & = \exp\left[i t (a_1 + a_2) - \left(b_1^{1/2} + b_2^{1/2}\right) \left| t \right|^{1/2}[1 + i \sgn(t)]\right]\ & = \exp\left[i t A - B^{1/2} \left| t \right|^{1/2}[1 + i \sgn(t)]\right], \quad t \in \R \end{align*} where $A = a_1 + a_2$ is the location parameter and $B = \left(b_1^{1/2} + b_2^{1/2}\right)^2$ is the scale parameter.
As a corollary, the Lévy distribution is a stable distribution with index $\alpha = \frac{1}{2}$:
Suppose that $n \in \N_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each having the Lévy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. Then $X_1 + X_2 + \cdots + X_n$ has the Lévy distribution with location parameter $n a$ and scale parameter $n^2 b$.
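The stability property can be illustrated by simulation. Here is a sketch (assumptions: $a = 0$, $b = 1$, $n = 3$, a fixed seed, and 500,000 replications, all arbitrary choices). Since the mean is infinite, we compare medians rather than means.

```python
# Monte Carlo illustration of stability: the sum of n i.i.d. standard Levy
# variables should have the Levy distribution with location 0 and scale n^2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 3
u = 1.0 / rng.standard_normal((500_000, n)) ** 2  # standard Levy samples
s = u.sum(axis=1)

median_formula = n**2 / norm.ppf(0.75) ** 2  # median of Levy(0, n^2)
print(np.median(s), median_formula)          # both approximately 19.8
```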
Stability is one of the reasons for the importance of the Lévy distribution. From the characteristic function, it follows that the skewness parameter is $\beta = 1$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.16%3A_The_Levy_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In this section, we will study the beta distribution, the most important distribution that has bounded support. But before we can study the beta distribution we must study the beta function.
The Beta Function
Definition
The beta function $B$ is defined as follows: $B(a, b) = \int_0^1 u^{a-1} (1 - u)^{b - 1} du; \quad a, \, b \in (0, \infty)$
Proof that $B$ is well defined
We need to show that $B(a, b) \lt \infty$ for every $a, \, b \in (0, \infty)$. The integrand is positive on $(0, 1)$, so the integral exists, either as a real number or $\infty$. If $a \ge 1$ and $b \ge 1$, the integrand is continuous on $[0, 1]$, so of course the integral is finite. Thus, the only cases of interest are when $0 \lt a \lt 1$ or $0 \lt b \lt 1$. Note that $\int_0^1 u^{a-1} (1 - u)^{b - 1} du = \int_0^{1/2} u^{a-1} (1 - u)^{b - 1} du + \int_{1/2}^1 u^{a-1} (1 - u)^{b - 1} du$ If $0 \lt a \lt 1$, $(1 - u)^{b-1}$ is bounded on $\left(0, \frac{1}{2}\right]$ and $\int_0^{1/2} u^{a - 1} \, du = \frac{1}{a 2^a}$. Hence the first integral on the right in the displayed equation is finite. Similarly, If $0 \lt b \lt 1$, $u^{a-1}$ is bounded on $\left[\frac{1}{2}, 1\right)$ and $\int_{1/2}^1 (1 - u)^{b-1} \, du = \frac{1}{b 2^b}$. Hence the second integral on the right in the displayed equation is also finite.
The beta function was first introduced by Leonhard Euler.
Properties
The beta function satisfies the following properties:
1. $B(a, b) = B(b, a)$ for $a, \, b \in (0, \infty)$, so $B$ is symmetric.
2. $B(a, 1) = \frac{1}{a}$ for $a \in (0, \infty)$
3. $B(1, b) = \frac{1}{b}$ for $b \in (0, \infty)$
Proof
1. Using the substitution $v = 1 - u$ we have $B(a, b) = \int_0^1 u^{a-1}(1 - u)^{b-1} du = \int_0^1 (1 - v)^{a-1} v^{b-1} dv = B(b, a)$
2. $B(a, 1) = \int_0^1 u^{a-1} du = \frac{1}{a}$
3. This follows from (a) and (b).
The beta function has a simple expression in terms of the gamma function:
If $a, \, b \in (0, \infty)$ then $B(a, b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a + b)}$
Proof
From the definitions, we can express $\Gamma(a + b) B(a, b)$ as a double integral: $\Gamma(a + b) B(a, b) = \int_0^\infty x^{a + b -1} e^{-x} dx \int_0^1 y^{a-1} (1 - y)^{b-1} dy = \int_0^\infty \int_0^1 (x y)^{a-1}[x (1 - y)]^{b-1} x e^{-x} dx \, dy$ Next we use the transformation $w = x y$, $z = x (1 - y)$ which maps $(0, \infty) \times (0, 1)$ one-to-one onto $(0, \infty) \times (0, \infty)$. The inverse transformation is $x = w + z$, $y = w \big/(w + z)$ and the absolute value of the Jacobian is $\left|\det\frac{\partial(x, y)}{\partial(w, z)}\right| = \frac{1}{(w + z)}$ Thus, using the change of variables theorem for multiple integrals, the integral above becomes $\int_0^\infty \int_0^\infty w^{a-1} z^{b-1} (w + z) e^{-(w + z)} \frac{1}{w + z} dw \, dz$ which after simplifying is $\Gamma(a) \Gamma(b)$.
Recall that the gamma function is a generalization of the factorial function. Here is the corresponding result for the beta function:
If $j, \, k \in \N_+$ then $B(j, k) = \frac{(j - 1)! (k - 1)!}{(j + k - 1)!}$
Proof
Recall that $\Gamma(n) = (n - 1)!$ for $n \in \N_+$, so this result follows from the previous one.
Let's generalize this result. First, recall from our study of combinatorial structures that for $a \in \R$ and $j \in \N$, the ascending power of base $a$ and order $j$ is $a^{[j]} = a (a + 1) \cdots [a + (j - 1)]$
If $a, \, b \in (0, \infty)$, and $j, \, k \in \N$, then $\frac{B(a + j, b + k)}{B(a, b)} = \frac{a^{[j]} b^{[k]}}{(a + b)^{[j + k]}}$
Proof
Recall that $\Gamma(a + j) = a^{[j]} \Gamma(a)$, so the result follows from the representation above for the beta function in terms of the gamma function.
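Both the gamma representation and the ascending-power identity are easy to confirm numerically. A sketch follows; the parameter values are arbitrary choices.

```python
# Numerical confirmation of B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b) and
# of the ascending-power identity for B(a + j, b + k) / B(a, b).
from math import gamma
from scipy.special import beta
from scipy.integrate import quad

a, b, j, k = 2.5, 1.5, 2, 3

# The defining integral versus the gamma representation
integral, _ = quad(lambda u: u**(a - 1) * (1 - u)**(b - 1), 0, 1)
print(integral, beta(a, b), gamma(a) * gamma(b) / gamma(a + b))

def ascending(x, n):
    # x^{[n]} = x (x + 1) ... (x + n - 1)
    prod = 1.0
    for i in range(n):
        prod *= x + i
    return prod

lhs = beta(a + j, b + k) / beta(a, b)
rhs = ascending(a, j) * ascending(b, k) / ascending(a + b, j + k)
print(lhs, rhs)
```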
$B\left(\frac{1}{2}, \frac{1}{2}\right) = \pi$.
Proof
This follows from the representation in terms of the gamma function above, since $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$ and $\Gamma(1) = 1$: $B\left(\frac{1}{2}, \frac{1}{2}\right) = \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(\frac{1}{2}\right)}{\Gamma(1)} = \pi$
The Incomplete Beta Function
The integral that defines the beta function can be generalized by changing the interval of integration from $(0, 1)$ to $(0, x)$ where $x \in [0, 1]$.
The incomplete beta function is defined as follows $B(x; a, b) = \int_0^x u^{a-1} (1 - u)^{b-1} du, \quad x \in (0, 1); \; a, \, b \in (0, \infty)$
Of course, the ordinary (complete) beta function is $B(a, b) = B(1; a, b)$ for $a, \, b \in (0, \infty)$.
The Standard Beta Distribution
Distribution Functions
The beta distributions are a family of continuous distributions on the interval $(0, 1)$.
The (standard) beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ has probability density function $f$ given by $f(x) = \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b-1}, \quad x \in (0, 1)$
Of course, the beta function is simply the normalizing constant, so it's clear that $f$ is a valid probability density function. If $a \ge 1$, $f$ is defined at 0, and if $b \ge 1$, $f$ is defined at 1. In these cases, it's customary to extend the domain of $f$ to these endpoints. The beta distribution is useful for modeling random probabilities and proportions, particularly in the context of Bayesian analysis. The distribution has just two parameters and yet a rich variety of shapes (so in particular, both parameters are shape parameters). Qualitatively, the first order properties of $f$ depend on whether each parameter is less than, equal to, or greater than 1.
For $a, \, b \in (0, \infty)$ with $a + b \ne 2$, define $x_0 = \frac{a - 1}{a + b - 2}$
1. If $0 \lt a \lt 1$ and $0 \lt b \lt 1$, $f$ decreases and then increases with minimum value at $x_0$ and with $f(x) \to \infty$ as $x \downarrow 0$ and as $x \uparrow 1$.
2. If $a = 1$ and $b = 1$, $f$ is constant.
3. If $0 \lt a \lt 1$ and $b \ge 1$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
4. If $a \ge 1$ and $0 \lt b \lt 1$, $f$ is increasing with $f(x) \to \infty$ as $x \uparrow 1$.
5. If $a = 1$ and $b \gt 1$, $f$ is decreasing with mode at $x = 0$.
6. If $a \gt 1$ and $b = 1$, $f$ is increasing with mode at $x = 1$.
7. If $a \gt 1$ and $b \gt 1$, $f$ increases and then decreases with mode at $x_0$.
Proof
These results follow from standard calculus. The first derivative is $f^\prime(x) = \frac{1}{B(a, b)} x^{a - 2}(1 - x)^{b - 2} [(a - 1) - (a + b - 2) x], \quad 0 \lt x \lt 1$
From part (b), note that the special case $a = 1$ and $b = 1$ gives the continuous uniform distribution on the interval $(0, 1)$ (the standard uniform distribution). Note also that when $a \lt 1$ or $b \lt 1$, the probability density function is unbounded, and hence the distribution has no mode. On the other hand, if $a \ge 1$, $b \ge 1$, and one of the inequalites is strict, the distribution has a unique mode at $x_0$. The second order properties are more complicated.
For $a, \, b \in (0, \infty)$ with $a + b \notin \{2, 3\}$ and $(a - 1)(b - 1)(a + b - 3) \ge 0$, define \begin{align} x_1 &= \frac{(a - 1)(a + b - 3) - \sqrt{(a - 1)(b - 1)(a + b - 3)}}{(a + b - 3)(a + b - 2)}\ x_2 &= \frac{(a - 1)(a + b - 3) + \sqrt{(a - 1)(b - 1)(a + b - 3)}}{(a + b - 3)(a + b - 2)} \end{align} For $a \lt 1$ and $a + b = 2$ or for $b \lt 1$ and $a + b = 2$, define $x_1 = x_2 = 1 - a / 2$.
1. If $a \le 1$ and $b \le 1$, or if $a \le 1$ and $b \ge 2$, or if $a \ge 2$ and $b \le 1$, $f$ is concave upward.
2. If $a \le 1$ and $1 \lt b \lt 2$, $f$ is concave upward and then downward with inflection point at $x_1$.
3. If $1 \lt a \lt 2$ and $b \le 1$, $f$ is concave downward and then upward with inflection point at $x_2$.
4. If $1 \lt a \le 2$ and $1 \lt b \le 2$, $f$ is concave downward.
5. If $1 \lt a \le 2$ and $b \gt 2$, $f$ is concave downward and then upward with inflection point at $x_2$.
6. If $a \gt 2$ and $1 \lt b \le 2$, $f$ is concave upward and then downward with inflection point at $x_1$.
7. If $a \gt 2$ and $b \gt 2$, $f$ is concave upward, then downward, then upward again, with inflection points at $x_1$ and $x_2$.
Proof
These results follow from standard (but very tedious) calculus. The second derivative is $f^{\prime\prime}(x) = \frac{1}{B(a, b)} x^{a - 3}(1 - x)^{b - 3} \left[(a + b - 2)(a + b - 3) x^2 - 2 (a - 1)(a + b - 3) x + (a - 1)(a - 2)\right]$
In the special distribution simulator, select the beta distribution. Vary the parameters and note the shape of the beta density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the true density function.
The special case $a = \frac{1}{2}$, $b = \frac{1}{2}$ is the arcsine distribution, with probability density function given by $f(x) = \frac{1}{\pi \, \sqrt{x (1 - x)}}, \quad x \in (0, 1)$ This distribution is important in a number of applications, and so the arcsine distribution is studied in a separate section.
The beta distribution function $F$ can be easily expressed in terms of the incomplete beta function. As usual $a$ denotes the left parameter and $b$ the right parameter.
The beta distribution function $F$ with parameters $a, \, b \in (0, \infty)$ is given by $F(x) = \frac{B(x; a, b)}{B(a, b)}, \quad x \in (0, 1)$
The distribution function $F$ is sometimes known as the regularized incomplete beta function. In some special cases, the distribution function $F$ and its inverse, the quantile function $F^{-1}$, can be computed in closed form, without resorting to special functions.
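As a numerical check (a sketch assuming SciPy; `scipy.special.betainc` computes the regularized incomplete beta function with argument order `(a, b, x)`), the two expressions agree:

```python
from scipy.special import betainc
from scipy.stats import beta

a, b, x = 2.5, 1.5, 0.3
print(betainc(a, b, x))   # regularized incomplete beta: B(x; a, b) / B(a, b)
print(beta.cdf(x, a, b))  # the distribution function F(x); same value
```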
If $a \in (0, \infty)$ and $b = 1$ then
1. $F(x) = x^a$ for $x \in (0, 1)$
2. $F^{-1}(p) = p^{1/a}$ for $p \in (0, 1)$
If $a = 1$ and $b \in (0, \infty)$ then
1. $F(x) = 1 - (1 - x)^b$ for $x \in (0, 1)$
2. $F^{-1}(p) = 1 - (1 - p)^{1/b}$ for $p \in (0, 1)$
If $a = b = \frac{1}{2}$ (the arcsine distribution) then
1. $F(x) = \frac{2}{\pi} \arcsin\left(\sqrt{x}\right)$ for $x \in (0, 1)$
2. $F^{-1}(p) = \sin^2\left(\frac{\pi}{2} p\right)$ for $p \in (0, 1)$
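The three closed-form cases above are easy to check numerically; here is a minimal sketch assuming SciPy, where `beta.cdf` and `beta.ppf` are the distribution and quantile functions.

```python
import numpy as np
from scipy.stats import beta

x, p = 0.3, 0.7

# Case b = 1: F(x) = x^a and F^{-1}(p) = p^(1/a)
a = 2.7
assert np.isclose(beta.cdf(x, a, 1), x ** a)
assert np.isclose(beta.ppf(p, a, 1), p ** (1 / a))

# Case a = 1: F(x) = 1 - (1 - x)^b and F^{-1}(p) = 1 - (1 - p)^(1/b)
b = 3.2
assert np.isclose(beta.cdf(x, 1, b), 1 - (1 - x) ** b)
assert np.isclose(beta.ppf(p, 1, b), 1 - (1 - p) ** (1 / b))

# Arcsine case a = b = 1/2
assert np.isclose(beta.cdf(x, 0.5, 0.5), (2 / np.pi) * np.arcsin(np.sqrt(x)))
assert np.isclose(beta.ppf(p, 0.5, 0.5), np.sin(np.pi * p / 2) ** 2)
```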
There is an interesting relationship between the distribution functions of the beta distribution and the binomial distribution, when the beta parameters are positive integers. To state the relationship we need to embellish our notation to indicate the dependence on the parameters. Thus, let $F_{a, b}$ denote the beta distribution function with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$, and let $G_{n,p}$ denote the binomial distribution function with trial parameter $n \in \N_+$ and success parameter $p \in (0, 1)$.
If $j, \, k \in \N_+$ and $x \in (0, 1)$ then $F_{j,k}(x) = G_{j + k - 1, 1 - x}(k - 1)$
Proof
By definition $F_{j,k}(x) = \frac{1}{B(j,k)} \int_0^x t^{j-1} (1 - t)^{k-1} dt$ Integrate by parts with $u = (1 - t)^{k-1}$ and $dv = t^{j-1} dt$, so that $du = -(k - 1) (1 - t)^{k-2} dt$ and $v = t^j / j$. The result is $F_{j,k}(x) = \frac{1}{j B(j, k)} (1 - x)^{k-1} x^j + \frac{k-1}{j B(j, k)} \int_0^x t^j (1 - t)^{k-2} dt$ But by the property of the beta function above, $B(j, k) = (j - 1)! (k - 1)! \big/(j + k - 1)!$. Hence $1 \big/ j B(j, k) = \binom{j + k - 1}{k - 1}$ and $(k - 1) \big/ j B(j, k) = 1 \big/ B(j + 1, k - 1)$. Thus, the last displayed equation can be rewritten as $F_{j,k}(x) = \binom{j + k - 1}{k - 1} (1 - x)^{k-1} x^j + F_{j + 1, k - 1}(x)$ Recall from the special case above that $F_{j + k - 1, 1}(x) = x^{j + k - 1}$. Iterating the last displayed equation gives the result.
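The identity is also easy to verify numerically (a sketch assuming SciPy, with `beta.cdf` playing the role of $F_{j,k}$ and `binom.cdf` the role of $G_{n,p}$):

```python
import numpy as np
from scipy.stats import beta, binom

j, k, x = 3, 5, 0.4
lhs = beta.cdf(x, j, k)                   # F_{j,k}(x)
rhs = binom.cdf(k - 1, j + k - 1, 1 - x)  # G_{j+k-1, 1-x}(k - 1)
print(np.isclose(lhs, rhs))               # True
```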
In the special distribution calculator, select the beta distribution. Vary the parameters and note the shape of the density function and the distribution function. In each of the following cases, find the median, the first and third quartiles, and the interquartile range. Sketch the boxplot.
1. $a = 1$, $b = 1$
2. $a = 1$, $b = 3$
3. $a = 3$, $b = 1$
4. $a = 2$, $b = 4$
5. $a = 4$, $b = 2$
6. $a = 4$, $b = 4$
Moments
The moments of the beta distribution are easy to express in terms of the beta function. As before, suppose that $X$ has the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$.
If $k \in [0, \infty)$ then $\E\left(X^k\right) = \frac{B(a + k, b)}{B(a, b)}$ In particular, if $k \in \N$ then $\E\left(X^k\right) = \frac{a^{[k]}}{(a + b)^{[k]}}$
Proof
Note that $\E\left(X^k\right) = \int_0^1 x^k \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b - 1} dx = \frac{1}{B(a, b)} \int_0^1 x^{a + k - 1} (1 - x)^{b - 1} dx = \frac{B(a + k, b)}{B(a, b)}$ If $k \in \N$, the formula simplifies by the property of the beta function above.
From the general formula for the moments, it's straightforward to compute the mean, variance, skewness, and kurtosis.
The mean and variance of $X$ are \begin{align} \E(X) &= \frac{a}{a + b} \ \var(X) &= \frac{a b}{(a + b)^2 (a + b + 1)} \end{align}
Proof
The formulas for the mean and variance follow from the formula for the moments and the computational formula $\var(X) = \E(X^2) - [\E(X)]^2$
Note that the variance depends on the parameters $a$ and $b$ only through the product $a b$ and the sum $a + b$.
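The mean and variance formulas can also be checked by simulation; the following sketch assumes NumPy, whose random generator provides beta samples directly.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 4.0, 2.0
sample = rng.beta(a, b, size=100_000)

print(sample.mean(), a / (a + b))                          # both ≈ 0.667
print(sample.var(), a * b / ((a + b) ** 2 * (a + b + 1)))  # both ≈ 0.0317
```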
Open the special distribution simulator and select the beta distribution. Vary the parameters and note the size and location of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are \begin{align} \skw(X) &= \frac{2 (b - a) \sqrt{a + b + 1}}{(a + b + 2) \sqrt{a b}} \ \kur(X) &= \frac{3 (a + b + 1) \left[2 (a + b)^2 + a b (a + b - 6)\right]}{a b (a + b + 2) (a + b + 3)} \end{align}
Proof
These results follow from the computational formulas that give the skewness and kurtosis in terms of $\E(X^k)$ for $k \in \{1, 2, 3, 4\}$, and the formula for the moments above.
In particular, note that the distribution is positively skewed if $a \lt b$, unskewed if $a = b$ (the distribution is symmetric about $x = \frac{1}{2}$ in this case) and negatively skewed if $a \gt b$.
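The skewness and kurtosis formulas can be checked against SciPy. Note that `beta.stats` reports excess kurtosis (ordinary kurtosis minus 3), so the comparison below subtracts 3; this is a sketch assuming SciPy.

```python
import numpy as np
from scipy.stats import beta

a, b = 2.0, 5.0
skew, ex_kurt = beta.stats(a, b, moments='sk')

skw_formula = 2 * (b - a) * np.sqrt(a + b + 1) / ((a + b + 2) * np.sqrt(a * b))
kur_formula = (3 * (a + b + 1) * (2 * (a + b) ** 2 + a * b * (a + b - 6))
               / (a * b * (a + b + 2) * (a + b + 3)))

print(np.isclose(skew, skw_formula))           # True
print(np.isclose(ex_kurt, kur_formula - 3.0))  # True
```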
Open the special distribution simulator and select the beta distribution. Vary the parameters and note the shape of the probability density function in light of the previous result on skewness. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Related Distributions
The beta distribution is related to a number of other special distributions.
If $X$ has the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ then $Y = 1 - X$ has the beta distribution with left parameter $b$ and right parameter $a$.
Proof
This follows from the standard change of variables formula. If $f$ and $g$ denote the PDFs of $X$ and $Y$ respectively, then $g(y) = f(1 - y) = \frac{1}{B(a, b)} (1 - y)^{a - 1} y^{b - 1} = \frac{1}{B(b, a)} y^{b - 1} (1 - y)^{a - 1}, \quad y \in (0, 1)$
The beta distribution with right parameter 1 has a reciprocal relationship with the Pareto distribution.
Suppose that $a \in (0, \infty)$.
1. If $X$ has the beta distribution with left parameter $a$ and right parameter 1 then $Y = 1 / X$ has the Pareto distribution with shape parameter $a$.
2. If $Y$ has the Pareto distribution with shape parameter $a$ then $X = 1 / Y$ has the beta distribution with left parameter $a$ and right parameter 1.
Proof
The two results are equivalent. In (a), suppose that $X$ has the beta distribution with parameters $a$ and 1. The transformation $y = 1 / x$ maps $(0, 1)$ one-to-one onto $(0, \infty)$. The inverse is $x = 1 / y$ with $dx/dy = -1/y^2$. Recall also that $B(a, 1) = 1 / a$. By the change of variables formula, the PDF $g$ of $Y = 1 / X$ is given by $g(y) = f\left(\frac{1}{y}\right) \frac{1}{y^2} = a \left(\frac{1}{y}\right)^{a-1} \frac{1}{y^2} = \frac{a}{y^{a+1}}, \quad y \in (0, \infty)$ We recognize $g$ as the PDF of the Pareto distribution with shape parameter $a$.
The following result gives a connection between the beta distribution and the gamma distribution.
Suppose that $X$ has the gamma distribution with shape parameter $a \in (0, \infty)$ and rate parameter $r \in (0, \infty)$, $Y$ has the gamma distribution with shape parameter $b \in (0, \infty)$ and rate parameter $r$, and that $X$ and $Y$ are independent. Then $V = X \big/ (X + Y)$ has the beta distribution with left parameter $a$ and right parameter $b$.
Proof
Let $U = X + Y$ and $V = X \big/ (X + Y)$. We will actually prove stronger results: $U$ and $V$ are independent, $U$ has the gamma distribution with shape parameter $a + b$ and rate parameter $r$, and $V$ has the beta distribution with parameters $a$ and $b$. First note that $(X, Y)$ has joint PDF $f$ given by $f(x, y) = \frac{r^a}{\Gamma(a)} x^{a-1} e^{-r x} \frac{r^b}{\Gamma(b)} y^{b-1} e^{-r y} = \frac{r^{a+b}}{\Gamma(a) \Gamma(b)} x^{a-1} y^{b-1} e^{-r(x + y)}; \quad x, \, y \in (0, \infty)$ The transformation $u = x + y$ and $v = x \big/ (x + y)$ maps $(0, \infty) \times (0, \infty)$ one-to-one onto $(0, \infty) \times (0, 1)$. The inverse is $x = u v$, $y = u(1 - v)$ and the absolute value of the Jacobian is $\left|\det \frac{\partial(x, y)}{\partial(u, v)}\right| = u$ Hence by the multivariate change of variables theorem, the PDF $g$ of $(U, V)$ is given by \begin{align} g(u, v) & = f[u v, u(1 - v)] u = \frac{r^{a+b}}{\Gamma(a) \Gamma(b)} (u v)^{a-1} [u(1 - v)]^{b-1} e^{-ru} u \ & = \frac{r^{a+b}}{\Gamma(a) \Gamma(b)} u^{a+b-1} e^{-ru} v^{a-1} (1 - v)^{b-1} \ & = \frac{r^{a+b}}{\Gamma(a + b)} u^{a+b-1} e^{-ru} \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} v^{a-1} (1 - v)^{b-1}; \quad u \in (0, \infty), v \in (0, 1) \end{align} The results now follow from the factorization theorem. The factor in $u$ is the gamma PDF with shape parameter $a + b$ and rate parameter $r$ while the factor in $v$ is the beta PDF with parameters $a$ and $b$.
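This connection also gives a convenient simulation check; the sketch below (assuming NumPy and SciPy) draws independent gamma variables, forms $V = X / (X + Y)$, and runs a Kolmogorov-Smirnov test against the beta distribution.

```python
import numpy as np
from scipy.stats import beta, kstest

rng = np.random.default_rng(2)
a, b, r = 3.0, 2.0, 1.5

X = rng.gamma(shape=a, scale=1 / r, size=50_000)  # gamma with rate r
Y = rng.gamma(shape=b, scale=1 / r, size=50_000)
V = X / (X + Y)

print(kstest(V, beta(a, b).cdf))  # large p-value: consistent with Beta(a, b)
```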
The following result gives a connection between the beta distribution and the $F$ distribution. This connection is a minor variation of the previous result.
If $X$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator then $Y = \frac{(n / d) X}{1 + (n / d)X}$ has the beta distribution with left parameter $a = n / 2$ and right parameter $b = d / 2$.
Proof
If $X$ has the $F$ distribution with $n \gt 0$ degrees of freedom in the numerator and $d \gt 0$ degrees of freedom in the denominator then $X$ can be written as $X = \frac{U / n}{V / d}$ where $U$ has the chi-square distribution with $n$ degrees of freedom, $V$ has the chi-square distribution with $d$ degrees of freedom, and $U$ and $V$ are independent. Hence $Y = \frac{(n / d) X}{1 + (n / d) X} = \frac{U / V}{1 + U / V} = \frac{U}{U + V}$ But the chi-square distribution is a special case of the gamma distribution. Specifically, $U$ has the gamma distribution with shape parameter $n / 2$ and rate parameter $1/2$, $V$ has the gamma distribution with shape parameter $d / 2$ and rate parameter $1/2$, and again $U$ and $V$ are independent. Hence by the previous result, $Y$ has the beta distribution with left parameter $n/2$ and right parameter $d/2$.
Our next result is that the beta distribution is a member of the general exponential family of distributions.
Suppose that $X$ has the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. Then the distribution is a two-parameter exponential family with natural parameters $a - 1$ and $b - 1$, and natural statistics $\ln(X)$ and $\ln(1 - X)$.
Proof
This follows from the definition of the general exponential distribution, since the PDF $f$ of $X$ can be written as $f(x) = \frac{1}{B(a, b)} \exp\left[(a - 1) \ln(x) + (b - 1) \ln(1 - x)\right], \quad x \in (0, 1)$
The beta distribution is also the distribution of the order statistics of a random sample from the standard uniform distribution.
Suppose $n \in \N_+$ and that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the standard uniform distribution. For $k \in \{1, 2, \ldots, n\}$, the $k$th order statistic $X_{(k)}$ has the beta distribution with left parameter $a = k$ and right parameter $b = n - k + 1$.
Proof
See the section on order statistics.
One of the most important properties of the beta distribution, and one of the main reasons for its wide use in statistics, is that it forms a conjugate family for the success probability in the binomial and negative binomial distributions.
Suppose that $P$ is a random probability having the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. Suppose also that $X$ is a random variable such that the conditional distribution of $X$ given $P = p \in (0, 1)$ is binomial with trial parameter $n \in \N_+$ and success parameter $p$. Then the conditional distribution of $P$ given $X = k$ is beta with left parameter $a + k$ and right parameter $b + n - k$.
Proof
The joint PDF $f$ of $(P, X)$ on $(0, 1) \times \{0, 1, \ldots, n\}$ is given by $f(p, k) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} \binom{n}{k} p^k (1 - p)^{n-k} = \frac{1}{B(a, b)} \binom{n}{k} p^{a + k - 1} (1 - p)^{b + n - k - 1}$ The conditional PDF of $P$ given $X = k$ is simply the normalized version of the function $p \mapsto f(p, k)$. We can tell from the functional form that this distribution is beta with the parameters given in the theorem.
Suppose again that $P$ is a random probability having the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. Suppose also that $N$ is a random variable such that the conditional distribution of $N$ given $P = p \in (0, 1)$ is negative binomial with stopping parameter $k \in \N_+$ and success parameter $p$. Then the conditional distribution of $P$ given $N = n$ is beta with left parameter $a + k$ and right parameter $b + n - k$.
Proof
The joint PDF $f$ of $(P, N)$ on $(0, 1) \times \{k, k + 1, \ldots\}$ is given by $f(p, n) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} \binom{n - 1}{k - 1} p^k (1 - p)^{n-k} = \frac{1}{B(a, b)} \binom{n - 1}{k - 1} p^{a + k - 1} (1 - p)^{b + n - k - 1}$ The conditional PDF of $P$ given $N = n$ is simply the normalized version of the function $p \mapsto f(p, n)$. We can tell from the functional form that this distribution is beta with the parameters given in the theorem.
In both cases, note that in the posterior distribution of $P$, the left parameter is increased by the number of successes and the right parameter by the number of failures. For more on this, see the section on Bayesian estimation in the chapter on point estimation.
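The binomial update is trivial to carry out numerically. Here is a minimal sketch (the parameter values are hypothetical, chosen only for illustration):

```python
# Prior: P ~ Beta(a, b); data: k successes in n binomial trials
a, b = 2.0, 3.0
n, k = 20, 13

# Posterior: Beta(a + k, b + n - k); the left parameter increases by the
# number of successes, the right parameter by the number of failures
a_post, b_post = a + k, b + (n - k)
print(a_post, b_post)              # 15.0, 10.0
print(a_post / (a_post + b_post))  # posterior mean of P: 0.6
```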
The General Beta Distribution
The beta distribution can be easily generalized from the support interval $(0, 1)$ to an arbitrary bounded interval using a linear transformation. Thus, this generalization is simply the location-scale family associated with the standard beta distribution.
Suppose that $Z$ has the standard beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. For $c \in \R$ and $d \in (0, \infty)$, the random variable $X = c + d Z$ has the beta distribution with left parameter $a$, right parameter $b$, location parameter $c$ and scale parameter $d$.
For the remainder of this discussion, suppose that $X$ has the distribution in the definition above.
$X$ has probability density function
$f(x) = \frac{1}{B(a, b) d^{a + b - 1}} (x - c)^{a - 1} (c + d - x)^{b - 1}, \quad x \in (c, c + d)$
Proof
This follows from a standard result for location-scale families. If $g$ denotes the standard beta PDF of $Z$, then $X$ has PDF $f$ given by $f(x) = \frac{1}{d} g\left(\frac{x - c}{d}\right), \quad x \in (c, c + d)$
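In SciPy, the general beta distribution corresponds to the `loc` and `scale` arguments of `scipy.stats.beta`, with `loc` playing the role of $c$ and `scale` the role of $d$; the sketch below checks the density formula.

```python
import numpy as np
from scipy.stats import beta
from scipy.special import beta as beta_fn

a, b, c, d = 2.0, 3.0, 1.0, 4.0  # support interval (c, c + d) = (1, 5)
x = np.linspace(1.5, 4.5, 5)

f_formula = ((x - c) ** (a - 1) * (c + d - x) ** (b - 1)
             / (beta_fn(a, b) * d ** (a + b - 1)))
print(np.allclose(f_formula, beta.pdf(x, a, b, loc=c, scale=d)))  # True
```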
Most of the results in the previous sections have simple extensions to the general beta distribution.
The mean and variance of $X$ are
1. $\E(X) = c + d \frac{a}{a + b}$
2. $\var(X) = d^2 \frac{a b}{(a + b)^2 (a + b + 1)}$
Proof
This follows from the standard mean and variance and basic properties of expected value and variance.
1. $\E(X) = c + d \E(Z)$
2. $\var(X) = d^2 \var(Z)$
Recall that skewness and kurtosis are defined in terms of standard scores, and hence are unchanged under location-scale transformations. Hence the skewness and kurtosis of $X$ are just as for the standard beta distribution.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
The beta prime distribution is the distribution of the odds ratio associated with a random variable with the beta distribution. Since variables with beta distributions are often used to model random probabilities and proportions, the corresponding odds ratios occur naturally as well.
Definition
Suppose that $U$ has the beta distribution with shape parameters $a, \, b \in (0, \infty)$. Random variable $X = U \big/ (1 - U)$ has the beta prime distribution with shape parameters $a$ and $b$.
The special case $a = b = 1$ is known as the standard beta prime distribution. Since $U$ has a continuous distribution on the interval $(0, 1)$, random variable $X$ has a continuous distribution on the interval $(0, \infty)$.
Distribution Functions
Suppose that $X$ has the beta prime distribution with shape parameters $a, \, b \in (0, \infty)$, and as usual, let $B$ denote the beta function.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{B(a, b)} \frac{x^{a - 1}}{(1 + x)^{a + b}}, \quad x \in (0, \infty)$
Proof
First, recall that the beta PDF $g$ with parameters $a$ and $b$ is $g(u) = \frac{1}{B(a, b)} u^{a-1} (1 - u)^{b-1}, \quad u \in (0, 1)$ The transformation $x = u \big/ (1 - u)$ maps $(0, 1)$ onto $(0, \infty)$ and is increasing. The inverse transformation is $u = x \big/ (x + 1)$, and $1 - u = 1 \big/ (x + 1)$ and $du / dx = 1 \big/ (x + 1)^2$. Thus, by the change of variables formula, $f(x) = g(u) \frac{du}{dx} = \frac{1}{B(a, b)} \left(\frac{x}{x+1}\right)^{a-1} \left(\frac{1}{x + 1}\right)^{b-1} \frac{1}{(x + 1)^2} = \frac{1}{B(a, b)} \frac{x^{a-1}}{(x + 1)^{a + b}}, \quad x \in (0, \infty)$
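SciPy implements this distribution as `scipy.stats.betaprime`; the sketch below (assuming NumPy and SciPy) simulates the odds ratio of a beta variable and tests the sample against that implementation.

```python
import numpy as np
from scipy.stats import betaprime, kstest

rng = np.random.default_rng(3)
a, b = 2.0, 3.0

U = rng.beta(a, b, size=50_000)
X = U / (1 - U)  # the odds ratio of a beta variable

print(kstest(X, betaprime(a, b).cdf))  # large p-value: consistent fit
```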
If $a \ge 1$, the probability density function is defined at $x = 0$, so in this case, it's customary to add this endpoint to the domain. In particular, for the standard beta prime distribution, $f(x) = \frac{1}{(1 + x)^2}, \quad x \in [0, \infty)$ Qualitatively, the first order properties of the probability density function $f$ depend only on $a$, and in particular on whether $a$ is less than, equal to, or greater than 1.
The probability density function $f$ satisfies the following properties:
1. If $0 \lt a \lt 1$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $a = 1$, $f$ is decreasing with mode at $x = 0$.
3. If $a \gt 1$, $f$ increases and then decreases with mode at $x = (a - 1) \big/ (b + 1)$.
Proof
These properties follow from standard calculus. The first derivative of $f$ is $f^\prime(x) = \frac{1}{B(a, b)} \frac{x^{a-2}}{(1 + x)^{a + b + 1}} [(a - 1) - x(b + 1)], \quad x \in (0, \infty)$
Qualitatively, the second order properties of $f$ also depend only on $a$, with transitions at $a = 1$ and $a = 2$.
For $a \gt 1$, define \begin{align} x_1 & = \frac{(a - 1)(b + 2) - \sqrt{(a - 1)(b + 2)(a + b)}}{(b + 1)(b + 2)} \ x_2 & = \frac{(a - 1)(b + 2) + \sqrt{(a - 1)(b + 2)(a + b)}}{(b + 1)(b + 2)} \end{align} The probability density function $f$ satisfies the following properties:
1. If $0 \lt a \le 1$, $f$ is concave upward.
2. If $1 \lt a \le 2$, $f$ is concave downward and then upward, with inflection point at $x_2$.
3. If $a \gt 2$, $f$ is concave upward, then downward, then upward again, with inflection points at $x_1$ and $x_2$.
Proof
These results follow from standard calculus. The second derivative of $f$ is $f^{\prime\prime}(x) = \frac{1}{B(a, b)} \frac{x^{a-3}}{(1 + x)^{a + b + 2}}\left[(a - 1)(a - 2) - 2 (a - 1)(b + 2) x + (b + 1)(b + 2)x^2\right], \quad x \in (0, \infty)$
Open the Special Distribution Simulator and select the beta prime distribution. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Because of the definition of the beta prime variable, the distribution function of $X$ has a simple expression in terms of the beta distribution function with the same parameters, which in turn is the regularized incomplete beta function. So let $G$ denote the distribution function of the beta distribution with parameters $a, \, b \in (0, \infty)$, and recall that $G(x) = \frac{B(x; a, b)}{B(a, b)}, \quad x \in (0, 1)$
$X$ has distribution function $F$ given by $F(x) = G\left(\frac{x}{x + 1}\right), \quad x \in [0, \infty)$
Proof
As noted in the proof of the formula for the PDF, $x = u \big/ (1 - u)$ is strictly increasing with inverse $u = x \big/ (x + 1)$. Hence $F(x) = \P(X \le x) = \P\left(\frac{U}{1 - U} \le x\right) = \P\left(U \le \frac{x}{x + 1}\right) = G\left(\frac{x}{x + 1}\right), \quad x \in [0, \infty)$
Similarly, the quantile function of $X$ has a simple expression in terms of the beta quantile function $G^{-1}$ with the same parameters.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = \frac{G^{-1}(p)}{1 - G^{-1}(p)}, \quad p \in [0, 1)$
Proof
This follows from the result for the CDF by solving $p = F(x) = G\left(\frac{x}{x+1}\right)$ for $x$ in terms of $p$.
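Both relationships are easy to confirm numerically (a sketch assuming SciPy, with `ppf` denoting the quantile function):

```python
import numpy as np
from scipy.stats import beta, betaprime

a, b, x, p = 2.0, 3.0, 1.5, 0.75

# F(x) = G(x / (x + 1)), where G is the beta CDF with the same parameters
print(np.isclose(betaprime.cdf(x, a, b), beta.cdf(x / (x + 1), a, b)))

# F^{-1}(p) = G^{-1}(p) / (1 - G^{-1}(p))
u = beta.ppf(p, a, b)
print(np.isclose(betaprime.ppf(p, a, b), u / (1 - u)))
```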
Open the Special Distribution Calculator and choose the beta prime distribution. Vary the parameters and note the shape of the distribution function. For selected values of the parameters, find the median and the first and third quartiles.
For certain values of the parameters, the distribution and quantile functions have simple, closed form expressions.
If $a \in (0, \infty)$ and $b = 1$ then
1. $F(x) = \left(\frac{x}{x + 1}\right)^a$ for $x \in [0, \infty)$
2. $F^{-1}(p) = \frac{p^{1/a}}{1 - p^{1/a}}$ for $p \in [0, 1)$
Proof
For $a \gt 0$ and $b = 1$, $G(u) = u^a$ for $u \in [0, 1]$ and $G^{-1}(p) = p^{1/a}$ for $p \in [0, 1]$.
If $a = 1$ and $b \in (0, \infty)$ then
1. $F(x) = 1 - \left(\frac{1}{x + 1}\right)^b$ for $x \in [0, \infty)$
2. $F^{-1}(p) = \frac{1 - (1 - p)^{1/b}}{(1 - p)^{1/b}}$ for $p \in [0, 1)$
Proof
For $a = 1$ and $b \gt 0$, $G(u) = 1 - (1 - u)^b$ for $u \in [0, 1]$ and $G^{-1}(p) = 1 - (1 - p)^{1/b}$ for $p \in [0, 1]$.
If $a = b = \frac{1}{2}$ then
1. $F(x) = \frac{2}{\pi} \arcsin\left(\sqrt{\frac{x}{x + 1}}\right)$ for $x \in [0, \infty)$
2. $F^{-1}(p) = \frac{\sin^2\left(\frac{\pi}{2} p\right)}{1 - \sin^2\left(\frac{\pi}{2} p\right)}$ for $p \in [0, 1)$
Proof
For $a = b = \frac{1}{2}$, $G(u) = \frac{2}{\pi} \arcsin\left(\sqrt{u}\right)$ for $u \in (0, 1)$ and $G^{-1}(p) = \sin^2\left(\frac{\pi}{2} p\right)$ for $p \in [0, 1]$.
When $a = b = \frac{1}{2}$, $X$ is the odds ratio for a variable with the standard arcsine distribution.
Moments
As before, $X$ denotes a random variable with the beta prime distribution, with parameters $a, \, b \in (0, \infty)$. The moments of $X$ have a simple expression in terms of the beta function.
If $t \in (-a, b)$ then $\E\left(X^t\right) = \frac{B(a + t, b - t)}{B(a, b)}$ If $t \in (-\infty, -a] \cup [b, \infty)$ then $\E(X^t) = \infty$.
Proof
Once again, let $g$ denote the beta PDF with parameters $a$ and $b$. With the transformation $x = u \big/ (1 - u)$, as in the proof of the PDF formula, we have $f(x) dx = g(u) du$. Hence $\int_0^\infty x^t f(x) dx = \int_0^1 \left(\frac{u}{1 - u}\right)^t g(u) du = \frac{1}{B(a, b)} \int_0^1 u^{a + t - 1} (1 - u)^{b - t - 1} du$ If $t \le -a$ the improper integral diverges to $\infty$ at 0. If $t \ge b$ the improper integral diverges to $\infty$ at 1. If $-a \lt t \lt b$ the integral is $B(a + t, b - t)$ by definition of the beta function.
Of course, we are usually most interested in the integer moments of $X$. Recall that for $x \in \R$ and $n \in \N$, the rising power of $x$ of order $n$ is $x^{[n]} = x (x + 1) \cdots (x + n - 1)$.
Suppose that $n \in \N$. If $n \lt b$ then $\E\left(X^n\right) = \prod_{k=1}^n \frac{a + k - 1}{b - k}$ If $n \ge b$ then $\E\left(X^n\right) = \infty$.
Proof
From the general moment result, $\E(X^n) = \frac{B(a + n, b - n)}{B(a, b)} = \frac{\Gamma(a + n) \Gamma(b - n)}{\Gamma(a + b)} \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} = \frac{\Gamma(a + n)}{\Gamma(a)} \frac{\Gamma(b - n)}{\Gamma(b)} = \frac{a^{[n]}}{(b - n)^{[n]}}$ by a basic property of the gamma function.
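The product formula is easy to check numerically; here is a sketch assuming SciPy, whose `moment` method integrates the density directly:

```python
import math
import numpy as np
from scipy.stats import betaprime

a, b, n = 2.0, 6.0, 3  # the moment is finite since n < b

moment_formula = math.prod((a + k - 1) / (b - k) for k in range(1, n + 1))
moment_scipy = betaprime.moment(n, a, b)
print(np.isclose(moment_formula, moment_scipy))  # True; both equal 0.4
```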
As a corollary, we have the mean and variance.
If $b \gt 1$ then $\E(X) = \frac{a}{b - 1}$
If $b \gt 2$ then $\var(X) = \frac{a (a + b - 1)}{(b - 1)^2 (b - 2)}$
Proof
This follows from the general moment result above and the computational formula $\var(X) = \E\left(X^2\right) - [\E(X)]^2$.
Open the Special Distribution Simulator and select the beta prime distribution. Vary the parameters and note the size and location of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Finally, the general moment result leads to the skewness and kurtosis of $X$.
If $b \gt 3$ then $\skw(X) = \frac{2 (2 a + b - 1)}{b - 3} \sqrt{\frac{b - 2}{a (a + b - 1)}}$
Proof
This follows from the usual computational formula for skewness in terms of the moments $\E(X^n)$ for $n \in \{1, 2, 3\}$ and the general moment result above.
In particular, the distribution is positively skewed for all $a \gt 0$ and $b \gt 3$.
If $b \gt 4$ then $\kur(X) = 3 + \frac{6 \left[a (a + b - 1)(5 b - 11) + (b - 1)^2 (b - 2)\right]}{a (a + b - 1)(b - 3)(b - 4)}$
Proof
This follows from the usual computational formula for kurtosis in terms of the moments $\E(X^n)$ for $n \in \{1, 2, 3, 4\}$ and the general moment result above.
Related Distributions
The most important connection is the one between the beta prime distribution and the beta distribution given in the definition. We repeat this for emphasis.
Suppose that $a, \, b \in (0, \infty)$.
1. If $U$ has the beta distribution with parameters $a$ and $b$, then $X = U \big/ (1 - U)$ has the beta prime distribution with parameters $a$ and $b$.
2. If $X$ has the beta prime distribution with parameters $a$ and $b$, then $U = X \big/ (X + 1)$ has the beta distribution with parameters $a$ and $b$.
The beta prime family is closed under the reciprocal transformation.
If $X$ has the beta prime distribution with parameters $a, \, b \in (0, \infty)$ then $1 / X$ has the beta prime distribution with parameters $b$ and $a$.
Proof
A direct proof using the change of variables formula is possible, of course, but a better proof uses a corresponding property of the beta distribution. By definition, we can take $X = U \big/ (1 - U)$ where $U$ has the beta distribution with parameters $a$ and $b$. But then $1/X = (1 - U) \big/ U$, and $1 - U$ has the beta distribution with parameters $b$ and $a$. By another application of the definition, $1/X$ has the beta prime distribution with parameters $b$ and $a$.
The beta prime distribution is closely related to the $F$ distribution by a simple scale transformation.
Connections with the $F$ distributions.
1. If $X$ has the beta prime distribution with parameters $a, \, b \in (0, \infty)$ then $Y = \frac{b}{a} X$ has the $F$ distribution with $2 a$ degrees of freedom in the numerator and $2 b$ degrees of freedom in the denominator.
2. If $Y$ has the $F$ distribution with $n \in (0, \infty)$ degrees of freedom in the numerator and $d \in (0, \infty)$ degrees of freedom in the denominator, then $X = \frac{n}{d} Y$ has the beta prime distribution with parameters $n/2$ and $d/2$.
Proof
Let $f$ denote the PDF of $X$ and $g$ the PDF of $Y$.
1. By the change of variables formula, $g(y) = \frac{a}{b} f\left(\frac{a}{b} y\right), \quad y \in (0, \infty)$ Substituting into the beta prime PDF shows that $Y$ has the appropriate $F$ distribution.
2. Again using the change of variables formula, $f(x) = \frac{d}{n} g\left(\frac{d}{n} x\right), \quad x \in (0, \infty)$ Substituting into the $F$ PDF shows that $X$ has the appropriate beta prime PDF.
The beta prime is the distribution of the ratio of independent variables with standard gamma distributions. (Recall that standard here means that the scale parameter is 1.)
Suppose that $Y$ and $Z$ are independent and have standard gamma distributions with shape parameters $a \in (0, \infty)$ and $b \in (0, \infty)$, respectively. Then $X = Y / Z$ has the beta prime distribution with parameters $a$ and $b$.
Proof
Of course, a direct proof can be constructed, but a better approach is to use the previous result. Thus suppose that $Y$ and $Z$ are as stated in the theorem. Then $2 Y$ and $2 Z$ are independent chi-square variables with $2 a$ and $2 b$ degrees of freedom, respectively. Hence $W = \frac{Y / 2a}{Z / 2b}$ has the $F$ distribution with $2 a$ degrees of freedom in the numerator and $2 b$ degrees of freedom in the denominator. By the previous result, $X = \frac{2 a}{2 b} W = \frac{Y}{Z}$ has the beta prime distribution with parameters $a$ and $b$.
The standard beta prime distribution is the same as the standard log-logistic distribution.
Proof
The PDF of the standard beta prime distribution is $f(x) = 1 \big/ (1 + x)^2$ for $x \in [0, \infty)$, which is the same as the PDF of the standard log-logistic distribution.
Finally, the beta prime distribution is a member of the general exponential family of distributions.
Suppose that $X$ has the beta prime distribution with parameters $a, \, b \in (0, \infty)$. Then $X$ has a two-parameter general exponential distribution with natural parameters $a - 1$ and $-(a + b)$ and natural statistics $\ln(X)$ and $\ln(1 + X)$.
Proof
This follows from the definition of the general exponential family, since the PDF can be written in the form $f(x) = \frac{1}{B(a, b)} \exp[(a - 1) \ln(x) - (a + b) \ln(1 + x)], \quad x \in (0, \infty)$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The arcsine distribution is important in the study of Brownian motion and prime numbers, among other applications.
The Standard Arcsine Distribution
Distribution Functions
The standard arcsine distribution is a continuous distribution on the interval $(0, 1)$ with probability density function $g$ given by $g(x) = \frac{1}{\pi \sqrt{x (1 - x)}}, \quad x \in (0, 1)$
Proof
There are a couple of ways to see that $g$ is a valid PDF. First, it's the beta PDF with parameters $a = b = \frac{1}{2}$: $g(x) = \frac{1}{B(1/2, 1/2)} x^{-1/2} (1 - x)^{-1/2}, \quad x \in (0, 1)$ since we recall that $B\left(\frac 1 2, \frac 1 2\right) = \pi$. A direct proof is also easy: The substitution $u = \sqrt{x}$, $x = u^2$, $dx = 2 u \, du$ gives $\int_0^1 \frac{1}{\pi \sqrt{x (1 - x)}} dx = \int_0^1 \frac{2}{\pi \sqrt{1 - u^2}} du = \frac{2}{\pi} \arcsin u \biggm\vert_0^1 = \frac{2}{\pi} \left(\frac{\pi}{2} - 0\right) = 1$
The occurrence of the arcsine function in the proof that $g$ is a probability density function explains the name.
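SciPy implements the standard arcsine distribution as `scipy.stats.arcsine`; here is a minimal sketch checking the density formula against it.

```python
import numpy as np
from scipy.stats import arcsine

x = np.linspace(0.05, 0.95, 5)
g_formula = 1 / (np.pi * np.sqrt(x * (1 - x)))
print(np.allclose(g_formula, arcsine.pdf(x)))  # True
```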
The standard arcsine probability density function $g$ satisfies the following properties:
1. $g$ is symmetric about $x = \frac{1}{2}$.
2. $g$ decreases and then increases with minimum value at $x = \frac{1}{2}$.
3. $g$ is concave upward.
4. $g(x) \to \infty$ as $x \downarrow 0$ and as $x \uparrow 1$.
Proof
1. Note that $g$ is a function of $x$ only through $x (1 - x)$.
2. This follows from standard calculus: $g^\prime(x) = \frac{2 x - 1}{2 \pi [x (1 - x)]^{3/2}}$
3. This also follows from standard calculus: $g^{\prime\prime}(x) = \frac{3 - 8 x + 8 x^2}{4 \pi [x (1 - x)]^{5/2}}$
4. The limits are clear.
In particular, the standard arcsine distribution is U-shaped and has no mode.
Open the Special Distribution Simulator and select the arcsine distribution. Keep the default parameter values and note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function has a simple expression in terms of the arcsine function, again justifying the name of the distribution.
The standard arcsine distribution function $G$ is given by $G(x) = \frac{2}{\pi} \arcsin\left(\sqrt{x}\right)$ for $x \in [0, 1]$.
Proof
Again, using the substitution $u = \sqrt{t}$, $t = u^2$, $dt = 2 u \, du$: $G(x) = \int_0^x \frac{1}{\pi \sqrt{t (1 - t)}} dt = \int_0^{\sqrt{x}} \frac{2}{\pi \sqrt{1 - u^2}} du = \frac{2}{\pi}\arcsin(u) \biggm\vert_0^{\sqrt{x}} = \frac{2}{\pi} \arcsin\left(\sqrt{x}\right)$
Not surprisingly, the quantile function has a simple expression in terms of the sine function.
The standard arcsine quantile function $G^{-1}$ is given by $G^{-1}(p) = \sin^2\left(\frac{\pi}{2} p\right)$ for $p \in [0, 1]$. In particular, the quartiles are
1. $q_1 = \sin^2\left(\frac{\pi}{8}\right) = \frac{1}{4}(2 - \sqrt{2}) \approx 0.1464$, the first quartile
2. $q_2 = \frac{1}{2}$, the median
3. $q_3 = \sin^2\left(\frac{3 \pi}{8}\right) = \frac{1}{4}(2 + \sqrt{2}) \approx 0.8536$, the third quartile
Proof
The formula for the quantile function follows from the distribution function by solving $p = G(x)$ for $x$ in terms of $p \in [0, 1]$.
Open the Special Distribution Calculator and select the arcsine distribution. Keep the default parameter values and note the shape of the distribution function. Compute selected values of the distribution function and the quantile function.
Moments
Suppose that random variable $Z$ has the standard arcsine distribution. First we give the mean and variance.
The mean and variance of $Z$ are
1. $\E(Z) = \frac{1}{2}$
2. $\var(Z) = \frac{1}{8}$
Proof
1. The mean is $\frac{1}{2}$ by symmetry.
2. Using the usual substitution $u = \sqrt{x}$, $x = u^2$, $dx = 2 u \, du$ and then the substitution $u = \sin \theta$, $du = \cos \theta \, d\theta$ gives $\E\left(Z^2\right) = \int_0^1 \frac{x^2}{\pi \sqrt{x (1 - x)}} dx = \int_0^1 \frac{2 u^4}{\pi \sqrt{1 - u^2}} du = \int_0^{\pi/2} \frac{2}{\pi} \sin^4(\theta) d\theta = \frac{2}{\pi} \frac{3 \pi}{16} = \frac{3}{8}$ Hence $\var(Z) = \E\left(Z^2\right) - [\E(Z)]^2 = \frac{3}{8} - \frac{1}{4} = \frac{1}{8}$.
Open the Special Distribution Simulator and select the arcsine distribution. Keep the default parameter values. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The general moments about 0 can be expressed as products.
For $n \in \N$, $\E\left(Z^n\right) = \prod_{j=0}^{n-1} \frac{2 j + 1}{2 j + 2}$
Proof
The same integral substitutions as before give $\E(Z^n) = \int_0^{\pi/2} \frac{2}{\pi} \sin^{2 n}(\theta) d\theta = \prod_{j=0}^{n-1} \frac{2 j + 1}{2 j + 2}$
Of course, the moments can be used to give a formula for the moment generating function, but this formula is not particularly helpful since it is not in closed form.
$Z$ has moment generating function $m$ given by $m(t) = \E\left(e^{t Z}\right) = \sum_{n=0}^\infty \left(\prod_{j=0}^{n-1} \frac{2 j + 1}{2 j + 2}\right) \frac{t^n}{n!}, \quad t \in \R$
Finally we give the skewness and kurtosis.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = \frac{3}{2}$
Proof
1. The skewness is 0 by the symmetry of the distribution.
2. The result for the kurtosis follows from the standard formula for kurtosis in terms of the moments: $\E(Z) = \frac{1}{2}$, $\E\left(Z^2\right) = \frac{3}{8}$, $\E\left(Z^3\right) = \frac{5}{16}$, and $\E\left(Z^4\right) = \frac{35}{128}$.
Related Distributions
As noted earlier, the standard arcsine distribution is a special case of the beta distribution.
The standard arcsine distribution is the beta distribution with left parameter $\frac{1}{2}$ and right parameter $\frac{1}{2}$.
Proof
The beta distribution with parameters $a = b = \frac{1}{2}$ has PDF $x \mapsto \frac{1}{B(1/2, 1/2)} x^{-1/2}(1 - x)^{-1/2}, \quad x \in (0, 1)$ But $B\left(\frac{1}{2}, \frac{1}{2}\right) = \pi$, so this is the standard arcsine PDF.
Since the quantile function is in closed form, the standard arcsine distribution can be simulated by the random quantile method.
Connections with the standard uniform distribution.
1. If $U$ has the standard uniform distribution (a random number) then $X = \sin^2\left(\frac{\pi}{2} U\right)$ has the standard arcsine distribution.
2. If $X$ has the standard arcsine distribution then $U = \frac{2}{\pi} \arcsin\left(\sqrt{X}\right)$ has the standard uniform distribution.
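Part 1 is the random quantile method in executable form; the sketch below (assuming NumPy and SciPy) simulates the distribution and checks the fit and the moments.

```python
import numpy as np
from scipy.stats import arcsine, kstest

rng = np.random.default_rng(4)
U = rng.random(50_000)          # standard uniform sample
X = np.sin(np.pi * U / 2) ** 2  # random quantile method

print(kstest(X, arcsine.cdf))   # large p-value: consistent fit
print(X.mean(), X.var())        # ≈ 1/2 and ≈ 1/8
```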
Open the random quantile simulator and select the arcsine distribution. Keep the default parameters. Run the experiment 1000 times and compare the empirical probability density function, mean, and standard deviation to their distributional counterparts. Note how the random quantiles simulate the distribution.
The following exercise illustrates the connection between the Brownian motion process and the standard arcsine distribution.
Open the Brownian motion simulator. Keep the default time parameter and select the last zero random variable. Note that this random variable has the standard arcsine distribution. Run the experiment 1000 times and compare the empirical probability density function, mean, and standard deviation to their distributional counterparts. Note how the last zero simulates the distribution.
The General Arcsine Distribution
The standard arcsine distribution is generalized by adding location and scale parameters.
Definition
If $Z$ has the standard arcsine distribution, and if $a \in \R$ and $w \in (0, \infty)$, then $X = a + w Z$ has the arcsine distribution with location parameter $a$ and scale parameter $w$.
So $X$ has a continuous distribution on the interval $(a, a + w)$.
Distribution Functions
Suppose that $X$ has the arcsine distribution with location parameter $a \in \R$ and scale parameter $w \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{\pi \sqrt{(x - a)(a + w - x)}}, \quad x \in (a, a + w)$
1. $f$ is symmetric about $a + \frac{1}{2} w$.
2. $f$ decreases and then increases with minimum value at $x = a + \frac{1}{2} w$.
3. $f$ is concave upward.
4. $f(x) \to \infty$ as $x \downarrow a$ and as $x \uparrow a + w$.
Proof
Recall that $f(x) = \frac{1}{w} g\left(\frac{x - a}{w}\right)$ where $g$ is the PDF of the standard arcsine distribution.
An alternate parameterization of the general arcsine distribution is by the endpoints of the support interval: the left endpoint (location parameter) $a$ and the right endpoint $b = a + w$.
Open the Special Distribution Simulator and select the arcsine distribution. Vary the location and scale parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the emprical density function to the probability density function.
Once again, the distribution function has a simple representation in terms of the arcsine function.
$X$ has distribution function $F$ given by $F(x) = \frac{2}{\pi} \arcsin\left(\sqrt{\frac{x - a}{w}}\right), \quad x \in [a, a + w]$
Proof
Recall that $F(x) = G[(x - a) / w]$ where $G$ is the CDF of the standard arcsine distribution.
As before, the quantile function has a simple representation in terms of the sine function.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = a + w \sin^2\left(\frac{\pi}{2} p\right)$ for $p \in [0, 1]$ In particular, the quartiles of $X$ are
1. $q_1 = a + w \sin^2\left(\frac{\pi}{8}\right) = a + \frac{1}{4}\left(2 - \sqrt{2}\right) w$, the first quartile
2. $q_2 = a + \frac{1}{2} w$, the median
3. $q_3 = a + w \sin^2\left(\frac{3 \pi}{8}\right) = a + \frac{1}{4}\left(2 + \sqrt{2}\right) w$, the third quartile
Proof
Recall that $F^{-1}(p) =a + w G^{-1}(p)$ where $G^{-1}$ is the quantile function of the standard arcsine distribution.
Open the Special Distribution Calculator and select the arcsine distribution. Vary the parameters and note the shape and location of the distribution function. For various values of the parameters, compute selected values of the distribution function and the quantile function.
Moments
Again, we assume that $X$ has the arcsine distribution with location parameter $a \in \R$ and scale parameter $w \in (0, \infty)$. First we give the mean and variance.
The mean and variance of $X$ are
1. $\E(X) = a + \frac{1}{2} w$
2. $\var(X) = \frac{1}{8} w^2$
Proof
These results follow from the representation $X = a + w Z$ and the results for the mean and variance of $Z$.
Open the Special Distribution Simulator and select the arcsine distribution. Vary the parameters and note the size and location of the mean$\pm$standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The moments of $X$ can be obtained from the moments of $Z$, but the results are messy, except when the location parameter is 0.
Suppose the location parameter $a = 0$. For $n \in \N$, $\E(X^n) = w^n \prod_{j=0}^{n-1} \frac{2 j + 1}{2 j + 2}$
Proof
This follows from the representation $X = w Z$ and the results for the moments of $Z$.
The moment generating function can be expressed as a series with product coefficients, and so is not particularly helpful.
$X$ has moment generating function $M$ given by $M(t) = \E\left(e^{t X}\right) = e^{a t} \sum_{n=0}^\infty \left(\prod_{j=0}^{n-1} \frac{2 j + 1}{2 j + 2}\right) \frac{w^n t^n}{n!}, \quad t \in \R$
Proof
Recall that $M(t) = e^{a t} m(w t)$ where $m$ is the moment generating function of $Z$.
Finally, the skewness and kurtosis are unchanged.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = \frac{3}{2}$
Proof
Recall that the skewness and kurtosis are defined in terms of the standard score of $X$ and hence are invariant under a location-scale transformation.
Related Distributions
By construction, the general arcsine distribution is a location-scale family, and so is closed under location-scale transformations.
If $X$ has the arcsine distribution with location parameter $a \in \R$ and scale parameter $w \in (0, \infty)$ and if $c \in \R$ and $d \in (0, \infty)$ then $c + d X$ has the arcsine distribution with location parameter $c + a d$ and scale parameter $d w$.
Proof
By definition we can take $X = a + w Z$ where $Z$ has the standard arcsine distribution. Hence $c + d X = (c + d a) + (d w) Z$.
Since the quantile function is in closed form, the arcsine distribution can be simulated by the random quantile method.
Suppose that $a \in \R$ and $w \in (0, \infty)$.
1. If $U$ has the standard uniform distribution (a random number) then $X = a + w \sin^2\left(\frac{\pi}{2} U\right)$ has the arcsine distribution with location parameter $a$ and scale parameter $w$.
2. If $X$ has the arcsine distribution with location parameter $a$ and scale parameter $w$ then $U = \frac{2}{\pi} \arcsin\left(\sqrt{\frac{X - a}{w}}\right)$ has the standard uniform distribution.
Open the random quantile simulator and select the arcsine distribution. Vary the parameters and note the location and shape of the probability density function. For selected parameter values, run the experiment 1000 times and compare the empirical probability density function, mean, and standard deviation to their distributional counterparts. Note how the random quantiles simulate the distribution.
The following exercise illustrates the connection between the Brownian motion process and the arcsine distribution.
Open the Brownian motion simulator and select the last zero random variable. Vary the time parameter $t$ and note that the last zero has the arcsine distribution on the interval $(0, t)$. Run the experiment 1000 times and compare the empirical probability density function, mean, and standard deviation to their distributional counterparts. Note how the last zero simulates the distribution.