$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\bs}{\boldsymbol}$
This section explores uniform distributions in an abstract setting. If you are a new student of probability, or are not familiar with measure theory, you may want to skip this section and read the sections on the uniform distribution on an interval and the discrete uniform distributions.
Basic Theory
Definition
Suppose that $(S, \mathscr S, \lambda)$ is a measure space. That is, $S$ is a set, $\mathscr S$ a $\sigma$-algebra of subsets of $S$, and $\lambda$ a positive measure on $\mathscr S$. Suppose also that $0 \lt \lambda(S) \lt \infty$, so that $\lambda$ is a finite, positive measure.
Random variable $X$ with values in $S$ has the uniform distribution on $S$ (with respect to $\lambda$) if $\P(X \in A) = \frac{\lambda(A)}{\lambda(S)}, \quad A \in \mathscr S$
Thus, the probability assigned to a set $A \in \mathscr S$ depends only on the size of $A$ (as measured by $\lambda$).
The most common special cases are as follows:
1. Discrete: The set $S$ is finite and non-empty, $\mathscr S$ is the $\sigma$-algebra of all subsets of $S$, and $\lambda = \#$ (counting measure).
2. Euclidean: For $n \in \N_+$, let $\mathscr R_n$ denote the $\sigma$-algebra of Borel measurable subsets of $\R^n$ and let $\lambda_n$ denote Lebesgue measure on $(\R^n, \mathscr R_n)$. In this setting, $S \in \mathscr R_n$ with $0 \lt \lambda_n(S) \lt \infty$, $\mathscr S = \{A \in \mathscr R_n: A \subseteq S\}$, and the measure is $\lambda_n$ restricted to $(S, \mathscr S)$.
In the Euclidean case, recall that $\lambda_1$ is length measure on $\R$, $\lambda_2$ is area measure on $\R^2$, $\lambda_3$ is volume measure on $\R^3$, and in general $\lambda_n$ is sometimes referred to as $n$-dimensional volume. Thus, $S \in \mathscr R_n$ is a set with positive, finite volume.
Properties
Suppose $(S, \mathscr S, \lambda)$ is a finite, positive measure space, as above, and that $X$ is uniformly distributed on $S$.
The probability density function $f$ of $X$ (with respect to $\lambda$) is $f(x) = \frac{1}{\lambda(S)}, \quad x \in S$
Proof
This follows directly from the definition of probability density function: $\int_A \frac 1 {\lambda(S)} \, d\lambda(x) = \frac{\lambda(A)}{\lambda(S)}, \quad A \in \mathscr S$
Thus, the defining property of the uniform distribution on a set is constant density on that set. Another basic property is that uniform distributions are preserved under conditioning.
Suppose that $R \in \mathscr S$ with $\lambda(R) \gt 0$. The conditional distribution of $X$ given $X \in R$ is uniform on $R$.
Proof
For $A \in \mathscr S$ with $A \subseteq R$, $\P(X \in A \mid X \in R) = \frac{\P(X \in A)}{\P(X \in R)} = \frac{\lambda(A)/\lambda(S)}{\lambda(R)/\lambda(S)} = \frac{\lambda(A)}{\lambda(R)}$
In the setting of the previous result, suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent variables, each uniformly distributed on $S$. Let $N = \min\{n \in \N_+: X_n \in R\}$. Then $N$ has the geometric distribution on $\N_+$ with success parameter $p = \P(X \in R)$. More importantly, the distribution of $X_N$ is the same as the conditional distribution of $X$ given $X \in R$, and hence is uniform on $R$. This is the basis of the rejection method of simulation. If we can simulate a uniform distribution on $S$, then we can simulate a uniform distribution on $R$.
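For concreteness, here is a minimal Python sketch of the rejection idea (the particular sets are illustrative only): the target set $R$ is the closed unit disk, the enclosing set $S$ is the square $[-1, 1]^2$, and Lebesgue measure $\lambda_2$ plays the role of $\lambda$. Points are drawn uniformly from $S$ until one lands in $R$; that point is uniformly distributed on $R$.

```python
import random

def uniform_on_disk():
    """Simulate a point uniformly distributed on the unit disk R by rejection:
    draw points uniformly from the enclosing square S = [-1, 1] x [-1, 1]
    and return the first one that lands in R."""
    while True:
        x = random.uniform(-1.0, 1.0)  # (x, y) uniform on S
        y = random.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:       # accept: the point is in R
            return x, y

# The number of candidate points needed is geometric on {1, 2, ...} with
# success parameter p = lambda(R) / lambda(S) = pi / 4.
sample = [uniform_on_disk() for _ in range(1000)]
```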
If $h$ is a real-valued function on $S$, then $\E[h(X)]$ is the average value of $h$ on $S$, as measured by $\lambda$:
If $h: S \to \R$ is integrable with respect to $\lambda$, then $\E[h(X)] = \frac{1}{\lambda(S)} \int_S h(x) \, d\lambda(x)$
Proof
This result follows from the change of variables theorem for expected value, since $\E[h(X)] = \int_S h(x) f(x) \, d\lambda(x) = \frac 1 {\lambda(S)} \int_S h(x) \, d\lambda(x)$
The entropy of the uniform distribution on $S$ depends only on the size of $S$, as measured by $\lambda$:
The entropy of $X$ is $H(X) = \ln[\lambda(S)]$.
Proof $H(X) = \E\{-\ln[f(X)]\} = \int_S -\ln\left(\frac{1}{\lambda(S)}\right) \frac{1}{\lambda(S)} \, d\lambda(x) = -\ln\left(\frac{1}{\lambda(S)}\right) = \ln[\lambda(S)]$
Product Spaces
Suppose now that $(S, \mathscr S, \lambda)$ and $(T, \mathscr T, \mu)$ are finite, positive measure spaces, so that $0 \lt \lambda(S) \lt \infty$ and $0 \lt \mu(T) \lt \infty$. Recall the product space $(S \times T, \mathscr S \otimes \mathscr T, \lambda \otimes \mu)$. The product $\sigma$-algebra $\mathscr S \otimes \mathscr T$ is the $\sigma$-algebra of subsets of $S \times T$ generated by product sets $A \times B$ where $A \in \mathscr S$ and $B \in \mathscr T$. The product measure $\lambda \otimes \mu$ is the unique positive measure on $(S \times T, \mathscr S \otimes \mathscr T)$ that satisfies $(\lambda \otimes \mu)(A \times B) = \lambda(A) \mu(B)$ for $A \in \mathscr S$ and $B \in \mathscr T$.
Suppose that $X$ and $Y$ are random variables with values in $S$ and $T$, respectively. Then $(X, Y)$ is uniformly distributed on $S \times T$ if and only if $X$ is uniformly distributed on $S$, $Y$ is uniformly distributed on $T$, and $X$ and $Y$ are independent.
Proof
Suppose first that $(X, Y)$ is uniformly distributed on $S \times T$. If $A \in \mathscr S$ and $B \in \mathscr T$ then $\P(X \in A, Y \in B) = \P[(X, Y) \in A \times B] = \frac{(\lambda \otimes \mu)(A \times B)}{(\lambda \otimes \mu)(S \times T)} = \frac{\lambda(A) \mu(B)}{\lambda(S) \mu(T)} = \frac{\lambda(A)}{\lambda(S)} \frac{\mu(B)}{\mu(T)}$ Taking $B = T$ in the displayed equation gives $\P(X \in A) = \lambda(A) \big/ \lambda(S)$ for $A \in \mathscr S$, so $X$ is uniformly distributed on $S$. Taking $A = S$ in the displayed equation gives $\P(Y \in B) = \mu(B) \big/ \mu(T)$ for $B \in \mathscr T$, so $Y$ is uniformly distributed on $T$. Returning to the displayed equation generally gives $\P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B)$ for $A \in \mathscr S$ and $B \in \mathscr T$, so $X$ and $Y$ are independent.
Conversely, suppose that $X$ is uniformly distributed on $S$, $Y$ is uniformly distributed on $T$, and $X$ and $Y$ are independent. Then for $A \in \mathscr S$ and $B \in \mathscr T$, $\P[(X, Y) \in A \times B] = \P(X \in A, Y \in B) = \P(X \in A) \P(Y \in B) = \frac{\lambda(A)}{\lambda(S)} \frac{\mu(B)}{\mu(T)} = \frac{\lambda(A) \mu(B)}{\lambda(S) \mu(T)} = \frac{(\lambda \otimes \mu)(A \times B)}{(\lambda \otimes \mu)(S \times T)}$ It then follows (see the section on existence and uniqueness of measures) that $\P[(X, Y) \in C] = (\lambda \otimes \mu)(C) / (\lambda \otimes \mu)(S \times T)$ for every $C \in \mathscr S \otimes \mathscr T$, so $(X, Y)$ is uniformly distributed on $S \times T$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\bs}{\boldsymbol}$
The continuous uniform distribution on an interval of $\R$ is one of the simplest of all probability distributions, but nonetheless very important. In particular, continuous uniform distributions are the basic tools for simulating other probability distributions. The uniform distribution corresponds to picking a point at random from the interval. The uniform distribution on an interval is a special case of the general uniform distribution with respect to a measure, in this case Lebesgue measure (length measure) on $\R$.
The Standard Uniform Distribution
Definition
The continuous uniform distribution on the interval $[0, 1]$ is known as the standard uniform distribution. Thus if $U$ has the standard uniform distribution then $\P(U \in A) = \lambda(A)$ for every (Borel measurable) subset $A$ of $[0, 1]$, where $\lambda$ is Lebesgue (length) measure.
A simulation of a random variable with the standard uniform distribution is known in computer science as a random number. All programming languages have functions for computing random numbers, as do calculators, spreadsheets, and mathematical and statistical software packages.
Distribution Functions
Suppose that $U$ has the standard uniform distribution. By definition, the probability density function is constant on $[0, 1]$.
$U$ has probability density function $g$ given by $g(u) = 1$ for $u \in [0, 1]$.
Since the density function is constant, the mode is not meaningful.
Open the Special Distribution Simulator and select the continuous uniform distribution. Keep the default parameter values. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function is simply the identity function on $[0, 1]$.
$U$ has distribution function $G$ given by $G(u) = u$ for $u \in [0, 1]$.
Proof
Note that $\P(U \le u) = \lambda[0, u] = u$ for $u \in [0, 1]$. Recall again that $\lambda$ is length measure.
The quantile function is the same as the distribution function.
$U$ has quantile function $G^{-1}$ given by $G^{-1}(p) = p$ for $p \in [0, 1]$. The quartiles are
1. $q_1 = \frac{1}{4}$, the first quartile
2. $q_2 = \frac{1}{2}$, the median
3. $q_3 = \frac{3}{4}$, the third quartile
Proof
$G^{-1}$ is the ordinary inverse of $G$ on the interval $[0, 1]$, which is $G$ itself since $G$ is the identity function.
Open the Special Distribution Calculator and select the continuous uniform distribution. Keep the default parameter values. Compute a few values of the distribution function and the quantile function.
Moments
Suppose again that $U$ has the standard uniform distribution. The moments (about 0) are simple.
For $n \in \N$, $\E\left(U^n\right) = \frac{1}{n + 1}$
Proof
Since the PDF is 1 on $[0, 1]$, $\E\left(U^n\right) = \int_0^1 u^n \, du = \frac{1}{n + 1}$
The mean and variance follow easily from the general moment formula.
The mean and variance of $U$ are
1. $\E(U) = \frac{1}{2}$
2. $\var(U) = \frac{1}{12}$
Open the Special Distribution Simulator and select the continuous uniform distribution. Keep the default parameter values. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
Next are the skewness and kurtosis.
The skewness and kurtosis of $U$ are
1. $\skw(U) = 0$
2. $\kur(U) = \frac{9}{5}$
Proof
1. This follows from the symmetry of the distribution about the mean $\frac{1}{2}$.
2. This follows from the usual formula for kurtosis in terms of the moments, or directly, since $\sigma^4 = \frac{1}{144}$ and $\E\left[\left(U - \frac{1}{2}\right)^4\right] = \int_0^1 \left(x - \frac{1}{2}\right)^4 dx = \frac{1}{80}$
Thus, the excess kurtosis is $\kur(U) - 3 = -\frac{6}{5}$
Finally, we give the moment generating function.
The moment generating function $m$ of $U$ is given by $m(0) = 1$ and $m(t) = \frac{e^t - 1}{t}, \quad t \in \R \setminus \{0\}$
Proof
Again, since the PDF is 1 on $[0, 1]$ $\E\left(e^{t U}\right) = \int_0^1 e^{t u} du = \frac{e^t - 1}{t}, \quad t \ne 0$ Trivially $m(0) = 1$.
Related Distributions
The standard uniform distribution is connected to every other probability distribution on $\R$ by means of the quantile function of the other distribution. When the quantile function has a simple closed form expression, this result forms the primary method of simulating the other distribution with a random number.
Suppose that $F$ is the distribution function for a probability distribution on $\R$, and that $F^{-1}$ is the corresponding quantile function. If $U$ has the standard uniform distribution, then $X = F^{-1}(U)$ has distribution function $F$.
Proof
A basic property of quantile functions is that $F^{-1}(p) \le x$ if and only if $p \le F(x)$ for $x \in \R$ and $p \in (0, 1)$. Hence from the distribution function of $U$, $\P(X \le x) = \P\left[F^{-1}(U) \le x\right] = \P[U \le F(x)] = F(x), \quad x \in \R$
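As a concrete illustration (a sketch only, with the exponential distribution chosen because its quantile function has a simple closed form): the exponential distribution with rate $r \gt 0$ has distribution function $F(x) = 1 - e^{-r x}$ for $x \ge 0$ and quantile function $F^{-1}(p) = -\ln(1 - p) / r$, so it can be simulated directly from a random number.

```python
import math
import random

def exponential_quantile(p, rate=1.0):
    """Quantile function F^{-1}(p) = -ln(1 - p) / rate of the exponential
    distribution, whose distribution function is F(x) = 1 - exp(-rate * x)."""
    return -math.log(1.0 - p) / rate

# Random quantile method: if U is a random number then X = F^{-1}(U) has CDF F.
sample = [exponential_quantile(random.random(), rate=2.0) for _ in range(100_000)]
print(sum(sample) / len(sample))  # should be close to the mean 1 / rate = 0.5
```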
Open the Random Quantile Experiment. For each distribution, run the simulation 1000 times and compare the empirical density function to the probability density function of the selected distribution. Note how the random quantiles simulate the distribution.
For a continuous distribution on an interval of $\R$, the connection goes the other way.
Suppose that $X$ has a continuous distribution on an interval $I \subseteq \R$, with distribution function $F$. Then $U = F(X)$ has the standard uniform distribution.
Proof
For $u \in (0, 1)$ recall that $F^{-1}(u)$ is a quantile of order $u$. Since $X$ has a continuous distribution, $\P(U \ge u) = \P[F(X) \ge u] = \P[X \ge F^{-1}(u)] = 1 - F[F^{-1}(u)] = 1 - u$ Hence $U$ is uniformly distributed on $(0, 1)$.
The standard uniform distribution is a special case of the beta distribution.
The beta distribution with left parameter $a = 1$ and right parameter $b = 1$ is the standard uniform distribution.
Proof
The beta distribution with parameters $a \gt 0$ and $b \gt 0$ has PDF $x \mapsto \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b-1}, \quad x \in (0, 1)$ where $B$ is the beta function. With $a = b = 1$, the PDF is the standard uniform PDF.
The standard uniform distribution is also the building block of the Irwin-Hall distributions.
The Uniform Distribution on a General Interval
Definition
The standard uniform distribution is generalized by adding location-scale parameters.
Suppose that $U$ has the standard uniform distribution. For $a \in \R$ and $w \in (0, \infty)$, random variable $X = a + w U$ has the uniform distribution with location parameter $a$ and scale parameter $w$.
Distribution Functions
Suppose that $X$ has the uniform distribution with location parameter $a \in \R$ and scale parameter $w \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = 1/w$ for $x \in [a, a + w]$.
Proof
Recall that $f(x) = \frac{1}{w} g\left(\frac{x - a}{w}\right)$ for $x \in [a, a + w]$, where $g$ is the standard uniform PDF. But $g(u) = 1$ for $u \in [0, 1]$, so the result follows.
The last result shows that $X$ really does have a uniform distribution, since the probability density function is constant on the support interval. Moreover, we can clearly parameterize the distribution by the endpoints of this interval, namely $a$ and $b = a + w$, rather than by the location, scale parameters $a$ and $w$. In fact, the distribution is more commonly known as the uniform distribution on the interval $[a, b]$. Nonetheless, it is useful to know that the distribution is the location-scale family associated with the standard uniform distribution. In terms of the endpoint parameterization, $f(x) = \frac{1}{b - a}, \quad x \in [a, b]$
Open the Special Distribution Simulator and select the uniform distribution. Vary the location and scale parameters and note the graph of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has distribution function $F$ given by $F(x) = \frac{x - a}{w}, \quad x \in [a, a + w]$
Proof
Recall that $F(x) = G\left(\frac{x - a}{w}\right)$ for $x \in [a, a + w]$, where $G$ is the standard uniform CDF. But $G(u) = u$ for $u \in [0, 1]$ so the result follows. Of course, a direct proof using the PDF is also easy.
In terms of the endpoint parameterization, $F(x) = \frac{x - a}{b - a}, \quad x \in [a, b]$
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = a + p w = (1 - p) a + p b$ for $p \in [0, 1]$. The quartiles are
1. $q_1 = a + \frac{1}{4} w = \frac{3}{4} a + \frac{1}{4} b$, the first quartile
2. $q_2 = a + \frac{1}{2} w = \frac{1}{2} a + \frac{1}{2} b$, the median
3. $q_3 = a + \frac{3}{4} w = \frac{1}{4} a + \frac{3}{4} b$, the third quartile
Proof
Recall that $F^{-1}(p) = a + w G^{-1}(p)$ where $G^{-1}$ is the standard uniform quantile function. But $G^{-1}(p) = p$ for $p \in [0, 1]$ so the result follows. Of course a direct proof from the CDF is also easy.
Open the Special Distribution Calculator and select the uniform distribution. Vary the parameters and note the graph of the distribution function. For selected values of the parameters, compute a few values of the distribution function and the quantile function.
Moments
Again we assume that $X$ has the uniform distribution on the interval $[a, b]$ where $a, \, b \in \R$ and $a \lt b$. Thus the location parameter is $a$ and the scale parameter $w = b - a$.
The moments of $X$ are $\E(X^n) = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)}, \quad n \in \N$
Proof
For $n \in \N$, $\E(X^n) = \int_a^b x^n \frac{1}{b - a} dx = \frac{b^{n+1} - a^{n+1}}{(n + 1)(b - a)}$
The mean and variance of $X$ are
1. $\E(X) = \frac{1}{2}(a + b)$
2. $\var(X) = \frac{1}{12}(b - a)^2$
Open the Special Distribution Simulator and select the uniform distribution. Vary the parameters and note the location and size of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = \frac{9}{5}$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score and hence are invariant under location-scale transformations.
Once again, the excess kurtosis is $\kur(X) - 3 = -\frac{6}{5}$.
The moment generating function $M$ of $X$ is given by $M(0) = 1$ and $M(t) = \frac{e^{b t} - e^{a t}}{t(b - a)}, \quad t \in \R \setminus \{0\}$
Proof
Recall that $M(t) = e^{a t} m(w t)$ where $m$ is the standard uniform MGF. Substituting gives the result.
If $h$ is a real-valued function on $[a, b]$, then $\E[h(X)]$ is the average value of $h$ on $[a, b]$, as defined in calculus:
If $h: [a, b] \to \R$ is integrable, then $\E[h(X)] = \frac{1}{b - a} \int_a^b h(x) \, dx$
Proof
This follows from the change of variables formula for expected value: $\E[h(X)] = \int_a^b h(x) f(x) \, dx = \frac{1}{b - a} \int_a^b h(x) \, dx$ since $f(x) = 1 / (b - a)$ for $x \in [a, b]$.
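As a quick numerical illustration (a sketch only; the function $h(x) = \sin x$ on $[0, \pi]$ is an arbitrary choice), averaging $h$ over uniform samples approximates the average value $\frac{1}{b - a} \int_a^b h(x) \, dx$:

```python
import math
import random

# Monte Carlo illustration of the average-value formula: for h(x) = sin(x) on
# [0, pi], the average value is (1/pi) * integral_0^pi sin(x) dx = 2/pi.
a, b = 0.0, math.pi
n = 100_000
sample_mean = sum(math.sin(random.uniform(a, b)) for _ in range(n)) / n
print(sample_mean, 2.0 / math.pi)  # the two values should be close
```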
The entropy of the uniform distribution on an interval depends only on the length of the interval.
The entropy of $X$ is $H(X) = \ln(b - a)$.
Proof $H(X) = \E\{-\ln[f(X)]\} = \int_a^b -\ln\left(\frac{1}{b - a}\right) \frac{1}{b - a} \, dx = -\ln\left(\frac{1}{b - a}\right) = \ln(b - a)$
Related Distributions
Since the uniform distribution is a location-scale family, it is trivially closed under location-scale transformations.
If $X$ has the uniform distribution with location parameter $a$ and scale parameter $w$, and if $c \in \R$ and $d \in (0, \infty)$, then $Y = c + d X$ has the uniform distribution with location parameter $c + d a$ and scale parameter $d w$.
Proof
From the definition, we can take $X = a + w U$ where $U$ has the standard uniform distribution. Hence $Y = c + d X = (c + d a) + (d w) U$.
As we saw above, the standard uniform distribution is a basic tool in the random quantile method of simulation. Uniform distributions on intervals are also basic in the rejection method of simulation. We sketch the method in the next paragraph; see the section on general uniform distributions for more theory.
Suppose that $h$ is a probability density function for a continuous distribution with values in a bounded interval $(a, b) \subseteq \R$. Suppose also that $h$ is bounded, so that there exists $c \gt 0$ such that $h(x) \le c$ for all $x \in (a, b)$. Let $\bs{X} = (X_1, X_2, \ldots)$ be a sequence of independent variables, each uniformly distributed on $(a, b)$, and let $\bs{Y} = (Y_1, Y_2, \ldots)$ be a sequence of independent variables, each uniformly distributed on $(0, c)$. Finally, assume that $\bs{X}$ and $\bs{Y}$ are independent. Then $((X_1, Y_1), (X_2, Y_2), \ldots)$ is a sequence of independent variables, each uniformly distributed on $(a, b) \times (0, c)$. Let $N = \min\{n \in \N_+: 0 \lt Y_n \lt h(X_n)\}$. Then $(X_N, Y_N)$ is uniformly distributed on $R = \{(x, y) \in (a, b) \times (0, c): y \lt h(x)\}$ (the region under the graph of $h$), and therefore $X_N$ has probability density function $h$. In words, we generate uniform points in the rectangular region $(a, b) \times (0, c)$ until we get a point under the graph of $h$. The $x$-coordinate of that point is our simulated value. The rejection method can be used to approximately simulate random variables when the region under the density function is unbounded.
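Here is a minimal Python sketch of this scheme (illustrative only; the bounded density $h(x) = 2 x$ on $(0, 1)$ with bound $c = 2$ is just an example):

```python
import random

def rejection_sample(h, a, b, c):
    """Simulate one value from a density h on (a, b) with h <= c, following the
    scheme above: draw uniform points (x, y) in the rectangle (a, b) x (0, c)
    and return the x-coordinate of the first point under the graph of h."""
    while True:
        x = random.uniform(a, b)  # X_n uniform on (a, b)
        y = random.uniform(0, c)  # Y_n uniform on (0, c), independent of X_n
        if y < h(x):              # accept when Y_n < h(X_n)
            return x

# Example: h(x) = 2x on (0, 1), bounded by c = 2; the sample mean should be
# close to the distribution mean 2/3.
sample = [rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0) for _ in range(50_000)]
print(sum(sample) / len(sample))
```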
Open the rejection method simulator. For each distribution, select a set of parameter values. Run the experiment 2000 times and observe how the rejection method works. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Uniform Distributions on a Finite Set
Suppose that $S$ is a nonempty, finite set. A random variable $X$ taking values in $S$ has the uniform distribution on $S$ if $\P(X \in A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$
The discrete uniform distribution is a special case of the general uniform distribution with respect to a measure, in this case counting measure. The distribution corresponds to picking an element of $S$ at random. Most classical, combinatorial probability models are based on underlying discrete uniform distributions. The chapter on Finite Sampling Models explores a number of such models.
The probability density function $f$ of $X$ is given by $f(x) = \frac{1}{\#(S)}, \quad x \in S$
Proof
This follows from the definition of the (discrete) probability density function: $\P(X \in A) = \sum_{x \in A} f(x)$ for $A \subseteq S$. Or more simply, $f(x) = \P(X = x) = 1 / \#(S)$.
Like all uniform distributions, the discrete uniform distribution on a finite set is characterized by the property of constant density on the set. Another property that all uniform distributions share is invariance under conditioning on a subset.
Suppose that $R$ is a nonempty subset of $S$. Then the conditional distribution of $X$ given $X \in R$ is uniform on $R$.
Proof
For $A \subseteq R$, $\P(X \in A \mid X \in R) = \frac{\P(X \in A)}{\P(X \in R)} = \frac{\#(A) \big/ \#(S)}{\#(R) \big/ \#(S)} = \frac{\#(A)}{\#(R)}$
If $h: S \to \R$ then the expected value of $h(X)$ is simply the arithmetic average of the values of $h$: $\E[h(X)] = \frac{1}{\#(S)} \sum_{x \in S} h(x)$
Proof
This follows from the change of variables theorem for expected value: $\E[h(X)] = \sum_{x \in S} f(x) h(x) = \frac 1 {\#(S)} \sum_{x \in S} h(x)$
The entropy of $X$ depends only on the number of points in $S$.
The entropy of $X$ is $H(X) = \ln[\#(S)]$.
Proof
Let $n = \#(S)$. Then $H(X) = \E\{-\ln[f(X)]\} = \sum_{x \in S} -\ln\left(\frac{1}{n}\right) \frac{1}{n} = -\ln\left(\frac{1}{n}\right) = \ln(n)$
Uniform Distributions on Finite Subsets of $\R$
Without some additional structure, not much more can be said about discrete uniform distributions. Thus, suppose that $n \in \N_+$ and that $S = \{x_1, x_2, \ldots, x_n\}$ is a subset of $\R$ with $n$ points. We will assume that the points are indexed in order, so that $x_1 \lt x_2 \lt \cdots \lt x_n$. Suppose that $X$ has the uniform distribution on $S$.
The probability density function $f$ of $X$ is given by $f(x) = \frac{1}{n}$ for $x \in S$.
The distribution function $F$ of $X$ is given by
1. $F(x) = 0$ for $x \lt x_1$
2. $F(x) = \frac{k}{n}$ for $x_k \le x \lt x_{k+1}$ and $k \in \{1, 2, \ldots, n - 1 \}$
3. $F(x) = 1$ for $x \ge x_n$
Proof
This follows from the definition of the distribution function: $F(x) = \P(X \le x)$ for $x \in \R$.
The quantile function $F^{-1}$ of $X$ is given by $F^{-1}(p) = x_{\lceil n p \rceil}$ for $p \in (0, 1]$.
Proof
By definition, $F^{-1}(p) = x_k$ for $\frac{k - 1}{n} \lt p \le \frac{k}{n}$ and $k \in \{1, 2, \ldots, n\}$. It follows that $k = \lceil n p \rceil$ in this formulation.
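A minimal Python sketch of this quantile formula (the four-point set below is arbitrary):

```python
import math

def discrete_uniform_quantile(points, p):
    """Quantile function of the uniform distribution on a finite subset of R:
    with the points sorted as x_1 < x_2 < ... < x_n, F^{-1}(p) = x_{ceil(n p)}
    for p in (0, 1]."""
    xs = sorted(points)
    n = len(xs)
    k = math.ceil(n * p)      # k in {1, 2, ..., n}
    return xs[k - 1]          # x_k, using 0-based indexing

xs = [1.5, 2.0, 4.0, 7.5]
print(discrete_uniform_quantile(xs, 0.5))   # median: x_2 = 2.0
print(discrete_uniform_quantile(xs, 0.75))  # third quartile: x_3 = 4.0
```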
The moments of $X$ are ordinary arithmetic averages.
For $k \in \N$ $\E\left(X^k\right) = \frac{1}{n} \sum_{i=1}^n x_i^k$
In particular,
The mean and variance of $X$ are
1. $\mu = \frac{1}{n} \sum_{i=1}^n x_i$
2. $\sigma^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2$
Uniform Distributions on Discrete Intervals
We specialize further to the case where the finite subset of $\R$ is a discrete interval, that is, the points are uniformly spaced.
The Standard Distribution
Suppose that $n \in \N_+$ and that $Z$ has the discrete uniform distribution on $S = \{0, 1, \ldots, n - 1 \}$. The distribution of $Z$ is the standard discrete uniform distribution with $n$ points.
Of course, the results in the previous subsection apply with $x_i = i - 1$ and $i \in \{1, 2, \ldots, n\}$.
The probability density function $g$ of $Z$ is given by $g(z) = \frac{1}{n}$ for $z \in S$.
Open the Special Distribution Simulation and select the discrete uniform distribution. Vary the number of points, but keep the default values for the other parameters. Note the graph of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ of $Z$ is given by $G(z) = \frac{1}{n}\left(\lfloor z \rfloor + 1\right)$ for $z \in [0, n - 1]$.
Proof
Note that $G(z) = \frac{k}{n}$ for $k - 1 \le z \lt k$ and $k \in \{1, 2, \ldots n - 1\}$. Thus $k - 1 = \lfloor z \rfloor$ in this formulation.
The quantile function $G^{-1}$ of $Z$ is given by $G^{-1}(p) = \lceil n p \rceil - 1$ for $p \in (0, 1]$. In particular
1. $G^{-1}(1/4) = \lceil n/4 \rceil - 1$ is the first quartile.
2. $G^{-1}(1/2) = \lceil n / 2 \rceil - 1$ is the median.
3. $G^{-1}(3/4) = \lceil 3 n / 4 \rceil - 1$ is the third quartile.
Proof
Note that $G^{-1}(p) = k - 1$ for $\frac{k - 1}{n} \lt p \le \frac{k}{n}$ and $k \in \{1, 2, \ldots, n\}$. Thus $k = \lceil n p \rceil$ in this formulation.
Open the special distribution calculator and select the discrete uniform distribution. Vary the number of points, but keep the default values for the other parameters. Note the graph of the distribution function. Compute a few values of the distribution function and the quantile function.
For the standard uniform distribution, results for the moments can be given in closed form.
The mean and variance of $Z$ are
1. $\E(Z) = \frac{1}{2}(n - 1)$
2. $\var(Z) = \frac{1}{12}(n^2 - 1)$
Proof
Recall that \begin{align} \sum_{k=0}^{n-1} k & = \frac{1}{2}n (n - 1) \\ \sum_{k=0}^{n-1} k^2 & = \frac{1}{6} n (n - 1) (2 n - 1) \end{align} Hence $\E(Z) = \frac{1}{2}(n - 1)$ and $\E(Z^2) = \frac{1}{6}(n - 1)(2 n - 1)$. Part (b) follows from $\var(Z) = \E(Z^2) - [\E(Z)]^2$.
Open the Special Distribution Simulation and select the discrete uniform distribution. Vary the number of points, but keep the default values for the other parameters. Note the size and location of the mean$\pm$standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = \frac{3}{5} \frac{3 n^2 - 7}{n^2 - 1}$
Proof
Recall that \begin{align} \sum_{k=1}^{n-1} k^3 & = \frac{1}{4}(n - 1)^2 n^2 \\ \sum_{k=1}^{n-1} k^4 & = \frac{1}{30} (n - 1) n (2 n - 1)(3 n^2 - 3 n - 1) \end{align} Hence $\E(Z^3) = \frac{1}{4}(n - 1)^2 n$ and $\E(Z^4) = \frac{1}{30}(n - 1)(2 n - 1)(3 n^2 - 3 n - 1)$. The results now follow from the results on the mean and variance and the standard formulas for skewness and kurtosis. Of course, the fact that $\skw(Z) = 0$ also follows from the symmetry of the distribution.
Note that $\kur(Z) \to \frac{9}{5}$ as $n \to \infty$. The limiting value is the kurtosis of the uniform distribution on an interval.
$Z$ has probability generating function $P$ given by $P(1) = 1$ and $P(t) = \frac{1}{n}\frac{1 - t^n}{1 - t}, \quad t \in \R \setminus \{1\}$
Proof $P(t) = \E\left(t^Z\right) = \frac{1}{n} \sum_{k=0}^{n-1} t^k = \frac{1}{n} \frac{1 - t^n}{1 - t}$
The General Distribution
We now generalize the standard discrete uniform distribution by adding location and scale parameters.
Suppose that $Z$ has the standard discrete uniform distribution on $n \in \N_+$ points, and that $a \in \R$ and $h \in (0, \infty)$. Then $X = a + h Z$ has the uniform distribution on $n$ points with location parameter $a$ and scale parameter $h$.
Note that $X$ takes values in $S = \{a, a + h, a + 2 h, \ldots, a + (n - 1) h\}$ so that $S$ has $n$ elements, starting at $a$, with step size $h$, a discrete interval. In the further special case where $a \in \Z$ and $h = 1$, we have an integer interval. Note that the last point is $b = a + (n - 1) h$, so we can clearly also parameterize the distribution by the endpoints $a$ and $b$, and the step size $h$. With this parametrization, the number of points is $n = 1 + (b - a) / h$. For the remainder of this discussion, we assume that $X$ has the distribution in the definition. Our first result is that the distribution of $X$ really is uniform.
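A minimal Python sketch of this parameterization (the endpoint values below are arbitrary):

```python
import random

def discrete_uniform_sample(a, h, n):
    """Simulate X = a + h * Z, where Z is uniform on {0, 1, ..., n - 1}, so that
    X is uniform on the discrete interval {a, a + h, ..., a + (n - 1) h}."""
    z = random.randrange(n)   # Z uniform on {0, 1, ..., n - 1}
    return a + h * z

# Endpoint parameterization: left endpoint a, right endpoint b, step size h,
# so the number of points is n = 1 + (b - a) / h.
a, b, h = 2.0, 5.0, 0.5
n = round(1 + (b - a) / h)    # 7 points: 2.0, 2.5, ..., 5.0
print([discrete_uniform_sample(a, h, n) for _ in range(10)])
```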
$X$ has probability density function $f$ given by $f(x) = \frac{1}{n}$ for $x \in S$
Proof
Recall that $f(x) = g\left(\frac{x - a}{h}\right)$ for $x \in S$, where $g$ is the PDF of $Z$.
Open the Special Distribution Simulation and select the discrete uniform distribution. Vary the parameters and note the graph of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is given by $F(x) = \frac{1}{n}\left(\left\lfloor \frac{x - a}{h} \right\rfloor + 1\right), \quad x \in [a, b]$
Proof
Recall that $F(x) = G\left(\frac{x - a}{h}\right)$ for $x \in S$, where $G$ is the CDF of $Z$.
The quantile function $F^{-1}$ of $X$ is given by $F^{-1}(p) = a + h \left( \lceil n p \rceil - 1 \right)$ for $p \in (0, 1]$. In particular
1. $F^{-1}(1/4) = a + h \left(\lceil n/4 \rceil - 1\right)$ is the first quartile.
2. $F^{-1}(1/2) = a + h \left(\lceil n / 2 \rceil - 1\right)$ is the median.
3. $F^{-1}(3/4) = a + h \left(\lceil 3 n / 4 \rceil - 1\right)$ is the third quartile.
Proof
Recall that $F^{-1}(p) = a + h G^{-1}(p)$ for $p \in (0, 1]$, where $G^{-1}$ is the quantile function of $Z$.
Open the special distribution calculator and select the discrete uniform distribution. Vary the parameters and note the graph of the distribution function. Compute a few values of the distribution function and the quantile function.
The mean and variance of $X$ are
1. $\E(X) = a + \frac{1}{2}(n - 1) h = \frac{1}{2}(a + b)$
2. $\var(X) = \frac{1}{12}(n^2 - 1) h^2 = \frac{1}{12}(b - a)(b - a + 2 h)$
Proof
Recall that $\E(X) = a + h \E(Z)$ and $\var(X) = h^2 \var(Z)$, so the results follow from the corresponding results for the standard distribution.
Note that the mean is the average of the endpoints (and so is the midpoint of the interval $[a, b]$) while the variance depends only on the number of points and the step size.
Open the Special Distribution Simulator and select the discrete uniform distribution. Vary the parameters and note the shape and location of the mean/standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = \frac{3}{5} \frac{3 n^2 - 7}{n^2 - 1}$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence the skewness and kurtosis of $X$ are the same as the skewness and kurtosis of $Z$.
$X$ has moment generating function $M$ given by $M(0) = 1$ and $M(t) = \frac{1}{n} e^{t a} \frac{1 - e^{n t h}}{1 - e^{t h}}, \quad t \in \R \setminus \{0\}$
Proof
Note that $M(t) = \E\left(e^{t X}\right) = e^{t a} \E\left(e^{t h Z}\right) = e^{t a} P\left(e^{t h}\right)$ where $P$ is the probability generating function of $Z$.
Related Distributions
Since the discrete uniform distribution on a discrete interval is a location-scale family, it is trivially closed under location-scale transformations.
Suppose that $X$ has the discrete uniform distribution on $n \in \N_+$ points with location parameter $a \in \R$ and scale parameter $h \in (0, \infty)$. If $c \in \R$ and $w \in (0, \infty)$ then $Y = c + w X$ has the discrete uniform distribution on $n$ points with location parameter $c + w a$ and scale parameter $w h$.
Proof
By definition we can take $X = a + h Z$ where $Z$ has the standard uniform distribution on $n$ points. Then $Y = c + w X = (c + w a) + (w h) Z$.
In terms of the endpoint parameterization, $X$ has left endpoint $a$, right endpoint $a + (n - 1) h$, and step size $h$ while $Y$ has left endpoint $c + w a$, right endpoint $(c + w a) + (n - 1) wh$, and step size $wh$.
The uniform distribution on a discrete interval converges to the continuous uniform distribution on the interval with the same endpoints, as the step size decreases to 0.
Suppose that $X_n$ has the discrete uniform distribution with endpoints $a$ and $b$, and step size $(b - a) / n$, for each $n \in \N_+$. Then the distribution of $X_n$ converges to the continuous uniform distribution on $[a, b]$ as $n \to \infty$.
Proof
The CDF $F_n$ of $X_n$ is given by $F_n(x) = \frac{1}{n} \left\lfloor n \frac{x - a}{b - a} \right\rfloor, \quad x \in [a, b]$ But $n y - 1 \le \lfloor ny \rfloor \le n y$ for $y \in \R$ so $\lfloor n y \rfloor / n \to y$ as $n \to \infty$. Hence $F_n(x) \to (x - a) / (b - a)$ as $n \to \infty$ for $x \in [a, b]$, and this is the CDF of the continuous uniform distribution on $[a, b]$.
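A quick numerical illustration of this convergence (the endpoints and evaluation point below are arbitrary), using the expression for $F_n$ from the proof:

```python
import math

# With endpoints a, b and step (b - a)/n, the value floor(n (x - a)/(b - a))/n
# should approach the continuous uniform CDF (x - a)/(b - a) as n increases.
a, b, x = 1.0, 3.0, 2.2
for n in (10, 100, 1_000, 10_000):
    f_n = math.floor(n * (x - a) / (b - a)) / n
    print(n, f_n, (x - a) / (b - a))
```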
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Semicircle Distribution
The semicircle distribution plays a very important role in the study of random matrices. It is also known as the Wigner distribution in honor of the physicist Eugene Wigner, who did pioneering work on random matrices.
The Standard Semicircle Distribution
Distribution Functions
The standard semicircle distribution is a continuous distribution on the interval $[-1, 1]$ with probability density function $g$ given by $g(x) = \frac{2}{\pi} \sqrt{1 - x^2}, \quad x \in [-1, 1]$
Proof
The graph of $x \mapsto \sqrt{1 - x^2}$ for $x \in [-1, 1]$ is the upper half of the circle of radius 1 centered at the origin. Hence the area under this graph is $\pi / 2$ and therefore $g$ is a valid PDF; the constant $2 / \pi$ in $g$ is the normalizing constant.
As noted in the proof, $x \mapsto \sqrt{1 - x^2}$ for $x \in [-1, 1]$ is the upper half of the circle of radius 1 centered at the origin, hence the name.
The standard semicircle probability density function $g$ satisfies the following properties:
1. $g$ is symmetric about $x = 0$.
2. $g$ increases and then decreases with mode at $x = 0$.
3. $g$ is concave downward.
Proof
As noted earlier, except for the normalizing constant, the graph of $g$ is the upper half of the circle of radius 1 centered at the origin, and so these properties are obvious.
Open special distribution simulator and select the semicircle distribution. With the default parameter value, note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The standard semicircle distribution function $G$ is given by $G(x) = \frac{1}{2} + \frac{1}{\pi} x \sqrt{1 - x^2} + \frac{1}{\pi} \arcsin x , \quad x \in [-1, 1]$
Proof
Of course $G(x) = \int_{-1}^x g(t) \, dt$ for $-1 \le x \le 1$. The integral is evaluated by using the trigonometric substitution $t = \sin \theta$.
We cannot give the quantile function $G^{-1}$ in closed form, but values of this function can be approximated. Clearly by symmetry, $G^{-1}\left(\frac{1}{2} - p\right) = -G^{-1}\left(\frac{1}{2} + p\right)$ for $0 \le p \le \frac{1}{2}$. In particular, the median is 0.
Open the special distribution simulator and select the semicircle distribution. With the default parameter value, note the shape of the distribution function. Compute the first and third quartiles.
Moments
Suppose that $X$ has the standard semicircle distribution. The moments of $X$ about 0 can be computed explicitly. In particular, the odd order moments are 0 by symmetry.
For $n \in \N$, the moment of order $2 n + 1$ is $\E\left(X^{2n+1}\right) = 0$ and the moment of order $2 n$ is $\E\left(X^{2n}\right) = \left(\frac{1}{2}\right)^{2n} \frac{1}{n + 1} \binom{2n}{n}$
Proof
Clearly $X$ has moments of all orders since the PDF $g$ is bounded and the support interval is bounded. So by symmetry, the odd order moments are 0, and we just need to prove the result for the even order moments. Note that $\E\left(X^{2n}\right) = \int_{-1}^1 x^{2n} \frac{2}{\pi} \sqrt{1 - x^2} \, dx$ We use the substitution $x = \sin \theta$ to get $\E\left(X^{2n}\right) = \int_{-\pi/2}^{\pi/2} \frac{2}{\pi} \sin^{2n}(\theta) \cos^2(\theta) \, d\theta$ This integral can be evaluated by standard calculus methods to give the result above.
The numbers $C_n = \frac{1}{n+1} \binom{2n}{n}$ for $n \in \N$ are known as the Catalan numbers, and are named for the Belgian mathematician Eugene Catalan. In particular, we can compute the mean, variance, skewness, and kurtosis.
The mean and variance of $X$ are
1. $\E(X) = 0$
2. $\var(X) = \frac{1}{4}$
Open the special distribution simulator and select the semicircle distribution. With the default parameter value, note the size and location of the mean $\pm$ standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = 2$
Proof
The standard score of $X$ is $2 X$. Hence $\skw(X) = E\left(2^3 X^3\right) = 0$. Of course, this is also clear from the symmetry of the distribution of $X$. Similarly, by the moment formula, $\kur(X) = \E\left(2^4 X^4\right) = 2^4 \left(\frac{1}{2}\right)^4 \frac{1}{3}\binom{4}{2} = 2$
It follows that the excess kurtosis is $\kur(X) - 3 = -1$.
Related Distributions
The semicircle distribution has simple connections to the continuous uniform distribution.
If $(X, Y)$ is uniformly distributed on the circular region in $\R^2$ centered at the origin with radius 1, then $X$ and $Y$ each have the standard semicircular distribution.
Proof
$(X, Y)$ has joint PDF $(x, y) \mapsto 1/\pi$ on $C = \{(x, y) \in \R^2: x^2 + y^2 \le 1\}$. Hence $X$ has PDF $g(x) = \int_{-\sqrt{1 - x^2}}^{\sqrt{1 - x^2}} \frac{1}{\pi} \, dy = \frac{2}{\pi} \sqrt{1 - x^2}, \quad x \in [-1, 1]$
It's easy to simulate a random point that is uniformly distributed on the circular region in the previous theorem, and this provides a way of simulating a standard semicircle distribution. This is important since we can't use the random quantile method of simulation.
Suppose that $U$, $V$, and $W$ are independent random variables, each with the standard uniform distribution (random numbers). Let $R = \max\{U, V\}$ and $\Theta = 2 \pi W$, and then let $X = R \cos \Theta$, $Y = R \sin \Theta$. Then $(X, Y)$ is uniformly distributed on the circular region of radius 1 centered at the origin, and hence $X$ and $Y$ each have the standard semicircle distribution.
Proof
$U$ and $V$ have CDF $u \mapsto u$ for $u \in [0, 1]$ and therefore $R$ has CDF $r \mapsto r^2$ for $r \in [0, 1]$. Hence $R$ has PDF $r \mapsto 2 r$ for $r \in [0, 1]$. On the other hand, $\Theta$ is uniformly distributed on $[0, 2 \pi)$ and hence has density $\theta \mapsto 1 / 2 \pi$ on $[0, 2 \pi)$. By independence, the Joint PDF of $(R, \Theta)$ is $(r, \theta) \mapsto (2 r)(1 / 2 \pi) = r / \pi$ on $\{(r, \theta): 0 \le r \le 1, 0 \le \theta \le 2 \pi\}$. For the polar coordinate transformation $(x, y) \mapsto (r \cos \theta , r \sin \theta)$, the Jacobian is $r$. Hence by the change of variables theorem, $(X, Y)$ has PDF $(x, y) \mapsto \frac{r}{\pi} \frac{1}{r} = \frac{1}{\pi} \text{ on } \{(x, y) \in \R^2: x^2 + y^2 \le 1\}$
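A minimal Python sketch of this simulation scheme (the function name is illustrative):

```python
import math
import random

def standard_semicircle_sample():
    """Simulate the standard semicircle distribution by the result above:
    generate a uniform point on the unit disk in polar form and return its
    x-coordinate."""
    u, v, w = random.random(), random.random(), random.random()
    r = max(u, v)               # R = max{U, V} has density 2r on [0, 1]
    theta = 2.0 * math.pi * w   # Theta uniform on [0, 2 pi)
    return r * math.cos(theta)  # X = R cos(Theta)

# The sample mean should be near 0 and the sample variance near 1/4.
sample = [standard_semicircle_sample() for _ in range(100_000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
print(mean, var)
```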
Of course, note that $X$ and $Y$ in the previous theorem are not independent. Another method of simulation is to use the rejection method. This method works well since the semicircle distribution has a bounded support interval and a bounded probability density function.
Open the rejection method app and select the semicircle distribution. Keep the default parameters to get the standard semicircle distribution. Run the simulation 1000 times and note the points in the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The General Semicircle Distribution
Like so many standard distributions, the standard semicircle distribution is usually generalized by adding location and scale parameters.
Definition
Suppose that $Z$ has the standard semicircle distribution. For $a \in \R$ and $r \in (0, \infty)$, $X = a + r Z$ has the semicircle distribution with center (location parameter) $a$ and radius (scale parameter) $r$.
Distribution Functions
Suppose that $X$ has the semicircle distribution with center $a \in \R$ and radius $r \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{2}{\pi r^2} \sqrt{r^2 - (x - a)^2}, \quad x \in [a - r, a + r]$
Proof
This follows from a standard result for location-scale families. Recall that $f(x) = \frac{1}{r} g\left(\frac{x - a}{r}\right), \quad \frac{x - a}{r} \in [-1, 1]$ where $g$ is the standard semicircle PDF.
The graph of $x \mapsto \sqrt{r^2 - (x - a)^2}$ for $x \in [a - r, a + r]$ is the upper half of the circle of radius $r$ centered at $a$. The area under this semicircle is $\pi r^2 / 2$ so as a check on our work, we see that $f$ is a valid probability density function.
The probability density function $f$ of $X$ satisfies the following properties:
1. $f$ is symmetric about $x = a$.
2. $f$ increases and then decreases with mode at $x = a$.
3. $f$ is concave downward.
Open special distribution simulator and select the semicircle distribution. Vary the center $a$ and the radius $r$, and note the shape of the probability density function. For selected values of $a$ and $r$, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is $F(x) = \frac{1}{2} + \frac{x - a}{\pi r^2} \sqrt{r^2 - (x - a)^2} + \frac{1}{\pi} \arcsin\left(\frac{x - a}{r}\right), \quad x \in [a - r, a + r]$
Proof
This follows from a standard result for location-scale families: $F(x) = G\left(\frac{x - a}{r}\right), \quad \frac{x - a}{r} \in [-1, 1]$ where $G$ is the standard semicircle CDF.
As in the standard case, we cannot give the quantile function $F^{-1}$ in closed form, but values of this function can be approximated. Recall that $F^{-1}(p) = a + r G^{-1}(p)$ where $G^{-1}$ is the standard semicircle quantile function. In particular, $F^{-1}\left(\frac{1}{2} - p\right) = 2 a - F^{-1}\left(\frac{1}{2} + p\right)$ for $0 \le p \le \frac{1}{2}$. The median is $a$.
Open the special distribution simulator and select the semicircle distribution. Vary the center $a$ and the radius $r$, and note the shape of the distribution function. For selected values of $a$ and $r$, compute the first and third quartiles.
Moments
Suppose again that $X$ has the semicircle distribution with center $a \in \R$ and radius $r \in (0, \infty)$, so by definition we can assume $X = a + r Z$ where $Z$ has the standard semicircle distribution. The moments of $X$ can be computed from the moments of $Z$. Using the binomial theorem and the linearity of expected value we have $\E\left(X^n\right) = \sum_{k=0}^n \binom{n}{k} r^k a^{n-k} \E\left(Z^k\right), \quad n \in \N$ In particular,
The mean and variance of $X$ are
1. $\E(X) = a$
2. $\var(X) = r^2 / 4$
When the center is 0, the general moments have a simple form:
Suppose that $a = 0$. For $n \in \N$ the moment of order $2 n + 1$ is $\E\left(X^{2n+1}\right) = 0$ and the moment of order $2 n$ is $\E\left(X^{2n}\right) = \left(\frac{r}{2}\right)^{2n} \frac{1}{n + 1} \binom{2n}{n}$
Proof
This follows from the moment results for $Z$ since $X^m = r^m Z^m$ for $m \in \N$.
Open the special distribution simulator and select the semicircle distribution. Vary the center $a$ and the radius $r$, and note the size and location of the mean $\pm$ standard deviation bar. For selected values of $a$ and $r$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = 2$
Proof
These results follow immediately from the skewness and kurtosis of the standard distribution. Recall that skewness and kurtosis are defined in terms of the standard score, which is independent of the location and scale parameters.
Once again, the excess kurtosis is $\kur(X) - 3 = -1$.
Related Distributions
Since the semicircle distribution is a location-scale family, it's invariant under location-scale transformations.
Suppose that $X$ has the semicircle distribution with center $a \in \R$ and radius $r \in (0, \infty)$. If $b \in \R$ and $c \in (0, \infty)$ then $b + c X$ has the semicircle distribution with center $b + c a$ and radius $c r$.
Proof
Again from the definition we can take $X = a + r Z$ where $Z$ has the standard semicircle distribution. Then $b + c X = (b + c a) + (c r) Z$.
One member of the beta family of distributions is a semicircle distribution:
The beta distribution with left parameter $3/2$ and right parameter $3/2$ is the semicircle distribution with center $1/2$ and radius $1/2$.
Proof
By definition, the beta distribution with left and right parameters $3/2$ has PDF $f(x) = \frac{1}{B(3/2, 3/2)}x^{1/2}(1 - x)^{1/2}, \quad x \in [0, 1]$ But $B(3/2, 3/2) = \pi/8$ and $x^{1/2}(1 - x)^{1/2} = \sqrt{x - x^2}$. Completing the square gives $f(x) = \frac{8}{\pi} \sqrt{\frac{1}{4} - \left(x - \frac{1}{2}\right)^2}, \quad x \in [0, 1]$ which is the PDF of the semicircle distribution with center $1/2$ and radius $1/2$.
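A quick numerical check of this identity (a sketch only, evaluated at a few arbitrary points):

```python
import math

def beta_pdf(x):
    """Density of the beta distribution with parameters 3/2 and 3/2."""
    b = math.gamma(1.5) ** 2 / math.gamma(3.0)  # B(3/2, 3/2) = pi / 8
    return math.sqrt(x * (1.0 - x)) / b

def semicircle_pdf(x, a=0.5, r=0.5):
    """Density of the semicircle distribution with center a and radius r."""
    return (2.0 / (math.pi * r ** 2)) * math.sqrt(r ** 2 - (x - a) ** 2)

for x in (0.1, 0.3, 0.5, 0.9):
    print(beta_pdf(x), semicircle_pdf(x))  # the two columns should agree
```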
Since we can simulate a variable $Z$ with the standard semicircle distribution by the method above, we can simulate a variable with the semicircle distribution with center $a \in \R$ and radius $r \in (0, \infty)$ by our very definition: $X = a + r Z$. Once again, the rejection method also works well since the support and probability density function of $X$ are bounded.
Open the rejection method app and select the semicircle distribution. For selected values of $a$ and $r$, run the simulation 1000 times and note the points in the scatterplot. Compare the empirical density function, mean and standard deviation to their distributional counterparts.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Like the semicircle distribution, the triangle distribution is based on a simple geometric shape. The distribution arises naturally when uniformly distributed random variables are transformed in various ways.
The Standard Triangle Distribution
Distribution Functions
The standard triangle distribution with vertex at $p \in [0, 1]$ (equivalently, shape parameter $p$) is a continuous distribution on $[0, 1]$ with probability density function $g$ described as follows:
1. If $p = 0$ then $g(x) = 2 (1 - x)$ for $x \in [0, 1]$
2. If $p = 1$ then $g(x) = 2 x$ for $x \in [0, 1]$.
3. If $p \in (0, 1)$ then $g(x) = \begin{cases} \frac{2x}{p}, & x \in [0, p] \\ \frac{2 (1 - x)}{1 - p}, & x \in [p, 1] \end{cases}$
The shape of the probability density function justifies the name triangle distribution.
The graph of $g$, together with the domain $[0, 1]$, forms a triangle with vertices $(0, 0)$, $(1, 0)$, and $(p, 2)$. The mode of the distribution is $x = p$.
1. If $p = 0$, $g$ is decreasing.
2. If $p = 1$, $g$ is increasing.
3. If $p \in (0, 1)$, $g$ increases and then decreases.
Proof
Using $[0, 1]$ as the base, we can compute the area of the triangle as $\frac{1}{2} \cdot 1 \cdot 2 = 1$, so we see immediately that $g$ is a valid probability density function. The properties are obvious.
Open special distribution simulator and select the triangle distribution. Vary $p$ (but keep the default values for the other parameters) and note the shape of the probability density function. For selected values of $p$, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ is given as follows:
1. If $p = 0$, $G(x) = 1 - (1 - x)^2$ for $x \in [0, 1]$.
2. If $p = 1$, $G(x) = x^2$ for $x \in [0, 1]$.
3. If $p \in (0, 1)$, $G(x) = \begin{cases} \frac{x^2}{p}, & x \in [0, p] \\ 1 - \frac{(1 - x)^2}{1 - p}, & x \in [p, 1] \end{cases}$
Proof
This result follows from standard calculus since $G(x) = \int_0^x g(t) \, dt$.
The quantile function $G^{-1}$ is given by $G^{-1}(u) = \begin{cases} \sqrt{u p}, & u \in [0, p] \\ 1 - \sqrt{(1 - u)(1 - p)}, & u \in [p, 1] \end{cases}$
1. The first quartile is $\sqrt{\frac{1}{4}p}$ if $p \in \left[\frac{1}{4}, 1\right]$ and is $1 - \sqrt{\frac{3}{4} (1 - p)}$ if $p \in \left[0, \frac{1}{4}\right]$
2. The median is $\sqrt{\frac{1}{2} p}$ if $p \in \left[\frac{1}{2}, 1\right]$ and is $1 - \sqrt{\frac{1}{2}(1 - p)}$ if $p \in \left[0, \frac{1}{2}\right]$.
3. The third quartile is $\sqrt{\frac{3}{4} p}$ if $p \in \left[\frac{3}{4}, 1\right]$ and is $1 - \sqrt{\frac{1}{4}(1 - p)}$ if $p \in \left[0, \frac{3}{4}\right]$.
Open the special distribution calculator and select the triangle distribution. Vary $p$ (but keep the default values for the other parameters) and note the shape of the distribution function. For selected values of $p$, compute the first and third quartiles.
Moments
Suppose that $X$ has the standard triangle distribution with vertex $p \in [0, 1]$. The moments are easy to compute.
Suppose that $n \in \N$.
1. If $p = 1$, $\E(X^n) = 2 \big/ (n + 2)$.
2. If $p \in [0, 1)$, $\E(X^n) = \frac{2}{(n + 1)(n + 2)} \frac{1 - p^{n+1}}{1 - p}$
Proof
This follows from standard calculus, since $\E(X^n) = \int_0^1 x^n g(x) \, dx$.
From the general moment formula, we can compute the mean, variance, skewness, and kurtosis.
The mean and variance of $X$ are
1. $\E(X) = \frac{1}{3}(1 + p)$
2. $\var(X) = \frac{1}{18}[1 - p(1 - p)]$
Proof
This follows from the general moment result. Recall that $\var(X) = \E\left(X^2\right) - [\E(X)]^2$.
Note that $\E(X)$ increases from $\frac{1}{3}$ to $\frac{2}{3}$ as $p$ increases from 0 to 1. The graph of $\var(X)$ is a parabola opening downward; the largest value is $\frac{1}{18}$ when $p = 0$ or $p = 1$ and the smallest value is $\frac{1}{24}$ when $p = \frac{1}{2}$.
Open the special distribution simulator and select the triangle distribution. Vary $p$ (but keep the default values for the other paramters) and note the size and location of the mean $\pm$ standard deviation bar. For selected values of $p$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness of $X$ is $\skw(X) = \frac{\sqrt{2} (1 - 2 p)(1 + p)(2 - p)}{5[1 - p(1 - p)]^{3/2}}$ The kurtosis of $X$ is $\kur(X) = \frac{12}{5}$.
Proof
These results follow from the general moment result and the computational formulas for skewness and kurtosis.
Note that $X$ is positively skewed for $p \lt \frac{1}{2}$, negatively skewed for $p \gt \frac{1}{2}$, and symmetric for $p = \frac{1}{2}$. More specifically, if we indicate the dependence on the parameter $p$ then $\skw_{1-p}(X) = -\skw_p(X)$. Note also that the kurtosis is independent of $p$, and the excess kurtosis is $\kur(X) - 3 = -\frac{3}{5}$.
Open the special distribution simulator and select the triangle distribution. Vary $p$ (but keep the default values for the other paramters) and note the degree of symmetry and the degree to which the distribution is peaked. For selected values of $p$, run the simulation 1000 times and compare the empirical density function to the probability density function.
Related Distributions
If $X$ has the standard triangle distribution with parameter $p$, then $1 - X$ has the standard triangle distribution with parameter $1 - p$.
Proof
For $x \in [0, 1]$, $\P(1 - X \le x) = \P(X \ge 1 - x) = 1 - G(1 - x)$, where $G$ is the CDF of $X$. The result now follows from the formula for the CDF.
The standard triangle distribution has a number of connections with the standard uniform distribution. Recall that a simulation of a random variable with a standard uniform distribution is a random number in computer science.
Suppose that $U_1$ and $U_2$ are independent random variables, each with the standard uniform distribution. Then
1. $X = \min\{U_1, U_2\}$ has the standard triangle distribution with $p = 0$.
2. $Y = \max\{U_1, U_2\}$ has the standard triangle distribution with $p = 1$.
Proof
$U_1$ and $U_2$ have CDF $u \mapsto u$ for $u \in [0, 1]$
1. $X$ has CDF $x \mapsto 1 - (1 - x)^2$ for $x \in [0, 1]$
2. $Y$ has CDF $y \mapsto y^2$ for $y \in [0, 1]$.
Suppose again that $U_1$ and $U_2$ are independent random variables, each with the standard uniform distribution. Then
1. $X = \left|U_2 - U_1\right|$ has the standard triangle distribution with $p = 0$.
2. $Y = \left(U_1 + U_2\right) \big/ 2$ has the standard triangle distribution with $p = \frac{1}{2}$.
Proof
1. Let $x \in [0, 1]$. Note that the event $\{X \gt x\} = \left\{\left|U_2 - U_1\right| \gt x\right\}$ is simply the union of two disjoint triangular regions, each with base and height of length $1 - x$. Hence $\P(X \le x) = 1 - (1 - x)^2$.
2. Let $y \in \left[0, \frac{1}{2}\right]$. The event $\{Y \le y\} = \left\{U_1 + U_2 \le 2 y\right\}$ is a triangular region with height and base of length $2 y$. Hence $\P(Y \le y) = 2 y^2$. For $y \in \left[\frac{1}{2}, 1\right]$, the event $\{Y \gt y\}$ is a triangular region with height and base of length $2 - 2y$. Hence $\P(Y \le y) = 1 - 2 (1 - y)^2$.
In the previous result, note that $Y$ is the sample mean from a random sample of size 2 from the standard uniform distribution. Since the quantile function has a simple closed-form expression, the standard triangle distribution can be simulated using the random quantile method.
Suppose that $U$ has the standard uniform distribution and $p \in [0, 1]$. Then the random variable below has the standard triangle distribution with parameter $p$: $X = \begin{cases} \sqrt{p U}, & U \le p \\ 1 - \sqrt{(1 -p)(1 - U)}, & p \lt U \le 1 \end{cases}$
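A minimal Python sketch of this simulation (the value of the shape parameter below is arbitrary):

```python
import math
import random

def standard_triangle_sample(p):
    """Simulate the standard triangle distribution with shape parameter p by
    the random quantile method of the preceding result."""
    u = random.random()
    if u <= p:
        return math.sqrt(p * u)
    return 1.0 - math.sqrt((1.0 - p) * (1.0 - u))

# The sample mean should be close to (1 + p) / 3.
p = 0.25
sample = [standard_triangle_sample(p) for _ in range(100_000)]
print(sum(sample) / len(sample), (1.0 + p) / 3.0)
```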
Open the random quantile experiment and select the triangle distribution. Vary $p$ (but keep the default values for the other parameters) and note the shape of the distribution function/quantile function. For selected values of $p$, run the experiment 1000 times and watch the random quantiles. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The standard triangle distribution can also be simulated using the rejection method, which also works well since the region $R$ under the probability density function $g$ is bounded. Recall that this method is based on the following fact: if $(X, Y)$ is uniformly distributed on the rectangular region $S = \{(x, y): 0 \le x \le 1, 0 \le y \le 2\}$ which contains $R$, then the conditional distribution of $(X, Y)$ given $(X, Y) \in R$ is uniformly distributed on $R$, and hence $X$ has probability density function $g$.
Open the rejection method experiment and select the triangle distribution. Vary $p$ (but keep the default values for the other parameters) and note the shape of the probability density function. For selected values of $p$, run the experiment 1000 times and watch the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts.
For the extreme values of the shape parameter, the standard triangle distributions are also beta distributions.
Connections to the beta distribution:
1. The standard triangle distribution with shape parameter $p = 0$ is the beta distribution with left parameter $a = 1$ and right parameter $b = 2$.
2. The standard triangle distribution with shape parameter $p = 1$ is the beta distribution with left parameter $a = 2$ and right parameter $b = 1$.
Proof
These results follow directly from the form of the standard triangle PDF.
Open the special distribution simulator and select the beta distribution. For parameter values given below, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
1. $a = 1$, $b = 2$
2. $a = 2$, $b = 1$
The General Triangle Distribution
Like so many standard distributions, the standard triangle distribution is usually generalized by adding location and scale parameters.
Definition
Suppose that $Z$ has the standard triangle distribution with vertex at $p \in [0, 1]$. For $a \in \R$ and $w \in (0, \infty)$, random variable $X = a + w Z$ has the triangle distribution with location parameter $a$, scale parameter $w$, and shape parameter $p$.
Distribution Functions
Suppose that $X$ has the general triangle distribution given in the definition above.
$X$ has probability density function $f$ given as follows:
1. If $p = 0$, $f(x) = \frac{2}{w^2}(a + w - x)$ for $x \in [a, a + w]$.
2. If $p = 1$, $f(x) = \frac{2}{w^2}(x - a)$ for $x \in [a, a + w]$.
3. If $p \in (0, 1)$, $f(x) = \begin{cases} \frac{2}{p w^2}(x - a), & x \in [a, a + p w] \ \frac{2}{w^2 (1 - p)}(a + w - x), & x \in [a + p w, a + w] \end{cases}$
Proof
This follows from a standard result for location-scale families. Recall that $f(x) = \frac{1}{w} g\left(\frac{x - a}{w}\right), \quad \frac{x - a}{w} \in [0, 1]$ where $g$ is the standard triangle PDF with parameter $p$.
Once again, the shape of the probability density function justifies the name triangle distribution.
The graph of $f$, together with the domain $[a, a + w]$, forms a triangle with vertices $(a, 0)$, $(a + w, 0)$, and $(a + p w, 2/w)$. The mode of the distribution is $x = a + p w$.
1. If $p = 0$, $f$ is decreasing.
2. If $p = 1$, $f$ is increasing.
3. If $p \in (0, 1)$, $f$ increases and then decreases.
Clearly the general triangle distribution could be parameterized by the left endpoint $a$, the right endpoint $b = a + w$ and the location of the vertex $c = a + p w$, but the location-scale-shape parameterization is better.
Open the special distribution simulator and select the triangle distribution. Vary the parameters $a$, $w$, and $p$, and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is given as follows:
1. If $p = 0$, $F(x) = 1 - \frac{1}{w^2}(a + w - x)^2$ for $x \in [a, a + w]$
2. If $p = 1$, $F(x) = \frac{1}{w^2}(x - a)^2$ for $x \in [a, a + w]$
3. If $p \in (0, 1)$, $F(x) = \begin{cases} \frac{1}{p w^2}(x - a)^2, & x \in [a, a + p w] \ 1 - \frac{1}{w^2 (1 - p)}(a + w - x)^2, & x \in [a + p w, a + w] \end{cases}$
Proof
This follows from a standard result for location-scale families: $F(x) = G\left(\frac{x - a}{w}\right), \quad x \in [a, a + w]$ where $G$ is the standard triangle CDF with parameter $p$.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(u) = a + \begin{cases} w \sqrt{u p}, & 0 \le u \le p \ w\left[1 - \sqrt{(1 - u)(1 - p)}\right], & p \le u \le 1 \end{cases}$
1. The first quartile is $a + w \sqrt{\frac{1}{4} p}$ if $p \in \left[\frac{1}{4}, 1\right]$ and is $a + w \left( 1 - \sqrt{\frac{3}{4} (1 - p)} \right)$ if $p \in \left[0, \frac{1}{4}\right]$
2. The median is $a + w \sqrt{\frac{1}{2} p}$ if $p \in \left[\frac{1}{2}, 1\right]$ and is $a + w \left(1 - \sqrt{\frac{1}{2} (1 - p)}\right)$ if $p \in \left[0, \frac{1}{2}\right]$.
3. The third quartile is $a + w \sqrt{\frac{3}{4} p}$ if $p \in \left[\frac{3}{4}, 1\right]$ and is $a + w\left(1 - \sqrt{\frac{1}{4}(1 - p)}\right)$ if $p \in \left[0, \frac{3}{4}\right]$.
Proof
This follows from a standard result for location-scale families: $F^{-1}(u) = a + w G^{-1}(u)$ for $u \in [0, 1]$, where $G^{-1}$ is the standard triangle quantile function with parameter $p$.
Open the special distribution simulator and select the triangle distribution. Vary the parameters $a$, $w$, and $p$, and note the shape and location of the distribution function. For selected values of the parameters, compute the median and the first and third quartiles.
Moments
Suppose again that $X$ has the triangle distribution with location parameter $a \in \R$, scale parameter $w \in (0, \infty)$ and shape parameter $p \in [0, 1]$. Then we can take $X = a + w Z$ where $Z$ has the standard triangle distribution with parameter $p$. Hence the moments of $X$ can be computed from the moments of $Z$. Using the binomial theorem and the linearity of expected value we have $\E(X^n) = \sum_{k=0}^n \binom{n}{k} w^k a^{n-k} \E(Z^k), \quad n \in \N$
The general results are rather messy.
The mean and variance of $X$ are
1. $\E(X) = a + \frac{w}{3}(1 + p)$
2. $\var(X) = \frac{w^2}{18}[1 - p(1 - p)]$
Proof
This follows from the results for the mean and variance of the standard triangle distribution, and simple properties of expected value and variance.
Open the special distribution simulator and select the triangle distribution. Vary the parameters $a$, $w$, and $p$, and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness of $X$ is $\skw(X) = \frac{\sqrt{2} (1 - 2 p)(1 + p)(2 - p)}{5[1 - p(1 - p)]^{3/2}}$ The kurtosis of $X$ is $\kur(X) = \frac{12}{5}$.
Proof
These results follow immediately from the skewness and kurtosis of the standard triangle distribution. Recall that skewness and kurtosis are defined in terms of the standard score, which is independent of the location and scale parameters.
As before, the excess kurtosis is $\kur(X) - 3 = -\frac{3}{5}$.
Related Distributions
Since the triangle distribution is a location-scale family, it's invariant under location-scale transformations. More generally, the family is closed under linear transformations with nonzero slope.
Suppose that $X$ has the triangle distribution with location parameter $a \in \R$, scale parameter $w \in (0, \infty)$, and shape parameter $p \in [0, 1]$. If $b \in \R$ and $c \in (0, \infty)$ then
1. $b + c X$ has the triangle distribution with location parameter $b + c a$, scale parameter $c w$, and shape parameter $p$.
2. $b - c X$ has the triangle distribution with location parameter $b - c (a + w)$, scale parameter $c w$, and shape parameter $1 - p$.
Proof
From the definition we can take $X = a + w Z$ where $Z$ has the standard triangle distribution with parameter $p$.
1. Note that $b + c X = (b + c a) + c w Z$.
2. Note that $b - c X = b - c(a + w) + c w (1 - Z)$, and recall from the result above that $1 - Z$ has the standard triangle distribution with parameter $1 - p$.
As with the standard distribution, there are several connections between the triangle distribution and the continuous uniform distribution.
Suppose that $V_1$ and $V_2$ are independent and are uniformly distributed on the interval $[a, a + w]$, where $a \in \R$ and $w \in (0, \infty)$. Then
1. $\min\{V_1, V_2\}$ has the triangle distribution with location parameter $a$, scale parameter $w$, and shape parameter $p = 0$.
2. $\max\{V_1, V_2\}$ has the triangle distribution with location parameter $a$, scale parameter $w$, and shape parameter $p = 1$.
Proof
The uniform distribution is itself a location-scale family, so we can write $V_1 = a + w U_1$ and $V_2 = a + w U_2$, where $U_1$ and $U_2$ are independent and each has the standard uniform distribution. Then $\min\{V_1, V_2\} = a + w \min\{U_1, U_2\}$ and $\max\{V_1, V_2\} = a + w \max\{U_1, U_2\}$ so the result follows from the corresponding result for the standard triangle distribution.
Suppose again that $V_1$ and $V_2$ are independent and are uniformly distributed on the interval $[a, a + w]$, where $a \in \R$ and $w \in (0, \infty)$. Then
1. $\left|V_2 - V_1\right|$ has the triangle distribution with location parameter 0, scale parameter $w$, and shape parameter $p = 0$.
2. $V_1 + V_2$ has the triangle distribution with location parameter $2 a$, scale parameter $2 w$, and shape parameter $p = \frac{1}{2}$.
3. $V_2 - V_1$ has the triangle distribution with location parameter $-w$, scale parameter $2 w$, and shape parameter $p = \frac{1}{2}$
Proof
As before, we can write $V_1 = a + w U_1$ and $V_2 = a + w U_2$, where $U_1$ and $U_2$ are independent and each has the standard uniform distribution.
1. $\left|V_2 - V_1\right| = w \left|U_2 - U_1\right|$ and by the result above, $\left|U_2 - U_1\right|$ has the standard triangle distribution with parameter $p = 0$.
2. $V_1 + V_2 = 2 a + 2 w \left[\frac{1}{2}(U_1 + U_2)\right]$ and by the result above, $\frac{1}{2}(U_1 + U_2)$ has the standard triangle distribution with parameter $p = \frac{1}{2}$.
3. Let $Z = \frac{1}{2} + \frac{1}{2}(U_2 - U_1) = \frac{1}{2}U_2 + \frac{1}{2}(1 - U_1)$. Since $1 - U_1$ also has the standard uniform distribution and is independent of $U_2$, it follows from the result above that $Z$ has the standard triangle distribution with parameter $p = \frac{1}{2}$. But $V_2 - V_1 = w (U_2 - U_1) = w (2 Z - 1) = 2 w Z - w$ and hence the result follows.
A special case of (b) leads to a connection between the triangle distribution and the Irwin-Hall distribution.
Suppose that $U_1$ and $U_2$ are independent random variables, each with the standard uniform distribution. Then $U_1 + U_2$ has the triangle distribution with location parameter $0$, scale parameter $2$ and shape parameter $\frac{1}{2}$. But this is also the Irwin-Hall distribution of order $n = 2$.
Open the special distribution simulator and select the Irwin-Hall distribution. Set $n = 2$ and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
Since we can simulate a variable $Z$ with the standard triangle distribution with parameter $p \in [0, 1]$ by the random quantile method above, we can simulate a variable with the triangle distribution that has location parameter $a \in \R$, scale parameter $w \in (0, \infty)$, and shape parameter $p$ by our very definition: $X = a + w Z$. Equivalently, we could compute a random quantile using the quantile function of $X$.
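A short Python sketch of this location-scale simulation, checking the sample mean and variance against the formulas for $\E(X)$ and $\var(X)$ given above; the parameter values are ours, chosen arbitrarily.

```python
import random

def triangle_sample(a, w, p):
    """X = a + w Z, with Z simulated by the random quantile method."""
    u = random.random()
    z = (p * u) ** 0.5 if u <= p else 1.0 - ((1.0 - p) * (1.0 - u)) ** 0.5
    return a + w * z

a, w, p = 2.0, 3.0, 0.25   # arbitrary illustrative values
xs = [triangle_sample(a, w, p) for _ in range(100000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, a + w * (1 + p) / 3)                # sample mean vs E(X)
print(var, w ** 2 * (1 - p * (1 - p)) / 18)     # sample variance vs var(X)
```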
Open the random quantile experiment and select the triangle distribution. Vary the location parameter $a$, the scale parameter $w$, and the shape parameter $p$, and note the shape of the distribution function. For selected values of the parameters, run the experiment 1000 times and watch the random quantiles. Compare the empirical density function, mean and standard deviation to their distributional counterparts.
As with the standard distribution, the general triangle distribution has a bounded probability density function on a bounded interval, and hence can be simulated easily via the rejection method.
Open the rejection method experiment and select the triangle distribution. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and watch the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.24%3A_The_Triangle_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Irwin-Hall distribution, named for Joseph Irwin and Philip Hall, is the distribution that governs the sum of independent random variables, each with the standard uniform distribution. It is also known as the uniform sum distribution. Since the standard uniform is one of the simplest and most basic distributions (and corresponds in computer science to a random number), the Irwin-Hall is a natural family of distributions. It also serves as a conceptually simple example of the central limit theorem.
Basic Theory
Definition
Suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, each with the uniform distribution on the interval $[0, 1]$ (the standard uniform distribution). For $n \in \N_+$, let $X_n = \sum_{i=1}^n U_i$ Then $X_n$ has the Irwin-Hall distribution of order $n$.
So $X_n$ has a continuous distribution on the interval $[0, n]$ for $n \in \N_+$.
Distribution Functions
Let $f$ denote the probability density function of the standard uniform distribution, so that $f(x) = 1$ for $0 \le x \le 1$ (and is 0 otherwise). It follows immediately that the probability density function $f_n$ of $X_n$ satisfies $f_n = f^{*n}$, where of course $f^{*n}$ is the $n$-fold convolution power of $f$. We can compute $f_2$ and $f_3$ by hand.
The probability density function $f_2$ of $X_2$ is given by $f_2(x) = \begin{cases} x, & 0 \le x \le 1 \ x - 2 (x - 1), & 1 \le x \le 2 \end{cases}$
Proof
Note that $X_2$ takes values in $[0, 2]$ and $f_2(x) = \int_\R f(u) f(x - u) \, du$ for $x \in [0, 2]$. The integral reduces to $\int_0^x 1 \, du = x$ for $0 \le x \le 1$ and the integral reduces to $\int_{x-1}^1 1 \, du = 2 - x$ for $1 \le x \le 2$.
Note that the graph of $f_2$ on $[0, 2]$ consists of two lines, pieced together in a continuous way at $x = 1$. The form given above is not the simplest, but makes the continuity clear, and will be helpful when we generalize.
In the special distribution simulator, select the Irwin-Hall distribution and set $n = 2$. Note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The probability density function $f_3$ of $X_3$ is given by $f_3(x) = \begin{cases} \frac{1}{2} x^2, & 0 \le x \le 1 \ \frac{1}{2} x^2 - \frac{3}{2}(x - 1)^2, & 1 \le x \le 2 \ \frac{1}{2} x^2 - \frac{3}{2}(x - 1)^2 + \frac{3}{2}(x - 2)^2, & 2 \le x \le 3 \end{cases}$
Note that the graph of $f_3$ on $[0, 3]$ consists of three parabolas pieced together in a continuous way at $x = 1$ and $x = 2$. The expressions for $f_3(x)$ for $1 \le x \le 2$ and $2 \le x \le 3$ can be expanded and simplified, but again the form given above makes the continuity clear, and will be helpful when we generalize.
In the special distribution simulator, select the Irwin-Hall distribution and set $n = 3$. Note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
Naturally, we don't want to perform the convolutions one at a time; we would like a general formula. To state the formula succinctly, we need to recall the floor function: $\lfloor x \rfloor = \max\{ n \in \Z: n \le x\}, \quad x \in \R$ so that $\lfloor x \rfloor = j$ if $j \in \Z$ and $j \le x \lt j + 1$.
For $n \in \N_+$, the probability density function $f_n$ of $X_n$ is given by $f_n(x) = \frac{1}{(n - 1)!} \sum_{k=0}^{\lfloor x \rfloor} (-1)^k \binom{n}{k} (x - k)^{n-1}, \quad x \in \R$
Proof
Let $f_n$ denote the function given by the formula above. Clearly $X_n$ takes values in $[0, n]$, so first let's note that $f_n$ gives the correct value outside of this interval. If $x \lt 0$, the sum is over an empty index set and hence is 0. Suppose $x \gt n$. Since $\binom{n}{k} = 0$ for $k \gt n$, we have $f_n(x) = \frac{1}{(n - 1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x - k)^{n-1}, \quad x \in \R$ Using the binomial theorem, \begin{align*} \sum_{k=0}^n (-1)^k \binom{n}{k} (x - k)^{n-1} & = \sum_{k=0}^n (-1)^k \binom{n}{k} \sum_{j=0}^{n-1} \binom{n - 1}{j} x^j (-k)^{n - 1 - j} \ & = \sum_{j=0}^{n-1} (-1)^{n - 1 - j} \binom{n - 1}{j} x^j \sum_{k=0}^n (-1)^k \binom{n}{k} k^{n - 1 - j} \end{align*} The second sum in the last expression is 0 for $j \in \{0, 1, \ldots n - 1\}$ by the alternating series identity for binomial coefficients. We will see this identity again.
To show that the formula is correct on $[0, n]$ we use induction on $n$. Suppose that $n = 1$. If $0 \lt x \lt 1$, then $\lfloor x \rfloor = 0$ so $f_1(x) = \frac{1}{0!} (-1)^0 \binom{1}{0} x^0 = 1 = f(x)$ Suppose now that the formula is correct for a given $n \in \N_+$. We need to show that $f_n * f = f_{n+1}$. Note that $(f_n * f)(x) = \int_\R f_n(y) f (x - y) d y = \int_{x-1}^x f_n(y) dy$ As often with convolutions, we must take cases. Suppose that $j \le x \lt j + 1$ where $j \in \{0, 1, \ldots, n\}$. Then $(f_n * f)(x) = \int_{x-1}^x f_n(y) dy = \int_{x-1}^j f_n(y) dy + \int_j^x f_n(y) dy$ Substituting the formula for $f_n(y)$ and integrating gives \begin{align*} & \int_{x-1}^j f_n(y) dy = \frac{1}{n!} \sum_{k=0}^{j-1} (-1)^k \binom{n}{k}(j - k)^n - \frac{1}{n!} \sum_{k=0}^{j-1} (-1)^k \binom{n}{k}(x - 1 - k)^n \ & \int_j^x f_n(y) dy = \frac{1}{n!} \sum_{k=0}^j (-1)^k \binom{n}{k} (x - k)^n - \frac{1}{n!} \sum_{k=0}^j (-1)^k \binom{n}{k}(j - k)^n \end{align*} Adding these together, note that the first sum in the first equation cancels the second sum in the second equation. Re-indexing the second sum in the first equation we have $(f_n * f)(x) = \frac{1}{n!}\sum_{k=1}^j (-1)^k \binom{n}{k - 1}(x - k)^n + \frac{1}{n!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x - k)^n$ Finally, using the famous binomial identity $\binom{n}{k - 1} + \binom{n}{k} = \binom{n+1}{k}$ for $k \in \{1, 2, \ldots n\}$ we have $(f_n * f)(x) = \frac{1}{n!} \sum_{k=0}^j (-1)^k \binom{n+1}{k} (x - k)^n = f_{n+1}(x)$
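A direct Python transcription of the closed-form probability density function above; the function name is ours.

```python
from math import comb, factorial, floor

def irwin_hall_pdf(x, n):
    """The probability density function f_n from the closed-form sum; 0 outside [0, n]."""
    if x < 0 or x > n:
        return 0.0
    return sum((-1) ** k * comb(n, k) * (x - k) ** (n - 1)
               for k in range(floor(x) + 1)) / factorial(n - 1)

# spot check against the triangular density f_2 computed above
print(irwin_hall_pdf(0.5, 2), irwin_hall_pdf(1.0, 2), irwin_hall_pdf(1.5, 2))  # 0.5, 1.0, 0.5
```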
Note that for $n \in \N_+$, the graph of $f_n$ on $[0, n]$ consists of $n$ polynomials of degree $n - 1$ pieced together in a continuous way. Such a construction is known as a polynomial spline. The points where the polynomials are connected are known as knots. So $f_n$ is a polynomial spline of degree $n - 1$ with knots at $x \in \{1, 2, \ldots, n - 1\}$. There is another representation of $f_n$ as a sum. To state this one succinctly, we need to recall the sign function: $\sgn(x) = \begin{cases} -1, & x \lt 0 \ 0, & x = 0 \ 1, & x \gt 0 \end{cases}$
For $n \in \N_+$, the probability density function $f_n$ of $X_n$ is given by $f_n(x) = \frac{1}{2 (n - 1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} \sgn(x - k) (x - k)^{n-1}, \quad x \in \R$
Direct Proof
Let $g_n$ denote the function defined in the theorem. We will show directly that $g_n = f_n$, the probability density function given in the previous theorem. Suppose that $j \le x \lt j + 1$, so that $\lfloor x \rfloor = j$. Note that $\sgn(x - k) = 1$ for $k \lt j$ and $\sgn(x - k) = - 1$ for $k \gt j$. Hence $g_n(x) = \frac{1}{2(n - 1)!} \sum_{k=0}^j (-1)^k \binom{n}{k} (x - k)^{n-1} - \frac{1}{2(n - 1)!} \sum_{k=j+1}^n (-1)^k \binom{n}{k} (x - k)^{n-1}$ Adding and subtracting a copy of the first term gives \begin{align*} g_n(x) & = \frac{1}{(n - 1)!} \sum_{k=0}^j (-1)^k \binom{n}{k} (x - k)^{n-1} - \frac{1}{2(n - 1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} (x - k)^{n-1}\ & = f_n(x) - \frac{1}{2(n - 1)!}\sum_{k=0}^n (-1)^k \binom{n}{k} (x - k)^{n-1} \end{align*} The last sum is identically 0, from the proof of the previous theorem.
Proof by induction
For $n = 1$ the displayed formula is $\frac{1}{2}[\sgn(x) x^0 - \sgn(x - 1) (x - 1)^0] = \frac{1}{2}[\sgn(x) - \sgn(x - 1)] = \begin{cases} 1, & 0 \lt x \lt 1 \ 0, & \text{otherwise} \end{cases}$ So the formula is correct for $n = 1$. Assume now that the formula is correct for $n \in \N_+$. Then \begin{align} f_{n+1}(x) & = (f_n * f)(x) = \int_\R \frac{1}{2(n - 1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} \sgn(u - k) (u - k)^{n-1} f(x - u) \, du \ & = \frac{1}{2(n - 1)!} \sum_{k=0}^n (-1)^k \binom{n}{k} \int_{x-1}^x \sgn(u - k) (u - k)^{n-1} \, du \end{align} But $\int_{x-1}^x \sgn(u - k) (u - k)^{n-1} \, du = \frac{1}{n}\left[\sgn(x - k) (x - k)^n - \sgn(x - k - 1) (x - k - 1)^n\right]$ for $k \in \{0, 1, \ldots, n\}$. So substituting and re-indexing one of the sums gives $f_{n+1}(x) = \frac{1}{2 n!} \sum_{k=0}^n (-1)^k \binom{n}{k} \sgn(x - k) (x - k)^n + \frac{1}{2 n!} \sum_{k=1}^{n+1} (-1)^k \binom{n}{k-1} \sgn(x - k) (x - k)^n$ Using the famous identity $\binom{n}{k} + \binom{n}{k-1} = \binom{n + 1}{k}$ for $k \in \{1, 2, \ldots, n\}$ we finally get $f_{n+1}(x) = \frac{1}{2 n!} \sum_{k=0}^{n+1} (-1)^k \binom{n+1}{k} \sgn(x - k) (x - k)^n$ which verifies the formula for $n + 1$.
Open the special distribution simulator and select the Irwin-Hall distribution. Start with $n = 1$ and increase $n$ successively to the maximum $n = 10$. Note the shape of the probability density function. For various values of $n$, run the simulation 1000 times and compare the empirical density function to the probability density function.
For $n \in \{2, 3, \ldots\}$, the Irwin-Hall distribution is symmetric and unimodal, with mode at $n / 2$.
The distribution function $F_n$ of $X_n$ is given by $F_n(x) = \frac{1}{n!} \sum_{k=0}^{\lfloor x \rfloor} (-1)^k \binom{n}{k} (x - k)^n, \quad x \in [0, n]$
Proof
This follows from the first form of the PDF and integration.
So $F_n$ is a polynomial spline of degree $n$ with knots at $\{1, 2, \ldots, n - 1\}$. The alternate form of the probability density function leads to an alternate form of the distribution function.
The distribution function $F_n$ of $X_n$ is given by $F_n(x) = \frac{1}{2} + \frac{1}{2 n!} \sum_{k=0}^n (-1)^k \binom{n}{k} \sgn(x - k) (x - k)^n, \quad x \in [0, n]$
Proof
The result follows from the second form of the PDF and integration.
The quantile function $F_n^{-1}$ does not have a simple representation, but of course by symmetry, the median is $n/2$.
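Since $F_n$ is strictly increasing on $[0, n]$, quantiles can still be computed numerically, for example by bisection. A Python sketch, with function names and tolerance chosen by us:

```python
from math import comb, factorial, floor

def irwin_hall_cdf(x, n):
    """The distribution function F_n from the closed-form sum."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    return sum((-1) ** k * comb(n, k) * (x - k) ** n
               for k in range(floor(x) + 1)) / factorial(n)

def irwin_hall_quantile(p, n, tol=1e-10):
    """Invert F_n by bisection on [0, n], since no closed form is available."""
    lo, hi = 0.0, float(n)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if irwin_hall_cdf(mid, n) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = 3
print(irwin_hall_quantile(0.25, n), irwin_hall_quantile(0.5, n), irwin_hall_quantile(0.75, n))
# the median should be n/2 = 1.5 by symmetry
```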
Open the special distribution calculator and select the Irwin-Hall distribution. Vary $n$ from 1 to 10 and note the shape of the distribution function. For each value of $n$ compute the first and third quartiles.
Moments
The moments of the Irwin-Hall distribution are easy to obtain from the representation as a sum of independent standard uniform variables. Once again, we assume that $X_n$ has the Irwin-Hall distribution of order $n \in \N_+$.
The mean and variance of $X_n$ are
1. $\E(X_n) = n / 2$
2. $\var(X_n) = n / 12$
Proof
This follows immediately from the representation $X_n = \sum_{i=1}^n U_i$ where $\bs U = (U_1, U_2, \ldots)$ is a sequence of independent, standard uniform variables, since $\E(U_i) = 1/2$ and $\var(U_i) = 1/12$
Open the special distribution simulator and select the Irwin-Hall distribution. Vary $n$ and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of $n$ run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X_n$ are
1. $\skw(X_n) = 0$
2. $\kur(X_n) = 3 - \frac{6}{5 n}$
Proof
The fact that the skewness is 0 follows immediately from the symmetry of the distribution (once we know that $X_n$ has moments of all orders). The kurtosis result follows from the usual formula and the moments of the standard uniform distribution.
Note that $\kur(X_n) \to 3$, the kurtosis of the normal distribution, as $n \to \infty$. That is, the excess kurtosis $\kur(X_n) - 3 \to 0$ as $n \to \infty$.
Open the special distribution simulator and select the Irwin-Hall distribution. Vary $n$ and note the shape of the probability density function in light of the previous results on skewness and kurtosis. For selected values of $n$ run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The moment generating function $M_n$ of $X_n$ is given by $M_n(0) = 1$ and $M_n(t) = \left(\frac{e^t - 1}{t}\right)^n, \quad t \in \R \setminus\{0\}$
Proof
This follows immediately from the representation $X_n = \sum_{i=1}^n U_i$ where $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent standard uniform variables. Recall that the standard uniform distribution has MGF $t \mapsto (e^t - 1) / t$, and the MGF of a sum of independent variables is the product of the MGFs.
Related Distributions
The most important connection is to the standard uniform distribution in the definition: The Irwin-Hall distribution of order $n \in \N_+$ is the distribution of the sum of $n$ independent variables, each with the standard uniform distribution. The Irwin-Hall distribution of order 2 is also a triangle distribution:
The Irwin-Hall distribution of order 2 is the triangle distribution with location parameter 0, scale parameter 2, and shape parameter $\frac{1}{2}$.
Proof
This follows immediately from the PDF $f_2$.
The Irwin-Hall distribution is connected to the normal distribution via the central limit theorem.
Suppose that $X_n$ has the Irwin-Hall distribution of order $n$ for each $n \in \N_+$. Then the distribution of $Z_n = \frac{X_n - n/2}{\sqrt{n/12}}$ converges to the standard normal distribution as $n \to \infty$.
Proof
This follows immediately from the central limit theorem, since $X_n = \sum_{i=1}^n U_i$ where $(U_1, U_2, \ldots)$ is a sequence of independent variables, each with the standard uniform distribution. Note that $Z_n$ is the standard score of $X_n$.
Thus, if $n$ is large, $X_n$ has approximately a normal distribution with mean $n/2$ and variance $n/12$.
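A quick Python comparison of the exact distribution function with the normal approximation for $n = 10$; the closed-form CDF is repeated here so the sketch stands alone, and the evaluation points are ours.

```python
from math import comb, factorial, floor, erf, sqrt

def irwin_hall_cdf(x, n):
    """Closed-form distribution function F_n."""
    if x <= 0:
        return 0.0
    if x >= n:
        return 1.0
    return sum((-1) ** k * comb(n, k) * (x - k) ** n
               for k in range(floor(x) + 1)) / factorial(n)

def normal_cdf(x, mu, sigma):
    """Distribution function of the normal distribution with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

n = 10
for x in (3.0, 4.0, 5.0, 6.0, 7.0):
    print(x, round(irwin_hall_cdf(x, n), 4), round(normal_cdf(x, n / 2, sqrt(n / 12)), 4))
```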
Open the special distribution simulator and select the Irwin-Hall distribution. Start with $n = 1$ and increase $n$ successively to the maximum $n = 10$. Note how the probability density function becomes more normal as $n$ increases. For various values of $n$, run the simulation 1000 times and compare the empirical density function to the probability density function.
The Irwin-Hall distribution of order $n$ is trivial to simulate, as the sum of $n$ random numbers. Since the probability density function is bounded on a bounded support interval, the distribution can also be simulated via the rejection method. Computationally, this is a dumb thing to do, of course, but it can still be a fun exercise.
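The simulation really is just a sum of random numbers; a brief Python sketch (sample sizes are ours) checks the sample mean and variance against $n/2$ and $n/12$.

```python
import random

def irwin_hall_sample(n):
    """The sum of n random numbers."""
    return sum(random.random() for _ in range(n))

n = 5
xs = [irwin_hall_sample(n) for _ in range(100000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, n / 2)    # sample mean vs E(X_n) = n/2
print(var, n / 12)    # sample variance vs var(X_n) = n/12
```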
Open the rejection method experiment and select the Irwin-Hall distribution. For various values of $n$, run the simulation 2000 times. Compare the empirical density function, mean, and standard deviation to their distributional counterparts. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.25%3A_The_Irwin-Hall_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The U-power distribution is a U-shaped family of distributions based on a simple family of power functions.
The Standard U-Power Distribution
Distribution Functions
The standard U-power distribution with shape parameter $k \in \N_+$ is a continuous distribution on $[-1, 1]$ with probability density function $g$ given by $g(x) = \frac{2 k + 1}{2} x^{2 k}, \quad x \in [-1, 1]$
Proof
From simple calculus, $g$ is a probability density function: $\int_{-1}^1 x^{2 k} dx = \frac{2}{2 k + 1}$
The algebraic form of the probability density function explains the name of the distribution. The most common of the standard U-power distributions is the U-quadratic distribution, which corresponds to $k = 1$.
The standard U-power probability density function $g$ satisfies the following properties:
1. $g$ is symmetric about $x = 0$.
2. $g$ decreases and then increases with minimum value at $x = 0$.
3. The modes are $x = \pm 1$.
4. $g$ is concave upward.
Proof
Again, these properties follow from basic calculus since \begin{align} g^\prime(x) & = \frac{1}{2}(2 k + 1)(2 k) x^{2 k - 1}, \quad x \in [-1, 1] \ g^{\prime \prime}(x) & = \frac{1}{2}(2 k + 1)(2 k) (2 k - 1) x^{2k - 2}, \quad x \in [-1, 1] \end{align}
Open the Special Distribution Simulator and select the U-power distribution. Vary the shape parameter but keep the default values for the other parameters. Note the graph of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ is given by $G(x) = \frac{1}{2} \left(1 + x^{2 k + 1}\right), \quad x \in [-1, 1]$
Proof
This follows from the PDF above and simple calculus.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = (2 p - 1)^{1/(2 k + 1)}$ for $p \in [0, 1]$.
1. $G^{-1}(1 - p) = -G^{-1}(p)$ for $p \in [0, 1]$.
2. The first quartile is $q_1 = -\frac{1}{2^{1/(2 k + 1)}}$.
3. The median is 0.
4. The third quartile is $q_3 = \frac{1}{2^{1/(2 k + 1)}}$.
Proof
The formula for the quantile function follows immediately from the CDF above by solving $p = G(x)$ for $x$ in terms of $p \in [0, 1]$. Property (a) follows from the symmetry of the distribution about 0.
Open the Special Distribution Calculator and select the U-power distribution. Vary the shape parameter but keep the default values for the other parameters. Note the shape of the distribution function. For various values of the shape parameter, compute a few quantiles.
Moments
Suppose that $Z$ has the standard U-power distribution with parameter $k \in \N_+$. The moments (about 0) are easy to compute.
Let $n \in \N$. The moment of order $2 n + 1$ is $\E(Z^{2n + 1}) = 0$. The moment of order $2 n$ is $\E\left(Z^{2 n}\right) = \frac{2 k + 1}{2 (n + k) + 1}$
Proof
This result follows from simple calculus. The fact that the odd order moments are 0 also follows from the symmetry of the distribution about 0.
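As a numerical check of the moment formula, here is a simple midpoint-rule integration in Python; the choice of $n$, $k$, and step count is ours.

```python
def upower_even_moment(n, k, steps=100000):
    """Midpoint-rule approximation of E(Z^(2n)) for the standard U-power distribution."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        z = -1.0 + (i + 0.5) * h          # midpoint of the i-th subinterval of [-1, 1]
        total += z ** (2 * n) * (2 * k + 1) / 2 * z ** (2 * k)
    return total * h

n, k = 2, 3
print(upower_even_moment(n, k))              # numerical integration
print((2 * k + 1) / (2 * (n + k) + 1))       # closed form, 7/11 here
```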
Since the mean is 0, the moments about 0 are also the central moments.
The mean and variance of $Z$ are
1. $\E(Z) = 0$
2. $\var(Z) = \frac{2 k + 1}{2 k + 3}$
Proof
These results follow from the previous general moment result.
Note that $\var(Z) \to 1$ as $k \to \infty$.
Open the Special Distribution Simulator and select the U-power distribution. Vary the shape parameter but keep the default values for the other parameters. Note the position and size of the mean $\pm$ standard deviation bar. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = \frac{(2 k + 3)^2}{(2 k + 5)(2 k + 1)}$
Proof
The skewness is 0 by the symmetry of the distribution. Since the mean is 0, the kurtosis is $\E(Z^4) / [\E(Z^2)]^2$ and so the result follows from the general moment result above
Note that $\kur(Z) \to 1$ as $k \to \infty$. The excess kurtosis is $\kur(Z) - 3 = \frac{(2 k + 3)^2}{(2 k + 5)(2 k + 1)} - 3$ and so $\kur(Z) - 3 \to -2$ as $k \to \infty$.
Related Distributions
The U-power probability density function $g$ actually makes sense for $k = 0$ as well, and in this case the distribution reduces to the uniform distribution on the interval $[-1, 1]$. But of course, this distribution is not U-shaped, except in a degenerate sense. There are other connections to the uniform distribution. The first is a standard result since the U-power quantile function has a simple, closed representation:
Suppose that $k \in \N_+$.
1. If $U$ has the standard uniform distribution then $Z = (2 U - 1)^{1/(2 k + 1)}$ has the standard U-power distribution with parameter $k$.
2. If $Z$ has the standard U-power distribution with parameter $k$ then $U = \frac{1}{2} \left(1 + Z^{2 k + 1} \right)$ has the standard uniform distribution.
Part (a) of course leads to the random quantile method of simulation.
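A Python sketch of this random quantile method; note that the odd root of a negative number must be taken by hand, since a negative float raised to a fractional power is not real-valued in most languages. Names and parameter values are ours.

```python
import random

def upower_sample(k):
    """Z = (2U - 1)^(1/(2k + 1)); the odd root of a negative number is taken by hand."""
    t = 2.0 * random.random() - 1.0
    root = abs(t) ** (1.0 / (2 * k + 1))
    return root if t >= 0 else -root

k = 2
zs = [upower_sample(k) for _ in range(100000)]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean ** 2
print(mean)                                  # should be near 0
print(var, (2 * k + 1) / (2 * k + 3))        # should be near (2k + 1)/(2k + 3)
```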
Open the random quantile simulator and select the U-power distribution. Vary the shape parameter but keep the default values for the other parameters. Note the shape of the distribution and density functions. For selected values of the parameter, run the simulation 1000 times and note the random quantiles. Compare the empirical density function to the probability density function.
The standard U-power distribution with shape parameter $k \in \N_+$ converges to the discrete uniform distribution on $\{-1, 1\}$ as $k \to \infty$.
Proof
This follows from the definition of convergence in distribution. The U-power distribution function $G$ is 0 on $(-\infty, -1]$, is 1 on $[1, \infty)$, and is given by the formula above on $[-1, 1]$. As $k \to \infty$, $G(x) \to 0$ for $x \in (-\infty, -1)$, $G(x) \to \frac{1}{2}$ for $x \in (-1, 1)$, and $G(x) \to 1$ for $x \in (1, \infty)$. This agrees with the distribution function of the discrete uniform distribution on $\{-1, 1\}$ except at the points of discontinuity $\pm 1$.
The General U-Power Distribution
Like so many standard distributions, the standard U-power distribution is generalized by adding location and scale parameters.
Definition
Suppose that $Z$ has the standard U-power distribution with shape parameter $k \in \N_+$. If $\mu \in \R$ and $c \in (0, \infty)$ then $X = \mu + c Z$ has the U-power distribution with shape parameter $k$, location parameter $\mu$ and scale parameter $c$.
Note that $X$ has a continuous distribution on the interval $[a, b]$ where $a = \mu - c$ and $b = \mu + c$, so the distribution can also be parameterized by the shape parameter $k$ and the endpoints $a$ and $b$. With this parametrization, the location parameter is $\mu = \frac{a + b}{2}$ and the scale parameter is $c = \frac{b - a}{2}$.
Distribution Functions
Suppose that $X$ has the U-power distribution with shape parameter $k \in \N_+$, location parameter $\mu \in \R$, and scale parameter $c \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{2 k + 1}{2 c} \left(\frac{x - \mu}{c}\right)^{2 k}, \quad x \in [\mu - c, \mu + c]$
1. $f$ is symmetric about $\mu$.
2. $f$ decreases and then increases with minimum value at $x = \mu$.
3. The modes are at $x = \mu \pm c$.
4. $f$ is concave upward.
Proof
Recall that $f(x) = \frac{1}{c} g\left(\frac{x - \mu}{c}\right)$ where $g$ is the PDF of $Z$.
Open the Special Distribution Simulator and select the U-power distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has distribution function $F$ given by $F(x) = \frac{1}{2}\left[1 + \left(\frac{x - \mu}{c}\right)^{2 k + 1}\right], \quad x \in [\mu - c, \mu + c]$
Proof
Recall that $F(x) = G\left(\frac{x - \mu}{c}\right)$ where $G$ is the CDF of $Z$.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = \mu + c (2 p - 1)^{1/(2 k + 1)}$ for $p \in [0, 1]$.
1. $F^{-1}(1 - p) = 2 \mu - F^{-1}(p)$ for $p \in [0, 1]$.
2. The first quartile is $q_1 = \mu - c \frac{1}{2^{1/(2 k + 1)}}$
3. The median is $\mu$.
4. The third quartile is $q_3 = \mu + c \frac{1}{2^{1/(2 k + 1)}}$
Proof
Recall that $F^{-1}(p) = \mu + c G^{-1}(p)$ where $G^{-1}$ is the quantile function of $Z$.
Open the Special Distribution Calculator and select the U-power distribution. Vary the parameters and note the graph of the distribution function. For various values of the parameters, compute selected values of the distribution function and the quantile function.
Moments
Suppose again that $X$ has the U-power distribution with shape parameter $k \in \N_+$, location parameter $\mu \in \R$, and scale parameter $c \in (0, \infty)$.
The mean and variance of $X$ are
1. $\E(X) = \mu$
2. $\var(X) = c^2 \frac{2 k + 1}{2 k + 3}$
Proof
These results follow from the representation $X = \mu + c Z$ where $Z$ has the standard U-power distribution with shape parameter $k$, and from the mean and variance of $Z$.
Note that $\var(X) \to c^2$ as $k \to \infty$.
Open the Special Distribution Simulator and select the U-power distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The moments about 0 are messy, but the central moments are simple.
Let $n \in \N_+$. The central moment of order $2 n + 1$ is $\E\left[(X - \mu)^{2n+1}\right] = 0$. The central moment of order $2 n$ is $\E\left[(X - \mu)^{2 n}\right] = c^{2 n} \frac{2 k + 1}{2 (n + k) + 1}$
Proof
This follows from the representation $X = \mu + c Z$ where $Z$ has the standard U-power distribution with shape parameter $k$, and the central moments of $Z$.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = \frac{(2 k + 3)^2}{(2 k + 5)(2 k + 1)}$
Proof
Recall that the skewness and kurtosis are defined in terms of the standard score of $X$ and hence are invariant under a location-scale transformation. Thus, the results are the same as for the standard distribution.
Again, $\kur(X) \to 1$ as $k \to \infty$ and the excess kurtosis is $\kur(X) - 3 = \frac{(2 k + 3)^2}{(2 k + 5)(2 k + 1)} - 3$
Related Distributions
Since the U-power distribution with a given shape parameter is a location-scale family, it is trivially closed under location-scale transformations.
Suppose that $X$ has the U-power distribution with shape parameter $k \in \N_+$, location parameter $\mu \in \R$, and scale parameter $c \in (0, \infty)$. If $\alpha \in \R$ and $\beta \in (0, \infty)$, then $Y = \alpha + \beta X$ has the U-power distribution with shape parameter $k$, location parameter $\alpha + \beta \mu$, and scale parameter $\beta c$.
Proof
From the definition, we can take $X = \mu + c Z$ where $Z$ has the standard U-power distribution with shape parameter $k$. Then $Y = \alpha + \beta X = (\alpha + \beta \mu) + (\beta c) Z$.
As before, since the U-power distribution function and the U-power quantile function have simple forms, we have the usual connections with the standard uniform distribution.
Suppose that $k \in \N_+$, $\mu \in \R$ and $c \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = \mu + c (2 U - 1)^{1/(2 k + 1)}$ has the U-power distribution with shape parameter $k$, location parameter $\mu$, and scale parameter $c$.
2. If $X$ has the U-power distribution with shape parameter $k$, location parameter $\mu$, and scale parameter $c$, then $U = \frac{1}{2} \left[1 + \left(\frac{X - \mu}{c}\right)^{2 k + 1} \right]$ has the standard uniform distribution.
Again, part (a) of course leads to the random quantile method of simulation.
Open the random quantile simulator and select the U-power distribution. Vary the parameters and note the shape of the distribution and density functions. For selected values of the parameters, run the simulation 1000 times and note the random quantiles. Compare the empirical density function to the probability density function.
The U-power distribution with given location and scale parameters converges to the discrete uniform distribution at the endpoints as the shape parameter increases.
The U-power distribution with shape parameter $k \in \N_+$, location parameter $\mu \in \R$, and scale parameter $c \in (0, \infty)$ converges to the discrete uniform distribution on $\{\mu - c, \mu + c\}$ as $k \to \infty$.
Proof
This follows from the convergence result for the standard distribution and basic properties of convergence in distribution.
The U-power distribution is a general exponential family in the shape parameter, if the location and scale parameters are fixed.
Suppose that $X$ has the U-power distribution with unspecified shape parameter $k \in \N_+$, but with specified location parameter $\mu \in \R$ and scale parameter $c \in (0, \infty)$. Then $X$ has a one-parameter exponential distribution with natural parameter $2 k$ and natural statistic $\ln\left|\frac{X - \mu}{c}\right|$.
Proof
This follows from the definition of the general exponential family, since the PDF of the U-power distribution can be written as $f(x) = \frac{2 k + 1}{2 c} \exp\left[2 k \ln\left|\frac{x - \mu}{c}\right|\right], \quad x \in [\mu - c, \mu + c]$
Since the U-power distribution has a bounded probability density function on a bounded support interval, it can also be simulated via the rejection method.
Open the rejection method experiment and select the U-power distribution. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and watch the scatterplot. Compare the empirical density function, mean, and standard deviation to their distributional counterparts. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.26%3A_The_U-Power_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The sine distribution is a simple probability distribution based on a portion of the sine curve. It is also known as Gilbert's sine distribution, named for the American geologist Grove Karl (GK) Gilbert who used the distribution in 1892 to study craters on the moon.
The Standard Sine Distribution
Distribution Functions
The standard sine distribution is a continuous distribution on $[0, 1]$ with probability density function $g$ given by $g(z) = \frac{\pi}{2} \sin(\pi z), \quad z \in [0, 1]$
1. $g$ is symmetric about $z = \frac 1 2$.
2. $g$ increases and then decreases with mode at $z = \frac 1 2$.
3. $g$ is concave downward.
Proof
From simple calculus, $g$ is a probability density function: $\sin(\pi x) \ge 0$ for $x \in [0, 1]$ and $\int_0^1 \sin(\pi z) dz = \frac{2}{\pi}$ The properties follow from basic calculus since \begin{align} g^\prime(z) & = \frac{\pi^2}{2} \cos(\pi z), \quad z \in [0, 1] \ g^{\prime \prime}(z) & = -\frac{\pi^3}{2} \sin(\pi z), \quad z \in [0, 1] \end{align}
Open the Special Distribution Simulator and select the sine distribution. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ is given by $G(z) = \frac{1}{2} [1 - \cos(\pi z)]$ for $z \in [0, 1]$.
Proof
This follows from the PDF above and simple calculus.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = \frac{1}{\pi} \arccos(1 - 2 p)$ for $p \in [0, 1]$.
1. The first quartile is $q_1 = \frac{1}{3}$.
2. The median is $\frac{1}{2}$.
3. The third quartile is $q_3 = \frac{2}{3}$.
Proof
The formula for the quantile function follows immediately from the CDF above by solving $p = G(z)$ for $z$ in terms of $p \in [0, 1]$.
Open the Special Distribution Calculator and select the sine distribution. Compute a few quantiles.
Moments
Suppose that $Z$ has the standard sine distribution. The moment generating function can be given in closed form.
The moment generating function $m$ of $Z$ is given by $m(t) = \E\left(e^{t Z}\right) = \frac{\pi^2 (1 + e^t)}{2(t^2 + \pi^2)}, \quad t \in \R$
Proof
Note first that $m(t) = \frac{\pi}{2} \int_0^1 e^{t z} \sin(\pi z) \, dz$ Integrating by parts with $u = e^{t z}$ and $dv = \sin(\pi z) dz$ gives $m(t) = \frac{1}{2} (1 + e^t) + \frac{t}{2} \int_0^1 e^{t z} \cos(\pi z) \, dz$ Integrating by parts again with $u = e^{t z}$ and $dv = \cos(\pi z) dz$ gives $m(t) = \frac{1}{2} (1 + e^t) - \frac{t^2}{\pi^2} m(t)$ Solving for $m(t)$ gives the result.
The moments of all orders exist, but a general formula is complicated and involves special functions. However, the mean and variance are easy to compute.
The mean and variance of $Z$ are
1. $\E(Z) = 1/2$
2. $\var(Z) = 1/4 - 2 / \pi^2$
Proof
1. We know that the mean exists since the PDF is continuous on a bounded interval. By symmetry, the mean must be $1/2$.
2. Integration by parts (twice) gives $\E(Z^2) = \int_0^1 z^2 \frac{\pi}{2} \sin(\pi z) \, dz = \frac{1}{2} - \frac{2}{\pi^2}$ The variance then follows from the usual computational formula $\var(Z) = \E(Z^2) - [\E(Z)]^2$.
Of course, the mean and variance could also be obtained by differentiating the MGF.
Numerically, $\sd(Z) \approx 0.2176$.
Open the Special Distribution Simulator and select the sine distribution. Note the position and size of the mean $\pm$ standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = (384 - 48 \pi^2 + \pi^4) / (\pi^2 - 8)^2$
Proof
1. The skewness is 0 by the symmetry of the distribution.
2. The formula for the kurtosis follows from the usual computational formula and the first four moments: $\E(Z) = 1/2$, $\E(Z^2) = 1/2 - 2 / \pi^2$, $\E(Z^3) = 1/2 - 3 / \pi^2$, $\E(Z^4) = 1/2 + 24 / \pi^4 - 6 / \pi^2$.
Numerically, $\kur(Z) \approx 2.1938$.
Related Distributions
Since the distribution function and the quantile function have closed form representations, the standard sine distribution has the usual connection to the standard uniform distribution.
1. If $U$ has the standard uniform distribution then $Z = G^{-1}(U) = \frac{1}{\pi} \arccos(1 - 2 U)$ has the standard sine distribution.
2. If $Z$ has the standard sine distribution then $U = G(Z) = \frac{1}{2} [1 - \cos(\pi Z)]$ has the standard uniform distribution.
Part (a) of course leads to the random quantile method of simulation.
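A Python sketch of the random quantile method for the standard sine distribution, with the sample mean and variance checked against the values above; names and sample size are ours.

```python
import random
from math import acos, pi

def sine_sample():
    """Random quantile method for the standard sine distribution."""
    return acos(1.0 - 2.0 * random.random()) / pi

zs = [sine_sample() for _ in range(100000)]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean ** 2
print(mean, 0.5)                     # sample mean vs E(Z) = 1/2
print(var, 0.25 - 2.0 / pi ** 2)     # sample variance vs var(Z) = 1/4 - 2/pi^2
```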
Open the random quantile simulator and select the sine distribution. Note the shape of the distribution and density functions. Run the simulation 1000 times and note the random quantiles. Compare the empirical density function to the probability density function.
Since the probability density function is continuous and is defined on a closed, bounded interval, the standard sine distribution can also be simulated using the rejection method.
Open the rejection method app and select the sine distribution. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The General Sine Distribution
As with so many other standard distributions, the standard sine distribution is generalized by adding location and scale parameters.
Suppose that $Z$ has the standard sine distribution. For $a \in \R$ and $b \in (0, \infty)$, random variable $X = a + b Z$ has the sine distribution with location parameter $a$ and scale parameter $b$.
Distribution Functions
Analogs of the results above for the standard sine distribution follow easily from basic properties of the location-scale transformation. Suppose that $X$ has the sine distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. So $X$ has a continuous distribution on the interval $[a, a + b]$.
The probability density function $f$ of $X$ is given by $f(x) = \frac{\pi}{2 b} \sin\left(\pi \frac{x - a}{b}\right), \quad x \in [a, a + b]$
1. $f$ is symmetric about $x = a + b / 2$.
2. $f$ increases and then decreases, with mode $x = a + b / 2$.
3. $f$ is concave downward.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right), \quad x \in \R$ where $g$ is the standard PDF.
Pure scale transformations ($a = 0$ and $b \gt 0$) are particularly common, since $X$ often represents a random angle. The scale transformation with $b = \pi$ gives the angle in radians. In this case the probability density function is $f(x) = \frac{1}{2} \sin(x)$ for $x \in [0, \pi]$. Since the radian is the standard angle unit, this distribution could also be considered the standard one. The scale transformation with $b = 90$ gives the angle in degrees. In this case, the probability density function is $f(x) = \frac{\pi}{180} \sin\left(\frac{\pi}{90} x\right)$ for $x \in [0, 90]$. This was Gilbert's original formulation.
In the special distribution simulator, select the sine distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is given by $F(x) = \frac{1}{2}\left[1 - \cos\left(\pi \frac{x - a}{b}\right)\right], \quad x \in [a, a + b]$
Proof
Recall that $F(x) = G\left(\frac{x - a}{b}\right), \quad x \in \R$ where $G$ is the standard CDF.
The quantile function $F^{-1}$ of $X$ is given by $F^{-1}(p) = a + \frac{b}{\pi} \arccos(1 - 2 p), \quad p \in (0, 1)$
1. The first quartile is $a + b / 3$.
2. The median is $a + b / 2$.
3. The third quartile is $a + 2 b / 3$
Proof
Recall that $F^{-1}(p) = a + b G^{-1}(p)$ for $p \in (0, 1)$, where $G^{-1}$ is the standard quantile function.
In the special distribution calculator, select the sine distribution. Vary the parameters and note the shape and location of the probability density function and the distribution function. For selected values of the parameters, find the quantiles of order 0.1 and 0.9.
Moments
Suppose again that $X$ has the sine distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$.
The moment generating function $M$ of $X$ is given by $M(t) = \frac{\pi^2 \left(e^{a t} + e^{(a + b) t}\right)}{2 \left(b^2 t^2 + \pi^2\right)}, \quad t \in \R$
Proof
Recall that $M(t) = e^{a t} m(b t)$ where $m$ is the standard MGF.
The mean and variance of $X$ are
1. $\E(X) = a + b / 2$
2. $\var(X) = b^2 (1 / 4 - 2 / \pi^2)$
Proof
By definition we can assume $X = a + b Z$ where $Z$ has the standard sine distribution. Using the mean and variance of $Z$ we have
1. $\E(X) = a + b \E(Z) = a + b / 2$
2. $\var(X) = b^2 \var(Z) = b^2 (1 / 4 - 2 / \pi^2)$
In the special distribution simulator, select the sine distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = (384 - 48 \pi^2 + \pi^4) / (\pi^2 - 8)^2$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under location-scale transformations. So the skewness and kurtosis of $X$ are the same as the skewness and kurtosis of $Z$.
Related Distributions
The general sine distribution is a location-scale family, so it is trivially closed under location-scale transformations.
Suppose that $X$ has the sine distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, and that $c \in \R$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the sine distribution with location parameter $c + a d$ and scale parameter $b d$.
Proof
Again by definition we can take $X = a + b Z$ where $Z$ has the standard sine distribution. Then $Y = c + d X = (c + a d) + (b d) Z$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/05%3A_Special_Distributions/5.27%3A_The_Sine_Distribution.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\bs}{\boldsymbol}$
The Laplace distribution, named for Pierre Simon Laplace, arises naturally as the distribution of the difference of two independent, identically distributed exponential variables. For this reason, it is also called the double exponential distribution.
The Standard Laplace Distribution
Distribution Functions
The standard Laplace distribution is a continuous distribution on $\R$ with probability density function $g$ given by $g(u) = \frac{1}{2} e^{-\left|u\right|}, \quad u \in \R$
Proof
It's easy to see that $g$ is a valid PDF. By symmetry $\int_{-\infty}^\infty \frac{1}{2} e^{-\left|u\right|} du = \int_0^\infty e^{-u} du = 1$
The probability density function $g$ satisfies the following properties:
1. $g$ is symmetric about 0.
2. $g$ increases on $(-\infty, 0]$ and decreases on $[0, \infty)$, with mode $u = 0$.
3. $g$ is concave upward on $(-\infty, 0]$ and on $[0, \infty)$ with a cusp at $u = 0$
Proof
These results follow from standard calculus, since $g(u) = \frac 1 2 e^{-u}$ for $u \in [0, \infty)$ and $g(u) = \frac 1 2 e^u$ for $u \in (-\infty, 0]$.
Open the Special Distribution Simulator and select the Laplace distribution. Keep the default parameter value and note the shape of the probability density function. Run the simulation 1000 times and compare the empirical density function and the probability density function.
The standard Laplace distribution function $G$ is given by $G(u) = \begin{cases} \frac{1}{2} e^u, & u \in (-\infty, 0] \ 1 - \frac{1}{2} e^{-u}, & u \in [0, \infty) \end{cases}$
Proof
Again this follows from basic calculus, since $g(u) = \frac{1}{2} e^u$ for $u \le 0$ and $g(u) = \frac{1}{2} e^{-u}$ for $u \ge 0$. Of course $G(u) = \int_{-\infty} ^u g(t) \, dt$.
The quantile function $G^{-1}$ given by $G^{-1}(p) = \begin{cases} \ln(2 p), & p \in \left[0, \frac{1}{2}\right] \ -\ln[2(1 - p)], & p \in \left[\frac{1}{2}, 1\right] \end{cases}$
1. $G^{-1}(1 - p) = -G^{-1}(p)$ for $p \in (0, 1)$
2. The first quartile is $q_1 = -\ln 2 \approx -0.6931$.
3. The median is $q_2 = 0$
4. The third quartile is $q_3 = \ln 2 \approx 0.6931$.
Proof
The formula for the quantile function follows immediately from the CDF by solving $p = G(u)$ for $u$ in terms of $p \in (0, 1)$. Part (a) is due to the symmetry of $g$ about 0.
Open the Special Distribution Calculator and select the Laplace distribution. Keep the default parameter value. Compute selected values of the distribution function and the quantile function.
Moments
Suppose that $U$ has the standard Laplace distribution.
$U$ has moment generating function $m$ given by $m(t) = \E\left(e^{t U}\right) = \frac{1}{1 - t^2}, \quad t \in (-1, 1)$
Proof
For $t \in (-1, 1)$, $m(t) = \int_{-\infty}^\infty e^{t u} g(u) \, du = \int_{-\infty}^0 \frac{1}{2} e^{(t + 1)u} du + \int_0^\infty \frac{1}{2} e^{(t - 1)u} du = \frac{1}{2(t + 1)} - \frac{1}{2(t - 1)} = \frac{1}{1 - t^2}$
The moments of $U$ are
1. $\E(U^n) = 0$ if $n \in \N$ is odd.
2. $\E(U^n) = n!$ if $n \in \N$ is even.
Proof
This result can be obtained from the moment generating function or directly. That the odd order moments are 0 follows from the symmetry of the distribution. For the even order moments, symmetry and an integration by parts (or using the gamma function) gives $\E(U^n) = \frac{1}{2} \int_{-\infty}^0 u^n e^u du + \frac{1}{2} \int_0^\infty u^n e^{-u} du = \int_0^\infty u^n e^{-u} du = n!$
The mean and variance of $U$ are
1. $\E(U) = 0$
2. $\var(U) = 2$
Open the Special Distribution Simulator and select the Laplace distribution. Keep the default parameter value. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $U$ are
1. $\skw(U) = 0$
2. $\kur(U) = 6$
Proof
1. This follows from the symmetry of the distribution.
2. Since $\E(U) = 0$, we have $\kur(U) = \frac{\E(U^4)}{[\E(U^2)]^2} = \frac{4!}{(2!)^2} = 6$
It follows that the excess kurtosis is $\kur(U) - 3 = 3$.
Related Distributions
Of course, the standard Laplace distribution has simple connections to the standard exponential distribution.
If $U$ has the standard Laplace distribution then $V = |U|$ has the standard exponential distribution.
Proof
Using the CDF of U we have $\P(V \le v) = \P(-v \le U \le v) = G(v) - G(-v) = 1 - e^{-v}$ for $v \in [0, \infty)$. This function is the CDF of the standard exponential distribution.
If $V$ and $W$ are independent and each has the standard exponential distribution, then $U = V - W$ has the standard Laplace distribution.
Proof using PDFs
Let $h$ denote the standard exponential PDF, extended to all of $\R$, so that $h(v) = e^{-v}$ if $v \ge 0$ and $h(v) = 0$ if $v \lt 0$. Using convolution, the PDF of $V - W$ is $g(u) = \int_\R h(v) h(v - u) dv$. If $u \ge 0$, $g(u) = \int_u^\infty e^{-v} e^{-(v - u)} dv = e^u \int_u^\infty e^{-2 v} dv = \frac{1}{2} e^{-u}$ If $u \lt 0$ then $g(u) = \int_0^\infty e^{-v} e^{-(v - u)} dv = e^u \int_0^\infty e^{-2 v} dv = \frac{1}{2} e^u$
Proof using MGFs
The MGF of $V$ is $t \mapsto 1/(1 - t)$ for $t \lt 1$. The MGF of $-W$ is $t \mapsto 1 / (1 + t)$ for $t \gt -1$. Hence the MGF of $U$ is $t \mapsto 1 / (1 - t)(1 + t) = 1 / (1 - t^2)$ for $-1 \lt t \lt 1$, which is the standard Laplace MGF.
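As a supplementary illustration (not part of the original text), here is a minimal Python/NumPy sketch that checks the difference-of-exponentials construction numerically; the seed, sample size, and comparison point are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)           # arbitrary seed for reproducibility
n = 200_000
V = rng.exponential(size=n)              # standard exponential
W = rng.exponential(size=n)
U = V - W                                # should be standard Laplace

print("sample mean    :", U.mean(), "(theory 0)")
print("sample variance:", U.var(), "(theory 2)")
print("P(U <= 1)      :", np.mean(U <= 1), "(theory", 1 - 0.5 * np.exp(-1), ")")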
If $V$ has the standard exponential distribution, $I$ has the standard Bernoulli distribution, and $V$ and $I$ are independent, then $U = (2 I - 1) V$ has the standard Laplace distribution.
Proof
If $u \ge 0$ then $\P(U \le u) = \P(I = 0) + \P(I = 1, V \le u) = \P(I = 0) + \P(I = 1) \P(V \le u) = \frac{1}{2} + \frac{1}{2}(1 - e^{-u}) = 1 - \frac{1}{2} e^{-u}$ If $u \lt 0$, $\P(U \le u) = \P(I = 0, V \gt -u) = \P(I = 0) \P(V \gt -u) = \frac{1}{2} e^{u}$
The standard Laplace distribution has a curious connection to the standard normal distribution.
Suppose that $(Z_1, Z_2, Z_3, Z_4)$ is a random sample of size 4 from the standard normal distribution. Then $U = Z_1 Z_2 + Z_3 Z_4$ has the standard Laplace distribution.
Proof
$Z_1 Z_2$ and $Z_3 Z_4$ are independent, and each has a distribution known as the product normal distribution. The MGF of this distribution is $m_0(t) = \E\left(e^{t Z_1 Z_2}\right) = \int_{\R^2} e^{t x y} \frac{1}{2 \pi} e^{-(x^2 + y^2)/2} d(x, y)$ Changing to polar coordinates gives $m_0(t) = \frac{1}{2 \pi} \int_0^{2 \pi} \int_0^\infty e^{t r^2 \cos \theta \sin \theta} e^{-r^2/2} r \, dr \, d\theta = \frac{1}{2 \pi} \int_0^{2 \pi} \int_0^\infty \exp\left[r^2\left(t \cos \theta \sin\theta - \frac{1}{2}\right)\right] r \, dr \, d\theta$ The inside integral can be done with a simple substitution for $\left|t\right| \lt 1$, yielding $m_0(t) = \frac{1}{2 \pi} \int_0^{2 \pi} \frac{1}{1 - t \sin(2 \theta)} d\theta = \frac{1}{\sqrt{1 - t^2}}$ Hence $U$ has MGF $m_0^2(t) = \frac{1}{1 - t^2}$ for $\left|t\right| \lt 1$, which again is the standard Laplace MGF.
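As a rough Monte Carlo check of this connection (a sketch added here, not part of the text), the following NumPy code compares the empirical CDF of $Z_1 Z_2 + Z_3 Z_4$ to the standard Laplace CDF computed above; the sample size and evaluation points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Z = rng.standard_normal((n, 4))
U = Z[:, 0] * Z[:, 1] + Z[:, 2] * Z[:, 3]      # Z1*Z2 + Z3*Z4

def laplace_cdf(u):
    """Standard Laplace CDF G from the section above."""
    u = np.asarray(u, dtype=float)
    return np.where(u <= 0, 0.5 * np.exp(u), 1 - 0.5 * np.exp(-u))

for u in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(u, np.mean(U <= u), laplace_cdf(u))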
The standard Laplace distribution has the usual connections to the standard uniform distribution by means of the distribution function and the quantile function computed above.
Connections to the standard uniform distribution.
1. If $V$ has the standard uniform distribution then $U = \ln(2 V) \bs{1}\left(V \lt \frac{1}{2}\right) - \ln[2(1 - V)] \bs{1}\left(V \ge \frac{1}{2}\right)$ has the standard Laplace distribution.
2. If $U$ has the standard Laplace distribution then $V = \frac{1}{2} e^U \bs{1}(U \lt 0) + \left(1 - \frac{1}{2} e^{-U}\right) \bs{1}(U \ge 0)$ has the standard uniform distribution.
From part (a), the standard Laplace distribution can be simulated with the usual random quantile method.
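The following is a minimal sketch of the random quantile method in Python/NumPy (an illustration added here, not part of the text); the function name, seed, and sample size are arbitrary choices.

```python
import numpy as np

def laplace_quantile(p):
    """Standard Laplace quantile function G^{-1} from the section above."""
    p = np.asarray(p, dtype=float)
    return np.where(p < 0.5, np.log(2.0 * p), -np.log(2.0 * (1.0 - p)))

rng = np.random.default_rng(0)
V = rng.uniform(size=100_000)            # standard uniform sample
U = laplace_quantile(V)                  # random quantile method

print("first quartile :", np.quantile(U, 0.25), "(theory", -np.log(2), ")")
print("median         :", np.median(U), "(theory 0)")
print("third quartile :", np.quantile(U, 0.75), "(theory", np.log(2), ")")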
Open the random quantile experiment and select the Laplace distribution. Keep the default parameter values and note the shape of the probability density and distribution functions. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The General Laplace Distribution
The standard Laplace distribution is generalized by adding location and scale parameters.
Suppose that $U$ has the standard Laplace distribution. If $a \in \R$ and $b \in (0, \infty)$, then $X = a + b U$ has the Laplace distribution with location parameter $a$ and scale parameter $b$.
Distribution Functions
Suppose that $X$ has the Laplace distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{1}{2 b} \exp\left(-\frac{\left|x - a\right|}{b}\right), \quad x \in \R$
1. $f$ is symmetric about $a$.
2. $f$ increases on $(-\infty, a]$ and decreases on $[a, \infty)$ with mode $x = a$.
3. $f$ is concave upward on $(-\infty, a]$ and on $[a, \infty)$ with a cusp at $x = a$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right)$ where $g$ is the standard Laplace PDF.
Open the Special Distribution Simulator and select the Laplace distribution. Vary the parameters and note the shape and location of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has distribution function $F$ given by $F(x) = \begin{cases} \frac{1}{2} \exp\left(\frac{x - a}{b}\right), & x \in (-\infty, a] \\ 1 - \frac{1}{2} \exp\left(-\frac{x - a}{b}\right), & x \in [a, \infty) \end{cases}$
Proof
Recall that $F(x) = G\left(\frac{x - a}{b}\right)$ where $G$ is the standard Laplace CDF.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = \begin{cases} a + b \ln(2 p), & 0 \le p \le \frac{1}{2} \\ a - b \ln[2(1 - p)], & \frac{1}{2} \le p \lt 1 \end{cases}$
1. $F^{-1}(1 - p) = 2 a - F^{-1}(p)$ for $p \in (0, 1)$
2. The first quartile is $q_1 = a - b \ln 2$.
3. The median is $q_2 = a$
4. The third quartile is $q_3 = a + b \ln 2$.
Proof
Recall that $F^{-1}(p) = a + b G^{-1}(p)$ where $G^{-1}$ is the standard Laplace quantile function.
Open the Special Distribution Calculator and select the Laplace distribution. For various values of the scale parameter, compute selected values of the distribution function and the quantile function.
Moments
Again, we assume that $X$ has the Laplace distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, so that by definition, $X = a + b U$ where $U$ has the standard Laplace distribution.
$X$ has moment generating function $M$ given by $M(t) = \E\left(e^{t X}\right) = \frac{e^{a t}}{1 - b^2 t^2}, \quad t \in (-1/b, 1/b)$
Proof
Recall that $M(t) = e^{a t} m(b t)$ where $m$ is the standard Laplace MGF.
The moments of $X$ about the location parameter have a simple form.
The moments of $X$ about $a$ are
1. $\E\left[(X - a)^n\right] = 0$ if $n \in \N$ is odd.
2. $\E\left[(X - a)^n\right] = b^n n!$ if $n \in \N$ is even.
Proof
Note that $\E\left[(X - a)^n\right] = b^n \E(U^n)$ so the results follow from the moments of $U$.
The mean and variance of $X$ are
1. $\E(X) = a$
2. $\var(X) = 2 b^2$
Proof
Recall that $\E(X) = a + b \E(U)$ and $\var(X) = b^2 \var(U)$, so the results follow from the mean and variance of $U$.
Open the Special Distribution Simulator and select the Laplace distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For various values of the scale parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = 6$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are unchanged by a location-scale transformation. Thus the results follow from the skewness and kurtosis of $U$.
As before, the excess kurtosis is $\kur(X) - 3 = 3$.
Related Distributions
By construction, the Laplace distribution is a location-scale family, and so is closed under location-scale transformations.
Suppose that $X$ has the Laplace distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, and that $c \in \R$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the Laplace distribution with location parameter $c + a d$ and scale parameter $b d$.
Proof
Again by definition, we can take $X = a + b U$ where $U$ has the standard Laplace distribution. Hence $Y = c + d X = (c + a d) + (b d) U$.
Once again, the Laplace distribution has the usual connections to the standard uniform distribution by means of the distribution function and the quantile function computed above. The latter leads to the usual random quantile method of simulation.
Suppose that $a \in \R$ and $b \in (0, \infty)$.
1. If $V$ has the standard uniform distribution then $X = \left[a + b \ln(2 V)\right] \bs{1}\left(V \lt \frac{1}{2}\right) + \left(a - b \ln[2(1 - V)]\right) \bs{1}\left(V \ge \frac{1}{2}\right)$ has the Laplace distribution with location parameter $a$ and scale parameter $b$.
2. If $X$ has the Laplace distribution with location parameter $a$ and scale parameter $b$, then $V = \frac{1}{2} \exp\left(\frac{X - a}{b}\right) \bs{1}(X \lt a) + \left[1 - \frac{1}{2} \exp\left(-\frac{X - a}{b}\right)\right] \bs{1}(X \ge a)$ has the standard uniform distribution.
Open the random quantile experiment and select the Laplace distribution. Vary the parameter values and note the shape of the probability density and distribution functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The Laplace distribution is also a member of the general exponential family of distributions.
Suppose that $X$ has the Laplace distribution with known location parameter $a \in \R$ and unspecified scale parameter $b \in (0, \infty)$. Then $X$ has a general exponential distribution in the scale parameter $b$, with natural parameter $-1/b$ and natural statistic $\left|X - a\right|$.
Proof
This follows from the definition of the general exponential family and the form of the probability density function $f$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The logistic distribution is used for various growth models, and is used in a certain type of regression, known appropriately as logistic regression.
The Standard Logistic Distribution
Distribution Functions
The standard logistic distribution is a continuous distribution on $\R$ with distribution function $G$ given by $G(z) = \frac{e^z}{1 + e^z}, \quad z \in \R$
Proof
Note that $G$ is continuous, and $G(z) \to 0$ as $z \to -\infty$ and $G(z) \to 1$ as $z \to \infty$. Moreover, $G^\prime(z) = \frac{e^z}{\left(1 + e^z\right)^2} \gt 0, \quad z \in \R$ so $G$ is increasing.
The probability density function $g$ of the standard logistic distribution is given by $g(z) = \frac{e^z}{\left(1 + e^z\right)^2}, \quad z \in \R$
1. $g$ is symmetric about $x = 0$.
2. $g$ increases and then decreases with the mode $x = 0$.
3. $g$ is concave upward, then downward, then upward again with inflection points at $x = \pm \ln\left(2 + \sqrt{3}\right) \approx \pm 1.317$.
Proof
These results follow from standard calculus. First recall that $g = G^\prime$.
1. The symmetry of $g$ is not obvious at first, but note that $g(-z) = \frac{e^{-z}}{\left(1 + e^{-z}\right)^2} \frac{e^{2z}}{e^{2z}} = \frac{e^z}{\left(1 + e^z\right)^2} = g(z)$
2. The first derivative of $g$ is $g^\prime(z) = \frac{e^z (1 - e^z)}{(1 + e^z)^3}$
3. The second derivative of $g$ is $g^{\prime \prime}(z) = \frac{e^z \left(1 - 4 e^z + e^{2z}\right)}{(1 + e^z)^4}$
In the special distribution simulator, select the logistic distribution. Keep the default parameter values and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $G^{-1}$ of the standard logistic distribution is given by $G^{-1}(p) = \ln \left( \frac{p}{1 - p} \right), \quad p \in (0, 1)$
1. The first quartile is $-\ln 3 \approx -1.0986$.
2. The median is 0.
3. The third quartile is $\ln 3 \approx 1.0986$
Proof
The formula for $G^{-1}$ follows by solving $p = G(z)$ for $z$ in terms of $p$.
Recall that $p : 1 - p$ are the odds in favor of an event with probability $p$. Thus, the logistic distribution has the interesting property that the quantiles are the logarithms of the corresponding odds. Indeed, this function of $p$ is sometimes called the logit function. The fact that the median is 0 also follows from symmetry, of course.
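As a small supplementary illustration (not part of the text), the following Python/NumPy snippet evaluates the quantile function as the log-odds transform; the function name and the probability values printed are arbitrary choices.

```python
import numpy as np

def logit(p):
    """Standard logistic quantile: the log of the odds p : (1 - p)."""
    p = np.asarray(p, dtype=float)
    return np.log(p / (1.0 - p))

for p in (0.25, 0.5, 0.75, 0.9):
    print(p, "->", logit(p))
# 0.25 -> -ln 3, 0.5 -> 0, 0.75 -> ln 3, 0.9 -> ln 9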
In the special distribution calculator, select the logistic distribution. Keep the default parameter values and note the shape and location of the probability density function and the distribution function. Find the quantiles of order 0.1 and 0.9.
Moments
Suppose that $Z$ has the standard logistic distribution. The moment generating function of $Z$ has a simple representation in terms of the beta function $B$, and hence also in terms of the gamma function $\Gamma$
The moment generating function $m$ of $Z$ is given by
$m(t) = B(1 + t, 1 - t) = \Gamma(1 + t) \, \Gamma(1 - t), \quad t \in (-1, 1)$
Proof
Note that $m(t) = \int_{-\infty}^\infty e^{t z} \frac{e^z}{\left(1 + e^z\right)^2} dz$ Let $u = \frac{e^z}{1 + e^z}$ so that $du = \frac{e^z}{\left(1 + e^z\right)^2} dz$ and $e^z = \frac{u}{1 - u}$. Hence $m(t) = \int_0^1 \left(\frac{u}{1 - u}\right)^t du = \int_0^1 u^t (1 - u)^{-t} \, du$ The last integral, by definition, is $B(1 + t, 1 - t)$ for $t \in (-1, 1)$
Since the moment generating function is finite on an open interval containing 0, random variable $Z$ has moments of all orders. By symmetry, the odd order moments are 0. The even order moments can be represented in terms of Bernoulli numbers, named of course for Jacob Bernoulli. Let $\beta_n$ denote the Bernoulli number of order $n \in \N$.
Let $n \in \N$
1. If $n$ is odd then $\E(Z^n) = 0$.
2. If $n$ is even then $\E\left(Z^n\right) = (2^n - 2) \pi^n \left|\beta_n\right|$
Proof
1. Again, this follows from symmetry
2. Recall that the moments of $Z$ can be computed by integrating powers of the quantile function. Hence $\E\left(Z^n\right) = \int_0^1 \left[\ln\left(\frac{p}{1 - p}\right)\right]^n dp$ This integral evaluates to the expression above involving the Bernoulli numbers.
In particular, we have the mean and variance.
The mean and variance of $Z$ are
1. $\E(Z) = 0$
2. $\var(Z) = \frac{\pi^2}{3}$
Proof
1. Again, $\E(Z) = 0$ by symmetry.
2. The second Bernoulli number is $\beta_2 = \frac{1}{6}$. Hence $\var(Z) = \E\left(Z^2\right) = (2^2 - 2) \pi^2 \frac{1}{6} = \frac{\pi^2}{ 3 }$.
In the special distribution simulator, select the logistic distribution. Keep the default parameter values and note the shape and location of the mean $\pm$ standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = \frac{21}{5}$
Proof
1. Again, $\skw(Z) = 0$ by the symmetry of the distribution.
2. Recall that by symmetry, $\E(Z) = \E\left(Z^3\right) = 0$. Also, $\left|\beta_4\right| = \frac{1}{30}$, so $\E\left(Z^4\right) = (2^4 - 2) \pi^4 \frac{1}{30} = \frac{7 \pi^4}{15}$. Hence from the usual computational formula for kurtosis, $\kur(Z) = \frac{\E\left(Z^4\right)}{[\var(Z)]^2} = \frac{7 \pi^4 / 15}{\pi^4 / 9} = \frac{21}{5}$
It follows that the excess kurtosis of $Z$ is $\kur(Z) - 3 = \frac{6}{5}$.
Related Distributions
The standard logistic distribution has the usual connections with the standard uniform distribution by means of the distribution function and quantile function given above. Recall that the standard uniform distribution is the continuous uniform distribution on the interval $(0, 1)$.
Connections with the standard uniform distribution.
1. If $Z$ has the standard logistic distribution then $U = G(Z) = \frac{e^Z}{1 + e^Z}$ has the standard uniform distribution.
2. If $U$ has the standard uniform distribution then $Z = G^{-1}(U) = \ln\left(\frac{U}{1 - U}\right) = \ln(U) - \ln(1 - U)$ has the standard logistic distribution.
Since the quantile function has a simple closed form, we can use the usual random quantile method to simulate the standard logistic distribution.
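The sketch below (an added illustration, not part of the text) carries out the random quantile simulation in Python/NumPy and compares the sample mean and variance to the values derived above; the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=200_000)
Z = np.log(U / (1.0 - U))                # random quantile method: Z = G^{-1}(U)

print("sample mean    :", Z.mean(), "(theory 0)")
print("sample variance:", Z.var(), "(theory pi^2/3 =", np.pi ** 2 / 3, ")")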
Open the random quantile experiment and select the logistic distribution. Keep the default parameter values and note the shape of the probability density and distribution functions. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The standard logistic distribution also has several simple connections with the standard exponential distribution (the exponential distribution with rate parameter 1).
Connections with the standard exponential distribution:
1. If $Z$ has the standard logistic distribution, then $Y = \ln\left(e^Z + 1\right)$ has the standard exponential distribution.
2. If $Y$ has the standard exponential distribution then $Z = \ln\left(e^Y - 1\right)$ has the standard logistic distribution.
Proof
These results follow from the standard change of variables formula. The transformations, inverses of each other of course, are $y = \ln\left(e^z + 1\right)$ and $z = \ln\left(e^y - 1\right)$ for $z \in \R$ and $y \in (0, \infty)$. Let $g$ and $h$ denote the PDFs of $Z$ and $Y$ respectively.
1. By definition, $g(z) = e^z \big/ (1 + e^z)^2$ for $z \in \R$ so $h(y) = g(z) \frac{dz}{dy} = \frac{\exp\left[\ln\left(e^y - 1\right)\right]}{\left(1 + \exp\left[\ln\left(e^y - 1\right)\right]\right)^2} \frac{e^y}{e^y - 1} = e^{-y}, \quad y \in (0, \infty)$ which is the PDF of the standard exponential distribution.
2. By definition, $h(y) = e^{-y}$ for $y \in (0, \infty)$ so $g(z) = h(y) \frac{dy}{dz} = \exp\left[-\ln\left(e^z + 1\right)\right] \frac{e^z}{e^z + 1} = \frac{e^z}{\left(e^z + 1\right)^2}, \quad z \in \R$ which is the PDF of the standard logistic distribution.
Suppose that $X$ and $Y$ are independent random variables, each with the standard exponential distribution. Then $Z = \ln(X / Y)$ has the standard logistic distribution.
Proof
For $z \in \R$, $\P(Z \le z) = \P[\ln(X / Y) \le z] = \P\left(X / Y \le e^z\right) = \P\left(Y \ge e^{-z} X\right)$ Recall that $\P(Y \ge y) = e^{-y}$ for $y \in (0, \infty)$ and $X$ has PDF $x \mapsto e^{-x}$ on $(0, \infty)$. We condition on $X$: $\P(Z \le z) = \E\left[\P\left(Y \ge e^{-z} X \mid X\right)\right] = \int_0^\infty e^{-e^{-z} x} e^{-x} dx = \int_0^\infty e^{-(e^{-z} + 1)x} dx = \frac{1}{e^{-z} + 1} = \frac{e^z}{1 + e^z}$ As a function of $z$, this is the distribution function of the standard logistic distribution.
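The following minimal NumPy sketch (added here as an illustration, not part of the text) checks this log-ratio construction against the standard logistic CDF; the seed, sample size, and evaluation points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.exponential(size=n)
Y = rng.exponential(size=n)
Z = np.log(X / Y)                        # should be standard logistic

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(z, np.mean(Z <= z), np.exp(z) / (1 + np.exp(z)))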
There are also simple connections between the standard logistic distribution and the Pareto distribution.
Connections with the Pareto distribution:
1. If $Z$ has the standard logistic distribution, then $Y = e^Z + 1$ has the Pareto distribution with shape parameter 1.
2. If $Y$ has the Pareto distribution with shape parameter 1, then $Z = \ln(Y - 1)$ has the standard logistic distribution.
Proof
These results follow from the basic change of variables theorem. The transformations, inverses of one another of course, are $y = e^z + 1$, $z = \ln(y - 1)$ for $z \in \R$ and $y \in (1, \infty)$. Let $g$ and $h$ denote the PDFs of $Z$ and $Y$ respectively.
1. By definition, $g(z) = e^z \big/ \left(1 + e^z\right)^2$ for $z \in \R$. Hence $h(y) = g(z) \frac{dz}{dy} = \frac{\exp[\ln(y - 1)]}{\left(1 + \exp[\ln(y - 1)]\right)^2} \frac{1}{y - 1} = \frac{1}{y^2}, \quad y \in (1, \infty)$ which is the PDF of the Pareto distribution with shape parameter 1.
2. By definition, $h(y) = 1 / y^2$ for $y \in (1, \infty)$. Hence $g(z) = h(y) \frac{dy}{dz} = \frac{1}{(e^z + 1)^2} e^z, \quad z \in \R$ which is the PDF of the standard logistic distribution.
Finally, there are simple connections to the extreme value distribution.
If $X$ and $Y$ are independent and each has the standard Gumbel distribution, then $Z = Y - X$ has the standard logistic distribution.
Proof
The distribution function of $Y$ is $G(y) = \exp\left(-e^{-y}\right)$ for $y \in \R$ and the density function of $X$ is $g(x) = e^{-x} \exp\left(-e^{-x}\right)$ for $x \in \R$. For $z \in \R$, conditioning on $X$ gives $\P(Z \le z) = \P(Y \le X + z) = \E[\P(Y \le X + z \mid X)] = \int_{-\infty}^\infty \exp\left(-e^{-(x + z)}\right) e^{-x} \exp\left(-e^{-x}\right) dx$ Substituting $u = -e^{-(x + z)}$ gives $\P(Z \le z) = \int_{-\infty}^0 e^u \exp(e^z u) e^z du = e^z \int_{-\infty}^0 \exp\left[u(1 + e^z)\right] du = \frac{e^z}{1 + e^z}, \quad z \in \R$ As a function of $z$, this is the standard logistic distribution function.
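As a quick numerical check (an added sketch, not part of the text), the code below draws from the Gumbel distribution with CDF $\exp(-e^{-y})$ used in the proof (NumPy's `gumbel` sampler with its default parameters) and compares the empirical CDF of the difference to the standard logistic CDF; the seed, sample size, and evaluation points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.gumbel(size=n)                   # Gumbel with CDF exp(-exp(-x))
Y = rng.gumbel(size=n)
Z = Y - X                                # should be standard logistic

for z in (-1.0, 0.0, 1.0):
    print(z, np.mean(Z <= z), np.exp(z) / (1 + np.exp(z)))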
The General Logistic Distribution
The general logistic distribution is the location-scale family associated with the standard logistic distribution.
Suppose that $Z$ has the standard logistic distribution. For $a \in \R$ and $b \in (0, \infty)$, random variable $X = a + b Z$ has the logistic distribution with location parameter $a$ and scale parameter $b$.
Distribution Functions
Analogous results for the general logistic distribution follow easily from basic properties of the location-scale transformation. Suppose that $X$ has the logistic distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$.
The probability density function $f$ of $X$ is given by $f(x) = \frac{\exp \left(\frac{x - a}{b} \right)}{b \left[1 + \exp \left(\frac{x - a}{b} \right) \right]^2}, \quad x \in \R$
1. $f$ is symmetric about $x = a$.
2. $f$ increases and then decreases, with mode $x = a$.
3. $f$ is concave upward, then downward, then upward again, with inflection points at $x = a \pm \ln\left(2 + \sqrt{3}\right) b$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right), \quad x \in \R$ where $g$ is the standard logistic PDF.
In the special distribution simulator, select the logistic distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is given by $F(x) = \frac{\exp \left( \frac{x - a}{b} \right)}{1 + \exp \left( \frac{x - a}{b} \right)}, \quad x \in \R$
Proof
Recall that $F(x) = G\left(\frac{x - a}{b}\right), \quad x \in \R$ where $G$ is the standard logistic CDF.
The quantile function $F^{-1}$ of $X$ is given by $F^{-1}(p) = a + b \ln \left( \frac{p}{1 - p} \right), \quad p \in (0, 1)$
1. The first quartile is $a - b \ln 3$.
2. The median is $a$.
3. The third quartile is $a + b \ln 3$
Proof
Recall that $F^{-1}(p) = a + b G^{-1}(p)$ for $p \in (0, 1)$, where $G^{-1}$ is the standard logistic quantile function.
In the special distribution calculator, select the logistic distribution. Vary the parameters and note the shape and location of the probability density function and the distribution function. For selected values of the parameters, find the quantiles of order 0.1 and 0.9.
Moments
Suppose again that $X$ has the logistic distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. Recall that $B$ denotes the beta function and $\Gamma$ the gamma function.
The moment generating function $M$ of $X$ is given by $M(t) = e^{a t} B(1 + b t, 1 - b t) = e^{a t} \Gamma(1 + b t) \, \Gamma(1 - b t), \quad t \in (-1, 1)$
Proof
Recall that $M(t) = e^{a t} m(b t)$ where $m$ is the standard logistic MGF.
The mean and variance of $X$ are
1. $\E(X) = a$
2. $\var(X) = b^2 \frac{\pi^2}{3}$
Proof
By definition we can assume $X = a + b Z$ where $Z$ has the standard logistic distribution. Using the mean and variance of $Z$ we have
1. $\E(X) = a + b \E(Z) = a$
2. $\var(X) = b^2 \var(Z) = b^2 \frac{\pi^2}{3}$
In the special distribution simulator, select the logistic distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = \frac{21}{5}$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under location-scale transformations. So the skewness and kurtosis of $X$ are the same as the skewness and kurtosis of $Z$.
Once again, it follows that the excess kurtosis of $X$ is $\kur(X) - 3 = \frac{6}{5}$. The central moments of $X$ can be given in terms of the Bernoulli numbers. As before, let $\beta_n$ denote the Bernoulli number of order $n \in \N$.
Let $n \in \N$.
1. If $n$ is odd then $\E\left[(X - a)^n\right] = 0$.
2. If $n$ is even then $\E\left[(X - a)^n\right] = (2^n - 2) \pi^n b^n \left|\beta_n\right|$
Proof
Again by definition we can take $X = a + b Z$ where $Z$ has the standard logistic distribution. Then $\E\left[(X - a)^n\right] = b^n \E(Z^n)$ so the results follow from the moments of $Z$.
Related Distributions
The general logistic distribution is a location-scale family, so it is trivially closed under location-scale transformations.
Suppose that $X$ has the logistic distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, and that $c \in \R$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the logistic distribution with location parameter $c + a d$ and scale parameter $b d$.
Proof
Again by definition we can take $X = a + b Z$ where $Z$ has the standard logistic distribution. Then $Y = c + d X = (c + a d) + (b d) Z$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Extreme value distributions arise as limiting distributions for maximums or minimums (extreme values) of a sample of independent, identically distributed random variables, as the sample size increases. Thus, these distributions are important in probability and mathematical statistics.
The Standard Distribution for Maximums
Distribution Functions
The standard extreme value distribution (for maximums) is a continuous distribution on $\R$ with distribution function $G$ given by $G(v) = \exp\left(-e^{-v}\right), \quad v \in \R$
Proof
Note that $G$ is continuous, increasing, and satisfies $G(v) \to 0$ as $v \to -\infty$ and $G(v) \to 1$ as $v \to \infty$.
The distribution is also known as the standard Gumbel distribution in honor of Emil Gumbel. As we will show below, it arises as the limit of the maximum of $n$ independent random variables, each with the standard exponential distribution (when this maximum is appropriately centered). This fact is the main reason that the distribution is special, and is the reason for the name. For the remainder of this discussion, suppose that random variable $V$ has the standard Gumbel distribution.
The probability density function $g$ of $V$ is given by $g(v) = e^{-v} \exp\left(-e^{-v}\right) = \exp\left[-\left(e^{-v} + v\right)\right], \quad v \in \R$
1. $g$ increases and then decreases with mode $v = 0$
2. $g$ is concave upward, then downward, then upward again, with inflection points at $v = \ln\left[\left(3 \pm \sqrt{5}\right) \big/ 2\right] \approx \pm 0.9624$.
Proof
These results follow from standard calculus. The PDF is $g = G^\prime$.
1. The first derivative of $g$ satisfies $g^\prime(v) = g(v)\left(e^{-v} - 1\right)$ for $v \in \R$.
2. The second derivative of $g$ satisfies $g^{\prime \prime}(v) = g(v) \left(e^{-2 v} - 3 e^{-v} + 1\right)$ for $v \in \R$.
In the special distribution simulator, select the extreme value distribution. Keep the default parameter values and note the shape and location of the probability density function. In particular, note the lack of symmetry. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $G^{-1}$ of $V$ is given by $G^{-1}(p) = -\ln[-\ln(p)], \quad p \in (0, 1)$
1. The first quartile is $-\ln(\ln 4) \approx -0.3266$.
2. The median is $-\ln(\ln 2) \approx 0.3665$
3. The third quartile is $-\ln(\ln 4 - \ln 3) \approx 1.2459$
Proof
The formula for $G^{-1}$ follows from solving $p = G(v)$ for $v$ in terms of $p$.
In the special distribution calculator, select the extreme value distribution. Keep the default parameter values and note the shape and location of the probability density and distribution functions. Compute the quantiles of order 0.1, 0.3, 0.6, and 0.9
Moments
Suppose again that $V$ has the standard Gumbel distribution. The moment generating function of $V$ has a simple expression in terms of the gamma function $\Gamma$.
The moment generating function $m$ of $V$ is given by $m(t) = \E\left(e^{t V}\right) = \Gamma(1 - t), \quad t \in (-\infty, 1)$
Proof
Note that $m(t) = \int_{-\infty}^\infty e^{t v} \exp\left(-e^{-v}\right) e^{-v} dv$ The substitution $x = e^{-v}$, $dx = -e^{-v} dv$ gives $m(t) = \int_0^\infty x^{-t} e^{-x} dx = \Gamma(1 - t)$ for $t \in (-\infty, 1)$.
Next we give the mean and variance. First, recall that the Euler constant, named for Leonhard Euler is defined by $\gamma = -\Gamma^\prime(1) = -\int_0^\infty e^{-x} \ln x \, dx \approx 0.5772156649$
The mean and variance of $V$ are
1. $\E(V) = \gamma$
2. $\var(V) = \frac{\pi^2}{6}$
Proof
These results follow from the moment generating function.
1. $m^\prime(t) = -\Gamma^\prime(1 - t)$ and so $\E(V) = m^\prime(0) = - \Gamma^\prime(1) = \gamma$.
2. $m^{\prime \prime}(t) = \Gamma^{\prime \prime}(1 - t)$ and $\E(V^2) = m^{\prime \prime}(0) = \Gamma^{\prime \prime}(1) = \int_0^\infty (\ln x)^2 e^{-x} dx = \gamma^2 + \frac{\pi^2}{6}$ Hence $\var(V) = \E(V^2) - [\E(V)]^2 = \frac{\pi^2}{6}$
In the special distribution simulator, select the extreme value distribution and keep the default parameter values. Note the shape and location of the mean $\pm$ standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Next we give the skewness and kurtosis of $V$. The skewness involves a value of the Riemann zeta function $\zeta$, named of course for Bernhard Riemann. Recall that $\zeta$ is defined by $\zeta(n) = \sum_{k=1}^\infty \frac{1}{k^n}, \quad n \gt 1$
The skewness and kurtosis of $V$ are
1. $\skw(V) = 12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx 1.13955$
2. $\kur(V) = \frac{27}{5}$
The particular value of the zeta function, $\zeta(3)$, is known as Apéry's constant. From (b), it follows that the excess kurtosis is $\kur(V) - 3 = \frac{12}{5}$.
Related Distributions
The standard Gumbel distribution has the usual connections to the standard uniform distribution by means of the distribution function and quantile function given above. Recall that the standard uniform distribution is the continuous uniform distribution on the interval $(0, 1)$.
The standard Gumbel and standard uniform distributions are related as follows:
1. If $U$ has the standard uniform distribution then $V = G^{-1}(U) = -\ln(-\ln U)$ has the standard Gumbel distribution.
2. If $V$ has the standard Gumbel distribution then $U = G(V) = \exp\left(-e^{-V}\right)$ has the standard uniform distribution.
So we can simulate the standard Gumbel distribution using the usual random quantile method.
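Here is a minimal Python/NumPy sketch of that random quantile simulation (added as an illustration, not part of the text), comparing the sample mean and variance to the values derived above; the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=200_000)
V = -np.log(-np.log(U))                  # V = G^{-1}(U), standard Gumbel

print("sample mean    :", V.mean(), "(theory Euler's constant", np.euler_gamma, ")")
print("sample variance:", V.var(), "(theory pi^2/6 =", np.pi ** 2 / 6, ")")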
Open the random quantile experiment and select the extreme value distribution. Keep the default parameter values and note again the shape and location of the probability density and distribution functions. Run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The standard Gumbel distribution also has simple connections with the standard exponential distribution (the exponential distribution with rate parameter 1).
The standard Gumbel and standard exponential distributions are related as follows:
1. If $X$ has the standard exponential distribution then $V = -\ln X$ has the standard Gumbel distribution.
2. If $V$ has the standard Gumbel distribution then $X = e^{-V}$ has the standard exponential distribution.
Proof
These results follow from the usual change of variables theorem. The transformations are $v = -\ln x$ and $x = e^{-v}$ for $x \in (0, \infty)$ and $v \in \R$, and these are inverses of each other. Let $f$ and $g$ denote PDFs of $X$ and $V$ respectively.
1. We start with $f(x) = e^{-x}$ for $x \in (0, \infty)$ and then $g(v) = f(x) \left|\frac{dx}{dv}\right| = \exp\left(-e^{-v}\right) e^{-v}, \quad v \in \R$ so $V$ has the standard Gumbel distribution.
2. We start with $g(v) = \exp\left(-e^{-v}\right) e^{-v}$ for $v \in \R$ and then $f(x) = g(v) \left|\frac{dv}{dx}\right| = \exp\left[-\exp(\ln x)\right] \exp(\ln x) \frac{1}{x} = e^{-x}, \quad x \in (0, \infty)$ so $X$ has the standard exponential distribution.
As noted in the introduction, the following theorem provides the motivation for the name extreme value distribution.
Suppose that $(X_1, X_2, \ldots)$ is a sequence of independent random variables, each with the standard exponential distribution. The distribution of $Y_n = \max\{X_1, X_2, \ldots, X_n\} - \ln n$ converges to the standard Gumbel distribution as $n \to \infty$.
Proof
Let $X_{(n)} = \max\{X_1, X_2, \ldots, X_n\}$, so that $X_{(n)}$ is the $n$th order statistic of the random sample $(X_1, X_2, \ldots, X_n)$. Let $H$ denote the standard exponential CDF, so that $H(x) = 1 - e^{-x}$ for $x \in [0, \infty)$. Note that $X_{(n)}$ has CDF $H^n$. Let $F_n$ denote the CDF of $Y_n$. For $x \in \R$, $F_n(x) = \P(Y_n \le x) = \P\left[X_{(n)} \le x + \ln n\right] = H^n(x + \ln n) = \left[1 - e^{-(x + \ln n)}\right]^n = \left(1 - \frac{e^{-x}}{n} \right)^n$ By a famous limit from calculus, $F_n(x) \to e^{-e^{-x}}$ as $n \to \infty$.
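The limit can be explored numerically with the following NumPy sketch (added here as an illustration, not part of the text); the sample size $n$, the number of replications, and the evaluation points are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 10_000                   # sample size and number of replications
X = rng.exponential(size=(reps, n))
Y = X.max(axis=1) - np.log(n)            # centered maximum of each sample

# Compare the empirical CDF of Y to the standard Gumbel CDF exp(-exp(-y))
for y in (-1.0, 0.0, 1.0, 2.0):
    print(y, np.mean(Y <= y), np.exp(-np.exp(-y)))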
The General Extreme Value Distribution
As with many other distributions we have studied, the standard extreme value distribution can be generalized by applying a linear transformation to the standard variable. First, if $V$ has the standard Gumbel distribution (the standard extreme value distribution for maximums), then $-V$ has the standard extreme value distribution for minimums. Here is the general definition.
Suppose that $V$ has the standard Gumbel distribution, and that $a, \, b \in \R$ with $b \ne 0$. Then $X = a + b V$ has the extreme value distribution with location parameter $a$ and scale parameter $|b|$.
1. If $b \gt 0$, then the distribution corresponds to maximums.
2. If $b \lt 0$, then the distribution corresponds to minimums.
So the family of distributions with $a \in \R$ and $b \in (0, \infty)$ is a location-scale family associated with the standard distribution for maximums, and the family of distributions with $a \in \R$ and $b \in (-\infty, 0)$ is the location-scale family associated with the standard distribution for minimums. The distributions are also referred to more simply as Gumbel distributions rather than extreme value distributions. The web apps in this project use only the extreme value distributions for maximums. As you will see below, the differences between the distribution for maximums and the distribution for minimums are minor. For the remainder of this discussion, suppose that $X$ has the form given in the definition.
Distribution Functions
Let $F$ denote the distribution function of $X$.
1. If $b \gt 0$ then $F(x) = \exp\left[-\exp\left(-\frac{x - a}{b}\right)\right], \quad x \in \R$
2. If $b \lt 0$ then $F(x) = 1 - \exp\left[-\exp\left(-\frac{x - a}{b}\right)\right], \quad x \in \R$
Proof
Let $G$ denote the CDF of $V$. Then
1. $F(x) = G\left(\frac{x - a}{b}\right)$ for $x \in \R$
2. $F(x) = 1 - G\left(\frac{x - a}{b}\right)$ for $x \in \R$
Let $f$ denote the probability density function of $X$. Then $f(x) = \frac{1}{|b|} \exp\left(-\frac{x - a}{b}\right) \exp\left[-\exp\left(-\frac{x - a}{b}\right)\right], \quad x \in \R$
Proof
Let $g$ denote the PDF of $V$. By the change of variables formula, $f(x) = \frac{1}{|b|} g\left(\frac{x - a}{b}\right), \quad x \in \R$
Open the special distribution simulator and select the extreme value distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $F^{-1}$ of $X$ is given as follows
1. If $b \gt 0$ then $F^{-1}(p) = a - b \ln(-\ln p)$ for $p \in (0, 1)$.
2. If $b \lt 0$ then $F^{-1}(p) = a - b \ln[-\ln(1 - p)],$ for $p \in (0, 1)$
Proof
Let $G^{-1}$ denote the quantile function of $V$. Then
1. $F^{-1}(p) = a + b G^{-1}(p)$ for $p \in (0, 1)$.
2. $F^{-1}(p) = a + b G^{-1}(1 - p)$ for $p \in (0, 1)$.
Open the special distribution calculator and select the extreme value distribution. Vary the parameters and note the shape and location of the probability density and distribution functions. For selected values of the parameters, compute a few values of the quantile function and the distribution function.
Moments
Suppose again that $X = a + b V$ where $V$ has the standard Gumbel distribution, and that $a, \, b \in \R$ with $b \ne 0$.
The moment generating function $M$ of $X$ is given by $M(t) = e^{a t} \Gamma(1 - b t)$.
1. With domain $t \in (-\infty, 1 / b)$ if $b \gt 0$
2. With domain $t \in (1 / b, \infty)$ if $b \lt 0$
Proof
Let $m$ denote the MGF of $V$. Then $M(t) = e^{a t} m(b t)$ for $b t \lt 1$
The mean and variance of $X$ are
1. $\E(X) = a + b \gamma$
2. $\var(X) = b^2 \frac{\pi^2}{6}$
Proof
These results follow from the mean and variance of $V$ and basic properties of expected value and variance.
1. $\E(X) = a + b \E(V)$
2. $\var(X) = b^2 \var(V)$
Open the special distribution simulator and select the extreme value distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness of $X$ is
1. $\skw(X) = 12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx 1.13955$ if $b \gt 0$.
2. $\skw(X) = -12 \sqrt{6} \zeta(3) \big/ \pi^3 \approx -1.13955$ if $b \lt 0$
Proof
Recall that skewness is defined in terms of the standard score, and hence is invariant under linear transformations with positive slope. A linear transformation with negative slope changes the sign of the skewness. Hence these results follow from the skewness of $V$.
The kurtosis of $X$ is $\kur(X) = \frac{27}{5}$
Proof
Recall that kurtosis is defined in terms of the standard score and is invariant under linear transformations with nonzero slope. Hence this result follows from the kurtosis of $V$.
Once again, the excess kurtosis is $\kur(X) - 3 = \frac{12}{5}$.
Related Distributions
Since the general extreme value distributions are location-scale families, they are trivially closed under linear transformations of the underlying variables (with nonzero slope).
Suppose that $X$ has the extreme value distribution with parameters $a, \, b$ with $b \ne 0$ and that $c, \, d \in \R$ with $d \ne 0$. Then $Y = c + d X$ has the extreme value distribution with parameters $a d + c$ and $b d$.
Proof
By definition, we can write $X = a + b V$ where $V$ has the standard Gumbel distribution. Hence $Y = c + d X = (ad + c) + (b d) V$.
Note if $d \gt 0$ then $X$ and $Y$ have the same association (max, max) or (min, min). If $d \lt 0$ then $X$ and $Y$ have opposite associations (max, min) or (min, max).
As with the standard Gumbel distribution, the general Gumbel distribution has the usual connections with the standard uniform distribution by means of the distribution and quantile functions. Since the quantile function has a simple closed form, the latter connection leads to the usual random quantile method of simulation. We state the result for maximums.
Suppose that $a, \, b \in \R$ with $b \ne 0$. Let $F$ denote the distribution function and let $F^{-1}$ denote the quantile function above.
1. If $U$ has the standard uniform distribution then $X = F^{-1}(U)$ has the extreme value distribution with parameters $a$ and $b$.
2. If $X$ has the extreme value distribution with parameters $a$ and $b$ then $U = F(X)$ has the standard uniform distribution.
Open the random quantile experiment and select the extreme value distribution. Vary the parameters and note again the shape and location of the probability density and distribution functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The extreme value distribution for maximums has a simple connection to the Weibull distribution, and this generalizes the connection between the standard Gumbel and exponential distributions above. There is a similar result for the extreme value distribution for minimums.
The extreme value and Weibull distributions are related as follows:
1. If $X$ has the extreme value distribution with parameters $a \in \R$ and $b \in (0, \infty)$, then $Y = e^{-X}$ has the Weibull distribution with shape parameter $\frac{1}{b}$ and scale parameter $e^{-a}$.
2. If $Y$ has the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ then $X = -\ln Y$ has the extreme value distribution with parameters $-\ln b$ and $\frac{1}{k}$.
Proof
As before, these results can be obtained using the change of variables theorem for probability density functions. We give an alternate proof using special forms of the random variables.
1. We can write $X = a + b V$ where $V$ has the standard Gumbel distribution. Hence $Y = e^{-X} = e^{-a} \left(e^{-V}\right)^b$ As shown above, $e^{-V}$ has the standard exponential distribution and therefore $Y$ has the Weibull distribution with shape parameter $1/b$ and scale parameter $e^{-a}$.
2. We can write $Y = b U^{1/k}$ where $U$ has the standard exponential distribution. Hence $X = -\ln Y = -\ln b + \frac{1}{k}(-\ln U)$ As shown above, $-\ln U$ has the standard Gumbel distribution and hence $X$ has the Gumbel distribution with location parameter $-\ln b$ and scale parameter $\frac{1}{k}$.
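The first part of this connection can be checked numerically with the sketch below (an added illustration, not part of the text); the parameter values $a = 1$, $b = 2$, the seed, and the sample size are arbitrary, and NumPy's default `gumbel` sampler is used for the standard Gumbel variable.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 2.0                          # illustrative location and scale
X = a + b * rng.gumbel(size=200_000)     # extreme value distribution (for maximums)
Y = np.exp(-X)

k, c = 1.0 / b, np.exp(-a)               # claimed Weibull shape and scale
# Weibull CDF with shape k and scale c: 1 - exp(-(y/c)^k)
for y in (0.1, 0.3, 0.5, 1.0):
    print(y, np.mean(Y <= y), 1 - np.exp(-(y / c) ** k))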
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\sech}{\text{sech}}$
The hyperbolic secant distribution is a location-scale family with a number of interesting parallels to the normal distribution. As the name suggests, the hyperbolic secant function plays an important role in the distribution, so we should first review some definitions.
The hyperbolic trig functions sinh, cosh, tanh, and sech are defined as follows, for $x \in \R$ \begin{align} \sinh x & = \frac{1}{2}(e^x - e^{-x}) \\ \cosh x & = \frac{1}{2}(e^x + e^{-x}) \\ \tanh x & = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}} \\ \sech \, x & = \frac{1}{\cosh x} = \frac{2}{e^x + e^{-x}} \end{align}
The Standard Hyperbolic Secant Distribution
Distribution Functions
The standard hyperbolic secant distribution is a continuous distribution on $\R$ with probability density function $g$ given by $g(z) = \frac{1}{2} \sech\left(\frac{\pi}{2} z\right), \quad z \in \R$
1. $g$ is symmetric about 0.
2. $g$ increases and then decreases with mode $z = 0$.
3. $g$ is concave upward then downward then upward again, with inflection points at $z = \pm \frac{2}{\pi} \ln\left(\sqrt{2} + 1\right) \approx \pm 0.561$.
Proof
If we multiply numerator and denominator of $\sech(x)$ by $e^x$ and then use the simple substitution $u = e^x$ we see that $\int \sech(x) \, dx = \int 2 \frac{e^x}{e^{2x} + 1} \, dx = 2 \int \frac{1}{u^2 + 1} \, du = 2 \arctan(u) = 2 \arctan(e^x)$ It follows that $\int_{-\infty}^\infty g(z) \, dz = \frac{2}{\pi} \arctan\left(e^{\pi z / 2}\right) \biggm|_{-\infty}^\infty = 1$ The properties of $g$ follow from standard calculus. Recall that $\sech^\prime = -\tanh \sech$ and $\tanh^\prime = \sech^2$.
So $g$ has the classic unimodal shape. Recall that the inflection points in the standard normal probability density function are $\pm 1$. Compared to the standard normal distribution, the hyperbolic secant distribution is more peaked at the mode 0 but has fatter tails.
Open the special distribution simulator and select the hyperbolic secant distribtion. Keep the default parameter settings and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ of the standard hyperbolic secant distribution is given by $G(z) = \frac{2}{\pi} \arctan\left[\exp\left(\frac{\pi}{2} z\right)\right], \quad z \in \R$
Proof
Of course, $G(z) = \int_{-\infty}^z g(x) \, dx$. The form of $G$ follows from the same integration methods used for the PDF.
The quantile function $G^{-1}$ of the standard hyperbolic secant distribution is given by $G^{-1}(p) = \frac{2}{\pi} \ln\left[\tan\left(\frac{\pi}{2} p\right)\right], \quad p \in (0, 1)$
1. The first quartile is $G^{-1}\left(\frac{1}{4}\right) = -\frac{2}{\pi} \ln\left(1 + \sqrt{2}\right) \approx -0.561$
2. The median is $G^{-1}\left(\frac{1}{2}\right) = 0$
3. The third quartile is $G^{-1}\left(\frac{3}{4}\right) = \frac{2}{\pi} \ln\left(1 + \sqrt{2}\right) \approx 0.561$
Proof
The formula for $G^{-1}$ follows by solving $G(z) = p$ for $z$ in terms of $p$. For the quartiles, note that $\tan(\pi/8) = \sqrt{2} - 1 = 1 / (\sqrt{2} + 1)$ and $\tan(3 \pi / 8) = \sqrt{2} + 1$.
Of course, the fact that the median is 0 also follows from the symmetry of the distribution, as does the relationship between the first and third quartiles. In general, $G^{-1}(1 - p) = - G^{-1}(p)$ for $p \in (0, 1)$. Note that the first and third quartiles coincide with the inflection points, whereas in the normal distribution, the inflection points are at $\pm 1$ and coincide with the standard deviation.
Open the special distribution calculator and select the hyperbolic secant distribution. Keep the default values of the parameters and note the shape of the distribution and probability density functions. Compute a few values of the distribution and quantile functions.
Moments
Suppose that $Z$ has the standard hyperbolic secant distribution. The moments of $Z$ are easiest to compute from the generating functions.
The characteristic function $\chi$ of $Z$ is the hyperbolic secant function: $\chi(t) = \sech(t), \quad t \in \R$
Proof
The characteristic function is $\chi(t) = \E\left(e^{i t Z}\right) = \int_{-\infty}^\infty \frac{e^{i t z}}{e^{\pi z / 2} + e^{-\pi z / 2}} \, dz$ The evaluation of this integral to $\sech(t)$ is complicated, but the details can be found in the book Continuous Univariate Distributions by Johnson, Kotz, and Balakrishnan.
Note that the probability density function can be obtained from the characteristic function by a scale transformation: $g(z) = \frac{1}{2} \chi\left(\frac{\pi}{2} z\right)$ for $z \in \R$. This is another curious similarity to the normal distribution: the probability density function $\phi$ and characteristic function $\chi$ of the standard normal distribution are related by $\phi(z) = \frac{1}{\sqrt{2 \pi}} \chi(z)$.
The moment generating function $m$ of $Z$ is the secant function: $m(t) = \E\left(e^{tZ}\right) = \sec(t), \quad t \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$
Proof
This follows from the characteristic function since $m(t) = \chi(-i t)$.
It follows that $Z$ has moments of all orders, and then by symmetry, that the odd order moments are all 0.
The mean and variance of $Z$ are
1. $\E(Z) = 0$
2. $\var(Z) = 1$
Proof
As noted, the mean is 0 by symmetry. Hence also $\var(Z) = \E(Z^2) = m^{\prime\prime}(0)$. But $m^{\prime\prime}(t) = \sec(t) \tan^2(t) + \sec^3(t)$, so $\var(Z) = 1$.
Thus, the standard hyperbolic secant distribution has mean 0 and variance 1, just like the standard normal distribution.
Open the special distribution simulator and select the hyperbolic secant distribution. Keep the default parameters and note the size and location of the mean $\pm$ standard deviation bar. Run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $Z$ are
1. $\skw(Z) = 0$
2. $\kur(Z) = 5$
Proof
The skewness is 0 by the symmetry of the distribution. Also, since the mean is 0 and the variance 1, $\kur(Z) = \E\left(Z^4\right) = m^{(4)}(0)$. But by standard calculus, $m^{(4)}(t) = \sec(t) \tan^4(t) + 18 \sec^3(t) \tan^2(t) + 5 \sec^5(t)$ and hence $m^{(4)}(0) = 5$.
Recall that the kurtosis of the standard normal distribution is 3, so the excess kurtosis of the standard hyperbolic secant distribution is $\kur(Z) - 3 = 2$. This distribution is more sharply peaked at the mean 0 and has fatter tails, compared with the normal.
Related Distributions
The standard hyperbolic secant distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function computed above.
The standard hyperbolic secant distribution is related to the standard uniform distribution as follows:
1. If $Z$ has the standard hyperbolic secant distribution then $U = G(Z) = \frac{2}{\pi} \arctan\left[\exp\left(\frac{\pi}{2} Z\right)\right]$ has the standard uniform distribution.
2. If $U$ has the standard uniform distribution then $Z = G^{-1}(U) = \frac{2}{\pi} \ln\left[\tan\left(\frac{\pi}{2} U\right)\right]$ has the standard hyperbolic secant distribution.
Since the quantile function has a simple closed form, the standard hyperbolic secant distribution can be easily simulated by means of the random quantile method.
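A minimal Python/NumPy sketch of this random quantile simulation is given below (an added illustration, not part of the text), comparing the sample mean, variance, and kurtosis to the values derived above; the seed and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=200_000)
Z = (2.0 / np.pi) * np.log(np.tan(np.pi * U / 2.0))   # Z = G^{-1}(U)

print("sample mean    :", Z.mean(), "(theory 0)")
print("sample variance:", Z.var(), "(theory 1)")
print("sample kurtosis:", np.mean((Z - Z.mean()) ** 4) / Z.var() ** 2, "(theory 5)")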
Open the random quantile experiment and select the hyperbolic secant distribution. Keep the default parameter values and note again the shape of the probability density and distribution functions. Run the experiment 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The General Hyperbolic Secant Distribution
The standard hyperbolic secant distribution is generalized by adding location and scale parameters.
Suppose that $Z$ has the standard hyperbolic secant distribution and that $\mu \in \R$ and $\sigma \in (0, \infty)$. Then $X = \mu + \sigma Z$ has the hyperbolic secant distribution with location parameter $\mu$ and scale parameter $\sigma$.
Distribution Functions
Suppose that $X$ has the hyperbolic secant distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$.
The probability density function $f$ of $X$ is given by $f(x) = \frac{1}{2 \sigma} \sech\left[\frac{\pi}{2}\left(\frac{x - \mu}{\sigma}\right)\right], \quad x \in \R$
1. $f$ is symmetric about $\mu$.
2. $f$ increases and then decreases with mode $x = \mu$.
3. $f$ is concave upward then downward then upward again, with inflection points at $x = \mu \pm \frac{2}{\pi} \ln\left(\sqrt{2} + 1\right) \sigma \approx \mu \pm 0.561 \sigma$.
Proof
Recall that $f(x) = \frac{1}{\sigma}g\left(\frac{x - \mu}{\sigma}\right)$ for $x \in \R$ where $g$ is the standard hyperbolic secant PDF.
Open the special distribution simulator and select the hyperbolic secant distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $F$ of $X$ is given by $F(x) = \frac{2}{\pi} \arctan\left\{\exp\left[\frac{\pi}{2}\left(\frac{x - \mu}{\sigma}\right)\right]\right\}, \quad x \in \R$
Proof
Recall that $F(x) = G\left(\frac{x - \mu}{\sigma}\right)$ for $x \in \R$ where $G$ is the standard hyperbolic secant CDF.
The quantile function $F^{-1}$ of $X$ is given by $F^{-1}(p) = \mu + \sigma \frac{2}{\pi} \ln\left[\tan\left(\frac{\pi}{2} p\right)\right], \quad p \in (0, 1)$
1. The first quartile is $F^{-1}\left(\frac{1}{4}\right) = \mu - \frac{2}{\pi} \ln\left(1 + \sqrt{2}\right) \sigma \approx \mu -0.561 \sigma$
2. The median is $F^{-1}\left(\frac{1}{2}\right) = \mu$
3. The third quartile is $F^{-1}\left(\frac{3}{4}\right) = \mu + \frac{2}{\pi} \ln\left(1 + \sqrt{2}\right) \sigma \approx \mu + 0.561 \sigma$
Proof
Recall that $F^{-1}(p) = \mu + \sigma G^{-1}(p)$ where $G^{-1}$ is the standard quantile function.
Open the special distribution calculator and select the hyperbolic secant distribution. Vary the parameters and note the shape of the distribution and density functions. For various values of the parameters, compute a few values of the distribution and quantile functions.
Moments
Suppose again that $X$ has the hyperbolic secant distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$.
The moment generating function $M$ of $X$ is given by $M(t) = e^{\mu t} \sec(\sigma t), \quad t \in \left(-\frac{\pi}{2 \sigma}, \frac{\pi}{2 \sigma}\right)$
Proof
Recall that $M(t) = e^{\mu t} m(\sigma t)$ where $m$ is the standard hyperbolic secant MGF.
Just as in the normal distribution, the location and scale parameters are the mean and standard deviation, respectively.
The mean and variance of $X$ are
1. $\E(X) = \mu$
2. $\var(X) = \sigma^2$
Proof
These results follow from the representation $X = \mu + \sigma Z$ where $Z$ has the standard hyperbolic secant distribution, basic properties of expected value and variance, and the mean and variance of $Z$:
1. $\E(X) = \mu + \sigma \E(Z) = \mu$
2. $\var(X) = \sigma^2 \var(Z) = \sigma^2$
Open the special distribution simulator and select the hyperbolic secant distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 0$
2. $\kur(X) = 5$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under location-scale transformations. Thus, the skewness and kurtosis of $X$ are the same as the skewness and kurtosis of the standard distribution.
Once again, the excess kurtosis is $\kur(X) - 3 = 2$
Related Distributions
Since the hyperbolic secant distribution is a location-scale family, it is trivially closed under location-scale transformations.
Suppose that $X$ has the hyperbolic secant distribution with location parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$, and that $a \in \R$ and $b \in (0, \infty)$. Then $Y = a + b X$ has the hyperbolic secant distribution with location parameter $a + b \mu$ and scale parameter $b \sigma$.
Proof
By definition, we can take $X = \mu + \sigma Z$ where $Z$ has the standard hyperbolic secant distribution. Hence $Y = a + b X = (a + b \mu) + (b \sigma) Z$.
The hyperbolic secant distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function computed above.
Suppose that $\mu \in \R$ and $\sigma \in (0, \infty)$.
1. If $X$ has the hyperbolic secant distribution with location parameter $\mu$ and scale parameter $\sigma$ then $U = F(X) = \frac{2}{\pi} \arctan\left\{\exp\left[\frac{\pi}{2}\left(\frac{X - \mu}{\sigma}\right)\right]\right\}$ has the standard uniform distribution.
2. If $U$ has the standard uniform distribution then $X = F^{-1}(U) = \mu + \sigma \frac{2}{\pi} \ln\left[\tan\left(\frac{\pi}{2} U\right)\right]$ has the hyperbolic secant distribution with location parameter $\mu$ and scale parameter $\sigma$.
Since the quantile function has a simple closed form, the hyperbolic secant distribution can be easily simulated by means of the random quantile method.
Open the random quantile experiment and select the hyperbolic secant distribution. Vary the parameters and note again the shape of the probability density and distribution functions. Run the experiment 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{ \E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\sgn}{\text{sgn}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Cauchy distribution, named of course for the ubiquitous Augustin Cauchy, is interesting for a couple of reasons. First, it is a simple family of distributions for which the expected value (and other moments) do not exist. Second, the family is closed under the formation of sums of independent variables, and hence is an infinitely divisible family of distributions.
The Standard Cauchy Distribution
Distribution Functions
The standard Cauchy distribution is a continuous distribution on $\R$ with probability density function $g$ given by $g(x) = \frac{1}{\pi \left(1 + x^2\right)}, \quad x \in \R$
1. $g$ is symmetric about $x = 0$
2. $g$ increases and then decreases, with mode $x = 0$.
3. $g$ is concave upward, then downward, and then upward again, with inflection points at $x = \pm \frac{1}{\sqrt{3}}$.
4. $g(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$
Proof
Note that $\int_{-\infty}^\infty \frac{1}{1 + x^2} \, dx = \arctan x \biggm|_{-\infty}^\infty = \frac{\pi}{2} - \left(-\frac{\pi}{2}\right) = \pi$ and hence $g$ is a PDF. Parts (a)–(d) follow from basic calculus.
Thus, the graph of $g$ has a simple, symmetric, unimodal shape that is qualitatively (but certainly not quantitatively) like the standard normal probability density function. The probability density function $g$ is obtained by normalizing the function $x \mapsto \frac{1}{1 + x^2}, \quad x \in \R$ The graph of this function is known as the witch of Agnesi, named for the Italian mathematician Maria Agnesi.
Open the special distribution simulator and select the Cauchy distribution. Keep the default parameter values to get the standard Cauchy distribution and note the shape and location of the probability density function. Run the simulation 1000 times and compare the empirical density function to the probability density function.
The standard Cauchy distribution function $G$ is given by $G(x) = \frac{1}{2} + \frac{1}{\pi} \arctan x$ for $x \in \R$
Proof
For $x \in \R$, $G(x) = \int_{-\infty}^x g(t) \, dt = \frac{1}{\pi} \arctan t \biggm|_{-\infty}^x = \frac{1}{\pi} \arctan x + \frac{1}{2}$
The standard Cauchy quantile function $G^{-1}$ is given by $G^{-1}(p) = \tan\left[\pi\left(p - \frac{1}{2}\right)\right]$ for $p \in (0, 1)$. In particular,
1. The first quartile is $G^{-1}\left(\frac{1}{4}\right) = -1$
2. The median is $G^{-1}\left(\frac{1}{2}\right) = 0$
3. The third quartile is $G^{-1}\left(\frac{3}{4}\right) = 1$
Proof
As usual, $G^{-1}$ is computed from the CDF $G$ by solving $G(x) = p$ for $x$ in terms of $p$.
Of course, the fact that the median is 0 also follows from the symmetry of the distribution, as does the fact that $G^{-1}(1 - p) = -G^{-1}(p)$ for $p \in (0, 1)$.
Open the special distribution calculator and select the Cauchy distribution. Keep the default parameter values and note the shape of the distribution and probability density functions. Compute a few quantiles.
Moments
Suppose that random variable $X$ has the standard Cauchy distribution. As we noted in the introduction, part of the fame of this distribution comes from the fact that the expected value does not exist.
$\E(X)$ does not exist.
Proof
By definition, $\E(X) = \int_{-\infty}^\infty x g(x) \, dx$. For the improper integral to exist, even as an extended real number, at least one of the integrals $\int_{-\infty}^a x g(x) \, dx$ and $\int_a^\infty x g(x) \, dx$ must be finite, for some (and hence every) $a \in \R$. But by a simple substitution, $\int_a^\infty x g(x) \, dx = \int_a^\infty x \frac{1}{\pi (1 + x^2)} \, dx = \frac{1}{2 \pi} \ln(1 + x^2) \biggm|_a^\infty = \infty$ and similarly, $\int_{-\infty}^a x g(x) \, dx = -\infty$.
By symmetry, if the expected value did exist, it would have to be 0, just like the median and the mode, but alas the mean does not exist. Moreover, this is not just an artifact of how mathematicians define improper integrals, but has real consequences. Recall that if we think of the probability distribution as a mass distribution, then the mean is center of mass, the balance point, the point where the moment (in the sense of physics) to the right is balanced by the moment to the left. But as the proof of the last result shows, the moments to the right and to the left at any point $a \in \R$ are infinite. In this sense, 0 is no more important than any other $a \in \R$. Finally, if you are not convinced by the argument from physics, the next exercise may convince you that the law of large numbers fails as well.
Open the special distribution simulator and select the Cauchy distribution. Keep the default parameter values, which give the standard Cauchy distribution. Run the simulation 1000 times and note the behavior of the sample mean.
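A minimal Python sketch of this experiment (the seed and sample size are arbitrary choices) tracks the running sample mean of simulated standard Cauchy variates; unlike the case of a distribution with a finite mean, the running mean never settles down:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_cauchy(size=100_000)                    # standard Cauchy variates
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)   # sample mean after each observation

# For a distribution with a finite mean these values would converge; here they keep jumping.
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, running_mean[n - 1])
```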
Earlier we noted some superficial similarities between the standard Cauchy distribution and the standard normal distribution (unimodal, symmetric about 0). But clearly there are huge quantitative differences. The Cauchy distribution is a heavy tailed distribution because the probability density function $g(x)$ decreases at a polynomial rate as $x \to \infty$ and $x \to -\infty$, as opposed to an exponential rate. This is yet another way to understand why the expected value does not exist.
In terms of the higher moments, $\E\left(X^n\right)$ does not exist if $n$ is odd, and is $\infty$ if $n$ is even. It follows that the moment generating function $m(t) = \E\left(e^{t X}\right)$ cannot be finite in an interval about 0. In fact, $m(t) = \infty$ for every $t \ne 0$, so this generating function is of no use to us. But every distribution on $\R$ has a characteristic function, and for the Cauchy distribution, this generating function will be quite useful.
$X$ has characteristic function $\chi_0$ given by $\chi_0(t) = \exp\left(-\left|t\right|\right)$ for $t \in \R$.
Proof
By definition, $\chi_0(t) = \E(e^{i t X}) = \int_{-\infty}^\infty e^{i t x} \frac{1}{\pi (1 + x^2)} \, dx$ We will compute this integral by evaluating a related contour integral in the complex plane using, appropriately enough, Cauchy's integral formula (named for you know who).
Suppose first that $t \ge 0$. For $r \gt 1$, let $\Gamma_r$ denote the curve in the complex plane consisting of the line segment $L_r$ on the $x$-axis from $-r$ to $r$ and the upper half circle $C_r$ of radius $r$ centered at the origin. We give $\Gamma_r$ the usual counter-clockwise orientation. On the one hand we have $\int_{\Gamma_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz = \int_{L_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz + \int_{C_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz$ On $L_r$, $z = x$ and $dz = dx$ so $\int_{L_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz = \int_{-r}^r \frac{e^{i t x}}{\pi (1 + x^2)} dx$ On $C_r$, let $z = x + i y$. Then $e^{i t z} = e^{-t y + i t x} = e^{-t y} [\cos(t x) + i \sin(t x)]$. Since $y \ge 0$ on $C_r$ and $t \ge 0$, we have $|e^{i t z} | \le 1$. Also, $\left|\frac{1}{1 + z^2}\right| \le \frac{1}{r^2 - 1}$ on $C_r$. It follows that $\left|\int_{C_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz \right| \le \frac{1}{\pi (r^2 - 1)} \pi r = \frac{r}{r^2 - 1} \to 0 \text{ as } r \to \infty$ On the other hand, $e^{i t z} / [\pi (1 + z^2)]$ has one singularity inside $\Gamma_r$, at $i$. The residue is $\lim_{z \to i} (z - i) \frac{e^{i t z}}{\pi (1 + z^2)} = \lim_{z \to i} \frac{e^{i t z}}{\pi(z + i)} = \frac{e^{-t}}{2 \pi i}$ Hence by Cauchy's integral formula, $\int_{\Gamma_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz = 2 \pi i \frac{e^{-t}}{2 \pi i} = e^{-t}$. Putting the pieces together we have $e^{-t} = \int_{-r}^r \frac{e^{i t x}}{\pi (1 + x^2)} dx + \int_{C_r} \frac{e^{i t z}}{\pi (1 + z^2)} dz$ Letting $r \to \infty$ gives $\int_{-\infty}^\infty \frac{e^{i t x}}{\pi (1 + x^2)} dx = e^{-t}$ For $t \lt 0$, we can use the substitution $u = - x$ and our previous result to get $\int_{-\infty}^\infty \frac{e^{i t x}}{\pi (1 + x^2)} dx = \int_{-\infty}^\infty \frac{e^{i (-t) u}}{\pi (1 + u^2)} du = e^{t}$
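The computation can also be checked numerically. The following Python sketch (using SciPy's quad routine; the test values of $t$ are arbitrary) exploits the symmetry of $g$, so that $\chi_0(t) = 2 \int_0^\infty \cos(t x) g(x) \, dx$ for $t \ne 0$, and compares the result with $e^{-\left|t\right|}$:

```python
import numpy as np
from scipy.integrate import quad

def cauchy_char(t):
    """Numerical characteristic function of the standard Cauchy distribution."""
    if t == 0:
        return 1.0
    # By symmetry, chi_0(t) = 2 * integral over [0, inf) of cos(t x) / (pi (1 + x^2)) dx.
    value, _ = quad(lambda x: 1 / (np.pi * (1 + x**2)), 0, np.inf,
                    weight='cos', wvar=t)
    return 2 * value

for t in (0.5, 1.0, 2.0):
    print(t, cauchy_char(t), np.exp(-abs(t)))   # the last two columns should agree
```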
Related Distributions
The standard Cauchy distribution is a member of the Student $t$ family of distributions.
The standard Cauchy distribution is the Student $t$ distribution with one degree of freedom.
Proof
The Student $t$ distribution with one degree of freedom has PDF $g$ given by $g(t) = \frac{\Gamma(1)}{\sqrt{\pi} \Gamma(1/2)} \left(1 + t^2\right)^{-1} = \frac{1}{\pi (1 + t^2)}, \quad t \in \R$ which is the standard Cauchy PDF.
The standard Cauchy distribution also arises naturally as the ratio of independent standard normal variables.
Suppose that $Z$ and $W$ are independent random variables, each with the standard normal distribution. Then $X = Z / W$ has the standard Cauchy distribution. Equivalently, the standard Cauchy distribution is the Student $t$ distribution with 1 degree of freedom.
Proof
By definition, $W^2$ has the chi-square distribution with 1 degree of freedom, and is independent of $Z$. Hence, also by definition, $X = Z / \sqrt{W^2} = Z / W$ has the Student $t$ distribution with 1 degree of freedom, so the theorem follows from the previous result.
If $X$ has the standard Cauchy distribution, then so does $Y = 1 / X$
Proof
This is a corollary of the previous result. Suppose that $Z$ and $W$ are independent variables, each with the standard normal distribution. Then $X = Z / W$ has the standard Cauchy distribution. But then $1/X = W/Z$ also has the standard Cauchy distribution.
The standard Cauchy distribution has the usual connections to the standard uniform distribution via the distribution function and the quantile function computed above.
The standard Cauchy distribution and the standard uniform distribution are related as follows:
1. If $U$ has the standard uniform distribution then $X = G^{-1}(U) = \tan\left[\pi \left(U - \frac{1}{2}\right)\right]$ has the standard Cauchy distribution.
2. If $X$ has the standard Cauchy distribution then $U = G(X) = \frac{1}{2} + \frac{1}{\pi} \arctan(X)$ has the standard uniform distribution.
Proof
Recall that if $U$ has the standard uniform distribution, then $G^{-1}(U)$ has distribution function $G$. Conversely, if $X$ has distribution function $G$, then since $G$ is strictly increasing, $G(X)$ has the standard uniform distribution.
Since the quantile function has a simple, closed form, it's easy to simulate the standard Cauchy distribution using the random quantile method.
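As a minimal Python sketch of the random quantile method (seed and sample size are arbitrary choices), the empirical quartiles of the simulated values should be close to the theoretical quartiles $-1$, $0$, $1$ computed above:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)          # standard uniform variates
x = np.tan(np.pi * (u - 0.5))          # X = G^{-1}(U) has the standard Cauchy distribution

print(np.quantile(x, [0.25, 0.5, 0.75]))   # compare with -1, 0, 1
```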
Open the random quantile experiment and select the Cauchy distribution. Keep the default parameter values and note again the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.
For the Cauchy distribution, the random quantile method has a nice physical interpretation. Suppose that a light source is 1 unit away from position 0 of an infinite, straight wall. We shine the light at the wall at an angle $\Theta$ (to the perpendicular) that is uniformly distributed on the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Then the position $X = \tan \Theta$ of the light beam on the wall has the standard Cauchy distribution. Note that this follows since $\Theta$ has the same distribution as $\pi \left(U - \frac{1}{2}\right)$ where $U$ has the standard uniform distribution.
Open the Cauchy experiment and keep the default parameter values.
1. Run the experiment in single-step mode a few times, to make sure that you understand the experiment.
2. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.
The General Cauchy Distribution
Like so many other standard distributions, the Cauchy distribution is generalized by adding location and scale parameters. Most of the results in this subsection follow immediately from results for the standard Cauchy distribution above and general results for location scale families.
Suppose that $Z$ has the standard Cauchy distribution and that $a \in \R$ and $b \in (0, \infty)$. Then $X = a + b Z$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
Distribution Functions
Suppose that $X$ has the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = \frac{b}{\pi [b^2 + (x - a)^2]}, \quad x \in \R$
1. $f$ is symmetric about $x = a$.
2. $f$ increases and then decreases, with mode $x = a$.
3. $f$ is concave upward, then downward, then upward again, with inflection points at $x = a \pm \frac{1}{\sqrt{3}} b$.
4. $f(x) \to 0$ as $x \to \infty$ and as $x \to -\infty$.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x - a}{b}\right)$ where $g$ is the standard Cauchy PDF.
Open the special distribution simulator and select the Cauchy distribution. Vary the parameters and note the location and shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has distribution function $F$ given by $F(x) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{x - a}{b} \right), \quad x \in \R$
Proof
Recall that $F(x) = G\left(\frac{x - a}{b}\right)$ where $G$ is the standard Cauchy CDF.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = a + b \tan\left[\pi \left(p - \frac{1}{2}\right)\right], \quad p \in (0, 1)$ In particular,
1. The first quartile is $F^{-1}\left(\frac{1}{4}\right) = a - b$.
2. The median is $F^{-1}\left(\frac{1}{2}\right) = a$.
3. The third quartile is $F^{-1}\left(\frac{3}{4}\right) = a + b$.
Proof
Recall that $F^{-1}(p) = a + b G^{-1}(p)$ where $G^{-1}$ is the standard Cauchy quantile function.
Open the special distribution calculator and select the Cauchy distribution. Vary the parameters and note the shape and location of the distribution and probability density functions. Compute a few values of the distribution and quantile functions.
Moments
Suppose again that $X$ has the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. Since the mean and other moments of the standard Cauchy distribution do not exist, they don't exist for the general Cauchy distribution either.
Open the special distribution simulator and select the Cauchy distribution. For selected values of the parameters, run the simulation 1000 times and note the behavior of the sample mean.
But of course the characteristic function of the Cauchy distribution exists and is easy to obtain from the characteristic function of the standard distribution.
$X$ has characteristic function $\chi$ given by $\chi(t) = \exp\left(a i t - b \left|t\right|\right)$ for $t \in \R$.
Proof
Recall that $\chi(t) = e^{i t a} \chi_0( b t)$ where $\chi_0$ is the standard Cauchy characteristic function.
Related Distributions
Like all location-scale families, the general Cauchy distribution is closed under location-scale transformations.
Suppose that $X$ has the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, and that $c \in \R$ and $d \in (0, \infty)$. Then $Y = c + d X$ has the Cauchy distribution with location parameter $c + d a$ and scale parameter $b d$.
Proof
Once again, we give the standard proof. By definition we can take $X = a + b Z$ where $Z$ has the standard Cauchy distribution. But then $Y = c + d X = (c + a d) + (b d) Z$.
Much more interesting is the fact that the Cauchy family is closed under sums of independent variables. In fact, this is the main reason that the generalization to a location-scale family is justified.
Suppose that $X_i$ has the Cauchy distribution with location parameter $a_i \in \R$ and scale parameter $b_i \in (0, \infty)$ for $i \in \{1, 2\}$, and that $X_1$ and $X_2$ are independent. Then $Y = X_1 + X_2$ has the Cauchy distribution with location parameter $a_1 + a_2$ and scale parameter $b_1 + b_2$.
Proof
This follows easily from the characteristic function. Let $\chi_i$ denote the characteristic function of $X_i$ for $i = 1, 2$ and $\chi$ the characteristic function of $Y$. Then $\chi(t) = \chi_1(t) \chi_2(t) = \exp\left(a_1 i t - b_1 \left|t\right|\right) \exp\left(a_2 i t - b_2 \left|t\right|\right) = \exp\left[\left(a_1 + a_2\right) i t - \left(b_1 + b_2\right) \left|t\right|\right]$
As a corollary, the Cauchy distribution is stable, with index $\alpha = 1$:
If $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$, then $X_1 + X_2 + \cdots + X_n$ has the Cauchy distribution with location parameter $n a$ and scale parameter $n b$.
Another corollary is the strange property that the sample mean of a random sample from a Cauchy distribution has that same Cauchy distribution. No wonder the expected value does not exist!
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, each with the Cauchy distribution with location parameter $a \in \R$ and scale parameter $b \in (0, \infty)$. (That is, $\bs{X}$ is a random sample of size $n$ from the Cauchy distribution.) Then the sample mean $M = \frac{1}{n} \sum_{i=1}^n X_i$ also has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
Proof
From the previous stability result, $Y = \sum_{i=1}^n X_i$ has the Cauchy distribution with location parameter $n a$ and scale parameter $n b$. But then by the scaling result, $M = Y / n$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
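A quick Monte Carlo check of this result in Python is sketched below (the location and scale parameters, sample size, and number of replications are arbitrary choices); since the mean does not exist, the comparison uses quartiles, which should be close to $a - b$, $a$, and $a + b$ for the sample means as well:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, n, reps = 2.0, 3.0, 50, 20_000              # arbitrary parameters and sample sizes

samples = a + b * rng.standard_cauchy(size=(reps, n))
means = samples.mean(axis=1)                      # one sample mean per replication

# The sample mean has the same Cauchy distribution, so its quartiles are a - b, a, a + b.
print(np.quantile(means, [0.25, 0.5, 0.75]), (a - b, a, a + b))
```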
The next result shows explicitly that the Cauchy distribution is infinitely divisible. But of course, infinite divisibility is also a consequence of stability.
Suppose that $a \in \R$ and $b \in (0, \infty)$. For every $n \in \N_+$ the Cauchy distribution with location parameter $a$ and scale parameter $b$ is the distribution of the sum of $n$ independent variables, each of which has the Cauchy distribution with location parameter $a / n$ and scale parameter $b/n$.
Our next result is a very slight generalization of the reciprocal result above for the standard Cauchy distribution.
Suppose that $X$ has the Cauchy distribution with location parameter $0$ and scale parameter $b \in (0, \infty)$. Then $Y = 1 / X$ has the Cauchy distribution with location parameter $0$ and scale parameter $1 / b$.
Proof
$X$ has the same distribution as $b Z$ where $Z$ has the standard Cauchy distribution. Hence $\frac{1}{X}$ has the same distribution as $\frac{1}{b} \frac{1}{Z}$. But by the result above, $\frac{1}{Z}$ also has the standard Cauchy distribution, so $\frac{1}{b} \frac{1}{Z}$ has the Cauchy distribution with location parameter $0$ and scale parameter $1 / b$.
As with its standard cousin, the general Cauchy distribution has simple connections with the standard uniform distribution via the distribution function and quantile function computed above, and in particular, can be simulated via the random quantile method.
Suppose that $a \in \R$ and $b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution, then $X = F^{-1}(U) = a + b \tan\left[\pi \left(U - \frac{1}{2}\right)\right]$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
2. If $X$ has the Cauchy distribution with location parameter $a$ and scale parameter $b$, then $U = F(X) = \frac{1}{2} + \frac{1}{\pi} \arctan\left(\frac{X - a}{b} \right)$ has the standard uniform distribution.
Open the random quantile experiment and select the Cauchy distribution. Vary the parameters and note again the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.
As before, the random quantile method has a nice physical interpretation. Suppose that a light source is $b$ units away from position $a$ of an infinite, straight wall. We shine the light at the wall at an angle $\Theta$ (to the perpendicular) that is uniformly distributed on the interval $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$. Then the position $X = a + b \tan \Theta$ of the light beam on the wall has the Cauchy distribution with location parameter $a$ and scale parameter $b$.
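A minimal Python sketch of this experiment (with arbitrary values of $a$ and $b$) generates the uniform angle and computes the position of the beam on the wall:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = 1.0, 2.0                                              # arbitrary location and scale

theta = rng.uniform(-np.pi / 2, np.pi / 2, size=100_000)     # uniformly distributed angle
x = a + b * np.tan(theta)                                    # position of the beam on the wall

print(np.quantile(x, [0.25, 0.5, 0.75]))    # compare with a - b, a, a + b
```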
Open the Cauchy experiment. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function. Note the behavior of the empirical mean and standard deviation.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Li}{\text{Li}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\skw}{\text{skew}}$
The exponential-logarithmic distribution arises when the rate parameter of the exponential distribution is randomized by the logarithmic distribution. The exponential-logarithmic distribution has applications in reliability theory in the context of devices or organisms that improve with age, due to hardening or immunity.
The Standard Exponential-Logarithmic Distribution
Distribution Functions
The standard exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ is a continuous distribution on $[0, \infty)$ with probability density function $g$ given by $g(x) = -\frac{(1 - p) e^{-x}}{\ln(p)[1 - (1 - p) e^{-x}]}, \quad x \in [0, \infty)$
1. $g$ is decreasing on $[0, \infty)$ with mode $x = 0$.
2. $g$ is concave upward on $[0, \infty)$.
Proof
Substituting $u = (1 - p) e^{-x}$, $du = -(1 - p) e^{-x} dx$ gives $\int_0^\infty \frac{(1 - p) e^{-x}}{1 - (1 - p) e^{-x}} dx = \int_0^{1-p} \frac{du}{1 - u} = -\ln(p)$ so it follows that $g$ is a PDF. For the shape of the graph of $g$ note that \begin{align} g^\prime(x) & = \frac{(1 - p) e^{-x}}{\ln(p) [1 - (1 - p) e^{-x}]^2}, \quad x \in [0, \infty) \ g^{\prime\prime}(x) & = -\frac{(1 - p) e^{-x} [1 + (1 - p) e^{-x}]}{\ln(p) [1 - (1 - p) e^{-x}]^3}, \quad x \in [0, \infty) \end{align}
Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function $G$ is given by $G(x) = 1 - \frac{\ln\left[1 - (1 - p) e^{-x}\right]}{\ln(p)}, \quad x \in [0, \infty)$
Proof
This follows from the same integral substitution used in the previous proof.
The quantile function $G^{-1}$ is given by $G^{-1}(u) = \ln\left(\frac{1 - p}{1 - p^{1 - u}}\right) = \ln(1 - p) - \ln\left(1 - p^{1 - u}\right), \quad u \in [0, 1)$
1. The first quartile is $q_1 = \ln(1 - p) - \ln\left(1 - p^{3/4}\right)$.
2. The median is $q_2 = \ln(1 - p) - \ln\left(1 - p^{1/2}\right) = \ln\left(1 + \sqrt{p}\right)$.
3. The third quartile is $q_3 = \ln(1 - p) - \ln\left(1 - p^{1/4}\right)$.
Proof
The formula for $G^{-1}$ follows from the distribution function by solving $u = G(x)$ for $x$ in terms of $u$.
Open the special distribution calculator and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the shape parameter, compute a few values of the distribution function and the quantile function.
The reliability function $G^c$ is given by $G^c(x) = \frac{\ln\left[1 - (1 - p) e^{-x}\right]}{\ln(p)}, \quad x \in [0, \infty)$
Proof
This follows trivially from the distribution function since $G^c = 1 - G$.
The standard exponential-logarithmic distribution has decreasing failure rate.
The failure rate function $r$ is given by $r(x) = -\frac{(1 - p) e^{-x}}{\left[1 - (1 - p) e^{-x}\right] \ln\left[1 - (1 - p) e^{-x}\right]}, \quad x \in (0, \infty)$
1. $r$ is decreasing on $[0, \infty)$.
2. $r$ is concave upward on $[0, \infty)$.
Proof
Recall that $r(x) = g(x) \big/ G^c(x)$ so the formula follows from the probability density function and the distribution function given above.
The Polylogarithm
The moments of the standard exponential-logarithmic distribution cannot be expressed in terms of the usual elementary functions, but can be expressed in terms of a special function known as the polylogarithm.
The polylogarithm of order $s \in \R$ is defined by $\Li_s(x) = \sum_{k=1}^\infty \frac{x^k}{k^s}, \quad x \in (-1, 1)$ The polylogarithm is a power series in $x$ with radius of convergence 1 for each $s \in \R$.
Proof
To show that the radius of convergence is 1, we use the ratio test from calculus. For $s \in \R$, $\frac{|x|^{k+1} / (k + 1)^s}{|x|^k / k^s} = |x| \left(\frac{k}{k + 1}\right)^s \to |x| \text{ as } k \to \infty$ Hence the series converges absolutely for $|x| \lt 1$ and diverges for $|x| \gt 1$.
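Since the series converges for $|x| \lt 1$, the polylogarithm can be approximated by truncating the series; the following Python sketch (the truncation point is an arbitrary choice) checks the order 0 and order 1 identities given below:

```python
import numpy as np

def polylog(s, x, terms=10_000):
    """Truncated series approximation of Li_s(x) for |x| < 1."""
    k = np.arange(1, terms + 1)
    return np.sum(x**k / k**s)

x = 0.3
print(polylog(0, x), x / (1 - x))        # order 0: geometric series
print(polylog(1, x), -np.log(1 - x))     # order 1: series for the natural logarithm
```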
In this section, we are only interested in nonnegative integer orders, but the polylogarithm will show up again, for non-integer orders, in the study of the zeta distribution.
The polylogarithm functions of orders 0, 1, 2, and 3.
1. The polylogarithm of order 0 is $\Li_0(x) = \sum_{k=1}^\infty x^k = \frac{x}{1 - x}, \quad x \in (-1, 1)$
2. The polylogarithm of order 1 is $\Li_1(x) = \sum_{k=1}^\infty \frac{x^k}{k} = -\ln(1 - x), \quad x \in (-1, 1)$
3. The polylogarithm of order 2 is known as the dilogarithm
4. The polylogarithm of order 3 is known as the trilogarithm.
Thus, the polylogarithm of order 0 is a simple geometric series, and the polylogarithm of order 1 is the standard power series for the natural logarithm. Note that the probability density function $g$ of the standard exponential-logarithmic distribution can be written in terms of the polylogarithms of orders 0 and 1: $g(x) = -\frac{\Li_0\left[(1 - p) e^{-x}\right]}{\ln(p)} = \frac{\Li_0\left[(1 - p) e^{-x}\right]}{\Li_1(1 - p)}, \quad x \in [0, \infty)$ The most important property of the polylogarithm is given in the following theorem:
The polylogarithm satisfies the following recursive integral formula: $\Li_{s+1}(x) = \int_0^x \frac{\Li_s(t)}{t} dt; \quad s \in \R, \; x \in (-1, 1)$ Equivalently, $x \, \Li_{s+1}^\prime(x) = \Li_s(x)$ for $x \in (-1, 1)$ and $s \in \R$.
Proof
Recall that a power series may be integrated term by term, and the integrated series has the same radius of convergence. Hence for $s \in \R$, $\int_0^x \frac{\Li_s(t)}{t} dt = \sum_{k=1}^\infty \int_0^x \frac{t^{k-1}}{k^s} dt = \sum_{k=1}^\infty \frac{x^k}{k^{s+1}} = \Li_{s+1}(x), \quad x \in (-1, 1)$
When $s \gt 1$, the polylogarithm series converges at $x = 1$ also, and $\Li_s(1) = \zeta(s) = \sum_{k=1}^\infty \frac{1}{k^s}$ where $\zeta$ is the Riemann zeta function, named for Bernhard Riemann. The polylogarithm can be extended to complex orders and defined for complex $z$ with $|z| \lt 1$, but the simpler version suffices for our work here.
Moments
We assume that $X$ has the standard exponential-logarithmic distribution with shape parameter $p \in (0, 1)$.
The moments of $X$ (about 0) are $\E(X^n) = -n! \frac{\Li_{n+1}(1 - p)}{\ln(p)} = n! \frac{\Li_{n+1}(1 - p)}{\Li_1(1 - p)}, \quad n \in \N$
1. $\E(X^n) \to 0$ as $p \downarrow 0$
2. $\E(X^n) \to n!$ as $p \uparrow 1$
Proof
As noted earlier in the discussion of the polylogarithm, the PDF of $X$ can be written as $g(x) = -\frac{1}{\ln(p)} \sum_{k=1}^\infty (1 - p)^k e^{-kx}, \quad x \in [0, \infty)$ Hence $\E(X^n) = -\frac{1}{\ln(p)} \int_0^\infty \sum_{k=1}^\infty (1 - p)^k x^n e^{-k x} dx = -\frac{1}{\ln(p)} \sum_{k=1}^\infty (1 - p)^k \int_0^\infty x^n e^{-k x} dx$ But $\int_0^\infty x^n e^{-k x} dx = n! \big/ k^{n + 1}$ and hence $\E(X^n) = -\frac{1}{\ln(p)} n! \sum_{k=1}^\infty \frac{(1 - p)^k}{k^{n+1}} = - n! \frac{\Li_{n+1}(1 - p)}{\ln(p)}$
1. As $p \downarrow 0$, the numerator in the last expression for $\E(X^n)$ converges to $n! \zeta(n + 1)$ while the denominator diverges to $\infty$.
2. As $p \uparrow 1$, the expression for $\E(X^n)$ has the indeterminate form $\frac{0}{0}$. An application of L'Hospital's rule and the derivative rule above gives $\lim_{p \uparrow 1} \E(X^n) = \lim_{p \uparrow 1} n! p \frac{\Li_n(1 - p)}{1 - p}$ But from the series definition of the polylogarithm, $\Li_n(x) \big/ x \to 1$ as $x \to 0$.
We will get some additional insight into the asymptotics below when we consider the limiting distribution as $p \downarrow 0$ and $p \uparrow 1$. The mean and variance of the standard exponential logarithmic distribution follow easily from the general moment formula.
The mean and variance of $X$ are
1. $\E(X) = - \Li_2(1 - p) \big/ \ln(p)$
2. $\var(X) = -2 \Li_3(1 - p) \big/ \ln(p) - \left[\Li_2(1 - p) \big/ \ln(p)\right]^2$
From the asymptotics of the general moments, note that $\E(X) \to 0$ and $\var(X) \to 0$ as $p \downarrow 0$, and $\E(X) \to 1$ and $\var(X) \to 1$ as $p \uparrow 1$.
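As a numerical check of the mean formula, the following Python sketch (with an arbitrary shape parameter, using SciPy's quad routine and a truncated polylogarithm series) compares $-\Li_2(1 - p) \big/ \ln(p)$ with direct numerical integration of $x g(x)$:

```python
import numpy as np
from scipy.integrate import quad

def polylog(s, x, terms=10_000):
    """Truncated series approximation of Li_s(x) for |x| < 1."""
    k = np.arange(1, terms + 1)
    return np.sum(x**k / k**s)

p = 0.3                                                       # arbitrary shape parameter
g = lambda x: -(1 - p) * np.exp(-x) / (np.log(p) * (1 - (1 - p) * np.exp(-x)))

mean_formula = -polylog(2, 1 - p) / np.log(p)                 # E(X) = -Li_2(1-p) / ln(p)
mean_numeric, _ = quad(lambda x: x * g(x), 0, np.inf)
print(mean_formula, mean_numeric)                             # the two values should agree
```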
Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
The standard exponential-logarithmic distribution has the usual connections to the standard uniform distribution by means of the distribution function and the quantile function computed above.
Suppose that $p \in (0, 1)$.
1. If $U$ has the standard uniform distribution then $X = \ln\left(\frac{1 - p}{1 - p^U}\right) = \ln(1 - p) - \ln\left(1 - p^U \right)$ has the standard exponential-logarithmic distribution with shape parameter $p$.
2. If $X$ has the standard exponential-logarithmic distribution with shape parameter $p$ then $U = \frac{\ln\left[1 - (1 - p) e^{-X}\right]}{\ln(p)}$ has the standard uniform distribution.
Proof
1. Recall that if $U$ has the standard uniform distribution, then $G^{-1}(U)$ has the standard exponential-logarithmic distribution with shape parameter $p$. But $1 - U$ also has the standard uniform distribution and hence $X = G^{-1}(1 - U)$ also has the standard exponential-logarithmic distribution with shape parameter $p$.
2. Similarly, if $X$ has the exponential-logarithmic distribution with shape parameter $p$ then $G(X)$ has the standard uniform distribution. Hence $U = 1 - G(X)$ also has the standard uniform distribution.
Since the quantile function of the standard exponential-logarithmic distribution has a simple closed form, the distribution can be simulated using the random quantile method.
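A minimal Python sketch of the random quantile method for this distribution (the shape parameter, seed, and sample size are arbitrary choices) checks the empirical median against the formula $\ln\left(1 + \sqrt{p}\right)$ above:

```python
import numpy as np

p = 0.5                                          # arbitrary shape parameter
rng = np.random.default_rng(7)
u = rng.uniform(size=100_000)
x = np.log(1 - p) - np.log(1 - p**(1 - u))       # X = G^{-1}(U)

print(np.median(x), np.log(1 + np.sqrt(p)))      # empirical vs theoretical median
```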
Open the random quantile experiment and select the exponential-logarithmic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
As the name suggests, the standard exponential-logarithmic distribution arises from the exponential distribution and the logarithmic distribution via a certain type of randomization.
Suppose that $\bs T = (T_1, T_2, \ldots)$ is a sequence of independent random variables, each with the standard exponential distribution. Suppose also that $N$ has the logarithmic distribution with parameter $1 - p \in (0, 1)$ and is independent of $\bs T$. Then $X = \min\{T_1, T_2, \ldots, T_N\}$ has the standard exponential-logarithmic distribution with shape parameter $p$.
Proof
It's best to work with reliability functions. For $n \in \N_+$, $\min\{T_1, T_2, \ldots, T_n\}$ has the exponential distribution with rate parameter $n$, and hence $\P(\min\{T_1, T_2, \ldots T_n\} \gt x) = e^{-n x}$ for $x \in [0, \infty)$. Recall also that $\P(N = n) = -\frac{(1 - p)^n}{n \ln(p)} \quad, n \in \N_+$ Hence, using the polylogarithm of order 1 (the standard power series for the logarithm), $\P(X \gt x) = \E[\P(X \gt x \mid N)] = -\frac{1}{\ln(p)} \sum_{n=1}^\infty e^{-n x} \frac{(1 - p)^n}{n} = -\frac{1}{\ln(p)} \sum_{n=1}^\infty \frac{\left[e^{-x}(1 - p)\right]^n}{n} = \frac{\ln\left[1 - e^{-x} (1 - p)\right]}{\ln(p)}$ As a function of $x$, this is the reliability function of the exponential-logarithmic distribution with shape parameter $p$.
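The construction can be simulated directly; the following Python sketch (with arbitrary shape parameter, seed, and number of replications) draws $N$ with NumPy's logseries sampler, whose parameter $1 - p$ matches the logarithmic distribution above, and compares the empirical survival probabilities with the reliability function:

```python
import numpy as np

p = 0.4                                          # arbitrary shape parameter
rng = np.random.default_rng(8)
reps = 20_000

n = rng.logseries(1 - p, size=reps)              # N from the logarithmic distribution with parameter 1 - p
x = np.array([rng.exponential(size=k).min() for k in n])   # X = min(T_1, ..., T_N)

# Compare the empirical survival function with ln[1 - (1 - p) e^{-t}] / ln(p).
for t in (0.5, 1.0, 2.0):
    print(t, (x > t).mean(), np.log(1 - (1 - p) * np.exp(-t)) / np.log(p))
```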
Also of interest, of course, are the limiting distributions of the standard exponential-logarithmic distribution as $p \to 0$ and as $p \to 1$.
The standard exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ converges to
1. Point mass at 0 as $p \to 0$.
2. The standard exponential distribution as $p \to 1$.
Proof
It's slightly easier to work with the reliability function $G^c$ rather than the ordinary (left) distribution function $G$.
1. Note that $G^c(0) = 1$ for every $p \in (0, 1)$. On the other hand, if $x \gt 0$ then $G^c(x) \to 0$ as $p \to 0$.
2. $G^c(x)$ has the indeterminate form $\frac{0}{0}$ as $p \to 1$. An application of L'Hospital's rule shows that $\lim_{p \to 1} G^c(x) = \lim_{p \to 1} \frac{p e^{-x}}{1 - (1 - p) e^{-x}} = e^{-x}, \quad x \in [0, \infty)$ As a function of $x$, this is the reliability function of the standard exponential distribution.
The General Exponential-Logarithmic Distribution
The standard exponential-logarithmic distribution is generalized, like so many distributions on $[0, \infty)$, by adding a scale parameter.
Suppose that $Z$ has the standard exponential-logarithmic distribution with shape parameter $p \in (0, 1)$. If $b \in (0, \infty)$, then $X = b Z$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$.
Using the same terminology as the exponential distribution, $1/b$ is called the rate parameter.
Distribution Functions
Suppose that $X$ has the exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ and scale parameter $b \in (0, \infty)$.
$X$ has probability density function $f$ given by $f(x) = -\frac{(1 - p) e^{-x / b}}{b \ln(p)[1 - (1 - p) e^{-x / b}]}, \quad x \in [0, \infty)$
1. $f$ is decreasing on $[0, \infty)$ with mode $x = 0$.
2. $f$ is concave upward on $[0, \infty)$.
Proof
Recall that $f(x) = \frac{1}{b}g\left(\frac{x}{b}\right)$ for $x \in [0, \infty)$ where $g$ is the PDF of the standard distribution.
Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has distribution function $F$ given by $F(x) = 1 - \frac{\ln\left[1 - (1 - p) e^{-x / b}\right]}{\ln(p)}, \quad x \in [0, \infty)$
Proof
Recall that $F(x) = G(x / b)$ for $x \in [0, \infty)$ where $G$ is the CDF of the standard distribution.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(u) = b \ln\left(\frac{1 - p}{1 - p^{1 - u}}\right) = b \left[\ln(1 - p) - \ln\left(1 - p^{1 - u}\right)\right], \quad u \in [0, 1)$
1. The first quartile is $q_1 = b \left[\ln(1 - p) - \ln\left(1 - p^{3/4}\right)\right]$.
2. The median is $q_2 = b \left[\ln(1 - p) - \ln\left(1 - p^{1/2}\right)\right] = b \ln\left(1 + \sqrt{p}\right)$.
3. The third quartile is $q_3 = b \left[\ln(1 - p) - \ln\left(1 - p^{1/4}\right) \right]$.
Proof
Recall that $F^{-1}(u) = b G^{-1}(u)$ where $G^{-1}$ is the quantile function of the standard distribution.
Open the special distribution calculator and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the probability density and distribution functions. For selected values of the parameters, compute a few values of the distribution function and the quantile function.
$X$ has reliability function $F^c$ given by $F^c(x) = \frac{\ln\left[1 - (1 - p) e^{-x / b}\right]}{\ln(p)}, \quad x \in [0, \infty)$
Proof
This follows trivially from the distribution function since $F^c = 1 - F$.
The exponential-logarithmic distribution has decreasing failure rate.
The failure rate function $R$ of $X$ is given by $R(x) = -\frac{(1 - p) e^{-x / b}}{b \left[1 - (1 - p) e^{-x / b}\right] \ln\left[1 - (1 - p) e^{-x / b}\right]}, \quad x \in [0, \infty)$
1. $R$ is decreasing on $[0, \infty)$.
2. $R$ is concave upward on $[0, \infty)$.
Proof
Recall that $R(x) = \frac{1}{b} r\left(\frac{x}{b}\right)$ for $x \in [0, \infty)$, where $r$ is the failure rate function of the standard distribution. Alternately, $R(x) = f(x) \big/ F^c(x)$.
Moments
Suppose again that $X$ has the exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ and scale parameter $b \in (0, \infty)$. The moments of $X$ can be computed easily from the representation $X = b Z$ where $Z$ has the standard exponential-logarithmic distribution with shape parameter $p$.
The moments of $X$ (about 0) are $\E(X^n) = -b^n n! \frac{\Li_{n+1}(1 - p)}{\ln(p)}, \quad n \in \N$
1. $\E(X^n) \to 0$ as $p \downarrow 0$
2. $\E(X^n) \to b^n n!$ as $p \uparrow 1$
Proof
These results follow from basic properties of expected value and the corresponding results for the standard distribution. We can write $X = b Z$ where $Z$ has the standard exponential-logarithmic distribution with shape parameter $p$. Hence $\E(X^n) = b^n \E(Z^n)$.
The mean and variance of $X$ are
1. $\E(X) = - b \Li_2(1 - p) \big/ \ln(p)$
2. $\var(X) = b^2 \left(-2 \Li_3(1 - p) \big/ \ln(p) - \left[\Li_2(1 - p) \big/ \ln(p)\right]^2 \right)$
From the general moment results, note that $\E(X) \to 0$ and $\var(X) \to 0$ as $p \downarrow 0$, while $\E(X) \to b$ and $\var(X) \to b^2$ as $p \uparrow 1$.
Open the special distribution simulator and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
Since the exponential-logarithmic distribution is a scale family for each value of the shape parameter, it is trivially closed under scale transformations.
Suppose that $X$ has the exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ and scale parameter $b \in (0, \infty)$. If $c \in (0, \infty)$, then $Y = c X$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b c$.
Proof
By definition, we can take $X = b Z$ where $Z$ has the standard exponential-logarithmic distribution with shape parameter $p$. But then $Y = c X = (b c) Z$.
Once again, the exponential-logarithmic distribution has the usual connections to the standard uniform distribution by means of the distribution function and quantile function computed above.
Suppose that $p \in (0, 1)$ and $b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = b \left[\ln\left(\frac{1 - p}{1 - p^U}\right)\right] = b \left[\ln(1 - p) - \ln\left(1 - p^U \right)\right]$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$.
2. If $X$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$, then $U = \frac{\ln\left[1 - (1 - p) e^{-X / b}\right]}{\ln(p)}$ has the standard uniform distribution.
Proof
These results follow from the representation $X = b Z$, where $Z$ has the standard exponential-logarithmic distribution with shape parameter $p$, and the corresponding result for $Z$.
Again, since the quantile function of the exponential-logarithmic distribution has a simple closed form, the distribution can be simulated using the random quantile method.
Open the random quantile experiment and select the exponential-logarithmic distribution. Vary the shape and scale parameters and note the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Suppose that $\bs{T} = (T_1, T_2, \ldots)$ is a sequence of independent random variables, each with the exponential distribution with scale parameter $b \in (0, \infty)$. Suppose also that $N$ has the logarithmic distribution with parameter $1 - p \in (0, 1)$ and is independent of $\bs{T}$. Then $X = \min\{T_1, T_2, \ldots, T_N\}$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$.
Proof
Note that $V_i = T_i / b$ has the standard exponential distribution. Hence by the corresponding result above, $Z = \min\{V_1, V_2, \ldots, V_N\}$ has the standard exponential-logarithmic distribution with shape parameter $p$. Hence $X = b Z$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$.
The limiting distributions as $p \downarrow 0$ and as $p \uparrow 1$ also follow easily from the corresponding results for the standard case.
For fixed $b \in (0, \infty)$, the exponential-logarithmic distribution with shape parameter $p \in (0, 1)$ and scale parameter $b$ converges to
1. Point mass at 0 as $p \downarrow 0$.
2. The exponential distribution with scale parameter $b$ as $p \uparrow 1$.
Proof
Suppose that $X$ has the exponential-logarithmic distribution with shape parameter $p$ and scale parameter $b$, so that $X = b Z$ where $Z$ has the standard exponential-logarithmic distribution with shape parameter $p$. Using the corresponding result above,
1. The distribution of $Z$ converges to point mass at 0 as $p \downarrow 0$ and hence so does the distribution of $X$.
2. The distribution of $Z$ converges to the standard exponential distribution as $p \uparrow 1$ and hence the distribution of $X$ converges to the exponential distribution with scale parameter $b$.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\R}{\mathbb{R}}$
The Gompertz distribution, named for Benjamin Gompertz, is a continuous probability distribution on $[0, \infty)$ that has exponentially increasing failure rate. Unfortunately, the death rate of adult humans increases exponentially, so the Gompertz distribution is widely used in actuarial science.
The Basic Gompertz Distribution
Distribution Functions
We will start by giving the reliability function, since most applications of the Gompertz distribution deal with mortality.
The basic Gompertz distribution with shape parameter $a \in (0, \infty)$ is a continuous distribution on $[0, \infty)$ with reliability function $G^c$ given by $G^c(x) = \exp\left[-a\left(e^x - 1\right)\right], \quad x \in [0, \infty)$ The special case $a = 1$ gives the standard Gompertz distribution.
Proof
Note that $G^c$ is continuous and decreasing on $[0, \infty)$ with $G^c(0) = 1$ and $G^c(x) \to 0$ as $x \to \infty$.
The distribution function $G$ is given by $G(x) = 1 - \exp\left[-a\left(e^x - 1\right)\right], \quad x \in [0, \infty)$
Proof
This follows trivially from the reliability function, since $G = 1 - G^c$.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = \ln\left[1 - \frac{1}{a} \ln(1 - p)\right], \quad p \in [0, 1)$
1. The first quartile is $q_1 = \ln\left[1 + (\ln 4 - \ln 3) \big/ a\right]$.
2. The median is $q_2 = \ln\left(1 + \ln 2 \big/ a\right)$.
3. The third quartile is $q_3 = \ln\left(1 + \ln 4 \big/ a\right)$.
Proof
The formula for $G^{-1}$ follows from the distribution function by solving $p = G(x)$ for $x$ in terms of $p$.
For the standard Gompertz distribution ($a = 1$), the first quartile is $q_1 = \ln\left[1 + (\ln 4 - \ln 3)\right] \approx 0.2529$, the median is $q_2 = \ln\left(1 + \ln 2\right) \approx 0.5266$, and the third quartile is $q_3 = \ln\left(1 + \ln 4\right) \approx 0.8697$.
Open the special distribution calculator and select the Gompertz distribution. Vary the shape parameter and note the shape of the distribution function. For selected values of the shape parameter, compute a few values of the distribution function and the quantile function.
The probability density function $g$ is given by $g(x) = a e^x \exp\left[-a\left(e^x - 1\right)\right], \quad x \in [0, \infty)$
1. If $a \lt 1$ then $g$ is increasing and then decreasing with mode $x = -\ln(a)$.
2. If $a \ge 1$ then $g$ is decreasing with mode $x = 0$.
3. If $a \lt (3 - \sqrt{5})\big/2 \approx 0.382$ then $g$ is concave up and then down then up again, with inflection points at $x = \ln\left[(3 \pm \sqrt{5})\big/2 a\right]$.
4. If $(3 - \sqrt{5})\big/2 \le a \lt (3 + \sqrt{5})\big/2 \approx 2.618$ then $g$ is concave down and then up, with inflection point at $x = \ln\left[(3 + \sqrt{5})\big/2 a\right]$.
5. If $a \ge (3 + \sqrt{5})\big/2$ then $g$ is concave up.
Proof
The formula for $g$ follows from the distribution function since $g = G^\prime$. Parts (a)–(e) follow from \begin{align} g^\prime(x) & = a e^x (1 - a e^x) \exp\left[-a\left(e^x - 1\right)\right] \ g^{\prime\prime}(x) & = a e^x (1 - 3 a e^x + a^2 e^{2x}) \exp\left[-a\left(e^x - 1\right)\right] \end{align}
So for the standard Gompertz distribution ($a = 1$), the inflection point is $x = \ln\left[(3 + \sqrt{5})\big/2\right] \approx 0.9624$.
Open the special distribution simulator and select the Gompertz distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
Finally, as promised, the Gompertz distribution has exponentially increasing failure rate.
The failure rate function $r$ is given by $r(x) = a e^x$ for $x \in [0, \infty)$
Proof
Recall that the failure rate function is $r(x) = g(x) \big/ G^c(x)$, so the result follows from the probability density function and the reliability function given above.
Moments
The moments of the basic Gompertz distribution cannot be given in simple closed form, but the mean and moment generating function can at least be expressed in terms of a special function known as the exponential integral. There are many variations on the exponential integral, but for our purposes, the following version is best:
The exponential integral with parameter $a \in (0, \infty)$ is the function $E_a: \R \to (0, \infty)$ defined by $E_a(t) = \int_1^\infty u^t e^{-a u} du, \quad t \in \R$
For the remainder of this discussion, we assume that $X$ has the basic Gompertz distribution with shape parameter $a \in (0, \infty)$.
$X$ has moment generating function $m$ given by $m(t) = \E\left(e^{t X}\right) = a e^a E_a(t), \quad t \in \R$
Proof
Using the substitution $u = e^x$ we have $m(t) = \int_0^\infty e^{t x} a e^x e^a \exp\left(-a e^x \right) dx = a e^a \int_1^\infty u^t e^{-a u} du = a e^a E_a(t)$
It follows that $X$ has moments of all orders. Here is the mean:
$X$ has mean $\E(X) = e^a E_a(-1)$.
Proof
First we use the substitution $y = e^x$ to get $\E(X) = \int_0^\infty x a e^x e^a \exp\left(-a e^x\right) dx = a e^a \int_1^\infty \ln(y) e^{-a y} dy$ Next, integration by parts with $u = \ln y$, $dv = e^{-a y} dy$ gives $\E(X) = e^a \int_1^\infty \frac{1}{y} e^{-a y} dy = e^a E_a(-1)$
If $X$ has the standard Gompertz distribution, $\E(X) \approx 0.5963$.
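The exponential integral, and hence the mean, is easy to approximate numerically; the following Python sketch (using SciPy's quad routine) reproduces the value $\E(X) \approx 0.5963$ for the standard Gompertz distribution:

```python
import numpy as np
from scipy.integrate import quad

def exp_integral(a, t):
    """E_a(t) = integral from 1 to infinity of u^t exp(-a u) du."""
    value, _ = quad(lambda u: u**t * np.exp(-a * u), 1, np.inf)
    return value

a = 1.0                                          # standard Gompertz distribution
print(np.exp(a) * exp_integral(a, -1))           # mean, approximately 0.5963
```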
Open the special distribution simulator and select the Gompertz distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
The basic Gompertz distribution has the usual connections to the standard uniform distribution by means of the distribution function and quantile function computed above.
Suppose that $a \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = \ln\left(1 - \frac{1}{a} \ln U \right)$ has the basic Gompertz distribution with shape parameter $a$.
2. If $X$ has the basic Gompertz distribution with shape parameter $a$ then $U = \exp\left[-a\left(e^X - 1\right)\right]$ has the standard uniform distribution.
Proof
1. Recall that if $U$ has the standard uniform distribution, then $1 - U$ also has the standard uniform distribution, and hence $X = G^{-1}(1 - U)$ has the basic Gompertz distribution with shape parameter $a$.
2. If $X$ has the basic Gompertz distribution with shape parameter $a$ then $G(X)$ has the standard uniform distribution, and hence so does $U = 1 - G(X)$.
Since the quantile function of the basic Gompertz distribution has a simple closed form, the distribution can be simulated using the random quantile method.
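As a minimal Python sketch of the random quantile method (the shape parameter, seed, and sample size are arbitrary choices), the empirical median is compared with the formula $\ln\left(1 + \ln 2 \big/ a\right)$ above:

```python
import numpy as np

a = 0.5                                          # arbitrary shape parameter
rng = np.random.default_rng(9)
u = rng.uniform(size=100_000)
x = np.log(1 - np.log(u) / a)                    # X = ln(1 - ln(U)/a), basic Gompertz

print(np.median(x), np.log(1 + np.log(2) / a))   # empirical vs theoretical median
```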
Open the random quantile experiment and select the Gompertz distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The basic Gompertz distribution also has simple connections to the exponential distribution.
Suppose that $a \in (0, \infty)$.
1. If $X$ has the basic Gompertz distribution with shape parameter $a$, then $Y = e^X - 1$ has the exponential distribution with rate parameter $a$.
2. If $Y$ has the exponential distribution with rate parameter $a$, then $X = \ln(Y + 1)$ has the Gompertz distribution with shape parameter $a$.
Proof
These results follow from the standard change of variables formula. The transformations, which are inverses of each other, are $y = e^x - 1$ and $x = \ln(y + 1)$ for $x, \, y \in [0, \infty)$. Let $g$ and $h$ denote PDFs of $X$ and $Y$ respectively.
1. We start with $g(x) = a e^x \exp\left[-a\left(e^x - 1\right)\right]$ for $x \in [0, \infty)$ and then $h(y) = g(x) \frac{dx}{dy} = a \exp[\ln(y + 1)] \exp\{-a [\exp(\ln(y + 1)) - 1]\} \frac{1}{y + 1} = a e^{-a y}, \quad y \in [0, \infty)$ which is the PDF of the exponential distribution with rate parameter $a$.
2. We start with $h(y) = a e^{-a y}$ for $y \in [0, \infty)$ and then $g(x) = h(y) \frac{dy}{dx} = a \exp\left[-a (e^x - 1)\right] e^x, \quad x \in [0, \infty)$ which is the PDF of the Gompertz distribution with shape parameter $a$.
In particular, if $Y$ has the standard exponential distribution (rate parameter 1), then $X = \ln(Y + 1)$ has the standard Gompertz distribution (shape parameter 1). Since the exponential distribution is a scale family (the scale parameter is the reciprocal of the rate parameter), we can construct an arbitrary basic Gompertz variable from a standard exponential variable. Specifically, if $Y$ has the standard exponential distribution and $a \in (0, \infty)$, then $X = \ln\left(\frac{1}{a}Y + 1 \right)$ has the basic Gompertz distribution with shape parameter $a$.
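A minimal Python sketch of this construction (with an arbitrary shape parameter and seed) transforms standard exponential variates and checks the empirical reliability against $\exp\left[-a\left(e^t - 1\right)\right]$:

```python
import numpy as np

a = 2.0                                          # arbitrary shape parameter
rng = np.random.default_rng(10)
y = rng.exponential(size=100_000)                # standard exponential variates
x = np.log(y / a + 1)                            # basic Gompertz with shape parameter a

t = 0.3
print((x > t).mean(), np.exp(-a * (np.exp(t) - 1)))   # empirical vs theoretical reliability
```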
The extreme value distribution (Gumbel distribution) is also related to the Gompertz distribution.
If $X$ has the standard extreme value distribution for minimums, then the conditional distribution of $X$ given $X \ge 0$ is the standard Gompertz distribution.
Proof
By definition, $X$ has PDF $f$ given by $f(x) = e^x \exp\left(-e^x\right)$ for $x \in \R$. The conditional PDF of $X$ given $X \ge 0$ is $g(x) = \frac{f(x)}{\P(X \ge 0)} = \frac{e^x \exp\left(-e^x\right)}{e^{-1}} = e^x \exp\left[-\left(e^x - 1\right)\right], \quad x \in [0, \infty)$ which is the PDF of the standard Gompertz distribution.
The General Gompertz Distribution
The basic Gompertz distribution is generalized, like so many distributions on $[0, \infty)$, by adding a scale parameter. Recall that scale transformations often correspond to a change of units (minutes to hours, for example) and thus are fundamental.
If $Z$ has the basic Gompertz distribution with shape parameter $a \in (0, \infty)$ and if $b \in (0, \infty)$, then $X = b Z$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$.
Distribution Functions
Suppose that $X$ has the Gompertz distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has reliability function $F^c$ given by $F^c(x) = \P(X \gt x) = \exp\left[-a \left(e^{x / b} - 1\right)\right], \quad x \in [0, \infty)$
Proof
Recall that $F^c(x) = G^c(x / b)$ where $G^c$ is the reliability function of the corresponding basic distribution.
$X$ has distribution function $F$ given by $F(x) = \P(X \le x) = 1 - \exp\left[-a \left(e^{x / b} - 1\right)\right], \quad x \in [0, \infty)$
Proof
As before, $F = 1 - F^c$. Also, $F(x) = G(x / b)$ where $G$ is the CDF of the corresponding basic distribution.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = b \ln\left[1 - \frac{1}{a} \ln(1 - p)\right], \quad p \in [0, 1)$
1. The first quartile is $q_1 = b \ln\left[1 + (\ln 4 - \ln 3) \big/ a\right]$.
2. The median is $q_2 = b \ln\left(1 + \ln 2 \big/ a\right)$.
3. The third quartile is $q_3 = b \ln\left(1 + \ln 4 \big/ a\right)$.
Proof
Recall that $F^{-1}(p) = b G^{-1}(p)$ where $G^{-1}$ is the quantile function of the corresponding basic distribution.
Open the special distribution calculator and select the Gompertz distribution. Vary the shape and scale parameters and note the shape and location of the distribution function. For selected values of the parameters, compute a few values of the distribution function and the quantile function.
$X$ has probability density function $f$ given by $f(x) = \frac{a}{b} e^{x/b} \exp\left[-a\left(e^{x/b} - 1\right)\right], \quad x \in [0, \infty)$
1. If $a \lt 1$ then $f$ is increasing and then decreasing with mode $x = -b \ln(a)$.
2. If $a \ge 1$ then $f$ is decreasing with mode $x = 0$.
3. If $a \lt (3 - \sqrt{5}) \big/ 2 \approx 0.382$ then $f$ is concave up and then down then up again, with inflection points at $x = b \ln\left[(3 \pm \sqrt{5}) \big/ 2 a\right]$.
4. If $(3 - \sqrt{5}) \big/ 2 \le a \lt (3 + \sqrt{5}) \big/ 2 \approx 2.618$ then $f$ is concave down and then up, with inflection point at $x = b \ln\left[(3 + \sqrt{5}) \big/ 2 a\right]$.
5. If $a \ge (3 + \sqrt{5}) \big/ 2$ then $f$ is concave up.
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ where $g$ is the PDF of the corresponding basic distribution.
Open the special distribution simulator and select the Gompertz distribution. Vary the shape and scale parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Once again, $X$ has exponentially increasing failure rate.
$X$ has failure rate function $R$ given by $R(x) = \frac{a}{b} e^{x / b}, \quad x \in [0, \infty)$
Proof
Recall that $R(x) = f(x) \big/ F^c(x)$. Also, $R(x) = \frac{1}{b} r\left(\frac{x}{b}\right)$ where $r$ is the failure rate function of the corresponding basic distribution.
Moments
As with the basic distribution, the moment generating function and mean of the general Gompertz distribution can be expressed in terms of the exponential integral. Suppose again that $X$ has the Gompertz distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has moment generating function $M$ given by $M(t) = \E\left(e^{t X}\right) = a e^a E_a(b t), \quad t \in \R$
Proof
Recall that $M(t) = m(b t)$ where $m$ is the MGF of the corresponding basic distribution.
$X$ has mean $\E(X) = b e^a E_a(-1)$.
Proof
This follows from the mean of the corresponding basic distribution, and the standard property $\E(X) = b \E(Z)$.
Open the special distribution simulator and select the Gompertz distribution. Vary the shape and scale parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
Since the Gompertz distribution is a scale family for each value of the shape parameter, it is trivially closed under scale transformations.
Suppose that $X$ has the Gompertz distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. If $c \in (0, \infty)$ then $Y = c X$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b c$.
Proof
By definition, we can take $X = b Z$ where $Z$ has the standard Gompertz distribution with shape parameter $a$. But then $Y = c X = (b c) Z$.
As with the basic distribution, the Gompertz distribution has the usual connections with the standard uniform distribution by means of the distribution function and quantile function computed above.
Suppose that $a, \, b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = b \ln\left(1 - \frac{1}{a} \ln U \right)$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$.
2. If $X$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$, then $U = \exp\left[-a\left(e^{X / b} - 1\right)\right]$ has the standard uniform distribution.
Proof
This follows from the corresponding result for the basic distribution and the definition of the general Gompertz variable as $X = b Z$ where $Z$ has the basic Gompertz distribution with shape parameter $a$.
Again, since the quantile function of the Gompertz distribution has a simple closed form, the distribution can be simulated using the random quantile method.
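The random quantile method is simple to carry out directly. Here is a short Python sketch (NumPy assumed; the parameter values $a = 1/2$, $b = 2$ and the sample size are arbitrary choices for illustration) that simulates the Gompertz distribution from standard uniform variables and compares the empirical distribution function with the closed form given above.

```python
import numpy as np

a, b = 0.5, 2.0                      # illustrative shape and scale parameters
rng = np.random.default_rng(0)

# Random quantile method: X = b ln(1 - ln(U)/a) with U standard uniform
u = rng.uniform(size=100_000)
x = b * np.log(1.0 - np.log(u) / a)

# Closed-form distribution function F(x) = 1 - exp[-a(e^(x/b) - 1)]
F = lambda t: 1.0 - np.exp(-a * (np.exp(t / b) - 1.0))

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, np.mean(x <= t), F(t))
```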
Open the random quantile experiment and select the Gompertz distribution. Vary the shape and scale parameters and note the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and note the agreement between the empirical density function and the probability density function.
The following result is a slight generalization of the connection above between the basic Gompertz distribution and the extreme value distribution.
If $X$ has the extreme value distribution for minimums with scale parameter $b \gt 0$, then the conditional distribution of $X$ given $X \ge 0$ is the Gompertz distribution with shape parameter 1 and scale parameter $b$.
Proof
We can take $X = b V$ where $V$ has the standard extreme value distribution for minimums. Note that $X \ge 0$ if and only if $V \ge 0$. Hence the conditional distribution of $X$ given $X \ge 0$ is the same as the conditional distribution of $b V$ given $V \ge 0$. But by the result above, the conditional distribution of $V$ given $V \ge 0$ is the standard Gompertz distribution.
Finally, we give a slight generalization of the connection above between the Gompertz distribution and the exponential distribution.
Suppose that $a, \, b \in (0, \infty)$.
1. If $X$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$, then $Y = e^{X/b} - 1$ has the exponential distribution with rate parameter $a$.
2. If $Y$ has the exponential distribution with rate parameter $a$, then $X = b \ln(Y + 1)$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$.
Proof
These results follow from the corresponding result for the basic distribution.
1. If $X$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$, then $X / b$ has the basic Gompertz distribution with shape parameter $a$. Hence $Y = e^{X / b} - 1$ has the exponential distribution with rate parameter $a$.
2. If $Y$ has the exponential distribution with rate parameter $a$ then $\ln (Y + 1)$ has the basic Gompertz distribution with shape parameter $a$ and hence $X = b \ln (Y + 1)$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$.
As a corollary, we can construct a general Gompertz variable from a standard exponential variable. Specifically, if $Y$ has the standard exponential distribution and if $a, \, b \in (0, \infty)$ then $X = b \ln\left(\frac{1}{a} Y + 1\right)$ has the Gompertz distribution with shape parameter $a$ and scale parameter $b$.
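Here is a brief numerical sketch of this construction (Python with NumPy and SciPy assumed; the parameter values are arbitrary). The function scipy.special.exp1 is the exponential integral $E_1$, so the sample mean can also be compared with the mean formula above.

```python
import numpy as np
from scipy.special import exp1      # exponential integral E_1

a, b = 0.5, 2.0                     # illustrative shape and scale parameters
rng = np.random.default_rng(1)

# Construct a Gompertz sample from a standard exponential sample
y = rng.exponential(scale=1.0, size=200_000)
x = b * np.log(y / a + 1.0)

# The sample mean should be close to b e^a E_1(a)
print(x.mean(), b * np.exp(a) * exp1(a))
```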
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\sinc}{\text{sinc}}$
As the name suggests, the log-logistic distribution is the distribution of a variable whose logarithm has the logistic distribution. The log-logistic distribution is often used to model random lifetimes, and hence has applications in reliability.
The Basic Log-Logistic Distribution
Distribution Functions
The basic log-logistic distribution with shape parameter $k \in (0, \infty)$ is a continuous distribution on $[0, \infty)$ with distribution function $G$ given by $G(z) = \frac{z^k}{1 + z^k}, \quad z \in [0, \infty)$ In the special case that $k = 1$, the distribution is the standard log-logistic distribution.
Proof
Note that $G$ is continuous on $[0, \infty)$ with $G(0) = 0$ and $G(z) \to 1$ as $z \to \infty$. Moreover, $g(z) = G^\prime(z) = \frac{k z^{k-1}}{(1 + z^k)^2} \gt 0, \quad z \in (0, \infty)$ so $G$ is strictly increasing on $[0, \infty)$.
The probability density function $g$ is given by $g(z) = \frac{k z^{k-1}}{(1 + z^k)^2}, \quad z \in (0, \infty)$
1. If $0 \lt k \lt 1$, $g$ is decreasing with $g(z) \to \infty$ as $z \downarrow 0$.
2. If $k = 1$, $g$ is decreasing with mode $z = 0$.
3. If $k \gt 1$, $g$ increases and then decreases with mode $z = \left(\frac{k - 1}{k + 1}\right)^{1/k}.$
4. If $k \le 1$, $g$ is concave upward.
5. If $1 \lt k \le 2$, $g$ is concave downward and then upward, with inflection point at $z = \left[\frac{2 (k^2 - 1) + k \sqrt{3(k^2 - 1)}}{(k + 1)(k + 2)}\right]^{1/k}$
6. If $k \gt 2$, $g$ is concave upward then downward then upward again, with inflection points at $z = \left[\frac{2 (k^2 - 1) \pm k \sqrt{3(k^2 - 1)}}{(k + 1)(k + 2)}\right]^{1/k}$
Proof
The PDF $g = G^\prime$ was computed in the proof of the CDF result. The rest follows from \begin{align} g^{\prime}(z) & = \frac{k z^{k-2}[(k - 1) - (k + 1) z^k]}{(1 + z^k)^3}, \quad z \in (0, \infty) \ g^{\prime \prime}(z) & = \frac{k z^{k - 3} \left[(k - 1)(k - 2) - 4(k^2 -1) z^k + (k + 1) (k + 2)z^{2 k}\right]}{(1 + z^k)^4}, \quad z \in (0, \infty) \end{align}
So $g$ has a rich variety of shapes, and is unimodal if $k \gt 1$. When $k \ge 1$, $g$ is defined at 0 as well.
Open the special distribution simulator and select the log-logistic distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = \left(\frac{p}{1 - p}\right)^{1/k}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = (1/3)^{1/k}$.
2. The median is $q_2 = 1$.
3. The third quartile is $q_3 = 3^{1/k}$.
Proof
The formula for $G^{-1}$ follows from the distribution function by solving $p = G(z)$ for $z$ in terms of $p$.
Recall that $p \big/ (1 - p)$ is the odds ratio associated with probability $p \in (0, 1)$. Thus, the quantile function of the basic log-logistic distribution with shape parameter $k$ is the $k$th root of the odds ratio function. In particular, the quantile function of the standard log-logistic distribution is the odds ratio function itself. Also of interest is that the median is 1 for every value of the shape parameter.
Open the special distribution calculator and select the log-logistic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the shape parameter, compute a few values of the distribution function and the quantile function.
The reliability function $G^c$ is given by $G^c(z) = \frac{1}{1 + z^k}, \quad z \in [0, \infty)$
Proof
This follows trivially from the distribution function since $G^c = 1 - G$.
The basic log-logistic distribution has either decreasing failure rate, or mixed decreasing-increasing failure rate, depending on the shape parameter.
The failure rate function $r$ is given by $r(z) = \frac{k z^{k-1}}{1 + z^k}, \quad z \in (0, \infty)$
1. If $0 \lt k \le 1$, $r$ is decreasing.
2. If $k \gt 1$, $r$ decreases and then increases with minimum at $z = (k - 1)^{1/k}$.
Proof
Recall that the failure rate function is $r(z) = g(z) \big/ G^c(z)$ for $z \in (0, \infty)$, so the formula follows from the PDF and the reliability function above. Parts (a) and (b) follow from $r^\prime(z) = \frac{k z^{k-2}[(k - 1) - z^k]}{(1 + z^k)^2}, \quad z \in (0, \infty)$
If $k \ge 1$, $r$ is defined at 0 also.
Moments
Suppose that $Z$ has the basic log-logistic distribution with shape parameter $k \in (0, \infty)$. The moments (about 0) of $Z$ have an interesting expression in terms of the beta function $B$ and in terms of the sine function. The simplest representation is in terms of a new special function constructed from the sine function.
The (normalized) cardinal sine function sinc is defined by $\sinc(x) = \frac{\sin(\pi x)}{\pi x}, \quad x \in \R$ where it is understood that $\sinc(0) = 1$ (the limiting value).
If $n \ge k$ then $\E(Z^n) = \infty$. If $0 \le n \lt k$ then $\E(Z^n) = B\left(1 - \frac{n}{k}, 1 + \frac{n}{k}\right) = \frac{1}{\sinc(n / k)}$
Proof
Using the PDF, $\E(Z^n) = \int_0^\infty z^n \frac{k z^{k-1}}{(1 + z^k)^2} dz$ The substitution $u = 1 / (1 + z^k)$, $du = -k z^{k-1}/(1 + z^k)^2$ gives $\E(Z^n) = \int_0^1 (1/u - 1)^{n/k} du = \int_0^1 u^{-n/k} (1 - u)^{n/k} du$ The result now follows from the definition of the beta function.
In particular, we can give the mean and variance.
If $k \gt 1$ then $\E(Z) = \frac{1}{\sinc(1/k)}$
If $k \gt 2$ then $\var(Z) = \frac{1}{\sinc(2 / k)} - \frac{1}{\sinc^2(1 / k)}$
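These expressions are easy to evaluate numerically. The following sketch (Python with NumPy; the shape parameter $k = 5$ is an arbitrary choice with $k \gt 2$ so that the variance is finite) compares the formulas with Monte Carlo estimates obtained from the quantile function; numpy.sinc is the normalized cardinal sine function defined above.

```python
import numpy as np

k = 5.0                                  # illustrative shape parameter (k > 2)
rng = np.random.default_rng(2)

mean = 1.0 / np.sinc(1.0 / k)            # E(Z) = 1 / sinc(1/k)
var = 1.0 / np.sinc(2.0 / k) - 1.0 / np.sinc(1.0 / k) ** 2

# Monte Carlo check via the quantile function Z = [U / (1 - U)]^(1/k)
u = rng.uniform(size=500_000)
z = (u / (1.0 - u)) ** (1.0 / k)
print(mean, z.mean())
print(var, z.var())
```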
Open the special distribution simulator and select the log-logistic distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
The basic log-logistic distribution is preserved under power transformations.
If $Z$ has the basic log-logistic distribution with shape parameter $k \in (0, \infty)$ and if $n \in (0, \infty)$, then $W = Z^n$ has the basic log-logistic distribution with shape parameter $k / n$.
Proof
For $w \in [0, \infty)$, $\P(W \le w) = \P(Z \le w^{1/n}) = G\left(w^{1/n}\right) = \frac{w^{k/n}}{1 + w^{k/n}}$ As a function of $w$, this is the CDF of the basic log-logistic distribution with shape parameter $k/n$.
In particular, it follows that if $V$ has the standard log-logistic distribution and $k \in (0, \infty)$, then $Z = V^{1/k}$ has the basic log-logistic distribution with shape parameter $k$.
The log-logistic distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function given above.
Suppose that $k \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $Z = G^{-1}(U) = \left[U \big/ (1 - U)\right]^{1/k}$ has the basic log-logistic distribution with shape parameter $k$.
2. If $Z$ has the basic log-logistic distribution with shape parameter $k$ then $U = G(Z) = Z^k \big/ (1 + Z^k)$ has the standard uniform distribution.
Since the quantile function of the basic log-logistic distribution has a simple closed form, the distribution can be simulated using the random quantile method.
Open the random quantile experiment and select the log-logistic distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
Of course, as mentioned in the introduction, the log-logistic distribution is related to the logistic distribution.
Suppose that $k, \, b \in (0, \infty)$.
1. If $Z$ has the basic log-logistic distribution with shape parameter $k$ then $Y = \ln Z$ has the logistic distribution with location parameter 0 and scale parameter $1/k$.
2. If $Y$ has the logistic distribution with location parameter $0$ and scale parameter $b$ then $Z = e^Y$ has the basic log-logistic distribution with shape parameter $1 / b$.
Proof
1. Suppose first that $Z$ has the standard log-logistic distribution. Then $\P(Y \le y) = \P\left(Z \le e^y\right) = \frac{e^y}{1 + e^y}, \quad y \in \R$ and as a function of $y$, this is the CDF of the standard logistic distribution. Suppose now that $Z$ has the basic log-logistic distribution with shape parameter $k$. From the power result, we can take $Z = W^{1/k}$ where $W$ has the standard log-logistic distribution. Then $Y = \ln Z = \frac{1}{k} \ln W$. But $\ln(W)$ has the standard logistic distribution, and hence $\frac{1}{k} \ln W$ has the logistic distribution with location parameter $0$ and scale parameter $1/k$
2. Suppose first that $Y$ has the standard logistic distribution. Then $\P(Z \le z) = \P[Y \le \ln(z)] = \frac{e^{\ln z}}{1 + e^{\ln z }} = \frac{z}{1 + z}, \quad z \in (0, \infty)$ and as a function of $z$, this is the CDF of the standard log-logistic distribution. Suppose now that $Y$ has the logistic distribution with location parameter 0 and scale parameter $b$. We can take $Y = b V$ where $V$ has the standard logistic distribution. Hence $Z = e^Y = e^{b V} = \left(e^V\right)^b$. But $e^V$ has the standard log-logistic distribution, and again by the power result $\left(e^V\right)^b$ has the log-logistic distribution with shape parameter $1 / b$.
As a special case, (and as noted in the proof), if $Z$ has the standard log-logistic distribution, then $Y = \ln Z$ has the standard logistic distribution, and if $Y$ has the standard logistic distribution, then $Z = e^Y$ has the standard log-logistic distribution.
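The following short sketch (Python with NumPy; the scale parameter $b = 0.4$ is an arbitrary choice) illustrates the connection by exponentiating a logistic sample and comparing the empirical distribution with the basic log-logistic distribution function with shape parameter $k = 1 / b$.

```python
import numpy as np

b = 0.4                                   # illustrative logistic scale parameter
rng = np.random.default_rng(3)

y = rng.logistic(loc=0.0, scale=b, size=200_000)
z = np.exp(y)                             # should be basic log-logistic with shape k = 1/b

k = 1.0 / b
G = lambda t: t ** k / (1.0 + t ** k)     # basic log-logistic distribution function
for t in (0.5, 1.0, 2.0):
    print(t, np.mean(z <= t), G(t))
```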
The standard log-logistic distribution is the same as the standard beta prime distribution.
Proof
The PDF of the standard log-logistic distribution is $g(z) = 1 \big/ (1 + z)^2$ for $z \in [0, \infty)$, which is the same as the PDF of the standard beta prime distribution.
Of course, limiting distributions with respect to parameters are always interesting.
The basic log-logistic distribution with shape parameter $k \in (0, \infty)$ converges to point mass at 1 as $k \to \infty$.
Proof from the definition
Note that the distribution function satisfies $G(z) \to 0$ as $k \to \infty$ for $0 \le z \lt 1$, $G(1) = \frac{1}{2}$ for all $k \in (0, \infty)$, and $G(z) \to 1$ as $k \to \infty$ for $z \gt 1$. Except for the point of discontinuity $z = 1$, the limiting values are the distribution function of point mass at 1.
Random variable proof
Suppose that $V$ has the standard log-logistic distribution, and for $k \in (0, \infty)$, let $Z_k = V^{1/k}$, so that $Z_k$ has the basic log-logistic distribution with shape parameter $k$. The event $\{V \gt 0\}$ has probability 1, and on this event, $Z_k \to 1$ as $k \to \infty$. But convergence with probability 1 implies convergence in distribution.
The General Log-Logistic Distribution
The basic log-logistic distribution is generalized, like so many distributions on $[0, \infty)$, by adding a scale parameter. Recall that a scale transformation often corresponds to a change of units (gallons into liters, for example), and so such transformations are of basic importance.
If $Z$ has the basic log-logistic distribution with shape parameter $k \in (0, \infty)$ and if $b \in (0, \infty)$ then $X = b Z$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$.
Distribution Functions
Suppose that $X$ has the log-logistic distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has distribution function $F$ given by $F(x) = \frac{x^k}{b^k + x^k}, \quad x \in [0, \infty)$
Proof
Recall that $F(x) = G(x / b)$ where $G$ is the distribution function of the basic log-logistic distribution with shape parameter $k$.
$X$ has probability density function $f$ given by $f(x) = \frac{b^k k x^{k-1}}{(b^k + x^k)^2}, \quad x \in (0, \infty)$ When $k \ge 1$, $f$ is defined at 0 also. $f$ satisfies the following properties:
1. If $0 \lt k \lt 1$, $f$ is decreasing with $f(x) \to \infty$ as $x \downarrow 0$.
2. If $k = 1$, $f$ is decreasing with mode $x = 0$.
3. If $k \gt 1$, $f$ increases and then decreases with mode $x = b \left(\frac{k - 1}{k + 1}\right)^{1/k}.$
4. If $k \le 1$, $f$ is concave upward.
5. If $1 \lt k \le 2$, $f$ is concave downward and then upward, with inflection point at $x = b \left[\frac{2 (k^2 - 1) + k \sqrt{3(k^2 - 1)}}{(k + 1)(k + 2)}\right]^{1/k}$
6. If $k \gt 2$, $f$ is concave upward then downward then upward again, with inflection points at $x = b \left[\frac{2 (k^2 - 1) \pm k \sqrt{3(k^2 - 1)}}{(k + 1)(k + 2)}\right]^{1/k}$
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ where $g$ is the probability density function of the basic log-logistic distribution with shape parameter $k$. Also of course, $f = F^\prime$.
Open the special distribution simulator and select the log-logistic distribution. Vary the shape and scale parameters and note the shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = b \left(\frac{p}{1 - p}\right)^{1/k}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = b (1/3)^{1/k}$.
2. The median is $q_2 = b$.
3. The third quartile is $q_3 = b 3^{1/k}$.
Proof
Recall that $F^{-1}(p) = b G^{-1}(p)$ for $p \in [0, 1)$ where $G^{-1}$ is the quantile function of the basic log-logistic distribution with shape parameter $k$.
Open the special distribution calculator and select the log-logistic distribution. Vary the shape and scale parameters and note the shape of the distribution and probability density functions. For selected values of the parameters, compute a few values of the distribution function and the quantile function.
$X$ has reliability function $F^c$ given by $F^c(x) = \frac{b^k}{b^k + x^k}, \quad x \in [0, \infty)$
Proof
This follows trivially from the distribution function, since $F^c = 1 - F$.
The log-logistic distribution has either decreasing failure rate, or mixed decreasing-increasing failure rate, depending on the shape parameter.
$X$ has failure rate function $R$ given by $R(x) = \frac{k x^{k-1}}{b^k + x^k}, \quad x \in (0, \infty)$
1. If $0 \lt k \le 1$, $R$ is decreasing.
2. If $k \gt 1$, $R$ decreases and then increases with minimum at $x = b (k - 1)^{1/k}$.
Proof
Recall that $R(x) = \frac{1}{b} r\left(\frac{x}{b}\right)$ where $r$ is the failure rate function of the basic log-logistic distribution with shape parameter $k$. Also, $R = f \big/ F^c$ where $f$ is the PDF and $F^c$ is the reliability function.
Moments
Suppose again that $X$ has the log-logistic distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. The moments of $X$ can be computed easily from the representation $X = b Z$ where $Z$ has the basic log-logistic distribution with shape parameter $k$. Again, the expressions are simplest in terms of the beta function $B$ and in terms of the normalized cardinal sine function sinc.
If $n \ge k$ then $\E(X^n) = \infty$. If $0 \le n \lt k$ then $\E(X^n) = b^n B\left(1 - \frac{n}{k}, 1 + \frac{n}{k}\right) = \frac{b^n}{\sinc(n / k)}$
If $k \gt 1$ then $\E(X) = \frac{b}{\sinc(1/k)}$
If $k \gt 2$ then $\var(X) = b^2 \left[\frac{1}{\sinc(2 / k)} - \frac{1}{\sinc^2(1 / k)} \right]$
Open the special distribution simulator and select the log-logistic distribution. Vary the shape and scale parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
Since the log-logistic distribution is a scale family for each value of the shape parameter, it is trivially closed under scale transformations.
If $X$ has the log-logistic distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$, and if $c \in (0, \infty)$, then $Y = c X$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b c$.
Proof
By definition we can take $X = b Z$ where $Z$ has the basic log-logistic distribution with shape parameter $k$. But then $Y = c X = (b c) Z$.
The log-logistic distribution is preserved under power transformations.
If $X$ has the log-logistic distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$, and if $n \in (0, \infty)$, then $Y = X^n$ has the log-logistic distribution with shape parameter $k / n$ and scale parameter $b^n$.
Proof
Again we can take $X = b Z$ where $Z$ has the basic log-logistic distribution with shape parameter $k$. Then $X^n = b^n Z^n$. But by the power result for the basic distribution, $Z^n$ has the basic log-logistic distribution with shape parameter $k / n$, and hence $Y = X^n = b^n Z^n$ has the log-logistic distribution with shape parameter $k / n$ and scale parameter $b^n$.
In particular, if $V$ has the standard log-logistic distribution, then $X = b V^{1/k}$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$.
As before, the log-logistic distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function computed above.
Suppose that $k, \, b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = F^{-1}(U) = b \left[U \big/ (1 - U)\right]^{1/k}$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$.
2. If $X$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$, then $U = F(X) = X^k \big/ (b^k + X^k)$ has the standard uniform distribution.
Again, since the quantile function of the log-logistic distribution has a simple closed form, the distribution can be simulated using the random quantile method.
Open the random quantile experiment and select the log-logistic distribution. Vary the shape and scale parameters and note the shape and location of the distribution and probability density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function, mean and standard deviation to their distributional counterparts.
Again, the logarithm of a log-logistic variable has the logistic distribution.
Suppose that $k, \, b, \, c \in (0, \infty)$ and $a \in \R$.
1. If $X$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$ then $Y = \ln X$ has the logistic distribution with location parameter $\ln b$ and scale parameter $1 / k$.
2. If $Y$ has the logistic distribution with location parameter $a$ and scale parameter $c$ then $X = e^Y$ has the log-logistic distribution with shape parameter $1/c$ and scale parameter $e^a$.
Proof
1. As noted above, we can take $X = b V^{1/k}$ where $V$ has the standard log-logistic distribution. Then $Y = \ln X = \ln b + \frac{1}{k} \ln V$. But by the corresponding result for the basic distribution, $\ln V$ has the standard logistic distribution, so $Y$ has the logistic distribution with location parameter $\ln b$ and scale parameter $1/k$.
2. We can take $Y = a + c U$ where $U$ has the standard logistic distribution. Hence $X = e^Y = e^a e^{c U} = e^a \left(e^U\right)^c$. But by the corresponding result for the standard distribution, $e^U$ has the standard log-logistic distribution, and then by the power result, $\left(e^U\right)^c$ has the basic log-logistic distribution with shape parameter $1/c$. Hence $X = e^a \left(e^U\right)^c$ has the log-logistic distribution with shape parameter $1/c$ and scale parameter $e^a$.
Once again, the limiting distribution is also of interest.
For fixed $b \in (0, \infty)$, the log-logistic distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b$ converges to point mass at $b$ as $k \to \infty$.
Proof
If $X$ has the log-logistic distribution with shape parameter $k$ and scale parameter $b$, then as usual, we can write $X = b Z$ where $Z$ has the basic log-logistic distribution with shape parameter $k$. From the limit result for the basic distribution, we know that the distribution of $Z$ converges to point mass at 1 as $k \to \infty$, so it follows by the continuity theorem that the distribution of $X$ converges to point mass at $b$ as $k \to \infty$.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
The Pareto distribution is a skewed, heavy-tailed distribution that is sometimes used to model the distribution of incomes and other financial variables.
The Basic Pareto Distribution
Distribution Functions
The basic Pareto distribution with shape parameter $a \in (0, \infty)$ is a continuous distribution on $[1, \infty)$ with distribution function $G$ given by $G(z) = 1 - \frac{1}{z^a}, \quad z \in [1, \infty)$ The special case $a = 1$ gives the standard Pareto distribution.
Proof
Clearly $G$ is increasing and continuous on $[1, \infty)$, with $G(1) = 0$ and $G(z) \to 1$ as $z \to \infty$.
The Pareto distribution is named for the economist Vilfredo Pareto.
The probability density function $g$ is given by $g(z) = \frac{a}{z^{a+1}}, \quad z \in [1, \infty)$
1. $g$ is decreasing with mode $z = 1$
2. $g$ is concave upward.
Proof
Recall that $g = G^\prime$. Parts (a) and (b) follow from standard calculus.
The reason that the Pareto distribution is heavy-tailed is that $g$ decreases at a power rate rather than an exponential rate.
Open the special distribution simulator and select the Pareto distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = \frac{1}{(1 - p)^{1/a}}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = \left(\frac{4}{3}\right)^{1/a}$.
2. The median is $q_2 = 2^{1/a}$.
3. The third quartile is $q_3 = 4^{1/a}$.
Proof
The formula for $G^{-1}(p)$ comes from solving $G(z) = p$ for $z$ in terms of $p$.
Open the special distribution calculator and select the Pareto distribution. Vary the shape parameter and note the shape of the probability density and distribution functions. For selected values of the parameters, compute a few values of the distribution and quantile functions.
Moments
Suppose that random variable $Z$ has the basic Pareto distribution with shape parameter $a \in (0, \infty)$. Because the distribution is heavy-tailed, the mean, variance, and other moments of $Z$ are finite only if the shape parameter $a$ is sufficiently large.
The moments of $Z$ (about 0) are
1. $\E(Z^n) = \frac{a}{a - n}$ if $0 \lt n \lt a$
2. $\E(Z^n) = \infty$ if $n \ge a$
Proof
Note that $E(Z^n) = \int_1^\infty z^n \frac{a}{z^{a+1}} dz = \int_1^\infty a z^{-(a + 1 - n)} dz$ The integral diverges to $\infty$ if $a + 1 - n \le 1$ and evaluates to $\frac{a}{a - n}$ if $a + 1 - n \gt 1$.
It follows that the moment generating function of $Z$ cannot be finite on any interval about 0.
In particular, the mean and variance of $Z$ are
1. $\E(Z) = \frac{a}{a - 1}$ if $a \gt 1$
2. $\var(Z) = \frac{a}{(a - 1)^2 (a - 2)}$ if $a \gt 2$
Proof
These results follow from the general moment formula above and the computational formula $\var(Z) = \E\left(Z^2\right) - [\E(Z)]^2$.
In the special distribution simulator, select the Pareto distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For each of the following parameter values, run the simulation 1000 times and note the behavior of the empirical moments:
1. $a = 1$
2. $a = 2$
3. $a = 3$
The skewness and kurtosis of $Z$ are as follows:
1. If $a \gt 3$, $\skw(Z) = \frac{2 (1 + a)}{a - 3} \sqrt{1 - \frac{2}{a}}$
2. If $a \gt 4$, $\kur(Z) = \frac{3 (a - 2)(3 a^2 + a + 2)}{a (a - 3)(a - 4)}$
Proof
These results follow from the standard computational formulas for skewness and kurtosis, and the first 4 moments of $Z$ given above.
So the distribution is positively skewed and $\skw(Z) \to 2$ as $a \to \infty$ while $\skw(Z) \to \infty$ as $a \downarrow 3$. Similarly, $\kur(Z) \to 9$ as $a \to \infty$ and $\kur(Z) \to \infty$ as $a \downarrow 4$. Recall that the excess kurtosis of $Z$ is $\kur(Z) - 3 = \frac{3 (a - 2)(3 a^2 + a + 2)}{a (a - 3)(a - 4)} - 3 = \frac{6 (a^3 + a^2 - 6 a - 2)}{a(a - 3)(a - 4)}$
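Since the raw moments have such a simple form, the skewness and kurtosis formulas can be checked directly from the standard computational formulas. The short Python sketch below does this for the arbitrary value $a = 5$.

```python
import numpy as np

a = 5.0                                         # illustrative shape parameter (a > 4)
m = [a / (a - n) for n in (1, 2, 3, 4)]         # raw moments E(Z^n) = a / (a - n)

mu, var = m[0], m[1] - m[0] ** 2
skew = (m[2] - 3 * mu * var - mu ** 3) / var ** 1.5
kurt = (m[3] - 4 * mu * m[2] + 6 * mu ** 2 * m[1] - 3 * mu ** 4) / var ** 2

print(skew, 2 * (1 + a) / (a - 3) * np.sqrt(1 - 2 / a))                        # skewness
print(kurt - 3, 6 * (a ** 3 + a ** 2 - 6 * a - 2) / (a * (a - 3) * (a - 4)))   # excess kurtosis
```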
Related Distributions
The basic Pareto distribution is invariant under positive powers of the underlying variable.
Suppose that $Z$ has the basic Pareto distribution with shape parameter $a \in (0, \infty)$ and that $n \in (0, \infty)$. Then $W = Z^n$ has the basic Pareto distribution with shape parameter $a / n$.
Proof
We use the CDF of $Z$ given above. $\P(W \le w) = \P\left(Z \le w^{1/n}\right) = 1 - \frac{1}{w^{a/n}}, \quad w \in [1, \infty)$ As a function of $w$, this is the Pareto CDF with shape parameter $a / n$.
In particular, if $Z$ has the standard Pareto distribution and $a \in (0, \infty)$, then $Z^{1/a}$ has the basic Pareto distribution with shape parameter $a$. Thus, all basic Pareto variables can be constructed from the standard one.
The basic Pareto distribution has a reciprocal relationship with the beta distribution.
Suppose that $a \in (0, \infty)$.
1. If $Z$ has the basic Pareto distribution with shape parameter $a$ then $V = 1 / Z$ has the beta distribution with left parameter $a$ and right parameter 1.
2. If $V$ has the beta distribution with left parameter $a$ and right parameter 1, then $Z = 1 / V$ has the basic Pareto distribution with shape parameter $a$.
Proof
We will use the standard change of variables theorem. The transformations are $v = 1 / z$ and $z = 1 / v$ for $z \in [1, \infty)$ and $v \in (0, 1]$. These are inverses of each another. Let $g$ and $h$ denote PDFs of $Z$ and $V$ respectively.
1. We start with $g(z) = a \big/ z^{a+1}$ for $z \in [1, \infty)$, the PDF of $Z$ given above. Then $h(v) = g(z) \left|\frac{dz}{dv}\right| = \frac{a}{(1 / v)^{a+1}} \frac{1}{v^2} = a v^{a-1}, \quad v \in (0, 1]$ which is the PDF of the beta distribution with left parameter $a$ and right parameter 1.
2. We start with $h(v) = a v^{a-1}$ for $v \in (0, 1]$. Then $g(z) = h(v) \left|\frac{dv}{dz}\right| = a\left(\frac{1}{z}\right)^{a-1} \frac{1}{z^2} = \frac{a}{z^{a+1}}, \quad z \in [1, \infty)$ which is the PDF of the basic Pareto distribution with shape parameter $a$.
The basic Pareto distribution has the usual connections with the standard uniform distribution by means of the distribution function and quantile function computed above.
Suppose that $a \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $Z = 1 \big/ U^{1/a}$ has the basic Pareto distribution with shape parameter $a$.
2. If $Z$ has the basic Pareto distribution with shape parameter $a$ then $U = 1 \big/ Z^a$ has the standard uniform distribution.
Proof
1. If $U$ has the standard uniform distribution, then so does $1 - U$. Hence $Z = G^{-1}(1 - U) = 1 \big/ U^{1/a}$ has the basic Pareto distribution with shape parameter $a$.
2. If $Z$ has the basic Pareto distribution with shape parameter $a$, then $G(Z)$ has the standard uniform distribution. But then $U = 1 - G(Z) = 1 \big/ Z^a$ also has the standard uniform distribution.
Since the quantile function has a simple closed form, the basic Pareto distribution can be simulated using the random quantile method.
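Here is a short Python sketch of the random quantile method (NumPy assumed; the shape parameter $a = 3$ is an arbitrary choice), comparing the empirical distribution function with the closed form. Note that NumPy's built-in pareto generator draws from the shifted (Lomax) form rather than the basic Pareto variable as defined here, which is why the quantile construction is used instead.

```python
import numpy as np

a = 3.0                                  # illustrative shape parameter
rng = np.random.default_rng(4)

# Random quantile method: Z = 1 / U^(1/a) with U standard uniform
u = rng.uniform(size=200_000)
z = 1.0 / u ** (1.0 / a)

# Compare with the distribution function G(z) = 1 - 1/z^a
for t in (1.5, 2.0, 4.0):
    print(t, np.mean(z <= t), 1.0 - t ** (-a))
```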
Open the random quantile experiment and select the Pareto distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, run the experiment 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The basic Pareto distribution also has simple connections to the exponential distribution.
Suppose that $a \in (0, \infty)$.
1. If $Z$ has the basic Pareto distribution with shape parameter $a$, then $T = \ln Z$ has the exponential distribution with rate parameter $a$.
2. If $T$ has the exponential distribution with rate parameter $a$, then $Z = e^T$ has the basic Pareto distribution with shape parameter $a$.
Proof
We use the Pareto CDF given above and the CDF of the exponential distribution.
1. If $t \in [0, \infty)$ then $\P(T \le t) = \P\left(Z \le e^t\right) = 1 - \frac{1}{\left(e^t\right)^a} = 1 - e^{-a t}$ which is the CDF of the exponential distribution with rate parameter $a$.
2. If $z \in [1, \infty)$ then $\P(Z \le z) = \P(T \le \ln z) = 1 - \exp(-a \ln z) = 1 - \frac{1}{z^a}$ which is the CDF of the basic Pareto distribution with shape parameter $a$.
The General Pareto Distribution
As with many other distributions that govern positive variables, the Pareto distribution is often generalized by adding a scale parameter. Recall that a scale transformation often corresponds to a change of units (dollars into Euros, for example) and thus such transformations are of basic importance.
Suppose that $Z$ has the basic Pareto distribution with shape parameter $a \in (0, \infty)$ and that $b \in (0, \infty)$. Random variable $X = b Z$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$.
Note that $X$ has a continuous distribution on the interval $[b, \infty)$.
Distribution Functions
Suppose again that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has distribution function $F$ given by $F(x) = 1 - \left( \frac{b}{x} \right)^a, \quad x \in [b, \infty)$
Proof
Recall that $F(x) = G\left(\frac{x}{b}\right)$ for $x \in [b, \infty)$ where $G$ is the CDF of the basic distribution with shape parameter $a$.
$X$ has probability density function $f$ given by $f(x) = \frac{a b^a}{x^{a + 1}}, \quad x \in [b, \infty)$
Proof
Recall that $f(x) = \frac{1}{b} g\left(\frac{x}{b}\right)$ for $x \in [b, \infty)$ where $g$ is the PDF of the basic distribution with shape parameter $a$.
Open the special distribution simulator and select the Pareto distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = \frac{b}{(1 - p)^{1/a}}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = b \left(\frac{4}{3}\right)^{1/a}$.
2. The median is $q_2 = b 2^{1/a}$.
3. The third quartile is $q_3 = b 4^{1/a}$.
Proof
Recall that $F^{-1}(p) = b G^{-1}(p)$ for $p \in [0, 1)$ where $G^{-1}$ is the quantile function of the basic distribution with shape parameter $a$.
Open the special distribution calculator and select the Pareto distribution. Vary the parameters and note the shape and location of the probability density and distribution functions. For selected values of the parameters, compute a few values of the distribution and quantile functions.
Moments
Suppose again that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$
The moments of $X$ are given by
1. $\E(X^n) = b^n \frac{a}{a - n}$ if $0 \lt n \lt a$
2. $\E(X^n) = \infty$ if $n \ge a$
Proof
By definition we can take $X = b Z$ where $Z$ has the basic Pareto distribution with shape parameter $a$. By the linearity of expected value, $\E(X^n) = b^n \E(Z^n)$, so the result follows from the moments of $Z$ given above.
The mean and variance of $X$ are
1. $\E(X) = b \frac{a}{a - 1}$ if $a \gt 1$
2. $\var(X) = b^2 \frac{a}{(a - 1)^2 (a - 2)}$ if $a \gt 2$
Open the special distribution simulator and select the Pareto distribution. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are as follows:
1. If $a \gt 3$, $\skw(X) = \frac{2 (1 + a)}{a - 3} \sqrt{1 - \frac{2}{a}}$
2. If $a \gt 4$, $\kur(X) = \frac{3 (a - 2)(3 a^2 + a + 2)}{a (a - 3)(a - 4)}$
Proof
Recall that skewness and kurtosis are defined in terms of the standard score, and hence are invariant under scale transformations. Thus the skewness and kurtosis of $X$ are the same as the skewness and kurtosis of $Z = X / b$ given above.
Related Distributions
Since the Pareto distribution is a scale family for fixed values of the shape parameter, it is trivially closed under scale transformations.
Suppose that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. If $c \in (0, \infty)$ then $Y = c X$ has the Pareto distribution with shape parameter $a$ and scale parameter $b c$.
Proof
By definition we can take $X = b Z$ where $Z$ has the basic Pareto distribution with shape parameter $a$. But then $Y = c X = (b c) Z$.
The Pareto distribution is closed under positive powers of the underlying variable.
Suppose that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. If $n \in (0, \infty)$ then $Y = X^n$ has the Pareto distribution with shape parameter $a / n$ and scale parameter $b^n$.
Proof
Again we can write $X = b Z$ where $Z$ has the basic Pareto distribution with shape parameter $a$. Then from the power result above $Z^n$ has the basic Pareto distribution with shape parameter $a / n$ and hence $Y = X^n = b^n Z^n$ has the Pareto distribution with shape parameter $a / n$ and scale parameter $b^n$.
All Pareto variables can be constructed from the standard one. If $Z$ has the standard Pareto distribution and $a, \, b \in (0, \infty)$ then $X = b Z^{1/a}$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$.
As before, the Pareto distribution has the usual connections with the standard uniform distribution by means of the distribution function and quantile function given above.
Suppose that $a, \, b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = b \big/ U^{1/a}$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$.
2. If $X$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$, then $U = (b / X)^a$ has the standard uniform distribution.
Proof
1. If $U$ has the standard uniform distribution, then so does $1 - U$. Hence $X = F^{-1}(1 - U) = b \big/ U^{1/a}$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$.
2. If $X$ has the Pareto distribution with shape parameter $a$ and scale parameter $b$, then $F(X)$ has the standard uniform distribution. But then $U = 1 - F(X) = (b / X)^a$ also has the standard uniform distribution.
Again, since the quantile function has a simple closed form, the basic Pareto distribution can be simulated using the random quantile method.
Open the random quantile experiment and select the Pareto distribution. Vary the parameters and note the shape of the distribution and probability density functions. For selected values of the parameters, run the experiment 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
The Pareto distribution is closed with respect to conditioning on a right-tail event.
Suppose that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. For $c \in [b, \infty)$, the conditional distribution of $X$ given $X \ge c$ is Pareto with shape parameter $a$ and scale parameter $c$.
Proof
Not surprisingly, it's best to use right-tail distribution functions. Recall that this is the function $F^c = 1 - F$ where $F$ is the ordinary CDF given above. If $x \ge c$, then $\P(X \gt x \mid X \gt c) = \frac{\P(X \gt x)}{\P(X \gt c)} = \frac{(b / x)^a}{(b / c)^a} = (c / x)^a$
Finally, the Pareto distribution is a general exponential distribution with respect to the shape parameter, for a fixed value of the scale parameter.
Suppose that $X$ has the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. For fixed $b$, the distribution of $X$ is a general exponential distribution with natural parameter $-(a + 1)$ and natural statistic $\ln X$.
Proof
This follows from the definition of the general exponential family, since the pdf above can be written in the form $f(x) = a b^a \exp[-(a + 1) \ln x], \quad x \in [b, \infty)$
Computational Exercises
Suppose that the income of a certain population has the Pareto distribution with shape parameter 3 and scale parameter 1000. Find each of the following:
1. The proportion of the population with incomes between 2000 and 4000.
2. The median income.
3. The first and third quartiles and the interquartile range.
4. The mean income.
5. The standard deviation of income.
6. The 90th percentile.
Answer
1. $\P(2000 \lt X \lt 4000) = 0.1094$ so the proportion is 10.94%
2. $Q_2 = 1259.92$
3. $Q_1 = 1100.64$, $Q_3 = 1587.40$, $Q_3 - Q_1 = 486.76$
4. $\E(X) = 1500$
5. $\sd(X) = 866.03$
6. $F^{-1}(0.9) = 2154.43$
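The answers can be reproduced with a few lines of Python; the functions below simply transcribe the distribution function, quantile function, mean, and variance given above, with $a = 3$ and $b = 1000$.

```python
import numpy as np

a, b = 3.0, 1000.0                          # shape and scale from the exercise

F = lambda x: 1.0 - (b / x) ** a            # distribution function
Q = lambda p: b / (1.0 - p) ** (1.0 / a)    # quantile function

print(F(4000) - F(2000))                    # proportion with income in (2000, 4000)
print(Q(0.25), Q(0.5), Q(0.75))             # quartiles
print(b * a / (a - 1))                      # mean
print(b * np.sqrt(a / ((a - 1) ** 2 * (a - 2))))   # standard deviation
print(Q(0.9))                               # 90th percentile
```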
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\,\text{var}}$ $\newcommand{\skw}{\,\text{skew}}$ $\newcommand{\kur}{\,\text{kurt}}$
The Wald distribution, named for Abraham Wald, is important in the study of Brownian motion. Specifically, the distribution governs the first time that a Brownian motion with positive drift hits a fixed, positive value. In Brownian motion, the distribution of the random position at a fixed time has a normal (Gaussian) distribution, and thus the Wald distribution, which governs the random time at a fixed position, is sometimes called the inverse Gaussian distribution.
The Basic Wald Distribution
Distribution Functions
As usual, let $\Phi$ denote the standard normal distribution function.
The basic Wald distribution with shape parameter $\lambda \in (0, \infty)$ is a continuous distribution on $(0, \infty)$ with distribution function $G$ given by $G(u) = \Phi\left[\sqrt{\frac{\lambda}{u}}(u - 1)\right] + e^{2 \lambda} \Phi\left[-\sqrt{\frac{\lambda}{u}} (u + 1)\right], \quad u \in (0, \infty)$ The special case $\lambda = 1$ gives the standard Wald distribution.
Proof
Note that as $u \to \infty$, $\sqrt{\frac{\lambda}{u}}(u - 1) \to \infty$ and $-\sqrt{\frac{\lambda}{u}}(u + 1) \to -\infty$, and hence $G(u) \to 1$. As $u \downarrow 0$, $\sqrt{\frac{\lambda}{u}}(u - 1) \to -\infty$ and $-\sqrt{\frac{\lambda}{u}}(u + 1) \to -\infty$, and hence $G(u) \to 0$. Of course, $G$ is clearly continuous on $(0, \infty)$, so it remains to show that $G$ is increasing on this interval. Differentiating gives $G^\prime(u) = \phi\left[\sqrt{\frac{\lambda}{u}}(u - 1)\right] \left[\frac{\sqrt{\lambda}}{2}\left(u^{-1/2} + u^{-3/2}\right)\right] + e^{2 \lambda} \phi\left[-\sqrt{\frac{\lambda}{u}}(u + 1)\right] \left[-\frac{\sqrt{\lambda}}{2}\left(u^{-1/2} - u^{-3/2}\right)\right]$ where $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-z^2/2}$ is the standard normal PDF. Simple algebra shows that $\phi\left[\sqrt{\frac{\lambda}{u}}(u - 1)\right] = e^{2 \lambda} \phi\left[-\sqrt{\frac{\lambda}{u}}(u + 1)\right] = \frac{1}{\sqrt{2 \pi}} \exp\left[-\frac{\lambda}{2 u} (u - 1)^2\right]$ so simplifying further gives $G^\prime(u) = \sqrt{\frac{\lambda}{2 \pi}} u^{-3/2} \exp\left[-\frac{\lambda}{2 u} (u - 1)^2\right] \gt 0, \quad u \in (0, \infty)$
The probability density function $g$ is given by $g(u) = \sqrt{\frac{\lambda}{2 \pi u^3}} \exp\left[-\frac{\lambda}{2 u}(u - 1)^2\right], \quad u \in (0, \infty)$
1. $g$ increases and then decreases with mode $u_0 = \sqrt{1 + \left(\frac{3}{2 \lambda}\right)^2} - \frac{3}{2 \lambda}$
2. $g$ is concave upward then downward then upward again.
Proof
The formula for the PDF follows immediately from the proof of the CDF above, since $g = G^\prime$. The first order properties come from $g^\prime(u) = \sqrt{\frac{\lambda}{8 \pi u^7}} \exp\left[-\frac{\lambda}{2 u}(u - 1)^2\right] \left[\lambda(1 - u^2) - 3 u\right], \quad u \in (0, \infty)$ and the second order properties from $g^{\prime\prime}(u) = \sqrt{\frac{\lambda}{32 \pi u^{11}}} \exp\left[-\frac{\lambda}{2 u}(u - 1)^2\right]\left[15 u^2 + \lambda^2 (u^2 - 1)^2 + 2 \lambda u(3 u^2 - 5)\right], \quad u \in (0, \infty)$
So $g$ has the classic unimodal shape, but the inflection points are very complicated functions of $\lambda$. For the mode, note that $u_0 \downarrow 0$ as $\lambda \downarrow 0$ and $u_0 \uparrow 1$ as $\lambda \uparrow \infty$. The probability density function of the standard Wald distribution is $g(u) = \sqrt{\frac{1}{2 \pi u^3}} \exp\left[-\frac{1}{2 u}(u - 1)^2\right], \quad u \in (0, \infty)$
Open the special distribution simulator and select the Wald distribution. Vary the shape parameter and note the shape of the probability density function. For various values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function of the standard Wald distribution does not have a simple closed form, so the median and other quantiles must be approximated.
Open the special distribution calculator and select the Wald distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, compute approximate values of the first quartile, the median, and the third quartile.
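Since the distribution function has a closed form in terms of $\Phi$, quantiles can be approximated with a numerical root finder. The following Python sketch (SciPy assumed; the shape parameter $\lambda = 2$ and the bracketing interval are arbitrary choices) computes the quartiles of the basic Wald distribution.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

lam = 2.0                                   # illustrative shape parameter

def G(u):
    # distribution function of the basic Wald distribution
    s = np.sqrt(lam / u)
    return norm.cdf(s * (u - 1.0)) + np.exp(2.0 * lam) * norm.cdf(-s * (u + 1.0))

def quantile(p):
    # solve G(u) = p on an ad hoc bracket
    return brentq(lambda u: G(u) - p, 1e-9, 100.0)

print([round(quantile(p), 4) for p in (0.25, 0.5, 0.75)])
```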
Moments
Suppose that random variable $U$ has the standard Wald distribution with shape parameter $\lambda \in (0, \infty)$.
$U$ has moment generating function $m$ given by $m(t) = \E\left(e^{t U}\right) = \exp\left[\lambda \left(1 - \sqrt{1 - \frac{2 t}{\lambda}}\right)\right], \quad t \lt \frac{\lambda}{2}$
Proof
The proof requires some facts about the modified Bessel function of the second kind, denoted $K_\alpha$ where the parameter $\alpha \in \R$. This function is one of the two linearly independent solutions of the differential equation $x^2 \frac{d^2 y}{d x^2} + x \frac{d y}{dx} - (x^2 + \alpha^2) y = 0$ The other solution, appropriately enough, is the modified Bessel function of the first kind. The function of the second kind, the one that we care about here, is the solution that decays exponentially as $x \to \infty$. The first fact we need is that $K_{-1/2}(x) = \sqrt{\frac{\pi}{2 x}} e^{-x}, \quad x \in (0, \infty)$ which, up to the normalizing constant, you can verify by direct substitution into the differential equation. The second fact that we need is the identity $\int_0^\infty x^{p-1} \exp\left[-\frac{1}{2}\left(a x + \frac{b}{x}\right)\right] dx = \frac{2 K_p(\sqrt{a b})}{(a / b)^{p/2}}, \quad a, \, b \in (0, \infty); \; p \in \R$ Now, for the moment generating function of $U$ we have $m(t) = \int_0^\infty e^{t x} \sqrt{\frac{\lambda}{2 \pi x^3}} \exp\left[-\frac{\lambda}{2x}(x - 1)^2\right] dx$ Combining the exponentials and doing some algebra, we can rewrite this as $m(t) = \sqrt{\frac{\lambda}{2 \pi}} e^\lambda \int_0^\infty x^{-3/2} \exp\left[-\frac{1}{2}(\lambda - 2 t) x - \frac{1}{2} \frac{\lambda}{x}\right] dx$ The integral now has the form of the identity given above with $p = -1/2$, $a = \lambda - 2 t$, and $b = \lambda$. Hence we have $m(t) = \sqrt{\frac{\lambda}{2 \pi}} e^\lambda \frac{2 K_{-1/2}\left[\sqrt{\lambda (\lambda - 2 t)}\right]}{[(\lambda - 2 t) / \lambda]^{-1/4}}$ Using the explicit form of $K_{-1/2}$ given above and doing more algebra we get $m(t) = \exp\left[\lambda - \sqrt{\lambda (\lambda - 2 t)}\right] = \exp\left[\lambda\left(1 - \sqrt{1 - \frac{2 t}{\lambda}}\right)\right]$
Since the moment generating function is finite in an interval containing 0, the basic Wald distribution has moments of all orders.
The mean and variance of $U$ are
1. $\E(U) = 1$
2. $\var(U) = \frac{1}{\lambda}$
Proof
Differentiating gives \begin{align} m^\prime(t) & = m(t) \left(1 - \frac{2 t}{\lambda}\right)^{-1/2} \ m^{\prime\prime}(t) &= m(t) \left[\left(1 - \frac{2 t}{\lambda}\right)^{-1} + \frac{1}{\lambda} \left(1 - \frac{2 t}{\lambda}\right)^{-3/2}\right] \end{align} and hence $\E(U) = m^\prime(0) = 1$ and $\E\left(U^2\right) = m^{\prime\prime}(0) = 1 + \frac{1}{\lambda}$.
So interestingly, the mean is 1 for all values of the shape parameter, while $\var(U) \to \infty$ as $\lambda \downarrow 0$ and $\var(U) \to 0$ as $\lambda \to \infty$.
Open the special distribution simulator and select the Wald distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $U$ are
1. $\skw(U) = 3 / \sqrt{\lambda}$
2. $\kur(U) = 3 + 15 / \lambda$
Proof
The main tool is the differential equation for the moment generating function that we used in computing the mean and variance: $m^\prime(t) = m(t) \left(1 - \frac{2 t}{\lambda}\right)^{-1/2}$ Using this recursively, we can find the first four moments of $U$. We already know the first two: $m^\prime(0) = \E(U) = 1$, $m^{\prime \prime}(0) = \E(U^2) = 1 + 1 / \lambda$. The third and fourth are \begin{align} m^{(3)}(0) = & E(U^3) = 1 + 3 / \lambda + 3 / \lambda^2 \ m^{(4)}(0) = & E(U^4) = 1 + 6 / \lambda + 15 / \lambda^2 + 15 / \lambda^3 \end{align} The results then follow from the standard computational formulas for the skewness and kurtosis in terms of the moments.
It follows that the excess kurtosis is $\kur(U) - 3 = 15 / \lambda$. Note that $\skw(U) \to \infty$ as $\lambda \to 0$ and $\skw(U) \to 0$ as $\lambda \to \infty$. Similarly, $\kur(U) \to \infty$ as $\lambda \to 0$ and $\kur(U) \to 3$ as $\lambda \to \infty$.
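These moment results can be checked by simulation; NumPy provides a Wald (inverse Gaussian) sampler parameterized by the mean and the shape parameter. In the sketch below, the shape parameter $\lambda = 4$ and the sample size are arbitrary choices, and the agreement is only approximate, as with any Monte Carlo estimate.

```python
import numpy as np
from scipy.stats import skew, kurtosis

lam = 4.0                                   # illustrative shape parameter
rng = np.random.default_rng(5)

u = rng.wald(1.0, lam, size=1_000_000)      # basic Wald: mean 1, shape lam

print(u.mean(), 1.0)                        # mean is 1 for every shape parameter
print(u.var(), 1.0 / lam)                   # variance 1/lambda
print(skew(u), 3.0 / np.sqrt(lam))          # skewness
print(kurtosis(u), 15.0 / lam)              # excess kurtosis (Fisher definition)
```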
The General Wald Distribution
The basic Wald distribution is generalized into a scale family. Scale parameters often correspond to a change of units, and so are of basic importance.
Suppose that $\lambda, \, \mu \in (0, \infty)$ and that $U$ has the basic Wald distribution with shape parameter $\lambda / \mu$. Then $X = \mu U$ has the Wald distribution with shape parameter $\lambda$ and mean $\mu$.
Justification for the name of the parameter $\mu$ as the mean is given below. Note that the generalization is consistent—when $\mu = 1$ we have the basic Wald distribution with shape parameter $\lambda$.
Distribution Functions
Suppose that $X$ has the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$. Again, we let $\Phi$ denote the standard normal distribution function.
$X$ has distribution function $F$ given by $F(x) = \Phi\left[\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu} - 1\right)\right] + \exp\left(\frac{2 \lambda}{\mu}\right) \Phi\left[-\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu} + 1\right)\right], \quad x \in (0, \infty)$
Proof
Recall that the CDF $F$ of $X$ is related to the CDF $G$ of $U$ by $F(x) = G\left(\frac{x}{\mu}\right), \quad x \in (0, \infty)$ so the result follows from the CDF above, with $\lambda$ replaced by $\lambda / \mu$, and $x$ with $x / \mu$.
$X$ has probability density function $f$ given by $f(x) = \sqrt{\frac{\lambda}{2 \pi x^3}} \exp\left[\frac{-\lambda (x - \mu)^2}{2 \mu^2 x}\right], \quad x \in (0, \infty)$
1. $f$ increases and then decreases with mode $x_0 = \mu\left[\sqrt{1 + \left(\frac{3 \mu}{2 \lambda}\right)^2} - \frac{3 \mu}{2 \lambda}\right]$
2. $f$ is concave upward then downward then upward again.
Proof
Recall that the PDF $f$ of $X$ is related to the PDF $g$ of $U$ by $f(x) = \frac{1}{\mu} g\left(\frac{x}{\mu}\right), \quad x \in (0, \infty)$ Hence the result follows from the PDF above with $\lambda$ replaced by $\lambda / \mu$ and $x$ with $x / \mu$.
Once again, the graph of $f$ has the classic unimodal shape, but the inflection points are complicated functions of the parameters.
Open the special distribution simulator and select the Wald distribution. Vary the parameters and note the shape of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Again, the quantile function cannot be expressed in a simple closed form, so the median and other quantiles must be approximated.
Open the special distribution calculator and select the Wald distribution. Vary the parameters and note the shape of the distribution and density functions. For selected values of the parameters, compute approximate values of the first quartile, the median, and the third quartile.
Moments
Suppose again that $X$ has the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$. By definition, we can take $X = \mu U$ where $U$ has the basic Wald distribution with shape parameter $\lambda / \mu$.
$X$ has moment generating function $M$ given by $M(t) = \exp\left[\frac{\lambda}{\mu} \left(1 - \sqrt{1 - \frac{2 \mu^2 t}{\lambda}}\right)\right], \quad t \lt \frac{\lambda}{2 \mu^2}$
Proof
Recall that the MGF $M$ of $X$ is related to the MGF $m$ of $U$ by $M(t) = m(t \mu)$. Hence the result follows from the MGF result above, with $\lambda$ replaced by $\lambda / \mu$ and $t$ by $t \mu$.
As promised, the parameter $\mu$ is the mean of Wald distribution.
The mean and variance of $X$ are
1. $\E(X) = \mu$
2. $\var(X) = \mu^3 / \lambda$
Proof
From the results for the mean and variance above and basic properties of expected value and variance, we have $\E(X) = \mu \E(U) = \mu \cdot 1$ and $\var(X) = \mu^2 \var(U) = \mu^2 \frac{\mu}{\lambda}$.
Open the special distribution simulator and select the Wald distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $X$ are
1. $\skw(X) = 3 \sqrt{\mu / \lambda}$
2. $\kur(X) = 3 + 15 \mu / \lambda$
Proof
Skewness and kurtosis are invariant under scale transformations, so $\skw(X) = \skw(U)$ and $\kur(X) = \kur(U)$. The results then follow from the skewness and kurtosis above, with $\lambda$ replaced by $\lambda / \mu$.
Related Distribution
As noted earlier, the Wald distribution is a scale family, although neither of the parameters is a scale parameter.
Suppose that $X$ has the Wald distribution with shape parameters $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$ and that $c \in (0, \infty)$. Then $Y = c X$ has the Wald distribution with shape parameter $c \lambda$ and mean $c \mu$.
Proof
By definition, we can take $X = \mu U$ where $U$ has the basic Wald distribution with shape parameter $\lambda / \mu$. Then $Y = c X = (c \mu) U$. Since $U$ has shape parameter $c \lambda / c \mu$, the result follows from the definition.
For the next result, it's helpful to re-parameterize the Wald distribution with the mean $\mu$ and the ratio $r = \lambda / \mu^2 \in (0, \infty)$. This parametrization is clearly equivalent, since we can recover the shape parameter from the mean and ratio as $\lambda = r \mu^2$. Note also that $r = \E(X) \big/ \var(X)$, the ratio of the mean to the variance. Finally, note that the moment generating function above becomes $M(t) = \exp\left[r \mu \left(1 - \sqrt{1 - \frac{2}{r}t}\right)\right], \quad t \lt \frac{r}{2}$ and of course, this function characterizes the Wald distribution with this parametrization. Our next result is that the Wald distribution is closed under convolution (corresponding to sums of independent variables) when the ratio is fixed.
Suppose that $X_1$ has the Wald distribution with mean $\mu_1 \in (0, \infty)$ and ratio $r \in (0, \infty)$; $X_2$ has the Wald distribution with mean $\mu_2 \in (0, \infty)$ and ratio $r$; and that $X_1$ and $X_2$ are independent. Then $Y = X_1 + X_2$ has the Wald distribution with mean $\mu_1 + \mu_2$ and ratio $r$.
Proof
For $i \in \{1, 2\}$, the MGF of $X_i$ is $M_i(t) = \exp\left[r \mu_i \left(1 - \sqrt{1 - \frac{2}{r}t}\right)\right], \quad t \lt \frac{r}{2}$ Hence the MGF of $Y = X_i + X_2$ is $M(t) = M_1(t) M_2(t) = \exp\left[r\left(\mu_1 + \mu_2\right) \left(1 - \sqrt{1 - \frac{2}{r}t}\right)\right], \quad t \lt \frac{r}{2}$ Hence $Y$ has the Wald distribution with mean $\mu_1 + \mu_2$ and ratio $r$.
In the previous result, note that the shape parameter of $X_1$ is $r \mu_1^2$, the shape parameter of $X_2$ is $r \mu_2^2$, and the shape parameter of $Y$ is $\lambda = r (\mu_1 + \mu_2)^2$. Also, of course, the result generalizes to a sum of any finite number of independent Wald variables, as long as the ratio is fixed. The next couple of results are simple corollaries.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$. Then
1. $Y_n = \sum_{i=1}^n X_i$ has the Wald distribution with shape parameter $n^2 \lambda$ and mean $n \mu$.
2. $M_n = \frac{1}{n} \sum_{i=1}^n X_i$ has the Wald distribution with shape parameter $n \lambda$ and mean $\mu$.
Proof
1. This follows from the previous result and induction. The mean of $Y_n$ of course is $n \mu$. The common ratio is $r = \lambda / \mu^2$, and hence the shape parameter of $Y_n$ is $(\lambda / \mu^2) (n \mu)^2 = n^2 \lambda$.
2. This follows from (a) and the scaling result above. The mean of $M_n$ of course is $\mu$ and the shape parameter is $(1 / n) (n^2 \lambda) = n \lambda$.
In the context of the previous theorem, $(X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Wald distribution, and $\frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean. The Wald distribution is infinitely divisible:
Suppose that $X$ has the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu \in (0, \infty)$. For every $n \in \N_+$, $X$ has the same distribution as $\sum_{i=1}^n X_i$ where $(X_1, X_2, \ldots, X_n)$ are independent, and each has the Wald distribution with shape parameter $\lambda / n^2$ and mean $\mu / n$.
The Lévy distribution is related to the Wald distribution, not surprising since the Lévy distribution governs the first time that a standard Brownian motion hits a fixed positive value.
For fixed $\lambda \in (0, \infty)$, the Wald distribution with shape parameter $\lambda$ and mean $\mu \in (0, \infty)$ converges to the Lévy distribution with location parameter 0 and scale parameter $\lambda$ as $\mu \to \infty$.
Proof
From the formula for the CDF above, note that $F(x) \to \Phi\left(-\sqrt{\frac{\lambda}{x}}\right) + \Phi\left(-\sqrt{\frac{\lambda}{x}}\right) = 2 \Phi\left(-\sqrt{\frac{\lambda}{x}}\right) = 2\left[1 - \Phi\left(\sqrt{\frac{\lambda}{x}}\right)\right] \text{ as } \mu \to \infty$ But the last expression is the distribution function of the Lévy distribution with location parameter 0 and scale parameter $\lambda$.
The other limiting distribution, this time with the mean fixed, is less interesting.
For fixed $\mu \in (0, \infty)$, the Wald distribution with shape parameter $\lambda \in (0, \infty)$ and mean $\mu$ converges to point mass at $\mu$ as $\lambda \to \infty$.
Proof
This time, it's better to use $M$, the moment generating function above. By rationalizing we see that for fixed $\mu \in (0, \infty)$ and $t \in \R$ $\frac{\lambda}{\mu} \left(1 - \sqrt{1 - \frac{2 \mu^2 t}{\lambda}}\right) = \frac{2 \mu t}{1 + \sqrt{1 - 2 \mu^2 t / \lambda}} \to \mu t \text{ as } \lambda \to \infty$ Hence $M(t) \to e^{\mu t}$ as $\lambda \to \infty$ and the limit is the MGF of the constant random variable $\mu$.
Finally, the Wald distribution is a member of the general exponential family of distributions.
The Wald distribution is a general exponential distribution with natural parameters $-\lambda / (2 \mu^2)$ and $-\lambda / 2$, and natural statistics $X$ and $1 / X$.
Proof
This follows from the PDF $f$ above. If we expand the square and simplify, we can write $f$ in the form $f(x) = \sqrt{\frac{\lambda}{2 \pi}} \exp\left(\frac{\lambda}{2 \mu}\right) x^{-3/2} \exp\left(-\frac{\lambda}{2 \mu^2} x - \frac{\lambda}{2} \frac{1}{x}\right), \quad x \in (0, \infty)$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In this section, we will study a two-parameter family of distributions that has special importance in reliability.
The Basic Weibull Distribution
Distribution Functions
The basic Weibull distribution with shape parameter $k \in (0, \infty)$ is a continuous distribution on $[0, \infty)$ with distribution function $G$ given by $G(t) = 1 - \exp\left(-t^k\right), \quad t \in [0, \infty)$ The special case $k = 1$ gives the standard Weibull distribution.
Proof
Clearly $G$ is continuous and increasing on $[0, \infty)$ with $G(0) = 0$ and $G(t) \to 1$ as $t \to \infty$.
The Weibull distribution is named for Waloddi Weibull. Weibull was not the first person to use the distribution, but was the first to study it extensively and recognize its wide use in applications. The standard Weibull distribution is the same as the standard exponential distribution. But as we will see, every Weibull random variable can be obtained from a standard Weibull variable by a simple deterministic transformation, so the terminology is justified.
The probability density function $g$ is given by $g(t) = k t^{k - 1} \exp\left(-t^k\right), \quad t \in (0, \infty)$
1. If $0 \lt k \lt 1$, $g$ is decreasing and concave upward with $g(t) \to \infty$ as $t \downarrow 0$.
2. If $k = 1$, $g$ is decreasing and concave upward with mode $t = 0$.
3. If $k \gt 1$, $g$ increases and then decreases, with mode $t = \left( \frac{k - 1}{k} \right)^{1/k}$.
4. If $1 \lt k \le 2$, $g$ is concave downward and then upward, with inflection point at $t = \left[\frac{3 (k - 1) + \sqrt{(5 k - 1)(k - 1)}}{2 k}\right]^{1/k}$
5. If $k \gt 2$, $g$ is concave upward, then downward, then upward again, with inflection points at $t = \left[\frac{3 (k - 1) \pm \sqrt{(5 k - 1)(k - 1)}}{2 k}\right]^{1/k}$
Proof
These results follow from basic calculus. The PDF is $g = G^\prime$ where $G$ is the CDF above. The first order properties come from $g^\prime(t) = k t^{k-2} \exp\left(-t^k\right)\left[-k t^k + (k - 1)\right]$ The second order properties come from $g^{\prime\prime}(t) = k t^{k-3} \exp\left(-t^k\right)\left[k^2 t^{2 k} - 3 k (k - 1) t^k + (k - 1)(k - 2)\right]$
So the Weibull density function has a rich variety of shapes, depending on the shape parameter, and has the classic unimodal shape when $k \gt 1$. If $k \ge 1$, $g$ is defined at 0 also.
In the special distribution simulator, select the Weibull distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $G^{-1}$ is given by $G^{-1}(p) = [-\ln(1 - p)]^{1/k}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = (\ln 4 - \ln 3)^{1/k}$.
2. The median is $q_2 = (\ln 2)^{1/k}$.
3. The third quartile is $q_3 = (\ln 4)^{1/k}$.
Proof
The formula for $G^{-1}(p)$ comes from solving $G(t) = p$ for $t$ in terms of $p$.
Open the special distribution calculator and select the Weibull distribution. Vary the shape parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, compute the median and the first and third quartiles.
The reliability function $G^c$ is given by $G^c(t) = \exp(-t^k), \quad t \in [0, \infty)$
Proof
This follows trivially from the CDF above, since $G^c = 1 - G$.
The failure rate function $r$ is given by $r(t) = k t^{k-1}, \quad t \in (0, \infty)$
1. If $0 \lt k \lt 1$, $r$ is decreasing with $r(t) \to \infty$ as $t \downarrow 0$ and $r(t) \to 0$ as $t \to \infty$.
2. If $k = 1$, $r$ is constant 1.
3. If $k \gt 1$, $r$ is increasing with $r(0) = 0$ and $r(t) \to \infty$ as $t \to \infty$.
Proof
The formula for $r$ follows immediately from the PDF $g$ and the reliability function $G^c$ given above, since $r = g \big/ G^c$.
Thus, the Weibull distribution can be used to model devices with decreasing failure rate, constant failure rate, or increasing failure rate. This versatility is one reason for the wide use of the Weibull distribution in reliability. If $k \ge 1$, $r$ is defined at 0 also.
Moments
Suppose that $Z$ has the basic Weibull distribution with shape parameter $k \in (0, \infty)$. The moments of $Z$, and hence the mean and variance of $Z$, can be expressed in terms of the gamma function $\Gamma$.
$\E(Z^n) = \Gamma\left(1 + \frac{n}{k}\right)$ for $n \ge 0$.
Proof
For $n \ge 0$, $\E(Z^n) = \int_0^\infty t^n k t^{k-1} \exp(-t^k) \, dt$ Substituting $u = t^k$ gives $\E(Z^n) = \int_0^\infty u^{n/k} e^{-u} du = \Gamma\left(1 + \frac{n}{k}\right)$
So the Weibull distribution has moments of all orders. The moment generating function, however, does not have a simple, closed expression in terms of the usual elementary functions.
In particular, the mean and variance of $Z$ are
1. $\E(Z) = \Gamma\left(1 + \frac{1}{k}\right)$
2. $\var(Z) = \Gamma\left(1 + \frac{2}{k}\right) - \Gamma^2\left(1 + \frac{1}{k}\right)$
Note that $\E(Z) \to 1$ and $\var(Z) \to 0$ as $k \to \infty$. We will learn more about the limiting distribution below.
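As a quick illustration of these formulas and of the limits just noted, here is a small Python sketch using only the standard library; the shape parameters in the list are arbitrary choices.

```python
from math import gamma, sqrt

def basic_weibull_mean_sd(k):
    # E(Z) = Gamma(1 + 1/k), var(Z) = Gamma(1 + 2/k) - Gamma(1 + 1/k)^2
    m = gamma(1 + 1 / k)
    v = gamma(1 + 2 / k) - m ** 2
    return m, sqrt(v)

for k in [0.5, 1, 2, 5, 20, 100]:   # illustrative shape parameters
    m, s = basic_weibull_mean_sd(k)
    print(f"k = {k:5.1f}   E(Z) = {m:.4f}   sd(Z) = {s:.4f}")
```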
In the special distribution simulator, select the Weibull distribution. Vary the shape parameter and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the shape parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis also follow easily from the general moment result above, although the formulas are not particularly helpful.
Skewness and kurtosis
1. The skewness of $Z$ is $\skw(Z) = \frac{\Gamma(1 + 3 / k) - 3 \Gamma(1 + 1 / k) \Gamma(1 + 2 / k) + 2 \Gamma^3(1 + 1 / k)}{\left[\Gamma(1 + 2 / k) - \Gamma^2(1 + 1 / k)\right]^{3/2}}$
2. The kurtosis of $Z$ is $\kur(Z) = \frac{\Gamma(1 + 4 / k) - 4 \Gamma(1 + 1 / k) \Gamma(1 + 3 / k) + 6 \Gamma^2(1 + 1 / k) \Gamma(1 + 2 / k) - 3 \Gamma^4(1 + 1 / k)}{\left[\Gamma(1 + 2 / k) - \Gamma^2(1 + 1 / k)\right]^2}$
Proof
The results follow directly from the general moment result and the computational formulas for skewness and kurtosis.
Related Distributions
As noted above, the standard Weibull distribution (shape parameter 1) is the same as the standard exponential distribution. More generally, any basic Weibull variable can be constructed from a standard exponential variable.
Suppose that $k \in (0, \infty)$.
1. If $U$ has the standard exponential distribution then $Z = U^{1/k}$ has the basic Weibull distribution with shape parameter $k$.
2. If $Z$ has the basic Weibull distribution with shape parameter $k$ then $U = Z^k$ has the standard exponential distribution.
Proof
We use distribution functions. The basic Weibull CDF is given above; the standard exponential CDF is $u \mapsto 1 - e^{-u}$ on $[0, \infty)$. Note that the transformations $z = u^{1/k}$ and $u = z^k$ are inverses of one another, and both are strictly increasing maps of $[0, \infty)$ onto $[0, \infty)$.
1. $\P(Z \le z) = \P\left(U \le z^k\right) = 1 - \exp\left(-z^k\right)$ for $z \in [0, \infty)$.
2. $\P(U \le u) = \P\left(Z \le u^{1/k}\right) = 1 - \exp\left[-\left(u^{1/k}\right)^k\right] = 1 - e^{-u}$ for $u \in [0, \infty)$.
The basic Weibull distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function given above.
Suppose that $k \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $Z = (-\ln U)^{1/k}$ has the basic Weibull distribution with shape parameter $k$.
2. If $Z$ has the basic Weibull distribution with shape parameter $k$ then $U = \exp\left(-Z^k\right)$ has the standard uniform distribution.
Proof
Let $G$ denote the CDF of the basic Weibull distribution with shape parameter $k$ and $G^{-1}$ the corresponding quantile function, given above.
1. If $U$ has the standard uniform distribution then so does $1 - U$. Hence $Z = G^{-1}(1 - U) = (-\ln U)^{1/k}$ has the basic Weibull distribution with shape parameter $k$.
2. If $Z$ has the basic Weibull distribution with shape parameter $k$ then $G(Z)$ has the standard uniform distribution. But then so does $U = 1 - G(Z) = \exp\left(-Z^k\right)$.
Since the quantile function has a simple, closed form, the basic Weibull distribution can be simulated using the random quantile method.
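Here is a minimal sketch of the random quantile (inverse transform) method in Python with NumPy; the shape parameter, sample size, and seed are arbitrary illustrative choices. The empirical mean is compared with $\Gamma(1 + 1/k)$.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(seed=2024)   # arbitrary seed, for reproducibility
k, n = 1.7, 100_000                      # illustrative shape parameter and sample size

u = rng.uniform(size=n)                  # standard uniform variables
z = (-np.log(1 - u)) ** (1 / k)          # G^{-1}(U) = [-ln(1 - U)]^{1/k}

print("empirical mean :", z.mean())
print("Gamma(1 + 1/k) :", gamma(1 + 1 / k))
```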
Open the random quantile experiment and select the Weibull distribution. Vary the shape parameter and note again the shape of the distribution and density functions. For selected values of the parameter, run the simulation 1000 times and compare the empirical density, mean, and standard deviation to their distributional counterparts.
The limiting distribution with respect to the shape parameter is concentrated at a single point.
The basic Weibull distribution with shape parameter $k \in (0, \infty)$ converges to point mass at 1 as $k \to \infty$.
Proof
Once again, let $G$ denote the basic Weibull CDF with shape parameter $k$ given above. Note that $G(t) \to 0$ as $k \to \infty$ for $0 \le t \lt 1$; $G(1) = 1 - e^{-1}$ for all $k$; and $G(t) \to 1$ as $k \to \infty$ for $t \gt 1$. Except for the point of discontinuity $t = 1$, the limits are the CDF of point mass at 1.
The General Weibull Distribution
Like most special continuous distributions on $[0, \infty)$, the basic Weibull distribution is generalized by the inclusion of a scale parameter. A scale transformation often corresponds in applications to a change of units, and for the Weibull distribution this usually means a change in time units.
Suppose that $Z$ has the basic Weibull distribution with shape parameter $k \in (0, \infty)$. For $b \in (0, \infty)$, random variable $X = b Z$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$.
Generalizations of the results given above follow easily from basic properties of the scale transformation.
Distribution Functions
Suppose that $X$ has the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$.
$X$ has distribution function $F$ given by $F(t) = 1 - \exp\left[-\left(\frac{t}{b}\right)^k\right], \quad t \in [0, \infty)$
Proof
Recall that $F(t) = G\left(\frac{t}{b}\right)$ for $t \in [0, \infty)$ where $G$ is the CDF of the basic Weibull distribution with shape parameter $k$, given above.
$X$ has probability density function $f$ given by $f(t) = \frac{k}{b^k} \, t^{k-1} \, \exp \left[ -\left( \frac{t}{b} \right)^k \right], \quad t \in (0, \infty)$
1. If $0 \lt k \lt 1$, $f$ is decreasing and concave upward with $f(t) \to \infty$ as $t \downarrow 0$.
2. If $k = 1$, $f$ is decreasing and concave upward with mode $t = 0$.
3. If $k \gt 1$, $f$ increases and then decreases, with mode $t = b \left( \frac{k - 1}{k} \right)^{1/k}$.
4. If $1 \lt k \le 2$, $f$ is concave downward and then upward, with inflection point at $t = b \left[\frac{3 (k - 1) + \sqrt{(5 k - 1)(k - 1)}}{2 k}\right]^{1/k}$
5. If $k \gt 2$, $f$ is concave upward, then downward, then upward again, with inflection points at $t = b \left[\frac{3 (k - 1) \pm \sqrt{(5 k - 1)(k - 1)}}{2 k}\right]^{1/k}$
Proof
Recall that $f(t) = \frac{1}{b} g\left(\frac{t}{b}\right)$ for $t \in (0, \infty)$ where $g$ is the PDF of the corresponding basic Weibull distribution given above.
Open the special distribution simulator and select the Weibull distribution. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
$X$ has quantile function $F^{-1}$ given by $F^{-1}(p) = b [-\ln(1 - p)]^{1/k}, \quad p \in [0, 1)$
1. The first quartile is $q_1 = b (\ln 4 - \ln 3)^{1/k}$.
2. The median is $q_2 = b (\ln 2)^{1/k}$.
3. The third quartile is $q_3 = b (\ln 4)^{1/k}$.
Proof
Recall that $F^{-1}(p) = b G^{-1}(p)$ for $p \in [0, 1)$ where $G^{-1}$ is the quantile function of the corresponding basic Weibull distribution given above.
Open the special distribution calculator and select the Weibull distribution. Vary the parameters and note the shape of the distribution and probability density functions. For selected values of the parameters, compute the median and the first and third quartiles.
$X$ has reliability function $F^c$ given by $F^c(t) = \exp\left[-\left(\frac{t}{b}\right)^k\right], \quad t \in [0, \infty)$
Proof
This follows trivially from the CDF $F$ given above, since $F^c = 1 - F$.
As before, the Weibull distribution has decreasing, constant, or increasing failure rates, depending only on the shape parameter.
$X$ has failure rate function $R$ given by $R(t) = \frac{k t^{k-1}}{b^k}, \quad t \in (0, \infty)$
1. If $0 \lt k \lt 1$, $R$ is decreasing with $R(t) \to \infty$ as $t \downarrow 0$ and $R(t) \to 0$ as $t \to \infty$.
2. If $k = 1$, $R$ is constant $\frac{1}{b}$.
3. If $k \gt 1$, $R$ is increasing with $R(0) = 0$ and $R(t) \to \infty$ as $t \to \infty$.
Moments
Suppose again that $X$ has the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. Recall that by definition, we can take $X = b Z$ where $Z$ has the basic Weibull distribution with shape parameter $k$.
$\E(X^n) = b^n \Gamma\left(1 + \frac{n}{k}\right)$ for $n \ge 0$.
Proof
The result then follows from the moments of $Z$ above, since $\E(X^n) = b^n \E(Z^n)$.
In particular, the mean and variance of $X$ are
1. $\E(X) = b \Gamma\left(1 + \frac{1}{k}\right)$
2. $\var(X) = b^2 \left[\Gamma\left(1 + \frac{2}{k}\right) - \Gamma^2\left(1 + \frac{1}{k}\right)\right]$
Note that $\E(X) \to b$ and $\var(X) \to 0$ as $k \to \infty$.
Open the special distribution simulator and select the Weibull distribution. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Skewness and kurtosis
1. The skewness of $X$ is $\skw(X) = \frac{\Gamma(1 + 3 / k) - 3 \Gamma(1 + 1 / k) \Gamma(1 + 2 / k) + 2 \Gamma^3(1 + 1 / k)}{\left[\Gamma(1 + 2 / k) - \Gamma^2(1 + 1 / k)\right]^{3/2}}$
2. The kurtosis of $X$ is $\kur(X) = \frac{\Gamma(1 + 4 / k) - 4 \Gamma(1 + 1 / k) \Gamma(1 + 3 / k) + 6 \Gamma^2(1 + 1 / k) \Gamma(1 + 2 / k) - 3 \Gamma^4(1 + 1 / k)}{\left[\Gamma(1 + 2 / k) - \Gamma^2(1 + 1 / k)\right]^2}$
Proof
Skewness and kurtosis depend only on the standard score of the random variable, and hence are invariant under scale transformations. So the results are the same as the skewness and kurtosis of $Z$.
Related Distributions
Since the Weibull distribution is a scale family for each value of the shape parameter, it is trivially closed under scale transformations.
Suppose that $X$ has the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. If $c \in (0, \infty)$ then $Y = c X$ has the Weibull distribution with shape parameter $k$ and scale parameter $b c$.
Proof
By definition, we can take $X = b Z$ where $Z$ has the basic Weibull distribution with shape parameter $k$. But then $Y = c X = (b c) Z$.
The exponential distribution is a special case of the Weibull distribution, the case corresponding to constant failure rate.
The Weibull distribution with shape parameter 1 and scale parameter $b \in (0, \infty)$ is the exponential distribution with scale parameter $b$.
Proof
When $k = 1$, the Weibull CDF $F$ is given by $F(t) = 1 - e^{-t / b}$ for $t \in [0, \infty)$. But this is also the CDF of the exponential distribution with scale parameter $b$.
More generally, any Weibull distributed variable can be constructed from the standard variable. The following result is a simple generalization of the connection between the basic Weibull distribution and the exponential distribution.
Suppose that $k, \, b \in (0, \infty)$.
1. If $X$ has the standard exponential distribution (parameter 1), then $Y = b \, X^{1/k}$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$.
2. If $Y$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$, then $X = (Y / b)^k$ has the standard exponential distribution.
Proof
The results are a simple consequence of the corresponding result above.
1. If $X$ has the standard exponential distribution then $X^{1/k}$ has the basic Weibull distribution with shape parameter $k$, and hence $Y = b X^{1/k}$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$.
2. If $Y$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$ then $Y / b$ has the basic Weibull distribution with shape parameter $k$, and hence $X = (Y / b)^k$ has the standard exponential distribution.
The Rayleigh distribution, named for William Strutt, Lord Rayleigh, is also a special case of the Weibull distribution.
The Rayleigh distribution with scale parameter $b \in (0, \infty)$ is the Weibull distribution with shape parameter $2$ and scale parameter $\sqrt{2} b$.
Proof
The Rayleigh distribution with scale parameter $b$ has CDF $F$ given by $F(x) = 1 - \exp\left(-\frac{x^2}{2 b^2}\right), \quad x \in [0, \infty)$ But this is also the Weibull CDF with shape parameter $2$ and scale parameter $\sqrt{2} b$.
Recall that the minimum of independent, exponentially distributed variables also has an exponential distribution (and the rate parameter of the minimum is the sum of the rate parameters of the variables). The Weibull distribution has a similar, but more restricted property.
Suppose that $(X_1, X_2, \ldots, X_n)$ is an independent sequence of variables, each having the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. Then $U = \min\{X_1, X_2, \ldots, X_n\}$ has the Weibull distribution with shape parameter $k$ and scale parameter $b / n^{1/k}$.
Proof
Recall that the reliability function of the minimum of independent variables is the product of the reliability functions of the variables. It follows that $U$ has reliability function given by $\P(U \gt t) = \left\{\exp\left[-\left(\frac{t}{b}\right)^k\right]\right\}^n = \exp\left[-n \left(\frac{t}{b}\right)^k\right] = \exp\left[-\left(\frac{t}{b / n^{1/k}}\right)^k\right], \quad t \in [0, \infty)$ and so the result follows.
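Here is a simulation sketch of this property in Python with NumPy; the parameters, sample size, and seed are arbitrary illustrative choices, and NumPy's Weibull sampler is assumed to return basic (scale 1) Weibull variables, matching the inverse transform above. The empirical mean of the minimum is compared with the mean of the Weibull distribution with shape $k$ and scale $b / n^{1/k}$.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(seed=7)            # arbitrary seed
k, b, n_vars, n_reps = 2.5, 10.0, 5, 200_000   # illustrative values

# Each row holds n_vars independent Weibull(k, b) variables; take the row minimum
x = b * rng.weibull(k, size=(n_reps, n_vars))
u = x.min(axis=1)

scale_min = b / n_vars ** (1 / k)              # claimed scale parameter of the minimum
print("empirical mean of the minimum:", u.mean())
print("b / n^(1/k) * Gamma(1 + 1/k) :", scale_min * gamma(1 + 1 / k))
```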
As before, the Weibull distribution has the usual connections with the standard uniform distribution by means of the distribution function and the quantile function given above.
Suppose that $k, \, b \in (0, \infty)$.
1. If $U$ has the standard uniform distribution then $X = b (-\ln U )^{1/k}$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$.
2. If $X$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$ then $U = \exp\left[-(X/b)^k\right]$ has the standard uniform distribution.
Proof
Let $F$ denote the Weibull CDF with shape parameter $k$ and scale parameter $b$, so that $F^{-1}$ is the corresponding quantile function.
1. If $U$ has the standard uniform distribution then so does $1 - U$. Hence $X = F^{-1}(1 - U) = b (-\ln U )^{1/k}$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$.
2. If $X$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$ then $F(X)$ has the standard uniform distribution. But then so does $U = 1 - F(X) = \exp\left[-(X/b)^k\right]$.
Again, since the quantile function has a simple, closed form, the Weibull distribution can be simulated using the random quantile method.
Open the random quantile experiment and select the Weibull distribution. Vary the parameters and note again the shape of the distribution and density functions. For selected values of the parameters, run the simulation 1000 times and compare the empirical density, mean, and standard deviation to their distributional counterparts.
The limiting distribution with respect to the shape parameter is concentrated at a single point.
The Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ converges to point mass at $b$ as $k \to \infty$.
Proof
If $X$ has the Weibull distribution with shape parameter $k$ and scale parameter $b$, then we can write $X = b Z$ where $Z$ has the basic Weibull distribution with shape parameter $k$. We showed above that the distribution of $Z$ converges to point mass at 1, so by the continuity theorem for convergence in distribution, the distribution of $X$ converges to point mass at $b$.
Finally, the Weibull distribution is a member of the family of general exponential distributions if the shape parameter is fixed.
Suppose that $X$ has the Weibull distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. For fixed $k$, the distribution of $X$ is a general exponential distribution with respect to $b$, with natural parameter $-1 / b^k$ and natural statistic $X^k$.
Proof
This follows from the definition of the general exponential distribution, since for fixed $k$ the Weibull PDF can be written in the form $f(t) = \frac{k}{b^k} \, t^{k - 1} \exp\left(-\frac{1}{b^k} \, t^k\right), \quad t \in (0, \infty)$ so the natural parameter is $-1 / b^k$ and the natural statistic is $t^k$.
Computational Exercises
The lifetime $T$ of a device (in hours) has the Weibull distribution with shape parameter $k = 1.2$ and scale parameter $b = 1000$.
1. Find the probability that the device will last at least 1500 hours.
2. Approximate the mean and standard deviation of $T$.
3. Compute the failure rate function.
Answer
1. $\P(T \gt 1500) = 0.1966$
2. $\E(T) = 940.7$, $\sd(T) = 787.2$
3. $h(t) = 0.000301 t^{0.2}$
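The answers above can be reproduced directly from the formulas in this section; here is a short computation in Python (standard library only):

```python
from math import exp, gamma, sqrt

k, b = 1.2, 1000.0
# 1. reliability function at t = 1500
print("P(T > 1500) =", exp(-(1500 / b) ** k))
# 2. mean and standard deviation from the gamma-function formulas
mean = b * gamma(1 + 1 / k)
sd = b * sqrt(gamma(1 + 2 / k) - gamma(1 + 1 / k) ** 2)
print("E(T) =", mean, "  sd(T) =", sd)
# 3. failure rate R(t) = (k / b^k) t^(k - 1); the leading coefficient is
print("k / b^k =", k / b ** k)
```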
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Benford's law refers to probability distributions that seem to govern the significant digits in real data sets. The law is named for the American physicist and engineer Frank Benford, although the law was actually discovered earlier by the astronomer and mathematician Simon Newcomb.
To understand Benford's law, we need some preliminaries. Recall that a positive real number $x$ can be written uniquely in the form $x = y \cdot 10^n$ (sometimes called scientific notation) where $y \in \left[\frac{1}{10}, 1\right)$ is the mantissa and $n \in \Z$ is the exponent (both of these terms are base 10, of course). Note that $\log x = \log y + n$ where the logarithm function is the base 10 common logarithm instead of the usual base $e$ natural logarithm. In the old days BC (before calculators), one would compute the logarithm of a number by looking up the logarithm of the mantissa in a table of logarithms, and then adding the exponent. Of course, these remarks apply to any base $b \gt 1$, not just base 10. Just replace 10 with $b$ and the common logarithm with the base $b$ logarithm.
Distribution of the Mantissa
Distribution Functions
Suppose now that $X$ is a number selected at random from a certain data set of positive numbers. Based on empirical evidence from a number of different types of data, Newcomb, and later Benford, noticed that the mantissa $Y$ of $X$ seemed to have distribution function $F(y) = 1 + \log y$ for $y \in [1/10, 1)$. We will generalize this to an arbitrary base $b \gt 1$.
The Benford mantissa distribution with base $b \in (1, \infty)$, is a continuous distribution on $[1/b, 1)$ with distribution function $F$ given by $F(y) = 1 + \log_b y, \quad y \in [1/b, 1)$ The special case $b = 10$ gives the standard Benford mantissa distribution.
Proof
Note that $F$ is continuous and strictly increasing on $[1 / b, 1)$ with $F(1 / b) = 0$ and $F(1) = 1$.
The probability density function $f$ is given by $f(y) = \frac{1}{y \ln b}, \quad y \in [1/b, 1)$
1. $f$ is decreasing with mode $y = \frac{1}{b}$.
2. $f$ is concave upward.
Proof
These results follow from the CDF $F$ above and standard calculus. Recall that $f = F^\prime$.
Open the Special Distribution Simulator and select the Benford mantissa distribution. Vary the base $b$ and note the shape of the probability density function. For various values of $b$, run the simulation 1000 times and compare the empirical density function to the probability density function.
The quantile function $F^{-1}$ is given by $F^{-1}(p) = \frac{1}{b^{1-p}}, \quad p \in [0, 1]$
1. The first quartile is $F^{-1}\left(\frac{1}{4}\right) = \frac{1}{b^{3/4}}$
2. The median is $F^{-1}\left(\frac{1}{2}\right) = \frac{1}{\sqrt{b}}$
3. The third quartile is $F^{-1}\left(\frac{3}{4}\right) = \frac{1}{b^{1/4}}$
Proof
The formula for $F^{-1}(p)$ follows by solving $F(x) = p$ for $x$ in terms of $p$.
Numerical values of the quartiles for the standard (base 10) distribution are given in an exercise below.
Open the special distribution calculator and select the Benford mantissa distribution. Vary the base and note the shape and location of the distribution and probability density functions. For selected values of the base, compute the median and the first and third quartiles.
Moments
Assume that $Y$ has the Benford mantissa distribution with base $b \in (1, \infty)$.
The moments of $Y$ are $\E\left(Y^n\right) = \frac{b^n - 1}{n b^n \ln b}, \quad n \in (0, \infty)$
Proof
For $n \gt 0$, $\E\left(Y^n\right) = \int_{1/b}^1 y^n \frac{1}{y \ln b} dy = \frac{1}{\ln b} \int_{1/b}^1 y^{n-1} dy = \frac{1 - 1 \big/ b^n}{n \ln(b)}$
Note that for fixed $n \gt 0$, $\E(Y^n) \to 1$ as $b \downarrow 1$ and $\E(Y^n) \to 0$ as $b \to \infty$. We will learn more about the limiting distribution below. The mean and variance follow easily from the general moment result.
Mean and variance
1. The mean of $Y$ is $\E(Y) = \frac{b - 1}{b \ln b}$
2. the variance of $Y$ is $\var(Y) = \frac{b - 1}{b^2 \ln b} \left[ \frac{b + 1}{2} - \frac{b - 1}{\ln b} \right]$
Numerical values of the mean and variance for the standard (base 10) distribution are given in an exercise below.
In the Special Distribution Simulator, select the Benford mantissa distribution. Vary the base $b$ and note the size and location of the mean $\pm$ standard deviation bar. For selected values of $b$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Related Distributions
The Benford mantissa distribution has the usual connections to the standard uniform distribution by means of the distribution function and quantile function given above.
Suppose that $b \in (1, \infty)$.
1. If $U$ has the standard uniform distribution then $Y = b^{-U}$ has the Benford mantissa distribution with base $b$.
2. If $Y$ has the Benford mantissa distribution with base $b$ then $U = -\log_b Y$ has the standard uniform distribution.
Proof
1. If $U$ has the standard uniform distribution then so does $1 - U$ and hence $Y = F^{-1}(1 - U) = b^{-U}$ has the Benford mantissa distribution with base $b$.
2. The CDF $F$ is strictly increasing on $[b^{-1}, 1)$. Hence if $Y$ has the Benford mantissa distribution with base $b$ then $F(Y) = 1 + \log_b(Y)$ has the standard uniform distribution and hence so does $1 - F(Y) = -\log_b(Y)$.
Since the quantile function has a simple closed form, the Benford mantissa distribution can be simulated using the random quantile method.
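A minimal sketch of the random quantile method for the Benford mantissa distribution, in Python with NumPy; the base, sample size, and seed are arbitrary illustrative choices. The empirical mean is compared with $(b - 1) / (b \ln b)$.

```python
import numpy as np

rng = np.random.default_rng(seed=10)    # arbitrary seed
b, n = 10.0, 100_000                    # illustrative base and sample size

u = rng.uniform(size=n)                 # standard uniform variables
y = b ** (-u)                           # F^{-1}(1 - U) = b^{-U}

print("empirical mean    :", y.mean())
print("(b - 1) / (b ln b):", (b - 1) / (b * np.log(b)))
```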
Open the random quantile experiment and select the Benford mantissa distribution. Vary the base $b$ and note again the shape and location of the distribution and probability density functions. For selected values of $b$, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
Also of interest, of course, are the limiting distributions of $Y$ with respect to the base $b$.
The Benford mantissa distribution with base $b \in (1, \infty)$ converges to
1. Point mass at 1 as $b \downarrow 1$.
2. Point mass at 0 as $b \uparrow \infty$.
Proof
Note that the CDF of $Y$ above can be written as $F(y) = 1 + \ln(y) \big/ \ln(b)$ for $1/b \le y \lt 1$, and of course we also have $F(y) = 0$ for $y \lt 1/b$ and $F(y) = 1$ for $y \ge 1$.
1. As $b \downarrow 1$, $1/b \uparrow 1$. So for fixed $y \lt 1$ we have $F(y) = 0$ once $b$ is close enough to 1 (since then $y \lt 1/b$), while $F(y) = 1$ for $y \ge 1$. Thus in the limit, $F(y) = 0$ for $y \lt 1$ and $F(y) = 1$ for $y \gt 1$, the CDF of point mass at 1 except at the point of discontinuity.
2. As $b \uparrow \infty$, $1/b \downarrow 0$ and $1 + \ln(y) \big/ \ln(b) \to 1$ for fixed $y \in (0, 1)$, so in the limit we have $F(y) = 0$ for $y \lt 0$ and $F(y) = 1$ for $y \gt 0$, the CDF of point mass at 0 except at the point of discontinuity.
Since the probability density function is bounded on a bounded support interval, the Benford mantissa distribution can also be simulated via the rejection method.
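Since the density is bounded by $f(1/b) = b / \ln b$ on the support $[1/b, 1)$, a simple rejection scheme works: draw candidate points uniformly from the rectangle $[1/b, 1) \times [0, b / \ln b]$ and keep those that fall under the density. Here is a sketch in Python with NumPy; the base, number of candidates, and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(seed=11)        # arbitrary seed
b, n = 10.0, 200_000                         # illustrative base and number of candidates

v = rng.uniform(1 / b, 1, size=n)            # candidate values on the support
w = rng.uniform(0, b / np.log(b), size=n)    # uniform heights up to the maximum of f
accepted = v[w <= 1 / (v * np.log(b))]       # keep candidates under f(y) = 1 / (y ln b)

print("acceptance rate   :", accepted.size / n)
print("empirical mean    :", accepted.mean())
print("(b - 1) / (b ln b):", (b - 1) / (b * np.log(b)))
```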
Open the rejection method experiment and select the Benford mantissa distribution. Vary the base $b$ and note again the shape and location of the probability density functions. For selected values of $b$, run the simulation 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
Distributions of the Digits
Assume now that the base is a positive integer $b \in \{2, 3, \ldots\}$, which of course is the case in standard number systems. Suppose that the sequence of digits of our mantissa $Y$ (in base $b$) is $\left(N_1, N_2, \ldots\right)$, so that $Y = \sum_{k=1}^\infty \frac{N_k}{b^k}$ Thus, our leading digit $N_1$ takes values in $\{1, 2, \ldots, b - 1\}$, while each of the other significant digits takes values in $\{0, 1, \ldots, b - 1\}$. Note that $\left(N_1, N_2, \ldots\right)$ is a stochastic process so at least we would like to know the finite dimensional distributions. That is, we would like to know the joint probability density function of the first $k$ digits for every $k \in \N_+$. But let's start, appropriately enough, with the first digit law. The leading digit is the most important one, and fortunately also the easiest to analyze mathematically.
First Digit Law
$N_1$ has probability density function $g_1$ given by $g_1(n) = \log_b \left(1 + \frac{1}{n} \right) = \log_b(n + 1) - \log_b(n)$ for $n \in \{1, 2, \ldots, b - 1\}$. The density function $g_1$ is decreasing and hence the mode is $n = 1$.
Proof
Note that $N_1 = n$ if and only if $\frac{n}{b} \le Y \lt \frac{n + 1}{b}$ for $n \in \{1, 2, \ldots, b - 1\}$. Hence using the PDF of $Y$ above, $\P(N_1 = n) = \int_{n/b}^{(n+1)/b} \frac{1}{y \ln b} dy = \log_b\left(\frac{n+1}{b}\right) - \log_b\left(\frac{n}{b}\right) = \log_b(n + 1) - \log_b(n)$
Note that when $b = 2$, $N_1 = 1$ deterministically, which of course has to be the case. The first significant digit of a number in base 2 must be 1. Numerical values of $g_1$ for the standard (base 10) distribution are given in an exercise below.
In the Special Distribution Simulator, select the Benford first digit distribution. Vary the base $b$ with the input control and note the shape of the probability density function. For various values of $b$, run the simulation 1000 times and compare the empirical density function to the probability density function.
$N_1$ has distribution function $G_1$ given by $G_1(x) = \log_b \left(\lfloor x \rfloor + 1\right)$ for $x \in [1, b - 1]$.
Proof
Using the PDF of $N_1$ above note that $G_1(n) = \sum_{k=1}^n [\log_b(k + 1) - \log_b(k)] = \log_b(n + 1), \quad n \in \{1, 2, \ldots, b - 1\}$ More generally, $G_1(x) = G_1 \left(\lfloor x \rfloor\right)$ for $x \in [1, b - 1]$
$N_1$ has quantile function $G_1^{-1}$ given by $G_1^{-1}(p) = \left\lceil b^p - 1\right\rceil$ for $p \in (0, 1]$.
1. The first quartile is $\left\lceil b^{1/4} - 1 \right\rceil$.
2. The median is $\left\lceil b^{1/2} - 1 \right\rceil$.
3. The third quartile is $\left\lceil b^{3/4} - 1 \right\rceil$.
Proof
As usual, the formula for $G_1^{-1}(p)$ follows from the CDF $G_1$ above, by solving $p = G_1(x)$ for $x$ in terms of $p$.
Numerical values of the quantiles for the standard (base 10) distribution are given in an exercise below.
Open the special distribution calculator and choose the Benford first digit distribution. Vary the base and note the shape and location of the distribution and probability density functions. For selected values of the base, compute the median and the first and third quartiles.
For the most part the moments of $N_1$ do not have simple expressions. However, we do have the following result for the mean.
$\E(N_1) = (b - 1) - \log_b[(b - 1)!]$.
Proof
From the PDF of $N_1$ above and using standard properties of the logarithm, $\E(N_1) = \sum_{n=1}^{b-1} n \log_b\left(\frac{n + 1}{n}\right) = \log_b\left[\prod_{n=1}^{b-1} \left(\frac{n + 1}{n}\right)^n\right]$ The product in the displayed equation simplifies to $b^{b - 1} / (b - 1)!$, and the base $b$ logarithm of this expression is $(b - 1) - \log_b[(b - 1)!]$.
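As a quick check, in the standard base 10 case the formula gives $\E(N_1) = 9 - \log_{10}(9!) = 9 - \log_{10}(362880) \approx 9 - 5.5598 = 3.4402$, in agreement with the numerical value computed in the exercise below.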
Numerical values of the mean and variance for the standard (base 10) distribution are given in an exercise below.
Open the Special Distribution Simulator and select the Benford first digit distribution. Vary the base $b$ with the input control and note the size and location of the mean $\pm$ standard deviation bar. For various values of $b$, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Since the quantile function has a simple, closed form, the Benford first digit distribution can be simulated via the random quantile method.
Open the random quantile experiment and select the Benford first digit distribution. Vary the base $b$ and note again the shape and location of the probability density function. For selected values of the base, run the experiment 1000 times and compare the empirical density function, mean, and standard deviation to their distributional counterparts.
Higher Digits
Now, to compute the joint probability density function of the first $k$ significant digits, some additional notation will help.
If $n_1 \in \{1, 2, \ldots, b - 1\}$ and $n_j \in \{0, 1, \ldots, b - 1\}$ for $j \in \{2, 3, \ldots, k\}$, let $[n_1 \, n_2 \, \cdots \, n_k]_b = \sum_{j=1}^k n_j b ^{k - j}$
Of course, this is just the base $b$ version of what we do in our standard base 10 system: we represent integers as strings of digits between 0 and 9 (except that the first digit cannot be 0). Here is a base 5 example: $[324]_5 = 3 \cdot 5^2 + 2 \cdot 5^1 + 4 \cdot 5^0 = 89$
The joint probability density function $h_k$ of $(N_1, N_2, \ldots, N_k)$ is given by
$h_k\left(n_1, n_2, \ldots, n_k\right) = \log_b \left( 1 + \frac{1}{[n_1 \, n_2 \, \cdots \, n_k]_b} \right), \quad n_1 \in \{1, 2, \ldots, b - 1\}, \; (n_2, \ldots, n_k) \in \{0, 1, \ldots, b - 1\}^{k-1}$
Proof
Note that $\{N_1 = n_1, N_2 = n_2, \ldots, N_k = n_k\} = \{l \le Y \lt u \}$ where $l = \frac{[n_1 \, n_2 \, \cdots \, n_k]_b}{b^k}, \; u = \frac{[n_1 \, n_2 \, \cdots \, n_k]_b + 1}{b^k}$ Hence using the PDF of $Y$ and properties of logarithms, $h_k\left(n_1, n_2, \ldots, n_k\right) = \int_l^u \frac{1}{y \ln(b)} dy = \log_b(u) - \log_b(l) = \log_b\left([n_1 \, n_2 \, \cdots \, n_k]_b + 1\right) - \log_b\left([n_1 \, n_2 \, \cdots \, n_k]_b\right)$
The probability density function of $(N_1, N_2)$ in the standard (base 10) case is given in an exercise below. Of course, the probability density function of a given digit can be obtained by summing the joint probability density over the unwanted digits in the usual way. However, except for the first digit, these functions do not reduce to simple expressions.
The probability density function $g_2$ of $N_2$ is given by $g_2(n) = \sum_{k=1}^{b-1} \log_b \left(1 + \frac{1}{[k \, n]_b} \right) = \sum_{k=1}^{b-1} \log_b \left(1 + \frac{1}{k \, b + n} \right), \quad n \in \{0, 1, \ldots, b - 1\}$
The probability density function of $N_2$ in the standard (base 10) case is given in an exercise below.
Theoretical Explanation
Aside from the empirical evidence noted by Newcomb and Benford (and many others since), why does Benford's law work? For a theoretical explanation, see the article A Statistical Derivation of the Significant Digit Law by Ted Hill.
Computational Exercises
In the following exercises, suppose that $Y$ has the standard Benford mantissa distribution (the base 10 decimal case), and that $\left(N_1, N_2, \ldots\right)$ are the digits of $Y$.
Find each of the following for the mantissa $Y$
1. The density function $f$.
2. The mean and variance
3. The quartiles
Answer
1. $f(y) = \frac{1}{0.2303 y}, \quad y \in \left[\frac{1}{10}, 1\right)$
2. $\E(Y) = 0.3909$, $\var(Y) = 0.0622$
3. $q_1 = 0.1778$, $q_2 = 0.3162$, $q_3 = 0.5623$
For $N_1$, find each of the following numerically
1. The probability density function
2. The mean and variance
3. The quartiles
Answer
1. $n$ $\P(N_1 = n)$
1 0.3010
2 0.1761
3 0.1249
4 0.0969
5 0.0792
6 0.0669
7 0.0580
8 0.0512
9 0.0458
2. $\E(N_1) = 3.4402$, $\var(N_1) = 6.0567$
3. $q_1 = 1$, $q_2 = 3$, $q_3 = 5$
Explicitly compute the values of the joint probability density function of $(N_1, N_2)$.
Answer
$\P(N_1 = n_1, N_2 = n_2)$ $n_1 = 1$ 2 3 4 5 6 7 8 9
$n_2 = 0$ 0.0414 0.0212 0.0142 0.0107 0.0086 0.0072 0.0062 0.0054 0.0048
1 0.0378 0.0202 0.0138 0.0105 0.0084 0.0071 0.0061 0.0053 0.0047
2 0.0348 0.0193 0.0134 0.0102 0.0083 0.0069 0.0060 0.0053 0.0047
3 0.0322 0.0185 0.0130 0.0100 0.0081 0.0068 0.0059 0.0052 0.0046
4 0.0300 0.0177 0.0126 0.0098 0.0080 0.0067 0.0058 0.0051 0.0046
5 0.0280 0.0170 0.0122 0.0095 0.0078 0.0066 0.0058 0.0051 0.0045
6 0.0263 0.0164 0.0119 0.0093 0.0077 0.0065 0.0057 0.0050 0.0045
7 0.0248 0.0158 0.0116 0.0091 0.0076 0.0064 0.0056 0.0050 0.0045
8 0.0235 0.0152 0.0113 0.0090 0.0074 0.0063 0.0055 0.0049 0.0044
9 0.0223 0.0147 0.0110 0.0088 0.0073 0.0062 0.0055 0.0049 0.0044
For $N_2$, find each of the following numerically
1. The probability density function
2. $\E(N_2)$
3. $\var(N_2)$
Answer
1. $n$ $\P(N_2 = n)$
0 0.1197
1 0.1139
2 0.1088
3 0.1043
4 0.1003
5 0.0967
6 0.0934
7 0.0904
8 0.0876
9 0.0850
2. $\E(N_2) = 4.1874$
3. $\var(N_2) = 8.254$
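These values can be computed directly from the second digit law above; here is a short Python check (standard library only):

```python
from math import log10

# g2(n) = sum over k = 1..9 of log10(1 + 1/(10k + n))
g2 = [sum(log10(1 + 1 / (10 * k + n)) for k in range(1, 10)) for n in range(10)]

mean = sum(n * g2[n] for n in range(10))
var = sum(n ** 2 * g2[n] for n in range(10)) - mean ** 2
print("sum of probabilities:", sum(g2))
print("E(N2) =", mean, "  var(N2) =", var)
```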
Comparing the result for $N_1$ and the result for $N_2$, note that the distribution of $N_2$ is flatter than the distribution of $N_1$. In general, it turns out that the distribution of $N_k$ converges to the uniform distribution on $\{0, 1, \ldots, b - 1\}$ as $k \to \infty$. Interestingly, the digits are dependent.
$N_1$ and $N_2$ are dependent.
Proof
This result follows from the joint PDF, the marginal PDF of $N_1$, and the marginal PDF of $N_2$ above.
Find each of the following.
1. $\P(N_1 = 5, N_2 = 3, N_3 = 1)$
2. $\P(N_1 = 3, N_2 = 1, N_3 = 5)$
3. $\P(N_1 = 1, N_2 = 3, N_3 = 5)$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\Li}{\text{Li}}$
The zeta distribution is used to model the size or ranks of certain types of objects randomly chosen from certain types of populations. Typical examples include the frequency of occurrence of a word randomly chosen from a text, or the population rank of a city randomly chosen from a country. The zeta distribution is also known as the Zipf distribution, in honor of the American linguist George Zipf.
Basic Theory
The Zeta Function
The Riemann zeta function $\zeta$, named after Bernhard Riemann, is defined as follows: $\zeta(a) = \sum_{n=1}^\infty \frac{1}{n^a}, \quad a \in (1, \infty)$
You might recall from calculus that the series in the zeta function converges for $a \gt 1$ and diverges for $a \le 1$.
The zeta function satifies the following properties:
1. $\zeta$ is decreasing.
2. $\zeta$ is concave upward.
3. $\zeta(a) \downarrow 1$ as $a \uparrow \infty$
4. $\zeta(a) \uparrow \infty$ as $a \downarrow 1$
The zeta function is transcendental, and most of its values must be approximated. However, $\zeta(a)$ can be given explicitly for even integer values of $a$; in particular, $\zeta(2) = \frac{\pi^2}{6}$ and $\zeta(4) = \frac{\pi^4}{90}$.
The Probability Density Function
The zeta distribution with shape parameter $a \in (1, \infty)$ is a discrete distribution on $\N_+$ with probability density function $f$ given by. $f(n) = \frac{1}{\zeta(a) n^a}, \quad n \in \N_+$
1. $f$ is decreasing with mode $n = 1$.
2. When smoothed, $f$ is concave upward.
Proof
Clearly $f$ is a valid PDF, since by definition, $\zeta(a)$ is the normalizing constant for the function $n \mapsto \frac{1}{n^a}$ on $\N_+$. Part (a) is clear. For part (b), note that the function $x \mapsto x^{-a}$ on $[1, \infty)$ has a positive second derivative.
Open the special distribution simulator and select the zeta distribution. Vary the shape parameter and note the shape of the probability density function. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function and quantile function do not have simple closed forms, except in terms of other special functions.
Open the special distribution calculator and select the zeta distribution. Vary the parameter and note the shape of the distribution and probability density functions. For selected values of the parameter, compute the median and the first and third quartiles.
Moments
Suppose that $N$ has the zeta distribution with shape parameter $a \in (1, \infty)$. The moments of $N$ can be expressed easily in terms of the zeta function.
If $k \ge a - 1$ then $\E\left(N^k\right) = \infty$. If $k \lt a - 1$ then $\E\left(N^k\right) = \frac{\zeta(a - k)}{\zeta(a)}$
Proof
Note that $\E\left(N^k\right) = \sum_{n=1}^\infty n^k \frac{1}{\zeta(a) n^a} = \frac{1}{\zeta(a)} \sum_{n=1}^\infty \frac{1}{n^{a - k}}$ If $a - k \le 1$, the last sum diverges to $\infty$. If $a - k \gt 1$, the sum converges to $\zeta(a - k)$
The mean and variance of $N$ are as follows:
1. If $a \gt 2$, $\E(N) = \frac{\zeta(a - 1)}{\zeta(a)}$
2. If $a \gt 3$, $\var(N) = \frac{\zeta(a - 2)}{\zeta(a)} - \left(\frac{\zeta(a - 1)}{\zeta(a)}\right)^2$
Open the special distribution simulator and select the zeta distribution. Vary the parameter and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $N$ are as follows:
1. If $a \gt 4$, $\skw(N) = \frac{\zeta(a - 3) \zeta^2(a) - 3 \zeta(a - 1) \zeta(a - 2) \zeta(a) + 2 \zeta^3(a - 1)}{[\zeta(a - 2) \zeta(a) - \zeta^2(a - 1)]^{3/2}}$
2. If $a \gt 5$, $\kur(N) = \frac{\zeta(a - 4) \zeta^3(a) - 4 \zeta(a - 1) \zeta(a - 3) \zeta^2(a) + 6 \zeta^2(a - 1) \zeta(a - 2) \zeta(a) - 3 \zeta^4(a - 1)}{\left[\zeta(a - 2) \zeta(a) - \zeta^2(a - 1)\right]^2}$
Proof
These results follow from the general moment result above and standard computational formulas for skewness and kurtosis.
The probability generating function of $N$ can be expressed in terms of the polylogarithm function $\Li$ that was introduced in the section on the exponential-logarithmic distribution. Recall that the polylogarithm of order $s \in \R$ is defined by $\Li_s(x) = \sum_{k=1}^\infty \frac{x^k}{k^s}, \quad x \in (-1, 1)$
$N$ has probability generating function $P$ given by $P(t) = \E\left(t^N\right) = \frac{\Li_a(t)}{\zeta(a)}, \quad t \in (-1, 1)$
Proof
Note that $\E\left(t^N\right) = \sum_{n=1}^\infty t^n \frac{1}{n^a \zeta(a)} = \frac{1}{\zeta(a)} \sum_{n=1}^\infty \frac{t^n}{n^a}$ The last sum is $\Li_a(t)$.
Related Distributions
In an algebraic sense, the zeta distribution is a discrete version of the Pareto distribution. Recall that if $a \gt 1$, the Pareto distribution with shape parameter $a - 1$ is a continuous distribution on $[1, \infty)$ with probability density function $f(x) = \frac{a - 1}{x^a}, \quad x \in [1, \infty)$
Naturally, the limits of the zeta distribution with respect to the shape parameter $a$ are of interest.
The zeta distribution with shape parameter $a \in (1, \infty)$ converges to point mass at 1 as $a \to \infty$.
Proof
For the PDF $f$ above, note that $f(1) = 1 \big/ \zeta(a) \to 1$ as $a \to \infty$, and for $n \in \{2, 3, \ldots\}$, $f(n) = 1 \big/ \left[n^a \zeta(a)\right] \to 0$ as $a \to \infty$.
Finally, the zeta distribution is a member of the family of general exponential distributions.
Suppose that $N$ has the zeta distribution with parameter $a$. Then the distribution is a one-parameter exponential family with natural parameter $a$ and natural statistic $-\ln N$.
Proof
This follows from the definition of the general exponential distribution, since the zeta PDF can be written in the form $f(n) = \frac{1}{\zeta(a)} \exp(-a \ln n), \quad n \in \N_+$
Computational Exercises
Let $N$ denote the frequency of occurrence of a word chosen at random from a certain text, and suppose that $N$ has the zeta distribution with shape parameter $a = 2$. Find $\P(N \gt 4)$.
Answer
$\P(N \gt 4) = 1 - \frac{205}{24 \pi^2} \approx 0.1345$
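A quick check of this value in Python, using $\zeta(2) = \pi^2 / 6$ and the partial sum $\sum_{n=1}^4 1 / n^2$:

```python
from math import pi

zeta2 = pi ** 2 / 6                                          # zeta(2)
p_at_most_4 = sum(1 / n ** 2 for n in range(1, 5)) / zeta2   # P(N <= 4)
print("P(N > 4) =", 1 - p_at_most_4)
```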
Suppose that $N$ has the zeta distribution with parameter $a = 6$. Approximate each of the following:
1. $\E(N)$
2. $\var(N)$
3. $\skw(N)$
4. $\kur(N)$
Answer
1. $\E(N) \approx 1.019$
2. $\var(N) \approx 0.025$
3. $\skw(N) \approx 11.700$
4. $\kur(N) \approx 309.19$
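These approximations can be reproduced from the zeta-function formulas above; here is a sketch in Python, assuming SciPy is available (scipy.special.zeta evaluates the Riemann zeta function for arguments greater than 1):

```python
from scipy.special import zeta   # Riemann zeta function for arguments > 1

# shape parameter a = 6, so the formulas involve zeta(2), ..., zeta(6)
z6, z5, z4, z3, z2 = (zeta(s) for s in (6, 5, 4, 3, 2))

mean = z5 / z6
var = z4 / z6 - (z5 / z6) ** 2
skew = (z3 * z6 ** 2 - 3 * z5 * z4 * z6 + 2 * z5 ** 3) / (z4 * z6 - z5 ** 2) ** 1.5
kurt = (z2 * z6 ** 3 - 4 * z5 * z3 * z6 ** 2 + 6 * z5 ** 2 * z4 * z6 - 3 * z5 ** 4) / (z4 * z6 - z5 ** 2) ** 2
print(f"E(N) = {mean:.4f}  var(N) = {var:.4f}  skew(N) = {skew:.3f}  kurt(N) = {kurt:.2f}")
```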
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\bs}{\boldsymbol}$
The logarithmic series distribution, as the name suggests, is based on the standard power series expansion of the natural logarithm function. It is also sometimes known more simply as the logarithmic distribution.
Basic Theory
Distribution Functions
The logarithmic series distribution with shape parameter $p \in (0, 1)$ is a discrete distribution on $\N_+$ with probability density function $f$ given by $f(n) = \frac{1}{-\ln(1 - p)} \frac{p^n}{n}, \quad n \in \N_+$
1. $f$ is decreasing with mode $n = 1$.
2. When smoothed, $f$ is concave upward.
Proof
Recall that the standard power series for $-\ln(1 - p)$, obtained by integrating the geometric series $\sum_{n=0}^\infty p^n = 1 \big/ (1 - p)$, is $-\ln(1 - p) = \sum_{n=1}^\infty \frac{p^n}{n}, \quad p \in (0, 1)$ For the properties, consider the function $x \mapsto p^x \big/ x$ on $[1, \infty)$. The first derivative is $\frac{p^x [x \ln(p) - 1]}{x^2}$ which is negative, and the second derivative is $\frac{p^x \left[x^2 \ln^2(p) - 2 x \ln(p) + 2\right]}{x^3}$ which is positive
Open the Special Distribution Simulator and select the logarithmic series distribution. Vary the parameter and note the shape of the probability density function. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function to the probability density function.
The distribution function and the quantile function do not have simple, closed forms in terms of the standard elementary functions.
Open the special distribution calculator and select the logarithmic series distribution. Vary the parameter and note the shape of the distribution and probability density functions. For selected values of the parameters, compute the median and the first and third quartiles.
Moments
Suppose again that random variable $N$ has the logarithmic series distribution with shape parameter $p \in (0, 1)$. Recall that the permutation formula is $n^{(k)} = n (n - 1) \cdots (n - k + 1)$ for $n \in \R$ and $k \in \N$. The factorial moments of $N$ are $\E\left(N^{(k)}\right)$ for $k \in \N$.
The factorial moments of $N$ are given by $\E\left(N^{(k)}\right) = \frac{(k - 1)!}{-\ln(1 - p)} \left(\frac{p}{1 - p}\right)^k, \quad k \in \N_+$
Proof
Recall that a power series can be differentiated term by term within the open interval of convergence. Hence \begin{align} \E\left(N^{(k)}\right) & = \sum_{n=1}^\infty n^{(k)} \frac{1}{-\ln(1 - p)} \frac{p^n}{n} = \frac{p^k}{-\ln(1 - p)} \sum_{n=k}^\infty n^{(k)} \frac{p^{n-k}}{n} \ & = \frac{p^k}{-\ln(1 - p)} \sum_{n=k}^\infty \frac{d^k}{dp^k} \frac{p^n}{n} = \frac{p^k}{-\ln(1 - p)} \frac{d^k}{dp^k} \sum_{n=1}^\infty \frac{p^n}{n} \ & = \frac{p^k}{-\ln(1 - p)} \frac{d^k}{dp^k} [-\ln(1 - p)] = \frac{p^k}{-\ln(1 - p)} (k - 1)! (1 - p)^{-k} \end{align}
The mean and variance of $N$ are
1. $\E(N) = \frac{1}{-\ln(1 - p)} \frac{p}{1 - p}$
2. $\var(N) = \frac{1}{-\ln(1 - p)} \frac{p}{(1 - p)^2} \left[1 - \frac{p}{-\ln(1 - p)} \right]$
Proof
These results follow easily from the factorial moments. For part (b), note first that $\E\left(N^2\right) = \E[N(N - 1)] + \E(N) = \frac{1}{-\ln(1 - p)} \frac{p}{(1 - p)^2}$ The result then follows from the usual computational formula $\var(N) = \E\left(N^2\right) - [\E(N)]^2$.
Open the special distribution simulator and select the logarithmic series distribution. Vary the parameter and note the shape of the mean $\pm$ standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The probability generating function $P$ of $N$ is given by $P(t) = \E\left(t^N\right) = \frac{\ln(1 - p t)}{\ln(1 - p)}, \quad \left|t\right| \lt \frac{1}{p}$
Proof
$P(t) = \sum_{n=1}^\infty t^n \frac{1}{-\ln(1 - p)} \frac{p^n}{n} = \frac{1}{-\ln(1 - p)} \sum_{n=1}^\infty \frac{(p t)^n}{n} = \frac{-\ln(1 - p t)}{-\ln(1 - p)}$
The factorial moments above can also be obtained from the probability generating function, since $P^{(k)}(1) = \E\left(N^{(k)}\right)$ for $k \in \N_+$.
Related Distributions
Naturally, the limits of the logarithmic series distribution with respect to the parameter $p$ are of interest.
The logarithmic series distribution with shape parameter $p \in (0, 1)$ converges to point mass at 1 as $p \downarrow 0$.
Proof
An application of L'Hospital's rule (in the parameter $p$) to the PGF $P$ above shows that $\lim_{p \downarrow 0} P(t) = t$, which is the PGF of point mass at 1.
The logarithmic series distribution is a power series distribution associated with the function $g(p) = -\ln(1 - p)$ for $p \in [0, 1)$.
Proof
This follows from the definition of a power series distribution, since as noted in the PDF proof, $\sum_{n=1}^\infty \frac{p^n}{n} = - \ln(1 - p), \quad p \in [0, 1)$
The moment results above actually follow from general results for power series distributions. The compound Poisson distribution based on the logarithmic series distribution gives a negative binomial distribution.
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent random variables each with the logarithmic series distribution with parameter $p \in (0, 1)$. Suppose also that $N$ is independent of $\bs{X}$ and has the Poisson distribution with rate parameter $r \in (0, \infty)$. Then $Y = \sum_{i = 1}^N X_i$ has the negative binomial distribution on $\N$ with parameters $1 - p$ and $-r \big/\ln(1 - p)$
Proof
The PGF of $Y$ is $Q \circ P$, where $P$ is the PGF of the logarithmic series distribution, and where $Q$ is the PGF of the Poisson distribution so that $Q(s) = e^{r(s - 1)}$ for $s \in \R$. Thus we have $(Q \circ P)(t) = \exp \left(r \left[\frac{\ln(1 - p t)}{\ln(1 - p)} - 1\right]\right), \quad \left|t\right| \lt \frac{1}{p}$ With a little algebra, this can be written in the form $(Q \circ P)(t) = \left(\frac{1 - p}{1 - p t}\right)^{-r / \ln(1 - p)}, \quad \left|t\right| \lt \frac{1}{p}$ which is the PGF of the negative binomial distribution with parameters $1 - p$ and $-r \big/ \ln(1 - p)$.
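Here is a simulation sketch of this result in Python with NumPy; the parameter values, sample size, and seed are arbitrary illustrative choices, and NumPy's logseries sampler is assumed to use the same parametrization as above, with probability function proportional to $p^n / n$. The empirical mean and variance of the compound sum are compared with the standard compound Poisson moment identities $\E(Y) = r \E(X_1)$ and $\var(Y) = r \E(X_1^2)$, evaluated with the moment formulas for the logarithmic series distribution given earlier.

```python
import numpy as np

rng = np.random.default_rng(seed=3)      # arbitrary seed
p, r, n_reps = 0.4, 2.5, 50_000          # illustrative parameters and sample size

# Compound sums: N ~ Poisson(r), each term from the logarithmic series distribution
counts = rng.poisson(r, size=n_reps)
y = np.array([rng.logseries(p, size=n).sum() for n in counts])

c = -1.0 / np.log(1 - p)                 # the constant 1 / (-ln(1 - p))
ex1 = c * p / (1 - p)                    # E(X_1)
ex2 = c * p / (1 - p) ** 2               # E(X_1^2)
print("empirical mean:", y.mean(), "   r * E(X1)  :", r * ex1)
print("empirical var :", y.var(), "   r * E(X1^2):", r * ex2)
```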
Point estimation refers to the process of estimating a parameter from a probability distribution, based on observed data from the distribution. It is one of the core topics in mathematical statistics. In this chapter, we will explore the most common methods of point estimation: the method of moments, the method of maximum likelihood, and Bayes' estimators. We also study important properties of estimators, including sufficiency and completeness, and the basic question of whether an estimator is the best possible one.
06: Random Samples
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The Basic Statistical Model
In the basic statistical model, we have a population of objects of interest. The objects could be persons, families, computer chips, acres of corn. In addition, we have various measurements or variables defined on the objects. We select a sample from the population and record the variables of interest for each object in the sample. Here are a few examples based on the data sets in this project:
• In the M&M data, the objects are bags of M&Ms of a specified size. The variables recorded for a bag of M&Ms are net weight and the counts for red, green, blue, orange, yellow, and brown candies.
• In the cicada data, the objects are cicadas from the middle Tennessee area. The variables recorded for a cicada are body weight, wing length, wing width, body length, gender, and species.
• In Fisher's iris data, the objects are irises. The variables recorded for an iris are petal length, petal width, sepal length, sepal width, and type.
• In the Polio data set, the objects are children. Although many variables were probably recorded for a child, the two crucial variables, both binary, were whether or not the child was vaccinated, and whether or not the child contracted Polio within a certain time period.
• In the Challenger data sets, the objects are Space Shuttle launches. The variables recorded are temperature at the time of launch and various measures of O-ring erosion of the solid rocket boosters.
• In Michelson's data set, the objects are beams of light and the variable recorded is speed.
• In Pearson's data set, the objects are father-son pairs. The variables are the height of the father and the height of the son.
• In Snow's data set, the objects are persons who died of cholera. The variables record the address of the person.
• In one of the SAT data sets, the objects are states and the variables are participation rate, average SAT Math score and average SAT Verbal score.
Thus, the observed outcome of a statistical experiment (the data) has the form $\bs{x} = (x_1, x_2, \ldots, x_n)$ where $x_i$ is the vector of measurements for the $i$th object chosen from the population. The set $S$ of possible values of $\bs{x}$ (before the experiment is conducted) is called the sample space. It is literally the space of samples. Thus, although the outcome of a statistical experiment can have quite a complicated structure (a vector of vectors), the hallmark of mathematical abstraction is the ability to gray out the features that are not relevant at any particular time, to treat a complex structure as a single object. This we do with the outcome $\bs{x}$ of the experiment.
The techniques of statistics have been enormously successful; these techniques are widely used in just about every subject that deals with quantification—the natural sciences, the social sciences, law, and medicine. On the other hand, statistics has a legalistic quality and a great deal of terminology and jargon that can make the subject a bit intimidating at first. In the rest of this section, we begin discussing some of this terminology.
The Empirical Distribution
Suppose again that the data have the form $\bs{x} = (x_1, x_2, \ldots, x_n)$ where $x_i$ is the vector of measurements for the $i$th object chosen. The empirical distribution associated with $\bs{x}$ is the probability distribution that places probability $1/n$ at each $x_i$. Thus, if the values are distinct, the empirical distribution is the discrete uniform distribution on $\{x_1, x_2, \ldots, x_n\}$. More generally, if $x$ occurs $k$ times in the data, then the empirical distribution assigns probability $k / n$ to $x$. Thus, every finite data set defines a probability distribution.
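As a small concrete illustration, here is a minimal sketch in Python (the text itself contains no code, and the sample values are hypothetical) that builds the empirical distribution of a data set as a map from distinct values to proportions.

```python
from collections import Counter
from fractions import Fraction

def empirical_distribution(data):
    """Map each distinct value x to the proportion of observations equal to x."""
    n = len(data)
    return {x: Fraction(count, n) for x, count in Counter(data).items()}

# A hypothetical sample of size 10; the value 2 occurs 4 times, so it gets probability 2/5
x = [3, 1, 2, 0, 2, 4, 3, 2, 1, 2]
print(empirical_distribution(x))   # proportions 1/5, 1/5, 2/5, 1/10, 1/10 for the values 3, 1, 2, 0, 4
```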
Statistics
Technically, a statistic $w = w(\bs{x})$ is an observable function of the outcome $\bs{x}$ of the experiment. That is, a statistic is a computable function defined on the sample space $S$. The term observable means that the function should not contain any unknown quantities, because we need to be able to compute the value $w$ of the statistic from the observed data $\bs{x}$. As with the data $\bs{x}$, a statistic $w$ may have a complicated structure; typically, $w$ is vector valued. Indeed, the outcome $\bs{x}$ of the experiment is itself a statistic; all other statistics are derived from $\bs{x}$.
Statistics $u$ and $v$ are equivalent if there exists a one-to-one function $r$ from the range of $u$ onto the range of $v$ such that $v = r(u)$. Equivalent statistics give equivalent information about $\bs{x}$.
Statistics $u$ and $v$ are equivalent if and only if the following condition holds: for any $\bs{x} \in S$ and $\bs{y} \in S$, $u(\bs{x}) = u(\bs{y})$ if and only if $v(\bs{x}) = v(\bs{y})$.
Equivalence really is an equivalence relation on the collection of statistics for a given statistical experiment. That is, if $u$, $v$, and $w$ are arbitrary statistics then
1. $u$ is equivalent to $u$ (the reflexive property).
2. If $u$ is equivalent to $v$ then $v$ is equivalent to $u$ (the symmetric property).
3. If $u$ is equivalent to $v$ and $v$ is equivalent to $w$ then $u$ is equivalent to $w$ (the transitive property).
Descriptive and Inferential Statistics
There are two broad branches of statistics. The term descriptive statistics refers to methods for summarizing and displaying the observed data $\bs{x}$. As the name suggests, the methods of descriptive statistics usually involve computing various statistics (in the technical sense) that give useful information about the data: measures of center and spread, measures of association, and so forth. In the context of descriptive statistics, the term parameter refers to a characteristic of the entire population.
The deeper and more useful branch of statistics is known as inferential statistics. Our point of view in this branch is that the statistical experiment (before it is conducted) is a random experiment with a probability measure $\P$ on an underlying sample space. Thus, the outcome $\bs{x}$ of the experiment is an observed value of a random variable $\bs{X}$ defined on this probability space, with the distribution of $\bs{X}$ not completely known to us. Our goal is to draw inferences about the distribution of $\bs{X}$ from the observed value $\bs{x}$. Thus, in a sense, inferential statistics is the dual of probability. In probability, we try to predict the value of $\bs{X}$ assuming complete knowledge of the distribution. In statistics, by contrast, we observe the value $\bs{x}$ of the random variable $\bs{X}$ and try to infer information about the underlying distribution of $\bs{X}$. In inferential statistics, a statistic (a function of $\bs{X}$) is itself a random variable with a distribution of its own. On the other hand, the term parameter refers to a characteristic of the distribution of $\bs{X}$. Often the inferential problem is to use various statistics to estimate or test hypotheses about a parameter. Another way to think of inferential statistics is that we are trying to infer from the empirical distribution associated with the observed data $\bs{x}$ to the true distribution associated with $\bs{X}$.
There are two basic types of random experiments in the general area of inferential statistics. A designed experiment, as the name suggests, is carefully designed to study a particular inferential question. The experimenter has considerable control over how the objects are selected, what variables are to be recorded for these objects, and the values of certain of the variables. In an observational study, by contrast, the researcher has little control over these factors. Often the researcher is simply given the data set and asked to make sense out of it. For example, the Polio field trials were designed experiments to study the effectiveness of the Salk vaccine. The researchers had considerable control over how the children were selected, and how the children were assigned to the treatment and control groups. By contrast, the Challenger data sets used to explore the relationship between temperature and O-ring erosion are observational studies. Of course, just because an experiment is designed does not mean that it is well designed.
Difficulties
A number of difficulties can arise when trying to explore an inferential question. Often, problems arise because of confounding variables, which are variables that (as the name suggests) interfere with our understanding of the inferential question. In the first Polio field trial design, for example, age and parental consent are two confounding variables that interfere with the determination of the effectiveness of the vaccine. The entire point of the Berkeley admissions data, to give another example, is to illustrate how a confounding variable (department) can create a spurious correlation between two other variables (gender and admissions status). When we correct for the interference caused by a confounding variable, we say that we have controlled for the variable.
Problems also frequently arise because of measurement errors. Some variables are inherently difficult to measure, and systematic bias in the measurements can interfere with our understanding of the inferential question. The first Polio field trial design again provides a good example. Knowledge of the vaccination status of the children led to systematic bias by doctors attempting to diagnose polio in these children. Measurement errors are sometimes caused by hidden confounding variables.
Confounding variables and measurement errors abound in political polling, where the inferential question is who will win an election. How do confounding variables such as race, income, age, and gender (to name just a few) influence how a person will vote? How do we know that a person will vote for whom she says she will, or if she will vote at all (measurement errors)? The Literary Digest poll in the 1936 presidential election and the professional polls in the 1948 presidential election illustrate these problems.
Confounding variables, measurement errors and other causes often lead to selection bias, which means that the sample does not represent the population with respect to the inferential question at hand. Often randomization is used to overcome the effects of confounding variables and measurement errors.
Random Samples
The most common and important special case of the inferential statistical model occurs when the observation variable
$\bs{X} = (X_1, X_2, \ldots, X_n)$
is a sequence of independent and identically distributed random variables. Again, in the standard sampling model, $X_i$ is itself a vector of measurements for the $i$th object in the sample, and thus, we think of $(X_1, X_2, \ldots, X_n)$ as independent copies of an underlying measurement vector $X$. In this case, $(X_1, X_2, \ldots, X_n)$ is said to be a random sample of size $n$ from the distribution of $X$.
Variables
The mathematical operations that make sense for a variable in a statistical experiment depend on the type and level of measurement of the variable.
Type
Recall that a real variable $x$ is continuous if the possible values form an interval of real numbers. For example, the weight variable in the M&M data set, and the length and width variables in Fisher's iris data are continuous. In contrast, a discrete variable is one whose set of possible values forms a discrete set. For example, the counting variables in the M&M data set, the type variable in Fisher's iris data, and the denomination and suit variables in the card experiment are discrete. Continuous variables represent quantities that can, in theory, be measured to any degree of accuracy. In practice, of course, measuring devices have limited accuracy so data collected from a continuous variable are necessarily discrete. That is, there is only a finite (but perhaps very large) set of possible values that can actually be measured. So, the distinction between a discrete and continuous variable is based on what is theoretically possible, not what is actually measured. Some additional examples may help:
• A person's age is usually given in years. However, one can imagine age being given in months, or weeks, or even (if the time of birth is known to a sufficient accuracy) in seconds. Age, whether of devices or persons, is usually considered to be a continuous variable.
• The price of an item is usually given (in the US) in dollars and cents, and of course, the smallest monetary object in circulation is the penny ($0.01). However, taxes are sometimes given in mills ($0.001), and one can imagine smaller divisions of a dollar, even if there are no coins to represent these divisions. Measures of wealth are usually thought of as continuous variables.
• On the other hand, the number of persons in a car at the time of an accident is a fundamentally discrete variable.
Levels of Measurement
A real variable $x$ is also distinguished by its level of measurement.
Qualitative variables simply encode types or names, and thus few mathematical operations make sense, even if numbers are used for the encoding. Such variables have the nominal level of measurement. For example, the type variable in Fisher's iris data is qualitative. Gender, a common variable in many studies of persons and animals, is also qualitative. Qualitative variables are almost always discrete; it's hard to imagine a continuous infinity of names.
A variable for which only order is meaningful is said to have the ordinal level of measurement; differences are not meaningful even if numbers are used for the encoding. For example, in many card games, the suits are ranked, so the suit variable has the ordinal level of measurement. For another example, consider the standard 5-point scale (terrible, bad, average, good, excellent) used to rank teachers, movies, restaurants etc.
A quantitative variable for which differences, but not ratios, are meaningful is said to have the interval level of measurement. Equivalently, a variable at this level has a relative, rather than absolute, zero value. Typical examples are temperature (in Fahrenheit or Celsius) or time (clock or calendar).
Finally, a quantitative variable for which ratios are meaningful is said to have the ratio level of measurement. A variable at this level has an absolute zero value. The count and weight variables in the M&M data set, and the length and width variables in Fisher's iris data are examples.
Subsamples
In the basic statistical model, subsamples corresponding to some of the variables can be constructed by filtering with respect to other variables. This is particularly common when the filtering variables are qualitative. Consider the cicada data for example. We might be interested in the quantitative variables body weight, body length, wing width, and wing length by species, that is, separately for species 0, 1, and 2. Or, we might be interested in these quantitative variables by gender, that is separately for males and females.
Exercises
Study Michelson's experiment to measure the velocity of light.
1. Is this a designed experiment or an observational study?
2. Classify the velocity of light variable in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Designed experiment
2. Continuous, interval. The level of measurement is only interval because the recorded variable is the speed of light in $\text{km} / \text{s}$ minus $299\,000$ (to make the numbers simpler). The actual speed in $\text{km} / \text{s}$ is a continuous, ratio variable.
Study Cavendish's experiment to measure the density of the earth.
1. Is this a designed experiment or an observational study?
2. Classify the density of earth variable in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Designed experiment
2. Continuous, ratio.
Study Short's experiment to measure the parallax of the sun.
1. Is this a designed experiment or an observational study?
2. Classify the parallax of the sun variable in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Observational study
2. Continuous, ratio.
In the M&M data, classify each variable in terms of type and level of measurement.
Answer
Each color count variable: discrete, ratio; Net weight: continuous, ratio
In the Cicada data, classify each variable in terms of type and level of measurement.
Answer
Body weight, wing length, wing width, body length: continuous, ratio. Gender, type: discrete, nominal
In Fisher's iris data, classify each variable in terms of type and level of measurement.
Answer
Petal width, petal length, sepal width, sepal length: continuous, ratio. Type: discrete, nominal
Study the Challenger experiment to explore the relationship between temperature and O-ring erosion.
1. Is this a designed experiment or an observational study?
2. Classify each variable in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Observational study
2. Temperature: continuous, interval; Erosion: continuous, ratio; Damage index: discrete, ordinal
In the Vietnam draft data, classify each variable in terms of type and level of measurement.
Answer
Birth month: discrete, interval; Birth day: discrete, interval
In the two SAT data sets, classify each variable in terms of type and level of measurement.
Answer
SAT math and verbal scores: probably continuous, ratio; State: discrete, nominal; Year: discrete, interval
Study the Literary Digest experiment to predict the outcome of the 1936 presidential election.
1. Is this a designed experiment or an observational study?
2. Classify each variable in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Designed experiment, although poorly designed
2. State: discrete, nominal; Electoral votes: discrete, ratio; Landon count: discrete, ratio; Roosevelt count: discrete, ratio
Study the 1948 polls to predict the outcome of the presidential election between Truman and Dewey. Are these designed experiments or observational studies?
Answer
Designed experiments, but poorly designed
Study Pearson's experiment to explore the relationship between heights of fathers and heights of sons.
1. Is this a designed experiment or an observational study?
2. Classify each variable in terms of type and level of measurement.
3. Discuss possible confounding variables.
Answer
1. Observational study
2. height of the father: continuous ratio; height of the son: continuous ratio
Study the Polio field trials.
1. Are these designed experiments or observational studies?
2. Identify the essential variables and classify each in terms of type and level of measurement.
3. Discuss possible confounding variables and problems with measurement errors.
Answer
1. Designed experiments
2. Vaccination status: discrete, nominal; Polio status: discrete, nominal
Identify the parameters in each of the following:
1. Buffon's Coin Experiment
2. Buffon's Needle Experiment
3. the Bernoulli trials model
4. the Poisson model
Answer
1. radius of the coin
2. length of the needle
3. probability of success
4. rate of arrivals
Note the parameters for each of the following families of special distributions:
1. the normal distribution
2. the gamma distribution
3. the beta distribution
4. the Pareto distribution
5. the Weibull distribution
Answer
1. mean $\mu$ and standard deviation $\sigma$
2. shape parameter $k$ and scale parameter $b$
3. left parameter $a$ and right parameter $b$
4. shape parameter $a$ and scale parameter $b$
5. shape parameter $k$ and scale parameter $b$
During World War II, the Allies recorded the serial numbers of captured German tanks. Classify the underlying serial number variable by type and level of measurement.
Answer
discrete, ordinal.
For a discussion of how the serial numbers were used to estimate the total number of tanks, see the section on Order Statistics in the chapter on Finite Sampling Models.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Recall the basic model of statistics: we have a population of objects of interest, and we have various measurements (variables) that we make on the objects. We select objects from the population and record the variables for the objects in the sample; these become our data. Our first discussion is from a purely descriptive point of view. That is, we do not assume that the data are generated by an underlying probability distribution. However, recall that the data themselves define a probability distribution.
Definition and Basic Properties
Suppose that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a real-valued variable. The sample mean is simply the arithmetic average of the sample values: $m = \frac{1}{n} \sum_{i=1}^n x_i$
If we want to emphasize the dependence of the mean on the data, we write $m(\bs{x})$ instead of just $m$. Note that $m$ has the same physical units as the underlying variable. For example, if we have a sample of weights of cicadas, in grams, then $m$ is in grams also. The sample mean is frequently used as a measure of center of the data. Indeed, if each $x_i$ is the location of a point mass, then $m$ is the center of mass as defined in physics. In fact, a simple graphical display of the data is the dotplot: on a number line, a dot is placed at $x_i$ for each $i$. If values are repeated, the dots are stacked vertically. The sample mean $m$ is the balance point of the dotplot. The image below shows a dot plot with the mean as the balance point.
The standard notation for the sample mean corresponding to the data $\bs{x}$ is $\bar{x}$. We break with tradition and do not use the bar notation in this text, because it's clunky and because it's inconsistent with the notation for other statistics such as the sample variance, sample standard deviation, and sample covariance. However, you should be aware of the standard notation, since you will undoubtedly see it in other sources.
The following exercises establish a few simple properties of the sample mean. Suppose that $\bs{x} = (x_1, x_2, \ldots, x_n)$ and $\bs{y} = (y_1, y_2, \ldots, y_n)$ are samples of size $n$ from real-valued population variables and that $c$ is a constant. In vector notation, recall that $\bs{x} + \bs{y} = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n)$ and $c \bs{x} = (c x_1, c x_2, \ldots, c x_n)$.
Computing the sample mean is a linear operation.
1. $m(\bs{x} + \bs{y}) = m(\bs{x}) + m(\bs{y})$
2. $m(c \, \bs{x}) = c \, m(\bs{x})$
Proof
1. $m(\bs{x} + \bs{y}) = \frac{1}{n} \sum_{i=1}^n (x_i + y_i) = \frac{1}{n} \sum_{i=1}^n x_i + \frac{1}{n} \sum_{i=1}^n y_i = m(\bs{x}) + m(\bs{y})$
2. $m(c \bs{x}) = \frac{1}{n} \sum_{i=1}^n c x_i = c \frac{1}{n} \sum_{i=1}^n x_i = c m(\bs{x})$
The sample mean preserves order.
1. If $x_i \ge 0$ for each $i$ then $m(\bs{x}) \ge 0$.
2. If $x_i \ge 0$ for each $i$ and $x_j \gt 0$ for some $j$ then $m(\bs{x}) \gt 0$
3. If $x_i \le y_i$ for each $i$ then $m(\bs{x}) \le m(\bs{y})$
4. If $x_i \le y_i$ for each $i$ and $x_j \lt y_j$ for some $j$ then $m(\bs{x}) \lt m(\bs{y})$
Proof
Parts (a) and (b) are obvious from the definition. Part (c) follows from part (a) and the linearity of the sample mean. Specifically, if $\bs{x} \le \bs{y}$ (in the product ordering), then $\bs{y} - \bs{x} \ge 0$. Hence by (a), $m(\bs{y} - \bs{x}) \ge 0$. But $m(\bs{y} - \bs{x}) = m(\bs{y}) - m(\bs{x})$. Hence $m(\bs{y}) \ge m(\bs{x})$. Similarly, (d) follows from (b) and the linearity of the sample mean.
Trivially, the mean of a constant sample is simply the constant.
If $\bs{c} = (c, c, \ldots, c)$ is a constant sample then $m(\bs{c}) = c$.
Proof
Note that
$m(\bs{c}) = \frac{1}{n} \sum_{i=1}^n c = \frac{n c}{n} = c$
As a special case of these results, suppose that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ corresponding to a real variable $x$, and that $a$ and $b$ are constants. Then the sample corresponding to the variable $y = a + b x$, in our vector notation, is $\bs{a} + b \bs{x}$. The sample means are related in precisely the same way, that is, $m(\bs{a} + b \bs{x}) = a + b m(\bs{x})$. Linear transformations of this type, when $b \gt 0$, arise frequently when physical units are changed. In this case, the transformation is often called a location-scale transformation; $a$ is the location parameter and $b$ is the scale parameter. For example, if $x$ is the length of an object in inches, then $y = 2.54 x$ is the length of the object in centimeters. If $x$ is the temperature of an object in degrees Fahrenheit, then $y = \frac{5}{9}(x - 32)$ is the temperature of the object in degree Celsius.
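For instance, the following short Python sketch (illustrative only; the temperature values are hypothetical) checks the location-scale rule $m(\bs{a} + b \bs{x}) = a + b \, m(\bs{x})$ for the Fahrenheit-to-Celsius conversion.

```python
def sample_mean(xs):
    """Arithmetic average of a sequence of numbers."""
    return sum(xs) / len(xs)

# Hypothetical temperatures in degrees Fahrenheit
x = [113.2, 110.8, 115.0, 112.1, 114.4]

# Location-scale transformation to degrees Celsius: y = (5/9)(x - 32)
y = [5 / 9 * (xi - 32) for xi in x]

# The sample mean transforms exactly as the individual values do
print(sample_mean(y))
print(5 / 9 * (sample_mean(x) - 32))
```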
Sample means are ubiquitous in statistics. In the next few paragraphs we will consider a number of special statistics that are based on sample means.
The Empirical Distribution
Suppose now that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a general variable taking values in a set $S$. For $A \subseteq S$, the frequency of $A$ corresponding to $\bs{x}$ is the number of data values that are in $A$: $n(A) = \#\{i \in \{1, 2, \ldots, n\}: x_i \in A\} = \sum_{i=1}^n \bs{1}(x_i \in A)$ The relative frequency of $A$ corresponding to $\bs{x}$ is the proportion of data values that are in $A$: $p(A) = \frac{n(A)}{n} = \frac{1}{n} \, \sum_{i=1}^n \bs{1}(x_i \in A)$ Note that for fixed $A$, $p(A)$ is itself a sample mean, corresponding to the data $\{\bs{1}(x_i \in A): i \in \{1, 2, \ldots, n\}\}$. This fact bears repeating: every sample proportion is a sample mean, corresponding to an indicator variable. In the picture below, the red dots represent the data, so $p(A) = 4/15$.
$p$ is a probability measure on $S$.
1. $p(A) \ge 0$ for every $A \subseteq S$
2. $p(S) = 1$
3. If $\{A_j: j \in J\}$ is a countable collection of pairwise disjoint subsets of $S$ then $p\left(\bigcup_{j \in J} A_j\right) = \sum_{j \in J} p(A_j)$
Proof
Parts (a) and (b) are obvious. For part (c) note that since the sets are disjoint, \begin{align} p\left(\bigcup_{j \in J} A_j\right) & = \frac{1}{n} \sum_{i=1}^n \bs{1}\left(x_i \in \bigcup_{j \in J} A_j\right) = \frac{1}{n} \sum_{i=1}^n \sum_{j \in J} \bs{1}(x_i \in A_j) \\ & = \sum_{j \in J} \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i \in A_j) = \sum_{j \in J} p(A_j) \end{align}
This probability measure is known as the empirical probability distribution associated with the data set $\bs{x}$. It is a discrete distribution that places probability $\frac{1}{n}$ at each point $x_i$. In fact, this observation supplies a simpler proof of the previous theorem. Thus, if the data values are distinct, the empirical distribution is the discrete uniform distribution on $\{x_1, x_2, \ldots, x_n\}$. More generally, if $x \in S$ occurs $k$ times in the data then the empirical distribution assigns probability $k/n$ to $x$.
If the underlying variable is real-valued, then clearly the sample mean is simply the mean of the empirical distribution. It follows that the sample mean satisfies all properties of expected value, not just the linear properties and increasing properties given above. These properties are just the most important ones, and so were repeated for emphasis.
Empirical Density
Suppose now that the population variable $x$ takes values in a set $S \subseteq \R^d$ for some $d \in \N_+$. Recall that the standard measure on $\R^d$ is given by $\lambda_d(A) = \int_A 1 \, dx, \quad A \subseteq \R^d$ In particular $\lambda_1(A)$ is the length of $A$, for $A \subseteq \R$; $\lambda_2(A)$ is the area of $A$, for $A \subseteq \R^2$; and $\lambda_3(A)$ is the volume of $A$, for $A \subseteq \R^3$. Suppose that $x$ is a continuous variable in the sense that $\lambda_d(S) \gt 0$. Typically, $S$ is an interval if $d = 1$ and a Cartesian product of intervals if $d \gt 1$. Now for $A \subseteq S$ with $\lambda_d(A) \gt 0$, the empirical density of $A$ corresponding to $\bs{x}$ is $D(A) = \frac{p(A)}{\lambda_d(A)} = \frac{1}{n \, \lambda_d(A)} \sum_{i=1}^n \bs{1}(x_i \in A)$ Thus, the empirical density of $A$ is the proportion of data values in $A$, divided by the size of $A$. In the picture below (corresponding to $d = 2$), if $A$ has area 5, say, then $D(A) = 4/75$.
The Empirical Distribution Function
Suppose again that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a real-valued variable. For $x \in \R$, let $F(x)$ denote the relative frequency (empirical probability) of $(-\infty, x]$ corresponding to the data set $\bs{x}$. Thus, for each $x \in \R$, $F(x)$ is the sample mean of the data $\{\bs{1}(x_i \le x): i \in \{1, 2, \ldots, n\}\}:$ $F(x) = p\left((-\infty, x]\right) = \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i \le x)$
$F$ is a distribution function.
1. $F$ increases from 0 to 1.
2. $F$ is a step function with jumps at the distinct sample values $\{x_1, x_2, \ldots, x_n\}$.
Proof
Suppose that $(y_1,y_2, \ldots, y_k)$ are the distinct values of the data, ordered from smallest to largest, and that $y_j$ occurs $n_j$ times in the data. Then $F(x) = 0$ for $x \lt y_1$, $F(x) = n_1 / n$ for $y_1 \le x \lt y_2$, $F(x) = (n_1 + n_2)/n$ for $y_2 \le x \lt y_3$, and so forth.
Appropriately enough, $F$ is called the empirical distribution function associated with $\bs{x}$ and is simply the distribution function of the empirical distribution corresponding to $\bs{x}$. If we know the sample size $n$ and the empirical distribution function $F$, we can recover the data, except for the order of the observations. The distinct values of the data are the places where $F$ jumps, and the number of data values at such a point is the size of the jump, times the sample size $n$.
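A minimal Python sketch of this function (illustrative only; the data are hypothetical and plotting is omitted) follows the definition directly.

```python
def empirical_cdf(data):
    """Return the empirical distribution function F, where F(x) is the
    proportion of data values less than or equal to x."""
    n = len(data)
    values = sorted(data)
    def F(x):
        return sum(1 for v in values if v <= x) / n
    return F

# Hypothetical sample of size 10
x = [3, 1, 2, 0, 2, 4, 3, 2, 1, 2]
F = empirical_cdf(x)
print([F(t) for t in (-1, 0, 1, 2, 3, 4, 5)])   # [0.0, 0.1, 0.3, 0.7, 0.9, 1.0, 1.0]
```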
The Empirical Discrete Density Function
Suppose now that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a discrete variable that takes values in a countable set $S$. For $x \in S$, let $f(x)$ be the relative frequency (empirical probability) of $x$ corresponding to the data set $\bs{x}$. Thus, for each $x \in S$, $f(x)$ is the sample mean of the data $\{\bs{1}(x_i = x): i \in \{1, 2, \ldots, n\}\}$: $f(x) = p(\{x\}) = \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i = x)$ In the picture below, the dots are the possible values of the underlying variable. The red dots represent the data, and the numbers indicate repeated values. The blue dots are possible values of the variable that did not happen to occur in the data. So, the sample size is 12, and for the value $x$ that occurs 3 times, we have $f(x) = 3/12$.
$f$ is a discrete probability density function:
1. $f(x) \ge 0$ for $x \in S$
2. $\sum_{x \in S} f(x) = 1$
Proof
Part (a) is obvious. For part (b), note that $\sum_{x \in S} f(x) = \sum_{x \in S} p(\{x\}) = 1$
Appropriately enough, $f$ is called the empirical probability density function or the relative frequency function associated with $\bs{x}$, and is simply the probability density function of the empirical distribution corresponding to $\bs{x}$. If we know the empirical PDF $f$ and the sample size $n$, then we can recover the data set, except for the order of the observations.
If the underlying population variable is real-valued, then the sample mean is the expected value computed relative to the empirical density function. That is, $\frac{1}{n} \sum_{i=1}^n x_i = \sum_{x \in S} x \, f(x)$
Proof
Note that
$\sum_{x \in S} x f(x) = \sum_{x \in S} x \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i = x) = \frac{1}{n}\sum_{i=1}^n \sum_{x \in S} x \bs{1}(x_i = x) = \frac{1}{n} \sum_{i=1}^n x_i$
As we noted earlier, if the population variable is real-valued then the sample mean is the mean of the empirical distribution.
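The identity can be checked with a few lines of Python (illustrative only; the data are the hypothetical sample used in the exercises below).

```python
from collections import Counter

# Hypothetical sample of size 10
x = [3, 1, 2, 0, 2, 4, 3, 2, 1, 2]
n = len(x)

# Empirical probability density (relative frequency) function
f = {value: count / n for value, count in Counter(x).items()}

mean_from_f = sum(value * prob for value, prob in f.items())
mean_direct = sum(x) / n
print(mean_from_f, mean_direct)   # both equal 2.0
```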
The Empirical Continuous Density Function
Suppose now that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a continuous variable that takes values in a set $S \subseteq \R^d$. Let $\mathscr{A} = \{A_j: j \in J\}$ be a partition of $S$ into a countable number of subsets, each of positive, finite measure. Recall that the word partition means that the subsets are pairwise disjoint and their union is $S$. Let $f$ be the function on $S$ defined by the rule that $f(x)$ is the empirical density of $A_j$, corresponding to the data set $\bs{x}$, for each $x \in A_j$. Thus, $f$ is constant on each of the partition sets: $f(x) = D(A_j) = \frac{p(A_j)}{\lambda_d(A_j)} = \frac{1}{n \lambda_d(A_j)} \sum_{i=1}^n \bs{1}(x_i \in A_j), \quad x \in A_j$
$f$ is a continuous probability density function.
1. $f(x) \ge 0$ for $x \in S$
2. $\int_S f(x) \, dx = 1$
Proof
Part (a) is obvious. For part (b) note that since $f$ is constant on $A_j$ for each $j \in J$ we have
$\int_S f(x) \, dx = \sum_{j \in J} \int_{A_j} f(x) \, dx = \sum_{j \in J} \lambda_d(A_j) \frac{p(A_j)}{\lambda_d(A_j)} = \sum_{j \in J} p(A_j) = 1$
The function $f$ is called the empirical probability density function associated with the data $\bs{x}$ and the partition $\mathscr{A}$. For the probability distribution defined by $f$, the empirical probability $p(A_j)$ is uniformly distributed over $A_j$ for each $j \in J$. In the picture below, the red dots represent the data and the black lines define a partition of $S$ into 9 rectangles. For the partition set $A$ in the upper right, the empirical distribution would distribute probability $3/15 = 1/5$ uniformly over $A$. If the area of $A$ is, say, 4, then $f(x) = 1/20$ for $x \in A$.
Unlike the discrete case, we cannot recover the data from the empirical PDF. If we know the sample size, then of course we can determine the number of data points in $A_j$ for each $j$, but not the precise location of these points in $A_j$. For this reason, the mean of the empirical PDF is not in general the same as the sample mean when the underlying variable is real-valued.
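For a real-valued variable partitioned into intervals, the empirical density is easy to compute directly. The Python sketch below (illustrative only; the simulated data, seed, and class boundaries are arbitrary choices) follows the formula above and checks that the resulting density integrates to 1.

```python
import numpy as np

def empirical_density(data, edges):
    """Empirical density on the partition of (edges[0], edges[-1]] into the
    classes (edges[j], edges[j+1]]: relative frequency of each class divided
    by the length of the class."""
    data = np.asarray(data)
    dens = []
    for a, b in zip(edges[:-1], edges[1:]):
        rel_freq = np.mean((data > a) & (data <= b))
        dens.append(rel_freq / (b - a))
    return np.array(dens)

rng = np.random.default_rng(7)
data = rng.uniform(0, 20, size=50)     # simulated continuous data in (0, 20)
edges = [0, 2, 6, 10, 20]
d = empirical_density(data, edges)
print(d)
print(np.sum(d * np.diff(edges)))      # the empirical density integrates to 1
```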
Histograms
Our next discussion is closely related to the previous one. Suppose again that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a variable that takes values in a set $S$ and that $\mathscr{A} = (A_1, A_2, \ldots, A_k)$ is a partition of $S$ into $k$ subsets. The sets in the partition are sometimes known as classes. The underlying variable may be discrete or continuous.
• The mapping that assigns frequencies to classes is known as a frequency distribution for the data set and the given partition.
• The mapping that assigns relative frequencies to classes is known as a relative frequency distribution for the data set and the given partition.
• In the case of a continuous variable, the mapping that assigns densities to classes is known as a density distribution for the data set and the given partition.
In dimensions 1 or 2, the bar graph of any of these distributions is known as a histogram. The histogram of a frequency distribution and the histogram of the corresponding relative frequency distribution look the same, except for a change of scale on the vertical axis. If the classes all have the same size, the corresponding density histogram also looks the same, again except for a change of scale on the vertical axis. If the underlying variable is real-valued, the classes are usually intervals (discrete or continuous) and the midpoints of these intervals are sometimes referred to as class marks.
The whole purpose of constructing a partition and graphing one of these empirical distributions corresponding to the partition is to summarize and display the data in a meaningful way. Thus, there are some general guidelines in choosing the classes:
1. The number of classes should be moderate.
2. If possible, the classes should have the same size.
For highly skewed distributions, classes of different sizes are appropriate, to avoid numerous classes with very small frequencies. For a continuous variable with classes of different sizes, it is essential to use a density histogram, rather than a frequency or relative frequency histogram, otherwise the graphic is visually misleading, and in fact mathematically wrong.
It is important to realize that frequency data is inevitable for a continuous variable. For example, suppose that our variable represents the weight of a bag of M&Ms (in grams) and that our measuring device (a scale) is accurate to 0.01 grams. If we measure the weight of a bag as 50.32, then we are really saying that the weight is in the interval $[50.315, 50.325)$ (or perhaps some other interval, depending on how the measuring device works). Similarly, when two bags have the same measured weight, the apparent equality of the weights is really just an artifact of the imprecision of the measuring device; actually the two bags almost certainly do not have the exact same weight. Thus, two bags with the same measured weight really give us a frequency count of 2 for a certain interval.
Again, there is a trade-off between the number of classes and the size of the classes; these determine the resolution of the empirical distribution corresponding to the partition. At one extreme, when the class size is smaller than the accuracy of the recorded data, each class contains a single datum or no datum. In this case, there is no loss of information and we can recover the original data set from the frequency distribution (except for the order in which the data values were obtained). On the other hand, it can be hard to discern the shape of the data when we have many classes with small frequency. At the other extreme is a frequency distribution with one class that contains all of the possible values of the data set. In this case, all information is lost, except the number of the values in the data set. Between these two extreme cases, an empirical distribution gives us partial information, but not complete information. These intermediate cases can organize the data in a useful way.
Ogives
Suppose now the underlying variable is real-valued and that the set of possible values is partitioned into intervals $(A_1, A_2, \ldots, A_k)$, with the endpoints of the intervals ordered from smallest to largest. Let $n_j$ denote the frequency of class $A_j$, so that $p_j = n_j / n$ is the relative frequency of class $A_j$. Let $t_j$ denote the class mark (midpoint) of class $A_j$. The cumulative frequency of class $A_j$ is $N_j = \sum_{i=1}^j n_i$ and the cumulative relative frequency of class $A_j$ is $P_j = \sum_{i=1}^j p_i = N_j / n$. Note that the cumulative frequencies increase from $n_1$ to $n$ and the cumulative relative frequencies increase from $p_1$ to 1.
• The mapping that assigns cumulative frequencies to classes is known as a cumulative frequency distribution for the data set and the given partition. The polygonal graph that connects the points $(t_j, N_j)$ for $j \in \{1, 2, \ldots, k\}$ is the cumulative frequency ogive.
• The mapping that assigns cumulative relative frequencies to classes is known as a cumulative relative frequency distribution for the data set and the given partition. The polygonal graph that connects the points $(t_j, P_j)$ for $j \in \{1, 2, \ldots, k\}$ is the cumulative relative frequency ogive.
Note that the cumulative relative frequency ogive is simply the graph of the distribution function corresponding to the probability distribution that places probability $p_j$ at $t_j$ for each $j$.
Approximating the Mean
In the setting of the last subsection, suppose that we do not have the actual data $\bs{x}$, but just the frequency distribution. An approximate value of the sample mean is $\frac{1}{n} \sum_{j = 1}^k n_j t_j = \sum_{j = 1}^k p_j t_j$ This approximation is based on the hope that the mean of the data values in each class is close to the midpoint of that class. In fact, the expression on the right is the expected value of the distribution that places probability $p_j$ on class mark $t_j$ for each $j$.
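As a quick check in Python (illustrative only), the frequency distribution from the commuting-distance exercise below, with classes $(0,2]$, $(2,6]$, $(6,10]$, $(10,20]$ and frequencies 6, 16, 18, 10, gives the following approximation to the mean.

```python
# Frequency distribution from the commuting-distance exercise below:
# classes (0,2], (2,6], (6,10], (10,20] with frequencies 6, 16, 18, 10
freqs = [6, 16, 18, 10]
marks = [1, 4, 8, 15]        # class midpoints
n = sum(freqs)

approx_mean = sum(n_j * t_j for n_j, t_j in zip(freqs, marks)) / n
print(approx_mean)           # 7.28
```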
Exercises
Basic Properties
Suppose that $x$ is the temperature (in degrees Fahrenheit) for a certain type of electronic component after 10 hours of operation.
1. Classify $x$ by type and level of measurement.
2. A sample of 30 components has mean 113°. Find the sample mean if the temperature is converted to degrees Celsius. The transformation is $y = \frac{5}{9}(x - 32)$.
Answer
1. continuous, interval
2. 45°
Suppose that $x$ is the length (in inches) of a machined part in a manufacturing process.
1. Classify $x$ by type and level of measurement.
2. A sample of 50 parts has mean 10.0. Find the sample mean if length is measured in centimeters. The transformation is $y = 2.54 x$.
Answer
1. continuous, ratio
2. 25.4
Suppose that $x$ is the number of brothers and $y$ the number of sisters for a person in a certain population. Thus, $z = x + y$ is the number of siblings.
1. Classify the variables by type and level of measurement.
2. For a sample of 100 persons, $m(\bs{x}) = 0.8$ and $m(\bs{y}) = 1.2$. Find $m(\bs{z})$.
Answer
1. discrete, ratio
2. 2.0
Professor Moriarity has a class of 25 students in her section of Stat 101 at Enormous State University (ESU). The mean grade on the first midterm exam was 64 (out of a possible 100 points). Professor Moriarity thinks the grades are a bit low and is considering various transformations for increasing the grades. In each case below give the mean of the transformed grades, or state that there is not enough information.
1. Add 10 points to each grade, so the transformation is $y = x + 10$.
2. Multiply each grade by 1.2, so the transformation is $z = 1.2 x$
3. Use the transformation $w = 10 \sqrt{x}$. Note that this is a non-linear transformation that curves the grades greatly at the low end and very little at the high end. For example, a grade of 100 is still 100, but a grade of 36 is transformed to 60.
One of the students did not study at all, and received a 10 on the midterm. Professor Moriarity considers this score to be an outlier.
4. What would the mean be if this score is omitted?
Answer
1. 74
2. 76.8
3. Not enough information
4. 66.25
Computational Exercises
All statistical software packages will compute means and proportions, draw dotplots and histograms, and in general perform the numerical and graphical procedures discussed in this section. For real statistical experiments, particularly those with large data sets, the use of statistical software is essential. On the other hand, there is some value in performing the computations by hand, with small, artificial data sets, in order to master the concepts and definitions. In this subsection, do the computations and draw the graphs with minimal technological aids.
Suppose that $x$ is the number of math courses completed by an ESU student. A sample of 10 ESU students gives the data $\bs{x} = (3, 1, 2, 0, 2, 4, 3, 2, 1, 2)$.
1. Classify $x$ by type and level of measurement.
2. Sketch the dotplot.
3. Compute the sample mean $m$ from the definition and indicate its location on the dotplot.
4. Find the empirical density function $f$ and sketch the graph.
5. Compute the sample mean $m$ using $f$.
6. Find the empirical distribution function $F$ and sketch the graph.
Answer
1. discrete, ratio
2. 2
3. $f(0) = 1/10$, $f(1) = 2/10$, $f(2) = 4/10$, $f(3) = 2/10$, $f(4) = 1/10$
4. 2
5. $F(x) = 0$ for $x \lt 0$, $F(x) = 1/10$ for $0 \le x \lt 1$, $F(x) = 3/10$ for $1 \le x \lt 2$, $F(x) = 7/10$ for $2 \le x \lt 3$, $F(x) = 9/10$ for $3 \le x \lt 4$, $F(x) = 1$ for $x \ge 4$
Suppose that a sample of size 12 from a discrete variable $x$ has empirical density function given by $f(-2) = 1/12$, $f(-1) = 1/4$, $f(0) = 1/3$, $f(1) = 1/6$, $f(2) = 1/6$.
1. Sketch the graph of $f$.
2. Compute the sample mean $m$ using $f$.
3. Find the empirical distribution function $F$
4. Give the sample values, ordered from smallest to largest.
Answer
1. $1/12$
2. $F(x) = 0$ for $x \lt -2$, $F(x) = 1/12$ for $-2 \le x \lt -1$, $F(x) = 1/3$ for $-1 \le x \lt 0$, $F(x) = 2/3$ for $0 \le x \lt 1$, $F(x) = 5/6$ for $1 \le x \lt 2$, $F(x) = 1$ for $x \ge 2$
3. $(-2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2)$
The following table gives a frequency distribution for the commuting distance to the math/stat building (in miles) for a sample of ESU students.
| Class | Freq | Rel Freq | Density | Cum Freq | Cum Rel Freq | Midpoint |
|---|---|---|---|---|---|---|
| $(0,2]$ | 6 | | | | | |
| $(2,6]$ | 16 | | | | | |
| $(6,10]$ | 18 | | | | | |
| $(10,20]$ | 10 | | | | | |
| Total | | | | | | |
1. Complete the table
2. Sketch the density histogram
3. Sketch the cumulative relative frequency ogive.
4. Compute an approximation to the mean
Answer
1. The completed table:

| Class | Freq | Rel Freq | Density | Cum Freq | Cum Rel Freq | Midpoint |
|---|---|---|---|---|---|---|
| $(0,2]$ | 6 | 0.12 | 0.06 | 6 | 0.12 | 1 |
| $(2,6]$ | 16 | 0.32 | 0.08 | 22 | 0.44 | 4 |
| $(6,10]$ | 18 | 0.36 | 0.09 | 40 | 0.80 | 8 |
| $(10,20]$ | 10 | 0.20 | 0.02 | 50 | 1.00 | 15 |
| Total | 50 | 1.00 | | | | |
2. 7.28
App Exercises
In the interactive histogram, click on the $x$-axis at various points to generate a data set with at least 20 values. Vary the number of classes and switch between the frequency histogram and the relative frequency histogram. Note how the shape of the histogram changes as you perform these operations. Note in particular how the histogram loses resolution as you decrease the number of classes.
In the interactive histogram, click on the axis to generate a distribution of the given type with at least 30 points. Now vary the number of classes and note how the shape of the distribution changes.
1. A uniform distribution
2. A symmetric unimodal distribution
3. A unimodal distribution that is skewed right.
4. A unimodal distribution that is skewed left.
5. A symmetric bimodal distribution
6. A $u$-shaped distribution.
Data Analysis Exercises
Statistical software should be used for the problems in this subsection.
Consider the petal length and species variables in Fisher's iris data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean and plot a density histogram for petal length.
3. Compute the sample mean and plot a density histogram for petal length by species.
Answers
1. petal length: continuous, ratio. species: discrete, nominal
2. $m = 37.8$
3. $m(0) = 14.6, \; m(1) = 55.5, \; m(2) = 43.2$
Consider the erosion variable in the Challenger data set.
1. Classify the variable by type and level of measurement.
2. Compute the mean
3. Plot a density histogram with the classes $[0, 5)$, $[5, 40)$, $[40, 50)$, $[50, 60)$.
Answer
1. continuous, ratio
2. $m = 7.7$
Consider Michelson's velocity of light data.
1. Classify the variable by type and level of measurement.
2. Plot a density histogram.
3. Compute the sample mean.
4. Find the sample mean if the variable is converted to $\text{km}/\text{s}$. The transformation is $y = x + 299\,000$.
Answer
1. continuous, interval
2. $m = 852.4$
3. $m = 299\,852.4$
Consider Short's parallax of the sun data.
1. Classify the variable by type and level of measurement.
2. Plot a density histogram.
3. Compute the sample mean.
4. Find the sample mean if the variable is converted to degrees. There are 3600 seconds in a degree.
5. Find the sample mean if the variable is converted to radians. There are $\pi/180$ radians in a degree.
Answer
1. continuous, ratio
2. 8.616
3. 0.00239
4. 0.0000418
Consider Cavendish's density of the earth data.
1. Classify the variable by type and level of measurement.
2. Compute the sample mean.
3. Plot a density histogram.
Answer
1. continuous, ratio
2. $m = 5.448$
Consider the M&M data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean for each color count variable.
3. Compute the sample mean for the total number of candies, using the results from (b).
4. Plot a relative frequency histogram for the total number of candies.
5. Compute the sample mean and plot a density histogram for the net weight.
Answer
1. color counts: discrete ratio. net weight: continuous ratio.
2. $m(r) = 9.60$, $m(g) = 7.40$, $m(bl) = 7.23$, $m(o) = 6.63$, $m(y) = 13.77$, $m(br) = 12.47$
3. $m(n) = 57.10$
4. $m(w) = 49.215$
Consider the body weight, species, and gender variables in the Cicada data.
1. Classify the variables by type and level of measurement.
2. Compute the relative frequency function for species and plot the graph.
3. Compute the relative frequency function for gender and plot the graph.
4. Compute the sample mean and plot a density histogram for body weight.
5. Compute the sample mean and plot a density histogram for body weight by species.
6. Compute the sample mean and plot a density histogram for body weight by gender.
Answer
1. body weight: continuous, ratio. species: discrete, nominal. gender: discrete, nominal.
2. $f(0) = 0.423$, $f(1) = 0.519$, $f(2) = 0.058$
3. $f(0) = 0.567$, $f(1) = 0.433$
4. $m = 0.180$
5. $m(0) = 0.168, \; m(1) = 0.185, \; m(2) = 0.225$
6. $m(0) = 0.206, \; m(1) = 0.145$
Consider Pearson's height data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean and plot a density histogram for the height of the father.
3. Compute the sample mean and plot a density histogram for the height of the son.
Answer
1. continuous ratio
2. $m(f) = 67.69$
3. $m(s) = 68.68$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
This section continues the discussion of the sample mean from the last section, but we now consider the more interesting setting where the variables are random. Specifically, suppose that we have a basic random experiment with an underlying probability measure $\P$, and that $X$ is a random variable for the experiment. Suppose now that we perform $n$ independent replications of the basic experiment. This defines a new, compound experiment with a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots, X_n)$, each with the same distribution as $X$. Recall that in statistical terms, $\bs{X}$ is a random sample of size $n$ from the distribution of $X$. All of the relevant statistics discussed in the previous section are defined for $\bs{X}$, but of course now these statistics are random variables with distributions of their own. For the most part, we use the notation established previously, except that we follow the usual convention of denoting random variables with capital letters. Of course, the deterministic properties and relations established previously apply as well. When we actually run the experiment and observe the values $\bs{x} = (x_1, x_2, \ldots, x_n)$ of the random variables, then we are precisely in the setting of the previous section.
Suppose now that the basic variable $X$ is real valued, and let $\mu = \E(X)$ denote the expected value of $X$ and $\sigma^2 = \var(X)$ the variance of $X$ (assumed finite). The sample mean is $M = \frac{1}{n} \sum_{i=1}^n X_i$ Often the distribution mean $\mu$ is unknown and the sample mean $M$ is used as an estimator of this unknown parameter.
Moments
The mean and variance of $M$ are
1. $\E(M) = \mu$
2. $\var(M) = \sigma^2 / n$
Proof
1. This follows from the linear property of expected value: $\E(M) = \frac{1}{n} \sum_{i=1}^n \E(X_i) = \frac{1}{n} \sum_{i=1}^n \mu = \frac{1}{n} n \mu = \mu$
2. This follows from basic properties of variance. Recall in particular that the variance of the sum of independent variables is the sum of the variances. $\var(M) = \frac{1}{n^2} \sum_{i=1}^n \var(X_i) = \frac{1}{n^2} \sum_{i=1}^n \sigma^2 = \frac{1}{n^2} n \sigma^2 = \frac{\sigma^2}{n}$
Part (a) means that the sample mean $M$ is an unbiased estimator of the distribution mean $\mu$. Therefore, the variance of $M$ is the mean square error, when $M$ is used as an estimator of $\mu$. Note that the variance of $M$ is an increasing function of the distribution variance and a decreasing function of the sample size. Both of these make intuitive sense if we think of the sample mean $M$ as an estimator of the distribution mean $\mu$. The fact that the mean square error (variance in this case) decreases to 0 as the sample size $n$ increases to $\infty$ means that the sample mean $M$ is a consistent estimator of the distribution mean $\mu$.
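These two moment formulas are easy to check by simulation. The Python sketch below (illustrative only; the exponential sampling distribution, seed, and sample sizes are arbitrary choices) generates many independent samples of size $n$ and compares the mean and variance of the resulting sample means with $\mu$ and $\sigma^2 / n$.

```python
import numpy as np

rng = np.random.default_rng(2023)
n, reps = 25, 200_000
mu, var = 2.0, 4.0    # mean and variance of the exponential distribution with scale 2

# Each row is an independent sample of size n; take the sample mean of each row
samples = rng.exponential(scale=2.0, size=(reps, n))
M = samples.mean(axis=1)

print(M.mean(), mu)          # close to mu: the sample mean is unbiased
print(M.var(), var / n)      # close to sigma^2 / n
```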
Recall that $X_i - M$ is the deviation of $X_i$ from $M$, that is, the directed distance from $M$ to $X_i$. The following theorem states that the sample mean is uncorrelated with each deviation, a result that will be crucial for showing the independence of the sample mean and the sample variance when the sampling distribution is normal.
$M$ and $X_i - M$ are uncorrelated.
Proof
This result follows from simple properties of covariance. Note that $\cov(M, X_i - M) = \cov(M, X_i) - \cov(M, M)$. By independence, $\cov(M, X_i) = \cov\left(\frac{1}{n}\sum_{j=1}^n X_j, X_i\right) = \frac{1}{n} \sum_{j=1}^n \cov(X_j, X_i) = \frac{1}{n} \cov(X_i, X_i) = \frac{1}{n} \var(X_i) = \frac{\sigma^2}{n}$ But by the previous theorem, $\cov(M, M) = \var(M) = \sigma^2 / n$. Hence $\cov(M, X_i - M) = \sigma^2/n - \sigma^2/n = 0$.
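A quick numerical check (Python, illustrative only; the exponential model, seed, and sizes are arbitrary) estimates $\cov(M, X_1 - M)$ across many independent samples; the result is near 0, as the proposition asserts.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 200_000

# Many independent samples of size n from an exponential distribution
samples = rng.exponential(scale=1.0, size=(reps, n))
M = samples.mean(axis=1)
D1 = samples[:, 0] - M       # deviation of the first observation from the sample mean

# Estimated covariance of M and X_1 - M across replications; should be near 0
print(np.cov(M, D1)[0, 1])
```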
The Weak and Strong Laws of Large Numbers
The law of large numbers states that the sample mean converges to the distribution mean as the sample size increases, and is one of the fundamental theorems of probability. There are different versions of the law, depending on the mode of convergence.
Suppose again that $X$ is a real-valued random variable for our basic experiment, with mean $\mu$ and standard deviation $\sigma$ (assumed finite). We repeat the basic experiment indefinitely to create a new, compound experiment with an infinite sequence of independent random variables $(X_1, X_2, \ldots)$, each with the same distribution as $X$. In statistical terms, we are sampling from the distribution of $X$. In probabilistic terms, we have an independent, identically distributed (IID) sequence. For each $n$, let $M_n$ denote the sample mean of the first $n$ sample variables: $M_n = \frac{1}{n} \sum_{i=1}^n X_i$ From the result above on variance, note that $\var(M_n) = \E\left[\left(M_n - \mu\right)^2\right] \to 0$ as $n \to \infty$. This means that $M_n \to \mu$ as $n \to \infty$ in mean square. As stated in the next theorem, $M_n \to \mu$ as $n \to \infty$ in probability as well.
$\P\left(\left|M_n - \mu\right| \gt \epsilon\right) \to 0$ as $n \to \infty$ for every $\epsilon \gt 0$.
Proof
This follows from Chebyshev's inequality: $\P\left(\left|M_n - \mu\right| \gt \epsilon\right) \le \frac{\var(M_n)}{\epsilon^2} = \frac{\sigma^2}{n \epsilon^2} \to 0 \text{ as } n \to \infty$
Recall that in general, convergence in mean square implies convergence in probability. The convergence of the sample mean to the distribution mean in mean square and in probability are known as weak laws of large numbers.
Finally, the strong law of large numbers states that the sample mean $M_n$ converges to the distribution mean $\mu$ with probability 1. As the name suggests, this is a much stronger result than the weak laws. We will need some additional notation for the proof. First let $Y_n = \sum_{i=1}^n X_i$ so that $M_n = Y_n / n$. Next, recall the definitions of the positive and negative parts of a real number $x$: $x^+ = \max\{x, 0\}$, $x^- = \max\{-x, 0\}$. Note that $x^+ \ge 0$, $x^- \ge 0$, $x = x^+ - x^-$, and $|x| = x^+ + x^-$.
$M_n \to \mu$ as $n \to \infty$ with probability 1.
Proof
The proof is in three major steps. The first step is to show that with probability 1, $M_{n^2} \to \mu$ as $n \to \infty$. From Chebyshev's inequality, $\P\left(\left|M_{n^2} - \mu\right| \gt \epsilon\right) \le \sigma^2 \big/ n^2 \epsilon^2$ for every $n \in \N_+$ and every $\epsilon \gt 0$. Since $\sum_{n=1}^\infty \sigma^2 \big/ n^2 \epsilon^2 \lt \infty$, it follows from the first Borel-Cantelli lemma that for every $\epsilon \gt 0$, $\P\left(\left|M_{n^2} - \mu \right| \gt \epsilon \text{ for infinitely many } n \in \N_+\right) = 0$ Next, from Boole's inequality it follows that $\P\left(\text{For some rational } \epsilon \gt 0, \left|M_{n^2} - \mu\right| \gt \epsilon \text{ for infinitely many } n \in \N_+\right) = 0$ This is equivalent to the statement that $M_{n^2} \to \mu$ as $n \to \infty$ with probability 1.
For our next step, we will show that if the underlying sampling variable is nonnegative, so that $\P(X \ge 0) = 1$, then $M_n \to \mu$ as $n \to \infty$ with probability 1. Note first that with probability 1, $Y_n$ is increasing in $n$. For $n \in \N_+$, let $k_n$ be the unique positive integer such that $k_n^2 \le n \lt (k_n + 1)^2$. From the increasing property and simple algebra, it follows that with probability 1, $\frac{Y_{k_n^2}}{(k_n + 1)^2} \le \frac{Y_n}{n} \le \frac{Y_{(k_n + 1)^2}}{k_n^2}$ From our first step, with probability 1, $\frac{Y_{k_n^2}}{(k_n + 1)^2} = \frac{Y_{k_n^2}}{k_n^2} \frac{k_n^2}{(k_n+1)^2} \to \mu \text{ as } n \to \infty$ Similarly, with probability 1, $\frac{Y_{(k_n+1)^2}}{k_n^2} = \frac{Y_{(k_n+1)^2}}{(k_n+1)^2} \frac{(k_n+1)^2}{k_n^2} \to \mu \text{ as } n \to \infty$ Finally, by the squeeze theorem for limits it follows that with probability 1, $M_n = Y_n / n \to \mu$ as $n \to \infty$.
Finally we relax the condition that the underlying sampling variable $X$ is nonnegative. From step two, it follows that $\frac{1}{n} \sum_{i=1}^n X_i^+ \to \E\left(X^+\right)$ as $n \to \infty$ with probability 1, and $\frac{1}{n} \sum_{i=1}^n X_i^- \to \E\left(X^-\right)$ as $n \to \infty$ with probability 1. Now from algebra and the linearity of expected value, with probability 1, $\frac{1}{n} \sum_{i=1}^n X_i = \frac{1}{n}\sum_{i=1}^n \left(X_i^+ - X_i^-\right) = \frac{1}{n} \sum_{i=1}^n X_i^+ - \frac{1}{n} \sum_{i=1}^n X_i^- \to \E\left(X^+\right) - \E\left(X^-\right) = \E\left(X^+ - X^-\right) = \E(X) \text{ as } n \to \infty$
The proof of the strong law of large numbers given above requires that the variance of the sampling distribution be finite (note that this is critical in the first step). However, there are better proofs that only require that $\E\left(\left|X\right|\right) \lt \infty$. An elegant proof showing that $M_n \to \mu$ as $n \to \infty$ with probability 1 and in mean, using backwards martingales, is given in the chapter on martingales. In the next few paragraphs, we apply the law of large numbers to some of the special statistics studied in the previous section.
Empirical Probability
Suppose that $X$ is the outcome random variable for a basic experiment, with sample space $S$ and probability measure $\P$. Now suppose that we repeat the basic experiment indefinitely to form a sequence of independent random variables $(X_1, X_2, \ldots)$, each with the same distribution as $X$. That is, we sample from the distribution of $X$. For $A \subseteq S$, let $P_n(A)$ denote the empirical probability of $A$ corresponding to the sample $(X_1, X_2, \ldots, X_n)$: $P_n(A) = \frac{1}{n} \sum_{i=1}^n \bs{1}(X_i \in A)$ Now of course, $P_n(A)$ is a random variable for each event $A$. In fact, the sum $\sum_{i=1}^n \bs{1}(X_i \in A)$ has the binomial distribution with parameters $n$ and $\P(A)$.
For each event $A$,
1. $\E\left[P_n(A)\right] = \P(A)$
2. $\var\left[P_n(A)\right] = \frac{1}{n} \P(A) \left[1 - \P(A)\right]$
3. $P_n(A) \to \P(A)$ as $n \to \infty$ with probability 1.
Proof
These results follow from the results of this section, since $P_n(A)$ is the sample mean for the random sample $\{\bs{1}(X_i \in A): i \in \{1, 2, \ldots, n\}\}$ from the distribution of $\bs{1}(X \in A)$.
This special case of the law of large numbers is central to the very concept of probability: the relative frequency of an event converges to the probability of the event as the experiment is repeated.
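As a small computational illustration (not part of the text), the following Python sketch tracks the empirical probability of an assumed event $A$: a fair die shows 1 or 6, so that $\P(A) = 1/3$.

```python
# Relative frequency of an event converging to its probability (illustrative sketch).
# Assumed event A: a fair die shows 1 or 6, so P(A) = 1/3.
import numpy as np

rng = np.random.default_rng(1)
rolls = rng.integers(1, 7, size=100_000)                 # simulated fair die scores
hits = np.isin(rolls, [1, 6]).astype(float)              # indicator variables 1(X_i in A)
P_n = np.cumsum(hits) / np.arange(1, hits.size + 1)      # empirical probability after n trials

for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:6d}  P_n(A) = {P_n[n - 1]:.4f}  (P(A) = {1/3:.4f})")
```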
The Empirical Distribution Function
Suppose now that $X$ is a real-valued random variable for a basic experiment. Recall that the distribution function of $X$ is the function $F$ given by $F(x) = \P(X \le x), \quad x \in \R$ Now suppose that we repeat the basic experiment indefinitely to form a sequence of independent random variables $(X_1, X_2, \ldots)$, each with the same distribution as $X$. That is, we sample from the distribution of $X$. Let $F_n$ denote the empirical distribution function corresponding to the sample $(X_1, X_2, \ldots, X_n)$: $F_n(x) = \frac{1}{n} \sum_{i=1}^n \bs{1}(X_i \le x), \quad x \in \R$ Now, of course, $F_n(x)$ is a random variable for each $x \in \R$. In fact, the sum $\sum_{i=1}^n \bs{1}(X_i \le x)$ has the binomial distribution with parameters $n$ and $F(x)$.
For each $x \in \R$,
1. $\E\left[F_n(x)\right] = F(x)$
2. $\var\left[F_n(x)\right] = \frac{1}{n} F(x) \left[1 - F(x)\right]$
3. $F_n(x) \to F(x)$ as $n \to \infty$ with probability 1.
Proof
These results follow immediately from the results in this section, since $F_n(x)$ is the sample mean for the random sample $\{\bs{1}(X_i \le x): i \in \{1, 2, \ldots, n\}\}$ from the distribution of $\bs{1}(X \le x)$.
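The same idea can be illustrated numerically. In the sketch below, the exponential sampling distribution (so that $F(x) = 1 - e^{-x}$ for $x \ge 0$), the sample size, and the evaluation points are arbitrary assumptions for the demonstration.

```python
# Empirical distribution function versus the true distribution function (illustrative sketch).
# Assumed sampling distribution: exponential with mean 1, so F(x) = 1 - exp(-x) for x >= 0.
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(scale=1.0, size=5_000)

def F_n(x):
    # proportion of sample values that are at most x
    return np.mean(sample <= x)

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}  F_n(x) = {F_n(x):.4f}  F(x) = {1 - np.exp(-x):.4f}")
```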
Empirical Density for a Discrete Variable
Suppose now that $X$ is a random variable for a basic experiment with a discrete distribution on a countable set $S$. Recall that the probability density function of $X$ is the function $f$ given by $f(x) = \P(X = x), \quad x \in S$ Now suppose that we repeat the basic experiment to form a sequence of independent random variables $(X_1, X_2, \ldots)$ each with the same distribution as $X$. That is, we sample from the distribution of $X$. Let $f_n$ denote the empirical probability density function corresponding to the sample $(X_1, X_2, \ldots, X_n)$: $f_n(x) = \frac{1}{n} \sum_{i=1}^n \bs{1}(X_i = x), \quad x \in S$ Now, of course, $f_n(x)$ is a random variable for each $x \in S$. In fact, the sum $\sum_{i=1}^n \bs{1}(X_i = x)$ has the binomial distribution with parameters $n$ and $f(x)$.
For each $x \in S$,
1. $\E\left[f_n(x)\right] = f(x)$
2. $\var\left[f_n(x)\right] = \frac{1}{n} f(x) \left[1 - f(x)\right]$
3. $f_n(x) \to f(x)$ as $n \to \infty$ with probability 1.
Proof
These results follow immediately from the results in this section, since $f_n(x)$ is the sample mean for the random sample $\{\bs{1}(X_i = x): i \in \{1, 2, \ldots, n\}\}$ from the distribution of $\bs{1}(X = x)$.
Recall that a countable intersection of events with probability 1 still has probability 1. Thus, in the context of the previous theorem, we actually have $\P\left[f_n(x) \to f(x) \text{ as } n \to \infty \text{ for every } x \in S\right] = 1$
Empirical Density for a Continuous Variable
Suppose now that $X$ is a random variable for a basic experiment, with a continuous distribution on $S \subseteq \R^d$, and that $X$ has probability density function $f$. Technically, $f$ is the probability density function with respect to the standard (Lebsesgue) measure $\lambda_d$. Thus, by definition, $\P(X \in A) = \int_A f(x) \, dx, \quad A \subseteq S$ Again we repeat the basic experiment to generate a sequence of independent random variables $(X_1, X_2, \ldots)$ each with the same distribution as $X$. That is, we sample from the distribution of $X$. Suppose now that $\mathscr{A} = \{A_j: j \in J\}$ is a partition of $S$ into a countable number of subsets, each with positive, finite size. Let $f_n$ denote the empirical probability density function corresponding to the sample $(X_1, X_2, \ldots, X_n)$ and the partition $\mathscr{A}$: $f_n(x) = \frac{P_n(A_j)}{\lambda_d(A_j)} = \frac{1}{n \, \lambda_d(A_j)} \sum_{i=1}^n \bs{1}(X_i \in A_j); \quad j \in J, \; x \in A_j$ Of course now, $f_n(x)$ is a random variable for each $x \in S$. If the partition is sufficiently fine (so that $\lambda_d(A_j)$ is small for each $j$), and if the sample size $n$ is sufficiently large, then by the law of large numbers, $f_n(x) \approx f(x), \quad x \in S$
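The construction can be illustrated with a quick sketch. The exponential sampling distribution and the partition of $[0, 5]$ into intervals of equal length are arbitrary choices for the demonstration.

```python
# Empirical density from a partition (a histogram), compared with the true density.
# Assumed sampling distribution: standard exponential, f(x) = exp(-x) for x >= 0.
import numpy as np

rng = np.random.default_rng(3)
sample = rng.exponential(scale=1.0, size=20_000)

edges = np.linspace(0.0, 5.0, 26)                        # partition of [0, 5] into 25 equal classes
counts, _ = np.histogram(sample, bins=edges)
f_n = counts / (sample.size * np.diff(edges))            # P_n(A_j) / lambda(A_j)

mids = (edges[:-1] + edges[1:]) / 2                      # class marks
for j in (0, 5, 10, 15):
    print(f"x about {mids[j]:.1f}  f_n = {f_n[j]:.3f}  f = {np.exp(-mids[j]):.3f}")
```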
Exercises
Simulation Exercises
In the dice experiment, recall that the dice scores form a random sample from the specified die distribution. Select the average random variable, which is the sample mean of the sample of dice scores. For each die distribution, start with 1 die and increase the sample size $n$. Note how the distribution of the sample mean begins to resemble a point mass distribution. Note also that the mean of the sample mean stays the same, but the standard deviation of the sample mean decreases. For selected values of $n$ and selected die distributions, run the simulation 1000 times and compare the relative frequency function of the sample mean to the true probability density function, and compare the empirical moments of the sample mean to the true moments.
Several apps in this project are simulations of random experiments with events of interest. When you run the experiment, you are performing independent replications of the experiment. In most cases, the app displays the probability of the event and its complement, both graphically in blue and numerically in a table. When you run the experiment, the relative frequencies are shown graphically in red and also numerically.
In the simulation of Buffon's coin experiment, the event of interest is that the coin crosses a crack. For various values of the parameter (the radius of the coin), run the experiment 1000 times and compare the relative frequency of the event to the true probability.
In the simulation of Bertrand's experiment, the event of interest is that a random chord on a circle will be longer than the length of a side of the inscribed equilateral triangle. For each of the various models, run the experiment 1000 times and compare the relative frequency of the event to the true probability.
Many of the apps in this project are simulations of experiments which result in discrete variables. When you run the simulation, you are performing independent replications of the experiment. In most cases, the app displays the true probability density function numerically in a table and visually as a blue bar graph. When you run the simulation, the relative frequency function is also shown numerically in the table and visually as a red bar graph.
In the simulation of the binomial coin experiment, select the number of heads. For selected values of the parameters, run the simulation 1000 times and compare the sample mean to the distribution mean, and compare the empirical density function to the probability density function.
In the simulation of the matching experiment, the random variable is the number of matches. For selected values of the parameter, run the simulation 1000 times and compare the sample mean and the distribution mean, and compare the empirical density function to the probability density function.
In the poker experiment, the random variable is the type of hand. Run the simulation 1000 times and compare the empirical density function to the true probability density function.
Many of the apps in this project are simulations of experiments which result in variables with continuous distributions. When you run the simulation, you are performing independent replications of the experiment. In most cases, the app displays the true probability density function visually as a blue graph. When you run the simulation, an empirical density function, based on a partition, is also shown visually as a red bar graph.
In the simulation of the gamma experiment, the random variable represents a random arrival time. For selected values of the parameters, run the experiment 1000 times and compare the sample mean to the distribution mean, and compare the empirical density function to the probability density function.
In the special distribution simulator, select the normal distribution. For various values of the parameters (the mean and standard deviation), run the experiment 1000 times and compare the sample mean to the distribution mean, and compare the empirical density function to the probability density function.
Probability Exercises
Suppose that $X$ has probability density function $f(x) = 12 x^2 (1 - x)$ for $0 \le x \le 1$. The distribution of $X$ is a member of the beta family. Compute each of the following
1. $\E(X)$
2. $\var(X)$
3. $\P\left(X \le \frac{1}{2}\right)$
Answer
1. $\frac{3}{5}$
2. $\frac{1}{25}$
3. $\frac{5}{16}$
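As a check on the computations (not part of the original exercise), the integrals can be approximated numerically; the sketch below uses a simple Riemann sum over a fine partition.

```python
# Numerical check of the moments and probability in the exercise above (a sketch).
import numpy as np

dx = 1e-5
x = np.arange(0.0, 1.0, dx) + dx / 2                     # midpoints of a fine partition of [0, 1]
f = 12 * x**2 * (1 - x)                                  # the given probability density function

EX = np.sum(x * f) * dx                                  # E(X), should be close to 3/5
EX2 = np.sum(x**2 * f) * dx
var = EX2 - EX**2                                        # var(X), should be close to 1/25
P = np.sum(f[x <= 0.5]) * dx                             # P(X <= 1/2), should be close to 5/16

print(round(EX, 4), round(var, 4), round(P, 4))
```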
Suppose now that $(X_1, X_2, \ldots, X_9)$ is a random sample of size 9 from the distribution in the previous problem. Find the expected value and variance of each of the following random variables:
1. The sample mean $M$
2. The empirical probability $P\left(\left[0, \frac{1}{2}\right]\right)$
Answer
1. $\frac{3}{5}, \; \frac{1}{225}$
2. $\frac{5}{16}, \; \frac{55}{2304}$
Suppose that $X$ has probability density function $f(x) = \frac{3}{x^4}$ for $1 \le x \lt \infty$. The distribution of $X$ is a member of the Pareto family. Compute each of the following
1. $\E(X)$
2. $\var(X)$
3. $\P(2 \le X \le 3)$
Answer
1. $\frac{3}{2}$
2. $\frac{3}{4}$
3. $\frac{19}{216}$
Suppose now that $\left(X_1, X_2, \ldots, X_{16}\right)$ is a random sample of size 16 from the distribution in the previous problem. Find the expected value and variance of each of the following random variables:
1. The sample mean $M$
2. The empirical probability $P\left([2, 3]\right)$
Answer
1. $\frac{3}{2}, \; \frac{3}{64}$
2. $\frac{19}{216}, \; \frac{3743}{746\,496}$
Recall that for an ace-six flat die, faces 1 and 6 have probability $\frac{1}{4}$ each, while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each. Let $X$ denote the score when an ace-six flat die is thrown. Compute each of the following:
1. The probability density function $f(x)$ for $x \in \{1, 2, 3, 4, 5, 6\}$
2. The distribution function $F(x)$ for $x \in \{1, 2, 3, 4, 5, 6\}$
3. $\E(X)$
4. $\var(X)$
Answer
1. $f(x) = \frac{1}{4}, \; x \in \{1, 6\}; \quad f(x) = \frac{1}{8}, \; x \in \{2, 3, 4, 5\}$
2. $F(1) = \frac{1}{4}, \; F(2) = \frac{3}{8}, \; F(3) = \frac{1}{2}, \; F(4) = \frac{5}{8}, \; F(5) = \frac{3}{4}, \; F(6) = 1$
3. $\E(X) = \frac{7}{2}$
4. $\var(X) = \frac{15}{4}$
Suppose now that an ace-six flat die is thrown $n$ times. Find the expected value and variance of each of the following random variables:
1. The empirical probability density function $f_n(x)$ for $x \in \{1, 2, 3, 4, 5, 6\}$
2. The empirical distribution function $F_n(x)$ for $x \in \{1, 2, 3, 4, 5, 6\}$
3. The average score $M$
Answer
1. $\E[f_n(x)] = \frac{1}{4}, \; x \in \{1, 6\}; \quad \E[f_n(x)] = \frac{1}{8}, \; x \in \{2, 3, 4, 5\}$
2. $\var[f_n(x)] = \frac{3}{16 n}, \; x \in \{1, 6\}; \quad \var[f_n(x)] = \frac{7}{64 n}, \; x \in \{2, 3, 4, 5\}$
3. $\E[F_n(1)] = \frac{1}{4}, \; \E[F_n(2)] = \frac{3}{8}, \; \E[F_n(3)] = \frac{1}{2}, \; \E[F_n(4)] = \frac{5}{8}, \; \E[F_n(5)] = \frac{3}{4}, \; \E[F_n(6)] = 1$
4. $\var[F_n(1)] = \frac{3}{16 n}, \; \var[F_n(2)] = \frac{15}{64 n}, \; \var[F_n(3)] = \frac{1}{4 n}, \; \var[F_n(4)] = \frac{15}{64 n}, \; \var[F_n(5)] = \frac{3}{16 n}, \; \var[F_n(6)] = 0$
5. $\E(M) = \frac{7}{2}, \; \var(M) = \frac{15}{4 n}$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bs}{\boldsymbol}$
The central limit theorem and the law of large numbers are the two fundamental theorems of probability. Roughly, the central limit theorem states that the distribution of the sum (or average) of a large number of independent, identically distributed variables will be approximately normal, regardless of the underlying distribution. The importance of the central limit theorem is hard to overstate; indeed it is the reason that many statistical procedures work.
Partial Sum Processes
Definitions
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent, identically distributed, real-valued random variables with common probability density function $f$, mean $\mu$, and variance $\sigma^2$. We assume that $0 \lt \sigma \lt \infty$, so that in particular, the random variables really are random and not constants. Let $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ Note that by convention, $Y_0 = 0$, since the sum is over an empty index set. The random process $\bs{Y} = (Y_0, Y_1, Y_2, \ldots)$ is called the partial sum process associated with $\bs{X}$. Special types of partial sum processes have been studied in many places in this text; in particular see
• the binomial distribution in the setting of Bernoulli trials
• the negative binomial distribution in the setting of Bernoulli trials
• the gamma distribution in the Poisson process
• the arrival times in a general renewal process
Recall that in statistical terms, the sequence $\bs{X}$ corresponds to sampling from the underlying distribution. In particular, $(X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution, and the corresponding sample mean is $M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ By the law of large numbers, $M_n \to \mu$ as $n \to \infty$ with probability 1.
Stationary, Independent Increments
The partial sum process corresponding to a sequence of independent, identically distributed variables has two important properties, and these properties essentially characterize such processes.
If $m \le n$ then $Y_n - Y_m$ has the same distribution as $Y_{n-m}$. Thus the process $\bs{Y}$ has stationary increments.
Proof
Note that $Y_n - Y_m = \sum_{i=m+1}^n X_i$ and is the sum of $n - m$ independent variables, each with the common distribution. Of course, $Y_{n-m}$ is also the sum of $n - m$ independent variables, each with the common distribution.
Note however that $Y_n - Y_m$ and $Y_{n-m}$ are very different random variables; the theorem simply states that they have the same distribution.
If $n_1 \le n_2 \le n_3 \le \cdots$ then $\left(Y_{n_1}, Y_{n_2} - Y_{n_1}, Y_{n_3} - Y_{n_2}, \ldots\right)$ is a sequence of independent random variables. Thus the process $\bs{Y}$ has independent increments.
Proof
The terms in the sequence of increments $\left(Y_{n_1}, Y_{n_2} - Y_{n_1}, Y_{n_3} - Y_{n_2}, \ldots\right)$ are sums over disjoint collections of terms in the sequence $\bs{X}$. Since the sequence $\bs{X}$ is independent, so is the sequence of increments.
Conversely, suppose that $\bs{V} = (V_0, V_1, V_2, \ldots)$ is a random process with stationary, independent increments. Define $U_i = V_i - V_{i-1}$ for $i \in \N_+$. Then $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent, identically distributed variables and $\bs{V}$ is the partial sum process associated with $\bs{U}$.
Thus, partial sum processes are the only discrete-time random processes that have stationary, independent increments. An interesting, and much harder problem, is to characterize the continuous-time processes that have stationary independent increments. The Poisson counting process has stationary independent increments, as does the Brownian motion process.
Moments
If $n \in \N$ then
1. $\E(Y_n) = n \mu$
2. $\var(Y_n) = n \sigma^2$
Proof
The results follow from basic properties of expected value and variance. Expected value is a linear operation so $\E(Y_n) = \sum_{i=1}^n \E(X_i) = n \mu$. By independence, $\var(Y_n) = \sum_{i=1}^n \var(X_i) = n \sigma^2$.
If $n \in \N_+$ and $m \in \N$ with $m \le n$ then
1. $\cov(Y_m, Y_n) = m \sigma^2$
2. $\cor(Y_m, Y_n) = \sqrt{\frac{m}{n}}$
3. $\E(Y_m Y_n) = m \sigma^2 + m n \mu^2$
Proof
1. Note that $Y_n = Y_m + (Y_n - Y_m)$. The result then follows from basic properties of covariance and the stationary, independent increments properties above: $\cov(Y_m, Y_n) = \cov(Y_m, Y_m) + \cov(Y_m, Y_n - Y_m) = \var(Y_m) + 0 = m \sigma^2$
2. This result follows from part (a) and the moment results above: $\cor(Y_m, Y_n) = \frac{\cov(Y_m, Y_n)}{\sd(Y_m) \sd(Y_n)} = \frac{m \sigma^2}{\sqrt{m \sigma^2} \sqrt{n \sigma^2}} = \sqrt{\frac{m}{n}}$
3. This result also follows from part (a) and the moment results above: $\E(Y_m Y_n) = \cov(Y_m, Y_n) + \E(Y_m) \E(Y_n) = m \sigma^2 + m n \mu^2$
If $X$ has moment generating function $G$ then $Y_n$ has moment generating function $G^n$.
Proof
This follows from a basic property of generating functions: the generating function of a sum of independent variables is the product of the generating functions of the terms.
Distributions
Suppose that $X$ has either a discrete distribution or a continuous distribution with probability density function $f$. Then the probability density function of $Y_n$ is $f^{*n} = f * f * \cdots * f$, the convolution power of $f$ of order $n$.
Proof
This follows from a basic property of PDFs: the pdf of a sum of independent variables is the convolution of the PDFs of the terms.
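For a discrete distribution on the integers, the convolution power can be computed directly. The sketch below uses fair die scores as an assumed example and computes the probability density functions of $Y_2$ and $Y_3$.

```python
# Convolution powers for the partial sums of fair die scores (illustrative sketch).
import numpy as np

f = np.full(6, 1 / 6)                                    # pdf of one die score on {1, 2, ..., 6}
f2 = np.convolve(f, f)                                   # pdf of Y_2, supported on {2, ..., 12}
f3 = np.convolve(f2, f)                                  # pdf of Y_3, supported on {3, ..., 18}

print(f2[7 - 2], 6 / 36)                                 # P(Y_2 = 7) = 6/36
print(f3[10 - 3], 27 / 216)                              # P(Y_3 = 10) = 27/216
```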
More generally, we can use the stationary and independence properties to find the joint distributions of the partial sum process:
If $n_1 \lt n_2 \lt \cdots \lt n_k$ then $(Y_{n_1}, Y_{n_2}, \ldots, Y_{n_k})$ has joint probability density function $f_{n_1, n_2, \ldots, n_k}(y_1, y_2, \ldots, y_k) = f^{*n_1}(y_1) f^{*(n_2 - n_1)}(y_2 - y_1) \cdots f^{*(n_k - n_{k-1})}(y_k - y_{k-1}), \quad (y_1, y_2, \ldots, y_k) \in \R^k$
Proof
This follows from the multivariate change of variables theorem.
The Central Limit Theorem
First, let's make the central limit theorem more precise. From the moment results above, we cannot expect $Y_n$ itself to have a limiting distribution. Note that $\var(Y_n) \to \infty$ as $n \to \infty$ since $\sigma \gt 0$, and $\E(Y_n) \to \infty$ as $n \to \infty$ if $\mu \gt 0$ while $\E(Y_n) \to -\infty$ as $n \to \infty$ if $\mu \lt 0$. Similarly, we know that $M_n \to \mu$ as $n \to \infty$ with probability 1, so the limiting distribution of the sample mean is degenerate. Thus, to obtain a limiting distribution of $Y_n$ or $M_n$ that is not degenerate, we need to consider, not these variables themselves, but rather the common standard score. Thus, let $Z_n = \frac{Y_n - n \mu}{\sqrt{n} \sigma} = \frac{M_n - \mu}{\sigma \big/ \sqrt{n}}$
$Z_n$ has mean 0 and variance 1.
1. $\E(Z_n) = 0$
2. $\var(Z_n) = 1$
Proof
These results follow from basic properties of expected value and variance, and are true for the standard score associated with any random variable. Recall also that the standard score of a variable is invariant under linear transformations with positive slope. The fact that the standard score of $Y_n$ and the standard score of $M_n$ are the same is a special case of this.
The precise statement of the central limit theorem is that the distribution of the standard score $Z_n$ converges to the standard normal distribution as $n \to \infty$. Recall that the standard normal distribution has probability density function $\phi(z) = \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} z^2}, \quad z \in \R$ and is studied in more detail in the chapter on special distributions. A special case of the central limit theorem (for Bernoulli trials) dates to Abraham De Moivre. The term central limit theorem was coined by George Pólya in 1920. By definition of convergence in distribution, the central limit theorem states that $F_n(z) \to \Phi(z)$ as $n \to \infty$ for each $z \in \R$, where $F_n$ is the distribution function of $Z_n$ and $\Phi$ is the standard normal distribution function:
$\Phi(z) = \int_{-\infty}^z \phi(x) \, dx = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2} x^2} \, dx, \quad z \in \R$
An equivalent statement of the central limit theorem involves convergence of the corresponding characteristic functions. This is the version that we will give and prove, but first we need a generalization of a famous limit from calculus.
Suppose that $(a_1, a_2, \ldots)$ is a sequence of real numbers and that $a_n \to a \in \R$ as $n \to \infty$. Then $\left( 1 + \frac{a_n}{n} \right)^n \to e^a \text{ as } n \to \infty$
Now let $\chi$ denote the characteristic function of the standard score of the sample variable $X$, and let $\chi_n$ denote the characteristic function of the standard score $Z_n$: $\chi(t) = \E \left[ \exp\left( i t \frac{X - \mu}{\sigma} \right) \right], \; \chi_n(t) = \E[\exp(i t Z_n)]; \quad t \in \R$ Recall that $t \mapsto e^{-\frac{1}{2}t^2}$ is the characteristic function of the standard normal distribution. We can now give a proof.
The central limit theorem. The distribution of $Z_n$ converges to the standard normal distribution as $n \to \infty$. That is, $\chi_n(t) \to e^{-\frac{1}{2}t^2}$ as $n \to \infty$ for each $t \in \R$.
Proof
Note that $\chi(0) = 1$, $\chi^\prime(0) = 0$, $\chi^{\prime \prime}(0) = -1$. Next $Z_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{X_i - \mu}{\sigma}$ From properties of characteristic functions, $\chi_n(t) = \chi^n (t / \sqrt{n})$ for $t \in \R$. By Taylor's theorem (named after Brook Taylor), $\chi\left(\frac{t}{\sqrt{n}}\right) = 1 + \frac{1}{2} \chi^{\prime\prime}(s_n) \frac{t^2}{n} \text{ where } \left|s_n\right| \le \frac{\left|t\right|}{\sqrt{n}}$ But $s_n \to 0$ and hence $\chi^{\prime\prime}(s_n) \to -1$ as $n \to \infty$. Finally, using the limit result above, $\chi_n(t) = \left[1 + \frac{1}{2} \chi^{\prime\prime}(s_n) \frac{t^2}{n} \right]^n \to e^{-\frac{1}{2} t^2} \text{ as } n \to \infty$
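A simulation illustrates the theorem concretely. In the sketch below, the exponential sampling distribution (with $\mu = \sigma = 1$), the sample sizes, and the number of replications are arbitrary assumptions for the demonstration; the distribution function of $Z_n$ is estimated empirically and compared with $\Phi$.

```python
# Simulation sketch of the central limit theorem for a skewed sampling distribution.
# Assumed sampling distribution: exponential with mean 1 (mu = 1, sigma = 1).
import numpy as np
from math import erf, sqrt

def Phi(z):
    # standard normal distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(4)
reps = 20_000
for n in (2, 10, 100):
    Y = rng.exponential(scale=1.0, size=(reps, n)).sum(axis=1)
    Z = (Y - n) / sqrt(n)                                # standard score of Y_n
    for z in (-1.0, 0.0, 1.0):
        print(f"n={n:3d}  P(Z_n <= {z:+.0f}) is about {np.mean(Z <= z):.3f}  Phi({z:+.0f}) = {Phi(z):.3f}")
```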
Normal Approximations
The central limit theorem implies that if the sample size $n$ is large then the distribution of the partial sum $Y_n$ is approximately normal with mean $n \mu$ and variance $n \sigma^2$. Equivalently the sample mean $M_n$ is approximately normal with mean $\mu$ and variance $\sigma^2 / n$. The central limit theorem is of fundamental importance, because it means that we can approximate the distribution of certain statistics, even if we know very little about the underlying sampling distribution.
Of course, the term large is relative. Roughly, the more abnormal the basic distribution, the larger $n$ must be for normal approximations to work well. The rule of thumb is that a sample size $n$ of at least 30 will usually suffice if the basic distribution is not too weird; although for many distributions smaller $n$ will do.
Let $Y$ denote the sum of the variables in a random sample of size 30 from the uniform distribution on $[0, 1]$. Find normal approximations to each of the following:
1. $\P(13 \lt Y \lt 18)$
2. The 90th percentile of $Y$
Answer
1. 0.8682
2. 17.03
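The computation behind these answers can be reproduced with a few lines; the sketch below uses the normal approximation with mean $15$ and variance $30/12$, and the constant $1.2816$ is the standard normal quantile of order $0.9$.

```python
# Normal approximation for the sum of 30 uniform variables (sketch of the computation above).
from math import erf, sqrt

def Phi(z):
    # standard normal distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n = 30
mu, sigma = n * 0.5, sqrt(n / 12)                        # mean and sd of the sum of n U(0, 1) variables

p = Phi((18 - mu) / sigma) - Phi((13 - mu) / sigma)      # P(13 < Y < 18)
q90 = mu + 1.2816 * sigma                                # 90th percentile (1.2816 = standard normal quantile)
print(round(p, 4), round(q90, 2))                        # about 0.8682 and 17.03
```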
Random variable $Y$ in the previous exercise has the Irwin-Hall distribution of order 30. The Irwin-Hall distributions are studied in more detail in the chapter on Special Distributions and are named for Joseph Irwin and Philip Hall.
In the special distribution simulator, select the Irwin-Hall distribution. Vary $n$ from 1 to 10 and note the shape of the probability density function. With $n = 10$, run the experiment 1000 times and compare the empirical density function to the true probability density function.
Let $M$ denote the sample mean of a random sample of size 50 from the distribution with probability density function $f(x) = \frac{3}{x^4}$ for $1 \le x \lt \infty$. This is a Pareto distribution, named for Vilfredo Pareto. Find normal approximations to each of the following:
1. $\P(M \gt 1.6)$
2. The 60th percentile of $M$
Answer
1. 0.2071
2. 1.531
The Continuity Correction
A slight technical problem arises when the sampling distribution is discrete. In this case, the partial sum also has a discrete distribution, and hence we are approximating a discrete distribution with a continuous one. Suppose that $X$ takes integer values (the most common case) and hence so does the partial sum $Y_n$. For any $k \in \Z$ and $h \in [0, 1)$, note that the event $\{k - h \le Y_n \le k + h\}$ is equivalent to the event $\{Y_n = k\}$. Different values of $h$ lead to different normal approximations, even though the events are equivalent. The smallest approximation would be 0 when $h = 0$, and the approximations increase as $h$ increases. It is customary to split the difference by using $h = \frac{1}{2}$ for the normal approximation. This is sometimes called the half-unit continuity correction or the histogram correction. The continuity correction is extended to other events in the natural way, using the additivity of probability.
Suppose that $j, k \in \Z$ with $j \le k$.
1. For the event $\{j \le Y_n \le k\} = \{j - 1 \lt Y_n \lt k + 1\}$, use $\{j - \frac{1}{2} \le Y_n \le k + \frac{1}{2}\}$ in the normal approximation.
2. For the event $\{j \le Y_n\} = \{j - 1 \lt Y_n\}$, use $\{j - \frac{1}{2} \le Y_n\}$ in the normal approximation.
3. For the event $\{Y_n \le k\} = \{Y_n \lt k + 1\}$, use $\{Y_n \le k + \frac{1}{2}\}$ in the normal approximation.
Let $Y$ denote the sum of the scores of 20 fair dice. Compute the normal approximation to $\P(60 \le Y \le 75)$.
Answer
0.6741
In the dice experiment, set the die distribution to fair, select the sum random variable $Y$, and set $n = 20$. Run the simulation 1000 times and find each of the following. Compare with the result in the previous exercise:
1. $\P(60 \le Y \le 75)$
2. The relative frequency of the event $\{60 \le Y \le 75\}$ (from the simulation)
Normal Approximation to the Gamma Distribution
Recall that the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a continuous distribution on $(0, \infty)$ with probability density function $f$ given by $f(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x/b}, \quad x \in (0, \infty)$ The mean is $k b$ and the variance is $k b ^2$. The gamma distribution is widely used to model random times (particularly in the context of the Poisson model) and other positive random variables. The general gamma distribution is studied in more detail in the chapter on Special Distributions. In the context of the Poisson model (where $k \in \N_+$), the gamma distribution is also known as the Erlang distribution, named for Agner Erlang; it is studied in more detail in the chapter on the Poisson Process. Suppose now that $Y_k$ has the gamma (Erlang) distribution with shape parameter $k \in \N_+$ and scale parameter $b \gt 0$ then $Y_k = \sum_{i=1}^k X_i$ where $(X_1, X_2, \ldots)$ is a sequence of independent variables, each having the exponential distribution with scale parameter $b$. (The exponential distribution is a special case of the gamma distribution with shape parameter 1.) It follows that if $k$ is large, the gamma distribution can be approximated by the normal distribution with mean $k b$ and variance $k b^2$. The same statement actually holds when $k$ is not an integer. Here is the precise statement:
Suppose that $Y_k$ has the gamma distribution with scale parameter $b \in (0, \infty)$ and shape parameter $k \in (0, \infty)$. Then the distribution of the standardized variable $Z_k$ below converges to the standard normal distribution as $k \to \infty$: $Z_k = \frac{Y_k - k b}{\sqrt{k} b}$
In the special distribution simulator, select the gamma distribution. Vary $k$ and $b$ and note the shape of the probability density function. With $k = 10$ and various values of $b$, run the experiment 1000 times and compare the empirical density function to the true probability density function.
Suppose that $Y$ has the gamma distribution with shape parameter $k = 10$ and scale parameter $b = 2$. Find normal approximations to each of the following:
1. $\P(18 \le Y \le 23)$
2. The 80th percentile of $Y$
Answer
1. 0.3063
2. 25.32
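The computation is analogous to the previous ones; here is a short sketch, using $0.8416$ as the standard normal quantile of order $0.8$.

```python
# Normal approximation for the gamma exercise above (sketch of the computation).
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

k, b = 10, 2
mu, sigma = k * b, sqrt(k) * b                           # gamma mean k*b and sd sqrt(k)*b

p = Phi((23 - mu) / sigma) - Phi((18 - mu) / sigma)      # P(18 <= Y <= 23), close to the answer above
q80 = mu + 0.8416 * sigma                                # 80th percentile (0.8416 = standard normal quantile)
print(round(p, 4), round(q80, 2))
```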
Normal Approximation to the Chi-Square Distribution
Recall that the chi-square distribution with $n \in (0, \infty)$ degrees of freedom is a special case of the gamma distribution, with shape parameter $k = n / 2$ and scale parameter $b = 2$. Thus, the chi-square distribution with $n$ degrees of freedom has probability density function $f(x) = \frac{1}{\Gamma(n/2) 2^{n/2}} x^{n/2 - 1}e^{-x/2}, \quad 0 \lt x \lt \infty$ When $n$ is a positive, integer, the chi-square distribution governs the sum of $n$ independent, standard normal variables. For this reason, it is one of the most important distributions in statistics. The chi-square distribution is studied in more detail in the chapter on Special Distributions. From the previous discussion, it follows that if $n$ is large, the chi-square distribution can be approximated by the normal distribution with mean $n$ and variance $2 n$. Here is the precise statement:
Suppose that $Y_n$ has the chi-square distribution with $n \in (0, \infty)$ degrees of freedom. Then the distribution of the standardized variable $Z_n$ below converges to the standard normal distribution as $n \to \infty$: $Z_n = \frac{Y_n - n}{\sqrt{2 n}}$
In the special distribution simulator, select the chi-square distribution. Vary $n$ and note the shape of the probability density function. With $n = 20$, run the experiment 1000 times and compare the empirical density function to the probability density function.
Suppose that $Y$ has the chi-square distribution with $n = 20$ degrees of freedom. Find normal approximations to each of the following:
1. $\P(18 \lt Y \lt 25)$
2. The 75th percentile of $Y$
Answer
1. 0.4107
2. 24.3
Normal Approximation to the Binomial Distribution
Recall that a Bernoulli trials sequence, named for Jacob Bernoulli, is a sequence $(X_1, X_2, \ldots)$ of independent, identically distributed indicator variables with $\P(X_i = 1) = p$ for each $i$, where $p \in (0, 1)$ is the parameter. In the usual language of reliability, $X_i$ is the outcome of trial $i$, where 1 means success and 0 means failure. The common mean is $p$ and the common variance is $p (1 - p)$.
Let $Y_n = \sum_{i=1}^n X_i$, so that $Y_n$ is the number of successes in the first $n$ trials. Recall that $Y_n$ has the binomial distribution with parameters $n$ and $p$, and has probability density function $f(k) = \binom{n}{k} p^k (1 - p)^{n-k}, \quad k \in \{0, 1, \ldots, n\}$ The binomial distribution is studied in more detail in the chapter on Bernoulli trials.
It follows from the central limit theorem that if $n$ is large, the binomial distribution with parameters $n$ and $p$ can be approximated by the normal distribution with mean $n p$ and variance $n p (1 - p)$. The rule of thumb is that $n$ should be large enough for $n p \ge 5$ and $n (1 - p) \ge 5$. (The first condition is the important one when $p \lt \frac{1}{2}$ and the second condition is the important one when $p \gt \frac{1}{2}$.) Here is the precise statement:
Suppose that $Y_n$ has the binomial distribution with trial parameter $n \in \N_+$ and success parameter $p \in (0, 1)$. Then the distribution of the standardized variable $Z_n$ given below converges to the standard normal distribution as $n \to \infty$: $Z_n = \frac{Y_n - n p}{\sqrt{n p (1 - p)}}$
In the binomial timeline experiment, vary $n$ and $p$ and note the shape of the probability density function. With $n = 50$ and $p = 0.3$, run the simulation 1000 times and compute the following:
1. $\P(12 \le Y \le 16)$
2. The relative frequency of the event $\{12 \le Y \le 16\}$ (from the simulation)
Answer
1. 0.5448
Suppose that $Y$ has the binomial distribution with parameters $n = 50$ and $p = 0.3$. Compute the normal approximation to $\P(12 \le Y \le 16)$ (don't forget the continuity correction) and compare with the results of the previous exercise.
Answer
0.5383
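Both the exact probability from the previous exercise and the continuity-corrected normal approximation from this one can be reproduced directly; here is a short sketch.

```python
# Exact binomial probability versus the normal approximation with continuity correction.
from math import comb, erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, p = 50, 0.3
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(12, 17))   # P(12 <= Y <= 16)

mu, sigma = n * p, sqrt(n * p * (1 - p))
approx = Phi((16.5 - mu) / sigma) - Phi((11.5 - mu) / sigma)               # half-unit continuity correction

print(round(exact, 4), round(approx, 4))                 # about 0.5448 and 0.5383
```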
Normal Approximation to the Poisson Distribution
Recall that the Poisson distribution, named for Simeon Poisson, is a discrete distribution on $\N$ with probability density function $f$ given by $f(x) = e^{-\theta} \frac{\theta^x}{x!}, \quad x \in \N$ where $\theta \gt 0$ is a parameter. The parameter is both the mean and the variance of the distribution. The Poisson distribution is widely used to model the number of random points in a region of time or space, and is studied in more detail in the chapter on the Poisson Process. In this context, the parameter is proportional to the size of the region.
Suppose now that $Y_n$ has the Poisson distribution with parameter $n \in \N_+$. Then $Y_n = \sum_{i=1}^n X_i$ where $(X_1, X_2, \ldots, X_n)$ is a sequence of independent variables, each with the Poisson distribution with parameter 1. It follows from the central limit theorem that if $n$ is large, the Poisson distribution with parameter $n$ can be approximated by the normal distribution with mean $n$ and variance $n$. The same statement holds when the parameter $n$ is not an integer. Here is the precise statement:
Suppose that $Y_\theta$ has the Poisson distribution with parameter $\theta \in (0, \infty)$. Then the distribution of the standardized variable $Z_\theta$ below converges to the standard normal distribution as $\theta \to \infty$:
$Z_\theta = \frac{Y_\theta - \theta}{\sqrt{\theta}}$
Suppose that $Y$ has the Poisson distribution with mean 20.
1. Compute the true value of $\P(16 \le Y \le 23)$.
2. Compute the normal approximation to $\P(16 \le Y \le 23)$.
Answer
1. 0.6310
2. 0.6259
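Here is a short sketch of both computations, with the half-unit continuity correction in the normal approximation.

```python
# Exact Poisson probability versus the normal approximation with continuity correction.
from math import exp, factorial, erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

theta = 20
exact = sum(exp(-theta) * theta**x / factorial(x) for x in range(16, 24))  # P(16 <= Y <= 23)
approx = Phi((23.5 - theta) / sqrt(theta)) - Phi((15.5 - theta) / sqrt(theta))

print(round(exact, 4), round(approx, 4))                 # about 0.6310 and 0.6259
```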
In the Poisson experiment, vary the time and rate parameters $t$ and $r$ (the parameter of the Poisson distribution in the experiment is the product $r t$). Note the shape of the probability density function. With $r = 5$ and $t = 4$, run the experiment 1000 times and compare the empirical density function to the true probability density function.
Normal Approximation to the Negative Binomial Distribution
The general version of the negative binomial distribution is a discrete distribution on $\N$, with shape parameter $k \in (0, \infty)$ and success parameter $p \in (0, 1)$. The probability density function $f$ is given by $f(n) = \binom{n + k - 1}{n} p^k (1 - p)^n, \quad n \in \N$ The mean is $k (1 - p) / p$ and the variance is $k (1 - p) / p^2$. The negative binomial distribution is studied in more detail in the chapter on Bernoulli trials. If $k \in \N_+$, the distribution governs the number of failures $Y_k$ before success number $k$ in a sequence of Bernoulli trials with success parameter $p$. Thus in this case, $Y_k = \sum_{i=1}^k X_i$ where $(X_1, X_2, \ldots, X_k)$ is a sequence of independent variables, each having the geometric distribution on $\N$ with parameter $p$. (The geometric distribution is a special case of the negative binomial, with parameters 1 and $p$.) In the context of the Bernoulli trials, $X_1$ is the number of failures before the first success, and for $i \in \{2, 3, \ldots\}$, $X_i$ is the number of failures between success number $i - 1$ and success number $i$. It follows that if $k$ is large, the negative binomial distribution can be approximated by the normal distribution. The same statement holds if $k$ is not an integer. Here is the precise statement:
Suppose that $Y_k$ has the negative binomial distribution with shape parameter $k \in (0, \infty)$ and success parameter $p \in (0, 1)$. Then the distribution of the standardized variable $Z_k$ below converges to the standard normal distribution as $k \to \infty$: $Z_k = \frac{p Y_k - k(1 - p)}{\sqrt{k (1 - p)}}$
Another version of the negative binomial distribution is the distribution of the trial number $V_k$ of success number $k \in \N_+$. So $V_k = k + Y_k$ and $V_k$ has mean $k / p$ and variance $k (1 - p) / p^2$. The normal approximation applies to the distribution of $V_k$ as well, if $k$ is large, and since the distributions are related by a location transformation, the standard scores are the same. That is $\frac{p V_k - k}{\sqrt{k (1 - p)}} = \frac{p Y_k - k(1 - p)}{\sqrt{k ( 1 - p)}}$
In the negative binomial experiment, vary $k$ and $p$ and note the shape of the probability density function. With $k = 5$ and $p = 0.4$, run the experiment 1000 times and compare the empirical density function to the true probability density function.
Suppose that $Y$ has the negative binomial distribution with trial parameter $k = 10$ and success parameter $p = 0.4$. Find normal approximations to each of the following:
1. $\P(20 \lt Y \lt 30)$
2. The 80th percentile of $Y$
Answer
1. 0.6318
2. 30.1
Partial Sums with a Random Number of Terms
Our last topic is a bit more esoteric, but still fits with the general setting of this section. Recall that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent, identically distributed real-valued random variables with common mean $\mu$ and variance $\sigma^2$. Suppose now that $N$ is a random variable (on the same probability space) taking values in $\N$, also with finite mean and variance. Then $Y_N = \sum_{i=1}^N X_i$ is a random sum of the independent, identically distributed variables. That is, the terms are random of course, but so also is the number of terms $N$. We are primarily interested in the moments of $Y_N$.
Independent Number of Terms
Suppose first that $N$, the number of terms, is independent of $\bs{X}$, the sequence of terms. Computing the moments of $Y_N$ is a good exercise in conditional expectation.
The conditional expected value of $Y_N$ given $N$, and the expected value of $Y_N$ are
1. $\E(Y_N \mid N) = N \mu$
2. $\E(Y_N) = \E(N) \mu$
The conditional variance of $Y_N$ given $N$ and the variance of $Y_N$ are
1. $\var(Y_N \mid N) = N \sigma^2$
2. $\var(Y_N) = \E(N) \sigma^2 + \var(N) \mu^2$
Let $H$ denote the probability generating function of $N$. Show that the moment generating function of $Y_N$ is $H \circ G$.
1. $\E(e^{t Y_N} \mid N) = [G(t)]^N$
2. $\E(e^{t Y_N}) = H(G(t))$
Wald's Equation
The result above that $\E(Y_N) = \E(N) \mu$ generalizes to the case where the random number of terms $N$ is a stopping time for the sequence $\bs{X}$. This means that the event $\{N = n\}$ depends only on (technically, is measurable with respect to) $(X_1, X_2, \ldots, X_n)$ for each $n \in \N$. The generalization is known as Wald's equation, and is named for Abraham Wald. Stopping times are studied in much more technical detail in the section on Filtrations and Stopping Times.
If $N$ is a stopping time for $\bs{X}$ then $\E(Y_N) = \E(N) \mu$.
Proof
First note that $Y_N = \sum_{i=1}^\infty X_i \bs{1}(i \le N)$. But $\{i \le N\} = \{N \lt i\}^c$ depends only on $\{X_1, \ldots, X_{i-1}\}$ and hence is independent of $X_i$. Thus $\E[X_i \bs{1}(i \le N)] = \mu \P(N \ge i)$. Suppose that $X_i \ge 0$ for each $i$. Taking expected values term by term gives Wald's equation in this special case. The interchange of sum and expected value is justified by the monotone convergence theorem. Now Wald's equation can be established in general by using the dominated convergence theorem.
An elegant proof of Wald's equation is given in the chapter on Martingales.
Suppose that the number of customers arriving at a store during a given day has the Poisson distribution with parameter 50. Each customer, independently of the others (and independently of the number of customers), spends an amount of money that is uniformly distributed on the interval $[0, 20]$. Find the mean and standard deviation of the amount of money that the store takes in during a day.
Answer
500, 81.65
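A Monte Carlo check of the compound moment formulas is easy to set up; the simulation sizes in the sketch below are arbitrary.

```python
# Monte Carlo check of the random sum formulas for the store exercise (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)
reps = 50_000
N = rng.poisson(lam=50, size=reps)                       # number of customers on each simulated day
totals = np.array([rng.uniform(0, 20, size=n).sum() for n in N])

print(round(totals.mean(), 1), round(totals.std(ddof=1), 2))   # close to 500 and 81.65
```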
When a certain critical component in a system fails, it is immediately replaced by a new, statistically identical component. The components are independent, and the lifetime of each (in hours) is exponentially distributed with scale parameter $b$. During the life of the system, the number of critical components used has a geometric distribution on $\N_+$ with parameter $p$. For the total life of the critical component,
1. Find the mean.
2. Find the standard deviation.
3. Find the moment generating function.
4. Identify the distribution by name.
Answer
1. $b / p$
2. $b / p$
3. $t \mapsto \frac{1}{1 - (b/p)t}$
4. Exponential distribution with scale parameter $b / p$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\mae}{\text{mae}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Descriptive Theory
Recall the basic model of statistics: we have a population of objects of interest, and we have various measurements (variables) that we make on these objects. We select objects from the population and record the variables for the objects in the sample; these become our data. Once again, our first discussion is from a descriptive point of view. That is, we do not assume that the data are generated by an underlying probability distribution. Remember however, that the data themselves form a probability distribution.
Variance and Standard Deviation
Suppose that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a real-valued variable $x$. Recall that the sample mean is $m = \frac{1}{n} \sum_{i=1}^n x_i$ and is the most important measure of the center of the data set. The sample variance is defined to be $s^2 = \frac{1}{n - 1} \sum_{i=1}^n (x_i - m)^2$ If we need to indicate the dependence on the data vector $\bs{x}$, we write $s^2(\bs{x})$. The difference $x_i - m$ is the deviation of $x_i$ from the mean $m$ of the data set. Thus, the variance is the mean square deviation and is a measure of the spread of the data set with respect to the mean. The reason for dividing by $n - 1$ rather than $n$ is best understood in terms of the inferential point of view that we discuss in the next section; this definition makes the sample variance an unbiased estimator of the distribution variance. However, the reason for the averaging can also be understood in terms of a related concept.
$\sum_{i=1}^n (x_i - m) = 0$.
Proof
$\sum_{i=1}^n (x_i - m) = \sum_{i=1}^n x_i - \sum_{i=1}^n m = n m - n m = 0$.
Thus, if we know $n - 1$ of the deviations, we can compute the last one. This means that there are only $n - 1$ freely varying deviations, that is to say, $n - 1$ degrees of freedom in the set of deviations. In the definition of sample variance, we average the squared deviations, not by dividing by the number of terms, but rather by dividing by the number of degrees of freedom in those terms. However, this argument notwithstanding, it would be reasonable, from a purely descriptive point of view, to divide by $n$ in the definition of the sample variance. Moreover, when $n$ is sufficiently large, it hardly matters whether we divide by $n$ or by $n - 1$.
In any event, the square root $s$ of the sample variance $s^2$ is the sample standard deviation. It is the root mean square deviation and is also a measure of the spread of the data with respect to the mean. Both measures of spread are important. Variance has nicer mathematical properties, but its physical unit is the square of the unit of $x$. For example, if the underlying variable $x$ is the height of a person in inches, the variance is in square inches. On the other hand, the standard deviation has the same physical unit as the original variable, but its mathematical properties are not as nice.
Recall that the data set $\bs{x}$ naturally gives rise to a probability distribution, namely the empirical distribution that places probability $\frac{1}{n}$ at $x_i$ for each $i$. Thus, if the data are distinct, this is the uniform distribution on $\{x_1, x_2, \ldots, x_n\}$. The sample mean $m$ is simply the expected value of the empirical distribution. Similarly, if we were to divide by $n$ rather than $n - 1$, the sample variance would be the variance of the empirical distribution. Most of the properties and results in this section follow from much more general properties and results for the variance of a probability distribution (although for the most part, we give independent proofs).
Measures of Center and Spread
Measures of center and measures of spread are best thought of together, in the context of an error function. The error function measures how well a single number $a$ represents the entire data set $\bs{x}$. The values of $a$ (if they exist) that minimize the error functions are our measures of center; the minimum value of the error function is the corresponding measure of spread. Of course, we hope for a single value of $a$ that minimizes the error function, so that we have a unique measure of center.
Let's apply this procedure to the mean square error function defined by $\mse(a) = \frac{1}{n - 1} \sum_{i=1}^n (x_i - a)^2, \quad a \in \R$ Minimizing $\mse$ is a standard problem in calculus.
The graph of $\mse$ is a parabola opening upward.
1. $\mse$ is minimized when $a = m$, the sample mean.
2. The minimum value of $\mse$ is $s^2$, the sample variance.
Proof
We can tell from the form of $\mse$ that the graph is a parabola opening upward. Taking the derivative gives $\frac{d}{da} \mse(a) = -\frac{2}{n - 1}\sum_{i=1}^n (x_i - a) = -\frac{2}{n - 1}(n m - n a)$ Hence $a = m$ is the unique value that minimizes $\mse$. Of course, $\mse(m) = s^2$.
Trivially, if we defined the mean square error function by dividing by $n$ rather than $n - 1$, then the minimum value would still occur at $m$, the sample mean, but the minimum value would be the alternate version of the sample variance in which we divide by $n$. On the other hand, if we were to use the root mean square deviation function $\text{rmse}(a) = \sqrt{\mse(a)}$, then because the square root function is strictly increasing on $[0, \infty)$, the minimum value would again occur at $m$, the sample mean, but the minimum value would be $s$, the sample standard deviation. The important point is that with all of these error functions, the unique measure of center is the sample mean, and the corresponding measures of spread are the various ones that we are studying.
Next, let's apply our procedure to the mean absolute error function defined by $\mae(a) = \frac{1}{n - 1} \sum_{i=1}^n \left|x_i - a\right|, \quad a \in \R$
The mean absolute error function satisfies the following properties:
1. $\mae$ is a continuous function.
2. The graph of $\mae$ consists of lines.
3. The slope of the line at $a$ depends on where $a$ is in the data set $\bs{x}$.
Proof
For parts (a) and (b), note that for each $i$, $\left|x_i - a\right|$ is a continuous function of $a$ with the graph consisting of two lines (of slopes $\pm 1$) meeting at $x_i$.
Mathematically, $\mae$ has some problems as an error function. First, the function will not be smooth (differentiable) at points where two lines of different slopes meet. More importantly, the values that minimize $\mae$ may occupy an entire interval, thus leaving us without a unique measure of center. The error function exercises below will show you that these pathologies can really happen. It turns out that $\mae$ is minimized at any point in the median interval of the data set $\bs{x}$. The proof of this result follows from a much more general result for probability distributions. Thus, the medians are the natural measures of center associated with $\mae$ as a measure of error, in the same way that the sample mean is the measure of center associated with the $\mse$ as a measure of error.
Properties
In this section, we establish some essential properties of the sample variance and standard deviation. First, the following alternate formula for the sample variance is better for computational purposes, and for certain theoretical purposes as well.
The sample variance can be computed as $s^2 = \frac{1}{n - 1} \sum_{i=1}^n x_i^2 - \frac{n}{n - 1} m^2$
Proof
Note that \begin{align} \sum_{i=1}^n (x_i - m)^2 & = \sum_{i=1}^n \left(x_i^2 - 2 m x_i + m^2\right) = \sum_{i=1}^n x_i^2 - 2 m \sum_{i=1}^n x_i + \sum_{i=1}^n m^2 \\ & = \sum_{i=1}^n x_i^2 - 2 n m^2 + n m^2 = \sum_{i=1}^n x_i^2 - n m^2 \end{align} Dividing by $n - 1$ gives the result.
If we let $\bs{x}^2 = (x_1^2, x_2^2, \ldots, x_n^2)$ denote the sample from the variable $x^2$, then the computational formula in the last exercise can be written succinctly as $s^2(\bs{x}) = \frac{n}{n - 1} \left[m(\bs{x}^2) - m^2(\bs{x})\right]$ The following theorem gives another computational formula for the sample variance, directly in terms of the variables and thus without the computation of an intermediate statistic.
The sample variance can be computed as $s^2 = \frac{1}{2 n (n - 1)} \sum_{i=1}^n \sum_{j=1}^n (x_i - x_j)^2$
Proof
Note that \begin{align} \frac{1}{2 n} \sum_{i=1}^n \sum_{j=1}^n (x_i - x_j)^2 & = \frac{1}{2 n} \sum_{i=1}^n \sum_{j=1}^n (x_i - m + m - x_j)^2 \\ & = \frac{1}{2 n} \sum_{i=1}^n \sum_{j=1}^n \left[(x_i - m)^2 + 2 (x_i - m)(m - x_j) + (m - x_j)^2\right] \\ & = \frac{1}{2 n} \sum_{i=1}^n \sum_{j=1}^n (x_i - m)^2 + \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n (x_i - m)(m - x_j) + \frac{1}{2 n} \sum_{i=1}^n \sum_{j=1}^n (m - x_j)^2 \\ & = \frac{1}{2} \sum_{i=1}^n (x_i - m)^2 + 0 + \frac{1}{2} \sum_{j=1}^n (m - x_j)^2 \\ & = \sum_{i=1}^n (x_i - m)^2 \end{align} Dividing by $n - 1$ gives the result.
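The agreement of these formulas is easy to verify numerically; the data values in the sketch below are arbitrary.

```python
# Numerical check that the formulas for the sample variance agree (illustrative sketch).
import numpy as np

x = np.array([2.0, 3.5, 5.0, 7.5, 10.0])                 # arbitrary data set
n, m = x.size, x.mean()

s2_def = np.sum((x - m)**2) / (n - 1)                    # definition
s2_shortcut = (np.sum(x**2) - n * m**2) / (n - 1)        # computational formula
s2_pairs = np.sum((x[:, None] - x[None, :])**2) / (2 * n * (n - 1))   # pairwise formula

print(s2_def, s2_shortcut, s2_pairs)                     # all three agree
print(np.var(x, ddof=1))                                 # numpy's sample variance (divisor n - 1)
```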
The sample variance is nonnegative:
1. $s^2 \ge 0$
2. $s^2 = 0$ if and only if $x_i = x_j$ for each $i, \; j \in \{1, 2, \ldots, n\}$.
Proof
Part (a) is obvious. For part (b) note that if $s^2 = 0$ then $x_i = m$ for each $i$. Conversely, if $\bs{x}$ is a constant vector, then $m$ is that same constant.
Thus, $s^2 = 0$ if and only if the data set is constant (and then, of course, the mean is the common value).
If $c$ is a constant then
1. $s^2(c \, \bs{x}) = c^2 \, s^2(\bs{x})$
2. $s(c \, \bs{x}) = \left|c\right| \, s(\bs{x})$
Proof
For part (a), recall that $m(c \bs{x}) = c m(\bs{x})$. Hence $s^2(c \bs{x}) = \frac{1}{n - 1}\sum_{i=1}^n \left[c x_i - c m(\bs{x})\right]^2 = \frac{1}{n - 1} \sum_{i=1}^n c^2 \left[x_i - m(\bs{x})\right]^2 = c^2 s^2(\bs{x})$
If $\bs{c}$ is a sample of size $n$ from a constant $c$ then
1. $s^2(\bs{x} + \bs{c}) = s^2(\bs{x})$.
2. $s(\bs{x} + \bs{c}) = s(\bs{x})$
Proof
Recall that $m(\bs{x} + \bs{c}) = m(\bs{x}) + c$. Hence $s^2(\bs{x} + \bs{c}) = \frac{1}{n - 1} \sum_{i=1}^n \left\{(x_i + c) - \left[m(\bs{x}) + c\right]\right\}^2 = \frac{1}{n - 1} \sum_{i=1}^n \left[x_i - m(\bs{x})\right]^2 = s^2(\bs{x})$
As a special case of these results, suppose that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ corresponding to a real variable $x$, and that $a$ and $b$ are constants. The sample corresponding to the variable $y = a + b x$, in our vector notation, is $\bs{a} + b \bs{x}$. Then $m(\bs{a} + b \bs{x}) = a + b m(\bs{x})$ and $s(\bs{a} + b \bs{x}) = \left|b\right| s(\bs{x})$. Linear transformations of this type, when $b \gt 0$, arise frequently when physical units are changed. In this case, the transformation is often called a location-scale transformation; $a$ is the location parameter and $b$ is the scale parameter. For example, if $x$ is the length of an object in inches, then $y = 2.54 x$ is the length of the object in centimeters. If $x$ is the temperature of an object in degrees Fahrenheit, then $y = \frac{5}{9}(x - 32)$ is the temperature of the object in degrees Celsius.
Now, for $i \in \{1, 2, \ldots, n\}$, let $z_i = (x_i - m) / s$. The number $z_i$ is the standard score associated with $x_i$. Note that since $x_i$, $m$, and $s$ have the same physical units, the standard score $z_i$ is dimensionless (that is, has no physical units); it measures the directed distance from the mean $m$ to the data value $x_i$ in standard deviations.
The sample of standard scores $\bs{z} = (z_1, z_2, \ldots, z_n)$ has mean 0 and variance 1. That is,
1. $m(\bs{z}) = 0$
2. $s^2(\bs{z}) = 1$
Proof
These results follow from the scaling and translation properties above. In vector notation, note that $\bs{z} = (\bs{x} - \bs{m})/s$. Hence $m(\bs{z}) = (m - m) / s = 0$ and $s(\bs{z}) = s / s = 1$.
Approximating the Variance
Suppose that instead of the actual data $\bs{x}$, we have a frequency distribution corresponding to a partition with classes (intervals) $(A_1, A_2, \ldots, A_k)$, class marks (midpoints of the intervals) $(t_1, t_2, \ldots, t_k)$, and frequencies $(n_1, n_2, \ldots, n_k)$. Recall that the relative frequency of class $A_j$ is $p_j = n_j / n$. In this case, approximate values of the sample mean and variance are, respectively,
\begin{align} m & = \frac{1}{n} \sum_{j=1}^k n_j \, t_j = \sum_{j = 1}^k p_j \, t_j \\ s^2 & = \frac{1}{n - 1} \sum_{j=1}^k n_j (t_j - m)^2 = \frac{n}{n - 1} \sum_{j=1}^k p_j (t_j - m)^2 \end{align}
These approximations are based on the hope that the data values in each class are well represented by the class mark. In fact, these are the standard definitions of sample mean and variance for the data set in which $t_j$ occurs $n_j$ times for each $j$.
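For concreteness, here is a small sketch of the grouped-data approximation; the class marks and frequencies are hypothetical.

```python
# Approximate mean and variance from a frequency distribution (hypothetical data).
import numpy as np

t = np.array([5.0, 15.0, 25.0, 35.0])                    # class marks (hypothetical classes of width 10)
n_j = np.array([4, 10, 7, 3])                            # class frequencies (hypothetical)
n = n_j.sum()

m_approx = np.sum(n_j * t) / n                           # approximate sample mean
s2_approx = np.sum(n_j * (t - m_approx)**2) / (n - 1)    # approximate sample variance

print(round(m_approx, 3), round(s2_approx, 3))
```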
Inferential Statistics
We continue our discussion of the sample variance, but now we assume that the variables are random. Thus, suppose that we have a basic random experiment, and that $X$ is a real-valued random variable for the experiment with mean $\mu$ and standard deviation $\sigma$. We will need some higher order moments as well. Let $\sigma_3 = \E\left[(X - \mu)^3\right]$ and $\sigma_4 = \E\left[(X - \mu)^4\right]$ denote the 3rd and 4th moments about the mean. Recall that $\sigma_3 \big/ \sigma^3 = \skw(X)$, the skewness of $X$, and $\sigma_4 \big/ \sigma^4 = \kur(X)$, the kurtosis of $X$. We assume that $\sigma_4 \lt \infty$.
We repeat the basic experiment $n$ times to form a new, compound experiment, with a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots, X_n)$, each with the same distribution as $X$. In statistical terms, $\bs{X}$ is a random sample of size $n$ from the distribution of $X$. All of the statistics above make sense for $\bs{X}$, of course, but now these statistics are random variables. We will use the same notation, except for the usual convention of denoting random variables by capital letters. Finally, note that the deterministic properties and relations established above still hold.
In addition to being a measure of the center of the data $\bs{X}$, the sample mean $M = \frac{1}{n} \sum_{i=1}^n X_i$ is a natural estimator of the distribution mean $\mu$. In this section, we will derive statistics that are natural estimators of the distribution variance $\sigma^2$. The statistics that we will derive are different, depending on whether $\mu$ is known or unknown; for this reason, $\mu$ is referred to as a nuisance parameter for the problem of estimating $\sigma^2$.
A Special Sample Variance
First we will assume that $\mu$ is known. Although this is almost always an artificial assumption, it is a nice place to start because the analysis is relatively easy and will give us insight for the standard case. A natural estimator of $\sigma^2$ is the following statistic, which we will refer to as the special sample variance. $W^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2$
$W^2$ is the sample mean for a random sample of size $n$ from the distribution of $(X - \mu)^2$, and satisfies the following properties:
1. $\E\left(W^2\right) = \sigma^2$
2. $\var\left(W^2\right) = \frac{1}{n}\left(\sigma_4 - \sigma^4\right)$
3. $W^2 \to \sigma^2$ as $n \to \infty$ with probability 1
4. The distribution of $\sqrt{n}\left(W^2 - \sigma^2\right) \big/ \sqrt{\sigma_4 - \sigma^4}$ converges to the standard normal distribution as $n \to \infty$.
Proof
These results follow immediately from standard results in the section on the Law of Large Numbers and the section on the Central Limit Theorem. For part (b), note that $\var\left[(X - \mu)^2\right] = \E\left[(X - \mu)^4\right] -\left(\E\left[(X - \mu)^2\right]\right)^2 = \sigma_4 - \sigma^4$
In particular part (a) means that $W^2$ is an unbiased estimator of $\sigma^2$. From part (b), note that $\var(W^2) \to 0$ as $n \to \infty$; this means that $W^2$ is a consistent estimator of $\sigma^2$. The square root of the special sample variance is a special version of the sample standard deviation, denoted $W$.
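As a quick numerical check (a hedged Python sketch, not part of the original text), the following simulation estimates $\E(W^2)$ and $\var(W^2)$ for samples from the exponential distribution with rate 1, for which $\mu = 1$, $\sigma^2 = 1$, and $\sigma_4 = 9$, and compares them with $\sigma^2$ and $(\sigma_4 - \sigma^4)/n$.

```python
import random

# Monte Carlo check that W^2 = (1/n) sum (X_i - mu)^2 is unbiased for sigma^2.
# Exponential(1) samples are used, for which mu = 1, sigma^2 = 1, sigma_4 = 9.
random.seed(1)
n, reps, mu = 5, 100_000, 1.0
w2_values = []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]
    w2_values.append(sum((x - mu) ** 2 for x in sample) / n)

mean_w2 = sum(w2_values) / reps
var_w2 = sum((w - mean_w2) ** 2 for w in w2_values) / reps
print(mean_w2)               # close to sigma^2 = 1
print(var_w2, (9 - 1) / n)   # both close to (sigma_4 - sigma^4)/n = 8/5
```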
$\E(W) \le \sigma$. Thus, $W$ is a negatively biased estimator that tends to underestimate $\sigma$.
Proof
This follows from the unbiased property and Jensen's inequality. Since $w \mapsto \sqrt{w}$ is concave downward on $[0, \infty)$, we have $\E(W) = \E\left(\sqrt{W^2}\right) \le \sqrt{\E\left(W^2\right)} = \sqrt{\sigma^2} = \sigma$.
Next we compute the covariance and correlation between the sample mean and the special sample variance.
The covariance and correlation of $M$ and $W^2$ are
1. $\cov\left(M, W^2\right) = \sigma_3 / n$.
2. $\cor\left(M, W^2\right) = \sigma_3 \big/ \sqrt{\sigma^2 (\sigma_4 - \sigma^4)}$
Proof
1. From the bilinearity of the covariance operator and by independence, $\cov\left(M, W^2\right) = \cov\left[\frac{1}{n}\sum_{i=1}^n X_i, \frac{1}{n} \sum_{j=1}^n (X_j - \mu)^2\right] = \frac{1}{n^2} \sum_{i=1}^n \cov\left[X_i, (X_i - \mu)^2\right]$ But $\cov\left[X_i, (X_i - \mu)^2\right] = \cov\left[X_i - \mu, (X_i - \mu)^2\right] = \E\left[(X_i - \mu)^3\right] - \E(X_i - \mu) \E\left[(X_i - \mu)^2\right] = \sigma_3$. Substituting gives the result.
2. This follows from part (a), the unbiased property, and our previous result that $\var(M) = \sigma^2 / n$.
Note that the correlation does not depend on the sample size, and that the sample mean and the special sample variance are uncorrelated if $\sigma_3 = 0$ (equivalently $\skw(X) = 0$).
The Standard Sample Variance
Consider now the more realistic case in which $\mu$ is unknown. In this case, a natural approach is to average, in some sense, the squared deviations $(X_i - M)^2$ over $i \in \{1, 2, \ldots, n\}$. It might seem that we should average by dividing by $n$. However, another approach is to divide by whatever constant would give us an unbiased estimator of $\sigma^2$. This constant turns out to be $n - 1$, leading to the standard sample variance: $S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$
$\E\left(S^2\right) = \sigma^2$.
Proof
By expanding (as was shown in the last section), $\sum_{i=1}^n (X_i - M)^2 = \sum_{i=1}^n X_i^2 - n M^2$ Recall that $\E(M) = \mu$ and $\var(M) = \sigma^2 / n$. Taking expected values in the displayed equation gives $\E\left(\sum_{i=1}^n (X_i - M)^2\right) = \sum_{i=1}^n (\sigma^2 + \mu^2) - n \left(\frac{\sigma^2}{n} + \mu^2\right) = n (\sigma^2 + \mu^2) -n \left(\frac{\sigma^2}{n} + \mu^2\right) = (n - 1) \sigma^2$
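The following simulation sketch (Python, illustrative only) shows the effect of the divisor: averaging the squared deviations with $n$ underestimates $\sigma^2$ by the factor $(n - 1)/n$, while dividing by $n - 1$ is unbiased. The example uses the uniform distribution on $[0, 1]$, for which $\sigma^2 = 1/12$.

```python
import random

# Monte Carlo illustration of the n versus n - 1 divisor, using uniform(0, 1)
# samples, for which sigma^2 = 1/12.
random.seed(2)
n, reps = 5, 200_000
biased_sum = unbiased_sum = 0.0
for _ in range(reps):
    x = [random.random() for _ in range(n)]
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    biased_sum += ss / n            # divide by n
    unbiased_sum += ss / (n - 1)    # divide by n - 1

print(biased_sum / reps)     # close to (n - 1)/n * sigma^2 = 1/15
print(unbiased_sum / reps)   # close to sigma^2 = 1/12
```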
Of course, the square root of the sample variance is the sample standard deviation, denoted $S$.
$\E(S) \le \sigma$. Thus, $S$ is a negatively biased estimator that tends to underestimate $\sigma$.
Proof
The proof is exactly the same as for the special sample variance.
$S^2 \to \sigma^2$ as $n \to \infty$ with probability 1.
Proof
This follows from the strong law of large numbers. Recall again that $S^2 = \frac{1}{n - 1} \sum_{i=1}^n X_i^2 - \frac{n}{n - 1} M^2 = \frac{n}{n - 1}[M(\bs{X}^2) - M^2(\bs{X})]$ But with probability 1, $M(\bs{X}^2) \to \sigma^2 + \mu^2$ as $n \to \infty$ and $M^2(\bs{X}) \to \mu^2$ as $n \to \infty$.
Since $S^2$ is an unbiased estimator of $\sigma^2$, the variance of $S^2$ is the mean square error, a measure of the quality of the estimator.
$\var\left(S^2\right) = \frac{1}{n} \left( \sigma_4 - \frac{n - 3}{n - 1} \sigma^4 \right)$.
Proof
Recall from the result above that $S^2 = \frac{1}{2 n (n - 1)} \sum_{i=1}^n \sum_{j=1}^n (X_i - X_j)^2$ Hence, using the bilinear property of covariance we have $\var(S^2) = \cov(S^2, S^2) = \frac{1}{4 n^2 (n - 1)^2} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n \cov[(X_i - X_j)^2, (X_k - X_l)^2]$ We compute the covariances in this sum by considering disjoint cases:
• $\cov\left[(X_i - X_j)^2, (X_k - X_l)^2\right] = 0$ if $i = j$ or $k = l$, and there are $2 n^3 - n^2$ such terms.
• $\cov\left[(X_i - X_j)^2, (X_k - X_l)^2\right] = 0$ if $i, j, k, l$ are distinct, and there are $n (n - 1)(n - 2) (n - 3)$ such terms.
• $\cov\left[(X_i - X_j)^2, (X_k - X_l)^2\right] = 2 \sigma_4 + 2 \sigma^4$ if $i \ne j$ and $\{k, l\} = \{i, j\}$, and there are $2 n (n - 1)$ such terms.
• $\cov\left[(X_i - X_j)^2, (X_k - X_l)^2\right] = \sigma_4 - \sigma^4$ if $i \ne j$, $k \ne l$ and $\#(\{i, j\} \cap \{k, l\}) = 1$, and there are $4 n (n - 1)(n - 2)$ such terms.
Substituting gives the result.
Note that $\var(S^2) \to 0$ as $n \to \infty$, and hence $S^2$ is a consistent estimator of $\sigma^2$. On the other hand, it's not surprising that the variance of the standard sample variance (where we assume that $\mu$ is unknown) is greater than the variance of the special sample variance (in which we assume $\mu$ is known).
$\var\left(S^2\right) \gt \var\left(W^2\right)$.
Proof
From the formula above for the variance of $W^2$, the previous result for the variance of $S^2$, and simple algebra, $\var\left(S^2\right) - \var\left(W^2\right) = \frac{2}{n (n - 1)} \sigma^4$ Note however that the difference goes to 0 as $n \to \infty$.
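A small exact computation (a Python sketch using rational arithmetic; the helper names are ours) evaluates $\var(W^2)$, $\var(S^2)$, and their difference $2 \sigma^4 / [n (n - 1)]$ for the exponential case $\sigma^2 = 1/\lambda^2$, $\sigma_4 = 9/\lambda^4$ with $\lambda = 1$ and $n = 5$, matching the exercise answers later in the section.

```python
from fractions import Fraction as F

# Exact evaluation of var(W^2) = (sigma_4 - sigma^4)/n and
# var(S^2) = (sigma_4 - sigma^4 (n - 3)/(n - 1))/n, and of their difference,
# which should equal 2 sigma^4 / (n (n - 1)).
def var_w2(sigma2, sigma4, n):
    return F(sigma4 - sigma2 ** 2, n)

def var_s2(sigma2, sigma4, n):
    return (F(sigma4) - F(n - 3, n - 1) * sigma2 ** 2) / n

sigma2, sigma4, n = 1, 9, 5   # exponential(1): sigma^2 = 1, sigma_4 = 9
print(var_w2(sigma2, sigma4, n))                              # 8/5
print(var_s2(sigma2, sigma4, n))                              # 17/10
print(var_s2(sigma2, sigma4, n) - var_w2(sigma2, sigma4, n))  # 1/10 = 2/(5*4)
```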
Next we compute the covariance between the sample mean and the sample variance.
The covariance and correlation between the sample mean and sample variance are
1. $\cov\left(M, S^2\right) = \sigma_3 / n$
2. $\cor\left(M, S^2\right) = \frac{\sigma_3}{\sigma \sqrt{\sigma_4 - \sigma^4 (n - 3) / (n - 1)}}$
Proof
1. Recall again that $M = \frac{1}{n} \sum_{i=1}^n X_i, \quad S^2 = \frac{1}{2 n (n - 1)} \sum_{j=1}^n \sum_{k=1}^n (X_j - X_k)^2$ Hence, using the bilinear property of covariance we have $\cov(M, S^2) = \frac{1}{2 n^2 (n - 1)} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \cov[X_i, (X_j - X_k)^2]$ We compute the covariances in this sum by considering disjoint cases:
• $\cov\left[X_i, (X_j - X_k)^2\right] = 0$ if $j = k$, and there are $n^2$ such terms.
• $\cov\left[X_i, (X_j - X_k)^2\right] = 0$ if $i, j, k$ are distinct, and there are $n (n - 1)(n - 2)$ such terms.
• $\cov\left[X_i, (X_j - X_k)^2\right] = \sigma_3$ if $j \ne k$ and $i \in \{j, k\}$, and there are $2 n (n - 1)$ such terms.
Substituting gives the result.
2. This follows from part (a), the result above on the variance of $S^2$, and $\var(M) = \sigma^2 / n$.
In particular, note that $\cov(M, S^2) = \cov(M, W^2)$. Again, the sample mean and variance are uncorrelated if $\sigma_3 = 0$ so that $\skw(X) = 0$. Our last result gives the covariance and correlation between the special sample variance and the standard one. Curiously, the covariance is the same as the variance of the special sample variance.
The covariance and correlation between $W^2$ and $S^2$ are
1. $\cov\left(W^2, S^2\right) = (\sigma_4 - \sigma^4) / n$
2. $\cor\left(W^2, S^2\right) = \sqrt{\frac{\sigma_4 - \sigma^4}{\sigma_4 - \sigma^4 (n - 3) / (n - 1)}}$
Proof
1. Recall again that $W^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2, \quad S^2 = \frac{1}{2 n (n - 1)} \sum_{j=1}^n \sum_{k=1}^n (X_j - X_k)^2$ so by the bilinear property of covariance we have $\cov(W^2, S^2) = \frac{1}{2 n^2 (n - 1)} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \cov[(X_i - \mu)^2, (X_j - X_k)^2]$ Once again, we compute the covariances in this sum by considering disjoint cases:
• $\cov[(X_i - \mu)^2, (X_j - X_k)^2] = 0$ if $j = k$, and there are $n^2$ such terms.
• $\cov[(X_i - \mu)^2, (X_j - X_k)^2] = 0$ if $i, j, k$ are distinct, and there are $n (n - 1)(n - 2)$ such terms.
• $\cov[(X_i - \mu)^2, (X_j - X_k)^2] = \sigma_4 - \sigma^4$ if $j \ne k$ and $i \in \{j, k\}$, and there are $2 n (n - 1)$ such terms.
Substituting gives the results.
2. This follows from part (a) and the formulas above for the variance of $W^2$ and the variance of $S^2$.
Note that $\cor\left(W^2, S^2\right) \to 1$ as $n \to \infty$, not surprising since with probability 1, $S^2 \to \sigma^2$ and $W^2 \to \sigma^2$ as $n \to \infty$.
A particularly important special case occurs when the sampling distribution is normal. This case is explored in the section on Special Properties of Normal Samples.
Exercises
Basic Properties
Suppose that $x$ is the temperature (in degrees Fahrenheit) for a certain type of electronic component after 10 hours of operation. A sample of 30 components has mean 113° and standard deviation $18°$.
1. Classify $x$ by type and level of measurement.
2. Find the sample mean and standard deviation if the temperature is converted to degrees Celsius. The transformation is $y = \frac{5}{9}(x - 32)$.
Answer
1. continuous, interval
2. $m = 45°$, $s = 10°$
Suppose that $x$ is the length (in inches) of a machined part in a manufacturing process. A sample of 50 parts has mean 10.0 and standard deviation 2.0.
1. Classify $x$ by type and level of measurement.
2. Find the sample mean and standard deviation if length is measured in centimeters. The transformation is $y = 2.54 x$.
Answer
1. continuous, ratio
2. $m = 25.4$, $s = 5.08$
Professor Moriarity has a class of 25 students in her section of Stat 101 at Enormous State University (ESU). The mean grade on the first midterm exam was 64 (out of a possible 100 points) and the standard deviation was 16. Professor Moriarity thinks the grades are a bit low and is considering various transformations for increasing the grades. In each case below give the mean and standard deviation of the transformed grades, or state that there is not enough information.
1. Add 10 points to each grade, so the transformation is $y = x + 10$.
2. Multiply each grade by 1.2, so the transformation is $z = 1.2 x$
3. Use the transformation $w = 10 \sqrt{x}$. Note that this is a non-linear transformation that curves the grades greatly at the low end and very little at the high end. For example, a grade of 100 is still 100, but a grade of 36 is transformed to 60.
One of the students did not study at all, and received a 10 on the midterm. Professor Moriarity considers this score to be an outlier.
1. Find the mean and standard deviation if this score is omitted.
Answer
1. $m = 74$, $s = 16$
2. $m = 76.8$, $s = 19.2$
3. Not enough information
4. $m = 66.25$, $s = 11.62$
Computational Exercises
All statistical software packages will compute means, variances and standard deviations, draw dotplots and histograms, and in general perform the numerical and graphical procedures discussed in this section. For real statistical experiments, particularly those with large data sets, the use of statistical software is essential. On the other hand, there is some value in performing the computations by hand, with small, artificial data sets, in order to master the concepts and definitions. In this subsection, do the computations and draw the graphs with minimal technological aids.
Suppose that $x$ is the number of math courses completed by an ESU student. A sample of 10 ESU students gives the data $\bs{x} = (3, 1, 2, 0, 2, 4, 3, 2, 1, 2)$.
1. Classify $x$ by type and level of measurement.
2. Sketch the dotplot.
3. Construct a table with rows corresponding to cases and columns corresponding to $i$, $x_i$, $x_i - m$, and $(x_i - m)^2$. Add rows at the bottom in the $i$ column for totals and means.
Answer
1. discrete, ratio
2. $i$ $x_i$ $x_i - m$ $(x_i - m)^2$
$1$ $3$ $1$ $1$
$2$ $1$ $-1$ $1$
$3$ $2$ $0$ $0$
$4$ $0$ $-2$ $4$
$5$ $2$ $0$ $0$
$6$ $4$ $2$ $4$
$7$ $3$ $1$ $1$
$8$ $2$ $0$ $0$
$9$ $1$ $-1$ $1$
$10$ $2$ $0$ $0$
Total 20 0 12
Mean 2 0 $12/9$
Suppose that a sample of size 12 from a discrete variable $x$ has empirical density function given by $f(-2) = 1/12$, $f(-1) = 1/4$, $f(0) = 1/3$, $f(1) = 1/6$, $f(2) = 1/6$.
1. Sketch the graph of $f$.
2. Compute the sample mean and variance.
3. Give the sample values, ordered from smallest to largest.
Answer
1. $m = 1/12$, $s^2 = 203/132$
2. $(-2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2)$
The following table gives a frequency distribution for the commuting distance to the math/stat building (in miles) for a sample of ESU students.
Class Freq Rel Freq Density Cum Freq Cum Rel Freq Midpoint
$(0, 2]$ 6
$(2, 6]$ 16
$(6, 10]$ 18
$(10, 20]$ 10
Total
1. Complete the table
2. Sketch the density histogram
3. Sketch the cumulative relative frequency ogive.
4. Compute an approximation to the mean and standard deviation.
Answer
1. Class Freq Rel Freq Density Cum Freq Cum Rel Freq Midpoint
$(0, 2]$ 6 0.12 0.06 6 0.12 1
$(2, 6]$ 16 0.32 0.08 22 0.44 4
$(6, 10]$ 18 0.36 0.09 40 0.80 8
$(10, 20]$ 10 0.20 0.02 50 1 15
Total 50 1
2. $m = 7.28$, $s = 4.549$
Error Function Exercises
In the error function app, select root mean square error. As you add points, note the shape of the graph of the error function, the value that minimizes the function, and the minimum value of the function.
In the error function app, select mean absolute error. As you add points, note the shape of the graph of the error function, the values that minimize the function, and the minimum value of the function.
Suppose that our data vector is $(2, 1, 5, 7)$. Explicitly give $\mae$ as a piecewise function and sketch its graph. Note that
1. All values of $a \in [2, 5]$ minimize $\mae$.
2. $\mae$ is not differentiable at $a \in \{1, 2, 5, 7\}$.
Suppose that our data vector is $(3, 5, 1)$. Explicitly give $\mae$ as a piecewise function and sketch its graph. Note that
1. $\mae$ is minimized at $a = 3$.
2. $\mae$ is not differentiable at $a \in \{1, 3, 5\}$.
Simulation Exercises
Many of the apps in this project are simulations of experiments with a basic random variable of interest. When you run the simulation, you are performing independent replications of the experiment. In most cases, the app displays the standard deviation of the distribution, both numerically in a table and graphically as the radius of the blue, horizontal bar in the graph box. When you run the simulation, the sample standard deviation is also displayed numerically in the table and graphically as the radius of the red horizontal bar in the graph box.
In the binomial coin experiment, the random variable is the number of heads. For various values of the parameters $n$ (the number of coins) and $p$ (the probability of heads), run the simulation 1000 times and compare the sample standard deviation to the distribution standard deviation.
In the simulation of the matching experiment, the random variable is the number of matches. For selected values of $n$ (the number of balls), run the simulation 1000 times and compare the sample standard deviation to the distribution standard deviation.
Run the simulation of the gamma experiment 1000 times for various values of the rate parameter $r$ and the shape parameter $k$. Compare the sample standard deviation to the distribution standard deviation.
Probability Exercises
Suppose that $X$ has probability density function $f(x) = 12 \, x^2 \, (1 - x)$ for $0 \le x \le 1$. The distribution of $X$ is a member of the beta family. Compute each of the following
1. $\mu = \E(X)$
2. $\sigma^2 = \var(X)$
3. $d_3 = \E\left[(X - \mu)^3\right]$
4. $d_4 = \E\left[(X - \mu)^4\right]$
Answer
1. $3/5$
2. $1/25$
3. $-2/875$
4. $33/8750$
Suppose now that $(X_1, X_2, \ldots, X_{10})$ is a random sample of size 10 from the beta distribution in the previous problem. Find each of the following:
1. $\E(M)$
2. $\var(M)$
3. $\E\left(W^2\right)$
4. $\var\left(W^2\right)$
5. $\E\left(S^2\right)$
6. $\var\left(S^2\right)$
7. $\cov\left(M, W^2\right)$
8. $\cov\left(M, S^2\right)$
9. $\cov\left(W^2, S^2\right)$
Answer
1. $3/5$
2. $1/250$
3. $1/25$
4. $19/87\,500$
5. $1/25$
6. $199/787\,500$
7. $-2/8750$
8. $-2/8750$
9. $19/87\,500$
Suppose that $X$ has probability density function $f(x) = \lambda e^{-\lambda x}$ for $0 \le x \lt \infty$, where $\lambda \gt 0$ is a parameter. Thus $X$ has the exponential distribution with rate parameter $\lambda$. Compute each of the following
1. $\mu = \E(X)$
2. $\sigma^2 = \var(X)$
3. $d_3 = \E\left[(X - \mu)^3\right]$
4. $d_4 = \E\left[(X - \mu)^4\right]$
Answer
1. $1/\lambda$
2. $1/\lambda^2$
3. $2/\lambda^3$
4. $9/\lambda^4$
Suppose now that $(X_1, X_2, \ldots, X_5)$ is a random sample of size 5 from the exponential distribution in the previous problem. Find each of the following:
1. $\E(M)$
2. $\var(M)$
3. $\E\left(W^2\right)$
4. $\var\left(W^2\right)$
5. $\E\left(S^2\right)$
6. $\var\left(S^2\right)$
7. $\cov\left(M, W^2\right)$
8. $\cov\left(M, S^2\right)$
9. $\cov\left(W^2, S^2\right)$
Answer
1. $1/\lambda$
2. $1/(5 \lambda^2)$
3. $1/\lambda^2$
4. $8/(5 \lambda^4)$
5. $1/\lambda^2$
6. $17/(10 \lambda^4)$
7. $2/(5 \lambda^3)$
8. $2/(5 \lambda^3)$
9. $8/(5 \lambda^4)$
Recall that for an ace-six flat die, faces 1 and 6 have probability $\frac{1}{4}$ each, while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each. Let $X$ denote the score when an ace-six flat die is thrown. Compute each of the following:
1. $\mu = \E(X)$
2. $\sigma^2 = \var(X)$
3. $d_3 = \E\left[(X - \mu)^3\right]$
4. $d_4 = \E\left[(X - \mu)^4\right]$
Answer
1. $7/2$
2. $15/4$
3. $0$
4. $333/16$
Suppose now that an ace-six flat die is tossed 8 times. Find each of the following:
1. $\E(M)$
2. $\var(M)$
3. $\E\left(W^2\right)$
4. $\var\left(W^2\right)$
5. $\E\left(S^2\right)$
6. $\var\left(S^2\right)$
7. $\cov\left(M, W^2\right)$
8. $\cov\left(M, S^2\right)$
9. $\cov\left(W^2, S^2\right)$
Answer
1. $7/2$
2. $15/32$
3. $15/4$
4. $27/32$
5. $15/4$
6. $603/448$
7. $0$
8. $0$
9. $27/32$
Data Analysis Exercises
Statistical software should be used for the problems in this subsection.
Consider the petal length and species variables in Fisher's iris data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean and standard deviation, and plot a density histogram for petal length.
3. Compute the sample mean and standard deviation, and plot a density histogram for petal length by species.
Answers
1. petal length: continuous, ratio. species: discrete, nominal
2. $m = 37.8$, $s = 17.8$
3. $m(0) = 14.6$, $s(0) = 1.7$; $m(1) = 55.5$, $s(1) = 30.5$; $m(2) = 43.2$, $s(2) = 28.7$
Consider the erosion variable in the Challenger data set.
1. Classify the variable by type and level of measurement.
2. Compute the mean and standard deviation
3. Plot a density histogram with the classes $[0, 5)$, $[5, 40)$, $[40, 50)$, $[50, 60)$.
Answer
1. continuous, ratio
2. $m = 7.7$, $s = 17.2$
Consider Michelson's velocity of light data.
1. Classify the variable by type and level of measurement.
2. Plot a density histogram.
3. Compute the sample mean and standard deviation.
4. Find the sample mean and standard deviation if the variable is converted to $\text{km}/\text{sec}$. The transformation is $y = x + 299\,000$
Answer
1. continuous, interval
2. $m = 852.4$, $s = 79.0$
3. $m = 299\,852.4$, $s = 79.0$
Consider Short's parallax of the sun data.
1. Classify the variable by type and level of measurement.
2. Plot a density histogram.
3. Compute the sample mean and standard deviation.
4. Find the sample mean and standard deviation if the variable is converted to degrees. There are 3600 seconds in a degree.
5. Find the sample mean and standard deviation if the variable is converted to radians. There are $\pi/180$ radians in a degree.
Answer
1. continuous, ratio
2. $m = 8.616$, $s = 0.749$
3. $m = 0.00239$, $s = 0.000208$
4. $m = 0.0000418$, $s = 0.00000363$
Consider Cavendish's density of the earth data.
1. Classify the variable by type and level of measurement.
2. Compute the sample mean and standard deviation.
3. Plot a density histogram.
Answer
1. continuous, ratio
2. $m = 5.448$, $s = 0.221$
Consider the M&M data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean and standard deviation for each color count variable.
3. Compute the sample mean and standard deviation for the total number of candies.
4. Plot a relative frequency histogram for the total number of candies.
5. Compute the sample mean and standard deviation, and plot a density histogram for the net weight.
Answer
1. color counts: discrete ratio. net weight: continuous ratio.
2. $m(r) = 9.60$, $s(r) = 4.12$; $m(g) = 7.40$, $s(g) = 0.57$; $m(bl) = 7.23$, $s(bl) = 4.35$; $m(o) = 6.63$, $s(o) = 3.69$; $m(y) = 13.77$, $s(y) = 6.06$; $m(br) = 12.47$, $s(br) = 5.13$
3. $m(n) = 57.10$, $s(n) = 2.4$
4. $m(w) = 49.215$, $s(w) = 1.522$
Consider the body weight, species, and gender variables in the Cicada data.
1. Classify the variables by type and level of measurement.
2. Compute the relative frequency function for species and plot the graph.
3. Compute the relative frequency function for gender and plot the graph.
4. Compute the sample mean and standard deviation, and plot a density histogram for body weight.
5. Compute the sample mean and standard deviation, and plot a density histogram for body weight by species.
6. Compute the sample mean and standard deviation, and plot a density histogram for body weight by gender.
Answer
1. body weight: continuous, ratio. species: discrete, nominal. gender: discrete, nominal.
2. $f(0) = 0.423$, $f(1) = 0.519$, $f(2) = 0.058$
3. $f(0) = 0.567$, $f(1) = 0.433$
4. $m = 0.180$, $s = 0.059$
5. $m(0) = 0.168$, $s(0) = 0.054$; $m(1) = 0.185$, $s(1) = 0.185$; $m(2) = 0.225$, $s(2) = 0.107$
6. $m(0) = 0.206$, $s(0) = 0.052$; $m(1) = 0.145$, $s(1) = 0.051$
Consider Pearson's height data.
1. Classify the variables by type and level of measurement.
2. Compute the sample mean and standard deviation, and plot a density histogram for the height of the father.
3. Compute the sample mean and standard deviation, and plot a density histogram for the height of the son.
Answer
1. continuous ratio
2. $m(x) = 67.69$, $s(x) = 2.75$
3. $m(y) = 68.68$, $s(y) = 2.82$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\iqr}{\text{iqr}}$ $\newcommand{\bs}{\boldsymbol}$
Descriptive Theory
Recall again the basic model of statistics: we have a population of objects of interest, and we have various measurements (variables) that we make on these objects. We select objects from the population and record the variables for the objects in the sample; these become our data. Our first discussion is from a purely descriptive point of view. That is, we do not assume that the data are generated by an underlying probability distribution. But as always, remember that the data themselves define a probability distribution, namely the empirical distribution.
Order Statistics
Suppose that $x$ is a real-valued variable for a population and that $\bs{x} = (x_1, x_2, \ldots, x_n)$ are the observed values of a sample of size $n$ corresponding to this variable. The order statistic of rank $k$ is the $k$th smallest value in the data set, and is usually denoted $x_{(k)}$. To emphasize the dependence on the sample size, another common notation is $x_{n:k}$. Thus, $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n-1)} \le x_{(n)}$ Naturally, the underlying variable $x$ should be at least at the ordinal level of measurement. The order statistics have the same physical units as $x$. One of the first steps in exploratory data analysis is to order the data, so order statistics occur naturally. In particular, note that the extreme order statistics are $x_{(1)} = \min\{x_1, x_2 \ldots, x_n\}, \quad x_{(n)} = \max\{x_1, x_2, \ldots, x_n\}$ The sample range is $r = x_{(n)} - x_{(1)}$ and the sample midrange is $\frac{r}{2} = \frac{1}{2}\left[x_{(n)} - x_{(1)}\right]$. These statistics have the same physical units as $x$ and are measures of the dispersion of the data set.
The Sample Median
If $n$ is odd, the sample median is the middle of the ordered observations, namely $x_{(k)}$ where $k = \frac{n + 1}{2}$. If $n$ is even, there is not a single middle observation, but rather two middle observations. Thus, the median interval is $\left[x_{(k)}, x_{(k + 1)}\right]$ where $k = \frac{n}{2}$. In this case, the sample median is defined to be the midpoint of the median interval, namely $\frac{1}{2}\left[x_{(k)} + x_{(k+1)}\right]$ where $k = \frac{n}{2}$. In a sense, this definition is a bit arbitrary because there is no compelling reason to prefer one point in the median interval over another. For more on this issue, see the discussion of error functions in the section on Sample Variance. In any event, the sample median is a natural statistic that gives a measure of the center of the data set.
Sample Quantiles
We can generalize the sample median discussed above to other sample quantiles. Thus, suppose that $p \in [0, 1]$. Our goal is to find the value that is the fraction $p$ of the way through the (ordered) data set. We define the rank of the value that we are looking for as $(n - 1)p + 1$. Note that the rank is a linear function of $p$, and that the rank is 1 when $p = 0$ and $n$ when $p = 1$. But of course, the rank will not be an integer in general, so we let $k = \lfloor (n - 1)p + 1 \rfloor$, the integer part of the desired rank, and we let $t = [(n - 1)p + 1] - k$, the fractional part of the desired rank. Thus, $(n - 1)p + 1 = k + t$ where $k \in \{1, 2, \ldots, n\}$ and $t \in [0, 1)$. So, using linear interpolation, we define the sample quantile of order $p$ to be $x_{[p]} = x_{(k)} + t \left[x_{(k+1)}-x_{(k)}\right] = (1 - t) x_{(k)} + t x_{(k+1)}$ Sample quantiles have the same physical units as the underlying variable $x$. The algorithm really does generalize the results for sample medians.
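The interpolation algorithm just described is easy to code. Here is a minimal Python sketch (not part of the original text; the function name is ours), applied to the small data set that appears in the computational exercises below.

```python
import math

def sample_quantile(data, p):
    """Sample quantile of order p using rank (n - 1) p + 1 and linear interpolation."""
    x = sorted(data)                  # order statistics x_(1), ..., x_(n)
    n = len(x)
    rank = (n - 1) * p + 1            # desired (possibly fractional) rank
    k = math.floor(rank)              # integer part
    t = rank - k                      # fractional part
    if k >= n:                        # p = 1 gives the maximum
        return x[-1]
    return (1 - t) * x[k - 1] + t * x[k]   # x_(k) + t [x_(k+1) - x_(k)]

# Example data from the exercises below: x = (3, 1, 2, 0, 2, 4, 3, 2, 1, 2)
data = [3, 1, 2, 0, 2, 4, 3, 2, 1, 2]
print([sample_quantile(data, p) for p in (0.25, 0.5, 0.75)])  # [1.25, 2.0, 2.75]
```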
The sample quantile of order $p = \frac{1}{2}$ is the median as defined earlier, in both cases where $n$ is odd and where $n$ is even.
The sample quantile of order $\frac{1}{4}$ is known as the first quartile and is frequently denoted $q_1$. The sample quantile of order $\frac{3}{4}$ is known as the third quartile and is frequently denoted $q_3$. The sample median, which is the quantile of order $\frac{1}{2}$, is sometimes denoted $q_2$. The interquartile range is defined to be $\iqr = q_3 - q_1$. Note that $\iqr$ is a statistic that measures the spread of the distribution about the median, but of course this number gives less information than the interval $[q_1, q_3]$.
The statistic $q_1 - \frac{3}{2} \iqr$ is called the lower fence and the statistic $q_3 + \frac{3}{2} \iqr$ is called the upper fence. Sometimes lower limit and upper limit are used instead of lower fence and upper fence. Values in the data set that are below the lower fence or above the upper fence are potential outliers, that is, values that don't seem to fit the overall pattern of the data. An outlier can be due to a measurement error, or may be a valid but rather extreme value. In any event, outliers usually deserve additional study.
The five statistics $\left(x_{(1)}, q_1, q_2, q_3, x_{(n)}\right)$ are often referred to as the five-number summary. Together, these statistics give a great deal of information about the data set in terms of the center, spread, and skewness. The five numbers roughly separate the data set into four intervals each of which contains approximately 25% of the data. Graphically, the five numbers, and the outliers, are often displayed as a boxplot, sometimes called a box and whisker plot. A boxplot consists of an axis that extends across the range of the data. A line is drawn from the smallest value that is not an outlier (of course this may be the minimum $x_{(1)}$) to the largest value that is not an outlier (of course, this may be the maximum $x_{(n)}$). Vertical marks (whiskers) are drawn at the ends of this line. A rectangular box extends from the first quartile $q_1$ to the third quartile $q_3$, with an additional whisker at the median $q_2$. Finally, the outliers are denoted as points (beyond the extreme whiskers). All statistical packages will compute the quartiles and most will draw boxplots. The picture below shows a boxplot with 3 outliers.
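Putting the pieces together, the following Python sketch (illustrative only, not a substitute for statistical software) computes the five-number summary, the fences, and any outliers for a small data set, using the same interpolation rule described above.

```python
import math

def quantile(x, p):
    # x must be sorted; rank (n - 1) p + 1 with linear interpolation, as above
    n = len(x)
    k = math.floor((n - 1) * p + 1)
    t = (n - 1) * p + 1 - k
    return x[-1] if k >= n else (1 - t) * x[k - 1] + t * x[k]

def five_number_summary(data):
    x = sorted(data)
    return (x[0], quantile(x, 0.25), quantile(x, 0.5), quantile(x, 0.75), x[-1])

def fences_and_outliers(data):
    _, q1, _, q3, _ = five_number_summary(data)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return lower, upper, [x for x in data if x < lower or x > upper]

data = [3, 1, 2, 0, 2, 4, 3, 2, 1, 2]   # data set from the exercises below
print(five_number_summary(data))         # (0, 1.25, 2.0, 2.75, 4)
print(fences_and_outliers(data))         # fences -1.0 and 5.0, no outliers
```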
Alternate Definitions
The algorithm given above is not the only reasonable way to define sample quantiles, and indeed there are lots of alternatives. One natural method would be to first compute the empirical distribution function $F(x) = \frac{1}{n} \sum_{i=1}^n \bs{1}(x_i \le x), \quad x \in \R$ Recall that $F$ has the mathematical properties of a distribution function, and in fact $F$ is the distribution function of the empirical distribution of the data. Recall that this is the distribution that places probability $\frac{1}{n}$ at each data value $x_i$ (so this is the discrete uniform distribution on $\{x_1, x_2, \ldots, x_n\}$ if the data values are distinct). Thus, $F(x) = \frac{k}{n}$ for $x \in [x_{(k)}, x_{(k+1)})$. Then, we could define the quantile function to be the inverse of the distribution function, as we usually do for probability distributions: $F^{-1}(p) = \min\{x \in \R: F(x) \ge p\}, \quad p \in (0, 1)$ It's easy to see that with this definition, the quantile of order $p \in (0, 1)$ is simply $x_{(k)}$ where $k = \lceil n p \rceil$.
Another method is to compute the rank of the quantile of order $p \in (0, 1)$ as $(n + 1)p$, rather than $(n - 1)p + 1$, and then use linear interpolation just as we have done. To understand the reasoning behind this method, suppose that the underlying variable $x$ takes value in an interval $(a, b)$. Then the $n$ points in the data set $\bs{x}$ separate this interval into $n + 1$ subintervals, so it's reasonable to think of $x_{(k)}$ as the quantile of order $\frac{k}{n + 1}$. This method also reduces to the standard calculation for the median when $p = \frac{1}{2}$. However, the method will fail if $p$ is so small that $(n + 1) p \lt 1$ or so large that $(n + 1) p > n$.
The primary definition that we give above is the one that is most commonly used in statistical software and spreadsheets. Moreover, when the sample size $n$ is large, it doesn't matter very much which of these competing quantile definitions is used. All will give similar results.
Transformations
Suppose again that $\bs{x} = (x_1, x_2, \ldots, x_n)$ is a sample of size $n$ from a population variable $x$, but now suppose also that $y = a + b x$ is a new variable, where $a \in \R$ and $b \in (0, \infty)$. Recall that transformations of this type are location-scale transformations and often correspond to changes in units. For example, if $x$ is the length of an object in inches, then $y = 2.54 x$ is the length of the object in centimeters. If $x$ is the temperature of an object in degrees Fahrenheit, then $y = \frac{5}{9}(x - 32)$ is the temperature of the object in degrees Celsius. Let $\bs{y} = \bs{a} + b \bs{x}$ denote the sample from the variable $y$.
Order statistics and quantiles are preserved under location-scale transformations:
1. $y_{(i)} = a + b x_{(i)}$ for $i \in \{1, 2, \ldots, n\}$
2. $y_{[p]} = a + b x_{[p]}$ for $p \in [0, 1]$
Proof
Part (a) follows easily from the fact that the location-scale transformation is strictly increasing and hence preserves order: $x_i \lt x_j$ if and only if $a + b x_i \lt a + b x_j$. For part (b), let $p \in [0, 1]$ and let $k \in \{1, 2, \ldots,n\}$ and $t \in [0, 1)$ be as above in the definition of the sample quantile of order $p$. Then
$y_{[p]} = y_{(k)} + t[y_{(k+1)} - y_{(k)}] = a + b x_{(k)} + t[a + b x_{(k+1)} - (a + b x_{(k)})] = a + b\left(x_{(k)} + t [x_{(k+1)}- x_{(k)}]\right) = a + b x_{[p]}$
Like standard deviation (our most important measure of spread), range and interquartile range are not affected by the location parameter, but are scaled by the scale parameter.
The range and interquartile range of $\bs{y}$ are
1. $r(\bs{y}) = b \, r(\bs{x})$
2. $\iqr(\bs{y}) = b \, \iqr(\bs{x})$
Proof
These results follow immediately from the previous result.
More generally, suppose $y = g(x)$ where $g$ is a strictly increasing real-valued function on the set of possible values of $x$. Let $\bs{y} = \left(g(x_1), g(x_2), \ldots, g(x_n)\right)$ denote the sample corresponding to the variable $y$. Then (as in the proof of Theorem 2), the order statistics are preserved so $y_{(i)} = g(x_{(i)})$. However, if $g$ is nonlinear, the quantiles are not preserved (because the quantiles involve linear interpolation). That is, $y_{[p]}$ and $g(x_{[p]})$ are not usually the same. When $g$ is convex or concave we can at least give an inequality for the sample quantiles.
Suppose that $y = g(x)$ where $g$ is strictly increasing. Then
1. $y_{(i)} = g\left(x_{(i)}\right)$ for $i \in \{1, 2, \ldots, n\}$
2. If $g$ is convex then $y_{[p]} \ge g\left(x_{[p]}\right)$ for $p \in [0, 1]$
3. If $g$ is concave then $y_{[p]} \le g\left(x_{[p]}\right)$ for $p \in [0, 1]$
Proof
As noted, part (a) follows since $g$ is strictly increasing and hence preserves order. Part (b) follows from the definition of convexity. For $p \in [0, 1]$, and $k \in \{1, 2, \ldots, n\}$ and $t \in [0, 1)$ as in the definition of the sample quantile of order $p$, we have $y_{[p]} = (1 - t) y_{(k)} + t y_{(k+1)} = (1 - t) g\left(x_{(k)}\right) + t g\left(x_{(k+1)}\right) \ge g\left[(1 - t) x_{(k)} + t x_{(k+1)}\right] = g\left(x_{[p]}\right)$ Part (c) follows by the same argument.
Stem and Leaf Plots
A stem and leaf plot is a graphical display of the order statistics $\left(x_{(1)}, x_{(2)}, \ldots, x_{(n)}\right)$. It has the benefit of showing the data in a graphical way, like a histogram, and at the same time, preserving the ordered data. First we assume that the data have a fixed number format: a fixed number of digits, then perhaps a decimal point and another fixed number of digits. A stem and leaf plot is constructed by using an initial part of this string as the stem, and the remaining parts as the leaves. There are lots of variations in how to do this, so rather than give an exhaustive, complicated definition, we will just look at a couple of examples in the exercise below.
Probability Theory
We continue our discussion of order statistics except that now we assume that the variables are random variables. Specifically, suppose that we have a basic random experiment, and that $X$ is a real-valued random variable for the experiment with distribution function $F$. We perform $n$ independent replications of the basic experiment to generate a random sample $\bs{X} = (X_1, X_2, \ldots, X_n)$ of size $n$ from the distribution of $X$. Recall that this is a sequence of independent random variables, each with the distribution of $X$. All of the statistics defined in the previous section make sense, but now of course, they are random variables. We use the notation established previously, except that we follow our usual convention of denoting random variables with capital letters. Thus, for $k \in \{1, 2, \ldots, n\}$, $X_{(k)}$ is the $k$th order statistic, that is, the $k$th smallest of $(X_1, X_2, \ldots, X_n)$. Our interest now is on the distribution of the order statistics and statistics derived from them.
Distribution of the $k$th order statistic
Finding the distribution function of an order statistic is a nice application of Bernoulli trials and the binomial distribution.
The distribution function $F_k$ of $X_{(k)}$ is given by $F_k(x) = \sum_{j=k}^n \binom{n}{j} \left[F(x)\right]^j \left[1 - F(x)\right]^{n - j}, \quad x \in \R$
Proof
For $x \in \R$, let $N_x = \sum_{i=1}^n \bs{1}(X_i \le x)$ so that $N_x$ is the number of sample variables that fall in the interval $(-\infty, x]$. The indicator variables in the sum are independent, and each takes the value 1 with probability $F(x)$. Thus, $N_x$ has the binomial distribution with parameters $n$ and $F(x)$. Next note that $X_{(k)} \le x$ if and only if $N_x \ge k$ for $x \in \R$ and $k \in \{1, 2, \ldots, n\}$, since both events mean that there are at least $k$ sample variables in the interval $(-\infty, x]$. Hence $\P\left(X_{(k)} \le x\right) = \P\left(N_x \ge k\right) = \sum_{j=k}^n \binom{n}{j} \left[F(x)\right]^j \left[1 - F(x)\right]^{n - j}$
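The binomial representation translates directly into code. The sketch below (Python; function name ours) evaluates $F_k(x)$ from $F(x)$ and checks it against the closed forms for the extreme order statistics, given below, in the standard uniform case.

```python
from math import comb

# Distribution function of the k-th order statistic via the binomial
# representation: F_k(x) = P(N_x >= k) where N_x ~ binomial(n, F(x)).
def order_stat_cdf(F_x, n, k):
    """F_k(x), given F_x = F(x), the underlying CDF evaluated at x."""
    return sum(comb(n, j) * F_x ** j * (1 - F_x) ** (n - j) for j in range(k, n + 1))

# Check against the closed forms for the extremes in the standard uniform
# case (F(x) = x): F_1(x) = 1 - (1 - x)^n and F_n(x) = x^n.
x, n = 0.3, 5
print(order_stat_cdf(x, n, 1), 1 - (1 - x) ** n)
print(order_stat_cdf(x, n, n), x ** n)
```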
As always, the extreme order statistics are particularly interesting.
The distribution functions $F_1$ of $X_{(1)}$ and $F_n$ of $X_{(n)}$ are given by
1. $F_1(x) = 1 - \left[1 - F(x)\right]^n$ for $x \in \R$
2. $F_n(x) = \left[F(x)\right]^n$ for $x \in \R$
The quantile functions $F_1^{-1}$ and $F_n^{-1}$ of $X_{(1)}$ and $X_{(n)}$ are given by
1. $F_1^{-1}(p) = F^{-1}\left[1 - (1 - p)^{1/n}\right]$ for $p \in (0, 1)$
2. $F_n^{-1}(p) = F^{-1}\left(p^{1/n}\right)$ for $p \in (0, 1)$
Proof
The formulas follow from the previous theorem and simple algebra. Recall that if $G$ is a distribution function, then the corresponding quantile function is given by $G^{-1}(p) = \min\{x \in \R: G(x) \ge p\}$ for $p \in (0, 1)$.
When the underlying distribution is continuous, we can give a simple formula for the probability density function of an order statistic.
Suppose now that $X$ has a continuous distribution with probability density function $f$. Then $X_{(k)}$ has a continuous distribution with probability density function $f_k$ given by $f_k(x) = \frac{n!}{(k - 1)! (n - k)!} \left[F(x)\right]^{k-1} \left[1 - F(x)\right]^{n-k} f(x), \quad x \in \R$
Proof
Of course, $f_k(x) = F_k^\prime(x)$. We take the derivatives term by term and use the product rule on $\frac{d}{dx}\left[F(x)\right]^j \left[1 - F(x)\right]^{n-j} = j \left[F(x)\right]^{j-1} f(x) \left[1 - F(x)\right]^{n-j} - (n - j)\left[F(x)\right]^j \left[1 - F(x)\right]^{n-j-1}f(x)$ We use the binomial identities $j \binom{n}{j} = n \binom{n - 1}{j - 1}$ and $(n - j) \binom{n}{j} = n \binom{n - 1}{j}$. The net effect is $f_k(x) = n f(x) \left[ \sum_{j=k}^n \binom{n - 1}{j - 1}[F(x)]^{j-1} [1 - F(x)]^{(n-1)-(j-1)} - \sum_{j=k}^{n-1} \binom{n-1}{j} [F(x)]^j [1 - F(x)]^{(n-1)-j}\right]$ The sums cancel, leaving only the $j = k$ term in the first sum. Hence $f_k(x) = n f(x) \binom{n-1}{k-1}[F(x)]^{k-1}[1 - F(x)]^{n-k}$ But $n \binom{n-1}{k-1} = \frac{n!}{(k-1)!(n-k)!}$.
Heuristic Proof
There is a simple heuristic argument for this result. First, $f_k(x) \, dx$ is the probability that $X_{(k)}$ is in an infinitesimal interval of size $dx$ about $x$. On the other hand, this event means that one of the sample variables is in the infinitesimal interval, $k - 1$ sample variables are less than $x$, and $n - k$ sample variables are greater than $x$. The number of ways of choosing these variables is the multinomial coefficient $\binom{n}{k - 1, 1, n - k} = \frac{n!}{(k - 1)! (n - k)!}$ By independence, the probability that the chosen variables are in the specified intervals is $\left[F(x)\right]^{k-1} \left[1 - F(x)\right]^{n-k} f(x) \, dx$
Here are the special cases for the extreme order statistics.
The probability density function $f_1$ of $X_{(1)}$ and $f_n$ of $X_{(n)}$ are given by
1. $f_1(x) = n \left[1 - F(x)\right]^{n-1} f(x)$ for $x \in \R$
2. $f_n(x) = n \left[F(x)\right]^{n-1} f(x)$ for $x \in \R$
Joint Distributions
We assume again that $X$ has a continuous distribution with distribution function $F$ and probability density function $f$.
Suppose that $j, k \in \{1, 2, \ldots, n\}$ with $j \lt k$. The joint probability density function $f_{j,k}$ of $\left(X_{(j)}, X_{(k)}\right)$ is given by
$f_{j,k}(x, y) = \frac{n!}{(j - 1)! (k - j - 1)! (n - k)!} \left[F(x)\right]^{j-1} \left[F(y) - F(x)\right]^{k - j - 1} \left[1 - F(y)\right]^{n-k} f(x) f(y); \quad x, \, y \in \R, x \lt y$
Heuristic Proof
We want to compute the probability that $X_{(j)}$ is in an infinitesimal interval $dx$ about $x$ and $X_{(k)}$ is in an infinitesimal interval $dy$ about $y$. Note that there must be $j - 1$ sample variables that are less than $x$, one variable in the infinitesimal interval about $x$, $k - j - 1$ sample variables that are between $x$ and $y$, one variable in the infinitesimal interval about $y$, and $n - k$ sample variables that are greater than $y$. The number of ways to select the variables is the multinomial coefficient $\binom{n}{j-1, 1, k - j - 1, 1, n - k} = \frac{n!}{(j - 1)! (k - j - 1)! (n - k)!}$ By independence, the probability that the chosen variables are in the specified intervals is $\left[F(x)\right]^{j-1} f(x) dx \left[F(y) - F(x)\right]^{k - j - 1} f(y) dy \left[1 - F(y)\right]^{n-k}$
From the joint distribution of two order statistics we can, in principle, find the distribution of various other statistics: the sample range $R$; sample quantiles $X_{[p]}$ for $p \in [0, 1]$, and in particular the sample quartiles $Q_1$, $Q_2$, $Q_3$; and the inter-quartile range IQR. The joint distribution of the extreme order statistics $(X_{(1)}, X_{(n)})$ is a particularly important case.
The joint probability density function $f_{1,n}$ of $\left(X_{(1)}, X_{(n)}\right)$ is given by $f_{1,n}(x, y) = n (n - 1) \left[F(y) - F(x)\right]^{n-2} f(x) f(y); \quad x, \, y \in \R, x \lt y$
Proof
This is a corollary of Theorem 7 with $j = 1$ and $k = n$.
Arguments similar to the one above can be used to obtain the joint probability density function of any number of the order statistics. Of course, we are particularly interested in the joint probability density function of all of the order statistics. It turns out that this density function has a remarkably simple form.
$\left(X_{(1)}, X_{(2)}, \ldots, X_{(n)}\right)$ has joint probability density function $g$ given by $g(x_1, x_2, \ldots, x_n) = n! f(x_1) \, f(x_2) \, \cdots \, f(x_n), \quad x_1 \lt x_2 \lt \cdots \lt x_n$
Proof
For each permutation $\bs{i} = (i_1, i_2, \ldots, i_n)$ of $(1, 2, \ldots, n)$, let $S_\bs{i} = \{\bs{x} \in \R^n: x_{i_1} \lt x_{i_2} \lt \cdots \lt x_{i_n}\}$. On $S_\bs{i}$, the mapping $(x_1, x_2, \ldots, x_n) \mapsto (x_{i_1}, x_{i_2}, \ldots, x_{i_n})$ is one-to-one, has continuous first partial derivatives, and has Jacobian 1. The sets $S_\bs{i}$ where $\bs{i}$ ranges over the $n!$ permutations of $(1, 2, \ldots, n)$ are disjoint. The probability that $(X_1, X_2, \ldots, X_n)$ is not in one of these sets is 0. The result now follows from the multivariate change of variables formula.
Heuristic Proof
Again, there is a simple heuristic argument for this result. For each $\bs{x} \in \R^n$ with $x_1 \lt x_2 \lt \cdots \lt x_n$, there are $n!$ permutations of the coordinates of $\bs{x}$. The probability density of $(X_1, X_2, \ldots, X_n)$ at each of these points is $f(x_1) \, f(x_2) \, \cdots \, f(x_n)$. Hence the probability density of $(X_{(1)}, X_{(2)}, \ldots, X_{(n)})$ at $\bs{x}$ is $n!$ times this product.
Probability Plots
A probability plot, also called a quantile-quantile plot or a Q-Q plot for short, is an informal, graphical test to determine if observed data come from a specified distribution. Thus, suppose that we observe real-valued data $(x_1, x_2, \ldots, x_n)$ from a random sample of size $n$. We are interested in the question of whether the data could reasonably have come from a continuous distribution with distribution function $F$. First, we order that data from smallest to largest; this gives us the sequence of observed values of the order statistics: $\left(x_{(1)}, x_{(2)}, \ldots, x_{(n)}\right)$.
Note that we can view $x_{(i)}$ as the sample quantile of order $\frac{i}{n + 1}$. Of course, by definition, the distribution quantile of order $\frac{i}{n + 1}$ is $y_i = F^{-1} \left( \frac{i}{n + 1} \right)$. If the data really do come from the distribution, then we would expect the points $\left(\left(x_{(1)}, y_1\right), \left(x_{(2)}, y_2\right) \ldots, \left(x_{(n)}, y_n\right)\right)$ to be close to the diagonal line $y = x$; conversely, strong deviation from this line is evidence that the distribution did not produce the data. The plot of these points is referred to as a probability plot.
Usually however, we are not trying to see if the data come from a particular distribution, but rather from a parametric family of distributions (such as the normal, uniform, or exponential families). We are usually forced into this situation because we don't know the parameters; indeed the next step, after the probability plot, may be to estimate the parameters. Fortunately, the probability plot method has a simple extension for any location-scale family of distributions. Thus, suppose that $G$ is a given distribution function. Recall that the location-scale family associated with $G$ has distribution function $F(x) = G \left( \frac{x - a}{b} \right)$ for $x \in \R$, where $a \in \R$ is the location parameter and $b \in (0, \infty)$ is the scale parameter. Recall also that for $p \in (0, 1)$, if $z_p = G^{-1}(p)$ denotes the quantile of order $p$ for $G$ and $y_p = F^{-1}(p)$ the quantile of order $p$ for $F$, then $y_p = a + b \, z_p$. It follows that if the probability plot constructed with distribution function $F$ is nearly linear (and in particular, if it is close to the diagonal line), then the probability plot constructed with distribution function $G$ will be nearly linear. Thus, we can use the distribution function $G$ without having to know the location and scale parameters.
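As an illustration (a bare-bones Python sketch using only the standard library; the simulated data and parameter values are hypothetical), the following code lists the pairs $(x_{(i)}, z_i)$ for a normal probability plot, where $z_i$ is the standard normal quantile of order $i/(n + 1)$.

```python
import random
import statistics

# A bare-bones normal probability plot: pair each order statistic x_(i) with
# the standard normal quantile z_i of order i/(n + 1).  Here the pairs are
# simply printed; in practice one would plot them and look for linearity.
random.seed(3)
n = 10
data = sorted(random.gauss(100, 15) for _ in range(n))   # simulated sample
std_normal = statistics.NormalDist()                     # G = standard normal
for i, x in enumerate(data, start=1):
    z = std_normal.inv_cdf(i / (n + 1))                  # quantile of order i/(n+1)
    print(f"{x:8.2f}  {z:6.3f}")
# Since the normal family is a location-scale family, an approximately linear
# pattern (not necessarily the diagonal) is consistent with normally
# distributed data.
```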
In the exercises below, you will explore probability plots for the normal, exponential, and uniform distributions. We will study a formal, quantitative procedure, known as the chi-square goodness of fit test in the chapter on Hypothesis Testing.
Exercises and Applications
Basic Properties
Suppose that $x$ is the temperature (in degrees Fahrenheit) for a certain type of electronic component after 10 hours of operation. A sample of 30 components has five number summary $(84, 102, 113, 120, 135)$.
1. Classify $x$ by type and level of measurement.
2. Find the range and interquartile range.
3. Find the five number summary, range, and interquartile range if the temperature is converted to degrees Celsius. The transformation is $y = \frac{5}{9}(x - 32)$.
Answer
1. continuous, interval
2. 51, 18
3. $(28.89, 38.89, 45.00, 48.89, 57.22)$, 28.33, 10
Suppose that $x$ is the length (in inches) of a machined part in a manufacturing process. A sample of 50 parts has five number summary (9.6, 9.8, 10.0, 10.1, 10.3).
1. Classify $x$ by type and level of measurement.
2. Find the range and interquartile range.
3. Find the five number summary, range, and interquartile if length is measured in centimeters. The transformation is $y = 2.54 x$.
Answer
1. continuous, ratio
2. 0.7, 0.3
3. $(24.38, 24.89, 25.40, 25.65, 26.16)$, 1.78, 0.76
Professor Moriarity has a class of 25 students in her section of Stat 101 at Enormous State University (ESU). For the first midterm exam, the five number summary was (16, 52, 64, 72, 81) (out of a possible 100 points). Professor Moriarity thinks the grades are a bit low and is considering various transformations for increasing the grades.
1. Find the range and interquartile range.
2. Suppose she adds 10 points to each grade. Find the five number summary, range, and interquartile range for the transformed grades.
3. Suppose she multiplies each grade by 1.2. Find the five number summary, range, and interquartile range for the transformed grades.
4. Suppose she uses the transformation $w = 10 \sqrt{x}$, which curves the grades greatly at the low end and very little at the high end. Give whatever information you can about the five number summary of the transformed grades.
5. Determine whether the low score of 16 is an outlier.
Answer
1. 65, 20
2. $(26, 62, 74, 82, 91)$, 65, 20
3. $(19.2, 62.4, 76.8, 86.4, 97.2)$, 78, 24
4. $y_{(1)} = 40$, $q_1 \le 72.11$, $q_2 \le 80$, $q_3 \le 84.85$, $y_{(25)} = 90$
5. The lower fence is 22, so yes 16 is an outlier.
Computational Exercises
All statistical software packages will compute order statistics and quantiles, draw stem-and-leaf plots and boxplots, and in general perform the numerical and graphical procedures discussed in this section. For real statistical experiments, particularly those with large data sets, the use of statistical software is essential. On the other hand, there is some value in performing the computations by hand, with small, artificial data sets, in order to master the concepts and definitions. In this subsection, do the computations and draw the graphs with minimal technological aids.
Suppose that $x$ is the number of math courses completed by an ESU student. A sample of 10 ESU students gives the data $\bs{x} = (3, 1, 2, 0, 2, 4, 3, 2, 1, 2)$.
1. Classify $x$ by type and level of measurement.
2. Give the order statistics
3. Compute the five number summary and draw the boxplot.
4. Compute the range and the interquartile range.
Answer
1. discrete, ratio
2. $(0, 1, 1, 2, 2, 2, 2, 3, 3, 4)$
3. $(0, 1.25, 2, 2.75, 4)$
4. 4, 1.5
Suppose that a sample of size 12 from a discrete variable $x$ has empirical density function given by $f(-2) = 1/12$, $f(-1) = 1/4$, $f(0) = 1/3$, $f(1) = 1/6$, $f(2) = 1/6$.
1. Give the order statistics.
2. Compute the five number summary and draw the boxplot.
3. Compute the range and the interquartile range.
Answer
1. $(-2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2)$
2. $(-2, -1, 0, 1, 2)$
3. 4, 2
The stem and leaf plot below gives the grades for a 100-point test in a probability course with 38 students. The first digit is the stem and the second digit is the leaf. Thus, the low score was 47 and the high score was 98. The scores in the 6 row are 60, 60, 62, 63, 65, 65, 67, 68.
$\begin{array}{l|l} 4 & 7 \\ 5 & 0346 \\ 6 & 00235578 \\ 7 & 0112346678899 \\ 8 & 0367889 \\ 9 & 1368 \end{array} \nonumber$
Compute the five number summary and draw the boxplot.
Answer
$(47, 65, 75, 83, 98)$
App Exercises
In the histogram app, construct a distribution with at least 30 values of each of the types indicated below. Note the five number summary.
1. A uniform distribution.
2. A symmetric, unimodal distribution.
3. A unimodal distribution that is skewed right.
4. A unimodal distribution that is skewed left.
5. A symmetric bimodal distribution.
6. A $u$-shaped distribution.
In the error function app, start with a distribution and add additional points as follows. Note the effect on the five number summary:
1. Add a point below $x_{(1)}$.
2. Add a point between $x_{(1)}$ and $q_1$.
3. Add a point between $q_1$ and $q_2$.
4. Add a point between $q_2$ and $q_3$.
5. Add a point between $q_3$ and $x_{(n)}$.
6. Add a point above $x_{(n)}$.
In the last problem, you may have noticed that when you add an additional point to the distribution, one or more of the five statistics does not change. In general, quantiles can be relatively insensitive to changes in the data.
The Uniform Distribution
Recall that the standard uniform distribution is the uniform distribution on the interval $[0, 1]$.
Suppose that $\bs{X}$ is a random sample of size $n$ from the standard uniform distribution. For $k \in \{1, 2, \ldots, n\}$, $X_{(k)}$ has the beta distribution, with left parameter $k$ and right parameter $n - k + 1$. The probability density function $f_k$ is given by $f_k(x) = \frac{n!}{(k - 1)! (n - k)!} x^{k-1} (1 - x)^{n-k}, \quad 0 \le x \le 1$
Proof
This follows immediately from the basic theorem above since $f(x) = 1$ and $F(x) = x$ for $0 \le x \le 1$. From the form of $f_k$ we can identify the distribution as beta with left parameter $k$ and right parameter $n - k + 1$.
In the order statistic experiment, select the standard uniform distribution and $n = 5$. Vary $k$ from 1 to 5 and note the shape of the probability density function of $X_{(k)}$. For each value of $k$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
It's easy to extend the results for the standard uniform distribution to the general uniform distribution on an interval.
Suppose that $\bs{X}$ is a random sample of size $n$ from the uniform distribution on the interval $[a, a + h]$ where $a \in \R$ and $h \in (0, \infty)$. For $k \in \{1, 2, \ldots, n\}$, $X_{(k)}$ has the beta distribution with left parameter $k$, right parameter $n - k + 1$, location parameter $a$, and scale parameter $h$. In particular,
1. $\E\left(X_{(k)}\right) = a + h \frac{k}{n + 1}$
2. $\var\left(X_{(k)}\right) = h^2 \frac{k (n - k + 1)}{(n + 1)^2 (n + 2)}$
Proof
Suppose that $\bs{U} = (U_1, U_2, \ldots, U_n)$ is a random sample of size $n$ from the standard uniform distribution, and let $X_i = a + h U_i$ for $i \in \{1, 2, \ldots, n\}$. Then $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on the interval $[a, a + h]$, and moreover, $X_{(k)} = a + h U_{(k)}$. So the distribution of $X_{(k)}$ follows from the previous result. Parts (a) and (b) follow from standard results for the beta distribution.
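A quick simulation check of part (a) (a Python sketch, illustrative only): for the standard uniform distribution ($a = 0$, $h = 1$) with $n = 5$, the sample mean of each order statistic over many replications should be close to $k/(n + 1)$.

```python
import random

# Simulation check: for a standard uniform sample of size n = 5, the mean of
# the k-th order statistic should be close to k/(n + 1) (a = 0, h = 1 above).
random.seed(4)
n, reps = 5, 100_000
sums = [0.0] * n
for _ in range(reps):
    for k, x in enumerate(sorted(random.random() for _ in range(n))):
        sums[k] += x
for k in range(n):
    print(k + 1, round(sums[k] / reps, 4), (k + 1) / (n + 1))
```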
We return to the standard uniform distribution and consider the range of the random sample.
Suppose that $\bs{X}$ is a random sample of size $n$ from the standard uniform distribution. The sample range $R$ has the beta distribution with left parameter $n - 1$ and right parameter 2. The probability density function $g$ is given by $g(r) = n (n - 1) r^{n-2} (1 - r), \quad 0 \le r \le 1$
Proof
From the result above, the joint PDF of $(X_{(1)}, X_{(n)})$ is $f_{1, n}(x, y) = n (n - 1) (y - x)^{n - 2}$ for $0 \le x \le y \le 1$. Hence, for $r \in [0, 1]$, $\P(R \gt r) = \P(X_{(n)} - X_{(1)} \gt r) = \int_0^{1-r} \int_{x+r}^1 n (n - 1) (y - x)^{n-2} \, dy \, dx = (n - 1) r^n - n r^{n-1} + 1$ It follows that the CDF of $R$ is $G(r) = n r^{n-1} - (n - 1)r^n$ for $0 \le r \le 1$. Taking the derivative with respect to $r$ and simplifying gives the PDF $g(r) = n (n - 1) r^{n-2} (1 - r)$ for $0 \le r \le 1$. We can tell from the form of $g$ that the distribution is beta with left parameter $n - 1$ and right parameter 2.
Once again, it's easy to extend this result to a general uniform distribution.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on $[a, a + h]$ where $a \in \R$ and $h \in (0, \infty)$. The sample range $R = X_{(n)} - X_{(1)}$ has the beta distribution with left parameter $n - 1$, right parameter $2$, and scale parameter $h$. In particular,
1. $\E(R) = h \frac{n - 1}{n + 1}$
2. $\var(R) = h^2 \frac{2 (n - 1)}{(n + 1)^2 (n + 2)}$
Proof
Suppose again that $\bs{U} = (U_1, U_2, \ldots, U_n)$ is a random sample of size $n$ from the standard uniform distribution, and let $X_i = a + h U_i$ for $i \in \{1, 2, \ldots, n\}$. Then $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on the interval $[a, a + h]$, and moreover, $X_{(k)} = a + h U_{(k)}$. Hence $X_{(n)} - X_{(1)} = h\left(U_{(n)} - U_{(1)}\right)$, so the distribution of $R$ follows from the previous result. Parts (a) and (b) follow from standard results for the beta distribution.
The joint distribution of the order statistics for a sample from the uniform distribution is easy to get.
Suppose that $(X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on the interval $[a, a + h]$, where $a \in \R$ and $h \in (0, \infty)$. Then $\left(X_{(1)}, X_{(2)}, \ldots, X_{(n)}\right)$ is uniformly distributed on $\left\{\bs{x} \in [a, a + h]^n: a \le x_1 \le x_2 \le \cdots \le x_n \lt a + h\right\}$.
Proof
This follows easily from the fact that $(X_1, X_2, \ldots, X_n)$ is uniformly distributed on $[a, a + h]^n$. From the result above, the joint PDF of the order statistics is $g(x_1, x_2, \ldots, x_n) = n! / h^n$ for $(x_1, x_2, \ldots, x_n) \in [a, a + h]^n$ with $a \le x_1 \le x_2 \le \cdots \le x_n \le a + h$.
The Exponential Distribution
Recall that the exponential distribution with rate parameter $\lambda \gt 0$ has probability density function $f(x) = \lambda e^{-\lambda x}, \quad 0 \le x \lt \infty$ The exponential distribution is widely used to model failure times and other random times under certain ideal conditions. In particular, the exponential distribution governs the times between arrivals in the Poisson process.
Suppose that $\bs{X}$ is a random sample of size $n$ from the exponential distribution with rate parameter $\lambda$. The probability density function of the $k$th order statistic $X_{(k)}$ is $f_k(x) = \frac{n!}{(k - 1)! (n - k)!} \lambda (1 - e^{-\lambda x})^{k-1} e^{-\lambda(n - k + 1)x}, \quad 0 \le x \lt \infty$ In particular, the minimum of the variables $X_{(1)}$ also has an exponential distribution, but with rate parameter $n \lambda$.
Proof
The PDF of $X_{(k)}$ follows from the theorem above since $F(x) = 1 - e^{-\lambda x}$ for $0 \le x \lt \infty$. Substituting $k = 1$ gives $f_1(x) = n \lambda e^{-n \lambda x}$ for $0 \le x \lt \infty$.
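The fact that $X_{(1)}$ is exponential with rate $n \lambda$ is easy to check by simulation. The sketch below (Python, with illustrative parameter values $\lambda = 2$ and $n = 5$) estimates $\E(X_{(1)})$, which should be close to $1/(n \lambda)$.

```python
import random

# Simulation check that the minimum of an exponential(lam) sample of size n
# is exponential with rate n * lam; with lam = 2 and n = 5 the mean of the
# minimum should be close to 1/(n * lam) = 0.1.
random.seed(5)
lam, n, reps = 2.0, 5, 100_000
total = 0.0
for _ in range(reps):
    total += min(random.expovariate(lam) for _ in range(n))
print(total / reps)   # close to 0.1
```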
In the order statistic experiment, select the standard exponential distribution and $n = 5$. Vary $k$ from 1 to 5 and note the shape of the probability density function of $X_{(k)}$. For each value of $k$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Suppose again that $\bs{X}$ is a random sample of size $n$ from the exponential distribution with rate parameter $\lambda$. The sample range $R$ has the same distribution as the maximum of a random sample of size $n - 1$ from the exponential distribution. The probability density function is $h(t) = (n - 1) \lambda (1 - e^{-\lambda t})^{n - 2} e^{-\lambda t}, \quad 0 \le t \lt \infty$
Proof
By the result above, $(X_{(1)}, X_{(n)})$ has joint PDF $f_{1, n}(x, y) = n (n - 1) \lambda^2 (e^{-\lambda x} - e^{-\lambda y})^{n-2} e^{-\lambda x} e^{-\lambda y}$ for $0 \le x \le y \lt \infty$. Hence for $0 \le t \lt \infty$, $\P(R \le t) = \P(X_{(n)} - X_{(1)} \le t) = \int_0^\infty \int_x^{x + t} n (n - 1) \lambda^2 (e^{-\lambda x} - e^{-\lambda y})^{n-2} e^{-\lambda x} e^{-\lambda y} \, dy \, dx$ Substituting $u = e^{-\lambda y}$, $du = -\lambda e^{-\lambda y} \, dy$ into the inside integral and evaluating gives $\P(R \le t) = \int_0^\infty n \lambda e^{-n \lambda x} (1 - e^{-\lambda t})^{n-1} \, dx = (1 - e^{-\lambda t})^{n-1}$ Differentiating with respect to $t$ gives the PDF. Comparing with our previous result, we see that this is the PDF of the maximum of a sample of size $n - 1$ from the exponential distribution.
Suppose again that $\bs{X}$ is a random sample of size $n$ from the exponential distribution with rate parameter $\lambda$. The joint probability density function of the order statistics $(X_{(1)}, X_{(2)}, \ldots, X_{(n)})$ is $g(x_1, x_2, \ldots, x_n) = n! \lambda^n e^{-\lambda(x_1 + x_2 + \cdots + x_n)}, \quad 0 \le x_1 \le x_2 \le \cdots \le x_n \lt \infty$
Proof
This follows from the result above and simple algebra.
Dice
Four fair dice are rolled. Find the probability density function of each of the order statistics.
Answer
$x$ 1 2 3 4 5 6
$f_1(x)$ $\frac{671}{1296}$ $\frac{369}{1296}$ $\frac{175}{1296}$ $\frac{65}{1296}$ $\frac{15}{1296}$ $\frac{1}{1296}$
$f_2(x)$ $\frac{171}{1296}$ $\frac{357}{1296}$ $\frac{363}{1296}$ $\frac{261}{1296}$ $\frac{123}{1296}$ $\frac{21}{1296}$
$f_3(x)$ $\frac{21}{1296}$ $\frac{123}{1296}$ $\frac{261}{1296}$ $\frac{363}{1296}$ $\frac{357}{1296}$ $\frac{171}{1296}$
$f_4(x)$ $\frac{1}{1296}$ $\frac{15}{1296}$ $\frac{65}{1296}$ $\frac{175}{1296}$ $\frac{369}{1296}$ $\frac{671}{1296}$
In the dice experiment, select the order statistic and die distribution given in parts (a)–(d) below. Increase the number of dice from 1 to 20, noting the shape of the probability density function at each stage. Now with $n = 4$, run the simulation 1000 times, and note the apparent convergence of the relative frequency function to the probability density function.
1. Maximum score with fair dice.
2. Minimum score with fair dice.
3. Maximum score with ace-six flat dice.
4. Minimum score with ace-six flat dice.
Four fair dice are rolled. Find the joint probability density function of the four order statistics.
Answer
The joint probability density function $g$ is defined on $\{(x_1, x_2, x_3, x_4) \in \{1, 2, 3, 4, 5, 6\}^4: x_1 \le x_2 \le x_3 \le x_4\}$
1. $g(x_1, x_2, x_3, x_4) = \frac{1}{1296}$ if the coordinates are all the same (there are 6 such vectors).
2. $g(x_1, x_2, x_3, x_4) = \frac{4}{1296}$ if there are two distinct coordinates, one value occurring 3 times and the other value once (there are 30 such vectors).
3. $g(x_1, x_2, x_3, x_4) = \frac{6}{1296}$ if there are two distinct coordinates in $(x_1, x_2, x_3, x_4)$, each value occurring 2 times (there are 15 such vectors).
4. $g(x_1, x_2, x_3, x_4) = \frac{12}{1296}$ if there are three distinct coordinates, one value occurring twice and the other values once (there are 60 such vectors).
5. $g(x_1, x_2, x_3, x_4) = \frac{24}{1296}$ if the coordinates are distinct (there are 15 such vectors).
Four fair dice are rolled. Find the probability density function of the sample range.
Answer
$R$ has probability density function $h$ given by $h(0) = \frac{6}{1296}, \; h(1) = \frac{70}{1296}, \; h(2) = \frac{200}{1296}, \; h(3) = \frac{330}{1296}, \; h(4) = \frac{388}{1296}, \; h(5) = \frac{302}{1296}$
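These counts are easy to verify by brute force. The following Python sketch, using only the standard library, enumerates all $6^4 = 1296$ equally likely outcomes and tabulates the sample range.

```python
from itertools import product
from collections import Counter

# Enumerate all 6^4 = 1296 equally likely outcomes of four fair dice
# and count how many give each value of the sample range.
counts = Counter(max(roll) - min(roll) for roll in product(range(1, 7), repeat=4))
for r in range(6):
    print(r, counts[r], "/ 1296")   # 6, 70, 200, 330, 388, 302
```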
Probability Plot Simulations
In the probability plot experiment, set the sampling distribution to the normal distribution with mean 5 and standard deviation 2. Set the sample size to $n = 20$. For each of the following test distributions, run the experiment 50 times and note the geometry of the probability plot:
1. Standard normal
2. Uniform on the interval $[0, 1]$
3. Exponential with parameter 1
In the probability plot experiment, set the sampling distribution to the uniform distribution on $[4, 10]$. Set the sample size to $n = 20$. For each of the following test distributions, run the experiment 50 times and note the geometry of the probability plot:
1. Standard normal
2. Uniform on the interval $[0, 1]$
3. Exponential with parameter 1
In the probability plot experiment, set the sampling distribution to the exponential distribution with parameter 3. Set the sample size to $n = 20$. For each of the following test distributions, run the experiment 50 times and note the geometry of the probability plot:
1. Standard normal
2. Uniform on the interval $[0, 1]$
3. Exponential with parameter 1
Data Analysis Exercises
Statistical software should be used for the problems in this subsection.
Consider the petal length and species variables in Fisher's iris data.
1. Classify the variables by type and level of measurement.
2. Compute the five number summary and draw the boxplot for petal length.
3. Compute the five number summary and draw the boxplot for petal length by species.
4. Draw the normal probability plot for petal length.
Answers
1. petal length: continuous, ratio. species: discrete, nominal.
2. $(10, 15, 44, 51, 69)$
3. species 0: $(10, 14, 15, 16, 19)$; species 1: $(45, 51, 55.5, 59, 69)$; species 2: $(30, 40, 44, 47, 56)$
Consider the erosion variable in the Challenger data set.
1. Classify the variable by type and level of measurement.
2. Compute the five number summary and draw the boxplot.
3. Identify any outliers.
Answer
1. continuous, ratio
2. $(0, 0, 0, 0, 53)$
3. All of the positive values 28, 40, 48, and 53 are outliers.
A stem and leaf plot of Michelson's velocity of light data is given below. In this example, the last digit (which is always 0) has been left out, for convenience. Also, note that there are two sets of leaves for each stem, one corresponding to leaves from 0 to 4 (so actually from 00 to 40) and the other corresponding to leaves from 5 to 9 (so actually from 50 to 90). Thus, the minimum value is 620 and the numbers in the second 7 row are 750, 760, 760, and so forth.
$\begin{array}{l|l} 6 & 2 \ 6 & 5 \ 7 & 222444 \ 7 & 566666788999 \ 8 & 000001111111111223344444444 \ 9 & 0011233444 \ 9 & 55566667888 \ 10 & 000 \ 10 & 7 \end{array} \nonumber$
1. Classify the variable by type and level of measurement.
2. Compute the five number summary and draw the boxplot.
3. Compute the five number summary for the velocity in $\text{km}/\text{s}$. The transformation is $y = x + 299\,000$.
4. Draw the normal probability plot.
Answer
1. continuous, interval
2. $(620, 805, 850, 895, 1070)$
3. $(299\,620, 299\,805, 299\,850, 299\,895, 300\,070)$
Consider Short's parallax of the sun data.
1. Classify the variable by type and level of measurement.
2. Compute the five number summary and draw the boxplot.
3. Compute the five number summary and draw the boxplot if the variable is converted to degrees. There are 3600 seconds in a degree.
4. Compute the five number summary and draw the boxplot if the variable is converted to radians. There are $\pi/180$ radians in a degree.
5. Draw the normal probability plot.
Answer
1. continuous, ratio
2. $(5.76, 8.34, 8.50, 9.02, 10.57)$
3. $(0.00160, 0.00232, 0.00236, 0.00251, 0.00294)$
4. $(0.0000278, 0.0000404, 0.0000412, 0.0000437, 0.0000512)$
Consider Cavendish's density of the earth data.
1. Classify the variable by type and level of measurement.
2. Compute the five number summary and draw the boxplot.
3. Draw the normal probability plot.
Answer
1. continuous, ratio
2. $(4.88, 5.30, 5.46, 5.61, 5.85)$
Consider the M&M data.
1. Classify the variables by type and level of measurement.
2. Compute the five number summary and draw the boxplot for each color count.
3. Construct a stem and leaf plot for the total number of candies.
4. Compute the five number summary and draw the boxplot for the total number of candies.
5. Compute the five number summary and draw the boxplot for net weight.
Answer
1. color counts: discrete ratio. net weight: continuous ratio.
2. red: $(3, 5.5, 9, 14, 20)$; green: $(2, 5, 7, 9, 17)$; blue: $(1, 4, 6.5, 10, 19)$; orange: $(0, 3.5, 6, 10.5, 13)$; yellow: $(3, 8, 13.5, 18, 26)$; brown: $(4, 8, 12.5, 18, 20)$
3. $\begin{array}{l|l} 5 & 0 \ 5 & 3 \ 5 & 45555 \ 5 & 6666777 \ 5 & 888888888999 \ 6 & 0011 \end{array} \nonumber$
4. $(50, 55.5, 58, 60, 61)$
5. $(46.22, 48.28, 49.07, 50.23, 52.06)$
Consider the body weight, species, and gender variables in the Cicada data.
1. Classify the variables by type and level of measurement.
2. Compute the five number summary and draw the boxplot for body weight.
3. Compute the five number summary and draw the boxplot for body weight by species.
4. Compute the five number summary and draw the boxplot for body weight by gender.
Answer
1. body weight: continuous, ratio. species: discrete, nominal. gender: discrete, nominal.
2. $(0.08, 0.13, 0.17, 0.22, 0.39)$
3. species 0: $(0.08, 0.13, 0.16, 0.21, 0.27)$; species 1: $(0.08, 0.14, 0.18, 0.23, 0.31)$; species 2: $(0.12, 0.12, 0.215, 0.29, 0.39)$
4. female: $(0.08, 0.17, 0.21, 0.25, 0.31)$; male: $(0.08, 0.12, 0.14, 0.16, 0.39)$
Consider Pearson's height data.
1. Classify the variables by type and level of measurement.
2. Compute the five number summary and sketch the boxplot for the height of the father.
3. Compute the five number summary and sketch the boxplot for the height of the son.
Answer
1. continuous ratio
2. $(59.0, 65.8, 67.8, 69.6, 75.4)$
3. $(58.5, 66.9, 68.6, 70.5, 78.4)$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/06%3A_Random_Samples/6.06%3A_Order_Statistics.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\sst}{\text{sst}}$ $\newcommand{\ssr}{\text{ssr}}$ $\newcommand{\sse}{\text{sse}}$ $\newcommand{\se}{\text{se}}$ $\newcommand{\bs}{\boldsymbol}$
Descriptive Theory
Recall the basic model of statistics: we have a population of objects of interest, and we have various measurements (variables) that we make on these objects. We select objects from the population and record the variables for the objects in the sample; these become our data. Our first discussion is from a purely descriptive point of view. That is, we do not assume that the data are generated by an underlying probability distribution. But as always, remember that the data themselves define a probability distribution, namely the empirical distribution that assigns equal probability to each data point.
Suppose that $x$ and $y$ are real-valued variables for a population, and that $\left((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\right)$ is an observed sample of size $n$ from $(x, y)$. We will let $\bs{x} = (x_1, x_2, \ldots, x_n)$ denote the sample from $x$ and $\bs{y} = (y_1, y_2, \ldots, y_n)$ the sample from $y$. In this section, we are interested in statistics that are measures of association between the $\bs{x}$ and $\bs{y}$, and in finding the line (or other curve) that best fits the data.
Recall that the sample means are $m(\bs{x}) = \frac{1}{n} \sum_{i=1}^n x_i, \quad m(\bs{y}) = \frac{1}{n} \sum_{i=1}^n y_i$ and the sample variances are $s^2(\bs{x}) = \frac{1}{n - 1} \sum_{i=1}^n [x_i - m(\bs{x})]^2, \quad s^2(\bs{y}) = \frac{1}{n - 1} \sum_{i=1}^n [y_i - m(\bs{y})]^2$
Scatterplots
Often, the first step in exploratory data analysis is to draw a graph of the points; this is called a scatterplot and can give a visual sense of the statistical relationship between the variables.
In particular, we are interested in whether the cloud of points seems to show a linear trend or whether some nonlinear curve might fit the cloud of points. We are interested in the extent to which one variable $x$ can be used to predict the other variable $y$.
Definitions
Our next goal is to define statistics that measure the association between the $x$ and $y$ data.
The sample covariance is defined to be $s(\bs{x}, \bs{y}) = \frac{1}{n - 1} \sum_{i=1}^n [x_i - m(\bs{x})][y_i - m(\bs{y})]$ Assuming that the data vectors are not constant, so that the standard deviations are positive, the sample correlation is defined to be $r(\bs{x}, \bs{y}) = \frac{s(\bs{x}, \bs{y})}{s(\bs{x}) s(\bs{y})}$
Note that the sample covariance is an average of the product of the deviations of the $x$ and $y$ data from their means. Thus, the physical unit of the sample covariance is the product of the units of $x$ and $y$. Correlation is a standardized version of covariance. In particular, correlation is dimensionless (has no physical units), since the covariance in the numerator and the product of the standard deviations in the denominator have the same units (the product of the units of $x$ and $y$). Note also that covariance and correlation have the same sign: positive, negative, or zero. In the first case, the data $\bs{x}$ and $\bs{y}$ are said to be positively correlated; in the second case $\bs{x}$ and $\bs{y}$ are said to be negatively correlated; and in the third case $\bs{x}$ and $\bs{y}$ are said to be uncorrelated.
To see that the sample covariance is a measure of association, recall first that the point $\left(m(\bs{x}), m(\bs{y})\right)$ is a measure of the center of the bivariate data. Indeed, if each point is the location of a unit mass, then $\left(m(\bs{x}), m(\bs{y})\right)$ is the center of mass as defined in physics. Horizontal and vertical lines through this center point divide the plane into four quadrants. The product deviation $[x_i - m(\bs{x})][y_i - m(\bs{y})]$ is positive in the first and third quadrants and negative in the second and fourth quadrants. After we study linear regression below, we will have a much deeper sense of what covariance measures.
You may be perplexed that we average the product deviations by dividing by $n - 1$ rather than $n$. The best explanation is that in the probability model discussed below, the sample covariance is an unbiased estimator of the distribution covariance. However, the mode of averaging can also be understood in terms of degrees of freedom, as was done for sample variance. Initially, we have $2 n$ degrees of freedom in the bivariate data. We lose two by computing the sample means $m(\bs{x})$ and $m(\bs{y})$. Of the remaining $2 n - 2$ degrees of freedom, we lose $n - 1$ by computing the product deviations. Thus, we are left with $n - 1$ degrees of freedom total. As is typical in statistics, we average not by dividing by the number of terms in the sum but rather by the number of degrees of freedom in those terms. However, from a purely descriptive point of view, it would also be reasonable to divide by $n$.
Recall that there is a natural probability distribution associated with the data, namely the empirical distribution that gives probability $\frac{1}{n}$ to each data point $(x_i, y_i)$. (Thus, if these points are distinct this is the discrete uniform distribution on the data.) The sample means are simply the expected values of this bivariate distribution, and except for a constant multiple (dividing by $n - 1$ rather than $n$), the sample variances are simply the variances of this bivariate distribution. Similarly, except for a constant multiple (again dividing by $n - 1$ rather than $n$), the sample covariance is the covariance of the bivariate distribution and the sample correlation is the correlation of the bivariate distribution. All of the following results in our discussion of descriptive statistics are actually special cases of more general results for probability distributions.
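For readers who want to experiment numerically, here is a minimal Python sketch, assuming NumPy is available, that computes the sample covariance and correlation directly from the definitions; the data values are arbitrary illustrations. Note that numpy.cov and numpy.corrcoef use the same $n - 1$ normalization by default.

```python
import numpy as np

# Sample covariance and correlation computed directly from the definitions.
x = np.array([1.0, 3.0, 6.0, 2.0, 8.0])
y = np.array([1.0, 3.0, 4.0, 1.0, 5.0])
n = len(x)
s_xy = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
r_xy = s_xy / (x.std(ddof=1) * y.std(ddof=1))
print(s_xy, r_xy)
# NumPy's built-in versions use the same n - 1 normalization by default.
print(np.cov(x, y)[0, 1], np.corrcoef(x, y)[0, 1])
```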
Properties of Covariance
The next few exercises establish some essential properties of sample covariance. As usual, bold symbols denote samples of a fixed size $n$ from the corresponding population variables (that is, vectors of length $n$), while symbols in regular type denote real numbers. Our first result is a formula for sample covariance that is sometimes better than the definition for computational purposes. To state the result succinctly, let $\bs{x} \bs{y} = (x_1 \, y_1, x_2 \, y_2, \ldots, x_n \, y_n)$ denote the sample from the product variable $x y$.
The sample covariance can be computed as follows: $s(\bs{x}, \bs{y}) = \frac{1}{n - 1} \sum_{i=1}^n x_i \, y_i - \frac{n}{n - 1} m(\bs{x}) m(\bs{y}) = \frac{n}{n - 1} [m(\bs{x y}) - m(\bs{x}) m(\bs{y})]$
Proof
Note that \begin{align} \sum_{i=1}^n [x_i - m(\bs{x})][y_i - m(\bs{y})] & = \sum_{i=1}^n [x_i y_i - x_i m(\bs{y}) - y_i m(\bs{x}) + m(\bs{x}) m(\bs{y})] \ & = \sum_{i=1}^n x_i y_i - m(\bs{y}) \sum_{i=1}^n x_i - m(\bs{x}) \sum_{i=1}^n y_i + n m(\bs{x}) m(\bs{y}) \ & = \sum_{i=1}^n x_i y_i - n m(\bs{y}) m(\bs{x}) - n m(\bs{x}) m(\bs{y}) + n m(\bs{x})m(\bs{y}) \ & = \sum_{i=1}^n x_i y_i - n m(\bs{x}) m(\bs{y}) \end{align}
The following theorem gives another formula for the sample covariance, one that does not require the computation of intermediate statistics.
The sample covariance can be computed as follows: $s(\bs{x}, \bs{y}) = \frac{1}{2 n (n - 1)} \sum_{i=1}^n \sum_{j=1}^n (x_i - x_j)(y_i - y_j)$
Proof
Note that \begin{align} \sum_{i=1}^n \sum_{j=1}^n (x_i - x_j)(y_i - y_j) & = \sum_{i=1}^n \sum_{j=1}^n [x_i - m(\bs{x}) + m(\bs{x}) - x_j][y_i - m(\bs{y}) + m(\bs{y}) - y_j] \ & = \sum_{i=1}^n \sum_{j=1}^n \left([x_i - m(\bs{x})][y_i - m(\bs{y})] + [x_i - m(\bs{x})][m(\bs{y}) - y_j] + [m(\bs{x}) - x_j][y_i - m(\bs{y})] + [m(\bs{x}) - x_j][m(\bs{y}) - y_j]\right) \end{align} We compute the sums term by term. The first is $n \sum_{i=1}^n [x_i - m(\bs{x})][y_i - m(\bs{y})]$ The second two sums are 0. The last sum is $n \sum_{j=1}^n [m(\bs{x}) - x_j][m(\bs{y}) - y_j] = n \sum_{i=1}^n [x_i - m(\bs{x})][y_i - m(\bs{y})]$ Dividing the entire sum by $2 n (n - 1)$ results in $s(\bs{x}, \bs{y})$.
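A quick numerical check of the two computational formulas, in the same Python setting as before (a sketch with arbitrary simulated data, assuming NumPy):

```python
import numpy as np

# Check that the two computational formulas agree with the definition
# of the sample covariance.
rng = np.random.default_rng(1)
x, y = rng.random(10), rng.random(10)
n = len(x)
s_def = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
s_prod = (n / (n - 1)) * ((x * y).mean() - x.mean() * y.mean())
s_pair = sum((x[i] - x[j]) * (y[i] - y[j]) for i in range(n) for j in range(n)) / (2 * n * (n - 1))
print(s_def, s_prod, s_pair)   # all three agree up to rounding
```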
As the name suggests, sample covariance generalizes sample variance.
$s(\bs{x}, \bs{x}) = s^2(\bs{x})$.
In light of the previous theorem, we can now see that the first computational formula and the second computational formula above generalize the computational formulas for sample variance. Clearly, sample covariance is symmetric.
$s(\bs{x}, \bs{y}) = s(\bs{y}, \bs{x})$.
Sample covariance is linear in the first argument with the second argument fixed.
If $\bs{x}$, $\bs{y}$, and $\bs{z}$ are data vectors from population variables $x$, $y$, and $z$, respectively, and if $c$ is a constant, then
1. $s(\bs{x} + \bs{y}, \bs{z}) = s(\bs{x}, \bs{z}) + s(\bs{y}, \bs{z})$
2. $s(c \bs{x}, \bs{y}) = c s(\bs{x}, \bs{y})$
Proof
1. Recall that $m(\bs{x} + \bs{y}) = m(\bs{x}) + m(\bs{y})$. Hence \begin{align} s(\bs{x} + \bs{y}, \bs{z}) & = \frac{1}{n - 1} \sum_{i=1}^n [x_i + y_i - m(\bs{x} + \bs{y})][z_i - m(\bs{z})] \ & = \frac{1}{n - 1} \sum_{i=1}^n \left([x_i - m(\bs{x})] + [y_i - m(\bs{y})]\right)[z_i - m(\bs{z})] \ & = \frac{1}{n - 1} \sum_{i=1}^n [x_i - m(\bs{x})][z_i - m(\bs{z})] + \frac{1}{n - 1} \sum_{i=1}^n [y_i - m(\bs{y})][z_i - m(\bs{z})] \ & = s(\bs{x}, \bs{z}) + s(\bs{y}, \bs{z}) \end{align}
2. Recall that $m(c \bs{x}) = c m(\bs{x})$. Hence \begin{align} s(c \bs{x}, \bs{y}) & = \frac{1}{n - 1} \sum_{i=1}^n [c x_i - m(c \bs{x})][y_i - m(\bs{y})] \ & = \frac{1}{n - 1} \sum_{i=1}^n [c x_i - c m(\bs{x})][y_i - m(\bs{y})] = c s(\bs{x}, \bs{y}) \end{align}
By symmetry, sample covariance is also linear in the second argument with the first argument fixed, and hence is bilinear. The general version of the bilinear property is given in the following theorem:
Suppose that $\bs{x}_i$ is a data vector from a population variable $x_i$ for $i \in \{1, 2, \ldots, k\}$ and that $\bs{y}_j$ is a data vector from a population variable $y_j$ for $j \in \{1, 2, \ldots, l\}$. Suppose also that $a_1, \, a_2, \ldots, \, a_k$ and $b_1, \, b_2, \ldots, b_l$ are constants. Then $s \left( \sum_{i=1}^k a_i \, \bs{x}_i, \sum_{j = 1}^l b_j \, \bs{y}_j \right) = \sum_{i=1}^k \sum_{j=1}^l a_i \, b_j \, s(\bs{x}_i, \bs{y}_j)$
A special case of the bilinear property provides a nice way to compute the sample variance of a sum.
$s^2(\bs{x} + \bs{y}) = s^2(\bs{x}) + 2 s(\bs{x}, \bs{y}) + s^2(\bs{y})$.
Proof
From the preceding results, \begin{align} s^2(\bs{x} + \bs{y}) & = s(\bs{x} + \bs{y}, \bs{x} + \bs{y}) = s(\bs{x}, \bs{x}) + s(\bs{x}, \bs{y}) + s(\bs{y}, \bs{x}) + s(\bs{y}, \bs{y}) \ & = s^2(\bs{x}) + 2 s(\bs{x}, \bs{y}) + s^2(\bs{y}) \end{align}
The generalization of this result to sums of three or more vectors is completely straightforward: namely, the sample variance of a sum is the sum of all of the pairwise sample covariances. Note that the sample variance of a sum can be greater than, less than, or equal to the sum of the sample variances, depending on the sign and magnitude of the pure covariance term. In particular, if the vectors are pairwise uncorrelated, then the variance of the sum is the sum of the variances.
If $\bs{c}$ is a constant data set then $s(\bs{x}, \bs{c}) = 0$.
Proof
This follows directly from the definition. If $c_i = c$ for each $i$, then $m(\bs{c}) = c$ and hence $c_i - m(\bs{c}) = 0$ for each $i$.
Combining the result in the last exercise with the bilinear property, we see that covariance is unchanged if constants are added to the data sets. That is, if $\bs{c}$ and $\bs{d}$ are constant vectors then $s(\bs{x}+ \bs{c}, \bs{y} + \bs{d}) = s(\bs{x}, \bs{y})$.
Properties of Correlation
A few simple properties of correlation are given next. Most of these follow easily from the corresponding properties of covariance. First, recall that the standard scores of $x_i$ and $y_i$ are, respectively, $u_i = \frac{x_i - m(\bs{x})}{s(\bs{x})}, \quad v_i = \frac{y_i - m(\bs{y})}{s(\bs{y})}$ The standard scores from a data set are dimensionless quantities that have mean 0 and variance 1.
The correlation between $\bs{x}$ and $\bs{y}$ is the covariance of their standard scores $\bs{u}$ and $\bs{v}$. That is, $r(\bs{x}, \bs{y}) = s(\bs{u}, \bs{v})$.
Proof
In vector notation, note that $\bs{u} = \frac{1}{s(\bs{x})}[\bs{x} - m(\bs{x})], \quad \bs{v} = \frac{1}{s(\bs{y})}[\bs{y} - m(\bs{y})]$ Hence the result follows immediately from properties of covariance: $s(\bs{u}, \bs{v}) = \frac{1}{s(\bs{x}) s(\bs{y})} s(\bs{x}, \bs{y}) = r(\bs{x}, \bs{y})$
Correlation is symmetric.
$r(\bs{x}, \bs{y}) = r(\bs{y}, \bs{x})$.
Unlike covariance, correlation is unaffected by multiplying one of the data sets by a positive constant (recall that this can always be thought of as a change of scale in the underlying variable). On the other hand, multiplying a data set by a negative constant changes the sign of the correlation.
If $c \ne 0$ is a constant then
1. $r(c \bs{x}, \bs{y}) = r(\bs{x}, \bs{y})$ if $c \gt 0$
2. $r(c \bs{x}, \bs{y}) = -r(\bs{x}, \bs{y})$ if $c \lt 0$
Proof
By definition and from the scaling property of covariance, $r(c \bs{x}, \bs{y}) = \frac{s(c \bs{x}, \bs{y})}{s(c \bs{x}) s(\bs{y})} = \frac{c s(\bs{x}, \bs{y})}{\left|c\right| s(\bs{x}) s(\bs{y})} = \frac{c}{\left|c\right|} r(\bs{x}, \bs{y})$ and of course, $c / \left|c\right| = 1$ if $c \gt 0$ and $c / \left|c\right| = -1$ if $c \lt 0$.
Like covariance, correlation is unaffected by adding constants to the data sets. Adding a constant to a data set often corresponds to a change of location.
If $\bs{c}$ and $\bs{d}$ are constant vectors then $r(\bs{x} + \bs{c}, \bs{y} + \bs{d}) = r(\bs{x}, \bs{y})$.
Proof
This result follows directly from the corresponding properties of covariance and standard deviation: $r(\bs{x} + \bs{c}, \bs{y} + \bs{d}) = \frac{s(\bs{x} + \bs{c}, \bs{y} + \bs{d})}{s(\bs{x} + \bs{c}) s(\bs{y} + \bs{d})} = \frac{s(\bs{x}, \bs{y})}{s(\bs{x}) s(\bs{y})} = r(\bs{x}, \bs{y})$
The last couple of properties reinforce the fact that correlation is a standardized measure of association that is not affected by changing the units of measurement. In the first Challenger data set, for example, the variables of interest are temperature at time of launch (in degrees Fahrenheit) and O-ring erosion (in millimeters). The correlation between these variables is of critical importance. If we were to measure temperature in degrees Celsius and O-ring erosion in inches, the correlation between the two variables would be unchanged.
The most important properties of correlation arise from studying the line that best fits the data, our next topic.
Linear Regression
We are interested in finding the line $y = a + b x$ that best fits the sample points $\left((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\right)$. This is a basic and important problem in many areas of mathematics, not just statistics. We think of $x$ as the predictor variable and $y$ as the response variable. Thus, the term best means that we want to find the line (that is, find the coefficients $a$ and $b$) that minimizes the average of the squared errors between the actual $y$ values in our data and the predicted $y$ values: $\mse(a, b) = \frac{1}{n - 1} \sum_{i=1}^n [y_i - (a + b \, x_i)]^2$ Note that the minimizing value of $(a, b)$ would be the same if the function were simply the sum of the squared errors, or if we averaged by dividing by $n$ rather than $n - 1$, or if we used the square root of any of these functions. Of course the actual minimum value of the function would be different if we changed the function, but again, not the point $(a, b)$ where the minimum occurs. Our particular choice of $\mse$ as the error function is best for statistical purposes. Finding $(a, b)$ that minimizes $\mse$ is a standard problem in calculus.
The graph of $\mse$ is a paraboloid opening upward. The function $\mse$ is minimized when \begin{align} b(\bs{x}, \bs{y}) & = \frac{s(\bs{x}, \bs{y})}{s^2(\bs{x})} \ a(\bs{x}, \bs{y}) & = m(\bs{y}) - b(\bs{x}, \bs{y}) m(\bs{x}) = m(\bs{y}) - \frac{s(\bs{x}, \bs{y})}{s^2(\bs{x})} m(\bs{x}) \end{align}
Proof
We can tell from the algebraic form of $\mse$ that the graph is a paraboloid opening upward. To find the unique point that minimizes $\mse$, note that \begin{align} \frac{\partial}{\partial a}\mse(a, b) & = \frac{1}{n - 1} \sum 2[y_i - (a + b x_i)] (-1) = \frac{2}{n - 1} [-\sum_{i=1}^n y_i + n a + b \sum_{i=1}^n x_i ]\ \frac{\partial}{\partial b}\mse(a, b) & = \frac{1}{n - 1} \sum 2[y_i - (a + b x_i)](-x_i) = \frac{2}{n - 1} [-\sum_{i=1}^n x_i y_i + a \sum_{i=1}^n x_i + b \sum_{i=1}^n x_i^2] \end{align} Solving $\frac{\partial}{\partial a} \mse(a, b) = 0$, gives $a = m(\bs{y}) - b m(\bs{x})$. Substituting this into $\frac{\partial}{\partial b} \mse(a, b) = 0$ and solving for $b$ gives $b = \frac{n[m(\bs{x} \bs{y}) - m(\bs{x}) m(\bs{y})]}{n[m(\bs{x}^2) - m^2(\bs{x})]}$ Dividing the numerator and denominator in the last expression by $n - 1$ and using the computational formula above, we see that $b = s(\bs{x}, \bs{y}) / s^2(\bs{x})$.
Of course, the optimal values of $a$ and $b$ are statistics, that is, functions of the data. Thus the sample regression line is $y = m(\bs{y}) + \frac{s(\bs{x}, \bs{y})}{s^2(\bs{x})} [x - m(\bs{x})]$
Note that the regression line passes through the point $\left(m(\bs{x}), m(\bs{y})\right)$, the center of the sample of points.
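Here is a minimal Python sketch of the least squares computation, assuming NumPy; the simulated data are arbitrary. The coefficients from the covariance formulas agree with those returned by numpy.polyfit of degree 1, which solves the same minimization.

```python
import numpy as np

# Least squares coefficients from the covariance formulas, compared with
# numpy.polyfit (degree 1), which minimizes the same sum of squared errors.
rng = np.random.default_rng(2)
x = rng.random(20)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=20)
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
slope, intercept = np.polyfit(x, y, 1)
print(a, b)
print(intercept, slope)   # same values up to rounding
```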
The minimum mean square error is $\mse\left[a(\bs{x}, \bs{y}), b(\bs{x}, \bs{y})\right] = s(\bs{y})^2 \left[1 - r^2(\bs{x}, \bs{y})\right]$
Proof
This follows from substituting $a(\bs{x}, \bs{y})$ and $b(\bs{x}, \bs{y})$ into $\mse$ and simplifying.
Sample correlation and covariance satisfy the following properties.
1. $-1 \le r(\bs{x}, \bs{y}) \le 1$
2. $-s(\bs{x}) s(\bs{y}) \le s(\bs{x}, \bs{y}) \le s(\bs{x}) s(\bs{y})$
3. $r(\bs{x}, \bs{y}) = -1$ if and only if the sample points lie on a line with negative slope.
4. $r(\bs{x}, \bs{y}) = 1$ if and only if the sample points lie on a line with positive slope.
Proof
Note that $\mse \ge 0$ and hence from the previous theorem, we must have $r^2(\bs{x}, \bs{y}) \le 1$. This is equivalent to part (a), which in turn, from the definition of sample correlation, is equivalent to part (b). For parts (c) and (d), note that $\mse(a, b) = 0$ if and only if $y_i = a + b x_i$ for each $i$, and moreover, $b(\bs{x}, \bs{y})$ has the same sign as $r(\bs{x}, \bs{y})$.
Thus, we now see in a deeper way that the sample covariance and correlation measure the degree of linearity of the sample points. Recall from our discussion of measures of center and spread that the constant $a$ that minimizes $\mse(a) = \frac{1}{n - 1} \sum_{i=1}^n (y_i - a)^2$ is the sample mean $m(\bs{y})$, and the minimum value of the mean square error is the sample variance $s^2(\bs{y})$. Thus, the difference between this value of the mean square error and the one above, namely $s^2(\bs{y}) r^2(\bs{x}, \bs{y})$, is the reduction in the variability of the $y$ data when the linear term in $x$ is added to the predictor. The fractional reduction is $r^2(\bs{x}, \bs{y})$, and hence this statistic is called the (sample) coefficient of determination. Note that if the data vectors $\bs{x}$ and $\bs{y}$ are uncorrelated, then $x$ has no value as a predictor of $y$; the regression line in this case is the horizontal line $y = m(\bs{y})$ and the mean square error is $s^2(\bs{y})$.
The choice of predictor and response variables is important.
The sample regression line with predictor variable $x$ and response variable $y$ is not the same as the sample regression line with predictor variable $y$ and response variable $x$, except in the extreme case $r(\bs{x}, \bs{y}) = \pm 1$ where the sample points all lie on a line.
Residuals
The difference between the actual $y$ value of a data point and the value predicted by the regression line is called the residual of that data point. Thus, the residual corresponding to $(x_i, y_i)$ is $d_i = y_i - \hat{y}_i$ where $\hat{y}_i$ is the regression line at $x_i$: $\hat{y}_i = m(\bs{y}) + \frac{s(\bs{x}, \bs{y})}{s(\bs{x})^2} [x_i - m(\bs{x})]$ Note that the predicted value $\hat{y}_i$ and the residual $d_i$ are statistics, that is, functions of the data $(\bs{x}, \bs{y})$, but we are suppressing this in the notation for simplicity.
The residuals sum to 0: $\sum_{i=1}^n d_i = 0$.
Proof
This follows from the definition, and is a restatement of the fact that the regression line passes through the center of the data set $\left(m(\bs{x}), m(\bs{y})\right)$.
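A short numerical illustration (a Python sketch with arbitrary simulated data, assuming NumPy): the residuals from the fitted line sum to zero, up to rounding.

```python
import numpy as np

# The residuals from the sample regression line sum to zero.
rng = np.random.default_rng(3)
x = rng.random(25)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=25)
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
d = y - (a + b * x)        # residuals
print(d.sum())             # essentially 0
```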
Various plots of the residuals can help one understand the relationship between the $x$ and $y$ data. Some of the more common are given in the following definition:
Residual plots
1. A plot of $(i, d_i)$ for $i \in \{1, 2, \ldots, n\}$, that is, a plot of indices versus residuals.
2. A plot of $(x_i, d_i)$ for $i \in \{1, 2, \ldots, n\}$, that is, a plot of $x$ values versus residuals.
3. A plot of $(d_i, y_i)$ for $i \in \{1, 2, \ldots, n\}$, that is, a plot of residuals versus actual $y$ values.
4. A plot of $(d_i, \hat{y}_i)$ for $i \in \{1, 2, \ldots, n\}$, that is a plot of residuals versus predicted $y$ values.
5. A histogram of the residuals $(d_1, d_2, \ldots, d_n)$.
Sums of Squares
For our next discussion, we will re-interpret the minimum mean square error formula above. Here are the new definitions:
Sums of squares
1. $\sst(\bs{y}) = \sum_{i=1}^n [y_i - m(\bs{y})]^2$ is the total sum of squares.
2. $\ssr(\bs{x}, \bs{y}) = \sum_{i=1}^n [\hat{y}_i - m(\bs{y})]^2$ is the regression sum of squares.
3. $\sse(\bs{x}, \bs{y}) = \sum_{i=1}^n (y_i - \hat{y}_i)^2$ is the error sum of squares.
Note that $\sst(\bs{y})$ is simply $n - 1$ times the variance $s^2(\bs{y})$ and is the total of the sums of the squares of the deviations of the $y$ values from the mean of the $y$ values. Similarly, $\sse(\bs{x}, \bs{y})$ is simply $n - 1$ times the minimum mean square error given above. Of course, $\sst(\bs{y})$ has $n - 1$ degrees of freedom, while $\sse(\bs{x}, \bs{y})$ has $n - 2$ degrees of freedom and $\ssr(\bs{x}, \bs{y})$ a single degree of freedom. The total sum of squares is the sum of the regression sum of squares and the error sum of squares:
The sums of squares are related as follows:
1. $\ssr(\bs{x}, \bs{y}) = r^2(\bs{x}, \bs{y}) \sst(\bs{y})$
2. $\sst(\bs{y}) = \ssr(\bs{x}, \bs{y}) + \sse(\bs{x}, \bs{y})$
Proof
By definition of $\sst$ and $r$, we see that $r^2(\bs{x}, \bs{y}) \sst(\bs{y}) = (n - 1) s^2(\bs{x}, \bs{y}) \big/ s^2(\bs{x})$. But from the regression equation, $[\hat{y}_i - m(\bs{y})]^2 = \frac{s^2(\bs{x}, \bs{y})}{s^4(\bs{x})} [x_i - m(\bs{x})]^2$ Summing over $i$ gives $\ssr(\bs{x}, \bs{y}) = \sum_{i=1}^n [\hat{y}_i - m(\bs{y})]^2 = \frac{s^2(\bs{x}, \bs{y})}{s^4(\bs{x})} (n - 1) s^2(\bs{x}) = (n - 1) \frac{s^2(\bs{x}, \bs{y})}{s^2(\bs{x})}$ Hence $\ssr(\bs{x}, \bs{y}) = r^2(\bs{x}, \bs{y}) \sst(\bs{y})$, which is part (a). Finally, multiplying the minimum mean square error in the result above by $n - 1$ gives $\sse(\bs{x}, \bs{y}) = \sst(\bs{y}) - r^2(\bs{x}, \bs{y}) \sst(\bs{y}) = \sst(\bs{y}) - \ssr(\bs{x}, \bs{y})$, which gives part (b).
Note that $r^2(\bs{x}, \bs{y}) = \ssr(\bs{x}, \bs{y}) \big/ \sst(\bs{y})$, so once again, $r^2(\bs{x}, \bs{y})$ is the coefficient of determination—the proportion of the variability in the $y$ data explained by the $x$ data. We can average $\sse$ by dividing by its degrees of freedom and then take the square root to obtain a standard error:
The standard error of estimate is $\se(\bs{x}, \bs{y}) = \sqrt{\frac{\sse(\bs{x}, \bs{y})}{n - 2}}$
This really is a standard error in the same sense as a standard deviation. It's an average of the errors of sorts, but in the root mean square sense.
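The sums of squares identities and the standard error of estimate are easy to check numerically. Here is a Python sketch with arbitrary simulated data, assuming NumPy.

```python
import numpy as np

# Check the identities sst = ssr + sse and r^2 = ssr / sst, and compute
# the standard error of estimate.
rng = np.random.default_rng(4)
n = 30
x = rng.random(n)
y = 5.0 - 2.0 * x + rng.normal(scale=0.3, size=n)
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
y_hat = a + b * x
sst = ((y - y.mean()) ** 2).sum()
ssr = ((y_hat - y.mean()) ** 2).sum()
sse = ((y - y_hat) ** 2).sum()
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(sst, ssr + sse)          # equal up to rounding
print(r2, ssr / sst)           # equal up to rounding
print(np.sqrt(sse / (n - 2)))  # standard error of estimate
```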
Finally, it's important to note that linear regression is a much more powerful idea than might first appear, and in fact the term linear can be a bit misleading. By applying various transformations to $y$ or $x$ or both, we can fit a variety of two-parameter curves to the given data $\left((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\right)$. Some of the most common transformations are explored in the exercises below.
Probability Theory
We continue our discussion of sample covariance, correlation, and regression but now from the more interesting point of view that the variables are random. Specifically, suppose that we have a basic random experiment, and that $X$ and $Y$ are real-valued random variables for the experiment. Equivalently, $(X, Y)$ is a random vector taking values in $\R^2$. Let $\mu = \E(X)$ and $\nu = \E(Y)$ denote the distribution means, $\sigma^2 = \var(X)$ and $\tau^2 = \var(Y)$ the distribution variances, and let $\delta = \cov(X, Y)$ denote the distribution covariance, so that the distribution correlation is $\rho = \cor(X, Y) = \frac{\cov(X, Y)}{\sd(X) \, \sd(Y)} = \frac{\delta}{\sigma \, \tau}$ We will also need some higher order moments. Let $\sigma_4 = \E\left[(X - \mu)^4\right]$, $\tau_4 = \E\left[(Y - \nu)^4\right]$, and $\delta_2 = \E\left[(X - \mu)^2 (Y - \nu)^2\right]$. Naturally, we assume that all of these moments are finite.
Now suppose that we run the basic experiment $n$ times. This creates a compound experiment with a sequence of independent random vectors $\left((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\right)$ each with the same distribution as $(X, Y)$. In statistical terms, this is a random sample of size $n$ from the distribution of $(X, Y)$. The statistics discussed in previous section are well defined but now they are all random variables. We use the notation established previously, except that we use our usual convention of denoting random variables with capital letters. Of course, the deterministic properties and relations established above still hold. Note that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of $X$ and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a random sample of size $n$ from the distribution of $Y$. The main purpose of this subsection is to study the relationship between various statistics from $\bs{X}$ and $\bs{Y}$, and to study statistics that are natural estimators of the distribution covariance and correlation.
The Sample Means
Recall that the sample means are $M(\bs{X}) = \frac{1}{n} \sum_{i=1}^n X_i, \quad M(\bs{Y}) = \frac{1}{n} \sum_{i=1}^n Y_i$ From the sections on the law of large numbers and the central limit theorem, we know a great deal about the distributions of $M(\bs{X})$ and $M(\bs{Y})$ individually. But we need to know more about the joint distribution.
The covariance and correlation between $M(\bs{X})$ and $M(\bs{Y})$ are
1. $\cov[M(\bs{X}), M(\bs{Y})] = \delta / n$
2. $\cor[M(\bs{X}), M(\bs{Y})] = \rho$
Proof
Part (a) follows from the bilinearity of the covariance operator: $\cov\left(\frac{1}{n} \sum_{i=1}^n X_i, \frac{1}{n} \sum_{j=1}^n Y_j\right) = \frac{1}{n^2}\sum_{i=1}^n \sum_{j=1}^n \cov(X_i, Y_j)$ By independence, the terms in the last sum are 0 if $i \ne j$. For $i = j$ the terms are $\cov(X, Y) = \delta$. There are $n$ such terms so $\cov[M(\bs{X}), M(\bs{Y})] = \delta / n$. For part (b), recall that $\var[M(\bs{X})] = \sigma^2 / n$ and $\var[M(\bs{Y})] = \tau^2 / n$. Hence $\cor[M(\bs{X}), M(\bs{Y})] = \frac{\delta / n}{(\sigma / \sqrt{n}) (\tau / \sqrt{n})} = \frac{\delta}{\sigma \tau} = \rho$
Note that the correlation between the sample means is the same as the correlation of the underlying sampling distribution. In particular, the correlation does not depend on the sample size $n$.
The Sample Variances
Recall that special versions of the sample variances, in the unlikely event that the distribution means are known, are $W^2(\bs{X}) = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2, \quad W^2(\bs{Y}) = \frac{1}{n} \sum_{i=1}^n (Y_i - \nu)^2$ Once again, we have studied these statistics individually, so our emphasis now is on the joint distribution.
The covariance and correlation between $W^2(\bs{X})$ and $W^2(\bs{Y})$ are
1. $\cov[W^2(\bs{X}), W^2(\bs{Y})] = (\delta_2 - \sigma^2 \tau^2) \big/ n$
2. $\cor[W^2(\bs{X}), W^2(\bs{Y})] = (\delta_2 - \sigma^2 \tau^2) \big/ \sqrt{(\sigma_4 - \sigma^4)(\tau_4 - \tau^4)}$
Proof
For part (a), we use the bilinearity of the covariance operator to obtain $\cov[W^2(\bs{X}), W^2(\bs{Y})] = \cov\left(\frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2, \frac{1}{n} \sum_{j=1}^n (Y_j - \nu)^2\right) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \cov[(X_i - \mu)^2, (Y_j - \nu)^2]$ By independence, the terms in the last sum are 0 when $i \ne j$. When $i = j$ the terms are $\cov[(X - \mu)^2, (Y - \nu)^2] = \E[(X - \mu)^2 (Y - \nu)^2] - \E[(X - \mu)^2] \E[(Y - \nu)^2] = \delta_2 - \sigma^2 \tau^2$ There are $n$ such terms, so $\cov[W^2(\bs{X}), W^2(\bs{Y})] = (\delta_2 - \sigma^2 \tau^2) \big/ n$. Part (b) follows from part (a) and the variances of $W^2(\bs{X})$ and $W^2(\bs{Y})$ from the section on Sample Variance.
Note that the correlation does not depend on the sample size $n$. Next, recall that the standard versions of the sample variances are $S^2(\bs{X}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})]^2, \quad S^2(\bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [Y_i - M(\bs{Y})]^2$
The covariance and correlation of the sample variances are
1. $\cov[S^2(\bs{X}), S^2(\bs{Y})] = (\delta_2 - \sigma^2 \tau^2) \big/ n + 2 \delta^2 / [n (n - 1)]$
2. $\cor[S^2(\bs{X}), S^2(\bs{Y})] = [(n - 1)(\delta_2 - \sigma^2 \tau^2) + 2 \delta^2] \big/ \sqrt{[(n - 1) \sigma_4 - (n - 3) \sigma^4][(n - 1) \tau_4 - (n - 3) \tau^4]}$
Proof
Recall that $S^2(\bs{X}) = \frac{1}{2 n (n - 1)} \sum_{i=1}^n \sum_{j=1}^n (X_i - X_j)^2, \quad S^2(\bs{Y}) = \frac{1}{2 n (n - 1)} \sum_{k=1}^n \sum_{l=1}^n (Y_k - Y_l)^2$ Hence using the bilinearity of the covariance operator we have $\cov[S^2(\bs{X}), S^2(\bs{Y})] = \frac{1}{4 n^2 (n - 1)^2} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n \cov[(X_i - X_j)^2, (Y_k - Y_l)^2]$ We compute the covariances in this sum by considering disjoint cases:
• $\cov[(X_i - X_j)^2, (Y_k - Y_l)^2] = 0$ if $i = j$ or if $k = l$, and there are $2 n^3 - n^2$ such terms.
• $\cov[(X_i - X_j)^2, (Y_k - Y_l)^2] = 0$ by independence if $i, j, k, l$ are distinct, and there are $n (n - 1)(n - 2)(n - 3)$ such terms.
• $\cov[(X_i - X_j)^2, (Y_k - Y_l)^2] = 2 \delta_2 - 2 \sigma^2 \tau^2 + 4 \delta^2$ if $i \ne j$ and $\{k, l\} = \{i, j\}$, and there are $2 n (n - 1)$ such terms.
• $\cov[(X_i - X_j)^2, (Y_k - Y_l)^2] = \delta_2 - \sigma^2 \tau^2$ if $i \ne j$, $k \ne l$, and $\#(\{i, j\} \cap \{k, l\}) = 1$, and there are $4 n (n - 1)(n - 2)$ such terms.
Substituting and simplifying gives the result in (a). For (b), we use the definition of correlation and the formulas for $\var[S^2(\bs{X})]$ and $\var[S^2(\bs{Y})]$ from the section on the sample variance.
Asymptotically, the correlation between the sample variances is the same as the correlation between the special sample variances given above: $\cor\left[S^2(\bs{X}), S^2(\bs{Y})\right] \to \frac{\delta_2 - \sigma^2 \tau^2}{\sqrt{(\sigma_4 - \sigma^4)(\tau_4 - \tau^4)}} \text{ as } n \to \infty$
Sample Covariance
Suppose first that the distribution means $\mu$ and $\nu$ are known. As noted earlier, this is almost always an unrealistic assumption, but is still a good place to start because the analysis is very simple and the results we obtain will be useful below. A natural estimator of the distribution covariance $\delta = \cov(X, Y)$ in this case is the special sample covariance $W(\bs{X}, \bs{Y}) = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)(Y_i - \nu)$ Note that the special sample covariance generalizes the special sample variance: $W(\bs{X}, \bs{X}) = W^2(\bs{X})$.
$W(\bs{X}, \bs{Y})$ is the sample mean for a random sample of size $n$ from the distribution of $(X - \mu)(Y - \nu)$ and satisfies the following properties:
1. $\E[W(\bs{X}, \bs{Y})] = \delta$
2. $\var[W(\bs{X}, \bs{Y})] = \frac{1}{n}(\delta_2 - \delta^2)$
3. $W(\bs{X}, \bs{Y}) \to \delta$ as $n \to \infty$ with probability 1
Proof
These results follow directly from the section on the Law of Large Numbers. For part (b), note that $\var[(X - \mu)(Y - \nu)] = \E[(X - \mu)^2 (Y - \nu)^2] - \left(\E[(X - \mu)(Y - \nu)]\right)^2 = \delta_2 - \delta^2$
As an estimator of $\delta$, part (a) means that $W(\bs{X}, \bs{Y})$ is unbiased and part (b) means that $W(\bs{X}, \bs{Y})$ is consistent.
Consider now the more realistic assumption that the distribution means $\mu$ and $\nu$ are unknown. A natural approach in this case is to average $[X_i - M(\bs{X})][Y_i - M(\bs{Y})]$ over $i \in \{1, 2, \ldots, n\}$. But rather than dividing by $n$ in our average, we should divide by whatever constant gives an unbiased estimator of $\delta$. As shown in the next theorem, this constant turns out to be $n - 1$, leading to the standard sample covariance: $S(\bs{X}, \bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})]$
$\E[S(\bs{X}, \bs{Y})] = \delta$.
Proof
Expanding as above we have, $\sum_{i=1}^n[X_i - M(\bs{X})][Y_i - M(\bs{Y})] = \sum_{i=1}^n X_i Y_i - n M(\bs{X})M(\bs{Y})$ But $\E(X_i Y_i) = \cov(X_i, Y_i) + \E(X_i) \E(Y_i) = \delta + \mu \nu$. Similarly, from the covariance of the sample means and the unbiased property, $\E[M(\bs{X}) M(\bs{Y})] = \cov[M(\bs{X}), M(\bs{Y})] + \E[M(\bs{X})] \E[M(\bs{Y})] = \delta / n + \mu \nu$. So taking expected values in the displayed equation above gives $\E\left(\sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})]\right) = n ( \delta + \mu \nu) - n (\delta / n + \mu \nu) = (n - 1) \delta$
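Unbiasedness can also be illustrated by Monte Carlo simulation. In the following Python sketch (assuming NumPy), $(X, Y)$ is taken to be bivariate normal with a known covariance $\delta$, chosen purely for illustration; the average of the sample covariances over many samples is close to $\delta$.

```python
import numpy as np

# Monte Carlo check that the sample covariance S(X, Y) is an unbiased
# estimator of the distribution covariance delta.
rng = np.random.default_rng(5)
delta = 0.6
cov_matrix = [[1.0, delta], [delta, 2.0]]   # illustrative distribution covariance matrix
n, reps = 10, 20_000
estimates = np.empty(reps)
for k in range(reps):
    sample = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=n)
    estimates[k] = np.cov(sample[:, 0], sample[:, 1])[0, 1]
print(estimates.mean())   # close to delta = 0.6
```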
$S(\bs{X}, \bs{Y}) \to \delta$ as $n \to \infty$ with probability 1.
Proof
Once again, we have $S(\bs{X}, \bs{Y}) = \frac{n}{n - 1} [M(\bs{X} \bs{Y}) - M(\bs{X}) M(\bs{Y})]$ where $M(\bs{X} \bs{Y})$ denotes the sample mean for the sample of the products $(X_1 Y_1, X_2 Y_2, \ldots, X_n Y_n)$. By the strong law of large numbers, $M(\bs{X}) \to \mu$ as $n \to \infty$, $M(\bs{Y}) \to \nu$ as $n \to \infty$, and $M(\bs{X} \bs{Y}) \to \E(X Y) = \delta + \mu \nu$ as $n \to \infty$, each with probability 1. So the result follows by letting $n \to \infty$ in the displayed equation.
Of course, the sample correlation is $R(\bs{X}, \bs{Y}) = \frac{S(\bs{X}, \bs{Y})}{S(\bs{X}) \, S(\bs{Y})}$ Since the sample correlation $R(\bs{X}, \bs{Y})$ is a nonlinear function of the sample covariance and sample standard deviations, it will not in general be an unbiased estimator of the distribution correlation $\rho$. In most cases, it would be difficult to even compute the mean and variance of $R(\bs{X}, \bs{Y})$. Nonetheless, we can show convergence of the sample correlation to the distribution correlation.
$R(\bs{X}, \bs{Y}) \to \rho$ as $n \to \infty$ with probability 1.
Proof
This follows immediately from the strong law of large numbers and previous results. From the result above $S(\bs{X}, \bs{Y}) \to \delta$ as $n \to \infty$, and from the section on the sample variance, $S(\bs{X}) \to \sigma$ as $n \to \infty$ and $S(\bs{Y}) \to \tau$ as $n \to \infty$, each with probability 1. Hence $R(\bs{X}, \bs{Y}) \to \delta / \sigma \tau = \rho$ as $n \to \infty$ with probability 1.
Our next theorem gives a formula for the variance of the sample covariance, not to be confused with the covariance of the sample variances given above!
The variance of the sample covariance is $\var[S(\bs{X}, \bs{Y})] = \frac{1}{n} \left( \delta_2 + \frac{1}{n - 1} \sigma^2 \, \tau^2 - \frac{n - 2}{n - 1} \delta^2 \right)$
Proof
Recall first that $S(\bs{X}, \bs{Y}) = \frac{1}{2 \, n \, (n - 1)} \sum_{i=1}^n \sum_{j=1}^n (X_i - X_j)(Y_i - Y_j)$ Hence using the bilinearity of the covariance operator we have $\var[S(\bs{X}, \bs{Y})] = \frac{1}{4 n^2 (n - 1)^2} \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{l=1}^n \cov[(X_i - X_j)(Y_i - Y_j), (X_k - X_l)(Y_k - Y_l)]$ We compute the covariances in this sum by considering disjoint cases:
• $\cov[(X_i - X_j)(Y_i - Y_j), (X_k - X_l)(Y_k - Y_l)] = 0$ if $i = j$ or if $k = l$, and there are $2 n^3 - n^2$ such terms.
• $\cov[(X_i - X_j)(Y_i - Y_j), (X_k - X_l)(Y_k - Y_l)] = 0$ if $i, j, k, l$ are distinct, and there are $n (n - 1)(n - 2)(n - 3)$ such terms.
• $\cov[(X_i - X_j)(Y_i - Y_j), (X_k - X_l)(Y_k - Y_l)] = 2 \, \delta_2 + 2 \sigma^2 \tau^2$ if $i \ne j$ and $\{k, l\} = \{i, j\}$, and there are $2 n (n - 1)$ such terms.
• $\cov[(X_i - X_j)(Y_i - Y_j), (X_k - X_l)(Y_k - Y_l)] = \delta_2 - \delta^2$ if $i \ne j$, $k \ne l$, and $\#(\{i, j\} \cap \{k, l\}) = 1$, and there are $4 n (n - 1)(n - 2)$ such terms.
Substituting and simplifying gives the result.
It's not surprising that the variance of the standard sample covariance (where we don't know the distribution means) is greater than the variance of the special sample covariance (where we do know the distribution means).
$\var[S(\bs{X}, \bs{Y})] \gt \var[W(\bs{X}, \bs{Y})]$.
Proof
From results above, and some simple algebra, $\var[S(\bs{X}, \bs{Y})] - \var[W(\bs{X}, \bs{Y})] = \frac{1}{n (n - 1)}(\delta^2 + \sigma^2 \tau^2) \gt 0$ But note that the difference goes to 0 as $n \to \infty$.
$\var[S(\bs{X}, \bs{Y})] \to 0$ as $n \to \infty$. Thus, the sample covariance is a consistent estimator of the distribution covariance.
Regression
In our first discussion above, we studied regression from a deterministic, descriptive point of view. The results obtained applied only to the sample. Statistically more interesting and deeper questions arise when the data come from a random experiment, and we try to draw inferences about the underlying distribution from the sample regression. There are two models that commonly arise. One is where the response variable is random, but the predictor variable is deterministic. The other is the model we consider here, where the predictor variable and the response variable are both random, so that the data form a random sample from a bivariate distribution.
Thus, suppose again that we have a basic random vector $(X, Y)$ for an experiment. Recall that in the section on (distribution) correlation and regression, we showed that the best linear predictor of $Y$ given $X$, in the sense of minimizing mean square error, is the random variable $L(Y \mid X) = \E(Y) + \frac{\cov(X, Y)}{\var(X)}[X - \E(X)] = \nu + \frac{\delta}{\sigma^2}(X - \mu)$ so that the distribution regression line is given by $y = L(Y \mid X = x) = \nu + \frac{\delta}{\sigma^2}(x - \mu)$ Moreover, the (minimum) value of the mean square error is $\E\left\{[Y - L(Y \mid X)]^2\right\} = \var(Y)[1 - \cor^2(X, Y)] = \tau^2 (1 - \rho^2)$.
Of course, in real applications, we are unlikely to know the distribution parameters $\mu$, $\nu$, $\sigma^2$, and $\delta$. If we want to estimate the distribution regression line, a natural approach would be to consider a random sample $\left((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\right)$ from the distribution of $(X, Y)$ and compute the sample regression line. Of course, the results are exactly the same as in the discussion above, except that all of the relevant quantities are random variables. The sample regression line is
$y = M(\bs{Y}) + \frac{S(\bs{X}, \bs{Y})}{S^2(\bs{X})}[x - M(\bs{X})]$
The mean square error is $S^2(\bs{Y})[1 - R^2(\bs{X}, \bs{Y})]$ and the coefficient of determination is $R^2(\bs{X}, \bs{Y})$.
The fact that the sample regression line and mean square error are completely analogous to the distribution regression line and mean square error is mathematically elegant and reassuring. Again, the coefficients of the sample regression line can be viewed as estimators of the respective coefficients in the distribution regression line.
The coefficients of the sample regression line converge to the coefficients of the distribution regression line with probability 1.
1. $\frac{S(\bs{X}, \bs{Y})}{S^2(\bs{X})} \to \frac{\delta}{\sigma^2}$ as $n \to \infty$
2. $M(\bs{Y}) - \frac{S(\bs{X}, \bs{Y})}{S^2(\bs{X})} M(\bs{X}) \to \nu - \frac{\delta}{\sigma^2} \mu$ as $n \to \infty$
Proof
This follows from the strong law of large numbers and previous results. With probability 1, $S(\bs{X}, \bs{Y}) \to \delta$ as $n \to \infty$, $S^2(\bs{X}) \to \sigma^2$ as $n \to \infty$, $M(\bs{X}) \to \mu$ as $n \to \infty$, and $M(\bs{Y}) \to \nu$ as $n \to \infty$.
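The convergence can be illustrated numerically. In the following Python sketch (assuming NumPy), the sampling distribution is bivariate normal with parameters chosen purely for illustration, and the sample regression coefficients are computed for increasing sample sizes.

```python
import numpy as np

# Sample regression coefficients for increasing n, compared with the
# distribution coefficients delta / sigma^2 and nu - (delta / sigma^2) mu.
rng = np.random.default_rng(6)
mu, nu, sigma2, tau2, delta = 1.0, 2.0, 4.0, 9.0, 3.0
cov_matrix = [[sigma2, delta], [delta, tau2]]
for n in (10, 100, 1000, 100_000):
    sample = rng.multivariate_normal([mu, nu], cov_matrix, size=n)
    x, y = sample[:, 0], sample[:, 1]
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    a = y.mean() - b * x.mean()
    print(n, b, a)
print("distribution slope and intercept:", delta / sigma2, nu - (delta / sigma2) * mu)
```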
Of course, if the linear relationship between $X$ and $Y$ is not strong, as measured by the sample correlation, then a transformation applied to one or both variables may help. Again, some typical transformations are explored in the exercises below.
Exercises
Basic Properties
Suppose that $x$ and $y$ are population variables, and $\bs{x}$ and $\bs{y}$ samples of size $n$ from $x$ and $y$ respectively. Suppose also that $m(\bs{x}) = 3$, $m(\bs{y}) = -1$, $s^2(\bs{x} ) = 4$, $s^2(\bs{y}) = 9$, and $s(\bs{x}, \bs{y}) = 5$. Find each of the following:
1. $r(\bs{x}, \bs{y})$
2. $m(2 \bs{x} + 3 \bs{y})$
3. $s^2(2 \bs{x} + 3 \bs{y})$
4. $s(2 \bs{x} + 3 \bs{y} - 1, 4 \bs{x} + 2 \bs{y} - 3)$
Suppose that $x$ is the temperature (in degrees Fahrenheit) and $y$ the resistance (in ohms) for a certain type of electronic component after 10 hours of operation. For a sample of 30 components, $m(\bs{x}) = 113$, $s(\bs{x}) = 18$, $m(\bs{y}) = 100$, $s(\bs{y}) = 10$, $r(\bs{x}, \bs{y}) = 0.6$.
1. Classify $x$ and $y$ by type and level of measurement.
2. Find the sample covariance.
3. Find the equation of the regression line.
Suppose now that temperature is converted to degrees Celsius (the transformation is $\frac{5}{9}(x - 32)$).
1. Find the sample means.
2. Find the sample standard deviations.
3. Find the sample covariance and correlation.
4. Find the equation of the regression line.
Answer
1. continuous, interval
2. $m = 45°$, $s = 10°$
Suppose that $x$ is the length and $y$ the width (in inches) of a leaf in a certain type of plant. For a sample of 50 leaves $m(\bs{x}) = 10$, $s(\bs{x}) = 2$, $m(\bs{y}) = 4$, $s(\bs{y}) = 1$, and $r(\bs{x}, \bs{y}) = 0.8$.
1. Classify $x$ and $y$ by type and level of measurement.
2. Find the sample covariance.
3. Find the equation of the regression line with $x$ as the predictor variable and $y$ as the response variable.
Suppose now that $x$ and $y$ are converted to centimeters (2.54 centimeters per inch).
1. Find the sample means.
2. Find the sample standard deviations.
3. Find the sample covariance and correlation.
4. Find the equation of the regression line.
Answer
1. continuous, ratio
2. $m = 25.4$, $s = 5.08$
Scatterplot Exercises
Click in the interactive scatterplot, in various places, and watch how the means, standard deviations, correlation, and regression line change.
Click in the interactive scatterplot to define 20 points and try to come as close as possible to each of the following sample correlations:
1. $0$
2. $0.5$
3. $-0.5$
4. $0.7$
5. $-0.7$
6. $0.9$
7. $-0.9$.
Click in the interactive scatterplot to define 20 points. Try to generate a scatterplot in which the regression line has
1. slope 1, intercept 1
2. slope 3, intercept 0
3. slope $-2$, intercept 1
Simulation Exercises
Run the bivariate uniform experiment 2000 times in each of the following cases. Compare the sample means to the distribution means, the sample standard deviations to the distribution standard deviations, the sample correlation to the distribution correlation, and the sample regression line to the distribution regression line.
1. The uniform distribution on the square
2. The uniform distribution on the triangle.
3. The uniform distribution on the circle.
Run the bivariate normal experiment 2000 times for various values of the distribution standard deviations and the distribution correlation. Compare the sample means to the distribution means, the sample standard deviations to the distribution standard deviations, the sample correlation to the distribution correlation, and the sample regression line to the distribution regression line.
Transformations
Consider the function $y = a + b x^2$.
1. Sketch the graph for some representative values of $a$ and $b$.
2. Note that $y$ is a linear function of $x^2$, with intercept $a$ and slope $b$.
3. Hence, to fit this curve to sample data, simply apply the standard regression procedure to the data from the variables $x^2$ and $y$.
Consider the function $y = \frac{1}{a + b x}$.
1. Sketch the graph for some representative values of $a$ and $b$.
2. Note that $\frac{1}{y}$ is a linear function of $x$, with intercept $a$ and slope $b$.
3. Hence, to fit this curve to our sample data, simply apply the standard regression procedure to the data from the variables $x$ and $\frac{1}{y}$.
Consider the function $y = \frac{x}{a + b x}$.
1. Sketch the graph for some representative values of $a$ and $b$.
2. Note that $\frac{1}{y}$ is a linear function of $\frac{1}{x}$, with intercept $b$ and slope $a$.
3. Hence, to fit this curve to sample data, simply apply the standard regression procedure to the data from the variables $\frac{1}{x}$ and $\frac{1}{y}$.
4. Note again that the names of the intercept and slope are reversed from the standard formulas.
Consider the function $y = a e^{b x}$.
1. Sketch the graph for some representative values of $a$ and $b$.
2. Note that $\ln(y)$ is a linear function of $x$, with intercept $\ln(a)$ and slope $b$.
3. Hence, to fit this curve to sample data, simply apply the standard regression procedure to the data from the variables $x$ and $\ln(y)$.
4. After solving for the intercept $\ln(a)$, recover the statistic $a = e^{\ln(a)}$.
Consider the function $y = a x^b$.
1. Sketch the graph for some representative values of $a$ and $b$.
2. Note that $\ln(y)$ is a linear function of $\ln(x)$, with intercept $\ln(a)$ and slope $b$.
3. Hence, to fit this curve to sample data, simply apply the standard regression procedure to the data from the variables $\ln(x)$ and $\ln(y)$.
4. After solving for the intercept $\ln(a)$, recover the statistic $a = e^{\ln(a)}$.
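As a concrete illustration of the last two exercises, the following Python sketch (assuming NumPy, with artificial data) fits the curve $y = a e^{b x}$ by ordinary linear regression of $\ln(y)$ on $x$.

```python
import numpy as np

# Fit y = a * exp(b * x) by linear regression of ln(y) on x.
# The data are artificial, generated with a = 1.5 and b = 0.8 plus noise.
rng = np.random.default_rng(7)
x = np.linspace(0, 2, 30)
y = 1.5 * np.exp(0.8 * x) * rng.lognormal(sigma=0.05, size=30)
b, log_a = np.polyfit(x, np.log(y), 1)   # slope is b, intercept is ln(a)
a = np.exp(log_a)                        # recover a from the intercept
print(a, b)                              # close to 1.5 and 0.8
```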
Computational Exercises
All statistical software packages will perform regression analysis. In addition to the regression line, most packages will typically report the coefficient of determination $r^2(\bs{x}, \bs{y})$, the sums of squares $\sst(\bs{y})$, $\ssr(\bs{x}, \bs{y})$, $\sse(\bs{x}, \bs{y})$, and the standard error of estimate $\se(\bs{x}, \bs{y})$. Most packages will also draw the scatterplot, with the regression line superimposed, and will draw the various graphs of residuals discussed above. Many packages also provide easy ways to transform the data. Thus, there is very little reason to perform the computations by hand, except with a small data set to master the definitions and formulas. In the following problem, do the computations and draw the graphs with minimal technological aids.
Suppose that $x$ is the number of math courses completed and $y$ the number of science courses completed for a student at Enormous State University (ESU). A sample of 10 ESU students gives the following data: $\left((1, 1), (3, 3), (6, 4), (2, 1), (8, 5), (2, 2), (4, 3), (6, 4), (4, 3), (4, 4)\right)$.
1. Classify $x$ and $y$ by type and level of measurement.
2. Sketch the scatterplot.
Construct a table with rows corresponding to cases and columns corresponding to $i$, $x_i$, $y_i$, $x_i - m(\bs{x})$, $y_i - m(\bs{y})$, $[x_i - m(\bs{x})]^2$, $[y_i - m(\bs{y})]^2$, $[x_i - m(\bs{x})][y_i - m(\bs{y})]$, $\hat{y}_i$, $\hat{y}_i - m(\bs{y})$, $[\hat{y}_i - m(\bs{y})]^2$, $y_i - \hat{y}_i$, and $(y_i - \hat{y}_i)^2$. Add rows at the bottom for totals and means. Use exact arithmetic.
1. Complete the first 8 columns.
2. Find the sample correlation and the coefficient of determination.
3. Find the sample regression equation.
4. Complete the table.
5. Verify the identities for the sums of squares.
Answer
$i$ $x_i$ $y_i$ $x_i - m(\bs{x})$ $y_i - m(\bs{y})$ $[x_i - m(\bs{x})]^2$ $[y_i - m(\bs{y})]^2$ $[x_i - m(\bs{x})][y_i - m(\bs{y})]$ $\hat{y}_i$ $\hat{y}_i - m(\bs{y})$ $[\hat{y}_i - m(\bs{y})]^2$ $y_i - \hat{y}_i$ $(y_i - \hat{y}_i)^2$
1 1 1 $-3$ $-2$ $9$ $4$ $6$ $9/7$ $-12/7$ $144/49$ $-2/7$ $4/49$
2 3 3 $-1$ $0$ $1$ $0$ $0$ $17/7$ $-4/7$ $16/49$ $4/7$ $16/49$
3 6 4 $2$ $1$ $4$ $1$ $2$ $29/7$ $8/7$ $64/49$ $-1/7$ $1/49$
4 2 1 $-2$ $-2$ $4$ $4$ $4$ $13/7$ $-8/7$ $64/49$ $-6/7$ $36/49$
5 8 5 $4$ $2$ $16$ $4$ $8$ $37/7$ $16/7$ $256/49$ $-2/7$ $4/49$
6 2 2 $-2$ $-1$ $4$ $1$ $2$ $13/7$ $-8/7$ $64/49$ $1/7$ $1/49$
7 4 3 $0$ $0$ $0$ $0$ $0$ $3$ $0$ $0$ $0$ $0$
8 6 4 $2$ $1$ $4$ $1$ $2$ $29/7$ $8/7$ $64/49$ $-1/7$ $1/49$
9 4 3 $0$ $0$ $0$ $0$ $0$ $3$ $0$ $0$ $0$ $0$
10 4 4 $0$ $1$ $0$ $1$ $0$ $3$ $0$ $0$ $1$ $1$
Total $40$ $30$ $0$ $0$ $42$ $16$ $24$ $30$ $0$ $96/7$ $0$ $16/7$
Mean $4$ $3$ $0$ $0$ $14/3$ $16/9$ $8/3$ $3$ $0$ $96/7$ $0$ $2/7$
1. discrete, ratio
2. $r = 2 \sqrt{3/14} \approx 0.926$, $r^2 = 6/7$
3. $y = 3 + \frac{4}{7}(x - 4)$
4. $16 = 96/7 + 16/7$
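The hand computations above can be checked with a few lines of software. Here is one possible Python sketch for the ESU data (any statistical package would give the same results):
```python
import numpy as np

x = np.array([1, 3, 6, 2, 8, 2, 4, 6, 4, 4], dtype=float)
y = np.array([1, 3, 4, 1, 5, 2, 3, 4, 3, 4], dtype=float)

r = np.corrcoef(x, y)[0, 1]                          # sample correlation
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # slope of the regression line
a = y.mean() - b * x.mean()                          # intercept of the regression line
y_hat = a + b * x

sst = np.sum((y - y.mean()) ** 2)       # total sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
sse = np.sum((y - y_hat) ** 2)          # error sum of squares

print(r, r ** 2)         # approximately 0.926 and 6/7
print(a, b)              # line y = a + b x, with b = 4/7
print(sst, ssr + sse)    # both 16, verifying SST = SSR + SSE
```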
The following two exercises should help you review some of the probability topics in this section.
Suppose that $(X, Y)$ has a continuous distribution with probability density function $f(x, y) = 15 x^2 y$ for $0 \le x \le y \le 1$. Find each of the following:
1. $\mu = \E(X)$ and $\nu = \E(Y)$
2. $\sigma^2 = \var(X)$ and $\tau^2 = \var(Y)$
3. $\sigma_3 = \E\left[(X - \mu)^3\right]$ and $\tau_3 = \E\left[(Y - \nu)^3\right]$
4. $\sigma_4 = \E\left[(X - \mu)^4\right]$ and $\tau_4 = \E\left[(Y - \nu)^4\right]$
5. $\delta = \cov(X, Y)$, $\rho = \cor(X, Y)$, and $\delta_2 = \E\left[(X - \mu)^2 (Y - \nu)^2\right]$
6. $L(Y \mid X)$ and $L(X \mid Y)$
Answer
1. $5/8$, $5/6$
2. $17/448$, $5/252$
3. $-5/1792$, $-5/1512$
4. $305/86\,016$, $5/3024$
5. $5/336$, $\sqrt{5/17}$, $1/768$
6. $L(Y \mid X) = \frac{10}{17} + \frac{20}{51} X$, $L(X \mid Y) = \frac{3}{4} Y$
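If you prefer to verify some of these moments numerically rather than by hand, a rough Python sketch using numerical integration (an illustration only, with the density and region taken from the exercise) is given below.
```python
from scipy.integrate import dblquad

def density(x, y):
    # f(x, y) = 15 x^2 y on the triangle 0 <= x <= y <= 1
    return 15 * x**2 * y

def expect(g):
    # E[g(X, Y)]; dblquad integrates the inner variable (y, from x to 1) first
    val, _ = dblquad(lambda y, x: g(x, y) * density(x, y), 0, 1, lambda x: x, lambda x: 1)
    return val

mu = expect(lambda x, y: x)                        # 5/8 = 0.625
nu = expect(lambda x, y: y)                        # 5/6 = 0.8333...
sigma2 = expect(lambda x, y: (x - mu) ** 2)        # 17/448 = 0.0379...
delta = expect(lambda x, y: (x - mu) * (y - nu))   # 5/336 = 0.0148...
print(mu, nu, sigma2, delta)
```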
Suppose now that $\left((X_1, Y_1), (X_2, Y_2), \ldots (X_9, Y_9)\right)$ is a random sample of size $9$ from the distribution in the previous exercise. Find each of the following:
1. $\E[M(\bs{X})]$ and $\var[M(\bs{X})]$
2. $\E[M(\bs{Y})]$ and $\var[M(\bs{Y})]$
3. $\cov[M(\bs{X}), M(\bs{Y})]$ and $\cor[M(\bs{X}), M(\bs{Y})]$
4. $\E[W^2(\bs{X})]$ and $\var[W^2(\bs{X})]$
5. $\E[W^2(\bs{Y})]$ and $\var[W^2(\bs{Y})]$
6. $\E[S^2(\bs{X})]$ and $\var[S^2(\bs{X})]$
7. $\E[S^2(\bs{Y})]$ and $\var[S^2(\bs{Y})]$
8. $\E[W(\bs{X}, \bs{Y})]$ and $\var[W(\bs{X}, \bs{Y})]$
9. $\E[S(\bs{X}, \bs{Y})]$ and $\var[S(\bs{X}, \bs{Y})]$
Answer
1. $5/8$, $17/4032$
2. $5/6$, $5/2268$
3. $5/3024$, $\sqrt{5/17}$
4. $17/448$, $317/1\,354\,752$
5. $5/252$, $5/35\,721$
6. $17/448$, $5935/21\,676\,032$
7. $5/252$, $115/762\,048$
8. $5/336$, $61/508\,032$
9. $5/336$, $181/1\,354\,752$
Data Analysis Exercises
Use statistical software for the following problems.
Consider the height variables in Pearson's height data.
1. Classify the variables by type and level of measurement.
2. Compute the correlation coefficient and the coefficient of determination
3. Compute the least squares regression line, with the height of the father as the predictor variable and the height of the son as the response variable.
4. Draw the scatterplot and the regression line together.
5. Predict the height of a son whose father is 68 inches tall.
6. Compute the regression line if the heights are converted to centimeters (there are 2.54 centimeters per inch).
Answer
1. Continuous, ratio
2. $r = 0.501$, $r^2 = 0.251$
3. $y = 33.893 + 0.514 x$
5. 68.85
6. $y = 86.088 + 0.514 x$
Consider the petal length, petal width, and species variables in Fisher's iris data.
1. Classify the variables by type and level of measurement.
2. Compute the correlation between petal length and petal width.
3. Compute the correlation between petal length and petal width by species.
Answer
1. Species: discrete, nominal; petal length and width: continuous, ratio
2. 0.9559
3. Setosa: 0.3316, Virginica: 0.3496, Versicolor: 0.6162
Consider the number of candies and net weight variables in the M&M data.
1. Classify the variables by type and level of measurement.
2. Compute the correlation coefficient and the coefficient of determination.
3. Compute the least squares regression line with number of candies as the predictor variable and net weight as the response variable.
4. Draw the scatterplot and the regression line together.
5. Predict the net weight of a bag of M&Ms with 56 candies.
6. Naively, one might expect a much stronger correlation between the number of candies and the net weight in a bag of M&Ms. What is another source of variability in net weight?
Answer
1. Number of candies: discrete, ratio; net weight: continuous, ratio
2. $r = 0.793$, $r^2 = 0.629$
3. $y = 20.278 + 0.507 x$
5. 48.657
6. Variability in the weight of individual candies.
Consider the response rate and total SAT score variables in the SAT by state data set.
1. Classify the variables by type and level of measurement.
2. Compute the correlation coefficient and the coefficient of determination.
3. Compute the least squares regression line with response rate as the predictor variable and SAT score as the response variable.
4. Draw the scatterplot and regression line together.
5. Give a possible explanation for the negative correlation.
Answer
1. Response rate: continuous, ratio. SAT score could probably be considered either discrete or continuous, but is only at the interval level of measurement, since the smallest possible score is 400 (200 each on the verbal and math portions).
2. $r = -0.849$, $r^2 = 0.721$
3. $y = 1141.5 - 2.1 x$
5. States with low response rate may be states for which the SAT is optional. In that case, the students who take the test are the better, college-bound students. Conversely, states with high response rates may be states for which the SAT is mandatory. In that case, all students including the weaker, non-college-bound students take the test.
Consider the verbal and math SAT scores (for all students) in the SAT by year data set.
1. Classify the variables by type and level of measurement.
2. Compute the correlation coefficient and the coefficient of determination.
3. Compute the least squares regression line.
4. Draw the scatterplot and regression line together.
Answer
1. Continuous perhaps, but only at the interval level of measurement because the smallest possible score on each part is 200.
2. $r = 0.614$, $r^2 = 0.377$
3. $y = 321.5 + 0.3 \, x$
Consider the temperature and erosion variables in the first data set in the Challenger data.
1. Classify the variables by type and level of measurement.
2. Compute the correlation coefficient and the coefficient of determination.
3. Compute the least squares regression line.
4. Draw the scatter plot and the regression line together.
5. Predict the O-ring erosion with a temperature of 31° F.
6. Is the prediction in the previous part meaningful? Explain.
7. Find the regression line if temperature is converted to degrees Celsius. Recall that the conversion is $\frac{5}{9}(x - 32)$.
Answer
1. temperature: continuous, interval; erosion: continuous, ratio
2. $r = -0.555$, $r^2 = 0.308$
3. $y = 106.8 - 1.414 x$
5. 62.9.
6. This estimate is problematic, because 31° is far outside of the range of the sample data.
7. $y = 61.54 - 2.545 x$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/06%3A_Random_Samples/6.07%3A_Sample_Correlation_and_Regression.txt
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bs}{\boldsymbol}$
Random samples from normal distributions are the most important special cases of the topics in this chapter. As we will see, many of the results simplify significantly when the underlying sampling distribution is normal. In addition we will derive the distributions of a number of random variables constructed from normal samples that are of fundamental importance in inferential statistics.
The One Sample Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$. Recall that the term random sample means that $\bs{X}$ is a sequence of independent, identically distributed random variables. Recall also that the normal distribution has probability density function $f(x) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp \left[ -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right], \quad x \in \R$ In the notation that we have used elsewhere in this chapter, $\sigma_3 = \E\left[(X - \mu)^3\right] = 0$ (equivalently, the skewness of the normal distribution is 0) and $\sigma_4 = \E\left[(X - \mu)^4\right] = 3 \sigma^4$ (equivalently, the kurtosis of the normal distribution is 3). Since the sample (and in particular the sample size $n$) is fixed in this subsection, it will be suppressed in the notation.
The Sample Mean
First recall that the sample mean is $M = \frac{1}{n} \sum_{i=1}^n X_i$
$M$ is normally distributed with mean and variance given by
1. $\E(M) = \mu$
2. $\var(M) = \sigma^2 / n$
Proof
This follows from basic properties of the normal distribution. Recall that the sum of independent normally distributed variables also has a normal distribution, and a linear transformation of a normally distributed variable is also normally distributed. The mean and variance of $M$ hold in general, and were derived in the section on the Law of Large Numbers.
Of course, by the central limit theorem, the distribution of $M$ is approximately normal, if $n$ is large, even if the underlying sampling distribution is not normal. The standard score of $M$ is given as follows: $Z = \frac{M - \mu}{\sigma / \sqrt{n}}$
$Z$ has the standard normal distribution.
The standard score $Z$ associated with the sample mean $M$ plays a critical role in constructing interval estimates and hypothesis tests for the distribution mean $\mu$ when the distribution standard deviation $\sigma$ is known. The random variable $Z$ will also appear in several derivations in this section.
The Sample Variance
The main goal of this subsection is to show that certain multiples of the two versions of the sample variance that we have studied have chi-square distributions. Recall that the chi-square distribution with $k \in \N_+$ degrees of freedom has probability density function $f(x) = \frac{1}{\Gamma(k / 2) 2^{k/2}} x^{k/2 - 1} e^{-x/2}, \quad 0 \lt x \lt \infty$ and has mean $k$ and variance $2k$. The moment generating function is $G(t) = \frac{1}{(1 - 2t)^{k/2}}, \quad -\infty \lt t \lt \frac{1}{2}$ The most important result to remember is that the chi-square distribution with $k$ degrees of freedom governs $\sum_{i=1}^k Z_i^2$, where $(Z_1, Z_2, \ldots, Z_k)$ is a sequence of independent, standard normal random variables.
Recall that if $\mu$ is known, a natural estimator of the variance $\sigma^2$ is the statistic $W^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2$ Although the assumption that $\mu$ is known is almost always artificial, $W^2$ is very easy to analyze and it will be used in some of the derivations below. Our first result is the distribution of a simple multiple of $W^2$. Let $U = \frac{n}{\sigma^2} W^2$
$U$ has the chi-square distribution with $n$ degrees of freedom.
Proof
Note that $\frac{n}{\sigma^2} W^2 = \sum_{i=1}^n \left(\frac{X_i - \mu}{\sigma}\right)^2$ and the terms in the sum are independent standard normal variables.
The variable $U$ associated with the statistic $W^2$ plays a critical role in constructing interval estimates and hypothesis tests for the distribution standard deviation $\sigma$ when the distribution mean $\mu$ is known (although again, this assumption is usually not realistic).
The mean and variance of $W^2$ are
1. $\E(W^2) = \sigma^2$
2. $\var(W^2) = 2 \sigma^4 / n$
Proof
These results follow from the chi-square distribution of $U$ and standard properties of expected value and variance.
As an estimator of $\sigma^2$, part (a) means that $W^2$ is unbiased and part (b) means that $W^2$ is consistent. Of course, these moment results are special cases of the general results obtained in the section on Sample Variance. In that section, we also showed that $M$ and $W^2$ are uncorrelated if the underlying sampling distribution has skewness 0 ($\sigma_3 = 0$), as is the case here.
Recall now that the standard version of the sample variance is the statistic $S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$ The sample variance $S^2$ is the usual estimator of $\sigma^2$ when $\mu$ is unknown (which is usually the case). We showed earlier that in general, the sample mean $M$ and the sample variance $S^2$ are uncorrelated if the underlying sampling distribution has skewness 0 ($\sigma_3 = 0$). It turns out that if the sampling distribution is normal, these variables are in fact independent, a very important and useful property, and at first blush, a very surprising result since $S^2$ appears to depend explicitly on $M$.
The sample mean $M$ and the sample variance $S^2$ are independent.
Proof
The proof is based on the vector of deviations from the sample mean. Let $\bs{D} = (X_1 - M, X_2 - M, \ldots, X_{n-1} - M)$ Note that $S^2$ can be written as a function of $\bs{D}$ since $\sum_{i=1}^n (X_i - M) = 0$. Next, $M$ and the vector $\bs{D}$ have a joint multivariate normal distribution. We showed earlier that $M$ and $X_i - M$ are uncorrelated for each $i$, and hence it follows that $M$ and $\bs{D}$ are independent. Finally, since $S^2$ is a function of $\bs{D}$, it follows that $M$ and $S^2$ are independent.
We can now determine the distribution of a simple multiple of the sample variance $S^2$. Let $V = \frac{n-1}{\sigma^2} S^2$
$V$ has the chi-square distribution with $n - 1$ degrees of freedom.
Proof
We first show that $U = V + Z^2$ where $U$ is the chi-square variable associated with $W^2$ and where $Z$ is the standard score associated with $M$. To see this, note that \begin{align} U & = \frac{1}{\sigma^2} \sum_{i=1}^n (X_i - \mu)^2 = \frac{1}{\sigma^2} \sum_{i=1}^n (X_i - M + M - \mu)^2 \\ & = \frac{1}{\sigma^2} \sum_{i=1}^n (X_i - M)^2 + \frac{2}{\sigma^2} \sum_{i=1}^n (X_i - M)(M - \mu) + \frac{1}{\sigma^2} \sum_{i=1}^n (M - \mu)^2 \end{align} In the right side of the last equation, the first term is $V$. The second term is 0 because $\sum_{i=1}^n (X_i - M) = 0$. The last term is $\frac{n}{\sigma^2}(M - \mu)^2 = Z^2$. Now, from the result above, $U$ has the chi-square distribution with $n$ degrees of freedom, and of course $Z^2$ has the chi-square distribution with 1 degree of freedom. From the previous result, $V$ and $Z^2$ are independent. Recall that the moment generating function of a sum of independent variables is the product of the MGFs. Thus, taking moment generating functions in the equation $U = V + Z^2$ gives $\frac{1}{(1 - 2t)^{n/2}} = \E(e^{t V}) \frac{1}{(1 - 2 t)^{1/2}}, \quad t \lt \frac{1}{2}$ Solving we have $\E(e^{t V}) = 1 \big/ (1 - 2 t)^{(n-1)/2}$ for $t \lt 1/2$ and therefore $V$ has the chi-square distribution with $n - 1$ degrees of freedom.
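A simple simulation can make this result concrete. The sketch below (Python, with arbitrary parameter values chosen only for illustration) computes $V = (n - 1) S^2 / \sigma^2$ over many normal samples and checks that its empirical mean and variance match those of the chi-square distribution with $n - 1$ degrees of freedom.
```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)        # standard sample variance S^2
v = (n - 1) * s2 / sigma**2             # should be chi-square with n - 1 degrees of freedom

# chi-square(k) has mean k and variance 2k; here k = n - 1 = 9
print(v.mean(), v.var())                # approximately 9 and 18
```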
The variable $V$ associated with the statistic $S^2$ plays a critical role in constructing interval estimates and hypothesis tests for the distribution standard deviation $\sigma$ when the distribution mean $\mu$ is unknown (almost always the case).
The mean and variance of $S^2$ are
1. $\E(S^2) = \sigma^2$
2. $\var(S^2) = 2 \sigma^4 \big/ (n - 1)$
Proof
These results follow from the chi-square distribution of $V$ and standard properties of expected value and variance.
As before, these moment results are special cases of the general results obtained in the section on Sample Variance. Again, as an estimator of $\sigma^2$, part (a) means that $S^2$ is unbiased, and part (b) means that $S^2$ is consistent. Note also that $\var(S^2)$ is larger than $\var(W^2)$ (not surprising), by a factor of $\frac{n}{n - 1}$.
In the special distribution simulator, select the chi-square distribution. Vary the degree of freedom parameter and note the shape and location of the probability density function and the mean$\pm$standard deviation bar. For selected values of the parameter, run the experiment 1000 times and compare the empirical density function and moments to the true probability density function and moments.
The covariance and correlation between the special sample variance and the standard sample variance are
1. $\cov(W^2, S^2) = 2 \sigma^4 / n$
2. $\cor(W^2, S^2) = \sqrt{(n - 1) / n}$
Proof
These results follow from general results obtained in the section on sample variance and the fact that $\sigma_4 = 3 \sigma^4$.
Note that the correlation does not depend on the parameters $\mu$ and $\sigma$, and converges to 1 as $n \to \infty$.
The $T$ Variable
Recall that the Student $t$ distribution with $k \in \N_+$ degrees of freedom has probability density function $f(t) = C_k \left( 1 + \frac{t^2}{k} \right)^{-(k + 1) / 2}, \quad t \in \R$ where $C_k$ is the appropriate normalizing constant. The distribution has mean 0 if $k \gt 1$ and variance $k / (k - 2)$ if $k \gt 2$. In this subsection, the main point to remember is that the $t$ distribution with $k$ degrees of freedom is the distribution of $\frac{Z}{\sqrt{V / k}}$ where $Z$ has the standard normal distribution; $V$ has the chi-square distribution with $k$ degrees of freedom; and $Z$ and $V$ are independent. Our goal is to derive the distribution of $T = \frac{M - \mu}{S / \sqrt{n}}$ Note that $T$ is similar to the standard score $Z$ associated with $M$, but with the sample standard deviation $S$ replacing the distribution standard deviation $\sigma$. The variable $T$ plays a critical role in constructing interval estimates and hypothesis tests for the distribution mean $\mu$ when the distribution standard deviation $\sigma$ is unknown.
As usual, let $Z$ denote the standard score associated with the sample mean $M$, and let $V$ denote the chi-square variable associated with the sample variance $S^2$. Then $T = \frac{Z}{\sqrt{V / (n - 1)}}$ and hence $T$ has the student $t$ distribution with $n - 1$ degrees of freedom.
Proof
In the definition of $T$, divide the numerator and denominator by $\sigma / \sqrt{n}$. The numerator is then $(M - \mu) \big/ (\sigma / \sqrt{n}) = Z$ and the denominator is $S / \sigma = \sqrt{V / (n - 1)}$. Since $Z$ and $V$ are independent, $Z$ has the standard normal distribution, and $V$ has the chi-square distribution with $n - 1$ degrees of freedom, it follows that $T$ has the student $t$ distribution with $n - 1$ degrees of freedom.
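Here is a small simulation sketch of this result (arbitrary parameter values, for illustration only): the statistic $T$ computed from repeated normal samples should match the $t$ distribution with $n - 1$ degrees of freedom.
```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 3.0, 8, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
m = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)
T = (m - mu) / (s / np.sqrt(n))         # should have the t distribution with n - 1 df

# compare an empirical tail probability with the exact t(n - 1) value
print((T > 1.5).mean(), t.sf(1.5, df=n - 1))
```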
In the special distribution simulator, select the $t$ distribution. Vary the degree of freedom parameter and note the shape and location of the probability density function and the mean$\pm$standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the empirical density function and moments to the distribution density function and moments.
The Two Sample Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_m)$ is a random sample of size $m$ from the normal distribution with mean $\mu \in \R$ and standard deviation $\sigma \in (0, \infty)$, and that $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a random sample of size $n$ from the normal distribution with mean $\nu \in \R$ and standard deviation $\tau \in (0, \infty)$. Finally, suppose that $\bs{X}$ and $\bs{Y}$ are independent. Of course, all of the results above in the one sample model apply to $\bs{X}$ and $\bs{Y}$ separately, but now we are interested in statistics that are helpful in inferential procedures that compare the two normal distributions. We will use the basic notation established above, but we will indicate the dependence on the sample.
The two-sample (or more generally the multi-sample) model occurs naturally when a basic variable in the statistical experiment is filtered according to one or more other variables (often nominal variables). For example, in the cicada data, the weights of the male cicadas and the weights of the female cicadas may fit observations from the two-sample normal model. The basic variable weight is filtered by the variable gender. If weight is filtered by gender and species, we might have observations from the 6-sample normal model.
The Difference in the Sample Means
We know from our work above that $M(\bs{X})$ and $M(\bs{Y})$ have normal distributions. Moreover, these sample means are independent because the underlying samples $\bs{X}$ and $\bs{Y}$ are independent. Hence, it follows from a basic property of the normal distribution that any linear combination of $M(\bs{X})$ and $M(\bs{Y})$ will be normally distributed as well. For inferential procedures that compare the distribution means $\mu$ and $\nu$, the linear combination that is most important is the difference.
$M(\bs{X}) - M(\bs{Y})$ has a normal distribution with mean and variance given by
1. $\E\left[M(\bs{X}) - M(\bs{Y})\right] = \mu - \nu$
2. $\var\left[M(\bs{X}) - M(\bs{Y})\right] = \sigma^2 / m + \tau^2 / n$
Hence the standard score $Z = \frac{\left[M(\bs{X}) - M(\bs{Y})\right] - (\mu - \nu)}{\sqrt{\sigma^2 / m + \tau^2 / n}}$ has the standard normal distribution. This standard score plays a fundamental role in constructing interval estimates and hypothesis tests for the difference $\mu - \nu$ when the distribution standard deviations $\sigma$ and $\tau$ are known.
Ratios of Sample Variances
Next we will show that the ratios of certain multiples of the sample variances (both versions) of $\bs{X}$ and $\bs{Y}$ have $F$ distributions. Recall that the $F$ distribution with $j \in \N_+$ degrees of freedom in the numerator and $k \in \N_+$ degrees of freedom in the denominator is the distribution of $\frac{U / j}{V / k}$ where $U$ has the chi-square distribution with $j$ degrees of freedom; $V$ has the chi-square distribution with $k$ degrees of freedom; and $U$ and $V$ are independent. The $F$ distribution is named in honor of Ronald Fisher and has probability density function $f(x) = C_{j,k} \frac{x^{(j-2) / 2}}{\left[1 + (j / k) x\right]^{(j + k) / 2}}, \quad 0 \lt x \lt \infty$ where $C_{j,k}$ is the appropriate normalizing constant. The mean is $\frac{k}{k - 2}$ if $k \gt 2$, and the variance is $2 \left(\frac{k}{k - 2}\right)^2 \frac{j + k - 2}{j (k - 4)}$ if $k \gt 4$.
The random variable given below has the $F$ distribution with $m$ degrees of freedom in the numerator and $n$ degrees of freedom in the denominator: $\frac{W^2(\bs{X}) / \sigma^2}{W^2(\bs{Y}) / \tau^2}$
Proof
Using the notation in the subsection on the special sample variances, note that $W^2(\bs{X}) / \sigma^2 = U(\bs{X}) / m$ and $W^2(\bs{Y}) / \tau^2 = U(\bs{Y}) / n$. The result then follows immediately since $U(\bs{X})$ and $U(\bs{Y})$ are independent chi-square variables with $m$ and $n$ degrees of freedom, respectively.
The random variable given below has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator: $\frac{S^2(\bs{X}) / \sigma^2}{S^2(\bs{Y}) / \tau^2}$
Proof
Using the notation in the subsection on the standard sample variances, note that $S^2(\bs{X}) / \sigma^2 = V(\bs{X}) \big/ (m - 1)$ and $S^2(\bs{Y}) / \tau^2 = V(\bs{Y}) \big/ (n - 1)$. The result then follows immediately since $V(\bs{X})$ and $V(\bs{Y})$ are independent chi-square variables with $m - 1$ and $n - 1$ degrees of freedom, respectively.
These variables are useful for constructing interval estimates and hypothesis tests of the ratio of the standard deviations $\sigma / \tau$. The choice of the $F$ variable depends on whether the means $\mu$ and $\nu$ are known or unknown. Usually, of course, the means are unknown and so the statistic above is used.
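A short simulation sketch (arbitrary parameter values, for illustration only) of the second $F$ result: the ratio of the scaled sample variances, computed over many pairs of independent normal samples, should behave like an $F$ variable with $m - 1$ and $n - 1$ degrees of freedom.
```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(2)
sigma, tau, m, n, reps = 2.0, 5.0, 6, 11, 100_000

X = rng.normal(0.0, sigma, size=(reps, m))
Y = rng.normal(0.0, tau, size=(reps, n))
ratio = (X.var(axis=1, ddof=1) / sigma**2) / (Y.var(axis=1, ddof=1) / tau**2)

# should follow the F distribution with m - 1 and n - 1 degrees of freedom
print(ratio.mean(), f.mean(m - 1, n - 1))   # both approximately (n - 1) / (n - 3) = 1.25
```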
In the special distribution simulator, select the $F$ distribution. Vary the degrees of freedom parameters and note the shape and location of the probability density function and the mean$\pm$standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the empirical density function and moments to the true distribution density function and moments.
The $T$ Variable
Our final construction in the two sample normal model will result in a variable that has the student $t$ distribution. This variable plays a fundamental role in constructing interval estimates and hypothesis tests for the difference $\mu - \nu$ when the distribution standard deviations $\sigma$ and $\tau$ are unknown. The construction requires the additional assumption that the distribution standard deviations are the same: $\sigma = \tau$. This assumption is reasonable if there is an inherent variability in the measurement variables that does not change even when different treatments are applied to the objects in the population.
Note first that the standard score associated with the difference in the sample means becomes $Z = \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{\sigma \sqrt{1 / m + 1 / n}}$ To construct our desired variable, we first need an estimate of $\sigma^2$. A natural approach is to consider a weighted average of the sample variances $S^2(\bs{X})$ and $S^2(\bs{Y})$, with the degrees of freedom as the weight factors (this is called the pooled estimate of $\sigma^2$). Thus, let $S^2(\bs{X}, \bs{Y}) = \frac{(m - 1) S^2(\bs{X}) + (n - 1) S^2(\bs{Y})}{m + n - 2}$
The random variable $V$ given below has the chi-square distribution with $m + n - 2$ degrees of freedom: $V = \frac{(m - 1)S^2(\bs{X}) + (n - 1) S^2(\bs{Y})}{\sigma^2}$
Proof
The variable can be expressed as the sum of independent chi-square variables.
The variables $M(\bs{Y}) - M(\bs{X})$ and $S^2(\bs{X}, \bs{Y})$ are independent.
Proof
Since the samples $\bs{X}$ and $\bs{Y}$ are independent, the pairs $\left(M(\bs{X}), S^2(\bs{X})\right)$ and $\left(M(\bs{Y}), S^2(\bs{Y})\right)$ are independent. Moreover, $M(\bs{X})$ and $S^2(\bs{X})$ are independent, and $M(\bs{Y})$ and $S^2(\bs{Y})$ are independent. Hence the four variables are mutually independent, so $M(\bs{Y}) - M(\bs{X})$, a function of the sample means, is independent of $S^2(\bs{X}, \bs{Y})$, a function of the sample variances.
The random variable $T$ given below has the student $t$ distribution with $m + n - 2$ degrees of freedom. $T = \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}}$
Proof
The random variable can be written as $Z \big/ \sqrt{V / (m + n - 2)}$ where $Z$ is the standard normal variable given above and $V$ is the chi-square variable given above. Moreover, $Z$ and $V$ are independent by the previous result.
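In computational form, the pooled variance and the two-sample $T$ statistic look like the sketch below (simulated data with equal true standard deviations, as the derivation assumes; the call to scipy's ttest_ind is included only as a cross-check, since with equal_var=True it performs the same pooled computation).
```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=12)   # sample from N(mu, sigma)
y = rng.normal(11.0, 2.0, size=15)   # sample from N(nu, sigma), same sigma

m, n = len(x), len(y)
s2_pooled = ((m - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (m + n - 2)
T = (y.mean() - x.mean()) / np.sqrt(s2_pooled * (1 / m + 1 / n))   # tests nu - mu = 0

print(T)
print(ttest_ind(y, x, equal_var=True).statistic)   # same value
```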
The Bivariate Sample Model
Suppose now that $\left((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\right)$ is a random sample of size $n$ from the bivariate normal distribution with means $\mu \in \R$ and $\nu \in \R$, standard deviations $\sigma \in (0, \infty)$ and $\tau \in (0, \infty)$, and correlation $\rho \in (-1, 1)$. Of course, $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the normal distribution with mean $\mu$ and standard deviation $\sigma$, and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a random sample of size $n$ from the normal distribution with mean $\nu$ and standard deviation $\tau$, so the results above in the one sample model apply to $\bs{X}$ and $\bs{Y}$ individually. Thus our interest in this section is in the relations between various $\bs{X}$ and $\bs{Y}$ statistics and in the properties of the sample covariance.
The bivariate (or more generally multivariate) model occurs naturally when considering two (or more) variables in the statistical experiment. For example, the heights of the fathers and the heights of the sons in Pearson's height data may well fit observations from the bivariate normal model.
In the notation that we have used previously, recall that $\sigma_3 = \E\left[(X - \mu)^3\right] = 0$, $\sigma_4 = \E\left[(X - \mu)^4\right] = 3 \sigma^4$, $\tau_3 = \E\left[(Y - \nu)^3\right] = 0$, $\tau_4 = \E\left[(Y - \nu)^4\right] = 3 \tau^4$, $\delta = \cov(X, Y) = \sigma \tau \rho$, and $\delta_2 = \E[(X - \mu)^2 (Y - \nu)^2] = \sigma^2 \tau^2 (1 + 2 \rho^2)$.
The data vector $((X_1, Y_1), (X_2, Y_2), \ldots (X_n, Y_n))$ has a multivariate normal distribution.
1. The mean vector has a block form, with each block being $(\mu, \nu)$.
2. The variance-covariance matrix has a block-diagonal form, with each block being $\left[\begin{matrix} \sigma^2 & \sigma \tau \rho \\ \sigma \tau \rho & \tau^2 \end{matrix} \right]$.
Proof
This follows from standard results for the multivariate normal distribution. Of course the blocks in parts (a) and (b) are simply the mean and variance-covariance matrix of a single observation $(X, Y)$.
Sample Means
$\left(M(\bs{X}), M(\bs{Y})\right)$ has a bivariate normal distribution. The covariance and correlation are
1. $\cov\left[M(\bs{X}), M(\bs{Y})\right] = \sigma \tau \rho / n$
2. $\cor\left[M(\bs{X}), M(\bs{Y})\right] = \rho$
Proof
The bivariate normal distribution follows from the previous result since $(M(\bs{X}), M(\bs{Y}))$ can be obtained from the data vector by a linear transformation. Parts (a) and (b) follow from our previous general results.
Of course, we know the individual means and variances of $M(\bs{X})$ and $M(\bs{Y})$ from the one-sample model above. Hence we know the complete distribution of $(M(\bs{X}), M(\bs{Y}))$.
Sample Variances
The covariance and correlation between the special sample variances are
1. $\cov\left[W^2(\bs{X}), W^2(\bs{Y})\right] = 2 \sigma^2 \tau^2 \rho^2 / n$
2. $\cor\left[W^2(\bs{X}), W^2(\bs{Y})\right] = \rho^2$
Proof
These results follow from our previous general results and the special form of $\delta_2$, $\sigma_4$, and $\tau_4$.
The covariance and correlation between the standard sample variances are
1. $\cov\left[S^2(\bs{X}), S^2(\bs{Y})\right] = 2 \sigma^2 \tau^2 \rho^2 / (n - 1)$
2. $\cor\left[S^2(\bs{X}), S^2(\bs{Y})\right] = \rho^2$
Proof
These results follow from our previous general results and the special form of $\delta$, $\delta_2$, $\sigma_4$, and $\tau_4$.
Sample Covariance
If $\mu$ and $\nu$ are known (again usually an artificial assumption), a natural estimator of the distribution covariance $\delta$ is the special version of the sample covariance $W(\bs{X}, \bs{Y}) = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)(Y_i - \nu)$
The mean and variance of $W(\bs{X}, \bs{Y})$ are
1. $\E[W(\bs{X}, \bs{Y})] = \sigma \tau \rho$
2. $\var[W(\bs{X}, \bs{Y})] = \sigma^2 \tau^2 (1 + \rho^2) / n$
Proof
These results follow from our previous general results and the special form of $\delta$ and $\delta_2$.
If $\mu$ and $\nu$ are unknown (again usually the case), then a natural estimator of the distribution covariance $\delta$ is the standard sample covariance $S(\bs{X}, \bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})]$
The mean and variance of the sample covariance are
1. $\E[S(\bs{X}, \bs{Y})] = \sigma \tau \rho$
2. $\var[S(\bs{X}, \bs{Y})] = \sigma^2 \tau^2 (1 + \rho^2) \big/ (n - 1)$
Proof
These results follow from our previous general results and the special form of $\delta$ and $\delta_2$.
Computational Exercises
We use the basic notation established above for samples $\bs{X}$ and $\bs{Y}$, and for the statistics $M$, $W^2$, $S^2$, $T$, and so forth.
Suppose that the net weights (in grams) of 25 bags of M&Ms form a random sample $\bs{X}$ from the normal distribution with mean 50 and standard deviation 4. Find each of the following:
1. The mean and standard deviation of $M$.
2. The mean and standard deviation of $W^2$.
3. The mean and standard deviation of $S^2$.
4. The mean and standard deviation of $T$.
5. $\P(M \gt 49, S^2 \lt 20)$.
6. $\P(-1 \lt T \lt 1)$.
Answer
1. $50, \; 4 / 5$
2. $16, \; 16 \sqrt{2} / 5$
3. $16, \; 8 / \sqrt{3}$
4. $0, \; 2 \sqrt{3 / 11}$
5. $0.7291$
6. $0.6727$
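Parts 5 and 6 of this exercise can be checked numerically with the named distributions: $M$ is normal, $(n - 1) S^2 / \sigma^2$ is chi-square with $n - 1$ degrees of freedom and independent of $M$, and $T$ has the $t$ distribution with $n - 1$ degrees of freedom. A possible Python sketch:
```python
from scipy.stats import norm, chi2, t

n, mu, sigma = 25, 50.0, 4.0

# part 5: M and S^2 are independent, so the probabilities multiply
p_m = norm.sf(49, loc=mu, scale=sigma / n**0.5)       # P(M > 49)
p_s2 = chi2.cdf((n - 1) * 20 / sigma**2, df=n - 1)    # P(S^2 < 20)
print(p_m * p_s2)                                     # approximately 0.7291

# part 6: T has the t distribution with n - 1 = 24 degrees of freedom
print(t.cdf(1, df=n - 1) - t.cdf(-1, df=n - 1))       # approximately 0.6727
```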
Suppose that the SAT math scores from 16 Alabama students form a random sample $\bs{X}$ from the normal distribution with mean 550 and standard deviation 20, while the SAT math scores from 25 Georgia students form a random sample $\bs{Y}$ from the normal distribution with mean 540 and standard deviation 15. The two samples are independent. Find each of the following:
1. The mean and standard deviation of $M(\bs{X})$.
2. The mean and standard deviation of $M(\bs{Y})$.
3. The mean and standard deviation of $M(\bs{X}) - M(\bs{Y})$.
4. $\P[M(\bs{X}) \gt M(\bs{Y})]$.
5. The mean and standard deviation of $S^2(\bs{X})$.
6. The mean and standard deviation of $S^2(\bs{Y})$.
7. The mean and standard deviation of $S^2(\bs{X}) / S^2(\bs{Y})$
8. $\P[S(\bs{X}) \gt S(\bs{Y})]$.
Answer
1. $550, \; 5$
2. $540, \; 3$
3. $10, \; \sqrt{34}$
4. $0.9568$
5. $400, \; 80 \sqrt{10 / 3}$
6. $225, \; 75 \sqrt{3} / 2$
7. $64 / 33, \; \frac{32}{165} \sqrt{74 / 3}$
8. $0.8750$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/06%3A_Random_Samples/6.08%3A_Special_Properties_of_Normal_Samples.txt |
Point estimation refers to the process of estimating a parameter from a probability distribution, based on observed data from the distribution. It is one of the core topics in mathematical statistics. In this chapter, we will explore the most common methods of point estimation: the method of moments, the method of maximum likelihood, and Bayes' estimators. We also study important properties of estimators, including sufficiency and completeness, and the basic question of whether an estimator is the best possible one.
07: Point Estimation
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\eff}{\text{eff}}$ $\newcommand{\bs}{\boldsymbol}$
The Basic Statistical Model
As usual, our starting point is a random experiment with an underlying sample space and a probability measure $\P$. In the basic statistical model, we have an observable random variable $\bs{X}$ taking values in a set $S$. Recall that in general, this variable can have quite a complicated structure. For example, if the experiment is to sample $n$ objects from a population and record various measurements of interest, then the data vector has the form $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object. The most important special case is when $(X_1, X_2, \ldots, X_n)$ are independent and identically distributed (IID). In this case $\bs{X}$ is a random sample of size $n$ from the distribution of an underlying measurement variable $X$.
Statistics
Recall also that a statistic is an observable function of the outcome variable of the random experiment: $\bs{U} = \bs{u}(\bs{X})$ where $\bs{u}$ is a known function from $S$ into another set $T$. Thus, a statistic is simply a random variable derived from the observation variable $\bs{X}$, with the assumption that $\bs{U}$ is also observable. As the notation indicates, $\bs{U}$ is typically also vector-valued. Note that the original data vector $\bs{X}$ is itself a statistic, but usually we are interested in statistics derived from $\bs{X}$. A statistic $\bs{U}$ may be computed to answer an inferential question. In this context, if the dimension of $\bs{U}$ (as a vector) is smaller than the dimension of $\bs{X}$ (as is usually the case), then we have achieved data reduction. Ideally, we would like to achieve significant data reduction with no loss of information about the inferential question at hand.
Parameters
In the technical sense, a parameter $\bs{\theta}$ is a function of the distribution of $\bs{X}$, taking values in a parameter space $T$. Typically, the distribution of $\bs{X}$ will have $k \in \N_+$ real parameters of interest, so that $\bs{\theta}$ has the form $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$ and thus $T \subseteq \R^k$. In many cases, one or more of the parameters are unknown, and must be estimated from the data variable $\bs{X}$. This is one of the most important and basic of all statistical problems, and is the subject of this chapter. If $\bs{U}$ is a statistic, then the distribution of $\bs{U}$ will depend on the parameters of $\bs{X}$, and thus so will distributional constructs such as means, variances, covariances, probability density functions and so forth. We usually suppress this dependence notationally to keep our mathematical expressions from becoming too unwieldy, but it's very important to realize that the underlying dependence is present. Remember that the critical idea is that by observing a value $\bs{u}$ of a statistic $\bs{U}$ we (hopefully) gain information about the unknown parameters.
Estimators
Suppose now that we have an unknown real parameter $\theta$ taking values in a parameter space $T \subseteq \R$. A real-valued statistic $U = u(\bs{X})$ that is used to estimate $\theta$ is called, appropriately enough, an estimator of $\theta$. Thus, the estimator is a random variable and hence has a distribution, a mean, a variance, and so on (all of which, as noted above, will generally depend on $\theta$). When we actually run the experiment and observe the data $\bs{x}$, the observed value $u = u(\bs{x})$ (a single number) is the estimate of the parameter $\theta$. The following definitions are basic.
Suppose that $U$ is a statistic used as an estimator of a parameter $\theta$ with values in $T \subseteq \R$. For $\theta \in T$,
1. $U - \theta$ is the error.
2. $\bias(U) = \E(U - \theta) = \E(U) - \theta$ is the bias of $U$
3. $\mse(U) = \E\left[(U - \theta)^2\right]$ is the mean square error of $U$
Thus the error is the difference between the estimator and the parameter being estimated, so of course the error is a random variable. The bias of $U$ is simply the expected error, and the mean square error (the name says it all) is the expected square of the error. Note that bias and mean square error are functions of $\theta \in T$. The following definitions are a natural complement to the definition of bias.
Suppose again that $U$ is a statistic used as an estimator of a parameter $\theta$ with values in $T \subseteq \R$.
1. $U$ is unbiased if $\bias(U) = 0$, or equivalently $\E(U) = \theta$, for all $\theta \in T$.
2. $U$ is negatively biased if $\bias(U) \le 0$, or equivalently $\E(U) \le \theta$, for all $\theta \in T$.
3. $U$ is positively biased if $\bias(U) \ge 0$, or equivalently $\E(U) \ge \theta$, for all $\theta \in T$.
Thus, for an unbiased estimator, the expected value of the estimator is the parameter being estimated, clearly a desirable property. On the other hand, a positively biased estimator overestimates the parameter, on average, while a negatively biased estimator underestimates the parameter on average. Our definitions of negative and positive bias are weak in the sense that the weak inequalities $\le$ and $\ge$ are used. There are corresponding strong definitions, of course, using the strong inequalities $\lt$ and $\gt$. Note, however, that none of these definitions may apply. For example, it might be the case that $\bias(U) \lt 0$ for some $\theta \in T$, $\bias(U) = 0$ for other $\theta \in T$, and $\bias(U) \gt 0$ for yet other $\theta \in T$.
$\mse(U) = \var(U) + \bias^2(U)$
Proof
This follows from basic properties of expected value and variance: $\E[(U - \theta)^2] = \var(U - \theta) + [\E(U - \theta)]^2 = \var(U) + \bias^2(U)$
In particular, if the estimator is unbiased, then the mean square error of $U$ is simply the variance of $U$.
Ideally, we would like to have unbiased estimators with small mean square error. However, this is not always possible, and the result in (3) shows the delicate relationship between bias and mean square error. In the next section we will see an example with two estimators of a parameter that are multiples of each other; one is unbiased, but the other has smaller mean square error. However, if we have two unbiased estimators of $\theta$, we naturally prefer the one with the smaller variance (mean square error).
Suppose that $U$ and $V$ are unbiased estimators of a parameter $\theta$ with values in $T \subseteq \R$.
1. $U$ is more efficient than $V$ if $\var(U) \le \var(V)$.
2. The relative efficiency of $U$ with respect to $V$ is $\eff(U, V) = \frac{\var(V)}{\var(U)}$
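As a concrete illustration of these definitions, the following simulation sketch compares two unbiased estimators of the mean $\mu$ of a normal distribution, the sample mean and the sample median (the normal model, the parameter values, and the sample size are arbitrary choices made only for illustration), and approximates their relative efficiency.
```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, reps = 0.0, 1.0, 25, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
U = samples.mean(axis=1)         # sample mean, unbiased for mu
V = np.median(samples, axis=1)   # sample median, also unbiased for mu (symmetric distribution, odd n)

print(U.mean(), V.mean())                               # both approximately mu (bias near 0)
print(np.mean((U - mu) ** 2), np.mean((V - mu) ** 2))   # mean square errors (= variances here)
print(V.var() / U.var())   # relative efficiency of U to V: greater than 1, near pi/2 for large n
```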
Asymptotic Properties
Suppose again that we have a real parameter $\theta$ with possible values in a parameter space $T$. Often in a statistical experiment, we observe an infinite sequence of random variables over time, $\bs{X} = (X_1, X_2, \ldots)$, so that at time $n$ we have observed $\bs{X}_n = (X_1, X_2, \ldots, X_n)$. In this setting we often have a general formula that defines an estimator of $\theta$ for each sample size $n$. Technically, this gives a sequence of real-valued estimators of $\theta$: $\bs{U} = (U_1, U_2, \ldots)$ where $U_n$ is a real-valued function of $\bs{X}_n$ for each $n \in \N_+$. In this case, we can discuss the asymptotic properties of the estimators as $n \to \infty$. Most of the definitions are natural generalizations of the ones above.
The sequence of estimators $\bs{U} = (U_1, U_2, \ldots)$ is asymptotically unbiased if $\bias(U_n) \to 0$ as $n \to \infty$ for every $\theta \in T$, or equivalently, $\E(U_n) \to \theta$ as $n \to \infty$ for every $\theta \in T$.
Suppose that $\bs{U} = (U_1, U_2, \ldots)$ and $\bs{V} = (V_1, V_2, \ldots)$ are two sequences of estimators that are asymptotically unbiased. The asymptotic relative efficiency of $\bs{U}$ to $\bs{V}$ is $\lim_{n \to \infty} \eff(U_n, V_n) = \lim_{n \to \infty} \frac{\var(V_n)}{\var(U_n)}$ assuming that the limit exists.
Naturally, we expect our estimators to improve, as the sample size $n$ increases, and in some sense to converge to the parameter as $n \to \infty$. This general idea is known as consistency. Once again, for the remainder of this discussion, we assume that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of estimators for a real-valued parameter $\theta$, with values in the parameter space $T$.
Consistency
1. $\bs{U}$ is consistent if $U_n \to \theta$ as $n \to \infty$ in probability for each $\theta \in T$. That is, $\P\left(\left|U_n - \theta\right| \gt \epsilon\right) \to 0$ as $n \to \infty$ for every $\epsilon \gt 0$ and $\theta \in T$.
2. $\bs{U}$ is mean-square consistent if $\mse(U_n) = \E[(U_n - \theta)^2] \to 0$ as $n \to \infty$ for $\theta \in T$.
Here is the connection between the two definitions:
If $\bs{U}$ is mean-square consistent then $\bs{U}$ is consistent.
Proof
From Markov's inequality, $\P\left(\left|U_n - \theta\right| \gt \epsilon\right) = \P\left[(U_n - \theta)^2 \gt \epsilon^2\right] \le \frac{\E\left[(U_n - \theta)^2\right]}{\epsilon^2} \to 0 \text{ as } n \to \infty$
That mean-square consistency implies simple consistency is simply a statistical version of the theorem that states that mean-square convergence implies convergence in probability. Here is another nice consequence of mean-square consistency.
If $\bs{U}$ is mean-square consistent then $\bs{U}$ is asymptotically unbiased.
Proof
This result follows from the fact that mean absolute error is smaller than root mean square error, which in turn is a special case of a general result for norms. See the advanced section on vector spaces for more details. So, using this result and the ordinary triangle inequality for expected value we have $|\E(U_n - \theta)| \le \E(|U_n - \theta|) \le \sqrt{\E[(U_n - \theta)^2]} \to 0 \text{ as } n \to \infty$ Hence $\E(U_n) \to \theta$ as $n \to \infty$ for $\theta \in T$.
In the next several subsections, we will review several basic estimation problems that were studied in the chapter on Random Samples.
Estimation in the Single Variable Model
Suppose that $X$ is a basic real-valued random variable for an experiment, with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$. We sample from the distribution of $X$ to produce a sequence $\bs{X} = (X_1, X_2, \ldots)$ of independent variables, each with the distribution of $X$. For each $n \in \N_+$, $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of $X$.
Estimating the Mean
This subsection is a review of some results obtained in the section on the Law of Large Numbers in the chapter on Random Samples. Recall that a natural estimator of the distribution mean $\mu$ is the sample mean, defined by $M_n = \frac{1}{n} \sum_{i=1}^n X_i, \quad n \in \N_+$
Properties of $\bs M = (M_1, M_2, \ldots)$ as a sequence of estimators of $\mu$.
1. $\E(M_n) = \mu$ so $M_n$ is unbiased for $n \in \N_+$
2. $\var(M_n) = \sigma^2 / n$ for $n \in \N_+$ so $\bs M$ is consistent.
The consistency of $\bs M$ is simply the weak law of large numbers. Moreover, there are a number of important special cases of the results in (10). See the section on Sample Mean for the details.
Special cases of the sample mean
1. Suppose that $X = \bs{1}_A$, the indicator variable for an event $A$ that has probability $\P(A)$. Then the sample mean for a random sample of size $n \in \N_+$ from the distribution of $X$ is the relative frequency or empirical probability of $A$, denoted $P_n(A)$. Hence $P_n(A)$ is an unbiased estimator of $\P(A)$ for $n \in \N_+$ and $(P_n(A): n \in \N_+)$ is consistent.
2. Suppose that $F$ denotes the distribution function of a real-valued random variable $Y$. Then for fixed $y \in \R$, the empirical distribution function $F_n(y)$ is simply the sample mean for a random sample of size $n \in \N_+$ from the distribution of the indicator variable $X = \bs{1}(Y \le y)$. Hence $F_n(y)$ is an unbiased estimator of $F(y)$ for $n \in \N_+$ and $(F_n(y): n \in \N_+)$ is consistent.
3. Suppose that $U$ is a random variable with a discrete distribution on a countable set $S$ and $f$ denotes the probability density function of $U$. Then for fixed $u \in S$, the empirical probability density function $f_n(u)$ is simply the sample mean for a random sample of size $n \in \N_+$ from the distribution of the indicator variable $X = \bs{1}(U = u)$. Hence $f_n(u)$ is an unbiased estimator of $f(u)$ for $n \in \N_+$ and $(f_n(u): n \in \N_+)$ is consistent.
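Case (b) in the list above is easy to make concrete: for fixed $y$, the empirical distribution function $F_n(y)$ is just the sample mean of the indicator variables $\bs{1}(Y_i \le y)$. A minimal sketch (the exponential sampling distribution is an arbitrary choice for illustration):
```python
import numpy as np

rng = np.random.default_rng(5)
n, y = 1000, 1.0

sample = rng.exponential(scale=1.0, size=n)   # random sample from the distribution of Y
F_n = np.mean(sample <= y)                    # empirical distribution function at y

print(F_n, 1 - np.exp(-y))                    # estimate versus F(y) = 1 - e^{-y}
```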
Estimating the Variance
This subsection is a review of some results obtained in the section on the Sample Variance in the chapter on Random Samples. We also assume that the fourth central moment $\sigma_4 = \E\left[(X - \mu)^4\right]$ is finite. Recall that $\sigma_4 / \sigma^4$ is the kurtosis of $X$. Recall first that if $\mu$ is known (almost always an artificial assumption), then a natural estimator of $\sigma^2$ is a special version of the sample variance, defined by $W_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2, \quad n \in \N_+$
Properties of $\bs W^2 = (W_1^2, W_2^2, \ldots)$ as a sequence of estimators of $\sigma^2$.
1. $\E\left(W_n^2\right) = \sigma^2$ so $W_n^2$ is unbiased for $n \in \N_+$
2. $\var\left(W_n^2\right) = \frac{1}{n}(\sigma_4 - \sigma^4)$ for $n \in \N_+$ so $\bs W^2$ is consistent.
Proof
$\bs W^2$ corresponds to sampling from the distribution of $(X - \mu)^2$. This distribution has mean $\sigma^2$ and variance $\sigma_4 - \sigma^4$, so the results follow immediately from theorem (10).
If $\mu$ is unknown (the more reasonable assumption), then a natural estimator of the distribution variance is the standard version of the sample variance, defined by $S_n^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M_n)^2, \quad n \in \{2, 3, \ldots\}$
Properties of $\bs S^2 = (S_2^2, S_3^2, \ldots)$ as a sequence of estimators of $\sigma^2$
1. $\E\left(S_n^2\right) = \sigma^2$ so $S_n^2$ is unbiased for $n \in \{2, 3, \ldots\}$
2. $\var\left(S_n^2\right) = \frac{1}{n} \left(\sigma_4 - \frac{n - 3}{n - 1} \sigma^4 \right)$ for $n \in \{2, 3, \ldots\}$ so $\bs S^2$ is consistent.
Naturally, we would like to compare the sequences $\bs W^2$ and $\bs S^2$ as estimators of $\sigma^2$. But again remember that $\bs W^2$ only makes sense if $\mu$ is known.
Comparison of $\bs W^2$ and $\bs S^2$
1. $\var\left(W_n^2\right) \lt \var(S_n^2)$ for $n \in \{2, 3, \ldots\}$.
2. The asymptotic relative efficiency of $\bs W^2$ to $\bs S^2$ is 1.
So by (a) $W_n^2$ is better than $S_n^2$ for $n \in \{2, 3, \ldots\}$, assuming that $\mu$ is known so that we can actually use $W_n^2$. This is perhaps not surprising, but by (b) $S_n^2$ works just about as well as $W_n^2$ for a large sample size $n$. Of course, the sample standard deviation $S_n$ is a natural estimator of the distribution standard deviation $\sigma$. Unfortunately, this estimator is biased. Here is a more general result:
Suppose that $\theta$ is a parameter with possible values in $T \subseteq (0, \infty)$ (with at least two points) and that $U$ is a statistic with values in $T$. If $U^2$ is an unbiased estimator of $\theta^2$ then $U$ is a negatively biased estimator of $\theta$.
Proof
Note that $\var(U) = \E(U^2) - [\E(U)]^2 = \theta^2 - [\E(U)]^2, \quad \theta \in T$ Since $T$ has at least two points, $U$ cannot be deterministic so $\var(U) \gt 0$. It follows that $[\E(U)]^2 \lt \theta^2$ so $\E(U) \lt \theta$ for $\theta \in T$.
Thus, we should not be too obsessed with the unbiased property. For most sampling distributions, there will be no statistic $U$ with the property that $U$ is an unbiased estimator of $\sigma$ and $U^2$ is an unbiased estimator of $\sigma^2$.
Estimation in the Bivariate Model
In this subsection we review some of the results obtained in the section on the Correlation and Regression in the chapter on Random Samples
Suppose that $X$ and $Y$ are real-valued random variables for an experiment, so that $(X, Y)$ has a bivariate distribution in $\R^2$. Let $\mu = \E(X)$ and $\sigma^2 = \var(X)$ denote the mean and variance of $X$, and let $\nu = \E(Y)$ and $\tau^2 = \var(Y)$ denote the mean and variance of $Y$. For the bivariate parameters, let $\delta = \cov(X, Y)$ denote the distribution covariance and $\rho = \cor(X, Y)$ the distribution correlation. We need one higher-order moment as well: let $\delta_2 = \E\left[(X - \mu)^2 (Y - \nu)^2\right]$, and as usual, we assume that all of the parameters exist. So the general parameter spaces are $\mu, \, \nu \in \R$, $\sigma^2, \, \tau^2 \in (0, \infty)$, $\delta \in \R$, and $\rho \in [-1, 1]$. Suppose now that we sample from the distribution of $(X, Y)$ to generate a sequence of independent variables $\left((X_1, Y_1), (X_2, Y_2), \ldots\right)$, each with the distribution of $(X, Y)$. As usual, we will let $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ and $\bs{Y}_n = (Y_1, Y_2, \ldots, Y_n)$; these are random samples of size $n$ from the distributions of $X$ and $Y$, respectively.
Since we now have two underlying variables, we need to enhance our notation somewhat. It will help to define the deterministic versions of our statistics. So if $\bs x = (x_1, x_2, \ldots)$ and $\bs y = (y_1, y_2, \ldots)$ are sequences of real numbers and $n \in \N_+$, we define the mean and special covariance functions by \begin{align*} m_n(\bs x) & = \frac{1}{n} \sum_{i=1}^n x_i \\ w_n(\bs x, \bs y) & = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)(y_i - \nu) \end{align*} If $n \in \{2, 3, \ldots\}$ we define the variance and standard covariance functions by \begin{align*} s_n^2(\bs x) & = \frac{1}{n - 1} \sum_{i=1}^n [x_i - m_n(\bs x)]^2 \\ s_n(\bs x, \bs y) & = \frac{1}{n - 1} \sum_{i=1}^n [x_i - m_n(\bs x)][y_i - m_n(\bs y)] \end{align*} It should be clear from context whether we are using the one argument or two argument version of $s_n$. On this point, note that $s_n(\bs x, \bs x) = s_n^2(\bs x)$.
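These four functions translate directly into code. The sketch below is one possible implementation (plain numpy, with function names chosen to match the notation; the distribution means $\mu$ and $\nu$ must be supplied to the special covariance function).
```python
import numpy as np

def m(x):
    # sample mean m_n(x)
    return np.mean(x)

def w(x, y, mu, nu):
    # special covariance w_n(x, y), with the distribution means mu and nu known
    return np.mean((np.asarray(x) - mu) * (np.asarray(y) - nu))

def s2(x):
    # sample variance s_n^2(x), with divisor n - 1
    x = np.asarray(x)
    return np.sum((x - m(x)) ** 2) / (len(x) - 1)

def s(x, y):
    # sample covariance s_n(x, y), with divisor n - 1; note that s(x, x) agrees with s2(x)
    x, y = np.asarray(x), np.asarray(y)
    return np.sum((x - m(x)) * (y - m(y))) / (len(x) - 1)

x = [1.0, 3.0, 6.0, 2.0, 8.0]
y = [1.0, 3.0, 4.0, 1.0, 5.0]
print(m(x), s2(x), s(x, y))
```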
Estimating the Covariance
If $\mu$ and $\nu$ are known (almost always an artificial assumption), then a natural estimator of the distribution covariance $\delta$ is a special version of the sample covariance, defined by $W_n = w_n\left(\bs{X}, \bs{Y}\right) = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)(Y_i - \nu), \quad n \in \N_+$
Properties of $\bs W = (W_1, W_2, \ldots)$ as a sequence of estimators of $\delta$.
1. $\E\left(W_n\right) = \delta$ so $W_n$ is unbiased for $n \in \N_+$.
2. $\var\left(W_n\right) = \frac{1}{n}(\delta_2 - \delta^2)$ for $n \in \N_+$ so $\bs W$ is consistent.
Proof
We've done this proof before, but it's so basic that it's worth repeating. Note that $\bs W$ corresponds to sampling from the distribution of $(X - \mu) (Y - \nu)$. This distribution has mean $\delta$ and variance $\delta_2 - \delta^2$, so the results follow immediately from Theorem (10).
If $\mu$ and $\nu$ are unknown (usually the more reasonable assumption), then a natural estimator of the distribution covariance $\delta$ is the standard version of the sample covariance, defined by $S_n = s_n(\bs X , \bs Y) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - m_n(\bs X)][Y_i - m_n(\bs Y)], \quad n \in \{2, 3, \ldots\}$
Properties of $\bs S = (S_2, S_3, \ldots)$ as a sequence of estimators of $\delta$.
1. $\E\left(S_n\right) = \delta$ so $S_n$ is unbiased for $n \in \{2, 3, \ldots\}$.
2. $\var\left(S_n\right) = \frac{1}{n}\left(\delta_2 + \frac{1}{n - 1} \sigma^2 \tau^2 - \frac{n - 2}{n - 1} \delta^2\right)$ for $n \in \{2, 3, \ldots\}$ so $\bs S$ is consistent.
Once again, since we have two competing sequences of estimators of $\delta$, we would like to compare them.
Comparison of $\bs W$ and $\bs S$ as estimators of $\delta$:
1. $\var\left(W_n\right) \lt \var\left(S_n\right)$ for $n \in \{2, 3, \ldots\}$.
2. The asymptotic relative efficiency of $\bs W$ to $\bs S$ is 1.
Thus, $W_n$ is better than $S_n$ for $n \in \{2, 3, \ldots\}$, assuming that $\mu$ and $\nu$ are known so that we can actually use $W_n$. But for large $n$, $S_n$ works just about as well as $W_n$.
Estimating the Correlation
A natural estimator of the distribution correlation $\rho$ is the sample correlation $R_n = \frac{s_n (\bs X, \bs Y)}{s_n(\bs X) s_n(\bs Y)}, \quad n \in \{2, 3, \ldots\}$ Note that this statistic is a nonlinear function of the sample covariance and the two sample standard deviations. For most distributions of $(X, Y)$, we have no hope of computing the bias or mean square error of this estimator. If we could compute the expected value, we would probably find that the estimator is biased. On the other hand, even though we cannot compute the mean square error, a simple application of the law of large numbers shows that $R_n \to \rho$ as $n \to \infty$ with probability 1. Thus, $\bs R = (R_2, R_3, \ldots)$ is at least consistent.
Estimating the regression coefficients
Recall that the distribution regression line, with $X$ as the predictor variable and $Y$ as the response variable, is $y = a + b \, x$ where $a = \E(Y) - \frac{\cov(X, Y)}{\var(X)} \E(X), \quad b = \frac{\cov(X, Y)}{\var(X)}$ On the other hand, the sample regression line, based on the sample of size $n \in \{2, 3, \ldots\}$, is $y = A_n + B_n x$ where $A_n = m_n(\bs Y) - \frac{s_n(\bs X, \bs Y)}{s_n^2(\bs X )} m_n(\bs X), \quad B_n = \frac{s_n(\bs X, \bs Y)}{s_n^2(\bs X)}$ Of course, the statistics $A_n$ and $B_n$ are natural estimators of the parameters $a$ and $b$, respectively, and in a sense are derived from our previous estimators of the distribution mean, variance, and covariance. Once again, for most distributions of $(X, Y)$, it would be difficult to compute the bias and mean square errors of these estimators. But applications of the law of large numbers show that with probability 1, $A_n \to a$ and $B_n \to b$ as $n \to \infty$, so at least $\bs A = (A_2, A_3, \ldots)$ and $\bs B = (B_2, B_3, \ldots)$ are consistent.
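The consistency of $A_n$ and $B_n$ is easy to see in simulation. In the sketch below, the bivariate distribution (a linear model with normal noise) and its parameters are arbitrary choices for illustration; the estimates approach the distribution intercept and slope for a large sample size.
```python
import numpy as np

rng = np.random.default_rng(7)

# bivariate sample in which Y = 2 + 3 X + noise with X standard normal,
# so the distribution regression line has a = 2 and b = 3
n = 100_000
X = rng.normal(0.0, 1.0, size=n)
Y = 2.0 + 3.0 * X + rng.normal(0.0, 1.0, size=n)

B = np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1)   # slope estimator B_n
A = Y.mean() - B * X.mean()                          # intercept estimator A_n

print(A, B)    # close to a = 2 and b = 3 for large n (consistency)
```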
Exercises and Special Cases
The Poisson Distribution
Let's consider a simple example that illustrates some of the ideas above. Recall that the Poisson distribution with parameter $\lambda \in (0, \infty)$ has probability density function $g$ given by $g(x) = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x \in \N$ The Poisson distribution is often used to model the number of random points in a region of time or space, and is studied in more detail in the chapter on the Poisson process. The parameter $\lambda$ is proportional to the size of the region of time or space; the proportionality constant is the average rate of the random points. The distribution is named for Simeon Poisson.
Suppose that $X$ has the Poisson distribution with parameter $\lambda$. Then
1. $\mu = \E(X) = \lambda$
2. $\sigma^2 = \var(X) = \lambda$
3. $\sigma_4 = \E\left[(X - \lambda)^4\right] = 3 \lambda^2 + \lambda$
Proof
Recall the permutation notation $x^{(n)} = x (x - 1) \cdots (x - n + 1)$ for $x \in \R$ and $n \in \N$. The expected value $\E[X^{(n)}]$ is the factorial moment of $X$ of order $n$. It's easy to see that the factorial moments are $\E\left[X^{(n)}\right] = \lambda^n$ for $n \in \N$. The results follow from this.
Suppose now that we sample from the distribution of $X$ to produce a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots)$, each having the Poisson distribution with unknown parameter $\lambda \in (0, \infty)$. Again, $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution for each $n \in \N_+$. From the previous exercise, $\lambda$ is both the mean and the variance of the distribution, so that we could use either the sample mean $M_n$ or the sample variance $S_n^2$ as an estimator of $\lambda$. Both are unbiased, so which is better? Naturally, we use mean square error as our criterion.
Comparison of $\bs M$ to $\bs S^2$ as estimators of $\lambda$.
1. $\var\left(M_n\right) = \frac{\lambda}{n}$ for $n \in \N_+$.
2. $\var\left(S_n^2\right) = \frac{\lambda}{n} \left(1 + 2 \lambda \frac{n}{n - 1} \right)$ for $n \in \{2, 3, \ldots\}$.
3. $\var\left(M_n\right) \lt \var\left(S_n^2\right)$ so $M_n$ is better for $n \in \{2, 3, \ldots\}$.
4. The asymptotic relative efficiency of $\bs M$ to $\bs S^2$ is $1 + 2 \lambda$.
So our conclusion is that the sample mean $M_n$ is a better estimator of the parameter $\lambda$ than the sample variance $S_n^2$ for $n \in \{2, 3, \ldots\}$, and the difference in quality increases with $\lambda$.
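As a quick empirical check of this comparison, the following simulation sketch (an assumed setup; the parameter values are arbitrary, not from the text) approximates the mean square errors of $M_n$ and $S_n^2$ for Poisson samples.

```python
# A simulation sketch comparing the sample mean M and the sample variance S^2 as
# estimators of the Poisson parameter lambda; the parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 4.0, 20, 100_000
samples = rng.poisson(lam, size=(reps, n))
m = samples.mean(axis=1)                 # sample means
s2 = samples.var(axis=1, ddof=1)         # unbiased sample variances
mse_m = np.mean((m - lam) ** 2)          # should be close to lam / n
mse_s2 = np.mean((s2 - lam) ** 2)        # close to (lam/n)(1 + 2*lam*n/(n - 1))
print(mse_m, mse_s2)
```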
Run the Poisson experiment 100 times for several values of the parameter. In each case, compute the estimators $M$ and $S^2$. Which estimator seems to work better?
The emission of elementary particles from a sample of radioactive material in a time interval is often assumed to follow the Poisson distribution. Thus, suppose that the alpha emissions data set is a sample from a Poisson distribution. Estimate the rate parameter $\lambda$.
1. using the sample mean
2. using the sample variance
Answer
1. 8.367
2. 8.649
Simulation Exercises
In the sample mean experiment, set the sampling distribution to gamma. Increase the sample size with the scroll bar and note graphically and numerically the unbiased and consistent properties. Run the experiment 1000 times and compare the sample mean to the distribution mean.
Run the normal estimation experiment 1000 times for several values of the parameters.
1. Compare the empirical bias and mean square error of $M$ with the theoretical values.
2. Compare the empirical bias and mean square error of $S^2$ and of $W^2$ to their theoretical values. Which estimator seems to work better?
In the matching experiment, the random variable is the number of matches. Run the simulation 1000 times and compare
1. the sample mean to the distribution mean.
2. the empirical density function to the probability density function.
Run the exponential experiment 1000 times and compare the sample standard deviation to the distribution standard deviation.
Data Analysis Exercises
For Michelson's velocity of light data, compute the sample mean and sample variance.
Answer
852.4, 6242.67
For Cavendish's density of the earth data, compute the sample mean and sample variance.
Answer
5.448, 0.048817
For Short's parallax of the sun data, compute the sample mean and sample variance.
Answer
8.616, 0.561032
Consider the Cicada data.
1. Compute the sample mean and sample variance of the body length variable.
2. Compute the sample mean and sample variance of the body weight variable.
3. Compute the sample covariance and sample correlation between the body length and body weight variables.
Answer
1. 24.0, 3.92
2. 0.180, 0.003512
3. 0.0471, 0.4012
Consider the M&M data.
1. Compute the sample mean and sample variance of the net weight variable.
2. Compute the sample mean and sample variance of the total number of candies.
3. Compute the sample covariance and sample correlation between the number of candies and the net weight.
Answer
1. 57.1, 5.68
2. 49.215, 2.3163
3. 2.878, 0.794
Consider the Pearson data.
1. Compute the sample mean and sample variance of the height of the father.
2. Compute the sample mean and sample variance of the height of the son.
3. Compute the sample covariance and sample correlation between the height of the father and height of the son.
Answer
1. 67.69, 7.5396
2. 68.68, 7.9309
3. 3.875, 0.501
The estimators of the mean, variance, and covariance that we have considered in this section have been natural in a sense. However, for other parameters, it is not clear how to even find a reasonable estimator in the first place. In the next several sections, we will consider the problem of constructing estimators. Then we return to the study of the mathematical properties of estimators, and consider the question of when we can know that an estimator is the best possible, given the data.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Method
Suppose that we have a basic random experiment with an observable, real-valued random variable $X$. The distribution of $X$ has $k$ unknown real-valued parameters, or equivalently, a parameter vector $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$ taking values in a parameter space, a subset of $\R^k$. As usual, we repeat the experiment $n$ times to generate a random sample of size $n$ from the distribution of $X$. $\bs{X} = (X_1, X_2, \ldots, X_n)$ Thus, $\bs{X}$ is a sequence of independent random variables, each with the distribution of $X$. The method of moments is a technique for constructing estimators of the parameters that is based on matching the sample moments with the corresponding distribution moments. First, let $\mu^{(j)}(\bs{\theta}) = \E\left(X^j\right), \quad j \in \N_+$ so that $\mu^{(j)}(\bs{\theta})$ is the $j$th moment of $X$ about 0. Note that we are emphasizing the dependence of these moments on the vector of parameters $\bs{\theta}$. Note also that $\mu^{(1)}(\bs{\theta})$ is just the mean of $X$, which we usually denote simply by $\mu$. Next, let $M^{(j)}(\bs{X}) = \frac{1}{n} \sum_{i=1}^n X_i^j, \quad j \in \N_+$ so that $M^{(j)}(\bs{X})$ is the $j$th sample moment about 0. Equivalently, $M^{(j)}(\bs{X})$ is the sample mean for the random sample $\left(X_1^j, X_2^j, \ldots, X_n^j\right)$ from the distribution of $X^j$. Note that we are emphasizing the dependence of the sample moments on the sample $\bs{X}$. Note also that $M^{(1)}(\bs{X})$ is just the ordinary sample mean, which we usually just denote by $M$ (or by $M_n$ if we wish to emphasize the dependence on the sample size). From our previous work, we know that $M^{(j)}(\bs{X})$ is an unbiased and consistent estimator of $\mu^{(j)}(\bs{\theta})$ for each $j$. Here's how the method works:
To construct the method of moments estimators $\left(W_1, W_2, \ldots, W_k\right)$ for the parameters $(\theta_1, \theta_2, \ldots, \theta_k)$ respectively, we consider the equations $\mu^{(j)}(W_1, W_2, \ldots, W_k) = M^{(j)}(X_1, X_2, \ldots, X_n)$ consecutively for $j \in \N_+$ until we are able to solve for $\left(W_1, W_2, \ldots, W_k\right)$ in terms of $\left(M^{(1)}, M^{(2)}, \ldots\right)$.
The equations for $j \in \{1, 2, \ldots, k\}$ give $k$ equations in $k$ unknowns, so there is hope (but no guarantee) that the equations can be solved for $(W_1, W_2, \ldots, W_k)$ in terms of $(M^{(1)}, M^{(2)}, \ldots, M^{(k)})$. In fact, sometimes we need equations with $j \gt k$. Exercise 28 below gives a simple example. The method of moments can be extended to parameters associated with bivariate or more general multivariate distributions, by matching sample product moments with the corresponding distribution product moments. The method of moments also sometimes makes sense when the sample variables $(X_1, X_2, \ldots, X_n)$ are not independent, but at least are identically distributed. The hypergeometric model below is an example of this.
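When the moment equations cannot be solved in closed form, they can be solved numerically. Here is a minimal sketch of that idea, assuming NumPy and SciPy are available; the gamma family, with moments $k b$ and $k b^2 + (k b)^2$, is used purely as an illustration.

```python
# A sketch of the method of moments as a numerical root-finding problem.
# The gamma family (moments k*b and k*b^2 + (k*b)^2) is used only as an example.
import numpy as np
from scipy.optimize import fsolve

def sample_moments(x, k):
    """Sample moments M^(1), ..., M^(k) about 0."""
    return np.array([np.mean(np.asarray(x, float) ** j) for j in range(1, k + 1)])

def gamma_moments(theta):
    """Distribution moments mu^(1)(theta), mu^(2)(theta) for the gamma(k, b) family."""
    k, b = theta
    return np.array([k * b, k * b ** 2 + (k * b) ** 2])

def method_of_moments(x, dist_moments, theta0):
    """Solve dist_moments(theta) = sample moments of x for theta."""
    m = sample_moments(x, len(theta0))
    return fsolve(lambda theta: dist_moments(theta) - m, theta0)

x = np.random.default_rng(0).gamma(shape=3.0, scale=2.0, size=1000)
print(method_of_moments(x, gamma_moments, theta0=np.array([1.0, 1.0])))
```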
Of course, the method of moments estimators depend on the sample size $n \in \N_+$. We have suppressed this so far, to keep the notation simple. But in the applications below, we put the notation back in because we want to discuss asymptotic behavior.
Estimates for the Mean and Variance
Estimating the mean and variance of a distribution are the simplest applications of the method of moments. Throughout this subsection, we assume that we have a basic real-valued random variable $X$ with $\mu = \E(X) \in \R$ and $\sigma^2 = \var(X) \in (0, \infty)$. Occasionally we will also need $\sigma_4 = \E[(X - \mu)^4]$, the fourth central moment. We sample from the distribution of $X$ to produce a sequence $\bs X = (X_1, X_2, \ldots)$ of independent variables, each with the distribution of $X$. For each $n \in \N_+$, $\bs X_n = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of $X$. We start by estimating the mean, which is essentially trivial by this method.
Suppose that the mean $\mu$ is unknown. The method of moments estimator of $\mu$ based on $\bs X_n$ is the sample mean $M_n = \frac{1}{n} \sum_{i=1}^n X_i$
1. $\E(M_n) = \mu$ so $M_n$ is unbiased for $n \in \N_+$
2. $\var(M_n) = \sigma^2/n$ for $n \in \N_+$ so $\bs M = (M_1, M_2, \ldots)$ is consistent.
Proof
It does not get any more basic than this. The method of moments works by matching the distribution mean with the sample mean. The fact that $\E(M_n) = \mu$ and $\var(M_n) = \sigma^2 / n$ for $n \in \N_+$ are properties that we have seen several times before.
Estimating the variance of the distribution, on the other hand, depends on whether the distribution mean $\mu$ is known or unknown. First we will consider the more realistic case when the mean is also unknown. Recall that for $n \in \{2, 3, \ldots\}$, the sample variance based on $\bs X_n$ is $S_n^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M_n)^2$ Recall also that $\E(S_n^2) = \sigma^2$ so $S_n^2$ is unbiased for $n \in \{2, 3, \ldots\}$, and that $\var(S_n^2) = \frac{1}{n} \left(\sigma_4 - \frac{n - 3}{n - 1} \sigma^4 \right)$ so $\bs S^2 = (S_2^2, S_3^2, \ldots)$ is consistent.
Suppose that the mean $\mu$ and the variance $\sigma^2$ are both unknown. For $n \in \N_+$, the method of moments estimator of $\sigma^2$ based on $\bs X_n$ is $T_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M_n)^2$
1. $\bias(T_n^2) = -\sigma^2 / n$ for $n \in \N_+$ so $\bs T^2 = (T_1^2, T_2^2, \ldots)$ is asymptotically unbiased.
2. $\mse(T_n^2) = \frac{1}{n^3}\left[(n - 1)^2 \sigma_4 - (n^2 - 5 n + 3) \sigma^4\right]$ for $n \in \N_+$ so $\bs T^2$ is consistent.
Proof
As before, the method of moments estimator of the distribution mean $\mu$ is the sample mean $M_n$. On the other hand, $\sigma^2 = \mu^{(2)} - \mu^2$ and hence the method of moments estimator of $\sigma^2$ is $T_n^2 = M_n^{(2)} - M_n^2$, which simplifies to the result above. Note that $T_n^2 = \frac{n - 1}{n} S_n^2$ for $n \in \{2, 3, \ldots\}$.
1. Note that $\E(T_n^2) = \frac{n - 1}{n} \E(S_n^2) = \frac{n - 1}{n} \sigma^2$, so $\bias(T_n^2) = \frac{n-1}{n}\sigma^2 - \sigma^2 = -\frac{1}{n} \sigma^2$.
2. Recall that $\mse(T_n^2) = \var(T_n^2) + \bias^2(T_n^2)$. But $\var(T_n^2) = \left(\frac{n-1}{n}\right)^2 \var(S_n^2)$. The result follows from substituting $\var(S_n^2)$ given above and $\bias(T_n^2)$ in part (a).
Hence $T_n^2$ is negatively biased and on average underestimates $\sigma^2$. Because of this result, $T_n^2$ is referred to as the biased sample variance to distinguish it from the ordinary (unbiased) sample variance $S_n^2$.
Next let's consider the usually unrealistic (but mathematically interesting) case where the mean is known, but not the variance.
Suppose that the mean $\mu$ is known and the variance $\sigma^2$ unknown. For $n \in \N_+$, the method of moments estimator of $\sigma^2$ based on $\bs X_n$ is $W_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2$
1. $\E(W_n^2) = \sigma^2$ so $W_n^2$ is unbiased for $n \in \N_+$
2. $\var(W_n^2) = \frac{1}{n}(\sigma_4 - \sigma^4)$ for $n \in \N_+$ so $\bs W^2 = (W_1^2, W_2^2, \ldots)$ is consistent.
Proof
These results follow since $W_n^2$ is the sample mean corresponding to a random sample of size $n$ from the distribution of $(X - \mu)^2$.
We compared the sequence of estimators $\bs S^2$ with the sequence of estimators $\bs W^2$ in the introductory section on Estimators. Recall that $\var(W_n^2) \lt \var(S_n^2)$ for $n \in \{2, 3, \ldots\}$ but $\var(S_n^2) / \var(W_n^2) \to 1$ as $n \to \infty$. There is no simple, general relationship between $\mse(T_n^2)$ and $\mse(S_n^2)$ or between $\mse(T_n^2)$ and $\mse(W_n^2)$, but the asymptotic relationship is simple.
$\mse(T_n^2) / \mse(W_n^2) \to 1$ and $\mse(T_n^2) / \mse(S_n^2) \to 1$ as $n \to \infty$
Proof
In light of the previous remarks, we just have to prove one of these limits. The first limit is simple, since the coefficients of $\sigma_4$ and $\sigma^4$ in $\mse(T_n^2)$ are asymptotically $1 / n$ as $n \to \infty$.
It also follows that if both $\mu$ and $\sigma^2$ are unknown, then the method of moments estimator of the standard deviation $\sigma$ is $T = \sqrt{T^2}$. In the unlikely event that $\mu$ is known, but $\sigma^2$ unknown, then the method of moments estimator of $\sigma$ is $W = \sqrt{W^2}$.
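The following sketch (illustrative only; the function name is an assumption) collects the estimators discussed in this subsection, and makes explicit that $T_n^2 = \frac{n-1}{n} S_n^2$ and that $W_n^2$ can be computed only when $\mu$ is known.

```python
# A sketch of the estimators above: the sample mean M, the unbiased sample variance
# S^2, the biased (method of moments) version T^2, and W^2 when mu is known.
import numpy as np

def mean_variance_estimators(x, mu=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()
    s2 = np.sum((x - m) ** 2) / (n - 1)       # unbiased sample variance S^2
    t2 = np.sum((x - m) ** 2) / n             # biased version, T^2 = (n-1)/n * S^2
    w2 = None if mu is None else np.mean((x - mu) ** 2)  # requires known mu
    return m, s2, t2, w2
```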
Estimating Two Parameters
There are several important special distributions with two paraemters; some of these are included in the computational exercises below. With two parameters, we can derive the method of moments estimators by matching the distribution mean and variance with the sample mean and variance, rather than matching the distribution mean and second moment with the sample mean and second moment. This alternative approach sometimes leads to easier equations. To setup the notation, suppose that a distribution on $\R$ has parameters $a$ and $b$. We sample from the distribution to produce a sequence of independent variables $\bs X = (X_1, X_2, \ldots)$, each with the common distribution. For $n \in \N_+$, $\bs X_n = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution. Let $M_n$, $M_n^{(2)}$, and $T_n^2$ denote the sample mean, second-order sample mean, and biased sample variance corresponding to $\bs X_n$, and let $\mu(a, b)$, $\mu^{(2)}(a, b)$, and $\sigma^2(a, b)$ denote the mean, second-order mean, and variance of the distribution.
If the method of moments estimators $U_n$ and $V_n$ of $a$ and $b$, respectively, can be found by solving the first two equations $\mu(U_n, V_n) = M_n, \quad \mu^{(2)}(U_n, V_n) = M_n^{(2)}$ then $U_n$ and $V_n$ can also be found by solving the equations $\mu(U_n, V_n) = M_n, \quad \sigma^2(U_n, V_n) = T_n^2$
Proof
Recall that $\sigma^2(a, b) = \mu^{(2)}(a, b) - \mu^2(a, b)$. In addition, $T_n^2 = M_n^{(2)} - M_n^2$. Hence the equations $\mu(U_n, V_n) = M_n$, $\sigma^2(U_n, V_n) = T_n^2$ are equivalent to the equations $\mu(U_n, V_n) = M_n$, $\mu^{(2)}(U_n, V_n) = M_n^{(2)}$.
Because of this result, the biased sample variance $T_n^2$ will appear in many of the estimation problems for special distributions that we consider below.
Special Distributions
The Normal Distribution
The normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$ is a continuous distribution on $\R$ with probability density function $g$ given by $g(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ This is one of the most important distributions in probability and statistics, primarily because of the central limit theorem. The normal distribution is studied in more detail in the chapter on Special Distributions.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the normal distribution with mean $\mu$ and variance $\sigma^2$. From our general work above, we know that if $\mu$ is unknown then the sample mean $M$ is the method of moments estimator of $\mu$, and if in addition, $\sigma^2$ is unknown then the method of moments estimator of $\sigma^2$ is $T^2$. On the other hand, in the unlikely event that $\mu$ is known then $W^2$ is the method of moments estimator of $\sigma^2$. Our goal is to see how the comparisons above simplify for the normal distribution.
Mean square errors of $S_n^2$ and $T_n^2$.
1. $\mse(T^2) = \frac{2 n - 1}{n^2} \sigma^4$
2. $\mse(S^2) = \frac{2}{n - 1} \sigma^4$
3. $\mse(T^2) \lt \mse(S^2)$ for $n \in \{2, 3, \ldots\}$
Proof
Recall that for the normal distribution, $\sigma_4 = 3 \sigma^4$. Substituting this into the general results gives parts (a) and (b). Part (c) follows from (a) and (b). Of course the asymptotic relative efficiency is still 1, from our previous theorem.
Thus, $S^2$ and $T^2$ are multiples of one another; $S^2$ is unbiased, but when the sampling distribution is normal, $T^2$ has smaller mean square error. Surprisingly, $T^2$ has smaller mean square error even than $W^2$.
Mean square errors of $T^2$ and $W^2$.
1. $\mse(W^2) = \frac{2}{n} \sigma^4$
2. $\mse(T^2) \lt \mse(W^2)$ for $n \in \{2, 3, \ldots\}$
Proof
Again, since the sampling distribution is normal, $\sigma_4 = 3 \sigma^4$. Substituting this into the general formula for $\var(W_n^2)$ gives part (a).
Run the normal estimation experiment 1000 times for several values of the sample size $n$ and the parameters $\mu$ and $\sigma$. Compare the empirical bias and mean square error of $S^2$ and of $T^2$ to their theoretical values. Which estimator is better in terms of bias? Which estimator is better in terms of mean square error?
Next we consider estimators of the standard deviation $\sigma$. As noted in the general discussion above, $T = \sqrt{T^2}$ is the method of moments estimator when $\mu$ is unknown, while $W = \sqrt{W^2}$ is the method of moments estimator in the unlikely event that $\mu$ is known. Another natural estimator, of course, is $S = \sqrt{S^2}$, the usual sample standard deviation. The following sequence, defined in terms of the gamma function, turns out to be important in the analysis of all three estimators.
Consider the sequence $a_n = \sqrt{\frac{2}{n}} \frac{\Gamma[(n + 1) / 2]}{\Gamma(n / 2)}, \quad n \in \N_+$ Then $0 \lt a_n \lt 1$ for $n \in \N_+$ and $a_n \uparrow 1$ as $n \uparrow \infty$.
First, assume that $\mu$ is known so that $W_n$ is the method of moments estimator of $\sigma$.
For $n \in \N_+$,
1. $\E(W) = a_n \sigma$
2. $\bias(W) = (a_n - 1) \sigma$
3. $\var(W) = \left(1 - a_n^2\right) \sigma^2$
4. $\mse(W) = 2 (1 - a_n) \sigma^2$
Proof
Recall that $U^2 = n W^2 / \sigma^2$ has the chi-square distribution with $n$ degrees of freedom, and hence $U$ has the chi distribution with $n$ degrees of freedom. Solving gives $W = \frac{\sigma}{\sqrt{n}} U$ From the formulas for the mean and variance of the chi distribution we have \begin{align*} \E(W) & = \frac{\sigma}{\sqrt{n}} \E(U) = \frac{\sigma}{\sqrt{n}} \sqrt{2} \frac{\Gamma[(n + 1) / 2]}{\Gamma(n / 2)} = \sigma a_n \\ \var(W) & = \frac{\sigma^2}{n} \var(U) = \frac{\sigma^2}{n}\left\{n - [\E(U)]^2\right\} = \sigma^2\left(1 - a_n^2\right) \end{align*}
Thus $W$ is negatively biased as an estimator of $\sigma$ but asymptotically unbiased and consistent. Of course we know that in general (regardless of the underlying distribution), $W^2$ is an unbiased estimator of $\sigma^2$ and so $W$ is negatively biased as an estimator of $\sigma$. In the normal case, since $a_n$ involves no unknown parameters, the statistic $W / a_n$ is an unbiased estimator of $\sigma$. Next we consider the usual sample standard deviation $S$.
For $n \in \{2, 3, \ldots\}$,
1. $\E(S) = a_{n-1} \sigma$
2. $\bias(S) = (a_{n-1} - 1) \sigma$
3. $\var(S) = \left(1 - a_{n-1}^2\right) \sigma^2$
4. $\mse(S) = 2 (1 - a_{n-1}) \sigma^2$
Proof
Recall that $V^2 = (n - 1) S^2 / \sigma^2$ has the chi-square distribution with $n - 1$ degrees of freedom, and hence $V$ has the chi distribution with $n - 1$ degrees of freedom. The proof now proceeds just as in the previous theorem, but with $n - 1$ replacing $n$.
As with $W$, the statistic $S$ is negatively biased as an estimator of $\sigma$ but asymptotically unbiased, and also consistent. Since $a_{n - 1}$ involves no unknown parameters, the statistic $S / a_{n-1}$ is an unbiased estimator of $\sigma$. Note also that, in terms of bias and mean square error, $S$ with sample size $n$ behaves like $W$ with sample size $n - 1$. Finally we consider $T$, the method of moments estimator of $\sigma$ when $\mu$ is unknown.
For $n \in \{2, 3, \ldots\}$,
1. $\E(T) = \sqrt{\frac{n - 1}{n}} a_{n-1} \sigma$
2. $\bias(T) = \left(\sqrt{\frac{n - 1}{n}} a_{n-1} - 1\right) \sigma$
3. $\var(T) = \frac{n - 1}{n} \left(1 - a_{n-1}^2 \right) \sigma^2$
4. $\mse(T) = \left(2 - \frac{1}{n} - 2 \sqrt{\frac{n-1}{n}} a_{n-1} \right) \sigma^2$
Proof
The results follow easily from the previous theorem since $T_n = \sqrt{\frac{n - 1}{n}} S_n$.
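Since $a_n$ involves only the gamma function, it is easy to compute numerically. The sketch below (an illustration, assuming SciPy; the parameter values are arbitrary) evaluates $a_n$ on the log scale to avoid overflow and forms the bias-corrected estimators $W / a_n$ and $S / a_{n-1}$ discussed above.

```python
# A sketch computing a_n = sqrt(2/n) * Gamma((n+1)/2) / Gamma(n/2) and the
# bias-corrected estimators of sigma; the values here are illustrative only.
import numpy as np
from scipy.special import gammaln

def a(n):
    # computed as the exponential of a difference of log-gamma values for stability
    return np.sqrt(2.0 / n) * np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 3.0, 15
x = rng.normal(mu, sigma, size=n)
w = np.sqrt(np.mean((x - mu) ** 2))                 # W, usable only if mu is known
s = np.sqrt(np.sum((x - x.mean()) ** 2) / (n - 1))  # S, the sample standard deviation
print(a(n), w / a(n), s / a(n - 1))                 # a_n < 1; unbiased corrections
```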
The Bernoulli Distribution
Recall that an indicator variable is a random variable $X$ that takes only the values 0 and 1. The distribution of $X$ is known as the Bernoulli distribution, named for Jacob Bernoulli, and has probability density function $g$ given by $g(x) = p^x (1 - p)^{1 - x}, \quad x \in \{0, 1\}$ where $p \in (0, 1)$ is the success parameter. The mean of the distribution is $p$ and the variance is $p (1 - p)$.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with unknown success parameter $p$. Since the mean of the distribution is $p$, it follows from our general work above that the method of moments estimator of $p$ is $M$, the sample mean. In this case, the sample $\bs{X}$ is a sequence of Bernoulli trials, and $M$ has a scaled version of the binomial distribution with parameters $n$ and $p$: $\P\left(M = \frac{k}{n}\right) = \binom{n}{k} p^k (1 - p)^{n - k}, \quad k \in \{0, 1, \ldots, n\}$ Note that since $X^k = X$ for every $k \in \N_+$, it follows that $\mu^{(k)} = p$ and $M^{(k)} = M$ for every $k \in \N_+$. So any of the method of moments equations would lead to the sample mean $M$ as the estimator of $p$. Although very simple, this is an important application, since Bernoulli trials are found embedded in all sorts of estimation problems, such as empirical probability density functions and empirical distribution functions.
The Geometric Distribution
The geometric distribution on $\N_+$ with success parameter $p \in (0, 1)$ has probability density function $g$ given by $g(x) = p (1 - p)^{x-1}, \quad x \in \N_+$ The geometric distribution on $\N_+$ governs the number of trials needed to get the first success in a sequence of Bernoulli trials with success parameter $p$. The mean of the distribution is $\mu = 1 / p$.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the geometric distribution on $\N_+$ with unknown success parameter $p$. The method of moments estimator of $p$ is $U = \frac{1}{M}$
Proof
The method of moments equation for $U$ is $1 / U = M$.
The geometric distribution on $\N$ with success parameter $p \in (0, 1)$ has probability density function $g(x) = p (1 - p)^x, \quad x \in \N$ This version of the geometric distribution governs the number of failures before the first success in a sequence of Bernoulli trials. The mean of the distribution is $\mu = (1 - p) \big/ p$.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the geometric distribution on $\N$ with unknown parameter $p$. The method of moments estimator of $p$ is $U = \frac{1}{M + 1}$
Proof
The method of moments equation for $U$ is $(1 - U) \big/ U = M$.
The Negative Binomial Distribution
More generally, the negative binomial distribution on $\N$ with shape parameter $k \in (0, \infty)$ and success parameter $p \in (0, 1)$ has probability density function $g(x) = \binom{x + k - 1}{k - 1} p^k (1 - p)^x, \quad x \in \N$ If $k$ is a positive integer, then this distribution governs the number of failures before the $k$th success in a sequence of Bernoulli trials with success parameter $p$. However, the distribution makes sense for general $k \in (0, \infty)$. The negative binomial distribution is studied in more detail in the chapter on Bernoulli Trials. The mean of the distribution is $k (1 - p) \big/ p$ and the variance is $k (1 - p) \big/ p^2$. Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the negative binomial distribution on $\N$ with shape parameter $k$ and success parameter $p$
If $k$ and $p$ are unknown, then the corresponding method of moments estimators $U$ and $V$ are $U = \frac{M^2}{T^2 - M}, \quad V = \frac{M}{T^2}$
Proof
Matching the distribution mean and variance to the sample mean and variance gives the equations $U \frac{1 - V}{V} = M, \quad U \frac{1 - V}{V^2} = T^2$
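A small computational sketch of these estimators follows (illustrative only). Note that the formula for $U$ requires $T^2 \gt M$; that is, the sample must exhibit the overdispersion characteristic of the negative binomial.

```python
# A sketch of the method of moments estimators U and V for the negative binomial;
# the estimate of k exists only when the biased sample variance exceeds the mean.
import numpy as np

def negative_binomial_mom(x):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    t2 = np.mean((x - m) ** 2)          # biased sample variance T^2
    if t2 <= m:
        raise ValueError("T^2 <= M: the method of moments estimate of k fails")
    return m ** 2 / (t2 - m), m / t2    # U (shape k), V (success probability p)
```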
As usual, the results are nicer when one of the parameters is known.
Suppose that $k$ is known but $p$ is unknown. The method of moments estimator $V_k$ of $p$ is $V_k = \frac{k}{M + k}$
Proof
Matching the distribution mean to the sample mean gives the equation $k \frac{1 - V_k}{V_k} = M$
Suppose that $k$ is unknown but $p$ is known. The method of moments estimator of $k$ is $U_p = \frac{p}{1 - p} M$
1. $\E(U_p) = k$ so $U_p$ is unbiased.
2. $\var(U_p) = \frac{k}{n (1 - p)}$ so $U_p$ is consistent.
Proof
Matching the distribution mean to the sample mean gives the equation $U_p \frac{1 - p}{p} = M$.
1. $E(U_p) = \frac{p}{1 - p} \E(M)$ and $\E(M) = \frac{1 - p}{p} k$
2. $\var(U_p) = \left(\frac{p}{1 - p}\right)^2 \var(M)$ and $\var(M) = \frac{1}{n} \var(X) = \frac{1 - p}{n p^2}$
The Poisson Distribution
The Poisson distribution with parameter $r \in (0, \infty)$ is a discrete distribution on $\N$ with probability density function $g$ given by $g(x) = e^{-r} \frac{r^x}{x!}, \quad x \in \N$ The mean and variance are both $r$. The distribution is named for Simeon Poisson and is widely used to model the number of random points in a region of time or space. The parameter $r$ is proportional to the size of the region, with the proportionality constant playing the role of the average rate at which the points are distributed in time or space. The Poisson distribution is studied in more detail in the chapter on the Poisson Process.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Poisson distribution with parameter $r$. Since $r$ is the mean, it follows from our general work above that the method of moments estimator of $r$ is the sample mean $M$.
The Gamma Distribution
The gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a continuous distribution on $(0, \infty)$ with probability density function $g$ given by $g(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x / b}, \quad x \in (0, \infty)$ The gamma probability density function has a variety of shapes, and so this distribution is used to model various types of positive random variables. The gamma distribution is studied in more detail in the chapter on Special Distributions. The mean is $\mu = k b$ and the variance is $\sigma^2 = k b^2$.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the gamma distribution with shape parameter $k$ and scale parameter $b$.
Suppose that $k$ and $b$ are both unknown, and let $U$ and $V$ be the corresponding method of moments estimators. Then $U = \frac{M^2}{T^2}, \quad V = \frac{T^2}{M}$
Proof
Matching the distribution mean and variance with the sample mean and variance leads to the equations $U V = M$, $U V^2 = T^2$. Solving gives the results.
The method of moments estimators of $k$ and $b$ given in the previous exercise are complicated, nonlinear functions of the sample mean $M$ and the sample variance $T^2$. Thus, computing the bias and mean square errors of these estimators are difficult problems that we will not attempt. However, we can judge the quality of the estimators empirically, through simulations.
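Here is one such simulation sketch (an assumed setup; the parameter values and number of replications are arbitrary), recording the empirical bias and mean square error of $U$ and $V$ over repeated samples.

```python
# A simulation sketch for the gamma method of moments estimators U = M^2/T^2 and
# V = T^2/M; the parameter values and number of replications are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
k, b, n, reps = 2.5, 1.5, 50, 5000
u = np.empty(reps)
v = np.empty(reps)
for i in range(reps):
    x = rng.gamma(shape=k, scale=b, size=n)
    m = x.mean()
    t2 = np.mean((x - m) ** 2)
    u[i], v[i] = m ** 2 / t2, t2 / m
print("U: bias, mse =", u.mean() - k, np.mean((u - k) ** 2))
print("V: bias, mse =", v.mean() - b, np.mean((v - b) ** 2))
```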
When one of the parameters is known, the method of moments estimator of the other parameter is much simpler.
Suppose that $k$ is unknown, but $b$ is known. The method of moments estimator of $k$ is $U_b = \frac{M}{b}$
1. $\E(U_b) = k$ so $U_b$ is unbiased.
2. $\var(U_b) = k / n$ so $U_b$ is consistent.
Proof
If $b$ is known, then the method of moments equation for $U_b$ is $b U_b = M$. Solving gives (a). Next, $\E(U_b) = \E(M) / b = k b / b = k$, so $U_b$ is unbiased. Finally $\var(U_b) = \var(M) / b^2 = k b ^2 / (n b^2) = k / n$.
Suppose that $b$ is unknown, but $k$ is known. The method of moments estimator of $b$ is $V_k = \frac{M}{k}$
1. $\E(V_k) = b$ so $V_k$ is unbiased.
2. $\var(V_k) = b^2 / k n$ so that $V_k$ is consistent.
Proof
If $k$ is known, then the method of moments equation for $V_k$ is $k V_k = M$. Solving gives (a). Next, $\E(V_k) = \E(M) / k = k b / k = b$, so $V_k$ is unbiased. Finally $\var(V_k) = \var(M) / k^2 = k b ^2 / (n k^2) = b^2 / k n$.
Run the gamma estimation experiment 1000 times for several different values of the sample size $n$ and the parameters $k$ and $b$. Note the empirical bias and mean square error of the estimators $U$, $V$, $U_b$, and $V_k$. One would think that the estimators when one of the parameters is known should work better than the corresponding estimators when both parameters are unknown; but investigate this question empirically.
The Beta Distribution
The beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ is a continuous distribution on $(0, 1)$ with probability density function $g$ given by $g(x) = \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b-1}, \quad 0 \lt x \lt 1$ The beta probability density function has a variety of shapes, and so this distribution is widely used to model various types of random variables that take values in bounded intervals. The beta distribution is studied in more detail in the chapter on Special Distributions. The first two moments are $\mu = \frac{a}{a + b}$ and $\mu^{(2)} = \frac{a (a + 1)}{(a + b)(a + b + 1)}$.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the beta distribution with left parameter $a$ and right parameter $b$.
Suppose that $a$ and $b$ are both unknown, and let $U$ and $V$ be the corresponding method of moments estimators. Then $U = \frac{M \left(M - M^{(2)}\right)}{M^{(2)} - M^2}, \quad V = \frac{(1 - M)\left(M - M^{(2)}\right)}{M^{(2)} - M^2}$
Proof
The method of moments equations for $U$ and $V$ are $\frac{U}{U + V} = M, \quad \frac{U(U + 1)}{(U + V)(U + V + 1)} = M^{(2)}$ Solving gives the result.
The method of moments estimators of $a$ and $b$ given in the previous exercise are complicated nonlinear functions of the sample moments $M$ and $M^{(2)}$. Thus, we will not attempt to determine the bias and mean square errors analytically, but you will have an opportunity to explore them empirically through a simulation.
Suppose that $a$ is unknown, but $b$ is known. Let $U_b$ be the method of moments estimator of $a$. Then $U_b = b \frac{M}{1 - M}$
Proof
If $b$ is known then the method of moments equation for $U_b$ as an estimator of $a$ is $U_b \big/ (U_b + b) = M$. Solving for $U_b$ gives the result.
Suppose that $b$ is unknown, but $a$ is known. Let $V_a$ be the method of moments estimator of $b$. Then $V_a = a \frac{1 - M}{M}$
Proof
If $a$ is known then the method of moments equation for $V_a$ as an estimator of $b$ is $a \big/ (a + V_a) = M$. Solving for $V_a$ gives the result.
Run the beta estimation experiment 1000 times for several different values of the sample size $n$ and the parameters $a$ and $b$. Note the empirical bias and mean square error of the estimators $U$, $V$, $U_b$, and $V_a$. One would think that the estimators when one of the parameters is known should work better than the corresponding estimators when both parameters are unknown; but investigate this question empirically.
The following problem gives a distribution with just one parameter but the second moment equation from the method of moments is needed to derive an estimator.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the symmetric beta distribution, in which the left and right parameters are equal to an unknown value $c \in (0, \infty)$. The method of moments estimator of $c$ is $U = \frac{2 M^{(2)} - 1}{1 - 4 M^{(2)}}$
Proof
Note that the mean $\mu$ of the symmetric distribution is $\frac{1}{2}$, independently of $c$, and so the first equation in the method of moments is useless. However, matching the second distribution moment to the second sample moment leads to the equation $\frac{U + 1}{2 (2 U + 1)} = M^{(2)}$ Solving gives the result.
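A quick numerical check of this estimator (illustrative only; the parameter value is arbitrary) is sketched below.

```python
# A numerical sketch of the symmetric beta estimator U = (2 M^(2) - 1)/(1 - 4 M^(2));
# the parameter value c is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
c, n = 2.0, 10_000
x = rng.beta(c, c, size=n)
m2 = np.mean(x ** 2)                   # second sample moment about 0
u = (2 * m2 - 1) / (1 - 4 * m2)        # method of moments estimate of c
print(u)                               # should be close to c = 2
```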
The Pareto Distribution
The Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a continuous distribution on $(b, \infty)$ with probability density function $g$ given by $g(x) = \frac{a b^a}{x^{a + 1}}, \quad b \le x \lt \infty$ The Pareto distribution is named for Vilfredo Pareto and is a highly skewed and heavy-tailed distribution. It is often used to model income and certain other types of positive random variables. The Pareto distribution is studied in more detail in the chapter on Special Distributions. If $a \gt 2$, the first two moments of the Pareto distribution are $\mu = \frac{a b}{a - 1}$ and $\mu^{(2)} = \frac{a b^2}{a - 2}$.
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Pareto distribution with shape parameter $a \gt 2$ and scale parameter $b \gt 0$.
Suppose that $a$ and $b$ are both unknown, and let $U$ and $V$ be the corresponding method of moments estimators. Then \begin{align} U & = 1 + \sqrt{\frac{M^{(2)}}{M^{(2)} - M^2}} \\ V & = \frac{M^{(2)}}{M} \left( 1 - \sqrt{\frac{M^{(2)} - M^2}{M^{(2)}}} \right) \end{align}
Proof
The method of moments equations for $U$ and $V$ are \begin{align} \frac{U V}{U - 1} & = M \\ \frac{U V^2}{U - 2} & = M^{(2)} \end{align} Solving for $U$ and $V$ gives the results.
As with our previous examples, the method of moments estimators are complicated nonlinear functions of $M$ and $M^{(2)}$, so computing the bias and mean square error of the estimators is difficult. Instead, we can investigate the bias and mean square error empirically, through a simulation.
Run the Pareto estimation experiment 1000 times for several different values of the sample size $n$ and the parameters $a$ and $b$. Note the empirical bias and mean square error of the estimators $U$ and $V$.
When one of the parameters is known, the method of moments estimator for the other parameter is simpler.
Suppose that $a$ is unknown, but $b$ is known. Let $U_b$ be the method of moments estimator of $a$. Then $U_b = \frac{M}{M - b}$
Proof
If $b$ is known then the method of moment equation for $U_b$ as an estimator of $a$ is $b U_b \big/ (U_b - 1) = M$. Solving for $U_b$ gives the result.
Suppose that $b$ is unknown, but $a$ is known. Let $V_a$ be the method of moments estimator of $b$. Then $V_a = \frac{a - 1}{a}M$
1. $\E(V_a) = b$ so $V_a$ is unbiased.
2. $\var(V_a) = \frac{b^2}{n a (a - 2)}$ so $V_a$ is consistent.
Proof
If $a$ is known then the method of moments equation for $V_a$ as an estimator of $b$ is $a V_a \big/ (a - 1) = M$. Solving for $V_a$ gives (a). Next, $\E(V_a) = \frac{a - 1}{a} \E(M) = \frac{a - 1}{a} \frac{a b}{a - 1} = b$ so $V_a$ is unbiased. Finally, $\var(V_a) = \left(\frac{a - 1}{a}\right)^2 \var(M) = \frac{(a - 1)^2}{a^2} \frac{a b^2}{n (a - 1)^2 (a - 2)} = \frac{b^2}{n a (a - 2)}$.
The Uniform Distribution
The (continuous) uniform distribution with location parameter $a \in \R$ and scale parameter $h \in (0, \infty)$ has probability density function $g$ given by $g(x) = \frac{1}{h}, \quad x \in [a, a + h]$ The distribution models a point chosen at random from the interval $[a, a + h]$. The mean of the distribution is $\mu = a + \frac{1}{2} h$ and the variance is $\sigma^2 = \frac{1}{12} h^2$. The uniform distribution is studied in more detail in the chapter on Special Distributions. Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution.
Suppose that $a$ and $h$ are both unknown, and let $U$ and $V$ denote the corresponding method of moments estimators. Then $U = M - \sqrt{3} T, \quad V = 2 \sqrt{3} T$
Proof
Matching the distribution mean and variance to the sample mean and variance leads to the equations $U + \frac{1}{2} V = M$ and $\frac{1}{12} V^2 = T^2$. Solving gives the result.
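A short numerical check of these estimators (illustrative only; the parameter values are arbitrary) is sketched below.

```python
# A sketch of the uniform method of moments estimates U = M - sqrt(3) T and
# V = 2 sqrt(3) T; the parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
a, h, n = 2.0, 5.0, 10_000
x = rng.uniform(a, a + h, size=n)
m = x.mean()
t = np.sqrt(np.mean((x - m) ** 2))     # biased sample standard deviation T
u, v = m - np.sqrt(3) * t, 2 * np.sqrt(3) * t
print(u, v)                            # should be close to (a, h) = (2, 5)
```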
As usual, we get nicer results when one of the parameters is known.
Suppose that $a$ is known and $h$ is unknown, and let $V_a$ denote the method of moments estimator of $h$. Then $V_a = 2 (M - a)$
1. $\E(V_a) = h$ so $V$ is unbiased.
2. $\var(V_a) = \frac{h^2}{3 n}$ so $V_a$ is consistent.
Proof
Matching the distribution mean to the sample mean leads to the equation $a + \frac{1}{2} V_a = M$. Solving gives the result.
1. $\E(V_a) = 2[\E(M) - a] = 2(a + h/2 - a) = h$
2. $\var(V_a) = 4 \var(M) = \frac{h^2}{3 n}$
Suppose that $h$ is known and $a$ is unknown, and let $U_h$ denote the method of moments estimator of $a$. Then $U_h = M - \frac{1}{2} h$
1. $\E(U_h) = a$ so $U_h$ is unbiased.
2. $\var(U_h) = \frac{h^2}{12 n}$ so $U_h$ is consistent.
Proof
Matching the distribution mean to the sample mean leads to the equation $U_h + \frac{1}{2} h = M$. Solving gives the result.
1. $\E(U_h) = \E(M) - \frac{1}{2}h = a + \frac{1}{2} h - \frac{1}{2} h = a$
2. $\var(U_h) = \var(M) = \frac{h^2}{12 n}$
The Hypergeometric Model
Our basic assumption in the method of moments is that the sequence of observed random variables $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from a distribution. However, the method makes sense, at least in some cases, when the variables are identically distributed but dependent. In the hypergeometric model, we have a population of $N$ objects with $r$ of the objects type 1 and the remaining $N - r$ objects type 0. The parameter $N$, the population size, is a positive integer. The parameter $r$, the type 1 size, is a nonnegative integer with $r \le N$. These are the basic parameters, and typically one or both is unknown. Here are some typical examples:
1. The objects are devices, classified as good or defective.
2. The objects are persons, classified as female or male.
3. The objects are voters, classified as for or against a particular candidate.
4. The objects are wildlife of a particular type, either tagged or untagged.
We sample $n$ objects from the population at random, without replacement. Let $X_i$ be the type of the $i$th object selected, so that our sequence of observed variables is $\bs{X} = (X_1, X_2, \ldots, X_n)$. The variables are identically distributed indicator variables, with $P(X_i = 1) = r / N$ for each $i \in \{1, 2, \ldots, n\}$, but are dependent since the sampling is without replacement. The number of type 1 objects in the sample is $Y = \sum_{i=1}^n X_i$. This statistic has the hypergeometric distribution with parameters $N$, $r$, and $n$, and has probability density function given by $P(Y = y) = \frac{\binom{r}{y} \binom{N - r}{n - y}}{\binom{N}{n}} = \binom{n}{y} \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad y \in \{\max\{0, n + r - N\}, \ldots, \min\{n, r\}\}$ The hypergeometric model is studied in more detail in the chapter on Finite Sampling Models.
As above, let $\bs{X} = (X_1, X_2, \ldots, X_n)$ be the observed variables in the hypergeometric model with parameters $N$ and $r$. Then
1. The method of moments estimator of $p = r / N$ is $M = Y / n$, the sample mean.
2. The method of moments estimator of $r$ with $N$ known is $U = N M = N Y / n$.
3. The method of moments estimator of $N$ with $r$ known is $V = r / M = r n / Y$ if $Y > 0$.
Proof
These results all follow simply from the fact that $\E(X) = \P(X = 1) = r / N$.
In the voter example (3) above, typically $N$ and $r$ are both unknown, but we would only be interested in estimating the ratio $p = r / N$. In the reliability example (1), we might typically know $N$ and would be interested in estimating $r$. In the wildlife example (4), we would typically know $r$ and would be interested in estimating $N$. This example is known as the capture-recapture model.
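As a concrete illustration of the capture-recapture estimate $V = r n / Y$, here is a tiny sketch with made-up numbers.

```python
# A sketch of the capture-recapture estimate of the population size N;
# the values of r, n, and y below are purely hypothetical.
r, n, y = 100, 60, 12        # tagged animals, recapture sample size, tagged in sample
if y > 0:
    v = r * n / y            # method of moments estimate of N
    print(v)                 # 500.0
```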
Clearly there is a close relationship between the hypergeometric model and the Bernoulli trials model above. In fact, if the sampling is with replacement, the Bernoulli trials model would apply rather than the hypergeometric model. In addition, if the population size $N$ is large compared to the sample size $n$, the hypergeometric model is well approximated by the Bernoulli trials model.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Method
Suppose again that we have an observable random variable $\bs{X}$ for an experiment, that takes values in a set $S$. Suppose also that the distribution of $\bs{X}$ depends on an unknown parameter $\theta$, taking values in a parameter space $\Theta$. Of course, our data variable $\bs{X}$ will almost always be vector valued. The parameter $\theta$ may also be vector valued. We will denote the probability density function of $\bs{X}$ on $S$ by $f_\theta$ for $\theta \in \Theta$. The distribution of $\bs{X}$ could be discrete or continuous.
The likelihood function is the function obtained by reversing the roles of $\bs{x}$ and $\theta$ in the probability density function; that is, we view $\theta$ as the variable and $\bs{x}$ as the given information (which is precisely the point of view in estimation).
The likelihood function at $\bs{x} \in S$ is the function $L_{\bs{x}}: \Theta \to [0, \infty)$ given by $L_\bs{x}(\theta) = f_\theta(\bs{x}), \quad \theta \in \Theta$
In the method of maximum likelihood, we try to find the value of the parameter that maximizes the likelihood function for each value of the data vector.
Suppose that the maximum value of $L_{\bs{x}}$ occurs at $u(\bs{x}) \in \Theta$ for each $\bs{x} \in S$. Then the statistic $u(\bs{X})$ is a maximum likelihood estimator of $\theta$.
The method of maximum likelihood is intuitively appealing—we try to find the value of the parameter that would have most likely produced the data we in fact observed.
Since the natural logarithm function is strictly increasing on $(0, \infty)$, the maximum value of the likelihood function, if it exists, will occur at the same points as the maximum value of the logarithm of the likelihood function.
The log-likelihood function at $\bs{x} \in S$ is the function $\ln L_{\bs{x}}$: $\ln L_{\bs{x}}(\theta) = \ln f_\theta(\bs{x}), \quad \theta \in \Theta$ If the maximum value of $\ln L_{\bs{x}}$ occurs at $u(\bs{x}) \in \Theta$ for each $\bs{x} \in S$, then the statistic $u(\bs{X})$ is a maximum likelihood estimator of $\theta$.
The log-likelihood function is often easier to work with than the likelihood function (typically because the probability density function $f_\theta(\bs{x})$ has a product structure).
Vector of Parameters
An important special case is when $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$ is a vector of $k$ real parameters, so that $\Theta \subseteq \R^k$. In this case, the maximum likelihood problem is to maximize a function of several variables. If $\Theta$ is a continuous set, the methods of calculus can be used. If the maximum value of $L_\bs{x}$ occurs at a point $\bs{\theta}$ in the interior of $\Theta$, then $L_\bs{x}$ has a local maximum at $\bs{\theta}$. Therefore, assuming that the likelihood function is differentiable, we can find this point by solving $\frac{\partial}{\partial \theta_i} L_\bs{x}(\bs{\theta}) = 0, \quad i \in \{1, 2, \ldots, k\}$ or equivalently $\frac{\partial}{\partial \theta_i} \ln L_\bs{x}(\bs{\theta}) = 0, \quad i \in \{1, 2, \ldots, k\}$ On the other hand, the maximum value may occur at a boundary point of $\Theta$, or may not exist at all.
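When the likelihood equations cannot be solved in closed form, the maximum can be found numerically by minimizing the negative log-likelihood. The sketch below is an illustration only, assuming SciPy; the gamma family and the parameter values are arbitrary choices, not part of the text.

```python
# A sketch of numerical maximum likelihood: minimize the negative log-likelihood.
# The gamma family is used only as an illustration; parameter values are arbitrary.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

x = np.random.default_rng(0).gamma(shape=3.0, scale=2.0, size=500)

def neg_log_likelihood(theta):
    k, b = theta
    if k <= 0 or b <= 0:
        return np.inf                      # stay inside the parameter space
    return -np.sum(gamma.logpdf(x, a=k, scale=b))

result = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print(result.x)                            # approximate maximum likelihood estimates
```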
Random Samples
The most important special case is when the data variables form a random sample from a distribution.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of a random variable $X$ taking values in $R$, with probability density function $g_\theta$ for $\theta \in \Theta$. Then $\bs{X}$ takes values in $S = R^n$, and the likelihood and log-likelihood functions for $\bs{x} = (x_1, x_2, \ldots, x_n) \in S$ are \begin{align*} L_\bs{x}(\theta) & = \prod_{i=1}^n g_\theta(x_i), \quad \theta \in \Theta \\ \ln L_\bs{x}(\theta) & = \sum_{i=1}^n \ln g_\theta(x_i), \quad \theta \in \Theta \end{align*}
Extending the Method and the Invariance Property
Returning to the general setting, suppose now that $h$ is a one-to-one function from the parameter space $\Theta$ onto a set $\Lambda$. We can view $\lambda = h(\theta)$ as a new parameter taking values in the space $\Lambda$, and it is easy to re-parameterize the probability density function with the new parameter. Thus, let $\hat{f}_\lambda(\bs{x}) = f_{h^{-1}(\lambda)}(\bs{x})$ for $\bs{x} \in S$ and $\lambda \in \Lambda$. The corresponding likelihood function for $\bs{x} \in S$ is $\hat{L}_\bs{x}(\lambda) = L_\bs{x}\left[h^{-1}(\lambda)\right], \quad \lambda \in \Lambda$ Clearly if $u(\bs{x}) \in \Theta$ maximizes $L_\bs{x}$ for $\bs{x} \in S$, then $h\left[u(\bs{x})\right] \in \Lambda$ maximizes $\hat{L}_\bs{x}$ for $\bs{x} \in S$. It follows that if $U$ is a maximum likelihood estimator for $\theta$, then $V = h(U)$ is a maximum likelihood estimator for $\lambda = h(\theta)$.
If the function $h$ is not one-to-one, the maximum likelihood function for the new parameter $\lambda = h(\theta)$ is not well defined, because we cannot parameterize the probability density function in terms of $\lambda$. However, there is a natural generalization of the method.
Suppose that $h: \Theta \to \Lambda$, and let $\lambda = h(\theta)$ denote the new parameter. Define the likelihood function for $\lambda$ at $\bs{x} \in S$ by $\hat{L}_\bs{x}(\lambda) = \max\left\{L_\bs{x}(\theta): \theta \in h^{-1}\{\lambda\} \right\}; \quad \lambda \in \Lambda$ If $v(\bs{x}) \in \Lambda$ maximizes $\hat{L}_{\bs{x}}$ for each $\bs{x} \in S$, then $V = v(\bs{X})$ is a maximum likelihood estimator of $\lambda$.
This definition extends the maximum likelihood method to cases where the probability density function is not completely parameterized by the parameter of interest. The following theorem is known as the invariance property: if we can solve the maximum likelihood problem for $\theta$ then we can solve the maximum likelihood problem for $\lambda = h(\theta)$.
In the setting of the previous theorem, if $U$ is a maximum likelihood estimator of $\theta$, then $V = h(U)$ is a maximum likelihood estimator of $\lambda$.
Proof
As before, if $u(\bs{x}) \in \Theta$ maximizes $L_\bs{x}$ for $\bs{x} \in S$, then $h\left[u(\bs{x})\right] \in \Lambda$ maximizes $\hat{L}_\bs{x}$ for $\bs{x} \in S$.
Examples and Special Cases
In the following subsections, we will study maximum likelihood estimation for a number of special parametric families of distributions. Recall that if $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from a distribution with mean $\mu$ and variance $\sigma^2$, then the method of moments estimators of $\mu$ and $\sigma^2$ are, respectively, \begin{align} M & = \frac{1}{n} \sum_{i=1}^n X_i \\ T^2 & = \frac{1}{n} \sum_{i=1}^n (X_i - M)^2 \end{align} Of course, $M$ is the sample mean, and $T^2$ is the biased version of the sample variance. These statistics will also sometimes occur as maximum likelihood estimators. Another statistic that will occur in some of the examples below is $M_2 = \frac{1}{n} \sum_{i=1}^n X_i^2$, the second-order sample mean. As always, be sure to try the derivations yourself before looking at the solutions.
The Bernoulli Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with success parameter $p \in [0, 1]$. Recall that the Bernoulli probability density function is $g(x) = p^x (1 - p)^{1 - x}, \quad x \in \{0, 1\}$ Thus, $\bs{X}$ is a sequence of independent indicator variables with $\P(X_i = 1) = p$ for each $i$. In the usual language of reliability, $X_i$ is the outcome of trial $i$, where 1 means success and 0 means failure. Let $Y = \sum_{i=1}^n X_i$ denote the number of successes, so that the proportion of successes (the sample mean) is $M = Y / n$. Recall that $Y$ has the binomial distribution with parameters $n$ and $p$.
The sample mean $M$ is the maximum likelihood estimator of $p$ on the parameter space $(0, 1)$.
Proof
Note that $\ln g(x) = x \ln p + (1 - x) \ln(1 - p)$ for $x \in \{0, 1\}$ Hence the log-likelihood function at $\bs{x} = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ is $\ln L_{\bs{x}}(p) = \sum_{i=1}^n [x_i \ln p + (1 - x_i) \ln(1 - p)], \quad p \in (0, 1)$ Differentiating with respect to $p$ and simplifying gives $\frac{d}{dp} \ln L_{\bs{x}}(p) = \frac{y}{p} - \frac{n - y}{1 - p}$ where $y = \sum_{i=1}^n x_i$. Thus, there is a single critical point at $p = y / n = m$. The second derivative is $\frac{d^2}{d p^2} \ln L_{\bs{x}}(p) = -\frac{y}{p^2} - \frac{n - y}{(1 - p)^2} \lt 0$ Hence the log-likelihood function is concave downward and so the maximum occurs at the unique critical point $m$.
Recall that $M$ is also the method of moments estimator of $p$. It's always nice when two different estimation procedures yield the same result. Next let's look at the same problem, but with a much restricted parameter space.
Suppose now that $p$ takes values in $\left\{\frac{1}{2}, 1\right\}$. Then the maximum likelihood estimator of $p$ is the statistic $U = \begin{cases} 1, & Y = n \\ \frac{1}{2}, & Y \lt n \end{cases}$
1. $\E(U) = \begin{cases} 1, & p = 1 \\ \frac{1}{2} + \left(\frac{1}{2}\right)^{n+1}, & p = \frac{1}{2} \end{cases}$
2. $U$ is positively biased, but is asymptotically unbiased.
3. $\mse(U) = \begin{cases} 0, & p = 1 \\ \left(\frac{1}{2}\right)^{n+2}, & p = \frac{1}{2} \end{cases}$
4. $U$ is consistent.
Proof
Note that the likelihood function at $\bs{x} = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ is $L_{\bs{x}}(p) = p^y (1 - p)^{n-y}$ for $p \in \left\{\frac{1}{2}, 1\right\}$ where as usual, $y = \sum_{i=1}^n x_i$. Thus $L_{\bs{x}}\left(\frac{1}{2}\right) = \left(\frac{1}{2}\right)^n$. On the other hand, $L_{\bs{x}}(1) = 0$ if $y \lt n$ while $L_{\bs{x}}(1) = 1$ if $y = n$. Thus, if $y = n$ the maximum occurs when $p = 1$ while if $y \lt n$ the maximum occurs when $p = \frac{1}{2}$.
1. If $p = 1$ then $\P(U = 1) = \P(Y = n) = 1$, so trivially $\E(U) = 1$. If $p = \frac{1}{2}$, $\E(U) = 1 \P(Y = n) + \frac{1}{2} \P(Y \lt n) = 1 \left(\frac{1}{2}\right)^n + \frac{1}{2}\left[1 - \left(\frac{1}{2}\right)^n\right] = \frac{1}{2} + \left(\frac{1}{2}\right)^{n+1}$
2. Note that $\E(U) \ge p$ and $\E(U) \to p$ as $n \to \infty$ both in the case that $p = 1$ and $p = \frac{1}{2}$.
3. If $p = 1$ then $U = 1$ with probability 1, so trivially $\mse(U) = 0$. If $p = \frac{1}{2}$, $\mse(U) = \left(1 - \frac{1}{2}\right)^2 \P(Y = n) + \left(\frac{1}{2} - \frac{1}{2}\right)^2 \P(Y \lt n) = \left(\frac{1}{2}\right)^2 \left(\frac{1}{2}\right)^n = \left(\frac{1}{2}\right)^{n+2}$
4. From (c), $\mse(U) \to 0$ as $n \to \infty$.
Note that the Bernoulli distribution in the last exercise would model a coin that is either fair or two-headed. The last two exercises show that the maximum likelihood estimator of a parameter, like the solution to any maximization problem, depends critically on the domain.
$U$ is uniformly better than $M$ on the parameter space $\left\{\frac{1}{2}, 1\right\}$.
Proof
Recall that $\mse(M) = \var(M) = p (1 - p) / n$. If $p = 1$ then $\mse(M) = \mse(U) = 0$ so that both estimators give the correct answer. If $p = \frac{1}{2}$, $\mse(U) = \left(\frac{1}{2}\right)^{n+2} \lt \frac{1}{4 n} = \mse(M)$.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with unknown success parameter $p \in (0, 1)$. Find the maximum likelihood estimator of $p (1 - p)$, which is the variance of the sampling distribution.
Answer
By the invariance principle, the estimator is $M (1 - M)$ where $M$ is the sample mean.
The Geometric Distribution
Recall that the geometric distribution on $\N_+$ with success parameter $p \in (0, 1)$ has probability density function $g(x) = p (1 - p)^{x-1}, \quad x \in \N_+$ The geometric distribution governs the trial number of the first success in a sequence of Bernoulli trials.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the geometric distribution with unknown parameter $p \in (0, 1)$. The maximum likelihood estimator of $p$ is $U = 1 / M$.
Proof
Note that $\ln g(x) = \ln p + (x - 1) \ln(1 - p)$ for $x \in \N_+$. Hence the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in \N_+^n$ is $\ln L_\bs{x}(p) = n \ln p + (y - n) \ln(1 - p), \quad p \in (0, 1)$ where $y = \sum_{i=1}^n x_i$. So $\frac{d}{dp} \ln L(p) = \frac{n}{p} - \frac{y - n}{1 - p}$ The derivative is 0 when $p = n / y = 1 / m$. Finally, $\frac{d^2}{dp^2} \ln L_\bs{x}(p) = -n / p^2 - (y - n) / (1 - p)^2 \lt 0$ so the maximum occurs at the critical point.
Recall that $U$ is also the method of moments estimator of $p$. It's always reassuring when two different estimation procedures produce the same estimator.
The Negative Binomial Distribution
More generally, the negative binomial distribution on $\N$ with shape parameter $k \in (0, \infty)$ and success parameter $p \in (0, 1)$ has probability density function $g(x) = \binom{x + k - 1}{k - 1} p^k (1 - p)^x, \quad x \in \N$ If $k$ is a positive integer, then this distribution governs the number of failures before the $k$th success in a sequence of Bernoulli trials with success parameter $p$. However, the distribution makes sense for general $k \in (0, \infty)$. The negative binomial distribution is studied in more detail in the chapter on Bernoulli Trials.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the negative binomial distribution on $\N$ with known shape parameter $k$ and unknown success parameter $p \in (0, 1)$. The maximum likelihood estimator of $p$ is $U = \frac{k}{k + M}$
Proof
Note that $\ln g(x) = \ln \binom{x + k - 1}{k - 1} + k \ln p + x \ln(1 - p)$ for $x \in \N$. Hence the log-likelihood function corresponding to $\bs{x} = (x_1, x_2, \ldots, x_n) \in \N^n$ is $\ln L_\bs{x}(p) = n k \ln p + y \ln(1 - p) + C, \quad p \in (0, 1)$ where $y = \sum_{i=1}^n x_i$ and $C = \sum_{i=1}^n \ln \binom{x_i + k - 1}{k - 1}$. Hence $\frac{d}{dp} \ln L_\bs{x}(p) = \frac{n k}{p} - \frac{y}{1 - p}$ The derivative is 0 when $p = n k / (n k + y) = k / (k + m)$ where as usual, $m = y / n$. Finally, $\frac{d^2}{dp^2} \ln L_\bs{x}(p) = - n k / p^2 - y / (1 - p)^2 \lt 0$, so the maximum occurs at the critical point.
Once again, this is the same as the method of moments estimator of $p$ with $k$ known.
The Poisson Distribution
Recall that the Poisson distribution with parameter $r \gt 0$ has probability density function $g(x) = e^{-r} \frac{r^x}{x!}, \quad x \in \N$ The Poisson distribution is named for Simeon Poisson and is widely used to model the number of random points in a region of time or space. The parameter $r$ is proportional to the size of the region. The Poisson distribution is studied in more detail in the chapter on the Poisson process.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Poisson distribution with unknown parameter $r \in (0, \infty)$. The maximum likelihood estimator of $r$ is the sample mean $M$.
Proof
Note that $\ln g(x) = -r + x \ln r - \ln(x!)$ for $x \in \N$. Hence the log-likelihood function corresponding to $\bs{x} = (x_1, x_2, \ldots, x_n) \in \N^n$ is $\ln L_\bs{x}(r) = -n r + y \ln r - C, \quad r \in (0, \infty)$ where $y = \sum_{i=1}^n x_i$ and $C = \sum_{i=1}^n \ln(x_i!)$. Hence $\frac{d}{dr} \ln L_\bs{x}(r) = -n + y / r$. The derivative is 0 when $r = y / n = m$. Finally, $\frac{d^2}{dr^2} \ln L_\bs{x}(r) = -y / r^2 \lt 0$, so the maximum occurs at the critical point.
Recall that for the Poisson distribution, the parameter $r$ is both the mean and the variance. Thus $M$ is also the method of moments estimator of $r$. We showed in the introductory section that $M$ has smaller mean square error than $S^2$, although both are unbiased.
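The comparison mentioned above can also be explored by simulation. The following sketch (Python with NumPy; parameter values arbitrary, not part of the text) estimates the mean square errors of $M$ and $S^2$ as estimators of $r$ by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
r_true, n, reps = 4.0, 30, 20000

x = rng.poisson(r_true, size=(reps, n))
m = x.mean(axis=1)                 # sample means
s2 = x.var(axis=1, ddof=1)         # unbiased sample variances

mse_m = np.mean((m - r_true) ** 2)
mse_s2 = np.mean((s2 - r_true) ** 2)
print(mse_m, mse_s2)               # typically mse_m < mse_s2, consistent with the claim above
```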
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Poisson distribution with parameter $r \in (0, \infty)$, and let $p = \P(X = 0) = e^{-r}$. Find the maximum likelihood estimator of $p$ in two ways:
1. Directly, by finding the likelihood function corresponding to the parameter $p$.
2. By using the result of the last exercise and the invariance property.
Answer
$e^{-M}$ where $M$ is the sample mean.
The Normal Distribution
Recall that the normal distribution with mean $\mu$ and variance $\sigma^2$ has probability density function $g(x) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp \left[-\frac{1}{2} \left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ The normal distribution is often used to model physical quantities subject to small, random errors, and is studied in more detail in the chapter on Special Distributions
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with unknown mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$. The maximum likelihood estimators of $\mu$ and $\sigma^2$ are $M$ and $T^2$, respectively.
Proof
Note that $\ln g(x) = -\frac{1}{2} \ln(2 \pi) - \frac{1}{2} \ln(\sigma^2) - \frac{1}{2 \sigma^2} (x - \mu)^2, \quad x \in \R$ Hence the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in \R^n$ is $\ln L_\bs{x}(\mu, \sigma^2) = -\frac{n}{2} \ln(2 \pi) - \frac{n}{2} \ln(\sigma^2) - \frac{1}{2 \sigma^2} \sum_{i=1}^n (x_i - \mu)^2, \quad (\mu, \sigma^2) \in \R \times (0, \infty)$ Taking partial derivatives gives \begin{align*} \frac{\partial}{\partial \mu} \ln L_\bs{x}(\mu, \sigma^2) &= \frac{1}{\sigma^2} \sum_{i=1}^n (x_i - \mu) = \frac{1}{\sigma^2}\left(\sum_{i=1}^n x_i - n \mu\right) \\ \frac{\partial}{\partial \sigma^2} \ln L_\bs{x}(\mu, \sigma^2) &= -\frac{n}{2 \sigma^2} + \frac{1}{2 \sigma^4} \sum_{i=1}^n (x_i - \mu)^2 \end{align*} The partial derivatives are 0 when $\mu = \frac{1}{n} \sum_{i=1}^n x_i$ and $\sigma^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2$. Hence the unique critical point is $(m, t^2)$. Finally, with a bit more calculus, the second partial derivatives evaluated at the critical point are $\frac{\partial^2}{\partial \mu^2} \ln L_\bs{x}(m, t^2) = -n / t^2, \; \frac{\partial^2}{\partial \mu \partial \sigma^2} \ln L_\bs{x}(m, t^2) = 0, \; \frac{\partial^2}{\partial (\sigma^2)^2} \ln L_\bs{x}(m, t^2) = -n / (2 t^4)$ Hence the second derivative matrix at the critical point is negative definite and so the maximum occurs at the critical point.
Of course, $M$ and $T^2$ are also the method of moments estimators of $\mu$ and $\sigma^2$, respectively.
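As a quick numerical check (a sketch in Python with NumPy and SciPy, not part of the text; data and starting point chosen arbitrarily), the closed-form estimates $m$ and $t^2$ should agree with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(seed=3)
mu_true, sigma2_true, n = 10.0, 4.0, 200
x = rng.normal(mu_true, np.sqrt(sigma2_true), size=n)

m, t2 = x.mean(), x.var(ddof=0)          # closed-form maximum likelihood estimates

# numerical check: minimize the negative log-likelihood over (mu, sigma^2)
def neg_log_lik(theta):
    mu, s2 = theta
    if s2 <= 0:
        return np.inf                    # keep the search in the parameter space
    return n / 2 * np.log(s2) + np.sum((x - mu) ** 2) / (2 * s2)

res = minimize(neg_log_lik, x0=[np.median(x), 1.0], method="Nelder-Mead")
print((m, t2), res.x)                    # should agree closely
```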
Run the Normal estimation experiment 1000 times for several values of the sample size $n$, the mean $\mu$, and the variance $\sigma^2$. For the parameter $\sigma^2$, compare the maximum likelihood estimator $T^2$ with the standard sample variance $S^2$. Which estimator seems to work better in terms of mean square error?
Suppose again that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with unknown mean $\mu \in \R$ and unknown variance $\sigma^2 \in (0, \infty)$. Find the maximum likelihood estimator of $\mu^2 + \sigma^2$, which is the second moment about 0 for the sampling distribution.
Answer
By the invariance principle, the estimator is $M^2 + T^2$ where $M$ is the sample mean and $T^2$ is the (biased version of the) sample variance.
The Gamma Distribution
Recall that the gamma distribution with shape parameter $k \gt 0$ and scale parameter $b \gt 0$ has probability density function $g(x) = \frac{1}{\Gamma(k) \, b^k} x^{k-1} e^{-x / b}, \quad 0 \lt x \lt \infty$ The gamma distribution is often used to model random times and certain other types of positive random variables, and is studied in more detail in the chapter on Special Distributions
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the gamma distribution with known shape parameter $k$ and unknown scale parameter $b \in (0, \infty)$. The maximum likelihood estimator of $b$ is $V_k = \frac{1}{k} M$.
Proof
Note that for $x \in (0, \infty)$, $\ln g(x) = -\ln \Gamma(k) - k \ln b + (k - 1) \ln x - \frac{x}{b}$ and hence the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in (0, \infty)^n$ is $\ln L_\bs{x}(b) = - n k \ln b - \frac{y}{b} + C, \quad b \in (0, \infty)$ where $y = \sum_{i=1}^n x_i$ and $C = -n \ln \Gamma(k) + (k - 1) \sum_{i=1}^n \ln x_i$. It follows that $\frac{d}{d b} \ln L_\bs{x}(b) = -\frac{n k}{b} + \frac{y}{b^2}$ The derivative is 0 when $b = y / (n k) = m / k$. Finally, $\frac{d^2}{db^2} \ln L_\bs{x}(b) = n k / b^2 - 2 y / b^3$. At the critical point $b = y / (n k)$, the second derivative is $-(n k)^3 / y^2 \lt 0$ so the maximum occurs at the critical point.
Recall that $V_k$ is also the method of moments estimator of $b$ when $k$ is known. But when $k$ is unknown, the method of moments estimator of $b$ is $V = \frac{T^2}{M}$.
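Here is a similar numerical check for the gamma model with $k$ known (Python with NumPy and SciPy; the parameter values and bounds are arbitrary choices for illustration, not part of the text):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=4)
k, b_true, n = 3.0, 2.0, 400
x = rng.gamma(k, b_true, size=n)        # gamma sample with shape k and scale b

v_k = x.mean() / k                      # closed-form maximum likelihood estimate of b

# numerical check: minimize the negative log-likelihood in b (constants dropped), k known
neg_log_lik = lambda b: n * k * np.log(b) + x.sum() / b
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100), method="bounded")
print(v_k, res.x)                       # should agree closely
```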
Run the gamma estimation experiment 1000 times for several values of the sample size $n$, shape parameter $k$, and scale parameter $b$. In each case, compare the method of moments estimator $V$ of $b$ when $k$ is unknown with the method of moments and maximum likelihood estimator $V_k$ of $b$ when $k$ is known. Which estimator seems to work better in terms of mean square error?
The Beta Distribution
Recall that the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b = 1$ has probability density function $g(x) = a x^{a-1}, \quad x \in (0, 1)$ The beta distribution is often used to model random proportions and other random variables that take values in bounded intervals. It is studied in more detail in the chapter on Special Distributions.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the beta distribution with unknown left parameter $a \in (0, \infty)$ and right parameter $b = 1$. The maximum likelihood estimator of $a$ is $W = - \frac{n}{\sum_{i=1}^n \ln X_i} = -\frac{n}{\ln(X_1 X_2 \cdots X_n)}$
Proof
Note that $\ln g(x) = \ln a + (a - 1) \ln x$ for $x \in (0, 1)$. Hence the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in (0, 1)^n$ is $\ln L_\bs{x}(a) = n \ln a + (a - 1) \sum_{i=1}^n \ln x_i, \quad a \in (0, \infty)$ Therefore $\frac{d}{da} \ln L_\bs{x}(a) = n / a + \sum_{i=1}^n \ln x_i$. The derivative is 0 when $a = -n \big/ \sum_{i=1}^n \ln x_i$. Finally, $\frac{d^2}{da^2} \ln L_\bs{x}(a) = -n / a^2 \lt 0$, so the maximum occurs at the critical point.
Recall that when $b = 1$, the method of moments estimator of $a$ is $U_1 = M \big/ (1 - M)$, but when $b \in (0, \infty)$ is also unknown, the method of moments estimator of $a$ is $U = M (M - M_2) \big/ (M_2 - M^2)$. When $b = 1$, which estimator is better, the method of moments estimator or the maximum likelihood estimator?
In the beta estimation experiment, set $b = 1$. Run the experiment 1000 times for several values of the sample size $n$ and the parameter $a$. In each case, compare the estimators $U$, $U_1$ and $W$. Which estimator seems to work better in terms of mean square error?
Finally, note that $1 / W$ is the sample mean for a random sample of size $n$ from the distribution of $-\ln X$. This distribution is the exponential distribution with rate $a$.
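The following sketch (Python with NumPy, with arbitrary parameter values; not part of the text) computes $W$ for simulated data and illustrates the connection with the exponential distribution noted above:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
a_true, n = 2.5, 1000
x = rng.beta(a_true, 1.0, size=n)         # beta sample with right parameter 1

w = -n / np.log(x).sum()                  # maximum likelihood estimate of a

# -ln(X) is exponential with rate a, so 1 / W is the sample mean of the transformed data
y = -np.log(x)
print(w, 1 / y.mean())                    # equal up to floating point, by construction
print(y.mean(), 1 / a_true)               # sample mean of -ln X versus its true mean 1/a
```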
The Pareto Distribution
Recall that the Pareto distribution with shape parameter $a \gt 0$ and scale parameter $b \gt 0$ has probability density function $g(x) = \frac{a b^a}{x^{a+1}}, \quad b \le x \lt \infty$ The Pareto distribution, named for Vilfredo Pareto, is a heavy-tailed distribution often used to model income and certain other types of random variables. It is studied in more detail in the chapter on Special Distributions.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Pareto distribution with unknown shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. The maximum likelihood estimator of $b$ is $X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}$, the first order statistic. The maximum likelihood estimator of $a$ is $U = \frac{n}{\sum_{i=1}^n \ln X_i - n \ln X_{(1)}} = \frac{n}{\sum_{i=1}^n \left(\ln X_i - \ln X_{(1)}\right)}$
Proof
Note that $\ln g(x) = \ln a + a \ln b - (a + 1) \ln x$ for $x \in [b, \infty)$. Hence the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n)$ is $\ln L_\bs{x}(a, b) = n \ln a + n a \ln b - (a + 1) \sum_{i=1}^n \ln x_i; \quad 0 \lt a \lt \infty, \, 0 \lt b \le x_i \text{ for each } i \in \{1, 2, \ldots, n\}$ Equivalently, the domain is $0 \lt a \lt \infty$ and $0 \lt b \le x_{(1)}$. Note that $\ln L_{\bs{x}}(a, b)$ is increasing in $b$ for each $a$, and hence is maximized when $b = x_{(1)}$ for each $a$. Next, $\frac{d}{d a} \ln L_{\bs{x}}\left(a, x_{(1)}\right) = \frac{n}{a} + n \ln x_{(1)} - \sum_{i=1}^n \ln x_i$ The derivative is 0 when $a = n \big/ \left(\sum_{i=1}^n \ln x_i - n \ln x_{(1)}\right)$. Finally, $\frac{d^2}{da^2} \ln L_\bs{x}\left(a, x_{(1)}\right) = -n / a^2 \lt 0$, so the maximum occurs at the critical point.
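A numerical illustration of the two estimators (a Python/NumPy sketch with arbitrary parameter values, not part of the text; the Pareto sample is generated by the inverse transform method):

```python
import numpy as np

rng = np.random.default_rng(seed=6)
a_true, b_true, n = 3.0, 2.0, 500

# simulate Pareto(a, b) by the inverse transform: X = b * U^(-1/a) for U uniform on (0, 1)
u = rng.uniform(size=n)
x = b_true * u ** (-1 / a_true)

b_mle = x.min()                                  # first order statistic
a_mle = n / (np.log(x).sum() - n * np.log(b_mle))
print(b_mle, a_mle)
```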
Recall that if $a \gt 2$, the method of moments estimators of $a$ and $b$ are $1 + \sqrt{\frac{M_2}{M_2 - M^2}}, \; \frac{M_2}{M} \left(1 - \sqrt{\frac{M_2 - M^2}{M_2}}\right)$
Open the Pareto estimation experiment. Run the experiment 1000 times for several values of the sample size $n$ and the parameters $a$ and $b$. Compare the method of moments and maximum likelihood estimators. Which estimators seem to work better in terms of bias and mean square error?
Often the scale parameter in the Pareto distribution is known.
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Pareto distribution with unknown shape parameter $a \in (0, \infty)$ and known scale parameter $b \in (0, \infty)$. The maximum likelihood estimator of $a$ is $U = \frac{n}{\sum_{i=1}^n \ln X_i - n \ln b} = \frac{n}{\sum_{i=1}^n \left(\ln X_i - \ln b \right)}$
Proof
Modifying the previous proof, the log-likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n)$ is $\ln L_\bs{x}(a) = n \ln a + n a \ln b - (a + 1) \sum_{i=1}^n \ln x_i, \quad 0 \lt a \lt \infty$ The derivative is $\frac{d}{d a} \ln L_{\bs{x}}(a) = \frac{n}{a} + n \ln b - \sum_{i=1}^n \ln x_i$ The derivative is 0 when $a = n \big/ \left(\sum_{i=1}^n \ln x_i - n \ln b\right)$. Finally, $\frac{d^2}{da^2} \ln L_\bs{x}(a) = -n / a^2 \lt 0$, so the maximum occurs at the critical point.
Uniform Distributions
In this section we will study estimation problems related to the uniform distribution that are a good source of insight and counterexamples. In a sense, our first estimation problem is the continuous analogue of an estimation problem studied in the section on Order Statistics in the chapter Finite Sampling Models. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the uniform distribution on the interval $[0, h]$, where $h \in (0, \infty)$ is an unknown parameter. Thus, the sampling distribution has probability density function $g(x) = \frac{1}{h}, \quad x \in [0, h]$ First let's review results from the last section.
The method of moments estimator of $h$ is $U = 2 M$. The estimator $U$ satisfies the following properties:
1. $U$ is unbiased.
2. $\var(U) = \frac{h^2}{3 n}$ so $U$ is consistent.
Now let's find the maximum likelihood estimator.
The maximum likelihood estimator of $h$ is $X_{(n)} = \max\{X_1, X_2, \ldots, X_n\}$, the $n$th order statistic. The estimator $X_{(n)}$ satisfies the following properties:
1. $\E\left(X_{(n)}\right) = \frac{n}{n + 1} h$
2. $\bias\left(X_{(n)}\right) = -\frac{h}{n+1}$ so that $X_{(n)}$ is negatively biased but asymptotically unbiased.
3. $\var\left(X_{(n)}\right) = \frac{n}{(n+2)(n+1)^2} h^2$
4. $\mse\left(X_{(n)}\right) = \frac{2}{(n+1)(n+2)}h^2$ so that $X_{(n)}$ is consistent.
Proof
The likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n)$ is $L_\bs{x}(h) = 1 / h^n$ for $h \ge x_i$ for each $i \in \{1, 2, \ldots n\}$. The domain is equivalent to $h \ge x_{(n)}$. The function $h \mapsto 1 / h^n$ is decreasing, and so the maximum occurs at the smallest value, namely $x_{(n)}$. Parts (a) and (c) are restatements of results from the section on order statistics. Parts (b) and (d) follow from (a) and (c).
Since the expected value of $X_{(n)}$ is a known multiple of the parameter $h$, we can easily construct an unbiased estimator.
Let $V = \frac{n+1}{n} X_{(n)}$. The estimator $V$ satisfies the following properties:
1. $V$ is unbiased.
2. $\var(V) = \frac{h^2}{n(n + 2)}$ so that $V$ is consistent.
3. The asymptotic relative efficiency of $V$ to $U$ is infinite.
Proof
Parts (a) and (b) follow from the previous result and basic properties of the expected value and variance. For part (c), $\frac{\var(U)}{\var(V)} = \frac{h^2 / 3 n}{h^2 / n (n + 2)} = \frac{n + 2}{3} \to \infty \text{ as } n \to \infty$
The last part shows that the unbiased version $V$ of the maximum likelihood estimator is a much better estimator than the method of moments estimator $U$. In fact, an estimator such as $V$, whose mean square error decreases on the order of $\frac{1}{n^2}$, is called super efficient. Now, having found a really good estimator, let's see if we can find a really bad one. A natural candidate is an estimator based on $X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}$, the first order statistic. The next result will make the computations very easy.
The sample $\bs{X} = (X_1, X_2, \ldots, X_n)$ satisfies the following properties:
1. $h - X_i$ is uniformly distributed on $[0, h]$ for each $i$.
2. $(h - X_1, h - X_2, \ldots, h - X_n)$ is also a random sample from the uniform distribution on $[0, h]$.
3. $X_{(1)}$ has the same distribution as $h - X_{(n)}$.
Proof
1. This is a simple consequence of the fact that uniform distributions are preserved under linear transformations on the random variable.
2. This follows from (a) and that the fact that if $\bs{X}$ is a sequence of independent variables, then so is $(h - X_1, h - X_2, \ldots, h - X_n)$.
3. From part (b), $X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}$ has the same distribution as $\min\{h - X_1, h - X_2, \ldots, h - X_n\} = h - \max\{X_1, X_2, \ldots, X_n\} = h - X_{(n)}$.
Now we can construct our really bad estimator.
Let $W = (n + 1)X_{(1)}$. Then
1. $W$ is an unbiased estimator of $h$.
2. $\var(W) = \frac{n}{n+2} h^2$, so $W$ is not even consistent.
Proof
These results follow from the ones above:
1. $\E(X_{(1)}) = h - \E(X_{(n)}) = h - \frac{n}{n + 1} h = \frac{1}{n + 1} h$ and hence $\E(W) = h$.
2. $\var(W) = (n + 1)^2 \var(X_{(1)}) = (n + 1)^2 \var(h - X_{(n)}) = (n + 1)^2 \frac{n}{(n + 1)^2 (n + 2)} h^2 = \frac{n}{n + 2} h^2$.
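A short simulation makes the comparison of $U$, $V$, and $W$ concrete. The sketch below (Python with NumPy; the values of $h$ and $n$ are arbitrary, and the code is not part of the text) estimates the bias and mean square error of each estimator and prints the theoretical mean square errors for comparison:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
h, n, reps = 5.0, 20, 50000

x = rng.uniform(0.0, h, size=(reps, n))
U = 2 * x.mean(axis=1)                   # method of moments estimator
V = (n + 1) / n * x.max(axis=1)          # unbiased version of the maximum likelihood estimator
W = (n + 1) * x.min(axis=1)              # the "really bad" estimator

for name, est in [("U", U), ("V", V), ("W", W)]:
    print(name, np.mean(est - h), np.mean((est - h) ** 2))   # empirical bias and mse

# theoretical mean square errors, in the same order
print(h**2 / (3 * n), h**2 / (n * (n + 2)), n * h**2 / (n + 2))
```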
Run the uniform estimation experiment 1000 times for several values of the sample size $n$ and the parameter $a$. In each case, compare the empirical bias and mean square error of the estimators with their theoretical values. Rank the estimators in terms of empirical mean square error.
Our next series of exercises will show that the maximum likelihood estimator is not necessarily unique. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the uniform distribution on the interval $[a, a + 1]$, where $a \in \R$ is an unknown parameter. Thus, the sampling distribution has probability density function $g(x) = 1, \quad a \le x \le a + 1$ As usual, let's first review the method of moments estimator.
The method of moments estimator of $a$ is $U = M - \frac{1}{2}$. The estimator $U$ satisfies the following properties:
1. $U$ is unbiased.
2. $\var(U) = \frac{1}{12 n}$ so $U$ is consistent.
However, as promised, there is not a unique maximum likelihood estimator.
Any statistic $V \in \left[X_{(n)} - 1, X_{(1)}\right]$ is a maximum likelihood estimator of $a$.
Proof
The likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n)$ is $L_\bs{x}(a) = 1$ for $a \le x_i \le a + 1$ and $i \in \{1, 2, \ldots, n\}$. The domain is equivalent to $a \le x_{(1)}$ and $a \ge x_{(n)} - 1$. Since the likelihood function is constant on this domain, the result follows.
For completeness, let's consider the full estimation problem. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on $[a, a + h]$ where $a \in \R$ and $h \in (0, \infty)$ are both unknown. Here's the result from the last section:
Let $U$ and $V$ denote the method of moments estimators of $a$ and $h$, respectively. Then $U = 2 M - \sqrt{3} T, \quad V = 2 \sqrt{3} T$ where $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean, and $T = \sqrt{\frac{1}{n} \sum_{i=1}^n (X_i - M)^2}$ is the square root of the biased version of the sample variance.
It should come as no surprise at this point that the maximum likelihood estimators are functions of the largest and smallest order statistics.
The maximum likelihood estimators of $a$ and $h$ are $U = X_{(1)}$ and $V = X_{(n)} - X_{(1)}$, respectively.
1. $E(U) = a + \frac{h}{n + 1}$ so $U$ is positively biased and asymptotically unbiased.
2. $E(V) = h \frac{n - 1}{n + 1}$ so $V$ is negatively biased and asymptotically unbiased.
3. $\var(U) = h^2 \frac{n}{(n + 1)^2 (n + 2)}$ so $U$ is consistent.
4. $\var(V) = h^2 \frac{2(n - 1)}{(n + 1)^2(n + 2)}$ so $V$ is consistent.
Proof
The likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n)$ is $L_\bs{x}(a, h) = \frac{1}{h^n}$ for $a \le x_i \le a + h$ and $i \in \{1, 2, \ldots, n\}$. The domain is equivalent to $a \le x_{(1)}$ and $a + h \ge x_{(n)}$. Since the likelihood function depends only on $h$ in this domain and is decreasing, the maximum occurs when $a = x_{(1)}$ and $h = x_{(n)} - x_{(1)}$. Parts (a)–(d) follow from standard results for the order statistics from the uniform distribution.
The Hypergeometric Model
In all of our previous examples, the sequence of observed random variables $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from a distribution. However, maximum likelihood is a very general method that does not require the observation variables to be independent or identically distributed. In the hypergeometric model, we have a population of $N$ objects with $r$ of the objects type 1 and the remaining $N - r$ objects type 0. The population size $N$ is a positive integer. The type 1 size $r$ is a nonnegative integer with $r \le N$. These are the basic parameters, and typically one or both is unknown. Here are some typical examples:
1. The objects are devices, classified as good or defective.
2. The objects are persons, classified as female or male.
3. The objects are voters, classified as for or against a particular candidate.
4. The objects are wildlife of a particular type, either tagged or untagged.
We sample $n$ objects from the population at random, without replacement. Let $X_i$ be the type of the $i$th object selected, so that our sequence of observed variables is $\bs{X} = (X_1, X_2, \ldots, X_n)$. The variables are identically distributed indicator variables, with $P(X_i = 1) = r / N$ for each $i \in \{1, 2, \ldots, n\}$, but are dependent since the sampling is without replacement. The number of type 1 objects in the sample is $Y = \sum_{i=1}^n X_i$. This statistic has the hypergeometric distribution with parameters $N$, $r$, and $n$, and has probability density function given by $P(Y = y) = \frac{\binom{r}{y} \binom{N - r}{n - y}}{\binom{N}{n}} = \binom{n}{y} \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad y \in \{\max\{0, n + r - N\}, \ldots, \min\{n, r\}\}$ Recall the falling power notation: $x^{(k)} = x (x - 1) \cdots (x - k + 1)$ for $x \in \R$ and $k \in \N$. The hypergeometric model is studied in more detail in the chapter on Finite Sampling Models.
As above, let $\bs{X} = (X_1, X_2, \ldots, X_n)$ be the observed variables in the hypergeometric model with parameters $N$ and $r$. Then
1. The maximum likelihood estimator of $r$ with $N$ known is $U = \lfloor N M \rfloor = \lfloor N Y / n \rfloor$.
2. The maximum likelihood estimator of $N$ with $r$ known is $V = \lfloor r / M \rfloor = \lfloor r n / Y \rfloor$ if $Y \gt 0$.
Proof
By a simple application of the multiplication rule, the PDF $f$ of $\bs{X}$ is $f(\bs{x}) = \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad \bs{x} = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $y = \sum_{i=1}^n x_i$.
1. With $N$ known, the likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ is $L_{\bs{x}}(r) = \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad r \in \{y, y + 1, \ldots, y + N - n\}$ After some algebra, $L_{\bs{x}}(r - 1) \lt L_{\bs{x}}(r)$ if and only if $(r - y)(N - r + 1) \lt r (N - r - n + y + 1)$ if and only if $r \lt N y / n$. So the maximum of $L_{\bs{x}}(r)$ occurs when $r = \lfloor N y / n \rfloor$.
2. Similarly, with $r$ known, the likelihood function corresponding to the data $\bs{x} = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ is $L_{\bs{x}}(N) = \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad N \in \{\max\{r, n\}, \ldots\}$ After some algebra, $L_{\bs{x}}(N - 1) \lt L_{\bs{x}}(N)$ if and only if $(N - r - n + y) / (N - n) \lt (N - r) / N$ if and only if $N \lt r n / y$ (assuming $y \gt 0$). So the maximum of $L_{\bs{x}}(N)$ occurs when $N = \lfloor r n / y \rfloor$.
In the reliability example (1), we might typically know $N$ and would be interested in estimating $r$. In the wildlife example (4), we would typically know $r$ and would be interested in estimating $N$. This example is known as the capture-recapture model.
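The floor formulas are easy to confirm by brute force. The sketch below (Python, standard library only; the numerical values are arbitrary and not part of the text) maximizes the likelihood directly over the unknown parameter in each case:

```python
from math import comb

N, r, n, y = 100, 30, 20, 7     # population size, type-1 size, sample size, observed count

def likelihood(r, N):
    # probability of observing y type-1 objects in a sample of size n, as a function of (r, N)
    if y > r or n - y > N - r:
        return 0.0
    return comb(r, y) * comb(N - r, n - y) / comb(N, n)

# estimate r with N known: brute force versus the floor formula floor(N y / n)
r_hat = max(range(y, N - n + y + 1), key=lambda r: likelihood(r, N))
print(r_hat, (N * y) // n)

# estimate N with r known (capture-recapture): brute force over a finite range versus floor(r n / y)
N_hat = max(range(max(r, n), 2000), key=lambda N: likelihood(r, N))
print(N_hat, (r * n) // y)
```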
Clearly there is a close relationship between the hypergeometric model and the Bernoulli trials model above. In fact, if the sampling is with replacement, the Bernoulli trials model with $p = r / N$ would apply rather than the hypergeometric model. In addition, if the population size $N$ is large compared to the sample size $n$, the hypergeometric model is well approximated by the Bernoulli trials model, again with $p = r / N$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/07%3A_Point_Estimation/7.03%3A_Maximum_Likelihood.txt |
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\mse}{\text{mse}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The General Method
Suppose again that we have an observable random variable $\bs{X}$ for an experiment, that takes values in a set $S$. Suppose also that distribution of $\bs{X}$ depends on a parameter $\theta$ taking values in a parameter space $T$. Of course, our data variable $\bs{X}$ is almost always vector-valued, so that typically $S \subseteq \R^n$ for some $n \in \N_+$. Depending on the nature of the sample space $S$, the distribution of $\bs X$ may be discrete or continuous. The parameter $\theta$ may also be vector-valued, so that typically $T \subseteq \R^k$ for some $k \in \N_+$.
In Bayesian analysis, named for the famous Thomas Bayes, we model the deterministic, but unknown parameter $\theta$ with a random variable $\Theta$ that has a specified distribution on the parameter space $T$. Depending on the nature of the parameter space, this distribution may also be either discrete or continuous. It is called the prior distribution of $\Theta$ and is intended to reflect our knowledge of the parameter $\theta$, before we gather data. After observing $\bs X = \bs x \in S$, we then use Bayes' theorem, to compute the conditional distribution of $\Theta$ given $\bs X = \bs x$. This distribution is called the posterior distribution of $\Theta$, and is an updated distribution, given the information in the data. Here is the mathematical description, stated in terms of probability density functions.
Suppose that the prior distribution of $\Theta$ on $T$ has probability density function $h$, and that given $\Theta = \theta \in T$, the conditional probability density function of $\bs X$ on $S$ is $f(\cdot \mid \theta)$. Then the probability density function of the posterior distribution of $\Theta$ given $\bs X = \bs x \in S$ is $h(\theta \mid \bs x) = \frac{h(\theta) f(\bs x \mid \theta)}{f(\bs x)}, \quad \theta \in T$ where the function in the denominator is defined as follows, in the discrete and continuous cases, respectively: \begin{align*} f(\bs x) & = \sum_{\theta \in T} h(\theta) f(\bs x \mid \theta), \quad \bs x \in S \\ f(\bs x) & = \int_T h(\theta) f(\bs{x} \mid \theta) \, d\theta, \quad \bs x \in S \end{align*}
Proof
This is just Bayes' theorem with new terminology. Recall that the joint probability density function of $(\bs{X}, \Theta)$ is the mapping on $S \times T$ given by $(\bs{x}, \theta) \mapsto h(\theta) f(\bs{x} \mid \theta)$. Then the function in the denominator is the marginal probability density function of $\bs X$. So by definition, $h(\theta \mid \bs x) = h(\theta) f(\bs x \mid \theta) / f(\bs x)$ for $\theta \in T$ is the conditional probability density function of $\Theta$ given $\bs X = \bs x$.
For $\bs x \in S$, note that $f(\bs{x})$ is simply the normalizing constant for the function $\theta \mapsto h(\theta) f(\bs{x} \mid \theta)$. It may not be necessary to explicitly compute $f(\bs{x})$, if one can recognize the functional form of $\theta \mapsto h(\theta) f(\bs{x} \mid \theta)$ as that of a known distribution. This will indeed be the case in several of the examples explored below.
If the parameter space $T$ has finite measure $c$ (counting measure in the discrete case or Lebesgue measure in the continuous case), then one possible prior distribution is the uniform distribution on $T$, with probability density function $h(\theta) = 1 / c$ for $\theta \in T$. This distribution reflects no prior knowledge about the parameter, and so is called the non-informative prior distribution.
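When no conjugate family is available, the posterior density can be computed numerically on a grid, directly from the definition above. The following sketch (Python with NumPy, using Bernoulli data and a uniform prior purely for illustration; not part of the text) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(seed=9)
x = rng.binomial(1, 0.35, size=25)                 # observed Bernoulli data

theta = np.linspace(0.001, 0.999, 999)             # grid over the parameter space T = (0, 1)
d = theta[1] - theta[0]
prior = np.ones_like(theta)                        # non-informative (uniform) prior density h(theta)
lik = theta ** x.sum() * (1 - theta) ** (len(x) - x.sum())   # f(x | theta)

post = prior * lik
post /= post.sum() * d                             # approximate the normalizing constant f(x)
print((theta * post).sum() * d)                    # posterior mean: the Bayesian estimate of theta
```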
Random Samples
Of course, an important and essential special case occurs when $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of a basic variable $X$. Specifically, suppose that $X$ takes values in a set $R$ and has probability density function $g(\cdot \mid \theta)$ for a given $\theta \in T$. In this case, $S = R^n$ and the probability density function $f(\cdot \mid \theta)$ of $\bs{X}$ given $\theta$ is $f(x_1, x_2, \ldots, x_n \mid \theta) = g(x_1 \mid \theta) g(x_2 \mid \theta) \cdots g(x_n \mid \theta), \quad (x_1, x_2, \ldots, x_n) \in S$
Real Parameters
Suppose that $\theta$ is a real-valued parameter, so that $T \subseteq \R$. Here is our main definition.
The conditional expected value $\E(\Theta \mid \bs{X})$ is the Bayesian estimator of $\theta$.
1. If $\Theta$ has a discrete distribution on $T$ then $\E(\Theta \mid \bs X = \bs x) = \sum_{\theta \in T} \theta h(\theta \mid \bs x), \quad \bs x \in S$
2. If $\Theta$ has a continuous distribution on $T$ then $\E(\Theta \mid \bs X = \bs x) = \int_T \theta h(\theta \mid \bs x) d\theta, \quad \bs x \in S$
Recall that $\E(\Theta \mid \bs{X})$ is a function of $\bs{X}$ and, among all functions of $\bs{X}$, is closest to $\Theta$ in the mean square sense. Of course, once we collect the data and observe $\bs{X} = \bs{x}$, the Bayesian estimate of $\theta$ is $\E(\Theta \mid \bs{X} = \bs{x})$. As always, the term estimator refers to a random variable, before the data are collected, and the term estimate refers to an observed value of the random variable after the data are collected. The definitions of bias and mean square error are as before, but now conditioned on $\Theta = \theta \in T$.
Suppose that $U$ is the Bayes estimator of $\theta$.
1. The bias of $U$ is $\bias(U \mid \theta) = \E(U - \theta \mid \Theta = \theta)$ for $\theta \in T$.
2. The mean square error of $U$ is $\mse(U \mid \theta) = \E[(U - \theta)^2 \mid \Theta = \theta]$ for $\theta \in T$.
As before, $\bias(U \mid \theta) = \E(U \mid \theta) - \theta$ and $\mse(U \mid \theta) = \var(U \mid \theta) + \bias^2(U \mid \theta)$. Suppose now that we observe the random variables $(X_1, X_2, X_3, \ldots)$ sequentially, and we compute the Bayes estimator $U_n$ of $\theta$ based on $(X_1, X_2, \ldots, X_n)$ for each $n \in \N_+$. Again, the most common case is when we are sampling from a distribution, so that the sequence is independent and identically distributed (given $\theta$). We have the natural asymptotic properties that we have seen before.
Let $\bs U = (U_n: n \in \N_+)$ be the sequence of Bayes estimators of $\theta$ as above.
1. $\bs U$ is asymptotically unbiased if $\bias(U_n \mid \theta) \to 0$ as $n \to \infty$ for each $\theta \in T$.
2. $\bs U$ is mean-square consistent if $\mse(U_n \mid \theta) \to 0$ as $n \to \infty$ for each $\theta \in T$.
Often we cannot construct unbiased Bayesian estimators, but we do hope that our estimators are at least asymptotically unbiased and consistent. It turns out that the sequence of Bayesian estimators $\bs U$ is a martingale. The theory of martingales provides some powerful tools for studying these estimators.
From the Bayesian perspective, the posterior distribution of $\Theta$ given the data $\bs X = \bs x$ is of primary importance. Point estimates of $\theta$ derived from this distribution are of secondary importance. In particular, the mean square error function $u \mapsto \E[(\Theta - u)^2 \mid \bs X = \bs x]$, minimized as we have noted at $\E(\Theta \mid \bs X = \bs x)$, is not the only loss function that can be used. (Although it's the only one that we consider.) Another possible loss function, among many, is the mean absolute error function $u \mapsto \E(|\Theta - u| \mid \bs X = \bs x)$, which we know is minimized at the median(s) of the posterior distribution.
Conjugate Families
Often, the prior distribution of $\Theta$ is itself a member of a parametric family, with the parameters specified to reflect our prior knowledge of $\theta$. In many important special cases, the parametric family can be chosen so that the posterior distribution of $\Theta$ given $\bs{X} = \bs{x}$ belongs to the same family for each $\bs x \in S$. In such a case, the family of distributions of $\Theta$ is said to be conjugate to the family of distributions of $\bs{X}$. Conjugate families are nice from a computational point of view, since we can often compute the posterior distribution through a simple formula involving the parameters of the family, without having to use Bayes' theorem directly. Similarly, in the case that the parameter is real valued, we can often compute the Bayesian estimator through a simple formula involving the parameters of the conjugate family.
Special Distributions
The Bernoulli Distribution
Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of independent variables, each having the Bernoulli distribution with unknown success parameter $p \in (0, 1)$. In short, $\bs X$ is a sequence of Bernoulli trials, given $p$. In the usual language of reliability, $X_i = 1$ means success on trial $i$ and $X_i = 0$ means failure on trial $i$. Recall that given $p$, the Bernoulli distribution has probability density function $g(x \mid p) = p^x (1 - p)^{1-x}, \quad x \in \{0, 1\}$ Note that the number of successes in the first $n$ trials is $Y_n = \sum_{i=1}^n X_i$. Given $p$, random variable $Y_n$ has the binomial distribution with parameters $n$ and $p$.
Suppose now that we model $p$ with a random variable $P$ that has a prior beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$, where $a$ and $b$ are chosen to reflect our initial information about $p$. So $P$ has probability density function $h(p) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1}, \quad p \in (0, 1)$ and has mean $a / (a + b)$. For example, if we know nothing about $p$, we might let $a = b = 1$, so that the prior distribution is uniform on the parameter space $(0, 1)$ (the non-informative prior). On the other hand, if we believe that $p$ is about $\frac{2}{3}$, we might let $a = 4$ and $b = 2$, so that the prior distribution is unimodal, with mean $\frac{2}{3}$. As a random process, the sequence $\bs X$ with $p$ randomized by $P$, is known as the beta-Bernoulli process, and is very interesting on its own, outside of the context of Bayesian estimation.
For $n \in \N_+$, the posterior distribution of $P$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is beta with left parameter $a + Y_n$ and right parameter $b + (n - Y_n)$.
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$, and let $y = \sum_{i=1}^n x_i$. Then $f(\bs x \mid p) = g(x_1 \mid p) g(x_2 \mid p) \cdots g(x_n \mid p) = p^y (1 - p)^{n-y}$ Hence $h(p) f(\bs x \mid p) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} p^y (1 - p)^{n-y} = \frac{1}{B(a, b)}p^{a + y - 1} (1 - p)^{b + n - y - 1}, \quad p \in (0, 1)$ As a function of $p$ this expression is proportional to the beta PDF with parameters $a + y$, $b + n - y$. Note that it's not necessary to compute the normalizing factor $f(\bs x)$.
Thus, the beta distribution is conjugate to the Bernoulli distribution. Note also that the posterior distribution depends on the data vector $\bs{X}_n$ only through the number of successes $Y_n$. This is true because $Y_n$ is a sufficient statistic for $p$. In particular, note that the left beta parameter is increased by the number of successes $Y_n$ and the right beta parameter is increased by the number of failures $n - Y_n$.
The Bayesian estimator of $p$ given $\bs{X}_n$ is $U_n = \frac{a + Y_n}{a + b + n}$
Proof
Recall that the mean of the beta distribution is the left parameter divided by the sum of the parameters, so this result follows from the previous result.
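The conjugate updating rule is a one-line computation. Here is a sketch (Python with NumPy; the prior parameters and data are arbitrary illustrative choices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(seed=10)
p_true, n = 0.3, 50
x = rng.binomial(1, p_true, size=n)       # Bernoulli trials
y = x.sum()

a, b = 4.0, 2.0                           # prior beta parameters (prior mean 2/3)
a_post, b_post = a + y, b + (n - y)       # posterior beta parameters
u = a_post / (a_post + b_post)            # Bayesian estimate of p
print(a_post, b_post, u, y / n)           # compare with the maximum likelihood estimate y/n
```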
In the beta coin experiment, set $n = 20$ and $p = 0.3$, and set $a = 4$ and $b = 2$. Run the simulation 100 times and note the estimate of $p$ and the shape and location of the posterior probability density function of $p$ on each run.
Next let's compute the bias and mean-square error functions.
For $n \in \N_+$, $\bias(U_n \mid p) = \frac{a(1 - p) - b p}{a + b + n}, \quad p \in (0, 1)$ The sequence $\bs U = (U_n: n \in \N_+)$ is asymptotically unbiased.
Proof
Given $p$, $Y_n$ has the binomial distribution with parameters $n$ and $p$ so $E(Y_n \mid p) = n p$. Hence $\bias(U_n \mid p) = \E(U_n \mid p) - p = \frac{a + n p}{a + b + n} - p$ Simplifying gives the formula above. Clearly $\bias(U_n \mid p) \to 0$ as $n \to \infty$.
Note also that we cannot choose $a$ and $b$ to make $U_n$ unbiased, since such a choice would involve the true value of $p$, which we do not know.
In the beta coin experiment, vary the parameters and note the change in the bias. Now set $n = 20$ and $p = 0.8$, and set $a = 2$ and $b = 6$. Run the simulation 1000 times. Note the estimate of $p$ and the shape and location of the posterior probability density function of $p$ on each update. Compare the empirical bias to the true bias.
For $n \in \N_+$, $\mse(U_n \mid p) = \frac{p [n - 2 \, a (a + b)] + p^2[(a + b)^2 - n] + a^2}{(a + b + n)^2}, \quad p \in (0, 1)$ The sequence $(U_n: n \in \N_+)$ is mean-square consistent.
Proof
Once again, given $p$, $Y_n$ has the binomial distribution with parameters $n$ and $p$ so $\var(U_n \mid p) = \frac{n p (1 - p)}{(a + b + n)^2}$ Hence $\mse(U_n \mid p) = \frac{n p (1 - p)}{(a + b + n)^2} + \left[\frac{a (1 - p) - b p}{a + b + n}\right]^2$ Simplifying gives the result. Clearly $\mse(U_n \mid p) \to 0$ as $n \to \infty$.
In the beta coin experiment, vary the parameters and note the change in the mean square error. Now set $n = 10$ and $p = 0.7$, and set $a = b = 1$. Run the simulation 1000 times. Note the estimate of $p$ and the shape and location of the posterior probability density function of $p$ on each update. Compare the empirical mean square error to the true mean square error.
Interestingly, we can choose $a$ and $b$ so that $U$ has mean square error that is independent of the unknown parameter $p$:
Let $n \in \N_+$ and let $a = b = \sqrt{n} / 2$. Then
$\mse(U_n \mid p) = \frac{n}{4 \left(n + \sqrt{n}\right)^2}, \quad p \in (0, 1)$
In the beta coin experiment, set $n = 36$ and $a = b = 3$. Vary $p$ and note that the mean square error does not change. Now set $p = 0.8$ and run the simulation 1000 times. Note the estimate of $p$ and the shape and location of the posterior probability density function on each update. Compare the empirical bias and mean square error to the true values.
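The constancy of the mean square error for this choice of $a$ and $b$ can also be verified by simulation (a Python/NumPy sketch with arbitrary values of $p$, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(seed=11)
n, reps = 36, 200000
a = b = np.sqrt(n) / 2

for p in (0.1, 0.5, 0.8):
    y = rng.binomial(n, p, size=reps)
    u = (a + y) / (a + b + n)
    print(p, np.mean((u - p) ** 2))          # empirical mean square error at each p

print(n / (4 * (n + np.sqrt(n)) ** 2))       # theoretical value, the same for every p
```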
Recall that the method of moments estimator and the maximum likelihood estimator of $p$ (on the interval $(0, 1)$) are both the sample mean (the proportion of successes): $M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ This estimator has mean square error $\mse(M_n \mid p) = \frac{1}{n} p (1 - p)$. To see the connection between the estimators, note from (6) that $U_n = \frac{a + b}{a + b + n} \frac{a}{a + b} + \frac{n}{a + b + n} M_n$ So $U_n$ is a weighted average of $a / (a + b)$ (the mean of the prior distribution) and $M_n$ (the maximum likelihood estimator).
Another Bernoulli Distribution
Bayesian estimation, like other forms of parametric estimation, depends critically on the parameter space. Suppose again that $(X_1, X_2, \ldots)$ is a sequence of Bernoulli trials, given the unknown success parameter $p$, but suppose now that the parameter space is $\left\{\frac{1}{2}, 1\right\}$. This setup corresponds to the tossing of a coin that is either fair or two-headed, but we don't know which. We model $p$ with a random variable $P$ that has the prior probability density function $h$ given by $h(1) = a$, $h\left(\frac{1}{2}\right) = 1 - a$, where $a \in (0, 1)$ is chosen to reflect our prior knowledge of the probability that the coin is two-headed. If we are completely ignorant, we might let $a = \frac{1}{2}$ (the non-informative prior). If we think the coin is more likely to be two-headed, we might let $a = \frac{3}{4}$. Again let $Y_n = \sum_{i=1}^n X_i$ for $n \in \N_+$.
The posterior distribution of $P$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is
1. $h(1 \mid \bs{X}_n) = \frac{2^n a}{2^n a + (1 - a)}$ if $Y_n = n$ and $h(1 \mid \bs{X}_n) = 0$ if $Y_n \lt n$
2. $h\left(\frac{1}{2} \mid \bs{X}_n\right) = \frac{1 - a}{2^n a + (1 - a)}$ if $Y_n = n$ and $h\left(\frac{1}{2} \mid \bs{X}_n\right) = 1$ if $Y_n \lt n$
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$, and let $y = \sum_{i=1}^n x_i$. As before, $f(\bs x \mid p) = p^y (1 - p)^{n-y}$. We adopt the usual conventions (which give the correct mathematics) that $0^k = 0$ if $k \in \N_+$ but $0^0 = 1$. So from Bayes' theorem, \begin{align} h(1 \mid \bs x) & = \frac{h(1) f(\bs x \mid 1)}{h(1/2) f(\bs x \mid 1/2) + h(1) f(\bs x \mid 1)} \\ & = \frac{a 1^y 0^{n-y}}{(1 - a)(1/2)^n + a 1^y 0^{n-y}} \end{align} So if $y \lt n$ then $h(1 \mid \bs x) = 0$ while if $y = n$ then $h(1 \mid \bs x) = \frac{a}{(1 - a)(1/2)^n + a}$. Of course, $h\left(\frac{1}{2} \mid \bs x\right) = 1 - h(1 \mid \bs x)$. The results now follow after a bit of algebra.
Now let $p_n = \frac{2^{n+1} a + (1 - a)}{2^{n+1} a + 2 (1 - a)}$
The Bayes' estimator of $p$ given $\bs{X}_n$ is the statistic $U_n$ defined by
1. $U_n = p_n$ if $Y_n = n$
2. $U_n = \frac{1}{2}$ if $Y_n \lt n$
Proof
By definition, the Bayes' estimator is $U_n = E(P \mid \bs{X}_n)$. From the previous result, if $Y_n = n$ then $U_n = 1 \cdot \frac{2^n a}{2^n a + (1 - a)} + \frac{1}{2} \cdot \frac{1 - a}{2^n a + (1 - a)}$ which simplifies to $p_n$. If $Y_n \lt n$ then $U = 1 \cdot 0 + \frac{1}{2} \cdot 1 = \frac{1}{2}$.
If we observe $Y_n \lt n$ then $U_n$ gives the correct answer $\frac{1}{2}$. This certainly makes sense since we know that we do not have the two-headed coin. On the other hand, if we observe $Y_n = n$ then we are not certain which coin we have, and the Bayesian estimate $p_n$ is not even in the parameter space! But note that $p_n \to 1$ as $n \to \infty$ exponentially fast. Next let's compute the bias and mean-square error for a given $p \in \left\{\frac{1}{2}, 1\right\}$.
For $n \in \N_+$,
1. $\bias(U_n \mid 1) = p_n - 1$
2. $\bias\left(U_n \mid \frac{1}{2}\right) = \left(\frac{1}{2}\right)^n \left(p_n - \frac{1}{2}\right)$
The sequence of estimators $(U_n: n \in \N_+)$ is asymptotically unbiased.
Proof
By definition, $\bias(U_n \mid p) = \E(U_n - p \mid p)$. Hence from the previous result, \begin{align} \bias(U_n \mid p) & = (p_n - p) \P(Y_n = n \mid p) + \left(\frac{1}{2} - p\right) \P(Y_n \lt n \mid p) \\ & = (p_n - p) p^n + \left(\frac{1}{2} - p\right) (1 - p^n) \end{align} Substituting $p = 1$ and $p = \frac{1}{2}$ gives the results. In both cases, $\bias(U_n \mid p) \to 0$ as $n \to \infty$ since $p_n \to 1$ and $\left(\frac{1}{2}\right)^n \to 0$ as $n \to \infty$.
If $p = 1$, the estimator $U_n$ is negatively biased; we noted this earlier. If $p = \frac{1}{2}$, then $U_n$ is positively biased for sufficiently large $n$ (depending on $a$).
For $n \in \N_+$,
1. $\mse(U_n \mid 1) = (p_n - 1)^2$
2. $\mse\left(U_n \mid \frac{1}{2}\right) = \left(\frac{1}{2}\right)^n \left(p_n - \frac{1}{2}\right)^2$
The sequence of estimators $\bs U = (U_n: n \in \N_+)$ is mean-square consistent.
Proof
By definition, $\mse(U_n \mid p) = \E[(U_n - p)^2 \mid p]$. Hence \begin{align} \mse(U_n \mid p) & = (p_n - p)^2 \P(Y_n = n \mid p) + \left(\frac{1}{2} - p\right)^2 \P(Y_n \lt n \mid p) \\ & = (p_n - p)^2 p^n + \left(\frac{1}{2} - p\right)^2 (1 - p^n) \end{align} Substituting $p = 1$ and $p = \frac{1}{2}$ gives the results. In both cases, $\mse(U_n \mid p) \to 0$ as $n \to \infty$ since $p_n \to 1$ and $\left(\frac{1}{2}\right)^n \to 0$ as $n \to \infty$.
The Geometric distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent random variables, each having the geometric distribution on $\N_+$ with unknown success parameter $p \in (0, 1)$. Recall that these variables can be interpreted as the number of trials between successive successes in a sequence of Bernoulli trials. Given $p$, the geometric distribution has probability density function $g(x \mid p) = p (1 - p)^{x-1}, \quad x \in \N_+$ Once again for $n \in \N_+$, let $Y_n = \sum_{i=1}^n X_i$. In this setting, $Y_n$ is the trial number of the $n$th success, and given $p$, has the negative binomial distribution with parameters $n$ and $p$.
Suppose now that we model $p$ with a random variable $P$ having a prior beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. As usual, $a$ and $b$ are chosen to reflect our prior knowledge of $p$.
The posterior distribution of $P$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is beta with left parameter $a + n$ and right parameter $b + (Y_n - n)$.
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in \N_+^n$ and let $y = \sum_{i=1}^n x_i$. Then $f(\bs x \mid p) = g(x_1 \mid p) g(x_2 \mid p) \cdots g(x_n \mid p) = p^n (1 - p)^{y - n}$ Hence $h(p) f( \bs x \mid p) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} p^n (1 - p)^{y - n} = \frac{1}{B(a, b)} p^{a + n - 1} (1 - p)^{b + y - n - 1}, \quad p \in (0, 1)$ As a function of $p \in (0, 1)$ this expression is proportional to the beta PDF with parameters $a + n$ and $b + y - n$. Note that it's not necessary to compute the normalizing constant $f(\bs{x})$.
Thus, the beta distribution is conjugate to the geometric distribution. Moreover, note that in the posterior beta distribution, the left parameter is increased by the number of successes $n$ while the right parameter is increased by the number of failures $Y_n - n$, just as in the Bernoulli model. In particular, the posterior left parameter is deterministic and depends on the data only through the sample size $n$.
The Bayesian estimator of $p$ based on $\bs{X}_n$ is $V_n = \frac{a + n}{a + b + Y_n}$
Proof
By definition, the Bayesian estimator is the mean of the posterior distribution. Recall again that the mean of the beta distribution is the left parameter divided by the sum of the parameters, so the result follows from our previous theorem.
Recall that the method of moments estimator of $p$, and the maximum likelihood estimator of $p$ on the interval $(0, 1)$ are both $W_n = 1 / M_n = n / Y_n$. To see the connection between the estimators, note from (19) that $\frac{1}{V_n} = \frac{a}{a + n} \frac{a + b}{a} + \frac{n}{a + n} \frac{1}{W_n}$ So $1 / V_n$ (the reciprocal of the Bayesian estimator) is a weighted average of $(a + b) / a$ (the reciprocal of the mean of the prior distribution) and $1 / W_n$ (the reciprocal of the maximum likelihood estimator).
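A sketch of the conjugate update for the geometric model (Python with NumPy; the prior parameters are arbitrary illustrative choices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(seed=12)
p_true, n = 0.25, 40
x = rng.geometric(p_true, size=n)          # geometric sample on {1, 2, ...}
y = x.sum()

a, b = 1.0, 1.0                            # uniform prior on (0, 1)
a_post, b_post = a + n, b + (y - n)        # posterior beta parameters
v = a_post / (a_post + b_post)             # Bayesian estimate of p
print(v, n / y)                            # compare with the maximum likelihood estimate n/y
```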
The Poisson Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent random variables, each having the Poisson distribution with unknown parameter $\lambda \in (0, \infty)$. Recall that the Poisson distribution is often used to model the number of random points in a region of time or space and is studied in more detail in the chapter on the Poisson Process. The distribution is named for the inimitable Simeon Poisson and, given $\lambda$, has probability density function $g(x \mid \lambda) = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x \in \N$ Once again, for $n \in \N_+$, let $Y_n = \sum_{i=1}^n X_i$. Given $\lambda$, random variable $Y_n$ also has a Poisson distribution, but with parameter $n \lambda$.
Suppose now that we model $\lambda$ with a random variable $\Lambda$ having a prior gamma distribution with shape parameter $k \in (0, \infty)$ and rate parameter $r \in (0, \infty)$. As usual $k$ and $r$ are chosen to reflect our prior knowledge of $\lambda$. Thus the prior probability density function of $\Lambda$ is $h(\lambda) = \frac{r^k}{\Gamma(k)} \lambda^{k-1} e^{-r \lambda}, \quad \lambda \in (0, \infty)$ and the mean is $k / r$. The scale parameter of the gamma distribution is $b = 1/r$, but the formulas will work out nicer if we use the rate parameter.
The posterior distribution of $\Lambda$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is gamma with shape parameter $k + Y_n$ and rate parameter $r + n$.
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in \N^n$ and $y = \sum_{i=1}^n x_i$. Then $f(\bs x \mid \lambda) = g(x_1 \mid \lambda) g(x_2 \mid \lambda) \cdots g(x_n \mid \lambda) = e^{-n \lambda} \frac{\lambda^y}{x_1! x_2! \cdots x_n!}$ Hence \begin{align} h(\lambda) f( \bs x \mid \lambda) & = \frac{r^k}{\Gamma(k)} \lambda^{k-1} e^{-r \lambda} e^{-n \lambda} \frac{\lambda^y}{x_1! x_2! \cdots x_n!} \\ & = \frac{r^k}{\Gamma(k) x_1! x_2! \cdots x_n!} e^{-(r + n) \lambda} \lambda^{k + y - 1}, \quad \lambda \in (0, \infty) \end{align} As a function of $\lambda \in (0, \infty)$ the last expression is proportional to the gamma PDF with shape parameter $k + y$ and rate parameter $r + n$. Note again that it's not necessary to compute the normalizing constant $f(\bs{x})$.
It follows that the gamma distribution is conjugate to the Poisson distribution. Note that the posterior rate parameter is deterministic and depends on the data only through the sample size $n$.
The Bayesian estimator of $\lambda$ based on $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is $V_n = \frac{k + Y_n}{r + n}$
Proof
By definition, the Bayes estimator is the mean of the posterior distribution. Recall that mean of the gamma distribution is the shape parameter divided by the rate parameter.
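Here is the corresponding update for the Poisson model (a Python/NumPy sketch with arbitrary prior parameters; not part of the text):

```python
import numpy as np

rng = np.random.default_rng(seed=13)
lam_true, n = 4.0, 60
x = rng.poisson(lam_true, size=n)
y = x.sum()

k, r = 2.0, 1.0                            # prior gamma parameters (shape k, rate r)
k_post, r_post = k + y, r + n              # posterior gamma parameters
v = k_post / r_post                        # Bayesian estimate of lambda
print(v, y / n)                            # compare with the maximum likelihood estimate, the sample mean
```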
Since $V_n$ is a linear function of $Y_n$, and we know the distribution of $Y_n$ given $\lambda \in (0, \infty)$, we can compute the bias and mean-square error functions.
For $n \in \N_+$, $\bias(V_n \mid \lambda) = \frac{k - r \lambda}{r + n}, \quad \lambda \in (0, \infty)$ The sequence of estimators $\bs V = (V_n: n \in \N_+)$ is asymptotically unbiased.
Proof
The computation is simple, since the distribution of $Y_n$ given $\lambda$ is Poisson with parameter $n \lambda$. $\bias(V_n \mid \lambda) = \E(V_n \mid \lambda) - \lambda = \frac{k + n \lambda}{r + n} - \lambda = \frac{k - r \lambda}{r + n}$ Clearly $\bias(V_n \mid \lambda) \to 0$ as $n \to \infty$.
Note that, as before, we cannot choose $k$ and $r$ to make $V_n$ unbiased, without knowledge of $\lambda$.
For $n \in \N_+$, $\mse(V_n \mid \lambda) = \frac{n \lambda + (k - r \lambda)^2}{(r + n)^2}, \quad \lambda \in (0, \infty)$ The sequence of estimators $\bs V = (V_n: n \in \N_+)$ is mean-square consistent.
Proof
Again, the computation is easy since the distribution of $Y_n$ given $\lambda$ is Poisson with parameter $n \lambda$. $\mse(V_n \mid \lambda) = \var(V_n \mid \lambda) + \bias^2(V_n \mid \lambda) = \frac{n \lambda}{(r + n)^2} + \left(\frac{k - r \lambda}{r + n}\right)^2$ Clearly $\mse(V_n \mid \lambda) \to 0$ as $n \to \infty$.
Recall that the method of moments estimator of $\lambda$ and the maximum likelihood estimator of $\lambda$ on the interval $(0, \infty)$ are both $M_n = Y_n / n$, the sample mean. This estimator is unbiased and has mean square error $\lambda / n$. To see the connection between the estimators, note from (21) that $V_n = \frac{r}{r + n} \frac{k}{r} + \frac{n}{r + n} M_n$ So $V_n$ is a weighted average of $k / r$ (the mean of the prior distribution) and $M_n$ (the maximum likelihood estimator).
The Normal Distribution
Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of independent random variables, each having the normal distribution with unknown mean $\mu \in \R$ but known variance $\sigma^2 \in (0, \infty)$. Of course, the normal distribution plays an especially important role in statistics, in part because of the central limit theorem. The normal distribution is widely used to model physical quantities subject to numerous small, random errors. In many statistical applications, the variance of the normal distribution is more stable than the mean, so the assumption that the variance is known is not entirely artificial. Recall that the normal probability density function (given $\mu$) is $g(x \mid \mu) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2 \right], \quad x \in \R$ Again, for $n \in \N_+$ let $Y_n = \sum_{i=1}^n X_i$. Recall that $Y_n$ also has a normal distribution (given $\mu$) but with mean $n \mu$ and variance $n \sigma^2$.
Suppose now that $\mu$ is modeled by a random variable $\Psi$ that has a prior normal distribution with mean $a \in \R$ and variance $b^2 \in (0, \infty)$. As usual, $a$ and $b$ are chosen to reflect our prior knowledge of $\mu$. An interesting special case is when we take $b = \sigma$, so the variance of the prior distribution of $\Psi$ is the same as the variance of the underlying sampling distribution.
For $n \in \N_+$, the posterior distribution of $\Psi$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is normal with mean and variance given by \begin{align} \E(\Psi \mid \bs{X}_n) & = \frac{Y_n b^2 + a \sigma^2}{n b^2 + \sigma^2} \\ \var(\Psi \mid \bs{X}_n) & = \frac{\sigma^2 b^2}{n b^2 + \sigma^2} \end{align}
Proof
Fix $n \in \N_+$. Suppose $\bs x = (x_1, x_2, \ldots, x_n) \in \R^n$ and let $y = \sum_{i=1}^n x_i$ and $w^2 = \sum_{i=1}^n x_i^2$. Then \begin{align} f(\bs x \mid \mu) & = g(x_1 \mid \mu) g(x_2 \mid \mu) \cdots g(x_n \mid \mu) = \frac{1}{(2 \pi)^{n/2} \sigma^n} \exp\left[-\frac{1}{2} \sum_{i=1}^n \left(\frac{x_i - \mu}{\sigma}\right)^2\right] \\ & = \frac{1}{(2 \pi)^{n/2} \sigma^n} \exp\left[-\frac{1}{2 \sigma^2}(w^2 - 2 \mu y + n \mu^2)\right] \end{align} On the other hand, of course $h(\mu) = \frac{1}{\sqrt{2 \pi} b} \exp\left[-\frac{1}{2}\left(\frac{\mu - a}{b}\right)^2\right] = \frac{1}{\sqrt{2 \pi}b} \exp\left[-\frac{1}{2 b^2}(\mu^2 - 2 a \mu + a^2)\right]$ Therefore, $h(\mu) f(\bs x \mid \mu) = C \exp\left\{-\frac{1}{2}\left[\left(\frac{1}{b^2} + \frac{n}{\sigma^2}\right) \mu^2 - 2 \left(\frac{a}{b^2} + \frac{y}{\sigma^2}\right) \mu\right]\right\}$ where $C$ depends on $n$, $\sigma$, $a$, $b$, $\bs x$, but importantly not on $\mu$. So we don't really care what $C$ is. Completing the square in $\mu$ in the expression above gives $h(\mu) f(\bs x \mid \mu) = K \exp\left[-\frac{1}{2}\left(\frac{1}{b^2} + \frac{n}{\sigma^2}\right) \left(\mu - \frac{a / b^2 + y / \sigma^2}{1 / b^2 + n / \sigma^2}\right)^2\right]$ where $K$ is yet another factor that depends on lots of stuff, but not $\mu$. As a function of $\mu$, this expression is proportional to the normal distribution with mean and variance, respectively, given by \begin{align} &\frac{a / b^2 + y / \sigma^2}{1 / b^2 + n / \sigma^2} = \frac{y b^2 + a \sigma^2}{n b^2 + \sigma^2} \\ & \frac{1}{1 / b^2 + n / \sigma^2} = \frac{\sigma^2 b^2}{\sigma^2 + n b^2} \end{align} Once again, it was not necessary to compute the normalizing constant $f(\bs{x})$, which would have been yet another factor that we do not care about.
Therefore, the normal distribution is conjugate to the normal distribution with unknown mean and known variance. Note that the posterior variance is deterministic, and depends on the data only through the sample size $n$. In the special case that $b = \sigma$, the posterior distribution of $\Psi$ given $\bs{X}_n$ is normal with mean $(Y_n + a) / (n + 1)$ and variance $\sigma^2 / (n + 1)$.
The Bayesian estimator of $\mu$ is $U_n = \frac{Y_n b^2 + a \sigma^2}{n b^2 + \sigma^2}$
Proof
This follows immediately from the previous result.
Note that $U_n = (Y_n + a) / (n + 1)$ in the special case that $b = \sigma$.
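A sketch of the normal-normal update (Python with NumPy; the prior parameters are arbitrary illustrative choices, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(seed=14)
mu_true, sigma, n = 2.0, 1.5, 50
x = rng.normal(mu_true, sigma, size=n)
y = x.sum()

a, b = 0.0, 2.0                            # prior normal parameters: mean a, standard deviation b
post_mean = (y * b**2 + a * sigma**2) / (n * b**2 + sigma**2)
post_var = sigma**2 * b**2 / (n * b**2 + sigma**2)
print(post_mean, post_var, y / n)          # Bayesian estimate, posterior variance, sample mean
```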
For $n \in \N_+$, $\bias(U_n \mid \mu) = \frac{\sigma^2 (a - \mu)}{\sigma^2 + n \, b^2}, \quad \mu \in \R$ The sequence of estimators $\bs U = (U_n: n \in \N_+)$ is asymptotically unbiased.
Proof
Recall that $Y_n$ has mean $n \mu$ given $\mu$. Hence $\bias(U_n \mid \mu) = \E(U_n \mid \mu) - \mu = \frac{n b^2 \mu + a \sigma^2}{n b^2 + \sigma^2} - \mu = \frac{(a - \mu) \sigma^2}{n b^2 + \sigma^2}$ Clearly $\bias(U_n \mid \mu) \to 0$ as $n \to \infty$ for every $\mu \in \R$.
When $b = \sigma$, $\bias(U_n \mid \mu) = (a - \mu) / (n + 1)$.
For $n \in \N_+$, $\mse(U_n \mid \mu) = \frac{n \sigma^2 b^4 + \sigma^4 (a - \mu)^2}{(\sigma^2 + n \, b^2)^2}, \quad \mu \in \R$ The sequence of estimators $\bs U = (U_n: n \in \N_+)$ is mean-square consistent.
Proof
Recall that $Y_n$ has variance $n \sigma^2$. Hence $\mse(U_n \mid \mu) = \var(U_n \mid \mu) + \bias^2(U_n \mid \mu) = \left(\frac{b^2}{n b^2 + \sigma^2}\right)^2 n \sigma^2 + \left(\frac{(a - \mu) \sigma^2}{n b^2 + \sigma^2}\right)^2$ Clearly $\mse(U_n \mid \mu) \to 0$ as $n \to \infty$ for every $\mu \in \R$.
When $b = \sigma$, $\mse(U \mid \mu) = [n \sigma^2 + (a - \mu)^2] / (n + 1)^2$. Recall that the method of moments estimator of $\mu$ and the maximum likelihood estimator of $\mu$ on $\R$ are both $M_n = Y_n / n$, the sample mean. This estimator is unbiased and has mean square error $\var(M) = \sigma^2 / n$. To see the connection between the estimators, note from (25) that $U_n = \frac{\sigma^2}{n b^2 + \sigma^2} a + \frac{n b^2}{n b^2 + \sigma^2} M_n$ So $U_n$ is a weighted average of $a$ (the mean of the prior distribution) and $M_n$ (the maximum likelihood estimator).
The Beta Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent random variables each having the beta distribution with unknown left shape parameter $a \in (0, \infty)$ and right shape parameter $b = 1$. The beta distribution is widely used to model random proportions and probabilities and other variables that take values in bounded intervals (scaled to take values in $(0, 1)$). Recall that the probability density function (given $a$) is $g(x \mid a) = a \, x^{a-1}, \quad x \in (0, 1)$ Suppose now that $a$ is modeled by a random variable $A$ that has a prior gamma distribution with shape parameter $k \in (0, \infty)$ and rate parameter $r \in (0, \infty)$. As usual, $k$ and $r$ are chosen to reflect our prior knowledge of $a$. Thus the prior probability density function of $A$ is $h(a) = \frac{r^k}{\Gamma(k)} a^{k-1} e^{-r a}, \quad a \in (0, \infty)$ The mean of the prior distribution is $k / r$.
The posterior distribution of $A$ given $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is gamma, with shape parameter $k + n$ and rate parameter $r - \ln(X_1 X_2 \cdots X_n)$.
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in (0, 1)^n$ and let $z = x_1 x_2 \cdots x_n$. Then $f(\bs x \mid a) = g(x_1 \mid a) g(x_2 \mid a) \cdots g(x_n \mid a) = a^n z^{a - 1} = \frac{a^n}{z} e^{a \ln z}$ Hence $h(a) f(\bs x \mid a) = \frac{r^k}{z \Gamma(k)} a^{n + k - 1} e^{-a (r - \ln z)}, \quad a \in (0, \infty)$ As a function of $a \in (0, \infty)$ this expression is proportional to the gamma PDF with shape parameter $n + k$ and rate parameter $r - \ln z$. Once again, it's not necessary to compute the normalizing constant $f(\bs x)$.
Thus, the gamma distribution is conjugate to the beta distribution with unknown left parameter and right parameter 1. Note that the posterior shape parameter is deterministic and depends on the data only through the sample size $n$.
The Bayesian estimator of $a$ based on $\bs{X}_n$ is $U_n = \frac{k + n}{r - \ln(X_1 X_2 \cdots X_n)}$
Proof
The mean of the gamma distribution is the shape parameter divided by the rate parameter, so this follows from the previous theorem.
Given the complicated structure, the bias and mean square error of $U_n$ given $a \in (0, \infty)$ would be difficult to compute explicitly. Recall that the maximum likelihood estimator of $a$ is $W_n = -n / \ln(X_1 \, X_2 \cdots X_n)$. To see the connection between the estimators, note from (29) that $\frac{1}{U_n} = \frac{k}{k + n} \frac{r}{k} + \frac{n}{k + n} \frac{1}{W_n}$ So $1 / U_n$ (the reciprocal of the Bayesian estimator) is a weighted average of $r / k$ (the reciprocal of the mean of the prior distribution) and $1 / W_n$ (the reciprocal of the maximum likelihood estimator).
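The following short Python sketch (with arbitrarily chosen values of $a$, $k$, $r$, and $n$) computes $U_n$ and $W_n$ from a simulated sample and verifies the reciprocal weighted-average identity above numerically.

```python
import numpy as np

# Illustrative values (arbitrary choices, for the sketch only)
a_true = 2.5                       # left shape parameter of the beta(a, 1) sampling distribution
k, r = 3.0, 2.0                    # gamma prior on a: shape k, rate r
n = 100

rng = np.random.default_rng(1)
x = rng.beta(a_true, 1.0, size=n)

log_prod = np.log(x).sum()         # ln(X_1 X_2 ... X_n)
u = (k + n) / (r - log_prod)       # Bayesian estimator U_n
w_mle = -n / log_prod              # maximum likelihood estimator W_n

# Reciprocal of U_n as a weighted average of r / k and 1 / W_n
lhs = 1 / u
rhs = (k / (k + n)) * (r / k) + (n / (k + n)) * (1 / w_mle)
print(u, w_mle, lhs, rhs)          # lhs and rhs agree
```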
The Pareto Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent random variables each having the Pareto distribution with unknown shape parameter $a \in (0, \infty)$ and scale parameter $b = 1$. The Pareto distribution is used to model certain financial variables and other variables with heavy-tailed distributions, and is named for Vilfredo Pareto. Recall that the probability density function (given $a$) is $g(x \mid a) = \frac{a}{x^{a+1}}, \quad x \in [1, \infty)$ Suppose now that $a$ is modeled by a random variable $A$ that has a prior gamma distribution with shape parameter $k \in (0, \infty)$ and rate parameter $r \in (0, \infty)$. As usual, $k$ and $r$ are chosen to reflect our prior knowledge of $a$. Thus the prior probability density function of $A$ is $h(a) = \frac{r^k}{\Gamma(k)} a^{k-1} e^{-r a}, \quad a \in (0, \infty)$
For $n \in \N_+$, the posterior distribution of $A$ given $\bs{X}_n = (X_1, X_2, \dots, X_n)$ is gamma, with shape parameter $k + n$ and rate parameter $r + \ln(X_1 X_2 \cdots X_n)$.
Proof
Fix $n \in \N_+$. Let $\bs x = (x_1, x_2, \ldots, x_n) \in [1, \infty)^n$ and let $z = x_1 x_2 \cdots x_n$. Then $f(\bs x \mid a) = g(x_1 \mid a) g(x_2 \mid a) \cdots g(x_n \mid a) = \frac{a^n}{z^{a + 1}} = \frac{a^n}{z} e^{- a \ln z}$ Hence $h(a) f(\bs x \mid a) = \frac{r^k}{z \Gamma(k)} a^{n + k - 1} e^{-a (r + \ln z)}, \quad a \in (0, \infty)$ As a function of $a \in (0, \infty)$ this expression is proportional to the gamma PDF with shape parameter $n + k$ and rate parameter $r + \ln z$. Once again, it's not necessary to compute the normalizing constant $f(\bs x)$.
Thus, the gamma distribution is conjugate to the Pareto distribution with unknown shape parameter. Note that the posterior shape parameter is deterministic and depends on the data only through the sample size $n$.
The Bayesian estimator of $a$ based on $\bs{X}_n$ is $U_n = \frac{k + n}{r + \ln(X_1 X_2 \cdots X_n)}$
Proof
Once again, the mean of the gamma distribution is the shape parameter divided by the rate parameter, so this follows from the previous theorem.
Given the complicated structure, the bias and mean square error of $U_n$ given $a \in (0, \infty)$ would be difficult to compute explicitly. Recall that the maximum likelihood estimator of $a$ is $W_n = n / \ln(X_1 \, X_2 \cdots X_n)$. To see the connection between the estimators, note from (31) that $\frac{1}{U_n} = \frac{k}{k + n} \frac{r}{k} + \frac{n}{k + n} \frac{1}{W_n}$ So $1 / U_n$ (the reciprocal of the Bayesian estimator) is a weighted average of $r / k$ (the reciprocal of the mean of the prior distribution) and $1 / W_n$ (the reciprocal of the maximum likelihood estimator).
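The same numerical check can be made in the Pareto case; the sketch below (again with arbitrary illustrative values) mirrors the one for the beta distribution.

```python
import numpy as np

# Illustrative values (arbitrary choices, for the sketch only)
a_true = 3.0                       # shape parameter of the Pareto distribution with scale 1
k, r = 2.0, 1.0                    # gamma prior on a: shape k, rate r
n = 200

rng = np.random.default_rng(2)
x = (1 - rng.random(n)) ** (-1 / a_true)   # inverse-CDF sample from the Pareto(a, 1) distribution

log_prod = np.log(x).sum()         # ln(X_1 X_2 ... X_n), nonnegative since each X_i >= 1
u = (k + n) / (r + log_prod)       # Bayesian estimator U_n
w_mle = n / log_prod               # maximum likelihood estimator W_n

print(1 / u, (k / (k + n)) * (r / k) + (n / (k + n)) * (1 / w_mle))   # the two sides agree
```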
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\MSE}{\text{MSE}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Consider again the basic statistical model, in which we have a random experiment that results in an observable random variable $\bs{X}$ taking values in a set $S$. Once again, the experiment is typically to sample $n$ objects from a population and record one or more measurements for each item. In this case, the observable random variable has the form $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th item.
Suppose that $\theta$ is a real parameter of the distribution of $\bs{X}$, taking values in a parameter space $\Theta$. Let $f_\theta$ denote the probability density function of $\bs{X}$ for $\theta \in \Theta$. Note that the expected value, variance, and covariance operators also depend on $\theta$, although we will sometimes suppress this to keep the notation from becoming too unwieldy.
Definitions
Suppose now that $\lambda = \lambda(\theta)$ is a parameter of interest that is derived from $\theta$. (Of course, $\lambda$ might be $\theta$ itself, but more generally might be a function of $\theta$.) In this section we will consider the general problem of finding the best estimator of $\lambda$ among a given class of unbiased estimators. Recall that if $U$ is an unbiased estimator of $\lambda$, then $\var_\theta(U)$ is the mean square error. Mean square error is our measure of the quality of unbiased estimators, so the following definitions are natural.
Suppose that $U$ and $V$ are unbiased estimators of $\lambda$.
1. If $\var_\theta(U) \le \var_\theta(V)$ for all $\theta \in \Theta$ then $U$ is a uniformly better estimator than $V$.
2. If $U$ is uniformly better than every other unbiased estimator of $\lambda$, then $U$ is a Uniformly Minimum Variance Unbiased Estimator (UMVUE) of $\lambda$.
Given unbiased estimators $U$ and $V$ of $\lambda$, it may be the case that $U$ has smaller variance for some values of $\theta$ while $V$ has smaller variance for other values of $\theta$, so that neither estimator is uniformly better than the other. Of course, a minimum variance unbiased estimator is the best we can hope for.
The Cramér-Rao Lower Bound
We will show that under mild conditions, there is a lower bound on the variance of any unbiased estimator of the parameter $\lambda$. Thus, if we can find an estimator that achieves this lower bound for all $\theta$, then the estimator must be an UMVUE of $\lambda$. The derivative of the log likelihood function, sometimes called the score, will play a critical role in our analysis. A lesser, but still important, role is played by the negative of the second derivative of the log-likelihood function. Life will be much easier if we give these functions names.
For $\bs{x} \in S$ and $\theta \in \Theta$, define \begin{align} L_1(\bs{x}, \theta) & = \frac{d}{d \theta} \ln\left(f_\theta(\bs{x})\right) \\ L_2(\bs{x}, \theta) & = -\frac{d}{d \theta} L_1(\bs{x}, \theta) = -\frac{d^2}{d \theta^2} \ln\left(f_\theta(\bs{x})\right) \end{align}
In the rest of this subsection, we consider statistics $h(\bs{X})$ where $h: S \to \R$ (and so in particular, $h$ does not depend on $\theta$). We need a fundamental assumption:
We will consider only statistics $h(\bs{X})$ with $\E_\theta\left(h^2(\bs{X})\right) \lt \infty$ for $\theta \in \Theta$. We also assume that $\frac{d}{d \theta} \E_\theta\left(h(\bs{X})\right) = \E_\theta\left(h(\bs{X}) L_1(\bs{X}, \theta)\right)$ This is equivalent to the assumption that the derivative operator $d / d\theta$ can be interchanged with the expected value operator $\E_\theta$.
Proof
Note first that $\frac{d}{d \theta} \E_\theta\left(h(\bs{X})\right)= \frac{d}{d \theta} \int_S h(\bs{x}) f_\theta(\bs{x}) \, d \bs{x}$. On the other hand, \begin{align} \E_\theta\left(h(\bs{X}) L_1(\bs{X}, \theta)\right) & = \E_\theta\left(h(\bs{X}) \frac{d}{d \theta} \ln\left(f_\theta(\bs{X})\right) \right) = \int_S h(\bs{x}) \frac{d}{d \theta} \ln\left(f_\theta(\bs{x})\right) f_\theta(\bs{x}) \, d \bs{x} \\ & = \int_S h(\bs{x}) \frac{\frac{d}{d \theta} f_\theta(\bs{x})}{f_\theta(\bs{x})} f_\theta(\bs{x}) \, d \bs{x} = \int_S h(\bs{x}) \frac{d}{d \theta} f_\theta(\bs{x}) \, d \bs{x} = \int_S \frac{d}{d \theta} \left[h(\bs{x}) f_\theta(\bs{x})\right] \, d \bs{x} \end{align} Thus the two expressions are the same if and only if we can interchange the derivative and integral operators.
Generally speaking, the fundamental assumption will be satisfied if $f_\theta(\bs{x})$ is differentiable as a function of $\theta$, with a derivative that is jointly continuous in $\bs{x}$ and $\theta$, and if the support set $\left\{\bs{x} \in S: f_\theta(\bs{x}) \gt 0 \right\}$ does not depend on $\theta$.
$\E_\theta\left(L_1(\bs{X}, \theta)\right) = 0$ for $\theta \in \Theta$.
Proof
This follows from the fundamental assumption by letting $h(\bs{x}) = 1$ for $\bs{x} \in S$.
If $h(\bs{X})$ is a statistic then
$\cov_\theta\left(h(\bs{X}), L_1(\bs{X}, \theta)\right) = \frac{d}{d \theta} \E_\theta\left(h(\bs{X})\right)$
Proof
First note that the covariance is simply the expected value of the product of the variables, since the second variable has mean 0 by the previous theorem. The result then follows from the basic condition.
$\var_\theta\left(L_1(\bs{X}, \theta)\right) = \E_\theta\left(L_1^2(\bs{X}, \theta)\right)$
Proof
This follows since $L_1(\bs{X}, \theta)$ has mean 0 by the theorem above.
The following theorem gives the general Cramér-Rao lower bound on the variance of a statistic. The lower bound is named for Harald Cramér and C. R. Rao:
If $h(\bs{X})$ is a statistic then $\var_\theta\left(h(\bs{X})\right) \ge \frac{\left(\frac{d}{d \theta} \E_\theta\left(h(\bs{X})\right) \right)^2}{\E_\theta\left(L_1^2(\bs{X}, \theta)\right)}$
Proof
From the Cauchy-Schwarz (correlation) inequality, $\cov_\theta^2\left(h(\bs{X}), L_1(\bs{X}, \theta)\right) \le \var_\theta\left(h(\bs{X})\right) \var_\theta\left(L_1(\bs{X}, \theta)\right)$ The result now follows from the previous two theorems.
We can now give the first version of the Cramér-Rao lower bound for unbiased estimators of a parameter.
Suppose now that $\lambda(\theta)$ is a parameter of interest and $h(\bs{X})$ is an unbiased estimator of $\lambda$. Then $\var_\theta\left(h(\bs{X})\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{\E_\theta\left(L_1^2(\bs{X}, \theta)\right)}$
Proof
This follows immediately from the Cramér-Rao lower bound, since $\E_\theta\left(h(\bs{X})\right) = \lambda$ for $\theta \in \Theta$.
An estimator of $\lambda$ that achieves the Cramér-Rao lower bound must be a uniformly minimum variance unbiased estimator (UMVUE) of $\lambda$.
Equality holds in the previous theorem, and hence $h(\bs{X})$ is an UMVUE, if and only if there exists a function $u(\theta)$ such that (with probability 1) $h(\bs{X}) = \lambda(\theta) + u(\theta) L_1(\bs{X}, \theta)$
Proof
Equality holds in the Cauchy-Schwarz inequality if and only if the random variables are linear transformations of each other. Recall also that $L_1(\bs{X}, \theta)$ has mean 0.
The quantity $\E_\theta\left(L_1^2(\bs{X}, \theta)\right)$ that occurs in the denominator of the lower bounds in the previous two theorems is called the Fisher information number of $\bs{X}$, named after Sir Ronald Fisher. The following theorem gives an alternate version of the Fisher information number that is usually computationally better.
If the appropriate derivatives exist and if the appropriate interchanges are permissible then $\E_\theta\left(L_1^2(\bs{X}, \theta)\right) = \E_\theta\left(L_2(\bs{X}, \theta)\right)$
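As a concrete check of this identity (an added illustration, assuming a single Bernoulli observation with success parameter $p$), the following Python snippet computes both expectations exactly; each equals $1 / [p(1 - p)]$.

```python
import numpy as np

def fisher_both_ways(p):
    """Exact Fisher information for a single Bernoulli(p) observation, computed
    both as E[L_1^2] and as E[L_2]; each equals 1 / (p (1 - p))."""
    xs = np.array([0.0, 1.0])
    probs = np.array([1 - p, p])
    L1 = xs / p - (1 - xs) / (1 - p)           # score: d/dp of log f_p(x)
    L2 = xs / p**2 + (1 - xs) / (1 - p)**2     # negative second derivative
    return np.sum(probs * L1**2), np.sum(probs * L2)

for p in (0.2, 0.5, 0.8):
    i1, i2 = fisher_both_ways(p)
    print(p, i1, i2, 1 / (p * (1 - p)))        # all three values agree
```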
The following theorem gives the second version of the Cramér-Rao lower bound for unbiased estimators of a parameter.
If $\lambda(\theta)$ is a parameter of interest and $h(\bs{X})$ is an unbiased estimator of $\lambda$ then
$\var_\theta\left(h(\bs{X})\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{\E_\theta\left(L_2(\bs{X}, \theta)\right)}$
Proof
This follows from the results above.
Random Samples
Suppose now that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of a random variable $X$ having probability density function $g_\theta$ and taking values in a set $R$. Thus $S = R^n$. We will use lower-case letters for the derivative of the log likelihood function of $X$ and the negative of the second derivative of the log likelihood function of $X$.
For $x \in R$ and $\theta \in \Theta$ define \begin{align} l(x, \theta) & = \frac{d}{d\theta} \ln\left(g_\theta(x)\right) \\ l_2(x, \theta) & = -\frac{d^2}{d\theta^2} \ln\left(g_\theta(x)\right) \end{align}
$L_1^2$ can be written in terms of $l^2$ and $L_2$ can be written in terms of $l_2$:
1. $\E_\theta\left(L_1^2(\bs{X}, \theta)\right) = n \E_\theta\left(l^2(X, \theta)\right)$
2. $\E_\theta\left(L_2(\bs{X}, \theta)\right) = n \E_\theta\left(l_2(X, \theta)\right)$
The following theorem gives the second version of the general Cramér-Rao lower bound on the variance of a statistic, specialized for random samples.
If $h(\bs{X})$ is a statistic then
$\var_\theta\left(h(\bs{X})\right) \ge \frac{\left(\frac{d}{d\theta} \E_\theta\left(h(\bs{X})\right) \right)^2}{n \E_\theta\left(l^2(X, \theta)\right)}$
The following theorem gives the third version of the Cramér-Rao lower bound for unbiased estimators of a parameter, specialized for random samples.
Suppose now that $\lambda(\theta)$ is a parameter of interest and $h(\bs{X})$ is an unbiased estimator of $\lambda$. Then $\var_\theta\left(h(\bs{X})\right) \ge \frac{(d\lambda / d\theta)^2}{n \E_\theta\left(l^2(X, \theta)\right)}$
Note that the Cramér-Rao lower bound varies inversely with the sample size $n$. The following theorem gives the fourth version of the Cramér-Rao lower bound for unbiased estimators of a parameter, again specialized for random samples.
If the appropriate derivatives exist and the appropriate interchanges are permissible, then $\var_\theta\left(h(\bs{X})\right) \ge \frac{\left(d\lambda / d\theta\right)^2}{n \E_\theta\left(l_2(X, \theta)\right)}$
To summarize, we have four versions of the Cramér-Rao lower bound for the variance of an unbiased estimator of $\lambda$: version 1 and version 2 in the general case, and version 1 and version 2 in the special case that $\bs{X}$ is a random sample from the distribution of $X$. If an unbiased estimator of $\lambda$ achieves the lower bound, then the estimator is an UMVUE.
Examples and Special Cases
We will apply the results above to several parametric families of distributions. First we need to recall some standard notation. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of a real-valued random variable $X$ with mean $\mu$ and variance $\sigma^2$. The sample mean is $M = \frac{1}{n} \sum_{i=1}^n X_i$ Recall that $\E(M) = \mu$ and $\var(M) = \sigma^2 / n$. The special version of the sample variance, when $\mu$ is known, and the standard version of the sample variance are, respectively, \begin{align} W^2 & = \frac{1}{n} \sum_{i=1}^n (X_i - \mu)^2 \\ S^2 & = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2 \end{align}
The Bernoulli Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with unknown success parameter $p \in (0, 1)$. In the usual language of reliability, $X_i = 1$ means success on trial $i$ and $X_i = 0$ means failure on trial $i$; the distribution is named for Jacob Bernoulli. Recall that the Bernoulli distribution has probability density function $g_p(x) = p^x (1 - p)^{1-x}, \quad x \in \{0, 1\}$ The basic assumption is satisfied. Moreover, recall that the mean of the Bernoulli distribution is $p$, while the variance is $p (1 - p)$.
$p (1 - p) / n$ is the Cramér-Rao lower bound for the variance of unbiased estimators of $p$.
The sample mean $M$ (which is the proportion of successes) attains the lower bound in the previous exercise and hence is an UMVUE of $p$.
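Here is a brief Monte Carlo sketch (the values of $p$, $n$, and the number of replications are arbitrary illustrative choices) suggesting that the variance of the sample proportion matches the lower bound $p(1 - p)/n$.

```python
import numpy as np

# Illustrative values; the empirical variance of the sample proportion M should be
# close to the Cramer-Rao lower bound p (1 - p) / n.
rng = np.random.default_rng(3)
p, n, reps = 0.3, 25, 200_000

samples = rng.binomial(1, p, size=(reps, n))
m = samples.mean(axis=1)                       # sample proportion in each replication
print(m.var(), p * (1 - p) / n)                # empirical variance vs. lower bound
```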
The Poisson Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Poisson distribution with parameter $\theta \in (0, \infty)$. Recall that this distribution is often used to model the number of random points in a region of time or space and is studied in more detail in the chapter on the Poisson Process. The Poisson distribution is named for Simeon Poisson and has probability density function $g_\theta(x) = e^{-\theta} \frac{\theta^x}{x!}, \quad x \in \N$ The basic assumption is satisfied. Recall also that the mean and variance of the distribution are both $\theta$.
$\theta / n$ is the Cramér-Rao lower bound for the variance of unbiased estimators of $\theta$.
The sample mean $M$ attains the lower bound in the previous exercise and hence is an UMVUE of $\theta$.
The Normal Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$. Recall that the normal distribution plays an especially important role in statistics, in part because of the central limit theorem. The normal distribution is widely used to model physical quantities subject to numerous small, random errors, and has probability density function $g_{\mu,\sigma^2}(x) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2 \right], \quad x \in \R$
The basic assumption is satisfied with respect to both of these parameters. Recall also that the fourth central moment is $\E\left((X - \mu)^4\right) = 3 \, \sigma^4$.
$\sigma^2 / n$ is the Cramér-Rao lower bound for the variance of unbiased estimators of $\mu$.
The sample mean $M$ attains the lower bound in the previous exercise and hence is an UMVUE of $\mu$.
$\frac{2 \sigma^4}{n}$ is the Cramér-Rao lower bound for the variance of unbiased estimators of $\sigma^2$.
The sample variance $S^2$ has variance $\frac{2 \sigma^4}{n-1}$ and hence does not attain the lower bound in the previous exercise.
If $\mu$ is known, then the special sample variance $W^2$ attains the lower bound above and hence is an UMVUE of $\sigma^2$.
If $\mu$ is unknown, no unbiased estimator of $\sigma^2$ attains the Cramér-Rao lower bound above.
Proof
This follows from the result above on equality in the Cramér-Rao inequality.
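A short simulation (with arbitrary illustrative values) makes the comparison concrete: the variance of $W^2$ is close to the bound $2 \sigma^4 / n$, while the variance of $S^2$ is close to the larger value $2 \sigma^4 / (n - 1)$.

```python
import numpy as np

# Illustrative values; W^2 uses the known mean, S^2 is the standard sample variance.
rng = np.random.default_rng(4)
mu, sigma, n, reps = 1.0, 2.0, 10, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
w2 = ((x - mu) ** 2).mean(axis=1)              # special sample variance (mu known)
s2 = x.var(axis=1, ddof=1)                     # standard sample variance

print(w2.var(), 2 * sigma**4 / n)              # W^2: variance near the Cramer-Rao bound
print(s2.var(), 2 * sigma**4 / (n - 1))        # S^2: variance strictly above the bound
```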
The Gamma Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the gamma distribution with known shape parameter $k \gt 0$ and unknown scale parameter $b \gt 0$. The gamma distribution is often used to model random times and certain other types of positive random variables, and is studied in more detail in the chapter on Special Distributions. The probability density function is $g_b(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x/b}, \quad x \in (0, \infty)$ The basic assumption is satisfied with respect to $b$. Moreover, the mean and variance of the gamma distribution are $k b$ and $k b^2$, respectively.
$\frac{b^2}{n k}$ is the Cramér-Rao lower bound for the variance of unbiased estimators of $b$.
$\frac{M}{k}$ attains the lower bound in the previous exercise and hence is an UMVUE of $b$.
The Beta Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the beta distribution with left parameter $a \gt 0$ and right parameter $b = 1$. Beta distributions are widely used to model random proportions and other random variables that take values in bounded intervals, and are studied in more detail in the chapter on Special Distributions. In our specialized case, the probability density function of the sampling distribution is $g_a(x) = a \, x^{a-1}, \quad x \in (0, 1)$
The basic assumption is satisfied with respect to $a$.
The mean and variance of the distribution are
1. $\mu = \frac{a}{a+1}$
2. $\sigma^2 = \frac{a}{(a + 1)^2 (a + 2)}$
The Cramér-Rao lower bound for the variance of unbiased estimators of $\mu$ is $\frac{a^2}{n \, (a + 1)^4}$.
The sample mean $M$ does not achieve the Cramér-Rao lower bound in the previous exercise, and hence is not an UMVUE of $\mu$.
The Uniform Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the uniform distribution on $[0, a]$ where $a \gt 0$ is the unknown parameter. Thus, the probability density function of the sampling distribution is $g_a(x) = \frac{1}{a}, \quad x \in [0, a]$
The basic assumption is not satisfied.
The Cramér-Rao lower bound for the variance of unbiased estimators of $a$ is $\frac{a^2}{n}$. Of course, the Cramér-Rao Theorem does not apply, by the previous exercise.
Recall that $V = \frac{n+1}{n} \max\{X_1, X_2, \ldots, X_n\}$ is unbiased and has variance $\frac{a^2}{n (n + 2)}$. This variance is smaller than the Cramér-Rao bound in the previous exercise.
The reason that the basic assumption is not satisfied is that the support set $\left\{x \in \R: g_a(x) \gt 0\right\}$ depends on the parameter $a$.
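A quick simulation sketch (with arbitrary illustrative values) confirms the point: the empirical variance of $V$ is near $a^2 / [n(n + 2)]$, well below the formal value $a^2 / n$.

```python
import numpy as np

# Illustrative values; V = (n + 1) / n * max is unbiased with variance a^2 / (n (n + 2)).
rng = np.random.default_rng(5)
a, n, reps = 4.0, 10, 200_000

x = rng.uniform(0, a, size=(reps, n))
v = (n + 1) / n * x.max(axis=1)

print(v.mean(), a)                                       # approximately unbiased
print(v.var(), a**2 / (n * (n + 2)), a**2 / n)           # variance below the formal bound
```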
Best Linear Unbiased Estimators
We now consider a somewhat specialized problem, but one that fits the general theme of this section. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of observable real-valued random variables that are uncorrelated and have the same unknown mean $\mu \in \R$, but possibly different standard deviations. Let $\bs{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n)$ where $\sigma_i = \sd(X_i)$ for $i \in \{1, 2, \ldots, n\}$.
We will consider estimators of $\mu$ that are linear functions of the outcome variables. Specifically, we will consider estimators of the following form, where the vector of coefficients $\bs{c} = (c_1, c_2, \ldots, c_n)$ is to be determined: $Y = \sum_{i=1}^n c_i X_i$
$Y$ is unbiased if and only if $\sum_{i=1}^n c_i = 1$.
The variance of $Y$ is $\var(Y) = \sum_{i=1}^n c_i^2 \sigma_i^2$
The variance is minimized, subject to the unbiased constraint, when $c_j = \frac{1 / \sigma_j^2}{\sum_{i=1}^n 1 / \sigma_i^2}, \quad j \in \{1, 2, \ldots, n\}$
Proof
Use the method of Lagrange multipliers (named after Joseph-Louis Lagrange): minimizing $\sum_{i=1}^n c_i^2 \sigma_i^2$ subject to the constraint $\sum_{i=1}^n c_i = 1$ leads to the condition that $2 c_j \sigma_j^2$ is the same for every $j$, so $c_j$ is proportional to $1 / \sigma_j^2$. Normalizing the coefficients to sum to 1 gives the result.
This exercise shows how to construct the Best Linear Unbiased Estimator (BLUE) of $\mu$, assuming that the vector of standard deviations $\bs{\sigma}$ is known.
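The construction is easy to carry out numerically. The following Python sketch (with an arbitrary choice of $\mu$ and of the standard deviations) computes the BLUE weights and compares the resulting variance with the variance obtained from equal weights.

```python
import numpy as np

# Illustrative common mean and standard deviations (arbitrary choices)
rng = np.random.default_rng(6)
mu = 5.0
sigmas = np.array([1.0, 2.0, 0.5, 3.0])
n = len(sigmas)
reps = 100_000

c_blue = (1 / sigmas**2) / np.sum(1 / sigmas**2)    # BLUE coefficients, proportional to 1 / sigma_i^2
c_equal = np.full(n, 1 / n)                         # equal weights, for comparison

x = mu + sigmas * rng.standard_normal((reps, n))    # uncorrelated observations with common mean mu
print((x @ c_blue).var(), np.sum(c_blue**2 * sigmas**2))     # empirical vs. theoretical variance (BLUE)
print((x @ c_equal).var(), np.sum(c_equal**2 * sigmas**2))   # larger variance with equal weights
```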
Suppose now that $\sigma_i = \sigma$ for $i \in \{1, 2, \ldots, n\}$ so that the outcome variables have the same standard deviation. In particular, this would be the case if the outcome variables form a random sample of size $n$ from a distribution with mean $\mu$ and standard deviation $\sigma$.
In this case the variance is minimized when $c_i = 1 / n$ for each $i$ and hence $Y = M$, the sample mean.
This exercise shows that the sample mean $M$ is the best linear unbiased estimator of $\mu$ when the standard deviations are the same, and that moreover, we do not need to know the value of the standard deviation.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\bias}{\text{bias}}$ $\newcommand{\MSE}{\text{MSE}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Basic Statistical Model
Consider again the basic statistical model, in which we have a random experiment with an observable random variable $\bs X$ taking values in a set $S$. Once again, the experiment is typically to sample $n$ objects from a population and record one or more measurements for each item. In this case, the outcome variable has the form $\bs X = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th item. In general, we suppose that the distribution of $\bs X$ depends on a parameter $\theta$ taking values in a parameter space $T$. The parameter $\theta$ may also be vector-valued. We will sometimes use subscripts in probability density functions, expected values, etc. to denote the dependence on $\theta$.
As usual, the most important special case is when $\bs X$ is a sequence of independent, identically distributed random variables. In this case $\bs X$ is a random sample from the common distribution.
Sufficient Statistics
Let $U = u(\bs X)$ be a statistic taking values in a set $R$. Intuitively, $U$ is sufficient for $\theta$ if $U$ contains all of the information about $\theta$ that is available in the entire data variable $\bs X$. Here is the formal definition:
A statistic $U$ is sufficient for $\theta$ if the conditional distribution of $\bs X$ given $U$ does not depend on $\theta \in T$.
Sufficiency is related to the concept of data reduction. Suppose that $\bs X$ takes values in $\R^n$. If we can find a sufficient statistic $\bs U$ that takes values in $\R^j$, then we can reduce the original data vector $\bs X$ (whose dimension $n$ is usually large) to the vector of statistics $\bs U$ (whose dimension $j$ is usually much smaller) with no loss of information about the parameter $\theta$.
The following result gives a condition for sufficiency that is equivalent to this definition.
Let $U = u(\bs X)$ be a statistic taking values in $R$, and let $f_\theta$ and $h_\theta$ denote the probability density functions of $\bs X$ and $U$ respectively. Then $U$ is sufficient for $\theta$ if and only if the function on $S$ given below does not depend on $\theta \in T$: $\bs x \mapsto \frac{f_\theta(\bs x)}{h_\theta[u(\bs x)]}$
Proof
The joint distribution of $(\bs X, U)$ is concentrated on the set $\{(\bs x, y): \bs x \in S, y = u(\bs x)\} \subseteq S \times R$. The conditional PDF of $\bs X$ given $U = u(\bs x)$ is $f_\theta(\bs x) \big/ h_\theta[u(\bs x)]$ on this set, and is 0 otherwise.
The definition precisely captures the intuitive notion of sufficiency given above, but can be difficult to apply. We must know in advance a candidate statistic $U$, and then we must be able to compute the conditional distribution of $\bs X$ given $U$. The Fisher-Neyman factorization theorem given next often allows the identification of a sufficient statistic from the form of the probability density function of $\bs X$. It is named for Ronald Fisher and Jerzy Neyman.
Fisher-Neyman Factorization Theorem. Let $f_\theta$ denote the probability density function of $\bs X$ and suppose that $U = u(\bs X)$ is a statistic taking values in $R$. Then $U$ is sufficient for $\theta$ if and only if there exists $G: R \times T \to [0, \infty)$ and $r: S \to [0, \infty)$ such that $f_\theta(\bs x) = G[u(\bs x), \theta] r(\bs x); \quad \bs x \in S, \; \theta \in T$
Proof
Let $h_\theta$ denote the PDF of $U$ for $\theta \in T$. If $U$ is sufficient for $\theta$, then from the previous theorem, the function $r(\bs x) = f_\theta(\bs x) \big/ h_\theta[u(\bs x)]$ for $\bs x \in S$ does not depend on $\theta \in T$. Hence $f_\theta(\bs x) = h_\theta[u(\bs x)] r(\bs x)$ for $(\bs x, \theta) \in S \times T$ and so $(\bs x, \theta) \mapsto f_\theta(\bs x)$ has the form given in the theorem. Conversely, suppose that $(\bs x, \theta) \mapsto f_\theta(\bs x)$ has the form given in the theorem. Then there exists a positive function $C$ on $R$, not depending on $\theta$, such that $h_\theta(y) = C(y) G(y, \theta)$ for $\theta \in T$ and $y \in R$. Hence $f_\theta(\bs x) \big/ h_\theta[u(\bs x)] = r(\bs x) \big/ C[u(\bs x)]$ for $\bs x \in S$, independent of $\theta \in T$.
Note that $r$ depends only on the data $\bs x$ but not on the parameter $\theta$. Less technically, $u(\bs X)$ is sufficient for $\theta$ if the probability density function $f_\theta(\bs x)$ depends on the data vector $\bs x$ and the parameter $\theta$ only through $u(\bs x)$.
If $U$ and $V$ are equivalent statistics and $U$ is sufficient for $\theta$ then $V$ is sufficient for $\theta$.
Minimal Sufficient Statistics
The entire data variable $\bs X$ is trivially sufficient for $\theta$. However, as noted above, there usually exists a statistic $U$ that is sufficient for $\theta$ and has smaller dimension, so that we can achieve real data reduction. Naturally, we would like to find the statistic $U$ that has the smallest dimension possible. In many cases, this smallest dimension $j$ will be the same as the dimension $k$ of the parameter vector $\theta$. However, as we will see, this is not necessarily the case; $j$ can be smaller or larger than $k$. An example based on the uniform distribution is given in (38).
Suppose that a statistic $U$ is sufficient for $\theta$. Then $U$ is minimally sufficient if $U$ is a function of any other statistic $V$ that is sufficient for $\theta$.
Once again, the definition precisely captures the notion of minimal sufficiency, but is hard to apply. The following result gives an equivalent condition.
Let $f_\theta$ denote the probability density function of $\bs X$ corresponding to the parameter value $\theta \in T$ and suppose that $U = u(\bs X)$ is a statistic taking values in $R$. Then $U$ is minimally sufficient for $\theta$ if the following condition holds: for $\bs x \in S$ and $\bs y \in S$ $\frac{f_\theta(\bs x)}{f_\theta(\bs{y})} \text{ is independent of } \theta \text{ if and only if } u(\bs x) = u(\bs{y})$
Proof
Suppose that the condition in the theorem is satisfied. Then the PDF $f_\theta$ of $\bs X$ must have the form given in the factorization theorem (3) so $U$ is sufficient for $\theta$. Next, suppose that $V = v(\bs X)$ is another sufficient statistic for $\theta$, taking values in $R$. From the factorization theorem, there exists $G: R \times T \to [0, \infty)$ and $r: S \to [0, \infty)$ such that $f_\theta(\bs x) = G[v(\bs x), \theta] r(\bs x)$ for $(\bs x, \theta) \in S \times T$. Hence if $\bs x, \bs y \in S$ and $v(\bs x) = v(\bs y)$ then $\frac{f_\theta(\bs x)}{f_\theta(\bs{y})} = \frac{G[v(\bs x), \theta] r(\bs x)}{G[v(\bs{y}), \theta] r(\bs{y})} = \frac{r(\bs x)}{r(\bs y)}$ does not depend on $\theta \in T$. Hence from the condition in the theorem, $u(\bs x) = u(\bs y)$ and it follows that $U$ is a function of $V$.
If $U$ and $V$ are equivalent statistics and $U$ is minimally sufficient for $\theta$ then $V$ is minimally sufficient for $\theta$.
Properties of Sufficient Statistics
Sufficiency is related to several of the methods of constructing estimators that we have studied.
Suppose that $U$ is sufficient for $\theta$ and that there exists a maximum likelihood estimator of $\theta$. Then there exists a maximum likelihood estimator $V$ that is a function of $U$.
Proof
From the factorization theorem (3), the log likelihood function for $\bs x \in S$ is $\theta \mapsto \ln G[u(\bs x), \theta] + \ln r(\bs x)$ Hence a value of $\theta$ that maximizes this function, if it exists, must be a function of $u(\bs x)$.
In particular, suppose that $V$ is the unique maximum likelihood estimator of $\theta$ and that $V$ is sufficient for $\theta$. If $U$ is sufficient for $\theta$ then $V$ is a function of $U$ by the previous theorem. Hence it follows that $V$ is minimally sufficient for $\theta$. Our next result applies to Bayesian analysis.
Suppose that the statistic $U = u(\bs X)$ is sufficient for the parameter $\theta$ and that $\theta$ is modeled by a random variable $\Theta$ with values in $T$. Then the posterior distribution of $\Theta$ given $\bs X = \bs x \in S$ is a function of $u(\bs x)$.
Proof
Let $h$ denote the prior PDF of $\Theta$ and $f(\cdot \mid \theta)$ the conditional PDF of $\bs X$ given $\Theta = \theta \in T$. By the factorization theorem (3), this conditional PDF has the form $f(\bs x \mid \theta) = G[u(\bs x), \theta] r(\bs x)$ for $\bs x \in S$ and $\theta \in T$. The posterior PDF of $\Theta$ given $\bs X = \bs x \in S$ is $h(\theta \mid \bs x) = \frac{h(\theta) f(\bs x \mid \theta)}{f(\bs x)}, \quad \theta \in T$ where the function in the denominator is the marginal PDF of $\bs X$, or simply the normalizing constant for the function of $\theta$ in the numerator. Let's suppose that $\Theta$ has a continuous distribution on $T$, so that $f(\bs x) = \int_T h(t) G[u(\bs x), t] r(\bs x) dt$ for $\bs x \in S$. Then the posterior PDF simplifies to $h(\theta \mid \bs x) = \frac{h(\theta) G[u(\bs x), \theta]}{\int_T h(t) G[u(\bs x), t] dt}$ which depends on $\bs x \in S$ only through $u(\bs x)$.
Continuing with the setting of Bayesian analysis, suppose that $\theta$ is a real-valued parameter. If we use the usual mean-square loss function, then the Bayesian estimator is $V = \E(\Theta \mid \bs X)$. By the previous result, $V$ is a function of the sufficient statistics $U$. That is, $\E(\Theta \mid \bs X) = \E(\Theta \mid U)$.
The next result is the Rao-Blackwell theorem, named for C. R. Rao and David Blackwell. The theorem shows how a sufficient statistic can be used to improve an unbiased estimator.
Rao-Blackwell Theorem. Suppose that $U$ is sufficient for $\theta$ and that $V$ is an unbiased estimator of a real parameter $\lambda = \lambda(\theta)$. Then $\E_\theta(V \mid U)$ is also an unbiased estimator of $\lambda$ and is uniformly better than $V$.
Proof
This follows from basic properties of conditional expected value and conditional variance. First, since $V$ is a function of $\bs X$ and $U$ is sufficient for $\theta$, $\E_\theta(V \mid U)$ is a valid statistic; that is, it does not depend on $\theta$, in spite of the formal dependence on $\theta$ in the expected value. Next, $\E_\theta(V \mid U)$ is a function of $U$ and $\E_\theta[\E_\theta(V \mid U)] = \E_\theta(V) = \lambda$ for $\theta \in T$. Thus $\E_\theta(V \mid U)$ is an unbiased estimator of $\lambda$. Finally $\var_\theta[\E_\theta(V \mid U)] = \var_\theta(V) - \E_\theta[\var_\theta(V \mid U)] \le \var_\theta(V)$ for any $\theta \in T$.
Complete Statistics
Suppose that $U = u(\bs X)$ is a statistic taking values in a set $R$. Then $U$ is a complete statistic for $\theta$ if for any function $r: R \to \R$ $\E_\theta\left[r(U)\right] = 0 \text{ for all } \theta \in T \implies \P_\theta\left[r(U) = 0\right] = 1 \text{ for all } \theta \in T$
To understand this rather strange looking condition, suppose that $r(U)$ is a statistic constructed from $U$ that is being used as an estimator of 0 (thought of as a function of $\theta$). The completeness condition means that the only such unbiased estimator is the statistic that is 0 with probability 1.
If $U$ and $V$ are equivalent statistics and $U$ is complete for $\theta$ then $V$ is complete for $\theta$.
The next result shows the importance of statistics that are both complete and sufficient; it is known as the Lehmann-Scheffé theorem, named for Erich Lehmann and Henry Scheffé.
Lehmann-Scheffé Theorem. Suppose that $U$ is sufficient and complete for $\theta$ and that $V = r(U)$ is an unbiased estimator of a real parameter $\lambda = \lambda(\theta)$. Then $V$ is a uniformly minimum variance unbiased estimator (UMVUE) of $\lambda$.
Proof
Suppose that $W$ is an unbiased estimator of $\lambda$. By the Rao-Blackwell theorem (10), $\E(W \mid U)$ is also an unbiased estimator of $\lambda$ and is uniformly better than $W$. Since $V$ and $\E(W \mid U)$ are both functions of $U$ that are unbiased estimators of $\lambda$, their difference is a function of $U$ with expected value 0 for every $\theta \in T$. It follows from completeness that $V = \E(W \mid U)$ with probability 1, and hence $\var_\theta(V) \le \var_\theta(W)$ for every $\theta \in T$.
Ancillary Statistics
Suppose that $V = v(\bs X)$ is a statistic taking values in a set $R$. If the distribution of $V$ does not depend on $\theta$, then $V$ is called an ancillary statistic for $\theta$.
Thus, the notion of an ancillary statistic is complementary to the notion of a sufficient statistic. A sufficient statistic contains all available information about the parameter; an ancillary statistic contains no information about the parameter. The following result, known as Basu's Theorem and named for Debabrata Basu, makes this point more precisely.
Basu's Theorem. Suppose that $U$ is complete and sufficient for a parameter $\theta$ and that $V$ is an ancillary statistic for $\theta$. Then $U$ and $V$ are independent.
Proof
Let $g$ denote the probability density function of $V$ and let $v \mapsto g(v \mid U)$ denote the conditional probability density function of $V$ given $U$. Note that $g$ does not depend on $\theta$ since $V$ is ancillary, and $v \mapsto g(v \mid U)$ does not depend on $\theta$ since $U$ is sufficient. From properties of conditional expected value, $\E_\theta[g(v \mid U)] = g(v)$ for $v \in R$ and $\theta \in T$. So for fixed $v \in R$, $g(v \mid U) - g(v)$ is a function of $U$ with expected value 0 for every $\theta \in T$, and then from completeness, $g(v \mid U) = g(v)$ with probability 1. Thus the conditional distribution of $V$ given $U$ is the same as the distribution of $V$, so $U$ and $V$ are independent.
If $U$ and $V$ are equivalent statistics and $U$ is ancillary for $\theta$ then $V$ is ancillary for $\theta$.
Applications and Special Distributions
In this subsection, we will explore sufficient, complete, and ancillary statistics for a number of special distributions. As always, be sure to try the problems yourself before looking at the solutions.
The Bernoulli Distribution
Recall that the Bernoulli distribution with parameter $p \in (0, 1)$ is a discrete distribution on $\{0, 1\}$ with probability density function $g$ defined by $g(x) = p^x (1 - p)^{1-x}, \quad x \in \{0, 1\}$ Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with parameter $p$. Equivalently, $\bs X$ is a sequence of Bernoulli trials, so that in the usual language of reliability, $X_i = 1$ if trial $i$ is a success, and $X_i = 0$ if trial $i$ is a failure. The Bernoulli distribution is named for Jacob Bernoulli and is studied in more detail in the chapter on Bernoulli Trials.
Let $Y = \sum_{i=1}^n X_i$ denote the number of successes. Recall that $Y$ has the binomial distribution with parameters $n$ and $p$, and has probability density function $h$ defined by $h(y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
$Y$ is sufficient for $p$. Specifically, for $y \in \{0, 1, \ldots, n\}$, the conditional distribution of $\bs X$ given $Y = y$ is uniform on the set of points $D_y = \left\{(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n: x_1 + x_2 + \cdots + x_n = y\right\}$
Proof
The joint PDF $f$ of $\bs X$ is defined by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = p^y (1 - p)^{n-y}, \quad \bs x = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $y = \sum_{i=1}^n x_i$. Now let $y \in \{0, 1, \ldots, n\}$. Given $Y = y$, $\bs X$ is concentrated on $D_y$ and $\P(\bs X = \bs x \mid Y = y) = \frac{\P(\bs X = \bs x)}{\P(Y = y)} = \frac{p^y (1 - p)^{n-y}}{\binom{n}{y} p^y (1 - p)^{n-y}} = \frac{1}{\binom{n}{y}}, \quad \bs x \in D_y$ Of course, $\binom{n}{y}$ is the cardinality of $D_y$.
This result is intuitively appealing: in a sequence of Bernoulli trials, all of the information about the probability of success $p$ is contained in the number of successes $Y$. The particular order of the successes and failures provides no additional information. Of course, the sufficiency of $Y$ follows more easily from the factorization theorem (3), but the conditional distribution provides additional insight.
$Y$ is complete for $p$ on the parameter space $(0, 1)$.
Proof
If $r: \{0, 1, \ldots, n\} \to \R$, then $\E[r(Y)] = \sum_{y=0}^n r(y) \binom{n}{y} p^y (1 - p)^{n-y} = (1 - p)^n \sum_{y=0}^n r(y) \binom{n}{y} \left(\frac{p}{1 - p}\right)^y$ The last sum is a polynomial in the variable $t = \frac{p}{1 - p} \in (0, \infty)$. If this polynomial is 0 for all $t \in (0, \infty)$, then all of the coefficients must be 0. Hence we must have $r(y) = 0$ for $y \in \{0, 1, \ldots, n\}$.
The proof of the last result actually shows that if the parameter space is any subset of $(0, 1)$ containing an interval of positive length, then $Y$ is complete for $p$. But the notion of completeness depends very much on the parameter space. The following result considers the case where $p$ has a finite set of values.
Suppose that the parameter space $T \subset (0, 1)$ is a finite set with $k \in \N_+$ elements. If the sample size $n$ is at least $k$, then $Y$ is not complete for $p$.
Proof
Suppose that $r: \{0, 1, \ldots, n\} \to \R$ and that $\E[r(Y)] = 0$ for $p \in T$. Then we have $\sum_{y=0}^n \binom{n}{y} p^y (1 - p)^{n-y} r(y) = 0, \quad p \in T$ This is a set of $k$ linear, homogenous equations in the variables $(r(0), r(1), \ldots, r(n))$. Since $n \ge k$, we have at least $k + 1$ variables, so there are infinitely many nontrivial solutions.
The sample mean $M = Y / n$ (the sample proportion of successes) is clearly equivalent to $Y$ (the number of successes), and hence is also sufficient for $p$ and is complete for $p \in (0, 1)$. Recall that the sample mean $M$ is the method of moments estimator of $p$, and is the maximum likelihood estimator of $p$ on the parameter space $(0, 1)$.
In Bayesian analysis, the usual approach is to model $p$ with a random variable $P$ that has a prior beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. Then the posterior distribution of $P$ given $\bs X$ is beta with left parameter $a + Y$ and right parameter $b + (n - Y)$. The posterior distribution depends on the data only through the sufficient statistic $Y$, as guaranteed by theorem (9).
The sample variance $S^2$ is an UMVUE of the distribution variance $p (1 - p)$ for $p \in (0, 1)$, and can be written as $S^2 = \frac{Y}{n - 1} \left(1 - \frac{Y}{n}\right)$
Proof
Recall that the sample variance can be written as $S^2 = \frac{1}{n - 1} \sum_{i=1}^n X_i^2 - \frac{n}{n - 1} M^2$ But $X_i^2 = X_i$ since $X_i$ is an indicator variable, and $M = Y / n$. Substituting gives the representation above. In general, $S^2$ is an unbiased estimator of the distribution variance $\sigma^2$. But in this case, $S^2$ is a function of the complete, sufficient statistic $Y$, and hence by the Lehmann Scheffé theorem (13), $S^2$ is an UMVUE of $\sigma^2 = p (1 - p)$.
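The algebraic identity and the unbiasedness claim are easy to check numerically; the following sketch uses arbitrary illustrative values of $p$ and $n$.

```python
import numpy as np

# Illustrative values of p and n
rng = np.random.default_rng(7)
p, n, reps = 0.4, 15, 100_000

x = rng.binomial(1, p, size=(reps, n))
y = x.sum(axis=1)
s2_direct = x.var(axis=1, ddof=1)              # usual sample variance
s2_from_y = y / (n - 1) * (1 - y / n)          # expression in terms of Y

print(np.max(np.abs(s2_direct - s2_from_y)))   # identity holds (up to rounding)
print(s2_from_y.mean(), p * (1 - p))           # approximately unbiased for p (1 - p)
```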
The Poisson Distribution
Recall that the Poisson distribution with parameter $\theta \in (0, \infty)$ is a discrete distribution on $\N$ with probability density function $g$ defined by $g(x) = e^{-\theta} \frac{\theta^x}{x!}, \quad x \in \N$ The Poisson distribution is named for Simeon Poisson and is used to model the number of random points in a region of time or space, under certain ideal conditions. The parameter $\theta$ is proportional to the size of the region, and is both the mean and the variance of the distribution. The Poisson distribution is studied in more detail in the chapter on the Poisson Process.
Suppose now that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Poisson distribution with parameter $\theta$. Recall that the sum of the scores $Y = \sum_{i=1}^n X_i$ also has the Poisson distribution, but with parameter $n \theta$.
The statistic $Y$ is sufficient for $\theta$. Specifically, for $y \in \N$, the conditional distribution of $\bs X$ given $Y = y$ is the multinomial distribution with $y$ trials, $n$ trial values, and uniform trial probabilities.
Proof
The joint PDF $f$ of $\bs X$ is defined by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{e^{-n \theta} \theta^y}{x_1! x_2! \cdots x_n!}, \quad \bs x = (x_1, x_2, \ldots, x_n) \in \N^n$ where $y = \sum_{i=1}^n x_i$. Given $Y = y \in \N$, the random vector $\bs X$ takes values in the set $D_y = \left\{\bs x = (x_1, x_2, \ldots, x_n) \in \N^n: \sum_{i=1}^n x_i = y\right\}$. Moreover, $\P(\bs X = \bs x \mid Y = y) = \frac{\P(\bs X = \bs x)}{\P(Y = y)} = \frac{e^{-n \theta} \theta^y / (x_1! x_2! \cdots x_n!)}{e^{-n \theta} (n \theta)^y / y!} = \frac{y!}{x_1! x_2! \cdots x_n!} \frac{1}{n^y}, \quad \bs x \in D_y$ The last expression is the PDF of the multinomial distribution stated in the theorem. Of course, the important point is that the conditional distribution does not depend on $\theta$.
As before, it's easier to use the factorization theorem to prove the sufficiency of $Y$, but the conditional distribution gives some additional insight.
$Y$ is complete for $\theta \in (0, \infty)$.
Proof
If $r: \N \to \R$ then $\E\left[r(Y)\right] = \sum_{y=0}^\infty e^{-n \theta} \frac{(n \theta)^y}{y!} r(y) = e^{-n \theta} \sum_{y=0}^\infty \frac{n^y}{y!} r(y) \theta^y$ The last sum is a power series in $\theta$ with coefficients $n^y r(y) / y!$ for $y \in \N$. If this series is 0 for all $\theta$ in an open interval, then the coefficients must be 0 and hence $r(y) = 0$ for $y \in \N$.
As with our discussion of Bernoulli trials, the sample mean $M = Y / n$ is clearly equivalent to $Y$ and hence is also sufficient for $\theta$ and complete for $\theta \in (0, \infty)$. Recall that $M$ is the method of moments estimator of $\theta$ and is the maximum likelihood estimator on the parameter space $(0, \infty)$.
An UMVUE of the parameter $\P(X = 0) = e^{-\theta}$ for $\theta \in (0, \infty)$ is $U = \left( \frac{n-1}{n} \right)^Y$
Proof
The probability generating function of $Y$ is $P(t) = \E(t^Y) = e^{n \theta(t - 1)}, \quad t \in \R$ Hence $\E\left[\left(\frac{n - 1}{n}\right)^Y\right] = \exp \left[n \theta \left(\frac{n - 1}{n} - 1\right)\right] = e^{-\theta}, \quad \theta \in (0, \infty)$ So $U = [(n - 1) / n]^Y$ is an unbiased estimator of $e^{-\theta}$. Since $U$ is a function of the complete, sufficient statistic $Y$, it follows from the Lehmann Scheffé theorem (13) that $U$ is an UMVUE of $e^{-\theta}$.
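A brief Monte Carlo check (with arbitrary illustrative values of $\theta$ and $n$) suggests the unbiasedness of $U$ for $e^{-\theta}$.

```python
import numpy as np

# Illustrative values of theta and n
rng = np.random.default_rng(8)
theta, n, reps = 1.7, 8, 300_000

y = rng.poisson(n * theta, size=reps)          # Y has the Poisson distribution with parameter n * theta
u = ((n - 1) / n) ** y

print(u.mean(), np.exp(-theta))                # empirical mean vs. e^{-theta}
```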
The Normal Distribution
Recall that the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$ is a continuous distribution on $\R$ with probability density function $g$ defined by $g(x) = \frac{1}{\sqrt{2 \, \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right], \quad x \in \R$ The normal distribution is often used to model physical quantities subject to small, random errors, and is studied in more detail in the chapter on Special Distributions. Because of the central limit theorem, the normal distribution is perhaps the most important distribution in statistics.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with mean $\mu$ and variance $\sigma^2$. Then each of the following pairs of statistics is minimally sufficient for $(\mu, \sigma^2)$
1. $(Y, V)$ where $Y = \sum_{i=1}^n X_i$ and $V = \sum_{i=1}^n X_i^2$.
2. $\left(M, S^2\right)$ where $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean and $S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$ is the sample variance.
3. $(M, T^2)$ where $T^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M)^2$ is the biased sample variance.
Proof
1. The joint PDF $f$ of $\bs X$ is given by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{1}{(2 \pi)^{n/2} \sigma^n} \exp\left[-\frac{1}{2 \sigma^2} \sum_{i=1}^n (x_i - \mu)^2\right], \quad \bs x = (x_1, x_2, \ldots, x_n) \in \R^n$ After some algebra, this can be written as $f(\bs x) = \frac{1}{(2 \pi)^{n/2} \sigma^n} e^{-n \mu^2 / 2 \sigma^2} \exp\left(-\frac{1}{2 \sigma^2} \sum_{i=1}^n x_i^2 + \frac{\mu}{\sigma^2} \sum_{i=1}^n x_i \right), \quad \bs x = (x_1, x_2, \ldots, x_n) \in \R^n$ It follows from the factorization theorem (3) that $(Y, V)$ is sufficient for $\left(\mu, \sigma^2\right)$. Minimal sufficiency follows from the condition in theorem (6).
2. Note that $M = \frac{1}{n} Y, \; S^2 = \frac{1}{n - 1} V - \frac{n}{n - 1} M^2$. Hence $\left(M, S^2\right)$ is equivalent to $(Y, V)$ and so $\left(M, S^2\right)$ is also minimally sufficient for $\left(\mu, \sigma^2\right)$.
3. Similarly, $M = \frac{1}{n} Y$ and $T^2 = \frac{1}{n} V - M^2$. Hence $(M, T^2)$ is equivalent to $(Y, V)$ and so $(M, T^2)$ is also minimally sufficient for $(\mu, \sigma^2)$.
Recall that $M$ and $T^2$ are the method of moments estimators of $\mu$ and $\sigma^2$, respectively, and are also the maximum likelihood estimators on the parameter space $\R \times (0, \infty)$.
Run the normal estimation experiment 1000 times with various values of the parameters. Compare the estimates of the parameters in terms of bias and mean square error.
Sometimes the variance $\sigma^2$ of the normal distribution is known, but not the mean $\mu$. It's rarely the case that $\mu$ is known but not $\sigma^2$. Nonetheless we can give sufficient statistics in both cases.
Suppose again that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with mean $\mu \in \R$ and variance $\sigma^2 \in (0, \infty)$.
1. If $\sigma^2$ is known then $Y = \sum_{i=1}^n X_i$ is minimally sufficient for $\mu$.
2. If $\mu$ is known then $U = \sum_{i=1}^n (X_i - \mu)^2$ is sufficient for $\sigma^2$.
Proof
1. This result follows from the second displayed equation for the PDF $f(\bs x)$ of $\bs X$ in the proof of the previous theorem.
2. This result follows from the first displayed equation for the PDF $f(\bs x)$ of $\bs X$ in the proof of the previous theorem.
Of course by equivalence, in part (a) the sample mean $M = Y / n$ is minimally sufficient for $\mu$, and in part (b) the special sample variance $W^2 = U / n$ is minimally sufficient for $\sigma^2$. Moreover, in part (a), $M$ is complete for $\mu$ on the parameter space $\R$ and the sample variance $S^2$ is ancillary for $\mu$ (recall that $(n - 1) S^2 / \sigma^2$ has the chi-square distribution with $n - 1$ degrees of freedom). It follows from Basu's theorem (15) that the sample mean $M$ and the sample variance $S^2$ are independent. We proved this by more direct means in the section on special properties of normal samples, but the formulation in terms of sufficient and ancillary statistics gives additional insight.
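A simulation sketch (with arbitrary illustrative values) shows one symptom of this independence: the empirical correlation between $M$ and $S^2$ is essentially 0. Of course, zero correlation does not by itself prove independence; the sketch is only a numerical sanity check.

```python
import numpy as np

# Illustrative values of the parameters and sample size
rng = np.random.default_rng(9)
mu, sigma, n, reps = 3.0, 2.0, 12, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
m = x.mean(axis=1)                             # sample mean M
s2 = x.var(axis=1, ddof=1)                     # sample variance S^2

print(np.corrcoef(m, s2)[0, 1])                # close to 0
print(np.corrcoef(m, np.sqrt(s2))[0, 1])       # also close to 0 for a function of S^2
```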
The Gamma Distribution
Recall that the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a continuous distribution on $(0, \infty)$ with probability density function $g$ given by $g(x) = \frac{1}{\Gamma(k) b^k} x^{k-1} e^{-x / b}, \quad x \in (0, \infty)$ The gamma distribution is often used to model random times and certain other types of positive random variables, and is studied in more detail in the chapter on Special Distributions.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the gamma distribution with shape parameter $k$ and scale parameter $b$. Each of the following pairs of statistics is minimally sufficient for $(k, b)$
1. $(Y, V)$ where $Y = \sum_{i=1}^n X_i$ is the sum of the scores and $V = \prod_{i=1}^n X_i$ is the product of the scores.
2. $(M, U)$ where $M = Y / n$ is the sample (arithmetic) mean of $\bs X$ and $U = V^{1/n}$ is the sample geometric mean of $\bs X$.
Proof
1. The joint PDF $f$ of $\bs X$ is given by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{1}{\Gamma^n(k) b^{nk}} (x_1 x_2 \ldots x_n)^{k-1} e^{-(x_1 + x_2 + \cdots + x_n) / b}, \quad \bs x = (x_1, x_2, \ldots, x_n) \in (0, \infty)^n$ From the factorization theorem (3), $(Y, V)$ is sufficient for $(k, b)$. Minimal sufficiency follows from condition (6).
2. Clearly $M = Y / n$ is equivalent to $Y$ and $U = V^{1/n}$ is equivalent to $V$. Hence $(M, U)$ is also minimally sufficient for $(k, b)$.
Recall that the method of moments estimators of $k$ and $b$ are $M^2 / T^2$ and $T^2 / M$, respectively, where $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean and $T^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M)^2$ is the biased sample variance. If the shape parameter $k$ is known, $\frac{1}{k} M$ is both the method of moments estimator of $b$ and the maximum likelihood estimator on the parameter space $(0, \infty)$. Note that $T^2$ is not a function of the sufficient statistics $(Y, V)$, and hence estimators based on $T^2$ suffer from a loss of information.
Run the gamma estimation experiment 1000 times with various values of the parameters and the sample size $n$. Compare the estimates of the parameters in terms of bias and mean square error.
The proof of the last theorem actually shows that $Y$ is sufficient for $b$ if $k$ is known, and that $V$ is sufficient for $k$ if $b$ is known.
Suppose again that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the gamma distribution with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. Then $Y = \sum_{i=1}^n X_i$ is complete for $b$.
Proof
$Y$ has the gamma distribution with shape parameter $n k$ and scale parameter $b$. Hence, if $r: [0, \infty) \to \R$, then $\E\left[r(Y)\right] = \int_0^\infty \frac{1}{\Gamma(n k) b^{n k}} y^{n k-1} e^{-y/b} r(y) \, dy = \frac{1}{\Gamma(n k) b^{n k}} \int_0^\infty y^{n k - 1} r(y) e^{-y / b} \, dy$ The last integral can be interpreted as the Laplace transform of the function $y \mapsto y^{n k - 1} r(y)$ evaluated at $1 / b$. If this transform is 0 for all $b$ in an open interval, then $r(y) = 0$ almost everywhere in $(0, \infty)$.
Suppose again that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the gamma distribution on $(0, \infty)$ with shape parameter $k \in (0, \infty)$ and scale parameter $b \in (0, \infty)$. Let $M = \frac{1}{n} \sum_{i=1}^n X_i$ denote the sample mean and $U = (X_1 X_2 \ldots X_n)^{1/n}$ the sample geometric mean, as before. Then
1. $M / U$ is ancillary for $b$.
2. $M$ and $M / U$ are independent.
Proof
1. We can take $X_i = b Z_i$ for $i \in \{1, 2, \ldots, n\}$ where $\bs{Z} = (Z_1, Z_2, \ldots, Z_n)$ is a random sample of size $n$ from the gamma distribution with shape parameter $k$ and scale parameter 1 (the standard gamma distribution with shape parameter $k$). Then $\frac{M}{U} = \frac{1}{n} \sum_{i=1}^n \frac{X_i}{(X_1 X_2 \cdots X_n)^{1/n}} = \frac{1}{n} \sum_{i=1}^n \left(\frac{X_i^n}{X_1 X_2 \cdots X_n}\right)^{1/n} = \frac{1}{n} \sum_{i=1}^n \left(\prod_{j \ne i} \frac{X_i}{X_j}\right)^{1/n}$ But $X_i / X_j = Z_i / Z_j$ for $i \ne j$, and the distribution of $\left\{Z_i / Z_j: i, j \in \{1, 2, \ldots, n\}, \; i \ne j\right\}$ does not depend on $b$. Hence the distribution of $M / U$ does not depend on $b$.
2. This follows from Basu's theorem (15), since $M$ is complete and sufficient for $b$ and $M / U$ is ancillary for $b$.
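As a numerical sanity check of part (b) (again with arbitrary illustrative values), the empirical correlation between $M$ and $M / U$ in simulated gamma samples is essentially 0, consistent with the independence guaranteed by Basu's theorem.

```python
import numpy as np

# Illustrative values of the shape and scale parameters
rng = np.random.default_rng(10)
k, b, n, reps = 2.0, 3.0, 10, 200_000

x = rng.gamma(k, b, size=(reps, n))            # numpy's gamma takes shape and scale
m = x.mean(axis=1)                             # sample arithmetic mean M
u = np.exp(np.log(x).mean(axis=1))             # sample geometric mean U

print(np.corrcoef(m, m / u)[0, 1])             # close to 0, consistent with independence
```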
The Beta Distribution
Recall that the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ is a continuous distribution on $(0, 1)$ with probability density function $g$ given by $g(x) = \frac{1}{B(a, b)} x^{a-1} (1 - x)^{b-1}, \quad x \in (0, 1)$ where $B$ is the beta function. The beta distribution is often used to model random proportions and other random variables that take values in bounded intervals. It is studied in more detail in the chapter on Special Distributions.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the beta distribution with left parameter $a$ and right parameter $b$. Then $(P, Q)$ is minimally sufficient for $(a, b)$ where $P = \prod_{i=1}^n X_i$ and $Q = \prod_{i=1}^n (1 - X_i)$.
Proof
The joint PDF $f$ of $\bs X$ is given by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{1}{B^n(a, b)} (x_1 x_2 \cdots x_n)^{a - 1} [(1 - x_1) (1 - x_2) \cdots (1 - x_n)]^{b-1}, \quad \bs x = (x_1, x_2, \ldots, x_n) \in (0, 1)^n$ From the factorization theorem (3), it follows that $(P, Q)$ is sufficient for $(a, b)$. Minimal sufficiency follows from condition (6).
The proof also shows that $P$ is sufficient for $a$ if $b$ is known, and that $Q$ is sufficient for $b$ if $a$ is known. Recall that the method of moments estimators of $a$ and $b$ are $U = \frac{M\left(M - M^{(2)}\right)}{M^{(2)} - M^2}, \quad V = \frac{(1 - M)\left(M - M^{(2)}\right)}{M^{(2)} - M^2}$ respectively, where $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean and $M^{(2)} = \frac{1}{n} \sum_{i=1}^n X_i^2$ is the second order sample mean. If $b$ is known, the method of moments estimator of $a$ is $U_b = b M / (1 - M)$, while if $a$ is known, the method of moments estimator of $b$ is $V_a = a (1 - M) / M$. None of these estimators is a function of the sufficient statistics $(P, Q)$ and so all suffer from a loss of information. On the other hand, if $b = 1$, the maximum likelihood estimator of $a$ on the interval $(0, \infty)$ is $W = -n / \sum_{i=1}^n \ln X_i$, which is a function of $P$ (as it must be).
Run the beta estimation experiment 1000 times with various values of the parameters. Compare the estimates of the parameters.
The Pareto Distribution
Recall that the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b \in (0, \infty)$ is a continuous distribution on $[b, \infty)$ with probability density function $g$ given by $g(x) = \frac{a b^a}{x^{a+1}}, \quad b \le x \lt \infty$ The Pareto distribution, named for Vilfredo Pareto, is a heavy-tailed distribution often used to model income and certain other types of random variables. It is studied in more detail in the chapter on Special Distributions.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the Pareto distribution with shape parameter $a$ and scale parameter $b$. Then $\left(P, X_{(1)}\right)$ is minimally sufficient for $(a, b)$ where $P = \prod_{i=1}^n X_i$ is the product of the sample variables and where $X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}$ is the first order statistic.
Proof
The joint PDF $f$ of $\bs X$ at $\bs x = (x_1, x_2, \ldots, x_n)$ is given by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{a^n b^{n a}}{(x_1 x_2 \cdots x_n)^{a + 1}}, \quad x_1 \ge b, x_2 \ge b, \ldots, x_n \ge b$ which can be rewritten as $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{a^n b^{n a}}{(x_1 x_2 \cdots x_n)^{a + 1}} \bs{1}\left(x_{(1)} \ge b\right), \quad (x_1, x_2, \ldots, x_n) \in (0, \infty)^n$ So the result follows from the factorization theorem (3). Minimal sufficiency follows from condition (6).
The proof also shows that $P$ is sufficient for $a$ if $b$ is known (which is often the case), and that $X_{(1)}$ is sufficient for $b$ if $a$ is known (much less likely). Recall that the method of moments estimators of $a$ and $b$ are $U = 1 + \sqrt{\frac{M^{(2)}}{M^{(2)} - M^2}}, \quad V = \frac{M^{(2)}}{M} \left( 1 - \sqrt{\frac{M^{(2)} - M^2}{M^{(2)}}} \right)$ respectively, where as before $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean and $M^{(2)} = \frac{1}{n} \sum_{i=1}^n X_i^2$ the second order sample mean. These estimators are not functions of the sufficient statistics and hence suffer from a loss of information. On the other hand, the maximum likelihood estimators of $a$ and $b$ on the interval $(0, \infty)$ are $W = \frac{n}{\sum_{i=1}^n \ln X_i - n \ln X_{(1)}}, \quad X_{(1)}$ respectively. These are functions of the sufficient statistics, as they must be.
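As a numerical illustration (again assuming Python with NumPy; the parameter values and seed are arbitrary, and note that NumPy's `pareto` generator returns the Lomax form, hence the shift and rescaling below), the following sketch computes both sets of estimators from a simulated sample:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
a, b, n = 3.0, 2.0, 1000                    # true shape, true scale, sample size
x = (rng.pareto(a, size=n) + 1) * b         # Pareto(a, b) sample on [b, infinity)

# minimally sufficient statistics: the sum of logs (equivalent to P) and the sample minimum
log_p = np.sum(np.log(x))
x_min = np.min(x)

# method of moments estimators, as quoted above (not functions of the sufficient statistics)
m1 = np.mean(x)
m2 = np.mean(x ** 2)
u = 1 + np.sqrt(m2 / (m2 - m1 ** 2))
v = (m2 / m1) * (1 - np.sqrt((m2 - m1 ** 2) / m2))

# maximum likelihood estimators (functions of the sufficient statistics)
w = n / (log_p - n * np.log(x_min))

print((u, v), (w, x_min))                   # compare with (a, b) = (3, 2)
```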
Run the Pareto estimation experiment 1000 times with various values of the parameters $a$ and $b$ and the sample size $n$. Compare the method of moments estimates of the parameters with the maximum likelihood estimates in terms of the empirical bias and mean square error.
The Uniform Distribution
Recall that the continuous uniform distribution on the interval $[a, a + h]$, where $a \in \R$ is the location parameter and $h \in (0, \infty)$ is the scale parameter, has probability density function $g$ given by $g(x) = \frac{1}{h}, \quad x \in [a, a + h]$ Continuous uniform distributions are widely used in applications to model a number chosen at random from an interval. Continuous uniform distributions are studied in more detail in the chapter on Special Distributions. Let's first consider the case where both parameters are unknown.
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the uniform distribution on the interval $[a, a + h]$. Then $\left(X_{(1)}, X_{(n)}\right)$ is minimally sufficient for $(a, h)$, where $X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}$ is the first order statistic and $X_{(n)} = \max\{X_1, X_2, \ldots, X_n\}$ is the last order statistic.
Proof
The PDF $f$ of $\bs X$ is given by $f(\bs x) = g(x_1) g(x_2) \cdots g(x_n) = \frac{1}{h^n}, \quad \bs x = (x_1, x_2, \ldots x_n) \in [a, a + h]^n$ We can rewrite the PDF as $f(\bs x) = \frac{1}{h^n} \bs{1}[x_{(1)} \ge a] \bs{1}[x_{(n)} \le a + h], \quad \bs x = (x_1, x_2, \ldots, x_n) \in \R^n$ It then follows from the factorization theorem (3) that $\left(X_{(1)}, X_{(n)}\right)$ is sufficient for $(a, h)$. Next, suppose that $\bs x, \, \bs y \in \R^n$ and that $x_{(1)} \ne y_{(1)}$ or $x_{(n)} \ne y_{(n)}$. For a given $h \in (0, \infty)$, we can easily find values of $a \in \R$ such that $f(\bs x) = 0$ and $f(\bs y) = 1 / h^n$, and other values of $a \in \R$ such that $f(\bs x) = f(\bs y) = 1 / h^n$. By condition (6), $\left(X_{(1)}, X_{(n)}\right)$ is minimally sufficient.
If the location parameter $a$ is known, then the largest order statistic is sufficient for the scale parameter $h$. But if the scale parameter $h$ is known, we still need both order statistics for the location parameter $a$. So in this case, we have a single real-valued parameter, but the minimally sufficient statistic is a pair of real-valued random variables.
Suppose again that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the uniform distribution on the interval $[a, a + h]$.
1. If $a \in \R$ is known, then $X_{(n)}$ is sufficient for $h$.
2. If $h \in (0, \infty)$ is known, then $\left(X_{(1)}, X_{(n)}\right)$ is minimally sufficient for $a$.
Proof
Both parts follow easily from the analysis given in the proof of the last theorem.
Run the uniform estimation experiment 1000 times with various values of the parameter. Compare the estimates of the parameter.
Recall that if both parameters are unknown, the method of moments estimators of $a$ and $h$ are $U = M - \sqrt{3} T$ and $V = 2 \sqrt{3} T$, respectively, where $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean and $T^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M)^2$ is the biased sample variance. If $a$ is known, the method of moments estimator of $h$ is $V_a = 2 (M - a)$, while if $h$ is known, the method of moments estimator of $a$ is $U_h = M - \frac{1}{2} h$. None of these estimators are functions of the minimally sufficient statistics, and hence result in loss of information.
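A brief numerical sketch (assuming Python with NumPy; the values of $a$, $h$, the seed, and the sample size are arbitrary) comparing the method of moments estimators with natural estimates based on the order statistics:

```python
import numpy as np

rng = np.random.default_rng(seed=11)
a, h, n = 4.0, 3.0, 200
x = rng.uniform(a, a + h, size=n)       # random sample from the uniform distribution on [a, a + h]

# minimally sufficient statistics
x_min, x_max = np.min(x), np.max(x)

# method of moments estimators (not functions of the order statistics)
m = np.mean(x)
t = np.std(x)                           # biased sample standard deviation (divisor n)
u = m - np.sqrt(3) * t                  # estimates the location parameter a
v = 2 * np.sqrt(3) * t                  # estimates the scale parameter h

print((u, v))                           # method of moments estimates of (a, h)
print((x_min, x_max - x_min))           # estimates based on the order statistics
```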
The Hypergeometric Model
So far, in all of our examples, the basic variables have formed a random sample from a distribution. In this subsection, our basic variables will be dependent.
Recall that in the hypergeometric model, we have a population of $N$ objects, and that $r$ of the objects are type 1 and the remaining $N - r$ are type 0. The population size $N$ is a positive integer and the type 1 size $r$ is a nonnegative integer with $r \le N$. Typically one or both parameters are unknown. We select a random sample of $n$ objects, without replacement from the population, and let $X_i$ be the type of the $i$th object chosen. So our basic sequence of random variables is $\bs X = (X_1, X_2, \ldots, X_n)$. The variables are identically distributed indicator variables with $\P(X_i = 1) = r / N$ for $i \in \{1, 2, \ldots, n\}$, but are dependent. Of course, the sample size $n$ is a positive integer with $n \le N$.
The variable $Y = \sum_{i=1}^n X_i$ is the number of type 1 objects in the sample. This variable has the hypergeometric distribution with parameters $N$, $r$, and $n$, and has probability density function $h$ given by $h(y) = \frac{\binom{r}{y} \binom{N - r}{n - y}}{\binom{N}{n}} = \binom{n}{y} \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad y \in \{\max\{0, n + r - N\}, \ldots, \min\{n, r\}\}$ (Recall the falling power notation $x^{(k)} = x (x - 1) \cdots (x - k + 1)$). The hypergeometric distribution is studied in more detail in the chapter on Finite Sampling Models.
$Y$ is sufficient for $(N, r)$. Specifically, for $y \in \{\max\{0, n + r - N\}, \ldots, \min\{n, r\}\}$, the conditional distribution of $\bs X$ given $Y = y$ is uniform on the set of points $D_y = \left\{(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n: x_1 + x_2 + \cdots + x_n = y\right\}$
Proof
By a simple application of the multiplication rule of combinatorics, the PDF $f$ of $\bs X$ is given by $f(\bs x) = \frac{r^{(y)} (N - r)^{(n - y)}}{N^{(n)}}, \quad \bs x = (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $y = \sum_{i=1}^n x_i$. If $y \in \{\max\{0, n + r - N\}, \ldots, \min\{n, r\}\}$, the conditional distribution of $\bs X$ given $Y = y$ is concentrated on $D_y$ and $\P(\bs X = \bs x \mid Y = y) = \frac{\P(\bs X = \bs x)}{\P(Y = y)} = \frac{r^{(y)} (N - r)^{(n-y)}/N^{(n)}}{\binom{n}{y} r^{(y)} (N - r)^{(n - y)} / N^{(n)}} = \frac{1}{\binom{n}{y}}, \quad \bs x \in D_y$ Of course, $\binom{n}{y}$ is the cardinality of $D_y$.
There are clearly strong similarities between the hypergeometric model and the Bernoulli trials model above. Indeed if the sampling were with replacement, the Bernoulli trials model with $p = r / N$ would apply rather than the hypergeometric model. It's also interesting to note that we have a single real-valued statistic that is sufficient for two real-valued parameters.
Once again, the sample mean $M = Y / n$ is equivalent to $Y$ and hence is also sufficient for $(N, r)$. Recall that the method of moments estimator of $r$ with $N$ known is $N M$ and the method of moments estimator of $N$ with $r$ known is $r / M$. The estimator of $N$ is the one that is used in the capture-recapture experiment.
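The following sketch (assuming Python with NumPy; the population and sample sizes are arbitrary illustrative choices) simulates many hypergeometric samples and applies both estimators:

```python
import numpy as np

rng = np.random.default_rng(seed=19)
N, r, n = 1000, 150, 60                             # population size, type 1 size, sample size

# number of type 1 objects in each of many samples drawn without replacement
y = rng.hypergeometric(ngood=r, nbad=N - r, nsample=n, size=10000)
m = y / n                                           # sample proportion M = Y / n

est_r = N * m                                       # estimator of r when N is known
est_N = r / np.where(m > 0, m, np.nan)              # capture-recapture estimator of N when r is known

print(np.mean(est_r))                               # close to r = 150
print(np.nanmean(est_N))                            # close to (but typically a bit above) N = 1000
```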
Exponential Families
Suppose now that our data vector $\bs X$ takes values in a set $S$, and that the distribution of $\bs X$ depends on a parameter vector $\bs{\theta}$ taking values in a parameter space $\Theta$. The distribution of $\bs X$ is a $k$-parameter exponential family if $S$ does not depend on $\bs{\theta}$ and if the probability density function of $\bs X$ can be written as
$f_\bs{\theta}(\bs x) = \alpha(\bs{\theta}) r(\bs x) \exp\left(\sum_{i=1}^k \beta_i(\bs{\theta}) u_i(\bs x) \right); \quad \bs x \in S, \; \bs{\theta} \in \Theta$
where $\alpha$ and $\left(\beta_1, \beta_2, \ldots, \beta_k\right)$ are real-valued functions on $\Theta$, and where $r$ and $\left(u_1, u_2, \ldots, u_k\right)$ are real-valued functions on $S$. Moreover, $k$ is assumed to be the smallest such integer. The parameter vector $\bs{\beta} = \left(\beta_1(\bs{\theta}), \beta_2(\bs{\theta}), \ldots, \beta_k(\bs{\theta})\right)$ is sometimes called the natural parameter of the distribution, and the random vector $\bs U = \left(u_1(\bs X), u_2(\bs X), \ldots, u_k(\bs X)\right)$ is sometimes called the natural statistic of the distribution. Although the definition may look intimidating, exponential families are useful because they have many nice mathematical properties, and because many special parametric families are exponential families. In particular, the sampling distributions from the Bernoulli, Poisson, gamma, normal, beta, and Pareto distributions considered above are exponential families. Exponential families of distributions are studied in more detail in the chapter on Special Distributions.
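For example, for a random sample $\bs X = (X_1, X_2, \ldots, X_n)$ from the beta distribution considered above, the joint density can be written as $f_{(a, b)}(\bs x) = \frac{1}{B^n(a, b)} \exp\left[(a - 1) \sum_{i=1}^n \ln x_i + (b - 1) \sum_{i=1}^n \ln(1 - x_i)\right], \quad \bs x \in (0, 1)^n$ so the distribution of $\bs X$ is a 2-parameter exponential family with natural parameter $(a - 1, b - 1)$ and natural statistic $\left(\sum_{i=1}^n \ln X_i, \sum_{i=1}^n \ln(1 - X_i)\right)$, which is equivalent to the pair $(P, Q)$ obtained earlier.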
$\bs U$ is minimally sufficient for $\bs{\theta}$.
Proof
That $\bs U$ is sufficient for $\bs{\theta}$ follows immediately from the factorization theorem. That $\bs U$ is minimally sufficient follows since $k$ is the smallest integer in the exponential formulation.
It turns out that $\bs U$ is complete for $\bs{\theta}$ as well, although the proof is more difficult.
Set estimation refers to the process of constructing a subset of the parameter space, based on observed data from a probability distribution. The subset will contain the true value of the parameter with a specified confidence level. In this chapter, we explore the basic method of set estimation using pivot variables. We study set estimation in some of the most important models: the single variable normal model, the two-variable normal model, and the Bernoulli model.
08: Set Estimation
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Basic Statistical Model
As usual, our starting point is a random experiment with an underlying sample space and a probability measure $\P$. In the basic statistical model, we have an observable random variable $\bs{X}$ taking values in a set $S$. In general, $\bs{X}$ can have quite a complicated structure. For example, if the experiment is to sample $n$ objects from a population and record various measurements of interest, then $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object. The most important special case occurs when $(X_1, X_2, \ldots, X_n)$ are independent and identically distributed. In this case, we have a random sample of size $n$ from the common distribution.
Suppose also that the distribution of $\bs{X}$ depends on a parameter $\theta$ taking values in a parameter space $\Theta$. The parameter may also be vector-valued, in which case $\Theta \subseteq \R^k$ for some $k \in \N_+$ and the parameter vector has the form $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$.
Confidence Sets
A confidence set is a subset $C(\bs{X})$ of the parameter space $\Theta$ that depends only on the data variable $\bs{X}$, and not on any unknown parameters. The confidence level is the smallest probability that $\theta \in C(\bs{X})$: $\min\left\{\P[\theta \in C(\bs{X})]: \theta \in \Theta\right\}$
Thus, in a sense, a confidence set is a set-valued statistic. A confidence set is an estimator of $\theta$ in the sense that we hope that $\theta \in C(\bs{X})$ with high probability, so that the confidence level is high. Note that since the distribution of $\bs{X}$ depends on $\theta$, there is a dependence on $\theta$ in the probability measure $\P$ in the definition of confidence level. However, we usually suppress this, just to keep the notation simple. Usually, we try to construct a confidence set for $\theta$ with a prescribed confidence level $1 - \alpha$ where $0 \lt \alpha \lt 1$. Typical confidence levels are 0.9, 0.95, and 0.99. Sometimes the best we can do is to construct a confidence set whose confidence level is at least $1 - \alpha$; this is called a conservative $1 - \alpha$ confidence set for $\theta$.
Suppose that $C(\bs{X})$ is $1 - \alpha$ level confidence set for a parameter $\theta$. Note that when we run the experiment and observe the data $\bs{x}$, the computed confidence set is $C(\bs{x})$. The true value of $\theta$ is either in this set, or is not, and we will usually never know. However, by the law of large numbers, if we were to repeat the confidence experiment over and over, the proportion of sets that contain $\theta$ would converge to $\P[\theta \in C(\bs{X})] = 1 - \alpha$. This is the precise meaning of the term confidence. In the usual terminology of statistics, the random set $C(\bs{X})$ is the estimator; the deterministic set $C(\bs{x})$ based on an observed value $\bs{x}$ is the estimate.
Next, note that the quality of a confidence set, as an estimator of $\theta$, is based on two factors: the confidence level and the precision as measured by the size of the set. A good estimator has small size (and hence gives a precise estimate of $\theta$) and large confidence. However, for a given $\bs{X}$, there is usually a tradeoff between confidence level and precision—increasing the confidence level comes only at the expense of increasing the size of the set, and decreasing the size of the set comes only at the expense of decreasing the confidence level. How we measure the size of the confidence set depends on the dimension of the parameter space and the nature of the confidence set. Moreover, the size of the set is usually random, although in some special cases it may be deterministic.
Considering the extreme cases may give us some insight. First, suppose that $C(\bs{X}) = \Theta$. This set estimator has maximum confidence 1, but no precision and hence it is worthless (we already knew that $\theta \in \Theta$). At the other extreme, suppose that $C(\bs{X})$ is a singleton set. This set estimator has the best possible precision, but typically for continuous distributions, would have confidence 0. In between these extremes, hopefully, are set estimators that have high confidence and high precision.
Suppose that $C_i(\bs{X})$ is a $1 - \alpha_i$ level confidence set for $\theta$ for $i \in \{1, 2, \ldots, k\}$. If $\alpha = \alpha_1 + \alpha_2 + \cdots + \alpha_k \lt 1$ then $C_1(\bs{X}) \cap C_2(\bs{X}) \cap \cdots \cap C_k(\bs{X})$ is a conservative $1 - \alpha$ level confidence set for $\theta$.
Proof
This follows from Bonferroni's inequality.
Real-Valued Parameters
In many cases, we are interested in estimating a real-valued parameter $\lambda = \lambda(\theta)$ taking values in an interval parameter space $(a, b)$, where $a, \, b \in \R$ with $a \lt b$. Of course, it's possible that $a = -\infty$ or $b = \infty$. In this context our confidence set frequently has the form $C(\bs{X}) = \left\{\theta \in \Theta: L(\bs{X}) \lt \lambda(\theta) \lt U(\bs{X})\right\}$ where $L(\bs{X})$ and $U(\bs{X})$ are real-valued statistics. In this case $(L(\bs{X}), U(\bs{X}))$ is called a confidence interval for $\lambda$. If $L(\bs{X})$ and $U(\bs{X})$ are both random, then the confidence interval is often said to be two-sided. In the special case that $U(\bs{X}) = b$, $L(\bs{X})$ is called a confidence lower bound for $\lambda$. In the special case that $L(\bs{X}) = a$, $U(\bs{X})$ is called a confidence upper bound for $\lambda$.
Suppose that $L(\bs{X})$ is a $1 - \alpha$ level confidence lower bound for $\lambda$ and that $U(\bs{X})$ is a $1 - \beta$ level confidence upper bound for $\lambda$. If $\alpha + \beta \lt 1$ then $(L(\bs{X}), U(\bs{X}))$ is a conservative $1 - (\alpha + \beta)$ level confidence interval for $\lambda$.
Proof
This follows immediately from (2).
Pivot Variables
You might think that it should be very difficult to construct confidence sets for a parameter $\theta$. However, in many important special cases, confidence sets can be constructed easily from certain random variables known as pivot variables.
Suppose that $V$ is a function from $S \times \Theta$ into a set $T$. The random variable $V(\bs{X}, \theta)$ is a pivot variable for $\theta$ if its distribution does not depend on $\theta$. Specifically, $\P[V(\bs{X}, \theta) \in B]$ is constant in $\theta \in \Theta$ for each $B \subseteq T$.
The basic idea is that we try to combine $\bs{X}$ and $\theta$ algebraically in such a way that we factor out the dependence on $\theta$ in the distribution of the resulting random variable $V(\bs{X}, \theta)$. If we know the distribution of the pivot variable, then for a given $\alpha$, we can try to find $B \subseteq T$ (that does not depend on $\theta$) such that $\P_\theta\left[V(\bs{X}, \theta) \in B\right] = 1 - \alpha$. It then follows that a $1 - \alpha$ confidence set for the parameter is given by $C(\bs{X}) = \{ \theta \in \Theta: V(\bs{X}, \theta) \in B \}$.
Suppose now that our pivot variable $V(\bs{X}, \theta)$ is real-valued, which for simplicity, we will assume has a continuous distribution. For $p \in (0, 1)$, let $v(p)$ denote the quantile of order $p$ for the pivot variable $V(\bs{X}, \theta)$. By the very meaning of pivot variable, $v(p)$ does not depend on $\theta$.
For any $p \in (0, 1)$, a $1 - \alpha$ level confidence set for $\theta$ is $\left\{\theta \in \Theta: v(\alpha - p \alpha) \lt V(\bs{X}, \theta) \lt v(1 - p \alpha)\right\}$
Proof
By definition, the probability of the event is $(1 - p \alpha) - (\alpha - p \alpha) = 1 - \alpha$.
The confidence set above corresponds to $(1 - p) \alpha$ in the left tail and $p \alpha$ in the right tail, in terms of the distribution of the pivot variable $V(\bs{X}, \theta)$. The special case $p = \frac{1}{2}$, the equal-tailed case, is the one most commonly used.
The confidence set (5) is decreasing in $\alpha$ and hence increasing in $1 - \alpha$ (in the sense of the subset relation) for fixed $p$.
For the confidence set (5), we would naturally like to choose $p$ that minimizes the size of the set in some sense. However this is often a difficult problem. The equal-tailed interval, corresponding to $p = \frac{1}{2}$, is the most commonly used case, and is sometimes (but not always) an optimal choice. Pivot variables are far from unique; the challenge is to find a pivot variable whose distribution is known and which gives tight bounds on the parameter (high precision).
Suppose that $V(\bs{X}, \theta)$ is a pivot variable for $\theta$. If $g$ is a function defined on the range of $V$ and $g$ involves no unknown parameters, then $U = g[V(\bs{X}, \theta)]$ is also a pivot variable for $\theta$.
Examples and Special Cases
Location-Scale Families
In the case of location-scale families of distributions, we can easily find pivot variables. Suppose that $Z$ is a real-valued random variable with a continuous distribution that has probability density function $g$, and no unknown parameters. Let $X = \mu + \sigma Z$ where $\mu \in \R$ and $\sigma \in (0, \infty)$ are parameters. Recall that the probability density function of $X$ is given by $f_{\mu, \sigma}(x) = \frac{1}{\sigma} g\left( \frac{x - \mu}{\sigma} \right), \quad x \in \R$ and the corresponding family of distributions is called the location-scale family associated with the distribution of $Z$; $\mu$ is the location parameter and $\sigma$ is the scale parameter. Generally, we are assuming that these parameters are unknown.
Now suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of $X$; this is our observable outcome vector. For each $i$, let $Z_i = \frac{X_i - \mu}{\sigma}$
The random vector $\bs{Z} = (Z_1, Z_2, \ldots, Z_n)$ is a random sample of size $n$ from the distribution of $Z$.
In particular, note that $\bs{Z}$ is a pivot variable for $(\mu, \sigma)$, since $\bs{Z}$ is a function of $\bs{X}$, $\mu$, and $\sigma$, but the distribution of $\bs{Z}$ does not depend on $\mu$ or $\sigma$. Hence, any function of $\bs{Z}$ will also be a pivot variable for $(\mu, \sigma)$, (if the function does not involve the parameters). Of course, some of these pivot variables will be much more useful than others in estimating $\mu$ and $\sigma$. In the following exercises, we will explore two common and important pivot variables.
Let $M(\bs{X})$ and $M(\bs{Z})$ denote the sample means of $\bs{X}$ and $\bs{Z}$, respectively. Then $M(\bs{Z})$ is a pivot variable for $(\mu, \sigma)$ since $M(\bs{Z}) = \frac{M(\bs{X}) - \mu}{\sigma}$
Let $m$ denote the quantile function of the pivot variable $M(\bs{Z})$. For any $p \in (0, 1)$, a $1 - \alpha$ confidence set for $(\mu, \sigma)$ is $Z_{\alpha, p}(\bs{X}) = \{(\mu, \sigma): M(\bs{X}) - m(1 - p \alpha) \sigma \lt \mu \lt M(\bs{X}) - m(\alpha - p \alpha) \sigma \}$
The confidence set constructed above is a cone in the $(\mu, \sigma)$ parameter space, with vertex at $(M(\bs{X}), 0)$ and boundary lines of slopes $-1 / m(1 - p \alpha)$ and $-1 / m(\alpha - p \alpha)$, as shown in the graph below. (Note, however, that both slopes might be negative or both positive.)
The fact that the confidence set is unbounded is clearly not good, but is perhaps not surprising; we are estimating two real parameters with a single real-valued pivot variable. However, if $\sigma$ is known, the confidence set defines a confidence interval for $\mu$. Geometrically, the confidence interval simply corresponds to the horizontal cross section at $\sigma$.
$1 - \alpha$ confidence sets for $(\mu, \sigma)$ are
1. $Z_{\alpha, 1}(\bs{X}) = \{(\mu, \sigma): M(\bs{X}) - m(1 - \alpha) \sigma \lt \mu \lt \infty\}$
2. $Z_{\alpha, 0}(\bs{X}) = \{(\mu, \sigma): - \infty \lt \mu \lt M(\bs{X}) - m(\alpha) \sigma\}$
Proof
In the confidence set constructed above, let $p \uparrow 1$ and $p \downarrow 0$, respectively.
If $\sigma$ is known, then (a) gives a $1 - \alpha$ confidence lower bound for $\mu$ and (b) gives a $1 - \alpha$ confidence upper bound for $\mu$.
Let $S(\bs{X})$ and $S(\bs{Z})$ denote the sample standard deviations of $\bs{X}$ and $\bs{Z}$, respectively. Then $S(\bs{Z})$ is a pivot variable for $(\mu, \sigma)$ and a pivot variable for $\sigma$ since $S(\bs{Z}) = \frac{S(\bs{X})}{\sigma}$
Let $s$ denote the quantile function of $S(\bs{Z})$. For any $\alpha \in (0, 1)$ and $p \in (0, 1)$, a $1 - \alpha$ confidence set for $(\mu, \sigma)$ is $V_{\alpha, p}(\bs{X}) = \left\{(\mu, \sigma): \frac{S(\bs{X})}{s(1 - p \alpha)} \lt \sigma \lt \frac{S(\bs{X})}{s(\alpha - p \alpha)} \right\}$
Note that the confidence set gives no information about $\mu$ since the random variable above is a pivot variable for $\sigma$ alone. The confidence set can also be viewed as a bounded confidence interval for $\sigma$.
$1 - \alpha$ confidence sets for $(\mu, \sigma)$ are
1. $V_{\alpha, 1}(\bs{X}) = \left \{ (\mu, \sigma): S(\bs{X}) / s(1 - \alpha) \lt \sigma \lt \infty \right \}$
2. $V_{\alpha, 0}(\bs{X}) = \left \{ (\mu, \sigma): 0 \lt \sigma \lt S(\bs{X}) / s(\alpha) \right \}$
Proof
In the confidence set constructed above, let $p \uparrow 1$ and $p \downarrow 0$, respectively.
The set in part (a) gives a $1 - \alpha$ confidence lower bound for $\sigma$ and the set in part (b) gives a $1 - \alpha$ confidence upper bound for $\sigma$.
We can intersect the confidence sets corresponding to the two pivot variables to produce conservative, bounded confidence sets.
If $\alpha, \; \beta, \; p, \; q \in (0, 1)$ with $\alpha + \beta \lt 1$ then $Z_{\alpha, p} \cap V_{\beta, q}$ is a conservative $1 - (\alpha + \beta)$ confidence set for $(\mu, \sigma)$.
Proof
The most important location-scale family is the family of normal distributions. The problem of estimation in the normal model is considered in the next section. In the remainder of this section, we will explore another important scale family.
The Exponential Distribution
Recall that the exponential distribution with scale parameter $\sigma \in (0, \infty)$ has probability density function $f(x) = \frac{1}{\sigma} e^{-x / \sigma}, \; x \in [0, \infty)$. It is the scale family associated with the standard exponential distribution, which has probability density function $g(x) = e^{-x}, \; x \in [0, \infty)$. The exponential distribution is widely used to model random times (such as lifetimes and arrival times), particularly in the context of the Poisson model. Now suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the exponential distribution with unknown scale parameter $\sigma$. Let $Y = \sum_{i=1}^n X_i$
The random variable $\frac{2}{\sigma} Y$ has the chi-square distribution with $2 n$ degrees of freedom, and hence is a pivot variable for $\sigma$.
Note that this pivot variable is a multiple of the pivot variable $M(\bs{Z})$ constructed above for general location-scale families (with $\mu = 0$). For $p \in (0, 1)$ and $k \in (0, \infty)$, let $\chi_k^2(p)$ denote the quantile of order $p$ for the chi-square distribution with $k$ degrees of freedom. For selected values of $k$ and $p$, $\chi_k^2(p)$ can be obtained from the special distribution calculator or from most statistical software packages.
Recall that
1. $\chi_k^2(p) \to 0$ as $p \downarrow 0$
2. $\chi_k^2(p) \to \infty$ as $p \uparrow 1$
For any $\alpha \in (0, 1)$ and any $p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\sigma$ is $\left( \frac{2\,Y}{\chi_{2n}^2(1 - p \alpha)}, \frac{2\,Y}{\chi_{2n}^2(\alpha - p \alpha)} \right)$
Note that
1. $2 Y \big/ \chi_{2n}^2(1 - \alpha)$ is a $1 - \alpha$ confidence lower bound for $\sigma$.
2. $2 Y \big/ \chi_{2n}^2(\alpha)$ is a $1 - \alpha$ confidence upper bound for $\sigma$.
Of the two-sided confidence intervals constructed above, we would naturally prefer the one with the smallest length, because this interval gives the most information about the parameter $\sigma$. However, minimizing the length as a function of $p$ is computationally difficult. The two-sided confidence interval that is typically used is the equal tailed interval obtained by letting $p = \frac{1}{2}$: $\left( \frac{2\,Y}{\chi_{2n}^2(1 - \alpha/2)}, \frac{2\,Y}{\chi_{2n}^2(\alpha/2)} \right)$
The lifetime of a certain type of component (in hours) has an exponential distribution with unknown scale parameter $\sigma$. Ten devices are operated until failure; the lifetimes are 592, 861, 1470, 2412, 335, 3485, 736, 758, 530, 1961.
1. Construct the 95% two-sided confidence interval for $\sigma$.
2. Construct the 95% confidence lower bound for $\sigma$.
3. Construct the 95% confidence upper bound for $\sigma$.
Answer
1. $(769.1, 2740.1)$
2. 836.7
3. 2421.9
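A quick check of these answers (a sketch assuming Python with SciPy is available; the chi-square quantiles come from `scipy.stats.chi2.ppf`):

```python
from scipy.stats import chi2

data = [592, 861, 1470, 2412, 335, 3485, 736, 758, 530, 1961]
n, y = len(data), sum(data)                    # n = 10, y = 13140
alpha = 0.05

# two-sided, equal-tail 95% confidence interval for the scale parameter
lower = 2 * y / chi2.ppf(1 - alpha / 2, 2 * n)
upper = 2 * y / chi2.ppf(alpha / 2, 2 * n)
print(lower, upper)                            # approximately (769.1, 2740.1)

print(2 * y / chi2.ppf(1 - alpha, 2 * n))      # 95% confidence lower bound, about 836.7
print(2 * y / chi2.ppf(alpha, 2 * n))          # 95% confidence upper bound, about 2421.9
```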
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Normal Model
The normal distribution is perhaps the most important distribution in the study of mathematical statistics, in part because of the central limit theorem. As a consequence of this theorem, a measured quantity that is subject to numerous small, random errors will have, at least approximately, a normal distribution. Such variables are ubiquitous in statistical experiments, in subjects varying from the physical and biological sciences to the social sciences.
So in this section, we assume that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with mean $\mu$ and standard deviation $\sigma$. Our goal is to construct confidence intervals for $\mu$ and $\sigma$ individually, and then more generally, confidence sets for $(\mu, \sigma)$. These are among the most important special cases of set estimation. A parallel section on Tests in the Normal Model is in the chapter on Hypothesis Testing. First we need to review some basic facts that will be critical for our analysis.
Recall that the sample mean $M$ and sample variance $S^2$ are $M = \frac{1}{n} \sum_{i=1}^n X_i, \quad S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$
From our study of point estimation, recall that $M$ is an unbiased and consistent estimator of $\mu$ while $S^2$ is an unbiased and consistent estimator of $\sigma^2$. From these basic statistics we can construct the pivot variables that will be used to construct our interval estimates. The following results were established in the section on Special Properties of the Normal Distribution.
Define $Z = \frac{M - \mu}{\sigma \big/ \sqrt{n}}, \quad T = \frac{M - \mu}{S \big/ \sqrt{n}}, \quad V = \frac{n - 1}{\sigma^2} S^2$
1. $Z$ has the standard normal distribution.
2. $T$ has the student $t$ distribution with $n - 1$ degrees of freedom.
3. $V$ has the chi-square distribution with $n - 1$ degrees of freedom.
4. $Z$ and $V$ are independent.
It follows that each of these random variables is a pivot variable for $(\mu, \sigma)$ since the distributions do not depend on the parameters, but the variables themselves functionally depend on one or both parameters. Pivot variables $Z$ and $T$ will be used to construct interval estimates of $\mu$ while $V$ will be used to construct interval estimates of $\sigma^2$. To construct our estimates, we will need quantiles of these standard distributions. The quantiles can be computed using the special distribution calculator or from most mathematical and statistical software packages. Here is the notation we will use:
Let $p \in (0, 1)$ and $k \in \N_+$.
1. $z(p)$ denotes the quantile of order $p$ for the standard normal distribution.
2. $t_k(p)$ denotes the quantile of order $p$ for the student $t$ distribution with $k$ degrees of freedom.
3. $\chi^2_k(p)$ denotes the quantile of order $p$ for the chi-square distribution with $k$ degrees of freedom.
Since the standard normal and student $t$ distributions are symmetric about 0, it follows that $z(1 - p) = -z(p)$ and $t_k(1 - p) = -t_k(p)$ for $p \in (0, 1)$ and $k \in \N_+$. On the other hand, the chi-square distribution is not symmetric.
Confidence Intervals for $\mu$ with $\sigma$ Known
For our first discussion, we assume that the distribution mean $\mu$ is unknown but the standard deviation $\sigma$ is known. This is not always an artificial assumption. There are often situations where $\sigma$ is stable over time, and hence is at least approximately known, while $\mu$ changes because of different treatments. Examples are given in the computational exercises below. The pivot variable $Z$ leads to confidence intervals for $\mu$.
For $\alpha \in (0, 1)$,
1. $\left[M - z\left(1 - \frac{\alpha}{2}\right) \frac{\sigma}{\sqrt{n}}, M + z\left(1 - \frac{\alpha}{2}\right) \frac{\sigma}{\sqrt{n}}\right]$ is a $1 - \alpha$ confidence interval for $\mu$.
2. $M - z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$ is a $1 - \alpha$ confidence lower bound for $\mu$
3. $M + z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$ is a $1 - \alpha$ confidence upper bound for $\mu$
Proof
Since $Z = \frac{M - \mu}{\sigma / \sqrt{n}}$ has the standard normal distribution, each of the following events has probability $1 - \alpha$ by definition of the quantiles:
1. $\left\{-z\left(1 - \frac{\alpha}{2}\right) \le \frac{M - \mu}{\sigma / \sqrt{n}} \le z\left(1 - \frac{\alpha}{2}\right)\right\}$
2. $\left\{\frac{M - \mu}{\sigma / \sqrt{n}} \ge z(1 - \alpha)\right\}$
3. $\left\{\frac{M - \mu}{\sigma / \sqrt{n}} \le -z(1 - \alpha)\right\}$
In each case, solving the inequality for $\mu$ gives the result.
These are the standard interval estimates for $\mu$ when $\sigma$ is known. The two-sided confidence interval in (a) is symmetric about the sample mean $M$, and as the proof shows, corresponds to equal probability $\frac{\alpha}{2}$ in each tail of the distribution of the pivot variable $Z$. But of course, this is not the only two-sided $1 - \alpha$ confidence interval; we can divide the probability $\alpha$ anyway we want between the left and right tails of the distribution of $Z$.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\mu$ is $\left[M - z(1 - p \alpha) \frac{\sigma}{\sqrt{n}}, M - z(\alpha - p \alpha) \frac{\sigma}{\sqrt{n}} \right]$
1. $p = \frac{1}{2}$ gives the symmetric, equal-tail confidence interval.
2. $p \to 0$ gives the interval with the confidence upper bound.
3. $p \to 1$ gives the interval with the confidence lower bound.
Proof
From the normal distribution of $M$ and the definition of the quantile function, $\P \left[ z(\alpha - p \, \alpha) \lt \frac{M - \mu}{\sigma / \sqrt{n}} \lt z(1 - p \alpha) \right] = 1 - \alpha$ The result then follows by solving for $\mu$ in the inequality.
In terms of the distribution of the pivot variable $Z$, as the proof shows, the two-sided confidence interval above corresponds to $p \alpha$ in the right tail and $(1 - p) \alpha$ in the left tail. Next, let's study the length of this confidence interval.
For $\alpha, \, p \in (0, 1)$, the (deterministic) length of the two-sided $1 - \alpha$ confidence interval above is $L = \frac{\left[z(1 - p \alpha) - z(\alpha - p \alpha)\right] \sigma}{\sqrt{n}}$
1. $L$ is a decreasing function of $\alpha$, and $L \downarrow 0$ as $\alpha \uparrow 1$ and $L \uparrow \infty$ as $\alpha \downarrow 0$.
2. $L$ is a decreasing function of $n$, and $L \downarrow 0$ as $n \uparrow \infty$.
3. $L$ is an increasing function of $\sigma$, and $L \downarrow 0$ as $\sigma \downarrow 0$ and $L \uparrow \infty$ as $\sigma \uparrow \infty$.
4. As a function of $p$, $L$ decreases and then increases, with minimum at the point of symmetry $p = \frac{1}{2}$.
The last result shows again that there is a tradeoff between the confidence level and the length of the confidence interval. If $n$ and $p$ are fixed, we can decrease $L$, and hence tighten our estimate, only at the expense of decreasing our confidence in the estimate. Conversely, we can increase our confidence in the estimate only at the expense of increasing the length of the interval. In terms of $p$, the best of the two-sided $1 - \alpha$ confidence intervals (and the one that is almost always used) is symmetric, equal-tail interval with $p = \frac{1}{2}$:
Use the mean estimation experiment to explore the procedure. Select the normal distribution and select normal pivot. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. As the simulation runs, note that the confidence interval successfully captures the mean if and only if the value of the pivot variable is between the quantiles. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
For the standard confidence intervals, let $d$ denote the distance between the sample mean $M$ and an endpoint. That is, $d = z_\alpha \frac{\sigma}{\sqrt{n}}$ where $z_\alpha = z(1 - \alpha /2 )$ for the two-sided interval and $z_\alpha = z(1 - \alpha)$ for the upper or lower confidence interval. The number $d$ is the margin of error of the estimate.
Note that $d$ is deterministic, and the length of the standard two-sided interval is $L = 2 d$. In many cases, the first step in the design of the experiment is to determine the sample size needed to estimate $\mu$ with a given margin of error and a given confidence level.
The sample size needed to estimate $\mu$ with confidence $1 - \alpha$ and margin of error $d$ is $n = \left \lceil \frac{z_\alpha^2 \sigma^2}{d^2} \right\rceil$
Proof
This follows by solving for $n$ in the definition of $d$ above, and then rounding up to the next integer.
Note that $n$ varies directly with $z_\alpha^2$ and with $\sigma^2$ and inversely with $d^2$. This last fact implies a law of diminishing return in reducing the margin of error. For example, if we want to reduce a given margin of error by a factor of $\frac{1}{2}$, we must increase the sample size by a factor of 4.
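Here is a small sketch of this computation (assuming Python with SciPy; the helper function name and the numerical values are arbitrary choices for illustration):

```python
import math
from scipy.stats import norm

def sample_size(sigma, d, alpha=0.05, two_sided=True):
    """Sample size needed to estimate the mean with margin of error d and confidence 1 - alpha."""
    z = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    return math.ceil(z ** 2 * sigma ** 2 / d ** 2)

print(sample_size(sigma=2.0, d=0.5))      # 62
print(sample_size(sigma=2.0, d=0.25))     # 246: halving the margin of error roughly quadruples n
```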
Confidence Intervals for $\mu$ with $\sigma$ Unknown
For our next discussion, we assume that the distribution mean $\mu$ and standard deviation $\sigma$ are unknown, the usual situation. In this case, we can use the $T$ pivot variable, rather than the $Z$ pivot variable, to construct confidence intervals for $\mu$.
For $\alpha \in (0, 1)$,
1. $\left[M - t_{n-1}\left(1 - \frac{\alpha}{2}\right) \frac{S}{\sqrt{n}}, M + t_{n-1}\left(1 - \frac{\alpha}{2}\right) \frac{S}{\sqrt{n}}\right]$ is a $1 - \alpha$ confidence interval for $\mu$.
2. $M - t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$ is a $1 - \alpha$ confidence lower bound for $\mu$
3. $M + t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$ is a $1 - \alpha$ confidence upper bound for $\mu$
Proof
Since $T = \frac{M - \mu}{S / \sqrt{n}}$ has the $t$ distribution with $n - 1$ degrees of freedom, each of the following events has probability $1 - \alpha$, by definition of the quantiles:
1. $\left\{-t_{n-1}\left(1 - \frac{\alpha}{2}\right) \le \frac{M - \mu}{S / \sqrt{n}} \le t_{n-1}\left(1 - \frac{\alpha}{2}\right)\right\}$
2. $\left\{\frac{M - \mu}{S / \sqrt{n}} \ge t_{n-1}(1 - \alpha)\right\}$
3. $\left\{\frac{M - \mu}{S / \sqrt{n}} \le -t_{n-1}(1 - \alpha)\right\}$
In each case, solving for $\mu$ in the inequality gives the result.
These are the standard interval estimates of $\mu$ with $\sigma$ unknown. The two-sided confidence interval in (a) is symmetric about the sample mean $M$ and corresponds to equal probability $\frac{\alpha}{2}$ in each tail of the distribution of the pivot variable $T$. As before, this is not the only confidence interval; we can divide $\alpha$ between the left and right tails any way that we want.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\mu$ is $\left[M - t_{n-1}(1 - p \alpha) \frac{S}{\sqrt{n}}, M - t_{n-1}(\alpha - p \alpha) \frac{S}{\sqrt{n}} \right]$
1. $p = \frac{1}{2}$ gives the symmetric, equal-tail confidence interval.
2. $p \to 0$ gives the interval with the confidence upper bound.
3. $p \to 1$ gives the interval with the confidence lower bound.
Proof
Since $T$ has the student $t$ distribution with $n - 1$ degrees of freedom, it follows from the definition of the quantiles that $\P \left[ t_{n-1}(\alpha - p \alpha) \lt \frac{M - \mu}{S \big/ \sqrt{n}} \lt t_{n-1}(1 - p \alpha) \right] = 1 - \alpha$ The result then follows by solving for $\mu$ in the inequality.
The two-sided confidence interval above corresponds to $p \alpha$ in the right tail and $(1 - p) \alpha$ in the left tail of the distribution of the pivot variable $T$. Next, let's study the length of this confidence interval.
For $\alpha, \, p \in (0, 1)$, the (random) length of the two-sided $1 - \alpha$ confidence interval above is $L = \frac{t_{n-1}(1 - p \alpha) - t_{n-1}(\alpha - p \alpha)} {\sqrt{n}} S$
1. $L$ is a decreasing function of $\alpha$, and $L \downarrow 0$ as $\alpha \uparrow 1$ and $L \uparrow \infty$ as $\alpha \downarrow 0$.
2. As a function of $p$, $L$ decreases and then increases, with minimum at the point of symmetry $p = \frac{1}{2}$.
3. $\E(L) = \frac{[t_{n-1}(1 - p \alpha) - t_{n-1}(\alpha - p \alpha)] \sqrt{2} \sigma \Gamma(n/2)}{\sqrt{n (n - 1)} \Gamma[(n - 1)/2]}$
4. $\var(L) = \frac{1}{n}\left[t_{n-1}(1 - p \alpha) - t_{n-1}(\alpha - p \alpha)\right]^2 \sigma^2\left[1 - \frac{2 \Gamma^2(n / 2)}{(n - 1) \Gamma^2[(n - 1) / 2]}\right]$
Proof
Parts (a) and (b) follow from properties of the student quantile function $t_{n-1}$. Parts (c) and (d) follow from the fact that $\frac{\sqrt{n - 1}}{\sigma} S$ has a chi distribution with $n - 1$ degrees of freedom.
Once again, there is a tradeoff between the confidence level and the length of the confidence interval. If $n$ and $p$ are fixed, we can decrease $L$, and hence tighten our estimate, only at the expense of decreasing our confidence in the estimate. Conversely, we can increase our confidence in the estimate only at the expense of increasing the length of the interval. In terms of $p$, the best of the two-sided $1 - \alpha$ confidence intervals (and the one that is almost always used) is symmetric, equal-tail interval with $p = \frac{1}{2}$. Finally, note that it does not really make sense to consider $L$ as a function of $S$, since $S$ is a statistic rather than an algebraic variable. Similarly, it does not make sense to consider $L$ as a function of $n$, since changing $n$ means new data and hence a new value of $S$.
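As a concrete sketch of the standard equal-tail interval (assuming Python with NumPy and SciPy; the data here are simulated, and the true parameter values and seed are arbitrary):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(seed=5)
x = rng.normal(loc=10.0, scale=2.0, size=30)    # simulated normal sample
n, alpha = len(x), 0.05

m = np.mean(x)
s = np.std(x, ddof=1)                           # sample standard deviation (divisor n - 1)
half_width = t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n)

print(m - half_width, m + half_width)           # 95% two-sided confidence interval for the mean
```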
Use the mean estimation experiment to explore the procedure. Select the normal distribution and the $T$ pivot. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. As the simulation runs, note that the confidence interval successfully captures the mean if and only if the value of the pivot variable is between the quantiles. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
Confidence Intervals for $\sigma^2$
Next we will construct confidence intervals for $\sigma^2$ using the pivot variable $V$ given above.
For $\alpha \in (0, 1)$,
1. $\left[\frac{n - 1}{\chi^2_{n-1}\left(1 - \alpha / 2\right)} S^2,\frac{n - 1}{\chi^2_{n-1}\left(\alpha / 2\right)} S^2\right]$ is a $1 - \alpha$ confidence interval for $\sigma^2$
2. $\frac{n - 1}{\chi^2_{n-1}\left(1 - \alpha\right)} S^2$ is a $1 - \alpha$ confidence lower bound for $\sigma^2$
3. $\frac{n - 1}{\chi^2_{n-1}(\alpha)} S^2$ is a $1 - \alpha$ confidence upper bound for $\sigma^2$.
Proof
Since $V = \frac{n - 1}{\sigma^2} S^2$ has the chi-square distribution with $n - 1$ degrees of freedom, each of the following events has probability $1 - \alpha$ by definition of the quantiles:
1. $\left\{\chi^2_{n-1}(\alpha / 2) \le \frac{n - 1}{\sigma^2} S^2 \le \chi^2_{n-1}(1 - \alpha / 2)\right\}$
2. $\left\{\frac{n - 1}{\sigma^2} S^2 \le \chi^2_{n-1}(1 - \alpha)\right\}$
3. $\left\{\frac{n - 1}{\sigma^2} S^2 \ge \chi^2_{n-1}(\alpha)\right\}$
In each case, solving for $\sigma^2$ in the inequality gives the result.
These are the standard interval estimates for $\sigma^2$. The two-sided interval in (a) is the equal-tail interval, corresponding to probability $\alpha / 2$ in each tail of the distribution of the pivot variable $V$. Note however that this interval is not symmetric about the sample variance $S^2$. Once again, we can partition the probability $\alpha$ between the left and right tails of the distribution of $V$ any way that we like.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\sigma^2$ is $\left[\frac{n - 1}{\chi^2_{n-1}(1 - p \alpha)} S^2, \frac{n - 1}{\chi^2_{n-1}(\alpha - p \alpha)} S^2\right]$
1. $p = \frac{1}{2}$ gives the equal-tail $1 - \alpha$ confidence interval.
2. $p \to 0$ gives the interval with the $1 - \alpha$ upper bound
3. $p \to 1$ gives the interval with the $1 - \alpha$ lower bound.
In terms of the distribution of the pivot variable $V$, the confidence interval above corresponds to $p \alpha$ in the right tail and $(1 - p) \alpha$ in the left tail. Once again, let's look at the length of the general two-sided confidence interval. The length is random, but is a multiple of the sample variance $S^2$. Hence we can compute the expected value and variance of the length.
For $\alpha, \, p \in (0, 1)$, the (random) length of the two-sided confidence interval in the last theorem is $L = \left[\frac{1}{\chi^2_{n-1}(\alpha - p \alpha)} - \frac{1}{\chi^2_{n-1}(1 - p \alpha)}\right] (n - 1) S^2$
1. $\E(L) = \left[\frac{1}{\chi^2_{n-1}(\alpha - p \alpha)} - \frac{1}{\chi^2_{n-1}(1 - p \alpha)}\right] (n - 1) \sigma^2$
2. $\var(L) = 2 \left[\frac{1}{\chi^2_{n-1}(\alpha - p \alpha)} - \frac{1}{\chi^2_{n-1}(1 - p \alpha)}\right]^2 (n - 1) \sigma^4$
To construct an optimal two-sided confidence interval, it would be natural to find $p$ that minimizes the expected length. This is a complicated problem, but it turns out that for large $n$, the equal-tail interval with $p = \frac{1}{2}$ is close to optimal. Of course, taking square roots of the endpoints of any of the confidence intervals for $\sigma^2$ gives $1 - \alpha$ confidence intervals for the distribution standard deviation $\sigma$.
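A minimal sketch of the equal-tail interval for $\sigma^2$ and the corresponding interval for $\sigma$ (assuming Python with NumPy and SciPy; the simulated data and parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(seed=13)
x = rng.normal(loc=0.0, scale=3.0, size=25)     # simulated normal sample with sigma = 3
n, alpha = len(x), 0.05

s2 = np.var(x, ddof=1)                          # sample variance (divisor n - 1)
lower = (n - 1) * s2 / chi2.ppf(1 - alpha / 2, n - 1)
upper = (n - 1) * s2 / chi2.ppf(alpha / 2, n - 1)

print(lower, upper)                             # 95% equal-tail confidence interval for sigma^2
print(np.sqrt(lower), np.sqrt(upper))           # corresponding confidence interval for sigma
```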
Use the variance estimation experiment to explore the procedure. Select the normal distribution. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. As the simulation runs, note that the confidence interval successfully captures the standard deviation if and only if the value of the pivot variable is between the quantiles. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
Confidence Sets for $(\mu, \sigma)$
In the discussion above, we constructed confidence intervals for $\mu$ and for $\sigma$ separately (again, usually both parameters are unknown). In our next discussion, we will consider confidence sets for the parameter point $(\mu, \sigma)$. These sets will be subsets of the underlying parameter space $\R \times (0, \infty)$.
Confidence Sets Constructed from the Pivot Variables
Each of the pivot variables $Z$, $T$, and $V$ can be used to construct confidence sets for $(\mu, \sigma)$. In isolation, each will produce an unbounded confidence set, not surprising since, we are using a single pivot variable to estimate two parameters. We consider the normal pivot variable $Z$ first.
For any $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ level confidence set for $(\mu, \sigma)$ is $Z_{\alpha,p} = \left\{ (\mu, \sigma): M - z(1 - p \alpha) \frac{\sigma}{\sqrt{n}} \lt \mu \lt M - z(\alpha - p \alpha) \frac{\sigma}{\sqrt{n}} \right\}$ The confidence set is a cone in the $(\mu, \sigma)$ parameter space, with vertex at $(M, 0)$ and boundary lines of slopes $-\sqrt{n} \big/ z(1 - p \alpha)$ and $-\sqrt{n} \big/ z(\alpha - p \alpha)$
Proof
From the normal distribution of $M$ and the definition of the quantile function, $\P \left[ z(\alpha - p \, \alpha) \lt \frac{M - \mu}{\sigma / \sqrt{n}} \lt z(1 - p \alpha) \right] = 1 - \alpha$ The result then follows by solving for $\mu$ in the inequality.
The confidence cone is shown in the graph below. (Note, however, that both slopes might be negative or both positive.)
The pivot variable $T$ leads to the following result:
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ level confidence set for $(\mu, \sigma)$ is $T_{\alpha, p} = \left\{ (\mu, \sigma): M - t_{n-1}(1 - p \alpha) \frac{S}{\sqrt{n}} \lt \mu \lt M - t_{n-1}(\alpha - p \alpha) \frac{S}{\sqrt{n}} \right\}$
Proof
By design, this confidence set gives no information about $\sigma$. Finally, the pivot variable $V$ leads to the following result:
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ level confidence set for $(\mu, \sigma)$ is $V_{\alpha, p} = \left\{ (\mu, \sigma): \frac{(n - 1)S^2}{\chi_{n-1}^2(1 - p \alpha)} \lt \sigma^2 \lt \frac{(n - 1)S^2}{\chi_{n-1}^2(\alpha - p \alpha)} \right\}$
Proof
By design, this confidence set gives no information about $\mu$.
Intersections
We can now form intersections of some of the confidence sets constructed above to obtain bounded confidence sets for $(\mu, \sigma)$. We will use the fact that the sample mean $M$ and the sample variance $S^2$ are independent, one of the most important special properties of a normal sample. We will also need the result from the Introduction on the intersection of confidence interals. In the following theorems, suppose that $\alpha, \, \beta, \, p, \, q \in (0, 1)$ with $\alpha + \beta \lt 1$.
The set $T_{\alpha, p} \cap V_{\beta, q}$ is a conservative $1 - (\alpha + \beta)$ confidence sets for $(\mu, \sigma)$.
The set $Z_{\alpha, p} \cap V_{\beta, q}$ is a $(1 - \alpha)(1 - \beta)$ confidence set for $(\mu, \sigma)$.
It is interesting to note that the confidence set $T_{\alpha, p} \cap V_{\beta, q}$ is a product set as a subset of the parameter space, but is not a product set as a subset of the sample space. By contrast, the confidence set $Z_{\alpha, p} \cap V_{\beta, q}$ is not a product set as a subset of the parameter space, but is a product set as a subset of the sample space.
Exercises
Robustness
The main assumption that we made was that the underlying sampling distribution is normal. Of course, in real statistical problems, we are unlikely to know much about the sampling distribution, let alone whether or not it is normal. When a statistical procedure works reasonably well, even when the underlying assumptions are violated, the procedure is said to be robust. In this subsection, we will explore the robustness of the estimation procedures for $\mu$ and $\sigma$.
Suppose in fact that the underlying distribution is not normal. When the sample size $n$ is relatively large, the distribution of the sample mean will still be approximately normal by the central limit theorem. Thus, our interval estimates of $\mu$ may still be approximately valid.
Use the simulation of the mean estimation experiment to explore the procedure. Select the gamma distribution and select student pivot. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
In the mean estimation experiment, repeat the previous exercise with the uniform distribution.
How large $n$ needs to be for the interval estimation procedures of $\mu$ to work well depends, of course, on the underlying distribution; the more this distribution deviates from normality, the larger $n$ must be. Fortunately, convergence to normality in the central limit theorem is rapid and hence, as you observed in the exercises, we can get away with relatively small sample sizes (30 or more) in most cases.
In general, the interval estimation procedures for $\sigma$ are not robust; there is no analog of the central limit theorem to save us from deviations from normality.
In the variance estimation experiment, select the gamma distribution. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
In the variance estimation experiment, select the uniform distribution. Use various parameter values, confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times. Note the size and location of the confidence intervals and compare the proportion of successful intervals to the theoretical confidence level.
Computational Exercises
In the following exercises, use the equal-tailed construction for two-sided confidence intervals, unless otherwise instructed.
The length of a certain machined part is supposed to be 10 centimeters but due to imperfections in the manufacturing process, the actual length is a normally distributed with mean $\mu$ and variance $\sigma^2$. The variance is due to inherent factors in the process, which remain fairly stable over time. From historical data, it is known that $\sigma = 0.3$. On the other hand, $\mu$ may be set by adjusting various parameters in the process and hence may change to an unknown value fairly frequently. A sample of 100 parts has mean 10.2.
1. Construct the 95% confidence interval for $\mu$.
2. Construct the 95% confidence upper bound for $\mu$.
3. Construct the 95% confidence lower bound for $\mu$.
Answer
1. $(10.14, 10.26)$
2. 10.25
3. 10.15
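A quick numerical check of these answers (a sketch assuming Python with SciPy; the printed values are rounded in the answers above):

```python
from scipy.stats import norm

m, sigma, n, alpha = 10.2, 0.3, 100, 0.05
se = sigma / n ** 0.5                    # standard error of the sample mean

z2 = norm.ppf(1 - alpha / 2)             # quantile for the two-sided interval
z1 = norm.ppf(1 - alpha)                 # quantile for the one-sided bounds

print(m - z2 * se, m + z2 * se)          # two-sided 95% interval, about (10.14, 10.26)
print(m + z1 * se)                       # 95% upper bound, about 10.25
print(m - z1 * se)                       # 95% lower bound, about 10.15
```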
Suppose that the weight of a bag of potato chips (in grams) is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, both unknown. A sample of 75 bags has mean 250 and standard deviation 10.
1. Construct the 90% confidence interval for $\mu$.
2. Construct the 90% confidence interval for $\sigma$.
3. Construct a conservative 90% confidence rectangle for $(\mu, \sigma)$.
Answer
1. $(248.1, 251.9)$
2. $(8.8, 11.6)$
3. $(247.70, 252.30) \times (8.62, 11.92)$
At a telemarketing firm, the length of a telephone solicitation (in seconds) is a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$, both unknown. A sample of 50 calls has mean length 300 and standard deviation 60.
1. Construct the 95% confidence upper bound for $\mu$.
2. Construct the 95% confidence lower bound for $\sigma$.
Answer
1. 314.3.
2. 51.6.
At a certain farm the weight of a peach (in ounces) at harvest time is a normally distributed random variable with standard deviation 0.5. How many peaches must be sampled to estimate the mean weight with a margin of error $\pm 0.2$ and with 95% confidence?
Answer
25
The hourly salary (in dollars) for a certain type of construction work is a normally distributed random variable with standard deviation 1.25 and unknown mean $\mu$. How many workers must be sampled to construct a 95% confidence lower bound for $\mu$ with margin of error 0.25?
Answer
68
Data Analysis Exercises
In Michelson's data, assume that the measured speed of light has a normal distribution with mean $\mu$ and standard deviation $\sigma$, both unknown.
1. Construct the 95% confidence interval for $\mu$. Is the true value of the speed of light in this interval?
2. Construct the 95% confidence interval for $\sigma$.
3. Explore, in an informal graphical way, the assumption that the underlying distribution is normal.
Answer
1. $(836.8, 868.0)$. No, the true value is not in the interval.
2. $(69.4, 91.8)$
In Cavendish's data, assume that the measured density of the earth has a normal distribution with mean $\mu$ and standard deviation $\sigma$, both unknown.
1. Construct the 95% confidence interval for $\mu$. Is the true value of the density of the earth in this interval?
2. Construct the 95% confidence interval for $\sigma$.
3. Explore, in an informal graphical way, the assumption that the underlying distribution is normal.
Answer
1. $(5.364, 5.532)$. Yes, the true value is in the interval.
2. $(0.1725, 0.3074)$
In Short's data, assume that the measured parallax of the sun has a normal distribution with mean $\mu$ and standard deviation $\sigma$, both unknown.
1. Construct the 95% confidence interval for $\mu$. Is the true value of the parallax of the sun in this interval?
2. Construct the 95% confidence interval for $\sigma$.
3. Explore, in an informal graphical way, the assumption that the underlying distribution is normal.
Answer
1. $(8.410, 8.822)$. Yes, the true value is in the interval.
2. $(0.629, 0.927)$
Suppose that the length of an iris petal of a given type (Setosa, Verginica, or Versicolor) is normally distributed. Use Fisher's iris data to construct 90% two-sided confidence intervals for each of the following parameters.
1. The mean length of a Setosa iris petal.
2. The mean length of a Verginica iris petal.
3. The mean length of a Versicolor iris petal.
Answer
1. $(14.21, 15.03)$
2. $(54.21, 56.83)$
3. $(41.95, 44.49)$
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Introduction
Recall that an indicator variable is a random variable that just takes the values 0 and 1. In applications, an indicator variable indicates which of two complementary events in a random experiment has occurred. Typical examples include
• A manufactured item subject to unavoidable random factors is either defective or acceptable.
• A voter selected from a population either supports a particular candidate or does not.
• A person selected from a population either does or does not have a particular medical condition.
• A student in a class either passes or fails a standardized test.
• A sample of radioactive material either does or does not emit an alpha particle in a specified ten-second period.
Recall also that the distribution of an indicator variable is known as the Bernoulli distribution, named for Jacob Bernoulli, and has probability density function given by $\P(X = 1) = p$, $\P(X = 0) = 1 - p$, where $p \in (0, 1)$ is the basic parameter. In the context of the examples above,
• $p$ is the probability that the manufactured item is defective.
• $p$ is the proportion of voters in the population who favor the candidate.
• $p$ is the proportion of persons in the population that have the medical condition.
• $p$ is the probability that a student in the class will pass the exam.
• $p$ is the probability that the material will emit an alpha particle in the specified period.
Recall that the mean and variance of the Bernoulli distribution are $\E(X) = p$ and $\var(X) = p (1 - p)$. Often in statistical applications, $p$ is unknown and must be estimated from sample data. In this section, we will see how to construct interval estimates for the parameter from sample data. A parallel section on Tests in the Bernoulli Model is in the chapter on Hypothesis Testing.
The One-Sample Model
Preliminaries
Suppose that $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample from the Bernoulli distribution with unknown parameter $p \in [0, 1]$. That is, $\bs X$ is a sequence of Bernoulli trials. From the examples in the introduction above, note that often the underlying experiment is to sample at random from a dichotomous population. When the sampling is with replacement, $\bs X$ really is a sequence of Bernoulli trials. When the sampling is without replacement, the variables are dependent, but the Bernoulli model is still approximately valid if the population size is large compared to the sample size $n$. For more on these points, see the discussion of sampling with and without replacement in the chapter on Finite Sampling Models.
Note that the sample mean of our data vector $\bs X$, namely $M = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample proportion of objects of the type of interest. By the central limit theorem, the standard score $Z = \frac{M - p}{\sqrt{p (1 - p) / n}}$ has approximately a standard normal distribution and hence is (approximately) a pivot variable for $p$. For a given sample size $n$, the distribution of $Z$ is closest to normal when $p$ is near $\frac{1}{2}$ and farthest from normal when $p$ is near 0 or 1 (extreme). Because the pivot variable is (approximately) normally distributed, the construction of confidence intervals for $p$ in this model is similar to the construction of confidence intervals for the distribution mean $\mu$ in the normal model. But of course all of the confidence intervals so constructed are approximate.
As usual, for $r \in (0, 1)$, let $z(r)$ denote the quantile of order $r$ for the standard normal distribution. Values of $z(r)$ can be obtained from the special distribution calculator, or from most statistical software packages.
Basic Confidence Intervals
For $\alpha \in (0, 1)$, the following are approximate $1 - \alpha$ confidence sets for $p$:
1. $\left\{ p \in [0, 1]: M - z(1 - \alpha / 2) \sqrt{p (1 - p) / n} \le p \le M + z(1 - \alpha / 2) \sqrt{p (1 - p) / n} \right\}$
2. $\left\{ p \in [0, 1]: p \le M + z(1 - \alpha) \sqrt{p (1 - p) / n} \right\}$
3. $\left\{ p \in [0, 1]: M - z(1 - \alpha) \sqrt{p (1 - p) / n} \le p \right\}$
Proof
From our discussion above, $(M - p) / \sqrt{p (1 - p) / n}$ has approximately a standard normal distribution. Hence by definition of the quantiles,
1. $\P[-z(1 - \alpha / 2) \le (M - p) / \sqrt{p (1 - p) / n} \le z(1 - \alpha / 2)] \approx 1 - \alpha$
2. $\P[-z(1 - \alpha) \le (M - p) / \sqrt{p (1 - p) / n}] \approx 1 - \alpha$
3. $\P[(M - p) / \sqrt{p (1 - p) / n} \le z(1 - \alpha)] \approx 1 - \alpha$
Solving the inequalities for $p$ in the numerator of $(M - p) / \sqrt{p (1 - p) / n}$ for each event gives the corresponding confidence set.
These confidence sets are actually intervals, known as the Wilson intervals, in honor of Edwin Wilson.
The confidence sets for $p$ in (1) are intervals. Let $U(z) = \frac{n}{n + z^2} \left(M + \frac{z^2}{2 n} + z \sqrt{\frac{M (1 - M)}{n} + \frac{z^2}{4 n^2}}\right)$ Then the following have approximate confidence level $1 - \alpha$ for $p$.
1. The two-sided interval $\left[U[-z(1 - \alpha / 2)], U[z(1 - \alpha / 2)]\right]$.
2. The upper bound $U[z(1 - \alpha)]$.
3. The lower bound $U[-z(1 - \alpha)]$.
Proof
This follows by solving the inequalities in (1) for $p$. For each inequality, we can isolate the square root term, and then square both sides. This gives quadratic inequalities, which can be solved using the quadratic formula.
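As a computational aside, the Wilson endpoint function $U$ is straightforward to evaluate numerically. Here is a minimal Python sketch, assuming the scipy library for the standard normal quantiles; the function names are illustrative only.

```python
from math import sqrt
from scipy.stats import norm

def wilson_endpoint(m, n, z):
    """Wilson endpoint function U(z): m is the sample proportion, n the sample size."""
    return (n / (n + z ** 2)) * (m + z ** 2 / (2 * n)
                                 + z * sqrt(m * (1 - m) / n + z ** 2 / (4 * n ** 2)))

def wilson_two_sided(m, n, alpha=0.05):
    """Equal-tailed, approximate 1 - alpha Wilson interval for p."""
    z = norm.ppf(1 - alpha / 2)
    return wilson_endpoint(m, n, -z), wilson_endpoint(m, n, z)

# Illustration: a sample proportion of 0.427 based on n = 1000 (the poll in the exercises below)
print(wilson_two_sided(0.427, 1000))
```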
As usual, the equal-tailed confidence interval in (a) is not the only two-sided $1 - \alpha$ confidence interval for $p$. We can divide the $\alpha$ probability between the left and right tails of the standard normal distribution in any way that we please.
For $\alpha, \, r \in (0, 1)$, an approximate two-sided $1 - \alpha$ confidence interval for $p$ is $\left[U[z(\alpha - r \alpha)], U[z(1 - r \alpha)]\right]$ where $U$ is the function in (2).
Proof
As in the proof of (1), $\P\left[z(\alpha - r \alpha) \le \frac{M - p}{\sqrt{p (1 - p) / n}} \le z(1 - r \alpha)\right] \approx 1 - \alpha$ Solving for $p$ with the help of the quadratic formula gives the result.
In practice, the equal-tailed $1 - \alpha$ confidence interval in part (a) of (2), obtained by setting $r = \frac{1}{2}$, is the one that is always used. As $r \uparrow 1$, the right endpoint converges to the $1 - \alpha$ confidence upper bound in part (b), and as $r \downarrow 0$ the left endpoint converges to the $1 - \alpha$ confidence lower bound in part (c).
Simplified Confidence Intervals
Simplified approximate $1 - \alpha$ confidence intervals for $p$ can be obtained by replacing the distribution mean $p$ by the sample mean $M$ in the extreme parts of the inequalities in (1).
For $\alpha \in (0, 1)$, the following have approximate confidence level $1 - \alpha$ for $p$:
1. The two-sided interval with endpoints $M \pm z(1 - \alpha / 2) \sqrt{M (1 - M) / n}$.
2. The upper bound $M + z(1 - \alpha) \sqrt{M (1 - M) / n}$.
3. The lower bound $M - z(1 - \alpha) \sqrt{M (1 - M) / n}$.
Proof
As noted, these results follow from the confidence sets in (1) by replacing $p$ with $M$ in the expression $\sqrt{p (1 - p) / n}$.
These confidence intervals are known as Wald intervals, in honor of Abraham Wald. Note that the Wald interval can also be obtained from the Wilson intervals in (2) by assuming that $n$ is large compared to $z$, so that $n \big/ (n + z^2) \approx 1$, $z^2 / 2 n \approx 0$, and $z^2 / 4 n^2 \approx 0$. Note that the two-sided interval in part (a) of (4) is symmetric about the sample proportion $M$ but that the length of the interval, as well as the center, is random. This is the two-sided interval that is normally used.
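Here is a corresponding Python sketch for the Wald interval, under the same assumptions (scipy for the normal quantile; illustrative function names).

```python
from math import sqrt
from scipy.stats import norm

def wald_two_sided(m, n, alpha=0.05):
    """Equal-tailed, approximate 1 - alpha Wald interval for p."""
    z = norm.ppf(1 - alpha / 2)
    d = z * sqrt(m * (1 - m) / n)  # margin of error
    return m - d, m + d

# The poll in the exercises below: 427 of 1000 voters prefer candidate X
print(wald_two_sided(427 / 1000, 1000))  # approximately (0.396, 0.458)
```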
Use the simulation of the proportion estimation experiment to explore the procedure. Use various values of $p$ and various confidence levels, sample sizes, and interval types. For each configuration, run the experiment 1000 times and compare the proportion of successful intervals to the theoretical confidence level.
As always, the equal-tailed interval in (4) is not the only two-sided, $1 - \alpha$ confidence interval.
For $\alpha, \, r \in (0, 1)$, an approximate two-sided $1 - \alpha$ confidence interval for $p$ is $\left[M - z(1 - r \alpha) \sqrt{\frac{M (1 - M)}{n}}, M - z(\alpha - r \alpha) \sqrt{\frac{M (1 - M)}{ n}}\right]$ The interval with smallest length is the equal-tail interval with $r = \frac 1 2$.
Conservative Confidence Intervals
Note that the function $p \mapsto p(1 - p)$ on the interval $[0, 1]$ is maximized when $p = \frac 1 2$ and thus the maximum value is $\frac{1}{4}$. We can obtain conservative confidence intervals for $p$ from the basic confidence intervals by using this fact.
For $\alpha \in (0, 1)$, the following have approximate confidence level at least $1 - \alpha$ for $p$:
1. The two-sided interval with endpoints $M \pm z(1 - \alpha / 2) \frac{1}{2 \sqrt{n}}$.
2. The upper bound $M + z(1 - \alpha) \frac{1}{2 \sqrt{n}}$.
3. The lower bound $M - z(1 - \alpha) \frac{1}{2 \sqrt{n}}$.
Proof
As noted, these results follow from the confidence sets in (1) by replacing $p$ with $\frac 1 2$ in the expression $\sqrt{p (1 - p) / n}$.
Note that the confidence interval in (a) is symmetric about the sample proportion $M$ and that the length of the interval is deterministic. Of course, the conservative confidence intervals will be larger than the approximate simplified confidence intervals in (4). The conservative estimate can be used to design the experiment. Recall that the margin of error is the distance between the sample proportion $M$ and an endpoint of the confidence interval.
A conservative estimate of the sample size $n$ needed to estimate $p$ with confidence $1 - \alpha$ and margin of error $d$ is $n = \left\lceil \frac{z_\alpha^2}{4 d^2} \right\rceil$ where $z_\alpha = z(1 - \alpha / 2)$ for the two-sided interval and $z_\alpha = z(1 - \alpha)$ for the confidence upper or lower bound.
Proof
With confidence level $1 - \alpha$, the margin of error is $z_\alpha \frac{1}{2 \sqrt{n}}$. Setting this equal to the prescribed value $d$ and solving gives the result.
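As a sketch, the conservative sample size computation can be coded directly (scipy assumed; the function name is illustrative). The first design exercise below serves as a check.

```python
from math import ceil
from scipy.stats import norm

def conservative_sample_size(d, alpha, two_sided=True):
    """Conservative sample size for estimating p with margin of error d at confidence 1 - alpha."""
    z = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    return ceil(z ** 2 / (4 * d ** 2))

print(conservative_sample_size(0.03, 0.05))  # 1068, as in the drug company exercise below
```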
As always, the equal-tailed interval in (7) is not the only two-sided, conservative, $1 - \alpha$ confidence interval.
For $\alpha, \, r \in (0, 1)$, an approximate two-sided, conservative $1 - \alpha$ confidence interval for $p$ is $\left[M - z(1 - r \alpha) \frac{1}{2 \sqrt{n}}, M - z(\alpha - r \alpha) \frac{1}{2 \sqrt{n}}\right]$ The interval with smallest length is the equal-tail interval with $r = \frac 1 2$.
The Two-Sample Model
Preliminaries
Often we have two underlying Bernoulli distributions, with parameters $p_1, \, p_2 \in [0, 1]$ and we would like to estimate the difference $p_1 - p_2$. This problem could arise in the following typical examples:
• In a quality control setting, suppose that $p_1$ is the proportion of defective items produced under one set of manufacturing conditions while $p_2$ is the proportion of defectives under a different set of conditions.
• In an election, suppose that $p_1$ is the proportion of voters who favor a particular candidate at one point in the campaign, while $p_2$ is the proportion of voters who favor the candidate at a later point (perhaps after a scandal has erupted).
• Suppose that $p_1$ is the proportion of students who pass a certain standardized test with the usual test preparation methods while $p_2$ is the proportion of students who pass the test with a new set of preparation methods.
• Suppose that $p_1$ is the proportion of unvaccinated persons in a certain population who contract a certain disease, while $p_2$ is the proportion of vaccinated persons who contract the disease.
Note that several of these examples can be thought of as treatment-control problems. Of course, we could construct interval estimates $I_1$ for $p_1$ and $I_2$ for $p_2$ separately, as in the subsections above. But as we noted in the Introduction, if these two intervals have confidence level $1 - \alpha$, then the product set $I_1 \times I_2$ has confidence level $(1 - \alpha)^2$ for $(p_1, p_2)$. So if $p_1 - p_2$ is our parameter of interest, we will use a different approach.
Simplified Confidence Intervals
Suppose now that $\bs X = (X_1, X_2, \ldots, X_{n_1})$ is a random sample of size $n_1$ from the Bernoulli distribution with parameter $p_1$, and $\bs Y = (Y_1, Y_2, \ldots, Y_{n_2})$ is a random sample of size $n_2$ from the Bernoulli distribution with parameter $p_2$. We assume that the samples $\bs X$ and $\bs Y$ are independent. Let $M_1 = \frac{1}{n_1} \sum_{i=1}^{n_1} X_i, \quad M_2 = \frac{1}{n_2} \sum_{i=1}^{n_2} Y_i$ denote the sample means (sample proportions) for the samples $\bs X$ and $\bs Y$. A natural point estimate for $p_1 - p_2$, and the building block for our interval estimate, is $M_1 - M_2$. As noted in the one-sample model, if $n_i$ is large, $M_i$ has an approximate normal distribution with mean $p_i$ and variance $p_i (1 - p_i) / n_i$ for $i \in \{1, 2\}$. Since the samples are independent, so are the sample means. Hence $M_1 - M_2$ has an approximate normal distribution with mean $p_1 - p_2$ and variance $p_1 (1 - p_1) / n_1 + p_2 (1 - p_2) / n_2$. We now have all the tools we need for a simplified, approximate confidence interval for $p_1 - p_2$.
For $\alpha \in (0, 1)$, the following have approximate confidence level $1 - \alpha$ for $p_1 - p_2$:
1. The two-sided interval with endpoints $(M_1 - M_2) \pm z\left(1 - \alpha / 2\right) \sqrt{M_1 (1 - M_1) / n_1 + M_2 (1 - M_2) / n_2}$.
2. The lower bound $(M_1 - M_2) - z(1 - \alpha) \sqrt{M_1 (1 - M_1) / n_1 + M_2 (1 - M_2) / n_2}$.
3. The upper bound $(M_1 - M_2) + z(1 - \alpha) \sqrt{M_1 (1 - M_1) / n_1 + M_2 (1 - M_2) / n_2}$.
Proof
As noted above, if $n_1$ and $n_2$ are large, $\frac{(M_1 - M_2) - (p_1 - p_2)}{\sqrt{p_1(1 - p_1) / n_1 + p_2(1 - p_2)/n_2}}$ has approximately a standard normal distribution, and hence so does $Z = \frac{(M_1 - M_2) - (p_1 - p_2)}{\sqrt{M_1(1 - M_1) / n_1 + M_2(1 - M_2)/n_2}}$
1. $\P[-z(1 - \alpha / 2) \le Z \le z(1 - \alpha / 2)] \approx 1 - \alpha$. Solving for $p_1 - p_2$ gives the two-sided confidence interval.
2. $\P[Z \le z(1 - \alpha)] \approx 1 - \alpha$. Solving for $p_1 - p_2$ gives the confidence lower bound.
3. $\P[-z(1 - \alpha) \le Z] \approx 1 - \alpha$. Solving for $p_1 - p_2$ gives the confidence upper bound.
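Here is a minimal Python sketch of the simplified two-sample interval, again assuming scipy and with illustrative function names.

```python
from math import sqrt
from scipy.stats import norm

def two_sample_prop_interval(m1, n1, m2, n2, alpha=0.05):
    """Equal-tailed, approximate 1 - alpha interval for p1 - p2 from sample proportions m1, m2."""
    z = norm.ppf(1 - alpha / 2)
    se = sqrt(m1 * (1 - m1) / n1 + m2 * (1 - m2) / n2)
    diff = m1 - m2
    return diff - z * se, diff + z * se

# The influenza exercise below: 45/500 unvaccinated versus 20/300 vaccinated, at the 99% level
print(two_sample_prop_interval(45 / 500, 500, 20 / 300, 300, alpha=0.01))
```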
As always, the equal-tailed interval in (a) is not the only approximate two-sided $1 - \alpha$ confidence interval.
For $\alpha, \, r \in (0, 1)$, an approximate $1 - \alpha$ confidence set for $p_1 - p_2$ is $\left[(M_1 - M_2) - z(1 - r \alpha) \sqrt{M_1 (1 - M_1) / n_1 + M_2 (1 - M_2) / n_2}, (M_1 - M_2) - z(\alpha - r \alpha) \sqrt{M_1 (1 - M_1) / n_1 + M_2 (1 - M_2) / n_2} \right]$
Proof
As noted in the proof of the previous theorem, $Z = \frac{(M_1 - M_2) - (p_1 - p_2)}{\sqrt{M_1(1 - M_1) / n_1 + M_2(1 - M_2)/n_2}}$ has approximately a standard normal distribution if $n_1$ and $n_2$ are large. Hence $\P[-z(\alpha - r \alpha) \le Z \le z(1 - r \alpha)] \approx 1 - \alpha$. Solving for $p_1 - p_2$ gives the two-sided confidence interval.
Conservative Confidence Intervals
Once again, $p \mapsto p (1 - p)$ is maximized when $p = \frac 1 2$ with maximum value $\frac 1 4$. We can use this to construct approximate conservative confidence intervals for $p_1 - p_2$.
For $\alpha \in (0, 1)$, the following have approximate confidence level at least $1 - \alpha$ for $p_1 - p_2$:
1. The two-sided interval with endpoints $(M_1 - M_2) \pm \frac{1}{2} z\left(1 - \alpha / 2\right) \sqrt{1 / n_1 + 1 / n_2}$.
2. The lower bound $(M_1 - M_2) - \frac{1}{2} z(1 - \alpha) \sqrt{1 / n_1 + 1 / n_2}$.
3. The upper bound $(M_1 - M_2) + \frac{1}{2} z(1 - \alpha) \sqrt{1 / n_1 + 1 / n_2}$.
Proof
These results follow from the previous theorem by replacing $M_1 (1 - M_1)$ and $M_2 (1 - M_2)$ each with $\frac 1 4$.
Computational Exercises
In a poll of 1000 registered voters in a certain district, 427 prefer candidate X. Construct the 95% two-sided confidence interval for the proportion of all registered voters in the district that prefer X.
Answer
$(0.396, 0.458)$
A coin is tossed 500 times and results in 302 heads. Construct the 95% confidence lower bound for the probability of heads. Do you believe that the coin is fair?
Answer
0.579. No, the coin is almost certainly not fair.
A sample of 400 memory chips from a production line are tested, and 30 are defective. Construct the conservative 90% two-sided confidence interval for the proportion of defective chips.
Answer
$(0.034, 0.116)$
A drug company wants to estimate the proportion of persons who will experience an adverse reaction to a certain new drug. The company wants a two-sided interval with margin of error 0.03 with 95% confidence. How large should the sample be?
Answer
1068
An advertising agency wants to construct a 99% confidence lower bound for the proportion of dentists who recommend a certain brand of toothpaste. The margin of error is to be 0.02. How large should the sample be?
Answer
3382
The Buffon trial data set gives the results of 104 repetitions of Buffon's needle experiment. Theoretically, the data should correspond to Bernoulli trials with $p = 2 / \pi$, but because real students dropped the needle, the true value of $p$ is unknown. Construct a 95% confidence interval for $p$. Do you believe that $p$ is the theoretical value?
Answer
$(0.433, 0.634)$. The theoretical value is approximately 0.637, which is not in the confidence interval.
A manufacturing facility has two production lines for a certain item. In a sample of 150 items from line 1, 12 are defective. From a sample of 130 items from line 2, 10 are defective. Construct the two-sided 95% confidence interval for $p_1 - p_2$, where $p_i$ is the proportion of defective items from line $i$, for $i \in \{1, 2\}$
Answer
$[-0.050, 0.056]$
The vaccine for influenza is tailored each year to match the predicted dominant strain of influenza. Suppose that of 500 unvaccinated persons, 45 contracted the flu in a certain time period. Of 300 vaccinated persons, 20 contracted the flu in the same time period. Construct the two-sided 99% confidence interval for $p_1 - p_2$, where $p_1$ is the incidence of flu in the unvaccinated population and $p_2$ the incidence of flu in the vaccinated population.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\bs}{\boldsymbol}$
As we have noted before, the normal distribution is perhaps the most important distribution in the study of mathematical statistics, in part because of the central limit theorem. As a consequence of this theorem, measured quantities that are subject to numerous small, random errors will have, at least approximately, normal distributions. Such variables are ubiquitous in statistical experiments, in subjects varying from the physical and biological sciences to the social sciences.
In this section, we will study estimation problems in the two-sample normal model and in the bivariate normal model. This section parallels the section on Tests in the Two-Sample Normal Model in the Chapter on Hypothesis Testing.
The Two-Sample Normal Model
Preliminaries
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_m)$ is a random sample of size $m$ from the normal distribution with mean $\mu$ and standard deviation $\sigma$, and that $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a random sample of size $n$ from the normal distribution with mean $\nu$ and standard deviation $\tau$. Moreover, suppose that the samples $\bs{X}$ and $\bs{Y}$ are independent. Usually, the parameters are unknown, so the parameter space for our vector of parameters $(\mu, \nu, \sigma, \tau)$ is $\R^2 \times (0, \infty)^2$.
This type of situation arises frequently when the random variables represent a measurement of interest for the objects of the population, and the samples correspond to two different treatments. For example, we might be interested in the blood pressure of a certain population of patients. The $\bs{X}$ vector records the blood pressures of a control sample, while the $\bs{Y}$ vector records the blood pressures of the sample receiving a new drug. Similarly, we might be interested in the yield of an acre of corn. The $\bs{X}$ vector records the yields of a sample receiving one type of fertilizer, while the $\bs{Y}$ vector records the yields of a sample receiving a different type of fertilizer.
Usually our interest is in a comparison of the parameters (either the means or standard deviations) for the two sampling distributions. In this section we will construct confidence intervals for the difference of the distribution means $\nu - \mu$ and for the ratio of the distribution variances $\tau^2 / \sigma^2$. As with previous estimation problems, the construction depends on finding appropriate pivot variables.
For a generic sample $\bs{U} = (U_1, U_2, \ldots, U_k)$ from a distribution with mean $a$, we will use our standard notation for the sample mean and for the sample variance. \begin{align} M(\bs{U}) & = \frac{1}{k} \sum_{i=1}^k U_i \\ S^2(\bs{U}) & = \frac{1}{k - 1} \sum_{i=1}^k [U_i - M(\bs{U})]^2 \end{align} We will need to also recall the special properties of these statistics when the sampling distribution is normal. The special pivot distributions that will play a fundamental role in this section are the standard normal, the student $t$, and the Fisher $F$ distributions. To construct our interval estimates we will need the quantiles of these distributions. The quantiles can be computed using the special distribution calculator or from most mathematical and statistical software packages. Here is the notation we will use:
Let $p \in (0, 1)$ and let $j, \, k \in \N_+$.
1. $z(p)$ denotes the quantile of order $p$ for the standard normal distribution.
2. $t_k(p)$ denotes the quantile of order $p$ for the student $t$ distribution with $k$ degrees of freedom.
3. $f_{j,k}(p)$ denotes the quantile of order $p$ for the $F$ distribution with $j$ degrees of freedom in the numerator and $k$ degrees of freedom in the denominator.
Recall that by symmetry, $z(p) = -z(1 - p)$ and $t_k(p) = -t_k(1 - p)$ for $p \in (0, 1)$ and $k \in \N_+$. On the other hand, there is no simple relationship between the left and right tail probabilities of the $F$ distribution.
Confidence Intervals for the Difference of the Means with Known Variances
First we will construct confidence intervals for $\nu - \mu$ under the assumption that the distribution variances $\sigma^2$ and $\tau^2$ are known. This is not always an artificial assumption. As in the one sample normal model, the variances are sometimes stable, and hence are at least approximately known, while the means change under different treatments. First recall the following basic facts:
The difference of the sample means $M(\bs{Y}) - M(\bs{X})$ has the normal distribution with mean $\nu - \mu$ and variance $\sigma^2 / m + \tau^2 / n$. Hence the standard score of the difference of the sample means $Z = \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{\sqrt{\sigma^2 / m + \tau^2 / n}}$ has the standard normal distribution. Thus, this variable is a pivotal variable for $\nu - \mu$ when $\sigma, \tau$ are known.
The basic confidence interval and upper and lower bound are now easy to construct.
For $\alpha \in (0, 1)$,
1. $\left[M(\bs{Y}) - M(\bs{X}) - z\left(1 - \frac{\alpha}{2}\right) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}, M(\bs{Y}) - M(\bs{X}) + z\left(1 - \frac{\alpha}{2}\right) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}\right]$ is a $1 - \alpha$ confidence interval for $\nu - \mu$.
2. $M(\bs{Y}) - M(\bs{X}) - z(1 - \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}$ is a $1 - \alpha$ confidence lower bound for $\nu - \mu$.
3. $M(\bs{Y}) - M(\bs{X}) + z(1 - \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}$ is a $1 - \alpha$ confidence upper bound for $\nu - \mu$.
Proof
The variable $Z$ given above has the standard normal distribution. Hence each of the following events has probability $1 - \alpha$ by definition of the quantiles:
1. $\left\{-z\left(1 - \frac{\alpha}{2}\right) \le Z \le z\left(1 - \frac{\alpha}{2}\right)\right\}$
2. $\left\{Z \ge z(1 - \alpha)\right\}$
3. $\left\{Z \le -z(1 - \alpha)\right\}$
In each case, solving the inequality for $\nu - \mu$ gives the result.
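As a computational sketch (scipy assumed, names illustrative), the known-variance interval in part (a) can be coded as follows; the summary data in the example call are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def two_sample_z_interval(mx, my, sigma, tau, m, n, alpha=0.05):
    """Equal-tailed 1 - alpha interval for nu - mu when sigma and tau are known."""
    z = norm.ppf(1 - alpha / 2)
    se = sqrt(sigma ** 2 / m + tau ** 2 / n)
    diff = my - mx
    return diff - z * se, diff + z * se

# Hypothetical summary data, purely for illustration
print(two_sample_z_interval(mx=50.3, my=52.1, sigma=4.0, tau=5.0, m=40, n=50))
```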
The two-sided interval in part (a) is the symmetric interval corresponding to $\alpha / 2$ in both tails of the standard normal distribution. As usual, we can construct more general two-sided intervals by partitioning $\alpha$ between the left and right tails in anyway that we please.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\nu - \mu$ is $\left[M(\bs{Y}) - M(\bs{X}) - z(1 - \alpha p) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}, M(\bs{Y}) - M(\bs{X}) - z(\alpha - p \alpha) \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}} \right]$
1. $p = \frac{1}{2}$ gives the symmetric two-sided interval.
2. $p \to 1$ gives the interval with the confidence lower bound.
3. $p \to 0$ gives the interval with confidence upper bound.
Proof
From the distribution of the pivot variable and the definition of the quantile function, $\P \left[ z(\alpha - p \alpha) \lt \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{\sqrt{\sigma^2 / m + \tau^2 / n}} \lt z(1 - p \alpha) \right] = 1 - \alpha$ Solving for $\nu - \mu$ in the inequality gives the confidence interval.
The following theorem gives some basic properties of the length of this interval.
The (deterministic) length of the general two-sided confidence interval is $L = [z(1 - \alpha p) - z(\alpha - \alpha p)] \sqrt{\frac{\sigma^2}{m} + \frac{\tau^2}{n}}$
1. $L$ is a decreasing function of $m$ and a decreasing function of $n$.
2. $L$ is an increasing function of $\sigma$ and an increasing function of $\tau$
3. $L$ is a decreasing function of $\alpha$ and hence an increasing function of the confidence level.
4. As a function of $p$, $L$ decreases and then increases, with minimum value at $p = \frac{1}{2}$.
Part (a) means that we can make the estimate more precise by increasing either or both sample sizes. Part (b) means that the estimate becomes less precise as the variance in either distribution increases. Part (c) we have seen before. All other things being equal, we can increase the confidence level only at the expense of making the estimate less precise. Part (d) means that the symmetric, equal-tail confidence interval is the best of the two-sided intervals.
Confidence Intervals for the Difference of the Means with Unknown Variances
Our next method is a construction of confidence intervals for the difference of the means $\nu - \mu$ without needing to know the standard deviations $\sigma$ and $\tau$. However, there is a cost; we will assume that the standard deviations are the same, $\sigma = \tau$, but the common value is unknown. This assumption is reasonable if there is an inherent variability in the measurement variables that does not change even when different treatments are applied to the objects in the population. We need to recall some basic facts from our study of special properties of normal samples.
The pooled estimate of the common variance $\sigma^2 = \tau^2$ is $S^2(\bs{X}, \bs{Y}) = \frac{(m - 1) S^2(\bs{X}) + (n - 1) S^2(\bs{Y})}{m + n - 2}$ The random variable $T = \frac{\left[M(\bs{Y}) - M(\bs{X})\right] - (\nu - \mu)}{S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}}$ has the student $t$ distribution with $m + n - 2$ degrees of freedom
Note that $S^2(\bs{X}, \bs{Y})$ is a weighted average of the sample variances, with the degrees of freedom as the weight factors. Note also that $T$ is a pivot variable for $\nu - \mu$ and so we can construct confidence intervals for $\nu - \mu$ in the usual way.
For $\alpha \in (0, 1)$,
1. $\left[M(\bs{Y}) - M(\bs{X}) - t_{m + n - 2}\left(1 - \frac{\alpha}{2}\right) S(\bs{X},\bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}, M(\bs{Y}) - M(\bs{X}) + t_{m + n - 2}\left(1 - \frac{\alpha}{2}\right) S(\bs{X},\bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}\right]$ is a $1 - \alpha$ confidence interval for $\nu - \mu$.
2. $M(\bs{Y}) - M(\bs{X}) - t_{m + n - 2}(1 - \alpha) S(\bs{X},\bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}$ is a $1 - \alpha$ confidence lower bound for $\nu - \mu$.
3. $M(\bs{Y}) - M(\bs{X}) + t_{m + n - 2}(1 - \alpha) S(\bs{X},\bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}$ is a $1 - \alpha$ confidence upper bound for $\nu - \mu$.
Proof
The variable $T$ given above has the student $t$ distribution with $m + n - 2$ degrees of freedom. Hence each of the following events has probability $1 - \alpha$ by definition of the quantiles:
1. $\left\{-t_{m+n-2}\left(1 - \frac{\alpha}{2}\right) \le T \le t_{m+n-2}\left(1 - \frac{\alpha}{2}\right)\right\}$
2. $\left\{T \ge t_{m+n-2}(1 - \alpha)\right\}$
3. $\left\{T \le -t_{m+n-2}(1 - \alpha)\right\}$
In each case, solving the inequality for $\nu - \mu$ gives the result.
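Here is a minimal Python sketch of the pooled two-sample $t$ interval, computed from summary statistics (scipy assumed; the function name and the summary data in the example are illustrative only).

```python
from math import sqrt
from scipy.stats import t

def pooled_t_interval(mx, sx, m, my, sy, n, alpha=0.05):
    """Equal-tailed 1 - alpha interval for nu - mu, assuming sigma = tau (pooled variance)."""
    s2 = ((m - 1) * sx ** 2 + (n - 1) * sy ** 2) / (m + n - 2)  # pooled estimate of the common variance
    se = sqrt(s2) * sqrt(1 / m + 1 / n)
    q = t.ppf(1 - alpha / 2, m + n - 2)
    diff = my - mx
    return diff - q * se, diff + q * se

# Hypothetical summary statistics, purely for illustration
print(pooled_t_interval(mx=10.1, sx=1.2, m=30, my=11.4, sy=1.5, n=35, alpha=0.10))
```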
The two-sided interval in part (a) is the symmetric interval corresponding to $\alpha / 2$ in both tails of the student $t$ distribution. As usual, we can construct more general two-sided intervals by partitioning $\alpha$ between the left and right tails in anyway that we please.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence interval for $\nu - \mu$ is $\left[M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}(1 - \alpha p) S(\bs{X}, \bs{Y})\sqrt{\frac{1}{m} + \frac{1}{n}}, M(\bs{Y}) - M(\bs{X}) - t_{m+n-2}(\alpha - p \alpha) S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}} \right]$
1. $p = \frac{1}{2}$ gives the symmetric two-sided interval.
2. $p \to 1$ gives the interval with the confidence lower bound.
3. $p \to 0$ gives the interval with confidence upper bound.
Proof
From the distribution of the pivot variable and the definition of the quantile function, $\P \left[ t_{m+n-2}(\alpha - p \alpha) \lt \frac{[M(\bs{Y}) - M(\bs{X})] - (\nu - \mu)}{S(\bs{X}, \bs{Y})\sqrt{1 / m + 1 / n}} \lt t_{m+n-2}(1 - p \alpha) \right] = 1 - \alpha$ Solving for $\nu - \mu$ in the inequality gives the confidence interval.
The next result considers the length of the general two-sided interval.
The (random) length of the two-sided interval above is $L = [t_{m+n-2}(1 - p \alpha) - t_{m+n-2}(\alpha - p \alpha)] S(\bs{X}, \bs{Y}) \sqrt{\frac{1}{m} + \frac{1}{n}}$
1. $L$ is a decreasing function of $\alpha$ and hence an increasing function of the confidence level.
2. As a function of $p$, $L$ decreases and then increases, with minimum value at $p = \frac{1}{2}$.
As in the case of known variances, part (a) means that all other things being equal, we can increase the confidence level only at the expense of making the estimate less precise. Part (b) means that the symmetric, equal-tail confidence interval is the best of the two-sided intervals.
Confidence Intervals for the Ratio of the Variances
Our next construction will produce interval estimates for the ratio of the variances $\tau^2 / \sigma^2$ (or by taking square roots, for the ratio of the standard deviations $\tau / \sigma$). Once again, we need to recall some basic facts from our study of special properties of random samples from the normal distribution.
The ratio $U = \frac{S^2(\bs{X}) \tau^2}{S^2(\bs{Y}) \sigma^2}$ has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator, and hence this variable is a pivot variable for $\tau^2 / \sigma^2$.
The pivot variable $U$ can be used to construct confidence intervals for $\tau^2 / \sigma^2$ in the usual way.
For $\alpha \in (0, 1)$,
1. $\left[f_{m-1, n-1}\left(\frac{\alpha}{2}\right) \frac{S^2(\bs{Y})}{S^2(\bs{X})}, f_{m-1, n-1}\left(1 - \frac{\alpha}{2}\right) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \right]$ is a $1 - \alpha$ confidence interval for $\tau^2 / \sigma^2$.
2. $f_{m-1, n-1}(\alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})}$ is a $1 - \alpha$ confidence lower bound for $\tau^2 / \sigma^2$.
3. $f_{m-1, n-1}(1 - \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})}$ is a $1 - \alpha$ confidence upper bound for $\tau^2 / \sigma^2$.
Proof
The variable $U$ given above has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator. Hence each of the following events has probability $1 - \alpha$ by definition of the quantiles:
1. $\left\{f_{m-1,n-1}\left(\frac{\alpha}{2}\right) \le U \le f_{m-1,n-1}\left(1 - \frac{\alpha}{2}\right)\right\}$
2. $\left\{U \ge f_{m-1,n-1}(\alpha)\right\}$
3. $\left\{U \le f_{m-1,n-1}(1 - \alpha)\right\}$
In each case, solving the inequality for $\tau^2 / \sigma^2$ gives the result.
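As a computational sketch (scipy assumed, names illustrative), the equal-tailed interval for $\tau^2 / \sigma^2$ can be evaluated as follows; taking square roots of the endpoints gives an interval for $\tau / \sigma$, as in the first computational exercise below.

```python
from scipy.stats import f

def variance_ratio_interval(sx2, m, sy2, n, alpha=0.05):
    """Equal-tailed 1 - alpha interval for tau^2 / sigma^2 from the sample variances sx2, sy2."""
    ratio = sy2 / sx2
    return (f.ppf(alpha / 2, m - 1, n - 1) * ratio,
            f.ppf(1 - alpha / 2, m - 1, n - 1) * ratio)

# The blood-chemical exercise below: m = 36, s(x) = 4, n = 49, s(y) = 6, at the 90% level
lo, hi = variance_ratio_interval(16, 36, 36, 49, alpha=0.10)
print(lo ** 0.5, hi ** 0.5)  # approximately (1.149, 1.936) for tau / sigma, as in the exercise answer
```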
The two-sided confidence interval in part (a) is the equal-tail confidence interval, and is the one commonly used. But as usual, we can partition $\alpha$ between the left and right tails of the distribution of the pivot variable in any way that we please.
For every $\alpha, \, p \in (0, 1)$, a $1 - \alpha$ confidence set for $\tau^2 / \sigma^2$ is $\left[f_{m-1, n-1}(\alpha - p \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})}, f_{m-1, n-1}(1 - p \alpha) \frac{S^2(\bs{Y})}{S^2(\bs{X})} \right]$
1. $p = \frac{1}{2}$ gives the equal-tail, two-sided interval.
2. $p \to 1$ gives the interval with the confidence upper bound.
3. $p \to 0$ gives the interval with the confidence lower bound.
Proof
From the $F$ pivot variable and the definition of the quantile function, $\P \left[ f_{m-1,n-1}(\alpha - p \, \alpha) \lt \frac{S^2(\bs{X}) \tau^2}{S^2(\bs{Y}) \sigma^2} \lt f_{m-1,n-1}(1 - p \,\alpha) \right] = 1 - \alpha$ Solving for $\tau^2 / \sigma^2$ in the inequality gives the confidence interval.
The length of the general confidence interval is considered next.
The (random) length of the general two-sided confidence interval above is $L = \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha) \right] \frac{S^2(\bs{Y})}{S^2(\bs{X})}$ Assuming that $m \gt 5$ and $n \gt 1$,
1. $L$ is a decreasing function of $\alpha$ and hence an increasing function of the confidence level.
2. $\E(L) = \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha)\right] \frac{\tau^2}{\sigma^2} \frac{m - 1}{m - 3}$
3. $\var(L) = 2 \left[f_{m-1,n-1}(1 - p \alpha) - f_{m-1,n-1}(\alpha - p \alpha)\right]^2 \frac{\tau^4}{\sigma^4} \left(\frac{m - 1}{m - 3}\right)^2 \frac{m + n - 4}{(n - 1) (m - 5)}$
Proof
Parts (b) and (c) follow since $\frac{\sigma^2}{\tau^2} \frac{S^2(\bs{Y})}{S^2(\bs{X})}$ has the $F$ distribution with $n - 1$ degrees of freedom in the numerator and $m - 1$ degrees of freedom in the denominator.
Optimally, we might want to choose $p$ so that $\E(L)$ is minimized. However, this is difficult computationally, and fortunately the equal-tail interval with $p = \frac{1}{2}$ is not too far from optimal when the sample sizes $m$ and $n$ are large.
Estimation in the Bivariate Normal Model
In this subsection, we consider a model that is superficially similar to the two-sample normal model, but is actually much simpler. Suppose that $\left((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\right)$ is a random sample of size $n$ from the bivariate normal distribution of a random vector $(X, Y)$, with $\E(X) = \mu$, $\E(Y) = \nu$, $\var(X) = \sigma^2$, $\var(Y) = \tau^2$, and $\cov(X, Y) = \delta$.
Thus, instead of a pair of samples, we have a sample of pairs. This type of model frequently arises in before and after experiments, in which a measurement of interest is recorded for a sample of $n$ objects from the population, both before and after a treatment. For example, we could record the blood pressure of a sample of $n$ patients, before and after the administration of a certain drug. The critical point is that in this model, $X_i$ and $Y_i$ are measurements made on the same underlying object in the sample. As with the two-sample normal model, the interest is usually in estimating the difference of the means.
We will use our usual notation for the sample means and variances of $\bs{X} = (X_1, X_2, \ldots, X_n)$ and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$. Recall also that the sample covariance of $(\bs{X}, \bs{Y})$, is $S(\bs{X}, \bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})]$ (not to be confused with the pooled estimate of the standard deviation in the two sample model).
The vector of differences $\bs{Y} - \bs{X} = (Y_1 - X_1, Y_2 - X_2, \ldots, Y_n - X_n)$ is a random sample of size $n$ from the distribution of $Y - X$, which is normal with
1. $\E(Y - X) = \nu - \mu$
2. $\var(Y - X) = \sigma^2 + \tau^2 - 2 \, \delta$
The sample mean and variance of the sample of differences are given by
1. $M(\bs{Y} - \bs{X}) = M(\bs{Y}) - M(\bs{X})$
2. $S^2(\bs{Y} - \bs{X}) = S^2(\bs{X}) + S^2(\bs{Y}) - 2 \, S(\bs{X}, \bs{Y})$
Thus, the sample of differences $\bs{Y} - \bs{X}$ fits the normal model for a single variable. The section on Estimation in the Normal Model could be used to obtain confidence sets and intervals for the parameters $(\nu - \mu, \sigma^2 + \tau^2 - 2 \, \delta)$.
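Here is a minimal Python sketch of the paired analysis from summary statistics (scipy assumed; function name illustrative). A one-sided bound follows the same pattern with the quantile $t_{n-1}(1 - \alpha)$, as in the IQ exercise below.

```python
from math import sqrt
from scipy.stats import t

def paired_interval(mx, sx, my, sy, sxy, n, alpha=0.05):
    """Equal-tailed 1 - alpha interval for nu - mu based on the sample of differences Y - X."""
    d_mean = my - mx                      # M(Y - X)
    d_var = sx ** 2 + sy ** 2 - 2 * sxy   # S^2(Y - X)
    se = sqrt(d_var / n)
    q = t.ppf(1 - alpha / 2, n - 1)
    return d_mean - q * se, d_mean + q * se

# The IQ exercise below: a 90% lower bound for nu - mu from the before and after statistics
mx, sx, my, sy, sxy, n = 105, 13, 110, 17, 190, 25
print((my - mx) - t.ppf(0.90, n - 1) * sqrt((sx ** 2 + sy ** 2 - 2 * sxy) / n))  # about 2.67; the exercise answer gives 2.675
```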
In the setting of this subsection, suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ are independent. Mathematically this fits both models—the two-sample normal model and the bivariate normal model. Which procedure would work better for estimating the difference of means $\nu - \mu$?
1. If the standard deviations $\sigma$ and $\tau$ are known.
2. If the standard deviations $\sigma$ and $\tau$ are unknown.
Answer
1. The two methods are equivalent.
2. The bivariate normal model works better.
Although the setting in the last problem fits both models mathematically, only one model would make sense in a real problem. Again, the critical point is whether $(X_i, Y_i)$ makes sense as a pair of random variables (measurements) corresponding to a given object in the sample.
Computational Exercises
A new drug is being developed to reduce a certain blood chemical. A sample of 36 patients are given a placebo while a sample of 49 patients are given the drug. Let $X$ denote the measurement for a patient given the placebo and $Y$ the measurement for a patient given the drug (in mg). The statistics are $m(\bs{x}) = 87$, $s(\bs{x}) = 4$, $m(\bs{y}) = 63$, $s(\bs{y}) = 6$.
1. Compute the 90% confidence interval for $\tau / \sigma$.
2. Assuming that $\sigma = \tau$, compute the 90% confidence interval for $\nu - \mu$.
3. Based on (a), is the assumption that $\sigma = \tau$ reasonable?
4. Based on (b), is the drug effective?
Answer
1. $(1.149, 1.936)$
2. $(-24.834, -23.166)$
3. Perhaps not.
4. Yes
A company claims that an herbal supplement improves intelligence. A sample of 25 persons are given a standard IQ test before and after taking the supplement. Let $X$ denote the IQ of a subject before taking the supplement and $Y$ the IQ of the subject after the supplement. The before and after statistics are $m(\bs{x}) = 105$, $s(\bs{x}) = 13$, $m(\bs{y}) = 110$, $s(\bs{y}) = 17$, $s(\bs{x}, \bs{y}) = 190$. Do you believe the company's claim?
Answer
A 90% confidence lower bound for the difference in IQ is 2.675. There may be a very small increase.
In Fisher's iris data, let $X$ denote the petal length of a Versicolor iris and $Y$ the petal length of a Virginica iris.
1. Compute the 90% confidence interval for $\tau / \sigma$.
2. Assuming that $\sigma = \tau$, compute the 90% confidence interval for $\nu - \mu$.
3. Based on (a), is the assumption that $\sigma = \tau$ reasonable?
Answer
1. $(0.8, 1.3)$
2. $(10.5, 14.1)$
3. Yes
A plant has two machines that produce a circular rod whose diameter (in cm) is critical. Let $X$ denote the diameter of a rod from the first machine and $Y$ the diameter of a rod from the second machine. A sample of 100 rods from the first machine has mean 10.3 and standard deviation 1.2. A sample of 100 rods from the second machine has mean 9.8 and standard deviation 1.6.
1. Compute the 90% confidence interval for $\tau / \sigma$.
2. Assuming that $\sigma = \tau$, compute the 90% confidence interval for $\nu - \mu$.
3. Based on (a), is the assumption that $\sigma = \tau$ reasonable?
Answer
1. $(1.127, 1.578)$
2. $(-0.832, -0.168)$
3. Perhaps not.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
As usual, our starting point is a random experiment with an underlying sample space and a probability measure $\P$. In the basic statistical model, we have an observable random variable $\bs{X}$ taking values in a set $S$. In general, $\bs{X}$ can have quite a complicated structure. For example, if the experiment is to sample $n$ objects from a population and record various measurements of interest, then $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object.
Suppose also that the distribution of $\bs{X}$ depends on a parameter $\theta$ taking values in a parameter space $\Theta$. The parameter may also be vector valued, in which case $\Theta \subseteq \R^k$ for some $k \in \N_+$ and the parameter has the form $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$.
The Bayesian Formulation
Recall that in Bayesian analysis, the unknown parameter $\theta$ is treated as a random variable. Specifically, suppose that the conditional probability density function of the data vector $\bs{X}$ given $\theta\ \in \Theta$ is denoted $f(\bs{x} \mid \theta)$ for $\bs{x} \in S$. Moreover, the parameter $\theta$ is given a prior distribution with probability density function $h$ on $\Theta$. (The prior distribution is often subjective, and is chosen to reflect our knowledge, if any, of the parameter.) The joint probability density function of the data vector and the parameter is $(\bs{x}, \theta) \mapsto h(\theta) f(\bs{x} \mid \theta); \quad (\bs{x}, \theta) \in S \times \Theta$ Next, the (unconditional) probability density function of $\bs{X}$ is the function $f$ given by $f(\bs{x}) = \sum_{\theta \in \Theta} h(\theta) f(\bs{x} \mid \theta), \quad \bs{x} \in S$ if the parameter has a discrete distribution, or by $f(\bs{x}) = \int_\Theta h(\theta) f(\bs{x} \mid \theta) \, d\theta, \quad \bs{x} \in S$ if the parameter has a continuous distribution. Finally, by Bayes' theorem, the posterior probability density function of $\theta$ given $\bs{x} \in S$ is $h(\theta \mid \bs{x}) = \frac{h(\theta) f(\bs{x} \mid \theta)}{f(\bs{x})}, \quad \theta \in \Theta$
In some cases, we can recognize the posterior distribution from the functional form of $\theta \mapsto h(\theta) f(\bs{x} \mid \theta)$ without having to actually compute the normalizing constant $f(\bs{x})$, and thus reducing the computational burden significantly. In particular, this is often the case when we have a conjugate parametric family of distributions of $\theta$. Recall that this means that when the prior distribution of $\theta$ belongs to the family, so does the posterior distribution given $\bs{x} \in S$.
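When a conjugate family is not available, the posterior density can be approximated numerically. The following Python sketch (numpy and scipy assumed; names illustrative) does this on a grid for the Bernoulli model treated below, using the binomial likelihood of the success count.

```python
import numpy as np
from scipy.stats import binom

def grid_posterior(y, n, prior_pdf, grid=None):
    """Approximate posterior density of p on a grid, given y successes in n Bernoulli trials."""
    if grid is None:
        grid = np.linspace(0.001, 0.999, 999)
    unnormalized = prior_pdf(grid) * binom.pmf(y, n, grid)
    density = unnormalized / (unnormalized.sum() * (grid[1] - grid[0]))  # numerical normalization
    return grid, density

# Uniform prior, 30 successes in 50 trials (the coin exercise later in this section)
p_grid, posterior = grid_posterior(30, 50, prior_pdf=lambda p: np.ones_like(p))
```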
Confidence Sets
Now let $C(\bs{X})$ be a confidence set (that is, a subset of the parameter space that depends on the data variable $\bs{X}$ but no unknown parameters). One possible definition of a $1 - \alpha$ level Bayesian confidence set requires that $\P\left[\theta \in C(\bs{x}) \mid \bs{X} = \bs{x}\right] = 1 - \alpha$ In this definition, only $\theta$ is random and thus the probability above is computed using the posterior probability density function $\theta \mapsto h(\theta \mid \bs{x})$. Another possible definition requires that $\P\left[\theta \in C(\bs{X})\right] = 1 - \alpha$ In this definition, $\bs{X}$ and $\theta$ are both random, and so the probability above would be computed using the joint probability density function $(\bs{x}, \theta) \mapsto h(\theta) f(\bs{x} \mid \theta)$. Whatever the philosophical arguments may be, the first definition is certainly the easier one from a computational viewpoint, and hence is the one most commonly used.
Let us compare the classical and Bayesian approaches. In the classical approach, the parameter $\theta$ is deterministic, but unknown. Before the data are collected, the confidence set $C(\bs{X})$ (which is random by virtue of $\bs{X}$) will contain the parameter with probability $1 - \alpha$. After the data are collected, the computed confidence set $C(\bs{x})$ either contains $\theta$ or does not, and we will usually never know which. By contrast in a Bayesian confidence set, the random parameter $\theta$ falls in the computed, deterministic confidence set $C(\bs{x})$ with probability $1 - \alpha$.
Real Parameters
Suppose that $\theta$ is real valued, so that $\Theta \subseteq \R$. For $r \in (0, 1)$, we can compute the $1 - \alpha$ level Bayesian confidence interval as $\left[U_{(1 - r) \alpha}(\bs{x}), U_{1 - r \alpha}(\bs{x})\right]$ where $U_p(\bs{x})$ is the quantile of order $p$ for the posterior distribution of $\theta$ given $\bs{X} = \bs{x}$. As in past sections, $r$ is the fraction of $\alpha$ in the right tail of the posterior distribution and $1 - r$ is the fraction of $\alpha$ in the left tail of the posterior distribution. As usual, $r = \frac{1}{2}$ gives the symmetric, two-sided confidence interval; letting $r \to 0$ gives the confidence lower bound; and letting $r \to 1$ gives the confidence upper bound.
Random Samples
In terms of our data vector $\bs{X}$ the most important special case arises when we have a basic variable $X$ with values in a set $R$, and given $\theta$, $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from $X$. That is, given $\theta$, $\bs{X}$ is a sequence of independent, identically distributed variables, each with the same distribution as $X$ given $\theta$. Thus $S = R^n$ and if $X$ has conditional probability density function $g(x \mid \theta)$, then $f(\bs{x} \mid \theta) = g(x_1 \mid \theta) g(x_2 \mid \theta) \cdots g(x_n \mid \theta), \quad \bs{x} = (x_1, x_2, \ldots, x_n) \in S$
Applications
The Bernoulli Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Bernoulli distribution with unknown success parameter $p \in (0, 1)$. In the usual language of reliability, $X_i = 1$ means success on trial $i$ and $X_i = 0$ means failure on trial $i$. The distribution is named for Jacob Bernoulli. Recall that the Bernoulli distribution has probability density function (given $p$) $g(x \mid p) = p^x (1 - p)^{1-x}, \quad x \in \{0, 1\}$ Note that the number of successes in the $n$ trials is $Y = \sum_{i=1}^n X_i$. Given $p$, random variable $Y$ has the binomial distribution with parameters $n$ and $p$.
In our previous discussion of Bayesian estimation, we showed that the beta distribution is conjugate for $p$. Specifically, if the prior distribution of $p$ is beta with left parameter $a \gt 0$ and right parameter $b \gt 0$, then the posterior distribution of $p$ given $\bs{X}$ is beta with left parameter $a + Y$ and right parameter $b + (n - Y)$; the left parameter is increased by the number of successes and the right parameter by the number of failures. It follows that a $1 - \alpha$ level Bayesian confidence interval for $p$ is $\left[U_{\alpha/2}(y), U_{1-\alpha/2}(y)\right]$ where $U_r(y)$ is the quantile of order $r$ for the posterior beta distribution. In the special case $a = b = 1$ the prior distribution is uniform on $(0, 1)$ and reflects a lack of previous knowledge about $p$.
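Computationally, the Bayesian interval is just a pair of quantiles of the posterior beta distribution. Here is a minimal Python sketch (scipy assumed; the function name is illustrative).

```python
from scipy.stats import beta

def bernoulli_credible_interval(y, n, a=1, b=1, alpha=0.05):
    """Equal-tailed 1 - alpha Bayesian interval for p, with a beta(a, b) prior."""
    posterior = beta(a + y, b + (n - y))
    return posterior.ppf(alpha / 2), posterior.ppf(1 - alpha / 2)

print(bernoulli_credible_interval(30, 50))  # approximately (0.461, 0.724), as in the coin exercise below
```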
Suppose that we have a coin with an unknown probability $p$ of heads, and that we give $p$ the uniform prior, reflecting our lack of knowledge about $p$. We then toss the coin 50 times, observing 30 heads.
1. Find the posterior distribution of $p$ given the data.
2. Construct the 95% Bayesian confidence interval.
3. Construct the classical Wald confidence interval at the 95% level.
Answer
1. Beta with left parameter 31 and right parameter 21.
2. $[0.461, 0.724]$
3. $[0.464, 0.736]$
The Poisson Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Poisson distribution with parameter $\lambda \in (0, \infty)$. Recall that the Poisson distribution is often used to model the number of random points in a region of time or space and is studied in more detail in the chapter on the Poisson Process. The distribution is named for the inimitable Simeon Poisson and given $\lambda$, has probability density function $g(x \mid \lambda) = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x \in \N$ As usual, we will denote the sum of the sample values by $Y = \sum_{i=1}^n X_i$. Given $\lambda$, random variable $Y$ also has a Poisson distribution, but with parameter $n \lambda$.
In our previous discussion of Bayesian estimation, we showed that the gamma distribution is conjugate for $\lambda$. Specifically, if the prior distribution of $\lambda$ is gamma with shape parameter $k \gt 0$ and rate parameter $r \gt 0$ (so that the scale parameter is $1 / r$), then the posterior distribution of $\lambda$ given $\bs{X}$ is gamma with shape parameter $k + Y$ and rate parameter $r + n$. It follows that a $1 - \alpha$ level Bayesian confidence interval for $\lambda$ is $\left[U_{\alpha/2}(y), U_{1-\alpha/2}(y)\right]$ where $U_p(y)$ is the quantile of order $p$ for the posterior gamma distribution.
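The interval for $\lambda$ is likewise a pair of quantiles of the posterior gamma distribution. In the Python sketch below (scipy assumed, names illustrative), note that scipy parameterizes the gamma distribution by shape and scale, so the rate $r + n$ enters as the reciprocal scale.

```python
from scipy.stats import gamma

def poisson_credible_interval(y, n, k, r, alpha=0.05):
    """Equal-tailed 1 - alpha Bayesian interval for lambda, with a gamma(k, r) prior (rate r)."""
    posterior = gamma(a=k + y, scale=1 / (r + n))
    return posterior.ppf(alpha / 2), posterior.ppf(1 - alpha / 2)

# The emissions exercise below: its posterior parameters correspond to y = 10099 and n = 1207
print(poisson_credible_interval(10099, 1207, k=5, r=1))  # approximately (8.202, 8.528)
```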
Consider the alpha emissions data, which we believe come from a Poisson distribution with unknown parameter $\lambda$. Suppose that a priori, we believe that $\lambda$ is about 5, so we give $\lambda$ a prior gamma distribution with shape parameter $5$ and rate parameter 1. (Thus the mean is 5 and the standard deviation $\sqrt{5} = 2.236$.)
1. Find the posterior distribution of $\lambda$ given the data.
2. Construct the 95% Bayesian confidence interval.
3. Construct the classical $t$ confidence interval at the 95% level.
Answer
1. Gamma with shape parameter 10104 and rate parameter 1208.
2. $(8.202, 8.528)$
3. $(8.324, 8.410)$
The Normal Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the normal distribution with unknown mean $\mu \in \R$ and known variance $\sigma^2 \in (0, \infty)$. Of course, the normal distribution plays an especially important role in statistics, in part because of the central limit theorem. The normal distribution is widely used to model physical quantities subject to numerous small, random errors. Recall that the normal probability density function (given the parameters) is $g(x \mid \mu, \sigma) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left[-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2 \right], \quad x \in \R$ We denote the sum of the sample values by $Y = \sum_{i=1}^n X_i$. Recall that $Y$ also has a normal distribution (given $\mu$ and $\sigma$), but with mean $n \mu$ and variance $n \sigma^2$.
In our previous discussion of Bayesian estimation, we showed that the normal distribution is conjugate for $\mu$ (with $\sigma$ known). Specifically, if the prior distribution of $\mu$ is normal with mean $a \in \R$ and standard deviation $b \in (0, \infty)$, then the posterior distribution of $\mu$ given $\bs{X}$ is also normal, with $\E(\mu \mid \bs{X}) = \frac{Y b^2 + a \sigma^2}{\sigma^2 + n b^2}, \quad \var(\mu \mid \bs{X}) = \frac{\sigma^2 b^2}{\sigma^2 + n b^2}$ It follows that a $1 - \alpha$ level Bayesian confidence interval for $\mu$ is $\left[U_{\alpha/2}(y), U_{1-\alpha/2}(y)\right]$ where $U_p(y)$ is the quantile of order $p$ for the posterior normal distribution. An interesting special case is when $b = \sigma$, so that the standard deviation of the prior distribution of $\mu$ is the same as the standard deviation of the sampling distribution. In this case, the posterior mean is $(Y + a) \big/ (n + 1)$ and the posterior variance is $\sigma^2 \big/ (n + 1)$
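Here is a minimal Python sketch of the normal-normal computation (scipy assumed; names illustrative), using the posterior mean and variance above.

```python
from math import sqrt
from scipy.stats import norm

def normal_mean_credible_interval(y, n, sigma, a, b, alpha=0.05):
    """Equal-tailed 1 - alpha Bayesian interval for mu, with known sigma and a normal(a, b) prior."""
    mean = (y * b ** 2 + a * sigma ** 2) / (sigma ** 2 + n * b ** 2)
    sd = sqrt(sigma ** 2 * b ** 2 / (sigma ** 2 + n * b ** 2))
    z = norm.ppf(1 - alpha / 2)
    return mean - z * sd, mean + z * sd

# The machined-part exercise below: sample total y = 100 * 10.2, sigma = 0.3, prior mean 10, prior sd 0.3
print(normal_mean_credible_interval(1020, 100, 0.3, 10, 0.3))  # approximately (10.14, 10.26)
```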
The length of a certain machined part is supposed to be 10 centimeters but due to imperfections in the manufacturing process, the actual length is normally distributed with mean $\mu$ and variance $\sigma^2$. The variance is due to inherent factors in the process, which remain fairly stable over time. From historical data, it is known that $\sigma = 0.3$. On the other hand, $\mu$ may be set by adjusting various parameters in the process and hence may change to an unknown value fairly frequently. Thus, suppose that we give $\mu$ a prior normal distribution with mean 10 and standard deviation 0.3. A sample of 100 parts has mean 10.2.
1. Find the posterior distribution of $\mu$ given the data.
2. Construct the 95% Bayesian confidence interval.
3. Construct the classical $z$ confidence interval at the 95% level.
Answer
1. Normal with mean 10.198 and standard deviation 0.0299.
2. $(10.14, 10.26)$
3. $(10.14, 10.26)$
The Beta Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the beta distribution with unknown left shape parameter $a \in (0, \infty)$ and right shape parameter $b = 1$. The beta distribution is widely used to model random proportions and probabilities and other variables that take values in bounded intervals. Recall that the probability density function (given $a$) is $g(x \mid a) = a x^{a-1}, \quad x \in (0, 1)$ We denote the product of the sample values by $W = X_1 X_2 \cdots X_n$.
In our previous discussion of Bayesian estimation, we showed that the gamma distribution is conjugate for $a$. Specifically, if the prior distribution of $a$ is gamma with shape parameter $k \gt 0$ and rate parameter $r \gt 0$, then the posterior distribution of $a$ given $\bs{X}$ is also gamma, with shape parameter $k + n$ and rate parameter $r - \ln(W)$. It follows that a $1 - \alpha$ level Bayesian confidence interval for $a$ is $\left[U_{\alpha/2}(w), U_{1-\alpha/2}(w)\right]$ where $U_p(w)$ is the quantile of order $p$ for the posterior gamma distribution. In the special case that $k = 1$, the prior distribution of $a$ is exponential with rate parameter $r$.
Suppose that the resistance of an electrical component (in Ohms) has the beta distribution with unknown left parameter $a$ and right parameter $b = 1$. We believe that $a$ may be about 10, so we give $a$ the prior gamma distribution with shape parameter 10 and rate parameter 1. We sample 20 components and observe the data $0.98, 0.93, 0.99, 0.89, 0.79, 0.99, 0.92, 0.97, 0.88, 0.97, 0.86, 0.84, 0.96, 0.97, 0.92, 0.90, 0.98, 0.96, 0.96, 1.00$
1. Find the posterior distribution of $a$.
2. Construct the 95% Bayesian confidence interval for $a$.
Answer
1. Gamma with shape parameter 30 and rate parameter 2.424.
2. $(8.349, 17.180)$
The Pareto Distribution
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the Pareto distribution with shape parameter $a \in (0, \infty)$ and scale parameter $b = 1$. The Pareto distribution is used to model certain financial variables and other variables with heavy-tailed distributions, and is named for Vilfredo Pareto. Recall that the probability density function (given $a$) is $g(x \mid a) = \frac{a}{x^{a+1}}, \quad x \in [1, \infty)$ We denote the product of the sample values by $W = X_1 X_2 \cdots X_n$.
In our previous discussion of Bayesian estimation, we showed that the gamma distribution is conjugate for $a$. Specifically, if the prior distribution of $a$ is gamma with shape parameter $k \gt 0$ and rate parameter $r \gt 0$, then the posterior distribution of $a$ given $\bs{X}$ is also gamma, with shape parameter $k + n$ and rate parameter $r + \ln(W)$. It follows that a $1 - \alpha$ level Bayesian confidence interval for $a$ is $\left[U_{\alpha/2}(w), U_{1-\alpha/2}(w)\right]$ where $U_p(w)$ is the quantile of order $p$ for the posterior gamma distribution. In the special case that $k = 1$, the prior distribution of $a$ is exponential with rate parameter $r$.
Suppose that a financial variable has the Pareto distribution with unknown shape parameter $a$ and scale parameter $b = 1$. We believe that $a$ may be about 4, so we give $a$ the prior gamma distribution with shape parameter 4 and rate parameter 1. A random sample of size 20 from the variable gives the data $1.09, 1.13, 2.00, 1.43, 1.26, 1.00, 1.36, 1.03, 1.46, 1.18, 2.16, 1.16, 1.22, 1.06, 1.28, 1.23, 1.11, 1.03, 1.04, 1.05$
1. Find the posterior distribution of $a$.
2. Construct the 95% Bayesian confidence interval for $a$.
Answer
1. Gamma with shape parameter 24 and rate parameter 5.223.
2. $(2.944, 6.608)$
Hypothesis testing refers to the process of choosing between competing hypotheses about a probability distribution, based on observed data from the distribution. It is a core topic in mathematical statistics, and indeed is a fundamental part of the language of statistics. In this chapter, we study the basics of hypothesis testing, and explore hypothesis tests in some of the most important parametric models: the normal model and the Bernoulli model.
09: Hypothesis Testing
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Preliminaries
As usual, our starting point is a random experiment with an underlying sample space and a probability measure $\P$. In the basic statistical model, we have an observable random variable $\bs{X}$ taking values in a set $S$. In general, $\bs{X}$ can have quite a complicated structure. For example, if the experiment is to sample $n$ objects from a population and record various measurements of interest, then $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object. The most important special case occurs when $(X_1, X_2, \ldots, X_n)$ are independent and identically distributed. In this case, we have a random sample of size $n$ from the common distribution.
The purpose of this section is to define and discuss the basic concepts of statistical hypothesis testing. Collectively, these concepts are sometimes referred to as the Neyman-Pearson framework, in honor of Jerzy Neyman and Egon Pearson, who first formalized them.
Hypotheses
A statistical hypothesis is a statement about the distribution of $\bs{X}$. Equivalently, a statistical hypothesis specifies a set of possible distributions of $\bs{X}$: the set of distributions for which the statement is true. A hypothesis that specifies a single distribution for $\bs{X}$ is called simple; a hypothesis that specifies more than one distribution for $\bs{X}$ is called composite.
In hypothesis testing, the goal is to see if there is sufficient statistical evidence to reject a presumed null hypothesis in favor of a conjectured alternative hypothesis. The null hypothesis is usually denoted $H_0$ while the alternative hypothesis is usually denoted $H_1$.
An hypothesis test is a statistical decision; the conclusion will either be to reject the null hypothesis in favor of the alternative, or to fail to reject the null hypothesis. The decision that we make must, of course, be based on the observed value $\bs{x}$ of the data vector $\bs{X}$. Thus, we will find an appropriate subset $R$ of the sample space $S$ and reject $H_0$ if and only if $\bs{x} \in R$. The set $R$ is known as the rejection region or the critical region. Note the asymmetry between the null and alternative hypotheses. This asymmetry is due to the fact that we assume the null hypothesis, in a sense, and then see if there is sufficient evidence in $\bs{x}$ to overturn this assumption in favor of the alternative.
An hypothesis test is a statistical analogy to proof by contradiction, in a sense. Suppose for a moment that $H_1$ is a statement in a mathematical theory and that $H_0$ is its negation. One way that we can prove $H_1$ is to assume $H_0$ and work our way logically to a contradiction. In an hypothesis test, we don't prove anything of course, but there are similarities. We assume $H_0$ and then see if the data $\bs{x}$ are sufficiently at odds with that assumption that we feel justified in rejecting $H_0$ in favor of $H_1$.
Often, the critical region is defined in terms of a statistic $w(\bs{X})$, known as a test statistic, where $w$ is a function from $S$ into another set $T$. We find an appropriate rejection region $R_T \subseteq T$ and reject $H_0$ when the observed value $w(\bs{x}) \in R_T$. Thus, the rejection region in $S$ is then $R = w^{-1}(R_T) = \left\{\bs{x} \in S: w(\bs{x}) \in R_T\right\}$. As usual, the use of a statistic often allows significant data reduction when the dimension of the test statistic is much smaller than the dimension of the data vector.
Errors
The ultimate decision may be correct or may be in error. There are two types of errors, depending on which of the hypotheses is actually true.
Types of errors:
1. A type 1 error is rejecting the null hypothesis $H_0$ when $H_0$ is true.
2. A type 2 error is failing to reject the null hypothesis $H_0$ when the alternative hypothesis $H_1$ is true.
Similarly, there are two ways to make a correct decision: we could reject $H_0$ when $H_1$ is true or we could fail to reject $H_0$ when $H_0$ is true. The possibilities are summarized in the following table:
Hypothesis Test
State / Decision | Fail to reject $H_0$ | Reject $H_0$
$H_0$ True | Correct | Type 1 error
$H_1$ True | Type 2 error | Correct
Of course, when we observe $\bs{X} = \bs{x}$ and make our decision, either we will have made the correct decision or we will have committed an error, and usually we will never know which of these events has occurred. Prior to gathering the data, however, we can consider the probabilities of the various errors.
If $H_0$ is true (that is, the distribution of $\bs{X}$ is specified by $H_0$), then $\P(\bs{X} \in R)$ is the probability of a type 1 error for this distribution. If $H_0$ is composite, then $H_0$ specifies a variety of different distributions for $\bs{X}$ and thus there is a set of type 1 error probabilities.
The maximum probability of a type 1 error, over the set of distributions specified by $H_0$, is the significance level of the test or the size of the critical region.
The significance level is often denoted by $\alpha$. Usually, the rejection region is constructed so that the significance level is a prescribed, small value (typically 0.1, 0.05, 0.01).
If $H_1$ is true (that is, the distribution of $\bs{X}$ is specified by $H_1$), then $\P(\bs{X} \notin R)$ is the probability of a type 2 error for this distribution. Again, if $H_1$ is composite then $H_1$ specifies a variety of different distributions for $\bs{X}$, and thus there will be a set of type 2 error probabilities. Generally, there is a tradeoff between the type 1 and type 2 error probabilities. If we reduce the probability of a type 1 error, by making the rejection region $R$ smaller, we necessarily increase the probability of a type 2 error because the complementary region $S \setminus R$ is larger.
The extreme cases can give us some insight. First consider the decision rule in which we never reject $H_0$, regardless of the evidence $\bs{x}$. This corresponds to the rejection region $R = \emptyset$. A type 1 error is impossible, so the significance level is 0. On the other hand, the probability of a type 2 error is 1 for any distribution defined by $H_1$. At the other extreme, consider the decision rule in which we always reject $H_0$, regardless of the evidence $\bs{x}$. This corresponds to the rejection region $R = S$. A type 2 error is impossible, but now the probability of a type 1 error is 1 for any distribution defined by $H_0$. In between these two worthless tests are meaningful tests that take the evidence $\bs{x}$ into account.
Power
If $H_1$ is true, so that the distribution of $\bs{X}$ is specified by $H_1$, then $\P(\bs{X} \in R)$, the probability of rejecting $H_0$, is the power of the test for that distribution.
Thus the power of the test for a distribution specified by $H_1$ is the probability of making the correct decision.
Suppose that we have two tests, corresponding to rejection regions $R_1$ and $R_2$, respectively, each having significance level $\alpha$. The test with region $R_1$ is uniformly more powerful than the test with region $R_2$ if $\P(\bs{X} \in R_1) \ge \P(\bs{X} \in R_2) \text{ for every distribution of } \bs{X} \text{ specified by } H_1$
Naturally, in this case, we would prefer the first test. Often, however, two tests will not be uniformly ordered; one test will be more powerful for some distributions specified by $H_1$ while the other test will be more powerful for other distributions specified by $H_1$.
If a test has significance level $\alpha$ and is uniformly more powerful than any other test with significance level $\alpha$, then the test is said to be a uniformly most powerful test at level $\alpha$.
Clearly a uniformly most powerful test is the best we can do.
$P$-value
In most cases, we have a general procedure that allows us to construct a test (that is, a rejection region $R_\alpha$) for any given significance level $\alpha \in (0, 1)$. Typically, $R_\alpha$ decreases (in the subset sense) as $\alpha$ decreases.
The $P$-value of the observed value $\bs{x}$ of $\bs{X}$, denoted $P(\bs{x})$, is defined to be the smallest $\alpha$ for which $\bs{x} \in R_\alpha$; that is, the smallest significance level for which $H_0$ is rejected, given $\bs{X} = \bs{x}$.
Knowing $P(\bs{x})$ allows us to test $H_0$ at any significance level for the given data $\bs{x}$: If $P(\bs{x}) \le \alpha$ then we would reject $H_0$ at significance level $\alpha$; if $P(\bs{x}) \gt \alpha$ then we fail to reject $H_0$ at significance level $\alpha$. Note that $P(\bs{X})$ is a statistic. Informally, $P(\bs{x})$ can often be thought of as the probability of an outcome as or more extreme than the observed value $\bs{x}$, where extreme is interpreted relative to the null hypothesis $H_0$.
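As a minimal illustration of this decision rule, the following Python sketch tests $H_0$ at several significance levels once the $P$-value of the data is in hand; the $P$-value used here is hypothetical.

```python
# A minimal sketch: once the P-value of the observed data is known, H0 can
# be tested at any significance level. The P-value below is hypothetical.

def decision(p_value, alpha):
    # Reject H0 at level alpha exactly when the P-value is at most alpha.
    return "reject H0" if p_value <= alpha else "fail to reject H0"

p_value = 0.032   # hypothetical P(x) computed from the observed data x
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha}: {decision(p_value, alpha)}")
```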
Analogy with Justice Systems
There is a helpful analogy between statistical hypothesis testing and the criminal justice system in the US and various other countries. Consider a person charged with a crime. The presumed null hypothesis is that the person is innocent of the crime; the conjectured alternative hypothesis is that the person is guilty of the crime. The test of the hypotheses is a trial with evidence presented by both sides playing the role of the data. After considering the evidence, the jury delivers the decision as either not guilty or guilty. Note that innocent is not a possible verdict of the jury, because it is not the point of the trial to prove the person innocent. Rather, the point of the trial is to see whether there is sufficient evidence to overturn the null hypothesis that the person is innocent in favor of the alternative hypothesis that the person is guilty. A type 1 error is convicting a person who is innocent; a type 2 error is acquitting a person who is guilty. Generally, a type 1 error is considered the more serious of the two possible errors, so in an attempt to hold the chance of a type 1 error to a very low level, the standard for conviction in serious criminal cases is beyond a reasonable doubt.
Tests of an Unknown Parameter
Hypothesis testing is a very general concept, but an important special class occurs when the distribution of the data variable $\bs{X}$ depends on a parameter $\theta$ taking values in a parameter space $\Theta$. The parameter may be vector-valued, so that $\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)$ and $\Theta \subseteq \R^k$ for some $k \in \N_+$. The hypotheses generally take the form $H_0: \theta \in \Theta_0 \text{ versus } H_1: \theta \notin \Theta_0$ where $\Theta_0$ is a prescribed subset of the parameter space $\Theta$. In this setting, the probabilities of making an error or a correct decision depend on the true value of $\theta$. If $R$ is the rejection region, then the power function $Q$ is given by $Q(\theta) = \P_\theta(\bs{X} \in R), \quad \theta \in \Theta$ The power function gives a lot of information about the test.
The power function satisfies the following properties:
1. $Q(\theta)$ is the probability of a type 1 error when $\theta \in \Theta_0$.
2. $\max\left\{Q(\theta): \theta \in \Theta_0\right\}$ is the significance level of the test.
3. $1 - Q(\theta)$ is the probability of a type 2 error when $\theta \notin \Theta_0$.
4. $Q(\theta)$ is the power of the test when $\theta \notin \Theta_0$.
If we have two tests, we can compare them by means of their power functions.
Suppose that we have two tests, corresponding to rejection regions $R_1$ and $R_2$, respectively, each having significance level $\alpha$. The test with rejection region $R_1$ is uniformly more powerful than the test with rejection region $R_2$ if $Q_1(\theta) \ge Q_2(\theta)$ for all $\theta \notin \Theta_0$.
Most hypothesis tests of an unknown real parameter $\theta$ fall into three special cases:
Suppose that $\theta$ is a real parameter and $\theta_0 \in \Theta$ a specified value. The tests below are respectively the two-sided test, the left-tailed test, and the right-tailed test.
1. $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$
2. $H_0: \theta \ge \theta_0$ versus $H_1: \theta \lt \theta_0$
3. $H_0: \theta \le \theta_0$ versus $H_1: \theta \gt \theta_0$
Thus the tests are named after the conjectured alternative. Of course, there may be other unknown parameters besides $\theta$ (known as nuisance parameters).
Equivalence Between Hypothesis Test and Confidence Sets
There is an equivalence between hypothesis tests and confidence sets for a parameter $\theta$.
Suppose that $C(\bs{x})$ is a $1 - \alpha$ level confidence set for $\theta$. The following test has significance level $\alpha$ for the hypothesis $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$: Reject $H_0$ if and only if $\theta_0 \notin C(\bs{x})$
Proof
By definition, $\P[\theta \in C(\bs{X})] = 1 - \alpha$. Hence if $H_0$ is true so that $\theta = \theta_0$, then the probability of a type 1 error is $\P[\theta \notin C(\bs{X})] = \alpha$.
Equivalently, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\theta_0$ is in the corresponding $1 - \alpha$ level confidence set. In particular, this equivalence applies to interval estimates of a real parameter $\theta$ and the common tests for $\theta$ given above.
In each case below, the confidence interval has confidence level $1 - \alpha$ and the test has significance level $\alpha$.
1. Suppose that $\left[L(\bs{X}), U(\bs{X})\right]$ is a two-sided confidence interval for $\theta$. Reject $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$ if and only if $\theta_0 \lt L(\bs{X})$ or $\theta_0 \gt U(\bs{X})$.
2. Suppose that $L(\bs{X})$ is a confidence lower bound for $\theta$. Reject $H_0: \theta \le \theta_0$ versus $H_1: \theta \gt \theta_0$ if and only if $\theta_0 \lt L(\bs{X})$.
3. Suppose that $U(\bs{X})$ is a confidence upper bound for $\theta$. Reject $H_0: \theta \ge \theta_0$ versus $H_1: \theta \lt \theta_0$ if and only if $\theta_0 \gt U(\bs{X})$.
Pivot Variables and Test Statistics
Recall that confidence sets of an unknown parameter $\theta$ are often constructed through a pivot variable, that is, a random variable $W(\bs{X}, \theta)$ that depends on the data vector $\bs{X}$ and the parameter $\theta$, but whose distribution does not depend on $\theta$ and is known. In this case, a natural test statistic for the basic tests given above is $W(\bs{X}, \theta_0)$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
The Normal Model
The normal distribution is perhaps the most important distribution in the study of mathematical statistics, in part because of the central limit theorem. As a consequence of this theorem, a measured quantity that is subject to numerous small, random errors will have, at least approximately, a normal distribution. Such variables are ubiquitous in statistical experiments, in subjects varying from the physical and biological sciences to the social sciences.
So in this section, we assume that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the normal distribution with mean $\mu$ and standard deviation $\sigma$. Our goal is to construct hypothesis tests for $\mu$ and $\sigma$; these are among the most important special cases of hypothesis testing. This section parallels the section on Estimation in the Normal Model in the chapter on Set Estimation, and in particular, the duality between interval estimation and hypothesis testing will play an important role. But first we need to review some basic facts that will be critical for our analysis.
Recall that the sample mean $M$ and sample variance $S^2$ are $M = \frac{1}{n} \sum_{i=1}^n X_i, \quad S^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - M)^2$
From our study of point estimation, recall that $M$ is an unbiased and consistent estimator of $\mu$ while $S^2$ is an unbiased and consistent estimator of $\sigma^2$. From these basic statistics we can construct the test statistics that will be used to construct our hypothesis tests. The following results were established in the section on Special Properties of the Normal Distribution.
Define $Z = \frac{M - \mu}{\sigma \big/ \sqrt{n}}, \quad T = \frac{M - \mu}{S \big/ \sqrt{n}}, \quad V = \frac{n - 1}{\sigma^2} S^2$
1. $Z$ has the standard normal distribution.
2. $T$ has the student $t$ distribution with $n - 1$ degrees of freedom.
3. $V$ has the chi-square distribution with $n - 1$ degrees of freedom.
4. $Z$ and $V$ are independent.
It follows that each of these random variables is a pivot variable for $(\mu, \sigma)$ since the distributions do not depend on the parameters, but the variables themselves functionally depend on one or both parameters. The pivot variables will lead to natural test statistics that can then be used to perform the hypothesis tests of the parameters. To construct our tests, we will need quantiles of these standard distributions. The quantiles can be computed using the special distribution calculator or from most mathematical and statistical software packages. Here is the notation we will use:
Let $p \in (0, 1)$ and $k \in \N_+$.
1. $z(p)$ denotes the quantile of order $p$ for the standard normal distribution.
2. $t_k(p)$ denotes the quantile of order $p$ for the student $t$ distribution with $k$ degrees of freedom.
3. $\chi^2_k(p)$ denotes the quantile of order $p$ for the chi-square distribution with $k$ degrees of freedom
Since the standard normal and student $t$ distributions are symmetric about 0, it follows that $z(1 - p) = -z(p)$ and $t_k(1 - p) = -t_k(p)$ for $p \in (0, 1)$ and $k \in \N_+$. On the other hand, the chi-square distribution is not symmetric.
Tests for the Mean with Known Standard Deviation
For our first discussion, we assume that the distribution mean $\mu$ is unknown but the standard deviation $\sigma$ is known. This is not always an artificial assumption. There are often situations where $\sigma$ is stable over time, and hence is at least approximately known, while $\mu$ changes because of different treatments. Examples are given in the computational exercises below.
For a conjectured $\mu_0 \in \R$, define the test statistic $Z = \frac{M - \mu_0}{\sigma \big/ \sqrt{n}}$
1. If $\mu = \mu_0$ then $Z$ has the standard normal distribution.
2. If $\mu \ne \mu_0$ then $Z$ has the normal distribution with mean $\frac{\mu - \mu_0}{\sigma / \sqrt{n}}$ and variance 1.
So in case (b), $\frac{\mu - \mu_0}{\sigma / \sqrt{n}}$ can be viewed as a non-centrality parameter. The graph of the probability density function of $Z$ is like that of the standard normal probability density function, but shifted to the right or left by the non-centrality parameter, depending on whether $\mu \gt \mu_0$ or $\mu \lt \mu_0$.
For $\alpha \in (0, 1)$, each of the following tests has significance level $\alpha$:
1. Reject $H_0: \mu = \mu_0$ versus $H_1: \mu \ne \mu_0$ if and only if $Z \lt -z(1 - \alpha /2)$ or $Z \gt z(1 - \alpha / 2)$ if and only if $M \lt \mu_0 - z(1 - \alpha / 2) \frac{\sigma}{\sqrt{n}}$ or $M \gt \mu_0 + z(1 - \alpha / 2) \frac{\sigma}{\sqrt{n}}$.
2. Reject $H_0: \mu \le \mu_0$ versus $H_1: \mu \gt \mu_0$ if and only if $Z \gt z(1 - \alpha)$ if and only if $M \gt \mu_0 + z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$.
3. Reject $H_0: \mu \ge \mu_0$ versus $H_1: \mu \lt \mu_0$ if and only if $Z \lt -z(1 - \alpha)$ if and only if $M \lt \mu_0 - z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$.
Proof
In part (a), $H_0$ is a simple hypothesis, and under $H_0$, $Z$ has the standard normal distribution. So $\alpha$ is the probability of falsely rejecting $H_0$ by definition of the quantiles. In parts (b) and (c), $Z$ has a non-central normal distribution under $H_0$ as discussed above. So if $H_0$ is true, the maximum type 1 error probability $\alpha$ occurs when $\mu = \mu_0$. The decision rules in terms of $M$ are equivalent to the corresponding ones in terms of $Z$ by simple algebra.
Part (a) is the standard two-sided test, while (b) is the right-tailed test and (c) is the left-tailed test. Note that in each case, the hypothesis test is the dual of the corresponding interval estimate constructed in the section on Estimation in the Normal Model.
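For concreteness, here is a sketch of these decision rules in Python, assuming `numpy` and `scipy` are available. The simulated data and the parameter values are hypothetical, chosen only for illustration.

```python
# A sketch of the one-sample z-test with known sigma, assuming numpy and
# scipy; the data and parameter values below are hypothetical.
import numpy as np
from scipy.stats import norm

def z_test(x, mu0, sigma, alpha=0.05, alternative="two-sided"):
    x = np.asarray(x, dtype=float)
    z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))
    if alternative == "two-sided":
        reject = abs(z) > norm.ppf(1 - alpha / 2)
    elif alternative == "greater":              # H1: mu > mu0 (right-tailed)
        reject = z > norm.ppf(1 - alpha)
    else:                                       # H1: mu < mu0 (left-tailed)
        reject = z < -norm.ppf(1 - alpha)
    return z, bool(reject)

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.1, scale=0.3, size=100)   # simulated, hypothetical data
print(z_test(sample, mu0=10, sigma=0.3, alpha=0.1))
```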
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\mu_0$ is in the corresponding $1 - \alpha$ confidence interval, that is
1. $M - z(1 - \alpha / 2) \frac{\sigma}{\sqrt{n}} \le \mu_0 \le M + z(1 - \alpha / 2) \frac{\sigma}{\sqrt{n}}$
2. $\mu_0 \le M + z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$
3. $\mu_0 \ge M - z(1 - \alpha) \frac{\sigma}{\sqrt{n}}$
Proof
This follows from the previous result. In each case, we start with the inequality that corresponds to not rejecting $H_0$ and solve for $\mu_0$.
The two-sided test in (a) corresponds to $\alpha / 2$ in each tail of the distribution of the test statistic $Z$, under $H_0$. Such a test is said to be unbiased. But of course we can construct other, biased tests by partitioning the significance level $\alpha$ between the left and right tails in a non-symmetric way.
For every $\alpha, \, p \in (0, 1)$, the following test has significance level $\alpha$: Reject $H_0: \mu = \mu_0$ versus $H_1: \mu \ne \mu_0$ if and only if $Z \lt z(\alpha - p \alpha)$ or $Z \ge z(1 - p \alpha)$.
1. $p = \frac{1}{2}$ gives the symmetric, unbiased test.
2. $p \downarrow 0$ gives the left-tailed test.
3. $p \uparrow 1$ gives the right-tailed test.
Proof
As before $H_0$ is a simple hypothesis, and if $H_0$ is true, $Z$ has the standard normal distribution. So the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. Parts (a)–(c) follow from properties of the standard normal quantile function.
The $P$-values of these tests can be computed in terms of the standard normal distribution function $\Phi$.
The $P$-values of the standard tests above are respectively
1. $2 \left[1 - \Phi\left(\left|Z\right|\right)\right]$
2. $1 - \Phi(Z)$
3. $\Phi(Z)$
Recall that the power function of a test of a parameter is the probability of rejecting the null hypothesis, as a function of the true value of the parameter. Our next series of results will explore the power functions of the tests above.
The power function of the general two-sided test above is given by $Q(\mu) = \Phi \left( z(\alpha - p \alpha) - \frac{\sqrt{n}}{\sigma} (\mu - \mu_0) \right) + \Phi \left( \frac{\sqrt{n}}{\sigma} (\mu - \mu_0) - z(1 - p \alpha) \right), \quad \mu \in \R$
1. $Q$ is decreasing on $(-\infty, m_0)$ and increasing on $(m_0, \infty)$ where $m_0 = \mu_0 + \left[z(\alpha - p \alpha) + z(1 - p \alpha)\right] \frac{\sigma}{2 \sqrt{n}}$.
2. $Q(\mu_0) = \alpha$.
3. $Q(\mu) \to 1$ as $\mu \uparrow \infty$ and $Q(\mu) \to 1$ as $\mu \downarrow -\infty$.
4. If $p = \frac{1}{2}$ then $Q$ is symmetric about $\mu_0$ (and $m_0 = \mu_0$).
5. As $p$ increases, $Q(\mu)$ increases if $\mu \gt \mu_0$ and decreases if $\mu \lt \mu_0$.
So by varying $p$, we can make the test more powerful for some values of $\mu$, but only at the expense of making the test less powerful for other values of $\mu$.
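The power function above is easy to evaluate numerically. The sketch below, assuming `scipy`, uses parameter values similar to those in the simulation exercise later in this subsection ($\sigma = 2$, $n = 20$, $\alpha = 0.1$, $\mu_0 = 0$); note that the power at $\mu = \mu_0$ is $\alpha$.

```python
# A numerical sketch of the two-sided power function Q(mu) above, assuming
# scipy; parameter values are hypothetical (sigma = 2, n = 20, alpha = 0.1).
import numpy as np
from scipy.stats import norm

def power_two_sided(mu, mu0, sigma, n, alpha, p=0.5):
    d = np.sqrt(n) / sigma * (mu - mu0)
    return norm.cdf(norm.ppf(alpha - p * alpha) - d) + norm.cdf(d - norm.ppf(1 - p * alpha))

mu0, sigma, n, alpha = 0.0, 2.0, 20, 0.1
for mu in (-1.5, -0.5, 0.0, 0.5, 1.5):
    print(mu, round(power_two_sided(mu, mu0, sigma, n, alpha), 4))  # power at mu0 is alpha
```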
The power function of the right-tailed test above is given by
$Q(\mu) = \Phi \left( z(\alpha) + \frac{\sqrt{n}}{\sigma}(\mu - \mu_0) \right), \quad \mu \in \R$
1. $Q$ is increasing on $\R$.
2. $Q(\mu_0) = \alpha$.
3. $Q(\mu) \to 1$ as $\mu \uparrow \infty$ and $Q(\mu) \to 0$ as $\mu \downarrow -\infty$.
The power function of the left-tailed test above is given by $Q(\mu) = \Phi \left( z(\alpha) - \frac{\sqrt{n}}{\sigma}(\mu - \mu_0) \right), \quad \mu \in \R$
1. $Q$ is decreasing on $\R$.
2. $Q(\mu_0) = \alpha$.
3. $Q(\mu) \to 0$ as $\mu \uparrow \infty$ and $Q(\mu) \to 1$ as $\mu \downarrow -\infty$.
For any of the three tests above, increasing the sample size $n$ or decreasing the standard deviation $\sigma$ results in a uniformly more powerful test.
In the mean test experiment, select the normal test statistic and select the normal sampling distribution with standard deviation $\sigma = 2$, significance level $\alpha = 0.1$, sample size $n = 20$, and $\mu_0 = 0$. Run the experiment 1000 times for several values of the true distribution mean $\mu$. For each value of $\mu$, note the relative frequency of the event that the null hypothesis is rejected. Sketch the empirical power function.
In the mean estimate experiment, select the normal pivot variable and select the normal distribution with $\mu = 0$ and standard deviation $\sigma = 2$, confidence level $1 - \alpha = 0.90$, and sample size $n = 10$. For each of the three types of confidence intervals, run the experiment 20 times. State the corresponding hypotheses and significance level, and for each run, give the set of $\mu_0$ for which the null hypothesis would be rejected.
In many cases, the first step is to design the experiment so that the significance level is $\alpha$ and so that the test has a given power $\beta$ for a given alternative $\mu_1$.
For either of the one-sided tests above, the sample size $n$ needed for a test with significance level $\alpha$ and power $\beta$ for the alternative $\mu_1$ is $n = \left( \frac{\sigma \left[z(\beta) - z(\alpha)\right]}{\mu_1 - \mu_0} \right)^2$
Proof
This follows from setting the power function equal to $\beta$ and solving for $n$.
For the unbiased, two-sided test, the sample size $n$ needed for a test with significance level $\alpha$ and power $\beta$ for the alternative $\mu_1$ is approximately $n = \left( \frac{\sigma \left[z(\beta) - z(\alpha / 2)\right]}{\mu_1 - \mu_0} \right)^2$
Proof
In the power function for the two-sided test given above, we can neglect the first term if $\mu_1 \lt \mu_0$ and neglect the second term if $\mu_1 \gt \mu_0$.
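The two sample-size formulas translate directly into code. The sketch below, assuming `scipy`, rounds up to the next integer; the two-sided call reproduces the sample size in the machined-part exercise later in this section.

```python
# A sketch of the sample-size formulas above, assuming scipy; the example
# reproduces the two-sided design of the machined-part exercise below.
import math
from scipy.stats import norm

def sample_size_one_sided(mu0, mu1, sigma, alpha, beta):
    return math.ceil((sigma * (norm.ppf(beta) - norm.ppf(alpha)) / (mu1 - mu0)) ** 2)

def sample_size_two_sided(mu0, mu1, sigma, alpha, beta):
    return math.ceil((sigma * (norm.ppf(beta) - norm.ppf(alpha / 2)) / (mu1 - mu0)) ** 2)

print(sample_size_two_sided(10, 10.05, 0.3, alpha=0.1, beta=0.8))   # 223
print(sample_size_one_sided(10, 10.05, 0.3, alpha=0.1, beta=0.8))   # about 163
```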
Tests of the Mean with Unknown Standard Deviation
For our next discussion, we construct tests of $\mu$ without requiring the assumption that $\sigma$ is known. And in applications of course, $\sigma$ is usually unknown.
For a conjectured $\mu_0 \in \R$, define the test statistic $T = \frac{M - \mu_0}{S \big/ \sqrt{n}}$
1. If $\mu = \mu_0$, the statistic $T$ has the student $t$ distribution with $n - 1$ degrees of freedom.
2. If $\mu \ne \mu_0$ then $T$ has a non-central $t$ distribution with $n - 1$ degrees of freedom and non-centrality parameter $\frac{\mu - \mu_0}{\sigma / \sqrt{n}}$.
In case (b), the graph of the probability density function of $T$ is much (but not exactly) the same as that of the ordinary $t$ distribution with $n - 1$ degrees of freedom, but shifted to the right or left by the non-centrality parameter, depending on whether $\mu \gt \mu_0$ or $\mu \lt \mu_0$.
For $\alpha \in (0, 1)$, each of the following tests has significance level $\alpha$:
1. Reject $H_0: \mu = \mu_0$ versus $H_1: \mu \ne \mu_0$ if and only if $T \lt -t_{n-1}(1 - \alpha /2)$ or $T \gt t_{n-1}(1 - \alpha / 2)$ if and only if $M \lt \mu_0 - t_{n-1}(1 - \alpha / 2) \frac{S}{\sqrt{n}}$ or $M \gt \mu_0 + t_{n-1}(1 - \alpha / 2) \frac{S}{\sqrt{n}}$.
2. Reject $H_0: \mu \le \mu_0$ versus $H_1: \mu \gt \mu_0$ if and only if $T \gt t_{n-1}(1 - \alpha)$ if and only if $M \gt \mu_0 + t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$.
3. Reject $H_0: \mu \ge \mu_0$ versus $H_1: \mu \lt \mu_0$ if and only if $T \lt -t_{n-1}(1 - \alpha)$ if and only if $M \lt \mu_0 - t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$.
Proof
In part (a), $T$ has the student $t$ distribution with $n - 1$ degrees of freedom under $H_0$. So if $H_0$ is true, the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. In parts (b) and (c), $T$ has a non-central $t$ distribution with $n - 1$ degrees of freedom under $H_0$, as discussed above. Hence if $H_0$ is true, the maximum type 1 error probability $\alpha$ occurs when $\mu = \mu_0$. The decision rules in terms of $M$ are equivalent to the corresponding ones in terms of $T$ by simple algebra.
Part (a) is the standard two-sided test, while (b) is the right-tailed test and (c) is the left-tailed test. Note that in each case, the hypothesis test is the dual of the corresponding interval estimate constructed in the section on Estimation in the Normal Model.
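A sketch of the two-sided $t$ test in Python is given below, assuming `numpy` and `scipy`; the simulated data are hypothetical. The quantile-based decision rule is computed directly, and `scipy.stats.ttest_1samp` is shown for comparison (it returns the statistic $T$ and the two-sided $P$-value).

```python
# A sketch of the two-sided t test (sigma unknown), assuming numpy and scipy;
# the simulated data are hypothetical.
import numpy as np
from scipy.stats import t, ttest_1samp

rng = np.random.default_rng(1)
x = rng.normal(loc=250, scale=5, size=75)         # simulated, hypothetical sample
mu0, alpha, n = 248, 0.05, 75

T = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
reject = abs(T) > t.ppf(1 - alpha / 2, df=n - 1)  # two-sided decision rule
print(T, bool(reject))
print(ttest_1samp(x, popmean=mu0))                # statistic and two-sided P-value
```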
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\mu_0$ is in the corresponding $1 - \alpha$ confidence interval.
1. $M - t_{n-1}(1 - \alpha / 2) \frac{S}{\sqrt{n}} \le \mu_0 \le M + t_{n-1}(1 - \alpha / 2) \frac{S}{\sqrt{n}}$
2. $\mu_0 \le M + t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$
3. $\mu_0 \ge M - t_{n-1}(1 - \alpha) \frac{S}{\sqrt{n}}$
Proof
This follows from the previous result. In each case, we start with the inequality that corresponds to not rejecting $H_0$ and then solve for $\mu_0$.
The two-sided test in (a) corresponds to $\alpha / 2$ in each tail of the distribution of the test statistic $T$, under $H_0$. Such a test is said to be unbiased. But of course we can construct other, biased tests by partitioning the significance level $\alpha$ between the left and right tails in a non-symmetric way.
For every $\alpha, \, p \in (0, 1)$, the following test has significance level $\alpha$: Reject $H_0: \mu = \mu_0$ versus $H_1: \mu \ne \mu_0$ if and only if $T \lt t_{n-1}(\alpha - p \alpha)$ or $T \ge t_{n-1}(1 - p \alpha)$ if and only if $M \lt \mu_0 + t_{n-1}(\alpha - p \alpha) \frac{S}{\sqrt{n}}$ or $M \gt \mu_0 + t_{n-1}(1 - p \alpha) \frac{S}{\sqrt{n}}$.
1. $p = \frac{1}{2}$ gives the symmetric, unbiased test.
2. $p \downarrow 0$ gives the left-tailed test.
3. $p \uparrow 1$ gives the right-tailed test.
Proof
Once again, $H_0$ is a simple hypothesis, and under $H_0$ the test statistic $T$ has the student $t$ distribution with $n - 1$ degrees of freedom. So if $H_0$ is true, the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. Parts (a)–(c) follow from properties of the quantile function.
The $P$-values of these tests can be computed in terms of the distribution function $\Phi_{n-1}$ of the $t$ distribution with $n - 1$ degrees of freedom; a short computational sketch follows the list below.
The $P$-values of the standard tests above are respectively
1. $2 \left[1 - \Phi_{n-1}\left(\left|T\right|\right)\right]$
2. $1 - \Phi_{n-1}(T)$
3. $\Phi_{n-1}(T)$
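Here is the computational sketch referred to above, assuming `scipy`; the sample size and observed statistic are hypothetical.

```python
# The P-value formulas above in terms of the t distribution function, assuming
# scipy; the sample size and observed statistic are hypothetical.
from scipy.stats import t

n, T = 20, 2.3              # hypothetical sample size and observed value of T
df = n - 1
print(2 * (1 - t.cdf(abs(T), df)))   # two-sided
print(1 - t.cdf(T, df))              # right-tailed
print(t.cdf(T, df))                  # left-tailed
```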
In the mean test experiment, select the student test statistic and select the normal sampling distribution with standard deviation $\sigma = 2$, significance level $\alpha = 0.1$, sample size $n = 20$, and $\mu_0 = 1$. Run the experiment 1000 times for several values of the true distribution mean $\mu$. For each value of $\mu$, note the relative frequency of the event that the null hypothesis is rejected. Sketch the empirical power function.
In the mean estimate experiment, select the student pivot variable and select the normal sampling distribution with mean 0 and standard deviation 2. Select confidence level 0.90 and sample size 10. For each of the three types of intervals, run the experiment 20 times. State the corresponding hypotheses and significance level, and for each run, give the set of $\mu_0$ for which the null hypothesis would be rejected.
The power function for the $t$ tests above can be computed explicitly in terms of the non-central $t$ distribution function. Qualitatively, the graphs of the power functions are similar to those in the case when $\sigma$ is known, given above for the two-sided, left-tailed, and right-tailed cases.
If an upper bound $\sigma_0$ on the standard deviation $\sigma$ is known, then conservative estimates of the sample size needed for a given significance level and a given power can be obtained from the sample size formulas above for the case of known standard deviation, in the two-sided and one-sided cases.
Tests of the Standard Deviation
For our next discussion, we will construct hypothesis tests for the distribution standard deviation $\sigma$. So our assumption is that $\sigma$ is unknown, and of course almost always, $\mu$ would be unknown as well.
For a conjectured value $\sigma_0 \in (0, \infty)$, define the test statistic $V = \frac{n - 1}{\sigma_0^2} S^2$
1. If $\sigma = \sigma_0$, then $V$ has the chi-square distribution with $n - 1$ degrees of freedom.
2. If $\sigma \ne \sigma_0$ then $V$ has the gamma distribution with shape parameter $(n - 1) / 2$ and scale parameter $2 \sigma^2 \big/ \sigma_0^2$.
Recall that the ordinary chi-square distribution with $n - 1$ degrees of freedom is the gamma distribution with shape parameter $(n - 1) / 2$ and scale parameter 2. So in case (b), the ordinary chi-square distribution is scaled by $\sigma^2 \big/ \sigma_0^2$. In particular, the scale factor is greater than 1 if $\sigma \gt \sigma_0$ and less than 1 if $\sigma \lt \sigma_0$.
For every $\alpha \in (0, 1)$, the following test has significance level $\alpha$:
1. Reject $H_0: \sigma = \sigma_0$ versus $H_1: \sigma \ne \sigma_0$ if and only if $V \lt \chi_{n-1}^2(\alpha / 2)$ or $V \gt \chi_{n-1}^2(1 - \alpha / 2)$ if and only if $S^2 \lt \chi_{n-1}^2(\alpha / 2) \frac{\sigma_0^2}{n - 1}$ or $S^2 \gt \chi_{n-1}^2(1 - \alpha / 2) \frac{\sigma_0^2}{n - 1}$
2. Reject $H_0: \sigma \ge \sigma_0$ versus $H_1: \sigma \lt \sigma_0$ if and only if $V \lt \chi_{n-1}^2(\alpha)$ if and only if $S^2 \lt \chi_{n-1}^2(\alpha) \frac{\sigma_0^2}{n - 1}$
3. Reject $H_0: \sigma \le \sigma_0$ versus $H_1: \sigma \gt \sigma_0$ if and only if $V \gt \chi_{n-1}^2(1 - \alpha)$ if and only if $S^2 \gt \chi_{n-1}^2(1 - \alpha) \frac{\sigma_0^2}{n - 1}$
Proof
The logic is largely the same as with our other hypothesis tests. In part (a), $H_0$ is a simple hypothesis, and under $H_0$, the test statistic $V$ has the chi-square distribution with $n - 1$ degrees of freedom. So if $H_0$ is true, the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. In parts (b) and (c), $V$ has the more general gamma distribution under $H_0$, as discussed above. If $H_0$ is true, the maximum type 1 error probability is $\alpha$ and occurs when $\sigma = \sigma_0$.
Part (a) is the unbiased, two-sided test that corresponds to $\alpha / 2$ in each tail of the chi-square distribution of the test statistic $V$, under $H_0$. Part (b) is the left-tailed test and part (c) is the right-tailed test. Once again, we have a duality between the hypothesis tests and the interval estimates constructed in the section on Estimation in the Normal Model.
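A sketch of these tests in Python follows, assuming `numpy` and `scipy`; the simulated data and the conjectured value $\sigma_0$ are hypothetical.

```python
# A sketch of the chi-square tests for the standard deviation, assuming numpy
# and scipy; the simulated data and the value sigma0 are hypothetical.
import numpy as np
from scipy.stats import chi2

def variance_test(x, sigma0, alpha=0.05, alternative="two-sided"):
    x = np.asarray(x, dtype=float)
    n = len(x)
    V = (n - 1) * x.var(ddof=1) / sigma0 ** 2
    if alternative == "two-sided":
        reject = V < chi2.ppf(alpha / 2, n - 1) or V > chi2.ppf(1 - alpha / 2, n - 1)
    elif alternative == "less":                  # H1: sigma < sigma0 (left-tailed)
        reject = V < chi2.ppf(alpha, n - 1)
    else:                                        # H1: sigma > sigma0 (right-tailed)
        reject = V > chi2.ppf(1 - alpha, n - 1)
    return V, bool(reject)

rng = np.random.default_rng(2)
sample = rng.normal(loc=0, scale=2, size=30)     # simulated, hypothetical data
print(variance_test(sample, sigma0=1.5, alpha=0.1, alternative="greater"))
```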
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\sigma_0^2$ is in the corresponding $1 - \alpha$ confidence interval. That is
1. $\frac{n - 1}{\chi_{n-1}^2(1 - \alpha / 2)} S^2 \le \sigma_0^2 \le \frac{n - 1}{\chi_{n-1}^2(\alpha / 2)} S^2$
2. $\sigma_0^2 \le \frac{n - 1}{\chi_{n-1}^2(\alpha)} S^2$
3. $\sigma_0^2 \ge \frac{n - 1}{\chi_{n-1}^2(1 - \alpha)} S^2$
Proof
This follows from the previous result. In each case, we start with the inequality that corresponds to not rejecting $H_0$ and then solve for $\sigma_0^2$.
As before, we can construct more general two-sided tests by partitioning the significance level $\alpha$ between the left and right tails of the chi-square distribution in an arbitrary way.
For every $\alpha, \, p \in (0, 1)$, the following test has significance level $\alpha$: Reject $H_0: \sigma = \sigma_0$ versus $H_1: \sigma \ne \sigma_0$ if and only if $V \le \chi_{n-1}^2(\alpha - p \alpha)$ or $V \ge \chi_{n-1}^2(1 - p \alpha)$ if and only if $S^2 \lt \chi_{n-1}^2(\alpha - p \alpha) \frac{\sigma_0^2}{n - 1}$ or $S^2 \gt \chi_{n-1}^2(1 - p \alpha) \frac{\sigma_0^2}{n - 1}$.
1. $p = \frac{1}{2}$ gives the equal-tail test.
2. $p \downarrow 0$ gives the left-tail test.
3. $p \uparrow 1$ gives the right-tail test.
Proof
As before, $H_0$ is a simple hypothesis, and under $H_0$ the test statistic $V$ has the chi-square distribution with $n - 1$ degrees of freedom. So if $H_0$ is true, the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. Parts (a)–(c) follow from properties of the quantile function.
Recall again that the power function of a test of a parameter is the probability of rejecting the null hypothesis, as a function of the true value of the parameter. The power functions of the tests for $\sigma$ can be expressed in terms of the distribution function $G_{n-1}$ of the chi-square distribution with $n - 1$ degrees of freedom.
The power function of the general two-sided test above is given by the following formula, and satisfies the given properties: $Q(\sigma) = 1 - G_{n-1} \left( \frac{\sigma_0^2}{\sigma^2} \chi_{n-1}^2(1 - p \, \alpha) \right) + G_{n-1} \left(\frac{\sigma_0^2}{\sigma^2} \chi_{n-1}^2(\alpha - p \, \alpha) \right)$
1. $Q$ is decreasing on $(0, \sigma_0)$ and increasing on $(\sigma_0, \infty)$.
2. $Q(\sigma_0) = \alpha$.
3. $Q(\sigma) \to 1$ as $\sigma \uparrow \infty$ and $Q(\sigma) \to 1$ as $\sigma \downarrow 0$.
The power function of the right-tailed test above is given by the following formula, and satisfies the given properties: $Q(\sigma) = 1 - G_{n-1} \left( \frac{\sigma_0^2}{\sigma^2} \chi_{n-1}^2(1 - \alpha) \right)$
1. $Q$ is increasing on $(0, \infty)$.
2. $Q(\sigma_0) = \alpha$.
3. $Q(\sigma) \to 1$ as $\sigma \uparrow \infty$ and $Q(\sigma) \to 0$ as $\sigma \downarrow 0$.
The power function for the left-tailed test above is given by the following formula, and satisfies the given properties (a numerical sketch of these power functions follows the list below): $Q(\sigma) = G_{n-1} \left( \frac{\sigma_0^2}{\sigma^2} \chi_{n-1}^2(\alpha) \right)$
1. $Q$ is decreasing on $(0, \infty)$.
2. $Q(\sigma_0) =\alpha$.
3. $Q(\sigma) \to 0$ as $\sigma \uparrow \infty$ and $Q(\sigma) \to 1$ as $\sigma \downarrow 0$.
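Here is the numerical sketch of the one-sided power functions referred to above, assuming `scipy`; the parameter values are hypothetical. Note that each power function equals $\alpha$ at $\sigma = \sigma_0$.

```python
# A numerical sketch of the one-sided power functions above, assuming scipy;
# the parameter values are hypothetical.
from scipy.stats import chi2

def power_right_tailed(sigma, sigma0, n, alpha):
    # test of H0: sigma <= sigma0 versus H1: sigma > sigma0
    return 1 - chi2.cdf((sigma0 / sigma) ** 2 * chi2.ppf(1 - alpha, n - 1), n - 1)

def power_left_tailed(sigma, sigma0, n, alpha):
    # test of H0: sigma >= sigma0 versus H1: sigma < sigma0
    return chi2.cdf((sigma0 / sigma) ** 2 * chi2.ppf(alpha, n - 1), n - 1)

n, sigma0, alpha = 10, 1.0, 0.1
for sigma in (0.5, 1.0, 1.5, 2.0):
    print(sigma, round(power_right_tailed(sigma, sigma0, n, alpha), 4),
          round(power_left_tailed(sigma, sigma0, n, alpha), 4))
```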
In the variance test experiment, select the normal distribution with mean 0, and select significance level 0.1, sample size 10, and test standard deviation 1.0. For various values of the true standard deviation, run the simulation 1000 times. Record the relative frequency of rejecting the null hypothesis and plot the empirical power curve.
1. Two-sided test
2. Left-tailed test
3. Right-tailed test
In the variance estimate experiment, select the normal distribution with mean 0 and standard deviation 2, and select confidence level 0.90 and sample size 10. Run the experiment 20 times. State the corresponding hypotheses and significance level, and for each run, give the set of test standard deviations for which the null hypothesis would be rejected.
1. Two-sided confidence interval
2. Confidence lower bound
3. Confidence upper bound
Exercises
Robustness
The primary assumption that we made is that the underlying sampling distribution is normal. Of course, in real statistical problems, we are unlikely to know much about the sampling distribution, let alone whether or not it is normal. Suppose in fact that the underlying distribution is not normal. When the sample size $n$ is relatively large, the distribution of the sample mean will still be approximately normal by the central limit theorem, and thus our tests of the mean $\mu$ should still be approximately valid. On the other hand, tests of the variance $\sigma^2$ are less robust to deviations from the assumption of normality. The following exercises explore these ideas.
In the mean test experiment, select the gamma distribution with shape parameter 1 and scale parameter 1. For the three different tests and for various significance levels, sample sizes, and values of $\mu_0$, run the experiment 1000 times. For each configuration, note the relative frequency of rejecting $H_0$. When $H_0$ is true, compare the relative frequency with the significance level.
In the mean test experiment, select the uniform distribution on $[0, 4]$. For the three different tests and for various significance levels, sample sizes, and values of $\mu_0$, run the experiment 1000 times. For each configuration, note the relative frequency of rejecting $H_0$. When $H_0$ is true, compare the relative frequency with the significance level.
How large $n$ needs to be for the testing procedure to work well depends, of course, on the underlying distribution; the more this distribution deviates from normality, the larger $n$ must be. Fortunately, convergence to normality in the central limit theorem is rapid and hence, as you observed in the exercises, we can get away with relatively small sample sizes (30 or more) in most cases.
In the variance test experiment, select the gamma distribution with shape parameter 1 and scale parameter 1. For the three different tests and for various significance levels, sample sizes, and values of $\sigma_0$, run the experiment 1000 times. For each configuration, note the relative frequency of rejecting $H_0$. When $H_0$ is true, compare the relative frequency with the significance level.
In the variance test experiment, select the uniform distribution on $[0, 4]$. For the three different tests and for various significance levels, sample sizes, and values of $\sigma_0$, run the experiment 1000 times. For each configuration, note the relative frequency of rejecting $H_0$. When $H_0$ is true, compare the relative frequency with the significance level.
Computational Exercises
The length of a certain machined part is supposed to be 10 centimeters. In fact, due to imperfections in the manufacturing process, the actual length is a random variable. The standard deviation is due to inherent factors in the process, which remain fairly stable over time. From historical data, the standard deviation is known with a high degree of accuracy to be 0.3. The mean, on the other hand, may be set by adjusting various parameters in the process and hence may change to an unknown value fairly frequently. We are interested in testing $H_0: \mu = 10$ versus $H_1: \mu \ne 10$.
1. Suppose that a sample of 100 parts has mean 10.1. Perform the test at the 0.1 level of significance.
2. Compute the $P$-value for the data in (a).
3. Compute the power of the test in (a) at $\mu = 10.05$.
4. Compute the approximate sample size needed for significance level 0.1 and power 0.8 when $\mu = 10.05$.
Answer
1. Test statistic 3.33, critical values $\pm 1.645$. Reject $H_0$.
2. $P = 0.0010$
3. The power of the test at 10.05 is approximately 0.509.
4. Sample size 223
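The numerical answers in parts (a)–(c) of the exercise above can be reproduced with a few lines of code, assuming `scipy`:

```python
# A numerical check of parts (a)-(c) of the exercise above, assuming scipy.
from math import sqrt
from scipy.stats import norm

n, sigma, mu0, xbar, alpha = 100, 0.3, 10.0, 10.1, 0.1
z = (xbar - mu0) / (sigma / sqrt(n))
print(z, norm.ppf(1 - alpha / 2))        # about 3.33 versus 1.645: reject H0
print(2 * (1 - norm.cdf(abs(z))))        # P-value, about 0.001

mu1 = 10.05
d = sqrt(n) / sigma * (mu1 - mu0)
power = norm.cdf(norm.ppf(alpha / 2) - d) + norm.cdf(d - norm.ppf(1 - alpha / 2))
print(power)                             # about 0.51
```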
A bag of potato chips of a certain brand has an advertised weight of 250 grams. Actually, the weight (in grams) is a random variable. Suppose that a sample of 75 bags has mean 248 and standard deviation 5. At the 0.05 significance level, perform the following tests:
1. $H_0: \mu \ge 250$ versus $H_1: \mu \lt 250$
2. $H_0: \sigma \ge 7$ versus $H_1: \sigma \lt 7$
Answer
1. Test statistic $-3.464$, critical value $-1.665$. Reject $H_0$.
2. $P \lt 0.0001$ so reject $H_0$.
At a telemarketing firm, the length of a telephone solicitation (in seconds) is a random variable. A sample of 50 calls has mean 310 and standard deviation 25. At the 0.1 level of significance, can we conclude that
1. $\mu \gt 300$?
2. $\sigma \gt 20$?
Answer
1. Test statistic 2.828, critical value 1.2988. Reject $H_0$.
2. $P = 0.0071$ so reject $H_0$.
At a certain farm the weight of a peach (in ounces) at harvest time is a random variable. A sample of 100 peaches has mean 8.2 and standard deviation 1.0. At the 0.01 level of significance, can we conclude that
1. $\mu \gt 8$?
2. $\sigma \lt 1.5$?
Answer
1. Test statistic 2.0, critical value 2.363. Fail to reject $H_0$.
2. $P \lt 0.0001$ so reject $H_0$.
The hourly wage for a certain type of construction work is a random variable with standard deviation 1.25. For a sample of 25 workers, the mean wage was \$6.75. At the 0.01 level of significance, can we conclude that $\mu \lt 7.00$?
Answer
Test statistic $-1$, critical value $-2.328$. Fail to reject $H_0$.
Data Analysis Exercises
Using Michelson's data, test to see if the velocity of light is greater than 730 (+299000) km/sec, at the 0.005 significance level.
Answer
Test statistic 15.49, critical value 2.6270. Reject $H_0$.
Using Cavendish's data, test to see if the density of the earth is less than 5.5 times the density of water, at the 0.05 significance level .
Answer
Test statistic $-1.269$, critical value $-1.7017$. Fail to reject $H_0$.
Using Short's data, test to see if the parallax of the sun differs from 9 seconds of a degree, at the 0.1 significance level.
Answer
Test statistic $-3.730$, critical value $\pm 1.6749$. Reject $H_0$.
Using Fisher's iris data, perform the following tests, at the 0.1 level:
1. The mean petal length of Setosa irises differs from 15 mm.
2. The mean petal length of Verginica irises is greater than 52 mm.
3. The mean petal length of Versicolor irises is less than 44 mm.
Answer
1. Test statistic $-1.563$, critical values $\pm 1.672$. Fail to reject $H_0$.
2. Test statistic 4.556, critical value 1.2988. Reject $H_0$.
3. Test statistic $-1.028$, critical value $-1.2988$. Fail to reject $H_0$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Tests
Preliminaries
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Bernoulli distribution with unknown parameter $p \in (0, 1)$. Thus, these are independent random variables taking the values 1 and 0 with probabilities $p$ and $1 - p$ respectively. In the usual language of reliability, 1 denotes success and 0 denotes failure, but of course these are generic terms. Often this model arises in one of the following contexts:
1. There is an event of interest in a basic experiment, with unknown probability $p$. We replicate the experiment $n$ times and define $X_i = 1$ if and only if the event occurred on run $i$.
2. We have a population of objects of several different types; $p$ is the unknown proportion of objects of a particular type of interest. We select $n$ objects at random from the population and let $X_i = 1$ if and only if object $i$ is of the type of interest. When the sampling is with replacement, these variables really do form a random sample from the Bernoulli distribution. When the sampling is without replacement, the variables are dependent, but the Bernoulli model may still be approximately valid if the population size is very large compared to the sample size $n$. For more on these points, see the discussion of sampling with and without replacement in the chapter on Finite Sampling Models.
In this section, we will construct hypothesis tests for the parameter $p$. The parameter space for $p$ is the interval $(0, 1)$, and all hypotheses define subsets of this space. This section parallels the section on Estimation in the Bernoulli Model in the chapter on Set Estimation.
The Binomial Test
Recall that the number of successes $Y = \sum_{i=1}^n X_i$ has the binomial distribution with parameters $n$ and $p$, and has probability density function given by $\P(Y = y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$ Recall also that the mean is $\E(Y) = n p$ and variance is $\var(Y) = n p (1 - p)$. Moreover $Y$ is sufficient for $p$ and hence is a natural candidate to be a test statistic for hypothesis tests about $p$. For $\alpha \in (0, 1)$, let $b_{n, p}(\alpha)$ denote the quantile of order $\alpha$ for the binomial distribution with parameters $n$ and $p$. Since the binomial distribution is discrete, only certain (exact) quantiles are possible. For the remainder of this discussion, $p_0 \in (0, 1)$ is a conjectured value of $p$.
For every $\alpha \in (0, 1)$, the following tests have approximate significance level $\alpha$:
1. Reject $H_0: p = p_0$ versus $H_1: p \ne p_0$ if and only if $Y \le b_{n, p_0}(\alpha / 2)$ or $Y \ge b_{n, p_0}(1 - \alpha / 2)$.
2. Reject $H_0: p \ge p_0$ versus $H_1: p \lt p_0$ if and only if $Y \le b_{n, p_0}(\alpha)$.
3. Reject $H_0: p \le p_0$ versus $H_1: p \gt p_0$ if and only if $Y \ge b_{n, p_0}(1 - \alpha)$.
Proof
In part (a), $H_0$ is a simple hypothesis, and under $H_0$ the test statistic $Y$ has the binomial distribution with parameters $n$ and $p_0$. Thus, if $H_0$ is true, then $\alpha$ is (approximately) the probability of falsely rejecting $H_0$ by definition of the quantiles. In parts (b) and (c), $H_0$ specifies a range of values of $p$. But if $H_0$ is true, the maximum type 1 error probability is (approximately) $\alpha$ and occurs when $p = p_0$.
The test in (a) is the standard, symmetric, two-sided test, corresponding to probability $\alpha / 2$ (approximately) in both tails of the binomial distribution under $H_0$. The test in (b) is the left-tailed test and the test in (c) is the right-tailed test. As usual, we can generalize the two-sided test by partitioning $\alpha$ between the left and right tails of the binomial distribution in an arbitrary manner.
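Here is a sketch of these binomial tests in Python, assuming `scipy`; it uses `scipy.stats.binom.ppf` for the binomial quantiles, so (as noted above) the attained level is only approximately $\alpha$ because of the discreteness of the distribution. The example call uses the counts from the new-drug exercise later in this section.

```python
# A sketch of the binomial tests, assuming scipy; binom.ppf gives the binomial
# quantiles, and the attained level is only approximately alpha because the
# distribution is discrete. The example uses the counts from the new-drug
# exercise later in this section (42 successes in 50 trials, p0 = 0.8).
from scipy.stats import binom

def binomial_test(y, n, p0, alpha=0.05, alternative="two-sided"):
    if alternative == "two-sided":
        return y <= binom.ppf(alpha / 2, n, p0) or y >= binom.ppf(1 - alpha / 2, n, p0)
    elif alternative == "less":                  # H1: p < p0
        return y <= binom.ppf(alpha, n, p0)
    else:                                        # H1: p > p0
        return y >= binom.ppf(1 - alpha, n, p0)

print(binomial_test(42, 50, 0.8, alpha=0.1, alternative="greater"))   # False: fail to reject
```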
For any $\alpha, \, r \in (0, 1)$, the following test has (approximate) significance level $\alpha$: Reject $H_0: p = p_0$ versus $H_1: p \ne p_0$ if and only if $Y \le b_{n, p_0}(\alpha - r \alpha)$ or $Y \ge b_{n, p_0}(1 - r \alpha)$.
1. $r = \frac{1}{2}$ gives the standard symmetric two-sided test.
2. $r \downarrow 0$ gives the left-tailed test.
3. $r \uparrow 1$ gives the right-tailed test.
Proof
Once again, $H_0$ is a simple hypothesis and under $H_0$, the test statistic $Y$ has the binomial distribution with parameters $n$ and $p_0$. Thus if $H_0$ is true then the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. Parts (a)–(c) follow from properties of the quantile function.
An Approximate Normal Test
When $n$ is large, the distribution of $Y$ is approximately normal, by the central limit theorem, so we can construct an approximate normal test.
Suppose that the sample size $n$ is large. For a conjectured $p_0 \in (0, 1)$, define the test statistic $Z = \frac{Y - n p_0}{\sqrt{n p_0 (1 - p_0)}}$
1. If $p = p_0$, then $Z$ has approximately a standard normal distribution.
2. If $p \ne p_0$, then $Z$ has approximately a normal distribution with mean $\sqrt{n} \frac{p - p_0}{\sqrt{p_0 (1 - p_0)}}$ and variance $\frac{p (1 - p)}{p_0 (1 - p_0)}$
Proof
1. This follows from the DeMoivre-Laplace theorem, the special case of the central limit theorem applied to the binomial distribution. Note that $Z$ is simply the standard score associated with $Y$.
2. With some fairly simple algebra, we can write $Z = \sqrt{n} \frac{p - p_0}{\sqrt{p_0 (1 - p_0)}} + \sqrt{\frac{p (1 - p)}{p_0 (1 - p_0)}} \frac{Y - n p}{\sqrt{n p (1 - p)}}$ The second factor in the second term is again simply the standard score associated with $Y$ and hence this factor has approximately a standard normal distribution. So the result follows from the basic linearity property of the normal distribution.
As usual, for $\alpha \in (0, 1)$, let $z(\alpha)$ denote the quantile of order $\alpha$ for the standard normal distribution. For selected values of $\alpha$, $z(\alpha)$ can be obtained from the special distribution calculator, or from most statistical software packages. Recall also by symmetry that $z(1 - \alpha) = - z(\alpha)$.
For every $\alpha \in (0, 1)$, the following tests have approximate significance level $\alpha$:
1. Reject $H_0: p = p_0$ versus $H_1: p \ne p_0$ if and only if $Z \lt -z(1 - \alpha / 2)$ or $Z \gt z(1 - \alpha / 2)$.
2. Reject $H_0: p \ge p_0$ versus $H_1: p \lt p_0$ if and only if $Z \lt -z(1 - \alpha)$.
3. Reject $H_0: p \le p_0$ versus $H_1: p \gt p_0$ if and only if $Z \gt z(1 - \alpha)$.
Proof
In part (a), $H_0$ is a simple hypothesis and under $H_0$ the test statistic $Z$ has approximately a standard normal distribution. Hence if $H_0$ is true then the probability of falsely rejecting $H_0$ is approximately $\alpha$ by definition of the quantiles. In parts (b) and (c), $H_0$ specifies a range of values of $p$, and under $H_0$ the test statistic $Z$ has a nonstandard normal distribution, as described above. The maximum type one error probability is $\alpha$ and occurs when $p = p_0$.
The test in (a) is the symmetric, two-sided test that corresponds to $\alpha / 2$ in both tails of the distribution of $Z$, under $H_0$. The test in (b) is the left-tailed test and the test in (c) is the right-tailed test. As usual, we can construct a more general two-sided test by partitioning $\alpha$ between the left and right tails of the standard normal distribution in an arbitrary manner.
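A sketch of the approximate normal test in Python is given below, assuming `scipy`; the example call uses the counts from the voter-poll exercise later in this section.

```python
# A sketch of the approximate normal test for a proportion, assuming scipy;
# the example uses the counts from the voter-poll exercise below.
import math
from scipy.stats import norm

def proportion_z_test(y, n, p0, alpha=0.05, alternative="two-sided"):
    z = (y - n * p0) / math.sqrt(n * p0 * (1 - p0))
    if alternative == "two-sided":
        return z, abs(z) > norm.ppf(1 - alpha / 2)
    elif alternative == "less":                  # H1: p < p0
        return z, z < -norm.ppf(1 - alpha)
    else:                                        # H1: p > p0
        return z, z > norm.ppf(1 - alpha)

print(proportion_z_test(427, 1000, 0.4, alpha=0.1, alternative="greater"))
# statistic about 1.743: reject H0, as in the exercise below
```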
For every $\alpha, \, r \in (0, 1)$, the following test has approximate significance level $\alpha$: Reject $H_0: p = p_0$ versus $H_1: p \ne p_0$ if and only if $Z \lt z(\alpha - r \alpha)$ or $Z \gt z(1 - r \alpha)$.
1. $r = \frac{1}{2}$ gives the standard, symmetric two-sided test.
2. $r \downarrow 0$ gives the left-tailed test.
3. $r \uparrow 1$ gives the right-tailed test.
Proof
In part (a), $H_0$ is again a simple hypothesis, and under $H_0$ the test statistic $Z$ has approximately a standard normal distribution. So if $H_0$ is true, the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles.
Simulation Exercises
In the proportion test experiment, set $H_0: p = p_0$, and select sample size 10, significance level 0.1, and $p_0 = 0.5$. For each $p \in \{0.1, 0.2, \ldots, 0.9\}$, run the experiment 1000 times and then note the relative frequency of rejecting the null hypothesis. Graph the empirical power function.
In the proportion test experiment, repeat the previous exercise with sample size 20.
In the proportion test experiment, set $H_0: p \le p_0$, and select sample size 15, significance level 0.05, and $p_0 = 0.3$. For each $p \in \{0.1, 0.2, \ldots, 0.9\}$, run the experiment 1000 times and note the relative frequency of rejecting the null hypothesis. Graph the empirical power function.
In the proportion test experiment, repeat the previous exercise with sample size 30.
In the proportion test experiment, set $H_0: p \ge p_0$, and select sample size 20, significance level 0.01, and $p_0 = 0.6$. For each $p \in \{0.1, 0.2, \ldots, 0.9\}$, run the experiment 1000 times and then note the relative frequency of rejecting the null hypothesis. Graph the empirical power function.
In the proportion test experiment, repeat the previous exercise with sample size 50.
Computational Exercises
In a poll of 1000 registered voters in a certain district, 427 prefer candidate X. At the 0.1 level, is the evidence sufficient to conclude that more than 40% of the registered voters prefer X?
Answer
Test statistic 1.743, critical value 1.282. Reject $H_0$.
A coin is tossed 500 times and results in 302 heads. At the 0.05 level, test to see if the coin is unfair.
Answer
Test statistic 4.651, critical values $\pm 1.960$. Reject $H_0$; the coin is almost certainly unfair.
A sample of 400 memory chips from a production line are tested, and 32 are defective. At the 0.05 level, test to see if the proportion of defective chips is less than 0.1.
Answer
Test statistic $-1.333$, critical value $-1.645$. Fail to reject $H_0$.
A new drug is administered to 50 patients and the drug is effective in 42 cases. At the 0.1 level, test to see if the success rate for the new drug is greater that 0.8.
Answer
Test statistic 0.707, critical value 1.282. Fail to reject $H_0$.
Using the M&M data, test the following alternative hypotheses at the 0.1 significance level:
1. The proportion of red M&Ms differs from $\frac{1}{6}$.
2. The proportion of green M&Ms is less than $\frac{1}{6}$.
3. The proportion of yellow M&M is greater than $\frac{1}{6}$.
Answer
1. Test statistic 0.162, critical values $\pm 1.645$. Fail to reject $H_0$.
2. Test statistic $-4.117$, critical value $-1.282$. Reject $H_0$.
3. Test statistic 8.266, critical value 1.282. Reject $H_0$.
The Sign Test
Derivation
Suppose now that we have a basic random experiment with a real-valued random variable $U$ of interest. We assume that $U$ has a continuous distribution with support on an interval $S \subseteq \R$. Let $m$ denote the quantile of a specified order $p_0 \in (0, 1)$ for the distribution of $U$. Thus, by definition, $p_0 = \P(U \le m)$. In general of course, $m$ is unknown, even though $p_0$ is specified, because we don't know the distribution of $U$. Suppose that we want to construct hypothesis tests for $m$. For a given test value $m_0$, let $p = \P(U \le m_0)$. Note that $p$ is unknown even though $m_0$ is specified, because again, we don't know the distribution of $U$.
Relations
1. $m = m_0$ if and only if $p = p_0$.
2. $m \lt m_0$ if and only if $p \gt p_0$.
3. $m \gt m_0$ if and only if $p \lt p_0$.
Proof
These results follow since we are assuming that the distribution of $U$ is continuous and is supported on the interval $S$.
As usual, we repeat the basic experiment $n$ times to generate a random sample $\bs{U} = (U_1, U_2, \ldots, U_n)$ of size $n$ from the distribution of $U$. Let $X_i = \bs{1}(U_i \le m_0)$ be the indicator variable of the event $\{U_i \le m_0\}$ for $i \in \{1, 2, \ldots, n\}$.
Note that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a statistic (an observable function of the data vector $\bs{U}$) and is a random sample of size $n$ from the Bernoulli distribution with parameter $p$.
From the last two results it follows that tests of the unknown quantile $m$ can be converted to tests of the Bernoulli parameter $p$, and thus the tests developed above apply. This procedure is known as the sign test, because essentially, only the sign of $U_i - m_0$ is recorded for each $i$. This procedure is also an example of a nonparametric test, because no assumptions about the distribution of $U$ are made (except for continuity). In particular, we do not need to assume that the distribution of $U$ belongs to a particular parametric family.
The most important special case of the sign test is the case where $p_0 = \frac{1}{2}$; this is the sign test of the median. If the distribution of $U$ is known to be symmetric, the median and the mean agree. In this case, sign tests of the median are also tests of the mean.
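To make the reduction concrete, here is a minimal Python sketch of the sign test via the normal approximation to the Bernoulli proportion test; the function name and the simulated data are purely illustrative, and the sign convention follows the indicators $\bs{1}(U_i \le m_0)$ defined above.

```python
import numpy as np
from scipy.stats import norm

def sign_test_z(u, m0, p0=0.5):
    """Approximate normal test statistic for p = P(U <= m0), computed from
    the indicators 1(U_i <= m0).  Hypotheses about the quantile m translate
    to hypotheses about p via the relations above (e.g. m > m0 iff p < p0)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    y = np.count_nonzero(u <= m0)             # number of "successes"
    return (y - n * p0) / np.sqrt(n * p0 * (1 - p0))

# Illustrative data only: is the median greater than 47.9?  By the relations,
# this is the left-tailed test H1: p < 1/2, rejected when Z < -z(1 - alpha).
rng = np.random.default_rng(0)
u = rng.normal(loc=48.6, scale=1.1, size=30)
z = sign_test_z(u, m0=47.9)
print(z, z < -norm.ppf(0.90))                 # significance level 0.1
```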
Simulation Exercises
In the sign test experiment, set the sampling distribution to normal with mean 0 and standard deviation 2. Set the sample size to 10 and the significance level to 0.1. For each of the 9 values of $m_0$, run the simulation 1000 times.
1. When $m = m_0$, give the empirical estimate of the significance level of the test and compare with 0.1.
2. In the other cases, give the empirical estimate of the power of the test.
In the sign test experiment, set the sampling distribution to uniform on the interval $[0, 5]$. Set the sample size to 20 and the significance level to 0.05. For each of the 9 values of $m_0$, run the simulation 1000 times.
1. When $m = m_0$, give the empirical estimate of the significance level of the test and compare with 0.05.
2. In the other cases, give the empirical estimate of the power of the test.
In the sign test experiment, set the sampling distribution to gamma with shape parameter 2 and scale parameter 1. Set the sample size to 30 and the significance level to 0.025. For each of the 9 values of $m_0$, run the simulation 1000 times.
1. When $m = m_0$, give the empirical estimate of the significance level of the test and compare with 0.025.
2. In the other cases, give the empirical estimate of the power of the test.
Computational Exercises
Using the M&M data, test to see if the median weight exceeds 47.9 grams, at the 0.1 level.
Answer
Test statistic 3.286, critical value 1.282. Reject $H_0$.
Using Fisher's iris data, perform the following tests, at the 0.1 level:
1. The median petal length of Setosa irises differs from 15 mm.
2. The median petal length of Virginica irises is less than 52 mm.
3. The median petal length of Versicolor irises is less than 42 mm.
Answer
1. Test statistic 3.394, critical values $\pm 1.645$. Reject $H_0$.
2. Test statistic $-1.980$, critical value $-1.282$. Reject $H_0$.
3. Test statistic $-0.566$, critical value $-1.282$. Fail to reject $H_0$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$
In this section, we will study hypothesis tests in the two-sample normal model and in the bivariate normal model. This section parallels the section on Estimation in the Two Sample Normal Model in the chapter on Interval Estimation.
The Two-Sample Normal Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_m)$ is a random sample of size $m$ from the normal distribution with mean $\mu$ and standard deviation $\sigma$, and that $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ is a random sample of size $n$ from the normal distribution with mean $\nu$ and standard deviation $\tau$. Moreover, suppose that the samples $\bs{X}$ and $\bs{Y}$ are independent.
This type of situation arises frequently when the random variables represent a measurement of interest for the objects of the population, and the samples correspond to two different treatments. For example, we might be interested in the blood pressure of a certain population of patients. The $\bs{X}$ vector records the blood pressures of a control sample, while the $\bs{Y}$ vector records the blood pressures of the sample receiving a new drug. Similarly, we might be interested in the yield of an acre of corn. The $\bs{X}$ vector records the yields of a sample receiving one type of fertilizer, while the $\bs{Y}$ vector records the yields of a sample receiving a different type of fertilizer.
Usually our interest is in a comparison of the parameters (either the mean or variance) for the two sampling distributions. In this section we will construct tests for the difference of the means and the ratio of the variances. As with previous estimation problems we have studied, the procedures vary depending on what parameters are known or unknown. Also as before, key elements in the construction of the tests are the sample means and sample variances and the special properties of these statistics when the sampling distribution is normal.
We will use the following notation for the sample mean and sample variance of a generic sample $\bs{U} = (U_1, U_2, \ldots, U_k)$:
$M(\bs{U}) = \frac{1}{k} \sum_{i=1}^k U_i, \quad S^2(\bs{U}) = \frac{1}{k - 1} \sum_{i=1}^k [U_i - M(\bs{U})]^2$
Tests of the Difference in the Means with Known Standard Deviations
Our first discussion concerns tests for the difference in the means $\nu - \mu$ under the assumption that the standard deviations $\sigma$ and $\tau$ are known. This is often, but not always, an unrealistic assumption. In some statistical problems, the variances are stable, and are at least approximately known, while the means may be different because of different treatments. Also this is a good place to start because the analysis is fairly easy.
For a conjectured difference of the means $\delta \in \R$, define the test statistic $Z = \frac{[M(\bs{Y}) - M(\bs{X})] - \delta}{\sqrt{\sigma^2 / m + \tau^2 / n}}$
1. If $\nu - \mu = \delta$ then $Z$ has the standard normal distribution.
2. If $\nu - \mu \ne \delta$ then $Z$ has the normal distribution with mean $[(\nu - \mu) - \delta] \big/ {\sqrt{\sigma^2 / m + \tau^2 / n}}$ and variance 1.
Proof
From properties of normal samples, $M(\bs{X})$ has a normal distribution with mean $\mu$ and variance $\sigma^2 / m$ and similarly $M(\bs{Y})$ has a normal distribution with mean $\nu$ and variance $\tau^2 / n$. Since the samples are independent, $M(\bs{X})$ and $M(\bs{Y})$ are independent, so $M(\bs{Y}) - M(\bs{X})$ has a normal distribution with mean $\nu - \mu$ and variance $\sigma^2 / m + \tau^2 / n$. The final result then follows since $Z$ is a linear function of $M(\bs{Y}) - M(\bs{X})$.
Of course (b) actually subsumes (a), but we separate them because the two cases play an important role in the hypothesis tests. In part (b), the non-zero mean can be viewed as a non-centrality parameter.
As usual, for $p \in (0, 1)$, let $z(p)$ denote the quantile of order $p$ for the standard normal distribution. For selected values of $p$, $z(p)$ can be obtained from the special distribution calculator or from most statistical software packages. Recall also by symmetry that $z(1 - p) = -z(p)$.
For every $\alpha \in (0, 1)$, the following tests have significance level $\alpha$:
1. Reject $H_0: \nu - \mu = \delta$ versus $H_1: \nu - \mu \ne \delta$ if and only if $Z \lt -z(1 - \alpha / 2)$ or $Z \gt z(1 - \alpha / 2)$ if and only if $M(\bs{Y}) - M(\bs{X}) \gt \delta + z(1 - \alpha / 2) \sqrt{\sigma^2 / m + \tau^2 / n}$ or $M(\bs{Y}) - M(\bs{X}) \lt \delta - z(1 - \alpha / 2) \sqrt{\sigma^2 / m + \tau^2 / n}$.
2. Reject $H_0: \nu - \mu \ge \delta$ versus $H_1: \nu - \mu \lt \delta$ if and only if $Z \lt -z(1 - \alpha)$ if and only if $M(\bs{Y}) - M(\bs{X}) \lt \delta - z(1 - \alpha) \sqrt{\sigma^2 / m + \tau^2 / n}$.
3. Reject $H_0: \nu - \mu \le \delta$ versus $H_1: \nu - \mu \gt \delta$ if and only if $Z \gt z( 1 - \alpha)$ if and only if $M(\bs{Y}) - M(\bs{X}) \gt \delta + z(1 - \alpha) \sqrt{\sigma^2 / m + \tau^2 / n}$.
Proof
This follows the same logic that we have seen before. In part (a), $H_0$ is a simple hypothesis, and under this hypothesis $Z$ has the standard normal distribution. Thus, if $H_0$ is true then the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. In parts (b) and (c), $H_0$ specifies a range of values of $\nu - \mu$, and under $H_0$, $Z$ has a nonstandard normal distribution, as described above. But the largest type 1 error probability is $\alpha$ and occurs when $\nu - \mu = \delta$. The decision rules in terms of $M(\bs{Y}) - M(\bs{X})$ are equivalent to those in terms of $Z$ by simple algebra.
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\delta$ is in the corresponding $1 - \alpha$ level confidence interval.
1. $[M(\bs{Y}) - M(\bs{X})] - z(1 - \alpha / 2) \sqrt{\sigma^2 / m + \tau^2 / n} \le \delta \le [M(\bs{Y}) - M(\bs{X})] + z(1 - \alpha / 2) \sqrt{\sigma^2 / m + \tau^2 / n}$
2. $\delta \le [M(\bs{Y}) - M(\bs{X})] + z(1 - \alpha) \sqrt{\sigma^2 / m + \tau^2 / n}$
3. $\delta \ge [M(\bs{Y}) - M(\bs{X})] - z(1 - \alpha) \sqrt{\sigma^2 / m + \tau^2 / n}$
Proof
These results follow from the previous results above. In each case, we start with the inequality that corresponds to not rejecting the null hypothesis and solve for $\delta$.
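Before moving on, here is a minimal Python sketch of the two-sample $Z$ test above; the function name and argument conventions are illustrative, not standard library calls.

```python
import numpy as np
from scipy.stats import norm

def two_sample_z_test(x, y, sigma, tau, delta=0.0, alpha=0.05,
                      alternative="two-sided"):
    """Test of nu - mu = delta when the standard deviations sigma (for the
    x sample) and tau (for the y sample) are known."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m, n = len(x), len(y)
    se = np.sqrt(sigma**2 / m + tau**2 / n)
    z = (y.mean() - x.mean() - delta) / se
    if alternative == "two-sided":            # H1: nu - mu != delta
        reject = abs(z) > norm.ppf(1 - alpha / 2)
    elif alternative == "less":               # H1: nu - mu < delta
        reject = z < -norm.ppf(1 - alpha)
    else:                                     # "greater": H1: nu - mu > delta
        reject = z > norm.ppf(1 - alpha)
    return z, reject
```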
Tests of the Difference of the Means with Unknown Standard Deviations
Next we will construct tests for the difference in the means $\nu - \mu$ under the more realistic assumption that the standard deviations $\sigma$ and $\tau$ are unknown. In this case, it is more difficult to find a suitable test statistic, but we can do the analysis in the special case that the standard deviations are the same. Thus, we will assume that $\sigma = \tau$, and the common value $\sigma$ is unknown. This assumption is reasonable if there is an inherent variability in the measurement variables that does not change even when different treatments are applied to the objects in the population. Recall that the pooled estimate of the common variance $\sigma^2$ is the weighted average of the sample variances, with the degrees of freedom as the weight factors: $S^2(\bs{X}, \bs{Y}) = \frac{(m - 1) S^2(\bs{X}) + (n - 1) S^2(\bs{Y})}{m + n - 2}$ The statistic $S^2(\bs{X}, \bs{Y})$ is an unbiased and consistent estimator of the common variance $\sigma^2$.
For a conjectured $\delta \in \R$, define the test statistic $T = \frac{[M(\bs{Y}) - M(\bs{X})] - \delta}{S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}}$
1. If $\nu - \mu = \delta$ then $T$ has the $t$ distribution with $m + n - 2$ degrees of freedom.
2. If $\nu - \mu \ne \delta$ then $T$ has a non-central $t$ distribution with $m + n - 2$ degrees of freedom and non-centrality parameter $\frac{(\nu - \mu) - \delta}{\sigma \sqrt{1/m + 1 /n}}$
Proof
Part (b) actually subsumes part (a), since the ordinary $t$ distribution is a special case of the non-central $t$ distribution, with non-centrality parameter 0. With some basic algebra, we can write $T$ in the form $T = \frac{Z + a}{\sqrt{V \big/ (m + n - 2)}}$ where $Z$ is the standard score of $M(\bs{Y}) - M(\bs{X})$, $a$ is the non-centrality parameter given in the theorem, and $V = \frac{m + n - 2}{\sigma^2} S^2(\bs{X}, \bs{Y})$. So $Z$ has the standard normal distribution, $V$ has the chi-square distribution with $m + n - 2$ degrees of freedom, and $Z$ and $V$ are independent. Thus by definition, $T$ has the non-central $t$ distribution with $m + n - 2$ degrees of freedom and non-centrality parameter $a$.
As usual, for $k \gt 0$ and $p \in (0, 1)$, let $t_k(p)$ denote the quantile of order $p$ for the $t$ distribution with $k$ degrees of freedom. For selected values of $k$ and $p$, values of $t_k(p)$ can be computed from the special distribution calculator, or from most statistical software packages. Recall also that, by symmetry, $t_k(1 - p) = -t_k(p)$.
The following tests have significance level $\alpha$:
1. Reject $H_0: \nu - \mu = \delta$ versus $H_1: \nu - \mu \ne \delta$ if and only if $T \lt -t_{m + n - 2}(1 - \alpha / 2)$ or $T \gt t_{m + n - 2}(1 - \alpha / 2)$ if and only if $M(\bs{Y}) - M(\bs{X}) \gt \delta + t_{m+n-2}(1 - \alpha / 2) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$ or $M(\bs{Y}) - M(\bs{X}) \lt \delta - t_{m+n-2}(1 - \alpha / 2) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
2. Reject $H_0: \nu - \mu \ge \delta$ versus $H_1: \nu - \mu \lt \delta$ if and only if $T \le -t_{m+n-2}(1 - \alpha)$ if and only if $M(\bs{Y}) - M(\bs{X}) \lt \delta - t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
3. Reject $H_0: \nu - \mu \le \delta$ versus $H_1: \nu - \mu \gt \delta$ if and only if $T \ge t_{m+n-2}(1 - \alpha)$ if and only if $M(\bs{Y}) - M(\bs{X}) \gt \delta + t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
Proof
This follows the same logic that we have seen before. In part (a), $H_0$ is a simple hypothesis, and under this hypothesis $T$ has the $t$ distribution with $m + n - 2$ degrees of freedom. Thus, if $H_0$ is true then the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. In parts (b) and (c), $H_0$ specifies a range of values of $\nu - \mu$, and under $H_0$, $T$ has a non-central $t$ distribution, as described above. But the largest type 1 error probability is $\alpha$ and occurs when $\nu - \mu = \delta$. The decision rules in terms of $M(\bs{Y}) - M(\bs{X})$ are equivalent to those in terms of $T$ by simple algebra.
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\delta$ is in the corresponding $1 - \alpha$ level confidence interval.
1. $[M(\bs{Y}) - M(\bs{X})] - t_{m+n-2}(1 - \alpha / 2) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n} \le \delta \le [M(\bs{Y}) - M(\bs{X})] + t_{m+n-2}(1 - \alpha / 2) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
2. $\delta \le [M(\bs{Y}) - M(\bs{X})] + t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
3. $\delta \ge [M(\bs{Y}) - M(\bs{X})] - t_{m+n-2}(1 - \alpha) S(\bs{X}, \bs{Y}) \sqrt{1 / m + 1 / n}$
Proof
These results follow from the previous results above. In each case, we start with the inequality that corresponds to not rejecting the null hypothesis and solve for $\delta$.
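Here is an analogous Python sketch of the pooled two-sample $t$ test; again the function and its arguments are illustrative.

```python
import numpy as np
from scipy.stats import t

def pooled_t_test(x, y, delta=0.0, alpha=0.05, alternative="two-sided"):
    """Test of nu - mu = delta when the two samples share a common unknown
    standard deviation, using the pooled variance estimate S^2(X, Y)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m, n = len(x), len(y)
    s2 = ((m - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (m + n - 2)
    T = (y.mean() - x.mean() - delta) / np.sqrt(s2 * (1 / m + 1 / n))
    df = m + n - 2
    if alternative == "two-sided":            # H1: nu - mu != delta
        reject = abs(T) > t.ppf(1 - alpha / 2, df)
    elif alternative == "less":               # H1: nu - mu < delta
        reject = T < -t.ppf(1 - alpha, df)
    else:                                     # "greater": H1: nu - mu > delta
        reject = T > t.ppf(1 - alpha, df)
    return T, reject
```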
Tests of the Ratio of the Variances
Next we will construct tests for the ratio of the distribution variances $\tau^2 / \sigma^2$. So the basic assumption is that the variances $\sigma^2$ and $\tau^2$, and of course the means $\mu$ and $\nu$, are unknown.
For a conjectured $\rho \in (0, \infty)$, define the test statistic $F = \frac{S^2(\bs{X})}{S^2(\bs{Y})} \rho$
1. If $\tau^2 / \sigma^2 = \rho$ then $F$ has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator.
2. If $\tau^2 / \sigma^2 \ne \rho$ then $F$ has a scaled $F$ distribution with $m - 1$ degrees of freedom in the numerator, $n - 1$ degrees of freedom in the denominator, and scale factor $\rho \frac{\sigma^2}{\tau^2}$.
Proof
Part (b) actually subsumes part (a) when $\rho = \tau^2 / \sigma^2$, so we will just prove (b). Note that $F = \left(\frac{S^2(\bs{X}) \big/ \sigma^2}{S^2(\bs{Y}) \big/ \tau^2}\right) \rho \frac{\sigma^2}{\tau^2}$ But $(m - 1) S^2(\bs{X}) \big/ \sigma^2$ has the chi-square distribution with $m - 1$ degrees of freedom, $(n - 1) S^2(\bs{Y}) \big/ \tau^2$ has the chi-square distribution with $n - 1$ degrees of freedom, and the variables are independent. Hence the ratio $\left[S^2(\bs{X}) \big/ \sigma^2\right] \big/ \left[S^2(\bs{Y}) \big/ \tau^2\right]$, in which each chi-square variable is divided by its degrees of freedom, has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator.
The following tests have significance level $\alpha$:
1. Reject $H_0: \tau^2 / \sigma^2 = \rho$ versus $H_1: \tau^2 / \sigma^2 \ne \rho$ if and only if $F \gt f_{m-1, n-1}(1 - \alpha / 2)$ or $F \lt f_{m-1, n-1}(\alpha / 2 )$.
2. Reject $H_0: \tau^2 / \sigma^2 \le \rho$ versus $H_1: \tau^2 / \sigma^2 \gt \rho$ if and only if $F \lt f_{m-1, n-1}(\alpha)$.
3. Reject $H_0: \tau^2 / \sigma^2 \ge \rho$ versus $H_1: \tau^2 / \sigma^2 \lt \rho$ if and only if $F \gt f_{m-1, n-1}(1 - \alpha)$.
Proof
The proof is the usual argument. In part (a), $H_0$ is a simple hypothesis, and under this hypothesis $F$ has the $F$ distribution with $m - 1$ degrees of freedom in the numerator and $n - 1$ degrees of freedom in the denominator. Thus, if $H_0$ is true then the probability of falsely rejecting $H_0$ is $\alpha$ by definition of the quantiles. In parts (b) and (c), $H_0$ specifies a range of values of $\tau^2 / \sigma^2$, and under $H_0$, $F$ has a scaled $F$ distribution, as described above. But the largest type 1 error probability is $\alpha$ and occurs when $\tau^2 / \sigma^2 = \rho$.
For each of the tests above, we fail to reject $H_0$ at significance level $\alpha$ if and only if $\rho$ is in the corresponding $1 - \alpha$ level confidence interval.
1. $\frac{S^2(\bs{Y})}{S^2(\bs{X})} f_{m-1,n-1}(\alpha / 2) \le \rho \le \frac{S^2(\bs{Y})}{S^2(\bs{X})} f_{m-1,n-1}(1 - \alpha / 2)$
2. $\rho \ge \frac{S^2(\bs{Y})}{S^2(\bs{X})} f_{m-1,n-1}(\alpha)$
3. $\rho \le \frac{S^2(\bs{Y})}{S^2(\bs{X})} f_{m-1,n-1}(1 - \alpha)$
Proof
These results follow from the previous results above. In each case, we start with the inequality that corresponds to not rejecting the null hypothesis and solve for $\rho$.
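The variance ratio test can be sketched the same way in Python; note that the one-sided rejection regions follow the direction of the tests above (under the alternative $\tau^2 / \sigma^2 \gt \rho$, the statistic $F$ tends to be small). The function is illustrative only.

```python
import numpy as np
from scipy.stats import f

def variance_ratio_test(x, y, rho=1.0, alpha=0.05, alternative="two-sided"):
    """Test of tau^2 / sigma^2 = rho based on F = rho * S^2(X) / S^2(Y),
    which has the F(m - 1, n - 1) distribution when the ratio equals rho."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m, n = len(x), len(y)
    F = rho * x.var(ddof=1) / y.var(ddof=1)
    if alternative == "two-sided":            # H1: ratio != rho
        reject = (F < f.ppf(alpha / 2, m - 1, n - 1)
                  or F > f.ppf(1 - alpha / 2, m - 1, n - 1))
    elif alternative == "greater":            # H1: ratio > rho (F tends to be small)
        reject = F < f.ppf(alpha, m - 1, n - 1)
    else:                                     # "less": H1: ratio < rho (F tends to be large)
        reject = F > f.ppf(1 - alpha, m - 1, n - 1)
    return F, reject
```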
Tests in the Bivariate Normal Model
In this subsection, we consider a model that is superficially similar to the two-sample normal model, but is actually much simpler. Suppose that $((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n))$ is a random sample of size $n$ from the bivariate normal distribution of $(X, Y)$ with $\E(X) = \mu$, $\E(Y) = \nu$, $\var(X) = \sigma^2$, $\var(Y) = \tau^2$, and $\cov(X, Y) = \delta$.
Thus, instead of a pair of samples, we have a sample of pairs. The fundamental difference is that in this model, variables $X$ and $Y$ are measured on the same objects in a sample drawn from the population, while in the previous model, variables $X$ and $Y$ are measured on two distinct samples drawn from the population. The bivariate model arises, for example, in before and after experiments, in which a measurement of interest is recorded for a sample of $n$ objects from the population, both before and after a treatment. For example, we could record the blood pressure of a sample of $n$ patients, before and after the administration of a certain drug.
We will use our usual notation for the sample means and variances of $\bs{X} = (X_1, X_2, \ldots, X_n)$ and $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$. Recall also that the sample covariance of $(\bs{X}, \bs{Y})$ is $S(\bs{X}, \bs{Y}) = \frac{1}{n - 1} \sum_{i=1}^n [X_i - M(\bs{X})][Y_i - M(\bs{Y})]$ (not to be confused with the pooled estimate of the standard deviation in the two-sample model above).
The sequence of differences $\bs{Y} - \bs{X} = (Y_1 - X_1, Y_2 - X_2, \ldots, Y_n - X_n)$ is a random sample of size $n$ from the distribution of $Y - X$. The sampling distribution is normal with
1. $\E(Y - X) = \nu - \mu$
2. $\var(Y - X) = \sigma^2 + \tau^2 - 2 \, \delta$
The sample mean and variance of the sample of differences are
1. $M(\bs{Y} - \bs{X}) = M(\bs{Y}) - M(\bs{X})$
2. $S^2(\bs{Y} - \bs{X}) = S^2(\bs{X}) + S^2(\bs{Y}) - 2 \, S(\bs{X}, \bs{Y})$
The sample of differences $\bs{Y} - \bs{X}$ fits the normal model for a single variable. The section on Tests in the Normal Model could be used to perform tests for the distribution mean $\nu - \mu$ and the distribution variance $\sigma^2 + \tau^2 - 2 \delta$.
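A brief Python sketch of this reduction: form the differences and compute the usual one-sample $t$ statistic. The function name is illustrative.

```python
import numpy as np
from scipy.stats import t

def paired_t_statistic(x, y, delta0=0.0):
    """One-sample t statistic for the mean of the differences Y - X in the
    bivariate (paired) model; returns the statistic and degrees of freedom."""
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    n = len(d)
    T = (d.mean() - delta0) / (d.std(ddof=1) / np.sqrt(n))
    return T, n - 1

# For example, reject H0: nu - mu <= delta0 in favor of H1: nu - mu > delta0
# at level alpha when T > t.ppf(1 - alpha, n - 1).
```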
Computational Exercises
A new drug is being developed to reduce a certain blood chemical. A sample of 36 patients are given a placebo while a sample of 49 patients are given the drug. The statistics (in mg) are $m_1 = 87$, $s_1 = 4$, $m_2 = 63$, $s_2 = 6$. Test the following at the 10% significance level:
1. $H_0: \sigma_1 = \sigma_2$ versus $H_1: \sigma_1 \ne \sigma_2$.
2. $H_0: \mu_1 \le \mu_2$ versus $H_1: \mu_1 \gt \mu_2$ (assuming that $\sigma_1 = \sigma_2$).
3. Based on (b), is the drug effective?
Answer
1. Test statistic 0.4, critical values 0.585, 1.667. Reject $H_0$.
2. Test statistic 1.0, critical values $\pm 1.6625$. Fail to reject $H_0$.
3. Probably not
A company claims that an herbal supplement improves intelligence. A sample of 25 persons are given a standard IQ test before and after taking the supplement. The before and after statistics are $m_1 = 105$, $s_1 = 13$, $m_2 = 110$, $s_2 = 17$, $s_{1, \, 2} = 190$. At the 10% significance level, do you believe the company's claim?
Answer
Test statistic 2.8, critical value 1.3184. Reject $H_0$.
In Fisher's iris data, consider the petal length variable for the samples of Versicolor and Virginica irises. Test the following at the 10% significance level:
1. $H_0: \sigma_1 = \sigma_2$ versus $H_1: \sigma_1 \ne \sigma_2$.
2. $H_0: \mu_1 \le \mu_2$ versus $H_1: \mu_1 \gt \mu_2$ (assuming that $\sigma_1 = \sigma_2$).
Answer
1. Test statistic 1.1, critical values 0.6227, 1.6072. Fail to reject $H_0$.
2. Test statistic $-11.4$, critical value $-1.6602$. Reject $H_0$.
A plant has two machines that produce a circular rod whose diameter (in cm) is critical. A sample of 100 rods from the first machine has mean 10.3 and standard deviation 1.2. A sample of 100 rods from the second machine has mean 9.8 and standard deviation 1.6. Test the following hypotheses at the 10% level.
1. $H_0: \sigma_1 = \sigma_2$ versus $H_1: \sigma_1 \ne \sigma_2$.
2. $H_0: \mu_1 = \mu_2$ versus $H_1: \mu_1 \ne \mu_2$ (assuming that $\sigma_1 = \sigma_2$).
Answer
1. Test statistic 0.56, critical values 0.7175, 1.3942. Reject $H_0$.
2. Test statistic $-4.97$, critical values $\pm 1.645$. Reject $H_0$.
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
As usual, our starting point is a random experiment with an underlying sample space, and a probability measure $\P$. In the basic statistical model, we have an observable random variable $\bs{X}$ taking values in a set $S$. In general, $\bs{X}$ can have quite a complicated structure. For example, if the experiment is to sample $n$ objects from a population and record various measurements of interest, then $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the vector of measurements for the $i$th object. The most important special case occurs when $(X_1, X_2, \ldots, X_n)$ are independent and identically distributed. In this case, we have a random sample of size $n$ from the common distribution.
In the previous sections, we developed tests for parameters based on natural test statistics. However, in other cases, the tests may not be parametric, or there may not be an obvious statistic to start with. Thus, we need a more general method for constructing test statistics. Moreover, we do not yet know if the tests constructed so far are the best, in the sense of maximizing the power for the set of alternatives. In this and the next section, we investigate both of these ideas. Likelihood functions, similar to those used in maximum likelihood estimation, will play a key role.
Tests of Simple Hypotheses
Suppose that $\bs{X}$ has one of two possible distributions. Our simple hypotheses are
• $H_0: \bs{X}$ has probability density function $f_0$.
• $H_1: \bs{X}$ has probability density function $f_1$.
We will use subscripts on the probability measure $\P$ to indicate the two hypotheses, and we assume that $f_0$ and $f_1$ are positive on $S$. The test that we will construct is based on the following simple idea: if we observe $\bs{X} = \bs{x}$, then the condition $f_1(\bs{x}) \gt f_0(\bs{x})$ is evidence in favor of the alternative; the opposite inequality is evidence against the alternative.
The likelihood ratio function $L: S \to (0, \infty)$ is defined by $L(\bs{x}) = \frac{f_0(\bs{x})}{f_1(\bs{x})}, \quad \bs{x} \in S$ The statistic $L(\bs{X})$ is the likelihood ratio statistic.
Restating our earlier observation, note that small values of $L$ are evidence in favor of $H_1$. Thus it seems reasonable that the likelihood ratio statistic may be a good test statistic, and that we should consider tests in which we reject $H_0$ if and only if $L \le l$, where $l$ is a constant to be determined:
The significance level of the test is $\alpha = \P_0(L \le l)$.
As usual, we can try to construct a test by choosing $l$ so that $\alpha$ is a prescribed value. If $\bs{X}$ has a discrete distribution, this will only be possible when $\alpha$ is a value of the distribution function of $L(\bs{X})$.
An important special case of this model occurs when the distribution of $\bs{X}$ depends on a parameter $\theta$ that has two possible values. Thus, the parameter space is $\{\theta_0, \theta_1\}$, and $f_0$ denotes the probability density function of $\bs{X}$ when $\theta = \theta_0$ and $f_1$ denotes the probability density function of $\bs{X}$ when $\theta = \theta_1$. In this case, the hypotheses are equivalent to $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$.
As noted earlier, another important special case is when $\bs X = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n$ from the distribution of an underlying random variable $X$ taking values in a set $R$. In this case, $S = R^n$ and the probability density function $f$ of $\bs X$ has the form $f(x_1, x_2, \ldots, x_n) = g(x_1) g(x_2) \cdots g(x_n), \quad (x_1, x_2, \ldots, x_n) \in S$ where $g$ is the probability density function of $X$. So the hypotheses simplify to
• $H_0: X$ has probability density function $g_0$.
• $H_1: X$ has probability density function $g_1$.
and the likelihood ratio statistic is $L(X_1, X_2, \ldots, X_n) = \prod_{i=1}^n \frac{g_0(X_i)}{g_1(X_i)}$ In this special case, it turns out that under $H_1$, the likelihood ratio statistic, as a function of the sample size $n$, is a martingale.
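For a random sample, the likelihood ratio statistic is easy to compute numerically; here is a minimal Python sketch, with placeholder densities in the example. Working on the log scale avoids underflow when $n$ is large.

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(sample, pdf0, pdf1):
    """Likelihood ratio statistic L = prod g0(x_i) / g1(x_i) for a random
    sample, where pdf0 and pdf1 are the densities under H0 and H1."""
    x = np.asarray(sample, dtype=float)
    log_L = np.sum(np.log(pdf0(x)) - np.log(pdf1(x)))   # log scale for stability
    return np.exp(log_L)

# Placeholder example: standard normal under H0, normal with mean 1 under H1.
L = likelihood_ratio([0.2, -0.5, 1.3], norm(0, 1).pdf, norm(1, 1).pdf)
```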
The Neyman-Pearson Lemma
The following theorem is the Neyman-Pearson Lemma, named for Jerzy Neyman and Egon Pearson. It shows that the test given above is most powerful. Let $R = \{\bs{x} \in S: L(\bs{x}) \le l\}$ and recall that the size of a rejection region is the significance level of the test with that rejection region.
Consider the tests with rejection regions $R$ given above and arbitrary $A \subseteq S$. If the size of $R$ is at least as large as the size of $A$ then the test with rejection region $R$ is at least as powerful as the test with rejection region $A$. That is, if $\P_0(\bs{X} \in R) \ge \P_0(\bs{X} \in A)$ then $\P_1(\bs{X} \in R) \ge \P_1(\bs{X} \in A)$.
Proof
First note that from the definitions of $L$ and $R$ that the following inequalities hold: \begin{align} \P_0(\bs{X} \in A) & \le l \, \P_1(\bs{X} \in A) \text{ for } A \subseteq R\ \P_0(\bs{X} \in A) & \ge l \, \P_1(\bs{X} \in A) \text{ for } A \subseteq R^c \end{align} Now for arbitrary $A \subseteq S$, write $R = (R \cap A) \cup (R \setminus A)$ and $A = (A \cap R) \cup (A \setminus R)$. From the additivity of probability and the inequalities above, it follows that $\P_1(\bs{X} \in R) - \P_1(\bs{X} \in A) \ge \frac{1}{l} \left[\P_0(\bs{X} \in R) - \P_0(\bs{X} \in A)\right]$ Hence if $\P_0(\bs{X} \in R) \ge \P_0(\bs{X} \in A)$ then $\P_1(\bs{X} \in R) \ge \P_1(\bs{X} \in A)$.
The Neyman-Pearson lemma is more useful than might be first apparent. In many important cases, the same most powerful test works for a range of alternatives, and thus is a uniformly most powerful test for this range. Several special cases are discussed below.
Generalized Likelihood Ratio
The likelihood ratio statistic can be generalized to composite hypotheses. Suppose again that the probability density function $f_\theta$ of the data variable $\bs{X}$ depends on a parameter $\theta$, taking values in a parameter space $\Theta$. Consider the hypotheses $\theta \in \Theta_0$ versus $\theta \notin \Theta_0$, where $\Theta_0 \subseteq \Theta$.
Define $L(\bs{x}) = \frac{\sup\left\{f_\theta(\bs{x}): \theta \in \Theta_0\right\}}{\sup\left\{f_\theta(\bs{x}): \theta \in \Theta\right\}}$ The function $L$ is the likelihood ratio function and $L(\bs{X})$ is the likelihood ratio statistic.
By the same reasoning as before, small values of $L(\bs{x})$ are evidence in favor of the alternative hypothesis.
Examples and Special Cases
Tests for the Exponential Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n \in \N_+$ from the exponential distribution with scale parameter $b \in (0, \infty)$. The sample variables might represent the lifetimes from a sample of devices of a certain type. We are interested in testing the simple hypotheses $H_0: b = b_0$ versus $H_1: b = b_1$, where $b_0, \, b_1 \in (0, \infty)$ are distinct specified values.
Recall that the sum of the variables is a sufficient statistic for $b$: $Y = \sum_{i=1}^n X_i$ Recall also that $Y$ has the gamma distribution with shape parameter $n$ and scale parameter $b$. For $\alpha \in (0, 1)$, we will denote the quantile of order $\alpha$ for this distribution by $\gamma_{n, b}(\alpha)$.
The likelihood ratio statistic is $L = \left(\frac{b_1}{b_0}\right)^n \exp\left[\left(\frac{1}{b_1} - \frac{1}{b_0}\right) Y \right]$
Proof
Recall that the PDF $g$ of the exponential distribution with scale parameter $b \in (0, \infty)$ is given by $g(x) = (1 / b) e^{-x / b}$ for $x \in (0, \infty)$. If $g_j$ denotes the PDF when $b = b_j$ for $j \in \{0, 1\}$ then $\frac{g_0(x)}{g_1(x)} = \frac{(1/b_0) e^{-x / b_0}}{(1/b_1) e^{-x/b_1}} = \frac{b_1}{b_0} e^{(1/b_1 - 1/b_0) x}, \quad x \in (0, \infty)$ Hence the likelihood ratio function is $L(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n \frac{g_0(x_i)}{g_1(x_i)} = \left(\frac{b_1}{b_0}\right)^n e^{(1/b_1 - 1/b_0) y}, \quad (x_1, x_2, \ldots, x_n) \in (0, \infty)^n$ where $y = \sum_{i=1}^n x_i$.
The following tests are most powerful at the $\alpha$ level:
1. Suppose that $b_1 \gt b_0$. Reject $H_0: b = b_0$ versus $H_1: b = b_1$ if and only if $Y \ge \gamma_{n, b_0}(1 - \alpha)$.
2. Suppose that $b_1 \lt b_0$. Reject $H_0: b = b_0$ versus $H_1: b = b_1$ if and only if $Y \le \gamma_{n, b_0}(\alpha)$.
Proof
Under $H_0$, $Y$ has the gamma distribution with parameters $n$ and $b_0$.
1. If $b_1 \gt b_0$ then $1/b_1 \lt 1/b_0$. From simple algebra, a rejection region of the form $L(\bs X) \le l$ becomes a rejection region of the form $Y \ge y$. The precise value of $y$ in terms of $l$ is not important. For the test to have significance level $\alpha$ we must choose $y = \gamma_{n, b_0}(1 - \alpha)$
2. If $b_1 \lt b_0$ then $1/b_1 \gt 1/b_0$. From simple algebra, a rejection region of the form $L(\bs X) \le l$ becomes a rejection region of the form $Y \le y$. Again, the precise value of $y$ in terms of $l$ is not important. For the test to have significance level $\alpha$ we must choose $y = \gamma_{n, b_0}(\alpha)$
Note that these tests do not depend on the value of $b_1$. This fact, together with the monotonicity of the power function, can be used to show that the tests are uniformly most powerful for the usual one-sided tests.
Suppose that $b_0 \in (0, \infty)$.
1. The decision rule in part (a) above is uniformly most powerful for the test $H_0: b \le b_0$ versus $H_1: b \gt b_0$.
2. The decision rule in part (b) above is uniformly most powerful for the test $H_0: b \ge b_0$ versus $H_1: b \lt b_0$.
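A minimal Python sketch of these one-sided tests, using the gamma quantile function for the critical values; the function name and interface are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def exponential_ump_test(x, b0, alpha=0.05, alternative="greater"):
    """Uniformly most powerful test for the exponential scale parameter,
    based on Y = sum(X).  'greater' tests H0: b <= b0 versus H1: b > b0;
    'less' tests H0: b >= b0 versus H1: b < b0."""
    x = np.asarray(x, dtype=float)
    n, y = len(x), float(np.sum(x))
    if alternative == "greater":
        crit = gamma.ppf(1 - alpha, n, scale=b0)   # gamma_{n, b0}(1 - alpha)
        return y, y >= crit
    else:
        crit = gamma.ppf(alpha, n, scale=b0)       # gamma_{n, b0}(alpha)
        return y, y <= crit
```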
Tests for the Bernoulli Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n \in \N_+$ from the Bernoulli distribution with success parameter $p$. The sample could represent the results of tossing a coin $n$ times, where $p$ is the probability of heads. We wish to test the simple hypotheses $H_0: p = p_0$ versus $H_1: p = p_1$, where $p_0, \, p_1 \in (0, 1)$ are distinct specified values. In the coin tossing model, we know that the probability of heads is either $p_0$ or $p_1$, but we don't know which.
Recall that the number of successes is a sufficient statistic for $p$: $Y = \sum_{i=1}^n X_i$ Recall also that $Y$ has the binomial distribution with parameters $n$ and $p$. For $\alpha \in (0, 1)$, we will denote the quantile of order $\alpha$ for this distribution by $b_{n, p}(\alpha)$; since the distribution is discrete, only certain values of $\alpha$ are possible.
The likelihood ratio statistic is $L = \left(\frac{1 - p_0}{1 - p_1}\right)^n \left[\frac{p_0 (1 - p_1)}{p_1 (1 - p_0)}\right]^Y$
Proof
Recall that the PDF $g$ of the Bernoulli distribution with parameter $p \in (0, 1)$ is given by $g(x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$. If $g_j$ denotes the PDF when $p = p_j$ for $j \in \{0, 1\}$ then $\frac{g_0(x)}{g_1(x)} = \frac{p_0^x (1 - p_0)^{1-x}}{p_1^x (1 - p_1)^{1-x}} = \left(\frac{p_0}{p_1}\right)^x \left(\frac{1 - p_0}{1 - p_1}\right)^{1 - x} = \left(\frac{1 - p_0}{1 - p_1}\right) \left[\frac{p_0 (1 - p_1)}{p_1 (1 - p_0)}\right]^x, \quad x \in \{0, 1\}$ Hence the likelihood ratio function is $L(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n \frac{g_0(x_i)}{g_1(x_i)} = \left(\frac{1 - p_0}{1 - p_1}\right)^n \left[\frac{p_0 (1 - p_1)}{p_1 (1 - p_0)}\right]^y, \quad (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $y = \sum_{i=1}^n x_i$.
The following tests are most powerful at the $\alpha$ level:
1. Suppose that $p_1 \gt p_0$. Reject $H_0: p = p_0$ versus $H_1: p = p_1$ if and only if $Y \ge b_{n, p_0}(1 - \alpha)$.
2. Suppose that $p_1 \lt p_0$. Reject $H_0: p = p_0$ versus $H_1: p = p_1$ if and only if $Y \le b_{n, p_0}(\alpha)$.
Proof
Under $H_0$, $Y$ has the binomial distribution with parameters $n$ and $p_0$.
1. If $p_1 \gt p_0$ then $p_0(1 - p_1) / p_1(1 - p_0) \lt 1$. From simple algebra, a rejection region of the form $L(\bs X) \le l$ becomes a rejection region of the form $Y \ge y$. The precise value of $y$ in terms of $l$ is not important. For the test to have significance level $\alpha$ we must choose $y = b_{n, p_0}(1 - \alpha)$
2. If $p_1 \lt p_0$ then $p_0 (1 - p_1) / p_1 (1 - p_0) \gt 1$. From simple algebra, a rejection region of the form $L(\bs X) \le l$ becomes a rejection region of the form $Y \le y$. Again, the precise value of $y$ in terms of $l$ is not important. For the test to have significance level $\alpha$ we must choose $y = b_{n, p_0}(\alpha)$
Note that these tests do not depend on the value of $p_1$. This fact, together with the monotonicity of the power function, can be used to show that the tests are uniformly most powerful for the usual one-sided tests.
Suppose that $p_0 \in (0, 1)$.
1. The decision rule in part (a) above is uniformly most powerful for the test $H_0: p \le p_0$ versus $H_1: p \gt p_0$.
2. The decision rule in part (b) above is uniformly most powerful for the test $H_0: p \ge p_0$ versus $H_1: p \lt p_0$.
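The corresponding sketch for the Bernoulli model uses the binomial quantile function; since $Y$ is discrete, the attained significance level is only approximately $\alpha$. The function is illustrative.

```python
import numpy as np
from scipy.stats import binom

def bernoulli_ump_test(x, p0, alpha=0.05, alternative="greater"):
    """Uniformly most powerful test for the Bernoulli parameter, based on
    Y = sum(X).  'greater' tests H0: p <= p0 versus H1: p > p0; 'less'
    tests H0: p >= p0 versus H1: p < p0."""
    x = np.asarray(x, dtype=int)
    n, y = len(x), int(np.sum(x))
    if alternative == "greater":
        crit = binom.ppf(1 - alpha, n, p0)         # b_{n, p0}(1 - alpha)
        return y, y >= crit
    else:
        crit = binom.ppf(alpha, n, p0)             # b_{n, p0}(alpha)
        return y, y <= crit
```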
Tests in the Normal Model
The one-sided tests that we derived in the normal model, for $\mu$ with $\sigma$ known, for $\mu$ with $\sigma$ unknown, and for $\sigma$ with $\mu$ unknown are all uniformly most powerful. On the other hand, none of the two-sided tests are uniformly most powerful.
A Nonparametric Example
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample of size $n \in \N_+$, either from the Poisson distribution with parameter 1 or from the geometric distribution on $\N$ with parameter $p = \frac{1}{2}$. Note that both distributions have mean 1 (although the Poisson distribution has variance 1 while the geometric distribution has variance 2). So, we wish to test the hypotheses
• $H_0: X$ has probability density function $g_0(x) = e^{-1} \frac{1}{x!}$ for $x \in \N$.
• $H_1: X$ has probability density function $g_1(x) = \left(\frac{1}{2}\right)^{x+1}$ for $x \in \N$.
The likelihood ratio statistic is $L = 2^n e^{-n} \frac{2^Y}{U} \text{ where } Y = \sum_{i=1}^n X_i \text{ and } U = \prod_{i=1}^n X_i!$
Proof
Note that $\frac{g_0(x)}{g_1(x)} = \frac{e^{-1} / x!}{(1/2)^{x+1}} = 2 e^{-1} \frac{2^x}{x!}, \quad x \in \N$ Hence the likelihood ratio function is $L(x_1, x_2, \ldots, x_n) = \prod_{i=1}^n \frac{g_0(x_i)}{g_1(x_i)} = 2^n e^{-n} \frac{2^y}{u}, \quad (x_1, x_2, \ldots, x_n) \in \N^n$ where $y = \sum_{i=1}^n x_i$ and $u = \prod_{i=1}^n x_i!$.
The most powerful tests have the following form, where $d$ is a constant: reject $H_0$ if and only if $\ln(2) Y - \ln(U) \le d$.
Proof
A rejection region of the form $L(\bs X) \le l$ is equivalent to $\frac{2^Y}{U} \le \frac{l e^n}{2^n}$ Taking the natural logarithm, this is equivalent to $\ln(2) Y - \ln(U) \le d$ where $d = n + \ln(l) - n \ln(2)$.
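Numerically, the statistic is best computed on the log scale; here is a short Python sketch (the function name is illustrative).

```python
import numpy as np
from scipy.special import gammaln

def poisson_vs_geometric_statistic(x):
    """The statistic ln(2) * Y - ln(U), where Y = sum(x_i) and U = prod(x_i!).
    Small values are evidence for the geometric alternative.  Since
    ln(x!) = gammaln(x + 1), the factorials never overflow."""
    x = np.asarray(x, dtype=float)
    y = np.sum(x)
    log_u = np.sum(gammaln(x + 1))
    return np.log(2) * y - log_u
```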
$\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\bs}{\boldsymbol}$
In this section, we will study a number of important hypothesis tests that fall under the general term chi-square tests. These are named, as you might guess, because in each case the test statistic has (in the limit) a chi-square distribution. Although there are several different tests in this general category, they all share some common themes:
• In each test, there are one or more underlying multinomial samples. Of course, the multinomial model includes the Bernoulli model as a special case.
• Each test works by comparing the observed frequencies of the various outcomes with expected frequencies under the null hypothesis.
• If the model is incompletely specified, some of the expected frequencies must be estimated; this reduces the degrees of freedom in the limiting chi-square distribution.
We will start with the simplest case, where the derivation is the most straightforward; in fact this test is equivalent to a test we have already studied. We then move to successively more complicated models.
The One-Sample Bernoulli Model
Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a random sample from the Bernoulli distribution with unknown success parameter $p \in (0, 1)$. Thus, these are independent random variables taking the values 1 and 0 with probabilities $p$ and $1 - p$ respectively. We want to test $H_0: p = p_0$ versus $H_1: p \ne p_0$, where $p_0 \in (0, 1)$ is specified. Of course, we have already studied such tests in the Bernoulli model. But keep in mind that our methods in this section will generalize to a variety of new models that we have not yet studied.
Let $O_1 = \sum_{j=1}^n X_j$ and $O_0 = n - O_1 = \sum_{j=1}^n (1 - X_j)$. These statistics give the number of times (frequency) that outcomes 1 and 0 occur, respectively. Moreover, we know that each has a binomial distribution; $O_1$ has parameters $n$ and $p$, while $O_0$ has parameters $n$ and $1 - p$. In particular, $\E(O_1) = n p$, $\E(O_0) = n (1 - p)$, and $\var(O_1) = \var(O_0) = n p (1 - p)$. Moreover, recall that $O_1$ is sufficient for $p$. Thus, any good test statistic should be a function of $O_1$. Next, recall that when $n$ is large, the distribution of $O_1$ is approximately normal, by the central limit theorem. Let $Z = \frac{O_1 - n p_0}{\sqrt{n p_0 (1 - p_0)}}$ Note that $Z$ is the standard score of $O_1$ under $H_0$. Hence if $n$ is large, $Z$ has approximately the standard normal distribution under $H_0$, and therefore $V = Z^2$ has approximately the chi-square distribution with 1 degree of freedom under $H_0$. As usual, let $\chi_k^2$ denote the quantile function of the chi-square distribution with $k$ degrees of freedom.
An approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_1^2(1 - \alpha)$.
The test above is equivalent to the unbiased test with test statistic $Z$ (the approximate normal test) derived in the section on Tests in the Bernoulli model.
For purposes of generalization, the critical result in the next exercise is a special representation of $V$. Let $e_0 = n (1 - p_0)$ and $e_1 = n p_0$. Note that these are the expected frequencies of the outcomes 0 and 1, respectively, under $H_0$.
$V$ can be written in terms of the observed and expected frequencies as follows: $V = \frac{(O_0 - e_0)^2}{e_0} + \frac{(O_1 - e_1)^2}{e_1}$
This representation shows that our test statistic $V$ measures the discrepancy between the expected frequencies, under $H_0$, and the observed frequencies. Of course, large values of $V$ are evidence in favor of $H_1$. Finally, note that although there are two terms in the expansion of $V$ above, there is only one degree of freedom since $O_0 + O_1 = n$. The observed and expected frequencies could be stored in a $1 \times 2$ table.
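A quick Python sketch of this chi-square version of the test (illustrative only); the example reproduces the first computational exercise below, 55 heads in 100 tosses of a supposedly fair coin.

```python
import numpy as np
from scipy.stats import chi2

def bernoulli_chi2_test(o1, n, p0, alpha=0.05):
    """Chi-square test that the success probability is p0, based on the
    observed frequencies O1 (successes) and O0 = n - O1."""
    observed = np.array([n - o1, o1], dtype=float)
    expected = np.array([n * (1 - p0), n * p0])
    V = float(np.sum((observed - expected) ** 2 / expected))
    return V, chi2.sf(V, df=1), V > chi2.ppf(1 - alpha, df=1)

# 55 heads in 100 tosses: V = 1 and the P-value is about 0.32.
print(bernoulli_chi2_test(55, 100, 0.5))
```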
The Multi-Sample Bernoulli Model
Suppose now that we have samples from several (possibly) different, independent Bernoulli trials processes. Specifically, suppose that $\bs{X}_i = (X_{i,1}, X_{i,2}, \ldots, X_{i,n_i})$ is a random sample of size $n_i$ from the Bernoulli distribution with unknown success parameter $p_i \in (0, 1)$ for each $i \in \{1, 2, \ldots, m\}$. Moreover, the samples $(\bs{X}_1, \bs{X}_2, \ldots, \bs{X}_m)$ are independent. We want to test hypotheses about the unknown parameter vector $\bs{p} = (p_1, p_2, \ldots, p_m)$. There are two common cases that we consider below, but first let's set up the essential notation that we will need for both cases. For $i \in \{1, 2, \ldots, m\}$ and $j \in \{0, 1\}$, let $O_{i,j}$ denote the number of times that outcome $j$ occurs in sample $\bs{X}_i$. The observed frequency $O_{i,j}$ has a binomial distribution; $O_{i,1}$ has parameters $n_i$ and $p_i$ while $O_{i,0}$ has parameters $n_i$ and $1 - p_i$.
The Completely Specified Case
Consider a specified parameter vector $\bs{p}_0 = (p_{0,1}, p_{0,2}, \ldots, p_{0,m}) \in (0, 1)^m$. We want to test the null hypothesis $H_0: \bs{p} = \bs{p}_0$, versus $H_1: \bs{p} \ne \bs{p}_0$. Since the null hypothesis specifies the value of $p_i$ for each $i$, this is called the completely specified case. Now let $e_{i,0} = n_i (1 - p_{0,i})$ and let $e_{i,1} = n_i p_{0,i}$. These are the expected frequencies of the outcomes 0 and 1, respectively, from sample $\bs{X}_i$ under $H_0$.
If $n_i$ is large for each $i$, then under $H_0$ the following test statistic has approximately the chi-square distribution with $m$ degrees of freedom: $V = \sum_{i=1}^m \sum_{j=0}^1 \frac{(O_{i,j} - e_{i,j})^2}{e_{i,j}}$
Proof
This follows from the result above and independence.
As a rule of thumb, large means that we need $e_{i,j} \ge 5$ for each $i \in \{1, 2, \ldots, m\}$ and $j \in \{0, 1\}$. But of course, the larger these expected frequencies the better.
Under the large sample assumption, an approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_m^2(1 - \alpha)$.
Once again, note that the test statistic $V$ measures the discrepancy between the expected and observed frequencies, over all outcomes and all samples. There are $2 \, m$ terms in the expansion of $V$ above, but only $m$ degrees of freedom, since $O_{i,0} + O_{i,1} = n_i$ for each $i \in \{1, 2, \ldots, m\}$. The observed and expected frequencies could be stored in an $m \times 2$ table.
The Equal Probability Case
Suppose now that we want to test the null hypothesis $H_0: p_1 = p_2 = \cdots = p_m$ that all of the success probabilities are the same, versus the complementary alternative hypothesis $H_1$ that the probabilities are not all the same. Note, in contrast to the previous model, that the null hypothesis does not specify the value of the common success probability $p$. But note also that under the null hypothesis, the $m$ samples can be combined to form one large sample of Bernoulli trials with success probability $p$. Thus, a natural approach is to estimate $p$ and then define the test statistic that measures the discrepancy between the expected and observed frequencies, just as before. The challenge will be to find the distribution of the test statistic.
Let $n = \sum_{i=1}^m n_i$ denote the total sample size when the samples are combined. Then the overall sample mean, which in this context is the overall sample proportion of successes, is $P = \frac{1}{n} \sum_{i=1}^m \sum_{j=1}^{n_i} X_{i,j} = \frac{1}{n} \sum_{i=1}^m O_{i,1}$ The sample proportion $P$ is the best estimate of $p$, in just about any sense of the word. Next, let $E_{i,0} = n_i \, (1 - P)$ and $E_{i,1} = n_i \, P$. These are the estimated expected frequencies of 0 and 1, respectively, from sample $\bs{X}_i$ under $H_0$. Of course these estimated frequencies are now statistics (and hence random) rather than parameters. Just as before, we define our test statistic $V = \sum_{i=1}^m \sum_{j=0}^1 \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}$ It turns out that under $H_0$, the distribution of $V$ converges to the chi-square distribution with $m - 1$ degrees of freedom as $n \to \infty$.
An approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_{m-1}^2(1 - \alpha)$.
Intuitively, we lost a degree of freedom over the completely specified case because we had to estimate the unknown common success probability $p$. Again, the observed and expected frequencies could be stored in an $m \times 2$ table.
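Here is a minimal Python sketch of the equal-probability test (illustrative function name); the example reproduces part 3 of the three-coin exercise below.

```python
import numpy as np
from scipy.stats import chi2

def equal_proportions_test(successes, sizes):
    """Chi-square test that several Bernoulli samples share a common success
    probability; successes[i] and sizes[i] describe sample i."""
    successes = np.asarray(successes, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    p_hat = successes.sum() / sizes.sum()            # pooled estimate P
    O = np.column_stack([sizes - successes, successes])
    E = np.column_stack([sizes * (1 - p_hat), sizes * p_hat])
    V = float(np.sum((O - E) ** 2 / E))
    df = len(sizes) - 1
    return V, chi2.sf(V, df)

# Three coins with 29, 23, 42 heads in 50, 40, 60 tosses: V is about 2.30.
print(equal_proportions_test([29, 23, 42], [50, 40, 60]))
```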
The One-Sample Multinomial Model
Our next model generalizes the one-sample Bernoulli model in a different direction. Suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of multinomial trials. Thus, these are independent, identically distributed random variables, each taking values in a set $S$ with $k$ elements. If we want, we can assume that $S = \{0, 1, \ldots, k - 1\}$; the one-sample Bernoulli model then corresponds to $k = 2$. Let $f$ denote the common probability density function of the sample variables on $S$, so that $f(j) = \P(X_i = j)$ for $i \in \{1, 2, \ldots, n\}$ and $j \in S$. The values of $f$ are assumed unknown, but of course we must have $\sum_{j \in S} f(j) = 1$, so there are really only $k - 1$ unknown parameters. For a given probability density function $f_0$ on $S$ we want to test $H_0: f = f_0$ versus $H_1: f \ne f_0$.
By this time, our general approach should be clear. We let $O_j$ denote the number of times that outcome $j \in S$ occurs in sample $\bs{X}$: $O_j = \sum_{i=1}^n \bs{1}(X_i = j)$ Note that $O_j$ has the binomial distribution with parameters $n$ and $f(j)$. Thus, $e_j = n \, f_0(j)$ is the expected number of times that outcome $j$ occurs, under $H_0$. Our test statistic, of course, is $V = \sum_{j \in S} \frac{(O_j - e_j)^2}{e_j}$ It turns out that under $H_0$, the distribution of $V$ converges to the chi-square distribution with $k - 1$ degrees of freedom as $n \to \infty$. Note that there are $k$ terms in the expansion of $V$, but only $k - 1$ degrees of freedom since $\sum_{j \in S} O_j = n$.
An approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_{k-1}^2(1 - \alpha)$.
Again, as a rule of thumb, we need $e_j \ge 5$ for each $j \in S$, but the larger the expected frequencies the better.
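A short Python sketch of the goodness of fit statistic for the one-sample multinomial model (illustrative); the example reproduces part 1 of the die exercise below.

```python
import numpy as np
from scipy.stats import chi2

def multinomial_gof(observed, f0):
    """Chi-square goodness of fit statistic V and P-value; observed is the
    vector of frequencies and f0 the hypothesized pdf on the k outcomes."""
    observed = np.asarray(observed, dtype=float)
    expected = observed.sum() * np.asarray(f0, dtype=float)
    V = float(np.sum((observed - expected) ** 2 / expected))
    return V, chi2.sf(V, df=len(observed) - 1)

# 240 throws of a die with frequencies 57, 39, 28, 28, 36, 52: V is about 18.45.
print(multinomial_gof([57, 39, 28, 28, 36, 52], [1/6] * 6))
```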
The Multi-Sample Multinomial Model
As you might guess, our final generalization is to the multi-sample multinomial model. Specifically, suppose that $\bs{X}_i = (X_{i,1}, X_{i,2}, \ldots, X_{i,n_i})$ is a random sample of size $n_i$ from a distribution on a set $S$ with $k$ elements, for each $i \in \{1, 2, \ldots, m\}$. Moreover, we assume that the samples $(\bs{X}_1, \bs{X}_2, \ldots, \bs{X}_m)$ are independent. Again there is no loss in generality if we take $S = \{0, 1, \ldots, k - 1\}$. Then $k = 2$ reduces to the multi-sample Bernoulli model, and $m = 1$ corresponds to the one-sample multinomial model.
Let $f_i$ denote the common probability density function of the variables in sample $\bs{X}_i$, so that $f_i(j) = \P(X_{i,l} = j)$ for $i \in \{1, 2, \ldots, m\}$, $l \in \{1, 2, \ldots, n_i\}$, and $j \in S$. These are generally unknown, so that our vector of parameters is the vector of probability density functions: $\bs{f} = (f_1, f_2, \ldots, f_m)$. Of course, $\sum_{j \in S} f_i(j) = 1$ for $i \in \{1, 2, \ldots, m\}$, so there are actually $m \, (k - 1)$ unknown parameters. We are interested in testing hypotheses about $\bs{f}$. As in the multi-sample Bernoulli model, there are two common cases that we consider below, but first let's set up the essential notation that we will need for both cases. For $i \in \{1, 2, \ldots, m\}$ and $j \in S$, let $O_{i,j}$ denote the number of times that outcome $j$ occurs in sample $\bs{X}_i$. The observed frequency $O_{i,j}$ has a binomial distribution with parameters $n_i$ and $f_i(j)$.
The Completely Specified Case
Consider a given vector of probability density functions on $S$, denoted $\bs{f}_0 = (f_{0,1}, f_{0,2}, \ldots, f_{0,m})$. We want to test the null hypothesis $H_0: \bs{f} = \bs{f}_0$, versus $H_1: \bs{f} \ne \bs{f}_0$. Since the null hypothesis specifies the value of $f_i(j)$ for each $i$ and $j$, this is called the completely specified case. Let $e_{i,j} = n_i \, f_{0,i}(j)$. This is the expected frequency of outcome $j$ in sample $\bs{X}_i$ under $H_0$.
If $n_i$ is large for each $i$, then under $H_0$, the test statistic $V$ below has approximately the chi-square distribution with $m \, (k - 1)$ degrees of freedom: $V = \sum_{i=1}^m \sum_{j \in S} \frac{(O_{i,j} - e_{i,j})^2}{e_{i,j}}$
Proof
This follows from the one-sample multinomial case and independence.
As usual, our rule of thumb is that we need $e_{i,j} \ge 5$ for each $i \in \{1, 2, \ldots, m\}$ and $j \in S$. But of course, the larger these expected frequencies the better.
Under the large sample assumption, an approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_{m \, (k - 1)}^2(1 - \alpha)$.
As always, the test statistic $V$ measures the discrepancy between the expected and observed frequencies, over all outcomes and all samples. There are $m k$ terms in the expansion of $V$ above, but we lose $m$ degrees of freedom, since $\sum_{j \in S} O_{i,j} = n_i$ for each $i \in \{1, 2, \ldots, m\}$.
The Equal PDF Case
Suppose now that we want to test the null hypothesis $H_0: f_1 = f_2 = \cdots = f_m$ that all of the probability density functions are the same, versus the complementary alternative hypothesis $H_1$ that the probability density functions are not all the same. Note, in contrast to the previous model, that the null hypothesis does not specify the value of the common probability density function $f$. But note also that under the null hypothesis, the $m$ samples can be combined to form one large sample of multinomial trials with probability density function $f$. Thus, a natural approach is to estimate the values of $f$ and then define the test statistic that measures the discrepancy between the expected and observed frequencies, just as before.
Let $n = \sum_{i=1}^m n_i$ denote the total sample size when the samples are combined. Under $H_0$, our best estimate of $f(j)$ is $P_j = \frac{1}{n} \sum_{i=1}^m O_{i,j}$ Hence our estimate of the expected frequency of outcome $j$ in sample $\bs{X}_i$ under $H_0$ is $E_{i,j} = n_i P_j$. Again, this estimated frequency is now a statistic (and hence random) rather than a parameter. Just as before, we define our test statistic $V = \sum_{i=1}^m \sum_{j \in S} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}$ As you no doubt expect by now, it turns out that under $H_0$, the distribution of $V$ converges to a chi-square distribution as $n \to \infty$. But let's see if we can determine the degrees of freedom heuristically.
The limiting distribution of $V$ has $(k - 1) (m - 1)$ degrees of freedom.
Proof
There are $k \, m$ terms in the expansion of $V$. We lose $m$ degrees of freedom since $\sum_{j \in S} O_{i,j} = n_i$ for each $i \in \{1, 2, \ldots, m\}$. We must estimate all but one of the probabilities $f(j)$ for $j \in S$, thus losing $k - 1$ degrees of freedom.
An approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_{(k - 1) \, (m - 1)}^2(1 -\alpha)$.
A Goodness of Fit Test
A goodness of fit test is an hypothesis test that an unknown sampling distribution is a particular, specified distribution or belongs to a parametric family of distributions. Such tests are clearly fundamental and important. The one-sample multinomial model leads to a quite general goodness of fit test.
To set the stage, suppose that we have an observable random variable $X$ for an experiment, taking values in a general set $S$. Random variable $X$ might have a continuous or discrete distribution, and might be single-variable or multi-variable. We want to test the null hypothesis that $X$ has a given, completely specified distribution, or that the distribution of $X$ belongs to a particular parametric family.
Our first step, in either case, is to sample from the distribution of $X$ to obtain a sequence of independent, identically distributed variables $\bs{X} = (X_1, X_2, \ldots, X_n)$. Next, we select $k \in \N_+$ and partition $S$ into $k$ (disjoint) subsets. We will denote the partition by $\{A_j: j \in J\}$ where $\#(J) = k$. Next, we define the sequence of random variables $\bs{Y} = (Y_1, Y_2, \ldots, Y_n)$ by $Y_i = j$ if and only if $X_i \in A_j$ for $i \in \{1, 2, \ldots, n\}$ and $j \in J$.
$\bs{Y}$ is a multinomial trials sequence with parameters $n$ and $f$, where $f(j) = \P(X \in A_j)$ for $j \in J$.
The Completely Specified Case
Let $H$ denote the statement that $X$ has a given, completely specified distribution. Let $f_0$ denote the probability density function on $J$ defined by $f_0(j) = \P(X \in A_j \mid H)$ for $j \in J$. To test hypothesis $H$, we can formally test $H_0: f = f_0$ versus $H_1: f \ne f_0$, which of course, is precisely the problem we solved in the one-sample multinomial model.
Generally, we would partition the space $S$ into as many subsets as possible, subject to the restriction that the expected frequencies all be at least 5.
The Partially Specified Case
Often we don't really want to test whether $X$ has a completely specified distribution (such as the normal distribution with mean 5 and variance 9), but rather whether the distribution of $X$ belongs to a specified parametric family (such as the normal). A natural course of action in this case would be to estimate the unknown parameters and then proceed just as above. As we have seen before, the expected frequencies would be statistics $E_j$ because they would be based on the estimated parameters. As a rule of thumb, we lose a degree of freedom in the chi-square statistic $V$ for each parameter that we estimate, although the precise mathematics can be complicated.
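As an illustration of the partially specified case, here is a hedged Python sketch of a goodness of fit test that the data come from some normal distribution: the mean and standard deviation are estimated from the data, the observations are binned at chosen interior cut points, and two degrees of freedom are subtracted per the rule of thumb above. The function and its interface are hypothetical.

```python
import numpy as np
from scipy.stats import chi2, norm

def gof_normal_binned(x, cuts):
    """Chi-square goodness of fit test that the data are normal, with the
    mean and standard deviation estimated from the data.  cuts are interior
    cut points; the cells are (-inf, c1], (c1, c2], ..., (c_{k-1}, inf)."""
    x = np.asarray(x, dtype=float)
    cuts = np.asarray(cuts, dtype=float)
    n = len(x)
    mu_hat, sd_hat = x.mean(), x.std(ddof=1)
    cdf = norm.cdf(cuts, mu_hat, sd_hat)
    probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # estimated cell probabilities
    idx = np.searchsorted(cuts, x, side="right")          # cell index of each point
    observed = np.bincount(idx, minlength=len(cuts) + 1)
    expected = n * probs
    V = float(np.sum((observed - expected) ** 2 / expected))
    df = len(probs) - 1 - 2            # lose 2 degrees of freedom for mu and sigma
    return V, chi2.sf(V, df)
```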
A Test of Independence
Suppose that we have observable random variables $X$ and $Y$ for an experiment, where $X$ takes values in a set $S$ with $k$ elements, and $Y$ takes values in a set $T$ with $m$ elements. Let $f$ denote the joint probability density function of $(X, Y)$, so that $f(i, j) = \P(X = i, Y = j)$ for $i \in S$ and $j \in T$. Recall that the marginal probability density functions of $X$ and $Y$ are the functions $g$ and $h$ respectively, where \begin{align} g(i) = & \sum_{j \in T} f(i, j), \quad i \in S \ h(j) = & \sum_{i \in S} f(i, j), \quad j \in T \end{align} Usually, of course, $f$, $g$, and $h$ are unknown. In this section, we are interested in testing whether $X$ and $Y$ are independent, a basic and important test. Formally then we want to test the null hypothesis $H_0: f(i, j) = g(i) \, h(j), \quad (i, j) \in S \times T$ versus the complementary alternative $H_1$.
Our first step, of course, is to draw a random sample $(\bs{X}, \bs{Y}) = ((X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n))$ from the distribution of $(X, Y)$. Since the state spaces are finite, this sample forms a sequence of multinomial trials. Thus, with our usual notation, let $O_{i,j}$ denote the number of times that $(i, j)$ occurs in the sample, for each $(i, j) \in S \times T$. This statistic has the binomial distribution with trial parameter $n$ and success parameter $f(i, j)$. Under $H_0$, the success parameter is $g(i) \, h(j)$. However, since we don't know the success parameters, we must estimate them in order to compute the expected frequencies. Our best estimate of $f(i, j)$ is the sample proportion $\frac{1}{n} O_{i,j}$. Thus, our best estimates of $g(i)$ and $h(j)$ are $\frac{1}{n} N_i$ and $\frac{1}{n} M_j$, respectively, where $N_i$ is the number of times that $i$ occurs in sample $\bs{X}$ and $M_j$ is the number of times that $j$ occurs in sample $\bs{Y}$: \begin{align} N_i & = \sum_{j \in T} O_{i,j} \ M_j & = \sum_{i \in S} O_{i,j} \end{align} Thus, our estimate of the expected frequency of $(i, j)$ under $H_0$ is $E_{i,j} = n \, \frac{1}{n} \, N_i \frac{1}{n} \, M_j = \frac{1}{n} \, N_i \, M_j$ Of course, we define our test statistic by $V = \sum_{i \in S} \sum_{j \in T} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}$ As you now expect, the distribution of $V$ converges to a chi-square distribution as $n \to \infty$. But let's see if we can determine the appropriate degrees of freedom on heuristic grounds.
The limiting distribution of $V$ has $(k - 1) \, (m - 1)$ degrees of freedom.
Proof
There are $k m$ terms in the expansion of $V$. We lose one degree of freedom since $\sum_{i \in S} \sum_{j \in T} O_{i,j} = n$. We must estimate all but one of the probabilities $g(i)$ for $i \in S$, thus losing $k - 1$ degrees of freedom. We must estimate all but one of the probabilities $h(j)$ for $j \in T$, thus losing $m - 1$ degrees of freedom. This leaves $k m - 1 - (k - 1) - (m - 1) = (k - 1)(m - 1)$ degrees of freedom.
An approximate test of $H_0$ versus $H_1$ at the $\alpha$ level of significance is to reject $H_0$ if and only if $V \gt \chi_{(k-1) (m-1)}^2(1 - \alpha)$.
The observed frequencies are often recorded in a $k \times m$ table, known as a contingency table, so that $O_{i,j}$ is the number in row $i$ and column $j$. In this setting, note that $N_i$ is the sum of the frequencies in the $i$th row and $M_j$ is the sum of the frequencies in the $j$th column. Also, for historical reasons, the random variables $X$ and $Y$ are sometimes called factors and the possible values of the variables categories.
Computational and Simulation Exercises
Computational Exercises
In each of the following exercises, specify the number of degrees of freedom of the chi-square statistic, give the value of the statistic and compute the $P$-value of the test.
A coin is tossed 100 times, resulting in 55 heads. Test the null hypothesis that the coin is fair.
Answer
1 degree of freedom, $V = 1$, $P = 0.3173$.
Suppose that we have 3 coins. The coins are tossed, yielding the data in the following table:
Heads Tails
Coin 1 29 21
Coin 2 23 17
Coin 3 42 18
1. Test the null hypothesis that all 3 coins are fair.
2. Test the null hypothesis that coin 1 has probability of heads $\frac{3}{5}$; coin 2 is fair; and coin 3 has probability of heads $\frac{2}{3}$.
3. Test the null hypothesis that the 3 coins have the same probability of heads.
Answer
1. 3 degrees of freedom, $V = 11.78$, $P = 0.008$.
2. 3 degrees of freedom, $V = 1.283$, $P = 0.733$.
3. 2 degrees of freedom, $V = 2.301$, $P = 0.316$.
A die is thrown 240 times, yielding the data in the following table:
Score 1 2 3 4 5 6
Frequency 57 39 28 28 36 52
1. Test the null hypothesis that the die is fair.
2. Test the null hypothesis that the die is an ace-six flat die (faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each).
Answer
1. 5 degrees of freedom, $V = 18.45$, $P = 0.0024$.
2. 5 degrees of freedom, $V = 5.383$, $P = 0.3709$.
Two dice are thrown, yielding the data in the following table:
Score 1 2 3 4 5 6
Die 1 22 17 22 13 22 24
Die 2 44 24 19 19 18 36
1. Test the null hypothesis that die 1 is fair and die 2 is an ace-six flat.
2. Test the null hypothesis that the two dice have the same probability distribution.
Answer
1. 10 degrees of freedom, $V = 6.2$, $P = 0.798$.
2. 5 degrees of freedom, $V = 7.103$, $P = 0.213$.
A university classifies faculty by rank as instructors, assistant professors, associate professors, and full professors. The data, by faculty rank and gender, are given in the following contingency table. Test to see if faculty rank and gender are independent.
Faculty Instructor Assistant Professor Associate Professor Full Professor
Male 62 238 185 115
Female 118 122 123 37
Answer
3 degrees of freedom, $V = 70.111$, $P \approx 0$.
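As a check on this computation, here is a minimal sketch in Python (a hypothetical illustration, assuming NumPy and SciPy are available) that builds the expected frequencies $E_{i,j} = \frac{1}{n} N_i M_j$ directly from the contingency table above:

```python
import numpy as np
from scipy.stats import chi2

# Observed frequencies: rows are gender, columns are faculty rank
O = np.array([[62, 238, 185, 115],
              [118, 122, 123, 37]])
n = O.sum()
N = O.sum(axis=1, keepdims=True)   # row totals N_i
M = O.sum(axis=0, keepdims=True)   # column totals M_j
E = N * M / n                      # expected frequencies under independence
V = ((O - E) ** 2 / E).sum()       # chi-square statistic
df = (O.shape[0] - 1) * (O.shape[1] - 1)
print(V, df, chi2.sf(V, df))       # V is about 70.11 with 3 degrees of freedom; the P-value is essentially 0
```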
Data Analysis Exercises
The Buffon trial data set gives the results of 104 repetitions of Buffon's needle experiment. The number of crack crossings is 56. In theory, this data set should correspond to 104 Bernoulli trials with success probability $p = \frac{2}{\pi}$. Test to see if this is reasonable.
Answer
1 degree of freedom, $V = 4.332$, $P = 0.037$.
Test to see if the alpha emissions data come from a Poisson distribution.
Answer
We partition $\N$ into 17 subsets: $\{0, 1\}$, $\{x\}$ for $x \in \{2, 3, \ldots, 16\}$, and $\{17, 18, \ldots \}$. There are 15 degrees of freedom. The estimated Poisson parameter is 8.367, $V = 9.644$, $P = 0.842$.
Test to see if Michelson's velocity of light data come from a normal distribution.
Answer
We use the following partition of $\R$: $\{(-\infty, 750), [750, 775), [775, 800), [800, 825), [825, 850), [850, 875), [875, 900), [900, 925), [925, 950), [950, 975), [975, \infty)\}$. This gives 8 degrees of freedom, $V = 11.443$, $P = 0.178$.
Simulation Exercises
In the simulation exercises below, you will be able to explore the goodness of fit test empirically.
In the dice goodness of fit experiment, set the sampling distribution to fair, the sample size to 50, and the significance level to 0.1. Set the test distribution as indicated below and in each case, run the simulation 1000 times. In case (a), give the empirical estimate of the significance level of the test and compare with 0.1. In the other cases, give the empirical estimate of the power of the test. Rank the distributions in (b)-(d) in increasing order of apparent power. Do your results seem reasonable?
1. fair
2. ace-six flats
3. the symmetric, unimodal distribution
4. the distribution skewed right
In the dice goodness of fit experiment, set the sampling distribution to ace-six flats, the sample size to 50, and the significance level to 0.1. Set the test distribution as indicated below and in each case, run the simulation 1000 times. In case (a), give the empirical estimate of the significance level of the test and compare with 0.1. In the other cases, give the empirical estimate of the power of the test. Rank the distributions in (b)-(d) in increasing order of apparent power. Do your results seem reasonable?
1. fair
2. ace-six flats
3. the symmetric, unimodal distribution
4. the distribution skewed right
In the dice goodness of fit experiment, set the sampling distribution to the symmetric, unimodal distribution, the sample size to 50, and the significance level to 0.1. Set the test distribution as indicated below and in each case, run the simulation 1000 times. In case (a), give the empirical estimate of the significance level of the test and compare with 0.1. In the other cases, give the empirical estimate of the power of the test. Rank the distributions in (b)-(d) in increasing order of apparent power. Do your results seem reasonable?
1. the symmetric, unimodal distribution
2. fair
3. ace-six flats
4. the distribution skewed right
In the dice goodness of fit experiment, set the sampling distribution to the distribution skewed right, the sample size to 50, and the significance level to 0.1. Set the test distribution as indicated below and in each case, run the simulation 1000 times. In case (a), give the empirical estimate of the significance level of the test and compare with 0.1. In the other cases, give the empirical estimate of the power of the test. Rank the distributions in (b)-(d) in increasing order of apparent power. Do your results seem reasonable?
1. the distribution skewed right
2. fair
3. ace-six flats
4. the symmetric, unimodal distribution
Suppose that $D_1$ and $D_2$ are different distributions. Is the power of the test with sampling distribution $D_1$ and test distribution $D_2$ the same as the power of the test with sampling distribution $D_2$ and test distribution $D_1$? Make a conjecture based on your results in the previous three exercises.
In the dice goodness of fit experiment, set the sampling and test distributions to fair and the significance level to 0.05. Run the experiment 1000 times for each of the following sample sizes. In each case, give the empirical estimate of the significance level and compare with 0.05.
1. $n = 10$
2. $n = 20$
3. $n = 40$
4. $n = 100$
In the dice goodness of fit experiment, set the sampling distribution to fair, the test distributions to ace-six flats, and the significance level to 0.05. Run the experiment 1000 times for each of the following sample sizes. In each case, give the empirical estimate of the power of the test. Do the powers seem to be converging?
1. $n = 10$
2. $n = 20$
3. $n = 40$
4. $n = 100$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/09%3A_Hypothesis_Testing/9.06%3A_Chi-Square_Tests.txt |
In this chapter, we explore several problems in geometric probability. These problems are interesting, conceptually clear, and the analysis is relatively simple. Thus, they are good problems for the student of probability. In addition, Buffon's problems and Bertrand's problem are historically famous, and contributed significantly to the early development of probability theory.
• 10.1: Buffon's Problems
Buffon's experiments are very old and famous random experiments, named after comte de Buffon. These experiments are considered to be among the first problems in geometric probability.
• 10.2: Bertrand's Paradox
Bertrand's problem is to find the probability that a random chord on a circle will be longer than the length of a side of the inscribed equilateral triangle. The problem is named after the French mathematician Joseph Louis Bertrand, who studied the problem in 1889.
• 10.3: Random Triangles
Suppose that a stick is randomly broken in two places. What is the probability that the three pieces form a triangle?
10: Geometric Models
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\area}{\text{area}}$
Buffon's experiments are very old and famous random experiments, named after comte de Buffon. These experiments are considered to be among the first problems in geometric probability.
Buffon's Coin Experiment
Buffon's coin experiment consists of dropping a coin randomly on a floor covered with identically shaped tiles. The event of interest is that the coin crosses a crack between tiles. We will model Buffon's coin problem with square tiles of side length 1—assuming the side length is 1 is equivalent to taking the side length as the unit of measurement.
Assumptions
First, let us define the experiment mathematically. As usual, we will idealize the physical objects by assuming that the coin is a perfect circle with radius $r$ and that the cracks between tiles are line segments. A natural way to describe the outcome of the experiment is to record the center of the coin relative to the center of the tile where the coin happens to fall. More precisely, we will construct coordinate axes so that the tile where the coin falls occupies the square $S = \left[ -\frac{1}{2}, \frac{1}{2} \right]^2$.
Now when the coin is tossed, we will denote the center of the coin by $(X, Y) \in S$ so that $S$ is our sample space and $X$ and $Y$ are our basic random variables. Finally, we will assume that $r \lt \frac{1}{2}$ so that it is at least possible for the coin to fall inside the square without touching a crack.
Next, we need to define an appropriate probability measure that describes our basic random vector $(X, Y)$. If the coin falls randomly on the floor, then it is natural to assume that $(X, Y)$ is uniformly distributed on $S$. By definition, this means that
$\P[(X, Y) \in A] = \frac{\area(A)}{\area(S)}, \quad A \subseteq S$
Run Buffon's coin experiment with the default settings. Watch how the points seem to fill the sample space $S$ in a uniform manner.
The Probability of a Crack Crossing
Our interest is in the probability of the event $C$ that the coin crosses a crack.
The probability of a crack crossing is $\P(C) = 1 - (1 - 2 r)^2$.
Proof
The complementary event $C^c$ occurs if and only if the coin lies entirely within the tile, that is, if and only if the center $(X, Y)$ falls in the smaller square $\left[-\left(\frac{1}{2} - r\right), \frac{1}{2} - r\right]^2$, which has area $(1 - 2 r)^2$. Since $\area(S) = 1$, it follows that $\P(C^c) = (1 - 2 r)^2$ and hence $\P(C) = 1 - (1 - 2 r)^2$.
In Buffon's coin experiment, change the radius with the scroll bar and watch how the events $C$ and $C^c$ change. Run the experiment with various values of $r$ and compare the physical experiment with the points in the scatterplot. Compare the relative frequency of $C$ to the probability of $C$.
The convergence of the relative frequency of an event (as the experiment is repeated) to the probability of the event is a special case of the law of large numbers.
Solve Buffon's coin problem with rectangular tiles that have height $h$ and width $w$.
Answer
$1 - \frac{(h - 2 \, r)(w - 2 \, r)}{h \, w}, \quad r \lt \min \left\{ \frac{h}{2}, \frac{w}{2} \right\}$
Solve Buffon's coin problem with equilateral triangular tiles that have side length 1.
Recall that random numbers are simulation of independent random variables, each with the standard uniform distribution, that is, the continuous uniform distribution on the interval $(0, 1)$.
Show how to simulate the center of the coin $(X, Y)$ in Buffon's coin experiment using random numbers.
Answer
$X = U - \frac{1}{2}$, $Y = V - \frac{1}{2}$, where $U$ and $V$ are random numbers.
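A brief simulation sketch in Python, based on this construction (a hypothetical illustration assuming NumPy, not the code behind the applet):

```python
import numpy as np

rng = np.random.default_rng()
r, n = 0.2, 100_000                       # coin radius (less than 1/2) and number of tosses
X = rng.random(n) - 0.5                   # center coordinates, uniform on [-1/2, 1/2]
Y = rng.random(n) - 0.5
# The coin crosses a crack exactly when its center is within r of an edge of the tile
crossings = (np.abs(X) > 0.5 - r) | (np.abs(Y) > 0.5 - r)
print(crossings.mean(), 1 - (1 - 2 * r) ** 2)   # empirical frequency versus P(C)
```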
Buffon's Needle Problem
Buffon's needle experiment consists of dropping a needle on a hardwood floor. The main event of interest is that the needle crosses a crack between floorboards. Strangely enough, the probability of this event leads to a statistical estimate of the number $\pi$!
Assumptions
Our first step is to define the experiment mathematically. Again we idealize the physical objects by assuming that the floorboards are uniform and that each has width 1. We will also assume that the needle has length $L \lt 1$ so that the needle cannot cross more than one crack. Finally, we assume that the cracks between the floorboards and the needle are line segments.
When the needle is dropped, we want to record its orientation relative to the floorboard cracks. One way to do this is to record the angle $X$ that the top half of the needle makes with the line through the center of the needle, parallel to the floorboards, and the distance $Y$ from the center of the needle to the bottom crack. These will be the basic random variables of our experiment, and thus the sample space of the experiment is $S = [0, \pi) \times [0, 1) = \{(x, y): 0 \le x \lt \pi, \; 0 \le y \lt 1\}$
Again, our main modeling assumption is that the needle is tossed randomly on the floor. Thus, a reasonable mathematical assumption might be that the basic random vector $(X, Y)$ is uniformly distributed over the sample space. By definition, this means that $\P[(X, Y) \in A] = \frac{\area(A)}{\area(S)}, \quad A \subseteq S$
Run Buffon's needle experiment with the default settings and watch the outcomes being plotted in the sample space. Note how the points in the scatterplot seem to fill the sample space $S$ in a uniform way.
The Probability of a Crack Crossing
Our main interest is in the event $C$ that the needle crosses a crack between the floorboards.
The event $C$ can be written in terms of the basic angle and distance variables as follows: $C = \left\{ Y \lt \frac{L}{2} \, \sin(X) \right\} \cup \left\{ Y \gt 1 - \frac{L}{2} \, \sin(X) \right\}$
The curves $y = \frac{L}{2} \, \sin(x)$ and $y = 1 - \frac{L}{2} \, \sin(x)$ on the interval $0 \le x \lt \pi$ are shown in blue in the scatterplot of Buffon's needle experiment, and hence event $C$ is the union of the regions below the lower curve and above the upper curve. Thus, the needle crosses a crack precisely when a point falls in this region.
The probability of a crack crossing is $\P(C) = 2 L / \pi$.
Proof
By the previous result, $C$ corresponds to the region below the curve $y = \frac{L}{2} \sin(x)$ together with the region above the curve $y = 1 - \frac{L}{2} \sin(x)$. Since $L \lt 1$, these regions are disjoint, and each has area $\int_0^\pi \frac{L}{2} \sin(x) \, dx = L$. Since $\area(S) = \pi$, it follows that $\P(C) = 2 L / \pi$.
In the Buffon's needle experiment, vary the needle length $L$ with the scroll bar and watch how the event $C$ changes. Run the experiment with various values of $L$ and compare the physical experiment with the points in the scatterplot. Compare the relative frequency of $C$ to the probability of $C$.
The convergence of the relative frequency of an event (as the experiment is repeated) to the probability of the event is a special case of the law of large numbers.
Find the probabilities of the following events in Buffon's needle experiment. In each case, sketch the event as a subset of the sample space.
1. $\{0 \lt X \lt \pi / 2, \; 0 \lt Y \lt 1 / 3\}$
2. $\{1 / 4 \lt Y \lt 2 / 3\}$
3. $\{X \lt Y\}$
4. $\{X + Y \lt 2\}$
Answer
1. $\frac{1}{6}$
2. $\frac{5}{12}$
3. $\frac{1}{2 \pi}$
4. $\frac{3}{2 \pi}$
The Estimate of $\pi$
Suppose that we run Buffon's needle experiment a large number of times. By the law of large numbers, the proportion of crack crossings should be about the same as the probability of a crack crossing. More precisely, we will denote the number of crack crossings in the first $n$ runs by $N_n$. Note that $N_n$ is a random variable for the compound experiment that consists of $n$ replications of the basic needle experiment. Thus, if $n$ is large, we should have $\frac{N_n}{n} \approx \frac{2 L}{\pi}$ and hence $\pi \approx \frac{2 L n}{N_n}$ This is Buffon's famous estimate of $\pi$. In the simulation of Buffon's needle experiment, this estimate is computed on each run and shown numerically in the second table and visually in a graph.
Run the Buffon's needle experiment with needle lengths $L \in \{0.3, 0.5, 0.7, 1\}$. In each case, watch the estimate of $\pi$ as the simulation runs.
Let us analyze the estimation problem more carefully. On each run $j$ we have an indicator variable $I_j$, where $I_j = 1$ if the needle crosses a crack on run $j$ and $I_j = 0$ if the needle does not cross a crack on run $j$. These indicator variables are independent, and identically distributed, since we are assuming independent replications of the experiment. Thus, the sequence forms a Bernoulli trials process.
The number of crack crossings in the first $n$ runs of the experiment is $N_n = \sum_{j=1}^n I_j$ which has the binomial distribution with parameters $n$ and $2 L / \pi$.
The mean and variance of $N_n$ are
1. $\E(N_n) = n \frac{2 L}{\pi}$
2. $\var(N_n) = n \frac{2 L}{\pi} \left(1 - \frac{2 L}{\pi}\right)$
With probability 1, $\frac{N_n}{2 L n} \to \frac{1}{\pi}$ as $n \to \infty$ and $\frac{2 L n}{N_n} \to \pi$ as $n \to \infty$.
Proof
These results follow from the strong law of large numbers.
Thus, we have two basic estimators: $\frac{N_n}{2 L n}$ as an estimator of $\frac{1}{\pi}$ and $\frac{2 L n}{N_n}$ as an estimator of $\pi$. The estimator of $\frac{1}{\pi}$ has several important statistical properties. First, it is unbiased since the expected value of the estimator is the parameter being estimated:
The estimator of $\frac{1}{\pi}$ is unbiased: $\E \left( \frac{N_n}{2 L n} \right) = \frac{1}{\pi}$
Proof
This follows from the results above for the binomial distribution and properties of expected value.
Since this estimator is unbiased, the variance gives the mean square error: $\var \left( \frac{N_n}{2 L n} \right) = \E \left[ \left( \frac{N_n}{2 L n} - \frac{1}{\pi} \right)^2 \right]$
The mean square error of the estimator of $\frac{1}{\pi}$ is $\var \left( \frac{N_n}{2 L n} \right) = \frac{\pi - 2 L}{2 L n \pi^2}$
The variance is a decreasing function of the needle length $L$.
Thus, the estimator of $\frac{1}{\pi}$ improves as the needle length increases. On the other hand, the estimator of $\pi$ is biased; it tends to overestimate $\pi$:
The estimator of $\pi$ is positively biased: $\E \left( \frac{2 L n}{N_n} \right) \ge \pi$
Proof
Use Jensen's inequality.
The estimator of $\pi$ also tends to improve as the needle length increases. This is not easy to see mathematically. However, you can see it empirically.
In the Buffon's needle experiment, run the simulation 5000 times each with $L = 0.3$, $L = 0.5$, $L = 0.7$, and $L = 0.9$. Note how well the estimator seems to work in each case.
Finally, we should note that as a practical matter, Buffon's needle experiment is not a very efficient method of approximating $\pi$. According to Richard Durrett, to estimate $\pi$ to four decimal places with $L = \frac{1}{2}$ would require about 100 million tosses!
Run the Buffon's needle experiment until the estimates of $\pi$ seem to be consistently correct to two decimal places. Note the number of runs required. Try this for needle lengths $L = 0.3$, $L = 0.5$, $L = 0.7$, and $L = 0.9$ and compare the results.
Show how to simulate the angle $X$ and distance $Y$ in Buffon's needle experiment using random numbers.
Answer
$X = \pi U$, $Y = V$, where $U$ and $V$ are random numbers.
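Putting the pieces together gives a simulation sketch in Python (hypothetical, assuming NumPy). Note that it is circular in the sense discussed in the notes below, since it uses $\pi$ to generate the angle:

```python
import numpy as np

rng = np.random.default_rng()
L, n = 0.5, 1_000_000                      # needle length and number of tosses
X = np.pi * rng.random(n)                  # angle, uniform on [0, pi)
Y = rng.random(n)                          # distance to the bottom crack, uniform on [0, 1)
cross = (Y < (L / 2) * np.sin(X)) | (Y > 1 - (L / 2) * np.sin(X))
N = cross.sum()                            # number of crack crossings
print(2 * L * n / N)                       # Buffon's estimate of pi
```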
Notes
Buffon's needle problem is essentially solved by Monte-Carlo integration. In general, Monte-Carlo methods use statistical sampling to approximate the solutions of problems that are difficult to solve analytically. The modern theory of Monte-Carlo methods began with Stanislaw Ulam, who used the methods on problems associated with the development of the hydrogen bomb.
The original needle problem has been extended in many ways, starting with Pierre-Simon Laplace, who considered a floor with rectangular tiles. Indeed, variations on the problem are active research problems even today.
Neil Weiss has pointed out that our computer simulation of Buffon's needle experiment is circular, in the sense that the program assumes knowledge of $\pi$ (you can see this from the simulation result above).
Try to write a computer algorithm for Buffon's needle problem, without assuming the value of $\pi$ or any other transcendental numbers. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/10%3A_Geometric_Models/10.01%3A_Buffon%27s_Problems.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\area}{\text{area}}$
Preliminaries
Statement of the Problem
Bertrand's problem is to find the probability that a random chord on a circle will be longer than the length of a side of the inscribed equilateral triangle. The problem is named after the French mathematician Joseph Louis Bertrand, who studied the problem in 1889.
It turns out, as we will see, that there are (at least) three answers to Bertrand's problem, depending on how one interprets the phrase random chord. The lack of a unique answer was considered a paradox at the time, because it was assumed (naively, in hindsight) that there should be a single natural answer.
Run Bertrand's experiment 100 times for each of the following models. Do not be concerned with the exact meaning of the models, but see if you can detect a difference in the behavior of the outcomes.
1. Uniform distance
2. Uniform angle
3. Uniform endpoint
Mathematical Formulation
To formulate the problem mathematically, let us take $(0, 0)$ as the center of the circle and take the radius of the circle to be 1. These assumptions entail no loss of generality because they amount to measuring distances relative to the center of the circle, and taking the radius of the circle as the unit of length. Now consider a chord on the circle. By rotating the circle, we can assume that one point of the chord is $(1, 0)$ and the other point is $(X, Y)$ where $Y \gt 0$ and $X^2 + Y^2 = 1$.
With these assumptions, the chord is completely specified by giving any one of the following variables
1. The (perpendicular) distance $D$ from the center of the circle to the midpoint of the chord. Note that $0 \le D \le 1$.
2. The angle $A$ between the $x$-axis and the line from the center of the circle to the midpoint of the chord. Note that $0 \le A \le \pi / 2$.
3. The horizontal coordinate $X$. Note that $-1 \le X \le 1$.
The variables are related as follows:
1. $D = \cos(A)$
2. $X = 2 D^2 - 1$
3. $Y = 2 D \sqrt{1 - D^2}$
The inverse relations are given below. Note again that there are one-to-one correspondences between $X$, $A$, and $D$.
1. $A = \arccos(D)$
2. $D = \sqrt{\frac{1}{2}(x + 1)}$
3. $D = \sqrt{\frac{1}{2} \pm \frac{1}{2} \sqrt{1 - y^2} }$
If the chord is generated in a probabilistic way, $D$, $A$, $X$, and $Y$ become random variables. In light of the previous results, specifying the distribution of any of the variables $D$, $A$, or $X$ completely determines the distribution of all four variables.
The angle $A$ is also the angle between the chord and the tangent line to the circle at $(1, 0)$.
Now consider the equilateral triangle inscribed in the circle so that one of the vertices is $(1, 0)$. Consider the chord defined by the upper side of the triangle.
For this chord, the angle, distance, and coordinate variables are given as follows:
1. $a = \pi / 3$
2. $d = 1 / 2$
3. $x = -1 / 2$
4. $y = \sqrt{3} / 2$
Now suppose that a chord is chosen in probabilistic way.
The length of the chord is greater than the length of a side of the inscribed equilateral triangle if and only if the following equivalent conditions occur:
1. $0 \lt D \lt 1 / 2$
2. $\pi / 3 \lt A \lt \pi / 2$
3. $-1 \lt X \lt -1 / 2$
Models
When an object is generated at random, a sequence of natural variables that determines the object should be given an appropriate uniform distribution. The coordinates of the coin center are such a sequence in Buffon's coin experiment; the angle and distance variables are such a sequence in Buffon's needle experiment. The crux of Bertrand's paradox is the fact that the distance $D$, the angle $A$, and the coordinate $X$ each seems to be a natural variable that determine the chord, but different models are obtained, depending on which is given the uniform distribution.
The Model with Uniform Distance
Suppose that $D$ is uniformly distributed on the interval $[0, 1]$.
The solution of Bertrand's problem is $\P \left( D \lt \frac{1}{2} \right) = \frac{1}{2}$
In Bertrand's experiment, select the uniform distance model. Run the experiment 1000 times and compare the relative frequency function of the chord event to the true probability.
The angle $A$ has probability density function $g(a) = \sin(a), \quad 0 \lt a \lt \frac{\pi}{2}$
Proof
This follows from the standard change of variables formula.
The coordinate $X$ has probability density function $h(x) = \frac{1}{\sqrt{8 (x + 1)}}, \quad -1 \lt x \lt 1$
Proof
This follows from the standard change of variables formula.
Note that $A$ and $X$ do not have uniform distributions. Recall that a random number is a simulation of a variable with the standard uniform distribution, that is the continuous uniform distribution on the interval $[0, 1)$.
Show how to simulate $D$, $A$, $X$, and $Y$ using a random number.
Answer
$A = \arccos(D)$, $X = 2 D^2 - 1$, $Y = 2 D \sqrt{1 - D^2}$, where $D$ is a random number.
The Model with Uniform Angle
Suppose that $A$ is uniformly distributed on the interval $(0, \pi / 2)$.
The solution of Bertrand's problem is $\P\left(A \gt \frac{\pi}{3} \right) = \frac{1}{3}$
In Bertrand's experiment, select the uniform angle model. Run the experiment 1000 times and compare the relative frequency function of the chord event to the true probability.
The distance $D$ has probability density function $f(d) = \frac{2}{\pi \sqrt{1 - d^2}}, \quad 0 \lt d \lt 1$
Proof
This follows from the standard change of variables formula.
The coordinate $X$ has probability density function $h(x) = \frac{1}{\pi \sqrt{1 - x^2}}, \quad -1 \lt x \lt 1$
Proof
This follows from the change of variables formula.
Note that $D$ and $X$ do not have uniform distributions.
Show how to simulate $D$, $A$, $X$, and $Y$ using a random number.
Answer
$A = \frac{\pi}{2} U$, $D = \cos(A)$, $X = 2 D^2 - 1$, $Y = 2 D \sqrt{1 - D^2}$, where $U$ is a random number.
The Model with Uniform Endpoint
Suppose that $X$ is uniformly distributed on the interval $(-1, 1)$.
The solution of Bertrand's problem is $\P \left( -1 \lt X \lt -\frac{1}{2} \right) = \frac{1}{4}$
In Bertrand's experiment, select the uniform endpoint model. Run the experiment 1000 times and compare the relative frequency function of the chord event to the true probability.
The distance $D$ has probability density function $f(d) = 2 d, \quad 0 \lt d \lt 1$
Proof
This follows from the change of variables formula.
The angle $A$ has probability density function $g(a) = 2 \sin(a) \cos(a), \quad 0 \lt a \lt \frac{\pi}{2}$
Proof
This follows from the change of variables formula.
Note that $D$ and $A$ do not have uniform distributions; in fact, $D$ has a beta distribution with left parameter 2 and right parameter 1.
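The three answers are easy to compare empirically. The following sketch in Python (a hypothetical illustration assuming NumPy) simulates the distance $D$ under each model, using the relations above, and estimates the probability that the chord is longer than the side of the inscribed triangle:

```python
import numpy as np

rng = np.random.default_rng()
n = 100_000
U = rng.random(n)                           # standard uniform random numbers

D_distance = U                              # uniform distance: D uniform on [0, 1]
D_angle    = np.cos((np.pi / 2) * U)        # uniform angle: A uniform on (0, pi/2), D = cos(A)
X          = 2 * U - 1                      # uniform endpoint: X uniform on (-1, 1)
D_endpoint = np.sqrt((X + 1) / 2)           # D = sqrt((X + 1) / 2)

# In every model, the chord beats the triangle side if and only if D < 1/2
for name, D in [("distance", D_distance), ("angle", D_angle), ("endpoint", D_endpoint)]:
    print(name, np.mean(D < 0.5))           # about 1/2, 1/3, 1/4 respectively
```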
Physical Experiments
Suppose that a random chord is generated by tossing a coin of radius 1 on a table ruled with parallel lines that are distance 2 apart. Which of the models (if any) would apply to this physical experiment?
Answer
Uniform distance
Suppose that a needle is attached to the edge of disk of radius 1. A random chord is generated by spinning the needle. Which of the models (if any) would apply to this physical experiment?
Answer
Uniform angle
Suppose that a thin trough is constructed on the edge of a disk of radius 1. Rolling a ball in the trough generates a random point on the circle, so a random chord is generated by rolling the ball twice. Which of the models (if any) would apply to this physical experiment?
Answer
Uniform angle | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/10%3A_Geometric_Models/10.02%3A_Bertrand%27s_Paradox.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\area}{\text{area}}$
Preliminaries
Statement of the Problem
Suppose that a stick is randomly broken in two places. What is the probability that the three pieces form a triangle?
Without looking below, make a guess.
Run the triangle experiment 50 times. Do not be concerned with all of the information displayed in the app, but just note whether the pieces form a triangle. Would you like to revise your guess?
Mathematical Formulation
As usual, the first step is to model the random experiment mathematically. We will take the length of the stick as our unit of length, so that we can identify the stick with the interval $[0, 1]$. To break the stick into three pieces, we just need to select two points in the interval. Thus, let $X$ denote the first point chosen and $Y$ the second point chosen. Note that $X$ and $Y$ are random variables and hence the sample space of our experiment is $S = [0, 1]^2$. Now, to model the statement that the points are chosen at random, let us assume, as in the previous sections, that $X$ and $Y$ are independent and each is uniformly distributed on $[0, 1]$.
The random point $(X, Y)$ is uniformly distributed on $S = [0, 1]^2$.
Hence $\P\left[(X, Y) \in A\right] = \frac{\area(A)}{\area(S)}$
Triangles
The Probability of a Triangle
The three pieces form a triangle if and only if the triangle inequalities hold: the sum of the lengths of any two pieces must be greater than the length of the third piece.
The event that the pieces form a triangle is $T = T_1 \cup T_2$ where
1. $T_1 = \left\{ (x, y) \in S: y \gt \frac{1}{2}, \; x \lt \frac{1}{2}, \; y - x \lt \frac{1}{2} \right\}$
2. $T_2 = \left\{ (x, y) \in S: x \gt \frac{1}{2}, \; y \lt \frac{1}{2}, \; x - y \lt \frac{1}{2} \right\}$
A sketch of the event $T$ is given below. Curiously, $T$ is composed of triangles!
The probability that the pieces form a triangle is $\P(T) = \frac{1}{4}$.
How close did you come with your initial guess? The relative low value of $\P(T)$ is a bit surprising.
Run the triangle experiment 1000 times and compare the empirical probability of $T^c$ to the true probability.
Triangles of Different Types
Now let us compute the probability that the pieces form a triangle of a given type. Recall that in an acute triangle all three angles are less than 90°, while an obtuse triangle has one angle (and only one) that is greater than 90°. A right triangle, of course, has one 90° angle.
Suppose that a triangle has side lengths $a$, $b$, and $c$, where $c$ is the largest of these. The triangle is
1. acute if and only if $c^2 \lt a^2 + b^2$.
2. obtuse if and only if $c^2 \gt a^2 + b^2$.
3. right if and only if $c^2 = a^2 + b^2$.
Part (c), of course, is the famous Pythagorean theorem, named for the ancient Greek mathematician Pythagoras.
The right triangle equations for the stick pieces are
1. $(y - x)^2 = x^2 + (1 - y)^2$ in $T_1$
2. $(1 - y)^2 = x^2 + (y - x)^2$ in $T_1$
3. $x^2 = (y - x)^2 + (1 - y)^2$ in $T_1$
4. $(x - y)^2 = y^2 + (1 - x)^2$ in $T_2$
5. $(1 - x)^2 = y^2 + (x - y)^2$ in $T_2$
6. $y^2 = (x - y)^2 + (1 - x)^2$ in $T_2$
Let $R$ denote the event that the pieces form a right triangle. Then $\P(R) = 0$.
The event that the pieces form an acute triangle is $A = A_1 \cup A_2$ where
1. $A_1$ is the region inside curves (a), (b), and (c) of the right triangle equations.
2. $A_2$ is the region inside curves (d), (e), and (f) of the right triangle equations.
The event that the pieces form an obtuse triangle is $B = B_1 \cup B_2 \cup B_3 \cup B_4 \cup B_5 \cup B_6$ where
1. $B_1$, $B_2$, and $B_3$ are the regions inside $T_1$ and outside of curves (a), (b), and (c) of the right triangle equations, respectively.
2. $B_4$, $B_5$, and $B_6$ are the regions inside $T_2$ and outside of curves (d), (e), and (f) of the right triangle equations, respectively.
The probability that the pieces form an obtuse triangle is $\P(B) = \frac{9}{4} - 3 \ln(2) \approx 0.1706$
Proof
Simple calculus shows that $\P(B_i) = 3/8 - \ln(2) / 2$ for each $i \in \{1, 2, 3, 4, 5, 6\}$. For example, \begin{align} \P(B_1) & = \int_0^{1/2} \frac{x \, (1 - 2 \, x)}{2 - 2 \, x} \, dx \ \P(B_3) & = \int_{1/2}^1 \left(\frac{3}{2} - y - \frac{1}{2 \, y} \right) \, dy \end{align} From symmetry it also follows that $\P(B_i)$ is the same for each $i$.
The probability that the pieces form an acute triangle is $\P(A) = 3 \ln(2) - 2 \approx 0.07944$
Proof
Note that $A \cup B \cup R = T$, and $A$, $B$, and $R$ are disjoint.
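These results are easy to check by simulation. Here is a sketch in Python (hypothetical, assuming NumPy) that breaks the stick at two random points and classifies the outcome:

```python
import numpy as np

rng = np.random.default_rng()
n = 1_000_000
X, Y = rng.random(n), rng.random(n)                  # the two break points
a, b = np.minimum(X, Y), np.maximum(X, Y)            # ordered break points
p1, p2, p3 = a, b - a, 1 - b                         # lengths of the three pieces
c = np.maximum(np.maximum(p1, p2), p3)               # longest piece
triangle = c < 0.5                                   # triangle inequality holds iff the longest piece is less than 1/2
acute = triangle & (2 * c ** 2 < p1 ** 2 + p2 ** 2 + p3 ** 2)   # c^2 less than the sum of the other two squares
obtuse = triangle & ~acute                           # ignoring the null event of an exact right triangle
print(triangle.mean(), acute.mean(), obtuse.mean())  # about 0.25, 0.0794, 0.1706
```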
Run the triangle experiment 1000 times and compare the empirical probabilities to the true probabilities. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/10%3A_Geometric_Models/10.03%3A_Random_Triangles.txt |
The Bernoulli trials process is one of the simplest, yet most important, of all random processes. It is an essential topic in any course in probability or mathematical statistics. The process consists of independent trials with two outcomes and with constant probabilities from trial to trial. Thus it is the mathematical abstraction of coin tossing. The process leads to several important probability distributions: the binomial, geometric, and negative binomial.
• 11.1: Introduction to Bernoulli Trials
The Bernoulli trials process, named after Jacob Bernoulli, is one of the simplest yet most important random processes in probability. Essentially, the process is the mathematical abstraction of coin tossing, but because of its wide applicability, it is usually stated in terms of a sequence of generic trials.
• 11.2: The Binomial Distribution
In this section we will study the random variable that gives the number of successes in the first n trials and the random variable that gives the proportion of successes in the first n trials. The underlying distribution, the binomial distribution, is one of the most important in probability theory, and so deserves to be studied in considerable detail. As you will see, some of the results in this section have two or more proofs.
• 11.3: The Geometric Distribution
• 11.4: The Negative Binomial Distribution
• 11.5: The Multinomial Distribution
• 11.6: The Simple Random Walk
The simple random walk process is a minor modification of the Bernoulli trials process. Nonetheless, the process has a number of very interesting properties, and so deserves a section of its own. In some respects, it's a discrete time analogue of the Brownian motion process.
• 11.7: The Beta-Bernoulli Process
An interesting thing to do in almost any parametric probability model is to randomize one or more of the parameters. Done in a clever way, this often leads to interesting new models and unexpected connections between models. In this section we will randomize the success parameter in the Bernoulli trials process. This leads to interesting and surprising connections with Pólya's urn process.
11: Bernoulli Trials
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
Definition
The Bernoulli trials process, named after Jacob Bernoulli, is one of the simplest yet most important random processes in probability. Essentially, the process is the mathematical abstraction of coin tossing, but because of its wide applicability, it is usually stated in terms of a sequence of generic trials.
A sequence of Bernoulli trials satisfies the following assumptions:
1. Each trial has two possible outcomes, in the language of reliability called success and failure.
2. The trials are independent. Intuitively, the outcome of one trial has no influence over the outcome of another trial.
3. On each trial, the probability of success is $p$ and the probability of failure is $1 - p$ where $p \in [0, 1]$ is the success parameter of the process.
Random Variables
Mathematically, we can describe the Bernoulli trials process with a sequence of indicator random variables:
$\bs{X} = (X_1, X_2, \ldots)$
An indicator variable is a random variable that takes only the values 1 and 0, which in this setting denote success and failure, respectively. Indicator variable $X_i$ simply records the outcome of trial $i$. Thus, the indicator variables are independent and have the same probability density function: $\P(X_i = 1) = p, \quad \P(X_i = 0) = 1 - p$ The distribution defined by this probability density function is known as the Bernoulli distribution. In statistical terms, the Bernoulli trials process corresponds to sampling from the Bernoulli distribution. In particular, the first $n$ trials $(X_1, X_2, \ldots, X_n)$ form a random sample of size $n$ from the Bernoulli distribution. Note again that the Bernoulli trials process is characterized by a single parameter $p$.
The joint probability density function of the first $n$ trials $(X_1, X_2, \ldots, X_n)$ is given by $f_n(x_1, x_2, \ldots, x_n) = p^{x_1 + x_2 + \cdots + x_n} (1 - p)^{n- (x_1 + x_2 + \cdots + x_n)}, \quad (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$
Proof
This follows from the basic assumptions of independence and the constant probabilities of 1 and 0.
Note that the exponent of $p$ in the probability density function is the number of successes in the $n$ trials, while the exponent of $1 - p$ is the number of failures.
If $\bs{X} = (X_1, X_2, \ldots,)$ is a Bernoulli trials process with parameter $p$ then $\bs{1} - \bs{X} = (1 - X_1, 1 - X_2, \ldots)$ is a Bernoulli trials sequence with parameter $1 - p$.
Suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, each with the uniform distribution on the interval $[0, 1]$. For $p \in [0, 1]$ and $i \in \N_+$, let $X_i(p) = \bs{1}(U_i \le p)$. Then $\bs{X}(p) = \left( X_1(p), X_2(p), \ldots \right)$ is a Bernoulli trials process with success parameter $p$.
Note that in the previous result, the Bernoulli trials processes for all possible values of the parameter $p$ are defined on a common probability space. This type of construction is sometimes referred to as coupling. This result also shows how to simulate a Bernoulli trials process with random numbers. All of the other random process studied in this chapter are functions of the Bernoulli trials sequence, and hence can be simulated as well.
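As a concrete illustration of this construction, here is a short sketch in Python (hypothetical, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng()
n = 20
U = rng.random(n)                  # independent standard uniform random numbers
p = 0.3
X = (U <= p).astype(int)           # X_i(p) = 1(U_i <= p): Bernoulli trials with parameter p
print(X, X.sum())                  # the trial outcomes and the number of successes
# Changing p while reusing the same U gives the coupling described above
```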
Moments
Let $X$ be an indicator variable with $\P(X = 1) = p$, where $p \in [0, 1]$. Thus, $X$ is the result of a generic Bernoulli trial and has the Bernoulli distribution with parameter $p$. The following results give the mean, variance and some of the higher moments. A helpful fact is that if we take a positive power of an indicator variable, nothing happens; that is, $X^n = X$ for $n \gt 0$.
The mean and variance of $X$ are
1. $\E(X) = p$
2. $\var(X) = p (1 - p)$
Proof
1. $\E(X) = 1 \cdot p + 0 \cdot (1 - p) = p$
2. $\var(X) = \E\left(X^2\right) - \left[\E(X)\right]^2 = \E(X) - \left[\E(X)\right]^2 = p - p^2$
Note that the graph of $\var(X)$, as a function of $p \in [0, 1]$, is a parabola opening downward. In particular the largest value is $\frac{1}{4}$ when $p = \frac{1}{2}$, and the smallest value is 0 when $p = 0$ or $p = 1$. Of course, in the last two cases, $X$ is deterministic, taking the single value 0 when $p = 0$ and the single value 1 when $p = 1$.
Suppose that $p \in (0, 1)$. The skewness and kurtosis of $X$ are
1. $\skw(X) = \frac{1 - 2 p}{\sqrt{p (1 - p)}}$
2. $\kur(X) = -3 + \frac{1}{p (1 - p)}$
The probability generating function of $X$ is $P(t) = \E\left(t^X\right) = (1 - p) + p t$ for $t \in \R$.
Examples and Applications
Coins
As we noted earlier, the most obvious example of Bernoulli trials is coin tossing, where success means heads and failure means tails. The parameter $p$ is the probability of heads (so in general, the coin is biased).
In the basic coin experiment, set $n = 100$. For each $p \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$, run the experiment and observe the outcomes.
Generic Examples
In a sense, the most general example of Bernoulli trials occurs when an experiment is replicated. Specifically, suppose that we have a basic random experiment and an event of interest $A$. Suppose now that we create a compound experiment that consists of independent replications of the basic experiment. Define success on trial $i$ to mean that event $A$ occurred on the $i$th run, and define failure on trial $i$ to mean that event $A$ did not occur on the $i$th run. This clearly defines a Bernoulli trials process with parameter $p = \P(A)$.
Bernoulli trials are also formed when we sample from a dichotomous population. Specifically, suppose that we have a population of two types of objects, which we will refer to as type 0 and type 1. For example, the objects could be persons, classified as male or female, or the objects could be components, classified as good or defective. We select $n$ objects at random from the population; by definition, this means that each object in the population at the time of the draw is equally likely to be chosen. If the sampling is with replacement, then each object drawn is replaced before the next draw. In this case, successive draws are independent, so the types of the objects in the sample form a sequence of Bernoulli trials, in which the parameter $p$ is the proportion of type 1 objects in the population. If the sampling is without replacement, then the successive draws are dependent, so the types of the objects in the sample do not form a sequence of Bernoulli trials. However, if the population size is large compared to the sample size, the dependence caused by not replacing the objects may be negligible, so that for all practical purposes, the types of the objects in the sample can be treated as a sequence of Bernoulli trials. Additional discussion of sampling from a dichotomous population is in the chapter Finite Sampling Models.
Suppose that a student takes a multiple choice test. The test has 10 questions, each of which has 4 possible answers (only one correct). If the student blindly guesses the answer to each question, do the questions form a sequence of Bernoulli trials? If so, identify the trial outcomes and the parameter $p$.
Answer
Yes, probably so. The outcomes are correct and incorrect and $p = \frac{1}{4}$.
Candidate $A$ is running for office in a certain district. Twenty persons are selected at random from the population of registered voters and asked if they prefer candidate $A$. Do the responses form a sequence of Bernoulli trials? If so identify the trial outcomes and the meaning of the parameter $p$.
Answer
Yes, approximately, assuming that the number of registered voters is large, compared to the sample size of 20. The outcomes are prefer $A$ and do not prefer $A$; $p$ is the proportion of voters in the entire district who prefer $A$.
An American roulette wheel has 38 slots; 18 are red, 18 are black, and 2 are green. A gambler plays roulette 15 times, betting on red each time. Do the outcomes form a sequence of Bernoulli trials? If so, identify the trial outcomes and the parameter $p$.
Answer
Yes, the outcomes are red and black, and $p = \frac{18}{38}$.
Roulette is discussed in more detail in the chapter on Games of Chance.
Two tennis players play a set of 6 games. Do the games form a sequence of Bernoulli trials? If so, identify the trial outcomes and the meaning of the parameter $p$.
Answer
No, probably not. The games are almost certainly dependent, and the win probably depends on who is serving and thus is not constant from game to game.
Reliability
Recall that in the standard model of structural reliability, a system is composed of $n$ components that operate independently of each other. Let $X_i$ denote the state of component $i$, where 1 means working and 0 means failure. If the components are all of the same type, then our basic assumption is that the state vector $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of Bernoulli trials. The state of the system (again where 1 means working and 0 means failed) depends only on the states of the components, and thus is a random variable $Y = s(X_1, X_2, \ldots, X_n)$ where $s: \{0, 1\}^n \to \{0, 1\}$ is the structure function. Generally, the probability that a device is working is the reliability of the device, so the parameter $p$ of the Bernoulli trials sequence is the common reliability of the components. By independence, the system reliability $r$ is a function of the component reliability: $r(p) = \P_p(Y = 1), \quad p \in [0, 1]$ where we are emphasizing the dependence of the probability measure $\P$ on the parameter $p$. Appropriately enough, this function is known as the reliability function. Our challenge is usually to find the reliability function, given the structure function.
A series system is working if and only if each component is working.
1. The state of the system is $Y = X_1 X_2 \cdots X_n = \min\{X_1, X_2, \ldots, X_n\}$.
2. The reliability function is $r(p) = p^n$ for $p \in [0, 1]$.
A parallel system is working if and only if at least one component is working.
1. The state of the system is $Y = 1 - (1 - X_1)(1 - X_2) \cdots (1 - X_n) = \max\{X_1, X_2, \ldots, X_n\}$.
2. The reliability function is $r(p) = 1 - (1 - p)^n$ for $p \in [0, 1]$.
Recall that in some cases, the system can be represented as a graph or network. The edges represent the components and the vertices the connections between the components. The system functions if and only if there is a working path between two designated vertices, which we will denote by $a$ and $b$.
Find the reliability of the Wheatstone bridge network shown below (named for Charles Wheatstone).
Answer
$r(p) = p \, (2 \, p - p^2)^2 + (1 - p)(2 \, p^2 - p^4)$
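As a check on this answer, here is a brute-force sketch in Python. It assumes the usual labeling of the bridge: components 1 and 2 are the edges leaving $a$, components 4 and 5 are the edges entering $b$, and component 3 is the bridge between the two middle vertices, so that the minimal working paths are $\{1, 4\}$, $\{2, 5\}$, $\{1, 3, 5\}$, and $\{2, 3, 4\}$:

```python
from itertools import product

def bridge_reliability(p):
    """Sum P(state) over all 32 component states for which some minimal path works."""
    paths = [(0, 3), (1, 4), (0, 2, 4), (1, 2, 3)]   # components 1..5 indexed as 0..4
    r = 0.0
    for state in product([0, 1], repeat=5):
        if any(all(state[i] for i in path) for path in paths):
            prob = 1.0
            for s in state:
                prob *= p if s else 1 - p
            r += prob
    return r

p = 0.8
print(bridge_reliability(p), p * (2*p - p**2)**2 + (1 - p) * (2*p**2 - p**4))   # the two values agree
```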
The Pooled Blood Test
Suppose that each person in a population, independently of all others, has a certain disease with probability $p \in (0, 1)$. Thus, with respect to the disease, the persons in the population form a sequence of Bernoulli trials. The disease can be identified by a blood test, but of course the test has a cost.
For a group of $k$ persons, we will compare two strategies. The first is to test the $k$ persons individually, so that of course, $k$ tests are required. The second strategy is to pool the blood samples of the $k$ persons and test the pooled sample first. We assume that the test is negative if and only if all $k$ persons are free of the disease; in this case just one test is required. On the other hand, the test is positive if and only if at least one person has the disease, in which case we then have to test the persons individually; in this case $k + 1$ tests are required. Thus, let $Y$ denote the number of tests required for the pooled strategy.
The number of tests $Y$ has the following properties:
1. $\P(Y = 1) = (1 - p)^k, \quad \P(Y = k + 1) = 1 - (1 - p)^k$
2. $\E(Y) = 1 + k \left[1 - (1 - p)^k\right]$
3. $\var(Y) = k^2 (1 - p)^k \left[1 - (1 - p)^k\right]$
In terms of expected value, the pooled strategy is better than the basic strategy if and only if $p \lt 1 - \left( \frac{1}{k} \right)^{1/k}$
The graph of the critical value $p_k = 1 - (1 / k)^{1/k}$ as a function of $k \in [2, 20]$ is shown in the graph below:
The critical value $p_k$ satisfies the following properties:
1. The maximum value of $p_k$ occurs at $k = 3$ and $p_3 \approx 0.307$.
2. $p_k \to 0$ as $k \to \infty$.
It follows that if $p \ge 0.307$, pooling never makes sense, regardless of the size of the group $k$. At the other extreme, if $p$ is very small, so that the disease is quite rare, pooling is better unless the group size $k$ is very large.
Now suppose that we have $n$ persons. If $k \mid n$ then we can partition the population into $n / k$ groups of $k$ each, and apply the pooled strategy to each group. Note that $k = 1$ corresponds to individual testing, and $k = n$ corresponds to the pooled strategy on the entire population. Let $Y_i$ denote the number of tests required for group $i$.
The random variables $(Y_1, Y_2, \ldots, Y_{n/k})$ are independent and each has the distribution given above.
The total number of tests required for this partitioning scheme is $Z_{n,k} = Y_1 + Y_2 + \cdots + Y_{n/k}$.
The expected total number of tests is $\E(Z_{n,k}) = \begin{cases} n, & k = 1 \ n \left[ \left(1 + \frac{1}{k} \right) - (1 - p)^k \right], & k \gt 1 \end{cases}$
The variance of the total number of tests is $\var(Z_{n,k}) = \begin{cases} 0, & k = 1 \ n \, k \, (1 - p)^k \left[ 1 - (1 - p)^k \right], & k \gt 1 \end{cases}$
Thus, in terms of expected value, the optimal strategy is to group the population into $n / k$ groups of size $k$, where $k$ minimizes the expected value function above. It is difficult to get a closed-form expression for the optimal value of $k$, but this value can be determined numerically for specific $n$ and $p$.
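Here is a numerical sketch in Python (a hypothetical illustration) that searches over the divisors of $n$, as in the exercise that follows:

```python
def expected_tests(n, p, k):
    """Expected total number of tests with n/k groups of size k."""
    if k == 1:
        return n
    return n * ((1 + 1 / k) - (1 - p) ** k)

def optimal_group_size(n, p):
    divisors = [k for k in range(1, n + 1) if n % k == 0]
    return min(divisors, key=lambda k: expected_tests(n, p, k))

n, p = 100, 0.01
k = optimal_group_size(n, p)
print(k, expected_tests(n, p, k))    # gives k = 10 with about 19.56 expected tests
```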
For the following values of $n$ and $p$, find the optimal pooling size $k$ and the expected number of tests. (Restrict your attention to values of $k$ that divide $n$.)
1. $n = 100$, $p = 0.01$
2. $n = 1000$, $p = 0.05$
3. $n = 1000$, $p = 0.001$
Answer
1. $k = 10$, $\E(Y_k) = 19.56$
2. $k = 5$, $\E(Y_k) = 426.22$
3. $k = 40$, $\E(Y_k) = 64.23$
If $k$ does not divide $n$, then we could divide the population of $n$ persons into $\lfloor n / k \rfloor$ groups of $k$ each and one remainder group with $n \mod k$ members. This clearly complicates the analysis, but does not introduce any new ideas, so we will leave this extension to the interested reader. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.01%3A_Introduction_to_Bernoulli_Trials.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
Definitions
Our random experiment is to perform a sequence of Bernoulli trials $\bs{X} = (X_1, X_2, \ldots)$. Recall that $\bs{X}$ is a sequence of independent, identically distributed indicator random variables, and in the usual language of reliability, 1 denotes success and 0 denotes failure. The common probability of success, $p = \P(X_i = 1)$, is the basic parameter of the process. In statistical terms, the first $n$ trials $(X_1, X_2, \ldots, X_n)$ form a random sample of size $n$ from the Bernoulli distribution.
In this section we will study the random variable that gives the number of successes in the first $n$ trials and the random variable that gives the proportion of successes in the first $n$ trials. The underlying distribution, the binomial distribution, is one of the most important in probability theory, and so deserves to be studied in considerable detail. As you will see, some of the results in this section have two or more proofs. In almost all cases, note that the proof from Bernoulli trials is the simplest and most elegant.
For $n \in \N$, the number of successes in the first $n$ trials is the random variable $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ The distribution of $Y_n$ is the binomial distribution with trial parameter $n$ and success parameter $p$.
Note that $\bs{Y} = (Y_0, Y_1, \ldots)$ is the partial sum process associated with the Bernoulli trials sequence $\bs{X}$. In particular, $Y_0 = 0$, so point mass at 0 is considered a degenerate form of the binomial distribution.
Distribution Functions
The probability density function $f_n$ of $Y_n$ is given by $f_n(y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
Proof
Recall that if $(x_1, x_2 \ldots) \in \{0, 1\}^n$ with $\sum_{i=1}^n x_i = y$ (that is, a bit string of length $n$ with 1 occurring exactly $y$ times), then by independence, $\P\left[(X_1, X_2, \ldots, X_n) = (x_1, x_2, \ldots, x_n)\right] = p^y (1 - p)^{n - y}$ Moreover, the number of bit strings of length $n$ with 1 occurring exactly $y$ times is the binomial coefficient $\binom{n}{y}$. By the additive property of probability $\P(Y_n = y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
Check that $f_n$ is a valid PDF
Clearly $f_n(y) \ge 0$ for $y \in \{0, 1, \ldots, n\}$. From the binomial theorem $\sum_{y=0}^n f_n(y) = \sum_{y=0}^n \binom{n}{y} p^y (1 - p)^{n-y} = \left[p + (1 - p)\right]^n = 1$
In the binomial coin experiment, vary $n$ and $p$ with the scrollbars, and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the relative frequency function to the probability density function.
The binomial distribution is unimodal: For $k \in \{1, 2, \ldots, n\}$,
1. $f_n(k) \gt f_n(k - 1)$ if and only if $k \lt (n + 1) p$.
2. $f_n(k) = f_n(k - 1)$ if and only if $k = (n + 1) p$, which requires that $(n + 1) p$ be an integer between 1 and $n$.
Proof
1. $f_n(k) \gt f_n(k - 1)$ if and only if $\binom{n}{k} p^k (1 - p)^{n-k} \gt \binom{n}{k - 1} p^{k-1} (1 - p)^{n - k + 1}$ if and only if $\frac{p}{k} \gt \frac{1 - p}{n - k + 1}$ if and only if $k \lt (n + 1) p$
2. As in (a), $f_n(k) = f_n(k - 1)$ if and only if $k = (n + 1) p$, which must be an integer.
Thus, the density function at first increases and then decreases, reaching its maximum value at $\lfloor (n + 1) p \rfloor$. This integer is a mode of the distribution. In the case that $m = (n + 1) p$ is an integer between 1 and $n$, there are two consecutive modes, at $m - 1$ and $m$.
Now let $F_n$ denote the distribution function of $Y_n$, so that $F_n(y) = \P(Y_n \le y) = \sum_{k=0}^y f_n(k) = \sum_{k=0}^y \binom{n}{k} p^k (1 - p)^{n-k}, \quad y \in \{0, 1, \ldots, n\}$ The distribution function $F_n$ and the quantile function $F_n^{-1}$ do not have simple, closed forms, but values of these functions can be computed from mathematical and statistical software.
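For example, here is a brief sketch using SciPy (assuming it is available); `binom.ppf` is the quantile function, so the median and quartiles in the exercise below can be computed the same way:

```python
from scipy.stats import binom

n, p = 10, 0.4
print(binom.pmf(3, n, p))                   # f_n(3), the probability of exactly 3 successes
print(binom.cdf(3, n, p))                   # F_n(3), the probability of at most 3 successes
print(binom.ppf([0.25, 0.5, 0.75], n, p))   # first quartile, median, third quartile
```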
Open the special distribution calculator and select the binomial distribution and set the view to CDF. Vary $n$ and $p$ and note the shape and location of the distribution/quantile function. For various values of the parameters, compute the median and the first and third quartiles.
The binomial distribution function also has a nice relationship to the beta distribution function.
The distribution function $F_n$ can be written in the form $F_n(k) = \frac{n!}{(n - k - 1)! k!} \int_0^{1-p} x^{n-k-1} (1 - x)^k \, dx, \quad k \in \{0, 1, \ldots, n\}$
Proof
Let $G_n(k)$ denote the expression on the right. Substitution and simple integration show that $G_n(0) = (1 - p)^n = f_n(0) = F_n(0)$. For $k \in \{1, 2, \ldots, n\}$, integrating by parts with $u = (1 - x)^k$ and $dv = x^{n - k - 1} dx$ gives $G_n(k) = \binom{n}{k} p^k (1 - p)^{n-k} + \frac{n!}{(n - k)! (k - 1)!} \int_0^{1-p} x^{n-k} (1 - x)^{k-1} \, dx = f_n(k) + G_n(k - 1)$ It follows that $G_n(k) = \sum_{j=0}^k f_n(j) = F_n(k)$ for $k \in \{0, 1, \ldots, n\}$.
The expression on the right in the previous theorem is the beta distribution function, with left parameter $n - k$ and right parameter $k + 1$, evaluated at $1 - p$.
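The identity is easy to verify numerically, since the right side is the distribution function of this beta distribution. A sketch assuming SciPy:

```python
from scipy.stats import binom, beta

n, p, k = 10, 0.4, 3
print(binom.cdf(k, n, p))               # F_n(k)
print(beta.cdf(1 - p, n - k, k + 1))    # beta distribution function with parameters n - k and k + 1, at 1 - p
```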
Moments
The mean, variance and other moments of the binomial distribution can be computed in several different ways. Again let $Y_n = \sum_{i=1}^n X_i$ where $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p$.
The mean and variance of $Y_n$ are
1. $E(Y_n) = n p$
2. $\var(Y_n) = n p (1 - p)$
Proof from Bernoulli trials
1. Recall that $\E(X_i) = p$ for each $i$. Hence from the additive property of expected value, $\E(Y_n) = \sum_{i=1}^n \E(X_i) = \sum_{i=1}^n p = n p$
2. Recall that $\var(X_i) = p (1 - p)$ for each $i$. Hence from the additive property of variance for independent variables, $\var(Y_n) = \sum_{i=1}^n \var(X_i) = \sum_{i=1}^n p (1 - p) = n p (1 - p)$
Direct proof of (a)
Recall the identity $y \binom{n}{y} = n \binom{n - 1}{y - 1}$ for $n, \, y \in \N_+$. Using the binomial theorem, \begin{align} \E(Y_n) & = \sum_{y=0}^n y \binom{n}{y} p^y (1 - p)^{n-y} = \sum_{y=1}^n y \binom{n}{y} p^y (1 - p)^{n-y} \ & = \sum_{y=1}^n n \binom{n - 1}{y - 1} p^y (1 - p)^{n-y} = n p \sum_{y=1}^n \binom{n - 1}{y - 1} p^{y-1} (1 - p)^{(n - 1) - (y - 1)} \ & = n p \sum_{k=0}^{n-1} \binom{n - 1}{k} p^k (1 - p)^{n - 1 - k} = n p \left[p + (1 - p)\right]^{n-1} = n p \end{align} A similar, but more complicated proof can be used for part (b).
The expected value of $Y_n$ also makes intuitive sense, since $p$ should be approximately the proportion of successes in a large number of trials. We will discuss the point further in the subsection below on the proportion of successes. Note that the graph of $\var(Y_n)$, as a function of $p \in [0, 1]$, is a parabola opening downward. In particular, the maximum value of the variance is $n / 4$ when $p = 1 / 2$, and the minimum value is 0 when $p = 0$ or $p = 1$. Of course, in the last two cases, $Y_n$ is deterministic, taking just the value 0 if $p = 0$ and just the value $n$ when $p = 1$.
In the binomial coin experiment, vary $n$ and $p$ with the scrollbars and note the location and size of the mean$\pm$standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
The probability generating function of $Y_n$ is $P_n(t) = \E \left(t^{Y_n} \right) = \left[(1 - p) + p t\right]^n$ for $t \in \R$.
Proof from Bernoulli trials
Recall that the probability generating function of a sum of independent variables is the product of the probability generating functions of the terms. Recall also that the PGF of $X_i$ is $P(t) = \E\left(t^{X_i}\right) = (1 - p) + p t$ for each $i$. Hence $P_n(t) = \left[P(t)\right]^n$.
Direct Proof
Using the binomial theorem yet again, $\E\left(t^{Y_n}\right) = \sum_{y=0}^n t^y \binom{n}{y} p^y (1 - p)^{n-y} = \sum_{y=0}^n \binom{n}{y} (p t)^y (1 - p)^{n-y} = \left[ p t + (1 - p)\right]^n$
Recall that for $x \in \R$ and $k \in \N$, the falling power of $x$ of order $k$ is $x^{(k)} = x (x - 1) \cdots (x - k + 1)$. If $X$ is a random variable and $k \in \N$, then $\E\left[X^{(k)}\right]$ is the factorial moment of $X$ of order $k$. The probability generating function provides an easy way to compute the factorial moments of the binomial distribution.
$\E\left[Y_n^{(k)}\right] = n^{(k)} p^k$ for $k \in \N$.
Proof
Recall that $P_n^{(k)}(1) = \E\left[Y_n^{(k)}\right]$ where $P_n^{(k)}$ denotes the $k$th derivative of $P_n$. By simple calculus, $P_n^{(k)}(t) = n^{(k)} \left[(1 - p) + p t\right]^{n-k} p^k$, so $P_n^{(k)}(1) = n^{(k)} p^k$.
Our next result gives a recursion equation and initial conditions for the moments of the binomial distribution.
Recursion relation and initial conditions
1. $\E \left(Y_n^k\right) = n p \E \left[ \left(Y_{n-1} + 1\right)^{k-1} \right]$ for $n, \, k \in \N_+$
2. $\E\left(Y_n^0\right) = 1$ for $n \in \N$
3. $\E\left(Y_0^k\right) = 0$ for $k \in \N_+$
Proof
Recall again the identity $y \binom{n}{y} = n \binom{n - 1}{y - 1}$. \begin{align} \E\left(Y_n^k\right) & = \sum_{y=0}^n y^k \binom{n}{y} p^y (1 - p)^{n-y} = \sum_{y=1}^n y^{k-1} y \binom{n}{y} p^y (1 - p)^{n-y} \ & = \sum_{y=1}^n y^{k-1} n \binom{n-1}{y-1} p^y (1 - p)^{n-y} = n p \sum_{y=1}^n y^{k-1} \binom{n-1}{y-1} p^{y-1} (1 - p)^{(n-1)-(y-1)} \ & = n p \sum_{j=0}^{n-1} (j + 1)^{k-1} \binom{n-1}{j} p^j (1 - p)^{n-1 - j} = n p \E\left[(Y_{n-1} + 1)^{k-1}\right] \end{align}
The ordinary (raw) moments of $Y_n$ can be computed from the factorial moments and from the recursion relation. Here are the first four, which will be needed below.
The first four moments of $Y_n$ are
1. $\E\left(Y_n\right) = n p$
2. $\E\left(Y_n^2\right) = n^{(2)} p^2 + n p$
3. $\E\left(Y_n^3\right) = n^{(3)} p^3 + 3 n^{(2)} p^2 + n p$
4. $\E\left(Y_n^4\right) = n^{(4)} p^4 + 6 n^{(3)} p^3 + 7 n^{(2)} p^2 + n p$
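These formulas can be checked by brute-force summation against the probability density function. The following minimal sketch uses plain Python; the values of $n$ and $p$ are arbitrary illustrations.

```python
# Brute-force check of the first four raw moments of the binomial distribution
# against the falling-power formulas above. Plain Python; n and p are arbitrary.
from math import comb

n, p = 9, 0.35

def raw_moment(j):
    return sum(y**j * comb(n, y) * p**y * (1 - p)**(n - y) for y in range(n + 1))

def falling(x, k):
    out = 1.0
    for i in range(k):
        out *= x - i
    return out

formulas = [
    falling(n, 1) * p,
    falling(n, 2) * p**2 + n * p,
    falling(n, 3) * p**3 + 3 * falling(n, 2) * p**2 + n * p,
    falling(n, 4) * p**4 + 6 * falling(n, 3) * p**3 + 7 * falling(n, 2) * p**2 + n * p,
]
for j, value in enumerate(formulas, start=1):
    print(j, raw_moment(j), value)   # each pair agrees up to rounding
```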
Our final moment results give the skewness and kurtosis of the binomial distribution.
For $p \in (0, 1)$, the skewness of $Y_n$ is $\skw(Y_n) = \frac{1 - 2 p}{\sqrt{n p (1 - p)}}$
1. $\skw(Y_n) \gt 0$ if $p \lt \frac{1}{2}$, $\skw(Y_n) \lt 0$ if $p \gt \frac{1}{2}$, and $\skw(Y_n) = 0$ if $p = \frac{1}{2}$
2. For fixed $n$, $\skw(Y_n) \to \infty$ as $p \downarrow 0$ and as $p \uparrow 1$
3. For fixed $p$, $\skw(Y_n) \to 0$ as $n \to \infty$
Proof
These results follow from the standard computational formula for skewness and the first three moments of the binomial distribution.
Open the binomial timeline experiment. For each of the following values of $n$, vary $p$ from 0 to 1 and note the shape of the probability density function in light of the previous results on skewness.
1. $n = 10$
2. $n = 20$
3. $n = 100$
For $p \in (0, 1)$, the kurtosis of $Y_n$ is $\kur(Y_n) = 3 - \frac{6}{n} + \frac{1}{n p (1 - p)}$
1. For fixed $n$, $\kur(Y_n)$ decreases and then increases as a function of $p$, with minimum value $3 - \frac{2}{n}$ at the point of symmetry $p = \frac{1}{2}$
2. For fixed $n$, $\kur(Y_n) \to \infty$ as $p \downarrow 0$ and as $p \uparrow 1$
3. For fixed $p$, $\kur(Y_n) \to 3$ as $n \to \infty$
Proof
These results follow from the standard computational formula for kurtosis and the first four moments of the binomial distribution.
Note that the excess kurtosis is $\kur(Y_n) - 3 = \frac{1}{n p (1 - p)} - \frac{6}{n} \to 0$ as $n \to \infty$. This is related to the convergence of the binomial distribution to the normal, which we will discuss below.
The Partial Sum Process
Several important properties of the random process $\bs{Y} = (Y_0, Y_1, Y_2, \ldots)$ stem from the fact that it is a partial sum process corresponding to the sequence $\bs{X} = (X_1, X_2, \ldots)$ of independent, identically distributed indicator variables.
$\bs{Y}$ has stationary, independent increments:
1. If $m$ and $n$ are positive integers with $m \le n$ then $Y_n - Y_m$ has the same distribution as $Y_{n-m}$, namely binomial with parameters $n - m$ and $p$.
2. If $n_1 \le n_2 \le n_3 \le \cdots$ then $\left(Y_{n_1}, Y_{n_2} - Y_{n_1}, Y_{n_3} - Y_{n_2}, \ldots\right)$ is a sequence of independent variables.
Proof
Every partial sum process corresponding to a sequence of independent, identically distributed variables has stationary, independent increments.
The following result gives the finite dimensional distributions of $\bs{Y}$.
The joint probability density functions of the sequence $\bs{Y}$ are given as follows: $\P\left(Y_{n_1} = y_1, Y_{n_2} = y_2, \ldots, Y_{n_k} = y_k\right) = \binom{n_1}{y_1} \binom{n_2 - n_1}{y_2 - y_1} \cdots \binom{n_k - n_{k-1}}{y_k - y_{k-1}} p^{y_k} (1 - p)^{n_k - y_k}$ where $n_1, n_2, \ldots, n_k \in \N_+$ with $n_1 \lt n_2 \lt \cdots \lt n_k$ and where $y_1, y_2, \ldots, y_k \in \N$ with $0 \le y_j - y_{j-1} \le n_j - n_{j-1}$ for each $j \in \{1, 2, \ldots, k\}$ (with the conventions $n_0 = 0$ and $y_0 = 0$).
Proof
From the stationary and independent increments properties, $\P\left(Y_{n_1} = y_1, Y_{n_2} = y_2, \ldots, Y_{n_k} = y_k\right) = f_{n_1}(y_1) f_{n_2 - n_1}(y_2 - y_1) \cdots f_{n_k - n_{k-1}}(y_k - y_{k-1})$ The result then follows from substitution and simplification.
Transformations that Preserve the Binomial Distribution
There are two simple but important transformations that preserve the binomial distribution.
If $U$ is a random variable having the binomial distribution with parameters $n$ and $p$, then $n - U$ has the binomial distribution with parameters $n$ and $1 - p$.
Proof from Bernoulli trials
Recall that if $(X_1, X_2, \ldots)$ is a Bernoulli trials sequence with parameter $p$, then $(1 - X_1, 1 - X_2, \ldots)$ is a Bernoulli trials sequence with parameter $1 - p$. Also $U$ has the same distribution as $\sum_{i=1}^n X_i$ (binomial with parameters $n$ and $p$) so $n - U$ has the same distribution as $\sum_{i=1}^n (1 - X_i)$ (binomial with parameters $n$ and $1 - p$).
Proof from density functions
Note that $\P(n - U = k) = \P(U = n - k) = \binom{n}{k} p^{n-k} (1 - p)^k$ for $k \in \{0, 1, \ldots, n\}$
The sum of two independent binomial variables with the same success parameter also has a binomial distribution.
Suppose that $U$ and $V$ are independent random variables, and that $U$ has the binomial distribution with parameters $m$ and $p$, and $V$ has the binomial distribution with parameters $n$ and $p$. Then $U + V$ has the binomial distribution with parameters $m + n$ and $p$.
Proof from Bernoulli trials
Let $(X_1, X_2, \ldots)$ be a Bernoulli trials sequence with parameter $p$, and let $Y_k = \sum_{i=1}^k X_i$ for $k \in \N$. Then $U$ has the same distribution as $Y_m$ and $V$ has the same distribution as $Y_{m+n} - Y_m$. Since $Y_m$ and $Y_{m+n} - Y_m$ are independent, $U + V$ has the same distribution as $Y_m + (Y_{m+n} - Y_m) = Y_{m+n}$.
Proof from convolution powers
Let $f$ denote the PDF of an indicator variable $X$ with parameter $p$, so that $f(x) = p^x (1 - p)^{1 - x}$ for $x \in \{0, 1\}$. The binomial distribution with parameters $k \in \N_+$ and $p$ has PDF $f_k = f^{* k}$, the $k$-fold convolution power of $f$. In particular, $U$ has PDF $f^{* m}$, $V$ has PDF $f^{* n}$ and hence $U + V$ has PDF $f^{* m} * f^{* n} = f^{* (m + n)}$.
Proof from generating functions
$U$ and $V$ have PGFs $P_m(t) = (1 - p + p t)^m$ and $P_n(t) = (1 - p + p t)^n$ for $t \in \R$, respectively. Hence by independence, $U + V$ has PGF $P_m P_n = P_{n+m}$.
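The convolution argument is easy to illustrate numerically: convolving the two binomial density functions reproduces the binomial density with parameters $m + n$ and $p$. The following minimal sketch assumes NumPy is available; the parameter values are arbitrary illustrations.

```python
# Convolving binomial PMFs: Bin(m, p) * Bin(n, p) = Bin(m + n, p).
# Assumes NumPy; m, n, p are arbitrary illustrative values.
import numpy as np
from math import comb

def binom_pmf(n, p):
    # density of the binomial distribution with parameters n and p, as an array
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

m, n, p = 4, 6, 0.3
conv = np.convolve(binom_pmf(m, p), binom_pmf(n, p))   # density of U + V
direct = binom_pmf(m + n, p)
print(np.allclose(conv, direct))   # True: U + V has the binomial (m + n, p) distribution
```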
Sampling and the Hypergeometric Distribution
Suppose that we have a dichotomous population, that is a population of two types of objects. Specifically, suppose that we have $m$ objects, and that $r$ of the objects are type 1 and the remaining $m - r$ objects are type 0. Thus $m \in \N_+$ and $r \in \{0, 1, \ldots, m\}$. We select $n$ objects at random from the population, so that all samples of size $n$ are equally likely. If the sampling is with replacement, the sample size $n$ can be any positive integer. If the sampling is without replacement, then we must have $n \in \{1, 2, \ldots, m\}$.
In either case, let $X_i$ denote the type of the $i$'th object selected for $i \in \{1, 2, \ldots, n\}$ so that $Y = \sum_{i=1}^n X_i$ is the number of type 1 objects in the sample. As noted in the Introduction, if the sampling is with replacement, $(X_1, X_2, \ldots, X_n)$ is a sequence of Bernoulli trials, and hence $Y$ has the binomial distribution parameters $n$ and $p = r / m$. If the sampling is without replacement, then $Y$ has the hypergeometric distribution with parameters $m$, $r$, and $n$. The hypergeometric distribution is studied in detail in the chapter on Finite Sampling Models. For reference, the probability density function of $Y$ is given by $\P(Y = y) = \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}} = \binom{n}{y} \frac{r^{(y)} (m - r)^{(n-y)}}{m^{(n)}}, \quad y \in \{0, 1, \ldots, n\}$ and the mean and variance of $Y$ are $\E(Y) = n \frac{r}{m}, \quad \var(Y) = n \frac{r}{m} \left(1 - \frac{r}{m}\right) \frac{m - n}{m - 1}$ If the population size $m$ is large compared to the sample size $n$, then the dependence between the indicator variables is slight, and so the hypergeometric distribution should be close to the binomial distribution. The following theorem makes this precise.
Suppose that $r_m \in \{0, 1, \ldots, m\}$ for each $m \in \N_+$ and that $r_m/m \to p \in [0, 1]$ as $m \to \infty$. Then for fixed $n \in \N_+$, the hypergeometric distribution with parameters $m$, $r_m$ and $n$ converges to the binomial distribution with parameters $n$ and $p$ as $m \to \infty$.
Proof
The hypergeometric PDF has the form $g_m(y) = \binom{n}{y} \frac{r_m^{(y)} (m - r_m)^{(n-y)}}{m^{(n)}}, \quad y \in \{0, 1, \ldots, n\}$ Note that the fraction above has $n$ factors in the numerator and $n$ factors in the denominator. We can group these, in order, to form a product of $n$ fractions. The first $y$ fractions have the form $\frac{r_m - i}{m - i}$ where $i \in \{0, 1, \ldots, y - 1\}$. Each of these converges to $p$ as $m \to \infty$. The remaining $n - y$ fractions have the form $\frac{m - r_m - j}{m - y - j}$ where $j \in \{0, 1, \ldots, n - y - 1\}$. For fixed $y$ and $n$, each of these converges to $1 - p$ as $m \to \infty$. Hence $g_m(y) \to \binom{n}{y} p^y (1 - p)^{n -y}$ as $m \to \infty$ for each $y \in \{0, 1, \ldots, n\}$
Under the conditions in the previous theorem, the mean and variance of the hypergeometric distribution converge to the mean and variance of the limiting binomial distribution:
1. $n \frac{r_m}{m} \to n p$ as $m \to \infty$
2. $n \frac{r_m}{m} \left(1 - \frac{r_m}{m}\right) \frac{m - n}{m - 1} \to n p (1 - p)$ as $m \to \infty$
Proof
By assumption $r_m / m \to p$ as $m \to \infty$ and $n$ is fixed, so also $(m - n) \big/ (m - 1) \to 1$ as $m \to \infty$.
In the ball and urn experiment, vary the parameters and switch between sampling without replacement and sampling with replacement. Note the difference between the graphs of the hypergeometric probability density function and the binomial probability density function. In particular, note the similarity when $m$ is large and $n$ small. For selected values of the parameters, and for both sampling modes, run the experiment 1000 times.
From a practical point of view, the convergence of the hypergeometric distribution to the binomial means that if the population size $m$ is large compared to the sample size, then the hypergeometric distribution with parameters $m$, $r$ and $n$ is well approximated by the binomial distribution with parameters $n$ and $p = r / m$. This is often a useful result, because the binomial distribution has fewer parameters than the hypergeometric distribution (and often in real problems, the parameters may only be known approximately). Specifically, in the approximating binomial distribution, we do not need to know the population size $m$ and the number of type 1 objects $r$ individually, but only in the ratio $r / m$. Generally, the approximation works well if $m$ is so large compared to $n$ that $\frac{m - n}{m - 1}$ is close to 1. This ensures that the variance of the hypergeometric distribution is close to the variance of the approximating binomial distribution.
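The quality of the approximation is easy to explore numerically. The following minimal sketch, in plain Python with arbitrary illustrative parameter values, compares the hypergeometric and binomial density functions.

```python
# Comparing the hypergeometric density (sampling without replacement) with the
# approximating binomial density when the population is large. Plain Python;
# m, r, n are arbitrary illustrative values.
from math import comb

m, r, n = 1000, 400, 10   # population size, number of type 1 objects, sample size
p = r / m

for y in range(n + 1):
    hyper = comb(r, y) * comb(m - r, n - y) / comb(m, n)
    binom = comb(n, y) * p**y * (1 - p)**(n - y)
    print(y, round(hyper, 5), round(binom, 5))   # the two columns are close since m >> n
```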
Now let's return to our usual sequence of Bernoulli trials $\bs{X} = (X_1, X_2, \ldots)$, with success parameter $p$, and to the binomial variables $Y_n = \sum_{i=1}^n X_i$ for $n \in \N$. Our next result shows that given $k$ successes in the first $n$ trials, the trials on which the successes occur is simply a random sample of size $k$ chosen without replacement from $\{1, 2, \ldots, n\}$.
Suppose that $n \in \N_+$ and $k \in \{0, 1, \ldots, n\}$. Then for $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ with $\sum_{i=1}^n x_i = k$, $\P\left[(X_1, X_2, \ldots, X_n) = (x_1, x_2, \ldots, x_n) \mid Y_n = k\right] = \frac{1}{\binom{n}{k}}$
Proof
From the definition of conditional probability, $\P\left[(X_1, X_2, \ldots, X_n) = (x_1, x_2, \ldots, x_n) \mid Y_n = k\right] = \frac{\P\left[(X_1, X_2, \ldots, X_n) = (x_1, x_2, \ldots, x_n)\right]}{\P(Y_n = k)} = \frac{p^k (1 - p)^{n-k}}{\binom{n}{k} p^k (1 - p)^{n-k}}= \frac{1}{\binom{n}{k}}$
Note in particular that the conditional distribution above does not depend on $p$. In statistical terms, this means that relative to $(X_1, X_2, \ldots, X_n)$, random variable $Y_n$ is a sufficient statistic for $p$. Roughly, $Y_n$ contains all of the information about $p$ that is available in the entire sample $(X_1, X_2, \ldots, X_n)$. Sufficiency is discussed in more detail in the chapter on Point Estimation. Next, if $m \le n$ then the conditional distribution of $Y_m$ given $Y_n = k$ is hypergeometric, with population size $n$, type 1 size $m$, and sample size $k$.
Suppose that $p \in (0, 1)$ and that $m, \, n, \, k \in \N_+$ with $m \le n$ and $k \le n$. Then $\P(Y_m = j \mid Y_n = k) = \frac{\binom{m}{j} \binom{n - m}{k - j}}{\binom{n}{k}}, \quad j \in \{0, 1, \ldots, k\}$
Proof from the previous result
Given $Y_n = k$, the trial numbers of the successes form a random sample of size $k$ chosen without replacement from $\{1, 2, \ldots, n\}$. Designate trials $\{1, 2, \ldots, m\}$ as type 1 and trials $\{m + 1, m+2, \ldots, n\}$ as type 0. Then $Y_m$ is the number of type 1 trials in the sample, and hence (given $Y_n = k$) has the hypergeometric distribution with population size $n$, type 1 size $m$, and sample size $k$.
Direct proof
From the definition of conditional probability, $\P(Y_m = j \mid Y_n = k) = \frac{\P(Y_m = j, Y_n = k)}{\P(Y_n = k)} = \frac{\P(Y_m = j, Y_n - Y_m = k - j)}{\P(Y_n = k)}$ But $Y_m$ and $Y_n - Y_m$ are independent. Both variables have binomial distributions; the first with parameters $m$ and $p$, and the second with parameters $n - m$ and $p$. Hence
$\P(Y_m = j \mid Y_n = k) = \frac{\binom{m}{j} p^j (1 - p)^{m - j} \binom{n - m}{k - j} p^{k - j} (1 - p)^{n - m - (k - j)}}{\binom{n}{k} p^k (1 - p)^{n - k}} = \frac{\binom{m}{j} \binom{n - m}{k - j}}{\binom{n}{k}}$
Once again, note that the conditional distribution is independent of the success parameter $p$.
The Poisson Approximation
The Poisson process on $[0, \infty)$, named for Simeon Poisson, is a model for random points in continuous time. There are many deep and interesting connections between the Bernoulli trials process (which can be thought of as a model for random points in discrete time) and the Poisson process. These connections are explored in detail in the chapter on the Poisson process. In this section we just give the most famous and important result—the convergence of the binomial distribution to the Poisson distribution.
For reference, the Poisson distribution with rate parameter $r \in (0, \infty)$ has probability density function $g(n) = e^{-r} \frac{r^n}{n!}, \quad n \in \N$ The parameter $r$ is both the mean and the variance of the distribution. In addition, the probability generating function is $t \mapsto e^{r (t - 1)}$.
Suppose that $p_n \in (0, 1)$ for $n \in \N_+$ and that $n p_n \to r \in (0, \infty)$ as $n \to \infty$. Then the binomial distribution with parameters $n$ and $p_n$ converges to the Poisson distribution with parameter $r$ as $n \to \infty$.
Proof from density functions
Let $f_n$ denote the binomial PDF with parameters $n$ and $p_n$. Then for $k \in \{0, 1, \ldots, n\}$ $f_n(k) = \binom{n}{k} p_n^k (1 - p_n)^{n-k} = \frac{1}{k!}\left[n p_n\right]\left[(n - 1) p_n\right] \cdots \left[(n - k + 1) p_n\right] \left(1 - n \frac{p_n}{n}\right)^{n-k}$ But $(n - j) p_n \to r$ as $n \to \infty$ for fixed $j$. Also, using a basic theorem from calculus, $\left(1 - n p_n \big/ n\right)^{n-k} \to e^{-r}$ as $n \to \infty$. Hence $f_n(k) \to e^{-r} \frac{r^k}{k!}$ as $n \to \infty$.
Proof from generating functions
For $t \in \R$, using the same basic limit from calculus, $\left[(1 - p_n) + p_n t\right]^n = \left[1 + n \frac{p_n}{n}(t - 1)\right]^n \to e^{r(t - 1)} \text{ as } n \to \infty$ The left side is the PGF of the binomial distribution with parameters $n$ and $p_n$, while the right side is the PGF of the Poisson distribution with parameter $r$.
Under the same conditions as the previous theorem, the mean and variance of the binomial distribution converge to the mean and variance of the limiting Poisson distribution, respectively.
1. $n p_n \to r$ as $n \to \infty$
2. $n p_n (1 - p_n) \to r$ as $n \to \infty$
Proof
By assumption, $n p_n \to r$ as $n \to \infty$, and so it also follows that $p_n \to 0$ as $n \to \infty$. Hence $n p_n (1 - p_n) = n p_n - (n p_n) p_n \to r - r \cdot 0 = r$ as $n \to \infty$.
Compare the Poisson experiment and the binomial timeline experiment.
1. Open the Poisson experiment and set $r = 1$ and $t = 5$. Run the experiment a few times and note the general behavior of the random points in time. Note also the shape and location of the probability density function and the mean$\pm$standard deviation bar.
2. Now open the binomial timeline experiment and set $n = 100$ and $p = 0.05$. Run the experiment a few times and note the general behavior of the random points in time. Note also the shape and location of the probability density function and the mean$\pm$standard deviation bar.
From a practical point of view, the convergence of the binomial distribution to the Poisson means that if the number of trials $n$ is large and the probability of success $p$ small, so that $n p^2$ is small, then the binomial distribution with parameters $n$ and $p$ is well approximated by the Poisson distribution with parameter $r = n p$. This is often a useful result, because the Poisson distribution has fewer parameters than the binomial distribution (and often in real problems, the parameters may only be known approximately). Specifically, in the approximating Poisson distribution, we do not need to know the number of trials $n$ and the probability of success $p$ individually, but only in the product $n p$. The condition that $n p^2$ be small means that the variance of the binomial distribution, namely $n p (1 - p) = n p - n p^2$ is approximately $r$, the variance of the approximating Poisson distribution.
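The following minimal sketch, in plain Python with arbitrary illustrative parameter values, compares the binomial and Poisson density functions in a case where $n$ is large and $p$ is small.

```python
# Comparing the binomial distribution with parameters n and p (n large, p small)
# to the Poisson distribution with parameter r = n p. Plain Python;
# n = 100 and p = 0.05 are arbitrary illustrations (so n p^2 = 0.25 is small).
from math import comb, exp, factorial

n, p = 100, 0.05
r = n * p

for k in range(11):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = exp(-r) * r**k / factorial(k)
    print(k, round(binom, 5), round(poisson, 5))   # the two columns are close
```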
The Normal Approximation
Open the binomial timeline experiment. For selected values of $p \in (0, 1)$, start with $n = 1$ and successively increase $n$ by 1. For each value of $n$, note the shape of the probability density function of the number of successes and the proportion of successes. With $n = 100$, run the experiment 1000 times and compare the empirical density function to the probability density function for the number of successes and the proportion of successes.
The characteristic bell shape that you should observe in the previous exercise is an example of the central limit theorem, because the binomial variable can be written as a sum of $n$ independent, identically distributed random variables (the indicator variables).
The standard score $Z_n$ of $Y_n$ is the same as the standard score of the proportion of successes $M_n = Y_n / n$ (studied in detail below): $Z_n = \frac{Y_n - n p}{\sqrt{n p (1 - p)}} = \frac{M_n - p}{\sqrt{p (1 - p) / n}}$ The distribution of $Z_n$ converges to the standard normal distribution as $n \to \infty$.
This version of the central limit theorem is known as the DeMoivre-Laplace theorem, and is named after Abraham DeMoivre and Pierre-Simon Laplace. From a practical point of view, this result means that, for large $n$, the distribution of $Y_n$ is approximately normal, with mean $n p$ and standard deviation $\sqrt{n p (1 - p)}$ and the distribution of $M_n$ is approximately normal, with mean $p$ and standard deviation $\sqrt{p (1 - p) / n}$. Just how large $n$ needs to be for the normal approximation to work well depends on the value of $p$. The rule of thumb is that we need $n p \ge 5$ and $n (1 - p) \ge 5$ (the first condition is the significant one when $p \le \frac{1}{2}$ and the second condition is the significant one when $p \ge \frac{1}{2}$). Finally, when using the normal approximation, we should remember to use the continuity correction, since the binomial is a discrete distribution.
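The following minimal sketch (assuming SciPy is available) carries out the normal approximation with the continuity correction; the parameters match the voter preference exercise above, so the exact value should agree with the answer given there.

```python
# Normal approximation to the binomial distribution, with the continuity
# correction. Assumes SciPy; n = 50 and p = 0.4 as in the voter exercise above.
from math import sqrt
from scipy.stats import binom, norm

n, p = 50, 0.4
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom.cdf(18, n, p)                  # P(Y_n <= 18) = P(Y_n < 19)
approx = norm.cdf((18 + 0.5 - mu) / sigma)   # continuity correction: evaluate at 18.5
print(exact, approx)                         # the two values are close
```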
General Families
For a fixed number of trials $n$, the binomial distribution is a member of two general families of distributions. First, it is a general exponential distribution.
Suppose that $Y$ has the binomial distribution with parameters $n$ and $p$, where $n \in \N_+$ is fixed and $p \in (0, 1)$. The distribution of $Y$ is a one-parameter exponential family with natural parameter $\ln \left( \frac{p}{1 - p} \right)$ and natural statistic $Y$.
Proof
This follows from the definition of the general exponential family. The support set $\{0, 1, \ldots, n\}$ does not depend on $p$, and for $y$ in this set, $f_n(y) = \binom{n}{y} p^y (1 - p)^{n-y} = \binom{n}{y} (1 - p)^n \left(\frac{p}{1 - p}\right)^y = \binom{n}{y} (1 - p)^n \exp\left[y \ln\left(\frac{p}{1 - p}\right)\right]$
Note that the natural parameter is the logarithm of the odds ratio corresponding to $p$. This function is sometimes called the logit function. The binomial distribution is also a power series distribution.
Suppose again that $Y$ has the binomial distribution with parameters $n$ and $p$, where $n \in \N_+$ is fixed and $p \in (0, 1)$. The distribution of $Y$ is a power series distribution in the parameter $\theta = \frac{p}{1 - p}$, corresponding to the function $\theta \mapsto (1 + \theta)^n$.
Proof
This follows from the definition of the power series distribution. As before, for $y \in \{0, 1, \ldots, n\}$, $f_n(y) = \binom{n}{y} p^y (1 - p)^{n-y} = \binom{n}{y} (1 - p)^n \left(\frac{p}{1 - p}\right)^y = \frac{1}{(1 + \theta)^n} \binom{n}{y} \theta^y$ where $\theta = \frac{p}{1 - p}$. This is the power series distribution in $\theta$, with coefficients $\binom{n}{y}$, corresponding to the function $\theta \mapsto (1 + \theta)^n$.
The Proportion of Successes
Suppose again that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p$, and that as usual, $Y_n = \sum_{i=1}^n X_i$ is the number of successes in the first $n$ trials for $n \in \N$. The proportion of successes in the first $n$ trials is the random variable $M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ In statistical terms, $M_n$ is the sample mean of the random sample $(X_1, X_2, \ldots, X_n)$. The proportion of successes $M_n$ is typically used to estimate the probability of success $p$ when this probability is unknown.
It is easy to express the probability density function of the proportion of successes $M_n$ in terms of the probability density function of the number of successes $Y_n$. First, note that $M_n$ takes the values $k / n$ where $k \in \{0, 1, \dots, n\}$.
The probability density function of $M_n$ is given by $\P \left( M_n = \frac{k}{n} \right) = \binom{n}{k} p^k (1 - p)^{n-k}, \quad k \in \{0, 1, \ldots, n\}$
Proof
Trivially, $M_n = k / n$ if and only if $Y_n = k$ for $k \in \{0, 1, \ldots, n\}$.
In the binomial coin experiment, select the proportion of heads. Vary $n$ and $p$ with the scroll bars and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and compare the relative frequency function to the probability density function.
The mean and variance of the proportion of successes $M_n$ are easy to compute from the mean and variance of the number of successes $Y_n$.
The mean and variance of $M_n$ are
1. $\E(M_n) = p$
2. $\var(M_n) = \frac{1}{n} p (1 - p)$.
Proof
From the scaling properties of expected value and variance,
1. $\E(M_n) = \frac{1}{n} \E(Y_n) = \frac{1}{n} n p = p$
2. $\var(M_n) = \frac{1}{n^2} \var(Y_n) = \frac{1}{n^2} n p (1 - p) = \frac{1}{n} p (1 - p)$
In the binomial coin experiment, select the proportion of heads. Vary $n$ and $p$ and note the size and location of the mean$\pm$standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the empirical moments to the distribution moments.
Recall that skewness and kurtosis are standardized measures. Since $M_n$ and $Y_n$ have the same standard score, the skewness and kurtosis of $M_n$ are the same as the skewness and kurtosis of $Y_n$ given above.
In statistical terms, part (a) of the moment result above means that $M_n$ is an unbiased estimator of $p$. From part (b) note that $\var(M_n) \le \frac{1}{4 n}$ for any $p \in [0, 1]$. In particular, $\var(M_n) \to 0$ as $n \to \infty$ and the convergence is uniform in $p \in [0, 1]$. Thus, the estimate improves as $n$ increases; in statistical terms, this is known as consistency.
For every $\epsilon \gt 0$, $\P\left(\left|M_n - p\right| \ge \epsilon\right) \to 0$ as $n \to \infty$ and the convergence is uniform in $p \in [0, 1]$.
Proof
This follows from the last result and Chebyshev's inequality.
The last result is a special case of the weak law of large numbers and means that $M_n \to p$ as $n \to \infty$ in probability. The strong law of large numbers states that the convergence actually holds with probability 1.
The proportion of successes $M_n$ has a number of nice properties as an estimator of the probability of success $p$. As already noted, it is unbiased and consistent. In addition, since $Y_n$ is a sufficient statistic for $p$, based on the sample $(X_1, X_2, \ldots, X_n)$, it follows that $M_n$ is sufficient for $p$ as well. Since $\E(X_i) = p$, $M_n$ is trivially the method of moments estimator of $p$. Assuming that the parameter space for $p$ is $[0, 1]$, it is also the maximum likelihood estimator of $p$.
The likelihood function for $p \in [0, 1]$, based on the observed sample $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$, is $L_y(p) = p^y (1 - p)^{n-y}$, where $y = \sum_{i=1}^n x_i$. The likelihood is maximized at $m = y / n$.
Proof
By definition, the likelihood function is simply the joint PDF of $(X_1, X_2, \ldots, X_n)$ thought of as a function of the parameter $p$, for fixed $(x_1, x_2, \ldots, x_n)$. Thus the form of the likelihood function follows from the joint PDF given in the Introduction. If $y = 0$, $L_y$ is decreasing and hence is maximized at $p = 0 = y / n$. If $y = n$, $L_y$ is increasing and is maximized at $p = 1 = y / n$. If $0 \lt y \lt n$, the log-likelihood function is $\ln\left[L_y(p)\right] = y \ln(p) + (n - y) \ln(1 - p)$ and the derivative is $\frac{d}{dp} \ln\left[L_y(p)\right] = \frac{y}{p} - \frac{n - y}{1 - p}$ There is a single critical point at $y / n$. The second derivative of the log-likelihood function is negative, so the maximum on $(0, 1)$ occurs at the critical point.
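The maximization is easy to check numerically. The following minimal sketch, in plain Python with arbitrary illustrative values of $n$ and $y$, evaluates the likelihood on a grid and confirms that the maximizer is (essentially) $y / n$.

```python
# Numerical check that the likelihood L_y(p) = p^y (1 - p)^(n - y) is maximized
# at p = y / n. Plain Python; n = 20 and y = 7 are arbitrary illustrations.
n, y = 20, 7

def likelihood(p):
    return p**y * (1 - p)**(n - y)

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=likelihood)
print(best, y / n)   # the grid maximizer is (essentially) y / n = 0.35
```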
See Estimation in the Bernoulli Model in the chapter on Set Estimation for a different approach to the problem of estimating $p$.
Examples and Applications
Simple Exercises
A student takes a multiple choice test with 20 questions, each with 5 choices (only one of which is correct). Suppose that the student blindly guesses. Let $X$ denote the number of questions that the student answers correctly. Find each of the following:
1. The probability density function of $X$.
2. The mean of $X$.
3. The variance of $X$.
4. The probability that the student answers at least 12 questions correctly (the score that she needs to pass).
Answer
1. $\P(X = x) = \binom{20}{x} \left(\frac{1}{5}\right)^x \left(\frac{4}{5}\right)^{20-x}$ for $x \in \{0, 1, \ldots, 20\}$
2. $\E(X) = 4$
3. $\var(X) = \frac{16}{5}$
4. $\P(X \ge 12) \approx 0.000102$. She has no hope of passing.
A certain type of missile has failure probability 0.02. Let $Y$ denote the number of failures in 50 tests. Find each of the following:
1. The probability density function of $Y$.
2. The mean of $Y$.
3. The variance of $Y$.
4. The probability of at least 47 successful tests.
Answer
1. $\P(Y = y) = \binom{50}{y} \left(\frac{1}{50}\right)^y \left(\frac{49}{50}\right)^{50-y}$ for $y \in \{0, 1, \ldots, 50\}$
2. $\E(Y) = 1$
3. $\var(Y) = \frac{49}{50}$
4. $\P(Y \le 3) \approx 0.9822$
Suppose that in a certain district, 40% of the registered voters prefer candidate $A$. A random sample of 50 registered voters is selected. Let $Z$ denote the number in the sample who prefer $A$. Find each of the following:
1. The probability density function of $Z$.
2. The mean of $Z$.
3. The variance of $Z$.
4. The probability that $Z$ is less than 19.
5. The normal approximation to the probability in (d).
Answer
1. $\P(Z = z) = \binom{50}{z} \left(\frac{2}{5}\right)^z \left(\frac{3}{5}\right)^{50-z}$ for $z \in \{0, 1, \ldots, 50\}$
2. $\E(Z) = 20$
3. $\var(Z) = 12$
4. $\P(Z \lt 19) = 0.3356$
5. $\P(Z \lt 19) \approx 0.3330$
Coins and Dice
Recall that a standard die is a six-sided die. A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 have probability $\frac{1}{4}$ each, and faces 2, 3, 4, and 5 have probability $\frac{1}{8}$.
A standard, fair die is tossed 10 times. Let $N$ denote the number of aces. Find each of the following:
1. The probability density function of $N$.
2. The mean of $N$.
3. The variance of $N$.
Answer
1. $\P(N = k) = \binom{10}{k} \left(\frac{1}{6}\right)^k \left(\frac{5}{6}\right)^{10-k}$ for $k \in \{0, 1, \ldots, 10\}$
2. $\E(N) = \frac{5}{3}$
3. $\var(N) = \frac{25}{18}$
A coin is tossed 100 times and results in 30 heads. Find the probability density function of the number of heads in the first 20 tosses.
Answer
Let $Y_n$ denote the number of heads in the first $n$ tosses.
$\P(Y_{20} = y \mid Y_{100} = 30) = \frac{\binom{20}{y} \binom{80}{30 - y}}{\binom{100}{30}}, \quad y \in \{0, 1, \ldots, 20\}$
An ace-six flat die is rolled 1000 times. Let $Z$ denote the number of times that a score of 1 or 2 occurred. Find each of the following:
1. The probability density function of $Z$.
2. The mean of $Z$.
3. The variance of $Z$.
4. The probability that $Z$ is at least 400.
5. The normal approximation of the probability in (d)
Answer
1. $\P(Z = z) = \binom{1000}{z} \left(\frac{3}{8}\right)^z \left(\frac{5}{8}\right)^{1000-z}$ for $z \in \{0, 1, \ldots, 1000\}$
2. $\E(Z) = 375$
3. $\var(Z) = 1875 / 8$
4. $\P(Z \ge 400) \approx 0.0552$
5. $\P(Z \ge 400) \approx 0.0550$
In the binomial coin experiment, select the proportion of heads. Set $n = 10$ and $p = 0.4$. Run the experiment 100 times. Over all 100 runs, compute the square root of the average of the squares of the errors, when $M$ is used to estimate $p$. This number is a measure of the quality of the estimate.
In the binomial coin experiment, select the number of heads $Y$, and set $p = 0.5$ and $n = 15$. Run the experiment 1000 times and compute the following:
1. $\P(5 \le Y \le 10)$
2. The relative frequency of the event $\{5 \le Y \le 10\}$
3. The normal approximation to $\P(5 \le Y \le 10)$
Answer
1. $\P(5 \le Y \le 10) = 0.8815$
3. $\P(5 \le Y \le 10) \approx 0.878$
In the binomial coin experiment, select the proportion of heads $M$ and set $n = 30$, $p = 0.6$. Run the experiment 1000 times and compute each of the following:
1. $\P(0.5 \le M \le 0.7)$
2. The relative frequency of the event $\{0.5 \le M \le 0.7\}$
3. The normal approximation to $\P(0.5 \le M \le 0.7)$
Answer
1. $\P(0.5 \le M \le 0.7) = 0.8089$
3. $\P(0.5 \le M \le 0.7) \approx 0.808$
Famous Problems
In 1693, Samuel Pepys asked Isaac Newton whether it is more likely to get at least one 6 in 6 rolls of a die, or at least two 6's in 12 rolls, or at least three 6's in 18 rolls. This problem is known as Pepys' problem; naturally, Pepys had fair dice in mind.
Solve Pepys' problem using the binomial distribution.
Answer
Let $Y_n$ denote the number of 6's in $n$ rolls of a fair die, so that $Y_n$ has the binomial distribution with parameters $n$ and $p = \frac{1}{6}$. Using the binomial PDF (and complements to simplify the computations),
1. $\P(Y_6 \ge 1) = 1 - \P(Y_6 = 0) = 0.6651$
2. $\P(Y_{12} \ge 2) = 1 - \P(Y_{12} \le 1) = 0.6187$
3. $\P(Y_{18} \ge 3) = 1 - \P(Y_{18} \le 2) = 0.5973$
So the first event, at least one 6 in 6 rolls, is the most likely, and in fact the three events, in the order given, decrease in probability.
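The probabilities above are simple to reproduce numerically; the following minimal sketch uses plain Python and the complementary binomial sum.

```python
# Pepys' problem: probability of at least k sixes in 6k rolls of a fair die,
# for k = 1, 2, 3. Plain Python, using the complementary binomial sum.
from math import comb

def prob_at_least(k):
    n, p = 6 * k, 1 / 6
    return 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

for k in (1, 2, 3):
    print(k, round(prob_at_least(k), 4))   # 0.6651, 0.6187, 0.5973
```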
With fair dice run the simulation of the dice experiment 500 times and compute the relative frequency of the event of interest. Compare the results with the theoretical probabilities above.
1. At least one 6 with $n = 6$.
2. At least two 6's with $n = 12$.
3. At least three 6's with $n = 18$.
It appears that Pepys had some sort of misguided linearity in mind, given the comparisons that he wanted. Let's solve the general problem.
For $k \in \N_+$, let $Y_k$ denote the number of 6's in $6 k$ rolls of a fair die.
1. Identify the distribution of $Y_k$.
2. Find the mean and variance of $Y_k$.
3. Give an exact formula for $\P(Y_k \ge k)$, the probability of at least $k$ 6's in $6 k$ rolls of a fair die.
4. Give the normal approximation of the probability in (c).
5. Find the limit of the probability in (d) as $k \to \infty$.
Proof
1. $Y_k$ has the binomial distribution with parameters $n = 6 k$ and $p = \frac{1}{6}$.
2. $\E(Y_k) = (6 k) \frac{1}{6} = k$, $\var(Y_k) = (6 k) \frac{1}{6} \frac{5}{6} = \frac{5}{6} k$
3. $\P(Y_k \ge k) = 1 - \P(Y_k \le k - 1) = 1 - \sum_{j=0}^{k-1} \binom{6 k}{j} \left(\frac{1}{6}\right)^j \left(\frac{5}{6}\right)^{6 k - j}$
4. Using the continuity correction, $\P(Y_k \ge k) = \P\left(Y_k \ge k - \frac{1}{2}\right) = \P\left(\frac{Y_k - k}{\sqrt{(5/6)k}} \ge \frac{-1/2}{\sqrt{(5/6)k}}\right) \approx 1 - \Phi\left(\frac{-1/2}{\sqrt{(5/6)k}}\right)$
5. $1 - \Phi\left(\frac{-1/2}{\sqrt{(5/6)k}}\right) \to 1 - \Phi(0) = \frac{1}{2}$ as $k \to \infty$
So on average, the number of 6's in $6 k$ rolls of a fair die is $k$, and that fact might have influenced Pepys' thinking. The next problem is known as DeMere's problem, named after Chevalier De Mere.
Which is more likely: at least one 6 with 4 throws of a fair die or at least one double 6 in 24 throws of two fair dice?
Answer
Let $Y_n$ denote the number of 6's in $n$ rolls of a fair die, and let $Z_n$ denote the number of double 6's in $n$ rolls of a pair of fair dice. Then $Y_n$ has the binomial distribution with parameters $n$ and $p = \frac{1}{6}$, and $Z_n$ has the binomial distribution with parameters $n$ and $p = \frac{1}{36}$. Using the binomial PDF,
1. $\P(Y_4 \ge 1) = 0.5177$
2. $\P(Z_{24} \ge 1) = 0.4914$
Data Analysis Exercises
In the cicada data, compute the proportion of males in the entire sample, and the proportion of males of each species in the sample.
Answer
1. $m = 0.433$
2. $m_0 = 0.636$
3. $m_1 = 0.259$
4. $m_2 = 0.5$
In the M&M data, pool the bags to create a large sample of M&Ms. Now compute the sample proportion of red M&Ms.
Answer
$m_{\text{red}} = 0.168$
The Galton Board
The Galton board is a triangular array of pegs. The rows are numbered by the natural numbers $\N = \{0, 1, \ldots\}$ from top downward. Row $n$ has $n + 1$ pegs numbered from left to right by the integers $\{0, 1, \ldots, n\}$. Thus a peg can be uniquely identified by the ordered pair $(n, k)$ where $n$ is the row number and $k$ is the peg number in that row. The Galton board is named after Francis Galton.
Now suppose that a ball is dropped from above the top peg $(0, 0)$. Each time the ball hits a peg, it bounces to the right with probability $p$ and to the left with probability $1 - p$, independently from bounce to bounce.
The number of the peg that the ball hits in row $n$ has the binomial distribution with parameters $n$ and $p$.
In the Galton board experiment, select random variable $Y$ (the number of moves right). Vary the parameters $n$ and $p$ and note the shape and location of the probability density function and the mean$\pm$standard deviation bar. For selected values of the parameters, click single step several times and watch the ball fall through the pegs. Then run the experiment 1000 times and watch the path of the ball. Compare the relative frequency function and empirical moments to the probability density function and distribution moments, respectively.
Structural Reliability
Recall the discussion of structural reliability given in the last section on Bernoulli trials. In particular, we have a system of $n$ similar components that function independently, each with reliability $p$. Suppose now that the system as a whole functions properly if and only if at least $k$ of the $n$ components are good. Such a system is called, appropriately enough, a $k$ out of $n$ system. Note that the series and parallel systems considered in the previous section are $n$ out of $n$ and 1 out of $n$ systems, respectively.
Consider the $k$ out of $n$ system.
1. The state of the system is $\bs{1}(Y_n \ge k)$ where $Y_n$ is the number of working components.
2. The reliability function is $r_{n,k}(p) = \sum_{i=k}^n \binom{n}{i} p^i (1 - p)^{n-i}$.
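The reliability function above is a binomial upper tail probability, and so is simple to compute. The following minimal sketch uses plain Python; the parameter values are arbitrary illustrations, and the series and parallel cases are included as checks.

```python
# Reliability of a k out of n system: the probability that at least k of the
# n independent components (each with reliability p) are working. Plain Python.
from math import comb

def reliability(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Arbitrary illustration: a 4 out of 10 system with component reliability 0.9.
print(reliability(10, 4, 0.9))
# Series (10 out of 10) and parallel (1 out of 10) systems as special cases:
print(reliability(10, 10, 0.9), 0.9**10)        # these agree
print(reliability(10, 1, 0.9), 1 - 0.1**10)     # these agree
```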
In the binomial coin experiment, set $n = 10$ and $p = 0.9$ and run the simulation 1000 times. Compute the empirical reliability and compare with the true reliability in each of the following cases:
1. 10 out of 10 (series) system.
2. 1 out of 10 (parallel) system.
3. 4 out of 10 system.
Consider a system with $n = 4$ components. Sketch the graphs of $r_{4,1}$, $r_{4,2}$, $r_{4,3}$, and $r_{4,4}$ on the same set of axes.
An $n$ out of $2 n - 1$ system is a majority rules system.
1. Compute the reliability of a 2 out of 3 system.
2. Compute the reliability of a 3 out of 5 system
3. For what values of $p$ is a 3 out of 5 system more reliable than a 2 out of 3 system?
4. Sketch the graphs of $r_{3,2}$ and $r_{5,3}$ on the same set of axes.
5. Show that $r_{2 n - 1, n} \left( \frac{1}{2} \right) = \frac{1}{2}$.
Answer
1. $r_{3,2}(p) = 3 p^2 - 2 p^3$
2. $r_{5,3}(p) = 10 p^3 - 15 p^4 + 6 \, p^5$
3. 3 out of 5 is better for $p \ge \frac{1}{2}$
In the binomial coin experiment, compute the empirical reliability, based on 100 runs, in each of the following cases. Compare your results to the true probabilities.
1. A 2 out of 3 system with $p = 0.3$
2. A 3 out of 5 system with $p = 0.3$
3. A 2 out of 3 system with $p = 0.8$
4. A 3 out of 5 system with $p = 0.8$
Reliable Communications
Consider the transmission of bits (0s and 1s) through a noisy channel. Specifically, suppose that when bit $i \in \{0, 1\}$ is transmitted, bit $i$ is received with probability $p_i \in (0, 1)$ and the complementary bit $1 - i$ is received with probability $1 - p_i$. Given the bits transmitted, bits are received correctly or incorrectly independently of one-another. Suppose now, that to increase reliability, a given bit $I$ is repeated $n \in \N_+$ times in the transmission. A priori, we believe that $\P(I = 1) = \alpha \in (0, 1)$ and $\P(I = 0) = 1 - \alpha$. Let $X$ denote the number of 1s received when bit $I$ is transmitted $n$ times.
Find each of the following:
1. The conditional distribution of $X$ given $I = i \in \{0, 1\}$
2. The probability density function of $X$
3. $\E(X)$
4. $\var(X)$
Answer
1. Given $I = 1$, $X$ has the binomial distribution with parameters $n$ and $p_1$. Given $I = 0$, $X$ has the binomial distribution with parameters $n$ and $1 - p_0$.
2. $\P(X = k) = \binom{n}{k} \left[\alpha p_1^k (1 - p_1)^{n-k} + (1 - \alpha) (1 - p_0)^k p_0^{n-k}\right]$ for $k \in \{0, 1, \ldots, n\}$.
3. $\E(X) = n \left[\alpha p_1 + (1 - \alpha) (1 - p_0)\right]$
4. $\var(X) = \alpha \left[n p_1 (1 - p_1) + n^2 p_1^2\right] + (1 - \alpha)\left[n p_0 (1 - p_0) + n^2 (1 - p_0)^2\right] - n^2 \left[\alpha p_1 + (1 - \alpha)(1 - p_0)\right]^2$
Simplify the results in the last exercise in the symmetric case where $p_1 = p_0 =: p$ (so that the bits are equally reliable) and with $\alpha = \frac{1}{2}$ (so that we have no prior information).
Answer
1. Given $I = 1$, $X$ has the binomial distribution with parameters $n$ and $p$. Given $I = 0$, $X$ has the binomial distribution with parameters $n$ and $1 - p$.
2. $\P(X = k) = \frac{1}{2}\binom{n}{k} \left[p^k (1 - p)^{n-k} + (1 - p)^k p^{n-k}\right]$ for $k \in \{0, 1, \ldots, n\}$.
3. $\E(X) = \frac{1}{2}n$
4. $\var(X) = n p (1 - p) + \frac{1}{2} n^2 \left[p^2 + (1 - p)^2\right] - \frac{1}{4} n^2$
Our interest, of course, is predicting the bit transmitted given the bits received.
Find the posterior probability that $I = 1$ given $X = k \in \{0, 1, \ldots, n\}$.
Answer $\P(I = 1 \mid X = k) = \frac{\alpha p_1^k (1 - p_1)^{n-k}}{\alpha p_1^k (1 - p_1)^{n-k} + (1 - \alpha) (1 - p_0)^k p_0^{n-k}}$
Presumably, our decision rule would be to conclude that 1 was transmitted if the posterior probability in the previous exercise is greater than $\frac{1}{2}$ and to conclude that 0 was transmitted if this probability is less than $\frac{1}{2}$. If the probability equals $\frac{1}{2}$, we have no basis to prefer one bit over the other.
Give the decision rule in the symmetric case where $p_1 = p_0 =: p$, so that the bits are equally reliable. Assume that $p \gt \frac{1}{2}$, so that we at least have a better than even chance of receiving the bit transmitted.
Answer
Given $X = k$, we conclude that bit 1 was transmitted if $k \gt \frac{n}{2} - \frac{1}{2} \frac{\ln(\alpha) - \ln(1 - \alpha)}{\ln(p) - \ln(1 - p)}$ and we conclude that bit 0 was transmitted if the reverse inequality holds.
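The following minimal sketch, in plain Python with arbitrary illustrative parameter values, computes the posterior probability and applies the decision rule in the symmetric case $p_1 = p_0$.

```python
# Posterior probability that bit 1 was sent, given k ones received out of n,
# and the corresponding decision rule. Plain Python; alpha, p0, p1, n are
# arbitrary illustrative values (symmetric case p0 = p1).
from math import log

alpha, p0, p1, n = 0.3, 0.9, 0.9, 5

def posterior(k):
    num = alpha * p1**k * (1 - p1)**(n - k)
    den = num + (1 - alpha) * (1 - p0)**k * p0**(n - k)
    return num / den

# threshold from the decision rule above (valid in the symmetric case p0 = p1 = p)
p = p1
threshold = n / 2 - 0.5 * (log(alpha) - log(1 - alpha)) / (log(p) - log(1 - p))
for k in range(n + 1):
    print(k, round(posterior(k), 4), "decide 1" if k > threshold else "decide 0")
```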
Not surprisingly, in the symmetric case with no prior information, so that $\alpha = \frac{1}{2}$, we conclude that bit $i$ was transmitted if a majority of bits received are $i$.
Bernstein Polynomials
The Weierstrass Approximation Theorem, named after Karl Weierstrass, states that any real-valued function that is continuous on a closed, bounded interval can be uniformly approximated on that interval, to any degree of accuracy, with a polynomial. The theorem is important, since polynomials are simple and basic functions, and a bit surprising, since continuous functions can be quite strange.
In 1911, Sergei Bernstein gave an explicit construction of polynomials that uniformly approximate a given continuous function, using Bernoulli trials. Bernstein's result is a beautiful example of the probabilistic method, the use of probability theory to obtain results in other areas of mathematics that are seemingly unrelated to probability.
Suppose that $f$ is a real-valued function that is continuous on the interval $[0, 1]$. The Bernstein polynomial of degree $n$ for $f$ is defined by $b_n(p) = \E_p\left[f(M_n)\right], \quad p \in [0, 1]$ where $M_n$ is the proportion of successes in the first $n$ Bernoulli trials with success parameter $p$, as defined earlier. Note that we are emphasizing the dependence on $p$ in the expected value operator. The next exercise gives a more explicit representation, and shows that the Bernstein polynomial is, in fact, a polynomial.
The Bernstein polynomial of degree $n$ can be written as follows: $b_n(p) = \sum_{k=0}^n f \left(\frac{k}{n} \right) \binom{n}{k} p^k (1 - p)^{n-k}, \quad p \in [0, 1]$
Proof
This follows from the change of variables theorem for expected value.
The Bernstein polynomials satisfy the following properties:
1. $b_n(0) = f(0)$ and $b_n(1) = f(1)$
2. $b_1(p) = f(0) + \left[f(1) - f(0)\right] p$ for $p \in [0, 1]$.
3. $b_2(p) = f(0) + 2 \, \left[ f \left( \frac{1}{2} \right) - f(0) \right] p + \left[ f(1) - 2 f \left( \frac{1}{2} \right) + f(0) \right] p^2$ for $p \in [0, 1]$
From part (a), the graph of $b_n$ passes through the endpoints $\left(0, f(0)\right)$ and $\left(1, f(1)\right)$. From part (b), the graph of $b_1$ is a line connecting the endpoints. From (c), the graph of $b_2$ is a parabola passing through the endpoints and the point $\left( \frac{1}{2}, \frac{1}{4} \, f(0) + \frac{1}{2} \, f\left(\frac{1}{2}\right) + \frac{1}{4} \, f(1) \right)$.
The next result gives Bernstein's theorem explicitly.
$b_n \to f$ as $n \to \infty$ uniformly on $[0, 1]$.
Proof
Since $f$ is continuous on the closed, bounded interval $[0, 1]$, it is bounded on this interval. Thus, there exists a constant $C$ such that $\left|f(p)\right| \le C$ for all $p \in [0, 1]$. Also, $f$ is uniformly continuous on $[0, 1]$. Thus, for any $\epsilon \gt 0$ there exists $\delta \gt 0$ such that if $p, \, q \in [0, 1]$ and $\left|p - q\right| \lt \delta$ then $\left|f(p) - f(q)\right| \lt \epsilon$. From basic properties of expected value, $\left|b_n(p) - f(p)\right| \le \E_p\left[\left|f(M_n) - f(p)\right|, \left|M_n - p\right| \lt \delta\right] + \E_p\left[\left|f(M_n) - f(p)\right|, \left|M_n - p\right| \ge \delta\right]$ Hence $\left|b_n(p) - f(p)\right| \le \epsilon + 2 C \P_p\left(\left|M_n - p\right| \ge \delta\right)$ for any $p \in [0, 1]$. But by the weak law of large numbers above, $\P_p\left(\left|M_n - p\right| \ge \delta\right) \to 0$ as $n \to \infty$ uniformly in $p \in [0, 1]$.
Compute the Bernstein polynomials of orders 1, 2, and 3 for the function $f$ defined by $f(x) = \cos(\pi x)$ for $x \in [0, 1]$. Graph $f$ and the three polynomials on the same set of axes.
Answer
1. $b_1(p) = 1 - 2 p$
2. $b_2(p) = 1 - 2 p$
3. $b_3(p) = 1 - \frac{3}{2} p - \frac{3}{2} p^2 + p^3$
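The Bernstein polynomials are also easy to evaluate numerically. The following minimal sketch, assuming NumPy is available, checks the three closed forms above on a grid of points; the same helper function can be adapted (together with a plotting library) for the computer algebra exercise below.

```python
# Numerical sketch of Bernstein polynomial approximation. Assumes NumPy.
# The check uses f(x) = cos(pi x) and compares b_1, b_2, b_3 evaluated on a
# grid with the closed forms given in the answer above.
import numpy as np
from math import comb

def bernstein(f, n, p):
    # evaluate b_n(p) = sum_k f(k/n) C(n,k) p^k (1-p)^(n-k) at the points in p
    p = np.asarray(p, dtype=float)
    return sum(f(k / n) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

f = lambda x: np.cos(np.pi * x)
p = np.linspace(0, 1, 101)

closed_forms = {
    1: 1 - 2 * p,
    2: 1 - 2 * p,
    3: 1 - 1.5 * p - 1.5 * p**2 + p**3,
}
for n, poly in closed_forms.items():
    print(n, np.allclose(bernstein(f, n, p), poly))   # True for each n
```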
Use a computer algebra system to compute the Bernstein polynomials of orders 10, 20, and 30 for the function $f$ defined below. Use the CAS to graph the function and the three polynomials on the same axes. $f(x) = \begin{cases} 0, & x = 0 \ x \sin\left(\frac{\pi}{x}\right), & x \in (0, 1] \end{cases}$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.02%3A_The_Binomial_Distribution.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
Definitions
Suppose again that our random experiment is to perform a sequence of Bernoulli trials $\bs{X} = (X_1, X_2, \ldots)$ with success parameter $p \in (0, 1]$. In this section we will study the random variable $N$ that gives the trial number of the first success and the random variable $M$ that gives the number of failures before the first success.
Let $N = \min\{n \in \N_+: X_n = 1\}$, the trial number of the first success, and let $M = N - 1$, the number of failures before the first success. The distribution of $N$ is the geometric distribution on $\N_+$ and the distribution of $M$ is the geometric distribution on $\N$. In both cases, $p$ is the success parameter of the distribution.
Since $N$ and $M$ differ by a constant, the properties of their distributions are very similar. Nonetheless, there are applications where it is more natural to use one rather than the other, and in the literature, the term geometric distribution can refer to either. In this section, we will concentrate on the distribution of $N$, pausing occasionally to summarize the corresponding results for $M$.
The Probability Density Function
$N$ has probability density function $f$ given by $f(n) = p (1 - p)^{n-1}$ for $n \in \N_+$.
Proof
Note first that $\{N = n\} = \{X_1 = 0, \ldots, X_{n-1} = 0, X_n = 1\}$. By independence, the probability of this event is $(1 - p)^{n-1} p$.
Check that $f$ is a valid PDF
By standard results for geometric series $\sum_{n=1}^\infty \P(N = n) = \sum_{n=1}^\infty (1 - p)^{n-1} p = \frac{p}{1 - (1 - p)} = 1$
A priori, we might have thought it possible to have $N = \infty$ with positive probability; that is, we might have thought that we could run Bernoulli trials forever without ever seeing a success. However, we now know this cannot happen when the success parameter $p$ is positive.
The probability density function of $M$ is given by $\P(M = n) = p (1 - p)^n$ for $n \in \N$.
In the negative binomial experiment, set $k = 1$ to get the geometric distribution on $\N_+$. Vary $p$ with the scroll bar and note the shape and location of the probability density function. For selected values of $p$, run the simulation 1000 times and compare the relative frequency function to the probability density function.
Note that the probability density functions of $N$ and $M$ are decreasing, and hence have modes at 1 and 0, respectively. The geometric form of the probability density functions also explains the term geometric distribution.
Distribution Functions and the Memoryless Property
Suppose that $T$ is a random variable taking values in $\N_+$. Recall that the ordinary distribution function of $T$ is the function $n \mapsto \P(T \le n)$. In this section, the complementary function $n \mapsto \P(T \gt n)$ will play a fundamental role. We will refer to this function as the right distribution function of $T$. Of course both functions completely determine the distribution of $T$. Suppose again that $N$ has the geometric distribution on $\N_+$ with success parameter $p \in (0, 1]$.
$N$ has right distribution function $G$ given by $G(n) = (1 - p)^n$ for $n \in \N$.
Proof from Bernoulli trials
Note that $\{N \gt n\} = \{X_1 = 0, \ldots, X_n = 0\}$. By independence, the probability of this event is $(1 - p)^n$.
Direct proof
Using geometric series, $\P(N \gt n) = \sum_{k=n+1}^\infty \P(N = k) = \sum_{k=n+1}^\infty (1 - p)^{k-1} p = \frac{p (1 - p)^n}{1 - (1 - p)} = (1 - p)^n$
From the last result, it follows that the ordinary (left) distribution function of $N$ is given by $F(n) = 1 - (1 - p)^n, \quad n \in \N$ We will now explore another characterization known as the memoryless property.
For $m \in \N$, the conditional distribution of $N - m$ given $N \gt m$ is the same as the distribution of $N$. That is, $\P(N \gt n + m \mid N \gt m) = \P(N \gt n); \quad m, \, n \in \N$
Proof
From the result above and the definition of conditional probability, $\P(N \gt n + m \mid N \gt m) = \frac{\P(N \gt n + m)}{\P(N \gt m)} = \frac{(1 - p)^{n+m}}{(1 - p)^m} = (1 - p)^n = \P(N \gt n)$
Thus, if the first success has not occurred by trial number $m$, then the remaining number of trials needed to achieve the first success has the same distribution as the trial number of the first success in a fresh sequence of Bernoulli trials. In short, Bernoulli trials have no memory. This fact has implications for a gambler betting on Bernoulli trials (such as in the casino games roulette or craps). No betting strategy based on observations of past outcomes of the trials can possibly help the gambler.
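The memoryless property can be illustrated by simulation. The following minimal sketch, assuming NumPy is available and with arbitrary illustrative values of $p$ and $m$, compares the empirical conditional distribution of $N - m$ given $N \gt m$ with the distribution of $N$.

```python
# Simulation sketch of the memoryless property: given N > m, the distribution
# of N - m is the same as the distribution of N. Assumes NumPy; p and m are
# arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
p, m, reps = 0.3, 4, 100_000

N = rng.geometric(p, size=reps)   # trial number of the first success
conditional = N[N > m] - m        # values of N - m on the event {N > m}

for n in range(1, 6):
    print(n,
          round(float(np.mean(conditional == n)), 4),  # empirical P(N - m = n | N > m)
          round(p * (1 - p)**(n - 1), 4))              # P(N = n) = p (1 - p)^(n - 1)
```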
Conversely, if $T$ is a random variable taking values in $\N_+$ that satisfies the memoryless property, then $T$ has a geometric distribution.
Proof
Let $G(n) = \P(T \gt n)$ for $n \in \N$. The memoryless property and the definition of conditional probability imply that $G(m + n) = G(m) G(n)$ for $m, \; n \in \N$. Note that this is the law of exponents for $G$. It follows that $G(n) = G^n(1)$ for $n \in \N$. Hence $T$ has the geometric distribution with parameter $p = 1 - G(1)$.
Moments
Suppose again that $N$ is the trial number of the first success in a sequence of Bernoulli trials, so that $N$ has the geometric distribution on $\N_+$ with parameter $p \in (0, 1]$. The mean and variance of $N$ can be computed in several different ways.
$\E(N) = \frac{1}{p}$
Proof from the density function
Using the derivative of the geometric series, \begin{align} \E(N) &= \sum_{n=1}^\infty n p (1 - p)^{n-1} = p \sum_{n=1}^\infty n (1 - p)^{n-1} \ &= p \sum_{n=1}^\infty - \frac{d}{dp}(1 - p)^n = - p \frac{d}{d p} \sum_{n=0}^\infty (1 - p)^n \ &= -p \frac{d}{dp} \frac{1}{p} = -p \left(-\frac{1}{p^2}\right) = \frac{1}{p}\end{align}
Proof from the right distribution function
Recall that since $N$ takes positive integer values, its expected value can be computed as the sum of the right distribution function. Hence $\E(N) = \sum_{n=0}^\infty \P(N \gt n) = \sum_{n=0}^\infty (1 - p)^n = \frac{1}{p}$
Proof from Bernoulli trials
We condition on the first trial $X_1$: If $X_1 = 1$ then $N = 1$ and hence $\E(N \mid X_1 = 1) = 1$. If $X_1 = 0$ (equivalently $N \gt 1)$ then by the memoryless property, $N - 1$ has the same distribution as $N$. Hence $\E(N \mid X_1 = 0) = 1 + \E(N)$. In short $\E(N \mid X_1) = 1 + (1 - X_1) \E(N)$ It follows that $\E(N) = \E\left[\E(N \mid X_1)\right] = 1 + (1 - p) \E(N)$ Solving gives $\E(N) = \frac{1}{p}$.
This result makes intuitive sense. In a sequence of Bernoulli trials with success parameter $p$ we would expect to wait $1/p$ trials for the first success.
$\var(N) = \frac{1 - p}{p^2}$
Direct proof
We first compute $\E\left[N(N - 1)\right]$. This is an example of a factorial moment, and we will compute the general factorial moments below. Using derivatives of the geometric series again, \begin{align} \E\left[N(N - 1)\right] & = \sum_{n=2}^\infty n (n - 1) p (1 - p)^{n-1} = p (1 - p) \sum_{n=2}^\infty n(n - 1) (1 - p)^{n-2} \ & = p (1 - p) \frac{d^2}{d p^2} \sum_{n=0}^\infty (1 - p)^n = p (1 - p) \frac{d^2}{dp^2} \frac{1}{p} = p (1 - p) \frac{2}{p^3} = 2 \frac{1 - p}{p^2} \end{align} Since $\E(N) = \frac{1}{p}$, it follows that $\E\left(N^2\right) = \frac{2 - p}{p^2}$ and hence $\var(N) = \frac{1 - p}{p^2}$
Proof from Bernoulli trials
Recall that $\E(N \mid X_1) = 1 + (1 - X_1) \E(N) = 1 + \frac{1}{p} (1 - X_1)$ and by the same reasoning, $\var(N \mid X_1) = (1 - X_1) \var(N)$. Hence $\var(N) = \var\left[\E(N \mid X_1)\right] + \E\left[\var(N \mid X_1)\right] = \frac{1}{p^2} p(1 - p) + (1 - p) \var(N)$ Solving gives $\var(N) = \frac{1 - p}{p^2}$.
Note that $\var(N) = 0$ if $p = 1$, hardly surprising since $N$ is deterministic (taking just the value 1) in this case. At the other extreme, $\var(N) \uparrow \infty$ as $p \downarrow 0$.
In the negative binomial experiment, set $k = 1$ to get the geometric distribution. Vary $p$ with the scroll bar and note the location and size of the mean$\pm$standard deviation bar. For selected values of $p$, run the simulation 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
The probability generating function $P$ of $N$ is given by $P(t) = \E\left(t^N\right) = \frac{p t}{1 - (1 - p) t}, \quad \left|t\right| \lt \frac{1}{1 - p}$
Proof
This result follows from yet another application of geometric series: $\E\left(t^N\right) = \sum_{n=1}^\infty t^n p (1 - p)^{n-1} = p t \sum_{n=1}^\infty \left[t (1 - p)\right]^{n-1} = \frac{p t}{1 - (1 - p) t}, \quad \left|(1 - p) t\right| \lt 1$
Recall again that for $x \in \R$ and $k \in \N$, the falling power of $x$ of order $k$ is $x^{(k)} = x (x - 1) \cdots (x - k + 1)$. If $X$ is a random variable, then $\E\left[X^{(k)}\right]$ is the factorial moment of $X$ of order $k$.
The factorial moments of $N$ are given by $\E\left[N^{(k)}\right] = k! \frac{(1 - p)^{k-1}}{p^k}, \quad k \in \N_+$
Proof from geometric series
Using derivatives of geometric series again, \begin{align} \E\left[N^{(k)}\right] & = \sum_{n=k}^\infty n^{(k)} p (1 - p)^{n-1} = p (1 - p)^{k-1} \sum_{n=k}^\infty n^{(k)} (1 - p)^{n-k} \ & = p (1 - p)^{k-1} (-1)^k \frac{d^k}{dp^k} \sum_{n=0}^\infty (1 - p)^n = p (1 - p)^{k-1} (-1)^k \frac{d^k}{dp^k} \frac{1}{p} = k! \frac{(1 - p)^{k-1}}{p^k} \end{align}
Proof from the generating function
Recall that $\E\left[N^{(k)}\right] = P^{(k)}(1)$ where $P$ is the probability generating function of $N$. So the result follows from standard calculus.
Suppose that $p \in (0, 1)$. The skewness and kurtosis of $N$ are
1. $\skw(N) = \frac{2 - p}{\sqrt{1 - p}}$
2. $\kur(N) = \frac{9 (1 - p) + p^2}{1 - p}$
Proof
The factorial moments can be used to find the moments of $N$ about 0. The results then follow from the standard computational formulas for skewness and kurtosis.
Note that the geometric distribution is always positively skewed. Moreover, $\skw(N) \to \infty$ and $\kur(N) \to \infty$ as $p \uparrow 1$.
Suppose now that $M = N - 1$, so that $M$ (the number of failures before the first success) has the geometric distribution on $\N$. Then
1. $\E(M) = \frac{1 - p}{p}$
2. $\var(M) = \frac{1 - p}{p^2}$
3. $\skw(M) = \frac{2 - p}{\sqrt{1 - p}}$
4. $\kur(M) = \frac{9 (1 - p) + p^2}{1 - p}$
5. $\E\left(t^M\right) = \frac{p}{1 - (1 - p) \, t}$ for $\left|t\right| \lt \frac{1}{1 - p}$
Of course, the fact that the variance, skewness, and kurtosis are unchanged follows easily, since $N$ and $M$ differ by a constant.
The Quantile Function
Let $F$ denote the distribution function of $N$, so that $F(n) = 1 - (1 - p)^n$ for $n \in \N$. Recall that $F^{-1}(r) = \min \{n \in \N_+: F(n) \ge r\}$ for $r \in (0, 1)$ is the quantile function of $N$.
The quantile function of $N$ is $F^{-1}(r) = \left\lceil \frac{\ln(1 - r)}{\ln(1 - p)}\right\rceil, \quad r \in (0, 1)$
Of course, the quantile function, like the probability density function and the distribution function, completely determines the distribution of $N$. Moreover, we can compute the median and quartiles to get measures of center and spread.
The first quartile, the median (or second quartile), and the third quartile are
1. $F^{-1}\left(\frac{1}{4}\right) = \left\lceil \ln(3/4) \big/ \ln(1 - p)\right\rceil \approx \left\lceil-0.2877 \big/ \ln(1 - p)\right\rceil$
2. $F^{-1}\left(\frac{1}{2}\right) = \left\lceil \ln(1/2) \big/ \ln(1 - p)\right\rceil \approx \left\lceil-0.6931 \big/ \ln(1 - p)\right\rceil$
3. $F^{-1}\left(\frac{3}{4}\right) = \left\lceil \ln(1/4) \big/ \ln(1 - p)\right\rceil \approx \left\lceil-1.3863 \big/ \ln(1 - p)\right\rceil$
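These values are easy to reproduce numerically, and the quantile formula doubles as an inverse transform method for simulating $N$. The following is a minimal Python sketch (ours, for illustration only); the choice $p = 1/6$ anticipates the die exercise below, and the helper names are not part of the text.

```python
import math
import random

def geometric_quantile(r, p):
    """Closed-form quantile: ceil(ln(1 - r) / ln(1 - p))."""
    return math.ceil(math.log(1 - r) / math.log(1 - p))

def geometric_quantile_bruteforce(r, p):
    """Smallest n in {1, 2, ...} with F(n) = 1 - (1 - p)^n >= r."""
    n = 1
    while 1 - (1 - p) ** n < r:
        n += 1
    return n

p = 1 / 6
print([geometric_quantile(r, p) for r in (0.25, 0.5, 0.75)])   # quartiles: [2, 4, 8]
for r in [0.1, 0.25, 0.5, 0.75, 0.9, 0.99]:
    assert geometric_quantile(r, p) == geometric_quantile_bruteforce(r, p)

# Inverse transform: F^{-1}(U) with U uniform on (0, 1) is geometric with parameter p.
sample = [geometric_quantile(random.random(), p) for _ in range(100_000)]
print(sum(sample) / len(sample))   # close to the mean 1/p = 6
```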
Open the special distribution calculator, and select the geometric distribution and CDF view. Vary $p$ and note the shape and location of the CDF/quantile function. For various values of $p$, compute the median and the first and third quartiles.
The Constant Rate Property
Suppose that $T$ is a random variable taking values in $\N_+$, which we interpret as the first time that some event of interest occurs.
The function $h$ given by $h(n) = \P(T = n \mid T \ge n) = \frac{\P(T = n)}{\P(T \ge n)}, \quad n \in \N_+$ is the rate function of $T$.
If $T$ is interpreted as the (discrete) lifetime of a device, then $h$ is a discrete version of the failure rate function studied in reliability theory. However, in our usual formulation of Bernoulli trials, the event of interest is success rather than failure (or death), so we will simply use the term rate function to avoid confusion. The constant rate property characterizes the geometric distribution. As usual, let $N$ denote the trial number of the first success in a sequence of Bernoulli trials with success parameter $p \in (0, 1)$, so that $N$ has the geometric distribution on $\N_+$ with parameter $p$.
$N$ has constant rate $p$.
Proof
From the results above, $\P(N = n) = p (1 - p)^{n-1}$ and $\P(N \ge n) = \P(N \gt n - 1) = (1 - p)^{n-1}$, so $\P(N = n) \big/ \P(N \ge n) = p$ for $n \in \N_+$.
Conversely, if $T$ has constant rate $p \in (0, 1)$ then $T$ has the geometric distribution on $\N_+$ with success parameter $p$.
Proof
Let $H(n) = \P(T \ge n)$ for $n \in \N_+$. From the constant rate property, $\P(T = n) = p \, H(n)$ for $n \in \N_+$. Next note that $\P(T = n) = H(n) - H(n + 1)$ for $n \in \N_+$. Thus, $H$ satisfies the recurrence relation $H(n + 1) = (1 - p) \, H(n)$ for $n \in \N_+$. Also $H$ satisfies the initial condition $H(1) = 1$. Solving the recurrence relation gives $H(n) = (1 - p)^{n-1}$ for $n \in \N_+$.
Relation to the Uniform Distribution
Suppose again that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. For $n \in \N_+$, recall that $Y_n = \sum_{i=1}^n X_i$, the number of successes in the first $n$ trials, has the binomial distribution with parameters $n$ and $p$. As before, $N$ denotes the trial number of the first success.
Suppose that $n \in \N_+$. The conditional distribution of $N$ given $Y_n = 1$ is uniform on $\{1, 2, \ldots, n\}$.
Proof from sampling
We showed in the last section that given $Y_n = k$, the trial numbers of the successes form a random sample of size $k$ chosen without replacement from $\{1, 2, \ldots, n\}$. This result is a simple corollary with $k = 1$.
Direct proof
For $j \in \{1, 2, \ldots, n\}$ $\P(N = j \mid Y_n = 1) = \frac{\P(N = j, Y_n = 1)}{\P(Y_n = 1)} = \frac{\P\left(Y_{j-1} = 0, X_j = 1, Y_{n} - Y_j = 0\right)}{\P(Y_n = 1)}$ In words, the events in the numerator of the last fraction are that there are no successes in the first $j - 1$ trials, a success on trial $j$, and no successes in trials $j + 1$ to $n$. These events are independent so $\P(N = j \mid Y_n = 1) = \frac{(1 - p)^{j-1} p (1 - p)^{n-j}}{n p (1 - p)^{n - 1}} = \frac{1}{n}$
Note that the conditional distribution does not depend on the success parameter $p$. If we know that there is exactly one success in the first $n$ trials, then the trial number of that success is equally likely to be any of the $n$ possibilities.
Another connection between the geometric distribution and the uniform distribution is given below in the alternating coin tossing game: the conditional distribution of $N$ given $N \le n$ converges to the uniform distribution on $\{1, 2, \ldots, n\}$ as $p \downarrow 0$.
Relation to the Exponential Distribution
The Poisson process on $[0, \infty)$, named for Simeon Poisson, is a model for random points in continuous time. There are many deep and interesting connections between the Bernoulli trials process (which can be thought of as a model for random points in discrete time) and the Poisson process. These connections are explored in detail in the chapter on the Poisson process. In this section we just give the most famous and important result—the convergence of the geometric distribution to the exponential distribution. The geometric distribution, as we know, governs the time of the first random point in the Bernoulli trials process, while the exponential distribution governs the time of the first random point in the Poisson process.
For reference, the exponential distribution with rate parameter $r \in (0, \infty)$ has distribution function $F(x) = 1 - e^{-r x}$ for $x \in [0, \infty)$. The mean of the exponential distribution is $1 / r$ and the variance is $1 / r^2$. In addition, the moment generating function is $s \mapsto \frac{r}{r - s}$ for $s \lt r$.
For $n \in \N_+$, suppose that $U_n$ has the geometric distribution on $\N_+$ with success parameter $p_n \in (0, 1)$, where $n p_n \to r \gt 0$ as $n \to \infty$. Then the distribution of $U_n / n$ converges to the exponential distribution with parameter $r$ as $n \to \infty$.
Proof
Let $F_n$ denote the CDF of $U_n / n$. Then for $x \in [0, \infty)$ $F_n(x) = \P\left(\frac{U_n}{n} \le x\right) = \P(U_n \le n x) = \P\left(U_n \le \lfloor n x \rfloor\right) = 1 - \left(1 - p_n\right)^{\lfloor n x \rfloor}$ But by a famous limit from calculus, $\left(1 - p_n\right)^n = \left(1 - \frac{n p_n}{n}\right)^n \to e^{-r}$ as $n \to \infty$, and hence $\left(1 - p_n\right)^{n x} \to e^{-r x}$ as $n \to \infty$. But by definition, $\lfloor n x \rfloor \le n x \lt \lfloor n x \rfloor + 1$ or equivalently, $n x - 1 \lt \lfloor n x \rfloor \le n x$ so it follows that $\left(1 - p_n \right)^{\lfloor n x \rfloor} \to e^{- r x}$ as $n \to \infty$. Hence $F_n(x) \to 1 - e^{-r x}$ as $n \to \infty$, which is the CDF of the exponential distribution.
Note that the condition $n p_n \to r$ as $n \to \infty$ is the same condition required for the convergence of the binomial distribution to the Poisson that we studied in the last section.
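The convergence is also easy to see numerically. The Python sketch below (ours, for illustration) fixes $r = 2$ and $x = 0.75$, sets $p_n = r / n$, and compares the CDF of $U_n / n$ at $x$ with the exponential CDF $1 - e^{-r x}$.

```python
import math

def geom_scaled_cdf(x, n, p):
    """CDF of U_n / n at x, where U_n is geometric on N_+ with parameter p."""
    return 1 - (1 - p) ** math.floor(n * x)

r, x = 2.0, 0.75
exact = 1 - math.exp(-r * x)            # exponential CDF with rate r
for n in [10, 100, 1000, 10000]:
    p_n = r / n                         # so that n * p_n -> r
    print(n, round(geom_scaled_cdf(x, n, p_n), 5), round(exact, 5))
```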
Special Families
The geometric distribution on $\N$ is an infinitely divisible distribution and is a compound Poisson distribution. For the details, visit these individual sections and see the next section on the negative binomial distribution.
Examples and Applications
Simple Exercises
A standard, fair die is thrown until an ace occurs. Let $N$ denote the number of throws. Find each of the following:
1. The probability density function of $N$.
2. The mean of $N$.
3. The variance of $N$.
4. The probability that the die will have to be thrown at least 5 times.
5. The quantile function of $N$.
6. The median and the first and third quartiles.
Answer
1. $\P(N = n) = \left(\frac{5}{6}\right)^{n-1} \frac{1}{6}$ for $n \in \N_+$
2. $\E(N) = 6$
3. $\var(N) = 30$
4. $\P(N \ge 5) = 625 / 1296$
5. $F^{-1}(r) = \lceil \ln(1 - r) / \ln(5 / 6)\rceil$ for $r \in (0, 1)$
6. Quartiles $q_1 = 2$, $q_2 = 4$, $q_3 = 8$
A type of missile has failure probability 0.02. Let $N$ denote the number of launches before the first failure. Find each of the following:
1. The probability density function of $N$.
2. The mean of $N$.
3. The variance of $N$.
4. The probability of 20 consecutive successful launches.
5. The quantile function of $N$.
6. The median and the first and third quartiles.
Answer
1. $\P(N = n) = \left(\frac{49}{50}\right)^{n-1} \frac{1}{50}$ for $n \in \N_+$
2. $\E(N) = 50$
3. $\var(N) = 2450$
4. $\P(N \gt 20) = 0.6676$
5. $F^{-1}(r) = \lceil \ln(1 - r) / \ln(0.98)\rceil$ for $r \in (0, 1)$
6. Quartiles $q_1 = 15$, $q_2 = 35$, $q_3 = 69$
A student takes a multiple choice test with 10 questions, each with 5 choices (only one correct). The student blindly guesses and gets one question correct. Find the probability that the correct question was one of the first 4.
Answer
0.4
Recall that an American roulette wheel has 38 slots: 18 are red, 18 are black, and 2 are green. Suppose that you observe red or green on 10 consecutive spins. Give the conditional distribution of the number of additional spins needed for black to occur.
Answer
Geometric with $p = \frac{18}{38}$
The game of roulette is studied in more detail in the chapter on Games of Chance.
In the negative binomial experiment, set $k = 1$ to get the geometric distribution and set $p = 0.3$. Run the experiment 1000 times. Compute the appropriate relative frequencies and empirically investigate the memoryless property $\P(V \gt 5 \mid V \gt 2) = \P(V \gt 3)$
The Petersburg Problem
We will now explore a gambling situation, known as the Petersburg problem, which leads to some famous and surprising results. Suppose that we are betting on a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. We can bet any amount of money on a trial at even stakes: if the trial results in success, we receive that amount, and if the trial results in failure, we must pay that amount. We will use the following strategy, known as a martingale strategy:
1. We bet $c$ units on the first trial.
2. Whenever we lose a trial, we double the bet for the next trial.
3. We stop as soon as we win a trial.
Let $N$ denote the number of trials played, so that $N$ has the geometric distribution with parameter $p$, and let $W$ denote our net winnings when we stop.
$W = c$
Proof
The first win occurs on trial $N$, so the initial bet was doubled $N - 1$ times. The net winnings are $W = -c \sum_{i=0}^{N-2} 2^i + c 2^{N-1} = c\left(1 - 2^{N-1} + 2^{N-1}\right) = c$
Thus, $W$ is not random and $W$ is independent of $p$! Since $c$ is an arbitrary constant, it would appear that we have an ideal strategy. However, let us study the amount of money $Z$ needed to play the strategy.
$Z = c (2^N - 1)$
The expected amount of money needed for the martingale strategy is $\E(Z) = \begin{cases} \frac{c}{2 p - 1}, & p \gt \frac{1}{2} \ \infty, & p \le \frac{1}{2} \end{cases}$
Thus, the strategy is fatally flawed when the trials are unfavorable and even when they are fair, since we need infinite expected capital to make the strategy work in these cases.
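The following Python sketch (ours, not part of the text) simulates the strategy directly: in every run the net winnings equal $c$, while the average capital required is close to $c / (2p - 1)$ when $p \gt \frac{1}{2}$ and is erratic and very large when $p \le \frac{1}{2}$.

```python
import random
import statistics

def play_martingale(p, c=1):
    """Play the doubling strategy once; return (net winnings, capital needed)."""
    bet, spent = c, 0
    while True:
        spent += bet
        if random.random() < p:           # success: collect the current bet
            return bet - (spent - bet), spent
        bet *= 2                          # failure: double the bet

random.seed(1)
for p in [0.55, 0.5, 0.2]:
    results = [play_martingale(p) for _ in range(10_000)]
    winnings = {w for w, _ in results}            # always {1}: W = c
    avg_capital = statistics.mean(z for _, z in results)
    print(p, winnings, round(avg_capital, 1))     # heavy-tailed for p <= 1/2
```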
Compute $\E(Z)$ explicitly if $c = 100$ and $p = 0.55$.
Answer
\$1000
In the negative binomial experiment, set $k = 1$. For each of the following values of $p$, run the experiment 100 times. For each run compute $Z$ (with $c = 1$). Find the average value of $Z$ over the 100 runs:
1. $p = 0.2$
2. $p = 0.5$
3. $p = 0.8$
For more information about gambling strategies, see the section on Red and Black. Martingales are studied in detail in a separate chapter.
The Alternating Coin-Tossing Game
A coin has probability of heads $p \in (0, 1]$. There are $n$ players who take turns tossing the coin in round-robin style: player 1 first, then player 2, continuing until player $n$, then player 1 again, and so forth. The first player to toss heads wins the game.
Let $N$ denote the number of the first toss that results in heads. Of course, $N$ has the geometric distribution on $\N_+$ with parameter $p$. Additionally, let $W$ denote the winner of the game; $W$ takes values in the set $\{1, 2, \ldots, n\}$. We are interested in the probability distribution of $W$.
For $i \in \{1, 2, \ldots, n\}$, $W = i$ if and only if $N = i + k n$ for some $k \in \N$. That is, using modular arithmetic, $W = [(N - 1) \mod n] + 1$
The winning player $W$ has probability density function $\P(W = i) = \frac{p (1 - p)^{i-1}}{1 - (1 - p)^n}, \quad i \in \{1, 2, \ldots, n\}$
Proof
This follows from the previous exercise and the geometric distribution of $N$.
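As an illustration (a Python sketch of ours, not part of the formal development), simulated winner frequencies can be compared with this density; with $n = 4$ and $p = 0.3$ the agreement is already quite good.

```python
import random
from collections import Counter

def play_game(n, p):
    """Toss until the first head; return the winning player in {1, ..., n}."""
    toss = 1
    while random.random() >= p:
        toss += 1
    return (toss - 1) % n + 1

n, p, runs = 4, 0.3, 200_000
random.seed(2)
counts = Counter(play_game(n, p) for _ in range(runs))
for i in range(1, n + 1):
    exact = p * (1 - p) ** (i - 1) / (1 - (1 - p) ** n)
    print(i, round(counts[i] / runs, 4), round(exact, 4))
```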
$\P(W = i) = (1 - p)^{i-1} \P(W = 1)$ for $i \in \{1, 2, \ldots, n\}$.
Proof
This result can be argued directly, using the memoryless property of the geometric distribution. In order for player $i$ to win, the previous $i - 1$ players must first all toss tails. Then, player $i$ effectively becomes the first player in a new sequence of tosses. This result can be used to give another derivation of the probability density function in the previous exercise.
Note that $\P(W = i)$ is a decreasing function of $i \in \{1, 2, \ldots, n\}$. Not surprisingly, the lower the toss order the better for the player.
Explicitly compute the probability density function of $W$ when the coin is fair ($p = 1 / 2$).
Answer
$\P(W = i) = 2^{n-i} \big/ (2^n - 1), \quad i \in \{1, 2, \ldots, n\}$
Note from the result above that $W$ itself has a truncated geometric distribution.
The distribution of $W$ is the same as the conditional distribution of $N$ given $N \le n$: $\P(W = i) = \P(N = i \mid N \le n), \quad i \in \{1, 2, \ldots, n\}$
The following problems explore some limiting distributions related to the alternating coin-tossing game.
For fixed $p \in (0, 1]$, the distribution of $W$ converges to the geometric distribution with parameter $p$ as $n \uparrow \infty$.
For fixed $n$, the distribution of $W$ converges to the uniform distribution on $\{1, 2, \ldots, n\}$ as $p \downarrow 0$.
Players at the end of the tossing order should hope for a coin biased towards tails.
Odd Man Out
In the game of odd man out, we start with a specified number of players, each with a coin that has the same probability of heads. The players toss their coins at the same time. If there is an odd man, that is, a player with an outcome different from all of the other players, then the odd player is eliminated; otherwise no player is eliminated. In any event, the remaining players continue the game in the same manner. A slight technical problem arises with just two players, since different outcomes would make both players odd. So in this case, we might (arbitrarily) make the player with tails the odd man.
Suppose there are $k \in \{2, 3, \ldots\}$ players and $p \in [0, 1]$. In a single round, the probability of an odd man is $r_k(p) = \begin{cases} 2 p (1 - p), & k = 2 \ k p (1 - p)^{k-1} + k p^{k-1} (1 - p), & k \in \{3, 4, \ldots\} \end{cases}$
Proof
Let $Y$ denote the number of heads. If $k = 2$, the event that there is an odd man is $\{Y = 1\}$. If $k \ge 3$, the event that there is an odd man is $\{Y \in \{1, k - 1\}\}$. The result now follows since $Y$ has a binomial distribution with parameters $k$ and $p$.
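The formula is easy to check by simulation. The sketch below (Python, ours, illustrative only) compares the empirical frequency of an odd man in a single round with $r_k(p)$ for $k = 5$ and $p = 0.4$.

```python
import random

def odd_man_prob(k, p):
    """r_k(p): probability of an odd man in a single round with k players."""
    if k == 2:
        return 2 * p * (1 - p)
    return k * p * (1 - p) ** (k - 1) + k * p ** (k - 1) * (1 - p)

def simulate_round(k, p):
    """One round of odd man out; True if a player is eliminated."""
    heads = sum(random.random() < p for _ in range(k))
    if k == 2:
        return heads == 1
    return heads in (1, k - 1)

random.seed(3)
k, p, runs = 5, 0.4, 100_000
freq = sum(simulate_round(k, p) for _ in range(runs)) / runs
print(round(freq, 4), round(odd_man_prob(k, p), 4))
```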
The graph of $r_k$ is more interesting than you might think.
For $k \in \{2, 3, \ldots\}$, $r_k$ has the following properties:
1. $r_k(0) = r_k(1) = 0$
2. $r_k$ is symmetric about $p = \frac{1}{2}$
3. For fixed $p \in [0, 1]$, $r_k(p) \to 0$ as $k \to \infty$.
Proof
These properties are clear from the functional form of $r_k(p)$. Note that $r_k(p) = r_k(1 - p)$.
For $k \in \{2, 3, 4\}$, $r_k$ has the following properties:
1. $r_k$ increases and then decreases, with maximum at $p = \frac{1}{2}$.
2. $r_k$ is concave downward
Proof
This follows by computing the first derivatives: $r_2^\prime(p) = 2 (1 - 2 p)$, $r_3^\prime(p) = 3 (1 - 2 p)$, $r_4^\prime(p) = 4 (1 - 2 p)^3$, and the second derivatives: $r_2^{\prime\prime}(p) = -4$, $r_3^{\prime\prime}(p) = - 6$, $r_4^{\prime\prime}(p) = -24 (1 - 2 p)^2$.
For $k \in \{5, 6, \ldots\}$, $r_k$ has the following properties:
1. The maximum occurs at two points of the form $p_k$ and $1 - p_k$ where $p_k \in \left(0, \frac{1}{2}\right)$ and $p_k \to 0$ as $k \to \infty$.
2. The maximum value $r_k(p_k) \to 1/e \approx 0.3679$ as $k \to \infty$.
3. The graph has a local minimum at $p = \frac{1}{2}$.
Proof sketch
Note that $r_k(p) = s_k(p) + s_k(1 - p)$ where $s_k(t) = k t^{k-1}(1 - t)$ for $t \in [0, 1]$. Also, $p \mapsto s_k(p)$ is the dominant term when $p \gt \frac{1}{2}$ while $p \mapsto s_k(1 - p)$ is the dominant term when $p \lt \frac{1}{2}$. A simple analysis of the derivative shows that $s_k$ increases and then decreases, reaching its maximum at $(k - 1) / k$. Moreover, the maximum value is $s_k\left[(k - 1) / k\right] = (1 - 1 / k)^{k-1} \to e^{-1}$ as $k \to \infty$. Also, $s_k$ is concave upward and then downward, with inflection point at $(k - 2) / k$.
Suppose $p \in (0, 1)$, and let $N_k$ denote the number of rounds until an odd man is eliminated, starting with $k$ players. Then $N_k$ has the geometric distribution on $\N_+$ with parameter $r_k(p)$. The mean and variance are
1. $\mu_k(p) = 1 \big/ r_k(p)$
2. $\sigma_k^2(p) = \left[1 - r_k(p)\right] \big/ r_k^2(p)$
As we might expect, $\mu_k(p) \to \infty$ and $\sigma_k^2(p) \to \infty$ as $k \to \infty$ for fixed $p \in (0, 1)$. On the other hand, from the result above, $\mu_k(p_k) \to e$ and $\sigma_k^2(p_k) \to e^2 - e$ as $k \to \infty$.
Suppose we start with $k \in \{2, 3, \ldots\}$ players and $p \in (0, 1)$. The number of rounds until a single player remains is $M_k = \sum_{j = 2}^k N_j$ where $(N_2, N_3, \ldots, N_k)$ are independent and $N_j$ has the geometric distribution on $\N_+$ with parameter $r_j(p)$. The mean and variance are
1. $\E(M_k) = \sum_{j=2}^k 1 \big/ r_j(p)$
2. $\var(M_k) = \sum_{j=2}^k \left[1 - r_j(p)\right] \big/ r_j^2(p)$
Proof
The form of $M_k$ follows from the previous result: $N_k$ is the number of rounds until the first player is eliminated. Then the game continues independently with $k - 1$ players, so $N_{k-1}$ is the number of additional rounds until the second player is eliminated, and so forth. Parts (a) and (b) follow from the previous result and standard properties of expected value and variance.
Starting with $k$ players and probability of heads $p \in (0, 1)$, the total number of coin tosses is $T_k = \sum_{j=2}^k j N_j$. The mean and variance are
1. $\E(T_k) = \sum_{j=2}^k j \big/ r_j(p)$
2. $\var(T_k) = \sum_{j=2}^k j^2 \left[1 - r_j(p)\right] \big/ r_j^2(p)$
Proof
As before, the form of $T_k$ follows from the result above: $N_k$ is the number of rounds until the first player is eliminated, and each of these rounds has $k$ tosses. Then the game continues independently with $k - 1$ players, so $N_{k-1}$ is the number of additional rounds until the second player is eliminated, with each round having $k - 1$ tosses, and so forth. Parts (a) and (b) also follow from the result above and standard properties of expected value and variance.
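The expected number of rounds and the expected number of tosses are easy to tabulate from these formulas. The following Python sketch (ours) does so for a fair coin and a few values of $k$.

```python
def r(k, p):
    """Probability of an odd man in one round with k players."""
    if k == 2:
        return 2 * p * (1 - p)
    return k * p * (1 - p) ** (k - 1) + k * p ** (k - 1) * (1 - p)

def expected_rounds(k, p):
    """E(M_k): expected rounds until a single player remains."""
    return sum(1 / r(j, p) for j in range(2, k + 1))

def expected_tosses(k, p):
    """E(T_k): expected total coin tosses until a single player remains."""
    return sum(j / r(j, p) for j in range(2, k + 1))

for k in [3, 5, 10]:
    print(k, round(expected_rounds(k, 0.5), 3), round(expected_tosses(k, 0.5), 3))
```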
Number of Trials Before a Pattern
Consider again a sequence of Bernoulli trials $\bs X = (X_1, X_2, \ldots)$ with success parameter $p \in (0, 1)$. Recall that the number of trials $M$ before the first success (outcome 1) occurs has the geometric distribution on $\N$ with parameter $p$. A natural generalization is the random variable that gives the number of trials before a specific finite sequence of outcomes occurs for the first time. (Such a sequence is sometimes referred to as a word from the alphabet $\{0, 1\}$ or simply a bit string). In general, finding the distribution of this variable is a difficult problem, with the difficulty depending very much on the nature of the word. The problem of finding just the expected number of trials before a word occurs can be solved using powerful tools from the theory of renewal processes and from the theory of martingales.
To set up the notation, let $\bs x$ denote a finite bit string and let $M_{\bs x}$ denote the number of trials before $\bs x$ occurs for the first time. Finally, let $q = 1 - p$. Note that $M_{\bs x}$ takes values in $\N$. In the following exercises, we will consider $\bs x = 10$, a success followed by a failure. As always, try to derive the results yourself before looking at the proofs.
The probability density function $f_{10}$ of $M_{10}$ is given as follows:
1. If $p \ne \frac{1}{2}$ then $f_{10}(n) = p q \frac{p^{n+1} - q^{n+1}}{p - q}, \quad n \in \N$
2. If $p = \frac{1}{2}$ then $f_{10}(n) = (n + 1) \left(\frac{1}{2}\right)^{n+2}$ for $n \in \N$.
Proof
For $n \in \N$, the event $\{M_{10} = n\}$ can only occur if there is an initial string of 0s of length $k \in \{0, 1, \ldots, n\}$ followed by a string of 1s of length $n - k$ and then 1 on trial $n + 1$ and 0 on trial $n + 2$. Hence $f_{10}(n) = \P(M_{10} = n) = \sum_{k=0}^n q^k p^{n - k} p q, \quad n \in \N$ The stated result then follows from standard results on geometric series.
It's interesting to note that $f$ is symmetric in $p$ and $q$, that is, symmetric about $p = \frac{1}{2}$. It follows that the distribution function, probability generating function, expected value, and variance, which we consider below, are all also symmetric about $p = \frac{1}{2}$. It's also interesting to note that $f_{10}(0) = f_{10}(1) = p q$, and this is the largest value. So regardless of $p \in (0, 1)$ the distribution is bimodal with modes 0 and 1.
The distribution function $F_{10}$ of $M_{10}$ is given as follows:
1. If $p \ne \frac{1}{2}$ then $F_{10}(n) = 1 - \frac{p^{n+3} - q^{n+3}}{p - q}, \quad n \in \N$
2. If $p = \frac{1}{2}$ then $F_{10}(n) = 1 - (n + 3) \left(\frac{1}{2}\right)^{n+2}$ for $n \in \N$.
Proof
By definition, $F_{10}(n) = \sum_{k=0}^n f_{10}(k)$ for $n \in \N$. The stated result then follows from the previous theorem, standard results on geometric series, and some algebra.
The probability generating function $P_{10}$ of $M_{10}$ is given as follows:
1. If $p \ne \frac{1}{2}$ then $P_{10}(t) = \frac{p q}{p - q} \left(\frac{p}{1 - t p} - \frac{q}{1 - t q}\right), \quad |t| \lt \min \{1 / p, 1 / q\}$
2. If $p = \frac{1}{2}$ then $P_{10}(t) = 1 / (t - 2)^2$ for $|t| \lt 2$
Proof
By definition, $P_{10}(t) = \E\left(t^{M_{10}}\right) = \sum_{n=0}^\infty f_{10}(n) t^n$ for all $t \in \R$ for which the series converges absolutely. The stated result then follows from the theorem above, and once again, standard results on geometric series.
The mean of $M_{10}$ is given as follows:
1. If $p \ne \frac{1}{2}$ then $\E(M_{10}) = \frac{p^4 - q^4}{p q (p - q)}$
2. If $p = \frac{1}{2}$ then $\E(M_{10}) = 2$.
Proof
Recall that $\E(M_{10}) = P^\prime_{10}(1)$ so the stated result follows from calculus, using the previous theorem on the probability generating function. The mean can also be computed from the definition $\E(M_{10}) = \sum_{n=0}^\infty n f_{10}(n)$ using standard results from geometric series, but this method is more tedious.
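As a numerical sanity check (a Python sketch of ours, not part of the text), the closed form for $\E(M_{10})$ can be compared with a truncated version of the defining sum $\sum_n n \, f_{10}(n)$.

```python
def f10(n, p):
    """PDF of M_10, the number of trials before the pattern 10 first occurs."""
    q = 1 - p
    if p == 0.5:
        return (n + 1) * 0.5 ** (n + 2)
    return p * q * (p ** (n + 1) - q ** (n + 1)) / (p - q)

def mean_formula(p):
    """Closed form for E(M_10)."""
    q = 1 - p
    if p == 0.5:
        return 2.0
    return (p ** 4 - q ** 4) / (p * q * (p - q))

for p in [0.2, 0.5, 0.8]:
    direct = sum(n * f10(n, p) for n in range(2000))   # truncated series
    print(p, round(direct, 6), round(mean_formula(p), 6))
```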
The graph of $\E(M_{10})$ as a function of $p \in (0, 1)$ is given below. It's not surprising that $\E(M_{10}) \to \infty$ as $p \downarrow 0$ and as $p \uparrow 1$, and that the minimum value occurs when $p = \frac{1}{2}$.
The variance of $M_{10}$ is given as follows:
1. If $p \ne \frac{1}{2}$ then $\var(M_{10}) = \frac{2}{p^2 q^2} \left(\frac{p^6 - q^6}{p - q}\right) + \frac{1}{p q} \left(\frac{p^4 - q^4}{p - q}\right) - \frac{1}{p^2 q^2}\left(\frac{p^4 - q^4}{p - q}\right)^2$
2. If $p = \frac{1}{2}$ then $\var(M_{10}) = 4$.
Proof
Recall that $P^{\prime \prime}_{10}(1) = \E[M_{10}(M_{10} - 1)]$, the second factorial moment, and so $\var(M_{10}) = P^{\prime \prime}_{10}(1) + P^\prime_{10}(1) - [P^\prime_{10}(1)]^2$ The stated result then follows from calculus and the theorem above giving the probability generating function.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Suppose again that our random experiment is to perform a sequence of Bernoulli trials $\bs{X} = (X_1, X_2, \ldots)$ with success parameter $p \in (0, 1]$. Recall that the number of successes in the first $n$ trials $Y_n = \sum_{i=1}^n X_i$ has the binomial distribution with parameters $n$ and $p$. In this section we will study the random variable that gives the trial number of the $k$th success: $V_k = \min\left\{n \in \N_+: Y_n = k\right\}$ Note that $V_1$ is the number of trials needed to get the first success, which we now know has the geometric distribution on $\N_+$ with parameter $p$.
The Probability Density Function
The probability distribution of $V_k$ is given by $\P(V_k = n) = \binom{n - 1}{k - 1} p^k (1 - p)^{n - k}, \quad n \in \{k, k + 1, k + 2, \ldots\}$
Proof
Note that $V_k = n$ if and only if $X_n = 1$ and $Y_{n-1} = k - 1$. Hence, from independence and the binomial distribution, $\P(V_k = n) = \P(Y_{n-1} = k - 1) \P(X_n = 1) = \binom{n - 1}{k - 1} p^{k - 1}(1 - p)^{(n - 1) - (k - 1)} \, p = \binom{n - 1}{k - 1} p^k (1 - p)^{n - k}$
The distribution defined by the density function in (1) is known as the negative binomial distribution; it has two parameters, the stopping parameter $k$ and the success probability $p$.
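Readers who like to experiment can compare the density in (1) with simulated relative frequencies. The Python sketch below (ours, illustrative only) generates $V_k$ directly from Bernoulli trials with $k = 3$ and $p = 0.4$.

```python
import random
from math import comb
from collections import Counter

def trial_of_kth_success(k, p):
    """Run Bernoulli trials until the k-th success; return the trial number."""
    successes, trial = 0, 0
    while successes < k:
        trial += 1
        if random.random() < p:
            successes += 1
    return trial

def neg_binom_pdf(n, k, p):
    """P(V_k = n) = C(n-1, k-1) p^k (1-p)^(n-k) for n >= k."""
    return comb(n - 1, k - 1) * p ** k * (1 - p) ** (n - k)

random.seed(4)
k, p, runs = 3, 0.4, 200_000
counts = Counter(trial_of_kth_success(k, p) for _ in range(runs))
for n in range(k, k + 8):
    print(n, round(counts[n] / runs, 4), round(neg_binom_pdf(n, k, p), 4))
```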
In the negative binomial experiment, vary $k$ and $p$ with the scroll bars and note the shape of the density function. For selected values of $k$ and $p$, run the experiment 1000 times and compare the relative frequency function to the probability density function.
The binomial and negative binomial sequences are inverse to each other in a certain sense.
For $n \in \N_+$ and $k \in \{0, 1, \ldots, n\}$,
1. $Y_n \ge k \iff V_k \le n$ and hence $\P(Y_n \ge k) = \P(V_k \le n)$
2. $k \P\left(Y_n = k\right) = n \P\left(V_k = n\right)$
Proof
1. The events $\left\{Y_n \ge k\right\}$ and $\left\{V_k \le n\right\}$ both mean that there are at least $k$ successes in the first $n$ Bernoulli trials.
2. From the formulas for the binomial and negative binomial PDFs, $k \P(Y_n = k)$ and $n \P(V_k = n)$ both simplify to $\frac{n!}{(k - 1)! (n - k)!} p^k (1 - p)^{n - k}$.
In particular, it follows from part (a) that any event that can be expressed in terms of the negative binomial variables can also be expressed in terms of the binomial variables.
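Both identities in the result above are easy to verify numerically. The Python sketch below (ours) checks them for $n = 12$, $p = 0.35$, and every $k \in \{1, \ldots, n\}$.

```python
from math import comb, isclose

def binom_pdf(j, n, p):
    """P(Y_n = j)."""
    return comb(n, j) * p ** j * (1 - p) ** (n - j)

def neg_binom_pdf(m, k, p):
    """P(V_k = m)."""
    return comb(m - 1, k - 1) * p ** k * (1 - p) ** (m - k)

n, p = 12, 0.35
for k in range(1, n + 1):
    lhs = sum(binom_pdf(j, n, p) for j in range(k, n + 1))        # P(Y_n >= k)
    rhs = sum(neg_binom_pdf(m, k, p) for m in range(k, n + 1))    # P(V_k <= n)
    assert isclose(lhs, rhs)
    assert isclose(k * binom_pdf(k, n, p), n * neg_binom_pdf(n, k, p))
print("both identities check out for n =", n)
```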
The negative binomial distribution is unimodal. Let $t = 1 + \frac{k - 1}{p}$. Then
1. $\P(V_k = n) \gt \P(V_k = n - 1)$ if and only if $n \lt t$.
2. The probability density function at first increases and then decreases, reaching its maximum value at $\lfloor t \rfloor$.
3. There is a single mode at $\lfloor t \rfloor$ if $t$ is not an integer, and two consecutive modes at $t - 1$ and $t$ if $t$ is an integer.
Times Between Successes
Next we will define the random variables that give the number of trials between successive successes. Let $U_1 = V_1$ and $U_k = V_k - V_{k-1}$ for $k \in \{2, 3, \ldots\}$.
$\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, each having the geometric distribution on $\N_+$ with parameter $p$. Moreover, $V_k = \sum_{i=1}^k U_i$
In statistical terms, $\bs{U}$ corresponds to sampling from the geometric distribution with parameter $p$, so that for each $k$, $(U_1, U_2, \ldots, U_k)$ is a random sample of size $k$ from this distribution. The sample mean corresponding to this sample is $V_k / k$; this random variable gives the average number of trials between the first $k$ successes. In probability terms, the sequence of negative binomial variables $\bs{V}$ is the partial sum process corresponding to the sequence $\bs{U}$. Partial sum processes are studied in more generality in the chapter on Random Samples.
The random process $\bs{V} = (V_1, V_2, \ldots)$ has stationary, independent increments:
1. If $j \lt k$ then $V_k - V_j$ has the same distribution as $V_{k - j}$, namely negative binomial with parameters $k - j$ and $p$.
2. If $k_1 \lt k_2 \lt k_3 \lt \cdots$ then $(V_{k_1}, V_{k_2} - V_{k_1}, V_{k_3} - V_{k_2}, \ldots)$ is a sequence of independent random variables.
Actually, any partial sum process corresponding to an independent, identically distributed sequence will have stationary, independent increments.
Basic Properties
The mean, variance and probability generating function of $V_k$ can be computed in several ways. The method using the representation as a sum of independent, identically distributed geometrically distributed variables is the easiest.
$V_k$ has probability generating function $P$ given by $P(t) = \left( \frac{p \, t}{1 - (1 - p) t} \right)^k, \quad \left|t\right| \lt \frac{1}{1 - p}$
Proof
Recall that the probability generating function of a sum of independent variables is the product of the probability generating functions of the variables. Recall also, the probability generating function of the geometric distribution with parameter $p$ is $t \mapsto p \, t \big/ \left[1 - (1 - p) t\right]$. Thus, the result follows immediately from the sum representation above. A derivation can also be given directly from the probability density function.
The mean and variance of $V_k$ are
1. $\E(V_k) = k \frac{1}{p}$.
2. $\var(V_k) = k \frac{1 - p}{p^2}$
Proof
The geometric distribution with parameter $p$ has mean $1 / p$ and variance $(1 - p) \big/ p^2$, so the results follows immediately from the sum representation above. Recall that the mean of a sum is the sum of the means, and the variance of the sum of independent variables is the sum of the variances. These results can also be proven directly from the probability density function or from the probability generating function.
In the negative binomial experiment, vary $k$ and $p$ with the scroll bars and note the location and size of the mean/standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
Suppose that $V$ and $W$ are independent random variables for an experiment, and that $V$ has the negative binomial distribution with parameters $j$ and $p$, and $W$ has the negative binomial distribution with parameters $k$ and $p$. Then $V + W$ has the negative binomial distribution with parameters $k + j$ and $p$.
Proof
Once again, the simplest proof is based on the representation as a sum of independent geometric variables. In the context of the sum representation above, we can take $V = V_j$ and $W = V_{k+j} - V_j$, so that $V + W = V_{k + j}$. Another simple proof uses probability generating functions. Recall again that the PGF of the sum of independent variables is the product of the PGFs. Finally, a difficult proof can be constructed using probability density functions. Recall that the PDF of a sum of independent variables is the convolution of the PDFs.
Normal Approximation
In the negative binomial experiment, start with various values of $p$ and $k = 1$. Successively increase $k$ by 1, noting the shape of the probability density function each time.
Even though you are limited to $k = 5$ in the app, you can still see the characteristic bell shape. This is a consequence of the central limit theorem because the negative binomial variable can be written as a sum of $k$ independent, identically distributed (geometric) random variables.
The standard score of $V_k$ is $Z_k = \frac{p \, V_k - k}{\sqrt{k (1 - p)}}$ The distribution of $Z_k$ converges to the standard normal distribution as $k \to \infty$.
From a practical point of view, this result means that if $k$ is large, the distribution of $V_k$ is approximately normal with mean $k \frac{1}{p}$ and variance $k \frac{1 - p}{p^2}$. Just how large $k$ needs to be for the approximation to work well depends on $p$. Also, when using the normal approximation, we should remember to use the continuity correction, since the negative binomial is a discrete distribution.
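The following Python sketch (ours, not part of the text) carries out the approximation, with continuity correction, for $k = 5$, $p = \frac{1}{2}$, and the event $\{8 \le V_5 \le 15\}$ that appears in an exercise below; the exact probability is about 0.7142 and the normal approximation about 0.7445.

```python
from math import comb
from statistics import NormalDist

def neg_binom_pdf(n, k, p):
    """P(V_k = n)."""
    return comb(n - 1, k - 1) * p ** k * (1 - p) ** (n - k)

k, p, a, b = 5, 0.5, 8, 15
exact = sum(neg_binom_pdf(n, k, p) for n in range(a, b + 1))

mean = k / p
sd = (k * (1 - p)) ** 0.5 / p
normal = NormalDist(mean, sd)
approx = normal.cdf(b + 0.5) - normal.cdf(a - 0.5)   # continuity correction

print(round(exact, 4), round(approx, 4))   # about 0.7142 and 0.7445
```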
Relation to Order Statistics
Suppose that $n \in \N_+$ and $k \in \{1, 2, \ldots, n\}$, and let $L = \left\{ (n_1, n_2, \ldots, n_k) \in \{1, 2, \ldots, n\}^k: n_1 \lt n_2 \lt \cdots \lt n_k \right\}$. Then $\P\left(V_1 = n_1, V_2 = n_2, \ldots, V_k = n_k \mid Y_n = k\right) = \frac{1}{\binom{n}{k}}, \quad (n_1, n_2, \ldots, n_k) \in L$
Proof
$\P\left(V_1 = n_1, V_2 = n_2, \ldots, V_k = n_k \mid Y_n = k\right) = \frac{\P\left(V_1 = n_1, V_2 = n_2, \ldots, V_k = n_k, Y_n = k\right)}{\P(Y_n = k)} = \frac{p^k (1 - p)^{n - k}}{\binom{n}{k} p^k (1 - p)^{n - k}} = \frac{1}{\binom{n}{k}}$ Note that the event in the numerator of the first fraction means that in the first $n$ trials, successes occurred at trials $n_1, n_2, \ldots, n_k$ and failures occurred at all other trials.
Thus, given exactly $k$ successes in the first $n$ trials, the vector of success trial numbers is uniformly distributed on the set of possibilities $L$, regardless of the value of the success parameter $p$. Equivalently, the vector of success trial numbers is distributed as the vector of order statistics corresponding to a sample of size $k$ chosen at random and without replacement from $\{1, 2, \ldots, n\}$.
Suppose that $n \in \N_+$, $k \in \{1, 2, \ldots, n\}$, and $j \in \{1, 2, \ldots, k\}$. Then $\P\left(V_j = m \mid Y_n = k\right) = \frac{\binom{m - 1}{j - 1} \binom{n - m}{k - j}}{\binom{n}{k}}, \quad m \in \{j, j + 1, \ldots, n - k + j\}$
Proof
This follows immediately from the previous result and a theorem in the section on order statistics. However, a direct proof is also easy. Note that the event $\{V_j = m, Y_n = k\}$ means that there were $j - 1$ successes in the first $m - 1$ trials, a success on trial $m$ and $k - j$ success in trials $m + 1$ to $n$. Hence using the binomial distribution and independence, \begin{align} \P\left(V_j = m \mid Y_n = k\right) & = \frac{\P(V_j = m, Y_n = k)}{\P(Y_n = k)} = \frac{\binom{m - 1}{j - 1} p^{j - 1}(1 - p)^{(m - 1) - (j - 1)} p \binom{n - m}{k - j} p^{k - j}(1 - p)^{(n - m) - (k - j)}}{\binom{n}{k} p^k (1 - p)^{n - k}} \ & = \frac{\binom{m - 1}{j - 1} \binom{n - m}{k - j} p^k(1 - p)^{n - k}}{\binom{n}{k} p^k (1 - p)^{n - k}} = \frac{\binom{m - 1}{j - 1} \binom{n - m}{k - j}}{\binom{n}{k}}, \end{align}
Thus, given exactly $k$ successes in the first $n$ trials, the trial number of the $j$th success has the same distribution as the $j$th order statistic when a sample of size $k$ is selected at random and without replacement from the population $\{1, 2, \ldots, n\}$. Again, this result does not depend on the value of the success parameter $p$. The following theorem gives the mean and variance of the conditional distribution.
Suppose again that $n \in \N_+$, $k \in \{1, 2, \ldots, n\}$, and $j \in \{1, 2, \ldots, k\}$. Then
1. $\E\left(V_j \mid Y_n = k\right) = j \frac{n + 1}{k + 1}$
2. $\var\left(V_j \mid Y_n = k\right) = j (k - j + 1) \frac{(n + 1)(n - k)}{(k + 1)^2 (k + 2)}$
Proof
These moment results follow immediately from the previous theorem and a theorem in the section on order statistics. However, there is also a nice heuristic argument for (a) using indicator variables. Given $Y_n = k$, the $k$ successes divide the set of indices where the failures occur into $k + 1$ disjoint sets (some may be empty, of course, if there are adjacent successes).
Let $I_i$ take the value 1 if the $i$th failure occurs before the $j$th success, and 0 otherwise, for $i \in \{1, 2, \ldots, n - k\}$. Then given $Y_n = k$, $V_j = j + \sum_{i=1}^{n - k} I_i$ Given $Y_n = k$, we know that the $k$ successes and $n - k$ failures are randomly placed in $\{1, 2, \ldots, n\}$, with each possible configuration having the same probability. Thus, $\E(I_i \mid Y_n = k) = \P(I_i = 1 \mid Y_n = k) = \frac{j}{k + 1}, \quad i \in \{1, 2, \ldots n - k\}$ Hence $\E(V_j \mid Y_n = k) = j + (n - k) \frac{j}{k + 1} = j \frac{n + 1}{k + 1}$
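The conditional mean and variance can also be verified by brute force, summing against the conditional density from the previous theorem. Here is a Python sketch (ours, illustrative) for $n = 20$, $k = 6$, $j = 2$.

```python
from math import comb, isclose

def cond_pdf(m, j, k, n):
    """P(V_j = m | Y_n = k), the hypergeometric-type density above."""
    return comb(m - 1, j - 1) * comb(n - m, k - j) / comb(n, k)

n, k, j = 20, 6, 2
support = range(j, n - k + j + 1)
mean = sum(m * cond_pdf(m, j, k, n) for m in support)
var = sum(m * m * cond_pdf(m, j, k, n) for m in support) - mean ** 2

assert isclose(sum(cond_pdf(m, j, k, n) for m in support), 1.0)
assert isclose(mean, j * (n + 1) / (k + 1))
assert isclose(var, j * (k - j + 1) * (n + 1) * (n - k) / ((k + 1) ** 2 * (k + 2)))
print(mean, var)   # 6.0 and 7.5
```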
Examples and Applications
Coins, Dice and Other Gadgets
A standard, fair die is thrown until 3 aces occur. Let $V$ denote the number of throws. Find each of the following:
1. The probability density function of $V$.
2. The mean of $V$.
3. The variance of $V$.
4. The probability that at least 20 throws will needed.
Answer
1. $\P(V = n) = \binom{n - 1}{2} \left(\frac{1}{6}\right)^3 \left(\frac{5}{6}\right)^{n-3}, \quad n \in \{3, 4, \ldots\}$
2. $\E(V) = 18$
3. $\var(V) = 90$
4. $\P(V \ge 20) = 0.3643$
A coin is tossed repeatedly. The 10th head occurs on the 25th toss. Find each of the following:
1. The probability density function of the trial number of the 5th head.
2. The mean of the distribution in (a).
3. The variance of the distribution in (a).
Answer
1. $\P(V_5 = m \mid V_{10} = 25) = \frac{\binom{m - 1}{4} \binom{24 - m}{4}}{\binom{24}{9}}, \quad m \in \{5, 6, \ldots, 20\}$
2. $\E(V_5 \mid V_{10} = 25) = \frac{25}{2}$
3. $\var(V_5 \mid V_{10} = 25) = \frac{375}{44}$
A certain type of missile has failure probability 0.02. Let $N$ denote the launch number of the fourth failure. Find each of the following:
1. The probability density function of $N$.
2. The mean of $N$.
3. The variance of $N$.
4. The probability that there will be at least 4 failures in the first 200 launches.
Answer
1. $\P(N = n) = \binom{n - 1}{3} \left(\frac{1}{50}\right)^4 \left(\frac{49}{50}\right)^{n-4}, \quad n \in \{4, 5, \ldots\}$
2. $\E(N) = 200$
3. $\var(N) = 9800$
4. $\P(N \le 200) = 0.5685$
In the negative binomial experiment, set $p = 0.5$ and $k = 5$. Run the experiment 1000 times. Compute and compare each of the following:
1. $\P(8 \le V_5 \le 15)$
2. The relative frequency of the event $\{8 \le V_5 \le 15\}$ in the simulation
3. The normal approximation to $\P(8 \le V_5 \le 15)$
Answer
1. $\P(8 \le V_5 \le 15) = 0.7142$
3. Normal approximation: $\P(8 \le V_5 \le 15) \approx 0.7445$
A coin is tossed until the 50th head occurs.
1. Assuming that the coin is fair, find the normal approximation of the probability that the coin is tossed at least 125 times.
2. Suppose that you perform this experiment, and 125 tosses are required. Do you believe that the coin is fair?
Answer
1. 0.0072
2. No.
The Banach Match Problem
Suppose that an absent-minded professor (is there any other kind?) has $m$ matches in his right pocket and $m$ matches in his left pocket. When he needs a match to light his pipe, he is equally likely to choose a match from either pocket. We want to compute the probability density function of the random variable $W$ that gives the number of matches remaining when the professor first discovers that one of the pockets is empty. This is known as the Banach match problem, named for the mathematician Stefan Banach, who evidently behaved in the manner described.
We can recast the problem in terms of the negative binomial distribution. Clearly the match choices form a sequence of Bernoulli trials with parameter $p = \frac{1}{2}$. Specifically, we can consider a match from the right pocket as a win for player $R$, and a match from the left pocket as a win for player $L$. In a hypothetical infinite sequence of trials, let $U$ denote the number of trials necessary for $R$ to win $m + 1$ trials, and $V$ the number of trials necessary for $L$ to win $m + 1$ trials. Note that $U$ and $V$ each have the negative binomial distribution with parameters $m + 1$ and $p$.
For $k \in \{0, 1, \ldots, m\}$,
1. $L$ has $m - k$ wins at the moment when $R$ wins $m + 1$ games if and only if $U = 2 m - k + 1$.
2. $\{U = 2 m - k + 1\}$ is equivalent to the event that the professor first discovers that the right pocket is empty and that the left pocket has $k$ matches
3. $\P(U = 2 m - k + 1) = \binom{2 m - k}{m} \left(\frac{1}{2}\right)^{2 m - k + 1}$
For $k \in \{0, 1, \ldots, m\}$,
1. $R$ has $m - k$ wins at the moment when $L$ wins $m + 1$ games if and only if $V = 2 m - k + 1$.
2. $\{V = 2 m - k + 1\}$ is equivalent to the event that the professor first discovers that the left pocket is empty and that the right pocket has $k$ matches
3. $\P(V = 2 m - k + 1) = \binom{2 \, m - k}{m} \left(\frac{1}{2}\right)^{2 m - k + 1}$.
$W$ has probability density function $\P(W = k) = \binom{2 m - k}{m} \left(\frac{1}{2}\right)^{2 m - k}, \quad k \in \{0, 1, \ldots, m\}$
Proof
This result follows from the previous two exercises, since $\P(W = k) = \P(U = 2 m - k + 1) + \P(V = 2 m - k + 1)$.
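As a quick check (a Python sketch of ours, not part of the text), the density sums to 1, and the expected number of remaining matches can be computed by direct summation.

```python
from math import comb, isclose

def banach_pdf(k, m):
    """P(W = k): k matches remain when a pocket is first found empty (fair case)."""
    return comb(2 * m - k, m) * 0.5 ** (2 * m - k)

m = 50
pdf = [banach_pdf(k, m) for k in range(m + 1)]
assert isclose(sum(pdf), 1.0)
expected_remaining = sum(k * p for k, p in enumerate(pdf))
print(round(expected_remaining, 4))   # expected matches left in the other pocket
```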
We can also solve the non-symmetric Banach match problem, using the same methods as above. Thus, suppose that the professor reaches for a match in his right pocket with probability $p$ and in his left pocket with probability $1 - p$, where $0 \lt p \lt 1$. The essential change in the analysis is that $U$ has the negative binomial distribution with parameters $m + 1$ and $p$, while $V$ has the negative binomial distribution with parameters $m + 1$ and $1 - p$.
For the Banach match problem with parameter $p$, $W$ has probability density function $\P(W = k) = \binom{2 m - k}{m} \left[ p^{m+1} (1 - p)^{m-k} + (1 - p)^{m+1} p^{m-k} \right], \quad k \in \{0, 1, \ldots, m\}$
The Problem of Points
Suppose that two teams (or individuals) $A$ and $B$ play a sequence of Bernoulli trials (which we will also call points), where $p \in (0, 1)$ is the probability that player $A$ wins a point. For nonnegative integers $n$ and $m$, let $A_{n,m}(p)$ denote the probability that $A$ wins $n$ points before $B$ wins $m$ points. Computing $A_{n,m}(p)$ is an historically famous problem, known as the problem of points, that was solved by Pierre de Fermat and by Blaise Pascal.
Comment on the validity of the Bernoulli trial assumptions (independence of trials and constant probability of success) for games of sport that have a skill component as well as a random component.
There is an easy solution to the problem of points using the binomial distribution; this was essentially Pascal's solution. There is also an easy solution to the problem of points using the negative binomial distribution. In a sense, this has to be the case, given the equivalence between the binomial and negative binomial processes in (3). First, let us pretend that the trials go on forever, regardless of the outcomes. Let $Y_{n+m-1}$ denote the number of wins by player $A$ in the first $n + m - 1$ points, and let $V_n$ denote the number of trials needed for $A$ to win $n$ points. By definition, $Y_{n+m-1}$ has the binomial distribution with parameters $n + m - 1$ and $p$, and $V_n$ has the negative binomial distribution with parameters $n$ and $p$.
Player $A$ wins $n$ points before $B$ wins $m$ points if and only if $Y_{n + m - 1} \ge n$ if and only if $V_n \le m + n - 1$. Hence $A_{n,m}(p) = \sum_{k=n}^{n + m - 1} \binom{n + m - 1}{k} p^k (1 - p)^{n + m - 1 - k} = \sum_{j=n}^{n+m-1} \binom{j - 1}{n - 1} p^n (1 - p)^{j - n}$
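The two expressions for $A_{n,m}(p)$ are easy to confirm numerically. The Python sketch below (ours, not part of the text) evaluates both sums for a few parameter choices and checks that they agree.

```python
from math import comb, isclose

def win_prob_binomial(n, m, p):
    """A_{n,m}(p) via the binomial form: P(Y_{n+m-1} >= n)."""
    trials = n + m - 1
    return sum(comb(trials, k) * p ** k * (1 - p) ** (trials - k)
               for k in range(n, trials + 1))

def win_prob_neg_binomial(n, m, p):
    """A_{n,m}(p) via the negative binomial form: P(V_n <= n + m - 1)."""
    return sum(comb(j - 1, n - 1) * p ** n * (1 - p) ** (j - n)
               for j in range(n, n + m))

for n, m, p in [(3, 3, 0.6), (4, 4, 0.6), (5, 7, 0.5)]:   # (3, 3, 0.6) is a best-of-5 series
    a, b = win_prob_binomial(n, m, p), win_prob_neg_binomial(n, m, p)
    assert isclose(a, b)
    print(n, m, p, round(a, 4))
```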
$A_{n,m}(p)$ satisfies the following properties:
1. $A_{n,m}(p)$ increases from 0 to 1 as $p$ increases from 0 to 1 for fixed $n$ and $m$.
2. $A_{n,m}(p)$ decreases as $n$ increases for fixed $m$ and $p$.
3. $A_{n,m}(p)$ increases as $m$ increases for fixed $n$ and $p$.
$1 - A_{n,m}(p) = A_{m,n}(1 - p)$ for any $m, \; n \in \N_+$ and $p \in (0, 1)$.
Proof
A simple probabilistic proof is to note that both sides can be interpreted as the probability that a player with point probability $1 - p$ wins $m$ points before the opponent wins $n$ points. An analytic proof can also be constructed using the formulas above for $A_{n,m}(p)$
In the problem of points experiments, vary the parameters $n$, $m$, and $p$, and note how the probability changes. For selected values of the parameters, run the simulation 1000 times and note the apparent convergence of the relative frequency to the probability.
The win probability function for player $A$ satisfies the following recurrence relation and boundary conditions (this was essentially Fermat's solution):
1. $A_{n,m}(p) = p \, A_{n-1,m}(p) + (1 - p) \, A_{n,m-1}(p), \quad n, \; m \in \N_+$
2. $A_{n,0}(p) = 0$, $A_{0,m}(p) = 1$
Proof
Condition on the outcome of the first trial.
Next let $N_{n,m}$ denote the number of trials needed until either $A$ wins $n$ points or $B$ wins $m$ points, whichever occurs first—the length of the problem of points experiment. The following result gives the distribution of $N_{n,m}$
For $k \in \left\{\min\{m, n\}, \ldots, n + m - 1\right\}$ $\P\left(N_{n,m} = k\right) = \binom{k - 1}{n - 1} p^n (1 - p)^{k - n} + \binom{k - 1}{m - 1} (1 - p)^m p^{k - m}$
Proof
Again, imagine that we continue the trials indefinitely. Let $V_n$ denote the number of trials needed for $A$ to win $n$ points, and let $W_m$ denote the number of trials needed for $B$ to win $m$ points. Then $\P\left(N_{n,m} = k\right) = \P(V_n = k) + \P(W_m = k)$ for $k$ in the indicated range.
Series of Games
The special case of the problem of points experiment with $m = n$ is important, because it corresponds to $A$ and $B$ playing a best of $2 n - 1$ game series. That is, the first player to win $n$ games wins the series. Such series, especially when $n \in \{2, 3, 4\}$, are frequently used in championship tournaments.
Let $A_n(p)$ denote the probability that player $A$ wins the series. Then $A_n(p) = \sum_{k=n}^{2 n - 1} \binom{2 n - 1}{k} p^k (1 - p)^{2 n - 1 - k} = \sum_{j=n}^{2 n - 1} \binom{j - 1}{n - 1} p^n (1 - p)^{j - n}$
Proof
This follows directly from the problem of points probability above, since $A_n(p) = A_{n,n}(p)$.
Suppose that $p = 0.6$. Explicitly find the probability that team $A$ wins in each of the following cases:
1. A best of 5 game series.
2. A best of 7 game series.
Answer
1. 0.6825.
2. 0.7102
In the problem of points experiments, vary the parameters $n$, $m$, and $p$ (keeping $n = m$), and note how the probability changes. Now simulate a best of 5 series by selecting $n = m = 3$, $p = 0.6$. Run the experiment 1000 times and compare the relative frequency to the true probability.
$A_n(1 - p) = 1 - A_n(p)$ for any $n \in \N_+$ and $p \in [0, 1]$. Therefore
1. The graph of $A_n$ is symmetric with respect to $p = \frac{1}{2}$.
2. $A_n\left(\frac{1}{2}\right) = \frac{1}{2}$.
Proof
Again, there is a simple probabilistic argument for the equation: both sides represent the probability that a player with game probability $1 - p$ will win the series.
In the problem of points experiments, vary the parameters $n$, $m$, and $p$ (keeping $n = m$), and note how the probability changes. Now simulate a best 7 series by selecting $n = m = 4$, $p = 0.45$. Run the experiment 1000 times and compare the relative frequency to the true probability.
If $n \gt m$ then
1. $A_n(p) \lt A_m(p)$ if $0 \lt p \lt \frac{1}{2}$
2. $A_n(p) \gt A_m(p)$ if $\frac{1}{2} \lt p \lt 1$
Proof
The greater the number of games in the series, the more the series favors the stronger player (the one with the larger game probability).
Let $N_n$ denote the number of trials in the series. Then $N_n$ has probability density function $\P(N_n = k) = \binom{k - 1}{n - 1} \left[p^n (1 - p)^{k-n} + (1 - p)^n p^{k-n} \right], \quad k \in \{n, n + 1, \ldots, 2 \, n - 1\}$
Proof
This result follows directly from the corresponding problem of points result above with $n = m$.
Explicitly compute the probability density function, expected value, and standard deviation for the number of games in a best of 7 series with the following values of $p$:
1. 0.5
2. 0.7
3. 0.9
Answer
1. $f(k) = \binom{k - 1}{3} \left(\frac{1}{2}\right)^{k-1}, \quad k \in \{4, 5, 6, 7\}$, $\E(N) = 5.8125$, $\sd(N) = 1.0136$
2. $f(k) = \binom{k - 1}{3} \left[(0.7)^4 (0.3)^{k-4} + (0.3)^4 (0.7)^{k-4}\right], \quad k \in \{4, 5, 6, 7\}$, $\E(N) = 5.3780$, $\sd(N) = 1.0497$
3. $f(k) = \binom{k - 1}{3} \left[(0.9)^4 (0.1)^{k-4} + (0.1)^4 (0.9)^{k-4}\right], \quad k \in \{4, 5, 6, 7\}$, $\E(N) = 4.4394$, $\sd(N) = 0.6831$
Division of Stakes
The problem of points originated from a question posed by Chevalier de Mere, who was interested in the fair division of stakes when a game is interrupted. Specifically, suppose that players $A$ and $B$ each put up $c$ monetary units, and then play Bernoulli trials until one of them wins a specified number of trials. The winner then takes the entire $2 c$ fortune.
If the game is interrupted when $A$ needs to win $n$ more trials and $B$ needs to win $m$ more trials, then the fortune should be divided between $A$ and $B$, respectively, as follows:
1. $2 c A_{n,m}(p)$ for $A$
2. $2 c \left[1 - A_{n,m}(p)\right] = 2 c A_{m,n}(1 - p)$ for $B$.
Suppose that players $A$ and $B$ bet \$50 each. The players toss a fair coin until one of them has 10 wins; the winner takes the entire fortune. Suppose that the game is interrupted by the gambling police when $A$ has 5 wins and $B$ has 3 wins. How should the stakes be divided?
Answer
$A$ gets \$72.56, $B$ gets \$27.44
Alternate and General Versions
Let's return to the formulation at the beginning of this section. Thus, suppose that we have a sequence of Bernoulli trials $\bs{X}$ with success parameter $p \in (0, 1]$, and for $k \in \N_+$, we let $V_k$ denote the trial number of the $k$th success. Thus, $V_k$ has the negative binomial distribution with parameters $k$ and $p$ as we studied above. The random variable $W_k = V_k - k$ is the number of failures before the $k$th success. Let $N_1 = W_1$, the number of failures before the first success, and let $N_k = W_k - W_{k-1}$, the number of failures between the $(k - 1)$st success and the $k$th success, for $k \in \{2, 3, \ldots\}$.
$\bs{N} = (N_1, N_2, \ldots)$ is a sequence of independent random variables, each having the geometric distribution on $\N$ with parameter $p$. Moreover,
$W_k = \sum_{i=1}^k N_i$
Thus, $\bs{W} = (W_1, W_2, \ldots)$ is the partial sum process associated with $\bs{N}$. In particular, $\bs{W}$ has stationary, independent increments.
Probability Density Functions
The probability density function of $W_k$ is given by
$\P(W_k = n) = \binom{n + k - 1}{k - 1} p^k (1 - p)^n = \binom{n + k - 1}{n} p^k (1 - p)^n, \quad n \in \N$
Proof
This result follows directly from the PDF of $V_k$, since $\P(W_k = n) = \P(V_k = k + n)$ for $n \in \N$.
The distribution of $W_k$ is also referred to as the negative binomial distribution with parameters $k$ and $p$. Thus, the term negative binomial distribution can refer either to the distribution of the trial number of the $k$th success or the distribution of the number of failures before the $k$th success, depending on the author and the context. The two random variables differ by a constant, so it's not a particularly important issue as long as we know which version is intended. In this text, we will refer to the alternate version as the negative binomial distribution on $\N$, to distinguish it from the original version, which has support set $\{k, k + 1, \ldots\}$
More interestingly, however, the probability density function in the last result makes sense for any $k \in (0, \infty)$, not just integers. To see this, first recall the definition of the general binomial coefficient: if $a \in \R$ and $n \in \N$, we define $\binom{a}{n} = \frac{a^{(n)}}{n!} = \frac{a (a - 1) \cdots (a - n + 1)}{n!}$
The function $f$ given below defines a probability density function for every $p \in (0, 1)$ and $k \in (0, \infty)$:
$f(n) = \binom{n + k - 1}{n} p^k (1 - p)^n, \quad n \in \N$
Proof
Recall from the section on Combinatorial Structures that $\binom{n + k - 1}{n} = (-1)^n \binom{-k}{n}$. From the general binomial theorem,
$\sum_{n=0}^\infty f(n) = p^k \sum_{n=0}^\infty \binom{-k}{n} (-1)^n (1 - p)^n = p^k \left[1 - (1 - p)\right]^{-k} = 1$
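For a numerical check that this really is a density for non-integer $k$, here is a Python sketch (ours, using the product form of the general binomial coefficient): the truncated sum is 1 to machine precision, and the empirical mean matches $k (1 - p) / p$, the value derived below.

```python
from math import isclose

def gen_binom(a, n):
    """General binomial coefficient C(a, n) = a (a-1) ... (a-n+1) / n!."""
    num, den = 1.0, 1.0
    for i in range(n):
        num *= a - i
        den *= i + 1
    return num / den

def neg_binom_pdf(n, k, p):
    """f(n) = C(n+k-1, n) p^k (1-p)^n, valid for any real k > 0."""
    return gen_binom(n + k - 1, n) * p ** k * (1 - p) ** n

k, p = 2.5, 0.4
pdf = [neg_binom_pdf(n, k, p) for n in range(2000)]   # truncated support
assert isclose(sum(pdf), 1.0)
mean = sum(n * f for n, f in enumerate(pdf))
print(round(mean, 6), k * (1 - p) / p)                # both about 3.75
```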
Once again, the distribution defined by the probability density function in the last theorem is the negative binomial distribution on $\N$, with parameters $k$ and $p$. The special case when $k$ is a positive integer is sometimes referred to as the Pascal distribution, in honor of Blaise Pascal.
The distribution is unimodal. Let $t = \left|k - 1\right| \frac{1 - p}{p}$.
1. $f(n - 1) \lt f(n)$ if and only if $n \lt t$.
2. The distribution has a single mode at $\lfloor t \rfloor$ if $t$ is not an integer.
3. The distribution has two consecutive modes at $t - 1$ and $t$ if $t$ is a positive integer.
Basic Properties
Suppose that $W$ has the negative binomial distribution on $\N$ with parameters $k \in (0, \infty)$ and $p \in (0, 1)$. To establish basic properties, we can no longer use the decomposition of $W$ as a sum of independent geometric variables. Instead, the best approach is to derive the probability generating function and then use the generating function to obtain other basic properties.
$W$ has probability generating function $P$ given by
$P(t) = \E\left(t^W\right) = \left( \frac{p}{1 - (1 - p) \, t} \right)^k, \quad \left|t\right| \lt \frac{1}{1 - p}$
Proof
This follows from the general binomial theorem: for $\left|t\right| \lt 1 / (1 - p)$, $\E\left(t^W\right) = \sum_{n=0}^\infty f(n) t^n = p^k \sum_{n=0}^\infty \binom{-k}{n} (-1)^n (1 - p)^n t^n = p^k \left[1 - (1 - p) t\right]^{-k}$
The moments of $W$ can be obtained from the derivatives of the probability generating function.
$W$ has the following moments:
1. $\E(W) = k \frac{1 - p}{p}$
2. $\var(W) = k \frac{1 - p}{p^2}$
3. $\skw(W) = \frac{2 - p}{\sqrt{k (1 - p)}}$
4. $\kur(W) = \frac{3 \, (k + 2) (1 - p) + p^2}{k \, (1 - p)}$
Proof
Recall that the factorial moments of $W$ can be obtained from the derivatives of the probability generating function: $\E\left[W^{(j)}\right] = P^{(j)}(1)$ for $j \in \N_+$. Then the various moments above can be obtained from standard formulas.
The negative binomial distribution on $\N$ is preserved under sums of independent variables.
Suppose that $V$ has the negative binomial distribution on $\N$ with parameters $a \in(0, \infty)$ and $p \in (0, 1)$, and that $W$ has the negative binomial distribution on $\N$ with parameters $b \in (0, \infty)$ and $p \in (0, 1)$, and that $V$ and $W$ are independent. Then $V + W$ has the negative binomial on $\N$ distribution with parameters $a + b$ and $p$.
Proof
This result follows from the probability generating functions. Recall that the PGF of $V + W$ is the product of the PGFs of $V$ and $W$.
In the last result, note that the success parameter $p$ must be the same for both variables.
Normal Approximation
Because of the decomposition of $W$ when the parameter $k$ is a positive integer, it's not surprising that a central limit theorem holds for the general negative binomial distribution.
Suppose that $W$ has the negative binomial distribution with parameters $k \in (0, \infty)$ and $p \in (0, 1)$. The standard score of $W$ is $Z = \frac{p \, W - k \, (1 - p)}{\sqrt{k \, (1 - p)}}$ The distribution of $Z$ converges to the standard normal distribution as $k \to \infty$.
Thus, if $k$ is large (and not necessarily an integer), then the distribution of $W$ is approximately normal with mean $k \frac{1 - p}{p}$ and variance $k \frac{1 - p}{p^2}$.
Special Families
The negative binomial distribution on $\N$ belongs to several special families of distributions. First, it follows from the result above on sums that we can decompose a negative binomial variable on $\N$ into the sum of an arbitrary number of independent, identically distributed variables. This special property is known as infinite divisibility, and is studied in more detail in the chapter on Special Distributions.
The negative binomial distribution on $\N$ is infinitely divisible.
Proof
Suppose that $V$ has the negative binomial distribution on $\N$ with parameters $k \in (0, \infty)$ and $p \in (0, 1)$. It follows from the previous result that for any $n \in \N_+$, $V$ can be represented as $V = \sum_{i=1}^n V_i$ where $(V_1, V_2, \ldots, V_n)$ are independent, and each has the negative binomial distribution on $\N$ with parameters $k/n$ and $p$.
A Poisson-distributed random sum of independent, identically distributed random variables is said to have a compound Poisson distribution; these distributions are studied in more detail in the chapter on the Poisson Process. A theorem of William Feller states that an infinitely divisible distribution on $\N$ must be compound Poisson. Hence it follows from the previous result that the negative binomial distribution on $\N$ belongs to this family. Here is the explicit result:
Let $k \in (0, \infty)$ and $p \in (0, 1)$. Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent variables, each having the logarithmic series distribution with shape parameter $1 - p$. Suppose also that $N$ is independent of $\bs{X}$ and has the Poisson distribution with parameter $- k \ln(p)$. Then $W = \sum_{i=1}^N X_i$ has the negative binomial distribution on $\N$ with parameters $k$ and $p$.
Proof
From the general theory of compound Poisson distributions, the probability generating function of $W$ is $P(t) = \exp\left( \lambda [Q(t) - 1]\right)$ where $\lambda$ is the parameter of the Poisson variable $N$ and $Q(t)$ is the common PGF of the terms in the sum. Using the PGF of the logarithmic series distribution, and the particular values of the parameters, we have $P(t) = \exp \left[-k \ln(p) \left(\frac{\ln[1 - (1 - p)t]}{\ln(p)} - 1\right)\right], \quad \left|t\right| \lt \frac{1}{1 - p}$ Using properties of logarithms and simple algebra, this reduces to $P(t) = \left(\frac{p}{1 - (1 - p)t}\right)^k, \quad \left|t\right| \lt \frac{1}{1 - p}$ which is the PGF of the negative binomial distribution with parameters $k$ and $p$.
As a special case ($k = 1$), it follows that the geometric distribution on $\N$ is infinitely divisible and compound Poisson.
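The compound Poisson representation is easy to check by simulation. The sketch below (an added illustration with arbitrarily chosen $k = 2.5$ and $p = 0.4$) assumes NumPy is available; `Generator.logseries` takes the shape parameter of the logarithmic series distribution directly, which here is $1 - p$, and the simulated mean and variance are compared with the negative binomial formulas $k (1-p)/p$ and $k (1-p)/p^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p = 2.5, 0.4
n_sims = 100_000

lam = -k * np.log(p)                      # Poisson parameter -k ln(p)
N = rng.poisson(lam, size=n_sims)
W = np.array([rng.logseries(1 - p, size=n).sum() for n in N])

print(W.mean(), k * (1 - p) / p)          # both about 3.75
print(W.var(), k * (1 - p) / p ** 2)      # both about 9.4
```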
Next, the negative binomial distribution on $\N$ belongs to the general exponential family. This family is important in inferential statistics and is studied in more detail in the chapter on Special Distributions.
Suppose that $W$ has the negative binomial distribution on $\N$ with parameters $k \in (0, \infty)$ and $p \in (0, 1)$. For fixed $k$, $W$ has a one-parameter exponential distribution with natural statistic $W$ and natural parameter $\ln(1 - p)$.
Proof
The PDF of $W$ can be written as $f(n) = \binom{n + k - 1}{n} p^k \exp\left[n \ln(1 - p)\right], \quad n \in \N$ so the result follows from the definition of the general exponential family.
Finally, the negative binomial distribution on $\N$ is a power series distribution. Many special discrete distributions belong to this family, which is studied in more detail in the chapter on Special Distributions.
For fixed $k \in (0, \infty)$, the negative binomial distribution on $\N$ with parameters $k$ and $p \in (0, 1)$ is a power series distribution corresponding to the function $g(\theta) = 1 \big/ (1 - \theta)^k$ for $\theta \in (0, 1)$, where $\theta = 1 - p$.
Proof
In terms of the new parameter $\theta$, the negative binomial pdf has the form $f(n) = \frac{1}{g(\theta)} \binom{n + k - 1}{n} \theta^n$ for $n \in \N$, and $\sum_{n=0}^\infty \binom{n + k - 1}{n} \theta^n = g(\theta)$.
Computational Exercises
Suppose that $W$ has the negative binomial distribution with parameters $k = \frac{15}{2}$ and $p = \frac{3}{4}$. Compute each of the following:
1. $\P(W = 3)$
2. $\E(W)$
3. $\var(W)$
Answer
1. $\P(W = 3) = 0.1823$
2. $\E(W) = \frac{5}{2}$
3. $\var(W) = \frac{10}{3}$
Suppose that $W$ has the negative binomial distribution with parameters $k = \frac{1}{3}$ and $p = \frac{1}{4}$. Compute each of the following:
1. $\P(W \le 2)$
2. $\E(W)$
3. $\var(W)$
Answer
1. $\P(W \le 2) = \frac{11}{8 \sqrt[3]{4}}$
2. $\E(W) = 1$
3. $\var(W) = 4$
Suppose that $W$ has the negative binomial distribution with parameters $k = 10 \, \pi$ and $p = \frac{1}{3}$. Compute each of the following:
1. $\E(W)$
2. $\var(W)$
3. The normal approximation to $\P(50 \le W \le 70)$
Answer
1. $\E(W) = 20 \, \pi$
2. $\var(W) = 60 \, \pi$
3. $\P(50 \le W \le 70) \approx 0.5461$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.04%3A_The_Negative_Binomial_Distribution.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
Multinomial trials
A multinomial trials process is a sequence of independent, identically distributed random variables $\bs{X} =(X_1, X_2, \ldots)$ each taking $k$ possible values. Thus, the multinomial trials process is a simple generalization of the Bernoulli trials process (which corresponds to $k = 2$). For simplicity, we will denote the set of outcomes by $\{1, 2, \ldots, k\}$, and we will denote the common probability density function of the trial variables by $p_i = \P(X_j = i), \quad i \in \{1, 2, \ldots, k\}$ Of course $p_i \gt 0$ for each $i$ and $\sum_{i=1}^k p_i = 1$. In statistical terms, the sequence $\bs{X}$ is formed by sampling from the distribution.
As with our discussion of the binomial distribution, we are interested in the random variables that count the number of times each outcome occurred. Thus, let $Y_i = \#\left\{j \in \{1, 2, \ldots, n\}: X_j = i\right\} = \sum_{j=1}^n \bs{1}(X_j = i), \quad i \in \{1, 2, \ldots, k\}$ Of course, these random variables also depend on the parameter $n$ (the number of trials), but this parameter is fixed in our discussion so we suppress it to keep the notation simple. Note that $\sum_{i=1}^k Y_i = n$ so if we know the values of $k - 1$ of the counting variables, we can find the value of the remaining variable.
Basic arguments using independence and combinatorics can be used to derive the joint, marginal, and conditional densities of the counting variables. In particular, recall the definition of the multinomial coefficient: for nonnegative integers $(j_1, j_2, \ldots, j_k)$ with $\sum_{i=1}^k j_i = n$, $\binom{n}{j_1, j_2, \dots, j_k} = \frac{n!}{j_1! j_2! \cdots j_k!}$
Joint Distribution
For nonnegative integers $(j_1, j_2, \ldots, j_k)$ with $\sum_{i=1}^k j_i = n$, $\P(Y_1 = j_1, Y_2 = j_2, \ldots, Y_k = j_k) = \binom{n}{j_1, j_2, \ldots, j_k} p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k}$
Proof
By independence, any sequence of trials in which outcome $i$ occurs exactly $j_i$ times for $i \in \{1, 2, \ldots, k\}$ has probability $p_1^{j_1} p_2^{j_2} \cdots p_k^{j_k}$. The number of such sequences is the multinomial coefficient $\binom{n}{j_1, j_2, \ldots, j_k}$. Thus, the result follows from the additive property of probability.
The distribution of $\bs{Y} = (Y_1, Y_2, \ldots, Y_k)$ is called the multinomial distribution with parameters $n$ and $\bs{p} = (p_1, p_2, \ldots, p_k)$. We also say that $(Y_1, Y_2, \ldots, Y_{k-1})$ has this distribution (recall that the values of $k - 1$ of the counting variables determine the value of the remaining variable). Usually, it is clear from context which meaning of the term multinomial distribution is intended. Again, the ordinary binomial distribution corresponds to $k = 2$.
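The joint probability density function translates directly into code. The following Python sketch (an added illustration, with parameters chosen arbitrarily) computes a multinomial probability from the formula and compares it with the relative frequency from a NumPy simulation.

```python
import numpy as np
from math import factorial, prod

def multinomial_pmf(counts, probs):
    # (n choose j_1, ..., j_k) * p_1^{j_1} * ... * p_k^{j_k}
    coef = factorial(sum(counts)) // prod(factorial(j) for j in counts)
    return coef * prod(p ** j for j, p in zip(counts, probs))

rng = np.random.default_rng(0)
probs = [0.2, 0.3, 0.5]
sims = rng.multinomial(10, probs, size=200_000)
target = np.array([2, 3, 5])
empirical = np.mean(np.all(sims == target, axis=1))
print(multinomial_pmf([2, 3, 5], probs), empirical)   # both about 0.085
```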
Marginal Distributions
For each $i \in \{1, 2, \ldots, k\}$, $Y_i$ has the binomial distribution with parameters $n$ and $p_i$: $\P(Y_i = j) = \binom{n}{j} p_i^j (1 - p_i)^{n-j}, \quad j \in \{0, 1, \ldots, n\}$
Proof
There is a simple probabilistic proof. If we think of each trial as resulting in outcome $i$ or not, then clearly we have a sequence of $n$ Bernoulli trials with success parameter $p_i$. Random variable $Y_i$ is the number of successes in the $n$ trials. The result could also be obtained by summing the joint probability density function above over all of the other variables, but this would be much harder.
Grouping
The multinomial distribution is preserved when the counting variables are combined. Specifically, suppose that $(A_1, A_2, \ldots, A_m)$ is a partition of the index set $\{1, 2, \ldots, k\}$ into nonempty subsets. For $j \in \{1, 2, \ldots, m\}$ let $Z_j = \sum_{i \in A_j} Y_i, \quad q_j = \sum_{i \in A_j} p_i$
$\bs{Z} = (Z_1, Z_2, \ldots, Z_m)$ has the multinomial distribution with parameters $n$ and $\bs{q} = (q_1, q_2, \ldots, q_m)$.
Proof
Again, there is a simple probabilistic proof. Each trial, independently of the others, results in an outcome in $A_j$ with probability $q_j$. For each $j$, $Z_j$ counts the number of trials which result in an outcome in $A_j$. This result could also be derived from the joint probability density function above, but again, this would be a much harder proof.
Conditional Distribution
The multinomial distribution is also preserved when some of the counting variables are observed. Specifically, suppose that $(A, B)$ is a partition of the index set $\{1, 2, \ldots, k\}$ into nonempty subsets. Suppose that $(j_i : i \in B)$ is a sequence of nonnegative integers, indexed by $B$ such that $j = \sum_{i \in B} j_i \le n$. Let $p = \sum_{i \in A} p_i$.
The conditional distribution of $(Y_i: i \in A)$ given $(Y_i = j_i: i \in B)$ is multinomial with parameters $n - j$ and $(p_i / p: i \in A)$.
Proof
Again, there is a simple probabilistic argument and a harder analytic argument. If we know $Y_i = j_i$ for $i \in B$, then there are $n - j$ trials remaining, each of which, independently of the others, must result in an outcome in $A$. The conditional probability of a trial resulting in $i \in A$ is $p_i / p$.
Combinations of the basic results involving grouping and conditioning can be used to compute any marginal or conditional distributions.
Moments
We will compute the mean and variance of each counting variable, and the covariance and correlation of each pair of variables.
For $i \in \{1, 2, \ldots, k\}$, the mean and variance of $Y_i$ are
1. $\E(Y_i) = n p_i$
2. $\var(Y_i) = n p_i (1 - p_i)$
Proof
Recall that $Y_i$ has the binomial distribution with parameters $n$ and $p_i$.
For distinct $i, \; j \in \{1, 2, \ldots, k\}$,
1. $\cov(Y_i, Y_j) = - n p_i p_j$
2. $\cor(Y_i, Y_j) = -\sqrt{p_i p_j \big/ \left[(1 - p_i)(1 - p_j)\right]}$
Proof
From the bi-linearity of the covariance operator, we have $\cov(Y_i, Y_j) = \sum_{s=1}^n \sum_{t=1}^n \cov[\bs{1}(X_s = i), \bs{1}(X_t = j)]$ If $s = t$, the covariance of the indicator variables is $-p_i p_j$. If $s \ne t$ the covariance is 0 by independence. Part (b) can be obtained from part (a) using the definition of correlation and the variances of $Y_i$ and $Y_j$ given above.
From the last result, note that the number of times outcome $i$ occurs and the number of times outcome $j$ occurs are negatively correlated, but the correlation does not depend on $n$.
If $k = 2$, then the number of times outcome 1 occurs and the number of times outcome 2 occurs are perfectly negatively correlated: the correlation is $-1$.
Proof
This follows immediately from the result above on covariance since we must have $i = 1$ and $j = 2$, and $p_2 = 1 - p_1$. Of course we can also argue this directly since $Y_2 = n - Y_1$.
Examples and Applications
In the dice experiment, select the number of aces. For each die distribution, start with a single die and add dice one at a time, noting the shape of the probability density function and the size and location of the mean/standard deviation bar. When you get to 10 dice, run the simulation 1000 times and compare the relative frequency function to the probability density function, and the empirical moments to the distribution moments.
Suppose that we throw 10 standard, fair dice. Find the probability of each of the following events:
1. Scores 1 and 6 occur once each and the other scores occur twice each.
2. Scores 2 and 4 occur 3 times each.
3. There are 4 even scores and 6 odd scores.
4. Scores 1 and 3 occur twice each given that score 2 occurs once and score 5 three times.
Answer
1. 0.00375
2. 0.0178
3. 0.205
4. 0.0879
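The answers above can be reproduced from the joint distribution together with the grouping and conditioning results; here is a short Python sketch (an added illustration, not part of the text).

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    coef = factorial(sum(counts)) // prod(factorial(j) for j in counts)
    return coef * prod(p ** j for j, p in zip(counts, probs))

# 1. scores 1 and 6 once each, the other scores twice each
print(multinomial_pmf([1, 2, 2, 2, 2, 1], [1/6] * 6))    # 0.00375

# 2. group the outcomes as {2}, {4}, everything else
print(multinomial_pmf([3, 3, 4], [1/6, 1/6, 4/6]))       # 0.0178

# 3. group the outcomes as even versus odd
print(multinomial_pmf([4, 6], [1/2, 1/2]))               # 0.205

# 4. given score 2 once and score 5 three times, 6 trials remain on
#    {1, 3, 4, 6}, each with conditional probability 1/4; group {4, 6}
print(multinomial_pmf([2, 2, 2], [1/4, 1/4, 1/2]))       # 0.0879
```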
Suppose that we roll 4 ace-six flat dice (faces 1 and 6 have probability $\frac{1}{4}$ each; faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each). Find the joint probability density function of the number of times each score occurs.
Answer
$f(u, v, w, x, y, z) = \binom{4}{u, v, w, x, y, z} \left(\frac{1}{4}\right)^{u+z} \left(\frac{1}{8}\right)^{v + w + x + y}$ for nonnegative integers $u, \, v, \, w, \, x, \, y, \, z$ that sum to 4
In the dice experiment, select 4 ace-six flats. Run the experiment 500 times and compute the joint relative frequency function of the number times each score occurs. Compare the relative frequency function to the true probability density function.
Suppose that we roll 20 ace-six flat dice. Find the covariance and correlation of the number of 1's and the number of 2's.
Answer
covariance: $-0.625$; correlation: $-0.2182$
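A direct evaluation of the covariance and correlation formulas above (a quick check, added for illustration):

```python
from math import sqrt

n, p1, p2 = 20, 1/4, 1/8
cov = -n * p1 * p2
cor = -sqrt(p1 * p2 / ((1 - p1) * (1 - p2)))
print(cov, cor)   # -0.625 and about -0.218
```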
In the dice experiment, select 20 ace-six flat dice. Run the experiment 500 times, updating after each run. Compute the empirical covariance and correlation of the number of 1's and the number of 2's. Compare the results with the theoretical results computed previously. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.05%3A_The_Multinomial_Distribution.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
The simple random walk process is a minor modification of the Bernoulli trials process. Nonetheless, the process has a number of very interesting properties, and so deserves a section of its own. In some respects, it's a discrete time analogue of the Brownian motion process.
The Basic Process
Suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, each taking values 1 and $-1$ with probabilities $p \in [0, 1]$ and $1 - p$ respectively. Let $\bs{X} = (X_0, X_1, X_2, \ldots)$ be the partial sum process associated with $\bs{U}$, so that $X_n = \sum_{i=1}^n U_i, \quad n \in \N$ The sequence $\bs{X}$ is the simple random walk with parameter $p$.
We imagine a person or a particle on an axis, so that at each discrete time step, the walker moves either one unit to the right (with probability $p$) or one unit to the left (with probability $1 - p$), independently from step to step. The walker could accomplish this by tossing a coin with probability of heads $p$ at each step, to determine whether to move right or move left. Other types of random walks, and additional properties of this random walk, are studied in the chapter on Markov Chains.
The mean and variance, respectively, of a step $U$ are
1. $\E(U) = 2 p - 1$
2. $\var(U) = 4 p (1 - p)$
Let $I_j = \frac{1}{2}\left(U_j + 1\right)$ for $j \in \N_+$. Then $\bs{I} = (I_1, I_2, \ldots)$ is a Bernoulli trials sequence with success parameter $p$.
Proof
Note that $I_j = 1$ if $U_j = 1$ and $I_j = 0$ if $U_j = -1$.
In terms of the random walker, $I_j$ is the indicator variable for the event that the $j$th step is to the right.
Let $R_n = \sum_{j=1}^n I_j$ for $n \in \N$, so that $\bs{R} = (R_0, R_1, \ldots)$ is the partial sum process associated with $\bs{I}$. Then
1. $X_n = 2 R_n - n$ for $n \in \N$.
2. $R_n$ has the binomial distribution with trial parameter $n$ and success parameter $p$.
In terms of the walker, $R_n$ is the number of steps to the right in the first $n$ steps.
$X_n$ has probability density function $\P(X_n = k) = \binom{n}{(n + k) / 2} p^{(n + k)/2} (1 - p)^{(n-k) / 2}, \quad k \in \{-n, -n + 2, \ldots, n - 2, n\}$
Proof
Since $R_n$ takes values in $\{0, 1, \ldots, n\}$, $X_n$ takes values in $\{-n, -n + 2, \ldots, n-2, n\}$. For $k$ in this set, $\P(X_n = k)\ = \P\left[R_n = (n + k) / 2\right]$, so the result follows from the binomial distribution of $R_n$.
The mean and variance of $X_n$ are
1. $\E(X_n) = n (2 p - 1)$
2. $\var(X_n) = 4 n p (1 - p)$
The Simple Symmetric Random Walk
Suppose now that $p =\frac{1}{2}$. In this case, $\bs{X} = (X_0, X_1, \ldots)$ is called the simple symmetric random walk. The symmetric random walk can be analyzed using some special and clever combinatorial arguments. But first we give the basic results above for this special case.
For each $n \in \N_+$, the random vector $\bs{U}_n = (U_1, U_2, \ldots, U_n)$ is uniformly distributed on $\{-1, 1\}^n$, and therefore $\P(\bs{U}_n \in A) = \frac{\#(A)}{2^n}, \quad A \subseteq \{-1, 1\}^n$
$X_n$ has probability density function $\P(X_n = k) = \binom{n}{(n + k) / 2} \frac{1}{2^n}, \quad k \in \{-n, -n + 2, \ldots, n - 2, n\}$
The mean and variance of $X_n$ are
1. $\E(X_n) = 0$
2. $\var(X_n) = n$
In the random walk simulation, select the final position. Vary the number of steps and note the shape and location of the probability density function and the mean$\pm$standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the empirical density function and moments to the true probability density function and moments.
In the random walk simulation, select the final position and set the number of steps to 50. Run the simulation 1000 times and compute and compare the following:
1. $\P(-6 \le X_{50} \le 10)$
2. The relative frequency of the event $\{-6 \le X_{50} \le 10\}$
3. The normal approximation to $\P(-6 \le X_{50} \le 10)$
Answer
1. 0.7794
2. 0.7752
The Maximum Position
Consider again the simple, symmetric random walk. Let $Y_n = \max\{X_0, X_1, \ldots, X_n\}$, the maximum position during the first $n$ steps. Note that $Y_n$ takes values in the set $\{0, 1, \ldots, n\}$. The distribution of $Y_n$ can be derived from a simple and wonderful idea known as the reflection principle.
For $n \in \N$ and $y \in \{0, 1, \ldots, n\}$, $\P(Y_n = y) = \begin{cases} \P(X_n = y) = \binom{n}{(y + n) / 2} \frac{1}{2^n}, & y, \, n \text{ have the same parity (both even or both odd)} \\ \P(X_n = y + 1) = \binom{n}{(y + n + 1) / 2} \frac{1}{2^n}, & y, \, n \text{ have opposite parity (one even and one odd)} \end{cases}$
Proof
Note first that $Y_n \ge y$ if and only if $X_i = y$ for some $i \in \{0, 1, \ldots, n\}$. Suppose that $k \le y \le n$. For each path that satisfies $Y_n \ge y$ and $X_n = k$ there is another path that satisfies $X_n = 2 y - k$. The second path is obtained from the first path by reflecting in the line $x = y$, after the first path hits $y$. Since the paths are equally likely, $\P(Y_n \ge y, X_n = k) = \P(X_n = 2 y - k), \quad k \le y \le n$ Hence it follows that $\P(Y_n = y, X_n = k) = \P(X_n = 2 y - k) - \P[X_n = 2 (y + 1) - k], \quad k \le y \le n$
In the random walk simulation, select the maximum value variable. Vary the number of steps and note the shape and location of the probability density function and the mean/standard deviation bar. Now set the number of steps to 30 and run the simulation 1000 times. Compare the relative frequency function and empirical moments to the true probability density function and moments.
For every $n$, the probability density function of $Y_n$ is decreasing.
The last result is a bit surprising; in particular, the single most likely value for the maximum (and hence the mode of the distribution) is 0.
Explicitly compute the probability density function, mean, and variance of $Y_5$.
Answer
1. Probability density function of $Y_5$: $f(0) = f(1) = \frac{10}{32}$, $f(2) = f(3) = \frac{5}{32}$, $f(4) = f(5) = \frac{1}{32}$
2. $\E(Y_5) = \frac{11}{8}$
3. $\var(Y_5) = \frac{111}{64}$
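The density of $Y_5$ can also be checked by brute force, without the reflection principle: the sketch below (an added illustration) enumerates all $2^5$ equally likely step sequences.

```python
from itertools import product

pmf = {}
for steps in product([-1, 1], repeat=5):
    walk = [0]
    for s in steps:
        walk.append(walk[-1] + s)
    y = max(walk)                       # maximum position during the walk
    pmf[y] = pmf.get(y, 0) + 1 / 2 ** 5

print(sorted(pmf.items()))
# (0, 10/32), (1, 10/32), (2, 5/32), (3, 5/32), (4, 1/32), (5, 1/32)
print(sum(y * pr for y, pr in pmf.items()))   # mean 11/8 = 1.375
```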
A fair coin is tossed 10 times. Find the probability that the difference between the number of heads and the number of tails is never greater than 4.
Answer
$\P(Y_{10} \le 4) = \frac{57}{64}$
The Last Visit to 0
Consider again the simple, symmetric random walk. Our next topic is the last visit to 0 during the first $2 n$ steps: $Z_{2 n} = \max \left\{ j \in \{0, 2, \ldots, 2 n\}: X_j = 0 \right\}, \quad n \in \N$ Note that since visits to 0 can only occur at even times, $Z_{2 n}$ takes the values in the set $\{0, 2, \ldots, 2 n\}$. This random variable has a strange and interesting distribution known as the discrete arcsine distribution. Along the way to our derivation, we will discover some other interesting results as well.
The probability density function of $Z_{2 n}$ is $\P(Z_{2 n} = 2 k) = \binom{2 k}{k} \binom{2 n - 2 k}{n - k} \frac{1}{2^{2 n}}, \quad k \in \{0, 1, \ldots, n\}$
Proof
Note that $\P(Z_{2 n} = 2 k) = \P(X_{2 k} = 0, X_{2 k+1} \ne 0, \ldots, X_{2 n} \ne 0), \quad k \in \{0, 1, \ldots, n\}$ From independence and symmetry it follows that $\P(Z_{2 n} = 2 k) = \P(X_{2 k} = 0) \P(X_1 \ne 0, X_2 \ne 0, \ldots, X_{2 n - 2 k} \ne 0), \quad k \in \{0, 1, \ldots, n\}$ We know the first factor on the right from the distribution of $X_{2 k}$. Thus, we need to compute the second factor, the probability that the random walk never returns to 0 during a time interval. Using results for the maximum position we have $\P(X_1 \le 0, X_2 \le 0, \ldots, X_{2 j} \le 0) = \P(Y_{2 j} = 0) = \binom{2 j}{j} \frac{1}{2^{2 j}}$ From symmetry (which is just the reflection principle at $y = 0$), it follows that $\P(X_1 \ge 0, X_2 \ge 0, \ldots, X_{2 j} \ge 0) = \binom{2 j}{j} \frac{1}{2^{2 j}}$ Next, $\{X_1 \gt 0, X_2 \gt 0, \ldots, X_{2 j} \gt 0\} = \{X_1 = 1, X_2 \ge 1, \ldots, X_{2 j} \ge 1\}$. From independence and symmetry, $\P(X_1 \gt 0, X_2 \gt 0, \ldots, X_{2 j} \gt 0) = \P(X_1 = 1) \P(X_1 \ge 0, X_2 \ge 0, \ldots, X_{2 j - 1} \ge 0)$ But $X_{2 j - 1} \ge 0$ implies $X_{2 j} \ge 0$. Hence $\P(X_1 \gt 0, X_2 \gt 0, \ldots, X_{2 j} \gt 0) = \binom{2 j}{j} \frac{1}{2^{2 j + 1}}$ From symmetry, $\P(X_1 \ne 0, X_2 \ne 0, \ldots, X_{2 j} \ne 0) = \binom{2 j}{j} \frac{1}{2^{2 j}}$ Letting $j = n - k$ and combining the two factors gives the result.
In the random walk simulation, choose the last visit to 0 and then vary the number of steps with the scroll bar. Note the shape and location of the probability density function and the mean/standard deviation bar. For various values of the parameter, run the simulation 1000 times and compare the empirical density function and moments to the true probability density function and moments.
The probability density function of $Z_{2 n}$ is symmetric about $n$ and is $u$-shaped:
1. $\P(Z_{2 n} = 2 k) = \P(Z_{2 n} = 2 n - 2 k)$
2. $\P(Z_{2 n} = 2 j) \gt \P(Z_{2 n} = 2 k)$ if and only if $j \lt k$ and $2 k \le n$
In particular, 0 and $2 n$ are the most likely values and hence are the modes of the distribution. The discrete arcsine distribution is quite surprising. Since we are tossing a fair coin to determine the steps of the walker, you might easily think that the random walk should be positive half of the time and negative half of the time, and that it should return to 0 frequently. But in fact, the arcsine law implies that with probability $\frac{1}{2}$, there will be no return to 0 during the second half of the walk, from time $n + 1$ to $2 n$, regardless of $n$, and it is not uncommon for the walk to stay positive (or negative) during the entire time from 1 to $2 n$.
Explicitly compute the probability density function, mean, and variance of $Z_{10}$.
Answer
1. Probability density function of $Z_{10}$: $f(0) = f(10) = \frac{63}{256}$, $f(2) = f(8) = \frac{35}{256}$, $f(4) = f(6) = \frac{30}{256}$
2. $\E(Z_{10}) = 5$
3. $\var(Z_{10}) = 15$
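The discrete arcsine density and the moments of $Z_{10}$ are easy to evaluate directly from the formula; the following sketch is an added illustration.

```python
from math import comb

def arcsine_pmf(n):
    # P(Z_{2n} = 2k) = C(2k, k) C(2n - 2k, n - k) / 2^(2n)
    return {2 * k: comb(2 * k, k) * comb(2 * n - 2 * k, n - k) / 2 ** (2 * n)
            for k in range(n + 1)}

pmf = arcsine_pmf(5)
mean = sum(z * pr for z, pr in pmf.items())
var = sum(z ** 2 * pr for z, pr in pmf.items()) - mean ** 2
print(pmf)          # 63/256 at 0 and 10, 35/256 at 2 and 8, 30/256 at 4 and 6
print(mean, var)    # 5.0 and 15.0
```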
The Ballot Problem and the First Return to Zero
The Ballot Problem
Suppose that in an election, candidate $A$ receives $a$ votes and candidate $B$ receives $b$ votes where $a \gt b$. Assuming a random ordering of the votes, what is the probability that $A$ is always ahead of $B$ in the vote count? This is an historically famous problem known as the Ballot Problem, that was solved by Joseph Louis Bertrand in 1887. The ballot problem is intimately related to simple random walks.
Comment on the validity of the assumption that the voters are randomly ordered for a real election.
The ballot problem can be solved by using a simple conditional probability argument to obtain a recurrence relation. Let $f(a, b)$ denote the probability that $A$ is always ahead of $B$ in the vote count.
$f$ satisfies the initial condition $f(1, 0) = 1$ and the following recurrence relation: $f(a, b) = \frac{a}{a + b} f(a - 1,b) + \frac{b}{a + b} f(a, b - 1)$
Proof
This follows by conditioning on the candidate that receives the last vote.
The probability that $A$ is always ahead in the vote count is $f(a, b) = \frac{a - b}{a + b}$
Proof
This follows from the recurrence relation and induction on the total number of votes $n = a + b$
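The recurrence relation is also convenient computationally. The sketch below (an added illustration; the vote totals $a = 12$, $b = 7$ are arbitrary) evaluates the recursion with memoization and compares it with the closed form $(a - b)/(a + b)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ballot(a, b):
    # probability that A is always ahead in the vote count
    if a <= b:
        return 0.0   # A cannot stay ahead unless A finishes strictly ahead
    if b == 0:
        return 1.0   # initial condition f(a, 0) = 1
    return a / (a + b) * ballot(a - 1, b) + b / (a + b) * ballot(a, b - 1)

a, b = 12, 7
print(ballot(a, b), (a - b) / (a + b))   # both 5/19 = 0.263...
```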
In the ballot experiment, vary the parameters $a$ and $b$ and note the change in the ballot probability. For selected values of the parameters, run the experiment 1000 times and compare the relative frequency to the true probability.
In an election for mayor of a small town, Mr. Smith received 4352 votes while Ms. Jones received 7543 votes. Compute the probability that Jones was always ahead of Smith in the vote count.
Answer
$\frac{3191}{11895} \approx 0.2683$
Relation to Random Walks
Consider again the simple random walk $\bs{X}$ with parameter $p$.
Given $X_n = k$,
1. There are $\frac{n + k}{2}$ steps to the right and $\frac{n - k}{2}$ steps to the left.
2. All possible orderings of the steps to the right and the steps to the left are equally likely.
For $k \gt 0$, $\P(X_1 \gt 0, X_2 \gt 0, \ldots, X_{n - 1} \gt 0 \mid X_n = k) = \frac{k}{n}$
Proof
This follows from the previous result and the ballot probability.
An American roulette wheel has 38 slots; 18 are red, 18 are black, and 2 are green. Fred bet \$1 on red, at even stakes, 50 times, winning 22 times and losing 28 times. Find the probability that Fred's net fortune was always negative.
Answer
$\frac{3}{25}$
Roulette is studied in more detail in the chapter on Games of Chance.
The Distribution of the First Zero
Consider again the simple random walk with parameter $p$, as in the last subsection. Let $T$ denote the time of the first return to 0: $T = \min\{n \in \N_+: X_n = 0\}$ Note that returns to 0 can only occur at even times; it may also be possible that the random walk never returns to 0. Thus, $T$ takes values in the set $\{2, 4, \ldots\} \cup \{\infty\}$.
The probability density function of $T$ is given by $\P(T = 2 n) = \binom{2 n}{n} \frac{1}{2 n - 1} p^n (1 - p)^n, \quad n \in \N_+$
Proof
For $n \in \N_+$, $\P(T = 2 n) = \P(T = 2 n, X_{2 n} = 0) = \P(T = 2 n \mid X_{2 n} = 0 ) \P(X_{2 n} = 0)$ From the ballot problem, $\P(T = 2 n \mid X_{2 n} = 0) = \frac{1}{2 n - 1}$, and from the distribution of $X_{2 n}$, $\P(X_{2 n} = 0) = \binom{2 n}{n} p^n (1 - p)^n$.
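For the symmetric case $p = \frac{1}{2}$, the density of $T$ is easy to tabulate exactly; the sketch below (an added check, using exact rational arithmetic) produces the values used in the exercise that follows.

```python
from math import comb
from fractions import Fraction

def first_return_pmf(n, p=Fraction(1, 2)):
    # P(T = 2n) = C(2n, n) * 1/(2n - 1) * p^n * (1 - p)^n
    return comb(2 * n, n) * Fraction(1, 2 * n - 1) * p ** n * (1 - p) ** n

print([str(first_return_pmf(n)) for n in range(1, 6)])
# ['1/2', '1/8', '1/16', '5/128', '7/256']
```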
Fred and Wilma are tossing a fair coin; Fred gets a point for each head and Wilma gets a point for each tail. Find the probability that their scores are equal for the first time after $n$ tosses, for each $n \in \{2, 4, 6, 8, 10\}$.
Answer
$f(2) = \frac{1}{2}$, $f(4) = \frac{1}{8}$, $f(6) = \frac{1}{16}$, $f(8) = \frac{5}{128}$, $f(10) = \frac{7}{256}$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.06%3A_The_Simple_Random_Walk.txt
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
An interesting thing to do in almost any parametric probability model is to randomize one or more of the parameters. Done in a clever way, this often leads to interesting new models and unexpected connections between models. In this section we will randomize the success parameter in the Bernoulli trials process. This leads to interesting and surprising connections with Pólya's urn process.
Basic Theory
Definitions
First, recall that the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$ is a continuous distribution on the interval $(0, 1)$ with probability density function $g$ given by $g(p) = \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1}, \quad p \in (0, 1)$ where $B$ is the beta function. So $B(a, b)$ is simply the normalizing constant for the function $p \mapsto p^{a-1} (1 - p)^{b-1}$ on the interval $(0, 1)$. Here is our main definition:
Suppose that $P$ has the beta distribution with left parameter $a \in (0, \infty)$ and right parameter $b \in (0, \infty)$. Next suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of indicator random variables with the property that given $P = p \in (0, 1)$, $\bs{X}$ is a conditionally independent sequence with $\P(X_i = 1 \mid P = p) = p, \quad i \in \N_+$ Then $\bs{X}$ is the beta-Bernoulli process with parameters $a$ and $b$.
In short, given $P = p$, the sequence $\bs{X}$ is a Bernoulli trials sequence with success parameter $p$. In the usual language of reliability, $X_i$ is the outcome of trial $i$, where 1 denotes success and 0 denotes failure. For a specific application, suppose that we select a random probability of heads according to the beta distribution with parameters $a$ and $b$, and then toss a coin with this probability of heads repeatedly.
Outcome Variables
What's our first step? Well, of course we need to compute the finite dimensional distributions of $\bs{X}$. Recall that for $r \in \R$ and $j \in \N$, $r^{[j]}$ denotes the ascending power $r (r + 1) \cdots [r + (j - 1)]$. By convention, a product over an empty index set is 1, so $r^{[0]} = 1$.
Suppose that $n \in \N_+$ and $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$. Let $k = x_1 + x_2 + \cdots + x_n$. Then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}$
Proof
First, note that $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid P = p) = p^k (1 - p)^{n-k}$ by the conditional independence. Thus, conditioning on $P$ gives \begin{align} \P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) & = \E[\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid P)] \ & = \int_0^1 p^k (1 - p)^{n-k} \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} \, dp \ & = \frac{B[a + k, b + (n -k)]}{B(a, b)} = \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}} \end{align} The last step uses a property of the beta function.
From this result, it follows that Pólya's urn process with parameters $a, \, b, \, c \in \N_+$ is equivalent to the beta-Bernoulli process with parameters $a / c$ and $b / c$, quite an interesting result. Note that since the joint distribution above depends only on $x_1 + x_2 + \cdots + x_n$, the sequence $\bs{X}$ is exchangeable. Finally, it's interesting to note that the beta-Bernoulli process with parameters $a$ and $b$ could simply be defined as the sequence with the finite-dimensional distributions above, without reference to the beta distribution! It turns out that every exchangeable sequence of indicator random variables can be obtained by randomizing the success parameter in a sequence of Bernoulli trials. This is de Finetti's theorem, named for Bruno de Finetti, which is studied in the section on backwards martingales.
For each $i \in \N_+$
1. $\E(X_i) = \frac{a}{a + b}$
2. $\var(X_i) = \frac{a}{a + b} \frac{b}{a + b}$
Proof
Since the sequence is exchangeable, $X_i$ has the same distribution as $X_1$, so $\P(X_i = 1) = \frac{a}{a + b}$. The mean and variance now follow from standard results for indicator variables.
Thus $\bs{X}$ is a sequence of identically distributed variables, quite surprising at first but of course inevitable for any exchangeable sequence. Compare the joint distribution with the marginal distributions. Clearly the variables are dependent, so let's compute the covariance and correlation of a pair of outcome variables.
Suppose that $i, \; j \in \N_+$ are distinct. Then
1. $\cov(X_i, X_j) = \frac{a \, b}{(a + b)^2 (a + b + 1)}$
2. $\cor(X_i, X_j) = \frac{1}{a + b + 1}$
Proof
Since the variables are exchangeable, $\P(X_i = 1, X_j = 1) = \P(X_1 = 1, X_2 = 1) = \frac{a}{a + b} \frac{a + 1}{a + b + 1}$. The results now follow from standard formulas for covariance and correlation.
Thus, the variables are positively correlated. It turns out that in any infinite sequence of exchangeable variables, the variables must be nonnegatively correlated. Here is another result that explores how the variables are related.
Suppose that $n \in \N_+$ and $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$. Let $k = \sum_{i=1}^n x_i$. Then $\P(X_{n+1} = 1 \mid X_1 = x_1, X_2 = x_2, \ldots X_n = x_n) = \frac{a + k}{a + b + n}$
Proof
Using the joint distribution, \begin{align*} \P(X_{n+1} = 1 \mid X_1 = x_1, X_2 = x_2, \ldots X_n = x_n) & = \frac{\P( X_1 = x_1, X_2 = x_2, \ldots X_n = x_n, X_{n+1} = 1)}{\P(X_1 = x_1, X_2 = x_2, \ldots X_n = x_n)} \ & = \frac{a^{[k+1]} b^{[n-k]}}{(a + b)^{[n + 1]}} \frac{(a + b)^{[n]}}{a^{[k]} b^{[n-k]}} = \frac{a + k}{a + b + n} \end{align*}
The beta-Bernoulli model starts with the conditional distribution of $\bs X$ given $P$. Let's find the conditional distribution in the other direction.
Suppose that $n \in \N_+$ and $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$. Let $k = \sum_{i=1}^n x_i$. Then the conditional distribution of $P$ given $(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)$ is beta with left parameter $a + k$ and right parameter $b + (n - k)$. Hence $\E(P \mid X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{a + k}{a + b + n}$
Proof
This follows from Bayes' theorem. The conditional PDF $g(\cdot \mid x_1, x_2, \ldots, x_n)$ is given by $g(p \mid x_1, x_2, \ldots, x_n) = \frac{g(p) \P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid P = p)}{\int_0^1 g(t) \P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid P = t) dt}, \quad p \in (0, 1)$ The numerator is $\frac{1}{B(a, b)} p^{a-1} (1 - p)^{b-1} p^k (1 - p)^{n-k} = \frac{1}{B(a, b)} p^{a + k - 1} (1 - p)^{b + n - k - 1}$ The denominator is simply the normalizing constant for the numerator, as a function of $p$, and is $B(a + k, b + n - k) / B(a, b)$. Hence $g(p \mid x_1, x_2, \ldots, x_n) = \frac{1}{B(a + k, b + n - k)} p^{a + k - 1} (1 - p)^{b + n - k - 1}, \quad p \in (0, 1)$ The last result follows since the mean of the beta distribution is the left parameter divided by the sum of the parameters.
Thus, the left parameter increases by the number of successes while the right parameter increases by the number of failures. In the language of Bayesian statistics, the original distribution of $P$ is the prior distribution, and the conditional distribution of $P$ given the data $(x_1, x_2, \ldots, x_n)$ is the posterior distribution. The fact that the posterior distribution is beta whenever the prior distribution is beta means that the beta distribution is conjugate to the Bernoulli distribution. The conditional expected value in the last theorem is the Bayesian estimate of $p$ when $p$ is modeled by the random variable $P$. These concepts are studied in more generality in the section on Bayes Estimators in the chapter on Point Estimation. It's also interesting to note that the expected values in the last two theorems are the same: If $n \in \N$, $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ and $k = \sum_{i=1}^n x_i$ then $\E(X_{n+1} \mid X_1 = x_1, \ldots, X_n = x_n) = \E(P \mid X_1 = x_1, \ldots, X_n = x_n) = \frac{a + k}{a + b + n}$
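The conjugate update itself is a single line of arithmetic. The sketch below (an added illustration; the prior parameters and the data are made up) computes the posterior parameters and the Bayesian estimate of $p$.

```python
a, b = 2.0, 3.0                    # hypothetical prior beta parameters
data = [1, 0, 1, 1, 0, 1, 1, 1]    # hypothetical observed outcomes

k, n = sum(data), len(data)
post_a, post_b = a + k, b + (n - k)     # posterior is beta(a + k, b + n - k)
estimate = (a + k) / (a + b + n)        # Bayesian estimate of p
print(post_a, post_b, estimate)         # 8.0, 5.0, 0.615...
```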
Run the simulation of the beta coin experiment for various values of the parameter. Note how the posterior probability density function changes from the prior probability density function, given the number of heads.
The Number of Successes
It's already clear that the number of successes in a given number of trials plays an important role, so let's study these variables. For $n \in \N_+$, let $Y_n = \sum_{i=1}^n X_i$ denote the number of successes in the first $n$ trials. Of course, $\bs{Y} = (Y_0, Y_1, \ldots)$ is the partial sum process associated with $\bs{X} = (X_1, X_2, \ldots)$.
$Y_n$ has probability density function given by $\P(Y_n = k) = \binom{n}{k} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}, \quad k \in \{0, 1, \ldots, n\}$
Proof
Every bit string of length $n$ with 1 occurring exactly $k$ times has the probability given in the joint distribution above. There are $\binom{n}{k}$ such bit strings.
The distribution of $Y_n$ is known as the beta-binomial distribution with parameters $n$, $a$, and $b$.
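The beta-binomial density is easy to evaluate from the formula using ascending powers; the following sketch (an added illustration with $n = 4$, $a = 2$, $b = 3$ chosen arbitrarily) lists the density and checks that it sums to 1.

```python
from math import comb

def rising(x, j):
    # ascending power x^[j] = x (x + 1) ... (x + j - 1)
    out = 1.0
    for i in range(j):
        out *= x + i
    return out

def beta_binomial_pmf(k, n, a, b):
    return comb(n, k) * rising(a, k) * rising(b, n - k) / rising(a + b, n)

n, a, b = 4, 2, 3
pmf = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
print([round(f, 4) for f in pmf])   # [0.2143, 0.2857, 0.2571, 0.1714, 0.0714]
print(sum(pmf))                     # 1.0 up to rounding
```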
In the simulation of the beta-binomial experiment, vary the parameters and note how the shape of the probability density function of $Y_n$ (discrete) parallels the shape of the probability density function of $P$ (continuous). For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The case where the parameters are both 1 is interesting.
If $a = b = 1$, so that $P$ is uniformly distributed on $(0, 1)$, then $Y_n$ is uniformly distributed on $\{0, 1, \ldots, n\}$.
Proof
Note that $1^{[j]} = j!$ and $2^{[j]} = (j + 1)!$ for $j \in \N$. Hence, from the general PDF $Y_n$ above $\P(Y_n = k) = \frac{n!}{k! (n - k)!} \frac{k! (n - k)!}{(n + 1)!} = \frac{1}{n + 1}, \quad k \in \{0, 1, \ldots, n\}$
Next, let's compute the mean and variance of $Y_n$.
The mean and variance of $Y_n$ are
1. $\E(Y_n) = n \frac{a}{a + b}$
2. $\var(Y_n) = n \frac{a b}{(a + b)^2} \left[1 + (n - 1) \frac{1}{a + b + 1} \right]$
Proof
These results follow from the mean and covariance results given above: \begin{align} \E(Y_n) & = \sum_{i=1}^n \E(X_i) = n \frac{a}{a + b} \\ \var(Y_n) & = \sum_{i=1}^n \sum_{j=1}^n \cov(X_i, X_j) = n \frac{a \, b}{(a + b)^2} + n (n - 1) \frac{a \, b}{(a + b)^2 (a + b + 1)} \end{align}
In the simulation of the beta-binomial experiment, vary the parameters and note the location and size of the mean-standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical moments to the true moments.
We can restate the conditional distributions in the last subsection more elegantly in terms of $Y_n$.
Let $n \in \N$.
1. The conditional distribution of $X_{n+1}$ given $Y_n$ is $\P(X_{n+1} = 1 \mid Y_n) = \E(X_{n+1} \mid Y_n) = \frac{a + Y_n}{a + b + n}$
2. The conditional distribution of $P$ given $Y_n$ is beta with left parameter $a + Y_n$ and right parameter $b + (n - Y_n)$. In particular $\E(P \mid Y_n) = \frac{a + Y_n}{a + b + n}$
Proof
The proof is easy using the nesting property of conditional expected value and the fact that the conditional distributions given $(X_1, X_2, \ldots, X_n)$ depend only on $Y_n = \sum_{i=1}^n X_i$.
1. Since $Y_n$ is a function of $(X_1, X_2, \ldots, X_n)$, the tower property of conditional expected value gives \begin{align} \E(X_{n+1} \mid Y_n) & = \E[\E(X_{n+1} \mid X_1, X_2, \ldots, X_n) \mid Y_n] \\ & = \E\left(\frac{a + Y_n}{a + b + n} \biggm| Y_n\right) = \frac{a + Y_n}{a + b + n} \end{align}
2. Similarly, if $A \subseteq (0, 1)$ is measurable then $\P(P \in A \mid X_1, X_2, \ldots, X_n)$ depends only on $Y_n$, and so $\P(P \in A \mid Y_n) = \E[\P(P \in A \mid X_1, X_2, \ldots, X_n) \mid Y_n] = \P(P \in A \mid X_1, X_2, \ldots, X_n)$ Since the conditional distribution of $P$ given $(X_1, X_2, \ldots, X_n)$ is beta with left parameter $a + Y_n$ and right parameter $b + (n - Y_n)$, the same is true of the conditional distribution given $Y_n$.
Once again, the conditional expected value $\E(P \mid Y_n)$ is the Bayesian estimator of $p$. In particular, if $a = b = 1$, so that $P$ has the uniform distribution on $(0, 1)$, then $\P(X_{n+1} = 1 \mid Y_n = n) = \frac{n + 1}{n + 2}$. This is Laplace's rule of succession, another interesting connection. The rule is named for Pierre Simon Laplace, and is studied from a different point of view in the section on Independence.
The Proportion of Successes
For $n \in \N_+$, let $M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ so that $M_n$ is the sample mean of $(X_1, X_2, \ldots, X_n)$, or equivalently the proportion of successes in the first $n$ trials. Properties of $M_n$ follow easily from the corresponding properties of $Y_n$. In particular, $\P(M_n = k / n) = \P(Y_n = k)$ for $k \in \{0, 1, \ldots, n\}$ as given above, so let's move on to the mean and variance.
For $n \in \N_+$, the mean and variance of $M_n$ are
1. $\E(M_n) = \frac{a}{a + b}$
2. $\var(M_n) = \frac{1}{n} \frac{a b}{(a + b)^2} + \frac{n-1}{n} \frac{a b}{(a + b)^2 (a + b + 1)}$
Proof
These results follow from the mean and variance of $Y_n$ above and properties of expected value and variance:
1. $\E(M_n) = \frac{1}{n} \E(Y_n)$
2. $\var(M_n) = \frac{1}{n^2} \var(Y_n)$
So $\E(M_n)$ is constant in $n \in \N_+$ while $\var(M_n) \to a b \big/ \left[(a + b)^2 (a + b + 1)\right]$ as $n \to \infty$. These results suggest that perhaps $M_n$ has a limit, in some sense, as $n \to \infty$. For an ordinary sequence of Bernoulli trials with success parameter $p \in (0, 1)$, we know from the law of large numbers that $M_n \to p$ as $n \to \infty$ with probability 1 and in mean (and hence also in distribution). What happens here when the success probability $P$ has been randomized with the beta distribution? The answer is what we might hope.
$M_n \to P$ as $n \to \infty$ with probability 1 and in mean square, and hence also in distribution.
Proof
Let $g$ denote the PDF of $P$. For convergence with probability 1, we condition on $P$ \begin{align} \P(M_n \to P \text{ as } n \to \infty) & = \E[\P(M_n \to P \text{ as } n \to \infty) \mid P] \ & = \int_0^1 \P(M_n \to p \text{ as } n \to \infty \mid P = p) g(p) dp = \int_0^1 g(p) dp = 1 \end{align} For convergence in mean square, once again we condition on $P$. Note that $\E[(M_n - P)^2 \mid P = p] = \E[(M_n - p)^2 \mid P = p] = \frac{p (1 - p)}{n} \to 0\ \text{ as } n \to \infty$ Hence by the dominated convergence theorem, $\E[(M_n - P)^2] = \int_0^1 \frac{p (1 - p)}{n} g(p) dp \to 0 \text{ as } n \to \infty$
Proof of convergence in distribution
Convergence with probability 1 implies convergence in distribution, but it's interesting to give a direct proof. For $x \in (0, 1)$, note that $\P(M_n \le x) = \P(Y_n \le n x) = \sum_{k=0}^{\lfloor nx \rfloor} \binom{n}{k} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}$ where $\lfloor \cdot \rfloor$ is the floor function. But recall that $\frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}} = \frac{B(a + k, b + n - k)}{B(a, b)} = \frac{1}{B(a, b)} \int_0^1 p^{a + k - 1} (1 - p)^{b + n - k - 1} dp$ Substituting and doing some algebra we get $\P(M_n \le x) = \frac{1}{B(a, b)} \int_0^1 \left[\sum_{k=0}^{\lfloor nx \rfloor} \binom{n}{k} p^k (1 - p)^{n-k}\right] p^{a - 1}(1 - p)^{b - 1} dp$ The sum in the square brackets is $\P(W_n \le n x) = \P(W_n / n \le x)$ where $W_n$ has the ordinary binomial distribution with parameters $n$ and $p$. But $W_n / n$ converges (in every sense) to $p$ as $n \to \infty$ so $\P(W_n / n \le x) \to \bs{1}(p \le x)$ as $n \to \infty$. So by the dominated convergence theorem, $\P(M_n \le x) \to \frac{1}{B(a, b)} \int_0^x p^{a - 1} (1 - p)^{b - 1} dp = \P(P \le x)$
Recall again that the Bayesian estimator of $p$ based on $(X_1, X_2, \ldots, X_n)$ is $\E(P \mid Y_n) = \frac{a + Y_n}{a + b + n} = \frac{a / n + M_n}{a / n + b / n + 1}$ It follows from the last theorem that $\E(P \mid Y_n) \to P$ with probability 1, in mean square, and in distribution. The stochastic process $\bs Z = \{Z_n = (a + Y_n) / (a + b + n): n \in \N\}$ that we have seen several times now is of fundamental importance, and turns out to be a martingale. The theory of martingales provides powerful tools for studying convergence in the beta-Bernoulli process.
The Trial Number of a Success
For $k \in \N_+$, let $V_k$ denote the trial number of the $k$th success. As we have seen before in similar circumstances, the process $\bs{V} = (V_1, V_2, \ldots)$ can be defined in terms of the process $\bs{Y}$: $V_k = \min\{n \in \N_+: Y_n = k\}, \quad k \in \N_+$ Note that $V_k$ takes values in $\{k, k + 1, \ldots\}$. The random processes $\bs{V} = (V_1, V_2, \ldots)$ and $\bs{Y} = (Y_0, Y_1, \ldots)$ are inverses of each other in a sense.
For $k \in \N$ and $n \in \N_+$ with $k \le n$,
1. $V_k \le n$ if and only if $Y_n \ge k$
2. $V_k = n$ if and only if $Y_{n-1} = k - 1$ and $X_n = 1$
The probability density function of $V_k$ is given by $\P(V_k = n) = \binom{n - 1}{k - 1} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}, \quad n \in \{k, k + 1, \ldots\}$
Proof 1
As usual, we can condition on $P$ and use known results for ordinary Bernoulli trials. Given $P = p$, random variable $V_k$ has the negative binomial distribution with parameters $k$ and $p$. Hence \begin{align*} \P(V_k = n) & = \int_0^1 \P(V_k = n \mid P = p) g(p) dp = \int_0^1 \binom{n - 1}{k - 1} p^k (1 - p)^{n-k} \frac{1}{B(a, b)} p^{a-1} (1 - p)^{b - 1} dp \ & = \binom{n - 1}{k - 1} \frac{1}{B(a, b)} \int_0^1 p^{a + k - 1} (1 - p)^{b + n - k - 1} dp \ & = \binom{n - 1}{k - 1} \frac{B(a + k, b + n - k)}{B(a, b)} = \binom{n - 1}{k - 1} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}} \end{align*}
Proof 2
In this proof, we condition on $Y_{n-1}$. Using the PDF of $Y_{n-1}$ and the result above, \begin{align*} \P(V_k = n) & = \P(Y_{n-1} = k - 1, X_n = 1) = \P(Y_{n-1} = k - 1) \P(X_n = 1 \mid Y_{n-1} = k - 1) \ & = \binom{n - 1}{k - 1} \frac{a^{[k - 1]} b^{[(n - 1) - (k - 1)]}}{(a + b)^{[n-1]}} \frac{a + k - 1}{a + b + (n - 1)} = \binom{n - 1}{k - 1} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}} \end{align*}
The distribution of $V_k$ is known as the beta-negative binomial distribution with parameters $k$, $a$, and $b$.
If $a = b = 1$ so that $P$ is uniformly distributed on $(0, 1)$, then $\P(V_k = n) = \frac{k}{n (n + 1)}, \quad n \in \{k, k + 1, k + 2, \ldots\}$
Proof
Recall again that $1^{[j]} = j!$ and $2^{[j]} = (j + 1)!$ for $j \in \N$. Hence from the previous result, $\P(V_k = n) = \frac{(n - 1)!}{(k - 1)! (n - k)!} \frac{k! (n - k)!}{(n + 1)!} = \frac{k}{n (n + 1)}, \quad n \in \{k, k + 1, \ldots\}$
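The beta-negative binomial density can be evaluated the same way as the beta-binomial density; the sketch below (an added check) verifies the formula $k \big/ [n (n + 1)]$ in the special case $a = b = 1$.

```python
from math import comb

def rising(x, j):
    # ascending power x^[j]
    out = 1.0
    for i in range(j):
        out *= x + i
    return out

def beta_neg_binomial_pmf(n, k, a, b):
    # P(V_k = n) = C(n - 1, k - 1) a^[k] b^[n - k] / (a + b)^[n]
    return comb(n - 1, k - 1) * rising(a, k) * rising(b, n - k) / rising(a + b, n)

k = 2
print([round(beta_neg_binomial_pmf(n, k, 1, 1), 4) for n in range(k, k + 5)])
print([round(k / (n * (n + 1)), 4) for n in range(k, k + 5)])   # same values
```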
In the simulation of the beta-negative binomial experiment, vary the parameters and note the shape of the probability density function. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The mean and variance of $V_k$ are
1. $\E(V_k) = k \frac{a + b - 1}{a - 1}$ if $a \gt 1$.
2. $\var(V_k) = k \frac{a + b - 1}{(a - 1)(a - 2)} [b + k (a + b - 2)] - k^2 \left(\frac{a + b - 1}{a - 1}\right)^2$ if $a \gt 2$.
Proof
From our work with the negative binomial distribution we know that $\E(V_k \mid P = p) = k \frac{1}{p}$ and $\E(V_k^2 \mid P = p) = k \frac{1 - p}{p^2} + \frac{k^2}{p^2}$. Thus, conditioning on $P$ we have $\E(V_k) = \E[\E(V_k \mid P)] = \int_0^1 \frac{k}{p} \frac{p^{a-1} (1 - p)^{b - 1}}{B(a, b)} dp = k \frac{B(a - 1, b)}{B(a, b)} = k \frac{a + b - 1}{a - 1}$ which gives part (a). Similarly \begin{align} \E(V_k^2) & = \E[\E(V_k^2 \mid P)] = \int_0^1 \left(k \frac{1 - p}{p^2} + \frac{k^2}{p^2}\right) \frac{p^{a - 1} (1 - p)^{b - 1}}{B(a, b)} dp \\ & = k \frac{B(a - 2, b + 1)}{B(a, b)} + k^2 \frac{B(a - 2, b)}{B(a, b)} = k \frac{b (a + b - 1)}{(a - 1)(a - 2)} + k^2 \frac{(a + b - 1)(a + b - 2)}{(a - 1)(a - 2)} \end{align} Simplifying and using part (a) gives part (b).
In the simulation of the beta-negative binomial experiment, vary the parameters and note the location and size of the mean$\pm$standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical moments to the true moments. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/11%3A_Bernoulli_Trials/11.07%3A_The_Beta-Bernoulli_Process.txt |
This chapter explores a number of models and problems based on sampling from a finite population. Sampling without replacement from a population of objects of various types leads to the hypergeometric and multivariate hypergeometric models. Sampling with replacement from a finite population leads naturally to the birthday and coupon-collector problems. Sampling without replacement from an ordered population leads naturally to the matching problem and to the study of order statistics.
12: Finite Sampling Models
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\jack}{\text{j}}$ $\newcommand{\queen}{\text{q}}$ $\newcommand{\king}{\text{k}}$
Basic Theory
Sampling Models
Suppose that we have a population $D$ of $m$ objects. The population could be a deck of cards, a set of people, an urn full of balls, or any number of other collections. In many cases, we simply label the objects from 1 to $m$, so that $D = \{1, 2, \ldots, m\}$. In other cases (such as the card experiment), it may be more natural to label the objects with vectors. In any case, $D$ is usually a finite subset of $\R^k$ for some $k \in \N_+$.
Our basic experiment consists of selecting $n$ objects from the population $D$ at random and recording the sequence of objects chosen. Thus, the outcome is $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i \in D$ is the $i$th object chosen. If the sampling is with replacement, the sample size $n$ can be any positive integer. In this case, the sample space $S$ is $S = D^n = \left\{(x_1, x_2, \ldots, x_n): x_i \in D \text{ for each } i \right\}$ If the sampling is without replacement, the sample size $n$ can be no larger than the population size $m$. In this case, the sample space $S$ consists of all permutations of size $n$ chosen from $D$: $S = D_n = \left\{(x_1, x_2, \ldots, x_n): x_i \in D \text{ for each } i \text{ and } x_i \ne x_j \text{ for all } i \ne j\right\}$
From the multiplication principle of combinatorics,
1. $\#(D^n) = m^n$
2. $\#(D_n) = m^{(n)} = m (m - 1) \cdots (m - n + 1)$
With either type of sampling, we assume that the samples are equally likely and thus that the outcome variable $\bs{X}$ is uniformly distributed on the appropriate sample space $S$; this is the meaning of the phrase random sample: $\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$
The Exchangeable Property
Suppose again that we select $n$ objects at random from the population $D$, either with or without replacement and record the ordered sample $\bs{X} = (X_1, X_2, \ldots, X_n)$
Any permutation of $\bs{X}$ has the same distribution as $\bs{X}$ itself, namely the uniform distribution on the appropriate sample space $S$:
1. $D^n$ if the sampling is with replacement.
2. $D_n$ if the sampling is without replacement.
A sequence of random variables with this property is said to be exchangeable. Although this property is very simple to understand, both intuitively and mathematically, it is nonetheless very important. We will use the exchangeable property often in this chapter.
More generally, any sequence of $k$ of the $n$ outcome variables is uniformly distributed on the appropriate sample space:
1. $D^k$ if the sampling is with replacement.
2. $D_k$ if the sampling is without replacement.
In particular, for either sampling method, $X_i$ is uniformly distributed on $D$ for each $i \in \{1, 2, \ldots, n\}$.
If the sampling is with replacement then $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables.
Thus, when the sampling is with replacement, the sample variables form a random sample from the uniform distribution, in statistical terminology.
If the sampling is without replacement, then the conditional distribution of a sequence of $k$ of the outcome variables, given the values of a sequence of $j$ other outcome variables, is the uniform distribution on the set of permutations of size $k$ chosen from the population when the $j$ known values are removed (of course, $j + k \le n$).
In particular, $X_i$ and $X_j$ are dependent for any distinct $i$ and $j$ when the sampling is without replacement.
The Unordered Sample
In many cases when the sampling is without replacement, the order in which the objects are chosen is not important; all that matters is the (unordered) set of objects: $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ The random set $\bs{W}$ takes values in the set of combinations of size $n$ chosen from $D$: $T = \left\{ \{x_1, x_2, \ldots, x_n\}: x_i \in D \text{ for each } i \text{ and } x_i \ne x_j \text{ for all } i \ne j \right\}$
Recall that $\#(T) = \binom{m}{n}$.
$\bs{W}$ is uniformly distributed over $T$: $\P(\bs{W} \in B) = \frac{\#(B)}{\#(T)} = \frac{\#(B)}{\binom{m}{n}}, \quad B \subseteq T$
Proof
For any combination of size $n$ from $D$, there are $n!$ permutations of size $n$.
Suppose now that the sampling is with replacement, and we again denote the unordered outcome by $\bs{W}$. In this case, $\bs{W}$ takes values in the collection of multisets of size $n$ from $D$. (A multiset is like an ordinary set, except that repeated elements are allowed). $T = \left\{ \{x_1, x_2, \ldots, x_n\}: x_i \in D \text{ for each } i \right\}$
Recall that $\#(T) = \binom{m + n - 1}{n}$.
$\bs{W}$ is not uniformly distributed on $T$. For example, if $m = 2$ and $n = 2$, the multiset $\{1, 2\}$ has probability $\frac{1}{2}$ while $\{1, 1\}$ and $\{2, 2\}$ each have probability $\frac{1}{4}$.
Summary of Sampling Formulas
The following table summarizes the formulas for the number of samples of size $n$ chosen from a population of $m$ elements, based on the criteria of order and replacement.
Sampling Formulas
| Number of samples | With order | Without order |
| --- | --- | --- |
| With replacement | $m^n$ | $\binom{m + n - 1}{n}$ |
| Without replacement | $m^{(n)}$ | $\binom{m}{n}$ |
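The four counting formulas are available directly in the Python standard library; the sketch below (an added illustration with $m = 10$, $n = 4$ chosen arbitrarily) evaluates each entry of the table.

```python
from math import comb, perm

m, n = 10, 4
print(m ** n)               # ordered, with replacement: 10000
print(perm(m, n))           # ordered, without replacement: 5040
print(comb(m + n - 1, n))   # unordered, with replacement: 715
print(comb(m, n))           # unordered, without replacement: 210
```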
Examples and Applications
Suppose that a sample of size 2 is chosen from the population $\{1, 2, 3, 4\}$. Explicitly list all samples in the following cases:
1. Ordered samples, with replacement.
2. Ordered samples, without replacement.
3. Unordered samples, with replacement.
4. Unordered samples, without replacement.
Answer
1. $\{(1,1), (1,2), (1,3), (1,4), (2,1), (2,2), (2,3), (2,4), (3,1), (3,2), (3,3), (3,4), (4,1), (4,2), (4,3), (4,4)\}$
2. $\{(1,2), (1,3), (1,4), (2,1), (2,3), (2,4), (3,1), (3,2), (3,4), (4,1), (4,2), (4,3)\}$
3. $\{\{1,1\}, \{1,2\}, \{1,3\}, \{1,4\}, \{2,2\}, \{2,3\}, \{2,4\}, \{3,3\}, \{3,4\}, \{4,4\}\}$
4. $\{\{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\}\}$
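The four lists above can be generated mechanically with `itertools`; this sketch is an added illustration.

```python
from itertools import product, permutations, combinations, combinations_with_replacement

D = [1, 2, 3, 4]
print(list(product(D, repeat=2)))                 # ordered, with replacement (16 samples)
print(list(permutations(D, 2)))                   # ordered, without replacement (12)
print(list(combinations_with_replacement(D, 2)))  # unordered, with replacement (10)
print(list(combinations(D, 2)))                   # unordered, without replacement (6)
```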
Multi-type Populations
A dichotomous population consists of two types of objects.
Suppose that a batch of 100 components includes 10 that are defective. A random sample of 5 components is selected without replacement. Compute the probability that the sample contains at least one defective component.
Answer
0.4162
An urn contains 50 balls, 30 red and 20 green. A sample of 15 balls is chosen at random. Find the probability that the sample contains 10 red balls in each of the following cases:
1. The sampling is without replacement
2. The sampling is with replacement
Answer
1. 0.2070
2. 0.1859
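A quick check of the two answers above (an added sketch): the first computation is the hypergeometric probability, the second the binomial probability with success parameter $30/50$.

```python
from math import comb

p_without = comb(30, 10) * comb(20, 5) / comb(50, 15)     # sampling without replacement
p_with = comb(15, 10) * (30 / 50) ** 10 * (20 / 50) ** 5  # sampling with replacement
print(p_without, p_with)                                  # about 0.2070 and 0.1859
```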
In the ball and urn experiment select 50 balls with 30 red balls, and sample size 15. Run the experiment 100 times. Compute the relative frequency of the event that the sample has 10 red balls in each of the following cases, and compare with the respective probability in the previous exercise:
1. The sampling is without replacement
2. The sampling is with replacement
Suppose that a club has 100 members, 40 men and 60 women. A committee of 10 members is selected at random (and without replacement, of course).
1. Find the probability that both genders are represented on the committee.
2. If you observed the experiment and in fact the committee members are all of the same gender, would you believe that the sampling was random?
Answer
1. 0.9956
2. No
Suppose that a small pond contains 500 fish, 50 of them tagged. A fisherman catches 10 fish. Find the probability that the catch contains at least 2 tagged fish.
Answer
0.2635
The basic distribution that arises from sampling without replacement from a dichotomous population is studied in the section on the hypergeometric distribution. More generally, a multi-type population consists of objects of $k$ different types.
Suppose that a legislative body consists of 60 republicans, 40 democrats, and 20 independents. A committee of 10 members is chosen at random. Find the probability that at least one party is not represented on the committee.
Answer
0.1633. Use the inclusion-exclusion law.
The basic distribution that arises from sampling without replacement from a multi-type population is studied in the section on the multivariate hypergeometric distribution.
Cards
Recall that a standard card deck can be modeled by the product set $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, \jack, \queen, \king\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate encodes the denomination or kind (ace, 2-10, jack, queen, king) and where the second coordinate encodes the suit (clubs, diamonds, hearts, spades). The general card experiment consists of drawing $n$ cards at random and without replacement from the deck $D$. Thus, the $i$th card is $X_i = (Y_i, Z_i)$ where $Y_i$ is the denomination and $Z_i$ is the suit. The special case $n = 5$ is the poker experiment and the special case $n = 13$ is the bridge experiment. Note that with respect to the denominations or with respect to the suits, a deck of cards is a multi-type population as discussed above.
In the card experiment with $n = 5$ cards (poker), there are
1. 311,875,200 ordered hands
2. 2,598,960 unordered hands
In the card experiment with $n = 13$ cards (bridge), there are
1. 3,954,242,643,911,239,680,000 ordered hands
2. 635,013,559,600 unordered hands
In the card experiment, set $n = 5$. Run the simulation 5 times and on each run, list all of the (ordered) sequences of cards that would give the same unordered hand as the one you observed.
In the card experiment,
1. $Y_i$ is uniformly distributed on $\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, \jack, \queen, \king\}$ for each $i$.
2. $Z_j$ is uniformly distributed on $\{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ for each $i$.
In the card experiment, $Y_i$ and $Z_j$ are independent for any $i$ and $j$.
In the card experiment, $(Y_1, Y_2)$ and $(Z_1, Z_2)$ are dependent.
Suppose that a sequence of 5 cards is dealt. Find each of the following:
1. The probability that the third card is a spade.
2. The probability that the second and fourth cards are queens.
3. The conditional probability that the second card is a heart given that the fifth card is a heart.
4. The probability that the third card is a queen and the fourth card is a heart.
Answer
1. $\frac{1}{4}$
2. $\frac{1}{221}$
3. $\frac{4}{17}$
4. $\frac{1}{52}$
Run the card experiment 500 times. Compute the relative frequency corresponding to each probability in the previous exercise.
Find the probability that a bridge hand will contain no honor cards, that is, no cards of denomination 10, jack, queen, king, or ace. Such a hand is called a Yarborough, in honor of the second Earl of Yarborough.
Answer
0.000547
Dice
Rolling $n$ fair, six-sided dice is equivalent to choosing a random sample of size $n$ with replacement from the population $\{1, 2, 3, 4, 5, 6\}$. Generally, selecting a random sample of size $n$ with replacement from $D = \{1, 2, \ldots, m\}$ is equivalent to rolling $n$ fair, $m$-sided dice.
In the game of poker dice, 5 standard, fair dice are thrown. Find each of the following:
1. The probability that all dice show the same score.
2. The probability that the scores are distinct.
3. The probability that 1 occurs twice and 6 occurs 3 times.
Answer
1. $\frac{1}{1296}$
2. $\frac{5}{54}$
3. $\frac{5}{3888}$
Run the poker dice experiment 500 times. Compute the relative frequency of each event in the previous exercise and compare with the corresponding probability.
The game of poker dice is treated in more detail in the chapter on Games of Chance.
Birthdays
Suppose that we select $n$ persons at random and record their birthdays. If we assume that birthdays are uniformly distributed throughout the year, and if we ignore leap years, then this experiment is equivalent to selecting a sample of size $n$ with replacement from $D = \{1, 2, \ldots, 365\}$. Similarly, we could record birth months or birth weeks.
Suppose that a probability class has 30 students. Find each of the following:
1. The probability that the birthdays are distinct.
2. The probability that there is at least one duplicate birthday.
Answer
1. 0.2937
2. 0.7063
In the birthday experiment, set $m = 365$ and $n = 30$. Run the experiment 1000 times and compare the relative frequency of each event in the previous exercise to the corresponding probability.
The birthday problem is treated in more detail later in this chapter.
Balls into Cells
Suppose that we distribute $n$ distinct balls into $m$ distinct cells at random. This experiment also fits the basic model, where $D$ is the population of cells and $X_i$ is the cell containing the $i$th ball. Sampling with replacement means that a cell may contain more than one ball; sampling without replacement means that a cell may contain at most one ball.
Suppose that 5 balls are distributed into 10 cells (with no restrictions). Find each of the following:
1. The probability that the balls are all in different cells.
2. The probability that the balls are all in the same cell.
Answer
1. $\frac{189}{625}$
2. $\frac{1}{10000}$
Coupons
Suppose that when we purchase a certain product (bubble gum, or cereal for example), we receive a coupon (a baseball card or small toy, for example), which is equally likely to be any one of $m$ types. We can think of this experiment as sampling with replacement from the population of coupon types; $X_i$ is the coupon that we receive on the $i$th purchase.
Suppose that a kid's meal at a fast food restaurant comes with a toy. The toy is equally likely to be any of 5 types. Suppose that a mom buys a kid's meal for each of her 3 kids. Find each of the following:
1. The probability that the toys are all the same.
2. The probability that the toys are all different.
Answer
1. $\frac{1}{25}$
2. $\frac{12}{25}$
The coupon collector problem is studied in more detail later in this chapter.
The Key Problem
Suppose that a person has $n$ keys, only one of which opens a certain door. The person tries the keys at random. We will let $N$ denote the trial number when the person finds the correct key.
Suppose that unsuccessful keys are discarded (the rational thing to do, of course). Then $N$ has the uniform distribution on $\{1, 2, \ldots, n\}$.
1. $\P(N = i) = \frac{1}{n}, \quad i \in \{1, 2, \ldots, n\}$.
2. $\E(N) = \frac{n + 1}{2}$.
3. $\var(N) = \frac{n^2 - 1}{12}$.
Suppose that unsuccessful keys are not discarded (perhaps the person has had a bit too much to drink). Then $N$ has a geometric distribution on $\N_+$.
1. $\P(N = i) = \frac{1}{n} \left( \frac{n-1}{n} \right)^{i-1}, \quad i \in \N_+$.
2. $\E(N) = n$.
3. $\var(N) = n (n - 1)$.
Simulating Random Samples
It's very easy to simulate a random sample of size $n$, with replacement from $D = \{1, 2, \ldots, m\}$. Recall that the ceiling function $\lceil x \rceil$ gives the smallest integer that is at least as large as $x$.
Let $\bs{U} = (U_1, U_2, \ldots, U_n)$ be a sequence of random numbers. Recall that these are independent random variables, each uniformly distributed on the interval $[0, 1]$ (the standard uniform distribution). Then $X_i = \lceil m \, U_i \rceil$ for $i \in \{1, 2, \ldots, n\}$ simulates a random sample, with replacement, from $D$.
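For example, here is a Python sketch of this construction (random.random() plays the role of a random number $U_i$; the function name is ours):

```python
import math
import random

def sample_with_replacement(m, n):
    """Simulate X_i = ceil(m * U_i) for i = 1, ..., n."""
    sample = []
    for _ in range(n):
        u = random.random()                              # standard uniform on [0, 1)
        sample.append(math.ceil(m * u) if u > 0 else 1)  # value lands in {1, ..., m}
    return sample

print(sample_with_replacement(6, 5))   # for example, five rolls of a fair die
```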
It's a bit harder to simulate a random sample of size $n$, without replacement, since we need to remove each sample value before the next draw.
The following algorithm generates a random sample of size $n$, without replacement, from $D$.
1. For $i = 1$ to $m$, let $b_i = i$.
2. For $i = 1$ to $n$,
1. let $j = m - i + 1$
2. let $U_i$ be a random number
3. let $J = \lceil j U_i \rceil$
4. let $X_i = b_J$
5. let $k = b_j$
6. let $b_j = b_J$
7. let $b_J = k$
3. Return $\bs{X} = (X_1, X_2, \ldots, X_n)$
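Here is a Python sketch of the algorithm (the function name is ours). A uniform random index on $\{1, 2, \ldots, j\}$ stands in for $\lceil j U_i \rceil$, which has the same distribution:

```python
import random

def sample_without_replacement(m, n):
    """Select n values at random, without replacement, from {1, ..., m}."""
    b = list(range(1, m + 1))          # b[0], ..., b[m-1] hold the population values
    x = []
    for i in range(1, n + 1):
        j = m - i + 1                  # number of values still available
        J = random.randint(1, j)       # uniform on {1, ..., j}, like ceil(j * U_i)
        x.append(b[J - 1])             # record the selected value
        b[J - 1], b[j - 1] = b[j - 1], b[J - 1]   # swap it out of the active range
    return x

print(sample_without_replacement(25, 5))
```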
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
Dichotomous Populations
Suppose that we have a dichotomous population $D$. That is, a population that consists of two types of objects, which we will refer to as type 1 and type 0. For example, we could have
• balls in an urn that are either red or green
• a batch of components that are either good or defective
• a population of people who are either male or female
• a population of animals that are either tagged or untagged
• voters who are either democrats or republicans
Let $R$ denote the subset of $D$ consisting of the type 1 objects, and suppose that $\#(D) = m$ and $\#(R) = r$. As in the basic sampling model, we sample $n$ objects at random from $D$. In this section, our only concern is in the types of the objects, so let $X_i$ denote the type of the $i$th object chosen (1 or 0). The random vector of types is $\bs{X} = (X_1, X_2, \ldots, X_n)$ Our main interest is the random variable $Y$ that gives the number of type 1 objects in the sample. Note that $Y$ is a counting variable, and thus like all counting variables, can be written as a sum of indicator variables, in this case the type variables: $Y = \sum_{i=1}^n X_i$ We will assume initially that the sampling is without replacement, which is usually the realistic setting with dichotomous populations.
The Probability Density Function
Recall that since the sampling is without replacement, the unordered sample is uniformly distributed over the set of all combinations of size $n$ chosen from $D$. This observation leads to a simple combinatorial derivation of the probability density function of $Y$.
The probability density function of $Y$ is given by $\P(Y = y) = \frac{\binom{r}{y} \binom{m - r}{n -y}}{\binom{m}{n}}, \quad y \in \left\{\max\{0, n - (m - r)\}, \ldots, \min\{n, r\}\right\}$
Proof
Consider the unordered outcome, which is uniformly distributed on the set of combinations of size $n$ chosen from the population of size $m$. The number of ways to select $y$ type 1 objects from the $r$ type 1 objects in the population is $\binom{r}{y}$. Similarly the number of ways to select the remaining $n - y$ type 0 objects from the $m - r$ type 0 objects in the population is $\binom{m - r}{n - y}$. Finally the number of ways to select the sample of size $n$ from the population of size $m$ is $\binom{m}{n}$.
The distribution defined by this probability density function is known as the hypergeometric distribution with parameters $m$, $r$, and $n$.
Another form of the probability density function of $Y$ is
$\P(Y = y) = \binom{n}{y} \frac{r^{(y)} (m - r)^{(n - y)}}{m^{(n)}}, \quad y \in \left\{\max\{0, n - (m - r)\}, \ldots, \min\{n, r\}\right\}$
Combinatorial Proof
The combinatorial proof is much like the previous proof, except that we consider the ordered sample, which is uniformly distributed on the set of permutations of size $n$ chosen from the population of $m$ objects. The binomial coefficient $\binom{n}{y}$ is the number of ways to select the coordinates where the type 1 objects will go; $r^{(y)}$ is the number of ways to select an ordered sequence of $y$ type 1 objects; and $(m - r)^{(n-y)}$ is the number of ways to select an ordered sequence of $n - y$ type 0 objects. Finally $m^{(n)}$ is the number of ways to select an ordered sequence of $n$ objects from the population.
Algebraic Proof
The new form of the PDF can also be derived algebraically by starting with the previous form of the PDF. Use the formula $\binom{k}{j} = k^{(j)} / j!$ for each binomial coefficient, and then rearrange things a bit.
Recall our convention that $j^{(i)} = \binom{j}{i} = 0$ for $i \gt j$. With this convention, the two formulas for the probability density function are correct for $y \in \{0, 1, \ldots, n\}$. We usually use this simpler set as the set of values for the hypergeometric distribution.
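Both forms of the density are easy to evaluate numerically. The following Python sketch (function names ours) checks that they agree and sum to 1 over $\{0, 1, \ldots, n\}$; Python's math.comb and math.perm return 0 in the cases covered by the convention above:

```python
from math import comb, perm

def hyper_pdf(y, m, r, n):
    """First form: binomial coefficients."""
    return comb(r, y) * comb(m - r, n - y) / comb(m, n)

def hyper_pdf_alt(y, m, r, n):
    """Second form: falling powers, perm(a, k) = a^(k)."""
    return comb(n, y) * perm(r, y) * perm(m - r, n - y) / perm(m, n)

m, r, n = 50, 3, 5    # note r < n, so the convention matters for y = 4, 5
assert all(abs(hyper_pdf(y, m, r, n) - hyper_pdf_alt(y, m, r, n)) < 1e-12
           for y in range(n + 1))
print(sum(hyper_pdf(y, m, r, n) for y in range(n + 1)))   # 1.0, up to rounding
```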
The hypergeometric distribution is unimodal. Let $v = \frac{(r + 1)(n + 1)}{m + 2}$. Then
1. $\P(Y = y) \gt \P(Y = y - 1)$ if and only if $y \lt v$.
2. The mode occurs at $\lfloor v \rfloor$ if $v$ is not an integer, and at $v$ and $v - 1$ if $v$ is an integer greater than 0.
In the ball and urn experiment, select sampling without replacement. Vary the parameters and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and compare the relative frequency function to the probability density function.
You may wonder about the rather exotic name hypergeometric distribution, which seems to have nothing to do with sampling from a dichotomous population. The name comes from a power series, which was studied by Leonhard Euler, Carl Friedrich Gauss, Bernhard Riemann, and others.
A (generalized) hypergeometric series is a power series $\sum_{k=0}^\infty a_k x^k$ where $k \mapsto a_{k+1} \big/ a_k$ is a rational function (that is, a ratio of polynomials).
Many of the basic power series studied in calculus are hypergeometric series, including the ordinary geometric series and the exponential series.
The probability generating function of the hypergeometric distribution is a hypergeometric series.
Proof
The PGF is $P(t) = \sum_{k=0}^n f(k) t^k$ where $f$ is the hypergeometric PDF, given above. Simple algebra shows that $\frac{f(k+1)}{f(k)} = \frac{(r - k)(n - k)}{(k + 1)(m - r - n + k + 1)}$
In addition, the hypergeometric distribution function can be expressed in terms of a hypergeometric series. These representations are not particularly helpful, so we are basically stuck with the non-descriptive term for historical reasons.
Moments
Next we will derive the mean and variance of $Y$. The exchangeable property of the indicator variables, and properties of covariance and correlation will play a key role.
$\E(X_i) = \frac{r}{m}$ for each $i$.
Proof
Recall that $X_i$ is an indicator variable with $\P(X_i = 1) = r / m$ for each $i$.
From the representation of $Y$ as the sum of indicator variables, the expected value of $Y$ is trivial to compute. But just for fun, we give the derivation from the probability density function as well.
$\E(Y) = n \frac{r}{m}$.
Proof
This follows from the previous result and the additive property of expected value.
Proof from the definition
Using the hypergeometric PDF, $\E(Y) = \sum_{y=0}^n y \frac{\binom{r}{y} \binom{m - r}{n - y}}{\binom{m}{n}}$ Note that the $y = 0$ term is 0. For the other terms, we can use the identity $y \binom{r}{y} = r \binom{r-1}{y-1}$ to get $\E(Y) = \frac{r}{\binom{m}{n}} \sum_{y=1}^n \binom{r - 1}{y - 1} \binom{m - r}{n - y}$ But substituting $k = y - 1$ and using another fundamental identity, $\sum_{y=1}^n \binom{r - 1}{y - 1} \binom{m - r}{n - y} = \sum_{k=0}^{n-1} \binom{r - 1}{k} \binom{m - r}{n - 1 - k} = \binom{m - 1}{n - 1}$ So substituting and doing a bit of algebra gives $\E(Y) = n \frac{r}{m}$.
Next we turn to the variance of the hypergeometric distribution. For that, we will need not only the variances of the indicator variables, but their covariances as well.
$\var(X_i) = \frac{r}{m}(1 - \frac{r}{m})$ for each $i$.
Proof
Again this follows because $X_i$ is an indicator variable with $\P(X_i = 1) = r / m$ for each $i$.
For distinct $i, \; j$,
1. $\cov\left(X_i, X_j\right) = -\frac{r}{m}(1 - \frac{r}{m}) \frac{1}{m - 1}$
2. $\cor\left(X_i, X_j\right) = -\frac{1}{m - 1}$
Proof
Note that $X_i \, X_j$ is an indicator variable that indicates the event that the $i$th and $j$th objects are both type 1. By the exchangeable property, $\P\left(X_i \, X_j = 1\right) = \P(X_i = 1) \P\left(X_j = 1 \mid X_i = 1\right) = \frac{r}{m} \frac{r-1}{m-1}$. Part (a) then follows from $\cov\left(X_i, X_j\right) = \E\left(X_i X_j\right) - \E(X_i) \E\left(X_j\right)$. Part (b) follows from part (a) and the definition of correlation.
Note that the event of a type 1 object on draw $i$ and the event of a type 1 object on draw $j$ are negatively correlated, but the correlation depends only on the population size and not on the number of type 1 objects. Note also that the correlation is perfect ($-1$) if $m = 2$, as it must be, since in that case the type of the first object drawn determines the type of the second.
$\var(Y) = n \frac{r}{m} (1 - \frac{r}{m}) \frac{m - n}{m - 1}$.
Proof
This result follows from the previous results on the variance and covariance of the indicator variables. Recall that the variance of $Y$ is the sum of $\cov\left(X_i, X_j\right)$ over all $i$ and $j$.
Note that $\var(Y) = 0$ if $r = 0$ or $r = m$ or $n = m$, which must be true since $Y$ is deterministic in each of these cases.
In the ball and urn experiment, select sampling without replacement. Vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
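The same comparison can be carried out outside the app with a few lines of Python (a rough sketch; the parameter values below are arbitrary):

```python
import random
import statistics

m, r, n = 50, 20, 10                      # population size, type 1 objects, sample size
population = [1] * r + [0] * (m - r)

runs = [sum(random.sample(population, n)) for _ in range(1000)]
print(statistics.mean(runs), statistics.stdev(runs))      # empirical mean and sd of Y

mean = n * r / m
sd = (n * (r / m) * (1 - r / m) * (m - n) / (m - 1)) ** 0.5
print(mean, sd)                                           # true mean and sd of Y
```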
Sampling with Replacement
Suppose now that the sampling is with replacement, even though this is usually not realistic in applications.
$(X_1, X_2, \ldots, X_n)$ is a sequence of $n$ Bernoulli trials with success parameter $\frac{r}{m}$.
The following results now follow immediately from the general theory of Bernoulli trials, although modifications of the arguments above could also be used.
$Y$ has the binomial distribution with parameters $n$ and $\frac{r}{m}$: $\P(Y = y) = \binom{n}{y} \left(\frac{r}{m}\right)^y \left(1 - \frac{r}{m}\right)^{n-y}, \quad y \in \{0, 1, \ldots, n\}$
The mean and variance of $Y$ are
1. $\E(Y) = n \frac{r}{m}$
2. $\var(Y) = n \frac{r}{m} (1 - \frac{r}{m})$
Note that for any values of the parameters, the mean of $Y$ is the same, whether the sampling is with or without replacement. On the other hand, the variance of $Y$ is smaller, by a factor of $\frac{m - n}{m - 1}$, when the sampling is without replacement than with replacement. It certainly makes sense that the variance of $Y$ should be smaller when sampling without replacement, since each selection reduces the variability in the population that remains. The factor $\frac{m - n}{m - 1}$ is sometimes called the finite population correction factor.
In the ball and urn experiment, vary the parameters and switch between sampling without replacement and sampling with replacement. Note the difference between the graphs of the hypergeometric probability density function and the binomial probability density function. Note also the difference between the mean $\pm$ standard deviation bars. For selected values of the parameters and for the two different sampling modes, run the simulation 1000 times.
Convergence of the Hypergeometric Distribution to the Binomial
Suppose that the population size $m$ is very large compared to the sample size $n$. In this case, it seems reasonable that sampling without replacement is not too much different than sampling with replacement, and hence the hypergeometric distribution should be well approximated by the binomial. The following exercise makes this observation precise. Practically, it is a valuable result, since the binomial distribution has fewer parameters. More specifically, we do not need to know the population size $m$ and the number of type 1 objects $r$ individually, but only in the ratio $r / m$.
Suppose that $r_m \in \{0, 1, \ldots, m\}$ for each $m \in \N_+$ and that $r_m / m \to p \in [0, 1]$ as $m \to \infty$. Then for fixed $n$, the hypergeometric probability density function with parameters $m$, $r_m$, and $n$ converges to the binomial probability density function with parameters $n$ and $p$ as $m \to \infty$
Proof
Consider the second version of the hypergeometric PDF above. In the fraction, note that there are $n$ factors in the numerator and $n$ in the denominator. Suppose we pair the factors to write the original fraction as the product of $n$ fractions. The first $y$ fractions have the form $\frac{r_m - i}{m - i}$ where $i$ does not depend on $m$. Hence each of these fractions converge to $p$ as $m \to \infty$. The remaining $n - y$ fractions have the form $\frac{m - r_m - j}{m - y - j}$, where again, $j$ does not depend on $m$. Hence each of these fractions converges to $1 - p$ as $m \to \infty$.
The type of convergence in the previous exercise is known as convergence in distribution.
In the ball and urn experiment, vary the parameters and switch between sampling without replacement and sampling with replacement. Note the difference between the graphs of the hypergeometric probability density function and the binomial probability density function. In particular, note the similarity when $m$ is large and $n$ small. For selected values of the parameters, and for both sampling modes, run the experiment 1000 times.
In the setting of the convergence result above, note that the mean and variance of the hypergeometric distribution converge to the mean and variance of the binomial distribution as $m \to \infty$.
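The convergence can also be checked numerically. The following Python sketch (function names ours) compares the two densities for a fixed sample size as the population grows, with $r_m / m$ held at $p$:

```python
from math import comb

def hyper_pdf(y, m, r, n):
    return comb(r, y) * comb(m - r, n - y) / comb(m, n)

def binom_pdf(y, n, p):
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

n, p = 5, 0.1
for m in (100, 1000, 10000):
    r = int(p * m)
    diff = max(abs(hyper_pdf(y, m, r, n) - binom_pdf(y, n, p)) for y in range(n + 1))
    print(m, diff)      # the maximum difference shrinks as m grows
```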
Inferences in the Hypergeometric Model
In many real problems, the parameters $r$ or $m$ (or both) may be unknown. In this case we are interested in drawing inferences about the unknown parameters based on our observation of $Y$, the number of type 1 objects in the sample. We will assume initially that the sampling is without replacement, the realistic setting in most applications.
Estimation of $r$ with $m$ Known
Suppose that the size of the population $m$ is known but that the number of type 1 objects $r$ is unknown. This type of problem could arise, for example, if we had a batch of $m$ manufactured items containing an unknown number $r$ of defective items. It would be too costly to test all $m$ items (perhaps even destructive), so we might instead select $n$ items at random and test those.
A simple estimator of $r$ can be derived by hoping that the sample proportion of type 1 objects is close to the population proportion of type 1 objects. That is, $\frac{Y}{n} \approx \frac{r}{m} \implies r \approx \frac{m}{n} Y$ Thus, our estimator of $r$ is $\frac{m}{n} Y$. This method of deriving an estimator is known as the method of moments.
$\E \left(\frac{m}{n} Y \right) = r$
Proof
This follows from the expected value of $Y$ above, and the scale property of expected value.
The result in the previous exercise means that $\frac{m}{n} Y$ is an unbiased estimator of $r$. Hence the variance is a measure of the quality of the estimator, in the mean square sense.
$\var \left(\frac{m}{n} Y \right) = (m - r) \frac{r}{n} \frac{m - n}{m - 1}$.
Proof
This follows from variance of $Y$ above, and standard properties of variance.
For fixed $m$ and $r$, $\var\left(\frac{m}{n} Y\right) \downarrow 0$ as $n \uparrow m$.
Thus, the estimator improves as the sample size increases; this property is known as consistency.
In the ball and urn experiment, select sampling without replacement. For selected values of the parameters, run the experiment 100 times and note the estimate of $r$ on each run.
1. Compute the average error and the average squared error over the 100 runs.
2. Compare the average squared error with the mean square error (that is, the variance) given above.
Often we just want to estimate the ratio $r / m$ (particularly if we don't know $m$ either). In this case, the natural estimator is the sample proportion $Y / n$.
The estimator of $\frac{r}{m}$ has the following properties:
1. $\E\left(\frac{Y}{n}\right) = \frac{r}{m}$, so the estimator is unbiased.
2. $\var\left(\frac{Y}{n}\right) = \frac{1}{n} \frac{r}{m} \left(1 - \frac{r}{m}\right) \frac{m - n}{m - 1}$
3. $\var\left(\frac{Y}{n}\right) \downarrow 0$ as $n \uparrow m$ so the estimator is consistent.
Estimation of $m$ with $r$ Known
Suppose now that the number of type 1 objects $r$ is known, but the population size $m$ is unknown. As an example of this type of problem, suppose that we have a lake containing $m$ fish where $m$ is unknown. We capture $r$ of the fish, tag them, and return them to the lake. Next we capture $n$ of the fish and observe $Y$, the number of tagged fish in the sample. We wish to estimate $m$ from this data. In this context, the estimation problem is sometimes called the capture-recapture problem.
Do you think that the main assumption of the sampling model, namely equally likely samples, would be satisfied for a real capture-recapture problem? Explain.
Once again, we can use the method of moments to derive a simple estimate of $m$, by hoping that the sample proportion of type 1 objects is close the population proportion of type 1 objects. That is, $\frac{Y}{n} \approx \frac{r}{m} \implies m \approx \frac{n r}{Y}$ Thus, our estimator of $m$ is $\frac{n r}{Y}$ if $Y \gt 0$ and is $\infty$ if $Y = 0$.
In the ball and urn experiment, select sampling without replacement. For selected values of the parameters, run the experiment 100 times.
1. On each run, compare the true value of $m$ with the estimated value.
2. Compute the average error and the average squared error over the 100 runs.
If $y \gt 0$ then $\frac{n r}{y}$ maximizes $\P(Y = y)$ as a function of $m$ for fixed $r$ and $n$. This means that $\frac{n r}{Y}$ is a maximum likelihood estimator of $m$.
$\E\left(\frac{n r}{Y}\right) \ge m$.
Proof
This result follows from Jensen's inequality since $y \mapsto \frac{n r}{y}$ is a convex function on $(0, \infty)$.
Thus, the estimator is positively biased and tends to over-estimate $m$. Indeed, if $n \le m - r$, so that $\P(Y = 0) \gt 0$, then $\E\left(\frac{n r}{Y}\right) = \infty$. For another approach to estimating the population size $m$, see the section on Order Statistics.
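The positive bias is easy to see in simulation. Here is a Python sketch of the capture-recapture experiment (the parameter values are arbitrary):

```python
import random

m, r, n = 1000, 100, 50      # true population size, tagged fish, sample size
population = [1] * r + [0] * (m - r)

estimates = []
for _ in range(2000):
    y = sum(random.sample(population, n))
    if y > 0:                 # the estimate is infinite when y = 0
        estimates.append(n * r / y)

print(sum(estimates) / len(estimates))   # typically noticeably above the true m = 1000
```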
Sampling with Replacement
Suppose now that the sampling is with replacement, even though this is unrealistic in most applications. In this case, $Y$ has the binomial distribution with parameters $n$ and $\frac{r}{m}$. The estimators of $r$ with $m$ known, $\frac{r}{m}$, and $m$ with $r$ known make sense, just as before, but have slightly different properties.
The estimator $\frac{m}{n}Y$ of $r$ with $m$ known satisfies
1. $\E\left(\frac{m}{n} Y\right) = r$
2. $\var\left(\frac{m}{n}Y \right) = \frac{r(m - r)}{n}$
The estimator $\frac{1}{n}Y$ of $\frac{r}{m}$ satisfies
1. $\E\left(\frac{1}{n} Y\right) = \frac{r}{m}$
2. $\var\left(\frac{1}{n}Y \right) = \frac{1}{n} \frac{r}{m}(1 - \frac{r}{m})$
Thus, the estimators are still unbiased and consistent, but have larger mean square error than before. Thus, sampling without replacement works better, for any values of the parameters, than sampling with replacement.
In the ball and urn experiment, select sampling with replacement. For selected values of the parameters, run the experiment 100 times.
1. On each run, compare the true value of $r$ with the estimated value.
2. Compute the average error and the average squared error over the 100 runs.
Examples and Applications
A batch of 100 computer chips contains 10 defective chips. Five chips are chosen at random, without replacement. Find each of the following:
1. The probability density function of the number of defective chips in the sample.
2. The mean and variance of the number of defective chips in the sample
3. The probability that the sample contains at least one defective chip.
Answer
Let $Y$ denote the number of defective chips in the sample
1. $\P(Y = y) = \frac{\binom{10}{y} \binom{90}{5 - y}}{\binom{100}{5}}, \quad y \in \{0, 1, 2, 3, 4, 5\}$
2. $\E(Y) = 0.5$, $\var(Y) = 0.432$
3. $\P(Y \gt 0) = 0.416$
A club contains 50 members; 20 are men and 30 are women. A committee of 10 members is chosen at random. Find each of the following:
1. The probability density function of the number of women on the committee.
2. The mean and variance of the number of women on the committee.
3. The mean and variance of the number of men on the committee.
4. The probability that the committee members are all the same gender.
Answer
Let $Y$ denote the number of women, so that $Z = 10 - Y$ is the number of men.
1. $\P(Y = y) = \frac{\binom{30}{y} \binom{20}{10 - y}}{\binom{50}{10}}, \quad y \in \{0, 1, \ldots, 10\}$
2. $\E(Y) = 6, \var(Y) = 1.959$
3. $\E(Z) = 4$, $\var(Z) = 1.959$
4. $\P(Y = 0) + \P(Y = 10) = 0.00294$
A small pond contains 1000 fish; 100 are tagged. Suppose that 20 fish are caught. Find each of the following:
1. The probability density function of the number of tagged fish in the sample.
2. The mean and variance of the number of tagged fish in the sample.
3. The probability that the sample contains at least 2 tagged fish.
4. The binomial approximation to the probability in (c).
Answer
Let $Y$ denote the number of tagged fish in the sample
1. $\P(Y = y) = \frac{\binom{100}{y} \binom{900}{20-y}}{\binom{1000}{20}}, \quad y \in \{0, 1, \ldots, 20\}$
2. $\E(Y) = 2$, $\var(Y) = \frac{196}{111}$
3. $\P(Y \ge 2) = 0.6108$
4. $\P(Y \ge 2) = 0.6083$
Forty percent of the registered voters in a certain district prefer candidate $A$. Suppose that 10 voters are chosen at random. Find each of the following:
1. The probability density function of the number of voters in the sample who prefer $A$.
2. The mean and variance of the number of voters in the sample who prefer $A$.
3. The probability that at least 5 voters in the sample prefer $A$.
Answer
1. $\P(Y = y) = \binom{10}{y} (0.4)^y (0.6)^{10-y}, \quad y \in \{0, 1, \ldots, 10\}$
2. $\E(Y) = 4$, $\var(Y) = 2.4$
3. $\P(Y \ge 5) = 0.3669$
Suppose that 10 memory chips are sampled at random and without replacement from a batch of 100 chips. The chips are tested and 2 are defective. Estimate the number of defective chips in the entire batch.
Answer
20
A voting district has 5000 registered voters. Suppose that 100 voters are selected at random and polled, and that 40 prefer candidate $A$. Estimate the number of voters in the district who prefer candidate $A$.
Answer
2000
From a certain lake, 200 fish are caught, tagged and returned to the lake. Then 100 fish are caught and it turns out that 10 are tagged. Estimate the population of fish in the lake.
Answer
2000
Cards
Recall that the general card experiment is to select $n$ cards at random and without replacement from a standard deck of 52 cards. The special case $n = 5$ is the poker experiment and the special case $n = 13$ is the bridge experiment.
In a poker hand, find the probability density function, mean, and variance of the following random variables:
1. The number of spades
2. The number of aces
Answer
Let $U$ denote the number of spades and $V$ the number of aces.
1. $\P(U = u) = \frac{\binom{13}{u} \binom{39}{5-u}}{\binom{52}{5}}, \quad u \in \{0, 1, \ldots, 5\}$, $\E(U) = \frac{5}{4}$, $\var(U) = \frac{235}{272}$
2. $\P(V = v) = \frac{\binom{4}{v} \binom{48}{5-v}}{\binom{52}{5}}, \quad v \in \{0, 1, 2, 3, 4\}$, $\E(V) = \frac{5}{13}$, $\var(V) = \frac{940}{2873}$
In a bridge hand, find each of the following:
1. The probability density function, mean, and variance of the number of hearts
2. The probability density function, mean, and variance of the number of honor cards (ace, king, queen, jack, or 10).
3. The probability that the hand has no honor cards. A hand of this kind is known as a Yarborough, in honor of Second Earl of Yarborough.
Answer
Let $U$ denote the number of hearts and $V$ the number of honor cards.
1. $\P(U = u) = \frac{\binom{13}{u} \binom{39}{13-u}}{\binom{52}{13}}, \quad u \in \{0, 1, \ldots, 13\}$, $\E(U) = \frac{13}{4}$, $\var(U) = \frac{507}{272}$
2. $\P(V = v) = \frac{\binom{20}{v} \binom{32}{13-v}}{\binom{52}{13}}, \quad v \in \{0, 1, \ldots, 13\}$, $\E(V) = 5$, $\var(V) = 2.353$
3. $\frac{5394}{9\,860\,459} \approx 0.000547$
The Randomized Urn
An interesting thing to do in almost any parametric probability model is to randomize one or more of the parameters. Done in the right way, this often leads to an interesting new parametric model, since the distribution of the randomized parameter will often itself belong to a parametric family. This is also the natural setting to apply Bayes' theorem.
In this section, we will randomize the number of type 1 objects in the basic hypergeometric model. Specifically, we assume that we have $m$ objects in the population, as before. However, instead of a fixed number $r$ of type 1 objects, we assume that each of the $m$ objects in the population, independently of the others, is type 1 with probability $p$ and type 0 with probability $1 - p$. We have eliminated one parameter, $r$, in favor of a new parameter $p$ with values in the interval $[0, 1]$. Let $U_i$ denote the type of the $i$th object in the population, so that $\bs{U} = (U_1, U_2, \ldots, U_m)$ is a sequence of Bernoulli trials with success parameter $p$. Let $V = \sum_{i=1}^m U_i$ denote the number of type 1 objects in the population, so that $V$ has the binomial distribution with parameters $m$ and $p$.
As before, we sample $n$ object from the population. Again we let $X_i$ denote the type of the $i$th object sampled, and we let $Y = \sum_{i=1}^n X_i$ denote the number of type 1 objects in the sample. We will consider sampling with and without replacement. In the first case, the sample size can be any positive integer, but in the second case, the sample size cannot exceed the population size. The key technique in the analysis of the randomized urn is to condition on $V$. If we know that $V = r$, then the model reduces to the model studied above: a population of size $m$ with $r$ type 1 objects, and a sample of size $n$.
With either type of sampling, $\P(X_i = 1) = p$
Proof
$\P(X_i = 1) = \E\left[\P(X_i = 1 \mid V)\right] = \E(V / m) = p$
Thus, in either model, $\bs{X}$ is a sequence of identically distributed indicator variables. Ah, but what about dependence?
Suppose that the sampling is without replacement. Let $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ and let $y = \sum_{i=1}^n x_i$. Then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = p^y (1 - p)^{n-y}$
Proof
Conditioning on $V$ gives $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \E\left[\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid V)\right] = \E\left[\frac{V^{(y)} (m - V)^{(n-y)}}{m^{(n)}}\right]$ Now let $G(s, t) = \E(s^V t^{m - V})$. Note that $G$ is a probability generating function of sorts. From the binomial theorem, $G(s, t) = [p s + (1 - p)t]^m$. Let $G_{j,k}$ denote the partial derivative of $G$ of order $j + k$, with $j$ derivatives with respect to the first argument and $k$ derivatives with respect to the second argument. From the definition of $G$, $G_{j,k}(1, 1) = \E[V^{(j)} (m - V)^{(k)}]$. But from the binomial representation, $G_{j,k}(1,1) = m^{(j+k)} p^j (1 - p)^k$. Taking $j = y$ and $k = n - y$ gives $\E\left[V^{(y)} (m - V)^{(n-y)}\right] = m^{(n)} p^y (1 - p)^{n-y}$, and the result follows.
From the joint distribution in the previous exercise, we see that $\bs{X}$ is a sequence of Bernoulli trials with success parameter $p$, and hence $Y$ has the binomial distribution with parameters $n$ and $p$. We could also argue that $\bs{X}$ is a Bernoulli trials sequence directly, by noting that $\{X_1, X_2, \ldots, X_n\}$ is a randomly chosen subset of $\{U_1, U_2, \ldots, U_m\}$.
Suppose now that the sampling is with replacement. Again, let $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ and let $y = \sum_{i=1}^n x_i$. Then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \E\left[\frac{V^y (m - V)^{n-y}}{m^n}\right]$
Proof
The result follows as before by conditioning on $V$: $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \E\left[\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid V)\right] = \E\left[\frac{V^y (m - V)^{n-y}}{m^n}\right]$
A closed form expression for the joint distribution of $\bs{X}$, in terms of the parameters $m$, $n$, and $p$ is not easy, but it is at least clear that the joint distribution will not be the same as the one when the sampling is without replacement. In particular, $\bs{X}$ is a dependent sequence. Note however that $\bs{X}$ is an exchangeable sequence, since the joint distribution is invariant under a permutation of the coordinates (this is a simple consequence of the fact that the joint distribution depends only on the sum $y$).
The probability density function of $Y$ is given by $\P(Y = y) = \binom{n}{y} \E\left[\frac{V^y (m - V)^{n - y}}{m^n} \right], \quad y \in \{0, 1, \ldots, n\}$
Suppose that $i$ and $j$ are distinct indices. The covariance and correlation of $(X_i, X_j)$ are
1. $\cov\left(X_i, X_j\right) = \frac{p (1 - p)}{m}$
2. $\cor\left(X_i, X_j\right) = \frac{1}{m}$
Proof
Conditioning on $V$ once again we have $\P\left(X_i = 1, X_j = 1\right) = \E\left[\left(\frac{V}{m}\right)^2\right] = \frac{p(1 - p)}{m} + p^2$. The results now follow from standard formulas for covariance and correlation.
The mean and variance of $Y$ are
1. $\E(Y) = n p$
2. $\var(Y) = n p (1 - p) \frac{m + n - 1}{m}$
Proof
Part (a) follows from the distribution of the indicator variables above, and the additive property of expected value. Part (b) follows from the previous result on covariance. Recall again that the variance of $Y$ is the sum of $\cov\left(X_i, X_j\right)$ over all $i$ and $j$.
Let's conclude with an interesting observation: For the randomized urn, $\bs{X}$ is a sequence of independent variables when the sampling is without replacement but a sequence of dependent variables when the sampling is with replacement—just the opposite of the situation for the deterministic urn with a fixed number of type 1 objects.
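Both claims about the randomized urn can be checked by simulation. The following Python sketch (a rough illustration with arbitrary parameter values) estimates the density of $Y$ under each sampling mode and compares it with the binomial density:

```python
import random
from math import comb

m, n, p, trials = 10, 5, 0.3, 20000

def empirical_density(with_replacement):
    counts = [0] * (n + 1)
    for _ in range(trials):
        types = [1 if random.random() < p else 0 for _ in range(m)]   # randomized urn
        draws = ([random.choice(types) for _ in range(n)] if with_replacement
                 else random.sample(types, n))
        counts[sum(draws)] += 1
    return [c / trials for c in counts]

binom = [comb(n, y) * p ** y * (1 - p) ** (n - y) for y in range(n + 1)]
print(binom)                                       # binomial(n, p) density
print(empirical_density(with_replacement=False))   # agrees with the binomial density
print(empirical_density(with_replacement=True))    # heavier tails (larger variance)
```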
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
The Multitype Model
As in the basic sampling model, we start with a finite population $D$ consisting of $m$ objects. In this section, we suppose in addition that each object is one of $k$ types; that is, we have a multitype population. For example, we could have an urn with balls of several different colors, or a population of voters who are either democrat, republican, or independent. Let $D_i$ denote the subset of all type $i$ objects and let $m_i = \#(D_i)$ for $i \in \{1, 2, \ldots, k\}$. Thus $D = \bigcup_{i=1}^k D_i$ and $m = \sum_{i=1}^k m_i$. The dichotomous model considered earlier is clearly a special case, with $k = 2$.
As in the basic sampling model, we sample $n$ objects at random from $D$. Thus the outcome of the experiment is $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i \in D$ is the $i$th object chosen. Now let $Y_i$ denote the number of type $i$ objects in the sample, for $i \in \{1, 2, \ldots, k\}$. Note that $\sum_{i=1}^k Y_i = n$ so if we know the values of $k - 1$ of the counting variables, we can find the value of the remaining counting variable. As with any counting variable, we can express $Y_i$ as a sum of indicator variables:
For $i \in \{1, 2, \ldots, k\}$ $Y_i = \sum_{j=1}^n \bs{1}\left(X_j \in D_i\right)$
We assume initially that the sampling is without replacement, since this is the realistic case in most applications.
The Joint Distribution
Basic combinatorial arguments can be used to derive the probability density function of the random vector of counting variables. Recall that since the sampling is without replacement, the unordered sample is uniformly distributed over the combinations of size $n$ chosen from $D$.
The probability density function of $(Y_1, Y_2, \ldots, Y_k)$ is given by $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \frac{\binom{m_1}{y_1} \binom{m_2}{y_2} \cdots \binom{m_k}{y_k}}{\binom{m}{n}}, \quad (y_1, y_2, \ldots, y_k) \in \N^k \text{ with } \sum_{i=1}^k y_i = n$
Proof
The binomial coefficient $\binom{m_i}{y_i}$ is the number of unordered subsets of $D_i$ (the type $i$ objects) of size $y_i$. The binomial coefficient $\binom{m}{n}$ is the number of unordered samples of size $n$ chosen from $D$. Thus the result follows from the multiplication principle of combinatorics and the uniform distribution of the unordered sample
The distribution of $(Y_1, Y_2, \ldots, Y_k)$ is called the multivariate hypergeometric distribution with parameters $m$, $(m_1, m_2, \ldots, m_k)$, and $n$. We also say that $(Y_1, Y_2, \ldots, Y_{k-1})$ has this distribution (recall again that the values of any $k - 1$ of the variables determines the value of the remaining variable). Usually it is clear from context which meaning is intended. The ordinary hypergeometric distribution corresponds to $k = 2$.
An alternate form of the probability density function of $(Y_1, Y_2, \ldots, Y_k)$ is $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \binom{n}{y_1, y_2, \ldots, y_k} \frac{m_1^{(y_1)} m_2^{(y_2)} \cdots m_k^{(y_k)}}{m^{(n)}}, \quad (y_1, y_2, \ldots, y_k) \in \N^k \text{ with } \sum_{i=1}^k y_i = n$
Combinatorial Proof
The combinatorial proof is to consider the ordered sample, which is uniformly distributed on the set of permutations of size $n$ from $D$. The multinomial coefficient on the right is the number of ways to partition the index set $\{1, 2, \ldots, n\}$ into $k$ groups where group $i$ has $y_i$ elements (these are the coordinates of the type $i$ objects). The number of (ordered) ways to select the type $i$ objects is $m_i^{(y_i)}$. The denominator $m^{(n)}$ is the number of ordered samples of size $n$ chosen from $D$.
Algebraic Proof
There is also a simple algebraic proof, starting from the first version of probability density function above. Write each binomial coefficient $\binom{a}{j} = a^{(j)}/j!$ and rearrange a bit.
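For numerical work, the probability density function is easy to code directly. Here is a Python sketch (the function name is ours):

```python
from math import comb

def mv_hyper_pdf(counts, type_sizes, n):
    """P(Y_1 = y_1, ..., Y_k = y_k) for sampling n objects without replacement."""
    if sum(counts) != n:
        return 0.0
    numerator = 1
    for y, m_i in zip(counts, type_sizes):
        numerator *= comb(m_i, y)
    return numerator / comb(sum(type_sizes), n)

# Example: the suit counts in a bridge hand (13 cards from a standard deck)
print(mv_hyper_pdf((5, 4, 3, 1), (13, 13, 13, 13), 13))
```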
The Marginal Distributions
For $i \in \{1, 2, \ldots, k\}$, $Y_i$ has the hypergeometric distribution with parameters $m$, $m_i$, and $n$ $\P(Y_i = y) = \frac{\binom{m_i}{y} \binom{m - m_i}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$
Proof
An analytic proof is possible, by starting with the first version or the second version of the joint PDF and summing over the unwanted variables. However, a probabilistic proof is much better: $Y_i$ is the number of type $i$ objects in a sample of size $n$ chosen at random (and without replacement) from a population of $m$ objects, with $m_i$ of type $i$ and the remaining $m - m_i$ not of this type.
Grouping
The multivariate hypergeometric distribution is preserved when the counting variables are combined. Specifically, suppose that $(A_1, A_2, \ldots, A_l)$ is a partition of the index set $\{1, 2, \ldots, k\}$ into nonempty, disjoint subsets. Let $W_j = \sum_{i \in A_j} Y_i$ and $r_j = \sum_{i \in A_j} m_i$ for $j \in \{1, 2, \ldots, l\}$
$(W_1, W_2, \ldots, W_l)$ has the multivariate hypergeometric distribution with parameters $m$, $(r_1, r_2, \ldots, r_l)$, and $n$.
Proof
Again, an analytic proof is possible, but a probabilistic proof is much better. Effectively, we now have a population of $m$ objects with $l$ types, and $r_i$ is the number of objects of the new type $i$. As before we sample $n$ objects without replacement, and $W_i$ is the number of objects in the sample of the new type $i$.
Note that the marginal distribution of $Y_i$ given above is a special case of grouping. We have two types: type $i$ and not type $i$. More generally, the marginal distribution of any subsequence of $(Y_1, Y_2, \ldots, Y_k)$ is (multivariate) hypergeometric, with the appropriate parameters.
Conditioning
The multivariate hypergeometric distribution is also preserved when some of the counting variables are observed. Specifically, suppose that $(A, B)$ is a partition of the index set $\{1, 2, \ldots, k\}$ into nonempty, disjoint subsets. Suppose that we observe $Y_j = y_j$ for $j \in B$. Let $z = n - \sum_{j \in B} y_j$ and $r = \sum_{i \in A} m_i$.
The conditional distribution of $(Y_i: i \in A)$ given $\left(Y_j = y_j: j \in B\right)$ is multivariate hypergeometric with parameters $r$, $(m_i: i \in A)$, and $z$.
Proof
Once again, an analytic argument is possible using the definition of conditional probability and the appropriate joint distributions. A probabilistic argument is much better. Effectively, we are selecting a sample of size $z$ from a population of size $r$, with $m_i$ objects of type $i$ for each $i \in A$.
Combinations of the grouping result and the conditioning result can be used to compute any marginal or conditional distributions of the counting variables.
Moments
We will compute the mean, variance, covariance, and correlation of the counting variables. Results from the hypergeometric distribution and the representation in terms of indicator variables are the main tools.
For $i \in \{1, 2, \ldots, k\}$,
1. $\E(Y_i) = n \frac{m_i}{m}$
2. $\var(Y_i) = n \frac{m_i}{m}\frac{m - m_i}{m} \frac{m-n}{m-1}$
Proof
This follows immediately, since $Y_i$ has the hypergeometric distribution with parameters $m$, $m_i$, and $n$.
Now let $I_{t i} = \bs{1}(X_t \in D_i)$, the indicator variable of the event that the $t$th object selected is type $i$, for $t \in \{1, 2, \ldots, n\}$ and $i \in \{1, 2, \ldots, k\}$.
Suppose that $r$ and $s$ are distinct elements of $\{1, 2, \ldots, n\}$, and $i$ and $j$ are distinct elements of $\{1, 2, \ldots, k\}$. Then \begin{align} \cov\left(I_{r i}, I_{r j}\right) & = -\frac{m_i}{m} \frac{m_j}{m}\ \cov\left(I_{r i}, I_{s j}\right) & = \frac{1}{m - 1} \frac{m_i}{m} \frac{m_j}{m} \end{align}
Proof
Recall that if $A$ and $B$ are events, then $\cov\left(\bs{1}_A, \bs{1}_B\right) = \P(A \cap B) - \P(A) \P(B)$. In the first case the events are that sample item $r$ is type $i$ and that sample item $r$ is type $j$. These events are disjoint, and the individual probabilities are $\frac{m_i}{m}$ and $\frac{m_j}{m}$. In the second case, the events are that sample item $r$ is type $i$ and that sample item $s$ is type $j$. The probability that both events occur is $\frac{m_i}{m} \frac{m_j}{m-1}$ while the individual probabilities are the same as in the first case.
Suppose again that $r$ and $s$ are distinct elements of $\{1, 2, \ldots, n\}$, and $i$ and $j$ are distinct elements of $\{1, 2, \ldots, k\}$. Then \begin{align} \cor\left(I_{r i}, I_{r j}\right) & = -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \ \cor\left(I_{r i}, I_{s j}\right) & = \frac{1}{m - 1} \sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \end{align}
Proof
This follows from the previous result and the definition of correlation. Recall that if $I$ is an indicator variable with parameter $p$ then $\var(I) = p (1 - p)$.
In particular, $I_{r i}$ and $I_{r j}$ are negatively correlated while $I_{r i}$ and $I_{s j}$ are positively correlated.
For distinct $i, \, j \in \{1, 2, \ldots, k\}$,
\begin{align} \cov\left(Y_i, Y_j\right) = & -n \frac{m_i}{m} \frac{m_j}{m} \frac{m - n}{m - 1}\ \cor\left(Y_i, Y_j\right) = & -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \end{align}
Sampling with Replacement
Suppose now that the sampling is with replacement, even though this is usually not realistic in applications.
The types of the objects in the sample form a sequence of $n$ multinomial trials with parameters $(m_1 / m, m_2 / m, \ldots, m_k / m)$.
The following results now follow immediately from the general theory of multinomial trials, although modifications of the arguments above could also be used.
$(Y_1, Y_2, \ldots, Y_k)$ has the multinomial distribution with parameters $n$ and $(m_1 / m, m_2 / m, \ldots, m_k / m)$: $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \binom{n}{y_1, y_2, \ldots, y_k} \frac{m_1^{y_1} m_2^{y_2} \cdots m_k^{y_k}}{m^n}, \quad (y_1, y_2, \ldots, y_k) \in \N^k \text{ with } \sum_{i=1}^k y_i = n$
For distinct $i, \, j \in \{1, 2, \ldots, k\}$,
1. $\E\left(Y_i\right) = n \frac{m_i}{m}$
2. $\var\left(Y_i\right) = n \frac{m_i}{m} \frac{m - m_i}{m}$
3. $\cov\left(Y_i, Y_j\right) = -n \frac{m_i}{m} \frac{m_j}{m}$
4. $\cor\left(Y_i, Y_j\right) = -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}}$
Comparing with our previous results, note that the means and correlations are the same, whether sampling with or without replacement. The variances and covariances are smaller when sampling without replacement, by a factor of the finite population correction factor $(m - n) / (m - 1)$
Convergence to the Multinomial Distribution
Suppose that the population size $m$ is very large compared to the sample size $n$. In this case, it seems reasonable that sampling without replacement is not too much different than sampling with replacement, and hence the multivariate hypergeometric distribution should be well approximated by the multinomial. The following exercise makes this observation precise. Practically, it is a valuable result, since in many cases we do not know the population size exactly. For the approximate multinomial distribution, we do not need to know $m_i$ and $m$ individually, but only in the ratio $m_i / m$.
Suppose that $m_i$ depends on $m$ and that $m_i / m \to p_i$ as $m \to \infty$ for $i \in \{1, 2, \ldots, k\}$. For fixed $n$, the multivariate hypergeometric probability density function with parameters $m$, $(m_1, m_2, \ldots, m_k)$, and $n$ converges to the multinomial probability density function with parameters $n$ and $(p_1, p_2, \ldots, p_k)$.
Proof
Consider the second version of the hypergeometric probability density function. In the fraction, there are $n$ factors in the denominator and $n$ in the numerator. If we group the factors to form a product of $n$ fractions, then each fraction in group $i$ converges to $p_i$.
Examples and Applications
A population of 100 voters consists of 40 republicans, 35 democrats and 25 independents. A random sample of 10 voters is chosen. Find each of the following:
1. The joint density function of the number of republicans, number of democrats, and number of independents in the sample
2. The mean of each variable in (a).
3. The variance of each variable in (a).
4. The covariance of each pair of variables in (a).
5. The probability that the sample contains at least 4 republicans, at least 3 democrats, and at least 2 independents.
Answer
1. $\P(X = x, Y = y, Z = z) = \frac{\binom{40}{x} \binom{35}{y} \binom{25}{z}}{\binom{100}{10}}$ for $x, \; y, \; z \in \N$ with $x + y + z = 10$
2. $\E(X) = 4$, $\E(Y) = 3.5$, $\E(Z) = 2.5$
3. $\var(X) = 2.1818$, $\var(Y) = 2.0682$, $\var(Z) = 1.7045$
4. $\cov(X, Y) = -1.2727$, $\cov(X, Z) = -0.9091$, $\cov(Y, Z) = -0.7955$
5. 0.2370
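The answers can be verified with a short computation; here is a Python sketch using math.comb (variable names are ours), which checks part (e) and a few of the moments:

```python
from math import comb

m, n = 100, 10
mr, md, mi = 40, 35, 25            # republicans, democrats, independents

def joint(x, y, z):
    return comb(mr, x) * comb(md, y) * comb(mi, z) / comb(m, n)

# Part (e): at least 4 republicans, at least 3 democrats, at least 2 independents
p = sum(joint(x, y, n - x - y)
        for x in range(4, n + 1) for y in range(3, n + 1 - x)
        if n - x - y >= 2)
print(round(p, 4))                                      # approximately 0.237

# Parts (b)-(d), for X and the pair (X, Y), using the formulas of this section
fpc = (m - n) / (m - 1)
print(n * mr / m, n * (mr / m) * (1 - mr / m) * fpc)    # E(X), var(X)
print(-n * (mr / m) * (md / m) * fpc)                   # cov(X, Y), about -1.2727
```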
Cards
Recall that the general card experiment is to select $n$ cards at random and without replacement from a standard deck of 52 cards. The special case $n = 5$ is the poker experiment and the special case $n = 13$ is the bridge experiment.
In a bridge hand, find the probability density function of
1. The number of spades, number of hearts, and number of diamonds.
2. The number of spades and number of hearts.
3. The number of spades.
4. The number of red cards and the number of black cards.
Answer
Let $X$, $Y$, $Z$, $U$, and $V$ denote the number of spades, hearts, diamonds, red cards, and black cards, respectively, in the hand.
1. $\P(X = x, Y = y, Z = z) = \frac{\binom{13}{x} \binom{13}{y} \binom{13}{z}\binom{13}{13 - x - y - z}}{\binom{52}{13}}$ for $x, \; y, \; z \in \N$ with $x + y + z \le 13$
2. $\P(X = x, Y = y) = \frac{\binom{13}{x} \binom{13}{y} \binom{26}{13-x-y}}{\binom{52}{13}}$ for $x, \; y \in \N$ with $x + y \le 13$
3. $\P(X = x) = \frac{\binom{13}{x} \binom{39}{13-x}}{\binom{52}{13}}$ for $x \in \{0, 1, \ldots 13\}$
4. $\P(U = u, V = v) = \frac{\binom{26}{u} \binom{26}{v}}{\binom{52}{13}}$ for $u, \; v \in \N$ with $u + v = 13$
In a bridge hand, find each of the following:
1. The mean and variance of the number of spades.
2. The covariance and correlation between the number of spades and the number of hearts.
3. The mean and variance of the number of red cards.
Answer
Let $X$, $Y$, and $U$ denote the number of spades, hearts, and red cards, respectively, in the hand.
1. $\E(X) = \frac{13}{4}$, $\var(X) = \frac{507}{272}$
2. $\cov(X, Y) = -\frac{169}{272}$, $\cor(X, Y) = -\frac{1}{3}$
3. $\E(U) = \frac{13}{2}$, $\var(U) = \frac{169}{68}$
In a bridge hand, find each of the following:
1. The conditional probability density function of the number of spades and the number of hearts, given that the hand has 4 diamonds.
2. The conditional probability density function of the number of spades given that the hand has 3 hearts and 2 diamonds.
Answer
Let $X$, $Y$ and $Z$ denote the number of spades, hearts, and diamonds respectively, in the hand.
1. $\P(X = x, Y = y \mid Z = 4) = \frac{\binom{13}{x} \binom{13}{y} \binom{13}{9-x-y}}{\binom{39}{9}}$ for $x, \; y \in \N$ with $x + y \le 9$
2. $\P(X = x \mid Y = 3, Z = 2) = \frac{\binom{13}{x} \binom{13}{8-x}}{\binom{26}{8}}$ for $x \in \{0, 1, \ldots, 8\}$
In the card experiment, a hand that does not contain any cards of a particular suit is said to be void in that suit.
Use the inclusion-exclusion rule to show that the probability that a poker hand is void in at least one suit is $\frac{1913496}{2598960} \approx 0.736$
In the card experiment, set $n = 5$. Run the simulation 1000 times and compute the relative frequency of the event that the hand is void in at least one suit. Compare the relative frequency with the true probability given in the previous exercise.
Use the inclusion-exclusion rule to show that the probability that a bridge hand is void in at least one suit is $\frac{32427298180}{635013559600} \approx 0.051$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
Definitions
Suppose that the objects in our population are numbered from 1 to $m$, so that $D = \{1, 2, \ldots, m\}$. For example, the population might consist of manufactured items, and the labels might correspond to serial numbers. As in the basic sampling model, we select $n$ objects at random, without replacement, from $D$. Thus the outcome is $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i \in D$ is the $i$th object chosen. Recall that $\bs{X}$ is uniformly distributed over the set of permutations of size $n$ chosen from $D$. Recall also that $\bs{W} = \{X_1, X_2, \ldots, X_n\}$ is the unordered sample, which is uniformly distributed on the set of combinations of size $n$ chosen from $D$.
For $i \in \{1, 2, \ldots, n\}$ let $X_{(i)} = i$th smallest element of $\{X_1, X_2, \ldots, X_n\}$. The random variable $X_{(i)}$ is known as the order statistic of order $i$ for the sample $\bs{X}$. In particular, the extreme order statistics are \begin{align} X_{(1)} & = \min\{X_1, X_2, \ldots, X_n\} \ X_{(n)} & = \max\{X_1, X_2, \ldots, X_n\} \end{align} Random variable $X_{(i)}$ takes values in $\{i, i + 1, \ldots, m - n + i\}$ for $i \in \{1, 2, \ldots, n\}$.
We will denote the vector of order statistics by $\bs{Y} = \left(X_{(1)}, X_{(2)}, \ldots, X_{(n)}\right)$. Note that $\bs{Y}$ takes values in $L = \left\{(y_1, y_2, \ldots, y_n) \in D^n: y_1 \lt y_2 \lt \cdots \lt y_n\right\}$
Run the order statistic experiment. Note that you can vary the population size $m$ and the sample size $n$. The order statistics are recorded on each update.
Distributions
$L$ has $\binom{m}{n}$ elements and $\bs{Y}$ is uniformly distributed on $L$.
Proof
For $\bs{y} = (y_1, y_2, \ldots, y_n) \in L$, $\bs{Y} = \bs{y}$ if and only if $\bs{X}$ is one of the $n!$ permutations of $\bs{y}$. Hence $\P(\bs{Y} = \bs{y}) = n! \big/ m^{(n)} = 1 \big/ \binom{m}{n}$.
The probability density function of $X_{(i)}$ is $\P\left[X_{(i)} = x\right] = \frac{\binom{x-1}{i-1} \binom{m-x}{n-i}}{\binom{m}{n}}, \quad x \in \{i, i + 1, \ldots, m - n + i\}$
Proof
The event that the $i$th order statistic is $x$ means that $i - 1$ sample values are less than $x$ and $n - i$ are greater than $x$, and of course, one of the sample values is $x$. By the multiplication principle of combinatorics, the number of unordered samples corresponding to this event is $\binom{x-1}{i-1} \binom{m - x}{n - i}$. The total number of unordered samples is $\binom{m}{n}$.
In the order statistic experiment, vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the experiment 1000 times and compare the relative frequency function to the probability density function.
Moments
The probability density function of $X_{(i)}$ above can be used to obtain an interesting identity involving the binomial coefficients. This identity, in turn, can be used to find the mean and variance of $X_{(i)}$.
For $i, \, n, \, m \in \N_+$ with $i \le n \le m$, $\sum_{k=i}^{m-n+i} \binom{k-1}{i-1} \binom{m-k}{n - i} = \binom{m}{n}$
Proof
This result follows immediately from the probability density function of $X_{(i)}$ above
The expected value of $X_{(i)}$ is $\E\left[X_{(i)}\right] = i \frac{m + 1}{n+1}$
Proof
We start with the definition of expected value. Recall that $x \binom{x - 1}{i - 1} = i \binom{x}{i}$. Next we use the identity above with $m$ replaced with $m + 1$, $n$ replaced with $n + 1$, and $i$ replaced with $i + 1$. Simplifying gives the result.
The variance of $X_{(i)}$ is $\var\left[X_{(i)}\right] = i (n - i + 1) \frac{(m + 1) (m - n)}{(n + 1)^2 (n + 2)}$
Proof
The result follows from another application of the identity above.
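The density and the moment formulas are easy to check numerically. Here is a Python sketch (the function name is ours), using the parameters of the lottery exercise below ($m = 25$, $n = 5$, $i = 3$):

```python
from math import comb

def order_stat_pdf(x, i, m, n):
    """P(X_(i) = x) when sampling n values without replacement from {1, ..., m}."""
    return comb(x - 1, i - 1) * comb(m - x, n - i) / comb(m, n)

m, n, i = 25, 5, 3
support = range(i, m - n + i + 1)
mean = sum(x * order_stat_pdf(x, i, m, n) for x in support)
var = sum(x * x * order_stat_pdf(x, i, m, n) for x in support) - mean ** 2

print(mean, i * (m + 1) / (n + 1))                                    # both about 13
print(var, i * (n - i + 1) * (m + 1) * (m - n) / ((n + 1) ** 2 * (n + 2)))
```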
In the order statistic experiment, vary the parameters and note the size and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
Estimators of $m$ Based on Order Statistics
Suppose that the population size $m$ is unknown. In this subsection we consider estimators of $m$ constructed from the various order statistics.
For $i \in \{1, 2, \ldots, n\}$, the following statistic is an unbiased estimator of $m$: $U_i = \frac{n + 1}{i} X_{(i)} - 1$
Proof
From the expected value of $X_{(i)}$ above and the linear property of expected value, note that $\E(U_i) = m$.
Since $U_i$ is unbiased, its variance is the mean square error, a measure of the quality of the estimator.
The variance of $U_i$ is $\var(U_i) = \frac{(m + 1) (m - n) (n - i +1)}{i (n + 2)}$
Proof
This result follows from variance of $X_{(i)}$ given above and standard properties of variance.
For fixed $m$ and $n$, $\var(U_i)$ decreases as $i$ increases. Thus, the estimators improve as $i$ increases; in particular, $U_n$ is the best and $U_1$ the worst.
The relative efficiency of $U_j$ with respect to $U_i$ is $\frac{\var(U_i)}{\var(U_j)} = \frac{j (n - i + 1)}{i (n - j + 1)}$
Note that the relative efficiency depends only on the orders $i$ and $j$ and the sample size $n$, but not on the population size $m$ (the unknown parameter). In particular, the relative efficiency of $U_n$ with respect to $U_1$ is $n^2$. For fixed $i$ and $j$, the asymptotic relative efficiency of $U_j$ to $U_i$ is $j / i$. Usually, we hope that an estimator improves (in the sense of mean square error) as the sample size $n$ increases (the more information we have, the better our estimate should be). This general idea is known as consistency.
$\var(U_n)$ decreases to 0 as $n$ increases from 1 to $m$, and so $U_n$ is consistent: $\var(U_n) = \frac{(m + 1)(m - n)}{n (n + 2)}$
For fixed $i$, $\var(U_i)$ at first increases and then decreases to 0 as $n$ increases from $i$ to $m$. Thus, $U_i$ is inconsistent.
An Estimator of $m$ Based on the Sample Mean
In this subsection, we will derive another estimator of the parameter $m$ based on the sample mean $M = \frac{1}{n} \sum_{i=1}^n X_i$, and compare this estimator with the estimator based on the maximum of the variables (the largest order statistic).
$\E(M) = \frac{m + 1}{2}$.
Proof
Recall that $X_i$ is uniformly distributed on $D$ for each $i$ and hence $\E(X_i) = \frac{m + 1}{2}$.
It follows that $V = 2 M - 1$ is an unbiased estimator of $m$. Moreover, superficially at least, $V$ seems to use more information from the sample (since it involves all of the sample variables) than $U_n$. Could it be better? To find out, we need to compute the variance of the estimator (which, since the estimator is unbiased, is the mean square error). This computation is a bit complicated since the sample variables are dependent. We will compute the variance of the sum as the sum of all of the pairwise covariances.
For distinct $i, \, j \in \{1, 2, \ldots, n\}$, $\cov\left(X_i, X_j\right) = -\frac{m+1}{12}$.
Proof
First recall that given $X_i = x$, $X_j$ is uniformly distributed on $D \setminus \{x\}$. Hence $\E(X_j \mid X_i = x) = \frac{m(m + 1)}{2 (m - 1)} - \frac{x}{m - 1}$. Thus conditioning on $X_i$ gives $\E(X_i X_j) = \frac{(m +1)(3 \, m + 2)}{12}$. The result now follows from the standard formula $\cov(X_i, X_j) = \E(X_i X_j) - \E(X_i) \E(X_j)$.
For $i \in \{1, 2, \ldots, n\}$, $\var(X_i) = \frac{m^2 - 1}{12}$.
Proof
This follows since $X_i$ is uniformly distributed on $D$.
$\var(M) = \frac{(m+1)(m-n)}{12 \, n}$.
Proof
The variance of $M$ is $\frac{1}{n^2}$ times the sum of $\cov\left(X_i, X_j\right)$ over all $i, \, j \in \{1, 2, \ldots, n\}$. There are $n$ covariance terms with the value given in the variance result above (corresponding to $i = j$) and $n^2 - n$ terms with the value given in the pure covariance result above (corresponding to $i \ne j$). Simplifying gives the result.
$\var(V) = \frac{(m + 1)(m - n)}{3 \, n}$.
Proof
This follows from the variance of $M$ above and standard properties of variance.
The variance of $V$ is decreasing with $n$, so $V$ is also consistent. Let's compute the relative efficiency of the estimator based on the maximum to the estimator based on the mean.
$\var(V) \big/ \var(U_n) = (n + 2) / 3$.
Thus, once again, the estimator based on the maximum is better. In addition to the mathematical analysis, all of the estimators except $U_n$ can sometimes be manifestly worthless, by giving estimates that are smaller than some of the sample values.
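For a concrete comparison, here is a short sketch (illustrative only; the function names and the parameter values are arbitrary) that evaluates the variances of the estimators and the relative efficiency $(n + 2)/3$.

```python
def var_U(m, n, i):
    """Variance of the unbiased estimator U_i based on the i-th order statistic."""
    return (m + 1) * (m - n) * (n - i + 1) / (i * (n + 2))

def var_V(m, n):
    """Variance of the estimator V = 2 M - 1 based on the sample mean."""
    return (m + 1) * (m - n) / (3 * n)

m, n = 100, 10
print([round(var_U(m, n, i), 2) for i in range(1, n + 1)])   # decreasing in i
print(var_V(m, n) / var_U(m, n, n))                          # equals (n + 2) / 3 = 4
```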
Sampling with Replacement
If the sampling is with replacement, then the sample $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent and identically distributed random variables. The order statistics from such samples are studied in the chapter on Random Samples.
Examples and Applications
Suppose that in a lottery, tickets numbered from 1 to 25 are placed in a bowl. Five tickets are chosen at random and without replacement.
1. Find the probability density function of $X_{(3)}$.
2. Find $\E\left[X_{(3)}\right]$.
3. Find $\var\left[X_{(3)}\right]$.
Answer
1. $\P\left[X_{(3)} = x\right] = \frac{\binom{x-1}{2} \binom{25-x}{2}}{\binom{25}{5}}$ for $x \in \{3, 4, \ldots, 23\}$
2. $\E\left[X_{(3)}\right] = 13$
3. $\var\left[X_{(3)}\right] = \frac{130}{7}$
The German Tank Problem
The estimator $U_n$ was used by the Allies during World War II to estimate the number of German tanks $m$ that had been produced. German tanks had serial numbers, and captured German tanks and records formed the sample data. The statistical estimates turned out to be much more accurate than intelligence estimates. Some of the data are given in the table below.
German Tank Data. Source: Wikipedia
Date Statistical Estimate Intelligence Estimate German Records
June 1940 169 1000 122
June 1941 244 1550 271
August 1942 327 1550 342
One of the morals, evidently, is not to put serial numbers on your weapons!
Suppose that in a certain war, 5 enemy tanks have been captured. The serial numbers are 51, 3, 27, 82, 65. Compute the estimate of $m$, the total number of tanks, using all of the estimators discussed above.
Answer
1. $u_1 = 17$
2. $u_2 = 80$
3. $u_3 = 101$
4. $u_4 = 96.5$
5. $u_5 = 97.4$
6. $v = 90.2$
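As a check on these answers, here is a brief sketch (illustrative only) that computes all of the estimates from the captured serial numbers.

```python
serial = [51, 3, 27, 82, 65]
x = sorted(serial)                                   # the order statistics
n = len(x)

u = [round((n + 1) / i * x[i - 1] - 1, 1) for i in range(1, n + 1)]
v = round(2 * sum(x) / n - 1, 1)

print(u)    # [17.0, 80.0, 101.0, 96.5, 97.4]
print(v)    # 90.2
```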
In the order statistic experiment, set $m = 100$ and $n = 10$. Run the experiment 50 times. For each run, compute the estimate of $m$ based on each order statistic. For each estimator, compute the square root of the average of the squares of the errors over the 50 runs. Based on these empirical error estimates, rank the estimators of $m$ in terms of quality.
Suppose that in a certain war, 10 enemy tanks have been captured. The serial numbers are 304, 125, 417, 226, 192, 340, 468, 499, 87, 352. Compute the estimate of $m$, the total number of tanks, using the estimator based on the maximum and the estimator based on the mean.
Answer
1. $u = 547.9$
2. $v = 601$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/12%3A_Finite_Sampling_Models/12.04%3A_Order_Statistics.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Definitions and Notation
The Matching Experiment
The matching experiment is a random experiment that can be formulated in a number of colorful ways:
• Suppose that $n$ male-female couples are at a party and that the males and females are randomly paired for a dance. A match occurs if a couple happens to be paired together.
• An absent-minded secretary prepares $n$ letters and envelopes to send to $n$ different people, but then randomly stuffs the letters into the envelopes. A match occurs if a letter is inserted in the proper envelope.
• $n$ people with hats have had a bit too much to drink at a party. As they leave the party, each person randomly grabs a hat. A match occurs if a person gets his or her own hat.
These experiments are clearly equivalent from a mathematical point of view, and correspond to selecting a random permutation $\bs{X} = (X_1, X_2, \ldots, X_n)$ of the population $D_n = \{1, 2, \ldots, n\}$. Here are the interpretations for the examples above:
• Number the couples from 1 to $n$. Then $X_i$ is the number of the woman paired with the $i$th man.
• Number the letters and corresponding envelopes from 1 to $n$. Then $X_i$ is the number of the envelope containing the $i$th letter.
• Number the people and their corresponding hats from 1 to $n$. Then $X_i$ is the number of the hat chosen by the $i$th person.
Our modeling assumption, of course, is that $\bs{X}$ is uniformly distributed on the sample space of permutations of $D_n$. The number of objects $n$ is the basic parameter of the experiment. We will also consider the case of sampling with replacement from the population $D_n$, because the analysis is much easier but still provides insight. In this case, $\bs{X}$ is a sequence of independent random variables, each uniformly distributed over $D_n$.
Matches
We will say that a match occurs at position $j$ if $X_j = j$. Thus, the number of matches is the random variable $N_n$ defined mathematically by $N_n = \sum_{j=1}^n I_j$ where $I_j = \bs{1}(X_j = j)$ is the indicator variable for the event of a match at position $j$. Our problem is to compute the probability distribution of the number of matches. This is an old and famous problem in probability that was first considered by Pierre Rémond de Montmort; it is sometimes referred to as Montmort's matching problem in his honor.
Sampling With Replacement
First let's solve the matching problem in the easy case, when the sampling is with replacement. Of course, this is not the way that the matching game is usually played, but the analysis will give us some insight.
$(I_1, I_2, \ldots, I_n)$ is a sequence of $n$ Bernoulli trials, with success probability $\frac{1}{n}$.
Proof
The variables are independent since the sampling is with replacement. Since $X_j$ is uniformly distributed, $\P(I_j = 1) = \P(X_j = j) = \frac{1}{n}$.
The number of matches $N_n$ has the binomial distribution with trial parameter $n$ and success parameter $\frac{1}{n}$. $\P(N_n = k) = \binom{n}{k} \left(\frac{1}{n}\right)^k \left(1 - \frac{1}{n}\right)^{n-k}, \quad k \in \{0, 1, \ldots, n\}$
Proof
This follows immediately from the previous result on Bernoulli trials.
The mean and variance of the number of matches are
1. $\E(N_n) = 1$
2. $\var(N_n) = \frac{n-1}{n}$
Proof
These results follow from the previous result on the binomial distribution of $N_n$. Recall that the binomial distribution with parameters $n$ and $p$ has mean $n p$ and variance $n p (1 - p)$.
The distribution of the number of matches converges to the Poisson distribution with parameter 1 as $n \to \infty$: $\P(N_n = k) \to \frac{e^{-1}}{k!} \text{ as } n \to \infty \text{ for } k \in \N$
Proof
This is a special case of the convergence of the binomial distribution to the Poisson. For a direct proof, note that $\P(N_n = k) = \frac{1}{k!} \frac{n^{(k)}}{n^k} \left(1 - \frac{1}{n}\right)^{n-k}$ But $\frac{n^{(k)}}{n^k} \to 1$ as $n \to \infty$ and $\left(1 - \frac{1}{n}\right)^{n-k} \to e^{-1}$ as $n \to \infty$ by a famous limit from calculus.
Sampling Without Replacement
Now let's consider the case of real interest, when the sampling is without replacement, so that $\bs{X}$ is a random permutation of the elements of $D_n = \{1, 2, \ldots, n\}$.
Counting Permutations with Matches
To find the probability density function of $N_n$, we need to count the number of permutations of $D_n$ with a specified number of matches. This will turn out to be easy once we have counted the number of permutations with no matches; these are called derangements of $D_n$. We will denote the number of permutations of $D_n$ with exactly $k$ matches by $b_n(k) = \#\{N_n = k\}$ for $k \in \{0, 1, \ldots, n\}$. In particular, $b_n(0)$ is the number of derangements of $D_n$.
The number of derangements is $b_n(0) = n! \sum_{j=0}^n \frac{(-1)^j}{j!}$
Proof
By the complement rule for counting measure $b_n(0) = n! - \#(\bigcup_{i=1}^n \{X_i = i\})$. From the inclusion-exclusion formula, $b_n(0) = n! - \sum_{j=1}^n (-1)^{j-1} \sum_{J \subseteq D_n, \; \#(J) = j} \#\{X_i = i \text{ for all } i \in J\}$ But if $J \subseteq D_n$ with $\#(J) = j$ then $\#\{X_i = i \text{ for all } i \in J\} = (n - j)!$. Finally, the number of subsets $J$ of $D_n$ with $\#(J) = j$ is $\binom{n}{j}$. Substituting into the displayed equation and simplifying gives the result.
The number of permutations with exactly $k$ matches is $b_n(k) = \frac{n!}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!}, \quad k \in \{0, 1, \ldots, n\}$
Proof
The following two-step procedure generates all permutations with exactly $k$ matches: First select the $k$ integers that will match. The number of ways of performing this step is $\binom{n}{k}$. Second, select a permutation of the remaining $n - k$ integers with no matches. The number of ways of performing this step is $b_{n-k}(0)$. By the multiplication principle of combinatorics it follows that $b_n(k) = \binom{n}{k} b_{n-k}(0)$. Using the result above for derangements and simplifying gives the result.
The Probability Density Function
The probability density function of the number of matches is $\P(N_n = k) = \frac{1}{k!} \sum_{j=0}^{n-k} \frac{(-1)^j}{j!}, \quad k \in \{0, 1, \ldots, n\}$
Proof
This follows directly from the result above on permutations with matches, since $\P(N_n = k) = \#\{N_n = k\} \big/ n!$.
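The density is easy to evaluate numerically. Here is a minimal sketch (the helper name is for illustration only) that computes $\P(N_n = k)$ from the formula, recovers the permutation counts $b_n(k)$, and checks that the probabilities sum to 1.

```python
from math import factorial

def matching_pdf(n):
    """PDF of the number of matches N_n in a random permutation of {1, ..., n}."""
    return [sum((-1) ** j / factorial(j) for j in range(n - k + 1)) / factorial(k)
            for k in range(n + 1)]

pdf5 = matching_pdf(5)
assert abs(sum(pdf5) - 1) < 1e-12
print([round(p * factorial(5)) for p in pdf5])   # permutation counts b_5(k): 44, 45, 20, 10, 0, 1
print(pdf5[4])                                   # P(N_5 = 4) = 0, as shown below
```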
In the matching experiment, vary the parameter $n$ and note the shape and location of the probability density function. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
$\P(N_n = n - 1) = 0$.
Proof
A simple probabilistic proof is to note that the event is impossible—if there are $n - 1$ matches, then there must be $n$ matches. An algebraic proof can also be constructed from the probability density function of $N_n$ above.
The distribution of the number of matches converges to the Poisson distribution with parameter 1 as $n \to \infty$: $\P(N_n = k) \to \frac{e^{-1}}{k!} \text{ as } n \to \infty, \quad k \in \N$
Proof
From the power series for the exponential function, $\sum_{j=0}^{n-k} \frac{(-1)^j}{j!} \to \sum_{j=0}^\infty \frac{(-1)^j}{j!} = e^{-1} \text{ as } n \to \infty$ So the result follows from the probability density function of $N_n$ above.
The convergence is remarkably rapid.
In the matching experiment, increase $n$ and note how the probability density function stabilizes rapidly. For selected values of $n$, run the simulation 1000 times and compare the relative frequency function to the probability density function.
Moments
The mean and variance of the number of matches could be computed directly from the distribution. However, it is much better to use the representation in terms of indicator variables. The exchangeable property is an important tool in this section.
$\E(I_j) = \frac{1}{n}$ for $j \in \{1, 2, \ldots, n\}$.
Proof
$X_j$ is uniformly distributed on $D_n$ for each $j$ so $\P(I_j = 1) = \P(X_j = x) = \frac{1}{n}$.
$\E(N_n) = 1$ for each $n$
Proof
This follows from the previous result and basic properties of expected value.
Thus, the expected number of matches is 1, regardless of $n$, just as when the sampling is with replacement.
$\var(I_j) = \frac{n-1}{n^2}$ for $j \in \{1, 2, \ldots, n\}$.
Proof
This follows from $\P(I_j = 1) = \frac{1}{n}$.
A match in one position would seem to make it more likely that there would be a match in another position. Thus, we might guess that the indicator variables are positively correlated.
For distinct $j, \, k \in \{1, 2, \ldots, n\}$,
1. $\cov(I_j, I_k) = \frac{1}{n^2 (n - 1)}$
2. $\cor(I_j, I_k) = \frac{1}{(n - 1)^2}$
Proof
Note that $I_j I_k$ is the indicator variable of the event of a match in position $j$ and a match in position $k$. Hence by the exchangeable property $\P(I_j I_k = 1) = \P(I_j = 1) \P(I_k = 1 \mid I_j = 1) = \frac{1}{n} \frac{1}{n-1}$. As before, $\P(I_j = 1) = \P(I_k = 1) = \frac{1}{n}$. The results now follow from standard computational formulas for covariance and correlation.
Note that when $n = 2$, the event that there is a match in position 1 is perfectly correlated with the event that there is a match in position 2. This makes sense, since there will either be 0 matches or 2 matches.
$\var(N_n) = 1$ for every $n \in \{2, 3, \ldots\}$.
Proof
This follows from the previous two results on the variance and the covariance of the indicator variables, and basic properties of covariance. Recall that $\var(N_n) = \sum_{j=1}^n \sum_{k=1}^n \cov(I_j, I_k)$.
In the matching experiment, vary the parameter $n$ and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
For distinct $j, \, k \in \{1, 2, \ldots, n\}$, $\cov(I_j, I_k) \to 0$ as $n \to \infty$.
Thus, the event that a match occurs in position $j$ is nearly independent of the event that a match occurs in position $k$ if $n$ is large. For large $n$, the indicator variables behave nearly like $n$ Bernoulli trials with success probability $\frac{1}{n}$, which of course, is what happens when the sampling is with replacement.
A Recursion Relation
In this subsection, we will give an alternate derivation of the distribution of the number of matches, in a sense by embedding the experiment with parameter $n$ into the experiment with parameter $n + 1$.
The probability density function of the number of matches satisfies the following recursion relation and initial condition:
1. $\P(N_n = k) = (k + 1) \P(N_{n+1} = k + 1)$ for $k \in \{0, 1, \ldots, n\}$.
2. $\P(N_1 = 1) = 1$.
Proof
First, consider the random permutation $(X_1, X_2, \ldots, X_n, X_{n+1})$ of $D_{n+1}$. Note that $(X_1, X_2, \ldots, X_n)$ is a random permutation of $D_n$ if and only if $X_{n+1} = n + 1$ if and only if $I_{n+1} = 1$. It follows that $\P(N_n = k) = \P(N_{n+1} = k + 1 \mid I_{n+1} = 1), \quad k \in \{0, 1, \ldots, n\}$ From the definition of conditional probability we have $\P(N_n = k) = \P(N_{n+1} = k + 1) \frac{\P(I_{n+1} = 1 \mid N_{n+1} = k + 1)}{\P(I_{n+1} = 1)}, \quad k \in \{0, 1, \ldots, n\}$ But $\P(I_{n+1} = 1) = \frac{1}{n+1}$ and $\P(I_{n+1} = 1 \mid N_{n+1} = k + 1) = \frac{k+1}{n+1}$. Substituting into the last displayed equation gives the recurrence relation. The initial condition is obvious, since if $n = 1$ we must have one match.
This result can be used to obtain the probability density function of $N_n$ recursively for any $n$.
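For example, here is a minimal sketch (illustrative only; the function name is hypothetical) that builds the density this way, using the recursion in the equivalent form $\P(N_{m+1} = k + 1) = \P(N_m = k) / (k + 1)$ together with the complement rule for $\P(N_{m+1} = 0)$.

```python
def matching_pdf_rec(n):
    """PDF of N_n built from the recursion, in the form
    P(N_{m+1} = k + 1) = P(N_m = k) / (k + 1), with P(N_{m+1} = 0)
    obtained from the complement rule."""
    pdf = [0.0, 1.0]                      # P(N_1 = 0) = 0, P(N_1 = 1) = 1
    for m in range(1, n):
        new = [0.0] * (m + 2)
        for k in range(m + 1):
            new[k + 1] = pdf[k] / (k + 1)
        new[0] = 1.0 - sum(new[1:])       # the probabilities must sum to 1
        pdf = new
    return pdf

print([round(p, 4) for p in matching_pdf_rec(5)])
# compare with the secretary exercise below: 0.3667, 0.375, 0.1667, 0.0833, 0, 0.0083
```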
The Probability Generating Function
Next recall that the probability generating function of $N_n$ is given by $G_n(t) = \E\left(t^{N_n}\right) = \sum_{j=0}^n \P(N_n = j) t^j, \quad t \in \R$
The family of probability generating functions satisfies the following differential equations and ancillary conditions:
1. $G_{n+1}^\prime(t) = G_n(t)$ for $t \in \R$ and $n \in \N_+$
2. $G_n(1) = 1$ for $n \in \N_+$
Note also that $G_1(t) = t$ for $t \in \R$. Thus, the system of differential equations can be used to compute $G_n$ for any $n \in \N_+$.
In particular, for $t \in \R$,
1. $G_2(t) = \frac{1}{2} + \frac{1}{2} t^2$
2. $G_3(t) = \frac{1}{3} + \frac{1}{2} t + \frac{1}{6} t^3$
3. $G_4(t) = \frac{3}{8} + \frac{1}{3} t + \frac{1}{4} t^2 + \frac{1}{24} t^4$
For $k, \; n \in \N_+$ with $k \lt n$, $G_n^{(k)}(t) = G_{n-k}(t), \quad t \in \R$
Proof
This follows from the differential equation for the PGF given above.
For $n \in \N_+$, $\P(N_n = k) = \frac{1}{k!} \P(N_{n-k} = 0), \quad k \in \{0, 1, \ldots, n - 1\}$
Proof
This follows from the previous result and basic properties of generating functions.
Examples and Applications
A secretary randomly stuffs 5 letters into 5 envelopes. Find each of the following:
1. The number of outcomes with exactly $k$ matches, for each $k \in \{0, 1, 2, 3, 4, 5\}$.
2. The probability density function of the number of matches.
3. The covariance and correlation of a match in one envelope and a match in another envelope.
Answer
1. $k$ 0 1 2 3 4 5
$b_5(k)$ 44 45 20 10 0 1
2. $k$ 0 1 2 3 4 5
$\P(N_5 = k)$ 0.3667 0.3750 0.1667 0.0833 0 0.0083
3. Covariance: $\frac{1}{100}$, correlation $\frac{1}{16}$
Ten married couples are randomly paired for a dance. Find each of the following:
1. The probability density function of the number of matches.
2. The mean and variance of the number of matches.
3. The probability of at least 3 matches.
Answer
1. $k$ $\P(N_{10} = k)$
0 $\frac{16\,481}{44\,800} \approx 0.3678795$
1 $\frac{16\,687}{45\,360} \approx 0.3678792$
2 $\frac{2119}{11\,520} \approx 0.1839410$
3 $\frac{103}{1680} \approx 0.06130952$
4 $\frac{53}{3456} \approx 0.01533565$
5 $\frac{11}{3600} \approx 0.003055556$
6 $\frac{1}{1920} \approx 0.0005208333$
7 $\frac{1}{15\,120} \approx 0.00006613757$
8 $\frac{1}{80\,640} \approx 0.00001240079$
9 0
10 $\frac{1}{3\,628\,800} \approx 2.755732 \times 10^{-7}$
2. $\E(N_{10}) = 1$, $\var(N_{10}) = 1$
3. $\P(N_{10} \ge 3) = \frac{145\,697}{1\,814\,400} \approx 0.08030037$
In the matching experiment, set $n = 10$. Run the experiment 1000 times and compare the following for the number of matches:
1. The true probabilities
2. The relative frequencies from the simulation
3. The limiting Poisson probabilities
Answer
1. See part (a) of the previous problem.
2. The relative frequencies will vary from one run of the simulation to another.
3. $k$ $e^{-1} \frac{1}{k!}$
0 0.3678794
1 0.3678794
2 0.1839397
3 0.06131324
4 0.01532831
5 0.003065662
6 0.0005109437
7 0.00007299195
8 $9.123994 \times 10^{-6}$
9 $1.013777 \times 10^{-6}$
10 $1.013777 \times 10^{-7}$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/12%3A_Finite_Sampling_Models/12.05%3A_The_Matching_Problem.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\sd}{\text{sd}}$
Introduction
The Sampling Model
As in the basic sampling model, suppose that we select $n$ numbers at random, with replacement, from the population $D =\{1, 2, \ldots, m\}$. Thus, our outcome vector is $\bs{X} = (X_1, X_2, \ldots, X_n)$ where $X_i$ is the $i$th number chosen. Recall that our basic modeling assumption is that $\bs{X}$ is uniformly distributed on the sample space $S = D^n = \{1, 2, \ldots, m\}^n$
In this section, we are interested in the number of population values missing from the sample, and the number of (distinct) population values in the sample. The computation of probabilities related to these random variables are generally referred to as birthday problems. Often, we will interpret the sampling experiment as a distribution of $n$ balls into $m$ cells; $X_i$ is the cell number of ball $i$. In this interpretation, our interest is in the number of empty cells and the number of occupied cells.
For $i \in D$, let $Y_i$ denote the number of times that $i$ occurs in the sample: $Y_i = \#\left\{j \in \{1, 2, \ldots, n\}: X_j = i\right\} = \sum_{j=1}^n \bs{1}(X_j = i)$
$\bs{Y} = (Y_1, Y_2, \ldots, Y_m)$ has the multinomial distribution with parameters $n$ and $(1/m, 1/m, \ldots, 1/m)$: $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_m = y_m) = \binom{n}{y_1, y_2, \ldots, y_m} \frac{1}{m^n}, \quad (y_1, y_2, \ldots, y_m) \in \N^m \text{ with } \sum_{i=1}^m y_i = n$
Proof
This follows immediately from the definition of the multinomial distribution, since $(X_1, X_2, \ldots, X_n)$ is an independent sequence, and $X_i$ is uniformly distributed on $\{1, 2, \ldots, m\}$ for each $i$.
We will now define the main random variables of interest.
The number of population values missing in the sample is $U = \#\left\{j \in \{1, 2, \ldots, m\}: Y_j = 0\right\} = \sum_{j=1}^m \bs{1}(Y_j = 0)$ and the number of (distinct) population values that occur in the sample is $V = \#\left\{j \in \{1, 2, \ldots, m\}: Y_j \gt 0\right\} = \sum_{j=1}^m \bs{1}(Y_j \gt 0)$ Also, $U$ takes values in $\{\max\{m - n, 0\}, \ldots, m - 1\}$ and $V$ takes values in $\{1, 2, \ldots, \min\{m,n\}\}$.
Clearly we must have $U + V = m$ so once we have the probability distribution and moments of one variable, we can easily find them for the other variable. However, we will first solve the simplest version of the birthday problem.
The Simple Birthday Problem
The event that there is at least one duplication when a sample of size $n$ is chosen from a population of size $m$ is $B_{m,n} = \{V \lt n\} = \{U \gt m - n\}$ The (simple) birthday problem is to compute the probability of this event. For example, suppose that we choose $n$ people at random and note their birthdays. If we ignore leap years and assume that birthdays are uniformly distributed throughout the year, then our sampling model applies with $m = 365$. In this setting, the birthday problem is to compute the probability that at least two people have the same birthday (this special case is the origin of the name).
The solution of the birthday problem is an easy exercise in combinatorial probability.
The probability of the birthday event is $\P\left(B_{m,n}\right) = 1 - \frac{m^{(n)}}{m^n}, \quad n \le m$ and $\P\left(B_{m,n}\right) = 1$ for $n \gt m$
Proof
The complementary event $B^c$ occurs if and only if the outcome vector $\bs{X}$ forms a permutation of size $n$ from $\{1, 2, \ldots, m\}$. The number of permutations is $m^{(n)}$ and of course the number of samples is $m^n$.
The fact that the probability is 1 for $n \gt m$ is sometimes referred to as the pigeonhole principle: if more than $m$ pigeons are placed into $m$ holes then at least one hole has 2 or more pigeons. The following result gives a recurrence relation for the probability of distinct sample values and thus gives another way to compute the birthday probability.
Let $p_{m,n}$ denote the probability of the complementary birthday event $B^c$, that the sample variables are distinct, with population size $m$ and sample size $n$. Then $p_{m,n}$ satisfies the following recursion relation and initial condition:
1. $p_{m,n+1} = \frac{m-n}{m} p_{m,n}$
2. $p_{m,1} = 1$
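A short sketch of this recursion (illustrative only; the function name is not from the text) reproduces the probabilities in the examples that follow.

```python
def birthday_prob(m, n):
    """P(B_{m,n}): probability of at least one duplication when n values
    are sampled with replacement from a population of size m."""
    p = 1.0                          # p_{m,1}: all values distinct so far
    for k in range(1, n):
        p *= (m - k) / m             # p_{m,k+1} = (m - k)/m * p_{m,k}
    return 1 - p

print(round(birthday_prob(365, 23), 3))                              # 0.507
print([round(birthday_prob(365, n), 3) for n in (10, 20, 30, 40, 50, 60)])
# compare with the standard birthday probabilities listed below
```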
Examples
Let $m = 365$ (the standard birthday problem).
1. $\P\left(B_{365,10}\right) = 0.117$
2. $\P\left(B_{365,20}\right) = 0.411$
3. $\P\left(B_{365,30}\right) = 0.706$
4. $\P\left(B_{365,40}\right) = 0.891$
5. $\P\left(B_{365,50}\right) = 0.970$
6. $\P\left(B_{365,60}\right) = 0.994$
In the birthday experiment, set $m = 365$ and select the indicator variable $I$. For $n \in \{10, 20, 30, 40, 50, 60\}$ run the experiment 1000 times each and compare the relative frequencies with the true probabilities.
In spite of its easy solution, the birthday problem is famous because, numerically, the probabilities can be a bit surprising. Note that with just 60 people, the event is almost certain! With just 23 people, the probability of the birthday event is about $\frac{1}{2}$; specifically $\P(B_{365,23}) = 0.507$. Mathematically, the rapid increase in the birthday probability, as $n$ increases, is due to the fact that $m^n$ grows much faster than $m^{(n)}$.
Four fair, standard dice are rolled. Find the probability that the scores are distinct.
Answer
$\frac{5}{18}$
In the birthday experiment, set $m = 6$ and select the indicator variable $I$. Vary $n$ with the scrollbar and note graphically how the probabilities change. Now with $n = 4$, run the experiment 1000 times and compare the relative frequency of the event to the corresponding probability.
Five persons are chosen at random.
1. Find the probability that at least 2 have the same birth month.
2. Criticize the sampling model in this setting
Answer
1. $\frac{89}{144}$
2. The number of days in a month varies, so the assumption that a person's birth month is uniformly distributed over the 12 months is not quite accurate.
In the birthday experiment, set $m = 12$ and select the indicator variable $I$. Vary $n$ with the scrollbar and note graphically how the probabilities change. Now with $n = 5$, run the experiment 1000 times and compare the relative frequency of the event to the corresponding probability.
A fast-food restaurant gives away one of 10 different toys with the purchase of a kid's meal. A family with 5 children buys 5 kid's meals. Find the probability that the 5 toys are different.
Answer
$\frac{189}{625}$
In the birthday experiment, set $m = 10$ and select the indicator variable $I$. Vary $n$ with the scrollbar and note graphically how the probabilities change. Now with $n = 5$, run the experiment 1000 times and compare the relative frequency of the event to the corresponding probability.
Let $m = 52$. Find the smallest value of $n$ such that the probability of a duplication is at least $\frac{1}{2}$.
Answer
$n = 9$
The General Birthday Problem
We now return to the more general problem of finding the distribution of the number of distinct sample values and the distribution of the number of excluded sample values.
The Probability Density Function
The number of samples with exactly $j$ values excluded is $\#\{U = j\} = \binom{m}{j} \sum_{k=0}^{m-j} (-1)^k \binom{m - j}{k}(m - j - k)^n, \quad j \in \{\max\{m-n, 0\}, \ldots, m - 1\}$
Proof
For $i \in D$, consider the event that $i$ does not occur in the sample: $A_i = \{Y_i = 0\}$. Now let $K \subseteq D$ with $\#(K) = k$. Using the multiplication rule of combinatorics, it is easy to count the number of samples that do not contain any elements of $K$: $\#\left(\bigcap_{i \in K} A_i\right) = (m - k)^n$ Now the inclusion-exclusion rule of combinatorics can be used to count the number of samples that are missing at least one population value: $\#\left(\bigcup_{i=1}^m A_i\right) = \sum_{k=1}^m (-1)^{k-1} \binom{m}{k}(m - k)^n$ Once we have this, we can use DeMorgan's law to count the number of samples that contain all population values: $\#\left(\bigcap_{i=1}^m A_i^c\right) = \sum_{k=0}^m (-1)^k \binom{m}{k} (m - k)^n$ Now we can use a two-step procedure to generate all samples that exclude exactly $j$ population values: First, choose the $j$ values that are to be excluded. The number of ways to perform this step is $\binom{m}{j}$. Next select a sample of size $n$ from the remaining population values so that none are excluded. The number of ways to perform this step is the result in the last displayed equation, but with $m - j$ replacing $m$. The multiplication principle of combinatorics gives the result.
The distributions of the number of excluded values and the number of distinct values are now easy.
The probability density function of $U$ is given by $\P(U = j) = \binom{m}{j} \sum_{k=0}^{m-j} (-1)^k \binom{m-j}{k}\left(1 - \frac{j + k}{m}\right)^n, \quad j \in \{\max\{m-n,0\}, \ldots, m-1\}$
Proof
Since the samples are uniformly distributed, $\P(U = j) = \#\{U = j\} / m^n$ and so the result follows from the previous exercise.
The probability density function of the number of distinct values $V$ is given by $\P(V = j) = \binom{m}{j} \sum_{k=0}^j (-1)^k \binom{j}{k} \left(\frac{j - k}{m}\right)^n, \quad j \in \{1, 2, \ldots, \min\{m,n\}\}$
Proof
This follows from the previous theorem since $\P(V = j) = \P(U = m - j).$
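The inclusion-exclusion formula is easy to evaluate numerically. Here is a minimal sketch (the helper name is hypothetical) for the density of the number of distinct values, applied to the dice exercise below.

```python
from math import comb

def distinct_pdf(m, n):
    """PDF of V, the number of distinct values when n values are sampled
    with replacement from a population of size m."""
    return {j: comb(m, j) * sum((-1) ** k * comb(j, k) * ((j - k) / m) ** n
                                for k in range(j + 1))
            for j in range(1, min(m, n) + 1)}

pdf = distinct_pdf(6, 10)            # ten fair dice, as in the exercise below
assert abs(sum(pdf.values()) - 1) < 1e-12
print(round(sum(p for j, p in pdf.items() if j <= 4), 5))   # P(V <= 4)
```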
In the birthday experiment, select the number of distinct sample values. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the relative frequency function to the probability density function.
The distribution of the number of excluded values can also be obtained by a recursion argument.
Let $f_{m,n}$ denote the probability density function of the number of excluded values $U$, when the population size is $m$ and the sample size is $n$. Then
1. $f_{m,1}(m - 1) = 1$
2. $f_{m,n+1}(j) = \frac{m - j}{m} f_{m,n}(j) + \frac{j+1}{m} f_{m,n}(j+1)$
Moments
Now we will find the means and variances. The number of excluded values and the number of distinct values are counting variables and hence can be written as sums of indicator variables. As we have seen in many other models, this representation is frequently the best for computing moments.
For $j \in \{1, 2, \ldots, m\}$, let $I_j = \bs{1}(Y_j = 0)$, the indicator variable of the event that $j$ is not in the sample. Note that the number of population values missing in the sample can be written as the sum of the indicator variables: $U = \sum_{j=1}^m I_j$
For distinct $i, \, j \in \{1, 2, \ldots, m\}$,
1. $\E\left(I_j\right) = \left(1 - \frac{1}{m}\right)^n$
2. $\var\left(I_j\right) = \left(1 - \frac{1}{m}\right)^n - \left(1 - \frac{1}{m}\right)^{2\,n}$
3. $\cov\left(I_i, I_j\right) = \left(1 - \frac{2}{m}\right)^n - \left(1 - \frac{1}{m}\right)^{2\,n}$
Proof
Since each population value is equally likely to be chosen, $\P(I_j = 1) = (1 - 1 / m)^n$. Thus, parts (a) and (b) follow from standard results for the mean and variance of an indicator variable. Next, $I_i I_j$ is the indicator variable of the event that $i$ and $j$ are both excluded, so $\P(I_i I_j = 1) = (1 - 2 / m)^n$. Part (c) then follows from the standard formula for covariance.
The expected number of excluded values and the expected number of distinct values are
1. $\E(U) = m \left(1 - \frac{1}{m}\right)^n$
2. $\E(V) = m \left[1 - \left(1 - \frac{1}{m}\right)^n \right]$
Proof
Part (a) follows from the previous exercise and the representation $U = \sum_{j=1}^m I_j$. Part (b) follows from part (a) since $U + V = m$.
The variance of the number of excluded values and the variance of the number of distinct values are $\var(U) = \var(V) = m (m - 1) \left(1 - \frac{2}{m}\right)^n + m \left(1 - \frac{1}{m}\right)^n - m^2 \left(1 - \frac{1}{m}\right)^{2 n}$
Proof
Recall that $\var(U) = \sum_{i=1}^m \sum_{j=1}^m \cov(I_i, I_j)$. Using the results above on the covariance of the indicator variables and simplifying gives the variance of $U$. Also, $\var(V) = \var(U)$ since $U + V = m$.
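The moment formulas can also be checked numerically; the following sketch (illustrative only) evaluates them for the birthday example treated below.

```python
def distinct_moments(m, n):
    """Mean and variance of V, the number of distinct sample values."""
    q1 = (1 - 1 / m) ** n
    q2 = (1 - 2 / m) ** n
    mean = m * (1 - q1)
    var = m * (m - 1) * q2 + m * q1 - m ** 2 * q1 ** 2
    return mean, var

mean, var = distinct_moments(365, 30)
print(round(mean, 4), round(var, 4))
# compare with the birthday exercise below: mean 28.8381, variance 1.0458
```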
In the birthday experiment, select the number of distinct sample values. Vary the parameters and note the size and location of the mean $\pm$ standard-deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the sample mean and variance to the distribution mean and variance.
Examples and Applications
Suppose that 30 persons are chosen at random. Find each of the following:
1. The probability density function of the number of distinct birthdays.
2. The mean of the number of distinct birthdays.
3. The variance of the number of distinct birthdays.
4. The probability that there are at least 28 different birthdays represented.
Answer
1. $\P(V = j) = \binom{365}{j} \sum_{k=0}^j (-1)^k \binom{j}{k} \left(\frac{j-k}{365}\right)^{30}, \quad j \in \{1, 2, \ldots, 30\}$
2. $\E(V) = 28.8381$
3. $\var(V) = 1.0458$
4. $\P(V \ge 28) = 0.89767$
In the birthday experiment, set $m = 365$ and $n = 30$. Run the experiment 1000 times with an update frequency of 10 and compute the relative frequency of the event in part (d) of the last exercise.
Suppose that 10 fair dice are rolled. Find each of the following:
1. The probability density function of the number of distinct scores.
2. The mean of the number of distinct scores.
3. The variance of the number of distinct scores.
4. The probability that there will 4 or fewer distinct scores.
Answer
1. $\P(V = j) = \binom{6}{j} \sum_{k=0}^j (-1)^k \binom{j}{k} \left(\frac{j-k}{6}\right)^{10}, \quad j \in \{1, 2, \ldots, 6\}$
2. $\E(V) = 5.0310$
3. $\var(V) = 0.5503$
4. $\P(V \le 4) = 0.22182$
In the birthday experiment, set $m = 6$ and $n = 10$. Run the experiment 1000 times and compute the relative frequency of the event in part (d) of the last exercise.
A fast food restaurant gives away one of 10 different toys with the purchase of each kid's meal. A family buys 15 kid's meals. Find each of the following:
1. The probability density function of the number of toys that are missing.
2. The mean of the number of toys that are missing.
3. The variance of the number of toys that are missing.
4. The probability that at least 3 toys are missing.
Answer
1. $\P(U = j) = \binom{10}{j} \sum_{k=0}^{10-j} (-1)^k \binom{10-j}{k}\left(1 - \frac{j+k}{10}\right)^{15}, \quad j \in \{0, 1, \ldots, 9\}$
2. $\E(U) = 2.0589$
3. $\var(U) = 0.9864$
4. $\P(U \ge 3) = 0.3174$
In the birthday experiment, set $m = 10$ and $n = 15$. Run the experiment 1000 times and compute the relative frequency of the event in part (d).
The lying students problem. Suppose that 3 students, who ride together, miss a mathematics exam. They decide to lie to the instructor by saying that the car had a flat tire. The instructor separates the students and asks each of them which tire was flat. The students, who did not anticipate this, select their answers independently and at random. Find each of the following:
1. The probability density function of the number of distinct answers.
2. The probability that the students get away with their deception.
3. The mean of the number of distinct answers.
4. The standard deviation of the number of distinct answers.
Answer
1. $j$ 1 2 3
$\P(V = j)$ $\frac{1}{16}$ $\frac{9}{16}$ $\frac{6}{16}$
2. $\P(V = 1) = \frac{1}{16}$
3. $\E(V) = \frac{37}{16}$
4. $\sd(V) = \sqrt{\frac{87}{256}} \approx 0.58296$
The duck hunter problem. Suppose that there are 5 duck hunters, each a perfect shot. A flock of 10 ducks fly over, and each hunter selects one duck at random and shoots. Find each of the following:
1. The probability density function of the number of ducks that are killed.
2. The mean of the number of ducks that are killed.
3. The standard deviation of the number of ducks that are killed.
Answer
1. $j$ 1 2 3 4 5
$\P(V = j)$ $\frac{1}{10\,000}$ $\frac{27}{2000}$ $\frac{9}{50}$ $\frac{63}{125}$ $\frac{189}{625}$
2. $\E(V) = \frac{40\,951}{10\,000} = 4.0951$
3. $\sd(V) \approx 0.7268$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
Definitions
In this section, our random experiment is to sample repeatedly, with replacement, from the population $D = \{1, 2, \ldots, m\}$. This generates a sequence of independent random variables $\bs{X} = (X_1, X_2, \ldots)$, each uniformly distributed on $D$.
We will often interpret the sampling in terms of a coupon collector: each time the collector buys a certain product (bubble gum or Cracker Jack, for example) she receives a coupon (a baseball card or a toy, for example) which is equally likely to be any one of $m$ types. Thus, in this setting, $X_i \in D$ is the coupon type received on the $i$th purchase.
Let $V_n$ denote the number of distinct values in the first $n$ selections, for $n \in \N_+$. This is the random variable studied in the last section on the Birthday Problem. Our interest in this section is in the sample size needed to get a specified number of distinct sample values.
For $k \in \{1, 2, \ldots, m\}$, let $W_k = \min\{n \in \N_+: V_n = k\}$, the sample size needed to get $k$ distinct sample values.
In terms of the coupon collector, this random variable gives the number of products required to get $k$ distinct coupon types. Note that the set of possible values of $W_k$ is $\{k, k + 1, \ldots\}$. We will be particularly interested in $W_m$, the sample size needed to get the entire population. In terms of the coupon collector, this is the number of products required to get the entire set of coupons.
In the coupon collector experiment, run the experiment in single-step mode a few times for selected values of the parameters.
The Probability Density Function
Now let's find the distribution of $W_k$. The results of the previous section will be very helpful.
For $k \in \{1, 2, \ldots, m\}$, the probability density function of $W_k$ is given by $\P(W_k = n) = \binom{m-1}{k-1} \sum_{j=0}^{k-1} (-1)^j \binom{k - 1}{j} \left(\frac{k - j - 1}{m}\right)^{n-1}, \quad n \in \{k, k + 1, \ldots\}$
Proof
Note first that $W_k = n$ if and only if $V_{n-1} = k - 1$ and $V_n = k$. Hence $\P(W_k = n) = \P(V_{n-1} = k - 1) \P(V_n = k \mid V_{n-1} = k - 1) = \frac{m - k + 1}{m} \P(V_{n-1} = k - 1)$ Using the PDF of $V_{n-1}$ from the previous section gives the result.
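Here is a minimal sketch (illustrative only; the helper name is hypothetical) of the density of $W_k$; since the support is infinite, the probabilities are summed over a long but finite range as a sanity check.

```python
from math import comb

def coupon_pdf(m, k, n):
    """P(W_k = n): probability that the k-th distinct value appears on draw n,
    when sampling with replacement from a population of size m."""
    return comb(m - 1, k - 1) * sum((-1) ** j * comb(k - 1, j) * ((k - j - 1) / m) ** (n - 1)
                                    for j in range(k))

m, k = 6, 6                          # roll a die until all six scores appear
total = sum(coupon_pdf(m, k, n) for n in range(k, 400))
print(round(total, 6))               # essentially 1; the tail beyond 400 is negligible
print(round(sum(coupon_pdf(m, k, n) for n in range(10, 400)), 5))
# compare with the die exercise below: P(W >= 10) is about 0.81096
```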
In the coupon collector experiment, vary the parameters and note the shape of and position of the probability density function. For selected values of the parameters, run the experiment 1000 times and compare the relative frequency function to the probability density function.
An alternate approach to the probability density function of $W_k$ is via a recursion formula.
For fixed $m$, let $g_k$ denote the probability density function of $W_k$. Then
1. $g_k(n + 1) = \frac{k-1}{m} g_k(n) + \frac{m - k + 1}{m} g_{k-1}(n)$
2. $g_1(1) = 1$
Decomposition as a Sum
We will now show that $W_k$ can be decomposed as a sum of $k$ independent, geometrically distributed random variables. This will provide some additional insight into the nature of the distribution and will make the computation of the mean and variance easy.
For $i \in \{1, 2, \ldots, m\}$, let $Z_i$ denote the number of additional samples needed to go from $i - 1$ distinct values to $i$ distinct values. Then $\bs{Z} = (Z_1, Z_2, \ldots, Z_m)$ is a sequence of independent random variables, and $Z_i$ has the geometric distribution on $\N_+$ with parameter $p_i = \frac{m - i + 1}{m}$. Moreover, $W_k = \sum_{i=1}^k Z_i, \quad k \in \{1, 2, \ldots, m\}$
This result shows clearly that each time a new coupon is obtained, it becomes harder to get the next new coupon.
In the coupon collector experiment, run the experiment in single-step mode a few times for selected values of the parameters. In particular, try this with $m$ large and $k$ near $m$.
Moments
The decomposition as a sum of independent variables provides an easy way to compute the mean and other moments of $W_k$.
The mean and variance of the sample size needed to get $k$ distinct values are
1. $\E(W_k) = \sum_{i=1}^k \frac{m}{m - i + 1}$
2. $\var(W_k) = \sum_{i=1}^k \frac{(i - 1)m}{(m - i + 1)^2}$
Proof
These results follow from the decomposition of $W_k$ as a sum of independent variables and standard results for the geometric distribution, since $\E(W_k) = \sum_{i=1}^k \E(Z_i)$ and $\var(W_k) = \sum_{i=1}^k \var(Z_i)$.
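A quick sketch (illustrative only) of these sums, for the die and cereal-box examples below:

```python
def coupon_mean_var(m, k):
    """Mean and variance of W_k via the geometric decomposition."""
    mean = sum(m / (m - i + 1) for i in range(1, k + 1))
    var = sum((i - 1) * m / (m - i + 1) ** 2 for i in range(1, k + 1))
    return mean, var

mean, var = coupon_mean_var(6, 6)        # the die example below
print(round(mean, 2), round(var, 2))     # 14.7 38.99
mean, var = coupon_mean_var(10, 10)      # the cereal-box example below
print(round(mean, 4), round(var, 4))     # 29.2897 125.6871
```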
In the coupon collector experiment, vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For selected values of the parameters, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation.
The probability generating function of $W_k$ is given by $\E\left(t^{W_k}\right) = \prod_{i=1}^k \frac{(m - i + 1) t}{m - (i - 1)t}, \quad \left|t\right| \lt \frac{m}{k - 1}$
Proof
This follows from the decomposition of $W_k$ as a sum of independent variables and standard results for the geometric distribution on $\N_+$, since $\E\left(t^{W_k}\right) = \prod_{i=1}^k \E\left(t^{Z_i}\right)$.
Examples and Applications
Suppose that people are sampled at random until 40 distinct birthdays are obtained. Find each of the following:
1. The probability density function of the sample size.
2. The mean of the sample size.
3. The variance of the sample size.
4. The probability generating function of the sample size.
Answer
Let $W$ denote the sample size.
1. $\P(W = n) = \binom{364}{39} \sum_{j=0}^{39} (-1)^j \binom{39}{j} \left(\frac{39-j}{365}\right)^{n-1}$ for $n \in \{40, 41, \ldots\}$
2. $\E(W) = 42.3049$
3. $\var(W) = 2.4878$
4. $\E\left(t^W\right) = \prod_{i=1}^{40} \frac{(366-i) t}{365-(i-1)t}$ for $|t| \lt \frac{365}{39}$
Suppose that a standard, fair die is thrown until all 6 scores have occurred. Find each of the following:
1. The probability density function of the number of throws.
2. The mean of the number of throws.
3. The variance of the number of throws.
4. The probability that at least 10 throws are required.
Answer
Let $W$ denote the number of throws.
1. $\P(W = n) = \sum_{j=0}^5 (-1)^j \binom{5}{j} \left(\frac{5 -j}{6}\right)^{n-1}$ for $n \in \{6, 7, \ldots\}$
2. $\E(W) = 14.7$
3. $\var(W) = 38.99$
4. $\P(W \ge 10) = \frac{1051}{1296} \approx 0.81096$
A box of a certain brand of cereal comes with a special toy. There are 10 different toys in all. A collector buys boxes of cereal until she has all 10 toys. Find each of the following:
1. The probability density function of the number boxes purchased.
2. The mean of the number of boxes purchased.
3. The variance of the number of boxes purchased.
4. The probability that no more than 15 boxes were purchased.
Answer
Let $W$ denote the number of boxes purchased.
1. $\P(W = n) = \sum_{j=0}^9 (-1)^j \binom{9}{j} \left(\frac{9-j}{10}\right)^{n-1}$, for $n \in \{10, 11, \ldots\}$
2. $\E(W) = 29.2897$
3. $\var(W) = 125.6871$
4. $\P(W \le 15) = 0.04595$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/12%3A_Finite_Sampling_Models/12.07%3A_The_Coupon_Collector_Problem.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Basic Theory
The Model
Pólya's urn scheme is a dichotomous sampling model that generalizes the hypergeometric model (sampling without replacement) and the Bernoulli model (sampling with replacement). Pólya's urn process leads to a famous example of a sequence of random variables that is exchangeable, but not independent, and has deep connections with the beta-Bernoulli process.
Suppose that we have an urn (what else!) that initially contains $a$ red and $b$ green balls, where $a$ and $b$ are positive integers. At each discrete time (trial), we select a ball from the urn and then return the ball to the urn along with $c$ new balls of the same color. Ordinarily, the parameter $c$ is a nonnegative integer. However, the model actually makes sense if $c$ is a negative integer, if we interpret this to mean that we remove the balls rather than add them, and assuming that there are enough balls of the proper color in the urn to perform this action. In any case, the random process is known as Pólya's urn process, named for George Pólya.
In terms of the colors of the selected balls, Pólya's urn scheme generalizes the standard models of sampling with and without replacement.
1. $c = 0$ corresponds to sampling with replacement.
2. $c = -1$ corresponds to sampling without replacement.
For the most part, we will assume that $c$ is nonnegative so that the process can be continued indefinitely. Occasionally we consider the case $c = -1$ so that we can interpret the results in terms of sampling without replacement.
The Outcome Variables
Let $X_i$ denote the color of the ball selected at time $i$, where 0 denotes green and 1 denotes red. Mathematically, our basic random process is the sequence of indicator variables $\bs{X} = (X_1, X_2, \ldots)$, known as the Pólya process. As with any random process, our first goal is to compute the finite dimensional distributions of $\bs{X}$. That is, we want to compute the joint distribution of $(X_1, X_2, \ldots, X_n)$ for each $n \in \N_+$. Some additional notation will really help. Recall the generalized permutation formula in our study of combinatorial structures: for $r, \, s \in \R$ and $j \in \N$, we defined $r^{(s,j)} = r (r + s) (r + 2 s) \cdots [r + (j-1)s]$ Note that the expression has $j$ factors, starting with $r$, and with each factor obtained by adding $s$ to the previous factor. As usual, we adopt the convention that a product over an empty index set is 1. Hence $r^{(s,0)} = 1$ for every $r$ and $s$.
Recall that
1. $r^{(0,j)} = r^j$, an ordinary power
2. $r^{(-1,j)} = r^{(j)} = r (r - 1) \cdots (r - j + 1)$, a descending power
3. $r^{(1,j)} = r^{[j]} = r (r + 1) \cdots (r + j - 1)$, an ascending power
4. $r^{(r,j)} = j! r^j$
5. $1^{(1,j)} = j!$
The following simple result will turn out to be quite useful.
Suppose that $r, \, s \in (0, \infty)$ and $j \in \N$. Then $\frac{r^{(s, j)}}{s^j} = \left(\frac{r}{s}\right)^{[j]}$
Proof
It's just a matter of grouping the factors: \begin{align*} \frac{r^{(s, j)}}{s^j} & = \frac{r (r + s) (r + 2 s) \cdots [r + ( j - 1)s]}{s^j} \ & = \left(\frac{r}{s}\right) \left(\frac{r}{s} + 1 \right) \left(\frac{r}{s} + 2\right) \cdots \left[\frac{r}{s} + (j - 1)\right] = \left(\frac{r}{s}\right)^{[j]} \end{align*}
The finite dimensional distributions are easy to compute using the multiplication rule of conditional probability. If we know the contents of the urn at any given time, then the probability of an outcome at the next time is all but trivial.
Let $n \in \N_+$, $(x_1, x_2, \ldots, x_n) \in \{0,1\}^n$ and let $k = x_1 + x_2 + \cdots + x_n$. Then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{a^{(c,k)} b^{(c, n - k)}}{(a + b)^{(c,n)}}$
Proof
By the multiplication rule for conditional probability, $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \P(X_1 = x_1) \P(X_2 = x_2 \mid X_1 = x_1) \cdots \P(X_n = x_n \mid X_1 = x_1, \ldots, X_{n-1} = x_{n-1})$ Of course, if we know that the urn has, say, $r$ red and $g$ green balls at a particular time, then the probability of a red ball on the next draw is $r / (r + g)$ while the probability of a green ball is $g / (r + g)$. The right side of the displayed equation above has $n$ factors. The denominators are the total number of balls at the $n$ times, and form the product $(a + b) (a + b + c) \ldots [a + b + (n - 1) c] = (a + b)^{(c,n)}$. In the numerators, $k$ of the factors correspond to probabilities of selecting red balls; these factors form the product $a (a + c) \cdots [a + (k - 1)c] = a^{(c,k)}$. The remaining $n - k$ factors in the numerators correspond to selecting green balls; these factors form the product $b (b + c) \cdots [b + (n - k - 1)c] = b^{(c, n-k)}$.
The joint probability in the previous exercise depends on $(x_1, x_2, \ldots, x_n)$ only through the number of red balls $k = \sum_{i=1}^n x_i$ in the sample. Thus, the joint distribution is invariant under a permutation of $(x_1, x_2, \ldots, x_n)$, and hence $\bs{X}$ is an exchangeable sequence of random variables. This means that for each $n$, all permutations of $(X_1, X_2, \ldots, X_n)$ have the same distribution. Of course the joint distribution reduces to the formulas we have obtained earlier in the special cases of sampling with replacement ($c = 0$) or sampling without replacement ($c = - 1$), although in the latter case we must have $n \le a + b$. When $c \gt 0$, the Pólya process is a special case of the beta-Bernoulli process, studied in the chapter on Bernoulli trials.
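The finite dimensional distribution and the exchangeable property can be checked numerically. The following sketch (illustrative only; the helper names are not from the text) computes the joint probability both from the closed form and directly from the urn dynamics, and verifies that every permutation of an outcome has the same probability.

```python
from itertools import permutations

def gen_perm(r, s, j):
    """Generalized permutation r^(s,j) = r (r + s) (r + 2s) ... (r + (j-1)s)."""
    prod = 1
    for i in range(j):
        prod *= r + i * s
    return prod

def polya_joint(x, a, b, c):
    """Closed form for P(X_1 = x_1, ..., X_n = x_n) in the Polya process."""
    n, k = len(x), sum(x)
    return gen_perm(a, c, k) * gen_perm(b, c, n - k) / gen_perm(a + b, c, n)

def polya_joint_dynamics(x, a, b, c):
    """The same probability, computed step by step from the urn dynamics."""
    prob, red, green = 1.0, a, b
    for xi in x:
        prob *= (red if xi == 1 else green) / (red + green)
        red, green = red + c * xi, green + c * (1 - xi)
    return prob

a, b, c = 2, 3, 1
x = (1, 1, 0, 1, 0)
# every permutation of the outcome has the same probability (exchangeability),
# and each agrees with the closed form
for p in set(permutations(x)):
    assert abs(polya_joint_dynamics(p, a, b, c) - polya_joint(x, a, b, c)) < 1e-12
```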
The Pólya process $\bs X = (X_1, X_2, \ldots)$ with parameters $a, \, b, \, c \in \N_+$ is the beta-Bernoulli process with parameters $a / c$ and $b / c$. That is, for $n \in \N_+$, $(x_1, x_2, \ldots, x_n) \in \{0,1\}^n$, and with $k = x_1 + x_2 + \cdots + x_n$, $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{(a / c)^{[k]} (b / c)^{[n-k]}}{(a / c + b / c)^{[n]}}$
Proof
From the previous two results, $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{a^{(c,k)} b^{(c, n - k)}}{(a + b)^{(c,n)}} =\frac{[a^{(c,k)} / c^k] [b^{(c, n - k)} / c^{n-k}]}{(a + b)^{(c,n)} / c^n} = \frac{(a /c)^{[k]} (b / c)^{[n-k]}}{(a / c + b / c)^{[n]}}$ and this is the corresponding finite dimensional distribution of the beta-Bernoulli distribution with parameters $a / c$ and $(b / c)$.
Recall that the beta-Bernoulli process is obtained, in the usual formulation, by randomizing the success parameter in a Bernoulli trials sequence, giving the success parameter a beta distribution. So specifically, suppose $a, \, b, \, c \in \N_+$ and that random variable $P$ has the beta distribution with parameters $a / c$ and $b / c$. Suppose also that given $P = p \in (0, 1)$, the random process $\bs X = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p$. Then $\bs X$ is the Pólya process with parameters $a, \, b, \, c$. This is a fascinating connection between two processes that at first, seem to have little in common. In fact however, every exchangeable sequence of indicator random variables can be obtained by randomizing the success parameter in a sequence of Bernoulli trials. This is de Finetti's theorem, named for Bruno de Finetti, which is studied in the section on backwards martingales. When $c \in \N_+$, all of the results in this section are special cases of the corresponding results for the beta-Bernoulli process, but it's still interesting to interpret the results in terms of the urn model.
For each $i \in \N_+$
1. $\E(X_i) = \frac{a}{a + b}$
2. $\var(X_i) = \frac{a}{a + b} \frac{b}{a + b}$
Proof
Since the sequence is exchangeable, $X_i$ has the same distribution as $X_1$, so $\P(X_i = 1) = \frac{a}{a + b}$. The mean and variance now follow from standard results for indicator variables.
Thus $\bs{X}$ is a sequence of identically distributed variables, quite surprising at first but of course inevitable for any exchangeable sequence. Compare the joint and marginal distributions. Note that $\bs{X}$ is an independent sequence if and only if $c = 0$, when we have simple sampling with replacement. Pólya's urn is one of the most famous examples of a random process in which the outcome variables are exchangeable, but dependent (in general).
Next, let's compute the covariance and correlation of a pair of outcome variables.
Suppose that $i, \, j \in \N_+$ are distinct. Then
1. $\cov\left(X_i, X_j\right) = \frac{a b c}{(a + b)^2 (a + b + c)}$
2. $\cor\left(X_i, X_j\right) = \frac{c}{a + b + c}$
Proof
Since the variables are exchangeable, $\P(X_i = 1, X_j = 1) = \P(X_1 = 1, X_2 = 1) = \frac{a}{a + b} \frac{a + c}{a + b + c}$. The results now follow from standard formulas for covariance and correlation.
Thus, the variables are positively correlated if $c \gt 0$, negatively correlated if $c \lt 0$, and uncorrelated (in fact, independent), if $c = 0$. These results certainly make sense when we recall the dynamics of Pólya's urn. It turns out that in any infinite sequence of exchangeable variables, the variables must be nonnegatively correlated. Here is another result that explores how the variables are related.
Suppose that $n \in \N_+$ and $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$. Let $k = \sum_{i=1}^n x_i$. Then $\P(X_{n+1} = 1 \mid X_1 = x_1, X_2 = x_2, \ldots X_n = x_n) = \frac{a + k c}{a + b + n c}$
Proof
Using the joint distribution, \begin{align*} \P(X_{n+1} = 1 \mid X_1 = x_1, X_2 = x_2, \ldots X_n = x_n) & = \frac{\P( X_1 = x_1, X_2 = x_2, \ldots X_n = x_n, X_{n+1} = 1)}{\P(X_1 = x_1, X_2 = x_2, \ldots X_n = x_n)} \ & = \frac{a^{(c, k+1)} b^{(c, n-k)}}{(a + b)^{(c, n + 1)}} \frac{(a + b)^{(c, n)}}{a^{(c, k)} b^{(c, n-k)}} = \frac{a + c k}{a + b + c n} \end{align*}
Pólya's urn is described by a sequence of indicator variables. We can study the same derived random processes that we studied with Bernoulli trials: the number of red balls in the first $n$ trials, the trial number of the $k$th red ball, and so forth.
The Number of Red Balls
For $n \in \N$, the number of red balls selected in the first $n$ trials is $Y_n = \sum_{i=1}^n X_i$ so that $\bs{Y} = (Y_0, Y_1, \ldots)$ is the partial sum process associated with $\bs{X} = (X_1, X_2, \ldots)$.
Note that
1. The number of green balls selected in the first $n$ trials is $n - Y_n$.
2. The number of red balls in the urn after the first $n$ trials is $a + c \, Y_n$.
3. The number of green balls in the urn after the first $n$ trials is $b + c (n - Y_n)$.
4. The number of balls in the urn after the first $n$ trials is $a + b + c \, n$.
The basic analysis of $\bs{Y}$ follows easily from our work with $\bs{X}$.
The probability density function of $Y_n$ is given by $\P(Y_n = k) = \binom{n}{k} \frac{a^{(c,k)} b^{(c, n-k)}}{(a + b)^{(c,n)}}, \quad k \in \{0, 1, \ldots, n\}$
Proof
$\P(Y_n = y)$ is the sum of $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)$ over all $(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ with $\sum_{i=1}^n x_i = k$. There are $\binom{n}{k}$ such sequences, and each has the probability given above.
The distribution defined by this probability density function is known, appropriately enough, as the Pólya distribution with parameters $n$, $a$, $b$, and $c$. Of course, the distribution reduces to the binomial distribution with parameters $n$ and $a / (a + b)$ in the case of sampling with replacement ($c = 0$) and to the hypergeometric distribution with parameters $n$, $a$, and $b$ in the case of sampling without replacement ($c = - 1$), although again in this case we need $n \le a + b$. When $c \gt 0$, the Pólya distribution is a special case of the beta-binomial distribution.
If $a, \, b, \, c \in \N_+$ then the Pólya distribution with parameters $a, \, b, \, c$ is the beta-binomial distribution with parameters $a / c$ and $b / c$. That is, $P(Y_n = k) = \binom{n}{k} \frac{(a / c)^{[k]} (b/c)^{[n - k]}}{(a/c + b/c)^{[n]}}, \quad k \in \{0, 1, \ldots, n\}$
Proof
This follows immediately from the result above that $\bs X = (X_1, X_2, \ldots)$ is the beta-Bernoulli process with parameters $a / c$ and $b / c$. So by definition, $Y_n = \sum_{i=1}^n X_i$ has the beta-binomial distribution with parameters $n$, $a / c$, and $b / c$. A direct proof is also simple using the permutation formula above: $\P(Y_n = k) = \binom{n}{k} \frac{a^{(c,k)} b^{(c, n-k)}}{(a + b)^{(c,n)}} = \binom{n}{k} \frac{[a^{(c,k)} / c^k] [b^{(c, n-k)} / c^{n-k}]}{(a + b)^{(c,n)} / c^n} = \binom{n}{k} \frac{(a / c)^{[k]} (b/c)^{[n - k]}}{(a/c + b/c)^{[n]}}, \quad k \in \{0, 1, \ldots, n\}$
The case where all three parameters are equal is particularly interesting.
If $a = b = c$ then $Y_n$ is uniformly distributed on $\{0, 1, \ldots, n\}$.
Proof
This follows from the previous result, since the beta-binomial distribution with parameters $n$, 1, and 1 reduces to the uniform distribution. Specifically, note that $1^{[k]} = k!$, $1^{[n-k]} = (n - k)!$ and $2^{[n]} = (n + 1)!$. So substituting gives $\P(Y_n = k) = \frac{n!}{k!(n - k)!} \frac{k! (n - k)!}{(n + 1)!} = \frac{1}{n + 1}, \quad k \in \{0, 1, \ldots, n\}$
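Here is a minimal sketch (illustrative only; the helper names are hypothetical) of the Pólya density, with a check of the uniform special case $a = b = c$.

```python
from math import comb

def gen_perm(r, s, j):
    """Generalized permutation r^(s,j) = r (r + s) (r + 2s) ... (r + (j-1)s)."""
    prod = 1
    for i in range(j):
        prod *= r + i * s
    return prod

def polya_pdf(n, a, b, c):
    """PDF of Y_n, the number of red balls selected in the first n trials."""
    return [comb(n, k) * gen_perm(a, c, k) * gen_perm(b, c, n - k) / gen_perm(a + b, c, n)
            for k in range(n + 1)]

print(polya_pdf(5, 1, 1, 1))                 # uniform on {0, 1, ..., 5}: each value is 1/6
assert abs(sum(polya_pdf(10, 6, 4, 2)) - 1) < 1e-12
```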
In general, the Pólya family of distributions has a diverse collection of shapes.
Start the simulation of the Pólya Urn Experiment. Vary the parameters and note the shape of the probability density function. In particular, note when the function is skewed, when the function is symmetric, when the function is unimodal, when the function is monotone, and when the function is U-shaped. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
The Pólya probability density function is
1. unimodal if $a \gt b \gt c$ and $n \gt \frac{a - c}{b - c}$
2. unimodal if $b \gt a \gt c$ and $n \gt \frac{b - c}{a - c}$
3. U-shaped if $c \gt a \gt b$ and $n \gt \frac{c - b}{c - a}$
4. U-shaped if $c \gt b \gt a$ and $n \gt \frac{c - a}{c - b}$
5. increasing if $b \lt c \lt a$
6. decreasing if $a \lt c \lt b$
Proof
These results follow from solving the inequality $\P(Y_n = k) \gt \P(Y_n = k - 1)$.
Next, let's find the mean and variance. Curiously, the mean does not depend on the parameter $c$.
The mean and variance of the number of red balls selected are
1. $\E(Y_n) = n \frac{a}{a + b}$
2. $\var(Y_n) = n \frac{a b}{(a + b)^2} \left[1 + (n - 1) \frac{c}{a + b + c}\right]$
Proof
These results follow from the mean and covariance of the indicator variables given above, and basic properties of expected value and variance.
1. $\E(Y_n) = \sum_{i=1}^n \E(X_i)$
2. $\var(Y_n) = \sum_{i=1}^n \sum_{j=1}^n \cov\left(X_i, X_j\right)$
Start the simulation of the Pólya Urn Experiment. Vary the parameters and note the shape and location of the mean $\pm$ standard deviation bar. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
Explicitly compute the probability density function, mean, and variance of $Y_5$ when $a = 6$, $b = 4$, and for the values of $c \in \{-1, 0, 1, 2, 3, 10\}$. Sketch the graph of the density function in each case.
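One way to carry out computations like those in the exercise above is sketched below in Python (exact arithmetic with fractions; the helper names are ours). It computes the mean and variance of $Y_5$ from the probability density function and confirms that they agree with the closed-form expressions above for each value of $c$.

```python
from fractions import Fraction
from math import comb

def gperm(a, c, k):
    # generalized permutation a^(c,k), computed exactly
    p = Fraction(1)
    for j in range(k):
        p *= a + j * c
    return p

def polya_mean_var(n, a, b, c):
    pmf = [Fraction(comb(n, k)) * gperm(a, c, k) * gperm(b, c, n - k)
           / gperm(a + b, c, n) for k in range(n + 1)]
    mean = sum(k * p for k, p in enumerate(pmf))
    var = sum(k * k * p for k, p in enumerate(pmf)) - mean ** 2
    return pmf, mean, var

n, a, b = 5, 6, 4
for c in [-1, 0, 1, 2, 3, 10]:
    pmf, mean, var = polya_mean_var(n, a, b, c)
    m = Fraction(n * a, a + b)                                        # closed-form mean
    v = Fraction(n * a * b, (a + b) ** 2) * (1 + Fraction((n - 1) * c, a + b + c))  # closed-form variance
    print(c, mean == m, var == v)   # both comparisons should print True
```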
Fix $a$, $b$, and $n$, and let $c \to \infty$. Then
1. $\P(Y_n = 0) \to \frac{b}{a + b}$
2. $\P(Y_n = n) \to \frac{a}{a + b}$
3. $\P\left(Y_n \in \{1, 2, \ldots, n - 1\}\right) \to 0$
Proof
Note that $\P(Y_n = 0) = \frac{b^{(c, n)}}{(a + b)^{(c, n)}}$. The numerator and denominator each have $n$ factors. If these factors are grouped into a product of $n$ fractions, then the first is $\frac{b}{a + b}$. The rest have the form $\frac{b + j \, c}{a + b + j \, c}$ where $j \in \{1, 2, \ldots, n - 1\}$. Each of these converges to 1 as $c \to \infty$. Part (b) follows by a similar argument. Part (c) follows from (a) and (b) and the complement rule.
Thus, the limiting distribution of $Y_n$ as $c \to \infty$ is concentrated on 0 and $n$. The limiting probabilities are just the initial proportion of green and red balls, respectively. Interpret this result in terms of the dynamics of Pólya's urn scheme.
Our next result gives the conditional distribution of $X_{n+1}$ given $Y_n$.
Suppose that $n \in \N$ and $k \in \{0, 1, \ldots, n\}$. Then $\P(X_{n+1} = 1 \mid Y_n = k) = \frac{a + c k}{a + b + c n }$
Proof
Let $S = \left\{(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n: \sum_{i=1}^n x_i = k\right\}$ and let $\bs{X}_n = (X_1, X_2, \ldots, X_n)$. Note that the events $\{\bs{X}_n = \bs{x}\}$ over $\bs{x} \in S$ partition the event $\{Y_n = k\}$. Conditioning on $\bs{X}_n$, \begin{align*} \P(X_{n+1} = 1 \mid Y_n = k) & = \sum_{\bs x \in S} \P(X_{n+1} = 1 \mid Y_n = k, \bs{X}_n = \bs{x}) \P(\bs{X}_n = \bs{x} \mid Y_n = k) \\ & = \sum_{\bs x \in S} \P(X_{n+1} = 1 \mid \bs{X}_n = \bs x) \P(\bs{X}_n = \bs x \mid Y_n = k) \end{align*} But from our result above, $\P(X_{n+1} = 1 \mid \bs{X}_n = \bs x) = (a + c k) / (a + b + cn)$ for every $\bs{x} \in S$. Hence $\P(X_{n+1} = 1 \mid Y_n = k) = \frac{a + c k}{a + b + c n} \sum_{\bs x \in S} \P(\bs{X}_n = \bs x \mid Y_n = k)$ The last sum is 1.
In particular, if $a = b = c$ then $\P(X_{n+1} = 1 \mid Y_n = n) = \frac{n + 1}{n + 2}$. This is Laplace's rule of succession, another interesting connection. The rule is named for Pierre Simon Laplace, and is studied from a different point of view in the section on Independence.
The Proportion of Red Balls
Suppose that $c \in \N$, so that the process continues indefinitely. For $n \in \N_+$, the proportion of red balls selected in the first $n$ trials is $M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ This is an interesting variable, since a little reflection suggests that it may have a limit as $n \to \infty$. Indeed, if $c = 0$, then $M_n$ is just the sample mean corresponding to $n$ Bernoulli trials. Thus, by the law of large numbers, $M_n$ converges to the success parameter $\frac{a}{a + b}$ as $n \to \infty$ with probability 1. On the other hand, the proportion of red balls in the urn after $n$ trials is $Z_n = \frac{a + c Y_n}{a + b + c n}$ When $c = 0$, of course, $Z_n = \frac{a}{a + b}$ so that in this case, $Z_n$ and $M_n$ have the same limiting behavior. Note that $Z_n = \frac{a}{a + b + c n} + \frac{c n}{a + b + c n} M_n$ Since the constant term converges to 0 as $n \to \infty$ and the coefficient of $M_n$ converges to 1 as $n \to \infty$, it follows that the limits of $M_n$ and $Z_n$ as $n \to \infty$ will be the same, if the limit exists, for any mode of convergence: with probability 1, in mean, or in distribution. Here is the general result when $c \gt 0$.
Suppose that $a, \, b, \, c \in \N_+$. There exists a random variable $P$ having the beta distribution with parameters $a / c$ and $b / c$ such that $M_n \to P$ and $Z_n \to P$ as $n \to \infty$ with probability 1 and in mean square, and hence also in distribution.
Proof
As noted earlier, the urn process is equivalent to the beta-Bernoulli process with parameters $a / c$ and $b / c$. We showed in that section that $M_n \to P$ as $n \to \infty$ with probability 1 and in mean square, where $P$ is the beta random variable used in the construction.
It turns out that the random process $\bs Z = \{Z_n = (a + c Y_n) / (a + b + c n): n \in \N\}$ is a martingale. The theory of martingales provides powerful tools for studying convergence in Pólya's urn process. As an interesting special case, note that if $a = b = c$ then the limiting distribution is the uniform distribution on $(0, 1)$.
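The limiting beta distribution is easy to see in a direct simulation of the urn. The following Python sketch (ours, with illustrative parameter values and seed) runs many independent copies of the urn for a fixed large number of draws; with $a = b = c = 1$ the empirical distribution of $Z_n$ should be approximately uniform on $(0, 1)$.

```python
import random

def polya_urn_Z(a, b, c, n, rng=random):
    """Simulate n draws from Polya's urn; return Z_n, the final proportion of red balls."""
    red, green = a, b
    for _ in range(n):
        if rng.random() < red / (red + green):
            red += c      # red ball drawn: return it along with c new red balls
        else:
            green += c    # green ball drawn: return it along with c new green balls
    return red / (red + green)

random.seed(17)
a, b, c, n, reps = 1, 1, 1, 1000, 10000
samples = [polya_urn_Z(a, b, c, n) for _ in range(reps)]
# crude check of uniformity: the fraction of samples in each quarter of (0, 1)
quarters = [sum(1 for z in samples if i / 4 <= z < (i + 1) / 4) / reps for i in range(4)]
print(quarters)   # each entry should be roughly 0.25
```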
The Trial Number of the $k$th Red Ball
Suppose again that $c \in \N$, so that the process continues indefinitely. For $k \in \N_+$ let $V_k$ denote the trial number of the $k$th red ball selected. Thus $V_k = \min\{n \in \N_+: Y_n = k\}$ Note that $V_k$ takes values in $\{k, k + 1, \ldots\}$. The random processes $\bs{V} = (V_1, V_2, \ldots)$ and $\bs{Y} = (Y_1, Y_2, \ldots)$ are inverses of each other in a sense.
For $k, \, n \in \N_+$ with $k \le n$,
1. $V_k \le n$ if and only if $Y_n \ge k$
2. $V_k = n$ if and only if $Y_{n-1} = k - 1$ and $X_n = 1$
The probability density function of $V_k$ is given by $\P(V_k = n) = \binom{n - 1}{k - 1} \frac{a^{(c,k)} b^{(c,n-k)}}{(a + b)^{(c,n)}}, \quad n \in \{k, k + 1, \ldots\}$
Proof
We condition on $Y_{n-1}$. Using the PDF of $Y_{n-1}$ and the result above, \begin{align*} \P(V_k = n) & = \P(Y_{n-1} = k - 1, X_n = 1) = \P(Y_{n-1} = k - 1) \P(X_n = 1 \mid Y_{n-1} = k - 1) \\ & = \binom{n - 1}{k - 1} \frac{a^{(c, k - 1)} b^{(c, (n - 1) - (k - 1))}}{(a + b)^{(c, n-1)}} \frac{a + c (k - 1)}{a + b + c (n - 1)} = \binom{n - 1}{k - 1} \frac{a^{(c, k)} b^{(c, n-k)}}{(a + b)^{(c, n)}} \end{align*}
Of course this probability density function reduces to the negative binomial density function with trial parameter $k$ and success parameter $p = \frac{a}{a+b}$ when $c = 0$ (sampling with replacement). When $c \gt 0$, the distribution is a special case of the beta-negative binomial distribution.
If $a, \, b, \, c \in \N_+$ then $V_k$ has the beta-negative binomial distribution with parameters $k$, $a / c$, and $b / c$. That is, $\P(V_k = n) = \binom{n - 1}{k - 1} \frac{(a/c)^{[k]} (b/c)^{[n-k]}}{(a / c + b / c)^{[n]}}, \quad n \in \{k, k + 1, \ldots\}$
Proof
As with previous proofs, this result follows since the underlying process $\bs X = (X_1, X_2, \ldots)$ is the beta-Bernoulli process with parameters $a / c$ and $b / c$. The form of the PDF also follows easily from the previous result by dividing the numerator and denominator by $c^n$.
If $a = b = c$ then $\P(V_k = n) = \frac{k}{n (n + 1)}, \quad n \in \{k, k + 1, k + 2, \ldots\}$
Proof
As in the corresponding proof for the number of red balls, the fraction in the PDF of $V_k$ in the previous result reduces to $\frac{k! (n - k)!}{(n + 1)!}$, while the binomial coefficient is $\frac{(n - 1)!}{(k - 1)! (n - k)!}$.
Fix $a$, $b$, and $k$, and let $c \to \infty$. Then
1. $\P(V_k = k) \to \frac{a}{a + b}$
2. $\P(V_k \in \{k + 1, k + 2, \ldots\}) \to 0$
Thus, the limiting distribution of $V_k$ is concentrated on $k$ and $\infty$. The limiting probabilities at these two points are just the initial proportion of red and green balls, respectively. Interpret this result in terms of the dynamics of Pólya's urn scheme.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
In this section we will study a nice problem known variously as the secretary problem or the marriage problem. It is simple to state and not difficult to solve, but the solution is interesting and a bit surprising. Also, the problem serves as a nice introduction to the general area of statistical decision making.
Statement of the Problem
As always, we must start with a clear statement of the problem.
We have $n$ candidates (perhaps applicants for a job or possible marriage partners). The assumptions are
1. The candidates are totally ordered from best to worst with no ties.
2. The candidates arrive sequentially in random order.
3. We can only determine the relative ranks of the candidates as they arrive. We cannot observe the absolute ranks.
4. Our goal is choose the very best candidate; no one less will do.
5. Once a candidate is rejected, she is gone forever and cannot be recalled.
6. The number of candidates $n$ is known.
The assumptions, of course, are not entirely reasonable in real applications. The last assumption, for example, that $n$ is known, is more appropriate for the secretary interpretation than for the marriage interpretation.
What is an optimal strategy? What is the probability of success with this strategy? What happens to the strategy and the probability of success as $n$ increases? In particular, when $n$ is large, is there any reasonable hope of finding the best candidate?
Strategies
Play the secretary game several times with $n = 10$ candidates. See if you can find a good strategy just by trial and error.
After playing the secretary game a few times, it should be clear that the only reasonable type of strategy is to let a certain number $k - 1$ of the candidates go by, and then select the first candidate we see who is better than all of the previous candidates (if she exists). If she does not exist (that is, if no candidate better than all previous candidates appears), we will agree to accept the last candidate, even though this means failure. The parameter $k$ must be between 1 and $n$; if $k = 1$, we select the first candidate; if $k = n$, we select the last candidate; for any other value of $k$, the selected candidate is random, distributed on $\{k, k + 1, \ldots, n\}$. We will refer to this let $k - 1$ go by strategy as strategy $k$.
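Before doing the exact analysis, it may help to see strategy $k$ in code. The Python sketch below (the function names are ours, not part of the app) plays the strategy on a random permutation of the ranks $1, 2, \ldots, n$, with rank 1 the best candidate, and estimates the success probability by simulation.

```python
import random

def play_strategy(n, k, rng=random):
    """Return True if strategy k selects the best candidate (rank 1) among n candidates."""
    ranks = list(range(1, n + 1))
    rng.shuffle(ranks)                      # random arrival order
    if k == 1:
        return ranks[0] == 1                # select the first candidate
    best_rejected = min(ranks[:k - 1])      # best among the first k - 1 (rejected) candidates
    for r in ranks[k - 1:]:
        if r < best_rejected:               # first candidate better than all previous candidates
            return r == 1
    return ranks[-1] == 1                   # no such candidate: accept the last one

def estimate_success(n, k, reps=100_000, rng=random):
    return sum(play_strategy(n, k, rng) for _ in range(reps)) / reps

random.seed(2024)
print(estimate_success(10, 4))   # should be close to 0.3987, as in the table below
```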
Thus, we need to compute the probability of success $p_n(k)$ using strategy $k$ with $n$ candidates. Then we can maximize the probability over $k$ to find the optimal strategy, and then take the limit over $n$ to study the asymptotic behavior.
Analysis
First, let's do some basic computations.
For the case $n = 3$, list the 6 permutations of $\{1, 2, 3\}$ and verify the probabilities in the table below. Note that $k = 2$ is optimal.
$k$ 1 2 3
$p_3(k)$ $\frac{2}{6}$ $\frac{3}{6}$ $\frac{2}{6}$
Answer
The following table gives the $3! = 6$ permutations of the candidates $(1, 2, 3)$, and the candidate selected by each strategy. The last row gives the total number of successes for each strategy.
Permutation $k = 1$ $k = 2$ $k = 3$
$(1, 2, 3)$ 1 3 3
$(1, 3, 2)$ 1 2 2
$(2, 1, 3)$ 2 1 3
$(2, 3, 1)$ 2 1 1
$(3, 1, 2)$ 3 1 2
$(3, 2, 1)$ 3 2 1
Total 2 3 2
In the secretary experiment, set the number of candidates to $n = 3$. Run the experiment 1000 times with each strategy $k \in \{1, 2, 3\}$
For the case $n = 4$, list the 24 permutations of $\{1, 2, 3, 4\}$ and verify the probabilities in the table below. Note that $k = 2$ is optimal.
$k$ 1 2 3 4
$p_4(k)$ $\frac{6}{24}$ $\frac{11}{24}$ $\frac{10}{24}$ $\frac{6}{24}$
Answer
The following table gives the $4! = 24$ permutations of the candidates $(1, 2, 3, 4)$, and the candidate selected by each strategy. The last row gives the total number of successes for each strategy.
Permutation $k = 1$ $k = 2$ $k = 3$ $k = 4$
$(1, 2, 3, 4)$ 1 4 4 4
$(1, 2, 4, 3)$ 1 3 3 3
$(1, 3, 2, 4)$ 1 4 4 4
$(1, 3, 4, 2)$ 1 2 2 2
$(1, 4, 2, 3)$ 1 3 3 3
$(1, 4, 3, 2)$ 1 2 2 2
$(2, 1, 3, 4)$ 2 1 4 4
$(2, 1, 4, 3)$ 2 1 3 3
$(2, 3, 1, 4)$ 2 1 1 4
$(2, 3, 4, 1)$ 2 1 1 1
$(2, 4, 1, 3)$ 2 1 1 3
$(2, 4, 3, 1)$ 2 1 1 1
$(3, 1, 2, 4)$ 3 1 4 4
$(3, 1, 4, 2)$ 3 1 2 2
$(3, 2, 1, 4)$ 3 2 1 4
$(3, 2, 4, 1)$ 3 2 1 1
$(3, 4, 1, 2)$ 3 1 1 2
$(3, 4, 2, 1)$ 3 2 2 1
$(4, 1, 2, 3)$ 4 1 3 3
$(4, 1, 3, 2)$ 4 1 2 2
$(4, 2, 1, 3)$ 4 2 1 3
$(4, 2, 3, 1)$ 4 2 1 1
$(4, 3, 1, 2)$ 4 3 1 2
$(4, 3, 2, 1)$ 4 3 2 1
Total 6 11 10 6
In the secretary experiment, set the number of candidates to $n = 4$. Run the experiment 1000 times with each strategy $k \in \{1, 2, 3, 4\}$
For the case $n = 5$, list the 120 permutations of $\{1, 2, 3, 4, 5\}$ and verify the probabilities in the table below. Note that $k = 3$ is optimal.
$k$ 1 2 3 4 5
$p_5(k)$ $\frac{24}{120}$ $\frac{50}{120}$ $\frac{52}{120}$ $\frac{42}{120}$ $\frac{24}{120}$
In the secretary experiment, set the number of candidates to $n = 5$. Run the experiment 1000 times with each strategy $k \in \{1, 2, 3, 4, 5\}$
Well, clearly we don't want to keep doing this. Let's see if we can find a general analysis. With $n$ candidates, let $X_n$ denote the number (arrival order) of the best candidate, and let $S_{n,k}$ denote the event of success for strategy $k$ (we select the best candidate).
$X_n$ is uniformly distributed on $\{1, 2, \ldots, n\}$.
Proof
This follows since the candidates arrive in random order.
Next we will compute the conditional probability of success given the arrival order of the best candidate.
For $n \in \N_+$ and $k \in \{2, 3, \ldots, n\}$, $\P(S_{n,k} \mid X_n = j) = \begin{cases} 0, & j \in \{1, 2, \ldots, k-1\} \\ \frac{k-1}{j-1}, & j \in \{k, k + 1, \ldots, n\} \end{cases}$
Proof
For the first case, note that if the arrival number of the best candidate is $j \lt k$, then strategy $k$ will certainly fail. For the second case, note that if the arrival order of the best candidate is $j \ge k$, then strategy $k$ will succeed if and only if one of the first $k - 1$ candidates (the ones that are automatically rejected) is the best among the first $j - 1$ candidates. Since the best of the first $j - 1$ candidates is equally likely to occupy any of those positions, this probability is $\frac{k-1}{j-1}$.
The two cases are illustrated below. The large dot indicates the best candidate. Red dots indicate candidates that are rejected out of hand, while blue dots indicate candidates that are considered.
Now we can compute the probability of success with strategy $k$.
For $n \in \N_+$, $p_n(k) = \P(S_{n,k}) = \begin{cases} \frac{1}{n}, & k = 1 \\ \frac{k - 1}{n} \sum_{j=k}^n \frac{1}{j - 1}, & k \in \{2, 3, \ldots, n\} \end{cases}$
Proof
When $k = 1$ we simply select the first candidate. This candidate will be the best one with probability $1 / n$. The result for $k \in \{2, 3, \ldots, n\}$ follows from the previous two results, by conditioning on $X_n$: $\P(S_{n,k}) = \sum_{j=1}^n \P(X_n = j) \P(S_{n,k} \mid X_n = j) = \sum_{j=k}^n \frac{1}{n} \frac{k - 1}{j - 1}$
Values of the function $p_n$ can be computed by hand for small $n$ and by a computer algebra system for moderate $n$. The graph of $p_{100}$ is shown below. Note the concave downward shape of the graph and the optimal value of $k$, which turns out to be 38. The optimal probability is about 0.37104.
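Here is a short Python sketch (ours, for illustration) that evaluates $p_n(k)$ exactly, finds the optimal strategy by direct search, and reproduces the values just quoted for $n = 100$ as well as the $n = 3$ table above.

```python
from fractions import Fraction

def p(n, k):
    """Exact success probability of strategy k with n candidates."""
    if k == 1:
        return Fraction(1, n)
    return Fraction(k - 1, n) * sum(Fraction(1, j - 1) for j in range(k, n + 1))

def optimal(n):
    """Return (k_n, p_n(k_n)) by direct search over k."""
    return max(((k, p(n, k)) for k in range(1, n + 1)), key=lambda pair: pair[1])

print([float(p(3, k)) for k in range(1, 4)])   # 2/6, 3/6, 2/6, as in the table above
k100, p100 = optimal(100)
print(k100, float(p100))                       # 38 and about 0.37104
```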
The optimal strategy $k_n$ that maximizes $k \mapsto p_n(k)$, the ratio $k_n / n$, and the optimal probability $p_n(k_n)$ of finding the best candidate, as functions of $n \in \{3, 4, \dots, 20\}$ are given in the following table:
Candidates $n$ Optimal strategy $k_n$ Ratio $k_n / n$ Optimal probability $p_n(k_n)$
3 2 0.6667 0.5000
4 2 0.5000 0.4583
5 3 0.6000 0.4333
6 3 0.5000 0.4278
7 3 0.4286 0.4143
8 4 0.5000 0.4098
9 4 0.4444 0.4060
10 4 0.4000 0.3987
11 5 0.4545 0.3984
12 5 0.4167 0.3955
13 6 0.4615 0.3923
14 6 0.4286 0.3917
15 6 0.4000 0.3894
16 7 0.4375 0.3881
17 7 0.4118 0.3873
18 7 0.3889 0.3854
19 8 0.4211 0.3850
20 8 0.4000 0.3842
Apparently, as we might expect, the optimal strategy $k_n$ increases and the optimal probability $p_n(k_n)$ decreases as $n \to \infty$. On the other hand, it's encouraging, and a bit surprising, that the optimal probability does not appear to be decreasing to 0. It's perhaps least clear what's going on with the ratio. Graphical displays of some of the information in the table may help:
Could it be that the ratio $k_n / n$ and the probability $p_n(k_n)$ are both converging, and moreover, are converging to the same number? First let's try to establish rigorously some of the trends observed in the table.
The success probability $p_n$ satisfies $p_n(k - 1) \lt p_n(k) \text{ if and only if } \sum_{j=k}^n \frac{1}{j-1} \gt 1$
It follows that for each $n \in \N_+$, the function $p_n$ at first increases and then decreases. The maximum value of $p_n$ occurs at the largest $k$ with $\sum_{j=k}^n \frac{1}{j - 1} \gt 1$. This is the optimal strategy with $n$ candidates, which we have denoted by $k_n$.
As $n$ increases, $k_n$ increases and the optimal probability $p_n(k_n)$ decreases.
Asymptotic Analysis
We are naturally interested in the asymptotic behavior of the function $p_n$, and the optimal strategy as $n \to \infty$. The key is recognizing $p_n$ as a Riemann sum for a simple integral. (Riemann sums, of course, are named for Georg Riemann.)
If $k(n)$ depends on $n$ and $k(n) / n \to x \in (0, 1)$ as $n \to \infty$ then $p_n[k(n)] \to -x \ln x$ as $n \to \infty$.
Proof
First note that $p_n(k) = \frac{k-1}{n} \sum_{j=k}^n \frac{1}{n} \frac{n}{j-1}$ We recognize the sum above as the left Riemann sum for the function $f(t) = \frac{1}{t}$ corresponding to the partition of the interval $\left[\frac{k-1}{n}, 1\right]$ into $(n - k) + 1$ subintervals of length $\frac{1}{n}$ each: $\left(\frac{k-1}{n}, \frac{k}{n}, \ldots, \frac{n-1}{n}, 1\right)$. It follows that $p_n[k(n)] \to x \int_x^1 \frac{1}{t} dt = -x \ln x \text{ as } n \to \infty$
The optimal strategy $k_n$ that maximizes $k \mapsto p_n(k)$, the ratio $k_n / n$, and the optimal probability $p_n(k_n)$ of finding the best candidate, as functions of $n \in \{10, 20, \dots, 100\}$ are given in the following table:
Candidates $n$ Optimal strategy $k_n$ Ratio $k_n / n$ Optimal probability $p_n(k_n)$
10 4 0.4000 0.3987
20 8 0.4000 0.3842
30 12 0.4000 0.3786
40 16 0.4000 0.3757
50 19 0.3800 0.3743
60 23 0.3833 0.3732
70 27 0.3857 0.3724
80 30 0.3750 0.3719
90 34 0.3778 0.3714
100 38 0.3800 0.3710
The graph below shows the true probabilities $p_n(k)$ and the limiting values $-\frac{k}{n} \, \ln\left(\frac{k}{n}\right)$ as a function of $k$ with $n = 100$.
For the optimal strategy $k_n$, there exists $x_0 \in (0, 1)$ such that $k_n / n \to x_0$ as $n \to \infty$. Thus, $x_0 \in (0, 1)$ is the limiting proportion of the candidates that we reject out of hand. Moreover, $x_0$ maximizes $x \mapsto -x \ln x$ on $(0, 1)$.
The maximum value of $-x \ln x$ occurs at $x_0 = 1 / e$ and the maximum value is also $1 / e$.
Proof
The function $x \mapsto -x \ln x$ has derivative $-\ln x - 1$, which is positive for $x \lt 1/e$ and negative for $x \gt 1/e$. Hence the maximum on $(0, 1)$ occurs at $x_0 = 1/e$, and the maximum value is $-\frac{1}{e} \ln \frac{1}{e} = \frac{1}{e}$.
Thus, the magic number $1 / e \approx 0.3679$ occurs twice in the problem. For large $n$:
• Our approximate optimal strategy is to reject out of hand the first 37% of the candidates and then select the first candidate (if she appears) that is better than all of the previous candidates.
• Our probability of finding the best candidate is about 0.37.
The article Who Solved the Secretary Problem? by Tom Ferguson (1989) has an interesting historical discussion of the problem, including speculation that Johannes Kepler may have used the optimal strategy to choose his second wife. The article also discusses many interesting generalizations of the problem. A different version of the secretary problem, in which the candidates are assigned a score in $[0, 1]$, rather than a relative rank, is discussed in the section on Stopping Times in the chapter on Martingales | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/12%3A_Finite_Sampling_Models/12.09%3A_The_Secretary_Problem.txt |
Games of chance hold an honored place in probability theory, because of their conceptual clarity and because of their fundamental influence on the early development of the subject. In this chapter, we explore some of the most common and basic games of chance. Roulette, craps, and Keno are casino games. The Monty Hall problem is based on a TV game show, and has become famous because of the controversy that it generated. Lotteries are now basic ways that governments and other institutions raise money. In the last four sections on the game of red and black, we study various types of gambling strategies, a study which leads to some deep and fascinating mathematics.
13: Games of Chance
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$
Gambling and Probability
Games of chance are among the oldest of human inventions. The use of a certain type of animal heel bone (called the astragalus or colloquially the knucklebone) as a crude die dates to about 3600 BCE. The modern six-sided die dates to 2000 BCE, and the term bones is used as a slang expression for dice to this day (as in roll the bones). It is because of these ancient origins, by the way, that we use the die as the fundamental symbol in this project.
Gambling is intimately interwoven with the development of probability as a mathematical theory. Most of the early development of probability, in particular, was stimulated by special gambling problems, such as
• DeMere's problem
• Pepy's problem
• the problem of points
• the Petersburg problem
Some of the very first books on probability theory were written to analyze games of chance, for example Liber de Ludo Aleae (The Book on Games of Chance), by Girolamo Cardano, and Essay d' Analyse sur les Jeux de Hazard (Analytical Essay on Games of Chance), by Pierre-Remond Montmort. Gambling problems continue to be a source of interesting and deep problems in probability to this day (see the discussion of Red and Black for an example).
Of course, it is important to keep in mind that breakthroughs in probability, even when they are originally motivated by gambling problems, are often profoundly important in the natural sciences, the social sciences, law, and medicine. Also, games of chance provide some of the conceptually clearest and cleanest examples of random experiments, and thus their analysis can be very helpful to students of probability.
However, nothing in this chapter should be construed as encouraging you, gentle reader, to gamble. On the contrary, our analysis will show that, in the long run, only the gambling houses prosper. The gambler, inevitably, is a sad victim of the law of large numbers.
In this chapter we will study some interesting games of chance. Poker, poker dice, craps, and roulette are popular parlor and casino games. The Monty Hall problem, on the other hand, is interesting because of the controversy that it generated. The lottery is a basic way that many states and nations use to raise money (a voluntary tax, of sorts).
Terminology
Let us discuss some of the basic terminology that will be used in several sections of this chapter. Suppose that $A$ is an event in a random experiment. The mathematical odds concerning $A$ refer to the probability of $A$.
If $a$ and $b$ are positive numbers, then by definition, the following are equivalent:
1. the odds in favor of $A$ are $a : b$.
2. $\P(A) = \frac{a}{a + b}$.
3. the odds against $A$ are $b : a$.
4. $\P(A^c) = \frac{b}{a + b}$.
In many cases, $a$ and $b$ can be given as positive integers with no common factors.
Similarly, suppose that $p \in [0, 1]$. The following are equivalent:
1. $\P(A) = p$.
2. The odds in favor of $A$ are $p : 1 - p$.
3. $\P(A^c) = 1 - p$.
4. The odds against $A$ are $1 - p : p$.
On the other hand, the house odds of an event refer to the payout when a bet is made on the event.
A bet on event $A$ pays $n : m$ means that if a gambler bets $m$ units on $A$ then
1. If $A$ occurs, the gambler receives the $m$ units back and an additional $n$ units (for a net profit of $n$)
2. If $A$ does not occur, the gambler loses the bet of $m$ units (for a net profit of $-m$).
Equivalently, the gambler puts up $m$ units (betting on $A$), the house puts up $n$ units (betting on $A^c$), and the winner takes the pot. Of course, it is usually not necessary for the gambler to bet exactly $m$; a smaller or larger bet is scaled appropriately. Thus, if the gambler bets $k$ units and wins, his payout is $k \frac{n}{m}$.
Naturally, our main interest is in the net winnings if we make a bet on an event. The following result gives the probability density function, mean, and variance for a unit bet. The expected value is particularly interesting, because by the law of large numbers, it gives the long term gain or loss, per unit bet.
Suppose that the odds in favor of event $A$ are $a : b$ and that a bet on event $A$ pays $n : m$. Let $W$ denote the winnings from a unit bet on $A$. Then
1. $\P(W = -1) = \frac{b}{a + b}$, $\P\left(W = \frac{n}{m}\right) = \frac{a}{a + b}$
2. $\E(W) = \frac{a\,n - b m}{m(a + b)}$
3. $\var(W) = \frac{a b (n + m)^2}{m^2 (a + b)^2}$
In particular, the expected value of the bet is zero if and only if $a n = b m$, positive if and only if $a n \gt b m$, and negative if and only if $a n \lt b m$. The first case means that the bet is fair, and occurs when the payoff is the same as the odds against the event. The second means that the bet is favorable to the gambler, and occurs when the payoff is greater than the odds against the event. The third case means that the bet is unfair to the gambler, and occurs when the payoff is less than the odds against the event. Unfortunately, all casino games fall into the third category.
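The mean and variance above are easy to package as a small utility. The Python sketch below (the function name is ours) returns the expected value and variance of the winnings per unit bet, given the odds $a : b$ in favor of the event and a payoff of $n : m$. As a check, a bet whose payoff equals the odds against the event comes out fair, while the craps pass bet studied later in this chapter (the shooter wins with probability $244/495$, at even money) shows the familiar small negative expectation.

```python
from fractions import Fraction

def bet_moments(a, b, n, m):
    """Mean and variance of the winnings W per unit bet, when the odds in favor
    of the event are a : b and a bet on the event pays n : m."""
    p = Fraction(a, a + b)              # probability of winning
    payoff = Fraction(n, m)             # net gain per unit when the bet wins
    mean = payoff * p - (1 - p)
    var = p * (1 - p) * (payoff + 1) ** 2
    return mean, var

print(bet_moments(1, 5, 5, 1))      # fair bet: mean 0
print(bet_moments(244, 251, 1, 1))  # craps pass bet: mean -7/495
```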
More About Dice
Shapes of Dice
The standard die, of course, is a cube with six sides. A bit more generally, most real dice are in the shape of Platonic solids, named for Plato naturally. The faces of a Platonic solid are congruent regular polygons. Moreover, the same number of faces meet at each vertex so all of the edges and angles are congruent as well.
The five Platonic solids are
1. The tetrahedron, with 4 sides.
2. The hexahedron (cube), with 6 sides
3. The octahedron, with 8 sides
4. The dodecahedron, with 12 sides
5. The icosahedron, with 20 sides
Note that the 4-sided die is the only Platonic die in which the outcome is the face that is down rather than up (or perhaps it's better to think of the vertex that is up as the outcome).
Fair and Crooked Dice
Recall that a fair die is one in which the faces are equally likely. In addition to fair dice, there are various types of crooked dice. For the standard six-sided die, there are three crooked types that we use frequently in this project. To understand the geometry, recall that with the standard six-sided die, opposite faces sum to 7.
Flat Dice
1. An ace-six flat die is a six-sided die in which faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each.
2. A two-five flat die is a six-sided die in which faces 2 and 5 have probability $\frac{1}{4}$ each while faces 1, 3, 4, and 6 have probability $\frac{1}{8}$ each.
3. A three-four flat die is a six-sided die in which faces 3 and 4 have probability $\frac{1}{4}$ each while faces 1, 2, 5, and 6 have probability $\frac{1}{8}$ each.
A flat die, as the name suggests, is a die that is not a cube, but rather is shorter in one of the three directions. The particular probabilities that we use ($1/4$ and $1/8$) are fictitious, but the essential property of a flat die is that the opposite faces on the shorter axis have slightly larger probabilities (because they have slightly larger areas) than the other four faces. Flat dice are sometimes used by gamblers to cheat.
In the Dice Experiment, select one die. Run the experiment 1000 times in each of the following cases and observe the outcomes.
1. fair die
2. ace-six flat die
3. two-five flat die
4. three-four flat die
Simulation
It's very easy to simulate a fair die with a random number. Recall that the ceiling function $\lceil x \rceil$ gives the smallest integer that is at least as large as $x$.
Suppose that $U$ is uniformly distributed on the interval $(0, 1]$, so that $U$ has the standard uniform distribution (a random number). Then $X = \lceil 6 \, U \rceil$ is uniformly distributed on the set $\{1, 2, 3, 4, 5, 6\}$ and so simulates a fair six-sided die. More generally, $X = \lceil n \, U \rceil$ is uniformly distributed on $\{1, 2, \ldots, n\}$ and so simulates a fair $n$-sided die.
We can also use a real fair die to simulate other types of fair dice. Recall that if $X$ is uniformly distributed on $\{1, 2, \ldots, n\}$ and $k \in \{1, 2, \ldots, n - 1\}$, then the conditional distribution of $X$ given that $X \in \{1, 2, \ldots, k\}$ is uniformly distributed on $\{1, 2, \ldots, k\}$. Thus, suppose that we have a real, fair, $n$-sided die. If we ignore outcomes greater than $k$ then we simulate a fair $k$-sided die. For example, suppose that we have a carefully constructed icosahedron that is a fair 20-sided die. We can simulate a fair 13-sided die by simply rolling the die and stopping as soon as we have a score between 1 and 13.
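Both constructions are easy to express in code. The sketch below (a minimal illustration in Python, not the applet used in this project) simulates a fair $n$-sided die from a random number via the ceiling function, and a fair $k$-sided die from an $n$-sided die by the rejection method just described.

```python
import math
import random

def fair_die(n, rng=random):
    """Simulate a fair n-sided die as ceiling(n * U), with U standard uniform on (0, 1]."""
    u = 1.0 - rng.random()          # random() is in [0, 1); this gives (0, 1]
    return math.ceil(n * u)

def die_by_rejection(k, n, rng=random):
    """Simulate a fair k-sided die (k < n) by rolling a fair n-sided die
    and ignoring outcomes greater than k."""
    while True:
        x = fair_die(n, rng)
        if x <= k:
            return x

random.seed(1)
rolls = [die_by_rejection(13, 20) for _ in range(100_000)]
print(min(rolls), max(rolls))                    # 1 and 13
print(round(rolls.count(7) / len(rolls), 3))     # close to 1/13, about 0.077
```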
To see how to simulate a card hand, see the Introduction to Finite Sampling Models. A general method of simulating random variables is based on the quantile function.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\jack}{\text{j}}$ $\newcommand{\queen}{\text{q}}$ $\newcommand{\king}{\text{k}}$
Basic Theory
The Poker Hand
A deck of cards naturally has the structure of a product set and thus can be modeled mathematically by $D = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, \jack, \queen, \king\} \times \{\clubsuit, \diamondsuit, \heartsuit, \spadesuit\}$ where the first coordinate represents the denomination or kind (ace, two through 10, jack, queen, king) and where the second coordinate represents the suit (clubs, diamond, hearts, spades). Sometimes we represent a card as a string rather than an ordered pair (for example $\queen \, \heartsuit$).
There are many different poker games, but we will be interested in standard draw poker, which consists of dealing 5 cards at random from the deck $D$. The order of the cards does not matter in draw poker, so we will record the outcome of our random experiment as the random set (hand) $\bs{X} = \{X_1, X_2, X_3, X_4, X_5\}$ where $X_i = (Y_i, Z_i) \in D$ for each $i$ and $X_i \ne X_j$ for $i \ne j$. Thus, the sample space consists of all possible poker hands: $S = \left\{\{x_1, x_2, x_3, x_4, x_5\}: x_i \in D \text{ for each } i \text{ and } x_i \ne x_j \text{ for all } i \ne j \right\}$ Our basic modeling assumption (and the meaning of the term at random) is that all poker hands are equally likely. Thus, the random variable $\bs{X}$ is uniformly distributed over the set of possible poker hands $S$. $\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}$ In statistical terms, a poker hand is a random sample of size 5 drawn without replacement and without regard to order from the population $D$. For more on this topic, see the chapter on Finite Sampling Models.
The Value of the Hand
There are nine different types of poker hands in terms of value. We will use the numbers 0 to 8 to denote the value of the hand, where 0 is the type of least value (actually no value) and 8 the type of most value.
The hand value $V$ of a poker hand is a random variable taking values 0 through 8, and is defined as follows:
0. No Value. The hand is of none of the other types.
1. One Pair. The hand has 2 cards of one kind, and one card each of three other kinds.
2. Two Pair. The hand has 2 cards of one kind, 2 cards of another kind, and one card of a third kind.
3. Three of a Kind. The hand has 3 cards of one kind and one card of each of two other kinds.
4. Straight. The kinds of cards in the hand form a consecutive sequence but the cards are not all in the same suit. An ace can be considered the smallest denomination or the largest denomination.
5. Flush. The cards are all in the same suit, but the kinds of the cards do not form a consecutive sequence.
6. Full House. The hand has 3 cards of one kind and 2 cards of another kind.
7. Four of a Kind. The hand has 4 cards of one kind, and 1 card of another kind.
8. Straight Flush. The cards are all in the same suit and the kinds form a consecutive sequence.
Run the poker experiment 10 times in single-step mode. For each outcome, note that the value of the random variable corresponds to the type of hand, as given above.
For some comic relief before we get to the analysis, look at two of the paintings of Dogs Playing Poker by CM Coolidge.
1. His Station and Four Aces
2. Waterloo
The Probability Density Function
Computing the probability density function of $V$ is a good exercise in combinatorial probability. In the following exercises, we need the two fundamental rules of combinatorics to count the number of poker hands of a given type: the multiplication rule and the addition rule. We also need some basic combinatorial structures, particularly combinations.
The number of different poker hands is $\#(S) = \binom{52}{5} = 2\,598\,960$
$\P(V = 1) = 1\,098\,240 / 2\,598\,960 \approx 0.422569$.
Proof
The following steps form an algorithm for generating poker hands with one pair. The number of ways of performing each step is also given.
1. Select a kind of card: $13$
2. Select 2 cards of the kind in part (a): $\binom{4}{2}$
3. Select 3 kinds of cards, different than the kind in (a): $\binom{12}{3}$
4. Select a card of each of the kinds in part (c): $4^3$
$\P(V = 2) = 123\,552 / 2\,598\,960 \approx 0.047539$.
Proof
The following steps form an algorithm for generating poker hands with two pair. The number of ways of performing each step is also given.
1. Select two kinds of cards: $\binom{13}{2}$
2. Select two cards of each of the kinds in (a): $\binom{4}{2} \binom{4}{2}$
3. Select a kind of card different from the kinds in (a): $11$
4. Select a card of the kind in (c): $4$
$\P(V = 3) = 54\,912 / 2\,598\,960 \approx 0.021129$.
Proof
The following steps form an algorithm for generating poker hands with three of a kind. The number of ways of performing each step is also given.
1. Select a kind of card: $13$
2. Select 3 cards of the kind in (a): $\binom{4}{3}$
3. Select 2 kinds of cards, different than the kind in (a): $\binom{12}{2}$
4. Select one card of each of the kinds in (c): $4^2$
$\P(V = 8) = 40 / 2\,598\,960 \approx 0.000015$.
Proof
The following steps form an algorithm for generating poker hands with a straight flush. The number of ways of performing each step is also given.
1. Select the kind of the lowest card in the sequence: $10$
2. Select a suit: $4$
$\P(V = 4) = 10\,200 / 2\,598\,960 \approx 0.003925$.
Proof
The following steps form an algorithm for generating poker hands with a straight or a straight flush. The number of ways of performing each step is also given.
1. Select the kind of the lowest card in the sequence: $10$
2. Select a card of each kind in the sequence: $4^5$
Finally, we need to subtract the number of straight flushes above to get the number of hands with a straight.
$\P(V = 5) = 5108 / 2\,598\,960 \approx 0.001965$.
Proof
The following steps form an algorithm for generating poker hands with a flush or a straight flush. The number of ways of performing each step is also given.
1. Select a suit: $4$
2. Select 5 cards of the suit in (a): $\binom{13}{5}$
Finally, we need to subtract the number of straight flushes above to get the number of hands with a flush.
$\P(V = 6) = 3744 / 2\,598\,960 \approx 0.001441$.
Proof
The following steps form an algorithm for generating poker hands with a full house. The number of ways of performing each step is also given.
1. Select a kind of card: $13$
2. Select 3 cards of the kind in (a): $\binom{4}{3}$
3. Select another kind of card: $12$
4. Select 2 cards of the kind in (c): $\binom{4}{2}$
$\P(V = 7) = 624 / 2\,598\,960 \approx 0.000240$.
Proof
The following steps form an algorithm for generating poker hands with four of a kind. The number of ways of performing each step is also given.
1. Select a kind of card: $13$
2. Select 4 cards of the kind in (a): $1$
3. Select another kind of card: $12$
4. Select a card of the kind in (c): $4$
$\P(V = 0) = 1\,302\,540 / 2\,598\,960 \approx 0.501177$.
Proof
By the complement rule, $\P(V = 0) = 1 - \sum_{k=1}^8 \P(V = k)$
Note that the probability density function of $V$ is decreasing; the more valuable the type of hand, the less likely the type of hand is to occur. Note also that no value and one pair account for more than 92% of all poker hands.
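For readers who like to check such computations by simulation, here is a small Python sketch (ours; the hand classifier covers only the coarse value $V$, and the names are illustrative) that deals random poker hands and compares the empirical frequencies to the probabilities above.

```python
import random
from collections import Counter

DECK = [(d, s) for d in range(1, 14) for s in range(4)]   # 13 denominations, 4 suits

def hand_value(hand):
    """Return the value V of a 5-card hand, on the 0-8 scale above."""
    denoms = [d for d, s in hand]
    suits = {s for d, s in hand}
    counts = sorted(Counter(denoms).values(), reverse=True)
    distinct = sorted(set(denoms))
    straight = (len(distinct) == 5 and
                (distinct[-1] - distinct[0] == 4 or distinct == [1, 10, 11, 12, 13]))
    flush = len(suits) == 1
    if straight and flush: return 8
    if counts[0] == 4:      return 7
    if counts == [3, 2]:    return 6
    if flush:               return 5
    if straight:            return 4
    if counts[0] == 3:      return 3
    if counts == [2, 2, 1]: return 2
    if counts[0] == 2:      return 1
    return 0

random.seed(3)
reps = 200_000
freq = Counter(hand_value(random.sample(DECK, 5)) for _ in range(reps))
for v in range(9):
    print(v, round(freq[v] / reps, 4))   # compare with 0.5012, 0.4226, 0.0475, ...
```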
In the poker experiment, note the shape of the density graph. Note that some of the probabilities are so small that they are essentially invisible in the graph. Now run the poker hand 1000 times and compare the relative frequency function to the density function.
In the poker experiment, set the stop criterion to the value of $V$ given below. Note the number of poker hands required.
1. $V = 3$
2. $V = 4$
3. $V = 5$
4. $V = 6$
5. $V = 7$
6. $V = 8$
Find the probability of getting a hand that is three of a kind or better.
Answer
0.0287
In the movie The Parent Trap (1998), both twins get straight flushes on the same poker deal. Find the probability of this event.
Answer
$3.913 \times 10^{-10}$
Classify $V$ in terms of level of measurement: nominal, ordinal, interval, or ratio. Is the expected value of $V$ meaningful?
Answer
Ordinal. No.
A hand with a pair of aces and a pair of eights (and a fifth card of a different type) is called a dead man's hand. The name is in honor of Wild Bill Hickok, who held such a hand at the time of his murder in 1876. Find the probability of getting a dead man's hand.
Answer
$1584 / 2\,598\,960$
Drawing Cards
In draw poker, each player is dealt a poker hand and there is an initial round of betting. Typically, each player then gets to discard up to 3 cards and is dealt that number of cards from the remaining deck. This leads to myriad problems in conditional probability, as partial information becomes available. A complete analysis is far beyond the scope of this section, but we will consider a couple of simple examples.
Suppose that Fred's hand is $\{4\,\heartsuit, 5\,\heartsuit, 7\,\spadesuit, \queen\,\clubsuit, 1\,\diamondsuit\}$. Fred discards the $\queen\,\clubsuit$ and $1\,\diamondsuit$ and draws two new cards, hoping to complete the straight. Note that Fred must get a 6 and either a 3 or an 8. Since he is missing a middle denomination (6), Fred is drawing to an inside straight. Find the probability that Fred is successful.
Answer
$32 / 1081$
Suppose that Wilma's hand is $\{4\,\heartsuit, 5\,\heartsuit, 6\,\spadesuit, \queen\,\clubsuit, 1\,\diamondsuit\}$. Wilma discards $\queen\,\clubsuit$ and $1\,\diamondsuit$ and draws two new cards, hoping to complete the straight. Note that Wilma must get a 2 and a 3, or a 7 and an 8, or a 3 and a 7. Find the probability that Wilma is successful. Clearly, Wilma has a better chance than Fred.
Answer
$48 / 1081$
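Both answers can be confirmed by brute-force enumeration over the $\binom{47}{2} = 1081$ equally likely pairs of new cards. Here is a minimal Python sketch (with our own encoding of the cards; the suit codes are arbitrary) that does so for these two particular hands.

```python
from itertools import combinations

DECK = {(d, s) for d in range(1, 14) for s in range(4)}   # denominations 1-13, suits 0-3

def completes_straight(kept3, draws):
    """Do the 3 kept denominations plus the 2 drawn cards form a straight?"""
    denoms = sorted(kept3 + [d for d, s in draws])
    return len(set(denoms)) == 5 and denoms[-1] - denoms[0] == 4

def draw_count(hand, kept3):
    remaining = DECK - set(hand)                    # 47 cards left in the deck
    pairs = list(combinations(remaining, 2))
    hits = sum(completes_straight(kept3, pair) for pair in pairs)
    return hits, len(pairs)

fred = [(4, 2), (5, 2), (7, 3), (12, 0), (1, 1)]    # 4H, 5H, 7S, QC, AD (suits arbitrary)
wilma = [(4, 2), (5, 2), (6, 3), (12, 0), (1, 1)]
print(draw_count(fred, [4, 5, 7]))                  # (32, 1081)
print(draw_count(wilma, [4, 5, 6]))                 # (48, 1081)
```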
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
In this section, we will analyze several simple games played with dice—poker dice, chuck-a-luck, and high-low. The casino game craps is more complicated and is studied in the next section.
Poker Dice
Definition
The game of poker dice is a bit like standard poker, but played with dice instead of cards. In poker dice, 5 fair dice are rolled. We will record the outcome of our random experiment as the (ordered) sequence of scores: $\bs{X} = (X_1, X_2, X_3, X_4, X_5)$ Thus, the sample space is $S = \{1, 2, 3, 4, 5, 6\}^5$. Since the dice are fair, our basic modeling assumption is that $\bs{X}$ is a sequence of independent random variables and each is uniformly distributed on $\{1, 2, 3, 4, 5, 6\}$.
Equivalently, $\bs{X}$ is uniformly distributed on $S$: $\P(\bs{X} \in A) = \frac{\#(A)}{\#(S)}, \quad A \subseteq S$
In statistical terms, a poker dice hand is a random sample of size 5 drawn with replacement and with regard to order from the population $D = \{1, 2, 3, 4, 5, 6\}$. For more on this topic, see the chapter on Finite Sampling Models. In particular, in this chapter you will learn that the result of Exercise 1 would not be true if we recorded the outcome of the poker dice experiment as an unordered set instead of an ordered sequence.
The Value of the Hand
The value $V$ of the poker dice hand is a random variable with support set $\{0, 1, 2, 3, 4, 5, 6\}$. The values are defined as follows:
0. None alike. Five distinct scores occur.
1. One Pair. Four distinct scores occur; one score occurs twice and the other three scores occur once each.
2. Two Pair. Three distinct scores occur; two scores occur twice each and the other score occurs once.
3. Three of a Kind. Three distinct scores occur; one score occurs three times and the other two scores occur once each.
4. Full House. Two distinct scores occur; one score occurs three times and the other score occurs twice.
5. Four of a Kind. Two distinct scores occur; one score occurs four times and the other score occurs once.
6. Five of a Kind. One score occurs five times.
Run the poker dice experiment 10 times in single-step mode. For each outcome, note that the value of the random variable corresponds to the type of hand, as given above.
The Probability Density Function
Computing the probability density function of $V$ is a good exercise in combinatorial probability. In the following exercises, we will need the two fundamental rules of combinatorics to count the number of dice sequences of a given type: the multiplication rule and the addition rule. We will also need some basic combinatorial structures, particularly combinations and permutations (with types of objects that are identical).
The number of different poker dice hands is $\#(S) = 6^5 = 7776$.
$\P(V = 0) = \frac{720}{7776} = 0.09259$.
Proof
Note that the dice scores form a permutation of size 5 chosen from $\{1, 2, 3, 4, 5, 6\}$, and there are $6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 = 720$ such permutations.
$\P(V = 1) = \frac{3600}{7776} \approx 0.46296$.
Proof
The following steps form an algorithm for generating poker dice hands with one pair. The number of ways of performing each step is also given:
1. Select the score that will appear twice: $6$
2. Select the 3 scores that will appear once each: $\binom{5}{3}$
3. Select a permutation of the 5 numbers in parts (a) and (b): $\binom{5}{2, 1, 1, 1}$
$\P(V = 2) = \frac{1800}{7776} \approx 0.23148$.
Proof
The following steps form an algorithm for generating poker dice hands with two pair. The number of ways of performing each step is also given:
1. Select two scores that will appear twice each: $\binom{6}{2}$
2. Select the score that will appear once: $4$
3. Select a permutation of the 5 numbers in parts (a) and (b): $\binom{5}{2, 2, 1}$
$\P(V = 3) = \frac{1200}{7776} \approx 0.15432$.
Proof
The following steps form an algorithm for generating poker dice hands with three of a kind. The number of ways of performing each step is also given:
1. Select the score that will appear 3 times: $6$
2. Select the 2 scores that will appear once each: $\binom{5}{2}$
3. Select a permutation of the 5 numbers in parts (a) and (b): $\binom{5}{3, 1, 1}$
$\P(V = 4) = \frac{300}{7776} \approx 0.03858$.
Proof
The following steps form an algorithm for generating poker dice hands with a full house. The number of ways of performing each step is also given:
1. Select the score that will appear 3 times: $6$
2. Select the score that will appear twice: $5$
3. Select a permutation of the 5 numbers in parts (a) and (b): $\binom{5}{3, 2}$
$\P(V = 5) = \frac{150}{7776} = 0.01929$.
Proof
The following steps form an algorithm for generating poker dice hands with four of a kind. The number of ways of performing each step is also given:
1. Select the score that will appear 4 times: $6$
2. Select the score that will appear once: 5
3. Select a permutation of the 5 numbers in parts (a) and (b): $\binom{5}{4, 1}$
$\P(V = 6) = \frac{6}{7776} \approx 0.00077$.
Proof
There are 6 choices for the score that will appear 5 times.
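Since there are only $6^5 = 7776$ outcomes, the entire distribution can be checked by brute force. Here is a short Python sketch (our own classifier, using the 0-6 value scale above) that enumerates all outcomes and tallies the counts; it should reproduce 720, 3600, 1800, 1200, 300, 150, 6.

```python
from itertools import product
from collections import Counter

def dice_value(roll):
    """Value V of a poker dice hand, on the 0-6 scale above."""
    counts = sorted(Counter(roll).values(), reverse=True)
    return {(1, 1, 1, 1, 1): 0, (2, 1, 1, 1): 1, (2, 2, 1): 2,
            (3, 1, 1): 3, (3, 2): 4, (4, 1): 5, (5,): 6}[tuple(counts)]

tally = Counter(dice_value(roll) for roll in product(range(1, 7), repeat=5))
print([tally[v] for v in range(7)])   # [720, 3600, 1800, 1200, 300, 150, 6]
```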
Run the poker dice experiment 1000 times and compare the relative frequency function to the density function.
Find the probability of rolling a hand that has 3 of a kind or better.
Answer
0.2130
In the poker dice experiment, set the stop criterion to the value of $V$ given below. Note the number of hands required.
1. $V = 3$
2. $V = 4$
3. $V = 5$
4. $V = 6$
Chuck-a-Luck
Chuck-a-luck is a popular carnival game, played with three dice. According to Richard Epstein, the original name was Sweat Cloth, and in British pubs, the game is known as Crown and Anchor (because the six sides of the dice are inscribed clubs, diamonds, hearts, spades, crown and anchor). The dice are over-sized and are kept in an hourglass-shaped cage known as the bird cage. The dice are rolled by spinning the bird cage.
Chuck-a-luck is very simple. The gambler selects an integer from 1 to 6, and then the three dice are rolled. If exactly $k$ dice show the gambler's number, the payoff is $k : 1$. As with poker dice, our basic mathematical assumption is that the dice are fair, and therefore the outcome vector $\bs{X} = (X_1, X_2, X_3)$ is uniformly distributed on the sample space $S = \{1, 2, 3, 4, 5, 6\}^3$.
Let $Y$ denote the number of dice that show the gambler's number. Then $Y$ has the binomial distribution with parameters $n = 3$ and $p = \frac{1}{6}$: $\P(Y = k) = \binom{3}{k} \left(\frac{1}{6}\right)^k \left(\frac{5}{6}\right)^{3 - k}, \quad k \in \{0, 1, 2, 3\}$
Let $W$ denote the net winnings for a unit bet. Then
1. $W = - 1$ if $Y = 0$
2. $W = Y$ if $Y \gt 0$
The probability density function of $W$ is given by
1. $\P(W = -1) = \frac{125}{216}$
2. $\P(W = 1) = \frac{75}{216}$
3. $\P(W = 2) = \frac{15}{216}$
4. $\P(W = 3) = \frac{1}{216}$
Run the chuck-a-luck experiment 1000 times and compare the empirical density function of $W$ to the true probability density function.
The expected value and variance of $W$ are
1. $\E(W) = -\frac{17}{216} \approx -0.0787$
2. $\var(W) = \frac{57815}{46656} \approx 1.2392$
Run the chuck-a-luck experiment 1000 times and compare the empirical mean and standard deviation of $W$ to the true mean and standard deviation. Suppose you had bet \$1 on each of the 1000 games. What would your net winnings be?
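The moments above can also be computed directly from the binomial distribution of $Y$. Here is a small Python sketch (exact arithmetic with fractions; the function name is ours):

```python
from fractions import Fraction
from math import comb

def chuck_a_luck_moments():
    """Exact mean and variance of the net winnings W on a unit chuck-a-luck bet."""
    p = Fraction(1, 6)
    pmf_Y = {k: comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)}
    win = {0: -1, 1: 1, 2: 2, 3: 3}                 # W = -1 if Y = 0, else W = Y
    mean = sum(win[k] * pmf_Y[k] for k in range(4))
    var = sum(win[k]**2 * pmf_Y[k] for k in range(4)) - mean**2
    return mean, var

print(chuck_a_luck_moments())   # (Fraction(-17, 216), Fraction(57815, 46656))
```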
High-Low
In the game of high-low, a pair of fair dice are rolled. The outcome is
• high if the sum is 8, 9, 10, 11, or 12.
• low if the sum is 2, 3, 4, 5, or 6
• seven if the sum is 7
A player can bet on any of the three outcomes. The payoff for a bet of high or for a bet of low is $1:1$. The payoff for a bet of seven is $4:1$.
Let $Z$ denote the outcome of a game of high-low. Find the probability density function of $Z$.
Answer
$\P(Z = h) = \frac{15}{36}$, $\P(Z = l) = \frac{15}{36}$, $\P(Z = s) = \frac{6}{36}$, where $h$ denotes high, $l$ denotes low, and $s$ denotes seven.
Let $W$ denote the net winnings for a unit bet. Find the expected value and variance of $W$ for each of the three bets:
1. high
2. low
3. seven
Answer
Let $W$ denote the net winnings on a unit bet in high-low.
1. Bet high: $\E(W) = -\frac{1}{6}$, $\var(W) = \frac{35}{36}$
2. Bet low: $\E(W) = -\frac{1}{6}$, $\var(W) = \frac{35}{36}$
3. Bet seven: $\E(W) = -\frac{1}{6}$, $\var(W) = \frac{125}{36}$
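As a quick check on these answers, here is a small simulation sketch (ours, with an illustrative seed and sample size) that estimates the mean and variance of the winnings for each of the three high-low bets.

```python
import random

def high_low_winnings(bet, rng=random):
    """Net winnings on a unit high-low bet ('high', 'low', or 'seven')."""
    total = rng.randint(1, 6) + rng.randint(1, 6)
    if bet == 'seven':
        return 4 if total == 7 else -1
    if bet == 'high':
        return 1 if total >= 8 else -1
    return 1 if total <= 6 else -1      # bet == 'low'

random.seed(7)
reps = 200_000
for bet in ['high', 'low', 'seven']:
    sample = [high_low_winnings(bet) for _ in range(reps)]
    mean = sum(sample) / reps
    var = sum(w * w for w in sample) / reps - mean ** 2
    print(bet, round(mean, 3), round(var, 3))   # means near -1/6; variances near 35/36, 35/36, 125/36
```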
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
The Basic Game
Craps is a popular casino game, because of its complexity and because of the rich variety of bets that can be made.
According to Richard Epstein, craps is descended from an earlier game known as Hazard, that dates to the Middle Ages. The formal rules for Hazard were established by Montmort early in the 1700s. The origin of the name craps is shrouded in doubt, but it may have come from the English crabs or from the French Crapeaud (for toad).
From a mathematical point of view, craps is interesting because it is an example of a random experiment that takes place in stages; the evolution of the game depends critically on the outcome of the first roll. In particular, the number of rolls is a random variable.
Definitions
The rules for craps are as follows:
The player (known as the shooter) rolls a pair of fair dice
1. If the sum is 7 or 11 on the first throw, the shooter wins; this event is called a natural.
2. If the sum is 2, 3, or 12 on the first throw, the shooter loses; this event is called craps.
3. If the sum is 4, 5, 6, 8, 9, or 10 on the first throw, this number becomes the shooter's point. The shooter continues rolling the dice until either she rolls the point again (in which case she wins) or rolls a 7 (in which case she loses).
As long as the shooter wins, or loses by rolling craps, she retains the dice and continues. Once she loses by failing to make her point, the dice are passed to the next shooter.
Let us consider the game of craps mathematically. Our basic assumption, of course, is that the dice are fair and that the outcomes of the various rolls are independent. Let $N$ denote the (random) number of rolls in the game and let $(X_i, Y_i)$ denote the outcome of the $i$th roll for $i \in \{1, 2, \ldots, N\}$. Finally, let $Z_i = X_i + Y_i$, the sum of the scores on the $i$th roll, and let $V$ denote the event that the shooter wins.
In the craps experiment, press single step a few times and observe the outcomes. Make sure that you understand the rules of the game.
The Probability of Winning
We will compute the probability that the shooter wins in stages, based on the outcome of the first roll.
The sum of the scores $Z$ on a given roll has the probability density function in the following table:
$z$ 2 3 4 5 6 7 8 9 10 11 12
$\P(Z = z)$ $\frac{1}{36}$ $\frac{2}{36}$ $\frac{3}{36}$ $\frac{4}{36}$ $\frac{5}{36}$ $\frac{6}{36}$ $\frac{5}{36}$ $\frac{4}{36}$ $\frac{3}{36}$ $\frac{2}{36}$ $\frac{1}{36}$
The probability that the player makes her point can be computed using a simple conditioning argument. For example, suppose that the player throws 4 initially, so that 4 is the point. The player continues until she either throws 4 again or throws 7. Thus, the final roll will be an element of the following set: $S_4 = \{(1,3), (2,2), (3,1), (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$ Since the dice are fair, these outcomes are equally likely, so the probability that the player makes her 4 point is $\frac{3}{9}$. A similar argument can be used for the other points. Here are the results:
The probabilities of making the point $z$ are given in the following table:
$z$ 4 5 6 8 9 10
$\P(V \mid Z_1 = z)$ $\frac{3}{9}$ $\frac{4}{10}$ $\frac{5}{11}$ $\frac{5}{11}$ $\frac{4}{10}$ $\frac{3}{9}$
The probability that the shooter wins is $\P(V) = \frac{244}{495} \approx 0.49293$
Proof
This follows from the the rules of the game and the the previous result, by conditioning on the first roll:
$\P(V) = \sum_{z=2}^{12} \P(Z_1 = z) \P(V \mid Z_1 = z)$ where $\P(V \mid Z_1 = z) = 1$ for $z \in \{7, 11\}$, $\P(V \mid Z_1 = z) = 0$ for $z \in \{2, 3, 12\}$, and the conditional probabilities for the point cases are given in the table above.
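The conditioning argument is easy to double-check with a direct simulation of the game. The Python sketch below (ours, with an illustrative seed) plays complete craps games and estimates the probability that the shooter wins; the estimate should be close to $244/495 \approx 0.4929$.

```python
import random

def roll_sum(rng=random):
    return rng.randint(1, 6) + rng.randint(1, 6)

def shooter_wins(rng=random):
    """Play one game of craps; return True if the shooter wins."""
    first = roll_sum(rng)
    if first in (7, 11):
        return True                 # natural
    if first in (2, 3, 12):
        return False                # craps
    point = first
    while True:                     # roll until the point or 7 appears
        z = roll_sum(rng)
        if z == point:
            return True
        if z == 7:
            return False

random.seed(11)
reps = 500_000
print(sum(shooter_wins() for _ in range(reps)) / reps)   # close to 0.4929
```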
Note that craps is nearly a fair game. For the sake of completeness, the following result gives the probability of winning, given a point on the first roll.
$\P(V \mid Z_1 \in \{4, 5, 6, 8, 9, 10\}) = \frac{67}{165} \approx 0.406$
Proof
Let $A = \{4, 5, 6, 8, 9, 10\}$. From the definition of conditional probability, $\P(V \mid Z_1 \in A) = \frac{\P(V \cap \{Z_1 \in A\})}{\P(Z_1 \in A)}$ For the numerator, using our results above, $\P(V \cap \{Z_1 \in A\}) = \sum_{z \in A} \P(V \mid Z_1 = z) \P(Z_1 = z) = \frac{134}{495}$ Also from previous results $\P(Z_1 \in A) = \frac{2}{3}$.
Bets
There is a bewildering variety of bets that can be made in craps. In the exercises in this subsection, we will discuss some typical bets and compute the probability density function, mean, and standard deviation of each. (Most of these bets are illustrated in the picture of the craps table above). Note however, that some of the details of the bets and, in particular the payout odds, vary from one casino to another. Of course the expected value of any bet is inevitably negative (for the gambler), and thus the gambler is doomed to lose money in the long run. Nonetheless, as we will see, some bets are better than others.
Pass and Don't Pass
A pass bet is a bet that the shooter will win and pays $1 : 1$.
Let $W$ denote the winnings from a unit pass bet. Then
1. $\P(W = -1) = \frac{251}{495}$, $\P(W = 1) = \frac{244}{495}$
2. $\E(W) = -\frac{7}{495} \approx -0.0141$
3. $\sd(W) \approx 0.9999$
In the craps experiment, select the pass bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
A don't pass bet is a bet that the shooter will lose, except that 12 on the first throw is excluded (that is, the shooter loses, of course, but the don't pass bettor neither wins nor loses). This is the meaning of the phrase don't pass bar double 6 on the craps table. The don't pass bet also pays $1 : 1$.
Let $W$ denote the winnings for a unit don't pass bet. Then
1. $\P(W = -1) = \frac{244}{495}$, $\P(W = 0) = \frac{1}{36}$, $\P(W = 1) = \frac{949}{1980}$
2. $\E(W) = -\frac{27}{1980} \approx -0.01363$
3. $\sd(W) \approx 0.9859$
Thus, the don't pass bet is slightly better for the gambler than the pass bet.
In the craps experiment, select the don't pass bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
The come bet and the don't come bet are analogous to the pass and don't pass bets, respectively, except that they are made after the point has been established.
Field
A field bet is a bet on the outcome of the next throw. It pays $1 : 1$ if 3, 4, 9, 10, or 11 is thrown, $2 : 1$ if 2 or 12 is thrown, and loses otherwise.
Let $W$ denote the winnings for a unit field bet. Then
1. $\P(W = -1) = \frac{5}{9}$, $\P(W = 1) = \frac{7}{18}$, $\P(W = 2) = \frac{1}{18}$
2. $\E(W) = -\frac{1}{18} \approx -0.0556$
3. $\sd(W) \approx 1.0787$
In the craps experiment, select the field bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Seven and Eleven
A 7 bet is a bet on the outcome of the next throw. It pays $4 : 1$ if a 7 is thrown. Similarly, an 11 bet is a bet on the outcome of the next throw, and pays $15 : 1$ if an 11 is thrown.
In spite of the romance of the number 7, the next exercise shows that the 7 bet is one of the worst bets you can make.
Let $W$ denote the winnings for a unit 7 bet. Then
1. $\P(W = -1) = \frac{5}{6}$, $\P(W = 4) = \frac{1}{6}$
2. $\E(W) = -\frac{1}{6} \approx -0.1667$
3. $\sd(W) \approx 1.8634$
In the craps experiment, select the 7 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Let $W$ denote the winnings for a unit 11 bet. Then
1. $\P(W = -1) = \frac{17}{18}$, $\P(W = 15) = \frac{1}{18}$
2. $\E(W) = -\frac{1}{9} \approx -0.1111$
3. $\sd(W) \approx 3.6650$
In the craps experiment, select the 11 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Craps
All craps bets are bets on the next throw. The basic craps bet pays $7 : 1$ if 2, 3, or 12 is thrown. The craps 2 bet pays $30 : 1$ if a 2 is thrown. Similarly, the craps 12 bet pays $30 : 1$ if a 12 is thrown. Finally, the craps 3 bet pays $15 : 1$ if a 3 is thrown.
Let $W$ denote the winnings for a unit craps bet. Then
1. $\P(W = -1) = \frac{8}{9}$, $\P(W = 7) = \frac{1}{9}$
2. $\E(W) = -\frac{1}{9} \approx -0.1111$
3. $\sd(W) \approx 2.5142$
In the craps experiment, select the craps bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Let $W$ denote the winnings for a unit craps 2 bet or a unit craps 12 bet. Then
1. $\P(W = -1) = \frac{35}{36}$, $\P(W = 30) = \frac{1}{36}$
2. $\E(W) = -\frac{5}{36} \approx -0.1389$
3. $\sd(W) \approx 5.0944$
In the craps experiment, select the craps 2 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
In the craps experiment, select the craps 12 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Let $W$ denote the winnings for a unit craps 3 bet. Then
1. $\P(W = -1) = \frac{17}{18}$, $\P(W = 15) = \frac{1}{18}$
2. $\E(W) = -\frac{1}{9} \approx -0.1111$
3. $\sd(W) \approx 3.6650$
In the craps experiment, select the craps 3 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Hardway Bets
A hardway bet can be made on any of the numbers 4, 6, 8, or 10. It is a bet that the chosen number $n$ will be thrown the hardway as $(n/2, n/2)$, before 7 is thrown and before the chosen number is thrown in any other combination. Hardway bets on 4 and 10 pay $7 : 1$, while hardway bets on 6 and 8 pay $9 : 1$.
Let $W$ denote the winnings for a unit hardway 4 or hardway 10 bet. Then
1. $\P(W = -1) = \frac{8}{9}$, $\P(W = 7) = \frac{1}{9}$
2. $\E(W) = -\frac{1}{9} \approx -0.1111$
3. $\sd(W) \approx 2.5142$
In the craps experiment, select the hardway 4 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
In the craps experiment, select the hardway 10 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Let $W$ denote the winnings for a unit hardway 6 or hardway 8 bet. Then
1. $\P(W = -1) = \frac{10}{11}$, $\P(W = 9) = \frac{1}{11}$
2. $\E(W) = -\frac{1}{11} \approx -0.0909$
3. $\sd(W) \approx 2.8748$
In the craps experiment, select the hardway 6 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
In the craps experiment, select the hardway 8 bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
Thus, the hardway 6 and 8 bets are better than the hardway 4 and 10 bets for the gambler, in terms of expected value.
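The hardway win probabilities underlying the last two results follow from a simple competition argument: on any given roll, only the hard pair, the other ways of making the chosen number, and 7 are relevant, and the bet is resolved by whichever of these occurs first. A quick sketch (the function name is mine):

```python
from fractions import Fraction

def hardway_win(n):
    """P(the hard way (n/2, n/2) occurs before any other n and before 7)."""
    hard = Fraction(1, 36)                    # the single roll (n/2, n/2)
    any_n = Fraction(6 - abs(n - 7), 36)      # all rolls totaling n
    seven = Fraction(6, 36)
    return hard / (any_n + seven)             # only these outcomes resolve the bet

for n in (4, 6, 8, 10):
    print(n, hardway_win(n))                  # 1/9, 1/11, 1/11, 1/9
```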
The Distribution of the Number of Rolls
Next let us compute the distribution and moments of the number of rolls $N$ in a game of craps. This random variable is of no special interest to the casino or the players, but provides a good mathematical exercise. By definition, if the shooter wins or loses on the first roll, $N = 1$. Otherwise, the shooter continues until she either makes her point or rolls 7. In this latter case, we can use the geometric distribution on $\N_+$, which governs the trial number of the first success in a sequence of Bernoulli trials. The distribution of $N$ is a mixture of distributions.
The probability density function of $N$ is
$\P(N = n) = \begin{cases} \frac{12}{36}, & n = 1 \\ \frac{1}{24} \left(\frac{3}{4}\right)^{n-2} + \frac{5}{81} \left(\frac{13}{18}\right)^{n-2} + \frac{55}{648} \left(\frac{25}{36}\right)^{n-2}, & n \in \{2, 3, \ldots\} \end{cases}$
Proof
First note that $\P(N = 1 \mid Z_1 = z) = 1$ for $z \in \{2, 3, 7, 11, 12\}$. Next, $\P(N = n \mid Z_1 = z) = p_z (1 - p_z)^{n-2}$ for $n \in \{2, 3, \ldots\}$ and for the values of $z$ and $p_z$ given in the following table:
$z$ 4 5 6 8 9 10
$p_z$ $\frac{9}{36}$ $\frac{10}{36}$ $\frac{11}{36}$ $\frac{11}{36}$ $\frac{10}{36}$ $\frac{9}{36}$
Thus the conditional distribution of $N - 1$ given $Z_1 = z$ is geometric with success parameter $p_z$ for each point $z$. The final result now follows by conditioning on the first roll: $\P(N = n) = \sum_{z=2}^{12} \P(Z_1 = z) \P(N = n \mid Z_1 = z)$
The first few values of the probability density function of $N$ are given in the following table:
$n$ 1 2 3 4 5
$\P(N = n)$ 0.33333 0.18827 0.13477 0.09657 0.06926
Find the probability that a game of craps will last at least 8 rolls.
Answer
0.09235
The mean and variance of the number of rolls are
1. $\E(N) = \frac{557}{165} \approx 3.3758$
2. $\var(N) = \frac{245\,672}{27\,225} \approx 9.02376$
Proof
These results can also be obtained by conditioning on the first roll: \begin{align} \E(N) & = \E\left[\E(N \mid Z_1)\right] = \frac{557}{165} \\ \E(N^2) & = \E\left[\E\left(N^2 \mid Z_1\right)\right] = \frac{61\,769}{3025} \end{align}
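The density, mean, and variance of $N$ are easy to verify numerically by conditioning on the first roll, exactly as in the proofs above. A sketch using exact rational arithmetic (the function names are mine):

```python
from fractions import Fraction

ways = {z: 6 - abs(z - 7) for z in range(2, 13)}
P = {z: Fraction(w, 36) for z, w in ways.items()}     # distribution of one throw
points = (4, 5, 6, 8, 9, 10)
p = {z: P[z] + P[7] for z in points}                  # a later throw ends the game w.p. p[z]

def pdf_N(n):
    """P(N = n): a point mass at 1 mixed with shifted geometric distributions."""
    if n == 1:
        return sum(P[z] for z in (2, 3, 7, 11, 12))
    return sum(P[z] * p[z] * (1 - p[z]) ** (n - 2) for z in points)

print([float(pdf_N(n)) for n in range(1, 6)])         # matches the table above
print(float(1 - sum(pdf_N(n) for n in range(1, 8))))  # P(N >= 8), about 0.09235

# Given a point z, N = 1 + G where G is geometric with success parameter p[z],
# so E(N | z) = 1 + 1/p[z] and E(N^2 | z) = 1 + 2/p[z] + (2 - p[z])/p[z]**2.
EN  = pdf_N(1) + sum(P[z] * (1 + 1 / p[z]) for z in points)
EN2 = pdf_N(1) + sum(P[z] * (1 + 2 / p[z] + (2 - p[z]) / p[z] ** 2) for z in points)
print(EN, EN2 - EN ** 2)                              # 557/165 and 245672/27225
```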
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
The Roulette Wheel
According to Richard Epstein, roulette is the oldest casino game still in operation. Its invention has been variously attributed to Blaise Pascal, the Italian mathematician Don Pasquale, and several others. In any event, the roulette wheel was first introduced into Paris in 1765. Here are the characteristics of the wheel:
The (American) roulette wheel has 38 slots numbered 00, 0, and 1–36.
1. Slots 0, 00 are green;
2. Slots 1, 3, 5, 7, 9, 12, 14, 16, 18, 19, 21, 23, 25, 27, 30, 32, 34, 36 are red;
3. Slots 2, 4, 6, 8, 10, 11, 13, 15, 17, 20, 22, 24, 26, 28, 29, 31, 33, 35 are black.
Except for 0 and 00, the slots on the wheel alternate between red and black. The strange order of the numbers on the wheel is intended so that high and low numbers, as well as odd and even numbers, tend to alternate.
The roulette experiment is very simple. The wheel is spun and then a small ball is rolled in a groove, in the opposite direction as the motion of the wheel. Eventually the ball falls into one of the slots. Naturally, we assume mathematically that the wheel is fair, so that the random variable $X$ that gives the slot number of the ball is uniformly distributed over the sample space $S = \{00, 0, 1, \ldots, 36\}$. Thus, $\P(X = x) = \frac{1}{38}$ for each $x \in S$.
Bets
As with craps, roulette is a popular casino game because of the rich variety of bets that can be made. The picture above shows the roulette table and indicates some of the bets we will study. All bets turn out to have the same expected value (negative, of course). However, the variances differ depending on the bet.
Although all bets in roulette have the same expected value, the standard deviations vary inversely with the number of numbers selected. What are the implications of this for the gambler?
Straight Bets
A straight bet is a bet on a single number, and pays $35 : 1$.
Let $W$ denote the winnings on a unit straight bet. Then
1. $\P(W = -1) = \frac{37}{38}$, $\P(W = 35) = \frac{1}{38}$
2. $\E(W) = -\frac{1}{19} \approx -0.0526$
3. $\sd(W) \approx 5.7626$
Three Number Bets
A 3-number bet (or row bet) is a bet on the three numbers in a vertical row on the roulette table. The bet pays $11 : 1$.
Let $W$ denote the winnings on a unit row bet. Then
1. $\P(W = -1) = \frac{35}{38}$, $\P(W = 11) = \frac{3}{38}$
2. $\E(W) = -\frac{1}{19} \approx -0.0526$
3. $\sd(W) \approx 3.2359$
Six Number Bets
A 6-number bet or 2-row bet is a bet on the 6 numbers in two adjacent rows of the roulette table. The bet pays $5 : 1$.
Let $W$ denote the winnings on a unit 6-number bet. Then
1. $\P(W = -1) = \frac{16}{19}$, $\P(W = 5) = \frac{3}{19}$
2. $\E(W) = -\frac{1}{19} \approx -0.0526$
3. $\sd(W) \approx 2.1879$
Eighteen Number Bets
An 18-number bet is a bet on 18 numbers. In particular, a color bet is a bet either on red or on black. A parity bet is a bet on the odd numbers from 1 to 36 or the even numbers from 1 to 36. The low bet is a bet on the numbers 1-18, and the high bet is a bet on the numbers 19-36. An 18-number bet pays $1 : 1$.
Let $W$ denote the winnings on a unit 18-number bet. Then
1. $\P(W = -1) = \frac{10}{19}$, $\P(W = 1) = \frac{9}{19}$
2. $\E(W) = -\frac{1}{19} \approx -0.0526$
3. $\sd(W) \approx 0.9986$
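All of the bets above follow one pattern: a bet covering $k$ of the 38 numbers pays $(36 - k)/k$ to 1 for the values of $k$ used here, which is why the expected values agree while the standard deviations differ. A small sketch (the function name is mine) that reproduces the numbers above:

```python
from fractions import Fraction
from math import sqrt

def roulette_bet(k):
    """Mean and standard deviation of the winnings on a unit bet covering k numbers,
    paying (36 - k)/k to 1, on an American wheel with 38 slots."""
    win = Fraction(k, 38)
    payoff = Fraction(36 - k, k)
    mean = payoff * win - (1 - win)
    var = payoff ** 2 * win + (1 - win) - mean ** 2
    return mean, sqrt(var)

for k in (1, 3, 6, 18):
    print(k, roulette_bet(k))
# every mean is -1/19; the standard deviations are about 5.76, 3.24, 2.19, and 1.00
```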
In the roulette experiment, select the 18-number bet. Run the simulation 1000 times and compare the empirical density function and moments of $W$ to the true probability density function and moments. Suppose that you bet \$1 on each of the 1000 games. What would your net winnings be?
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Preliminaries
Statement of the Problem
The Monty Hall problem involves a classical game show situation and is named after Monty Hall, the long-time host of the TV game show Let's Make a Deal. There are three doors labeled 1, 2, and 3. A car is behind one of the doors, while goats are behind the other two:
The rules are as follows:
1. The player selects a door.
2. The host selects a different door and opens it.
3. The host gives the player the option of switching from her original choice to the remaining closed door.
4. The door finally selected by the player is opened and she either wins or loses.
The Monty Hall problem became the subject of intense controversy because of several articles by Marilyn Vos Savant in the Ask Marilyn column of Parade magazine, a popular Sunday newspaper supplement. The controversy began when a reader posed the problem in the following way:
Suppose you're on a game show, and you're given a choice of three doors. Behind one door is a car; behind the others, goats. You pick a door—say No. 1—and the host, who knows what's behind the doors, opens another door—say No. 3—which has a goat. He then says to you, Do you want to pick door No. 2? Is it to your advantage to switch your choice?
Marilyn's response was that the contestant should switch doors, claiming that there is a $\frac{1}{3}$ chance that the car is behind door 1, while there is a $\frac{2}{3}$ chance that the car is behind door 2. In two follow-up columns, Marilyn printed a number of responses, some from academics, most of whom claimed in angry or sarcastic tones that she was wrong and that there are equal chances that the car is behind doors 1 or 2. Marilyn stood by her original answer and offered additional, but non-mathematical, arguments.
Think about the problem. Do you agree with Marilyn or with her critics, or do you think that neither solution is correct?
In the Monty Hall game, set the host strategy to standard (the meaning of this strategy will be explained below). Play the Monty Hall game 50 times with each of the following strategies. Do you want to reconsider your answer to the question above?
1. Always switch
2. Never switch
In the Monty Hall game, set the host strategy to blind (the meaning of this strategy will be explained below). Play the Monty Hall game 50 times with each of the following strategies. Do you want to reconsider your answer to the question above?
1. Always switch
2. Never switch
Modeling Assumptions
When we begin to think carefully about the Monty Hall problem, we realize that the statement of the problem by Marilyn's reader is so vague that a meaningful discussion is not possible without clarifying assumptions about the strategies of the host and player. Indeed, we will see that misunderstandings about these strategies are the cause of the controversy.
Let us try to formulate the problem mathematically. In general, the actions of the host and player can vary from game to game, but if we are to have a random experiment in the classical sense, we must assume that the same probability distributions govern the host and player on each game and that the games are independent.
There are four basic random variables for a game:
1. $U$: the number of the door containing the car.
2. $X$: the number of the first door selected by the player.
3. $V$: the number of the door opened by the host.
4. $Y$: the number of the second door selected by the player.
Each of these random variables has the possible values 1, 2, and 3. However, because of the rules of the game, the door opened by the host cannot be either of the doors selected by the player, so $V \ne X$ and $V \ne Y$. In general, we will allow the possibility $V = U$, that the host opens the door with the car behind it. Whether this is a reasonable action of the host is a big part of the controversy about this problem.
The Monty Hall experiment will be completely defined mathematically once the joint distribution of the basic variables is specified. This joint distribution in turn depends on the strategies of the host and player, which we will consider next.
Strategies
Host Strategies
In the Monty Hall experiment, note that the host determines the probability density function of the door containing the car, namely $\P(U = i)$ for $i \in \{1, 2, 3\}$. The obvious choice for the host is to randomly assign the car to one of the three doors. This leads to the uniform distribution, and unless otherwise noted, we will always assume that $U$ has this distribution. Thus, $\P(U = i) = \frac{1}{3}$ for $i \in \{1, 2, 3\}$.
The host also determines the conditional density function of the door he opens, given knowledge of the door containing the car and the first door selected by the player, namely $\P(V = k \mid U = i, X = j)$ for $i, \, j, \, k \in \{1, 2, 3\}$. Recall that since the host cannot open the door chosen by the player, this probability must be 0 for $k = j$.
Thus, the distribution of $U$ and the conditional distribution of $V$ given $U$ and $X$ constitute the host strategy.
The Standard Strategy
In most real game shows, the host would always open a door with a goat behind it. If the player's first choice is incorrect, then the host has no choice; he cannot open the door with the car or the player's choice and must therefore open the only remaining door. On the other hand, if the player's first choice is correct, then the host can open either of the remaining doors, since goats are behind both. Thus, he might naturally pick one of these doors at random.
This strategy leads to the following conditional distribution for $V$ given $U$ and $X$: $\P(V = k \mid U = i, X = j) = \begin{cases} 1, & i \ne j, \; i \ne k, \; k \ne j \\ \frac{1}{2}, & i = j, \; k \ne i \\ 0, & k = i \text{ or } k = j \end{cases}$
This distribution, along with the uniform distribution for $U$, will be referred to as the standard strategy for the host.
In the Monty Hall game, set the host strategy to standard. Play the game 50 times with each of the following player strategies. Which works better?
1. Always switch
2. Never switch
The Blind Strategy
Another possible second-stage strategy is for the host to always open a door chosen at random from the two possibilities. Thus, the host might well open the door containing the car.
This strategy leads to the following conditional distribution for $V$ given $U$ and $X$: $\P(V = k \mid U = i, X = j) = \begin{cases} \frac{1}{2}, & k \ne j \\ 0, & k = j \end{cases}$
This distribution, together with the uniform distribution for $U$, will be referred to as the blind strategy for the host. The blind strategy seems a bit odd. However, the confusion between the two strategies is the source of the controversy concerning this problem.
In the Monty Hall game, set the host strategy to blind. Play the game 50 times with each of the following player strategies. Which works better?
1. Always switch
2. Never switch
Player Strategies
The player, on the other hand, determines the probability density function of her first choice, namely $\P(X = j)$ for $j \in \{1, 2, 3\}$. The obvious first choice for the player is to randomly choose a door, since the player has no knowledge at this point. This leads to the uniform distribution, so $\P(X = j) = \frac{1}{3}$ for $j \in \{1, 2, 3\}$
The player also determines the conditional density function of her second choice, given knowledge of her first choice and the door opened by the host, namely $\P(Y = l \mid X = j, V = k)$ for $j, \, k, \, l \in \{1, 2, 3\}$ with $j \ne k$. Recall that since the player cannot choose the door opened by the host, this probability must be 0 for $l = k$. The distribution of $X$ and the conditional distribution of $Y$ given $X$ and $V$ constitute the player strategy.
Suppose that the player switches with probability $p \in [0, 1]$. This leads to the following conditional distribution: $\P(Y = l \mid X = j, V = k) = \begin{cases} p, & j \ne k, \; l \ne j, \; l \ne k \\ 1 - p, & j \ne k, \; l = j \\ 0, & l = k \end{cases}$
In particular, if $p = 1$, the player always switches, while if $p = 0$, the player never switches.
Mathematical Analysis
We are almost ready to analyze the Monty Hall problem mathematically. But first we must make some independence assumptions to incorporate the lack of knowledge that the host and player have about each other's actions. First, the player has no knowledge of the door containing the car, so we assume that $U$ and $X$ are independent. Also, the only information about the car door that the player has when she makes her second choice is the information (if any) revealed by her first choice and the host's subsequent selection. Mathematically, this means that $Y$ is conditionally independent of $U$ given $X$ and $V$.
Distributions
The host and player strategies form the basic data for the Monty Hall problem. Because of the independence assumptions, the joint distribution of the basic random variables is completely determined by these strategies.
The joint probability density function of $(U, X, V, Y)$ is given by
$\P(U = i, X = j, V = k, Y = l) = \P(U = i) \P(X = j) \P(V = k \mid U = i, X = j) \P(Y = l \mid X = j, V = k), \quad i, \; j, \; k, \; l \in \{1, 2, 3\}$
Proof
This follows from the independence assumptions and the multiplication rule of conditional probability.
The probability of any event defined in terms of the Monty Hall problem can be computed by summing the joint density over the appropriate values of $(i, j, k, l)$.
With either of the basic host strategies, $V$ is uniformly distributed on $\{1, 2, 3\}$.
Suppose that the player switches with probability $p$. With either of the basic host strategies, $Y$ is uniformly distributed on $\{1, 2, 3\}$.
In the Monty Hall experiment, set the host strategy to standard. For each of the following values of $p$, run the simulation 1000 times. Based on relative frequency, which strategy works best?
1. $p = 0$ (never switch)
2. $p = 0.3$
3. $p = 0.5$
4. $p = 0.7$
5. $p = 1$ (always switch)
In the Monty Hall experiment, set the host strategy to blind. For each of the following values of $p$, run the experiment 1000 times. Based on relative frequency, which strategy works best?
1. $p = 0$ (never switch)
2. $p = 0.3$
3. $p = 0.5$
4. $p = 0.7$
5. $p = 1$ (always switch)
The Probability of Winning
The event that the player wins a game is $\{Y = U\}$. We will compute the probability of this event with the basic host and player strategies.
Suppose that the host follows the standard strategy and that the player switches with probability $p$. Then the probability that the player wins is $\P(Y = U) = \frac{1 + p}{3}$
In particular, if the player always switches ($p = 1$), the probability that she wins is $\frac{2}{3}$, and if the player never switches ($p = 0$), the probability that she wins is $\frac{1}{3}$.
In the Monty Hall experiment, set the host strategy to standard. For each of the following values of $p$, run the simulation 1000 times. In each case, compare the relative frequency of winning to the probability of winning.
1. $p = 0$ (never switch)
2. $p = 0.3$
3. $p = 0.5$
4. $p = 0.7$
5. $p = 1$ (always switch)
Suppose that the host follows the blind strategy. Then for any player strategy, the probability that the player wins is $\P(Y = U) = \frac{1}{3}$
In the Monty Hall experiment, set the host strategy to blind. For each of the following values of $p$, run the experiment 1000 times. In each case, compare the relative frequency of winning to the probability of winning.
1. $p = 0$ (never switch)
2. $p = 0.3$
3. $p = 0.5$
4. $p = 0.7$
5. $p = 1$ (always switch)
For a complete solution of the Monty Hall problem, we want to compute the conditional probability that the player wins, given that the host opens a door with a goat behind it: $\P(Y = U \mid V \ne U) = \frac{\P(Y = U)}{\P(V \ne U)}$ With the basic host and player strategies, the numerator, the probability of winning, has been computed. Thus we need to consider the denominator, the probability that the host opens a door with a goat. If the host uses the standard strategy, then the conditional probability of winning is the same as the unconditional probability of winning, regardless of the player strategy. In particular, we have the following result:
If the host follows the standard strategy and the player switches with probability $p$, then $\P(Y = U \mid V \ne U) = \frac{1 + p}{3}$
Proof
This follows from the win probability above
Once again, the probability increases from $\frac{1}{3}$ when $p = 0$, so that the player never switches, to $\frac{2}{3}$ when $p = 1$, so that the player always switches.
If the host follows the blind strategy, then for any player strategy, $\P(V \ne U) = \frac{2}{3}$ and therefore $\P(Y = U \mid V \ne U) = \frac{1}{2}$.
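Both winning probabilities, and the conditional probabilities given that the host shows a goat, can be checked by summing the joint density in the result above over all $(i, j, k, l)$. A minimal computational sketch; the function names and the way the strategies are encoded are mine.

```python
from fractions import Fraction
from itertools import product

def host_prob(i, j, k, standard=True):
    """P(V = k | U = i, X = j) for the standard or the blind host strategy."""
    if k == j:
        return Fraction(0)                         # host never opens the player's door
    if not standard:
        return Fraction(1, 2)                      # blind: either other door at random
    if i == j:
        return Fraction(1, 2)                      # player guessed the car: host picks at random
    return Fraction(1) if k != i else Fraction(0)  # otherwise host must avoid the car

def player_prob(j, k, l, p):
    """P(Y = l | X = j, V = k) when the player switches with probability p."""
    if l == k:
        return Fraction(0)                         # player never picks the opened door
    return Fraction(1) - p if l == j else p

def monty(p, standard=True):
    win = goat = win_and_goat = Fraction(0)
    for i, j, k, l in product((1, 2, 3), repeat=4):
        prob = Fraction(1, 9) * host_prob(i, j, k, standard) * player_prob(j, k, l, p)
        if k != i:
            goat += prob
        if l == i:
            win += prob
            if k != i:
                win_and_goat += prob
    return win, win_and_goat / goat

for p in (Fraction(0), Fraction(1, 2), Fraction(1)):
    print(p, monty(p, standard=True), monty(p, standard=False))
# standard host: P(win) = (1 + p)/3, and the conditional probability is the same
# blind host: P(win) = 1/3 for every p, and the conditional probability is 1/2
```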
In the Monty Hall experiment, set the host strategy to blind. For each of the following values of $p$, run the experiment 500 times. In each case, compute the conditional relative frequency of winning, given that the host shows a goat, and compare with the theoretical answer above.
1. $p = 0$ (never switch)
2. $p = 0.3$
3. $p = 0.5$
4. $p = 0.7$
5. $p = 1$ (always switch)
The confusion between the conditional probability of winning for these two strategies has been the source of much controversy in the Monty Hall problem. Marilyn was probably thinking of the standard host strategy, while some of her critics were thinking of the blind strategy. This problem points out the importance of careful modeling, of the careful statement of assumptions. Marilyn is correct if the host follows the standard strategy; the critics are correct if the host follows the blind strategy; any number of other answers could be correct if the host follows other strategies.
The mathematical formulation we have used is fairly complete. However, if we just want to solve Marilyn's problem, there is a much simpler analysis (which you may have discovered yourself). Suppose that the host follows the standard strategy, and thus always opens a door with a goat. If the player's first door is incorrect (contains a goat), then the host has no choice and must open the other door with a goat. Then, if the player switches, she wins. On the other hand, if the player's first door is correct and she switches, then of course she loses. Thus, we see that if the player always switches, then she wins if and only if her first choice is incorrect, an event that obviously has probability $\frac{2}{3}$. If the player never switches, then she wins if and only if her first choice is correct, an event with probability $\frac{1}{3}$.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
You realize the odds of winning [the lottery] are the same as being mauled by a polar bear and a regular bear in the same day.
E*TRADE baby, January 2010.
Lotteries are among the simplest and most widely played of all games of chance, and unfortunately for the gambler, among the worst in terms of expected value. Lotteries come in such an incredible number of variations that it is impractical to analyze all of them. So, in this section, we will study some of the more common lottery formats.
The Basic Lottery
Basic Format
The basic lottery is a random experiment in which the gambling house (in many cases a government agency) selects $n$ numbers at random, without replacement, from the integers from 1 to $N$. The integer parameters $N$ and $n$ vary from one lottery to another, and of course, $n$ cannot be larger than $N$. The order in which the numbers are chosen usually does not matter, and thus in this case, the sample space $S$ of the experiment consists of all subsets (combinations) of size $n$ chosen from the population $\{1, 2, \ldots, N\}$. $S = \left\{ \bs{x} \subseteq \{1, 2, \ldots, N\}: \#(\bs{x}) = n\right\}$
Recall that $\#(S) = \binom{N}{n} = \frac{N!}{n! (N - n)!}$
Naturally, we assume that all such combinations are equally likely, and thus, the chosen combination $\bs{X}$, the basic random variable of the experiment, is uniformly distributed on $S$. $\P(\bs{X} = \bs{x}) = \frac{1}{\binom{N}{n}}, \quad \bs{x} \in S$ The player of the lottery pays a fee and gets to select $m$ numbers, without replacement, from the integers from 1 to $N$. Again, order does not matter, so the player essentially chooses a combination $\bs{y}$ of size $m$ from the population $\{1, 2, \ldots, N\}$. In many cases $m = n$, so that the player gets to choose the same number of numbers as the house. In general then, there are three parameters in the basic $(N, n, m)$ lottery.
The player's goal, of course, is to maximize the number of matches (often called catches by gamblers) between her combination $\bs{y}$ and the random combination $\bs{X}$ chosen by the house. Essentially, the player is trying to guess the outcome of the random experiment before it is run. Thus, let $U = \#(\bs{X} \cap \bs{y})$ denote the number of catches.
The number of catches $U$ in the $(N, n, m)$, lottery has probability density function given by $\P(U = k) = \frac{\binom{m}{k} \binom{N - m}{n - k}}{\binom{N}{n}}, \quad k \in \{0, 1, \ldots, m\}$
The distribution of $U$ is the hypergeometric distribution with parameters $N$, $n$, and $m$, and is studied in detail in the chapter on Finite Sampling Models. In particular, from this section, it follows that the mean and variance of the number of catches $U$ are \begin{align} \E(U) & = n \frac{m}{N} \\ \var(U) & = n \frac{m}{N} \left(1 - \frac{m}{N}\right) \frac{N - n}{N - 1} \end{align} Note that $\P(U = k) = 0$ if $k \gt n$ or $k \lt n + m - N$. However, in most lotteries, $m \le n$ and $N$ is much larger than $n + m$. In these common cases, the density function is positive for the values of $k$ given above.
We will refer to the special case where $m = n$ as the $(N, n)$ lottery; this is the case in most state lotteries. In this case, the probability density function of the number of catches $U$ is $\P(U = k) = \frac{\binom{n}{k} \binom{N - n}{n - k}}{\binom{N}{n}}, \quad k \in \{0, 1, \ldots, n\}$ The mean and variance of the number of catches $U$ in this special case are \begin{align} \E(U) & = \frac{n^2}{N} \\ \var(U) & = \frac{n^2 (N - n)^2}{N^2 (N - 1)} \end{align}
Explicitly give the probability density function, mean, and standard deviation of the number of catches in the $(47, 5)$ lottery.
Answer
$\E(U) = 0.5319148936$, $\sd(U) = 0.6587832083$
$k$ $\P(U = k)$
0 0.5545644253
1 0.3648450167
2 0.0748400034
3 0.0056130003
4 0.0001369024
5 0.0000006519
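A sketch that reproduces the table above; the function name is mine, and the same function handles the other lottery formats in the exercises that follow.

```python
from math import comb

def catches_pdf(N, n, m):
    """Hypergeometric density of the number of catches in the (N, n, m) lottery."""
    return [comb(m, k) * comb(N - m, n - k) / comb(N, n) for k in range(m + 1)]

pdf = catches_pdf(47, 5, 5)
mean = sum(k * p for k, p in enumerate(pdf))
sd = (sum(k * k * p for k, p in enumerate(pdf)) - mean ** 2) ** 0.5
print([round(p, 10) for p in pdf])    # 0.5545644253, 0.3648450167, ...
print(mean, sd)                       # about 0.5319 and 0.6588
```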
Explicitly give the probability density function, mean, and standard deviation of the number of catches in the $(49, 5)$ lottery.
Answer
$\E(U) = 0.5102040816$, $\sd(U) = 0.6480462207$
$k$ $\P(U = k)$
0 0.5695196981
1 0.3559498113
2 0.0694536217
3 0.0049609730
4 0.0001153715
5 0.0000005244
Explicitly give the probability density function, mean, and standard deviation of the number of catches in the $(47, 7)$ lottery.
Answer
$\E(U) = 1.042553191$, $\sd(U) = 0.8783776109$
$k$ $\P(U = k)$
0 0.2964400642
1 0.4272224454
2 0.2197144005
3 0.0508598149
4 0.0054983583
5 0.0002604486
6 0.0000044521
7 0.0000000159
The analysis above was based on the assumption that the player's combination $\bs{y}$ is selected deterministically. Would it matter if the player chose the combination in a random way? Thus, suppose that the player's selected combination $\bs{Y}$ is a random variable taking values in $S$. (For example, in many lotteries, players can buy tickets with combinations randomly selected by a computer; this is typically known as Quick Pick). Clearly, $\bs{X}$ and $\bs{Y}$ must be independent, since the player (and her randomizing device) can have no knowledge of the winning combination $\bs{X}$. As you might guess, such randomization makes no difference.
Let $U$ denote the number of catches in the $(N, n, m)$ lottery when the player's combination $\bs{Y}$ is a random variable, independent of the winning combination $\bs{X}$. Then $U$ has the same distribution as in the deterministic case above.
Proof
This follows by conditioning on the value of $\bs{Y}$: by independence, $\P(U = k \mid \bs{Y} = \bs{y})$ is the hypergeometric probability in the result above, and in particular does not depend on $\bs{y}$. Hence $\P(U = k) = \sum_{\bs{y} \in S} \P(U = k \mid \bs{Y} = \bs{y}) \P(\bs{Y} = \bs{y})$ reduces to that same hypergeometric probability.
There are many websites that publish data on the frequency of occurrence of numbers in various state lotteries. Some gamblers evidently feel that some numbers are luckier than others.
Given the assumptions and analysis above, do you believe that some numbers are luckier than others? Does it make any mathematical sense to study historical data for a lottery?
The prize money in most state lotteries depends on the sales of the lottery tickets. Typically, about 50% of the sales money is returned as prize money; the rest goes for administrative costs and profit for the state. The total prize money is divided among the winning tickets, and the prize for a given ticket depends on the number of catches $U$. For all of these reasons, it is impossible to give a simple mathematical analysis of the expected value of playing a given state lottery. Note however, that since the state keeps a fixed percentage of the sales, there is essentially no risk for the state.
From a pure gambling point of view, state lotteries are bad games. In most casino games, by comparison, 90% or more of the money that comes in is returned to the players as prize money. Of course, state lotteries should be viewed as a form of voluntary taxation, not simply as games. The profits from lotteries are typically used for education, health care, and other essential services. A discussion of the value and costs of lotteries from a political and social point of view (as opposed to a mathematical one) is beyond the scope of this project.
Bonus Numbers
Many state lotteries now augment the basic $(N, n)$, format with a bonus number. The bonus number $T$ is selected from a specified set of integers, in addition to the combination $\bs{X}$, selected as before. The player likewise picks a bonus number $s$, in addition to a combination $\bs{y}$. The player's prize then depends on the number of catches $U$ between $\bs{X}$ and $\bs{y}$, as before, and in addition on whether the player's bonus number $s$ matches the random bonus number $T$ chosen by the house. We will let $I$ denote the indicator variable of this latter event. Thus, our interest now is in the joint distribution of $(I, U)$.
In one common format, the bonus number $T$ is selected at random from the set of integers $\{1, 2, \ldots, M\}$, independently of the combination $\bs{X}$ of size $n$ chosen from $\{1, 2, \ldots, N\}$. Usually $M \lt N$. Note that with this format, the game is essentially two independent lotteries, one in the $(N, n)$, format and the other in the $(M, 1)$, format.
Explicitly compute the joint probability density function of $(I, U)$ for the $(47, 5)$ lottery with independent bonus number from 1 to 27. This format is used in the California lottery, among others.
Answer
Joint distribution of $(I, U)$
$\P(I = i, U = k)$ $i = 0$ 1
$k = 0$ 0.5340250022 0.0205394232
1 0.3513322383 0.0135127784
2 0.0720681514 0.0027718520
3 0.0054051114 0.0002078889
4 0.0001318320 0.0000050705
5 0.0000006278 0.0000000241
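With an independent bonus number, the joint density is simply the product of the hypergeometric density and the bonus probabilities $1/M$ and $(M - 1)/M$. A sketch (names are mine) that reproduces the table above and the one in the next exercise:

```python
from math import comb

def joint_pdf_independent_bonus(N, n, M):
    """Joint density of (I, U) when the bonus number is drawn independently from 1..M."""
    table = {}
    for k in range(n + 1):
        u = comb(n, k) * comb(N - n, n - k) / comb(N, n)   # P(U = k)
        table[(0, k)] = u * (M - 1) / M                     # bonus number missed
        table[(1, k)] = u / M                               # bonus number caught
    return table

for (i, k), prob in sorted(joint_pdf_independent_bonus(47, 5, 27).items()):
    print(i, k, round(prob, 10))
```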
Explicitly compute the joint probability density function of $(I, U)$ for the $(49, 5)$ lottery with independent bonus number from 1 to 42. This format is used in the Powerball lottery, among others.
Answer
Joint distribution of $(I, U)$
$\P(I = i, U = k)$ $i = 0$ 1
$k = 0$ 0.5559597053 0.0135599928
1 0.3474748158 0.0084749955
2 0.0677999641 0.0016536577
3 0.0048428546 0.0001181184
4 0.0001126245 0.0000027469
5 0.0000005119 0.0000000125
In another format, the bonus number $T$ is chosen from 1 to $N$, and is distinct from the numbers in the combination $\bs{X}$. To model this game, we assume that $T$ is uniformly distributed on $\{1, 2, \ldots, N\}$, and given $T = t$, $\bs{X}$ is uniformly distributed on the set of combinations of size $n$ chosen from $\{1, 2, \ldots, N\} \setminus \{t\}$. The player likewise picks a bonus number $s$ that is not one of the numbers in her combination $\bs{y}$. For this format, the joint probability density function is harder to compute.
The probability density function of $(I, U)$ is given by \begin{align} \P(I = 1, U = k) & = \frac{\binom{n}{k} \binom{N - 1 - n}{n - k}}{N \binom{N - 1}{n}}, \quad k \in \{0, 1, \ldots, n\} \\ \P(I = 0, U = k) & = (N - n - 1) \frac{\binom{n}{k} \binom{N - 1 - n}{n - k}}{N \binom{N - 1}{n}} + n \frac{\binom{n - 1}{k} \binom{N - n}{n - k}}{N \binom{N - 1}{n}}, \quad k \in \{0, 1, \ldots, n\} \end{align}
Proof
The second equation is obtained by conditioning on whether $T \in \{y_1, y_2, \ldots, y_n\}$. If $T \notin \{y_1, y_2, \ldots, y_n\}$, there are $N - n - 1$ possible values of $T$ with $T \ne s$, each giving the first hypergeometric term; if $T \in \{y_1, y_2, \ldots, y_n\}$, there are $n$ possible values, each giving the second term.
Explicitly compute the joint probability density function of $(I, U)$ for the $(47, 7)$ lottery with bonus number chosen as described above. This format is used in the Super 7 Canada lottery, among others.
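The formula above translates directly into a short computation, which can be used to tabulate the answer for the Super 7 format. This sketch assumes, as in the derivation, that the player's bonus number is not among her $n$ chosen numbers; the function name is mine.

```python
from math import comb

def joint_pdf_distinct_bonus(N, n):
    """Joint density of (I, U) when the bonus T is uniform on 1..N, the winning
    combination avoids T, and the player's bonus is not among her n picks."""
    table = {}
    for k in range(n + 1):
        a = comb(n, k) * comb(N - 1 - n, n - k) / (N * comb(N - 1, n))
        b = comb(n - 1, k) * comb(N - n, n - k) / (N * comb(N - 1, n))
        table[(1, k)] = a
        table[(0, k)] = (N - n - 1) * a + n * b
    return table

pdf = joint_pdf_distinct_bonus(47, 7)
print(sum(pdf.values()))      # should be 1 (up to floating point), a check on the formula
```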
Keno
Keno is a lottery game played in casinos. For a fixed $N$ (usually 80) and $n$ (usually 20), the player can play a range of basic $(N, n, m)$ games, as described in the first subsection. Typically, $m$ ranges from 1 to 15, and the payoff depends on $m$ and the number of catches $U$. In this section, you will compute the density function, mean, and standard deviation of the random payoff, based on a unit bet, for a typical keno game with $N = 80$, $n = 20$, and $m \in \{1, 2, \ldots, 15\}$. The payoff tables are based on the keno game at the Tropicana casino in Atlantic City, New Jersey.
Recall that the probability density function of the number of catches $U$, given above, is $\P(U = k) = \frac{\binom{m}{k} \binom{80 - m}{20 - k}}{\binom{80}{20}}, \quad k \in \{0, 1, \ldots, m\}$
The payoff table for $m = 1$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 1$
Catches 0 1
Payoff 0 3
Answer
Pick $m = 1$, $\E(V) = 0.75$, $\sd(V) = 1.299038106$
$v$ $\P(V = v)$
0 0.75
3 0.25
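Each of the answers below comes from the same computation: weight the payoff table by the hypergeometric density of the number of catches. A sketch (the function name and argument format are mine), shown on the pick-1 and pick-5 games:

```python
from math import comb, sqrt

def keno(payoffs, N=80, n=20):
    """Mean and standard deviation of the payoff V for a pick-m keno bet;
    payoffs[k] is the payoff for k catches, k = 0, 1, ..., m."""
    m = len(payoffs) - 1
    pdf = [comb(m, k) * comb(N - m, n - k) / comb(N, n) for k in range(m + 1)]
    mean = sum(payoffs[k] * pdf[k] for k in range(m + 1))
    var = sum(payoffs[k] ** 2 * pdf[k] for k in range(m + 1)) - mean ** 2
    return mean, sqrt(var)

print(keno([0, 3]))                  # pick 1: (0.75, 1.299...)
print(keno([0, 0, 0, 1, 10, 800]))   # pick 5: mean about 0.72
```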
The payoff table for $m = 2$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 2$
Catches 0 1 2
Payoff 0 0 12
Answer
Pick $m = 2$, $\E(V) = 0.7215189873$, $\sd(V) = 2.852655$
$v$ $\P(V = v)$
0 0.9398734178
12 0.0601265822
The payoff table for $m = 3$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 3$
Catches 0 1 2 3
Payoff 0 0 1 43
Answer
Pick $m = 3$, $\E(V) = 0.7353943525$, $\sd(V) = 5.025285956$
$v$ $\P(V = v)$
0 0.8473709834
1 0.1387536514
43 0.0138753651
The payoff table for $m = 4$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 4$
Catches 0 1 2 3 4
Payoff 0 0 1 3 130
Answer
Pick $m = 4$, $\E(V) = 0.7406201394$, $\sd(V) = 7.198935911$
$v$ $\P(V = v)$
0 0.7410532505
1 0.2126354658
3 0.0432478914
130 0.0030633923
The payoff table for $m = 5$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 5$
Catches 0 1 2 3 4 5
Payoff 0 0 0 1 10 800
Answer
Pick $m = 5$, $\E(V) = 0.7207981892$, $\sd(V) = 20.33532453$
$v$ $\P(V = v)$
0 0.9033276850
1 0.0839350523
10 0.0120923380
800 0.0006449247
The payoff table for $m = 6$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 6$
Catches 0 1 2 3 4 5 6
Payoff 0 0 0 1 4 95 1500
Answer
Pick $m = 6$, $\E(V) = 0.7315342885$, $\sd(V) = 17.83831647$
$v$ $\P(V = v)$
0 0.8384179112
1 0.1298195475
4 0.0285379178
95 0.0030956385
1500 0.0001289849
The payoff table for $m = 7$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 7$
Catches 0 1 2 3 4 5 6 7
Payoff 0 0 0 0 1 25 350 8000
Answer
Pick $m = 7$, $\E(V) = 0.7196008747$, $\sd(V) = 40.69860455$
$v$ $\P(V = v)$
0 0.9384140492
1 0.0521909668
25 0.0086385048
350 0.0007320767
8000 0.0000244026
The payoff table for $m = 8$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 8$
Catches 0 1 2 3 4 5 6 7 8
Payoff 0 0 0 0 0 9 90 1500 25,000
Answer
Pick $m = 8$, $\E(V) = 0.7270517606$, $\sd(V) = 55.64771986$
$v$ $\P(V = v)$
0 0.9791658999
9 0.0183025856
90 0.0023667137
1500 0.0001604552
25,000 0.0000043457
The payoff table for $m = 9$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 9$
Catches 0 1 2 3 4 5 6 7 8 9
Payoff 0 0 0 0 0 4 50 280 4000 50,000
The payoff table for $m = 10$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 10$
Catches 0 1 2 3 4 5 6 7 8 9 10
Payoff 0 0 0 0 0 1 22 150 1000 5000 100,000
Answer
Pick $m = 10$, $\E(V) = 0.7228896221$, $\sd(V) = 38.10367609$
$v$ $\P(V = v)$
0 0.9353401224
1 0.0514276877
22 0.0114793946
150 0.0016111431
1000 0.0001354194
5000 0.0000061206
100,000 0.0000001122
The payoff table for $m = 11$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 11$
Catches 0 1 2 3 4 5 6 7 8 9 10 11
Payoff 0 0 0 0 0 0 8 80 400 2500 25,000 100,000
Answer
Pick $m = 11$, $\E(V) = 0.7138083347$, $\sd(V) = 32.99373346$
$v$ $\P(V = v)$
0 0.9757475913
8 0.0202037345
80 0.0036078097
400 0.0004114169
2500 0.0000283736
25,000 0.0000010580
100,000 0.0000000160
The payoff table for $m = 12$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 12$
Catches 0 1 2 3 4 5 6 7 8 9 10 11 12
Payoff 0 0 0 0 0 0 5 32 200 1000 5000 25,000 100,000
Answer
Pick $m = 12$, $\E(V) = 0.7167721544$, $\sd(V) = 20.12030014$
$v$ $\P(V = v)$
0 0.9596431653
5 0.0322088520
32 0.0070273859
200 0.0010195984
1000 0.0000954010
5000 0.0000054280
25,000 0.0000001673
100,000 0.0000000021
The payoff table for $m = 13$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 13$
Catches 0 1 2 3 4 5 6 7 8 9 10 11 12 13
Payoff 1 0 0 0 0 0 1 20 80 600 3500 10,000 50,000 100,000
Answer
Pick $m = 13$, $\E(V) = 0.7216651326$, $\sd(V) = 22.68311303$
$v$ $\P(V = v)$
0 0.9213238456
1 0.0638969375
20 0.0123151493
80 0.0021831401
600 0.0002598976
3500 0.0000200623
10,000 0.0000009434
50,000 0.0000000240
100,000 0.0000000002
The payoff table for $m = 14$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 14$
Catches 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Payoff 1 0 0 0 0 0 1 9 42 310 1100 8000 25,000 50,000 100,000
Answer
Pick $m = 14$, $\E(V) = 0.7194160496$, $\sd(V) = 21.98977077$
$v$ $\P(V = v)$
0 0.898036333063
1 0.077258807301
9 0.019851285448
42 0.004181636518
310 0.000608238039
1100 0.000059737665
8000 0.000003811015
25,000 0.000000147841
50,000 0.000000003084
100,000 0.000000000026
The payoff table for $m = 15$ is given below. Compute the probability density function, mean, and standard deviation of the payoff.
Pick $m = 15$
Catches 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Payoff 1 0 0 0 0 0 0 10 25 100 300 2800 25,000 50,000 100,000 100,000
Answer
Pick $m = 15$, $\E(V) = 0.7144017020$, $\sd(V) = 24.31901706$
$v$ $\P(V = v)$
0 0.95333046038902
1 0.00801614417729
10 0.02988971956684
25 0.00733144064847
100 0.00126716258122
300 0.00015205950975
2800 0.00001234249267
25,000 0.00000064960488
50,000 0.00000002067708
100,000 0.00000000035046
100,000 0.00000000000234
In the exercises above, you should have noticed that the expected payoff on a unit bet varies from about 0.71 to 0.75, so the expected profit (for the gambler) varies from about $-0.25$ to $-0.29$. This is quite bad for the gambler playing a casino game, but as always, the lure of a very high payoff on a small bet for an extremely rare event overrides the expected value analysis for most players.
With $m = 15$, show that the top 4 prizes (25,000, 50,000, 100,000, 100,000) contribute only about 0.017 (less than 2 cents) to the total expected value of about 0.714.
On the other hand, the standard deviation of the payoff varies quite a bit, from about 1 to about 55.
Although the game is highly unfavorable for each $m$, with expected value that is nearly constant, which do you think is better for the gambler—a format with high standard deviation or one with low standard deviation?
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
In this section and the following three sections, we will study gambling strategies for one of the simplest gambling models. Yet in spite of the simplicity of the model, the mathematical analysis leads to some beautiful and sometimes surprising results that have importance and application well beyond gambling. Our exposition is based primarily on the classic book Inequalities for Stochastic Processes (How to Gamble if You Must) by Lester E. Dubins and Leonard J. Savage (1965).
Basic Theory
Assumptions
Here is the basic situation: The gambler starts with an initial sum of money. She bets on independent, probabilistically identical games, each with two outcomes—win or lose. If she wins a game, she receives the amount of the bet on that game; if she loses a game, she must pay the amount of the bet. Thus, the gambler plays at even stakes. This particular situation (IID games and even stakes) is known as red and black, and is named for the color bets in the casino game roulette. Other examples are the pass and don't pass bets in craps.
Let us try to formulate the gambling experiment mathematically. First, let $I_n$ denote the outcome of the $n$th game for $n \in \N_+$, where 1 denotes a win and 0 denotes a loss. These are independent indicator random variables with the same distribution: $\P\left(I_j = 1\right) = p, \quad \P\left(I_j = 0\right) = q = 1 - p$ where $p \in [0, 1]$ is the probability of winning an individual game. Thus, $\bs{I} = (I_1, I_2, \ldots)$ is a sequence of Bernoulli trials.
If $p = 0$, then the gambler always loses and if $p = 1$ then the gambler always wins. These trivial cases are not interesting, so we will usually assume that $0 \lt p \lt 1$. In real gambling houses, of course, $p \lt \frac{1}{2}$ (that is, the games are unfair to the player), so we will be particularly interested in this case.
Random Processes
The gambler's fortune over time is the basic random process of interest: Let $X_0$ denote the gambler's initial fortune and $X_i$ the gambler's fortune after $i$ games. The gambler's strategy consists of the decisions of how much to bet on the various games and when to quit. Let $Y_i$ denote the amount of the $i$th bet, and let $N$ denote the number of games played by the gambler. If we want to, we can always assume that the games go on forever, but with the assumption that the gambler bets 0 on all games after $N$. With this understanding, the game outcome, fortune, and bet processes are defined for all times $i \in \N_+$.
The fortune process is related to the wager process as follows: $X_i = X_{i-1} + \left(2 I_i - 1\right) Y_i, \quad i \in \N_+$
Strategies
The gambler's strategy can be very complicated. For example, the random variable $Y_n$, the gambler's bet on game $n$, or the event $N = n - 1$, her decision to stop after $n - 1$ games, could be based on the entire past history of the game, up to time $n$. Technically, this history forms a $\sigma$-algebra: $\mathscr{H}_n = \sigma\left\{X_0, Y_1, I_1, Y_2, I_2, \ldots, Y_{n-1}, I_{n-1}\right\}$ Moreover, they could have additional sources of randomness. For example a gambler playing roulette could partly base her bets on the roll of a lucky die that she keeps in her pocket. However, the gambler cannot see into the future (unfortunately from her point of view), so we can at least assume that $Y_n$ and $\{N = n - 1\}$ are independent of the future game outcomes $\left(I_n, I_{n+1}, \ldots\right)$.
At least in terms of expected value, any gambling strategy is futile if the games are unfair.
$\E\left(X_i\right) = \E\left(X_{i-1}\right) + (2 p - 1) \E\left(Y_i\right)$ for $i \in \N_+$
Proof
This follows from the previous result and the assumption of no prescience.
Suppose that the gambler has a positive probability of making a real bet on game $i$, so that $\E(Y_i) \gt 0$. Then
1. $\E(X_i) \lt \E(X_{i-1})$ if $p \lt \frac{1}{2}$
2. $\E(X_i) \gt \E(X_{i-1})$ if $p \gt \frac{1}{2}$
3. $\E(X_i) = \E(X_{i-1})$ if $p = \frac{1}{2}$
Proof
This follows from the previous result on the expected value of $X_i$.
Thus on any game in which the gambler makes a positive bet, her expected fortune strictly decreases if the games are unfair, remains the same if the games are fair, and strictly increases if the games are favorable.
As we noted earlier, a general strategy can depend on the past history and can be randomized. However, since the underlying Bernoulli games are independent, one might guess that these complicated strategies are no better than simple strategies in which the amount of the bet and the decision to stop are based only on the gambler's current fortune. These simple strategies do indeed play a fundamental role and are referred to as stationary, deterministic strategies. Such a strategy can be described by a betting function $S$ from the space of fortunes to the space of allowable bets, so that $S(x)$ is the amount that the gambler bets when her current fortune is $x$.
The Stopping Rule
From now on, we will assume that the gambler's stopping rule is a very simple and standard one: she will bet on the games until she either loses her entire fortune and is ruined or reaches a fixed target fortune $a$: $N = \min\{n \in \N: X_n = 0 \text{ or } X_n = a\}$ Thus, any strategy (betting function) $S$ must satisfy $S(x) \le \min\{x, a - x\}$ for $0 \le x \le a$: the gambler cannot bet what she does not have, and will not bet more than is necessary to reach the target $a$.
If we want to, we can think of the difference between the target fortune and the initial fortune as the entire fortune of the house. With this interpretation, the player and the house play symmetric roles, but with complementary win probabilities: play continues until either the player is ruined or the house is ruined. Our main interest is in the final fortune $X_N$ of the gambler. Note that this random variable takes just two values; 0 and $a$.
The mean and variance of the final fortune are given by
1. $\E(X_N) = a \P(X_N = a)$
2. $\var(X_N) = a^2 \P(X_N = a) \left[1 - \P(X_N = a)\right]$
Presumably, the gambler would like to maximize the probability of reaching the target fortune. Is it better to bet small amounts or large amounts, or does it not matter? How does the optimal strategy, if there is one, depend on the initial fortune, the target fortune, and the game win probability?
We are also interested in $\E(N)$, the expected number of games played. Perhaps a secondary goal of the gambler is to maximize the expected number of games that she gets to play. Are the two goals compatible or incompatible? That is, can the gambler maximize both her probability of reaching the target and the expected number of games played, or does maximizing one quantity necessarily mean minimizing the other?
In the next two sections, we will analyze and compare two strategies that are in a sense opposites:
• Timid Play: On each game, until she stops, the gambler makes a small constant bet, say \$1.
• Bold Play: On each game, until she stops, the gambler bets either her entire fortune or the amount needed to reach the target fortune, whichever is smaller.
In the final section of the chapter, we will return to the question of optimal strategies.
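The fortune process, the stopping rule, and the two strategies just described are easy to simulate. The following is a minimal sketch (the function and its arguments are mine, not part of the formal model); it compares timid and bold play with target fortune 16, initial fortune 8, and game win probability 0.45, and foreshadows the results of the next sections.

```python
import random

def play_red_and_black(x, a, p, bet, rng=random.random):
    """Play until ruin (fortune 0) or the target a is reached, starting from fortune x.
    bet(x) is the amount staked when the current fortune is x.
    Returns the final fortune and the number of games played."""
    games = 0
    while 0 < x < a:
        y = min(bet(x), x, a - x)       # cannot bet more than the fortune, or past the target
        x += y if rng() < p else -y     # win the stake with probability p, lose it otherwise
        games += 1
    return x, games

timid = lambda fortune: 1               # always bet 1
bold = lambda fortune: fortune          # bet everything; min() above trims to min(x, a - x)
for name, strategy in (("timid", timid), ("bold", bold)):
    wins = sum(play_red_and_black(8, 16, 0.45, strategy)[0] == 16 for _ in range(10000))
    print(name, wins / 10000)           # bold wins about 45% of the time, timid only about 17%
```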
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Recall that with the strategy of timid play in red and black, the gambler makes a small constant bet, say \$1, on each game until she stops. Thus, on each game, the gambler's fortune either increases by 1 or decreases by 1, until the fortune reaches either 0 or the target $a$ (which we assume is a positive integer). Thus, the fortune process $(X_0, X_1, \ldots)$ is a random walk on the fortune space $\{0, 1, \ldots, a\}$ with 0 and $a$ as absorbing barriers. As usual, we are interested in the probability of winning and the expected number of games.
The key idea in the analysis is that after each game, the fortune process simply starts over again, but with a different initial value. This is an example of the Markov property, named for Andrei Markov. A separate chapter on Markov Chains explores these random processes in more detail. In particular, this chapter has sections on Birth-Death Chains and Random Walks on Graphs, particular classes of Markov chains that generalize the random processes that we are studying here.
The Probability of Winning
Our analysis based on the Markov property suggests that we treat the initial fortune as a variable. Thus, we will denote the probability that the gambler reaches the target $a$, starting with an initial fortune $x$, by $f(x) = \P(X_N = a \mid X_0 = x), \quad x \in \{0, 1, \ldots, a\}$
The function $f$ satisfies the following difference equation and boundary conditions:
1. $f(x) = q f(x - 1) + p f(x + 1)$ for $x \in \{1, 2, \ldots, a - 1\}$
2. $f(0) = 0$, $f(a) = 1$
Proof
The boundary conditions are just a matter of definition. The difference equation follows from conditioning on the outcome of the first trial. She loses this trial with probability $q$, and if she loses, then effectively she starts a new sequence of trials but with initial fortune $x - 1$. She wins the first trial with probability $p$, and if she wins, then she effectively starts a new sequence of trials but with initial fortune $x + 1$.
The difference equation is linear (in the unknown function $f$), homogeneous (because each term involves the unknown function $f$), and second order (because 2 is the difference between the largest and smallest fortunes in the equation). Recall that linear homogeneous difference equations can be solved by finding the roots of the characteristic equation. The characteristic equation of the difference equation is $p r^2 - r + q = 0$, and the roots are $r = 1$ and $r = q / p$.
If $p \ne \frac{1}{2}$, then the roots are distinct. In this case, the probability that the gambler reaches her target is $f(x) = \frac{(q / p)^x - 1}{(q / p)^a - 1}, \quad x \in \{0, 1, \ldots, a\}$
If $p = \frac{1}{2}$, the characteristic equation has a single root 1 that has multiplicity 2. In this case, the probability that the gambler reaches her target is simply the ratio of the initial fortune to the target fortune: $f(x) = \frac{x}{a}, \quad x \in \{0, 1, \ldots, a\}$
Thus, we have the distribution of the final fortune $X_N$ in either case: $\P(X_N = 0 \mid X_0 = x) = 1 - f(x), \; \P(X_N = a \mid X_0 = x) = f(x); \quad x \in \{0, 1, \ldots, a\}$
In the red and black experiment, choose Timid Play. Vary the initial fortune, target fortune, and game win probability and note how the probability of winning the game changes. For various values of the parameters, run the experiment 1000 times and compare the relative frequency of winning a game to the probability of winning a game.
As a function of $x$, for fixed $p$ and $a$,
1. $f$ is increasing from 0 to $a$.
2. $f$ is concave upward if $p \lt \frac{1}{2}$ and concave downward if $p \gt \frac{1}{2}$. Of course, $f$ is linear if $p = \frac{1}{2}$.
$f$ is continuous as a function of $p$, for fixed $x$ and $a$.
Proof
An application of L'Hospital's Rule shows that the probability of winning when $p \ne \frac{1}{2}$ converges to the probability of winning when $p = \frac{1}{2}$, as $p \to \frac{1}{2}$.
For fixed $x$ and $a$, $f(x)$ increases from 0 to 1 as $p$ increases from 0 to 1.
The Expected Number of Trials
Now let us consider the expected number of games needed with timid play, when the initial fortune is $x$: $g(x) = \E(N \mid X_0 = x), \quad x \in \{0, 1, \ldots, a\}$
The function $g$ satisfies the following difference equation and boundary conditions:
1. $g(x) = q g(x - 1) + p g(x + 1) + 1$ for $x \in \{1, 2, \ldots, a - 1\}$
2. $g(0) = 0$, $g(a) = 0$
Proof
Again, the difference equation follows from conditioning on the first trial. She loses this trial with probability $q$, and if she loses, then effectively she starts a new sequence of trials but with initial fortune $x - 1$. She wins the first trial with probability $p$, and if she wins, then she effectively starts a new sequence of trials but with initial fortune $x + 1$. In either case, one trial is over.
The difference equation in the last exercise is linear, second order, but non-homogeneous (because of the constant term 1 on the right side). The corresponding homogeneous equation is the equation satisfied by the win probability function $f$. Thus, only a little additional work is needed to solve the non-homogeneous equation.
If $p \ne \frac{1}{2}$, then $g(x) = \frac{x}{q - p} - \frac{a}{q - p} f(x), \quad x \in \{0, 1, \ldots, a\}$ where $f$ is the win probability function above.
If $p = \frac{1}{2}$, then $g(x) = x (a - x), \quad x \in \{0, 1, \ldots, a\}$
Consider $g$ as a function of the initial fortune $x$, for fixed values of the game win probability $p$ and the target fortune $a$.
1. $g$ at first increases and then decreases.
2. $g$ is concave downward.
When $p = \frac{1}{2}$, the maximum value of $g$ is $\frac{a^2}{4}$ and occurs when $x = \frac{a}{2}$. When $p \ne \frac{1}{2}$, the value of $x$ where the maximum occurs is rather complicated.
$g$ is continuous as a function of $p$, for fixed $x$ and $a$.
Proof
The expected value when $p \ne \frac{1}{2}$ converges to the expected value when $p = \frac{1}{2}$, as $p \to \frac{1}{2}$.
For many parameter settings, the expected number of games is surprisingly large. For example, suppose that $p = \frac{1}{2}$ and the target fortune is 100. If the gambler's initial fortune is 1, then the expected number of games is 99, even though half of the time, the gambler will be ruined on the first game. If the initial fortune is 50, the expected number of games is 2500.
In the red and black experiment, select Timid Play. Vary the initial fortune, the target fortune, and the game win probability and notice how the expected number of games changes. For various values of the parameters, run the experiment 1000 times and compare the sample mean number of games to the expected value.
Increasing the Bet
What happens if the gambler makes constant bets, but with an amount higher than 1? The answer to this question may give insight into what will happen with bold play.
In the red and black game, set the target fortune to 16, the initial fortune to 8, and the win probability to 0.45. Play 10 games with each of the following strategies. Which seems to work best?
1. Bet 1 on each game (timid play).
2. Bet 2 on each game.
3. Bet 4 on each game.
4. Bet 8 on each game (bold play).

We will need to embellish our notation to indicate the dependence on the target fortune. Let $f(x, a) = \P(X_N = a \mid X_0 = x), \quad x \in \{0, 1, \ldots, a\}, \; a \in \N_+$

Now fix $p$ and suppose that the target fortune is $2 a$ and the initial fortune is $2 x$. If the gambler plays timidly (betting \$1 each time), then of course, her probability of reaching the target is $f(2 x, 2 a)$. On the other hand:

Suppose that the gambler bets \$2 on each game. The fortune process $(X_i / 2: i \in \N)$ corresponds to timid play with initial fortune $x$ and target fortune $a$, and therefore the probability that the gambler reaches the target is $f(x, a)$.

Thus, we need to compare the probabilities $f(2 x, 2 a)$ and $f(x, a)$. The win probability functions are related as follows: $f(2 x, 2 a) = f(x, a) \frac{(q / p)^x + 1}{(q / p)^a + 1}, \quad x \in \{0, 1, \ldots, a\}$

In particular,

1. $f(2 x, 2 a) \lt f(x, a)$ if $p \lt \frac{1}{2}$
2. $f(2 x, 2 a) = f(x, a)$ if $p = \frac{1}{2}$
3. $f(2 x, 2 a) \gt f(x, a)$ if $p \gt \frac{1}{2}$

Thus, it appears that increasing the bets is a good idea if the games are unfair, a bad idea if the games are favorable, and makes no difference if the games are fair.

What about the expected number of games played? It seems almost obvious that if the bets are increased, the expected number of games played should decrease, but a direct analysis using the expected value function above is harder than one might hope (try it!). We will use a different method, one that actually gives better results. Specifically, we will have the \$1 and \$2 gamblers bet on the same underlying sequence of games, so that the two fortune processes are defined on the same sample space. Then we can compare the actual random variables (the number of games played), which in turn leads to a comparison of their expected values. Recall that this general method is referred to as coupling.

Let $X_n$ denote the fortune after $n$ games for the gambler making \$1 bets (simple timid play). Then $2 X_n - X_0$ is the fortune after $n$ games for the gambler making \$2 bets (with the same initial fortune, betting on the same sequence of games). Assume again that the initial fortune is $2 x$ and the target fortune $2 a$ where $0 \lt x \lt a$. Let $N_1$ denote the number of games played by the \$1 gambler, and $N_2$ the number of games played by the \$2 gambler. Then

1. If the \$1 gambler falls to fortune $x$, the \$2 gambler is ruined (fortune 0).
2. If the \$1 gambler hits fortune $x + a$, the \$2 gambler reaches the target $2 a$.
3. The \$1 gambler must hit $x$ before hitting 0 and must hit $x + a$ before hitting $2 a$.
4. $N_2 \lt N_1$ given $X_0 = 2 x$.
5. $\E(N_2 \mid X_0 = 2 x) \lt \E(N_1 \mid X_0 = 2 x)$
Of course, the expected values agree (and are both 0) if $x = 0$ or $x = a$. This result shows that $N_2$ is stochastically smaller than $N_1$ even when the gamblers are not playing the same sequence of games (so that the random variables are not defined on the same sample space).
Generalize the analysis in this subsection to compare timid play with the strategy of betting \$$k$ on each game (let the initial fortune be $k x$ and the target fortune $k a$).
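The comparisons above are easy to explore numerically. Here is a minimal Python sketch (not part of the text's red and black app; the function name `timid_win_prob` is just an illustrative choice) that evaluates the win probability formula for timid play and compares constant bets of \$1, \$2, \$4, and \$8 in the setting of the exercise above, with initial fortune 8, target 16, and $p = 0.45$. Betting \$$k$ is equivalent to unit bets with initial fortune $8/k$ and target $16/k$.

```python
def timid_win_prob(x, a, p):
    """P(reach target a from fortune x) under timid (unit-bet) play."""
    if p == 0.5:
        return x / a
    q = 1.0 - p
    return ((q / p) ** x - 1) / ((q / p) ** a - 1)

p, x, a = 0.45, 8, 16
for k in (1, 2, 4, 8):
    # betting k with fortune x and target a is equivalent to unit bets
    # with fortune x // k and target a // k (all integers here)
    print(k, round(timid_win_prob(x // k, a // k, p), 4))
```

The computed win probabilities increase with the bet size, consistent with the conclusion below.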
It appears that with unfair games, the larger the bets the better, at least in terms of the probability of reaching the target. Thus, we are naturally led to consider bold play.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Preliminaries
Recall that with the strategy of bold play in red and black, the gambler on each game bets either her entire fortune or the amount needed to reach the target fortune, whichever is smaller. As usual, we are interested in the probability that the player reaches the target and the expected number of trials. The first interesting fact is that only the ratio of the initial fortune to the target fortune matters, quite in contrast to timid play.
Suppose that the gambler plays boldly with initial fortune $x$ and target fortune $a$. As usual, let $\bs{X} = (X_0, X_1, \ldots)$ denote the fortune process for the gambler. For any $c \gt 0$, the random process $c \bs{X} = (c X_0, c X_1, \ldots)$ is the fortune process for bold play with initial fortune $c x$ and target fortune $c a$.
Because of this result, it is convenient to use the target fortune as the monetary unit and to allow irrational, as well as rational, initial fortunes. Thus, the fortune space is $[0, 1]$. Sometimes in our analysis we will ignore the states 0 or 1; clearly there is no harm in this because in these states, the game is over.
Recall that the betting function $S$ is the function that gives the amount bet as a function of the current fortune. For bold play, the betting function is $S(x) = \min\{x, 1 - x\} = \begin{cases} x, & 0 \le x \le \frac{1}{2} \ 1 - x, & \frac{1}{2} \le x \le 1 \end{cases}$
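Bold play is easy to simulate directly from the betting function $S$. The following is a minimal Python sketch, not the applet used in the exercises below; the parameter values and the name `bold_play` are illustrative choices. It can be used to estimate the success probability from a given initial fortune.

```python
import random

def bold_play(x, p, rng=random.random):
    """Run one bold-play game from fortune x (target 1); return 1 if the target is reached."""
    while 0 < x < 1:
        bet = min(x, 1 - x)      # the bold betting function S(x)
        if rng() < p:            # win the current game with probability p
            x += bet
        else:
            x -= bet
    return int(x >= 1)

p, x0, runs = 0.45, 0.6, 100_000
wins = sum(bold_play(x0, p) for _ in range(runs))
print(wins / runs)               # empirical estimate of the success probability F(x0)
```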
The Probability of Winning
We will denote the probability that the bold gambler reaches the target $a = 1$ starting from the initial fortune $x \in [0, 1]$ by $F(x)$. By the scaling property, the probability that the bold gambler reaches some other target value $a \gt 0$, starting from $x \in [0, a]$ is $F(x / a)$.
The function $F$ satisfies the following functional equation and boundary conditions:
1. $F(x) = \begin{cases} p F(2 x), & 0 \le x \le \frac{1}{2}\ p + q F(2 x - 1), & \frac{1}{2} \le x \le 1 \end{cases}$
2. $F(0) = 0$, $F(1) = 1$
From the previous result, and a little thought, it should be clear that an important role is played by the following function:
Let $d$ be the function defined on $[0, 1)$ by $d(x) = 2 x - \lfloor 2 x \rfloor = \begin{cases} 2 x, & 0 \le x \lt \frac{1}{2} \ 2 x - 1, & \frac{1}{2} \le x \lt 1 \end{cases}$ The function $d$ is called the doubling function, mod 1, since $d(x)$ gives the fractional part of $2 x$.
Note that until the last bet that ends the game (with the player ruined or victorious), the successive fortunes of the player follow iterates of the map $d$. Thus, bold play is intimately connected with the dynamical system associated with $d$.
Binary Expansions
One of the keys to our analysis is to represent the initial fortune in binary form.
The binary expansion of $x \in [0, 1)$ is $x = \sum_{i=1}^\infty \frac{x_i}{2^i}$ where $x_i \in \{0, 1\}$ for each $i \in \N_+$. This representation is unique except when $x$ is a binary rational (sometimes also called a dyadic rational), that is, a number of the form $k / 2^n$ where $n \in \N_+$ and $k \in \{1, 3, \ldots, 2^n - 1\}$; the positive integer $n$ is called the rank of $x$. Binary rationals are discussed in more detail in the chapter on Foundations.
For a binary rational $x$ of rank $n$, we will use the standard terminating representation where $x_n = 1$ and $x_i = 0$ for $i \gt n$. Rank can be extended to all numbers in [0, 1) by defining the rank of 0 to be 0 (0 is also considered a binary rational) and by defining the rank of a binary irrational to be $\infty$. We will denote the rank of $x$ by $r(x)$.
Applied to the binary sequences, the doubling function $d$ is the shift operator:
For $x \in [0, 1)$, $[d(x)]_i = x_{i+1}$.
Bold play in red and black can be elegantly described by comparing the bits of the initial fortune with the game bits.
Suppose that the gambler starts with initial fortune $x \in (0, 1)$. The gambler eventually reaches the target 1 if and only if there exists a positive integer $k$ such that $I_j = 1 - x_j$ for $j \in \{1, 2, \ldots, k - 1\}$ and $I_k = x_k$. That is, the gambler wins if and only if when the game bit agrees with the corresponding fortune bit for the first time, that bit is 1.
The random variable whose bits are the complements of the fortune bits will play an important role in our analysis. Thus, let
$W = \sum_{j=1}^\infty \frac{1 - I_j}{2^j}$
Note that $W$ is a well defined random variable taking values in $[0, 1]$.
Suppose that the gambler starts with initial fortune $x \in (0, 1)$. Then the gambler reaches the target 1 if and only if $W \lt x$.
Proof
This follows from the previous result.
$W$ has a continuous distribution. That is, $\P(W = x) = 0$ for any $x \in [0, 1]$.
From the previous two results, it follows that $F$ is simply the distribution function of $W$. In particular, $F$ is an increasing function, and since $W$ has a continuous distribution, $F$ is a continuous function.
The success function $F$ is the unique continuous solution of the functional equation above.
Proof
Induction on the rank shows that any two solutions must agree at the binary rationals. But then any two continuous solutions must agree for all $x \in [0, 1]$.
If we introduce a bit more notation, we can give a nice expression for $F(x)$, and later for the expected number of games $G(x)$. Let $p_0 = p$ and $p_1 = q = 1 - p$.
The win probability function $F$ can be expressed as follows: $F(x) = \sum_{n=1}^\infty p_{x_1} \cdots p_{x_{n-1}} p x_n$
Note that $p \cdot x_n$ in the last expression is correct; it's not a misprint of $p_{x_n}$. Thus, only terms with $x_n = 1$ are included in the sum.
$F$ is strictly increasing on $[0, 1]$. This means that the distribution of $W$ has support $[0, 1]$; that is, there are no subintervals of $[0, 1]$ that have positive length, but 0 probability.
In particular,
1. $F\left(\frac{1}{8}\right) = p^3$
2. $F\left(\frac{2}{8}\right) = p^2$
3. $F\left(\frac{3}{8}\right) = p^2 + p^2 q$
4. $F\left(\frac{4}{8}\right) = p$
5. $F\left(\frac{5}{8}\right) = p + p^2 q$
6. $F\left(\frac{6}{8}\right) = p + p q$
7. $F\left(\frac{7}{8}\right) = p + p q + p q^2$
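The sum for $F(x)$ can be evaluated directly from the binary digits of $x$. The following Python sketch (the name `bold_win_prob` is an illustrative choice, and the sum is truncated for binary irrationals) reproduces the values listed above.

```python
def bold_win_prob(x, p, bits=60):
    """F(x): the sum over the binary digits of x, with p_0 = p and p_1 = q."""
    q = 1.0 - p
    total, prod = 0.0, 1.0            # prod = p_{x_1} ... p_{x_{n-1}}
    for _ in range(bits):
        if x >= 0.5:                  # next digit x_n = 1: add the term, then multiply by p_1 = q
            total += prod * p
            prod *= q
            x = 2 * x - 1
        else:                         # next digit x_n = 0: no term, multiply by p_0 = p
            prod *= p
            x = 2 * x
        if x == 0:                    # binary rational: the remaining digits are 0
            break
    return total

p = 0.45
q = 1 - p
print(bold_win_prob(3/8, p), p**2 + p**2 * q)    # the two values agree
print(bold_win_prob(7/8, p), p + p*q + p*q**2)   # the two values agree
```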
If $p = \frac{1}{2}$ then $F(x) = x$ for $x \in [0, 1]$
Proof
There are two proofs. The simplest proof is to note that $x \mapsto x$ is continuous and satisfies the functional equation above. Another proof can be constructed by using the representation of $F$ as a sum.
Thus, for $p = \frac{1}{2}$ (fair trials), the probability that the bold gambler reaches the target fortune $a$ starting from the initial fortune $x$ is $x / a$, just as it is for the timid gambler. Note also that the random variable $W$ has the uniform distribution on $[0, 1]$. When $p \ne \frac{1}{2}$, the distribution of $W$ is quite strange. To state the result succinctly, we will indicate the dependence of the probability measure $\P$ on the parameter $p \in (0, 1)$. First we define $C_p = \left\{ x \in [0, 1]: \frac{1}{n} \sum_{i=1}^n (1 - x_i) \to p \text{ as } n \to \infty \right\}$ Thus, $C_p$ is the set of $x \in [0, 1]$ for which the relative frequency of 0's in the binary expansion is $p$.
For distinct $p, \, t \in (0, 1)$
1. $\P_p(W \in C_p) = 1$
2. $\P_p(W \in C_t) = 0$
Proof
Part (a) follows from the strong law of large numbers. Part (b) follows from part (a) since $C_p \cap C_t = \emptyset$.
When $p \ne \frac{1}{2}$, $W$ does not have a probability density function (with respect to Lebesgue measure on [0, 1]), even though $W$ has a continuous distribution.
Proof
The proof is by contradiction. Suppose that $W$ has probability density function $f$. Then $1 = \P_p(W \in C_p) = \int_{C_p} f(x) \, dx$. But if $p \ne \frac{1}{2}$, $\int_{C_p} 1 \, dx = \P_{1/2}(W \in C_p) = 0$. That is, $C_p$ has Lebesgue measure 0. But then $\int_{C_p} f(x) \, dx = 0$, a contradiction.
When $p \ne \frac{1}{2}$, $F$ has derivative 0 at almost every point in $[0, 1]$, even though it is strictly increasing.
In the red and black experiment, select Bold Play. Vary the initial fortune, target fortune, and game win probability with the scroll bars and note how the probability of winning the game changes. In particular, note that this probability depends only on $x / a$. Now for various values of the parameters, run the experiment 1000 times and compare the relative frequency function to the probability density function.
The Expected Number of Trials
Let $G(x) = \E(N \mid X_0 = x)$ for $x \in [0, 1]$, the expected number of trials starting at $x$. For any other target fortune $a \gt 0$, the expected number of trials starting at $x \in [0, a]$ is just $G(x / a)$.
$G$ satisfies the following functional equation and boundary conditions:
1. $G(x) = \begin{cases} 1 + p G(2 x), & 0 \lt x \le \frac{1}{2} \ 1 + q G(2 x - 1), & \frac{1}{2} \le x \lt 1 \end{cases}$
2. $G(0) = 0$, $G(1) = 0$
Proof
The functional equation follows from conditioning on the result of the first game.
Note, interestingly, that the functional equation is not satisfied at $x = 0$ or $x = 1$. As before, we can give an alternate analysis using the binary representation of an initial fortune $x \in (0, 1)$.
Suppose that the initial fortune of the gambler is $x \in (0, 1)$. Then $N = \min\{k \in \N_+: I_k = x_k \text{ or } k = r(x)\}$.
Proof
If $x$ is a binary rational then $N$ takes values in the set $\{1, 2, \ldots, r(x)\}$. Play continues until the game number agrees with the rank of the fortune or a game bit agrees with the corresponding fortune bit, whichever is smaller. In the first case, the penultimate fortune is $\frac{1}{2}$, the only fortune for which the next game is always final. If $x$ is a binary irrational then $N$ takes values in $\N_+$. Play continues until a game bit agrees with a corresponding fortune bit.
We can give an explicit formula for the expected number of trials $G(x)$ in terms of the binary representation of $x$. Recall our special notation: $p_0 = p$, $p_1 = q = 1 - p$
Suppose that $x \in (0, 1)$. Then $G(x) = \sum_{n=0}^{r(x) - 1} p_{x_1} \ldots p_{x_n}$
Note that the $n = 0$ term is 1, since the product is empty. The sum has a finite number of terms if $x$ is a binary rational, and the sum has an infinite number of terms if $x$ is a binary irrational.
In particular,
1. $G\left(\frac{1}{8}\right) = 1 + p + p^2$
2. $G\left(\frac{2}{8}\right) = 1 + p$
3. $G\left(\frac{3}{8}\right) = 1 + p + p q$
4. $G\left(\frac{4}{8}\right) = 1$
5. $G\left(\frac{5}{8}\right) = 1 + q + p q$
6. $G\left(\frac{6}{8}\right) = 1 + q$
7. $G\left(\frac{7}{8}\right) = 1 + q + q^2$
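As with $F$, the sum for $G(x)$ can be evaluated directly from the binary digits of $x$; the Python sketch below (illustrative name, sum truncated for binary irrationals) reproduces the values listed above.

```python
def bold_expected_games(x, p, bits=60):
    """G(x): sum of p_{x_1} ... p_{x_n} over n = 0, 1, ... (truncated at `bits` terms)."""
    q = 1.0 - p
    total, prod = 0.0, 1.0
    for _ in range(bits):
        if x <= 0 or x >= 1:          # fortune 0 or 1: the game is over
            break
        total += prod                 # the term p_{x_1} ... p_{x_n}
        if x >= 0.5:                  # next binary digit is 1
            prod *= q
            x = 2 * x - 1
        else:                         # next binary digit is 0
            prod *= p
            x = 2 * x
    return total

p = 0.45
q = 1 - p
print(bold_expected_games(5/8, p), 1 + q + p * q)    # the two values agree
print(bold_expected_games(4/8, p), 1.0)              # G(1/2) = 1
```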
If $p = \frac{1}{2}$ then
$G(x) = \begin{cases} 2 - \frac{1}{2^{r(x) - 1}}, & x \text{ is a binary rational} \ 2, & x \text{ is a binary irrational} \end{cases}$
In the red and black experiment, select Bold Play. Vary $x$, $a$, and $p$ with the scroll bars and note how the expected number of trials changes. In particular, note that the mean depends only on the ratio $x / a$. For selected values of the parameters, run the experiment 1000 times and compare the sample mean to the distribution mean.
For fixed $x$, $G$ is continuous as a function of $p$.
However, as a function of the initial fortune $x$, for fixed $p$, the function $G$ is very irregular.
$G$ is discontinuous at the binary rationals in $[0, 1]$ and continuous at the binary irrationals.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Definitions
Recall that the stopping rule for red and black is to continue playing until the gambler is ruined or her fortune reaches the target fortune $a$. Thus, the gambler's strategy is to decide how much to bet on each game before she must stop. Suppose that we have a class of strategies that correspond to certain valid fortunes and bets; $A$ will denote the set of fortunes and $B_x$ will denote the set of valid bets for $x \in A$. For example, sometimes (as with timid play) we might want to restrict the fortunes to the set of integers $\{0, 1, \ldots, a\}$; other times (as with bold play) we might want to use the interval $[0, 1]$ as the fortune space. As for the bets, recall that the gambler cannot bet what she does not have and will not bet more than she needs in order to reach the target. Thus, a betting function $S$ must satisfy $S(x) \le \min\{x, a - x\}, \quad x \in A$ Moreover, we always restrict our strategies to those for which the stopping time $N$ is finite.
The success function of a strategy is the probability that the gambler reaches the target $a$ with that strategy, as a function of the initial fortune $x$. A strategy with success function $V$ is optimal if for any other strategy with success function $U$, we have $U(x) \le V(x)$ for $x \in A$.
If there exists an optimal strategy, then the optimal success function is unique.
However, there may not exist an optimal strategy or there may be several optimal strategies. Moreover, the optimality question depends on the value of the game win probability $p$, in addition to the structure of fortunes and bets.
A Condition for Optimality
Our main theorem gives a condition for optimality:
A strategy $S$ with success function $V$ is optimal if $p V(x + y) + q V(x - y) \le V(x); \quad x \in A, \, y \in B_x$
Proof
Consider the following strategy: if the initial fortune is $x \in A$, we pick $y \in B_x$ and then bet $y$ on the first game; thereafter we follow strategy $S$. Conditioning on the outcome of the first game, the success function for this new strategy is $U(x) = p\,V(x + y) + q\,V(x - y)$ Thus, the theorem can be restated as follows: If $S$ is optimal with respect to the class of strategies just described, then $S$ is optimal over all strategies. Let $T$ be an arbitrary strategy with success function $U$. The random variable $V(X_n)$ can be interpreted as the probability of winning if the gambler's strategy is replaced by strategy $S$ after time $n$. Conditioning on the outcome of game $n$ gives $\E[V(X_n) \mid X_0 = x] = \E[p \, V(X_{n-1} + Y_n) + q \, V(X_{n-1} - Y_n) \mid X_0 = x]$ Using the optimality condition gives $\E[V(X_n) \mid X_0 = x] \le \E[V(X_{n-1}) \mid X_0 = x] , \quad n \in \N_+, \; x \in A$ It follows that $\E[V(X_n) \mid X_0 = x] \le V(x)$ for $n \in \N_+$ and $x \in A$. Now let $N$ denote the stopping time for strategy $T$. Letting $n \to \infty$ we have $\E[V(X_N) \mid X_0 = x] \le V(x)$ for $x \in A$. But $\E[V(X_N) \mid X_0 = x] = U(x)$ for $x \in A$. Thus $U(x) \le V(x)$ for $x \in A$.
Favorable Trials with a Minimum Bet
Suppose now that $p \ge \frac{1}{2}$ so that the trials are favorable (or at least not unfair) to the gambler. Next, suppose that all bets must be multiples of a basic unit, which we might as well assume is \$1. Of course, real gambling houses have this restriction. Thus the set of valid fortunes is $A = \{0, 1, \ldots, a\}$ and the set of valid bets for $x \in A$ is $B_x = \{0, 1, \ldots, \min\{x, a - x\}\}$. Our main result for this case is
Timid play is an optimal strategy.
Proof
Recall the success function $f$ for timid play and recall the optimality condition. This condition holds if $p = \frac{1}{2}$. If $p \gt \frac{1}{2}$, the condition for optimality is equivalent to $p \left(\frac{q}{p}\right)^{x+y} + q \left(\frac{q}{p}\right)^{x-y} \ge \left(\frac{q}{p}\right)^x$ But this condition is equivalent to $p q \left(p^y - q^y\right)\left(p^{y - 1} - q^{y - 1}\right) \ge 0$ which clearly holds if $p \gt \frac{1}{2}$.
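The optimality condition for timid play can also be checked numerically over all integer fortunes and valid bets. The following small Python sketch (a check, not a proof, with illustrative parameter values) verifies the inequality $p f(x + y) + q f(x - y) \le f(x)$ on a grid.

```python
def f(x, a, p):
    """Timid-play win probability with target a."""
    if p == 0.5:
        return x / a
    q = 1 - p
    return ((q / p) ** x - 1) / ((q / p) ** a - 1)

a, p = 20, 0.6                     # favorable trials: p >= 1/2
q = 1 - p
ok = all(
    p * f(x + y, a, p) + q * f(x - y, a, p) <= f(x, a, p) + 1e-12
    for x in range(1, a)
    for y in range(0, min(x, a - x) + 1)
)
print(ok)                          # True: the optimality condition holds on this grid
```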
In the red and black game set the target fortune to 16, the initial fortune to 8, and the game win probability to 0.45. Define the strategy of your choice and play 100 games. Compare your relative frequency of wins with the probability of winning with timid play.
Favorable Trials without a Minimum Bet
We will now assume that the house allows arbitrarily small bets and that $p \gt \frac{1}{2}$, so that the trials are strictly favorable. In this case it is natural to take the target as the monetary unit so that the set of fortunes is $A = [0, 1]$, and the set of bets for $x \in A$ is $B_x = [0, \min\{x, 1 - x\}]$. Our main result for this case is given below. The results for timid play will play an important role in the analysis, so we will let $f(j, a)$ denote the probability of reaching an integer target $a$, starting at the integer $j \in [0, a]$, with unit bets.
The optimal success function is $V(x) = 1$ for $x \in (0, 1]$.
Proof
Fix a rational initial fortune $x = \frac{k}{n} \in [0, 1]$. Let $m$ be a positive integer and suppose that, starting at $x$, the gambler bets $\frac{1}{m n}$ on each game. This strategy is equivalent to timid play with target fortune $m n$, and initial fortune $m k$. Hence the probability of reaching the target 1 under this strategy is $f(m k, m n)$. But $f(m k, m n) \to 1$ as $m \to \infty$. It follows that $V(x) = 1$ if $x \in (0, 1]$ is rational. But $V$ is increasing so $V(x) = 1$ for all $x \in (0, 1]$.
Unfair Trials
We will now assume that $p \le \frac{1}{2}$ so that the trials are unfair, or at least not favorable. As before, we will take the target fortune as the basic monetary unit and allow any valid fraction of this unit as a bet. Thus, the set of fortunes is $A = [0, 1]$, and the set of bets for $x \in A$ is $B_x = [0, \min\{x, 1 - x\}]$. Our main result for this case is
Bold play is optimal.
Proof
Let $F$ denote the success function for bold play, and let $D(x, y) = F \left(\frac{x + y}{2}\right) - [p F(x) + q F(y)]$ The optimality condition is equivalent to $D(x, y) \le 0$ for $0 \le x \le y \le 1$. From the continuity of $F$, it suffices to prove this inequality when $x$ and $y$ are binary rationals. It's simple to see that $D(x, y) \le 0$ when $x$ and $y$ have rank 0: $x = 0$, $y = 0$ or $x = 0$, $y = 1$ or $x = 1$, $y = 1$. Suppose now that $D(x, y) \le 0$ when $x$ and $y$ have rank $m$ or less. We have the following cases:
1. If $x \le y \le \frac{1}{2}$ then $D(x, y) = p D(2 x, 2 y)$.
2. If $\frac{1}{2} \le x \le y$ then $D(x, y) = q D(2 x - 1, 2 y - 1)$.
3. If $x \le \frac{x + y}{2} \le \frac{1}{2} \le y$ and $2 y - 1 \le 2 x$ then $D(x, y) = (q - p) F(2 y - 1) + q D(2 y - 1, 2 x)$.
4. If $x \le \frac{x + y}{2} \le \frac{1}{2} \le y$ and $2 x \le 2 y - 1$ then $D(x, y) = (q - p) F(2 x) + q D(2 x, 2 y - 1)$.
5. If $x \le \frac{1}{2} \le \frac{x + y}{2} \le y$ and $2 y - 1 \le 2 x$ then $D(x, y) = p (q - p) [1 - F(2 x)] + p D(2 y - 1, 2 x)$.
6. If $x \le \frac{1}{2} \le \frac{x + y}{2} \le y$ and $2 x \le 2 y - 1$ then $D(x, y) = p (q - p) [1 - F(2 y - 1)] + p D(2 x, 2 y - 1)$.
The induction hypothesis can now be applied to each case to finish the proof.
In the red and black game, set the target fortune to 16, the initial fortune to 8, and the game win probability to 0.45. Define the strategy of your choice and play 100 games. Compare your relative frequency of wins with the probability of winning with bold play.
Other Optimal Strategies in the Sub-Fair Case
Consider again the sub-fair case where $p \le \frac{1}{2}$ so that the trials are not favorable to the gambler. We will show that bold play is not the only optimal strategy; amazingly, there are infinitely many optimal strategies. Recall first that the bold strategy has betting function $S_1(x) = \min\{x, 1 - x\} = \begin{cases} x, & 0 \le x \le \frac{1}{2} \ 1 - x, & \frac{1}{2} \le x \le 1 \end{cases}$
Consider the following strategy, which we will refer to as the second order bold strategy:
1. With fortune $x \in \left(0, \frac{1}{2}\right)$, play boldly with the object of reaching $\frac{1}{2}$ before falling to 0.
2. With fortune $x \in \left(\frac{1}{2}, 1\right)$, play boldly with the object of reaching 1 without falling below $\frac{1}{2}$.
3. With fortune $\frac{1}{2}$, play boldly and bet $\frac{1}{2}$
The second order bold strategy has betting function $S_2$ given by $S_2(x) = \begin{cases} x, & 0 \le x \lt \frac{1}{4} \ \frac{1}{2} - x, & \frac{1}{4} \le x \lt \frac{1}{2} \ \frac{1}{2}, & x = \frac{1}{2} \ x - \frac{1}{2}, & \frac{1}{2} \lt x \le \frac{3}{4} \ 1 - x, & \frac{3}{4} \lt x \le 1 \end{cases}$
The second order bold strategy is optimal.
Proof
Let $F_2$ denote the success function associated with strategy $S_2$. Suppose first that the player starts with fortune $x \in (0, \frac{1}{2})$ under strategy $S_2$. Note that the player reaches the target 1 if and only if she reaches $\frac{1}{2}$ and then wins the final game. Consider the sequence of fortunes until the player reaches 0 or $\frac{1}{2}$. If we double the fortunes, then we have the fortune sequence under the ordinary bold strategy, starting at $2\,x$ and terminating at either 0 or 1. Thus it follows that $F_2(x) = p F(2 x), \quad 0 \lt x \lt \frac{1}{2}$ Suppose next that the player starts with fortune $x \in (\frac{1}{2}, 1)$ under strategy $S_2$. Note that the player reaches the target 1 if and only if she reaches 1 without falling back to $\frac{1}{2}$ or falls back to $\frac{1}{2}$ and then wins the final game. Consider the sequence of fortunes until the player reaches $\frac{1}{2}$ or 1. If we double the fortunes and subtract 1, then we have the fortune sequence under the ordinary bold strategy, starting at $2 x - 1$ and terminating at either 0 or 1. Thus it follows that $F_2(x) = F(2 x - 1) + [1 - F(2 x - 1)] p = p + q F(2 x - 1), \quad \frac{1}{2} \lt x \lt 1$ But now, using the functional equation for ordinary bold play, we have $F_2(x) = F(x)$ for all $x \in (0, 1]$, and hence $S_2$ is optimal.
Once we understand how this construction is done, it's straightforward to define the third order bold strategy and show that it's optimal as well.
Explicitly give the third order betting function and show that the strategy is optimal.
More generally, we can define the $n$th order bold strategy and show that it is optimal as well.
The sequence of bold strategies can be defined recursively from the basic bold strategy $S_1$ as follows:
$S_{n+1}(x) = \begin{cases} \frac{1}{2} S_n(2 x), & 0 \le x \lt \frac{1}{2} \ \frac{1}{2}, & x = \frac{1}{2} \ \frac{1}{2} S_n(2 x - 1), & \frac{1}{2} \lt x \le 1 \end{cases}$
$S_n$ is optimal for each $n$.
Even more generally, we can define an optimal strategy $T$ in the following way: for each $x \in [0, 1]$ select $n_x \in \N_+$ and let $T(x) = S_{n_x}(x)$. The graph below shows a few of the graphs of the bold strategies. For an optimal strategy $T$, we just need to select, for each $x$ a bet on one of the graphs.
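The higher order bold betting functions are easy to compute from the recursion; since the graph referenced above is not reproduced here, the following Python sketch (illustrative names) implements $S_n$ directly and checks two values of $S_2$ against the piecewise formula given earlier.

```python
def bold_bet(x):
    """S_1: the basic bold betting function."""
    return min(x, 1 - x)

def bold_bet_n(x, n):
    """S_n, computed directly from the recursion above."""
    if n == 1:
        return bold_bet(x)
    if x < 0.5:
        return 0.5 * bold_bet_n(2 * x, n - 1)
    if x == 0.5:
        return 0.5
    return 0.5 * bold_bet_n(2 * x - 1, n - 1)

print(round(bold_bet_n(0.3, 2), 10))   # 0.2, which is 1/2 - 0.3
print(round(bold_bet_n(0.7, 2), 10))   # 0.2, which is 0.7 - 1/2
```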
Martingales
Let's return to the unscaled formulation of red and black, where the target fortune is $a \in (0, \infty)$ and the initial fortune is $x \in (0, a)$. In the subfair case, when $p \le \frac{1}{2}$, no strategy can do better than the optimal strategies, so that the win probability is bounded by $x / a$ (with equality when $p = \frac{1}{2}$). Another elegant proof of this is given in the section on inequalities in the chapter on martingales.
The Poisson process is one of the most important random processes in probability theory. It is widely used to model random points in time and space, such as the times of radioactive emissions, the arrival times of customers at a service center, and the positions of flaws in a piece of material. Several important probability distributions arise naturally from the Poisson process—the Poisson distribution, the exponential distribution, and the gamma distribution. The process has a beautiful mathematical structure, and is used as a foundation for building a number of other, more complicated random processes.
14: The Poisson Process
$\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$
The Poisson Model
We will consider a process in which points occur randomly in time. The phrase points in time is generic and could represent, for example:
• The times when a sample of radioactive material emits particles
• The times when customers arrive at a service station
• The times when file requests arrive at a server computer
• The times when accidents occur at a particular intersection
• The times when a device fails and is replaced by a new device
It turns out that under some basic assumptions that deal with independence and uniformity in time, a single, one-parameter probability model governs all such random processes. This is an amazing result, and because of it, the Poisson process (named after Simeon Poisson) is one of the most important in probability theory.
Run the Poisson experiment with the default settings in single step mode. Note the random points in time.
Random Variables
There are three collections of random variables that can be used to describe the process. First, let $X_1$ denote the time of the first arrival, and $X_i$ the time between the $(i - 1)$st and $i$th arrival for $i \in \{2, 3, \ldots\}$. Thus, $\bs{X} = (X_1, X_2, \ldots)$ is the sequence of inter-arrival times. Next, let $T_n$ denote the time of the $n$th arrival for $n \in \N_+$. It will be convenient to define $T_0 = 0$, although we do not consider this as an arrival. Thus $\bs{T} = (T_0, T_1, \ldots)$ is the sequence of arrival times. Clearly $\bs{T}$ is the partial sum process associated with $\bs{X}$, and so in particular each sequence determines the other: \begin{align} T_n & = \sum_{i=1}^n X_i, \quad n \in \N \ X_n & = T_n - T_{n-1}, \quad n \in \N_+ \end{align} Next, let $N_t$ denote the number of arrivals in $(0, t]$ for $t \in [0, \infty)$. The random process $\bs{N} = (N_t: t \ge 0)$ is the counting process. The arrival time process $\bs{T}$ and the counting process $\bs{N}$ are inverses of one another in a sense, and in particular each process determines the other: \begin{align} T_n & = \min\{t \ge 0: N_t = n\}, \quad n \in \N \ N_t & = \max\{n \in \N: T_n \le t\}, \quad t \in [0, \infty) \end{align} Note also that $N_t \ge n$ if and only if $T_n \le t$ for $n \in \N$ and $t \in [0, \infty)$ since each of these events means that there are at least $n$ arrivals in the interval $(0, t]$.
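The inverse relationship between the arrival times and the counting process is easy to express in code. Here is a minimal Python sketch (the arrival times are hypothetical, chosen only for illustration) of $N_t = \max\{n: T_n \le t\}$ and $T_n$ for a given list of arrival times.

```python
import bisect

def N(t, arrival_times):
    """N_t: the number of arrivals in (0, t], for a sorted list of arrival times."""
    return bisect.bisect_right(arrival_times, t)

def T(n, arrival_times):
    """T_n: the time of the n-th arrival, with T_0 = 0."""
    return 0.0 if n == 0 else arrival_times[n - 1]

arrivals = [0.7, 1.9, 2.3, 4.1]                      # hypothetical arrival times T_1 < T_2 < ...
print(N(2.3, arrivals), T(3, arrivals))              # 3 arrivals in (0, 2.3]; T_3 = 2.3
print(N(2.3, arrivals) >= 3, T(3, arrivals) <= 2.3)  # the equivalence {N_t >= n} = {T_n <= t}
```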
Sometimes it will be helpful to extend the notation of the counting process. For $A \subseteq [0, \infty)$ (measurable of course), let $N(A)$ denote the number of arrivals in $A$: $N(A) = \#\{n \in \N_+: T_n \in A\} = \sum_{n=1}^\infty \bs{1}(T_n \in A)$ Thus, $A \mapsto N(A)$ is the counting measure associated with the random points $(T_1, T_2, \ldots)$, so in particular it is a random measure. For our original counting process, note that $N_t = N(0, t]$ for $t \ge 0$. Thus, $t \mapsto N_t$ is a (random) distribution function, and $A \mapsto N(A)$ is the (random) measure associated with this distribution function.
The Basic Assumption
The assumption that we will make can be described intuitively (but imprecisely) as follows: If we fix a time $t$, whether constant or one of the arrival times, then the process after time $t$ is independent of the process before time $t$ and behaves probabilistically just like the original process. Thus, the random process has a strong renewal property. Making the strong renewal assumption precise will enable us to completely specify the probabilistic behavior of the process, up to a single, positive parameter.
Think about the strong renewal assumption for each of the specific applications given above.
Run the Poisson experiment with the default settings in single step mode. See if you can detect the strong renewal assumption.
As a first step, note that part of the renewal assumption, namely that the process restarts at each arrival time, independently of the past, implies the following result:
The sequence of inter-arrival times $\bs{X}$ is an independent, identically distributed sequence
Proof
Note that $X_2$ is the first arrival time after $T_1 = X_1$, so $X_2$ must be independent of $X_1$ and have the same distribution. Similarly $X_3$ is the first arrival time after $T_2 = X_1 + X_2$, so $X_3$ must be independent of $X_1$ and $X_2$ and have the same distribution as $X_1$. Continuing this argument, $\bs{X}$ must be an independent, identically distributed sequence.
A model of random points in time in which the inter-arrival times are independent and identically distributed (so that the process restarts at each arrival time) is known as a renewal process. A separate chapter explores Renewal Processes in detail. Thus, the Poisson process is a renewal process, but a very special one, because we also require that the renewal assumption hold at fixed times.
Analogy with Bernoulli Trials
In some sense, the Poisson process is a continuous time version of the Bernoulli trials process. To see this, suppose that we have a Bernoulli trials process with success parameter $p \in (0, 1)$, and that we think of each success as a random point in discrete time. Then this process, like the Poisson process (and in fact any renewal process) is completely determined by the sequence of inter-arrival times $\bs{X} = (X_1, X_2, \ldots)$ (in this case, the number of trials between successive successes), the sequence of arrival times $\bs{T} = (T_0, T_1, \ldots)$ (in this case, the trial numbers of the successes), and the counting process $(N_t: t \in \N)$ (in this case, the number of successes in the first $t$ trials). Also like the Poisson process, the Bernoulli trials process has the strong renewal property: at each fixed time and at each arrival time, the process starts over independently of the past. But of course, time is discrete in the Bernoulli trials model and continuous in the Poisson model. The Bernoulli trials process can be characterized in terms of each of the three sets of random variables.
Each of the following statements characterizes the Bernoulli trials process with success parameter $p \in (0, 1)$:
1. The inter-arrival time sequence $\bs{X}$ is a sequence of independent variables, and each has the geometric distribution on $\N_+$ with success parameter $p$.
2. The arrival time sequence $\bs{T}$ has stationary, independent increments, and for $n \in \N_+$, $T_n$ has the negative binomial distribution with stopping parameter $n$ and success parameter $p$
3. The counting process $\bs{N}$ has stationary, independent increments, and for $t \in \N$, $N_t$ has the binomial distribution with trial parameter $t$ and success parameter $p$.
Run the binomial experiment with $n = 50$ and $p = 0.1$. Note the random points in discrete time.
Run the Poisson experiment with $t = 5$ and $r = 1$. Note the random points in continuous time and compare with the behavior in the previous exercise.
As we develop the theory of the Poisson process we will frequently refer back to the analogy with Bernoulli trials. In particular, we will show that if we run the Bernoulli trials at a faster and faster rate but with a smaller and smaller success probability, in just the right way, the Bernoulli trials process converges to the Poisson process.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
The Memoryless Property
Recall that in the basic model of the Poisson process, we have points that occur randomly in time. The sequence of inter-arrival times is $\bs{X} = (X_1, X_2, \ldots)$. The strong renewal assumption states that at each arrival time and at each fixed time, the process must probabilistically restart, independent of the past. The first part of that assumption implies that $\bs{X}$ is a sequence of independent, identically distributed variables. The second part of the assumption implies that if the first arrival has not occurred by time $s$, then the time remaining until the arrival occurs must have the same distribution as the first arrival time itself. This is known as the memoryless property and can be stated in terms of a general random variable as follows:
Suppose that $X$ takes values in $[0, \infty)$. Then $X$ has the memoryless property if the conditional distribution of $X - s$ given $X \gt s$ is the same as the distribution of $X$ for every $s \in [0, \infty)$. Equivalently, $\P(X \gt t + s \mid X \gt s) = \P(X \gt t), \quad s, \; t \in [0, \infty)$
The memoryless property determines the distribution of $X$ up to a positive parameter, as we will see now.
Distribution functions
Suppose that $X$ takes values in $[0, \infty)$ and satisfies the memoryless property.
$X$ has a continuous distribution and there exists $r \in (0, \infty)$ such that the distribution function $F$ of $X$ is $F(t) = 1 - e^{-r\,t}, \quad t \in [0, \infty)$
Proof
Let $F^c = 1 - F$ denote the right-tail distribution function of $X$ (also known as the reliability function), so that $F^c(t) = \P(X \gt t)$ for $t \ge 0$. From the definition of conditional probability, the memoryless property is equivalent to the law of exponents: $F^c(t + s) = F^c(s) F^c(t), \quad s, \; t \in [0, \infty)$ Let $a = F^c(1)$. Implicit in the memoryless property is $\P(X \gt t) \gt 0$ for $t \in [0, \infty)$, so $a \gt 0$. If $n \in \N_+$ then $F^c(n) = F^c\left(\sum_{i=1}^n 1\right) = \prod_{i=1}^n F^c(1) = \left[F^c(1)\right]^n = a^n$ Next, if $n \in \N_+$ then $a = F^c(1) = F^c\left(\frac{n}{n}\right) = F^c\left(\sum_{i=1}^n \frac{1}{n}\right) = \prod_{i=1}^n F^c\left(\frac{1}{n}\right) = \left[F^c\left(\frac{1}{n}\right)\right]^n$ so $F^c\left(\frac{1}{n}\right) = a^{1/n}$. Now suppose that $m \in \N$ and $n \in \N_+$. Then $F^c\left(\frac{m}{n}\right) = F^c\left(\sum_{i=1}^m \frac{1}{n}\right) = \prod_{i=1}^m F^c\left(\frac{1}{n}\right) = \left[F^c\left(\frac{1}{n}\right)\right]^m = a^{m/n}$ Thus we have $F^c(q) = a^q$ for rational $q \in [0, \infty)$. For $t \in [0, \infty)$, there exists a sequence of rational numbers $(q_1, q_2, \ldots)$ with $q_n \downarrow t$ as $n \uparrow \infty$. We have $F^c(q_n) = a^{q_n}$ for each $n \in \N_+$. But $F^c$ is continuous from the right, so taking limits gives $a^t = F^c(t)$. Now let $r = -\ln(a)$. Then $F^c(t) = e^{-r\,t}$ for $t \in [0, \infty)$.
The probability density function of $X$ is $f(t) = r \, e^{-r\,t}, \quad t \in [0, \infty)$
1. $f$ is decreasing on $[0, \infty)$.
2. $f$ is concave upward on $[0, \infty)$.
3. $f(t) \to 0$ as $t \to \infty$.
Proof
This follows since $f = F^\prime$. The properties in parts (a)–(c) are simple.
A random variable with the distribution function above or equivalently the probability density function in the last theorem is said to have the exponential distribution with rate parameter $r$. The reciprocal $\frac{1}{r}$ is known as the scale parameter (as will be justified below). Note that the mode of the distribution is 0, regardless of the parameter $r$, so the mode is not very helpful as a measure of center.
In the gamma experiment, set $n = 1$ so that the simulated random variable has an exponential distribution. Vary $r$ with the scroll bar and watch how the shape of the probability density function changes. For selected values of $r$, run the experiment 1000 times and compare the empirical density function to the probability density function.
The quantile function of $X$ is $F^{-1}(p) = \frac{-\ln(1 - p)}{r}, \quad p \in [0, 1)$
1. The median of $X$ is $\frac{1}{r} \ln(2) \approx 0.6931 \frac{1}{r}$
2. The first quartile of $X$ is $\frac{1}{r}[\ln(4) - \ln(3)] \approx 0.2877 \frac{1}{r}$
3. The third quartile $X$ is $\frac{1}{r} \ln(4) \approx 1.3863 \frac{1}{r}$
4. The interquartile range is $\frac{1}{r} \ln(3) \approx 1.0986 \frac{1}{r}$
Proof
The formula for $F^{-1}$ follows easily from solving $p = F(t)$ for $t$ in terms of $p$.
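The quantile function is also the standard way to simulate the exponential distribution: if $U$ is uniformly distributed on $(0, 1)$ then $F^{-1}(U) = -\ln(1 - U) / r$ has the exponential distribution with rate $r$. A minimal Python sketch (parameter values illustrative) follows.

```python
import math
import random

def exp_quantile(p, r):
    """F^{-1}(p) = -ln(1 - p) / r."""
    return -math.log(1 - p) / r

def exp_sample(r, rng=random.random):
    """Inverse transform sampling: apply the quantile function to a uniform variable."""
    return exp_quantile(rng(), r)

r = 2.0
print(exp_quantile(0.5, r), math.log(2) / r)        # the median, computed two ways
sample = sorted(exp_sample(r) for _ in range(100_000))
print(sample[len(sample) // 2])                     # empirical median, close to ln(2)/r
```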
In the special distribution calculator, select the exponential distribution. Vary the scale parameter (which is $1/r$) and note the shape of the distribution/quantile function. For selected values of the parameter, compute a few values of the distribution function and the quantile function.
Returning to the Poisson model, we have our first formal definition:
A process of random points in time is a Poisson process with rate $r \in (0, \infty)$ if and only if the inter-arrival times are independent, and each has the exponential distribution with rate $r$.
Constant Failure Rate
Suppose now that $X$ has a continuous distribution on $[0, \infty)$ and is interpreted as the lifetime of a device. If $F$ denotes the distribution function of $X$, then $F^c = 1 - F$ is the reliability function of $X$. If $f$ denotes the probability density function of $X$ then the failure rate function $h$ is given by $h(t) = \frac{f(t)}{F^c(t)}, \quad t \in [0, \infty)$ If $X$ has the exponential distribution with rate $r \gt 0$, then from the results above, the reliability function is $F^c(t) = e^{-r t}$ and the probability density function is $f(t) = r e^{-r t}$, so trivially $X$ has constant rate $r$. The converse is also true.
If $X$ has constant failure rate $r \gt 0$ then $X$ has the exponential distribution with parameter $r$.
Proof
Recall that in general, the distribution of a lifetime variable $X$ is determined by the failure rate function $h$. Specifically, if $F^c = 1 - F$ denotes the reliability function, then $(F^c)^\prime = -f$, so $-h = (F^c)^\prime / F^c$. Integrating and then taking exponentials gives $F^c(t) = \exp\left(-\int_0^t h(s) \, ds\right), \quad t \in [0, \infty)$ In particular, if $h(t) = r$ for $t \in [0, \infty)$, then $F^c(t) = e^{-r t}$ for $t \in [0, \infty)$.
The memoryless and constant failure rate properties are the most famous characterizations of the exponential distribution, but are by no means the only ones. Indeed, entire books have been written on characterizations of this distribution.
Moments
Suppose again that $X$ has the exponential distribution with rate parameter $r \gt 0$. Naturally, we want to know the mean, variance, and various other moments of $X$.
If $n \in \N$ then $\E\left(X^n\right) = n! \big/ r^n$.
Proof
By the change of variables theorem for expected value, $\E\left(X^n\right) = \int_0^\infty t^n r e^{-r\,t} \, dt$ Integrating by parts gives $\E\left(X^n\right) = \frac{n}{r} \E\left(X^{n-1}\right)$ for $n \in \N_+$. Of course $\E\left(X^0\right) = 1$ so the result now follows by induction.
More generally, $\E\left(X^a\right) = \Gamma(a + 1) \big/ r^a$ for every $a \in [0, \infty)$, where $\Gamma$ is the gamma function.
In particular.
1. $\E(X) = \frac{1}{r}$
2. $\var(X) = \frac{1}{r^2}$
3. $\skw(X) = 2$
4. $\kur(X) = 9$
In the context of the Poisson process, the parameter $r$ is known as the rate of the process. On average, there are $1 / r$ time units between arrivals, so the arrivals come at an average rate of $r$ per unit time. The Poisson process is completely determined by the sequence of inter-arrival times, and hence is completely determined by the rate $r$.
Note also that the mean and standard deviation are equal for an exponential distribution, and that the median is always smaller than the mean. Recall also that skewness and kurtosis are standardized measures, and so do not depend on the parameter $r$ (which is the reciprocal of the scale parameter).
The moment generating function of $X$ is $M(s) = \E\left(e^{s X}\right) = \frac{r}{r - s}, \quad s \in (-\infty, r)$
Proof
By the change of variables theorem $M(s) = \int_0^\infty e^{s t} r e^{-r t} \, dt = \int_0^\infty r e^{(s - r)t} \, dt$ The integral evaluates to $\frac{r}{r - s}$ if $s \lt r$ and to $\infty$ if $s \ge r$.
In the gamma experiment, set $n = 1$ so that the simulated random variable has an exponential distribution. Vary $r$ with the scroll bar and watch how the mean$\pm$standard deviation bar changes. For various values of $r$, run the experiment 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation, respectively.
Additional Properties
The exponential distribution has a number of interesting and important mathematical properties. First, and not surprisingly, it's a member of the general exponential family.
Suppose that $X$ has the exponential distribution with rate parameter $r \in (0, \infty)$. Then $X$ has a one parameter general exponential distribution, with natural parameter $-r$ and natural statistic $X$.
Proof
This follows directly from the form of the PDF, $f(x) = r e^{-r x}$ for $x \in [0, \infty)$, and the definition of the general exponential family.
The Scaling Property
As suggested earlier, the exponential distribution is a scale family, and $1/r$ is the scale parameter.
Suppose that $X$ has the exponential distribution with rate parameter $r \gt 0$ and that $c \gt 0$. Then $c X$ has the exponential distribution with rate parameter $r / c$.
Proof
For $t \ge 0$, $\P(c\,X \gt t) = \P(X \gt t / c) = e^{-r (t / c)} = e^{-(r / c) t}$.
Recall that multiplying a random variable by a positive constant frequently corresponds to a change of units (minutes into hours for a lifetime variable, for example). Thus, the exponential distribution is preserved under such changes of units. In the context of the Poisson process, this has to be the case, since the memoryless property, which led to the exponential distribution in the first place, clearly does not depend on the time units.
In fact, the exponential distribution with rate parameter 1 is referred to as the standard exponential distribution. From the previous result, if $Z$ has the standard exponential distribution and $r \gt 0$, then $X = \frac{1}{r} Z$ has the exponential distribution with rate parameter $r$. Conversely, if $X$ has the exponential distribution with rate $r \gt 0$ then $Z = r X$ has the standard exponential distribution.
Similarly, the Poisson process with rate parameter 1 is referred to as the standard Poisson process. If $Z_i$ is the $i$th inter-arrival time for the standard Poisson process for $i \in \N_+$, then letting $X_i = \frac{1}{r} Z_i$ for $i \in \N_+$ gives the inter-arrival times for the Poisson process with rate $r$. Conversely if $X_i$ is the $i$th inter-arrival time of the Poisson process with rate $r \gt 0$ for $i \in \N_+$, then $Z_i = r X_i$ for $i \in \N_+$ gives the inter-arrival times for the standard Poisson process.
Relation to the Geometric Distribution
In many respects, the geometric distribution is a discrete version of the exponential distribution. In particular, recall that the geometric distribution on $\N_+$ is the only distribution on $\N_+$ with the memoryless and constant rate properties. So it is not surprising that the two distributions are also connected through various transformations and limits.
Suppose that $X$ has the exponential distribution with rate parameter $r \gt 0$. Then
1. $\lfloor X \rfloor$ has the geometric distribution on $\N$ with success parameter $1 - e^{-r}$.
2. $\lceil X \rceil$ has the geometric distribution on $\N_+$ with success parameter $1 - e^{-r}$.
Proof
1. For $n \in \N$ note that $\P(\lfloor X \rfloor = n) = \P(n \le X \lt n + 1) = F(n + 1) - F(n)$. Substituting into the distribution function and simplifying gives $\P(\lfloor X \rfloor = n) = (e^{-r})^n (1 - e^{-r})$.
2. For $n \in \N_+$ note that $\P(\lceil X \rceil = n) = \P(n - 1 \lt X \le n) = F(n) - F(n - 1)$. Substituting into the distribution function and simplifying gives $\P(\lceil X \rceil = n) = (e^{-r})^{n - 1} (1 - e^{-r})$.
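The first part of this result is easy to check by simulation. The following Python sketch (rate value and run count are illustrative) compares the empirical distribution of $\lfloor X \rfloor$ with the geometric probabilities $\left(e^{-r}\right)^n \left(1 - e^{-r}\right)$.

```python
import math
import random

r = 0.7
runs = 200_000
counts = {}
for _ in range(runs):
    k = math.floor(random.expovariate(r))            # floor of an exponential(r) variable
    counts[k] = counts.get(k, 0) + 1

for n in range(4):
    empirical = counts.get(n, 0) / runs
    exact = math.exp(-r) ** n * (1 - math.exp(-r))   # P(floor(X) = n)
    print(n, round(empirical, 4), round(exact, 4))
```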
The following connection between the two distributions is interesting by itself, but will also be very important in the section on splitting Poisson processes. In words, a random, geometrically distributed sum of independent, identically distributed exponential variables is itself exponential.
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent variables, each with the exponential distribution with rate $r$. Suppose that $U$ has the geometric distribution on $\N_+$ with success parameter $p$ and is independent of $\bs{X}$. Then $Y = \sum_{i=1}^U X_i$ has the exponential distribution with rate $r p$.
Proof
Recall that the moment generating function of $Y$ is $P \circ M$ where $M$ is the common moment generating function of the terms in the sum, and $P$ is the probability generating function of the number of terms $U$. But $M(s) = r \big/ (r - s)$ for $s \lt r$ and $P(s) = p s \big/ \left[1 - (1 - p)s\right]$ for $s \lt 1 \big/ (1 - p)$. Thus, $(P \circ M)(s) = \frac{p r \big/ (r - s)}{1 - (1 - p) r \big/ (r - s)} = \frac{pr}{pr - s}, \quad s \lt pr$ It follows that $Y$ has the exponential distribution with parameter $p r$
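This result is also easy to check by simulation; the Python sketch below (illustrative parameter values) builds a geometrically distributed sum of exponential terms and compares the sample mean with $1 / (r p)$, the mean of the exponential distribution with rate $r p$.

```python
import random

r, p = 2.0, 0.3
runs = 200_000

def geometric_sum():
    """Sum of U independent exponential(r) terms, with U geometric on N_+ with parameter p."""
    total = random.expovariate(r)        # U >= 1, so there is always at least one term
    while random.random() > p:           # with probability 1 - p, add another term
        total += random.expovariate(r)
    return total

sample_mean = sum(geometric_sum() for _ in range(runs)) / runs
print(sample_mean, 1 / (r * p))          # the two values should be close
```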
The next result explores the connection between the Bernoulli trials process and the Poisson process that was begun in the Introduction.
For $n \in \N_+$, suppose that $U_n$ has the geometric distribution on $\N_+$ with success parameter $p_n$, where $n p_n \to r \gt 0$ as $n \to \infty$. Then the distribution of $U_n / n$ converges to the exponential distribution with parameter $r$ as $n \to \infty$.
Proof
Let $F_n$ denote the CDF of $U_n / n$. Then for $x \in [0, \infty)$ $F_n(x) = \P\left(\frac{U_n}{n} \le x\right) = \P(U_n \le n x) = \P\left(U_n \le \lfloor n x \rfloor\right) = 1 - \left(1 - p_n\right)^{\lfloor n x \rfloor}$ But by a famous limit from calculus, $\left(1 - p_n\right)^n = \left(1 - \frac{n p_n}{n}\right)^n \to e^{-r}$ as $n \to \infty$, and hence $\left(1 - p_n\right)^{n x} \to e^{-r x}$ as $n \to \infty$. But by definition, $\lfloor n x \rfloor \le n x \lt \lfloor n x \rfloor + 1$ or equivalently, $n x - 1 \lt \lfloor n x \rfloor \le n x$ so it follows that $\left(1 - p_n \right)^{\lfloor n x \rfloor} \to e^{- r x}$ as $n \to \infty$. Hence $F_n(x) \to 1 - e^{-r x}$ as $n \to \infty$, which is the CDF of the exponential distribution.
To understand this result more clearly, suppose that we have a sequence of Bernoulli trials processes. In process $n$, we run the trials at a rate of $n$ per unit time, with probability of success $p_n$. Thus, the actual time of the first success in process $n$ is $U_n / n$. The last result shows that if $n p_n \to r \gt 0$ as $n \to \infty$, then the sequence of Bernoulli trials processes converges to the Poisson process with rate parameter $r$ as $n \to \infty$. We will return to this point in subsequent sections.
Orderings and Order Statistics
Suppose that $X$ and $Y$ have exponential distributions with parameters $a$ and $b$, respectively, and are independent. Then $\P(X \lt Y) = \frac{a}{a + b}$
Proof
This result can be proved in a straightforward way by integrating the joint PDF of $(X, Y)$ over $\{(x, y): 0 \lt x \lt y \lt \infty\}$. A more elegant proof uses conditioning and the moment generating function above: $\P(Y \gt X) = \E\left[\P(Y \gt X \mid X)\right] = \E\left(e^{-b X}\right) = \frac{a}{a + b}$
The following theorem gives an important random version of the memoryless property.
Suppose that $X$ and $Y$ are independent variables taking values in $[0, \infty)$ and that $Y$ has the exponential distribution with rate parameter $r \gt 0$. Then $X$ and $Y - X$ are conditionally independent given $X \lt Y$, and the conditional distribution of $Y - X$ is also exponential with parameter $r$.
Proof
Suppose that $A \subseteq [0, \infty)$ (measurable of course) and $t \ge 0$. Then $\P(X \in A, Y - X \ge t \mid X \lt Y) = \frac{\P(X \in A, Y - X \ge t)}{\P(X \lt Y)}$ But conditioning on $X$ we can write the numerator as $\P(X \in A, Y - X \gt t) = \E\left[\P(X \in A, Y - X \gt t \mid X)\right] = \E\left[\P(Y \gt X + t \mid X), X \in A\right] = \E\left[e^{-r(t + X)}, X \in A\right] = e^{-rt} \E\left(e^{-r\,X}, X \in A\right)$ Similarly, conditioning on $X$ gives $\P(X \lt Y) = \E\left(e^{-r\,X}\right)$. Thus $\P(X \in A, Y - X \gt t \mid X \lt Y) = e^{-r\,t} \frac{\E\left(e^{-r\,X}, X \in A\right)}{\E\left(e^{-rX}\right)}$ Letting $A = [0, \infty)$ we have $\P(Y \gt t) = e^{-r\,t}$ so given $X \lt Y$, the variable $Y - X$ has the exponential distribution with parameter $r$. Letting $t = 0$, we see that given $X \lt Y$, variable $X$ has the distribution $A \mapsto \frac{\E\left(e^{-r\,X}, X \in A\right)}{\E\left(e^{-r\,X}\right)}$ Finally, because of the factoring, $X$ and $Y - X$ are conditionally independent given $X \lt Y$.
For our next discussion, suppose that $\bs{X} = (X_1, X_2, \ldots, X_n)$ is a sequence of independent random variables, and that $X_i$ has the exponential distribution with rate parameter $r_i \gt 0$ for each $i \in \{1, 2, \ldots, n\}$.
Let $U = \min\{X_1, X_2, \ldots, X_n\}$. Then $U$ has the exponential distribution with parameter $\sum_{i=1}^n r_i$.
Proof
Recall that in general, $\{U \gt t\} = \{X_1 \gt t, X_2 \gt t, \ldots, X_n \gt t\}$ and therefore by independence, $F^c(t) = F^c_1(t) F^c_2(t) \cdots F^c_n(t)$ for $t \ge 0$, where $F^c$ is the reliability function of $U$ and $F^c_i$ is the reliability function of $X_i$ for each $i$. When $X_i$ has the exponential distribution with rate $r_i$ for each $i$, we have $F^c(t) = \exp\left[-\left(\sum_{i=1}^n r_i\right) t\right]$ for $t \ge 0$.
In the context of reliability, if a series system has independent components, each with an exponentially distributed lifetime, then the lifetime of the system is also exponentially distributed, and the failure rate of the system is the sum of the component failure rates. In the context of random processes, if we have $n$ independent Poisson process, then the new process obtained by combining the random points in time is also Poisson, and the rate of the new process is the sum of the rates of the individual processes (we will return to this point latter).
Let $V = \max\{X_1, X_2, \ldots, X_n\}$. Then $V$ has distribution function $F$ given by $F(t) = \prod_{i=1}^n \left(1 - e^{-r_i t}\right), \quad t \in [0, \infty)$
Proof
Recall that in general, $\{V \le t\} = \{X_1 \le t, X_2 \le t, \ldots, X_n \le t\}$ and therefore by independence, $F(t) = F_1(t) F_2(t) \cdots F_n(t)$ for $t \ge 0$, where $F$ is the distribution function of $V$ and $F_i$ is the distribution function of $X_i$ for each $i$.
Consider the special case where $r_i = r \in (0, \infty)$ for each $i \in \N_+$. In statistical terms, $\bs{X}$ is a random sample of size $n$ from the exponential distribution with parameter $r$. From the last couple of theorems, the minimum $U$ has the exponential distribution with rate $n r$ while the maximum $V$ has distribution function $F(t) = \left(1 - e^{-r t}\right)^n$ for $t \in [0, \infty)$. Recall that $U$ and $V$ are the first and last order statistics, respectively.
In the order statistic experiment, select the exponential distribution.
1. Set $k = 1$ (this gives the minimum $U$). Vary $n$ with the scroll bar and note the shape of the probability density function. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
2. Vary $n$ with the scroll bar, set $k = n$ each time (this gives the maximum $V$), and note the shape of the probability density function. For selected values of $n$, run the simulation 1000 times and compare the empirical density function to the true probability density function.
Curiously, the distribution of the maximum of independent, identically distributed exponential variables is also the distribution of the sum of independent exponential variables, with rates that grow linearly with the index.
Suppose that $r_i = i r$ for each $i \in \{1, 2, \ldots, n\}$ where $r \in (0, \infty)$. Then $Y = \sum_{i=1}^n X_i$ has distribution function $F$ given by $F(t) = (1 - e^{-r t})^n, \quad t \in [0, \infty)$
Proof
By assumption, $X_k$ has PDF $f_k$ given by $f_k(t) = k r e^{-k r t}$ for $t \in [0, \infty)$. We want to show that $Y_n = \sum_{i=1}^n X_i$ has PDF $g_n$ given by $g_n(t) = n r e^{-r t} (1 - e^{-r t})^{n-1}, \quad t \in [0, \infty)$ The PDF of a sum of independent variables is the convolution of the individual PDFs, so we want to show that $f_1 * f_2 * \cdots * f_n = g_n, \quad n \in \N_+$ The proof is by induction on $n$. Trivially $f_1 = g_1$, so suppose the result holds for a given $n \in \N_+$. Then \begin{align*} g_n * f_{n+1}(t) & = \int_0^t g_n(s) f_{n+1}(t - s) ds = \int_0^t n r e^{-r s}(1 - e^{-r s})^{n-1} (n + 1) r e^{-r (n + 1) (t - s)} ds \ & = r (n + 1) e^{-r(n + 1)t} \int_0^t n(1 - e^{-rs})^{n-1} r e^{r n s} ds \end{align*} Now substitute $u = e^{r s}$ so that $du = r e^{r s} ds$ or equivalently $r ds = du / u$. After some algebra, \begin{align*} g_n * f_{n+1}(t) & = r (n + 1) e^{-r (n + 1)t} \int_1^{e^{rt}} n (u - 1)^{n-1} du \ & = r(n + 1) e^{-r(n + 1) t}(e^{rt} - 1)^n = r(n + 1)e^{-rt}(1 - e^{-rt})^n = g_{n+1}(t) \end{align*}
This result has an application to the Yule process, named for George Yule. The Yule process, which has some parallels with the Poisson process, is studied in the chapter on Markov processes. We can now generalize the order probability above:
For $i \in \{1, 2, \ldots, n\}$, $\P\left(X_i \lt X_j \text{ for all } j \ne i\right) = \frac{r_i}{\sum_{j=1}^n r_j}$
Proof
First, note that $X_i \lt X_j$ for all $j \ne i$ if and only if $X_i \lt \min\{X_j: j \ne i\}$. But the minimum on the right is independent of $X_i$ and, by the result on minimums above, has the exponential distribution with parameter $\sum_{j \ne i} r_j$. The result now follows from the order probability for two variables above.
Suppose that for each $i$, $X_i$ is the time until an event of interest occurs (the arrival of a customer, the failure of a device, etc.) and that these times are independent and exponentially distributed. Then the first time $U$ that one of the events occurs is also exponentially distributed, and the probability that the first event to occur is event $i$ is proportional to the rate $r_i$.
The probability of a total ordering is $\P(X_1 \lt X_2 \lt \cdots \lt X_n) = \prod_{i=1}^n \frac{r_i}{\sum_{j=i}^n r_j}$
Proof
Let $A = \left\{X_1 \lt X_j \text{ for all } j \in \{2, 3, \ldots, n\}\right\}$. Then $\P(X_1 \lt X_2 \lt \cdots \lt X_n) = \P(A, X_2 \lt X_3 \lt \cdots \lt X_n) = \P(A) \P(X_2 \lt X_3 \lt \cdots \lt X_n \mid A)$ But $\P(A) = \frac{r_1}{\sum_{i=1}^n r_i}$ from the previous result, and $\{X_2 \lt X_3 \lt \cdots \lt X_n\}$ is independent of $A$. Thus we have $\P(X_1 \lt X_2 \lt \cdots \lt X_n) = \frac{r_1}{\sum_{i=1}^n r_i} \P(X_2 \lt X_3 \lt \cdots \lt X_n)$ so the result follows by induction.
Of course, the probabilities of other orderings can be computed by permuting the parameters appropriately in the formula on the right.
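A short simulation makes the ordering formulas concrete. The sketch below (NumPy assumed; the three rates are arbitrary choices) estimates $\P(X_1 \lt X_2 \lt X_3)$ and compares it with the product formula.

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.array([1.0, 2.0, 3.0])                # arbitrary rates r_1, r_2, r_3
reps = 200_000

x = rng.exponential(scale=1/r, size=(reps, 3))
empirical = np.mean((x[:, 0] < x[:, 1]) & (x[:, 1] < x[:, 2]))
theoretical = (r[0] / r.sum()) * (r[1] / (r[1] + r[2]))
print(empirical, theoretical)
```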
The result on minimums and the order probability result above are very important in the theory of continuous-time Markov chains. But for that application and others, it's convenient to extend the exponential distribution to two degenerate cases: point mass at 0 and point mass at $\infty$ (so the first is the distribution of a random variable that takes the value 0 with probability 1, and the second the distribution of a random variable that takes the value $\infty$ with probability 1). In terms of the rate parameter $r$ and the distribution function $F$, point mass at 0 corresponds to $r = \infty$ so that $F(t) = 1$ for $0 \lt t \lt \infty$. Point mass at $\infty$ corresponds to $r = 0$ so that $F(t) = 0$ for $0 \lt t \lt \infty$. The memoryless property, as expressed in terms of the reliability function $F^c$, still holds for these degenerate cases on $(0, \infty)$: $F^c(s) F^c(t) = F^c(s + t), \quad s, \, t \in (0, \infty)$ We also need to extend some of the results above for a finite number of variables to a countably infinite number of variables. So for the remainder of this discussion, suppose that $\{X_i: i \in I\}$ is a countable collection of independent random variables, and that $X_i$ has the exponential distribution with parameter $r_i \in (0, \infty)$ for each $i \in I$.
Let $U = \inf\{X_i: i \in I\}$. Then $U$ has the exponential distribution with parameter $\sum_{i \in I} r_i$
Proof
The proof is almost the same as the one above for a finite collection. Note that $\{U \ge t\} = \{X_i \ge t \text{ for all } i \in I\}$ and so $\P(U \ge t) = \prod_{i \in I} \P(X_i \ge t) = \prod_{i \in I} e^{-r_i t} = \exp\left[-\left(\sum_{i \in I} r_i\right)t \right]$ If $\sum_{i \in I} r_i \lt \infty$ then $U$ has a proper exponential distribution with the sum as the parameter. If $\sum_{i \in I} r_i = \infty$ then $\P(U \ge t) = 0$ for all $t \in (0, \infty)$ so $\P(U = 0) = 1$.
For $i \in I$, $\P\left(X_i \lt X_j \text{ for all } j \in I - \{i\}\right) = \frac{r_i}{\sum_{j \in I} r_j}$
Proof
First note that since the variables have continuous distributions and $I$ is countable, $\P\left(X_i \lt X_j \text{ for all } j \in I - \{i\} \right) = \P\left(X_i \le X_j \text{ for all } j \in I - \{i\}\right)$ Next note that $X_i \le X_j$ for all $j \in I - \{i\}$ if and only if $X_i \le U_i$ where $U_i = \inf\left\{X_j: j \in I - \{i\}\right\}$. But $U_i$ is independent of $X_i$ and, by the previous result, has the exponential distribution with parameter $s_i = \sum_{j \in I - \{i\}} r_j$. If $s_i = \infty$, then $U_i$ is 0 with probability 1, and so $\P(X_i \le U_i) = 0 = r_i / s_i$. If $s_i \lt \infty$, then $X_i$ and $U_i$ have proper exponential distributions, and so the result now follows from the order probability for two variables above.
We need one last result in this setting: a condition that ensures that the sum of an infinite collection of exponential variables is finite with probability one.
Let $Y = \sum_{i \in I} X_i$ and $\mu = \sum_{i \in I} 1 / r_i$. Then $\mu = \E(Y)$ and $\P(Y \lt \infty) = 1$ if and only if $\mu \lt \infty$.
Proof
The result is trivial if $I$ is finite, so assume that $I = \N_+$. Recall that $\E(X_i) = 1 / r_i$ and hence $\mu = \E(Y)$. Trivially if $\mu \lt \infty$ then $\P(Y \lt \infty) = 1$. Conversely, suppose that $\P(Y \lt \infty) = 1$. Then $\P(e^{-Y} \gt 0) = 1$ and hence $\E(e^{-Y}) \gt 0$. Using independence and the moment generating function above, $\E(e^{-Y}) = \E\left(\prod_{i=1}^\infty e^{-X_i}\right) = \prod_{i=1}^\infty \E(e^{-X_i}) = \prod_{i=1}^\infty \frac{r_i}{r_i + 1} \gt 0$ Next recall that if $p_i \in (0, 1)$ for $i \in \N_+$ then $\prod_{i=1}^\infty p_i \gt 0 \text{ if and only if } \sum_{i=1}^\infty (1 - p_i) \lt \infty$ Hence it follows that $\sum_{i=1}^\infty \left(1 - \frac{r_i}{r_i + 1}\right) = \sum_{i=1}^\infty \frac{1}{r_i + 1} \lt \infty$ In particular, this means that $1/(r_i + 1) \to 0$ as $i \to \infty$ and hence $r_i \to \infty$ as $i \to \infty$. But then $\frac{1/(r_i + 1)}{1/r_i} = \frac{r_i}{r_i + 1} \to 1 \text{ as } i \to \infty$ By the comparison test for infinite series, it follows that $\mu = \sum_{i=1}^\infty \frac{1}{r_i} \lt \infty$
Computational Exercises
Show directly that the exponential probability density function is a valid probability density function.
Solution
Clearly $f(t) = r e^{-r t} \gt 0$ for $t \in [0, \infty)$. Simple integration shows that $\int_0^\infty r e^{-r t} \, dt = 1$
Suppose that the length of a telephone call (in minutes) is exponentially distributed with rate parameter $r = 0.2$. Find each of the following:
1. The probability that the call lasts between 2 and 7 minutes.
2. The median, the first and third quartiles, and the interquartile range of the call length.
Answer
Let $X$ denote the call length.
1. $\P(2 \lt X \lt 7) = 0.4237$
2. $q_1 = 1.4384$, $q_2 = 3.4657$, $q_3 = 6.9315$, $q_3 - q_1 = 5.4931$
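Computations of this kind are easy to reproduce with standard software. A minimal SciPy sketch for this exercise (the object name is ours):

```python
from scipy.stats import expon

X = expon(scale=1/0.2)                       # exponential distribution with rate r = 0.2
print(X.cdf(7) - X.cdf(2))                   # P(2 < X < 7)
q1, q2, q3 = X.ppf([0.25, 0.5, 0.75])        # quartiles
print(q1, q2, q3, q3 - q1)                   # and the interquartile range
```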
Suppose that the lifetime of a certain electronic component (in hours) is exponentially distributed with rate parameter $r = 0.001$. Find each of the following:
1. The probability that the component lasts at least 2000 hours.
2. The median, the first and third quartiles, and the interquartile range of the lifetime.
Answer
Let $T$ denote the lifetime
1. $\P(T \ge 2000) = 0.1353$
2. $q_1 = 287.682$, $q_2 = 693.147$, $q_3 = 1386.294$, $q_3 - q_1 = 1098.612$
Suppose that the time between requests to a web server (in seconds) is exponentially distributed with rate parameter $r = 2$. Find each of the following:
1. The mean and standard deviation of the time between requests.
2. The probability that the time between requests is less than 0.5 seconds.
3. The median, the first and third quartiles, and the interquartile range of the time between requests.
Answer
Let $T$ denote the time between requests.
1. $\E(T) = 0.5$, $\sd(T) = 0.5$
2. $\P(T \lt 0.5) = 0.6321$
3. $q_1 = 0.1438$, $q_2 = 0.3466$, $q_3 = 0.6931$, $q_3 - q_1 = 0.5493$
Suppose that the lifetime $X$ of a fuse (in 100 hour units) is exponentially distributed with $\P(X \gt 10) = 0.8$. Find each of the following:
1. The rate parameter.
2. The mean and standard deviation.
3. The median, the first and third quartiles, and the interquartile range of the lifetime.
Answer
Let $X$ denote the lifetime.
1. $r = 0.02231$
2. $\E(X) = 44.814$, $\sd(X) = 44.814$
3. $q_1 = 12.8922$, $q_2 = 31.0628$, $q_3 = 62.1257$, $q_3 - q_1 = 49.2334$
The position $X$ of the first defect on a digital tape (in cm) has the exponential distribution with mean 100. Find each of the following:
1. The rate parameter.
2. The probability that $X \lt 200$ given $X \gt 150$.
3. The standard deviation.
4. The median, the first and third quartiles, and the interquartile range of the position.
Answer
Let $X$ denote the position of the first defect.
1. $r = 0.01$
2. $\P(X \lt 200 \mid X \gt 150) = 0.3935$
3. $\sd(X) = 100$
4. $q_1 = 28.7682$, $q_2 = 69.3147$, $q_3 = 138.6294$, $q_3 - q_1 = 109.8612$
Suppose that $X, \, Y, \, Z$ are independent, exponentially distributed random variables with respective parameters $a, \, b, \, c \in (0, \infty)$. Find the probability of each of the 6 orderings of the variables.
Answer
1. $\P(X \lt Y \lt Z) = \frac{a}{a + b + c} \frac{b}{b + c}$
2. $\P(X \lt Z \lt Y) = \frac{a}{a + b + c} \frac{c}{b + c}$
3. $\P(Y \lt X \lt Z) = \frac{b}{a + b + c} \frac{a}{a + c}$
4. $\P(Y \lt Z \lt X) = \frac{b}{a + b + c} \frac{c}{a + c}$
5. $\P(Z \lt X \lt Y) = \frac{c}{a + b + c} \frac{a}{a + b}$
6. $\P(Z \lt Y \lt X) = \frac{c}{a + b + c} \frac{b}{a + b}$
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
We now know that the sequence of inter-arrival times $\bs{X} = (X_1, X_2, \ldots)$ in the Poisson process is a sequence of independent random variables, each having the exponential distribution with rate parameter $r$, for some $r \gt 0$. No other distribution gives the strong renewal assumption that we want: the property that the process probabilistically restarts, independently of the past, at each arrival time and at each fixed time.
The $n$th arrival time is simply the sum of the first $n$ inter-arrival times: $T_n = \sum_{i=1}^n X_i, \quad n \in \N$ Thus, the sequence of arrival times $\bs{T} = (T_0, T_1, \ldots)$ is the partial sum process associated with the sequence of inter-arrival times $\bs{X} = (X_1, X_2, \ldots)$.
Distribution Functions
Recall that the common probability density function of the inter-arrival times is $f(t) = r e^{-r t}, \quad 0 \le t \lt \infty$ Our first goal is to describe the distribution of the $n$th arrival $T_n$.
For $n \in \N_+$, $T_n$ has a continuous distribution with probability density function $f_n$ given by $f_n(t) = r^n \frac{t^{n-1}}{(n - 1)!} e^{-r t}, \quad 0 \le t \lt \infty$
1. $f_n$ increases and then decreases, with mode at $(n - 1) / r$.
2. $f_1$ is concave upward. $f_2$ is concave downward and then upward, with inflection point at $t = 2 / r$. For $n \gt 2$, $f_n$ is concave upward, then downward, then upward again with inflection points at $t = \left[(n - 1) \pm \sqrt{n - 1}\right] \big/ r$.
Proof
Since $T_n$ is the sum of $n$ independent variables, each with PDF $f$, the PDF of $T_n$ is the convolution power of $f$ of order $n$. That is, $f_n = f^{*n}$. A simple induction argument shows that $f_n$ has the form given above. For example, $f_2(t) = \int_0^t f(s) f(t - s) \, ds = \int_0^t r e^{-r s} r e^{-r(t-s)} \, ds = \int_0^t r^2 e^{-r t} \, ds = r^2 t \, e^{-r t}, \quad 0 \le t \lt \infty$ Parts (a) and (b) follow from standard calculus.
The distribution with this probability density function is known as the gamma distribution with shape parameter $n$ and rate parameter $r$. It is also known as the Erlang distribution, named for the Danish mathematician Agner Erlang. Again, $1 / r$ is the scale parameter, and that term will be justified below. The term shape parameter for $n$ clearly makes sense in light of parts (a) and (b) of the last result. The term rate parameter for $r$ is inherited from the inter-arrival times, and more generally from the underlying Poisson process itself: the random points are arriving at an average rate of $r$ per unit time. A more general version of the gamma distribution, allowing non-integer shape parameters, is studied in the chapter on Special Distributions. Note that since the arrival times are continuous, the probability of an arrival at any given instant of time is 0.
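For readers who want to experiment numerically, the density above matches the gamma density implemented in SciPy, with shape $n$ and scale $1/r$. A brief sketch (parameter values arbitrary):

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import factorial

r, n = 2.0, 3                                # arbitrary rate and shape
t = np.linspace(0.1, 5, 5)

f_n = r**n * t**(n - 1) / factorial(n - 1) * np.exp(-r*t)   # the formula above
print(f_n)
print(gamma(a=n, scale=1/r).pdf(t))          # SciPy's gamma density, same values
```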
In the gamma experiment, vary $r$ and $n$ with the scroll bars and watch how the shape of the probability density function changes. For various values of the parameters, run the experiment 1000 times and compare the empirical density function to the true probability density function.
The distribution function and the quantile function of the gamma distribution do not have simple, closed-form expressions. However, it's easy to write the distribution function as a sum.
For $n \in \N_+$, $T_n$ has distribution function $F_n$ given by $F_n(t) = 1 - \sum_{k=0}^{n-1} e^{-r t} \frac{(r t)^k}{k!}, \quad t \in [0, \infty)$
Proof
Note that $F_n(t) = \int_0^t f_n(s) \, ds = \int_0^t r^n \frac{s^{n-1}}{(n - 1)!} e^{-r s} \, ds$ The result follows by repeated integration by parts.
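The sum formula is convenient for computation. The sketch below (SciPy assumed; parameter values arbitrary) evaluates the sum directly and compares it with the gamma distribution function from SciPy.

```python
import numpy as np
from math import factorial
from scipy.stats import gamma

r, n, t = 1.5, 4, 2.0                        # arbitrary rate, shape, time
F = 1 - sum(np.exp(-r*t) * (r*t)**k / factorial(k) for k in range(n))
print(F)
print(gamma(a=n, scale=1/r).cdf(t))          # same value from SciPy's gamma CDF
```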
Open the special distribution calculator, select the gamma distribution, and select CDF view. Vary the parameters and note the shape of the distribution and quantile functions. For selected values of the parameters, compute the quartiles.
Moments
The mean, variance, and moment generating function of $T_n$ can be found easily from the representation as a sum of independent exponential variables.
The mean and variance of $T_n$ are.
1. $\E\left(T_n\right) = n / r$
2. $\var\left(T_n\right) = n / r^2$
Proof
Recall that the exponential distribution with rate parameter $r$ has mean $1 / r$ and variance $1 / r^2$.
1. The expected value of a sum is the sum of the expected values, so $\E\left(T_n\right) = n / r$.
2. The variance of a sum of independent variables is the sum of the variances, so $\var\left(T_n\right) = n / r^2$.
For $k \in \N$, the moment of order $k$ of $T_n$ is $\E\left(T_n^k\right) = \frac{(k + n - 1)!}{(n - 1)!} \frac{1}{r^k}$
Proof
Using the standard change of variables theorem, $\E\left(T_n^k\right) = \int_0^\infty t^k f_n(t) \, dt = \frac{r^{n-1}}{(n - 1)!} \int_0^\infty t^{k + n - 1} r e^{-r t} \, dt$ But the integral on the right is the moment of order $k + n - 1$ for the exponential distribution, which we showed in the last section is $(k + n - 1)! \big/ r^{k + n - 1}$. Simplifying gives the result.
More generally, the moment of order $k \gt 0$ (not necessarily an integer) is $\E\left(T_n^k\right) = \frac{\Gamma(k + n)}{\Gamma(n)} \frac{1}{r^k}$ where $\Gamma$ is the gamma function.
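Here is a quick numerical check of the moment formula (SciPy assumed; the rate, shape, and moment order are arbitrary choices).

```python
from scipy.stats import gamma
from scipy.special import gamma as gamma_fn

r, n, k = 2.0, 3, 4                          # arbitrary rate, shape, moment order
print(gamma_fn(k + n) / gamma_fn(n) / r**k)  # the formula above
print(gamma(a=n, scale=1/r).moment(k))       # SciPy's non-central moment, same value
```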
In the gamma experiment, vary $r$ and $n$ with the scroll bars and watch how the size and location of the mean$\pm$standard deviation bar changes. For various values of $r$ and $n$, run the experiment 1000 times and compare the empirical moments to the true moments.
Our next result gives the skewness and kurtosis of the gamma distribution.
The skewness and kurtosis of $T_n$ are
1. $\skw(T_n) = \frac{2}{\sqrt{n}}$
2. $\kur(T_n) = 3 + \frac{6}{n}$
Proof
These results follow from the moment results above and the computational formulas for skewness and kurtosis.
In particular, note that the gamma distribution is positively skewed but $\skw(T_n) \to 0$ as $n \to \infty$. Recall also that the excess kurtosis is $\kur(T_n) - 3 = \frac{6}{n} \to 0$ as $n \to \infty$. This result is related to the convergence of the gamma distribution to the normal, discussed below. Finally, note that the skewness and kurtosis do not depend on the rate parameter $r$. This is because, as we show below, $1 / r$ is a scale parameter.
The moment generating function of $T_n$ is $M_n(s) = \E\left(e^{s T_n}\right) = \left(\frac{r}{r - s}\right)^n, \quad -\infty \lt s \lt r$
Proof
Recall that the MGF of a sum of independent variables is the product of the corresponding MGFs. We showed in the last section that the exponential distribution with parameter $r$ has MGF $s \mapsto r / (r - s)$ for $-\infty \lt s \lt r$.
The moment generating function can also be used to derive the moments of the gamma distribution given above—recall that $M_n^{(k)}(0) = \E\left(T_n^k\right)$.
Estimating the Rate
In many practical situations, the rate $r$ of the process is unknown and must be estimated based on data from the process. We start with a natural estimate of the scale parameter $1 / r$. Note that $M_n = \frac{T_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i$ is the sample mean of the first $n$ inter-arrival times $(X_1, X_2, \ldots, X_n)$. In statistical terms, this sequence is a random sample of size $n$ from the exponential distribution with rate $r$.
$M_n$ satisfies the following properties:
1. $\E(M_n) = \frac{1}{r}$
2. $\var(M_n) = \frac{1}{n r^2}$
3. $M_n \to \frac{1}{r}$ as $n \to \infty$ with probability 1
Proof
Parts (a) and (b) follow from the expected value of $T_n$ and standard properties. Part (c) is the strong law of large numbers.
In statistical terms, part (a) means that $M_n$ is an unbiased estimator of $1 / r$ and hence the variance in part (b) is the mean square error. Part (b) means that $M_n$ is a consistent estimator of $1 / r$ since $\var(M_n) \downarrow 0$ as $n \to \infty$. Part (c) is a stronger form of consistency. In general, the sample mean of a random sample from a distribution is an unbiased and consistent estimator of the distribution mean. On the other hand, a natural estimator of $r$ itself is $1 / M_n = n / T_n$. However, this estimator is positively biased.
$\E(n / T_n) \ge r$.
Proof
This follows immediately from Jensen's inequality since $x \mapsto 1 / x$ is concave upward on $(0, \infty)$.
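The bias is easy to see in simulation. In the sketch below (NumPy assumed; rate, sample size, and replications are arbitrary choices), the average of $n / T_n$ is noticeably larger than $r$, while the average of $T_n / n$ is close to $1/r$.

```python
import numpy as np

rng = np.random.default_rng(3)
r, n, reps = 2.0, 5, 100_000                 # arbitrary rate, sample size, replications

t_n = rng.exponential(scale=1/r, size=(reps, n)).sum(axis=1)   # simulated values of T_n
print(np.mean(n / t_n), r)                   # biased upward (in fact E(n / T_n) = n r / (n - 1))
print(np.mean(t_n / n), 1/r)                 # unbiased estimate of the scale parameter 1/r
```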
Properties and Connections
Scaling
As noted above, the gamma distribution is a scale family.
Suppose that $T$ has the gamma distribution with rate parameter $r \in (0, \infty)$ and shape parameter $n \in \N_+$. If $c \in (0, \infty)$ then $c T$ has the gamma distribution with rate parameter $r / c$ and shape parameter $n$.
Proof
The moment generating function of $c T$ is $\E[e^{s (c T)}] = \E[e^{(c s) T}] = \left(\frac{r}{r - cs}\right)^n = \left(\frac{r / c}{r / c - s}\right)^n, \quad s \lt \frac{r}{c}$
The scaling property also follows from the fact that the gamma distribution governs the arrival times in the Poisson process. A time change in a Poisson process clearly does not change the strong renewal property, and hence results in a new Poisson process.
General Exponential Family
The gamma distribution is also a member of the general exponential family of distributions.
Suppose that $T$ has the gamma distribution with shape parameter $n \in \N_+$ and rate parameter $r \in (0, \infty)$. Then $T$ has a two parameter general exponential distribution with natural parameters $n - 1$ and $-r$, and natural statistics $\ln(T)$ and $T$.
Proof
This follows from the form of the PDF and the definition of the general exponential family: $f(t) = r^n \frac{t^{n-1}}{(n - 1)!} e^{-r t} = \frac{r^n}{(n - 1)!} \exp\left[(n - 1) \ln(t) - r t\right], \quad t \in (0, \infty)$
Increments
A number of important properties flow from the fact that the sequence of arrival times $\bs{T} = (T_0, T_1, \ldots)$ is the partial sum process associated with the sequence of independent, identically distributed inter-arrival times $\bs{X} = (X_1, X_2, \ldots)$.
The arrival time sequence $\bs{T}$ has stationary, independent increments:
1. If $m \lt n$ then $T_n - T_m$ has the same distribution as $T_{n-m}$, namely the gamma distribution with shape parameter $n - m$ and rate parameter $r$.
2. If $n_1 \lt n_2 \lt n_3 \lt \cdots$ then $\left(T_{n_1}, T_{n_2} - T_{n_1}, T_{n_3} - T_{n_2}, \ldots\right)$ is an independent sequence.
Proof
The stationary and independent increments properties hold for any partial sum process associated with an independent, identically distributed sequence.
Of course, the stationary and independent increments properties are related to the fundamental renewal assumption that we started with. If we fix $n \in \N_+$, then $(T_n - T_n, T_{n+1} - T_n, T_{n+2} - T_n, \ldots)$ is independent of $(T_1, T_2, \ldots, T_n)$ and has the same distribution as $(T_0, T_1, T_2, \ldots)$. That is, if we restart the clock at time $T_n$, then the process in the future looks just like the original process (in a probabilistic sense) and is independent of the past. Thus, we have our second characterization of the Poisson process.
A process of random points in time is a Poisson process with rate $r \in (0, \infty)$ if and only if the arrival time sequence $\bs{T}$ has stationary, independent increments, and for $n \in \N_+$, $T_n$ has the gamma distribution with shape parameter $n$ and rate parameter $r$.
Sums
The gamma distribution is closed with respect to sums of independent variables, as long as the rate parameter is fixed.
Suppose that $V$ has the gamma distribution with shape parameter $m \in \N_+$ and rate parameter $r \gt 0$, $W$ has the gamma distribution with shape parameter $n \in \N_+$ and rate parameter $r$, and that $V$ and $W$ are independent. Then $V + W$ has the gamma distribution with shape parameter $m + n$ and rate parameter $r$.
Proof
There are at least three different proofs of this fundamental result. Perhaps the best is a probabilistic proof based on the Poisson process. We start with a sequence $\bs{X}$ of independent, exponentially distributed variables, each with rate parameter $r$. Then we can associate $V$ with $T_m$ and $W$ with $T_{m + n} - T_m$ so that $V + W$ becomes $T_{m + n}$. The result now follows from the previous theorem.
Another simple proof uses moment generating functions. Recall again that the MGF of $V + W$ is the product of the MGFs of $V$ and of $W$. A third, analytic proof uses convolution. Recall again that the PDF of $V + W$ is the convolution of the PDFs of $V$ and of $W$.
Normal Approximation
In the gamma experiment, vary $r$ and $n$ with the scroll bars and watch how the shape of the probability density function changes. Now set $n = 10$ and for various values of $r$ run the experiment 1000 times and compare the empirical density function to the true probability density function.
Even though you are restricted to relatively small values of $n$ in the app, note that the probability density function of the $n$th arrival time becomes more bell shaped as $n$ increases (for $r$ fixed). This is yet another application of the central limit theorem, since $T_n$ is the sum of $n$ independent, identically distributed random variables (the inter-arrival times).
The distribution of the random variable $Z_n$ below converges to the standard normal distribution as $n \to \infty$: $Z_n = \frac{r\,T_n - n}{\sqrt{n}}$
Proof
$Z_n$ is the standard score associated with $T_n$, so the result follows from the central limit theorem.
Connection to Bernoulli Trials
We return to the analogy between the Bernoulli trials process and the Poisson process that started in the Introduction and continued in the last section on the Exponential Distribution. If we think of the successes in a sequence of Bernoulli trials as random points in discrete time, then the process has the same strong renewal property as the Poisson process, but restricted to discrete time. That is, at each fixed time and at each arrival time, the process starts over, independently of the past. In Bernoulli trials, the time of the $n$th arrival has the negative binomial distribution with parameters $n$ and $p$ (the success probability), while in the Poisson process, as we now know, the time of the $n$th arrival has the gamma distribution with parameters $n$ and $r$ (the rate). Because of this strong analogy, we expect a relationship between these two processes. In fact, we have the same type of limit as with the geometric and exponential distributions.
Fix $n \in \N_+$ and suppose that for each $m \in \N_+$ $T_{m,n}$ has the negative binomial distribution with parameters $n$ and $p_m \in (0, 1)$, where $m p_m \to r \in (0, \infty)$ as $m \to \infty$. Then the distribution of $T_{m,n} \big/ m$ converges to the gamma distribution with parameters $n$ and $r$ as $m \to \infty$.
Proof
Suppose that $X_m$ has the geometric distribution on $\N_+$ with success parameter $p_m$. We know from our convergence result in the last section that the distribution of $X_m / m$ converges to the exponential distribution with rate parameter $r$ as $m \to \infty$. It follows that if $M_m$ denotes the moment generating function of $X_m / m$, then $M_m(s) \to r / (r - s)$ as $m \to \infty$ for $s \lt r$. But then $M_m^n$ is the MGF of $T_{m,n} \big/ m$ and clearly $M_m^n(s) \to \left(\frac{r}{r - s}\right)^n$ as $m \to \infty$ for $s \lt r$. The expression on the right is the MGF of the gamma distribution with shape parameter $n$ and rate parameter $r$.
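The convergence can be seen numerically. In the sketch below (SciPy assumed; parameter values arbitrary), note that SciPy's negative binomial counts failures before the $n$th success, so the trial number of the $n$th success is that count plus $n$.

```python
from scipy.stats import nbinom, gamma

r, n, t = 2.0, 3, 1.2                        # arbitrary rate, shape, test point
for m in (10, 100, 1000):
    p = r / m                                # so that m * p_m -> r
    # P(T_{m,n} / m <= t) = P(failures before the n-th success <= m t - n)
    print(m, nbinom(n, p).cdf(m*t - n), gamma(a=n, scale=1/r).cdf(t))
```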
Computational Exercises
Suppose that customers arrive at a service station according to the Poisson model, at a rate of $r = 3$ per hour. Relative to a given starting time, find the probability that the second customer arrives sometime after 1 hour.
Answer
0.1991
Defects in a type of wire follow the Poisson model, with rate 1 per 100 meter. Find the probability that the 5th defect is located between 450 and 550 meters.
Answer
0.1746
Suppose that requests to a web server follow the Poisson model with rate $r = 5$. Relative to a given starting time, compute the mean and standard deviation of the time of the 10th request.
Answer
2, 0.6325
Suppose that $Y$ has a gamma distribution with mean 40 and standard deviation 20. Find the shape parameter $n$ and the rate parameter $r$.
Answer
$r = 1 / 10$, $n = 4$
Suppose that accidents at an intersection occur according to the Poisson model, at a rate of 8 per year. Compute the normal approximation to the event that the 10th accident (relative to a given starting time) occurs within 2 years.
Answer
0.5752
In the gamma experiment, set $n = 5$ and $r = 2$. Run the experiment 1000 times and compute the following:
1. $\P(1.5 \le T_5 \le 3)$
2. The relative frequency of the event $\{1.5 \le T_5 \le 3\}$
3. The normal approximation to $\P(1.5 \le T_5 \le 3)$
Answer
1. 0.5302
3. 0.4871
Suppose that requests to a web server follow the Poisson model. Starting at 12:00 noon on a certain day, the requests are logged. The 100th request comes at 12:15. Estimate the rate of the process.
Answer
$r = 6.67$ hits per minute
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
Recall that in the Poisson model, $\bs{X} = (X_1, X_2, \ldots)$ denotes the sequence of inter-arrival times, and $\bs{T} = (T_0, T_1, T_2, \ldots)$ denotes the sequence of arrival times. Thus $\bs{T}$ is the partial sum process associated with $\bs{X}$: $T_n = \sum_{i=1}^n X_i, \quad n \in \N$ Based on the strong renewal assumption, that the process restarts at each fixed time and each arrival time, independently of the past, we now know that $\bs{X}$ is a sequence of independent random variables, each with the exponential distribution with rate parameter $r$, for some $r \in (0, \infty)$. We also know that $\bs{T}$ has stationary, independent increments, and that for $n \in \N_+$, $T_n$ has the gamma distribution with rate parameter $r$ and shape parameter $n$. Both of these statements characterize the Poisson process with rate $r$.
Recall that for $t \ge 0$, $N_t$ denotes the number of arrivals in the interval $(0, t]$, so that $N_t = \max\{n \in \N: T_n \le t\}$. We refer to $\bs{N} = (N_t: t \ge 0)$ as the counting process. In this section we will show that $N_t$ has a Poisson distribution, named for Simeon Poisson, one of the most important distributions in probability theory. Our exposition will alternate between properties of the distribution and properties of the counting process. The two are intimately intertwined. It's not too much of an exaggeration to say that wherever there is a Poisson distribution, there is a Poisson process lurking in the background.
The Probability Density Function
Recall that the probability density function of the $n$th arrival time $T_n$ is $f_n(t) = r^n \frac{t^{n-1}}{(n-1)!} e^{-r t}, \quad 0 \le t \lt \infty$ We can find the distribution of $N_t$ because of the inverse relation between $\bs{N}$ and $\bs{T}$. In particular, recall that $N_t \ge n \iff T_n \le t, \quad t \in (0, \infty), \; n \in \N_+$ since both events mean that there are at least $n$ arrivals in $(0, t]$.
For $t \in [0, \infty)$, the probability density function of $N_t$ is given by
$\P(N_t = n) = e^{-r t} \frac{(r t)^n}{n!}, \quad n \in \N$
Proof
Using the inverse relationship noted above, and integration by parts, we have $\P(N_t \ge n) = \P(T_n \le t) = \int_0^t f_n(s) \, ds = 1 - \sum_{k=0}^{n-1} e^{-r t} \frac{(r t)^k}{k!}, \quad n \in \N$ For $n \in \N$ we have $\P(N_t = n) = \P(N_t \ge n) - \P(N_t \ge n + 1)$. Simplifying gives the result.
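The inverse relation used in the proof is easy to verify numerically (SciPy assumed; parameter values arbitrary):

```python
from scipy.stats import gamma, poisson

r, t, n = 2.0, 1.5, 4                        # arbitrary rate, time, number of arrivals
print(1 - poisson(r*t).cdf(n - 1))           # P(N_t >= n) from the Poisson distribution
print(gamma(a=n, scale=1/r).cdf(t))          # P(T_n <= t) from the gamma distribution
```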
Note that the distribution of $N_t$ depends on the parameters $r$ and $t$ only through the product $r t$. The distribution is called the Poisson distribution with parameter $r t$.
In the Poisson experiment, vary $r$ and $t$ with the scroll bars and note the shape of the probability density function. For various values of $r$ and $t$, run the experiment 1000 times and compare the relative frequency function to the probability density function.
In general, a random variable $N$ taking values in $\N$ is said to have the Poisson distribution with parameter $a \in (0, \infty)$ if it has the probability density function $g(n) = e^{-a} \frac{a^n}{n!}, \quad n \in \N$
1. $g(n - 1) \lt g(n)$ if and only if $n \lt a$.
2. If $a \notin \N_+$, there is a single mode at $\lfloor a \rfloor$.
3. If $a \in \N_+$, there are consecutive modes at $a - 1$ and $a$.
Proof
Part (a) follows from simple algebra, and similarly, $g(n - 1) = g(n)$ if and only if $n = a$ (and thus $a \in \N_+$). Parts (b) and (c) then follow.
The Poisson distribution does not have simple closed-form distribution or quantile functions. Trivially, we can write the distribution function as a sum of the probability density function.
The Poisson distribution with parameter $a$ has distribution function $G$ given by $G(n) = \sum_{k=0}^n e^{-a} \frac{a^k}{k!}, \quad n \in \N$
Open the special distribution calculator, select the Poisson distribution, and select CDF view. Vary the parameter and note the shape of the distribution and quantile functions. For various values of the parameter, compute the quartiles.
Sometimes it's convenient to allow the parameter $a$ to be 0. This degenerate Poisson distribution is simply point mass at 0. That is, with the usual conventions regarding nonnegative integer powers of 0, the probability density function $g$ above reduces to $g(0) = 1$ and $g(n) = 0$ for $n \in \N_+$.
Moments
Suppose that $N$ has the Poisson distribution with parameter $a \gt 0$. Naturally we want to know the mean, variance, skewness and kurtosis, and the probability generating function of $N$. The easiest moments to compute are the factorial moments. For this result, recall the falling power notation for the number of permutations of size $k$ chosen from a population of size $n$: $n^{(k)} = n (n - 1) \cdots (n - k + 1)$
The factorial moment of $N$ of order $k \in \N$ is $\E\left[N^{(k)}\right] = a^k$.
Proof
Using the standard change of variables formula for expected value, $\E[N^{(k)}] = \sum_{n=0}^\infty n^{(k)} e^{-a} \frac{a^n}{n!} = e^{-a} a^k \sum_{n=k}^\infty \frac{a^{n-k}}{(n-k)!} = e^{-a} a^k e^a = a^k$
The mean and variance of $N$ are the parameter $a$.
1. $\E(N) = a$
2. $\var(N) = a$
Proof
1. This follows directly from the first factorial moment: $\E(N) = \E\left[N^{(1)}\right] = a$.
2. Note that $\E(N^2) = \E\left[N^{(2)}\right] + \E(N) = a^2 + a$, and hence $\var(N) = \E(N^2) - [\E(N)]^2 = a$.
Open the special distribution simulator and select the Poisson distribution. Vary the parameter and note the location and size of the mean$\pm$standard deviation bar. For selected values of the parameter, run the simulation 1000 times and compare the empirical mean and standard deviation to the distribution mean and standard deviation.
The skewness and kurtosis of $N$ are
1. $\skw(N) = 1 / \sqrt{a}$
2. $\kur(N) = 3 + 1 / a$
Proof
These results follow from the computational formulas for skewness and kurtosis and the results for factorial moments above. Specifically,
1. $\E(N^3) = a^3 + 3 a^2 + a$ and $\skw(N) = [\E(N^3) - 3 a^2 - a^3] / a^{3/2}$
2. $\E(N^4) = a^4 + 6 a^3 + 7 a^2 + a$ and $\kur(N) = [\E(N^4) - 4 a \E(N^3) + 6 a^3 + 3 a^4] / a^2$
Note that the Poisson distribution is positively skewed, but $\skw(N) \to 0$ as $a \to \infty$. Recall also that the excess kurtosis is $\kur(N) - 3 = 1 / a \to 0$ as $a \to \infty$. This limit is related to the convergence of the Poisson distribution to the normal, discussed below.
Open the special distribution simulator and select the Poisson distribution. Vary the parameter and note the shape of the probability density function in the context of the results on skewness and kurtosis above.
The probability generating function $P$ of $N$ is given by $P(s) = \E\left(s^N\right) = e^{a(s - 1)}, \quad s \in \R$
Proof
Using the change of variables formula again, $\E(s^N) = \sum_{n=0}^\infty s^n e^{-a} \frac{a^n}{n!} = e^{-a} \sum_{n=0}^\infty \frac{(a s)^n}{n!} = e^{-a} e^{a s}, \quad s \in \R$
Returning to the Poisson counting process $\bs{N} = (N_t: t \ge 0)$ with rate parameter $r$, it follows that $\E(N_t) = r t$ and $\var(N_t) = r t$ for $t \ge 0$. Once again, we see that $r$ can be interpreted as the average arrival rate. In an interval of length $t$, we expect about $r t$ arrivals.
In the Poisson experiment, vary $r$ and $t$ with the scroll bars and note the location and size of the mean$\pm$standard deviation bar. For various values of $r$ and $t$, run the experiment 1000 times and compare the sample mean and standard deviation to the distribution mean and standard deviation, respectively.
Estimating the Rate
Suppose again that we have a Poisson process with rate $r \in (0, \infty)$. In many practical situations, the rate $r$ is unknown and must be estimated based on observing data. For fixed $t \gt 0$, a natural estimator of the rate $r$ is $N_t / t$.
The mean and variance of $N_t / t$ are
1. $\E(N_t / t) = r$
2. $\var(N_t / t) = r / t$
Proof
These results follow easily from $\E(N_t) = \var(N_t) = r t$ and basic properties of expected value and variance.
Part (a) means that the estimator is unbiased. Since this is the case, the variance in part (b) gives the mean square error. Since $\var(N_t / t)$ decreases to 0 as $t \to \infty$, the estimator is consistent.
Additional Properties and Connections
Increments and Characterizations
Let's explore the basic renewal assumption of the Poisson model in terms of the counting process $\bs{N} = (N_t: t \ge 0)$. Recall that $N_t$ is the number of arrivals in the interval $(0, t]$, so it follows that if $s, \; t \in [0, \infty)$ with $s \lt t$, then $N_t - N_s$ is the number of arrivals in the interval $(s, t]$. Of course, the arrival times have continuous distributions, so the probability that an arrival occurs at a specific point $t$ is 0. Thus, it does not really matter if we write the interval above as $(s, t]$, $(s, t)$, $[s, t)$ or $[s, t]$.
The process $\bs{N}$ has stationary, independent increments.
1. If $s, \, t \in [0, \infty)$ with $s \lt t$ then $N_t - N_s$ has the same distribution as $N_{t-s}$, namely Poisson with parameter $r (t - s)$.
2. If $t_1, \, t_2, \, t_3 \ldots \in [0, \infty)$ with $t_1 \lt t_2 \lt t_3 \lt \cdots$ then $\left(N_{t_1}, N_{t_2} - N_{t_1}, N_{t_3} - N_{t_2}, \ldots\right)$ is an independent sequence.
Statements about the increments of the counting process can be expressed more elegantly in terms of our more general counting process. Recall that for $A \subseteq [0, \infty)$ (measurable of course), $N(A)$ denotes the number of random points in $A$: $N(A) = \#\left\{n \in \N_+: T_n \in A\right\}$ and so in particular, $N_t = N(0, t]$. Thus, note that $t \mapsto N_t$ is a (random) distribution function and $A \mapsto N(A)$ is the (random) measure associated with this distribution function. Recall also that $\lambda$ denotes the standard length (Lebesgue) measure on $[0, \infty)$. Here is our third characterization of the Poisson process.
A process of random points in time is a Poisson process with rate $r \in (0, \infty)$ if and only if the following properties hold:
1. If $A \subseteq [0, \infty)$ is measurable then $N(A)$ has the Poisson distribution with parameter $r \lambda(A)$.
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of measurable sets in $[0, \infty)$ then $\{N(A_i): i \in I\}$ is a set of independent variables.
From a modeling point of view, the assumptions of stationary, independent increments are ones that might be reasonably made. But the assumption that the increments have Poisson distributions does not seem as clear. Our next characterization of the process is more primitive in a sense, because it just imposes some limiting assumptions (in addition to stationary, independent increments).
A process of random points in time is a Poisson process with rate $r \in (0, \infty)$ if and only if the following properties hold:
1. If $A, \, B \subseteq [0, \infty)$ are measurable and $\lambda(A) = \lambda(B)$, then $N(A)$ and $N(B)$ have the same distribution.
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of measurable sets in $[0, \infty)$ then $\{N(A_i): i \in I\}$ is a set of independent variables.
3. If $A_n \subseteq [0, \infty)$ is measurable and $\lambda(A_n) \gt 0$ for $n \in \N_+$, and if $\lambda(A_n) \to 0$ as $n \to \infty$ then \begin{align} &\frac{\P\left[N(A_n) = 1\right]}{\lambda(A_n)} \to r \text{ as } n \to \infty \ &\frac{\P\left[N(A_n) \gt 1\right]}{\lambda(A_n)} \to 0 \text{ as } n \to \infty \end{align}
Proof
As usual, let $N_t = N(0, t]$, the number of arrivals in $(0, t]$, and in addition let $P_n(t) = \P(N_t = n)$ for $t \ge 0$ and $n \in \N$. Conditioning on the number of arrivals in a small interval $[t, t + h]$ and using properties (a), (b), and (c) leads to a system of differential equations for the functions $P_n$. First, $P_0$ satisfies the following differential equation and initial condition: $P_0^\prime(t) = -r P_0(t), \; t \gt 0; \quad P_0(0) = 1$ Hence $P_0(t) = e^{-r t}$ for $t \ge 0$. Next for $n \in \N_+$, $P_n$ satisfies the following differential equation and initial condition $P_n^\prime(t) = -r P_n(t) + r P_{n-1}(t), \; t \gt 0; \quad P_n(0) = 0$ Hence $P_n(t) = e^{-r t} (r t)^n / n!$ for $t \ge 0$ and therefore $N_t$ has the Poisson distribution with parameter $r t$.
Of course, part (a) is the stationary assumption and part (b) the independence assumption. The first limit in (c) is sometimes called the rate property and the second limit the sparseness property. In a small time interval of length $dt$, the probability of a single random point is approximately $r \, dt$, and the probability of two or more random points is negligible.
Sums
Suppose that $N$ and $M$ are independent random variables, and that $N$ has the Poisson distribution with parameter $a \in (0, \infty)$ and $M$ has the Poisson distribution with parameter $b \in (0, \infty)$. Then $N + M$ has the Poisson distribution with parameter $a + b$.
Proof from the Poisson process
There are several ways to prove this result, but the one that gives the most insight is a probabilistic proof based on the Poisson process. Thus suppose that $\bs{N} = (N_t: t \ge 0)$ is a Poisson counting process with rate 1. We can associate $N$ with $N_a$ and $M$ with $N_{a + b} - N_a$, since these have the correct distributions and are independent. But then $N + M$ is $N_{a + b}$.
Proof from probability generating functions
From our result above, $N$ has PGF $P(s) = e^{a (s - 1)}$ for $s \in \R$, and $M$ has PGF $Q(s) = e^{b(s-1)}$ for $s \in \R$. Hence $N + M$ has PGF $P(s) Q(s) = e^{(a+b)(s-1)}$ for $s \in \R$. But this is the PGF of the Poisson distribution with parameter $a + b$.
Proof from convolution
From our results above, $N$ has PDF $g(n) = e^{-a} a^n / n!$ for $n \in \N$, and $M$ has PDF $h(n) = e^{-b} b^n / n!$ for $n \in \N$. Hence the PDF of $N + M$ is the convolution $g * h$. For $n \in \N$, $(g * h)(n) = \sum_{k=0}^n g(k) h(n - k) = \sum_{k=0}^n e^{-a} \frac{a^k}{k!} e^{-b} \frac{b^{n-k}}{(n - k)!} = e^{-(a+b)} \frac{1}{n!} \sum_{k=0}^n \frac{n!}{k! (n - k)!} a^k b^{n-k}$ By the binomial theorem, the last sum is $(a + b)^n$.
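The convolution argument can be checked numerically by convolving truncated probability density functions (NumPy and SciPy assumed; the parameters are arbitrary, and the truncation at 60 terms is only a convenience).

```python
import numpy as np
from scipy.stats import poisson

a, b = 2.5, 4.0                              # arbitrary parameters
k = np.arange(60)
g, h = poisson(a).pmf(k), poisson(b).pmf(k)
conv = np.convolve(g, h)[:60]                # PDF of the sum, by discrete convolution
print(np.max(np.abs(conv - poisson(a + b).pmf(k))))   # essentially zero
```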
From the last theorem, it follows that the Poisson distribution is infinitely divisible. That is, a Poisson distributed variable can be written as the sum of an arbitrary number of independent, identically distributed (in fact also Poisson) variables.
Suppose that $N$ has the Poisson distribution with parameter $a \in (0, \infty)$. Then for $n \in \N_+$, $N$ has the same distribution as $\sum_{i=1}^n N_i$ where $(N_1, N_2, \ldots, N_n)$ are independent, and each has the Poisson distribution with parameter $a / n$.
Normal Approximation
Because of the representation as a sum of independent, identically distributed variables, it's not surprising that the Poisson distribution can be approximated by the normal.
Suppose that $N_t$ has the Poisson distribution with parameter $t \gt 0$. Then the distribution of the variable below converges to the standard normal distribution as $t \to \infty$. $Z_t = \frac{N_t - t}{\sqrt{t}}$
Proof
As usual, we can assume that $(N_t: t \ge 0)$ is the Poisson counting process with rate 1. Note that $Z_t$ is simply the standard score associated with $N_t$. For $n \in \N_+$, $N_n$ is the sum of $n$ independent variables, each with the Poisson distribution with parameter 1. Thus, from the central limit theorem, the distribution of $Z_n$ converges to the standard normal distribution as $n \to \infty$. For general $t \in [0, \infty)$, it's possible to write $Z_t = Z_n + W_t$ where $n = \lfloor t \rfloor$ and $W_t \to 0$ as $t \to \infty$ (in probability and hence in distribution).
Thus, if $N$ has the Poisson distribution with parameter $a$, and $a$ is large, then the distribution of $N$ is approximately normal with mean $a$ and standard deviation $\sqrt{a}$. When using the normal approximation, we should remember to use the continuity correction, since the Poisson is a discrete distribution.
In the Poisson experiment, set $r = t = 1$. Increase $t$ and note how the graph of the probability density function becomes more bell-shaped.
General Exponential
The Poisson distribution is a member of the general exponential family of distributions. This fact is important in various statistical procedures.
Suppose that $N$ has the Poisson distribution with parameter $a \in (0, \infty)$. This distribution is a one-parameter exponential family with natural parameter $\ln(a)$ and natural statistic $N$.
Proof
This follows from the form of the Poisson PDF: $g(n) = e^{-a} \frac{a^n}{n!} = \frac{e^{-a}}{n!} \exp\left[n \ln (a)\right], \quad n \in \N$
The Uniform Distribution
The Poisson process has some basic connections to the uniform distribution. Consider again the Poisson process with rate $r \gt 0$. As usual, $\bs{T} = (T_0, T_1, \ldots)$ denotes the arrival time sequence and $\bs{N} = (N_t: t \ge 0)$ the counting process.
For $t \gt 0$, the conditional distribution of $T_1$ given $N_t = 1$ is uniform on the interval $(0, t]$.
Proof
Given $N_t = 1$ (one arrival in $(0, t]$) the arrival time $T_1$ takes values in $(0, t]$. From the independent and stationary increments properties, $\P(T_1 \le s \mid N_t = 1) = \P(N_s = 1, N_t - N_s = 0 \mid N_t = 1) = \frac{\P(N_s = 1, N_t - N_s = 0)}{\P(N_t = 1)} = \frac{\P(N_s = 1) \P(N_t - N_s = 0)}{\P(N_t = 1)}$ Hence using the Poisson distribution, $\P(T_1 \le s \mid N_t = 1) = \frac{r s e^{-r\,s} e^{-r(t - s)}}{r t e^{-r\,t}} = \frac{s}{t}, \quad 0 \lt s \le t$
More generally, for $t \gt 0$ and $n \in \N_+$, the conditional distribution of $(T_1, T_2, \ldots, T_n)$ given $N_t = n$ is the same as the distribution of the order statistics of a random sample of size $n$ from the uniform distribution on the interval $(0, t]$.
Heuristic proof
Suppose that $0 \lt t_1 \lt t_2 \lt \cdots \lt t_n \lt t$. On the event $N_t = n$, the probability density of $(T_1, T_2, \ldots, T_n)$ at $(t_1, t_2, \ldots, t_n)$ is the probability density function of independent inter-arrival times $\left(t_1, t_2 - t_1, \ldots, t_n - t_{n-1}\right)$ times the probability of no arrivals in the interval $(t_n, t)$. Hence given $N_t = n$, the conditional density of $(T_1, T_2, \ldots, T_n)$ at $(t_1, t_2, \ldots, t_n)$ is $\frac{r e^{- r t_1} r e^{-r (t_2 - t_1)} \cdots r e^{-r(t_n - t_{n-1})} e^{-r(t - t_n)}}{e^{-r t} (r t)^n \big/n!} = \frac{n!}{t^n}$ But this is the PDF of the order statistics from a sample of size $n$ from the uniform distribution on $[0, t]$.
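A simulation illustrates the result. In the sketch below (NumPy assumed; rate, interval length, count, and replications are arbitrary choices), the runs with exactly $n$ arrivals in $(0, t]$ are kept, and the average of the first arrival time is compared with $t / (n + 1)$, the mean of the first order statistic of a sample of $n$ uniform variables on $(0, t)$.

```python
import numpy as np

rng = np.random.default_rng(4)
r, t, n, reps = 2.0, 3.0, 5, 200_000         # arbitrary rate, interval, count, replications

x = rng.exponential(scale=1/r, size=(reps, 40))   # more inter-arrival times than needed
arrivals = np.cumsum(x, axis=1)
counts = (arrivals <= t).sum(axis=1)              # N_t for each run

keep = arrivals[counts == n]                      # runs with exactly n arrivals in (0, t]
print(np.mean(keep[:, 0]), t / (n + 1))           # E(T_1 | N_t = n) vs t / (n + 1)
```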
Note that the conditional distribution in the last result is independent of the rate $r$. This means that, in a sense, the Poisson model gives the most random distribution of points in time.
The Binomial Distribution
The Poisson distribution has important connections to the binomial distribution. First we consider a conditional distribution based on the number of arrivals of a Poisson process in a given interval, as we did in the last subsection.
Suppose that $(N_t: t \in [0, \infty))$ is a Poisson counting process with rate $r \in (0, \infty)$. If $s, \, t \in (0, \infty)$ with $s \lt t$, and $n \in \N_+$, then the conditional distribution of $N_s$ given $N_t = n$ is binomial with trial parameter $n$ and success parameter $p = s / t$.
Proof
Note that given $N_t = n$, the number of arrivals $N_s$ in $(0, s]$ takes values in $\{0, 1, \ldots, n\}$. Again, the stationary and independent increments properties are critical for the proof. $\P(N_s = k \mid N_t = n) = \frac{\P(N_s = k, N_t = n)}{\P(N_t = n)} = \frac{\P(N_s = k, N_t - N_s = n - k)}{\P(N_t = n)} = \frac{\P(N_s = k) \P(N_t - N_s = n - k)}{\P(N_t = n)}$ Substituting into the Poisson PDFs gives $\P(N_s = k \mid N_t = n) = \frac{\left(e^{-r s}(r s)^k / k!\right)\left(e^{-r(t-s)}[r(t - s)]^{n-k}/(n - k)!\right)}{e^{-r t}(r t)^n / n!} = \frac{n!}{k! (n - k)!} \left(\frac{s}{t}\right)^k \left(1 - \frac{s}{t}\right)^{n-k}$
Note again that the conditional distribution in the last result does not depend on the rate $r$. Given $N_t = n$, each of the $n$ arrivals, independently of the others, falls into the interval $(0, s]$ with probability $s / t$ and into the interval $(s, t]$ with probability $1 - s / t = (t - s) / t$. Here is essentially the same result, outside of the context of the Poisson process.
Suppose that $M$ has the Poisson distribution with parameter $a \in (0, \infty)$, $N$ has the Poisson distribution with parameter $b \in (0, \infty)$, and that $M$ and $N$ are independent. Then the conditional distribution of $M$ given $M + N = n$ is binomial with parameters $n$ and $p = a / (a + b)$.
Proof
The proof is essentially the same as the previous theorem, with minor modifications. First recall from the result above that $M + N$ has the Poisson distribution with parameter $a + b$. For $n, \, k \in \N$ with $k \le n$, $\P(M = k \mid M + N = n) = \frac{\P(M = k, M + N = n)}{\P(M + N = n)} = \frac{\P(M = k, N = n - k)}{\P(N + M = n)} = \frac{\P(M = k) \P(N = n - k)}{\P(M + N = n)}$ Substituting into the Poisson PDFs gives $\P(M = k \mid M + N = n) = \frac{\left(e^{-a} a^k / k!\right)\left(e^{-b} b^{n-k}/(n - k)!\right)}{e^{-(a + b)}(a + b)^n / n!} = \frac{n!}{k! (n - k)!} \left(\frac{a}{a + b}\right)^k \left(1 - \frac{a}{a + b}\right)^{n-k}$
More importantly, the Poisson distribution is the limit of the binomial distribution in a certain sense. As we will see, this convergence result is related to the analogy between the Bernoulli trials process and the Poisson process that we discussed in the Introduction, the section on the inter-arrival times, and the section on the arrival times.
Suppose that $p_n \in (0, 1)$ for $n \in \N_+$ and that $n p_n \to a \in (0, \infty)$ as $n \to \infty$. Then the binomial distribution with parameters $n$ and $p_n$ converges to the Poisson distribution with parameter $a$ as $n \to \infty$. That is, for fixed $k \in \N$, $\binom{n}{k} p_n^k (1 - p_n)^{n-k} \to e^{-a} \frac{a^k}{k!} \text{ as } n \to \infty$
Direct proof
The binomial PDF with parameters $n$ and $p_n$ at $k \in \{0, 1, \ldots, n\}$ can be written as $\frac{1}{k!}\left[n p_n\right]\left[(n - 1) p_n\right] \cdots \left[(n - k + 1) p_n\right] \left(1 - n \frac{p_n}{n}\right)^{n-k}$ But $(n - j) p_n \to a$ as $n \to \infty$ for fixed $j$. Also, using a basic theorem from calculus, $\left(1 - n p_n / n\right)^{n-k} \to e^{-a}$ as $n \to \infty$.
Proof from generating functions
An easier proof uses probability generating functions. For $s \in \R$, using the same basic limit from calculus, $\left[(1 - p_n) + p_n s\right]^n \to e^{a(s - 1)} \text{ as } n \to \infty$ The left side is the PGF of the binomial distribution with parameters $n$ and $p_n$, while the right side is the PGF of the Poisson distribution with parameter $a$.
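The convergence is easy to see numerically (SciPy assumed; the limiting parameter and evaluation point are arbitrary choices).

```python
from scipy.stats import binom, poisson

a, k = 3.0, 2                                # arbitrary limit parameter and value
for n in (10, 100, 10_000):
    p = a / n                                # so that n * p_n -> a
    print(n, binom(n, p).pmf(k), poisson(a).pmf(k))
```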
The mean and variance of the binomial distribution converge to the mean and variance of the limiting Poisson distribution, respectively.
1. $n p_n \to a$ as $n \to \infty$
2. $n p_n (1 - p_n) \to a$ as $n \to \infty$
Of course the convergence of the means is precisely our basic assumption, and is further evidence that this is the essential assumption. But for a deeper look, let's return to the analogy between the Bernoulli trials process and the Poisson process. Recall that both have the strong renewal property that at each fixed time, and at each arrival time, the process stochastically starts over, independently of the past. The difference, of course, is that time is discrete in the Bernoulli trials process and continuous in the Poisson process. The convergence result is a special case of the more general fact that if we run Bernoulli trials at a faster and faster rate but with a smaller and smaller success probability, in just the right way, the Bernoulli trials process converges to the Poisson process. Specifically, suppose that we have a sequence of Bernoulli trials processes. In process $n$ we perform the trials at a rate of $n$ per unit time, with success probability $p_n$. Our basic assumption is that $n p_n \to r$ as $n \to \infty$ where $r \gt 0$. Now let $Y_{n,t}$ denote the number of successes in the time interval $(0,t]$ for Bernoulli trials process $n$, and let $N_t$ denote the number of arrivals in this interval for the Poisson process with rate $r$. Then $Y_{n,t}$ has the binomial distribution with parameters $\lfloor n t \rfloor$ and $p_n$, and of course $N_t$ has the Poisson distribution with parameter $r t$.
For $t \gt 0$, the distribution of $Y_{n,t}$ converges to the distribution of $N_t$ as $n \to \infty$.
Proof
Note that $n t - 1 \lt \lfloor n t \rfloor \le n t$ and hence $n p_n t - p_n \lt \lfloor n t \rfloor p_n \le n p_n t$. Since $n p_n \to r$ and $p_n \to 0$ as $n \to \infty$, it follows from the squeeze theorem for limits that $\lfloor n t \rfloor p_n \to r t$ as $n \to \infty$. Thus, the result follows from our previous convergence theorem.
Compare the Poisson experiment and the binomial timeline experiment.
1. Open the Poisson experiment and set $r = 1$ and $t = 5$. Run the experiment a few times and note the general behavior of the random points in time. Note also the shape and location of the probability density function and the mean$\pm$standard deviation bar.
2. Now open the binomial timeline experiment and set $n = 100$ and $p = 0.05$. Run the experiment a few times and note the general behavior of the random points in time. Note also the shape and location of the probability density function and the mean$\pm$standard deviation bar.
From a practical point of view, the convergence of the binomial distribution to the Poisson means that if the number of trials $n$ is large and the probability of success $p$ small, so that $n p^2$ is small, then the binomial distribution with parameters $n$ and $p$ is well approximated by the Poisson distribution with parameter $r = n p$. This is often a useful result, because the Poisson distribution has fewer parameters than the binomial distribution (and often in real problems, the parameters may only be known approximately). Specifically, in the approximating Poisson distribution, we do not need to know the number of trials $n$ and the probability of success $p$ individually, but only in the product $n p$. The condition that $n p^2$ be small means that the variance of the binomial distribution, namely $n p (1 - p) = n p - n p^2$ is approximately $r = n p$, the variance of the approximating Poisson distribution.
Recall that the binomial distribution can also be approximated by the normal distribution, by virtue of the central limit theorem. The normal approximation works well when $n p$ and $n (1 - p)$ are large; the rule of thumb is that both should be at least 5. The Poisson approximation works well, as we have already noted, when $n$ is large and $n p^2$ small.
Computational Exercises
Suppose that requests to a web server follow the Poisson model with rate $r = 5$ per minute. Find the probability that there will be at least 8 requests in a 2 minute period.
Answer
0.7798
Defects in a certain type of wire follow the Poisson model with rate 1.5 per meter. Find the probability that there will be no more than 4 defects in a 2 meter piece of the wire.
Answer
0.8153
Suppose that customers arrive at a service station according to the Poisson model, at a rate of $r = 4$. Find the mean and standard deviation of the number of customers in an 8 hour period.
Answer
32, 5.657
In the Poisson experiment, set $r = 5$ and $t = 4$. Run the experiment 1000 times and compute the following:
1. $\P(15 \le N_4 \le 22)$
2. The relative frequency of the event $\{15 \le N_4 \le 22\}$.
3. The normal approximation to $\P(15 \le N_4 \le 22)$.
Answer
1. 0.6157
3. 0.6025
Suppose that requests to a web server follow the Poisson model with rate $r = 5$ per minute. Compute the normal approximation to the probability that there will be at least 280 requests in a 1 hour period.
Answer
0.8818
Suppose that requests to a web server follow the Poisson model, and that 1 request comes in a five minute period. Find the probability that the request came during the first 3 minutes of the period.
Answer
0.6
Suppose that requests to a web server follow the Poisson model, and that 10 requests come during a 5 minute period. Find the probability that at least 4 requests came during the first 3 minutes of the period.
Answer
0.9452
In the Poisson experiment, set $r = 3$ and $t = 5$. Run the experiment 100 times.
1. For each run, compute the estimate of $r$ based on $N_t$.
2. Over the 100 runs, compute the average of the squares of the errors.
3. Compare the result in (b) with $\var(N_t / t)$.
Suppose that requests to a web server follow the Poisson model with unknown rate $r$ per minute. In a one hour period, the server receives 342 requests. Estimate $r$.
Answer
$r = 5.7$ per minute
In the binomial experiment, set $n = 30$ and $p = 0.1$, and run the simulation 1000 times. Compute and compare each of the following:
1. $\P(Y_{30} \le 4)$
2. The relative frequency of the event $\{Y_{30} \le 4\}$
3. The Poisson approximation to $\P(Y_{30} \le 4)$
Answer
1. 0.8245
3. 0.8153
Suppose that we have 100 memory chips, each of which is defective with probability 0.05, independently of the others. Approximate the probability that there are at least 3 defectives in the batch.
Answer
0.8753
In the binomial timeline experiment, set $n = 40$ and $p = 0.1$ and run the simulation 1000 times. Compute and compare each of the following:
1. $\P(Y_{40} \gt 5)$
2. The relative frequency of the event $\{Y_{40} \gt 5\}$
3. The Poisson approximation to $\P(Y_{40} \gt 5)$
4. The normal approximation to $\P(Y_{40} \gt 5)$
Answer
1. 0.2063
3. 0.2149
4. 0.2146
In the binomial timeline experiment, set $n = 100$ and $p = 0.1$ and run the simulation 1000 times. Compute and compare each of the following:
1. $\P(8 \lt Y_{100} \lt 15)$
2. The relative frequency of the event $\{8 \lt Y_{100} \lt 15\}$
3. The Poisson approximation to $\P(8 \lt Y_{100} \lt 15)$
4. The normal approximation to $\P(8 \lt Y_{100} \lt 15)$
Answer
1. 0.6066
3. 0.5837
4. 0.6247
A text file contains 1000 words. Assume that each word, independently of the others, is misspelled with probability $p$.
1. If $p = 0.015$, approximate the probability that the file contains at least 20 misspelled words.
2. If $p = 0.001$, approximate the probability that the file contains at least 3 misspelled words.
Answer
The true distribution of the number of misspelled words is binomial, with $n = 1000$ and $p$.
1. The normal approximation (with $\mu = n p = 15$ and $\sigma^2 = n p (1 - p) = 14.775$) is 0.120858. The Poisson approximation (with parameter $n p = 15$) is 0.124781. The true binomial probability is 0.123095.
2. The Poisson approximation (with parameter $n p = 1$) is 0.0803014. The true binomial probability is 0.0802093.
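The comparison in the last answer is easy to reproduce. The following sketch (assuming scipy is available; the helper name is mine) computes the exact binomial probability together with the Poisson and normal approximations, using a continuity correction for the latter.

```python
import math
from scipy.stats import binom, norm, poisson

def at_least(k, n, p):
    """P(at least k misspelled words): exact, Poisson approximation, normal approximation."""
    exact = binom.sf(k - 1, n, p)
    pois = poisson.sf(k - 1, n * p)
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    normal = norm.sf((k - 0.5 - mu) / sigma)    # continuity correction
    return exact, pois, normal

print(at_least(20, 1000, 0.015))   # exact ≈ 0.1231, Poisson ≈ 0.1248, normal ≈ 0.1209
print(at_least(3, 1000, 0.001))    # exact ≈ 0.0802, Poisson ≈ 0.0803
```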
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Thinning
Thinning or splitting a Poisson process refers to classifying each random point, independently, into one of a finite number of different types. The random points of a given type also form Poisson processes, and these processes are independent. Our exposition will concentrate on the case of just two types, but this case has all of the essential ideas.
The Two-Type Process
We start with a Poisson process with rate $r \gt 0$. Recall that this statement really means three interrelated stochastic processes: the sequence of inter-arrival times $\bs{X} = (X_1, X_2, \ldots)$, the sequence of arrival times $\bs{T} = (T_0, T_1, \ldots)$, and the counting process $\bs{N} = (N_t: t \ge 0)$. Suppose now that each arrival, independently of the others, is one of two types: type 1 with probability $p$ and type 0 with probability $1 - p$, where $p \in (0, 1)$ is a parameter. Here are some common examples:
• The arrivals are radioactive emissions and each emitted particle is either detected (type 1) or missed (type 0) by a counter.
• The arrivals are customers at a service station and each customer is classified as either male (type 1) or female (type 0).
We want to consider the type 1 and type 0 random points separately. For this reason, the new random process is usually referred to as thinning or splitting the original Poisson process. In some applications, the type 1 points are accepted while the type 0 points are rejected. The main result of this section is that the type 1 and type 0 points form separate Poisson processes, with rates $r p$ and $r (1-p)$ respectively, and are independent. We will explore this important result from several points of view.
Bernoulli Trials
In the previous sections, we have explored the analogy between the Bernoulli trials process and the Poisson process. Both have the strong renewal property that at each fixed time and at each arrival time, the process stochastically restarts, independently of the past. The difference, of course, is that time is discrete in the Bernoulli trials process and continuous in the Poisson process. In this section, we have both processes simultaneously, and given our previous explorations, it's perhaps not surprising that this leads to some interesting mathematics.
Thus, in addition to the processes $\bs{X}$, $\bs{T}$, and $\bs{N}$, we have a sequence of Bernoulli trials $\bs{I} = (I_1, I_2, \ldots)$ with success parameter $p$. Indicator variable $I_j$ specifies the type of the $j$th arrival. Moreover, because of our assumptions, $\bs{I}$ is independent of $\bs{X}$, $\bs{T}$, and $\bs{N}$. Recall that $V_k$, the trial number of the $k$th success, has the negative binomial distribution with parameters $k$ and $p$ for $k \in \N_+$. We take $V_0 = 0$ by convention. Also, $U_k$, the number of trials needed to go from the $(k-1)$st success to the $k$th success, has the geometric distribution with success parameter $p$ for $k \in \N_+$. Moreover, $\bs{U} = (U_1, U_2, \ldots)$ is an independent sequence and $\bs{V} = (V_0, V_1, \ldots)$ is the partial sum process associated with $\bs{U}$: \begin{align} V_k & = \sum_{i=1}^k U_i, \quad k \in \N \\ U_k & = V_k - V_{k-1}, \quad k \in \N_+ \end{align} As noted above, the Bernoulli trials process can be thought of as random points in discrete time, namely the trial numbers of the successes. With this understanding, $\bs{U}$ is the sequence of inter-arrival times and $\bs{V}$ is the sequence of arrival times.
The Inter-arrival Times
Now consider just the type 1 points in our Poisson process. The time between the arrivals of the $(k-1)$st and $k$th type 1 points is $Y_k = \sum_{i = V_{k-1} + 1}^{V_k} X_i, \quad k \in \N_+$ Note that $Y_k$ has $U_k$ terms. The next result shows that the type 1 points form a Poisson process with rate $p r$.
$\bs{Y} = (Y_1, Y_2, \ldots)$ is a sequence of independent variables, and each has the exponential distribution with rate parameter $p r$.
Proof
From the renewal properties of the Poisson process and the Bernoulli trials process, the inter-arrival times are independent and identically distributed. Each inter-arrival time is the sum of a random number of independent terms; each term has the exponential distribution with rate $r$, and the number of terms has the geometric distribution on $\N_+$ with parameter $p$. Moreover, the number of terms is independent of the terms themselves. We showed in the section on the exponential distribution that a random sum of this form has the exponential distribution with parameter $r p$.
Similarly, if $\bs{Z} = (Z_1, Z_2, \ldots)$ is the sequence of inter-arrival times for the type 0 points, then $\bs{Z}$ is a sequence of independent variables, and each has the exponential distribution with rate $(1 - p) r$. Moreover, $\bs{Y}$ and $\bs{Z}$ are independent.
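The result is easy to see in simulation. The sketch below (the parameter values and names are my own choices) thins a rate $r$ Poisson process with probability $p$ and compares the sample mean of the type 1 inter-arrival times with $1 / (p r)$.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, n = 2.0, 0.3, 100_000

X = rng.exponential(1 / r, size=n)       # inter-arrival times, rate r
T = np.cumsum(X)                         # arrival times
type1 = rng.random(n) < p                # each arrival is type 1 with probability p
Y = np.diff(T[type1], prepend=0.0)       # inter-arrival times of the type 1 points

print(Y.mean(), 1 / (p * r))             # both ≈ 1.667
```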
Counting Processes
For $t \ge 0$, let $M_t$ denote the number of type 1 arrivals in $(0, t]$ and $W_t$ the number of type 0 arrivals in $(0, t]$. Thus, $\bs{M} = (M_t: t \ge 0)$ and $\bs{W} = (W_t: t \ge 0)$ are the counting processes for the type 1 arrivals and for the type 0 arrivals. The next result follows from the previous subsection, but a direct proof is interesting.
For $t \ge 0$, $M_t$ has the Poisson distribution with parameter $p r t$, $W_t$ has the Poisson distribution with parameter $(1 - p) r t$, and $M_t$ and $W_t$ are independent.
Proof
The important observation is that the conditional distribution of $M_t$ given $N_t = n$ is binomial with parameters $n$ and $p$. Thus for $j \in \N$ and $k \in \N$, \begin{align} \P(M_t = j, W_t = k) & = \P(M_t = j, N_t = j + k) = \P(N_t = j + k) \P(M_t = j \mid N_t = j + k) \\ & = e^{-r t} \frac{(r t)^{j + k}}{(j + k)!} \frac{(j + k)!}{j! k!} p^j (1 - p)^k \\ & = e^{-p r t} \frac{(p r t)^j}{j!} e^{-(1 - p) r t} \frac{\left[(1 - p) r t\right]^k}{k!} \end{align}
In the two-type Poisson experiment vary $r$, $p$, and $t$ with the scroll bars and note the shape of the probability density functions. For various values of the parameters, run the experiment 1000 times and compare the relative frequency functions to the probability density functions.
Estimating the Number of Arrivals
Suppose that the type 1 arrivals are observable, but not the type 0 arrivals. This setting is natural, for example, if the arrivals are radioactive emissions, and the type 1 arrivals are emissions that are detected by a counter, while the type 0 arrivals are emissions that are missed. Suppose that for a given $t \gt 0$, we would like to estimate the total number of arrivals $N_t$ after observing the number of type 1 arrivals $M_t$.
The conditional distribution of $N_t$ given $M_t = k$ is the same as the distribution of $k + W_t$. $\P(N_t = n \mid M_t = k) = e^{-(1 - p) r t} \frac{\left[(1 - p) r t\right]^{n - k}}{(n - k)!}, \quad n \in \{k, k + 1, \ldots\}$
Proof
Recall from the basic splitting result above that $M_t$ and $W_t$ are independent. Thus, for $n \in \N$, \begin{align} \P(N_t = n \mid M_t = k) & = \frac{\P(N_t = n, M_t = k)}{\P(M_t = k)} = \frac{\P(M_t = k, W_t = n - k)}{\P(M_t = k)} \\ & = \frac{\P(M_t = k) \P(W_t = n - k)}{\P(M_t = k)} = \P(W_t = n - k) \end{align} The form of the probability density function follows since $W_t$ has the Poisson distribution with parameter $(1 - p) r t$.
$\E(N_t \mid M_t = k) = k + (1 - p) r t$.
Proof
This follows easily from our previous theorem since $\E(N_t \mid M_t = k) = \E(k + W_t) = k + (1 - p) r t$.
Thus, if the overall rate $r$ of the process and the probability $p$ that an arrival is type 1 are known, then it follows from the general theory of conditional expectation that the best estimator of $N_t$ based on $M_t$, in the least squares sense, is $\E(N_t \mid M_t) = M_t + (1 - p) r t$
The mean square error is $\E\left(\left[N_t - \E(N_t \mid M_t)\right]^2 \right) = (1 - p) r t$.
Proof
Note that $N_t - \E(N_t \mid M_t) = W_t - (1 - p) r$. Thus the mean square error is just $\var(W_t) = (1 - p) r t$.
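As a sanity check, the estimator and its mean square error can be simulated. The sketch below uses the parameter values from the radioactive emission exercise later in this section ($r = 100$, $p = 0.9$, $t = 5$), and it uses the conditional binomial distribution of $M_t$ given $N_t$ from the proof above to generate the detected counts.

```python
import numpy as np

rng = np.random.default_rng(1)
r, p, t, runs = 100.0, 0.9, 5.0, 20_000

N = rng.poisson(r * t, size=runs)          # total number of arrivals in (0, t]
M = rng.binomial(N, p)                     # detected (type 1) arrivals, given N
estimate = M + (1 - p) * r * t             # E(N_t | M_t)

print(np.mean((N - estimate) ** 2))        # ≈ 50
print((1 - p) * r * t)                     # 50
```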
The Multi-Type Process
As you might guess, the results above generalize from 2 types to $k$ types for general $k \in \N_+$. Once again, we start with a Poisson process with rate $r \gt 0$. Suppose that each arrival, independently of the others, is type $i$ with probability $p_i$ for $i \in \{0, 1, \ldots, k - 1\}$. Of course we must have $p_i \ge 0$ for each $i$ and $\sum_{i=0}^{k-1} p_i = 1$. Then for each $i$, the type $i$ points form a Poisson process with rate $p_i r$, and these processes are independent.
Superposition
Complementary to splitting or thinning a Poisson process is superposition: if we combine the random points in time from independent Poisson processes, then we have a new Poisson process. The rate of the new process is the sum of the rates of the processes that were combined. Once again, our exposition will concentrate on the superposition of two processes. This case contains all of the essential ideas.
Two Processes
Suppose that we have two independent Poisson processes. We will denote the sequence of inter-arrival times, the sequence of arrival times, and the counting variables for the process $i \in \{1, 2\}$ by $\bs{X}^i = \left(X_1^i, X_2^i, \ldots\right)$, $\bs{T}^i = \left(T_1^i, T_2^i, \ldots\right)$, and $\bs{N}^i = \left(N_t^i: t \in [0, \infty)\right)$, and we assume that process $i$ has rate $r_i \in (0, \infty)$. The new process that we want to consider is obtained by simply combining the random points. That is, the new random points are $\left\{T_n^1: n \in \N_+\right\} \cup \left\{T_n^2: n \in \N_+\right\}$, but of course then ordered in time. We will denote the sequence of inter-arrival times, the sequence of arrival times, and the counting variables for the new process by $\bs{X} = (X_1, X_2, \ldots)$, $\bs{T} = (T_1, T_2, \ldots)$, and $\bs{N} = \left(N_t: t \in [0, \infty)\right)$. Clearly if $A$ is an interval in $[0, \infty)$ then $N(A) = N^1(A) + N^2(A)$: the number of combined points in $A$ is simply the sum of the number of points in $A$ for processes 1 and 2. It's also worth noting that $X_1 = \min\{X^1_1, X^2_1\}$: the first arrival time for the combined process is the smaller of the first arrival times for processes 1 and 2. The other inter-arrival times, and hence also the arrival times, for the combined process are harder to describe.
The combined process is a Poisson process with rate $r_1 + r_2$.
Proof
As noted above, if $A$ is a subinterval of $[0, \infty)$ then $N(A) = N^1(A) + N^2(A)$. The first term has the Poisson distribution with parameter $r_1 \lambda(A)$, the second term has the Poisson distribution with parameter $r_2 \lambda(A)$, and the terms are independent. Hence $N(A)$ has the Poisson distribution with parameter $r_1 \lambda(A) + r_2 \lambda(A) = (r_1 + r_2) \lambda(A)$. Thus the counting process has stationary, Poisson distributed increments. Next, if $(A_1, A_2, \ldots, A_n)$ is a sequence of disjoint subintervals of $[0, \infty)$ then $\left(N(A_1), N(A_2), \ldots, N(A_n)\right) = \left(N^1(A_1) + N^2(A_1), N^1(A_2) + N^2(A_2), \ldots, N^1(A_n) + N^2(A_n) \right)$ is an independent sequence, so the counting process has independent increments.
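A simulation of the superposition is also straightforward. In the sketch below (rates and names are illustrative), the arrival times of two independent processes are merged, and the number of combined points in $(0, t]$ is compared with the mean and variance of the Poisson distribution with parameter $(r_1 + r_2) t$.

```python
import numpy as np

rng = np.random.default_rng(2)
r1, r2, t, runs = 1.5, 2.5, 3.0, 20_000
counts = np.empty(runs)

for i in range(runs):
    # 50 arrivals per process is more than enough to cover [0, t] for these rates
    T1 = np.cumsum(rng.exponential(1 / r1, size=50))   # arrival times of process 1
    T2 = np.cumsum(rng.exponential(1 / r2, size=50))   # arrival times of process 2
    T = np.sort(np.concatenate((T1, T2)))              # combined arrival times
    counts[i] = np.sum(T <= t)

print(counts.mean(), counts.var(), (r1 + r2) * t)      # all ≈ 12
```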
Computational Exercises
In the two-type Poisson experiment, set $r = 2$, $t = 3$, and $p = 0.7$. Run the experiment 1000 times. Compute the appropriate relative frequency functions and investigate empirically the independence of the number of type 1 points and the number of type 0 points.
Suppose that customers arrive at a service station according to the Poisson model, with rate $r = 20$ per hour. Moreover, each customer, independently, is female with probability 0.6 and male with probability 0.4. Find the probability that in a 2 hour period, there will be at least 20 women and at least 15 men.
Answer
0.5814
In the two-type Poisson experiment, set $r = 3$, $t = 4$, and $p = 0.8$. Run the experiment 100 times.
1. Compute the estimate of $N_t$ based on $M_t$ for each run.
2. Over the 100 runs, compute the average of the squares of the errors.
3. Compare the result in (b) with the result in Exercise 8.
Suppose that a piece of radioactive material emits particles according to the Poisson model at a rate of $r = 100$ per second. Moreover, assume that a counter detects each emitted particle, independently, with probability 0.9. Suppose that the number of detected particles in a 5 second period is 465.
1. Estimate the number of particles emitted.
2. Compute the mean square error of the estimate.
Answer
1. 515
2. 50
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
A non-homogeneous Poisson process is similar to an ordinary Poisson process, except that the average rate of arrivals is allowed to vary with time. Many applications that generate random points in time are modeled more faithfully with such non-homogeneous processes. The mathematical cost of this generalization, however, is that we lose the property of stationary increments.
Non-homogeneous Poisson processes are best described in measure-theoretic terms. Thus, you may need to review the sections on measure theory in the chapters on Foundations, Probability Measures, and Distributions. Our basic measure space in this section is $[0, \infty)$ with the $\sigma$-algebra of Borel measurable subsets (named for Émile Borel). As usual, $\lambda$ denotes Lebesgue measure on this space, named for Henri Lebesgue. Recall that the Borel $\sigma$-algebra is the one generated by the intervals, and $\lambda$ is the generalization of length on intervals.
Definition and Basic Properties
Of all of our various characterizations of the ordinary Poisson process, in terms of the inter-arrival times, the arrival times, and the counting process, the characterizations involving the counting process leads to the most natural generalization to non-homogeneous processes. Thus, consider a process that generates random points in time, and as usual, let $N_t$ denote the number of random points in the interval $(0, t]$ for $t \ge 0$, so that $\bs{N} = \{N_t: t \ge 0\}$ is the counting process. More generally, $N(A)$ denotes the number of random points in a measurable $A \subseteq [0, \infty)$, so $N$ is our random counting measure. As before, $t \mapsto N_t$ is a (random) distribution function and $A \mapsto N(A)$ is the (random) measure associated with this distribution function.
Suppose now that $r: [0, \infty) \to [0, \infty)$ is measurable, and define $m: [0, \infty) \to [0, \infty)$ by $m(t) = \int_{(0, t]} r(s) \, d\lambda(s)$ From properties of the integral, $m$ is increasing and right-continuous on $[0, \infty)$ and hence is a distribution function. The positive measure on $[0, \infty)$ associated with $m$ (which we will also denote by $m$) is defined on a measurable $A \subseteq [0, \infty)$ by $m(A) = \int_A r(s) \, d\lambda(s)$ Thus, $m(t) = m(0, t]$, and for $s, \, t \in [0, \infty)$ with $s \lt t$, $m(s, t] = m(t) - m(s)$. Finally, note that the measure $m$ is absolutely continuous with respect to $\lambda$, and $r$ is the density function. Note the parallels between the random distribution function and measure $N$ and the deterministic distribution function and measure $m$. With the setup involving $r$ and $m$ complete, we are ready for our first definition.
A process that produces random points in time is a non-homogeneous Poisson process with rate function $r$ if the counting process $N$ satisfies the following properties:
1. If $\{A_i: i \in I\}$ is a countable, disjoint collection of measurable subsets of $[0, \infty)$ then $\{N(A_i): i \in I\}$ is a collection of independent random variables.
2. If $A \subseteq [0, \infty)$ is measurable then $N(A)$ has the Poisson distribution with parameter $m(A)$.
Property (a) is our usual property of independent increments, while property (b) is a natural generalization of the property of Poisson distributed increments. Clearly, if $r$ is a positive constant, then $m(t) = r t$ for $t \in [0, \infty)$ and as a measure, $m$ is proportional to Lebesgue measure $\lambda$. In this case, the non-homogeneous process reduces to an ordinary, homogeneous Poisson process with rate $r$. However, if $r$ is not constant, then $m$ is not linear, and as a measure, is not proportional to Lebesgue measure. In this case, the process does not have stationary increments with respect to $\lambda$, but does, of course, have stationary increments with respect to $m$. That is, if $A, \, B$ are measurable subsets of $[0, \infty)$ and $\lambda(A) = \lambda(B)$ then $N(A)$ and $N(B)$ will not in general have the same distribution, but of course they will have the same distribution if $m(A) = m(B)$.
In particular, recall that the parameter of the Poisson distribution is both the mean and the variance, so $\E\left[N(A)\right] = \var\left[N(A)\right] = m(A)$ for measurable $A \subseteq [0, \infty)$, and in particular, $\E(N_t) = \var(N_t) = m(t)$ for $t \in [0, \infty)$. The function $m$ is usually called the mean function. Since $m^\prime(t) = r(t)$ (if $r$ is continuous at $t$), it makes sense to refer to $r$ as the rate function. Locally, at $t$, the arrivals are occurring at an average rate of $r(t)$ per unit time.
As before, from a modeling point of view, the property of independent increments can reasonably be evaluated. But we need something more primitive to replace the property of Poisson increments. Here is the main theorem.
A process that produces random points in time is a non-homogeneous Poisson process with rate function $r$ if and only if the counting process $\bs{N}$ satisfies the following properties:
1. If $\{A_i: i \in I\}$ is a countable, disjoint collection of measurable subsets of $[0, \infty)$ then $\{N(A_i): i \in I\}$ is a set of independent variables.
2. For $t \in [0, \infty)$, \begin{align} &\frac{\P\left[N(t, t + h] = 1\right]}{h} \to r(t) \text{ as } h \downarrow 0 \\ &\frac{\P\left[N(t, t + h] > 1\right]}{h} \to 0 \text{ as } h \downarrow 0 \end{align}
So if $h$ is small, the probability of a single arrival in $(t, t + h]$ is approximately $r(t) h$, while the probability of more than 1 arrival in this interval is negligible.
Arrival Times and Time Change
Suppose that we have a non-homogeneous Poisson process with rate function $r$, as defined above. As usual, let $T_n$ denote the time of the $n$th arrival for $n \in \N$. As with the ordinary Poisson process, we have an inverse relation between the counting process $\bs{N} = \{N_t: t \in [0, \infty)\}$ and the arrival time sequence $\bs{T} = \{T_n: n \in \N\}$, namely $T_n = \min\{t \in [0, \infty): N_t = n\}$, $N_t = \#\{n \in \N: T_n \le t\}$, and $\{T_n \le t\} = \{N_t \ge n\}$, since both events mean at least $n$ random points in $(0, t]$. The last relationship allows us to get the distribution of $T_n$.
For $n \in \N_+$, $T_n$ has probability density function $f_n$ given by $f_n(t) = \frac{m^{n-1}(t)}{(n - 1)!} r(t) e^{-m(t)}, \quad t \in [0, \infty)$
Proof
Using the inverse relationship above and the Poisson distribution of $N_t$, the distribution function of $T_n$ is $\P(T_n \le t) = \P(N_t \ge n) = \sum_{k=n}^\infty e^{-m(t)} \frac{m^k(t)}{k!}, \quad t \in [0, \infty)$ Differentiating with respect to $t$ gives $f_n(t) = \sum_{k=n}^\infty \left[-m^\prime(t) e^{-m(t)} \frac{m^k(t)}{k!} + e^{-m(t)} \frac{k m^{k-1}(t) m^\prime(t)}{k!}\right] = r(t) e^{-m(t)} \sum_{k=n}^\infty \left[\frac{m^{k-1}(t)}{(k - 1)!} - \frac{m^k(t)}{k!}\right]$ The last sum collapses to $m^{n-1}(t) \big/ (n - 1)!$.
In particular, $T_1$ has probability density function $f_1$ given by $f_1(t) = r(t) e^{-m(t)}, \quad t \in [0, \infty)$ Recall that in reliability terms, $r$ is the failure rate function, and that the reliability function is the right distribution function: $F_1^c(t) = \P(T_1 \gt t) = e^{-m(t)}, \quad t \in [0, \infty)$ In general, the functional form of $f_n$ is clearly similar to the probability density function of the gamma distribution, and indeed, $T_n$ can be transformed into a random variable with a gamma distribution. This amounts to a time change which will give us additional insight into the non-homogeneous Poisson process.
Let $U_n = m(T_n)$ for $n \in \N_+$. Then $U_n$ has the gamma distribution with shape parameter $n$ and rate parameter $1$.
Proof
Let $g_n$ denote the PDF of $U_n$. Since $m$ is strictly increasing and differentiable, we can use the standard change of variables formula. So letting $u = m(t)$, the relationship is $g_n(u) = f_n(t) \frac{dt}{du}$ Simplifying gives $g_n(u) = u^{n-1} e^{-u} \big/(n - 1)!$ for $u \in [0, \infty)$.
Thus, the time change $u = m(t)$ transforms the non-homogeneous Poisson process into a standard (rate 1) Poisson process. Here is an equivalent way to look at the time change result.
For $u \in [0, \infty)$, let $M_u = N_t$ where $t = m^{-1}(u)$. Then $\{M_u: u \in [0, \infty)\}$ is the counting process for a standard, rate 1 Poisson process.
Proof
1. Suppose that $(u_1, u_2, \ldots)$ is a sequence of points in $[0, \infty)$ with $0 \le u_1 \lt u_2 \lt \cdots$. Since $m^{-1}$ is strictly increasing, we have $0 \le t_1 \lt t_2 \lt \cdots$, where of course $t_i = m^{-1}(u_i)$. By assumption, the sequence of random variables $\left(N_{t_1}, N_{t_2} - N_{t_1}, \ldots\right)$ is independent, but this is also the sequence $\left(M_{u_1}, M_{u_2} - M_{u_1}, \ldots\right)$.
2. Suppose that $u, \, v \in [0, \infty)$ with $u \lt v$, and let $s = m^{-1}(u)$ and $t = m^{-1}(v)$. Then $s \lt t$ and so $M_v - M_u = N_t - N_s$ has the Poisson distribution with parameter $m(t) - m(s) = v - u$.
Equivalently, we can transform a standard (rate 1) Poisson process into a non-homogeneous Poisson process with a time change.
Suppose that $\bs{M} = \{M_u: u \in [0, \infty)\}$ is the counting process for a standard Poisson process, and let $N_t = M_{m(t)}$ for $t \in [0, \infty)$. Then $\{N_t: t \in [0, \infty)\}$ is the counting process for a non-homogeneous Poisson process with mean function $m$ (and rate function $r$).
Proof
1. Let $(t_1, t_2, \ldots)$ be a sequence of points in $[0, \infty)$ with $0 \le t_1 \lt t_2 \lt \cdots$. Since $m$ is strictly increasing, we have $0 \le m(t_1) \lt m(t_2) \lt \cdots$. Hence $\left(M_{m(t_1)}, M_{m(t_2)} - M_{m(t_1)}, \ldots\right)$ is a sequence of independent variables. But this sequence is simply $\left(N_{t_1}, N_{t_2} - N_{t_1}, \ldots\right)$.
2. If $s, \, t \in [0, \infty)$ with $s \lt t$, then $N_t - N_s = M_{m(t)} - M_{m(s)}$ has the Poisson distribution with parameter $m(t) - m(s)$.
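The time change gives a simple way to simulate a non-homogeneous process. As an illustration (the rate function is my own choice), take $r(t) = 2t$, so that $m(t) = t^2$ and $m^{-1}(u) = \sqrt{u}$. The sketch below generates a standard Poisson process, applies the time change, and checks that the number of points in $[0, t]$ has mean and variance $m(t)$.

```python
import numpy as np

rng = np.random.default_rng(3)
t, runs = 2.0, 20_000
counts = np.empty(runs)

for i in range(runs):
    U = np.cumsum(rng.exponential(1.0, size=100))   # arrival times of a standard (rate 1) process
    T = np.sqrt(U)                                  # time change: T_n = m^{-1}(U_n) with m(t) = t^2
    counts[i] = np.sum(T <= t)

print(counts.mean(), counts.var(), t ** 2)          # all ≈ 4
```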
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
In a compound Poisson process, each arrival in an ordinary Poisson process comes with an associated real-valued random variable that represents the value of the arrival in a sense. These variables are independent and identically distributed, and are independent of the underlying Poisson process. Our interest centers on the sum of the random variables for all the arrivals up to a fixed time $t$, which thus is a Poisson-distributed random sum of random variables. Distributions of this type are said to be compound Poisson distributions, and are important in their own right, particularly since some surprising parametric distributions turn out to be compound Poisson.
Basic Theory
Definition
Suppose we have a Poisson process with rate $r \in (0, \infty)$. As usual, we will denote the sequence of inter-arrival times by $\bs{X} = (X_1, X_2, \ldots)$, the sequence of arrival times by $\bs{T} = (T_0, T_1, T_2, \ldots)$, and the counting process by $\bs{N} = \{N_t: t \in [0, \infty)\}$. To review some of the most important facts briefly, recall that $\bs{X}$ is a sequence of independent random variables, each having the exponential distribution on $[0, \infty)$ with rate $r$. The sequence $\bs{T}$ is the partial sum sequence associated with $\bs{X}$, and has stationary independent increments. For $n \in \N_+$, the $n$th arrival time $T_n$ has the gamma distribution with parameters $n$ and $r$. The process $\bs{N}$ is the inverse of $\bs{T}$, in a certain sense, and also has stationary independent increments. For $t \in (0, \infty)$, the number of arrivals $N_t$ in $(0, t]$ has the Poisson distribution with parameter $r t$.
Suppose now that each arrival has an associated real-valued random variable that represents the value of the arrival in a certain sense. Here are some typical examples:
• The arrivals are customers at a store. Each customer spends a random amount of money.
• The arrivals are visits to a website. Each visitor spends a random amount of time at the site.
• The arrivals are failure times of a complex system. Each failure requires a random repair time.
• The arrivals are earthquakes at a particular location. Each earthquake has a random severity, a measure of the energy released.
For $n \in \N_+$, let $U_n$ denote the value of the $n$th arrival. We assume that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent, identically distributed, real-valued random variables, and that $\bs{U}$ is independent of the underlying Poisson process. The common distribution may be discrete or continuous, but in either case, we let $f$ denote the common probability density function. We will let $\mu = \E(U_n)$ denote the common mean, $\sigma^2 = \var(U_n)$ the common variance, and $G$ the common moment generating function, so that $G(s) = \E\left[\exp(s U_n)\right]$ for $s$ in some interval $I$ about 0. Here is our main definition:
The compound Poisson process associated with the given Poisson process $\bs{N}$ and the sequence $\bs{U}$ is the stochastic process $\bs{V} = \{V_t: t \in [0, \infty)\}$ where $V_t = \sum_{n=1}^{N_t} U_n$
Thus, $V_t$ is the total value for all of the arrivals in $(0, t]$. For the examples above
• $V_t$ is the total income to the store up to time $t$.
• $V_t$ is the total time spent at the site by the customers who arrived up to time $t$.
• $V_t$ is the total repair time for the failures up to time $t$.
• $V_t$ is the total energy released up to time $t$.
Recall that a sum over an empty index set is 0, so $V_0 = 0$.
Properties
Note that for fixed $t$, $V_t$ is a random sum of independent, identically distributed random variables, a topic that we have studied before. In this sense, we have a special case, since the number of terms $N_t$ has the Poisson distribution with parameter $r t$. But we also have a new wrinkle, since the process is indexed by the continuous time parameter $t$, and so we can study its properties as a stochastic process. Our first result is a pair of properties shared by the underlying Poisson process.
$\bs{V}$ has stationary, independent increments:
1. If $s, \, t \in [0, \infty)$ with $s \lt t$, then $V_t - V_s$ has the same distribution as $V_{t - s}$.
2. If $(t_1, t_2, \ldots, t_n)$ is a sequence of points in $[0, \infty)$ with $t_1 \lt t_2 \lt \cdots \lt t_n$ then $\left(V_{t_1}, V_{t_2} - V_{t_1}, \ldots, V_{t_n} - V_{t_{n-1}}\right)$ is a sequence of independent variables.
Proof
1. For $0 \le s \lt t$, $V_t - V_s = \sum_{i = 1}^{N_t} U_i - \sum_{i = 1}^{N_s} U_i = \sum_{i = N_s + 1}^{N_t} U_i$ The number of terms in the last sum is $N_t - N_s$, which has the same distribution as $N_{t - s}$. Since the variables in the sequence $\bs{U}$ are identically distributed, it follows that $V_t - V_s$ has the same distribution as $V_{t - s}$.
2. Suppose that $0 \le t_1 \lt t_2 \lt \cdots \lt t_n$ and let $t_0 = 0$. Then for $i \in \{1, 2, \ldots, n\}$, as in (a), $V_{t_i} - V_{t_{i-1}} = \sum_{j = N_{t_{i-1}} + 1}^{N_{t_i}} U_j$ The number of terms in this sum is $N_{t_i} - N_{t_{i-1}}$. Since $\bs{N}$ has independent increments, the variables in $\bs{U}$ are independent, and the index sets $\{N_{t_{i-1}} + 1, \ldots, N_{t_i}\}$ are disjoint over $i \in \{1, 2, \ldots, n\}$, it follows that the random variables $V_{t_i} - V_{t_{i-1}}$ are independent over $i \in \{1, 2, \ldots, n\}$.
Next we consider various moments of the compound process.
For $t \in [0, \infty)$, the mean and variance of $V_t$ are
1. $\E(V_t) = \mu r t$
2. $\var(V_t) = (\mu^2 + \sigma^2) r t$
Proof
Again, these are special cases of general results for random sums of IID variables, but we give separate proofs for completeness. The basic tools are conditional expected value and conditional variance. Recall also that $\E(N_t) = \var(N_t) = r t$.
1. Note that $\E(V_t) = \E\left[\E(V_t \mid N_t)\right] = \E(\mu N_t) = \mu r t$.
2. Similarly, note that $\var(V_t \mid N_t) = \sigma^2 N_t$ and hence $\var(V_t) = \E\left[\var(V_t \mid N_t)\right] + \var\left[\E(V_t \mid N_t)\right] = \E(\sigma^2 N_t) + \var(\mu N_t) = \sigma^2 r t + \mu^2 r t$.
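These formulas are easy to verify by simulation. In the sketch below (all parameter choices are illustrative), the values $U_i$ are gamma distributed with mean $\mu = 1.5$ and variance $\sigma^2 = 0.25$, and the sample mean and variance of $V_t$ are compared with $\mu r t$ and $(\mu^2 + \sigma^2) r t$.

```python
import numpy as np

rng = np.random.default_rng(4)
r, t, runs = 3.0, 2.0, 50_000
mu, sigma2 = 1.5, 0.25                 # mean and variance of the values U_i

N = rng.poisson(r * t, size=runs)      # number of arrivals in (0, t]
# gamma with shape 9 and scale 1/6 has mean 1.5 and variance 0.25
V = np.array([rng.gamma(9.0, 1 / 6, size=n).sum() for n in N])

print(V.mean(), mu * r * t)                   # both ≈ 9
print(V.var(), (mu ** 2 + sigma2) * r * t)    # both ≈ 15
```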
For $t \in [0, \infty)$, the moment generating function of $V_t$ is given by $\E\left[\exp(s V_t)\right] = \exp\left(r t \left[G(s) - 1\right]\right), \quad s \in I$
Proof
Again, this is a special case of the more general result for random sums of IID variables, but we give another proof for completeness. As with the last theorem, the key is to condition on $N_t$ and recall that the MGF of a sum of independent variables is the product of the MGFs. Thus $\E\left[\exp(s V_t)\right] = \E\left(\E\left[\exp(s V_t) \mid N_t\right]\right) = \E\left[G^{N_t}(s)\right] = P_t\left[G(s)\right]$ where $P_t$ is the probability generating function of $N_t$. But we know from our study of the Poisson distribution that $P_t(x) = \exp\left[r t (x - 1)\right]$ for $x \in \R$.
By exactly the same argument, the same relationship holds for characteristic functions and, in the case that the variables in $\bs{U}$ take values in $\N$, for probability generating functions. That is, if the variables in $\bs{U}$ have generating function $G$, then the generating function $H$ of $V_t$ is given by $H(s) = \exp(r t [G(s) - 1])$ for $s$ in the domain of $G$, where generating function can be any of the three types we have discussed: probability, moment, or characteristic.
Examples and Special Cases
The Discrete Case
First we note that Thinning a Poisson process can be thought of as a special case of a compound Poisson process. Thus, suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a Bernoulli trials sequence with success parameter $p \in (0, 1)$, and as above, that $\bs{U}$ is independent of the Poisson process $\bs{N}$. In the usual language of thinning, the arrivals are of two types (1 and 0), and $U_i$ is the type of the $i$th arrival. Thus the compound process $\bs{V}$ constructed above is the thinned process, so that $V_t$ is the number of type 1 points up to time $t$. We know that $\bs{V}$ is also a Poisson process, with rate $r p$.
The results above for thinning generalize to the case where the values of the arrivals have a discrete distribution. Thus, suppose $U_i$ takes values in a countable set $S \subseteq \R$, and as before, let $f$ denote the common probability density function so that $f(u) = \P(U_i = u)$ for $u \in S$ and $i \in \N_+$. For $u \in S$, let $N^u_t$ denote the number of arrivals up to time $t$ that have the value $u$, and let $\bs{N}^u = \left\{N^u_t: t \in [0, \infty)\right\}$ denote the corresponding stochastic process. Armed with this setup, here is the result:
The compound Poisson process $\bs{V}$ associated with $\bs{N}$ and $\bs{U}$ can be written in the form $V_t = \sum_{u \in S} u N^u_t, \quad t \in [0, \infty)$ The processes $\{\bs{N}^u: u \in S\}$ are independent Poisson processes, and $\bs{N}^u$ has rate $r f(u)$ for $u \in S$.
Proof
Note that $U_i = \sum_{u \in S} u \bs{1}(U_i = u)$ and hence $V_t = \sum_{i = 1}^{N_t} U_i = \sum_{i = 1}^{N_t} \sum_{u \in S} u \bs{1}(U_i = u) = \sum_{u \in S} u \sum_{i = 1}^{N_t} \bs{1}(U_i = u) = \sum_{u \in S} u N^u_t$ The fact that $\{\bs{N}^u: u \in S\}$ are independent Poisson processes, and that $\bs{N}^u$ has rate $r f(u)$ for $u \in S$ follows from our result on thinning.
Compound Poisson Distributions
A compound Poisson random variable can be defined outside of the context of a Poisson process. Here is the formal definition:
Suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent, identically distributed random variables, and that $N$ is independent of $\bs{U}$ and has the Poisson distribution with parameter $\lambda \in (0, \infty)$. Then $V = \sum_{i=1}^N U_i$ has a compound Poisson distribution.
But in fact, compound Poisson variables usually do arise in the context of an underlying Poisson process. In any event, the results on the mean and variance above and the generating function above hold with $r t$ replaced by $\lambda$. Compound Poisson distributions are infinitely divisible. A famous theorem of William Feller gives a partial converse: an infinitely divisible distribution on $\N$ must be compound Poisson.
The negative binomial distribution on $\N$ is infinitely divisible, and hence must be compound Poisson. Here is the construction:
Let $p, \, k \in (0, \infty)$. Suppose that $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent variables, each having the logarithmic series distribution with shape parameter $1 - p$. Suppose also that $N$ is independent of $\bs{U}$ and has the Poisson distribution with parameter $- k \ln(p)$. Then $V = \sum_{i=1}^N U_i$ has the negative binomial distribution on $\N$ with parameters $k$ and $p$.
Proof
As noted above, the probability generating function of $V$ is $P(t) = \exp\left( \lambda [Q(t) - 1]\right)$ where $\lambda$ is the parameter of the Poisson variable $N$ and $Q(t)$ is the common PGF of the terms in the sum. Using the PGF of the logarithmic series distribution, and the particular values of the parameters, we have $P(t) = \exp \left[-k \ln(p) \left(\frac{\ln[1 - (1 - p)t]}{\ln(p)} - 1\right)\right], \quad \left|t\right| \lt \frac{1}{1 - p}$ Using properties of logarithms and simple algebra, this reduces to $P(t) = \left(\frac{p}{1 - (1 - p)t}\right)^k, \quad \left|t\right| \lt \frac{1}{1 - p}$ which is the PGF of the negative binomial distribution with parameters $k$ and $p$.
As a special case ($k = 1$), it follows that the geometric distribution on $\N$ is also compound Poisson.
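A simulation makes the construction concrete. The sketch below (parameter values are illustrative) generates $V$ as a Poisson-distributed sum of logarithmic series variables and compares the empirical distribution with the negative binomial probability density function; note that numpy's logseries sampler is parameterized by $1 - p$ in the notation used here.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(5)
k, p, runs = 2, 0.4, 100_000
lam = -k * np.log(p)                               # Poisson parameter -k ln(p)

N = rng.poisson(lam, size=runs)
V = np.array([rng.logseries(1 - p, size=n).sum() for n in N])

for v in range(5):
    print(v, np.mean(V == v), nbinom.pmf(v, k, p))  # empirical frequency vs negative binomial pmf
```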
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\skw}{\text{skew}}$ $\newcommand{\kur}{\text{kurt}}$
Basic Theory
The Process
So far, we have studied the Poisson process as a model for random points in time. However there is also a Poisson model for random points in space. Some specific examples of such random points are
• Defects in a sheet of material.
• Raisins in a cake.
• Stars in the sky.
The Poisson process for random points in space can be defined in a very general setting. All that is really needed is a measure space $(S, \mathscr{S}, \mu)$. Thus, $S$ is a set (the underlying space for our random points), $\mathscr{S}$ is a $\sigma$-algebra of subsets of $S$ (as always, the allowable sets), and $\mu$ is a positive measure on $(S, \mathscr{S})$ (a measure of the size of sets). The most important special case is when $S$ is a (Lebesgue) measurable subset of $\R^d$ for some $d \in \N_+$, $\mathscr{S}$ is the $\sigma$-algebra of measurable subsets of $S$, and $\mu = \lambda_d$ is $d$-dimensional Lebesgue measure. Specializing further, recall the lower dimensional spaces:
1. When $d = 1$, $S \subseteq \R$ and $\lambda_1$ is length measure.
2. When $d = 2$, $S \subseteq \R^2$ and $\lambda_2$ is area measure.
3. When $d = 3$, $S \subseteq \R^3$ and $\lambda_3$ is volume measure.
Of course, the characterization of the Poisson process on $[0, \infty)$ in terms of the inter-arrival times and the characterization in terms of the arrival times do not generalize, because they depend critically on the order relation on $[0, \infty)$. However the characterization in terms of the counting process generalizes perfectly to our new setting. Thus, consider a process that produces random points in $S$, and as usual, let $N(A)$ denote the number of random points in $A \in \mathscr{S}$. Thus $N$ is a random counting measure on $(S, \mathscr{S})$.
The random measure $N$ is a Poisson process or a Poisson random measure on $S$ with density parameter $r \gt 0$ if the following axioms are satisfied:
1. If $A \in \mathscr{S}$ then $N(A)$ has the Poisson distribution with parameter $r \mu(A)$.
2. If $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$ then $\{N(A_i): i \in I\}$ is a set of independent random variables.
To draw parallels with the Poisson process on $[0, \infty)$, note that axiom (a) is the generalization of stationary, Poisson-distributed increments, and axiom (b) is the generalization of independent increments. By convention, if $\mu(A) = 0$ then $N(A) = 0$ with probability 1, and if $\mu(A) = \infty$ then $N(A) = \infty$ with probability 1. (These distributions are considered degenerate members of the Poisson family.) On the other hand, note that if $0 \lt \mu(A) \lt \infty$ then $N(A)$ has support $\N$.
In the two-dimensional Poisson process, vary the width $w$ and the rate $r$. Note the location and shape of the probability density function of $N$. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
For $A \in \mathscr{S}$
1. $\E\left[N(A)\right] = r \mu(A)$
2. $\var\left[N(A)\right] = r \mu(A)$
Proof
These results follow, of course, from our previous study of the Poisson distribution. Recall that the parameter of the Poisson distribution is both the mean and the variance.
In particular, $r$ can be interpreted as the expected density of the random points (that is, the expected number of points in a region of unit size), justifying the name of the parameter.
In the two-dimensional Poisson process, vary the width $w$ and the density parameter $r$. Note the size and location of the mean$\pm$standard deviation bar of $N$. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
The Distribution of the Random Points
As before, the Poisson model defines the most random way to distribute points in space, in a certain sense. Assume that we have a Poisson process $N$ on $(S, \mathscr{S}, \mu)$ with density parameter $r \in (0, \infty)$.
Given that $A \in \mathscr{S}$ contains exactly one random point, the position $X$ of the point is uniformly distributed on $A$.
Proof
For $B \in \mathscr{S}$ with $B \subseteq A$, $\P\left[N(B) = 1 \mid N(A) = 1\right] = \frac{\P\left[N(B) = 1, N(A) = 1\right]}{\P\left[N(A) = 1\right]} = \frac{\P\left[N(B) = 1, N(A \setminus B) = 0\right]}{\P\left[N(A) = 1\right]} = \frac{\P\left[N(B) = 1\right] \P\left[N(A \setminus B) = 0\right]}{\P\left[N(A) = 1\right]}$ Using the Poisson distributions we have $\P\left[N(B) = 1 \mid N(A) = 1\right] = \frac{\exp\left[-r \mu(B)\right] \left[r \mu(B)\right] \exp\left[-r \mu(A \setminus B)\right]} {\exp\left[-r \mu(A)\right] \left[r \mu(A)\right]} = \frac{\mu(B)}{\mu(A)}$ As a function of $B$, this is the uniform distribution on $A$ (with respect to $\mu$).
More generally, if $A$ contains $n$ points, then the positions of the points are independent and each is uniformly distributed in $A$.
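The last two results give a standard recipe for simulating the process on a set $A$ with finite measure: generate the number of points from the Poisson distribution with parameter $r \mu(A)$ and then place that many independent, uniformly distributed points in $A$. A minimal sketch for a rectangle in $\R^2$ (the dimensions and names are my own):

```python
import numpy as np

rng = np.random.default_rng(6)
r, w, h = 5.0, 2.0, 3.0                                  # density and rectangle dimensions

n = rng.poisson(r * w * h)                               # number of points in the rectangle
points = rng.uniform((0.0, 0.0), (w, h), size=(n, 2))    # given n, the points are iid uniform

# the count in the unit square [0,1] x [0,1] has the Poisson distribution with parameter r
in_A = np.sum((points[:, 0] <= 1.0) & (points[:, 1] <= 1.0))
print(n, in_A)
```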
Suppose that $A, \, B \in \mathscr{S}$ and $B \subseteq A$. For $n \in \N_+$, the conditional distribution of $N(B)$ given $N(A) = n$ is the binomial distribution with trial parameter $n$ and success parameter $p = \mu(B) \big/ \mu(A)$.
Proof
For $k \in \{0, 1, \ldots, n\}$, $\P\left[N(B) = k \mid N(A) = n\right] = \frac{\P\left[N(B) = k, N(A) = n\right]}{\P\left[N(A) = n\right]} = \frac{\P\left[N(B) = k, N(A \setminus B) = n - k\right]}{\P\left[N(A) = n\right]} = \frac{\P\left[N(B) = k\right] \P\left[N(A \setminus B) = n - k\right]}{\P\left[N(A) = n\right]}$ Using the Poisson distribtuions, $\P\left[N(B) = k \mid N(A) = n\right] = \frac{\exp\left[-r \mu(B)\right] \left(\left[r \mu(B)\right]^k \big/ k!\right) \exp\left[-r \mu(A \setminus B)\right] \left(\left[r \mu(A \setminus B)\right]^{n-k} \big/ (n - k)!\right)}{\exp\left[-r \mu(A)\right] \left[r \mu(A)\right]^n \big/ n!}$ Canceling factors and letting $p = \mu(B) \big/ \mu(A)$, we have $\P\left[N(B) = k \mid N(A) = n\right] = \frac{n!}{k! (n-k)!} p^k (1 - p)^{n-k}$
Thus, given $N(A) = n$, each of the $n$ random points falls into $B$, independently, with probability $p = \mu(B) \big/ \mu(A)$, regardless of the density parameter $r$.
More generally, suppose that $A \in \mathscr{S}$ and that $A$ is partitioned into $k$ subsets $(B_1, B_2, \ldots, B_k)$ in $\mathscr{S}$. Then the conditional distribution of $\left(N(B_1), N(B_2), \ldots, N(B_k)\right)$ given $N(A) = n$ is the multinomial distribution with parameters $n$ and $(p_1, p_2, \ldots p_k)$, where $p_i = \mu(B_i) \big/ \mu(A)$ for $i \in \{1, 2, \ldots, k\}$.
Thinning and Combining
Suppose that $N$ is a Poisson random process on $(S, \mathscr{S}, \mu)$ with density parameter $r \in (0, \infty)$. Thinning (or splitting) this process works just like thinning the Poisson process on $[0, \infty)$. Specifically, suppose that each random point, independently of the others, is either type 1 with probability $p$ or type 0 with probability $1 - p$, where $p \in (0, 1)$ is a new parameter. Let $N_1$ and $N_0$ denote the random counting measures associated with the type 1 and type 0 points, respectively. That is, $N_i(A)$ is the number of type $i$ random points in $A$, for $A \in \mathscr{S}$ and $i \in \{0, 1\}$.
$N_1$ and $N_0$ are independent Poisson processes on $(S, \mathscr{S}, \mu)$ with density parameters $p r$ and $(1 - p) r$, respectively.
Proof
The proof is like the one for the Poisson process on $[0, \infty)$. For $j, \; k \in \N$, $\P\left[N_0(A) = j, N_1(A) = k\right] = \P\left[N_1(A) = k, N(A) = j + k\right] = \P\left[N(A) = j + k\right] \P\left[N_1(A) = k \mid N(A) = j + k\right]$ But given $N(A) = n$, the number of type 1 points $N_1(A)$ has the binomial distribution with parameters $n$ and $p$. Hence letting $t = \mu(A)$ to simplify the notation, we have $\P\left[N_0(A) = j, N_1(A) = k\right] = e^{-r t} \frac{(r t)^{j+k}}{(j + k)!} \frac{(j + k)!}{j! k!} p^k (1 - p)^j = e^{-p r t} \frac{(p r t)^k}{k!} e^{-(1 - p) r t} \frac{\left[(1 - p) r t\right]^j}{j!}$ It follows from the factorization theorem that $N_1(A)$ has the Poisson distribution with parameter $p r \mu(A)$, $N_0(A)$ has the Poisson distribution with parameter $(1 - p) r \mu(A)$, and $N_0(A)$ and $N_1(A)$ are independent. Next suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$. Then $\{N_0(A_i): i \in I\}$ and $\{N_1(A_i): i \in I\}$ are each independent sets of random variables, and the two sets are independent of each other.
This result extends naturally to $k \in \N_+$ types. As in the standard case, combining independent Poisson processes produces a new Poisson process, and the density parameters add.
Suppose that $N_0$ and $N_1$ are independent Poisson processes on $(S, \mathscr{S}, \mu)$, with density parameters $r_0$ and $r_1$, respectively. Then the process obtained by combining the random points is also a Poisson process on $(S, \mathscr{S}, \mu)$ with density parameter $r_0 + r_1$.
Proof
The new random measure, of course, is simply $N = N_0 + N_1$. Thus for $A \in \mathscr{S}$, $N(A) = N_0(A) + N_1(A)$. But $N_i(A)$ has the Poisson distribution with parameter $r_i \mu(A)$ for $i \in \{0, 1\}$, and the variables are independent, so $N(A)$ has the Poisson distribution with parameter $r_0 \mu(A) + r_1 \mu(A) = (r_0 + r_1)\mu(A)$. Next suppose that $\{A_i: i \in I\}$ is a countable, disjoint collection of sets in $\mathscr{S}$. Then $\{N(A_i): i \in I\} = \left\{N_0(A_i) + N_1(A_i): i \in I\right\}$ is a set of independent random variables.
Applications and Special Cases
Non-homogeneous Poisson Processes
A non-homogeneous Poisson process on $[0, \infty)$ can be thought of simply as a Poisson process on $[0, \infty)$ with respect to a measure that is not the standard Lebesgue measure $\lambda_1$ on $[0, \infty)$. Thus suppose that $r: [0, \infty) \to (0, \infty)$ is piece-wise continuous with $\int_0^\infty r(t) \, dt = \infty$, and let $m(t) = \int_0^t r(s) \, ds, \quad t \in [0, \infty)$ Consider the non-homogeneous Poisson process with rate function $r$ (and hence mean function $m$). Recall that the Lebesgue-Stieltjes measure on $[0, \infty)$ associated with $m$ (which we also denote by $m$) is defined by the condition $m(a, b] = m(b) - m(a), \quad a, \, b \in [0, \infty), \; a \lt b$ Equivalently, $m$ is the measure that is absolutely continuous with respect to $\lambda_1$, with density function $r$. That is, if $A$ is a measurable subset of $[0, \infty)$ then $m(A) = \int_A r(t) \, dt$
The non-homogeneous Poisson process on $[0, \infty)$ with rate function $r$ is the Poisson process on $[0, \infty)$ with respect to the measure $m$.
Proof
This follows directly from the definitions. If $N$ denotes the counting process associated with the non-homogeneous Poisson process, then $N$ has independent increments, and for $s, \, t \in [0, \infty)$ with $s \lt t$, $N(s, t]$ has the Poisson distribution with parameter $m(t) - m(s) = m(s, t]$.
Nearest Points in $\R^d$
In this subsection, we consider a rather specialized topic, but one that is fun and interesting. Consider the Poisson process on $\left(\R^d, \mathscr{R}_d, \lambda_d\right)$ with density parameter $r \gt 0$, where as usual, $\mathscr{R}_d$ is the $\sigma$-algebra of Lebesgue measurable subsets of $\R^d$, and $\lambda_d$ is $d$-dimensional Lebesgue measure. We use the usual Euclidean norm on $\R^d$: $\|\bs{x}\|_d = \left(x_1^2 + x_2^2 + \cdots + x_d^2\right)^{1/2}, \quad \bs{x} = (x_1, x_2, \ldots, x_d) \in \R^d$ For $t \gt 0$, let $B_t = \left\{\bs{x} \in \R^d: \|\bs{x}\|_d \le t\right\}$ denote the ball of radius $t$ centered at the origin. Recall that $\lambda_d(B_t) = c_d t^d$ where $c_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)}$ is the measure of the unit ball in $\R^d$, and where $\Gamma$ is the gamma function. Of course, $c_1 = 2$, $c_2 = \pi$, $c_3 = 4 \pi / 3$.
For $t \ge 0$, let $M_t = N(B_t)$, the number of random points in the ball $B_t$, or equivalently, the number of random points within distance $t$ of the origin. From our formula for the measure of $B_t$ above, it follows that $M_t$ has the Poisson distribution with parameter $r c_d t^d$.
Now let $Z_0 = 0$ and for $n \in \N_+$ let $Z_n$ denote the distance of the $n$th closest random point to the origin. Note that $Z_n$ is analogous to the $n$th arrival time for the Poisson process on $[0, \infty)$. Clearly the processes $\bs{M} = (M_t: t \ge 0)$ and $\bs{Z} = (Z_0, Z_1, \ldots)$ are inverses of each other in the sense that $Z_n \le t$ if and only if $M_t \ge n$. Both of these events mean that there are at least $n$ random points within distance $t$ of the origin.
Distributions
1. $c_d Z_n^d$ has the gamma distribution with shape parameter $n$ and rate parameter $r$.
2. $Z_n$ has probability density function $g_n$ given by $g_n(z) = \frac{d \left(c_d r\right)^n z^{n d - 1}}{(n-1)!} \exp\left(-r c_d z^d\right), \quad 0 \le z \lt \infty$
Proof
Let $T_n = c_d Z_n^d$.
1. From the inverse relationship above, $\P(T_n \le t) = \P\left[Z_n \le \left(t / c_d\right)^{1/d}\right] = \P\left\{M\left[\left(t / c_d\right)^{1/d}\right] \ge n\right\}$ But $M\left[\left(t / c_d\right)^{1/d}\right]$ has the Poisson distribution with parameter $r c_d \left[\left(t / c_d\right)^{1/d}\right]^d = r t$ so $\P(T_n \le t) = \sum_{k=n}^\infty e^{-r t} \frac{(r t)^k}{k!}$ which we know is the gamma CDF with parameters $n$ and $r$
2. Let $f_n$ denote the gamma PDF with parameters $n$ and $r$ and let $t = c_d z^d$. From the standard change of variables formula, $g_n(z) = f_n(t) \frac{dt}{dz}$ Substituting and simplifying gives the result.
The variables $c_d Z_n^d - c_d Z_{n-1}^d$ for $n \in \N_+$ are independent, and each has the exponential distribution with rate parameter $r$.
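Here is a quick simulation check of the nearest-point distances in the plane ($d = 2$, so $c_2 = \pi$). The sketch simulates the process on a large square containing the origin (large enough that edge effects are negligible) and compares the sample mean of $\pi Z_n^2$ with the gamma mean $n / r$; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
r, R, n, runs = 2.0, 5.0, 3, 10_000       # density, half-width of the square, n-th nearest point
vals = np.empty(runs)

for i in range(runs):
    m = rng.poisson(r * (2 * R) ** 2)               # number of points in the square [-R, R]^2
    x = rng.uniform(-R, R, size=(m, 2))             # given m, the points are iid uniform
    d = np.sort(np.hypot(x[:, 0], x[:, 1]))         # ordered distances to the origin
    vals[i] = np.pi * d[n - 1] ** 2                 # c_2 Z_n^2

print(vals.mean(), n / r)                            # both ≈ 1.5
```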
Computational Exercises
Suppose that defects in a sheet of material follow the Poisson model with an average of 1 defect per 2 square meters. Consider a 5 square meter sheet of material.
1. Find the probability that there will be at least 3 defects.
2. Find the mean and standard deviation of the number of defects.
Answer
1. 0.4562
2. 2.5, 1.581
Suppose that raisins in a cake follow the Poisson model with an average of 2 raisins per cubic inch. Consider a slab of cake that measures 3 by 4 by 1 inches.
1. Find the probability that there will be no more than 20 raisins.
2. Find the mean and standard deviation of the number of raisins.
Answer
1. 0.2426
2. 24, 4.899
Suppose that the occurrence of trees in a forest of a certain type that exceed a certain critical size follows the Poisson model. In a one-half square mile region of the forest there are 40 trees that exceed the specified size.
1. Estimate the density parameter.
2. Using the estimated density parameter, find the probability of finding at least 100 trees that exceed the specified size in a square mile region of the forest
Answer
1. $r = 80$ per square mile
2. 0.0171
Suppose that defects in a type of material follow the Poisson model. It is known that a square sheet with side length 2 meters contains one defect. Find the probability that the defect is in a circular region of the material with radius $\frac{1}{4}$ meter.
Answer
0.0491
Suppose that raisins in a cake follow the Poisson model. A 6 cubic inch piece of the cake with 20 raisins is divided into 3 equal parts. Find the probability that each piece has at least 6 raisins.
Answer
0.2146
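The answer can be reproduced from the multinomial result above: given 20 raisins in the cake, the counts in the three equal pieces are multinomial with parameters 20 and $(1/3, 1/3, 1/3)$. A short check with scipy:

```python
from scipy.stats import multinomial

# sum the multinomial pmf over all counts (i, j, 20 - i - j) with every part at least 6
total = sum(
    multinomial.pmf([i, j, 20 - i - j], n=20, p=[1/3, 1/3, 1/3])
    for i in range(6, 21) for j in range(6, 21 - i) if 20 - i - j >= 6
)
print(total)   # ≈ 0.2146
```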
Suppose that defects in a sheet of material follow the Poisson model, with an average of 5 defects per square meter. Each defect, independently of the others is mild with probability 0.5, moderate with probability 0.3, or severe with probability 0.2. Consider a circular piece of the material with radius 1 meter.
1. Give the mean and standard deviation of the number of defects of each type in the piece.
2. Find the probability that there will be at least 2 defects of each type in the piece.
Answer
1. Mild: 7.854, 2.802; Moderate: 4.712, 2.171; Severe: 3.142, 1.772
2. 0.7762
A renewal process is an idealized stochastic model for events that occur randomly in time (generically called renewals or arrivals). The basic mathematical assumption is that the times between the successive arrivals are independent and identically distributed. Renewal processes have a very rich and interesting mathematical structure and can be used as a foundation for building more realistic models. Moreover, renewal processes are often found embedded in other stochastic processes, most notably Markov chains.
15: Renewal Processes
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
A renewal process is an idealized stochastic model for events that occur randomly in time. These temporal events are generically referred to as renewals or arrivals. Here are some typical interpretations and applications.
• The arrivals are customers arriving at a service station. Again, the terms are generic. A customer might be a person and the service station a store, but also a customer might be a file request and the service station a web server.
• A device is placed in service and eventually fails. It is replaced by a device of the same type and the process is repeated. We do not count the replacement time in our analysis; equivalently we can assume that the replacement is immediate. The times of the replacements are the renewals
• The arrivals are times of some natural event, such as a lightning strike, a tornado, or an earthquake, at a particular geographical point.
• The arrivals are emissions of elementary particles from a radioactive source.
Basic Processes
The basic model actually gives rise to several interrelated random processes: the sequence of interarrival times, the sequence of arrival times, and the counting process. The term renewal process can refer to any (or all) of these. There are also several natural age processes that arise. In this section we will define and study the basic properties of each of these processes in turn.
Interarrival Times
Let $X_1$ denote the time of the first arrival, and $X_i$ the time between the $(i - 1)$st and $i$th arrivals for $i \in \{2, 3, \ldots\}$. Our basic assumption is that the sequence of interarrival times $\bs{X} = (X_1, X_2, \ldots)$ is an independent, identically distributed sequence of random variables. In statistical terms, $\bs{X}$ corresponds to sampling from the distribution of a generic interarrival time $X$. We assume that $X$ takes values in $[0, \infty)$ and $\P(X \gt 0) \gt 0$, so that the interarrival times are nonnegative, but not identically 0. Let $\mu = \E(X)$ denote the common mean of the interarrival times. We allow the possibility that $\mu = \infty$. On the other hand,
$\mu \gt 0$.
Proof
This is a basic fact from properties of expected value. For a simple proof, note that if $\mu = 0$ then $\P(X \gt x) = 0$ for every $x \gt 0$ by Markov's inequality. But then $\P(X = 0) = 1$, contradicting the assumption that $\P(X \gt 0) \gt 0$.
If $\mu \lt \infty$, we will let $\sigma^2 = \var(X)$ denote the common variance of the interarrival times. Let $F$ denote the common distribution function of the interarrival times, so that $F(x) = \P(X \le x), \quad x \in [0, \infty)$ The distribution function $F$ turns out to be of fundamental importance in the study of renewal processes. We will let $f$ denote the probability density function of the interarrival times if the distribution is discrete or if the distribution is continuous and has a probability density function (that is, if the distribution is absolutely continuous with respect to Lebesgue measure on $[0, \infty)$). In the discrete case, the following definition turns out to be important:
If $X$ takes values in the set $\{n d: n \in \N\}$ for some $d \in (0, \infty)$, then $X$ (or its distribution) is said to be arithmetic (the terms lattice and periodic are also used). The largest such $d$ is the span of $X$.
The reason the definition is important is because the limiting behavior of renewal processes turns out to be more complicated when the interarrival distribution is arithmetic.
The Arrival Times
Let $T_n = \sum_{i=1}^n X_i, \quad n \in \N$ We follow our usual convention that the sum over an empty index set is 0; thus $T_0 = 0$. On the other hand, $T_n$ is the time of the $n$th arrival for $n \in \N_+$. The sequence $\bs{T} = (T_0, T_1, \ldots)$ is called the arrival time process, although note that $T_0$ is not considered an arrival. A renewal process is so named because the process starts over, independently of the past, at each arrival time.
The sequence $\bs{T}$ is the partial sum process associated with the independent, identically distributed sequence of interarrival times $\bs{X}$. Partial sum processes associated with independent, identically distributed sequences have been studied in several places in this project. In the remainder of this subsection, we will collect some of the more important facts about such processes. First, we can recover the interarrival times from the arrival times: $X_i = T_i - T_{i-1}, \quad i \in \N_+$ Next, let $F_n$ denote the distribution function of $T_n$, so that $F_n(t) = \P(T_n \le t), \quad t \in [0, \infty)$ Recall that if $X$ has probability density function $f$ (in either the discrete or continuous case), then $T_n$ has probability density function $f_n = f^{*n} = f * f * \cdots * f$, the $n$-fold convolution power of $f$.
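For a concrete illustration of the convolution formula, suppose the interarrival distribution is discrete. The sketch below (the distribution itself is a made-up example) computes the probability density function of $T_3$ as the 3-fold convolution power of $f$ using numpy.

```python
import numpy as np

# hypothetical interarrival density: P(X = k) for k = 0, 1, 2, 3
f = np.array([0.1, 0.4, 0.3, 0.2])

f3 = f.copy()
for _ in range(2):
    f3 = np.convolve(f3, f)      # f3 becomes f * f * f, the density of T_3

print(f3)                        # P(T_3 = k) for k = 0, 1, ..., 9
print(f3.sum())                  # 1.0 (up to rounding)
```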
The sequence of arrival times $\bs{T}$ has stationary, independent increments:
1. If $m \le n$ then $T_n - T_m$ has the same distribution as $T_{n-m}$ and thus has distribution function $F_{n-m}$
2. If $n_1 \le n_2 \le n_3 \le \cdots$ then $\left(T_{n_1}, T_{n_2} - T_{n_1}, T_{n_3} - T_{n_2}, \ldots\right)$ is a sequence of independent random variables.
Proof
Recall that these are properties that hold generally for the partial sum sequence associated with a sequence of IID variables.
If $n, \, m \in \N$ then
1. $\E\left(T_n\right) = n \mu$
2. $\var\left(T_n\right) = n \sigma^2$
3. $\cov\left(T_m, T_n\right) = \min\{m,n\} \sigma^2$
Proof
Part (a) follows, of course, from the additive property of expected value, and part (b) from the additive property of variance for sums of independent variables. For part (c), assume that $m \le n$. Then $T_n = T_m + (T_n - T_m)$. But $T_m$ and $T_n - T_m$ are independent, so $\cov\left(T_m, T_n\right) = \cov\left[T_m, T_m + (T_n - T_m)\right] = \cov(T_m, T_m) + \cov(T_m, T_n - T_m) = \var(T_m) = m \sigma^2$
Recall the law of large numbers: $T_n / n \to \mu$ as $n \to \infty$
1. With probability 1 (the strong law).
2. In probability (the weak law).
Note that $T_n \le T_{n+1}$ for $n \in \N$ since the interarrival times are nonnegative. Also $\P(T_n = T_{n-1}) = \P(X_n = 0) = F(0)$. This can be positive, so with positive probability, more than one arrival can occur at the same time. On the other hand, the arrival times are unbounded:
$T_n \to \infty$ as $n \to \infty$ with probability 1.
Proof
Since $\P(X \gt 0) \gt 0$, there exists $t \gt 0$ such that $\P(X \gt t) \gt 0$. From the second Borel-Cantelli lemma it follows that with probability 1, $X_i \gt t$ for infinitely many $i \in \N_+$. Therefore $\sum_{i=1}^\infty X_i = \infty$ with probability 1.
The Counting Process
For $t \ge 0$, let $N_t$ denote the number of arrivals in the interval $[0, t]$: $N_t = \sum_{n=1}^\infty \bs{1}(T_n \le t), \quad t \in [0, \infty)$ We will refer to the random process $\bs{N} = (N_t: t \ge 0)$ as the counting process. Recall again that $T_0 = 0$ is not considered an arrival, but it's possible to have $T_n = 0$ for $n \in \N_+$, so there may be one or more arrivals at time 0.
$N_t = \max\{n \in \N: T_n \le t\}$ for $t \ge 0$.
If $s, \, t \in [0, \infty)$ and $s \le t$ then $N_t - N_s$ is the number of arrivals in $(s, t]$.
Note that as a function of $t$, $N_t$ is a (random) step function with jumps at the distinct values of $(T_1, T_2, \ldots)$; the size of the jump at an arrival time is the number of arrivals at that time. In particular, $N$ is an increasing function of $t$.
More generally, we can define the (random) counting measure corresponding to the sequence of random points $(T_1, T_2, \ldots)$ in $[0, \infty)$. Thus, if $A$ is a (measurable) subset of $[0, \infty)$, we will let $N(A)$ denote the number of the random points in $A$: $N(A) = \sum_{n=1}^\infty \bs{1}(T_n \in A)$ In particular, note that with our new notation, $N_t = N[0, t]$ for $t \ge 0$ and $N(s, t] = N_t - N_s$ for $s \le t$. Thus, the random counting measure is completely determined by the counting process. The counting process is the cumulative measure function for the counting measure, analogous to the cumulative distribution function of a probability measure.
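As an informal complement to the definitions (not part of the text; the exponential interarrival distribution and the use of numpy are assumptions made only for illustration), the following Python sketch simulates one run of a renewal process and evaluates the counting variable and the counting measure of an interval.

import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=10_000)      # simulated interarrival times (assumed exponential)
T = np.cumsum(X)                                 # arrival times T_1, T_2, ...

def N(t):
    """Counting variable N_t = #{n : T_n <= t}."""
    return np.searchsorted(T, t, side="right")

def N_interval(s, t):
    """Counting measure of (s, t]: N(s, t] = N_t - N_s."""
    return N(t) - N(s)

print(N(10.0), N_interval(5.0, 10.0))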
For $t \ge 0$ and $n \in \N$,
1. $T_n \le t$ if and only if $N_t \ge n$
2. $N_t = n$ if and only if $T_n \le t \lt T_{n+1}$
Proof
Note that the event in part (a) means that there are at least $n$ arrivals in $[0, t]$. The event in part (b) means that there are exactly $n$ arrivals in $[0, t]$.
Of course, the complements of the events in (a) are also equivalent, so $T_n \gt t$ if and only if $N_t \lt n$. On the other hand, neither of the events $N_t \le n$ and $T_n \ge t$ implies the other. For example, we could easily have $N_t = n$ and $T_n \lt t \lt T_{n + 1}$. Taking complements, neither of the events $N_t \gt n$ and $T_n \lt t$ implies the other. The last result also shows that the arrival time process $\bs{T}$ and the counting process $\bs{N}$ are inverses of each other in a sense.
The following events have probability 1:
1. $N_t \lt \infty$ for all $t \in [0, \infty)$
2. $N_t \to \infty$ as $t \to \infty$
Proof
The event in part (a) occurs if and only if $T_n \to \infty$ as $n \to \infty$, which occurs with probability 1 by the result above. The event in part (b) occurs if and only if $T_n \lt \infty$ for all $n \in \N$ which also occurs with probability 1.
All of the results so far in this subsection show that the arrival time process $\bs{T}$ and the counting process $\bs{N}$ are inverses of one another in a sense. The important equivalences above can be used to obtain the probability distribution of the counting variables in terms of the interarrival distribution function $F$.
For $t \ge 0$ and $n \in \N$,
1. $\P(N_t \ge n) = F_n(t)$
2. $\P(N_t = n) = F_n(t) - F_{n+1}(t)$
The next result is little more than a restatement of the result above relating the counting process and the arrival time process. However, you may need to review the section on filtrations and stopping times to understand the result.
For $t \in [0, \infty)$, $N_t + 1$ is a stopping time for the sequence of interarrival times $\bs{X}$
Proof
Note that $N_t + 1$ takes values in $\N_+$, so we need to show that the event $\left\{N_t + 1 = n\right\}$ is measurable with respect to $\mathscr{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$ for $n \in \N_+$. But from the result above, $N_t + 1 = n$ if and only if $N_t = n - 1$ if and only if $T_{n-1} \le t \lt T_n$. The last event is clearly measurable with respect to $\mathscr{F}_n$.
The Renewal Function
The function $M$ that gives the expected number of arrivals up to time $t$ is known as the renewal function: $M(t) = \E(N_t), \quad t \in [0, \infty)$
The renewal function turns out to be of fundamental importance in the study of renewal processes. Indeed, the renewal function essentially characterizes the renewal process. It will take awhile to fully understand this, but the following theorem is a first step:
The renewal function is given in terms of the interarrival distribution function by $M(t) = \sum_{n=1}^\infty F_n(t), \quad 0 \le t \lt \infty$
Proof
Recall that $N_t = \sum_{n=1}^\infty \bs{1}(T_n \le t)$. Taking expected values gives the result. Note that the interchange of sum and expected value is valid because the terms are nonnegative.
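As a numerical illustration of this series representation (an added sketch, not part of the text), we can compare a truncated version of the sum with a Monte Carlo estimate of $\E(N_t)$. Here the interarrival times are assumed to be exponential with rate $r$, so that $F_n$ is the gamma distribution function with shape parameter $n$ and rate $r$, a fact recalled later in the discussion of the Poisson process; the truncation level and sample sizes are arbitrary.

import numpy as np
from scipy.stats import gamma

r, t = 2.0, 5.0

# Truncated series: M(t) is approximately sum_{n=1}^{200} F_n(t), with F_n the gamma(n, rate r) CDF
series = sum(gamma.cdf(t, a=n, scale=1/r) for n in range(1, 201))

# Monte Carlo estimate of E(N_t): simulate many runs, each with enough interarrivals to cover [0, t]
rng = np.random.default_rng(1)
X = rng.exponential(scale=1/r, size=(20_000, 100))
counts = (np.cumsum(X, axis=1) <= t).sum(axis=1)    # N_t in each run
print(series, counts.mean())                        # both are close to r t = 10 in this case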
Note that we have not yet shown that $M(t) \lt \infty$ for $t \ge 0$, and note also that this does not follow from the previous theorem. However, we will establish this finiteness condition in the subsection on moment generating functions below. If $M$ is differentiable, the derivative $m = M^\prime$ is known as the renewal density, so that $m(t)$ gives the expected rate of arrivals per unit time at $t \in [0, \infty)$.
More generally, if $A$ is a (measurable) subset of $[0, \infty)$, let $M(A) = \E[N(A)]$, the expected number of arrivals in $A$.
$M$ is a positive measure on $[0, \infty)$. This measure is known as the renewal measure.
Proof
$N$ is a measure on $[0, \infty)$ (albeit a random one). So if $(A_1, A_2, \ldots)$ is a sequence of disjoint, measurable subsets of $[0, \infty)$ then $N\left(\bigcup_{i=1}^\infty A_i \right) = \sum_{i=1}^\infty N(A_i)$ Taking expected values gives $M\left(\bigcup_{i=1}^\infty A_i \right) = \sum_{i=1}^\infty M(A_i)$ Again, the interchange of sum and expected value is justified since the terms are nonnegative.
The renewal measure is also given by $M(A) = \sum_{n=1}^\infty \P(T_n \in A), \quad A \subset [0, \infty)$
Proof
Recall that $N(A) = \sum_{n=1}^\infty \bs{1}(T_n \in A)$. Taking expected values gives the result. Again, the interchange of expected value and infinite series is justified since the terms are nonnegative.
If $s, \, t \in [0, \infty)$ with $s \le t$ then $M(t) - M(s) = M(s, t]$, the expected number of arrivals in $(s, t]$.
The last theorem implies that the renewal function actually determines the entire renewal measure. The renewal function is the cumulative measure function, analogous to the cumulative distribution function of a probability measure. Thus, every renewal process naturally leads to two measures on $[0, \infty)$, the random counting measure corresponding to the arrival times, and the measure associated with the expected number of arrivals.
The Age Processes
For $t \in [0, \infty)$, $T_{N_t} \le t \lt T_{N_t + 1}$. That is, $t$ is in the random renewal interval $[T_{N_t}, T_{N_t + 1})$.
Consider the reliability setting in which whenever a device fails, it is immediately replaced by a new device of the same type. Then the sequence of interarrival times $\bs{X}$ is the sequence of lifetimes, while $T_n$ is the time that the $n$th device is placed in service. There are several other natural random processes that can be defined.
The random variable $C_t = t - T_{N_t}, \quad t \in [0, \infty)$ is called the current life at time $t$. This variable takes values in the interval $[0, t]$ and is the age of the device that is in service at time $t$. The random process $\bs{C} = (C_t: t \ge 0)$ is the current life process.
The random variable $R_t = T_{N_t + 1} - t, \quad t \in [0, \infty)$ is called the remaining life at time $t$. This variable takes values in the interval $(0, \infty)$ and is the time remaining until the device that is in service at time $t$ fails. The random process $\bs{R} = (R_t: t \ge 0)$ is the remaining life process.
The random variable $L_t = C_t + R_t = T_{N_t+1} - T_{N_t} = X_{N_t + 1}, \quad t \in [0, \infty)$ is called the total life at time $t$. This variable takes values in $[0, \infty)$ and gives the total life of the device that is in service at time $t$. The random process $\bs{L} = (L_t: t \ge 0)$ is the total life process.
Tail events of the current and remaining life can be written in terms of each other and in terms of the counting variables.
Suppose that $t \in [0, \infty)$, $x \in [0, t]$, and $y \in [0, \infty)$. Then
1. $\{R_t \gt y\} = \{N_{t+y} - N_t = 0\}$
2. $\{C_t \ge x \} = \{R_{t-x} \gt x\} = \{N_t - N_{t-x} = 0\}$
3. $\{C_t \ge x, R_t \gt y\} = \{R_{t-x} \gt x + y\} = \{N_{t+y} - N_{t-x} = 0\}$
Proof
For part (a), note that $R_t \gt y$ if and only if there are no arrivals in the interval $(t, t + y]$, which is precisely the event $\{N_{t+y} - N_t = 0\}$. For part (b), $C_t \ge x$ if and only if there are no arrivals in $(t - x, t]$, which in turn means that the remaining life at time $t - x$ exceeds $x$. Part (c) follows by combining the arguments for (a) and (b): both events occur if and only if there are no arrivals in $(t - x, t + y]$.
Of course, the various equivalent events in the last result must have the same probability. In particular, it follows that if we know the distribution of $R_t$ for all $t$ then we also know the distribution of $C_t$ for all $t$, and in fact we know the joint distribution of $(R_t, C_t)$ for all $t$ and hence also the distribution of $L_t$ for all $t$.
For fixed $t \in (0, \infty)$ the total life at $t$ (the lifetime of the device in service at time $t$) is stochastically larger than a generic lifetime. This result, a bit surprising at first, is known as the inspection paradox. Let $X$ denote a generic interarrival time.
$\P(L_t \gt x) \ge \P(X \gt x)$ for $x \ge 0$.
Proof
Recall that $L_t = X_{N_t + 1}$. The proof is by conditioning on $N_t$. An important tool is the fact that if $A$ and $B$ are nested events in a probability space (one a subset of the other), then the events are positively correlated, so that $\P(A \mid B) \ge \P(A)$. Recall that $F$ is the common CDF of the interarrival times. First $\P\left(X_{N_t + 1} \gt x \mid N_t = 0\right) = \P(X_1 \gt x \mid X_1 \gt t) \ge \P(X_1 \gt x) = 1 - F(x)$ Next, for $n \in \N_+$, $\P\left(X_{N_t + 1} \gt x \mid N_t = n\right) = \P\left(X_{n + 1} \gt x \mid T_n \le t \lt T_{n + 1}\right) = \P(X_{n+1} \gt x \mid T_n \le t \lt T_n + X_{n+1})$ We condition this additionally on $T_n$, the time of the $n$th arrival. For $s \le t$, and since $X_{n+1}$ is independent of $T_n$, we have $\P(X_{n+1} \gt x \mid T_n = s, \, X_{n+1} \gt t - s) = \P(X_{n+1} \gt x \mid X_{n + 1} \gt t - s) \ge \P(X_{n + 1} \gt x) = 1 - F(x)$ It follows that $\P\left(X_{N_t + 1} \gt x \mid N_t = n\right) \ge 1 - F(x)$ for every $n \in \N$, and hence $\P\left(X_{N_t + 1} \gt x\right) = \sum_{n=0}^\infty \P\left(X_{N_t + 1} \gt x \mid N_t = n\right) \P(N_t = n) \ge \sum_{n=0}^\infty \left[1 - F(x)\right] \P(N_t = n) = 1 - F(x)$
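A short simulation makes the inspection paradox concrete. The following sketch (added for illustration; the uniform interarrival distribution, the time $t$, and the sample sizes are arbitrary assumptions) estimates $\P(L_t \gt x)$ and compares it with $\P(X \gt x)$.

import numpy as np

rng = np.random.default_rng(2)
t, x, runs = 10.0, 0.8, 20_000
hits = 0
for _ in range(runs):
    X = rng.uniform(0.0, 1.0, size=60)        # interarrival times; 60 is enough to cover [0, t]
    T = np.cumsum(X)                          # arrival times
    n = np.searchsorted(T, t, side="right")   # N_t
    L = X[n]                                  # total life L_t = X_{N_t + 1} (0-based indexing)
    hits += (L > x)

print(hits / runs)    # estimate of P(L_t > x); noticeably larger than ...
print(1 - x)          # ... P(X > x) = 0.2 for the standard uniform interarrival time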
Basic Comparison
The basic comparison in the following result is often useful, particularly for obtaining various bounds. The idea is very simple: if the interarrival times are shortened, the arrivals occur more frequently.
Suppose now that we have two interarrival sequences, $\bs{X} = (X_1, X_2, \ldots)$ and $\bs{Y} = (Y_1, Y_2, \ldots)$ defined on the same probability space, with $Y_i \le X_i$ (with probability 1) for each $i$. Then for $n \in \N$ and $t \in [0, \infty)$,
1. $T_{Y,n} \le T_{X,n}$
2. $N_{Y,t} \ge N_{X,t}$
3. $M_Y(t) \ge M_X(t)$
Examples and Special Cases
Bernoulli Trials
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. Recall that $\bs{X}$ is a sequence of independent, identically distributed indicator variables with $\P(X = 1) = p$.
Recall the random processes derived from $\bs{X}$:
1. $\bs{Y} = (Y_0, Y_1, \ldots)$ where $Y_n$ is the number of successes in the first $n$ trials. The sequence $\bs{Y}$ is the partial sum process associated with $\bs{X}$. The variable $Y_n$ has the binomial distribution with parameters $n$ and $p$.
2. $\bs{U} = (U_1, U_2, \ldots)$ where $U_n$ is the number of trials needed to go from success number $n - 1$ to success number $n$. These are independent variables, each having the geometric distribution on $\N_+$ with parameter $p$.
3. $\bs{V} = (V_0, V_1, \ldots)$ where $V_n$ is the trial number of success $n$. The sequence $\bs{V}$ is the partial sum process associated with $\bs{U}$. The variable $V_n$ has the negative binomial distribution with parameters $n$ and $p$.
It is natural to view the successes as arrivals in a discrete-time renewal process.
Consider the renewal process with interarrival sequence $\bs{U}$. Then
1. The basic assumptions are satisfied, and the mean interarrival time is $\mu = 1 / p$.
2. $\bs{V}$ is the sequence of arrival times.
3. $\bs{Y}$ is the counting process (restricted to $\N$).
4. The renewal function is $M(n) = n p$ for $n \in \N$.
It follows that the renewal measure is proportional to counting measure on $\N_+$.
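As a quick numerical check of the renewal function in this setting (an added sketch, not in the text; the parameter values and the use of numpy are assumptions), the following simulation estimates $M(n) = \E(Y_n)$ and compares it with $n p$.

import numpy as np

rng = np.random.default_rng(3)
p, n = 0.3, 25
U = rng.geometric(p, size=(50_000, n))    # interarrival times, geometric on {1, 2, ...}
V = np.cumsum(U, axis=1)                  # arrival times V_1, ..., V_n
Y = (V <= n).sum(axis=1)                  # number of successes (arrivals) in the first n trials
print(Y.mean(), n * p)                    # Monte Carlo estimate of M(n) versus n p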
Run the binomial timeline experiment 1000 times for various values of the parameters $n$ and $p$. Compare the empirical distribution of the counting variable to the true distribution.
Run the negative binomial experiment 1000 times for various values of the parameters $k$ and $p$. Compare the empirical distribution of the arrival time to the true distribution.
Consider again the renewal process with interarrival sequence $\bs{U}$. For $n \in \N$,
1. The current life and remaining life at time $n$ are independent.
2. The remaining life at time $n$ has the same distribution as an interarrival time $U$, namely the geometric distribution on $\N_+$ with parameter $p$.
3. The current life at time $n$ has a truncated geometric distribution with parameters $n$ and $p$: $\P(C_n = k) = \begin{cases} p(1 - p)^k, & k \in \{0, 1, \ldots, n - 1\} \\ (1 - p)^n, & k = n \end{cases}$
Proof
These results follow from age process events above.
This renewal process starts over, independently of the past, not only at the arrival times, but at fixed times $n \in \N$ as well. The Bernoulli trials process (with the successes as arrivals) is the only discrete-time renewal process with this property, which is a consequence of the memoryless property of the geometric interarrival distribution.
We can also use the indicator variables as the interarrival times. This may seem strange at first, but actually turns out to be useful.
Consider the renewal process with interarrival sequence $\bs{X}$.
1. The basic assumptions are satisfied, and the mean interarrival time is $\mu = p$.
2. $\bs{Y}$ is the sequence of arrival times.
3. The number of arrivals at time 0 is $U_1 - 1$ and the number of arrivals at time $i \in \N_+$ is $U_{i+1}$.
4. The number of arrivals in the interval $[0, n]$ is $V_{n+1} - 1$ for $n \in \N$. This gives the counting process.
5. The renewal function is $M(n) = \frac{n + 1}{p} - 1$ for $n \in \N$.
The age processes are not very interesting for this renewal process.
For $n \in \N$ (with probability 1),
1. $C_n = 0$
2. $R_n = 1$
The Moment Generating Function of the Counting Variables
As an application of the last renewal process, we can show that the moment generating function of the counting variable $N_t$ in an arbitrary renewal process is finite in an interval about 0 for every $t \in [0, \infty)$. This implies that $N_t$ has finite moments of all orders and in particular that $M(t) \lt \infty$ for every $t \in [0, \infty)$.
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is the interarrival sequence for a renewal process. By the basic assumptions, there exists $a \gt 0$ such that $p = \P(X \ge a) \gt 0$. We now consider the renewal process with interarrival sequence $\bs{X}_a = (X_{a,1}, X_{a,2}, \ldots)$, where $X_{a,i} = a\,\bs{1}(X_i \ge a)$ for $i \in \N_+$. The renewal process with interarrival sequence $\bs{X}_a$ is just like the renewal process with Bernoulli interarrivals, except that the arrival times occur at the points in the sequence $(0, a, 2 a, \ldots)$, instead of $(0, 1, 2, \ldots)$.
For each $t \in [0, \infty)$, $N_t$ has a finite moment generating function in an interval about 0, and hence $N_t$ has finite moments of all orders.
Proof
Note first that $X_{a,i} \le X_i$ for each $i \in \N_+$, so $N_t \le N_{a,t}$ for $t \in [0, \infty)$ by the basic comparison above. Recall that the moment generating function $\Gamma$ of the geometric distribution with parameter $p$ is $\Gamma(s) = \frac{e^s p}{1 - (1 - p) e^s}, \quad s \lt -\ln(1 - p)$ But as with the process with Bernoulli interarrival times, $N_{a,t}$ can be written as $V_{n + 1} - 1$ where $n = \lfloor t / a \rfloor$ and where $V_{n + 1}$ is a sum of $n + 1$ IID geometric variables, each with parameter $p$. We don't really care about the explicit form of the MGF of $N_{a, t}$, but it is clearly finite in an interval of the form $(-\infty, \epsilon)$ where $\epsilon \gt 0$. Since $N_t \le N_{a,t}$, the MGF of $N_t$ is also finite on this interval.
The Poisson Process
The Poisson process, named after Simeon Poisson, is the most important of all renewal processes. The Poisson process is so important that it is treated in a separate chapter in this project. Please review the essential properties of this process:
Properties of the Poisson process with rate $r \in (0, \infty)$.
1. The interarrival times have an exponential distribution with rate parameter $r$. Thus, the basic assumptions above are satisfied and the mean interarrival time is $\mu = 1 / r$.
2. The exponential distribution is the only distribution with the memoryless property on $[0, \infty)$.
3. The time of the $n$th arrival $T_n$ has the gamma distribution with shape parameter $n$ and rate parameter $r$.
4. The counting process $\bs{N} = (N_t: t \ge 0)$ has stationary, independent increments and $N_t$ has the Poisson distribution with parameter $r t$ for $t \in [0, \infty)$.
5. In particular, the renewal function is $M(t) = r t$ for $t \in [0, \infty)$. Hence, the renewal measure is a multiple of the standard length measure (Lebesgue measure) on $[0, \infty)$.
Consider again the Poisson process with rate parameter $r$. For $t \in [0, \infty)$,
1. The current life and remaining life at time $t$ are independent.
2. The remaining life at time $t$ has the same distribution as an interarrival time $X$, namely the exponential distribution with rate parameter $r$.
3. The current life at time $t$ has a truncated exponential distribution with parameters $t$ and $r$: $\P(C_t \ge s) = \begin{cases} e^{-r s}, & 0 \le s \le t \\ 0, & s \gt t \end{cases}$
Proof
These results follow from age process events given above.
The Poisson process starts over, independently of the past, not only at the arrival times, but at fixed times $t \in [0, \infty)$ as well. The Poisson process is the only renewal process with this property, which is a consequence of the memoryless property of the exponential interarrival distribution.
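The following sketch (an added illustration with arbitrary parameter values; numpy is assumed) checks parts of the last result by simulation, estimating $\P(R_t \gt y)$ and $\P(C_t \ge s)$ for a Poisson process and comparing them with the exponential and truncated exponential expressions.

import numpy as np

rng = np.random.default_rng(4)
r, t, y, s = 1.5, 8.0, 0.5, 0.5
runs = 20_000
R_hits = C_hits = 0
for _ in range(runs):
    X = rng.exponential(scale=1/r, size=60)    # exponential interarrival times; 60 covers [0, t]
    T = np.cumsum(X)
    n = np.searchsorted(T, t, side="right")    # N_t
    R = T[n] - t                               # remaining life R_t = T_{N_t + 1} - t
    C = t - (T[n - 1] if n > 0 else 0.0)       # current life C_t = t - T_{N_t}
    R_hits += (R > y)
    C_hits += (C >= s)

print(R_hits / runs, np.exp(-r * y))    # P(R_t > y) versus e^{-r y}
print(C_hits / runs, np.exp(-r * s))    # P(C_t >= s) versus e^{-r s} (valid since s <= t)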
Run the Poisson experiment 1000 times for various values of the parameters $t$ and $r$. Compare the empirical distribution of the counting variable to the true distribution.
Run the gamma experiment 1000 times for various values of the parameters $n$ and $r$. Compare the empirical distribution of the arrival time to the true distribution.
Simulation Exercises
Open the renewal experiment and set $t = 10$. For each of the following interarrival distributions, run the simulation 1000 times and note the shape and location of the empirical distribution of the counting variable. Note also the mean of the interarrival distribution in each case.
1. The continuous uniform distribution on the interval $[0, 1]$ (the standard uniform distribution).
2. The discrete uniform distribution starting at $a = 0$, with step size $h = 0.1$, and with $n = 10$ points.
3. The gamma distribution with shape parameter $k = 2$ and scale parameter $b = 1$.
4. The beta distribution with left shape parameter $a = 3$ and right shape parameter $b = 2$.
5. The exponential-logarithmic distribution with shape parameter $p = 0.1$ and scale parameter $b = 1$.
6. The Gompertz distribution with shape parameter $a = 1$ and scale parameter $b = 1$.
7. The Wald distribution with mean $\mu = 1$ and shape parameter $\lambda = 1$.
8. The Weibull distribution with shape parameter $k = 2$ and scale parameter $b = 1$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.01%3A_Introduction.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
Many quantities of interest in the study of renewal processes can be described by a special type of integral equation known as a renewal equation. Renewal equations almost always arise by conditioning on the time of the first arrival and by using the defining property of a renewal process—the fact that the process restarts at each arrival time, independently of the past. However, before we can study renewal equations, we need to develop some additional concepts and tools involving measures, convolutions, and transforms. Some of the results in the advanced sections on measure theory, general distribution functions, the integral with respect to a measure, properties of the integral, and density functions are needed for this section. You may need to review some of these topics as necessary. As usual, we assume that all functions and sets that are mentioned are measurable with respect to the appropriate $\sigma$-algebras. In particular, $[0, \infty)$ which is our basic temporal space, is given the usual Borel $\sigma$-algebra generated by the intervals.
Measures, Integrals, and Transforms
Distribution Functions and Positive Measures
Recall that a distribution function on $[0, \infty)$ is a function $G: [0, \infty) \to [0, \infty)$ that is increasing and continuous from the right. The distribution function $G$ defines a positive measure on $[0, \infty)$, which we will also denote by $G$, by means of the formula $G[0, t] = G(t)$ for $t \in [0, \infty)$.
Hopefully, our notation will not cause confusion and it will be clear from context whether $G$ refers to the positive measure (a set function) or the distribution function (a point function). More generally, if $a, \, b \in [0, \infty)$ and $a \le b$ then $G(a, b] = G(b) - G(a)$. Note that the positive measure associated with a distribution function is locally finite in the sense that $G(A) \lt \infty$ if $A \subset [0, \infty)$ is bounded. Of course, if $A$ is unbounded, $G(A)$ may well be infinite. The basic structure of a distribution function and its associated positive measure occurred several times in our preliminary discussion of renewal processes:
Distributions associated with a renewal process.
1. The distribution function $F$ of the interarrival times defines a probability measure on $[0, \infty)$
2. The counting process $N$ defines a (random) counting measure on $[0, \infty)$
3. The renewal function $M$ defines a (deterministic) positive measure on $[0, \infty)$
Suppose again that $G$ is a distribution function on $[0, \infty)$. Recall that the integral associated with the positive measure $G$ is also called the Lebesgue-Stieltjes integral associated with the distribution function $G$ (named for Henri Lebesgue and Thomas Stieltjes). If $f: [0, \infty) \to \R$ and $A \subseteq [0, \infty)$ (measurable of course), the integral of $f$ over $A$ (if it exists) is denoted $\int_A f(t) \, dG(t)$ We use the more conventional $\int_0^t f(x) \, dG(x)$ for the integral over $[0, t]$ and $\int_0^\infty f(x) \, dG(x)$ for the integral over $[0, \infty)$. On the other hand, $\int_s^t f(x) \, dG(x)$ means the integral over $(s, t]$ for $s \lt t$, and $\int_s^\infty f(x) \, dG(x)$ means the integral over $(s, \infty)$. Thus, the additivity of the integral over disjoint domains holds, as it must. For example, for $t \in [0, \infty)$, $\int_0^\infty f(x) \, dG(x) = \int_0^t f(x) \, dG(x) + \int_t^\infty f(x) \, dG(x)$ This notation would be ambiguous without the clarification, but is consistent with how the measure works: $G[0, t] = G(t)$ for $t \ge 0$, $G(s, t] = G(t) - G(s)$ for $0 \le s \lt t$, etc. Of course, if $G$ is continuous as a function, so that $G$ is also continuous as a measure, then none of this matters—the integral over an interval is the same whether or not endpoints are included. The following definition is a natural complement to the locally finite property of the positive measures that we are considering.
A function $f: [0, \infty) \to \R$ is locally bounded if it is measurable and is bounded on $[0, t]$ for each $t \in [0, \infty)$.
The locally bounded functions form a natural class for which our integrals of interest exist.
Suppose that $G$ is a distribution function on $[0, \infty)$ and $f: [0, \infty) \to \R$ is locally bounded. Then $g: [0, \infty) \to \R$ defined by $g(t) = \int_0^t f(s) \, dG(s)$ is also locally bounded.
Proof
Fix $t \in [0, \infty)$ and suppose that $\left|f(s)\right| \le C_t$ for $s \in [0, t]$. Then $\int_0^s \left|f(x)\right| \, dG(x) \le C_t G(s) \le C_t G(t), \quad s \in [0, t]$ Hence $f$ is integrable on $[0, s]$ and $\left|g(s)\right| \le C_t G(t)$ for $s \in [0, t]$, so $g$ is locally bounded.
Note that if $f$ and $g$ are locally bounded, then so are $f + g$ and $f g$. If $f$ is increasing on $[0, \infty)$ then $f$ is locally bounded, so in particular, a distribution function on $[0, \infty)$ is locally bounded. If $f$ is continuous on $[0, \infty)$ then $f$ is locally bounded. Similarly, if $G$ and $H$ are distribution functions on $[0, \infty)$ and if $c \in (0, \infty)$, then $G + H$ and $c G$ are also distribution functions on $[0, \infty)$. Convolution, which we consider next, is another way to construct new distributions on $[0, \infty)$ from ones that we already have.
Convolution
The term convolution means different things in different settings. Let's start with the definition we know, the convolution of probability density functions, on our space of interest $[0, \infty)$.
Suppose that $X$ and $Y$ are independent random variables with values in $[0, \infty)$ and with probability density functions $f$ and $g$, respectively. Then $X + Y$ has probability density function $f * g$ given as follows, in the discrete and continuous cases, respectively \begin{align} (f * g)(t) & = \sum_{s \in [0, t]} f(t - s) g(s) \\ (f * g)(t) & = \int_0^t f(t - s) g(s) \, ds \end{align}
In the discrete case, it's understood that $t$ is a possible value of $X + Y$, and the sum is over the countable collection of $s \in [0, t]$ with $s$ a value of $X$ and $t - s$ a value of $Y$. Often in this case, the random variables take values in $\N$, in which case the sum is simply over the set $\{0, 1, \ldots, t\}$ for $t \in \N$. The discrete and continuous cases could be unified by defining convolution with respect to a general positive measure on $[0, \infty)$. Moreover, the definition clearly makes sense for functions that are not necessarily probability density functions.
Suppose that $f, \, g: [0, \infty) \to \R$ are locally bounded and that $H$ is a distribution function on $[0, \infty)$. The convolution of $f$ and $g$ with respect to $H$ is the function on $[0, \infty)$ defined by $t \mapsto \int_0^t f(t - s) g(s) \, dH(s)$
If $f$ and $g$ are probability density functions for discrete distributions on a countable set $C \subseteq [0, \infty)$ and if $H$ is counting measure on $C$, we get discrete convolution, as above. If $f$ and $g$ are probability density functions for continuous distributions on $[0, \infty)$ and if $H$ is Lebesgue measure, we get continuous convolution, as above. Note however, that if $g$ is nonnegative then $G(t) = \int_0^t g(s) \, dH(s)$ for $t \in [0, \infty)$ defines another distribution function on $[0, \infty)$, and the convolution integral above is simply $\int_0^t f(t - s) \, dG(s)$. This motivates our next version of convolution, the one that we will use in the remainder of this section.
Suppose that $f: [0, \infty) \to \R$ is locally bounded and that $G$ is a distribution function on $[0, \infty)$. The convolution of the function $f$ with the distribution $G$ is the function $f * G$ defined by $(f * G)(t) = \int_0^t f(t - s) \, dG(s), \quad t \in [0, \infty)$
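To make the definition concrete (an added sketch, not from the text), the code below evaluates $(f * G)(t)$ in the case where $G$ is a discrete distribution with finitely many atoms, so that the convolution integral reduces to a weighted sum over the atoms in $[0, t]$; the function $f$ and the atoms are arbitrary assumptions.

import numpy as np

# A hypothetical discrete distribution G on [0, infinity): atoms and their masses
atoms = np.array([0.0, 1.0, 2.5])
masses = np.array([0.2, 0.5, 0.3])

def f(x):
    """An arbitrary locally bounded function on [0, infinity)."""
    return np.exp(-x)

def convolve_with_G(f, t):
    """(f * G)(t) = integral of f(t - s) dG(s) over [0, t]; here a sum over the atoms s <= t."""
    keep = atoms <= t
    return np.sum(f(t - atoms[keep]) * masses[keep])

print(convolve_with_G(f, 2.0))    # equals 0.2 f(2.0) + 0.5 f(1.0), since the atom 2.5 exceeds t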
Note that if $F$ and $G$ are distribution functions on $[0, \infty)$, the convolution $F * G$ makes sense, with $F$ simply as a function and $G$ as a distribution function. The result is another distribution function. Moreover in this case, the operation is commutative.
If $F$ and $G$ are distribution functions on $[0, \infty)$ then $F * G$ is also a distribution function on $[0, \infty)$, and $F * G = G * F$
Proof
Let $F \otimes G$ and $G \otimes F$ denote the usual product measures on $[0, \infty)^2 = [0, \infty) \times [0, \infty)$. For $t \in [0, \infty)$, let $T_t = \left\{(r, s) \in [0, \infty)^2: r + s \le t\right\}$, the triangular region with vertices $(0, 0)$, $(t, 0)$, and $(0, t)$. Then $(F * G)(t) = \int_0^t F(t - s) \, dG(s) = \int_0^t \int_0^{t - s} dF(r) \, dG(s) = (F \otimes G)\left(T_t\right)$ This clearly defines a distribution function. Specifically, if $0 \le s \le t \lt \infty$ then $T_s \subseteq T_t$ so $(F * G)(s) = (F \otimes G)(T_s) \le (F \otimes G)(T_t) = (F * G)(t)$. Hence $F * G$ is increasing. If $t \in [0, \infty)$ and $t_n \in [0, \infty)$ for $n \in \N_+$ with $t_n \downarrow t$ as $n \to \infty$ then $T_{t_n} \downarrow T_t$ (in the subset sense) as $n \to \infty$ so by the continuity property of $F \otimes G$ we have $(F * G)(t_n) = (F \otimes G)\left(T_{t_n}\right) \downarrow (F \otimes G)(T_t) = (F * G)(t)$ as $n \to \infty$. Hence $F * G$ is continuous from the right.
For the commutative property, we have $(F * G)(t) = (F \otimes G)(T_t)$ and $(G * F)(t) = (G \otimes F)(T_t)$. By the symmetry of the triangle $T_t$ with respect to the diagonal $\{(s, s): s \in [0, \infty)\}$, these are the same.
If $F$ and $G$ are probability distribution functions corresponding to independent random variables $X$ and $Y$ with values in $[0, \infty)$, then $F * G$ is the probability distribution function of $X + Y$. Suppose now that $f: [0, \infty) \to \R$ is locally bounded and that $G$ and $H$ are distribution functions on $[0, \infty)$. From the previous result, both $(f * G) * H$ and $f * (G * H)$ make sense. Fortunately, they are the same so that convolution is associative.
Suppose that $f: [0, \infty) \to \R$ is locally bounded and that $G$ and $H$ are distribution functions on $[0, \infty)$. Then $(f * G) * H = f * (G * H)$
Proof
For $t \in [0, \infty)$, $[(f * G) * H](t) = \int_0^t (f * G)(t - s) \, dH(s) = \int_0^t \int_0^{t - s} f(t - s - r) \, dG(r) \, dH(s) = [f * (G * H)](t)$
Finally, convolution is a linear operation. That is, convolution preserves sums and scalar multiples, whenever these make sense.
Suppose that $f, \, g: [0, \infty) \to \R$ are locally bounded, $H$ is a distribution function on $[0, \infty)$, and $c \in \R$. Then
1. $(f + g) * H = (f * H) + (g * H)$
2. $(c f) * H = c (f * H)$
Proof
These properties follow easily from linearity properties of the integral.
1. $[(f + g) * H](t) = \int_0^t (f + g)(t - s) \, dH(s) = \int_0^t f(t - s) \, dH(s) + \int_0^t g(t - s) \, dH(s) = (f * H)(t) + (g * H)(t)$
2. $[(c f) * H](t) = \int_0^t c f(t - s) \, dH(s) = c \int_0^t f(t - s) \, dH(s) = c (f * H)(t)$
Suppose that $f: [0, \infty) \to \R$ is locally bounded, $G$ and $H$ are distribution functions on $[0, \infty)$, and that $c \in (0, \infty)$. Then
1. $f * (G + H) = (f * G) + (f * H)$
2. $f * (c G) = c (f * G)$
Proof
These properties also follow from linearity properties of the integral.
1. $[f * (G + H)](t) = \int_0^t f(t - s) \, d(G + H)(s) = \int_0^t f(t - s) \, dG(s) + \int_0^t f(t - s) \, dH(s) = (f * G)(t) + (f * H)(t)$
2. $[f * (c G)](t) = \int_0^t f(t - s) \, d(c G)(s) = c \int_0^t f(t - s) \, dG(s) = c (f * G)(t)$
Laplace Transforms
Like convolution, the term Laplace transform (named for Pierre Simon Laplace of course) can mean slightly different things in different settings. We start with the usual definition that you may have seen in your study of differential equations or other subjects:
The Laplace transform of a function $f: [0, \infty) \to \R$ is the function $\phi$ defined as follows, for all $s \in (0, \infty)$ for which the integral exists in $\R$: $\phi(s) = \int_0^\infty e^{-s t} f(t) \, dt$
Suppose that $f$ is nonnegative, so that the integral defining the transform exists in $[0, \infty]$ for every $s \in (0, \infty)$. If $\phi(s_0) \lt \infty$ for some $s_0 \in (0, \infty)$ then $\phi(s) \lt \infty$ for $s \ge s_0$. The transform of a general function $f$ exists at $s$ (in $\R$) if and only if the transform of $\left|f\right|$ is finite at $s$. It follows that if $f$ has a Laplace transform, then the transform $\phi$ is defined on an interval of the form $(a, \infty)$ for some $a \in (0, \infty)$. The actual domain is of very little importance; the main point is that the Laplace transform, if it exists, will be defined for all sufficiently large $s$. Basically, a nonnegative function will fail to have a Laplace transform if it grows at a hyper-exponential rate as $t \to \infty$.
We could generalize the Laplace transform by replacing the Riemann or Lebesgue integral with the integral over a positive measure on $[0, \infty)$.
Suppose that $G$ is a distribution on $[0, \infty)$. The Laplace transform of $f: [0, \infty) \to \R$ with respect to $G$ is the function given below, defined for all $s \in (0, \infty)$ for which the integral exists in $\R$: $s \mapsto \int_0^\infty e^{-s t} f(t) \, dG(t)$
However, as before, if $f$ is nonnegative, then $H(t) = \int_0^t f(x) \, dG(x)$ for $t \in [0, \infty)$ defines another distribution function, and the previous integral is simply $\int_0^\infty e^{-s t} \, dH(t)$. This motivates the definition for the Laplace transform of a distribution.
The Laplace transform of a distribution $F$ on $[0, \infty)$ is the function $\Phi$ defined as follows, for all $s \in (0, \infty)$ for which the integral is finite: $\Phi(s) = \int_0^\infty e^{-s t} dF(t)$
Once again if $F$ has a Laplace transform, then the transform will be defined for all sufficiently large $s \in (0, \infty)$. We will try to be explicit in explaining which of the Laplace transform definitions is being used. For a generic function, the first definition applies, and we will use a lower case Greek letter. If the function is a distribution function, either definition makes sense, but it is usually the latter that is appropriate, in which case we use an upper case Greek letter. Fortunately, there is a simple relationship between the two.
Suppose that $F$ is a distribution function on $[0, \infty)$. Let $\Phi$ denote the Laplace transform of the distribution $F$ and $\phi$ the Laplace transform of the function $F$. Then $\Phi(s) = s \phi(s)$.
Proof
The main tool is Fubini's theorem (named for Guido Fubini), which allows us to interchange the order of integration for a nonnegative function. \begin{align} \phi(s) & = \int_0^\infty e^{-s t} F(t) \, dt = \int_0^\infty e^{-s t} \left(\int_0^t dF(x)\right) dt \\ & = \int_0^\infty \left(\int_x^\infty e^{-s t} dt\right) dF(x) = \int_0^\infty \frac{1}{s} e^{-s x} dF(x) = \frac{1}{s} \Phi(s) \end{align}
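A quick numerical check of this relationship (added for illustration; the exponential distribution with rate $r$, the particular value of $s$, and numerical integration with scipy are assumptions) compares $s \phi(s)$ with $\Phi(s)$, using the fact that the exponential distribution has density $r e^{-r t}$.

import numpy as np
from scipy.integrate import quad

r, s = 2.0, 1.3
F = lambda t: 1 - np.exp(-r * t)    # exponential distribution function with rate r

# phi(s): Laplace transform of the function F
phi, _ = quad(lambda t: np.exp(-s * t) * F(t), 0, np.inf)

# Phi(s): Laplace transform of the distribution F, computed from its density r e^{-r t}
Phi, _ = quad(lambda t: np.exp(-s * t) * r * np.exp(-r * t), 0, np.inf)

print(s * phi, Phi, r / (r + s))    # all three agree (up to numerical error)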
For a probability distribution, there is also a simple relationship between the Laplace transform and the moment generating function.
Suppose that $X$ is a random variable with values in $[0, \infty)$ and with probability distribution function $F$. The Laplace transform $\Phi$ and the moment generating function $\Gamma$ of the distribution $F$ are given as follows, and so $\Phi(s) = \Gamma(-s)$ for all $s \in (0, \infty)$. \begin{align} \Phi(s) & = \E\left(e^{-s X}\right) = \int_0^\infty e^{-s t} dF(t) \\ \Gamma(s) & = \E\left(e^{s X}\right) = \int_0^\infty e^{s t} dF(t) \end{align}
In particular, a probability distribution $F$ on $[0, \infty)$ always has a Laplace transform $\Phi$, defined on $(0, \infty)$. Note also that if $F(0) \lt 1$ (so that $X$ is not deterministically 0), then $\Phi(s) \lt 1$ for $s \in (0, \infty)$.
Laplace transforms are important for general distributions on $[0, \infty)$ for the same reasons that moment generating functions are important for probability distributions: the transform of a distribution uniquely determines the distribution, and the transform of a convolution is the product of the corresponding transforms (and products are much nicer mathematically than convolutions). The following theorems give the essential properties of Laplace transforms. We assume that the transforms exist, of course, and it should be understood that equations involving transforms hold for sufficiently large $s \in (0, \infty)$.
Suppose that $F$ and $G$ are distributions on $[0, \infty)$ with Laplace transforms $\Phi$ and $\Gamma$, respectively. If $\Phi(s) = \Gamma(s)$ for $s$ sufficiently large, then $F = G$.
In the case of general functions on $[0, \infty)$, the conclusion is that $f = g$ except perhaps on a subset of $[0, \infty)$ of measure 0. The Laplace transform is a linear operation.
Suppose that $f, \, g : [0, \infty) \to \R$ have Laplace transforms $\phi$ and $\gamma$, respectively, and $c \in \R$ then
1. $f + g$ has Laplace transform $\phi + \gamma$
2. $c f$ has Laplace transform $c \phi$
Proof
These properties follow from the linearity of the integral. For $s$ sufficiently large,
1. $\int_0^\infty e^{- s t} [f(t) + g(t)] \, dt = \int_0^\infty e^{-s t} f(t) \, dt + \int_0^\infty e^{-s t} g(t) \, dt = \phi(s) + \gamma(s)$
2. $\int_0^\infty e^{-s t} c f(t) \, dt = c \int_0^\infty e^{-s t} f(t) \, dt = c \phi(s)$
The same properties holds for distributions on $[0, \infty)$ with $c \in (0, \infty)$. Integral transforms have a smoothing effect. Laplace transforms are differentiable, and we can interchange the derivative and integral operators.
Suppose that $f: [0, \infty) \to \R$ has Laplace transform $\phi$. Then $\phi$ has derivatives of all orders and $\phi^{(n)}(s) = \int_0^\infty (-1)^n t^n e^{- s t} f(t) \, dt$
Restated, $(-1)^n \phi^{(n)}$ is the Laplace transform of the function $t \mapsto t^n f(t)$. Again, one of the most important properties is that the Laplace transform turns convolution into products.
Suppose that $f: [0, \infty) \to \R$ is locally bounded with Laplace transform $\phi$, and that $G$ is a distribution function on $[0, \infty)$ with Laplace transform $\Gamma$. Then $f * G$ has Laplace transform $\phi \cdot \Gamma$.
Proof
By definition, the Laplace transform of $f * G$ is $\int_0^\infty e^{-s t} (f * G)(t) \, dt = \int_0^\infty e^{-s t} \left(\int_0^t f(t - x) \, dG(x)\right) dt$ Writing $e^{-s t} = e^{-s(t - x)} e^{-s x}$ and reversing the order of integration, the last iterated integral can be written as $\int_0^\infty e^{-s x} \left(\int_x^\infty e^{-s (t - x)} f(t - x) \, dt\right) dG(x)$ The interchange is justified, once again, by Fubini's theorem, since our functions are integrable (for sufficiently large $s \in (0, \infty)$). Finally with the substitution $y = t - x$ the last iterated integral can be written as a product $\left(\int_0^\infty e^{-s y} f(y) \, dy\right) \left(\int_0^\infty e^{-s x} dG(x)\right) = \phi(s) \Gamma(s)$
If $F$ and $G$ are distributions on $[0, \infty)$, then so is $F * G$. The result above applies, of course, with $F$ and $F * G$ thought of as functions and $G$ as a distribution, but multiplying through by $s$ and using the theorem above, it's clear that the result is also true with all three as distributions.
Renewal Equations and Their Solutions
Armed with our new analytic machinery, we can return to the study of renewal processes. Thus, suppose that we have a renewal process with interarrival sequence $\bs{X} = (X_1, X_2, \ldots)$, arrival time sequence $\bs{T} = (T_0, T_1, \ldots)$, and counting process $\bs{N} = \{N_t: t \in [0, \infty)\}$. As usual, let $F$ denote the common distribution function of the interarrival times, and let $M$ denote the renewal function, so that $M(t) = \E(N_t)$ for $t \in [0, \infty)$. Of course, the probability distribution function $F$ defines a probability measure on $[0, \infty)$, but as noted earlier, $M$ is also a distribution function and so defines a positive measure on $[0, \infty)$. Recall that $F^c = 1 - F$ is the right distribution function (or reliability function) of an interarrival time.
The distributions of the arrival times are the convolution powers of $F$. That is, $F_n = F^{*n} = F * F * \cdots * F$.
Proof
This follows from the definitions: $F_n$ is the distribution function of $T_n$, and $T_n = \sum_{i=1}^n X_i$. Since $\bs{X}$ is an independent, identically distributed sequence, $F_n = F^{*n}$
The next definition is the central one for this section.
Suppose that $a: [0, \infty) \to \R$ is locally bounded. An integral equation of the form $u = a + u * F$ for an unknown function $u: [0, \infty) \to \R$ is called a renewal equation for $u$.
Often $u(t) = \E(U_t)$ where $\left\{U_t: t \in [0, \infty)\right\}$ is a random process of interest associated with the renewal process. The renewal equation comes from conditioning on the first arrival time $T_1 = X_1$, and then using the defining property of the renewal process—the fact that the process starts over, independently of the past, at the arrival time. Our next important result illustrates this.
Renewal equations for $M$ and $F$:
1. $M = F + M * F$
2. $F = M - F * M$
Proof
1. We condition on the time of the first arrival $X_1$ and break the domain of integration $[0, \infty)$ into the two parts $[0, t]$ and $(t, \infty)$: $M(t) = \E(N_t) = \int_0^\infty E(N_t \mid X_1 = s) \, dF(s) = \int_0^t \E(N_t \mid X_1 = s) \, dF(s) + \int_t^\infty \E(N_t \mid X_1 = s) \, dF(s)$ If $s \gt t$ then $\E(N_t \mid X_1 = s) = 0$. If $0 \le s \le t$, then by the renewal property, $\E(N_t \mid X_1 = s) = 1 + M(t - s)$. Hence we have $M(t) = \int_0^t [1 + M(t - s)] \, dF(s) = F(t) + (M * F)(t)$
2. From (a) and the commutative property of convolution given above (recall that $M$ is also a distribution function), we have $F = M - M * F = M - F * M$
Thus, the renewal function itself satisfies a renewal equation. Of course, we already have a formula for $M$, namely $M = \sum_{n=1}^\infty F_n$. However, sometimes $M$ can be computed more easily from the renewal equation directly. The next result is the transform version of the previous result:
The distributions $F$ and $M$ have Laplace transforms $\Phi$ and $\Gamma$, respectively, that are related as follows: $\Gamma = \frac{\Phi}{1 - \Phi}, \quad \Phi = \frac{\Gamma}{\Gamma + 1}$
Proof from the renewal equation
Taking Laplace transforms through the renewal equation $M = F + M * F$ (and treating all terms as distributions), we have $\Gamma = \Phi + \Gamma \Phi$. Solving for $\Gamma$ gives the result. Recall that since $F$ is a probability distribution on $[0, \infty)$ with $F(0) \lt 1$, we know that $0 \lt \Phi(s) \lt 1$ for $s \in (0, \infty)$. The second equation follows from the first by simple algebra.
Proof from convolution
Recall that $M = \sum_{n=1}^\infty F^{* n}$. Taking Laplace transforms (again treating all terms as distributions), and using geometric series we have $\Gamma = \sum_{n=1}^\infty \Phi^n = \frac{\Phi}{1 - \Phi}$ Recall again that $0 \lt \Phi(s) \lt 1$ for $s \in (0, \infty)$. Once again, the second equation follows from the first by simple algebra.
In particular, the renewal distribution $M$ always has a Laplace transform. The following theorem gives the fundamental results on the solution of the renewal equation.
Suppose that $a: [0, \infty) \to \R$ is locally bounded. Then the unique locally bounded solution to the renewal equation $u = a + u * F$ is $u = a + a * M$.
Direct proof
Suppose that $u = a + a * M$. Then $u * F = a * F + a * M * F$. But from the renewal equation for $M$ above, $M * F = M - F$. Hence we have $u * F = a * F + a * (M - F) = a * [F + (M - F)] = a * M$. But $a * M = u - a$ by definition of $u$, so $u = a + u * F$ and hence $u$ is a solution to the renewal equation. Next since $a$ is locally bounded, so is $u = a + a * M$. Suppose now that $v$ is another locally bounded solution of the integral equation, and let $w = u - v$. Then $w$ is locally bounded and $w * F = (u * F) - (v * F) = (u - a) - (v - a) = u - v = w$. Iterating, $w = w * F_n$ for $n \in \N_+$. Suppose that $\left|w(s)\right| \le D_t$ for $0 \le s \le t$. Then $\left|w(t)\right| \le D_t \, F_n(t)$ for $n \in \N_+$. Since $M(t) = \sum_{n=1}^\infty F_n(t) \lt \infty$ it follows that $F_n(t) \to 0$ as $n \to \infty$. Hence $w(t) = 0$ for $t \in [0, \infty)$ and so $u = v$.
Proof from Laplace transforms
Let $\alpha$ and $\theta$ denote the Laplace transforms of the functions $a$ and $u$, respectively, and $\Phi$ the Laplace transform of the distribution $F$. Taking Laplace transforms through the renewal equation gives the simple algebraic equation $\theta = \alpha + \theta \Phi$. Solving gives $\theta = \frac{\alpha}{1 - \Phi} = \alpha \left(1 + \frac{\Phi}{1 - \Phi}\right) = \alpha + \alpha \Gamma$ where $\Gamma = \frac{\Phi}{1 - \Phi}$ is the Laplace transform of the distribution $M$. Thus $\theta$ is the transform of $a + a * M$.
Returning to the renewal equations for $M$ and $F$ above, we now see that the renewal function $M$ completely determines the renewal process: from $M$ we can obtain $F$, and everything is ultimately constructed from the interarrival times. Of course, this is also clear from the Laplace transform result above which gives simple algebraic equations for each transform in terms of the other.
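When the renewal equation cannot be solved in closed form, it can be approximated numerically by discretizing the convolution on a grid. The following Python sketch is one such approximation (an added illustration, not a method developed in the text); it uses the uniform interarrival distribution on $[0, 1]$ so that the answer can be compared with the closed form $M(t) = e^t - 1$ on $[0, 1]$ derived in the example below. The grid size and the right-endpoint quadrature rule are assumptions.

import numpy as np

T_max, n = 1.0, 1000
grid = np.linspace(0.0, T_max, n + 1)       # t_k = k h with h = T_max / n
F = lambda t: np.clip(t, 0.0, 1.0)          # uniform [0, 1] interarrival distribution function
dF = np.diff(F(grid))                       # F-measure of the cells (t_{j-1}, t_j]
M = np.zeros(n + 1)                         # M[k] approximates M(t_k); M(0) = 0
for k in range(1, n + 1):
    # M(t_k) is approximately F(t_k) + sum_{j=1}^{k} M(t_k - t_j) [F(t_j) - F(t_{j-1})]
    M[k] = F(grid[k]) + np.dot(M[:k][::-1], dF[:k])

print(M[-1], np.exp(1.0) - 1.0)             # approximate M(1) versus e - 1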
The Distribution of the Age Variables
Let's recall the definition of the age variables. A deterministic time $t \in [0, \infty)$ falls in the random renewal interval $\left[T_{N_t}, T_{N_t + 1}\right)$. The current life (or age) at time $t$ is $C_t = t - T_{N_t}$, the remaining life at time $t$ is $R_t = T_{N_t + 1} - t$, and the total life at time $t$ is $L_t = T_{N_t + 1} - T_{N_t}$. In the usual reliability setting, $C_t$ is the age of the device that is in service at time $t$, while $R_t$ is the time until that device fails, and $L_t$ is the total lifetime of the device.
For $t, \, y \in [0, \infty)$, let $r_y(t) = \P(R_t \gt y) = \P\left(N(t, t + y] = 0\right)$ and let $F^c_y(t) = F^c(t + y)$. Note that $y \mapsto r_y(t)$ is the right distribution function of $R_t$. We will derive and then solve a renewal equation for $r_y$ by conditioning on the time of the first arrival. We can then find integral equations that describe the distribution of the current age and the joint distribution of the current and remaining ages.
For $y \in [0, \infty)$, $r_y$ satisfies the renewal equation $r_y = F^c_y + r_y * F$ and hence for $t \in [0, \infty)$, $\P(R_t \gt y) = F^c(t + y) + \int_0^t F^c(t + y - s) \, dM(s), \quad y \ge 0$
Proof
As usual, we condition on the time of the first renewal: $\P(R_t \gt y) = \int_0^\infty \P(R_t \gt y \mid X_1 = s) \, dF(s)$ We are naturally led to break the domain $[0, \infty)$ of the integral into three parts $[0, t]$, $(t, t + y]$, and $(t + y, \infty)$, which we take one at a time.
Note first that $\P(R_t \gt y \mid X_1 = s) = \P(R_{t-s} \gt y)$ for $s \in [0, t]$
Next note that $\P(R_t \gt y \mid X_1 = s) = 0$ for $s \in (t, t + y]$
Finally note that $\P(R_t \gt y \mid X_1 = s) = 1$ for $s \in (t + y, \infty)$
Putting the pieces together we have $\P(R_t \gt y) = \int_0^t \P(R_{t - s} \gt y) \, dF(s) + \int_t^{t+y} 0 \, dF(s) + \int_{t + y}^\infty 1 \, dF(s)$ In terms of our function notation, the first integral is $(r_y * F)(t)$, the second integral is 0 of course, and the third integral is $1 - F(t + y) = F_y^c(t)$. Thus the renewal equation is satisfied and the formula for $\P(R_t \gt y)$ follows from the fundamental theorem on renewal equations.
We can now describe the distribution of the current age.
For $t \in [0, \infty)$, $\P(C_t \ge x) = F^c(t) + \int_0^{t-x} F^c(t - s) \, dM(s), \quad x \in [0, t]$
Proof
This follows from the previous theorem and the fact that $\P(C_t \ge x) = \P(R_{t-x} \gt x)$ for $x \in [0, t]$.
Finally we get the joint distribution of the current and remaining ages.
For $t \in [0, \infty)$, $\P(C_t \ge x, R_t \gt y) = F^c(t + y) + \int_0^{t-x} F^c(t + y - s) \, dM(s), \quad x \in [0, t], \; y \in [0, \infty)$
Proof
Recall that $\P(C_t \ge x, R_t \gt y) = \P(R_{t-x} \gt x + y)$. The result now follows from the result above for the remaining life.
Examples and Special Cases
Uniformly Distributed Interarrivals
Consider the renewal process with interarrival times uniformly distributed on $[0, 1]$. Thus the distribution function of an interarrival time is $F(x) = x$ for $0 \le x \le 1$. The renewal function $M$ can be computed from the general renewal equation for $M$ by successively solving differential equations. The following exercise gives the first two cases.
On the interval $[0, 2]$, show that $M$ is given as follows:
1. $M(t) = e^t - 1$ for $0 \le t \le 1$
2. $M(t) = (e^t - 1) - (t - 1)e^{t-1}$ for $1 \le t \le 2$
Solution
For $0 \le t \le 1$, the renewal equation gives $M(t) = t + \int_0^t M(t - s) \, ds = t + \int_0^t M(x) \, dx$. Differentiating gives $M^\prime(t) = 1 + M(t)$ with $M(0) = 0$, so $M(t) = e^t - 1$ on $[0, 1]$. For $1 \le t \le 2$, the renewal equation gives $M(t) = 1 + \int_0^1 M(t - s) \, ds = 1 + \int_{t-1}^t M(x) \, dx$, so $M^\prime(t) = M(t) - M(t - 1) = M(t) - \left(e^{t-1} - 1\right)$. Solving this linear differential equation with the initial condition $M(1) = e - 1$ gives $M(t) = \left(e^t - 1\right) - (t - 1) e^{t-1}$ on $[1, 2]$.
Show that the Laplace transform $\Phi$ of the interarrival distribution $F$ and the Laplace transform $\Gamma$ of the renewal distribution $M$ are given by $\Phi(s) = \frac{1 - e^{-s}}{s}, \; \Gamma(s) = \frac{1 - e^{-s}}{s - 1 + e^{-s}}; \quad s \in (0, \infty)$
Solution
First note that $\Phi(s) = \int_0^\infty e^{-s t} dF(t) = \int_0^1 e^{-s t} dt = \frac{1 - e^{-s}}{s}, \quad s \in (0, \infty)$ The formula for $\Gamma$ follows from $\Gamma = \Phi \big/ (1 - \Phi)$.
Open the renewal experiment and select the uniform interarrival distribution on the interval $[0, 1]$. For each of the following values of the time parameter, run the experiment 1000 times and note the shape and location of the empirical distribution of the counting variable.
1. $t = 5$
2. $t = 10$
3. $t = 15$
4. $t = 20$
5. $t = 25$
6. $t = 30$
The Poisson Process
Recall that the Poisson process has interarrival times that are exponentially distributed with rate parameter $r \gt 0$. Thus, the interarrival distribution function $F$ is given by $F(x) = 1 - e^{-r x}$ for $x \in [0, \infty)$. The following exercises give alternate proofs of fundamental results obtained in the Introduction.
Show that the renewal function $M$ is given by $M(t) = r t$ for $t \in [0, \infty)$
1. Using the renewal equation
2. Using Laplace transforms
Solution
1. The renewal equation gives $M(t) = 1 - e^{-r t} + \int_0^t M(t - s) r e^{-r s} \, ds$ Substituting $x = t - s$ in the integral gives $M(t) = 1 - e^{-r t} + re^{-r t} \int_0^t M(x) e^{r x} \, dx$ Multiplying through by $e^{ r t}$, differentiating with respect to $t$, and simplifying gives $M^\prime(t) = r$ for $t \ge 0$. Since $M(0) = 0$, the result follows.
2. The Laplace transform $\Phi$ of the distribution $F$ is given by $\Phi(s) = \int_0^\infty e^{-s t} r e^{- r t} dt = \int_0^\infty r e^{-(s + r) t} dt = \frac{r}{r + s}, \quad s \in (0, \infty)$ So the Laplace transform $\Gamma$ of the distribution $M$ is given by $\Gamma(s) = \frac{\Phi(s)}{1 - \Phi(s)} = \frac{r}{s}, \quad s \in (0, \infty)$ But this is the Laplace transform of the distribution $t \mapsto r t$.
Show that the current and remaining life at time $t \ge 0$ satisfy the following properties:
1. $C_t$ and $R_t$ are independent.
2. $R_t$ has the same distribution as an interarrival time, namely the exponential distribution with rate parameter $r$.
3. $C_t$ has a truncated exponential distribution with parameters $t$ and $r$: $\P(C_t \ge x) = \begin{cases} e^{-r x}, & 0 \le x \le t \\ 0, & x \gt t \end{cases}$
Solution
Recall again that $M(t) = r t$ for $t \in [0, \infty)$. Using the result above on the joint distribution of the current and remaining life, and some standard calculus, we have $\P(C_t \ge x, R_t \ge y) = e^{-r (t + y)} + \int_0^{t - x} e^{-r(t + y - s)} r ds = e^{-r x} e^{-r y}, \quad x \in [0, t], \, y \in [0, \infty)$ Letting $y = 0$ gives $\P(C_t \ge x) = e^{-r x}$ for $x \in [0, t]$. Letting $x = 0$ gives $\P(R_t \ge y) = e^{-r y}$ for $y \in [0, \infty)$. But then also $\P(C_t \ge x, R_t \ge y) = \P(C_t \ge x) \P(R_t \ge y)$ for $x \in [0, t]$ and $y \in [0, \infty)$ so the variables are independent.
Bernoulli Trials
Consider the renewal process for which the interarrival times have the geometric distribution with parameter $p$. Recall that the probability density function is $f(n) = (1 - p)^{n-1}p, \quad n \in \N_+$ The arrivals are the successes in a sequence of Bernoulli trials. The number of successes $Y_n$ in the first $n$ trials is the counting variable for $n \in \N$. The renewal equations in this section can be used to give alternate proofs of some of the fundamental results in the Introduction.
Show that the renewal function is $M(n) = n p$ for $n \in \N$
1. Using the renewal equation
2. Using Laplace transforms
Proof
1. The renewal equation for $M$ is $M(n) = F(n) + (M * F)(n) = 1 - (1 - p)^n + \sum_{k=1}^n M(n - k) p (1 - p)^{k-1}, \quad n \in \N$ So substituting values of $n$ successively we have \begin{align} M(0) & = 1 - (1 - p)^0 = 0 \\ M(1) & = 1 - (1 - p) + M(0) p = p \\ M(2) & = 1 - (1 - p)^2 + M(1) p + M(0) p (1 - p) = 2 p \end{align} and so forth.
2. The Laplace transform $\Phi$ of the distribution $F$ is $\Phi(s) = \sum_{n=1}^\infty e^{-s n} p(1 - p)^{n-1} = \frac{p e^{-s}}{1 - (1 - p) e^{-s}}, \quad s \in (0, \infty)$ Hence the Laplace transform of the distribution $M$ is $\Gamma(s) = \frac{\Phi(s)}{1 - \Phi(s)} = p \frac{e^{-s}}{1 - e^{-s}}, \quad s \in (0, \infty)$ But $s \mapsto e^{-s} \big/ (1 - e^{-s})$ is the transform of the distribution $n \mapsto n$ on $\N$. That is, $\sum_{n=1}^\infty e^{-s n} \cdot 1 = \frac{e^{-s}}{1 - e^{-s}}, \quad s \in (0, \infty)$
Show that the current and remaining life at time $n \in \N$ satisfy the following properties:.
1. $C_n$ and $R_n$ are independent.
2. $R_n$ has the same distribution as an interarrival time, namely the geometric distribution with parameter $p$.
3. $C_n$ has a truncated geometric distribution with parameters $n$ and $p$: $\P(C_n = j) = \begin{cases} p (1 - p)^j, & j \in \{0, 1, \ldots, n-1\} \\ (1 - p)^n, & j = n \end{cases}$
Solution
Recall again that $M(n) = p n$ for $n \in \N$. Using the result above on the joint distribution of the current and remaining life and geometric series, we have $\P(C_n \ge j, R_n \gt k) = (1 - p)^{n + k} + \sum_{i=1}^{n-j} p (1 - p)^{n + k - i} = (1 - p)^{j + k}, \quad j \in \{0, 1, \ldots, n\}, \, k \in \N$ Letting $k = 0$ gives $\P(C_n \ge j) = (1 - p)^j$ for $j \in \{0, 1, \ldots, n\}$. Letting $j = 0$ gives $\P(R_n \gt k) = (1 - p)^k$ for $k \in \N$. But then also $\P(C_n \ge j, R_n \gt k) = \P(C_n \ge j) \P(R_n \gt k)$ for $j \in \{0, 1, \ldots, n\}$ and $k \in \N$ so the variables are independent.
A Gamma Interarrival Distribution
Consider the renewal process whose interarrival distribution $F$ is gamma with shape parameter $2$ and rate parameter $r \in (0, \infty)$. Thus $F(t) = 1 - (1 + r t) e^{-r t}, \quad t \in [0, \infty)$ Recall also that $F$ is the distribution of the sum of two independent random variables, each having the exponential distribution with rate parameter $r$.
Show that the renewal distribution function $M$ is given by $M(t) = -\frac{1}{4} + \frac{1}{2} r t + \frac{1}{4} e^{- 2 r t}, \quad t \in [0, \infty)$
Solution
The exponential distribution with rate parameter $r$ has Laplace transform $s \mapsto r \big/ (r + s)$ and hence the Laplace transform $\Phi$ of the interarrival distribution $F$ is given by $\Phi(s) = \left(\frac{r}{r + s}\right)^2$ So the Laplace transform $\Gamma$ of the distribution $M$ is $\Gamma(s) = \frac{\Phi(s)}{1 - \Phi(s)} = \frac{r^2}{s (s + 2 r)}$ Using a partial fraction decomposition, $\Gamma(s) = \frac{r}{2 s} - \frac{r}{2 (s + 2 r)} = \frac{1}{2} \frac{r}{s} - \frac{1}{4} \frac{2 r }{s + 2 r}$ But $r / s$ is the Laplace transform of the distribution $t \mapsto r t$ and $2 r \big/(s + 2 r)$ is the Laplace transform of the distribution $t \mapsto 1 - e^{-2 r t}$ (the exponential distribution with parameter $2 r$).
Note that $M(t) \approx -\frac{1}{4} + \frac{1}{2} r t$ as $t \to \infty$.
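The closed form for $M$ is also easy to check by simulation. Here is a minimal sketch (Python with NumPy; the values $r = 1$, $t = 10$ and the number of replications are illustrative choices) that estimates $\E(N_t)$ from simulated gamma interarrival times and compares it with the formula above.

```python
import numpy as np

rng = np.random.default_rng(0)
r, t, reps = 1.0, 10.0, 20_000
n_max = int(4 * r * t) + 50      # more than enough arrivals per run to cover [0, t]

# Interarrival times have the gamma distribution with shape 2 and scale 1/r;
# the arrival times are the cumulative sums along each row.
X = rng.gamma(shape=2.0, scale=1.0 / r, size=(reps, n_max))
T = np.cumsum(X, axis=1)
N_t = (T <= t).sum(axis=1)       # number of arrivals in [0, t] for each run

M_formula = -0.25 + 0.5 * r * t + 0.25 * np.exp(-2 * r * t)
print("simulated E(N_t):", N_t.mean())
print("formula   M(t):  ", M_formula)
```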
Open the renewal experiment and select the gamma interarrival distribution with shape parameter $k = 2$ and scale parameter $b = 1$ (so the rate parameter $r = \frac{1}{b}$ is also 1). For each of the following values of the time parameter, run the experiment 1000 times and note the shape and location of the empirical distribution of the counting variable.
1. $t = 5$
2. $t = 10$
3. $t = 15$
4. $t = 20$
5. $t = 25$
6. $t = 30$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.02%3A_Renewal_Equations.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
We start with a renewal process as constructed in the introduction. Thus, $\bs{X} = (X_1, X_2, \ldots)$ is the sequence of interarrival times. These are independent, identically distributed, nonnegative variables with common distribution function $F$ (satisfying $F(0) \lt 1$) and common mean $\mu$. When $\mu = \infty$, we let $1 / \mu = 0$. When $\mu \lt \infty$, we let $\sigma$ denote the common standard deviation. Recall also that $F^c = 1 - F$ is the right distribution function (or reliability function). Then, $\bs{T} = (T_0, T_1, \ldots)$ is the arrival time sequence, where $T_0 = 0$ and $T_n = \sum_{i=1}^n X_i$ is the time of the $n$th arrival for $n \in \N_+$. Finally, $\bs{N} = \{N_t: t \in [0, \infty)\}$ is the counting process, where for $t \in [0, \infty)$, $N_t = \sum_{n=1}^\infty \bs{1}(T_n \le t)$ is the number of arrivals in $[0, t]$. The renewal function $M$ is defined by $M(t) = \E\left(N_t\right)$ for $t \in [0, \infty)$.
We noted earlier that the arrival time process and the counting process are inverses, in a sense. The arrival time process is the partial sum process for a sequence of independent, identically distributed variables. Thus, it seems reasonable that the fundamental limit theorems for partial sum processes (the law of large numbers and the central limit theorem) should have analogs for the counting process. That is indeed the case, and the purpose of this section is to explore the limiting behavior of renewal processes. The main results that we will study, known appropriately enough as renewal theorems, are important for other stochastic processes, particularly Markov chains.
Basic Theory
The Law of Large Numbers
Our first result is a strong law of large numbers for the renewal counting process, which comes, as you might guess, from the law of large numbers for the sequence of arrival times.
If $\mu \lt \infty$ then $N_t / t \to 1 / \mu$ as $t \to \infty$ with probability 1.
Proof
Recall that $T_{N_t} \le t \lt T_{N_t + 1}$ for $t \gt 0$. Hence, if $N_t \gt 0$, $\frac{T_{N_t}}{N_t} \le \frac{t}{N_t} \lt \frac{T_{N_t + 1}}{N_t}$ Recall that $N_t \to \infty$ as $t \to \infty$ with probability 1. Recall also that, by the strong law of large numbers, $T_n / n \to \mu$ as $n \to \infty$ with probability 1. It follows that $T_{N_t} \big/ N_t \to \mu$ as $t \to \infty$ with probability 1. Also, $(N_t + 1) \big/ N_t \to 1$ as $t \to \infty$ with probability 1. Therefore $\frac{T_{N_t + 1}}{N_t} = \frac{T_{N_t + 1}}{N_t + 1} \frac{N_t + 1}{N_t} \to \mu$ as $t \to \infty$ with probability 1. Hence by the squeeze theorem for limits, $t \big/ N_t \to \mu$, and therefore $N_t / t \to 1 / \mu$, as $t \to \infty$ with probability 1.
Thus, $1 / \mu$ is the limiting average rate of arrivals per unit time.
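The limit is easy to see numerically along a single long sample path. The sketch below (Python with NumPy) uses exponential interarrival times with mean $\mu = 2$, an illustrative choice only; any interarrival distribution with finite mean would do.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.0                                        # mean interarrival time
X = rng.exponential(scale=mu, size=200_000)     # one long sequence of interarrival times
T = np.cumsum(X)                                # arrival times

for t in [10, 100, 1_000, 10_000, 100_000]:
    N_t = np.searchsorted(T, t, side="right")   # number of arrivals in [0, t]
    print(f"t = {t:>6}:  N_t / t = {N_t / t:.4f}   (1 / mu = {1 / mu:.4f})")
```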
Open the renewal experiment and set $t = 50$. For a variety of interarrival distributions, run the simulation 1000 times and note how the empirical distribution is concentrated near $t / \mu$.
The Central Limit Theorem
Our next goal is to show that the counting variable $N_t$ is asymptotically normal.
Suppose that $\mu$ and $\sigma$ are finite, and let $Z_t = \frac{N_t - t / \mu}{\sigma \sqrt{t / \mu^3}}, \quad t \gt 0$ The distribution of $Z_t$ converges to the standard normal distribution as $t \to \infty$.
Proof
For $n \in \N_+$, let $W_n = \frac{T_n - n \mu}{\sigma \sqrt{n}}$ The distribution of $W_n$ converges to the standard normal distribution as $n \to \infty$, by the ordinary central limit theorem. Next, for $z \in \R$, $\P(Z_t \le z) = \P\left(T_{n(z,t)} \gt t\right)$ where $n(z, t) = \left\lfloor t / \mu + z \sigma \sqrt{t / \mu^3} \right\rfloor$. Also, $\P(Z_t \le z) = \P\left[W_{n(z,t)} \gt w(z, t)\right]$ where (ignoring the rounding in $n(z, t)$) $w(z, t) = -\frac{z}{\sqrt{1 + z \sigma \big/ \sqrt{\mu t}}}$ But $n(z, t) \to \infty$ as $t \to \infty$ and $w(z, t) \to -z$ as $t \to \infty$. Recall that $1 - \Phi(-z) = \Phi(z)$, where as usual, $\Phi$ is the standard normal distribution function. Thus, we conclude that $\P(Z_t \le z) \to \Phi(z)$ as $t \to \infty$.
Open the renewal experiment and set $t = 50$. For a variety of interarrival distributions, run the simulation 1000 times and note the normal shape of the empirical distribution. Compare the empirical mean and standard deviation to $t / \mu$ and $\sigma \sqrt{t / \mu^3}$, respectively.
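The asymptotic normality is also easy to see in a quick simulation. Here is a sketch in Python with NumPy (the exponential interarrival distribution, the time $t = 200$, and the number of replications are illustrative assumptions); the summary statistics of the standardized counting variable $Z_t$ should be close to those of the standard normal distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
t, reps = 200.0, 10_000
mu, sigma = 2.0, 2.0                 # mean and standard deviation of an exponential interarrival time
n_max = int(2 * t / mu) + 100        # enough arrivals per run to cover [0, t]

X = rng.exponential(scale=mu, size=(reps, n_max))
N_t = (np.cumsum(X, axis=1) <= t).sum(axis=1)
Z_t = (N_t - t / mu) / (sigma * np.sqrt(t / mu ** 3))

print("mean of Z_t:", round(Z_t.mean(), 3), "  sd of Z_t:", round(Z_t.std(), 3))
# For a standard normal variable, about 95% of values fall in (-1.96, 1.96)
print("P(|Z_t| < 1.96):", round(np.mean(np.abs(Z_t) < 1.96), 3))
```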
The Elementary Renewal Theorem
The elementary renewal theorem states that the basic limit in the law of large numbers above holds in mean, as well as with probability 1. That is, the limiting mean average rate of arrivals is $1 / \mu$. The elementary renewal theorem is of fundamental importance in the study of the limiting behavior of Markov chains, but the proof is not as easy as one might hope. In particular, recall that convergence with probability 1 does not imply convergence in mean, so the elementary renewal theorem does not follow from the law of large numbers.
$M(t) / t \to 1 / \mu$ as $t \to \infty$.
Proof
We first show that $\liminf_{t \to \infty} M(t) / t \ge 1 / \mu$. Note first that this result is trivial if $\mu = \infty$, so assume that $\mu \lt \infty$. Next, recall that $N_t + 1$ is a stopping time for the sequence of interarrival times $\bs{X}$. Recall also that $T_{N_t + 1} \gt t$ for $t \gt 0$. From Wald's equation it follows that $\E\left(T_{N_t + 1}\right) = \E(N_t + 1) \mu = [M(t) + 1] \mu \gt t$ Therefore $M(t) / t \gt 1 / \mu - 1 / t$ for $t \gt 0$. Hence $\liminf_{t \to \infty} M(t) / t \ge 1 / \mu$.
Next we show that $\limsup_{t \to \infty} M(t) / t \le 1 / \mu$. For this part of the proof, we need to truncate the arrival times, and use the basic comparison method. For $a \gt 0$, let $X_{a,i} = \begin{cases} X_i, & X_i \le a \\ a, & X_i \gt a \end{cases}$ and consider the renewal process with the sequence of interarrival times $\bs{X}_a = \left(X_{a,1}, X_{a,2}, \ldots\right)$. We will use the standard notation developed in the introductory section. First note that $T_{a, N_{a,t} + 1} \le t + a$ for $t \gt 0$ and $a \gt 0$. From Wald's equation again, it follows that $\left[M_a(t) + 1\right] \mu_a \le t + a$. Therefore $\frac{M_a(t)}{t} \le \left(\frac{1}{\mu_a} + \frac{a}{t \mu_a}\right) - \frac{1}{t}, \quad a, \; t \gt 0$ But $M(t) \le M_a(t)$ for $t \gt 0$ and $a \gt 0$ and therefore $\frac{M(t)}{t} \le \left(\frac{1}{\mu_a} + \frac{a}{t \mu_a} \right) - \frac{1}{t}, \quad a, \; t \gt 0$ Hence $\limsup_{t \to \infty} M(t) / t \le 1 / \mu_a$ for $a \gt 0$. Finally, $\mu_a \to \mu$ as $a \to \infty$ by the monotone convergence theorem, so it follows that $\limsup_{t \to \infty} M(t) / t \le 1 / \mu$
Open the renewal experiment and set $t = 50$. For a variety of interarrival distributions, run the experiment 1000 times and once again compare the empirical mean and standard deviation to $t / \mu$ and $\sigma \sqrt{t / \mu^3}$, respectively.
The Renewal Theorem
The renewal theorem states that the expected number of renewals in an interval is asymptotically proportional to the length of the interval; the proportionality constant is $1 / \mu$. The precise statement is different, depending on whether the renewal process is arithmetic or not. Recall that for an arithmetic renewal process, the interarrival times take values in a set of the form $\{n d: n \in \N\}$ for some $d \in (0, \infty)$, and the largest such $d$ is the span of the distribution.
For $h \gt 0$, $M(t, t + h] \to \frac{h}{\mu}$ as $t \to \infty$ in each of the following cases:
1. The renewal process is non-arithmetic
2. The renewal process is arithmetic with span $d$, and $h$ is a multiple of $d$
The renewal theorem is also known as Blackwell's theorem in honor of David Blackwell. The final limit theorem we will study is the most useful, but before we can state the theorem, we need to define and study the class of functions to which it applies.
Direct Riemann Integration
Recall that in the ordinary theory of Riemann integration, the integral of a function on the interval $[0, t]$ exists if the upper and lower Riemann sums converge to a common number as the partition is refined. Then, the integral of the function on $[0, \infty)$ is defined to be the limit of the integral on $[0, t]$, as $t \to \infty$. For our new definition, a function is said to be directly Riemann integrable if the lower and upper Riemann sums on the entire unbounded interval $[0, \infty)$ converge to a common number as the partition is refined, a more restrictive definition than the usual one.
Suppose that $g: [0, \infty) \to [0, \infty)$. For $h \in [0, \infty)$ and $k \in \N$, let $m_k(g, h) = \inf\{g(t): t \in [k h, (k + 1) h)\}$ and $M_k(g, h) = \sup\{g(t): t \in [k h, (k + 1)h)\}$. The lower and upper Riemann sums of $g$ on $[0, \infty)$ corresponding to $h$ are $L_g(h) = h \sum_{k=0}^\infty m_k(g, h), \quad U_g(h) = h \sum_{k=0}^\infty M_k(g, h)$ The sums exist in $[0, \infty]$ and satisfy the following properties:
1. $L_g(h) \le U_g(h)$ for $h \gt 0$
2. $L_g(h)$ increases as $h$ decreases
3. $U_g(h)$ decreases as $h$ decreases
It follows that $\lim_{h \downarrow 0} L_g(h)$ and $\lim_{h \downarrow 0} U_g(h)$ exist in $[0, \infty]$ and $\lim_{h \downarrow 0} L_g(h) \le \lim_{h \downarrow 0} U_g(h)$. Naturally, the case where the limits are finite and agree is what we're after.
A function $g: [0, \infty) \to [0, \infty)$ is directly Riemann integrable if $U_g(h) \lt \infty$ for every $h \gt 0$ and $\lim_{h \downarrow 0} L_g(h) = \lim_{h \downarrow 0} U_g(h)$ The common value is $\int_0^\infty g(t) \, dt$.
Ordinary Riemann integrability on $[0, \infty)$ allows functions that are unbounded and oscillate wildly as $t \to \infty$, and these are the types of functions that we want to exclude for the renewal theorems. The following result connects ordinary Riemann integrability with direct Riemann integrability.
If $g: [0, \infty) \to [0, \infty)$ is integrable (in the ordinary Riemann sense) on $[0, t]$ for every $t \in [0, \infty)$ and if $U_g(h) \lt \infty$ for some $h \in (0, \infty)$ then $g$ is directly Riemann integrable.
Here is a simple and useful class of functions that are directly Riemann integrable.
Suppose that $g: [0, \infty) \to [0, \infty)$ is decreasing with $\int_0^\infty g(t) \, dt \lt \infty$. Then $g$ is directly Riemann integrable.
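For a concrete illustration, the sketch below (Python; the function $g(t) = e^{-t}$, the grid sizes, and the truncation point are illustrative choices) computes the lower and upper sums for a decreasing function. Both converge to $\int_0^\infty e^{-t} \, dt = 1$ as $h \downarrow 0$.

```python
import math

def riemann_sums(g, h, k_max):
    # For a decreasing function g, the infimum on [k h, (k + 1) h) is g((k + 1) h)
    # and the supremum is g(k h); the infinite series is truncated at k_max terms.
    lower = h * sum(g((k + 1) * h) for k in range(k_max))
    upper = h * sum(g(k * h) for k in range(k_max))
    return lower, upper

g = lambda t: math.exp(-t)
for h in [1.0, 0.5, 0.1, 0.01]:
    k_max = int(50 / h)              # e^{-50} is negligible, so truncate here
    L, U = riemann_sums(g, h, k_max)
    print(f"h = {h:>5}:  L_g(h) = {L:.5f}   U_g(h) = {U:.5f}")
```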
The Key Renewal Theorems
The key renewal theorem is an integral version of the renewal theorem, and is the most useful of the various limit theorems.
Suppose that the renewal process is non-arithmetic and that $g: [0, \infty) \to [0, \infty)$ is directly Riemann integrable. Then $(g * M)(t) = \int_0^t g(t - s) \, dM(s) \to \frac{1}{\mu} \int_0^\infty g(x) \, dx \text{ as } t \to \infty$
Connections
Our next goal is to see how the various renewal theorems relate.
The renewal theorem implies the elementary renewal theorem:
Proof
Let $a_n = M(n, n + 1]$ for $n \in \N$. From the renewal theorem, $a_n \to 1 / \mu$ as $n \to \infty$. Therefore $\frac{1}{n} \sum_{k=0}^{n-1} a_k \to \frac{1}{\mu}$ as $n \to \infty$. It follows that $M(n) / n \to 1 / \mu$ as $n \to \infty$. But the renewal function is increasing so for $t \gt 0$, $\frac{\lfloor t \rfloor}{t} \frac{M(\lfloor t \rfloor)}{\lfloor t \rfloor} \le \frac{M(t)}{t} \le \frac{\lceil t \rceil}{t} \frac{M(\lceil t \rceil)}{\lceil t \rceil}$ From the squeeze theorem for limits it follows that $M(t) / t \to 1 / \mu$ as $t \to \infty$.
Conversely, the elementary renewal theorem almost implies the renewal theorem.
Proof
Assume that $g(x) = \lim_{t \to \infty} [M(t + x) - M(t)]$ exists for each $x \gt 0$. (This assumption is the reason that the proof is incomplete.) Note that $M(t + x + y) - M(t) = [M(t + x + y) - M(t + x)] + [M(t + x) - M(t)]$ Let $t \to \infty$ to conclude that $g(x + y) = g(x) + g(y)$ for all $x \ge 0$ and $y \ge 0$. It follows that $g$ is increasing and $g(x) = c x$ for $x \ge 0$ where $c$ is a constant. Exactly as in the proof of the previous theorem, it follows that $M(n) / n \to c$ as $n \to \infty$. From the elementary renewal theorem, we can conclude that $c = 1 / \mu$.
The key renewal theorem implies the renewal theorem
Proof
This result follows by applying the key renewal theorem to the function $g_h(x) = \bs{1}(0 \le x \le h)$ where $h \gt 0$.
Conversely, the renewal theorem implies the key renewal theorem.
The Age Processes
The key renewal theorem can be used to find the limiting distributions of the current and remaining age. Recall that for $t \in [0, \infty)$ the current life at time $t$ is $C_t = t - T_{N_t}$ and the remaining life at time $t$ is $R_t = T_{N_t + 1} - t$.
If the renewal process is non-arithmetic, then $\P(R_t \gt x) \to \frac{1}{\mu} \int_x^\infty F^c(y) \, dy \text{ as } t \to \infty, \quad x \in [0, \infty)$
Proof
Recall that $\P(R_t \gt x) = F^c(t + x) + \int_0^t F^c(t + x - s) \, dM(s), \quad x \in [0, \infty)$ But $F^c(t + x) \to 0$ as $t \to \infty$, and by the key renewal theorem, the integral converges to $\frac{1}{\mu} \int_0^\infty F^c(x + y) \, dy$. Finally a change of variables in the limiting integral gives the result.
If the renewal process is non-arithmetic, then $\P(C_t \gt x) \to \frac{1}{\mu} \int_x^\infty F^c(y) \, dy \text{ as } t \to \infty, \quad x \in [0, \infty)$
Proof
Recall that, since the renewal process is non-arithmetic, $\P(C_t \gt x) = F^c(t) + \int_0^{t - x} F^c(t - s) \, dM(s), \quad x \in [0, t]$ Again, $F^c(t) \to 0$ as $t \to \infty$. The change of variables $u = t - x$ changes the integral into $\int_0^u F^c(u + x - s) \, dM(s)$. By the key renewal theorem, this integral converges to $\frac{1}{\mu} \int_0^\infty F^c(y + x) \, dy = \frac{1}{\mu} \int_x^\infty F^c(y) \, dy$ as $t \to \infty$.
The current and remaining life have the same limiting distribution. In particular, $\lim_{t \to \infty} \P(C_t \le x) = \lim_{t \to \infty} \P(R_t \le x) = \frac{1}{\mu} \int_0^x F^c(y) \, dy, \quad x \in [0, \infty)$
Proof
By the previous two theorems, the limiting right distribution functions of $R_t$ and $C_t$ are the same. The ordinary (left) limiting distribution function is $1 - \frac{1}{\mu} \int_x^\infty F^c(y) \, dy = \frac{1}{\mu} \left(\mu - \int_x^\infty F^c(y) \, dy \right)$ But recall that $\mu = \int_0^\infty F^c(y) \, dy$ so the result follows since $\int_0^\infty F^c(y) \, dy - \int_x^\infty F^c(y) \, dy = \int_0^x F^c(y) \, dy$
The fact that the current and remaining age processes have the same limiting distribution may seem surprising at first, but there is a simple intuitive explanation. After a long period of time, the renewal process looks just about the same backward in time as forward in time. But reversing the direction of time reverses the roles of current and remaining age.
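As a numerical check (a sketch in Python with NumPy), take gamma interarrival times with shape 2 and rate 1, so that $\mu = 2$ and $F^c(y) = (1 + y) e^{-y}$; the limiting distribution function then works out to $1 - (1 + x/2) e^{-x}$. The time $t = 50$ and the number of replications are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
t, reps, n_max = 50.0, 20_000, 200     # n_max arrivals per run easily covers [0, t]

X = rng.gamma(shape=2.0, scale=1.0, size=(reps, n_max))   # interarrival times, mean 2
T = np.cumsum(X, axis=1)
idx = np.argmax(T > t, axis=1)         # index of the first arrival after time t
R_t = T[np.arange(reps), idx] - t      # remaining life at time t

# limiting cdf: (1/mu) int_0^x (1 + y) e^{-y} dy = 1 - (1 + x/2) e^{-x}
for x in [0.5, 1.0, 2.0, 4.0]:
    print(x, round(float(np.mean(R_t <= x)), 4), round(1 - (1 + x / 2) * np.exp(-x), 4))
```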
Examples and Special Cases
The Poisson Process
Recall that the Poisson process, the most important of all renewal processes, has interarrival times that are exponentially distributed with rate parameter $r \gt 0$. Thus, the interarrival distribution function is $F(x) = 1 - e^{-r x}$ for $x \ge 0$ and the mean interarrival time is $\mu = 1 / r$.
Verify each of the following directly:
1. The law of large numbers for the counting process.
2. The central limit theorem for the counting process.
3. The elementary renewal theorem.
4. The renewal theorem.
Bernoulli Trials
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. Recall that $\bs{X}$ is a sequence of independent, identically distributed indicator variables with $p = \P(X = 1)$. We have studied a number of random processes derived from $\bs{X}$:
Random processes associated with Bernoulli trials.
1. $\bs{Y} = (Y_0, Y_1, \ldots)$ where $Y_n$ is the number of successes in the first $n$ trials. The sequence $\bs{Y}$ is the partial sum process associated with $\bs{X}$. The variable $Y_n$ has the binomial distribution with parameters $n$ and $p$.
2. $\bs{U} = (U_1, U_2, \ldots)$ where $U_n$ is the number of trials needed to go from success number $n - 1$ to success number $n$. These are independent variables, each having the geometric distribution on $\N_+$ with parameter $p$.
3. $\bs{V} = (V_0, V_1, \ldots)$ where $V_n$ is the trial number of success $n$. The sequence $\bs{V}$ is the partial sum process associated with $\bs{U}$. The variable $V_n$ has the negative binomial distribution with parameters $n$ and $p$.
Consider the renewal process with interarrival sequence $\bs{U}$. Thus, $\mu = 1 / p$ is the mean interarrival time, and $\bs{Y}$ is the counting process. Verify each of the following directly:
1. The law of large numbers for the counting process.
2. The central limit theorem for the counting process.
3. The elementary renewal theorem.
Consider the renewal process with interarrival sequence $\bs{X}$. Thus, the mean interarrival time is $\mu = p$ and the number of arrivals in the interval $[0, n]$ is $V_{n+1} - 1$ for $n \in \N$. Verify each of the following directly:
1. The law of large numbers for the counting process.
2. The central limit theorem for the counting process.
3. The elementary renewal theorem. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.03%3A_Renewal_Limit_Theorems.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
Basic Theory
Preliminaries
A delayed renewal process is just like an ordinary renewal process, except that the first arrival time is allowed to have a different distribution than the other interarrival times. Delayed renewal processes arise naturally in applications and are also found embedded in other random processes. For example, in a Markov chain (which we study in the next chapter), visits to a fixed state, starting in that state, form the random times of an ordinary renewal process. But visits to a fixed state, starting in another state, form a delayed renewal process.
Suppose that $\bs{X} = (X_1, X_2, \ldots)$ is a sequence of independent variables taking values in $[0, \infty)$, with $(X_2, X_3, \ldots)$ identically distributed. Suppose also that $\P(X_i \gt 0) \gt 0$ for $i \in \N_+$. The stochastic process with $\bs{X}$ as the sequence of interarrival times is a delayed renewal process.
As before, the actual arrival times are the partial sums of $\bs{X}$. Thus let $T_n = \sum_{i=1}^n X_i$ so that $T_0 = 0$ and $T_n$ is the time of the $n$th arrival for $n \in \{1, 2, \ldots\}$. Also as before, $N_t$ is the number of arrivals in $[0, t]$ (not counting $T_0$): $N_t = \sum_{n=1}^\infty \bs{1}(T_n \le t) = \max\{n \in \N: T_n \le t\}$ If we restart the clock at time $T_1 = X_1$, we have an ordinary renewal process with interarrival sequence $(X_2, X_3, \ldots)$. We use some of the standard notation developed in the Introduction for this renewal process. In particular, $F$ denotes the common distribution function and $\mu$ the common mean of $X_i$ for $i \in \{2, 3, \ldots\}$. Similarly $F_n = F^{*n}$ denotes the distribution function of the sum of $n$ independent variables with distribution function $F$, and $M$ denotes the renewal function: $M(t) = \sum_{n=1}^\infty F_n(t), \quad t \in [0, \infty)$ On the other hand, we will let $G$ denote the distribution function of $X_1$ (the special interarrival time, different from the rest), and we will let $G_n$ denote the distribution function of $T_n$ for $n \in \N_+$. As usual, $F^c = 1 - F$ and $G^c = 1 - G$ are the corresponding right-tail distribution functions.
$G_n = G * F_{n-1} = F_{n-1} * G$ for $n \in \N_+$.
Proof
This follows from the fact that $T_n$ is the sum of $n$ independent random variables; the first has distribution function $G$ and the remaining $n - 1$ have distribution function $F$.
Finally, we will let $U$ denote the renewal function for the delayed renewal process. Thus, $U(t) = \E(N_t)$ is the expected number of arrivals in $[0, t]$ for $t \in [0, \infty)$.
The delayed renewal function satisfies $U(t) = \sum_{n=1}^\infty G_n(t), \quad t \in [0, \infty)$
Proof
The proof is just as before. $U(t) = \E(N_t) = \E\left(\sum_{n=1}^\infty \bs{1}(T_n \le t)\right) = \sum_{n=1}^\infty \P(T_n \le t) = \sum_{n=1}^\infty G_n(t)$
The delayed renewal function $U$ satisfies the equation $U = G + M * G$; that is, $U(t) = G(t) + \int_0^t M(t - s) \, dG(s), \quad t \in [0, \infty)$
Proof
The proof follows from conditioning on the time of the first arrival $T_1 = X_1$. Note first that $\E(N_t \mid X_1 = s) = 0$ if $s \gt t$ and $\E(N_t \mid X_1 = s) = 1 + M(t - s)$ if $0 \le s \le t$. Hence
$U(t) = \int_0^\infty \E(N_t \mid X_1 = s) \, dG(s) = \int_0^t [1 + M(t - s)] \, dG(s) = G(t) + \int_0^t M(t - s) \, dG(s)$
The delayed renewal function $U$ satisfies the renewal equation $U = G + U * F$; that is, $U(t) = G(t) + \int_0^t U(t - s) \, dF(s), \quad t \in [0, \infty)$
Proof
Note that $U = \sum_{n=1}^\infty G_n = G + \sum_{n=2}^\infty G_n = G + \sum_{n=2}^\infty ( G_{n-1} * F ) = G + \left(\sum_{n=1}^\infty G_n\right) * F = G + U * F$
Asymptotic Behavior
In a delayed renewal process only the first arrival time is changed. Thus, it's not surprising that the asymptotic behavior of a delayed renewal process is the same as the asymptotic behavior of the corresponding regular renewal process. Our first result is the strong law of large numbers for the delayed renewal process.
$N_t / t \to 1 / \mu$ as $t \to \infty$ with probability 1.
Proof
We will show that $T_n / n \to \mu$ as $n \to \infty$ with probability 1. Then, the proof is exactly like the proof of the law of large numbers for a regular renewal process. For $n \in \{2, 3, \ldots \}$, $\frac{T_n}{n} = \frac{X_1}{n} + \frac{n - 1}{n} \frac{1}{n - 1} \sum_{i=2}^n X_i$ But $\frac{X_1}{n} \to 0$ as $n \to \infty$ with probability 1; of course $\frac{n}{n - 1} \to 1$ as $n \to \infty$; and $\frac{1}{n - 1} \sum_{i=2}^n X_i \to \mu$ as $n \to \infty$ with probability 1 by the ordinary strong law of large numbers.
Our next result is the elementary renewal theorem for the delayed renewal process.
$U(t) / t \to 1 / \mu$ as $t \to \infty$.
Next we have the renewal theorem for the delayed renewal process, also known as Blackwell's theorem, named for David Blackwell.
For $h \gt 0$, $U(t, t + h] = U(t + h) - U(t) \to h / \mu$ as $t \to \infty$ in each of the following cases:
1. $F$ is non-arithmetic
2. $F$ is arithmetic with span $d \in (0, \infty)$, and $h$ is a multiple of $d$.
Finally we have the key renewal theorem for the delayed renewal process.
Suppose that the renewal process is non-arithmetic and that $g: [0, \infty) \to [0, \infty)$ is directly Riemann integrable. Then $(g * U)(t) = \int_0^t g(t - s) \, dU(s) \to \frac{1}{\mu} \int_0^\infty g(x) \, dx \text{ as } t \to \infty$
Stationary Point Processes
Recall that a point process is a stochastic process that models a discrete set of random points in a measure space $(S, \mathscr{S}, \lambda)$. Often, of course, $S \subseteq \R^n$ for some $n \in \N_+$ and $\lambda$ is the corresponding $n$-dimensional Lebesgue measure. The special cases $S = \N$ with counting measure and $S = [0, \infty)$ with length measure are of particular interest, in part because renewal and delayed renewal processes give rise to point processes in these spaces.
For a general point process on $S$, we use our standard notation and denote the number of random points in $A \in \mathscr{S}$ by $N(A)$. There are a couple of natural properties that a point process may have. In particular, the process is said to be stationary if $\lambda(A) = \lambda(B)$ implies that $N(A)$ and $N(B)$ have the same distribution for $A, \; B \in \mathscr{S}$. In $[0, \infty)$ the term stationary increments is often used, because the stationarity property means that for $s, \, t \in [0, \infty)$, the distribution of $N(s, s + t] = N_{s + t} - N_s$ depends only on $t$.
Consider now a regular renewal process. We showed earlier that the asymptotic distributions of the current life and remaining life are the same. Intuitively, after a very long period of time, the renewal process looks pretty much the same forward in time or backward in time. This suggests that if we make the renewal process into a delayed renewal process by giving the first arrival time this asymptotic distribution, then the resulting point process will be stationary. This is indeed the case. Consider the setting and notation of the preliminary subsection above.
For the delayed renewal process, the point process $\bs{N}$ is stationary if and only if the initial arrival time has distribution function $G(t) = \frac{1}{\mu} \int_0^t F^c(s) \, ds, \quad t \in [0, \infty)$ in which case the renewal function is $U(t) = t / \mu$ for $t \in [0, \infty)$.
Proof
Suppose first that $\bs{N}$ has stationary increments. In particular, this means that the arrival times have continuous distributions. For $s, \, t \in [0, \infty)$, $U(s + t) = \E(N_{s + t}) = \E[(N_{s + t} - N_t) + N_t] = \E(N_{s + t} - N_t) + \E(N_t) = U(s) + U(t)$ A theorem from analysis states that the only increasing solutions to such a functional equation are linear functions, and hence $U(t) = c t$ for some positive constant $c$. Substituting $U(t) = c t$ into the renewal equation above gives $c t = G(t) + \int_0^t c (t - s) \, dF(s) = G(t) + c t \, F(t) - \int_0^t c s \, dF(s)$ Integrating by parts in the last integral and simplifying gives $G(t) = c \int_0^t F^c(s) \, ds$ Finally, if we let $t \to \infty$, the left side converges to 1 and the right side to $c \mu$, so $c = 1 / \mu$. Thus $G$ has the form given in the statement of the theorem and $U(t) = t / \mu$ for $t \in [0, \infty)$.
Conversely, suppose that $G$ has the form given in the theorem. Note that this is a continuous distribution with density function $t \mapsto F^c(t) \big/ \mu$. Substituting into the renewal equation above, it follows that the renewal density $U^\prime$ satisfies $U^\prime = \frac{1}{\mu} F^c + \frac{1}{\mu}F^c * \sum_{n=1}^\infty F_n = \frac{1}{\mu}$ Hence $U(t) = t / \mu$ for $t \ge 0$. Next, the process $\bs{N}$ has stationary increments if and only if the remaining life $R_t$ at time $t$ has distribution function $G$ for each $t$. Arguing just as in Section 2, we have $\P(R_t \gt y) = G^c(t + y) + \int_0^t F^c(t + y - s) \, d U(s), \quad y \ge 0$ But $G^c(t + y) = \frac{1}{\mu} \int_{t+y}^\infty F^c(u) \, du$ and $dU(s) = \frac{1}{\mu} \, ds$, so substituting into the last displayed equation and using a simple substitution in the integral gives $\P(R_t \gt y) = \frac{1}{\mu} \int_{t+y}^\infty F^c(u) \, du + \frac{1}{\mu} \int_y^{t+y} F^c(u) \, du = \frac{1}{\mu} \int_y^\infty F^c(u) \, du$
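Here is a simulation sketch of the stationarity result (Python with NumPy), for gamma interarrival times with shape 2 and rate 1, so that $\mu = 2$. One can check that the equilibrium density $F^c(t) / \mu = \frac{1}{2}(1 + t) e^{-t}$ is an equal mixture of an exponential density with rate 1 and a gamma density with shape 2 and rate 1, which makes the delayed first arrival easy to sample; the estimated renewal function should then be close to $t / \mu$. The parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
reps, n_max, mu = 20_000, 200, 2.0

# First arrival from the equilibrium distribution: its density (1/2)(1 + t)e^{-t}
# is an equal mixture of exponential(1) and gamma(2, 1) densities.
mix = rng.random(reps) < 0.5
X1 = np.where(mix, rng.exponential(1.0, reps), rng.gamma(2.0, 1.0, reps))

X_rest = rng.gamma(2.0, 1.0, size=(reps, n_max))        # the remaining interarrival times
T = np.cumsum(np.column_stack([X1, X_rest]), axis=1)    # arrival times

for t in [1.0, 5.0, 10.0, 20.0]:
    U_t = np.mean((T <= t).sum(axis=1))                 # estimate of U(t) = E(N_t)
    print(f"t = {t:>4}:  estimated U(t) = {U_t:.3f}   t / mu = {t / mu:.3f}")
```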
Examples and Applications
Patterns in Multinomial Trials
Suppose that $\bs{L} = (L_1, L_2, \ldots)$ is a sequence of independent, identically distributed random variables taking values in a finite set $S$, so that $\bs{L}$ is a sequence of multinomial trials. Let $f$ denote the common probability density function so that for a generic trial variable $L$, we have $f(a) = \P(L = a)$ for $a \in S$. We assume that all outcomes in $S$ are actually possible, so $f(a) \gt 0$ for $a \in S$.
In this section, we interpret $S$ as an alphabet, and we write the sequence of variables in concatenation form, $\bs{L} = L_1 L_2 \cdots$ rather than standard sequence form. Thus the sequence is an infinite string of letters from our alphabet $S$. We are interested in the repeated occurrence of a particular finite substring of letters (that is, a word or pattern) in the infinite sequence.
So, fix a word $\bs a$ (again, a finite string of elements of $S$), and consider the successive random trial numbers $(T_1, T_2, \ldots)$ where the word $\bs a$ is completed in $\bs{L}$. Since the sequence $\bs{L}$ is independent and identically distributed, it seems reasonable that these variables are the arrival times of a renewal process. However there is a slight complication. An example may help.
Suppose that $\bs{L}$ is a sequence of Bernoulli trials (so $S = \{0, 1\}$). Suppose that the outcome of $\bs{L}$ is $101100101010001101000110\cdots$
1. For the word $\bs a = 001$ note that $T_1 = 7$, $T_2 = 15$, $T_3 = 22$
2. For the word $\bs b = 010$, note that $T_1 = 8$, $T_2 = 10$, $T_3 = 12$, $T_4 =19$
In this example, you probably noted an important difference between the two words. For $\bs b$, a suffix of the word (a proper substring at the end) is also a prefix of the word (a proper substring at the beginning). Word $\bs a$ does not have this property. So, once we arrive at $\bs b$, there are ways to get to $\bs b$ again (taking advantage of the suffix-prefix) that do not exist starting from the beginning of the trials. On the other hand, once we arrive at $\bs a$, arriving at $\bs a$ again is just like with a new sequence of trials. Thus we are led to the following definition.
Suppose that $\bs a$ is a finite word from the alphabet $S$. If no proper suffix of $\bs a$ is also a prefix, then $\bs a$ is simple. Otherwise, $\bs a$ is compound.
Returning to the general setting, let $T_0 = 0$ and then let $X_n = T_n - T_{n-1}$ for $n \in \N_+$. For $k \in \N$, let $N_k = \sum_{n=1}^\infty \bs{1}(T_n \le k)$. For occurrences of the word $\bs a$, $\bs{X} = (X_1, X_2, \ldots)$ is the sequence of interarrival times, $\bs{T} = (T_0, T_1, \ldots)$ is the sequence of arrival times, and $\bs{N} = \{N_k: k \in \N\}$ is the counting process. If $\bs a$ is simple, these form an ordinary renewal process. If $\bs a$ is compound, they form a delayed renewal process, since $X_1$ will have a different distribution than $(X_2, X_3, \ldots)$. Since the structure of a delayed renewal process subsumes that of an ordinary renewal process, we will work with the notation above for the delayed process. In particular, let $U$ denote the renewal function. Everything in this paragraph depends on the word $\bs a$ of course, but we have suppressed this in the notation.
Suppose $\bs a = a_1 a_2 \cdots a_k$, where $a_i \in S$ for each $i \in \{1, 2, \ldots, k\}$, so that $\bs a$ is a word of length $k$. Note that $X_1$ takes values in $\{k, k + 1, \ldots\}$. If $\bs a$ is simple, this applies to the other interarrival times as well. If $\bs a$ is compound, the situation is more complicated: $X_2, \, X_3, \ldots$ will have some minimum value $j \lt k$, but the possible values are positive integers, of course, and include $\{k + 1, k + 2, \ldots\}$. In any case, the renewal process is arithmetic with span 1. Expanding the definition of the probability density function $f$, let $f(\bs a) = \prod_{i=1}^k f(a_i)$ so that $f(\bs a)$ is the probability of forming $\bs a$ with $k$ consecutive trials. Let $\mu(\bs a)$ denote the common mean of $X_n$ for $n \in \{2, 3, \ldots\}$, so $\mu(\bs a)$ is the mean number of trials between occurrences of $\bs a$. Let $\nu(\bs a) = \E(X_1)$, so that $\nu(\bs a)$ is the mean number of trials until $\bs a$ occurs for the first time. Our first result is an elegant connection between $\mu(\bs a)$ and $f(\bs a)$, which has a wonderfully simple proof from renewal theory.
If $\bs a$ is a word in $S$ then $\mu(\bs a) = \frac{1}{f(\bs a)}$
Proof
Suppose that $\bs a$ has length $k$, and consider the discrete interval $(n, n + 1] = \{n + 1\}$ for $n \ge k - 1$. The word $\bs a$ is completed at trial $n + 1$ if and only if trials $n - k + 2, \ldots, n + 1$ spell $\bs a$, an event with probability $f(\bs a)$. Hence $U(n, n + 1] = f(\bs a)$ for $n \ge k - 1$. On the other hand, the renewal process is arithmetic with span 1, so by the renewal theorem, $U(n, n + 1] \to 1 / \mu(\bs a)$ as $n \to \infty$. Therefore $f(\bs a) = 1 / \mu(\bs a)$.
Our next goal is to compute $\nu(\bs a)$ in the case that $\bs a$ is a compound word.
Suppose that $\bs a$ is a compound word, and that $\bs b$ is the largest word that is a proper suffix and prefix of $\bs a$. Then $\nu(\bs a) = \nu(\bs b) + \mu(\bs a) = \nu(\bs b) + \frac{1}{f(\bs a)}$
Proof
Since $\bs b$ is the largest prefix-suffix, the expected number of trials to go from $\bs b$ to $\bs a$ is the same as the expected number of trials to go from $\bs a$ to $\bs a$, namely $\mu(\bs a)$. (Note that the paths from $\bs b$ to $\bs a$ are the same as the paths from $\bs a$ to $\bs a$.) But to form the word $\bs a$ initially, the word $\bs b$ must be formed first, so this result follows from the additivity of expected value and the previous result.
By repeated use of the last result, we can compute the expected number of trials needed to form any compound word.
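The recursion is easy to implement in code. The sketch below (Python; the function names are ours, and the words and letter probabilities are taken from the exercises that follow, with $p = \frac{1}{2}$ in the Bernoulli case) computes $\nu(\bs a)$ for any word over a finite alphabet.

```python
def word_prob(word, f):
    """Probability f(a) of forming the word with consecutive trials."""
    p = 1.0
    for letter in word:
        p *= f[letter]
    return p

def largest_suffix_prefix(word):
    """Largest proper suffix of the word that is also a prefix (possibly empty)."""
    for size in range(len(word) - 1, 0, -1):
        if word[-size:] == word[:size]:
            return word[:size]
    return ""

def expected_wait(word, f):
    """Expected number of trials until the word first occurs: nu(a) = nu(b) + 1 / f(a)."""
    if len(word) == 0:
        return 0.0
    return expected_wait(largest_suffix_prefix(word), f) + 1.0 / word_prob(word, f)

# Bernoulli trials with p = 1/2: nu(001) = 8, nu(010) = 10, nu(1011011) = 146
f_coin = {"0": 0.5, "1": 0.5}
for w in ["001", "010", "1011011"]:
    print(w, expected_wait(w, f_coin))

# Ace-six flat die: nu(6165616) = 32836
f_die = {"1": 1/4, "6": 1/4, "2": 1/8, "3": 1/8, "4": 1/8, "5": 1/8}
print("6165616", expected_wait("6165616", f_die))
```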
Consider Bernoulli trials with success probability $p \in (0, 1)$, and let $q = 1 - p$. For each of the following strings, find the expected number of trials between occurrences and the expected number of trials to the first occurrence.
1. $\bs a = 001$
2. $\bs b = 010$
3. $\bs c = 1011011$
4. $\bs d = 11 \cdots 1$ ($k$ times)
Answer
1. $\mu(\bs a) = \nu(\bs a) = \frac{1}{p q^2}$
2. $\mu(\bs b) = \frac{1}{p q^2}$, $\nu(\bs b) = \frac{1}{q} + \frac{1}{p q^2}$
3. $\mu(\bs c) = \frac{1}{p^5 q^2}$, $\nu(\bs c) = \frac{1}{p} + \frac{1}{p^3 q} + \frac{1}{p^5 q^2}$
4. $\mu(\bs d) = \frac{1}{p^k}$ $\nu(\bs d) = \sum_{i=1}^k \frac{1}{p^i}$
Recall that an ace-six flat die is a six-sided die for which faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each. Ace-six flat dice are sometimes used by gamblers to cheat.
Suppose that an ace-six flat die is thrown repeatedly. Find the expected number of throws until the pattern $6165616$ first occurs.
Solution
From our main theorem, \begin{align*} \nu(6165616) & = \frac{1}{f(6165616)} + \nu(616) = \frac{1}{f(6165616)} + \frac{1}{f(616)} + \nu(6) \\ & = \frac{1}{f(6165616)} + \frac{1}{f(616)} + \frac{1}{f(6)} = \frac{1}{(1/4)^6(1/8)} + \frac{1}{(1/4)^3} + \frac{1}{1/4} = 32\,836 \end{align*}
Suppose that a monkey types randomly on a keyboard that has the 26 lower-case letter keys and the space key (so 27 keys). Find the expected number of keystrokes until the monkey produces each of the following phrases:
1. it was the best of times
2. to be or not to be
Answer
1. $27^{24} \approx 2.253 \times 10^{34}$
2. $27^5 + 27^{18} \approx 5.815 \times 10^{25}$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.04%3A_Delayed_Renewal_Processes.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
Basic Theory
Preliminaries
An alternating renewal process models a system that, over time, alternates between two states, which we denote by 1 and 0 (by convention, the system starts in state 1). Generically, we can imagine a device that, over time, alternates between on and off states. Specializing further, suppose that a device operates until it fails, and then is replaced with an identical device, which in turn operates until failure and is replaced, and so forth. In this setting, the times that the device is functioning correspond to the on state, while the replacement times correspond to the off state. (The device might actually be repaired rather than replaced, as long as the repair returns the device to pristine, new condition.) The basic assumption is that the pairs of random times successively spent in the two states form an independent, identically distributed sequence. Clearly the model of a system alternating between two states is basic and important, but moreover, such alternating processes are often found embedded in other stochastic processes.
Let's set up the mathematical notation. Let $\bs{U} = (U_1, U_2, \ldots)$ denote the successive lengths of time that the system is in state 1, and let $\bs{V} = (V_1, V_2, \ldots)$ denote the successive lengths of time that the system is in state 0. So to be clear, the system starts in state 1 and remains in that state for a period of time $U_1$, then goes to state 0 and stays in this state for a period of time $V_1$, then back to state 1 for a period of time $U_2$, and so forth. Our basic assumption is that $\bs{W} = \left((U_1, V_1), (U_2, V_2), \ldots\right)$ is an independent, identically distributed sequence. It follows that $\bs{U}$ and $\bs{V}$ each are independent, identically distributed sequences, but $\bs{U}$ and $\bs{V}$ might well be dependent. In fact, $V_n$ might be a function of $U_n$ for $n \in \N_+$. Let $\mu = \E(U)$ denote the mean of a generic time period $U$ in state 1 and let $\nu = \E(V)$ denote the mean of a generic time period $V$ in state 0. Let $G$ denote the distribution function of a time period $U$ in state 1, and as usual, let $G^c = 1 - G$ denote the right distribution function (or reliability function) of $U$.
Clearly it's natural to consider returns to state 1 as the arrivals in a renewal process. Thus, let $X_n = U_n + V_n$ for $n \in \N_+$ and consider the renewal process with interarrival times $\bs{X} = (X_1, X_2, \ldots)$. Clearly this makes sense, since $\bs{X}$ is an independent, identically distributed sequence of nonnegative variables. For the most part, we will use our usual notation for a renewal process, so the common distribution function of $X_n = U_n + V_n$ is denoted by $F$, the arrival time process is $\bs{T} = (T_0, T_1, \ldots)$, the counting process is $\{N_t: t \in [0, \infty)\}$, and the renewal function is $M$. But note that the mean interarrival time is now $\mu + \nu$.
The renewal process associated with $\bs{W} = ((U_1, V_1), (U_2, V_2), \dots)$ as constructed above is known as an alternating renewal process.
The State Process
Our interest is the state $I_t$ of the system at time $t \in [0, \infty)$, so $\bs{I} = \left\{I_t: t \in [0, \infty)\right\}$ is a stochastic process with state space $\{0, 1\}$. Clearly the stochastic processes $\bs{W}$ and $\bs{I}$ are equivalent in the sense that we can recover one from the other. Let $p(t) = \P(I_t = 1)$, the probability that the device is on at time $t \in [0, \infty)$. Our first main result is a renewal equation for the function $p$.
The function $p$ satisfies the renewal equation $p = G^c + p * F$ and hence $p = G^c + G^c * M$.
Proof
By now, the approach should be clear: we're going to condition on the first arrival $X_1$: $\P(I_t = 1) = \P(I_t = 1, X_1 \gt t) + \P(I_t = 1, X_1 \le t) = \P(I_t = 1, X_1 \gt t) + \int_0^t \P(I_t = 1 \mid X_1 = s) \, dF(s)$ But $\{I_t = 1, X_1 \gt t\} = \{U_1 \gt t\}$ so $\P(I_t = 1, X_1 \gt t) = \P(U_1 \gt t) = G^c(t)$. By the fundamental renewal property (the process restarts, independently of the past, at each arrival) $\P(I_t = 1 \mid X_1 = s) = p(t - s)$ for $s \le t$. Hence we have $p(t) = G^c(t) + \int_0^t p(t - s) \, dF(s), \quad t \in [0, \infty)$ or equivalently, $p = G^c + p * F$. By the fundamental theorem on renewal equations, the solution is $p = G^c + G^c * M$, so $p(t) = G^c(t) + \int_0^t G^c(t - s) \, dM(s), \quad t \in [0, \infty)$
We can now apply the key renewal theorem to get the asymptotic behavior of $p$.
If the renewal process is non-arithmetic, then $p(t) \to \frac{\mu}{\mu + \nu} \text{ as } t \to \infty$
Proof
From the result above, $p = G^c + G^c * M$. First, $G^c(t) \to 0$ as $t \to \infty$ as a basic property of the right distribution function. Next, by the key renewal theorem, $(G^c * M)(t) \to \frac{1}{\mu + \nu} \int_0^\infty G^c(s) \, ds \text{ as } t \to \infty$ But by another basic property of the right distribution function, $\int_0^\infty G^c(s) \, ds = \mu$.
Thus, the limiting probability that the system is on is simply the ratio of the mean of an on period to the mean of an on-off period. It follows, of course, that $\P(I_t = 0) = 1 - p(t) \to \frac{\nu}{\mu + \nu} \text{ as } t \to \infty$ so in particular, the fact that the system starts in the on state makes no difference in the limit. We will return to the asymptotic behavior of the alternating renewal process in the next section on renewal reward processes.
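A simulation sketch (Python with NumPy): with exponential on-periods of mean $\mu = 2$ and exponential off-periods of mean $\nu = 1$ (illustrative choices), the fraction of runs in which the system is on at a large time $t$ should be close to $\mu / (\mu + \nu) = 2/3$.

```python
import numpy as np

rng = np.random.default_rng(5)
reps, n_max = 20_000, 200
mu, nu, t = 2.0, 1.0, 100.0                      # mean on-period, mean off-period, time

U = rng.exponential(mu, size=(reps, n_max))      # on-period lengths
V = rng.exponential(nu, size=(reps, n_max))      # off-period lengths
T_start = np.cumsum(U + V, axis=1) - (U + V)     # start time of each on-off cycle
# on at time t iff t falls in the on-part of some cycle
on_at_t = np.any((T_start <= t) & (t < T_start + U), axis=1)

print("simulated P(I_t = 1):", on_at_t.mean())
print("mu / (mu + nu):      ", mu / (mu + nu))
```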
Applications and Special Cases
With a clever definition of on and off, many stochastic processes can be turned into alternating renewal processes, leading in turn to interesting limits, via the basic limit theorem above.
Age Processes
The last remark applies in particular to the age processes of a standard renewal process. So, suppose that we have a renewal process with interarrival sequence $\bs{X}$, arrival sequence $\bs{T}$, and counting process $\bs{N}$. As usual, let $\mu$ denote the mean and $F$ the probability distribution function of an interarrival time, and let $F^c = 1 - F$ denote the right distribution function (or reliability function).
For $t \in [0, \infty)$, recall that the current life, remaining life and total life at time $t$ are $C_t = t - T_{N_t}, \quad R_t = T_{N_t + 1} - t, \quad L_t = C_t + R_t = T_{N_t+1} - T_{N_t} = X_{N_t + 1}$ respectively. In the usual terminology of reliability, $C_t$ is the age of the device in service at time $t$, $R_t$ is the time remaining until this device fails, and $L_t$ is the total life of the device. We will use the limit theorem above to derive the limiting distributions of these age processes. The limiting distributions were obtained earlier, in the section on renewal limit theorems, by a direct application of the key renewal theorem. So the results are not new, but the method of proof is interesting.
If the renewal process is non-arithmetic then $\lim_{t \to \infty} \P(C_t \le x) = \lim_{t \to \infty} \P(R_t \le x) = \frac{1}{\mu} \int_0^x F^c(y) \, dy, \quad x \in [0, \infty)$
Proof
Fix $x \in [0, \infty)$. For the current life limit, define the on period corresponding to the interarrival time $X_n$ to be $U_n = \min\{X_n, x\}$ for $n \in \N_+$, so that the off period is $V_n = X_n - \min\{X_n, x\}$. Note that the system is on at time $t \in [0, \infty)$ if and only if $C_t \le x$, and hence $p(t) = \P(C_t \le x)$. It follows from the limit theorem above that $\P(C_t \le x) \to \frac{\E[\min\{X, x\}]}{\mu} \text{ as } t \to \infty$ where $X$ is a generic interarrival time. But $G^c(y) = \P[\min\{X, x\} \gt y] = \P(X \gt y) \bs{1}(x \gt y) = F^c(y) \bs{1}(x \gt y), \quad y \in [0, \infty)$ Hence $\E[\min(X, x)] = \int_0^\infty G^c(y) \, dy = \int_0^x F^c(y) \, dy$ For the remaining life limit we reverse the on-off periods. Thus, define the on period corresponding to the interarrival time $X_n$ to be $U_n = X_n - \min\{X_n, x\}$ for $n \in \N_+$, so that the off period is $V_n = \min\{X_n, x\}$. Note that the system is off at time $t$ if and only if $R_t \le x$, and hence $1 - p(t) = \P(R_t \le x)$. From the limit theorem above, $\P(R_t \le x) \to \frac{\E[\min\{X, x\}]}{\mu} \text{ as } t \to \infty$
As we have noted before, the fact that the limiting distributions are the same is not surprising after a little thought. After a long time, the renewal process looks the same forward and backward in time, and reversing the arrow of time reverses the roles of current and remaining time.
If the renewal process is non-arithmetic then $\lim_{t \to \infty} \P(L_t \le x) = \frac{1}{\mu} \int_0^x y \, dF(y), \quad x \in [0, \infty)$
Proof
Fix $x \in [0, \infty)$. For $n \in \N_+$, define the on period associated with interarrival time $X_n$ by $U_n = X_n \bs{1}(X_n \gt x)$. Of course, the off period corresponding to $X_n$ is $V_n = X_n - U_n$. Thus, each renewal period is either totally on or totally off, depending on whether or not the interarrival time is greater than $x$. Note that the system is on at time $t \in [0, \infty)$ if and only if $L_t \gt x$, so from the basic limit theorem above, $\P(L_t \gt x) \to \frac{1}{\mu} \E[X \bs{1}(X \gt x)]$ where $X$ is a generic interarrival time. But $\E[X \bs{1}(X \gt x)] = \int_x^\infty y \, dF(y)$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/15%3A_Renewal_Processes/15.05%3A_Alternating_Renewal_Processes.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$ $\newcommand{\cov}{\text{cov}}$
Basic Theory
Preliminaries
In a renewal reward process, each interarrival time is associated with a random variable that is generically thought of as the reward associated with that interarrival time. Our interest is in the process that gives the total reward up to time $t$. So let's set up the usual notation. Suppose that $\bs{X} = (X_1, X_2, \ldots)$ are the interarrival times of a renewal process, so that $\bs{X}$ is a sequence of independent, identically distributed, nonnegative variables with common distribution function $F$ and mean $\mu$. As usual, we assume that $F(0) \lt 1$ so that the interarrival times are not deterministically 0, and in this section we also assume that $\mu \lt \infty$. Let $T_n = \sum_{i=1}^n X_i, \quad n \in \N$ so that $T_n$ is the time of the $n$th arrival for $n \in \N_+$ and $\bs{T} = (T_0, T_1, \ldots)$ is the arrival time sequence. Finally, Let $N_t = \sum_{n=1}^\infty \bs{1}(T_n \le t), \quad t \in [0, \infty)$ so that $N_t$ is the number of arrivals in $[0, t]$ and $\bs{N} = \{N_t: t \in [0, \infty)\}$ is the counting process. As usual, let $M(t) = \E\left(N_t\right)$ for $t \in [0, \infty)$ so that $M$ is the renewal function.
Suppose now that $\bs{Y} = (Y_1, Y_2, \ldots)$ is a sequence of real-valued random variables, where $Y_n$ is thought of as the reward associated with the interarrival time $X_n$. However, the term reward should be interpreted generically since $Y_n$ might actually be a cost or some other value associated with the interarrival time, and in any event, may take negative as well as positive values. Our basic assumption is that the interarrival time and reward pairs $\bs{Z} = \left((X_1, Y_1), (X_2, Y_2), \ldots\right)$ form an independent and identically distributed sequence. Recall that this implies that $\bs{X}$ is an IID sequence, as required by the definition of the renewal process, and that $\bs{Y}$ is also an IID sequence. But $\bs{X}$ and $\bs{Y}$ might well be dependent, and in fact $Y_n$ might be a function of $X_n$ for $n \in \N_+$. Let $\nu = \E(Y)$ denote the mean of a generic reward $Y$, which we assume exists in $\R$.
The stochastic process $\bs{R} = \{R_t: t \in [0, \infty)\}$ defined by $R_t = \sum_{i=1}^{N_t} Y_i, \quad t \in [0, \infty)$ is the reward renewal process associated with $\bs{Z}$. The function $r$ given by $r(t) = \E(R_t)$ for $t \in [0, \infty)$ is the reward function.
As promised, $R_t$ is the total reward up to time $t \in [0, \infty)$. Here are some typical examples:
• The arrivals are customers at a store. Each customer spends a random amount of money.
• The arrivals are visits to a website. Each visitor spends a random amount of time at the site.
• The arrivals are failure times of a complex system. Each failure requires a random repair time.
• The arrivals are earthquakes at a particular location. Each earthquake has a random severity, a measure of the energy released.
So $R_t$ is a random sum of random variables for each $t \in [0, \infty)$. In the special case that $\bs{Y}$ and $\bs{X}$ independent, the distribution of $R_t$ is known as a compound distribution, based on the distribution of $N_t$ and the distribution of a generic reward $Y$. Specializing further, if the renewal process is Poisson and is independent of $\bs{Y}$, the process $\bs{R}$ is a compound Poisson process.
Note that a renewal reward process generalizes an ordinary renewal process. Specifically, if $Y_n = 1$ for each $n \in \N_+$, then $R_t = N_t$ for $t \in [0, \infty)$, so that the reward process simply reduces to the counting process, and then $r$ reduces to the renewal function $M$.
The Renewal Reward Theorem
For $t \in (0, \infty)$, the average reward on the interval $[0, t]$ is $R_t / t$, and the expected average reward on that interval is $r(t) / t$. The fundamental theorem on renewal reward processes gives the asymptotic behavior of these averages.
The renewal reward theorem
1. $R_t / t \to \nu / \mu$ as $t \to \infty$ with probability 1.
2. $r(t) / t \to \nu / \mu$ as $t \to \infty$
Proof
1. Note that $\frac{R_t}{t} = \frac{R_t}{N_t} \frac{N_t}{t}$ But by the ordinary strong law of large numbers for the IID sequence $\bs{Y}$, $\frac{1}{n} \sum_{i=1}^n Y_i \to \nu$ as $n \to \infty$ with probability 1. Recall also that $N_t \to \infty$ as $t \to \infty$ with probability 1. Hence it follows that $\frac{R_t}{N_t} = \frac{1}{N_t} \sum_{i=1}^{N_t} Y_i \to \nu$ as $t \to \infty$ with probability 1. From the law of large numbers for the renewal process, we know that $N_t \big/ t \to 1 / \mu$ as $t \to \infty$ with probability 1. Hence $R_t / t \to \nu / \mu$ as $t \to \infty$ with probability 1.
2. Note first that $R_t = \sum_{i=1}^{N_t} Y_i = \sum_{i=1}^{N_t + 1} Y_i - Y_{N_t + 1}$ Next, recall that $N_t + 1$ is a stopping time for the sequence of interarrival times $\bs{X}$ for $t \in (0, \infty)$, and hence is also a stopping time for the sequence of interarrival time, reward pairs $\bs{Z}$. (If a random time is a stopping time for a filtration, then it's a stopping time for any larger filtration.) By Wald's equation, $\E\left(\sum_{i=1}^{N_t + 1} Y_i\right) = \nu \E\left(N_t + 1\right) = \nu[M(t) + 1] = \nu \, M(t) + \nu$ By the elementary renewal theorem, $\frac{\nu \, M(t) + \nu}{t} = \nu \frac{M(t)}{t} + \frac{\nu}{t} \to \frac{\nu}{\mu} \text{ as } t \to \infty$ Thus returning to the first displayed equation above, it remains to show that $\frac{\E\left[Y_{N_t + 1}\right]}{t} \to 0 \text{ as } t \to \infty$ Let $u(t) = \E\left[Y_{N_t + 1}\right]$ for $t \in [0, \infty)$. Taking cases for the first arrival time $X_1$ we have $u(t) = \E\left[Y_{N_t + 1} \bs{1}(X_1 \gt t)\right] + \E\left[Y_{N_t + 1} \bs{1}(X_1 \le t)\right]$ But $X_1 \gt t$ if and only if $N_t = 0$ so the first term is $\E\left[Y_1 \bs{1}(X_1 \gt t)\right]$, which we will denote by $a(t)$. We have assumed that the expected reward $\nu$ exists in $\R$. Hence $\left|a(t)\right| \le \E\left(\left|Y_1\right|\right) \lt \infty$ so that $a$ is bounded, and $a(t) \to 0$ as $t \to \infty$. For the second term, if the first arrival occurs at time $s \in [0, t]$, then the renewal process restarts, independently of the past, so $\E\left[Y_{N_t + 1} \bs{1}(X_1 \le t)\right] = \int_0^t u(t - s) \, dF(s), \quad t \in [0, \infty)$ It follows that $u$ satisfies the renewal equation $u = a + u * F$. By the fundamental theorem on renewal equations, the solution is $u = a + a * M$. Now, fix $\epsilon \gt 0$. There exists $T \in (0, \infty)$ such that $\left|a(t)\right| \lt \epsilon$ for $t \gt T$. So for $t \gt T$, \begin{align*} \left|\frac{u(t)}{t}\right| & \le \frac{1}{t}\left[\left|a(t)\right| + \int_0^{t - T} \left|a(t - s)\right| dM(s) + \int_{t - T}^t \left|a(t - s)\right| dM(s)\right] \\ & \le \frac{1}{t} \left[\epsilon + \epsilon \, M(t - T) + \E\left|Y_1\right| [M(t) - M(t - T)]\right] \end{align*} Using the elementary renewal theorem again, the last expression converges to $\epsilon / \mu$ as $t \to \infty$. Since $\epsilon \gt 0$ is arbitrary, it follows that $u(t) / t \to 0$ as $t \to \infty$.
Part (a) generalizes the law of large numbers and part (b) generalizes elementary renewal theorem, for a basic renewal process. Once again, if $Y_n = 1$ for each $n$, then (a) becomes $N_t / t \to 1/\mu$ as $t \to \infty$ and (b) becomes $M(t) / t \to 1 /\mu$ as $t \to \infty$. It's not surprising then that these two theorems play a fundamental role in the proof of the renewal reward theorem.
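A quick simulation sketch of part (a) (Python with NumPy): the interarrival times are uniform on $(0, 2)$ so that $\mu = 1$, and the reward for each cycle is $Y_n = X_n^2$ so that $\nu = \E(X^2) = 4/3$, a deliberately dependent choice. All of these are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
X = rng.uniform(0.0, 2.0, size=n)      # interarrival times, mean mu = 1
Y = X ** 2                             # reward for each renewal period, mean nu = 4/3
T = np.cumsum(X)

for t in [100, 10_000, 100_000]:
    N_t = np.searchsorted(T, t, side="right")
    R_t = Y[:N_t].sum()                # total reward earned in [0, t]
    print(f"t = {t:>7}:  R_t / t = {R_t / t:.4f}   nu / mu = {4 / 3:.4f}")
```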
General Reward Processes
The renewal reward process $\bs{R} = \{R_t: t \in [0, \infty)\}$ above is constant, taking the value $\sum_{i=1}^n Y_i$, on the renewal interval $[T_n, T_{n+1})$ for each $n \in \N$. Effectively, the rewards are received discretely: $Y_1$ at time $T_1$, an additional $Y_2$ at time $T_2$, and so forth. It's possible to modify the construction so the rewards accrue continuously in time or in a mixed discrete/continuous manner. Here is a simple set of conditions for a general reward process.
Suppose again that $\bs{Z} = ((X_1, Y_1), (X_2, Y_2), \ldots)$ is the sequence of interarrival times and rewards. A stochastic process $\bs{V} = \{V_t: t \in [0, \infty)\}$ (on our underlying probability space) is a reward process associated with $\bs{Z}$ if the following conditions hold:
1. $V_{T_n} = \sum_{i=1}^n Y_i$ for $n \in \N$
2. $V_t$ is between $V_{T_n}$ and $V_{T_{n+1}}$ for $t \in \left(T_n, T_{n+1}\right)$ and $n \in \N$
In the continuous case, with nonnegative rewards (the most important case), the reward process will typically have the following form:
Suppose that the rewards are nonnegative and that $\bs{U} = \{U_t: t \in [0, \infty)\}$ is a nonnegative stochastic process (on our underlying probability space) with
1. $t \mapsto U_t$ is piecewise continuous
2. $\int_{T_n}^{T_{n+1}} U_t \, dt = Y_{n+1}$ for $n \in \N$
Let $V_t = \int_0^t U_s \, ds$ for $t \in [0, \infty)$. Then $\bs{V} = \{V_t: t \in [0, \infty)\}$ is a reward process associated with $\bs{Z}$.
Proof
By the additivity of the integral and (b), $V_{T_n} = \sum_{i=1}^n Y_i$ for $n \in \N$. Since $\bs{U}$ is nonnegative, $\bs{V}$ is increasing, so $V_{T_n} \le V_t \le V_{T_{n+1}}$ for $t \in \left(T_n, T_{n+1}\right)$
Thus in this special case, the rewards are being accrued continuously and $U_t$ is the rate at which the reward is being accrued at time $t$. So $\bs{U}$ plays the role of a reward density process. For a general reward process, the basic renewal reward theorem still holds.
Suppose that $\bs{V} = \{V_t: t \in [0, \infty)\}$ is a reward process associated with $\bs{Z} = ((X_1, Y_1), (X_2, Y_2), \ldots)$, and let $v(t) = \E\left(V_t\right)$ for $t \in [0, \infty)$ be the corresponding reward function.
1. $V_t / t \to \nu / \mu$ as $t \to \infty$ with probability 1.
2. $v(t) / t \to \nu / \mu$ as $t \to \infty$.
Proof
Suppose first that the reward variables $Y$ are nonnegative. Then $\frac{R_t}{t} \le \frac{V_t}{t} \le \frac{R_t}{t} + \frac{Y_{N_t + 1}}{t}$ From the proof of the renewal reward theorem above, $R_t / t \to \nu / \mu$ as $t \to \infty$ with probability 1, and $Y_{N_t + 1} \big/ t \to 0$ as $t \to \infty$ with probability 1. Hence (a) holds. Taking expected values, $\frac{r(t)}{t} \le \frac{v(t)}{t} \le \frac{r(t)}{t} + \frac{\E\left(Y_{N_t + 1}\right)}{t}$ But again from the renewal reward theorem above, $r(t) / t \to \nu / \mu$ as $t \to \infty$ and $E\left(Y_{N_t + 1}\right) \big/ t \to 0$ as $t \to \infty$. Hence (b) holds. A similar argument works if the reward variables are negative. If the reward variables take positive and negative values, we split the variables into positive and negative parts in the usual way.
Here is the corollary for a continuous reward process.
Suppose that the rewards are positive, and consider the continuous reward process with density process $\bs{U} = \{U_t: t \in [0, \infty)\}$ as above. Let $u(t) = \E(U_t)$ for $t \in [0, \infty)$. Then
1. $\frac{1}{t} \int_0^t U_s \, ds \to \frac{\nu}{\mu}$ as $t \to \infty$ with probability 1
2. $\frac{1}{t} \int_0^t u(s) \, ds \to \frac{\nu}{\mu}$ as $t \to \infty$
Special Cases and Applications
With a clever choice of the rewards, many interesting renewal processes can be turned into renewal reward processes, leading in turn to interesting limits via the renewal reward theorem.
Alternating Renewal Processes
Recall that in an alternating renewal process, a system alternates between on and off states (starting in the on state). If we let $\bs{U} = (U_1, U_2, \ldots)$ be the lengths of the successive time periods in which the system is on, and $\bs{V} = (V_1, V_2, \ldots)$ the lengths of the successive time periods in which the system is off, then the basic assumptions are that $((U_1, V_1), (U_2, V_2), \ldots)$ is an independent, identically distributed sequence, and that the variables $X_n = U_n + V_n$ for $n \in \N_+$ form the interarrival times of a standard renewal process. Let $\mu = \E(U)$ denote the mean of a time period that the device is on, and $\nu = \E(V)$ the mean of a time period that the device is off. Recall that $I_t$ denotes the state (1 or 0) of the system at time $t \in [0, \infty)$, so that $\bs{I} = \{I_t: t \in [0, \infty)\}$ is the state process. The state probability function $p$ is given by $p(t) = \P(I_t = 1)$ for $t \in [0, \infty)$.
Limits for the alternating renewal process.
1. $\frac{1}{t} \int_0^t I_s \, ds \to \frac{\mu}{\mu + \nu}$ as $t \to \infty$ with probability 1
2. $\frac{1}{t} \int_0^t p(s) \, ds \to \frac{\mu}{\mu + \nu}$ as $t \to \infty$
Proof
Consider the renewal reward process where the reward associated with the interarrival time $X_n$ is $U_n$, the on period for that renewal period. The rewards $U_n$ are nonnegative and clearly $\int_{T_n}^{T_{n+1}} I_s \, ds = U_{n+1}$. So $t \mapsto \int_0^t I_s \, ds$ for $t \in [0, \infty)$ defines a continuous reward process of the form given above. Parts (a) and (b) follow directly from the renewal reward theorem above.
Thus, the asymptotic proportion of time that the device is on, and the asymptotic mean proportion of time that the device is on, are both simply the ratio of the mean of an on period to the mean of an on-off period. In our previous study of alternating renewal processes, the fundamental result was that in the non-arithmetic case, $p(t) \to \mu / (\mu + \nu)$ as $t \to \infty$. This result implies part (b) of the theorem above.
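As a quick illustration, here is a brief simulation sketch (not part of the text), assuming NumPy is available and taking the on and off periods to be exponentially distributed with arbitrarily chosen means; the long-run fraction of time that the system is on should be close to $\mu / (\mu + \nu)$.
```python
import numpy as np

rng = np.random.default_rng(seed=2)

mu, nu = 3.0, 1.0            # assumed means of the on and off periods
t_max = 200_000.0            # time horizon

t, on_time = 0.0, 0.0
while t < t_max:
    u = rng.exponential(mu)  # length of the next on period
    v = rng.exponential(nu)  # length of the next off period
    on_time += min(u, t_max - t)   # count on time only up to the horizon
    t += u + v

print("simulated fraction of time on:", on_time / t_max)
print("theoretical mu / (mu + nu):   ", mu / (mu + nu))
```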
Age Processes
Renewal reward processes can be used to derive some asymptotic results for the age processes of a standard renewal process. So, suppose that we have a renewal process with interarrival sequence $\bs{X}$, arrival sequence $\bs{T}$, and counting process $\bs{N}$. As usual, let $\mu = \E(X)$ denote the mean of an interarrival time, but now we will also need $\nu = \E(X^2)$, the second moment. We assume that both moments are finite.
For $t \in [0, \infty)$, recall that the current life, remaining life and total life at time $t$ are $A_t = t - T_{N_t}, \quad B_t = T_{N_t + 1} - t, \quad L_t = A_t + B_t = T_{N_t+1} - T_{N_t} = X_{N_t + 1}$ respectively. In the usual terminology of reliability, $A_t$ is the age of the device in service at time $t$, $B_t$ is the time remaining until this device fails, and $L_t$ is the total life of the device. (To avoid notational clashes, we are using different notation than in past sections.) Let $a(t) = \E(A_t)$, $b(t) = \E(B_t)$, and $l(t) = \E(L_t)$ for $t \in [0, \infty)$, the corresponding mean functions. To derive our asymptotic results, we simply use the current life and the remaining life as reward densities (or rates) in a renewal reward process.
Limits for the current life process.
1. $\frac{1}{t} \int_0^t A_s \, ds \to \frac{\nu}{2 \mu}$ as $t \to \infty$ with probability 1
2. $\frac{1}{t} \int_0^t a(s) \, ds \to \frac{\nu}{2 \mu}$ as $t \to \infty$
Proof
Consider the renewal reward process where the reward associated with the interarrival time $X_n$ is $\frac{1}{2} X_n^2$ for $n \in \N_+$. The process $t \mapsto \int_0^t A_s \, ds$ for $t \in [0, \infty)$ is a continuous reward process for this sequence of rewards, as defined above. To see this, note that for $t \in \left[T_n, T_{n+1}\right)$, we have $A_t = t - T_n$, so with a change of variables and noting that $T_{n+1} = T_n + X_{n+1}$ we have $\int_{T_n}^{T_{n+1}} A_t \, dt = \int_0^{X_{n+1}} s \, ds = \frac{1}{2} X_{n+1}^2$ The results now follow from the renewal reward theorem above.
Limits for the remaining life process.
1. $\frac{1}{t} \int_0^t B_s \, ds \to \frac{\nu}{2 \mu}$ as $t \to \infty$ with probability 1
2. $\frac{1}{t} \int_0^t b(s) \, ds \to \frac{\nu}{2 \mu}$ as $t \to \infty$
Proof
Consider again the renewal reward process where the reward associated with the interarrival time $X_n$ is $\frac{1}{2} X_n^2$ for $n \in \N_+$. The process $t \mapsto \int_0^t B_s \, ds$ for $t \in [0, \infty)$ is a continuous reward process for this sequence of rewards, as defined above. To see this, note that for $t \in \left[T_n, T_{n+1}\right)$, we have $B_t = T_{n+1} - t$, so once again with a change of variables and noting that $T_{n+1} = T_n + X_{n+1}$ we have $\int_{T_n}^{T_{n+1}} B_t \, dt = \int_0^{X_{n+1}} s \, ds = \frac{1}{2} X_{n+1}^2$ The results now follow from the renewal reward theorem above.
With a little thought, it's not surprising that the limits for the current life and remaining life processes are the same. After a long period of time, a renewal process looks stochastically the same forward or backward in time. Changing the arrow of time reverses the role of the current and remaining life. Asymptotic results for the total life process now follow trivially from the results for the current and remaining life processes. A simulation sketch illustrating all three age-process limits is given after the next result.
Limits for the total life process.
1. $\frac{1}{t} \int_0^t L_s \, ds \to \frac{\nu}{\mu}$ as $t \to \infty$ with probability 1
2. $\frac{1}{t} \int_0^t l(s) \, ds \to \frac{\nu}{\mu}$ as $t \to \infty$
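As mentioned above, here is a simulation sketch of the three age-process limits (an illustration, not part of the text), assuming NumPy is available and taking the interarrival times to be uniformly distributed on $[0, 2]$, so that $\mu = 1$ and $\nu = \E(X^2) = 4/3$. The time averages over $[0, T_n]$ are computed exactly from the identities $\int_{T_k}^{T_{k+1}} A_t \, dt = \int_{T_k}^{T_{k+1}} B_t \, dt = \frac{1}{2} X_{k+1}^2$ and $\int_{T_k}^{T_{k+1}} L_t \, dt = X_{k+1}^2$.
```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Assumed interarrival distribution: uniform on [0, 2], so mu = 1 and nu = E(X^2) = 4/3
n = 1_000_000
x = rng.uniform(0.0, 2.0, size=n)      # interarrival times X_1, ..., X_n
t_n = x.sum()                          # the renewal time T_n

# Over a renewal period of length X, the current and remaining life each integrate
# to X^2 / 2, and the total life integrates to X^2.
avg_current = (x ** 2).sum() / 2 / t_n
avg_remaining = (x ** 2).sum() / 2 / t_n
avg_total = (x ** 2).sum() / t_n

mu, nu = 1.0, 4.0 / 3.0
print("current life  :", avg_current, "  theory:", nu / (2 * mu))
print("remaining life:", avg_remaining, "  theory:", nu / (2 * mu))
print("total life    :", avg_total, "  theory:", nu / mu)
```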
Replacement Models
Consider again a standard renewal process as defined in the Introduction, with interarrival sequence $\bs{X} = (X_1, X_2, \ldots)$, arrival sequence $\bs{T} = (T_0, T_1, \ldots)$, and counting process $\bs{N} = \{N_t: t \in [0, \infty)\}$. One of the most basic applications is to reliability, where a device operates for a random lifetime, fails, and then is replaced by a new device, and the process continues. In this model, $X_n$ is the lifetime and $T_n$ the failure time of the $n$th device in service, for $n \in \N_+$, while $N_t$ is the number of failures in $[0, t]$ for $t \in [0, \infty)$. As usual, $F$ denotes the distribution function of a generic lifetime $X$, and $F^c = 1 - F$ the corresponding right distribution function (reliability function). Sometimes, the device is actually a system with a number of critical components—the failure of any of the critical components causes the system to fail.
Replacement models are variations on the basic model in which the device is replaced (or the critical components replaced) at times other than failure. Often the cost $a$ of a planned replacement is less than the cost $b$ of an emergency replacement (at failure), so replacement models can make economic sense. We will consider the most common model.
In the age replacement model, the device is replaced either when it fails or when it reaches a specified age $s \in (0, \infty)$. This model gives rise to a new renewal process with interarrival sequence $\bs{U} = (U_1, U_2, \ldots)$ where $U_n = \min\{X_n, s\}$ for $n \in \N_+$. If $a, \, b \in (0, \infty)$ are the costs of planned and unplanned replacements, respectively, then the cost associated with the renewal period $U_n$ is $Y_n = a \bs{1}(U_n = s) + b \bs{1}(U_n \lt s) = a \bs{1}(X_n \ge s) + b \bs{1}(X_n \lt s)$ Clearly $((U_1, Y_1), (U_2, Y_2), \ldots)$ satisfies the assumptions of a renewal reward process given above. The model makes mathematical sense for any $a, b \in (0, \infty)$ but if $a \ge b$, so that the planned cost of replacement is at least as large as the unplanned cost of replacement, then $Y_n \ge b$ for $n \in \N_+$, so the model makes no financial sense. Thus we assume that $a \lt b$.
In the age replacement model, with planned replacement at age $s \in (0, \infty)$,
1. The expected cost of a renewal period is $\E(Y) = a F^c(s) + b F(s)$.
2. The expected length of a renewal period is $\E(U) = \int_0^s F^c(x) \, dx$
The limiting expected cost per unit time is $C(s) = \frac{a F^c(s) + b F(s)}{\int_0^s F^c(x) \, dx}$
Proof
Parts (a) and (b) follow from the definition of the reward $Y$ and the renewal period $U$, and then the formula for $C(s)$ follows from the renewal reward theorem above.
So naturally, given the costs $a$ and $b$, and the lifetime distribution function $F$, the goal is to find the value of $s$ that minimizes $C(s)$; this value of $s$ is the optimal replacement time. Of course, the optimal time may not exist.
Properties of $C$
1. $C(s) \to \infty$ as $s \downarrow 0$
2. $C(s) \to b / \mu$ as $s \uparrow \infty$
Proof
1. Recall that $F^c(0) \gt 0$ and $\int_0^s F^c(x) \, dx \to 0$ as $s \downarrow 0$
2. As $s \to \infty$ note that $F^c(s) \to 0$, $F(s) \to 1$ and $\int_0^s F^c(x) \, dx \to \int_0^\infty F^c(x) \, dx = \mu$
As $s \to \infty$, the age replacement model becomes the standard (unplanned) model with limiting expected average cost $b / \mu$.
Suppose that the lifetime of the device (in appropriate units) has the standard exponential distribution. Find $C(s)$ and solve the optimal age replacement problem.
Answer
The exponential reliability function is $F^c(t) = e^{-t}$ for $t \in [0, \infty)$. After some algebra, the long term expected average cost per unit time is $C(s) = \frac{a}{e^s - 1} + b, \quad s \in (0, \infty)$ But clearly $C(s)$ is strictly decreasing in $s$, with limit $b$, so there is no minimum value.
The last result is hardly surprising. A device with an exponentially distributed lifetime does not age—if it has not failed, it's just as good as new. More generally, age replacement does not make sense for any device with decreasing failure rate. Such devices improve with age.
Suppose that the lifetime of the device (in appropriate units) has the gamma distribution with shape parameter $2$ and scale parameter 1. Suppose that the costs (in appropriate units) are $a = 1$ and $b = 5$.
1. Find $C(s)$.
2. Sketch the graph of $C(s)$.
3. Solve numerically the optimal age replacement problem.
Answer
The gamma reliability function is $F^c(t) = e^{-t}(1 + t)$ for $t \in [0, \infty)$
1. $C(s) = \frac{4 + 4 s -5 e^s}{2 + s - 2 e^s}, \quad s \in (0, \infty)$
3. $C$ is minimized for replacement time $s \approx 1.3052$. The optimal cost is about 2.26476. (A numerical sketch of this minimization is given below.)
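As noted in part (c), here is a minimal numerical sketch (an illustration, not part of the text), assuming SciPy is available; it minimizes $C$ directly and reproduces the replacement time and cost quoted above.
```python
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 1.0, 5.0

def C(s):
    # C(s) = [a F^c(s) + b F(s)] / int_0^s F^c(x) dx with F^c(t) = e^{-t}(1 + t)
    fc = np.exp(-s) * (1 + s)
    integral = 2 - np.exp(-s) * (2 + s)   # int_0^s e^{-x}(1 + x) dx
    return (a * fc + b * (1 - fc)) / integral

res = minimize_scalar(C, bounds=(0.01, 10.0), method="bounded")
print("optimal replacement time:", res.x)    # approximately 1.3052
print("optimal cost:            ", res.fun)  # approximately 2.2648
```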
Suppose again that the lifetime of the device (in appropriate units) has the gamma distribution with shape parameter $2$ and scale parameter 1. But suppose now that the costs (in appropriate units) are $a = 1$ and $b = 2$.
1. Find $C(s)$.
2. Sketch the graph of $C(s)$.
3. Solve the optimal age replacement problem.
Answer
The gamma reliability function is $F^c(t) = e^{-t}(1 + t)$ for $t \in [0, \infty)$
1. $C(s) = \frac{2 e^s - (1 + s)}{2 e^s - (2 + s)}, \quad s \in (0, \infty)$
3. $C$ is strictly decreasing on $(0, \infty)$ with limit 1, so there is no minimum value.
In the last case, the difference between the cost of an emergency replacement and a planned replacement is not great enough for age replacement to make sense.
Suppose that the lifetime of the device (in appropriately scaled units) is uniformly distributed on the interval $[0, 1]$. Find $C(s)$ and solve the optimal replacement problem. Give the results explicitly for the following costs:
1. $a = 4$, $b = 6$
2. $a = 2$, $b = 5$
3. $a = 1$, $b = 10$
Answer
The reliability function is $F^c(t) = 1 - t$ for $t \in [0, 1]$. After standard computations, $C(s) = 2 \frac{a(1 - s) + b s}{s (2 - s)}, \quad s \in (0, 1]$ Minimizing with standard calculus, the optimal replacement time is $s = \frac{\sqrt{2 a b - a^2} - a}{b - a}$ A numerical check of this formula for the three cost pairs is sketched after the answers below.
1. $s = 2 \left(\sqrt{2} - 1\right) \approx 0.828$, $C \approx 11.657$
2. $s = \frac{2}{3}$, $C = 9$
3. $s = \frac{\sqrt{19} - 1}{9} \approx 0.373$, $C \approx 14.359$
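Here is the numerical check mentioned above (an illustrative sketch, not part of the text), assuming SciPy is available. For each pair of costs, the closed-form optimum is compared with a direct numerical minimization of $C$.
```python
import numpy as np
from scipy.optimize import minimize_scalar

def C(s, a, b):
    # C(s) = 2 [a (1 - s) + b s] / [s (2 - s)] for the uniform lifetime on [0, 1]
    return 2 * (a * (1 - s) + b * s) / (s * (2 - s))

for a, b in [(4, 6), (2, 5), (1, 10)]:
    s_formula = (np.sqrt(2 * a * b - a ** 2) - a) / (b - a)
    res = minimize_scalar(C, args=(a, b), bounds=(1e-6, 1.0), method="bounded")
    print(f"a={a}, b={b}: formula s={s_formula:.4f}, numeric s={res.x:.4f}, cost={res.fun:.3f}")
```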
Thinning
We start with a standard renewal process with interarrival sequence $\bs{X} = (X_1, X_2, \ldots)$, arrival sequence $\bs{T} = (T_0, T_1, \ldots)$ and counting process $\bs{N} = \{N_t: t \in [0, \infty)\}$. As usual, let $\mu = \E(X)$ denote the mean of an interarrival time. For $n \in \N_+$, suppose now that arrival $n$ is either accepted or rejected, and define random variable $Y_n$ to be 1 in the first case and 0 in the second. Let $Z_n = (X_n, Y_n)$ denote the interarrival time and rejection variable pair for $n \in \N_+$, and assume that $\bs{Z} = (Z_1, Z_2, \ldots)$ is an independent, identically distributed sequence.
Note that we have the structure of a renewal reward process, and so in particular, $\bs{Y} = (Y_1, Y_2, \ldots)$ is a sequence of Bernoulli trials. Let $p$ denote the parameter of this sequence, so that $p$ is the probability of accepting an arrival. The procedure of accepting or rejecting points in a point process is known as thinning the point process. We studied thinning of the Poisson process. In the notation of this section, note that the reward process $\bs{R} = \{R_t: t \in [0, \infty)\}$ is the thinned counting process. That is, $R_t = \sum_{i=1}^{N_t} Y_i$ is the number of accepted points in $[0, t]$ for $t \in [0, \infty)$. So then $r(t) = \E(R_t)$ is the expected number of accepted points in $[0, t]$. The renewal reward theorem gives the asymptotic behavior.
Limits for the thinned process.
1. $R_t / t \to p / \mu$ as $t \to \infty$ with probability 1
2. $r(t) / t \to p / \mu$ as $t \to \infty$
Proof
This follows immediately from the renewal reward theorem above, since $\nu = p$.
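Here is a short simulation sketch of the thinned process (an illustration, not part of the text), assuming NumPy is available and taking gamma interarrival times with shape 2 and scale 1 (so $\mu = 2$) and acceptance probability $p = 0.3$; the number of accepted points per unit time should be close to $p / \mu$.
```python
import numpy as np

rng = np.random.default_rng(seed=4)

shape, scale = 2.0, 1.0              # gamma(2, 1) interarrival times
mu = shape * scale                   # mean interarrival time
p = 0.3                              # acceptance probability
t_max = 100_000.0                    # time horizon

arrival, accepted = 0.0, 0
while True:
    arrival += rng.gamma(shape, scale)
    if arrival > t_max:
        break
    if rng.random() < p:             # accept this arrival with probability p
        accepted += 1

print("simulated R_t / t: ", accepted / t_max)
print("theoretical p / mu:", p / mu)
```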
A Markov process is a random process in which the future is independent of the past, given the present. Thus, Markov processes are the natural stochastic analogs of the deterministic processes described by differential and difference equations. They form one of the most important classes of random processes.
16: Markov Processes
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
A Markov process is a random process indexed by time, and with the property that the future is independent of the past, given the present. Markov processes, named for Andrei Markov, are among the most important of all random processes. In a sense, they are the stochastic analogs of differential equations and recurrence relations, which are of course, among the most important deterministic processes.
The complexity of the theory of Markov processes depends greatly on whether the time space $T$ is $\N$ (discrete time) or $[0, \infty)$ (continuous time) and whether the state space is discrete (countable, with all subsets measurable) or a more general topological space. When $T = [0, \infty)$ or when the state space is a general space, continuity assumptions usually need to be imposed in order to rule out various types of weird behavior that would otherwise complicate the theory.
When the state space is discrete, Markov processes are known as Markov chains. The general theory of Markov chains is mathematically rich and relatively simple.
• When $T = \N$ and the state space is discrete, Markov processes are known as discrete-time Markov chains. The theory of such processes is mathematically elegant and complete, and is understandable with minimal reliance on measure theory. Indeed, the main tools are basic probability and linear algebra. Discrete-time Markov chains are studied in this chapter, along with a number of special models.
• When $T = [0, \infty)$ and the state space is discrete, Markov processes are known as continuous-time Markov chains. If we avoid a few technical difficulties (created, as always, by the continuous time space), the theory of these processes is also reasonably simple and mathematically very nice. The Markov property implies that the process, sampled at the random times when the state changes, forms an embedded discrete-time Markov chain, so we can apply the theory that we will have already learned. The Markov property also implies that the holding time in a state has the memoryless property and thus must have an exponential distribution, a distribution that we know well. In terms of what you may have already studied, the Poisson process is a simple example of a continuous-time Markov chain.
For a general state space, the theory is more complicated and technical, as noted above. However, we can distinguish a couple of classes of Markov processes, depending again on whether the time space is discrete or continuous.
• When $T = \N$ and $S = \R$, a simple example of a Markov process is the partial sum process associated with a sequence of independent, identically distributed real-valued random variables. Such sequences are studied in the chapter on random samples (but not as Markov processes), and revisited below.
• In the case that $T = [0, \infty)$ and $S = \R$ or more generally $S = \R^k$, the most important Markov processes are the diffusion processes. Generally, such processes can be constructed via stochastic differential equations from Brownian motion, which thus serves as the quintessential example of a Markov process in continuous time and space.
The goal of this section is to give a broad sketch of the general theory of Markov processes. Some of the statements are not completely rigorous and some of the proofs are omitted or are sketches, because we want to emphasize the main ideas without getting bogged down in technicalities. If you are a new student of probability you may want to just browse this section, to get the basic ideas and notation, while skipping over the proofs and technical details. Then jump ahead to the study of discrete-time Markov chains. On the other hand, to understand this section in more depth, you will need to review topics in the chapter on foundations and in the chapter on stochastic processes.
Basic Theory
Preliminaries
As usual, our starting point is a probability space $(\Omega, \mathscr{F}, \P)$, so that $\Omega$ is the set of outcomes, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on $(\Omega, \mathscr{F})$. The time set $T$ is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). In the first case, $T$ is given the discrete topology and in the second case $T$ is given the usual Euclidean topology. In both cases, $T$ is given the Borel $\sigma$-algebra $\mathscr{T}$, the $\sigma$-algebra generated by the open sets. In the discrete case when $T = \N$, this is simply the power set of $T$ so that every subset of $T$ is measurable; every function from $T$ to another measurable space is measurable; and every function from $T$ to another topological space is continuous. The time space $(T, \mathscr{T})$ has a natural measure; counting measure $\#$ in the discrete case, and Lebesgue in the continuous case.
The set of states $S$ also has a $\sigma$-algebra $\mathscr{S}$ of admissible subsets, so that $(S, \mathscr{S})$ is the state space. Usually $S$ has a topology and $\mathscr{S}$ is the Borel $\sigma$-algebra generated by the open sets. A typical set of assumptions is that the topology on $S$ is LCCB: locally compact, Hausdorff, and with a countable base. These particular assumptions are general enough to capture all of the most important processes that occur in applications and yet are restrictive enough for a nice mathematical theory. Usually, there is a natural positive measure $\lambda$ on the state space $(S, \mathscr{S})$. When $S$ has an LCCB topology and $\mathscr{S}$ is the Borel $\sigma$-algebra, the measure $\lambda$ will usually be a Borel measure satisfying $\lambda(C) \lt \infty$ if $C \subseteq S$ is compact. The term discrete state space means that $S$ is countable with $\mathscr{S} = \mathscr{P}(S)$, the collection of all subsets of $S$. Thus every subset of $S$ is measurable, as is every function from $S$ to another measurable space. This is the Borel $\sigma$-algebra for the discrete topology on $S$, so that every function from $S$ to another topological space is continuous. The compact sets are simply the finite sets, and the reference measure is $\#$, counting measure. If $S = \R^k$ for some $k \in \N_+$ (another common case), then we usually give $S$ the Euclidean topology (which is LCCB) so that $\mathscr{S}$ is the usual Borel $\sigma$-algebra. The compact sets are the closed, bounded sets, and the reference measure $\lambda$ is $k$-dimensional Lebesgue measure.
Clearly, the topological and measure structures on $T$ are not really necessary when $T = \N$, and similarly these structures on $S$ are not necessary when $S$ is countable. But the main point is that the assumptions unify the discrete and the common continuous cases. Also, it should be noted that much more general state spaces (and more general time spaces) are possible, but most of the important Markov processes that occur in applications fit the setting we have described here.
Various spaces of real-valued functions on $S$ play an important role. Let $\mathscr{B}$ denote the collection of bounded, measurable functions $f: S \to \R$. With the usual (pointwise) addition and scalar multiplication, $\mathscr{B}$ is a vector space. We give $\mathscr{B}$ the supremum norm, defined by $\|f\| = \sup\{\left|f(x)\right|: x \in S\}$.
Suppose now that $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on $(\Omega, \mathscr{F}, \P)$ with state space $S$ and time space $T$. Thus, $X_t$ is a random variable taking values in $S$ for each $t \in T$, and we think of $X_t \in S$ as the state of a system at time $t \in T$. We also assume that we have a collection $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ of $\sigma$-algebras with the properties that $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$, and that $\mathscr{F}_s \subseteq \mathscr{F}_t \subseteq \mathscr{F}$ for $s, \, t \in T$ with $s \le t$. Intuitively, $\mathscr{F}_t$ is the collection of events up to time $t \in T$. Technically, the assumptions mean that $\mathfrak{F}$ is a filtration and that the process $\bs{X}$ is adapted to $\mathfrak{F}$. The most basic (and coarsest) filtration is the natural filtration $\mathfrak{F}^0 = \left\{\mathscr{F}^0_t: t \in T\right\}$ where $\mathscr{F}^0_t = \sigma\{X_s: s \in T, s \le t\}$, the $\sigma$-algebra generated by the process up to time $t \in T$. In continuous time, however, it is often necessary to use slightly finer $\sigma$-algebras in order to have a nice mathematical theory. In particular, we often need to assume that the filtration $\mathfrak{F}$ is right continuous in the sense that $\mathscr{F}_{t+} = \mathscr{F}_t$ for $t \in T$ where $\mathscr{F}_{t+} = \bigcap\{\mathscr{F}_s: s \in T, s \gt t\}$. We can accomplish this by taking $\mathfrak{F} = \mathfrak{F}^0_+$ so that $\mathscr{F}_t = \mathscr{F}^0_{t+}$ for $t \in T$, and in this case, $\mathfrak{F}$ is referred to as the right continuous refinement of the natural filtration. We also sometimes need to assume that $\mathfrak{F}$ is complete with respect to $\P$ in the sense that if $A \in \mathscr{F}$ with $\P(A) = 0$ and $B \subseteq A$ then $B \in \mathscr{F}_0$. That is, $\mathscr{F}_0$ contains all of the null events (and hence also all of the almost certain events), and therefore so does $\mathscr{F}_t$ for all $t \in T$.
Definitions
The random process $\bs{X}$ is a Markov process if $\P(X_{s+t} \in A \mid \mathscr{F}_s) = \P(X_{s+t} \in A \mid X_s)$ for all $s, \, t \in T$ and $A \in \mathscr{S}$.
The defining condition, known appropriately enough as the the Markov property, states that the conditional distribution of $X_{s+t}$ given $\mathscr{F}_s$ is the same as the conditional distribution of $X_{s+t}$ just given $X_s$. Think of $s$ as the present time, so that $s + t$ is a time in the future. If we know the present state $X_s$, then any additional knowledge of events in the past is irrelevant in terms of predicting the future state $X_{s + t}$. Technically, the conditional probabilities in the definition are random variables, and the equality must be interpreted as holding with probability 1. As you may recall, conditional expected value is a more general and useful concept than conditional probability, so the following theorem may come as no surprise.
The random process $\bs{X}$ is a Markov process if and only if $\E[f(X_{s+t}) \mid \mathscr{F}_s] = \E[f(X_{s+t}) \mid X_s]$ for every $s, \, t \in T$ and every $f \in \mathscr{B}$.
Proof sketch
The condition in this theorem clearly implies the Markov property, by letting $f = \bs{1}_A$, the indicator function of $A \in \mathscr{S}$. The converse is a classical bootstrapping argument: the Markov property implies the expected value condition
1. First when $f = \bs{1}_A$ for $A \in \mathscr{S}$ (by definition).
2. Next when $f \in \mathscr{B}$ is a simple function, by linearity.
3. Next when $f \in \mathscr{B}$ is nonnegative, by the monotone convergence theorem.
4. Finally for general $f \in \mathscr{B}$ by considering positive and negative parts.
Technically, we should say that $\bs{X}$ is a Markov process relative to the filtration $\mathfrak{F}$. If $\bs{X}$ satisfies the Markov property relative to a filtration, then it satisfies the Markov property relative to any coarser filtration.
Suppose that the stochastic process $\bs{X} = \{X_t: t \in T\}$ is adapted to the filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and that $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$ is a filtration that is finer than $\mathfrak{F}$. If $\bs{X}$ is a Markov process relative to $\mathfrak{G}$ then $\bs{X}$ is a Markov process relative to $\mathfrak{F}$.
Proof
First recall that $\bs{X}$ is adapted to $\mathfrak{G}$ since $\bs{X}$ is adapted to $\mathfrak{F}$. If $s, \, t \in T$ and $f \in \mathscr{B}$ then $\E[f(X_{s+t}) \mid \mathscr{F}_s] = \E\left(\E[f(X_{s+t}) \mid \mathscr{G}_s] \mid \mathscr{F}_s\right)= \E\left(\E[f(X_{s+t}) \mid X_s] \mid \mathscr{F}_s\right) = \E[f(X_{s+t}) \mid X_s]$ The first equality is a basic property of conditional expected value. The second uses the fact that $\bs{X}$ is Markov relative to $\mathfrak{G}$, and the third follows since $X_s$ is measurable with respect to $\mathscr{F}_s$.
In particular, if $\bs{X}$ is a Markov process, then $\bs{X}$ satisfies the Markov property relative to the natural filtration $\mathfrak{F}^0$. The theory of Markov processes is simplified considerably if we add an additional assumption.
A Markov process $\bs{X}$ is time homogeneous if $\P(X_{s+t} \in A \mid X_s = x) = \P(X_t \in A \mid X_0 = x)$ for every $s, \, t \in T$, $x \in S$ and $A \in \mathscr{S}$.
So if $\bs{X}$ is homogeneous (we usually don't bother with the time adjective), then the process $\{X_{s+t}: t \in T\}$ given $X_s = x$ is equivalent (in distribution) to the process $\{X_t: t \in T\}$ given $X_0 = x$. For this reason, the initial distribution is often unspecified in the study of Markov processes—if the process is in state $x \in S$ at a particular time $s \in T$, then it doesn't really matter how the process got to state $x$; the process essentially starts over, independently of the past. The term stationary is sometimes used instead of homogeneous.
From now on, we will usually assume that our Markov processes are homogeneous. This is not as big of a loss of generality as you might think. A non-homogenous process can be turned into a homogeneous process by enlarging the state space, as shown below. For a homogeneous Markov process, if $s, \, t \in T$, $x \in S$, and $f \in \mathscr{B}$, then $\E[f(X_{s+t}) \mid X_s = x] = \E[f(X_t) \mid X_0 = x]$
Feller Processes
In continuous time, or with general state spaces, Markov processes can be very strange without additional continuity assumptions. Suppose (as is usually the case) that $S$ has an LCCB topology and that $\mathscr{S}$ is the Borel $\sigma$-algebra. Let $\mathscr{C}$ denote the collection of bounded, continuous functions $f: S \to \R$. Let $\mathscr{C}_0$ denote the collection of continuous functions $f: S \to \R$ that vanish at $\infty$. The last phrase means that for every $\epsilon \gt 0$, there exists a compact set $C \subseteq S$ such that $\left|f(x)\right| \lt \epsilon$ if $x \notin C$. With the usual (pointwise) operations of addition and scalar multiplication, $\mathscr{C}_0$ is a vector subspace of $\mathscr{C}$, which in turn is a vector subspace of $\mathscr{B}$. Just as with $\mathscr{B}$, the supremum norm is used for $\mathscr{C}$ and $\mathscr{C}_0$.
A Markov process $\bs{X} = \{X_t: t \in T\}$ is a Feller process if the following conditions are satisfied.
1. Continuity in space: For $t \in T$ and $y \in S$, the distribution of $X_t$ given $X_0 = x$ converges to the distribution of $X_t$ given $X_0 = y$ as $x \to y$.
2. Continuity in time: Given $X_0 = x$ for $x \in S$, $X_t$ converges in probability to $x$ as $t \downarrow 0$.
Additional details
1. This means that $\E[f(X_t) \mid X_0 = x] \to \E[f(X_t) \mid X_0 = y]$ as $x \to y$ for every $f \in \mathscr{C}$.
2. This means that $\P[X_t \in U \mid X_0 = x] \to 1$ as $t \downarrow 0$ for every neighborhood $U$ of $x$.
Feller processes are named for William Feller. Note that if $S$ is discrete, (a) is automatically satisfied and if $T$ is discrete, (b) is automatically satisfied. In particular, every discrete-time Markov chain is a Feller Markov process. There are certainly more general Markov processes, but most of the important processes that occur in applications are Feller processes, and a number of nice properties flow from the assumptions. Here is the first:
If $\bs{X} = \{X_t: t \in T\}$ is a Feller process, then there is a version of $\bs{X}$ such that $t \mapsto X_t(\omega)$ is continuous from the right and has left limits for every $\omega \in \Omega$.
Again, this result is only interesting in continuous time $T = [0, \infty)$. Recall that for $\omega \in \Omega$, the function $t \mapsto X_t(\omega)$ is a sample path of the process. So we will often assume that a Feller Markov process has sample paths that are right continuous and have left limits, since we know there is a version with these properties.
Stopping Times and the Strong Markov Property
For our next discussion, you may need to review again the section on filtrations and stopping times. To give a quick review, suppose again that we start with our probability space $(\Omega, \mathscr{F}, \P)$ and the filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ (so that we have a filtered probability space).
Since time (past, present, future) plays such a fundamental role in Markov processes, it should come as no surprise that random times are important. We often need to allow random times to take the value $\infty$, so we need to enlarge the set of times to $T_\infty = T \cup \{\infty\}$. The topology on $T$ is extended to $T_\infty$ by the rule that for $s \in T$, the set $\{t \in T_\infty: t \gt s\}$ is an open neighborhood of $\infty$. This is the one-point compactification of $T$ and is used so that the notion of time converging to infinity is preserved. The Borel $\sigma$-algebra $\mathscr{T}_\infty$ is used on $T_\infty$, which again is just the power set in the discrete case.
If $\bs{X} = \{X_t: t \in T\}$ is a stochastic process on the sample space $(\Omega, \mathscr{F})$, and if $\tau$ is a random time, then naturally we want to consider the state $X_\tau$ at the random time. There are two problems. First, if $\tau$ takes the value $\infty$, $X_\tau$ is not defined. The usual solution is to add a new death state $\delta$ to the set of states $S$, and then to give $S_\delta = S \cup \{\delta\}$ the $\sigma$-algebra $\mathscr{S}_\delta = \mathscr{S} \cup \{A \cup \{\delta\}: A \in \mathscr{S}\}$. A function $f \in \mathscr{B}$ is extended to $S_\delta$ by the rule $f(\delta) = 0$. The second problem is that $X_\tau$ may not be a valid random variable (that is, measurable) unless we assume that the stochastic process $\bs{X}$ is measurable. Recall that this means that $\bs{X}: \Omega \times T \to S$ is measurable relative to $\mathscr{F} \otimes \mathscr{T}$ and $\mathscr{S}$. (This is always true in discrete time.)
Recall next that a random time $\tau$ is a stopping time (also called a Markov time or an optional time) relative to $\mathfrak{F}$ if $\{\tau \le t\} \in \mathscr{F}_t$ for each $t \in T$. Intuitively, we can tell whether or not $\tau \le t$ from the information available to us at time $t$. In a sense, a stopping time is a random time that does not require that we see into the future. Of course, the concept depends critically on the filtration. Recall that if a random time $\tau$ is a stopping time for a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ then it is also a stopping time for a finer filtration $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$, so that $\mathscr{F}_t \subseteq \mathscr{G}_t$ for $t \in T$. Thus, the finer the filtration, the larger the collection of stopping times. In fact if the filtration is the trivial one where $\mathscr{F}_t = \mathscr{F}$ for all $t \in T$ (so that all information is available to us from the beginning of time), then any random time is a stopping time. But of course, this trivial filtration is usually not sensible.
Next, recall that if $\tau$ is a stopping time for the filtration $\mathfrak{F}$, then the $\sigma$-algebra $\mathscr{F}_\tau$ associated with $\tau$ is given by $\mathscr{F}_\tau = \left\{A \in \mathscr{F}: A \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \in T\right\}$ Intuitively, $\mathscr{F}_\tau$ is the collection of events up to the random time $\tau$, analogous to the $\mathscr{F}_t$ which is the collection of events up to the deterministic time $t \in T$. If $\bs{X} = \{X_t: t \in T\}$ is a stochastic process adapted to $\mathfrak{F}$ and if $\tau$ is a stopping time relative to $\mathfrak{F}$, then we would hope that $X_\tau$ is measurable with respect to $\mathscr{F}_\tau$ just as $X_t$ is measurable with respect to $\mathscr{F}_t$ for deterministic $t \in T$. However, this will generally not be the case unless $\bs{X}$ is progressively measurable relative to $\mathfrak{F}$, which means that $\bs{X}: \Omega \times T_t \to S$ is measurable with respect to $\mathscr{F}_t \otimes \mathscr{T}_t$ and $\mathscr{S}$ where $T_t = \{s \in T: s \le t\}$ and $\mathscr{T}_t$ the corresponding Borel $\sigma$-algebra. This is always true in discrete time, of course, and more generally if $S$ has an LCCB topology with $\mathscr{S}$ the Borel $\sigma$-algebra, and $\bs{X}$ is right continuous. If $\bs{X}$ is progressively measurable with respect to $\mathfrak{F}$ then $\bs{X}$ is measurable and $\bs{X}$ is adapted to $\mathfrak{F}$.
The strong Markov property for our stochastic process $\bs{X} = \{X_t: t \in T\}$ states that the future is independent of the past, given the present, when the present time is a stopping time.
The random process $\bs{X}$ is a strong Markov process if $\E[f(X_{\tau + t}) \mid \mathscr{F}_\tau] = \E[f(X_{\tau + t}) \mid X_\tau]$ for every $t \in T$, stopping time $\tau$, and $f \in \mathscr{B}$.
As with the regular Markov property, the strong Markov property depends on the underlying filtration $\mathfrak{F}$. If the property holds with respect to a given filtration, then it holds with respect to a coarser filtration.
Suppose that the stochastic process $\bs{X} = \{X_t: t \in T\}$ is progressively measurable relative to the filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and that the filtration $\mathfrak{G} = \{\mathscr{G}_t: t \in T\}$ is finer than $\mathfrak{F}$. If $\bs{X}$ is a strong Markov process relative to $\mathfrak{G}$ then $\bs{X}$ is a strong Markov process relative to $\mathfrak{F}$.
Proof
Recall again that since $\bs{X}$ is adapted to $\mathfrak{F}$, it is also adapted to $\mathfrak{G}$. Suppose that $\tau$ is a finite stopping time for $\mathfrak{F}$ and that $t \in T$ and $f \in \mathscr{B}$. Then $\tau$ is also a stopping time for $\mathfrak{G}$, and $\mathscr{F}_\tau \subseteq \mathscr{G}_\tau$. Hence $\E[f(X_{\tau+t}) \mid \mathscr{F}_\tau] = \E\left(\E[f(X_{\tau+t}) \mid \mathscr{G}_\tau] \mid \mathscr{F}_\tau\right)= \E\left(\E[f(X_{\tau+t}) \mid X_\tau] \mid \mathscr{F}_\tau\right) = \E[f(X_{\tau+t}) \mid X_\tau]$ The first equality is a basic property of conditional expected value. The second uses the fact that $\bs{X}$ has the strong Markov property relative to $\mathfrak{G}$, and the third follows since $X_\tau$ is measurable with respect to $\mathscr{F}_\tau$. In continuous time, it is this last step that requires progressive measurability.
So if $\bs{X}$ is a strong Markov process, then $\bs{X}$ satisfies the strong Markov property relative to its natural filtration. Again there is a tradeoff: finer filtrations allow more stopping times (generally a good thing), but make the strong Markov property harder to satisfy and may not be reasonable (not so good). So we usually don't want filtrations that are too much finer than the natural one.
With the strong Markov and homogeneous properties, the process $\{X_{\tau + t}: t \in T\}$ given $X_\tau = x$ is equivalent in distribution to the process $\{X_t: t \in T\}$ given $X_0 = x$. Clearly, the strong Markov property implies the ordinary Markov property, since a fixed time $t \in T$ is trivially also a stopping time. The converse is true in discrete time.
Suppose that $\bs{X} = \{X_n: n \in \N\}$ is a (homogeneous) Markov process in discrete time. Then $\bs{X}$ is a strong Markov process.
As always in continuous time, the situation is more complicated and depends on the continuity of the process $\bs{X}$ and the filtration $\mathfrak{F}$. Here is the standard result for Feller processes.
If $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Feller Markov process, then $\bs{X}$ is a strong Markov process relative to the filtration $\mathfrak{F}^0_+$, the right-continuous refinement of the natural filtration.
Transition Kernels of Markov Processes
For our next discussion, you may need to review the section on kernels and operators in the chapter on expected value. Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a (homogeneous) Markov process with state space $S$ and time space $T$, as described above. The kernels in the following definition are of fundamental importance in the study of $\bs{X}$.
For $t \in T$, let $P_t(x, A) = \P(X_t \in A \mid X_0 = x), \quad x \in S, \, A \in \mathscr{S}$ Then $P_t$ is a probability kernel on $(S, \mathscr{S})$, known as the transition kernel of $\bs{X}$ for time $t$.
Proof
Fix $t \in T$. The measurability of $x \mapsto \P(X_t \in A \mid X_0 = x)$ for $A \in \mathscr{S}$ is built into the definition of conditional probability. Also, of course, $A \mapsto \P(X_t \in A \mid X_0 = x)$ is a probability measure on $\mathscr{S}$ for $x \in S$. In general, the conditional distribution of one random variable, conditioned on a value of another random variable defines a probability kernel.
That is, $P_t(x, \cdot)$ is the conditional distribution of $X_t$ given $X_0 = x$ for $t \in T$ and $x \in S$. By the time homogenous property, $P_t(x, \cdot)$ is also the conditional distribution of $X_{s + t}$ given $X_s = x$ for $s \in T$: $P_t(x, A) = \P(X_{s+t} \in A \mid X_s = x), \quad s, \, t \in T, \, x \in S, \, A \in \mathscr{S}$ Note that $P_0 = I$, the identity kernel on $(S, \mathscr{S})$ defined by $I(x, A) = \bs{1}(x \in A)$ for $x \in S$ and $A \in \mathscr{S}$, so that $I(x, A) = 1$ if $x \in A$ and $I(x, A) = 0$ if $x \notin A$. Recall also that usually there is a natural reference measure $\lambda$ on $(S, \mathscr{S})$. In this case, the transition kernel $P_t$ will often have a transition density $p_t$ with respect to $\lambda$ for $t \in T$. That is, $P_t(x, A) = \P(X_t \in A \mid X_0 = x) = \int_A p_t(x, y) \lambda(dy), \quad x \in S, \, A \in \mathscr{S}$ The next theorem gives the Chapman-Kolmogorov equation, named for Sydney Chapman and Andrei Kolmogorov, the fundamental relationship between the probability kernels, and the reason for the name transition kernel.
Suppose again that $\bs{X} = \{X_t: t \in T\}$ is a Markov process on $S$ with transition kernels $\bs{P} = \{P_t: t \in T\}$. If $s, \, t \in T$, then $P_s P_t = P_{s + t}$. That is, $P_{s+t}(x, A) = \int_S P_s(x, dy) P_t(y, A), \quad x \in S, \, A \in \mathscr{S}$
Proof
The Markov property and a conditioning argument are the fundamental tools. Recall again that $P_s(x, \cdot)$ is the conditional distribution of $X_s$ given $X_0 = x$ for $x \in S$. Let $A \in \mathscr{S}$. Conditioning on $X_s$ gives $P_{s+t}(x, A) = \P(X_{s+t} \in A \mid X_0 = x) = \int_S P_s(x, dy) \P(X_{s+t} \in A \mid X_s = y, X_0 = x)$ But by the Markov and time-homogeneous properties, $\P(X_{s+t} \in A \mid X_s = y, X_0 = x) = \P(X_t \in A \mid X_0 = y) = P_t(y, A)$ Substituting we have $P_{s+t}(x, A) = \int_S P_s(x, dy) P_t(y, A) = (P_s P_t)(x, A)$
In the language of functional analysis, $\bs{P}$ is a semigroup. Recall that the commutative property generally does not hold for the product operation on kernels. However the property does hold for the transition kernels of a homogeneous Markov process. That is, $P_s P_t = P_t P_s = P_{s+t}$ for $s, \, t \in T$. As a simple corollary, if $S$ has a reference measure, the same basic relationship holds for the transition densities.
Suppose that $\lambda$ is the reference measure on $(S, \mathscr{S})$ and that $\bs{X} = \{X_t: t \in T\}$ is a Markov process on $S$ with transition densities $\{p_t: t \in T\}$. If $s, \, t \in T$ then $p_s p_t = p_{s+t}$. That is, $p_{s+t}(x, z) = \int_S p_s(x, y) p_t(y, z) \lambda(dy), \quad x, \, z \in S$
Proof
The transition kernels satisfy $P_s P_t = P_{s+t}$. But $P_s$ has density $p_s$, $P_t$ has density $p_t$, and $P_{s+t}$ has density $p_{s+t}$. From a basic result on kernel functions, $P_s P_t$ has density $p_s p_t$ as defined in the theorem.
If $T = \N$ (discrete time), then the transition kernels of $\bs{X}$ are just the powers of the one-step transition kernel. That is, if we let $P = P_1$ then $P_n = P^n$ for $n \in \N$.
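In the discrete-time case with a finite state space, the transition kernels are simply stochastic matrices and the Chapman-Kolmogorov equation reduces to matrix multiplication. Here is a minimal numerical sketch (the matrix is an arbitrary illustration, not from the text), assuming NumPy is available.
```python
import numpy as np

# An arbitrary one-step transition matrix on a three-state space
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Chapman-Kolmogorov in discrete time: P_{s+t} = P_s P_t, that is, P^(s+t) = P^s P^t
s, t = 2, 3
lhs = np.linalg.matrix_power(P, s + t)
rhs = np.linalg.matrix_power(P, s) @ np.linalg.matrix_power(P, t)
print(np.allclose(lhs, rhs))   # True
```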
Recall that a kernel defines two operations: operating on the left with positive measures on $(S, \mathscr{S})$ and operating on the right with measurable, real-valued functions. For the transition kernels of a Markov process, both of the these operators have natural interpretations.
Suppose that $s, \, t \in T$. If $\mu_s$ is the distribution of $X_s$ then $X_{s+t}$ has distribution $\mu_{s+t} = \mu_s P_t$. That is, $\mu_{s+t}(A) = \int_S \mu_s(dx) P_t(x, A), \quad A \in \mathscr{S}$
Proof
Let $A \in \mathscr{S}$. Conditioning on $X_s$ gives $\P(X_{s+t} \in A) = \E[\P(X_{s+t} \in A \mid X_s)] = \int_S \mu_s(dx) \P(X_{s+t} \in A \mid X_s = x) = \int_S \mu_s(dx) P_t(x, A) = \mu_s P_t(A)$
So if $\mathscr{P}$ denotes the collection of probability measures on $(S, \mathscr{S})$, then the left operator $P_t$ maps $\mathscr{P}$ back into $\mathscr{P}$. In particular, if $X_0$ has distribution $\mu_0$ (the initial distribution) then $X_t$ has distribution $\mu_t = \mu_0 P_t$ for every $t \in T$.
A positive measure $\mu$ on $(S, \mathscr{S})$ is invariant for $\bs{X}$ if $\mu P_t = \mu$ for every $t \in T$.
Hence if $\mu$ is a probability measure that is invariant for $\bs{X}$, and $X_0$ has distribution $\mu$, then $X_t$ has distribution $\mu$ for every $t \in T$ so that the process $\bs{X}$ is identically distributed. In discrete time, note that if $\mu$ is a positive measure and $\mu P = \mu$ then $\mu P^n = \mu$ for every $n \in \N$, so $\mu$ is invariant for $\bs{X}$. The operator on the right is given next.
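For a discrete-time chain on a finite state space, an invariant probability measure is a row vector $\mu$ with $\mu P = \mu$ and entries summing to 1, and can be computed as a normalized left eigenvector of $P$ for the eigenvalue 1. Here is a brief sketch (the matrix is the same arbitrary illustration used above), assuming NumPy is available.
```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Left eigenvector of P for the eigenvalue 1: solve P^T v = v, then normalize.
w, v = np.linalg.eig(P.T)
i = np.argmin(np.abs(w - 1))        # index of the eigenvalue closest to 1
mu = np.real(v[:, i])
mu = mu / mu.sum()                  # normalize to a probability vector

print("invariant distribution:", mu)
print("check mu P = mu:", np.allclose(mu @ P, mu))   # True
```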
Suppose that $f: S \to \R$. If $t \in T$ then (assuming that the expected value exists), $P_t f(x) = \int_S P_t(x, dy) f(y) = \E\left[f(X_t) \mid X_0 = x\right], \quad x \in S$
Proof
This follows directly from the definitions: $P_t f(x) = \int_S P_t(x, dy) f(y), \quad x \in S$ and $P_t(x, \cdot)$ is the conditional distribution of $X_t$ given $X_0 = x$.
In particular, the right operator $P_t$ is defined on $\mathscr{B}$, the vector space of bounded, measurable functions $f: S \to \R$, and in fact is a linear operator on $\mathscr{B}$. That is, if $f, \, g \in \mathscr{B}$ and $c \in \R$, then $P_t(f + g) = P_t f + P_t g$ and $P_t(c f) = c P_t f$. Moreover, $P_t$ is a contraction operator on $\mathscr{B}$, since $\left\|P_t f\right\| \le \|f\|$ for $f \in \mathscr{B}$. It then follows that $P_t$ is a continuous operator on $\mathscr{B}$ for $t \in T$.
For the right operator, there is a concept that is complementary to the invariance of a positive measure for the left operator.
A measurable function $f: S \to \R$ is harmonic for $\bs{X}$ if $P_t f = f$ for all $t \in T$.
Again, in discrete time, if $P f = f$ then $P^n f = f$ for all $n \in \N$, so $f$ is harmonic for $\bs{X}$.
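For a concrete example of a harmonic function, consider the simple symmetric random walk on $\{0, 1, \ldots, m\}$ with absorbing endpoints; the classical fact that $f(x) = x / m$ satisfies $P f = f$ can be checked numerically. The following is an illustrative sketch (not part of the text), assuming NumPy is available.
```python
import numpy as np

m = 5
P = np.zeros((m + 1, m + 1))
P[0, 0] = 1.0                       # state 0 is absorbing
P[m, m] = 1.0                       # state m is absorbing
for x in range(1, m):
    P[x, x - 1] = 0.5               # step down with probability 1/2
    P[x, x + 1] = 0.5               # step up with probability 1/2

f = np.arange(m + 1) / m            # f(x) = x / m
print(np.allclose(P @ f, f))        # True: f is harmonic for this chain
```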
Combining two results above, if $X_0$ has distribution $\mu_0$ and $f: S \to \R$ is measurable, then (again assuming that the expected value exists), $\mu_0 P_t f = \E[f(X_t)]$ for $t \in T$. That is, $\E[f(X_t)] = \int_S \mu_0(dx) \int_S P_t(x, dy) f(y)$
The result above shows how to obtain the distribution of $X_t$ from the distribution of $X_0$ and the transition kernel $P_t$ for $t \in T$. But we can do more. Recall that one basic way to describe a stochastic process is to give its finite dimensional distributions, that is, the distribution of $\left(X_{t_1}, X_{t_2}, \ldots, X_{t_n}\right)$ for every $n \in \N_+$ and every $(t_1, t_2, \ldots, t_n) \in T^n$. For a Markov process, the initial distribution and the transition kernels determine the finite dimensional distributions. It's easiest to state the distributions in differential form.
Suppose $\bs{X} = \{X_t: t \in T\}$ is a Markov process with transition operators $\bs{P} = \{P_t: t \in T\}$, and that $(t_1, \ldots, t_n) \in T^n$ with $0 \lt t_1 \lt \cdots \lt t_n$. If $X_0$ has distribution $\mu_0$, then in differential form, the distribution of $\left(X_0, X_{t_1}, \ldots, X_{t_n}\right)$ is $\mu_0(dx_0) P_{t_1}(x_0, dx_1) P_{t_2 - t_1}(x_1, dx_2) \cdots P_{t_n - t_{n-1}} (x_{n-1}, dx_n)$
Proof
This follows from induction and repeated use of the Markov property. For example, if $t \in T$ with $t \gt 0$, then conditioning on $X_0$ gives $\P(X_0 \in A, X_t \in B) = \int_A \P(X_t \in B \mid X_0 = x) \mu_0(dx) = \int_A P_t(x, B) \mu_0(dx) = \int_A \int_B P_t(x, dy) \mu_0(dx)$ for $A, \, B \in \mathscr{S}$. So in differential form, the distribution of $(X_0, X_t)$ is $\mu_0(dx) P_t(x, dy)$. If $s, \, t \in T$ with $0 \lt s \lt t$, then conditioning on $(X_0, X_s)$ and using our previous result gives $\P(X_0 \in A, X_s \in B, X_t \in C) = \int_{A \times B} \P(X_t \in C \mid X_0 = x, X_s = y) \mu_0(dx) P_s(x, dy)$ for $A, \, B, \, C \in \mathscr{S}$. But by the Markov property, $\P(X_t \in C \mid X_0 = x, X_s = y) = \P(X_t \in C \mid X_s = y) = P_{t-s}(y, C) = \int_C P_{t-s}(y, dz)$ Hence in differential form, the distribution of $(X_0, X_s, X_t)$ is $\mu_0(dx) P_s(x, dy) P_{t-s}(y, dz)$. Continuing in this manner gives the general result.
This result is very important for constructing Markov processes. If we know how to define the transition kernels $P_t$ for $t \in T$ (based on modeling considerations, for example), and if we know the initial distribution $\mu_0$, then the last result gives a consistent set of finite dimensional distributions. From the Kolmogorov construction theorem, we know that there exists a stochastic process that has these finite dimensional distributions. In continuous time, however, two serious problems remain. First, it's not clear how we would construct the transition kernels so that the crucial Chapman-Kolmogorov equations above are satisfied. Second, we usually want our Markov process to have certain properties (such as continuity properties of the sample paths) that go beyond the finite dimensional distributions. The first problem will be addressed in the next section, and fortunately, the second problem can be resolved for a Feller process.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a Markov process on an LCCB state space $(S, \mathscr{S})$ with transition operators $\bs{P} = \{P_t: t \in [0, \infty)\}$. Then $\bs{X}$ is a Feller process if and only if the following conditions hold:
1. Continuity in space: If $f \in \mathscr{C}_0$ and $t \in [0, \infty)$ then $P_t f \in \mathscr{C}_0$
2. Continuity in time: If $f \in \mathscr{C}_0$ and $x \in S$ then $P_t f(x) \to f(x)$ as $t \downarrow 0$.
A semigroup of probability kernels $\bs{P} = \{P_t: t \in T\}$ that satisfies the properties in this theorem is called a Feller semigroup. So the theorem states that the Markov process $\bs{X}$ is Feller if and only if the semigroup of transition operators $\bs{P}$ is Feller. As before, (a) is automatically satisfied if $S$ is discrete, and (b) is automatically satisfied if $T$ is discrete. Condition (a) means that $P_t$ is an operator on the vector space $\mathscr{C}_0$, in addition to being an operator on the larger space $\mathscr{B}$. Condition (b) actually implies a stronger form of continuity in time.
Suppose that $\bs{P} = \{P_t: t \in T\}$ is a Feller semigroup of transition operators. Then $t \mapsto P_t f$ is continuous (with respect to the supremum norm) for $f \in \mathscr{C}_0$.
Additional details
This means that for $f \in \mathscr{C}_0$ and $t \in [0, \infty)$, $\|P_{t+s} f - P_t f \| = \sup\{\left|P_{t+s}f(x) - P_t f(x)\right|: x \in S\} \to 0 \text{ as } s \to 0$
So combining this with the remark above, note that if $\bs{P}$ is a Feller semigroup of transition operators, then $f \mapsto P_t f$ is continuous on $\mathscr{C}_0$ for fixed $t \in T$, and $t \mapsto P_t f$ is continuous on $T$ for fixed $f \in \mathscr{C}_0$. Again, the importance of this is that we often start with the collection of probability kernels $\bs{P}$ and want to know that there exists a nice Markov process $\bs{X}$ that has these transition operators.
Sampling in Time
If we sample a Markov process at an increasing sequence of points in time, we get another Markov process in discrete time. But the discrete time process may not be homogeneous even if the original process is homogeneous.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a Markov process with state space $(S, \mathscr{S})$ and that $(t_0, t_1, t_2, \ldots)$ is a sequence in $T$ with $0 = t_0 \lt t_1 \lt t_2 \lt \cdots$. Let $Y_n = X_{t_n}$ for $n \in \N$. Then $\bs{Y} = \{Y_n: n \in \N\}$ is a Markov process in discrete time.
Proof
For $n \in \N$, let $\mathscr{G}_n = \sigma\{Y_k: k \in \N, k \le n\}$, so that $\{\mathscr{G}_n: n \in \N\}$ is the natural filtration associated with $\bs{Y}$. Note that $\mathscr{G}_n \subseteq \mathscr{F}_{t_n}$ and $Y_n = X_{t_n}$ is measurable with respect to $\mathscr{G}_n$ for $n \in \N$. Let $k, \, n \in \N$ and let $A \in \mathscr{S}$. Then $\P\left(Y_{k+n} \in A \mid \mathscr{G}_k\right) = \P\left(X_{t_{n+k}} \in A \mid \mathscr{G}_k\right) = \P\left(X_{t_{n+k}} \in A \mid X_{t_k}\right) = \P\left(Y_{n+k} \in A \mid Y_k\right)$ The middle equality follows from the Markov property of $\bs{X}$, by conditioning on $\mathscr{F}_{t_k}$ and using the facts that $\mathscr{G}_k \subseteq \mathscr{F}_{t_k}$ and that $X_{t_k} = Y_k$ is measurable with respect to $\mathscr{G}_k$.
If we sample a homogeneous Markov process at multiples of a fixed, positive time, we get a homogenous Markov process in discrete time.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a homogeneous Markov process with state space $(S, \mathscr{S})$ and transition kernels $\bs{P} = \{P_t: t \in T\}$. Fix $r \in T$ with $r \gt 0$ and define $Y_n = X_{n r}$ for $n \in \N$. Then $\bs{Y} = \{Y_n: n \in \N\}$ is a homogeneous Markov process in discrete time, with one-step transition kernel $Q$ given by $Q(x, A) = P_r(x, A); \quad x \in S, \, A \in \mathscr{S}$
In some cases, sampling a strong Markov process at an increasing sequence of stopping times yields another Markov process in discrete time. The point of this is that discrete-time Markov processes are often found naturally embedded in continuous-time Markov processes.
Enlarging the State Space
Our first result in this discussion is that a non-homogeneous Markov process can be turned into a homogenous Markov process, but only at the expense of enlarging the state space.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a non-homogeneous Markov process with state space $(S, \mathscr{S})$. Suppose also that $\tau$ is a random variable taking values in $T$, independent of $\bs{X}$. Let $\tau_t = \tau + t$ and let $Y_t = \left(X_{\tau_t}, \tau_t\right)$ for $t \in T$. Then $\bs{Y} = \{Y_t: t \in T\}$ is a homogeneous Markov process with state space $(S \times T, \mathscr{S} \otimes \mathscr{T})$. For $t \in T$, the transition kernel $P_t$ is given by $P_t[(x, r), A \times B] = \P(X_{r+t} \in A \mid X_r = x) \bs{1}(r + t \in B), \quad (x, r) \in S \times T, \, A \times B \in \mathscr{S} \otimes \mathscr{T}$
Proof
By definition and the substitution rule, \begin{align*} \P[Y_{s + t} \in A \times B \mid Y_s = (x, r)] & = \P\left(X_{\tau_{s + t}} \in A, \tau_{s + t} \in B \mid X_{\tau_s} = x, \tau_s = r\right) \\ & = \P\left(X_{\tau + s + t} \in A, \tau + s + t \in B \mid X_{\tau + s} = x, \tau + s = r\right) \\ & = \P(X_{r + t} \in A, r + t \in B \mid X_r = x, \tau + s = r) \end{align*} But $\tau$ is independent of $\bs{X}$, so the last term is $\P(X_{r + t} \in A, r + t \in B \mid X_r = x) = \P(X_{r+t} \in A \mid X_r = x) \bs{1}(r + t \in B)$ The important point is that the last expression does not depend on $s$, so $\bs{Y}$ is homogeneous.
The trick of enlarging the state space is a common one in the study of stochastic processes. Sometimes a process that has a weaker form of forgetting the past can be made into a Markov process by enlarging the state space appropriately. Here is an example in discrete time.
Suppose that $\bs{X} = \{X_n: n \in \N\}$ is a random process with state space $(S, \mathscr{S})$ in which the future depends stochastically on the last two states. That is, for $n \in \N$ $\P(X_{n+2} \in A \mid \mathscr{F}_{n+1}) = \P(X_{n+2} \in A \mid X_n, X_{n+1}), \quad A \in \mathscr{S}$ where $\{\mathscr{F}_n: n \in \N\}$ is the natural filtration associated with the process $\bs{X}$. Suppose also that the process is time homogeneous in the sense that $\P(X_{n+2} \in A \mid X_n = x, X_{n+1} = y) = Q(x, y, A)$ independently of $n \in \N$. Let $Y_n = (X_n, X_{n+1})$ for $n \in \N$. Then $\bs{Y} = \{Y_n: n \in \N\}$ is a homogeneous Markov process with state space $(S \times S, \mathscr{S} \otimes \mathscr{S})$. The one-step transition kernel $P$ is given by $P[(x, y), A \times B] = I(y, A) Q(x, y, B); \quad x, \, y \in S, \; A, \, B \in \mathscr{S}$
Proof
Note first that for $n \in \N$, $\sigma\{Y_k: k \le n\} = \sigma\{(X_k, X_{k+1}): k \le n\} = \mathscr{F}_{n+1}$ so the natural filtration associated with the process $\bs{Y}$ is $\{\mathscr{F}_{n+1}: n \in \N\}$. If $C \in \mathscr{S} \otimes \mathscr{S}$ then \begin{align*} \P(Y_{n+1} \in C \mid \mathscr{F}_{n+1}) & = \P[(X_{n+1}, X_{n+2}) \in C \mid \mathscr{F}_{n+1}] \\ & = \P[(X_{n+1}, X_{n+2}) \in C \mid X_n, X_{n+1}] = \P(Y_{n+1} \in C \mid Y_n) \end{align*} by the given assumption on $\bs{X}$. Hence $\bs{Y}$ is a Markov process. Next, \begin{align*} \P[Y_{n+1} \in A \times B \mid Y_n = (x, y)] & = \P[(X_{n+1}, X_{n+2}) \in A \times B \mid (X_n, X_{n+1}) = (x, y)] \\ & = \P(X_{n+1} \in A, X_{n+2} \in B \mid X_n = x, X_{n+1} = y) = \P(y \in A, X_{n+2} \in B \mid X_n = x, X_{n + 1} = y) \\ & = I(y, A) Q(x, y, B) \end{align*}
The last result generalizes in a completely straightforward way to the case where the future of a random process in discrete time depends stochastically on the last $k$ states, for some fixed $k \in \N$.
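Here is a small numerical sketch of the construction above for a finite state space (the second-order kernel $Q$ is randomly generated purely for illustration, and NumPy is assumed available): the one-step transition matrix of the pair chain on $S \times S$ is assembled from the formula $P[(x, y), A \times B] = I(y, A) Q(x, y, B)$ and checked to be stochastic.
```python
import numpy as np

rng = np.random.default_rng(seed=5)
S = 3                                    # states 0, 1, 2

# An arbitrary second-order kernel: Q[x, y, :] is the conditional distribution of the
# next state given that the last two states were x and then y.
Q = rng.random((S, S, S))
Q = Q / Q.sum(axis=2, keepdims=True)     # normalize each conditional distribution

# One-step transition matrix of the pair chain on S x S, indexed by x * S + y:
# the pair (x, y) can only move to a pair of the form (y, b), with probability Q[x, y, b]
P = np.zeros((S * S, S * S))
for x in range(S):
    for y in range(S):
        for b in range(S):
            P[x * S + y, y * S + b] = Q[x, y, b]

print(np.allclose(P.sum(axis=1), 1.0))   # True: each row is a probability vector
```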
Examples and Applications
Recurrence Relations and Differential Equations
As noted in the introduction, Markov processes can be viewed as stochastic counterparts of deterministic recurrence relations (discrete time) and differential equations (continuous time). Our goal in this discussion is to explore these connections.
Suppose that $\bs{X} = \{X_n: n \in \N\}$ is a stochastic process with state space $(S, \mathscr{S})$ and that $\bs{X}$ satisfies the recurrence relation $X_{n+1} = g(X_n), \quad n \in \N$ where $g: S \to S$ is measurable. Then $\bs{X}$ is a homogeneous Markov process with one-step transition operator $P$ given by $P f = f \circ g$ for a measurable function $f: S \to \R$.
Proof
Clearly $\bs{X}$ is uniquely determined by the initial state, and in fact $X_n = g^n(X_0)$ for $n \in \N$ where $g^n$ is the $n$-fold composition power of $g$. So the only possible source of randomness is in the initial state. The Markov and time homogeneous properties simply follow from the trivial fact that $g^{m+n}(X_0) = g^n[g^m(X_0)]$, so that $X_{m+n} = g^n(X_m)$. That is, the state at time $m + n$ is completely determined by the state at time $m$ (regardless of the previous states) and the time increment $n$. In particular, $P f(x) = \E[f(X_1) \mid X_0 = x] = f[g(x)]$ for measurable $f: S \to \R$ and $x \in S$. Note that for $n \in \N$, the $n$-step transition operator is given by $P^n f = f \circ g^n$.
In the deterministic world, as in the stochastic world, the situation is more complicated in continuous time. Nonetheless, the same basic analogy applies.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ with state space $(\R, \mathscr{R})$ satisfies the first-order differential equation $\frac{d}{dt}X_t = g(X_t)$ where $g: \R \to \R$ is Lipschitz continuous. Then $\bs{X}$ is a Feller Markov process.
Proof
Recall that Lipschitz continuous means that there exists a constant $k \in (0, \infty)$ such that $\left|g(y) - g(x)\right| \le k \left|x - y\right|$ for $x, \, y \in \R$. This is a standard condition on $g$ that guarantees the existence and uniqueness of a solution to the differential equation on $[0, \infty)$. So as before, the only source of randomness in the process comes from the initial value $X_0$. Let $t \mapsto X_t(x)$ denote the unique solution with $X_0(x) = x$ for $x \in \R$. The Markov and homogenous properties follow from the fact that $X_{t+s}(x) = X_t(X_s(x))$ for $s, \, t \in [0, \infty)$ and $x \in S$. That is, the state at time $t + s$ depends only on the state at time $s$ and the time increment $t$. The Feller properties follow from the continuity of $t \mapsto X_t(x)$ and the continuity of $x \mapsto X_t(x)$. The latter is the continuous dependence on the initial value, again guaranteed by the assumptions on $g$. Note that the transition operator is given by $P_t f(x) = f[X_t(x)]$ for a measurable function $f: S \to \R$ and $x \in S$.
In differential form, the process can be described by $d X_t = g(X_t) \, dt$. This essentially deterministic process can be extended to a very important class of Markov processes by the addition of a stochastic term related to Brownian motion. Such stochastic differential equations are the main tools for constructing Markov processes known as diffusion processes.
Processes with Stationary, Independent Increments
For our next discussion, we consider a general class of stochastic processes that are Markov processes. Suppose that $\bs{X} = \{X_t: t \in T\}$ is a random process with $S \subseteq \R$ as the set of states. The state space can be discrete (countable) or continuous. Typically, $S$ is either $\N$ or $\Z$ in the discrete case, and is either $[0, \infty)$ or $\R$ in the continuous case. In any case, $S$ is given the usual $\sigma$-algebra $\mathscr{S}$ of Borel subsets of $S$ (which is the power set in the discrete case). Also, the state space $(S, \mathscr{S})$ has a natural reference measure $\lambda$, namely counting measure in the discrete case and Lebesgue measure in the continuous case. Let $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ denote the natural filtration, so that $\mathscr{F}_t = \sigma\{X_s: s \in T, s \le t\}$ for $t \in T$.
The process $\bs{X}$ has
1. Independent increments if $X_{s+t} - X_s$ is independent of $\mathscr{F}_s$ for all $s, \, t \in T$.
2. Stationary increments if the distribution of $X_{s+t} - X_s$ is the same as the distribution of $X_t - X_0$ for all $s, \, t \in T$.
A difference of the form $X_{s+t} - X_s$ for $s, \, t \in T$ is an increment of the process, hence the names. Sometimes the definition of stationary increments is that $X_{s+t} - X_s$ have the same distribution as $X_t$. But this forces $X_0 = 0$ with probability 1, and as usual with Markov processes, it's best to keep the initial distribution unspecified. If $\bs{X}$ has stationary increments in the sense of our definition, then the process $\bs{Y} = \{Y_t = X_t - X_0: t \in T\}$ has stationary increments in the more restricted sense. For the remainder of this discussion, assume that $\bs X = \{X_t: t \in T\}$ has stationary, independent increments, and let $Q_t$ denote the distribution of $X_t - X_0$ for $t \in T$.
$Q_s * Q_t = Q_{s+t}$ for $s, \, t \in T$.
Proof
For $s, \, t \in T$, $Q_s$ is the distribution of $X_s - X_0$, and by the stationary property, $Q_t$ is the distribution of $X_{s + t} - X_s$. By the independence property, $X_s - X_0$ and $X_{s+t} - X_s$ are independent. Hence $Q_s * Q_t$ is the distribution of $\left[X_s - X_0\right] + \left[X_{s+t} - X_s\right] = X_{s+t} - X_0$. But by definition, this variable has distribution $Q_{s+t}$.
So the collection of distributions $\bs{Q} = \{Q_t: t \in T\}$ forms a semigroup, with convolution as the operator. Note that $Q_0$ is simply point mass at 0.
The process $\bs{X}$ is a homogeneous Markov process. For $t \in T$, the transition operator $P_t$ is given by $P_t f(x) = \int_S f(x + y) Q_t(dy), \quad f \in \mathscr{B}$
Proof
Suppose that $s, \, t \in T$ and $f \in \mathscr{B}$, $\E[f(X_{s+t}) \mid \mathscr{F}_s] = \E[f(X_{s+t} - X_s + X_s) \mid \mathscr{F}_s] = \E[f(X_{s+t}) \mid X_s]$ since $X_{s+t} - X_s$ is independent of $\mathscr{F}_s$. Moreover, by the stationary property, $\E[f(X_{s+t}) \mid X_s = x] = \int_S f(x + y) Q_t(dy), \quad x \in S$
Clearly the semigroup property of $\bs{P} = \{P_t: t \in T\}$ (with the usual operator product) is equivalent to the semigroup property of $\bs{Q} = \{Q_t: t \in T\}$ (with convolution as the product).
Suppose that for positive $t \in T$, the distribution $Q_t$ has probability density function $g_t$ with respect to the reference measure $\lambda$. Then the transition density is $p_t(x, y) = g_t(y - x), \quad x, \, y \in S$
Of course, from the result above, it follows that $g_s * g_t = g_{s+t}$ for $s, \, t \in T$, where here $*$ refers to the convolution operation on probability density functions.
If $Q_t \to Q_0$ as $t \downarrow 0$ then $\bs{X}$ is a Feller Markov process.
Thus, by the general theory sketched above, $\bs{X}$ is a strong Markov process, and there exists a version of $\bs{X}$ that is right continuous and has left limits. Such a process is known as a Lévy process, in honor of Paul Lévy.
For a real-valued stochastic process $\bs X = \{X_t: t \in T\}$, let $m$ and $v$ denote the mean and variance functions, so that $m(t) = \E(X_t), \; v(t) = \var(X_t); \quad t \in T$ assuming of course that these exist. The mean and variance functions for a Lévy process are particularly simple.
Suppose again that $\bs X$ has stationary, independent increments.
1. If $\mu_0 = \E(X_0) \in \R$ and $\mu_1 = \E(X_1) \in \R$ then $m(t) = \mu_0 + (\mu_1 - \mu_0) t$ for $t \in T$.
2. If in addition, $\sigma_0^2 = \var(X_0) \in (0, \infty)$ and $\sigma_1^2 = \var(X_1) \in (0, \infty)$ then $v(t) = \sigma_0^2 + (\sigma_1^2 - \sigma_0^2) t$ for $t \in T$.
Proof
The proofs are simple using the independent and stationary increments properties. For $t \in T$, let $m_0(t) = \E(X_t - X_0) = m(t) - \mu_0$ and $v_0(t) = \var(X_t - X_0) = v(t) - \sigma_0^2$ denote the mean and variance functions for the centered process $\{X_t - X_0: t \in T\}$. Now let $s, \, t \in T$.
1. From the additive property of expected value and the stationary property, $m_0(t + s) = \E(X_{t+s} - X_0) = \E[(X_{t + s} - X_s) + (X_s - X_0)] = \E(X_{t+s} - X_s) + \E(X_s - X_0) = m_0(t) + m_0(s)$
2. From the additive property of variance for independent variables and the stationary property, $v_0(t + s) = \var(X_{t+s} - X_0) = \var[(X_{t + s} - X_s) + (X_s - X_0)] = \var(X_{t+s} - X_s) + \var(X_s - X_0) = v_0(t) + v_0(s)$
So $m_0$ and $v_0$ satisfy the Cauchy equation. In discrete time, it's simple to see that there exist $a \in \R$ and $b^2 \in (0, \infty)$ such that $m_0(t) = a t$ and $v_0(t) = b^2 t$. The same is true in continuous time, given the continuity assumptions that we have on the process $\bs X$. Substituting $t = 1$ we have $a = \mu_1 - \mu_0$ and $b^2 = \sigma_1^2 - \sigma_0^2$, so the results follow.
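As a quick numerical illustration of the linear form of $m$ and $v$ (a sketch, not from the text), the following Python snippet simulates a Bernoulli random walk started at a constant $X_0 = x_0$, so that $\mu_1 - \mu_0 = p$ and $\sigma_1^2 - \sigma_0^2 = p(1 - p)$; the parameter values and sample size are arbitrary choices.

```python
import numpy as np

# A quick numerical check (illustrative only): for a Bernoulli(p) random walk started at
# the constant x0, we have mu_0 = x0, sigma_0^2 = 0, mu_1 - mu_0 = p and
# sigma_1^2 - sigma_0^2 = p(1 - p), so m(n) = x0 + p n and v(n) = p(1 - p) n.
# The values of p, x0, n, and the sample size are arbitrary choices.
rng = np.random.default_rng(0)
p, x0, n, reps = 0.3, 2.0, 25, 200_000

steps = rng.random((reps, n)) < p        # independent Bernoulli(p) increments
X_n = x0 + steps.sum(axis=1)             # X_n = X_0 + sum of the increments

print(X_n.mean(), x0 + p * n)            # sample mean versus m(n)
print(X_n.var(), p * (1 - p) * n)        # sample variance versus v(n)
```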
It's easy to describe processes with stationary independent increments in discrete time.
A process $\bs{X} = \{X_n: n \in \N\}$ has independent increments if and only if there exists a sequence of independent, real-valued random variables $(U_0, U_1, \ldots)$ such that $X_n = \sum_{i=0}^n U_i$ for $n \in \N$. In addition, $\bs{X}$ has stationary increments if and only if $(U_1, U_2, \ldots)$ are identically distributed.
Proof
Suppose first that $\bs{U} = (U_0, U_1, \ldots)$ is a sequence of independent, real-valued random variables, and define $X_n = \sum_{i=0}^n U_i$ for $n \in \N$. Note that $\mathscr{F}_n = \sigma\{X_0, \ldots, X_n\} = \sigma\{U_0, \ldots, U_n\}$ for $n \in \N$. If $k, \, n \in \N$ with $k \le n$, then $X_n - X_k = \sum_{i=k+1}^n U_i$ which is independent of $\mathscr{F}_k$ by the independence assumption on $\bs{U}$. Hence $\bs{X}$ has independent increments. Suppose in addition that $(U_1, U_2, \ldots)$ are identically distributed. Then the increment $X_n - X_k$ above has the same distribution as $\sum_{i=1}^{n-k} U_i = X_{n-k} - X_0$. Hence $\bs{X}$ has stationary increments.
Conversely, suppose that $\bs{X} = \{X_n: n \in \N\}$ has independent increments. Let $U_0 = X_0$ and $U_n = X_n - X_{n-1}$ for $n \in \N_+$. Then $X_n = \sum_{i=0}^n U_i$ for $n \in \N$. As before $\mathscr{F}_n = \sigma\{X_0, \ldots, X_n\} = \sigma\{U_0, \ldots, U_n\}$ for $n \in \N$. Since $\bs{X}$ has independent increments, $U_n$ is independent of $\mathscr{F}_{n-1}$ for $n \in \N_+$, so $(U_0, U_1, \ldots)$ are mutually independent. If in addition, $\bs{X}$ has stationary increments, $U_n = X_n - X_{n-1}$ has the same distribution as $X_1 - X_0 = U_1$ for $n \in \N_+$. Hence $(U_1, U_2, \ldots)$ are identically distributed.
Thus suppose that $\bs{U} = (U_0, U_1, \ldots)$ is a sequence of independent, real-valued random variables, with $(U_1, U_2, \ldots)$ identically distributed with common distribution $Q$. Then from our main result above, the partial sum process $\bs{X} = \{X_n: n \in \N\}$ associated with $\bs{U}$ is a homogeneous Markov process with one step transition kernel $P$ given by $P(x, A) = Q(A - x), \quad x \in S, \, A \in \mathscr{S}$ More generally, for $n \in \N$, the $n$-step transition kernel is $P^n(x, A) = Q^{*n}(A - x)$ for $x \in S$ and $A \in \mathscr{S}$. This Markov process is known as a random walk (although unfortunately, the term random walk is used in a number of other contexts as well). The idea is that at time $n$, the walker moves a (directed) distance $U_n$ on the real line, and these steps are independent and identically distributed. If $Q$ has probability density function $g$ with respect to the reference measure $\lambda$, then the one-step transition density is $p(x, y) = g(y - x), \quad x, \, y \in S$
Consider the random walk on $\R$ with steps that have the standard normal distribution. Give each of the following explicitly:
1. The one-step transition density.
2. The $n$-step transition density for $n \in \N_+$.
Proof
1. For $x \in \R$, $p(x, \cdot)$ is the normal PDF with mean $x$ and variance 1: $p(x, y) = \frac{1}{\sqrt{2 \pi}} \exp\left[-\frac{1}{2} (y - x)^2 \right]; \quad x, \, y \in \R$
2. For $x \in \R$, $p^n(x, \cdot)$ is the normal PDF with mean $x$ and variance $n$: $p^n(x, y) = \frac{1}{\sqrt{2 \pi n}} \exp\left[-\frac{1}{2 n} (y - x)^2\right], \quad x, \, y \in \R$
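Here is a simulation sketch (illustrative only) of part (b): starting the Gaussian random walk at $x$, the empirical distribution of $X_n$ should match the normal distribution with mean $x$ and variance $n$. The values of `x`, `n`, and the sample size below are arbitrary choices.

```python
import numpy as np
from math import erf, sqrt

# A simulation sketch (illustrative only): for the Gaussian random walk started at x, the
# n-step transition density p^n(x, .) is the normal density with mean x and variance n.
# The values of x, n, and the sample size are arbitrary choices.
rng = np.random.default_rng(1)
x, n, reps = 1.5, 10, 100_000

X_n = x + rng.normal(size=(reps, n)).sum(axis=1)

def normal_cdf(y, mean, var):
    return 0.5 * (1 + erf((y - mean) / sqrt(2 * var)))

# Compare the empirical CDF of X_n with the normal(x, n) CDF at a few points.
for y in (-2.0, 1.5, 5.0):
    print(y, (X_n <= y).mean(), normal_cdf(y, x, n))
```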
In continuous time, there are two processes that are particularly important, one with the discrete state space $\N$ and one with the continuous state space $\R$.
For $t \in [0, \infty)$, let $g_t$ denote the probability density function of the Poisson distribution with parameter $t$, and let $p_t(x, y) = g_t(y - x)$ for $x, \, y \in \N$. Then $\{p_t: t \in [0, \infty)\}$ is the collection of transition densities for a Feller semigroup on $\N$.
Proof
Recall that $g_t(n) = e^{-t} \frac{t^n}{n!}, \quad n \in \N$ We just need to show that $\{g_t: t \in [0, \infty)\}$ satisfies the semigroup property, and that the continuity result holds. But we already know that if $U, \, V$ are independent variables having Poisson distributions with parameters $s, \, t \in [0, \infty)$, respectively, then $U + V$ has the Poisson distribution with parameter $s + t$. That is, $g_s * g_t = g_{s+t}$. Moreover, $g_t \to g_0$ as $t \downarrow 0$.
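The semigroup property $g_s * g_t = g_{s+t}$ can also be checked numerically. The following sketch (with arbitrary choices of $s$, $t$, and the truncation level) compares a discrete convolution of truncated Poisson densities with the Poisson density with parameter $s + t$.

```python
import numpy as np
from scipy.stats import poisson

# A numerical check of the semigroup property g_s * g_t = g_{s+t} for the Poisson
# densities (the values of s, t, and the truncation point m are arbitrary choices).
s, t, m = 1.3, 2.5, 60
gs = poisson.pmf(np.arange(m), s)
gt = poisson.pmf(np.arange(m), t)
gst = poisson.pmf(np.arange(m), s + t)

conv = np.convolve(gs, gt)[:m]           # discrete convolution, truncated to {0, ..., m - 1}
print(np.max(np.abs(conv - gst)))        # should be numerically negligible
```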
So a Lévy process $\bs{N} = \{N_t: t \in [0, \infty)\}$ with these transition densities would be a Markov process with stationary, independent increments, and whose sample paths are right continuous and have left limits. We do know of such a process, namely the Poisson process with rate 1.
Open the Poisson experiment and set the rate parameter to 1 and the time parameter to 10. Run the experiment several times in single-step mode and note the behavior of the process.
For $t \in (0, \infty)$, let $g_t$ denote the probability density function of the normal distribution with mean 0 and variance $t$, and let $p_t(x, y) = g_t(y - x)$ for $x, \, y \in \R$. Then $\{p_t: t \in [0, \infty)\}$ is the collection of transition densities of a Feller semigroup on $\R$.
Proof
Recall that for $t \in (0, \infty)$, $g_t(z) = \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{z^2}{2 t}\right), \quad z \in \R$ We just need to show that $\{g_t: t \in [0, \infty)\}$ satisfies the semigroup property, and that the continuity result holds. But we already know that if $U, \, V$ are independent variables having normal distributions with mean 0 and variances $s, \, t \in (0, \infty)$, respectively, then $U + V$ has the normal distribution with mean 0 and variance $s + t$. That is, $g_s * g_t = g_{s+t}$. Moreover, we also know that the normal distribution with variance $t$ converges to point mass at 0 as $t \downarrow 0$.
So a Lévy process $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $\R$ with these transition densities would be a Markov process with stationary, independent increments, and whose sample paths are continuous from the right and have left limits. In fact, there exists such a process with continuous sample paths. This process is Brownian motion, a process important enough to have its own chapter.
Run the simulation of standard Brownian motion and note the behavior of the process.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Our goal in this section is to continue the broad sketch of the general theory of Markov processes. As with the last section, some of the statements are not completely precise and rigorous, because we want to focus on the main ideas without being overly burdened by technicalities. If you are a new student of probability, or are primarily interested in applications, you may want to skip ahead to the study of discrete-time Markov chains.
Preliminaries
Basic Definitions
As usual, our starting point is a probability space $(\Omega, \mathscr{F}, \P)$, so that $\Omega$ is the set of outcomes, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr{F})$. The set of times $T$ is either $\N$, discrete time with the discrete topology, or $[0, \infty)$, continuous time with the usual Euclidean topology. The time set $T$ is given the Borel $\sigma$-algebra $\mathscr{T}$, which is just the power set if $T = \N$, and then the time space $(T, \mathscr{T})$ is given the usual measure, counting measure in the discrete case and Lebesgue measure in the continuous case. The set of states $S$ has an LCCB topology (locally compact, Hausdorff, with a countable base), and is also given the Borel $\sigma$-algebra $\mathscr{S}$. Recall that to say that the state space is discrete means that $S$ is countable with the discrete topology, so that $\mathscr{S}$ is the power set of $S$. The topological assumptions mean that the state space $(S, \mathscr{S})$ is nice enough for a rich mathematical theory and general enough to encompass the most important applications. There is often a natural Borel measure $\lambda$ on $(S, \mathscr{S})$, counting measure $\#$ if $S$ is discrete, and for example, Lebesgue measure if $S = \R^k$ for some $k \in \N_+$.
Recall also that there are several spaces of functions on $S$ that are important. Let $\mathscr{B}$ denote the set of bounded, measurable functions $f: S \to \R$. Let $\mathscr{C}$ denote the set of bounded, continuous functions $f: S \to \R$, and let $\mathscr{C}_0$ denote the set of continuous functions $f: S \to \R$ that vanish at $\infty$ in the sense that for every $\epsilon \gt 0$, there exists a compact set $K \subseteq S$ such that $\left|f(x)\right| \lt \epsilon$ for $x \in K^c$. These are all vector spaces under the usual (pointwise) addition and scalar multiplication, and $\mathscr{C}_0 \subseteq \mathscr{C} \subseteq \mathscr{B}$. The supremum norm, defined by $\left\| f \right\| = \sup\{\left|f(x)\right|: x \in S\}$ for $f \in \mathscr{B}$, is the norm that is used on these spaces.
Suppose now that $\bs{X} = \{X_t: t \in T\}$ is a time-homogeneous Markov process with state space $(S, \mathscr{S})$ defined on the probability space $(\Omega, \mathscr{F}, \P)$. As before, we also assume that we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, that is, an increasing family of sub $\sigma$-algebras of $\mathscr{F}$, indexed by the time space, with the property that $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. Intuitively, $\mathscr{F}_t$ is the collection of events up to time $t \in T$.
As usual, we let $P_t$ denote the transition probability kernel for an increase in time of size $t \in T$. Thus $P_t(x, A) = \P(X_t \in A \mid X_0 = x), \quad x \in S, \, A \in \mathscr{S}$ Recall that for $t \in T$, the transition kernel $P_t$ defines two operators, on the left with measures and on the right with functions. So, if $\mu$ is a measure on $(S, \mathscr{S})$ then $\mu P_t$ is the measure on $(S, \mathscr{S})$ given by $\mu P_t(A) = \int_S \mu(dx) P_t(x, A), \quad A \in \mathscr{S}$ If $\mu$ is the distribution of $X_0$ then $\mu P_t$ is the distribution of $X_t$ for $t \in T$. If $f \in \mathscr{B}$ then $P_t f \in \mathscr{B}$ is defined by $P_t f(x) = \int_S P_t(x, dy) f(y) = \E\left[f(X_t) \mid X_0 = x\right]$ Recall that the collection of transition operators $\bs{P} = \{P_t: t \in T\}$ is a semigroup because $P_s P_t = P_{s+t}$ for $s, \, t \in T$. Just about everything in this section is defined in terms of the semigroup $\bs{P}$, which is one of the main analytic tools in the study of Markov processes.
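For a finite state space, the two operators defined by a transition kernel reduce to matrix products, which the following sketch illustrates (the stochastic matrix and the vectors $\mu$ and $f$ are arbitrary choices, not from the text).

```python
import numpy as np

# A minimal illustration (not from the text) of the two operators defined by a transition
# kernel when S is finite: with S = {0, 1, 2}, a transition kernel is a stochastic matrix,
# mu P_t is a row vector times the matrix, and P_t f is the matrix times a column vector.
# The matrix P and the vectors mu and f below are arbitrary choices.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
mu = np.array([1.0, 0.0, 0.0])     # an initial distribution (point mass at state 0)
f = np.array([2.0, -1.0, 4.0])     # a bounded function on S

print(mu @ P)                      # mu P: the distribution of the state at the later time
print(P @ f)                       # P f: x -> E[f(X_t) | X_0 = x]
print(mu @ P @ f)                  # E[f(X_t)] when X_0 has distribution mu
```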
Feller Markov Processes
We make the same assumptions as in the Introduction. Here is a brief review:
We assume that the Markov process $\bs{X} = \{X_t: t \in T\}$ satisfies the following properties (and hence is a Feller Markov process):
1. For $t \in T$ and $y \in S$, the distribution of $X_t$ given $X_0 = x$ converges to the distribution of $X_t$ given $X_0 = y$ as $x \to y$.
2. Given $X_0 = x \in S$, $X_t$ converges in probability to $x$ as $t \downarrow 0$.
Part (a) is an assumption on continuity in space, while part (b) is an assumption on continuity in time. If $S$ is discrete then (a) automatically holds, and if $T$ is discrete then (b) automatically holds. As we will see, the Feller assumptions are sufficient for a very nice mathematical theory, and yet are general enough to encompass the most important continuous-time Markov processes.
The process $\bs{X} = \{X_t: t \in T\}$ has the following properties:
1. There is a version of $\bs{X}$ such that $t \mapsto X_t$ is continuous from the right and has left limits.
2. $\bs{X}$ is a strong Markov process relative to $\mathfrak{F}^0_+$, the right-continuous refinement of the natural filtration.
The Feller assumptions on the Markov process have equivalent formulations in terms of the transition semigroup.
The transition semigroup $\bs{P} = \{P_t: t \in T\}$ has the following properties:
1. If $f \in \mathscr{C}_0$ and $t \in T$ then $P_t f \in \mathscr{C}_0$
2. If $f \in \mathscr{C}_0$ and $x \in S$ then $P_t f(x) \to f(x)$ as $t \downarrow 0$.
As before, part (a) is a condition on continuity in space, while part (b) is a condition on continuity in time. Once again, (a) is trivial if $S$ is discrete, and (b) trivial if $T$ is discrete. The first condition means that $P_t$ is a linear operator on $\mathscr{C}_0$ (as well as being a linear operator on $\mathscr{B}$). The second condition leads to a stronger continuity result.
For $f \in \mathscr{C}_0$, the mapping $t \mapsto P_t f$ is continuous on $T$. That is, for $t \in T$, $\|P_s f - P_t f\| = \sup\{\left|P_s f(x) - P_t f(x) \right|: x \in S\} \to 0 \text{ as } s \to t$
Our interest in this section is primarily the continuous time case. However, we start with the discrete time case since the concepts are clearer and simpler, and we can avoid some of the technicalities that inevitably occur in continuous time.
Discrete Time
Suppose that $T = \N$, so that time is discrete. Recall that the transition kernels are just powers of the one-step kernel. That is, we let $P = P_1$ and then $P_n = P^n$ for $n \in \N$.
Potential Operators
For $\alpha \in (0, 1]$, the $\alpha$-potential kernel $R_\alpha$ of $\bs{X}$ is defined as follows: $R_\alpha(x, A) = \sum_{n=0}^\infty \alpha^n P^n(x, A), \quad x \in S, \, A \in \mathscr{S}$
1. The special case $R = R_1$ is simply the potential kernel of $\bs{X}$.
2. For $x \in S$ and $A \in \mathscr{S}$, $R(x, A)$ is the expected number of visits of $\bs{X}$ to $A$, starting at $x$.
Proof
The function $x \mapsto R_\alpha(x, A)$ from $S$ to $[0, \infty)$ is measurable for $A \in \mathscr{S}$ since $x \mapsto P^n(x, A)$ is measurable for each $n \in \N$. The mapping $A \mapsto R_\alpha(x, A)$ is a positive measure on $\mathscr{S}$ for $x \in S$ since $A \mapsto P^n(x, A)$ is a probability measure for each $n \in \N$. Finally, the interpretation of $R(x, A)$ for $x \in S$ and $A \in \mathscr{S}$ comes from interchanging sum and expected value, which is allowed since the terms are nonnegative: $R(x, A) = \sum_{n=0}^\infty P^n(x, A) = \sum_{n=0}^\infty \E[\bs{1}(X_n \in A) \mid X_0 = x] = \E\left( \sum_{n=0}^\infty \bs{1}(X_n \in A) \biggm| X_0 = x\right) = \E[\#\{n \in \N: X_n \in A\} \mid X_0 = x]$
Note that it's quite possible that $R(x, A) = \infty$ for some $x \in S$ and $A \in \mathscr{S}$. In fact, knowing when this is the case is of considerable importance in the study of Markov processes. As with all kernels, the potential kernel $R_\alpha$ defines two operators, operating on the right on functions, and operating on the left on positive measures. For the right potential operator, if $f: S \to \R$ is measurable then $R_\alpha f(x) = \sum_{n=0}^\infty \alpha^n P^n f(x) = \sum_{n=0}^\infty \alpha^n \int_S P^n(x, dy) f(y) = \sum_{n=0}^\infty \alpha^n \E[f(X_n) \mid X_0 = x], \quad x \in S$ assuming as usual that the expected values and the infinite series make sense. This will be the case, in particular, if $f$ is nonnegative or if $\alpha \in (0, 1)$ and $f \in \mathscr{B}$.
If $\alpha \in (0, 1)$, then $R_\alpha(x, S) = \frac{1}{1 - \alpha}$ for all $x \in S$.
Proof
Using geometric series, $R_\alpha(x, S) = \sum_{n=0}^\infty \alpha^n P^n(x, S) = \sum_{n=0}^\infty \alpha^n = \frac{1}{1 - \alpha}$
It follows that for $\alpha \in (0, 1)$, the right operator $R_\alpha$ is a bounded, linear operator on $\mathscr{B}$ with $\left\|R_\alpha \right\| = \frac{1}{1 - \alpha}$. It also follows that $(1 - \alpha) R_\alpha$ is a probability kernel. There is a nice interpretation of this kernel.
If $\alpha \in (0, 1)$ then $(1 - \alpha) R_\alpha(x, \cdot)$ is the conditional distribution of $X_N$ given $X_0 = x \in S$, where $N$ is independent of $\bs{X}$ and has the geometric distribution on $\N$ with parameter $1 - \alpha$.
Proof
Suppose that $x \in S$ and $A \in \mathscr{S}$. Conditioning on $N$ gives $\P(X_N \in A \mid X_0 = x) = \sum_{n=0}^\infty \P(N = n) \P(X_N \in A \mid N = n, X_0 = x)$ But by the substitution rule and the assumption of independence, $\P(X_N \in A \mid N = n, X_0 = x) = \P(X_n \in A \mid N = n, X_0 = x) = \P(X_n \in A \mid X_0 = x) = P^n(x, A)$ Since $N$ has the geometric distribution on $\N$ with parameter $1 - \alpha$ we have $\P(N = n) = (1 - \alpha) \alpha^n$ for $n \in \N$. Substituting gives $\P(X_N \in A \mid X_0 = x) = \sum_{n=0}^\infty (1 - \alpha) \alpha^n P^n(x, A) = (1 - \alpha) R_\alpha(x, A)$
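Here is a simulation sketch of the last result for a small, hypothetical chain (the transition matrix, the starting state, and the value of $\alpha$ are arbitrary choices): the distribution $(1 - \alpha) R_\alpha(x, \cdot)$, computed from the defining series, should match the empirical distribution of $X_N$.

```python
import numpy as np

# A simulation sketch of the last result for a small, hypothetical chain (the transition
# matrix P, the starting state x, and alpha are arbitrary choices, not from the text).
rng = np.random.default_rng(2)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
alpha, x, reps = 0.8, 0, 50_000

# (1 - alpha) R_alpha(x, .) from the defining series, truncated at a large index
R = sum(alpha ** n * np.linalg.matrix_power(P, n) for n in range(200))
print((1 - alpha) * R[x])

# Empirical distribution of X_N, where N is geometric on {0, 1, 2, ...} with
# success parameter 1 - alpha and independent of the chain
counts = np.zeros(3)
for _ in range(reps):
    N = rng.geometric(1 - alpha) - 1     # numpy's geometric is supported on {1, 2, ...}
    state = x
    for _ in range(N):
        state = rng.choice(3, p=P[state])
    counts[state] += 1
print(counts / reps)
```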
So $(1 - \alpha)R_\alpha$ is a transition probability kernel, just as $P_n$ is a transition probability kernel, but corresponding to the random time $N$ (with $\alpha \in (0, 1)$ as a parameter), rather than the deterministic time $n \in \N$. An interpretation of the potential kernel $R_\alpha$ for $\alpha \in (0, 1)$ can be also given in economic terms. Suppose that $A \in \mathscr{S}$ and that we receive one monetary unit each time the process $\bs{X}$ visits $A$. Then as above, $R(x, A)$ is the expected total amount of money we receive, starting at $x \in S$. However, typically money that we will receive at times distant in the future has less value to us now than money that we will receive soon. Specifically suppose that a monetary unit received at time $n \in \N$ has a present value of $\alpha^n$, where $\alpha \in (0, 1)$ is an inflation factor (sometimes also called a discount factor). Then $R_\alpha(x, A)$ gives the expected, total, discounted amount we will receive, starting at $x \in S$. A bit more generally, if $f \in \mathscr{B}$ is a reward function, so that $f(x)$ is the reward (or cost, depending on the sign) that we receive when we visit state $x \in S$, then for $\alpha \in (0, 1)$, $R_\alpha f(x)$ is the expected, total, discounted reward, starting at $x \in S$.
For the left potential operator, if $\mu$ is a positive measure on $\mathscr{S}$ then $\mu R_\alpha(A) = \sum_{n=0}^\infty \alpha^n \mu P^n(A) = \sum_{n=0}^\infty \alpha^n \int_S \mu(dx) P^n(x, A), \quad A \in \mathscr{S}$ In particular, if $\mu$ is a probability measure and $X_0$ has distribution $\mu$ then $\mu P^n$ is the distribution of $X_n$ for $n \in \N$, so from the last result, $(1 - \alpha) \mu R_\alpha$ is the distribution of $X_N$ where again, $N$ is independent of $\bs{X}$ and has the geometric distribution on $\N$ with parameter $1 - \alpha$. The family of potential kernels gives the same information as the family of transition kernels.
The potential kernels $\bs{R} = \{R_\alpha: \alpha \in (0, 1)\}$ completely determine the transition kernels $\bs{P} = \{P_n: n \in \N\}$.
Proof
Note that for $x \in S$ and $A \in \mathscr{S}$, the function $\alpha \mapsto R_\alpha(x, A)$ is a power series in $\alpha$ with coefficients $n \mapsto P^n(x, A)$. In the language of combinatorics, $\alpha \mapsto R_\alpha(x, A)$ is the ordinary generating function of the sequence $n \mapsto P^n(x, A)$. As noted above, this power series has radius of convergence at least 1, so we can extend the domain to $\alpha \in (-1, 1)$. Thus, given the potential kernels, we can recover the transition kernels by taking derivatives and evaluating at 0: $P^n(x, A) = \frac{1}{n!}\left[\frac{d^n}{d\alpha^n} R_\alpha(x, A) \right]_{\alpha = 0}$
Of course, it's really only necessary to determine $P$, the one step transition kernel, since the other transition kernels are powers of $P$. In any event, it follows that the kernels $\bs{R} = \{R_\alpha: \alpha \in (0, 1)\}$, along with the initial distribution, completely determine the finite dimensional distributions of the Markov process $\bs{X}$. The potential kernels commute with each other and with the transition kernels.
Suppose that $\alpha, \, \beta \in (0, 1]$ and $k \in \N$. Then (as kernels)
1. $P^k R_\alpha = R_\alpha P^k = \sum_{n=0}^\infty \alpha^n P^{n+k}$
2. $R_\alpha R_\beta = R_\beta R_\alpha = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^{m+n}$
Proof
Suppose that $f \in \mathscr{B}$ is nonnegative. The interchange of the sums with the kernel operation is allowed since the kernels are nonnegative. The other tool used is the semigroup property.
1. Directly $R_\alpha P^k f = \sum_{n=0}^\infty \alpha^n P^n P^k f = \sum_{n=0}^\infty \alpha^n P^{n+k} f$ The other direction requires an interchange. $P^k R_\alpha f = P^k \sum_{n=0}^\infty \alpha^n P^n f = \sum_{n=0}^\infty \alpha^n P^k P^n f = \sum_{n=0}^\infty \alpha^n P^{n+k} f$
2. First, $R_\alpha R_\beta f = \sum_{m=0}^\infty \alpha^m P^m R_\beta f = \sum_{m=0}^\infty \alpha^m P^m \left(\sum_{n=0}^\infty \beta^n P^n f\right) = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^m P^n f = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^{m+n} f$ The other direction is similar.
The same identities hold for the right operators on the entire space $\mathscr{B}$, with the additional restrictions that $\alpha \lt 1$ and $\beta \lt 1$. The fundamental equation that relates the potential kernels is given next.
If $\alpha, \, \beta \in (0, 1]$ with $\alpha \le \beta$ then (as kernels), $\beta R_\beta = \alpha R_\alpha + (\beta - \alpha) R_\alpha R_\beta$
Proof
If $\alpha = \beta$ the equation is trivial, so assume $\alpha \lt \beta$. Suppose that $f \in \mathscr{B}$ is nonnegative. From the previous result, $R_\alpha R_\beta f = \sum_{j=0}^\infty \sum_{k=0}^\infty \alpha^j \beta^k P^{j+k} f$ Changing variables to sum over $n = j + k$ and $j$ gives $R_\alpha R_\beta f = \sum_{n=0}^\infty \sum_{j=0}^n \alpha^j \beta^{n-j} P^n f = \sum_{n=0}^\infty \sum_{j=0}^n \left(\frac{\alpha}{\beta}\right)^j \beta^n P^n f = \sum_{n=0}^\infty \frac{1 - \left(\frac{\alpha}{\beta}\right)^{n+1}}{1 - \frac{\alpha}{\beta}} \beta^n P^n f$ Simplifying gives $R_\alpha R_\beta f = \frac{1}{\beta - \alpha} (\beta R_\beta f - \alpha R_\alpha f)$ Note that since $\alpha \lt 1$, $R_\alpha f$ is finite, so we don't have to worry about the dreaded indeterminate form $\infty - \infty$.
The same identity holds for the right operators on the entire space $\mathscr{B}$, with the additional restriction that $\beta \lt 1$.
If $\alpha \in (0, 1]$, then (as kernels), $I + \alpha R_\alpha P = I + \alpha P R_\alpha = R_\alpha$.
Proof
Suppose that $f \in \mathscr{B}$ is nonnegative. From the result above, $(I + \alpha R_\alpha P) f = (I + \alpha P R_\alpha) f = f + \sum_{n=0}^\infty \alpha^{n+1} P^{n+1} f = \sum_{n = 0}^\infty \alpha^n P^n f = R_\alpha f$
The same identity holds for the right operators on the entire space $\mathscr{B}$, with the additional restriction that $\alpha \lt 1$. This leads to the following important result:
If $\alpha \in (0, 1)$, then as operators on the space $\mathscr{B}$,
1. $R_\alpha = (I - \alpha P)^{-1}$
2. $P = \frac{1}{\alpha}\left(I - R_\alpha^{-1}\right)$
Proof
The operators are bounded, so we can subtract. The identity $I + \alpha R_\alpha P = R_\alpha$ leads to $R_\alpha(I - \alpha P) = I$ and the identity $I + \alpha P R_\alpha = R_\alpha$ leads to $(I - \alpha P) R_\alpha = I$. Hence (a) holds. Part (b) follows from (a).
This result shows again that the potential operator $R_\alpha$ determines the transition operator $P$.
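For a finite state space, these identities can be checked directly with matrices. The sketch below (with an arbitrary transition matrix and an arbitrary $\alpha$) computes $R_\alpha$ both as $(I - \alpha P)^{-1}$ and as a truncated series, and then recovers $P$ from $R_\alpha$.

```python
import numpy as np

# A numerical illustration (the matrix P and the value of alpha are arbitrary choices):
# for a finite state space, R_alpha = (I - alpha P)^{-1}, and P can be recovered from
# R_alpha as P = (I - R_alpha^{-1}) / alpha.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.3, 0.6]])
alpha = 0.9
I = np.eye(3)

R_alpha = np.linalg.inv(I - alpha * P)
R_series = sum(alpha ** n * np.linalg.matrix_power(P, n) for n in range(500))
print(np.max(np.abs(R_alpha - R_series)))         # the inverse agrees with the series

P_recovered = (I - np.linalg.inv(R_alpha)) / alpha
print(np.max(np.abs(P_recovered - P)))            # and P is recovered from R_alpha
```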
Examples and Applications
Our first example considers the binomial process as a Markov process.
Let $\bs{I} = \{I_n: n \in \N_+\}$ be a sequence of Bernoulli trials with success parameter $p \in (0, 1)$. Define the Markov process $\bs{X} = \{X_n: n \in \N\}$ by $X_n = X_0 + \sum_{k=1}^n I_k$ where $X_0$ takes values in $\N$ and is independent of $\bs{I}$.
1. For $n \in \N$, show that the transition probability matrix $P^n$ of $\bs{X}$ is given by $P^n(x, y) = \binom{n}{y - x} p^{y - x} (1 - p)^{n - y + x}, \quad x \in \N, \, y \in \{x, x + 1, \ldots, x + n\}$
2. For $\alpha \in (0, 1]$, show that the potential matrix $R_\alpha$ of $\bs{X}$ is given by $R_\alpha(x, y) = \frac{1}{1 - \alpha + \alpha p} \left(\frac{\alpha p}{1 - \alpha + \alpha p}\right)^{y - x}, \quad x \in \N, \, y \in \{x, x + 1, \ldots\}$
3. For $\alpha \in (0, 1)$ and $x \in \N$, identify the probability distribution defined by $(1 - \alpha) R_\alpha(x, \cdot)$.
4. For $x, \, y \in \N$ with $x \le y$, interpret $R(x, y)$, the expected time in $y$ starting in $x$, in the context of the process $\bs{X}$.
Solutions
Recall that $\bs{X}$ is a Markov process since it has stationary, independent increments.
1. Note that for $n, \, x \in \N$, $P^n(x, \cdot)$ is the (discrete) PDF of $x + \sum_{k=1}^n I_k$. The result follows since the sum of the indicator variables has the binomial distribution with parameters $n$ and $p$.
2. Let $\alpha \in (0, 1]$ and let $x, \, y \in \N$ with $x \le y$. Then \begin{align*} R_\alpha(x, y) & = \sum_{n=0}^\infty \alpha^n P^n(x, y) = \sum_{n = y - x}^\infty \alpha^n \binom{n}{y - x} p^{y-x} (1 - p)^{n - y + x} \ & = (\alpha p)^{y - x} \sum_{n = y - x}^\infty \binom{n}{y - x}[\alpha (1 - p)]^{n - y + x} = \frac{(\alpha p)^{y - x}}{[1 - \alpha (1 - p)]^{y - x + 1}} \end{align*} Simplifying gives the result.
3. For $\alpha \in (0, 1)$, $(1 - \alpha) R_\alpha(x, y) = \frac{1 - \alpha}{1 - \alpha + \alpha p} \left(\frac{\alpha p}{1 - \alpha + \alpha p}\right)^{y - x}$ As a function of $y$ for fixed $x$, this is the PDF of $x + Y_\alpha$ where $Y_\alpha$ has the geometric distribution on $\N$ with parameter $\frac{1 - \alpha}{1 - \alpha + \alpha p}$.
4. Note that $R(x, y) = 1 / p$ for $x, \, y \in \N$ with $x \le y$. Starting in state $x$, the process eventually reaches $y$ with probability 1. The process remains in state $y$ for a geometrically distributed time, with parameter $p$. The mean of this distribution is $1 / p$.
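The closed form for the potential matrix in part (b) is easy to check numerically against the defining series. The following sketch does so for arbitrary choices of $p$, $\alpha$, $x$, and $y$.

```python
import numpy as np
from scipy.special import comb

# A numerical check of parts (a) and (b) above (the values of p, alpha, x, y, and the
# truncation point of the series are arbitrary choices).
p, alpha = 0.4, 0.7
x, y = 3, 6
k = y - x

# R_alpha(x, y) from the series sum_n alpha^n P^n(x, y), truncated
series = sum(alpha ** n * comb(n, k) * p ** k * (1 - p) ** (n - k) for n in range(k, 400))
# R_alpha(x, y) from the closed form in part (b)
closed = (1 / (1 - alpha + alpha * p)) * (alpha * p / (1 - alpha + alpha * p)) ** k
print(series, closed)                    # the two values should agree
```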
Continuous Time
With the discrete-time setting as motivation, we now turn the more important continuous-time case where $T = [0, \infty)$.
Potential Kernels
For $\alpha \in [0, \infty)$, the $\alpha$-potential kernel $U_\alpha$ of $\bs{X}$ is defined as follows: $U_\alpha(x, A) = \int_0^\infty e^{-\alpha t} P_t(x, A) \, dt, \quad x \in S, \, A \in \mathscr{S}$
1. The special case $U = U_0$ is simply the potential kernel of $\bs{X}$.
2. For $x \in S$ and $A \in \mathscr{S}$, $U(x, A)$ is the expected amount of time that $\bs{X}$ spends in $A$, starting at $x$.
3. The family of kernels $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$ is known as the resolvent of $\bs{X}$.
Proof
Since $\bs{P} = \{P_t: t \in T\}$ is a Feller semigroup of transition operators, the mapping $(t, x) \mapsto P_t(x, A)$ from $[0, \infty) \times S$ to $[0, 1]$ is jointly measurable for $A \in \mathscr{S}$. Thus, $U_\alpha(x, A)$ makes sense for $x \in S$ and $A \in \mathscr{S}$ and $x \mapsto U_\alpha(x, A)$ from $S$ to $[0, \infty)$ is measurable for $A \in \mathscr{S}$. That $A \mapsto U_\alpha(x, A)$ is a measure on $\mathscr{S}$ follows from the usual interchange of sum and integral, via Fubini's theorem: Suppose that $\{A_j: j \in J\}$ is a countable collection of disjoint sets in $\mathscr{S}$, and let $A = \bigcup_{j \in J} A_j$. Then \begin{align*} U_\alpha(x, A) & = \int_0^\infty e^{-\alpha t} P_t(x, A) \, dt = \int_0^\infty \left[\sum_{j \in J} e^{-\alpha t} P_t(x, A_j)\right] \, dt\ & = \sum_{j \in J} \int_0^\infty e^{-\alpha t} P_t(x, A_j) \, dt = \sum_{j \in J} U_\alpha(x, A_j) \end{align*} Finally, the interpretation of $U(x, A)$ for $x \in S$ and $A \in \mathscr{S}$ is another interchange of integrals: $U(x, A) = \int_0^\infty P_t(x, A) \, dt = \int_0^\infty \E[\bs{1}(X_t \in A) \mid X_0 = x] \, dt = \E\left( \int_0^\infty \bs{1}(X_t \in A) \, dt \biggm| X_0 = x\right)$ The inside integral is the Lebesgue measure of $\{t \in [0, \infty): X_t \in A\}$.
As with discrete time, it's quite possible that $U(x, A) = \infty$ for some $x \in S$ and $A \in \mathscr{S}$, and knowing when this is the case is of considerable interest. As with all kernels, the potential kernel $U_\alpha$ defines two operators, operating on the right on functions, and operating on the left on positive measures. If $f: S \to \R$ is measurable then, giving the right potential operator in its many forms, \begin{align*} U_\alpha f(x) & = \int_S U_\alpha(x, dy) f(y) = \int_0^\infty e^{-\alpha t} P_t f(x) \, dt \ & = \int_0^\infty e^{-\alpha t} \int_S P_t(x, dy) f(y) \, dt = \int_0^\infty e^{-\alpha t} \E[f(X_t) \mid X_0 = x] \, dt, \quad x \in S \end{align*} assuming that the various integrals make sense. This will be the case in particular if $f$ is nonnegative, or if $f \in \mathscr{B}$ and $\alpha \gt 0$.
If $\alpha \gt 0$, then $U_\alpha(x, S) = \frac{1}{\alpha}$ for all $x \in S$.
Proof
For $x \in S$, $U_\alpha(x, S) = \int_0^\infty e^{-\alpha t} P_t(x, S) \, dt = \int_0^\infty e^{-\alpha t} dt = \frac{1}{\alpha}$
It follows that for $\alpha \in (0, \infty)$, the right potential operator $U_\alpha$ is a bounded, linear operator on $\mathscr{B}$ with $\|U_\alpha\| = \frac{1}{\alpha}$. It also follows that $\alpha U_\alpha$ is a probability kernel. This kernel has a nice interpretation.
If $\alpha \gt 0$ then $\alpha U_\alpha (x, \cdot)$ is the conditional distribution of $X_\tau$ given $X_0 = x \in S$, where $\tau$ is independent of $\bs{X}$ and has the exponential distribution on $[0, \infty)$ with parameter $\alpha$.
Proof
Suppose that $x \in S$ and $A \in \mathscr{S}$. The random time $\tau$ has PDF $f(t) = \alpha e^{-\alpha t}$ for $t \in [0, \infty)$. Hence, conditioning on $\tau$ gives $\P(X_\tau \in A \mid X_0 = x) = \int_0^\infty \alpha e^{-\alpha t} \P(X_\tau \in A \mid \tau = t, X_0 = x) \, dt$ But by the substitution rule and the assumption of independence, $\P(X_\tau \in A \mid \tau = t, X_0 = x) = \P(X_t \in A \mid \tau = t, X_0 = x) = \P(X_t \in A \mid X_0 = x) = P_t(x, A)$ Substituting gives $\P(X_\tau \in A \mid X_0 = x) = \int_0^\infty \alpha e^{-\alpha t} P_t(x, A) \, dt = \alpha U_\alpha(x, A)$
So $\alpha U_\alpha$ is a transition probability kernel, just as $P_t$ is a transition probability kernel, but corresponding to the random time $\tau$ (with $\alpha \in (0, \infty)$ as a parameter), rather than the deterministic time $t \in [0, \infty)$. As in the discrete case, the potential kernel can also be interpreted in economic terms. Suppose that $A \in \mathscr{S}$ and that we receive money at a rate of one unit per unit time whenever the process $\bs{X}$ is in $A$. Then $U(x, A)$ is the expected total amount of money that we receive, starting in state $x \in S$. But again, money that we receive later is of less value to us now than money that we will receive sooner. Specifically, suppose that one monetary unit at time $t \in [0, \infty)$ has a present value of $e^{-\alpha t}$ where $\alpha \in (0, \infty)$ is the inflation factor or discount factor. Then $U_\alpha(x, A)$ is the total, expected, discounted amount that we receive, starting in $x \in S$. A bit more generally, suppose that $f \in \mathscr{B}$ and that $f(x)$ is the reward (or cost, depending on the sign) per unit time that we receive when the process is in state $x \in S$. Then $U_\alpha f(x)$ is the expected, total, discounted reward, starting in state $x \in S$.
For the left potential operator, if $\mu$ is a positive measure on $\mathscr{S}$ then \begin{align*}\mu U_\alpha(A) & = \int_S \mu(dx) U_\alpha(x, A) = \int_0^\infty e^{-\alpha t} \mu P_t (A) \, dt\ & = \int_0^\infty e^{-\alpha t} \left[\int_S \mu(dx) P_t(x, A)\right] dt = \int_0^\infty e^{-\alpha t} \left[\int_S \mu(dx) \P(X_t \in A \mid X_0 = x) \right] dt, \quad A \in \mathscr{S} \end{align*} In particular, suppose that $\alpha \gt 0$ and that $\mu$ is a probability measure and $X_0$ has distribution $\mu$. Then $\mu P_t$ is the distribution of $X_t$ for $t \in [0, \infty)$, and hence from the last result, $\alpha \mu U_\alpha$ is the distribution of $X_\tau$, where again, $\tau$ is independent of $\bs{X}$ and has the exponential distribution on $[0, \infty)$ with parameter $\alpha$. The family of potential kernels gives the same information as the family of transition kernels.
The resolvent $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$ completely determines the family of transition kernels $\bs{P} = \{P_t: t \in (0, \infty)\}$.
Proof
Note that for $x \in S$ and $A \in \mathscr{S}$, the function $\alpha \mapsto U_\alpha(x, A)$ on $(0, \infty)$ is the Laplace transform of the function $t \mapsto P_t(x, A)$ on $[0, \infty)$. The Laplace transform of a function determines the function completely.
It follows that the resolvent $\{U_\alpha: \alpha \in [0, \infty)\}$, along with the initial distribution, completely determine the finite dimensional distributions of the Markov process $\bs{X}$. This is much more important here in the continuous-time case than in the discrete-time case, since the transition kernels $P_t$ cannot be generated from a single transition kernel. The potential kernels commute with each other and with the transition kernels.
Suppose that $\alpha, \, \beta, \, t \in [0, \infty)$. Then (as kernels),
1. $P_t U_\alpha = U_\alpha P_t = \int_0^\infty e^{-\alpha s} P_{s+t} ds$
2. $U_\alpha U_\beta = U_\beta U_\alpha = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s+t} \, ds \, dt$
Proof
Suppose that $f \in \mathscr{B}$ is nonnegative. The interchanges of operators and integrals below are interchanges of integrals, and are justified since the integrands are nonnegative. The other tool used is the semigroup property of $\bs{P} = \{P_t: t \in [0, \infty)\}$.
1. Directly, $U_\alpha P_t f = \int_0^\infty e^{-\alpha s} P_s P_t f \, ds = \int_0^\infty e^{-\alpha s} P_{s+t} f \, ds$ The other direction involves an interchange. $P_t U_\alpha f = P_t \int_0^\infty e^{-\alpha s} P_s f \, ds = \int_0^\infty e^{-\alpha s} P_t P_s f \, ds = \int_0^\infty e^{-\alpha s} P_{s+t} f \, ds$
2. First \begin{align*} U_\alpha U_\beta f & = \int_0^\infty e^{-\alpha s} P_s U_\beta f \, ds = \int_0^\infty e^{-\alpha s} P_s \int_0^\infty e^{-\beta t} P_t f \, dt \ & = \int_0^\infty e^{-\alpha s} \int_0^\infty e^{-\beta t} P_s P_t f \, ds \, dt = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s+t} f \, ds \, dt \end{align*} The other direction is similar.
The same identities hold for the right operators on the entire space $\mathscr{B}$ under the additional restriction that $\alpha \gt 0$ and $\beta \gt 0$. The fundamental equation that relates the potential kernels, known as the resolvent equation, is given in the next theorem:
If $\alpha, \, \beta \in [0, \infty)$ with $\alpha \le \beta$ then (as kernels) $U_\alpha = U_\beta + (\beta - \alpha) U_\alpha U_\beta$.
Proof
If $\alpha = \beta$ the equation is trivial, so assume $\alpha \lt \beta$. Suppose that $f \in \mathscr{B}$ is nonnegative. From the previous result, $U_\alpha U_\beta f = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s + t} f \, dt \, ds$ The transformation $u = s + t, \, v = s$ maps $[0, \infty)^2$ one-to-one onto $\{(u, v) \in [0, \infty)^2: u \ge v\}$. The inverse transformation is $s = v, \, t = u - v$ with Jacobian $-1$. Hence we have \begin{align*} U_\alpha U_\beta f & = \int_0^\infty \int_0^u e^{-\alpha v} e^{-\beta(u - v)} P_u f \, dv \, du = \int_0^\infty \left(\int_0^u e^{(\beta - \alpha) v} dv\right) e^{-\beta u} P_u f \, du \ & = \frac{1}{\beta - \alpha} \int_0^\infty \left[e^{(\beta - \alpha) u} - 1\right] e^{-\beta u} P_u f du\ & = \frac{1}{\beta - \alpha}\left(\int_0^\infty e^{-\alpha u} P_u f \, du - \int_0^\infty e^{-\beta u} P_u f \, du\right) = \frac{1}{\beta - \alpha}\left(U_\alpha f - U_\beta f\right) \end{align*} Simplifying gives the result. Note that $U_\beta f$ is finite since $\beta \gt 0$.
The same identity holds for the right potential operators on the entire space $\mathscr{B}$, under the additional restriction that $\alpha \gt 0$. For $\alpha \in (0, \infty)$, $U_\alpha$ is also an operator on the space $\mathscr{C}_0$.
If $\alpha \in (0, \infty)$ and $f \in \mathscr{C}_0$ then $U_\alpha f \in \mathscr{C}_0$.
Proof
Suppose that $f \in \mathscr{C}_0$ and that $(x_1, x_2, \ldots)$ is a sequence in $S$. Then $P_t f \in \mathscr{C}_0$ for $t \in [0, \infty)$. Hence if $x_n \to x \in S$ as $n \to \infty$ then $e^{-\alpha t} P_t f(x_n) \to e^{-\alpha t} P_t f(x)$ as $n \to \infty$ for each $t \in [0, \infty)$. By the dominated convergence theorem, $U_\alpha f(x_n) = \int_0^\infty e^{-\alpha t} P_t f(x_n) \, dt \to \int_0^\infty e^{-\alpha t} P_t f(x) \, dt = U_\alpha f(x) \text{ as } n \to \infty$ Hence $U_\alpha f$ is continuous. Next suppose that $x_n \to \infty$ as $n \to \infty$. This means that for every compact $C \subseteq S$, there exists $m \in \N_+$ such that $x_n \notin C$ for $n \gt m$. Then $e^{-\alpha t} P_t f(x_n) \to 0$ as $n \to \infty$ for each $t \in [0, \infty)$. Again by the dominated convergence theorem, $U_\alpha f(x_n) = \int_0^\infty e^{-\alpha t} P_t f(x_n) \, dt \to 0 \text{ as } n \to \infty$ So $U_\alpha f \in \mathscr{C}_0$.
If $f \in \mathscr{C}_0$ then $\alpha U_\alpha f \to f$ as $\alpha \to \infty$.
Proof
Convergence is with respect to the supremum norm on $\mathscr{C}_0$, of course. Suppose that $f \in \mathscr{C}_0$. Note first that with a change of variables $s = \alpha t$, $\alpha U_\alpha f = \int_0^\infty \alpha e^{-\alpha t} P_t f \, dt = \int_0^\infty e^{-s} P_{s/\alpha} f \, ds$ and hence $\left|\alpha U_\alpha f - f\right| = \left|\int_0^\infty e^{-s} \left(P_{s/\alpha} f - f\right) ds\right| \le \int_0^\infty e^{-s} \left|P_{s/\alpha} f - f\right| \, ds \le \int_0^\infty e^{-s} \left\|P_{s/\alpha} f - f\right\| \, ds$ So it follows that $\left\|\alpha U_\alpha f - f\right\| \le \int_0^\infty e^{-s} \left\|P_{s/\alpha} f - f\right\| \, ds$ But $\left\|P_{s/\alpha} f - f\right\| \to 0$ as $\alpha \to \infty$ and hence by the dominated convergence theorem, $\int_0^\infty e^{-s} \left\|P_{s/\alpha} f - f\right\| \, ds \to 0$ as $\alpha \to \infty$.
Infinitesimal Generator
In continuous time, it's not at all clear how we could construct a Markov process with desired properties, say to model a real system of some sort. Stated mathematically, the existential problem is how to construct the family of transition kernels $\{P_t: t \in [0, \infty)\}$ so that the semigroup property $P_s P_t = P_{s + t}$ is satisfied for all $s, \, t \in [0, \infty)$. The answer, as for similar problems in the deterministic world, comes essentially from calculus, from a type of derivative.
The infinitesimal generator of the Markov process $\bs{X}$ is the operator $G: \mathscr{D} \to \mathscr{C}_0$ defined by $G f = \lim_{t \downarrow 0} \frac{P_t f - f}{t}$ on the domain $\mathscr{D} \subseteq \mathscr{C}_0$ for which the limit exists.
As usual, the limit is with respect to the supremum norm on $\mathscr{C}_0$, so $f \in \mathscr{D}$ and $G f = g$ means that $f, \, g \in \mathscr{C}_0$ and $\left\|\frac{P_t f - f}{t} - g \right\| = \sup\left\{\left| \frac{P_t f(x) - f(x)}{t} - g(x) \right|: x \in S \right\} \to 0 \text{ as } t \downarrow 0$ So in particular, $G f(x) = \lim_{t \downarrow 0} \frac{P_t f(x) - f(x)}{t} = \lim_{t \downarrow 0} \frac{\E[f(X_t) \mid X_0 = x] - f(x)}{t}, \quad x \in S$
The domain $\mathscr{D}$ is a subspace of $\mathscr{C}_0$ and the generator $G$ is a linear operator on $\mathscr{D}$.
1. If $f \in \mathscr{D}$ and $c \in \R$ then $c f \in \mathscr{D}$ and $G(c f) = c G f$.
2. If $f, \, g \in \mathscr{D}$ then $f + g \in \mathscr{D}$ and $G(f + g) = G f + G g$.
Proof
These are simple results that depend on the linearity of $P_t$ for $t \in [0, \infty)$ and basic results on convergence.
1. If $f \in \mathscr{D}$ then $\frac{P_t (c f) - (c f)}{t} = c \frac{P_t f - f}{t} \to c G f \text{ as } t \downarrow 0$
2. If $f, \, g \in \mathscr{D}$ then $\frac{P_t(f + g) - (f + g)}{t} = \frac{P_t f - f}{t} + \frac{P_t g - g}{t} \to G f + G g \text{ as } t \downarrow 0$
Note that $G f$ is the (right) derivative at 0 of the function $t \mapsto P_t f$. Because of the semigroup property, this differentiability property at $0$ implies differentiability at arbitrary $t \in [0, \infty)$. Moreover, the infinitesimal operator and the transition operators commute:
If $f \in \mathscr{D}$ and $t \in [0, \infty)$, then $P_t f \in \mathscr{D}$ and the following derivative rules hold with respect to the supremum norm.
1. $P^\prime_t f = P_t G f$, the Kolmogorov forward equation
2. $P^\prime_t f = G P_t f$, the Kolmogorov backward equation
Proof
Let $f \in \mathscr{D}$. All limits and statements about derivatives and continuity are with respect to the supremum norm.
1. By assumption, $\frac{1}{h}(P_h f - f) \to G f \text{ as } h \downarrow 0$ Since $P_t$ is a bounded, linear operator on the space $\mathscr{C}_0$, it preserves limits, so $\frac{1}{h}(P_t P_h f - P_t f) = \frac{1}{h}(P_{t+h} f - P_t f) \to P_t G f \text{ as } h \downarrow 0$ This proves the result for the derivative from the right. But since $t \mapsto P_t f$ is continuous, the result is also true for the two-sided derivative.
2. From part (a), we now know that $\frac{1}{h} (P_h P_t f - P_t f) = \frac{1}{h}(P_{t+h} f - P_t f) \to P_t G f \text { as } h \to 0$ By definition, this means that $P_t f \in \mathscr{D}$ and $G P_t f = P_t G f = P_t^\prime f$.
The last result gives a possible solution to the dilemma that motivated this discussion in the first place. If we want to construct a Markov process with desired properties, to model a real system for example, we can start by constructing an appropriate generator $G$ and then solve the initial value problem $P^\prime_t = G P_t, \quad P_0 = I$ to obtain the transition operators $\bs{P} = \{P_t: t \in [0, \infty)\}$. The next theorem gives the relationship between the potential operators and the infinitesimal operator, which in some ways is better. This relationship is analogous to the relationship between the potential operators and the one-step operator given above in discrete time.
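For a finite state space the initial value problem has the explicit solution $P_t = e^{t G}$, the matrix exponential. The sketch below (with an arbitrary, hypothetical generator matrix) computes $P_t$ this way and checks the Kolmogorov equations numerically.

```python
import numpy as np
from scipy.linalg import expm

# A sketch of this construction for a finite state space (the generator matrix G below is
# a hypothetical example, not from the text): the initial value problem P'_t = G P_t with
# P_0 = I has solution P_t = exp(t G), computed here with the matrix exponential.
G = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])     # rows sum to 0, off-diagonal entries nonnegative

t, h = 1.5, 1e-5
P_t = expm(t * G)
print(P_t.sum(axis=1))                 # each row of P_t is a probability distribution

# Check the Kolmogorov equations numerically: P'_t = P_t G = G P_t
dP = (expm((t + h) * G) - expm((t - h) * G)) / (2 * h)
print(np.max(np.abs(dP - P_t @ G)), np.max(np.abs(dP - G @ P_t)))
```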
Suppose $\alpha \in (0, \infty)$.
1. If $f \in \mathscr{D}$ then $G f \in \mathscr{C}_0$ and $f + U_\alpha G f = \alpha U_\alpha f$
2. If $f \in \mathscr{C}_0$ then $U_\alpha f \in \mathscr{D}$ and $f + G U_\alpha f = \alpha U_\alpha f$.
Proof
1. By definition, if $f \in \mathscr{D}$ then $G f \in \mathscr{C}_0$. Hence using the previous result, $f + U_\alpha G f = f + \int_0^\infty e^{-\alpha t} G P_t f \, dt = f + \int_0^\infty e^{-\alpha t} P^\prime_t f \, dt$ Integrating by parts (with $u = e^{-\alpha t}$ and $dv = P^\prime_t f \, dt$) gives $f + U_\alpha G f = f + e^{-\alpha t} P_t f \biggm|_0^\infty + \alpha \int_0^\infty e^{-\alpha t} P_t f\, dt$ But $e^{-\alpha t} P_t f \to 0$ as $t \to \infty$ while $P_0 f = f$, so the middle term is $-f$. The last term is $\alpha U_\alpha f$, so $f + U_\alpha G f = \alpha U_\alpha f$.
2. Suppose that $f \in \mathscr{C}_0$. From the result above and the substitution $u = s + t$, $P_t U_\alpha f = \int_0^\infty e^{-\alpha s} P_{s+t} f \, ds = \int_t^\infty e^{-\alpha (u - t)} P_u f \, du = e^{\alpha t} \int_t^\infty e^{-\alpha u} P_u f \, du$ Hence $\frac{P_t U_\alpha f - U_\alpha f}{t} = \frac{1}{t} \left[e^{\alpha t} \int_t^\infty e^{-\alpha u} P_u f \, du - U_\alpha f\right]$ Adding and subtracting $e^{\alpha t} U_\alpha f$ and combining integrals gives \begin{align*} \frac{P_t U_\alpha f - U_\alpha f}{t} & = \frac{1}{t} \left[e^{\alpha t} \int_t^\infty e^{-\alpha u} P_u f \, du - e^{\alpha t} \int_0^\infty e^{-\alpha u} P_u f \, du\right] + \frac{e^{\alpha t} - 1}{t} U_\alpha f \ & = -e^{\alpha t} \frac{1}{t} \int_0^t e^{-\alpha s} P_s f \, ds + \frac{e^{\alpha t} - 1}{t} U_\alpha f \end{align*} Since $s \mapsto P_s f$ is continuous, the first term converges to $-f$ as $t \downarrow 0$. The second term converges to $\alpha U_\alpha f$ as $t \downarrow 0$.
For $\alpha \gt 0$, the operators $U_\alpha$ and $G$ have an inverse relationship.
Suppose again that $\alpha \in (0, \infty)$.
1. $U_\alpha = (\alpha I - G)^{-1}: \mathscr{C}_0 \to \mathscr{D}$
2. $G = \alpha I - U_\alpha^{-1} : \mathscr{D} \to \mathscr{C}_0$
Proof
Recall that $U_\alpha: \mathscr{C}_0 \to \mathscr{D}$ and $G: \mathscr{D} \to \mathscr{C}_0$.
1. By part (a) of the previous result we have $\alpha U_\alpha - U_\alpha G = I$ so $U_\alpha(\alpha I - G) = I$. By part (b) we have $\alpha U_\alpha - G U_\alpha = I$ so $(\alpha I - G) U_\alpha = I$.
2. This follows from (a).
So, from the generator $G$ we can determine the potential operators $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$, which in turn determine the transition operators $\bs{P} = \{P_t: t \in (0, \infty)\}$. In continuous time, transition operators $\bs{P} = \{P_t: t \in [0, \infty)\}$ can be obtained from the single, infinitesimal operator $G$ in a way that is reminiscent of the fact that in discrete time, the transition operators $\bs{P} = \{P^n: n \in \N\}$ can be obtained from the single, one-step operator $P$.
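The inverse relationship between the resolvent and the generator is also easy to verify numerically in the finite-state case. The sketch below (reusing an arbitrary, hypothetical generator matrix) compares $(\alpha I - G)^{-1}$ with the resolvent computed directly from its definition as a Laplace transform of $t \mapsto P_t = e^{t G}$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# A numerical illustration of U_alpha = (alpha I - G)^{-1} in the finite-state case (the
# generator G and the value of alpha are arbitrary choices).  The resolvent is also
# computed directly from its definition as a Laplace transform of t -> P_t = exp(t G).
G = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
alpha = 1.2
I = np.eye(3)

U_alpha = np.linalg.inv(alpha * I - G)

# U_alpha(x, y) = integral over [0, infinity) of e^{-alpha t} P_t(x, y) dt, entry by entry
U_direct = np.array([[quad(lambda t: np.exp(-alpha * t) * expm(t * G)[i, j], 0, np.inf)[0]
                      for j in range(3)] for i in range(3)])
print(np.max(np.abs(U_alpha - U_direct)))          # should be numerically negligible
```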
Examples and Applications
Our first example is essentially deterministic.
Consider the Markov process $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $\R$ satisfying the ordinary differential equation $\frac{d}{dt} X_t = g(X_t), \quad t \in [0, \infty)$ where $g: \R \to \R$ is Lipschitz continuous. The infinitesimal operator $G$ is given by $G f(x) = f^\prime(x) g(x)$ for $x \in \R$ on the domain $\mathscr{D}$ of functions $f: \R \to \R$ where $f \in \mathscr{C}_0$ and $f^\prime \in \mathscr{C}_0$.
Proof
Recall that the only source of randomness in this process is the initial state $X_0$. By the continuity assumptions on $g$, there exists a unique solution $X_t(x)$ to the differential equation with initial value $X_0 = x$, defined for all $t \in [0, \infty)$. The transition operator $P_t$ for $t \in [0, \infty)$ is defined on $\mathscr{B}$ by $P_t f(x) = f[X_t(x)]$ for $x \in \R$. By the ordinary chain rule, if $f$ is differentiable, $\frac{P_t f(x) - f(x)}{t} = \frac{f[X_t(x)] - f(x)}{t} \to f^\prime(x) g(x) \text{ as } t \downarrow 0$
Our next example considers the Poisson process as a Markov process. Compare this with the binomial process above.
Let $\bs{N} = \{N_t: t \in [0, \infty)\}$ denote the Poisson process on $\N$ with rate $\beta \in (0, \infty)$. Define the Markov process $\bs{X} = \{X_t: t \in [0, \infty)\}$ by $X_t = X_0 + N_t$ where $X_0$ takes values in $\N$ and is independent of $\bs{N}$.
1. For $t \in [0, \infty)$, show that the probability transition matrix $P_t$ of $\bs{X}$ is given by $P_t(x, y) = e^{-\beta t} \frac{(\beta t)^{y - x}}{(y - x)!}, \quad x, \, y \in \N, \, y \ge x$
2. For $\alpha \in [0, \infty)$, show that the potential matrix $U_\alpha$ of $\bs{X}$ is given by $U_\alpha(x, y) = \frac{1}{\alpha + \beta} \left(\frac{\beta}{\alpha + \beta}\right)^{y - x}, \quad x, \, y \in \N, \, y \ge x$
3. For $\alpha \gt 0$ and $x \in \N$, identify the probability distribution defined by $\alpha U_\alpha(x, \cdot)$.
4. Show that the infinitesimal matrix $G$ of $\bs{X}$ is given by $G(x, x) = -\beta$, $G(x, x + 1) = \beta$ for $x \in \N$.
Solutions
1. Note that for $t \in [0, \infty)$ and $x \in \N$, $P_t(x, \cdot)$ is the (discrete) PDF of $x + N_t$ since $N_t$ has the Poisson distribution with parameter $\beta t$.
2. Let $\alpha \in [0, \infty)$ and let $x, \, y \in \N$ with $x \le y$. Then \begin{align*} U_\alpha(x, y) & = \int_0^\infty e^{-\alpha t} P_t(x, y) \, dt = \int_0^\infty e^{-\alpha t} e^{-\beta t} \frac{(\beta t)^{y - x}}{(y - x)!} dt \ & = \frac{\beta^{y - x}}{(y - x)!} \int_0^\infty e^{-(\alpha + \beta) t} t^{y - x} \, dt \end{align*} The change of variables $s = (\alpha + \beta)t$ gives $U_\alpha(x, y) = \frac{\beta^{y-x}}{(y - x)! (\alpha + \beta)^{y - x + 1}} \int_0^\infty e^{-s} s^{y-x} \, ds$ But the last integral is $\Gamma(y - x + 1) = (y - x)!$. Simplifying gives the result.
3. For $\alpha \gt 0$, $\alpha U_\alpha(x, y) = \frac{\alpha}{\alpha + \beta} \left(\frac{\beta}{\alpha + \beta}\right)^{y - x}, \quad x, \, y \in \N, \, y \ge x$ As a function of $y$ for fixed $x$, this is the PDF of $x + Y_\alpha$ where $Y_\alpha$ has the geometric distribution with parameter $\frac{\alpha}{\alpha + \beta}$.
4. Note that for $x, \, y \in \N$, $G(x, y) = \frac{d}{dt} P_t(x, y) \bigm|_{t=0}$. By simple calculus, this is $-\beta$ if $y = x$, $\beta$ if $y = x + 1$, and 0 otherwise.
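As a final numerical check of the Poisson example (a sketch only; $\beta$, $\alpha$, and the truncation level are arbitrary choices), the matrix built from the closed form in part (b) should satisfy $(\alpha I - G) U_\alpha = I$, with the generator matrix from part (d), away from the truncation boundary.

```python
import numpy as np

# A final numerical check of the Poisson example (a sketch only; beta, alpha, and the
# truncation level m are arbitrary choices).  We truncate the state space to {0, ..., m},
# build U_alpha from the closed form in part (b) and G from part (d), and verify that
# (alpha I - G) U_alpha is the identity away from the truncation boundary.
beta, alpha, m = 2.0, 1.5, 80

G = -beta * np.eye(m + 1) + beta * np.eye(m + 1, k=1)      # G(x, x) = -beta, G(x, x+1) = beta

x, y = np.indices((m + 1, m + 1))
ratio = beta / (alpha + beta)
U = np.where(y >= x, (1 / (alpha + beta)) * ratio ** np.clip(y - x, 0, None), 0.0)

err = (alpha * np.eye(m + 1) - G) @ U - np.eye(m + 1)
print(np.max(np.abs(err[: m // 2, : m // 2])))             # essentially zero away from the boundary
```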
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
In this and the next several sections, we consider a Markov process with the discrete time space $\N$ and with a discrete (countable) state space. Recall that a Markov process with a discrete state space is called a Markov chain, so we are studying discrete-time Markov chains.
Review
We will review the basic definitions and concepts in the general introduction. With both time and space discrete, many of these definitions and concepts simplify considerably. As usual, our starting point is a probability space $(\Omega, \mathscr{F}, \P)$, so $\Omega$ is the sample space, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on $(\Omega, \mathscr{F})$. Let $\bs{X} = (X_0, X_1, X_2, \ldots)$ be a stochastic process defined on the probability space, with time space $\N$ and with countable state space $S$. In the context of the general introduction, $S$ is given the power set $\mathscr{P}(S)$ as the $\sigma$-algebra, so all subsets of $S$ are measurable, as are all functions from $S$ into another measurable space. Counting measure $\#$ is the natural measure on $S$, so integrals over $S$ are simply sums. The same comments apply to the time space $\N$: all subsets of $\N$ are measurable and counting measure $\#$ is the natural measure on $\N$.
The vector space $\mathscr{B}$ consisting of bounded functions $f: S \to \R$ will play an important role. The norm that we use is the supremum norm defined by $\|f\| = \sup\{\left|f(x)\right|: x \in S\}, \quad f \in \mathscr{B}$
For $n \in \N$, let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$, the $\sigma$-algebra generated by the process up to time $n$. Thus $\mathfrak{F} = \{\mathscr{F}_0, \mathscr{F}_1, \mathscr{F}_2, \ldots\}$ is the natural filtration associated with $\bs{X}$. We also let $\mathscr{G}_n = \sigma\{X_n, X_{n+1}, \ldots\}$, the $\sigma$-algebra generated by the process from time $n$ on. So if $n \in \N$ represents the present time, then $\mathscr{F}_n$ contains the events in the past and $\mathscr{G}_n$ the events in the future.
Definitions
We start with the basic definition of the Markov property: the past and future are conditionally independent, given the present.
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain if $\P(A \cap B \mid X_n) = \P(A \mid X_n) \P(B \mid X_n)$ for every $n \in \N$, $A \in \mathscr{F}_n$ and $B \in \mathscr{G}_n$.
There are a number of equivalent formulations of the Markov property for a discrete-time Markov chain. We give a few of these.
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain if either of the following equivalent conditions is satisfied:
1. $\P(X_{n+1} = x \mid \mathscr{F}_n) = \P(X_{n+1} = x \mid X_n)$ for every $n \in \N$ and $x \in S$.
2. $\E[f(X_{n+1}) \mid \mathscr{F}_n] = \E[f(X_{n+1}) \mid X_n]$ for every $n \in \N$ and $f \in \mathscr{B}$.
Part (a) states that for $n \in \N$, the conditional probability density function of $X_{n+1}$ given $\mathscr{F}_n$ is the same as the conditional probability density function of $X_{n+1}$ given $X_n$. Part (b) also states, in terms of expected value, that the conditional distribution of $X_{n+1}$ given $\mathscr{F}_n$ is the same as the conditional distribution of $X_{n+1}$ given $X_n$. Both parts are the Markov property looking just one time step in the future. But with discrete time, this is equivalent to the Markov property at general future times.
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain if either of the following equivalent conditions is satisfied:
1. $\P(X_{n+k} = x \mid \mathscr{F}_n) = \P(X_{n+k} = x \mid X_n)$ for every $n, \, k \in \N$ and $x \in S$.
2. $\E[f(X_{n+k}) \mid \mathscr{F}_n] = \E[f(X_{n+k}) \mid X_n]$ for every $n, \, k \in \N$ and $f \in \mathscr{B}$.
Part (a) states that for $n, \, k \in \N$, the conditional probability density function of $X_{n+k}$ given $\mathscr{F}_n$ is the same as the conditional probability density function of $X_{n+k}$ given $X_n$. Part (b) also states, in terms of expected value, that the conditional distribution of $X_{n+k}$ given $\mathscr{F}_n$ is the same as the conditional distribution of $X_{n+k}$ given $X_n$. In discrete time and space, the Markov property can also be stated without explicit reference to $\sigma$-algebras. If you are not familiar with measure theory, you can take this as the starting definition.
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain if for every $n \in \N$ and every sequence of states $(x_0, x_1, \ldots, x_{n-1}, x, y)$, $\P(X_{n+1} = y \mid X_0 = x_0, X_1 = x_1, \ldots, X_{n-1} = x_{n-1}, X_n = x) = \P(X_{n+1} = y \mid X_n = x)$
The theory of discrete-time Markov chains is simplified considerably if we add an additional assumption.
A Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ is time homogeneous if $\P(X_{n+k} = y \mid X_k = x) = \P(X_n = y \mid X_0 = x)$ for every $k, \, n \in \N$ and every $x, \, y \in S$.
That is, the conditional distribution of $X_{n+k}$ given $X_k = x$ depends only on $n$. So if $\bs{X}$ is homogeneous (we usually don't bother with the time adjective), then the chain $\{X_{k+n}: n \in \N\}$ given $X_k = x$ is equivalent (in distribution) to the chain $\{X_n: n \in \N\}$ given $X_0 = x$. For this reason, the initial distribution is often unspecified in the study of Markov chains—if the chain is in state $x \in S$ at a particular time $k \in \N$, then it doesn't really matter how the chain got to state $x$; the process essentially starts over, independently of the past. The term stationary is sometimes used instead of homogeneous.
From now on, we will usually assume that our Markov chains are homogeneous. This is not as big a loss of generality as you might think. A non-homogeneous Markov chain can be turned into a homogeneous Markov process by enlarging the state space, as shown in the introduction to general Markov processes, but at the cost of a more complicated state space. For a homogeneous Markov chain, if $k, \, n \in \N$, $x \in S$, and $f \in \mathscr{B}$, then $\E[f(X_{k+n}) \mid X_k = x] = \E[f(X_n) \mid X_0 = x]$
Stopping Times and the Strong Markov Property
Consider again a stochastic process $\bs{X} = (X_0, X_1, X_2, \ldots)$ with countable state space $S$, and with the natural filtration $\mathfrak{F} = (\mathscr{F}_0, \mathscr{F}_1, \mathscr{F}_2, \ldots)$ as given above. Recall that a random variable $\tau$ taking values in $\N \cup \{\infty\}$ is a stopping time or a Markov time for $\bs{X}$ if $\{\tau = n\} \in \mathscr{F}_n$ for each $n \in \N$. Intuitively, we can tell whether or not $\tau = n$ by observing the chain up to time $n$. In a sense, a stopping time is a random time that does not require that we see into the future. The following result gives the quintessential examples of stopping times.
Suppose again $\bs{X} = \{X_n: n \in \N\}$ is a discrete-time Markov chain with state space $S$ as defined above. For $A \subseteq S$, the following random times are stopping times:
1. $\rho_A = \inf\{n \in \N: X_n \in A\}$, the entrance time to $A$.
2. $\tau_A = \inf\{n \in \N_+: X_n \in A\}$, the hitting time to $A$.
Proof
For $n \in \N$
1. $\{\rho_A = n\} = \{X_0 \notin A, X_1 \notin A, \ldots, X_{n-1} \notin A, X_n \in A\} \in \mathscr{F}_n$
2. $\{\tau_A = n\} = \{X_1 \notin A, X_2 \notin A, \ldots, X_{n-1} \notin A, X_n \in A\} \in \mathscr{F}_n$
An example of a random time that is generally not a stopping time is the last time that the process is in $A$: $\zeta_A = \max\{n \in \N_+: X_n \in A\}$ We cannot tell if $\zeta_A = n$ without looking into the future: $\{ \zeta_A = n\} = \{X_n \in A, X_{n+1} \notin A, X_{n+2} \notin A, \ldots\}$ for $n \in \N$.
If $\tau$ is a stopping time for $\bs{X}$, the $\sigma$-algebra associated with $\tau$ is $\mathscr{F}_\tau = \{A \in \mathscr{F}: A \cap \{\tau = n\} \in \mathscr{F}_n \text{ for all } n \in \N\}$ Intuitively, $\mathscr{F}_\tau$ contains the events that can be described by the process up to the random time $\tau$, in the same way that $\mathscr{F}_n$ contains the events that can be described by the process up to the deterministic time $n \in \N$. For more information see the section on filtrations and stopping times.
The strong Markov property states that the future is independent of the past, given the present, when the present time is a stopping time. For a discrete-time Markov chain, the ordinary Markov property implies the strong Markov property.
If $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a discrete-time Markov chain then $\bs{X}$ has the strong Markov property. That is, if $\tau$ is a finite stopping time for $\bs{X}$ then
1. $\P(X_{\tau+k} = x \mid \mathscr{F}_\tau) = \P(X_{\tau+k} = x \mid X_\tau)$ for every $k \in \N$ and $x \in S$.
2. $\E[f(X_{\tau+k}) \mid \mathscr{F}_\tau] = \E[f(X_{\tau+k}) \mid X_\tau]$ for every $k \in \N$ and $f \in \mathscr{B}$.
Part (a) states that the conditional probability density function of $X_{\tau + k}$ given $\mathscr{F}_\tau$ is the same as the conditional probability density function of $X_{\tau + k}$ given just $X_\tau$. Part (b) also states, in terms of expected value, that the conditional distribution of $X_{\tau + k}$ given $\mathscr{F}_\tau$ is the same as the conditional distribution of $X_{\tau + k}$ given just $X_\tau$. Assuming homogeneity as usual, the Markov chain $\{X_{\tau + n}: n \in \N\}$ given $X_\tau = x$ is equivalent in distribution to the chain $\{X_n: n \in \N\}$ given $X_0 = x$.
Transition Matrices
Suppose again that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a homogeneous, discrete-time Markov chain with state space $S$. With a discrete state space, the transition kernels studied in the general introduction become transition matrices, with rows and columns indexed by $S$ (and so perhaps of infinite size). The kernel operations become familiar matrix operations. The results in this section are special cases of the general results, but we sometimes give independent proofs for completeness, and because the proofs are simpler. You may want to review the section on kernels in the chapter on expected value.
For $n \in \N$ let $P_n(x, y) = \P(X_n = y \mid X_0 = x), \quad (x, y) \in S \times S$ The matrix $P_n$ is the $n$-step transition probability matrix for $\bs{X}$.
Thus, $y \mapsto P_n(x, y)$ is the probability density function of $X_n$ given $X_0 = x$. In particular, $P_n$ is a probability matrix (or stochastic matrix) since $P_n(x, y) \ge 0$ for $(x, y) \in S^2$ and $\sum_{y \in S} P_n(x, y) = 1$ for $x \in S$. As with any nonnegative matrix on $S$, $P_n$ defines a kernel on $S$ for $n \in \N$: $P_n(x, A) = \sum_{y \in A} P_n(x, y) = \P(X_n \in A \mid X_0 = x), \quad x \in S, \, A \subseteq S$ So $A \mapsto P_n(x, A)$ is the probability distribution of $X_n$ given $X_0 = x$. The next result is the Chapman-Kolmogorov equation, named for Sydney Chapman and Andrei Kolmogorov. It gives the basic relationship between the transition matrices.
If $m, \, n \in \N$ then $P_m P_n = P_{m+n}$
Proof
This follows from the Markov and time-homogeneous properties and a basic conditioning argument. If $x, \, z \in S$ then $P_{m+n}(x, z) = \P(X_{m+n} = z \mid X_0 = x) = \sum_{y \in S} \P(X_{m+n} = z \mid X_0 = x, X_m = y) \P(X_m = y \mid X_0 = x)$ But by the Markov property and time-homogeneous properties $\P(X_{m+n} = z \mid X_0 = x, X_m = y) = \P(X_n = z \mid X_0 = y) = P_n(y, z)$ Of course also $\P(X_m = y \mid X_0 = x) = P_m(x, y)$ Hence we have $P_{m+n}(x, z) = \sum_{y \in S} P_m(x, y) P_n(y, z)$ The right side, by definition, is $P_m P_n(x, z)$.
It follows immediately that the transition matrices are just the matrix powers of the one-step transition matrix. That is, letting $P = P_1$ we have $P_n = P^n$ for all $n \in \N$. Note that $P^0 = I$, the identity matrix on $S$ given by $I(x, y) = 1$ if $x = y$ and 0 otherwise. The right operator corresponding to $P^n$ yields an expected value.
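As a quick numerical aside (not part of the development), the Chapman-Kolmogorov equation and the identity $P_n = P^n$ are easy to check for a finite chain with a few lines of NumPy. The transition matrix below is an illustrative choice (the same three-state matrix used in the computational exercises later in this section).

```python
import numpy as np

# Illustrative transition matrix on a three-state space
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])

m, n = 2, 3
Pm = np.linalg.matrix_power(P, m)       # P_m = P^m
Pn = np.linalg.matrix_power(P, n)       # P_n = P^n
Pmn = np.linalg.matrix_power(P, m + n)  # P_{m+n} = P^{m+n}

# Chapman-Kolmogorov: P_m P_n = P_{m+n}
print(np.allclose(Pm @ Pn, Pmn))          # True

# Each P^n is again a stochastic matrix: the row sums are 1
print(np.allclose(Pmn.sum(axis=1), 1.0))  # True
```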
Suppose that $n \in \N$ and that $f: S \to \R$. Then, assuming that the expected value exists, $P^n f(x) = \sum_{y \in S} P^n(x, y) f(y) = \E[f(X_n) \mid X_0 = x], \quad x \in S$
Proof
This follows easily from the definitions: $P^nf(x) = \sum_{y \in S} P^n(x, y) f(y) = \sum_{y \in S} \P(X_n = y \mid X_0 = x) f(y) = \E[f(X_n) \mid X_0 = x], \quad x \in S$
The existence of the expected value is only an issue if $S$ is infinite. In particular, the result holds if $f$ is nonnegative or if $f \in \mathscr{B}$ (which in turn would always be the case if $S$ is finite). In fact, $P^n$ is a linear contraction operator on the space $\mathscr{B}$ for $n \in \N$. That is, if $f \in \mathscr{B}$ then $P^n f \in \mathscr{B}$ and $\|P^n f\| \le \|f\|$. The left operator corresponding to $P^n$ is defined similarly. For $f: S \to \R$ $f P^n(y) = \sum_{x \in S} f(x) P^n(x, y), \quad y \in S$ assuming again that the sum makes sense (as before, only an issue when $S$ is infinite). The left operator is often restricted to nonnegative functions, and we often think of such a function as the density function (with respect to $\#$) of a positive measure on $S$. In this sense, the left operator maps a density function to another density function.
A function $f: S \to \R$ is invariant for $P$ (or for the chain $\bs{X}$) if $f P = f$.
Clearly if $f$ is invariant, so that $f P = f$ then $f P^n = f$ for all $n \in \N$. If $f$ is a probability density function, then so is $f P$.
If $X_0$ has probability density function $f$, then $X_n$ has probability density function $f P^n$ for $n \in \N$.
Proof
Again, this follows easily from the definitions and a conditioning argument. $\P(X_n = y) = \sum_{x \in S} \P(X_0 = x) \P(X_n = y \mid X_0 = x) = \sum_{x \in S} f(x) P^n(x, y) = f P^n(y), \quad y \in S$
In particular, if $X_0$ has probability density function $f$, and $f$ is invariant for $\bs{X}$, then $X_n$ has probability density function $f$ for all $n \in \N$, so the sequence of variables $\bs{X} = (X_0, X_1, X_2, \ldots)$ is identically distributed. Combining two results above, suppose that $X_0$ has probability density function $f$ and that $g: S \to \R$. Assuming the expected value exists, $\E[g(X_n)] = f P^n g$. Explicitly, $\E[g(X_n)] = \sum_{x \in S} \sum_{y \in S} f(x) P^n(x, y) g(y)$ It also follows from the last theorem that the distribution of $X_0$ (the initial distribution) and the one-step transition matrix determine the distribution of $X_n$ for each $n \in \N$. Actually, these basic quantities determine the finite dimensional distributions of the process, a stronger result.
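The identities above translate directly into vector-matrix products: $f P^n$ is the density of $X_n$ and $f P^n g$ is $\E[g(X_n)]$. Here is a minimal NumPy sketch; the matrix, initial density, and function $g$ are illustrative choices.

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])
f = np.array([1/3, 1/3, 1/3])   # initial density of X_0 (a row vector)
g = np.array([1.0, 2.0, 3.0])   # a function on the state space (a column vector)

n = 2
Pn = np.linalg.matrix_power(P, n)

f_n = f @ Pn        # density of X_n: the left operator f P^n
Eg = f @ Pn @ g     # E[g(X_n)] = f P^n g
print(f_n)          # approximately [7/12, 7/24, 1/8]
print(Eg)           # approximately 37/24
```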
Suppose that $X_0$ has probability density function $f_0$. For any sequence of states $(x_0, x_1, \ldots, x_n) \in S^{n+1}$, $\P(X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n) = f_0(x_0) P(x_0, x_1) P(x_1, x_2) \cdots P(x_{n-1},x_n)$
Proof
This follows directly from the Markov property and the multiplication rule of conditional probability: $\P(X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n) = \P(X_0 = x_0) \P(X_1 = x_1 \mid X_0 = x_0) \P(X_2 = x_2 \mid X_0 = x_0, X_1 = x_1) \cdots \P(X_n = x_n \mid X_0 = x_0, \ldots, X_{n-1} = x_{n-1})$ But by the Markov property, this reduces to \begin{align*} \P(X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n) & = \P(X_0 = x_0) \P(X_1 = x_1 \mid X_0 = x_0) \P(X_2 = x_2 \mid X_1 = x_1) \cdots \P(X_n = x_n \mid X_{n-1} = x_{n-1}) \ & = f_0(x_0) P(x_0, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) \end{align*}
Computations of this sort are the reason for the term chain in the name Markov chain. From this result, it follows that given a probability matrix $P$ on $S$ and a probability density function $f$ on $S$, we can construct a Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ such that $X_0$ has probability density function $f$ and the chain has one-step transition matrix $P$. In applied problems, we often know the one-step transition matrix $P$ from modeling considerations, and again, the initial distribution is often unspecified.
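The construction just described is also exactly how a chain is simulated in practice: sample $X_0$ from $f$, and then repeatedly sample the next state from the row of $P$ indexed by the current state. A minimal Python sketch (states are labeled $0, 1, 2$ and the matrix and initial density are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])
f0 = np.array([1/3, 1/3, 1/3])  # initial density, illustrative

def simulate_chain(P, f0, n_steps):
    """Return one sample path X_0, X_1, ..., X_{n_steps}."""
    states = np.arange(P.shape[0])
    x = rng.choice(states, p=f0)        # X_0 ~ f0
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(states, p=P[x])  # X_{n+1} ~ P(x, .)
        path.append(x)
    return path

print(simulate_chain(P, f0, 20))
```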
There is a natural graph (in the combinatorial sense) associated with a homogeneous, discrete-time Markov chain.
Suppose again that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain with state space $S$ and transition probability matrix $P$. The state graph of $\bs{X}$ is the directed graph with vertex set $S$ and edge set $E = \{(x, y) \in S^2: P(x, y) \gt 0\}$.
That is, there is a directed edge from $x$ to $y$ if and only if state $x$ leads to state $y$ in one step. Note that the graph may well have loops, since a state can certainly lead back to itself in one step. More generally, we have the following result:
Suppose again that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a Markov chain with state space $S$ and transition probability matrix $P$. For $x, \, y \in S$ and $n \in \N_+$, there is a directed path of length $n$ in the state graph from $x$ to $y$ if and only if $P^n(x, y) \gt 0$.
Proof
This follows since $P^n(x, y) \gt 0$ if and only if there exists a sequence of states $(x_1, x_2, \ldots, x_{n-1})$ with $P(x, x_1) \gt 0, P(x_1, x_2) \gt 0, \ldots, P(x_{n-1}, y) \gt 0$. This is also precisely the condition for the existence of a directed path $(x, x_1, \ldots, x_{n-1}, y)$ of length $n$ from $x$ to $y$ in the state graph.
Potential Matrices
For $\alpha \in (0, 1]$, the $\alpha$-potential matrix $R_\alpha$ of $\bs{X}$ is $R_\alpha(x, y) = \sum_{n=0}^\infty \alpha^n P^n(x, y), \quad (x, y) \in S^2$
1. $R = R_1$ is simply the potential matrix of $\bs{X}$.
2. $R(x, y)$ is the expected number of visits by $\bs{X}$ to $y \in S$, starting at $x \in S$.
Proof
First the definition of $R_\alpha$ as an infinite series of matrices makes sense since $P^n$ is a nonnegative matrix for each $n$. The interpretation of $R(x, y)$ for $(x, y) \in S^2$ comes from interchanging sum and expected value, again justified since the terms are nonnegative. $R(x, y) = \sum_{n=0}^\infty P^n(x, y) = \sum_{n=0}^\infty \E[\bs{1}(X_n = y) \mid X_0 = x] = \E\left( \sum_{n=0}^\infty \bs{1}(X_n = y) \biggm| X_0 = x\right) = \E[\#\{n \in \N: X_n = y\} \mid X_0 = x]$
Note that it's quite possible that $R(x, y) = \infty$ for some $(x, y) \in S^2$. In fact, knowing when this is the case is of considerable importance in recurrence and transience, which we study in the next section. As with any nonnegative matrix, the $\alpha$-potential matrix defines a kernel and defines left and right operators. For the kernel, $R_\alpha(x, A) = \sum_{y \in A} R_\alpha(x, y) = \sum_{n=0}^\infty \alpha^n P^n(x, A), \quad x \in S, A \subseteq S$ In particular, $R(x, A)$ is the expected number of visits by the chain to $A$ starting in $x$: $R(x, A) = \sum_{y \in A} R(x, y) = \sum_{n=0}^\infty P^n(x, A) = \E\left[\sum_{n=0}^\infty \bs{1}(X_n \in A) \biggm| X_0 = x\right], \quad x \in S, \, A \subseteq S$
If $\alpha \in (0, 1)$, then $R_\alpha(x, S) = \frac{1}{1 - \alpha}$ for all $x \in S$.
Proof
Using geometric series, $R_\alpha(x, S) = \sum_{n=0}^\infty \alpha^n P^n(x, S) = \sum_{n=0}^\infty \alpha^n = \frac{1}{1 - \alpha}$
Hence $R_\alpha$ is a bounded matrix for $\alpha \in (0, 1)$ and $(1 - \alpha) R_\alpha$ is a probability matrix. There is a simple interpretation of this matrix.
If $\alpha \in (0, 1)$ then $(1 - \alpha) R_\alpha(x, y) = \P(X_N = y \mid X_0 = x)$ for $(x, y) \in S^2$, where $N$ is independent of $\bs{X}$ and has the geometric distribution on $\N$ with parameter $1 - \alpha$.
Proof
Let $(x, y) \in S^2$. Conditioning on $N$ gives $\P(X_N = y \mid X_0 = x) = \sum_{n=0}^\infty \P(N = n) \P(X_N = y \mid X_0 = x, N = n)$ But by the substitution rule and the assumption of independence, $\P(X_N = y \mid N = n, X_0 = x) = \P(X_n = y \mid N = n, X_0 = x) = \P(X_n = y \mid X_0 = x) = P^n (x, y)$ Since $N$ has the geometric distribution on $\N$ with parameter $1 - \alpha$ we have $\P(N = n) = (1 - \alpha) \alpha^n$. Hence $\P(X_N = y \mid X_0 = x) = \sum_{n=0}^\infty (1 - \alpha) \alpha^n P^n(x, y) = (1 - \alpha) R_\alpha(x, y)$
So $(1 - \alpha) R_\alpha$ can be thought of as a transition matrix just as $P^n$ is a transition matrix, but corresponding to the random time $N$ (with $\alpha$ as a parameter) rather than the deterministic time $n$. An interpretation of the potential matrix $R_\alpha$ for $\alpha \in (0, 1)$ can also be given in economic terms. Suppose that we receive one monetary unit each time the chain visits a fixed state $y \in S$. Then $R(x, y)$ is the expected total reward, starting in state $x \in S$. However, money that we will receive at times distant in the future typically has less value to us now than money that we will receive soon. Specifically, suppose that a monetary unit at time $n \in \N$ has a present value of $\alpha^n$, so that $\alpha$ is an inflation factor (sometimes also called a discount factor). Then $R_\alpha (x, y)$ gives the expected total discounted reward, starting at $x \in S$.
The potential kernels $\bs{R} = \{R_\alpha: \alpha \in (0, 1)\}$ completely determine the transition kernels $\bs{P} = \{P_n: n \in \N\}$.
Proof
Note that for $(x, y) \in S^2$, the function $\alpha \mapsto R_\alpha(x, y)$ is a power series in $\alpha$ with coefficients $n \mapsto P^n(x, y)$. In the language of combinatorics, $\alpha \mapsto R_\alpha(x, y)$ is the ordinary generating function of the sequence $n \mapsto P^n(x, y)$. As noted above, this power series has radius of convergence at least 1, so we can extend the domain to $\alpha \in (-1, 1)$. Thus, given the potential matrices, we can recover the transition matrices by taking derivatives and evaluating at 0: $P^n(x, y) = \frac{1}{n!}\left[\frac{d^n}{d\alpha^n} R_\alpha(x, y) \right]_{\alpha = 0}$
Of course, it's really only necessary to determine $P$, the one step transition kernel, since the other transition kernels are powers of $P$. In any event, it follows that the matrices $\bs{R} = \{R_\alpha: \alpha \in (0, 1)\}$, along with the initial distribution, completely determine the finite dimensional distributions of the Markov chain $\bs{X}$. The potential matrices commute with each other and with the transition matrices.
If $\alpha, \, \beta \in (0, 1]$ and $k \in \N$, then
1. $P^k R_\alpha = R_\alpha P^k = \sum_{n=0}^\infty \alpha^n P^{n+k}$
2. $R_\alpha R_\beta = R_\beta R_\alpha = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^{m+n}$
Proof
Distributing matrix products through matrix sums is allowed since the matrices are nonnegative.
1. Directly $R_\alpha P^k = \sum_{n=0}^\infty \alpha^n P^n P^k = \sum_{n=0}^\infty \alpha^n P^{n+k}$ The other direction requires an interchange. $P^k R_\alpha = P^k \sum_{n=0}^\infty \alpha^n P^n = \sum_{n=0}^\infty \alpha^n P^k P^n = \sum_{n=0}^\infty \alpha^n P^{n+k}$
2. First, $R_\alpha R_\beta = \sum_{m=0}^\infty \alpha^m P^m R_\beta = \sum_{m=0}^\infty \alpha^m P^m \left(\sum_{n=0}^\infty \beta^n P^n\right) = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^m P^n = \sum_{m=0}^\infty \sum_{n=0}^\infty \alpha^m \beta^n P^{m+n}$ The other direction is similar.
The fundamental equation that relates the potential matrices is given next.
If $\alpha, \, \beta \in (0, 1]$ with $\alpha \ge \beta$ then $\alpha R_\alpha = \beta R_\beta + (\alpha - \beta) R_\alpha R_\beta$
Proof
If $\alpha = \beta$ the equation is trivial, so assume $\alpha \gt \beta$. From the previous result, $R_\alpha R_\beta = \sum_{j=0}^\infty \sum_{k=0}^\infty \alpha^j \beta^k P^{j+k}$ Changing variables to sum over $n = j + k$ and $k$ gives $R_\alpha R_\beta = \sum_{n=0}^\infty \sum_{k=0}^n \alpha^{n-k} \beta^k P^n = \sum_{n=0}^\infty \sum_{k=0}^n \left(\frac{\beta}{\alpha}\right)^k \alpha^n P^n = \sum_{n=0}^\infty \frac{1 - \left(\frac{\beta}{\alpha}\right)^{n+1}}{1 - \frac{\beta}{\alpha}} \alpha^n P^n$ Simplifying gives $R_\alpha R_\beta = \frac{1}{\alpha - \beta} \left[\alpha R_\alpha - \beta R_\beta \right]$ Note that since $\beta \lt 1$, the matrix $R_\beta$ has finite values, so we don't have to worry about the dreaded indeterminate form $\infty - \infty$.
If $\alpha \in (0, 1]$ then $I + \alpha R_\alpha P = I + \alpha P R_\alpha = R_\alpha$.
Proof
From the result above, $I + \alpha R_\alpha P = I + \alpha P R_\alpha = I + \sum_{n=0}^\infty \alpha^{n+1} P^{n+1} = \sum_{n = 0}^\infty \alpha^n P^n = R_\alpha$
This leads to an important result: when $\alpha \in (0, 1)$, there is an inverse relationship between $P$ and $R_\alpha$.
If $\alpha \in (0, 1)$, then
1. $R_\alpha = (I - \alpha P)^{-1}$
2. $P = \frac{1}{\alpha}\left(I - R_\alpha^{-1}\right)$
Proof
The matrices have finite values, so we can subtract. The identity $I + \alpha R_\alpha P = R_\alpha$ leads to $R_\alpha(I - \alpha P) = I$ and the identity $I + \alpha P R_\alpha = R_\alpha$ leads to $(I - \alpha P) R_\alpha = I$. Hence (a) holds. Part (b) follows from (a).
This result shows again that the potential matrix $R_\alpha$ determines the transition operator $P$.
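For a finite chain these relationships can be verified numerically: compute $(I - \alpha P)^{-1}$, compare it with a truncated version of the series $\sum_n \alpha^n P^n$, and recover $P$ from $R_\alpha$. A NumPy sketch, with an illustrative matrix and an arbitrary value of $\alpha$:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])
alpha = 0.5
I = np.eye(3)

# R_alpha = (I - alpha P)^{-1}
R = np.linalg.inv(I - alpha * P)

# Compare with a truncated version of the series sum_n alpha^n P^n
R_series = sum(alpha**n * np.linalg.matrix_power(P, n) for n in range(200))
print(np.allclose(R, R_series))                      # True, up to truncation error

# Row sums are 1 / (1 - alpha)
print(np.allclose(R.sum(axis=1), 1 / (1 - alpha)))   # True

# Recover the one-step matrix: P = (I - R^{-1}) / alpha
print(np.allclose((I - np.linalg.inv(R)) / alpha, P))  # True
```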
Sampling in Time
If we sample a Markov chain at multiples of a fixed time $k$, we get another (homogeneous) chain.
Suppose that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is an Markov chain with state space $S$ and transition probability matrix $P$. For fixed $k \in \N_+$, the sequence $\bs{X}_k = (X_0, X_k, X_{2 k}, \ldots)$ is a Markov chain on $S$ with transition probability matrix $P^k$.
If we sample a Markov chain at a general increasing sequence of time points $0 \lt n_1 \lt n_2 \lt \cdots$ in $\N$, then the resulting stochastic process $\bs{Y} = (Y_0, Y_1, Y_2, \ldots)$, where $Y_k = X_{n_k}$ for $k \in \N$, is still a Markov chain, but is not time homogeneous in general.
Recall that if $A$ is a nonempty subset of $S$, then $P_A$ is the matrix $P$ restricted to $A \times A$. So $P_A$ is a sub-stochastic matrix, since the row sums may be less than 1. Recall also that $P_A^n$ means $(P_A)^n$, not $(P^n)_A$; in general these matrices are different.
If $A$ is a nonempty subset of $S$ then for $n \in \N$,
$P_A^n(x, y) = \P(X_1 \in A, X_2 \in A, \ldots, X_{n-1} \in A, X_n = y \mid X_0 = x), \quad (x, y) \in A \times A$
That is, $P_A^n(x, y)$ is the probability of going from state $x$ to $y$ in $n$ steps, remaining in $A$ all the while. In terms of the state graph of $\bs{X}$, it is the sum of products of probabilities along paths of length $n$ from $x$ to $y$ that stay inside $A$.
Examples and Applications
Computational Exercises
Let $\bs{X} = (X_0, X_1, \ldots)$ be the Markov chain on $S = \{a, b, c\}$ with transition matrix $P = \left[\begin{matrix} \frac{1}{2} & \frac{1}{2} & 0 \ \frac{1}{4} & 0 & \frac{3}{4} \ 1 & 0 & 0 \end{matrix} \right]$
For the Markov chain $\bs{X}$,
1. Draw the state graph.
2. Find $\P(X_1 = a, X_2 = b, X_3 = c \mid X_0 = a)$
3. Find $P^2$
4. Suppose that $g: S \to \R$ is given by $g(a) = 1$, $g(b) = 2$, $g(c) = 3$. Find $\E[g(X_2) \mid X_0 = x]$ for $x \in S$.
5. Suppose that $X_0$ has the uniform distribution on $S$. Find the probability density function of $X_2$.
Answer
1. The edge set is $E = \{(a, a), (a, b), (b, a), (b, c), (c, a)\}$
2. $P(a, a) P(a, b) P(b, c) = \frac{3}{16}$
3. By standard matrix multiplication, $P^2 = \left[\begin{matrix} \frac{3}{8} & \frac{1}{4} & \frac{3}{8} \ \frac{7}{8} & \frac{1}{8} & 0 \ \frac{1}{2} & \frac{1}{2} & 0 \end{matrix} \right]$
4. In matrix form, $g = \left[\begin{matrix} 1 \ 2 \ 3 \end{matrix}\right], \quad P^2 g = \left[\begin{matrix} 2 \ \frac{9}{8} \ \frac{3}{2} \end{matrix} \right]$
5. In matrix form, $X_0$ has PDF $f = \left[\begin{matrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{matrix} \right]$, and $X_2$ has PDF $f P^2 = \left[\begin{matrix} \frac{7}{12} & \frac{7}{24} & \frac{1}{8} \end{matrix} \right]$.
Let $A = \{a, b\}$. Find each of the following:
1. $P_A$
2. $P_A^2$
3. $(P^2)_A$
Proof
1. $P_A = \left[\begin{matrix} \frac{1}{2} & \frac{1}{2} \ \frac{1}{4} & 0 \end{matrix}\right]$
2. $P_A^2 = \left[\begin{matrix} \frac{3}{8} & \frac{1}{4} \ \frac{1}{8} & \frac{1}{8} \end{matrix}\right]$
3. $(P^2)_A = \left[\begin{matrix} \frac{3}{8} & \frac{1}{4} \ \frac{7}{8} & \frac{1}{8} \end{matrix}\right]$
Find the invariant probability density function of $\bs{X}$
Answer
Solving $f P = f$ subject to the condition that $f$ is a PDF gives $f = \left[\begin{matrix} \frac{8}{15} & \frac{4}{15} & \frac{3}{15} \end{matrix}\right]$
Compute the $\alpha$-potential matrix $R_\alpha$ for $\alpha \in (0, 1)$.
Answer
Computing $R_\alpha = (I - \alpha P)^{-1}$ gives $R_\alpha = \frac{1}{(1 - \alpha)(8 + 4 \alpha + 3 \alpha^2)}\left[\begin{matrix} 8 & 4 \alpha & 3 \alpha^2 \ 2 \alpha + 6 \alpha^2 & 8 - 4 \alpha & 6 \alpha - 3 \alpha^2 \ 8 \alpha & 4 \alpha^2 & 8 - 4 \alpha - \alpha^2 \end{matrix}\right]$ As a check on our work, note that the row sums are $\frac{1}{1 - \alpha}$.
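As a numerical check on the invariant density found above, note that $f P = f$ says that $f^{\mathsf{T}}$ is an eigenvector of $P^{\mathsf{T}}$ for the eigenvalue 1, so $f$ can be computed with an eigenvalue routine and then normalized to sum to 1. A NumPy sketch:

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])

# f P = f means f^T is an eigenvector of P^T with eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))   # index of the eigenvalue closest to 1
f = np.real(eigvecs[:, k])
f = f / f.sum()                        # normalize to a probability density

print(f)                        # approximately [8/15, 4/15, 3/15]
print(np.allclose(f @ P, f))    # True
```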
The Two-State Chain
Perhaps the simplest, non-trivial Markov chain has two states, say $S = \{0, 1\}$ and the transition probability matrix given below, where $p \in (0, 1)$ and $q \in (0, 1)$ are parameters. $P = \left[ \begin{matrix} 1 - p & p \ q & 1 - q \end{matrix} \right]$
For $n \in \N$, $P^n = \frac{1}{p + q} \left[ \begin{matrix} q + p(1 - p - q)^n & p - p(1 - p - q)^n \ q - q(1 - p - q)^n & p + q(1 - p - q)^n \end{matrix} \right]$
Proof
The eigenvalues of $P$ are 1 and $1 - p - q$. Next, $B^{-1} P B = D$ where $B = \left[ \begin{matrix} 1 & - p \ 1 & q \end{matrix} \right], \quad D = \left[ \begin{matrix} 1 & 0 \ 0 & 1 - p - q \end{matrix} \right]$ Hence $P^n = B D^n B^{-1}$, which gives the expression above.
As $n \to \infty$, $P^n \to \frac{1}{p + q} \left[ \begin{matrix} q & p \ q & p \end{matrix} \right]$
Proof
Note that $0 \lt p + q \lt 2$ and so $-1 \lt 1 - (p + q) \lt 1$. Hence $(1 - p - q)^n \to 0$ as $n \to \infty$.
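The closed form for $P^n$ and its limit are easy to verify numerically for particular parameter values. A sketch (the values of $p$, $q$, and $n$ are arbitrary choices):

```python
import numpy as np

p, q, n = 0.3, 0.8, 25   # arbitrary illustrative values
P = np.array([[1 - p, p],
              [q, 1 - q]])

r = (1 - p - q) ** n
closed_form = (1 / (p + q)) * np.array([[q + p * r, p - p * r],
                                        [q - q * r, p + q * r]])
limit = (1 / (p + q)) * np.array([[q, p],
                                  [q, p]])

print(np.allclose(np.linalg.matrix_power(P, n), closed_form))  # True
print(np.allclose(closed_form, limit, atol=1e-6))              # essentially the limit for large n
```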
Open the simulation of the two-state, discrete-time Markov chain. For various values of $p$ and $q$, and different initial states, run the simulation 1000 times. Compare the relative frequency distribution to the limiting distribution, and in particular, note the rate of convergence. Be sure to try the case $p = q = 0.01$
The only invariant probability density function for the chain is $f = \left[\begin{matrix} \frac{q}{p + q} & \frac{p}{p + q} \end{matrix} \right]$
Proof
Let $f = \left[\begin{matrix} a & b \end{matrix}\right]$. The matrix equation $f P = f$ leads to $-p a + q b = 0$ so $b = a \frac{p}{q}$. The condition $a + b = 1$ for $f$ to be a PDF then gives $a = \frac{q}{p + q}$, $b = \frac{p}{p + q}$
For $\alpha \in (0, 1)$, the $\alpha$-potential matrix is $R_\alpha = \frac{1}{(p + q)(1 - \alpha)} \left[\begin{matrix} q & p \ q & p \end{matrix}\right] + \frac{1}{(p + q)[1 - \alpha(1 - p - q)]} \left[\begin{matrix} p & -p \ -q & q \end{matrix}\right]$
Proof
In this case, $R_\alpha$ can be computed directly as $\sum_{n=0}^\infty \alpha^n P^n$ using geometric series.
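As a numerical check on the formula, the two-term expression can be compared with $(I - \alpha P)^{-1}$ for particular parameter values. A sketch (the parameter values are arbitrary choices):

```python
import numpy as np

p, q, alpha = 0.3, 0.8, 0.6   # arbitrary illustrative values
P = np.array([[1 - p, p],
              [q, 1 - q]])

R_inverse = np.linalg.inv(np.eye(2) - alpha * P)

A = np.array([[q, p], [q, p]])
B = np.array([[p, -p], [-q, q]])
R_formula = A / ((p + q) * (1 - alpha)) + B / ((p + q) * (1 - alpha * (1 - p - q)))

print(np.allclose(R_inverse, R_formula))  # True
```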
In spite of its simplicity, the two state chain illustrates some of the basic limiting behavior and the connection with invariant distributions that we will study in general in a later section.
Independent Variables and Random Walks
Suppose that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a sequence of independent random variables taking values in a countable set $S$, and that $(X_1, X_2, \ldots)$ are identically distributed with (discrete) probability density function $f$.
$\bs{X}$ is a Markov chain on $S$ with transition probability matrix $P$ given by $P(x, y) = f(y)$ for $(x, y) \in S \times S$. Also, $f$ is invariant for $P$.
Proof
As usual, let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$ for $n \in \N$. Since the sequence $\bs{X}$ is independent, $\P(X_{n+1} = y \mid \mathscr{F}_n) = \P(X_{n+1} = y) = f(y), \quad y \in S$ Also, $f P(y) = \sum_{x \in S} f(x) P(x, y) = \sum_{x \in S} f(x) f(y) = f(y), \quad y \in S$
As a Markov chain, the process $\bs{X}$ is not very interesting, although of course it is very interesting in other ways. Suppose now that $S = \Z$, the set of integers, and consider the partial sum process (or random walk) $\bs{Y}$ associated with $\bs{X}$: $Y_n = \sum_{i=0}^n X_i, \quad n \in \N$
$\bs{Y}$ is a Markov chain on $\Z$ with transition probability matrix $Q$ given by $Q(x, y) = f(y - x)$ for $(x, y) \in \Z \times \Z$.
Proof
Again, let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$ for $n \in \N$. Then also, $\mathscr{F}_n = \sigma\{Y_0, Y_1, \ldots, Y_n\}$ for $n \in \N$. Hence $\P(Y_{n+1} = y \mid \mathscr{F}_n) = \P(Y_n + X_{n+1} = y \mid \mathscr{F}_n) = \P(Y_n + X_{n+1} = y \mid Y_n), \quad y \in \Z$ since the sequence $\bs{X}$ is independent. In particular, $\P(Y_{n+1} = y \mid Y_n = x) = \P(x + X_{n+1} = y \mid Y_n = x) = \P(X_{n+1} = y - x) = f(y - x), \quad (x, y) \in \Z^2$
Thus the probability density function $f$ governs the distribution of a step size of the random walker on $\Z$.
Consider the special case of the random walk on $\Z$ with $f(1) = p$ and $f(-1) = 1 - p$, where $p \in (0, 1)$.
1. Give the transition matrix $Q$ explicitly.
2. Give $Q^n$ explicitly for $n \in \N$.
Answer
1. $Q(x, x - 1) = 1 - p$, $Q(x, x + 1) = p$ for $x \in \Z$.
2. For $k \in \{0, 1, \ldots, n\}$ $Q^n(x, x + 2 k - n) = \binom{n}{k} p^k (1 - p)^{n-k}$ This corresponds to $k$ steps to the right and $n - k$ steps to the left.
This special case is the simple random walk on $\Z$. When $p = \frac{1}{2}$ we have the simple, symmetric random walk. The simple random walk on $\Z$ is studied in more detail in the section on random walks on graphs. The simple symmetric random walk is studied in more detail in the chapter on Bernoulli Trials.
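The formula for $Q^n$ can be checked by simulation: the position of the walk after $n$ steps, started at 0, should follow the shifted binomial distribution above. A minimal sketch (the values of $p$, $n$, and the number of replications are arbitrary choices):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(seed=7)

p, n, reps = 0.6, 10, 100_000
# Each replication: sum of n steps, each +1 with probability p and -1 otherwise
steps = np.where(rng.random((reps, n)) < p, 1, -1)
positions = steps.sum(axis=1)   # Y_n, starting from Y_0 = 0

for k in range(n + 1):
    y = 2 * k - n               # position after k up-steps and n - k down-steps
    exact = comb(n, k) * p**k * (1 - p)**(n - k)
    empirical = np.mean(positions == y)
    print(y, round(exact, 4), round(empirical, 4))
```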
Doubly Stochastic Matrices
A matrix $P$ on $S$ is doubly stochastic if it is nonnegative and if the row and column sums are 1: $\sum_{u \in S} P(x, u) = 1, \; \sum_{u \in S} P(u, y) = 1, \quad (x, y) \in S \times S$
Suppose that $\bs{X}$ is a Markov chain on a finite state space $S$ with doubly stochastic transition matrix $P$. Then the uniform distribution on $S$ is invariant.
Proof
Constant functions are left invariant. Suppose that $f(x) = c$ for $x \in S$. Then $f P(y) = \sum_{x \in S} f(x) P(x, y) = c \sum_{x \in S} P(x, y) = c, \quad y \in S$ Hence if $S$ is finite, the uniform PDF $f$ given by $f(x) = 1 \big/ \#(S)$ for $x \in S$ is invariant.
If $P$ and $Q$ are doubly stochastic matrices on $S$, then so is $P Q$.
Proof
For $y \in S$, $\sum_{x \in S} P Q(x, y) = \sum_{x \in S} \sum_{z \in S} P(x, z) Q(z, y) = \sum_{z \in S} Q(z, y) \sum_{x \in S} P(x, z) = \sum_{z \in S} Q(z, y) = 1$ The interchange of sums is valid since the terms are nonnegative.
It follows that if $P$ is doubly stochastic then so is $P^n$ for $n \in \N$.
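Both facts are easy to verify numerically for a particular matrix. The following sketch uses an illustrative doubly stochastic matrix (each row is a cyclic shift of the first):

```python
import numpy as np

# An illustrative doubly stochastic matrix
P = np.array([[0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2]])

# Row sums and column sums are 1
print(np.allclose(P.sum(axis=1), 1), np.allclose(P.sum(axis=0), 1))  # True True

# The uniform density is invariant: f P = f
f = np.full(3, 1/3)
print(np.allclose(f @ P, f))  # True

# Products (and hence powers) of doubly stochastic matrices are doubly stochastic
P2 = P @ P
print(np.allclose(P2.sum(axis=1), 1), np.allclose(P2.sum(axis=0), 1))  # True True
```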
Suppose that $\bs{X} = (X_0, X_1, \ldots)$ is the Markov chain with state space $S = \{-1, 0, 1\}$ and with transition matrix $P = \left[\begin{matrix} \frac{1}{2} & \frac{1}{2} & 0 \ 0 & \frac{1}{2} & \frac{1}{2} \ \frac{1}{2} & 0 & \frac{1}{2} \end{matrix} \right]$
1. Draw the state graph.
2. Show that $P$ is doubly stochastic
3. Find $P^2$.
4. Show that the uniform distribution on $S$ is the only invariant distribution for $\bs{X}$.
5. Suppose that $X_0$ has the uniform distribution on $S$. For $n \in \N$, find $\E(X_n)$ and $\var(X_n)$.
6. Find the $\alpha$-potential matrix $R_\alpha$ for $\alpha \in (0, 1)$.
Proof
1. The edge set is $E = \{(-1, -1), (-1, 0), (0, 0), (0, 1), (1, -1), (1, 1)\}$
2. Just note that the row sums and the column sums are 1.
3. By matrix multiplication, $P^2 = \left[\begin{matrix} \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \ \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \end{matrix} \right]$
4. Let $f = \left[\begin{matrix} p & q & r\end{matrix}\right]$. Solving the equation $f P = f$ gives $p = q = r$. The requirement that $f$ be a PDF then forces the common value to be $\frac{1}{3}$.
5. If $X_0$ has the uniform distribution on $S$, then so does $X_n$ for every $n \in \N$, so $\E(X_n) = 0$ and $\var(X_n) = \E\left(X_0^2\right) = \frac{2}{3}$.
6. $R_\alpha = (I - \alpha P)^{-1} = \frac{1}{(1 - \alpha)(4 - 2 \alpha + \alpha^2)}\left[\begin{matrix} 4 - 4 \alpha + \alpha^2 & 2 \alpha - \alpha^2 & \alpha^2 \ \alpha^2 & 4 - 4 \alpha + \alpha^2 & 2 \alpha - \alpha^2 \ 2 \alpha - \alpha^2 & \alpha^2 & 4 - 4 \alpha + \alpha^2 \end{matrix}\right]$
Recall that a matrix $M$ indexed by a countable set $S$ is symmetric if $M(x, y) = M(y, x)$ for all $x, \, y \in S$.
If $P$ is a symmetric, stochastic matrix then $P$ is doubly stochastic.
Proof
This is trivial since $\sum_{x \in S} P(x, y) = \sum_{x \in S} P(y, x) = 1, \quad y \in S$
The converse is not true. The doubly stochastic matrix in the exercise above is not symmetric. But since a symmetric, stochastic matrix on a finite state space is doubly stochastic, the uniform distribution is invariant.
Suppose that $\bs{X} = (X_0, X_1, \ldots)$ is the Markov chain with state space $S = \{-1, 0, 1\}$ and with transition matrix $P = \left[\begin{matrix} 1 & 0 & 0 \ 0 & \frac{1}{4} & \frac{3}{4} \ 0 & \frac{3}{4} & \frac{1}{4} \end{matrix} \right]$
1. Draw the state graph.
2. Show that $P$ is symmetric
3. Find $P^2$.
4. Find all invariant probability density functions for $\bs{X}$.
5. Find the $\alpha$-potential matrix $R_\alpha$ for $\alpha \in (0, 1)$.
Proof
1. The edge set is $E = \{(-1, -1), (0, 0), (0, 1), (1, 0), (1, 1)\}$
2. Just note that $P$ is symmetric with respect to the main diagonal.
3. By matrix multiplication, $P^2 = \left[\begin{matrix} 1 & 0 & 0 \ 0 & \frac{5}{8} & \frac{3}{8} \ 0 & \frac{3}{8} & \frac{5}{8} \end{matrix} \right]$
4. Let $f = \left[\begin{matrix} p & q & r\end{matrix}\right]$. Solving the equation $f P = f$ gives simply $r = q$. The requirement that $f$ be a PDF forces $p = 1 - 2 q$. Thus the invariant PDFs are $f = \left[\begin{matrix} 1 - 2 q & q & q \end{matrix}\right]$ where $q \in \left[0, \frac{1}{2}\right]$. The special case $q = \frac{1}{3}$ gives the uniform distribution on $S$.
5. $R_\alpha = (I - \alpha P)^{-1} = \frac{1}{2 (1 - \alpha)^2 (2 + \alpha)}\left[\begin{matrix} 4 - 2 \alpha - 2 \alpha^2 & 0 & 0 \ 0 & 4 - 5 \alpha + \alpha^2 & 3 \alpha - 3 \alpha^2 \ 0 & 3 \alpha - 3 \alpha^2 & 4 - 5 \alpha + \alpha^2 \end{matrix}\right]$
Special Models
The Markov chains in the following exercises model interesting processes that are studied in separate sections.
Read the introduction to the Ehrenfest chains.
Read the introduction to the Bernoulli-Laplace chain.
Read the introduction to the reliability chains.
Read the introduction to the branching chain.
Read the introduction to the queuing chains.
Read the introduction to random walks on graphs.
Read the introduction to birth-death chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
The study of discrete-time Markov chains, particularly the limiting behavior, depends critically on the random times between visits to a given state. The nature of these random times leads to a fundamental dichotomy of the states.
Basic Theory
As usual, our starting point is a probability space $(\Omega, \mathscr{F}, \P)$, so that $\Omega$ is the sample space, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on $(\Omega, \mathscr{F})$. Suppose now that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is a (homogeneous) discrete-time Markov chain with (countable) state space $S$ and transition probability matrix $P$. So by definition, $P(x, y) = \P(X_{n+1} = y \mid X_n = x)$ for $x, \, y \in S$ and $n \in \N$. Let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$, the $\sigma$-algebra of events defined by the chain up to time $n \in \N$, so that $\mathfrak{F} = (\mathscr{F}_0, \mathscr{F}_1, \ldots)$ is the natural filtration associated with $\bs{X}$.
Hitting Times and Probabilities
Let $A$ be a nonempty subset of $S$. Recall that the hitting time to $A$ is the random variable that gives the first positive time that the chain is in $A$: $\tau_A = \min\{n \in \N_+: X_n \in A\}$ Since the chain may never enter $A$, the random variable $\tau_A$ takes values in $\N_+ \cup \{\infty\}$ (recall our convention that the minimum of the empty set is $\infty$). Recall also that $\tau_A$ is a stopping time for $\bs{X}$. That is, $\{\tau_A = n\} \in \mathscr{F}_n$ for $n \in \N_+$. Intuitively, this means that we can tell if $\tau_A = n$ by observing the chain up to time $n$. This is clearly the case, since explicitly $\{\tau_A = n\} = \{X_1 \notin A, \ldots, X_{n-1} \notin A, X_n \in A\}$ When $A = \{x\}$ for $x \in S$, we will simplify the notation to $\tau_x$. This random variable gives the first positive time that the chain is in state $x$. When the chain enters a set of states $A$ for the first time, the chain must visit some state in $A$ for the first time, so it's clear that $\tau_A = \min\{\tau_x: x \in A\}, \quad A \subseteq S$ Next we define two functions on $S$ that are related to the hitting times.
For $x \in S$, $A \subseteq S$ (nonempty), and $n \in \N_+$ define
1. $H_n(x, A) = \P(\tau_A = n \mid X_0 = x)$
2. $H(x, A) = \P(\tau_A \lt \infty \mid X_0 = x)$
So $H(x, A) = \sum_{n=1}^\infty H_n(x, A)$.
Note that $n \mapsto H_n(x, A)$ is the probability density function of $\tau_A$, given $X_0 = x$, except that the density function may be defective in the sense that the sum $H(x, A)$ may be less than 1, in which case of course, $1 - H(x, A) = \P(\tau_A = \infty \mid X_0 = x)$. Again, when $A = \{y\}$, we will simplify the notation to $H_n(x, y)$ and $H(x, y)$, respectively. In particular, $H(x, x)$ is the probability, starting at $x$, that the chain eventually returns to $x$. If $x \ne y$, $H(x, y)$ is the probability, starting at $x$, that the chain eventually reaches $y$. Just knowing when $H(x, y)$ is 0, positive, and 1 will turn out to be of considerable importance in the overall structure and limiting behavior of the chain. As a function on $S^2$, we will refer to $H$ as the hitting matrix of $\bs{X}$. Note however, that unlike the transition matrix $P$, we do not have the structure of a kernel. That is, $A \mapsto H(x, A)$ is not a measure, so in particular, it is generally not true that $H(x, A) = \sum_{y \in A} H(x, y)$. The same remarks apply to $H_n$ for $n \in \N_+$. However, there are interesting relationships between the transition matrix and the hitting matrix.
$H(x, y) \gt 0$ if and only if $P^n(x, y) \gt 0$ for some $n \in \N_+$.
Proof
Note that $\{X_n = y\} \subseteq \{\tau_y \lt \infty\}$ for all $n \in \N_+$, and $\{\tau_y \lt \infty\} = \{X_k = y \text{ for some } k \in \N_+\}$. From the increasing property of probability and Boole's inequality it follows that for each $n \in \N_+$, $P^n(x, y) \le H(x, y) \le \sum_{k=1}^\infty P^k(x, y)$
The following result gives a basic relationship between the sequence of hitting probabilities and the sequence of transition probabilities.
Suppose that $(x, y) \in S^2$. Then $P^n(x, y) = \sum_{k=1}^n H_k(x, y) P^{n-k}(y, y), \quad n \in \N_+$
Proof
This result follows from conditioning on $\tau_y$. Starting in state $x$, the chain is in state $y$ at time $n$ if and only if the chain hits $y$ for the first time at some previous time $k$, and then returns to $y$ in the remaining $n - k$ steps. More formally, $P^n(x, y) = \P(X_n = y \mid X_0 = x) = \sum_{k=1}^n \P(X_n = y \mid \tau_y = k, X_0 = x) \P(\tau_y = k \mid X_0 = x)$ (the terms with $k \gt n$ or $\tau_y = \infty$ contribute nothing, since then $X_n \ne y$). But the event $\tau_y = k$ implies $X_k = y$ and is in $\mathscr{F}_k$. Hence by the Markov and time homogeneous properties, $\P(X_n = y \mid \tau_y = k, X_0 = x) = \P(X_n = y \mid X_k = y, \tau_y = k, X_0 = x) = \P(X_n = y \mid X_k = y) = P^{n-k}(y, y)$ Of course, by definition, $\P(\tau_y = k \mid X_0 = x) = H_k(x, y)$, so the result follows by substitution.
Suppose that $x \in S$ and $A \subseteq S$. Then
1. $H_{n+1}(x, A) = \sum_{y \notin A} P(x, y) H_n(y, A)$ for $n \in \N_+$
2. $H(x, A) = P(x, A) + \sum_{y \notin A} P(x, y) H(y, A)$
Proof
These results follow form conditioning on $X_1$.
1. Starting in state $x$, the chain first enters $A$ at time $n + 1$ if and only if the chain goes to some state $y \notin A$ at time 1, and then from state $y$, first enters $A$ in $n$ steps. $H_{n+1}(x, A) = \P(\tau_A = n + 1 \mid X_0 = x) = \sum_{y \in S} \P(\tau_A = n + 1 \mid X_0 = x, X_1 = y) \P(X_1 = y \mid X_0 = x)$ But $\P(\tau_A = n + 1 \mid X_0 = x, X_1 = y) = 0$ for $y \in A$. By the Markov and time homogeneous properties, $\P(\tau_A = n + 1 \mid X_0 = x, X_1 = y) = \P(\tau_A = n \mid X_0 = y) = H_n(y, A)$ for $y \notin A$. Of course $\P(X_1 = y \mid X_0 = x) = P(x, y)$. So the result follows by substitution.
2. Starting in state $x$, the chain eventually enters $A$ if and only if it either enters $A$ at the first step, or moves to some other state $y \notin A$ at the first step, and then eventually enters $A$ from $y$. $H(x, A) = \P(\tau_A \lt \infty \mid X_0 = x) = \sum_{y \in S} \P(\tau_A \lt \infty \mid X_1 =y, X_0 = x) \P(X_1 = y \mid X_0 = x)$ But $\P(\tau_A \lt \infty \mid X_1 = y, X_0 = x) = 1$ for $y \in A$. By the Markov and homogeneous properties, $\P(\tau_A \lt \infty \mid X_1 = y, X_0 = x) = \P(\tau_A \lt \infty \mid X_0 = y) = H(y, A)$ for $y \notin A$. Substituting we have $H(x, A) = \sum_{y \in A} P(x, y) + \sum_{y \notin A} P(x, y) H(y, A) = P(x, A) + \sum_{y \notin A} P(x, y) H(y, A)$
The following definition is fundamental for the study of Markov chains.
Let $x \in S$.
1. State $x$ is recurrent if $H(x, x) = 1$.
2. State $x$ is transient if $H(x, x) \lt 1$.
Thus, starting in a recurrent state, the chain will, with probability 1, eventually return to the state. As we will see, the chain will return to the state infinitely often with probability 1, and the times of the visits will form the arrival times of a renewal process. This will turn out to be the critical observation in the study of the limiting behavior of the chain. By contrast, if the chain starts in a transient state, then there is a positive probability that the chain will never return to the state.
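For a finite chain, the identity $H(x, A) = P(x, A) + \sum_{y \notin A} P(x, y) H(y, A)$ above gives a linear system for the hitting probabilities: with $B = A^c$, the vector $h = \left(H(y, A): y \in B\right)$ satisfies $(I - P_B) h = b$ where $b(y) = P(y, A)$, and then $H(x, A) = P(x, A) + \sum_{y \in B} P(x, y) h(y)$ for any $x$. In particular, taking $A = \{x\}$ gives the return probability $H(x, x)$ in the definition just given. A NumPy sketch with an illustrative matrix (the linear solve assumes that $I - P_B$ is invertible, which holds here):

```python
import numpy as np

# Illustrative transition matrix on states {0, 1, 2}
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.0, 0.75],
              [1.0, 0.0, 0.0]])

def hitting_prob(P, x, A):
    """H(x, A): probability, starting at x, of entering A at some positive time.
    Solves (I - P_B) h = P(., A) on B = complement of A; assumes I - P_B invertible."""
    A = np.asarray(A)
    B = np.setdiff1d(np.arange(P.shape[0]), A)
    b = P[np.ix_(B, A)].sum(axis=1)                # b(y) = P(y, A) for y in B
    h = np.linalg.solve(np.eye(len(B)) - P[np.ix_(B, B)], b)
    return P[x, A].sum() + P[x, B] @ h

# Return probabilities H(x, x): all equal 1 here, so every state is recurrent
print([round(hitting_prob(P, x, [x]), 6) for x in range(3)])
```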
Counting Variables and Potentials
Again, suppose that $A$ is a nonempty set of states. A natural complement to the hitting time to $A$ is the counting variable that gives the number of visits to $A$ (at positive times). Thus, let $N_A = \sum_{n=1}^\infty \bs{1}(X_n \in A)$ Note that $N_A$ takes value in $\N \cup \{\infty\}$. We will mostly be interested in the special case $A = \{x\}$ for $x \in S$, and in this case, we will simplify the notation to $N_x$.
Let $G(x, A) = \E(N_A \mid X_0 = x)$ for $x \in S$ and $A \subseteq S$. Then $G$ is a kernel on $S$ and $G(x, A) = \sum_{n=1}^\infty P^n(x, A)$
Proof
Note that $G(x, A) = \E \left(\sum_{n=1}^\infty \bs{1}(X_n \in A) \biggm| X_0 = x\right) = \sum_{n=1}^\infty \P(X_n \in A \mid X_0 = x) = \sum_{n=1}^\infty P^n(x, A)$ The interchange of sum and expected value is justified since the terms are nonnegative. For fixed $x \in S$, $A \mapsto G(x, A)$ is a positive measure on $S$ since $A \mapsto P^n(x, A)$ is a probability measure on $S$ for each $n \in \N_+$. Note also that $A \mapsto N_A$ is a random, counting measure on $S$ and hence $A \mapsto G(x, A)$ is a (deterministic) positive measure on $S$.
Thus $G(x, A)$ is the expected number of visits to $A$ at positive times. As usual, when $A = \{y\}$ for $y \in S$ we simplify the notation to $G(x, y)$, and then more generally we have $G(x, A) = \sum_{y \in A} G(x, y)$ for $A \subseteq S$. So, as a matrix on $S$, $G = \sum_{n=1}^\infty P^n$. The matrix $G$ is closely related to the potential matrix $R$ of $\bs{X}$, given by $R = \sum_{n=0}^\infty P^n$. So $R = I + G$, and $R(x, y)$ gives the expected number of visits to $y \in S$ at all times (not just positive times), starting at $x \in S$. The matrix $G$ is more useful for our purposes in this section.
The distribution of $N_y$ has a simple representation in terms of the hitting probabilities. Note that because of the Markov property and time homogeneous property, whenever the chain reaches state $y$, the future behavior is independent of the past and is stochastically the same as the chain starting in state $y$ at time 0. This is the critical observation in the proof of the following theorem.
If $x, \, y \in S$ then
1. $\P(N_y = 0 \mid X_0 = x) = 1 - H(x, y)$
2. $\P(N_y = n \mid X_0 = x) = H(x, y) [H(y, y)]^{n-1}[1 - H(y, y)]$ for $n \in \N_+$
Proof
The essence of the proof is the strong Markov property: starting at $x$, the chain reaches $y$ at some positive time with probability $H(x, y)$, and each time the chain is at $y$, it returns to $y$ with probability $H(y, y)$, independently of the past. So the event $N_y = n$ (for $n \in \N_+$) means that the chain reaches $y$, returns to $y$ another $n - 1$ times, and then never returns again. Note that in the special case that $x = y$ we have $\P(N_x = n \mid X_0 = x) = [H(x, x)]^n [1 - H(x, x)], \quad n \in \N$ In all cases, the counting variable $N_y$ has essentially a geometric distribution, but the distribution may well be defective, with some of the probability mass at $\infty$. The behavior is quite different depending on whether $y$ is transient or recurrent.
If $x, \, y \in S$ and $y$ is transient then
1. $\P(N_y \lt \infty \mid X_0 = x) = 1$
2. $G(x, y) = H(x, y) \big/ [1 - H(y, y)]$
3. $H(x, y) = G(x, y) \big/ [1 + G(y, y)]$
Proof
1. If $y$ is transient then $H(y, y) \lt 1$. Hence using the result above and geometric series, $\P(N_y \in \N_+ \mid X_0 = x) = \sum_{n=1}^\infty \P(N_y = n \mid X_0 = x) = H(x, y) [1 - H(y, y)] \sum_{n=1}^\infty [H(y, y)]^{n-1} = H(x, y)$ Hence \begin{align*} \P(N_y \lt \infty \mid X_0 = x) & = \P(N_y \in \N \mid X_0 = x) = \P(N_y = 0 \mid X_0 = x) + \P(N_y \in \N_+ \mid X_0 = x) \ & = [1 - H(x, y)] + H(x, y) = 1 \end{align*}
2. Using the derivative of the geometric series, \begin{align*} G(x, y) & = \E(N_y \mid X_0 = x) = \sum_{n=1}^\infty n \P(N_y = n \mid X_0 = x) \ & = H(x, y) [1 - H(y, y)]\sum_{n=1}^\infty n [H(y, y)]^{n-1} = \frac{H(x, y)}{1 - H(y, y)} \end{align*}
3. From (b), $G(y, y) = H(y, y) \big/[1 - H(y, y)]$ so solving for $H(y, y)$ gives $H(y, y) = G(y, y) \big/ [1 + G(y, y)]$. Substituting this back into (b) gives $G(x, y) = H(x, y)[1 + G(y, y)]$.
If $x, \, y \in S$ and $y$ is recurrent then
1. $\P(N_y = 0 \mid X_0 = x) = 1 - H(x, y)$ and $\P(N_y = \infty \mid X_0 = x) = H(x, y)$
2. $G(x, y) = 0$ if $H(x, y) = 0$ and $G(x, y) = \infty$ if $H(x, y) \gt 0$
3. $\P(N_y = \infty \mid X_0 = y) = 1$ and $G(y, y) = \infty$
Proof
1. If $y$ is recurrent, $H(y, y) = 1$ and so from the result above, $\P(N_y = n \mid X_0 = x) = 0$ for all $n \in \N_+$. Hence $\P(N_y = \infty \mid X_0 = x) = 1 - \P(N_y = 0 \mid X_0 = x) = 1 - [1 - H(x, y)] = H(x, y)$.
2. If $H(x, y) = 0$ then $\P(N_y = 0 \mid X_0 = x) = 1$, so $\E(N_y \mid X_0 = x) = 0$. If $H(x, y) \gt 0$ then $\P(N_y = \infty \mid X_0 = x) \gt 0$ so $\E(N_y \mid X_0 = x) = \infty$.
3. From the result above, $\P(N_y = n \mid X_0 = y) = 0$ for all $n \in \N$, so $\P(N_y = \infty \mid X_0 = y) = 1$, and hence $G(y, y) = \E(N_y \mid X_0 = y) = \infty$.
Note that there is an invertible relationship between the matrix $H$ and the matrix $G$; if we know one we can compute the other. In particular, we can characterize the transience or recurrence of a state in terms of $G$. Here is our summary so far:
Let $x \in S$.
1. State $x$ is transient if and only if $H(x, x) \lt 1$ if and only if $G(x, x) \lt \infty$.
2. State $x$ is recurrent if and only if $H(x, x) = 1$ if and only if $G(x, x) = \infty$.
Of course, the classification also holds for the potential matrix $R = I + G$. That is, state $x \in S$ is transient if and only if $R(x , x) \lt \infty$ and state $x$ is recurrent if and only if $R(x, x) = \infty$.
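For a finite chain, one can get a feel for this dichotomy by computing the partial sums $\sum_{n=1}^N P^n(x, x)$: they converge for transient states and grow without bound for recurrent states (a finite partial sum can only suggest divergence, of course, not prove it). A sketch with an illustrative chain having one transient state and two absorbing, hence recurrent, states:

```python
import numpy as np

# State 0 is transient; states 1 and 2 are absorbing, hence recurrent
P = np.array([[0.5, 0.25, 0.25],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

N = 200
G_partial = np.zeros(3)
Pn = np.eye(3)
for _ in range(N):
    Pn = Pn @ P
    G_partial += np.diag(Pn)   # accumulate P^n(x, x) for each state x

print(G_partial)   # approximately [1, 200, 200]: finite for state 0, growing for states 1 and 2
```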
Relations
The hitting probabilities suggest an important relation on the state space $S$.
For $(x, y) \in S^2$, we say that $x$ leads to $y$ and we write $x \to y$ if either $x = y$ or $H(x, y) \gt 0$.
It follows immediately from the result above that $x \to y$ if and only if $P^n(x, y) \gt 0$ for some $n \in \N$. In terms of the state graph of the chain, $x \to y$ if and only if $x = y$ or there is a directed path from $x$ to $y$. Note that the leads to relation is reflexive by definition: $x \to x$ for every $x \in S$. The relation has another important property as well.
The leads to relation is transitive: For $x, \, y, \, z \in S$, if $x \to y$ and $y \to z$ then $x \to z$.
Proof
If $x \to y$ and $y \to z$, then there exist $j, \, k \in \N$ such that $P^j(x, y) \gt 0$ and $P^k(y, z) \gt 0$. But then $P^{j+k}(x, z) \ge P^j(x, y) P^k(y, z) \gt 0$ so $x \to z$.
The leads to relation naturally suggests a couple of other definitions that are important.
Suppose that $A \subseteq S$ is nonempty.
1. $A$ is closed if $x \in A$ and $x \to y$ implies $y \in A$.
2. $A$ is irreducible if $A$ is closed and has no proper closed subsets.
Suppose that $A \subseteq S$ is closed. Then
1. $P_A$, the restriction of $P$ to $A \times A$, is a transition probability matrix on $A$.
2. $\bs{X}$ restricted to $A$ is a Markov chain with transition probability matrix $P_A$.
3. $(P^n)_A = (P_A)^n$ for $n \in \N$.
Proof
1. If $x \in A$ and $y \notin A$, then $x$ does not lead to $y$ so in particular $P(x, y) = 0$. It follows that $\sum_{y \in A} P(x, y) = 1$ for $x \in A$ so $P_A$ is a transition probability matrix.
2. This follows from (a). If the chain starts in $A$, then the chain remains in $A$ for all time, and of course, the Markov property still holds.
3. Again, this follows from (a).
Of course, the entire state space $S$ is closed by definition. If it is also irreducible, we say the Markov chain $\bs{X}$ itself is irreducible. Recall that for a nonempty subset $A$ of $S$ and for $n \in \N$, the notation $P_A^n$ refers to $(P_A)^n$ and not $(P^n)_A$. In general, these are not the same, and in fact for $x, \, y \in A$, $P_A^n(x, y) = \P(X_1 \in A, \ldots, X_{n-1} \in A, X_n = y \mid X_0 = x)$ the probability of going from $x$ to $y$ in $n$ steps, remaining in $A$ all the while. But if $A$ is closed, then as noted in part (c), this is just $P^n(x, y)$.
Suppose that $A$ is a nonempty subset of $S$. Then $\cl(A) = \{y \in S: x \to y \text{ for some } x \in A\}$ is the smallest closed set containing $A$, and is called the closure of $A$. That is,
1. $\cl(A)$ is closed.
2. $A \subseteq \cl(A)$.
3. If $B$ is closed and $A \subseteq B$ then $\cl(A) \subseteq B$
Proof
1. Suppose that $x \in \cl(A)$ and that $x \to y$. Then there exists $a \in A$ such that $a \to x$. By the transitive property, $a \to y$ and hence $y \in \cl(A)$.
2. If $x \in A$ then $x \to x$ so $x \in \cl(A)$.
3. Suppose that $B$ is closed and that $A \subseteq B$. If $x \in \cl(A)$, then there exists $a \in A$ such that $a \to x$. Hence $a \in B$ and $a \to x$. Since $B$ is closed, it follows that $x \in B$. Hence $\cl(A) \subseteq B$.
Recall that for a fixed positive integer $k$, $P^k$ is also a transition probability matrix, and in fact governs the $k$-step Markov chain $(X_0, X_k, X_{2 k}, \ldots)$. It follows that we could consider the leads to relation for this chain, and all of the results above would still hold (relative, of course, to the $k$-step chain). Occasionally we will need to consider this relation, which we will denote by $\underset{k} \to$, particularly in our study of periodicity.
Suppose that $j, \, k \in \N_+$. If $x \, \underset{k}\to \, y$ and $j \mid k$ then $x \, \underset{j}\to \, y$.
Proof
If $x \, \underset{k}\to \, y$ then there exists $n \in \N$ such that $P^{n k}(x, y) \gt 0$. If $j \mid k$, there exists $m \in \N_+$ such that $k = m j$. Hence $P^{n m j}(x, y) \gt 0$ so $x \, \underset{j}\to \, y$.
By combining the leads to relation $\to$ with its inverse, the comes from relation $\leftarrow$, we can obtain another very useful relation.
For $(x, y) \in S^2$, we say that $x$ to and from $y$ and we write $x \leftrightarrow y$ if $x \to y$ and $y \to x$.
By definition, this relation is symmetric: if $x \leftrightarrow y$ then $y \leftrightarrow x$. From our work above, it is also reflexive and transitive. Thus, the to and from relation is an equivalence relation. Like all equivalence relations, it partitions the space into mutually disjoint equivalence classes. We will denote the equivalence class of a state $x \in S$ by $[x] = \{y \in S: x \leftrightarrow y\}$ Thus, for any two states $x, \, y \in S$, either $[x] = [y]$ or $[x] \cap [y] = \emptyset$, and moreover, $\bigcup_{x \in S} [x] = S$.
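For a finite chain, the equivalence classes can be computed directly from $P$: build the reachability relation ($x \to y$ if and only if $x = y$ or $P^n(x, y) \gt 0$ for some $n$) via a transitive closure, and then group states $x$ and $y$ with both $x \to y$ and $y \to x$. A sketch with an illustrative matrix (a strongly-connected-components algorithm would do the same job more efficiently):

```python
import numpy as np

# Illustrative matrix: states 0 and 1 communicate; state 2 is absorbing
P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.1, 0.5],
              [0.0, 0.0, 1.0]])

n = P.shape[0]
reach = (P > 0) | np.eye(n, dtype=bool)   # x -> y in at most one step, or x = y

# Warshall's algorithm for the transitive closure of the "leads to" relation
for k in range(n):
    reach |= reach[:, [k]] & reach[[k], :]

comm = reach & reach.T                    # x <-> y: x -> y and y -> x

# Group states into equivalence classes
classes = []
seen = set()
for x in range(n):
    if x not in seen:
        cls = [y for y in range(n) if comm[x, y]]
        classes.append(cls)
        seen.update(cls)

print(classes)   # [[0, 1], [2]]
```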
Two negative results:
1. A closed set is not necessarily an equivalence class.
2. An equivalence class is not necessarily closed.
Example
Consider the trivial Markov chain with state space $S = \{0, 1\}$ and transition matrix $P = \left[\begin{matrix} 0 & 1 \ 0 & 1\end{matrix}\right]$. So state 0 leads deterministically to 1 in one step, while state 1 is absorbing. For the leads to relation, the only relationships are $0 \to 0$, $0 \to 1$, and $1 \to 1$. Thus, the equivalence classes are $\{0\}$ and $\{1\}$.
1. The entire state space $S$ is closed, but is not an equivalence class.
2. $\{0\}$ is an equivalence class but is not closed.
On the other hand, we have the following result:
If $A \subseteq S$ is irreducible, then $A$ is an equivalence class.
Proof
Fix $x \in A$ (recall that closed sets are nonempty by definition). Since $A$ is closed it follows that $[x] \subseteq A$. Since $A$ is irreducible, $\cl(y) = A$ for each $y \in A$ and in particular, $\cl(x) = A$. It follows that $x \leftrightarrow y$ for each $y \in A$. Hence $A \subseteq [x]$.
The to and from equivalence relation is very important because many interesting state properties turn out in fact to be class properties, shared by all states in a given equivalence class. In particular, the recurrence and transience properties are class properties.
Transient and Recurrent Classes
Our next result is of fundamental importance: a recurrent state can only lead to other recurrent states.
If $x$ is a recurrent state and $x \to y$ then $y$ is recurrent and $H(x, y) = H(y, x) = 1$.
Proof
The result trivially holds if $x = y$, so we assume $x \ne y$. Let $\alpha(x, y)$ denote the probability, starting at $x$, that the chain reaches $y$ without an intermediate return to $x$. It must be the case that $\alpha(x, y) \gt 0$ since $x \to y$. In terms of the graph of $\bs{X}$, if there is a path from $x$ to $y$, then there is a path from $x$ to $y$ without cycles. Starting at $x$, the chain could fail to return to $x$ by first reaching $y$ without an intermediate return to $x$, and then from $y$ never reaching $x$. From the Markov and time homogeneous properties, it follows that $1 - H(x, x) \ge \alpha(x, y)[1 - H(y, x)] \ge 0$. But $H(x, x) = 1$ so it follows that $H(y, x) = 1$. So we now know that there exist positive integers $j, \, k$ such that $P^j(x, y) \gt 0$ and $P^k(y, x) \gt 0$. Hence for every $n \in \N$, $P^{j + k + n}(y, y) \ge P^k(y, x) P^n(x, x) P^j(x, y)$ Recall that $G(x, x) = \infty$ since $x$ is recurrent. Thus, summing over $n$ in the displayed equation gives $G(y, y) = \infty$. Hence $y$ is recurrent. Finally, reversing the roles of $x$ and $y$, if follows that $H(x, y) = 1$
From the last theorem, note that if $x$ is recurrent, then all states in $[x]$ are also recurrent. Thus, for each equivalence class, either all states are transient or all states are recurrent. We can therefore refer to transient or recurrent classes as well as states.
If $A$ is a recurrent equivalence class then $A$ is irreducible.
Proof
Suppose that $x \in A$ and that $x \to y$. Since $x$ is recurrent, $y$ is also recurrent and $y \to x$. Hence $x \leftrightarrow y$ and so $y \in A$ since $A$ is an equivalence class. Suppose that $B \subseteq A$ is closed. Since $B$ is nonempty by definition, there exists $x \in B$ and so $x \in A$ also. For every $y \in A$, $x \leftrightarrow y$ so $y \in B$ since $B$ is closed. Thus $A = B$ so $A$ is irreducible.
If $A$ is finite and closed then $A$ has a recurrent state.
Proof
Fix $x \in A$. Since $A$ is closed, it follows that $\P(N_A = \infty \mid X_0 = x) = 1$. Since $A$ is finite, it follows that $\P(N_y = \infty \mid X_0 = x) \gt 0$ for some $y \in A$. But then $y$ is recurrent.
If $A$ is finite and irreducible then $A$ is a recurrent equivalence class.
Proof
Note that $A$ is an equivalence class by a result above, and $A$ has a recurrent state by the previous result. It follows that all states in $A$ are recurrent.
Thus, the Markov chain $\bs{X}$ will have a collection (possibly empty) of recurrent equivalence classes $\{A_j: j \in J\}$ where $J$ is a countable index set. Each $A_j$ is irreducible. Let $B$ denote the set of all transient states. The set $B$ may be empty or may consist of a number of equivalence classes, but the class structure of $B$ is usually not important to us. If the chain starts in $A_j$ for some $j \in J$ then the chain remains in $A_j$ forever, visiting each state infinitely often with probability 1. If the chain starts in $B$, then the chain may stay in $B$ forever (but only if $B$ is infinite) or may enter one of the recurrent classes $A_j$, never to escape. However, in either case, the chain will visit a given transient state only finitely many times with probability 1. This basic structure is known as the canonical decomposition of the chain, and is shown in graphical form below. The edges from $B$ are in gray to indicate that these transitions may not exist.
Staying Probabilities and a Classification Test
Suppose that $A$ is a proper subset of $S$. Then
1. $P_A^n(x, A) = \P(X_1 \in A, X_2 \in A, \ldots, X_n \in A \mid X_0 = x)$ for $x \in A$
2. $\lim_{n \to \infty} P_A^n(x, A) = \P(X_1 \in A, X_2 \in A, \ldots \mid X_0 = x)$ for $x \in A$
Proof
Recall that $P_A^n$ means $(P_A)^n$ where $P_A$ is the restriction of $P$ to $A \times A$.
1. This is a consequence of the Markov property, and is the probability that the chain stays in $A$ at least through time $n$, starting in $x \in A$.
2. This follows from (a) and the continuity theorem for decreasing events. This is the probability that the chain stays in $A$ forever, starting in $x \in A$.
Let $g_A$ denote the function defined by part (b), so that $g_A(x) = \P(X_1 \in A, X_2 \in A, \ldots \mid X_0 = x), \quad x \in A$ The staying probability function $g_A$ is an interesting complement to the hitting matrix studied above. The following result characterizes this function and provides a method that can be used to compute it, at least in some cases.
For $A \subset S$, $g_A$ is the largest function on $A$ that takes values in $[0, 1]$ and satisfies $g = P_A g$. Moreover, either $g_A = \bs{0}_A$ or $\sup\{g_A(x): x \in A\} = 1$.
Proof
Note that $P_A^{n+1} \bs{1}_A = P_A P_A^n \bs{1}_A$ for $n \in \N$. Taking the limit as $n \to \infty$ and using the bounded convergence theorem gives $g_A = P_A g_A$. Suppose now that $g$ is a function on $A$ that takes values in $[0, 1]$ and satisfies $g = P_A g$. Then $g \le \bs{1}_A$ and hence $g \le P_A^n \bs{1}_A$ for all $n \in \N$. Letting $n \to \infty$ it follows that $g \le g_A$. Next, let $c = \sup\{g_A(x): x \in A\}$. Then $g_A \le c \, \bs{1}_A$ and hence $g_A \le c \, P_A^n \bs{1}_A$ for each $n \in \N$. Letting $n \to \infty$ gives $g_A \le c \, g_A$. It follows that either $g_A = \bs{0}_A$ or $c = 1$.
Note that the characterization in the last result includes a zero-one law of sorts: either the probability that the chain stays in $A$ forever is 0 for every initial state $x \in A$, or we can find states in $A$ for which the probability is arbitrarily close to 1. The next two results explore the relationship between the staying function and recurrence.
Suppose that $\bs{X}$ is an irreducible, recurrent chain with state space $S$. Then $g_A = \bs{0}_A$ for every proper subset $A$ of $S$.
Proof
Fix $y \notin A$ and note that $0 \le g_A(x) \le 1 - H(x, y)$ for every $x \in A$. But $H(x, y) = 1$ since the chain is irreducible and recurrent. Hence $g_A(x) = 0$ for $x \in A$.
Suppose that $\bs{X}$ is an irreducible Markov chain with state space $S$ and transition probability matrix $P$. If there exists a state $x$ such that $g_A = \bs{0}_A$ where $A = S \setminus \{x\}$, then $\bs{X}$ is recurrent.
Proof
With $A$ as defined above, note that $1 - H(x, x) = \sum_{y \in A} P(x, y) g_A(y)$. Hence $H(x, x) = 1$, so $x$ is recurrent. Since $\bs{X}$ is irreducible, it follows that $\bs{X}$ is recurrent.
More generally, suppose that $\bs{X}$ is a Markov chain with state space $S$ and transition probability matrix $P$. The last two theorems can be used to test whether an irreducible equivalence class $C$ is recurrent or transient. We fix a state $x \in C$ and set $A = C \setminus \{x\}$. We then try to solve the equation $g = P_A g$ on $A$. If the only solution taking values in $[0, 1]$ is $\bs{0}_A$, then the class $C$ is recurrent by the previous result. If there are nontrivial solutions, then $C$ is transient. Often we try to choose $x$ to make the computations easy.
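The test is most useful for chains with infinite state spaces, where recurrence is not automatic. As a rough numerical illustration, the sketch below estimates the staying probability $g_A(1)$ by simulation for the random walk on $\N$ with $P(x, x + 1) = p$ and $P(x, x - 1) = 1 - p$ for $x \ge 1$ (an illustrative chain, not one of the chains defined in this section), taking $x = 0$ and $A = \{1, 2, \ldots\}$. For $p \gt \frac{1}{2}$, the gambler's ruin computation gives $g_A(1) = 1 - (1 - p)/p \gt 0$, so the equation $g = P_A g$ has a nontrivial solution and the chain is transient. The finite horizon and finite number of trials make this only an approximation, not a proof.

```python
import random

def estimate_staying_probability(p, start=1, horizon=2000, trials=5000, seed=42):
    """Estimate P(never hit 0 | X_0 = start) for the walk on {0, 1, 2, ...} with
    P(x, x+1) = p and P(x, x-1) = 1 - p for x >= 1.  A finite horizon only
    approximates "never", so this is a rough numerical check."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        x = start
        hit_zero = False
        for _ in range(horizon):
            x += 1 if rng.random() < p else -1
            if x == 0:
                hit_zero = True
                break
        if not hit_zero:
            survived += 1
    return survived / trials

p = 0.6
print("simulated g_A(1):", estimate_staying_probability(p))   # about 0.333
print("gambler's ruin value 1 - (1 - p)/p:", 1 - (1 - p) / p)  # 0.333...
```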
Computing Hitting Probabilities and Potentials
We now know quite a bit about Markov chains, and we can often classify the states and compute quantities of interest. However, we do not yet know how to compute:
• $G(x, y)$ when $x$ and $y$ are transient
• $H(x, y)$ when $x$ is transient and $y$ is transient or recurrent.
These problems are related, because of the general inverse relationship between the matrix $H$ and the matrix $G$ noted in our discussion above. As usual, suppose that $\bs{X}$ is a Markov chain with state space $S$, and let $B$ denote the set of transient states. The next result shows how to compute $G_B$, the matrix $G$ restricted to the transient states. Recall that the values of this matrix are finite.
$G_B$ satisfies the equation $G_B = P_B + P_B G_B$ and is the smallest nonnegative solution. If $B$ is finite then $G_B = (I_B - P_B)^{-1} P_B$.
Proof
First note that $(P^n)_B = (P_B)^n$ since a path between two transient states can only pass through other transient states. Thus $G_B = \sum_{n=1}^\infty P_B^n$. From the monotone convergence theorem it follows that $P_B G_B = G_B - P_B$. Suppose now that $U$ is a nonnegative matrix on $B$ satisfying $U = P_B + P_B U$. Then $U = \sum_{k=1}^n P_B^k + P_B^{n+1} U$ for each $n \in \N_+$. Hence $U \ge \sum_{k=1}^n P_B^k$ for every $n \in \N_+$ and therefore $U \ge G_B$. From the identity $G_B = P_B + P_B G_B$ it also follows that $(I_B - P_B)(I_B + G_B) = I_B$. If $B$ is finite, the matrix $I_B - P_B$ is therefore invertible, and solving gives $G_B = (I_B - P_B)^{-1} P_B$.
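When $B$ is finite, the formula in the last result is a one-line linear algebra computation. Here is a minimal sketch with NumPy; the helper name is ours, and `P_B` is the transition matrix restricted to the transient states (so its rows need not sum to 1).

```python
import numpy as np

def potential_on_transient(P_B):
    """Return G_B = (I_B - P_B)^{-1} P_B for a finite set B of transient states."""
    I = np.eye(P_B.shape[0])
    return np.linalg.solve(I - P_B, P_B)
```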
Now that we can compute $G_B$, we can also compute $H_B$ using the result above. All that remains is for us to compute the hitting probability $H(x, y)$ when $x$ is transient and $y$ is recurrent. The first thing to notice is that the hitting probability is a class property.
Suppose that $x$ is transient and that $A$ is a recurrent class. Then $H(x, y) = H(x, A)$ for $y \in A$.
That is, starting in the transient state $x \in S$, the hitting probability to $y$ is constant for $y \in A$, and is just the hitting probability to the class $A$. As before, let $B$ denote the set of transient states and suppose that $A$ is a recurrent equivalence class. Let $h_A$ denote the function on $B$ that gives the hitting probability to class $A$, and let $p_A$ denote the function on $B$ that gives the probability of entering $A$ on the first step: $h_A(x) = H(x, A), \; p_A(x) = P(x, A), \quad x \in B$
$h_A = p_A + G_B p_A$.
Proof
First note that $\P(\tau_A = n \mid X_0 = x) = (P_B^{n-1} p_A)(x)$ for $n \in \N_+$. The result then follows by summing over $n$.
This result is adequate if we have already computed $G_B$ (using the result in above, for example). However, we might just want to compute $h_A$ directly.
$h_A$ satisfies the equation $h_A = p_A + P_B h_A$ and is the smallest nonnegative solution. If $B$ is finite, $h_A = (I_B - P_B)^{-1} p_A$.
Proof
First, conditioning on $X_1$ gives $h_A = p_A + P_B h_A$. Next suppose that $h$ is nonnegative and satisfies $h = p_A + P_B h$. Then $h = p_A + \sum_{k=1}^{n-1} P_B^k p_A + P_B^n h$ for each $n \in \N_+$. Hence $h \ge p_A + \sum_{k=1}^{n-1} P_B^k p_A$. Letting $n \to \infty$ gives $h \ge h_A$. The representation when $B$ is finite follows from the result above.
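Again, for finite $B$ this is just a linear solve. The sketch below is an assumed helper mirroring the formula above; it takes the restricted matrix `P_B` and the vector `p_A` of one-step entry probabilities into the recurrent class $A$.

```python
import numpy as np

def hitting_probability_to_class(P_B, p_A):
    """Return h_A = (I_B - P_B)^{-1} p_A, the probability of ever entering the
    recurrent class A, starting from each transient state in B."""
    I = np.eye(P_B.shape[0])
    return np.linalg.solve(I - P_B, p_A)
```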
Examples and Applications
Finite Chains
Consider a Markov chain with state space $S = \{a, b, c, d\}$ and transition matrix $P$ given below:
$P = \left[ \begin{matrix} \frac{1}{3} & \frac{2}{3} & 0 & 0 \ 1 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 \ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \end{matrix} \right]$
1. Draw the state graph.
2. Find the equivalence classes and classify each as transient or recurrent.
3. Compute the matrix $G$.
4. Compute the matrix $H$.
Answer
1. $\{a, b\}$ recurrent; $\{c\}$ recurrent; $\{d\}$ transient.
2. $G = \left[ \begin{matrix} \infty & \infty & 0 & 0 \ \infty & \infty & 0 & 0 \ 0 & 0 & \infty & 0 \ \infty & \infty & \infty & \frac{1}{3} \end{matrix} \right]$
3. $H = \left[ \begin{matrix} 1 & 1 & 0 & 0 \ 1 & 1 & 0 & 0 \ 0 & 0 & 1 & 0 \ \frac{2}{3} & \frac{2}{3} & \frac{1}{3} & \frac{1}{4} \end{matrix} \right]$
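As a quick numerical check of the answers above (not part of the exercise itself), the computation below recovers $G(d, d)$, $H(d, \{a, b\})$, $H(d, c)$ and $H(d, d)$ with NumPy, using the formulas from the previous subsection together with the inverse relationship $G(x, x) = H(x, x) / [1 - H(x, x)]$ noted earlier.

```python
import numpy as np

P = np.array([
    [1/3, 2/3, 0,   0  ],
    [1,   0,   0,   0  ],
    [0,   0,   1,   0  ],
    [1/4, 1/4, 1/4, 1/4],
])
B = [3]                                   # the single transient state d
P_B = P[np.ix_(B, B)]
G_B = np.linalg.solve(np.eye(len(B)) - P_B, P_B)
print("G(d, d) =", G_B[0, 0])             # 1/3
for A, name in ([0, 1], "{a, b}"), ([2], "{c}"):
    p_A = P[np.ix_(B, A)].sum(axis=1)     # one-step entry probability into A
    h_A = np.linalg.solve(np.eye(len(B)) - P_B, p_A)
    print("H(d,", name, ") =", h_A[0])    # 2/3 and 1/3
print("H(d, d) =", G_B[0, 0] / (1 + G_B[0, 0]))   # 1/4
```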
Consider a Markov chain with state space $S = \{1, 2, 3, 4, 5, 6\}$ and transition matrix $P$ given below:
$P = \left[ \begin{matrix} 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \ \frac{1}{4} & 0 & \frac{1}{2} & 0 & \frac{1}{4} & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} & 0 \ 0 & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \end{matrix} \right]$
1. Sketch the state graph.
2. Find the equivalence classes and classify each as recurrent or transient.
3. Compute the matrix $G$.
4. Compute the matrix $H$.
Answer
1. $\{1, 3, 5\}$ recurrent; $\{2, 6\}$ transient; $\{4\}$ recurrent.
2. $G = \left[ \begin{matrix} \infty & 0 & \infty & 0 & \infty & 0 \ \infty & \frac{1}{2} & \infty & \infty & \infty & 2 \ \infty & 0 & \infty & 0 & \infty & 0 \ 0 & 0 & 0 & \infty & 0 & 0 \ \infty & 0 & \infty & 0 & \infty & 0 \ \infty & \frac{1}{2} & \infty & \infty & \infty & 1 \end{matrix} \right]$
3. $H = \left[ \begin{matrix} 1 & 0 & 1 & 0 & 1 & 0 \ \frac{1}{2} & \frac{1}{3} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & 1 \ 1 & 0 & 1 & 0 & 1 & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ 1 & 0 & 1 & 0 & 1 & 0 \ \frac{1}{2} & \frac{1}{3} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \end{matrix} \right]$
Consider a Markov chain with state space $S = \{1, 2, 3, 4, 5, 6\}$ and transition matrix $P$ given below:
$P = \left[ \begin{matrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{2} & \frac{1}{4} & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{matrix} \right]$
1. Sketch the state graph.
2. Find the equivalence classes and classify each as recurrent or transient.
3. Compute the matrix $G$.
4. Compute the matrix $H$.
Answer
1. $\{1, 2\}$ recurrent; $\{3, 4\}$ transient; $\{5, 6\}$ recurrent.
2. $G = \left[ \begin{matrix} \infty & \infty & 0 & 0 & 0 & 0 \ \infty & \infty & 0 & 0 & 0 & 0 \ \infty & \infty & \frac{7}{5} & \frac{4}{5} & \infty & \infty \ \infty & \infty & \frac{4}{5} & \frac{3}{5} & \infty & \infty \ 0 & 0 & 0 & 0 & \infty & \infty \ 0 & 0 & 0 & 0 & \infty & \infty \end{matrix} \right]$
3. $H = \left[ \begin{matrix} 1 & 1 & 0 & 0 & 0 & 0 \ 1 & 1 & 0 & 0 & 0 & 0 \ \frac{4}{5} & \frac{4}{5} & \frac{7}{12} & \frac{1}{2} & \frac{1}{5} & \frac{1}{5} \ \frac{3}{5} & \frac{3}{5} & \frac{1}{3} & \frac{3}{8} & \frac{2}{5} & \frac{2}{5} \ 0 & 0 & 0 & 0 & 1 & 1 \ 0 & 0 & 0 & 0 & 1 & 1 \end{matrix} \right]$
Special Models
Read again the definitions of the Ehrenfest chains and the Bernoulli-Laplace chains. Note that since these chains are irreducible and have finite state spaces, they are recurrent.
Read the discussion on recurrence in the section on the reliability chains.
Read the discussion on random walks on $\Z^k$ in the section on the random walks on graphs.
Read the discussion on extinction and explosion in the section on the branching chain.
Read the discussion on recurrence and transience in the section on queuing chains.
Read the discussion on recurrence and transience in the section on birth-death chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
A state in a discrete-time Markov chain is periodic if the chain can return to the state only at multiples of some integer larger than 1. Periodic behavior complicates the study of the limiting behavior of the chain. As we will see in this section, we can eliminate the periodic behavior by considering the $d$-step chain, where $d \in \N_+$ is the period, but only at the expense of introducing additional equivalence classes. Thus, in a sense, we can trade one form of complexity for another.
Basic Theory
Definitions and Basic Results
As usual, our starting point is a (time homogeneous) discrete-time Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ with (countable) state space $S$ and transition probability matrix $P$.
The period of state $x \in S$ is $d(x) = \gcd\{n \in \N_+: P^n(x, x) \gt 0 \}$ State $x$ is aperiodic if $d(x) = 1$ and periodic if $d(x) \gt 1$.
Thus, starting in $x$, the chain can return to $x$ only at multiples of the period $d$, and $d$ is the largest such integer. Perhaps the most important result is that period, like recurrence and transience, is a class property, shared by all states in an equivalence class under the to and from relation.
If $x \leftrightarrow y$ then $d(x) = d(y)$.
Proof
Suppose that $x \leftrightarrow y$. The result is trivial if $x = y$, so let's assume that $x \neq y$. Recall that there exist $j, \, k \in \N_+$ such that $P^j(x, y) \gt 0$ and $P^k(y, x) \gt 0$. But then $P^{j+k}(x, x) \ge P^j(x, y) P^k(y, x) \gt 0$ and hence $d(x) \mid (j + k)$. Suppose now that $n$ is a positive integer with $P^n(y, y) \gt 0$. Then $P^{j+k+n}(x, x) \ge P^j(x, y) P^n(y, y) P^k(y, x) \gt 0$ and hence $d(x) \mid (j + k + n)$. It follows that $d(x) \mid n$. Thus $d(x)$ is a common divisor of $\{n \in \N_+: P^n(y, y) \gt 0\}$, and since $d(y)$ is the greatest common divisor of this set, $d(x) \mid d(y)$. Reversing the roles of $x$ and $y$ we also have $d(y) \mid d(x)$. Hence $d(x) = d(y)$.
Thus, the definitions of period, periodic, and aperiodic apply to equivalence classes as well as individual states. When the chain is irreducible, we can apply these terms to the entire chain.
Suppose that $x \in S$. If $P(x, x) \gt 0$ then $x$ (and hence the equivalence class of $x$) is aperiodic.
Proof
By assumption, $1 \in \{n \in \N_+: P^n(x, x) \gt 0\}$ and hence the greatest common divisor of this set is 1.
The converse is not true, of course. A simple counterexample is given below.
The Cyclic Classes
Suppose now that $\bs{X} = (X_0, X_1, X_2, \ldots)$ is irreducible and is periodic with period $d$. There is no real loss in generality in assuming that the chain is irreducible, for if this were not the case, we could simply restrict our attention to one of the irreducible equivalence classes. Our exposition will be easier and cleaner if we recall the congruence equivalence relation modulo $d$ on $\Z$, which in turn is based on the division partial order. For $n, \, m \in \Z$, $n \equiv_d m$ if and only if $d \mid (n - m)$, equivalently $n - m$ is an integer multiple of $d$, equivalently $m$ and $n$ have the same remainder after division by $d$. The basic fact that we will need is that $\equiv_d$ is preserved under sums and differences. That is, if $m, \, n, \, p, \, q \in \Z$ and if $m \equiv_d n$ and $p \equiv_d q$, then $m + p \equiv_d n + q$ and $m - p \equiv_d n - q$.
Now, we fix a reference state $u \in S$, and for $k \in \{0, 1, \ldots, d - 1\}$, define $A_k = \{x \in S: P^{n d + k}(u, x) \gt 0 \text{ for some } n \in \N\}$ That is, $x \in A_k$ if and only if there exists $m \in \N$ with $m \equiv_d k$ and $P^m(u, x) \gt 0$.
Suppose that $x, \, y \in S$.
1. If $x \in A_j$ and $y \in A_k$ for $j, \, k \in \{0, 1, \ldots, d - 1\}$ then $P^n(x, y) \gt 0$ for some $n \equiv_d k - j$
2. Conversely, if $P^n(x, y) \gt 0$ for some $n \in \N$, then there exist $j, \, k \in \{0, 1, \ldots, d - 1\}$ such that $x \in A_j$, $y \in A_k$ and $n \equiv_d k - j$.
3. The sets $(A_0, A_1, \ldots, A_{d-1})$ partition $S$.
Proof
Suppose first that $x \in A_k$. By definition, $u$ leads to $x$ in $m$ steps for some $m \equiv_d k$. Since the chain is irreducible, $x$ leads back to $u$, say in $p$ steps. It follows that $u$ leads back to $u$ in $m + p$ steps. But because $u$ has period $d$, we must have $m + p \equiv_d 0$ and hence $p \equiv_d -k$.
Next suppose that $x \in A_j$ and $y \in A_k$. We know that $u$ leads to $x$ in $m$ steps for some $m \equiv_d j$, and we now know that $y$ leads to $u$ in $p$ steps for some $p \equiv_d -k$. Since the chain is irreducible, $x$ leads to $y$, say in $n$ steps. It follows that $u$ leads back to $u$ in $m + n + p$ steps. Again, since $u$ has period $d$, $m + n + p \equiv_d 0$ and it follows that $n \equiv_d k - j$
Next, note that since the chain is irreducible and since $\{0, 1, \ldots, d - 1\}$ is the set of all remainders modulo $d$, we must have $\bigcup_{i=0}^{d-1} A_i = S$. Suppose that $x, y \in S$ and $P^n(x, y) \gt 0$ for some $n \in \N$. Then $x \in A_j$ and $y \in A_k$ for some $j, \, k \in \{0, 1, \ldots, d - 1\}$. By the same argument as the last paragraph, we must have $n \equiv_d k - j$.
All that remains is to show that the sets are disjoint, and that amounts to the same old argument yet again. Suppose that $x \in A_j \cap A_k$ for some $j, \, k \in \{0, 1, \ldots, d - 1\}$. Then $u$ leads to $x$ in $m$ steps for some $m \equiv_d j$ and $x$ leads to $u$ in $n$ steps for some $n \equiv_d - k$. Hence $u$ leads back to $u$ in $m + n$ steps. Since the chain has period $d$ we have $m + n \equiv_d 0$ and therefore $j - k \equiv_d 0$. Since $j, \, k \in \{0, 1, \ldots, d - 1\}$ it follows that $j = k$.
$(A_0, A_1, \ldots, A_{d-1})$ are the equivalence classes for the $d$-step to and from relation $\underset{d}{\leftrightarrow}$ that governs the $d$-step chain $(X_0, X_d, X_{2 d}, \ldots)$ that has transition matrix $P^d$.
The sets $(A_0, A_1, \ldots, A_{d-1})$ are known as the cyclic classes. The basic structure of the chain is shown in the state diagram below:
Examples and Special Cases
Finite Chains
Consider the Markov chain with state space $S = \{a, b, c\}$ and transition matrix $P$ given below:
$P = \left[\begin{matrix} 0 & \frac{1}{3} & \frac{2}{3} \ 0 & 0 & 1 \ 1 & 0 & 0 \end{matrix}\right]$
1. Sketch the state graph and show that the chain is irreducible.
2. Show that the chain is aperiodic.
3. Note that $P(x, x) = 0$ for all $x \in S$.
Answer
1. The state graph has edge set $E = \{(a, b), (a, c), (b, c), (c, a)\}$.
2. Note that $P^2(a, a) \gt 0$ and $P^3(a, a) \gt 0$. Hence $d(a) = 1$ since 2 and 3 are relatively prime. Since the chain is irreducible, it is aperiodic.
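For a small chain it is easy to check the period numerically: compute $P^n(x, x)$ for a range of $n$ and take the greatest common divisor of the exponents with a positive return probability. The sketch below is a heuristic (only finitely many powers are examined), but it is adequate for a chain of this size and confirms the computation above.

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, x, max_power=50):
    """gcd of {n <= max_power : P^n(x, x) > 0}; the finite horizon is a
    heuristic that suffices for small chains such as the one above."""
    returns, Q = [], np.eye(P.shape[0])
    for n in range(1, max_power + 1):
        Q = Q @ P
        if Q[x, x] > 1e-12:
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

P = np.array([[0, 1/3, 2/3],
              [0, 0,   1  ],
              [1, 0,   0  ]])
print([period(P, x) for x in range(3)])   # [1, 1, 1], so the chain is aperiodic
```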
Consider the Markov chain with state space $S = \{1, 2, 3, 4, 5, 6, 7\}$ and transition matrix $P$ given below:
$P = \left[\begin{matrix} 0 & 0 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \ 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & \frac{2}{3} \ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & \frac{1}{4} \ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \end{matrix}\right]$
1. Sketch the state graph and show that the chain is irreducible.
2. Find the period $d$.
3. Find $P^d$.
4. Identify the cyclic classes.
Answer
1. Period 3
2. $P^3 = \left[ \begin{matrix} \frac{71}{192} & \frac{121}{192} & 0 & 0 & 0 & 0 & 0 \ \frac{29}{72} & \frac{43}{72} & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & \frac{7}{18} & \frac{1}{12} & \frac{19}{36} & 0 & 0 \ 0 & 0 & \frac{19}{48} & \frac{3}{32} & \frac{49}{96} & 0 & 0 \ 0 & 0 & \frac{13}{32} & \frac{7}{64} & \frac{31}{64} & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & \frac{157}{288} & \frac{131}{288} \ 0 & 0 & 0 & 0 & 0 & \frac{37}{64} & \frac{27}{64} \end{matrix} \right]$
3. Cyclic classes: $\{1, 2\}$, $\{3, 4, 5\}$, $\{6, 7\}$
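The entries of $P^3$ above can be verified exactly with rational arithmetic. The short script below (a verification aid, using Python's fractions module) computes $P^3$; for example, the row for state 6 comes out as $\left(0, 0, 0, 0, 0, \frac{157}{288}, \frac{131}{288}\right)$.

```python
from fractions import Fraction as F

P = [
    [0, 0, F(1, 2), F(1, 4), F(1, 4), 0, 0],
    [0, 0, F(1, 3), 0,       F(2, 3), 0, 0],
    [0, 0, 0,       0,       0,       F(1, 3), F(2, 3)],
    [0, 0, 0,       0,       0,       F(1, 2), F(1, 2)],
    [0, 0, 0,       0,       0,       F(3, 4), F(1, 4)],
    [F(1, 2), F(1, 2), 0, 0, 0, 0, 0],
    [F(1, 4), F(3, 4), 0, 0, 0, 0, 0],
]

def matmul(A, B):
    """Exact matrix product for square matrices of Fractions and ints."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

P3 = matmul(matmul(P, P), P)
print(P3[5])   # row for state 6: Fraction(157, 288) and Fraction(131, 288) in the last two slots
```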
Special Models
Review the definition of the basic Ehrenfest chain. Show that this chain has period 2, and find the cyclic classes.
Review the definition of the modified Ehrenfest chain. Show that this chain is aperiodic.
Review the definition of the simple random walk on $\Z^k$. Show that the chain is periodic with period 2, and find the cyclic classes.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
In this section, we study some of the deepest and most interesting parts of the theory of discrete-time Markov chains, involving two different but complementary ideas: stationary distributions and limiting distributions. The theory of renewal processes plays a critical role.
Basic Theory
As usual, our starting point is a (time homogeneous) discrete-time Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ with (countable) state space $S$ and transition probability matrix $P$. In the background, of course, is a probability space $(\Omega, \mathscr{F}, \P)$ so that $\Omega$ is the sample space, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on $(\Omega, \mathscr{F})$. For $n \in \N$, let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$, the $\sigma$-algebra of events determined by the chain up to time $n$, so that $\mathfrak{F} = \{\mathscr{F}_0, \mathscr{F}_1, \ldots\}$ is the natural filtration associated with $\bs{X}$.
The Embedded Renewal Process
Let $y \in S$ and $n \in \N_+$. We will denote the number of visits to $y$ during the first $n$ positive time units by $N_{y,n} = \sum_{i=1}^n \bs{1}(X_i = y)$ Note that $N_{y,n} \to N_y$ as $n \to \infty$, where $N_y = \sum_{i=1}^\infty \bs{1}(X_i = y)$ is the total number of visits to $y$ at positive times, one of the important random variables that we studied in the section on transience and recurrence. For $n \in \N_+$, we denote the time of the $n$th visit to $y$ by $\tau_{y,n} = \min\{k \in \N_+: N_{y,k} = n\}$ where as usual, we define $\min(\emptyset) = \infty$. Note that $\tau_{y,1}$ is the time of the first visit to $y$, which we denoted simply by $\tau_y$ in the section on transience and recurrence. The times of the visits to $y$ are stopping times for $\bs{X}$. That is, $\{\tau_{y,n} = k\} \in \mathscr{F}_k$ for $n \in \N_+$ and $k \in \N$. Recall also the definition of the hitting probability to state $y$ starting in state $x$: $H(x, y) = \P\left(\tau_y \lt \infty \mid X_0 = x\right), \quad (x, y) \in S^2$
Suppose that $x, \, y \in S$, and that $y$ is recurrent and $X_0 = x$.
1. If $x = y$, then the successive visits to $y$ form a renewal process.
2. If $x \ne y$ but $x \to y$, then the successive visits to $y$ form a delayed renewal process.
Proof
Let $\tau_{y,0} = 0$ for convenience.
1. Given $X_0 = y$, the sequence $\left(\tau_{y,1}, \tau_{y,2}, \ldots\right)$ is the sequence of arrival times of a renewal process. Every time the chain reaches state $y$, the process starts over, independently of the past, by the Markov property. Thus the interarrival times $\tau_{y,n+1} - \tau_{y,n}$ for $n \in \N$ are conditionally independent, and are identically distributed, given $X_0 = y$.
2. If $x \ne y$ but $x \to y$, then given $X_0 = x$, the sequence $\left(\tau_{y,1}, \tau_{y,2}, \ldots\right)$ is the sequence of arrival times of a delayed renewal process. By the same argument as in (a), the interarrival times $\tau_{y,n+1} - \tau_{y,n}$ for $n \in \N$ are conditionally independent, given $X_0 = x$, and all but $\tau_{y,1}$ have the same distribution.
As noted in the proof, $\left(\tau_{y,1}, \tau_{y,2}, \ldots\right)$ is the sequence of arrival times and $\left(N_{y,1}, N_{y,2}, \ldots\right)$ is the associated sequence of counting variables for the embedded renewal process associated with the recurrent state $y$. The corresponding renewal function, given $X_0 = x$, is the function $n \mapsto G_n(x, y)$ where $G_n(x, y) = \E\left(N_{y,n} \mid X_0 = x\right) = \sum_{k=1}^n P^k(x, y), \quad n \in \N$ Thus $G_n(x, y)$ is the expected number of visits to $y$ in the first $n$ positive time units, starting in state $x$. Note that $G_n(x, y) \to G(x, y)$ as $n \to \infty$ where $G$ is the potential matrix that we studied previously. This matrix gives the expected total number visits to state $y \in S$, at positive times, starting in state $x \in S$: $G(x, y) = \E\left(N_y \mid X_0 = x\right) = \sum_{k=1}^\infty P^k(x, y)$
Limiting Behavior
The limit theorems of renewal theory can now be used to explore the limiting behavior of the Markov chain. Let $\mu(y) = \E(\tau_y \mid X_0 = y)$ denote the mean return time to state $y$, starting in $y$. In the following results, it may be the case that $\mu(y) = \infty$, in which case we interpret $1 / \mu(y)$ as 0.
If $x, \, y \in S$ and $y$ is recurrent then $\P\left( \frac{1}{n} N_{y,n} \to \frac{1}{\mu(y)} \text{ as } n \to \infty \biggm| X_0 = x \right) = H(x, y)$
Proof
This result follows from the strong law of large numbers for renewal processes.
Note that $\frac{1}{n} N_{y,n} = \frac{1}{n} \sum_{k=1}^n \bs{1}(X_k = y)$ is the average number of visits to $y$ in the first $n$ positive time units.
If $x, \, y \in S$ and $y$ is recurrent then $\frac{1}{n} G_n(x, y) = \frac{1}{n} \sum_{k=1}^n P^k(x, y) \to \frac{H(x, y)}{\mu(y)} \text{ as } n \to \infty$
Proof
This result follows from the elementary renewal theorem for renewal processes.
Note that $\frac{1}{n} G_n(x, y) = \frac{1}{n} \sum_{k=1}^n P^k(x, y)$ is the expected average number of visits to $y$ during the first $n$ positive time units, starting at $x$.
If $x, \, y \in S$ and $y$ is recurrent and aperiodic then $P^n(x, y) \to \frac{H(x, y)}{\mu(y)} \text{ as } n \to \infty$
Proof
This result follows from the renewal theorem for renewal processes.
Note that $H(y, y) = 1$ by the very definition of a recurrent state. Thus, when $x = y$, the law of large numbers above gives convergence with probability 1, and the first and second renewal theory limits above are simply $1 / \mu(y)$. By contrast, we already know the corresponding limiting behavior when $y$ is transient.
If $x, \, y \in S$ and $y$ is transient then
1. $\P\left(\frac{1}{n} N_{y,n} \to 0 \text{ as } n \to \infty \mid X_0 = x\right) = 1$
2. $\frac{1}{n} G_n(x, y) = \frac{1}{n} \sum_{k=1}^n P^k(x, y) \to 0 \text{ as } n \to \infty$
3. $P^n(x, y) \to 0$ as $n \to \infty$
Proof
1. Note that $0 \le \frac{1}{n} N_{y,n} \le \frac{1}{n} N_y$. But if $y$ is transient, $\P(N_y \lt \infty \mid X_0 = x) = 1$ and hence $\P\left(\frac{1}{n} N_y \to 0 \text{ as } n \to \infty \mid X_0 = x\right) = 1$ so the result follows from the squeeze theorem for limits.
2. Similarly, note that $0 \le \frac{1}{n} \sum_{k=1}^n P^k(x, y) \le \frac{1}{n} \sum_{k=1}^\infty P^k(x, y)$ If $y$ is transient, $G(x, y) = \sum_{k=1}^\infty P^k(x, y) \lt \infty$ and hence $\frac{1}{n} G(x, y) \to 0$ as $n \to \infty$. Again the result follows from the squeeze theorem for limits.
3. Once more, if $y$ is transient, $G(x, y) = \sum_{k=1}^\infty P^k(x, y) \lt \infty$ and therefore $P^n(x, y) \to 0$ as $n \to \infty$.
On the other hand, if $y$ is transient then $\P(\tau_y = \infty \mid X_0 = y) \gt 0$ by the very definition of transience. Thus $\mu(y) = \infty$, and so the results in parts (b) and (c) agree with the corresponding results above for a recurrent state. Here is a summary.
For $x, \, y \in S$, $\frac{1}{n} G_n(x, y) = \frac{1}{n} \sum_{k=1}^n P^k(x, y) \to \frac{H(x, y)}{\mu(y)} \text{ as } n \to \infty$ If $y$ is transient or if $y$ is recurrent and aperiodic, $P^n(x, y) \to \frac{H(x, y)} {\mu(y)} \text{ as } n \to \infty$
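The distinction matters for periodic states: $P^n(x, y)$ itself need not converge, but the time average always does. A tiny illustration (a chain chosen only for this purpose, not one of the chains studied in this chapter) is the deterministic two-state flip chain, which has period 2 and mean return time 2 for each state.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # deterministic flip chain, period 2
N = 1000
Q, S = np.eye(2), np.zeros((2, 2))
for _ in range(N):
    Q = Q @ P                          # Q = P^n
    S += Q                             # running sum of P^1, ..., P^n
print(S / N)   # Cesaro average: approximately 1/2 everywhere = H(x, y) / mu(y)
print(Q)       # P^N itself is still a permutation matrix, so it has no limit
```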
Positive and Null Recurrence
Clearly there is a fundamental dichotomy in terms of the limiting behavior of the chain, depending on whether the mean return time to a given state is finite or infinite. Thus the following definition is natural.
Let $x \in S$.
1. State $x$ is positive recurrent if $\mu(x) \lt \infty$.
2. If $x$ is recurrent but $\mu(x) = \infty$ then state $x$ is null recurrent.
Implicit in the definition is the following simple result:
If $x \in S$ is positive recurrent, then $x$ is recurrent.
Proof
Recall that if $\E(\tau_x \mid X_0 = x) \lt \infty$ then $\P(\tau_x \lt \infty \mid X_0 = x) = 1$.
On the other hand, it is possible to have $\P(\tau_x \lt \infty \mid X_0 = x) = 1$, so that $x$ is recurrent, and also $\E(\tau_x \mid X_0 = x) = \infty$, so that $x$ is null recurrent. Simply put, a random variable can be finite with probability 1, but can have infinite expected value. A classic example is the Pareto distribution with shape parameter $a \in (0, 1)$.
Like recurrence/transience, and period, the null/positive recurrence property is a class property.
If $x$ is positive recurrent and $x \to y$ then $y$ is positive recurrent.
Proof
Suppose that $x$ is positive recurrent and $x \to y$. Recall that $y$ is recurrent and $y \to x$. Hence there exist $i, \, j \in \N_+$ such that $P^i(x, y) \gt 0$ and $P^j(y, x) \gt 0$. Thus for every $k \in \N_+$, $P^{i+j+k}(y, y) \ge P^j(y, x) P^k(x, x) P^i(x, y)$. Averaging over $k$ from 1 to $n$ gives $\frac{G_{n+i+j}(y, y)}{n} - \frac{G_{i+j}(y, y)}{n} \ge P^j(y, x) \frac{G_n(x, x)}{n} P^i(x, y)$ Letting $n \to \infty$ and using the renewal theory limit above gives $\frac{1}{\mu(y)} \ge P^j(y, x) \frac{1}{\mu(x)} P^i(x, y) \gt 0$ Therefore $\mu(y) \lt \infty$ and so $y$ is also positive recurrent.
Thus, the terms positive recurrent and null recurrent can be applied to equivalence classes (under the to and from equivalence relation), as well as individual states. When the chain is irreducible, the terms can be applied to the chain as a whole.
Recall that a nonempty set of states $A$ is closed if $x \in A$ and $x \to y$ implies $y \in A$. Here are some simple results for a finite, closed set of states.
If $A \subseteq S$ is finite and closed, then $A$ contains a positive recurrent state.
Proof
Fix a state $x \in A$ and note that $P^k(x, A) = \sum_{y \in A} P^k(x, y) = 1$ for every $k \in \N_+$ since $A$ is closed. Averaging over $k$ from 1 to $n$ gives $\sum_{y \in A} \frac{G_n(x, y)}{n} = 1$ for every $n \in \N_+$. Note that the change in the order of summation is justified since both sums are finite. Assume now that all states in $A$ are transient or null recurrent. Letting $n \to \infty$ in the displayed equation gives the contradiction $0 = 1$. Again, the interchange of sum and limit is justified by the fact that $A$ is finite.
If $A \subseteq S$ is finite and closed, then $A$ contains no null recurrent states.
Proof
Let $x \in A$. Note that $[x] \subseteq A$ since $A$ is closed. Suppose that $x$ is recurrent. Note that $[x]$ is also closed and finite and hence must have a positive recurrent state by the previous result. Hence the equivalence class $[x]$ is positive recurrent and thus so is $x$.
If $A \subseteq S$ is finite and irreducible, then $A$ is a positive recurrent equivalence class.
Proof
We already know that $A$ is a recurrent equivalence class, from our study of transience and recurrence. From the previous theorem, $A$ is positive recurrent.
In particular, a Markov chain with a finite state space cannot have null recurrent states; every state must be transient or positive recurrent.
Limiting Behavior, Revisited
Returning to the limiting behavior, suppose that the chain $\bs{X}$ is irreducible, so that either all states are transient, all states are null recurrent, or all states are positive recurrent. From the basic limit theorem above, if the chain is transient or if the chain is recurrent and aperiodic, then $P^n(x, y) \to \frac{1}{\mu(y)} \text{ as } n \to \infty \text{ for every } x \in S$ Note in particular that the limit is independent of the initial state $x$. Of course in the transient case and in the null recurrent and aperiodic case, the limit is 0. Only in the positive recurrent, aperiodic case is the limit positive, which motivates our next definition.
A Markov chain $\bs{X}$ that is irreducible, positive recurrent, and aperiodic, is said to be ergodic.
In the ergodic case, as we will see, $X_n$ has a limiting distribution as $n \to \infty$ that is independent of the initial distribution.
The behavior when the chain is periodic with period $d \in \{2, 3, \ldots\}$ is a bit more complicated, but we can understand this behavior by considering the $d$-step chain $\bs{X}_d = (X_0, X_d, X_{2 d}, \ldots)$ that has transition matrix $P^d$. Essentially, this allows us to trade periodicity (one form of complexity) for reducibility (another form of complexity). Specifically, recall that the $d$-step chain is aperiodic but has $d$ equivalence classes $(A_0, A_1, \ldots, A_{d-1})$; and these are the cyclic classes of original chain $\bs{X}$.
The mean return time to state $x$ for the $d$-step chain $\bs{X}_d$ is $\mu_d(x) = \mu(x) / d$.
Proof
Note that every single step for the $d$-step chain corresponds to $d$ steps for the original chain.
Let $i, \, j, \, k \in \{0, 1, \ldots, d - 1\}$,
1. $P^{n d + k}(x, y) \to d / \mu(y)$ as $n \to \infty$ if $x \in A_i$ and $y \in A_j$ and $j = (i + k) \mod d$.
2. $P^{n d + k}(x, y) \to 0$ as $n \to \infty$ in all other cases.
Proof
These results follow from the previous theorem and the cyclic behavior of the chain.
If $y \in S$ is null recurrent or transient then regardless of the period of $y$, $P^n(x, y) \to 0$ as $n \to \infty$ for every $x \in S$.
Invariant Distributions
Our next goal is to see how the limiting behavior is related to invariant distributions. Suppose that $f$ is a probability density function on the state space $S$. Recall that $f$ is invariant for $P$ (and for the chain $\bs{X}$) if $f P = f$. It follows immediately that $f P^n = f$ for every $n \in \N$. Thus, if $X_0$ has probability density function $f$ then so does $X_n$ for each $n \in \N$, and hence $\bs{X}$ is a sequence of identically distributed random variables. A bit more generally, suppose that $g: S \to [0, \infty)$ is invariant for $P$, and let $C = \sum_{x \in S} g(x)$. If $0 \lt C \lt \infty$ then $f$ defined by $f(x) = g(x) / C$ for $x \in S$ is an invariant probability density function.
Suppose that $g: S \to [0, \infty)$ is invariant for $P$ and satisfies $\sum_{x \in S} g(x) \lt \infty$. Then $g(y) = \frac{1}{\mu(y)} \sum_{x \in S} g(x) H(x, y), \quad y \in S$
Proof
Recall again that $g P^k = g$ for every $k \in \N$ since $g$ is invariant for $P$. Averaging over $k$ from 1 to $n$ gives $g G_ n / n = g$ for each $n \in \N_+$. Explicitly, $\sum_{x \in S} g(x) \frac{G_n(x, y)}{n} = g(y), \quad y \in S$ Letting $n \to \infty$ and using the limit theorem above gives the result. The dominated convergence theorem justifies interchanging the limit with the sum, since the terms are positive, $\frac{1}{n}G_n(x, y) \le 1$, and $\sum_{x \in S} g(x) \lt \infty$.
Note that if $y$ is transient or null recurrent, then $g(y) = 0$. Thus, an invariant function with finite sum, and in particular an invariant probability density function, must be concentrated on the positive recurrent states.
Suppose now that the chain $\bs{X}$ is irreducible. If $\bs{X}$ is transient or null recurrent, then from the previous result, the only nonnegative functions that are invariant for $P$ are functions that satisfy $\sum_{x \in S} g(x) = \infty$ and the function that is identically 0: $g = \bs{0}$. In particular, the chain does not have an invariant distribution. On the other hand, if the chain is positive recurrent, then $H(x, y) = 1$ for all $x, \, y \in S$. Thus, from the previous result, the only possible invariant probability density function is the function $f$ given by $f(x) = 1 / \mu(x)$ for $x \in S$. Any other nonnegative function $g$ that is invariant for $P$ and has finite sum is a multiple of $f$ (and indeed the multiple is the sum of the values). Our next goal is to show that $f$ really is an invariant probability density function.
If $\bs{X}$ is an irreducible, positive recurrent chain then the function $f$ given by $f(x) = 1 / \mu(x)$ for $x \in S$ is an invariant probability density function for $\bs{X}$.
Proof
Let $f(x) = 1 / \mu(x)$ for $x \in S$, and let $A$ be a finite subset of $S$. Then $\sum_{y \in A} \frac{1}{n} G_n(x, y) \le 1$ for every $x \in S$. Letting $n \to \infty$ and using the basic limit above gives $\sum_{y \in A} f(y) \le 1$. The interchange of limit and sum is justified since $A$ is finite. Since this is true for every finite $A \subseteq S$, it follows that $C \le 1$ where $C = \sum_{y \in S} f(y)$. Note also that $C \gt 0$ since the chain is positive recurrent. Next note that $\sum_{y \in A} \frac{1}{n} G_n(x, y) P(y, z) \le \frac{1}{n} G_{n+1}(x, z)$ for every $x, \, z \in S$. Letting $n \to \infty$ gives $\sum_{y \in A} f(y) P(y, z) \le f(z)$ for every $z \in S$. It then follows that $\sum_{y \in S} f(y) P(y, z) \le f(z)$ for every $z \in S$. Suppose that strict inequality holds for some $z \in S$. Then $\sum_{z \in S} \sum_{y \in S} f(y) P(y, z) \lt \sum_{z \in S} f(z)$ Interchanging the order of summation on the left in the displayed inequality yields the contradiction $C \lt C$. Thus $f$ is invariant for $P$. Hence $f / C$ is an invariant probability density function. By the uniqueness result noted earlier, it follows that $f / C = f$ so that in fact $C = 1$.
In summary, an irreducible, positive recurrent Markov chain $\bs{X}$ has a unique invariant probability density function $f$ given by $f(x) = 1 / \mu(x)$ for $x \in S$. We also now have a test for positive recurrence. An irreducible Markov chain $\bs{X}$ is positive recurrent if and only if there exists a positive function $g$ on $S$ that is invariant for $P$ and satisfies $\sum_{x \in S} g(x) \lt \infty$ (and then, of course, normalizing $g$ would give $f$).
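For a finite irreducible chain, $f$ can be computed by solving the linear system $f P = f$, $\sum_{x \in S} f(x) = 1$, and then $\mu(x) = 1 / f(x)$. The following is a minimal sketch with NumPy (the helper name and the particular parameter values are illustrative only), applied to the general two-state chain that is revisited in the examples below.

```python
import numpy as np

def invariant_pdf(P):
    """Solve f P = f with sum(f) = 1 by replacing one balance equation with the
    normalization condition (valid for a finite irreducible chain)."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

p, q = 0.2, 0.3                               # illustrative parameter values
P = np.array([[1 - p, p], [q, 1 - q]])
f = invariant_pdf(P)
print("f =", f)                               # [q/(p+q), p/(p+q)] = [0.6, 0.4]
print("mean return times 1/f =", 1 / f)       # [(p+q)/q, (p+q)/p] = [5/3, 5/2]
```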
Consider now a general Markov chain $\bs{X}$ on $S$. If $\bs{X}$ has no positive recurrent states, then as noted earlier, there are no invariant distributions. Thus, suppose that $\bs{X}$ has a collection of positive recurrent equivalence classes $(A_i: i \in I)$ where $I$ is a nonempty, countable index set. The chain restricted to $A_i$ is irreducible and positive recurrent for each $i \in I$, and hence has a unique invariant probability density function $f_i$ on $A_i$ given by $f_i(x) = \frac{1}{\mu(x)}, \quad x \in A_i$ We extend $f_i$ to $S$ by defining $f_i(x) = 0$ for $x \notin A_i$, so that $f_i$ is a probability density function on $S$. All invariant probability density functions for $\bs{X}$ are mixtures of these functions:
$f$ is an invariant probability density function for $\bs{X}$ if and only if $f$ has the form $f(x) = \sum_{i \in I} p_i f_i(x), \quad x \in S$ where $(p_i: i \in I)$ is a probability density function on the index set $I$. That is, $f(x) = p_i f_i(x)$ for $i \in I$ and $x \in A_i$, and $f(x) = 0$ otherwise.
Proof
Let $A = \bigcup_{i \in I} A_i$, the set of positive recurrent states. Suppose that $f$ has the form given in the theorem. Since $f(x) = 0$ for $x \notin A$ we have $(f P)(y) = \sum_{x \in S} f(x) P(x, y) = \sum_{i \in I} \sum_{x \in A_i} p_i f_i(x) P(x, y)$ Suppose that $y \in A_j$ for some $j \in I$. Since $P(x, y) = 0$ if $x \in A_i$ and $i \ne j$, the last sum becomes $(f P)(y) = p_j \sum_{x \in A_j} f_j(x) P(x, y) = p_j f_j(y) = f(y)$ because $f_j$ is invariant for the $P$ restricted to $A_j$. If $y \notin A$ then $P(x, y) = 0$ for $x \in A$ so the sum above becomes $(f P)(y) = 0 = f(y)$. Hence $f$ is invariant. Moreover, $\sum_{x \in S} f(x) = \sum_{i \in I} \sum_{x \in A_i} f(x) = \sum_{i \in I} p_i \sum_{x \in A_i} f_i(x) = \sum_{i \in I} p_i = 1$ so $f$ is a PDF on $S$. Conversely, suppose that $f$ is an invariant PDF for $\bs{X}$. We know that $f$ is concentrated on the positive recurrent states, so $f(x) = 0$ for $x \notin A$. For $i \in I$ and $y \in A_i$ $\sum_{x \in A_i} f(x) P(x, y) = \sum_{x \in S} f(x) P(x, y) = f(y)$ since $f$ is invariant for $P$ and since, as noted before, $f(x) P(x, y) = 0$ if $x \notin A_i$. It follows that $f$ restricted to $A_i$ is invariant for the chain restricted to $A_i$ for each $i \in I$. Let $p_i = \sum_{x \in A_i} f(x)$, the normalizing constant for $f$ restricted to $A_i$. By uniqueness, the restriction of $f / p_i$ to $A_i$ must be $f_i$, so $f$ has the form given in the theorem.
Invariant Measures
Suppose that $\bs{X}$ is irreducible. In this section we are interested in general functions $g: S \to [0, \infty)$ that are invariant for $\bs{X}$, so that $g P = g$. A function $g: S \to [0, \infty)$ defines a positive measure $\nu$ on $S$ by the simple rule $\nu(A) = \sum_{x \in A} g(x), \quad A \subseteq S$ so in this sense, we are interested in invariant positive measures for $\bs{X}$ that may not be probability measures. Technically, $g$ is the density function of $\nu$ with respect to counting measure $\#$ on $S$.
From our work above, we know the situation if $\bs{X}$ is positive recurrent. In this case, there exists a unique invariant probability density function $f$ that is positive on $S$, and every other nonnegative invariant function $g$ is a nonnegative multiple of $f$. In particular, either $g = \bs{0}$, the zero function on $S$, or $g$ is positive on $S$ and satisfies $\sum_{x \in S} g(x) \lt \infty$.
We can generalize to chains that are simply recurrent, either null or positive. We will show that there exists a positive invariant function that is unique, up to multiplication by positive constants. To set up the notation, recall that $\tau_x = \min\{k \in \N_+: X_k = x\}$ is the first positive time that the chain is in state $x \in S$. In particular, if the chain starts in $x$ then $\tau_x$ is the time of the first return to $x$. For $x \in S$ we define the function $\gamma_x$ by $\gamma_x(y) = \E\left(\sum_{n=0}^{\tau_x - 1} \bs{1}(X_n = y) \biggm| X_0 = x\right), \quad y \in S$ so that $\gamma_x(y)$ is the expected number of visits to $y$ before the first return to $x$, starting in $x$. Here is the existence result.
Suppose that $\bs X$ is recurrent. For $x \in S$,
1. $\gamma_x(x) = 1$
2. $\gamma_x$ is invariant for $\bs X$
3. $\gamma_x(y) \in (0, \infty)$ for $y \in S$.
Proof
1. By definition, given $X_0 = x$, we have $X_0 = x$ but $X_n \ne x$ for $n \in \{1, \ldots, \tau_x - 1\}$. Hence $\gamma_x(x) = 1$.
2. Since the chain is recurrent, with probability 1 we have $\tau_x \lt \infty$ and $X_{\tau_x} = x$. Hence for $y \in S$, $\gamma_x(y) = \E\left(\sum_{n=0}^{\tau_x - 1} \bs{1}(X_n = y) \biggm| X_0 = x\right) = \E\left(\sum_{n=1}^{\tau_x} \bs{1}(X_n = y) \biggm| X_0 = x\right)$ (Note that if $x = y$ then with probability 1, the $n = 0$ term in the first sum and the $n = \tau_x$ term in the second sum are 1 and the remaining terms are 0. If $x \ne y$, the $n = 0$ term in the first sum and the $n = \tau_x$ term in the second sum are 0 with probability 1, so again the the two sums are the same.) Hence $\gamma_x(y) = \E\left(\sum_{n=1}^\infty \bs{1}(X_n = y, \tau_x \ge n) \biggm| X_0 = x\right) = \sum_{n=1}^\infty \P(X_n = y, \tau_x \ge n \mid X_0 = x)$ Next we partition on the values of $X_{n-1}$ in the sum to obtain \begin{align*} \gamma_x(y) & = \sum_{n=1}^\infty \sum_{z \in S} \P(X_n = y, X_{n-1} = z, \tau_x \ge n \mid X_0 = x)\ & = \sum_{n=1}^\infty \sum_{z \in S} \P(X_n = y \mid X_{n-1} = z, \tau_x \ge n, X_0 = x) \P(X_{n-1} = z, \tau_x \ge n \mid X_0 = x) \end{align*} But $\{X_0 = x, \tau_x \ge n\} \in \mathscr{F}_{n-1}$ (that is, the events depend only on $(X_0, \ldots, X_{n-1}))$. Hence by the Markov property, the first factor in the last displayed equation is simply $\P(X_n = y \mid X_{n-1} = z) = P(z, y)$. Substituting and re-indexing the sum gives \begin{align*} \gamma_x(y) & = \sum_{n=1}^\infty \sum_{z \in S} P(z, y) \P(X_{n-1} = z, \tau_x \ge n \mid X_0 = x) = \sum_{z \in S} P(z, y) \E\left(\sum_{n=1}^{\tau_x} \bs{1}(X_{n-1} = z) \biggm| X_0 = x \right) \ & = \sum_{z \in S} P(z, y) \E\left(\sum_{m=0}^{\tau_x - 1} \bs{1}(X_m = z) \biggm| X_0 = x\right) = \sum_{z \in S} P(z, y) \gamma_x(z) = \gamma_x P(y) \end{align*}
3. By the invariance in part (b), $\gamma_x = \gamma_x P^n$ for every $n \in \N$. Let $y \in S$. Since the chain is irreducible, there exists $j \in \N$ such that $P^j(x, y) \gt 0$. Hence $\gamma_x(y) = \gamma_x P^j(y) \ge \gamma_x(x) P^j(x, y) = P^j(x, y) \gt 0$. Similarly, there exists $k \in \N$ such that $P^k(y, x) \gt 0$. Hence $1 = \gamma_x(x) = \gamma_x P^k(x) \ge \gamma_x(y) P^k(y, x)$ and therefore $\gamma_x(y) \le 1 / P^k(y, x) \lt \infty$.
Next is the uniqueness result.
Suppose again that $\bs X$ is recurrent and that $g: S \to [0, \infty)$ is invariant for $\bs X$. For fixed $x \in S$, $g(y) = g(x) \gamma_x(y), \quad y \in S$
Proof
Let $S_x = S - \{x\}$ and let $y \in S$. Since $g$ is invariant, $g(y) = g P(y) = \sum_{z \in S} g(z) P(z, y) = \sum_{z \in S_x} g(z) P(z, y) + g(x) P(x, y)$ Note that the last term is $g(x) \P(X_1 = y, \tau_x \ge 1 \mid X_0 = x)$. Repeating the argument for $g(z)$ in the sum above gives $g(y) = \sum_{z \in S_x} \sum_{w \in S_x} g(w) P(w, z)P(z, y) + g(x) \sum_{z \in S_x} P(x, z) P(z, y) + g(x) P(x, y)$ The last two terms are $g(x) \left[\P(X_2 = y, \tau_x \ge 2 \mid X_0 = x) + \P(X_1 = y, \tau_x \ge 1 \mid X_0 = x)\right]$ Continuing in this way shows that for each $n \in \N_+$, $g(y) \ge g(x) \sum_{k=1}^n \P(X_k = y, \tau_x \ge k \mid X_0 = x)$ Letting $n \to \infty$ then shows that $g(y) \ge g(x) \gamma_x(y)$. Next, note that the function $h = g - g(x) \gamma_x$ is invariant, since it is a difference of two invariant functions, and as just shown, is nonnegative. Also, $h(x) = g(x) - g(x) \gamma_x(x) = 0$. Let $y \in S$. Since the chain is irreducible, there exists $j \in \N$ such that $P^j(y, x) \gt 0$. Hence $0 = h(x) = hP^j(x) \ge h(y) P^j(y, x) \ge 0$ Since $P^j(y, x) \gt 0$ it follows that $h(y) = 0$.
Thus, suppose that $\bs{X}$ is null recurrent. Then there exists an invariant function $g$ that is positive on $S$ and satisfies $\sum_{x \in S} g(x) = \infty$. Every other nonnegative invariant function is a nonnegative multiple of $g$. In particular, either $g = \bs{0}$, the zero function on $S$, or $g$ is positive on $S$ and satisfies $\sum_{x \in S} g(x) = \infty$. The section on reliability chains gives an example of the invariant function for a null recurrent chain.
The situation is complicated when $\bs{X}$ is transient. In this case, there may or may not exist nonnegative invariant functions that are not identically 0. When they do exist, they may not be unique (up to multiplication by nonnegative constants). But we still know that there are no invariant probability density functions, so if $g$ is a nonnegative function that is invariant for $\bs{X}$ then either $g = \bs{0}$ or $\sum_{x \in S} g(x) = \infty$. The section on random walks on graphs provides lots of examples of transient chains with nontrivial invariant functions. In particular, the non-symmetric random walk on $\Z$ has a two-dimensional space of invariant functions.
Examples and Applications
Finite Chains
Consider again the general two-state chain on $S = \{0, 1\}$ with transition probability matrix given below, where $p \in (0, 1)$ and $q \in (0, 1)$ are parameters. $P = \left[ \begin{matrix} 1 - p & p \ q & 1 - q \end{matrix} \right]$
1. Find the invariant distribution.
2. Find the mean return time to each state.
3. Find $\lim_{n \to \infty} P^n$ without having to go to the trouble of diagonalizing $P$, as we did in the introduction to discrete-time chains.
Answer
1. $f = \left(\frac{q}{p + q}, \frac{p}{p + q} \right)$
2. $\mu = \left( \frac{p + q}{q}, \frac{p + q}{p} \right)$
3. $P^n \to \frac{1}{p + q} \left[ \begin{matrix} q & p \ q & p \end{matrix} \right]$ as $n \to \infty$.
Consider a Markov chain with state space $S = \{a, b, c, d\}$ and transition matrix $P$ given below: $P = \left[ \begin{matrix} \frac{1}{3} & \frac{2}{3} & 0 & 0 \ 1 & 0 & 0 & 0 \ 0 & 0 & 1 & 0 \ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \end{matrix} \right]$
1. Draw the state diagram.
2. Determine the equivalence classes and classify each as transient or positive recurrent.
3. Find all invariant probability density functions.
4. Find the mean return time to each state.
5. Find $\lim_{n \to \infty} P^n$.
Answer
1. $\{a, b\}$ recurrent; $\{c\}$ recurrent; $\{d\}$ transient.
2. $f = \left( \frac{3}{5} p, \frac{2}{5} p, 1 - p, 0 \right)$, $0 \le p \le 1$
3. $\mu = \left(\frac{5}{3}, \frac{5}{2}, 1, \infty \right)$
4. $P^n \to \left[ \begin{matrix} \frac{3}{5} & \frac{2}{5} & 0 & 0 \ \frac{3}{5} & \frac{2}{5} & 0 & 0 \ 0 & 0 & 1 & 0 \ \frac{2}{5} & \frac{4}{15} & \frac{1}{3} & 0 \end{matrix} \right]$ as $n \to \infty$
Consider a Markov chain with state space $S = \{1, 2, 3, 4, 5, 6\}$ and transition matrix $P$ given below: $P = \left[ \begin{matrix} 0 & 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \ 0 & 0 & 0 & 0 & 0 & 1 \ \frac{1}{4} & 0 & \frac{1}{2} & 0 & \frac{1}{4} & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} & 0 \ 0 & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \end{matrix} \right]$
1. Sketch the state graph.
2. Find the equivalence classes and classify each as transient or positive recurrent.
3. Find all invariant probability density functions.
4. Find the mean return time to each state.
5. Find $\lim_{n \to \infty} P^n$.
Answer
1. $\{1, 3, 5\}$ recurrent; $\{2, 6\}$ transient; $\{4\}$ recurrent.
2. $f = \left(\frac{2}{19}p, 0, \frac{8}{19} p, 1 - p, \frac{9}{19}p, 0\right), \quad 0 \le p \le 1$
3. $\mu = \left(\frac{19}{2}, \infty, \frac{19}{8}, 1, \frac{19}{9}, \infty\right)$
4. $P^n \to \left[ \begin{matrix} \frac{2}{19} & 0 & \frac{8}{19} & 0 & \frac{9}{19} & 0 \ \frac{1}{19} & 0 & \frac{4}{19} & \frac{1}{2} & \frac{9}{38} & 0 \ \frac{2}{19} & 0 & \frac{8}{19} & 0 & \frac{9}{19} & 0 \ 0 & 0 & 0 & 1 & 0 & 0 \ \frac{2}{19} & 0 & \frac{8}{19} & 0 & \frac{9}{19} & 0 \ \frac{1}{19} & 0 & \frac{4}{19} & \frac{1}{2} & \frac{9}{38} & 0 \ \end{matrix} \right]$ as $n \to \infty$.
Consider a Markov chain with state space $S = \{1, 2, 3, 4, 5, 6\}$ and transition matrix $P$ given below: $P = \left[ \begin{matrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 \ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{2} & \frac{1}{4} & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{matrix} \right]$
1. Sketch the state graph.
2. Find the equivalence classes and classify each as transient or positive recurrent.
3. Find all invariant probability density functions.
4. Find the mean return time to each state.
5. Find $\lim_{n \to \infty} P^n$.
Answer
1. $\{1, 2\}$ recurrent; $\{3, 4\}$ transient; $\{5, 6\}$ recurrent.
2. $f = \left(\frac{1}{3} p, \frac{2}{3} p, 0, 0, \frac{1}{2}(1 - p), \frac{1}{2}(1 - p) \right), \quad 0 \le p \le 1$
3. $\mu = \left(3, \frac{3}{2}, \infty, \infty, 2, 2 \right)$
4. $P^n \to \left[ \begin{matrix} \frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 & 0 \ \frac{4}{15} & \frac{8}{15} & 0 & 0 & \frac{1}{10} & \frac{1}{10} \ \frac{1}{5} & \frac{2}{5} & 0 & 0 & \frac{1}{5} & \frac{1}{5} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{matrix} \right]$ as $n \to \infty$
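A quick numerical cross-check of this limit (an aside, not part of the exercise): the recurrent classes here are aperiodic, so $P^n$ converges, and a large matrix power reproduces the entries above, including the row for state 3.

```python
import numpy as np

P = np.array([
    [1/2, 1/2, 0,   0,   0,   0  ],
    [1/4, 3/4, 0,   0,   0,   0  ],
    [1/4, 0,   1/2, 1/4, 0,   0  ],
    [1/4, 0,   1/4, 1/4, 0,   1/4],
    [0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   0,   1/2, 1/2],
])
print(np.round(np.linalg.matrix_power(P, 200), 4))
# the row for state 3 (index 2) is approximately
# [0.2667, 0.5333, 0, 0, 0.1, 0.1] = (4/15, 8/15, 0, 0, 1/10, 1/10)
```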
Consider the Markov chain with state space $S = \{1, 2, 3, 4, 5, 6, 7\}$ and transition matrix $P$ given below:
$P = \left[ \begin{matrix} 0 & 0 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} & 0 & 0 \ 0 & 0 & \frac{1}{3} & 0 & \frac{2}{3} & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & \frac{1}{3} & \frac{2}{3} \ 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & \frac{1}{4} \ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \ \frac{1}{4} & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \end{matrix} \right]$
1. Sketch the state digraph, and show that the chain is irreducible with period 3.
2. Identify the cyclic classes.
3. Find the invariant probability density function.
4. Find the mean return time to each state.
5. Find $\lim_{n \to \infty} P^{3 n}$.
6. Find $\lim_{n \to \infty} P^{3 n + 1}$.
7. Find $\lim_{n \to \infty} P^{3 n + 2}$.
Answer
1. Cyclic classes: $\{1, 2\}$, $\{3, 4, 5\}$, $\{6, 7\}$
2. $f = \frac{1}{1785}(232, 363, 237, 58, 300, 333, 262)$
3. $\mu = 1785 \left( \frac{1}{232}, \frac{1}{363}, \frac{1}{237}, \frac{1}{58}, \frac{1}{300}, \frac{1}{333}, \frac{1}{262} \right)$
4. $P^{3 n} \to \frac{1}{595} \left[ \begin{matrix} 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \end{matrix} \right]$ as $n \to \infty$
5. $P^{3 n + 1} \to \frac{1}{595} \left[ \begin{matrix} 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \end{matrix} \right]$ as $n \to \infty$
6. $P^{3 n + 2} \to \frac{1}{595} \left[ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 0 & 0 & 0 & 0 & 0 & 333 & 262 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 232 & 363 & 0 & 0 & 0 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \ 0 & 0 & 237 & 58 & 300 & 0 & 0 \end{matrix} \right]$ as $n \to \infty$
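Again a quick numerical cross-check (an aside, not part of the exercise): raising $P$ to a large multiple of the period reproduces the block limits above, and averaging over the three residues recovers the invariant distribution $f$, since $P^{3n}$, $P^{3n+1}$, $P^{3n+2}$ place all of the mass on one cyclic class at a time.

```python
import numpy as np

P = np.array([
    [0,   0,   1/2, 1/4, 1/4, 0,   0  ],
    [0,   0,   1/3, 0,   2/3, 0,   0  ],
    [0,   0,   0,   0,   0,   1/3, 2/3],
    [0,   0,   0,   0,   0,   1/2, 1/2],
    [0,   0,   0,   0,   0,   3/4, 1/4],
    [1/2, 1/2, 0,   0,   0,   0,   0  ],
    [1/4, 3/4, 0,   0,   0,   0,   0  ],
])
Q = np.linalg.matrix_power(P, 300)             # exponent is a multiple of d = 3
print(np.round(595 * Q[0], 1))                 # row for state 1: ~ [232, 363, 0, 0, 0, 0, 0]
print(np.round(595 * Q[2], 1))                 # row for state 3: ~ [0, 0, 237, 58, 300, 0, 0]
cesaro = (Q + Q @ P + Q @ P @ P) / 3
print(np.round(1785 * cesaro[0], 1))           # ~ [232, 363, 237, 58, 300, 333, 262] = 1785 f
```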
Special Models
Read the discussion of invariant distributions and limiting distributions in the Ehrenfest chains.
Read the discussion of invariant distributions and limiting distributions in the Bernoulli-Laplace chain.
Read the discussion of positive recurrence and invariant distributions for the reliability chains.
Read the discussion of positive recurrence and limiting distributions for the birth-death chain.
Read the discussion of positive recurrence for the queuing chains.
Read the discussion of positive recurrence and limiting distributions for the random walks on graphs.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
The Markov property, stated in the form that the past and future are independent given the present, essentially treats the past and future symmetrically. However, there is a lack of symmetry in the fact that in the usual formulation, we have an initial time 0, but not a terminal time. If we introduce a terminal time, then we can run the process backwards in time. In this section, we are interested in the following questions:
• Is the new process still Markov?
• If so, how does the new transition probability matrix relate to the original one?
• Under what conditions are the forward and backward processes stochastically the same?
Consideration of these questions leads to reversed chains, an important and interesting part of the theory of Markov chains.
Basic Theory
Reversed Chains
Our starting point is a (homogeneous) discrete-time Markov chain $\bs X = (X_0, X_1, X_2, \ldots)$ with (countable) state space $S$ and transition probability matrix $P$. Let $m$ be a positive integer, which we will think of as the terminal time or finite time horizon. We won't bother to indicate the dependence on $m$ notationally, since ultimately the terminal time will not matter. Define $\hat X_n = X_{m-n}$ for $n \in \{0, 1, \ldots, m\}$. Thus, the process forward in time is $\bs X = (X_0, X_1, \ldots, X_m)$ while the process backwards in time is $\hat{\bs X} = (\hat X_0, \hat X_1, \ldots, \hat X_m) = (X_m, X_{m-1}, \ldots, X_0)$
For $n \in \{0, 1, \ldots, m\}$, let $\hat{\mathscr F}_n = \sigma\{\hat X_0, \hat X_1, \ldots, \hat X_n\} = \sigma\{X_{m-n}, X_{m - n + 1}, \ldots, X_m\}$ denote the $\sigma$-algebra of the events of the process $\hat{\bs X}$ up to time $n$. So of course, an event for $\hat{\bs X}$ up to time $n$ is an event for $\bs X$ from time $m - n$ forward. Our first result is that the reversed process is still a Markov chain, but not time homogeneous in general.
The process $\hat{\bs X} = (\hat X_0, \hat X_1, \ldots, \hat X_m)$ is a Markov chain, but is not time homogeneous in general. The one-step transition matrix at time $n \in \{0, 1, \ldots, m - 1\}$ is given by $\P(\hat X_{n+1} = y \mid \hat X_n = x) = \frac{\P(X_{m - n - 1} = y)}{\P(X_{m - n} = x)} P(y, x), \quad (x, y) \in S^2$
Proof
Let $A \in \hat{\mathscr F}_n$ and $x, \, y \in S$. Then \begin{align*} \P(\hat X_{n+1} = y \mid \hat X_n = x, A) & = \frac{\P(\hat X_{n+1} = y, \hat X_n = x, A)}{\P(\hat X_n = x, A)} = \frac{\P(X_{m - n - 1} = y, X_{m - n} = x, A)}{\P(X_{m - n} = x, A)} \ & = \frac{\P(A \mid X_{m - n - 1} = y, X_{m - n} = x) \P(X_{m - n} = x \mid X_{m - n - 1} = y) \P(X_{m - n - 1} = y)}{\P(A \mid X_{m - n} = x) \P(X_{m - n} = x)} \end{align*} But $A \in \sigma\{X_{m - n}, \ldots, X_m\}$ and so by the Markov property for $\bs X$, $\P(A \mid X_{m - n - 1} = y, X_{m - n} = x) = \P(A \mid X_{m - n} = x)$ By the time homogeneity of $\bs X$, $\P(X_{m - n} = x \mid X_{m - n - 1} = y) = P(y, x)$. Substituting and simplifying gives $\P(\hat X_{n+1} = y \mid \hat X_n = x, A) = \frac{\P(X_{m - n - 1} = y)}{\P(X_{m - n} = x)} P(y, x)$
However, the backwards chain will be time homogeneous if $X_0$ has an invariant distribution.
Suppose that $\bs X$ is irreducible and positive recurrent, with (unique) invariant probability density function $f$. If $X_0$ has the invariant probability distribution, then $\hat{\bs X}$ is a time-homogeneous Markov chain with transition matrix $\hat P$ given by $\hat P(x, y) = \frac{f(y)}{f(x)} P(y, x), \quad (x, y) \in S^2$
Proof
This follows from the result above. Recall that if $X_0$ has PDF $f$, then $X_k$ has PDF $f$ for each $k \in \N$.
Recall that a discrete-time Markov chain is ergodic if it is irreducible, positive recurrent, and aperiodic. For an ergodic chain, the previous result holds in the limit of the terminal time.
Suppose that $\bs X$ is ergodic, with (unique) invariant probability density function $f$. Regardless of the distribution of $X_0$, $\P(\hat X_{n+1} = y \mid \hat X_n = x) \to \frac{f(y)}{f(x)} P(y, x) \text{ as } m \to \infty$
Proof
This follows from the conditional probability above and our study of the limiting behavior of Markov chains. Since $\bs X$ is ergodic, $\P(X_k = x) \to f(x)$ as $k \to \infty$ for every $x \in S$.
These three results are motivation for the definition that follows. We can generalize by defining the reversal of an irreducible Markov chain, as long as there is a positive, invariant function. Recall that a positive invariant function defines a positive measure on $S$, but of course not in general a probability distribution.
Suppose that $\bs X$ is an irreducible Markov chain with transition matrix $P$, and that $g: S \to (0, \infty)$ is invariant for $\bs X$. The reversal of $\bs X$ with respect to $g$ is the Markov chain $\hat{\bs X} = (\hat X_0, \hat X_1, \ldots)$ with transition probability matrix $\hat P$ defined by $\hat P(x, y) = \frac{g(y)}{g(x)} P(y, x), \quad (x, y) \in S^2$
Proof
We need to show that $\hat P$ is a valid transition probability matrix, so that the definition makes sense. Since $g$ is invariant for $\bs X$, $\sum_{y \in S} \hat P(x, y) = \frac{1}{g(x)} \sum_{y \in S} g(y) P(y, x) = \frac{g(x)}{g(x)} = 1, \quad x \in S$
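For a finite chain, the reversal is easy to compute numerically. The following Python sketch (assuming NumPy is available; the 3-state transition matrix is an arbitrary illustration, not an example from the text) finds a positive invariant vector $g$ as a left eigenvector of $P$ for the eigenvalue 1, forms $\hat P(x, y) = g(y) P(y, x) / g(x)$, and checks that the rows of $\hat P$ sum to 1.

```python
import numpy as np

# Arbitrary illustrative 3-state chain (not from the text); each row sums to 1.
P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])

# A positive invariant vector g: a left eigenvector of P for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
g = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
g = g / g.sum()            # normalize; any positive multiple of g gives the same reversal

# Reversed transition matrix: P_hat(x, y) = g(y) P(y, x) / g(x)
P_hat = (P.T * g) / g[:, None]

print(P_hat.round(4))
print(P_hat.sum(axis=1))   # each row sums to 1, so P_hat is a transition probability matrix
```

Because $g$ and $c g$ generate the same reversed chain, the normalization of $g$ in the sketch is only for convenience.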
Recall that if $g$ is a positive invariant function for $\bs X$ then so is $c g$ for every positive constant $c$. Note that $g$ and $c g$ generate the same reversed chain. So let's consider the cases:
Suppose that $\bs X$ is an irreducible Markov chain on $S$.
1. If $\bs X$ is recurrent, then $\bs X$ always has a positive invariant function that is unique up to multiplication by positive constants. Hence the reversal of a recurrent chain $\bs X$ always exists and is unique, and so we can refer to the reversal of $\bs X$ without reference to the invariant function.
2. Even better, if $\bs X$ is positive recurrent, then there exists a unique invariant probability density function, and the reversal of $\bs X$ can be interpreted as the time reversal (with respect to a terminal time) when $\bs X$ has the invariant distribution, as in the motivating exercises above.
3. If $\bs X$ is transient, then there may or may not exist a positive invariant function, and if one does exist, it may not be unique (up to multiplication by positive constants). So a transient chain may have no reversals or more than one.
Nonetheless, the general definition is natural, because most of the important properties of the reversed chain follow from the balance equation between the transition matrices $P$ and $\hat P$, and the invariant function $g$: $g(x) \hat P(x, y) = g(y) P(y, x), \quad (x, y) \in S^2$ We will see this balance equation repeated with other objects related to the Markov chains.
Suppose that $\bs X$ is an irreducible Markov chain with invariant function $g: S \to (0, \infty)$, and that $\hat{\bs X}$ is the reversal of $\bs X$ with respect to $g$. For $x, \, y \in S$,
1. $\hat P(x, x) = P(x, x)$
2. $\hat P(x, y) \gt 0$ if and only if $P(y, x) \gt 0$
Proof
These results follow immediately from the balance equation $g(x) \hat P(x, y) = g(y) P(y, x)$ for $(x, y) \in S^2$.
From part (b) it follows that the state graphs of $\bs X$ and $\hat{\bs X}$ are reverses of each other. That is, to go from the state graph of one chain to the state graph of the other, simply reverse the direction of each edge. Here is a more complicated (but equivalent) version of the balance equation for chains of states:
Suppose again that $\bs X$ is an irreducible Markov chain with invariant function $g: S \to (0, \infty)$, and that $\hat{\bs X}$ is the reversal of $\bs X$ with respect to $g$. For every $n \in \N_+$ and every sequence of states $(x_1, x_2, \ldots, x_n, x_{n+1}) \in S^{n+1}$, $g(x_1) \hat P(x_1, x_2) \hat P(x_2, x_3) \cdots \hat P(x_n, x_{n+1}) = g(x_{n+1}) P(x_{n+1}, x_n) \cdots P(x_3, x_2) P(x_2, x_1)$
Proof
This follows from repeated applications of the basic equation. When $n = 1$, we have the balance equation itself: $g(x_1) \hat P(x_1, x_2) = g(x_2) P(x_2, x_1)$ For $n = 2$, $g(x_1) \hat P(x_1, x_2) \hat P(x_2, x_3) = g(x_2)P(x_2, x_1)\hat P(x_2, x_3) = g(x_3)P(x_3, x_2)P(x_2, x_1)$ Continuing in this manner (or using induction) gives the general result.
The balance equation holds for the powers of the transition matrix:
Suppose again that $\bs X$ is an irreducible Markov chain with invariant function $g: S \to (0, \infty)$, and that $\hat{\bs X}$ is the reversal of $\bs X$ with respect to $g$. For every $(x, y) \in S^2$ and $n \in \N$, $g(x) \hat P^n(x, y) = g(y) P^n (y, x)$
Proof
When $n = 0$, the left and right sides are $g(x)$ if $x = y$ and are 0 otherwise. When $n = 1$, we have the basic balance equation: $g(x) \hat P(x, y) = g(y) P(y, x)$. In general, for $n \in \N_+$, by the previous result we have \begin{align*} g(x) \hat P^n(x, y) &= \sum_{(x_1, \ldots, x_{n-1}) \in S^{n-1}} g(x) \hat P(x, x_1) \hat P(x_1, x_2) \cdots \hat P(x_{n-1}, y) \\ &= \sum_{(x_1, \ldots, x_{n-1}) \in S^{n-1}} g(y) P(y, x_{n-1}) P(x_{n-1}, x_{n-2}) \cdots P(x_1, x) = g(y) P^n(y, x) \end{align*}
We can now generalize the simple result above.
Suppose again that $\bs X$ is an irreducible Markov chain with invariant function $g: S \to (0, \infty)$, and that $\hat{\bs X}$ is the reversal of $\bs X$ with respect to $g$. For $n \in \N$ and $(x, y) \in S^2$,
1. $P^n(x, x) = \hat P^n(x, x)$
2. $\hat P^n(x, y) \gt 0$ if and only if $P^n(y, x) \gt 0$
In terms of the state graphs, part (b) has an obvious meaning: If there exists a path of length $n$ from $y$ to $x$ in the original state graph, then there exists a path of length $n$ from $x$ to $y$ in the reversed state graph. The time reversal definition is symmetric with respect to the two Markov chains.
Suppose again that $\bs X$ is an irreducible Markov chain with invariant function $g: S \to (0, \infty)$, and that $\hat{\bs X}$ is the reversal of $\bs X$ with respect to $g$. Then
1. $g$ is also invariant for $\hat{\bs X}$.
2. $\hat{\bs X}$ is also irreducible.
3. $\bs X$ is the reversal of $\hat{\bs X}$ with respect to $g$.
Proof
1. For $y \in S$, using the balance equation, $\sum_{x \in S} g(x) \hat P(x, y) = \sum_{x \in S} g(y) P(y, x) = g(y)$
2. Suppose $(x, y) \in S^2$. Since $\bs X$ is irreducible, there exists $n \in \N$ with $P^n(y, x) \gt 0$. But then from the previous result, $\hat P^n(x, y) \gt 0$. Hence $\hat{\bs X}$ is also irreducible.
3. This is clear from the symmetric relationship in the fundamental result.
The balance equation also holds for the potential matrices.
Suppose that $\bs X$ and $\hat{\bs X}$ are time reversals with respect to the invariant function $g: S \to (0, \infty)$. For $\alpha \in (0, 1]$, the $\alpha$ potential matrices are related by $g(x) \hat R_\alpha(x, y) = g(y) R_\alpha(y, x), \quad (x, y) \in S^2$
Proof
This follows easily from the result above and the definition of the potential matrices: \begin{align*} g(x) \hat R_\alpha(x, y) & = g(x) \sum_{n=0}^\infty \alpha^n \hat P^n(x, y) = \sum_{n=0}^\infty \alpha^n g(x) \hat P^n(x, y) \\ & = \sum_{n=0}^\infty \alpha^n g(y) P^n(y, x) = g(y) \sum_{n=0}^\infty \alpha^n P^n(y, x) = g(y) R_\alpha(y, x) \end{align*}
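Both balance equations can be spot-checked numerically for a finite chain. The sketch below (NumPy assumed, with the same kind of arbitrary illustrative chain as before) verifies the identity for the matrix powers and for the potential matrices, using the standard fact that $R_\alpha = \sum_{n=0}^\infty \alpha^n P^n = (I - \alpha P)^{-1}$ when $\alpha \lt 1$.

```python
import numpy as np

# Arbitrary illustrative chain and its reversal, as in the earlier sketch.
P = np.array([[0.0, 0.7, 0.3],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])
eigvals, eigvecs = np.linalg.eig(P.T)
g = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
g = g / g.sum()
P_hat = (P.T * g) / g[:, None]

# Balance for powers: g(x) P_hat^n(x, y) = g(y) P^n(y, x)
for n in range(6):
    Pn = np.linalg.matrix_power(P, n)
    Pn_hat = np.linalg.matrix_power(P_hat, n)
    assert np.allclose(g[:, None] * Pn_hat, (g[:, None] * Pn).T)

# Balance for potential matrices; for alpha < 1, R_alpha = (I - alpha P)^{-1}
for alpha in (0.3, 0.9):
    R = np.linalg.inv(np.eye(3) - alpha * P)
    R_hat = np.linalg.inv(np.eye(3) - alpha * P_hat)
    assert np.allclose(g[:, None] * R_hat, (g[:, None] * R).T)

print("balance equations verified")
```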
Markov chains that are time reversals share many important properties:
Suppose that $\bs X$ and $\hat{\bs X}$ are time reversals. Then
1. $\bs X$ and $\hat{\bs X}$ are of the same type (transient, null recurrent, or positive recurrent).
2. $\bs X$ and $\hat{\bs X}$ have the same period.
3. $\bs X$ and $\hat{\bs X}$ have the same mean return time $\mu(x)$ for every $x \in S$.
Proof
Suppose that $\bs X$ and $\hat{\bs X}$ are time reversals with respect to the invariant function $g: S \to (0, \infty)$.
1. The expected number of visits to a state $x \in S$, starting in $x$, is the same for both chains: $\hat R(x, x) = R(x, x)$. Hence either both chains are transient (if the common potential is finite) or both chains are recurrent (if the common potential is infinite). If both chains are recurrent then the invariant function $g$ is unique up to multiplication by positive constants, and both are null recurrent if $\sum_{x \in S} g(x) = \infty$ and both are positive recurrent if $\sum_{x \in S} g(x) \lt \infty$.
2. This follows since $P^n(x, x) = \hat P^n(x, x)$ for all $n \in \N$ and $x \in S$.
3. If both chains are transient or both are null recurrent, then $\mu(x) = \hat \mu(x) = \infty$ for all $x \in S$. If both chains are positive recurrent, then for all $n \in \N$ and $x \in S$, we have $\frac{1}{n} \sum_{k = 1}^n P^k(x, x) = \frac{1}{n} \sum_{k = 1}^n \hat P^k(x, x)$ The left side converges to $1 / \mu(x)$ as $n \to \infty$ while the right side converges to $1 / \hat \mu(x)$ as $n \to \infty$.
The main point of the next result is that we don't need to know a priori that $g$ is invariant for $\bs X$, if we can guess $g$ and $\hat P$.
Suppose again that $\bs X$ is irreducible with transition probability matrix $P$. If there exists a function $g: S \to (0, \infty)$ and a transition probability matrix $\hat P$ such that $g(x) \hat P(x, y) = g(y) P(y, x)$ for all $(x, y) \in S^2$, then
1. $g$ is invariant for $\bs X$.
2. $\hat P$ is the transition matrix of the reversal of $\bs X$ with respect to $g$.
Proof
1. Since $\hat P$ is a transition probability matrix, the same computation as before gives $(g P)(x) = \sum_{y \in S} g(y) P(y, x) = \sum_{y \in S} g(x) \hat P(x, y) = g(x), \quad x \in S$
2. This follows from (a) and the definition.
As a corollary, if there exists a probability density function $f$ on $S$ and a transition probability matrix $\hat P$ such that $f(x) \hat P(x, y) = f(y) P(y, x)$ for all $(x, y) \in S^2$ then in addition to the conclusions above, we know that the chains $\bs X$ and $\hat{\bs X}$ are positive recurrent.
Reversible Chains
Clearly, an interesting special case occurs when the transition matrix of the reversed chain turns out to be the same as the original transition matrix. A chain of this type could be used to model a physical process that is stochastically the same, forward or backward in time.
Suppose again that $\bs X = (X_0, X_1, X_2, \ldots)$ is an irreducible Markov chain with transition matrix $P$ and invariant function $g: S \to (0, \infty)$. If the reversal of $\bs X$ with respect to $g$ also has transition matrix $P$, then $\bs X$ is said to be reversible with respect to $g$. That is, $\bs X$ is reversible with respect to $g$ if and only if $g(x) P(x, y) = g(y) P(y, x), \quad (x, y) \in S^2$
Clearly if $\bs X$ is reversible with respect to the invariant function $g: S \to (0, \infty)$ then $\bs X$ is reversible with respect to the invariant function $c g$ for every $c \in (0, \infty)$. So again, let's review the cases.
Suppose that $\bs X$ is an irreducible Markov chain on $S$.
1. If $\bs X$ is recurrent, there exists a positive invariant function that is unique up to multiplication by positive constants. So $\bs X$ is either reversible or not, and we don't have to reference the invariant function $g$.
2. If $\bs X$ is positive recurrent then there exists a unique invariant probability density function $f: S \to (0, 1)$, and again, either $\bs X$ is reversible or not. If $\bs X$ is reversible, then $P$ is the transition matrix of $\bs X$ forward or backward in time, when the chain has the invariant distribution.
3. If $\bs X$ is transient, there may or may not exist positive invariant functions. If there are two or more positive invariant functions that are not multiples of one another, $\bs X$ might be reversible with respect to one function but not the others.
The non-symmetric simple random walk on $\Z$ falls into the last case. Using the last result in the previous subsection, we can tell whether $\bs X$ is reversible with respect to $g$ without knowing a priori that $g$ is invariant.
Suppose again that $\bs X$ is irreducible with transition matrix $P$. If there exists a function $g: S \to (0, \infty)$ such that $g(x) P(x, y) = g(y) P(y, x)$ for all $(x, y) \in S^2$, then
1. $g$ is invariant for $\bs X$.
2. $\bs X$ is reversible with respect to $g$.
If we have reason to believe that a Markov chain is reversible (based on modeling considerations, for example), then the condition in the previous theorem can be used to find the invariant functions. This procedure is often easier than using the definition of invariance directly. The next two results are minor generalizations:
Suppose again that $\bs X$ is irreducible and that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if for every $n \in \N_+$ and every sequence of states $(x_1, x_2, \ldots, x_n, x_{n+1}) \in S^{n+1}$, $g(x_1) P(x_1, x_2) P(x_2, x_3) \cdots P(x_n, x_{n+1}) = g(x_{n+1}) P(x_{n+1}, x_n) \cdots P(x_3, x_2) P(x_2, x_1)$
Suppose again that $\bs X$ is irreducible and that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if for every $(x, y) \in S^2$ and $n \in \N_+$, $g(x) P^n(x, y) = g(y) P^n(y, x)$
Here is the condition for reversibility in terms of the potential matrices.
Suppose again that $\bs X$ is irreducible and that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if $g(x) R_\alpha(x, y) = g(y) R_\alpha(y, x), \quad \alpha \in (0, 1], \, (x, y) \in S^2$
In the positive recurrent case (the most important case), the following theorem gives a condition for reversibility that does not directly reference the invariant distribution. The condition is known as the Kolmogorov cycle condition, and is named for Andrei Kolmogorov.
Suppose that $\bs X$ is irreducible and positive recurrent. Then $\bs X$ is reversible if and only if for every sequence of states $(x_1, x_2, \ldots, x_n)$, $P(x_1, x_2) P(x_2, x_3) \cdots P(x_{n-1}, x_n) P(x_n, x_1) = P(x_1, x_n) P(x_n, x_{n-1}) \cdots P(x_3, x_2) P(x_2, x_1)$
Proof
Suppose that $\bs X$ is reversible. Applying the chain result above to the sequence $(x_1, x_2, \ldots, x_n, x_1)$ gives the Kolmogorov cycle condition. Conversely, suppose that the Kolmogorov cycle condition holds, and let $f$ denote the invariant probability density function of $\bs X$. From the cycle condition we have $P(x, y) P^k(y, x) = P(y, x)P^k(x, y)$ for every $(x, y) \in S^2$ and $k \in \N_+$. Averaging over $k$ from 1 to $n$ gives $P(x, y) \frac{1}{n} \sum_{k=1}^n P^k(y, x) = P(y, x) \frac{1}{n} \sum_{k = 1}^n P^k(x, y), \quad (x, y) \in S^2, \; n \in \N_+$ Letting $n \to \infty$ gives $f(x) P(x, y) = f(y) P(y, x)$ for $(x, y) \in S^2$, so $\bs X$ is reversible.
Note that the Kolmogorov cycle condition states that the probability of visiting states $(x_2, x_3, \ldots, x_n, x_1)$ in sequence, starting in state $x_1$ is the same as the probability of visiting states $(x_n, x_{n-1}, \ldots, x_2, x_1)$ in sequence, starting in state $x_1$. The cycle condition is also known as the balance equation for cycles.
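For a chain with a small finite state space, the cycle condition can be tested by brute force. The sketch below (NumPy assumed; both transition matrices are arbitrary illustrations, not examples from the text) checks all cycles through at most four states. This is only a partial check, since reversibility requires the condition for cycles of every length, but a violation typically shows up already on short cycles.

```python
import numpy as np
from itertools import product

def cycle_condition(P, max_len=4, tol=1e-12):
    """Check the Kolmogorov cycle condition for all cycles through at most max_len states."""
    S = range(P.shape[0])
    for n in range(2, max_len + 1):
        for cycle in product(S, repeat=n):
            fwd = np.prod([P[cycle[i], cycle[(i + 1) % n]] for i in range(n)])
            bwd = np.prod([P[cycle[(i + 1) % n], cycle[i]] for i in range(n)])
            if abs(fwd - bwd) > tol:
                return False
    return True

# A symmetric chain is reversible; a chain with a preferred cyclic direction is not.
P_sym = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])
P_cyc = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])
print(cycle_condition(P_sym), cycle_condition(P_cyc))   # True False
```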
Examples and Applications
Finite Chains
Recall the general two-state chain $\bs X$ on $S = \{0, 1\}$ with the transition probability matrix $P = \left[ \begin{matrix} 1 - p & p \\ q & 1 - q \end{matrix} \right]$ where $p, \, q \in (0, 1)$ are parameters. The chain $\bs X$ is reversible and the invariant probability density function is $f = \left( \frac{q}{p + q}, \frac{p}{p + q} \right)$.
Proof
All we have to do is note that $\left[\begin{matrix} q & p \end{matrix}\right] \left[ \begin{matrix} 1 - p & p \\ q & 1 - q \end{matrix} \right] = \left[\begin{matrix} q & p \end{matrix}\right]$ so that $(q, p)$, and hence its normalization $f$, is invariant. Reversibility follows from the balance condition $q P(0, 1) = p q = p P(1, 0)$.
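As a quick numerical sanity check (NumPy assumed; the values of $p$ and $q$ are arbitrary), the following sketch verifies both the invariance of $f$ and the detailed balance condition.

```python
import numpy as np

p, q = 0.3, 0.8                      # arbitrary parameters in (0, 1)
P = np.array([[1 - p, p],
              [q, 1 - q]])
f = np.array([q, p]) / (p + q)       # claimed invariant distribution

print(np.allclose(f @ P, f))                       # f is invariant
print(np.isclose(f[0] * P[0, 1], f[1] * P[1, 0]))  # detailed balance, so the chain is reversible
```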
Suppose that $\bs X$ is a Markov chain on a finite state space $S$ with symmetric transition probability matrix $P$. Thus $P(x, y) = P(y, x)$ for all $(x, y) \in S^2$. The chain $\bs X$ is reversible and the uniform distribution on $S$ is invariant.
Proof
All we have to do is note that $\bs{1}(x) P(x, y) = \bs{1}(y) P(y, x)$ where $\bs{1}$ is the constant function 1 on $S$.
Consider the Markov chain $\bs X$ on $S = \{a, b, c\}$ with transition probability matrix $P$ given below:
$P = \left[ \begin{matrix} \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{2} & 0 \end{matrix} \right]$
1. Draw the state graph of $\bs X$ and note that the chain is irreducible.
2. Find the invariant probability density function $f$.
3. Find the mean return time to each state.
4. Find the transition probability matrix $\hat P$ of the time-reversed chain $\hat{\bs X}$.
5. Draw the state graph of $\hat{\bs X}$.
Answer
2. $f = \left( \frac{6}{17}, \frac{6}{17}, \frac{5}{17}\right)$
3. $\mu = \left( \frac{17}{6}, \frac{17}{6}, \frac{17}{5} \right)$
4. $\hat P = \left[ \begin{matrix} \frac{1}{4} & \frac{1}{3} & \frac{5}{12} \\ \frac{1}{4} & \frac{1}{3} & \frac{5}{12} \\ \frac{3}{5} & \frac{2}{5} & 0 \end{matrix} \right]$
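The computations in this exercise can be reproduced numerically. The sketch below (NumPy assumed) finds the invariant distribution as a left eigenvector, takes reciprocals for the mean return times, and forms the reversed transition matrix.

```python
import numpy as np

P = np.array([[1/4, 1/4, 1/2],
              [1/3, 1/3, 1/3],
              [1/2, 1/2, 0]])

# Invariant pdf f: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
f = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
f = f / f.sum()
print(f * 17)          # approximately [6, 6, 5], so f = (6/17, 6/17, 5/17)

print(1 / f)           # mean return times: approximately (17/6, 17/6, 17/5)

# Reversed transition matrix: P_hat(x, y) = f(y) P(y, x) / f(x)
P_hat = (P.T * f) / f[:, None]
print(P_hat.round(4))  # rows (1/4, 1/3, 5/12), (1/4, 1/3, 5/12), (3/5, 2/5, 0)
```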
Special Models
Read the discussion of reversibility for the Ehrenfest chains.
Read the discussion of reversibility for the Bernoulli-Laplace chain.
Read the discussion of reversibility for the random walks on graphs.
Read the discussion of time reversal for the reliability chains.
Read the discussion of reversibility for the birth-death chains. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.07%3A_Time_Reversal_in_Discrete-Time_Chains.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
Basic Theory
The Ehrenfest chains, named for Paul Ehrenfest, are simple, discrete models for the exchange of gas molecules between two containers. However, they can be formulated as simple ball and urn models; the balls correspond to the molecules and the urns to the two containers. Thus, suppose that we have two urns, labeled 0 and 1, that contain a total of $m$ balls. The state of the system at time $n \in \N$ is the number of balls in urn 1, which we will denote by $X_n$. Our stochastic process is $\bs{X} = (X_0, X_1, X_2, \ldots)$ with state space $S = \{0, 1, \ldots, m\}$. Of course, the number of balls in urn 0 at time $n$ is $m - X_n$.
The Models
In the basic Ehrenfest model, at each discrete time unit, independently of the past, a ball is selected at random and moved to the other urn.
$\bs{X}$ is a discrete-time Markov chain on $S$ with transition probability matrix $P$ given by $P(x, x - 1) = \frac{x}{m}, \; P(x, x + 1) = \frac{m - x}{m}, \quad x \in S$
Proof
We will give a construction of the chain from a more basic process. Let $V_n$ be the ball selected at time $n \in \N_+$. Thus $\bs{V} = (V_1, V_2, \ldots)$ is a sequence of independent random variables, each uniformly distributed on $\{1, 2, \ldots, m\}$. Let $X_0 \in S$ be independent of $\bs{V}$. (We can start the process any way that we like.) Now define the state process recursively as follows: $X_{n+1} = \begin{cases} X_n - 1, & V_{n+1} \le X_n \\ X_n + 1, & V_{n+1} \gt X_n \end{cases}, \quad n \in \N$ Since $V_{n+1}$ is uniformly distributed on $\{1, 2, \ldots, m\}$ and is independent of $(X_0, V_1, \ldots, V_n)$, the conditional distribution of $X_{n+1}$ given $(X_0, X_1, \ldots, X_n)$ depends only on $X_n$: given $X_n = x$, the next state is $x - 1$ with probability $x / m$ and $x + 1$ with probability $(m - x) / m$.
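The construction above translates directly into a simulation. The following sketch (NumPy assumed; the values of $m$, the run length, and the seed are arbitrary) runs the basic chain and compares the long-run fraction of time spent in each state with the binomial$(m, 1/2)$ distribution, anticipating the invariant distribution derived below.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
m, steps = 10, 200_000

x = int(rng.integers(0, m + 1))       # arbitrary initial state in S
counts = np.zeros(m + 1)
for _ in range(steps):
    v = rng.integers(1, m + 1)        # V_{n+1}, uniform on {1, ..., m}
    x = x - 1 if v <= x else x + 1
    counts[x] += 1

f = np.array([comb(m, k) / 2**m for k in range(m + 1)])    # binomial(m, 1/2) pdf
print(np.column_stack([counts / steps, f]).round(4))       # occupation fractions vs. f
```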
In the Ehrenfest experiment, select the basic model. For selected values of $m$ and selected values of the initial state, run the chain for 1000 time steps and note the limiting behavior of the proportion of time spent in each state.
Suppose now that we modify the basic Ehrenfest model as follows: at each discrete time, independently of the past, we select a ball at random and an urn at random. We then put the chosen ball in the chosen urn.
$\bs{X}$ is a discrete-time Markov chain on $S$ with the transition probability matrix $Q$ given by $Q(x, x - 1) = \frac{x}{2 m}, \; Q(x, x) = \frac{1}{2}, \; Q(x, x + 1) = \frac{m - x}{2 m}, \quad x \in S$
Proof
Again, we can construct the chain from a more basic process. Let $X_0$ and $\bs{V}$ be as in Theorem 1. Let $U_n$ be the urn selected at time $n \in \N_+$. Thus $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, each uniformly distributed on $\{0, 1\}$ (so that $\bs{U}$ is a fair, Bernoulli trials sequence). Also, $\bs{U}$ is independent of $\bs{V}$ and $X_0$. Now define the state process recursively as follows: $X_{n+1} = \begin{cases} X_n - 1, & V_{n+1} \le X_n, \; U_{n+1} = 0 \\ X_n + 1, & V_{n+1} \gt X_n, \; U_{n+1} = 1 \\ X_n, & \text{otherwise} \end{cases}, \quad n \in \N$
Note that $Q(x, y) = \frac{1}{2} P(x, y)$ for $y \in \{x - 1, x + 1\}$.
In the Ehrenfest experiment, select the modified model. For selected values of $m$ and selected values of the initial state, run the chain for 1000 time steps and note the limiting behavior of the proportion of time spent in each state.
Classification
The basic and modified Ehrenfest chains are irreducible and positive recurrent.
Proof
The chains are clearly irreducible since every state leads to every other state. It follows that the chains are positive recurrent since the state space $S$ is finite.
The basic Ehrenfest chain is periodic with period 2. The cyclic classes are the set of even states and the set of odd states. The two-step transition matrix is
$P^2(x, x - 2) = \frac{x (x - 1)}{m^2}, \; P^2(x, x) = \frac{x(m - x + 1) + (m - x)(x + 1)}{m^2}, \; P^2(x, x + 2) = \frac{(m - x)(m - x - 1)}{m^2}, \quad x \in S$
Proof
Note that returns to a state can only occur at even times, so the chain has period 2. The form of $P^2$ follows from the formula for $P$ above.
The modified Ehrenfest chain is aperiodic.
Proof
Note that $Q(x, x) = \frac{1}{2} \gt 0$ for each $x \in S$, so every state is aperiodic.
Invariant and Limiting Distributions
For the basic and modified Ehrenfest chains, the invariant distribution is the binomial distribution with trial parameter $m$ and success parameter $\frac{1}{2}$. So the invariant probability density function $f$ is given by $f(x) = \binom{m}{x} \left( \frac{1}{2} \right)^m, \quad x \in S$
Proof
For the basic chain we have \begin{align*} (f P)(y) & = f(y - 1) P(y - 1, y) + f(y + 1) P(y + 1, y) \\ & = \binom{m}{y - 1} \left(\frac{1}{2}\right)^m \frac{m - y + 1}{m} + \binom{m}{y + 1} \left(\frac{1}{2}\right)^m \frac{y + 1}{m} \\ & = \left(\frac{1}{2}\right)^m \left[\binom{m - 1}{y - 1} + \binom{m - 1}{y}\right] = \left(\frac{1}{2}\right)^m \binom{m}{y} = f(y), \quad y \in S \end{align*} The last step uses a fundamental identity for binomial coefficients. For the modified chain we can use the result for the basic chain: \begin{align*} (f Q)(y) & = f(y - 1) Q(y - 1, y) + f(y) Q(y, y) + f(y + 1)Q(y + 1, y) \\ & = \frac{1}{2} f(y - 1) P(y - 1, y) + \frac{1}{2} f(y + 1) P(y + 1, y) + \frac{1}{2} f(y) = f(y), \quad y \in S \end{align*}
Thus, the invariant distribution corresponds to placing each ball randomly and independently either in urn 0 or in urn 1.
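The invariance of the binomial distribution is easy to confirm numerically for a particular $m$. The sketch below (NumPy assumed; $m = 7$ is an arbitrary choice) builds $P$ and $Q$ and checks that $f P = f$ and $f Q = f$.

```python
import numpy as np
from math import comb

m = 7                                  # arbitrary number of balls
P = np.zeros((m + 1, m + 1))           # basic Ehrenfest chain
for x in range(m + 1):
    if x > 0:
        P[x, x - 1] = x / m
    if x < m:
        P[x, x + 1] = (m - x) / m
Q = P / 2 + np.eye(m + 1) / 2          # modified chain: Q(x, y) = P(x, y)/2 off the diagonal, Q(x, x) = 1/2

f = np.array([comb(m, x) / 2**m for x in range(m + 1)])   # binomial(m, 1/2) pdf
print(np.allclose(f @ P, f), np.allclose(f @ Q, f))       # True True
```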
The mean return time to state $x \in S$ for the basic or modified Ehrenfest chain is $\mu(x) = 2^m \big/ \binom{m}{x}$.
Proof
This follows from the general theory and the invariant distribution above.
For the basic Ehrenfest chain, the limiting behavior of the chain is as follows:
1. $P^{2 n}(x, y) \to \binom{m}{y} \left(\frac{1}{2}\right)^{m-1}$ as $n \to \infty$ if $x, \, y \in S$ have the same parity (both even or both odd). The limit is 0 otherwise.
2. $P^{2 n+1}(x, y) \to \binom{m}{y} \left(\frac{1}{2}\right)^{m-1}$ as $n \to \infty$ if $x, \, y \in S$ have opposite parity (one even and one odd). The limit is 0 otherwise.
Proof
These results follow from the general theory and the invariant distribution above, and the fact that the chain is periodic with period 2, with the odd and even integers in $S$ as the cyclic classes.
For the modified Ehrenfest chain, $Q^n(x, y) \to \binom{m}{y} \left(\frac{1}{2}\right)^m$ as $n \to \infty$ for $x, \, y \in S$.
Proof
Again, this follows from the general theory and the invariant distribution above, and the fact that the chain is aperiodic.
In the Ehrenfest experiment, the limiting binomial distribution is shown graphically and numerically. For each model and for selected values of $m$ and selected values of the initial state, run the chain for 1000 time steps and note the limiting behavior of the proportion of time spent in each state. How do the choices of $m$, the initial state, and the model seem to affect the rate of convergence to the limiting distribution?
Reversibility
The basic and modified Ehrenfest chains are reversible.
Proof
Let $g(x) = \binom{m}{x}$ for $x \in S$. The crucial observations are $g(x) P(x, y) = g(y) P(y, x)$ and $g(x) Q(x, y) = g(y) Q(y, x)$ for all $x, \, y \in S$. For the basic chain, if $x \in S$ then \begin{align*} g(x) P(x, x - 1) & = g(x - 1) P(x - 1, x) = \binom{m - 1}{x - 1} \\ g(x) P(x, x + 1) & = g(x + 1) P(x + 1, x) = \binom{m - 1}{x} \end{align*} In all other cases, $g(x) P(x, y) = g(y) P(y, x) = 0$. The reversibility condition for the modified chain follows trivially from that of the basic chain since $Q(x, y) = \frac{1}{2} P(x, y)$ for $y = x \pm 1$ (and of course the reversibility condition is trivially satisfied when $x = y$). Note that the invariant PDF $f$ is simply $g$ normalized. The reversibility condition gives another (and better) proof that $f$ is invariant.
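Equivalently, reversibility with respect to $g(x) = \binom{m}{x}$ says that the matrix with entries $g(x) P(x, y)$ is symmetric, and similarly for $Q$. A brief numerical check (NumPy assumed; $m$ is again arbitrary):

```python
import numpy as np
from math import comb

m = 7
P = np.zeros((m + 1, m + 1))
for x in range(m + 1):
    if x > 0:
        P[x, x - 1] = x / m
    if x < m:
        P[x, x + 1] = (m - x) / m
Q = P / 2 + np.eye(m + 1) / 2

g = np.array([comb(m, x) for x in range(m + 1)], dtype=float)
print(np.allclose(g[:, None] * P, (g[:, None] * P).T))   # g(x) P(x, y) = g(y) P(y, x)
print(np.allclose(g[:, None] * Q, (g[:, None] * Q).T))   # same condition for Q
```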
Run the simulation of the Ehrenfest experiment 10,000 time steps for each model, for selected values of $m$, and with initial state 0. Note that at first, you can see the arrow of time. After a long period, however, the direction of time is no longer evident.
Computational Exercises
Consider the basic Ehrenfest chain with $m = 5$ balls, and suppose that $X_0$ has the uniform distribution on $S$.
1. Compute the probability density function, mean and variance of $X_1$.
2. Compute the probability density function, mean and variance of $X_2$.
3. Compute the probability density function, mean and variance of $X_3$.
4. Sketch the initial probability density function and the probability density functions in parts (a), (b), and (c) on a common set of axes.
Answer
1. $f_1 = \left( \frac{1}{30}, \frac{7}{30}, \frac{7}{30}, \frac{7}{30}, \frac{7}{30}, \frac{1}{30} \right)$, $\mu_1 = \frac{5}{2}$, $\sigma_1^2 = \frac{19}{12}$
2. $f_2 = \left( \frac{7}{150}, \frac{19}{150}, \frac{49}{150}, \frac{49}{150}, \frac{19}{150}, \frac{7}{150} \right)$, $\mu_2 = \frac{5}{2}$, $\sigma_2^2 = \frac{79}{60}$
3. $f_3 = \left( \frac{19}{750}, \frac{133}{750}, \frac{223}{750}, \frac{223}{750}, \frac{133}{750}, \frac{19}{750} \right)$, $\mu_3 = \frac{5}{2}$, $\sigma_3^2 = \frac{379}{300}$
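Exact rational arithmetic makes computations like these painless. The sketch below (only the standard fractions module is assumed) starts from the uniform distribution and computes the distribution, mean, and variance of $X_1$, $X_2$, and $X_3$ for the basic chain with $m = 5$.

```python
from fractions import Fraction

m = 5
S = range(m + 1)
P = [[Fraction(0)] * (m + 1) for _ in S]      # basic Ehrenfest chain with m = 5
for x in S:
    if x > 0:
        P[x][x - 1] = Fraction(x, m)
    if x < m:
        P[x][x + 1] = Fraction(m - x, m)

f = [Fraction(1, m + 1)] * (m + 1)            # X_0 uniform on S
for n in range(1, 4):
    f = [sum(f[x] * P[x][y] for x in S) for y in S]   # distribution of X_n
    mean = sum(x * f[x] for x in S)
    var = sum(x**2 * f[x] for x in S) - mean**2
    print(n, [str(p) for p in f], str(mean), str(var))
```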
Consider the modified Ehrenfest chain with $m = 5$ balls, and suppose that the chain starts in state 2 (with probability 1).
1. Compute the probability density function, mean and standard deviation of $X_1$.
2. Compute the probability density function, mean and standard deviation of $X_2$.
3. Compute the probability density function, mean and standard deviation of $X_3$.
4. Sketch the initial probability density function and the probability density functions in parts (a), (b), and (c) on a common set of axes.
Answer
1. $f_1 = (0, 0.2, 0.5, 0.3, 0, 0)$, $\mu_1 = 2.1$, $\sigma_1 = 0.7$
2. $f_2 = (0.02, 0.20, 0.42, 0.30, 0.06, 0)$, $\mu_2 = 2.18$, $\sigma_2 = 0.887$
3. $f_3 = (0.030, 0.194, 0.380, 0.300, 0.090, 0.006)$, $\mu_3 = 2.244$, $\sigma_3 = 0.984$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.08%3A_The_Ehrenfest_Chains.txt |
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
The Bernoulli-Laplace chain, named for Jacob Bernoulli and Pierre Simon Laplace, is a simple discrete model for the diffusion of two incompressible gases between two containers. Like the Ehrenfest chain, it can also be formulated as a simple ball and urn model. Thus, suppose that we have two urns, labeled 0 and 1. Urn 0 contains $j$ balls and urn 1 contains $k$ balls, where $j, \, k \in \N_+$. Of the $j + k$ balls, $r$ are red and the remaining $j + k - r$ are green. Thus $r \in \N_+$ and $0 \lt r \lt j + k$. At each discrete time, independently of the past, a ball is selected at random from each urn and then the two balls are switched. The balls of different colors correspond to molecules of different types, and the urns are the containers. The incompressible property is reflected in the fact that the number of balls in each urn remains constant over time.
Let $X_n$ denote the number of red balls in urn 1 at time $n \in \N$. Then
1. $k - X_n$ is the number of green balls in urn 1 at time $n$.
2. $r - X_n$ is the number of red balls in urn 0 at time $n$.
3. $j - r + X_n$ is the number of green balls in urn 0 at time $n$.
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a discrete-time Markov chain with state space $S = \{\max\{0, r - j\}, \ldots, \min\{k,r\}\}$ and with transition matrix $P$ given by $P(x, x - 1) = \frac{(j - r + x) x}{j k}, \; P(x, x) = \frac{(r - x) x + (j - r + x)(k - x)}{j k}, \; P(x, x + 1) = \frac{(r - x)(k - x)}{j k}; \quad x \in S$
Proof
For the state space, note from the previous result that the number of red balls $x$ in urn 1 must satisfy the inequalities $x \ge 0$, $x \le k$, $x \le r$, and $x \ge r - j$. The Markov property is clear from the model. For the transition probabilities, note that to go from state $x$ to state $x - 1$ we must select a green ball from urn 0 and a red ball from urn 1. The probabilities of these events are $(j - r + x) / j$ and $x / k$ for $x$ and $x - 1$ in $S$, and the events are independent. Similarly, to go from state $x$ to state $x + 1$ we must select a red ball from urn 0 and a green ball from urn 1. The probabilities of these events are $(r - x) / j$ and $(k - x) / k$ for $x$ and $x + 1$ in $S$, and the events are independent. Finally, to go from state $x$ back to state $x$, we must select a red ball from both urns or a green ball from both urns. Of course also, $P(x, x) = 1 - P(x, x - 1) - P(x, x + 1)$.
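The transition matrix is straightforward to build as a function of the parameters. The following sketch (NumPy assumed; the helper function name is ours, not from the text) constructs $P$ over the state space $S$, checks that the rows sum to 1, and prints the case $j = 10$, $k = 5$, $r = 4$, which appears in the first computational exercise below.

```python
import numpy as np

def bernoulli_laplace_matrix(j, k, r):
    """Transition matrix of the Bernoulli-Laplace chain on S = {max(0, r - j), ..., min(k, r)}."""
    S = list(range(max(0, r - j), min(k, r) + 1))
    P = np.zeros((len(S), len(S)))
    for i, x in enumerate(S):
        down = (j - r + x) * x / (j * k)          # P(x, x - 1)
        up = (r - x) * (k - x) / (j * k)          # P(x, x + 1)
        if i > 0:
            P[i, i - 1] = down
        if i < len(S) - 1:
            P[i, i + 1] = up
        P[i, i] = 1 - down - up                   # P(x, x); down and up vanish at the endpoints
    return S, P

S, P = bernoulli_laplace_matrix(10, 5, 4)
print(S)                       # [0, 1, 2, 3, 4]
print(P.sum(axis=1))           # each row sums to 1
print((50 * P).round(6))       # entries are multiples of 1/50
```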
This is a fairly complicated model, simply because of the number of parameters. Interesting special cases occur when some of the parameters are the same.
Consider the special case $j = k$, so that each urn has the same number of balls. The state space is $S = \{\max\{0, r - k\}, \ldots, \min\{k,r\}\}$ and the transition probability matrix is $P(x, x - 1) = \frac{(k - r + x) x}{k^2}, \; P(x, x) = \frac{(r - x) x + (k - r + x)(k - x)}{k^2}, \; P(x, x + 1) = \frac{(r - x)(k - x)}{k^2}; \quad x \in S$
Consider the special case $r = j$, so that the number of red balls is the same as the number of balls in urn 0. The state space is $S = \{0, \ldots, \min\{j, k\}\}$ and the transition probability matrix is $P(x, x - 1) = \frac{x^2}{j k}, \; P(x, x) = \frac{x (j + k - 2 x)}{j k}, \; P(x, x + 1) = \frac{(j - x)(k - x)}{j k}; \quad x \in S$
Consider the special case $r = k$, so that the number of red balls is the same as the number of balls in urn 1. The state space is $S = \{\max\{0, k - j\}, \ldots, k\}$ and the transition probability matrix is $P(x, x - 1) = \frac{(j - k + x) x}{j k}, \; P(x, x) = \frac{(k - x)(j - k + 2 x)}{j k}, \; P(x, x + 1) = \frac{(k - x)^2}{j k}; \quad x \in S$
Consider the special case $j = k = r$, so that each urn has the same number of balls, and this is also the number of red balls. The state space is $S = \{0, 1, \ldots, k\}$ and the transition probability matrix is $P(x, x - 1) = \frac{x^2}{k^2}, \; P(x, x) = \frac{2 x (k - x)}{k^2}, \; P(x, x + 1) = \frac{(k - x)^2}{k^2}; \quad x \in S$
Run the simulation of the Bernoulli-Laplace experiment for 10000 steps and for various values of the parameters. Note the limiting behavior of the proportion of time spent in each state.
Invariant and Limiting Distributions
The Bernoulli-Laplace chain is irreducible.
Proof
Note that $P(x, x - 1) \gt 0$ whenever $x, \, x - 1 \in S$, and $P(x, x + 1) \gt 0$ whenever $x, \, x + 1 \in S$. Hence every state leads to every other state so the chain is irreducible.
Except in the trivial case $j = k = r = 1$, the Bernoulli-Laplace chain is aperiodic.
Proof
Consideration of the state probabilities shows that except when $j = k = r = 1$, the chain has a state $x$ with $P(x, x) \gt 0$, so state $x$ is aperiodic. Since the chain is irreducible by the previous result, all states are aperiodic.
The invariant distribution is the hypergeometric distribution with population parameter $j + k$, sample parameter $k$, and type parameter $r$. The probability density function is $f(x) = \frac{\binom{r}{x} \binom{j + k - r}{k - x}}{\binom{j + k}{k}}, \quad x \in S$
Proof
A direct proof that $(f P)(x) = f(x)$ for all $x \in S$ is straightforward but tedious. A better proof follows from the reversibility condition below.
Thus, the invariant distribution corresponds to selecting a sample of $k$ balls at random and without replacement from the $j + k$ balls and placing them in urn 1. The mean and variance of the invariant distribution are $\mu = k \frac{r}{j + k}, \; \sigma^2 = k \frac{r}{j + k} \frac{j + k - r}{j + k} \frac{j}{j + k -1}$
The mean return time to each state $x \in S$ is $\mu(x) = \frac{\binom{j + k}{k}}{\binom{r}{x} \binom{j + k - r}{k - x}}$
Proof
This follows from the general theory and the invariant distribution above.
$P^n(x, y) \to f(y) = \binom{r}{y} \binom{j + k - r}{k - y} \big/ \binom{j + k}{k}$ as $n \to \infty$ for $(x, y) \in S^2$.
Proof
This follows from the general theory and the invariant distribution above.
In the simulation of the Bernoulli-Laplace experiment, vary the parameters and note the shape and location of the limiting hypergeometric distribution. For selected values of the parameters, run the simulation for 10000 steps and note the limiting behavior of the proportion of time spent in each state.
Reversibility
The Bernoulli-Laplace chain is reversible.
Proof
Let $g(x) = \binom{r}{x} \binom{j + k - r}{k - x}, \quad x \in S$ It suffices to show the reversibility condition $g(x) P(x, y) = g(y) P(y, x)$ for all $x, \, y \in S$. It then follows that $\bs{X}$ is reversible and that $g$ is invariant for $\bs{X}$. For $x \in S$ and $y = x - 1 \in S$, the left and right sides of the reversibility condition reduce to $\frac{1}{j k}\frac{r!}{(x - 1)! (r - x)!} \frac{(j + k - r)!}{(k - x)! (j - r + x - 1)!}$ For $x \in S$ and $y = x + 1 \in S$, the left and right sides of the reversibility condition reduce to $\frac{1}{j k} \frac{r!}{x! (r - x - 1)!} \frac{(j + k - r)!}{(k - x - 1)! (j - r + x)!}$ For all other values of $x, \, y \in S$, the reversibility condition is trivially satisfied. The hypergeometric PDF $f$ above is simply $g$ normalized, so this proves that $f$ is also invariant.
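The detailed balance condition $g(x) P(x, y) = g(y) P(y, x)$ says that the matrix with entries $g(x) P(x, y)$ is symmetric, which is easy to check numerically. The sketch below (NumPy assumed; the parameter triples are arbitrary, including one with $r \gt j$) performs this check for several sets of parameters.

```python
import numpy as np
from math import comb

def bernoulli_laplace_matrix(j, k, r):
    S = list(range(max(0, r - j), min(k, r) + 1))
    P = np.zeros((len(S), len(S)))
    for i, x in enumerate(S):
        down = (j - r + x) * x / (j * k)
        up = (r - x) * (k - x) / (j * k)
        if i > 0:
            P[i, i - 1] = down
        if i < len(S) - 1:
            P[i, i + 1] = up
        P[i, i] = 1 - down - up
    return S, P

for j, k, r in [(10, 5, 4), (10, 10, 6), (3, 7, 8)]:
    S, P = bernoulli_laplace_matrix(j, k, r)
    g = np.array([comb(r, x) * comb(j + k - r, k - x) for x in S], dtype=float)
    D = g[:, None] * P                 # D[x, y] = g(x) P(x, y)
    print((j, k, r), np.allclose(D, D.T))   # symmetric, so the chain is reversible
```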
Run the simulation of the Bernoulli-Laplace experiment 10,000 time steps for selected values of the parameters, and with initial state 0. Note that at first, you can see the arrow of time. After a long period, however, the direction of time is no longer evident.
Computational Exercises
Consider the Bernoulli-Laplace chain with $j = 10$, $k = 5$, and $r = 4$. Suppose that $X_0$ has the uniform distribution on $S$. Explicitly give each of the following:
1. The state space $S$
2. The transition matrix $P$.
3. The probability density function, mean and variance of $X_1$.
4. The probability density function, mean and variance of $X_2$.
5. The probability density function, mean and variance of $X_3$.
Answer
1. $S = \{0, 1, 2, 3, 4\}$
2. $P = \frac{1}{50} \left[ \begin{matrix} 30 & 20 & 0 & 0 & 0 \\ 7 & 31 & 12 & 0 & 0 \\ 0 & 16 & 28 & 6 & 0 \\ 0 & 0 & 27 & 21 & 2 \\ 0 & 0 & 0 & 40 & 10 \end{matrix} \right]$
3. $f_1 = \frac{1}{250} (37, 67, 67, 67, 12), \mu_1 = \frac{9}{5}, \sigma_1^2 = \frac{32}{25}$
4. $f_2 = \frac{1}{12\,500} (1579, 3889, 4489, 2289, 254), \mu_2 = \frac{83}{50}, \sigma_2^2 = \frac{2413}{2500}$
5. $f_3 = \frac{1}{625\,000} (74\,593, 223\,963, 234\,163, 85\,163, 7118), \mu_3 = \frac{781}{500}, \sigma_3^2 = \frac{206\,427}{250\,000}$
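As with the Ehrenfest exercise, exact arithmetic with the standard fractions module reproduces these distributions, means, and variances (fractions are printed in lowest terms, so some entries appear with reduced denominators).

```python
from fractions import Fraction

j, k, r = 10, 5, 4
S = list(range(max(0, r - j), min(k, r) + 1))
m = len(S)
P = [[Fraction(0)] * m for _ in S]
for i, x in enumerate(S):
    down = Fraction((j - r + x) * x, j * k)
    up = Fraction((r - x) * (k - x), j * k)
    if i > 0:
        P[i][i - 1] = down
    if i < m - 1:
        P[i][i + 1] = up
    P[i][i] = 1 - down - up

f = [Fraction(1, m)] * m                     # X_0 uniform on S
for n in range(1, 4):
    f = [sum(f[a] * P[a][b] for a in range(m)) for b in range(m)]
    mean = sum(S[i] * f[i] for i in range(m))
    var = sum(S[i]**2 * f[i] for i in range(m)) - mean**2
    print(n, [str(p) for p in f], str(mean), str(var))
```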
Consider the Bernoulli-Laplace chain with $j = k = 10$ and $r = 6$. Give each of the following explicitly:
1. The state space $S$
2. The transition matrix $P$
3. The invariant probability density function.
Answer
1. $S = \{0, 1, 2, 3, 4, 5, 6\}$
2. $P = \frac{1}{100} \left[ \begin{matrix} 40 & 60 & 0 & 0 & 0 & 0 & 0 \\ 5 & 50 & 45 & 0 & 0 & 0 & 0 \\ 0 & 12 & 56 & 32 & 0 & 0 & 0 \\ 0 & 0 & 21 & 58 & 21 & 0 & 0 \\ 0 & 0 & 0 & 32 & 56 & 12 & 0 \\ 0 & 0 & 0 & 0 & 45 & 50 & 5 \\ 0 & 0 & 0 & 0 & 0 & 60 & 40 \end{matrix} \right]$
3. $f = \frac{1}{1292}(7, 84, 315, 480, 315, 84, 7)$ | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.09%3A_The_Bernoulli-Laplace_Chain.txt |