$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
The Success-Runs Chain
Suppose that we have a sequence of trials, each of which results in either success or failure. Our basic assumption is that if there have been $x \in \N$ consecutive successes, then the probability of success on the next trial is $p(x)$, independently of the past, where $p: \N \to (0, 1)$. Whenever there is a failure, we start over, independently, with a new sequence of trials. Appropriately enough, $p$ is called the success function. Let $X_n$ denote the length of the run of successes after $n$ trials.
$\bs{X} =(X_0, X_1, X_2, \ldots)$ is a discrete-time Markov chain with state space $\N$ and transition probability matrix $P$ given by $P(x, x + 1) = p(x), \; P(x, 0) = 1 - p(x); \quad x \in \N$ The Markov chain $\bs{X}$ is called the success-runs chain.
Now let $T$ denote the trial number of the first failure, starting with a fresh sequence of trials. In the context of the success-runs chain $\bs{X}$, $T = \tau_0$, the first return time to state 0, starting in 0. Note that $T$ takes values in $\N_+ \cup \{\infty\}$, since presumably, it is possible that no failure occurs. Let $r(n) = \P(T \gt n)$ for $n \in \N$, the probability of at least $n$ consecutive successes, starting with a fresh set of trials. Let $f(n) = \P(T = n + 1)$ for $n \in \N$, the probability of exactly $n$ consecutive successes, starting with a fresh set of trials.
The functions $p$, $r$, and $f$ are related as follows:
1. $p(x) = r(x + 1) \big/ r(x)$ for $x \in \N$
2. $r(n) = \prod_{x=0}^{n-1} p(x)$ for $n \in \N$
3. $f(n) = [1 - p(n)] \prod_{x=0}^{n-1} p(x)$ for $n \in \N$
4. $r(n) = 1 - \sum_{x=0}^{n-1} f(x)$ for $n \in \N$
5. $f(n) = r(n) - r(n + 1)$ for $n \in \N$
Thus, the functions $p$, $r$, and $f$ give equivalent information. If we know one of the functions, we can construct the other two, and hence any of the functions can be used to define the success-runs chain. The function $r$ is the reliability function associated with $T$.
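Since the three functions determine one another, any one of them can be computed from the others. The following sketch (our own Python, not part of the text; the function names are ours) computes $r$ and $f$ from a given success function $p$, using the product formulas above.

```python
# A minimal sketch, assuming the success function p is given as a Python
# callable on {0, 1, 2, ...}; the function names are our own.

def r_from_p(p, n):
    """r(n) = prod_{x=0}^{n-1} p(x): probability of at least n successes."""
    prob = 1.0
    for x in range(n):
        prob *= p(x)
    return prob

def f_from_p(p, n):
    """f(n) = [1 - p(n)] * prod_{x=0}^{n-1} p(x): exactly n successes."""
    return (1.0 - p(n)) * r_from_p(p, n)

# Example: p(x) = 1/(x + 2), as in part (d) below, so r(n) = 1/(n + 1)!.
p = lambda x: 1.0 / (x + 2)
print([round(r_from_p(p, n), 6) for n in range(5)])  # 1.0, 0.5, 0.166667, ...
print([round(f_from_p(p, n), 6) for n in range(5)])
```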
The function $r$ is characterized by the following properties:
1. $r$ is positive.
2. $r(0) = 1$
3. $r$ is strictly decreasing.
The function $f$ is characterized by the following properties:
1. $f$ is positive.
2. $\sum_{x=0}^\infty f(x) \le 1$
Essentially, $f$ is the probability density function of $T - 1$, except that it may be defective in the sense that the sum of its values may be less than 1. The leftover probability, of course, is the probability that $T = \infty$. This is the critical consideration in the classification of the success-runs chain, which we will consider shortly.
Verify that each of the following functions has the appropriate properties, and then find the other two functions:
1. $p$ is a constant in $(0, 1)$.
2. $r(n) = 1 \big/ (n + 1)$ for $n \in \N$.
3. $r(n) = (n + 1) \big/ (2 \, n + 1)$ for $n \in \N$.
4. $p(x) = 1 \big/ (x + 2)$ for $x \in \N$.
Answer
1. $p(x) = p$ for $x \in \N$. $r(n) = p^n$ for $n \in \N$. $f(n) = (1 - p) p^n$ for $n \in \N$.
2. $p(x) = \frac{x + 1}{x + 2}$ for $x \in \N$. $r(n) = \frac{1}{n + 1}$ for $n \in \N$. $f(n) = \frac{1}{n + 1} - \frac{1}{n + 2} = \frac{1}{(n + 1)(n + 2)}$ for $n \in \N$.
3. $p(x) = \frac{(x + 2)(2 x + 1)}{(x + 1)(2 x + 3)}$ for $x \in \N$. $r(n) = \frac{n + 1}{2 n + 1}$ for $n \in \N$. $f(n) = \frac{n + 1}{2 n + 1} - \frac{n + 2}{2 n + 3}$ for $n \in \N$.
4. $p(x) = \frac{1}{x + 2}$ for $x \in \N$. $r(n) = \frac{1}{(n + 1)!}$ for $n \in \N$. $f(n) = \frac{1}{(n + 1)!} - \frac{1}{(n + 2)!}$ for $n \in \N$.
In part (a), note that the trials are Bernoulli trials. We have an app for this case.
The success-runs app is a simulation of the success-runs chain based on Bernoulli trials. Run the simulation 1000 times for various values of $p$ and various initial states, and note the general behavior of the chain.
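For readers without access to the app, here is a small simulation sketch (ours, in Python) of the success-runs chain for an arbitrary success function; with a constant function it reduces to the Bernoulli trials case of the app.

```python
import random

def success_runs_step(x, p):
    """One step of the success-runs chain: from state x, move to x + 1
    with probability p(x); otherwise a failure resets the run to 0."""
    return x + 1 if random.random() < p(x) else 0

def simulate_success_runs(p, x0=0, steps=1000, seed=42):
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x = success_runs_step(x, p)
        path.append(x)
    return path

# Bernoulli trials: constant success probability 1/2.
print(simulate_success_runs(lambda x: 0.5, steps=20))
```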
The success-runs chain is irreducible and aperiodic.
Proof
The chain is irreducible, since 0 leads to every other state, and every state leads back to 0. The chain is aperiodic since $P(0, 0) \gt 0$.
Recall that $T$ has the same distribution as $\tau_0$, the first return time to 0 starting at state 0. Thus, the classification of the chain as recurrent or transient depends on $\alpha = \P(T = \infty)$. Specifically, the success-runs chain is transient if $\alpha \gt 0$ and recurrent if $\alpha = 0$. Thus, we see that the chain is recurrent if and only if a failure is sure to occur. We can compute the parameter $\alpha$ in terms of each of the three functions that define the chain.
In terms of $p$, $r$, and $f$, $\alpha = \prod_{x=0}^\infty p(x) = \lim_{n \to \infty} r(n) = 1 - \sum_{x=0}^\infty f(x)$
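As a quick numerical illustration (ours, not part of the text), $\alpha$ can be approximated by evaluating $r$ at a large index, since $r$ is decreasing with limit $\alpha$.

```python
def alpha_from_r(r, big_n=10**6):
    """Crude numerical estimate of alpha = lim_{n -> infinity} r(n):
    evaluate the decreasing function r at a single large index."""
    return r(big_n)

# Example (b): r(n) = 1/(n + 1), so alpha = 0 and the chain is recurrent.
print(alpha_from_r(lambda n: 1 / (n + 1)))            # about 1e-6, tending to 0
# Example (c): r(n) = (n + 1)/(2n + 1), so alpha = 1/2 and the chain is transient.
print(alpha_from_r(lambda n: (n + 1) / (2 * n + 1)))  # about 0.5
```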
Compute $\alpha$ and determine whether the success-runs chain $\bs{X}$ is transient or recurrent for each of the examples above.
Answer
1. $\alpha = 0$, recurrent.
2. $\alpha = 0$, recurrent.
3. $\alpha = \frac{1}{2}$, transient.
4. $\alpha = 0$, recurrent.
Run the simulation of the success-runs chain 1000 times for various values of $p$, starting in state 0. Note the return times to state 0.
Let $\mu = \E(T)$, the expected trial number of the first failure, starting with a fresh sequence of trials.
$\mu$ is related to $\alpha$, $f$, and $r$ as follows:
1. If $\alpha \gt 0$ then $\mu = \infty$
2. If $\alpha = 0$ then $\mu = 1 + \sum_{n=0}^\infty n f(n)$
3. $\mu = \sum_{n=0}^\infty r(n)$
Proof
1. If $\alpha = \P(T = \infty) \gt 0$ then $\mu = \E(T) = \infty$.
2. If $\alpha = 0$, so that $T$ takes values in $\N_+$, then $f$ is the PDF of $T - 1$, so $\mu = 1 + \E(T - 1)$.
3. This is a basic result from the general theory of expected value: $\E(T) = \sum_{n=0}^\infty \P(T \gt n)$.
The success-runs chain $\bs{X}$ is positive recurrent if and only if $\mu \lt \infty$.
Proof
Since $T$ is the return time to 0, starting at 0, and since the chain is irreducible, it follows from the general theory that the chain is positive recurrent if and only if $\mu = \E(T) \lt \infty$.
If $\bs{X}$ is recurrent, then $r$ is invariant for $\bs{X}$. In the positive recurrent case, when $\mu \lt \infty$, the invariant distribution has probability density function $g$ given by $g(x) = \frac{r(x)}{\mu}, \quad x \in \N$
Proof
If $y \in \N_+$ then from the result above, $(r P)(y) = \sum_{x=0}^\infty r(x) P(x, y) = r(y - 1) p(y - 1) = r(y)$ For $y = 0$, using the result above again, $(r P)(0) = \sum_{x=0}^\infty r(x) P(x, 0) = \sum_{x=0}^\infty r(x)[1 - p(x)] = \sum_{x=0}^\infty [r(x) - r(x)p(x)] = \sum_{x=0}^\infty [r(x) - r(x + 1)]$ If the chain is recurrent, $r(n) \to 0$ as $n \to \infty$ so the last sum collapses to $r(0) = 1$. Recall that $\mu = \sum_{n = 0}^\infty r(n)$. Hence if $\mu \lt \infty$, so that the chain is positive recurrent, the function $g$ (which is just $r$ normalized) is the invariant PDF.
When $\bs{X}$ is recurrent, we know from the general theory that every other nonnegative left invariant function is a nonnegative multiple of $r$
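Numerically, the invariant PDF can be approximated by truncating the state space and normalizing $r$, as in this sketch (our own; it assumes the tail of $r$ beyond the truncation point is negligible).

```python
import math

def invariant_pdf(r, n_max=50):
    """Truncate the state space at n_max and normalize r: g(x) = r(x)/mu.
    A reasonable approximation only in the positive recurrent case, when
    the tail of r beyond n_max is negligible."""
    vals = [r(x) for x in range(n_max)]
    mu = sum(vals)
    return [v / mu for v in vals], mu

# Example (d): r(n) = 1/(n + 1)!, so mu = e - 1 and g(x) = 1/((e - 1)(x + 1)!).
g, mu = invariant_pdf(lambda n: 1.0 / math.factorial(n + 1))
print(mu, math.e - 1)           # both approximately 1.71828
print(g[0], 1 / (math.e - 1))   # both approximately 0.58198
```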
Determine whether the success-runs chain $\bs{X}$ is transient, null recurrent, or positive recurrent for each of the examples above. If the chain is positive recurrent, find the invariant probability density function.
Answer
1. $\mu = \frac{1}{1 - p}$, positive recurrent. $g(x) = (1 - p) p^x$ for $x \in \N$.
2. $\alpha = 0$, $\mu = \infty$, null recurrent.
3. $\alpha = \frac{1}{2}$, transient.
4. $\mu = e - 1$, positive recurrent. $g(x) = \frac{1}{(e - 1)(x + 1)!}$ for $x \in \N$.
From (a), the success-runs chain corresponding to Bernoulli trials with success probability $p \in (0, 1)$ has the geometric distribution on $\N$, with parameter $1 - p$, as the invariant distribution.
Run the simulation of the success-runs chain 1000 times for various values of $p$ and various initial states. Compare the empirical distribution to the invariant distribution.
The Remaining Life Chain
Consider a device whose (discrete) time to failure $U$ takes values in $\N$, with probability density function $f$. We assume that $f(n) \gt 0$ for $n \in \N$. When the device fails, it is immediately (and independently) replaced by an identical device. For $n \in \N$, let $Y_n$ denote the time to failure of the device that is in service at time $n$.
$\bs{Y} = (Y_0, Y_1, Y_2, \ldots)$ is a discrete-time Markov chain with state space $\N$ and transition probability matrix $Q$ given by $Q(0, x) = f(x), \; Q(x + 1, x) = 1; \quad x \in \N$
The Markov chain $\bs{Y}$ is called the remaining life chain with lifetime probability density function $f$, and has the state graph below.
We have an app for the remaining life chain whose lifetime distribution is the geometric distribution on $\N$, with parameter $1 - p \in (0, 1)$.
Run the simulation of the remaining-life chain 1000 times for various values of $p$ and various initial states. Note the general behavior of the chain.
If $U$ denotes the lifetime of a device, as before, note that $T = 1 + U$ is the return time to 0 for the chain $\bs{Y}$, starting at 0.
$\bs{Y}$ is irreducible, aperiodic, and recurrent.
Proof
From the assumptions on $f$, state 0 leads to every other state (including 0), and every positive state leads (deterministically) to 0. Thus the chain is irreducible and aperiodic. By assumption, $\P(U \in \N) = 1$ so $\P(T \lt \infty) = 1$ and hence the chain is recurrent.
Now let $r(n) = \P(U \ge n) = \P(T \gt n)$ for $n \in \N$ and let $\mu = \E(T) = 1 + \E(U)$. Note that $r(n) = \sum_{x=n}^\infty f(x)$ and $\mu = 1 + \sum_{x=0}^\infty x f(x) = \sum_{n=0}^\infty r(n)$.
The remaining life chain $\bs{Y}$ is positive recurrent if and only if $\mu \lt \infty$, in which case the invariant distribution has probability density function $g$ given by $g(x) = \frac{r(x)}{\mu}, \quad x \in \N$
Proof
Since the chain is irreducible, it is positive recurrent if and only if $\mu = \E(T) \lt \infty$. The function $r$ is invariant for $Q$: for $y \in \N$ \begin{align*} (r Q)(y) &= \sum_{x \in \N} r(x) Q(x, y) = r(0) Q(0, y) + r(y + 1) Q(y + 1, y) \\ &= f(y) + r(y + 1) = r(y) \end{align*} In the positive recurrent case, $\mu$ is the normalizing constant for $r$, so $g$ is the invariant PDF.
Suppose that $\bs{Y}$ is the remaining life chain whose lifetime distribution is the geometric distribution on $\N$ with parameter $1 - p \in (0, 1)$. Then this distribution is also the invariant distribution.
Proof
By assumption, $f(x) = (1 - p) p^x$ for $x \in \N$, and the mean of this distribution is $p / (1 - p)$. Hence $\mu = 1 + p / (1 - p) = 1 / (1 - p)$, and $r(x) = \sum_{y = x}^\infty f(y) = p^x$ for $x \in \N$. Hence $g = f$.
Run the simulation of the remaining-life chain 1000 times for various values of $p$ and various initial states. Compare the empirical distribution to the invariant distribution.
Time Reversal
You probably have already noticed similarities, in notation and results, between the success-runs chain and the remaining-life chain. There are deeper connections.
Suppose that $f$ is a probability density function on $\N$ with $f(n) \gt 0$ for $n \in \N$. Let $\bs{X}$ be the success-runs chain associated with $f$ and $\bs{Y}$ the remaining life chain associated with $f$. Then $\bs{X}$ and $\bs{Y}$ are time reversals of each other.
Proof
Under the assumptions on $f$, both chains are recurrent and irreducible. Hence it suffices to show that $r(x) P(x, y) = r(y) Q(y, x), \quad x, \, y \in \N$ It will then follow that the chains are time reversals of each other, and that $r$ is a common invariant function (unique up to multiplication by positive constants). In the case that $\mu = \sum_{n=0}^\infty r(n) \lt \infty$, the function $g = r / \mu$ is the common invariant PDF. There are only two cases to consider. With $y = 0$, we have $r(x) P(x, 0) = r(x)[1 - p(x)]$ and $r(0) Q(0, x) = f(x)$. But $r(x) [1 - p(x)] = f(x)$ by the result above. When $x \in \N$ and $y = x + 1$, we have $r(x) P(x, x + 1) = r(x) p(x)$ and $r(x + 1) Q(x + 1, x) = r(x + 1)$. But $r(x) p(x) = r(x + 1)$ by the result above.
In the context of reliability, it is also easy to see that the chains are time reversals of each other. Consider again a device whose random lifetime takes values in $\N$, with the device immediately replaced by an identical device upon failure. For $n \in \N$, we can think of $X_n$ as the age of the device in service at time $n$ and $Y_n$ as the time remaining until failure for that device.
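The reversibility condition in the proof is easy to check numerically. The following sketch (ours) builds $r$ from a lifetime PDF $f$ and verifies $r(x) P(x, y) = r(y) Q(y, x)$ in the two nontrivial cases.

```python
def check_reversal(f_pdf, n_max=30, tail=80, tol=1e-12):
    """Check r(x) P(x, y) == r(y) Q(y, x) for the success-runs chain P and
    the remaining-life chain Q built from the same lifetime PDF f."""
    r = [sum(f_pdf(k) for k in range(x, n_max + tail)) for x in range(n_max + 2)]
    for x in range(n_max):
        p_x = r[x + 1] / r[x]  # success function p(x) = r(x+1)/r(x)
        # Case y = x + 1: P(x, x+1) = p(x), Q(x+1, x) = 1.
        if abs(r[x] * p_x - r[x + 1]) > tol:
            return False
        # Case y = 0: P(x, 0) = 1 - p(x), Q(0, x) = f(x).
        if abs(r[x] * (1 - p_x) - f_pdf(x)) > tol:
            return False
    return True

# Geometric lifetime distribution with parameter 1 - p, p = 0.4.
print(check_reversal(lambda n: 0.6 * 0.4 ** n))  # True
```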
Run the simulation of the success-runs chain 1000 times for various values of $p$, starting in state 0. This is the time reversal of the simulation in the next exercise.
Run the simulation of the remaining-life chain 1000 times for various values of $p$, starting in state 0. This is the time reversal of the simulation in the previous exercise.
Basic Theory
Introduction
Generically, suppose that we have a system of particles that can generate or split into other particles of the same type. Here are some typical examples:
• The particles are biological organisms that reproduce.
• The particles are neutrons in a chain reaction.
• The particles are electrons in an electron multiplier.
We assume that each particle, at the end of its life, is replaced by a random number of new particles that we will refer to as children of the original particle. Our basic assumption is that the particles act independently, each with the same offspring distribution on $\N$. Let $f$ denote the common probability density function of the number of offspring of a particle. We will also let $f^{*n} = f * f * \cdots * f$ denote the convolution power of degree $n$ of $f$; this is the probability density function of the total number of children of $n$ particles.
We will consider the evolution of the system in real time in our study of continuous-time branching chains. In this section, we will study the evolution of the system in generational time. Specifically, the particles that we start with are in generation 0, and recursively, the children of a particle in generation $n$ are in generation $n + 1$.
Let $X_n$ denote the number of particles in generation $n \in \N$. One way to construct the process mathematically is to start with an array of independent random variables $(U_{n,i}: n \in \N, \; i \in \N_+)$, each with probability density function $f$. We interpret $U_{n,i}$ as the number of children of the $i$th particle in generation $n$ (if this particle exists). Note that we have more random variables than we need, but this causes no harm, and we know that we can construct a probability space that supports such an array of random variables. We can now define our state variables recursively by $X_{n+1} = \sum_{i=1}^{X_n} U_{n,i}$
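The recursive construction translates directly into a simulation, sketched here in Python (ours, not part of the text); the sampler for the offspring distribution is passed in as a function.

```python
import random

def branching_step(x, sample_offspring):
    """One generation: each of the x current particles independently
    produces a random number of children."""
    return sum(sample_offspring() for _ in range(x))

def simulate_branching(sample_offspring, x0=1, generations=20, seed=1):
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(generations):
        x = branching_step(x, sample_offspring)
        path.append(x)
    return path

# Offspring distribution f(0) = 1 - p, f(2) = p with p = 0.6: each particle
# dies or splits in two, as in the first computational exercise below.
print(simulate_branching(lambda: 2 if random.random() < 0.6 else 0))
```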
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a discrete-time Markov chain on $\N$ with transition probability matrix $P$ given by $P(x, y) = f^{*x}(y), \quad (x, y) \in \N^2$ The chain $\bs{X}$ is the branching chain with offspring distribution defined by $f$.
Proof
The Markov property and the form of the transition matrix follow directly from the construction of the state variables given above. Since the variables $(U_{n,i}: n \in \N, i \in \N_+)$ are independent, each with PDF $f$, we have $\P(X_{n+1} = y \mid X_0 = x_0, \ldots, X_{n-1} = x_{n-1}, X_n = x) = \P\left(\sum_{i=1}^x U_{n,i} = y\right) = f^{*x}(y)$
The branching chain is also known as the Galton-Watson process in honor of Francis Galton and Henry William Watson who studied such processes in the context of the survival of (aristocratic) family names. Note that the descendants of each initial particle form a branching chain, and these chains are independent. Thus, the branching chain starting with $x$ particles is equivalent to $x$ independent copies of the branching chain starting with 1 particle. This feature turns out to be very important in the analysis of the chain. Note also that 0 is an absorbing state that corresponds to extinction. On the other hand, the population may grow to infinity, sometimes called explosion. Computing the probability of extinction is one of the fundamental problems in branching chains; we will essentially solve this problem in the next subsection.
Extinction and Explosion
The behavior of the branching chain in expected value is easy to analyze. Let $m$ denote the mean of the offspring distribution, so that $m = \sum_{x=0}^\infty x f(x)$ Note that $m \in [0, \infty]$. The parameter $m$ will turn out to be of fundamental importance.
Expected value properties
1. $\E(X_{n+1}) = m \E(X_n)$ for $n \in \N$
2. $\E(X_n) = m^n \E(X_0)$ for $n \in \N$
3. $\E(X_n) \to 0$ as $n \to \infty$ if $m \lt 1$.
4. $\E(X_n) = \E(X_0)$ for each $n \in \N$ if $m = 1$.
5. $\E(X_n) \to \infty$ as $n \to \infty$ if $m \gt 1$ and $\E(X_0) \gt 0$.
Proof
For part (a) we use a conditioning argument and the construction above. For $x \in \N$, $\E(X_{n+1} \mid X_n = x) = \E\left(\sum_{i=1}^x U_{n,i} \biggm| X_n = x\right) = \E\left(\sum_{i=1}^x U_{n,i}\right) = m x$ That is, $\E(X_{n+1} \mid X_n) = m X_n$ so $\E(X_{n+1}) = \E\left[\E(X_{n+1} \mid X_n)\right] = m \E(X_n)$ Part (b) follows from (a) and then parts (c), (d), and (e) follow from (b).
Part (c) is extinction in the mean; part (d) is stability in the mean; and part (e) is explosion in the mean.
Recall that state 0 is absorbing (there are no particles), and hence $\{X_n = 0 \text{ for some } n \in \N\} = \{\tau_0 \lt \infty\}$ is the extinction event (where as usual, $\tau_0$ is the time of the first return to 0). We are primarily concerned with the probability of extinction, as a function of the initial state. First, however, we will make some simple observations and eliminate some trivial cases.
Suppose that $f(1) = 1$, so that each particle is replaced by a single new particle. Then
1. Every state is absorbing.
2. The equivalence classes are the singleton sets.
3. With probability 1, $X_n = X_0$ for every $n \in \N$.
Proof
These properties are obvious since $P(x, x) = 1$ for every $x \in \N$.
Suppose that $f(0) \gt 0$ so that with positive probability, a particle will die without offspring. Then
1. Every state leads to 0.
2. Every positive state is transient.
3. With probability 1 either $X_n = 0$ for some $n \in \N$ (extinction) or $X_n \to \infty$ as $n \to \infty$ (explosion).
Proof
1. Note that $P(x, 0) = [f(0)]^x \gt 0$ for $x \in \N$, so every state leads to 0 in one step.
2. This follows from (a). If $x \in \N_+$, then $x$ leads to the absorbing state 0 with positive probability. Hence a return to $x$, starting in $x$, cannot have probability 1.
3. This follows from (a) and (b). With probability 1, every positive state is visited only finitely many times. Hence the only possibilities are $X_n = 0$ for some $n \in \N$ or $X_n \to \infty$ as $n \to \infty$.
Suppose that $f(0) = 0$ and $f(1) \lt 1$, so that every particle is replaced by at least one particle, and with positive probability, more than one. Then
1. Every positive state is transient.
2. $\P(X_n \to \infty \text{ as } n \to \infty \mid X_0 = x) = 1$ for every $x \in \N_+$, so that explosion is certain, starting with at least one particle.
Proof
1. Let $x \in \N_+$. Under the assumptions on $f$, state $x$ leads to some state $y \gt x$ but $y$ does not lead back to $x$. Hence with positive probability, the chain starting in $x$ will not return to $x$.
2. This follows from (a) and that the fact that positive states do not lead to 0.
Suppose that $f(0) \gt 0$ and $f(0) + f(1) = 1$, so that with positive probability, a particle will die without offspring, and with probability 1, a particle is not replaced by more than one particle. Then
1. Every state leads to 0.
2. Every positive state is transient.
3. With probability 1, $X_n = 0$ for some $n \in \N$, so extinction is certain.
Proof
1. As before, $P(x, 0) = [f(0)]^x \gt 0$ for $x \in \N$, so $x$ leads to 0 in one step.
2. This follows from (a) and the fact that 0 is absorbing.
3. Under the assumptions on $f$, state $x$ leads to state $y$ only if $y \le x$. So this follows from (a) and (b).
Thus, the interesting case is when $f(0) \gt 0$ and $f(0) + f(1) \lt 1$, so that with positive probability, a particle will die without offspring, and also with positive probability, the particle will be replaced by more than one new particle. We will assume these conditions for the remainder of our discussion. By the state classification above, all states lead to 0 (extinction). We will denote the probability of extinction, starting with one particle, by $q = \P(\tau_0 \lt \infty \mid X_0 = 1) = \P(X_n = 0 \text{ for some } n \in \N \mid X_0 = 1)$
The set of positive states $\N_+$ is a transient equivalence class, and the probability of extinction starting with $x \in \N$ particles is $q^x = \P(\tau_0 \lt \infty \mid X_0 = x) = \P(X_n = 0 \text{ for some } n \in \N \mid X_0 = x)$
Proof
Under the assumptions on $f$, from any positive state the chain can move 2 or more units to the right and one unit to the left in one step. It follows that every positive state leads to every other positive state. On the other hand, every positive state leads to 0, which is absorbing. Thus, $\N_+$ is a transient equivalence class.
Recall that the branching chain starting with $x \in \N_+$ particles acts like $x$ independent branching chains starting with one particle. Thus, the extinction probability starting with $x$ particles is $q^x$.
The parameter $q$ satisfies the equation $q = \sum_{x = 0}^\infty f(x) q^x$
Proof
This result follows from conditioning on the first state. $q = \P(\tau_0 \lt \infty \mid X_0 = 1) = \sum_{x = 0}^\infty \P(\tau_0 \lt \infty \mid X_0 = 1, X_1 = x) \P(X_1 = x \mid X_0 = 1)$ But by the Markov property and the previous result, $\P(\tau_0 \lt \infty \mid X_0 = 1, X_1 = x) = \P(\tau_0 \lt \infty \mid X_1 = x) = q^x$ and of course $\P(X_1 = x \mid X_0 = 1) = P(1, x) = f(x)$.
Thus the extinction probability $q$ starting with 1 particle is a fixed point of the probability generating function $\Phi$ of the offspring distribution: $\Phi(t) = \sum_{x=0}^\infty f(x) t^x, \quad t \in [0, 1]$ Moreover, from the general discussion of hitting probabilities in the section on recurrence and transience, $q$ is the smallest such number in the interval $(0, 1]$. If the probability generating function $\Phi$ can be computed in closed form, then $q$ can sometimes be computed by solving the equation $\Phi(t) = t$.
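When the equation $\Phi(t) = t$ has no closed-form solution, $q$ can be found numerically. Iterating $t \mapsto \Phi(t)$ from $t = 0$ produces an increasing sequence converging to the smallest fixed point in $(0, 1]$, which is exactly $q$. Here is a sketch (ours).

```python
def extinction_probability(phi, tol=1e-12, max_iter=100_000):
    """Iterate t -> phi(t) from 0; the iterates increase to the smallest
    fixed point of phi in (0, 1], which is the extinction probability q."""
    t = 0.0
    for _ in range(max_iter):
        t_new = phi(t)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Offspring PGF for f(0) = 1 - p, f(2) = p: phi(t) = (1 - p) + p t^2.
p = 0.6
print(extinction_probability(lambda t: (1 - p) + p * t * t))  # (1 - p)/p = 2/3
```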
$\Phi$ satisfies the following properties:
1. $\Phi(0) = f(0)$.
2. $\Phi(1) = 1$.
3. $\Phi^\prime(t) \gt 0$ for $t \in (0, 1)$ so $\Phi$ is increasing on $(0, 1)$.
4. $\Phi^{\prime \prime}(t) \gt 0$ for $t \in (0, 1)$ so $\Phi$ is concave upward on $(0, 1)$.
5. $m = \lim_{t \uparrow 1} \Phi^\prime(t)$.
Proof
These are basic properties of the probability generating function. Recall that the series that defines $\Phi$ is a power series about 0 with radius of convergence $r \ge 1$. A function defined by a power series is infinitely differentiable within the open interval of convergence, and the derivatives can be computed term by term. So \begin{align*} \Phi^\prime(t) &= \sum_{x=1}^\infty x f(x) t^{x-1} \gt 0, \quad t \in (0, 1) \\ \Phi^{\prime \prime}(t) &= \sum_{x=2}^\infty x (x - 1) f(x) t^{x - 2} \gt 0, \quad t \in (0, 1) \end{align*} If $r \gt 1$ then $m = \Phi^\prime(1)$. If $r = 1$, the limit result is the best we can do.
Our main result is next, and relates the extinction probability $q$ and the mean of the offspring distribution $m$.
The extinction probability $q$ and the mean of the offspring distribution $m$ are related as follows:
1. If $m \le 1$ then $q = 1$, so extinction is certain.
2. If $m \gt 1$ then $0 \lt q \lt 1$, so there is a positive probability of extinction and a positive probability of explosion.
Proof
Recall that $q$ is the smallest fixed point of $\Phi$ in $(0, 1]$, and recall the properties of $\Phi$ above: $\Phi$ is increasing and concave upward on $(0, 1)$, with $\Phi(0) = f(0) \gt 0$, $\Phi(1) = 1$, and $\Phi^\prime(t) \uparrow m$ as $t \uparrow 1$. Suppose first that $m \le 1$. Since $\Phi^{\prime\prime} \gt 0$, we have $\Phi^\prime(t) \lt m \le 1$ for $t \in (0, 1)$, so for such $t$, $1 - \Phi(t) = \int_t^1 \Phi^\prime(s) \, ds \lt 1 - t$, and hence $\Phi(t) \gt t$. Thus the only fixed point in $(0, 1]$ is 1, so $q = 1$. Suppose next that $m \gt 1$. Then $\Phi^\prime(s) \gt 1$ for $s$ sufficiently close to 1, so by the same integral argument, $\Phi(t) \lt t$ for $t \lt 1$ sufficiently close to 1. Since $\Phi(0) = f(0) \gt 0$, continuity gives a fixed point in $(0, 1)$, so $q \lt 1$. Finally, $q = \Phi(q) \ge \Phi(0) \gt 0$.
Computational Exercises
Consider the branching chain with offspring probability density function $f$ given by $f(0) = 1 - p$, $f(2) = p$, where $p \in (0, 1)$ is a parameter. Thus, each particle either dies or splits into two new particles. Find each of the following.
1. The transition matrix $P$.
2. The mean $m$ of the offspring distribution.
3. The generating function $\Phi$ of the offspring distribution.
4. The extinction probability $q$.
Answer
Note that an offspring variable has the form $2 I$ where $I$ is an indicator variable with parameter $p$.
1. For $x \in \N$, $f^{* x}$ is the PDF of $2 U$ where $U$ has the binomial distribution with parameters $x$ and $p$. Hence $P(x, y) = f^{* x}(y) = \binom{x}{y/2} p^{y/2} (1 - p)^{x - y/2}, \quad y \in \{0, 2, \ldots, 2x\}$
2. $m = 2 p$.
3. $\Phi(t) = p t^2 + (1 - p)$ for $t \in \R$.
4. $q = 1$ if $0 \lt p \le \frac{1}{2}$ and $q = \frac{1 - p}{p}$ if $\frac{1}{2} \lt p \lt 1$.
Consider the branching chain whose offspring distribution is the geometric distribution on $\N$ with parameter $1 - p$, where $p \in (0, 1)$. Thus $f(n) = (1 - p) p^n$ for $n \in \N$. Find each of the following:
1. The transition matrix $P$.
2. The mean $m$ of the offspring distribution.
3. The generating function $\Phi$ of the offspring distribution.
4. The extinction probability $q$.
Answer
1. For $x \in \N$, $f^{* x}$ is the PDF of the negative binomial distribution on $\N$ with parameter $1 - p$. So $P(x, y) = f^{* x}(y) = \binom{x + y - 1}{x - 1} p^y (1 - p)^x, \quad y \in \N$
2. $m = \frac{p}{1 - p}$.
3. $\Phi(t) = \frac{1 - p}{1 - p t}$ for $\left|t\right| \lt \frac{1}{p}$.
4. $q = 1$ if $0 \lt p \le \frac{1}{2}$ and $q = \frac{1 - p}{p}$ if $\frac{1}{2} \lt p \lt 1$.
Curiously, the extinction probability is the same as for the previous problem.
Consider the branching chain whose offspring distribution is the Poisson distribution with parameter $m \in (0, \infty)$. Thus $f(n) = e^{-m} m^n / n!$ for $n \in \N$. Find each of the following:
1. The transition matrix $P$.
2. The mean $m$ of the offspring distribution.
3. The generating function $\Phi$ of the offspring distribution.
4. The approximate extinction probability $q$ when $m = 2$ and when $m = 3$.
Answer
1. For $x \in \N$, $f^{* x}$ is the PDF of the Poisson distribution with parameter $m x$. So $P(x, y) = f^{* x}(y) = e^{-m x} \frac{(m x)^y}{y!}, \quad y \in \N$
2. The parameter $m$ is the mean of the Poisson distribution, so the notation is consistent.
3. $\Phi(t) = e^{m (t - 1)}$ for $t \in \R$.
4. $q = 1$ if $0 \lt m \le 1$. If $m \gt 1$ then $q$ is the solution in $(0, 1)$ of the equation $e^{m (q - 1)} = q$ which can be expressed in terms of a special function known as the Lambert $W$ function: $q = -\frac{1}{m} W\left(-m e^{-m}\right)$ For $m = 2$, $q \approx 0.20319$. For $m = 3$, $q \approx 0.059520$. (A numerical check is sketched below.)
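A quick numerical check of part (d), assuming SciPy is available (`scipy.special.lambertw` returns a complex value, so we take the real part):

```python
import numpy as np
from scipy.special import lambertw

def poisson_extinction(m):
    """q = -(1/m) W(-m e^{-m}) for the Poisson(m) offspring distribution, m > 1."""
    return float(np.real(-lambertw(-m * np.exp(-m)) / m))

print(poisson_extinction(2))  # approximately 0.20319
print(poisson_extinction(3))  # approximately 0.059520
```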
Basic Theory
Introduction
In a queuing model, customers arrive at a station for service. As always, the terms are generic; here are some typical examples:
• The customers are persons and the service station is a store.
• The customers are file requests and the service station is a web server.
• The customers are packages and the service station is a processing facility.
Queuing models can be quite complex, depending on such factors as the probability distribution that governs the arrival of customers, the probability distribution that governs the service of customers, the number of servers, and the behavior of the customers when all servers are busy. Indeed, queuing theory has its own lexicon to indicate some of these factors. In this section, we will study one of the simplest, discrete-time queuing models. However, as we will see, this discrete-time chain is embedded in a much more realistic continuous-time queuing process known as the M/G/1 queue. In a general sense, the main interest in any queuing model is the number of customers in the system as a function of time, and in particular, whether the servers can adequately handle the flow of customers.
Our main assumptions are as follows:
1. If the queue is empty at a given time, then a random number of new customers arrive at the next time.
2. If the queue is nonempty at a given time, then one customer is served and a random number of new customers arrive at the next time.
3. The number of customers who arrive at each time period form an independent, identically distributed sequence.
Thus, let $X_n$ denote the number of customers in the system at time $n \in \N$, and let $U_n$ denote the number of new customers who arrive at time $n \in \N_+$. Then $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent random variables, with common probability density function $f$ on $\N$, and $X_{n+1} = \begin{cases} U_{n+1}, & X_n = 0 \\ (X_n - 1) + U_{n+1}, & X_n \gt 0 \end{cases}, \quad n \in \N$
$\bs{X} = (X_0, X_1, X_2, \ldots)$ is a discrete-time Markov chain with state space $\N$ and transition probability matrix $P$ given by \begin{align} P(0, y) & = f(y), \quad y \in \N \\ P(x, y) & = f(y - x + 1), \quad x \in \N_+, \; y \in \{x - 1, x, x + 1, \ldots\} \end{align} The chain $\bs{X}$ is the queuing chain with arrival distribution defined by $f$.
Proof
The Markov property and the form of the transition matrix follow from the construction of the state process $\bs{X}$ in term of the IID sequence $\bs{U}$. Starting in state 0 (an empty queue), a random number of new customers arrive at the next time unit, governed by the PDF $f$. Hence the probability of going from state 0 to state $y$ in one step is $f(y)$. Starting in state $x \in \N_+$, one customer is served and a random number of new customers arrive by the next time unit, again governed by the PDF $f$. Hence the probability of going from state $x$ to state $y \in \{x - 1, x, x + 1, \ldots\}$ is $f[y - (x - 1)]$.
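The dynamics described in the proof are easy to simulate. Here is a sketch (ours, in Python), with the arrival distribution supplied as a sampler.

```python
import random

def queue_step(x, sample_arrivals):
    """One time step: serve one customer if the queue is nonempty,
    then add the new arrivals."""
    return sample_arrivals() + (x - 1 if x > 0 else 0)

def simulate_queue(sample_arrivals, x0=0, steps=30, seed=3):
    random.seed(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x = queue_step(x, sample_arrivals)
        path.append(x)
    return path

# Arrival distribution f(0) = 1 - p, f(2) = p with p = 0.4: at each step,
# either no new customers arrive or two arrive.
print(simulate_queue(lambda: 2 if random.random() < 0.4 else 0))
```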
Recurrence and Transience
From now on we will assume that $f(0) \gt 0$ and $f(0) + f(1) \lt 1$. Thus, at each time unit, it's possible that no new customers arrive or that at least 2 new customers arrive. Also, we let $m$ denote the mean of the arrival distribution, so that $m = \sum_{x = 0}^\infty x f(x)$ Thus $m$ is the average number of new customers who arrive during a time period.
The chain $\bs{X}$ is irreducible and aperiodic.
Proof
In a positive state, the chain can move at least one unit to the right and can move one unit to the left at the next step. From state 0, the chain can move two or more units to the right or can stay in 0 at the next step. Thus, every state leads to every other state so the chain is irreducible. Since 0 leads back to 0, the chain is aperiodic.
Our goal in this section is to compute the probability that the chain reaches 0, as a function of the initial state (so that the server is able to serve all of the customers). As we will see, there are some curious and unexpected parallels between this problem and the problem of computing the extinction probability in the branching chain. As a corollary, we will also be able to classify the queuing chain as transient or recurrent. Our basic parameter of interest is $q = H(1, 0) = \P(\tau_0 \lt \infty \mid X_0 = 1)$, where as usual, $H$ is the hitting probability matrix and $\tau_0 = \min\{n \in \N_+: X_n = 0\}$ is the first positive time that the chain is in state 0 (possibly infinite). Thus, $q$ is the probability that the queue eventually empties, starting with a single customer.
The parameter $q$ satisfies the following properties:
1. $q = H(x, x - 1)$ for every $x \in \N_+$.
2. $q^x = H(x, 0)$ for every $x \in \N_+$.
Proof
1. The critical observation is that if $x \in \N_+$ then $P(x, y) = P(1, y - x + 1) = f(y - x + 1)$ for $y \in \{x - 1, x, x + 1, \ldots\}$. Thus, the chain, starting in $x$, and up until the time that it reaches $x - 1$ (if it does), behaves stochastically like the chain starting in state 1, and up until it reaches 0.
2. In order to reach 0, starting in state $x \in \N_+$, the chain must first reach $x - 1$ and then from $x - 1$ must reach $x - 2$, until finally reaching 0 from state 1. Each of these intermediate trips has probability $q$ by part (a) and are independent by the Markov property.
The parameter $q$ satisfies the equation: $q = \sum_{x = 0}^\infty f(x) q^x$
Proof
This follows from the previous theorem by conditioning on the first state. $\P(\tau_0 \lt \infty \mid X_0 = 1) = \sum_{x=0}^\infty \P(\tau_0 \lt \infty \mid X_0 = 1, X_1 = x) \P(X_1 = x \mid X_0 = 1)$ Note first that $\P(\tau_0 \lt \infty \mid X_0 = 1, X_1 = 0) = 1 = q^0$. On the other hand, by the Markov property and the previous result, $\P(\tau_0 \lt \infty \mid X_0 = 1, X_1 = x) = \P(\tau_0 \lt \infty \mid X_1 = x) = q^x, \quad x \in \N_+$ Of course $\P(X_1 = x \mid X_0 = 1) = P(1, x) = f(x)$ for $x \in \N$.
Note that this is exactly the same equation that we considered for the branching chain, namely $\Phi(q) = q$, where $\Phi$ is the probability generating function of the distribution that governs the number of new customers that arrive during each period.
$q$ is the smallest solution in $(0, 1]$ of the equation $\Phi(t) = t$. Moreover
1. If $m \le 1$ then $q = 1$ and the chain is recurrent.
2. If $m \gt 1$ then $0 \lt q \lt 1$ and the chain is transient.
Proof
This follows from our analysis of branching chains. The graphs above show the two cases. Note that the condition in (a) means that on average, one or fewer new customers arrive for each customer served. The condition in (b) means that on average, more than one new customer arrives for each customer served.
Positive Recurrence
Our next goal is to find conditions for the queuing chain to be positive recurrent. Recall that $m$ is the mean of the probability density function $f$; that is, the expected number of new customers who arrive during a time period. As before, let $\tau_0$ denote the first positive time that the chain is in state 0. We assume that the chain is recurrent, so $m \le 1$ and $\P(\tau_0 \lt \infty) = 1$.
Let $\Psi$ denote the probability generating function of $\tau_0$, starting in state 1. Then
1. $\Psi$ is also the probability generating function of $\tau_0$ starting in state 0.
2. $\Psi^x$ is the probability generating function of $\tau_0$ starting in state $x \in \N_+$.
Proof
1. The transition probabilities starting in state 1 are the same as those starting in state 0: $P(0, x) = P(1, x) = f(x)$ for $x \in \N$.
2. Starting in state $x \in \N_+$, the random time to reach 0 is the sum of the time to reach $x - 1$, the additional time to reach $x - 2$ from $x - 1$, and so forth, ending with the time to reach 0 from state 1. These random times are independent by the Markov property, and each has the same distribution as the time to reach 0 from state 1 by our argument above. Finally, recall that the PGF of a sum of independent variables is the product of the corresponding PGFs.
$\Psi(t) = t \Phi[\Psi(t)]$ for $t \in [-1, 1]$.
Proof
Once again, the trick is to condition on the first state: $\Psi(t) = \E\left(t^{\tau_0} \bigm| X_0 = 1\right) = \sum_{x = 0}^\infty \E\left(t^{\tau_0} \bigm| X_0 = 1, X_1 = x\right) \P(X_1 = x \mid X_0 = 1)$ First note that $\E\left(t^{\tau_0} \bigm| X_0 = 1, X_1 = 0\right) = t^1 = t \Psi^0(t)$. On the other hand, by the Markov property and the previous theorem, $\E\left(t^{\tau_0} \bigm| X_0 = 1, X_1 = x\right) = \E\left(t^{1 + \tau_0} \bigm| X_0 = x\right) = t \E\left(t^{\tau_0} \bigm| X_0 = x\right) = t \Psi^x(t), \quad x \in \N_+$ Of course $\P(X_1 = x \mid X_0 = 1) = P(1, x) = f(x)$. Hence we have $\Psi(t) = \sum_{x=0}^\infty t \Psi^x(t) f(x) = t \Phi[\Psi(t)]$ The PGF of any variable that takes positive integer values is defined on $[-1, 1]$, and maps this interval back into itself. Hence the representation is valid at least for $t \in [-1, 1]$.
The derivative of $\Psi$ is $\Psi^\prime(t) = \frac{\Phi[\Psi(t)]}{1 - t \Phi^\prime[\Psi(t)]}, \quad t \in (-1, 1)$
Proof
Recall that a PGF is infinitely differentiable on the open interval of convergence. Hence using the result in the previous theorem and the product and chain rules, $\Psi^\prime(t) = \Phi[\Psi(t)] + t \Phi^\prime[\Psi(t)] \Psi^\prime(t)$ Solving for $\Psi^\prime(t)$ gives the result.
As usual, let $\mu_0 = \E(\tau_0 \mid X_0 = 0)$, the mean return time to state 0 starting in state 0. Then
1. $\mu_0 = \frac{1}{1 - m}$ if $m \lt 1$ and therefore the chain is positive recurrent.
2. $\mu_0 = \infty$ if $m = 1$ and therefore the chain is null recurrent.
Proof
Recall that $\Psi$ is the probability generating function of $\tau_0$, starting at 0. From basic properties of PGFs we know that $\Phi(t) \uparrow 1$, $\Psi(t) \uparrow 1$, $\Phi^\prime(t) \uparrow m$, and $\Psi^\prime(t) \uparrow \mu_0$ as $t \uparrow 1$. So letting $t \uparrow 1$ in the result of the previous theorem, we have $\mu_0 = 1 \big/ (1 - m)$ if $m \lt 1$ and $\mu_0 = \infty$ if $m = 1$.
So to summarize, the queuing chain is positive recurrent if $m \lt 1$, null recurrent if $m = 1$, and transient if $m > 1$. Since $m$ is the expected number of new customers who arrive during a service period, the results are certainly reasonable.
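In the positive recurrent case, the formula $\mu_0 = 1/(1 - m)$ can be checked by simulation. This Monte Carlo sketch (ours) averages the observed return times to 0; the cap on run length is a practical safeguard and introduces a small truncation bias.

```python
import random

def mean_return_time(sample_arrivals, trials=20_000, seed=7, cap=10_000):
    """Monte Carlo estimate of mu_0: starting from an empty queue, average
    the time of the first return to 0; runs longer than cap are discarded
    (a small truncation bias in the positive recurrent case)."""
    random.seed(seed)
    total = count = 0
    for _ in range(trials):
        x, n = 0, 0
        while n < cap:
            x = sample_arrivals() + (x - 1 if x > 0 else 0)
            n += 1
            if x == 0:
                total += n
                count += 1
                break
    return total / count

# f(0) = 0.7, f(2) = 0.3: m = 0.6, so mu_0 should be 1/(1 - 0.6) = 2.5.
print(mean_return_time(lambda: 2 if random.random() < 0.3 else 0))
```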
Computational Exercises
Consider the queuing chain with arrival probability density function $f$ given by $f(0) = 1 - p$, $f(2) = p$, where $p \in (0, 1)$ is a parameter. Thus, at each time period, either no new customers arrive or two arrive.
1. Find the transition matrix $P$.
2. Find the mean $m$ of the arrival distribution.
3. Find the generating function $\Phi$ of the arrival distribution.
4. Find the probability $q$ that the queue eventually empties, starting with one customer.
5. Classify the chain as transient, null recurrent, or positive recurrent.
6. In the positive recurrent case, find $\mu_0$, the mean return time to 0.
Answer
1. $P(0, 0) = 1 - p$, $P(0, 2) = p$. For $x \in \N_+$, $P(x, x - 1) = 1 - p$, $P(x, x + 1) = p$.
2. $m = 2 p$.
3. $\Phi(t) = p t^2 + (1 - p)$ for $t \in \R$.
4. $q = 1$ if $0 \lt p \le \frac{1}{2}$ and $q = \frac{1 - p}{p}$ if $\frac{1}{2} \lt p \lt 1$.
5. The chain is transient if $p \gt \frac{1}{2}$, null recurrent if $p = \frac{1}{2}$, and positive recurrent if $p \lt \frac{1}{2}$.
6. $\mu_0 = \frac{1}{1 - 2 p}$ for $p \lt \frac{1}{2}$.
Consider the queuing chain whose arrival distribution is the geometric distribution on $\N$ with parameter $1 - p$, where $p \in (0, 1)$. Thus $f(n) = (1 - p) p^n$ for $n \in \N$.
1. Find the transition matrix $P$.
2. Find the mean $m$ of the arrival distribution.
3. Find the generating function $\Phi$ of the arrival distribution.
4. Find the probability $q$ that the queue eventually empties, starting with one customer.
5. Classify the chain as transient, null recurrent, or positive recurrent.
6. In the positive recurrent case, find $\mu_0$, the mean return time to 0.
Answer
1. $P(0, y) = (1 - p) p^y$ for $y \in \N$. For $x \in \N_+$, $P(x, y) = (1 - p) p^{y - x + 1}$ for $y \in \{x - 1, x, x + 1, \ldots\}$.
2. $m = \frac{p}{1 - p}$.
3. $\Phi(t) = \frac{1 - p}{1 - p t}$ for $\left|t\right| \lt \frac{1}{p}$.
4. $q = 1$ if $0 \lt p \le \frac{1}{2}$ and $q = \frac{1 - p}{p}$ if $\frac{1}{2} \lt p \lt 1$.
5. The chain is transient if $p \gt \frac{1}{2}$, null recurrent if $p = \frac{1}{2}$, and positive recurrent if $p \lt \frac{1}{2}$.
6. $\mu_0 = \frac{1 - p}{1 - 2 p}$ for $p \lt \frac{1}{2}$.
Curiously, the parameter $q$ and the classification of the chain are the same in the last two models.
Consider the queuing chain whose arrival distribution is the Poisson distribution with parameter $m \in (0, \infty)$. Thus $f(n) = e^{-m} m^n / n!$ for $n \in \N$. Find each of the following:
1. The transition matrix $P$
2. The mean $m$ of the arrival distribution.
3. The generating function $\Phi$ of the arrival distribution.
4. The approximate value of $q$ when $m = 2$ and when $m = 3$.
5. Classify the chain as transient, null recurrent, or positive recurrent.
6. In the positive recurrent case, find $\mu_0$, the mean return time to 0.
Answer
1. $P(0, y) = e^{-m} m^y / y!$ for $y \in \N$. For $x \in \N_+$, $P(x, y) = e^{-m} m^{y - x + 1} \big/ (y - x + 1)!$ for $y \in \{x - 1, x, x + 1, \ldots\}$.
2. The parameter $m$ is the mean of the Poisson distribution, so the notation is consistent.
3. $\Phi(t) = e^{m (t - 1)}$ for $t \in \R$.
4. $q = 1$ if $0 \lt m \le 1$. If $m \gt 1$ then $q$ is the solution in $(0, 1)$ of the equation $e^{m (q - 1)} = q$ which can be expressed in terms of a special function known as the Lambert $W$ function: $q = -\frac{1}{m} W\left(-m e^{-m}\right)$ For $m = 2$, $q \approx 0.20319$. For $m = 3$, $q \approx 0.059520$.
5. The chain is transient if $m \gt 1$, null recurrent if $m = 1$, and positive recurrent if $m \lt 1$.
6. $\mu_0 = \frac{1}{1 - m}$ for $m \lt 1$. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.12%3A_Discrete-Time_Queuing_Chains.txt |
Basic Theory
Introduction
Suppose that $S$ is an interval of integers (that is, a set of consecutive integers), either finite or infinite. A (discrete-time) birth-death chain on $S$ is a discrete-time Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ on $S$ with transition probability matrix $P$ of the form $P(x, x - 1) = q(x), \; P(x, x) = r(x), \; P(x, x + 1) = p(x); \quad x \in S$ where $p$, $q$, and $r$ are nonnegative functions on $S$ with $p(x) + q(x) + r(x) = 1$ for $x \in S$.
If the interval $S$ has a minimum value $a \in \Z$ then of course we must have $q(a) = 0$. If $r(a) = 1$, the boundary point $a$ is absorbing and if $p(a) = 1$, then $a$ is reflecting. Similarly, if the interval $S$ has a maximum value $b \in \Z$ then of course we must have $p(b) = 0$. If $r(b) = 1$, the boundary point $b$ is absorbing and if $q(b) = 1$, then $b$ is reflecting. Several other special models that we have studied are birth-death chains; these are explored below.
In this section, as you will see, we often have sums of products. Recall that a sum over an empty index set is 0, while a product over an empty index set is 1.
Recurrence and Transience
If $S$ is finite, classification of the states of a birth-death chain as recurrent or transient is simple, and depends only on the state graph. In particular, if the chain is irreducible, then the chain is positive recurrent. So we will study the classification of birth-death chains when $S = \N$. We assume that $p(x) \gt 0$ for all $x \in \N$ and that $q(x) \gt 0$ for all $x \in \N_+$ (but of course we must have $q(0) = 0$). Thus, the chain is irreducible.
Under these assumptions, the birth-death chain on $\N$ is
1. Aperiodic if $r(x) \gt 0$ for some $x \in \N$.
2. Periodic with period 2 if $r(x) = 0$ for all $x \in \N$.
Proof
1. If $r(x) \gt 0$ for some $x \in \N$ then $P(x, x) \gt 0$ and hence the chain is aperiodic.
2. If $r(x) = 0$ for every $x \in \N$ then clearly the chain starting in $x$ can be in state $x$ again only at even times.
We will use the test for recurrence derived earlier with $A = \N_+$, the set of positive states. That is, we will compute the probability that the chain never hits 0, starting in a positive state.
The chain $\bs{X}$ is recurrent if and only if $\sum_{x = 0}^\infty \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} = \infty$
Proof
Let $P_+$ denote the restriction of $P$ to $\N_+ \times \N_+$, and define $u_+: \N_+ \to [0, 1]$ by $u_+(x) = \P(X_1 \gt 0, X_2 \gt 0, \ldots \mid X_0 = x), \quad x \in \N_+$ So $u_+(x)$ is the probability that the chain never reaches 0, starting in $x \in \N_+$. From our general theory, we know that $u_+$ satisfies $u_+ = P_+ u_+$ and is the largest such function with values in $[0, 1]$. Furthermore, we know that either $u_+(x) = 0$ for all $x \in \N_+$ or $\sup\{u_+(x): x \in \N_+\} = 1$. In the first case the chain is recurrent, and in the second case the chain is transient.
The functional equation $P_+ u = u$ for a function $u: \N_+ \to [0, 1]$ is equivalent to the following system of equations: \begin{align} u(2) - u(1) & = \frac{q(1)}{p(1)} u(1) \\ u(x + 1) - u(x) & = \frac{q(x)}{p(x)}[u(x) - u(x - 1)], \quad x \in \{2, 3, \ldots\} \end{align} Solving this system of equations for the differences gives $u(x + 1) - u(x) = \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} u(1), \quad x \in \N_+$ Solving this new system gives $u(x) = u(1) \sum_{i=0}^{x-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}, \quad x \in \N_+$ Note that $u(x)$ is increasing in $x \in \N_+$ and so has a limit as $x \to \infty$. Let $A = \sum_{i=0}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}$.
1. Suppose that $A = \infty$. Letting $x \to \infty$ in the displayed equation above for $u(x)$ shows that $u(1) = 0$ and so $u(x) = 0$ for all $x$. Hence the chain is recurrent.
2. Suppose that $A \lt \infty$. Define $u(1) = 1/A$ and then more generally, $u(x) = \frac{1}{A} \sum_{i=0}^{x-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}, \quad x \in \N_+$ The function $u$ takes values in $(0, 1)$ and satisfies the functional equation $u = P_+ u$. Hence the chain is transient. Note that $u(x) \to 1$ as $x \to \infty$ and so in fact, $u = u_+$, the function that we discussed above that gives the probability of staying in $\N_+$ for all time. We will return to this function below in our discussion of absorption.
Note that $r$, the function that assigns to each state $x \in \N$ the probability of an immediate return to $x$, plays no direct role in whether the chain is transient or recurrent. Indeed all that matters are the ratios $q(x) / p(x)$ for $x \in \N_+$.
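Numerically, the recurrence criterion amounts to watching whether the partial sums of $A$ stabilize or keep growing. Here is a sketch (ours, with our own function names).

```python
def recurrence_partial_sum(p, q, n_max):
    """Partial sum of A = sum_x [q(1)...q(x)] / [p(1)...p(x)]; the chain is
    recurrent iff A = infinity, so unbounded growth of the partial sums is
    (numerical) evidence of recurrence."""
    total, ratio = 1.0, 1.0  # the x = 0 term is the empty product, 1
    for x in range(1, n_max):
        ratio *= q(x) / p(x)
        total += ratio
    return total

# Walk drifting right: p = 0.6, q = 0.4; A = 1/(1 - 2/3) = 3, so transient.
print(recurrence_partial_sum(lambda x: 0.6, lambda x: 0.4, 1000))
# Symmetric walk: p = q = 0.5; partial sums grow like n_max, so recurrent.
print(recurrence_partial_sum(lambda x: 0.5, lambda x: 0.5, 1000))   # 1000
print(recurrence_partial_sum(lambda x: 0.5, lambda x: 0.5, 10000))  # 10000
```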
Positive Recurrence and Invariant Distributions
Suppose again that we have a birth-death chain $\bs{X}$ on $\N$, with $p(x) \gt 0$ for all $x \in \N$ and $q(x) \gt 0$ for all $x \in \N_+$. Thus the chain is irreducible.
The function $g: \N \to (0, \infty)$ defined by $g(x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}, \quad x \in \N$ is invariant for $\bs{X}$, and is the only invariant function, up to multiplication by constants. Hence $\bs{X}$ is positive recurrent if and only if $B = \sum_{x = 0}^\infty g(x) \lt \infty$, in which case the (unique) invariant probability density function $f$ is given by $f(x) = \frac{1}{B} g(x)$ for $x \in \N$.
Proof
Recall that by convention, a product over an empty index set is 1. So first, \begin{align*} (g P)(0) & = g(0) P(0, 0) + g(1) P(1, 0) = g(0) r(0) + g(1) q(1) \\ & = 1 r(0) + \frac{p(0)}{q(1)} q(1) = [1 - p(0)] + p(0) = 1 = g(0) \end{align*} Next, for $y \in \N_+$, \begin{align*} (g P)(y) & = g(y - 1) P(y - 1, y) + g(y) P(y, y) + g(y + 1) P(y + 1, y) \\ & = g(y - 1) p(y - 1) + g(y) r(y) + g(y + 1) q(y + 1) \\ & = g(y - 1) p(y - 1) + g(y) [1 - p(y) - q(y)] + g(y + 1) q(y + 1) \end{align*} But \begin{align*} g(y - 1) p(y - 1) & = g(y) q(y) = \frac{p(0) \cdots p(y - 1)}{q(1) \cdots q(y - 1)} \\ g(y + 1) q(y + 1) & = g(y) p(y) = \frac{p(0) \cdots p(y)}{q(1) \cdots q(y)} \end{align*} so $(g P)(y) = g(y)$.
Conversely, suppose that $h: \N \to \R$ is invariant for $\bs{X}$. We will show by induction that $h(x) = h(0) g(x)$ for all $x \in \N$. The result is trivially true for $x = 0$ since $g(0) = 1$. Next, $(h P)(0) = h(0)$ gives $h(0) P(0, 0) + h(1) P(1, 0) = h(0)$. But $P(0, 0) = r(0) = [1 - p(0)]$ and $P(1, 0) = q(1)$, so substituting and solving for $h(1)$ gives $h(1) = h(0) \frac{p(0)}{q(1)} = h(0) g(1)$ so the result is true when $x = 1$. Assume now that $y \in \N_+$ and that the result is true for all $x \in \N$ with $x \le y$. Then $(h P)(y) = h(y)$ gives $h(y - 1) P(y - 1, y) + h(y) P(y, y) + h(y + 1) P(y + 1, y) = h(y)$ But $P(y - 1, y) = p(y - 1)$, $P(y, y) = r(y) = 1 - p(y) - q(y)$, and $P(y + 1, y) = q(y + 1)$. Also, by the induction hypothesis, $h(y) = h(0) g(y)$ and $h(y - 1) = h(0) g(y - 1)$ so substituting and using the definition of $g$ gives \begin{align*} q(y + 1) h(y + 1) & = [p(y) + q(y)] h(0) \frac{p(0) \cdots p(y - 1)}{q(1) \cdots q(y)} - p(y - 1) h(0) \frac{p(0) \cdots p(y - 2)}{q(1) \cdots q(y - 1)} \\ & = h(0) \frac{p(0) \cdots p(y)}{q(1) \cdots q(y)} \end{align*} Finally, solving gives $h(y + 1) = h(0) \frac{p(0) \cdots p(y)}{q(1) \cdots q(y + 1)} = h(0) g(y + 1)$
Here is a summary of the classification:
For the birth-death chain $\bs X$, define $A = \sum_{x = 0}^\infty \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)}, \quad B = \sum_{x = 0}^\infty \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}$
1. $\bs X$ is transient if $A \lt \infty$
2. $\bs X$ is null recurrent if $A = \infty$ and $B = \infty$.
3. $\bs X$ is positive recurrent if $B \lt \infty$.
Note again that $r$, the function that assigns to each state $x \in \N$ the probability of an immediate return to $x$, plays no direct role in whether the chain is transient, null recurrent, or positive recurrent. Also, we know that an irreducible, recurrent chain has a positive invariant function that is unique up to multiplication by positive constants, but the birth-death chain gives an example where this is also true in the transient case.
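Here is a numerical version of the invariant PDF (a sketch of our own, truncating the state space and assuming a negligible tail).

```python
def bd_invariant_pdf(p, q, n_max=500):
    """Truncated invariant PDF for a birth-death chain on N:
    g(x) = [p(0)...p(x-1)] / [q(1)...q(x)], normalized; valid as an
    approximation in the positive recurrent case (B < infinity) when the
    tail beyond n_max is negligible."""
    g, term = [1.0], 1.0
    for x in range(1, n_max):
        term *= p(x - 1) / q(x)
        g.append(term)
    B = sum(g)
    return [v / B for v in g]

# Constant rates p = 0.3, q = 0.7: g(x) is proportional to (3/7)^x, so the
# invariant distribution is geometric and f(0) = 1 - 3/7 = 4/7.
f = bd_invariant_pdf(lambda x: 0.3, lambda x: 0.7)
print(f[0])  # approximately 0.5714
```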
Suppose now that $n \in \N_+$ and that $\bs X = (X_0, X_1, X_2, \ldots)$ is a birth-death chain on the integer interval $\N_n = \{0, 1, \ldots, n\}$. We assume that $p(x) \gt 0$ for $x \in \{0, 1, \ldots, n - 1\}$ while $q(x) \gt 0$ for $x \in \{1, 2, \ldots n\}$. Of course, we must have $q(0) = p(n) = 0$. With these assumptions, $\bs X$ is irreducible, and since the state space is finite, positive recurrent. So all that remains is to find the invariant distribution. The result is essentially the same as when the state space is $\N$.
The invariant probability density function $f_n$ is given by $f_n(x) = \frac{1}{B_n} \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)} \text{ for } x \in \N_n \text{ where } B_n = \sum_{x=0}^n \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}$
Proof
Define $g_n(x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}, \quad x \in \N_n$ The proof that $g_n$ is invariant for $\bs X$ is the same as before. The constant $B_n$ is the normalizing constant.
Note that $B_n \to B$ as $n \to \infty$, and if $B \lt \infty$, $f_n(x) \to f(x)$ as $n \to \infty$ for $x \in \N$. We will see this type of behavior again. Results for the birth-death chain on $\N_n$ often converge to the corresponding results for the birth-death chain on $\N$ as $n \to \infty$.
Absorption
Often when the state space $S = \N$, the state of a birth-death chain represents a population of individuals of some sort (and so the terms birth and death have their usual meanings). In this case state 0 is absorbing and means that the population is extinct. Specifically, suppose that $\bs X = (X_0, X_1, X_2, \ldots)$ is a birth-death chain on $\N$ with $r(0) = 1$ and with $p(x), \, q(x) \gt 0$ for $x \in \N_+$. Thus, state 0 is absorbing and all positive states lead to each other and to 0. Let $N = \min\{n \in \N: X_n = 0\}$ denote the time until absorption, where as usual, $\min \emptyset = \infty$.
One of the following events will occur:
1. Population extinction: $N \lt \infty$ or equivalently, $X_m = 0$ for some $m \in \N$ and hence $X_n = 0$ for all $n \ge m$.
2. Population explosion: $N = \infty$ or equivalently $X_n \to \infty$ as $n \to \infty$.
Proof
Since 0 is absorbing, and all positive states lead to each other and to 0, the positive states are transient. With probability 1, a Markov chain visits a transient state only finitely often, so either $X_n = 0$ for some $n \in \N$ or $X_n \to \infty$ as $n \to \infty$. In particular, $N = \infty$ is equivalent to $X_n \to \infty$ as $n \to \infty$.
Naturally we would like to find the probability of these complementary events, and happily we have already done so in our study of recurrence above. Let $u(x) = \P(N = \infty) = \P(X_n \to \infty \text{ as } n \to \infty \mid X_0 = x), \quad x \in \N$ so the absorption probability is $v(x) = 1 - u(x) = \P(N \lt \infty) = \P(X_n = 0 \text{ for some } n \in \N \mid X_0 = x), \quad x \in \N$
For the birth-death chain $\bs X$, $u(x) = \frac{1}{A} \sum_{i=0}^{x - 1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \text{ for } x \in \N_+ \text{ where } A = \sum_{i=0}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}$
Proof
For $x \in \N_+$, note that $u(x) = \P(X_n \in \N_+ \text{ for all } n \in \N \mid X_0 = x)$, the function that gives the probability of staying in the positive states for all time. The proof of the theorem on recurrence above has nothing to do with the transition probabilities in state 0, so the proof applies in this setting as well. In that proof we showed that $u(x)$ as the form given above, where of course the value is 0 if $A = \infty$. Trivially, $u(0) = 0$.
So if $A = \infty$ then $u(x) = 0$ for all $x \in \N$. If $A \lt \infty$ then $u(x) \gt 0$ for all $x \in \N_+$ and $u(x) \to 1$ as $x \to \infty$. For the absorption probability, $v(x) = 1$ for all $x \in \N$ if $A = \infty$ and so absorption is certain. If $A \lt \infty$ then $v(x) = \frac{1}{A} \sum_{i=x}^\infty \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}, \quad x \in \N$ Next we consider the mean time to absorption, so let $m(x) = \E(N \mid X_0 = x)$ for $x \in \N_+$.
The mean absorption function is given by $m(x) = \sum_{j=1}^x \sum_{k=j-1}^\infty \frac{p(j) \cdots p(k)}{q(j) \cdots q(k+1)}, \quad x \in \N$
Probabilisitic Proof
The number of steps required to go from state $x \in \N_+$ to $x - 1$ has the same distribution as the number of steps required to go from state 1 to 0, except with parameters $p(y), \, q(y)$ for $y \in \{x, x + 1, \ldots\}$ instead of parameters $p(y), \, q(y)$ for $y \in \{1, 2, \ldots\}$. So by the additivity of expected value, we just need to compute $m(1)$ as a function of the parameters. Starting in state 1, the chain will be absorbed in state 0 after a random number of returns to state 1 without absorption. Whenever the chain is in state 1, absorption occurs at the next time with probability $q(1)$ so it follows that the number of times that the chain is in state 1 before absorption has the geometric distribution on $\N_+$ with success parameter $q(1)$. The mean of this distribution is $1 / q(1)$. On the other hand, starting in state 1, the number of steps until the chain is in state 1 again (without absorption) has the same distribution as the return time to state 0, starting in state 0 for the irreducible birth-death chain $\bs{X}^\prime$ considered above but with birth and death functions $p^\prime$ and $q^\prime$ given by $p^\prime(x) = p(x + 1)$ for $x \in \N$ and $q^\prime(x) = q(x + 1)$ for $x \in \N_+$. Thus, let $\mu = \sum_{k=0}^\infty \frac{p(1) \cdots p(k)}{q(2) \cdots q(k+1)}$ Then $\mu$ is the mean return time to state 0 for the chain $\bs{X}^\prime$. Specifically, note that if $\mu = \infty$ then $\bs{X}^\prime$ is either transient or null recurrent. If $\mu \lt \infty$ then $1 / \mu$ is the invariant PDF at 0. So, it follows that $m(1) = \frac{1}{q(1)} \mu = \sum_{k=0}^\infty \frac{p(1) \cdots p(k)}{q(1) \cdots q(k + 1)}$ By our argument above, the mean time to go from state $x$ to $x - 1$ is $\sum_{k=x-1}^\infty \frac{p(x) \cdots p(k)}{q(x) \cdots q(k + 1)}$
Analytic Proof
Conditioning and using the Markov property, we have $m(x) = 1 + p(x) m(x + 1) + q(x) m(x - 1) + r(x) m(x), \quad x \in \N_+$ with initial condition $m(0) = 0$. Equivalently, $m(x + 1) - m(x) = \frac{q(x)}{p(x)}[m(x) - m(x - 1)] - \frac{1}{p(x)}, \quad x \in \N_+$ Solving gives $m(x + 1) - m(x) = \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} m(1) - \sum_{y=1}^x \frac{q(y+1) \cdots q(x)}{p(y) \cdots p(x)}, \quad x \in \N_+$ Next, $m(x) = \sum_{y=0}^{x-1} [m(y+1) - m(y)]$ for $x \in \N$ which gives $m(x) = m(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z + 1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \N$ Finally, $m(1)$ is given as in the first proof. The expression for $m(x)$ is different, but equivalent, of course.
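Starting from one individual, the mean absorption time is the single series $m(1)$ from the probabilistic proof, which is easy to sum numerically (a sketch of ours; the partial sum is only meaningful when the series converges).

```python
def mean_absorption_from_one(p, q, n_max=10**4):
    """Partial sum of m(1) = sum_k [p(1)...p(k)] / [q(1)...q(k+1)], the mean
    time to absorption at 0 starting from state 1 (finite only when the
    series converges)."""
    term = 1.0 / q(1)  # the k = 0 term
    total = term
    for k in range(1, n_max):
        term *= p(k) / q(k + 1)
        total += term
    return total

# Constant rates p = 0.3, q = 0.7: the series is geometric with ratio 3/7,
# so m(1) = (1/0.7) / (1 - 3/7) = 2.5.
print(mean_absorption_from_one(lambda x: 0.3, lambda x: 0.7))
```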
Next we will consider a birth-death chain on a finite integer interval with both endpoints absorbing. Our interest is in the probability of absorption in one endpoint rather than the other, and in the mean time to absorption. Thus suppose that $n \in \N_+$ and that $\bs X = (X_0, X_1, X_2, \ldots)$ is a birth-death chain on $\N_n = \{0, 1, \ldots, n\}$ with $r(0) = r(n) = 1$ and with $p(x) \gt 0$ and $q(x) \gt 0$ for $x \in \{1, 2, \ldots, n - 1\}$. So the endpoints 0 and $n$ are absorbing, and all other states lead to each other and to the endpoints. Let $N = \min\{n \in \N: X_n \in \{0, n\}\}$, the time until absorption, and for $x \in S$ let $v_n(x) = \P(X_N = 0 \mid X_0 = x)$ and $m_n(x) = \E(N \mid X_0 = x)$. The definitions make sense since $N$ is finite with probability 1.
The absorption probability function for state 0 is given by $v_n(x) = \frac{1}{A_n} \sum_{i=x}^{n-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)} \text{ for } x \in \N_n \text{ where } A_n = \sum_{i=0}^{n-1} \frac{q(1) \cdots q(i)}{p(1) \cdots p(i)}$
Proof
Conditioning and using the Markov property, $v_n$ satisfies the second-order linear difference equation $v_n(x) = p(x) v_n(x + 1) + q(x) v_n(x - 1) + r(x) v_n(x), \quad x \in \{1, 2, \ldots, n - 1\}$ with boundary conditions $v_n(0) = 1$, $v_n(n) = 0$. As we have seen before, the difference equation can be rewritten as $v_n(x + 1) - v_n(x) = \frac{q(x)}{p(x)} [v_n(x) - v_n(x - 1)], \quad x \in \{1, 2, \ldots, n - 1\}$ Solving and applying the boundary conditions gives the result.
Note that $A_n \to A$ as $n \to \infty$ where $A$ is the constant above for the absorption probability at 0 with the infinite state space $\N$. If $A \lt \infty$ then $v_n(x) \to v(x)$ as $n \to \infty$ for $x \in \N$.
The mean absorption time is given by $m_n(x) = m_n(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z+1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \N_n$ where, with $A_n$ as in the previous theorem, $m_n(1) = \frac{1}{A_n} \sum_{y=1}^{n-1} \sum_{z=1}^y \frac{q(z+1) \cdots q(y)}{p(z) \cdots p(y)}$
Proof
The probabilistic proof above with state space $\N$ and 0 absorbing does not work here, but the first part of the analytic proof does. So, $m_n(x) = m_n(1) \sum_{y=0}^{x-1} \frac{q(1) \cdots q(y)}{p(1) \cdots p(y)} - \sum_{y=0}^{x-1} \sum_{z=1}^y \frac{q(z + 1) \cdots q(y)}{p(z) \cdots p(y)}, \quad x \in \{1, 2, \ldots, n\}$ Substituting $x = n$ and applying the boundary condition $m_n(n) = 0$ gives the formula for $m_n(1)$ in the theorem.
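As a numerical check, the following Python sketch (not part of the text) computes $v_n$ from the product formula and compares it with a direct solution of the difference equation, using arbitrary state-dependent parameters:

```python
import numpy as np

n = 10
p = {x: 0.3 + 0.02 * x for x in range(1, n)}  # arbitrary positive parameters
q = {x: 0.5 - 0.01 * x for x in range(1, n)}  # with p(x) + q(x) < 1

def prod_qp(i):
    """q(1)...q(i) / (p(1)...p(i)), an empty product (= 1) when i = 0."""
    out = 1.0
    for y in range(1, i + 1):
        out *= q[y] / p[y]
    return out

A_n = sum(prod_qp(i) for i in range(n))
v = [sum(prod_qp(i) for i in range(x, n)) / A_n for x in range(n + 1)]

# Independent check: solve v(x) = p(x) v(x+1) + q(x) v(x-1) + r(x) v(x)
# as a linear system with boundary conditions v(0) = 1, v(n) = 0.
M = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for x in range(1, n):
    i = x - 1
    M[i, i] = p[x] + q[x]      # (1 - r(x)) v(x)
    if x > 1:
        M[i, i - 1] = -q[x]
    else:
        b[i] = q[x]            # boundary term from v(0) = 1
    if x < n - 1:
        M[i, i + 1] = -p[x]    # v(n) = 0 contributes nothing
print(np.allclose(v[1:n], np.linalg.solve(M, b)))  # True
```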
Time Reversal
Our next discussion is on the time reversal of a birth-death chain. Essentially, every recurrent birth-death chain is reversible.
Suppose that $\bs X = (X_0, X_1, X_2, \ldots)$ is an irreducible, recurrent birth-death chain on an integer interval $S$. Then $\bs X$ is reversible.
Proof
We need to show that the Kolmogorov cycle condition is satisfied. That is, for every sequence of states $(x_0, x_1, x_2, \ldots, x_n)$ with $x_0 = x_n$, $P(x_0, x_1) P(x_1, x_2) \cdots P(x_{n-1}, x_n) = P(x_n, x_{n-1}) P(x_{n-1}, x_{n-2}) \cdots P(x_1, x_0)$ If $x_{i+1} \notin \{x_i - 1, x_i, x_i + 1\}$ for some $i \in \{0, 1, \ldots, n - 1\}$ then both sides are 0, so we can restrict our attention to nearest-neighbor cycles. For such a cycle, each crossing of an edge $(x, x + 1)$ in the upward direction must be matched by a crossing in the downward direction (and conversely), and steps with $x_{i+1} = x_i$ contribute the same factor $r(x_i)$ to both sides. Hence the two products contain exactly the same factors, and the cycle condition is satisfied.
If $S$ is finite and the chain $\bs X$ is irreducible, then of course $\bs X$ is recurrent (in fact positive recurrent), so by the previous result, $\bs X$ is reversible. In the case $S = \N$, we can use the invariant function above to show directly that the chain is reversible.
Suppose that $\bs X = (X_0, X_1, X_2, \ldots)$ is a birth-death chain on $\N$ with $p(x) \gt 0$ for $x \in \N$ and $q(x) \gt 0$ for $x \in \N_+$. Then $\bs X$ is reversible.
Proof
With the function $g$ defined above, it suffices to show the reversibility condition $g(x)P(x, y) = g(y) P(y, x)$ for all $x, \, y \in \N$. It then follows that $g$ is invariant for $\bs{X}$ and that $\bs{X}$ is reversible with respect to $g$. But since $g$ is the only positive invariant function for $\bs{X}$, up to multiplication by positive constants, we can omit the qualifying phrase with respect to $g$. For $x \in \N$ and $y = x + 1$ we have $g(x) P(x, y) = g(y) P(y, x) = \frac{p(0) \cdots p(x)}{q(1) \cdots q(x)}$ For $x \in \N_+$ and $y = x - 1$ we have $g(x) P(x, y) = g(y) P(y, x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x - 1)}$ In all other cases, the reversibility condition is trivially satisfied.
Thus, in the positive recurrent case, when the variables are given the invariant distribution, the transition matrix $P$ describes the chain forward in time and backwards in time.
Examples and Special Cases
As always, be sure to try the problems yourself before looking at the solutions.
Constant Birth and Death Probabilities
Our first examples consider birth-death chains on $\N$ with constant birth and death probabilities, except at the boundary points. Such chains are often referred to as random walks, although that term is used in a variety of different settings. The results are special cases of the general results above, but sometimes direct proofs are illuminating.
Suppose that $\bs X = (X_0, X_1, X_2, \ldots)$ is the birth-death chain on $\N$ with constant birth probability $p \in (0, 1)$ on $\N$ and constant death probability $q \in (0, 1)$ on $\N_+$, with $p + q \le 1$. Then
1. $\bs X$ is transient if $q \lt p$
2. $\bs X$ is null recurrent if $q = p$
3. $\bs X$ is positive recurrent if $q \gt p$, and the invariant distribution is the geometric distribution on $\N$ with parameter $p / q$ $f(x) = \left( 1 - \frac{p }{q} \right) \left( \frac{p}{q} \right)^x, \quad x \in \N$
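The invariance of the geometric distribution comes down to the detailed balance equation $f(x) p = f(x + 1) q$, which is trivial to verify numerically. A minimal Python sketch, with the arbitrary choice $p = 0.3$, $q = 0.5$:

```python
# Detailed balance f(x) p = f(x + 1) q for the geometric invariant pdf
p, q = 0.3, 0.5
f = lambda x: (1 - p / q) * (p / q) ** x
print(all(abs(f(x) * p - f(x + 1) * q) < 1e-12 for x in range(50)))  # True
```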
Next we consider the random walk on $\N$ with 0 absorbing. As in the discussion of absorption above, $v(x)$ denotes the absorption probability and $m(x)$ the mean time to absorption, starting in state $x \in \N$.
Suppose that $\bs X = (X_0, X_1, \ldots)$ is the birth-death chain on $\N$ with constant birth probability $p \in (0, 1)$ on $\N_+$ and constant death probability $q \in (0, 1)$ on $\N_+$, with $p + q \le 1$. Assume also that $r(0) = 1$, so that 0 is absorbing.
1. If $q \ge p$ then $v(x) = 1$ for all $x \in \N$. If $q \lt p$ then $v(x) = (q/p)^x$ for $x \in \N$.
2. If $q \le p$ then $m(x) = \infty$ for all $x \in \N_+$. If $q \gt p$ then $m(x) = x / (q - p)$ for $x \in \N$.
Proof
1. This follows from the general result above for the absorption probability.
2. This also follows from the general result above for the mean absorption time, but we will give a direct proof using the same ideas. If $q \lt p$ then $\P(N = \infty \mid X_0 = x) \gt 0$ and hence $m(x) = \infty$ for $x \in \N_+$. So suppose that $q \ge p$ so that $\P(N \lt \infty \mid X_0 = x) = 1$ for $x \in \N$. Because of the spatial homogeneity, the time required to reach state $x - 1$ starting in state $x \in \N_+$ has the same distribution as the time required to reach state 0 starting in state 1. By the additivity of expected value, it follows that $m(x) = x \, m(1)$ for $x \in \N$. So it remains for us to compute $m(1)$. Starting in state 1, the chain will be absorbed into state 0 after a random number of intermediate returns to state 1 without absorption. In state 1, the probability of absorption at the next step is $q$, so the number of times that the chain is in state 1 before absorption has the geometric distribution on $\N_+$ with success parameter $q$. So the mean number of visits is $1 / q$. In state 1, the number of steps before a return to state 1 without absorption has the same distribution as the return time to state 0, starting in 0, for the recurrent chain considered in the previous result. The mean of this distribution is $\infty$ if $q = p$ and is $1 / f(0)$ if $q \gt p$, where $f$ is the invariant distribution. It follows that $m(1) = \frac{1}{q} \frac{1}{1 - p / q} = \frac{1}{q - p}$
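A Monte Carlo sketch (not part of the text) of the mean absorption time, with the arbitrary parameters $p = 0.3$, $q = 0.5$, $r = 0.2$:

```python
import random

p, q = 0.3, 0.5  # r = 0.2 implicitly; q > p so absorption is certain
random.seed(7)

def absorption_time(x):
    """Simulate the walk from x until it hits the absorbing state 0."""
    t = 0
    while x > 0:
        u = random.random()
        if u < p:
            x += 1
        elif u < p + q:
            x -= 1
        t += 1
    return t

for x in (1, 3, 5):
    est = sum(absorption_time(x) for _ in range(20_000)) / 20_000
    print(x, est, x / (q - p))  # estimate vs. the exact value
```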
This chain is essentially the gambler's ruin chain. Consider a gambler who bets on a sequence of independent games, where $p$ and $q$ are the probabilities of winning and losing, respectively. The gambler receives one monetary unit when she wins a game and must pay one unit when she loses a game. So $X_n$ is the gambler's fortune after playing $n$ games.
Next we consider random walks on a finite interval.
Suppose that $\bs X = (X_0, X_1, \ldots)$ is the birth-death chain on $\N_n = \{0, 1, \ldots, n\}$ with constant birth probability $p \in (0, 1)$ on $\{0, 1, \ldots, n - 1\}$ and constant death probability $q \in (0, 1)$ on $\{1, 2, \ldots, n\}$, with $p + q \le 1$. Then $\bs X$ is positive recurrent and the invariant probability density function $f_n$ is given as follows:
1. If $p \ne q$ then $f_n(x) = \frac{(p/q)^x (1 - p/q)}{1 - (p/q)^{n+1}}, \quad x \in \N_n$
2. If $p = q$ then $f_n(x) = 1 / (n + 1)$ for $x \in \N_n$.
Note that if $p \lt q$ then the invariant distribution is a truncated geometric distribution, and $f_n(x) \to f(x)$ for $x \in \N$ where $f$ is the invariant probability density function of the birth-death chain on $\N$ considered above. If $p = q$, the invariant distribution is uniform on $\N_n$, certainly a reasonable result. Next we consider the chain with both endpoints absorbing. As before, $v_n$ is the function that gives the probability of absorption in state 0, while $m_n$ is the function that gives the mean time to absorption.
Suppose that $\bs X = (X_0, X_1, \ldots)$ is the birth-death chain on $\N_n = \{0, 1, \ldots, n\}$ with constant birth probability $p \in (0, 1)$ and constant death probability $q \in (0, 1)$ on $\{1, 2, \ldots, n - 1\}$, where $p + q \le 1$. Assume also that $r(0) = r(n) = 1$, so that $0$ and $n$ are absorbing.
1. If $p \ne q$ then $v_n(x) = \frac{(q/p)^x - (q/p)^n}{1 - (q/p)^n}, \quad x \in \N_n$
2. If $p = q$ then $v_n(x) = 1 - x / n$ for $x \in \N_n$
Note that if $q \lt p$ then $v_n(x) \to v(x)$ as $n \to \infty$ for $x \in \N$.
Suppose again that $\bs X = (X_0, X_1, \ldots)$ is the birth-death chain on $\N_n = \{0, 1, \ldots, n\}$ with constant birth probability $p \in (0, 1)$ and constant death probability $q \in (0, 1)$ on $\{1, 2, \ldots, n - 1\}$, where $p + q \le 1$. Assume also that $r(0) = r(n) = 1$, so that $0$ and $n$ are absorbing.
1. If $p \ne q$ then $m_n(x) = \frac{n}{p - q} \frac{1 - (q/p)^x}{1 - (q/p)^n} + \frac{x}{q - p}, \quad x \in \N_n$
2. If $p = q$ then $m_n(x) = \frac{1}{2p}x(n - x), \quad x \in \N_n$
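As with the infinite state space, these closed forms can be checked against a direct linear solve. A Python sketch with the arbitrary values $n = 8$, $p = 0.4$, $q = 0.35$:

```python
import numpy as np

n, p, q = 8, 0.4, 0.35
rho = q / p

v = [(rho ** x - rho ** n) / (1 - rho ** n) for x in range(n + 1)]
m = [n / (p - q) * (1 - rho ** x) / (1 - rho ** n) + x / (q - p)
     for x in range(n + 1)]

# Difference equations for the interior states 1 .. n-1:
# (p + q) u(x) - p u(x+1) - q u(x-1) = b(x), with u(0) = u(n) = 0 built in.
A = np.zeros((n - 1, n - 1))
for x in range(1, n):
    i = x - 1
    A[i, i] = p + q
    if x > 1:
        A[i, i - 1] = -q
    if x < n - 1:
        A[i, i + 1] = -p
bv = np.zeros(n - 1); bv[0] = q   # boundary term from v(0) = 1
bm = np.ones(n - 1)               # the +1 per step for the mean time
print(np.allclose(v[1:n], np.linalg.solve(A, bv)))  # True
print(np.allclose(m[1:n], np.linalg.solve(A, bm)))  # True
```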
Special Birth-Death Chains
Some of the random processes that we have studied previously are birth-death Markov chains.
Describe each of the following as a birth-death chain.
1. The Ehrenfest chain.
2. The modified Ehrenfest chain.
3. The Bernoulli-Laplace chain
4. The simple random walk on $\Z$.
Answer
1. The Ehrenfest chain with parameter $m \in \N_+$ is a birth-death chain on $S = \{0, 1, \ldots, m\}$ with $q(x) = \frac{x}{m}$ and $p(x) = \frac{m - x}{m}$ for $x \in S$.
2. The modified Ehrenfest chain with parameter $m \in \N_+$ is a birth-death chain on $S = \{0, 1, \ldots, m\}$ with $q(x) = \frac{x}{2 m}$, $r(x) = \frac{1}{2}$, and $p(x) = \frac{m - x}{2 m}$ for $x \in S$.
3. The Bernoulli-Laplace chain with parameters $j, \, k, \, r \in \N_+$ with $r \lt j + k$ is a birth-death chain on $S = \{\max\{0, r - j\}, \ldots, \min\{k, r\}\}$ with $q(x) = \frac{(j - r + x) x}{j k}$, $r(x) = \frac{(r - x) x + (j - r + x)(k - x)}{j k}$, and $p(x) = \frac{(r - x)(k - x)}{j k}$ for $x \in S$.
4. The simple random walk on $\Z$ with parameter $p \in (0, 1)$ is a birth-death chain on $\Z$ with $p(x) = p$ and $q(x) = 1 - p$ for $x \in \Z$.
Other Examples
Consider the birth-death process on $\N$ with $p(x) = \frac{1}{x + 1}$, $q(x) = 1 - p(x)$, and $r(x) = 0$ for $x \in S$.
1. Find the invariant function $g$.
2. Classify the chain.
Answer
1. Note that $p(0) \cdots p(x - 1) = \frac{1}{x!}$ and $q(1) \cdots q(x) = \frac{1}{x + 1} = p(x)$ for $x \in \N$. Hence $g(x) = \frac{x + 1}{x!}$.
2. Note that $\sum_{x = 0}^\infty g(x) = \sum_{x = 1}^\infty \frac{1}{(x - 1)!} + \sum_{x = 0}^\infty \frac{1}{x!} = 2 e$ So the chain is positive recurrent, with invariant PDF $f$ given by $f(x) = \frac{x + 1}{2 e \, x!}, \quad x \in \N$ Also, the chain is periodic with period 2.
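A quick numerical sanity check of part (b), in Python (not part of the text):

```python
import math

# Check that f sums to 1 and satisfies detailed balance
# f(x) p(x) = f(x + 1) q(x + 1) for p(x) = 1/(x + 1), q(x) = 1 - p(x).
p = lambda x: 1 / (x + 1)
q = lambda x: 1 - p(x)
f = lambda x: (x + 1) / (2 * math.e * math.factorial(x))

print(abs(sum(f(x) for x in range(60)) - 1) < 1e-12)    # True
print(all(abs(f(x) * p(x) - f(x + 1) * q(x + 1)) < 1e-12
          for x in range(30)))                           # True
```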
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
Suppose that $G = (S, E)$ is a graph with vertex set $S$ and edge set $E \subseteq S^2$. We assume that the graph is undirected (perhaps a better term would be bi-directed) in the sense that $(x, y) \in E$ if and only if $(y, x) \in E$. The vertex set $S$ is countable, but may be infinite. Let $N(x) = \{y \in S: (x, y) \in E\}$ denote the set of neighbors of a vertex $x \in S$, and let $d(x) = \#[N(x)]$ denote the degree of $x$. We assume that $N(x) \neq \emptyset$ for $x \in S$, so $G$ has no isolated points.
Suppose now that there is a conductance $c(x, y) \gt 0$ associated with each edge $(x, y) \in E$. The conductance is symmetric in the sense that $c(x, y) = c(y, x)$ for $(x, y) \in E$. We extend $c$ to a function on all of $S \times S$ by defining $c(x, y) = 0$ for $(x, y) \notin E$. Let $C(x) = \sum_{y \in S} c(x, y), \quad x \in S$ so that $C(x)$ is the total conductance of the edges coming from $x$. Our main assumption is that $C(x) \lt \infty$ for $x \in S$. As the terminology suggests, we imagine a fluid of some sort flowing through the edges of the graph, so that the conductance of an edge measures the capacity of the edge in some sense. One of the best interpretations is that the graph is an electrical network and the edges are resistors. In this interpretation, the conductance of a resistor is the reciprocal of the resistance.
In some applications, specifically the resistor network just mentioned, it's appropriate to impose the additional assumption that $G$ has no loops, so that $(x, x) \notin E$ for each $x \in S$. However, that assumption is not mathematically necessary for the Markov chains that we will consider in this section.
The discrete-time Markov chain $\bs{X} = (X_0, X_1, X_2, \ldots)$ with state space $S$ and transition probability matrix $P$ given by $P(x, y) = \frac{c(x, y)}{C(x)}, \quad (x, y) \in S^2$ is called a random walk on the graph $G$.
Justification
First, $P(x, y) \ge 0$ for $x, \, y \in S$. Next, by definition of $C$, $\sum_{y \in S} P(x, y) = \sum_{y \in S} \frac{c(x, y)}{C(x)} = \frac{C(x)}{C(x)} = 1, \quad x \in S$ so $P$ is a valid transition matrix on $S$. Also, $P(x, y) \gt 0$ if and only if $c(x, y) \gt 0$ if and only if $(x, y) \in E$, so the state graph of $\bs{X}$ is $G$, the graph we started with.
This chain governs a particle moving along the vertices of $G$. If the particle is at vertex $x \in S$ at a given time, then the particle will be at a neighbor of $x$ at the next time; the neighbor is chosen randomly, in proportion to the conductance. In the setting of an electrical network, it is natural to interpret the particle as an electron. Note that multiplying the conductance function $c$ by a positive constant has no effect on the associated random walk.
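The construction of $P$ from $c$ is straightforward to code. Here is a minimal Python sketch for a toy graph with arbitrary conductances (none of this is from the text):

```python
# Conductances on the (undirected) edges of a 4-cycle, arbitrary values
c = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0, (3, 0): 1.5}
c.update({(y, x): w for (x, y), w in c.items()})   # symmetry: c(x,y) = c(y,x)

vertices = sorted({x for x, _ in c})
C = {x: sum(w for (u, _), w in c.items() if u == x) for x in vertices}
P = {(x, y): w / C[x] for (x, y), w in c.items()}  # P(x,y) = c(x,y) / C(x)

for x in vertices:
    print(x, sum(P[x, y] for y in vertices if (x, y) in P))  # each row sums to 1
```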
Suppose that $d(x) \lt \infty$ for each $x \in S$ and that $c$ is constant on the edges. Then
1. $C(x) = c d(x)$ for every $x \in S$.
2. The transition matrix $P$ is given by $P(x, y) = \frac{1}{d(x)}$ for $x \in S$ and $y \in N(x)$, and $P(x, y) = 0$ otherwise.
The discrete-time Markov chain $\bs{X}$ is the symmetric random walk on $G$.
Proof
1. $C(x) = \sum_{y \in N(x)} c(x, y) = c \#[N(x)] = c d(x)$ for $x \in S$.
2. $P(x, y) = c(x, y) \big/ C(x) = c \big/ [c \, d(x)] = 1 \big/ d(x)$ for $x \in S$ and $y \in N(x)$
Thus, for the symmetric random walk, if the state is $x \in S$ at a given time, then the next state is equally likely to be any of the neighbors of $x$. The assumption that each vertex has finite degree means that the graph $G$ is locally finite.
Let $\bs{X}$ be a random walk on a graph $G$.
1. If $G$ is connected then $\bs{X}$ is irreducible.
2. If $G$ is not connected then the equivalence classes of $\bs{X}$ are the components of $G$ (the maximal connected subsets of $S$).
Proof
1. Recall that there is a path of length $n \in \N_+$ between distinct states $x, \, y \in S$ in the state graph of $\bs{X}$ if and only if $P^n(x, y) \gt 0$. If $G$ is connected, there is a path between each pair of distinct vertices and hence the chain $\bs{X}$ is irreducible.
2. This follows from (a).
So, as usual, we will assume that $G$ is connected, for otherwise we could simply restrict our attention to a component of $G$. In the case that $G$ has no loops (again, an important special case because of applications), it's easy to characterize the periodicity of the chain. For the theorem that follows, recall that $G$ is bipartite if the vertex set $S$ can be partitioned into nonempty, disjoint sets $A$ and $B$ (the parts) such that every edge in $E$ has one endpoint in $A$ and one endpoint in $B$.
Suppose that $\bs{X}$ is a random walk on a connected graph $G$ with no loops. Then $\bs{X}$ is either aperiodic or has period 2. Moreover, $\bs{X}$ has period 2 if and only if $G$ is bipartite, in which case the parts are the cyclic classes of $\bs{X}$.
Proof
First note that since $G$ is connected, the chain $\bs{X}$ is irreducible, and so all states have the same period. If $(x, y) \in E$ then $(y, x) \in E$ also, so returns to $x \in S$, starting at $x$, can always occur at even positive integers. If $G$ is bipartite, then returns to $x$ starting at $x$ can clearly only occur at even positive integers, so the period is 2. Conversely, if $G$ is not bipartite then $G$ has a cycle of odd length $k$. If $x$ is a vertex on the cycle, then returns to $x$, starting at $x$, can occur in 2 steps or in $k$ steps, so the period of $x$ is 1.
Positive Recurrence and Invariant Distributions
Suppose again that $\bs{X}$ is a random walk on a graph $G$, and assume that $G$ is connected so that $\bs{X}$ is irreducible.
The function $C$ is invariant for $P$. The random walk $\bs{X}$ is positive recurrent if and only if $K = \sum_{x \in S} C(x) = \sum_{(x, y) \in S^2} c(x, y) \lt \infty$ in which case the invariant probability density function $f$ is given by $f(x) = C(x) / K$ for $x \in S$.
Proof
For $y \in S$, $(C P)(y) = \sum_{x \in S} C(x) P(x, y) = \sum_{x \in N(y)} C(x) \frac{c(x, y)}{C(x)} = \sum_{x \in N(y)} c(x, y) = C(y)$ so $C$ is invariant for $P$. The other results follow from the general theory.
Note that $K$ is the total conductance over all edges in $G$. In particular, of course, if $S$ is finite then $\bs{X}$ is positive recurrent, with $f$ as the invariant probability density function. For the symmetric random walk, this is the only way that positive recurrence can occur:
The symmetric random walk on $G$ is positive recurrent if and only if the set of vertices $S$ is finite, in which case the invariant probability density function $f$ is given by $f(x) = \frac{d(x)}{2 m}, \quad x \in S$ where $d$ is the degree function and where $m$ is the number of undirected edges.
Proof
If we take the conductance function to be the constant 1 on the edges, then $C(x) = d(x)$ and $K = 2 m$.
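For a concrete check, the following Python sketch verifies $f(x) = d(x) / (2m)$ on a small arbitrary graph (a triangle with one pendant edge):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # triangle 0-1-2 plus pendant edge 2-3
nbrs = {x: set() for x in range(4)}
for x, y in edges:
    nbrs[x].add(y); nbrs[y].add(x)

P = np.zeros((4, 4))
for x in range(4):
    for y in nbrs[x]:
        P[x, y] = 1 / len(nbrs[x])         # symmetric random walk

f = np.array([len(nbrs[x]) for x in range(4)]) / (2 * len(edges))
print(np.allclose(f @ P, f), f.sum() == 1.0)  # invariant and normalized
```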
On the other hand, when $S$ is infinite, the classification of $\bs{X}$ as recurrent or transient is complicated. We will consider an interesting special case below, the symmetric random walk on $\Z^k$.
Reversibility
Essentially, all reversible Markov chains can be interpreted as random walks on graphs. This fact is one of the reasons for studying such walks.
If $\bs{X}$ is a random walk on a connected graph $G$, then $\bs{X}$ is reversible with respect to $C$.
Proof
Since the graph is connected, $\bs{X}$ is irreducible. The crucial observation is that $C(x) P(x, y) = C(y) P(y, x), \quad (x, y) \in S^2$ If $(x, y) \in E$ the left side is $c(x, y)$ and the right side is $c(y, x)$. If $(x, y) \notin E$, both sides are 0. It then follows from the general theory that $C$ is invariant for $\bs{X}$ and that $\bs{X}$ is reversible with respect to $C$.
Of course, if $\bs{X}$ is recurrent, then $C$ is the only positive invariant function, up to multiplication by positive constants, and so $\bs{X}$ is simply reversible.
Conversely, suppose that $\bs{X}$ is an irreducible Markov chain on $S$ with transition matrix $P$ and positive invariant function $g$. If $\bs{X}$ is reversible with respect to $g$ then $\bs{X}$ is the random walk on the state graph with conductance function $c$ given by $c(x, y) = g(x) P(x, y)$ for $(x, y) \in S^2$.
Proof
Since $\bs{X}$ is reversible with respect to $g$, $g$ and $P$ satisfy $g(x) P(x, y) = g(y) P(y, x)$ for every $(x, y) \in S^2$. Note that the state graph $G$ of $\bs{X}$ is bi-directed since $P(x, y) \gt 0$ if and only if $P(y, x) \gt 0$, and that the function $c$ given in the theorem is symmetric, so that $c(x, y) = c(y, x)$ for all $(x, y) \in S^2$. Finally, note that $C(x) = \sum_{y \in S} c(x, y) = \sum_{y \in S} g(x) P(x, y) = g(x), \quad x \in S$ so that $P(x, y) = c(x, y) \big/ C(x)$ for $(x, y) \in S^2$, as required.
Again, in the important special case that $\bs{X}$ is recurrent, there exists a positive invariant function $g$ that is unique up to multiplication by positive constants. In this case the theorem states that an irreducible, recurrent, reversible chain is a random walk on the state graph.
Examples and Applications
The Wheatstone Bridge Graph
The graph below is called the Wheatstone bridge in honor of Charles Wheatstone.
In this subsection, let $\bs{X}$ be the random walk on the Wheatstone bridge above, with the given conductance values.
For the random walk $\bs{X}$,
1. Explicitly give the transition probability matrix $P$.
2. Given $X_0 = a$, find the probability density function of $X_2$.
Answer
For the matrix and vector below, we use the ordered state space $S = (a, b, c, d)$.
1. $P = \left[ \begin{matrix} 0 & \frac{1}{2} & 0 & \frac{1}{2} \ \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{2} \ 0 & \frac{1}{3} & 0 & \frac{2}{3} \ \frac{1}{5} & \frac{2}{5} & \frac{2}{5} & 0 \end{matrix} \right]$
2. $f_2 = \left( \frac{9}{40}, \frac{1}{5}, \frac{13}{40}, \frac{1}{4} \right)$
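The computation is easy to reproduce in Python. The conductance values come from the figure, which is not reproduced here; the choice $c(a, b) = c(a, d) = c(b, c) = 1$ and $c(b, d) = c(c, d) = 2$ is consistent with the matrix in part (a):

```python
import numpy as np

# Conductance matrix for the Wheatstone bridge, states ordered (a, b, c, d)
c = np.array([[0, 1, 0, 1],
              [1, 0, 1, 2],
              [0, 1, 0, 2],
              [1, 2, 2, 0]], dtype=float)
P = c / c.sum(axis=1, keepdims=True)   # P(x,y) = c(x,y) / C(x)
print(P)                               # the matrix in part (a)

f0 = np.array([1.0, 0, 0, 0])          # start in state a
print(f0 @ P @ P)                      # (9/40, 1/5, 13/40, 1/4)
```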
For the random walk $\bs{X}$,
1. Show that $\bs{X}$ is aperiodic.
2. Find the invariant probability density function.
3. Find the mean return time to each state.
4. Find $\lim_{n \to \infty} P^n$.
Answer
For the matrix and vectors below, we use the ordered state space $(a, b, c, d)$.
1. The chain is aperiodic since the graph is not bipartite. (Note that the graph has triangles.)
2. $f = \left(\frac{1}{7}, \frac{2}{7}, \frac{3}{14}, \frac{5}{14} \right)$
3. $\mu = \left(7, \frac{7}{2}, \frac{14}{3}, \frac{14}{5} \right)$
4. $P^n \to \left[ \begin{matrix} \frac{1}{7} & \frac{2}{7} & \frac{3}{14} & \frac{5}{14} \ \frac{1}{7} & \frac{2}{7} & \frac{3}{14} & \frac{5}{14} \ \frac{1}{7} & \frac{2}{7} & \frac{3}{14} & \frac{5}{14} \ \frac{1}{7} & \frac{2}{7} & \frac{3}{14} & \frac{5}{14} \ \end{matrix} \right]$ as $n \to \infty$
The Cube Graph
The graph below is the 3-dimensional cube graph. The vertices are bit strings of length 3, and two vertices are connected by an edge if and only if the bit strings differ by a single bit.
In this subsection, let $\bs{X}$ denote the random walk on the cube graph above, with the given conductance values.
For the random walk $\bs{X}$,
1. Explicitly give the transition probability matrix $P$.
2. Suppose that the initial distribution is the uniform distribution on $\{000, 001, 101, 100\}$. Find the probability density function of $X_2$.
Answer
For the matrix and vector below, we use the ordered state space $S = (000, 001, 101, 100, 010, 011, 111, 110)$.
1. $P = \left[ \begin{matrix} 0 & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{2} & 0 & 0 & 0 \ \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{2} & 0 & 0 \ 0 & \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{2} & 0 \ \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 & 0 & 0 & \frac{1}{2} \ \frac{1}{4} & 0 & 0 & 0 & 0 & \frac{3}{8} & 0 & \frac{3}{8} \ 0 & \frac{1}{4} & 0 & 0 & \frac{3}{8} & 0 & \frac{3}{8} & 0 \ 0 & 0 & \frac{1}{4} & 0 & 0 & \frac{3}{8} & 0 & \frac{3}{8} \ 0 & 0 & 0 & \frac{1}{4} & \frac{3}{8} & 0 & \frac{3}{8} & 0 \end{matrix} \right]$
2. $f_2 = \left(\frac{3}{32}, \frac{3}{32}, \frac{3}{32}, \frac{3}{32}, \frac{5}{32}, \frac{5}{32}, \frac{5}{32}, \frac{5}{32}\right)$
For the random walk $\bs{X}$,
1. Show that the chain has period 2 and find the cyclic classes.
2. Find the invariant probability density function.
3. Find the mean return time to each state.
4. Find $\lim_{n \to \infty} P^{2 n}$.
5. Find $\lim_{n \to \infty} P^{2 n + 1}$.
Answer
For the matrix and vector below, we use the ordered state space $S = (000, 001, 101, 100, 010, 011, 111, 110)$.
1. The chain has period 2 since the graph is bipartite. The cyclic classes are $\{000, 011, 110, 101\}$ (bit strings with an even number of 1's) and $\{010, 001, 100, 111\}$ (bit strings with an odd number of 1's).
2. $f = \left(\frac{1}{12}, \frac{1}{12}, \frac{1}{12}, \frac{1}{12}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}\right)$
3. $\mu = (12, 12, 12, 12, 6, 6, 6, 6)$
4. $P^{2 n} \to \left[ \begin{matrix} \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \end{matrix} \right]$ as $n \to \infty$
5. $P^{2 n + 1} \to \left[ \begin{matrix} 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \ \frac{1}{6} & 0 & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} \ 0 & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \end{matrix} \right]$ as $n \to \infty$
Special Models
Recall that the basic Ehrenfest chain with $m \in \N_+$ balls is reversible. Interpreting the chain as a random walk on a graph, sketch the graph and find a conductance function.
Answer
The state graph $G$ of the basic Ehrenfest chain with $m$ balls is the path from 0 to $m$ with no loops. A conductance function $c$ is $c(x, x + 1) = \binom{m - 1}{x}$ for $x \in \{0, 1, \ldots, m - 1\}$.
Recall that the modified Ehrenfest chain with $m \in \N_+$ balls is reversible. Interpreting the chain as a random walk on a graph, sketch the graph and find a conductance function.
Answer
The state graph $G$ of the modified Ehrenfest chain with $m$ balls is the path from 0 to $m$ with loops. A conductance function $c$ is $c(x, x + 1) = \frac{1}{2}\binom{m - 1}{x}$ for $x \in \{0, 1, \ldots, m - 1\}$ and $c(x, x) = \frac{1}{2} \binom{m}{x}$ for $x \in \{0, 1, \ldots, m\}$.
Recall that the Bernoulli-Laplace chain with $j \in \N_+$ balls in urn 0, $k \in \N_+$ balls in urn 1, and with $r \in \{0, \ldots, j + k\}$ of the balls red, is reversible. Interpreting the chain as a random walk on a graph, sketch the graph and find a conductance function. Simplify the conductance function in the special case that $j = k = r$.
Answer
The state graph $G$ of the Bernoulli-Lapace chain with $j$ balls in urn 0, $k$ balls in urn 1, and with $r$ of the balls red, is the path from $\max\{0, r - j\}$ to $\min\{k, r\}$ with loops. A conductance function $c$ is given by \begin{align*} c(x, x + 1) & = \binom{r}{x} \binom{j + k - r}{k - x} (r - x)(k - x), \quad x \in \{\max\{0, r - j\}, \ldots, \min\{k, r\} - 1\} \ c(x, x) & = \binom{r}{x} \binom{j + k - r}{k - x} [(r - x) x + (j - r + x)(k - x)], \quad x \in \{\max\{0, r - j\}, \ldots, \min\{k, r\}\} \end{align*} In the special case that $j = k = r$, a conductance function is \begin{align*} c(x, x + 1) & = \binom{k}{x} \binom{k}{k - x} (k - x)^2, \quad x \in \{0, \ldots, k - 1\} \ c(x, x) & = \binom{k}{x} \binom{k}{k - x} 2 x (k - x), \quad x \in \{0, \ldots, k\} \end{align*}
Random Walks on $\Z$
Random walks on integer lattices are particularly interesting because of their classification as transient or recurrent. We consider the one-dimensional case in this subsection, and the higher dimensional case in the next subsection.
Let $\bs{X} = (X_0, X_1, X_2, \ldots)$ be the discrete-time Markov chain with state space $\Z$ and transition probability matrix $P$ given by $P(x, x + 1) = p, \; P(x, x - 1) = 1 - p, \quad x \in \Z$ where $p \in (0, 1)$. The chain $\bs{X}$ is called the simple random walk on $\Z$ with parameter $p$.
The term simple is used because the transition probabilities starting in state $x \in \Z$ do not depend on $x$. Thus the chain is spatially as well as temporally homogeneous. In the special case $p = \frac{1}{2}$, the chain $\bs{X}$ is the simple symmetric random walk on $\Z$. Basic properties of the simple random walk on $\Z$, and in particular the simple symmetric random walk, were studied in the chapter on Bernoulli Trials. Of course, the state graph $G$ of $\bs{X}$ has vertex set $\Z$, and the neighbors of $x \in \Z$ are $x + 1$ and $x - 1$. It's not immediately clear that $\bs{X}$ is a random walk on $G$ associated with a conductance function, which, after all, is the topic of this section. But that fact and more follow from the next result.
Let $g$ be the function on $\Z$ defined by $g(x) = \left(\frac{p}{1 - p}\right)^x, \quad x \in \Z$ Then
1. $g(x) P(x, y) = g(y) P(y, x)$ for all $(x, y) \in \Z^2$
2. $g$ is invariant for $\bs{X}$
3. $\bs{X}$ is reversible with respect to $g$
4. $\bs{X}$ is the random walk on $\Z$ with conductance function $c$ given by $c(x, x + 1) = p^{x+1} \big/ (1 - p)^x$ for $x \in \Z$.
Proof
1. For $x \in \Z$, we only need to consider $y = x \pm 1$. \begin{align*} g(x) P(x, x - 1) & = \frac{p^x}{(1 - p)^{x-1}} = g(x - 1) P(x - 1, x)\ g(x) P(x, x + 1) & = \frac{p^{x+1}}{(1 - p)^x} = g(x + 1) P(x + 1, x) \end{align*}
2. This follows from (a) and the general theory.
3. This follows from (a) and (b) and the general theory.
4. From the result above, $\bs{X}$ is the random walk on $G$ associated with the conductance function $c$ given by $c(x, y) = g(x) P(x, y)$. By symmetry, it suffices to consider the edge $(x, x + 1)$, and in this case, $c$ is given in the second displayed equation above.
In particular, the simple symmetric random walk is the symmetric random walk on $G$.
The chain $\bs{X}$ is irreducible and periodic with period 2. Moreover $P^{2 n}(0, 0) = \binom{2 n}{n} p^n (1 - p)^n, \quad n \in \N$
Proof
The chain is irreducible since $G$ is connected. The chain is periodic since $G$ has no loops and is bipartite, with the parts being the odd and even integers. Finally, note that starting in state 0, the chain returns to 0 at time $2 n$ if and only if there are $n$ steps to the right and $n$ steps to the left.
Classification of the simple random walk on $\Z$.
1. If $p \ne \frac{1}{2}$ then $\bs{X}$ is transient.
2. If $p = \frac{1}{2}$ then $\bs{X}$ is null recurrent.
Proof
From the previous result and Stirling's approximation, $P^{2 n}(0, 0) \approx \frac{[4 p (1 - p)]^n}{\sqrt{\pi \, n}} \text{ as } n \to \infty$ Let $R(x, y) = \sum_{n=0}^\infty P^n(x, y)$ for $(x, y) \in \Z^2$, so that $R$ is the potential matrix. Recall that $R(x, y)$ is the expected number of visits to $y$ starting in $x$ for $(x, y) \in \Z^2$. If $p \ne \frac{1}{2}$ then $R(0, 0) \lt \infty$ and hence $\bs{X}$ is transient. If $p = \frac{1}{2}$ then $R(0, 0) = \infty$ and hence $\bs{X}$ is recurrent. In this case $\bs{X}$ must be null recurrent from our general results above, since the vertex set is infinite.
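The quality of the Stirling approximation is easy to see numerically. A Python sketch for the symmetric case $p = \frac{1}{2}$ (logarithms avoid overflow in the binomial coefficient):

```python
from math import pi, sqrt, lgamma, exp, log

p = 0.5
for n in (10, 100, 1000):
    log_exact = lgamma(2 * n + 1) - 2 * lgamma(n + 1) + n * log(p) + n * log(1 - p)
    approx = (4 * p * (1 - p)) ** n / sqrt(pi * n)
    print(n, exp(log_exact), approx)   # the two agree more closely as n grows
```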
So for the one-dimensional lattice $\Z$, the random walk $\bs{X}$ is transient in the non-symmetric case, and null recurrent in the symmetric case. Let's return to the invariant functions of $\bs{X}$.
Consider again the random walk $\bs{X}$ on $\Z$ with parameter $p \in (0, 1)$. The constant function $\bs{1}$ on $\Z$ and the function $g$ given by $g(x) = \left(\frac{p}{1 - p}\right)^x, \quad x \in \Z$ are invariant for $\bs{X}$. All other invariant functions are linear combinations of these two functions.
Proof
The condition for $h$ to be invariant, $h P = h$, leads to the following linear, second order difference equation: $(1 - p) h(y + 1) - h(y) + p \, h(y - 1) = 0, \quad y \in \Z$ The characteristic equation is $(1 - p) r^2 - r + p = 0$ which has roots $r = 1$ and $r = p \big/ (1 - p)$. The solutions corresponding to the roots are $\bs{1}$ and $g$, respectively. Hence the result follows from the general theory of linear difference equations.
Note that when $p = \frac{1}{2}$, the constant function $\bs{1}$ is the only positive invariant function, up to multiplication by positive constants. But we know this has to be the case since the chain is recurrent when $p = \frac{1}{2}$. Moreover, the chain is reversible. In the non-symmetric case, when $p \ne \frac{1}{2}$, we have an example of a transient chain which nonetheless has non-trivial invariant functions, in fact a two-dimensional space of such functions. Also, $\bs{X}$ is reversible with respect to $g$, as shown above, but the reversal of $\bs{X}$ with respect to $\bs{1}$ is the chain with transition matrix $Q$ given by $Q(x, y) = P(y, x)$ for $(x, y) \in \Z^2$. This chain is just the simple random walk on $\Z$ with parameter $1 - p$. So the non-symmetric simple random walk is an example of a transient chain that is reversible with respect to one invariant measure but not with respect to another invariant measure.
Random walks on $\Z^k$
More generally, we now consider $\Z^k$, where $k \in \N_+$. For $i \in \{1, 2, \ldots, k\}$, let $\bs{u}_i \in \Z^k$ denote the unit vector with 1 in position $i$ and 0 elsewhere. The $k$-dimensional integer lattice $G$ has vertex set $\Z^k$, and the neighbors of $\bs{x} \in \Z^k$ are $\bs{x} \pm \bs{u}_i$ for $i \in \{1, 2, \ldots, k\}$. So in particular, each vertex has $2 k$ neighbors.
Let $\mathscr{X} = (\bs{X}_0, \bs{X}_1, \bs{X}_2, \ldots)$ be the Markov chain on $\Z^k$ with transition probability matrix $P$ given by $P(\bs{x},\bs{x} + \bs{u}_i) = p_i, \; P(\bs{x}, \bs{x} - \bs{u}_i) = q_i; \quad \bs{x} \in \Z^k, \; i \in \{1, 2, \ldots, k\}$ where $p_i \gt 0$, $q_i \gt 0$ for $i \in \{1, 2, \ldots, k\}$ and $\sum_{i=1}^k (p_i + q_i) = 1$. The chain $\mathscr{X}$ is the simple random walk on $\Z^k$ with parameters $\bs{p} = (p_1, p_2, \ldots, p_k)$ and $\bs{q} = (q_1, q_2, \ldots, q_k)$.
Again, the term simple means that the transition probabilities starting at $\bs{x} \in \Z^k$ do not depend on $\bs{x}$, so that the chain is spatially homogeneous as well as temporally homogeneous. In the special case that $p_i = q_i = \frac{1}{2 k}$ for $i \in \{1, 2, \ldots, k\}$, $\mathscr{X}$ is the simple symmetric random walk on $\Z^k$. The following theorem is the natural generalization of the result above for the one-dimensional case.
Define the function $g: \Z^k \to (0, \infty)$ by $g(x_1, x_2, \ldots, x_k) = \prod_{i=1}^k \left(\frac{p_i}{q_i}\right)^{x_i}, \quad (x_1, x_2, \ldots, x_k) \in \Z^k$ Then
1. $g(\bs{x}) P(\bs{x}, \bs{y}) = g(\bs{y}) P(\bs{y}, \bs{x})$ for all $\bs{x}, \, \bs{y} \in \Z^k$
2. $g$ is invariant for $\mathscr{X}$.
3. $\mathscr{X}$ is reversible with respect to $g$.
4. $\mathscr{X}$ is the random walk on $G$ with conductance function $c$ given by $c(\bs{x}, \bs{y}) = g(\bs{x}) P(\bs{x}, \bs{y})$ for $\bs{x}, \, \bs{y} \in \Z^k$.
Proof
1. For $\bs{x} = (x_1, x_2, \ldots, x_k) \in \Z^k$, the only cases of interest are $\bs{y} = \bs{x} \pm \bs{u}_i$ for $i \in \{1, 2, \ldots, k\}$, since in all other cases, the left and right sides are 0. But \begin{align*} g(\bs{x}) P(\bs{x}, \bs{x} + \bs{u}_i) & = \prod_{j \ne i} \left(\frac{p_j}{q_j}\right)^{x_j} \cdot \frac{p_i^{x_i + 1} }{q_i^{x_i}} = g(\bs{x} + \bs{u}_i) P(\bs{x} + \bs{u}_i, \bs{x}) \ g(\bs{x}) P(\bs{x}, \bs{x} - \bs{u}_i) & = \prod_{j \ne i} \left(\frac{p_j}{q_j}\right)^{x_j} \cdot \frac{p_i^{x_i} }{q_i^{x_i - 1}} = g(\bs{x} - \bs{u}_i) P(\bs{x} - \bs{u}_i, \bs{x}) \end{align*}
2. This follows from (a).
3. This follows from (a) and (b).
4. This also follows from the general result above.
In terms of recurrence and transience, it would certainly seem that the larger the dimension $k$, the less likely the chain is to be recurrent. That's generally true:
Classification of the simple random walk on $\Z^k$.
1. For $k \in \{1, 2\}$, $\mathscr{X}$ is null recurrent in the symmetric case and transient for all other values of the parameters.
2. For $k \in \{3, 4, \ldots\}$, $\mathscr{X}$ is transient for all values of the parameters.
Proof sketch
For certain of the non-symmetric cases, we can use the result for dimension 1. Suppose $i \in \{1, 2, \ldots, k\}$ with $p_i \ne q_i$. If we consider the times when coordinate $i$ of the random walk $\mathscr{X}$ changes, we have an embedded one-dimensional random walk with parameter $p = p_i \big/ (p_i + q_i)$ (the probability of a step in the positive direction). Since $p \ne \frac{1}{2}$, this embedded random walk is transient and so will fail to return to 0, starting at 0, with positive probability. But if this embedded random walk fails to return to 0, starting at 0, then the parent random walk $\mathscr{X}$ fails to return to $\bs{0}$ starting at $\bs{0}$. Hence $\mathscr{X}$ is transient.
For the symmetric case, the general proof is similar to the proof for dimension 1, but the details are considerably more complex. A return to $\bs{0}$ can occur only at even times and $P^{2 n}(\bs{0}, \bs{0}) \approx \frac{C_k}{n^{k/2}} \text{ as } n \to \infty \text{ where } C_k = \frac{k^{k/2}}{\pi^{k/2} 2^{k-1}}$ Thus for the potential matrix $R$ we have $R(\bs{0}, \bs{0}) = \infty$ and the chain is recurrent if $k \in \{1, 2\}$ while $R(\bs{0}, \bs{0}) \lt \infty$ and the chain is transient if $k \in \{3, 4, \ldots\}$.
So for the simple, symmetric random walk on the integer lattice $\Z^k$, we have the following interesting dimensional phase shift: the chain is null recurrent in dimensions 1 and 2 and transient in dimensions 3 or more.
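The phase shift can be glimpsed by simulation. The crude Python sketch below estimates the chance that the symmetric walk returns to the origin within a fixed number of steps; the true return probability is 1 in dimensions 1 and 2, but only about 0.34 in dimension 3 (Pólya's constant):

```python
import random

random.seed(3)

def returns(k, steps=5_000):
    """Does the simple symmetric walk on Z^k return to 0 within the given steps?"""
    x = [0] * k
    for _ in range(steps):
        i = random.randrange(k)            # choose a coordinate uniformly
        x[i] += random.choice((-1, 1))     # step it up or down
        if all(v == 0 for v in x):
            return True
    return False

for k in (1, 2, 3):
    hits = sum(returns(k) for _ in range(400))
    print(k, hits / 400)   # estimates decrease with k; about 1/3 when k = 3
```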
Let's return to the positive invariant functions for $\mathscr{X}$. Again, the results generalize those for the one-dimensional case.
For $J \subseteq \{1, 2, \ldots, k\}$, define $g_J$ on $\Z^k$ by $g_J(x_1, x_2, \ldots, x_k) = \prod_{j \in J} \left(\frac{p_j}{q_j}\right)^{x_j}, \quad (x_1, x_2, \ldots, x_k) \in \Z^k$ Let $\mathscr{X}_J$ denote the simple random walk on $\Z^k$ with transition matrix $P_J$, corresponding to the parameter vectors $\bs{p}^J$ and $\bs{q}^J$, where $p^J_j = p_j$, $q^J_j = q_j$ for $j \in J$, and $p^J_j = q_j$, $q^J_j = p_j$ for $j \notin J$. Then
1. $g_J(\bs{x}) P(\bs{x}, \bs{y}) = g_J(\bs{y}) P_J(\bs{y}, \bs{x})$ for all $\bs{x}, \, \bs{y} \in \Z^k$
2. $g_J$ is invariant for $\mathscr{X}$.
3. $\mathscr{X}_J$ is the reversal of $\mathscr{X}$ with respect to $g_J$.
Proof
Part (a) follows from simple substitution. Parts (b) and (c) follow from (a) and the general theory.
Note that when $J = \emptyset$, $g_J = \bs{1}$ and when $J = \{1, 2, \ldots, k\}$, $g_J = g$, the invariant function introduced above. So in the completely non-symmetric case where $p_i \ne q_i$ for every $i \in \{1, 2, \ldots, k\}$, the random walk $\mathscr{X}$ has $2^k$ positive invariant functions that are linearly independent, and $\mathscr{X}$ is reversible with respect to one of them.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
This section begins our study of Markov processes in continuous time and with discrete state spaces. Recall that a Markov process with a discrete state space is called a Markov chain, so we are studying continuous-time Markov chains. It will be helpful if you review the section on general Markov processes, at least briefly, to become familiar with the basic notation and concepts. Also, discrete-time chains play a fundamental role, so you will need to review that topic as well.
We will study continuous-time Markov chains from several points of view. The approach in this section, involving the holding times and the embedded discrete-time jump chain, is the most intuitive probabilistically, and so is the best place to start. In the next section, we study the transition probability matrices in continuous time. This point of view is somewhat less intuitive, but is closest to how other types of Markov processes are treated. Finally, in the third introductory section we study the Markov chain from the viewpoint of potential matrices. This is the least intuitive approach, but analytically one of the best. Naturally, the interconnections between the various approaches are particularly important.
Preliminaries
As usual, we start with a probability space $(\Omega, \mathscr{F}, \P)$, so that $\Omega$ is the set of outcomes, $\mathscr{F}$ the $\sigma$-algebra of events, and $\P$ the probability measure on the sample space $(\Omega, \mathscr{F})$. The time space is $([0, \infty), \mathscr{T})$ where as usual, $\mathscr{T}$ is the Borel $\sigma$-algebra on $[0, \infty)$ corresponding to the standard Euclidean topology. The state space is $(S, \mathscr{S})$ where $S$ is countable and $\mathscr{S}$ is the power set of $S$. So every subset of $S$ is measurable, as is every function from $S$ to another measurable space. Recall that $\mathscr{S}$ is also the Borel $\sigma$-algebra corresponding to the discrete topology on $S$. With this topology, every function from $S$ to another topological space is continuous. Counting measure $\#$ is the natural measure on $(S, \mathscr{S})$, so in the context of the general introduction, integrals over $S$ are simply sums. Also, kernels on $S$ can be thought of as matrices, with rows and columns indexed by $S$. The left and right kernel operations are generalizations of matrix multiplication.
Suppose now that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a stochastic process with state space $(S, \mathscr{S})$. For $t \in [0, \infty)$, let $\mathscr{F}^0_t = \sigma\{X_s: s \in [0, t]\}$, so that $\mathscr{F}^0_t$ is the $\sigma$-algebra of events defined by the process up to time $t$. The collection of $\sigma$-algebras $\mathfrak{F}^0 = \{\mathscr{F}^0_t: t \in [0, \infty)\}$ is the natural filtration associated with $\bs{X}$. For technical reasons, it's often necessary to have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ that is slightly finer than the natural one, so that $\mathscr{F}^0_t \subseteq \mathscr{F}_t$ for $t \in [0, \infty)$ (or in equivalent jargon, $\bs{X}$ is adapted to $\mathfrak{F}$). See the general introduction for more details on the common ways that the natural filtration is refined. We will also let $\mathscr{G}_t = \sigma\{X_s: s \in [t, \infty)\}$, the $\sigma$-algebra of events defined by the process from time $t$ onward. If $t$ is thought of as the present time, then $\mathscr{F}_t$ is the collection of events in the past and $\mathscr{G}_t$ is the collection of events in the future.
It's often necessary to impose assumptions on the continuity of the process $\bs{X}$ in time. Recall that $\bs{X}$ is right continuous if $t \mapsto X_t(\omega)$ is right continuous on $[0, \infty)$ for every $\omega \in \Omega$, and similarly $\bs{X}$ has left limits if $t \mapsto X_t(\omega)$ has left limits on $(0, \infty)$ for every $\omega \in \Omega$. Since $S$ has the discrete topology, note that if $\bs{X}$ is right continuous, then for every $t \in [0, \infty)$ and $\omega \in \Omega$, there exists $\epsilon$ (depending on $t$ and $\omega$) such that $X_{t+s}(\omega) = X_t(\omega)$ for $s \in [0, \epsilon)$. Similarly, if $\bs{X}$ has left limits, then for every $t \in (0, \infty)$ and $\omega \in \Omega$ there exists $\delta$ (depending on $t$ and $\omega$) such that $X_{t - s}(\omega)$ is constant for $s \in (0, \delta)$.
The Markov Property
There are a number of equivalent ways to state the Markov property. At the most basic level, the property states that the past and future are conditionally independent, given the present.
The process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ if for every $t \in [0, \infty)$, $A \in \mathscr{F}_t$, and $B \in \mathscr{G}_t$, $\P(A \cap B \mid X_t) = \P(A \mid X_t) \P(B \mid X_t)$
Another version is that the conditional distribution of a state in the future, given the past, is the same as the conditional distribution just given the present state.
The process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ if for every $s, \, t \in [0, \infty)$, and $x \in S$, $\P(X_{s + t} = x \mid \mathscr{F}_s) = \P(X_{s + t} = x \mid X_s)$
Technically, in the last two definitions, we should say that $\bs{X}$ is a Markov process relative to the filtration $\mathfrak{F}$. But recall that if $\bs{X}$ satisfies the Markov property relative to a filtration, then it satisfies the Markov property relative to any coarser filtration, and in particular, relative to the natural filtration. For the natural filtration, the Markov property can also be stated without explicit reference to $\sigma$-algebras, although at the cost of additional clutter:
The process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ if and only if for every $n \in \N_+$, time sequence $(t_1, t_2, \ldots, t_n) \in [0, \infty)^n$ with $t_1 \lt t_2 \lt \cdots \lt t_n$, and state sequence $(x_1, x_2, \ldots, x_n) \in S^n$, $\P\left(X_{t_n} = x_n \mid X_{t_1} = x_1, X_{t_2} = x_2, \ldots X_{t_{n-1}} = x_{n-1}\right) = \P\left(X_{t_n} = x_n \mid X_{t_{n-1}} = x_{n-1}\right)$
As usual, we also assume that our Markov chain $\bs{X}$ is time homogeneous, so that $\P(X_{s + t} = y \mid X_s = x) = \P(X_t = y \mid X_0 = x)$ for $s, \, t \in [0, \infty)$ and $x, \, y \in S$. So, for a homogeneous Markov chain on $S$, the process $\{X_{s+t}: t \in [0, \infty)\}$ given $X_s = x$, is independent of $\mathscr{F}_s$ and equivalent to the process $\{X_t: t \in [0, \infty)\}$ given $X_0 = x$, for every $s \in [0, \infty)$ and $x \in S$. That is, if the chain is in state $x \in S$ at a particular time $s \in [0, \infty)$, it does not matter how the chain got to $x$; the chain essentially starts over in state $x$.
The Strong Markov Property
Random times play an important role in the study of continuous-time Markov chains. It's often necessary to allow random times to take the value $\infty$, so formally, a random time $\tau$ is a random variable on the underlying sample space $(\Omega, \mathscr{F})$ taking values in $[0, \infty]$. Recall also that a random time $\tau$ is a stopping time (also called a Markov time or an optional time) if $\{\tau \le t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$. If $\tau$ is a stopping time, the $\sigma$-algebra associated with $\tau$ is $\mathscr{F}_\tau = \{A \in \mathscr{F}: A \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \in [0, \infty)\}$ So $\mathscr{F}_\tau$ is the collection of events up to the random time $\tau$ in the same way that $\mathscr{F}_t$ is the collection of events up to the deterministic time $t \in [0, \infty)$. We usually want the Markov property to extend from deterministic times to stopping times.
The process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a strong Markov chain on $S$ if for every stopping time $\tau$, $t \in [0, \infty)$, and $x \in S$, $\P(X_{\tau + t} = x \mid \mathscr{F}_\tau) = \P(X_{\tau + t} = x \mid X_\tau)$
So, for a homogeneous strong Markov chain on $S$, the process $\{X_{\tau + t}: t \in [0, \infty)\}$ given $X_\tau = x$, is independent of $\mathscr{F}_\tau$ and equivalent to the process $\{X_t: t \in [0, \infty)\}$ given $X_0 = x$, for every stopping time $\tau$ and $x \in S$. That is, if the chain is in state $x \in S$ at a stopping time $\tau$, then the chain essentially starts over at $x$, independently of the past.
Holding Times and the Jump Chain
For our first point of view, we will study when and how our Markov chain $\bs{X}$ changes state. The discussion depends heavily on properties of the exponential distribution, so we need a quick review.
The Exponential Distribution
A random variable $\tau$ has the exponential distribution with rate parameter $r \in (0, \infty)$ if $\tau$ has a continuous distribution on $[0, \infty)$ with probability density function $f$ given by $f(t) = r e^{-r t}$ for $t \in [0, \infty)$. Equivalently, the right distribution function $F^c$ is given by $F^c(t) = \P(\tau \gt t) = e^{-r t}, \quad t \in [0, \infty)$ The mean of the distribution is $1 / r$ and the variance is $1 / r^2$. The exponential distribution has an amazing number of characterizations. One of the most important is the memoryless property, which states that a random variable $\tau$ with values in $[0, \infty)$ has an exponential distribution if and only if the conditional distribution of $\tau - s$ given $\tau \gt s$ is the same as the distribution of $\tau$ itself, for every $s \in [0, \infty)$. It's easy to see that the memoryless property is equivalent to the law of exponents for the right distribution function $F^c$, namely $F^c(s + t) = F^c(s) F^c(t)$ for $s, \, t \in [0, \infty)$. Since $F^c$ is right continuous, the only solutions are exponential functions.
For our study of continuous-time Markov chains, it's helpful to extend the exponential distribution to two degenerate cases, $\tau = 0$ with probability 1, and $\tau = \infty$ with probability 1. In terms of the parameter, the first case corresponds to $r = \infty$ so that $F^c(t) = \P(\tau \gt t) = 0$ for every $t \in [0, \infty)$, and the second case corresponds to $r = 0$ so that $F^c(t) = \P(\tau \gt t) = 1$ for every $t \in [0, \infty)$. Note that in both cases, the function $F^c$ satisfies the law of exponents, and so corresponds to a memoryless distribution in a general sense. In all cases, the mean of the exponential distribution with parameter $r \in [0, \infty]$ is $1 / r$, where we interpret $1/0 = \infty$ and $1/\infty = 0$.
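The memoryless property is easy to see by simulation. A Python sketch with rate $r = 2$ and $s = 0.5$:

```python
import random

random.seed(1)
r, s = 2.0, 0.5
sample = [random.expovariate(r) for _ in range(200_000)]
tail = [t - s for t in sample if t > s]          # tau - s given tau > s
print(sum(sample) / len(sample))                 # about 1 / r = 0.5
print(sum(tail) / len(tail))                     # also about 0.5: memoryless
```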
Holding Times
The Markov property implies the memoryless property for the random time when a Markov process first leaves its initial state. It follows that this random time must have an exponential distribution.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$, and let $\tau = \inf\{t \in [0, \infty): X_t \ne X_0\}$. For $x \in S$, the conditional distribution of $\tau$ given $X_0 = x$ is exponential with parameter $\lambda(x) \in [0, \infty]$.
Proof
Let $x \in S$ and $s \in [0, \infty)$. The events $X_0 = x$ and $\tau \gt s$ imply $X_s = x$. By the Markov property, given $X_s = x$, the chain starts over at time $s$ in state $x$, independent of $\{X_0 = x\}$ and $\{\tau \gt s\}$, since both events are in $\mathscr{F}_s$. Hence for $t \in [0, \infty)$, $\P(\tau \gt t + s \mid X_0 = x, \tau \gt s) = \P(\tau \gt t + s \mid X_0 = x, X_s = x, \tau \gt s) = \P(\tau \gt t \mid X_0 = x)$ It follows that $\tau$ has the memoryless property, and hence has an exponential distribution with parameter $\lambda(x) \in [0, \infty]$.
So, associated with the Markov chain $\bs{X}$ on $S$ is a function $\lambda: S \to [0, \infty]$ that gives the exponential parameters for the holding times in the states. Considering the ordinary exponential distribution, and the two degenerate versions, we are led to the following classification of states:
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ with exponential parameter function $\lambda$. Let $x \in S$.
1. If $\lambda(x) = 0$ then $\P(\tau = \infty \mid X_0 = x) = 1$, and $x$ is said to be an absorbing state.
2. If $\lambda(x) \in (0, \infty)$ then $\P(0 \lt \tau \lt \infty \mid X_0 = x) = 1$ and $x$ is said to be a stable state.
3. If $\lambda(x) = \infty$ then $\P(\tau = 0 \mid X_0 = x) = 1$, and $x$ is said to be an instantaneous state.
As you can imagine, an instantaneous state corresponds to weird behavior, since the chain starting in the state leaves the state at times arbitrarily close to 0. While mathematically possible, instantaneous states make no sense in most applications, and so are to be avoided. Also, the proof of the last result has some technical holes. We did not really show that $\tau$ is a valid random time, let alone a stopping time. Fortunately, one of our standard assumptions resolves these problems.
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$. If the process $\bs{X}$ and the filtration $\mathfrak{F}$ are right continuous, then
1. $\tau$ is a stopping time.
2. $\bs{X}$ has no instantaneous states.
3. $\P(X_\tau \ne x \mid X_0 = x) = 1$ if $x \in S$ is stable.
4. $\bs{X}$ is a strong Markov process.
Proof
1. Let $t \in [0, \infty)$. By right continuity, $\{\tau \lt t\} = \{X_s \ne X_0 \text{ for some } s \in (0, t)\} = \{X_s \ne X_0 \text{ for some rational } s \in (0, t)\}$ But for $s \in (0, t)$, $\{X_s \ne X_0\} \in \mathscr{F}_s \subseteq \mathscr{F}_t$. The last event in the displayed equation is a countable union, so $\{\tau \lt t\} \in \mathscr{F}_t$. Since $\mathfrak{F}$ is right continuous, $\tau$ is a stopping time.
2. Suppose that $\omega \in \Omega$ and $X_0(\omega) = x$. Since $\bs{X}$ is right continuous, there exists $\epsilon \gt 0$ such that $X_t(\omega) = x$ for $0 \le t \lt \epsilon$ and hence $\tau(\omega) \ge \epsilon \gt 0$. So $\P(\tau \gt 0 \mid X_0 = x) = 1$.
3. Similarly, suppose that $\omega \in \Omega$ and that $X_0(\omega) = x$ and $X_{\tau(\omega)}(\omega) = y$. Since $\bs{X}$ is right continuous, there exists $\epsilon \gt 0$ such that $X_t(\omega) = y$ for $\tau(\omega) \le t \lt \tau(\omega) + \epsilon$. But by definition of $\tau(\omega)$, there exists $t \in (\tau(\omega), \tau(\omega) + \epsilon)$ with $X_t(\omega) \ne x$. Hence $\P(X_\tau \ne x \mid X_0 = x) = 1$.
4. The strong Markov property follows from the ordinary Markov property and the right continuity of $\bs{X}$ and $\mathfrak{F}$, by a standard argument: approximate the stopping time $\tau$ from above by the discrete stopping times $\tau_n = \lceil 2^n \tau \rceil / 2^n$, apply the Markov property at the dyadic times, and pass to the limit. See the general theory for the details.
There is actually a converse to part (b) that states that if $\bs{X}$ has no instantaneous states, then there is a version of $\bs{X}$ that is right continuous. From now on, we will assume that our Markov chains are right continuous with probability 1, and hence have no instantaneous states. On the other hand, absorbing states are perfectly reasonable and often do occur in applications. Finally, if the chain enters a stable state, it will stay there for a (proper) exponentially distributed time, and then leave.
The Jump Chain
Without instantaneous states, we can now construct a sequence of stopping times. Basically, we let $\tau_n$ denote the $n$th time that the chain changes state for $n \in \N_+$, unless the chain has previously been caught in an absorbing state. Here is the formal construction:
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$. Let $\tau_0 = 0$ and $\tau_1 = \inf\{t \in [0, \infty): X_t \ne X_0\}$. Recursively, suppose that $\tau_n$ is defined for $n \in \N_+$. If $\tau_n = \infty$ let $\tau_{n+1} = \infty$. Otherwise, let $\tau_{n+1} = \inf\left\{t \in [\tau_n, \infty): X_t \ne X_{\tau_n}\right\}$ Let $M = \sup\{n \in \N: \tau_n \lt \infty\}$.
In the definition of $M$, of course, $\sup(\N) = \infty$, so $M$ is the number of changes of state. If $M \lt \infty$, the chain reached an absorbing state at time $\tau_M$. Since we have ruled out instantaneous states, the sequence of random times is strictly increasing up until the (random) term $M$. That is, with probability 1, if $n \in \N$ and $\tau_n \lt \infty$ then $\tau_n \lt \tau_{n+1}$. Of course by construction, if $\tau_n = \infty$ then $\tau_{n+1} = \infty$. The increments $\tau_{n+1} - \tau_n$ for $n \in \N$ with $n \lt M$ are the times spent in the states visited by $\bs{X}$. The process at the random times when the state changes forms an embedded discrete-time Markov chain.
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$. Let $\{\tau_n: n \in \N\}$ denote the stopping times and $M$ the random index, as defined above. For $n \in \N$, let $Y_n = X_{\tau_n}$ if $n \le M$ and $Y_n = X_{\tau_M}$ if $n \gt M$. Then $\bs{Y} = \{Y_n: n \in \N\}$ is a (homogeneous) discrete-time Markov chain on $S$, known as the jump chain of $\bs{X}$.
Proof
For $n \in \N$ let $\mathscr{G}_n = \sigma\{Y_0, Y_1, \ldots, Y_n\}$, the $\sigma$-algebra of events for the process $\bs{Y}$, up to the discrete time $n$. Let $x \in S$. If $x$ is stable, then given $Y_n = x$, the random times $\tau_n$ and $\tau_{n+1}$ are finite with probability 1. (Note that we cannot get to $x$ from an absorbing state.) So $\P(Y_{n+1} = y \mid Y_n = x, \mathscr{G}_n) = \P\left(X_{\tau_{n+1}} = y \mid X_{\tau_n} = x, \mathscr{G}_n\right), \quad y \in S$ But by the strong Markov property, given $X_{\tau_n} = x$, the chain starts over at time $\tau_n$ in state $x$, independent of $\mathscr{G}_n \subseteq \mathscr{F}_{\tau_n}$. Hence $\P(Y_{n+1} = y \mid Y_n = x, \mathscr{G}_n) = \P(X_\tau = y \mid X_0 = x), \quad y \in S$ On the other hand, if $x$ is an absorbing state, then by construction, $\P(Y_{n+1} = y \mid Y_n = x, \mathscr{G}_n) = I(x, y), \quad y \in S$ where $I$ is the identity matrix on $S$.
As noted in the proof, the one-step transition probability matrix $Q$ for the jump chain $\bs{Y}$ is given for $(x, y) \in S^2$ by $Q(x, y) = \begin{cases} \P(X_\tau = y \mid X_0 = x), & x \text{ stable} \ I(x, y), & x \text{ absorbing} \end{cases}$ where $I$ is the identity matrix on $S$. Of course $Q$ satisfies the usual properties of a probability matrix on $S$, namely $Q(x, y) \ge 0$ for $(x, y) \in S^2$ and $\sum_{y \in S} Q(x, y) = 1$ for $x \in S$. But $Q$ satisfies another interesting property as well. Since the state actually changes at time $\tau$ starting in a stable state, we must have $Q(x, x) = 0$ if $x$ is stable and $Q(x, x) = 1$ if $x$ is absorbing.
Given the initial state, the holding time and the next state are independent.
If $x, \, y \in S$ and $t \in [0, \infty)$ then $\P(Y_1 = y, \tau_1 \gt t \mid Y_0 = x) = Q(x, y) e^{-\lambda(x) t}$
Proof
Suppose that $x$ is a stable state, so that given $Y_0 = X_0 = x$, the stopping time $\tau_1 = \tau$ has a proper exponential distribution with parameter $\lambda(x) \in (0, \infty)$. Note that $\P(Y_1 = y, \tau_1 \gt t \mid Y_0 = x) = \P(X_{\tau} = y, \tau \gt t \mid X_0 = x) = \P(X_\tau = y \mid \tau \gt t, X_0 = x) \P(\tau \gt t \mid X_0 = x)$ Note that if $X_0 = x$ and $\tau \gt t$ then $X_t = x$ also. By the Markov property, given $X_t = x$, the chain starts over at time $t$ in state $x$, independent of $\{X_0 = x\}$ and $\{\tau \gt t\}$, both events in $\mathscr{F}_t$. Hence $\P(X_\tau = y \mid \tau \gt t, X_0 = x) = \P(X_\tau = y \mid X_t = x, \tau \gt t, X_0 = x) = \P(X_\tau = y \mid X_0 = x) = Q(x, y)$ Of course $\P(\tau \gt t \mid X_0 = x) = e^{-\lambda(x) t}$.
If $x$ is an absorbing state then $\P(\tau = \infty \mid X_0 = x) = 1$, $\P(Y_1 = x \mid Y_0 = x) = 1$, and $\lambda(x) = 0$. Hence $\P(Y_1 = y, \tau_1 \gt t \mid Y_0 = x) = I(x, y) = Q(x, y) e^{-\lambda(x) t}$
The following theorem is a generalization. The changes in state and the holding times are independent, given the initial state.
Suppose that $n \in \N_+$ and that $(x_0, x_1, \ldots, x_n)$ is a sequence of stable states and $(t_1, t_2, \ldots, t_n)$ is a sequence in $[0, \infty)$. Then \begin{align*} & \P(Y_1 = x_1, \tau_1 \gt t_1, Y_2 = x_2, \tau_2 - \tau_1 \gt t_2, \ldots, Y_n = x_n, \tau_n - \tau_{n-1} \gt t_n \mid Y_0 = x_0) \ & = Q(x_0, x_1) e^{-\lambda(x_0) t_1} Q(x_1, x_2) e^{-\lambda(x_1) t_2} \cdots Q(x_{n-1}, x_n) e^{-\lambda(x_{n-1}) t_n} \end{align*}
Proof
The proof is by induction, and the essence is captured in the case $n = 2$. So suppose that $x_0, \, x_1, \, x_2$ are stable states and $t_1, \, t_2 \in [0, \infty)$. Then \begin{align*} & \P(Y_1 = x_1, \tau_1 \gt t_1, Y_2 = x_2, \tau_2 - \tau_1 \gt t_2 \mid Y_0 = x_0) \ & = \P(Y_2 = x_2, \tau_2 - \tau_1 \gt t_2 \mid X_0 = x_0, Y_1 = x_1, \tau_1 \gt t_1) \P(Y_1 = x_1, \tau_1 \gt t_1 \mid Y_0 = x_0) \end{align*} But $\P(Y_1 = x_1, \tau_1 \gt t_1 \mid Y_0 = x_0) = Q(x_0, x_1) e^{-\lambda(x_0) t_1}$ by the previous theorem. Next, by definition, $\P(Y_2 = x_2, \tau_2 - \tau_1 \gt t_2 \mid X_0 = x_0, Y_1 = x_1, \tau_1 \gt t_1) = \P\left(X_{\tau_2} = x_2, \tau_2 - \tau_1 \gt t_2 \mid X_0 = x_0, X_{\tau_1} = x_1, \tau_1 \gt t_1\right)$ But by the strong Markov property, given $X_{\tau_1} = x_1$, the chain starts over at time $\tau_1$ in state $x_1$, independent of the events $\{X_0 = x_0\}$ and $\{\tau_1 \gt t_1\}$ (both events in $\mathscr{F}_{\tau_1}$). Hence using the previous theorem again, $\P(Y_2 = x_2, \tau_2 - \tau_1 \gt t_2 \mid X_0 = x_0, Y_1 = x_1, \tau_1 \gt t_1) = \P(X_\tau = x_2, \tau \gt t_2 \mid X_0 = x_1) = Q(x_1, x_2) e^{-\lambda(x_1)t_2}$
Regularity
We now know quite a bit about the structure of a continuous-time Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ (without instantaneous states). Once the chain enters a given state $x \in S$, the holding time in state $x$ has an exponential distribution with parameter $\lambda(x) \in [0, \infty)$, after which the next state $y \in S$ is chosen, independently of the holding time, with probability $Q(x, y)$. However, we don't know everything about the chain. For the sequence $\{\tau_n: n \in \N\}$ defined above, let $\tau_\infty = \lim_{n \to \infty} \tau_n$, which exists in $(0, \infty]$ of course, since the sequence is increasing. Even though the holding time in a state is positive with probability 1, it's possible that $\tau_\infty \lt \infty$ with positive probability, in which case we know nothing about $X_t$ for $t \ge \tau_\infty$. The event $\{\tau_\infty \lt \infty\}$ is known as explosion, since it means that $\bs{X}$ makes infinitely many transitions before the finite time $\tau_\infty$. While not as pathological as the existence of instantaneous states, explosion is still to be avoided in most applications.
A Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S$ is regular if each of the following events has probability 1:
1. $\bs{X}$ is right continuous.
2. $\tau_n \to \infty$ as $n \to \infty$.
There is a simple condition on the exponential parameters and the embedded chain that is equivalent to condition (b).
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a right-continuous Markov chain on $S$ with exponential parameter function $\lambda$ and embedded chain $\bs{Y} = (Y_0, Y_1, \ldots)$. Then $\tau_n \to \infty$ as $n \to \infty$ with probability 1 if and only if $\sum_{n=0}^\infty 1 \big/ \lambda(Y_n) = \infty$ with probability 1.
Proof
Given $\bs{Y} = (y_0, y_1, \ldots)$, the distribution of $\tau_\infty = \lim_{n \to \infty} \tau_n$ is the distribution of $T_\infty = \sum_{n=0}^\infty T_n$ where $(T_0, T_1, \ldots)$ are independent, and $T_n$ has the exponential distribution with parameter $\lambda(y_n)$. Note that $\E(T_\infty) = \sum_{n=0}^\infty 1 \big/ \lambda(y_n)$. In the section on the exponential distribution, it's shown that $\P(T_\infty = \infty) = 1$ if and only if $\E(T_\infty) = \infty$.
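For a concrete illustration of explosion, consider a hypothetical pure-birth chain on $\N$ with $Q(x, x + 1) = 1$ and $\lambda(x) = (x + 1)^2$ for $x \in \N$. The jump chain is deterministic, $Y_n = n$, so $\sum_{n=0}^\infty 1 \big/ \lambda(Y_n) = \sum_{n=0}^\infty 1/(n + 1)^2 \lt \infty$ along every path, and by the theorem the chain is not regular: $\tau_\infty \lt \infty$ with probability 1, with $\E(\tau_\infty) = \pi^2/6$. The following minimal Python sketch (our illustration; the chain and its parameters are chosen purely for convenience) simulates the partial sums $\tau_n$ and shows them converging to a finite limit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pure-birth chain: Q(x, x+1) = 1 and lambda(x) = (x + 1)^2,
# so the jump chain is Y_n = n and sum_n 1/lambda(Y_n) = pi^2/6 < infinity.
n_steps = 10_000                               # truncation level
rates = (np.arange(n_steps) + 1.0) ** 2        # lambda(0), lambda(1), ...
holding = rng.exponential(1.0 / rates)         # independent Exp(lambda(n)) times
tau = np.cumsum(holding)                       # tau_1, tau_2, ...

print("tau_100    =", tau[99])
print("tau_10000  =", tau[-1])                 # nearly equal: tau_n converges
print("E[tau_inf] = pi^2/6 =", np.pi ** 2 / 6)
```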
A simple sufficient condition for regularity is that the exponential parameter function be bounded.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ with exponential parameter function $\lambda$. If $\lambda$ is bounded, then $\bs{X}$ is regular.
Proof
Suppose that $\lambda(x) \le r$ for $x \in S$, where $r \in (0, \infty)$. Then in particular, $\bs{X}$ has no instantaneous states and so is right continuous. Moreover, $1 / \lambda(x) \ge 1 / r$ for $x \in S$ so $\sum_{n=0}^\infty 1 \big / \lambda(Y_n) = \infty$ with probability 1, where as usual, $\bs{Y} = (Y_0, Y_1, \ldots)$ is the jump chain of $\bs{X}$.
Here is another sufficient condition that is useful when the state space is infinite.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ with exponential parameter function $\lambda: S \to [0, \infty)$. Let $S_+ = \{x \in S: \lambda(x) \gt 0\}$. Then $\bs X$ is regular if $\sum_{x \in S_+} \frac{1}{\lambda(x)} = \infty$
Proof
By assumption, $\lambda(x) \lt \infty$ for $x \in S$, so there are no instantaneous states, and hence we can take $\bs X$ to be right continuous. Next, $\sum_{n=0}^\infty \frac{1}{\lambda(Y_n)} = \sum_{n=0}^\infty \sum_{x \in S} \frac{1}{\lambda(x)} \bs{1}(Y_n = x) = \sum_{x \in S} \frac{1}{\lambda(x)} \sum_{n=0}^\infty \bs{1}(Y_n = x) = \sum_{x \in S} \frac{N_x}{\lambda(x)}$ where $N_x = \sum_{n=0}^\infty \bs{1}(Y_n = x)$ is the number of times that the jump chain $\bs Y$ is in state $x$. Suppose that $\sum_{x \in S_+} 1 / \lambda(x) = \infty$. Note that it must be the case that $S_+$, and hence $S$, is infinite. With probability 1, either $\bs Y$ enters an absorbing state (a state $x \in S$ with $\lambda(x) = 0$), or $N_x = \infty$ for some $x \in S_+$, or $N_x \ge 1$ for infinitely many $x \in S_+$. In any case, $\sum_{n=0}^\infty \frac{1}{\lambda(Y_n)} = \sum_{x \in S} \frac{N_x}{\lambda(x)} = \infty$
As a corollary, note that if $S$ is finite then $\lambda$ is bounded, so a continuous-time Markov chain on a finite state space is regular. So to review, if the exponential parameter function $\lambda$ is finite, the chain $\bs{X}$ has no instantaneous states. Even better, if $\lambda$ is bounded or if the conditions in the last theorem are satisfied, then $\bs{X}$ is regular. A continuous-time Markov chain with bounded exponential parameter function $\lambda$ is called uniform, for reasons that will become clear in the next section on transition matrices. As we will see in a later section, a uniform continuous-time Markov chain can be constructed from a discrete-time chain and an independent Poisson process. For the next result, recall that to say that $\bs{X}$ has left limits with probability 1 means that the random function $t \mapsto X_t$ has limits from the left on $(0, \infty)$ with probability 1.
If $\bs{X} = \{X_t: t \in [0, \infty)\}$ is regular then $\bs{X}$ has left limits with probability 1.
Proof
Suppose first that there are no absorbing states. Under the assumptions, with probability 1, $0 \lt \tau_n \lt \infty$ for each $n \in \N$ and $\tau_n \to \infty$ as $n \to \infty$. Moreover, $X_t = Y_n$ for $t \in [\tau_n, \tau_{n+1})$ and $n \in \N$. So $t \mapsto X_t$ has left limits on $(0, \infty)$ with probability 1. The same basic argument works with absorbing states, except that possibly $\tau_{n+1} = \infty$.
Thus, our standard assumption will be that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$. For such a chain, the behavior of $\bs{X}$ is completely determined by the exponential parameter function $\lambda$ that governs the holding times, and the transition probability matrix $Q$ of the jump chain $\bs{Y}$. Conversely, when modeling real stochastic systems, we often start with $\lambda$ and $Q$. It's then relatively straightforward to construct the continuous-time Markov chain that has these parameters. For simplicity, we will assume that there are no absorbing states. The inclusion of absorbing states is not difficult, but mucks up the otherwise elegant exposition.
Suppose that $\lambda: S \to (0, \infty)$ is bounded and that $Q$ is a probability matrix on $S$ with the property that $Q(x, x) = 0$ for every $x \in S$. The regular, continuous-time Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ with exponential parameter function $\lambda$ and jump transition matrix $Q$ can be constructed as follows:
1. First construct the jump chain $\bs Y = (Y_0, Y_1, \ldots)$ having transition matrix $Q$.
2. Next, given $\bs Y = (x_0, x_1, \ldots)$, the transition times $(\tau_1, \tau_2, \ldots)$ are constructed so that the holding times $(\tau_1, \tau_2 - \tau_1, \ldots)$ are independent and exponentially distributed with parameters $(\lambda(x_0), \lambda(x_1), \ldots)$
3. Again given $\bs Y = (x_0, x_1, \ldots)$, define $X_t = x_0$ for $0 \le t \lt \tau_1$ and for $n \in \N_+$, define $X_t = x_n$ for $\tau_n \le t \lt \tau_{n+1}$.
Additional details
Using product sets and product measures, it's straightforward to construct a probability space $(\Omega, \mathscr{F}, \P)$ with the following objects and properties:
1. $\bs{Y} = (Y_0, Y_1, \ldots)$ is a Markov chain on $S$ with transition matrix $Q$.
2. $\bs{T} = \{T_{n,x}: n \in \N, x \in S\}$ is a collection of independent random variables with values in $[0, \infty)$ such that $T_{n,x}$ has the exponential distribution with parameter $\lambda(x)$ for each $n \in \N$ and $x \in S$. (A fresh copy is needed at each step, since the chain may revisit a state.)
3. $\bs{Y}$ and $\bs{T}$ are independent.
Define $\bs{X} = \{X_t: t \in [0, \infty)\}$ as follows: First, $\tau_1 = T_{0, Y_0}$ and $X_t = Y_0$ for $0 \le t \lt \tau_1$. Recursively, if $X_t$ is defined on $[0, \tau_n)$, let $\tau_{n+1} = \tau_n + T_{n, Y_n}$ and then let $X_t = Y_n$ for $\tau_n \le t \lt \tau_{n+1}$. Since $\lambda$ is bounded, $\tau_n \to \infty$ as $n \to \infty$, so $X_t$ is well defined for $t \in [0, \infty)$. By construction, $t \mapsto X_t$ is right continuous and has left limits. The Markov property holds by the memoryless property of the exponential distribution and the fact that $\bs Y$ is a Markov chain. Finally, by construction, $\bs X$ has exponential parameter function $\lambda$ and jump chain $\bs{Y}$.
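The construction translates directly into a simulation algorithm. Here is a minimal Python sketch (our illustration; the three-state parameters are arbitrary) that alternately draws the next holding time from the exponential distribution with parameter $\lambda(x)$ and the next state from $Q(x, \cdot)$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters: a three-state chain with no absorbing states.
lam = np.array([4.0, 1.0, 3.0])                # exponential parameter function
Q = np.array([[0.0, 0.5, 0.5],                 # jump transition matrix,
              [1.0, 0.0, 0.0],                 # with Q(x, x) = 0 for all x
              [1/3, 2/3, 0.0]])

def sample_path(x0, t_max):
    """Return (jump times, states visited) on [0, t_max], per the construction:
    exponential holding times attached to the states of the jump chain."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        t += rng.exponential(1.0 / lam[x])     # holding time ~ Exp(lam(x))
        if t >= t_max:
            return np.array(times), np.array(states)
        x = rng.choice(len(lam), p=Q[x])       # next state from Q(x, .)
        times.append(t)
        states.append(x)

times, states = sample_path(x0=0, t_max=5.0)
print(np.column_stack([times.round(3), states]))
```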
Often, particularly when $S$ is finite, the essential structure of a regular, continuous-time Markov chain can be succinctly summarized with a graph.
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$, with exponential parameter function $\lambda$ and embedded transition matrix $Q$. The state graph of $\bs{X}$ is the graph with vertex set $S$ and directed edge set $E = \{(x, y) \in S^2: Q(x, y) \gt 0\}$. The graph is labeled as follows:
1. Each vertex $x \in S$ is labeled with the exponential parameter $\lambda(x)$.
2. Each edge $(x, y) \in E$ is labeled with the transition probability $Q(x, y)$.
So except for the labels on the vertices, the state graph of $\bs{X}$ is the same as the state graph of the discrete-time jump chain $\bs{Y}$. That is, there is a directed edge from state $x$ to state $y$ if and only if the chain, when in $x$, can move to $y$ after the random holding time in $x$. Note that the only loops in the state graph correspond to absorbing states, and for such a state there are no outward edges.
Let's return again to the construction above of a continuous-time Markov chain from the jump transition matrix $Q$ and the exponential parameter function $\lambda$. Again for simplicity, assume there are no absorbing states. We assume that $Q(x, x) = 0$ for all $x \in S$, so that the state really does change at the transition times. However, if we drop this assumption, the construction still produces a continuous-time Markov chain, but with an altered jump transition matrix and exponential parameter function.
Suppose that $Q$ is a transition matrix on $S \times S$ with $Q(x, x) \lt 1$ for $x \in S$, and that $\lambda: S \to (0, \infty)$ is bounded. The stochastic process $\bs X = \{X_t: t \in [0, \infty)\}$ constructed above from $Q$ and $\lambda$ is a regular, continuous-time Markov chain with exponential parameter function $\tilde \lambda$ and jump transition matrix $\tilde Q$ given by \begin{align*} & \tilde \lambda(x) = \lambda(x)[1 - Q(x, x)], \quad x \in S \ & \tilde Q(x, y) = \frac{Q(x, y)}{1 - Q(x, x)}, \quad (x, y) \in S^2, \, x \ne y \end{align*}
Proof 1
As before, the fact that $\bs X$ is a continuous-time Markov chain follows from the memoryless property of the exponential distribution and the Markov property of the jump chain $\bs Y$. By construction, $t \mapsto X_t$ is right continuous and has left limits. The main point, however, is that $(\tau_1, \tau_2, \ldots)$ is not necessarily the sequence of transition times, when the state actually changes. So we just need to determine the parameters. Suppose $X_0 = x \in S$ and let $\tau = \tau_1$ have the exponential distribution with parameter $\lambda(x)$, as in the construction. Let $T$ denote the time when the state actually does change. For $t \in [0, \infty)$, the event $T \gt t$ can happen in two ways: either $\tau \gt t$ or $\tau = s$ for some $s \in [0, t]$, the chain jumps back into state $x$ at time $s$, and the process then stays in $x$ for a period of at least $t - s$. Thus let $F_x(t) = \P(T \gt t \mid X_0 = x)$. Taking the two cases, conditioning on $\tau$, and using the Markov property gives $F_x(t) = e^{-\lambda(x) t} + \int_0^t \lambda(x) e^{-\lambda(x) s} Q(x, x) F_x(t - s) ds$ Using the change of variables $u = t - s$ and simplifying gives $F_x(t) = e^{-\lambda(x) t} \left[1 + \lambda(x) Q(x, x) \int_0^t e^{\lambda(x) u} F_x(u) du\right]$ Differentiating with respect to $t$ then gives $F_x^\prime(t) = -\lambda(x) [1 - Q(x, x)] F_x(t)$ with the initial condition $F_x(0) = 1$. The solution of course is $F_x(t) = \exp\{-\lambda(x)[1 - Q(x, x)] t\}$ for $t \in [0, \infty)$. When the state does change, the new state $y \ne x$ is chosen with probability $\P(Y_1 = y \mid Y_0 = x, Y_1 \ne x) = \frac{Q(x, y)}{1 - Q(x, x)}$
Proof 2
As in the first proof, we just need to determine the parameters. Given $X_0 = Y_0 = x$, the discrete time $N$ when $\bs Y$ first changes state has the geometric distribution on $\N_+$ with success parameter $1 - Q(x, x)$. Hence the time until $\bs X$ actually changes state has the distribution of $T = \sum_{i=1}^N U_i$ where $\bs U = (U_1, U_2, \ldots)$ is a sequence of independent variables, each exponentially distributed with parameter $\lambda(x)$ and with $\bs U$ independent of $N$. In the section on the exponential distribution, it is shown that $T$ also has the exponential distribution, but with parameter $\lambda(x)[1 - Q(x, x)]$. (The proof is simple using generating functions.) As in the first proof, when the state does change, the new state $y \ne x$ is chosen with probability $\P(Y_1 = y \mid Y_0 = x, Y_1 \ne x) = \frac{Q(x, y)}{1 - Q(x, x)}$
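The geometric-sum representation in the second proof is easy to check by simulation. In the following Python sketch (our illustration, with the arbitrary values $\lambda(x) = 2$ and $Q(x, x) = 0.6$), the empirical distribution of $T = \sum_{i=1}^N U_i$ is compared with the exponential distribution with parameter $\lambda(x)[1 - Q(x, x)]$:

```python
import numpy as np

rng = np.random.default_rng(3)

lam_x, q_xx = 2.0, 0.6                         # arbitrary illustrative values
n_reps = 50_000

# T = U_1 + ... + U_N with N ~ Geometric(1 - q_xx) on {1, 2, ...} and the
# U_i independent Exp(lam_x); the claim is T ~ Exp(lam_x * (1 - q_xx)).
N = rng.geometric(1.0 - q_xx, size=n_reps)
T = np.array([rng.exponential(1.0 / lam_x, size=n).sum() for n in N])

print("sample mean of T     :", T.mean())
print("1/(lam_x (1 - q_xx)) :", 1.0 / (lam_x * (1.0 - q_xx)))
t = 1.0                                        # also compare a tail probability
print("P(T > 1) empirical   :", (T > t).mean())
print("P(T > 1) exponential :", np.exp(-lam_x * (1.0 - q_xx) * t))
```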
This construction will be important in our study of chains subordinate to the Poisson process.
Transition Times
The structure of a regular Markov chain on $S$, as described above, can be explained purely in terms of a family of independent, exponentially distributed random variables. The main tools are some additional special properties of the exponential distribution, that we need to restate in the setting of our Markov chain. Our interest is in how the process evolves among the stable states until it enters an absorbing state (if it does). Once in an absorbing state, the chain stays there forever, so the behavior from that point on is trivial.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$, with exponential parameter function $\lambda$ and transition probability matrix $Q$. Define $\mu(x, y) = \lambda(x) Q(x, y)$ for $(x, y) \in S^2$. Then
1. $\lambda(x) = \sum_{y \in S} \mu(x, y)$ for $x \in S$.
2. $Q(x, y) = \mu(x, y) \big/ \lambda(x)$ if $(x, y) \in S^2$ and $x$ is stable.
The main point is that the new parameters $\mu(x, y)$ for $(x, y) \in S^2$ determine the exponential parameters $\lambda(x)$ for $x \in S$, and the transition probabilities $Q(x, y)$ when $x \in S$ is stable and $y \in S$. Of course we know that if $\lambda(x) = 0$, so that $x$ is absorbing, then $Q(x, x) = 1$. So in fact, the new parameters, as specified by the function $\mu$, completely determine the old parameters, as specified by the functions $\lambda$ and $Q$. But so what?
Consider the functions $\mu$, $\lambda$, and $Q$ as given in the previous result. Suppose that $T_{x,y}$ has the exponential distribution with parameter $\mu(x, y)$ for each $(x, y) \in S^2$ and that $\left\{T_{x,y}: (x, y) \in S^2\right\}$ is a set of independent random variables. Then
1. $T_x = \inf\left\{T_{x,y}: y \in S\right\}$ has the exponential distribution with parameter $\lambda(x)$ for $x \in S$.
2. $\P\left(T_x = T_{x, y}\right) = Q(x, y)$ for $(x, y) \in S^2$.
Proof
These are basic results proved in the section on the exponential distribution.
So here's how we can think of a regular, continuous-time Markov chain on $S$: There is a timer associated with each $(x, y) \in S^2$, set to the random time $T_{x,y}$. All of the timers function independently. When the chain enters state $x \in S$, the timers on $(x, y)$ for $y \in S$ are started simultaneously. As soon as the first alarm goes off for a particular $(x, y)$, the chain immediately moves to state $y$, and the process repeats. Of course, if $\mu(x, y) = 0$ then $T_{x, y} = \infty$ with probability 1, so only the timers with $\lambda(x) \gt 0$ and $Q(x, y) \gt 0$ matter (these correspond to the non-loop edges in the state graph). In particular, if $x$ is absorbing, then the timers on $(x, y)$ are set to infinity for each $y$, and no alarm ever sounds.
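Both parts of the previous theorem can be checked numerically by racing the timers. The Python sketch below (our illustration, with the rate row $\mu(x, \cdot) = (0, 2, 2)$ chosen arbitrarily) verifies that the minimum has mean $1 / \lambda(x)$ and that the timer on $(x, y)$ wins with probability $Q(x, y)$:

```python
import numpy as np

rng = np.random.default_rng(11)

mu_row = np.array([0.0, 2.0, 2.0])             # illustrative rates mu(x, y)
lam_x = mu_row.sum()                           # lambda(x) = sum_y mu(x, y) = 4
n_reps = 200_000

# Timers with rate 0 are infinite with probability 1 and never win the race.
finite = mu_row > 0
timers = np.full((n_reps, len(mu_row)), np.inf)
timers[:, finite] = rng.exponential(1.0 / mu_row[finite],
                                    size=(n_reps, int(finite.sum())))

winner = timers.argmin(axis=1)                 # the y for which T_{x,y} is minimal
print("mean of minimum    :", timers.min(axis=1).mean(), "vs", 1 / lam_x)
print("empirical Q(x, .)  :", np.bincount(winner, minlength=3) / n_reps)
print("theoretical Q(x, .):", mu_row / lam_x)
```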
The new collection of exponential parameters can be used to give an alternate version of the state graph. Again, the vertex set is $S$ and the edge set is $E = \{(x, y) \in S^2: Q(x, y) \gt 0\}$. But now each edge $(x, y)$ is labeled with the exponential rate parameter $\mu(x, y)$. The exponential rate parameters are closely related to the generator matrix, a matrix of fundamental importance that we will study in the next section.
Examples and Exercises
The Two-State Chain
The two-state chain is the simplest non-trivial, continuous-time Markov chain, but yet this chain illustrates many of the important properties of general continuous-time chains. So consider the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on the set of states $S = \{0, 1\}$, with transition rate $a \in [0, \infty)$ from 0 to 1 and transition rate $b \in [0, \infty)$ from 1 to 0.
The transition matrix $Q$ for the embedded chain is given below. Draw the state graph in each case.
1. $Q = \left[\begin{matrix} 0 & 1 \ 1 & 0 \end{matrix}\right]$ if $a \gt 0$ and $b \gt 0$, so that both states are stable.
2. $Q = \left[\begin{matrix} 1 & 0 \ 1 & 0 \end{matrix}\right]$ if $a = 0$ and $b \gt 0$, so that state 0 is absorbing and state 1 is stable.
3. $Q = \left[\begin{matrix} 0 & 1 \ 0 & 1 \end{matrix}\right]$ if $a \gt 0$ and $b = 0$, so that state 0 is stable and state 1 is absorbing.
4. $Q = \left[\begin{matrix} 1 & 0 \ 0 & 1 \end{matrix}\right]$ if $a = 0$ and $b = 0$, so that both states are absorbing.
We will return to the two-state chain in subsequent sections.
Computational Exercises
Consider the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1, 2\}$ with exponential parameter function $\lambda = (4, 1, 3)$ and embedded transition matrix $Q = \left[\begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} \ 1 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0\end{matrix}\right]$
1. Draw the state graph and classify the states.
2. Find the matrix of transition rates.
3. Classify the jump chain in terms of recurrence and period.
4. Find the invariant distribution of the jump chain.
Answer
1. The edge set is $E = \{(0, 1), (0, 2), (1, 0), (2, 0), (2, 1)\}$. All states are stable.
2. The matrix of transition rates is $\left[\begin{matrix} 0 & 2 & 2 \ 1 & 0 & 0 \ 1 & 2 & 0 \end{matrix}\right]$
3. The jump chain is irreducible, positive recurrent, and aperiodic.
4. The invariant distribution for the jump chain has PDF $f = \left[\begin{matrix} \frac{6}{14} & \frac{5}{14} & \frac{3}{14}\end{matrix}\right]$
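Parts (b) and (d) can be cross-checked numerically: the matrix of transition rates is $\mu(x, y) = \lambda(x) Q(x, y)$ row by row, and the invariant distribution of the jump chain is the normalized left eigenvector of $Q$ for the eigenvalue 1. A minimal Python sketch (our illustration):

```python
import numpy as np

lam = np.array([4.0, 1.0, 3.0])
Q = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [1/3, 2/3, 0.0]])

mu = lam[:, None] * Q                          # transition rates mu(x, y)
print(mu)                                      # rows (0, 2, 2), (1, 0, 0), (1, 2, 0)

# Invariant PDF of the jump chain: solve f Q = f, normalized to sum 1.
vals, vecs = np.linalg.eig(Q.T)
f = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
f = f / f.sum()
print(f)                                       # (6, 5, 3) / 14
```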
Special Models
Read the introduction to chains subordinate to the Poisson process.
Read the introduction to birth-death chains.
Read the introduction to continuous-time queuing chains.
Read the introduction to continuous-time branching chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
16. Transition Matrices and Generators of Continuous-Time Chains
Preliminaries
This is the second of the three introductory sections on continuous-time Markov chains. Thus, suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain defined on an underlying probability space $(\Omega, \mathscr{F}, \P)$ and with state space $(S, \mathscr{S})$. By the very meaning of Markov chain, the set of states $S$ is countable and the $\sigma$-algebra $\mathscr{S}$ is the collection of all subsets of $S$. So every subset of $S$ is measurable, as is every function from $S$ to another measurable space. Recall that $\mathscr{S}$ is also the Borel $\sigma$-algebra corresponding to the discrete topology on $S$. With this topology, every function from $S$ to another topological space is continuous. Counting measure $\#$ is the natural measure on $(S, \mathscr{S})$, so in the context of the general introduction, integrals over $S$ are simply sums. Also, kernels on $S$ can be thought of as matrices, with rows and columns indexed by $S$. The left and right kernel operations are generalizations of matrix multiplication.
A space of functions on $S$ plays an important role. Let $\mathscr{B}$ denote the collection of bounded functions $f: S \to \R$. With the usual pointwise definitions of addition and scalar multiplication, $\mathscr{B}$ is a vector space. The supremum norm on $\mathscr{B}$ is given by $\|f\| = \sup\{\left|f(x)\right|: x \in S\}, \quad f \in \mathscr{B}$ Of course, if $S$ is finite, $\mathscr{B}$ is the set of all real-valued functions on $S$, and $\|f\| = \max\{\left|f(x)\right|: x \in S\}$ for $f \in \mathscr{B}$.
In the last section, we studied $\bs{X}$ in terms of when and how the state changes. To review briefly, let $\tau = \inf\{t \in (0, \infty): X_t \ne X_0\}$. Assuming that $\bs{X}$ is right continuous, the Markov property of $\bs{X}$ implies the memoryless property of $\tau$, and hence the distribution of $\tau$ given $X_0 = x$ is exponential with parameter $\lambda(x) \in [0, \infty)$ for each $x \in S$. The assumption of right continuity rules out the pathological possibility that $\lambda(x) = \infty$, which would mean that $x$ is an instantaneous state so that $\P(\tau = 0 \mid X_0 = x) = 1$. On the other hand, if $\lambda(x) \in (0, \infty)$ then $x$ is a stable state, so that $\tau$ has a proper exponential distribution given $X_0 = x$ with $\P(0 \lt \tau \lt \infty \mid X_0 = x) = 1$. Finally, if $\lambda(x) = 0$ then $x$ is an absorbing state, so that $\P(\tau = \infty \mid X_0 = x) = 1$. Next we define a sequence of stopping times: First $\tau_0 = 0$ and $\tau_1 = \tau$. Recursively, if $\tau_n \lt \infty$ then $\tau_{n+1} = \inf\left\{t \gt \tau_n: X_t \ne X_{\tau_n}\right\}$, while if $\tau_n = \infty$ then $\tau_{n+1} = \infty$. With $M = \sup\{n \in \N: \tau_n \lt \infty\}$ we define $Y_n = X_{\tau_n}$ if $n \in \N$ with $n \le M$ and $Y_n = Y_M$ if $n \in \N$ with $n \gt M$. The sequence $\bs{Y} = (Y_0, Y_1, \ldots)$ is a discrete-time Markov chain on $S$ with one-step transition matrix $Q$ given by $Q(x, y) = \P(X_\tau = y \mid X_0 = x)$ if $x, \, y \in S$ with $x$ stable, and $Q(x, x) = 1$ if $x \in S$ is absorbing. Assuming that $\bs{X}$ is regular, which means that $\tau_n \to \infty$ as $n \to \infty$ with probability 1 (ruling out the explosion event of infinitely many transitions in finite time), the structure of $\bs{X}$ is completely determined by the sequence of stopping times $\bs{\tau} = (\tau_0, \tau_1, \ldots)$ and the discrete-time jump chain $\bs{Y} = (Y_0, Y_1, \ldots)$. Analytically, the distribution of $\bs{X}$ is determined by the exponential parameter function $\lambda$ and the one-step transition matrix $Q$ of the jump chain.
In this section, we will study the Markov chain $\bs{X}$ in terms of the transition matrices in continuous time and a fundamentally important matrix known as the generator. Naturally, the connections between the two points of view are particularly interesting.
The Transition Semigroup
Definition and Basic Properties
The first part of our discussion is very similar to the treatment for general Markov processes, except for simplifications caused by the discrete state space. We assume that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$.
The transition probability matrix $P_t$ of $\bs{X}$ corresponding to $t \in [0, \infty)$ is $P_t(x, y) = \P(X_t = y \mid X_0 = x), \quad (x, y) \in S^2$ In particular, $P_0 = I$, the identity matrix on $S$.
Proof
The mapping $y \mapsto P_t(x, y)$ is the PDF of $X_t$ given $X_0 = x$. Hence $P_t$ is a probability matrix. That is, $P_t(x, y) \ge 0$ for $(x, y) \in S^2$ and $\sum_{y \in S} P_t(x, y) = 1$ for $x \in S$. Trivially, $P_0 = I$ by definition.
Note that since we are assuming that the Markov chain is homogeneous, $P_t(x, y) = \P(X_{s + t} = y \mid X_s = x), \quad (x, y) \in S^2$ for every $s, \, t \in [0, \infty)$. The Chapman-Kolmogorov equation given next is essentially yet another restatement of the Markov property. The equation is named for Sydney Chapman and Andrei Kolmogorov.
Suppose that $\bs{P} = \{P_t: t \in [0, \infty)\}$ is the collection of transition matrices for the chain $\bs{X}$. Then $P_s P_t = P_{s+t}$ for $s, \, t \in [0, \infty)$. Explicitly, $P_{s+t}(x, z) = \sum_{y \in S} P_s(x, y) P_t(y, z), \quad x, \, z \in S$
Proof
We condition on $X_s$. $P_{s+t}(x, z) = \P(X_{s + t} = z \mid X_0 = x) = \sum_{y \in S} \P(X_{s+t} = z \mid X_s = y, X_0 = x) \P(X_s = y \mid X_0 = x)$ But by the Markov and time homogeneous properties, $\P(X_{s+t} = z \mid X_s = y, X_0 = x) = \P(X_{s+t} = z \mid X_s = y) = P_t(y, z)$ Of course by definition, $\P(X_s = y \mid X_0 = x) = P_s(x, y)$. So the first displayed equation above becomes $P_{s+t}(x, z) = \sum_{y \in S} P_s(x, y) P_t(y, z) = P_s P_t(x, z)$
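As a quick numerical illustration of the Chapman-Kolmogorov equation, the following Python sketch (our illustration) builds a semigroup as $P_t = e^{t G}$ for an arbitrary three-state generator matrix $G$, a representation justified later in this section for uniform chains, and checks that $P_s P_t = P_{s+t}$:

```python
import numpy as np
from scipy.linalg import expm

# An arbitrary generator matrix on a three-state space; P_t = expm(t G)
# is justified later in this section for uniform chains.
G = np.array([[-4.0,  2.0,  2.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  2.0, -3.0]])

s, t = 0.7, 1.9
Ps, Pt, Pst = expm(s * G), expm(t * G), expm((s + t) * G)
print(np.allclose(Ps @ Pt, Pst))               # True: P_s P_t = P_{s+t}
print(Pst.sum(axis=1))                         # each row sums to 1
```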
Restated in another form of jargon, the collection $\bs{P} = \{P_t: t \in [0, \infty)\}$ is a semigroup of probability matrices. The semigroup of transition matrices $\bs{P}$, along with the initial distribution, determine the finite-dimensional distributions of $\bs{X}$.
Suppose that $X_0$ has probability density function $f$. If $(t_1, t_2, \ldots, t_n) \in [0, \infty)^n$ is a time sequence with $0 \lt t_1 \lt \cdots \lt t_n$ and $(x_0, x_1, \ldots, x_n) \in S^{n+1}$ is a state sequence, then $\P\left(X_0 = x_0, X_{t_1} = x_1, \ldots X_{t_n} = x_n\right) = f(x_0) P_{t_1}(x_0, x_1) P_{t_2 - t_1}(x_1, x_2) \cdots P_{t_n - t_{n-1}}(x_{n-1}, x_n)$
Proof
To simplify the notation, we will just give the cases $n = 1$ and $n = 2$, which capture the essence of the proof. First suppose $x, \, y \in S$ and $t \in [0, \infty)$. Then $\P(X_0 = x, X_t = y) = \P(X_0 = x) \P(X_t = y \mid X_0 = x) = f(x) P_t(x, y)$ Next suppose that $x, \, y, \, z \in S$ and $s, \, t \in [0, \infty)$ with $s \lt t$. Then $\P(X_0 = x, X_s = y, X_t = z) = \P(X_t = z \mid X_0 = x, X_s = y) \P(X_0 = x, X_s = y)$ But by the Markov and time homogeneous properties, $\P(X_t = z \mid X_0 = x, X_s = y) = P_{t - s}(y, z)$. By the $n = 1$ case, $\P(X_0 = x, X_s = y) = f(x) P_s(x, y)$. Hence $\P(X_0 = x, X_s = y, X_t = z) = f(x) P_s(x, y) P_{t-s}(y, z)$
As with any matrix on $S$, the transition matrices define left and right operations on functions which are generalizations of matrix multiplication. For a transition matrix, both have natural interpretations.
Suppose that $f: S \to \R$, and that either $f$ is nonnegative or $f \in \mathscr{B}$. Then for $t \in [0, \infty)$, $P_t f(x) = \sum_{y \in S} P_t(x, y) f(y) = \E[f(X_t) \mid X_0 = x], \quad x \in S$ The mapping $f \mapsto P_t f$ is a bounded, linear operator on $\mathscr{B}$ and $\|P_t\| = 1$.
Proof
Since $P_t(x, \cdot)$ is the conditional probability density function of $X_t$ given $X_0 = x$, it follows that $P_t f(x) = \E[f(X_t) \mid X_0 = x]$. The statement about $f \mapsto P_t f$ follows from general results on probability kernels.
If $f$ is nonnegative and $S$ is infinite, then it's possible that $P_t f(x) = \infty$. In general, the left operation of a positive kernel acts on positive measures on the state space. In the setting here, if $\mu$ is a positive (Borel) measure on $(S, \mathscr{S})$, then the function $f: S \to [0, \infty)$ given by $f(x) = \mu\{x\}$ for $x \in S$ is the density function of $\mu$ with respect to counting measure $\#$ on $(S, \mathscr{S})$. This simply means that $\mu(A) = \sum_{x \in A} f(x)$ for $A \subseteq S$. Conversely, given $f: S \to [0, \infty)$, the set function $\mu(A) = \sum_{x \in A} f(x)$ for $A \subseteq S$ defines a positive measure on $(S, \mathscr{S})$ with $f$ as its density function. So for the left operation of $P_t$, it's natural to consider only nonnegative functions.
If $f: S \to [0, \infty)$ then $f P_t(y) = \sum_{x \in S} f(x) P_t(x, y), \quad y \in S$ If $X_0$ has probability density function $f$ then $X_t$ has probability density function $f P_t$.
Proof
If $X_0$ has PDF $f$, then conditioning gives $\P(X_t = y) = \sum_{x \in S} \P(X_t = y \mid X_0 = x) \P(X_0 = x) = \sum_{x \in S} P_t(x, y) f(x) = f P_t(y), \quad y \in S$
More generally, if $f$ is the density function of a positive measure $\mu$ on $(S, \mathscr{S})$ then $f P_t$ is the density function of the measure $\mu P_t$, defined by $\mu P_t(A) = \sum_{x \in S} \mu\{x\} P_t(x, A) = \sum_{x \in S} f(x) P_t(x, A), \quad A \subseteq S$
A function $f : S \to [0, \infty)$ is invariant for the Markov chain $\bs{X}$ (or for the transition semigroup $\bs{P}$) if $f P_t = f$ for every $t \in [0, \infty)$.
It follows that if $X_0$ has an invariant probability density function $f$, then $X_t$ has probability density function $f$ for every $t \in [0, \infty)$, so $\bs{X}$ is identically distributed. Invariant and limiting distributions are fundamentally important for continuous-time Markov chains.
Standard Semigroups
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain on $S$ with transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$. Once again, continuity assumptions need to be imposed on $\bs{X}$ in order to rule out strange behavior that would otherwise greatly complicate the theory. In terms of the transition semigroup $\bs{P}$, here is the basic assumption:
The transition semigroup $\bs{P}$ is standard if $P_t(x, x) \to 1$ as $t \downarrow 0$ for each $x \in S$.
Since $P_0(x, x) = 1$ for $x \in S$, the standard assumption is clearly a continuity assumption. It actually implies much stronger smoothness properties that we will build up by stages.
If the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ is standard, then the function $t \mapsto P_t(x, y)$ is right continuous for each $(x, y) \in S^2$.
Proof
First note that if $(x, y) \in S^2$ with $x \ne y$ then $P_h(x, y) \le 1 - P_h(x, x) \to 0$ as $h \downarrow 0$. Hence $P_h(x, y) \to I(x, y)$ as $h \downarrow 0$ for all $(x, y) \in S^2$. Suppose next that $t \in (0, \infty)$ and $(x, y) \in S^2$. By the semigroup property, $P_{t+h}(x, y) = P_t P_h(x, y) = \sum_{z \in S} P_t(x, z) P_h(z, y)$ But $P_h(z, y) \to I(z, y)$ as $h \downarrow 0$ so by the bounded convergence theorem, $P_{t+h}(x, y) \to P_t(x, y)$ as $h \downarrow 0$.
Our next result connects one of the basic assumptions in the section on transition times and the embedded chain with the standard assumption here.
If the Markov chain $\bs{X}$ has no instantaneous states then the transition semigroup $\bs{P}$ is standard.
Proof
Given $X_0 = x \in S$ note that $\tau \gt t$ implies $X_t = x$. Hence $P_t(x, x) = \P(X_t = x \mid X_0 = x) \ge \P(\tau \gt t \mid X_0 = x) = e^{-\lambda(x) t}$ Since $\bs{X}$ has no instantaneous states, $0 \le \lambda(x) \lt \infty$ so $e^{-\lambda(x) t} \to 1$ as $t \downarrow 0$.
Recall that the non-existence of instantaneous states is essentially equivalent to the right continuity of $\bs{X}$. So we have the nice result that if $\bs{X}$ is right continuous, then so is $\bs{P}$. For the remainder of our discussion, we assume that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$ with transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$, exponential function $\lambda$ and one-step transition matrix $Q$ for the jump chain. Our next result is the fundamental integral equations relating $\bs{P}$, $\lambda$, and $Q$.
For $t \in [0, \infty)$, $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \int_0^t \lambda(x) e^{-\lambda(x) s} Q P_{t - s} (x, y) \, ds, \quad (x, y) \in S^2$
Proof
If $x$ is an absorbing state, then the equation trivially holds, since $\lambda(x) = 0$ and $P_t(x, y) = I(x, y)$. So suppose that $x$ is a stable state, and as above, let $\tau = \inf\{t \in [0, \infty): X_t \ne X_0\}$. Given $X_0 = x$, $\tau$ has a proper exponential distribution with parameter $\lambda(x) \in (0, \infty)$. Taking cases, $P_t(x, y) = \P(X_t = y \mid X_0 = x) = \P(X_t = y, \tau \gt t \mid X_0 = x) + \P(X_t = y, \tau \le t \mid X_0 = x)$ The first term on the right is 0 if $y \ne x$ and is $\P(\tau \gt t \mid X_0 = x) = e^{-\lambda(x) t}$ if $y = x$. In short, $\P(X_t = y, \tau \gt t \mid X_0 = x) = I(x, y) e^{-\lambda(x) t}$ For the second term on the right in the displayed equation, we condition on $\tau$ and $Y_1 = X_\tau$. By a result in the last section on transition times and the embedded chain, the joint PDF of $(\tau, Y_1)$ at $s \in [0, \infty)$ and $z \in S$, given $X_0 = x$, is $\lambda(x) e^{-\lambda(x) s} Q(x, z)$ (continuous in time, discrete in space). Also, given $\tau = s \in [0, t]$ and $Y_1 = z \in S$, we can use the strong Markov property to restart the clock at $s$ giving $\P(X_t = y \mid X_0 = x, \tau = s, Y_1 = z) = \P(X_{t-s} = y \mid X_0 = z) = P_{t-s}(z, y)$ Putting the pieces together we have $\P(X_t = y, \tau \le t \mid X_0 = x) = \int_0^t \lambda(x) e^{-\lambda(x) s} \sum_{z \in S} Q(x, z) P_{t-s}(z, y) \, ds = \int_0^t \lambda(x) e^{-\lambda(x) s} QP_{t - s} (x, y) \, ds$
We can now improve on the continuity result that we got earlier. First recall the leads to relation for the jump chain $\bs{Y}$: For $(x, y) \in S^2$, $x$ leads to $y$ if $Q^n(x, y) \gt 0$ for some $n \in \N$. So by definition, $x$ leads to $x$ for each $x \in S$, and for $(x, y) \in S^2$ with $x \ne y$, $x$ leads to $y$ if and only if the discrete-time chain starting in $x$ eventually reaches $y$ with positive probability.
For $(x, y) \in S^2$,
1. $t \mapsto P_t(x, y)$ is continuous.
2. If $x$ leads to $y$ then $P_t(x, y) \gt 0$ for every $t \in (0, \infty)$.
3. If $x$ does not lead to $y$ then $P_t(x, y) = 0$ for every $t \in (0, \infty)$.
For $t \in [0, \infty)$, we can use the change of variables $r = t - s$ in the fundamental integral equation to get $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \lambda(x) e^{-\lambda(x) t} \int_0^t e^{\lambda(x) r} Q P_r (x, y) \, dr, \quad (x, y) \in S^2$
Proof
1. In the displayed equation, $r \mapsto P_r(x, y)$ is right continuous for every $(x, y) \in S^2$, and hence by the bounded convergence theorem again, so is $r \mapsto QP_r(x, y)$. Since the integrand in the displayed equation is bounded and right continuous, the integral is a continuous function of $t$. Hence $t \mapsto P_t(x, y)$ is continuous for $(x, y) \in S^2$.
2. For $x \in S$, note that $P_t(x, x) \ge e^{-\lambda(x) t} \gt 0$ for $t \in [0, \infty)$. If $x$ leads to $y$ and $x \ne y$ then there exists $n \in \N_+$ and $(x_1, x_2, \ldots, x_{n-1}) \in S^{n-1}$ such that $Q(x, x_1) \gt 0, \ldots, Q(x_{n-1}, y) \gt 0$. Then $P_t(x, y) = \P(X_t = y \mid X_0 = x) \ge \P(Y_1 = x_1, \ldots, Y_{n-1} = x_{n-1}, Y_n = y, \tau_n \le t \lt \tau_{n+1} \mid X_0 = x) \gt 0$
3. This is clear from the definition of the embedded chain $\bs{Y}$.
Parts (b) and (c) are known as the Lévy dichotomy, named for Paul Lévy. It's possible to prove the Lévy dichotomy just from the semigroup property of $\bs{P}$, but this proof is considerably more complicated. In light of the dichotomy, the leads to relation clearly makes sense for the continuous-time chain $\bs{X}$ as well as the discrete-time embedded chain $\bs{Y}$.
The Generator Matrix
Definition and Basic Properties
In this discussion, we assume again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$ with transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$, exponential parameter function $\lambda$ and one-step transition matrix $Q$ for the embedded jump chain. The fundamental integral equation above now implies that the transition probability matrix $P_t$ is differentiable in $t$. The derivative at $0$ is particularly important.
The matrix function $t \mapsto P_t$ has a (right) derivative at 0: $\frac{P_t - I}{t} \to G \text { as } t \downarrow 0$ where the infinitesimal generator matrix $G$ is given by $G(x, y) = -\lambda(x) I(x, y) + \lambda(x) Q(x, y)$ for $(x, y) \in S^2$.
Proof
As before the change of variables $r = t - s$ in the fundamental integral equation gives $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \lambda(x) e^{-\lambda(x) t} \int_0^t e^{\lambda(x) r} Q P_r (x, y) \, dr$ The first term is clearly differentiable in $t$, and the second term is also differentiable in $t$ since we now know that the integrand is a continuous function of $r$. The result then follows from standard calculus.
Note that $\lambda(x) Q(x, x) = 0$ for every $x \in S$, since $\lambda(x) = 0$ if $x$ is absorbing, while $Q(x, x) = 0$ if $x$ is stable. So $G(x, x) = -\lambda(x)$ for $x \in S$, and $G(x, y) = \lambda(x) Q(x, y)$ for $(x, y) \in S^2$ with $y \ne x$. Thus, the generator matrix $G$ determines the exponential parameter function $\lambda$ and the jump transition matrix $Q$, and thus determines the distribution of the Markov chain $\bs{X}$.
Given the generator matrix $G$ of $\bs{X}$,
1. $\lambda(x) = -G(x, x)$ for $x \in S$
2. $Q(x, y) = - G(x, y) \big/ G(x, x)$ if $x \in S$ is stable and $y \in S - \{x\}$
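In computational work, recovering $\lambda$ and $Q$ from $G$ is a one-line affair for each. Here is a minimal Python sketch (our illustration, with an arbitrary three-state generator, all of whose states happen to be stable):

```python
import numpy as np

G = np.array([[-4.0,  2.0,  2.0],              # an arbitrary generator matrix
              [ 1.0, -1.0,  0.0],
              [ 1.0,  2.0, -3.0]])

lam = -np.diag(G)                              # lambda(x) = -G(x, x)
Q = np.eye(len(G))                             # default Q(x, x) = 1 (absorbing)
for x in range(len(G)):
    if lam[x] > 0:                             # stable: Q(x, y) = -G(x, y)/G(x, x)
        Q[x] = G[x] / lam[x]
        Q[x, x] = 0.0

print(lam)                                     # (4, 1, 3)
print(Q)                                       # the jump transition matrix
```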
The infinitesimal generator has a nice interpretation in terms of our discussion in the last section. Recall that when the chain first enters a stable state $x$, we set independent, exponentially distributed timers on $(x, y)$ for each $y \in S - \{x\}$. Note that $G(x, y)$ is the exponential parameter for the timer on $(x, y)$. As soon as an alarm sounds for a particular $(x, y)$, the chain moves to state $y$ and the process continues.
The generator matrix $G$ satisfies the following properties for every $x \in S$:
1. $G(x, x) \le 0$
2. $\sum_{y \in S} G(x, y) = 0$
The matrix function $t \mapsto P_t$ is differentiable on $[0, \infty)$, and satisfies the Kolmogorov backward equation: $P^\prime_t = G P_t$. Explicitly, $P^\prime_t(x, y) = -\lambda(x) P_t(x, y) + \sum_{z \in S} \lambda(x) Q(x, z) P_t(z, y), \quad (x, y) \in S^2$
Proof
The proof is just like before, and follows from standard calculus and the integral equation $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \lambda(x) e^{-\lambda(x) t} \int_0^t e^{\lambda(x) r} Q P_r (x, y) \, dr$
The backward equation is named for Andrei Kolmogorov. In continuous time, the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ can be obtained from the single, generator matrix $G$ in a way that is reminiscent of the fact that in discrete time, the transition semigroup $\bs{P} = \{P^n: n \in \N\}$ can be obtained from the single, one-step matrix $P$. From a modeling point of view, we often start with the generator matrix $G$ and then solve the backward equation, subject to the initial condition $P_0 = I$, to obtain the semigroup of transition matrices $\bs{P}$.
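From a computational perspective, the backward equation is an ordinary matrix differential equation that can be integrated numerically. The following Python sketch (our illustration, using the same arbitrary three-state generator as before) solves $P_t^\prime = G P_t$ with $P_0 = I$ and compares the result with the matrix exponential obtained later in this section:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

G = np.array([[-4.0,  2.0,  2.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  2.0, -3.0]])
n = len(G)

# Backward equation P' = G P with P_0 = I, flattened for the ODE solver.
def backward(t, p):
    return (G @ p.reshape(n, n)).ravel()

sol = solve_ivp(backward, (0.0, 2.0), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
P2 = sol.y[:, -1].reshape(n, n)
print(np.allclose(P2, expm(2.0 * G), atol=1e-6))   # True
```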
As with any matrix on $S$, the generator matrix $G$ defines left and right operations on functions that are analogous to ordinary matrix multiplication. The right operation is defined for functions in $\mathscr{B}$.
If $f \in \mathscr{B}$ then $Gf$ is given by $G f(x) = -\lambda(x) f(x) + \sum_{y \in S} \lambda(x) Q(x, y) f(y), \quad x \in S$
Proof
By definition, $G f(x) = \sum_{y \in S} G(x, y) f(y) = -\lambda(x) f(x) + \sum_{y \in S - \{x\}} \lambda(x) Q(x, y) f(y)$ In the second term, we can sum over all $y \in S$ since $\lambda(x) = 0$ if $x$ is absorbing and $Q(x, x) = 0$ if $x$ is stable. Note that $G f$ is well defined since $\sum_{y \in S-\{x\}} \lambda(x) Q(x, y) \left|f(y)\right| \le \sum_{y \in S-\{x\}} \lambda(x) Q(x, y) \|f\| = \lambda(x) \|f\|$
But note that $G f$ is not in $\mathscr{B}$ unless $\lambda \in \mathscr{B}$. Without this additional assumption, $G$ is a linear operator from the vector space $\mathscr{B}$ of bounded functions from $S$ to $\R$ into the vector space of all functions from $S$ to $\R$. We will return to this point in our next discussion.
Uniform Transition Semigroups
We can obtain stronger results for the generator matrix if we impose stronger continuity assumptions on $\bs{P}$.
The transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ is uniform if $P_t(x, x) \to 1$ as $t \downarrow 0$ uniformly in $x \in S$.
If $\bs{P}$ is uniform, then the operator function $t \mapsto P_t$ is continuous on the vector space $\mathscr{B}$.
Proof
The statement means that for $f \in \mathscr{B}$, the function $t \mapsto P_t f$ is continuous with respect to the supremum norm on $\mathscr{B}$. First, for $x \in S$ and $h \gt 0$, $\left|P_h f(x) - f(x)\right| \le [1 - P_h(x, x)] \left|f(x)\right| + \sum_{y \ne x} P_h(x, y) \left|f(y)\right| \le 2 \|f\| [1 - P_h(x, x)]$ and hence $\|P_h f - f\| \le 2 \|f\| \sup\{1 - P_h(x, x): x \in S\} \to 0$ as $h \downarrow 0$ by the uniformity assumption. Next, for $t \in [0, \infty)$ and $h \gt 0$, $\|P_{t+h} f - P_t f\| = \|P_t (P_h f - f)\| \le \|P_h f - f\|$ since $\|P_t\| = 1$, and similarly $\|P_t f - P_{t-h} f\| \le \|P_h f - f\|$ for $0 \lt h \le t$. Hence $t \mapsto P_t f$ is continuous.
As usual, we want to look at this new assumption from different points of view.
The following are equivalent:
1. The transition semigroup $\bs{P}$ is uniform.
2. The exponential parameter function $\lambda$ is bounded.
3. The generator matrix $G$ defines a bounded linear operator on $\mathscr{B}$.
Proof
From our remarks above we know that $\lambda \in \mathscr{B}$ if and only if the generator matrix $G$ defines a bounded linear operator on $\mathscr{B}$. So we just need to show the equivalence of (a) and (b). If $\lambda \in \mathscr{B}$ then $P_t(x, x) = \P(X_t = x \mid X_0 = x) \ge \P(\tau \gt t \mid X_0 = x) = \exp[-\lambda(x) t] \ge \exp(-\|\lambda\|t)$ The last term converges to 1 as $t \downarrow 0$ uniformly in $x$, so (b) implies (a). The converse, that (a) implies (b), is more technical, and we omit the proof.
So when the equivalent conditions are satisfied, the Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ is also said to be uniform. As we will see in a later section, a uniform, continuous-time Markov chain can be constructed from a discrete-time Markov chain and an independent Poisson process. For a uniform transition semigroup, we have a companion to the backward equation.
Suppose that $\bs{P}$ is a uniform transition semigroup. Then $t \mapsto P_t$ satisfies the Kolmogorov forward equation $P^\prime_t = P_t G$. Explicitly, $P^\prime_t(x,y) = -\lambda(y) P_t(x, y) + \sum_{z \in S} P_t(x, z) \lambda(z) Q(z, y), \quad (x, y) \in S^2$
The backward equation holds with more generality than the forward equation, since we only need the transition semigroup $\bs{P}$ to be standard rather than uniform. It would seem that we need stronger conditions on $\lambda$ for the forward equation to hold, for otherwise it's not even obvious that $\sum_{z \in S} P_t(x, z) \lambda(z) Q(z, y)$ is finite for $(x, y) \in S^2$. On the other hand, the forward equation is sometimes easier to solve than the backward equation, and the assumption that $\lambda$ is bounded is met in many applications (and of course holds automatically if $S$ is finite).
As a simple corollary, the transition matrices and the generator matrix commute for a uniform semigroup: $P_t G = G P_t$ for $t \in [0, \infty)$. The forward and backward equations formally look like the differential equations for the exponential function. This actually holds with the operator exponential.
Suppose again that $\bs{P} = \{P_t: t \in [0, \infty)\}$ is a uniform transition semigroup with generator $G$. Then $P_t = e^{t G} = \sum_{n=0}^\infty \frac{t^n}{n!} G^n, \quad t \in [0, \infty)$
Proof
First $e^{t G}$ is well defined as a bounded linear operator on $\mathscr{B}$ for $t \in [0, \infty)$ (and hence also simply as a matrix), since $G$ is a bounded linear operator on $\mathscr{B}$. Trivially $e^{0 G} = I$, and by basic properties of the matrix exponential, $\frac{d}{dt} e^{t G} = G e^{t G}, \quad t \in (0, \infty)$ It follows that $P_t = e^{t G}$ for $t \in [0, \infty)$.
We can characterize the generators of uniform transition semigroups. We just need the minimal conditions that the diagonal entries are nonpositive, the off-diagonal entries are nonnegative, and the row sums are 0.
Suppose that $G$ is a matrix on $S$ with $\|G\| \lt \infty$. Then $G$ is the generator of a uniform transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ if and only if for every $x \in S$,
1. $G(x, x) \le 0$ and $G(x, y) \ge 0$ for $y \in S - \{x\}$
2. $\sum_{y \in S} G(x, y) = 0$
Proof
We know of course that if $G$ is the generator of a uniform transition semigroup, then conditions (a) and (b) hold. For the converse, we can use the previous result. Let $P_t = e^{t G} = \sum_{n=0}^\infty \frac{t^n}{n!} G^n, \quad t \in [0, \infty)$ which makes sense since $G$ is bounded in norm. To see that $P_t(x, y) \ge 0$ for $(x, y) \in S^2$, let $r = \sup\{-G(x, x): x \in S\}$, so that $r \le \|G\| \lt \infty$. If $r = 0$ then $G = 0$ by (a) and (b), and trivially $P_t = I$. If $r \gt 0$ then $K = I + \frac{1}{r} G$ is a probability matrix by (a) and (b), and since $r I$ commutes with $r K$, $P_t = e^{t G} = e^{-r t} e^{r t K} = e^{-r t} \sum_{n=0}^\infty \frac{(r t)^n}{n!} K^n \ge 0$ By part (b), $\sum_{y \in S} G^n(x, y) = 0$ for every $x \in S$ and $n \in \N_+$, and hence $\sum_{y \in S} P_t(x, y) = \sum_{y \in S} I(x, y) = 1$ for $x \in S$. Finally, the semigroup property is a consequence of the law of exponents, which holds for the exponential of a matrix. $P_s P_t = e^{s G} e^{t G} = e^{(s+t) G} = P_{s+t}$
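The decomposition used in the proof, $G = r(K - I)$ with $K$ a probability matrix, is worth seeing numerically. In the Python sketch below (our illustration, with the same arbitrary generator as before), $e^{t G} = e^{-r t} e^{r t K}$, which is manifestly nonnegative:

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-4.0,  2.0,  2.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  2.0, -3.0]])

r = np.max(-np.diag(G))                        # r = sup_x lambda(x) = 4
K = np.eye(len(G)) + G / r                     # K is a probability matrix
print((K >= 0).all(), np.allclose(K.sum(axis=1), 1.0))   # True True

t = 1.3                                        # e^{tG} = e^{-rt} e^{rtK} >= 0
print(np.allclose(expm(t * G), np.exp(-r * t) * expm(r * t * K)))   # True
```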
Examples and Exercises
The Two-State Chain
Let $\bs{X} = \{X_t: t \in [0, \infty)\}$ be the Markov chain on the set of states $S = \{0, 1\}$, with transition rate $a \in [0, \infty)$ from 0 to 1 and transition rate $b \in [0, \infty)$ from 1 to 0. This two-state Markov chain was studied in the previous section. To avoid the trivial case with both states absorbing, we will assume that $a + b \gt 0$.
The generator matrix is $G = \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right]$
Show that for $t \in [0, \infty)$, $P_t = \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right] - \frac{1}{a + b} e^{-(a + b)t} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right]$
1. By solving the Kolmogorov backward equation.
2. By solving the Kolmogorov forward equation.
3. By computing $P_t = e^{t G}$.
You probably noticed that the forward equation is easier to solve because there is less coupling of terms than in the backward equation.
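Whichever method is used, the answer is easy to check numerically. A minimal Python sketch (our illustration, with the arbitrary rates $a = 2$ and $b = 3$) compares the closed form with the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

a, b = 2.0, 3.0                                # illustrative transition rates
G = np.array([[-a, a], [b, -b]])

def P(t):
    """Closed-form transition matrix of the two-state chain."""
    M0 = np.array([[b, a], [b, a]]) / (a + b)
    M1 = np.array([[-a, a], [b, -b]]) / (a + b)
    return M0 - np.exp(-(a + b) * t) * M1

for t in [0.0, 0.5, 2.0]:
    print(t, np.allclose(P(t), expm(t * G)))   # True for each t
```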
Define the probability density function $f$ on $S$ by $f(0) = \frac{b}{a + b}$, $f(1) = \frac{a}{a + b}$. Show that
1. $P_t \to \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right]$ as $t \to \infty$, the matrix with $f$ in both rows.
2. $f P_t = f$ for all $t \in [0, \infty)$, so that $f$ is invariant for $\bs{P}$.
3. $f G = 0$.
Computational Exercises
Consider the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1, 2\}$ with exponential parameter function $\lambda = (4, 1, 3)$ and embedded transition matrix $Q = \left[\begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} \ 1 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0\end{matrix}\right]$
1. Draw the state graph and classify the states.
2. Find the generator matrix $G$.
3. Find the transition matrix $P_t$ for $t \in [0, \infty)$.
4. Find $\lim_{t \to \infty} P_t$.
Answer
1. The edge set is $E = \{(0, 1), (0, 2), (1, 0), (2, 0), (2, 1)\}$. All states are stable.
2. The generator matrix is $G = \left[\begin{matrix} -4 & 2 & 2 \ 1 & -1 & 0 \ 1 & 2 & -3 \end{matrix}\right]$
3. For $t \in [0, \infty)$, $P_t = \frac{1}{15} \left[\begin{matrix} 3 + 12 e^{-5 t} & 10 - 10 e^{-3 t} & 2 - 12 e^{-5 t} + 10 e^{-3 t} \ 3 - 3 e^{-5 t} & 10 + 5 e^{-3 t} & 2 + 3 e^{-5t} - 5 e^{-3 t} \ 3 - 3 e^{-5 t} & 10 - 10 e^{-3 t} & 2 + 3 e^{-5 t} + 10 e^{-3 t} \end{matrix}\right]$
4. $P_t \to \frac{1}{15} \left[\begin{matrix} 3 & 10 & 2 \ 3 & 10 & 2 \ 3 & 10 & 2 \end{matrix}\right]$
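The answers in parts (c) and (d) can be verified numerically against the matrix exponential. A minimal Python sketch (our illustration):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-4.0,  2.0,  2.0],
              [ 1.0, -1.0,  0.0],
              [ 1.0,  2.0, -3.0]])

def P_closed(t):
    """The transition matrix from part (c)."""
    e5, e3 = np.exp(-5 * t), np.exp(-3 * t)
    return np.array([[3 + 12*e5, 10 - 10*e3, 2 - 12*e5 + 10*e3],
                     [3 -  3*e5, 10 +  5*e3, 2 +  3*e5 -  5*e3],
                     [3 -  3*e5, 10 - 10*e3, 2 +  3*e5 + 10*e3]]) / 15

for t in [0.1, 1.0, 4.0]:
    print(t, np.allclose(P_closed(t), expm(t * G)))    # True
print(expm(50.0 * G).round(6))                 # each row is close to (3, 10, 2)/15
```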
Special Models
Read the discussion of generator and transition matrices for chains subordinate to the Poisson process.
Read the discussion of the infinitesimal generator for continuous-time birth-death chains.
Read the discussion of the infinitesimal generator for continuous-time queuing chains.
Read the discussion of the infinitesimal generator for continuous-time branching chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
Preliminaries
This is the third of the introductory sections on continuous-time Markov chains. So our starting point is a time-homogeneous Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ defined on an underlying probability space $(\Omega, \mathscr{F}, \P)$ and with discrete state space $(S, \mathscr{S})$. Thus $S$ is countable and $\mathscr{S}$ is the power set of $S$, so every subset of $S$ is measurable, as is every function from $S$ into another measurable space. In addition, $S$ is given the discrete topology so that $\mathscr{S}$ can also be thought of as the Borel $\sigma$-algebra. Every function from $S$ to another topological space is continuous. Counting measure $\#$ is the natural measure on $(S, \mathscr{S})$, so in the context of the general introduction, integrals over $S$ are simply sums. Also, kernels on $S$ can be thought of as matrices, with rows and columns indexed by $S$, so the left and right kernel operations are generalizations of matrix multiplication. As before, let $\mathscr{B}$ denote the collection of bounded functions $f: S \to \R$. With the usual pointwise definitions of addition and scalar multiplication, $\mathscr{B}$ is a vector space. The supremum norm on $\mathscr{B}$ is given by $\|f\| = \sup\{\left|f(x)\right|: x \in S\}, \quad f \in \mathscr{B}$ Of course, if $S$ is finite, $\mathscr{B}$ is the set of all real-valued functions on $S$, and $\|f\| = \max\{\left|f(x)\right|: x \in S\}$ for $f \in \mathscr{B}$. The time space is $([0, \infty), \mathscr{T})$ where as usual, $\mathscr{T}$ is the Borel $\sigma$-algebra on $[0, \infty)$ corresponding to the standard Euclidean topology. Lebesgue measure is the natural measure on $([0, \infty), \mathscr{T})$.
In our first point of view, we studied $\bs{X}$ in terms of when and how the state changes. To review briefly, let $\tau = \inf\{t \in (0, \infty): X_t \ne X_0\}$. Assuming that $\bs{X}$ is right continuous, the Markov property of $\bs{X}$ implies the memoryless property of $\tau$, and hence the distribution of $\tau$ given $X_0 = x$ is exponential with parameter $\lambda(x) \in [0, \infty)$ for each $x \in S$. The assumption of right continuity rules out the pathological possibility that $\lambda(x) = \infty$, which would mean that $x$ is an instantaneous state so that $\P(\tau = 0 \mid X_0 = x) = 1$. On the other hand, if $\lambda(x) \in (0, \infty)$ then $x$ is a stable state, so that $\tau$ has a proper exponential distribution given $X_0 = x$ with $\P(0 \lt \tau \lt \infty \mid X_0 = x) = 1$. Finally, if $\lambda(x) = 0$ then $x$ is an absorbing state, so that $\P(\tau = \infty \mid X_0 = x) = 1$. Next we define a sequence of stopping times: First $\tau_0 = 0$ and $\tau_1 = \tau$. Recursively, if $\tau_n \lt \infty$ then $\tau_{n+1} = \inf\left\{t \gt \tau_n: X_t \ne X_{\tau_n}\right\}$, while if $\tau_n = \infty$ then $\tau_{n+1} = \infty$. With $M = \sup\{n \in \N: \tau_n \lt \infty\}$ we define $Y_n = X_{\tau_n}$ if $n \in \N$ with $n \le M$ and $Y_n = Y_M$ if $n \in \N$ with $n \gt M$. The sequence $\bs{Y} = (Y_0, Y_1, \ldots)$ is a discrete-time Markov chain on $S$ with one-step transition matrix $Q$ given by $Q(x, y) = \P(X_\tau = y \mid X_0 = x)$ if $x, \, y \in S$ with $x$ stable, and $Q(x, x) = 1$ if $x \in S$ is absorbing. Assuming that $\bs{X}$ is regular, which means that $\tau_n \to \infty$ as $n \to \infty$ with probability 1 (ruling out the explosion event of infinitely many transitions in finite time), the structure of $\bs{X}$ is completely determined by the sequence of stopping times $\bs{\tau} = (\tau_0, \tau_1, \ldots)$ and the embedded discrete-time jump chain $\bs{Y} = (Y_0, Y_1, \ldots)$. Analytically, the distribution of $\bs{X}$ is determined by the exponential parameter function $\lambda$ and the one-step transition matrix $Q$ of the jump chain.
In our second point of view, we studied $\bs{X}$ in terms of the collection of transition matrices $\bs{P} = \{P_t: t \in [0, \infty)\}$, where for $t \in [0, \infty)$, $P_t(x, y) = \P(X_t = y \mid X_0 = x), \quad (x, y) \in S^2$ The Markov and time-homogeneous properties imply the Chapman-Kolmogorov equations $P_s P_t = P_{s+t}$ for $s, \, t \in [0, \infty)$, so that $\bs{P}$ is a semigroup of transition matrices. The semigroup $\bs{P}$, along with the initial distribution of $X_0$, completely determines the distribution of $\bs{X}$. For a regular Markov chain $\bs{X}$, the fundamental integral equation connecting the two points of view is $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \int_0^t \lambda(x) e^{-\lambda(x) s} Q P_{t - s} (x, y) \, ds, \quad (x, y) \in S^2$ which is obtained by conditioning on $\tau$ and $X_\tau$. It then follows that the matrix function $t \mapsto P_t$ is differentiable, with the derivative satisfying the Kolmogorov backward equation $P_t^\prime = G P_t$ where the generator matrix $G$ is given by $G(x, y) = -\lambda(x) I(x, y) + \lambda(x) Q(x, y), \quad (x, y) \in S^2$ If the exponential parameter function $\lambda$ is bounded, then the transition semigroup $\bs{P}$ is uniform, which leads to stronger results. The generator $G$ is a bounded operator on $\mathscr{B}$, the backward equation holds as well as a companion forward equation $P_t^\prime = P_t G$, as operators on $\mathscr{B}$ (so with respect to the supremum norm rather than just pointwise). Finally, we can represent the transition matrix as an exponential: $P_t = e^{t G}$ for $t \in [0, \infty)$.
In this section, we study the Markov chain $\bs{X}$ in terms of a family of matrices known as potential matrices. This is the least intuitive of the three points of view, but analytically one of the best approaches. Essentially, the potential matrices are transforms of the transition matrices.
Basic Theory
We assume again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$ with transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$. Our first discussion closely parallels the general theory, except for simplifications caused by the discrete state space.
Definitions and Properties
For $\alpha \in [0, \infty)$, the $\alpha$-potential matrix $U_\alpha$ of $\bs{X}$ is defined as follows: $U_\alpha(x, y) = \int_0^\infty e^{-\alpha t} P_t(x, y) \, dt, \quad (x, y) \in S^2$
1. The special case $U = U_0$ is simply the potential matrix of $\bs{X}$.
2. For $(x, y) \in S^2$, $U(x, y)$ is the expected amount of time that $\bs{X}$ spends in $y$, starting at $x$.
3. The family of matrices $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$ is known as the resolvent of $\bs{X}$.
Proof
Since $t \mapsto P_t(x, y)$ is continuous, $U_\alpha(x, y)$ makes sense for $(x, y) \in S^2$. The interpretation of $U(x, y)$ involves an interchange of integrals: $U(x, y) = \int_0^\infty P_t(x, y) \, dt = \int_0^\infty \E[\bs{1}(X_t = y) \mid X_0 = x] \, dt = \E\left( \int_0^\infty \bs{1}(X_t = y) \, dt \biggm| X_0 = x\right)$ The inside integral is the Lebesgue measure of $\{t \in [0, \infty): X_t = y\}$.
It's quite possible that $U(x, y) = \infty$ for some $(x, y) \in S^2$, and knowing when this is the case is of considerable interest. If $f: S \to \R$ and $\alpha \ge 0$, then giving the right potential operator in its various forms, \begin{align*} U_\alpha f(x) & = \sum_{y \in S} U_\alpha(x, y) f(y) = \int_0^\infty e^{-\alpha t} P_t f(x) \, dt \ & = \int_0^\infty e^{-\alpha t} \sum_{y \in S} P_t(x, y) f(y) \, dt = \int_0^\infty e^{-\alpha t} \E[f(X_t) \mid X_0 = x] \, dt, \quad x \in S \end{align*} assuming, as always, that the sums and integrals make sense. This will be the case in particular if $f$ is nonnegative (although $\infty$ is a possible value), or as we will now see, if $f \in \mathscr{B}$ and $\alpha \gt 0$.
If $\alpha \gt 0$, then $U_\alpha(x, S) = \frac{1}{\alpha}$ for all $x \in S$.
Proof
For $x \in S$, $U_\alpha(x, S) = \int_0^\infty e^{-\alpha t} P_t(x, S) \, dt = \int_0^\infty e^{-\alpha t} dt = \frac{1}{\alpha}$
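When the state space is finite the exponential parameter function is automatically bounded, so $P_t = e^{t G}$ and the defining integral can be approximated numerically. Here is a minimal sketch in Python (the truncation point and step size are illustrative choices), using the three-state generator from the computational exercise below; each row sum of the approximation should be close to $1 / \alpha$.

```python
import numpy as np
from scipy.linalg import expm

# Generator of the three-state chain studied in the computational exercises below
G = np.array([[-4.0, 2.0, 2.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0,  2.0, -3.0]])

def potential(alpha, t_max=40.0, n=4001):
    """Approximate U_alpha = int_0^inf e^(-alpha t) P_t dt by a truncated
    trapezoid rule; P_t = expm(t G) since the state space is finite."""
    ts = np.linspace(0.0, t_max, n)
    vals = np.array([np.exp(-alpha * t) * expm(t * G) for t in ts])
    dt = ts[1] - ts[0]
    return dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

for alpha in [0.5, 1.0, 2.0]:
    print(alpha, potential(alpha).sum(axis=1))  # each row sum is close to 1/alpha
```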
It follows from the last theorem that for $\alpha \in (0, \infty)$, the right potential operator $U_\alpha$ is a bounded, linear operator on $\mathscr{B}$ with $\|U_\alpha\| = \frac{1}{\alpha}$. It also follows that $\alpha U_\alpha$ is a probability matrix. This matrix has a nice interpretation.
If $\alpha \gt 0$ then $\alpha U_\alpha (x, \cdot)$ is the conditional probability density function of $X_T$ given $X_0 = x$, where $T$ is independent of $\bs{X}$ and has the exponential distribution on $[0, \infty)$ with parameter $\alpha$.
Proof
Suppose that $(x, y) \in S^2$. The random time $T$ has PDF $f(t) = \alpha e^{-\alpha t}$ for $t \in [0, \infty)$. Hence, conditioning on $T$ gives $\P(X_T = y \mid X_0 = x) = \int_0^\infty \alpha e^{-\alpha t} \P(X_T = y \mid T = t, X_0 = x) \, dt$ But by the substitution rule and the assumption of independence, $\P(X_T = y \mid T = t, X_0 = x) = \P(X_t = y \mid T = t, X_0 = x) = \P(X_t = y \mid X_0 = x) = P_t(x, y)$ Substituting gives $\P(X_T = y \mid X_0 = x) = \int_0^\infty \alpha e^{-\alpha t} P_t(x, y) \, dt = \alpha U_\alpha(x, y)$
So $\alpha U_\alpha$ is a transition probability matrix, just as $P_t$ is a transition probability matrix, but corresponding to the random time $T$ (with $\alpha \in (0, \infty)$ as a parameter), rather than the deterministic time $t \in [0, \infty)$. The potential matrix can also be interpreted in economic terms. Suppose that we receive money at a rate of one unit per unit time whenever the process $\bs{X}$ is in a particular state $y \in S$. Then $U(x, y)$ is the expected total amount of money that we receive, starting in state $x \in S$. But money that we receive later is of less value to us now than money that we will receive sooner. Specifically, suppose that one monetary unit at time $t \in [0, \infty)$ has a present value of $e^{-\alpha t}$ where $\alpha \in (0, \infty)$ is the inflation factor or discount factor. Then $U_\alpha(x, y)$ is the total, expected, discounted amount that we receive, starting in $x \in S$. A bit more generally, suppose that $f \in \mathscr{B}$ and that $f(y)$ is the reward (or cost, depending on the sign) per unit time that we receive when the process is in state $y \in S$. Then $U_\alpha f(x)$ is the expected, total, discounted reward, starting in state $x \in S$.
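The interpretation of $\alpha U_\alpha(x, \cdot)$ as the probability density function of $X_T$ is easy to check by simulation. The following sketch (the chain, seed, and sample size are arbitrary choices; the chain is the three-state chain from the exercises below) runs the chain until an independent, exponentially distributed alarm rings and tallies the state at that instant; the empirical frequencies are compared with $\alpha U_\alpha(x, \cdot)$, computed from the inverse formula $U_\alpha = (\alpha I - G)^{-1}$ derived later in this section.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam = np.array([4.0, 1.0, 3.0])                          # exponential parameters
Q = np.array([[0, 1/2, 1/2], [1, 0, 0], [1/3, 2/3, 0]])  # jump transition matrix
G = -np.diag(lam) + np.diag(lam) @ Q                     # generator matrix

def state_at_alarm(x, alpha):
    """Run the chain from state x until an independent Exp(alpha) alarm rings;
    return the state occupied at that instant."""
    T, t, state = rng.exponential(1 / alpha), 0.0, x
    while True:
        t += rng.exponential(1 / lam[state])  # holding time in the current state
        if t > T:
            return state
        state = rng.choice(3, p=Q[state])     # jump according to Q

alpha, x, n = 2.0, 0, 100_000
counts = np.bincount([state_at_alarm(x, alpha) for _ in range(n)], minlength=3)
print(counts / n)                                        # empirical density of X_T
print(alpha * np.linalg.inv(alpha * np.eye(3) - G)[x])   # alpha * U_alpha(x, .)
```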
$\alpha U_\alpha \to I$ as $\alpha \to \infty$.
Proof
Note first that with a change of variables $s = \alpha t$, $\alpha U_\alpha = \int_0^\infty \alpha e^{-\alpha t} P_t \, dt = \int_0^\infty e^{-s} P_{s/\alpha} \, ds$ But for $s \in [0, \infty)$, $s / \alpha \to 0$ and hence $P_{s/\alpha} \to I$ as $\alpha \to \infty$. The result then follows from the dominated convergence theorem.
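For a finite chain the convergence is easy to observe numerically. A minimal sketch with hypothetical two-state rates $a = 2$, $b = 3$, again using the inverse representation $U_\alpha = (\alpha I - G)^{-1}$ established below:

```python
import numpy as np

a, b = 2.0, 3.0                                  # illustrative transition rates
G = np.array([[-a, a], [b, -b]])
I = np.eye(2)
for alpha in [1.0, 10.0, 100.0, 1000.0]:
    aU = alpha * np.linalg.inv(alpha * I - G)    # alpha * U_alpha
    print(alpha, np.abs(aU - I).max())           # deviation shrinks like 1/alpha
```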
If $f: S \to [0, \infty)$, then giving the left potential operator in its various forms, \begin{align*}f U_\alpha(y) & = \sum_{x \in S} f(x) U_\alpha(x, y) = \int_0^\infty e^{-\alpha t} f P_t (y) \, dt\ & = \int_0^\infty e^{-\alpha t} \left[\sum_{x \in S} f(x) P_t(x, y)\right] dt = \int_0^\infty e^{-\alpha t} \left[\sum_{x \in S} f(x) \P(X_t = y \mid X_0 = x) \right] dt, \quad y \in S \end{align*} In particular, suppose that $\alpha \gt 0$ and that $f$ is the probability density function of $X_0$. Then $f P_t$ is the probability density function of $X_t$ for $t \in [0, \infty)$, and hence from the last result, $\alpha f U_\alpha$ is the probability density function of $X_T$, where again, $T$ is independent of $\bs{X}$ and has the exponential distribution on $[0, \infty)$ with parameter $\alpha$. The family of potential kernels gives the same information as the family of transition kernels.
The resolvent $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$ completely determines the family of transition kernels $\bs{P} = \{P_t: t \in [0, \infty)\}$.
Proof
Note that for $(x, y) \in S^2$, the function $\alpha \mapsto U_\alpha(x, y)$ on $(0, \infty)$ is the Laplace transform of the function $t \mapsto P_t(x, y)$ on $[0, \infty)$. The Laplace transform of a continuous function determines the function uniquely.
Although not as intuitive from a probability viewpoint, the potential matrices are in some ways nicer than the transition matrices because of additional smoothness. In particular, the resolvent $\{U_\alpha: \alpha \in (0, \infty)\}$, along with the initial distribution, completely determines the finite dimensional distributions of the Markov chain $\bs{X}$. The potential matrices commute with the transition matrices and with each other.
Suppose that $\alpha, \, \beta, \, t \in [0, \infty)$. Then
1. $P_t U_\alpha = U_\alpha P_t = \int_0^\infty e^{-\alpha s} P_{s+t} ds$
2. $U_\alpha U_\beta = U_\beta U_\alpha = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s+t} ds \, dt$
Proof
The interchanges of matrix multiplication and integrals below are interchanges of sums and integrals, and are justified since the underlying integrands are nonnegative. The other tool used is the semigroup property of $\bs{P} = \{P_t: t \in [0, \infty)\}$. You may want to write out the proofs explicitly to convince yourself.
1. First, $U_\alpha P_t = \left(\int_0^\infty e^{-\alpha s} P_s \, ds\right) P_t = \int_0^\infty e^{-\alpha s} P_s P_t \, ds = \int_0^\infty e^{-\alpha s} P_{s+t} \, ds$ Similarly $P_t U_\alpha = P_t \int_0^\infty e^{-\alpha s} P_s \, ds = \int_0^\infty e^{-\alpha s} P_t P_s \, ds = \int_0^\infty e^{-\alpha s} P_{s+t} \, ds$
2. First $U_\alpha U_\beta = \left(\int_0^\infty e^{-\alpha s} P_s \, ds \right) \left(\int_0^\infty e^{-\beta t} P_t \, dt \right) = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_s P_t \, ds \, dt = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s+t} \, ds \, dt$ The other direction is similar.
The equations above are matrix equations, and so hold pointwise. The same identities hold for the right operators on the space $\mathscr{B}$ under the additional restriction that $\alpha \gt 0$ and $\beta \gt 0$. The fundamental equation that relates the potential kernels, known as the resolvent equation, is given in the next theorem:
If $\alpha, \, \beta \in [0, \infty)$ with $\alpha \le \beta$ then $U_\alpha = U_\beta + (\beta - \alpha) U_\alpha U_\beta$.
Proof
If $\alpha = \beta$ the equation is trivial, so assume $\alpha \lt \beta$. From the previous result, $U_\alpha U_\beta = \int_0^\infty \int_0^\infty e^{-\alpha s} e^{-\beta t} P_{s + t} \, dt \, ds$ The transformation $u = s + t, \, v = s$ maps $[0, \infty)^2$ one-to-one onto $\{(u, v) \in [0, \infty)^2: u \ge v\}$. The inverse transformation is $s = v, \, t = u - v$ with Jacobian $-1$. Hence we have \begin{align*} U_\alpha U_\beta & = \int_0^\infty \int_0^u e^{-\alpha v} e^{-\beta(u - v)} P_u \, dv \, du = \int_0^\infty \left(\int_0^u e^{(\beta - \alpha) v} dv\right) e^{-\beta u} P_u \, du \ & = \frac{1}{\beta - \alpha} \int_0^\infty \left[e^{(\beta - \alpha) u} - 1\right] e^{-\beta u} P_u du\ & = \frac{1}{\beta - \alpha}\left(\int_0^\infty e^{-\alpha u} P_u \, du - \int_0^\infty e^{-\beta u} P_u \, du\right) = \frac{1}{\beta - \alpha}\left(U_\alpha - U_\beta \right) \end{align*} Simplifying gives the result. Note that $U_\beta$ is finite since $\beta \gt 0$, so we don't have to worry about the dreaded indeterminate form $\infty - \infty$.
The equation above is a matrix equation, and so holds pointwise. The same identity holds for the right potential operators on the space $\mathscr{B}$, under the additional restriction that $\alpha \gt 0$.
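For a finite state space, the resolvent equation can be verified numerically in a few lines, once more using the representation $U_\alpha = (\alpha I - G)^{-1}$ derived below (the rates are the hypothetical ones from the previous sketch):

```python
import numpy as np

a, b = 2.0, 3.0
G = np.array([[-a, a], [b, -b]])
U = lambda alpha: np.linalg.inv(alpha * np.eye(2) - G)   # U_alpha, finite chain

alpha, beta = 0.5, 2.0                  # any 0 < alpha <= beta
lhs = U(alpha)
rhs = U(beta) + (beta - alpha) * U(alpha) @ U(beta)
print(np.allclose(lhs, rhs))            # True: the resolvent equation holds
```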
Connections with the Generator
Once again, assume that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a regular Markov chain on $S$ with transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$, infinitesimal generator $G$, resolvent $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$, exponential parameter function $\lambda$, and one-step transition matrix $Q$ for the jump chain. There are fundamental connections between the potential $U_\alpha$ and the generator matrix $G$, and hence between $U_\alpha$ and the function $\lambda$ and the matrix $Q$.
If $\alpha \in (0, \infty)$ then $I + G U_\alpha = \alpha U_\alpha$. In terms of $\lambda$ and $Q$, $U_\alpha(x, y) = \frac{1}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} Q U_\alpha(x, y), \quad (x, y) \in S^2$
Proof 1
First, $G U_\alpha = G \int_0^\infty e^{-\alpha t} P_t \, dt = \int_0^\infty e^{-\alpha t} G P_t \, dt = \int_0^\infty e^{-\alpha t} P_t^\prime \, dt$ Passing $G$ inside the integral is justified since $G P_t(x, y)$ is a sum with just one negative term for $(x, y) \in S^2$. The second identity in the displayed equation follows from the backward equation. Integrating by parts then gives $G U_\alpha = e^{-\alpha t} P_t \biggm|_0^\infty + \int_0^\infty \alpha e^{-\alpha t} P_t \, dt = -I + \alpha U_\alpha$
Proof 2
This proof uses the fundamental integral equation relating $\bs{P}$, $\lambda$, and $Q$ as well as the definition of $U_\alpha$ and interchanges of integrals. The interchange is justified since the integrand is nonnegative. So for $\alpha \in [0, \infty)$ and $(x, y) \in S^2$, \begin{align*} U_\alpha(x, y) & = \int_0^\infty e^{-\alpha t} P_t(x, y) \, dt \ & = \int_0^\infty e^{-\alpha t} \left[e^{-\lambda(x) t} I(x, y) + \lambda(x) e^{-\lambda(x) t} \int_0^t e^{\lambda(x) r} Q P_r(x, y) \, dr \right] dt \ & = I(x, y) \int_0^\infty e^{-[\alpha + \lambda(x)]t} dt + \lambda(x) \int_0^\infty \int_0^t e^{-[\alpha + \lambda(x)]t} e^{\lambda(x) r} Q P_r(x, y) \, dr \, dt \ & = \frac{1}{\alpha + \lambda(x)} I(x, y) + \lambda(x) \int_0^\infty \int_r^\infty e^{-[\alpha + \lambda(x)]t} e^{\lambda(x) r} Q P_r(x, y) \, dt \, dr \ & = \frac{1}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} \int_0^\infty e^{-[\alpha + \lambda(x)]r} e^{\lambda(x) r} QP_r(x, y) \, dr \ & = \frac{1}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} \int_0^\infty e^{-\alpha r} Q P_r (x, y) \, dr = \frac{1}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} Q U_\alpha(x, y) \end{align*}
Proof 3
Recall that $\alpha U_\alpha(x, y) = \P(X_T = y \mid X_0 = x)$ where $T$ is independent of $\bs{X}$ and has the exponential distribution with parameter $\alpha$. This proof works by conditioning on whether $T \lt \tau_1$ or $T \ge \tau_1$: $\alpha U_\alpha(x, y) = \P(X_T = y \mid X_0 = x, T \lt \tau_1) \P(T \lt \tau_1 \mid X_0 = x) + \P(X_T = y \mid X_0 = x, T \ge \tau_1) \P(T \ge \tau_1 \mid X_0 = x)$ But $X_0 = x$ and $T \lt \tau_1$ imply $X_T = x$ so $\P(X_T = y \mid X_0 = x, T \lt \tau_1) = I(x, y)$. And by a basic property of independent exponential variables that we have seen many times before, $\P(T \lt \tau_1 \mid X_0 = x) = \frac{\alpha}{\alpha + \lambda(x)}$ Next, for the first factor in the second term of the displayed equation, we condition on $X_{\tau_1}$: $\P(X_T = y \mid X_0 = x, T \ge \tau_1) = \sum_{z \in S} \P(X_T = y \mid X_0 = x, X_{\tau_1} = z, T \ge \tau_1) \P(X_{\tau_1} = z \mid X_0 = x, T \ge \tau_1)$ But by the strong Markov property, given $X_{\tau_1} = z$, we can restart the clock at time $\tau_1$ in state $z$. Moreover, by the memoryless property and independence, the distribution of $T - \tau_1$ given $T \ge \tau_1$ is the same as the distribution of $T$, namely exponential with parameter $\alpha$. It follows that $\P(X_T = y \mid X_0 = x, X_{\tau_1} = z, T \ge \tau_1) = \P(X_T = y \mid X_0 = z) = \alpha U_\alpha(z, y)$ Also, $X_{\tau_1}$ is independent of $\tau_1$ and $T$ so $\P(X_{\tau_1} = z \mid X_0 = x, T \ge \tau_1) = Q(x, z)$ Finally using the basic property of exponential distributions again, $\P(T \ge \tau_1 \mid X_0 = x) = \frac{\lambda(x)}{\alpha + \lambda(x)}$ Putting all the pieces together we have $\alpha U_\alpha(x, y) = \frac{\alpha}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} \sum_{z \in S} Q(x, z) \alpha U_\alpha(z, y) = \frac{\alpha}{\alpha + \lambda(x)} I(x, y) + \frac{\lambda(x)}{\alpha + \lambda(x)} Q \alpha U_\alpha (x, y)$
As before, we can get stronger results if we assume that $\lambda$ is bounded, or equivalently, the transition semigroup $\bs{P}$ is uniform.
Suppose that $\lambda$ is bounded and $\alpha \in (0, \infty)$. Then as operators on $\mathscr{B}$ (and hence also as matrices),
1. $I + G U_\alpha = \alpha U_\alpha$
2. $I + U_\alpha G = \alpha U_\alpha$
Proof
Since $\lambda$ is bounded, $G$ is a bounded operator on $\mathscr{B}$. The proof of (a) then proceeds as before. For (b) we know from the forward and backward equations that $G P_t = P_t G$ for $t \in [0, \infty)$ and hence $G U_\alpha = U_\alpha G$ for $\alpha \in (0, \infty)$.
As matrices, the equation in (a) holds with more generality than the equation in (b), much as the Kolmogorov backward equation holds with more generality than the forward equation. Note that $U_\alpha G(x, y) = \sum_{z \in S} U_\alpha(x, z) G(z, y) = -\lambda(y) U_\alpha(x, y) + \sum_{z \in S} U_\alpha(x, z) \lambda(z) Q(z, y), \quad (x, y) \in S^2$ If $\lambda$ is unbounded, it's not clear that the second sum is finite.
Suppose that $\lambda$ is bounded and $\alpha \in (0, \infty)$. Then as operators on $\mathscr{B}$ (and hence also as matrices),
1. $U_\alpha = (\alpha I - G)^{-1}$
2. $G = \alpha I - U_\alpha^{-1}$
Proof
1. This follows immediately from the previous result, since $U_\alpha (\alpha I - G) = I$ and $(\alpha I - G) U_\alpha = I$
2. This follows from (a): $\alpha I - G = U_\alpha^{-1}$ so $G = \alpha I - U_\alpha^{-1}$
So the potential operator $U_\alpha$ and the generator $G$ have a simple, elegant inverse relationship. Of course, these results hold in particular if $S$ is finite, so that all of the various matrices really are matrices in the elementary sense.
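For a finite chain, both parts of the inverse relationship can be checked directly. A short sketch using the three-state generator from the exercises below:

```python
import numpy as np

G = np.array([[-4.0, 2, 2], [1, -1, 0], [1, 2, -3]])
I = np.eye(3)
alpha = 1.5                                     # any alpha > 0

U_alpha = np.linalg.inv(alpha * I - G)          # (a): U_alpha = (alpha I - G)^{-1}
print(np.allclose(alpha * I - np.linalg.inv(U_alpha), G))  # (b): recovers G
print(np.allclose(U_alpha.sum(axis=1), 1 / alpha))         # row sums are 1/alpha
```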
Examples and Exercises
The Two-State Chain
Let $\bs{X} = \{X_t: t \in [0, \infty)\}$ be the Markov chain on the set of states $S = \{0, 1\}$, with transition rate $a \in [0, \infty)$ from 0 to 1 and transition rate $b \in [0, \infty)$ from 1 to 0. To avoid the trivial case with both states absorbing, we will assume that $a + b \gt 0$. The first two results below are a review from the previous two sections.
The generator matrix $G$ is $G = \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right]$
The transition matrix at time $t \in [0, \infty)$ is $P_t = \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right] - \frac{1}{a + b} e^{-(a + b)t} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right], \quad t \in [0, \infty)$
Now we can find the potential matrix in two ways.
For $\alpha \in (0, \infty)$, show that the potential matrix $U_\alpha$ is $U_\alpha = \frac{1}{\alpha (a + b)} \left[\begin{matrix} b & a \ b & a \end{matrix}\right] - \frac{1}{(\alpha + a + b)(a + b)} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right]$
1. From the definition.
2. From the relation $U_\alpha = (\alpha I - G)^{-1}$.
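As a partial check, the following sketch verifies symbolically (using sympy) that $(\alpha I - G)^{-1}$ agrees with the closed form stated above:

```python
import sympy as sp

a, b, alpha = sp.symbols('a b alpha', positive=True)
G = sp.Matrix([[-a, a], [b, -b]])
U = (alpha * sp.eye(2) - G).inv()               # U_alpha = (alpha I - G)^{-1}

# The closed form stated in the exercise
stated = (sp.Matrix([[b, a], [b, a]]) / (alpha * (a + b))
          - sp.Matrix([[-a, a], [b, -b]]) / ((alpha + a + b) * (a + b)))
print(sp.simplify(U - stated))                  # the zero matrix
```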
Computational Exercises
Consider the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1, 2\}$ with exponential parameter function $\lambda = (4, 1, 3)$ and jump transition matrix $Q = \left[\begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} \ 1 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0\end{matrix}\right]$
1. Draw the state graph and classify the states.
2. Find the generator matrix $G$.
3. Find the potential matrix $U_\alpha$ for $\alpha \in (0, \infty)$.
Answer
1. The edge set is $E = \{(0, 1), (0, 2), (1, 0), (2, 0), (2, 1)\}$. All states are stable.
2. The generator matrix is $G = \left[\begin{matrix} -4 & 2 & 2 \ 1 & -1 & 0 \ 1 & 2 & -3 \end{matrix}\right]$
3. For $\alpha \in (0, \infty)$, $U_\alpha = (\alpha I - G)^{-1} = \frac{1}{15 \alpha + 8 \alpha^2 + \alpha^3} \left[\begin{matrix} 3 + 4 \alpha + \alpha^2 & 10 + 2 \alpha & 2 + 2 \alpha \ 3 + \alpha & 10 + 7 \alpha + \alpha^2 & 2 \ 3 + \alpha & 10 + 2 \alpha & 2 + 5 \alpha + \alpha^2\end{matrix}\right]$
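A quick numerical spot check of the formula in part 3 (the test values of $\alpha$ are arbitrary):

```python
import numpy as np

G = np.array([[-4.0, 2, 2], [1, -1, 0], [1, 2, -3]])

def U_formula(a):
    den = 15*a + 8*a**2 + a**3
    return np.array([[3 + 4*a + a**2, 10 + 2*a,        2 + 2*a],
                     [3 + a,          10 + 7*a + a**2, 2],
                     [3 + a,          10 + 2*a,        2 + 5*a + a**2]]) / den

for a in [0.5, 1.0, 3.0]:
    print(np.allclose(U_formula(a), np.linalg.inv(a * np.eye(3) - G)))  # True
```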
Special Models
Read the discussion of potential matrices for chains subordinate to the Poisson process.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\cl}{\text{cl}}$
In this section, we study the limiting behavior of continuous-time Markov chains by focusing on two interrelated ideas: invariant (or stationary) distributions and limiting distributions. In some ways, the limiting behavior of continuous-time chains is simpler than the limiting behavior of discrete-time chains, in part because the complications caused by periodicity in the discrete-time case do not occur in the continuous-time case. Nonetheless as we will see, the limiting behavior of a continuous-time chain is closely related to the limiting behavior of the embedded, discrete-time jump chain.
Review
Once again, our starting point is a time-homogeneous, continuous-time Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ defined on an underlying probability space $(\Omega, \mathscr{F}, \P)$ and with discrete state space $(S, \mathscr{S})$. By definition, this means that $S$ is countable with the discrete topology, so that $\mathscr{S}$ is the $\sigma$-algebra of all subsets of $S$.
Let's review what we have so far. We assume that the Markov chain $\bs{X}$ is regular. Among other things, this means that the basic structure of $\bs{X}$ is determined by the transition times $\bs{\tau} = (\tau_0, \tau_1, \tau_2, \ldots)$ and the jump chain $\bs{Y} = (Y_0, Y_1, Y_2, \ldots)$. First, $\tau_0 = 0$ and $\tau_1 = \tau = \inf\{t \gt 0: X_t \ne X_0\}$. The time-homogeneous and Markov properties imply that the distribution of $\tau$ given $X_0 = x$ is exponential with parameter $\lambda(x) \in [0, \infty)$. Part of regularity is that $\bs{X}$ is right continuous so that there are no instantaneous states where $\lambda(x) = \infty$, which would mean $\P(\tau = 0 \mid X_0 = x) = 1$. On the other hand, $\lambda(x) \in (0, \infty)$ means that $x$ is a stable state so that $\tau$ has a proper exponential distribution given $X_0 = x$, with $\P(0 \lt \tau \lt \infty \mid X_0 = x) = 1$. Finally, $\lambda(x) = 0$ means that $x$ is an absorbing state so that $\P(\tau = \infty \mid X_0 = x) = 1$. The remaining transition times are defined recursively: $\tau_{n+1} = \inf\left\{t \gt \tau_n: X_t \ne X_{\tau_n}\right\}$ if $\tau_n \lt \infty$ and $\tau_{n+1} = \infty$ if $\tau_n = \infty$. Another component of regularity is that with probability 1, $\tau_n \to \infty$ as $n \to \infty$, ruling out the explosion event of infinitely many jumps in finite time. The jump chain $\bs{Y}$ is formed by sampling $\bs{X}$ at the transition times (until the chain is sucked into an absorbing state, if that happens). That is, with $M = \sup\{n: \tau_n \lt \infty\}$ and for $n \in \N$, we define $Y_n = X_{\tau_n}$ if $n \le M$ and $Y_n = X_{\tau_M}$ if $n \gt M$. Then $\bs{Y}$ is a discrete-time Markov chain with one-step transition matrix $Q$ given by $Q(x, y) = \P(X_\tau = y \mid X_0 = x)$ if $(x, y) \in S^2$ with $x$ stable and $Q(x, x) = 1$ if $x \in S$ is absorbing.
The transition matrix $P_t$ at time $t \in [0, \infty)$ is given by $P_t(x, y) = \P(X_t = y \mid X_0 = x)$ for $(x, y) \in S^2$. The time-homogeneous and Markov properties imply that the collection of transition matrices $\bs{P} = \{P_t: t \in [0, \infty)\}$ satisfies the Chapman-Kolmogorov equations $P_s P_t = P_{s+t}$ for $s, \, t \in [0, \infty)$, and hence is a semigroup of transition matrices. The transition semigroup $\bs{P}$ and the initial distribution of $X_0$ determine all of the finite-dimensional distributions of $\bs{X}$. Since there are no instantaneous states, $\bs{P}$ is standard, which means that $P_t \to I$ as $t \downarrow 0$ (as matrices, and so pointwise). The fundamental relationship between $\bs{P}$ on the one hand, and $\lambda$ and $Q$ on the other, is $P_t(x, y) = I(x, y) e^{-\lambda(x) t} + \int_0^t \lambda(x) e^{-\lambda(x) s} Q P_{t - s} (x, y) \, ds, \quad (x, y) \in S^2$ From this, it follows that the matrix function $t \mapsto P_t$ is differentiable (again, pointwise) and satisfies the Kolmogorov backward equation $\frac{d}{dt} P_t = G P_t$, where the infinitesimal generator matrix $G$ is given by $G(x, y) = -\lambda(x) I(x, y) + \lambda(x) Q(x, y)$ for $(x, y) \in S^2$. If we impose the stronger assumption that $\bs{P}$ is uniform, which means that $P_t \to I$ as $t \downarrow 0$ as operators on $\mathscr{B}$ (so with respect to the supremum norm), then the backward equation as well as the companion Kolmogorov forward equation $\frac{d}{dt} P_t = P_t G$ hold as operators on $\mathscr{B}$. In addition, we have the matrix exponential representation $P_t = e^{t G}$ for $t \in [0, \infty)$. The uniform assumption is equivalent to the exponential parameter function being bounded.
Finally, for $\alpha \in [0, \infty)$, the $\alpha$-potential matrix $U_\alpha$ of $\bs{X}$ is $U_\alpha = \int_0^\infty e^{-\alpha t} P_t \, dt$. The resolvent $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$ is the Laplace transform of $\bs{P}$ and hence gives the same information as $\bs{P}$. From this point of view, the time-homogeneous and Markov properties lead to the resolvent equation $U_\alpha = U_\beta + (\beta - \alpha) U_\alpha U_\beta$ for $\alpha, \, \beta \in (0, \infty)$ with $\alpha \le \beta$. For $\alpha \in (0, \infty)$, the $\alpha$-potential matrix is related to the generator by the fundamental equation $\alpha U_\alpha = I + G U_\alpha$. If $\bs{P}$ is uniform, then this equation, as well as the companion $\alpha U_\alpha = I + U_\alpha G$, holds as operators on $\mathscr{B}$, which leads to $U_\alpha = (\alpha I - G)^{-1}$.
Basic Theory
Relations and Classification
We start our discussion with relations among states and classifications of states. These are the same ones that we studied for discrete-time chains in our study of recurrence and transience, applied here to the jump chain $\bs{Y}$. But as we will see, the relations and classifications make sense for the continuous-time chain $\bs{X}$ as well. The discussion is complicated slightly when there are absorbing states. Only when $\bs{X}$ is in an absorbing state can we not interpret the values of $\bs{Y}$ as the values of $\bs{X}$ at the transition times (because of course, there are no transitions when $\bs{X}$ is in an absorbing state). But $x \in S$ is absorbing for the continuous-time chain $\bs{X}$ if and only if $x$ is absorbing for the jump chain $\bs{Y}$, so this trivial exception is easily handled.
For $y \in S$ let $\rho_y = \inf\{n \in \N_+: Y_n = y\}$, the (discrete) hitting time to $y$ for the jump chain $\bs{Y}$, where as usual, $\inf(\emptyset) = \infty$. That is, $\rho_y$ is the first positive (discrete) time that $\bs{Y}$ is in state $y$. The analogous random time for the continuous-time chain $\bs{X}$ is $\tau_{\rho_y}$, where naturally we take $\tau_\infty = \infty$. This is the first time that $\bs{X}$ is in state $y$, not counting the possible initial period in $y$. Specifically, suppose $X_0 = x$. If $x \ne y$ then $\tau_{\rho_y} = \inf\{t \gt 0: X_t = y\}$. If $x = y$ then $\tau_{\rho_y} = \inf\{t \gt \tau_1: X_t = y\}$.
Define the hitting matrix $H$ by $H(x, y) = \P(\rho_y \lt \infty \mid Y_0 = x), \quad (x, y) \in S^2$ Then $H(x, y) = \P\left(\tau_{\rho_y} \lt \infty \mid X_0 = x\right)$ except when $x$ is absorbing and $y = x$.
So for the continuous-time chain, if $x \in S$ is stable then $H(x, x)$ is the probability that, starting in $x$, the chain $\bs{X}$ returns to $x$ after its initial period in $x$. If $x, \, y \in S$ are distinct, then $H(x, y)$ is simply the probability that $\bs{X}$, starting in $x$, eventually reaches $y$. It follows that the basic relation among states makes sense for the continuous-time chain $\bs{X}$ as well as its jump chain $\bs{Y}$.
Define the relation $\to$ on $S^2$ by $x \to y$ if $x = y$ or $H(x, y) \gt 0$.
The leads to relation $\to$ is reflexive by definition: $x \to x$ for every $x \in S$. From our previous study of discrete-time chains, we know it's also transitive: if $x \to y$ and $y \to z$ then $x \to z$ for $x, \, y, \, z \in S$. We also know that $x \to y$ if and only if there is a directed path in the state graph from $x$ to $y$, if and only if $Q^n(x, y) \gt 0$ for some $n \in \N$. For the continuous-time transition matrices, we have a stronger result that in turn makes a stronger case that the leads to relation is fundamental for $\bs{X}$.
Suppose $(x, y) \in S^2$.
1. If $x \to y$ then $P_t(x, y) \gt 0$ for all $t \in (0, \infty)$.
2. If $x \not \to y$ then $P_t(x, y) = 0$ for all $t \in (0, \infty)$.
Proof
This result is proved in the section on transition matrices and generators.
This result is known as the Lévy dichotomy, and is named for Paul Lévy. Let's recall a couple of other definitions:
Suppose that $A$ is a nonempty subset of $S$.
1. $A$ is closed if $x \in A$ and $x \to y$ imply $y \in A$.
2. $A$ is irreducible if $A$ is closed and has no proper closed subset.
If $S$ is irreducible, we also say that the chain $\bs{X}$ itself is irreducible.
If $A$ is a nonempty subset of $S$, then $\cl(A) = \{y \in S: x \to y \text{ for some } x \in A\}$ is the smallest closed set containing $A$, and is called the closure of $A$.
Suppose that $A \subseteq S$ is closed. Then
1. $P^A_t$, the restriction of $P_t$ to $A \times A$, is a transition probability matrix on $A$ for every $t \in [0, \infty)$.
2. $\bs{X}$ restricted to $A$ is a continuous-time Markov chain with transition semigroup $\bs{P}^A = \left\{P^A_t: t \in [0, \infty)\right\}$.
Proof
1. If $x \in A$ and $y \notin A$, then $x$ does not lead to $y$ so in particular $P_t(x, y) = 0$. It follows that $\sum_{y \in A} P_t(x, y) = 1$ for $x \in A$ so $P^A_t$ is a transition probability matrix.
2. This follows from (a). If the chain starts in $A$, then the chain remains in $A$ for all time, and of course, the Markov property still holds.
Define the relation $\leftrightarrow$ on $S$ by $x \leftrightarrow y$ if $x \to y$ and $y \to x$ for $(x, y) \in S^2$.
The to and from relation $\leftrightarrow$ defines an equivalence relation on $S$ and hence partitions $S$ into mutually disjoint equivalence classes. Recall from our study of discrete-time chains that a closed set is not necessarily an equivalence class, nor is an equivalence class necessarily closed. However, an irreducible set is an equivalence class, but an equivalence class may not be irreducible. The importance of the relation $\leftrightarrow$ stems from the fact that many important properties of Markov chains (in discrete or continuous time) turn out to be class properties, shared by all states in an equivalence class. The following definition is fundamental, and once again, makes sense for either the continuous-time chain $\bs{X}$ or its jump chain $\bs{Y}$.
Let $x \in S$.
1. State $x$ is transient if $H(x, x) \lt 1$.
2. State $x$ is recurrent if $H(x, x) = 1$.
Recall from our study of discrete-time chains that if $x$ is recurrent and $x \to y$ then $y$ is recurrent and $y \to x$. Thus, recurrence and transience are class properties, shared by all states in an equivalence class.
Time Spent in a State
For $x \in S$, let $N_x$ denote the number of visits to state $x$ by the jump chain $\bs{Y}$, and let $T_x$ denote the total time spent in state $x$ by the continuous-time chain $\bs{X}$. Thus $N_x = \sum_{n=0}^\infty \bs{1}(Y_n = x), \quad T_x = \int_0^\infty \bs{1}(X_t = x) \, dt$ The expected values $R(x, y) = \E(N_y \mid Y_0 = x)$ and $U(x, y) = \E(T_y \mid X_0 = x)$ for $(x, y) \in S^2$ define the potential matrices of $\bs{Y}$ and $\bs{X}$, respectively. From our previous study of discrete-time chains, we know the distribution and mean of $N_y$ given $Y_0 = x$ in terms of the hitting matrix $H$. The next two results give a review:
Suppose that $x, \, y \in S$ are distinct. Then
1. $\P(N_y = n \mid Y_0 = y) = H^{n-1}(y, y)[1 - H(y, y)]$ for $n \in \N_+$
2. $\P(N_y = 0 \mid Y_0 = x) = 1 - H(x, y)$ and $\P(N_y = n \mid Y_0 = x) = H(x, y) H^{n-1}(y, y) [1 - H(y, y)]$ for $n \in \N_+$
Let's take cases. First suppose that $y$ is recurrent. In part (a), $\P(N_y = n \mid Y_0 = y) = 0$ for all $n \in \N_+$, and consequently $\P(N_y = \infty \mid Y_0 = y) = 1$. In part (b), $\P(N_y = n \mid Y_0 = x) = 0$ for $n \in \N_+$, and consequently $\P(N_y = 0 \mid Y_0 = x) = 1 - H(x, y)$ while $\P(N_y = \infty \mid Y_0 = x) = H(x, y)$. Suppose next that $y$ is transient. Part (a) specifies a proper geometric distribution on $\N_+$ while in part (b), probability $1 - H(x, y)$ is assigned to 0 and the remaining probability $H(x, y)$ is geometrically distributed over $\N_+$ as in (a). In both cases, $N_y$ is finite with probability 1. Next we consider the expected value, that is, the (discrete) potential. To state the results succinctly we will use the convention that $a / 0 = \infty$ if $a \gt 0$ and $0 / 0 = 0$.
Suppose again that $x, \, y \in S$ are distinct. Then
1. $R(y, y) = 1 \big/ [1 - H(y, y)]$
2. $R(x, y) = H(x, y) \big/ [1 - H(y, y)]$
Let's take cases again. If $y \in S$ is recurrent then $R(y, y) = \infty$, and for $x \in S$ with $x \ne y$, either $R(x, y) = \infty$ if $x \to y$ or $R(x, y) = 0$ if $x \not \to y$. If $y \in S$ is transient, $R(y, y)$ is finite, as is $R(x, y)$ for every $x \in S$ with $x \ne y$. Moreover, there is an inverse relationship of sorts between the potential and the hitting probabilities.
Naturally, our next goal is to find analogous results for the continuous-time chain $\bs{X}$. For the distribution of $T_y$ it's best to use the right distribution function.
Suppose that $x, \, y \in S$ are distinct. Then for $t \in [0, \infty)$
1. $\P(T_y \gt t \mid X_0 = y) = \exp\left\{-\lambda(y) [1 - H(y, y)] t\right\}$
2. $\P(T_y \gt t \mid X_0 = x) = H(x, y) \exp\{-\lambda(y) [1 - H(y, y)] t\}$
Proof
The proof is by conditioning on $N_y$.
1. First, if $H(y, y) = 1$ (so that $y$ is recurrent), then either $y$ is absorbing with $\P(\tau_1 = \infty \mid X_0 = y) = 1$ or $y$ is stable and recurrent, so that $\P(N_y = \infty \mid X_0 = y) = 1$. In the second case, starting in state $y$, $T_y$ is the sum of infinitely many independent variables, each with the exponential distribution with parameter $\lambda(y) \in (0, \infty)$. In both cases, $\P(T_y = \infty \mid X_0 = y) = 1$ and so $\P(T_y \gt t \mid X_0 = y) = 1$ for every $t \in [0, \infty)$. So suppose that $H(y, y) \lt 1$ so that $y$ is transient. Then $\P(T_y \gt t \mid X_0 = y) = \sum_{n=1}^\infty \P(T_y \gt t \mid X_0 = y, N_y = n) \P(N_y = n \mid X_0 = y)$ Given $N_y = n$, $T_y$ is the sum of $n$ independent variables, each having the exponential distribution with parameter $\lambda(y)$. So $T_y$ has the gamma distribution with parameters $n$ and $\lambda(y)$ and hence $\P(T_y \gt t \mid X_0 = y, N_y = n) = \sum_{k=0}^{n-1} e^{-\lambda(y) t} \frac{[\lambda(y) t]^k}{k!}$ From the previous result, $\P(N_y = n \mid X_0 = y) = \P(N_y = n \mid Y_0 = y) = H^{n-1}(y, y) [1 - H(y, y)]$. We substitute, change the order of summation, use geometric series and then exponential series: \begin{align*} \P(T_y \gt t \mid X_0 = y) & = \sum_{n=1}^\infty \left(\sum_{k=0}^{n-1} e^{-\lambda(y) t} \frac{[\lambda(y) t]^k}{k!}\right) H^{n-1}(y, y) [1 - H(y, y)] \ & = e^{-\lambda(y) t} [1 - H(y, y)] \sum_{k=0}^\infty \frac{[\lambda(y) t]^k}{k!} \sum_{n=k+1}^\infty H^{n-1}(y, y) \ & = e^{-\lambda(y) t} \sum_{k=0}^\infty \frac{[\lambda(y) t]^k}{k!} H^k(y, y) = e^{-\lambda(y) t} \exp[\lambda(y) H(y, y) t] \end{align*} Simplifying gives the result.
2. The proof is similar. If $H(y, y) = 1$ so that $y$ is recurrent, then starting in state $x$, either $T_y = 0$ if $N_y = 0$, which occurs with probability $1 - H(x, y)$ or $T_y = \infty$ if $N_y = \infty$, which occurs with probability $H(x, y)$. If $H(y, y) \lt 1$ so that $y$ is transient, then the result follows from conditioning on $N_y$ as in (a), except that $\P(T_y = 0 \mid X_0 = x) = \P(N_y = 0 \mid Y_0 = x) = 1 - H(x, y)$.
Let's take cases as before. Suppose first that $y$ is recurrent. In part (a), $\P(T_y \gt t \mid X_0 = y) = 1$ for every $t \in [0, \infty)$ and hence $\P(T_y = \infty \mid X_0 = y) = 1$. In part (b), $\P(T_y \gt t \mid X_0 = x) = H(x, y)$ for every $t \in [0, \infty)$ and consequently $\P(T_y = 0 \mid X_0 = x) = 1 - H(x, y)$ while $\P(T_y = \infty \mid X_0 = x) = H(x, y)$. Suppose next that $y$ is transient. From part (a), the distribution of $T_y$ given $X_0 = y$ is exponential with parameter $\lambda(y) [1 - H(y, y)]$. In part (b), the distribution assigns probability $1 - H(x, y)$ to 0 while the remaining probability $H(x, y)$ is exponentially distributed over $(0, \infty)$ as in (a). Taking expected value, we get a very nice relationship between the potential matrix $U$ of the continuous-time chain $\bs{X}$ and the potential matrix $R$ of the discrete-time jump chain $\bs{Y}$:
For every $(x, y) \in S^2$, $U(x, y) = \frac{R(x, y)}{\lambda(y)}$
Proof
If $y$ is recurrent, then $U(x, y) = R(x, y)$ and the common value is either 0 if $H(x, y) = 0$ or $\infty$ if $H(x, y) = 1$. So suppose that $y$ is transient. We can compute the expected value of $T_y$ by integrating the right distribution function in the previous theorem. In case $x = y$, we have $U(y, y) = \int_0^\infty \exp\{-\lambda(y)[1 - H(y, y)]t\} \, dt = \frac{1}{\lambda(y)[1 - H(y, y)]} = \frac{R(y, y)}{\lambda(y)}$ In the case that $x$ and $y$ are distinct, $U(x, y) = \int_0^\infty H(x, y)\exp\{-\lambda(y)[1 - H(y, y)]t\} \, dt = \frac{H(x, y)}{\lambda(y)[1 - H(y, y)]} = \frac{R(x, y)}{\lambda(y)}$
In particular, $y \in S$ is transient if and only if $R(x, y) \lt \infty$ for every $x \in S$, if and only if $U(x, y) \lt \infty$ for every $x \in S$. On the other hand, $y$ is recurrent if and only if $R(x, y) = U(x, y) = \infty$ if $x \to y$ and $R(x, y) = U(x, y) = 0$ if $x \not \to y$.
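Here is a numerical illustration of the identity $U(x, y) = R(x, y) / \lambda(y)$, using a small hypothetical chain on $S = \{0, 1, 2\}$ in which the chain moves $0 \to 1 \to 2$ and state 2 is absorbing. Starting at 0, the jump chain visits each of the transient states 0 and 1 exactly once, so $R(0, 0) = R(0, 1) = 1$:

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([1.0, 2.0, 0.0])                        # state 2 is absorbing
G = np.array([[-1.0, 1, 0], [0, -2, 2], [0, 0, 0]])    # moves 0 -> 1 -> 2

# U(0, y) = integral of P_t(0, y) dt, via a truncated trapezoid rule
ts = np.linspace(0.0, 40.0, 4001)
rows = np.array([expm(t * G)[0] for t in ts])
dt = ts[1] - ts[0]
U_row = dt * (rows[0] / 2 + rows[1:-1].sum(axis=0) + rows[-1] / 2)
print(U_row[:2])                        # approximately (1.0, 0.5)

# R(0, y) / lam(y) on the transient states: (1/1, 1/2) = (1.0, 0.5) — agrees
print(np.array([1.0, 1.0]) / lam[:2])
```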
Null and Positive Recurrence
Unlike transience and recurrence, the definitions of null and positive recurrence of a state $x \in S$ are different for the continuous-time chain $\bs{X}$ and its jump chain $\bs{Y}$. This is because these definitions depend on the expected hitting time to $x$, starting in $x$, and not just the finiteness of this hitting time. For $x \in S$, let $\nu(x) = \E(\rho_x \mid Y_0 = x)$, the expected (discrete) return time to $x$ starting in $x$. Recall that $x$ is positive recurrent for $\bs{Y}$ if $\nu(x) \lt \infty$ and $x$ is null recurrent if $x$ is recurrent but not positive recurrent, so that $H(x, x) = 1$ but $\nu(x) = \infty$. The definitions are similar for $\bs{X}$, but using the continuous hitting time $\tau_{\rho_x}$.
For $x \in S$, let $\mu(x) = 0$ if $x$ is absorbing and $\mu(x) = \E\left(\tau_{\rho_x} \mid X_0 = x\right)$ if $x$ is stable. So if $x$ is stable, $\mu(x)$ is the expected return time to $x$ starting in $x$ (after the initial period in $x$).
1. State $x$ is positive recurrent for $\bs{X}$ if $\mu(x) \lt \infty$.
2. State $x$ is null recurrent for $\bs{X}$ if $x$ is recurrent but not positive recurrent, so that $H(x, x) = 1$ but $\mu(x) = \infty$.
A state $x \in S$ can be positive recurrent for $\bs{X}$ but null recurrent for its jump chain $\bs{Y}$ or can be null recurrent for $\bs{X}$ but positive recurrent for $\bs{Y}$. But like transience and recurrence, positive and null recurrence are class properties, shared by all states in an equivalence class under the to and from equivalence relation $\leftrightarrow$.
Invariant Functions
Our next discussion concerns functions that are invariant for the transition matrix $Q$ of the jump chain $\bs{Y}$ and functions that are invariant for the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ of the continuous-time chain $\bs{X}$. For both discrete-time and continuous-time chains, there is a close relationship between invariant functions and the limiting behavior in time.
First let's recall the definitions. A function $f: S \to [0, \infty)$ is invariant for $Q$ (or for the chain $\bs{Y}$) if $f Q = f$. It then follows that $f Q^n = f$ for every $n \in \N$. In continuous time we must assume invariance at each time. That is, a function $f: S \to [0, \infty)$ is invariant for $\bs{P}$ (or for the chain $\bs{X}$) if $f P_t = f$ for all $t \in [0, \infty)$. Our interest is in nonnegative functions, because we can think of such a function as the density function, with respect to counting measure, of a positive measure on $S$. We are particularly interested in the special case that $f$ is a probability density function, so that $\sum_{x \in S} f(x) = 1$. If $Y_0$ has a probability density function $f$ that is invariant for $Q$, then $Y_n$ has probability density function $f$ for all $n \in \N$ and hence $\bs{Y}$ is stationary. Similarly, if $X_0$ has a probability density function $f$ that is invariant for $\bs{P}$ then $X_t$ has probability density function $f$ for every $t \in [0, \infty)$ and once again, the chain $\bs{X}$ is stationary.
Our first result shows that there is a one-to-one correspondence between invariant functions for $Q$ and zero functions for the generator $G$.
Suppose $f: S \to [0, \infty)$. Then $f G = 0$ if and only if $(\lambda f) Q = \lambda f$, so that $\lambda f$ is invariant for $Q$.
Proof
This is a simple consequence of the definition of the generator: $f G(y) = \sum_{x \in S} f(x) G(x, y) = -\lambda(y) f(y) + \sum_{x \in S} f(x) \lambda(x) Q(x, y), \quad y \in S$ or in functional form, $f G = - \lambda f + (\lambda f) Q$
If our chain $\bs{X}$ has no absorbing states, then $f: S \to [0, \infty)$ is invariant for $Q$ if and only if $(f / \lambda ) G = 0$.
Suppose that $f: S \to [0, \infty)$. Then $f$ is invariant for $\bs{P}$ if and only if $f G = 0$.
Proof 1
Assume that $\lambda$ is bounded, so that the transition semigroup $\bs{P}$ is uniform. Then $P_t = e^{t G}$ for $t \in [0, \infty)$. So if $f: S \to [0, \infty)$ then $f P_t = f (e^{t G}) = f \sum_{n=0}^\infty \frac{t^n}{n!} G^n = f + \sum_{n=1}^\infty \frac{t^n}{n!} f G^n$ Since $f$ is nonnegative, $f P_t = f$ if and only if $f G = 0$ (in which case $f G^n = 0$ for every $n \in \N_+$).
Proof 2
Suppose that $f P_t = f$ for $t \in [0, \infty)$. Then $\frac{d}{dt} (f P_t) = 0$ for $t \in [0, \infty)$. But using the Kolmogorov backward equation, $\frac{d}{dt} (f P_t) = f \frac{d}{dt} P_t = f G P_t = 0$. Letting $t = 0$ we conclude that $f G = 0$. Conversely, if $f G = 0$ then $\frac{d}{dt} (f P_t) = f \frac{d}{dt} P_t = f G P_t = 0$ for $t \in [0, \infty)$. It follows that $f P_t$ is constant in $t \in [0, \infty)$. Since $f P_0 = f$ it follows that $f P_t = f$ for all $t \in [0, \infty)$.
So putting the two main results together we see that $f$ is invariant for the continuous-time chain $\bs{X}$ if and only if $\lambda f$ is invariant for the jump chain $\bs{Y}$. Our next result shows how functions that are invariant for $\bs{X}$ are related to the resolvent $\bs{U} = \{U_\alpha: \alpha \in (0, \infty)\}$. To appreciate the result, recall that for $\alpha \in (0, \infty)$ the matrix $\alpha U_\alpha$ is a probability matrix, and in fact $\alpha U_\alpha(x, \cdot)$ is the conditional probability density function of $X_T$, given $X_0 = x$, where $T$ is independent of $\bs{X}$ and has the exponential distribution with parameter $\alpha$. So $\alpha U_\alpha$ is a transition matrix just as $P_t$ is a transition matrix, but corresponding to the exponentially distributed random time $T$ with parameter $\alpha \in (0, \infty)$ rather than the deterministic time $t \in [0, \infty)$.
Suppose that $f: S \to [0, \infty)$. If $f G = 0$ then $f (\alpha U_\alpha) = f$ for $\alpha \in (0, \infty)$. Conversely if $f (\alpha U_\alpha) = f$ for $\alpha \in (0, \infty)$ then $f G = 0$.
Proof
Recall that $I + G U_\alpha = \alpha U_\alpha$ for $\alpha \in (0, \infty)$. Hence if $f G = 0$ then $f (\alpha U_\alpha) = f + f G U_\alpha = f$ Conversely, suppose that $f (\alpha U_\alpha) = f$. Then $f G U_\alpha = \int_0^\infty e^{-\alpha t} f G P_t dt = 0$ As a function of $\alpha \in (0, \infty)$, the integral on the right side is the Laplace transform of the time function $t \mapsto f G P_t$. Hence we must have $f G P_t = 0$ for $t \in (0, \infty)$, and letting $t \downarrow 0$ gives $f G = 0$.
So extending our summary, $f: S \to [0, \infty)$ is invariant for the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$ if and only if $\lambda f$ is invariant for the jump transition matrices $\{Q^n: n \in \N\}$, if and only if $f G = 0$, if and only if $f$ is invariant for the collection of probability matrices $\{\alpha U_\alpha: \alpha \in (0, \infty)\}$. From our knowledge of the theory for discrete-time chains, we now have the following fundamental result:
Suppose that $\bs{X}$ is irreducible and recurrent.
1. There exists $g: S \to (0, \infty)$ that is invariant for $\bs{X}$.
2. If $f$ is invariant for $\bs{X}$, then $f = c g$ for some constant $c \in [0, \infty)$.
Proof
The result is trivial if $S$ consists of a single, necessarily absorbing, state. Otherwise, there are no absorbing states, since $\bs{X}$ is irreducible and so $\lambda(x) \gt 0$ for $x \in S$. From the result above, $f$ is invariant for $\bs{X}$ if and only if $\lambda f$ is invariant for $\bs{Y}$. But $\bs{Y}$ is also irreducible and recurrent, so we know that there exists a strictly positive function that is invariant for $\bs{Y}$, and every other function that is invariant for $\bs{Y}$ is a nonnegative multiple of this one. Hence the same is true for $\bs{X}$.
Invariant functions have a nice interpretation in terms of occupation times, an interpretation that parallels the discrete case. The potential gives the expected total time in a state, starting in another state, but here we need to consider the expected time in a state during a cycle that starts and ends in another state.
For $x \in S$, define the function $\gamma_x$ by $\gamma_x(y) = \E\left(\int_0^{\tau_{\rho_x}} \bs{1}(X_s = y) \, ds \biggm| X_0 = x\right), \quad y \in S$ so that $\gamma_x(y)$ is the expected occupation time in state $y$ before the first return to $x$, starting in $x$.
Suppose again that $\bs{X}$ is irreducible and recurrent. For $x \in S$,
1. $\gamma_x: S \to (0, \infty)$
2. $\gamma_x$ is invariant for $\bs X$
3. $\gamma_x(x) = 1 / \lambda(x)$
4. $\mu(x) = \sum_{y \in S} \gamma_x(y)$
Proof
As is often the case, the proof is based on results that we already have for the embedded jump chain. For $x \in S$, define $\delta_x(y) = \E\left(\sum_{n=0}^{\rho_x - 1} \bs{1}(Y_n = y) \biggm| Y_0 = x\right), \quad y \in S$ so that $\delta_x(y)$ is the expected number of visits to $y$ before the first return to $x$, starting in $x$, for the jump chain $\bs Y = (Y_0, Y_1, \ldots)$. Since $\bs X$ is irreducible and recurrent, so is $\bs Y$. From our results in the discrete case we know that
1. $\delta_x: S \to (0, \infty)$
2. $\delta_x$ is invariant for $\bs Y$
3. $\delta_x(x) = 1$
From our results above, it follows that the function $y \mapsto \delta_x(y) / \lambda(y)$ satisfies properties (a), (b), and (c) in the theorem. But each visit to $y$ by the jump chain $\bs Y$ has expected length $1 / \lambda(y)$ for the continuous-time chain $\bs X$. It follows that $\gamma_x(y) = \delta_x(y) / \lambda(y)$ for $x, \, y \in S$. By definition, $\gamma_x(y)$ is the expected occupation time in $y$ before the first return to $x$, starting in $x$. Hence, summing over $y \in S$ gives the expected return time to $x$, starting in $x$, so (d) holds.
So now we have some additional insight into positive and null recurrence for the continuous-time chain $\bs{X}$ and the associated jump chain $\bs{Y}$. Suppose again that the chains are irreducible and recurrent. There exists $g: S \to (0, \infty)$ that is invariant for $\bs{Y}$, and then $g / \lambda$ is invariant for $\bs{X}$. The invariant functions are unique up to multiplication by positive constants. The jump chain $\bs{Y}$ is positive recurrent if and only if $\sum_{x \in S} g(x) \lt \infty$ while the continuous-time chain $\bs{X}$ is positive recurrent if and only if $\sum_{x \in S} g(x) \big/ \lambda(x) \lt \infty$. Note that if $\lambda$ is bounded above and bounded away from 0 (boundedness above being equivalent to the transition semigroup $\bs{P}$ being uniform), then the two sums converge or diverge together, so $\bs{X}$ is positive recurrent if and only if $\bs{Y}$ is positive recurrent.
Suppose again that $\bs{X}$ is irreducible and recurrent.
1. If $\bs{X}$ is null recurrent then $\bs{X}$ does not have an invariant probability density function.
2. If $\bs{X}$ is positive recurrent then $\bs{X}$ has a unique, positive invariant probability density function.
Proof
From the previous result, there exists $g: S \to (0, \infty)$ that is invariant for $\bs{X}$, and every other invariant function is a nonnegative multiple of this one. The function $f$ given by $f(y) = \frac{g(y)}{\sum_{x \in S} g(x)}, \quad y \in S$ is uniquely defined (that is, unchanged if we replace $g$ by $c g$ where $c \gt 0$).
1. If $\sum_{x \in S} g(x) = \infty$ then $f(y) = 0$ for every $y \in S$.
2. If $\sum_{x \in S} g(x) \lt \infty$ then $f(y) \gt 0$ for every $y \in S$ and $\sum_{y \in S} f(y) = 1$.
Limiting Behavior
Our next discussion focuses on the limiting behavior of the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$. Our first result is a simple corollary of the result above for potentials.
If $y \in S$ is transient, then $P_t(x, y) \to 0$ as $t \to \infty$ for every $x \in S$.
Proof
This follows from the previous result. If $y \in S$ is transient, then for any $x \in S$, $U(x, y) = \int_0^\infty P_t(x, y) \, dt \lt \infty$ and so we must have $P_t(x, y) \to 0$ as $t \to \infty$.
So we should turn our attention to the recurrent states. The set of recurrent states partitions into equivalent classes under $\leftrightarrow$, and each of these classes is irreducible. Hence we can assume without loss of generality that our continuous-time chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ is irreducible and recurrent. To avoid trivialities, we will also assume that $S$ has at least two states. Thus, there are no absorbing states and so $\lambda(x) \gt 0$ for $x \in S$. Here is the main result.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is irreducible and recurrent. Then $f(y) = \lim_{t \to \infty} P_t(x, y)$ exists for each $y \in S$, independently of $x \in S$. The function $f$ is invariant for $\bs{X}$ and $f(y) = \frac{\gamma_x(y)}{\mu(x)}, \quad y \in S$
1. If $\bs{X}$ is null recurrent then $f(y) = 0$ for all $y \in S$.
2. If $\bs{X}$ is positive recurrent then $f(y) \gt 0$ for all $y \in S$ and $\sum_{y \in S} f(y) = 1$.
Proof sketch
The basic idea is that $\lim_{t \to \infty} P_t(x, y) = \lim_{t \to \infty} \frac{1}{t} \int_0^t P_s(x, y) ds$ The expression on the right is the limiting proportion of time spent in $y \in S$, starting in $x \in S$. This proportion is $\gamma_x(y) \big/ \mu(x)$, so the results follow from the theorem above.
The limiting function $f$ can be computed in a number of ways. First we find a function $g: S \to (0, \infty)$ that is invariant for $\bs{X}$. We can do this by solving
• $g P_t = g$ for $t \in (0, \infty)$
• $g G = 0$
• $g (\alpha U_\alpha) = g$ for $\alpha \in (0, \infty)$
• $h Q = h$ and then $g = h / \lambda$
The function $g$ is unique up to multiplication by positive constants. If $\sum_{x \in S} g(x) \lt \infty$, then we are in the positive recurrent case and so $f$ is simply $g$ normalized: $f(y) = \frac{g(y)}{\sum_{x \in S} g(x)}, \quad y \in S$
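For a finite chain each of these formulations is a small linear-algebra problem. A sketch (using the three-state chain from the computational exercise below) computes $g$ from the second and fourth formulations and checks that the normalized results agree:

```python
import numpy as np
from scipy.linalg import null_space

lam = np.array([4.0, 1.0, 3.0])
Q = np.array([[0, 1/2, 1/2], [1, 0, 0], [1/3, 2/3, 0]])
G = -np.diag(lam) + np.diag(lam) @ Q

# g G = 0: g spans the left null space of G
g = null_space(G.T).ravel()
f = g / g.sum()                          # normalize (positive recurrent case)
print(f)                                 # the invariant pdf of X

# h Q = h, then g = h / lam
h = null_space(Q.T - np.eye(3)).ravel()
f2 = (h / lam) / (h / lam).sum()
print(np.allclose(f, f2))                # True: the two methods agree
```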
The following result is known as the ergodic theorem for continuous-time Markov chains. It can also be thought of as a strong law of large numbers for continuous-time Markov chains.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is irreducible and positive recurrent, with (unique) invariant probability density function $f$. If $h: S \to \R$ then $\frac{1}{t} \int_0^t h(X_s) ds \to \sum_{x \in S} f(x) h(x) \text{ as } t \to \infty$ with probability 1, assuming that the sum on the right converges absolutely.
Notes
First, let $x, \, y \in S$ and let $h = \bs{1}_y$, the indicator function of $y$. Then given $X_0 = x$, $\frac{1}{t} \int_0^t h(X_s) ds$ is the average occupation time in state $y$, starting in state $x$, over the time interval $[0, t]$. In expected value, this is $\frac{1}{t} \int_0^t P_s(x, y) ds$ which we know converges to $f(y)$ as $t \to \infty$, independently of $x$. So in this special case, the ergodic theorem states that the convergence is with probability 1 also. A general function $h: S \to \R$ is a linear combination of the indicator functions of the points in $S$, so the ergodic theorem is plausible.
Note that no assumptions are made about $X_0$, so the limit is independent of the initial state. By now, this should come as no surprise. After a long period of time, the Markov chain $\bs{X}$ forgets about the initial state. Note also that $\sum_{x \in S} f(x) h(x)$ is the expected value of $h$, thought of as a random variable on $S$ with probability measure defined by $f$. On the other hand, $\frac{1}{t} \int_0^t h(X_s) ds$ is the average of the time function $s \mapsto h(X_s)$ on the interval $[0, t]$. So the ergodic theorem states that the limiting time average on the left is the same as the spatial average on the right.
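The ergodic theorem is easy to watch in simulation. The sketch below (the function $h$, the seed, and the run length are arbitrary choices) simulates the three-state chain from the exercises below over a long time interval and compares the time average of $h(X_s)$ with the spatial average $\sum_{x \in S} f(x) h(x)$:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
lam = np.array([4.0, 1.0, 3.0])
Q = np.array([[0, 1/2, 1/2], [1, 0, 0], [1/3, 2/3, 0]])
h = np.array([1.0, -2.0, 5.0])             # an arbitrary function on S

t_max, t, state, integral = 10_000.0, 0.0, 0, 0.0
while t < t_max:
    hold = rng.exponential(1 / lam[state])         # holding time in the state
    integral += h[state] * min(hold, t_max - t)    # accumulate h over the stay
    t += hold
    state = rng.choice(3, p=Q[state])              # jump according to Q

f = np.array([3.0, 10.0, 2.0]) / 15        # invariant pdf, from the exercise below
print(integral / t_max, f @ h)             # the two averages should be close
```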
Applications and Exercises
The Two-State Chain
The continuous-time, two-state chain has been studied in the last several sections. The following result puts the pieces together and completes the picture.
Consider the continuous-time Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1\}$ with transition rate $a \in (0, \infty)$ from 0 to 1 and transition rate $b \in (0, \infty)$ from 1 to 0. Give each of the following:
1. The transition matrix $Q^n$ for $\bs{Y}$ at $n \in \N$.
2. The infinitesimal generator $G$.
3. The transition matrix $P_t$ for $\bs{X}$ at $t \in [0, \infty)$.
4. The invariant probability density function for $\bs{Y}$.
5. The invariant probability density function for $\bs{X}$.
6. The limiting behavior of $Q^n$ as $n \to \infty$.
7. The limiting behavior of $P_t$ as $t \to \infty$.
Answer
Note that since the transition rates $a$ and $b$ are positive, the chain is irreducible.
1. First, $Q = \left[\begin{matrix} 0 & 1 \ 1 & 0 \end{matrix} \right]$ and then for $n \in \N$, $Q^n = Q$ if $n$ is odd and $Q^n = I$ if $n$ is even.
2. $G = \left[\begin{matrix} -a & a \ b & -b \end{matrix} \right]$.
3. $P_t = \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right] - \frac{1}{a + b} e^{-(a + b)t} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right]$ for $t \in [0, \infty)$.
4. $f_d = \left[\begin{matrix} \frac{1}{2} & \frac{1}{2} \end{matrix}\right]$
5. $f_c = \left[\begin{matrix} \frac{b}{a + b} & \frac{a}{a + b} \end{matrix} \right]$
6. As in (a), $Q^{2 n} = I$ and $Q^{2 n + 1} = Q$ for $n \in \N$. So there are two sub-sequential limits. The jump chain $\bs{Y}$ is periodic with period 2.
7. $P_t \to \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right]$ as $t \to \infty$. Each row is $f_c$.
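As a numerical sanity check (with hypothetical rates $a = 2$, $b = 3$): $f_c$ satisfies $f_c G = 0$, and $e^{t G}$ for large $t$ reproduces the limit in part (g):

```python
import numpy as np
from scipy.linalg import expm

a, b = 2.0, 3.0
G = np.array([[-a, a], [b, -b]])
f_c = np.array([b, a]) / (a + b)

print(np.allclose(f_c @ G, 0.0))           # f_c G = 0, so f_c is invariant
print(np.round(expm(50.0 * G), 6))         # both rows are approximately f_c
```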
Computational Exercises
The following continuous-time chain has also been studied in the previous three sections.
Consider the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1, 2\}$ with exponential parameter function $\lambda = (4, 1, 3)$ and jump transition matrix $Q = \left[\begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} \ 1 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0\end{matrix}\right]$
1. Recall the generator matrix $G$.
2. Find the invariant probability density function $f_d$ for $\bs{Y}$ by solving $f_d Q = f_d$.
3. Find the invariant probability density function $f_c$ for $\bs{X}$ by solving $f_c G = 0$.
4. Verify that $\lambda f_c$ is a multiple of $f_d$.
5. Describe the limiting behavior of $Q^n$ as $n \to \infty$.
6. Describe the limiting behavior of $P_t$ as $t \to \infty$.
7. Verify the result in (f) by recalling the transition matrix $P_t$ for $\bs{X}$ at $t \in [0, \infty)$.
Answer
1. $G = \left[\begin{matrix} -4 & 2 & 2 \ 1 & -1 & 0 \ 1 & 2 & -3 \end{matrix}\right]$
2. $f_d = \frac{1}{14} \left[\begin{matrix} 6 & 5 & 3 \end{matrix} \right]$
3. $f_c = \frac{1}{15} \left[\begin{matrix} 3 & 10 & 2 \end{matrix} \right]$
4. $\lambda f_c = \frac{1}{15} \left[\begin{matrix} 12 & 10 & 6\end{matrix} \right] = \frac{28}{15} f_d$
5. $Q^n \to \frac{1}{14} \left[\begin{matrix} 6 & 5 & 3 \ 6 & 5 & 3 \ 6 & 5 & 3 \end{matrix} \right]$ as $n \to \infty$
6. $P_t \to \frac{1}{15} \left[\begin{matrix} 3 & 10 & 2 \ 3 & 10 & 2 \ 3 & 10 & 2 \end{matrix}\right]$ as $t \to \infty$
7. $P_t = \frac{1}{15} \left[\begin{matrix} 3 + 12 e^{-5 t} & 10 - 10 e^{-3 t} & 2 - 12 e^{-5 t} + 10 e^{-3 t} \ 3 - 3 e^{-5 t} & 10 + 5 e^{-3 t} & 2 + 3 e^{-5t} - 5 e^{-3 t} \ 3 - 3 e^{-5 t} & 10 - 10 e^{-3 t} & 2 + 3 e^{-5 t} + 10 e^{-3 t} \end{matrix}\right]$ for $t \in [0, \infty)$
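A numerical check of parts (f) and (g) (the sample times are arbitrary): the explicit formula for $P_t$ agrees with $e^{t G}$, and for large $t$ every row of $P_t$ is close to $f_c$.

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-4.0, 2, 2], [1, -1, 0], [1, 2, -3]])

def P(t):
    e5, e3 = np.exp(-5 * t), np.exp(-3 * t)
    return np.array([[3 + 12*e5, 10 - 10*e3, 2 - 12*e5 + 10*e3],
                     [3 - 3*e5,  10 + 5*e3,  2 + 3*e5 - 5*e3],
                     [3 - 3*e5,  10 - 10*e3, 2 + 3*e5 + 10*e3]]) / 15

print(np.allclose(P(1.0), expm(1.0 * G)))   # the formula matches e^{tG}
print(np.round(P(10.0), 6))                 # rows approach (3, 10, 2)/15
```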
Special Models
Read the discussion of stationary and limiting distributions for chains subordinate to the Poisson process.
Read the discussion of stationary and limiting distributions for continuous-time birth-death chains.
Read the discussion of classification and limiting distributions for continuous-time queuing chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cl}{\text{cl}}$
Earlier, we studied time reversal of discrete-time Markov chains. In continuous time, the issues are basically the same. First, the Markov property, stated in the form that the past and future are independent given the present, essentially treats the past and future symmetrically. However, there is a lack of symmetry in the fact that in the usual formulation, we have an initial time 0, but not a terminal time. If we introduce a terminal time, then we can run the process backwards in time. In this section, we are interested in the following questions:
• Is the new process still Markov?
• If so, how are the various parameters of the reversed Markov chain related to those of the original chain?
• Under what conditions are the forward and backward Markov chains stochastically the same?
Consideration of these questions leads to reversed chains, an important and interesting part of the theory of continuous-time Markov chains. As always, we are also interested in the relationship between properties of a continuous-time chain and the corresponding properties of its discrete-time jump chain. In this section we will see that there are simple and elegant connections between the time reversal of a continuous-time chain and the time-reversal of the jump chain.
Basic Theory
Reversed Chains
Our starting point is a (homogeneous) continuous-time Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ with (countable) state space $S$. We will assume that $\bs X$ is irreducible, so that every state in $S$ leads to every other state, and to avoid trivialities, we will assume that there are at least two states. The irreducibility assumption involves no serious loss of generality since otherwise we could simply restrict our attention to an irreducible equivalence class of states. With our usual notation, we will let $\bs{P} = \{P_t: t \in [0, \infty)\}$ denote the semigroup of transition matrices of $\bs X$ and $G$ the infinitesimal generator. Let $\lambda(x)$ denote the exponential parameter for the holding time in state $x \in S$ and $Q$ the transition matrix for the discrete-time jump chain $\bs{Y} = (Y_0, Y_1, \ldots)$. Finally, let $\bs U = \{U_\alpha: \alpha \in [0, \infty)\}$ denote the collection of potential matrices of $\bs X$. We will assume that the chain $\bs X$ is regular, which gives us the following properties:
• $P_t(x, x) \to 1$ as $t \downarrow 0$ for $x \in S$.
• There are no instantaneous states, so $\lambda(x) \lt \infty$ for $x \in S$.
• The transition times $(\tau_1, \tau_2, \ldots)$ satisfy $\tau_n \to \infty$ as $n \to \infty$.
• We may assume that the chain $\bs X$ is right continuous and has left limits.
The assumption of regularity rules out various types of weird behavior that, while mathematically possible, are usually not appropriate in applications. If $\bs X$ is uniform, a stronger assumption than regularity, we have the following additional properties:
• $P_t(x, x) \to 1$ as $t \downarrow 0$ uniformly in $x \in S$.
• $\lambda$ is bounded.
• $P_t = e^{t G}$ for $t \in [0, \infty)$.
• $U_\alpha = (\alpha I - G)^{-1}$ for $\alpha \in (0, \infty)$.
Now let $h \in (0, \infty)$. We will think of $h$ as the terminal time or time horizon, so the chains in our first discussion will be defined on the time interval $[0, h]$. Notationally, we won't bother to indicate the dependence on $h$, since ultimately the time horizon won't matter. Define $\hat X_t = X_{h-t}$ for $t \in [0, h]$. Thus, the process forward in time is $\bs X = \{X_t: t \in [0, h]\}$ while the process backwards in time is $\hat{\bs X} = \{\hat X_t: t \in [0, h]\} = \{X_{h-t}: t \in [0, h]\}$ Similarly let $\hat{\mathscr F}_t = \sigma\{\hat X_s: s \in [0, t]\} = \sigma\{X_{h-s}: s \in [0, t]\} = \sigma\{X_r: r \in [h - t, h]\}, \quad t \in [0, h]$ So $\hat{\mathscr{F}}_t$ is the $\sigma$-algebra of events of the process $\hat{\bs X}$ up to time $t$, which, of course, is also the $\sigma$-algebra of events of $\bs X$ from time $h - t$ forward. Our first result is that the chain reversed in time is still Markov.
The process $\hat{\bs X } = \{\hat X_t: t \in [0, h]\}$ is a Markov chain, but is not time homogeneous in general. For $s, \, t \in [0, h]$ with $s \lt t$, the transition matrix from $s$ to $t$ is $\hat{P}_{s, t}(x, y) = \frac{\P(X_{h - t} = y)}{\P(X_{h - s} = x)} P_{t - s}(y, x), \quad (x, y) \in S^2$
Proof
Let $A \in \hat{\mathscr{F}}_s$ and $x, \, y \in S$. Then \begin{align*} \P(\hat X_t = y \mid \hat X_s = x, A) & = \frac{\P(\hat X_t = y, \hat X_s = x, A)}{\P(\hat X_s = x, A)} = \frac{\P(X_{h - t} = y, X_{h - s} = x, A)}{\P(X_{h - s} = x, A)} \ & = \frac{\P(A \mid X_{h - t} = y, X_{h - s} = x) \P(X_{h - s} = x \mid X_{h - t} = y) \P(X_{h - t} = y)}{\P(A \mid X_{h - s} = x) \P(X_{h - s} = x)} \end{align*} But $A \in \sigma\{X_r: r \in [h - s, h]\}$ and $h - t \lt h - s$, so by the Markov property for $\bs X$, $\P(A \mid X_{h - t} = y, X_{h - s} = x) = \P(A \mid X_{h - s} = x)$ By the time homogeneity of $\bs X$, $\P(X_{h - s} = x \mid X_{h - t} = y) = P_{t - s}(y, x)$. Substituting and simplifying gives $\P(\hat X_t = y \mid \hat X_s = x, A) = \frac{\P(X_{h - t} = y)}{\P(X_{h - s} = x)} P_{t - s}(y, x)$
However, the backwards chain will be time homogeneous if $X_0$ has an invariant distribution.
Suppose that $\bs X$ is positive recurrent, with (unique) invariant probability density function $f$. If $X_0$ has the invariant distribution, then $\hat{\bs X}$ is a time-homogeneous Markov chain. The transition matrix at time $t \in [0, \infty)$ (for every terminal time $h \ge t$), is given by $\hat P_t(x, y) = \frac{f(y)}{f(x)} P_t(y, x), \quad (x, y) \in S^2$
Proof
This follows from the result above. Recall that if $X_0$ has PDF $f$, then $X_{h - t}$ and $X_{h - s}$ also have PDF $f$.
The previous result holds in the limit of the terminal time, regardless of the initial distribution.
Suppose again that $\bs X$ is positive recurrent, with (unique) invariant probability density function $f$. Regardless of the distribution of $X_0$, $\P(\hat X_{s+t} = y \mid \hat X_s = x) \to \frac{f(y)}{f(x)} P_t(y, x) \text{ as } h \to \infty$
Proof
This follows from the conditional probability above and our study of the limiting behavior of continuous-time Markov chains. Since $\bs X$ is irreducible and positive recurrent, $\P(X_{h - s} = x) \to f(x)$ and $\P(X_{h - t} = y) \to f(y)$ as $h \to \infty$ for every $x, \, y \in S$.
These three results are motivation for the definition that follows. We can generalize by defining the reversal of an irreducible Markov chain, as long as there is a positive, invariant function. Recall that a positive invariant function defines a positive measure on $S$, but of course not in general a probability measure.
Suppose that $g: S \to (0, \infty)$ is invariant for $\bs X$. The reversal of $\bs X$ with respect to $g$ is the Markov chain $\hat{\bs X} = \{\hat X_t: t \in [0, \infty)\}$ with transition semigroup $\hat{\bs{P}}$ defined by $\hat P_t (x, y) = \frac{g(y)}{g(x)} P_t(y, x), \quad (x, y) \in S^2, \; t \in [0, \infty)$
Justification
We need to show that the definition makes sense, namely that $\hat{\bs P}$ defines a transition semigroup for a Markov chain $\hat{\bs X}$ satisfying the same assumptions that we have imposed on $\bs X$. First let $t \in [0, \infty)$. Since $g$ is invariant for $\bs X$, $\sum_{y \in S} \hat P_t(x, y) = \frac{1}{g(x)} \sum_{y \in S} g(y) P_t(y, x) = \frac{g(x)}{g(x)} = 1, \quad x \in S$ Hence $\hat P_t$ is a valid transition matrix. Next we show that the Chapman-Kolmogorov equations (the semigroup property) hold. Let $s, \, t \in [0, \infty)$ and $x, \, z \in S$. Then \begin{align*} \hat P_s \hat P_t (x, z) & = \sum_{y \in S} \hat P_s (x, y) \hat P_t (y, z) = \sum_{y \in S} \frac{g(y)}{g(x)} P_s(y, x) \frac{g(z)}{g(y)} P_t (z, y) \ & = \frac{g(z)}{g(x)} \sum_{y \in S} P_t(z, y) P_s (y, x) = \frac{g(z)}{g(x)} P_{s+t}(z, x) = \hat P_{s+t}(x, z) \end{align*} Next note that $\hat P_t(x, x) = P_t(x, x)$ for every $x \in S$. Hence $\hat P_t(x, x) \to 1$ as $t \downarrow 0$ for $x \in S$, so $\hat{\bs P}$ is also a standard transition semigroup. Note also that if $\bs P$ is uniform, then so is $\hat{\bs P}$. Finally, since $\bs X$ is irreducible, $P_t(x, y) \gt 0$ for every $(x, y) \in S^2$ and $t \in (0, \infty)$. Since $g$ is positive, it follows that $\hat P_t(y, x) \gt 0$ for every $(x, y) \in S^2$ and $t \in (0, \infty)$, and hence $\hat{\bs X}$ is also irreducible.
Recall that if $g$ is a positive invariant function for $\bs X$ then so is $c g$ for every constant $c \in (0, \infty)$. Note that $g$ and $c g$ generate the same reversed chain. So let's consider the cases:
Suppose again that $\bs X$ is a Markov chain satisfying the assumptions above.
1. If $\bs X$ is recurrent, then $\bs X$ always has a positive invariant function $g$, unique up to multiplication by positive constants. Hence the reversal of a recurrent chain $\bs X$ always exists and is unique, and so we can refer to the reversal of $\bs X$ without reference to the invariant function.
2. Even better, if $\bs X$ is positive recurrent, then there exists a unique invariant probability density function, and the reversal of $\bs X$ can be interpreted as the time reversal (relative to a time horizon) when $\bs X$ has the invariant distribution, as in the motivating result above.
3. If $\bs X$ is transient, then there may or may not exist a positive invariant function, and if one does exist, it may not be unique (up to multiplication by positive constants). So a transient chain may have no reversals or more than one.
Nonetheless, the general definition is natural, because most of the important properties of the reversed chain follow from the basic balance equation relating the transition semigroups $\bs{P}$ and $\hat{\bs{P}}$, and the invariant function $g$: $g(x) \hat P_t(x, y) = g(y) P_t(y, x), \quad (x, y) \in S^2, \; t \in [0, \infty)$ We will see the balance equation repeated for other objects associated with the Markov chains.
Suppose again that $g: S \to (0, \infty)$ is invariant for $\bs X$, and that $\hat{\bs X}$ is the time reversal of $\bs X$ with respect to $g$. Then
1. $g$ is also invariant for $\hat{\bs X}$.
2. $\bs X$ is the time reversal of $\hat{\bs X}$ with respect to $g$.
Proof
1. For $y \in S$, $g \hat P_t(y) = \sum_{x \in S} g(x) \hat P_t(x, y) = \sum_{x \in S} g(y) P_t(y, x) = g(y) \sum_{x \in S} P_t(y, x) = g(y)$
2. This follows from the symmetry of the fundamental equation: $g(x) \hat P_t(x, y) = g(y) P_t (y, x)$ for $(x, y) \in S^2$ and $t \in [0, \infty)$.
In the balance equation for the transition semigroups, it's not really necessary to know a priori that the function $g$ is invariant, if we know the two transition semigroups.
Suppose that $g: S \to (0, \infty)$. Then $g$ is invariant and the Markov chains $\bs X$ and $\hat{\bs X}$ are time reversals with respect to $g$ if and only if $g(x) \hat P_t(x, y) = g(y) P_t(y, x), \quad (x, y) \in S^2, \; t \in [0, \infty)$
Proof
All that is left to show is that the balance equation implies that $g$ is invariant. The computation is exactly the same as in the last result: $g P_t(x) = \sum_{y \in S} g(y) P_t(y, x) = \sum_{y \in S} g(x) \hat P_t (x, y) = g(x) \sum_{y \in S} \hat P_t(x, y) = g(x), \quad x \in S$
Here is a slightly more complicated (but equivalent) version of the balance equation for the transition probabilities.
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and the chains $\bs X$ and $\hat{\bs X}$ are time reversals with respect to $g$ if and only if $g(x_1) \hat P_{t_1}(x_1, x_2) \hat P_{t_2}(x_2, x_3) \cdots \hat P_{t_n}(x_n, x_{n+1}) = g(x_{n+1}) P_{t_n}(x_{n+1}, x_n) P_{t_{n-1}}(x_n, x_{n-1}) \cdots P_{t_1}(x_2, x_1)$ for all $n \in \N_+$, $(t_1, t_2, \ldots, t_n) \in [0, \infty)^n$, and $(x_1, x_2, \ldots, x_{n+1}) \in S^{n+1}$.
Proof
All that is necessary is to show that the basic balance equation implies the balance equation in the theorem. When $n = 1$, we have the basic balance equation itself: $g(x_1) \hat P_{t_1}(x_1, x_2) = g(x_2) P_{t_1}(x_2, x_1)$ For $n = 2$, $g(x_1) \hat P_{t_1}(x_1, x_2) \hat P_{t_2}(x_2, x_3) = g(x_2)P_{t_1}(x_2, x_1) \hat P_{t_2}(x_2, x_3) = g(x_3)P_{t_2}(x_3, x_2) P_{t_1}(x_2, x_1)$ Continuing in this manner (or using induction) gives the general result.
The balance equation also holds for the potential matrices.
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and the chains $\bs X$ and $\hat{\bs X}$ are time reversals with respect to $g$ if and only if the potential matrices satisfy $g(x) \hat U_\alpha(x, y) = g(y) U_\alpha(y, x), \quad (x, y) \in S^2, \; \alpha \in [0, \infty)$
Proof
We just need to show that the balance equation for the transition semigroups is equivalent to the balance equation above for the potential matrices. Suppose first that $g(x) \hat P_t(x, y) = g(y) P_t(y, x)$ for $t \in [0, \infty)$ and $(x, y) \in S^2$. Then \begin{align*} g(x) \hat U_\alpha(x, y) & = g(x) \int_0^\infty e^{-\alpha t} \hat P_t(x, y) dt = \int_0^\infty e^{-\alpha t} g(x) \hat P_t(x, y) dt \ & = \int_0^\infty e^{-\alpha t} g(y) P_t(y, x) dt = g(y) \int_0^\infty e^{-\alpha t} P_t(y, x) dt = g(y) U_\alpha(y, x) \end{align*} Conversely, suppose that $g(x) \hat U_\alpha(x, y) = g(y) U_\alpha(y, x)$ for $(x, y) \in S^2$ and $\alpha \in [0, \infty)$. As above, $g(x) \hat U_\alpha(x, y) = \int_0^\infty e^{-\alpha t} g(x) \hat P_t(x, y) dt$ So for fixed $(x, y) \in S^2$, the function $\alpha \mapsto g(x) \hat U_\alpha(x, y)$ is the Laplace transform of the time function $t \mapsto g(x) \hat P_t(x, y)$. Similarly, $\alpha \mapsto g(y) U_\alpha(y, x)$ is the Laplace transform of $t \mapsto g(y) P_t(y, x)$. The Laplace transform of a continuous function uniquely determines the function, so it follows that $g(x) \hat P_t(x, y) = g(y) P_t(y, x)$ for $t \in [0, \infty)$ and $(x, y) \in S^2$.
As a corollary, continuous-time chains that are time reversals are of the same type.
If $\bs X$ and $\hat{\bs X}$ are time reversals, then $\bs X$ and $\hat{\bs X}$ are of the same type: transient, null recurrent, or positive recurrent.
Proof
Suppose that $\bs X$ and $\hat{\bs X}$ are time reversals with respect to the invariant function $g: S \to (0, \infty)$. Then from the previous result, $\hat U(x, x) = U(x, x)$ for $x \in S$. The chains are transient if the common potential is finite for each $x \in S$ and recurrent if the potential is infinite for each $x \in S$. Suppose that the chains are recurrent. Then $g$ is unique up to multiplication by positive constants, and the chains are both positive recurrent if $\sum_{x \in S} g(x) \lt \infty$ and both null recurrent if $\sum_{x \in S} g(x) = \infty$.
The balance equation extends to the infinitesimal generator matrices.
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and the Markov chains $\bs X$ and $\hat{\bs X}$ are time reversals if and only if the infinitesimal generators satisfy $g(x) \hat G(x, y) = g(y) G(y, x), \quad (x, y) \in S^2$
Proof
We need to show that the balance equation for the transition semigroups is equivalent to the balance equation for the generators. Suppose first that $g(x) \hat P_t(x, y) = g(y) P_t(y, x)$ for $t \in [0, \infty)$ and $(x, y) \in S^2$. Taking derivatives with respect to $t$ and using Kolmogorov's backward equation gives $g(x) \hat G \hat P_t(x, y) = g(y) G P_t(y, x)$ for $t \in [0, \infty)$ and $(x, y) \in S^2$. Evaluating at $t = 0$ gives $g(x) \hat G(x, y) = g(y) G(y, x)$. Conversely, suppose that $g(x) \hat G(x, y) = g(y) G(y, x)$ for $(x, y) \in S^2$. Then repeated application (or induction) shows that $g(x) \hat G^n(x, y) = g(y) G^n(y, x)$ for every $n \in \N$ and $(x, y) \in S^2$. If the transition matrices are uniform, we can express them as exponentials of the generators. Hence for $t \in [0, \infty)$ and $(x, y) \in S^2$, \begin{align*} g(x) \hat P_t(x, y) & = g(x) e^{t \hat G}(x, y) = g(x) \sum_{n=0}^\infty \frac{t^n}{n!} \hat G^n(x, y) = \sum_{n=0}^\infty \frac{t^n}{n!} g(x) \hat G^n(x, y)\ & = \sum_{n=0}^\infty \frac{t^n}{n!} g(y) G^n(y, x) = g(y) \sum_{n=0}^\infty \frac{t^n}{n!} G^n(y, x) = g(y) e^{t G}(y, x) = g(y) P_t(y, x) \end{align*}
This leads to further results and connections:
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ and $\hat{\bs X}$ are time reversals with respect to $g$ if and only if
1. $\hat{\bs X}$ and $\bs X$ have the same exponential parameter function $\lambda$.
2. The jump chains $\bs Y$ and $\hat{\bs Y}$ are (discrete) time reversals with respect to $\lambda g$.
Proof
The exponential parameter functions are related to the generator matrices by $\lambda(x) = - G(x, x)$ and $\hat \lambda(x) = -\hat G(x, x)$ for $x \in S$. The transition matrices for the jump chains are related to the generator matrices by $Q(x, y) = G(x, y) / \lambda(x)$ and $\hat Q(x, y) = \hat G(x, y) / \hat \lambda(x)$ for $(x, y) \in S^2$ with $x \ne y$. Hence conditions (a) and (b) are equivalent to $g(x) \hat G(x, y) = g(y) G(y, x), \quad (x, y) \in S^2$ Recall also from the general theory, that if $g$ is invariant for $\bs X$ then $\lambda g$ is invariant for the jump chain $\bs Y$.
In our original discussion of time reversal in the positive recurrent case, we could have argued that the previous results must be true. If we run the positive recurrent chain $\bs X = \{X_t: t \in [0, h]\}$ backwards in time to obtain the time reversed chain $\hat{\bs X} = \{\hat X_t: t \in [0, h]\}$, then the exponential parameters for $\hat{\bs X}$ must be the same as those for $\bs X$, and the jump chain $\hat{\bs Y}$ for $\hat{\bs X}$ must be the time reversal of the jump chain $\bs Y$ for $\bs X$.
Reversible Chains
Clearly an interesting special case is when the time reversal of a continuous-time Markov chain is stochastically the same as the original chain. Once again, we assume that we have a regular Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ that is irreducible on the state space $S$, with transition semigroup $\bs P = \{P_t: t \in [0, \infty)\}$. As before, $\bs U = \{U_\alpha: \alpha \in [0, \infty)\}$ denotes the collection of potential matrices, and $G$ the infinitesimal generator. Finally, $\lambda$ denotes the exponential parameter function, $\bs Y = \{Y_n: n \in \N\}$ the jump chain, and $Q$ the transition matrix of $\bs Y$. Here is the definition of reversibility:
Suppose that $g: S \to (0, \infty)$ is invariant for $\bs X$. Then $\bs X$ is reversible with respect to $g$ if the time reversed chain $\hat{\bs X} = \{\hat X_t: t \in [0, \infty)\}$ also has transition semigroup $\bs P$. That is, $g(x) P_t(x, y) = g(y) P_t(y, x), \quad (x, y) \in S^2, \; t \in [0, \infty)$
Clearly if $\bs X$ is reversible with respect to $g$ then $\bs X$ is reversible with respect to $c g$ for every $c \in (0, \infty)$. So here is another review of the cases:
Suppose that $\bs X$ is a Markov chain satisfying the assumptions above.
1. If $\bs X$ is recurrent, then there exists an invariant function $g: S \to (0, \infty)$ that is unique up to multiplication by positive constants. So $\bs X$ is either reversible or not, and we do not have to reference the invariant function.
2. Even better, if $\bs X$ is positive recurrent, then there exists a unique invariant probability density function $f$. Again, $\bs X$ is either reversible or not, but if it is, then with the invariant distribution, the chain $\bs X$ is stochastically the same, forward in time or backward in time.
3. If $\bs X$ is transient, then a positive invariant function may or may not exist. If such a function does exist, it may not be unique, up to multiplication by positive constants. So in the transient case, $\bs X$ may be reversible with respect to one invariant function but not with respect to others.
The following results are corollaries of the results above for time reversals. First, we don't need to know a priori that the function $g$ is invariant.
Suppose that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if $g(x) P_t(x, y) = g(y) P_t(y, x), \quad (x, y) \in S^2, \; t \in [0, \infty)$
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if $g(x_1) P_{t_1}(x_1, x_2) P_{t_2}(x_2, x_3) \cdots P_{t_n}(x_n, x_{n+1}) = g(x_{n+1}) P_{t_n}(x_{n+1}, x_n) P_{t_{n-1}}(x_n, x_{n-1}) \cdots P_{t_1}(x_2, x_1)$ for all $n \in \N_+$, $(t_1, t_2, \ldots, t_n) \in [0, \infty)^n$, and $(x_1, x_2, \ldots, x_{n+1}) \in S^{n+1}$.
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if $g(x) U_\alpha(x, y) = g(y) U_\alpha(y, x), \quad (x, y) \in S^2, \; \alpha \in [0, \infty)$
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible with respect to $g$ if and only if $g(x) G(x, y) = g(y) G(y, x), \quad (x, y) \in S^2$
Suppose again that $g: S \to (0, \infty)$. Then $g$ is invariant and $\bs X$ is reversible if and only if the jump chain $\bs Y$ is reversible with respect to $\lambda g$.
Recall that $\bs X$ is recurrent if and only if the jump chain $\bs Y$ is recurrent. In this case, the invariant functions for $\bs X$ and $\bs Y$ exist and are unique up to multiplication by positive constants. So in this case, the previous theorem states that $\bs X$ is reversible if and only if $\bs Y$ is reversible. In the positive recurrent case (the most important case), the following theorem gives a condition for reversibility that does not directly reference the invariant distribution. The condition is known as the Kolmogorov cycle condition, and is named for Andrei Kolmogorov.
Suppose that $\bs X$ is positive recurrent. Then $\bs X$ is reversible if and only if for every sequence of distinct states $(x_1, x_2, \ldots, x_n)$, $G(x_1, x_2) G(x_2, x_3) \cdots G(x_{n-1}, x_n) G(x_n, x_1) = G(x_1, x_n) G(x_n, x_{n-1}) \cdots G(x_3, x_2) G(x_2, x_1)$
Proof
Suppose that $\bs X$ is reversible, and let $f$ denote the invariant PDF of $\bs X$. Then $G(x, y) = \frac{f(y)}{f(x)} G(y, x)$ for $(x, y) \in S^2$. Substituting gives the Kolmogorov cycle condition. Conversely, suppose that the Kolmogorov cycle condition holds for $\bs X$. Recall that $G(x, y) = \lambda(x) Q(x, y)$ for $(x, y) \in S^2$ with $x \ne y$. Substituting into the cycle condition for $\bs X$ gives the cycle condition for the jump chain $\bs Y$. Hence $\bs Y$ is reversible and therefore so is $\bs X$.
Note that the Kolmogorov cycle condition states that the transition rate of visiting states $(x_2, x_3, \ldots, x_n, x_1)$ in sequence, starting in state $x_1$ is the same as the transition rate of visiting states $(x_n, x_{n-1}, \ldots, x_2, x_1)$ in sequence, starting in state $x_1$. The cycle condition is also known as the balance equation for cycles.
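The cycle condition is easy to check numerically when the state space is small. The following Python sketch is an illustration added here, not part of the formal development; both generator matrices are hypothetical examples chosen only to show the two possible outcomes.

```python
# Brute-force check of the Kolmogorov cycle condition for a generator matrix G.
# Cycles of length 1 and 2 hold trivially, so only longer cycles are tested.
import itertools
import numpy as np

def cycle_condition_holds(G, tol=1e-10):
    n = G.shape[0]
    for k in range(3, n + 1):
        for states in itertools.permutations(range(n), k):
            forward = np.prod([G[states[i], states[(i + 1) % k]] for i in range(k)])
            backward = np.prod([G[states[(i + 1) % k], states[i]] for i in range(k)])
            if abs(forward - backward) > tol:
                return False
    return True

# a birth-death generator satisfies the condition (such chains are reversible) ...
G_bd = np.array([[-1.0, 1.0, 0.0], [2.0, -5.0, 3.0], [0.0, 4.0, -4.0]])
# ... while a one-way cycle 0 -> 1 -> 2 -> 0 does not
G_cyc = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0], [1.0, 0.0, -1.0]])
print(cycle_condition_holds(G_bd), cycle_condition_holds(G_cyc))  # True False
```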
Applications and Exercises
The Two-State Chain
The continuous-time, two-state chain has been studied in our previous sections on continuous-time chains, so naturally we are interested in time reversal.
Consider the continuous-time Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1\}$ with transition rate $a \in (0, \infty)$ from 0 to 1 and transition rate $b \in (0, \infty)$ from 1 to 0. Show that $\bs X$ is reversible:
1. Using the transition semigroup $\bs P = \{P_t: t \in [0, \infty)\}$.
2. Using the resolvent $\bs U = \{U_\alpha: \alpha \in (0, \infty)\}$.
3. Using the generator matrix $G$.
Solutions
First note that $\bs X$ is irreducible since $a \gt 0$ and $b \gt 0$. Since $S$ is finite, $\bs X$ is positive recurrent.
1. Recall that $P_t = \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right] - \frac{1}{a + b} e^{-(a + b)t} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right], \quad t \in [0, \infty)$ All we have to do is find a positive function $g$ on $S$ with the property that $g(0) P_t(0, 1) = g(1) P_t(1, 0)$. The other conditions are trivially satisfied. Note that $g(0) = b$, $g(1) = a$ satisfies the property. It follows that $g$ is invariant for $\bs X$, unique up to multiplication by positive constants, and that $\bs X$ is reversible.
2. Recall that $U_\alpha = \frac{1}{\alpha (a + b)} \left[\begin{matrix} b & a \ b & a \end{matrix}\right] - \frac{1}{(\alpha + a + b)(a + b)} \left[\begin{matrix} -a & a \ b & -b\end{matrix}\right], \quad \alpha \in (0, \infty)$ Again, we just need to find a positive function $g$ on $S$ with the property that $g(0) U_\alpha(0, 1) = g(1) U_\alpha(1, 0)$. The other conditions are trivially satisfied. The function $g$ in part (a) satisfies the condition, which of course must be the case.
3. Recall that $G = \left[\begin{matrix} -a & a \ b & -b \end{matrix} \right]$. Once again, we just need to find a positive function $g$ on $S$ with the property that $g(0) G(0, 1) = g(1) G(1, 0)$. The function $g$ given in (a) satisfies the condition. Note that this procedure is the easiest of the three.
Of course, the invariant PDF $f$ is $f(0) = b / (a + b)$, $f(1) = a / (a + b)$.
Computational Exercises
The Markov chain in the following exercise has also been studied in previous sections.
Consider the Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1, 2\}$ with exponential parameter function $\lambda = (4, 1, 3)$ and jump transition matrix $Q = \left[\begin{matrix} 0 & \frac{1}{2} & \frac{1}{2} \ 1 & 0 & 0 \ \frac{1}{3} & \frac{2}{3} & 0\end{matrix}\right]$ Give each of the following for the time reversed chain $\hat{\bs X}$:
1. The state graph.
2. The semigroup of transition matrices $\hat{\bs P} = \{\hat P_t: t \in [0, \infty)\}$.
3. The resolvent of potential matrices $\hat{\bs U} = \{\hat U_\alpha: \alpha \in (0, \infty)\}$.
4. The generator matrix $\hat G$.
5. The transition matrix of the jump chain $\hat{\bs Y}$.
Solutions
Note that the chain is irreducible, and since $S$ is finite, positive recurrent. We found previously that an invariant function (unique up to multiplication by positive constants) is $g = (3, 10, 2)$.
1. The edge set is $\hat E = \{(1, 0), (2, 0), (0, 1), (0, 2), (1, 2)\}$. The exponential parameter function $\lambda = (4, 1, 3)$ is the same as for $\bs X$.
2. The transition matrix at $t \in [0, \infty)$ is $\hat P_t = \frac{1}{15} \left[\begin{matrix} 3 + 12 e^{-5 t} & 10 - 10 e^{-5 t} & 2 - 2 e^{-5 t} \ 3 - 3 e^{-3 t} & 10 + 5 e^{-3 t} & 2 - 2 e^{-3 t} \ 3 - 18 e^{-5 t} + 15 e^{-3 t} & 10 + 15 e^{-5 t} - 25 e^{-3 t} & 2 + 3 e^{-5 t} + 10 e^{-3 t} \end{matrix}\right]$
3. The potential matrix at $\alpha \in (0, \infty)$ is $\hat U_\alpha = \frac{1}{15 \alpha + 8 \alpha^2 + \alpha^3} \left[\begin{matrix} 3 + 4 \alpha + \alpha^2 & 10 + \frac{10}{3} \alpha & 2 + \frac{2}{3} \alpha \ 3 + \frac{3}{5} \alpha & 10 + 7 \alpha + \alpha^2 & 2 + \frac{2}{5} \alpha \ 3 + 3 \alpha & 10 & 2 + 5 \alpha + \alpha^2\end{matrix}\right]$
4. The generator matrix is $\hat G = \left[\begin{matrix} -4 & \frac{10}{3} & \frac{2}{3} \ \frac{3}{5} & -1 & \frac{2}{5} \ 3 & 0 & -3 \end{matrix}\right]$
5. The transition matrix of the jump chain is $\hat Q = \left[ \begin{matrix} 0 & \frac{5}{6} & \frac{1}{6} \ \frac{3}{5} & 0 & \frac{2}{5} \ 1 & 0 & 0 \end{matrix} \right]$
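As a numerical check, the reversed chain can be recovered directly from the generator balance equation $\hat G(x, y) = g(y) G(y, x) \big/ g(x)$. The following Python sketch, using the parameters of this exercise, reproduces the generator matrix and jump chain found above. (The code is an illustration added here, not part of the formal development.)

```python
# Recover the time-reversed chain from the generator balance equation.
import numpy as np

lam = np.array([4.0, 1.0, 3.0])
Q = np.array([[0, 1/2, 1/2],
              [1, 0, 0],
              [1/3, 2/3, 0]])
G = np.diag(lam) @ (Q - np.eye(3))   # G(x,y) = lam(x) Q(x,y), G(x,x) = -lam(x)
g = np.array([3.0, 10.0, 2.0])       # invariant function found previously

G_hat = (G.T * g) / g[:, None]       # G_hat(x,y) = g(y) G(y,x) / g(x)
lam_hat = -np.diag(G_hat)            # same exponential parameters as the original chain
Q_hat = G_hat / lam_hat[:, None]
np.fill_diagonal(Q_hat, 0.0)

print(lam_hat)                       # [4. 1. 3.]
print(Q_hat)                         # rows: [0, 5/6, 1/6], [3/5, 0, 2/5], [1, 0, 0]
```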
Special Models
Read the discussion of time reversal for chains subordinate to the Poisson process.
Read the discussion of time reversal for continuous-time birth-death chains.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
Recall that the standard Poisson process with rate parameter $r \in (0, \infty)$ involves three interrelated stochastic processes. First, the sequence of interarrival times $\bs{T} = (T_1, T_2, \ldots)$ is independent, and each variable has the exponential distribution with parameter $r$. Next, the sequence of arrival times $\bs{\tau} = (\tau_0, \tau_1, \ldots)$ is the partial sum sequence associated with the interarrival sequence $\bs{T}$: $\tau_n = \sum_{i=1}^n T_i, \quad n \in \N$ For $n \in \N_+$, the arrival time $\tau_n$ has the gamma distribution with parameters $n$ and $r$. Finally, the Poisson counting process $\bs{N} = \{N_t: t \in [0, \infty)\}$ is defined by $N_t = \max\{n \in \N: \tau_n \le t\}, \quad t \in [0, \infty)$ so that $N_t$ is the number of arrivals in $(0, t]$ for $t \in [0, \infty)$. The counting variable $N_t$ has the Poisson distribution with parameter $r t$ for $t \in [0, \infty)$. The counting process $\bs{N}$ and the arrival time process $\bs{\tau}$ are inverses in the sense that $\tau_n \le t$ if and only if $N_t \ge n$ for $t \in [0, \infty)$ and $n \in \N$. The Poisson counting process can be viewed as a continuous-time Markov chain.
Suppose that $X_0$ takes values in $\N$ and is independent of $\bs{N}$. Define $X_t = X_0 + N_t$ for $t \in [0, \infty)$. Then $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $\N$ with exponential parameter function given by $\lambda(x) = r$ for $x \in \N$ and jump transition matrix $Q$ given by $Q(x, x + 1) = 1$ for $x \in S$.
Proof
This follows directly from the basic structure of a continuous-time Markov chain. Given $X_t = x$, the holding time in state $x \in \N$ is exponential with parameter $r$, and the next state is deterministically $x + 1$. Note that the addition of the variable $X_0$ is just to allow us the freedom of arbitrary initial distributions on the state space, as is routine with Markov processes.
Note that the Poisson process, viewed as a Markov chain is a pure birth chain. Clearly we can generalize this continuous-time Markov chain in a simple way by allowing a general embedded jump chain.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Markov chain with (countable) state space $S$, and with constant exponential parameter $\lambda(x) = r \in (0, \infty)$ for $x \in S$, and jump transition matrix $Q$. Then $\bs{X}$ is said to be subordinate to the Poisson process with rate parameter $r$.
1. The transition times $(\tau_1, \tau_2, \ldots)$ are the arrival times of the Poisson process with rate $r$.
2. The inter-transition times $(\tau_1, \tau_2 - \tau_1, \ldots)$ are the inter-arrival times of the Poisson process with rate $r$ (independent, and each with the exponential distribution with rate $r$).
3. $\bs{N} = \{N_t: t \in [0, \infty)\}$ is the Poisson counting process, where $N_t$ is the number of transitions in $(0, t]$ for $t \in [0, \infty)$.
4. The Poisson process and the jump chain $\bs{Y} = (Y_0, Y_1, \ldots)$ are independent, and $X_t = Y_{N_t}$ for $t \in [0, \infty)$.
Proof
These results all follow from the basic structure of a continuous-time Markov chain.
Since all states are stable, note that we must have $Q(x, x) = 0$ for $x \in S$. Note also that for $x, \, y \in S$ with $x \ne y$, the exponential rate parameter for the transition from $x$ to $y$ is $\mu(x, y) = r Q(x, y)$. Conversely, suppose that $\mu: S^2 \to [0, \infty)$ satisfies $\mu(x, x) = 0$ and $\sum_{y \in S} \mu(x, y) = r$ for every $x \in S$. Then the Markov chain with transition rates given by $\mu$ is subordinate to the Poisson process with rate $r$. It's easy to construct a Markov chain subordinate to the Poisson process.
Suppose that $\bs N = \{N_t: t \in [0, \infty)\}$ is a Poisson counting process with rate $r \in (0, \infty)$ and that $\bs Y = \{Y_n: n \in \N\}$ is a discrete-time Markov chain on $S$, independent of $\bs N$, whose transition matrix satisfies $Q(x, x) = 0$ for every $x \in S$. Let $X_t = Y_{N_t}$ for $t \in [0, \infty)$. Then $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain subordinate to the Poisson process.
Generator and Transition Matrices
Next let's find the generator matrix and the transition semigroup. Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ subordinate to the Poisson process with rate $r \in (0, \infty)$ and with jump transition matrix $Q$. As usual, let $\bs P = \{P_t: t \in [0, \infty)\}$ denote the transition semigroup and $G$ the infinitesimal generator.
The generator matrix $G$ of $\bs{X}$ is $G = r (Q - I)$. Hence for $t \in [0, \infty)$
1. The Kolmogorov backward equation is $P^\prime_t = r (Q - I) P_t$
2. The Kolmogorov forward equation is $P^\prime_t = r P_t (Q - I)$
Proof
This follows directly from the general theory since $G(x, x) = -\lambda(x) = -r$ for $x \in S$ and $G(x, y) = \lambda(x) Q(x, y) = r Q(x, y)$ for distinct $x, \, y \in S$.
There are several ways to find the transition semigroup $\bs{P} = \{P_t: t \in [0, \infty)\}$. The best way is a probabilistic argument using the underlying Poisson process.
For $t \in [0, \infty)$, the transition matrix $P_t$ is given by $P_t = \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n$
Proof from the underlying Poisson process
Let $N_t$ denote the number of transitions in $(0, t]$ for $t \in [0, \infty)$, so that $\bs{N} = \{N_t: t \in [0, \infty)\}$ is the Poisson counting process. Let $\bs{Y} = (Y_0, Y_1, \ldots)$ denote the jump chain, with transition matrix $Q$. Then $\bs{N}$ and $\bs{Y}$ are independent, and $X_t = Y_{N_t}$ for $t \in [0, \infty)$. Conditioning on $N_t$, we have \begin{align*} P_t(x, y) & = \P(X_t = y \mid X_0 = x) = \P\left(Y_{N_t} = y \mid Y_0 = x\right) \ & = \sum_{n=0}^\infty \P\left(Y_{N_t} = y \mid N_t = n, Y_0 = x\right) \P(N_t = n \mid Y_0 = x) \ & = \sum_{n=0}^\infty \P(Y_n = y \mid Y_0 = x) \P(N_t = n) = \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n(x, y) \end{align*}
Proof using the generator matrix
Note first that for $n \in \N$, $G^n = [r (Q - I)]^n = r^n \sum_{k = 0}^n \binom{n}{k}(-1)^{n-k} Q^k$ Hence \begin{align*} P_t & = e^{t G} = \sum_{n=0}^\infty \frac{t^n}{n!} G^n = \sum_{n=0}^\infty \frac{t^n}{n!} r^n \sum_{k=0}^n \binom{n}{k} (-1)^{n-k}Q^k \ & = \sum_{n=0}^\infty \sum_{k=0}^n \frac{(r t)^n}{k! (n - k)!} (-1)^{n-k} Q^k = \sum_{k=0}^\infty \sum_{n=k}^\infty \frac{(r t)^n}{k! (n - k)!} (-1)^{n-k} Q^k \ & = \sum_{k=0}^\infty \frac{(r t)^k}{k!} Q^k \sum_{n=k}^\infty \frac{1}{(n - k)!}(- r t)^{n-k} = \sum_{k=0}^\infty e^{-r t} \frac{(r t)^k}{k!} Q^k \end{align*}
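The series representation is easy to confirm numerically. The Python sketch below, with a hypothetical rate $r$ and jump matrix $Q$, compares a truncated version of the Poisson-weighted series with the matrix exponential $e^{t G}$.

```python
# Compare the series P_t = sum_n e^{-rt} (rt)^n / n! Q^n with expm(t G).
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

r, t = 2.0, 1.5
Q = np.array([[0, 0.7, 0.3],
              [0.4, 0, 0.6],
              [0.5, 0.5, 0]])
G = r * (Q - np.eye(3))

# truncate the series; the Poisson tail beyond N is negligible here
N = 60
P_series = sum(poisson.pmf(n, r * t) * np.linalg.matrix_power(Q, n) for n in range(N))
print(np.allclose(P_series, expm(t * G)))  # True
```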
Potential Matrices
Next let's find the potential matrices. As with the transition matrices, we can do this in (at least) two different ways.
Suppose again that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ subordinate to the Poisson process with rate $r \in (0, \infty)$ and with jump transition matrix $Q$. For $\alpha \in (0, \infty)$, the potential matrix $U_\alpha$ of $\bs{X}$ is $U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n$
Proof from the definition
Using the previous result, \begin{align*} U_\alpha(x, y) & = \int_0^\infty e^{-\alpha t} P_t(x, y) \, dt = \int_0^\infty e^{-\alpha t} \sum_{n=0}^\infty e^{-r t} \frac{(r t)^n}{n!} Q^n(x, y) \, dt \ & = \sum_{n=0}^\infty Q^n(x, y) \frac{r^n}{n!} \int_0^\infty e^{-(r + \alpha) t} t^n dt \end{align*} The interchange of sum and integral is justified since the terms are nonnegative. Using the change of variables $s = (r + \alpha) t$ gives $U_\alpha(x, y) = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n \frac{1}{n!} Q^n(x, y) \int_0^\infty e^{-s} s^n \, ds$ The last integral is $n!$.
Proof using the generator
From the result above, $\alpha I - G = \alpha I - r (Q - I) = (\alpha + r) I - r Q = (\alpha + r)\left(I - \frac{r}{\alpha + r} Q\right)$ Since $\left\| \frac{r}{\alpha + r} Q \right\| = \frac{r}{\alpha + r} \lt 1$ we have $(\alpha I - G)^{-1} = \frac{1}{\alpha + r}\left(I - \frac{r}{\alpha + r} Q\right)^{-1} = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n$
Recall that for $p \in (0, 1)$, the $p$-potential matrix of the jump chain $\bs{Y}$ is $R_p = \sum_{n=0}^\infty p^n Q^n$. Hence we have the following nice relationship between the potential matrix of $\bs{X}$ and the potential matrix of $\bs{Y}$: $U_\alpha = \frac{1}{\alpha + r} R_{r / (\alpha + r)}$ Next recall that $\alpha U_\alpha(x, \cdot)$ is the probability density function of $X_T$ given $X_0 = x$, where $T$ has the exponential distribution with parameter $\alpha$ and is independent of $\bs{X}$. On the other hand, $\alpha U_\alpha(x, \cdot) = (1 - p) R_p(x, \cdot)$ where $p = r \big/ (\alpha + r)$. We know from our study of discrete potentials that $(1 - p) R_p(x, \cdot)$ is the probability density function of $Y_M$ where $M$ has the geometric distribution on $\N$ with parameter $1 - p$ and is independent of $\bs{Y}$. But also $X_T = Y_{N_T}$. So it follows that if $T$ has the exponential distribution with parameter $\alpha$, $\bs{N} = \{N_t: t \in [0, \infty)\}$ is a Poisson process with rate $r$, and is independent of $T$, then $N_T$ has the geometric distribution on $\N$ with parameter $\alpha \big/ (\alpha + r)$. Of course, we could easily verify this directly, but it's still fun to see such connections.
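The closing observation is simple to confirm by simulation. In the sketch below (with arbitrary parameters, added here for illustration), $N_T$ is sampled by first drawing $T$ and then drawing a Poisson variable with mean $r T$; its empirical mean and mass at 0 match the geometric distribution on $\N$ with parameter $\alpha / (\alpha + r)$.

```python
# Simulate N_T where T ~ Exponential(alpha), independent of a rate-r Poisson process.
import numpy as np

rng = np.random.default_rng(0)
alpha, r, reps = 1.0, 3.0, 100_000
T = rng.exponential(1 / alpha, size=reps)
N_T = rng.poisson(r * T)             # N_T given T is Poisson with mean r T
p = alpha / (alpha + r)
print(N_T.mean(), (1 - p) / p)       # both close to r / alpha = 3
print((N_T == 0).mean(), p)          # P(N_T = 0) is close to alpha / (alpha + r)
```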
Limiting Behavior and Stationary Distributions
Once again, suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ subordinate to the Poisson process with rate $r \in (0, \infty)$ and with jump transition matrix $Q$. Let $\bs{Y} = \{Y_n: n \in \N\}$ denote the jump process. The limiting behavior and stationary distributions of $\bs{X}$ are closely related to those of $\bs{Y}$.
Suppose that $\bs{X}$ (and hence $\bs{Y}$) are irreducible.
1. $g: S \to (0, \infty)$ is invariant for $\bs{X}$ if and only if $g$ is invariant for $\bs{Y}$.
2. $f$ is an invariant probability density function for $\bs{X}$ if and only if $f$ is an invariant probability density function for $\bs{Y}$.
3. $\bs{X}$ is null recurrent if and only if $\bs{Y}$ is null recurrent, and in this case, $\lim_{n \to \infty} Q^n(x, y) = \lim_{t \to \infty} P_t(x, y) = 0$ for $(x, y) \in S^2$.
4. $\bs{X}$ is positive recurrent if and only if $\bs{Y}$ is positive recurrent. If $\bs{Y}$ is aperiodic, then $\lim_{n \to \infty} Q^n(x, y) = \lim_{t \to \infty} P_t(x, y) = f(y)$ for $(x, y) \in S^2$, where $f$ is the invariant probability density function.
Proof
All of these results follow from the basic theory of stationary and limiting distributions for continuous-time chains, and the fact that the exponential parameter function $\lambda$ is constant.
Time Reversal
Once again, suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ subordinate to the Poisson process with rate $r \in (0, \infty)$ and with jump transition matrix $Q$. Let $\bs{Y} = \{Y_n: n \in \N\}$ denote the jump process. We assume that $\bs X$ (and hence $\bs Y$) are irreducible. The time reversal of $\bs X$ is closely related to that of $\bs Y$.
Suppose that $g: S \to (0, \infty)$ is invariant for $\bs X$. The time reversal $\hat{\bs X}$ with respect to $g$ is also subordinate to the Poisson process with rate $r$. The jump chain $\hat{\bs Y}$ of $\hat{\bs X}$ is the (discrete) time reversal of $\bs Y$ with respect to $g$.
Proof
From the previous result, $g$ is also invariant for $\bs Y$. From the general theory of time reversal, $\hat{\bs X}$ has the same exponential parameter function as $\bs X$ (namely the constant function $r$) and so is also subordinate to the Poisson process with rate $r$. Finally, the jump chain $\hat{\bs Y}$ of $\hat{\bs X}$ is the reversal of $\bs Y$ with respect to $r g$ and hence also with respect to $g$.
In particular, $\bs X$ is reversible with respect to $g$ if and only if $\bs Y$ is reversible with respect to $g$. As noted earlier, $\bs X$ and $\bs Y$ are of the same type: both transient or both null recurrent or both positive recurrent. In the recurrent case, there exists a positive invariant function that is unique up to multiplication by constants. In this case, the reversal of $\bs X$ is unique, and is the chain subordinate to the Poisson process with rate $r$ whose jump chain is the reversal of $\bs Y$.
Uniform Chains
In the construction above for a Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ that is subordinate to the Poisson process with rate $r$ and jump transition kernel $Q$, we assumed of course that $Q(x, x) = 0$ for every $x \in S$. So there are no absorbing states and the sequence $(\tau_1, \tau_2, \ldots)$ of arrival times of the Poisson process are the jump times of the chain $\bs X$. However in our introduction to continuous-time chains, we saw that the general construction of a chain starting with the function $\lambda$ and the transition matrix $Q$ works without this assumption on $Q$, although the exponential parameters and transition probabilities change. The same idea works here.
Suppose that $\bs N = \{N_t: t \in [0, \infty)\}$ is a Poisson counting process with rate $r \in (0, \infty)$ and that $\bs Y = \{Y_n: n \in \N\}$ is a discrete-time Markov chain on $S$ with transition matrix $Q$ satisfying $Q(x, x) \lt 1$ for $x \in S$. Assume also that $\bs N$ and $\bs Y$ are independent. Define $X_t = Y_{N_t}$ for $t \in [0, \infty)$. Then $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain with exponential parameter function $\lambda(x) = r [1 - Q(x, x)]$ for $x \in S$ and jump transition matrix $\tilde Q$ given by $\tilde Q(x, y) = \frac{Q(x, y)}{1 - Q(x, x)}, \quad (x, y) \in S^2, \, x \ne y$
Proof
This follows from the result in the introduction.
The Markov chain constructed above is no longer a chain subordinate to the Poisson process by our definition above, since the exponential parameter function is not constant, and the transition times of $\bs X$ are no longer the arrival times of the Poisson process. Nonetheless, many of the basic results above still apply.
Let $\bs X = \{X_t: t \in [0, \infty)\}$ be the Markov chain constructed in the previous theorem. Then
1. For $t \in [0, \infty)$, the transition matrix $P_t$ is given by $P_t = \sum_{n=0}^\infty e^{- r t} \frac{(r t)^n}{n!} Q^n$
2. For $\alpha \in (0, \infty)$, the $\alpha$ potential matrix is given by $U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n Q^n$
3. The generator matrix is $G = r (Q - I)$
4. $g: S \to (0, \infty)$ is invariant for $\bs X$ if and only if $g$ is invariant for $\bs Y$.
Proof
The proofs are just as before.
It's a remarkable fact that every continuous-time Markov chain with bounded exponential parameters can be constructed as in the last theorem, a process known as uniformization. The name comes from the fact that in the construction, the exponential parameters become constant, but at the expense of allowing the embedded discrete-time chain to jump from a state back to that state. To review the definition, suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ with transition semigroup $\bs P = \{P_t: t \in [0, \infty)\}$, exponential parameter function $\lambda$ and jump transition matrix $Q$. Then $\bs P$ is uniform if $P_t(x, x) \to 1$ as $t \downarrow 0$ uniformly in $x$, or equivalently if $\lambda$ is bounded.
Suppose that $\lambda: S \to (0, \infty)$ is bounded and that $Q$ is a transition matrix on $S$ with $Q(x, x) = 0$ for every $x \in S$. Let $r \in (0, \infty)$ be an upper bound on $\lambda$ and $\bs N = \{N_t: t \in [0, \infty)\}$ a Poisson counting process with rate $r$. Define the transition matrix $\hat Q$ on $S$ by \begin{align*} \hat Q(x, x) & = 1 - \frac{\lambda(x)}{r} \quad x \in S \ \hat Q(x, y) & = \frac{\lambda(x)}{r} Q(x, y) \quad (x, y) \in S^2, \, x \ne y \end{align*} and let $\bs Y = \{Y_n: n \in \N\}$ be a discrete-time Markov chain with transition matrix $\hat Q$, independent of $\bs N$. Define $X_t = Y_{N_t}$ for $t \in [0, \infty)$. Then $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain with exponential parameter function $\lambda$ and jump transition matrix $Q$.
Proof
Note that $\hat Q(x, y) \ge 0$ for every $(x, y) \in S^2$ and $\sum_{y \in S} \hat Q(x, y) = 1$ for every $x \in S$. Thus $\hat Q$ is a transition matrix on $S$. Note also that $\hat Q(x, x) \lt 1$ for every $x \in S$. By construction, $\lambda(x) = r[1 - \hat Q(x, x)]$ for $x \in S$ and $Q(x, y) = \frac{\hat Q(x, y)}{1 - \hat Q(x, x)}, \quad (x, y) \in S^2, \, x \ne y$ So the result now follows from the theorem above.
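The construction doubles as a simulation algorithm: run the Poisson clock at the constant rate $r$ and apply $\hat Q$ at each arrival, allowing fictitious jumps from a state back to itself. Here is a Python sketch (the values of $\lambda$ and $Q$ are hypothetical, chosen only for illustration):

```python
# Simulate a continuous-time Markov chain by uniformization.
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([4.0, 1.0, 3.0])
Q = np.array([[0, 1/2, 1/2],
              [1, 0, 0],
              [1/3, 2/3, 0]])
r = lam.max()                            # any upper bound on lambda will do

Q_hat = (lam[:, None] / r) * Q           # off-diagonal entries of Q_hat
np.fill_diagonal(Q_hat, 1 - lam / r)     # Q_hat(x, x) = 1 - lambda(x) / r

def sample_path(x0, t_max):
    """Return (times, states) of the chain on [0, t_max]."""
    t, x, times, states = 0.0, x0, [0.0], [x0]
    while True:
        t += rng.exponential(1 / r)      # next arrival of the Poisson clock
        if t > t_max:
            return times, states
        x = rng.choice(len(lam), p=Q_hat[x])   # may be a fictitious jump to x itself
        times.append(t)
        states.append(x)

print(sample_path(0, 2.0))
```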
Note in particular that if the state space $S$ is finite then of course $\lambda$ is bounded so the previous theorem applies. The theorem is useful for simulating a continuous-time Markov chain, since the Poisson process and discrete-time chains are simple to simulate. In addition, we have nice representations for the transition matrices, potential matrices, and the generator matrix.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $S$ with bounded exponential parameter function $\lambda: S \to (0, \infty)$ and jump transition matrix $Q$. Define $r$ and $\hat Q$ as in the last theorem. Then
1. For $t \in [0, \infty)$, the transition matrix $P_t$ is given by $P_t = \sum_{n=0}^\infty e^{- r t} \frac{(r t)^n}{n!} \hat Q^n$
2. For $\alpha \in (0, \infty)$, the $\alpha$ potential matrix is given by $U_\alpha = \frac{1}{\alpha + r} \sum_{n=0}^\infty \left(\frac{r}{\alpha + r}\right)^n \hat Q^n$
3. The generator matrix is $G = r (\hat Q - I)$
4. $g: S \to (0, \infty)$ is invariant for $\bs X$ if and only if $g$ is invariant for $\hat Q$.
Proof
These results follow from the theorem above.
Examples
The Two-State Chain
The following exercise applies the uniformization method to the two-state chain.
Consider the continuous-time Markov chain $\bs X = \{X_t: t \in [0, \infty)\}$ on $S = \{0, 1\}$ with exponential parameter function $\lambda = (a, b)$, where $a, \, b \in (0, \infty)$. Thus, states 0 and 1 are stable and the jump chain has transition matrix $Q = \left[\begin{matrix} 0 & 1 \ 1 & 0 \end{matrix} \right]$ Let $r = a + b$, an upper bound on $\lambda$. Show that
1. $\hat Q = \frac{1}{a + b} \left[\begin{matrix} b & a \ b & a \end{matrix} \right]$
2. $G = \left[\begin{matrix} -a & a \ b & - b \end{matrix}\right]$
3. $P_t = \hat Q - \frac{1}{a + b} e^{-(a + b) t} G$ for $t \in [0, \infty)$
4. $U_\alpha = \frac{1}{\alpha} \hat Q - \frac{1}{(\alpha + a + b)(a + b)} G$ for $\alpha \in (0, \infty)$
Proof
The form of $\hat Q$ follows easily from the definition above. Note that the rows of $\hat Q$ are the invariant PDF. It then follows that $\hat Q^n = \hat Q$ for $n \in \N_+$. The results for the transition matrix $P_t$ and the potential $U_\alpha$ then follow easily from the theorem above.
Although we have obtained all of these results for the two-state chain before, the derivation based on uniformization is the easiest. | textbooks/stats/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16%3A_Markov_Processes/16.20%3A_Chains_Subordinate_to_the_Poisson_Process.txt |
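As a final numerical confirmation, the sketch below (with arbitrary $a$, $b$, and $t$, added for illustration) checks the closed form for $P_t$ above against the matrix exponential of the generator.

```python
# Check P_t = Q_hat - e^{-(a+b)t} G / (a+b) against expm(t G) for the two-state chain.
import numpy as np
from scipy.linalg import expm

a, b, t = 2.0, 3.0, 0.7
G = np.array([[-a, a], [b, -b]])
Q_hat = np.array([[b, a], [b, a]]) / (a + b)
P_t = Q_hat - np.exp(-(a + b) * t) * G / (a + b)
print(np.allclose(P_t, expm(t * G)))  # True
```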
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
Continuous-time birth-death chains form a simple class of Markov chains on a subset of $\Z$ with the property that the only possible transitions are to increase the state by 1 (birth) or decrease the state by 1 (death). It's easiest to define the birth-death process in terms of the exponential transition rates, part of the basic structure of continuous-time Markov chains.
Suppose that $S$ is an integer interval (that is, a set of consecutive integers), either finite or infinite. The birth-death chain with birth rate function $\alpha: S \to [0, \infty)$ and death rate function $\beta: S \to [0, \infty)$ is the Markov chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S$ with transition rate $\alpha(x)$ from $x$ to $x + 1$ and transition rate $\beta(x)$ from $x$ to $x - 1$, for $x \in S$.
If $S$ has a minimum element $m$, then of course we must have $\beta(m) = 0$. If $\alpha(m) = 0$ also, then the boundary point $m$ is absorbing. Similarly, if $S$ has a maximum element $n$ then we must have $\alpha(n) = 0$. If $\beta(n) = 0$ also then the boundary point $n$ is absorbing. If $x \in S$ is not a boundary point, then typically we have $\alpha(x) + \beta(x) \gt 0$, so that $x$ is stable. If $\beta(x) = 0$ for all $x \in S$, then $\bs{X}$ is a pure birth process, and similarly if $\alpha(x) = 0$ for all $x \in S$ then $\bs{X}$ is a pure death process. From the transition rates, it's easy to compute the parameters of the exponential holding times in a state and the transition matrix of the embedded, discrete-time jump chain.
Consider again the birth-death chain $\bs{X}$ on $S$ with birth rate function $\alpha$ and death rate function $\beta$. As usual, let $\lambda$ denote the exponential parameter function and $Q$ the transition matrix for the jump chain.
1. $\lambda(x) = \alpha(x) + \beta(x)$ for $x \in S$
2. If $x \in S$ is stable, so that $\alpha(x) + \beta(x) \gt 0$, then $Q(x, x + 1) = \frac{\alpha(x)}{\alpha(x) + \beta(x)}, \quad Q(x, x - 1) = \frac{\beta(x)}{\alpha(x) + \beta(x)}$
Note that the jump chain $\bs{Y} = (Y_0, Y_1, \ldots)$ is a discrete-time birth-death chain. The probability functions $p$, $q$, and $r$ of $\bs Y$ are given as follows: If $x \in S$ is stable then \begin{align*} p(x) & = Q(x, x + 1) = \frac{\alpha(x)}{\alpha(x) + \beta(x)}\ q(x) & = Q(x, x - 1) = \frac{\beta(x)}{\alpha(x) + \beta(x)}\ r(x) & = Q(x, x) = 0 \end{align*} If $x$ is absorbing then of course $p(x) = q(x) = 0$ and $r(x) = 1$. Except for the initial state, the jump chain $\bs{Y}$ is deterministic for a pure birth process, with $Q(x, x) = 1$ if $x$ is absorbing and $Q(x, x + 1) = 1$ if $x$ is stable. Similarly, except for the initial state, $\bs{Y}$ is deterministic for a pure death process, with $Q(x, x) = 1$ if $x$ is absorbing and $Q(x, x - 1) = 1$ if $x$ is stable. Note that the Poisson process with rate parameter $r \in (0, \infty)$, viewed as a continuous-time Markov chain, is a pure birth process on $\N$ with birth function $\alpha(x) = r$ for each $x \in \N$. More generally, a birth-death process with $\lambda(x) = \alpha(x) + \beta(x) = r$ for all $x \in S$ is also subordinate to the Poisson process with rate $r$.
Note that $\lambda$ is bounded if and only if $\alpha$ and $\beta$ are bounded (always the case if $S$ is finite), and in this case the birth-death chain $\bs X = \{X_t: t \in [0, \infty)\}$ is uniform. If $\lambda$ is unbounded, then $\bs X$ may not even be regular, as an example below shows. Recall that a sufficient condition for $\bs X$ to be regular when $S$ is infinite is $\sum_{x \in S_+} \frac{1}{\lambda(x)} = \sum_{x \in S_+} \frac{1}{\alpha(x) + \beta(x)} = \infty$ where $S_+ = \{x \in S: \lambda(x) = \alpha(x) + \beta(x) \gt 0\}$ is the set of stable states. Except for the aforementioned example, we will restrict our study to regular birth-death chains.
Infinitesimal Generator and Transition Matrices
Suppose again that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time birth-death chain on an interval $S \subseteq \Z$ with birth rate function $\alpha$ and death rate function $\beta$. As usual, we will let $P_t$ denote the transition matrix at time $t \in [0, \infty)$ and $G$ the infinitesimal generator. As always, the infinitesimal generator gives the same information as the exponential parameter function and the jump transition matrix, but in a more compact and useful form.
The generator matrix $G$ is given by $G(x, x) = -[\alpha(x) + \beta(x)], \; G(x, x + 1) = \alpha(x), \; G(x, x - 1) = \beta(x), \quad x \in S$
Proof
This follows from the general theory, since $G(x, x) = -\lambda(x)$ for $x \in S$ and $G(x, y) = \lambda(x) Q(x, y)$ for $(x, y) \in S^2$ with $x \ne y$.
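For numerical work it is convenient to assemble $G$ as a tridiagonal matrix. A small helper sketch follows (the rate values at the end are hypothetical, chosen to mimic a finite queue):

```python
# Assemble the tridiagonal generator of a birth-death chain on {0, 1, ..., n}.
import numpy as np

def bd_generator(alpha, beta):
    """G(x, x+1) = alpha(x), G(x, x-1) = beta(x), G(x, x) = -(alpha(x) + beta(x))."""
    n = len(alpha)
    G = np.zeros((n, n))
    G += np.diag(alpha[:-1], k=1)    # birth rates on the superdiagonal
    G += np.diag(beta[1:], k=-1)     # death rates on the subdiagonal
    G -= np.diag(alpha + beta)       # holding rates on the diagonal
    return G

alpha = np.array([2.0, 2.0, 2.0, 0.0])   # alpha(n) = 0 at the right endpoint
beta = np.array([0.0, 1.0, 1.0, 1.0])    # beta(0) = 0 at the left endpoint
print(bd_generator(alpha, beta))         # rows sum to 0
```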
The Kolmogorov backward and forward equations are
1. $\frac{d}{dt} P_t(x, y) = -[\alpha(x) + \beta(x)] P_t(x, y) + \alpha(x) P_t(x + 1, y) + \beta(x) P_t(x - 1, y)$ for $(x, y) \in S^2$.
2. $\frac{d}{dt} P_t(x, y) = -[\alpha(y) + \beta(y)] P_t(x, y) + \alpha(y - 1) P_t(x, y - 1) + \beta(y + 1) P_t(x, y + 1)$ for $(x, y) \in S^2$
Proof
These results follow from the generator matrix $G$ above.
1. The backward equation is $\frac{d}{dt} P_t = G P_t$.
2. The forward equation is $\frac{d}{dt} P_t = P_t G$.
Limiting Behavior and Stationary Distributions
For our discussion of limiting behavior, we will consider first the important special case of a continuous-time birth-death chain $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $S = \N$ and with $\alpha(x) \gt 0$ for all $x \in \N$ and $\beta(x) \gt 0$ for all $x \in \N_+$. For the jump chain $\bs{Y} = \{Y_n: n \in \N\}$, recall that $p(x) = Q(x, x + 1) = \frac{\alpha(x)}{\alpha(x) + \beta(x)}, \; q(x) = Q(x, x - 1) = \frac{\beta(x)}{\alpha(x) + \beta(x)}, \quad x \in \N$ The jump chain $\bs{Y}$ is a discrete-time birth-death chain, and our notation here is consistent with the notation that we used in that section. Note that $\bs{X}$ and $\bs{Y}$ are irreducible. We first consider transience and recurrence.
The chains $\bs{X}$ and $\bs{Y}$ are recurrent if and only if $\sum_{x=0}^\infty \frac{\beta(1) \cdots \beta(x)}{\alpha(1) \cdots \alpha(x)} = \infty$
Proof
Recall that $\bs{X}$ is recurrent if and only if $\bs{Y}$ is recurrent. In our study of discrete-time birth-death chains we saw that $\bs{Y}$ is recurrent if and only if $\sum_{x=0}^\infty \frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} = \infty$ But trivially, $\frac{q(1) \cdots q(x)}{p(1) \cdots p(x)} = \frac{\beta(1) \cdots \beta(x)}{\alpha(1) \cdots \alpha(x)}$
Next we consider positive recurrence and invariant distributions. It's nice to look at this from different points of view.
The function $g: \N \to (0, \infty)$ defined by $g(x) = \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}, \quad x \in \N$ is invariant for $\bs{X}$, and is the only invariant function, up to multiplication by constants. Hence $\bs{X}$ is positive recurrent if and only if $B = \sum_{x = 0}^\infty g(x) \lt \infty$, in which case the (unique) invariant probability density function $f$ is given by $f(x) = \frac{1}{B} g(x)$ for $x \in \N$. Moreover, $P_t(x, y) \to f(y)$ as $t \to \infty$ for every $x, \, y \in \N$
Proof using the jump chain
From our study of discrete-time birth-death chains, we know that the function $h: \N \to (0, \infty)$ defined by $h(x) = \frac{p(0) \cdots p(x - 1)}{q(1) \cdots q(x)}, \quad x \in \N$ is invariant for $\bs{Y}$, and is the only positive invariant function up to multiplication by positive constants. It then follows from our study of invariant functions for continuous-time chains that the function $h / \lambda$ is invariant for $\bs{X}$, and again is the only positive invariant function up to multiplication by positive constants. But it's simple to see that $\frac{h(x)}{\lambda(x)} = \frac{h(x)}{\alpha(x) + \beta(x)} = \frac{\alpha(1) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)} = \frac{1}{\alpha(0)} g(x)$ where $g$ is the function given in the theorem. The remaining parts of the theorem follow from the general theory.
Proof from the balance equation
A function $g: \N \to (0, \infty)$ is invariant for $\bs{X}$ if and only if it satisfies the balance equation $g G = 0$. For our birth-death chain, this reduces to \begin{align*} \alpha(0) g(0) & = \beta(1) g(1) \ [\alpha(x) + \beta(x)] g(x) & = \alpha(x - 1) g(x - 1) + \beta(x + 1) g(x + 1), \quad x \in \N_+ \end{align*} Substituting the equation with $x = 0$ on the left into the equation with $x = 1$ on the left gives $\alpha(1) g(1) = \beta(2) g(2)$. Substituting this into the equation with $x = 2$ on the left gives $\alpha(2) g(2) = \beta(3) g(3)$. In general, the balance equations imply $\alpha(x) g(x) = \beta(x + 1) g(x + 1), \quad x \in \N$ Solving these new balance equations recursively gives $g(x) = g(0) \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}$ Letting $g(0) = 1$ gives the particular invariant function in the theorem. Again, the remaining parts follow from the general theory.
Here is a summary of the classification:
For the continuous-time birth-death chain $\bs X$, let $A = \sum_{x = 0}^\infty \frac{\beta(1) \cdots \beta(x)}{\alpha(1) \cdots \alpha(x)}, \; B = \sum_{x = 0}^\infty \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}$
1. $\bs X$ is transient if $A \lt \infty$.
2. $\bs X$ is null recurrent if $A = \infty$ and $B = \infty$.
3. $\bs X$ is positive recurrent if $B \lt \infty$.
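For the constant-rate chain with $\alpha(x) = a$ and $\beta(x) = b$, the series $A$ and $B$ are geometric, so the classification can be read off directly. The sketch below merely tabulates truncated partial sums to illustrate the three cases; divergence itself must of course be judged analytically, not from a finite sum.

```python
# Partial sums of the classification series A and B for constant rates a, b.
import numpy as np

def partial_sums(a, b, n_terms=200):
    x = np.arange(n_terms)
    A = ((b / a) ** x).sum()   # terms beta(1)...beta(x) / (alpha(1)...alpha(x))
    B = ((a / b) ** x).sum()   # terms alpha(0)...alpha(x-1) / (beta(1)...beta(x))
    return A, B

for a, b in [(2.0, 1.0), (1.0, 1.0), (1.0, 2.0)]:
    print((a, b), partial_sums(a, b))
# a > b: A converges (transient); a = b: both diverge (null recurrent);
# a < b: B converges (positive recurrent)
```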
Suppose now that $n \in \N_+$ and that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time birth-death chain on the integer interval $\N_n = \{0, 1, \ldots, n\}$. We assume that $\alpha(x) \gt 0$ for $x \in \{0, 1, \ldots, n - 1\}$ while $\beta(x) \gt 0$ for $x \in \{1, 2, \ldots n\}$. Of course, we must have $\beta(0) = \alpha(n) = 0$. With these assumptions, $\bs X$ is irreducible, and since the state space is finite, positive recurrent. So all that remains is to find the invariant distribution. The result is essentially the same as when the state space is $\N$.
The invariant probability density function $f_n$ is given by $f_n(x) = \frac{1}{B_n} \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)} \text{ for } x \in \N_n \text{ where } B_n = \sum_{x=0}^n \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}$
Proof
Define $g_n(x) = \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}, \quad x \in \N_n$ The proof that $g_n$ is invariant for $\bs X$ is the same as before. The constant $B_n$ is the normalizing constant.
Note that $B_n \to B$ as $n \to \infty$, and if $B \lt \infty$, $f_n(x) \to f(x)$ as $n \to \infty$ for $x \in \N$. We will see this type of behavior again. Results for the birth-death chain on $\N_n$ often converge to the corresponding results for the birth-death chain on $\N$ as $n \to \infty$.
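Here is a short sketch computing $f_n$ for given rate functions (the constant rates below are an arbitrary example; the result is a truncated geometric distribution that converges to the invariant distribution on $\N$ as $n \to \infty$):

```python
# Invariant PDF of a birth-death chain on {0, 1, ..., n}.
import numpy as np

def invariant_pdf(alpha, beta, n):
    """Normalize g(x) = alpha(0)...alpha(x-1) / (beta(1)...beta(x))."""
    g = np.ones(n + 1)
    for x in range(1, n + 1):
        g[x] = g[x - 1] * alpha(x - 1) / beta(x)
    return g / g.sum()

a, b = 1.0, 2.0
print(invariant_pdf(lambda x: a, lambda x: b, n=10))   # proportional to (a/b)^x
```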
Absorption
Often when the state space $S = \N$, the state of a birth-death chain represents a population of individuals of some sort (and so the terms birth and death have their usual meanings). In this case state 0 is absorbing and means that the population is extinct. Specifically, suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a regular birth-death chain on $\N$ with $\alpha(0) = \beta(0) = 0$ and with $\alpha(x), \, \beta(x) \gt 0$ for $x \in \N_+$. Thus, state 0 is absorbing and all positive states lead to each other and to 0. Let $T = \min\{t \in [0, \infty): X_t = 0\}$ denote the time until absorption, where as usual, $\min \emptyset = \infty$. Many of the results concerning extinction of the continuous-time birth-death chain follow easily from corresponding results for the discrete-time birth-death jump chain.
One of the following events will occur:
1. Population extinction: $T \lt \infty$ or equivalently, $X_s = 0$ for some $s \in [0, \infty)$ and hence $X_t = 0$ for all $t \in [s, \infty)$.
2. Population explosion: $T = \infty$ or equivalently $X_t \to \infty$ as $t \to \infty$.
Proof
Part (b) follows from the general theory, since 0 is absorbing, and all positive states lead to each other and to 0. Thus the positive states are transient and we know that with probability 1, the jump chain will visit a transient state only finitely often. Thus $T = \infty$ is equivalent to $X_t \to \infty$ as $t \to \infty$. Without the assumption that the chain is regular, population explosion could occur in finite time.
Naturally we would like to find the probability of these complementary events, and happily we have already done so in our study of discrete-time birth-death chains. The absorption probability function $v$ is defined by $v(x) = \P(T \lt \infty \mid X_0 = x) = \P(X_t = 0 \text{ for some } t \in [0, \infty) \mid X_0 = x), \quad x \in \N$
As before, let $A = \sum_{i=0}^\infty \frac{\beta(1) \cdots \beta(i)}{\alpha(1) \cdots \alpha(i)}$
1. If $A = \infty$ then $v(x) = 1$ for all $x \in \N$.
2. If $A \lt \infty$ then $v(x) = \frac{1}{A} \sum_{i=x}^\infty \frac{\beta(1) \cdots \beta(i)}{\alpha(1) \cdots \alpha(i)}, \quad x \in \N$
Proof
The continuous-time chain is absorbed into 0 if and only if the discrete-time jump chain is absorbed into 0. So the result follows from the corresponding result for discrete-time birth-death chains. Recall again that $q(x) / p(x) = \beta(x) / \alpha(x)$ for $x \in \N_+$.
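When $A \lt \infty$, the absorption probabilities can be approximated by truncating both series in the theorem. A Python sketch with hypothetical constant rates, for which the closed form $v(x) = (\beta / \alpha)^x$ is derived later in this section:

```python
def absorption_probability(alpha, beta, x, terms=1000):
    """Approximate v(x) for a chain on N with 0 absorbing (assumes A < infinity)."""
    prod, A = 1.0, 1.0                    # i = 0 term of A is the empty product
    tail = 1.0 if x == 0 else 0.0
    for i in range(1, terms + 1):
        prod *= beta(i) / alpha(i)
        A += prod
        if i >= x:
            tail += prod
    return tail / A

# Hypothetical rates alpha = 2, beta = 1: v(x) should be (1/2)^x
print([round(absorption_probability(lambda x: 2.0, lambda x: 1.0, x), 4) for x in range(4)])
```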
The mean time to extinction is considered next, so let $m(x) = \E(T \mid X_0 = x)$ for $x \in \N$. Unlike the probability of extinction, computing the mean time to extinction cannot be easily reduced to the corresponding discrete-time computation. However, the method of computation does extend.
The mean absorption function is given by $m(x) = \sum_{j=1}^x \sum_{k=j-1}^\infty \frac{\alpha(j) \cdots \alpha(k)}{\beta(j) \cdots \beta(k+1)}, \quad x \in \N$
Probabilistic Proof
The time required to go from state $x \in \N_+$ to $x - 1$ has the same distribution as the time required to go from state 1 to 0, except with parameters $\alpha(y), \, \beta(y)$ for $y \in \{x, x + 1, \ldots\}$ instead of parameters $\alpha(y), \, \beta(y)$ for $y \in \{1, 2, \ldots\}$. So by the additivity of expected value, we just need to compute $m(1)$ as a function of the parameters. Starting in state 1, the chain will be absorbed in state 0 after a random number of returns to state 1 without absorption. Whenever the chain is in state 1, absorption occurs at the next transition with probability $q(1)$, so it follows that the number of times that the chain is in state 1 before absorption has the geometric distribution on $\N_+$ with success parameter $q(1)$. The mean of this distribution is $1 / q(1) = [\alpha(1) + \beta(1)] / \beta(1)$. On the other hand, starting in state 1, the time until the chain is in state 1 again (without absorption) has the same distribution as the return time to state 0, starting in state 0, for the irreducible birth-death chain $\bs{X}^\prime$ on $\N$ with birth and death rates $\alpha^\prime$ and $\beta^\prime$ given by $\alpha^\prime(x) = \alpha(x + 1)$ for $x \in \N$ and $\beta^\prime(x) = \beta(x + 1)$ for $x \in \N_+$. Thus, let $\mu = \frac{1}{\alpha(1) + \beta(1)}\sum_{k=0}^\infty \frac{\alpha(1) \cdots \alpha(k)}{\beta(2) \cdots \beta(k+1)}$ Then $\mu$ is the mean return time to state 0 for the chain $\bs{X}^\prime$. Specifically, note that if $\mu = \infty$ then $\bs{X}^\prime$ is either transient or null recurrent. If $\mu \lt \infty$ then $1 / \mu$ is the invariant PDF at 0. So, it follows that $m(1) = \frac{1}{q(1)} \mu = \sum_{k=0}^\infty \frac{\alpha(1) \cdots \alpha(k)}{\beta(1) \cdots \beta(k + 1)}$ By our argument above, the mean time to go from state $x$ to $x - 1$ is $\sum_{k=x-1}^\infty \frac{\alpha(x) \cdots \alpha(k)}{\beta(x) \cdots \beta(k + 1)}$
In particular, note that $m(1) = \sum_{k=0}^\infty \frac{\alpha(1) \cdots \alpha(k)}{\beta(1) \cdots \beta(k + 1)}$ If $m(1) = \infty$ then $m(x) = \infty$ for all $x \in \N_+$. If $m(1) \lt \infty$ then $m(x) \lt \infty$ for all $x \in \N$.
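The double sum for $m(x)$ can likewise be approximated by truncation when it converges. A Python sketch (hypothetical constant rates again, where each inner sum works out to 1):

```python
def mean_absorption_time(alpha, beta, x, terms=2000):
    """Approximate m(x) by truncating the double sum (useful when the series converges)."""
    total = 0.0
    for j in range(1, x + 1):
        term = 1.0 / beta(j)   # the k = j - 1 term: empty alpha product over beta(j)
        inner = term
        for k in range(j, j + terms):
            term *= alpha(k) / beta(k + 1)
            inner += term
        total += inner
    return total

# Hypothetical constant rates alpha = 1, beta = 2: each inner sum is 1, so m(x) = x
print([round(mean_absorption_time(lambda x: 1.0, lambda x: 2.0, x), 4) for x in range(4)])
```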
Next we will consider a birth-death chain on a finite integer interval with both endpoints absorbing. Our interest is in the probability of absorption in one endpoint rather than the other, and in the mean time to absorption. Thus suppose that $n \in \N_+$ and that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time birth-death chain on $\N_n = \{0, 1, \ldots, n\}$ with $\alpha(0) = \beta(0) = 0$, $\alpha(n) = \beta(n) = 0$, and $\alpha(x) \gt 0$, $\beta(x) \gt 0$ for $x \in \{1, 2, \ldots, n - 1\}$. So the endpoints 0 and $n$ are absorbing, and all other states lead to each other and to the endpoints. Let $T = \inf\{t \in [0, \infty): X_t \in \{0, n\}\}$, the time until absorption, and for $x \in S$ let $v_n(x) = \P(X_T = 0 \mid X_0 = x)$ and $m_n(x) = \E(T \mid X_0 = x)$. The definitions make sense since $T$ is finite with probability 1.
The absorption probability function for state 0 is given by $v_n(x) = \frac{1}{A_n} \sum_{i=x}^{n-1} \frac{\beta(1) \cdots \beta(i)}{\alpha(1) \cdots \alpha(i)} \text{ for } x \in \N_n \text{ where } A_n = \sum_{i=0}^{n-1} \frac{\beta(1) \cdots \beta(i)}{\alpha(1) \cdots \alpha(i)}$
Proof
The jump chain $\bs Y = \{Y_n: n \in \N\}$ is a discrete-time birth-death chain on $\N_n$ with $0$ and $n$ absorbing. Also, $\bs X$ is absorbed into 0 or $n$ if and only if $\bs Y$ is absorbed into 0 or $n$, respectively. So the result follows from the corresponding result for $\bs Y$, since $q(x) / p(x) = \beta(x) / \alpha(x)$ for $x \in \{1, 2, \ldots, n - 1\}$.
Note that $A_n \to A$ as $n \to \infty$ where $A$ is the constant above for the absorption probability at 0 with the infinite state space $\N$. If $A \lt \infty$ then $v_n(x) \to v(x)$ as $n \to \infty$ for $x \in \N$.
Time Reversal
Essentially, every irreducible continuous-time birth-death chain is reversible.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a positive recurrent birth-death chain on an integer interval $S \subseteq \Z$ with birth rate function $\alpha: S \to [0, \infty)$ and death rate function $\beta: S \to [0, \infty)$. Assume that $\alpha(x) \gt 0$, except at the maximum value of $S$, if there is one, and similarly that $\beta(x) \gt 0$, except at the minimum value of $S$, if there is one. Then $\bs X$ is reversible.
Proof
Note that $\bs X$ is irreducible. As usual, let $G$ denote the generator matrix. It's easy to see that under the assumptions, $G(x, y) = 0$ implies $G(y, x) = 0$ for $(x, y) \in S^2$, and that the Kolmogorov cycle condition is satisfied: For every $n \in \N_+$ and every sequence $(x_1, x_2, \ldots, x_n) \in S^n$, $G(x_1, x_2) \cdots G(x_{n-1}, x_n) G(x_n, x_1) = G(x_1, x_n) G(x_n, x_{n-1}) \cdots G(x_2, x_1)$
In the important special case of a birth-death chain on $\N$, we can verify the balance equations directly.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time birth-death chain on $S = \N$ and with birth rate $\alpha(x) \gt 0$ for all $x \in \N$ and death rate $\beta(x) \gt 0$ for all $x \in \N_+$. Then $\bs X$ is reversible.
Proof
We just need to show that the balance equation for a reversible chain holds, and this was actually done in the result above. As before, let $g: \N \to (0, \infty)$ be the function given by $g(x) = \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)}, \quad x \in \N$ The only nontrivial case of the balance equation $g(x) G(x, y) = g(y) G(y, x)$ for $(x, y) \in S^2$ is $g(x) G(x, x + 1) = g(x + 1) G(x + 1, x) = \frac{\alpha(0) \cdots \alpha(x)}{\beta(1) \cdots \beta(x)}, \quad x \in \N$ It follows from the general theory that $g$ is invariant for $\bs X$ and that $\bs X$ is reversible with respect to $g$. Since we actually know from our work above that $g$ is the only positive invariant function, up to multiplication by positive constants, we can simply say that $\bs X$ is reversible.
In the positive recurrent case, it follows that the birth-death chain is stochastically the same, forward or backward in time, if the chain has the invariant distribution.
Examples and Special Cases
Regular and Irregular Chains
Our first exercise gives two pure birth chains, each with an unbounded exponential parameter function. One is regular and one is irregular.
Consider the pure birth process $\bs{X} = \{X_t: t \in [0, \infty)\}$ on $\N_+$ with birth rate function $\alpha$.
1. If $\alpha(x) = x^2$ for $x \in \N_+$, then $\bs{X}$ is not regular.
2. If $\alpha(x) = x$ for $x \in \N_+$, then $\bs{X}$ is regular.
Proof
The jump chain $\bs Y$ is deterministic, except for the initial state. Given $Y_0 = x \in \N_+$, we have $Y_n = n + x$. Hence
1. $\sum_{n=0}^\infty \frac{1}{\lambda(Y_n)} = \sum_{n=0}^\infty \frac{1}{(n + x)^2} \lt \infty$
2. $\sum_{n=0}^\infty \frac{1}{\lambda(Y_n)} = \sum_{n=0}^\infty \frac{1}{n + x} = \infty$
So the results follow from the general theory.
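The dichotomy in this result can be seen in simulation: for the irregular chain the jump times accumulate, while for the regular chain they grow without bound. A Python sketch (the seed and jump count are arbitrary choices):

```python
import random

def nth_jump_time(alpha, x0, n):
    """Simulate the time of the n-th jump of a pure birth chain started at x0."""
    t, x = 0.0, x0
    for _ in range(n):
        t += random.expovariate(alpha(x))  # exponential holding time with rate alpha(x)
        x += 1
    return t

random.seed(1)
# alpha(x) = x^2: the jump times stay bounded (explosion in finite time)
print(nth_jump_time(lambda x: x * x, 1, 10_000))
# alpha(x) = x: the jump times grow without bound, roughly like log n
print(nth_jump_time(lambda x: float(x), 1, 10_000))
```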
Constant Birth and Death Rates
Our next examples consider birth-death chains with constant birth and death rates, except perhaps at the endpoints. Note that such chains will be regular since the exponential parameter function $\lambda$ is bounded.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is the birth-death chain on $\N$, with constant birth rate $\alpha \in (0, \infty)$ on $\N$ and constant death rate $\beta \in (0, \infty)$ on $\N_+$.
1. $\bs X$ is transient if $\beta \lt \alpha$.
2. $\bs X$ is null recurrent if $\beta = \alpha$.
3. $\bs X$ is positive recurrent if $\beta \gt \alpha$. The invariant distribution is the geometric distribution on $\N$ with parameter $\alpha / \beta$: $f(x) = \left( 1 - \frac{\alpha }{\beta} \right) \left( \frac{\alpha}{\beta} \right)^x, \quad x \in \N$
Proof
Note that $\bs X$ is irreducible since the birth rate is positive on $\N$ and the death rate is positive on $\N_+$. The series in the results above are geometric series: $\frac{\beta(1) \cdots \beta(x)}{\alpha(1) \cdots \alpha(x)} = \left(\frac{\beta}{\alpha}\right)^x, \; \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)} = \left(\frac{\alpha}{\beta}\right)^x, \quad x \in \N$
Next we consider the chain with $0$ absorbing. As in the general discussion above, let $v$ denote the function that gives the probability of absorption and $m$ the function that gives the mean time to absorption.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is the birth-death chain on $\N$ with constant birth rate $\alpha \in (0, \infty)$ on $\N_+$, constant death rate $\beta \in (0, \infty)$ on $\N_+$, and with 0 absorbing. Then
1. If $\beta \ge \alpha$ then $v(x) = 1$ for $x \in \N$. If $\beta \lt \alpha$ then $v(x) = (\beta / \alpha)^x$ for $x \in \N$.
2. If $\alpha \ge \beta$ then $m(x) = \infty$ for $x \in \N_+$. If $\alpha \lt \beta$ then $m(x) = x / (\beta - \alpha)$ for $x \in \N$.
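As a quick sanity check, part (b) can be verified by simulating the jump chain and accumulating the exponential holding times. A Python sketch with hypothetical rates $\alpha = 1$ and $\beta = 2$:

```python
import random

def time_to_extinction(x, alpha, beta):
    """Simulate the absorption time at 0 for constant rates (run only when beta > alpha)."""
    t = 0.0
    while x > 0:
        t += random.expovariate(alpha + beta)  # holding time in any positive state
        x += 1 if random.random() < alpha / (alpha + beta) else -1
    return t

random.seed(2)
runs = 20_000
estimate = sum(time_to_extinction(3, 1.0, 2.0) for _ in range(runs)) / runs
print(estimate)  # should be close to m(3) = 3 / (2 - 1) = 3
```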
Next let's look at chains on a finite state space. Let $n \in \N_+$ and define $\N_n = \{0, 1, \ldots, n\}$.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a continuous-time birth-death chain on $\N_n$ with constant birth rate $\alpha \in (0, \infty)$ on $\{0, 1, \ldots, n - 1\}$ and constant death rate $\beta \in (0, \infty)$ on $\{1, 2, \ldots, n\}$. The invariant probability density function $f_n$ is given as follows:
1. If $\alpha \ne \beta$ then $f_n(x) = \frac{(\alpha / \beta)^x (1 - \alpha / \beta)}{1 - (\alpha / \beta)^{n+1}}, \quad x \in \N_n$
2. If $\alpha = \beta$ then $f_n(x) = 1 / (n + 1)$ for $x \in \N_n$
Note that when $\alpha = \beta$, the invariant distribution is uniform on $\N_n$. Our final exercise considers the absorption probability at 0 when both endpoints are absorbing. Let $v_n$ denote the function that gives the probability of absorption into 0, rather than $n$.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is the birth-death chain on $\N_n$ with constant birth rate $\alpha$ and constant death rate $\beta$ on $\{1, 2, \ldots, n - 1\}$, and with 0 and $n$ absorbing.
1. If $\alpha \ne \beta$ then $v_n(x) = \frac{(\beta / \alpha)^x - (\beta / \alpha)^n}{1 - (\beta / \alpha)^n}, \quad x \in \N_n$
2. If $\alpha = \beta$ then $v_n(x) = (n - x) / n$ for $x \in \N_n$.
Linear Birth and Death Rates
For our next discussion, consider individuals that act identically and independently. Each individual splits into two at exponential rate $a \in (0, \infty)$ and dies at exponential rate $b \in (0, \infty)$.
Let $X_t$ denote the population at time $t \in [0, \infty)$. Then $\bs X = \{X_t: t \in [0, \infty)\}$ is a regular, continuous-time birth-death chain with birth and death rate functions given by $\alpha(x) = a x$ and $\beta(x) = b x$ for $x \in \N$.
Proof
The fact that $\bs X$ is a continuous-time Markov chain follows from the assumptions. Moreover, since the individuals act independently, the overall birth and death rates when the population is $x \in \N$ are simply $x$ times the individual birth and death rates. The chain is regular since $\sum_{x=1}^\infty \frac{1}{(a + b) x} = \infty$
Note that $0$ is absorbing since the population is extinct, so as usual, our interest is in the probability of absorption and the mean time to absorption as functions of the initial state. The probability of absorption is the same as for the chain with constant birth and death rates discussed above.
The absorption probability function $v$ is given as follows:
1. $v(x) = 1$ for all $x \in \N$ if $b \ge a$.
2. $v(x) = (b / a)^x$ for $x \in \N$ if $b \lt a$.
Proof
These results follow from the general results above since $\beta(x) / \alpha(x) = b / a$ for $x \in \N_+$. Hence for $x \in \N$, $\sum_{i=x}^\infty (b / a)^i = \begin{cases} \infty & b \ge a \\ \frac{(b/a)^x}{1 - b / a} & b \lt a \end{cases}$
The mean time to absorption is more interesting.
The mean time to absorption function $m$ is given as follows:
1. If $a \ge b$ then $m(x) = \infty$ for $x \in \N_+$.
2. If $a \lt b$ then $m(x) = \sum_{j=1}^x \frac{b^{j-1}}{a^j} \int_0^{a/b} \frac{u^{j-1}}{1 - u} du, \quad x \in \N$
Proof
1. From the general results above, note that $m(1) = \sum_{k=0}^\infty \frac{1}{(k + 1) b} \left(\frac{a}{b}\right)^k$ The sum is infinite if $a \ge b$.
2. If $a \lt b$ then again from the general formula above, $m(x) = \sum_{j=1}^x \sum_{k=j-1}^\infty \frac{1}{(k + 1) b} \left(\frac{a}{b}\right)^{k - j + 1} = \sum_{j=1}^x \frac{1}{b} \left(\frac{b}{a}\right)^j \sum_{k=j-1}^\infty \frac{1}{k + 1}\left(\frac{a}{b}\right)^{k+1}$ The inner series converges absolutely. Moreover, for $k \in \N$, $\frac{1}{k + 1} \left(\frac{a}{b}\right)^{k+1} = \int_0^{a/b} u^k du$ Substituting and interchanging the sum and integral gives $m(x) = \sum_{j=1}^x \frac{b^{j-1}}{a^j}\int_0^{a/b} \left( \sum_{k=j-1}^\infty u^k \right) du = \sum_{j=1}^x \frac{b^{j-1}}{a^j} \int_0^{a/b} \frac{u^{j-1}}{1 - u} du$
For small values of $x \in \N$, the integrals in the case $a \lt b$ can be done by elementary methods. For example, \begin{align*} m(1) & = - \frac{1}{a} \ln\left(1 - \frac{a}{b}\right) \\ m(2) & = m(1) -\frac{1}{a} - \frac{b}{a^2} \ln\left(1 - \frac{a}{b}\right) \\ m(3) & = m(2) - \frac{1}{2 a} - \frac{b}{a^2} - \frac{b^2}{a^3} \ln\left(1 - \frac{a}{b}\right) \end{align*} However, a general formula requires the introduction of a special function that is not much more helpful than the integrals themselves. The Markov chain $\bs X$ is actually an example of a branching chain. We will revisit this chain in that section.
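The integrals are also easy to evaluate numerically. A Python sketch using the midpoint rule, checked against the elementary formula for $m(1)$ (the parameter values are hypothetical):

```python
from math import log

def mean_extinction_time(x, a, b, steps=10_000):
    """Evaluate m(x) for linear rates with a < b, via the midpoint rule for the integrals."""
    h = (a / b) / steps
    total = 0.0
    for j in range(1, x + 1):
        integral = sum(((k + 0.5) * h) ** (j - 1) / (1.0 - (k + 0.5) * h)
                       for k in range(steps)) * h
        total += b ** (j - 1) / a ** j * integral
    return total

a, b = 1.0, 2.0
print(mean_extinction_time(1, a, b), -log(1 - a / b) / a)  # both about 0.6931
print(mean_extinction_time(2, a, b))                       # matches the formula for m(2)
```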
Linear Birth and Death with Immigration
We continue our previous discussion, but generalize a bit. Suppose again that we have individuals that act identically and independently. An individual splits into two at exponential rate $a \in [0, \infty)$ and dies at exponential rate $b \in [0, \infty)$. Additionally, new individuals enter the population at exponential rate $c \in [0, \infty)$. This is the immigration effect, and when $c = 0$ we have the birth-death chain in the previous discussion.
Let $X_t$ denote the population at time $t \in [0, \infty)$. Then $\bs X = \{X_t: t \in [0, \infty)\}$ is a regular, continuous-time birth-death chain with birth and death rate functions given by $\alpha(x) = a x + c$ and $\beta(x) = b x$ for $x \in \N$.
Proof
The fact that $\bs X$ is a continuous-time Markov chain follows from the assumptions. Moreover, since the individuals act independently, the overall birth rate when the population is $x \in \N$ is $a x + c$ while the death rate is $b x$. The chain is regular since $\sum_{x=1}^\infty \frac{1}{(a + b) x + c} = \infty$
The infinitesimal matrix $G$ is given as follows, for $x \in \N$:
1. $G(x, x) = -[(a + b) x + c]$
2. $G(x, x + 1) = a x + c$
3. $G(x, x - 1) = bx$
The backward and forward equations are given as follows, for $(x, y) \in \N^2$ and $t \in (0, \infty)$
1. $\frac{d}{dt} P_t(x, y) = -[(a + b)x + c] P_t(x, y) + (a x + c) P_t(x + 1, y) + b x P_t(x - 1, y)$
2. $\frac{d}{dt} P_t(x, y) = -[(a + b)y + c] P_t(x, y) + [a (y - 1) + c] P_t(x, y - 1) + b(y + 1) P_t(x, y + 1)$
We can use the forward equation to find the expected population size. Let $M_t(x) = \E(X_t \mid X_0 = x)$ for $t \in [0, \infty)$ and $x \in \N$.
For $t \in [0, \infty)$ and $x \in \N$, the mean population size $M_t(x)$ is given as follows:
1. If $a = b$ then $M_t(x) = c t + x$.
2. If $a \ne b$ then $M_t(x) = \frac{c}{a - b}\left[e^{(a - b)t} - 1 \right] + x e^{(a - b) t}$
Proof
First note that $M_t(x) = \sum_{y=0}^\infty y P_t(x, y)$ for $x \in \N$. Multiplying the forward equation above by $y$ and summing over $y \in \N$ gives \begin{align*} \sum_{y=0}^\infty y \frac{d}{dt} P_t(x, y) = & a \sum_{y=2}^\infty y(y - 1) P_t(x, y - 1) + c \sum_{y=1}^\infty y P_t(x, y - 1) \\ & -(a + b) \sum_{y=0}^\infty y^2 P_t(x, y) - c \sum_{y=0}^\infty y P_t(x, y) + b \sum_{y=0}^\infty y (y + 1) P_t(x, y + 1) \end{align*} Re-indexing the sums and using some algebra gives the first-order differential equation $\frac{d}{dt} M_t(x) = c + (a - b) M_t(x), \quad x \in \N, \, t \in (0, \infty)$ with initial condition $M_0(x) = x$. Solving the differential equation gives the result.
Note that if $b \gt a$, so that the individual death rate exceeds the birth rate, then $M_t(x) \to c / (b - a)$ as $t \to \infty$ for $x \in \N$. If $a \ge b$, so that the birth rate equals or exceeds the death rate, then $M_t(x) \to \infty$ as $t \to \infty$ for $x \in \N_+$.
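The formula for $M_t(x)$ can be checked by direct simulation of the chain. A Python sketch with hypothetical parameters (note that the total rate is positive in state 0 as long as $c \gt 0$):

```python
import random
from math import exp

def population_at(x, a, b, c, t_max):
    """Simulate the linear birth-death chain with immigration up to time t_max."""
    t, pop = 0.0, x
    while True:
        rate = (a + b) * pop + c   # total transition rate in the current state
        t += random.expovariate(rate)
        if t > t_max:
            return pop
        pop += 1 if random.random() < (a * pop + c) / rate else -1

random.seed(3)
a, b, c, t, x = 1.0, 2.0, 0.5, 3.0, 2
runs = 5_000
estimate = sum(population_at(x, a, b, c, t) for _ in range(runs)) / runs
exact = c / (a - b) * (exp((a - b) * t) - 1) + x * exp((a - b) * t)
print(estimate, exact)   # the two values should be close
```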
Next we will consider the special case with no births, but only death and immigration. In this case, the invariant distribution is easy to compute, and is one of our favorites.
Suppose that $a = 0$ and that $b, \, c \gt 0$. Then $\bs X$ is positive recurrent. The invariant distribution is Poisson with parameter $c / b$: $f(x) = e^{-c/b} \frac{(c / b)^x}{x!}, \quad x \in \N$
Proof
In terms of the general theory above, note that the invariant function $g$, unique up to multiplication by positive constants, is given by $g(x) = \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)} = \frac{c^x}{b^x x!} = \frac{(c/b)^x}{x!}, \quad x \in \N$ Hence $B = \sum_{x=0}^\infty g(x) = e^{c/b} \lt \infty$ and therefore the chain is positive recurrent with invariant PDF $f(x) = \frac{1}{B} g(x) = e^{-c/b} \frac{(c / b)^x}{x!}, \quad x \in \N$ This is the PDF of the Poisson distribution with parameter $c / b$.
The Logistics Chain
Consider a population that fluctuates between a minimum value $m \in \N_+$ and a maximum value $n \in \N_+$, where of course, $m \lt n$. Given the population size, the individuals act independently and identically. Specifically, if the population is $x \in \{m, m + 1, \ldots, n\}$ then an individual splits in two at exponential rate $a (n - x)$ and dies at exponential rate $b (x - m)$, where $a, \, b \in (0, \infty)$. Thus, an individual's birth rate decreases linearly with the population size from $a (n - m)$ to $0$ while the death rate increases linearly with the population size from $0$ to $b (n - m)$. These assumptions lead to the following definition.
The continuous-time birth-death chain $\bs X = \{X_t: t \in [0, \infty)\}$ on $S = \{m, m + 1, \ldots, n\}$ with birth rate function $\alpha$ and death rate function $\beta$ given by $\alpha(x) = a x (n - x), \; \beta(x) = b x (x - m), \quad x \in S$ is the logistic chain on $S$ with parameters $a$ and $b$.
Justification
Since the individuals act independently and identically, the overall birth and death rates when the population is $x \in S$ are simply $x$ times the birth and death rates for an individual.
Note that the logistics chain is a stochastic counterpart of the logistics differential equation, which typically has the form $\frac{dx}{dt} = c (x - m)(n - x)$ where $m, \, n, \, c \in (0, \infty)$ and $m \lt n$. Starting in $x(0) \in (m, n)$, the solution remains in $(m, n)$ for all $t \in [0, \infty)$. Of course, the logistics differential equation models a system that is continuous in time and space, whereas the logistics Markov chain models a system that is continuous in time and discrete in space.
For the logistics chain
1. The exponential parameter function $\lambda$ is given by $\lambda(x) = a x (n - x) + b x (x - m), \quad x \in S$
2. The transition matrix $Q$ of the jump chain is given by $Q(x, x - 1) = \frac{b (x - m)}{a (n - x) + b (x - m)}, \, Q(x, x + 1) = \frac{a (n - x)}{a (n - x) + b (x - m)}, \quad x \in S$
In particular, $m$ and $n$ are reflecting boundary points, and so the chain is irreducible.
The generator matrix $G$ for the logistics chain is given as follows, for $x \in S$:
1. $G(x, x) = - x[a (n - x) + b (x - m)]$
2. $G(x, x - 1) = b x (x - m)$
3. $G(x, x + 1) = a x (n - x)$
Since $S$ is finite, $\bs X$ is positive recurrent. The invariant distribution is given next.
Define $g: S \to (0, \infty)$ by $g(x) = \frac{1}{x} \binom{n - m}{x - m} \left(\frac{a}{b}\right)^{x - m}, \quad x \in S$ Then $g$ is invariant for $\bs X$.
Proof
Since we know that $\bs X$ is reversible, we just need to show that $g(x) G(x, y) = g(y) G(y, x)$ for $(x, y) \in S^2$. For the logistics chain, the only non-trivial equation is $g(x) G(x, x + 1) = g(x + 1) G(x + 1, x)$ for $x \in S$. Simple substitution and algebra show that both sides reduce to $\frac{(n - m)!}{(x - m)! (n - x - 1)!} \frac{a^{x - m + 1}}{b^{x - m}}$
Of course it now follows that the invariant probability density function $f$ for $\bs X$ is given by $f(x) = g(x) / c$ for $x \in S$ where $c$ is the normalizing constant $c = \sum_{x=m}^n \frac{1}{x} \binom{n - m}{x - m} \left(\frac{a}{b}\right)^{x - m}$ The limiting distribution of $\bs X$ has probability density function $f$.
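The invariant probability density function is simple to compute numerically. A Python sketch with hypothetical parameters:

```python
from math import comb

def logistic_invariant_pdf(m, n, a, b):
    """Invariant (and limiting) PDF of the logistics chain on {m, ..., n}."""
    g = [comb(n - m, x - m) * (a / b) ** (x - m) / x for x in range(m, n + 1)]
    c = sum(g)   # normalizing constant
    return {x: g[x - m] / c for x in range(m, n + 1)}

# Hypothetical parameters: the PDF concentrates where births and deaths balance.
pdf = logistic_invariant_pdf(2, 10, 1.0, 1.0)
print(round(sum(pdf.values()), 6), {x: round(p, 4) for x, p in pdf.items()})
```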
Other Special Birth-Death Chains
There are a number of special birth-death chains that are studied in other sections, because the models are important and lead to special insights and analytic tools. These include
• Queuing chains
• The pure death branching chain
• The Yule process, a pure birth branching chain
• The general birth-death branching chain
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
In a queuing model, customers arrive at a station for service. As always, the terms are generic; here are some typical examples:
• The customers are persons and the service station is a store.
• The customers are file requests and the service station is a web server.
Queuing models can be quite complex, depending on such factors as the probability distribution that governs the arrival of customers, the probability distribution that governs the service of customers, the number of servers, and the behavior of the customers when all servers are busy. Indeed, queuing theory has its own lexicon to indicate some of these factors. In this section, we will discuss a few of the basic, continuous-time queuing chains. In a general sense, the main interest in any queuing model is the number of customers in the system as a function of time, and in particular, whether the servers can adequately handle the flow of customers. This section parallels the section on discrete-time queuing chains.
Our main assumptions are as follows:
1. There are $k \in \N_+ \cup \{\infty\}$ servers.
2. The customers arrive according to a Poisson process with rate $\mu \in (0, \infty)$.
3. If all of the servers are busy, a new customer goes to the end of a single line of customers awaiting service.
4. The time required to service a customer has an exponential distribution with parameter $\nu \in (0, \infty)$.
5. The service times are independent from customer to customer, and are independent of the arrival process.
Assumption (b) means that the times between arrivals of customers are independent and exponentially distributed, with parameter $\mu$. Assumption (c) means that we have a first-in, first-out model, often abbreviated FIFO. Note that there are three parameters in the model: the number of servers $k$, the exponential parameter $\mu$ that governs the arrivals, and the exponential parameter $\nu$ that governs the service times. The special cases $k = 1$ (a single server) and $k = \infty$ (infinitely many servers) deserve special attention. As you might guess, the assumptions lead to a continuous-time Markov chain.
Let $X_t$ denote the number of customers in the system (waiting in line or being served) at time $t \in [0, \infty)$. Then $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $\N$, known as the M/M/$k$ queuing chain.
In terms of the basic structure of the chain, the important quantities are the exponential parameters for the states and the transition matrix for the embedded jump chain.
For the M/M/$k$ chain $\bs{X}$,
1. The exponential parameter function $\lambda$ is given by $\lambda(x) = \mu + \nu x$ if $x \in \N$ and $x \lt k$ and $\lambda(x) = \mu + \nu k$ if $x \in \N$ and $x \ge k$.
2. The transition matrix $Q$ for the jump chain is given by \begin{align*} Q(x, x - 1) & = \frac{\nu x}{\mu + \nu x}, \; Q(x, x + 1) = \frac{\mu}{\mu + \nu x}, \quad x \in \N, \, x \lt k \\ Q(x, x - 1) & = \frac{\nu k}{\mu + \nu k}, \; Q(x, x + 1) = \frac{\mu}{\mu + \nu k}, \quad x \in \N, \, x \ge k \end{align*}
So the M/M/$k$ chain is a birth-death chain with 0 as a reflecting boundary point. That is, in state $x \in \N_+$, the next state is either $x - 1$ or $x + 1$, while in state 0, the next state is 1. When $k = 1$, the single-server queue, the exponential parameter in state $x \in \N_+$ is $\mu + \nu$ and the transition probabilities for the jump chain are $Q(x, x - 1) = \frac{\nu}{\mu + \nu}, \; Q(x, x + 1) = \frac{\mu}{\mu + \nu}$ When $k = \infty$, the infinite server queue, the cases above for $x \ge k$ are vacuous, so the exponential parameter in state $x \in \N$ is $\mu + x \nu$ and the transition probabilities are $Q(x, x - 1) = \frac{\nu x}{\mu + \nu x}, \; Q(x, x + 1) = \frac{\mu}{\mu + \nu x}$
Infinitesimal Generator
The infinitesimal generator of the chain gives the same information as the exponential parameter function and the jump transition matrix, but in a more compact form.
For the M/M/$k$ queuing chain $\bs{X}$, the infinitesimal generator $G$ is given by \begin{align*} G(x, x) & = -(\mu + \nu x), \; G(x, x - 1) = \nu x, \; G(x, x + 1) = \mu; \quad x \in \N, \, x \lt k \\ G(x, x) & = -(\mu + \nu k), \; G(x, x - 1) = \nu k, \; G(x, x + 1) = \mu; \quad x \in \N, \, x \ge k \end{align*}
So for $k = 1$, the single server queue, the generator $G$ is given by $G(0, 0) = -\mu$, $G(0, 1) = \mu$, while for $x \in \N_+$, $G(x, x) = -(\mu + \nu)$, $G(x, x - 1) = \nu$, $G(x, x + 1) = \mu$. For $k = \infty$, the infinite server case, the generator $G$ is given by $G(x, x) = -(\mu + \nu x)$, $G(x, x - 1) = \nu x$, and $G(x, x + 1) = \mu$ for all $x \in \N$.
Classification and Limiting Behavior
Again, let $\bs{X} = \{X_t: t \in [0, \infty)\}$ denote the M/M/$k$ queuing chain with arrival rate $\mu$, service rate $\nu$ and with $k \in \N_+ \cup \{\infty\}$ servers. As noted in the introduction, of fundamental importance is the question of whether the servers can handle the flow of customers, so that the queue eventually empties, or whether the length of the queue grows without bound. To understand the limiting behavior, we need to classify the chain as transient, null recurrent, or positive recurrent, and find the invariant functions. This will be easy to do using our results for more general continuous-time birth-death chains. Note first that $\bs{X}$ is irreducible. It's best to consider the single server and infinite server cases individually.
The single server queuing chain $\bs{X}$ is
1. Transient if $\nu \lt \mu$.
2. Null recurrent if $\nu = \mu$.
3. Positive recurrent if $\nu \gt \mu$. The invariant distribution is the geometric distribution on $\N$ with parameter $\mu / \nu$. The invariant probability density function $f$ is given by $f(x) = \left(1 - \frac{\mu}{\nu}\right) \left(\frac{\mu}{\nu}\right)^x, \quad x \in \N$
Proof
This follows directly from results for the continuous-time birth-death chain, with constant birth rate $\mu$ on $\N$ and constant death rate $\nu$ on $\N_+$.
The result makes intuitive sense. If the service rate is less than the arrival rate, the chain is transient and the length of the queue grows to infinity. If the service rate is greater than the arrival rate, the chain is positive recurrent. At the boundary between these two cases, when the arrival and service rates are the same, the chain is null recurrent.
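The invariant distribution can be observed as a long-run time average, by simulating the chain and recording the fraction of time spent in each state. A Python sketch with hypothetical rates $\mu = 1$ and $\nu = 2$ (the horizon and seed are arbitrary):

```python
import random

def mm1_occupation(mu, nu, t_max, x=0):
    """Long-run fraction of time an M/M/1 queue spends in each state (nu > mu)."""
    occupation, t = {}, 0.0
    while t < t_max:
        rate = mu + (nu if x > 0 else 0.0)   # arrivals always; service only if x > 0
        hold = random.expovariate(rate)
        occupation[x] = occupation.get(x, 0.0) + min(hold, t_max - t)
        t += hold
        if t < t_max:
            x += 1 if random.random() < mu / rate else -1
    return {state: time / t_max for state, time in sorted(occupation.items())}

random.seed(4)
result = mm1_occupation(1.0, 2.0, 50_000.0)
print({state: round(p, 3) for state, p in list(result.items())[:5]})
# compare with the geometric PDF: 0.5, 0.25, 0.125, 0.0625, ...
```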
The infinite server queuing chain $\bs{X}$ is positive recurrent. The invariant distribution is the Poisson distribution with parameter $\mu / \nu$. The invariant probability density function $f$ is given by $f(x) = e^{-\mu / \nu} \frac{\left(\mu / \nu\right)^x}{x!}, \quad x \in \N$
Proof
This also follows from results for the continuous-time birth-death chain. In the notation of that section, the birth rate is constant, $\alpha(x) = \mu$ for $x \in \N$, and the death rate is proportional to the number of customers in the system: $\beta(x) = \nu x$ for $x \in \N_+$. Hence the invariant function (unique up to multiplication by constants) is $x \mapsto \frac{\alpha(0) \cdots \alpha(x - 1)}{\beta(1) \cdots \beta(x)} = \frac{\mu^x}{\nu^x x!}$ Normalized, this is the Poisson distribution with parameter $\mu / \nu$.
This result also makes intuitive sense: with infinitely many servers, every arriving customer is served immediately, so the servers can always handle the flow of customers, no matter how large the arrival rate.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Introduction
Generically, suppose that we have a system of particles that can generate or split into other particles of the same type. Here are some typical examples:
• The particles are biological organisms that reproduce.
• The particles are neutrons in a chain reaction.
• The particles are electrons in an electron multiplier.
We assume that the lifetime of each particle is exponentially distributed with parameter $\alpha \in (0, \infty)$, and at the end of its life, is replaced by a random number of new particles that we will refer to as children of the original particle. The number of children $N$ of a particle has probability density function $f$ on $\N$. The particles act independently, so in addition to being identically distributed, the lifetimes and the number of children are independent from particle to particle. Finally, we assume that $f(1) = 0$, so that a particle cannot simply die and be replaced by a single new particle. Let $\mu$ and $\sigma^2$ denote the mean and variance of the number of offspring of a single particle. So $\mu = \E(N) = \sum_{n=0}^\infty n f(n), \quad \sigma^2 = \var(N) = \sum_{n=0}^\infty (n - \mu)^2 f(n)$ We assume that $\mu$ is finite and so $\sigma^2$ makes sense. In our study of discrete-time Markov chains, we studied branching chains in terms of generational time. Here we want to study the model in real time.
Let $X_t$ denote the number of particles at time $t \in [0, \infty)$. Then $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a continuous-time Markov chain on $\N$, known as a branching chain. The exponential parameter function $\lambda$ and jump transition matrix $Q$ are given by
1. $\lambda(x) = \alpha x$ for $x \in \N$
2. $Q(x, x + k - 1) = f(k)$ for $x \in \N_+$ and $k \in \N$.
Proof
That $\bs X$ is a continuous-time Markov chain follows from the assumptions and the basic structure of continuous-time Markov chains. It turns out that the assumption that $\mu \lt \infty$ implies that $\bs X$ is regular, so that $\tau_n \to \infty$ as $n \to \infty$, where $\tau_n$ is the time of the $n$th jump for $n \in \N_+$.
1. Starting with $x$ particles, the time of the first state change is the minimum of $x$ independent variables, each exponentially distributed with parameter $\alpha$. As we know, this minimum is also exponentially distributed with parameter $\alpha x$.
2. Starting in state $x \in \N_+$, the next state will be $x + k - 1$ for $k \in \N$, if the particle dies and leaves $k$ children in her place. This happens with probability $f(k)$.
Of course 0 is an absorbing state, since this state means extinction with no particles. (Note that $\lambda(0) = 0$ and so by default, $Q(0, 0) = 1$.) So with a branching chain, there are essentially two types of behavior: population extinction or population explosion.
For the branching chain $\bs X = \{X_t: t \in [0, \infty)\}$ one of the following events occurs with probability 1:
1. Extinction: $X_t = 0$ for some $t \in [0, \infty)$ and hence $X_s = 0$ for all $s \ge t$.
2. Explosion: $X_t \to \infty$ as $t \to \infty$.
Proof
If $f(0) \gt 0$ then all states lead to the absorbing state 0 and hence the set of positive states $\N_+$ is transient. With probability 1, the jump chain $\bs Y$ visits a transient state only finitely many times, so with probability 1 either $Y_n = 0$ for some $n \in \N$ or $Y_n \to \infty$ as $n \to \infty$. If $f(0) = 0$ then $Y_n$ is strictly increasing in $n$, since $f(1) = 0$ by assumption. Hence with probability 1, $Y_n \to \infty$ as $n \to \infty$.
Without the assumption that $\mu \lt \infty$, explosion can actually occur in finite time. On the other hand, the assumption that $f(1) = 0$ is for convenience. Without this assumption, $\bs X$ would still be a continuous-time Markov chain, but as discussed in the Introduction, the exponential parameter function would be $\lambda(x) = \alpha [1 - f(1)] x$ for $x \in \N$ and the jump transition matrix would be $Q(x, x + k - 1) = \frac{f(k)}{1 - f(1)}, \quad x \in \N_+, \; k \in \{0, 2, 3, \ldots\}$
Because all particles act identically and independently, the branching chain starting with $x \in \N_+$ particles is essentially $x$ independent copies of the branching chain starting with 1 particle. In many ways, this is the fundamental insight into branching chains, and in particular, means that we can often condition on $X(0) = 1$.
Generator and Transition Matrices
As usual, we will let $\bs P = \{P_t: t \in [0, \infty)\}$ denote the semigroup of transition matrices of $\bs X$, so that $P_t(x, y) = \P(X_t = y \mid X_0 = x)$ for $(x, y) \in \N^2$. Similarly, $G$ denotes the infinitesimal generator matrix of $\bs X$.
The infinitesimal generator $G$ is given by \begin{align*} G(x, x) & = -\alpha x, \quad x \in \N \\ G(x, x + k - 1) & = \alpha x f(k), \quad x \in \N_+, \, k \in \N \end{align*}
Proof
This follows immediately from the exponential parameter function and the jump transition matrix above.
The Kolmogorov backward equation is $\frac{d}{dt} P_t(x, y) = -\alpha x P_t(x, y) + \alpha x \sum_{k=0}^\infty f(k) P_t(x + k - 1, y), \quad (x, y) \in \N^2$
Proof
The backward equation is $\frac{d}{dt} P_t = G P_t$, so the result follows from the previous theorem.
Unlike some of our other continuous-time models, the jump chain $\bs Y$ governed by $Q$ is not the discrete-time version of the model. That is, $\bs Y$ is not a discrete-time branching chain, since in discrete time, the index $n$ represents the $n$th generation, whereas here it represents the $n$th time that a particle reproduces. However, there are lots of discrete-time branching chains embedded in the continuous-time chain.
Fix $t \in (0, \infty)$ and define $\bs Z_t = \{X_{n t}: n \in \N\}$. Then $\bs Z_t$ is a discrete-time branching chain with offspring probability density function $f_t$ given by $f_t(x) = P_t(1, x)$ for $x \in \N$.
Proof
In general, we know that sampling a (homogeneous) continuous-time Markov chain at multiples of a fixed $t \in (0, \infty)$, results in a (homogeneous) discrete-time Markov chain. For $\bs Z_t$ to be a branching chain, we just need to note that $P_t(x, y) = f_t^{*x}(y), \quad (x, y) \in \N^2$ where $f_t^{*x}$ is the convolution power of $f_t$ of order $x$. This is a consequence of the fundamental fact that $X_t$ given $X_0 = x$ has the same distribution as the sum of $x$ independent copies of $X_t$ given $X_0 = 1$. Recall that the PDF of a sum of independent variables is the convolution of the individual PDFs.
Probability Generating Functions
As in the discrete case, probability generating functions are an important analytic tool for continuous-time branching chains.
For $t \in [0, \infty)$ let $\Phi_t$ denote the probability generating function of $X_t$ given $X_0 = 1$: $\Phi_t(r) = \E\left(r^{X_t} \mid X_0 = 1\right) = \sum_{x=0}^\infty r^x P_t(1, x)$ Let $\Psi$ denote the probability generating function of $N$: $\Psi(r) = \E\left(r^N\right) = \sum_{n=0}^\infty r^n f(n)$ The generating functions are defined (the series are absolutely convergent) at least for $r \in (-1, 1]$.
The collection of generating functions $\bs \Phi = \{\Phi_t: t \in [0, \infty)\}$ gives the same information as the collection of probability density functions $\{P_t(1, \cdot): t \in [0, \infty)\}$. With the fundamental insight that the branching process starting with one particle determines the branching process in general, $\bs \Phi$ actually determines the transition semigroup $\bs P = \{P_t: t \in [0, \infty)\}$.
For $t \in [0, \infty)$ and $x \in \N$, the probability generating function of $X_t$ given $X_0 = x$ is $\Phi_t^x$: $\sum_{y=0}^\infty r^y P_t(x, y) = [\Phi_t(r)]^x$
Proof
Again, given $X_0 = x$, the number of particles $X_t$ at time $t$ has the same distribution as the sum of $x$ independent copies of $X_t$ given $X_0 = 1$. Recall that the PGF of a sum of independent variables is the product of the PGFs of the variables.
Note that $\Phi_t$ is the generating function of the offspring distribution for the embedded discrete-time branching chain $\bs Z_t = \{X_{n t}: n \in \N\}$ for $t \in (0, \infty)$. On the other hand, $\Psi$ is the generating function of the offspring distribution for the continuous-time chain. So our main goal in this discussion is to see how $\bs{\Phi}$ is built from $\Psi$. Because $\bs P$ is a semigroup under matrix multiplication, and because the particles act identically and independently, $\bs \Phi$ is a semigroup under composition.
$\Phi_{s+t} = \Phi_s \circ \Phi_t$ for $s, \, t \in [0, \infty)$.
Proof
Using the semigroup property (the Chapman-Kolmogorov equations) and the previous result we have \begin{align*} \Phi_{s+t}(r) & = \sum_{y=0}^\infty r^y P_{s+t}(1, y) = \sum_{y=0}^\infty r^y \sum_{x=0}^\infty P_s(1, x) P_t(x, y) = \sum_{x=0}^\infty P_s(1, x) \sum_{y=0}^\infty r^y P_t(x, y) \\ & = \sum_{x=0}^\infty P_s(1, x) [\Phi_t(r)]^x = \Phi_s[\Phi_t(r)] \end{align*}
Note also that $\Phi_0(r) = \E(r^{X_0} \mid X_0 = 1) = r$ for all $r \in \R$. This also follows from the semigroup property: $\Phi_0 = \Phi_0 \circ \Phi_0$. The fundamental relationship between the collection of generating functions $\bs \Phi$ and the generating function $\Psi$ is given in the following theorem:
The mapping $t \mapsto \Phi_t$ satisfies the differential equation $\frac{d}{dt} \Phi_t = \alpha (\Psi \circ \Phi_t - \Phi_t)$
Proof
Using the Kolmogorov backward equation we have $\frac{d}{dt} \Phi_t(r) = \sum_{x=0}^\infty r^x \frac{d}{dt} P_t(1, x) = \sum_{x=0}^\infty r^x G P_t(1, x)$ Using the generator above, $G P_t(1, x) = \sum_{y = 0}^\infty G(1, y) P_t(y, x) = - \alpha P_t(1, x) + \sum_{k=0}^\infty \alpha f(k) P_t(k, x), \quad x \in \N$ Substituting and using the result above gives \begin{align*} \frac{d}{dt} \Phi_t(r) & = \sum_{x=0}^\infty r^x \left[-\alpha P_t(1, x) + \sum_{k=0}^\infty \alpha f(k) P_t(k, x)\right] = - \alpha \sum_{x=0}^\infty r^x P_t(1, x) + \alpha \sum_{x=0}^\infty \sum_{k=0}^\infty r^x f(k) P_t(k, x) \\ & = -\alpha \Phi_t(r) + \alpha \sum_{k=0}^\infty f(k) \sum_{x=0}^\infty r^x P_t(k, x) = -\alpha \Phi_t(r) + \alpha \sum_{k=0}^\infty f(k) [\Phi_t(r)]^k = -\alpha \Phi_t(r) + \alpha \Psi[\Phi_t(r)] \end{align*}
This differential equation, along with the initial condition $\Phi_0(r) = r$ for all $r \in \R$ determines the collection of generating functions $\bs \Phi$. In fact, an implicit solution for $\Phi_t(r)$ is given by the integral equation $\int_r^{\Phi_t(r)} \frac{1}{\Psi(u) - u} du = \alpha t$ Another relationship is given in the following theorem. Here, $\Phi_t^\prime$ refers to the derivative of the generating function $\Phi_t$ with respect to its argument, of course (so $r$, not $t$).
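The differential equation can also be integrated numerically. A Python sketch using a simple Euler scheme, checked against the case $\Psi(r) = r^2$, whose closed form is derived in the discussion of the Yule process later in this section:

```python
from math import exp

def phi(r, t, alpha, psi, steps=100_000):
    """Euler integration of d/dt Phi_t(r) = alpha [psi(Phi_t(r)) - Phi_t(r)], Phi_0(r) = r."""
    h = t / steps
    u = r
    for _ in range(steps):
        u += h * alpha * (psi(u) - u)
    return u

alpha, r, t = 1.0, 0.5, 2.0
numeric = phi(r, t, alpha, lambda u: u * u)   # offspring PGF Psi(r) = r^2 (the Yule chain)
exact = r * exp(-alpha * t) / (1 - r + r * exp(-alpha * t))  # closed form derived below
print(numeric, exact)   # the two values should agree to several decimal places
```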
For $t \in [0, \infty)$, $\Phi_t^\prime = \frac{\Psi \circ \Phi_t - \Phi_t}{\Psi - \Phi_0}$
Proof
From the semigroup property, we have $\Phi_{t+s}(r) = \Phi_t[\Phi_s(r)]$ for $s, \, t \in [0, \infty)$. Differentiating with respect to $s$ and using the chain rule along with the previous theorem gives $\frac{d}{ds} \Phi_{t+s}(r) = \Phi_t^\prime[\Phi_s(r)] \frac{d}{ds} \Phi_s(r) = \Phi_t^\prime[\Phi_s(r)] \alpha \left[ \Psi(\Phi_s(r)) - \Phi_s(r)\right]$ Evaluating at $s = 0$ and using the condition $\Phi_0(r) = r$ we have $\frac{d}{dt}\Phi_t(r) = \Phi_t^\prime(r) \alpha [\Psi(r) - r]$ Using the previous theorem once again gives $\alpha \left[\Psi(\Phi_t(r)) - \Phi_t(r)\right] = \Phi_t^\prime(r) \alpha[\Psi(r) - r]$ Solving for $\Phi_t^\prime(r)$ gives the result.
Moments
In this discussion, we will study the mean and variance of the number of particles at time $t \in [0, \infty)$. Let $m_t = \E(X_t \mid X_0 = 1), \; v_t = \var(X_t \mid X_0 = 1), \quad t \in [0, \infty)$ so that $m_t$ and $v_t$ are the mean and variance, starting with a single particle. As always with a branching process, it suffices to consider a single particle:
For $t \in [0, \infty)$ and $x \in \N$,
1. $\E(X_t \mid X_0 = x) = x \, m_t$
2. $\var(X_t \mid X_0 = x) = x \, v_t$
Proof
Once again, the distribution of $X_t$ given $X_0 = x$ is the same as the distribution of the sum of $x$ independent copies of $X_t$ given $X_0 = 1$. Recall that the mean of a sum of variables is the sum of the individual means, and the variance of the sum of independent variables is the sum of the individual variances.
Recall also that $\mu$ and $\sigma^2$ are the mean and variance of the number of offspring of a particle. Here is the connection between the means:
$m_t = e^{\alpha(\mu - 1) t}$ for $t \in [0, \infty)$.
1. If $\mu \lt 1$ then $m_t \to 0$ as $t \to \infty$. This is extinction in the mean.
2. If $\mu \gt 1$ then $m_t \to \infty$ as $t \to \infty$. This is explosion in the mean.
3. If $\mu = 1$ then $m_t = 1$ for all $t \in [0, \infty)$. This is stability in the mean.
Proof
From the proof of the previous theorem, $\frac{d}{dt} \Phi_t(r) = \alpha \Phi_t^\prime(r) [\Psi(r) - r]$ Differentiating with respect to $r$, interchanging the order of differentiation on the left, and using the product rule on the right gives $\frac{d}{dt} \Phi_t^\prime(r) = \alpha \Phi_t^{\prime \prime}(r)[\Psi(r) - r] + \alpha \Phi_t^\prime(r)[\Psi^\prime(r) - 1]$ Now let $r = 1$ and recall that $\Psi(1) = 1$. We get $\frac{d}{dt} \Phi_t^\prime(1) = \alpha \Phi_t^\prime(1)[\Psi^\prime(1) - 1]$ From the basic theory of probability generating functions, $m_t = \Phi_t^\prime(1)$ and similarly, $\mu = \Psi^\prime(1)$. Hence we have $\frac{d}{dt} m_t = \alpha (\mu - 1) m_t$ Of course we have the initial condition $m_0 = 1$.
This result is intuitively very appealing. As a function of time, the expected number of particles either grows or decays exponentially, depending on whether the expected number of offspring of a particle is greater or less than one. The connection between the variances is more complicated. We assume that $\sigma^2 \lt \infty$.
If $\mu \ne 1$ then $v_t = \left[\frac{\sigma^2}{\mu - 1} + (\mu - 1)\right]\left[e^{2 \alpha (\mu - 1) t} - e^{\alpha (\mu - 1) t}\right], \quad t \in [0, \infty)$ If $\mu = 1$ then $v_t = \alpha \sigma^2 t$.
1. If $\mu \lt 1$ then $v_t \to 0$ as $t \to \infty$
2. If $\mu \ge 1$ then $v_t \to \infty$ as $t \to \infty$
Proof
Probability generating functions are naturally connected to factorial moments, so it's best to work with these. Thus, let $w_t = \E[X_t(X_t - 1) \mid X_0 = 1]$ for $t \in [0, \infty)$ and let $\delta = \E[N(N - 1)]$. These are the factorial moments of order 2. In the proof of the last theorem we showed that $\frac{d}{dt} \Phi_t^\prime(r) = \alpha \Phi_t^{\prime \prime}(r)[\Psi(r) - r] + \alpha \Phi_t^\prime(r)[\Psi^\prime(r) - 1]$ Differentiating with respect to $r$ again gives $\frac{d}{dt} \Phi_t^{\prime \prime}(r) = \alpha \Phi_t^{\prime \prime \prime}(r)[\Psi(r) - r] + 2 \alpha \Phi_t^{\prime \prime}(r)[\Psi^\prime(r) - 1] + \alpha \Phi_t^\prime(r) \Psi^{\prime \prime}(r)$ Now substitute $r = 1$. Recall that $\Phi_t^{\prime \prime}(1) = w_t$, $\Phi_t^\prime(1) = m_t = e^{\alpha (\mu - 1) t}$, $\Psi^{\prime \prime}(1) = \delta$, $\Psi^\prime(1) = \mu$, and $\Psi(1) = 1$. We get the differential equation $\frac{d}{dt} w_t = 2 \alpha (\mu - 1) w_t + \alpha \delta e^{\alpha (\mu - 1) t}$ with the initial condition $w_0 = 0$.
Suppose that $\mu \ne 1$. Then using standard methods for linear, first order differential equations with constant coefficients and an exponential forcing function, the solution is $w_t = \frac{\delta}{\mu - 1}\left[e^{2 \alpha (\mu - 1) t} - e^{\alpha (\mu - 1) t}\right]$ But $\delta = \sigma^2 + \mu^2 - \mu$, and similarly $w_t = v_t + m_t^2 - m_t$ with $m_t = e^{\alpha (\mu - 1) t}$. Substitution and some algebra then gives the result.
Suppose now that $\mu = 1$. Then also $m_t = 1$ for all $t \in [0, \infty)$ and so $\delta = \sigma^2$ and $v_t = w_t$. The differential equation above reduces simply to $\frac{d}{dt} v_t = \alpha \sigma^2$ with initial condition $v_0 = 0$ so trivially $v_t = \alpha \sigma^2 t$. Finally, in the context of part (b), note that if $\mu = 1$ we must have $\sigma^2 \gt 0$ since we have assumed that $f(1) = 0$.
If $\mu \lt 1$ so that $m_t \to 0$ as $t \to \infty$ and we have extinction in the mean, then $v_t \to 0$ as $t \to \infty$ also. If $\mu \gt 1$ so that $m_t \to \infty$ as $t \to \infty$ and we have explosion in the mean, then $v_t \to \infty$ as $t \to \infty$ also. We would expect these results. On the other hand, if $\mu = 1$ so that $m_t = 1$ for all $t \in [0, \infty)$ and we have stability in the mean, then $v_t$ grows linearly in $t$. This gives some insight into what to expect next when we consider the probability of extinction.
The Probability of Extinction
As shown above, there are two types of behavior for a branching process, either population extinction or population explosion. In this discussion, we study the extinction probability, starting as usual with a single particle: $q = \P(X_t = 0 \text{ for some } t \in (0, \infty) \mid X_0 = 1) = \lim_{t \to \infty} \P(X_t = 0 \mid X_0 = 1)$ Need we say it? The extinction probability starting with an arbitrary number of particles is easy.
For $x \in \N$, $\P(X_t = 0 \text{ for some } t \in (0, \infty) \mid X_0 = x) = \lim_{t \to \infty} \P(X_t = 0 \mid X_0 = x) = q^x$
Proof
Given $X_0 = x$, extinction has occurred by time $t$ if and only if extinction has occurred by time $t$ for each of the $x$ independent branching chains formed from the descendents of the $x$ initial particles.
We can easily relate extinction for the continuous-time branching chain $\bs X$ to extinction for any of the embedded discrete-time branching chains.
If extinction occurs for $\bs X$ then extinction occurs for $\bs Z_t$ for every $t \in (0, \infty)$. Conversely if extinction occurs for $\bs Z_t$ for some $t \in (0, \infty)$ then extinction occurs for $\bs Z_t$ for every $t \in (0, \infty)$ and extinction occurs for $\bs X$. Hence $q$ is the minimum solution in $(0, 1]$ of the equation $\Phi_t(r) = r$ for every $t \in (0, \infty)$.
Proof
The statements about the extinction event follow immediately from the fact that $0$ is absorbing, so that if $X_t = 0$ for some $t \in (0, \infty)$ then $X_s = 0$ for every $s \in [t, \infty)$. The result for the extinction probability $q$ follows from the theory of discrete-time branching chains.
So whether or not extinction is certain depends on the critical parameter $\mu$.
The extinction probability $q$ and the mean of the offspring distribution $\mu$ are related as follows:
1. If $\mu \le 1$ then $q = 1$, so extinction is certain.
2. If $\mu \gt 1$ then $0 \lt q \lt 1$, so there is a positive probability of extinction and a positive probability of explosion.
Proof
These results follow from the corresponding results for discrete-time branching chains. Fix $t \in (0, \infty)$ and recall that $m_t$ is the mean of the offspring distribution for the discrete-time chain $\bs Z_t = \{X_{nt}: n \in \N\}$. From the result above,
1. If $\mu \le 1$ then $m_t \le 1$.
2. If $\mu \gt 1$ then $m_t \gt 1$.
It would be nice to have an equation for $q$ in terms of the offspring probability generating function $\Psi$. This is also easy.
The probability of extinction $q$ is the minimum solution in $(0, 1]$ of the equation $\Psi(r) = r$.
Proof
From the result above, $\Phi_t(q) = q$ for every $t \in (0, \infty)$. Substituting $r = q$ in the differential equation above, we have $\frac{d}{dt} \Phi_t(q) = 0$ and hence $\Psi(q) = q$. As in the theory of discrete branching chains, the equation $\Psi(r) = r$ has only the solution 1 in $(0, 1]$ if $\mu = \Psi^\prime(1) \le 1$, or there are two solutions $q \in (0, 1)$ and $1$ if $\mu \gt 1$. In both cases, $q$ is the smaller solution.
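Numerically, $q$ can be found by iterating $r \mapsto \Psi(r)$ starting from 0, which converges upward to the smallest fixed point in $(0, 1]$. A Python sketch with a hypothetical offspring distribution:

```python
def extinction_probability(psi, tol=1e-12, max_iter=10_000):
    """Smallest fixed point of psi in (0, 1], found by iterating r -> psi(r) from 0."""
    r = 0.0
    for _ in range(max_iter):
        r_next = psi(r)
        if abs(r_next - r) < tol:
            return r_next
        r = r_next
    return r

# Hypothetical offspring PDF: f(0) = 1/4, f(2) = 3/4, so mu = 3/2 > 1
q = extinction_probability(lambda r: 0.25 + 0.75 * r * r)
print(q)   # the smaller root of 0.75 r^2 - r + 0.25 = 0, namely 1/3
```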
Special Models
We now turn our attention to a number of special branching chains that are important in applications or lead to interesting insights. We will use the notation established above, so that $\alpha$ is the parameter of the exponential lifetime of a particle, $Q$ is the transition matrix of the jump chain, $G$ is the infinitesimal generator matrix, and $P_t$ is the transition matrix at time $t \in [0, \infty)$. Similarly, $m_t = \E(X_t \mid X_0 = 1)$, $v_t = \var(X_t \mid X_0 = 1)$, and $\Phi_t$ are the mean, variance, and generating function of the number of particles at time $t \in [0, \infty)$, starting with a single particle. As always, be sure to try these exercises yourself before looking at the proofs and solutions.
The Pure Death Branching Chain
First we consider the branching chain in which each particle simply dies without offspring. Sadly for these particles, extinction is inevitable, but this case is still a good place to start because the analysis is simple and leads to explicit formulas. Thus, suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ is a branching process with lifetime parameter $\alpha \in (0, \infty)$ and offspring probability density function $f$ with $f(0) = 1$.
The transition matrix of the jump chain and the generator matrix are given by
1. $Q(0, 0) = 1$ and $Q(x, x - 1) = 1$ for $x \in \N_+$
2. $G(x, x) = - \alpha x$ for $x \in \N$ and $G(x, x - 1) = \alpha x$ for $x \in \N_+$
The time-varying functions are more interesting.
Let $t \in [0, \infty)$. Then
1. $m_t = e^{-\alpha t}$
2. $v_t = e^{-\alpha t} - e^{-2 \alpha t}$
3. $\Phi_t(r) = 1 - (1 - r) e^{-\alpha t}$ for $r \in \R$
4. Given $X_0 = x$ the distribution of $X_t$ is binomial with trial parameter $x$ and success parameter $e^{-\alpha t}$. $P_t(x, y) = \binom{x}{y} e^{-\alpha t y} (1 - e^{-\alpha t})^{x - y}, \quad x \in \N, \, y \in \{0, 1, \ldots, x\}$
Direct Proof
All of these results follow from the general methods above, with $\mu = \sigma = 0$ and $\Psi(r) = 1$ for $r \in \R$. But it's helpful to give direct proofs. Given $X_0 = 1$, let $\tau$ be the time until the first transition, which is simply the lifetime of the particle. So $\tau$ has the exponential distribution with parameter $\alpha$. For $t \in [0, \infty)$, $X_t$ is an indicator random variable (taking just values 0 and 1) with $\P(X_t = 1 \mid X_0 = 1) = \P(\tau \gt t \mid X_0 = 1) = e^{-\alpha t}$ Part (a), (b), and (c) are standard results for an indicator variable. For part (d), given $X_0 = x$, each of the $x$ particles, independently, is still alive at time $t$ with probability $e^{-\alpha t}$. Hence the number of particles still alive has the binomial distribution with parameters $x$ and $e^{-\alpha t}$.
In particular, note that $P_t(x, 0) = (1 - e^{-\alpha t})^x \to 1$ as $t \to \infty$. That is, the probability of extinction by time $t$ increases to 1 exponentially fast. Since we have an explicit formula for the transition matrices, we can find an explicit formula for the potential matrices as well. The result uses the beta function $B$.
For $\beta \in (0, \infty)$ the potential matrix $U_\beta$ is given by $U_\beta(x, y) = \frac{1}{\alpha} \binom{x}{y} B(y + \beta / \alpha, x - y + 1), \quad x \in \N, \, y \in \{0, 1, \ldots, x\}$ For $\beta = 0$, the potential matrix $U$ is given by
1. $U(x, 0) = \infty$ for $x \in \N$
2. $U(x, y) = 1 / \alpha y$ for $x, \, y \in \N_+$ and $y \le x$.
Proof
Suppose that $\beta \gt 0$ and that $x, \, y \in \N$ with $y \le x$. By definition $U_\beta(x, y) = \int_0^\infty e^{-\beta t} P_t(x, y) \, dt = \int_0^\infty e^{-\beta t} \binom{x}{y} e^{-\alpha t y} (1 - e^{-\alpha t})^{x - y} dt$ Substitute $u = e^{-\alpha t}$ so that $du = - \alpha e^{-\alpha t} dt$ or equivalently $dt = - du / \alpha u$. After some algebra, the result is $U_\beta(x, y) = \frac{1}{\alpha} \binom{x}{y} \int_0^1 u^{y + \beta / \alpha - 1} (1 - u)^{x - y} du$ By definition, the last integral is $B(y + \beta / \alpha, x - y + 1)$.
1. For $x \in \N$, $U(x, 0) = \int_0^\infty (1 - e^{-\alpha t})^x \, dt = \infty$
2. For $x, \, y \in \N_+$ with $y \le x$, the derivation above and properties of the beta function give $U(x, y) = \frac{1}{\alpha} \binom{x}{y} B(y, x - y + 1) = \frac{1}{\alpha} \binom{x}{y} \frac{(y - 1)! (x - y)!}{x!} = \frac{1}{\alpha y}$
We could argue the results for the potential $U$ directly. Recall that $U(x, y)$ is the expected time spent in state $y$ starting in state $x$. Since 0 is absorbing and all states lead to 0, $U(x, 0) = \infty$ for $x \in \N$. If $x, \, y \in \N_+$ and $y \le x$, then $x$ leads to $y$ with probability 1. Once in state $y$ the time spent in $y$ has an exponential distribution with parameter $\lambda(y) = \alpha y$, and so the mean is $1 / \alpha y$. Of course, when the chain leaves $y$, it never returns.
Recall that $\beta U_\beta$ is a transition probability matrix for $\beta \gt 0$, and in fact $\beta U_\beta(x, \cdot)$ is the probability density function of $X_T$ given $X_0 = x$ where $T$ is independent of $\bs X$ has the exponential distribution with parameter $\beta$. For the next result, recall the ascending power notation $a^{[k]} = a ( a + 1) \cdots (a + k - 1), \quad a \in \R, \, k \in \N$
For $\beta \gt 0$ and $x \in \N_+$, the function $\beta U_\beta(x, \cdot)$ is the beta-binomial probability density function with parameters $x$, $\beta / \alpha$, and 1. $\beta U_\beta(x, y) = \binom{x}{y} \frac{(\beta / \alpha)^{[y]} 1^{[x - y]}}{(1 + \beta / \alpha)^{[x]}}, \quad x \in \N, \, y \in \{0, 1, \ldots, x\}$
Proof
From the previous result and properties of the beta function, $\beta U_\beta(x, y) = \frac{\beta}{\alpha} \binom{x}{y} B(y + \beta / \alpha, x - y + 1), \quad x \in \N, \, y \in \{0, 1, \ldots, x\}$ But from properties of the beta function, $B(y + \beta / \alpha, x - y + 1) = B(\beta / \alpha, 1) \frac{(\beta / \alpha)^{[y]} 1^{[x - y]}}{(1 + \beta / \alpha)^{[x]}} = \frac{\alpha}{\beta} \frac{(\beta / \alpha)^{[y]} 1^{[x - y]}}{(1 + \beta / \alpha)^{[x]}}$ Substituting gives the result.
The Yule Process
Next we consider the pure birth branching chain in which each particle, at the end of its life, is replaced by 2 new particles. Equivalently, we can think of particles that never die, but each particle gives birth to a new particle at a constant rate. This chain could serve as the model for an unconstrained nuclear reaction, and is known as the Yule process, named for George Yule. So specifically, let $\bs X = \{X_t: t \in [0, \infty)\}$ be the branching chain with exponential parameter $\alpha \in (0, \infty)$ and offspring probability density function given by $f(2) = 1$. Explosion is inevitable, starting with at least one particle, but other properties of the Yule process are interesting. In particular, there are fascinating parallels with the pure death branching chain. Since 0 is an isolated, absorbing state, we will sometimes restrict our attention to positive states.
The transition matrix of the jump chain and the generator matrix are given by
1. $Q(0, 0) = 1$ and $Q(x, x + 1) = 1$ for $x \in \N_+$
2. $G(x, x) = - \alpha x$ for $x \in \N$ and $G(x, x + 1) = \alpha x$ for $x \in \N_+$
Since the Yule process is a pure birth process and the birth rate in state $x \in \N$ is $\alpha x$, the process is also called the linear birth chain. As with the pure death process, we can give the distribution of $X_t$ specifically.
Let $t \in [0, \infty)$. Then
1. $m_t = e^{\alpha t}$
2. $v_t = e^{2 \alpha t} - e^{\alpha t}$
3. $\Phi_t(r) = \frac{r e^{-\alpha t}}{1 - r + r e^{-\alpha t}}$ for $|r| \lt \frac{1}{1 - e^{-\alpha t}}$
4. Given $X_0 = x$, $X_t$ has the negative binomial distribution on $\N_+$ with stopping parameter $x$ and success parameter $e^{-\alpha t}$. $P_t(x, y) = \binom{y - 1}{x - 1} e^{-x \alpha t} (1 - e^{-\alpha t})^{y - x}, \quad x \in \N_+, \, y \in \{x, x + 1, \ldots\}$
Proof from the general results
Parts (a) and (b) follow from the general moment results above, with $\mu = 2$ and $\sigma^2 = 0$. For part (c), note that $\Psi(r) = r^2$ for $r \in \R$, so the integral equation for $\Phi_t$ is $\int_r^{\Phi_t(r)} \frac{du}{u^2 - u} = \alpha t$ From partial fractions, $\frac{1}{u^2 - u} = \frac{1}{u - 1} - \frac{1}{u}$, so the result follows by standard integration and algebra. We recognize $\Phi_t$ as the probability generating function of the geometric distribution on $\N_+$ with success parameter $e^{-\alpha t}$, so for part (d) we use our standard argument. Given $X_0 = x \in \N_+$, $X_t$ has the same distribution as the sum of $x$ independent copies of $X_t$ given $X_0 = 1$, and so this is the distribution of the sum of $x$ independent variables each with the geometric distribution on $\N_+$ with parameter $e^{-\alpha t}$. But this is the negative binomial distribution on $\N_+$ with parameters $x$ and $e^{-\alpha t}$.
Direct proof
As usual, let $\tau_0 = 0$ and let $\tau_n$ denote the time of the $n$th transition (birth) for $n \in \N_+$. Given $X_0 = 1$, the population is $n$ at time $\tau_{n-1}$. So the random interval $\tau_n - \tau_{n-1}$ (the time until the next birth) has the exponential distribution with parameter $\alpha n$ and these intervals are independent as $n$ varies. From a result in the section on the exponential distribution, it follows that $\tau_n = \sum_{k=1}^n (\tau_k - \tau_{k-1})$ has distribution function given by $\P(\tau_n \le t \mid X_0 = 1) = (1 - e^{-\alpha t})^n, \quad t \in [0, \infty)$ Curiously, this is also the distribution function of the maximum of $n$ independent variables, each with the exponential distribution with rate $\alpha$. Hence $\P(X_t \ge n \mid X_0 = 1) = \P(\tau_{n - 1} \le t \mid X_0 = 1) = (1 - e^{-\alpha t})^{n - 1}, \quad n \in \N_+$ and therefore $\P(X_t = n \mid X_0 = 1) = \P(X_t \ge n \mid X_0 = 1) - \P(X_t \ge n + 1 \mid X_0 = 1) = (1 - e^{-\alpha t})^{n-1} e^{-\alpha t}, \quad n \in \N_+$ So given $X_0 = 1$, $X_t$ has the geometric distribution with parameter $e^{-\alpha t}$. The other results then follow easily.
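The direct proof also yields a simple simulation scheme: run the exponential clocks one birth at a time and record the population at time $t$. Here is a minimal Python sketch (the values $\alpha = 1$, $t = 1.5$, and the number of replications are arbitrary choices for illustration) comparing the empirical distribution of $X_t$ given $X_0 = 1$ with the geometric distribution on $\N_+$ with parameter $e^{-\alpha t}$:
```python
import math
import random

def yule_population(alpha, t):
    """Simulate X_t for the Yule process given X_0 = 1: in state n,
    the waiting time until the next birth is exponential with rate alpha * n."""
    n, time = 1, random.expovariate(alpha)
    while time <= t:
        n += 1
        time += random.expovariate(alpha * n)
    return n

alpha, t, reps = 1.0, 1.5, 100_000
samples = [yule_population(alpha, t) for _ in range(reps)]
p = math.exp(-alpha * t)  # success parameter of the geometric distribution at time t
for n in range(1, 6):
    empirical = sum(1 for x in samples if x == n) / reps
    exact = (1 - p) ** (n - 1) * p
    print(f"P(X_t = {n}): simulated {empirical:.4f}, exact {exact:.4f}")
```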
Recall that the negative binomial distribution with parameters $k \in \N_+$ and $p \in (0, 1)$ governs the trial number of the $k$th success in a sequence of Bernoulli trials with success parameter $p$. So the occurrence of this distribution in the Yule process suggests such an interpretation. However, this interpretation is not nearly as obvious as with the binomial distribution in the pure death branching chain. Next we give the potential matrices.
For $\beta \in [0, \infty)$ the potential matrix $U_\beta$ is given by $U_\beta(x, y) = \frac{1}{\alpha} \binom{y - 1}{x - 1} B(x + \beta / \alpha, y - x + 1), \quad x \in \N_+, \, y \in \{x, x + 1, \ldots\}$ If $\beta \gt 0$, the function $\beta U_\beta(x, \cdot)$ is the beta-negative binomial probability density function with parameters $x$, $\beta / \alpha$, and 1: $\beta U_\beta(x, y) = \binom{y - 1}{x - 1} \frac{(\beta / \alpha)^{[x]} 1^{[y - x]}}{(1 + \beta / \alpha)^{[y]}}, \quad x \in \N_+, \, y \in \{x, x + 1, \ldots\}$
Proof
The proof is very similar to the one above. Suppose that $\beta \ge 0$ and that $x, \, y \in \N_+$ with $y \ge x$. By definition $U_\beta(x, y) = \int_0^\infty e^{-\beta t} P_t(x, y) \, dt = \int_0^\infty e^{-\beta t} \binom{y - 1}{x - 1} e^{-\alpha t x} (1 - e^{-\alpha t})^{y - x} dt$ Substitute $u = e^{-\alpha t}$ so that $du = - \alpha e^{-\alpha t} dt$ or equivalently $dt = - du / \alpha u$. After some algebra, the result is $U_\beta(x, y) = \frac{1}{\alpha} \binom{y - 1}{x - 1} \int_0^1 u^{x + \beta / \alpha - 1} (1 - u)^{y - x} du$ By definition, the last integral is $B(x + \beta / \alpha, y - x + 1)$.
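As a numerical sanity check, $\beta U_\beta(x, \cdot)$ should be a probability density function on $\{x, x + 1, \ldots\}$. The following sketch (with the arbitrary illustrative values $\alpha = 1$, $\beta = 2$, $x = 3$, and a truncated sum over $y$) evaluates the formula above, computing the beta function from log-gamma functions:
```python
import math

def beta_fn(a, b):
    # Beta function B(a, b), computed via log-gamma for numerical stability
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def U(beta, alpha, x, y):
    # U_beta(x, y) = (1/alpha) * C(y-1, x-1) * B(x + beta/alpha, y - x + 1)
    return math.comb(y - 1, x - 1) * beta_fn(x + beta / alpha, y - x + 1) / alpha

alpha, beta, x = 1.0, 2.0, 3
total = sum(beta * U(beta, alpha, x, y) for y in range(x, 5000))
print(f"sum over y of beta * U_beta({x}, y): {total:.6f}")  # should be close to 1
```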
If we think of the Yule process in terms of particles that never die, but each particle gives birth to a new particle at rate $\alpha$, then we can study the age of the particles at a given time. As usual, we can start with a single, new particle at time 0. So to set up the notation, let $\bs X = \{X_t: t \in [0, \infty)\}$ be the Yule branching chain with birth rate $\alpha \in (0, \infty)$, and assume that $X_0 = 1$. Let $\tau_0 = 0$ and for $n \in \N_+$, let $\tau_n$ denote the time of the $n$th transition (birth).
For $t \in [0, \infty)$, let $A_t$ denote the total age of the particles at time $t$. Then $A_t = \sum_{n = 0}^{X_t - 1} (t - \tau_n), \quad t \in [0, \infty)$ The random process $\bs A = \{A_t: t \in [0, \infty)\}$ is the age process.
Proof
Note that there have been $X_t - 1$ births in the interval $[0, t]$. For $n \in \{0, 1, \ldots, X_t - 1\}$, the age at time $t$ of the particle born at time $\tau_n$ is $t - \tau_n$.
Here is another expression for the age process.
Again, let $\bs A = \{A_t: t \in [0, \infty)\}$ be the age process for the Yule chain starting with a single particle. Then $A_t = \int_0^t X_s ds, \quad t \in [0, \infty)$
Proof
Suppose that $X_t = k + 1$ where $k \in \N$, so that $\tau_k \le t \lt \tau_{k+1}$. Note that $X_s = n$ for $\tau_{n-1} \le s \lt \tau_n$ and $n \in \{1, 2, \ldots, k\}$, while $X_s = k + 1$ for $\tau_k \le s \le t$. Hence $\int_0^t X_s ds = \sum_{n=1}^k n (\tau_n - \tau_{n-1}) + (k + 1) (t - \tau_k) = (k + 1) t - \sum_{n=0}^k \tau_n$ From the previous result, $A_t = \sum_{n=0}^k (t - \tau_n) = (k + 1) t - \sum_{n=0}^k \tau_n$
With the last representation, we can easily find the expected total age at time $t$.
Again, let $\bs A = \{A_t: t \in [0, \infty)\}$ be the age process for the Yule chain starting with a single particle. Then $\E(A_t) = \frac{e^{\alpha t} - 1}{\alpha}, \quad t \in [0, \infty)$
Proof
We can interchange the expected value and the integral by Fubini's theorem. So using the moment result above, $\E(A_t) = \E\left(\int_0^t X_s ds\right) = \int_0^t \E(X_s) ds = \int_0^t e^{\alpha s} ds = \frac{e^{\alpha t} - 1}{\alpha}$
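Since the age process is built directly from the birth times, the expected value formula is easy to check by simulation. Here is a minimal sketch, with arbitrary illustrative values of $\alpha$ and $t$:
```python
import math
import random

def total_age(alpha, t):
    """One sample of A_t = sum over births of (t - tau_n), for the Yule
    process starting with a single new particle at time 0."""
    birth_times = [0.0]  # tau_0 = 0: the original particle
    n, time = 1, random.expovariate(alpha)
    while time <= t:
        birth_times.append(time)
        n += 1
        time += random.expovariate(alpha * n)
    return sum(t - s for s in birth_times)

alpha, t, reps = 1.0, 1.0, 50_000
simulated = sum(total_age(alpha, t) for _ in range(reps)) / reps
exact = (math.exp(alpha * t) - 1) / alpha
print(f"E(A_t): simulated {simulated:.4f}, exact {exact:.4f}")
```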
The General Birth-Death Branching Chain
Next we consider the continuous-time branching chain in which each particle, at the end of its life, leaves either no children or two children. At each transition, the number of particles either increases by 1 or decreases by 1, and so such a branching chain is also a continuous-time birth-death chain. Specifically, let $\bs X = \{X_t: t \in [0, \infty)\}$ be a continuous-time branching chain with lifetime parameter $\alpha \in (0, \infty)$ and offspring probability density function $f$ given by $f(0) = 1 - p$, $f(2) = p$, where $p \in [0, 1]$. When $p = 0$ we have the pure death chain, and when $p = 1$ we have the Yule process. We have already studied these, so the interesting case is when $p \in (0, 1)$ so that both extinction and explosion are possible.
The transition matrix of the jump chain and the generator matrix are given by
1. $Q(0, 0) = 1$, and $Q(x, x - 1) = 1 - p$, $Q(x, x + 1) = p$ for $x \in \N_+$
2. $G(x, x) = -\alpha x$ for $x \in \N$, and $G(x, x - 1) = \alpha (1 - p) x$, $G(x, x + 1) = \alpha p x$ for $x \in \N_+$
As mentioned earlier, $\bs X$ is also a continuous-time birth-death chain on $\N$, with 0 absorbing. In state $x \in \N_+$, the birth rate is $\alpha p x$ and the death rate is $\alpha (1 - p) x$. The moment functions are given next.
For $t \in [0, \infty)$,
1. $m_t = e^{\alpha(2 p - 1) t}$
2. If $p \ne \frac{1}{2}$, $v_t = \left[\frac{4 p (1 - p)}{2 p - 1} + (2 p - 1)\right]\left[e^{2 \alpha (2 p - 1)t} - e^{\alpha (2 p - 1) t}\right]$ If $p = \frac{1}{2}$, $v_t = 4 \alpha p (1 - p) t$.
Proof
These results follow from the general formulas above for $m_t$ and $v_t$, since $\mu = 2 p$ and $\sigma^2 = 4 p (1 - p)$.
The next result gives the generating function of the offspring distribution and the extinction probability.
For the birth-death branching chain,
1. $\Psi(r) = p r^2 + (1 - p)$ for $r \in \R$.
2. $q = 1$ if $0 \lt p \le \frac{1}{2}$ and $q = \frac{1 - p}{p}$ if $\frac{1}{2} \lt p \lt 1$.
Proof
Part (a) follows directly from the definition of the probability generating function: $\Psi(r) = \sum_{x \in \N} f(x) r^x = (1 - p) + p r^2$. For part (b), recall that $q$ is the smallest solution of $\Psi(r) = r$ in $[0, 1]$. The equation $p r^2 - r + (1 - p) = 0$ factors as $(r - 1)[p r - (1 - p)] = 0$, so the roots are $r = 1$ and $r = (1 - p) / p$. If $0 \lt p \le \frac{1}{2}$ the smaller root in $[0, 1]$ is 1, while if $\frac{1}{2} \lt p \lt 1$ it is $(1 - p) / p$.
For $t \in [0, \infty)$, the generating function $\Phi_t$ is given by \begin{align*} \Phi_t(r) & = \frac{p r - (1 - p) + (1 - p)(1 - r) e^{\alpha(2 p - 1) t}}{p r - (1 - p) + p(1 - r) e^{\alpha(2 p - 1) t}}, \quad \text{if } p \ne \frac{1}{2} \\ \Phi_t(r) & = \frac{2 r + (1 - r) \alpha t}{2 + (1 - r) \alpha t}, \quad \text{if } p = \frac{1}{2} \end{align*}
Solution
The integral equation for $\Phi_t$ is $\int_r^{\Phi_t(r)} \frac{du}{p u^2 + (1 - p) - u} = \alpha t$ The denominator in the integral factors into $(u - 1)[p u - (1 - p)]$. If $p \ne \frac{1}{2}$, use partial fractions, standard integration, and some algebra. If $p = \frac{1}{2}$, the factoring is $\frac{1}{2}(u - 1)^2$ and partial fractions are not necessary. Again, use standard integration and algebra.
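For extinction, only the jump chain matters, so the extinction probability is easy to estimate by simulation. The sketch below follows the jump chain from a single particle; the population cap is a truncation heuristic for declaring non-extinction (reasonable here since, once large, the supercritical chain drifts to infinity), and the values $p = 0.7$ and the cap are arbitrary illustrative choices:
```python
import random

def goes_extinct(p, cap=10_000):
    """Follow the jump chain: from x > 0 the population moves to x + 1 with
    probability p and to x - 1 with probability 1 - p; 0 is absorbing.
    Reaching `cap` is treated as non-extinction (truncation heuristic)."""
    x = 1
    while 0 < x < cap:
        x += 1 if random.random() < p else -1
    return x == 0

p, reps = 0.7, 20_000
estimate = sum(goes_extinct(p) for _ in range(reps)) / reps
print(f"extinction probability: simulated {estimate:.4f}, exact (1 - p)/p = {(1 - p) / p:.4f}")
```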
Martingales, and their cousins sub-martingales and super-martingales, are real-valued stochastic processes that are abstract generalizations of fair, favorable, and unfair gambling processes. The importance of martingales extends far beyond gambling, and indeed these random processes are among the most important in probability theory, with an incredible number and diversity of applications.
17: Martingales
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Basic Assumptions
For our basic ingredients, we start with a stochastic process $\bs{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). So to review what all this means, $\Omega$ is the sample space, $\mathscr{F}$ the $\sigma$-algebra of events, $\P$ the probability measure on $(\Omega, \mathscr{F})$, and $X_t$ is a random variable with values in $\R$ for each $t \in T$. Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\bs{X}$ is adapted to $\mathfrak{F}$. To review again, $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$, so that $\mathscr{F}_s \subseteq \mathscr{F}_t \subseteq \mathscr{F}$ for $s, \, t \in T$ with $s \le t$, and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$, thus encoding the information available at time $t$. Finally, we assume that $\E\left(\left|X_t\right|\right) \lt \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$.
There are two important special cases of the basic setup. The simplest case, of course, is when $\mathscr{F}_t = \sigma\{X_s: s \in T, \, s \le t\}$ for $t \in T$, so that $\mathfrak{F}$ is the natural filtration associated with $\bs{X}$. Another case that arises frequently is when we have a second stochastic process $\bs{Y} = \{Y_t: t \in T\}$ on $(\Omega, \mathscr{F}, \P)$ with values in a general measure space $(S, \mathscr{S})$, and $\mathfrak{F}$ is the natural filtration associated with $\bs{Y}$. So in this case, our main assumption is that $X_t$ is measurable with respect to $\sigma\{Y_s: s \in T, \, s \le t\}$ for $t \in T$.
The theory of martingales is beautiful, elegant, and mostly accessible in discrete time, when $T = \N$. But as with the theory of Markov processes, martingale theory is technically much more complicated in continuous time, when $T = [0, \infty)$. In this case, additional assumptions about the continuity of the sample paths $t \mapsto X_t$ and the filtration $t \mapsto \mathscr{F}_t$ are often necessary in order to have a nice theory. Specifically, we will assume that the process $\bs X$ is right continuous and has left limits, and that the filtration $\mathfrak F$ is right continuous and complete. These are the standard assumptions in continuous time.
Definitions
For the basic definitions that follow, you may need to review conditional expected value with respect to a $\sigma$-algebra.
The process $\bs{X}$ is a martingale with respect to $\mathfrak{F}$ if $\E\left(X_t \mid \mathscr{F}_s\right) = X_s$ for all $s, \, t \in T$ with $s \le t$.
In the special case that $\mathfrak{F}$ is the natural filtration associated with $\bs{X}$, we simply say that $\bs{X}$ is a martingale, without reference to the filtration. In the special case that we have a second stochastic process $\bs{Y} = \{Y_t: t \in T\}$ and $\mathfrak{F}$ is the natural filtration associated with $\bs{Y}$, we say that $\bs{X}$ is a martingale with respect to $\bs{Y}$.
The term martingale originally referred to a portion of the harness of a horse, and was later used to describe gambling strategies, such as the one used in the Petersburg paradox, in which bets are doubled when a game is lost. To interpret the definitions above in terms of gambling, suppose that a gambler is at a casino, and that $X_t$ represents her fortune at time $t \in T$ and $\mathscr{F}_t$ the information available to her at time $t$. Suppose now that $s, \, t \in T$ with $s \lt t$ and that we think of $s$ as the current time, so that $t$ is a future time. If $\bs{X}$ is a martingale with respect to $\mathfrak{F}$ then the games are fair in the sense that the gambler's expected fortune at the future time $t$ is the same as her current fortune at time $s$. To venture a bit from the casino, suppose that $X_t$ is the price of a stock, or the value of a stock index, at time $t \in T$. If $\bs X$ is a martingale, then the expected value at a future time, given all of our information, is the present value.
But as we will see, martingales are useful in probability far beyond the application to gambling and even far beyond financial applications generally. Indeed, martingales are of fundamental importance in modern probability theory. Here are two related definitions, with equality in the martingale condition replaced by inequalities.
Suppose again that the process $\bs X$ and the filtration $\mathfrak F$ satisfy the basic assumptions above.
1. $\bs{X}$ is a sub-martingale with respect to $\mathfrak{F}$ if $\E\left(X_t \mid \mathscr{F}_s\right) \ge X_s$ for all $s, \, t \in T$ with $s \le t$.
2. $\bs{X}$ is a super-martingale with respect to $\mathfrak{F}$ if $\E\left(X_t \mid \mathscr{F}_s\right) \le X_s$ for all $s, \, t \in T$ with $s \le t$.
In the gambling setting, a sub-martingale models games that are favorable to the gambler on average, while a super-martingale models games that are unfavorable to the gambler on average. To venture again from the casino, suppose that $X_t$ is the price of a stock, or the value of a stock index, at time $t \in T$. If $\bs X$ is a sub-martingale, the expected value at a future time, given all of our information, is greater than the present value, and if $\bs X$ is a super-martingale then the expected value at the future time is less than the present value. One hopes that a stock index is a sub-martingale.
Clearly $\bs{X}$ is a martingale with respect to $\mathfrak{F}$ if and only if it is both a sub-martingale and a super-martingale. Finally, recall that the conditional expected value of a random variable with respect to a $\sigma$-algebra is itself a random variable, and so the equations and inequalities in the definitions should be interpreted as holding with probability 1. In this section generally, statements involving random variables are assumed to hold with probability 1.
The conditions that define martingale, sub-martingale, and super-martingale make sense if the index set $T$ is any totally ordered set. In some applications that we will consider later, $T = \{0, 1, \ldots, n\}$ for fixed $n \in \N_+$. In the section on backwards martingales, $T = \{-n: n \in \N\}$ or $T = (-\infty, 0]$. In the case of discrete time when $T = \N$, we can simplify the definitions slightly.
Suppose that $\bs X = \{X_n: n \in \N\}$ satisfies the basic assumptions above.
1. $\bs{X}$ is a martingale with respect to $\frak{F}$ if and only if $\E\left(X_{n+1} \mid \mathscr{F}_n\right) = X_n$ for all $n \in \N$.
2. $\bs{X}$ is a sub-martingale with respect to $\frak{F}$ if and only if $\E\left(X_{n+1} \mid \mathscr{F}_n\right) \ge X_n$ for all $n \in \N$.
3. $\bs{X}$ is a super-martingale with respect to $\frak{F}$ if and only if $\E\left(X_{n+1} \mid \mathscr{F}_n\right) \le X_n$ for all $n \in \N$.
Proof
The conditions in the definitions clearly imply the conditions here, so we just need to show the opposite implications. Thus, assume that the condition in (a) holds and suppose that $k, \, n \in \N$ with $k \lt n$. Then $k \le n - 1$ so $\mathscr{F}_k \subseteq \mathscr{F}_{n-1}$ and hence $\E\left(X_n \mid \mathscr{F}_k\right) = \E\left[\E\left(X_n \mid \mathscr{F}_{n-1}\right) \mid \mathscr{F}_k \right] = \E\left(X_{n-1} \mid \mathscr{F}_k\right)$ Repeating the argument, we get to $\E\left(X_n \mid \mathscr{F}_k\right) = \E\left(X_{k+1} \mid \mathscr{F}_k\right) = X_k$ The proof for sub and super-martingales is analogous, with inequalities replacing the last equality.
The relations that define martingales, sub-martingales, and super-martingales hold for the ordinary (unconditional) expected values.
Suppose that $s, \, t \in T$ with $s \le t$.
1. If $\bs{X}$ is a martingale with respect to $\frak{F}$ then $\E(X_s) = \E(X_t)$.
2. If $\bs{X}$ is a sub-martingale with respect to $\frak{F}$ then $\E(X_s) \le \E(X_t)$.
3. If $\bs{X}$ is a super-martingale with respect to $\frak{F}$ then $\E(X_s) \ge \E(X_t)$.
Proof
The results follow directly from the definitions, and the critical fact that $\E\left[\E\left(X_t \mid \mathscr{F}_s\right)\right] = \E(X_t)$ for $s, \, t \in T$.
So if $\bs X$ is a martingale then $\bs X$ has constant expected value, and this value is referred to as the mean of $\bs X$.
Examples
The goal for the remainder of this section is to give some classical examples of martingales, and by doing so, to show the wide variety of applications in which martingales occur. We will return to many of these examples in subsequent sections. Without further ado, we assume that all random variables are real-valued, unless otherwise specified, and that all expected values mentioned below exist in $\R$. Be sure to try the proofs yourself before expanding the ones in the text.
Constant Sequence
Our first example is rather trivial, but still worth noting.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on the probability space $(\Omega, \mathscr{F}, \P)$ and that $X$ is a random variable that is measurable with respect to $\mathscr{F}_0$ and satisfies $\E(\left|X\right|) \lt \infty$. Let $X_t = X$ for $t \in T$. Then $\bs{X} = \{X_t: t \in T\}$ is a martingale with respect to $\mathfrak{F}$.
Proof
Since $X$ is measurable with respect to $\mathscr{F}_0$, it is measurable with respect to $\mathscr{F}_t$ for all $t \in T$. Hence $\bs{X}$ is adapted to $\mathfrak{F}$. If $s, \, t \in T$ with $s \le t$, then $\E(X_t \mid \mathscr{F}_s) = \E(X \mid \mathscr{F}_s) = X = X_s$.
Partial Sums
For our next discussion, we start with one of the most basic martingales in discrete time, and the one with the simplest interpretation in terms of gambling. Suppose that $\bs{V} = \{V_n: n \in \N\}$ is a sequence of independent random variables with $\E(|V_k|) \lt \infty$ for $k \in \N$. Let $X_n = \sum_{k=0}^n V_k, \quad n \in \N$
so that $\bs X = \{X_n: n \in \N\}$ is simply the partial sum process associated with $\bs V$.
For the partial sum process $\bs X$,
1. If $\E(V_n) \ge 0$ for $n \in \N_+$ then $\bs X$ is a sub-martingale.
2. If $\E(V_n) \le 0$ for $n \in \N_+$ then $\bs X$ is a super-martingale.
3. If $\E(V_n) = 0$ for $n \in \N_+$ then $\bs X$ is a martingale.
Proof
Let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\} = \sigma\{V_0, V_1, \ldots, V_n\}$ for $n \in \N$. Note first that $\E(|X_n|) \le \sum_{k=0}^n \E(|V_k|) \lt \infty, \quad n \in \N$ Next, $\E\left(X_{n+1} \mid \mathscr{F}_n\right) = \E\left(X_n + V_{n+1} \mid \mathscr{F}_n\right) = \E\left(X_n \mid \mathscr{F}_n\right) + \E\left(V_{n+1} \mid \mathscr{F}_n\right) = X_n + \E(V_{n+1}), \quad n \in \N$ The last equality holds since $X_n$ is measurable with respect to $\mathscr{F}_n$ and $V_{n+1}$ is independent of $\mathscr{F}_n$. The results now follow from the definitions.
In terms of gambling, if $X_0 = V_0$ is the gambler's initial fortune and $V_i$ is the gambler's net winnings on the $i$th game, then $X_n$ is the gambler's net fortune after $n$ games for $n \in \N_+$. But partial sum processes associated with independent sequences are important far beyond gambling. In fact, much of classical probability deals with partial sums of independent and identically distributed variables. The entire chapter on Random Samples explores this setting.
Note that $\E(X_n) = \sum_{k=0}^n \E(V_k)$. Hence condition (a) is equivalent to $n \mapsto \E(X_n)$ increasing, condition (b) is equivalent to $n \mapsto \E(X_n)$ decreasing, and condition (c) is equivalent to $n \mapsto \E(X_n)$ constant. Here is another martingale associated with the partial sum process, known as the second moment martingale.
Suppose that $\E(V_k) = 0$ for $k \in \N_+$ and $\var(V_k) \lt \infty$ for $k \in \N$. Let $Y_n = X_n^2 - \var(X_n), \quad n \in \N$ Then $\bs Y = \{Y_n: n \in \N\}$ is a martingale with respect to $\bs X$.
Proof
Again, let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$ for $n \in \N$. Since the sequence $\bs V$ is independent, note that $\var(X_n) = \var\left(\sum_{k=0}^n V_k\right) = \sum_{k=0}^n \var(V_k)$ Also, $\var(V_k) = \E(V_k^2)$ since $\E(V_k) = 0$ for $k \in \N_+$. In particular, $\E(|Y_n|) \lt \infty$ for $n \in \N$. Next for $n \in \N$, \begin{align*} \E(Y_{n+1} \mid \mathscr{F}_n) &= \E\left[X_{n+1}^2 - \var(X_{n+1}) \mid \mathscr{F}_n\right] = \E\left[(X_n + V_{n+1})^2 - \var(X_{n+1}) \mid \mathscr{F}_n\right] \\ &= \E\left[X_n^2 + 2 X_n V_{n+1} + V_{n+1}^2 - \var(X_{n+1}) \mid \mathscr{F}_n\right] = X_n^2 + 2 X_n \E(V_{n+1}) + \E(V_{n+1}^2) - \var(X_{n+1}) \end{align*} since $X_n$ is measurable with respect to $\mathscr{F}_n$ and $V_{n+1}$ is independent of $\mathscr{F}_n$. But $\E(V_{n+1}) = 0$ and $\E(V_{n+1}^2) - \var(X_{n+1}) = - \var(X_n)$. Hence we have $\E(Y_{n+1} \mid \mathscr{F}_n) = X_n^2 - \var(X_n) = Y_n$ for $n \in \N$.
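Since martingales have constant expected value, the second moment martingale is easy to probe numerically: $\E(Y_n)$ should not change with $n$. Here is a minimal sketch using steps that are uniform on $[-1, 1]$ (an arbitrary mean-zero choice, so that $\var(V_k) = 1/3$):
```python
import random

reps, N = 100_000, 8
var_X = [(n + 1) / 3 for n in range(N)]  # var(X_n) = sum of the n + 1 step variances
means = [0.0] * N
for _ in range(reps):
    x = 0.0
    for n in range(N):
        x += random.uniform(-1.0, 1.0)   # step V_n, uniform on [-1, 1]
        means[n] += x * x - var_X[n]     # Y_n = X_n^2 - var(X_n)
for n in range(N):
    print(f"E(Y_{n}) ~ {means[n] / reps:+.5f}")  # all values should be near 0
```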
So under the assumptions in this theorem, both $\bs X$ and $\bs Y$ are martingales. We will generalize the results for partial sum processes below in the discussion on processes with independent increments.
Martingale Difference Sequences
In the last discussion, we saw that the partial sum process associated with a sequence of independent, mean 0 variables is a martingale. Conversely, every martingale in discrete time can be written as a partial sum process of uncorrelated mean 0 variables. This representation gives some significant insight into the theory of martingales generally. Suppose that $\bs X = \{X_n: n \in \N\}$ is a martingale with respect to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$.
Let $V_0 = X_0$ and $V_n = X_n - X_{n-1}$ for $n \in \N_+$. The process $\bs V = \{V_n: n \in \N\}$ is the martingale difference sequence associated with $\bs X$ and $X_n = \sum_{k=0}^n V_k, \quad n \in \N$
As promised, the martingale difference variables have mean 0, and in fact satisfy a stronger property.
Suppose that $\bs V = \{V_n: n \in \N\}$ is the martingale difference sequence associated with $\bs X$. Then
1. $\bs V$ is adapted to $\mathfrak F$.
2. $\E(V_n \mid \mathscr{F}_k) = 0$ for $k, \, n \in \N$ with $k \lt n$.
3. $\E(V_n) = 0$ for $n \in \N_+$
Proof
1. Of course $V_0 = X_0$ is measurable with respect to $\mathscr{F}_0$. For $n \in \N_+$, $X_n$ and $X_{n-1}$, and hence $V_n$ are measurable with respect to $\mathscr{F}_n$
2. Let $k \in \N$. By the martingale and adapted properties, $\E(V_{k+1} \mid \mathscr{F}_k) = \E(X_{k+1} \mid \mathscr{F}_k) - \E(X_k \mid \mathscr{F}_k) = X_k - X_k = 0$ Next by the tower property, $\E(V_{k+2} \mid \mathscr{F}_k) = \E[\E(V_{k+2} \mid \mathscr{F}_{k+1}) \mid \mathscr{F}_k] = 0$ Continuing (or using induction) gives the general result.
3. Since $\bs X$ is a martingale, it has constant mean, as noted above. Hence $\E(V_n) = \E(X_n) - \E(X_{n-1}) = 0$ for $n \in \N_+$. We could also use part (b).
Also as promised, if the martingale variables have finite variance, then the martingale difference variables are uncorrelated.
Suppose again that $\bs V = \{V_n: n \in \N\}$ is the martingale difference sequence associated with the martingale $\bs X$. Assume that $\var(X_n) \lt \infty$ for $n \in \N$. Then $\bs V$ is an uncorrelated sequence. Moreover, $\var(X_n) = \sum_{k=0}^n \var(V_k) = \var(X_0) + \sum_{k=1}^n \E(V_k^2), \quad n \in \N$
Proof
Let $k, \, n \in \N$ with $k \lt n$. To show that $V_k$ and $V_n$ are uncorrelated, we just need to show that $\E(V_k V_n) = 0$ (since $\E(V_n) = 0$). But by the previous result, $\E(V_k V_n) = \E[\E(V_k V_n \mid \mathscr{F}_k)] = \E[V_k \E(V_n \mid \mathscr{F}_k)] = 0$ Finally, the variance of a sum of uncorrelated variables is the sum of the variances. Since $V_k$ has mean 0, $\var(V_k) = \E(V_k^2)$ for $k \in \N_+$. Hence the formula for $\var(X_n)$ holds.
We now know that a discrete-time martingale is the partial sum process associated with a sequence of uncorrelated variables. Hence we might hope that there are martingale versions of the fundamental theorems that hold for a partial sum process associated with an independent sequence. This turns out to be true, and is a basic reason for the importance of martingales.
Discrete-Time Random Walks
Suppose that $\bs V = \{V_n: n \in \N\}$ is a sequence of independent random variables with $\{V_n: n \in \N_+\}$ identically distributed. We assume that $\E(|V_n|) \lt \infty$ for $n \in \N$ and we let $a$ denote the common mean of $\{V_n: n \in \N_+\}$. Let $\bs X = \{X_n: n \in \N\}$ be the partial sum process associated with $\bs V$ so that $X_n = \sum_{i=0}^n V_i, \quad n \in \N$ This setting is a special case of the more general partial sum process considered above. The process $\bs X$ is sometimes called a (discrete-time) random walk. The initial position $X_0 = V_0$ of the walker can have an arbitrary distribution, but then the steps that the walker takes are independent and identically distributed. In terms of gambling, $X_0 = V_0$ is the initial fortune of the gambler playing a sequence of independent and identical games. If $V_i$ is the amount won (or lost) on game $i \in \N_+$, then $X_n$ is the gambler's net fortune after $n$ games.
For the random walk $\bs X$,
1. $\bs X$ is a martingale if $a = 0$.
2. $\bs X$ is a sub-martingale if $a \ge 0$.
3. $\bs X$ is a super-martingale if $a \le 0$.
For the second moment martingale, suppose that $V_n$ has common mean $a = 0$ and common variance $b^2 \lt \infty$ for $n \in \N_+$, and that $\var(V_0) \lt \infty$.
Let $Y_n = X_n^2 - \var(V_0) - b^2 n$ for $n \in \N$. Then $\bs Y = \{Y_n: n \in \N\}$ is a martingale with respect to $\bs X$.
Proof
This follows from the corresponding result for a general partial sum process, above, since $\var(X_n) = \sum_{k=0}^n \var(V_k) = \var(V_0) + b^2 n, \quad n \in \N$
We will generalize the results for discrete-time random walks below, in the discussion on processes with stationary, independent increments.
Partial Products
Our next discussion is similar to the one on partial sum processes above, but with products instead of sums. So suppose that $\bs V = \{V_n: n \in \N\}$ is an independent sequence of nonnegative random variables with $\E(V_n) \lt \infty$ for $n \in \N$. Let $X_n = \prod_{i=0}^n V_i, \quad n \in \N$ so that $\bs X = \{X_n: n \in \N\}$ is the partial product process associated with $\bs V$.
For the partial product process $\bs X$,
1. If $\E(V_n) = 1$ for $n \in \N_+$ then $\bs X$ is a martingale with respect to $\bs V$
2. If $\E(V_n) \ge 1$ for $n \in \N_+$ then $\bs X$ is a sub-martingale with respect to $\bs V$
3. If $\E(V_n) \le 1$ for $n \in \N_+$ then $\bs X$ is a super-martingale with respect to $\bs V$
Proof
Let $\mathscr{F}_n = \sigma\{V_0, V_1, \ldots, V_n\}$ for $n \in \N$. Since the variables are independent, $\E(X_n) = \prod_{i=0}^n \E(V_i) \lt \infty$ Next, $\E\left(X_{n+1} \mid \mathscr{F}_n\right) = \E\left(X_n V_{n+1} \mid \mathscr{F}_n\right) = X_n \E(V_{n+1} \mid \mathscr{F}_n) = X_n \E(V_{n+1}) \quad n \in \N$ since $X_n$ is measurable with respect to $\mathscr{F}_n$ and $V_{n+1}$ is independent of $\mathscr{F}_n$. The results now follow from the definitions.
As with random walks, a special case of interest is when $\{V_n: n \in \N_+\}$ is an identically distributed sequence.
The Simple Random Walk
Suppose now that $\bs{V} = \{V_n: n \in \N\}$ is a sequence of independent random variables with $\P(V_i = 1) = p$ and $\P(V_i = -1) = 1 - p$ for $i \in \N_+$, where $p \in (0, 1)$. Let $\bs{X} = \{X_n: n \in \N\}$ be the partial sum process associated with $\bs{V}$ so that $X_n = \sum_{i=0}^n V_i, \quad n \in \N$ Then $\bs{X}$ is the simple random walk with parameter $p$, and of course, is a special case of the more general random walk studied above. In terms of gambling, our gambler plays a sequence of independent and identical games, and on each game, wins €1 with probability $p$ and loses €1 with probability $1 - p$. So if $V_0$ is the gambler's initial fortune, then $X_n$ is her net fortune after $n$ games.
For the simple random walk,
1. If $p \gt \frac{1}{2}$ then $\bs{X}$ is a sub-martingale.
2. If $p \lt \frac{1}{2}$ then $\bs{X}$ is a super-martingale.
3. If $p = \frac{1}{2}$ then $\bs{X}$ is a martingale.
Proof
Note that $\E(V_n) = p - (1 - p) = 2 p - 1$ for $n \in \N_+$, so the results follow from the theorem above.
So case (a) corresponds to favorable games, case (b) to unfavorable games, and case (c) to fair games.
Open the simulation of the simple symmetric random walk. For various values of the number of trials $n$, run the simulation 1000 times and note the general behavior of the sample paths.
Here is the second moment martingale for the simple, symmetric random walk.
Consider the simple random walk with parameter $p = \frac{1}{2}$, and let $Y_n = X_n^2 - \var(V_0) - n$ for $n \in \N$. Then $\bs{Y} = \{Y_n: n \in \N\}$ is a martingale with respect to $\bs{X}$.
Proof
Note that $\E(V_i) = 0$ and $\var(V_i) = 1$ for each $i \in \N_+$, so the result follows from the general result above.
But there is another martingale that can be associated with the simple random walk, known as De Moivre's martingale and named for one of the early pioneers of probability theory, Abraham De Moivre.
For $n \in \N$ define $Z_n = \left(\frac{1 - p}{p}\right)^{X_n}$ Then $\bs{Z} = \{Z_n: n \in \N\}$ is a martingale with respect to $\bs{X}$.
Proof
Note that $Z_n = \prod_{k=0}^n \left(\frac{1 - p}{p}\right)^{V_k}, \quad n \in \N$ and $\E\left[\left(\frac{1 - p}{p}\right)^{V_k}\right] = \left(\frac{1 - p}{p}\right)^1 p + \left(\frac{1 - p}{p}\right)^{-1} (1 - p) = 1, \quad k \in \N_+$ So the result follows from the theorem above on partial products.
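A quick numerical check: since $\bs Z$ is a martingale, $\E(Z_n)$ must be constant in $n$. The sketch below takes $V_0 = 0$ (so that $Z_0 = 1$) and $p = 0.7$, both arbitrary illustrative choices:
```python
import random

p, reps, N = 0.7, 200_000, 10
r = (1 - p) / p                     # base of De Moivre's martingale
means = [0.0] * (N + 1)
for _ in range(reps):
    x = 0                           # V_0 = 0, so Z_0 = r^0 = 1
    means[0] += r ** x
    for n in range(1, N + 1):
        x += 1 if random.random() < p else -1   # step of the walk
        means[n] += r ** x                      # Z_n = r^(X_n)
for n in range(N + 1):
    print(f"E(Z_{n}) ~ {means[n] / reps:.4f}")  # all values should be near 1
```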
The Beta-Bernoulli Process
Recall that the beta-Bernoulli process is constructed by randomizing the success parameter in a Bernoulli trials process with a beta distribution. Specifically, we have a random variable $P$ that has the beta distribution with parameters $a, \, b \in (0, \infty)$, and a sequence of indicator variables $\bs X = (X_1, X_2, \ldots)$ such that given $P = p \in (0, 1)$, $\bs X$ is a sequence of independent variables with $\P(X_i = 1) = p$ for $i \in \N_+$. As usual, we couch this in reliability terms, so that $X_i = 1$ means success on trial $i$ and $X_i = 0$ means failure. In our study of this process, we showed that the finite-dimensional distributions are given by $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) = \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}, \quad n \in \N_+, \; (x_1, x_2, \ldots, x_n) \in \{0, 1\}^n$ where $k = x_1 + x_2 + \cdots + x_n$, and we use the ascending power notation $r^{[j]} = r ( r + 1) \cdots (r + j - 1)$ for $r \in \R$ and $j \in \N$. Next, let $\bs{Y} = \{Y_n: n \in \N\}$ denote the partial sum process associated with $\bs{X}$, so that once again, $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ Of course $Y_n$ is the number of successes in the first $n$ trials and has the beta-binomial distribution defined by $\P(Y_n = k) = \binom{n}{k} \frac{a^{[k]} b^{[n-k]}}{(a + b)^{[n]}}, \quad k \in \{0, 1, \ldots, n\}$ Now let $Z_n = \frac{a + Y_n}{a + b + n}, \quad n \in \N$ This variable also arises naturally. Let $\mathscr{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$ for $n \in \N$. Then as shown in the section on the beta-Bernoulli process, $Z_n = \E(X_{n+1} \mid \mathscr{F}_n) = \E(P \mid \mathscr{F}_n)$. In statistical terms, the second equation means that $Z_n$ is the Bayesian estimator of the unknown success probability $p$ in a sequence of Bernoulli trials, when $p$ is modeled by the random variable $P$.
$\bs Z = \{Z_n: n \in \N\}$ is a martingale with respect to $\bs X$.
Proof
Note that $0 \le Z_n \le 1$ so $\E(Z_n) \lt \infty$ for $n \in \N$. Next, $\E\left(Z_{n+1} \mid \mathscr{F}_n\right) = \E\left[\frac{a + Y_{n+1}}{a + b + n + 1} \biggm| \mathscr{F}_n\right] = \frac{\E\left[a + \left(Y_n + X_{n+1}\right) \mid \mathscr{F}_n\right]}{a + b + n + 1} = \frac{a + Y_n + \E\left(X_{n+1} \mid \mathscr{F}_n\right)}{a + b + n + 1}$ As noted above, $\E(X_{n+1} \mid \mathscr{F}_n) = (a + Y_n) / (a + b + n)$. Substituting into the displayed equation above and doing a bit of algebra we have $\E(Z_{n+1} \mid \mathscr{F}_n) = \frac{(a + Y_n) + (a + Y_n) / (a + b + n)}{a + b + n + 1} = \frac{a + Y_n}{a + b + n} = Z_n$
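The martingale property gives $\E(Z_n) = \E(Z_0) = a / (a + b)$ for all $n$, which is easy to confirm by simulation. A minimal sketch with the arbitrary illustrative parameters $a = 2$, $b = 3$:
```python
import random

a, b, reps, N = 2.0, 3.0, 100_000, 10
means = [0.0] * (N + 1)
for _ in range(reps):
    p = random.betavariate(a, b)           # the randomized success parameter P
    y = 0                                  # Y_n, the number of successes so far
    means[0] += a / (a + b)                # Z_0
    for n in range(1, N + 1):
        y += random.random() < p           # Bernoulli trial X_n
        means[n] += (a + y) / (a + b + n)  # Z_n
for n in range(N + 1):
    print(f"E(Z_{n}) ~ {means[n] / reps:.4f}")  # all near a/(a + b) = 0.4
```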
Open the beta-Binomial experiment. Run the simulation 1000 times for various values of the parameters, and compare the empirical probability density function with the true probability density function.
Pólya's Urn Process
Recall that in the simplest version of Pólya's urn process, we start with an urn containing $a$ red and $b$ green balls. At each discrete time step, we select a ball at random from the urn and then replace the ball and add $c$ new balls of the same color to the urn. For the parameters, we need $a, \, b \in \N_+$ and $c \in \N$. For $i \in \N_+$, let $X_i$ denote the color of the ball selected on the $i$th draw, where 1 means red and 0 means green. The process $\bs{X} = \{X_n: n \in \N_+\}$ is a classical example of a sequence of exchangeable yet dependent variables. Let $\bs{Y} = \{Y_n: n \in \N\}$ denote the partial sum process associated with $\bs{X}$, so that once again, $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ Of course $Y_n$ is the total number of red balls selected in the first $n$ draws. Hence at time $n \in \N$, the total number of red balls in the urn is $a + c Y_n$, while the total number of balls in the urn is $a + b + c n$ and so the proportion of red balls in the urn is $Z_n = \frac{a + c Y_n}{a + b + c n}$
$\bs{Z} = \{Z_n: n \in \N\}$ is a martingale with respect to $\bs{X}$.
Indirect proof
If $c = 0$ then $Z_n = a / (a + b)$ for $n \in \N$ so $\bs Z$ is a constant martingale. If $c \in \N_+$ then $\bs Z$ is equivalent to the beta-Bernoulli process with parameters $a / c$ and $b / c$. Moreover, $Z_n = \frac{a + c Y_n}{a + b + c n} = \frac{a / c + Y_n}{a / c + b / c + n}, \quad n \in \N$ So $\bs Z$ is a martingale by the previous theorem.
Direct Proof
Trivially, $0 \le Z_n \le 1$ so $\E(Z_n) \lt \infty$ for $n \in \N$. Let $\mathscr{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$. For $n \in \N$, $\E\left(Z_{n+1} \mid \mathscr{F}_n\right) = \E\left[\frac{a + c Y_{n+1}}{a + b + c(n + 1)} \biggm| \mathscr{F}_n\right] = \frac{\E\left[a + c \left(Y_n + X_{n+1}\right) \mid \mathscr{F}_n\right]}{a + b + c(n + 1)} = \frac{a + c Y_n + c \E\left(X_{n+1} \mid \mathscr{F}_n\right)}{a + b + c n + c}$ since $Y_n$ is measurable with respect to $\mathscr{F}_n$. But the probability of selecting a red ball on draw $n + 1$, given the history of the process up to time $n$, is simply the proportion of red balls in the urn at time $n$. That is, $\E\left(X_{n+1} \mid \mathscr{F}_n\right) = \P\left(X_{n+1} = 1 \mid \mathscr{F}_n\right) = Z_n = \frac{a + c Y_n}{a + b + c n}$ Substituting and simplifying gives $\E\left(Z_{n+1} \mid \mathscr{F}_n\right) = Z_n$.
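The direct proof translates line for line into a simulation of the urn. The sketch below (the parameters $a = b = 1$, $c = 2$ are arbitrary illustrative choices) tracks the proportion of red balls and confirms that its mean stays at $a / (a + b)$:
```python
import random

a, b, c, reps, N = 1, 1, 2, 100_000, 10
means = [0.0] * (N + 1)
for _ in range(reps):
    red, total = a, a + b
    means[0] += red / total
    for n in range(1, N + 1):
        if random.random() < red / total:  # a red ball is drawn
            red += c                       # replace it and add c red balls
        total += c                         # c balls are added either way
        means[n] += red / total            # Z_n, the proportion of red balls
for n in range(N + 1):
    print(f"E(Z_{n}) ~ {means[n] / reps:.4f}")  # all near a/(a + b) = 0.5
```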
Open the simulation of Pólya's Urn Experiment. Run the simulation 1000 times for various values of the parameters, and compare the empirical probability density function of the number of red balls selected to the true probability density function.
Processes with Independent Increments
Our first example above concerned the partial sum process $\bs{X}$ associated with a sequence of independent random variables $\bs{V}$. Such processes are the only ones in discrete time that have independent increments. That is, for $m, \, n \in \N$ with $m \le n$, $X_n - X_m$ is independent of $(X_0, X_1, \ldots, X_m)$. The random walk process has the additional property of stationary increments. That is, the distribution of $X_n - X_m$ is the same as the distribution of $X_{n-m} - X_0$ for $m, \, n \in \N$ with $m \le n$. Let's consider processes in discrete or continuous time with these properties. Thus, suppose that $\bs{X} = \{X_t: t \in T\}$ satisfies the basic assumptions above relative to the filtration $\mathfrak{F} = \left\{\mathscr{F}_s: s \in T\right\}$. Here are the two definitions.
The process $\bs X$ has
1. Independent increments if $X_t - X_s$ is independent of $\mathscr{F}_s$ for all $s, \, t \in T$ with $s \le t$.
2. Stationary increments if $X_t - X_s$ has the same distribution as $X_{t-s} - X_0$ for all $s, \, t \in T$ with $s \le t$.
Processes with stationary and independent increments were studied in the Chapter on Markov processes. In continuous time (with the continuity assumptions we have imposed), such a process is known as a Lévy process, named for Paul Lévy, and also as a continuous-time random walk. For a process with independent increments (not necessarily stationary), the connection with martingales depends on the mean function $m$ given by $m(t) = \E(X_t)$ for $t \in T$.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ has independent increments.
1. If $m$ is increasing then $\bs{X}$ is a sub-martingale.
2. If $m$ is decreasing then $\bs X$ is a super-martingale.
3. If $m$ is constant then $\bs X$ is a martingale
Proof
The proof is just like the one above for partial sum processes. Suppose that $s, \, t \in [0, \infty)$ with $s \lt t$. Then $\E\left(X_t \mid \mathscr{F}_s\right) = \E\left[X_s + (X_t - X_s) \mid \mathscr{F}_s\right] = \E\left(X_s \mid \mathscr{F}_s\right) + \E\left(X_t - X_s \mid \mathscr{F}_s\right)$ But $X_s$ is measurable with respect to $\mathscr{F}_s$ and $X_t - X_s$ is independent of $\mathscr{F}_s$, so $\E\left(X_t \mid \mathscr{F}_s\right) = X_s + \E(X_t - X_s) = X_s + m(t) - m(s)$
Compare this theorem with the corresponding theorem for the partial sum process above. Suppose now that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a stochastic process as above, with mean function $m$, and let $Y_t = X_t - m(t)$ for $t \in [0, \infty)$. The process $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is sometimes called the compensated process associated with $\bs{X}$ and has mean function 0. If $\bs{X}$ has independent increments, then clearly so does $\bs{Y}$. Hence the following result is a trivial corollary to our previous theorem.
Suppose that $\bs{X}$ has independent increments. The compensated process $\bs{Y}$ is a martingale.
Next we give the second moment martingale for a process with independent increments, generalizing the second moment martingale for a partial sum process.
Suppose that $\bs X = \{X_t: t \in T\}$ has independent increments with constant mean function and with $\var(X_t) \lt \infty$ for $t \in T$. Then $\bs Y = \{Y_t: t \in T\}$ is a martingale where $Y_t = X_t^2 - \var(X_t), \quad t \in T$
Proof
The proof is essentially the same as for the partial sum process in discrete time. Suppose that $s, \, t \in T$ with $s \lt t$. Note that $\E(Y_t \mid \mathscr{F}_s) = \E(X_t^2 \mid \mathscr{F}_s) - \var(X_t)$. Next, $X_t^2 = [(X_t - X_s) + X_s]^2 = (X_t - X_s)^2 + 2 (X_t - X_s) X_s + X_s^2$ But $X_t - X_s$ is independent of $\mathscr{F}_s$, $X_s$ is measurable with respect to $\mathscr{F}_s$, and $\E(X_t - X_s) = 0$ so $\E(X_t^2 \mid \mathscr{F}_s) = \E[(X_t - X_s)^2] + 2 X_s \E(X_t - X_s) + X_s^2 = \E[(X_t - X_s)^2] + X_s^2$ But also by independence and since $X_t - X_s$ has mean 0, $\var(X_t) = \var[(X_t - X_s) + X_s] = \var(X_s) + \var(X_t - X_s) = \var(X_s) + \E[(X_t - X_s)^2]$ Putting the pieces together gives $\E(Y_t \mid \mathscr{F}_s) = X_s^2 - \var(X_s) = Y_s$
Of course, since the mean function is constant, $\bs X$ is also a martingale. For processes with independent and stationary increments (that is, random walks), the last two theorems simplify, because the mean and variance functions simplify.
Suppose that $\bs X = \{X_t: t \in T\}$ has stationary, independent increments, and let $a = \E(X_1 - X_0)$. Then
1. $\bs X$ is a martingale if $a = 0$
2. $\bs X$ is a sub-martingale if $a \ge 0$
3. $\bs X$ is a super-martingale if $a \le 0$
Proof
Recall that the mean function $m$ is given by $m(t) = \E(X_0) + a t$ for $t \in T$, so the result follows from the corresponding result for a process with independent increments.
Compare this result with the corresponding one above for discrete-time random walks. Our next result is the second moment martingale. Compare this with the second moment martingale for discrete-time random walks.
Suppose that $\bs X = \{X_t: t \in T\}$ has stationary, independent increments with $\E(X_0) = \E(X_1)$ and $b^2 = \var(X_1 - X_0) \lt \infty$. Then $\bs Y = \{Y_t: t \in T\}$ is a martingale where $Y_t = X_t^2 - \var(X_0) - b^2 t, \quad t \in T$
Proof
Recall that if $\E(X_0) = \E(X_1)$ then $\bs X$ has constant mean function. Also, $\var(X_t) = \var(X_0) + b^2 t$, so the result follows from the corresponding result for a process with independent increments.
In discrete time, as we have mentioned several times, all of these results reduce to the earlier results for partial sum processes and random walks. In continuous time, the Poisson processes, named of course for Simeon Poisson, provide examples. The standard (homogeneous) Poisson counting process $\bs N = \{N_t: t \in [0, \infty)\}$ with constant rate $r \in (0, \infty)$ has stationary, independent increments and mean function given by $m(t) = r t$ for $t \in [0, \infty)$. More generally, suppose that $r: [0, \infty) \to (0, \infty)$ is piecewise continuous (and non-constant). The non-homogeneous Poisson counting process $\bs N = \{N_t: t \in [0, \infty)\}$ with rate function $r$ has independent increments and mean function given by $m(t) = \int_0^t r(s) \, ds, \quad t \in [0, \infty)$ The increment $N_t - N_s$ has the Poisson distribution with parameter $m(t) - m(s)$ for $s, \, t \in [0, \infty)$ with $s \lt t$, so the process does not have stationary increments. In all cases, $m$ is increasing, so the following results are corollaries of our general results:
Let $\bs N = \{N_t: t \in [0, \infty)\}$ be the Poisson counting process with rate function $r: [0, \infty) \to (0, \infty)$. Then
1. $\bs N$ is a sub-martingale
2. The compensated process $\bs X = \{N_t - m(t): t \in [0, \infty)\}$ is a martingale.
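For the homogeneous case, the compensated process is easy to check numerically: $\E(N_t - r t) = 0$ for every $t$. Here is a minimal sketch that generates $N_t$ by summing exponential interarrival times (the rate and time values are arbitrary illustrative choices):
```python
import random

r, reps = 2.0, 100_000
for t in [0.5, 1.0, 2.0, 4.0]:
    total = 0.0
    for _ in range(reps):
        n, time = 0, random.expovariate(r)  # first arrival time
        while time <= t:
            n += 1
            time += random.expovariate(r)   # next interarrival time
        total += n - r * t                  # compensated value N_t - r t
    print(f"t = {t}: E(N_t - r t) ~ {total / reps:+.4f}")  # near 0
```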
Open the simulation of the Poisson counting experiment. For various values of $r$ and $t$, run the experiment 1000 times and compare the empirical probability density function of the number of arrivals with the true probability density function.
We will see further examples of processes with stationary, independent increments in continuous time (and so also examples of continuous-time martingales) in our study of Brownian motion.
Likelihood Ratio Tests
Suppose that $(S, \mathscr{S}, \mu)$ is a general measure space, and that $\bs{X} = \{X_n: n \in \N\}$ is a sequence of independent, identically distributed random variables, taking values in $S$. In statistical terms, $\bs{X}$ corresponds to sampling from the common distribution, which is usually not completely known. Indeed, the central problem in statistics is to draw inferences about the distribution from observations of $\bs{X}$. Suppose now that the underlying distribution either has probability density function $g_0$ or probability density function $g_1$, with respect to $\mu$. We assume that $g_0$ and $g_1$ are positive on $S$. Of course the common special cases of this setup are
• $S$ is a measurable subset of $\R^n$ for some $n \in \N_+$ and $\mu = \lambda_n$ is $n$-dimensional Lebesgue measure on $S$.
• $S$ is a countable set and $\mu = \#$ is counting measure on $S$.
The likelihood ratio test is a hypothesis test, where the null and alternative hypotheses are
• $H_0$: the probability density function is $g_0$.
• $H_1$: the probability density function is $g_1$.
The test is based on the test statistic $L_n = \prod_{i=1}^n \frac{g_0(X_i)}{g_1(X_i)}, \quad n \in \N$ known as the likelihood ratio test statistic. Small values of the test statistic are evidence in favor of the alternative hypothesis $H_1$. Here is our result.
Under the alternative hypothesis $H_1$, the process $\bs{L} = \{L_n: n \in \N\}$ is a martingale with respect to $\bs{X}$, known as the likelihood ratio martingale.
Proof
Let $\mathscr{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$. For $n \in \N$, $\E\left(L_{n+1} \mid \mathscr{F}_n\right) = \E\left[L_n \frac{g_0(X_{n+1})}{g_1(X_{n+1})} \biggm| \mathscr{F}_n\right] = L_n \E\left[\frac{g_0(X_{n+1})}{g_1(X_{n+1})}\right]$ since $L_n$ is measurable with respect to $\mathscr{F}_n$ and $g_0(X_{n+1}) \big/ g_1(X_{n+1})$ is independent of $\mathscr{F}_n$. But under $H_1$, and using the change of variables formula for expected value, we have $\E\left[\frac{g_0(X_{n+1})}{g_1(X_{n+1})}\right] = \int_S \frac{g_0(x)}{g_1(x)} g_1(x) \, d\mu(x) = \int_S g_0(x) \, d\mu(x) = 1$ This result also follows essentially from the theorem above on partial products. The sequence $\bs Z = (Z_1, Z_2, \ldots)$ given by $Z_i = g_0(X_i) / g_1(X_i)$ for $i \in \N_+$ is independent and identically distributed, and as just shown, has mean 1 under $H_1$.
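As a concrete check, the martingale property implies $\E(L_n) = 1$ for every $n$ under $H_1$. The sketch below uses two normal densities as a hypothetical example, with $g_0$ the standard normal density and $g_1$ the normal density with mean 1 (so sampling under $H_1$ means sampling from $g_1$); note that the variance of $L_n$ grows quickly with $n$, so many replications are needed:
```python
import math
import random

def normal_pdf(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

reps, N = 200_000, 5
means = [0.0] * (N + 1)
for _ in range(reps):
    L = 1.0                                           # L_0 = 1 (empty product)
    means[0] += L
    for n in range(1, N + 1):
        x = random.gauss(1.0, 1.0)                    # sample X_n from g_1 (under H_1)
        L *= normal_pdf(x, 0.0) / normal_pdf(x, 1.0)  # multiply by g_0(x)/g_1(x)
        means[n] += L
for n in range(N + 1):
    print(f"E(L_{n}) ~ {means[n] / reps:.3f}")        # all should be near 1
```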
Branching Processes
In the simplest model of a branching process, we have a system of particles each of which can die out or split into new particles of the same type. The fundamental assumption is that the particles act independently, each with the same offspring distribution on $\N$. We will let $f$ denote the (discrete) probability density function of the number of offspring of a particle, $m$ the mean of the distribution, and $\phi$ the probability generating function of the distribution. Thus, if $U$ is the number of children of a particle, then $f(n) = \P(U = n)$ for $n \in \N$, $m = \E(U)$, and $\phi(t) = \E\left(t^U\right)$ defined at least for $t \in (-1, 1]$.
Our interest is in generational time rather than absolute time: the original particles are in generation 0, and recursively, the children of a particle in generation $n$ belong to generation $n + 1$. Thus, the stochastic process of interest is $\bs{X} = \{X_n: n \in \N\}$ where $X_n$ is the number of particles in the $n$th generation for $n \in \N$. The process $\bs{X}$ is a Markov chain and was studied in the section on discrete-time branching chains. In particular, one of the fundamental problems is to compute the probability $q$ of extinction starting with a single particle: $q = \P(X_n = 0 \text{ for some } n \in \N \mid X_0 = 1)$ Then, since the particles act independently, the probability of extinction starting with $x \in \N$ particles is simply $q^x$. We will assume that $f(0) \gt 0$ and $f(0) + f(1) \lt 1$. This is the interesting case, since it means that a particle has a positive probability of dying without children and a positive probability of producing more than 1 child. The fundamental result, you may recall, is that $q$ is the smallest fixed point of $\phi$ (so that $\phi(q) = q$) in the interval $[0, 1]$. Here are two martingales associated with the branching process:
Each of the following is a martingale with respect to $\bs X$.
1. $\bs{Y} = \{Y_n: n \in \N\}$ where $Y_n = X_n / m^n$ for $n \in \N$.
2. $\bs{Z} = \{Z_n: n \in \N\}$ where $Z_n = q^{X_n}$ for $n \in \N$.
Proof
Let $\mathscr{F}_n = \sigma\{X_0, X_1, \ldots, X_n\}$. For $n \in \N$, note that $X_{n+1}$ can be written in the form $X_{n+1} = \sum_{i=1}^{X_n} U_i$ where $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent variables, each with PDF $f$ (and hence mean $m$ and PGF $\phi$), and with $\bs{U}$ independent of $\mathscr{F}_n$. Think of $U_i$ as the number of children of the $i$th particle in generation $n$.
1. For $n \in \N$, $\E(Y_{n+1} \mid \mathscr{F}_n) = \E\left(\frac{X_{n+1}}{m^{n+1}} \biggm| \mathscr{F}_n\right) = \frac{1}{m^{n+1}} \E\left(\sum_{i=1}^{X_n} U_i \biggm| \mathscr{F}_n\right) = \frac{1}{m^{n+1}} m X_n = \frac{X_n}{m^n} = Y_n$
2. For $n \in \N$, $\E\left(Z_{n+1} \mid \mathscr{F}_n\right) = \E\left(q^{X_{n+1}} \mid \mathscr{F}_n\right) = \E\left(q^{\sum_{i=1}^{X_n} U_i} \biggm| \mathscr{F}_n\right) = \left[\phi(q)\right]^{X_n} = q^{X_n} = Z_n$
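Both martingales are easy to probe by simulation: starting with a single particle, $\E(Y_n) = 1$ and $\E(Z_n) = q$ for all $n$. The sketch below uses the offspring density $f(0) = 1/4$, $f(1) = 1/4$, $f(2) = 1/2$, an arbitrary illustrative choice for which $m = 5/4$ and solving $\phi(q) = q$ gives $q = 1/2$:
```python
import random

m, q, reps, N = 1.25, 0.5, 50_000, 8
mean_Y = [0.0] * (N + 1)
mean_Z = [0.0] * (N + 1)
for _ in range(reps):
    x = 1                            # X_0 = 1
    for n in range(N + 1):
        mean_Y[n] += x / m ** n      # Y_n = X_n / m^n
        mean_Z[n] += q ** x          # Z_n = q^(X_n)
        # each particle independently leaves 0, 1, or 2 children
        x = sum(random.choices([0, 1, 2], weights=[1, 1, 2], k=x))
for n in range(N + 1):
    print(f"n = {n}: E(Y_n) ~ {mean_Y[n] / reps:.3f}, E(Z_n) ~ {mean_Z[n] / reps:.3f}")
```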
Doob's Martingale
Our next example is one of the simplest, but most important. Indeed, as we will see later in the section on convergence, this type of martingale is almost universal in the sense that every uniformly integrable martingale is of this type. The process is constructed by conditioning a fixed random variable on the $\sigma$-algebras in a given filtration, and thus accumulating information about the random variable.
Suppose that $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ is a filtration on the probability space $(\Omega, \mathscr{F}, \P)$, and that $X$ is a real-valued random variable with $\E\left(\left|X\right|\right) \lt \infty$. Define $X_t = \E\left(X \mid \mathscr{F}_t\right)$ for $t \in T$. Then $\bs{X} = \{X_t: t \in T\}$ is a martingale with respect to $\mathfrak{F}$.
Proof
For $t \in T$, recall that $|X_t| = |\E(X \mid \mathscr{F}_t)| \le \E(|X| \mid \mathscr{F}_t)$. Taking expected values gives $\E(|X_t|) \le \E(|X|) \lt \infty$. Suppose that $s, \, t \in T$ with $s \lt t$. Using the tower property of conditional expected value, $\E\left(X_t \mid \mathscr{F}_s\right) = \E\left[\E\left(X \mid \mathscr{F}_t\right) \mid \mathscr{F}_s\right] = \E\left(X \mid \mathscr{F}_s\right) = X_s$
The martingale in the last theorem is known as Doob's martingale and is named for Joseph Doob who did much of the pioneering work on martingales. It's also known as the Lévy martingale, named for Paul Lévy.
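Here is a concrete finite example of Doob's martingale, as a minimal sketch: $X$ is the number of heads in $N = 20$ fair coin tosses (an arbitrary illustrative choice) and $\mathscr{F}_n$ records the first $n$ tosses, so that $X_n = \E(X \mid \mathscr{F}_n) = S_n + (N - n)/2$ where $S_n$ is the number of heads so far. Every value of $n$ should give the same mean, $N / 2$:
```python
import random

N, reps = 20, 100_000
means = [0.0] * (N + 1)
for _ in range(reps):
    s = 0                                 # S_n: heads among the first n tosses
    for n in range(N + 1):
        means[n] += s + (N - n) / 2       # X_n = E(X | F_n)
        if n < N:
            s += random.random() < 0.5    # toss number n + 1
for n in range(0, N + 1, 5):
    print(f"E(X_{n}) ~ {means[n] / reps:.3f}")  # all should be near N/2 = 10
```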
Doob's martingale arises naturally in the statistical context of Bayesian estimation. Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of independent random variables whose common distribution depends on an unknown real-valued parameter $\theta$, with values in a parameter space $A \subseteq \R$. For each $n \in \N_+$, let $\mathscr{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$ so that $\mathfrak F = \{\mathscr{F}_n: n \in \N_+\}$ is the natural filtration associated with $\bs X$. In Bayesian estimation, we model the unknown parameter $\theta$ with a random variable $\Theta$ taking values in $A$ and having a specified prior distribution. The Bayesian estimator of $\theta$ based on the sample $\bs{X}_n = (X_1, X_2, \ldots, X_n)$ is $U_n = \E(\Theta \mid \mathscr{F}_n), \quad n \in \N_+$ So it follows that the sequence of Bayesian estimators $\bs U = (U_n: n \in \N_+)$ is a Doob martingale. The estimation referred to in the discussion of the beta-Bernoulli process above is a special case.
Density Functions
For this example, you may need to review general measures and density functions in the chapter on Distributions. We start with our probability space $(\Omega, \mathscr{F}, \P)$ and filtration $\mathfrak{F} = \{\mathscr{F}_n: n \in \N\}$ in discrete time. Suppose now that $\mu$ is a finite measure on the sample space $(\Omega, \mathscr{F})$. For each $n \in \N$, the restriction of $\mu$ to $\mathscr{F}_n$ is a measure on the measurable space $(\Omega, \mathscr{F}_n)$, and similarly the restriction of $\P$ to $\mathscr{F}_n$ is a probability measure on $(\Omega, \mathscr{F}_n)$. To save notation and terminology, we will refer to these as $\mu$ and $\P$ on $\mathscr{F}_n$, respectively. Suppose now that $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}_n$ for each $n \in \N$. Recall that this means that if $A \in \mathscr{F}_n$ and $\P(A) = 0$ then $\mu(B) = 0$ for every $B \in \mathscr{F}_n$ with $B \subseteq A$. By the Radon-Nikodym theorem, $\mu$ has a density function $X_n: \Omega \to \R$ with respect to $\P$ on $\mathscr{F}_n$ for each $n \in \N$. The density function of a measure with respect to a positive measure is known as a Radon-Nikodym derivative. The theorem and the derivative are named for Johann Radon and Otto Nikodym. Here is our main result.
$\bs X = \{X_n: n \in \N\}$ is a martingale with respect to $\mathfrak F$.
Proof
Let $n \in \N$. By definition, $X_n$ is measurable with respect to $\mathscr{F}_n$. Also, $\E(|X_n|) = \|\mu\|$ (the total variation of $\mu$) for each $n \in \N$. Since $\mu$ is a finite measure, $\|\mu\| \lt \infty$. By definition, $\mu(A) = \int_A X_n d \P = \E(X_n; A), \quad A \in \mathscr{F}_n$ On the other hand, if $A \in \mathscr{F}_n$ then $A \in \mathscr{F}_{n+1}$ and so $\mu(A) = \E(X_{n+1}; A)$. So to summarize, $X_n$ is $\mathscr{F}_n$-measurable and $\E(X_{n+1}; A) = \E(X_n ; A)$ for all $A \in \mathscr{F}_n$. By definition, this means that $\E(X_{n+1} \mid \mathscr{F}_n) = X_n$, and so $\bs X$ is a martingale with respect to $\mathfrak F$.
Note that $\mu$ may not be absolutely continuous with respect to $\P$ on $\mathscr{F}$ or even on $\mathscr{F}_\infty = \sigma \left(\bigcup_{n=0}^\infty \mathscr{F}_n\right)$. On the other hand, if $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}_\infty$ then $\mu$ has a density function $X$ with respect to $\P$ on $\mathscr{F}_\infty$. So a natural question in this case is the relationship between the martingale $\bs X$ and the random variable $X$. You may have already guessed the answer, but at any rate it will be given in the section on convergence.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
Preliminaries
As in the Introduction, we start with a stochastic process $\bs{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\bs{X}$ is adapted to $\mathfrak{F}$. So $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$ and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$. We assume that $\E\left(\left|X_t\right|\right) \lt \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$. Finally, in continuous time where $T = [0, \infty)$, we make the standard assumptions that $\bs X$ is right continuous and has left limits, and that the filtration $\mathfrak F$ is right continuous and complete. Please recall the following from the Introduction:
Definitions
1. $\bs X$ is a martingale with respect to $\mathfrak F$ if $\E(X_t \mid \mathscr{F}_s) = X_s$ for all $s, \, t \in T$ with $s \le t$.
2. $\bs X$ is a sub-martingale with respect to $\mathfrak F$ if $\E(X_t \mid \mathscr{F}_s) \ge X_s$ for all $s, \, t \in T$ with $s \le t$.
3. $\bs X$ is a super-martingale with respect to $\mathfrak F$ if $\E(X_t \mid \mathscr{F}_s) \le X_s$ for all $s, \, t \in T$ with $s \le t$.
Our goal in this section is to give a number of basic properties of martingales and to give ways of constructing martingales from other types of processes. The deeper, fundamental theorems will be studied in the following sections.
Basic Properties
Our first result is that the martingale property is preserved under a coarser filtration.
Suppose that the process $\bs X$ and the filtration $\mathfrak F$ satisfy the basic assumptions above and that $\mathfrak G$ is a filtration coarser than $\mathfrak F$ so that $\mathscr{G}_t \subseteq \mathscr{F}_t$ for $t \in T$. If $\bs X$ is a martingale (sub-martingale, super-martingale) with respect to $\mathfrak F$ and $\bs X$ is adapted to $\mathfrak G$ then $\bs X$ is a martingale (sub-martingale, super-martingale) with respect to $\mathfrak G$.
Proof
Suppose that $s, \, t \in T$ with $s \le t$. The proof uses the tower and increasing properties of conditional expected value, and the fact that $\bs X$ is adapted to $\mathfrak G$:
1. If $\bs X$ is a martingale with respect to $\mathfrak F$ then $\E(X_t \mid \mathscr{G}_s) = \E\left[\E(X_t \mid \mathscr{F}_s) \mid \mathscr{G}_s\right] = \E(X_s \mid \mathscr{G}_s) = X_s$
2. If $\bs X$ is a sub-martingale with respect to $\mathfrak F$ then $\E(X_t \mid \mathscr{G}_s) = \E\left[\E(X_t \mid \mathscr{F}_s) \mid \mathscr{G}_s\right] \ge \E(X_s \mid \mathscr{G}_s) = X_s$
3. If $\bs X$ is a super-martingale with respect to $\mathfrak F$ then $\E(X_t \mid \mathscr{G}_s) = \E\left[\E(X_t \mid \mathscr{F}_s) \mid \mathscr{G}_s\right] \le \E(X_s \mid \mathscr{G}_s) = X_s$
In particular, if $\bs{X}$ is a martingale (sub-martingale, super-martingale) with respect to some filtration, then it is a martingale (sub-martingale, super-martingale) with respect to its own natural filtration.
The relations that define martingales, sub-martingales, and super-martingales hold for the ordinary (unconditional) expected values. We had this result in the last section, but it's worth repeating.
Suppose that $s, \, t \in T$ with $s \le t$.
1. If $\bs{X}$ is a martingale with respect to $\frak{F}$ then $\E(X_s) = \E(X_t)$.
2. If $\bs{X}$ is a sub-martingale with respect to $\frak{F}$ then $\E(X_s) \le \E(X_t)$.
3. If $\bs{X}$ is a super-martingale with respect to $\frak{F}$ then $\E(X_s) \ge \E(X_t)$.
Proof
The results follow directly from the definitions, and the critical fact that $\E\left[\E\left(X_t \mid \mathscr{F}_s\right)\right] = \E(X_t)$ for $s, \, t \in T$.
So if $\bs X$ is a martingale then $\bs X$ has constant expected value, and this value is referred to as the mean of $\bs X$. The martingale properties are preserved under sums of the stochastic processes.
For the processes $\bs{X} = \{X_t: t \in T\}$ and $\bs{Y} = \{Y_t: t \in T\}$, let $\bs{X} + \bs{Y} = \{X_t + Y_t: t \in T\}$. If $\bs X$ and $\bs Y$ are martingales (sub-martingales, super-martingales) with respect to $\mathfrak F$ then $\bs X + \bs Y$ is a martingale (sub-martingale, super-martingale) with respect to $\mathfrak F$.
Proof
The results follow easily from basic properties of expected value and conditional expected value. First note that $\E\left(\left|X_t + Y_t\right|\right) \le \E\left(\left|X_t\right|\right) + \E\left(\left|Y_t\right|\right) \lt \infty$ for $t \in T$. Next $\E(X_t + Y_t \mid \mathscr{F}_s) = \E(X_t \mid \mathscr{F}_s) + \E(Y_t \mid \mathscr{F}_s)$ for $s, \, t \in T$ with $s \le t$.
The sub-martingale and super-martingale properties are preserved under multiplication by a positive constant and are reversed under multiplication by a negative constant.
For the process $\bs{X} = \{X_t: t \in T\}$ and the constant $c \in \R$, let $c \bs{X} = \{c X_t: t \in T\}$.
1. If $\bs X$ is a martingale with respect to $\mathfrak F$ then $c \bs X$ is also a martingale with respect to $\mathfrak F$.
2. If $\bs X$ is a sub-martingale with respect to $\mathfrak F$, then $c \bs X$ is a sub-martingale if $c \gt 0$, a super-martingale if $c \lt 0$, and a martingale if $c = 0$.
3. If $\bs X$ is a super-martingale with respect to $\mathfrak F$, then $c \bs X$ is a super-martingale if $c \gt 0$, a sub-martingale if $c \lt 0$, and a martingale if $c = 0$.
Proof
The results follow easily from basic properties of expected value and conditional expected value. First note that $\E\left(\left|c X_t \right|\right) = |c| \E\left(\left|X_t\right|\right) \lt \infty$ for $t \in T$. Next $\E(c X_t \mid \mathscr{F}_s) = c \E(X_t \mid \mathscr{F}_s)$ for $s, \, t \in T$ with $s \le t$.
Property (a), together with the previous additive property, means that the collection of martingales with respect to a fixed filtration $\mathfrak F$ forms a vector space. Here is a class of transformations that turns martingales into sub-martingales.
Suppose that $\bs X$ takes values in an interval $S \subseteq \R$ and that $g: S \to \R$ is convex with $\E\left[\left|g(X_t)\right|\right] \lt \infty$ for $t \in T$. If either of the following conditions holds then $g(\bs X) = \{g(X_t): t \in T\}$ is a sub-martingale with respect to $\mathfrak F$:
1. $\bs X$ is a martingale.
2. $\bs X$ is a sub-martingale and $g$ is also increasing.
Proof
Suppose that $s, \, t \in T$ with $s \le t$. By Jensen's inequality for conditional expected value, $\E[g(X_t) \mid \mathscr{F}_s] \ge g[\E(X_t \mid \mathscr{F}_s)]$. In case (a), $\E(X_t \mid \mathscr{F}_s) = X_s$, so $\E[g(X_t) \mid \mathscr{F}_s] \ge g(X_s)$. In case (b), $\E(X_t \mid \mathscr{F}_s) \ge X_s$, and since $g$ is increasing, $g[\E(X_t \mid \mathscr{F}_s)] \ge g(X_s)$, so again $\E[g(X_t) \mid \mathscr{F}_s] \ge g(X_s)$. Since $g(X_t)$ is $\mathscr{F}_t$-measurable and has finite mean by assumption for each $t \in T$, the process $g(\bs X)$ is a sub-martingale.
Here is the most important special case of the previous result:
Suppose again that $\bs X$ is a martingale with respect to $\mathfrak F$. Let $k \in [1, \infty)$ and suppose that $\E\left(|X_t|^k\right) \lt \infty$ for $t \in T$. Then the process $|\bs X|^k = \left\{\left|X_t\right|^k: t \in T\right\}$ is a sub-martingale with respect to $\mathfrak F$
Proof
The function $x \mapsto |x|^k$ is convex on $\R$ for $k \in [1, \infty)$, so the result follows immediately from part (a) of the previous theorem.
In particular, if $\bs X$ is a martingale relative to $\mathfrak F$ then $|\bs X| = \{|X_t|: t \in T\}$ is a sub-martingale relative to $\mathfrak F$. Here is a related result that we will need later. First recall that the positive and negative parts of $x \in \R$ are $x^+ = \max\{x, 0\}$ and $x^- = \max\{-x, 0\}$, so that $x^+ \ge 0$, $x^- \ge 0$, $x = x^+ - x^-$, and $|x| = x^+ + x^-$.
If $\bs X = \{X_t: t \in T\}$ is a sub-martingale relative to $\mathfrak F = \{\mathscr{F}_t: t \in T\}$ then $\bs{X}^+ = \{X_t^+: t \in T\}$ is also a sub-martingale relative to $\mathfrak F$.
Proof
The function $x \mapsto x^+$ is increasing and convex on $\R$, and $\E(X_t^+) \le \E(|X_t|) \lt \infty$ for $t \in T$, so the result follows from part (b) of the convexity theorem above.
Our last result of this discussion is that if we sample a continuous-time martingale at an increasing sequence of time points, we get a discrete-time martingale.
Suppose again that the process $\bs X = \{X_t: t \in [0, \infty)\}$ and the filtration $\mathfrak F = \{\mathscr{F}_t: t \in [0, \infty)\}$ satisfy the basic assumptions above. Suppose also that $\{t_n: n \in \N\} \subset [0, \infty)$ is a strictly increasing sequence of time points with $t_0 = 0$, and define $Y_n = X_{t_n}$ and $\mathscr{G}_n = \mathscr{F}_{t_n}$ for $n \in \N$. If $\bs X$ is a martingale (sub-martingale, super-martingale) with respect to $\mathfrak F$ then $\bs Y = \{Y_n: n \in \N\}$ is a martingale (sub-martingale, super-martingale) with respect to $\mathfrak G$.
Proof
Since the time points are increasing, it's clear that $\mathfrak G$ is a discrete-time filtration. Next, $\E(|Y_n|) = \E\left(\left|X_{t_n}\right|\right) \lt \infty$. Finally, suppose that $\bs X$ is a martingale and $n, \, k \in \N$ with $k \lt n$. Then $t_k \lt t_n$ so $\E(Y_n \mid \mathscr{G}_k) = \E\left(X_{t_n} \mid \mathscr{F}_{t_k}\right) = X_{t_k} = Y_k$ Hence $\bs Y$ is also a martingale. The proofs for sub and super-martingales are similar, but with inequalities replacing the second equality.
This result is often useful for extending proofs of theorems in discrete time to continuous time.
The Martingale Transform
Our next discussion, in discrete time, shows how to build a new martingale from an existing martingale and a predictable process. This construction turns out to be very useful, and has an interesting gambling interpretation. To review the definition, recall that $\{Y_n: n \in \N_+\}$ is predictable relative to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$ if $Y_n$ is measurable with respect to $\mathscr{F}_{n-1}$ for $n \in \N_+$. Think of $Y_n$ as the bet that a gambler makes on game $n \in \N_+$. The gambler can base the bet on all of the information she has at that point, including the outcomes of the previous $n - 1$ games. That is, she can base the bet on the information encoded in $\mathscr{F}_{n-1}$.
Suppose that $\bs X = \{X_n: n \in \N\}$ is adapted to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$ and that $\bs Y = \{Y_n: n \in \N_+\}$ is predictable relative to $\mathfrak F$. The transform of $\bs X$ by $\bs Y$ is the process $\bs Y \cdot \bs X$ defined by $(\bs Y \cdot \bs X)_n = X_0 + \sum_{k=1}^n Y_k (X_k - X_{k-1}), \quad n \in \N$
The motivating example behind the transform, in terms of a gambler making a sequence of bets, is given in an example below. Note that $\bs Y \cdot \bs X$ is also adapted to $\mathfrak F$. Note also that the transform depends on $\bs X$ only through $X_0$ and $\{X_n - X_{n-1}: n \in \N_+\}$. If $\bs X$ is a martingale, this sequence is the martingale difference sequence studied in the Introduction.
Suppose $\bs X = \{X_n: n \in \N\}$ is adapted to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$ and that $\bs{Y} = \{Y_n: n \in \N_+\}$ is a bounded process, predictable relative to $\mathfrak{F}$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then $\bs Y \cdot \bs X$ is also a martingale relative to $\mathfrak F$.
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ and $\bs Y$ is nonnegative, then $\bs Y \cdot \bs X$ is also a sub-martingale relative to $\mathfrak F$.
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ and $\bs Y$ is nonnegative, then $\bs Y \cdot \bs X$ is also a super-martingale relative to $\mathfrak F$.
Proof
Suppose that $|Y_n| \le c$ for $n \in \N_+$ where $c \in (0, \infty)$. Then $\E(|(\bs Y \cdot \bs X)_n|) \le \E(|X_0|) + c \sum_{k=1}^n [\E(|X_k|) + \E(|X_{k-1}|)] \lt \infty, \quad n \in \N$ Next, for $n \in \N$, \begin{align*} \E\left[(\bs Y \cdot \bs X)_{n+1} \mid \mathscr{F}_n\right] &= \E\left[(\bs Y \cdot \bs X)_n + Y_{n+1} (X_{n+1} - X_n) \mid \mathscr{F}_n\right] = (\bs Y \cdot \bs X)_n + Y_{n+1} \E\left(X_{n+1} - X_n \mid \mathscr{F}_n\right) \\ & = (\bs Y \cdot \bs X)_n + Y_{n+1} [\E(X_{n+1} \mid \mathscr{F}_n) - X_n] \end{align*} since $(\bs Y \cdot \bs X)_n$, $Y_{n+1}$ and $X_n$ are $\mathscr{F}_n$-measurable. The results now follow from the definitions of martingale, sub-martingale, and super-martingale.
This construction is known as a martingale transform, and is a discrete version of the stochastic integral that we will study in the chapter on Brownian motion. The result also holds if instead of $\bs Y$ being bounded, we have $\bs X$ bounded and $\E(|Y_n|) \lt \infty$ for $n \in \N_+$.
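Here is a minimal computational sketch of the transform (assuming NumPy; the random walk and the particular betting scheme are our own choices, used only because the scheme is clearly predictable and bounded):

```python
import numpy as np

rng = np.random.default_rng(1)

def transform(X, Y):
    """Compute (Y . X)_n = X_0 + sum_{k=1}^n Y_k (X_k - X_{k-1}).
    X has shape (paths, n + 1) with X[:, 0] = X_0, and Y[:, k - 1]
    plays the role of the predictable bet Y_k."""
    return X[:, :1] + np.cumsum(Y * np.diff(X, axis=1), axis=1)

n, paths = 30, 100_000
V = rng.choice([-1, 1], size=(paths, n))                     # fair increments
X = np.hstack([np.zeros((paths, 1)), np.cumsum(V, axis=1)])  # a martingale

# Predictable bets: bet 1 on game 1; thereafter bet 2 after a win, 1 after a loss
Y = np.ones((paths, n))
Y[:, 1:] = np.where(V[:, :-1] == 1, 2.0, 1.0)

Z = transform(X, Y)
print(Z.mean(axis=0)[:5])   # each entry should be close to E(X_0) = 0
```

Since $\bs Y$ is predictable and bounded and $\bs X$ is a martingale, the sample means of the transform stay near the constant mean $\E(X_0) = 0$, as the theorem predicts.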
The Doob Decomposition
The next result, in discrete time, shows how to decompose a basic stochastic process into a martingale and a predictable process. The result is known as the Doob decomposition theorem and is named for Joseph Doob who developed much of the modern theory of martingales.
Suppose that $\bs X = \{X_n: n \in \N\}$ satisfies the basic assumptions above relative to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$. Then $X_n = Y_n + Z_n$ for $n \in \N$ where $\bs Y = \{Y_n: n \in \N\}$ is a martingale relative to $\mathfrak F$ and $\bs Z = \{Z_n: n \in \N\}$ is predictable relative to $\mathfrak F$. The decomposition is unique.
1. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\bs Z$ is increasing.
2. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\bs Z$ is decreasing.
Proof
Recall that the basic assumptions mean that $\bs X = \{X_n: n \in \N\}$ is adapted to $\mathfrak F$ and $\E(|X_n|) \lt \infty$ for $n \in \N$. Define $Z_0 = 0$ and $Z_n = \sum_{k=1}^n \left[\E(X_k \mid \mathscr{F}_{k-1}) - X_{k-1}\right], \quad n \in \N_+$ Then $Z_n$ is measurable with respect to $\mathscr{F}_{n-1}$ for $n \in \N_+$ so $\bs Z$ is predictable with respect to $\mathfrak F$. Now define $Y_n = X_n - Z_n = X_n - \sum_{k=1}^n \left[\E(X_k \mid \mathscr{F}_{k-1}) - X_{k-1}\right], \quad n \in \N$ Then $\E(|Y_n|) \lt \infty$ and trivially $X_n = Y_n + Z_n$ for $n \in \N$. Next, \begin{align*} \E(Y_{n+1} \mid \mathscr{F}_n) & = \E(X_{n+1} \mid \mathscr{F}_n) - Z_{n+1} = \E(X_{n+1} \mid \mathscr{F}_n) - \sum_{k=1}^{n+1} \left[\E(X_k \mid \mathscr{F}_{k-1}) - X_{k-1}\right] \\ & = X_n - \sum_{k=1}^n \left[\E(X_k \mid \mathscr{F}_{k-1}) - X_{k-1}\right] = Y_n, \quad n \in \N \end{align*} Hence $\bs Y$ is a martingale. Conversely, suppose that $\bs X$ has the decomposition in terms of $\bs Y$ and $\bs Z$ given in the theorem. Since $\bs Y$ is a martingale and $\bs Z$ is predictable, \begin{align*} \E(X_n - X_{n-1} \mid \mathscr{F}_{n-1}) & = \E(Y_n \mid \mathscr{F}_{n-1}) - \E(Y_{n-1} \mid \mathscr{F}_{n-1}) + \E(Z_n \mid \mathscr{F}_{n-1}) - \E(Z_{n-1} \mid \mathscr{F}_{n-1}) \\ & = Y_{n-1} - Y_{n-1} + Z_n - Z_{n-1} = Z_n - Z_{n-1}, \quad n \in \N_+ \end{align*} Also $Z_0 = 0$ so $\bs X$ uniquely determines $\bs Z$. But $Y_n = X_n - Z_n$ for $n \in \N$, so $\bs X$ uniquely determines $\bs Y$ also.
1. If $\bs X$ is a sub-martingale then $\E(X_n \mid \mathscr{F}_{n-1}) - X_{n-1} \ge 0$ for $n \in \N_+$ so $\bs Z$ is increasing.
2. If $\bs X$ is a super-martingale then $\E(X_n \mid \mathscr{F}_{n-1}) - X_{n-1} \le 0$ for $n \in \N_+$ so $\bs Z$ is decreasing.
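For a quick worked example (using the simple symmetric random walk, which reappears below): let $S_n = \sum_{k=1}^n V_k$ where $V_1, V_2, \ldots$ are independent with $\P(V_k = 1) = \P(V_k = -1) = \frac{1}{2}$, and let $X_n = S_n^2$, a sub-martingale. Then $\E(X_k \mid \mathscr{F}_{k-1}) - X_{k-1} = \E\left[(S_{k-1} + V_k)^2 \mid \mathscr{F}_{k-1}\right] - S_{k-1}^2 = \E(V_k^2) = 1$ (the cross term vanishes since $\E(V_k \mid \mathscr{F}_{k-1}) = 0$), so the construction in the proof gives $Z_n = n$ and $Y_n = S_n^2 - n$ for $n \in \N$. In particular, $\{S_n^2 - n: n \in \N\}$ is a martingale, and $\bs Z$ is increasing, as part (a) requires.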
A decomposition of this form is more complicated in continuous time, in part because the definition of a predictable process is more subtle and complex. The decomposition theorem holds in continuous time, with our basic assumptions and the additional assumption that the collection of random variables $\{X_\tau: \tau \text{ is a finite-valued stopping time}\}$ is uniformly integrable. The result is known as the Doob-Meyer decomposition theorem, named additionally for Paul Meyer.
Markov Processes
As you might guess, there are important connections between Markov processes and martingales. Suppose that $\bs X = \{X_t: t \in T\}$ is a (homogeneous) Markov process with state space $(S, \mathscr{S})$, relative to the filtration $\mathfrak F = \{\mathscr{F}_t: t \in T\}$. Let $\bs P = \{P_t: t \in T\}$ denote the collection of transition kernels of $\bs X$, so that $P_t(x, A) = \P(X_t \in A \mid X_0 = x), \quad x \in S, A \in \mathscr{S}$ Recall that (like all probability kernels), $P_t$ operates (on the right) on (measurable) functions $h: S \to \R$ by the rule $P_t h(x) = \int_S P_t(x, dy) h(y) = \E[h(X_t) \mid X_0 = x], \quad x \in S$ assuming as usual that the expected value exists. Here is the critical definition that we will need.
Suppose that $h: S \to \R$ and that $\E[|h(X_t)|] \lt \infty$ for $t \in T$.
1. $h$ is harmonic for $\bs X$ if $P_t h = h$ for $t \in T$.
2. $h$ is sub-harmonic for $\bs X$ if $P_t h \ge h$ for $t \in T$.
3. $h$ is super-harmonic for $\bs X$ if $P_t h \le h$ for $t \in T$.
The following theorem gives the fundamental connection between the two types of stochastic processes. Given the similarity in the terminology, the result may not be a surprise.
Suppose that $h: S \to \R$ and $\E[\left|h(X_t)\right|] \lt \infty$ for $t \in T$. Define $h(\bs X) = \{h(X_t): t \in T\}$.
1. $h$ is harmonic for $\bs X$ if and only if $h(\bs X)$ is a martingale with respect to $\mathfrak F$.
2. $h$ is sub-harmonic for $\bs X$ if and only if $h(\bs X)$ is a sub-martingale with respect to $\mathfrak F$.
3. $h$ is super-harmonic for $\bs X$ if and only if $h(\bs X)$ is a super-martingale with respect to $\mathfrak F$.
Proof
Suppose that $s,\, t \in T$ with $s \le t$. Then by the Markov property, $\E[h(X_t) \mid \mathscr{F}_s] = \E[h(X_t) \mid X_s] = P_{t-s} h(X_s)$ So if $h$ is harmonic, $\E[h(X_t) \mid \mathscr{F}_s] = h(X_s)$ so $\{h(X_t): t \in T\}$ is a martingale. Conversely, if $\{h(X_t): t \in T\}$ is a martingale, then $P_{t-s}h(X_s) = h(X_s)$. Letting $s = 0$ and $X_0 = x$ gives $P_t h(x) = h(x)$ so $h$ is harmonic. The proofs for sub and super-martingales are similar, with inequalities replacing the equalities.
Several of the examples given in the Introduction can be re-interpreted in the context of harmonic functions of Markov chains. We explore some of these below.
Examples
Let $\mathscr{R}$ denote the usual set of Borel measurable subsets of $\R$, and for $A \in \mathscr{R}$ and $x \in \R$ let $A - x = \{y - x: y \in A\}$. Let $I$ denote the identity function on $\R$, so that $I(x) = x$ for $x \in \R$. We will need this notation in a couple of our applications below.
Random Walks
Suppose that $\bs V = \{V_n: n \in \N\}$ is a sequence of independent, real-valued random variables, with $\{V_n: n \in \N_+\}$ identically distributed and having common probability measure $Q$ on $(\R, \mathscr{R})$ and mean $a \in \R$. Recall from the Introduction that the partial sum process $\bs X = \{X_n: n \in \N\}$ associated with $\bs V$ is given by $X_n = \sum_{i=0}^n V_i, \quad n \in \N$ and that $\bs X$ is a (discrete-time) random walk. But $\bs X$ is also a discrete-time Markov process with one-step transition kernel $P$ given by $P(x, A) = Q(A - x)$ for $x \in \R$ and $A \in \mathscr{R}$.
The identity function $I$ is
1. Harmonic for $\bs X$ if $a = 0$.
2. Sub-harmonic for $\bs X$ if $a \ge 0$.
3. Super-harmonic for $\bs X$ if $a \le 0$.
Proof
Note that $PI(x) = \E(X_1 \mid X_0 = x) = x + \E(X_1 - X_0 \mid X_0 = x) = x + \E(V_1 \mid X_0 = x) = I(x) + a$ since $V_1$ and $X_0 = V_0$ are independent. The results now follow from the definitions.
It now follows from our theorem above that $\bs X$ is a martingale if $a = 0$, a sub-martingale if $a \gt 0$, and a super-martingale if $a \lt 0$. We showed these results directly from the definitions in the Introduction.
The Simple Random Walk
Suppose now that $\bs{V} = \{V_n: n \in \N\}$ is a sequence of independent random variables with $\P(V_i = 1) = p$ and $\P(V_i = -1) = 1 - p$ for $i \in \N_+$, where $p \in (0, 1)$. Let $\bs{X} = \{X_n: n \in \N\}$ be the partial sum process associated with $\bs{V}$ so that $X_n = \sum_{i=0}^n V_i, \quad n \in \N$ Then $\bs{X}$ is the simple random walk with parameter $p$. In terms of gambling, our gambler plays a sequence of independent and identical games, and on each game, wins €1 with probability $p$ and loses €1 with probability $1 - p$. So if $X_0$ is the gambler's initial fortune, then $X_n$ is her net fortune after $n$ games. In the Introduction we showed that $\bs X$ is a martingale if $p = \frac{1}{2}$, a super-martingale if $p \lt \frac{1}{2}$, and a sub-martingale if $p \gt \frac{1}{2}$. But suppose now that instead of making constant unit bets, the gambler makes bets that depend on the outcomes of previous games. This leads to a martingale transform as studied above.
Suppose that the gambler bets $Y_n$ on game $n \in \N_+$ (at even stakes), where $Y_n \in [0, \infty)$ depends on $(V_0, V_1, V_2, \ldots, V_{n-1})$ and satisfies $\E(Y_n) \lt \infty$. So the process $\bs Y = \{Y_n: n \in \N_+\}$ is predictable with respect to the natural filtration of $\bs X$, and the gambler's net winnings after $n$ games is $(\bs Y \cdot \bs X)_n = V_0 + \sum_{k=1}^n Y_k V_k = X_0 + \sum_{k=1}^n Y_k (X_k - X_{k-1})$
1. $\bs Y \cdot \bs X$ is a sub-martingale if $p \gt \frac{1}{2}$.
2. $\bs Y \cdot \bs X$ is a super-martingale if $p \lt \frac{1}{2}$.
3. $\bs Y \cdot \bs X$ is a martingale if $p = \frac{1}{2}$.
Proof
These results follow from the theorem for martingale transforms above; since the increments $X_k - X_{k-1} = V_k$ are bounded, the integrability condition $\E(Y_n) \lt \infty$ suffices in place of the boundedness of $\bs Y$.
The simple random walk $\bs X$ is also a discrete-time Markov chain on $\Z$ with one-step transition matrix $P$ given by $P(x, x + 1) = p$, $P(x, x - 1) = 1 - p$.
The function $h$ given by $h(x) = \left(\frac{1 - p}{p}\right)^x$ for $x \in \Z$ is harmonic for $\bs X$.
Proof
For $x \in \Z$, \begin{align*} P h(x) & = p h(x + 1) + (1 - p) h(x - 1) = p \left(\frac{1 - p}{p}\right)^{x+1} + (1 - p) \left(\frac{1 - p}{p}\right)^{x-1} \\ & = \frac{(1 - p)^{x + 1}}{p^x} + \frac{(1 - p)^x}{p^{x-1}} = \left(\frac{1 - p}{p}\right)^x [(1 - p) + p] = h(x) \end{align*}
It now follows from our theorem above that the process $\bs Z = \{Z_n: n \in \N\}$ given by $Z_n = \left(\frac{1 - p}{p}\right)^{X_n}$ for $n \in \N$ is a martingale. We showed this directly from the definition in the Introduction. As you may recall, this is De Moivre's martingale, named for Abraham De Moivre.
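Here is a quick simulation sketch (assuming NumPy; the value of $p$ is arbitrary) checking that De Moivre's martingale has constant mean:

```python
import numpy as np

rng = np.random.default_rng(2)

p, n, paths = 0.7, 20, 200_000
V = rng.choice([1, -1], p=[p, 1 - p], size=(paths, n))
X = np.cumsum(V, axis=1)            # simple random walk with X_0 = 0
Z = ((1 - p) / p) ** X              # De Moivre's martingale; Z_0 = 1
print(Z.mean(axis=0)[:8])           # early entries close to 1; later ones noisier
```

The check works because $\E\left[r^{V_k}\right] = p r + (1 - p) / r = 1$ when $r = (1 - p) / p$, so $\E(Z_n) = \E(Z_0) = 1$ for every $n$.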
Branching Processes
Recall the discussion of the simple branching process from the Introduction. The fundamental assumption is that the particles act independently, each with the same offspring distribution on $\N$. As before, we will let $f$ denote the (discrete) probability density function of the number of offspring of a particle, $m$ the mean of the distribution, and $\phi$ the probability generating function of the distribution. We assume that $f(0) \gt 0$ and $f(0) + f(1) \lt 1$ so that a particle has a positive probability of dying without children and a positive probability of producing more than 1 child. Recall that $q$ denotes the probability of extinction, starting with a single particle.
The stochastic process of interest is $\bs{X} = \{X_n: n \in \N\}$ where $X_n$ is the number of particles in the $n$th generation for $n \in \N$. Recall that $\bs{X}$ is a discrete-time Markov chain on $\N$ with one-step transition matrix $P$ given by $P(x, y) = f^{* x}(y)$ for $x, \, y \in \N$ where $f^{*x}$ denotes the convolution power of order $x$ of $f$.
The function $h$ given by $h(x) = q^x$ for $x \in \N$ is harmonic for $\bs X$.
Proof
For $x \in \N$, $Ph(x) = \sum_{y \in \N} P(x, y) h(y) = \sum_{y \in \N} f^{*x}(y) q^y$ The last expression is the probability generating function of $f^{*x}$ evaluated at $q$. But this PGF is simply $\phi^x$ and $q$ is a fixed point of $\phi$ so we have $Ph(x) = [\phi(q)]^x = q^x = h(x)$
It now follows from our theorem above that the process $\bs Z = \{Z_n: n \in \N\}$ is a martingale where $Z_n = q^{X_n}$ for $n \in \N$. We showed this directly from the definition in the Introduction. We also showed that the process $\bs Y = \{Y_n: n \in \N\}$ is a martingale where $Y_n = X_n / m^n$ for $n \in \N$. But we can't write $Y_n = h(X_n)$ for a function $h$ defined on the state space, so we can't interpret this martingale in terms of a harmonic function.
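As a numerical sketch (assuming NumPy, with a hypothetical offspring distribution on $\{0, 1, 2\}$), we can compute $q$ as the smallest fixed point of $\phi$ by iteration and then check by simulation that $q^{X_n}$ has constant mean:

```python
import numpy as np

rng = np.random.default_rng(3)

f = np.array([0.25, 0.25, 0.5])    # offspring pdf on {0, 1, 2}; mean m = 1.25

def phi(s):                        # probability generating function of f
    return f[0] + f[1] * s + f[2] * s ** 2

q = 0.0                            # iterating q <- phi(q) from 0 converges
for _ in range(1000):              # to the smallest fixed point in [0, 1]
    q = phi(q)
print(q)                           # 0.5 for this offspring distribution

paths, generations = 10_000, 5
X = np.ones(paths, dtype=int)      # X_0 = 1, so E(q^{X_0}) = q
for _ in range(generations):
    X = np.array([rng.choice(3, p=f, size=x).sum() if x > 0 else 0 for x in X])
    print((q ** X).mean())         # each value should be close to q = 0.5
```

For this distribution, $\phi(s) = s$ reduces to $0.5 s^2 - 0.75 s + 0.25 = 0$, with roots $\frac{1}{2}$ and $1$, so $q = \frac{1}{2}$.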
General Random Walks
Suppose that $\bs X = \{X_t: t \in T\}$ is a stochastic process satisfying the basic assumptions above relative to the filtration $\mathfrak F = \{\mathscr{F}_t: t \in T\}$. Recall from the Introduction that the term increment refers to a difference of the form $X_{s+t} - X_s$ for $s, \, t \in T$. The process $\bs X$ has independent increments if this increment is always independent of $\mathscr{F}_s$, and has stationary increments if this increment always has the same distribution as $X_t - X_0$. In discrete time, a process with stationary, independent increments is simply a random walk as discussed above. In continuous time, a process with stationary, independent increments (and with the continuity assumptions we have imposed) is called a continuous-time random walk, and also a Lévy process, named for Paul Lévy.
So suppose that $\bs X$ has stationary, independent increments. For $t \in T$ let $Q_t$ denote the probability distribution of $X_t - X_0$ on $(\R, \mathscr{R})$, so that $Q_t$ is also the probability distribution of $X_{s+t} - X_s$ for every $s, \, t \in T$. From our previous study, we know that $\bs X$ is a Markov process with transition kernel $P_t$ at time $t \in T$ given by $P_t(x, A) = Q_t(A - x); \quad x \in \R, A \in \mathscr{R}$ We also know that $\E(X_t - X_0) = a t$ for $t \in T$ where $a = \E(X_1 - X_0)$ (assuming of course that the last expected value exists in $\R$).
The identity function $I$ is
1. Harmonic for $\bs X$ if $a = 0$.
2. Sub-harmonic for $\bs X$ if $a \ge 0$.
3. Super-harmonic for $\bs X$ if $a \le 0$.
Proof
Note that $P_t I(x) = \E(X_t \mid X_0 = x) = x + \E(X_t - X_0 \mid X_0 = x) = I(x) + a t$ since $X_t - X_0$ is independent of $X_0$. The results now follow from the definitions.
It now follows that $\bs X$ is a martingale if $a = 0$, a sub-martingale if $a \ge 0$, and a super-martingale if $a \le 0$. We showed this directly in the Introduction. Recall that in continuous time, the Poisson counting process has stationary, independent increments, as does standard Brownian motion.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
As in the Introduction, we start with a stochastic process $\bs{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\bs{X}$ is adapted to $\mathfrak{F}$. So $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$ and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$. We assume that $\E\left(\left|X_t\right|\right) \lt \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$. Finally, in continuous time where $T = [0, \infty)$, we make the standard assumption that $\bs X$ is right continuous and has left limits, and that the filtration $\mathfrak F$ is right continuous and complete.
Our general goal in this section is to see if some of the important martingale properties are preserved if the deterministic time $t \in T$ is replaced by a (random) stopping time. Recall that a random time $\tau$ with values in $T \cup \{\infty\}$ is a stopping time relative to $\mathfrak F$ if $\{\tau \le t\} \in \mathscr{F}_t$ for $t \in T$. So a stopping time is a random time that does not require that we see into the future. That is, we can tell if $\tau \le t$ from the information available at time $t$. Next recall that the $\sigma$-algebra associated with the stopping time $\tau$ is $\mathscr{F}_\tau = \left\{A \in \mathscr{F}: A \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \in T\right\}$ So $\mathscr{F}_\tau$ is the collection of events up to the random time $\tau$ just as $\mathscr{F}_t$ is the collection of events up to the deterministic time $t \in T$. In terms of a gambler playing a sequence of games, the time that the gambler decides to stop playing must be a stopping time, and in fact this interpretation is the origin of the name. That is, the time when the gambler decides to stop playing can only depend on the information that the gambler has up to that point in time.
Optional Stopping
The basic martingale equation $\E(X_t \mid \mathscr{F}_s) = X_s$ for $s, \, t \in T$ with $s \le t$ can be generalized by replacing both $s$ and $t$ by bounded stopping times. The result is known as Doob's optional stopping theorem, named again for Joseph Doob. Suppose that $\bs X = \{X_t: t \in T\}$ satisfies the basic assumptions above with respect to the filtration $\mathfrak F = \{\mathscr{F}_t: t \in T\}$.
Suppose that $\rho$ and $\tau$ are bounded stopping times relative to $\mathfrak F$ with $\rho \le \tau$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then $\E(X_\tau \mid \mathscr{F}_\rho) = X_\rho$.
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(X_\tau \mid \mathscr{F}_\rho) \ge X_\rho$.
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(X_\tau \mid \mathscr{F}_\rho) \le X_\rho$.
Proof in discrete time
1. Suppose that $\tau \le k$ where $k \in \N_+$ and let $A \in \mathscr{F}_\tau$. For $j \in \N$ with $j \le k$, $A \cap \{\tau = j\} \in \mathscr{F}_j$. Hence by the martingale property, $\E(X_k ; A \cap \{\tau = j\}) = \E(X_j ; A \cap \{\tau = j\}) = \E(X_\tau ; A \cap \{\tau = j\})$ Since $k$ is an upper bound on $\tau$, the events $A \cap \{\tau = j\}$ for $j = 0, 1, \ldots, k$ partition $A$, so summing the displayed equation over $j$ gives $\E(X_k ; A) = \E(X_\tau ; A)$. By definition of conditional expectation, $\E(X_k \mid \mathscr{F}_\tau) = X_\tau$. But since $k$ is also an upper bound for $\rho$ we also have $\E(X_k \mid \mathscr{F}_\rho) = X_\rho$. Finally, since $\rho \le \tau$ we have $\mathscr{F}_\rho \subseteq \mathscr{F}_\tau$, so by the tower property, $X_\rho = \E(X_k \mid \mathscr{F}_\rho) = \E\left[\E(X_k \mid \mathscr{F}_\tau) \mid \mathscr{F}_\rho\right] = \E(X_\tau \mid \mathscr{F}_\rho)$
2. If $\bs X$ is a sub-martingale, then by the Doob decomposition theorem, $X_n = Y_n + Z_n$ for $n \in \N$ where $\bs Y = \{Y_n: n \in \N\}$ is a martingale relative to $\mathfrak F$ and $\bs Z = \{Z_n: n \in \N\}$ is increasing and is predictable relative to $\mathfrak F$. So $\E(X_\tau \mid \mathscr{F}_\rho) = \E(Y_\tau \mid \mathscr{F}_\rho) + \E(Z_\tau \mid \mathscr{F}_\rho)$ But $\E(Y_\tau \mid \mathscr{F}_\rho) = Y_\rho$ by part (a) and since $\bs Z$ is increasing, $\E(Z_\tau \mid \mathscr{F}_\rho) \ge \E(Z_\rho \mid \mathscr{F}_\rho) = Z_\rho$. Hence $\E(X_\tau \mid \mathscr{F}_\rho) \ge X_\rho$.
3. The proof when $\bs X$ is a super-martingale is just like (b), except that the process $\bs Z$ is decreasing.
Proof in continuous time
Suppose that $\bs X$ is a martingale. We need to show that $\E(X_\tau; A) = \E(X_\rho; A)$ for every $A \in \mathscr{F}_\rho$. Let $\rho_n = \lceil 2^n \rho \rceil / 2^n$ and $\tau_n = \lceil 2^n \tau \rceil / 2^n$ for $n \in \N$. The stopping times $\rho_n$ and $\tau_n$ take values in a countable set $T_n$ for each $n \in \N$, and $\rho_n \downarrow \rho$ and $\tau_n \downarrow \tau$ as $n \to \infty$. The process $\{X_t: t \in T_n\}$ is a discrete-time martingale for each $n \in \N$. By the right continuity of $\bs X$, $X_{\rho_n} \to X_\rho, \; X_{\tau_n} \to X_\tau \text{ as } n \to \infty$ Suppose next that $\tau \le c$ where $c \in (0, \infty)$ so that $\rho \le c$ also. Then $\rho_n \le c + 1$ and $\tau_n \le c + 1$ for $n \in \N$ so the discrete stopping times are uniformly bounded. From the discrete version of the theorem, $X_{\rho_n} = \E\left(X_{c+1} \mid \mathscr{F}_{\rho_n}\right)$ and $X_{\tau_n} = \E\left(X_{c+1} \mid \mathscr{F}_{\tau_n}\right)$ for $n \in \N$. It then follows that the sequences $\left\{X_{\rho_n}: n \in \N\right\}$ and $\left\{X_{\tau_n}: n \in \N\right\}$ are uniformly integrable and hence $X_{\rho_n} \to X_\rho$ and $X_{\tau_n} \to X_\tau$ as $n \to \infty$ in mean as well as with probability 1. Now let $A \in \mathscr{F}_\rho$. Since $\rho \le \rho_n$, $\mathscr{F}_\rho \subseteq \mathscr{F}_{\rho_n}$ and so $A \in \mathscr{F}_{\rho_n}$ for each $n \in \N$. By the theorem in discrete time, $\E\left(X_{\tau_n}; A\right) = \E\left(X_{\rho_n}: A\right), \quad n \in \N$ Letting $n \to \infty$ gives $\E(X_\tau; A) = \E(X_\rho; A)$. The proofs in parts (b) and (c) are as in the discrete time.
The assumption that the stopping times are bounded is critical. A counterexample when this assumption does not hold is given below. Here are a couple of simple corollaries:
Suppose again that $\rho$ and $\tau$ are bounded stopping times relative to $\mathfrak F$ with $\rho \le \tau$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then $\E(X_\tau) = \E(X_\rho)$.
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(X_\tau) \ge \E(X_\rho)$.
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(X_\tau) \le \E(X_\rho)$.
Proof
Recall that $\E(X_\tau) = \E[\E(X_\tau \mid \mathscr{F}_\rho)]$, so the results are immediate from the optional stopping theorem.
Suppose that $\tau$ is a bounded stopping time relative to $\mathfrak F$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then $\E(X_\tau) = \E(X_0)$.
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(X_\tau) \ge \E(X_0)$.
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(X_\tau) \le \E(X_0)$.
The Stopped Martingale
For our next discussion, we first need to recall how to stop a stochastic process at a stopping time.
Suppose that $\bs X$ satisfies the assumptions above and that $\tau$ is a stopping time relative to the filtration $\mathfrak F$. The stopped process $\bs{X}^\tau = \{X^\tau_t: t \in T\}$ is defined by $X^\tau_t = X_{t \wedge \tau}, \quad t \in T$
Details
In continuous time, our standard assumptions ensure that $\bs{X}^\tau$ is a valid stochastic process and is adapted to $\mathfrak F$. That is, $X^\tau_t$ is measurable with respect to $\mathscr{F}_t$ for each $t \in [0, \infty)$. Moreover, $\bs{X}^\tau$ is also right continuous and has left limits.
So $X^\tau_t = X_t$ if $t \lt \tau$ and $X^\tau_t = X_\tau$ if $t \ge \tau$. In particular, note that $X^\tau_0 = X_0$. If $X_t$ is the fortune of a gambler at time $t \in T$, then $X^\tau_t$ is the revised fortune at time $t$ when $\tau$ is the stopping time of the gambler. Our next result, known as the elementary stopping theorem, is that a martingale stopped at a stopping time is still a martingale.
Suppose again that $\bs X$ satisfies the assumptions above, and that $\tau$ is a stopping time relative to $\mathfrak F$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then so is $\bs{X}^\tau$.
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then so is $\bs{X}^\tau$.
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then so is $\bs{X}^\tau$.
General proof
If $s, \, t \in T$ with $s \le t$ then $\tau \wedge s$ and $\tau \wedge t$ are bounded stopping times with $\tau \wedge s \le \tau \wedge t$. So the results follows immediately from the optional stopping theorem above.
Special proof in discrete time
In discrete time, there is a simple direct proof using the martingale transform. So suppose that $T = \N$ and define the process $\bs Y = \{Y_n: n \in \N_+\}$ by $Y_n = \bs{1}(\tau \ge n) = 1 - \bs{1}(\tau \le n - 1), \quad n \in \N_+$ By definition of a stopping time, $\{\tau \le n - 1\} \in \mathscr{F}_{n-1}$ for $n \in \N_+$, so the process $\bs Y$ is predictable. Of course, $\bs Y$ is a bounded, nonnegative process also. The transform of $\bs X$ by $\bs Y$ is $(\bs Y \cdot \bs X)_n = X_0 + \sum_{k=1}^n Y_k (X_k - X_{k-1}) = X_0 + \sum_{k=1}^n \bs{1}(\tau \ge k)(X_k - X_{k-1}), \quad n \in \N_+$ But note that $X^\tau_k - X^\tau_{k-1} = X_k - X_{k-1}$ if $\tau \ge k$ and $X^\tau_k - X^\tau_{k-1} = X_\tau - X_\tau = 0$ if $\tau \lt k$. That is, $X^\tau_k - X^\tau_{k-1} = \bs{1}(\tau \ge k)(X_k - X_{k-1})$. Hence $(\bs Y \cdot \bs X)_n = X_0 + \sum_{k=1}^n (X^\tau_k - X^\tau_{k-1}) = X_0 + X^\tau_n - X^\tau_0 = X^\tau_n, \quad n \in \N_+$ But if $\bs X$ is a martingale (sub-martingale) (super-martingale), then so is the transform $\bs Y \cdot \bs X = \bs{X}^\tau$.
The elementary stopping theorem is bad news for the gambler playing a sequence of games. If the games are fair or unfavorable, then no stopping time, regardless of how cleverly designed, can help the gambler. Since a stopped martingale is still a martingale, the mean property holds.
Suppose again that $\bs X$ satisfies the assumptions above, and that $\tau$ is a stopping time relative to $\mathfrak F$. Let $t \in T$.
1. If $\bs X$ is a martingale relative to $\mathfrak F$ then $\E(X_{t \wedge \tau}) = \E(X_0)$
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(X_{t \wedge \tau}) \ge \E(X_0)$
3. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(X_{t \wedge \tau}) \le \E(X_0)$
Optional Stopping in Discrete Time
A simple corollary of the optional stopping theorem is that if $\bs X$ is a martingale and $\tau$ a bounded stopping time, then $\E(X_\tau) = \E(X_0)$ (with the appropriate inequalities if $\bs X$ is a sub-martingale or a super-martingale). Our next discussion centers on other conditions which give these results in discrete time. Suppose that $\bs X = \{X_n: n \in \N\}$ satisfies the basic assumptions above with respect to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$, and that $\tau$ is a stopping time relative to $\mathfrak F$.
Suppose that $\left|X_n\right|$ is bounded uniformly in $n \in \N$ and that $\tau$ is finite.
1. If $\bs X$ is a martingale then $\E(X_\tau) = \E(X_0)$.
2. If $\bs X$ is a sub-martingale then $\E(X_\tau) \ge \E(X_0)$.
3. If $\bs X$ is a super-martingale then $\E(X_\tau) \le \E(X_0)$.
Proof
Assume that $\bs X$ is a super-martingale. The proof for a sub-martingale is similar, and then the results follow immediately for a martingale. The main tool is the mean property above for the stopped super-martingale: $\E(X_{\tau \wedge n}) \le \E(X_0), \quad n \in \N$ Since $\tau \lt \infty$ with probability 1, $\tau \wedge n \to \tau$ as $n \to \infty$, also with probability 1. Since $|X_n|$ is bounded uniformly in $n \in \N$, it follows from the bounded convergence theorem that $\E(X_{\tau \wedge n}) \to \E(X_\tau)$ as $n \to \infty$. Letting $n \to \infty$ in the displayed equation gives $\E(X_\tau) \le \E(X_0)$.
Suppose that $\left|X_{n+1} - X_n\right|$ is bounded uniformly in $n \in \N$ and that $\E(\tau) \lt \infty$.
1. If $\bs X$ is a martingale then $\E(X_\tau) = \E(X_0)$.
2. If $\bs X$ is a sub-martingale then $\E(X_\tau) \ge \E(X_0)$.
3. If $\bs X$ is a super-martingale then $\E(X_\tau) \le \E(X_0)$.
Proof
Assume that $\bs X$ is a super-martingale. The proofs for a sub-martingale are similar, and then the results follow immediately for a martingale. The main tool once again is the mean property above for the stopped super-martingale: $\E(X_{\tau \wedge n}) \le \E(X_0), \quad n \in \N$ Suppose that $|X_{n+1} - X_n| \le c$ where $c \in (0, \infty)$. Then $|X_{\tau \wedge n} - X_0| = \left|\sum_{k=1}^{\tau \wedge n} (X_k - X_{k-1})\right| \le \sum_{k=1}^{\tau \wedge n} |X_k - X_{k-1}| \le c (\tau \wedge n) \le c \tau$ Hence $|X_{\tau \wedge n}| \le c \tau + |X_0|$. Since $\E(\tau) \lt \infty$ we know that $\tau \lt \infty$ with probability 1, so as before, $\tau \wedge n \to \tau$ as $n \to \infty$. Also $\E(c \tau + |X_0|) \lt \infty$ so by the dominated convergence theorem, $\E(X_{\tau \wedge n}) \to \E(X_\tau)$ as $n \to \infty$. So again letting $n \to \infty$ in the displayed equation gives $\E(X_\tau) \le \E(X_0)$.
Let's return to our original interpretation of a martingale $\bs{X}$ representing the fortune of a gambler playing fair games. The gambler could choose to quit at a random time $\tau$, but $\tau$ would have to be a stopping time, based on the gambler's information encoded in the filtration $\mathfrak{F}$. Under the conditions of the theorem, no such scheme can help the gambler in terms of expected value.
Examples and Applications
The Simple Random Walk
Suppose that $\bs{V} = (V_1, V_2, \ldots)$ is a sequence of independent, identically distributed random variables with $\P(V_i = 1) = p$ and $\P(V_i = -1) = 1 - p$ for $i \in \N_+$, where $p \in (0, 1)$. Let $\bs{X} = (X_0, X_1, X_2, \ldots)$ be the partial sum process associated with $\bs{V}$ so that $X_n = \sum_{i=1}^n V_i, \quad n \in \N$ Then $\bs{X}$ is the simple random walk with parameter $p$. In terms of gambling, our gambler plays a sequence of independent and identical games, and on each game, wins €1 with probability $p$ and loses €1 with probability $1 - p$. So $X_n$ is the gambler's total net winnings after $n$ games. We showed in the Introduction that $\bs X$ is a martingale if $p = \frac{1}{2}$ (the fair case), a sub-martingale if $p \gt \frac{1}{2}$ (the favorable case), and a super-martingale if $p \lt \frac{1}{2}$ (the unfair case). Now, for $c \in \Z$, let $\tau_c = \inf\{n \in \N: X_n = c\}$ where as usual, $\inf(\emptyset) = \infty$. So $\tau_c$ is the first time that the gambler's fortune reaches $c$. What if the gambler simply continues playing until her net winnings reach some specified positive number (say €$1\,000\,000$)? Is that a workable strategy?
Suppose that $p = \frac{1}{2}$ and that $c \in \N_+$.
1. $\P(\tau_c \lt \infty) = 1$
2. $\E\left(X_{\tau_c}\right) = c \ne 0 = \E(X_0)$
3. $\E(\tau_c) = \infty$
Proof
Parts (a) and (c) hold since $\bs X$ is a null recurrent Markov chain. Part (b) follows from (a) since trivially $X_{\tau_c} = c$ if $\tau_c \lt \infty$.
Note that part (b) does not contradict the optional stopping theorem because of part (c). The strategy of waiting until the net winnings reaches a specified goal $c$ is unsustainable. Suppose now that the gambler plays until the net winnings either falls to a specified negative number (a loss that she can tolerate) or reaches a specified positive number (a goal she hopes to reach).
Suppose again that $p = \frac{1}{2}$. For $a, \, b \in \N_+$, let $\tau = \tau_{-a} \wedge \tau_b$. Then
1. $\E(\tau) \lt \infty$
2. $\E(X_\tau) = 0$
3. $\P(\tau_{-a} \lt \tau_b) = b / (a + b)$
Proof
1. We will let $X_0$ have an arbitrary value in the set $\{-a, -a + 1, \ldots, b - 1, b\}$, so that we can use Markov chain techniques. Let $m(x) = \E(\tau \mid X_0 = x)$ for $x$ in this set. Conditioning on the first state and using the Markov property we have $m(x) = 1 + \frac{1}{2} m(x - 1) + \frac{1}{2} m(x + 1), \quad x \in \{-a + 1, \ldots, b - 1\}$ with boundary conditions $m(-a) = m(b) = 0$. The linear recurrence relation can be solved explicitly: $m(x) = (a + x)(b - x)$, so in particular $\E(\tau) = m(0) = a b \lt \infty$.
2. The optional stopping theorem applies (the increments are bounded and $\E(\tau) \lt \infty$ by part (a)), so $\E(X_\tau) = \E(X_0) = 0$.
3. Let $q = \P(\tau_{-a} \lt \tau_b)$ so that $1 - q = \P(\tau_b \lt \tau_{-a})$. By definition, $X_\tau = -a$ if $\tau_{-a} \lt \tau_b$ and $X_\tau = b$ if $\tau_b \lt \tau_{-a}$. So from (b), $q(-a) + (1 - q) b = 0$ and therefore $q = b / (a + b)$.
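A Monte Carlo sketch of these results (assuming NumPy; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

a, b, paths = 3, 7, 20_000
stop_values = np.empty(paths)
for i in range(paths):
    x = 0
    while -a < x < b:                 # play until the fortune hits -a or b
        x += rng.choice([-1, 1])
    stop_values[i] = x

print(stop_values.mean())                       # E(X_tau), close to 0
print((stop_values == -a).mean(), b / (a + b))  # both close to 0.7
```

The empirical mean at the stopping time stays near 0, and the fraction of paths absorbed at $-a$ is near $b / (a + b) = 0.7$, in agreement with parts (b) and (c).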
So gambling until the net winnings either falls to $-a$ or reaches $b$ is a workable strategy, but alas has expected value 0. Here's another example that shows that the first version of the optional sampling theorem can fail if the stopping times are not bounded.
Suppose again that $p = \frac{1}{2}$. Let $a, \, b \in \N_+$ with $a \lt b$. Then $\tau_a \lt \tau_b \lt \infty$ but $b = \E\left(X_{\tau_b} \mid \mathscr{F}_{\tau_a} \right) \ne X_{\tau_a} = a$
Proof
Since $X_0 = 0$, the process $\bs X$ must reach $a$ before reaching $b$. As before, $\tau_b \lt \infty$ but $\E(\tau_b) = \infty$ since $\bs X$ is a null recurrent Markov chain.
This result does not contradict the optional stopping theorem since the stopping times are not bounded.
Wald's Equation
Wald's equation, named for Abraham Wald, is a formula for the expected value of the sum of a random number of independent, identically distributed random variables. We have considered this before, in our discussion of conditional expected value and our discussion of random samples, but martingale theory leads to a particularly simple and elegant proof.
Suppose that $\bs X = (X_n: n \in \N_+)$ is a sequence of independent, identically distributed variables with common mean $\mu \in \R$. If $N$ is a stopping time for $\bs X$ with $\E(N) \lt \infty$ then $\E\left(\sum_{k=1}^N X_k\right) = \E(N) \mu$
Proof
Let $\mathfrak F$ denote the natural filtration associated with $\bs X$. Let $c = \E(|X_n|)$, so that by assumption, $c \lt \infty$. Finally, let $Y_n = \sum_{k=1}^n (X_k - \mu), \quad n \in \N_+$ Then $\bs Y = (Y_n: n \in \N_+)$ is a martingale relative to $\mathfrak F$, with mean 0. Note that $\E(|Y_{n+1} - Y_n|) = \E(|X_{n+1} - \mu|) \le c + |\mu|, \quad n \in \N_+$ Hence a discrete version of the optional stopping theorem (with the increments bounded in mean and $\E(N) \lt \infty$) applies and we have $\E(Y_N) = 0$. Therefore $0 = \E(Y_N) = \E\left[\sum_{k=1}^N (X_k - \mu)\right] = \E\left(\sum_{k=1}^N X_k - N \mu\right) = \E\left(\sum_{k=1}^N X_k\right) - \E(N) \mu$
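A simulation sketch of Wald's equation (assuming NumPy; the exponential distribution and the threshold-crossing stopping time are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(5)

mu, level, paths = 2.0, 10.0, 50_000
totals, Ns = np.empty(paths), np.empty(paths)
for i in range(paths):
    s, n = 0.0, 0
    while s <= level:                 # N = first n with X_1 + ... + X_n > level,
        s += rng.exponential(mu)      # a stopping time for X with E(N) < infinity
        n += 1
    totals[i], Ns[i] = s, n

print(totals.mean(), Ns.mean() * mu)  # the two should agree, by Wald's equation
```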
Patterns in Multinomial Trials
Patterns in multinomial trials were studied in the chapter on Renewal Processes. As is often the case, martingales provide a more elegant solution. Suppose that $\bs{L} = (L_1, L_2, \ldots)$ is a sequence of independent, identically distributed random variables taking values in a finite set $S$, so that $\bs{L}$ is a sequence of multinomial trials. Let $f$ denote the common probability density function so that for a generic trial variable $L$, we have $f(a) = \P(L = a)$ for $a \in S$. We assume that all outcomes in $S$ are actually possible, so $f(a) \gt 0$ for $a \in S$.
In this discussion, we interpret $S$ as an alphabet, and we write the sequence of variables in concatenation form, $\bs{L} = L_1 L_2 \cdots$ rather than standard sequence form. Thus the sequence is an infinite string of letters from our alphabet $S$. We are interested in the first occurrence of a particular finite substring of letters (that is, a word or pattern) in the infinite sequence. The following definition will simplify the notation.
If $\bs a = a_1 a_2 \cdots a_k$ is a word of length $k \in \N_+$ from the alphabet $S$, define $f(\bs{a}) = \prod_{i=1}^k f(a_i)$ so $f(\bs a)$ is the probability of $k$ consecutive trials producing word $\bs a$.
So, fix a word $\bs a = a_1 a_2 \cdots a_k$ of length $k \in \N_+$ from the alphabet $S$, and consider the number of trials $N_{\bs a}$ until $\bs a$ is completed. Our goal is to compute $\nu(\bs a) = \E\left(N_{\bs a}\right)$. We do this by casting the problem in terms of a sequence of gamblers playing fair games and then using the optional stopping theorem above. So suppose that if a gambler bets $c \in (0, \infty)$ on a letter $a \in S$ on a trial, then the gambler wins $c / f(a)$ if $a$ occurs on that trial and wins 0 otherwise. The expected value of this bet is $f(a) \frac{c}{f(a)} - c = 0$ and so the bet is fair. Consider now a gambler with an initial fortune 1. When she starts playing, she bets 1 on $a_1$. If she wins, she bets her entire fortune $1 / f(a_1)$ on the next trial on $a_2$. She continues in this way: as long as she wins, she bets her entire fortune on the next trial on the next letter of the word, until either she loses or completes the word $\bs a$. Finally, we consider a sequence of independent gamblers playing this strategy, with gambler $i$ starting on trial $i$ for each $i \in \N_+$.
For a finite word $\bs a$ from the alphabet $S$, $\nu(\bs a)$ is the total winnings by all of the players at time $N_{\bs a}$.
Proof
Let $X_n$ denote the total net winnings of all of the gamblers (winnings minus amounts invested) after trial $n \in \N_+$. Since all of the bets are fair, $\bs X = \{X_n: n \in \N_+\}$ is a martingale with mean 0. We will show that the conditions in the discrete version of the optional stopping theorem hold. First, consider disjoint blocks of trials of length $k$, that is, $\left((L_1, L_2, \ldots, L_k), (L_{k+1}, L_{k+2}, \ldots, L_{2 k}), \ldots\right)$ Let $M_{\bs a}$ denote the index of the first such block that forms the word $\bs a$. This variable has the geometric distribution on $\N_+$ with success parameter $f(\bs a)$ and so in particular, $\E(M_{\bs a}) = 1 / f(\bs a)$. But clearly $N_{\bs a} \le k M_{\bs a}$ so $\nu(\bs a) \le k / f(\bs a) \lt \infty$. Next, at most $k$ gamblers are actively playing at any time, and each has fortune at most $1 / f(\bs a)$ (partial products of the letter probabilities are at least $f(\bs a)$), so $|X_{n+1} - X_n|$ is bounded for $n \in \N_+$. So the optional stopping theorem applies, and hence $\E\left(X_{N_{\bs a}}\right) = 0$. But note that $\nu(\bs a)$ can also be interpreted as the expected amount of money invested by the gamblers (1 unit at each time until the game ends at time $N_{\bs a}$), and hence this must also be the expected total winnings at time $N_{\bs a}$ (which in fact is deterministic).
Given $\bs a$, we can compute the total winnings precisely. Write $N = N_{\bs a}$ for short. By definition, trials $N - k + 1, \ldots, N$ form the word $\bs a$ for the first time. Hence for $i \le N - k$, gambler $i$ loses at some point. Also by definition, gambler $N - k + 1$ wins all of her bets, completes word $\bs a$ and so collects $1 / f(\bs a)$. The complicating factor is that gamblers $N - k + 2, \ldots, N$ may or may not have won all of their bets at the point when the game ends. The following exercise illustrates this.
Suppose that $\bs{L}$ is a sequence of Bernoulli trials (so $S = \{0, 1\}$) with success probability $p \in (0, 1)$. For each of the following strings, find the expected number of trials needed to complete the string.
1. 001
2. 010
Solution
Let $q = 1 - p$.
1. For the word 001, gambler $N - 2$ wins $\frac{1}{q^2 p}$ on her three bets. Gambler $N - 1$ makes two bets, winning the first but losing the second. Gambler $N$ loses her first (and only) bet. Hence $\nu(001) = \frac{1}{q^2 p}$
2. For the word 010, gambler $N - 2$ wins $\frac{1}{q^2 p}$ on her three bets as before. Gambler $N - 1$ loses his first bet. Gambler $N$ wins $1 / q$ on his first (and only) bet. So $\nu(010) = \frac{1}{q^2 p} + \frac{1}{q}$
The difference between the two words is that the word in (b) has a prefix (a proper string at the beginning of the word) that is also a suffix (a proper string at the end of the word). The word in (a) has no such prefix. Thus we are led naturally to the following dichotomy:
Suppose that $\bs a$ is a finite word from the alphabet $S$. If no proper prefix of $\bs a$ is also a suffix, then $\bs a$ is simple. Otherwise, $\bs a$ is compound.
Here is the main result, which of course is the same as when the problem was solved using renewal theory.
Suppose that $\bs a$ is a finite word in the alphabet $S$.
1. If $\bs a$ is simple then $\nu(\bs a) = 1 / f(\bs a)$.
2. If $\bs a$ is compound, then $\nu(\bs a) = 1 / f(\bs a) + \nu(\bs b)$ where $\bs b$ is the longest word that is both a prefix and a suffix of $\bs a$.
Proof
The ingredients are in place from our previous discussion. Suppose that $\bs a$ has length $k \in \N_+$.
1. If $\bs a$ is simple, only player $N - k + 1$ wins, and she wins $1 / f(\bs a)$.
2. Suppose $\bs a$ is compound and $\bs b$ is the longest proper prefix-suffix. Player $N - k + 1$ wins $1 / f(\bs a)$ as always. The winnings of players $N - k + 2, \ldots, N$ are the same as the winnings of a new sequence of gamblers playing a new sequence of trials with the goal of reaching word $\bs b$.
For a compound word, we can use (b) to reduce the computation to simple words.
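The dichotomy translates directly into a recursive algorithm. Here is a sketch in Python (the alphabet and letter probabilities in the example call are our own, for illustration):

```python
from functools import reduce

def nu(word, f):
    """Expected number of trials to first complete `word`, where f maps each
    letter to its probability: nu(a) = 1/f(a) if a is simple, and otherwise
    1/f(a) + nu(b) for the longest proper prefix b that is also a suffix."""
    if not word:
        return 0.0
    prob = reduce(lambda acc, c: acc * f[c], word, 1.0)
    for j in range(len(word) - 1, 0, -1):   # longest proper prefix-suffix
        if word[:j] == word[-j:]:
            return 1.0 / prob + nu(word[:j], f)
    return 1.0 / prob                       # the word is simple

f = {'0': 0.5, '1': 0.5}                    # fair Bernoulli trials
print(nu('001', f), nu('010', f))           # 8.0 and 10.0
print(nu('1011011', f))                     # 128 + 16 + 2 = 146.0
```

With $p = q = \frac{1}{2}$, the values agree with the formulas in the surrounding exercises.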
Consider Bernoulli trials with success probability $p \in (0, 1)$. Find the expected number of trials until each of the following strings is completed.
1. $1011011$
2. $1 1 \cdots 1$ ($k$ times)
Solutions
Again, let $q = 1 - p$.
1. $\nu(1011011) = \frac{1}{p^5 q^2} + \nu(1011) = \frac{1}{p^5 q^2} + \frac{1}{p^3 q} + \nu(1) = \frac{1}{p^5 q^2} + \frac{1}{p^3 q} + \frac{1}{p}$
2. Let $\bs{1}_j$ denote a string of $j$ 1s for $j \in \N_+$. If $k \ge 2$ then $\nu(\bs{1}_k) = 1 / p^k + \nu(\bs{1}_{k-1})$. Hence $\nu(\bs{1}_k) = \sum_{j=1}^k \frac{1}{p^j}$
Recall that an ace-six flat die is a six-sided die for which faces 1 and 6 have probability $\frac{1}{4}$ each while faces 2, 3, 4, and 5 have probability $\frac{1}{8}$ each. Ace-six flat dice are sometimes used by gamblers to cheat.
Suppose that an ace-six flat die is thrown repeatedly. Find the expected number of throws until the pattern $6165616$ occurs.
Solution
From our main theorem, \begin{align*} \nu(6165616) & = \frac{1}{f(6165616)} + \nu(616) = \frac{1}{f(6165616)} + \frac{1}{f(616)} + \nu(6) \\ & = \frac{1}{f(6165616)} + \frac{1}{f(616)} + \frac{1}{f(6)} = \frac{1}{(1/4)^6(1/8)} + \frac{1}{(1/4)^3} + \frac{1}{1/4} = 32\,836 \end{align*}
Suppose that a monkey types randomly on a keyboard that has the 26 lower-case letter keys and the space key (so 27 keys). Find the expected number of keystrokes until the monkey produces each of the following phrases:
1. it was the best of times
2. to be or not to be
Solution
1. $27^{24} \approx 2.258 \times 10^{34}$
2. $27^5 + 27^{18} \approx 5.815 \times 10^{25}$
The Secretary Problem
The secretary problem was considered in the chapter on Finite Sampling Models. In this discussion we will solve a variation of the problem using martingales. Suppose that there are $n \in \N_+$ candidates for a job, or perhaps potential marriage partners. The candidates arrive sequentially in random order and are interviewed. We measure the quality of each candidate by a number in the interval $[0, 1]$. Our goal is to select the very best candidate, but once a candidate is rejected, she cannot be recalled. Mathematically, our assumptions are that the sequence of candidate variables $\bs X = (X_1, X_2, \ldots, X_n)$ is independent and that each is uniformly distributed on the interval $[0, 1]$ (and so has the standard uniform distribution). Our goal is to select a stopping time $\tau$ with respect to $\bs X$ that maximizes $\E(X_\tau)$, the expected value of the chosen candidate. The following sequence will play a critical role as a sequence of thresholds.
Define the sequence $\bs a = (a_k: k \in \N)$ by $a_0 = 0$ and $a_{k+1} = \frac{1}{2}(1 + a_k^2)$ for $k \in \N$. Then
1. $a_k \lt 1$ for $k \in \N$.
2. $a_k \lt a_{k+1}$ for $k \in \N$.
3. $a_k \to 1$ as $k \to \infty$.
4. If $X$ is uniformly distributed on $[0, 1]$ then $\E(X \vee a_k) = a_{k+1}$ for $k \in \N$.
Proof
1. Note that $a_1 = \frac{1}{2} \lt 1$. Suppose that $a_k \lt 1$ for some $k \in \N_+$. Then $a_{k+1} = \frac{1}{2}(1 + a_k^2) \lt \frac{1}{2}(1 + 1) = 1$
2. Note that $0 = a_0 \lt a_1 = \frac{1}{2}$. Suppose that $a_k \gt a_{k-1}$ for some $k \in \N_+$. Then $a_{k+1} = \frac{1}{2}(1 + a_k^2) \gt \frac{1}{2}(1 + a_{k-1}^2) = a_k$.
3. Since the sequence is increasing and bounded above, $a_\infty = \lim_{k \to \infty} a_k$ exists. Taking limits in the recursion relation gives $a_\infty = \frac{1}{2}(1 + a_\infty^2)$ or equivalently $(a_\infty - 1)^2 = 0$.
4. For $k \in \N$, $\E(X \vee a_k) = \int_0^1 (x \vee a_k) dx = \int_0^{a_k} a_k \, dx + \int_{a_k}^1 x \, dx = \frac{1}{2}(1 + a_k^2) = a_{k+1}$
Since $a_0 = 0$, all of the terms of the sequence are in $[0, 1)$ by (a). Approximations of the terms $a_0$ through $a_{10}$ are $(0, 0.5, 0.625, 0.695, 0.742, 0.775, 0.800, 0.820, 0.836, 0.850, 0.861, \ldots)$ Property (d) gives some indication of why the sequence is important for the secretary problem. At any rate, the next theorem gives the solution. To simplify the notation, let $\N_n = \{0, 1, \ldots, n\}$ and $\N_n^+ = \{1, 2, \ldots, n\}$.
The stopping time $\tau = \inf\left\{k \in \N_n^+: X_k \gt a_{n-k}\right\}$ is optimal for the secretary problem with $n$ candidates. The optimal value is $\E(X_\tau) = a_n$.
Proof
Let $\mathfrak F = \{\mathscr{F}_k: k \in \N_n^+\}$ be the natural filtration of $\bs X$, and suppose that $\rho$ is a stopping time for $\mathfrak F$. Define $\bs Y = \{Y_k: k \in \N_n\}$ by $Y_0 = a_n$ and $Y_k = X_{\rho \wedge k} \vee a_{n-k}$ for $k \in \N_n^+$. We will show that $\bs Y$ is a super-martingale with respect to $\mathfrak F$. First, on the event $\rho \le k - 1$, $\E(Y_k \mid \mathscr{F}_{k-1}) = \E[(X_\rho \vee a_{n-k}) \mid \mathscr{F}_{k-1}] = X_\rho \vee a_{n-k} \le X_\rho \vee a_{n - k + 1} = Y_{k-1}$ where we have used the fact that $X_\rho \bs{1}(\rho \le k - 1)$ is measurable with respect to $\mathscr{F}_{k-1}$ and the fact that the sequence $\bs a$ is increasing. On the event $\rho \gt k - 1$, $\E(Y_k \mid \mathscr{F}_{k-1}) = \E(X_k \vee a_{n-k} \mid \mathscr{F}_{k-1}) = \E(X_k \vee a_{n-k}) = a_{n - k + 1} \le Y_{k - 1}$ where we have used the fact that $X_k$ and $\mathscr{F}_{k-1}$ are independent, and part (d) of the previous result. Since $\bs Y$ is a super-martingale and $\rho$ is bounded, the optional stopping theorem applies and we have $\E(X_\rho) \le \E(X_\rho \vee a_{n - \rho}) = \E(Y_\rho) \le \E(Y_0) = a_n$ so $a_n$ is an upper bound on the expected value of the candidate chosen by the stopping time $\rho$.
Next, we will show that in the special case that $\rho = \tau$, the process $\bs Y$ is a martingale. On the event $\tau \le k - 1$ we have $\E(Y_k \mid \mathscr{F}_{k-1}) = X_\tau \vee a_{n-k}$ as before. But by definition, $X_\tau \ge a_{n - \tau} \ge a_{n - k + 1} \ge a_{n - k}$ so on this event, $\E(Y_k \mid \mathscr{F}_{k-1}) = X_\tau = X_\tau \vee a_{n - k + 1} = Y_{k-1}$ On the event $\tau \gt k - 1$ we have $\E(Y_k \mid \mathscr{F}_{k-1}) = a_{n-k+1}$ as before. But on this event, $Y_{k-1} = a_{n-k+1}$. Now since $\bs Y$ is a martingale and $\tau$ is bounded, the optional stopping theorem applies and we have $\E(X_\tau) = \E(X_\tau \vee a_{n-\tau}) = \E(Y_\tau) = \E(Y_0) = a_n$
Here is a specific example:
For $n = 5$, the decision rule is as follows:
1. Select candidate 1 if $X_1 \gt 0.742$; otherwise,
2. select candidate 2 if $X_2 \gt 0.695$; otherwise,
3. select candidate 3 if $X_3 \gt 0.625$; otherwise,
4. select candidate 4 if $X_4 \gt 0.5$; otherwise,
5. select candidate 5.
The expected value of our chosen candidate is 0.775.
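A simulation sketch of this rule (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(6)

n, paths = 5, 100_000
a = [0.0]                            # a_0 = 0, a_{k+1} = (1 + a_k^2) / 2
for _ in range(n):
    a.append((1 + a[-1] ** 2) / 2)

X = rng.random((paths, n))           # candidate qualities, uniform on [0, 1]
chosen = np.empty(paths)
for i in range(paths):
    for k in range(1, n + 1):        # stop at the first k with X_k > a_{n-k}
        if X[i, k - 1] > a[n - k] or k == n:
            chosen[i] = X[i, k - 1]
            break

print(chosen.mean(), a[n])           # both should be close to 0.775
```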
In our original version of the secretary problem, we could only observe the relative ranks of the candidates, and our goal was to maximize the probability of picking the best candidate. With $n = 5$, the optimal strategy is to let the first two candidates go by and then pick the first candidate after that who is better than all previous candidates, if she exists. If she does not exist, of course, we must select candidate 5. The probability of picking the best candidate is 0.433.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\D}{\mathbb{D}}$ $\newcommand{\bs}{\boldsymbol}$
Basic Theory
In this section, we will study a number of interesting inequalities associated with martingales and their sub-martingale and super-martingale cousins. These turn out to be very important both for theoretical reasons and for applications. You may need to review infimums and supremums.
Basic Assumptions
As in the Introduction, we start with a stochastic process $\bs{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\bs{X}$ is adapted to $\mathfrak{F}$. So $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$ and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$. We assume that $\E\left(\left|X_t\right|\right) \lt \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$. Finally, in continuous time where $T = [0, \infty)$, we make the standard assumptions that $t \mapsto X_t$ is right continuous and has left limits, and that the filtration $\mathfrak F$ is right continuous and complete.
Maximal Inequalities
For motivation, let's review a modified version of Markov's inequality, named for Andrei Markov.
If $X$ is a real-valued random variable then $\P(X \ge x) \le \frac{1}{x} \E(X ; X \ge x), \quad x \in (0, \infty)$
Proof
The modified version has essentially the same elegant proof as the original. Clearly $x \bs{1}(X \ge x) \le X \bs{1}(X \ge x), \quad x \in (0, \infty)$ Taking expected values through the inequality gives $x \P(X \ge x) \le \E(X; X \ge x)$. Dividing both sides by $x$ gives the result (and it is at this point that we need $x \gt 0$).
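As a quick numerical illustration, not part of the text, here is a minimal check of the modified inequality in Python; the exponential distribution is an arbitrary choice.

```python
import random

random.seed(1)
sample = [random.expovariate(1.0) for _ in range(200_000)]
x = 2.0
prob = sum(1 for s in sample if s >= x) / len(sample)       # P(X >= x)
bound = sum(s for s in sample if s >= x) / len(sample) / x  # E(X; X >= x) / x
print(prob, bound)  # prob should not exceed bound, up to sampling error
```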
So Markov's inequality gives an upper bound on the probability that $X$ is at least a given positive value $x$, in terms of a moment of $X$. Now let's return to our stochastic process $\bs X = \{X_t: t \in T\}$. To simplify the notation, let $T_t = \{s \in T: s \le t\}$ for $t \in T$. Here is the main definition:
For the process $\bs X$, define the corresponding maximal process $\bs U = \{U_t: t \in T\}$ by $U_t = \sup\{X_s: s \in T_t\}, \quad t \in T$
Clearly, the maximal process is increasing, so that if $s, \, t \in T$ with $s \le t$ then $U_s \le U_t$. A trivial application of Markov's inequality above would give $\P(U_t \ge x) \le \frac{1}{x} \E(U_t; U_t \ge x), \quad x \gt 0$ But when $\bs X$ is a sub-martingale, the following theorem gives a much stronger result by replacing the first occurrence of $U_t$ on the right with $X_t$. The theorem is known as Doob's sub-martingale maximal inequality (or more simply as Doob's inequality), named once again for Joseph Doob, who did much of the pioneering work on martingales. A sub-martingale has an increasing property of sorts in the sense that if $s, \, t \in T$ with $s \le t$ then $\E(X_t \mid \mathscr{F}_s) \ge X_s$, so it's perhaps not entirely surprising that such a bound is possible.
Suppose that $\bs X$ is a sub-martingale. For $t \in T$, let $U_t = \sup\{X_s: s \in T_t\}$. Then $\P(U_t \ge x) \le \frac{1}{x} \E(X_t; U_t \ge x), \quad x \in (0, \infty)$
Proof in the discrete time
So $T = \N$ and the maximal process is given by $U_n = \max\left\{X_k: k \in \N_n\right\}$ for $n \in \N$. Let $x \in (0, \infty)$, and define $\tau_x = \min\{k \in \N: X_k \ge x\}$ where as usual, $\min(\emptyset) = \infty$. The random time $\tau_x$ is a stopping time relative to $\mathfrak F$. Moreover, the processes $\{U_n: n \in \N\}$ and $\{\tau_x: x \in (0, \infty)\}$ are inverses in the sense that for $n \in \N$ and $x \in (0, \infty)$, $U_n \ge x \text{ if and only if } \tau_x \le n$ We have seen this type of duality before, in the Poisson process and more generally in renewal processes. Let $n \in \N$. First note that $\E\left(X_{\tau_x \wedge n}\right) = \E\left(X_{\tau_x \wedge n}; \tau_x \le n\right) + \E\left(X_{\tau_x \wedge n}; \tau_x \gt n\right)$ If $\tau_x \le n$ then $X_{\tau_x \wedge n} = X_{\tau_x} \ge x$. On the other hand if $\tau_x \gt n$ then $X_{\tau_x \wedge n} = X_n$. So we have $\E\left(X_{\tau_x \wedge n}\right) \ge x \P(\tau_x \le n) + \E(X_n; \tau_x \gt n) = x \P(U_n \ge x) + \E(X_n; \tau_x \gt n)$ Similarly, $\E(X_n) = \E(X_n; \tau_x \le n) + \E(X_n; \tau_x \gt n) = \E(X_n; U_n \ge x) + \E(X_n; \tau_x \gt n)$ But by the optional stopping theorem, $\E\left(X_{\tau_x \wedge n}\right) \le \E(X_n)$. Hence we have $x \P(U_n \ge x) + \E(X_n; \tau_x \gt n) \le \E(X_n; U_n \ge x) + \E(X_n; \tau_x \gt n)$ Subtracting the common term and then dividing both sides by $x$ gives the result.
Proof in continuous time
For $k \in \N$, let $\D^+_k = \{j / 2^k: j \in \N\}$ denote the set of nonnegative dyadic rationals (or binary rationals) of rank $k$ or less. For $t \in [0, \infty)$ let $T^k_t = (\D^+_k \cap [0, t]) \cup \{t\}$, so that $T^k_t$ is the finite set of such dyadic rationals that are less than $t$, with $t$ added to the set. Note that $T^k_t$ has an ordered enumeration, so $\bs{X}^k = \{X_s: s \in T^k_t\}$ is a discrete-time sub-martingale for each $k \in \N$. Let $U^k_t = \sup\{X_s: s \in T^k_t\}$ for $k \in \N$. Note that $T^j_t \subset T^k_t \subset [0, t]$ for $t \in [0, \infty)$ and for $j, \, k \in \N$ with $j \lt k$ and therefore $U^j_t \le U^k_t \le U_t$. It follows that for $x \in (0, \infty)$, $\left\{U^j_t \ge x\right\} \subseteq \left\{U^k_t \ge x\right\} \subset \{U_t \ge x\}$ The set $\D^+$ of all nonnegative dyadic rationals is dense in $[0, \infty)$ and so since $\bs X$ is right continuous and has left limits, it follows that if $U_t \ge x$ then $U^k_t \ge x$ for some $k \in \N$. That is, we have $\{U_t \ge x\} = \bigcup_{k=0}^\infty \left\{U^k_t \ge x\right\}$ The maximal inequality applies to the discrete-time sub-martingale $\bs{X}^k$ and so $\P(U^k_t \ge x) \le \frac{1}{x} \E(X_t; U^k_t \ge x)$ for each $k \in \N$. By the monotone convergence theorem, the left side converges to $\P(U_t \ge x)$ as $k \to \infty$ and the right side converges to $\E(X_t; U_t \ge x)$ as $k \to \infty$.
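Here is a small Monte Carlo illustration of the inequality in discrete time, a sketch of ours rather than part of the text: we use $X_k = S_k^2$ where $S$ is a simple symmetric random walk, and the square of a martingale is a sub-martingale, so the theorem applies.

```python
import random

def run(n):
    # simulate X_k = S_k^2 for k = 1, ..., n and track the maximum
    s, x, u = 0, 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        x = s * s
        u = max(u, x)
    return u, x  # (maximal value U_n, terminal value X_n)

random.seed(2)
n, x0, reps = 50, 40.0, 100_000
runs = [run(n) for _ in range(reps)]
prob = sum(1 for u, _ in runs if u >= x0) / reps          # P(U_n >= x0)
bound = sum(xn for u, xn in runs if u >= x0) / reps / x0  # E(X_n; U_n >= x0) / x0
print(prob, bound)  # Doob's inequality: prob <= bound
```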
There are a number of simple corollaries of the maximal inequality. For the first, recall that the positive part of $x \in \R$ is $x^+ = x \vee 0$, so that $x^+ = x$ if $x \gt 0$ and $x^+ = 0$ if $x \le 0$.
Suppose that $\bs X$ is a sub-martingale. For $t \in T$, let $V_t = \sup\{X_s^+: s \in T_t\}$. Then $\P(V_t \ge x) \le \frac{1}{x} \E(X_t^+; V_t \ge x), \quad x \in (0, \infty)$
Proof
Recall that since $\bs X$ is a sub-martingale and $x \mapsto x^+$ is increasing and convex, $\bs X^+ = \{X_t^+: t \in T\}$ is also a sub-martingale. Hence the result follows from the general maximal inequality for sub-martingales.
As a further simple corollary, note that $\P(V_t \ge x) \le \frac{1}{x} \E(X_t^+), \quad x \in (0, \infty)$ This is sometimes how the maximal inequality is given in the literature.
Suppose that $\bs X$ is a martingale. For $t \in T$, let $W_t = \sup\{|X_s|: s \in T_t\}$. Then $\P(W_t \ge x) \le \frac{1}{x} \E(|X_t|; W_t \ge x), \quad x \in (0, \infty)$
Proof
Recall that since $\bs X$ is a martingale, and $x \mapsto |x|$ is convex, $|\bs X| = \{|X_t|: t \in T\}$ is a sub-martingale. Hence the result follows from the general maximal inequality for sub-martingales.
Once again, a further simple corollary is $\P(W_t \ge x) \le \frac{1}{x} \E(|X_t|), \quad x \in (0, \infty)$ Next recall that for $k \in (1, \infty)$, the $k$-norm of a real-valued random variable $X$ is $\|X\|_k = \left[\E(|X|^k)\right]^{1/k}$, and the vector space $\mathscr{L}_k$ consists of all real-valued random variables for which this norm is finite. The following theorem is the norm version of Doob's maximal inequality.
Suppose again that $\bs X$ is a martingale. For $t \in T$, let $W_t = \sup\{|X_s|: s \in T_t\}$. Then for $k \gt 1$, $\|W_t\|_k \le \frac{k}{k - 1} \|X_t\|_k$
Proof
Fix $t \in T$. If $\E(|X_t|^k) = \infty$, the inequality trivially holds, so assume that $\E(|X_t|^k) \lt \infty$, and thus that $X_t \in \mathscr{L}_k$. The proof relies fundamentally on Hölder's inequality, and for that inequality to work, we need to truncate the variable $W_t$ and consider instead the bounded random variable $W_t \wedge c$ where $c \in (0, \infty)$. First we need to show that $\P(W_t \wedge c \ge x) \le \frac{1}{x} \E(|X_t|; W_t \wedge c \ge x), \quad x \in (0, \infty)$ If $c \lt x$, both sides are 0. If $c \ge x$, $\{W_t \wedge c \ge x\} = \{W_t \ge x\}$ and so from the maximal inequality above, $\P(W_t \wedge c \ge x) = \P(W_t \ge x) \le \frac{1}{x} \E(|X_t|; W_t \ge x) = \frac{1}{x} \E(|X_t|; W_t \wedge c \ge x)$ Next recall that $\|W_t \wedge c\|_k^k = \E[(W_t \wedge c)^k] = \int_0^\infty k x^{k-1} \P(W_t \wedge c \ge x) dx$ Applying the inequality gives $\E[(W_t \wedge c)^k] \le \int_0^\infty k x^{k-2} \E[|X_t|; W_t \wedge c \ge x] dx$ By Fubini's theorem we can interchange the expected value and the integral which gives $\E[(W_t \wedge c)^k] \le \E\left[\int_0^{W_t \wedge c} k x^{k-2} |X_t| dx\right] = \frac{k}{k - 1} \E[|X_t| (W_t \wedge c)^{k-1}]$ But $X_t \in \mathscr{L}_k$ and $(W_t \wedge c)^{k-1} \in \mathscr{L}_j$ where $j = k / (k - 1)$ is the exponent conjugate to $k$. So an application of Hölder's inequality gives $\|W_t \wedge c\|_k^k \le \frac{k}{k - 1}\|X_t\|_k \, \|(W_t \wedge c)^{k-1}\|_j = \frac{k}{k - 1}\|X_t\|_k \|W_t \wedge c\|_k^{k-1}$ where we have used the simple fact that $\|(W_t \wedge c)^{k-1}\|_j = \|W_t \wedge c\|_k^{k-1}$. Dividing by this factor gives $\|W_t \wedge c\|_k \le \frac{k}{k - 1} \|X_t\|_k$ Finally, $\|W_t \wedge c\|_k \uparrow \|W_t\|_k$ as $c \to \infty$ by the monotone convergence theorem. So letting $c \to \infty$ in the last displayed equation gives $\|W_t\|_k \le \frac{k}{k - 1} \|X_t\|_k$
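A numerical check of the norm inequality, again a sketch of ours: for the simple symmetric random walk, which is a martingale, and $k = 2$, the inequality reads $\|W_n\|_2 \le 2 \|X_n\|_2$.

```python
import random

random.seed(3)
n, reps = 100, 50_000
w2, x2 = 0.0, 0.0
for _ in range(reps):
    s, w = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        w = max(w, abs(s))   # running maximum of |S_k|
    w2 += w * w
    x2 += s * s
lhs = (w2 / reps) ** 0.5      # estimate of ||W_n||_2
rhs = 2 * (x2 / reps) ** 0.5  # 2 ||X_n||_2; here k = 2 so k / (k - 1) = 2
print(lhs, rhs)               # lhs <= rhs
```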
Once again, $\bs W = \{W_t: t \in T\}$ is the maximal process associated with $|\bs X| = \{\left|X_t\right|: t \in T\}$. As noted in the proof, $j = k / (k - 1)$ is the exponent conjugate to $k$, satisfying $1 / j + 1 / k = 1$. So this version of the maximal inequality states that the $k$ norm of the maximum of the martingale $\bs X$ on $T_t$ is bounded by $j$ times the $k$ norm of $X_t$, where $j$ and $k$ are conjugate exponents. Stated just in terms of expected value, rather than norms, the $\mathscr{L}_k$ maximal inequality is $\E\left(\left|W_t\right|^k\right) \le \left(\frac{k}{k - 1}\right)^k \E\left(\left|X_t\right|^k\right)$ Our final result in this discussion is a variation of the maximal inequality for super-martingales.
Suppose that $\bs X = \{X_t: t \in T\}$ is a nonnegative super-martingale, and let $U_\infty = \sup\{X_t: t \in T\}$. Then $\P(U_\infty \ge x) \le \frac{1}{x} \E(X_0), \quad x \in (0, \infty)$
Proof
Fix $x \in (0, \infty)$ and let $\tau = \inf\{s \in T: X_s \ge x\}$, a stopping time. For $t \in T$, the time $\tau \wedge t$ is a bounded stopping time, and $X_\tau \ge x$ on the event $\{\tau \le t\}$ (using right continuity in continuous time). Since $\bs X$ is a nonnegative super-martingale, the optional stopping theorem gives $\E(X_0) \ge \E\left(X_{\tau \wedge t}\right) \ge \E\left(X_\tau; \tau \le t\right) \ge x \P(\tau \le t) \ge x \P(U_t \gt x)$ since $\{U_t \gt x\} \subseteq \{\tau \le t\}$. Replacing $x$ by $x - \epsilon$ for $\epsilon \in (0, x)$ and letting $\epsilon \downarrow 0$ then gives $\P(U_t \ge x) \le \frac{1}{x} \E(X_0), \quad x \in (0, \infty)$ Next note that $U_t \uparrow U_\infty$ as $t \to \infty$. Let $x \in (0, \infty)$ and $\epsilon \in (0, x)$. If $U_\infty \ge x$ then $U_t \ge x - \epsilon$ for sufficiently large $t \in T$. Hence $\{U_\infty \ge x\} \subseteq \bigcup_{k=1}^\infty \{U_k \ge x - \epsilon\}$ Using the continuity theorem for increasing events, and our result above we have $\P(U_\infty \ge x) \le \lim_{k \to \infty} \P(U_k \ge x - \epsilon) \le \frac{1}{x - \epsilon} \E(X_0)$ Since this holds for all $\epsilon \in (0, x)$, it follows that $\P(U_\infty \ge x) \le \frac{1}{x} \E(X_0)$.
The Up-Crossing Inequality
The up-crossing inequality gives a bound on how much a sub-martingale (or super-martingale) can oscillate, and is the main tool in the martingale convergence theorems that will be studied in the next section. It should come as no surprise by now that the inequality is due to Joseph Doob. We start with the discrete-time case.
Suppose that $\bs x = (x_n: n \in \N)$ is a sequence of real numbers, and that $a, \, b \in \R$ with $a \lt b$. Define $t_0(\bs x) = 0$ and then recursively define \begin{align*} s_{k+1}(\bs x) & = \inf\{n \in \N: n \ge t_k(\bs x), x_n \le a\}, \quad k \in \N \\ t_{k+1}(\bs x) & = \inf\{n \in \N: n \ge s_{k+1}(\bs x), x_n \ge b\}, \quad k \in \N \end{align*}
1. The number of up-crossings of the interval $[a, b]$ by the sequence $\bs x$ up to time $n \in \N$ is $u_n(a, b, \bs x) = \sup\{k \in \N: t_k(\bs x) \le n\}$
2. The total number of up-crossings of the interval $[a, b]$ by the sequence $\bs x$ is $u_\infty(a, b, \bs x) = \sup\{k \in \N: t_k(\bs x) \lt \infty\}$
Details
As usual, we define $\inf(\emptyset) = \infty$. Note that if $t_k(\bs x) \lt \infty$ for $k \in \N_+$, then $(x_n: n = s_k(\bs x), \ldots, t_k(\bs x))$ is the $k$th up-crossing of the interval $[a, b]$ by the sequence $\bs x$.
So informally, as the name suggests, $u_n(a, b, \bs x)$ is the number of times that the sequence $(x_0, x_1, \ldots, x_n)$ goes from a value at or below $a$ to one at or above $b$, and $u_\infty(a, b, \bs x)$ is the number of times that the entire sequence $\bs x$ does so. Here are a few simple properties:
Suppose again that $\bs x = (x_n: n \in \N)$ is a sequence of real numbers and that $a, \, b \in \R$ with $a \lt b$.
1. $u_n(a, b, \bs x)$ is increasing in $n \in \N$.
2. $u_n(a, b, \bs x) \to u_\infty(a, b, \bs x)$ as $n \to \infty$.
3. If $c, \, d \in \R$ with $a \lt c \lt d \lt b$ then $u_n(c, d, \bs x) \ge u_n(a, b, \bs x)$ for $n \in \N$, and $u_\infty(c, d, \bs x) \ge u_\infty(a, b, \bs x)$.
Proof
1. Note that $\{k \in \N: t_k(\bs x) \le n\} \subseteq \{k \in \N: t_k(\bs x) \le n + 1\}$.
2. Note that $\bigcup_{n=0}^\infty \{k \in \N: t_k(\bs x) \le n\} = \{k \in \N: t_k(\bs x) \lt \infty\}$.
3. Every up-crossing of $[a, b]$ is also an up-crossing of $[c, d]$.
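The up-crossing counts are simple to compute. The following Python sketch (ours, for illustration) counts the up-crossings of $[a, b]$ by a finite sequence, alternately waiting for a value at or below $a$ and then for a value at or above $b$, exactly as in the definition above.

```python
def upcrossings(x, a, b):
    # count up-crossings of [a, b] by the finite sequence x
    count, below = 0, False
    for v in x:
        if not below and v <= a:
            below = True      # reached s_k: a value at or below a
        elif below and v >= b:
            below = False     # reached t_k: the up-crossing is complete
            count += 1
    return count

print(upcrossings([0, 2, -1, 3, 0, 4], 0, 2))  # 3 up-crossings of [0, 2]
```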
The importance of the definitions is found in the following theorem. Recall that $\R^* = \R \cup \{-\infty, \infty\}$ is the set of extended real numbers, and $\Q$ is the set of rational real numbers.
Suppose again that $\bs x = (x_n: n \in \N)$ is a sequence of real numbers. Then $\lim_{n \to \infty} x_n$ exists in $\R^*$ if and only if $u_\infty(a, b, \bs x) \lt \infty$ for every $a, \, b \in \Q$ with $a \lt b$.
Proof
We prove the contrapositive. Note that the following statements are equivalent:
1. $\lim_{n \to \infty} x_n$ does not exist in $\R^*$.
2. $\liminf_{n \to \infty} x_n \lt \limsup_{n \to \infty} x_n$.
3. There exists $a, \, b \in \Q$ with $a \lt b$ and with $x_n \le a$ for infinitely many $n \in \N$ and $x_n \ge b$ for infinitely many $n \in \N$.
4. There exists $a, \, b \in \Q$ with $a \lt b$ and $u_\infty(a, b, \bs x) = \infty$.
Clearly the theorem is true with $\Q$ replaced with $\R$, but the countability of $\Q$ will be important in the martingale convergence theorem. As a simple corollary, if $\bs x$ is bounded and $u_\infty(a, b, \bs x) \lt \infty$ for every $a, \, b \in \Q$ with $a \lt b$, then $\bs x$ converges in $\R$. The up-crossing inequality for a discrete-time martingale $\bs X$ gives an upper bound on the expected number of up-crossings of $\bs X$ up to time $n \in \N$ in terms of a moment of $X_n$.
Suppose that $\bs X = \{X_n: n \in \N\}$ satisfies the basic assumptions with respect to the filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$, and let $a, \, b \in \R$ with $a \lt b$. Let $U_n = u_n(a, b, \bs X)$, the random number of up-crossings of $[a, b]$ by $\bs X$ up to time $n \in \N$.
1. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(U_n) \le \frac{1}{b - a} \E[(X_n - a)^-] \le \frac{1}{b - a}\left[\E(X_n^-) + |a|\right] \le \frac{1}{b - a} \left[\E(|X_n|) + |a|\right], \quad n \in \N$
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(U_n) \le \frac{1}{b - a} \E[(X_n - a)^+] \le \frac{1}{b - a}\left[\E(X_n^+) + |a|\right] \le \frac{1}{b - a}\left[\E(|X_n|) + |a|\right], \quad n \in \N$
Proof
In the context of the up-crossing definition above, let $\sigma_k = s_k(\bs X)$ and $\tau_k = t_k(\bs X)$. These are the random times that define the up-crossings of $\bs X$. Let $Y_k = X_{\tau_k \wedge n} - X_{\sigma_k \wedge n}$ and then define $Z_n = \sum_{k=1}^n Y_k$. To understand the sum, let's take cases for the $k$th term $Y_k$:
• If $\tau_k \le n$ then $Y_k = X_{\tau_k} - X_{\sigma_k} \ge b - a$. By definition, the first $U_n$ terms are of this form.
• If $\sigma_k \le n \lt \tau_k$ then $Y_k = X_n - X_{\sigma_k} \ge X_n - a$. There is at most one such term, with index $k = U_n + 1$.
• If $\sigma_k \gt n$ then $Y_k = X_n - X_n = 0$.
Hence $Z_n \ge (b - a)U_n + (X_n - a) \bs{1} \left(\sigma_{U_n + 1} \le n\right)$ and so $(b - a)U_n \le Z_n - (X_n - a) \bs{1} \left(\sigma_{U_n + 1} \le n\right)$ Next note that $\sigma_k \wedge n$ and $\tau_k \wedge n$ are bounded stopping times and of course $\sigma_k \wedge n \le \tau_k \wedge n$.
1. If $\bs X$ is a super-martingale, it follows from the optional stopping theorem that $\E(Y_k) = \E\left(X_{\tau_k \wedge n}\right) - \E\left(X_{\sigma_k \wedge n}\right) \le 0$ and therefore $\E(Z_n) \le 0$. Finally, $-(X_n - a) \bs{1} \left(\sigma_{U_n + 1} \le n\right) \le (X_n - a)^-$. Taking expected values gives $(b - a) \E(U_n) \le \E(Z_n) + \E[(X_n - a)^-] \le \E[(X_n - a)^-]$ The remaining parts of the inequality follow since $(x - a)^- \le x^- + |a| \le |x| + |a|$ for $x \in \R$.
2. Suppose that $\bs X$ is a sub-martingale. The up-crossings of $[a, b]$ by $\bs X$ are the same as the up-crossings of $[0, b - a]$ by the nonnegative sub-martingale $\bs{X}^\prime = \{(X_n - a)^+: n \in \N\}$, so we can apply the construction above to $\bs{X}^\prime$ with the interval $[0, b - a]$. In this setting the term $(X^\prime_n - 0) \bs{1}\left(\sigma_{U_n + 1} \le n\right)$ is nonnegative, so $Z_n \ge (b - a) U_n$. On the other hand, $X^\prime_n - X^\prime_0 = Z_n + Z^\prime_n$ where $\bs{Z}^\prime$ is the transform of $\bs{X}^\prime$ by the nonnegative predictable process $\bs 1 - \bs I$ (see the additional details below), so $\E(Z^\prime_n) \ge 0$ and hence $\E(Z_n) \le \E(X^\prime_n) - \E(X^\prime_0) \le \E[(X_n - a)^+]$. Therefore $(b - a) \E(U_n) \le \E[(X_n - a)^+]$, and the remaining parts of the inequality follow since $(x - a)^+ \le x^+ + |a| \le |x| + |a|$ for $x \in \R$.
Additional details
The process $\bs Z = \{Z_n: n \in \N\}$ in the proof can be viewed as a transform of $\bs X = \{X_n: n \in \N\}$ by a predictable process. Specifically, for $n \in \N_+$, let $I_n = 1$ if $\sigma_k \lt n \le \tau_k$ for some $k \in \N$, and let $I_n = 0$ otherwise. Since $\sigma_k$ and $\tau_k$ are stopping times, note that $\{I_n = 1\} \in \mathscr{F}_{n-1}$ for $n \in \N_+$. Hence the process $\bs I = \{I_n: n \in \N_+\}$ is predictable with respect to $\mathfrak F$. Moreover, the transform of $\bs X$ by $\bs I$ is $(\bs I \cdot \bs X)_n = \sum_{j=1}^n I_j (X_j - X_{j-1}) = \sum_{k=1}^n \left(X_{\tau_k \wedge n} - X_{\sigma_k \wedge n}\right) = Z_n, \quad n \in \N$ Since $\bs I$ is a nonnegative process, if $\bs X$ is a martingale (sub-martingale, super-martingale), then $\bs I \cdot \bs X$ is also a martingale (sub-martingale, super-martingale).
Of course if $\bs X$ is a martingale with respect to $\mathfrak F$ then both inequalities apply. In continuous time, as usual, the concepts are more complicated and technical.
Suppose that $\bs x: [0, \infty) \to \R$ and that $a, \, b \in \R$ with $a \lt b$.
1. If $I \subset [0, \infty)$ is finite, define $t^I_0(\bs x) = 0$ and then recursively define \begin{align*} s^I_{k+1}(\bs x) & = \inf\left\{t \in I: t \ge t^I_k(\bs x), x_t \le a\right\}, \quad k \in \N \\ t^I_{k+1}(\bs x) & = \inf\left\{t \in I: t \ge s^I_{k+1}(\bs x), x_t \ge b\right\}, \quad k \in \N \end{align*} The number of up-crossings of the interval $[a, b]$ by the function $\bs x$ restricted to $I$ is $u_I(a, b, \bs x) = \sup \left\{k \in \N: t^I_k(\bs x) \lt \infty\right\}$
2. If $I \subseteq [0, \infty)$ is infinite, the number of up-crossings of the interval $[a, b]$ by $\bs x$ restricted to $I$ is $u_I(a, b, \bs x) = \sup\{u_J(a, b, \bs x): J \text{ is finite and } J \subset I\}$
To simplify the notation, we will let $u_t(a, b, \bs x) = u_{[0, t]}(a, b, \bs x)$, the number of up-crossings of $[a, b]$ by $\bs x$ on $[0, t]$, and $u_\infty(a, b, \bs x) = u_{[0, \infty)}(a, b, \bs x)$, the total number of up-crossings of $[a, b]$ by $\bs x$. In continuous time, the definition of up-crossings is built out of finite subsets of $[0, \infty)$ for measurability concerns, which arise when we replace the deterministic function $\bs x$ with a stochastic process $\bs X$. Here are the simple properties that are analogous to our previous ones.
Suppose again that $\bs x: [0, \infty) \to \R$ and that $a, \, b \in \R$ with $a \lt b$.
1. If $I, \, J \subseteq [0, \infty)$ with $I \subseteq J$, then $u_I(a, b, \bs x) \le u_J(a, b, \bs x)$.
2. If $(I_n: n \in \N)$ is an increasing sequence of sets in $[0, \infty)$ and $J = \bigcup_{n=0}^\infty I_n$ then $u_{I_n}(a, b, \bs x) \to u_J(a, b, \bs x)$ as $n \to \infty$.
3. If $c, \, d \in \R$ with $a \lt c \lt d \lt b$ and $I \subset [0, \infty)$ then $u_I(c, d, \bs x) \ge u_I(a, b, \bs x)$.
Proof
1. The result follows easily from the definitions if $I$ is finite (and $J$ either finite or infinite). If $I$ is infinite (and hence so is $J$), note that $\{u_K(a, b, \bs x): K \text{ is finite and } K \subseteq I\} \subseteq \{u_K(a, b, \bs x): K \text{ is finite and } K \subseteq J\}$
2. Since $I_n$ is increasing in $n \in \N$ (in the subset partial order), note that if $K \subset [0, \infty)$ is finite, then $K \subseteq J$ if and only if $K \subseteq I_n$ for some $n \in \N$.
3. Every up-crossing of $[a, b]$ is an up-crossing of $[c, d]$.
The following result is the reason for studying up-crossings in the first place. Note that the definition built from finite sets is sufficient.
Suppose that $\bs x: [0, \infty) \to \R$. Then $\lim_{t \to \infty} x_t$ exists in $\R^*$ if and only if $u_\infty(a, b, \bs x) \lt \infty$ for every $a, \, b \in \Q$ with $a \lt b$.
Proof
As in the discrete-time case, we prove the contrapositive. The proof is almost the same: The following statements are equivalent:
1. $\lim_{t \to \infty} x_t$ does not exist in $\R^*$.
2. $\liminf_{t \to \infty} x_t \lt \limsup_{t \to \infty} x_t$.
3. There exist $a, \, b \in \Q$ with $a \lt b$, and times $s_n, \, t_n \in [0, \infty)$ with $s_n \to \infty$ and $t_n \to \infty$, such that $x_{s_n} \le a$ and $x_{t_n} \ge b$ for $n \in \N$.
4. There exists $a, \, b \in \Q$ with $a \lt b$ and $u_\infty(a, b, \bs x) = \infty$.
Finally, here is the up-crossing inequality for martingales in continuous time. Once again, the inequality gives a bound on the expected number of up-crossings.
Suppose that $\bs X = \{X_t: t \in [0, \infty)\}$ satisfies the basic assumptions with respect to the filtration $\mathfrak F = \{\mathscr{F}_t: t \in [0, \infty)\}$, and let $a, \, b \in \R$ with $a \lt b$. Let $U_t = u_t(a, b, \bs X)$, the random number of up-crossings of $[a, b]$ by $\bs X$ up to time $t \in [0, \infty)$.
1. If $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\E(U_t) \le \frac{1}{b - a} \E[(X_t - a)^-] \le \frac{1}{b - a}\left[\E(X_t^-) + |a|\right] \le \frac{1}{b - a}\left[\E(|X_t|) + |a|\right], \quad t \in [0, \infty)$
2. If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\E(U_t) \le \frac{1}{b - a} \E[(X_t - a)^+] \le \frac{1}{b - a} \left[\E(X_t^+) + |a|\right] \le \frac{1}{b - a} \left[\E(|X_t|) + |a|\right], \quad t \in [0, \infty)$
Proof
Suppose that $\bs X$ is a sub-martingale; the proof for a super-martingale is analogous. Fix $t \in [0, \infty)$ and $a, \, b \in \R$ with $a \lt b$. For $I \subseteq [0, \infty)$ let $U_I = u_I(a, b, \bs X)$, the number of up-crossings of $[a, b]$ by $\bs X$ restricted to $I$. Suppose that $I$ is finite and that $t \in I$ is the maximum of $I$. Since $\bs X$ restricted to $I$ is also a sub-martingale, the discrete-time up-crossing theorem applies and so $\E(U_I) \le \frac{1}{b - a} \E[(X_t - a)^+]$ Since $U_t = \sup\{U_I: I \text{ is finite and } I \subset [0, t]\}$, there exist finite sets $I_n$ for $n \in \N$ with $U_{I_n} \uparrow U_t$ as $n \to \infty$. In particular, $U_t$ is measurable. By property (a) in the theorem above, there exists such a sequence with $I_n$ increasing in $n$ and $t \in I_n$ for each $n \in \N$. By the monotone convergence theorem, $\E\left(U_{I_n}\right) \to \E(U_t)$ as $n \to \infty$. So by the displayed equation above, $\E(U_t) \le \frac{1}{b - a} \E[(X_t - a)^+]$
Examples and Applications
Kolmogorov's Inequality
Suppose that $\bs X = \{X_n: n \in \N_+\}$ is a sequence of independent variables with $\E(X_n) = 0$ and $\var(X_n) = \E(X_n^2) \lt \infty$ for $n \in \N_+$. Let $\bs Y = \{Y_n: n \in \N\}$ be the partial sum process associated with $\bs X$, so that $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ From the Introduction we know that $\bs Y$ is a martingale. A simple application of the maximal inequality gives the following result, which is known as Kolmogorov's inequality, named for Andrei Kolmogorov.
For $n \in \N$, let $U_n = \max\left\{\left|Y_i\right|: i \in \N_n\right\}$. Then $\P(U_n \ge x) \le \frac{1}{x^2} \var(Y_n) = \frac{1}{x^2} \sum_{i=1}^n \E(X_i^2), \quad x \in (0, \infty)$
Proof
As noted above, $\bs Y$ is a martingale. Since the function $x \mapsto x^2$ on $\R$ is convex, $\bs{Y}^2 = \{Y_n^2: n \in \N\}$ is a sub-martingale. Let $V_n = \max\{Y_i^2: i \in \N_n\}$ for $n \in \N$, and let $x \in (0, \infty)$. Applying the maximal inequality for sub-martingales we have $\P(U_n \ge x) = \P(V_n \ge x^2) \le \frac{1}{x^2} \E(Y_n^2) = \frac{1}{x^2} \var(Y_n)$ Finally, since $\bs X$ is an independent sequence, $\var(Y_n) = \sum_{i=1}^n \var(X_i) = \sum_{i=1}^n \E(X_i^2)$
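Here is a quick Monte Carlo check of Kolmogorov's inequality, a sketch of ours rather than part of the text: for independent $\pm 1$ steps with mean 0, $\var(Y_n) = n$, so the bound is $n / x^2$.

```python
import random

random.seed(4)
n, x, reps = 100, 15.0, 50_000
count = 0
for _ in range(reps):
    s, u = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))
        u = max(u, abs(s))    # U_n, the running maximum of |Y_k|
    count += u >= x
print(count / reps, n / x ** 2)  # P(U_n >= x) <= var(Y_n) / x^2
```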
Red and Black
In the game of red and black, a gambler plays a sequence of Bernoulli games with success parameter $p \in (0, 1)$ at even stakes. The gambler starts with an initial fortune $x$ and plays until either she is ruined or reaches a specified target fortune $a$, where $x, \, a \in (0, \infty)$ with $x \lt a$. When $p \le \frac{1}{2}$, so that the games are fair or unfair, an optimal strategy is bold play: on each game, the gambler bets her entire fortune or just what is needed to reach the target, whichever is smaller. In the section on bold play we showed that when $p = \frac{1}{2}$, so that the games are fair, the probability of winning (that is, reaching the target $a$ starting with $x$) is $x / a$. We can use the maximal inequality for super-martingales to show that indeed, one cannot do better.
To set up the notation and review various concepts, let $X_0$ denote the gambler's initial fortune and let $X_n$ denote the outcome of game $n \in \N_+$, where 1 denotes a win and $-1$ a loss. So $\{X_n: n \in \N\}$ is a sequence of independent variables with $\P(X_n = 1) = p$ and $\P(X_n = -1) = 1 - p$ for $n \in \N_+$. (The initial fortune $X_0$ has an unspecified distribution on $(0, \infty)$.) The gambler is at a casino after all, so of course $p \le \frac{1}{2}$. Let $Y_n = \sum_{i=0}^n X_i, \quad n \in \N$ so that $\bs Y = \{Y_n: n \in \N\}$ is the partial sum process associated with $\bs X = \{X_n: n \in \N\}$. Recall that $\bs Y$ is also known as the simple random walk with parameter $p$, and since $p \le \frac{1}{2}$, is a super-martingale. The process $\{X_n: n \in \N_+\}$ is the difference sequence associated with $\bs Y$. Next let $Z_n$ denote the amount that the gambler bets on game $n \in \N_+$. The process $\bs Z = \{Z_n: n \in \N_+\}$ is predictable with respect to $\bs X = \{X_n: n \in \N\}$, so that $Z_n$ is measurable with respect to $\sigma\{X_0, X_1, \ldots, X_{n-1}\}$ for $n \in \N_+$. So the gambler's fortune after $n$ games is $W_n = X_0 + \sum_{i=1}^n Z_i X_i = X_0 + \sum_{i=1}^n Z_i (Y_i - Y_{i-1})$ Recall that $\bs W = \{W_n: n \in \N\}$ is the transform of $\bs Z$ with $\bs Y$, denoted $\bs W = \bs Z \cdot \bs Y$. The gambler is not allowed to go into debt and so we must have $Z_n \le W_{n-1}$ for $n \in \N_+$: the gambler's bet on game $n$ cannot exceed her fortune after game $n - 1$. What's the probability that the gambler can ever reach or exceed the target $a$ starting with fortune $x \lt a$?
Let $U_\infty = \sup\{W_n: n \in \N\}$. Suppose that $x, \, a \in (0, \infty)$ with $x \lt a$ and that $X_0 = x$. Then $\P(U_\infty \ge a) \le \frac{x}{a}$
Proof
Since $\bs Y$ is a super-martingale and $\bs Z$ is nonnegative, the transform $\bs W = \bs Z \cdot \bs Y$ is also a super-martingale. By the inequality for nonnegative super-martingales above: $\P(U_\infty \ge a) \le \frac{1}{a} \E(W_0) = \frac{x}{a}$
Note that the only assumptions made on the gambler's sequence of bets $\bs Z$ are that the sequence is predictable, so that the gambler cannot see into the future, and that the gambler cannot go into debt. Under these basic assumptions, no strategy can do any better than bold play. However, there are strategies that do as well as bold play; these are variations on bold play.
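Bold play is also easy to simulate directly, and doing so illustrates that the upper bound $x / a$ is actually attained. The sketch below is ours; the parameter values are arbitrary illustrative choices.

```python
import random

def bold_play(x, a, p):
    # bet the entire fortune or just what is needed to reach the target
    while 0 < x < a:
        bet = min(x, a - x)
        x += bet if random.random() < p else -bet
    return x >= a

random.seed(5)
x, a, reps = 3, 8, 100_000
wins = sum(bold_play(x, a, 0.5) for _ in range(reps))
print(wins / reps, x / a)  # estimate should be close to x / a = 0.375
```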
Open the simulation of the red and black game. Select bold play and $p = \frac{1}{2}$. Play the game with various values of initial and target fortunes.
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
Basic Theory
Basic Assumptions
As in the Introduction, we start with a stochastic process $\bs{X} = \{X_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). Next, we have a filtration $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$, and we assume that $\bs{X}$ is adapted to $\mathfrak{F}$. So $\mathfrak{F}$ is an increasing family of sub $\sigma$-algebras of $\mathscr{F}$ and $X_t$ is measurable with respect to $\mathscr{F}_t$ for $t \in T$. We think of $\mathscr{F}_t$ as the collection of events up to time $t \in T$. We assume that $\E\left(\left|X_t\right|\right) \lt \infty$, so that the mean of $X_t$ exists as a real number, for each $t \in T$. Finally, in continuous time where $T = [0, \infty)$, we need the additional assumptions that $t \mapsto X_t$ is right continuous and has left limits, and that the filtration $\mathfrak F$ is standard (that is, right continuous and complete). Recall also that $\mathscr{F}_\infty = \sigma\left(\bigcup_{t \in T} \mathscr{F}_t\right)$, and this is the $\sigma$-algebra that encodes our information over all time.
The Martingale Convergence Theorems
If $\bs X$ is a sub-martingale relative to $\mathfrak F$ then $\bs X$ has an increasing property of sorts: $\E(X_t \mid \mathscr{F}_s) \ge X_s$ for $s, \, t \in T$ with $s \le t$. Similarly, if $\bs X$ is a super-martingale relative to $\mathfrak F$ then $\bs X$ has a decreasing property of sorts, since the last inequality is reversed. Thus, there is hope that if this increasing or decreasing property is coupled with an appropriate boundedness property, then the sub-martingale or super-martingale might converge, in some sense, as $t \to \infty$. This is indeed the case, and is the subject of this section. The martingale convergence theorems, first formulated by Joseph Doob, are among the most important results in the theory of martingales. The first martingale convergence theorem states that if the expected absolute value is bounded in time, then the martingale process converges with probability 1.
Suppose that $\bs{X} = \{X_t: t \in T\}$ is a sub-martingale or a super-martingale with respect to $\mathfrak{F} = \{\mathscr{F}_t: t \in T\}$ and that $\E\left(\left|X_t\right|\right)$ is bounded in $t \in T$. Then there exists a random variable $X_\infty$ that is measurable with respect to $\mathscr{F}_\infty$ such that $\E(\left|X_\infty\right|) \lt \infty$ and $X_t \to X_\infty$ as $t \to \infty$ with probability 1.
Proof
The proof is simple using the up-crossing inequality. Let $T_t = \{s \in T: s \le t\}$ for $t \in T$. For $a, b \in \R$ with $a \lt b$, let $U_t(a, b)$ denote the number of up-crossings of the interval $[a, b]$ by the process $\bs X$ on $T_t$, and let $U_\infty(a, b)$ denote the number of up-crossings of $[a, b]$ by $\bs X$ on $T$. Recall that $U_t \uparrow U_\infty$ as $t \to \infty$. Suppose that $\E(|X_t|) \lt c$ for $t \in T$, where $c \in (0, \infty)$. By the up-crossing inequality, $\E[U_t(a, b)] \le \frac{1}{b - a}[|a| + \E(|X_t|)] \le \frac{|a| + c}{b - a}, \quad t \in T$ By the monotone convergence theorem, it follows that $\E[U_\infty(a, b)] \le \frac{|a| + c}{b - a} \lt \infty$ Hence $\P[U_\infty(a, b) \lt \infty] = 1$. Therefore with probability 1, $U_\infty(a, b) \lt \infty$ for every $a, \, b \in \Q$ with $a \lt b$. By our characterization of convergence in terms of up-crossings, it follows that there exists a random variable $X_\infty$ with values in $\R^* = \R \cup \{-\infty, \infty\}$ such that with probability 1, $X_t \to X_\infty$ as $t \to \infty$. Note that $X_\infty$ is measurable with respect to $\mathscr{F}_\infty$. By Fatou's lemma, $\E(|X_\infty|) \le \liminf_{t \to \infty} \E(|X_t|) \lt \infty$ Hence $\P(X_\infty \in \R) = 1$.
The boundedness condition means that $\bs X$ is bounded (in norm) as a subset of the vector space $\mathscr{L}_1$. Here is a very simple, but useful corollary:
If $\bs X = \{X_t: t \in T\}$ is a nonnegative super-martingale with respect to $\mathfrak F = \{\mathscr{F}_t: t \in T\}$ then there exists a random variable $X_\infty$, measurable with respect to $\mathscr{F}_\infty$, such that $X_t \to X_\infty$ with probability 1.
Proof
Since $\bs X$ is a nonnegative super-martingale, $\E(|X_t|) = \E(X_t) \le \E(X_0)$ for $t \in T$. Hence the previous martingale convergence theorem applies.
Of course, the corollary applies to a nonnegative martingale as a special case. For the second martingale convergence theorem you will need to review uniformly integrable variables. Recall also that for $k \in [1, \infty)$, the $k$-norm of a random variable $X$ is $\|X\|_k = \left[\E\left(|X|^k\right)\right]^{1/k}$ and $\mathscr{L}_k$ is the normed vector space of all real-valued random variables for which this norm is finite. Convergence in mean refers to convergence in $\mathscr{L}_1$ and more generally, convergence in $k$th mean refers to convergence in $\mathscr{L}_k$.
Suppose that $\bs X$ is uniformly integrable and is a sub-martingale or super-martingale with respect to $\mathfrak F$. Then there exists a random variable $X_\infty$, measurable with respect to $\mathscr{F}_\infty$, such that $X_t \to X_\infty$ as $t \to \infty$ with probability 1 and in mean. Moreover, if $\bs X$ is a martingale with respect to $\mathfrak F$ then $X_t = \E(X_\infty \mid \mathscr{F}_t)$ for $t \in T$.
Proof
Since $\bs X = \{X_t: t \in T\}$ is uniformly integrable, $\E(|X_t|)$ is bounded in $t \in T$. Hence by the first martingale convergence theorem, there exists $X_\infty$ that is measurable with respect to $\mathscr{F}_\infty$ such that $\E(|X_\infty|) \lt \infty$ and $X_t \to X_\infty$ as $t \to \infty$ with probability 1. By the uniform integrability theorem, the convergence is also in mean, so that $\E(|X_t - X_\infty|) \to 0$ as $t \to \infty$. Suppose now that $\bs X$ is a martingale with respect to $\mathfrak F$. For fixed $s \in T$ we know that $\E(X_t \mid \mathscr{F}_s) \to \E(X_\infty \mid \mathscr{F}_s)$ as $t \to \infty$ (with probability 1). But $\E(X_t \mid \mathscr{F}_s) = X_s$ for $t \ge s$ so it follows that $X_s = \E(X_\infty \mid \mathscr{F}_s)$.
As a simple corollary, recall that if $\|X_t\|_k$ is bounded in $t \in T$ for some $k \in (1, \infty)$ then $\bs X$ is uniformly integrable, and hence the second martingale convergence theorem applies. But we can do better.
Suppose again that $\bs X = \{X_t: t \in T\}$ is a sub-martingale or super-martingale with respect to $\mathfrak F = \{\mathscr{F}_t: t \in T\}$ and that $\|X_t\|_k$ is bounded in $t \in T$ for some $k \in (1, \infty)$. Then there exists a random variable $X_\infty \in \mathscr{L}_k$ such that $X_t \to X_\infty$ as $t \to \infty$ in $\mathscr{L}_k$.
Proof
Suppose that $\|X_t\|_k \le c$ for $t \in T$ where $c \in (0, \infty)$. Since $\|X\|_1 \le \|X\|_k$, we have $\E(|X_t|)$ bounded in $t \in T$ so the first martingale convergence theorem applies. Hence there exists $X_\infty$, measurable with respect to $\mathscr{F}_\infty$, such that $X_t \to X_\infty$ as $t \to \infty$ with probability 1. Equivalently, with probability 1, $|X_t - X_\infty|^k \to 0 \text{ as } t \to \infty$ Next, for $t \in T$, let $T_t = \{s \in T: s \le t\}$ and define $W_t = \sup\{|X_s|: s \in T_t\}$. By the norm version of the maximal inequality, $\|W_t\|_k \le \frac{k}{k-1}\|X_t\|_k \le \frac{k c}{k - 1}, \quad t \in T$ If we let $W_\infty = \sup\{|X_s|: s \in T\}$, then by the monotone convergence theorem $\|W_\infty\|_k = \lim_{t \to \infty} \|W_t\|_k \le \frac{c k}{k - 1}$ So $W_\infty \in \mathscr{L}_k$. But $|X_\infty| \le W_\infty$ so $X_\infty \in \mathscr{L}_k$ also. Moreover, $|X_t - X_\infty|^k \le 2^k W^k_\infty$, so applying the dominated convergence theorem to the first displayed equation above, we have $\E(|X_t - X_\infty|^k) \to 0$ as $t \to \infty$.
Examples and Applications
In this subsection, we consider a number of applications of the martingale convergence theorems. One indication of the importance of martingale theory is the fact that many of the classical theorems of probability have simple and elegant proofs when formulated in terms of martingales.
Simple Random Walk
Suppose now that $\bs{V} = \{V_n: n \in \N\}$ is a sequence of independent random variables with $\P(V_i = 1) = p$ and $\P(V_i = -1) = 1 - p$ for $i \in \N_+$, where $p \in (0, 1)$. Let $\bs{X} = \{X_n: n \in \N\}$ be the partial sum process associated with $\bs{V}$ so that $X_n = \sum_{i=0}^n V_i, \quad n \in \N$ Recall that $\bs{X}$ is the simple random walk with parameter $p$. From our study of Markov chains, we know that if $p \gt \frac{1}{2}$ then $X_n \to \infty$ as $n \to \infty$ and if $p \lt \frac{1}{2}$ then $X_n \to -\infty$ as $n \to \infty$. The chain is transient in these two cases. If $p = \frac{1}{2}$, the chain is (null) recurrent and so visits every state in $\Z$ infinitely often. In this case $X_n$ does not converge as $n \to \infty$. But of course $\E(X_n) = n (2 p - 1)$ for $n \in \N$, so the martingale convergence theorems do not apply.
Doob's Martingale
Recall that if $X$ is a random variable with $\E(|X|) \lt \infty$ and we define $X_t = \E(X \mid \mathscr{F}_t)$ for $t \in T$, then $\bs X = \{X_t: t \in T\}$ is a martingale relative to $\mathfrak F$ and is known as a Doob martingale, named for you know whom. So the second martingale convergence theorem states that every uniformly integrable martingale is a Doob martingale. Moreover, we know that the Doob martingale $\bs X$ constructed from $X$ and $\mathfrak F$ is uniformly integrable, so the second martingale convergence theorem applies. The last remaining question is the relationship between $X$ and the limiting random variable $X_\infty$. The answer may come as no surprise.
Let $\bs X = \{X_t: t \in T\}$ be the Doob martingale constructed from $X$ and $\mathfrak F$. Then $X_t \to X_\infty$ as $t \to \infty$ with probability 1 and in mean, where $X_\infty = \E(X \mid \mathscr{F}_\infty)$
Of course if $\mathscr{F}_\infty = \mathscr{F}$, which is quite possible, then $X_\infty = X$. At the other extreme, if $\mathscr{F}_t = \{\emptyset, \Omega\}$, the trivial $\sigma$-algebra for all $t \in T$, then $X_\infty = \E(X)$, a constant.
Kolmogorov Zero-One Law
Suppose that $\bs X = (X_n: n \in \N_+)$ is a sequence of random variables with values in a general state space $(S, \mathscr{S})$. Let $\mathscr{G}_n = \sigma\{X_k: k \ge n\}$ for $n \in \N_+$, and let $\mathscr{G}_\infty = \bigcap_{n=1}^\infty \mathscr{G}_n$. So $\mathscr{G}_\infty$ is the tail $\sigma$-algebra of $\bs X$, the collection of events that depend only on the terms of the sequence with arbitrarily large indices. For example, if the sequence is real-valued (or more generally takes values in a metric space), then the event that $X_n$ has a limit as $n \to \infty$ is a tail event. If $B \in \mathscr{S}$, then the event that $X_n \in B$ for infinitely many $n \in \N_+$ is another tail event. The Kolmogorov zero-one law, named for Andrei Kolmogorov, states that if $\bs X$ is an independent sequence, then the tail events are essentially deterministic.
Suppose that $\bs X$ is a sequence of independent random variables. If $A \in \mathscr{G}_\infty$ then $\P(A) = 0$ or $\P(A) = 1$.
Proof
Let $\mathscr{F}_n = \sigma\{X_k: k \le n\}$ for $n \in \N_+$ so that $\mathfrak F = \{\mathscr{F}_n: n \in \N_+\}$ is the natural filtration associated with $\bs X$. As with our notation above, let $\mathscr{F}_\infty = \sigma\left(\bigcup_{n \in \N_+} \mathscr{F}_n\right)$. Now let $A \in \mathscr{G}_\infty$ be a tail event. Then $\{\E(\bs{1}_A \mid \mathscr{F}_n): n \in \N_+\}$ is the Doob martingale associated with the indicator variable $\bs{1}_A$ and $\mathfrak F$. By our results above, $\E(\bs{1}_A \mid \mathscr{F}_n) \to \E(\bs{1}_A \mid \mathscr{F}_\infty)$ as $n \to \infty$ with probability 1. But $A \in \mathscr{F}_\infty$ so $\E(\bs{1}_A \mid \mathscr{F}_\infty) = \bs{1}_A$. On the other hand, $A \in \mathscr{G}_{n+1}$ and the $\sigma$-algebras $\mathscr{G}_{n+1}$ and $\mathscr{F}_n$ are independent. Therefore $\E(\bs{1}_A \mid \mathscr{F}_n) = \P(A)$ for each $n \in \N_+$. Thus $\P(A) = \bs{1}_A$ with probability 1, and hence $\P(A) \in \{0, 1\}$.
Tail events and the Kolmogorov zero-one law were studied earlier in the section on measure in the chapter on probability spaces. A random variable that is measurable with respect to $\mathscr{G}_\infty$ is a tail random variable. From the Kolmogorov zero-one law, a real-valued tail random variable for an independent sequence must be a constant (with probability 1).
Branching Processes
Recall the discussion of the simple branching process from the Introduction. The fundamental assumption is that the particles act independently, each with the same offspring distribution on $\N$. As before, we will let $f$ denote the (discrete) probability density function of the number of offspring of a particle, $m$ the mean of the distribution, and $q$ the probability of extinction starting with a single particle. We assume that $f(0) \gt 0$ and $f(0) + f(1) \lt 1$ so that a particle has a positive probability of dying without children and a positive probability of producing more than 1 child.
The stochastic process of interest is $\bs{X} = \{X_n: n \in \N\}$ where $X_n$ is the number of particles in the $n$th generation for $n \in \N$. Recall that $\bs{X}$ is a discrete-time Markov chain on $\N$. Since 0 is an absorbing state, and all positive states lead to 0, we know that the positive states are transient and so are visited only finitely often with probability 1. It follows that either $X_n \to 0$ as $n \to \infty$ (extinction) or $X_n \to \infty$ as $n \to \infty$ (explosion). We have quite a bit of information about which of these events will occur from our study of Markov chains, but the martingale convergence theorems give more information.
Extinction and explosion
1. If $m \le 1$ then $q = 1$ and extinction is certain.
2. If $m \gt 1$ then $q \in (0, 1)$. Either $X_n \to 0$ as $n \to \infty$ or $X_n \to \infty$ as $n \to \infty$ at an exponential rate.
Proof
The new information is the rate of divergence to $\infty$ in (b). The other statements are from our study of discrete-time branching Markov chains. We showed in the Introduction that $\{X_n / m^n: n \in \N\}$ is a martingale. Since this martingale is nonnegative, it has a limit as $n \to \infty$, and the limiting random variable takes values in $\R$. So if $m \gt 1$ and $X_n \to \infty$ as $n \to \infty$, then the divergence to $\infty$ must be at essentially the same rate as $m^n$.
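A short simulation makes the dichotomy visible. In the sketch below (ours), the offspring distribution $f(0) = \frac{1}{4}$, $f(2) = \frac{3}{4}$ is an arbitrary illustrative choice, with $m = \frac{3}{2}$ and extinction probability $q = \frac{1}{3}$; the printed ratios $X_n / m^n$ are either 0 (extinction) or settle near a positive value (explosion at rate $m^n$).

```python
import random

def offspring():
    # illustrative distribution: f(0) = 1/4, f(2) = 3/4, so m = 3/2, q = 1/3
    return 0 if random.random() < 0.25 else 2

random.seed(6)
m, generations = 1.5, 20
for _ in range(6):
    x = 1
    for _ in range(generations):
        x = sum(offspring() for _ in range(x))
        if x == 0:
            break
    print(x / m ** generations)  # 0 on extinction, else near the limit of X_n / m^n
```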
The Beta-Bernoulli Process
Recall that the beta-Bernoulli process is constructed by randomizing the success parameter in a Bernoulli trials process with a beta distribution. Specifically, we start with a random variable $P$ having the beta distribution with parameters $a, \, b \in (0, \infty)$. Next we have a sequence $\bs X = (X_1, X_2, \ldots)$ of indicator variables with the property that $\bs X$ is conditionally independent given $P = p \in (0, 1)$ with $\P(X_i = 1 \mid P = p) = p$ for $i \in \N_+$. Let $\bs{Y} = \{Y_n: n \in \N\}$ denote the partial sum process associated with $\bs{X}$, so that once again, $Y_n = \sum_{i=1}^n X_i$ for $n \in \N$. Next let $M_n = Y_n / n$ for $n \in \N_+$ so that $M_n$ is the sample mean of $(X_1, X_2, \ldots, X_n)$. Finally let $Z_n = \frac{a + Y_n}{a + b + n}, \quad n \in \N$ We showed in the Introduction that $\bs Z = \{Z_n: n \in \N\}$ is a martingale with respect to $\bs X$.
$M_n \to P$ and $Z_n \to P$ as $n \to \infty$ with probability 1 and in mean.
Proof
We showed in the section on the beta-Bernoulli process that $Z_n \to P$ as $n \to \infty$ with probability 1. Note that $0 \le Z_n \le 1$ for $n \in \N$, so the martingale $\bs Z$ is uniformly integrable. Hence the second martingale convergence theorem applies, and the convergence is in mean also.
This is a very nice result and is reminiscent of the fact that for the ordinary Bernoulli trials sequence with success parameter $p \in (0, 1)$ we have the law of large numbers that $M_n \to p$ as $n \to \infty$ with probability 1 and in mean.
Pólya's Urn Process
Recall that in the simplest version of Pólya's urn process, we start with an urn containing $a$ red and $b$ green balls. At each discrete time step, we select a ball at random from the urn and then replace the ball and add $c$ new balls of the same color to the urn. For the parameters, we need $a, \, b \in \N_+$ and $c \in \N$. For $i \in \N_+$, let $X_i$ denote the color of the ball selected on the $i$th draw, where 1 means red and 0 means green. For $n \in \N$, let $Y_n = \sum_{i=1}^n X_i$, so that $\bs Y = \{Y_n: n \in \N\}$ is the partial sum process associated with $\bs X = \{X_i: i \in \N_+\}$. Since $Y_n$ is the number of red balls selected in the first $n$ draws, the proportion of red balls selected through time $n \in \N_+$ is $M_n = Y_n / n$. On the other hand, the total number of balls in the urn at time $n \in \N$ is $a + b + c n$ so the proportion of red balls in the urn at time $n$ is $Z_n = \frac{a + c Y_n}{a + b + c n}$ We showed in the Introduction that $\bs Z = \{Z_n: n \in \N\}$ is a martingale. Now we are interested in the limiting behavior of $M_n$ and $Z_n$ as $n \to \infty$. When $c = 0$, the answer is easy. In this case, $Y_n$ has the binomial distribution with trial parameter $n$ and success parameter $a / (a + b)$, so by the law of large numbers, $M_n \to a / (a + b)$ as $n \to \infty$ with probability 1 and in mean. On the other hand, $Z_n = a / (a + b)$ when $c = 0$. So the interesting case is when $c \gt 0$.
Suppose that $c \in \N_+$. Then there exists a random variable $P$ such that $M_n \to P$ and $Z_n \to P$ as $n \to \infty$ with probability 1 and in mean. Moreover, $P$ has the beta distribution with left parameter $a / c$ and right parameter $b / c$.
Proof
In our study of Pólya's urn process we showed that when $c \in \N_+$ the process $\bs X$ is a beta-Bernoulli process with parameters $a / c$ and $b / c$. So the result follows from our previous theorem.
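The result is easy to illustrate numerically. The following sketch (ours) simulates $Z_n$ for a moderately large $n$; with $a = b = c = 1$, the limit $P$ has the beta$(1, 1)$ distribution, which is uniform on $(0, 1)$, so the sample mean should be near $\frac{1}{2}$.

```python
import random

def polya(a, b, c, n):
    # return Z_n, the proportion of red balls in the urn after n draws
    red, total = a, a + b
    for _ in range(n):
        if random.random() < red / total:
            red += c              # red ball drawn: add c red balls
        total += c
    return red / total

random.seed(7)
a, b, c = 1, 1, 1
sample = [polya(a, b, c, 5_000) for _ in range(2_000)]
print(sum(sample) / len(sample))  # near E(P) = a / (a + b) = 1/2
```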
Likelihood Ratio Tests
Recall the discussion of likelihood ratio tests in the Introduction. To review, suppose that $(S, \mathscr{S}, \mu)$ is a general measure space, and that $\bs{X} = \{X_n: n \in \N\}$ is a sequence of independent, identically distributed random variables, taking values in $S$, and having a common probability density function with respect to $\mu$. The likelihood ratio test is a hypothesis test, where the null and alternative hypotheses are
• $H_0$: the probability density function is $g_0$.
• $H_1$: the probability density function is $g_1$.
We assume that $g_0$ and $g_1$ are positive on $S$. Also, it makes no sense for $g_0$ and $g_1$ to be the same, so we assume that $g_0 \ne g_1$ on a set of positive measure. The test is based on the likelihood ratio test statistic $L_n = \prod_{i=1}^n \frac{g_0(X_i)}{g_1(X_i)}, \quad n \in \N$ We showed that under the alternative hypothesis $H_1$, $\bs{L} = \{L_n: n \in \N\}$ is a martingale with respect to $\bs{X}$, known as the likelihood ratio martingale.
Under $H_1$, $L_n \to 0$ as $n \to \infty$ with probability 1.
Proof
Assume that $H_1$ is true. $\bs L$ is a nonnegative martingale, so the first martingale convergence theorem applies, and hence there exists a random variable $L_\infty$ with values in $[0, \infty)$ such that $L_n \to L_\infty$ as $n \to \infty$ with probability 1. Next note that $\ln(L_n) = \sum_{i=1}^n \ln\left[\frac{g_0(X_i)}{g_1(X_i)}\right]$ The variables $\ln[g_0(X_i) / g_1(X_i)]$ for $i \in \N_+$ are also independent and identically distributed, so let $m$ denote the common mean. The natural logarithm is concave and the martingale $\bs L$ has mean 1, so by Jensen's inequality, $m = \E\left(\ln\left[\frac{g_0(X_1)}{g_1(X_1)}\right]\right) \lt \ln\left(\E\left[\frac{g_0(X_1)}{g_1(X_1)}\right]\right) = \ln(1) = 0$ (the inequality is strict since $g_0 \ne g_1$ on a set of positive measure). Hence $m \in [-\infty, 0)$. By the strong law of large numbers, $\frac{1}{n} \ln(L_n) \to m$ as $n \to \infty$ with probability 1. Hence we must have $\ln(L_n) \to -\infty$ as $n \to \infty$ with probability 1. But by continuity, $\ln(L_n) \to \ln(L_\infty)$ as $n \to \infty$ with probability 1, so $L_\infty = 0$ with probability 1.
This result is good news, statistically speaking. Small values of $L_n$ are evidence in favor of $H_1$, so the decision rule is to reject $H_0$ in favor of $H_1$ if $L_n \le l$ for a chosen critical value $l \in (0, \infty)$. If $H_1$ is true and the sample size $n$ is sufficiently large, we will reject $H_0$. In the proof, note that $\ln(L_n)$ must diverge to $-\infty$ at least as fast as $n$ diverges to $\infty$. Hence $L_n \to 0$ as $n \to \infty$ exponentially fast, at least. It is also worth noting that $\bs L$ is a mean 1 martingale (under $H_1$) so trivially $\E(L_n) \to 1$ as $n \to \infty$ even though $L_n \to 0$ as $n \to \infty$ with probability 1. So the likelihood ratio martingale is a good example of a sequence where the interchange of limit and expected value is not valid.
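The exponential decay of $L_n$ under $H_1$ shows up even in a tiny simulation. In the sketch below (ours), $g_0$ is the standard normal density and $g_1$ is the normal density with mean 1 and variance 1, an arbitrary illustrative choice; sampling is done under $H_1$.

```python
import math, random

def log_ratio(x):
    # log[g_0(x) / g_1(x)] for g_0 standard normal, g_1 normal(1, 1);
    # the normalizing constants cancel, leaving 1/2 - x
    return (-x ** 2 / 2) - (-(x - 1) ** 2 / 2)

random.seed(8)
for n in (10, 100, 1000):
    log_L = sum(log_ratio(random.gauss(1.0, 1.0)) for _ in range(n))
    print(n, math.exp(log_L))  # L_n -> 0 exponentially fast under H_1
```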
Partial Products
Suppose that $\bs X = \{X_n: n \in \N_+\}$ is an independent sequence of nonnegative random variables with $\E(X_n) = 1$ for $n \in \N_+$. Let $Y_n = \prod_{i=1}^n X_i, \quad n \in \N$ so that $\bs Y = \{Y_n: n \in \N\}$ is the partial product process associated with $\bs X$. From our discussion of this process in the Introduction, we know that $\bs Y$ is a martingale with respect to $\bs X$. Since $\bs Y$ is nonnegative, the convergence theorem for nonnegative super-martingales above applies, so there exists a random variable $Y_\infty$ such that $Y_n \to Y_\infty$ as $n \to \infty$ with probability 1. What more can we say? The following result, known as the Kakutani product martingale theorem, is due to Shizuo Kakutani.
Let $a_n = \E\left(\sqrt{X_n}\right)$ for $n \in \N_+$ and let $A = \prod_{i=1}^\infty a_i$.
1. If $A \gt 0$ then $Y_n \to Y_\infty$ as $n \to \infty$ in mean and $\E(Y_\infty) = 1$.
2. If $A = 0$ then $Y_\infty = 0$ with probability 1.
Proof
Note that $a_n \gt 0$ for $n \in \N_+$ since $X_n$ is nonnegative and $\P(X_n \gt 0) \gt 0$. Also, since $x \mapsto \sqrt{x}$ is concave on $(0, \infty)$ it follows from Jensen's inequality that $a_n = \E\left(\sqrt{X_n}\right) \le \sqrt{\E(X_n)} = 1$ Let $A_n = \prod_{i=1}^n a_i$ for $n \in \N$. Since $a_n \in (0, 1]$ for $n \in \N_+$, it follows that $A_n \in (0, 1]$ for $n \in \N$ and that $A_n$ is decreasing in $n \in \N$ with limit $A = \prod_{i=1}^\infty a_i \in [0, 1]$. Next let $Z_n = \prod_{i=1}^n \sqrt{X_i} / a_i$ for $n \in \N$, so that $\bs Z = \{Z_n: n \in \N\}$ is the partial product process associated with $\{\sqrt{X_n} / a_n: n \in \N\}$. Since $\E\left(\sqrt{X_n} / a_n\right) = 1$ for $n \in \N_+$, the process $\bs Z$ is also a nonnegative martingale, so there exists a random variable $Z_\infty$ such that $Z_n \to Z_\infty$ as $n \to \infty$ with probability 1. Note that $Z_n^2 = Y_n / A_n^2$, $Y_n = A_n^2 Z_n^2$, and $Y_n \le Z_n^2$ for $n \in \N$.
1. Suppose that $A \gt 0$. Since the martingale $\bs Y$ has mean 1, $\E\left(Z_n^2\right) = \E(Y_n / A_n^2) = 1 / A_n^2 \le 1 / A^2 \lt \infty, \quad n \in \N$ Let $W_n = \max\{Z_k: k \in \{0, 1, \ldots, n\}\}$ for $n \in \N$ so that $\bs W = \{W_n: n \in \N\}$ is the maximal process associated with $\bs Z$. Also, let $W_\infty = \sup\{Z_k: k \in \N\}$ and note that $W_n \uparrow W_\infty$ as $n \to \infty$. By the $\mathscr{L}_2$ maximal inequality, $\E(W_n^2) \le 4 \E(Z_n^2) \le 4 / A^2, \quad n \in \N$ By the monotone convergence theorem, $\E(W_\infty^2) = \lim_{n \to \infty} \E(W_n^2) \le 4 / A^2$. Since $x \to x^2$ is strictly increasing on $[0, \infty)$, $W_\infty^2 = \sup\{Z_n^2: n \in \N\}$ and so $Y_n \le W_\infty^2$ for $n \in \N$. Since $\E(W_\infty^2) \lt \infty$, it follows that the martingale $\bs Y$ is uniformly integrable. Hence by the third martingale convergence theorem above, $Y_n \to Y_\infty$ in mean. Since convergence in mean implies that the means converge, $\E(Y_\infty) = \lim_{n \to \infty} \E(Y_n) = 1$.
2. Suppose that $A = 0$. Then $Y_n = A_n^2 Z_n^2 \to 0 \cdot Z_\infty^2 = 0$ as $n \to \infty$ with probability 1. Note that in this case, the convergence is not in mean, and trivially $\E(Y_\infty) = 0$.
Density Functions
This discussion continues the one on density functions in the Introduction. To review, we start with our probability space $(\Omega, \mathscr{F}, \P)$ and a filtration $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$ in discrete time. Recall again that $\mathscr{F}_\infty = \sigma \left(\bigcup_{n=0}^\infty \mathscr{F}_n\right)$. Suppose now that $\mu$ is a finite measure on the sample space $(\Omega, \mathscr{F})$. For each $n \in \N \cup \{\infty\}$, the restriction of $\mu$ to $\mathscr{F}_n$ is a measure on $(\Omega, \mathscr{F}_n)$ and similarly the restriction of $\P$ to $\mathscr{F}_n$ is a probability measure on $(\Omega, \mathscr{F}_n)$. To save notation and terminology, we will refer to these as $\mu$ and $\P$ on $\mathscr{F}_n$, respectively. Suppose now that $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}_n$ for each $n \in \N$. By the Radon-Nikodym theorem, $\mu$ has a density function (or Radon-Nikodym derivative) $X_n: \Omega \to \R$ with respect to $\P$ on $\mathscr{F}_n$ for each $n \in \N$. The theorem and the derivative are named for Johann Radon and Otto Nikodym. In the Introduction we showed that $\bs X = \{X_n: n \in \N\}$ is a martingale with respect to $\mathfrak F$. Here is the convergence result:
There exists a random variable $X_\infty$ such that $X_n \to X_\infty$ as $n \to \infty$ with probability 1.
1. If $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}_\infty$ then $X_\infty$ is a density function of $\mu$ with respect to $\P$ on $\mathscr{F}_\infty$.
2. If $\mu$ and $\P$ are mutually singular on $\mathscr{F}_\infty$ then $X_\infty = 0$ with probability 1.
Proof
Again, as shown in the Introduction, $\bs X$ is a martingale with respect to $\mathfrak F$. Moreover, $\E(|X_n|) \le \|\mu\|$ (the total variation of $\mu$) for each $n \in \N$. Since $\mu$ is a finite measure, $\|\mu\| \lt \infty$ so the first martingale convergence theorem applies. Hence there exists a random variable $X_\infty$, measurable with respect to $\mathscr{F}_\infty$, such that $X_n \to X_\infty$ as $n \to \infty$ with probability 1.
1. If $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}_\infty$, then $\mu$ has a density function $Y_\infty$ with respect to $\P$ on $\mathscr{F}_\infty$. Our goal is to show that $X_\infty = Y_\infty$ with probability 1. By definition, $Y_\infty$ is measurable with respect to $\mathscr{F}_\infty$ and $\int_A Y_\infty d\P = \E(Y_\infty; A) = \mu(A), \quad A \in \mathscr{F}_\infty$ Suppose now that $n \in \N$ and $A \in \mathscr{F}_n$. Then again by definition, $\E(X_n; A) = \mu(A)$. But $A \in \mathscr{F}_\infty$ also, so $\E(Y_\infty; A) = \mu(A)$. So to summarize, $X_n$ is $\mathscr{F}_n$-measurable and $\E(X_n; A) = \E(Y_\infty; A)$ for each $A \in \mathscr{F}_n$. By definition, this means that $X_n = \E(Y_\infty \mid \mathscr{F}_n)$, so $\bs X$ is the Doob martingale associated with $Y_\infty$. Letting $n \to \infty$ and using the result above gives $X_\infty = \E(Y_\infty \mid \mathscr{F}_\infty) = Y_\infty$ (with probability 1, of course).
2. Suppose that $\mu$ and $\P$ are mutually singular on $\mathscr{F}_\infty$. Assume first that $\mu$ is a positive measure, so that $X_n$ is nonnegative for $n \in \N \cup \{\infty\}$. By the definition of mutual singularity, there exists $B \in \mathscr{F}_\infty$ such that $\mu(B) = 0$ and $\P(B^c) = 0$, so that $\P(B) = 1$. Our goal is to show that $\E(X_\infty; A) \le \mu(A)$ for every $A \in \mathscr{F}_\infty$. Towards that end, let $\mathscr{M} = \left\{A \in \mathscr{F}_\infty: \E(X_\infty ; A) \le \mu(A)\right\}$ Suppose that $A \in \bigcup_{k=0}^\infty \mathscr{F}_k$, so that $A \in \mathscr{F}_k$ for some $k \in \N$. Then $A \in \mathscr{F}_n$ for all $n \ge k$ and therefore $\E(X_n; A) = \mu(A)$ for all $n \ge k$. By Fatou's lemma, $\E(X_\infty; A) \le \liminf_{n \to \infty} \E(X_n; A) \le \mu(A)$ so $A \in \mathscr{M}$. Next, suppose that $\{A_n: n \in \N\}$ is an increasing or decreasing sequence in $\mathscr{M}$, and let $A_\infty = \lim_{n \to \infty} A_n$ (the union in the first case and the intersection in the second case). Then $\E(X_\infty; A_n) \le \mu(A_n)$ for each $n \in \N$. By the continuity theorems, $\E(X_\infty; A_n) \to \E(X_\infty; A_\infty)$ and $\mu(A_n) \to \mu(A_\infty)$ as $n \to \infty$. Therefore $\E(X_\infty; A_\infty) \le \mu(A_\infty)$ and so $A_\infty \in \mathscr{M}$. It follows that $\mathscr{M}$ is a monotone class. Since $\mathscr{M}$ contains the algebra $\bigcup_{n=0}^\infty \mathscr{F}_n$, it then follows from the monotone class theorem that $\mathscr{F}_\infty \subseteq \mathscr{M}$. In particular $B \in \mathscr{M}$, so $\E(X_\infty) = \E(X_\infty; B) \le \mu(B) = 0$ and therefore $X_\infty = 0$ with probability 1. If $\mu$ is a general finite measure, then by the Jordan decomposition theorem, $\mu$ can be written uniquely in the form $\mu = \mu^+ - \mu^-$ where $\mu^+$ and $\mu^-$ are finite positive measures. Moreover, $X_n^+$ is the density function of $\mu^+$ on $\mathscr{F}_n$ and $X_n^-$ is the density function of $\mu^-$ on $\mathscr{F}_n$. By the first part of the proof, $X_\infty^+ = 0$, $X_\infty^- = 0$, and hence $X_\infty = 0$, all with probability 1.
The martingale approach can be used to give a probabilistic proof of the Radon-Nikodym theorem, at least in certain cases. We start with a sample set $\Omega$. Suppose that $\mathscr{A}_n = \{A^n_i: i \in I_n\}$ is a countable partition of $\Omega$ for each $n \in \N$. Thus $I_n$ is countable, $A^n_i \cap A^n_j = \emptyset$ for distinct $i, \, j \in I_n$, and $\bigcup_{i \in I_n} A^n_i = \Omega$. Suppose also that $\mathscr{A}_{n+1}$ refines $\mathscr{A}_n$ for each $n \in \N$ in the sense that $A^n_i$ is a union of sets in $\mathscr{A}_{n+1}$ for each $i \in I_n$. Let $\mathscr{F}_n = \sigma(\mathscr{A}_n)$. Thus $\mathscr{F}_n$ is generated by a countable partition, and so the sets in $\mathscr{F}_n$ are of the form $\bigcup_{j \in J} A^n_j$ where $J \subseteq I_n$. Moreover, by the refinement property $\mathscr{F}_n \subseteq \mathscr{F}_{n+1}$ for $n \in \N$, so that $\mathfrak F = \{\mathscr{F}_n: n \in \N\}$ is a filtration. Let $\mathscr{F} = \mathscr{F}_\infty = \sigma\left(\bigcup_{n=0}^\infty \mathscr{F}_n\right) = \sigma\left(\bigcup_{n=0}^\infty \mathscr{A}_n\right)$, so that our sample space is $(\Omega, \mathscr{F})$. Finally, suppose that $\P$ is a probability measure on $(\Omega, \mathscr{F})$ with the property that $\P(A^n_i) \gt 0$ for $n \in \N$ and $i \in I_n$. We now have a probability space $(\Omega, \mathscr{F}, \P)$. Interesting probability spaces that occur in applications are of this form, so the setting is not as specialized as you might think.
Suppose now that $\mu$ is a finite measure on $(\Omega, \mathscr{F})$. From our assumptions, the only null set for $\P$ on $\mathscr{F}_n$ is $\emptyset$, so $\mu$ is automatically absolutely continuous with respect to $\P$ on $\mathscr{F}_n$. Moreover, for $n \in \N$, we can give the density function of $\mu$ with respect to $\P$ on $\mathscr{F}_n$ explicitly:
The density function of $\mu$ with respect to $\P$ on $\mathscr F_n$ is the random variable $X_n$ whose value on $A^n_i$ is $\mu(A^n_i)/ \P(A^n_i)$ for each $i \in I_n$. Equivalently, $X_n = \sum_{i \in I_n} \frac{\mu(A^n_i)}{\P(A^n_i)} \bs{1}(A^n_i)$
Proof
We need to show that $\mu(A) = \E(X_n; A)$ for each $A \in \mathscr F_n$. So suppose $A = \bigcup_{j \in J} A^n_j$ where $J \subseteq I_n$. Then $\E(X_n; A) = \sum_{j \in J} \E(X_n; A^n_j) = \sum_{j \in J} \frac{\mu(A^n_j)}{\P(A^n_j)} \P(A^n_j) = \sum_{j \in J} \mu(A^n_j) = \mu(A)$
By our theorem above, there exists a random variable $X$ such that $X_n \to X$ as $n \to \infty$ with probability 1. If $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}$, then $X$ is a density function of $\mu$ with respect to $\P$ on $\mathscr{F}$. The point is that we have given a more or less explicit construction of the density.
For a concrete example, consider $\Omega = [0, 1)$. For $n \in \N$, let $\mathscr{A}_n = \left\{\left[\frac{j}{2^n}, \frac{j + 1}{2^n}\right): j \in \{0, 1, \ldots, 2^n - 1\}\right\}$ This is the partition of $[0, 1)$ into $2^n$ subintervals of equal length $1/2^n$, based on the dyadic rationals (or binary rationals) of rank $n$ or less. Note that every interval in $\mathscr{A}_n$ is the union of two adjacent intervals in $\mathscr{A}_{n+1}$, so the refinement property holds. Let $\P$ be ordinary Lebesgue measure on $[0, 1)$ so that $\P(A^n_i) = 1 / 2^n$ for $n \in \N$ and $i \in \{0, 1, \ldots, 2^n - 1\}$. As above, let $\mathscr{F}_n = \sigma(\mathscr{A}_n)$ and $\mathscr{F} = \sigma\left(\bigcup_{n=0}^\infty \mathscr{F}_n\right) = \sigma\left(\bigcup_{n=0}^\infty \mathscr{A}_n\right)$. The dyadic rationals are dense in $[0, 1)$, so $\mathscr{F}$ is the ordinary Borel $\sigma$-algebra on $[0, 1)$. Thus our probability space $(\Omega, \mathscr{F}, \P)$ is simply $[0, 1)$ with the usual Euclidean structures. If $\mu$ is a finite measure on $([0, 1), \mathscr{F})$ then the density function of $\mu$ on $\mathscr{F}_n$ is the random variable $X_n$ whose value on the interval $[j / 2^n, (j + 1) / 2^n)$ is $2^n \mu[j / 2^n, (j + 1) / 2^n)$. If $\mu$ is absolutely continuous with respect to $\P$ on $\mathscr{F}$ (so absolutely continuous in the usual sense), then a density function of $\mu$ is $X = \lim_{n \to \infty} X_n$.
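The construction above is easy to carry out numerically. The following sketch in Python (the helper function and the example measure are our own illustration, not part of the formal development) computes the dyadic density $X_n$ for a hypothetical measure $\mu$ with distribution function $F(x) = x^2$, hence true density $f(x) = 2x$, and checks the convergence $X_n \to X = f$:

```python
import numpy as np

def dyadic_density(mu_cdf, n, num_points=1024):
    """Density X_n of mu with respect to P (Lebesgue measure) on the
    sigma-algebra generated by the dyadic partition of rank n."""
    edges = np.arange(2**n + 1) / 2**n
    mu_intervals = np.diff(mu_cdf(edges))     # mu of each dyadic interval
    t = np.linspace(0, 1, num_points, endpoint=False)
    j = np.floor(t * 2**n).astype(int)        # index of the interval containing t
    return t, 2**n * mu_intervals[j]          # X_n = mu(A) / P(A), with P(A) = 1/2^n

# Hypothetical example: mu with CDF F(x) = x^2, so density f(x) = 2x
for n in [2, 4, 8]:
    t, Xn = dyadic_density(lambda x: x**2, n)
    print(f"n={n}: max |X_n(t) - 2t| = {np.max(np.abs(Xn - 2*t)):.4f}")
```

Since $X_n$ is constant on each dyadic interval, the maximum error is of order $2^{-n}$, consistent with the convergence $X_n \to X$.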
$\renewcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Q}{\mathbb{Q}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\var}{\text{var}}$
Basic Theory
A backwards martingale is a stochastic process that satisfies the martingale property reversed in time, in a certain sense. In some ways, backward martingales are simpler than their forward counterparts, and in particular, satisfy a convergence theorem similar to the convergence theorem for ordinary martingales. The importance of backward martingales stems from their numerous applications. In particular, some of the fundamental theorems of classical probability can be formulated in terms of backward martingales.
Definitions
As usual, we start with a stochastic process $\bs{Y} = \{Y_t: t \in T\}$ on an underlying probability space $(\Omega, \mathscr{F}, \P)$, having state space $\R$, and where the index set $T$ (representing time) is either $\N$ (discrete time) or $[0, \infty)$ (continuous time). So to review what all this means, $\Omega$ is the sample space, $\mathscr{F}$ the $\sigma$-algebra of events, $\P$ the probability measure on $(\Omega, \mathscr{F})$, and $Y_t$ is a random variable with values in $\R$ for each $t \in T$. But at this point our formulation diverges. Suppose that $\mathscr{G}_t$ is a sub $\sigma$-algebra of $\mathscr{F}$ for each $t \in T$, and that $\mathfrak G = \{\mathscr{G}_t: t \in T\}$ is decreasing so that if $s, \, t \in T$ with $s \le t$ then $\mathscr{G}_t \subseteq \mathscr{G}_s$. Let $\mathscr{G}_\infty = \bigcap_{t \in T} \mathscr{G}_t$. We assume that $Y_t$ is measurable with respect to $\mathscr{G}_t$ and that $\E(|Y_t|) \lt \infty$ for each $t \in T$.
The process $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale (or reversed martingale) with respect to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$ if $\E(Y_s \mid \mathscr{G}_t) = Y_t$ for all $s, \, t \in T$ with $s \le t$.
A backwards martingale can be formulated as an ordinary martingale by using negative times as the indices. Let $T^- = \{-t: t \in T\}$, so that if $T = \N$ (the discrete case) then $T^-$ is the set of non-positive integers, and if $T = [0, \infty)$ (the continuous case) then $T^- = (-\infty, 0]$. Recall also that the standard martingale definitions make sense for any totally ordered index set.
Suppose again that $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale with respect to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$. Let $X_t = Y_{-t}$ and $\mathscr{F}_t = \mathscr{G}_{-t}$ for $t \in T^-$. Then $\bs X = \{X_t: t \in T^-\}$ is a martingale with respect to $\mathfrak F = \{\mathscr{F}_t: t \in T^-\}$.
Proof
Since $\mathfrak G$ is a decreasing family of sub $\sigma$-algebras of $\mathscr{F}$, the collection $\mathfrak F$ is an increasing family of sub $\sigma$-algebras of $\scr F$, and hence is a filtration. Next, $X_t = Y_{-t}$ is measurable with respect to $\mathscr{G}_{-t} = \mathscr{F}_t$ for $t \in T^-$, so $\bs X$ is adapted to $\mathfrak F$. Finally, if $s, \, t \in T^-$ with $s \le t$ then $-t \le -s$ so $\E(X_t \mid \mathscr{F}_s) = \E(Y_{-t} \mid \mathscr{G}_{-s}) = Y_{-s} = X_s$
Most authors define backwards martingales with negative indices, as above, in the first place. There are good reasons for doing so, since some of the fundamental theorems of martingales apply immediately to backwards martingales. However, for the applications of backwards martingales, this notation is artificial and clunky, so for the most part, we will use our original definition. The next result is another way to view a backwards martingale as an ordinary martingale. This one preserves nonnegative time, but introduces a finite time horizon. For $t \in T$, let $T_t = \{s \in T: s \le t\}$, a notation we have used often before.
Suppose again that $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale with respect to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$. Fix $t \in T$ and define $X^t_s = Y_{t-s}$ and $\mathscr{F}^t_s = \mathscr{G}_{t-s}$ for $s \in T_t$. Then $\bs{X}^t = \{X^t_s: s \in T_t\}$ is a martingale relative to $\mathfrak{F}^t = \{\mathscr{F}^t_s: s \in T_t\}$.
Proof
The proof is essentially the same as for the previous result. Since $\mathfrak G$ is a decreasing family of sub $\sigma$-algebras of $\mathscr{F}$, the collection $\mathfrak{F}^t$ is an increasing family of sub $\sigma$-algebras of $\scr F$, and hence is a filtration. Next, $X^t_s = Y_{t-s}$ is measurable with respect to $\mathscr{G}_{t-s} = \mathscr{F}^t_s$ for $s \in T_t$, so $\bs{X}^t$ is adapted to $\mathfrak{F}^t$. Finally, if $r, \, s \in T_t$ with $r \le s$ then $t - s \le t - r$ so $\E(X^t_s \mid \mathscr{F}^t_r) = \E(Y_{t-s} \mid \mathscr{G}_{t-r}) = Y_{t-r} = X^t_r$
Properties
Backwards martingales satisfy a simple and important property.
Suppose that $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale with respect to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$. Then $Y_t = \E(Y_0 \mid \mathscr{G}_t)$ for $t \in T$ and hence $\bs Y$ is uniformly integrable.
Proof
The fact that $Y_t = \E(Y_0 \mid \mathscr{G}_t)$ for $t \in T$ follows directly from the definition of a backwards martingale. Since we have assumed that $\E(|Y_0|) \lt \infty$, it follows from a basic property that $\bs Y$ is uniformly integrable.
Here is the Doob backwards martingale, analogous to the ordinary Doob martingale, and of course named for Joseph Doob. In a sense, this is the converse to the previous result.
Suppose that $Y$ is a random variable on our probability space $(\Omega, \mathscr{F}, \P)$ with $\E(|Y|) \lt \infty$, and that $\mathfrak G = \{\mathscr{G}_t: t \in T\}$ is a decreasing family of sub $\sigma$-algebras of $\mathscr{F}$, as above. Let $Y_t = \E(Y \mid \mathscr{G}_t)$ for $t \in T$. Then $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale with respect to $\mathfrak G$.
Proof
By definition, $Y_t = \E(Y \mid \mathscr{G}_t)$ is measurable with respect to $\mathscr{G}_t$. Also, $\E(|Y_t|) = \E[|\E(Y \mid \mathscr{G}_t)|] \le \E[\E(|Y| \mid \mathscr{G}_t)] = \E(|Y|) \lt \infty, \quad t \in T$ Next, suppose that $s, \, t \in T$ with $s \le t$. Then $\mathscr{G}_t \subseteq \mathscr{G}_s$ so by the tower property of conditional expected value, $\E(Y_s \mid \mathscr{G}_t) = \E[\E(Y \mid \mathscr{G}_s) \mid \mathscr{G}_t] = \E(Y \mid \mathscr{G}_t) = Y_t$
The convergence theorems are the most important results for the applications of backwards martingales. Recall once again that for $k \in [1, \infty)$, the $k$-norm of a real-valued random variable $X$ is $\|X\|_k = \left[\E\left(|X|^k\right)\right]^{1/k}$ and the normed vector space $\mathscr{L}_k$ consists of all $X$ with $\|X\|_k \lt \infty$. Convergence in the space $\mathscr{L}_1$ is also referred to as convergence in mean, and convergence in the space $\mathscr{L}_2$ is referred to as convergence in mean square. Here is the primary backwards martingale convergence theorem:
Suppose again that $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale with respect to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$. Then there exists a random variable $Y_\infty$ such that
1. $Y_t \to Y_\infty$ as $t \to \infty$ with probability 1.
2. $Y_t \to Y_\infty$ as $t \to \infty$ in mean.
3. $Y_\infty = \E(Y_0 \mid \mathscr{G}_\infty)$.
Proof
The proof is essentially the same as the ordinary martingale convergence theorem if we use the martingale constructed from $\bs Y$ above. So, fix $t \in T$ and let $T_t = \{s \in T: s \le t\}$. Let $X^t_s = Y_{t - s}$ and $\mathscr{F}^t_s = \mathscr{G}_{t - s}$ for $s \in T_t$, so that $\bs{X}^t = \{X^t_s: s \in T_t\}$ is a martingale relative to $\mathfrak{F}^t = \{\mathscr{F}^t_s: s \in T_t\}$. Now, for $a, \, b \in \R$ with $a \lt b$, let $U_t(a, b)$ denote the number of up-crossings of $[a, b]$ by $\bs{X}^t$ on $T_t$. Note that $U_t(a, b)$ is also the number of down-crossings of $[a, b]$ by $\bs Y$ on $T_t$. By the up-crossing inequality applied to the martingale $\bs{X}^t$, $\E[U_t(a, b)] \le \frac{1}{b - a}[\E(|X^t_t|) + |a|] = \frac{1}{b - a} [\E(|Y_0|) + |a|]$ Now let $U_\infty(a, b)$ denote the number of down-crossings of $[a, b]$ by $\bs Y$ on all of $T$. Since $U_t \uparrow U_\infty$ as $t \to \infty$ it follows from the monotone convergence theorem that $\E[U_\infty(a, b)] \le \frac{1}{b - a} [\E(|Y_0|) + |a|]$ Hence with probability 1, $U_\infty(a, b) \lt \infty$ for every $a, \, b \in \Q$ with $a \lt b$. By the characterization of convergence in terms of down-crossings (completely analogous to the one for up-crossings), there exists a random variable $Y_{\infty}$ with values in $\R^* = \R \cup \{-\infty, \infty\}$ such that $Y_t \to Y_{\infty}$ as $t \to \infty$. By Fatou's lemma, $\E(|Y_\infty|) \le \liminf_{t \to \infty} \E(|Y_t|) \le \E(|Y_0|) \lt \infty$ In particular, $\P(Y_\infty \in \R) = 1$. Since $\bs Y$ is uniformly integrable, and $Y_\infty \in \mathscr{L}_1$, it follows that $Y_t \to Y_\infty$ as $t \to \infty$ in $\mathscr{L}_1$ also.
It remains to show that $Y_\infty = \E(Y_0 \mid \mathscr{G}_\infty)$. Let $A \in \mathscr{G}_\infty$. Then $A \in \mathscr{G}_t$ for every $t \in T$. Since $Y_t = \E(Y_0 \mid \mathscr{G}_t)$ it follows by definition that $\E(Y_t; A) = \E(Y_0; A)$ for every $t \in T$. Letting $t \to \infty$ and using the dominated convergence theorem, gives $\E(Y_\infty ; A) = \E(Y_0; A)$. Hence $Y_\infty = \E(Y_0 \mid \mathscr{G}_\infty)$.
As a simple extension of the last result, if $Y_0 \in \mathscr{L}_k$ for some $k \in [1, \infty)$ then the convergence is in $\mathscr{L}_k$ also.
Suppose again that $\bs Y = \{Y_t: t \in T\}$ is a backwards martingale relative to $\mathfrak G = \{\mathscr{G}_t: t \in T\}$. If $Y_0 \in \mathscr{L}_k$ for some $k \in [1, \infty)$ then $Y_t \to Y_\infty$ as $t \to \infty$ in $\mathscr{L}_k$.
Proof
The previous result applies, of course, so we know that there exists a random variable $Y_\infty \in \mathscr{L}_1$ such that $Y_t \to Y_\infty$ as $t \to \infty$ with probability 1 and in $\mathscr{L}_1$. The function $x \mapsto |x|^k$ is convex on $\R$ so by Jensen's inequality for conditional expected value, $\E(|Y_t|^k) = \E[|\E(Y_0 \mid \mathscr{G}_t)|^k] \le \E[\E(|Y_0|^k \mid \mathscr{G}_t)] = \E(|Y_0|^k) \lt \infty$ so $Y_t \in \mathscr{L}_k$ for every $t \in T$. By Fatou's lemma, $\E(|Y_\infty|^k) \le \liminf_{t \to \infty} \E(|Y_t|^k) \le \E(|Y_0|^k) \lt \infty$ so $Y_\infty \in \mathscr{L}_k$ also. Next, since $Y_t = \E(Y_0 \mid \mathscr{G}_t)$ and $Y_\infty$ is measurable with respect to $\mathscr{G}_t$ (since $\mathscr{G}_\infty \subseteq \mathscr{G}_t$), we can use Jensen's inequality again to get $|Y_t - Y_\infty|^k = |\E(Y_0 - Y_\infty \mid \mathscr{G}_t)|^k \le \E(|Y_0 - Y_\infty|^k \mid \mathscr{G}_t)$ It follows that the family of random variables $\{|Y_t - Y_\infty|^k: t \in T\}$ is uniformly integrable, and hence $\E(|Y_t - Y_\infty|^k) \to 0$ as $t \to \infty$.
Applications
The Strong Law of Large Numbers
The strong law of large numbers is one of the fundamental theorems of classical probability. Our previous proof required that the underlying distribution have finite variance. Here we present an elegant proof using backwards martingales that does not require this extra assumption. So, suppose that $\bs X = \{X_n: n \in \N_+\}$ is a sequence of independent, identically distributed random variables with common mean $\mu \in \R$. In statistical terms, $\bs X$ corresponds to sampling from the underlying distribution. Next let $Y_n = \sum_{i=1}^n X_i, \quad n \in \N$ so that $\bs Y = \{Y_n: n \in \N\}$ is the partial sum process associated with $\bs X$. Recall that the sequence $\bs Y$ is also a discrete-time random walk. Finally, let $M_n = Y_n / n$ for $n \in \N_+$ so that $\bs M = \{M_n: n \in \N_+\}$ is the sequence of sample means.
The law of large numbers
1. $M_n \to \mu$ as $n \to \infty$ with probability 1.
2. $M_n \to \mu$ as $n \to \infty$ in mean.
Proof
As usual, let $(\Omega, \mathscr{F}, \P)$ denote the underlying probability space. Also, equalities involving random variables (and particularly conditional expected values) are assumed to hold with probability 1. Now, for $n \in \N$, let $\mathscr{G}_n = \sigma\{Y_n, Y_{n+1}, Y_{n+2}, \ldots\} = \sigma\{Y_n, X_{n+1}, X_{n+2}, \ldots\}$ so that $\mathfrak G = \{\mathscr{G}_n: n \in \N\}$ is a decreasing family of sub $\sigma$-algebras of $\mathscr{F}$. The core of the proof is to show that $\bs M$ is a backwards martingale relative to $\mathfrak G$. Let $n \in \N_+$. Clearly $M_n$ is measurable with respect to $\mathscr{G}_n$. By independence, $\E(X_i \mid \mathscr{G}_n) = \E(X_i \mid Y_n)$ for $i \in \{1, 2, \ldots, n\}$. By symmetry (the sequence $\bs X$ is exchangeable), $\E(X_i \mid Y_n) = \E(X_j \mid Y_n)$ for $i, \, j \in \{1, 2, \ldots, n\}$. Hence for $i \in \{1, 2, \ldots, n\}$ $Y_n = \E(Y_n \mid \mathscr{G}_n) = \sum_{j=1}^n \E(X_j \mid \mathscr{G}_n) = \sum_{j=1}^n \E(X_i \mid \mathscr{G}_n) = n \E(X_i \mid \mathscr{G}_n)$ so that $\E(X_i \mid \mathscr{G}_n) = Y_n / n = M_n$ for each $i \in \{1, 2, \ldots, n\}$. Next, $\E(Y_n \mid \mathscr{G}_{n+1}) = \E(Y_{n+1} - X_{n+1} \mid \mathscr{G}_{n+1}) = Y_{n+1} - \E(X_{n+1} \mid \mathscr{G}_{n+1}) = Y_{n+1} - \frac{1}{n+1} Y_{n+1} = \frac{n}{n + 1} Y_{n+1}$ Dividing by $n$ gives $\E(M_n \mid \mathscr{G}_{n+1}) = M_{n+1}$ and hence $\bs M$ is a backwards martingale with respect to $\mathfrak G$. From the backwards martingale convergence theorem, there exists $M_\infty$ such that $M_n \to M_\infty$ as $n \to \infty$ with probability 1 and in mean. Next, for $n, \, k \in \N_+$ simple algebra gives $M_{n+k} = \frac{1}{n + k} \sum_{i=1}^k X_i + \frac{n}{n + k} \frac{1}{n} \sum_{i = k + 1}^{k + n} X_i$ Letting $n \to \infty$ then shows that $M_\infty = \lim_{n \to \infty} \frac{1}{n} \sum_{i = k + 1}^{k + n} X_i$ for every $k \in \N_+$. Hence $M_\infty$ is a tail random variable for the IID sequence $\bs X$. From the Kolmogorov 0-1 law, $M_\infty$ must be a constant. Finally, convergence in mean implies that the means converge, and since $\E(M_n) = \mu$ for each $n$, it follows that $M_\infty = \mu$.
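The point of the martingale proof is that only a finite mean is required. As a quick numerical illustration (a sketch in Python with NumPy; the distribution and sample sizes are our own choices), the following simulates sample means from a Pareto-type distribution with shape parameter $a = 1.5$, which has finite mean $1 / (a - 1) = 2$ but infinite variance, so the classical mean-square argument does not apply:

```python
import numpy as np

rng = np.random.default_rng(3)

# Lomax (Pareto II) samples with shape a = 1.5: finite mean 1/(a - 1) = 2,
# but infinite variance, so the L2 proof of the SLLN is unavailable.
a, n = 1.5, 10**6
x = rng.pareto(a, size=n)
m = np.cumsum(x) / np.arange(1, n + 1)   # sample means M_1, M_2, ..., M_n

for k in [10**2, 10**4, 10**6]:
    print(f"M_{k} = {m[k - 1]:.4f}   (mean = {1 / (a - 1):.4f})")
```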
Exchangeable Variables
We start with a probability space $(\Omega, \mathscr F, \P)$ and another measurable space $(S, \mathscr S)$. Suppose that $\bs X = (X_1, X_2, \ldots)$ is a sequence of random variables each taking values in $S$. Recall that $\bs X$ is exchangeable if for every $n \in \N$, every permutation of $(X_1, X_2, \ldots, X_n)$ has the same distribution on $(S^n, \mathscr{S}^n)$ (where $\mathscr{S}^n$ is the $n$-fold product $\sigma$-algebra). Clearly if $\bs X$ is a sequence of independent, identically distributed variables, then $\bs X$ is exchangeable. Conversely, if $\bs X$ is exchangeable then the variables are identically distributed (by definition), but are not necessarily independent. The most famous example of a sequence that is exchangeable but not independent is Pólya's urn process, named for George Pólya. On the other hand, conditionally independent and identically distributed sequences are exchangeable. Thus suppose that $(T, \mathscr{T})$ is another measurable space and that $\Theta$ is a random variable taking values in $T$.
If $\bs X$ is conditionally independent and identically distributed given $\Theta$, then $\bs X$ is exchangeable.
Proof
Implicit in the statement is that the variables in the sequence have a regular conditional distribution $\mu_\Theta$ given $\Theta$. Then for every $n \in \N_+$, the conditional distribution of every permutation of $(X_1, X_2, \ldots, X_n)$, given $\Theta$, is $\mu^n_\Theta$ on $(S^n, \mathscr{S}^n)$, where $\mu_\Theta^n$ is the $n$-fold product measure. Unconditionally, the distribution of any permutation is $B \mapsto \E[\mu^n_\Theta(B)]$ for $B \in \mathscr{S}^n$.
Often the setting of this theorem arises when we start with a sequence of independent, identically distributed random variables that are governed by a parametric distribution, and then randomize one of the parameters. In a sense, we can always think of the setting in this way: imagine that $\theta \in T$ is a parameter for a distribution on $S$. A special case is the beta-Bernoulli process, in which the success parameter $p$ in a sequence of Bernoulli trials is randomized with the beta distribution. On the other hand, Pólya's urn process is an example of an exchangeable sequence that does not at first seem to have anything to do with randomizing parameters. But in fact, we know that Pólya's urn process is a special case of the beta-Bernoulli process. This connection gives a hint of de Finetti's theorem, named for Bruno de Finetti, which we consider next. This theorem states that any exchangeable sequence of indicator random variables corresponds to randomizing the success parameter in a sequence of Bernoulli trials.
de Finetti's Theorem. Suppose that $\bs X = (X_1, X_2, \ldots)$ is an exchangeable sequence of random variables, each taking values in $\{0, 1\}$. Then there exists a random variable $P$ with values in $[0, 1]$, such that given $P = p \in [0, 1]$, $\bs X$ is a sequence of Bernoulli trials with success parameter $p$.
Proof
As usual, we need some notation. First recall the falling power notation $r^{(j)} = r (r - 1) \cdots (r - j + 1)$ for $r \in \R$ and $j \in \N$. Next for $n \in \N_+$ and $k \in \{0, 1, \ldots, n\}$, let $B^n_k = \left\{(x_1, x_2, \ldots, x_n) \in \{0, 1\}^n: \sum_{i=1}^n x_i = k\right\}$ That is, $B^n_k$ is the set of bit strings of length $n$ with 1 occurring exactly $k$ times. Of course, $\#(B^n_k) = \binom{n}{k} = n^{(k)} / k!$.
Suppose now that $\bs X = (X_1, X_2, \ldots)$ is an exchangeable sequence of variables with values in $\{0, 1\}$. For $n \in \N_+$ let $Y_n = \sum_{i=1}^n X_i$ and $M_n = Y_n / n$. So $\bs Y = \{Y_n: n \in \N_+\}$ is the partial sum process associated with $\bs X$ and $\bs M = \{M_n: n \in \N_+\}$ the sequence of sample means. Let $\mathscr{G}_n = \sigma\{Y_n, Y_{n+1}, \ldots\}$ and $\mathscr{G}_\infty = \bigcap_{n=1}^\infty \mathscr{G}_n$. The family of $\sigma$-algebras $\mathfrak G = \{\mathscr{G}_n: n \in \N_+\}$ is decreasing. The key to the proof is to find two backwards martingales and use the backwards martingale convergence theorem.
Let $m \in \N_+$ and $k \in \{0, 1, \ldots, m\}$. The crucial insight is that by exchangeability, given $Y_m = k$, the random vector $(X_1, X_2, \ldots, X_m)$ is uniformly distributed on $B^m_k$. So if $n \in \N_+$ and $n \le m$, the random vector $(X_1, X_2, \ldots, X_n)$, again given $Y_m = k$, fits the hypergeometric model: a sample of size $n$ chosen at random and without replacement from a population of $m$ objects of which $k$ are type 1 and $m - k$ are type 0. Thus, if $j \in \{0, 1, \ldots, n\}$ and $(x_1, x_2, \ldots, x_n) \in B^n_j$ then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid Y_m = k) = \frac{k^{(j)} (m - k)^{(n - j)}}{m^{(n)}}$ Equivalently, $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid Y_m) = \frac{Y_m^{(j)} (m - Y_m)^{(n - j)}}{m^{(n)}}$ Given $Y_m$, the variables $(Y_{m+1}, Y_{m+2}, \ldots)$ give no additional information about the distribution of $(X_1, X_2, \ldots, X_n)$ and hence $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid \mathscr{G}_m) = \E[\bs{1}(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) \mid \mathscr{G}_m] = \frac{Y_m^{(j)} (m - Y_m)^{(n - j)}}{m^{(n)}}$ For fixed $n$, $j$, and $(x_1, x_2, \ldots, x_n) \in B^n_j$, the conditional expected value in the middle of the displayed equation, as a function of $m$, is a Doob backwards martingale with respect to $\mathfrak G$ and hence converges to $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid \mathscr{G}_\infty)$ as $m \to \infty$.
Next we show that $\bs M$ is a backwards martingale with respect to $\mathfrak G$. Trivially $M_n$ is measurable with respect to $\mathscr{G}_n$ and $\E(M_n) \le 1$ for each $n \in \N_+$. Thus we need to show that $\E(M_n \mid \mathscr{G}_m) = M_m$ for $m, \, n \in \N_+$ with $n \le m$. From our previous work with $(X_1, X_2, \ldots, X_n)$ we know that the conditional distribution of $Y_n$ given $Y_m = k$ is hypergeometric with parameters $m$, $k$, and $n$: $\P(Y_n = j \mid Y_m = k) = \binom{n}{j} \frac{k^{(j)} (m - k)^{(n - j)}}{m^{(n)}}, \quad j \in \{0, 1, \ldots, n\}$ Recall that the mean of the hypergeometric distribution is the sample size times the proportion of type 1 objects in the population. Thus, $\E(M_n \mid Y_m = k) = \frac{1}{n} \E(Y_n \mid Y_m = k) = \frac{1}{n} n \frac{k}{m} = \frac{k}{m}$ Or equivalently, $\E(M_n \mid Y_m) = Y_m / m = M_m$. Once again, given $Y_m$, the variables $Y_{m+1}, Y_{m+2}, \ldots$ give no additional information and so $\E(M_n \mid \mathscr{G}_m) = M_m$. Hence $\bs M$ is a backwards martingale with respect to $\mathfrak G$. From the backwards martingale convergence theorem, there exists a random variable $P$ such that $M_n \to P$ as $n \to \infty$ with probability 1.
It just remains to connect the dots. Suppose now that $n \in \N_+$ and $j \in \{0, 1, \ldots, n\}$ and that $m \in \N_+$ and $k_m \in \{0, 1, \ldots, m\}$. From simple calculus, if $n$ and $j$ are fixed and $k_m / m \to p \in [0, 1]$ as $m \to \infty$ then $\frac{k_m^{(j)} (m - k_m)^{(n - j)}}{m^{(n)}} \to p^j (1 - p)^{n - j} \text{ as } m \to \infty$ (You may recall that this computation is used in the proof of the convergence of the hypergeometric distribution to the binomial.) Returning to the joint distribution, recall that if $(x_1, x_2, \ldots, x_n) \in B^n_j$ then $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid \mathscr{G}_m) = \frac{Y_m^{(j)} (m - Y_m)^{(n - j)}}{m^{(n)}}$ Let $m \to \infty$. Since $Y_m / m \to P$ as $m \to \infty$ we get $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid \mathscr{G}_\infty) = P^j (1 - P)^{n - j}$ The random variable $P$ is measurable with respect to $\mathscr{G}_\infty$, so $\sigma(P) \subseteq \mathscr{G}_\infty$ and the tower property gives $\P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n \mid P) = P^j (1 - P)^{n - j}$ That is, given $P = p \in [0, 1]$, $\bs X$ is a sequence of Bernoulli trials with success parameter $p$.
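For a numerical illustration of the theorem via the beta-Bernoulli process (a sketch; the beta parameters and sample sizes are arbitrary choices of ours), the following draws a hidden parameter $P$ from a beta distribution, runs Bernoulli trials given $P$, and checks that the sample mean $M_n$ recovers $P$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Beta-Bernoulli process: randomize the success parameter with a beta
# distribution, then run Bernoulli trials given P. By de Finetti's strong
# law, the sample mean M_n converges to the hidden parameter P.
n = 10**5
for _ in range(5):
    p = rng.beta(2, 3)              # hidden random parameter P
    x = rng.random(n) < p           # Bernoulli trials given P = p
    print(f"P = {p:.4f},  M_n = {x.mean():.4f}")
```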
De Finetti's theorem has been extended to much more general sequences of exchangeable variables. Basically, if $\bs X = (X_1, X_2, \ldots)$ is an exchangeable sequence of random variables, each taking values in a sufficiently nice measurable space $(S, \mathscr{S})$, then there exists a random variable $\Theta$ such that $\bs X$ is independent and identically distributed given $\Theta$. In the proof, the result that $M_n \to P$ as $n \to \infty$ with probability 1, where $M_n = \frac{1}{n} \sum_{i=1}^n X_i$, is known as de Finetti's strong law of large numbers. De Finetti's theorem and its generalizations are important in Bayesian statistical inference. For an exchangeable sequence of random variables (our observations in a statistical experiment), there is a hidden, random parameter $\Theta$. Given $\Theta = \theta$, the variables are independent and identically distributed. We gain information about $\Theta$ by imposing a prior distribution on $\Theta$ and then updating this, based on our observations and using Bayes' theorem, to a posterior distribution.
Stated more in terms of distributions, de Finetti's theorem states that the distribution of $n$ distinct variables in the exchangeable sequence is a mixture of product measures. That is, if $\mu_\theta$ is the distribution of a generic $X$ on $(S, \mathscr{S})$ given $\Theta = \theta$, and $\nu$ is the distribution of $\Theta$ on $(T, \mathscr{T})$, then the distribution of $n$ of the variables on $(S^n, \mathscr{S}^n)$ is $B \mapsto \int_T \mu_\theta^n(B) \, d\nu(\theta)$
Brownian motion is a stochastic process of great theoretical importance, and as the basic building block of a variety of other processes, of great practical importance as well. In this chapter we study Brownian motion and a number of random processes that can be constructed from Brownian motion. We also study the Ito stochastic integral and the resulting calculus, as well as two remarkable representation theorems involving stochastic integrals.
18: Brownian Motion
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\D}{\mathbb{D}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
History
In 1827, the botanist Robert Brown noticed that tiny particles from pollen, when suspended in water, exhibited continuous but very jittery and erratic motion. In his miracle year of 1905, Albert Einstein explained the behavior physically, showing that the particles were constantly being bombarded by the molecules of the water, and thus helping to firmly establish the atomic theory of matter. Brownian motion as a mathematical random process was first constructed in a rigorous way by Norbert Wiener in a series of papers starting in 1918. For this reason, the Brownian motion process is also known as the Wiener process.
Run the two-dimensional Brownian motion simulation several times in single-step mode to get an idea of what Mr. Brown may have observed under his microscope.
Along with the Bernoulli trials process and the Poisson process, the Brownian motion process is of central importance in probability. Each of these processes is based on a set of idealized assumptions that lead to a rich mathematical theory. In each case also, the process is used as a building block for a number of related random processes that are of great importance in a variety of applications. In particular, Brownian motion and related processes are used in applications ranging from physics to statistics to economics.
Definition
A standard Brownian motion is a random process $\bs{X} = \{X_t: t \in [0, \infty)\}$ with state space $\R$ that satisfies the following properties:
1. $X_0 = 0$ (with probability 1).
2. $\bs{X}$ has stationary increments. That is, for $s, \; t \in [0, \infty)$ with $s \lt t$, the distribution of $X_t - X_s$ is the same as the distribution of $X_{t - s}$.
3. $\bs{X}$ has independent increments. That is, for $t_1, t_2, \ldots, t_n \in [0, \infty)$ with $t_1 \lt t_2 \lt \cdots \lt t_n$, the random variables $X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}$ are independent.
4. $X_t$ is normally distributed with mean 0 and variance $t$ for each $t \in (0, \infty)$.
5. With probability 1, $t \mapsto X_t$ is continuous on $[0, \infty)$.
To understand the assumptions physically, let's take them one at a time.
1. Suppose that we measure the position of a Brownian particle in one dimension, starting at an arbitrary time which we designate as $t = 0$, with the initial position designated as $x = 0$. Then this assumption is satisfied by convention. Indeed, occasionally, it's convenient to relax this assumption and allow $X_0$ to have other values.
2. This is a statement of time homogeneity: the underlying dynamics (namely the jostling of the particle by the molecules of water) do not change over time, so the distribution of the displacement of the particle in a time interval $[s, t]$ depends only on the length of the time interval.
3. This is an idealized assumption that would hold approximately if the time intervals are large compared to the tiny times between collisions of the particle with the molecules.
4. This is another idealized assumption based on the central limit theorem: the position of the particle at time $t$ is the result of a very large number of collisions, each making a very small contribution. The fact that the mean is 0 is a statement of spatial homogeneity: the particle is no more or less likely to be jostled to the right than to the left. Next, recall that the assumptions of stationary, independent increments mean that $\var(X_t) = \sigma^2 t$ for some positive constant $\sigma^2$. By a change in time scale, we can assume $\sigma^2 = 1$, although we will consider more general Brownian motions in the next section.
5. Finally, the continuity of the sample paths is an essential assumption, since we are modeling the position of a physical particle as a function of time.
Of course, the first question we should ask is whether there exists a stochastic process satisfying the definition. Fortunately, the answer is yes, although the proof is complicated.
There exists a probability space $(\Omega, \mathscr{F}, \P)$ and a stochastic process $\bs{X} = \{X_t: t \in [0, \infty)\}$ on this probability space satisfying the assumptions in the definition.
Proof sketch
The assumptions in the definition lead to a consistent set of finite dimensional distributions (which are given below). Thus by the Kolmogorov existence theorem, there exists a stochastic process $\bs{U} = \{U_t: t \in [0, \infty)\}$ that has these finite dimensional distributions. However, $\bs{U}$ need not have continuous sample paths, but we can construct from $\bs{U}$ an equivalent process that does have continuous sample paths.
First recall that a binary rational (or dyadic rational) in $[0, \infty)$ is a number of the form $k / 2^n$ where $k, \, n \in \N$. Let $\D_+$ denote the set of all binary rationals in $[0, \infty)$, and recall that $\D_+$ is countable but also dense in $[0, \infty)$ (that is, if $t \in [0, \infty) \setminus \D_+$ then there exists $t_n \in \D_+$ for $n \in \N_+$ such that $t_n \to t$ as $n \to \infty$).
Now, for $n \in \N_+$, let $X_n(t) = U_t$ if $t$ is a binary rational of the form $k \big/ 2^n$ for some $k \in \N$. If $t$ is not such a binary rational, define $X_n(t)$ by linear interpolation between the closest binary rationals of this form on either side of $t$. Then $X_n(t) \to U_t$ as $n \to \infty$ for every $t \in \D_+$, and with probability 1, the convergence is uniform on $\D_+ \cap [0, T]$ for each $T \gt 0$. It then follows that with probability 1, $\bs{U}$ is uniformly continuous on $\D_+ \cap [0, T]$ for each $T \gt 0$.
For the last step, let $X_t = \lim_{s \to t, \; s \in \D_+} U_s$ for $t \in [0, \infty)$. The limit exists since $\bs{U}$ is continuous on $\D_+$ with probability 1. The process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is continuous on $[0, \infty)$ with probability 1, and has the same finite dimensional distributions as $\bs{U}$.
Run the simulation of the standard Brownian motion process a few times in single-step mode. Note the qualitative behavior of the sample paths. Run the simulation 1000 times and compare the empirical density function and moments of $X_t$ to the true probability density function and moments.
Brownian Motion as a Limit of Random Walks
Clearly the underlying dynamics of the Brownian particle being knocked about by molecules suggests a random walk as a possible model, but with tiny time steps and tiny spatial jumps. Let $\bs{X} = (X_0, X_1, X_2, \ldots)$ be the symmetric simple random walk. Thus, $X_n = \sum_{i=1}^n U_i$ where $\bs{U} = (U_1, U_2, \ldots)$ is a sequence of independent variables with $\P(U_i = 1) = \P(U_i = -1) = \frac{1}{2}$ for each $i \in \N_+$. Recall that $\E(X_n) = 0$ and $\var(X_n) = n$ for $n \in \N$. Also, since $\bs{X}$ is the partial sum process associated with an IID sequence, $\bs{X}$ has stationary, independent increments (but of course in discrete time). Finally, recall that by the central limit theorem, $X_n \big/ \sqrt{n}$ converges to the standard normal distribution as $n \to \infty$. Now, for $h, d \in (0, \infty)$ the continuous time process $\bs{X}_{h, d} = \left\{d X_{\lfloor t / h \rfloor}: t \in [0, \infty) \right\}$ is a jump process with jumps at $\{0, h, 2 h, \ldots\}$ and with jumps of size $\pm d$. Basically we would like to let $h \downarrow 0$ and $d \downarrow 0$, but this cannot be done arbitrarily. Note that $\E\left[X_{h, d}(t)\right] = 0$ but $\var\left[X_{h,d}(t)\right] = d^2 \lfloor t / h \rfloor$. Thus, by the central limit theorem, if we take $d = \sqrt{h}$ then the distribution of $X_{h, d}(t)$ will converge to the normal distribution with mean 0 and variance $t$ as $h \downarrow 0$. More generally, we might hope that all of requirements in the definition are satisfied by the limiting process, and if so, we have a standard Brownian motion.
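The scaling relation $d = \sqrt{h}$ is easy to check numerically. In the following sketch (Python with NumPy; the function name and sample sizes are our own choices for illustration), the variance of the scaled walk at time $t = 1$ approaches $t = 1$ as $h \downarrow 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_walk_endpoint(t, h, size, rng):
    """Position at time t of the symmetric random walk with time step h
    and spatial step d = sqrt(h)."""
    steps = rng.choice([-1.0, 1.0], size=(size, int(t / h)))
    return np.sqrt(h) * steps.sum(axis=1)

for h in [0.1, 0.01, 0.001]:
    x = scaled_walk_endpoint(1.0, h, size=10**4, rng=rng)
    print(f"h = {h}: variance at t = 1 is {x.var():.3f} (target 1)")
```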
Run the simulation of the random walk process for increasing values of $n$. In particular, run the simulation several times with $n = 100$. Compare the qualitative behavior with the standard Brownian motion process. Note that the scaling of the random walk in time and space is effectively accomplished by scaling the horizontal and vertical axes in the graph window.
Finite Dimensional Distributions
Let $\bs{X} = \{X_t: t \in [0, \infty)\}$ be a standard Brownian motion. It follows from part (d) of the definition that $X_t$ has probability density function $f_t$ given by $f_t(x) = \frac{1}{\sqrt{2 \pi t}} \exp\left(-\frac{x^2}{2 t}\right), \quad x \in \R$ This family of density functions determines the finite dimensional distributions of $\bs{X}$.
If $t_1, t_2, \ldots, t_n \in (0, \infty)$ with $0 \lt t_1 \lt t_2 \cdots \lt t_n$ then $(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ has probability density function $f_{t_1, t_2, \ldots, t_n}$ given by
$f_{t_1, t_2, \ldots, t_n}(x_1, x_2, \ldots, x_n) = f_{t_1}(x_1) f_{t_2 - t_1}(x_2 - x_1) \cdots f_{t_n - t_{n-1}}(x_n - x_{n-1}), \quad (x_1, x_2, \ldots, x_n) \in \R^n$
Proof
This follows because $\bs{X}$ has stationary, independent increments.
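The factorization into independent increments is also the standard way to sample $(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ in practice. The following sketch (our own helper, for illustration) does this and estimates the covariance matrix, anticipating the covariance function $\min\{s, t\}$ in the next theorem:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bm(times, size, rng):
    """Sample (X_{t_1}, ..., X_{t_n}) via independent normal increments
    with variances t_1, t_2 - t_1, ..., t_n - t_{n-1}."""
    t = np.asarray(times, dtype=float)
    dt = np.diff(np.concatenate(([0.0], t)))
    incs = rng.normal(0.0, np.sqrt(dt), size=(size, len(t)))
    return np.cumsum(incs, axis=1)

X = sample_bm([0.5, 1.0, 2.0], size=10**5, rng=rng)
print(np.cov(X.T))   # close to [[0.5, 0.5, 0.5], [0.5, 1, 1], [0.5, 1, 2]]
```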
$\bs{X}$ is a Gaussian process with mean function $m(t) = 0$ for $t \in [0, \infty)$ and covariance function $c(s, t) = \min\{s, t\}$ for $s, t \in [0, \infty)$.
Proof
The fact that $\bs{X}$ is a Gaussian process follows because $X_t$ is normally distributed for each $t \in [0, \infty)$ and $\bs{X}$ has stationary, independent increments. The mean function is 0 by assumption. For the covariance function, suppose $s, \, t \in [0, \infty)$ with $s \le t$. Since $X_s$ and $X_t - X_s$ are independent, we have $\cov(X_s, X_t) = \cov\left[X_s, X_s + (X_t - X_s)\right] = \var(X_s) + 0 = s$
Recall that for a Gaussian process, the finite dimensional (multivariate normal) distributions are completely determined by the mean function $m$ and the covariance function $c$. Thus, it follows that a standard Brownian motion is characterized as a continuous Gaussian process with the mean and covariance functions in the last theorem. Note also that $\cor(X_s, X_t) = \frac{\min\{s, t\}}{\sqrt{s t}} = \sqrt{\frac{\min\{s, t\}}{\max\{s, t\}}} ,\quad (s, t) \in [0, \infty)^2$ We can also give the higher moments and the moment generating function for $X_t$.
For $n \in \N$ and $t \in [0, \infty)$,
1. $\E\left(X_t^{2n}\right) = 1 \cdot 3 \cdots (2 n - 1) t^n = (2 n)! t^n \big/ (n! 2^n)$
2. $\E\left(X_t^{2n + 1}\right) = 0$
Proof
These moments follow from standard results, since $X_t$ is normally distributed with mean 0 and variance $t$.
For $t \in [0, \infty)$, $X_t$ has moment generating function given by $\E\left(e^{u X_t}\right) = e^{t u^2 / 2}, \quad u \in \R$
Proof
Again, this is a standard result for the normal distribution.
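A one-line Monte Carlo check of the moment generating function (an illustrative sketch; the values of $t$ and $u$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

t, u = 2.0, 0.7
x = rng.normal(0.0, np.sqrt(t), size=10**6)        # samples of X_t
print(np.exp(u * x).mean(), np.exp(t * u**2 / 2))  # the two should agree closely
```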
Simple Transformations
There are several simple transformations that preserve standard Brownian motion and will give us insight into some of its properties. As usual, our starting place is a standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$. Our first result is that reflecting the paths of $\bs{X}$ in the line $x = 0$ gives another standard Brownian motion.
Let $Y_t = -X_t$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \ge 0\}$ is also a standard Brownian motion.
Proof
Clearly the new process is still a Gaussian process, with mean function $\E(-X_t) = -\E(X_t) = 0$ for $t \in [0, \infty)$ and covariance function $\cov(-X_s, -X_t) = \cov(X_s, X_t) = \min\{s, t\}$ for $(s, t) \in [0, \infty)^2$. Finally, since $\bs{X}$ is continuous, so is $\bs{Y}$.
Our next result is related to the Markov property, which we explore in more detail below. If we restart Brownian motion at a fixed time $s$, and shift the origin to $X_s$, then we have another standard Brownian motion. This means that Brownian motion is both temporally and spatially homogeneous.
Fix $s \in [0, \infty)$ and define $Y_t = X_{s + t} - X_s$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is also a standard Brownian motion.
Proof
Since $\bs{X}$ has stationary, independent increments, the process $\bs{Y}$ is equivalent in distribution to $\bs{X}$. Clearly also $\bs{Y}$ is continuous since $\bs{X}$ is.
Our next result is a simple time reversal, but to state this result, we need to restrict the time parameter to a bounded interval of the form $[0, T]$ where $T \gt 0$. The upper endpoint $T$ is sometimes referred to as a finite time horizon. Note that $\{X_t: t \in [0, T]\}$ still satisfies the definition, but with the time parameters restricted to $[0, T]$.
Define $Y_t = X_{T - t} - X_T$ for $0 \le t \le T$. Then $\bs{Y} = \left\{Y_t: t \in [0, T]\right\}$ is also a standard Brownian motion on $[0, T]$.
Proof
$\bs{Y}$ is a Gaussian process, since a finite, linear combination of variables from this process reduces to a finite, linear combination of variables from $\bs{X}$. Next, $\E(Y_t) = \E(X_{T - t}) - \E(X_T) = 0$. If $s, \; t \in [0, T]$ with $s \le t$ then \begin{align} \cov(Y_s, Y_t) & = \cov(X_{T - s} - X_T, X_{T-t} - X_T) = \cov(X_{T-s}, X_{T-t}) - \cov(X_{T-s}, X_T) - \cov(X_T, X_{T-t}) + \cov(X_T, X_T) \ & = (T - t) - (T - s) - (T - t) + T = s \end{align} Finally, $t \mapsto Y_t$ is continuous on $[0, T]$ with probability 1, since $t \mapsto X_t$ is continuous on $[0, T]$ with probability 1.
Our next transformation involves scaling $\bs{X}$ both temporally and spatially, and is known as self-similarity.
Let $a \gt 0$ and define $Y_t = \frac{1}{a} X_{a ^2 t}$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is also a standard Brownian motion.
Proof
Once again, $\bs{Y}$ is a Gaussian process, since finite, linear combinations of variables in $\bs{Y}$ reduce to finite, linear combinations of variables in $\bs{X}$. Next, $\E(Y_t) = \frac{1}{a} \E(X_{a^2 t}) = 0$ for $t \gt 0$, and for $s, \, t \gt 0$ with $s \lt t$, $\cov(Y_s, Y_t) = \cov\left(\frac{1}{a} X_{a^2 s}, \frac{1}{a} X_{a^2 t}\right) = \frac{1}{a^2} \cov\left(X_{a^2 s}, X_{a^2 t}\right) = \frac{1}{a^2} a^2 s = s$ Finally $\bs{Y}$ is a continuous process since $\bs{X}$ is continuous.
Note that the graph of $\bs{Y}$ can be obtained from the graph of $\bs{X}$ by scaling the time axis $t$ by a factor of $a^2$ and scaling the spatial axis $x$ by a factor of $a$. The fact that the temporal scale factor must be the square of the spatial scale factor is clearly related to Brownian motion as the limit of random walks. Note also that this transformation amounts to zooming in or out of the graph of $\bs{X}$ and hence Brownian motion has a self-similar, fractal quality, since the graph is unchanged by this transformation. This also suggests that, although continuous, $t \mapsto X_t$ is highly irregular. We consider this in the next subsection.
Our final transformation is referred to as time inversion.
Let $Y_0 = 0$ and $Y_t = t X_{1/t}$ for $t \gt 0$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is also a standard Brownian motion.
Proof
Clearly $\bs{Y}$ is a Gaussian process, since finite, linear combinations of variables in $\bs{Y}$ reduce to finite, linear combinations of variables in $\bs{X}$. Next, $\E(Y_t) = t \E(X_{1/t}) = 0$ for $t \gt 0$, and for $s, \, t \gt 0$ with $s \lt t$, $\cov\left(Y_s, Y_t\right) = \cov\left(s X_{1/s}, t X_{1/t}\right) = s t \, \cov\left(X_{1/s}, X_{1/t}\right) = s t \frac{1}{t} = s$ Since $t \mapsto X_t$ is continuous on $[0, \infty)$ with probability 1, $t \mapsto Y_t$ is continuous on $(0, \infty)$ with probability 1. Thus, all that remains is to show continuity at $t = 0$. Thus we need to show that with probability 1, $t X_{1/t} \to 0$ as $t \downarrow 0$, or equivalently, $X_s / s \to 0$ as $s \uparrow \infty$. But this last statement holds by the law of the iterated logarithm, given below.
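Time inversion can also be checked empirically. In the sketch below (ours; the times are arbitrary choices), we sample $\bs{X}$ jointly at the times $1/t$ and verify that $Y_t = t X_{1/t}$ has covariance $\min\{s, t\}$:

```python
import numpy as np

rng = np.random.default_rng(10)

# Y_t = t * X_{1/t} for t in {0.5, 1, 2} requires X at times {2, 1, 0.5}.
t = np.array([0.5, 1.0, 2.0])
s = np.sort(1 / t)                        # times 0.5, 1, 2 at which to sample X
dt = np.diff(np.concatenate(([0.0], s)))
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(10**5, 3)), axis=1)
y = t * x[:, ::-1]                        # reverse column order to get X_{1/t}
print(np.cov(y.T))   # close to min(s, t): [[0.5, 0.5, 0.5], [0.5, 1, 1], [0.5, 1, 2]]
```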
Irregularity
The defining properties suggest that standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$ cannot be a smooth, differentiable function. Consider the usual difference quotient at $t$, $\frac{X_{t+h} - X_t}{h}$ By the stationary increments property, if $h \gt 0$, the numerator has the same distribution as $X_h$, while if $h \lt 0$, the numerator has the same distribution as $-X_{-h}$, which in turn has the same distribution as $X_{-h}$. So, in both cases, the difference quotient has the same distribution as $X_{\left|h\right|} \big/ h$, and this variable has the normal distribution with mean 0 and variance $\left|h\right| \big/ h^2 = 1 \big/ \left|h\right|$. So the variance of the difference quotient diverges to $\infty$ as $h \to 0$, and hence the difference quotient does not even converge in distribution, the weakest form of convergence.
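The blow-up of the difference quotient is easy to see by simulation; in the sketch below (ours), the increment $X_{t+h} - X_t$ is sampled directly from its $N(0, h)$ distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

# The difference quotient (X_{t+h} - X_t) / h has the N(0, 1/h) distribution
# for h > 0, so its variance diverges as h -> 0.
for h in [0.1, 0.01, 0.001]:
    dq = rng.normal(0.0, np.sqrt(h), size=10**5) / h
    print(f"h = {h}: sample variance {dq.var():.0f}, 1/h = {1 / h:.0f}")
```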
The temporal-spatial transformation above also suggests that Brownian motion cannot be differentiable. The intuitive meaning of differentiable at $t$ is that the function is locally linear at $t$: as we zoom in, the graph near $t$ begins to look like a line (whose slope, of course, is the derivative). But as we zoom in on Brownian motion (in the sense of the transformation), it always looks the same, and in particular, just as jagged. More formally, if $\bs{X}$ is differentiable at $t$, then so is the transformed process $\bs{Y}$, and the chain rule gives $Y^\prime(t) = a X^\prime(a^2 t)$. But $\bs{Y}$ is also a standard Brownian motion for every $a \gt 0$, so something is clearly wrong. While not rigorous, these examples are motivation for the following theorem:
With probability 1, $\bs{X}$ is nowhere differentiable on $[0, \infty)$.
Run the simulation of the standard Brownian motion process. Note the continuity but very jagged quality of the sample paths. Of course, the simulation cannot really capture Brownian motion with complete fidelity.
The following theorems give a more precise measure of the irregularity of standard Brownian motion.
Standard Brownian motion $\bs{X}$ has Hölder exponent $\frac{1}{2}$. That is, $\bs{X}$ is Hölder continuous with exponent $\alpha$ for every $\alpha \lt \frac{1}{2}$, but is not Hölder continuous with exponent $\alpha$ for any $\alpha \gt \frac{1}{2}$.
In particular, $\bs{X}$ is not Lipschitz continuous, and this shows again that it is not differentiable. The following result states that in terms of Hausdorff dimension, the graph of standard Brownian motion lies midway between a simple curve (dimension 1) and the plane (dimension 2).
The graph of standard Brownian motion has Hausdorff dimension $\frac{3}{2}$.
Yet another indication of the irregularity of Brownian motion is that it has infinite total variation on any interval of positive length.
Suppose that $a, \, b \in \R$ with $a \lt b$. Then the total variation of $\bs{X}$ on $[a, b]$ is $\infty$.
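Numerically, the total variation of a path sampled on a grid of mesh $1/n$ has expected value $\sqrt{2 n / \pi}$, which diverges as the mesh is refined, while the quadratic variation stays near the interval length. The following sketch (ours) illustrates this on $[0, 1]$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Increments of a path on [0, 1] with mesh 1/n. Expected total variation of
# the discretized path is n * E|N(0, 1/n)| = sqrt(2n/pi) -> infinity, while
# the quadratic variation concentrates near 1.
for n in [10**2, 10**4, 10**6]:
    inc = rng.normal(0.0, np.sqrt(1 / n), size=n)
    print(f"n = {n}: total variation {np.abs(inc).sum():.1f}, "
          f"quadratic variation {(inc**2).sum():.3f}")
```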
The Markov Property and Stopping Times
As usual, we start with a standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$. Recall that a Markov process has the property that the future is independent of the past, given the present state. Because of the stationary, independent increments property, Brownian motion has this property. As a minor note, to view $\bs{X}$ as a Markov process, we sometimes need to relax Assumption 1 and let $X_0$ have an arbitrary value in $\R$. Let $\mathscr{F}_t = \sigma\{X_s: 0 \le s \le t\}$, the $\sigma$-algebra generated by the process up to time $t \in [0, \infty)$. The family of $\sigma$-algebras $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is known as a filtration.
Standard Brownian motion is a time-homogeneous Markov process with transition probability density $p$ given by $p_t(x, y) = f_t(y - x) = \frac{1}{\sqrt{2 \pi t}} \exp\left[-\frac{(y - x)^2}{2 t} \right], \quad t \in (0, \infty); \; x, \, y \in \R$
Proof
Fix $s \in [0, \infty)$. The theorem follows from the fact that the process $\{X_{s+t} - X_s: t \in [0, \infty)\}$ is another standard Brownian motion, as shown above, and is independent of $\mathscr{F}_s$.
The transition density $p$ satisfies the following diffusion equations. The first is known as the forward equation and the second as the backward equation. \begin{align} \frac{\partial}{\partial t} p_t(x, y) & = \frac{1}{2} \frac{\partial^2}{ \partial y^2} p_t(x, y) \ \frac{\partial}{\partial t} p_t(x, y) & = \frac{1}{2} \frac{\partial^2}{ \partial x^2} p_t(x, y) \end{align}
Proof
These results follow from standard calculus.
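The standard calculus can be delegated to a computer algebra system. The following sketch (using SymPy, an assumption of this illustration) verifies both diffusion equations symbolically:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x, y = sp.symbols('x y', real=True)
p = sp.exp(-(y - x)**2 / (2 * t)) / sp.sqrt(2 * sp.pi * t)

forward = sp.simplify(sp.diff(p, t) - sp.diff(p, y, 2) / 2)
backward = sp.simplify(sp.diff(p, t) - sp.diff(p, x, 2) / 2)
print(forward, backward)   # both should simplify to 0
```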
The diffusion equations are so named, because the spatial derivative in the first equation is with respect to $y$, the state forward at time $t$, while the spatial derivative in the second equation is with respect to $x$, the state backward at time 0.
Recall that a random time $\tau$ taking values in $[0, \infty]$ is a stopping time with respect to the process $\bs{X} = \{X_t: t \in [0, \infty)\}$ if $\{\tau \le t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$. Informally, we can determine whether or not $\tau \le t$ by observing the process up to time $t$. An important special case is the first time that our Brownian motion hits a specified state. Thus, for $x \in \R$ let $\tau_x = \inf\{t \in [0, \infty): X_t = x\}$. The random time $\tau_x$ is a stopping time.
For a stopping time $\tau$, we need the $\sigma$-algebra of events that can be defined in terms of the process up to the random time $\tau$, analogous to $\mathscr{F}_t$, the $\sigma$-algebra of events that can be defined in terms of the process up to a fixed time $t$. The appropriate definition is $\mathscr{F}_\tau = \{B \in \mathscr{F}: B \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \ge 0\}$ See the section on Filtrations and Stopping Times for more information on filtrations, stopping times, and the $\sigma$-algebra associated with a stopping time.
The strong Markov property is the Markov property generalized to stopping times. Standard Brownian motion $\bs{X}$ is also a strong Markov process. The best way to say this is by a generalization of the temporal and spatial homogeneity result above.
Suppose that $\tau$ is a stopping time and define $Y_t = X_{\tau + t} - X_\tau$ for $t \in [0, \infty)$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is a standard Brownian motion and is independent of $\mathscr{F}_\tau$.
The Reflection Principle
Many interesting properties of Brownian motion can be obtained from a clever idea known as the reflection principle. As usual, we start with a standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty) \}$. Let $\tau$ be a stopping time for $\bs{X}$. Define $W_t = \begin{cases} X_t, & 0 \le t \lt \tau \ 2 X_\tau - X_t, & \tau \le t \lt \infty \end{cases}$ Thus, the graph of $\bs{W} = \{W_t: t \in [0, \infty)\}$ can be obtained from the graph of $\bs{X}$ by reflecting in the line $x = X_\tau$ after time $\tau$. In particular, if the stopping time $\tau$ is $\tau_a$, the first time that the process hits a specified state $a \gt 0$, then the graph of $\bs{W}$ is obtained from the graph of $\bs{X}$ by reflecting in the line $x = a$ after time $\tau_a$.
Open the simulation of reflecting Brownian motion. This app shows the process $\bs{W}$ corresponding to the stopping time $\tau_a$, the time of first visit to a positive state $a$. Run the simulation in single step mode until you see the reflected process several times. Make sure that you understand how the process $\bs{W}$ works.
The reflected process $\bs{W} = \{W_t: t \in [0, \infty)\}$ is also a standard Brownian motion.
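The theorem can be checked by simulation: reflect discretized paths in the line $x = a$ from the first grid time at or above $a$ (an approximation to $\tau_a$), and compare $W_1$ with the standard normal distribution. The sketch below (ours, with arbitrary grid sizes) does this; the agreement is only approximate because of the discretization:

```python
import numpy as np

rng = np.random.default_rng(6)

paths, n, a = 5000, 2000, 1.0
dt = 1.0 / n
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)
after = np.cumsum(x >= a, axis=1) > 0   # True from the first visit to a onward
w = np.where(after, 2 * a - x, x)       # reflect in the line x = a
print(f"W_1: mean {w[:, -1].mean():.3f} (target 0), variance {w[:, -1].var():.3f} (target 1)")
```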
Run the simulation of the reflected Brownian motion process 1000 times. Compare the empirical density function and moments of $W_t$ to the true probability density function and moments.
Martingales
As usual, let $\bs{X} = \{X_t: t \in [0, \infty)\}$ be a standard Brownian motion, and let $\mathscr{F}_t = \sigma\{X_s: 0 \le s \le t\}$ for $t \in [0, \infty)$, so that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is the natural filtration for $\bs{X}$. There are several important martingales associated with $\bs{X}$. We will study a couple of them in this section, and others in subsequent sections. Our first result is that $\bs{X}$ itself is a martingale, simply by virtue of having stationary, independent increments and 0 mean.
$\bs{X}$ is a martingale with respect to $\mathfrak{F}$.
Proof
Again, this is true of any process with stationary, independent increments and 0 mean, but we give the proof anyway, for completeness. Let $s, \, t \in [0, \infty)$ with $s \lt t$. Since $X_s$ is measurable with respect to $\mathscr{F}_s$ and $X_t - X_s$ is independent of $\mathscr{F}_s$ we have $\E\left(X_t \mid \mathscr{F}_s\right) = \E\left[X_s + (X_t - X_s) \mid \mathscr{F}_s\right] = X_s + \E(X_t - X_s) = X_s$
The next martingale is a little more interesting.
Let $Y_t = X_t^2 - t$ for $t \in [0, \infty)$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is a martingale with respect to $\mathfrak{F}$.
Proof
Let $s, \, t \in [0, \infty)$ with $s \lt t$. Then $Y_t = X_t^2 - t = \left[X_s + (X_t - X_s)\right]^2 - t = X_s^2 + 2 X_s (X_t - X_s) + (X_t - X_s)^2 - t$ Since $X_s$ is measurable with respect to $\mathscr{F}_s$ and $X_t - X_s$ is independent of $\mathscr{F}_s$ we have $\E\left(Y_t \mid \mathscr{F}_s\right) = X_s^2 + 2 X_s \E(X_t - X_s) + \E\left[(X_t - X_s)^2\right] - t$ But $\E(X_t - X_s) = 0$ and $\E\left[(X_t - X_s)^2\right] = \var(X_t - X_s) = t - s$ so $\E\left(Y_t \mid \mathscr{F}_s\right) = X_s^2 - s = Y_s$.
Maximums and Hitting Times
As usual, we start with a standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty) \}$. For $y \in [0, \infty)$ recall that $\tau_y = \min\{t \ge 0: X_t = y\}$ is the first time that the process hits state $y$. Of course, $\tau_0 = 0$. For $t \in [0, \infty)$, let $Y_t = \max\{X_s: 0 \le s \le t\}$, the maximum value of $\bs{X}$ on the interval $[0, t]$. Note that $Y_t$ is well defined by the continuity of $\bs{X}$, and of course $Y_0 = 0$. Thus we have two new stochastic processes: $\{\tau_y: y \in [0, \infty)\}$ and $\{Y_t: t \in [0, \infty)\}$. Both have index set $[0, \infty)$ and (as we will see) state space $[0, \infty)$. Moreover, the processes are inverses of each other in a sense:
For $t, \; y \in (0, \infty)$, $\tau_y \le t$ if and only if $Y_t \ge y$.
Proof
Since standard Brownian motion starts at 0 and is continuous, both events mean that the process hits state $y$ in the interval $[0, t]$.
Thus, if we can compute the distribution of $Y_t$ for each $t \in (0, \infty)$ then we can compute the distribution of $\tau_y$ for each $y \in (0, \infty)$, and conversely.
For $y \gt 0$, $\tau_y$ has the same distribution as $y^2 \big/ Z^2$, where $Z$ is a standard normal variable. The probability density function $g_y$ is given by $g_y(t) = \frac{y}{\sqrt{2 \pi t^3}} \exp\left(-\frac{y^2}{2 t}\right), \quad t \in (0, \infty)$
Proof
Let $t \gt 0$. From the previous result, note that $X_t \ge y \implies Y_t \ge y \implies \tau_y \le t$. Hence $\P(X_t \ge y) = \P(X_t \ge y, \tau_y \le t) = \P(X_t \ge y \mid \tau_y \le t) \P(\tau_y \le t)$ But from the strong Markov property above, $s \mapsto X(\tau_y + s) - y$ is another standard Brownian motion. Hence $\P(X_t \ge y \mid \tau_y \le t) = \frac{1}{2}$. Therefore $\P(\tau_y \le t) = 2 \P(X_t \ge y) = \frac{2}{\sqrt{2 \pi t}} \int_y^\infty e^{-x^2 / 2 t} \, dx = \frac{2}{\sqrt{2 \pi}} \int_{y/\sqrt{t}}^\infty e^{-z^2/2} \, dz$ The second integral follows from the first by the change of variables $z = x \big/ \sqrt{t}$. We can recognize this integral as $\P\left(y^2 \big/ Z^2 \le t\right)$ where $Z$ has a standard normal distribution. Taking the derivative of the integral with respect to $t$ gives the PDF.
The distribution of $\tau_y$ is the Lévy distribution with scale parameter $y^2$, and is named for the French mathematician Paul Lévy. The Lévy distribution is studied in more detail in the chapter on special distributions.
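The identity in the last theorem gives a very fast way to sample hitting times: $\tau_y = y^2 / Z^2$ in distribution. The sketch below (ours) compares this direct method with first-passage times read off discretized paths; the path-based probabilities are slightly low because the path is only monitored on a grid:

```python
import numpy as np

rng = np.random.default_rng(8)

y = 1.0
tau_direct = y**2 / rng.normal(size=10**5)**2   # tau_y sampled as y^2 / Z^2

# First-passage probabilities estimated from discretized paths on [0, 4]
paths, n, T = 2000, 4000, 4.0
dt = T / n
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, n)), axis=1)

for t_val in [0.5, 1.0, 2.0]:
    k = int(t_val / dt)
    p_direct = (tau_direct <= t_val).mean()
    p_path = (x[:, :k] >= y).any(axis=1).mean()
    print(f"P(tau_1 <= {t_val}): direct {p_direct:.3f}, path {p_path:.3f}")
```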
Open the hitting time experiment. Vary $y$ and note the shape and location of the probability density function of $\tau_y$. For selected values of the parameter, run the simulation in single step mode a few times. Then run the experiment 1000 times and compare the empirical density function to the probability density function.
Open the special distribution simulator and select the Lévy distribution. Vary the parameters and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function to the probability density function.
Standard Brownian motion is recurrent. That is, $\P(\tau_y \lt \infty) = 1$ for every $y \in \R$.
Proof
Suppose first that $y \gt 0$. From the proof of the last theorem, $\P(\tau_y \lt \infty) = \lim_{t \to \infty} \P(\tau_y \le t) = \frac{2}{\sqrt{2 \pi}} \int_0^\infty e^{-z^2 / 2} \, dz = 1$ Note that the integral above is equivalent to the integral of the standard normal PDF over $\R$. In particular, the function $g_y$ given above really is a valid PDF. If $y \lt 0$ then by symmetry, $\tau_y$ has the same distribution as $\tau_{-y}$, so $\P(\tau_y \lt \infty) = 1$. Trivially, $\tau_0 = 0$.
Thus, for each $y \in \R$, $\bs{X}$ eventually hits $y$ with probability 1. Actually we can say more:
With probability 1, $\bs{X}$ visits every point in $\R$.
Proof
By continuity, if $\bs{X}$ reaches $y \gt 0$ then $\bs{X}$ visits every point in $[0, y]$. By symmetry, a similar statement holds for $y \lt 0$. Thus the event that $\bs{X}$ visits every point in $\R$ is $\bigcap_{n=1}^\infty \left(\{\tau_n \lt \infty\} \cap \{\tau_{-n} \lt \infty\}\right)$. The probability of a countable intersection of events with probability 1 still has probability 1.
On the other hand,
Standard Brownian motion is null recurrent. That is, $\E(\tau_y) = \infty$ for every $y \in \R \setminus \{0\}$.
Proof
By symmetry, it suffices to consider $y \gt 0$. From the result above on the distribution of $\tau_y$, $\E(\tau_y) = \int_0^\infty \P(\tau_y \gt t) \, dt = \frac{2}{\sqrt{2 \pi}} \int_0^\infty \int_0^{y / \sqrt{t}} e^{-z^2 / 2} \, dz \, dt$ Changing the order of integration gives $\E(\tau_y) = \frac{2}{\sqrt{2 \pi}} \int_0^\infty \int_0^{y^2/z^2} e^{-z^2 / 2} \, dt \, dz = \frac{2 y^2}{\sqrt{2 \pi}} \int_0^\infty \frac{1}{z^2} e^{-z^2 / 2} \, dz$ Next we get a lower bound on the last integral by integrating over the interval $[0, 1]$ and noting that $e^{-z^2 / 2} \ge e^{-1/2}$ on this interval. Thus, $\E(\tau_y) \ge \frac{2 y^2 e^{-1/2}}{\sqrt{2 \pi}} \int_0^1 \frac{1}{z^2} \, dz = \infty$
The process $\{\tau_x: x \in [0, \infty)\}$ has stationary, independent increments.
Proof
The proof relies on the temporal and spatial homogeneity of Brownian motion and the strong Markov property. Suppose that $x, \; y \in [0, \infty)$ with $x \lt y$. By continuity, $\bs{X}$ must reach $x$ before reaching $y$. Thus, $\tau_y = \tau_x + (\tau_y - \tau_x)$. But $\tau_y - \tau_x$ is the hitting time to $y - x$ for the process $t \mapsto X(\tau_x + t) - x$, and as shown above, this process is also a standard Brownian motion, independent of $\mathscr{F}(\tau_x)$. Hence $\tau_y - \tau_x$ is independent of $\mathscr{F}(\tau_x)$ and has the same distribution as $\tau_{y-x}$.
The family of probability density functions $\{g_x: x \in (0, \infty)\}$ is closed under convolution. That is, $g_x * g_y = g_{x+y}$ for $x, \, y \in (0, \infty)$.
Proof
This follows immediately from the previous theorem. A direct proof is an interesting exercise.
Now we turn our attention to the maximum process $\{Y_t: t \in [0, \infty)\}$, the inverse of the hitting process $\{\tau_y: y \in [0, \infty)\}$.
For $t \gt 0$, $Y_t$ has the same distribution as $\left|X_t\right|$, known as the half-normal distribution with scale parameter $t$. The probability density function is
$h_t(y) = \sqrt{\frac{2}{\pi t}} \exp\left(-\frac{y^2}{2 t}\right), \quad y \in [0, \infty)$
Proof
From the inverse relation and the distribution of $\tau_y$, $\P(Y_t \ge y) = \P(\tau_y \le t) = 2 \P(X_t \ge y) = \P\left(\left|X_t\right| \ge y\right)$ for $y \ge 0$. By definition, $\left|X_t\right|$ has the half-normal distribution with parameter $t$. In particular, $\P(Y_t \ge y) = \frac{2}{\sqrt{2 \pi t}} \int_y^\infty e^{-x^2 / 2 t} \, dx$ Taking the negative derivative of the integral above, with respect to $y$, gives the PDF.
The half-normal distribution is a special case of the folded normal distribution, which is studied in more detail in the chapter on special distributions.
For $t \ge 0$, the mean and variance of $Y_t$ are
1. $\E(Y_t) = \sqrt{\frac{2 t} {\pi}}$
2. $\var(Y_t) = t \left(1 - \frac{2}{\pi}\right)$
Proof
These follow from standard results for the half-normal distribution.
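As a quick numerical check, these moments agree with SciPy's half-normal distribution when its scale is set to $\sqrt{t}$ (a sketch; the `scale` parametrization here is SciPy's convention):

```python
import numpy as np
from scipy.stats import halfnorm

t = 2.0
dist = halfnorm(scale=np.sqrt(t))
print(dist.mean(), np.sqrt(2 * t / np.pi))  # both ~ 1.128
print(dist.var(), t * (1 - 2 / np.pi))      # both ~ 0.727
```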
In the standard Brownian motion simulation, select the maximum value. Vary the parameter $t$ and note the shape of the probability density function and the location and size of the mean-standard deviation bar. Run the simulation 1000 times and compare the empirical density and moments to the true probability density function and moments.
Open the special distribution simulator and select the folded-normal distribution. Vary the parameters and note the shape and location of the probability density function and the size and location of the mean-standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function and moments to the true density function and moments.
Zeros and Arcsine Laws
As usual, we start with a standard Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$. Study of the zeros of $\bs{X}$ leads to a number of probability laws referred to as arcsine laws, because as we might guess, the probabilities and distributions involve the arcsine function.
For $s, \; t \in [0, \infty)$ with $s \lt t$, let $E(s, t)$ be the event that $\bs{X}$ has a zero in the time interval $(s, t)$. That is, $E(s, t) = \{X_u = 0 \text{ for some } u \in (s, t)\}$. Then $\P\left[E(s, t)\right] = 1 - \frac{2}{\pi} \arcsin\left(\sqrt{\frac{s}{t}}\right)$
Proof
Conditioning on $X_s$ and using symmetry gives $\P\left[E(s, t)\right] = \int_{-\infty}^\infty \P\left[E(s, t) \mid X_s = x\right] f_s(x) \, dx = 2 \int_{-\infty}^0 \P\left[E(s, t) \mid X_s = x\right] f_s(x) \, dx$ But by the homogeneity of $\bs{X}$ in time and space, note that for $x \gt 0$, $\P\left[E(s, t) \mid X_s = -x\right] = \P(\tau_x \lt t - s)$. That is, a process in state $-x$ at time $s$ that hits 0 before time $t$ is the same as a process in state 0 at time 0 reaching state $x$ before time $t - s$. Hence $\P\left[E(s, t)\right] = \int_0^\infty \int_0^{t-s} g_x(u) f_s(-x) \, du \, dx$ where $g_x$ is the PDF of $\tau_x$ given above. Substituting gives $\P\left[E(s, t)\right] = \frac{1}{\pi \sqrt{s}} \int_0^{t-s} u^{-3/2} \int_0^\infty x \exp\left[-\frac{1}{2} x^2 \left(\frac{u + s}{u s} \right) \right] \, dx \, du = \frac{\sqrt{s}}{\pi} \int_0^{t-s} \frac{1}{(u + s) \sqrt{u}} \, du$ Finally substituting $v = \sqrt{u / s}$ in the last integral gives $\P\left[E(s, t)\right] = \frac{2}{\pi} \int_0^{\sqrt{t/s - 1}} \frac{1}{v^2 + 1} \, dv = \frac{2}{\pi} \arctan \left(\sqrt{\frac{t}{s} - 1}\right) = 1 - \frac{2}{\pi} \arcsin\left(\sqrt{\frac{s}{t}} \right)$
In particular, $\P\left[E(0, t)\right] = 1$ for every $t \gt 0$, so with probability 1, $\bs{X}$ has a zero in $(0, t)$. Actually, we can say a bit more:
For $t \gt 0$, $\bs{X}$ has infinitely many zeros in $(0, t)$ with probability 1.
Proof
The event that $\bs{X}$ has infinitely many zeros in $(0, t)$ is $\bigcap_{n=1}^\infty E(0, t / n)$. The intersection of a countable collection of events with probability 1 still has probability 1.
The last result is further evidence of the very strange and irregular behavior of Brownian motion. Note also that $\P\left[E(s, t)\right]$ depends only on the ratio $s / t$. Thus, $\P\left[E(s, t)\right] = \P\left[E(1 / t, 1 / s)\right]$ and $\P\left[E(s, t)\right] = \P\left[E(c s, c t)\right]$ for every $c \gt 0$. So, for example the probability of at least one zero in the interval $(2, 5)$ is the same as the probability of at least one zero in $(1/5, 1/2)$, the same as the probability of at least one zero in $(6, 15)$, and the same as the probability of at least one zero in $(200, 500)$.
For $t \gt 0$, let $Z_t$ denote the time of the last zero of $\bs{X}$ before time $t$. That is, $Z_t = \max\left\{s \in [0, t]: X_s = 0\right\}$. Then $Z_t$ has the arcsine distribution with parameter $t$. The distribution function $H_t$ and the probability density function $h_t$ are given by \begin{align} H_t(s) & = \frac{2}{\pi} \arcsin\left(\sqrt{\frac{s}{t}}\right), \quad 0 \le s \le t \\ h_t(s) & = \frac{1}{\pi \sqrt{s (t - s)}}, \quad 0 \lt s \lt t \end{align}
Proof
For $0 \le s \lt t$, the event $\{Z_t \le s\}$ is the same as $\left[E(s, t)\right]^c$, the event that there are no zeros in the interval $(s, t)$. Hence the formula for $H_t$ follows from the result above. Taking the derivative of $H_t$ and simplifying gives the formula for $h_t$.
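The arcsine law is also easy to check by simulation. The sketch below (grid size and path count are arbitrary, and the last sign change on the grid is only a discrete approximation of the last zero) compares the empirical distribution function of $Z_t$ with $H_t$:

```python
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 1000, 10_000
dt = t / n_steps

incs = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incs, axis=1)], axis=1)

# A sign change between grid points approximates a zero; take the last one.
# Column 0 is always flagged, since X_0 = 0.
sign_change = paths[:, :-1] * paths[:, 1:] <= 0
last_idx = sign_change.shape[1] - 1 - np.argmax(sign_change[:, ::-1], axis=1)
last_zero = dt * last_idx

for s in (0.25, 0.5, 0.75):
    emp = (last_zero <= s).mean()
    exact = (2 / np.pi) * np.arcsin(np.sqrt(s / t))  # H_t(s)
    print(s, emp, exact)  # exact values: 1/3, 1/2, 2/3
```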
The density function of $Z_t$ is $u$-shaped and symmetric about the midpoint $t / 2$, so the points with the largest density are those near the endpoints 0 and $t$, a surprising result at first. The arcsine distribution is studied in more detail in the chapter on special distributions.
The mean and variance of $Z_t$ are
1. $\E(Z_t) = t / 2$
2. $\var(Z_t) = t^2 / 8$
Proof
These are standard results for the arcsine distribution. That the mean is the midpoint $t/2$ also follows from symmetry, of course.
In the simulation of standard Brownian motion, select the last zero variable. Vary the parameter $t$ and note the shape of the probability density function and the size and location of the mean-standard deviation bar. For selected values of $t$ run the simulation in single step mode a few times and note the position of the last zero. Finally, run the simulation 1000 times and compare the empirical density function and moments to the true probability density function and moments.
Open the special distribution simulator and select the arcsine distribution. Vary the parameters and note the shape and location of the probability density function and the size and location of the mean-standard deviation bar. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function and moments to the true density function and moments.
Now let $Z = \{t \in [0, \infty): X_t = 0\}$ denote the set of zeros of $\bs{X}$, so that $Z$ is a random subset of $[0, \infty)$. The theorem below gives some of the strange properties of the random set $Z$, but to understand these, we need to review some definitions. A nowhere dense set is a set whose closure has empty interior. A perfect set is a set with no isolated points. As usual, we let $\lambda$ denote Lebesgue measure on $\R$.
With probability 1,
1. $Z$ is closed.
2. $\lambda(Z) = 0$
3. $Z$ is nowhere dense.
4. $Z$ is perfect.
Proof
1. Note that $Z$ is the inverse image of the closed set $\{0\}$ under the function $t \mapsto X_t$. Since this function is continuous with probability 1, $Z$ is closed with probability 1.
2. For each $t \in (0, \infty)$ note that $\P(t \in Z) = \P(X_t = 0) = 0$ since $X_t$ has a continuous distribution. Using Fubini's theorem $\E\left[\lambda(Z)\right] = \E \left[\int_0^\infty \bs{1}_Z(t) \, d\lambda(t)\right] = \int_0^\infty \E\left[\bs{1}_Z(t)\right] \, d\lambda(t) = 0$ and hence $\P\left[\lambda(Z) = 0\right] = 1$.
3. Since $Z$ is closed and has Lebesgue measure 0, its interior is empty (all of these statements with probability 1).
4. Suppose that $s \in Z$. Then by the temporal and spatial homogeneity properties, $t \mapsto X_{s + t}$ is also a standard Brownian motion. But then by the result above on zeros, with probability 1, $\bs{X}$ has a zero in the interval $(s, s + 1 / n)$ for every $n \in \N_+$. Hence $s$ is not an isolated point of $Z$.
The following theorem gives a deeper property of $Z$. The Hausdorff dimension of $Z$ is midway between that of a point (dimension 0) and a line (dimension 1).
$Z$ has Hausdorff dimension $\frac{1}{2}$.
The Law of the Iterated Logarithm
As usual, let $\bs{X} = \{X_t: t \in [0, \infty)\}$ be standard Brownian motion. By definition, we know that $X_t$ has the normal distribution with mean 0 and standard deviation $\sqrt{t}$, so the function $x = \sqrt{t}$ gives some idea of how the process grows in time. The precise growth rate is given by the famous law of the iterated logarithm.
With probability 1, $\limsup_{t \to \infty} \frac{X_t}{\sqrt{2 t \ln \ln t}} = 1$
Computational Exercises
In the following exercises, $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a standard Brownian motion process.
Explicitly find the probability density function, covariance matrix, and correlation matrix of $(X_{0.5}, X_1, X_{2.3})$.
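For a numerical check of this exercise: the covariance matrix of standard Brownian motion at a vector of times has entries $\min\{s, t\}$, so a short NumPy sketch produces the covariance and correlation matrices (the joint density then follows from the multivariate normal form with this covariance matrix):

```python
import numpy as np

times = np.array([0.5, 1.0, 2.3])

# cov(X_s, X_t) = min(s, t) for standard Brownian motion
cov = np.minimum.outer(times, times)
sd = np.sqrt(np.diag(cov))
cor = cov / np.outer(sd, sd)  # entries sqrt(min(s,t)/max(s,t))
print(cov)
print(cor)
```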
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Definition
We start with the assumptions that govern standard Brownian motion, except that we relax the restrictions on the parameters of the normal distribution.
Suppose that $\mu \in \R$ and $\sigma \in (0, \infty)$. Brownian motion with drift parameter $\mu$ and scale parameter $\sigma$ is a random process $\bs{X} = \{X_t: t \in [0, \infty)\}$ with state space $\R$ that satisfies the following properties:
1. $X_0 = 0$ (with probability 1).
2. $\bs{X}$ has stationary increments. That is, for $s, \, t \in [0, \infty)$ with $s \lt t$, the distribution of $X_t - X_s$ is the same as the distribution of $X_{t - s}$.
3. $\bs{X}$ has independent increments. That is, for $t_1, t_2, \ldots, t_n \in [0, \infty)$ with $t_1 \lt t_2 \lt \cdots \lt t_n$, the random variables $X_{t_1}, X_{t_2} - X_{t_1}, \ldots, X_{t_n} - X_{t_{n-1}}$ are independent.
4. $X_t$ has the normal distribution with mean $\mu t$ and variance $\sigma^2 t$ for $t \in [0, \infty)$.
5. With probability 1, $t \mapsto X_t$ is continuous on $[0, \infty)$.
Note that we cannot assign the parameters of the normal distribution of $X_t$ arbitrarily. We know that since $\bs{X}$ has stationary, independent increments, $\E(X_t)$ and $\var(X_t)$ must be linear functions of $t \in [0, \infty)$.
Open the simulation of Brownian motion with drift and scaling. Run the simulation in single step mode several times for various values of the parameters. Note the behavior of the sample paths. For selected values of the parameters, run the simulation 1000 times and compare the empirical density function and moments to the true density function and moments.
It's easy to construct Brownian motion with drift and scaling from a standard Brownian motion, so we don't have to worry about the existence question.
Relation to standard Brownian motion.
1. Suppose that $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion, and that $\mu \in \R$ and $\sigma \in (0, \infty)$. Let $X_t = \mu t + \sigma Z_t$ for $t \in [0, \infty)$. Then $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Brownian motion with drift parameter $\mu$ and scale parameter $\sigma$.
2. Conversely, suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Brownian motion with drift parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. Let $Z_t = (X_t - \mu t) \big/ \sigma$ for $t \in [0, \infty)$. Then $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion.
Proof
It's straightforward to show that the processes $\bs{X}$ and $\bs{Z}$ satisfy the appropriate set of assumptions.
In differential form, part (a) can be written as $d X_t = \mu \, dt + \sigma \, d Z_t, \; X_0 = 0$
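The construction in part (a) translates directly into simulation code. The following minimal Python sketch (the parameter values are arbitrary) builds a sample path $X_t = \mu t + \sigma Z_t$ on a grid and checks the marginal distribution of $X_t$ at a fixed time:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, dt, n_steps = 0.5, 1.2, 1e-3, 5000

# One sample path: standard Brownian motion Z on a grid, then X_t = mu*t + sigma*Z_t
times = dt * np.arange(n_steps + 1)
z = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
x = mu * times + sigma * z

# Marginal check at the final time: X_t ~ N(mu*t, sigma^2*t) over many replicates
t = times[-1]
x_t = mu * t + sigma * rng.normal(0.0, np.sqrt(t), size=10_000)
print(x_t.mean(), mu * t)        # both ~ 2.5
print(x_t.var(), sigma**2 * t)   # both ~ 7.2
```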
Finite Dimensional Distributions
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is Brownian motion with drift parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. It follows from part (d) of the definition that $X_t$ has probability density function $f_t$ given by $f_t(x) = \frac{1}{\sigma \sqrt{2 \pi t}} \exp\left[-\frac{1}{2 \sigma^2 t} (x - \mu t)^2\right], \quad x \in \R$ This family of density functions determines the finite dimensional distributions of $\bs{X}$.
If $t_1, t_2, \ldots, t_n \in (0, \infty)$ with $0 \lt t_1 \lt t_2 \cdots \lt t_n$ then $(X_{t_1}, X_{t_2}, \ldots, X_{t_n})$ has probability density function $f_{t_1, t_2, \ldots, t_n}$ given by $f_{t_1, t_2, \ldots, t_n}(x_1, x_2, \ldots, x_n) = f_{t_1}(x_1) f_{t_2 - t_1}(x_2 - x_1) \cdots f_{t_n - t_{n-1}}(x_n - x_{n-1}), \quad (x_1, x_2, \ldots, x_n) \in \R^n$
Proof
This follows because $\bs{X}$ has stationary, independent increments.
$\bs{X}$ is a Gaussian process with mean function $m$ and covariance function $c$ given by
1. $m(t) = \mu t$ for $t \in [0, \infty)$
2. $c(s, t) = \sigma^2 \min\{s, t\}$ for $s, \, t \in [0, \infty)$.
Proof
The fact that $\bs{X}$ is a Gaussian process follows from the construction $X_t = \mu t + \sigma Z_t$ for $t \in [0, \infty)$, where $\bs{Z}$ is a standard Brownian motion. We know that $\bs{Z}$ is a Gaussian process. The form of the mean and covariance functions follow because $\bs{X}$ has stationary, independent increments. Note that $\mu$ and $\sigma^2$ are the mean and variance of $X_1$.
The correlation function is independent of the parameters, and thus is the same as for standard Brownian motion. This is hardly surprising since correlation is a standardized measure of association. $\cor(X_s, X_t) = \frac{\sigma^2 \min\{s, t\}}{\sigma \sqrt{s} \, \sigma \sqrt{t}} = \frac{\min\{s, t\}}{\sqrt{s t}} = \sqrt{\frac{\min\{s, t\}}{\max\{s, t\}}}, \quad (s, t) \in (0, \infty)^2$
Transformations
There are a couple of simple transformations that preserve Brownian motion, but perhaps change the drift and scale parameters. Our starting place is a Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$ with drift parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. Our first result involves scaling $\bs{X}$ in time and space (and possibly reflecting in the spatial origin).
Let $a \in \R \setminus \{0\}$ and $b \in (0, \infty)$. Define $Y_t = a X_ {b t}$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \ge 0\}$ is also a Brownian motion with drift parameter $a b \mu$ and scale parameter $\left|a\right| \sqrt{b} \sigma$.
Proof
Clearly the new process is still a Gaussian process. The mean function is $\E(Y_t) = a \E(X_{b t}) = a b \mu t$ for $t \in [0, \infty)$. The covariance function is $\cov(Y_s, Y_t) = a^2 \cov(X_{bs}, X_{bt}) = a^2 \sigma^2 \min\{b s, b t\} = a^2 b \sigma^2 \min\{s, t\}$ for $(s, t) \in [0, \infty)^2$. Finally, since $\bs{X}$ is continuous, so is $\bs{Y}$.
Suppose that $a \gt 0$ in the previous theorem, so that we are scaling temporally and spatially. In order to preserve the original drift parameter $\mu$ we must have $a b = 1$ (if $\mu \ne 0$). In order to preserve the original scale parameter $\sigma$, we must have $a \sqrt{b} = 1$. We can't have both unless $\mu = 0$, which leads to a slight generalization of one of our results for standard Brownian motion:
Suppose that $\bs{X}$ is a Brownian motion with drift parameter $\mu = 0$ and scale parameter $\sigma > 0$. Suppose also that $c \gt 0$ and let $Y_t = \frac{1}{c} X_{c^2 t}$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is also a Brownian motion with drift parameter 0 and scale parameter $\sigma$.
Our next result is related to the Markov property, which we explore in more detail below. We return to the general case where $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a Brownian motion with drift parameter $\mu \in \R$ and scale parameter $\sigma \in (0, \infty)$. If we restart Brownian motion at a fixed time $s$, and shift the origin to $X_s$, then we have another Brownian motion with the same parameters.
Fix $s \in [0, \infty)$ and define $Y_t = X_{s + t} - X_s$ for $t \ge 0$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is also a Brownian motion with the same drift and scale parameters.
Proof
Clearly $\bs{Y}$ is also a Gaussian process. Moreover, $\E(Y_t) = \E(X_{s + t}) - \E(X_s) = \mu(s + t) - \mu s = \mu t$ for $t \in [0, \infty)$. Also, if $r, \, t \in [0, \infty)$ with $r \le t$ then \begin{align} \cov(Y_r, Y_t) & = \cov(X_{s + r} - X_s, X_{s + t} - X_s) \\ & = \cov(X_{s + r}, X_{s + t}) - \cov(X_{s + r}, X_s) - \cov(X_s, X_{s + t}) + \cov(X_s, X_s) \\ & = \sigma^2 (s + r) - \sigma^2 s - \sigma^2 s + \sigma^2 s = \sigma^2 r \end{align} Finally, $\bs{Y}$ is continuous by the continuity of $\bs{X}$.
The Markov Property and Stopping Times
As usual, we start with a Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$ with drift parameter $\mu$ and scale parameter $\sigma$. Recall again that a Markov process has the property that the future is independent of the past, given the present state. Because of the stationary, independent increments property, Brownian motion has the property. As a minor note, to view $\bs{X}$ as a Markov process, we sometimes need to relax Assumption 1 and let $X_0$ have an arbitrary value in $\R$. Let $\mathscr{F}_t = \sigma\{X_s: 0 \le s \le t\}$, the sigma-algebra generated by the process up to time $t \in [0, \infty)$. The family of $\sigma$-algebras $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is known as a filtration.
Brownian motion is a time-homogeneous Markov process with transition probability density $p$ given by $p_t(x, y) = f_t(y - x) =\frac{1}{\sigma \sqrt{2 \pi t}} \exp\left[-\frac{1}{2 \sigma^2 t} (y - x - \mu t)^2\right], \quad t \in (0, \infty); \; x, \, y \in \R$
Proof
Fix $s \in [0, \infty)$. The theorem follows from the fact that the process $\{X_{s+t} - X_s: t \in [0, \infty)\}$ is another Brownian motion with the same drift and scale parameters, as noted above, and is independent of $\mathscr{F}_s$.
The transition density $p$ satisfies the following diffusion equations. The first is known as the forward equation and the second as the backward equation.
\begin{align} \frac{\partial}{\partial t} p_t(x, y) & = -\mu \frac{\partial}{\partial y} p_t(x, y) + \frac{1}{2} \sigma^2 \frac{\partial^2}{ \partial y^2} p_t(x, y) \\ \frac{\partial}{\partial t} p_t(x, y) & = \mu \frac{\partial}{\partial x} p_t(x, y) + \frac{1}{2} \sigma^2 \frac{\partial^2}{ \partial x^2} p_t(x, y) \end{align}
Proof
These results follow from standard calculus.
The diffusion equations are so named, because the spatial derivative in the first equation is with respect to $y$, the state forward at time $t$, while the spatial derivative in the second equation is with respect to $x$, the state backward at time 0.
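Both equations can be verified symbolically. Here is a sketch (assuming SymPy is available) that differentiates the transition density and confirms that each equation reduces to 0:

```python
import sympy as sp

t, sigma = sp.symbols('t sigma', positive=True)
x, y, mu = sp.symbols('x y mu', real=True)

p = sp.exp(-(y - x - mu*t)**2 / (2*sigma**2*t)) / (sigma*sp.sqrt(2*sp.pi*t))

# Forward: p_t + mu p_y - (sigma^2/2) p_yy should vanish
forward = sp.diff(p, t) + mu*sp.diff(p, y) - sp.Rational(1, 2)*sigma**2*sp.diff(p, y, 2)
# Backward: p_t - mu p_x - (sigma^2/2) p_xx should vanish
backward = sp.diff(p, t) - mu*sp.diff(p, x) - sp.Rational(1, 2)*sigma**2*sp.diff(p, x, 2)

print(sp.simplify(forward))   # 0
print(sp.simplify(backward))  # 0
```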
Recall again that a random time $\tau$ taking values in $[0, \infty]$ is a stopping time with respect to the process $\bs{X}$ if $\{\tau \le t\} \in \mathscr{F}_t$ for every $t \in [0, \infty)$. The $\sigma$-algebra associated with $\tau$ is $\mathscr{F}_\tau = \left\{B \in \mathscr{F}: B \cap \{\tau \le t\} \in \mathscr{F}_t \text{ for all } t \ge 0\right\}$ See the section on Filtrations and Stopping Times for more information on filtrations, stopping times, and the $\sigma$-algebra associated with a stopping time. Brownian motion $\bs{X}$ is also a strong Markov process.
Suppose that $\tau$ is a stopping time and define $Y_t = X_{\tau + t} - X_\tau$ for $t \in [0, \infty)$. Then $\bs{Y} = \{Y_t: t \in [0, \infty)\}$ is a Brownian motion with the same drift and scale parameters, and is independent of $\mathscr{F}_\tau$.
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Definition and Constructions
In the most common formulation, the Brownian bridge process is obtained by taking a standard Brownian motion process $\bs{X}$, restricted to the interval $[0, 1]$, and conditioning on the event that $X_1 = 0$. Since $X_0 = 0$ also, the process is tied down at both ends, and so the process in between forms a bridge (albeit a very jagged one). The Brownian bridge turns out to be an interesting stochastic process with surprising applications, including a very important application to statistics. In terms of a definition, however, we will give a list of characterizing properties as we did for standard Brownian motion and for Brownian motion with drift and scaling.
A Brownian bridge is a stochastic process $\bs{X} = \{X_t: t \in [0, 1]\}$ with state space $\R$ that satisfies the following properties:
1. $X_0 = 0$ and $X_1 = 0$ (each with probability 1).
2. $\bs{X}$ is a Gaussian process.
3. $\E(X_t) = 0$ for $t \in [0, 1]$.
4. $\cov(X_s, X_t) = \min\{s, t\} - s t$ for $s, \, t \in [0, 1]$.
5. With probability 1, $t \mapsto X_t$ is continuous on $[0, 1]$.
So, in short, a Brownian bridge $\bs{X}$ is a continuous Gaussian process with $X_0 = X_1 = 0$, and with mean and covariance functions given in (c) and (d), respectively. Naturally, the first question is whether there exists such a process. The answer is yes, of course, otherwise why would we be here? But in fact, we will see several ways of constructing a Brownian bridge from a standard Brownian motion. To help with the proofs, recall that a standard Brownian motion process $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a continuous Gaussian process with $Z_0 = 0$, $\E(Z_t) = 0$ for $t \in [0, \infty)$ and $\cov(Z_s, Z_t) = \min\{s, t\}$ for $s, \, t \in [0, \infty)$. Here is our first construction:
Suppose that $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion, and let $X_t = Z_t - t Z_1$ for $t \in [0, 1]$. Then $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge.
Proof
1. Note that $X_0 = Z_0 = 0$ and $X_1 = Z_1 - Z_1 = 0$.
2. Linear combinations of the variables in $\bs{X}$ reduce to linear combinations of the variables in $\bs{Z}$ and hence have normal distributions. Thus $\bs{X}$ is a Gaussian process.
3. $\E(X_t) = \E(Z_t) - t \E(Z_1) = 0$ for $t \in [0, 1]$
4. $\cov(X_s, X_t) = \cov(Z_s - s Z_1, Z_t - t Z_1) = \cov(Z_s, Z_t) - t \, \cov(Z_s, Z_1) - s \, \cov(Z_1, Z_t) + s t \, \cov(Z_1, Z_1) = \min\{s, t\} - s t - s t + s t = \min\{s, t\} - s t$ for $s, \, t \in [0, 1]$.
5. $t \mapsto X_t$ is continuous on $[0, 1]$ since $t \mapsto Z_t$ is continuous on $[0, 1]$.
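This construction is easy to carry out numerically, as in the minimal Python sketch below (grid size and path count are arbitrary), which builds bridge paths as $X_t = Z_t - t Z_1$ and checks the variance $t(1 - t)$ at $t = 1/2$:

```python
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_paths = 1000, 5000
dt = 1.0 / n_steps
times = np.linspace(0.0, 1.0, n_steps + 1)

# Standard Brownian motion on [0, 1], one row per path
z = np.concatenate([np.zeros((n_paths, 1)),
                    np.cumsum(rng.normal(0, np.sqrt(dt), (n_paths, n_steps)), axis=1)], axis=1)

# Brownian bridge: X_t = Z_t - t * Z_1, so every path is tied down at both ends
x = z - times * z[:, -1:]

# Empirical check of var(X_t) = t(1 - t) at t = 1/2 (the maximum value, 1/4)
print(x[:, n_steps // 2].var())  # ~ 0.25
```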
Let's see the Brownian bridge in action.
Run the simulation of the Brownian bridge process in single step mode a few times.
For the Brownian bridge $\bs{X}$, note in particular that $X_t$ is normally distributed with mean 0 and variance $t (1 - t)$ for $t \in [0, 1]$. Thus, the variance increases and then decreases on $[0, 1]$ reaching a maximum of $1/4$ at $t = 1/2$. Of course, the variance is 0 at $t = 0$ and $t = 1$, since $X_0 = X_1 = 0$ deterministically.
Open the simulation of the Brownian bridge process. Vary $t$ and note the change in the probability density function and moments. For various values of $t$, run the simulation 1000 times and compare the empirical density function and moments to the true density function and moments.
Conversely to the construction above, we can build a standard Brownian motion on the time interval $[0, 1]$ from a Brownian bridge.
Suppose that $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge, and suppose that $Z$ is a random variable with a standard normal distribution, independent of $\bs{X}$. Let $Z_t = X_t + t Z$ for $t \in [0, 1]$. Then $\bs{Z} = \{Z_t: t \in [0, 1]\}$ is a standard Brownian motion on $[0, 1]$.
Proof
1. Note that $Z_0 = X_0 = 0$.
2. Linear combinations of the variables in $\bs{Z}$ reduce to linear combinations of the variables in $\bs{X}$ and hence have normal distributions. Thus $\bs{Z}$ is a Gaussian process.
3. $\E(Z_t) = \E(X_t) + t \E(Z) = 0$ for $t \in [0, 1]$.
4. $\cov(Z_s, Z_t) = \cov(X_s + s Z, X_t + t Z) = \cov(X_s, X_t) + t \, \cov(X_s, Z) + s \, \cov(X_t, Z) + s t \, \var(Z) = \min\{s, t\} - s t + 0 + 0 + s t = \min\{s, t\}$ for $s, \, t \in [0, 1]$.
5. $t \mapsto Z_t$ is continuous on $[0, 1]$ since $t \mapsto X_t$ is continuous on $[0, 1]$.
Here's another way to construct a Brownian bridge from a standard Brownian motion.
Suppose that $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion. Define $X_1 = 0$ and $X_t = (1 - t) Z\left(\frac{t}{1 - t}\right), \quad t \in [0, 1)$ Then $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge.
Proof
1. Note that $X_0 = Z_0 = 0$ and by definition, $X_1 = 0$.
2. Linear combinations of variables in $\bs{X}$ reduce to linear combinations of variables in $\bs{Z}$ and hence have normal distributions. Thus $\bs{X}$ is a Gaussian process.
3. For $t \in [0, 1]$, $\E(X_t) = (1 - t) \E\left[Z\left(\frac{t}{1 - t}\right)\right] = 0$
4. If $s, \, t \in [0, 1)$ with $s \lt t$ then $s \big/ (1 - s) \lt t \big/ (1 - t)$ so $\cov(X_s, X_t) = \cov\left[(1 - s) Z\left(\frac{s}{1 - s}\right), (1 - t) Z\left(\frac{t}{1 - t}\right)\right] = (1 - s)(1 - t) \frac{s}{1 - s} = s (1 - t)$
5. Finally, $t \mapsto X_t$ is continuous with probability 1 on $[0, 1)$, and with probability 1, $X_t = (1 - t) Z\left[t \big/ (1 - t)\right] \to 0$ as $t \uparrow 1$.
Conversely, we can construct a standard Brownian motion from a Brownian bridge.
Suppose that $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge. Define $Z_t = (1 + t) X\left(\frac{t}{1 + t}\right), \quad t \in [0, \infty)$ Then $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion process.
Proof
1. Note that $Z_0 = X_0 = 0$
2. Linear combinations of the variables in $\bs{Z}$ reduce to linear combinations of the variables in $X$, and hence have normal distributions. Thus $\bs{Z}$ is a Gaussian process.
3. For $t \in [0, \infty)$, $\E(Z_t) = (1 + t) \E\left[X\left(\frac{t}{1 + t}\right)\right] = 0$
4. If $s, \, t \in [0, \infty)$ with $s \lt t$ then $s \big/ (1 + s) \lt t \big/ (1 + t)$, so $\cov(Z_s, Z_t) = \cov\left[(1 + s) X\left(\frac{s}{1 + s}\right), (1 + t) X\left(\frac{t}{1 + t}\right)\right] = (1 + s)(1 + t) \left[\frac{s}{1 + s} - \frac{s}{1 + s}\frac{t}{1 + t}\right] = s$
5. Since $t \mapsto X_t$ is continuous, $t \mapsto Z_t$ is continuous
We return to the comments at the beginning of this section, on conditioning a standard Brownian motion to be 0 at time 1. Unlike the previous two constructions, note that we are not transforming the random variables, rather we are changing the underlying probability measure.
Suppose that $\bs{X} = \{X_t: t \in [0, \infty)\}$ is a standard Brownian motion. Then conditioned on $X_1 = 0$, the process $\{X_t: t \in [0, 1]\}$ is a Brownian bridge process.
Proof
Part of the argument is based on properties of the multivariate normal distribution. The conditioned process is still continuous and is still a Gaussian process. In particular, suppose that $s, \, t \in [0, 1]$ with $s \lt t$. Then $(X_t, X_1)$ has a joint normal distribution with parameters specified by the mean and covariance functions of $\bs{X}$. By standard computations, the conditional distribution of $X_t$ given $X_1 = 0$ is normal with mean 0 and variance $t (1 - t)$. Similarly, the joint distribution of $(X_s, X_t, X_1)$ is normal with parameters specified by the mean and covariance functions of $\bs{X}$. Again, by standard computations, the conditional distribution of $(X_s, X_t)$ given $X_1 = 0$ is bivariate normal with 0 means and with $\cov(X_s, X_t \mid X_1 = 0) = s (1 - t)$.
Finally, the Brownian bridge can be defined in terms of a stochastic integral.
Suppose that $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is a standard Brownian motion. Define $X_1 = 0$ and $X_t = (1 - t) \int_0^t \frac{1}{1 - s} \, dZ_s, \quad t \in [0, 1)$ Then $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge process.
Proof
1. Note that $X_0 = 0$ and by definition, $X_1 = 0$.
2. Since the integrand in the stochastic integral is deterministic, $\bs{X}$ is a Gaussian process.
3. $\bs{X}$ is continuous on $[0, 1)$ with probability 1, as a basic property of stochastic integrals. Moreover, $X_t \to 0$ as $t \uparrow 1$ as a consequence of the martingale inequality.
4. $\E(X_t) = 0$ since the stochastic integral has mean 0.
5. Suppose that $s, \, t \in [0, 1]$ with $s \le t$. Then $\cov(X_s, X_t) = \cov\left[(1 - s) \int_0^s \frac{1}{1 - u} \, dZ_u, (1 - t)\left(\int_0^s \frac{1}{1 - u} \, dZ_u + \int_s^t \frac{1}{1 - u} \, dZ_u\right)\right]$ But $\int_0^s \frac{1}{1 - u} \, dZ_u$ and $\int_s^t \frac{1}{1 - u} \, dZ_u$ are independent, so $\cov(X_s, X_t) = (1 - s)(1 - t) \var\left(\int_0^s \frac{1}{1 - u} \, dZ_u\right)$ But then by the Itô isometry, $\cov(X_s, X_t) = (1 - s)(1 - t) \int_0^s \frac{1}{(1 - u)^2} \, du = (1 - s)(1 - t) \left(\frac{1}{1 - s} - 1\right) = (1 - t)s$
In differential form, the process above can be written as $d X_t = -\frac{X_t}{1 - t} \, dt + dZ_t, \; X_0 = 0$
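An Euler–Maruyama discretization of this equation gives yet another way to simulate the bridge. The sketch below (step size and path count are arbitrary; the scheme stops one step short of $t = 1$, where the drift blows up) checks the variance near $t = 1/2$ and that the paths are tied down near 0 at the end:

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_paths = 1000, 5000
dt = 1.0 / n_steps

# Euler-Maruyama for dX_t = -X_t/(1 - t) dt + dZ_t, starting at X_0 = 0
x = np.zeros(n_paths)
mid_var = None
for k in range(n_steps - 1):
    t = k * dt
    x += -x / (1.0 - t) * dt + rng.normal(0.0, np.sqrt(dt), size=n_paths)
    if k == n_steps // 2:
        mid_var = x.var()

print(mid_var)           # ~ 0.25, matching var(X_t) = t(1 - t) near t = 1/2
print(np.abs(x).mean())  # small: paths are pinned near 0 as t -> 1
```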
The General Brownian Bridge
The process constructed above (in several ways!) is the standard Brownian bridge. It's a simple matter to generalize the process so that it starts at $a$ and ends at $b$, for arbitrary $a, \, b \in \R$.
Suppose that $\bs{Z} = \{Z_t: t \in [0, 1]\}$ is a standard Brownian bridge process. Let $a, \, b \in \R$ and define $X_t = (1 - t) a + t b + Z_t$ for $t \in [0, 1]$. Then $\bs{X} = \{X_t: t \in [0, 1]\}$ is a Brownian bridge process from $a$ to $b$.
Of course, any of the constructions above for standard Brownian bridge can be modified to produce a general Brownian bridge. Here are the characterizing properties.
The Brownian bridge process $\bs{X} = \{X_t: t \in [0, 1]\}$ from $a$ to $b$ is characterized by the following properties:
1. $X_0 = a$ and $X_1 = b$ (each with probability 1).
2. $\bs{X}$ is a Gaussian process.
3. $\E(X_t) = (1 - t) a + t b$ for $t \in [0, 1]$.
4. $\cov(X_s, X_t) = \min\{s, t\} - s t$ for $s, \, t \in [0, 1]$.
5. With probability 1, $t \mapsto X_t$ is continuous on $[0, 1]$.
Applications
The Empirical Distribution Function
We start with a problem that is one of the most basic in statistics. Suppose that $T$ is a real-valued random variable with an unknown distribution. Let $F$ denote the distribution function of $T$, so that $F(t) = \P(T \le t)$ for $t \in \R$. Our goal is to construct an estimator of $F$, so naturally our first step is to sample from the distribution of $T$. This generates a sequence $\bs{T} = (T_1, T_2, \ldots)$ of independent variables, each with the distribution of $T$ (and so with distribution function $F$). Think of $\bs{T}$ as a sequence of independent copies of $T$. For $n \in \N_+$ and $t \in \R$, the natural estimator of $F(t)$ based on the first $n$ sample values is $F_n(t) = \frac{1}{n}\sum_{i=1}^n \bs{1}(T_i \le t)$ which is simply the proportion of the first $n$ sample values that fall in the interval $(-\infty, t]$. Appropriately enough, $F_n$ is known as the empirical distribution function corresponding to the sample of size $n$. Note that $\left(\bs{1}(T_1 \le t), \bs{1}(T_2 \le t), \ldots\right)$ is a sequence of independent, identically distributed indicator variables (and hence is a sequence of Bernoulli trials), and corresponds to sampling from the distribution of $\bs{1}(T \le t)$. The estimator $F_n(t)$ is simply the sample mean of the first $n$ of these variables. The numerator, the number of the original sample variables with values in $(-\infty, t]$, has the binomial distribution with parameters $n$ and $F(t)$. Like all sample means from independent, identically distributed samples, $F_n(t)$ satisfies some basic and important properties. A summary is given below, but to make sense of some of these facts, you need to recall the mean and variance of the indicator variable that we are sampling from: $\E\left[\bs{1}(T \le t)\right] = F(t)$, $\var\left[\bs{1}(T \le t)\right] = F(t)\left[1 - F(t)\right]$
For fixed $t \in \R$,
1. $\E\left[F_n(t)\right] = F(t)$ so $F_n(t)$ is an unbiased estimator of $F(t)$
2. $\var\left[F_n(t)\right] = F(t)\left[1 - F(t)\right] \big/ n$ so $F_n(t)$ is a consistent estimator of $F(t)$
3. $F_n(t) \to F(t)$ as $n \to \infty$ with probability 1, the strong law of large numbers.
4. $\sqrt{n}\left[F_n(t) - F(t)\right]$ has mean 0 and variance $F(t)\left[1 - F(t)\right]$ and converges to the normal distribution with these parameters as $n \to \infty$, the central limit theorem.
The theorem above gives us a great deal of information about $F_n(t)$ for fixed $t$, but now we want to let $t$ vary and consider the expression in (d), namely $t \mapsto \sqrt{n}\left[F_n(t) - F(t)\right]$, as a random process for each $n \in \N_+$. The key is to consider a very special distribution first.
Suppose that $T$ has the standard uniform distribution, that is, the continuous uniform distribution on the interval $[0, 1]$. In this case the distribution function is simply $F(t) = t$ for $t \in [0, 1]$, so we have the sequence of stochastic processes $\bs{X}_n = \left\{X_n(t): t \in [0, 1]\right\}$ for $n \in \N_+$, where $X_n(t) = \sqrt{n}\left[F_n(t) - t\right]$ Of course, the previous results apply, so the process $\bs{X}_n$ has mean function 0, variance function $t \mapsto t(1 - t)$, and for fixed $t \in [0, 1]$, the distribution of $X_n(t)$ converges to the corresponding normal distribution as $n \to \infty$. Here is the new bit of information: the covariance function of $\bs{X}_n$ is the same as that of the Brownian bridge!
$\cov\left[X_n(s), X_n(t)\right] = \min\{s, t\} - s t$ for $s, \, t \in [0, 1]$.
Proof
Suppose that $s \le t$. From basic properties of covariance, $\cov\left[X_n(s), X_n(t)\right] = n \, \cov\left[F_n(s), F_n(t)\right] = \frac{1}{n} \cov\left(\sum_{i=1}^n \bs{1}(T_i \le s), \sum_{j=1}^n \bs{1}(T_j \le t)\right) = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \cov\left[\bs{1}(T_i \le s), \bs{1}(T_j \le t)\right]$ But if $i \ne j$, the variables $\bs{1}(T_i \le s)$ and $\bs{1}(T_j \le t)$ are independent, and hence have covariance 0. On the other hand, $\cov\left[\bs{1}(T_i \le s), \bs{1}(T_i \le t)\right] = \P(T_i \le s, T_i \le t) - \P(T_i \le s) \P(T_i \le t) = \P(T_i \le s) - \P(T_i \le s) \P(T_i \le t) = s - st$ hence $\cov\left[X_n(s), X_n(t)\right] = \frac{1}{n} \sum_{i=1}^n \cov\left[\bs{1}(T_i \le s), \bs{1}(T_i \le t)\right] = s - s t$
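The convergence of the empirical process to the Brownian bridge can be glimpsed numerically. The sketch below (sample size, replication count, and grid points are arbitrary choices) simulates $X_n(t) = \sqrt{n}\left[F_n(t) - t\right]$ for uniform samples and checks the bridge covariance:

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_reps = 500, 2000
grid = np.array([0.25, 0.5, 0.75])

# X_n(t) = sqrt(n) (F_n(t) - t) on a grid, for many independent uniform samples
samples = rng.uniform(size=(n_reps, n))
fn = (samples[:, :, None] <= grid).mean(axis=1)  # empirical CDF at grid points
xn = np.sqrt(n) * (fn - grid)

# Covariance should match min(s, t) - s t of the Brownian bridge
print(np.cov(xn[:, 0], xn[:, 1])[0, 1])  # ~ 0.25 - 0.25 * 0.5 = 0.125
print(xn[:, 1].var())                    # ~ 0.5 - 0.25 = 0.25
```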
$\newcommand{\P}{\mathbb{P}}$ $\newcommand{\E}{\mathbb{E}}$ $\newcommand{\R}{\mathbb{R}}$ $\newcommand{\N}{\mathbb{N}}$ $\newcommand{\Z}{\mathbb{Z}}$ $\newcommand{\bs}{\boldsymbol}$ $\newcommand{\cov}{\text{cov}}$ $\newcommand{\cor}{\text{cor}}$ $\newcommand{\var}{\text{var}}$ $\newcommand{\sd}{\text{sd}}$
Basic Theory
Geometric Brownian motion, and other stochastic processes constructed from it, are often used to model quantities such as population growth and financial processes (for example, the price of a stock over time) that are subject to random noise.
Definition
Suppose that $\bs{Z} = \{Z_t: t \in [0, \infty)\}$ is standard Brownian motion and that $\mu \in \R$ and $\sigma \in (0, \infty)$. Let $X_t = \exp\left[\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma Z_t\right], \quad t \in [0, \infty)$ The stochastic process $\bs{X} = \{X_t: t \in [0, \infty)\}$ is geometric Brownian motion with drift parameter $\mu$ and volatility parameter $\sigma$.
Note that the stochastic process $\left\{\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma Z_t: t \in [0, \infty) \right\}$ is Brownian motion with drift parameter $\mu - \sigma^2 / 2$ and scale parameter $\sigma$, so geometric Brownian motion is simply the exponential of this process. In particular, the process is always positive, one of the reasons that geometric Brownian motion is used to model financial and other processes that cannot be negative. Note also that $X_0 = 1$, so the process starts at 1, but we can easily change this. For $x_0 \in (0, \infty)$, the process $\{x_0 X_t: t \in [0, \infty)\}$ is geometric Brownian motion starting at $x_0$. You may well wonder about the particular combination of parameters $\mu - \sigma^2 / 2$ in the definition. The short answer to the question is given in the following theorem:
Geometric Brownian motion $\bs{X} = \{X_t: t \in [0, \infty)\}$ satisfies the stochastic differential equation $d X_t = \mu X_t \, dt + \sigma X_t \, dZ_t$
Note that the deterministic part of this equation is the standard differential equation for exponential growth or decay, with rate parameter $\mu$.
Run the simulation of geometric Brownian motion several times in single step mode for various values of the parameters. Note the behavior of the process.
Distributions
For $t \in (0, \infty)$, $X_t$ has the lognormal distribution with parameters $\left(\mu - \frac{\sigma^2}{2}\right)t$ and $\sigma \sqrt{t}$. The probability density function $f_t$ is given by $f_t(x) = \frac{1}{\sqrt{2 \pi t} \sigma x} \exp \left(-\frac{\left[\ln(x) - \left(\mu - \sigma^2 / 2\right)t \right]^2}{2 \sigma^2 t} \right), \quad x \in (0, \infty)$
1. $f$ increases and then decreases with mode at $x = \exp\left[\left(\mu - \frac{3}{2} \sigma^2\right)t\right]$
2. $f$ is concave upward, then downward, then upward again with inflection points at $x = \exp\left[(\mu - \sigma^2) t \pm \frac{1}{2} \sigma \sqrt{\sigma^2 t^2 + 4 t}\right]$
Proof
Since the variable $U_t = \left(\mu - \sigma^2 / 2\right) t + \sigma Z_t$ has the normal distribution with mean $(\mu - \sigma^2/2)t$ and standard deviation $\sigma \sqrt{t}$, it follows that $X_t = \exp(U_t)$ has the lognormal distribution with these parameters. These results for the PDF then follow directly from the corresponding results for the lognormal PDF.
In particular, geometric Brownian motion is not a Gaussian process.
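The lognormal distribution of $X_t$ is easy to check against SciPy's parametrization (a sketch; in `scipy.stats.lognorm` the shape parameter is the standard deviation of the underlying normal and the scale is the exponential of its mean):

```python
import numpy as np
from scipy.stats import lognorm, kstest

rng = np.random.default_rng(7)
mu, sigma, t, n = 0.4, 0.3, 2.0, 10_000

# X_t = exp((mu - sigma^2/2) t + sigma Z_t) with Z_t ~ N(0, t)
z = rng.normal(0.0, np.sqrt(t), size=n)
x = np.exp((mu - sigma**2 / 2) * t + sigma * z)

# Matching lognormal: shape s = sigma*sqrt(t), scale = exp((mu - sigma^2/2) t)
dist = lognorm(s=sigma * np.sqrt(t), scale=np.exp((mu - sigma**2 / 2) * t))
print(kstest(x, dist.cdf))       # large p-value: the fit is good
print(x.mean(), np.exp(mu * t))  # both ~ e^{mu t} ~ 2.23
```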
Open the simulation of geometric Brownian motion. Vary the parameters and note the shape of the probability density function of $X_t$. For various values of the parameters, run the simulation 1000 times and compare the empirical density function to the true probability density function.
For $t \in (0, \infty)$, the distribution function $F_t$ of $X_t$ is given by $F_t(x) = \Phi\left[\frac{\ln(x) - (\mu - \sigma^2/2)t}{\sigma \sqrt{t}}\right], \quad x \in (0, \infty)$ where $\Phi$ is the standard normal distribution function.
Proof
Again, this follows directly from the CDF of the lognormal distribution.
For $t \in (0, \infty)$, the quantile function $F_t^{-1}$ of $X_t$ is given by $F_t^{-1}(p) = \exp\left[(\mu - \sigma^2 / 2)t + \sigma \sqrt{t} \Phi^{-1}(p)\right], \quad p \in (0, 1)$ where $\Phi^{-1}$ is the standard normal quantile function.
Proof
This follows directly from the lognormal quantile function.
Moments
For $n \in \N$ and $t \in [0, \infty)$, $\E\left(X_t^n\right) = \exp\left\{\left[n \mu + \frac{\sigma^2}{2}(n^2 - n)\right] t\right\}$
Proof
This follows from the formula for the moments of the lognormal distribution.
In terms of the order of the moment $n$, the dominant term inside the exponential is $\sigma^2 n^2 / 2$. If $n \gt 1 - 2 \mu / \sigma^2$ then $n \mu + \frac{\sigma^2}{2}(n^2 - n) \gt 0$ so $\E(X_t^n) \to \infty$ as $t \to \infty$. The mean and variance follow easily from the general moment result.
For $t \in [0, \infty)$,
1. $\E(X_t) = e^{\mu t}$
2. $\var(X_t) = e^{2 \mu t} \left(e^{\sigma^2 t} - 1\right)$
In particular, note that the mean function $m(t) = \E(X_t) = e^{\mu t}$ for $t \in [0, \infty)$ satisfies the deterministic part of the stochastic differential equation above. If $\mu \gt 0$ then $m(t) \to \infty$ as $t \to \infty$. If $\mu = 0$ then $m(t) = 1$ for all $t \in [0, \infty)$. If $\mu \lt 0$ then $m(t) \to 0$ as $t \to \infty$.
Open the simulation of geometric Brownian motion. The graph of the mean function $m$ is shown as a blue curve in the main graph box. For various values of the parameters, run the simulation 1000 times and note the behavior of the random process in relation to the mean function.
Open the simulation of geometric Brownian motion. Vary the parameters and note the size and location of the mean$\pm$standard deviation bar for $X_t$. For various values of the parameters, run the simulation 1000 times and compare the empirical mean and standard deviation to the true mean and standard deviation.
Properties
The parameter $\mu - \sigma^2 / 2$ determines the asymptotic behavior of geometric Brownian motion.
Asymptotic behavior:
1. If $\mu \gt \sigma^2 / 2$ then $X_t \to \infty$ as $t \to \infty$ with probability 1.
2. If $\mu \lt \sigma^2 / 2$ then $X_t \to 0$ as $t \to \infty$ with probability 1.
3. If $\mu = \sigma^2 / 2$ then $X_t$ has no limit as $t \to \infty$ with probability 1.
Proof
These results follow from the law of the iterated logarithm. Asymptotically, the term $\left(\mu - \sigma^2 / 2\right) t$ dominates the term $\sigma Z_t$ as $t \to \infty$.
It's interesting to compare this result with the asymptotic behavior of the mean function, given above, which depends only on the parameter $\mu$. When the drift parameter is 0, geometric Brownian motion is a martingale.
If $\mu = 0$, geometric Brownian motion $\bs{X}$ is a martingale with respect to the underlying Brownian motion $\bs{Z}$.
Proof from stochastic integrals
This is the simplest proof. When $\mu = 0$, $\bs{X}$ satisfies the stochastic differential equation $d X_t = \sigma X_t \, dZ_t$ and therefore $X_t = 1 + \sigma \int_0^t X_s \, dZ_s, \quad t \ge 0$ The process associated with a stochastic integral is always a martingale, assuming the usual assumptions on the integrand process (which are satisfied here).
Direct proof
Let $\mathscr{F}_t = \sigma\{Z_s: 0 \le s \le t\}$ for $t \in [0, \infty)$, so that $\mathfrak{F} = \{\mathscr{F}_t: t \in [0, \infty)\}$ is the natural filtration associated with $\bs{Z}$. Let $s, \, t \in [0, \infty)$ with $s \le t$. We use our usual trick of writing $Z_t = Z_s + (Z_t - Z_s)$, to take advantage of the stationary and independent increments properties of Brownian motion. Thus, $X_t = \exp\left[-\frac{\sigma^2}{2} t + \sigma Z_s + \sigma (Z_t - Z_s)\right]$ Since $Z_s$ is measurable with respect to $\mathscr{F}_s$ and $Z_t - Z_s$ is independent of $\mathscr{F}_s$ we have $\E\left(X_t \mid \mathscr{F}_s\right) = \exp\left(-\frac{\sigma^2}{2} t + \sigma Z_s\right) \E\left\{\exp\left[\sigma(Z_t - Z_s)\right]\right\}$ But $Z_t - Z_s$ has the normal distribution with mean 0 and variance $t - s$, so from the formula for the moment generating function of the normal distribution, we have $\E\left\{\exp\left[\sigma(Z_t - Z_s)\right]\right\} = \exp\left[\frac{\sigma^2}{2}(t - s)\right]$ Substituting gives $\E\left(X_t \mid \mathscr{F}_s\right) = \exp\left(-\frac{\sigma^2}{2} s + \sigma Z_s\right) = X_s$
Probability theory is concerned with probability, the analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion.
• Area Under the Normal Curve and the Binomial Distribution
Typically the probability distribution does not follow the standard normal distribution, but does follow a general normal distribution. When this is the case, we compute the z-score first to convert it into a standard normal distribution.
• Probability and Independence
For an experiment we define an event to be any collection of possible outcomes. A simple event is an event that consists of exactly one outcome. "or" means the union (i.e., either can occur); "and" means the intersection (i.e., both must occur).
• Probability Distributions
A variable whose value depends upon a chance experiment is called a random variable. Suppose that a person is asked who that person is closest to: their mother or their father. The random variable of this experiment is the boolean variable whose possibilities are {Mother, Father}. A continuous random variable is a variable whose possible outcomes are part of a continuous data set.
• The Central Limit Theorem
Consider the distribution of rolling a die, which is uniform (flat) between 1 and 6. If we roll five dice, we can compute the pdf of the mean. We will see that the distribution becomes more like a normal distribution. That is due to the Central Limit Theorem.
• The Normal Distribution and Control Charts
The Normal Distribution is a special distribution that we will use often; it is a distribution for a continuous random variable.
• The z-score
The number of standard deviations from the mean is called the z-score.
• Tree Diagrams and Counting
A tree diagram is a diagram that branches out and ends in leaves that correspond to the final variety.
Supplemental Modules (Probability)
Typically the probability distribution does not follow the standard normal distribution, but does follow a general normal distribution. When this is the case, we compute the z-score first to convert it into a standard normal distribution. Then we can use the table.
Example $1$
The Tahoe Natural Coffee Shop morning customer load follows a normal distribution with mean 45 and standard deviation 8. Determine the probability that the number of customers tomorrow will be less than 42.
Solution
We first convert the raw score to a z-score. We have
$z =\dfrac{42 - 45}{8} = -0.375$
Next, we use the table to find the probability. The table gives 0.3520. (We have rounded the z-score to $-0.38$.)
We can conclude that
$P(x < 42) = 0.352$
That is, there is about a 35% chance that there will be fewer than 42 customers tomorrow.
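Computations like this can be checked directly with SciPy's normal CDF (a sketch; the small discrepancy from the table value comes from rounding the z-score):

```python
from scipy.stats import norm

# P(X < 42) for X ~ N(45, 8): standardize, or pass loc/scale directly
print(norm.cdf((42 - 45) / 8))        # ~ 0.3538 (the table rounds z to -0.38)
print(norm.cdf(42, loc=45, scale=8))  # same value, no manual z-score needed
```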
Example $2$
A study was done to determine the stress levels that students have while taking exams. The stress level was found to be normally distributed with a mean stress level of 8.2 and a standard deviation of 1.34. What is the probability that at your next exam, you will have a stress level between 9 and 10?
Solution
We want
$P(9 < x < 10)$
We compute the z-scores for each of these
$z_9 = \dfrac{9 - 8.2}{1.34} = 0.60 \qquad z_{10} = \dfrac{10 - 8.2}{1.34} = 1.34$
Now we want
$P(0.60 < z < 1.34)$
This is the "in between" type hence we subtract
$P(0.60 < z < 1.34) = P(z < 1.34) - P(z < 0.60)$
We use the table to get
$P(0.60 < z < 1.34) = 0.9099 - 0.7257 = 0.1842$
We conclude that there is about an 18 percent chance that the stress level will be between nine and ten.
Example $3$
Suppose that your wife is pregnant and due in 100 days. Suppose that the probability density distribution function for having a child is approximately normal with mean 100 and standard deviation 8. You have a business trip and will return in 85 days and have to go on another business trip in 107 days.
1. What is the probability that the birth will occur before your second trip?
2. What is the probability that the birth will occur after you return from your first business trip?
3. What is the probability that you will be there for the birth?
4. You are able to cancel your second business trip, and your boss tells you that you can return home from your first trip so that there is a 99% chance that you will make it back for the birth. When must you return home?
Solution
1. We want
P(x < 107)
We compute the z-score:
$z = \dfrac{107 - 100}{8} = 0.88$
We compute
P(z < .88)
The table on the inside front cover gives us
P(z < .88) = .8106
Hence there is about an 81% chance that the baby will be born before the second business trip.
2. We want
P(x > 85)
We compute the z-score:
$z = \dfrac{85 - 100}{8} = -1.88$
We compute
P(z > -1.88)
The table on the inside front cover gives us
P(z < -1.88) = .0301
We want the complement of this area hence
P(z > -1.88) = 1 - .0301 = .9699
Hence there is about a 97% chance that the baby will be born after the first business trip.
3. Now we want
P( 85 < x < 107)
We see form the picture that this is the middle region. We have
P(85 < x < 107) = P(x < 107) - P(x < 85)
We have already computed these. We have
P(85 < x < 107) = P(x < 107) - P(x < 85) = 81% - 3% = 78%
There is about a 78% chance that you will make it to the birth.
4. This problem asks us to work out the math backwards. We are given the probability and we want the raw score. First, we realize that we if there is a 99% chance that we will make it on time, then there is a 1% chance that we will not. Next, we use the table in reverse. That is, we seek a z-score that gives .01 as the probability.
z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
-2.4 .0082 .0080 .0078 .0075 .0073 .0071 .0069 .0068 .0066 .0064
-2.3 .0107 .0104 .0102 .0099 .0096 .0094 .0091 .0089 .0087 .0084
-2.2 .0139 .0136 .0132 .0129 .0125 .0122 .0119 .0116 .0113 .0110
We search for the probability value that is closest to .01 and find .0102 and .0099. Since .0099 is the closest to .01, we use this value. The corresponding z-score is -2.33. Now we find the x that produces this z. We have
$-2.33 = \dfrac{x - 100}{8}$
Multiply both sides by 8 to get
-18.64 = x - 100
Add 100 to both sides to get
x = 81.36
We must return from our business trip in 81 days.
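The reverse table lookup in part 4 is exactly what the normal quantile function computes. A one-line SciPy check (a sketch):

```python
from scipy.stats import norm

# Find x with P(T < x) = 0.01 for T ~ N(100, 8): the 1st percentile
print(norm.ppf(0.01, loc=100, scale=8))  # ~ 81.4, matching the table computation
```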
Using the Normal Distribution to Approximate the Binomial Distribution
The Binomial Distribution is easy to calculate as long as we only need a few values. However, if we need many values, the computation can be extremely tedious.
Example $4$
Suppose you throw a die 1000 times. What is the probability of having it roll a 6 fewer than 160 times?
Solution
The horrible way of figuring this out is to calculate
$C_{1000,r} \left(\dfrac{1}{6}\right)^r \left(\dfrac{5}{6}\right)^{1000 - r}$
for every r between 0 and 159. We have better things to do with our time than do this. Instead we will approximate the answer. As you may have already guessed, the graph of this distribution is very close to that of a normal distribution.
We give the following theorem
Theorem: Normal Approximation to the Binomial Distribution
If a binomial distribution with probability of success p and failure q and n trials is such that
$np > 5$
$nq > 5$
Then the distribution can be approximated by a normal distribution with mean
$\mu = np$
and standard deviation
$\sigma = \sqrt{npq}$
Now we can continue with our example.
We have
$np = (1000)(1/6) = 166.67 > 5$
and
$nq = (1000)(5/6) = 833 > 5$
Thus we can use the normal distribution.
We have
$\mu = np = (1000)(1/6) = 166.67$
and
$npq = (1000)(1/6)(5/6) = 138.89$
Taking a square root gives
$\sigma = 11.79$
Now we can compute the z-score, since we want P(x < 160). We have
$z = \dfrac{160 - 166.67}{11.79} = -0.57$
Now we use the table to find the probability. We get .2843. Thus there is about a 28% chance that we will roll a six fewer than 160 times.
Continuity Correction
We can achieve a slightly more accurate approximation with what is called the continuity correction. We looked at P(x < 160). However this is the same as P(x < 159.5) or any such fraction. When using the normal distribution to approximate the binomial distribution, we correct by this 0.5 value.
Example $4$
Each year a squirrel has a 35% chance of surviving the winter. Suppose in a patch of land there are 200 squirrels. What is the probability that between 65 and 80 of these squirrels will survive the winter?
Solution
We first check
np = (200)(.35) = 70 > 5 and nq = (200)(.65) = 130 > 5
Thus we can use the normal curve approximation.
We have
$\mu = np = (200)(.35) = 70$
and
$npq = (200)(.35)(.65) = 45.5$
Taking a square root gives
$\sigma = 6.7$
Instead of using P(65 < x < 80), we use the continuity correction and find P(64.5 < x < 80.5). We compute the two z-scores.
$z = \dfrac{64.5 - 70}{6.7} = -0.82$
and
$z = \dfrac{80.5 - 70}{6.7} = 1.57$
Now we use the table to find the probabilities. We get .2061 and .9418. Since we want the middle area, we subtract these
$0.9418 - 0.2061 = 0.7357$
Thus there is about a 73% chance that there will be between 65 and 80 surviving squirrels. | textbooks/stats/Probability_Theory/Supplemental_Modules_(Probability)/Area_Under_the_Normal_Curve_and_the_Binomial_Distribution.txt |
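Both the exact binomial probability and the continuity-corrected normal approximation are a few lines in SciPy (a sketch using the numbers from this example):

```python
from scipy.stats import binom, norm

n, p = 200, 0.35
mu, sd = n * p, (n * p * (1 - p)) ** 0.5  # 70 and ~6.7

# Exact binomial P(65 <= X <= 80) versus the continuity-corrected normal value
exact = binom.cdf(80, n, p) - binom.cdf(64, n, p)
approx = norm.cdf(80.5, mu, sd) - norm.cdf(64.5, mu, sd)
print(exact, approx)  # both ~ 0.73
```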
Variables
A variable whose value depends upon a chance experiment is called a random variable. Suppose that a person is asked who that person is closest to: their mother or their father. The random variable of this experiment is the boolean variable whose possibilities are {Mother, Father}. A continuous random variable is a variable whose possible outcomes are part of a continuous data set.
The random variable that represents the height of the next person who walks in the room is a continuous random variable, while the random variable that represents the number rolled on a six sided die is not a continuous random variable. A random variable that is not continuous is called a discrete random variable.
Probability Distributions
Example $1$
Suppose we toss two dice. We will make a table of the probabilities for the sum of the dice. The possibilities are:
2,3,4,5,6,7,8,9,10,11,12.
Probability Distribution Table
$x$ 2 3 4 5 6 7 8 9 10 11 12
$P(x)$ 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36
Exercise $1$
Suppose that you buy a raffle ticket for \$5. If 1,000 tickets are sold and there are 10 third place winners of \$25, three second place winners of \$100, and 1 grand prize winner of \$2,000, construct a probability distribution table. Do not forget that if you have the \$25 ticket, you will have won \$20.
Expected Value (Mean)
Example $2$: Insurance
When we buy insurance in black jack, we lose the insurance bet if the dealer does not have black jack and win twice the bet if the dealer does have black jack. Suppose you have \$20 wagered and that you have a king and a 9 and the dealer has an ace. Should you buy insurance for \$10?
Solution
We construct a probability distribution table
$x$ $P(x)$
-10 34/49
20 15/49
(There are 49 cards that haven't been seen; 15 of them are ten-valued cards (10s, jacks, queens, and kings), which give the dealer black jack, and the other 34 are non-tens.)
We define the
expected value $= \sum x \, P(x)$
We calculate:
$-10(34/49) + 20(15/49) = -40/49$
Hence the expected value is negative, so we should not buy insurance. What if I am playing with my wife? My cards are a 2 and a 6 and my wife's are a 7 and a 4. Should I buy insurance? We have:
$x$ $P(x)$
-10 31/47
20 16/47

We calculate:

$-10(31/47) + 20(16/47) = 10/47 = 0.21$

Hence my expected value is positive so that I should buy insurance.

Standard Deviation

We compute the standard deviation for a probability distribution function the same way that we compute the standard deviation for a sample, except that after squaring $x - m$, we multiply by $P(x)$. Also we do not need to divide by $n - 1$.
Consider the second insurance example:
$x$ $P(x)$ $x - \overline{x}$ $(x - \overline{x})^2$
-10 31/47 -10.21 104
20 16/47 19.79 392
Hence the variance is
$104(31/47) + 392(16/47) = 202$
and the standard deviation is the square root of the variance, which is 14.2.
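Both calculations fit in a few lines of code (a sketch; the variable names are ours):

```python
import math

# (value, probability) pairs from the second insurance table
table = [(-10, 31 / 47), (20, 16 / 47)]

mean = sum(x * p for x, p in table)
variance = sum(p * (x - mean) ** 2 for x, p in table)

print(round(mean, 2))                 # 0.21
print(round(math.sqrt(variance), 1))  # 14.2
```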
Combining Distributions
If we have two distributions with independent random variables $x$ and $y$, and if $a$ and $b$ are constants, then if
$L = a + bx$ and $W = ax + by$
then
1. $\mu_L = a + b\,\mu_x$
2. $\sigma_L^2 = b^2\,\sigma_x^2$
3. $\sigma_L = |b|\, \sigma_x$
4. $\mu_W = a\,\mu_x + b\,\mu_y$
5. $\sigma_W^2 = a^2\,\sigma_x^2 + b^2\,\sigma_y^2$
6. $\sigma_W = \sqrt{a^2\, \sigma_x^2 + b^2\, \sigma_y^2}$
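These rules translate directly into a small helper. The sketch below (function and variable names are ours) assumes $x$ and $y$ are independent; the two calls preview the gambling examples that follow:

```python
import math

def combine(a, b, mean_x, sd_x, mean_y, sd_y):
    """Mean and standard deviation of W = a*x + b*y for independent x and y."""
    mean_w = a * mean_x + b * mean_y
    sd_w = math.sqrt(a**2 * sd_x**2 + b**2 * sd_y**2)
    return mean_w, sd_w

print(combine(1, 1, 7, 3, 4, 2))      # (11, 3.605...) = (11, sqrt(13))
print(combine(100, 200, 7, 3, 4, 2))  # (1500, 500.0)
```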
Example $3$
Gamblers who played both black jack and craps were studied, and it was found that the average amount of black jack playing per weekend was 7 hours with a standard deviation of 3 hours. The average amount of craps play was 4 hours with a standard deviation of 2 hours. What are the mean and standard deviation for the total amount of gaming?
Solution
Here $a$ and $b$ are 1 and 1. The mean is just
$7 + 4 = 11$
and the standard deviation is just
$\sqrt{3^2 + 2^2} = \sqrt{13}$
Example $4$
If each player spends about \$100 per hour on black jack and \$200 per hour on craps, what will be the mean and standard deviation for the amount of money that the casino wins per person?
Solution
Here a and b are 100 and 200. The mean is
$100(7) + 200(4) = 1,500$
and the standard deviation is
$\sqrt{(100^2)(3^2)+(200^2)(2^2)}=\sqrt{250{,}000}=500$
Example $5$
If the players spend \$150 on the hotel, find the mean and standard deviation of the total amount of money that the players spend.
Here
$L = 150 + x$
where $x$ is the total from Example $4$. Hence the mean is
$150 + 1500 = 1,650$
and the standard deviation is the same as in Example $4$ since the coefficient of $x$ is 1.
The Binomial Distribution
There is a type of distribution that occurs so frequently that it has a special name. We call a distribution a binomial distribution if all of the following are true
1. There are a fixed number of trials, $n$, which are all independent.
2. The outcomes are Boolean, such as True or False, yes or no, success or failure.
3. The probability of success is the same for each trial.
For a binomial distribution with $n$ trials with the probability of success $p$ and failure $q$, we have
$P(r \text { successes}) = C_{n,r}\, p^r \,q^{n-r}$
Example $6$
Suppose that each time you take a free throw shot, you have a 25% chance of making it. If you take 15 shots, what is the probability of making exactly 5 of them?
Solution
We have $n = 15$, $r = 5$, $p = 0.25$, and $q = 0.75$
Compute
$C_{15,5}\, 0.25^5 \,0.75^{10} = 0.165$
There is a 16.5 % chance of making exactly 5 shots.
Example $7$
What is the probability of making fewer than 3 shots?
Solution
The possible outcomes that will make this happen are 2 shots, 1 shot, and 0 shots. Since these are mutually exclusive, we can add these probabilities.
$C_{15,2} \, 0.25^2\, 0.75^{13} + C_{15,1}\, 0.25^1\, 0.75^{14} + C_{15,0}\, 0.25^0 \,0.75^{15}$
$= 0.156 + 0.067 + 0.013 = 0.236$
There is about a 24% chance of sinking fewer than 3 shots.
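Both free-throw answers reduce to the same formula; a sketch using `math.comb` (Python 3.8+; the function name is ours):

```python
from math import comb

n, p, q = 15, 0.25, 0.75

def pmf(r):
    """P(exactly r successes) for this binomial distribution."""
    return comb(n, r) * p**r * q**(n - r)

print(round(pmf(5), 3))                         # 0.165
print(round(sum(pmf(r) for r in range(3)), 3))  # 0.236
```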
For an experiment we define an event to be any collection of possible outcomes. A simple event is an event that consists of exactly one outcome.
• "or" means the union (i.e. either can occur)
• "and" means intersection (i.e. both must occur)
Two events are mutually exclusive if they cannot occur simultaneously. In a Venn diagram, we can tell that two events are mutually exclusive if their regions do not intersect.
Definition: Probability
We define the probability of an event $E$, assuming all outcomes are equally likely, to be
$P(E) = \dfrac{\text{number of simple events within E}}{\text{ total number of possible outcomes}}$
We have the following:
1. $P(E)$ is always between 0 and 1.
2. The sum of the probabilities of all simple events must be 1.
3. $P(E) + P(\text{not } E) = 1$
4. If $E$ and $F$ are mutually exclusive then
$P(E \text{ or } F) = P(E) + P(F)$
The Difference Between "and" and "or"
If $E$ and $F$ are events then we use the terminology
$E \text{ and } F$
to mean all outcomes that belong to both $E$ and $F$.
We use the terminology
$E \text{ or } F$
to mean all outcomes that belong to either $E$ or $F$.
Below is an example of two sets, $A$ and $B$, graphed in a Venn diagram.
The green area represents $A$ and $B$ while all areas with color represent $A$ or $B$
Example $1$
Our Women's Volleyball team is recruiting for new members. Suppose that a person inquires about the team.
• Let $E$ be the event that the person is female
• Let $F$ be the event that the person is a student
then $E$ and $F$ represents the qualifications for being a member of the team. Note that $E$ or $F$ is not enough.
We define
Definition: Conditional Probability
$P(E|F) = \dfrac{P(E \text{ and } F)}{P(F)}$
We read the left hand side as "The probability of event $E$ given event $F$ occurred."
We call two events independent if the following definitions hold.
Definition: Independence
For independent Events
$P(E|F) = P(E) \label{1a}$
Equivalently, we can say that $E$ and $F$ are independent if
Definition: The Multiplication Rule
For Independent Events
$P(E \text{ and } F) = P(E)P(F) \label{1b}$
Example $2$
Consider rolling two dice. Let
• $E$ be the event that the first die is a 3.
• $F$ be the event that the sum of the dice is an 8.
Then $E$ and $F$ means that we rolled a 3 on the first die and a 5 on the second.
This probability is 1/36 since there are 36 possible pairs and only one of them is (3,5)
We have
$P(E) = 1/6$
And note that (2,6),(3,5),(4,4),(5,3), and (6,2) give $F$
Hence
$P(F) = 5/36$
We have
$P(E) P(F) = (1/6) (5/36)$
which is not 1/36 and we can conclude that $E$ and $F$ are not independent.
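A brute-force check of this dependence (a sketch; the names are ours), which can be reused for the exercise that follows:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # 36 equally likely pairs

def prob(event):
    return Fraction(sum(1 for r in rolls if event(r)), len(rolls))

E = lambda r: r[0] == 3         # first die is a 3
F = lambda r: r[0] + r[1] == 8  # sum of the dice is 8

print(prob(lambda r: E(r) and F(r)))  # 1/36
print(prob(E) * prob(F))              # 5/216, not 1/36, so E and F are dependent
```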
Exercise $2$
Test the following two events for independence:
• $E$ the event that the first die is a 1.
• $F$ the event that the sum is a 7.
A Counting Rule
For two events, $E$ and $F$, we always have
$P(E \text{ or } F) = P(E) + P(F) - P(E \text{ and } F) \label{2}$
Example $3$
Find the probability of selecting either a heart or a face card from a 52 card deck.
Solution
We let
• $E$ = the event that a heart is selected
• $F$ = the event that a face card is selected
then
$P(E) = \dfrac{1}{4}$
and
$P(F) = \dfrac{3}{13}$
that is, Jack, Queen, or King out of 13 different cards of one kind.
$P(E \text{ and } F) = \dfrac{3}{52}$
The counting rule formula (eq. 2) gives
$P(E \text{ or } F) = \dfrac{1}{4} + \dfrac{3}{13} - \dfrac{3}{52} = \dfrac{22}{52} \approx 42\%$
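The same answer falls out of enumerating a deck (a sketch; the card encoding is ours):

```python
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(r, s) for r in ranks for s in suits]  # 52 cards

hits = sum(1 for r, s in deck if s == 'hearts' or r in ('J', 'Q', 'K'))
print(Fraction(hits, 52))  # 22/52 = 11/26, about 42%
```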
Most of the time the population mean and population standard deviation are impossible or too expensive to determine exactly. Two of the major tasks of a statistician are to approximate the mean and to analyze how accurate the approximation is. The most common way of accomplishing this is by sampling. Out of the entire population the researcher obtains a (hopefully random) sample and uses it to make inferences about the population. From the sample the statistician computes several numbers, such as the sample size, the sample mean, and the sample standard deviation. Numbers computed from the sample are called statistics, and they are used to make inferences about the underlying population.
Example $1$
How many cups of coffee do you drink each week?
Solution
If we asked this question to two different five person groups, we will probably get two different sample means and two different sample standard deviations. Choosing different samples from the same population will produce different statistics. The distribution of all possible samples is called the sampling distribution.
The Five Dice Experiment
Consider the distribution of rolling a die, which is uniform (flat) between 1 and 6. If we roll five dice, we can compute the pdf of the mean. We will see that the distribution becomes more like a normal distribution.
Definition: Central Limit Theorem
Let $\bar{x}$ denote the mean of a random sample of size $n$ from a population having mean $\mu$ and standard deviation $\sigma$. Let
• $\mu_{\bar{x}}$ = the mean value of $\bar{x}$ and
• $\sigma_{\bar{x}}$ = the standard deviation of $\bar{x}$
then
1. $\mu_{\bar{x}} = \mu$
2. $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$
3. When the population distribution is normal, so is the distribution of $\bar{x}$ for any $n$.
4. For large $n$, the distribution of $\bar{x}$ is approximately normal regardless of the population distribution ($n > 30$ counts as large).
Example $2$: Slot Machine
Suppose that we play a slot machine such that you either double your bet or lose your bet. If there is a 45% chance of winning, then the expected value for a dollar wager is
$1(0.45) + (-1)(0.55) = -0.1 \nonumber$
We can compute the standard deviation:
$x$ $p(x)$ $(x - m)^2$ $p(x)(x - m)^2$
1 0.45 1.21 0.545
-1 0.55 0.81 0.446
Total 0.991
So the standard deviation is
$\sigma = \sqrt{0.991} = 0.995 \nonumber$
If we throw 100 silver dollars into the slot machine then we expect to average a loss of ten cents with a standard deviation of
$\sigma_{\bar{x}} =\dfrac{0.995}{\sqrt{100}}=0.0995 \nonumber$
Notice that the standard deviation is very small. This is why the casinos are assured to make money. Now let us find the probability that the gambler does not lose any money, that is the mean is greater than or equal to 0.
We first compute the z-score. We have
$Z = \dfrac{0-(-0.1)}{0.0995} = 1.01 \nonumber$
Now we go to the table to find the associated probability. We get 0.8438. Since we want the area to the right, we subtract from 1 to get
$P(z > 1.01) = 1 - P(z < 1.01) = 1 - 0.8438 = 0.1562\nonumber$
There is about a 16% chance that the gambler will not lose.
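The same answer via the sampling distribution in code (a sketch; the names are ours):

```python
import math
from statistics import NormalDist

mean, sd, n = -0.1, 0.995, 100
se = sd / math.sqrt(n)  # standard deviation of the sample mean

# P(sample mean >= 0): the gambler breaks even or better
print(round(1 - NormalDist(mean, se).cdf(0), 4))  # about 0.157
```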
Distributions for Proportions
The last example was a special case of proportions, that is, Boolean data. From now on, we can use the following theorem.
Central Limit Theorem (for Proportions)
Let $p$ be the probability of success, $q$ be the probability of failure. The sampling distribution for samples of size $n$ is approximately normal with mean
$\mu_{\overline{p}} = p$
and
$\sigma _ {\overline{p}} = \sqrt{\dfrac{pq}{n}}$
Example $3$
The new Endeavor SUV has been recalled because 5% of the cars experience brake failure. The Tahoe dealership has sold 200 of these cars. What is the probability that fewer than 4% of the cars from Tahoe experience brake failure?
Solution
We have $p = 0.05$, $q = 0.95$ and $n = 200$
We have
$\mu_{\overline{p}} = p = 0.05 \nonumber$
$\sigma_{\overline{p}} = \sqrt{\dfrac{0.05 \cdot 0.95}{200}} = 0.0154 \nonumber$
Next we want to find
$P(x < 8) \nonumber$
Using the continuity correction, we find instead
$P(x < 7.5) \nonumber$
This is equivalent to
$P(p < 7.5/200) = P(p < 0.0375) \nonumber$
We find the z-score
$z = \dfrac{0.0375 - 0.05}{0.0154} = -0.81 \nonumber$
The table gives a probability of 0.2090. We can conclude that there is about a 21% chance that fewer than 4% of the cars will experience brake failure.
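The whole brake-failure calculation, continuity correction included (a sketch; the names are ours):

```python
import math
from statistics import NormalDist

p, q, n = 0.05, 0.95, 200
sd = math.sqrt(p * q / n)

# P(fewer than 8 failing cars), corrected to P(p_hat < 7.5/200)
print(round(NormalDist(p, sd).cdf(7.5 / n), 4))  # about 0.21
```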
Charts for Proportions
For problems associated with proportions, we can use control charts, remembering that the Central Limit Theorem tells us how to find the mean and standard deviation.
Example 4
Heavenly Ski resort conducted a study of falls on its advanced run over twelve consecutive ten minute periods. At each ten minute interval there were 40 boarders on the run. The data is shown below:
Time 1 2 3 4 5 6 7 8 9 10 11 12
$r$ 14 18 11 16 19 22 6 12 13 16 9 17
$r/40$ 0.35 0.45 0.275 0.4 0.475 0.55 0.15 0.3 0.325 0.4 0.225 0.425
Make a P-Chart and list any out of control signals by type (I, II, III).
Solution
First we find $p$ by dividing the total number of falls by the total number of skiers:
$p = \dfrac{173}{12(40)} = 0.36 \nonumber$
Now we compute the standard deviation
$\sigma = \sqrt{\dfrac{pq}{n}} = \sqrt{\dfrac{(0.36)(0.64)}{40}} = 0.08 \nonumber$
The values two and three standard deviations above and below the mean are
$0.36 - (2)(0.08) = 0.20$
$0.36 - (3)(0.08) = 0.12$
$0.36 + (2)(0.08) = 0.52$
$0.36 + (3)(0.08) = 0.60$
Now we can use this data as before to construct a control chart and determine any out of control signals.
Notice that no nine consecutive points lie on one side of the center line, no two of three consecutive points lie above 0.52 or below 0.20, and no points lie below 0.12 or above 0.60. Hence this data is in control.
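The center line and control limits are quick to compute (a sketch; small differences from the text come from rounding $\sigma$ to 0.08 before multiplying):

```python
import math

falls = [14, 18, 11, 16, 19, 22, 6, 12, 13, 16, 9, 17]
n = 40  # boarders per ten-minute period

p_bar = sum(falls) / (len(falls) * n)       # 0.36 to two decimals
sigma = math.sqrt(p_bar * (1 - p_bar) / n)  # about 0.076

for k in (2, 3):
    print(f"{k}-sigma limits: {p_bar - k*sigma:.2f} to {p_bar + k*sigma:.2f}")
```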
The Normal Distribution
There is a special distribution that we will use often and is a distribution for a continuous random variable that has the following properties:
1. It is symmetric about the mean
It approaches the horizontal axis on both the left and right sides without touching; that is, the x-axis is a horizontal asymptote.
3. It is bell shaped with transition points one standard deviation from the mean.
4. Approximately 68% of the data points lie within one standard deviation of the mean.
5. Approximately 95% of the data points lie within two standard deviations of the mean.
6. Approximately 99.7% of the data points lie within three standard deviations of the mean.
Example $1$
You are the manager at a new toy store and want to determine how many Monopoly games to stock in your store. The mean number of Monopoly games that sell per month is 22 with a standard deviation of 6. Assume that this distribution is normal.
What is the probability that next month you will sell between 10 and 34 games?
Solution
We notice that
$22 - 2(6) = 10$
and
$34 = 22 + 2(6)$
We want to know what the probability is that the outcome lies within two standard deviations of the mean. Property 5 says that this percent is about 95%.
Example $2$
If you stock 45 games, should you feel secure about not running out?
Solution
Since three standard deviations above the mean is
$22 + 3(6) = 40$
and 45 is above that, there is a less than 0.3% chance of running out. You should feel very secure.
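The empirical-rule answers can be checked against exact normal probabilities (a sketch, standard library only; the variable name is ours):

```python
from statistics import NormalDist

games = NormalDist(22, 6)

# Example 1: P(10 < x < 34), i.e. within two standard deviations
print(round(games.cdf(34) - games.cdf(10), 4))  # about 0.9545

# Example 2: chance of demand exceeding a stock of 45 games
print(round(1 - games.cdf(45), 5))  # far below 0.003
```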
Control Charts
We often want to determine if things are beginning to stray from the norm as time goes on.
Example $3$
It has been determined that the mean number of errors that medical staff at a hospital makes is 0.002 per hour with a standard deviation of 0.0003. The medical board wanted to determine if long working hours were related to mistakes. During the day, the medical staff was observed to see when they made mistakes. The table illustrates the findings.
Hours Worked 1 2 3 4 5 6 7 8 9 10
Mistakes per Hour 0.0019 0.0022 0.0015 0.0017 0.0020 0.0022 0.0018 0.0028 0.0019 0.0027
It is difficult to see a trend from just looking at the table and we will create a chart that better illustrates the trends. We call the system out of control if at least one of the following three events occur:
• Out of Control Signal 1: At least one point falls beyond the $3\sigma$ level
• Out of Control Signal 2: A run of nine consecutive points is on the same side of the center line (usually the mean).
• Out of Control Signal 3: At least two of three consecutive points lie beyond the $2\sigma$ level on same side of the center line (usually the mean).
For our example we have
$\mu + \sigma = 0.002 + 0.0003 = 0.0023$
$\mu - \sigma = 0.002 - 0.0003 = 0.0017$
$\mu + 2\sigma = 0.002 + 0.0006 = 0.0026$
$\mu - 2\sigma = 0.002 - 0.0006 = 0.0014$
$\mu + 3\sigma = 0.002 + 0.0009 = 0.0029$
$\mu - 3\sigma = 0.002 - 0.0009 = 0.0011$
We now graph the points on a control chart.
We can see that two of the last three data points lie beyond two standard deviations above the mean, which gives an out-of-control warning signal (Signal 3). This information should make the hospital administration wary about long hours.
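The three signals can also be checked programmatically; a sketch (ours) that follows the definitions above:

```python
mu, sigma = 0.002, 0.0003
data = [0.0019, 0.0022, 0.0015, 0.0017, 0.0020,
        0.0022, 0.0018, 0.0028, 0.0019, 0.0027]

# Signal 1: any point beyond 3 sigma from the mean
signal1 = any(abs(x - mu) > 3 * sigma for x in data)

# Signal 2: nine consecutive points on the same side of the center line
signal2 = any(all(x > mu for x in data[i:i + 9]) or
              all(x < mu for x in data[i:i + 9])
              for i in range(len(data) - 8))

# Signal 3: two of three consecutive points beyond 2 sigma on the same side
def side(x):
    return 1 if x > mu + 2 * sigma else -1 if x < mu - 2 * sigma else 0

signal3 = any(sum(1 for x in data[i:i + 3] if side(x) == s) >= 2
              for i in range(len(data) - 2) for s in (1, -1))

print(signal1, signal2, signal3)  # False False True
```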
The Standard Normal Distribution
The standard normal distribution is the normal distribution with mean 0 and standard deviation 1.

Notice that the standard normal distribution is perfectly symmetric about 0. If a distribution is normal but not standard, we can use the standard normal distribution table by first finding how many standard deviations away the number is from the mean.
The z-score
The number of standard deviations from the mean is called the z-score and can be found by the formula
$z = \dfrac {x-\mu}{\sigma} \label{zscore}$
Example $1$
Find the z-score corresponding to a raw score of 132 from a normal distribution with mean 100 and standard deviation 15.
Solution
Using Equation \ref{zscore}, we compute
$z = \dfrac{132 - 100}{15} = 2.133 \nonumber$
Example $2$
A z-score of 1.7 was found from an observation coming from a normal distribution with mean 14 and standard deviation 3. Find the raw score.
Solution
Plugging these into Equation \ref{zscore}, we have
$1.7 = \dfrac{x-14}{3} \nonumber$
To solve this we just multiply both sides by the denominator 3,
\begin{align*} (1.7)(3) &= x - 14 \\[4pt] 5.1 &= x - 14 \\[4pt] x &= 19.1 \end{align*}
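Both directions of the conversion are one-line functions (a sketch; the function names are ours):

```python
def z_score(x, mu, sigma):
    return (x - mu) / sigma

def raw_score(z, mu, sigma):
    return z * sigma + mu

print(round(z_score(132, 100, 15), 3))  # 2.133
print(round(raw_score(1.7, 14, 3), 1))  # 19.1
```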
The z-score and Area
Often we want to find the probability that a z-score will be less than a given value, greater than a given value, or in between two values. To accomplish this, we use the table from the textbook and a few properties about the normal distribution.
Example $3$
Find
$P(z < 2.37) \nonumber$
Solution
We use the table. Notice the picture on the table has shaded region corresponding to the area to the left (below) a z-score. This is exactly what we want. Below are a few lines of the table.
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
The rows correspond to the ones and tenths digits of the z-score and the columns correspond to the hundredths digit. For our problem we want the row 2.3 (from 2.37) and the column .07 (from 2.37). The number in the table that matches this is 0.9911. Hence
$P(z < 2.37) = 0.9911 \nonumber$
Example $4$
Find
$P(z > 1.82) \nonumber$
Solution
In this case, we want the area to the right of 1.82.
This is not what is given in the table. We can use the identity
$P(z > 1.82) = 1 - P(z < 1.82) \nonumber$
reading the table gives
$P(z < 1.82) = 0.9656 \nonumber$
Our answer is
$P(z > 1.82) = 1 - .9656 = 0.0344 \nonumber$
Example $5$
Find
$P(-1.18 < z < 2.1) \nonumber$
Solution
Once again, the table does not exactly handle this type of area.
However, the area between -1.18 and 2.1 is equal to the area to the left of 2.1 minus the area to the left of -1.18. That is
$P(-1.18 < z < 2.1) = P(z < 2.1) - P(z < -1.18) \nonumber$
To find $P(z < 2.1)$ we rewrite it as $P(z < 2.10)$ and use the table to get
$P(z < 2.10) = 0.9821. \nonumber$
The table also tells us that
$P(z < -1.18) = 0.1190 \nonumber$
Now subtract to get
$P(-1.18 < z < 2.1) = 0.9821 - 0.1190 = 0.8631 \nonumber$
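All three table lookups can be reproduced with the standard normal cdf (a sketch; `statistics.NormalDist` requires Python 3.8+):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

print(round(z.cdf(2.37), 4))                # 0.9911
print(round(1 - z.cdf(1.82), 4))            # 0.0344
print(round(z.cdf(2.1) - z.cdf(-1.18), 4))  # 0.8631
```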
We have seen that probability is defined by
$P(E) = \dfrac {\text{Number in E}}{\text{Number in the Sample Space}}$
Although this formula appears simple, counting the number of outcomes in each can prove to be a challenge. Visual aids will help us immensely.
Example $1$: Tree Diagrams
A native flowering plant has several varieties. The color of the flower can be red, yellow, or white. The stems can be long or short and the leaves can be thorny, smooth, or velvety. Show all varieties.
Solution
We use a tree diagram, which is a diagram that branches out and ends in leaves that correspond to the final variety. The picture below shows this.
To read this tree diagram, we begin from start, then move along the branches collecting words until we get to the end. For example,
• Always taking the upper path leads to the selection of a red long thorny plant.
Always taking the lower path leads to a white short velvety plant. We can count the total number of leaves (path endings) and get that there are 18 possible varieties.
Counting the leaves that came from long stems tells us that there are 9 possible long stemmed varieties.
Example $2$
A committee of three republican senators and four democratic senators is selected to investigate corporate securities fraud. Out of this committee two members are to be selected at random for a subcommittee on the energy sector.
1. What is the probability that both members will be republican?
2. What is the probability that both members will be democrat?
3. What is the probability of one of each?
Solution
We write a tree diagram
In this tree diagram, D represents democrat and R represents republican. The probabilities are given in the diagram.
To answer part A, we need to find
$P(\text{ first is } R \text{ and second is } R) \nonumber$
This corresponds to the bottom leaf. As we travel to the bottom leaf, we pick up the two numbers
$P(R \text{ and } R) = \dfrac{3}{7}\dfrac{1}{3} = \dfrac{1}{7}\nonumber$
To answer part B, we need to find
$P(\text{ first is } D \text{ and second is } D) \nonumber$
This corresponds to the top leaf. We have
$P(D \text{ and } D) = \dfrac{4}{7}\dfrac{1}{2} = \dfrac{2}{7} \nonumber$
To answer the part C we add the two middle leaves
\begin{align*} P((D \text{ and } R) \text{ or } (R \text{ and } D)) &= \dfrac{4}{7}\dfrac{1}{2} + \dfrac{3}{7}\dfrac{2}{3} \\[5pt] &= \dfrac{2}{7} + \dfrac{2}{7} \\[5pt] &= \dfrac{4}{7} \end{align*}
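All three committee answers can be confirmed by enumerating ordered picks (a sketch; the names are ours):

```python
from fractions import Fraction
from itertools import permutations

committee = ['R'] * 3 + ['D'] * 4
picks = list(permutations(committee, 2))  # 42 equally likely ordered pairs

def prob(pair):
    return Fraction(picks.count(pair), len(picks))

print(prob(('R', 'R')))                     # 1/7
print(prob(('D', 'D')))                     # 2/7
print(prob(('D', 'R')) + prob(('R', 'D')))  # 4/7
```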
Permutations
Example $3$: Permutation
Suppose that 40 women try out for the newest play that has an all women cast of seven. You are the director. How many choices do you have?
Solution
The way to work this problem out is to consider the main role first. You have 40 choices for the main role. For the lead supporting actor there are 39 left to select from. For the next role there are 38 to select from. Now following this pattern and consider that there are seven in the cast gives a total number of choices as
$40 \cdot 39 \cdot 38 \cdot 37 \cdot 36 \cdot 35 \cdot 34$
We could multiply these all out, however there is an easier way. We can write
$40 \cdot 39 \cdot 38 \cdot 37 \cdot 36 \cdot 35 \cdot 34 \left( \dfrac{33 \cdot 32 \cdot 31 \cdots 2 \cdot 1}{ 33 \cdot 32 \cdot 31 \cdots 2 \cdot 1} \right) = \dfrac{40!}{(40-7)!}$
This expression has a special notation. We write
$P_{40,7} = \dfrac{40!}{(40-7)!} = 93{,}963{,}542{,}400 \nonumber$
We can see that there are plenty of choices. In general we write
$P_{n,r} = \dfrac{n!}{(n-r)!} \nonumber$
This is called a permutation.
Combinations
Example $4$: Combination
How many 5 card poker hands are there?
Solution
We can solve this in a similar way as the prior question. We are selecting 5 cards out of 52 total. Unfortunately this is not quite a permutation since, for example, the hand
2H 3H 4H 5H 6H
is the same as the hand
3H 5H 2H 6H 4H
where "H" means hearts. That is the order at which the cards are dealt does not matter. The number of ways of ordering 5 cards is 5! (five choices for the first card, four left for the second, three left for the third, two for the fourth, and one for the fifth). We divide by this number to get our solution
$= \dfrac{52!}{(52-5)! \; 5!} \nonumber$
We write this with the notation
$C_{52,5} = \dfrac{52!}{(52-5)! \; 5!} = 2,598,960 \nonumber$
In general, we have
$C_{n,k} = \dfrac{n!}{(n-k)! \; k!} \nonumber$
and call this a combination.
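Python's standard library implements both counts directly (`math.perm` and `math.comb`, available from Python 3.8):

```python
from math import comb, perm

print(perm(40, 7))  # 93,963,542,400 ways to cast the seven roles
print(comb(52, 5))  # 2,598,960 five-card poker hands
```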
Example $5$: Probability
The following was taken from the California state lottery web site:
"SuperLottoPlus is your chance to win millions of dollars! The jackpot ranges from $7 million to$50 million or more. The jackpot rolls over and grows whenever there is no winner. All you have to do is pick five numbers from 1 to 47 and one MEGA number from 1 to 27 and match them to the numbers drawn by the Lottery every Wednesday and Saturday."
What is the probability of winning the lottery?
Solution
There is only one element in the event space. Your numbers. For the sample space, first they pick 5 numbers from 47. There are
$C_{47,5} =1,533,939 \nonumber$
ways of doing this. Next they select a number from 1 to 27. There are 27 ways of doing this. We multiply to get
$1,533,939\; \times \; 27 = 41,416,353 \nonumber$
So your chances are worse than one in forty-million.
Learning Objectives
• Identify the unique attributes of major modern graphic design styles, beginning with William Morris. The design styles discussed will be those that have a presence or an influence in our current visual culture:
• Morris
• Werkbund
• Bauhaus
• Dada
• International Typographic Style (ITS)
• Late Modern
• Post Modern
• Evaluate the influence of past design styles on one another
• Explain the influence of culture on major modern graphic design styles
• Identify the cross-cultural influences of visual culture that impacted graphic design style
• Identify the technological influences that affected and advanced graphic design
Industrial Revolution Overview
The Craftsman
Before the Industrial Revolution (1760-1840 in Britain) most aspects of design and all aspects of production were commonly united in the person of the craftsman. The tailor, mason, cobbler, potter, brewer, and any other kind of craftsman integrated their personal design aesthetic into each stage of product development. In print, this meant that the printer designed the fonts, the page size, and the layout of the book or broadsheet; the printer chose (even at times made) the paper and ran the press and bindery. Unity of design was implicit.
Typography in this pre-industrial era was predominantly used for books and broadsheets. The visual flavour of the fonts was based on the historic styles of western cultural tradition — roman, black letter, italic, and grotesque fonts were the mainstay of the industry. Typography was naturally small scale — needed only for sheets and pages — and was only large when it was chiseled into buildings and monuments.
Technological Shift
The Industrial Revolution radically changed the structure of society, socially and economically, by moving vast numbers of the population from agrarian-based subsistence living to cities where manufacturing anchored and dominated employment and wealth. Agrarian-based society was tied to an aristocracy overseeing the land and controlling and directing production through the use of human labour. In contrast, urban production, though still very much in need of human labour (female and child labour in particular was in huge demand), was dominated by the mechanized production of goods, directed and controlled by industrialists instead of the aristocracy. The factories were powered initially by steam, and eventually by gasoline and electricity. These new manufacturing models were dominated by an engineering mentality that valued optimization of mechanical processes for high yields and introduced a compartmentalized approach to production.
Design and Production Separate
The design process was separated from the production-based process for a number of reasons. Primary was the efficiency-oriented mindset of the manufacturers who were focused on creating products with low unit costs and high yield outcomes, rather than on pleasing aesthetics or high-quality materials. Design process is time consuming and was considered unnecessary for each production stage of manufactured goods.
Manufactured products were intended for the working and middle classes, and high-quality output was not a goal. These products were never intended to vie for the attention of the upper classes — enticing them away from the services and bespoke products of the craftsman (a contemporary example is Tip Top Tailors attracting Savile Row customers). Rather, they supplied common people with goods they had not been able to afford before. This efficient line of thinking created the still existing equation of minimal design plus low material integrity equalling low-cost products.
Design, rather than being a part of each step of production (implicit in the craftsman’s approach), was added for form development and when a product needed more appeal for the masses — usually during the later stages of production through decorative additions. Design was now directed by the parameters and constraints of the manufacturing process and its needs.
Advertising Emerges
Despite low product standards, the high quantities and low costs of manufactured goods “stimulated a mass market and even greater demand” (Meggs & Purvis, 2011, p. 127). The historic role of graphic design for broadsheets and books expanded at this point to include advertising. Each company and product needed exposure to sell these manufactured products to the mass market — no earlier method of promotion could communicate to this number of people.
The design aesthetic of these times was relatively untouched by stylistic cohesion or design philosophy. Industrialists used a pastiche of historic styles that aspired to make their products look more upscale, but did not go as far as to create a new visual language. This was a strategy that made sense and has since been repeated (consider early computer design aesthetics). Usually, when a new medium or communication strategy is developed (advertising in print and the posters of the Industrial Revolution), it uses visual and language styles that people are already familiar with, and introduces a new way to deliver the message. Too much change alienates, but novelty of delivery works by adding a twist on the shoulders of an already familiar form.
Font Explosion
In addition to its new role in promoting products to the mass market, graphic design moved forward with an explosion of new font designs as well as new production methods. The design of fonts had earlier been linked to the pragmatic and cultural objectives of producing books and broadsheets. With large format posters and numerous other print components, text needed to do much more than represent a phonetic symbol. Innovations in production affected — perhaps infected — printers with the pioneer spirit of the times, and all products and their potential were examined and re-evaluated. This attitude naturally included the function and design of fonts and the methods used to reproduce them. Text was often the only material used to promote its subject and became integral to a visual communication. Jobbing printers who used either letterpress or lithographic presses pushed the boundaries of both, competing with each other by introducing innovations and, in turn, pushing artists and type foundries to create more products they could use. An entirely new font category, slab serif — sometimes called Egyptian — was created. Thousands of new fonts emerged to meet the demand of the marketplace.
Photography
In addition to font development, the Industrial Age also contributed the photograph and ultimately its use in books and advertising. Photography (for print design) was originally used as a research tool in developing engravings, but this was costly and time consuming. Numerous inventors searched for ways to integrate photography into the press process since the early years of its development in the 1830s. Photo engraving eventually arrived in 1871 using negatives and plates. From that time forward, photography has been used to conceptually and contextually support the communication of graphic design in its many forms.
Conditions and Products of the Industrial Age
The Arts & Crafts movement emerged in the second half of the 19th century in reaction to the social, moral, and aesthetic chaos created by the Industrial Revolution. William Morris was its founder and leader. He abhorred the cheap and cheerful products of manufacturing, the terrible working and living conditions of the poor, and the lack of guiding moral principles of the times. Morris “called for a fitness of purpose, truth to the nature of the materials and methods of production, and individual expression by both artist and worker” (Meggs & Purvis, 2011, p. 160). These philosophical points are still pivotal to the expression of design style and practice to this day. Design styles from the Arts & Crafts movement and on have emphasized, in varying degrees, either fitness of purpose and material integrity, or individual expression and the need for visual subjectivity. Morris based his philosophy on the writings of John Ruskin, a critic of the Industrial Age, and a man who felt that society should work toward promoting the happiness and well-being of every one of its members, by creating a union of art and labour in the service of society. Ruskin admired the medieval Gothic style for these qualities, as well as the Italian aesthetic of medieval art because of its direct and uncomplicated depiction of nature.
Many artists, architects, and designers were attracted to Ruskin’s philosophy and began to integrate components of them into their work. Morris, influenced by his upbringing in an agrarian countryside, was profoundly moved by Ruskin’s stance on fusing work and creativity, and became determined to find a way to make it a reality for society. This path became his life’s work.
Pre-Raphaelite Brotherhood
Morris met Edward Burne-Jones at Exeter College when both were studying there. They both read extensively the medieval history, chronicles, and poetry available to them and wrote every day. Morris published his first volume of poetry when he was 24, and continued to write and publish for the rest of his life. After graduation, Morris and Burne-Jones tried a few occupations, and eventually decided to become artists. Both became followers of Dante Gabriel Rossetti who founded the Pre-Raphaelite brotherhood that was based on many of Ruskin's principles. Morris did not last long as a painter, eventually finding his design vocation while creating a home for himself and his new wife (Rossetti's muse and model).
Discovering the lack of design integrity in Victorian home furnishings and various additional deficiencies in other aspects of home products, he chose to not only design his home, but all its furniture, tapestries, and stained glass.
Morris & Co.
In 1860, Morris established an interior design firm with friends based on the knowledge and experiences he had in crafting and building his home. He began transforming not only the look of home interiors but also the design studio. He brought together craftsmen of all kinds under the umbrella of his studio and began to implement Ruskin’s philosophy of combining art and craft. In Morris’s case, this was focused on making beautiful objects for the home. The craftsmen were encouraged to study principles of art and design, not just production, so they could reintegrate design principles into the production of their products. The objects they created were made and designed with an integrity a craftsman could feel proud of and find joy in creating, while the eventual owner would consider these products on par with works of art (an existing example is the Morris chair). The look of the work coming out of the Morris studio was based specifically on an English medieval aesthetic that the British public could connect to. The English look and its integrity of production made Morris’s work very successful and sought after. His organizational innovations and principled approach gained attention with craftsmen and artisans, and became a model for a number of craft guilds and art societies, which eventually changed the British design landscape.
William Morris and the Kelmscott Press
Morris’s interest in writing never waned and made him acutely aware of how the book publishing industry had been negatively affected by industrialization. One of his many pursuits included the revitalization of the book form and its design components through the establishment of the Kelmscott Press. The press was created in 1888 after Morris, inspired by a lecture about medieval manuscripts and incunabula publications, began the design of his first font, Golden, which was based on the Venetian roman face created originally by Nicolas Jenson.
In his reinterpretation of this earlier font, Morris strove to optimize readability while retaining aesthetic integrity — in the process reviving interest in font design of earlier periods. Morris used this font in his first book, The Story of the Glittering Plain, which he illustrated, printed, and bound at his press. The design approach of this publication and all others Kelmscott produced in its eight years was based on recreating the integrated approach and beauty of the incunabula books and manuscripts of the medieval period. All aspects of the publication were considered and carefully determined to create a cohesive whole. The press itself used hand-operated machinery, the paper was handmade, and the illustrations, fonts, and page design were all created and unified by the same person to make the book a cohesive, beautiful object of design. Morris did not wholly reject mechanization, however, as he recognized the advantages of mechanical process. He considered, redesigned, and improved all aspects of design and production to increase physical and aesthetic quality.
Kelmscott Press produced over 18,000 volumes in the eight years of its existence and inspired a revival of book design on two continents. In addition, Morris inspired a reinterpretation of design and design practice with his steadfast commitment to Ruskin's principles. Future generations of designers held to Morris's goals of material integrity — striving for beautiful utilitarian object design and carefully considered functionality.
In the early years of the 20th century, the German Hermann Muthesius returned to Germany from England with Morris’s Arts & Crafts concepts. Muthesius published the The English House in 1905, a book wholly devoted to the positive outcomes of the English Arts & Crafts movement. Muthesius was a sometime cultural ambassador, possibly an industrial spy, for Germany in England. His interest in the Arts & Crafts movement was not based on returning German culture to the romantic values of an earlier pre-manufacturing era. He was focused on infusing the machine-made products of Germany with high-quality design and material integrity. Muthesius believed manufacturing was here to stay. He was one of the original members of the state-sponsored Deutscher Werkbund — an association that promoted the union of art and technology. The Werkbund integrated traditional crafts and industrial mass-production techniques, and put Germany on a competitive footing with England and the United States. Its motto “Vom Sofakissen zum Städtebau” (from sofa cushions to city-building) reveals its range.
Design Embraces the Manufacturing Process
Peter Behrens and Henry van de Velde were also part of the original leadership, and with Muthesius developed the philosophy of Gesamtkultur — a cohesive cultural vision where design was the driving force of a wholly fresh, man-made environment. Every aspect of the culture and its products was examined and redefined for maximum use of mechanization in its production. The new visual language of Gesamtkultur was a style stripped of ornament in favour of simplicity and function. All areas of cultural production were affected by this new philosophy — graphic design, architecture, industrial design, textiles, and so forth — and all were reconfigured and optimized. Sans serif fonts dominated the reductive graphic design style as did standardization of sizes and forms in architecture and industrial design. Optimization of materials and mechanical processes affected every area. Germany embraced this new philosophy and visual style for its simplicity and exactness. In 1919, Walter Gropius, a modernist architect whose work was inspired by Werkbund ideals, was finally successful in opening a school in Weimar he called the Bauhaus, where artists, industrialists, and technicians would develop their products in collaboration. These products would then build a new future for German exports by virtue of their high level of functional utility and beauty.
1.04: Bauhaus
The Bauhaus philosophy has become famous for its integrated approach to design education; “it precipitated a revolution in art education whose influence is still felt today” (Whitford, 1995, p. 10). Most art colleges and universities still base much of their foundational curriculum on its fundamental ideas.
The Bauhaus school was founded with the idea of creating a ‘total’ work of art in which all arts, including architecture, would eventually be brought together. The first iteration of the school brought together instructors from all over Europe working within the latest art and design styles, manufacturing ideologies, and technologies. An example of this new teaching style can be found in its first-year curriculum. This foundation year exposed all students to the basic elements and principles of design and colour theory, and experimented with a range of materials and processes. This allowed every student the scope to create projects within any discipline rather than focus solely on a specialty. This approach to design education became a common feature of architectural and design schools in many countries.
In addition to its influence on art and design education, the Bauhaus style was to become a profound influence upon subsequent developments and practices in art, architecture, graphic design, interior design, industrial design, and typography.
The school itself had three iterations in its 14-year run. With each iteration, the core concepts and romantic ideals were modified and watered down to work within the realities of the difficult Nazi culture. When the school was finally closed by its own leadership under pressure from the Nazi-led government, most of the faculty left the country to teach in less difficult circumstances and continued to spread Bauhaus precepts all over the world. Many of its artists and intellectuals fled to the United States. Because the Bauhaus approach was so innovative and invigorating, the institutions that were exposed to the Bauhaus methodology embraced its principles. This is why the Bauhaus had a major impact on art and architecture trends in Western Europe, the United States, and Canada.
Later evaluation of the Bauhaus design philosophy was critical of its bias against the organic markings of a human element, an acknowledgment of “… the dated, unattractive aspects of the Bauhaus as a projection of utopia marked by mechanistic views of human nature” (Schjeldahl, 2009, para. 6). And as Ernst Kállai proposed in the magazine Die Weltbühne in 1930, “Home hygiene without home atmosphere” (as cited in Bergdoll & Dickerman, 2009, p. 41).
The very machine-oriented and unadorned aesthetic of the Bauhaus refined and evolved, eventually informing the clean, idealistic, and rigorous design approach of the International Typographic Style.
Dada does not mean anything. We read in the papers that the Negroes of the Kroo race call the tail of the sacred cow: dada. A cube, and a mother, in certain regions of Italy, are called: Dada. The word for a hobby-horse, a children’s nurse, a double affirmative in Russian and Rumanian, is also: Dada. (Tzara, 1992)
Tristan Tzara, Dada Manifesto
Dada was an artistic and literary movement that began in 1916 in Zurich, Switzerland. It arose as a reaction to World War I, and the nationalism and rationalism, which many thought had brought war about. Influenced by ideas and innovations from several early avant-gardes — Cubism, Futurism, Constructivism, and Expressionism — its influence in the arts was incredibly diverse, ranging from performance art to poetry, sculpture, and painting, to photography and photographic and painterly collage.
Dada’s aesthetic, marked by its mockery of materialistic and nationalistic attitudes, became a powerful inspiration for artists and designers in many cities, including Berlin, Paris, and New York, all of which generated their own groups. The movement radically changed typographic ideals and created fresh approaches to text. Unburdened of its rules and conventions, type was allowed to become expressive and subjective. The poetic output of the group was fresh and different, and needed its typography to be as expressive and innovative as its content. Dada, in combination with aspects of Constructivist and Suprematist typography, balanced the cultural discipline created and applied to typography by other streams of contemporary design like the Bauhaus. This movement in particular advanced typography as a medium of its own. It promoted the use of typography as an art material that could be manipulated by artists and designers expressively and without preordained rules and structural principles.
Words emerge, shoulders of words, legs, arms, hands of words. Au, oi, uh. One shouldn’t let too many words out. A line of poetry is a chance to get rid of all the filth that clings to this accursed language, as if put there by stockbrokers’ hands, hands worn smooth by coins. I want the word where it ends and begins. Dada is the heart of words. (Ball, 1996)
Hugo Ball’s manifesto, read at Zunfthaus zur Waag on July 14, 1916
1.06: International Typographic Style
International Typographic Style (ITS), also known as the Swiss Style, emerged in Switzerland and Germany in the 1950s. ITS became known for design that emphasized objective clarity through the use of compositional grids and sans serif typography as the primary design material (or element).
Guiding Principles
ITS was built on the shoulders of the 'less is more' ideal of the German Werkbund and the Bauhaus school. But its pioneers pursued ideologies that had much more depth and subtlety. Ernst Keller, whose work in design spanned over four decades, brought an approach to problem solving that was unique. His contribution to design was in defining the problem. For Keller, the solution to a design problem rested in its content. Content-driven design is now a standard practice.

Max Bill, another pioneer, brought a purist approach to design that he had been developing since the 1930s. He was instrumental in forming Germany's Ulm School of Design, famous for its ITS approach. The school introduced Greek rhetorical devices to amplify concept generation and produce greater conceptual work, while the study of semiotics (creating and understanding symbols and the study of sending and receiving visual messages) allowed its design students to understand the parameters of communication in a more scientific and studied way.

At this time, there was also a greater interest in visual complexity. Max Huber, a designer known for his excellent manipulation of presses and inks, layered intense colours and composed chaotic compositions while maintaining harmony through the use of complex grids that structured and unified the elements. He was one of many designers who began using grids in strategic ways. ITS design is now known for its use of anchored elements within a mathematical grid. A grid is the "most legible and harmonious means for structuring information" (Meggs & Purvis, 2011, p. 355).

Visual composition changed in many ways due to the grid. Design was already moving toward asymmetrical compositions, but now even the design of text blocks changed — from justified text to aligned flush left, ragged right. Fonts chosen for the text changed from serif fonts to sans serif, a type style believed to "express the spirit of a more progressive age" by early designers in the movement. Sans-serif typefaces like Helvetica, Univers, and Akzidenz Grotesk were favoured because they reflected the ideals of a progressive culture more than traditional serif fonts like Times or Garamond. ITS balanced the stabilizing visual qualities of cleanliness, readability, and objectivity with the dynamic use of negative space, asymmetrical composition, and full background photography.
Photography
ITS did not use illustrations and drawings because of their inherent subjectivity. Photography was preferred because of its objective qualities, and was heavily used to balance and organically complement the typography and its structured organizational grid. Often the photograph sat in the background with the type designed to sit within it; the two composed to strengthen each other to create a cohesive whole. ITS refined the presentation of information to allow the content to be understood clearly and cleanly, without persuading influences of any kind. A strong focus on order and clarity was desirable as design was seen to be a “socially useful and important activity … the designers define their roles not as artists but as objective conduits for spreading important information between components of society” (Meggs & Purvis, 2011, p. 355).
Josef Müller-Brockmann, another one of its pioneers, “sought an absolute and universal form of graphic expression through objective and impersonal presentation, communicating to the audience without the interference of the designer’s subjective feelings or propagandistic techniques of persuasion” (Schneider, 2011). Mϋller-Brockmann’s posters and design works feature large photographs as objective symbols meant to convey his ideas in particularly clear and powerful ways.
After World War II, international trade began to increase and relations between countries grew steadily stronger. Typography and design were crucial to helping these relationships progress — multiple languages had to be factored into a design. While clarity, objectivity, region-less glyphs, and symbols were essential to communication between international partners, ITS found its niche in this communicative climate and expanded beyond Switzerland, to America.
ITS is still very popular and commonly used for its clarity and functionality. However, there is a fine line between clean and simple, and simply boring. As the style became universal, its visual language became less innovative and was perceived to be too restrictive. Designers wanted the freedom to be expressive, and the culture itself was moving from cultural idealism to celebratory consumerism. ITS can be a very successful design strategy to adopt if there is a strong concept binding all of the design components together, or when there is a vast amount of complexity in the content and a visual hierarchy is needed to calm the design to make it accessible.
1.07: Late Modern New York Style
Late Modernism encompasses the period from the end of World War II to the early 21st century. Late Modernism describes a movement that arose from and reacted to trends in ITS and Modernism. The Late Modern period was dominated by American innovations spurred on by America’s new-found wealth. The need for more advertising, marketing, and packaging was matched by a new mood in the culture — a mood that was exuberant and playful, not rigid and rule-oriented.
Late Modern was inspired by European avant-garde immigrants. These immigrants found work in design and quickly introduced Americans to early modern principles of an idealistic and theoretical nature. American design at this point had been pragmatic, intuitive, and organic in composition. The fusion of these two methodologies in a highly competitive and creative climate produced design work that was original in concept, witty, and provocative and, as personal expression was highly prized, full of a variety of visual styles. Paul Rand is one of the great innovators of this style. Rand was adept at using ITS when its rules and principles were called for, but he was also very influenced by European art movements of the times. In his work, he fused the two and made works that were accessible, simple, engaging, and witty. His work was inspirational, but his writing and teaching were as important, if not more, to redefining the practice of design. He restructured the design department at Yale and published books on design practice informed by ITS principles, softened by wit, and espoused the value of the organic look of handmade marks. As a result, artists and designers began to merge organic shapes with simple geometry.
The look of graphic design also changed through advancements in photography, typesetting, and printing techniques. Designers felt confident in exploring and experimenting with the new technologies as they were well supported by the expertise of the print industry. Designers began to cut up type and images and compose directly on mechanical boards, which were then photographed and manipulated on the press for colour experimentation. As well, illustration was once again prized. Conceptual typography also became a popular form of expression.
Push Pin Studios
An excellent example of this expansive style can be found in the design output of New York's Push Pin Studios. Formed by Milton Glaser and Seymour Chwast, Push Pin was a studio that created innovative typographic solutions — I♥NY — brand identities, political posters, books, and albums (such as Bob Dylan's album Dylan). It was adept at using and mixing illustration, photography, collage, and typography for unexpected and innovative visual results that were always fresh and interesting as well as for its excellent conceptual solutions. The influence of Push Pin and Late Modern is still alive and has recently experienced a resurgence. Many young designers have adopted this style because of its fresh colours, fine wit, and spontaneous compositions.
By the early 1970s, the idealistic principles of Modernism were fading and felt flat and lifeless. Pluralism was again emerging as people craved variety as a reaction to the reductivist qualities that modernism espoused.
Punk
In the late 1970s in Britain, Australia, and parts of the United States, a youthful rebellious culture of anger and disdain arose against the establishment. In many ways, the design language of Punk echoed the Dadaist style, though Punk was anchored with a pointed, political message against the tyranny of society and the disenfranchisement of youth. Aggressive collages, colours, and experimental photography were its hallmarks. These free-form, spontaneous design works incorporated pithy tag lines and seethed with anger in a way that Dada work never attempted to achieve. Punk actively moved away from the conformities of design, and was anti-patriotic and anti-establishment. Punk established the do-it-yourself (DIY) ethos and stylized it with the angry anti-establishment mood of the mid 1970s, a time of political and social turbulence. DIY style was considered shocking and uncontrolled. However, the influence on design has been far reaching and subsequently widely emulated.
Jamie Reid, a pioneer of the Punk style, developed the visual signature look for the Sex Pistols and many other punk bands. His personal signature style was known for a collaged ‘ransom note’ typography that became a typographic style of its own. Reid cut letters out of newspapers and magazines, and collaged them together to be photographed. By doing this, he could see what he was creating as he went along, trying out different font styles and sizes and seeing the results instantly. Treating type as if it were a photograph also freed him from the restrictions of typesetting within a structured grid and allowed him to develop his ideas and concepts as he created. This unguided, process-free approach to design became a part of the Post Modern experimentation that was to come.
When Punk first exploded in the 1970s, it was deemed a youthful rebellion. In actuality, it was one of the many forms of visual expression that manifested as part of the Postmodernist movement that began as a reaction to the rigid restrictions of Modernism.
Early Post Modernism
Early Swiss Post Modern design was driven by the experimentations and teachings of Wolfgang Weingart, who taught at the Basel School of Design in Basel, Switzerland. Weingart was taught ITS by the masters of the style, Emil Ruder and Armin Hofmann, at the Basel School. But once he became an instructor there, he questioned the “value of the absolute cleanliness and order” (Meggs & Purvis, 2011, p. 465) of the style. He experimented vigorously with breaking all typographic and organizational rules to see what the effect on the audience would be. He invigorated typography with energy and in turn changed the viewer’s response to the visual information. Instead of a simple fast reading, the reader now faced dynamic complexity free of any rules or hierarchies. The viewer was now compelled to spend more time with a design piece to understand its message and parse the meaning of its symbolism.
One of his American students, April Greiman, brought this new design language back to California with her and heavily influenced the youth culture there. David Carson, a self-taught designer working in the surf magazine world, took the ideas of the style and adapted them to his own typographic experiments in the surfing magazines he designed. For Carson, Post Modern design reflected the free spirit of the surf community.
Post Modernism is actually an umbrella term for many visual styles that came about after the 1980s. They are unified by their reaction to Modernism’s guiding principles — particularly that of objectivity. A key feature of Post Modern design is the subjective bias and individual style of the designers who practise it. Additional defining stylistic characteristics can be summarized in the idea of ‘de-construction.’ The style often incorporates many different typefaces, breaking every traditional rule of hierarchy and composition. Visual organization becomes more varied and complicated with the use of layers and overlapping. The use of image appropriation and culture jamming is a key feature. Dramatic layouts that do not conform to traditional compositions are another common characteristic. A traditional grid is not used to organize the layout of the elements, making composition look ‘free-style.’ Other organizational systems for the elements developed — axial, dilatational, modular, and transitional systems created a fresh way to organize the information. The combination of multiple geometric shapes layered with photographs created depth that worked well on the computer monitor — now a component of contemporary society.
Post Modernism is still in use today, though selectively. The chaos created by our technological advancements needs to be balanced with the ease of accessing information. The Apple brand is a good example of a contemporary design approach that feels fresh and current, while delivering massive amounts of information in a clean and simple way. The Post Modern methods of built-in visual difficulty are less welcome in our data-saturated culture.
1.09: Summary
The technological revolution of the 1990s brought the mobile phone and computer to every home and office and changed the structure of our current society much as manufacturing in the 1800s changed Britain and the Western world. As with the Industrial Revolution, the change in technology over the last 20 years has affected us environmentally, socially, and economically. Manufacturing has slowly been moved offshore and replaced with technology-based companies. Data has replaced material as the substance we must understand and use effectively and efficiently. The technological development sectors have also begun to dominate employment and wealth sectors and overtake manufacturing’s dominance. These changes are ongoing and fast-paced. The design community has responded in many novel ways, but usually its response is anchored by a look and strategy that reduce ornament and overt style while focusing on clean lines and concise messaging. The role of design today is often as a way-finder to help people keep abreast of changes, and to provide instruction. Designers are once again relying on established, historic styles and methods like ITS to connect to audiences because the message is being delivered in a complex visual system. Once the technological shifts we are experiencing settle down, and design is no longer adapting to new forms of delivery, it will begin to develop original and unique design approaches that complement and speak to the new urban landscape.
Questions to consider after completing this chapter:
1. What design principles do Dada and Punk have in common?
2. What influence does ITS have on Post Modern design?
3. What influence does ITS have on current design practice?
4. How did World War II influence design education?
5. How did Morris and the Arts & Crafts movement help to create the Bauhaus design philosophy?
6. How did technology influence early German design?
7. How does technology influence contemporary design practice?
Suggested Reading
Meggs, P. B. (1998). A history of graphic design (3rd ed). New York City, NY: John Wiley & Sons.
Learning Objectives
• Explain the role of communication design in print and media
• Describe how the creative process relates to strategic problem solving
• Contrast how the creative process relates to the design process
• Define critical phases of the design process
• Discover how project research helps to define a communication problem
• Give examples of brainstorming techniques that generate multiple concepts based on a common message
• Learn about metaphors and other rhetorical devices to generate concepts
• Explore how concepts translate into messages within a visual form
Communication Design and The Design Process
The practice of graphic or communication design is founded on crafting visual communications between clients and their audience. The communication must carry a specific message to a specific audience on behalf of the client, and do so effectively — usually within the container of a concept that creates context and builds interest for the project in the viewer.
See an illustrated model of the design process here: A Model of the Creative Process
Overview of the Design Process
The process of developing effective design is complex. It begins with research and the definition of project goals. Defining goals allows you to home in on precisely what to communicate and who the audience is. You can then appropriately craft the message you are trying to communicate to them. Additional information regarding how to deliver your message and why it’s necessary is also clarified in the research stage. Often the preferred medium becomes clear (i.e., web, social media, print, or advertising) as does the action you want your audience to take. Asking a millennial to donate to a cause is a good example. Research reveals that transparency of donation use, donor recognition, and ease of making the donation are vital to successfully engaging a millennial audience (Grossnickle, Feldmann, White, & Parkevich, 2010). Research also reveals that millennials resist negative advertising, so the message must be crafted in positive terms that are anchored to a realistic environment (Tanyel, Stuart, & Griffin, 2013). Knowing this information before the concept development begins is vital to crafting a message that will generate the response your client needs. Critiquing and analysis allow you to evaluate the effectiveness of the design approach as it develops through the stages of an iterative process.
In order to design visual materials that communicate effectively, designers must understand and work with the syntax of visual language. Meaning is expressed not only through content but through form as well, and will include both intellectual and emotional messages in varying degrees.
Developing Concepts into Design Solutions
Designers are responsible for the development of the creative concepts that express the message. A concept is an idea that supports and reinforces communication of key messages by presenting them in interesting, unique, and memorable ways on both intellectual and emotional levels. A good concept provides a framework for design decisions at every stage of development and for every design piece in a brand or ad campaign. An early example of this is the witty and playful ‘think small’ Volkswagen Beetle (VW) advertising campaign of the 1960s. By amplifying the smallness of its car in a ‘big’ car culture, VW was able to create a unique niche in the car market and a strong bond between the VW bug and its audience (see Figure 2.1).
Figure 2.1 Volkswagen Beetle
When you implement solutions, you put concepts into a form that communicates effectively and appropriately. In communication design, form should follow and support function. This means that what you are saying determines how you say it and in turn how it is delivered to your audience. Design is an iterative process that builds the content and its details through critiquing the work as it develops. Critiquing regularly keeps the project on point creatively and compositionally. Critiquing and analysis allow you to evaluate the effectiveness of the whole design in relation to the concept and problem. The number of iterations depends on the skill of the designer in developing the content and composition as well as properly evaluating its components in critique. In addition, all of this must occur in the context of understanding the technologies of design and production.
As you begin to build and realize your concepts by developing the content, the elements, and the layouts, you must apply compositional and organizational principles that make sense for the content and support the core concept. Compositional principles are based on psychological principles that describe how human beings process visual information. Designers apply these principles in order to transmit meaning effectively. For example, research has shown that some kinds of visual elements attract our attention more than others; a designer can apply this knowledge to emphasize certain parts of a layout and give a certain element or message importance. These principles apply to all forms of visual materials, digital media, and print.
When dealing with text, issues of legibility and readability are critical. Designers organize information through the use of formal structures and typographic conventions to make it easier for the viewer to absorb and understand content. The viewer may not consciously see the underlying structures, but will respond positively to the calm clarity good organization brings to the text.
2.02: Design Research and Concept Generation
Defining Design Problem Parameters
Many designers define communication design as a problem-solving process. (The problem/opportunity is how to deliver information effectively to the desired audience.) The process that takes the designer from the initial stages of identifying a communication problem to the final stage of solving it covers a lot of ground, and different models can be used to describe it. Some are very complicated, and some are simple. The following sections break the design problem-solving process into four steps: (1) define, (2) research, (3) develop concepts, and (4) implement solutions.
2.03: Define
Step 1: Define the Communication Problem
The inventor Charles Kettering is famously quoted as saying “a problem well-stated is half-solved.”
Clearly the first step in any design activity is to define the communication problem properly. To do this, you will need to meet with clients to establish initial goals and objectives.
Here are some of the questions you should ask:
• What is the business of the client; what products or services does the client offer?
• What are the client’s long-term business goals? (What does the client want its business to have accomplished in 5 or 10 years?)
• What is the purpose of the project? What does the client hope to achieve with it? (The goals of a specific project are usually narrower than overall long-term business goals, but should fit within the larger picture.)
• What are the performance criteria that will be used to evaluate whether project goals are met?
• Who is the target audience?
• What is the client’s message to this audience?
• How does this project fit in with existing corporate materials?
• Does this piece require more than one format or medium?
• What corporate guidelines (if any) must be adhered to?
• Are illustration, photography, or any other special services required?
• Are there any special or unusual considerations around this project?
• What quantity is needed (for print)?
• What distribution method will be used (for print)?
• What is the budget?
• Who will approve the project? Will that person be available for sign-off when required?
Good planning at the beginning can make a project run smoothly and without surprises. Don’t assume anything; both the designer and the client should listen closely to each other and ask plenty of questions. Keep in regular communication, document discussions, and ensure that you have written confirmation of decisions.
Step 2: Conduct Research
Gather and analyze information. What else do you need to know? The information you collected in the first stage is just a starting point — now you need to do more research in order to fine-tune your goals and process. Check every assumption, ask more questions, and add detail.
Research practices may involve:
• Competitor analysis: analyzing the competition to see what they do and determine their strengths and weaknesses
• Ethnographic research: observing user behaviour and culture
• Site research: observing and understanding the strengths and weaknesses of a space to optimize the effectiveness of the design experience you will be creating; site research is necessary to any design project that is situated in a built environment
• Marketing research: analyzing behaviour in terms of consumer practices, including demographic profiling (grouping people based on variables such as age/income/ethnicity/location to create profiles generally describing their thinking/behaviour)
• User testing: measuring the ability of the product or service to satisfy users’ needs
• Co-creation: inviting end-users to brainstorm solutions with the design team before the concept phase of design begins
Incorporating Research into the Design Process
Research should be a part of every design process, but what kind of research is done, and who does it, will be determined by the scope and budget of the project. Some information may be publicly available, for example, through corporate publications or previously published marketing studies or market data, but a design company may need to partner with a research firm in order to do targeted in-depth research.
At the very least, design research should include:
• A literature review (gathering and reviewing all existing material that is relevant to your subject)
• Collected details (existing materials, corporate guidelines) of your client’s business and the services the client offers
• Information on the target audience (What do they want? need? expect?)
• Analysis of competitors (Who are they? how are they different? how are they the same? how do they advertise or make information available?)
• Estimates and technical advice from subcontractors (e.g., printers)
Some things to consider:
• Is a full design audit required? Much like a SWOT analysis, which assesses strengths, weaknesses, opportunities, and threats, a design audit applies the same stringent methodology to analyzing your competitors’ visual presence in the marketplace.
A graphic design audit is a fantastic and relatively easy way to get a clear picture of how your competitors are perceived, what key messages they are communicating and how you look when placed alongside them. It’s also a valuable exercise that informs you about the type of communication your customers are receiving on a regular basis from your key competitors. (Clare, 2006)
• What are the implications of the audience profile in relationship to the project goals?
• What is the most appropriate means to communicate with this audience (i.e., what media and marketing tools should you use)?
• How do the goals of this project fit into your client’s long-term goals?
• Is your client’s message what actually needs to be communicated in order to further the client’s business goals?
Research takes time and can cost money, but in the larger picture will save time and money by helping to focus the direction of the design process. It also helps you provide justification for your proposed communication solutions to your client. Remember that all research must be carefully documented and raw sources saved and made available for future reference.
Now that you have gathered all the information, it’s time to craft the design problem into a well-defined, succinct statement.
A Problem Well-stated is Half-solved
The writer Mark Levy, in his article A Problem Well-stated is Half-solved, developed six steps you can take to state a design problem so its solutions become clearer:
1. State the problem in a sentence. A single sentence forces you to extract the main problem from a potentially complex situation. An example of a problem statement: “We need to increase revenue by 25%.”
2. Make the problem statement into a question. Turning the problem statement into a question opens the mind to possibilities: “How do we increase revenue by 25%?”
3. Restate the question in five ways. If you spin the question from a variety of perspectives, you’ll construct new questions that may provide intriguing answers.
For instance, try asking: “How could we increase revenue by 25% in a month?” “How could we increase it by 25% in an hour?” “How could we increase it by 25% in a minute?” “What could we stop doing that might cause a 25% revenue increase?” “What ways can we use our existing customer base to affect the increase?”
4. Give yourself thinking quotas. An arbitrary production quota gives you a better shot at coming up with something usable, because it keeps you thinking longer and with greater concentration.
When I asked you to “Restate the question five ways,” that was an example of an arbitrary quota. There’s nothing magical about five restatements. In fact, five is low. Ten, or even a hundred, would be far better.
5. Knock your questions. Whatever questions you’ve asked, assume they’re wrong-headed, or that you haven’t taken them far enough.
You might ask, “Why do we need a 25% increase at all? Why not a 5% increase? A 500% increase? A 5,000% increase? What other things in the business might need to change that would be as important as revenue?”
6. Decide upon your new problem-solving question. Based on the thinking you’ve already done, this step may not even be necessary. Often, when you look at your situation from enough angles, solutions pop up without much more effort.
However, if you still need to pick a single question that summarizes your problem, and none seems perfect, force yourself to choose one that’s at least serviceable. Going forward is better than standing still.
Now you can start brainstorming.
Concept Mapping
A good way to begin the process of research and problem definition is to write down everything that you already know about your subject. This brainstorming can be done in a linear way by developing lists, or in a non-linear way, popular with designers, called concept mapping. Concept mapping is a non-linear approach that allows a designer to see what is known and what still needs to be researched. Concept mapping is also used to generate concepts and to create associations and themes.
W5 + 1
The first step is to take a sheet of paper and write a central title or topic in the centre. Then surround this central idea with information gathered by answering the following questions, based on the 5 Ws (who, what, where, why, and when), plus one more, how:
• What are you trying to communicate? (the problem)
• Why must communication occur? (what is its purpose?)
• Who is the target audience?
• Where will communication take place? (in what medium and location?)
• When will communication take place?
• How will you implement the concept?
• What if? (what would be ideal?)
Once you’ve added all the information you have at hand, you will see any assumptions and gaps in that information, and you can begin specific directed research to create a larger, more objective picture.
Here is an example of a concept map (see Figure 2.2); this particular map details the scope of visual communication.
Figure 2.2 Example of a concept map
You can use the information in a concept map to generate other themes and concepts for your project. For example, in the concept map above, you could develop another theme by highlighting in yellow all information from the 1970s. This would reveal the parameters of design practice in the 70s and would additionally reveal what has been added and changed in design practice since.
Text Attributions
A Problem Well-stated is Half-solved by Mark Levy is used under a CC BY-NC-ND 3.0 Licence.
Step 3: Developing Concepts
Concept development is a process of developing ideas to solve specified design problems. The concepts are developed in phases, from formless idea to precise message in an appropriate form with supportive visuals and content. Once you have done your research and understand exactly what you want to achieve and why, you are ready to start working on the actual design. Ideally, you are trying to develop a concept that provides solutions for the design problem, communicates effectively on multiple levels, is unique (different and exciting), and stands out from the materials produced by your client’s competitors.
Generate, test, and refine ideas
A good design process is a long process. Designers spend a great deal of time coming up with ideas; editing, revising, and refining them; and then evaluating their results every time they try something. Good design means assessing every concept for effectiveness.
The design process looks roughly like this:
• Generating a concept
• Refining ideas through visual exploration
• Preparing rough layouts detailing design direction(s)
• Setting preliminary specifications for typography and graphic elements such as photography, illustration, charts or graphs, icons, or symbols
• Presenting design brief and rough layouts for client consideration
• Refining design and comprehensive layouts, if required
• Getting client approval of layouts and text before the next phase
Developing Effective Concepts
A concept is not a message. A concept is an idea that contextualizes a message in interesting, unique, and memorable ways through both form and design content.
A good concept reinforces strategy and brand positioning. It helps to communicate the benefits of the offer and helps with differentiation from the competition. It must be appropriate for the audience, facilitating communication and motivating that audience to take action.
A good concept provides a foundation for making visual design decisions. For example, Nike’s basic message, expressed by its tagline, is “Just Do It.” The creative concept Nike has used since 1988 has been adapted visually in many ways, but always stays true to the core message by using images of individuals choosing to take action.
“It was a simple thing,” Wieden recalls in a 2009 Adweek video interview in which he discusses the effort’s genesis. Simplicity is really the secret of all “big ideas,” and by extension, great slogans. They must be concisely memorable, yet also suggest something more than their literal meanings. Rather than just putting product notions in people’s minds, they must be malleable and open to interpretation, allowing people of all kinds to adapt them as they see fit, and by doing so, establish a personal connection to the brand (Gianatasio, 2013).
A good concept is creative, but it also must be appropriate. The creativity that helps develop effective, appropriate concepts is what differentiates a designer from a production artist. Very few concepts are up to that standard — but that’s what you should always be aiming for.
In 1898, Elias St. Elmo Lewis came up with the acronym AIDA for the stages you need to get consumers through in order for them to make a purchase. Modern marketing theory is now more sophisticated, but the acronym also works well to describe what a design needs to do in order to communicate and get people to act.
In order to communicate effectively and motivate your audience, you need to:
A — attract their attention. Your design must attract the attention of your audience. If it doesn’t, your message is not connecting and fulfilling its communication intent. Both the concept and the form must stand out.
I — hold their interest. Your design must hold the audience’s interest long enough so they can completely absorb the whole communication.
D — create a desire. Your design must make the audience want the product, service, or information.
A — motivate them to take action. Your design must compel the audience to do something related to the product, service, or information.
Your concept works if it makes your audience respond in the above ways.
Generating Ideas and Concepts from Concept Mapping
You can use the information in a concept map to generate additional concepts for your project by reorganizing it. The concept tree method below comes from the mind-mapping software blog (Frey, 2008); a sketch of the resulting tree as a simple data structure follows the steps.
1. Position your design problem as the central idea of your mind map.
2. Place circles containing your initial concepts for solving the problem around the central topic.
3. Brainstorm related but non-specific concepts, and add them as subtopics for these ideas. All related concepts are relevant. At this stage, every possible concept is valuable and should not be judged.
4. Generate related ideas for each concept you brainstormed in step 3 and add them as subtopics.
5. Repeat steps 3 and 4 until you run out of ideas.
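If you build the tree in mind-mapping software, the result is simply a nested structure. The following is a hypothetical Python sketch of the method above; the central problem is borrowed from the earlier revenue example, and every idea name is invented purely for illustration.

```python
# A hypothetical sketch of a concept tree as nested data.
# All idea names below are invented for illustration only.

concept_tree = {
    "idea": "How do we increase revenue by 25%?",  # step 1: central problem
    "subtopics": [                                 # step 2: initial concepts
        {"idea": "Reward loyal customers",         # steps 3-4: related ideas
         "subtopics": [
             {"idea": "Referral discounts", "subtopics": []},
             {"idea": "Member-only offers", "subtopics": []},
         ]},
        {"idea": "Reach a new audience",
         "subtopics": [
             {"idea": "Social media campaign", "subtopics": []},
         ]},
    ],
}

def count_ideas(node):
    """Count every idea in the tree; per step 5, keep adding
    subtopics until this count stops growing between passes."""
    return 1 + sum(count_ideas(child) for child in node["subtopics"])

print(count_ideas(concept_tree))  # 6 ideas in this toy map
```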
Applying Rhetorical Devices to Concept Mapping
After you have placed all your ideas in the concept map, you can add additional layering to help you refine and explore them further. For example, you can use rhetorical devices to add context to the concepts and make them come alive. Rhetoric is the study of effective communication through the use and art of persuasion. Design uses many forms of rhetoric — particularly metaphor. If you applied a metaphor-based approach to each idea in your concept map, you would find many new ways to express your message.
Rhetorical Devices Appropriate for Communication Design
Allusion is an informal and brief reference to a well-known person or cultural reference. In the magazine cover linked below, an allusion is used to underline the restrictive nature of the burqa, a full-body cloak worn by some Muslim women, by applying it to Sarah Jessica Parker, an actor whose roles are primarily feminist in nature. (Harris, 2013)
Follow the link to see an example: Marie Claire Cover
Amplification involves the repetition of a concept through words or images, while adding detail to it. This is to emphasize what may not be obvious at first glance. Amplification allows you to expand on an idea to make sure the target audience realizes its importance. (Harris, 2013)
Follow the link to see an example: Life’s too short for the wrong job Marketing Campaign
Analogy compares two similar things in order to explain an otherwise difficult or unfamiliar idea. Analogy draws connections between a new object or idea and an already familiar one. Although related to simile, which tends to employ a more artistic effect, analogy is more practical, explaining a thought process, a line of reasoning, or the abstract in concrete terms. Because of this, analogy may be more insightful. (Harris, 2013)
Follow the link to see an example: WWF Lungs Before It’s Too Late
Hyperbole is the counterpart of understatement: a deliberate exaggeration presented for emphasis. When using hyperbole for visual communication, one must be careful to ensure that it reads as a clear exaggeration. If hyperbole is limited in its use, and only used occasionally for dramatic effect, it can be quite attention-grabbing.
Follow the link to see an example: Final Major Project by Mark Studio
A written example would be: There are a thousand reasons why more research is needed on solar energy.
Or it can make a single point very enthusiastically: I said “rare,” not “raw.” I’ve seen cows hurt worse than this get up and walk.
Hyperbole can be used to exaggerate one thing to show how it differs from something to which it is being compared: This stuff is used motor oil compared to the coffee you make, my love.
Hyperbole is the most overused rhetorical device in the world (and that is no hyperbole); we are a society of excess and exaggeration. Handle it like dynamite, and do not blow up everything you can find (Harris, 2013).
Metaphor compares two different things by relating to one in the same terms commonly used for the other. Unlike a simile or analogy, metaphor proposes that one thing is another thing, not just that they are similar (Harris, 2013).
Follow the link to see an example: Ikea Bigger Storage Idea
Metonymy is related to metaphor, where the thing chosen for the metaphorical image is closely related to (but not part of) that with which it is being compared. There is little to distinguish metonymy from synecdoche (as below). Some rhetoricians do not distinguish between the two (Harris, 2013).
Follow the link to see an example: London Logo
Oxymoron is a paradox presented in two words, in the form of an adjective and noun (“eloquent silence”), or adverb-adjective (“inertly strong”), and is used to impart emphasis, complexity, or wit (Harris, 2013). See Figure 2.3 for another example.
Figure 2.3 Example of an oxymoron
Personification attributes to an animal or inanimate object human characteristics such as form, character, feelings, behaviour, and so on. Ideas and abstractions can also be personified. For example, in the poster series linked below, homeless dogs are placed in environments typical of human homelessness (Harris, 2013).
Follow the link to see an example: Manchester Dogs’ Home Street Life
Simile is a comparison of one thing to another, with the two being similar in at least one way. In formal prose, a simile compares an unfamiliar thing to a familiar thing (an object, event, process, etc.) known to the reader. (Harris, 2013)
Follow the link to see an example: Strong Handle Billboard
Synecdoche is a type of metaphor in which part of something stands for the whole, or the whole stands for a part. It can encompass many forms such that any portion or quality of a thing is represented by the thing itself, or vice versa (Harris, 2013).
Follow the link to see an example: A Global Warming Poster
Understatement deliberately expresses a concept or idea with less importance than would be expected. This could be to effect irony, or simply to convey politeness or tact. If the audience is familiar with the facts already, understatement may be employed in order to encourage the readers to draw their own conclusions (Harris, 2013).
For example: instead of endeavouring to describe in a few words the horrors and destruction of the 1906 earthquake in San Francisco, a writer might state: The 1906 San Francisco earthquake interrupted business somewhat in the downtown area.
Follow the link to see an example: Nike’s Just Do It
An excellent online resource for exploring different rhetorical devices is “A Handbook of Rhetorical Devices” (Harris, 2013). The definitions above have been paraphrased from this site.
Developmental Stages of Design
No design work should ever be done without going through an iterative development process in which you try out different ideas and visual approaches, compare and evaluate them, and select the best options to proceed with. This applies to both form and content.
The development of the concept starts with brainstorming as wide a range of ideas as possible, and refining them through a number of development stages until you are left with those that solve the communication problem most effectively.
The development of graphic forms starts with exploring a wide range of styles, colours, textures, imagery, and other graphic devices and refining them through development stages until you are left with those that best reinforce the concept and message.
The development process starts with thumbnails and works through rough layouts and comprehensives to the final solution. Thumbnails are small, simple hand-drawn sketches, with minimal information. These are intended for the designer’s use and, like concept maps, are visuals created for comparison. These are not meant to be shown to clients.
Their uses include:
• Concept development and visualization of ideas
• Preliminary evaluation of content (they allow you to sift and sort ideas quickly and effectively)
• Preliminary evaluation of form (value studies, compositional studies, potential placement of elements)
• Note-taking (a tool to record verbal or visual information quickly and accurately)
Quantity is very important in thumbnails! The idea is to get as many ideas and options down as possible.
Designers typically take one of two approaches when they do thumbnails: they either brainstorm a wide range of ideas without exploring any of them in depth, or they come up with one idea and create many variations of it. If you use only one of these approaches, force yourself to do both. Brainstorm as many ideas as possible, using a mix of words and images. The point here is the quantity of ideas — the more the better. Work fast and don’t judge your work yet.
Once you have a lot of ideas, take one you think is good and start exploring it. Try expressing the same idea with different visuals, from different points of view, with different taglines and emotional tones. Make the image the focal point of one variation and the headline the focal point of another. The purpose here is to try as many variations of an idea as possible. The first way of expressing an idea is not necessarily the best way, much like the first pancake is not usually the best.
After you’ve fully explored one idea, choose another from your brainstorming session and explore it in the same way. Repeat this with every good idea.
Roughs are exactly that — rough renderings of thumbnails that explore the potential of forms, type, composition, and elements of your best concepts. Often a concept is explored through the development of three to five roughs. These are used to determine exactly how all of the elements will fit together, to provide enough information to make preliminary evaluation possible, and to suggest new directions and approaches.
The rough:
• Uses simple, clean lines and basic colour palettes
• Accurately renders without much detail (the focus is on design elements, composition, and message)
• Includes all of the visual elements in proper relationship to each other and the page
Comps are created for presenting the final project to the client for evaluation and approval. The comp must provide enough information to make evaluation of your concept possible and to allow final evaluation and proofing of all content.
The comp:
• Is as close as possible to the final form and is usually digital
• May use final materials or preliminary/placeholder content if photographs or illustrations are not yet available
Hand-drawn or Digital?
Comps might be hand-drawn when you are showing a concept for something that doesn’t yet exist, such as a product that hasn’t been fabricated, a structure that hasn’t been built, or to show a photographer how you want material to be laid out in a photograph that has not yet been taken. Although you could create these comps digitally, it’s often more cost effective to create a sketch.
Designers sometimes create hand-drawn comps in order to avoid presenting conceptual work that looks too finished to a client, so they will not be locked into a particular approach by the client’s expectations.
Even in this digital age, you should draw all thumbnails by hand (using pen, pencil, or tablet) for the following reasons:
• You don’t have to make time-wasting decisions that you shouldn’t be making at this early stage (e.g., what typeface should I use? what colour should this be?)
• It’s much faster than doing it digitally.
• Work done on a computer tends to look finished and professional, and this can trick you into thinking an idea is better than it is.
• The technology of a tool tends to define the way it is used. If you are using a computer, you will tend to come up with solutions that can be executed only on a computer, and that limits your creative options. For example, would you think of creating an illustration from coloured paper if you were using the computer?
• Hand-drawn sketches provide a paper trail that shows your concept development process and can be presented in case studies to reveal your entire design process in a more personal and engaging way.
Media Attributions
• nothing-is-written-in-stone-527756_1920 by SBM © Public Domain
Step 4: Solution Implementation
In this step, we are ready to select the final concept options and carry their application through to completion in producing the final design(s). This part of the process requires that you know how to work with photographers and illustrators, as well as with people in production technologies — primarily, programmers and printers. You may also require project management skills. You should also put a process in place so your final solutions can be evaluated for their effectiveness. Did they work? Did they achieve their goals?
There are many components that require attention during the production phase:
Production and Implementation
• Copy placement and preparation of layouts from approved text
• Liaison with suppliers and subcontractors
• Completion of photography, illustration, charts/graphs, icons/symbols
• Ongoing client liaison for proofreading and corrections
• Scanning and electronic preparation of images (black and white, duotones/tritones, colour); may include colour correction and/or digital manipulation
• Preparation of electronic files in line with press/prepress/web requirements
• Supervision of all prepress materials (final files and proofs)
• Organization, maintenance, and archiving of all digital materials related to the job
Production Supervision
• Discuss production options with client, solicit quotes, and select printer/programmer
• When contract is awarded, liaise with production services to discuss and refine project details
• Prepare or review production specifications
• Liaise with client and production to check proofs
• Oversee production to ensure quality control
• Follow up after production work is complete
Evaluation
Every step of a project should be evaluated in terms of the goals you have defined. Two fundamental questions about every design decision you make are:
• What does this accomplish?
• How does what is accomplished help to meet the project goals?
After the original design challenge has been defined, evaluate every stage of the process in that context. It’s surprisingly easy to stray off track when you’re designing. If you find yourself designing something brilliant, but it doesn’t communicate what it should to the right audience, then all that brilliance is wasted.
Communication
Whether they are in print or multimedia, all design works are intended to communicate to a specific audience, and the design must support that function. All concepts must be evaluated with that end in mind. For example:
• Does the work communicate the key message(s) and support the client’s goals?
• Does the work effectively integrate images, design, and text (form and content) to support that communication; create an overall ‘look’; make the piece work as a unified whole with no distractions?
• Is the piece physically easy to read and/or understand?
• Do the design choices amplify material (subject matter, mood) in the text?
• Is the piece appropriate to the audience? (children, youth, adults, seniors have particular interests and needs)
Economic Efficiency
• What is possible and most effective within the budget?
• Will this method attract the desired audience/buyer?
Design and Materials
• Are the design choices compatible with technological requirements for production?
• For print materials, is there efficient and economical use of paper?
• Will the materials chosen support the intended use and method of distribution?
2.07: Summary
Communication design can be described as a problem-solving process that can be broken into four steps: (1) define, (2) research, (3) develop concepts, and (4) implement solutions. Research should be a part of every design process, with its kind and depth determined by the scope and budget of the project. Concept mapping is a non-linear approach that outlines what is known and what still needs to be researched, creates associations and themes, and helps generate ideas. Good design takes time: time spent generating and assessing concepts, and time spent editing, revising, refining, and evaluating ideas.
In conclusion, defining the design process is complicated, as it has many stages and involves many steps at each stage. Complicating it further is the reality that every project is unique in its parameters, goals, time period, and participants. This chapter is meant to help you begin to define your individual design process, based on general guidelines. If you are still developing an understanding of your personal design strengths and weaknesses, allow extra time for each stage and track the time you spend on it. You’ll soon discover whether you fall into the conceptual (brainstorming) category or the project development category. Conceptual designers find it easy to develop multiple concepts, but less easy to take the steps to develop them to their full potential. Project development types are the opposite — finding concepts hard to create, but developing projects quite easy. Allow extra time to discover which category you fall into and also to develop strengths in your weaker area. As you gain experience developing design projects, you will start to personalize your design process and be able to estimate how long it takes with a fair degree of accuracy. This will help you to estimate project design costs more accurately and gauge the steps needed to bring a project to a successful conclusion.
Questions to consider after completing this chapter:
1. How does communication design work within the constraints of print and media?
2. How does the creative process relate to strategic problem solving?
3. How is the creative process related to the design process?
4. What are the critical phases of the design process?
5. How does project research help to define a communication problem?
6. What are some examples of brainstorming techniques that generate multiple concepts based on a common message?
7. How does using a metaphoric device generate concepts?
8. How do concepts translate into messages within a visual form?
Suggested Reading
Dubberly Design Office. (2009, March 20). A model of the creative process. Retrieved from http://www.dubberly.com/concept-maps...e-process.html | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/02%3A_Design_Process/2.06%3A_Implement_Solutions.txt |
Learning Objectives
• Utilize basic design principles relating to visual composition
• Define design terminology pertaining to form
• Describe organizational systems and core principles for layout grids
• Differentiate between typographic categories
• Establish a visual hierarchy within a layout
• Express ideas using the principles of composition and form
Communication design is essentially the crafting of a message meant for a specific section of the public. This written message is infused with meaningful and relevant visual components. The composition of these components should amplify, clarify, and enhance the message for the viewer. To assist in making sound design choices, a designer applies principles of composition and principles of organization to the design elements selected for a project.
Understanding how to utilize the fundamentals of design elements, principles, and composition is necessary to be able to confidently move through the stages of the design development process and build a project from the initial design brief to the final published design work.
Definitions from various design sources about what comprises a design element are consistent for the most part, but defining design principles is not as consistent and varies from one text to the next. Marvin Bartel’s (2012) definitions of these categories are both simple and on point. He defines a visual element as any “basic thing that can be seen,” and a design principle as a method for “arranging things better.” Also included in this chapter are organizational systems that can focus and direct the overall direction a composition will take.
3.02: Visual Elements - Basic Things That Can Be Seen
Point, line, and plane are the building blocks of design. From these elements, designers create images, icons, textures, patterns, diagrams, animations, and typographic systems. (Lupton & Phillips, 2014, p. 13)
Figure 3.1 Design using points, lines, planes
Point
A point is a precise position or location on a surface. In purely mathematical terms, a point marks a set of coordinates — it has no mass at all. In this objective definition, a point is essentially a place. Visually, a point is a dot and therefore the basic building block of every variation of line, texture, and plane.
Subjectively, the term point has a lot of power. Point can direct attention, be the focus of attention, create emphasis, and cut through veiled information. The compositional term focal point brings the objective and subjective together: the focal point is the first place the eye is drawn to in a composition, and it usually contains the most important piece of visual communication.
Line
Figure 3.2 Lines (by Ken Jeffery)
A line is the second most basic element of design — a line is a collection of points arranged in a linear manner (see Figure 3.2). A line connects two points, or traces the path of a movement. A line can be actual or implied — for instance, as a composition of two or more objects in a row. Lines in nature act as defining planes — examples are a horizon or the silhouette of a forest against the sky. Long straight lines do not often occur in nature, and therefore when they are present, they tend to dominate the landscape visually. Natural settings are usually parsed by the eye into shorter sequences of curved or straight lines and organic shapes.
When made by the hand, a line is created by the stroke of a pencil, pen, brush, or any mark-making tool. These lines can be thin or wide, and are expressive and distinct, reflecting the texture of the tool used to make them. Lines can create a plane (a shape) by being clustered together or by defining a shape. If the line is thickened, it changes and becomes a plane. When lines are made digitally, they can acquire many of the same qualities possessed by hand-drawn lines through the application of effects.
Plane
Figure 3.3 Planes
Like lines, planes (shapes) can be organically made or they can be geometric, as in the example shown in Figure 3.3. A plane is a flat surface that has defined borders. “A line closes to become a shape, a bounded plane” (Lupton & Phillips, 2014, p. 38). Planes are excellent compositional tools for clustering visual elements into visual fields. A plane can also act as a separating device and allow the viewer to see that one section of information is not linked to another.
In design software, a vector graphic is a shape created by defining its parameters with a line, and then filling it with a solid or textured fill. Grids help to create and define typographic planes that float or interact with solid planes of image, texture, or colour. In the physical world, everything is composed of shapes that are either two- or three-dimensional. How you choose to organize and arrange the planes in your photograph, your illustration, or your design will structure the composition and determine not only how the elements intersect with one another but also how the viewer interacts with the composition.
Colour
Figure 3.4 Colours
Graphic design has evolved over the last two centuries from a craft that designed text and images primarily in black and white for books and broadsheets, to a craft that works with full colour in analog and digital media and on every kind of substrate. Controlling and effectively using colour to support communication is now more important than it has ever been. Both media and advertising have become very sophisticated over the last few decades and are adept at creating exciting, sensuous, and energetic environments that are crafted with the skillful use of colour and texture. The public, in turn, has absorbed these unprecedented levels of image saturation with a variety of outcomes. One is an expectation that the visual palette match and enhance the message. A second outcome is a high expectation for strong and authentic visuals of places or objects. A third outcome is a cultural nostalgia for earlier looks created by various devices. Examples like 8-bit graphics or 1950s Kodachrome both possess unique colour and texture palettes and have properties the public can discern. When one of these nostalgic colour palettes is applied to an image, it adds another layer of meaning to the work, and that meaning has to make sense for the viewer.
The explosion of tools for making and sharing digital photography and graphics also reveals how good the general public has become at crafting visuals with relevant atmosphere and texture. The bar has been raised very high with colour use in contemporary times, and understanding colour basics is an absolute necessity.
RGB and CMYK Colour Spaces
Given that design and colour are united in every project, it is important to realize that there are two colour systems, and often a project needs to work in both. Digital media works in the additive colour system, and its primary colours are red, green, and blue (RGB). In this system, the absence of colour equals black, while combining all colours results in white. RGB is the colour system of visible light (see Figure 3.5). This light system is called additive because the three primaries together create all the hues in the spectrum.
Subtractive colour is the system needed for print media, and its primary colours are cyan, magenta, yellow, and black (CMYK), as shown in Figure 3.5. In CMYK, the absence of colour equals white, while combining all colours creates black. Both of these systems have many overlapping colours but their colour spheres are not exactly the same. Understanding where the overlaps exist and where they don’t correspond is vital to the success of a project. If your print materials cannot be replicated on screen, you will have a major design problem that will have to be corrected. Always choose colours that will work in both systems.
Figure 3.5 Primary colours for the additive and subtractive colour schemes
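To see why the two colour spheres cannot line up exactly, it helps to look at the arithmetic behind the simplest possible conversion between them. The sketch below uses the common naive RGB-to-CMYK formula; real print workflows rely on ICC colour profiles and device-specific gamut mapping, so treat this as an illustration of the relationship, not a production conversion.

```python
# A minimal sketch of the naive RGB-to-CMYK conversion.
# Production workflows use ICC profiles; this formula only
# approximates the relationship between the two systems.

def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB (0-255) to CMYK fractions (0.0-1.0)."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black: key ink only
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)           # black (key) fills the lightness deficit
    c = (1 - r - k) / (1 - k)      # each ink absorbs the light its channel lacks
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# A saturated screen red has no exact press equivalent, but the
# formula still returns its nearest naive CMYK recipe:
print(rgb_to_cmyk(255, 0, 0))      # (0.0, 1.0, 1.0, 0.0)
```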
Environment is another aspect of colour choice that is very important. Both the natural world and the world within the screen vary from moment to moment and screen to screen. Colours are affected and influenced by the amount of atmospheric light available to them, as well as by the colours adjacent to the object being viewed. Texture also changes our perception of a colour, as does the brightness or darkness around it.
However much a designer hopes to define the parameters of a colour palette, there will always be unknown factors influencing the palette on the viewers’ end. Create a palette that is focused enough to create the right atmosphere and energy level for your project, but one that doesn’t rely too heavily on a specific colour. Careful, considered colour use will help define a message and create a mood that supports the composition and concept of a design work. Always create a palette that will work with both colour systems and also be robust enough to work in less than optimal environmental circumstances.
Negative Space
Negative space, which is also called white space, is the visually quiet area that surrounds the active area of a composition (see Figure 3.6). It is also referred to as figure/ground, and has a very important role in composition as it shapes the visual perception of the subject. Without negative space, there is no positive space — the effect is similar to seeing a polar bear in a snowstorm. Negative space is often thought of as passive and unimportant, but the active elements or ‘figure’ are always perceived in relation to their surroundings by the mind of the viewer. The composition of the negative space frames and presents the active elements in a flat or dynamic way. If the surrounding area is busy with many other elements, the focal point loses its power because the elements all have a similar visual value. The works of Gustav Klimt exhibit this quality.
Figure 3.6 Example of negative or white space
If, on the other hand, the work is balanced and the negative space is active, it brings energy to the form and its space. The focal point or figure increases its visual power because there is contrast for the eye. Another way to look at this is to see that the range or gamut of visual activity is increased and therefore the experience is more satisfying to the eye.
When designers play with reducing or confusing positive and negative space, they create ambiguity. Ambiguity creates tension, which increases the interest of a composition to the viewer and also increases the visual energy of a design. There are three types of figure/ground relationships.
Stable figure/ground is the most common type. The positive element is clearly separate and defined against its negative space. A good example of this is text blocks in magazines or books.
Reversible figure/ground is the second type and is found in most of the work of M.C. Escher. Both the positive and negative space deliver ‘active’ information that feels equal to the eye and therefore creates a toggling effect in the viewer. One shape is comprehended while the other acts as its negative space; then the opposite happens, and the negative space becomes meaningful while its opposite becomes the neutral ‘holding’ space.
Ambiguous figure/ground creates a confusing lack of focal point. The eye searches for a dominant visual ‘starting point’ in the composition but can’t find one. Often this creates energy, and if the effect is compelling, it invites the viewer to stay with the work for a long period of time, absorbing all of the visual information.
Figure 3.7 FedEx express truck
Designers often utilize figure/ground in the crafting of symbols, wordmarks, and logos because of its capacity to create meaning with the space surrounding a mark. An excellent example of figure/ground is the FedEx wordmark (see Figure 3.7). The negative space needed to define the letterforms also augments their meaning by creating a forward-pointing arrow. In print design, negative space can also allude to what is outside the frame and makes the field of the page or poster larger than it physically is. On a static or moving screen, negative space has the ability to change the flow of time, to introduce a break, or to create space around an important point.
Composing strong figure/ground tension is an excellent skill to acquire for designers of any media. Crafting white space eventually becomes as important to a designer as selecting the words and the elements of a project. Composing the negative spaces of a composition will allow you to vary visual emphasis of the elements, and control and increase the visual energy overall.
Texture
Figure 3.8 Example of texture
Texture is a visual and a tactile quality that designers work with (see Figure 3.8). Texture is used both in composition and also on the printed substrate or media space. Designers create textures for their projects with anything at hand. A texture can be made with typography, generated in raster or vector software like Photoshop or Adobe Illustrator, or by using a camera and capturing elements in the material world.
Using texture thoughtfully will enhance a visual experience and amplify the context for the content. Often adding texture adds visual complexity and a bit of visceral depth to a two-dimensional design project. It can also tie one piece of design to another, or become a defining element of a brand or a series of communications.
The tactile aspect of a design work comes into play with the choices we make for the substrate we print on. The surface can be smooth or rough, glossy or matte, thick or thin, translucent or opaque, paper, plastic, concrete, metal, wood, or cloth. Paper can even have two or more of these qualities if we augment the original look of the paper with layers of varnish that reverse the tactile effect of the substrate. Often the choice of substrate is most effective if it is sympathetic to or contrasts with the concept and content of the piece. The choice of substrate texture affects how the viewer perceives the content — both physically and optically. Glossy substrates often feel sophisticated, hard, and cold. They are imbued with a sense of precision because the ink sits on top of the surface of the paper and retains almost all of its original integrity. A textured matte paper feels organic, accessible, and warm because the ink is partially absorbed by the paper, and is therefore influenced by and fused to its softer characteristics.
Pattern is part of the element of texture, but because of its special ability to hold meaningful content, and its long and significant cultural history, it deserves a special mention. All patterns can be reduced to dot and line and are organized by a grid system of some kind. Their ‘flavour’ reflects the culture and time they come from and the materials that created them. Patterns can be a subtle addition to the content of any design work. A pattern can be created using a relevant graphic (like a logo) repeated multiple times, or it can support the organizational principles developed by the designer in a decorative way; for example, a grid based on the square can be paired with a pattern texture that is also based on the square.
When the pattern is seen as a whole, its individual components melt away and lose their identity to the larger field of the pattern. This ability to focus on a pattern in multiple ways creates a second purpose for the graphic element (such as a circle, a square, a logo, or symbol) the designer has used. In modern design practice, pattern is an opportunity to augment the clean and simple material surfaces we work with and ornament a page or a website with a relevant texture.
Typography
Figure 3.9 Typography
Typography is the medium of designers and the most important element we work with (see Figure 3.9). Typography not only carries a message but also imbues a message with visual meaning based on the character of a font, its style, and its composition. Words are meaningful in and of themselves, but the style and composition of words tells a reader whether you are serious, playful, exciting, or calm. Typography is the tonal equivalent of a voice, and its flavour can be as personal or as general as the message requires.
Typography traditionally has two functions in most design projects. One function is to call attention to or to ‘display’ the intent of a communication. This function is called titling or display typography and it is meant to call attention to itself. The second function is to present the in-depth details of a communication within a text block. This function requires a different typographic approach — one that is quiet and does not call attention to itself. Instead, it is intended to make the content accessible and easy to read.
Font Categories
There are many ways to categorize and subcategorize type. This overview discusses the seven major historical categories that build on one another. Serif fonts comprise four of these categories: humanist, old style, transitional, and modern. Italics, first designed in the 1500s, have evolved to become part of a font ‘family’ and were at one time a separate category. They were initially designed as independent fonts to be used in small pocket books where space was limited. They were not embraced as text fonts, but were considered valuable for adding emphasis within a roman text and so became part of the set of options and extensions a font possessed. The trajectory of use is the opposite for the sans serif category. Sans serif fonts have historically been used for display only, but in the 20th century, they became associated with the modern aesthetic of clean and simple presentation and have now become very popular for text-block design. Egyptian or slab serif fonts can be used as either display or text depending on the characteristic of the font design.
Blackletter
Figure 3.10 Example of Blackletter type
Blackletter was the medieval model for the first movable types (see Figure 3.10). It is also known as Block, Gothic, Fraktur, or Old English. The look of this font category is heavy and dark. The letterforms are often condensed and put together tightly in a text block, creating a dark colour (tone) for a page — between 70% and 80% grey. To put the tone in context, the usual tone of a modern text page is between 55% and 70% grey. The letterforms make the page hard to read because legibility was not their primary function as it is today. The beauty of the font and the form of the book was the primary goal for early publications. Books were considered objects of wealth and beauty, not solely a means to convey information.
Humanist
Figure 3.11 Example of Humanist type
Humanist fonts are also referred to as Venetian, because they were developed in and around Venice in the mid-15th century (see Figure 3.11). Their design was modelled on the lighter, open serif letterforms and calligraphy of the Italian humanist writers. The designers strove to replicate many of the characteristics found in this writing style, including multiple variations of a glyph (letterform) that a written document possessed. For instance, a font could have up to 10 different lowercase a’s to set a page with. Humanist types were the first roman types. Though they were much easier to read and lighter on the page than blackletter, they still created a visually dark and heavy text block in contrast to the fonts we have become accustomed to. Humanist fonts have little contrast between the thick and thin strokes — the strokes are usually heavy overall. The x-height of a humanist font is small compared to contemporary fonts, and this impedes quick comprehension and legibility. Humanist fonts are not often used for these reasons, though they are well respected because they are the original model so many other fonts are based on. It is important to remember that these fonts were a perfect match to the earliest printing technologies and that those presses could not have printed our light and delicate fonts. Fonts have evolved alongside the technological advancements of the printing industry.
Examples of humanist fonts include Jenson, Centaur, Verona, Lutetia, Jersey, and Lynton.
Old Style
Figure 3.12 Example of Old Style type
Old style fonts, also known as Garalde fonts, are the next leap in font design, and their stylistic developments were driven by the technological advancement of presses and the improved skills of punchcutters (see Figure 3.12). Font designers began to explore the possibilities of their medium — both the metal of the punches and the abilities of the presses and their papers. The letterforms became more precise, their serifs more distinct. The contrast of the stroke weights was also increased, and the presses held true to the designs and didn’t distort them. The aim of these new fonts became less about replicating the look of handwriting and more about refining the letterforms to create a lighter overall tone.
Examples of old style fonts include Goudy Old Style, Granjon, Janson, Palatino, Perpetua, Plantin, and Sabon.
Transitional
Figure 3.13 Example of Transitional type
A few centuries later, font design was again refined, and this time the impetus came from France and the Enlightenment movement. Fonts were created along the rationalist principles of the times. The strokes were contrasted further with very thick main strokes and very thin sub-strokes, and the serif, which capped the stroke, did not use bracketing (the rounding underneath the intersection of the two strokes). The letterforms took on a look that implied they were constructed mathematically and anchored within a grid. These new fonts broke with humanist and old style tradition and ceased to reference calligraphy.
Examples of transitional fonts include Baskerville, Bookman, Fournier, and Joanna (see Figure 3.13).
Modern
Figure 3.14 Example of Modern type
Modern fonts are also known as Didones and take the contrast started by the transitional fonts much, much further (see Figure 3.14). Bodoni is an excellent example font as nearly everyone can bring to mind the extreme contrast of its thick and thin strokes. The Frenchman Didot and the Italian Bodoni were the first to bring this design style to the public. Its major attributes align with the Romantic period’s aesthetics.
Romantic letters can be extraordinarily beautiful, but they lack the flowing and steady rhythm of the Renaissance forms. It is that rhythm which invites the reader to enter the text and read. The statuesque forms of Romantic letters invite the reader to stand outside and look at the letters instead. (Bringhurst, 2004, p. 130)
The major characteristics of modern fonts are an extreme contrast between thick and thin strokes; clean, unbracketed, hairline serifs; and a completely vertical axis. These fonts have an almost mechanical look because of their precise, sharp, and clean appearance. They also possess an elegance that complements the time period they emerged in. Modern fonts are often used as display fonts and can sometimes be used for text, though very carefully.
Examples of modern fonts include Fenice, Zapf Book, New Caledonia, Bodoni, and Didot.
Egyptian
Figure 3.15 Example of Egyptian type
Egyptian is also known as slab serif, square serif, or mechanical (see Figure 3.15). This category of font was created in England in the early 1800s — a design expression of the industrial revolution. The category was named Egyptian because of the popularity of all things Egyptian after Napoleon’s return from a three-year Egyptian expedition. The name of the style has nothing to do with any element of Egyptian culture. The style was created initially for display copy, but over the years, fonts like Clarendon have become popular for setting text blocks because they possess a quality of objectivity yet still feel traditional.
Examples of Egyptian fonts include Officina Sans and Officina Serif, Clarendon, and every typewriter font.
Sans Serif
Figure 3.16 Example of Sans Serif
Sans serif fonts have existed since ancient times, but it was only in the late 19th century that font designers began to consider removing serifs and letting the letterforms stand on their own (see Figure 3.16). These fonts were initially considered appropriate only for titling and display purposes, and only became text fonts in the hands of the 20th-century modernists. The first sans serif forms were modelled on the early humanist and old style calligraphic forms, but eventually the forms were influenced by objective modernist principles and geometry.
Examples of sans serif fonts include Univers, Helvetica, and Akzidenz-Grotesk. | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/03%3A_Design_Elements_Design_Principles_and_Compositional_Organization/3.01%3A_Introduction.txt |
We have many words for the frustration we feel when an interface isn’t directing us to what we need to know. Loud, messy, cluttered, busy. These words. . . express our feeling of being overwhelmed visually by content on a screen or page. We need them to express how unpleasant a user experience it is to not know where to direct our attention next. (Porter, 2010, para 1)
If everything is equal, nothing stands out. (Bradley, 2011)
The proper composition of visual elements not only generates visual stability, it also enhances mood and creates an order that prevents visual chaos. Designers use compositional rules in their work to draw the reader into their work and to deliver a design environment that is calm yet exciting, quiet yet interesting. A magazine designer, for example, creates a grid and applies an order to the typographic elements, creating a comprehensible hierarchy. This design system is interpreted in different ways, in pages and spreads, issue after issue. If the organizational system is versatile and planned with thought and depth, it can be used to produce unique and exciting layouts that remain true to the rules initially determined for the overall system. Organizational principles create a framework for design without determining the end results.
Compositional rules can be used to generate content as well as organize it. The Bauhaus artist and designer Laszlo Moholy-Nagy created a series of paintings by calling in a set of instructions to a sign painter using the telephone. Here is his account of the experience, written in 1944:
In 1922 I ordered by telephone from a sign factory five paintings in porcelain enamel. I had the factory’s color chart before me and I sketched my paintings on graph paper. At the other end of the telephone, the factory supervisor had the same kind of paper divided into squares. He took down the dictated shapes in the correct position. (It was like playing chess by correspondence). (Moholy-Nagy, 1947, p. 79)
Designing visual elements into a strong composition is a complex endeavour on its own, but increasingly designers are being asked to create vast compositional systems that other people will implement. Much like Laszlo Moholy-Nagy, designers need to be able to make strong compositional systems and also convey how their systems work, how to apply their rules, and how to apply them so they retain a relevant freshness.
Alignment
Figure 3.17 Alignment
Alignment refers to lining up the top, bottom, sides, or middle of a text, composition, or grouping of graphic elements on a page. Often a design composition includes a grid where the alignment of text blocks is dictated by the design of the columns of the grid (see Figure 3.17).
Typographically, horizontal alignment includes flush left (also called left justified or ragged right), flush right (also called right justified or ragged left), centred, and fully justified. Vertical alignment in typography is usually linked to baseline alignment. Digital software meant for the layout of type provides a baseline grid; the baseline is the invisible line on which font characters sit.
Contrast
Contrast is a visual device that increases the special character of both elements that have been paired. Contrast assists composition by creating focal points, and adds energy and visual range to a composition. Using contrast enables us to distinguish the qualities of one object by comparing its differences with another. Some ways of creating contrast among elements in a design include the use of contrasting colours, sizes, and shapes. Johannes Itten, a design instructor and artist at the Bauhaus, focused his research on the concept of contrast in both composition and colour. Itten’s list of contrasts can be applied to both the composition and the atmosphere of a design work. His list includes these pairings: large/small, hard/soft, thick/thin, light/heavy, straight/curved, continuous/intermittent, much/little, sweet/sour, pointed/blunt, light/dark, loud/soft, black/white, strong/weak, diagonal/circular. No design makes use of only one kind of contrast, but usually one dominates the others.
Colour Contrast
Johannes Itten also worked with contrast in his seminal theory of colour and determined that there are seven kinds of contrast.
1. Contrast of hue occurs when a hue or colour is separated by being outlined in black or white lines. White lines weaken the ‘strength’ and appearance of the colour and the colours around the white lines seem darker. In contrast, a black line around a colour strengthens the appearance of the colour, while the colours around the black lines appear to be lighter.
2. Light-dark contrast is the contrast between light values and dark values.
3. Cold-warm contrast refers to the contrast between cool and warm colours. Warm colours are the red, orange, and yellow colours of the colour wheel, while cool colours are blue, green, and purple.
4. Complementary contrast is the contrast between colours directly opposite each other on the colour wheel. (A quick way to compute such a pair is sketched after this list.)
5. Simultaneous contrast occurs between two colours that are almost complementary. One colour is one section to the left or right of the complementary colour of the other.
6. Contrast of saturation refers to the contrast between intense colours and tertiary or muted colours. Muted colours appear duller when placed next to intense colours, and intense colours appear more vivid when next to a muted colour.
7. Contrast of extension refers to the contrast between the area of one colour and another. Different areas of one colour are needed to balance another.
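Complementary pairs are easy to explore programmatically. The short Python sketch below approximates “directly opposite on the colour wheel” by rotating a colour’s hue 180 degrees. Note the assumption: it uses the RGB hue circle of Python’s standard colorsys module, which is a simplification of the painter’s red-yellow-blue wheel Itten worked with.

import colorsys

def complement(r, g, b):
    # Rotate the hue halfway around the hue circle; keep lightness and saturation.
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)

# A warm orange yields its cool blue complement (values are in the 0-1 range).
print(complement(1.0, 0.5, 0.0))  # -> (0.0, 0.5, 1.0)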
For text, contrast is achieved by using varied colours, serif and sans serif, type styles that are not often paired, or type in place of an image. As contrast in elements diminishes, the elements begin to feel similar, and the level of visual interest decreases.
Emphasis
A focal point in a composition draws the eye to it before the eye engages with the rest of the visual information. This is called emphasis and is achieved by making a specific element command the attention of the eye. Emphasis is created in graphic design by establishing a single focal point and clearly emphasizing it, placing the elements on the page in positions where the eye is naturally drawn to the proper entry point into the work. Designers rely on additional compositional principles, such as contrast, repetition, or movement, to support the hierarchy of a composition.
Designers use emphasis to assist viewers in identifying the relative importance of each element in a composition. Emphasis is strongly linked to visual hierarchy. Both emphasis and visual hierarchy create order for the viewer, allowing the eye to see the first element of importance, then the second, then the third, and so on. Graphic elements gain or lose emphasis by changing in size, visual intensity, colour, complexity, uniqueness, placement on the page, and relationship to other elements.
Movement
Figure 3.18 Example of movement
Movement is made by creating visual instability — like motion in a photograph that blurs the image, as shown in the example in Figure 3.18. Creating the illusion of movement photographically or artistically is not difficult because a blur translates into movement in the mind of the viewer. However, it is not the only option for a designer. A composition can also achieve movement if the graphic elements are arranged in a way that directs the eye to move in a specific direction — usually by creating a diagonal that takes the eye up to the right corner (forward motion) or down to the left corner (backward motion). Movement can also be created using overlapping planes that imply depth and distance by becoming progressively smaller and lighter in tone (mimicking depth). Using typography as a visual medium is also an option. Overlapping the text blocks and/or sentences effectively creates both depth and movement (though it destroys legibility). David Carson is a designer who often uses this technique to create movement in his work.
Scale
Varying scale (size) is one of the major tools in the designer’s toolbox. Changing scale is important on two levels. The first is purely compositional — a composition needs variety in the size of its elements to be dynamic and effective. If all the elements have the same visual weight, the composition will be flat. The second aspect of varied scale is conceptual. If a design visually distorts the size relation of one element to another, the viewer is instantly engaged in discovering why. This is a great method to engage the viewer and add a twist to the message embedded in the design. A great example of this is the ‘Think Small’ ad campaign of the 1960s for the Volkswagen Beetle.
The series is witty and engaging and plays on how we perceive size. The distortion is playful and presents smallness as desirable. Subtle scale differences do not make much visual impact, but large ones are very dramatic. The concept and context of a project should determine the relationship of scale differences for a composition. Large differences in scale are suited to dramatic and energetic design content, while smaller differences in scale are appropriate for professional and institutional content.
Proximity and the Gestalt Theory of Visual Relationships
Proximity of elements is part of Gestalt theory, which is a framework of spatial relationships developed in the 1920s by the German psychologists Max Wertheimer, Wolfgang Kohler, and Kurt Koffka. The term Gestalt means unified whole, and points to the underlying conceptual structure of this framework. Gestalt works because the mind seeks to organize visual information. A composition created using Gestalt principles predetermines how each of the elements within it interacts with the others spatially. In this system of relationships, close proximity of objects, regardless of shape, size, or content, indicates a connection. There are six basic Gestalt principles: (1) similarity, (2) continuation, (3) closure, (4) proximity, (5) figure/ground, and (6) symmetry and order.
Similarity
Figure 3.19 Similarity
When visual elements have a similar shape or look, a viewer will often connect the discrete components and perceive a pattern or grouping (see Figure 3.19). This effect can be used to create a single illustration, image, or message from a series of separate elements. Similarity of medium, shape, size, colour, or texture will trigger a sense of similarity. The sense of grouping will be strengthened or weakened by increasing or decreasing the commonality of the individual elements.
Continuation
Figure 3.20 Continuity
Continuation is the tendency of the mind to see a single continuous line of connection rather than discrete components (see Figure 3.20). The eye is drawn along a path, line, or curve, as long as there is enough proximity between objects to do so. This tendency can be used to point toward another element in the composition, or to draw the eye around a composition. The eye will continue along the path or direction suggested by the composition even when the composition ends, continuing beyond the page dimensions.
Closure
Figure 3.21 Closure
Closure is a design technique that uses the mind’s tendency to complete incomplete shapes (see Figure 3.21). The principle works if the viewer is given enough visual information to perceive a complete shape in the negative space. In essence, the mind ‘closes’ a form, object, or composition. In the example above, the triangle is formed by the viewer’s mind, which wants to close the shape formed by the gaps and spaces of the adjacent circles and lines. The partial triangle, outlined in black, also hints at the missing shape.
Proximity
Figure 3.22 Proximity
Proximity is an arrangement of elements that creates an association or relationship between them (see Figure 3.22). If individual elements are similar, they will probably be perceived first as a whole and second as discrete components. If, as in the example above, some of the components combine to create a large ‘whole,’ similar elements positioned away from the main shape will also be associated with the large shape. In this case, the viewer interprets them as falling off or away from the main shape. The shapes used do not have to be geometric to create the effect of proximity. Any components that have a strong commonality in shape, colour, texture, size, or another visual attribute can achieve proximity. Proximity can also be achieved with dissimilar shapes and textures if they are cleverly and conceptually composed.
Figure/Ground
Figure 3.23 Figure/Ground
Figure/ground was discussed earlier, but because it is part of Gestalt theory, we present it here again. This principle describes the mind’s tendency to perceive the information in positive and negative space as two different planes of focus (see Figure 3.23). It works if those spaces are suggestive enough in their composition.
Symmetry and Order
Figure 3.24 Symmetry
Symmetry and order follow the premise that a composition should not create a sense of disorder or imbalance (see Figure 3.24), because the viewer will waste time trying to mentally reorder it rather than focus on the embedded content. The photographic example in Figure 3.25 is composed symmetrically and allows the viewer to concentrate on the figure in the centre. Achieving symmetry in a composition also gives the composition balance and a feeling of harmony.
Figure 3.25 Example of symmetry and order
Rhythm
Rhythm is integral to the pacing of a design composition and is also necessary for creating a pattern, as used in the example in Figure 3.26. The pacing of a repeating motif or element at regular or irregular intervals within a design determines the energetic quality of a composition; it also creates a consistent and unifying backdrop for the introduction of new elements.
Rhythm is the effect produced in a magazine or book by varying the placement of elements within the grid structure. The changes in the density of elements and visual tones of the spreads translate into a rhythmic visual energy as the energy of each page grows or shrinks. Rhythm is the glue that connects one page to the next; it reveals recurrent themes and creates movement, tension, and emotional value in the content. When viewers understand the rhythm of a book, a magazine, or a website, they will also appreciate the variations that break with or punctuate the rhythm and create interest, change, or tension.
Figure 3.26 Example of rhythm
Repetition
Repetition creates visual consistency in page designs or in visual identities, such as using the same style of headline, the same style of initial capitals, and the same set of elements, or repeating the same basic layout from one page to another (see Figure 3.27).
Excessive repetition, however, creates monotony, which usually bores the viewer and produces dull, uninteresting compositions. Be sure to create a design system that allows the repetitions within it to be lively and interesting page after page. The example above uses a simple set of rules, but because the rules allow for colour and compositional changes, each discrete component is as interesting on its own as it is within the whole. If you cannot avoid excessive repetition, try to add some visual breaks and white space where the eyes can rest for a while.
Figure 3.27 Example of repetition
Balance
Balance and symmetry are important design qualities because they are deeply embedded in human DNA. Because our bodies are symmetrical, we have a strong association and satisfaction with centred, symmetrical design. Balancing visual elements compositionally calms the tensions and grounds the design (see Figure 3.28). This is important if you wish to convey a sense of stability to the viewer. When we look at a design, we use our innate sense of what constitutes ‘right balance’ to assess its stability. If that stability is missing, we feel tension, which can counteract the core of the message. Centred design compositions work very well for stable, security-inspiring content, but what about content that demands attention, or tension, or excitement?
When a centred (or stable) composition is not desirable, developing an asymmetrical composition is the best strategy. Asymmetry has been explored in graphic design for the last 150 years, and designers continue to discover new strategies that feel fresh. Asymmetry has no empirical rules but is guided by balancing the distribution of main elements around the space of a composition in an unexpected way. Contrast and counterpoint are the main tools of composition in asymmetry — large shapes balance small ones; intense colours balance neutrals. Creating asymmetrical design is not easy because there are no firm rules to follow, but it is exciting to create and exciting to see for exactly the same reason.
Figure 3.28 Example of balance
Hierarchy: Dominance and Emphasis
Simply put, hierarchy is applying an order of importance to a set of elements. Hierarchical order is apparent in every facet of our lives and is a defining characteristic of our contemporary culture. Hierarchy can be very complex and rigorous — an instruction manual is a good example of this. It can also be uncomplicated and loose. Hierarchy in composition is conveyed visually through variations of all the elements — size, colour, placement, tonal value, and so on (see Figure 3.29).
Figure 3.29 Example of hierarchy
Graphic design does not always embrace hierarchy. Some messages are better suited to visual anarchy and chaos (Punk design is a good example). These projects often connect to an audience by experimenting with, and breaking free from, universal rules of visual structure. What is important is to match the structure of the composition to the needs of the project.
Typographic hierarchy is very important in design. A body of text is made more comprehensible by imposing order through a system of titles, subtitles, sections, and subsections. Hierarchy is created when the levels of the hierarchy are clear and distinguishable from one another. Subtle signs of difference are not effective. Typography acts as a tonal voice for the viewer, and must create clear variation in tone, pitch, and melody.
Hierarchy is usually created using similarity and contrast. Similar elements have equality in typographic hierarchy. Dominant and subordinate roles are assigned to elements when there is enough contrast between them. The bigger and darker an element is, the more importance it has. Smaller and lighter sizes and tones imply lesser importance.
Every hierarchy has a most important level and a least important level. The elements that fall between the two are ranked according to size and position. However, if you subdivide the text with too many levels, the contrast between different levels will blur their differences in the hierarchical order.
A good strategy to follow with text design is to apply three levels of typographic hierarchy.
Title
The function of a title is to attract the reader to the content of the text block. Often, the title is visually ‘flavourful’ and possesses a strong visual dynamic and energy.
Subtitle
Second-level typography gives the reader the ability to distinguish between types of information within the text block. This level of type includes subheads, pull quotes, captions, and anything else that can help detail and support understanding of the text-block information.
Text block
The text block is the content. As opposed to the ‘display’ function of the title and subtitle, the function of the text block is to make the content legible and easy to digest visually. Readers should be able to decide if they want to read this level based on primary (title) and secondary (subtitle) type levels.
Typically, a typographic hierarchy will convey information from general to specific as it progresses from title to text block. The general points presented in the title will be the most important and will be seen by almost everyone. Think of how a newspaper is scanned for interesting news items: if readers are interested in the title, they may choose to read the more detailed and in-depth information in the associated text block.
Media Attributions
• alignment by Ken Jeffrey
• Blast2_movement by Wyndham Lewis © Public Domain
• gestalt_similarity-01 by Ken Jeffrey
• gestalt_continuity-02 by Ken Jeffrey
• gestalt_closure-03 by Ken Jeffrey
• gestalt_proximity-04 by Ken Jeffrey
• gestalt_figureground-05 by Ken Jeffrey
• gestalt_symmetry-06 by Ken Jeffrey
• Chicago_world’s_fair by Weimer Pursell © Public Domain
• Class_of_’40_presents_LCCN98513436 by Library of Congress © Public Domain
• Flower_poster by Alvesgaspar © CC BY-SA (Attribution ShareAlike)
• Da_Vinci_Vitruve_Luc_Viatour2 by Leonardo da Vinci © Public Domain | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/03%3A_Design_Elements_Design_Principles_and_Compositional_Organization/3.03%3A_Compositional_Principles__Strateg.txt |
Compositional organization is complex, but even more so when applied to typography. Typography is a complicated medium to work with as it contains two levels of information (display and content), and requires its components to be read in proper sequence with proper emphasis, good legibility, and strong contrast to the substrate. Many elements need to be organized to allow the reader a seamless experience when reading the content. Designing with type requires adept handling of the hierarchy, refining and designing the display elements for focal emphasis and also refining the quiet details of the text block so it sits perfectly and quietly in its space.
Think of these organizational systems as ‘large picture’ constraints. Constraints (rules) allow a designer to focus on the other aspects of a project. Designers make myriad decisions about concept, style, visuals, form, font, size, spacing, colour, placement, proportion, relationships, and materials. When some factors are determined in advance, the designer is able to spend time with the other parts of the project. A well-defined constraint can free up the thought process by taking some decisions off the table. The following eight organizational systems cover composition for type (but can also be applied to general composition), including the traditional ordering system of the grid.
Grid
A grid is a network of lines that structure the placement of elements and create relationships between them. A grid divides a design space into vertical and horizontal divisions. The grid is a bridge between a design rationale and the beginning of implementation for each project, converting a concept into a structured space. It is an exceptional tool for composing, arranging, and organizing every kind of visual element. The grid usually works invisibly in the background, but it can become an active, visible element as well. Designers use grids in a variety of ways. They can be very disciplined about adhering to their grid structure from the beginning of a project, or use it as a starting point for composition and order.
Grid systems create a formal composition in comparison to more casual compositional approaches like transitional or random structures. Grids are often used in publication and web design because they introduce consistency and guide hierarchy. Consistent margins and columns create an underlying structure that unifies the multiple pages of a document or website, and makes the layout process more efficient.
The plan for the grid comes from the content and concept of the design project. The objective in creating a grid is to set up the relationships between elements in a way that stays true to the concept. For instance, if your publication is a book of poetry, the grid must have generous amounts of negative space and generous leading. If, on the other hand, your publication is a daily newspaper, the spacing relationships cannot be so generous, and have to clearly show which article relates to which image. Hierarchy of information must be very clear as well, and should reveal which news item is most important and which is least important. A well-made grid will naturally allow the designer generous scope for variation in image style, text size, and graphic style. Often, a grid that is complex allows for some freedom where the designer can introduce a new element or effect.
A grid activates the entire surface of a project by making all of it available for active elements. It helps create both stable symmetrical and dynamic asymmetrical compositions. By breaking down space into smaller units, grids encourage designers to leave some areas open rather than fill up the whole page.
Types of Grids
Figure 3.30 The golden section
The golden section is also known as the golden ratio, golden mean, or divine proportion, and it is found in mathematics, geometry, life, and the universe — its applications are limitless (see Figure 3.30).
The golden section is a ratio — a relationship between two numbers — that has been applied as an organizational system in art, design, and architecture for centuries. Expressed numerically, the ratio for the golden section is 1 : 1.618. The formula for the golden section is a : b = b : (a+b). In other words, side a is to side b as side b is to the sum of both sides.
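The 1.618 figure follows directly from the defining proportion, taking $a$ as the shorter side, as in the verbal description above. Writing $\varphi = b/a$ for the ratio, $a : b = b : (a+b)$ gives $\frac{a}{b} = \frac{b}{a+b}$, so $b^2 = a^2 + ab$, and dividing through by $a^2$ yields $\varphi^2 = \varphi + 1$. The positive root of $\varphi^2 - \varphi - 1 = 0$ is $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$, which is the 1 : 1.618 ratio quoted above.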
Graphic designers use the golden section to create grids and layouts for websites and books. Photographers use it to compose the focal point of an image and also to compose the elements found in an image.
Single-Column Grid
Figure 3.31 Single-column grid
A single-column grid is an excellent approach if the content a designer is working with is formatted in a simple manner (see Figure 3.31). Content that is appropriate for a single-column grid consists of main text for the text block, a few levels of display type, possibly some images, and finally page numbers.
The main column of this style of grid must sit properly on the page, held in place by the negative space that surrounds it. To determine the right amount of negative space on the top, bottom, and sides of the page, a designer usually considers facing pages as a spread. In books and magazines, the two-page spread, not the individual page, is the main unit of design. The designer determines the right amount of negative space on the top and bottom, gutter (inside margin), and outside edge. The spread is often symmetrical, and the pages mirror one another.
Multi-Column Grid
Figure 3.32 Multi-column grid
When a designer creates a grid for a document that is complicated, he or she may use multi-column grids because they allow for a complex hierarchy and provide more options for integrating text and visuals (see Figure 3.32). The more columns you create, the more flexible your grid will be. You can use the grid to articulate the hierarchy of the publication by creating zones for different kinds of content. The columns act as visual units that can be combined or kept separate. A photo can span several columns or be reduced to the width of only one. A text can also occupy a single column or span several.
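The arithmetic that underlies a column grid is worth making explicit. Assuming a page with equal left and right margins and a uniform gutter between adjacent columns, each column’s width is the live area minus the gutters, divided by the number of columns. A minimal Python sketch, with hypothetical measurements in points:

def column_width(page_width, margin, gutter, columns):
    # Live area = page width minus both side margins;
    # (columns - 1) gutters separate the columns.
    live_area = page_width - 2 * margin
    return (live_area - (columns - 1) * gutter) / columns

# A US letter page (612 pt wide) with 54 pt margins and a 12 pt gutter:
for n in (1, 2, 3, 4):
    print(n, "columns:", round(column_width(612, 54, 12, n), 1), "pt each")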
Hang Lines
Figure 3.33 Hang lines
In addition to creating vertical columns in a grid, you can also divide the page horizontally. Often, a designer determines the placement of hang lines (see Figure 3.33) by applying the rule of thirds (breaking up the horizontal plane into three equal parts). This compartmentalization allows the designer to reserve certain sections for images and others for the text block.
Modular Grid
Figure 3.34 Modular grid
The modules of this type of grid are always identical and are created by applying consistent horizontal and vertical divisions to the design space. Like the written notes in a musical score, the modules allow you to anchor your layout elements and typography to a specific rhythm. With a modular grid, the horizontal guidelines are tied to the baseline grid that governs the whole document. Baseline grids serve to anchor most of the elements to a common leading (typographic line spacing). See Figure 3.34.
Baseline Grid
A baseline grid is the horizontal grid that determines where all of the type will sit. You can also use it to determine the placement and edges of your visual and graphic elements. To create a baseline grid, determine the right font, size, and leading for your text block, then go to your baseline grid settings (found with the other grid preferences) and change the baseline grid default (usually 12 pt) to the leading you will be using in your text block.
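Because every line of type snaps to the baseline grid, the text block works best when its height is an exact multiple of the leading; otherwise the last baseline falls short of the bottom margin. The Python sketch below works through that arithmetic, using hypothetical measurements in points:

def fit_baselines(page_height, top_margin, bottom_margin, leading):
    # Whole baselines that fit in the text block, plus the leftover
    # space a designer could absorb into the bottom margin.
    block = page_height - top_margin - bottom_margin
    lines = int(block // leading)
    return lines, block - lines * leading

# A US letter page (792 pt tall), 60 pt top and 72 pt bottom margins, 14 pt leading:
lines, leftover = fit_baselines(792, 60, 72, 14)
print(lines, "baselines, with", leftover, "pt left over")  # 47 baselines, 2 pt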
Axial
The axial system has a simple premise — all elements are arranged on either side of an axis or line. You can centre the axis itself in the composition or, for a more energetic asymmetrical composition, place the axis off centre to either the right or left. This compositional strategy creates a dynamic negative space on the opposite side. To create a more complex composition, designers often employ an axial system combined with another — like the radial or dilatational system (see below). They may also use double-axis compositions with the axes either parallel to each other, or intersecting to create a strong focal point. There are many instances of the axial system in nature — tree trunks, roots, and vines are good examples. Like these organic examples, an axis does not need to be a straight line — it can be curved, zigzag, or circular.
Modular
Modular organization is a compositional method that utilizes rigour (by constraining the shape) and freedom from structure (modules can be any size and placed anywhere in the space). Modules can also be uniform and contained within a structure (a grid). A module is a fixed element used within a larger system or structure. For example, a pixel is a module that builds a digital image.
Bilateral
The bilateral system is based on mirrored symmetry and is therefore both classic and ubiquitous. Because of its predictability, it is a challenge for designers to work with. Nature exhibits many examples of bilateral composition — the bodies of mammals, the points of a snowflake, and the fractal symmetry of plants are all quickly understood, appreciated, and then dismissed by the viewer. To create a composition based on the bilateral system, a designer must make some part of the composition unusual. The designer can achieve this by moving the axis to a diagonal, off-centre location, which allows the negative space on either side of the bilateral composition to be varied. A second method is to introduce a double axis: the designer uses two columns of bilateral information and varies the size of each.
Radial
The radial system takes its name from the sun — all elements are arranged like rays coming from a central focal point. This is a dynamic compositional strategy as it references dynamic action. Examples of the radial form from the natural world, such as explosions, flowers, spiders, and stars, are all exciting and dynamic. Much like the natural phenomena it references, a radial composition is not easy to handle. There are problems with legibility unless type is very carefully placed and scaled. Every line of type starts and ends in a different place, so continuity is also hard to control. For example, a designer may take a traditional approach so the text reads from top to bottom, or an inverse approach so the text reads from bottom to top. Arranging the text on either side of centre may also be effective. It is important to try placing the type in different positions and in different relationships until it works with the composition and is easy to read.
As in the organizational systems we have discussed, designers can add radial points for a more complex composition or combine a radial system with one that adds stability, such as a grid, axial, or modular system.
Dilatational
Dilatational systems mimic the look of still water when a pebble is dropped into it, creating rings of greater and greater size as they move away from the centre. Like the radial system, this composition has a strong focal point, but unlike the radial system, the composition creates rings, not rays, that expand from the centre. Other examples of this system are the iris of the eye or representations of sound waves.
Random/Spontaneous
Creating a composition that does not follow any compositional principle is not as easy as it sounds. Finding examples of randomness is also one of the most difficult exercises for design students. Random design does not follow any rule, method, direction, or pattern. If a project calls for randomness, it is best to start with materials that are conducive to spontaneity like Jackson Pollock’s paint throws. Allow the elements that fall to organize themselves naturally — eventually, a dynamic and fresh composition should emerge. Random compositions exhibit visual qualities that are not patterned, aligned, or horizontal. Instead, they tend toward compositions that exhibit overlapping, cropping, angling, and textures.
Transitional
The transitional typographic system is defined by the layering of lines of text into informal textured planes and shifting tonal bands. The shapes and bands created with this layering approach are not aligned with one another, and create an overall organic atmosphere. This visual approach is often used in Post Modern design where the clear legibility of text is not as important as the visual atmosphere of the design. Text planes in Post Modernist works point the viewer to the main theme of the message rather than articulate the message in clean, concise text arrangements.
Compositions using the transitional approach have a light, airy look that abstractly imply cloud formations or wood grain patterns rather than solid concrete shapes created by using the grid or axial systems. A transitional composition has lively and active negative space, and can create an excellent ground for a vital focal point if it is in sharp contrast to the rest of the composition.
Attributions
Figure 3.32
Image includes: Photo by Stefanus Martanto Setyo Husodo used under a CC0 license and Cape Nelson Lighthouse, Portland, Australia by Joshua Hibbert used under a CC0 license.
Figure 3.33
Image includes: Photo by Stefanus Martanto Setyo Husodo used under a CC0 license and Cape Nelson Lighthouse, Portland, Australia by Joshua Hibbert used under a CC0 license.
Figure 3.34
Image includes: Photo by Stefanus Martanto Setyo Husodo used under a CC0 license and Cape Nelson Lighthouse, Portland, Australia by Joshua Hibbert used under a CC0 license.
Media Attributions
• grid_single column by Ken Jeffrey
• grid_multi column adapted by Ken Jeffrey
• grid_hang lines adapted by Ken Jeffrey
• grid_modular adapted by Ken Jeffrey
3.05: Summary
Exploring the design possibilities that the organizational systems discussed in this chapter possess is an endless undertaking. Once these systems are innately understood individually, a designer can begin to play with layering two systems within one design project. Combining contrasting systems often works well. For instance, an axial system combined with a radial system tempers the axial system’s linear focus and anchors and diffuses the rays emanating from the radial shapes. A grid combined with a dilatational system gives the composition both vertical and horizontal structure that is softened by the rounded shapes. Organizational systems give the designer ways to distribute words or images within a structure while allowing negative space into the centre of the design space.
Compositional strategies are design constraints. A design constraint applies or imposes limitations on the elements or design of a system. The compositional strategies (systems) discussed above are in fact design constraints, and they should be understood as parameters that assist the designer in the design process rather than as restraints that limit the designer’s creativity. Parameters are necessary in every visual system. Applying a visual organizational system also allows the designer to focus on the message and the details of a design project rather than on the structure of the composition that holds the work together. Visual systems create visual unity.
Questions to consider after completing this chapter:
1. Name the design principle that distorts realistic relationships for visual effect and emphasis.
2. Name the three building blocks of design that pertain to form.
3. Describe the eight organizational systems that apply to typography.
4. What are two typographic categories?
5. How many levels of visual hierarchy are needed for hierarchy to exist?
Suggested Readings
Bradley, S. (2011, January 31). Counterpart and counterpoint in typographic hierarchy. Vanseo Design. Retrieved from http://www.vanseodesign.com/web-desi...hic-hierarchy/
Elam, K. (2007). Typographic systems. New York City, NY: Princeton Architectural Press.
Designhistory.org. (2011). The sans serif. Retrieved from http://www.designhistory.org/Type_mi...SansSerif.html
Taylor, K. (n.d.). The metaphysics of color. Philosophy talk. Retrieved from www.philosophytalk.org/commun...aphysics-color | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/03%3A_Design_Elements_Design_Principles_and_Compositional_Organization/3.04%3A_Organizational_Principles.txt |
Our primary goal in colour management is to provide a consistent perceptual experience. As we move from device to device, within the limits of the individual device’s colour gamut, our interpretation of the colour event should be the same.
As we’ve discussed, but it’s worth repeating, we achieve that goal in two fundamental steps:
1. We give colour its meaning through device independent colour values correlated to a specific device’s RGB or CMYK numbers.
2. We transform our destination device’s specific values to match the perceived colour definitions of our source.
The Components
We have spoken at great length about colour profiles, but there are three additional pieces required to enact a colour-managed workflow: the profile connection space (PCS), a colour management module (CMM), and rendering intents.
The PCS provides the device independent colour definitions to which device values can be matched. The ICC specification allows the XYZ or Lab colour spaces for the PCS.
Profiles provide the look up table for the conversion of device values to PCS and vice versa. Any conversion requires two profiles (or a device link into which two profiles have been merged).
The CMM is the software engine that actually does the conversion. It lives in a product like Adobe Photoshop or Kodak Prinergy, where file processing takes place. The CMM provides two major benefits: it reduces profile size by doing the interpolation in colour calculations, and it compensates for shortcomings in the Lab colour space. The CMM does the heavy lifting by providing the computation of the conversion. For consistency, it is best to keep to a common CMM as much as possible. The Adobe Color Engine, or ACE, is the CMM referenced in the Color Settings dialog window of the Adobe Creative products.
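To make the interpolation point concrete: a profile’s lookup table stores conversions only at sampled lattice points (a 17 × 17 × 17 grid is a common size), and the CMM estimates every colour in between. The sketch below shows trilinear interpolation for a single output channel; this is an illustrative simplification, since production CMMs typically apply trilinear or tetrahedral interpolation across all output channels at once.

def trilinear(lut, size, r, g, b):
    # lut[i][j][k] holds one sampled output value; r, g, b are 0-1 inputs.
    x, y, z = r * (size - 1), g * (size - 1), b * (size - 1)
    i = min(int(x), size - 2)
    j = min(int(y), size - 2)
    k = min(int(z), size - 2)
    fx, fy, fz = x - i, y - j, z - k

    def lerp(lo, hi, t):
        return lo + (hi - lo) * t

    # Blend the enclosing cell's eight corners, one axis at a time.
    c00 = lerp(lut[i][j][k],         lut[i + 1][j][k],         fx)
    c10 = lerp(lut[i][j + 1][k],     lut[i + 1][j + 1][k],     fx)
    c01 = lerp(lut[i][j][k + 1],     lut[i + 1][j][k + 1],     fx)
    c11 = lerp(lut[i][j + 1][k + 1], lut[i + 1][j + 1][k + 1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)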
Rendering intents are the instructions for dealing with out-of-gamut colours (see Figure 4.10). They are user-selectable controls to define colour rendition when you move colour from one device to another. There are four types of rendering intents: perceptual, saturation, relative colorimetric, and absolute colorimetric. Each intent represents a different colour conversion compromise, resulting in a different gamut mapping style.
Figure 4.10
Perceptual and saturation intents use gamut compression, where the overall gamut space is adjusted. Relative and absolute colorimetric intents use gamut clipping, where colour matching is maintained throughout the available gamut, and out-of-gamut colours are moved to the available extremes of the destination gamut.
The perceptual intent is used mainly for RGB to CMYK transformations, which are typically image conversions. Since we are moving from a larger gamut (RGB) to a smaller gamut (CMYK), it makes sense to employ a rendering intent that preserves the overall relationship rather than one that emphasizes one-to-one colour matching within the gamut.
The saturation intent is the least relevant for colour-managed workflows. When you use this intent, colour saturation is preserved as much as possible at the expense of hue and luminance. The result is a bad colour match, but the vividness of pure colours is preserved. This intent is usually used for documents such as presentations, charts, and diagrams, but not for graphic arts jobs.
The two colorimetric intents, relative and absolute, are closely related. They are both used for CMYK to CMYK conversions where the gamuts of source and destination are closely matched or the destination is larger (typical of a proofer compared to the press it is matching). They emphasize exact colour matching for in-gamut colours and clip out-of-gamut colours.
The only difference between the two colorimetric rendering intents is in white point handling. The absolute intent pays attention to the colour of white in the source and reproduces that in the destination. Think of newspaper printing, where the whitest colour is the paper, which has a dull and beige tone. With an absolute rendering intent, a proofer matching the newspaper would add that beige colour to all of the white areas of the proof. This matching of destination white to source white is not usually necessary due to the chromatic adaptation or colour constancy that we discussed earlier. We have a built-in mechanism that adjusts our judgment of overall colour relationships independent of the appearance of white. For this reason, the relative colorimetric intent is used most of the time, and the white of the destination is not adjusted to simulate the white point of the source.
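Outside the Adobe applications, the same machinery is available through the open-source LittleCMS engine, which the Python imaging library Pillow exposes in its ImageCms module. The sketch below converts an RGB image to CMYK with the two most commonly used intents; the image and profile filenames are hypothetical, constant spellings vary slightly across Pillow versions, and the flag shown mirrors Photoshop’s Use Black Point Compensation option.

from PIL import Image, ImageCms

im = Image.open("photo.jpg").convert("RGB")            # hypothetical source image
src = ImageCms.getOpenProfile("AdobeRGB1998.icc")      # hypothetical source profile
dst = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")   # hypothetical press profile

# Perceptual intent: compress the whole gamut, the usual choice for
# RGB-to-CMYK image conversions.
perceptual = ImageCms.profileToProfile(
    im, src, dst,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
    outputMode="CMYK")

# Relative colorimetric intent with black point compensation: match
# in-gamut colours exactly and clip the out-of-gamut ones.
transform = ImageCms.buildTransform(
    src, dst, "RGB", "CMYK",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    flags=ImageCms.FLAGS["BLACKPOINTCOMPENSATION"])
relative = ImageCms.applyTransform(im, transform)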
With all of the pieces of the colour management process clearly delineated, we can put them to use in our standard graphic arts workflow applications.
Attribution
Figure 4.10
Image modified from: Comparison of some RGB and CMYK colour gamut by BenRG and cmglee is used under a CC BY SA 3.0 license. | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/04%3A_Color_Management_in_the_Graphic_Technologies/4.01%3A_4.1-0_The_Components_and_Purpose_of_a_Colour_Manageme.txt |
Colour management comes into play at two primary points in the print production workflow: during file creation with authoring tools like the Adobe Creative applications (Photoshop, InDesign, Illustrator), and then when the file is processed for output with a workflow software program such as Kodak Prinergy. Let’s examine the details in these most widely used software tools to provide concrete examples.
Colour Set-up in the Adobe Creative Applications
The primary tool for colour management in the Adobe products is the Color Settings dialog under the Edit menu. Fortunately, these settings can be shared across all of the Adobe applications to coordinate a consistent delivery of colour strategy. Define your settings in Photoshop, as this is the application with the largest number of options, to guarantee that all possible options have been set to your choices.
Launch Photoshop and, from the Edit menu, choose Color Settings. There are three main sections to the dialog window: Working Spaces, Color Management Policies, and Conversion Options. Change the Settings option above the Working Spaces panel to North American Prepress 2. This applies a set of defaults that are optimal for a print production workflow.
Working Spaces is Adobe’s name for default profiles. These are the profiles that will be used if no other information is available. If you open a file that is untagged (the terminology for a document that has no profile attached to it), the profile listed for the colour space matching the file will be assumed and used as long as the file is open. It will not persist with the file once the file is closed. If you create a new file in the application, the profile listed will be assigned and the profile reference will move forward with the file.
Let’s review and clarify the terminology associated with describing the status of a colour profile relative to a particular document or file. A file that has a profile is referred to as tagged while one without profile is untagged. A tagged document can have one of two relationships with its colour profile. The colour profile can be embedded or assigned. An embedded profile is actually written into the file content. This increases the file size, but guarantees that the desired profile will be available. For an assigned profile, only a reference to the profile is contained in the document. File size is reduced, but access to the profile depends on the application and environment processing the object. You can think of an assumed profile as a temporary assignment that will only last as long as the file is open.
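The tagged/untagged distinction is easy to inspect programmatically. In Pillow, for instance, an embedded profile travels with the file’s metadata, so a quick check (the filename here is hypothetical) looks like this:

from PIL import Image

im = Image.open("artwork.tif")        # hypothetical file
icc = im.info.get("icc_profile")      # bytes of the embedded profile, if any
if icc:
    print("Tagged: embedded profile,", len(icc), "bytes")
else:
    print("Untagged: the opening application must assume a profile")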
For Working Spaces options, the RGB default of Adobe RGB (1998) that comes with the North American Prepress 2 setting is a perceptually even RGB space, which makes it better for editing and a good choice. The CMYK field is where you should choose a profile specific to the final output device if it is known. The SWOP profile is a reasonable fallback and is commonly used as the industry standard for generic work. Be aware that choosing SWOP will constrain the gamut to the capability of a web offset print condition.
The list of profiles available for selection comes from the various ColorSync aware folders that we have previously discussed. Priority is given to profiles in the Library/Application Support/Adobe/Color/Profiles/Recommended folder, and these profiles are listed first. If you have a profile that you wish to give prominence to for selection, place it in this folder.
The Color Management Policies subsection controls behaviour when opening or creating documents and when moving objects between documents. There are three options for each of the available colour space settings: Off, Preserve Embedded, or Convert to Working.
The choice of Off is the most misleading, because we can’t actually turn colour management off: there is always an assumed profile if no other information is presented. With Off, copy and paste of an object moves tint values: a 50% cyan value from the originating document lands as 50% cyan in the destination document.
Preserve Embedded does what it says and maintains what’s in place. New documents use the Working Space profile and become tagged. An untagged file assumes the working space profile but stays untagged. If you copy and paste native RGB objects, they are converted. If you copy and paste native CMYK objects, the tint values are maintained.
Our final choice for colour management policy is potentially the most dangerous. Convert to Working converts tagged documents using the existing profile as a source profile and the Working Space profile as the destination. If you do not have the Alert check boxes ticked, this can happen without your awareness. For an untagged document, it assumes the Working Space profile. Copying and pasting RGB or CMYK objects always converts to preserve appearance (that is, the tint values change).
After reviewing the choices, the recommendation for Color Management Policies is Preserve Embedded, and make sure all Alert boxes are checked. This allows you to confirm that any action is what you actually want before proceeding.
The last section of the Color Settings dialog window is the Conversion Options. The Engine option refers to the colour management module (CMM) that will be used for calculations in the colour conversions. The default choice of the Adobe Color Engine is good for maintaining consistency. Here we also have the Rendering Intent entry, which will function as a default unless an alternate intent is specified in any dialog. Relative Colorimetric is a reasonable choice unless you know that almost all of your conversions will be RGB to CMYK, for which Perceptual is the appropriate intent option. Always check Use Black Point Compensation. This maps the source black point to the destination black point, avoiding any clipping or flattening of the darkest colours and maintaining the full dynamic range.
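To make the effect of black point compensation concrete, here is a deliberately simplified Python sketch. It rescales lightness (L) values so that source black lands on destination black instead of being clipped; the black-point numbers are hypothetical, and a real CMM performs a comparable linear scaling on XYZ values rather than on L alone.

```python
# Simplified black point compensation: rescale the source tonal range so
# the darkest source value maps onto the darkest value the destination
# can render, rather than clipping everything below it.

def bpc(l_source, src_black=0.0, dst_black=12.0, white=100.0):
    scale = (white - dst_black) / (white - src_black)
    return dst_black + (l_source - src_black) * scale

print(bpc(0.0))    # 12.0 -- source black maps to destination black
print(bpc(50.0))   # 56.0 -- midtones are compressed proportionally
print(bpc(100.0))  # 100.0 -- white is unchanged
```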
Now that we have all of our working parameters correctly defined, we can OK the Color Settings dialog and look at the basic mechanism for using colour profiles in the Adobe Creative applications. There are two actions we can invoke from the Edit menu to apply colour profiles: Assign Profile and Convert to Profile.
Assign Profile allows us to select a source profile for the open document. This action will replace the existing profile for a tagged document or provide a new reference for an untagged file. To have the association persist, you must save the file and check the option to embed the current colour profile. Assigning a profile will change the onscreen appearance of the file but not alter any of the file’s tint values (colour data). We are simply instructing the computer on how the original CMYK or RGB values would look if they were coming from our new source (as described in the look up table of the new colour profile).
Convert to Profile is an immediate and irreversible (once the file is saved) transformation of the document’s tint values. The assigned profile is used as a source and the user selects a destination profile and rendering intent. The conversion takes place and the file now has new RGB or CMYK numbers as a result. If you use the advanced dialog to specify a device link profile, then the currently assigned source profile is ignored for the calculations since the device link contains information for both the source and destination conditions.
Both the Assign and Convert dialog windows come with a Preview check box that will allow you to toggle between the before and after state to visually validate your choices and experiment with the effects of choosing different rendering intents.
Assessing the Effect of a Colour Profile
Once the profile is applied, what should we be looking for? Both on screen and when comparing hard-copy (printed) samples, there are specific areas of the image that should be checked. There is also industry-specific language used in describing colour appearance that it is helpful to be familiar with. The areas of the image to pay special attention to are saturated colours, flesh tones, neutrals, and the highlights. Proof and print sample sheets will have four or five images that emphasize these areas along with tone ramps in the process and overprint colours. Focusing on these areas of interest will make it easiest to identify variation when checking for colour matching.
The terminology that is often employed is colour cast, to indicate a shift toward a particular colour; heaviness, to suggest excessive tone (particularly in the highlights); dirty, to specify too much complementary colour resulting in greying; and flat, to describe a lack of contrast and/or saturation. Knowing the terminology will help you understand the comments your co-workers may make and will help remind you of the types of analysis you should be doing.
Additional Colour Tools in Adobe Acrobat
In addition to the fundamental profile handling procedures described above, there are several powerful and useful colour tools in Adobe Acrobat that can be used once you have exported your desktop file to a PDF. These are found among the Print Production Tools in the Acrobat Tools menu. Two are of particular note: Convert Colors and Output Preview.
Convert Colors allows you to convert colour spaces, such as changing RGB content to CMYK. It also enables transforming spot colours (such as Pantone) to CMYK. In addition, if the file incorrectly contains multiple instances of a spot colour that should all appear together on the same printing plate (e.g., Pantone 283 and special blue), they can be linked to behave as a single entry on the colour palette.
Output Preview does not apply any changes to the file, but is an extraordinarily powerful review mechanism. It enables you to confirm that colour elements in the file are as they should be before you commit to the expensive step of actual output. With Output Preview, you can turn individual separations on and off to check overprints and knockouts; check the separation list to confirm which elements are attached to each separation; identify the colour space of each object; and even highlight any area that exceeds your threshold for total ink coverage.
Profile Use in Kodak Prinergy
The final topic in our exploration of colour management in graphic technologies is an example of the application of colour management in a print production workflow. We’ll use one of the predominant workflow applications, Kodak Prinergy, as a model.
The processing instructions for file handling in Prinergy are contained in a process template. Input files are ‘refined’ to PDF in the Prinergy workflow and an important portion of the refining process is the instructions relating to colour conversions. In addition, we have process templates for output both to proof and final output. These templates also contain colour control options. For both refine and output process templates, the colour management information is contained in the Match Colors box of the Color Convert subsection.
Prinergy offers a comprehensive colour management application of its own called ColorFlow. There is a check box in Color Convert to turn on ColorFlow awareness and pass all of the colour management decisions to what has been predefined in ColorFlow. Discussing the structure and functional logic of ColorFlow is beyond the scope of this text. To use Prinergy as a more generic workflow example, we’ll uncheck the ColorFlow option and turn off the integration.
The standard location to store profiles for use by Prinergy is \Prinergy\CreoAraxi\data\ICC-Profiles. Profiles are not immediately available from this location in Prinergy’s drop-down menus, but can be browsed to if placed in job-specific subfolders.
Let’s look at the Match Colors section of the refine process template. With ColorFlow integration turned off, the entry fields under Assign Input Device Conditions become available. If you check Override Embedded Profiles, then profiles that are checked on in the Assign Input Device Conditions section will replace all existing profiles in the files being processed. Notice that there is a very granular level of control, with independent entries for CMYK images, CMYK graphics, RGB images, and RGB graphics. If you specify a device link profile, it will override any tagged profile whether or not Override Embedded Profiles is checked.
Convert to Destination is where you indicate the destination profile. This will only be used for instances not covered by a device link in the assign section. Remember that any device link contains both source and destination information. Beneath the Convert box is a box for selecting rendering intents. Entries are only available for the options that were checked on under Assign Input Device Conditions. The default entry is Perceptual since most input files will require image conversions from larger colour gamuts.
For output process templates in Prinergy, the options are very similar to what has been described above. We typically use colour management for output to proofs, either single page or imposed. There is a separate ColorFlow section to uncheck to enable the more traditional colour management approach. In the Match Colors box, Match Colors in Page Content must be checked on for any colour profiles to be applied. Also, you must select one of the following radio buttons: As Defined Below, If Not Set in Refining, or Exactly as Defined Below. This makes it possible to choose a profile in the Input Device Conditions box. Since all content will now be in the CMYK space from the refine process template, there are no longer independent options for RGB and CMYK graphics and images: only one source or device link can be specified. The rendering intent is entered in the Rendering Intent field. The destination profile (usually the proofer profile) goes in the Device Condition box. Once again, this destination profile will be ignored if a device link has been chosen for the source.
By following the steps described, the workflow application user can produce predictable and accurate colour reproduction results through both the file processing and output steps of any print production workflow.
In essence, colour management is a translation process. The tools and precepts of colour science give us the lexicon to populate the dictionaries of each of the devices we use in our graphic technology workflow. Our device’s dictionary is the colour profile we generate with the help of a colour target, an instrument to measure the target and colour management software to process the measurements.
Since the device’s dictionary (colour profile) defines the specific RGB or CMYK values of that device in a device independent colour language, we have a conduit between any two devices. By aligning the device independent values between the two devices, we can translate from the device dependent values of our source to the new RGB or CMYK values at our destination to provide a common colour experience.
With an understanding of how to configure our standard graphic arts applications to effectively use our device’s profiles, we can make successful colour translations happen in our real-world production scenarios. Successful colour translations provide the improved productivity that comes with predictability and confidence in process control.
Questions to consider after completing this chapter:
1. What are the three components of the colour event?
2. How does visible light relate to the electromagnetic spectrum?
3. How does the zone theory of optical systems resolve the apparent incompatibility of trichromacy and opponency?
4. What is the importance of the Lab colour space?
5. Define Delta E.
6. Which of RGB, CMYK, and Lab are device dependent colour spaces, and which are device independent colour spaces?
7. How does a spectrophotometer work?
8. What is the purpose of a colour target?
9. What Mac OS utility can be used to view colour profiles?
10. How does a measurement file relate to an ICC profile?
11. What is the vcgt tag in a display profile?
12. Name the common characteristics of output, display, and input profiles.
13. What characteristics are unique to each of output, display, and input profiles?
14. Working Spaces serve what purpose in Adobe’s Color Setup dialog?
15. Which Acrobat tool allows for the merging of spot colours?
16. How do we make a device link profile?
17. Where is a device link profile used in Prinergy’s output process template?
Suggested Readings
Fraser, B., Murphy, C., & Bunting, F. (2004). Real world color management (2nd ed.). Berkeley, CA: Peachpit Press.
Sharma, A. (2003). Understanding color management (1st ed.). Clifton Park, NY: Course Technology.
4.04: Introduction
Learning Objectives
• Define the relationship between the observer, the illuminant, and the object in colour appearance
• Describe the physics of light and colour
• Recognize how the eye-brain system affects colour perception
• Explain the components of the CIE colorimetric system, specifically the Lab colour space and Delta E measurements
• Differentiate between device specific and device independent colour models
• Describe the role of colour management in achieving consistent appearance between proofing cycles and final printed production
• Define an ICC profile and describe its application in digital imaging
• Use a spectrophotometer to capture colour data from industry standard targets as the first step in profile creation
• Create ICC colour profiles for standard colour transformations
• Calibrate, profile, and colour manage an LCD monitor
• Describe the application of display, input, and output colour profiles in the electronic prepress workflow
• Apply colour profiles in a variety of industry-standard applications to achieve desirable results for graphic reproduction
• Combine source and destination profiles into a single device link profile for device specific colour transformations
• Apply colour management with ICC or device link profiles in the colour control module of Prinergy’s Process Plan for proofing output
• Combine surplus spot colours into a single separation for successful printing-plate production
A knowledgeable application of a colour-managed workflow is required for the successful and predictable generation of all colour content in print production and related graphic technologies.
This process requires familiarity with the fundamental colour science that drives the practical steps of colour profile creation and colour profile use in our industry-standard software applications. From colour science, we can progress to an examination of the tools and detailed steps required for profile creation and successful profile use. | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/04%3A_Color_Management_in_the_Graphic_Technologies/4.03%3A_4.1-2_Summary.txt |
The Colour Event
The first challenge in dealing with colour in graphic reproduction is to learn to think of colour as an event rather than as an attribute or characteristic of a particular object.
Colour is an event because it requires the participation of three components at a particular point in time to take place. In addition to the object, we require a light source and an observer. Only with the interaction of these three things — object, light, and observer — can we have a colour event or experience.
We need some help from three branches of science, physics, physiology and psychology, to understand how the object, light, and observer contribute to the colour event. If you like memory aids, you can use the acronym POLO to remind you of the three science P’s and the object, light, and observer.
Object
The object and light fall under the domain of physics, while we need both physiology and psychology to describe the observer’s role in the colour event.
The object’s role is to interact with light, and the object can either reflect light back from its surface or transmit light through itself. Reflectance and transmission are the two potential interactions. The majority of objects are opaque, so most of the time we are dealing with the reflection of light. If an object is semi-opaque, and transmits a portion of light, we refer to it as translucent.
Light
Visible light is a tiny sliver of the total electromagnetic spectrum. The electromagnetic spectrum contains all forms of energy, ranging from kilometre-long radio waves at one end and progressing in shortening wavelengths down through microwaves, infrared waves, ultraviolet waves, X-rays, and finally, gamma waves with wavelengths of a subatomic dimension (see Figure 4.1).
Visible light is nestled in-between the infrared and ultraviolet range (see Figure 4.2). The progression from longest to shortest wavelength is from red (following infrared) to violet (preceding ultraviolet) in the 700 to 380 nanometre (billionths of a metre) wavelength distribution.
Figure 4.1 The Electromagnetic Spectrum
Figure 4.2 Visible Spectrum
We describe the temperature (relative warmness to coolness) of light in kelvins (K). Typical daylight ranges from 5000 K to 6500 K. We use the labels D50 and D65 to indicate daylight-viewing conditions at these temperature points.
Observer
The greatest complexity of the colour event occurs in the interaction with the observer. The science of physiology, the study of the human body’s functions, provides half the story. Psychology, which provides insights about the function of the mind, completes the tale.
We begin with how our eyes, our optic systems, respond to light. Trichromacy and opponency are the key concepts.
Trichromacy
Figure 4.3 Rods and cones (adapted by Ken Jeffery)
We call it ‘visible’ light because it is the portion of the electromagnetic spectrum that our eyes are sensitive to. The two types of receptors in our eyes are cones and rods (see Figure 4.3). The cones respond to colour and the rods come into play in low-light situations when we see only varying shades of grey. There are three types of cones, each sensitive to approximately one-third of the visible spectrum. We characterize those segments of the spectrum as red, green, and blue, and this three-colour or trichromatic response by the cones is where all colour experience begins. Every colour we perceive comes from mixing varying volumes of the red, green, and blue signals from the three types of cones in our eyes.
The Additive Primaries
We refer to the red, green, and blue colour set (RGB) as the additive primaries. When we combine or add all three of these, we get the entire spectrum and, thus, white light. This is the primary colour set involved whenever we talk about the transmission of light, such as the display on a computer monitor, a tablet, or from a projector. For this reason, red, green, and blue are also referred to as the transmissive primaries.
The Subtractive Primaries
What happens when we project two of the three additive primaries on top of each other? This is the same as removing or subtracting one of the additive primaries from white light. Let’s start with red and green. Though not at all intuitive, if you have any experience with mixing paint or inks, the combination of red and green light produces yellow. Remember that we are adding light to light, so the production of a brighter colour is to be expected. Continuing on: combining green and blue gives us a light blue that we call cyan, while the combination of red and blue produces magenta.
Since each of these colours is produced by subtracting one of the additive primaries from the full complement of white light, we refer to this colour set of cyan, magenta, and yellow (CMY) as the subtractive primaries. Each of the subtractive primaries acts as a filter for its complementary colour in the additive primary colour set. Cyan absorbs all red light, reflecting only green and blue. Magenta absorbs all green light, returning only red and blue; while yellow absorbs all blue light and reflects back only red and green. What colour would you see if you shone green light on a magenta patch?
Just as we can produce any colour sensation in the transmission of light by mixing the appropriate quantities of red, green, and blue, we can produce the corresponding colour sensation when we put ink on paper by absorbing the necessary portions of the visible spectrum so that only the required amounts of red, green, and blue are reflected back. This is how cyan, magenta, and yellow work as our primary colours in the printing environment, and why we also call them the reflective primaries.
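The complement relationship between the additive and subtractive primary sets can be captured in a couple of lines of code. The Python sketch below shows only the idealized, device-blind relationship, with channels scaled 0.0 to 1.0; real conversions must go through colour profiles, for all of the reasons explored later in this chapter.

```python
# Each subtractive primary is white light minus one additive primary.

def rgb_to_cmy(r, g, b):
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    return (1.0 - c, 1.0 - m, 1.0 - y)

# Pure red is full magenta plus full yellow, with no cyan:
print(rgb_to_cmy(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)
```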
Opponency
The second half of the role that our human physiology plays in the observer’s part of the colour event is the concept of opponency. Our eyes’ tristimulus response (a response to the red, green, and blue portions of incoming light) is the input, but the interpretation occurs when we map that input to a location in a colour space determined by three axes of opposing sensations. We have a built-in colour map where we define our colour perception by identifying the perceived colour based on its degree of greenness to redness, blueness to yellowness, and darkness to lightness.
These three pairs of opposites — green-red, blue-yellow, dark-light — are the fundamental guide posts we use to position any colour we perceive on our internal colour map. These opponent colour pairs are exclusive colour entities, occupying opposing ends of the range of our interpretation. Unlike a yellowish-orange or a reddish-purple, we cannot imagine a colour having the properties of red and green or blue and yellow at the same time.
Lab Colour Space
Once the opponent nature of colour interpretation was understood, colour scientists were able to create a model colour space based on the opposing pairs. This is the Lab colour space (see Figure 4.4). The Lab variation of interest to us is officially called CIELAB, and all references in this textbook to Lab will mean CIELAB. Additionally, references to L, a, and b in this textbook are equivalent to the L*, a*, and b* units of the CIELAB colour space. Each of the opposing pairs provides one axis of this three-dimensional colour space. L is the axis for darkness to lightness; a is the axis for greenness to redness; and b is the axis for blueness to yellowness. By providing a value for each of the L, a, and b attributes, a colour is accurately and uniquely located in the colour space. The tremendous utility of the Lab colour space is that it allows for the mathematical description of a colour in a non-ambiguous and meaningful way.
Psychology of Colour Perception
We come to the last of our three science P’s: psychology. After the trichromatic response is passed to opponent interpretation in the physiology of our optic systems, the final engagement of colour perception occurs in our minds. This interaction complicates and potentially confounds our objective measurements, so it is critical to be aware of the typical issues that the filter of our mind brings to the arena of colour perception.
Colour Constancy
Colour constancy is how our mind adjusts our colour perception to discount or remove the effects of an overall colour cast due to a coloured illuminant. If we see someone wearing a yellow sweater illuminated by a bluish cast of light, we still ‘see’ a yellow sweater, even though we have no trouble identifying the colour in the sweater as green if it is isolated from the scene. In our mind, we remove the blue constituents of all colours in the scene that we assume are coming from the tint in the light source. This behaviour is also known as chromatic adaptation.
The effect of adjacency is very similar to colour constancy. A colour placed next to a light colour appears darker than when that same colour is placed next to a dark colour (see examples in Figures 4.5 and 4.6). We make adjustments to our interpretation based on our assessment of the environment.
Figure 4.5 Both greens are the same colour
Figure 4.6 Both reds are the same colour
The effect of colour constancy provides a very important lesson in judging our success in colour matching: it is more important to preserve the overall colour relationships in our image than to focus on individual colour accuracy.
Memory Colours
In our mind’s eye, not all colours are created equal. Due to their historical importance to our survival, we pay special attention to certain colours. Flesh tones, the blue of the sky, and the greens of grass are known as memory colours due to the additional weight they have in our hierarchy of colour.
We need to give these memory colours a priority when we evaluate our colour management efforts. If these key colours aren’t right, then everything will look wrong.
The significant impact of our mind’s contribution to colour perception enforces the requirement to take colour matching beyond the raw numbers we can extract from the physics and physiology of light’s interaction with an object and our optic systems. The psychological components such as colour constancy and memory colours can only be accommodated by human intervention in a colour management system.
Media Attributions
• 1414_Rods_and_Cones_modified-01-01 by Kaidor, adapted by Ken Jeffrey © CC BY-SA (Attribution ShareAlike)
• Lab colour space by Ken Jeffrey
• green-yellow by Ken Jeffrey
• colours in the middle by Ken Jeffrey
We measure light to provide the data needed to manage colour in a graphic production environment. There are three ways to measure light and three corresponding tools available to take those measurements: densitometer, colorimeter, and spectrophotometer.
Densitometer
To measure only the volume of light, we use a densitometer. The densitometer provides a known volume of light and then records what remainder of that light is returned to the device. A transmissive densitometer records how much light gets through a semi-transparent material such as camera film, and a reflective densitometer measures how much light has bounced back. The majority of densitometers in the print environment are reflective.
How does measuring the volume of light help us? Maintaining a consistent thickness of ink in printing is a very good way to control consistency and quality, and measuring the amount of light absorbed by the ink is a very accurate indicator of ink thickness.
Since our eyes have to function over a very wide range of brightness, we have a non-linear response to increasing volumes of light. That means it takes approximately 10 times the amount of light for us to experience one step in our perception of brightness. To match this behaviour of our eyes, the density scale is based on powers of 10, with each larger whole number representing one-tenth the volume of light of the preceding number. A density reading of 1.0 means that 1/10 of the original light has been reflected back. This is a typical reading for a process Yellow patch in offset lithographic printing. A density reading of 2.0 indicates that 1/100 of the original light is returned, while a density reading of 3.0 shows only 1/1000 coming back. Black ink is usually in the 1.7 density range, with cyan and magenta at 1.3 to 1.4.
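The density scale described above is simply a base-10 logarithm of the fraction of light returned, which a short Python sketch can confirm:

```python
import math

# Density is the negative base-10 logarithm of the fraction of the
# original light that is reflected back, matching the eye's roughly
# logarithmic response to brightness.

def density(reflectance):
    return -math.log10(reflectance)

print(density(0.10))   # 1.0 -- typical process yellow
print(density(0.001))  # 3.0 -- only 1/1000 of the light returned
```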
Scanning or hand-held densitometers are typically found in the viewing station by a press. Densities are recorded when the printed sample matches the desired result and then ongoing adjustments to maintain the target densities keep the printing on target.
Colorimeter
Colorimeters mimic the three-colour response of our eyes by using red, green, and blue filters to measure the amount of light present in each third of the spectrum. They have built-in software to calculate Lab values based on what volume of red, green, and blue is returned from a sample. Colorimeters are particularly useful for calibrating and profiling monitors. Some well-known examples of colorimeters are the X-Rite ColorMunki or i1 Display devices.
Spectrophotometer
Spectrophotometers measure slices of the spectrum to produce a spectral ‘map’ of the light reflected back from a sample. Spectrophotometers are typically more expensive than densitometers and colorimeters but are employed because they can more accurately do the jobs of both devices. They work by recording the light at specific wavelengths over the wavelength range of visible light, and then by converting this spectral data to colorimetric and densitometric values.
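As a rough illustration of how that conversion works, the sketch below weights each spectral slice by the illuminant power and the CIE standard observer functions, then sums the results to produce XYZ values. The four-sample arrays are hypothetical stand-ins; a real calculation uses tabulated CIE data for every 10 nm slice from 380 to 730 nm.

```python
# Hypothetical four-slice spectral data, for illustration only.
illuminant  = [0.8, 1.0, 1.1, 0.9]   # relative power of the light source
reflectance = [0.2, 0.5, 0.9, 0.4]   # fraction returned, read by the spectro
x_bar = [0.1, 0.3, 0.8, 0.6]         # standard observer weighting functions
y_bar = [0.0, 0.2, 0.9, 0.5]
z_bar = [0.5, 0.9, 0.1, 0.0]

def weighted_sum(observer):
    return sum(s * r * o for s, r, o in zip(illuminant, reflectance, observer))

# Normalize so that a perfect white (reflectance 1.0 everywhere) gives Y = 100.
k = 100.0 / sum(s * o for s, o in zip(illuminant, y_bar))
X, Y, Z = (k * weighted_sum(obs) for obs in (x_bar, y_bar, z_bar))
print(X, Y, Z)
```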
While we are talking about measuring spectral values, it is important to note that we do not depend on identical spectral values to achieve matching colour experiences. Different spectral values can trigger the same volume of colour signals in our optic system and lead to matching colour perception. In fact, we depend on this phenomenon in graphic production in order for proofing devices to simulate the colour output of a printing press or for any two devices to be colour aligned. The ability of the CMYK (cyan, magenta, yellow, black) process colour set to mimic most of the colours in the world is also based on the fact that we can achieve a colorimetric match without having identical spectral values.
The CIE (Commission Internationale d’Eclairage or International Commission on Light) is a scientific body formed by colour scientists in the 1930s that has provided much of the fundamental colour knowledge we possess today. Three core definitions provided by the CIE are the standard observer, the Lab colour space, and Delta E measurements. The latter two are particularly important for colour management.
The Lab Colour Space Revisited
In section 4.2, we mentioned the Lab colour space as a natural outgrowth of understanding the function of opponency in human vision. It comprises three axes: L represents darkness to lightness, with values ranging from 0 to 100; a represents greenness to redness, with values of -128 to +127; and b represents blueness to yellowness, also with values from -128 to +127.
Notice that there are no negative values on the L axis as we can’t have less than zero light, which describes absolute darkness. The L axis is considered achromatic meaning without colour. Here we are dealing with the volume rather than the kind of light. In contrast, the a and b axes are chromatic, describing the colour character and the type of light.
The standard two-dimensional depiction is of only the a and b axes, with a as the horizontal axis and b as the vertical axis. This places red to the right, green to the left, blue at the bottom, and yellow at the top. If you found our previous mnemonic aid of POLO helpful, you can use RGBY to remember the colour pairs. For correct placement, remember that red is on the right, and blue is on the bottom.
Colours are more neutral and grey toward the centre of the colour space, along the L axis. Imagine that equivalent values of the opposing colours are cancelling each other out, reducing the saturation and intensity of those colours. The most saturated colours are at the extremes of the a and b axes, in both the large positive and negative numbers. For a visual depiction of the Lab colour space, open the ColorSync application found in the Utilities folder of any Macintosh computer and view one of the default profiles such as Adobe RGB.
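The observation that colours grow more neutral toward the central L axis can be quantified: a colour’s chroma is its distance from that axis in the a-b plane, and its hue is the angle of its direction. These are the C and h components of the standard LCh representation of Lab values, sketched below in Python:

```python
import math

def chroma_hue(a, b):
    chroma = math.hypot(a, b)                    # distance from the neutral axis
    hue = math.degrees(math.atan2(b, a)) % 360   # 0 = red, 90 = yellow, 180 = green, 270 = blue
    return chroma, hue

print(chroma_hue(0.5, -0.5))  # near-neutral: chroma under 1
print(chroma_hue(60, 60))     # saturated orange: large chroma, hue around 45 degrees
```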
Now it’s time to explore the practical application of this colour map for the comparative analysis of colour samples. We can’t make any progress in evaluating our success in colour matching unless we have a frame of reference, some yardstick to determine how much one colour sample is similar or different from another. That yardstick is the Delta E measurement.
Delta E
Figure 4.7 Lab Colour Space
Delta, the fourth letter of the Greek alphabet and symbolized as a triangle, is used in science to indicate difference. Delta E is the difference between two colours designated as two points in the Lab colour space. With values assigned to each of the L, a, and b attributes of two colours, we can use simple geometry to calculate the distance between their two placements in the Lab colour space (see Figure 4.7).
How do we do that? It looks a lot like the formula used to determine the long side of a right triangle that you may remember from high school geometry. We square the difference between each of the L, a, and b values; add them all together; and take the square root of that sum. Written out as a formula it looks a little more daunting: $\Delta E = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$
Let’s try a simple example to see what we get. Colour 1 has a Lab value of 51, 2, 2 and Colour 2 is 50, 0, 0 (right at the centre of the colour space):
$(L_1 - L_2)^2 = (51 - 50)^2 = 1$, so our first value is 1.
$(a_1 - a_2)^2 = (2 - 0)^2 = 4$, so our second value is 4.
$(b_1 - b_2)^2 = (2 - 0)^2 = 4$, so the third value is also 4.
Add them together: $1 + 4 + 4 = 9$; and take the square root: $\sqrt{9} = 3$.
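For readers who prefer code to algebra, a few lines of Python reproduce the worked example:

```python
import math

def delta_e(lab1, lab2):
    """Straight-line distance between two points in the Lab colour space."""
    return math.sqrt(sum((v1 - v2) ** 2 for v1, v2 in zip(lab1, lab2)))

print(delta_e((51, 2, 2), (50, 0, 0)))  # 3.0
```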
The Delta E (difference) between our two colours is 3. Could we detect that difference if we were viewing those two colours? Probably just barely. The minimum Delta E for seeing a difference is about 2. Smaller differences can normally be detected in neutral colours (such as our samples), while more saturated colours require a slightly larger Delta E. A Delta E of 4 is the upper threshold for acceptable machine repeatability or consistency.
Delta E provides a value indicating the overall difference between two colours. It does not provide any colour-related data such as which colour is lighter/darker, redder/greener, more blue/more yellow. To understand how the colours are different, we have to evaluate the comparative L, a, and b differences independently.
Experimentation over time has shown that conventional Delta E is only about 75% accurate in representing the difference we see between two colours. Delta E numbers exaggerate the differences in yellows and compress our perceptual distance between blues. To improve on the representation of our interpretation of colour difference, scientists have produced a modified formula known as Delta E(94).
Delta E(94)
Delta E(94) is a modified formula that provides about 95% accuracy in correlation to human perception of colour differences. Here it is in all its splendour: $\Delta E_{94} = \sqrt{\left(\frac{\Delta L}{k_L S_L}\right)^2 + \left(\frac{\Delta C}{k_C S_C}\right)^2 + \left(\frac{\Delta H}{k_H S_H}\right)^2}$
where: $\Delta C = C_1 - C_2$ and $\Delta H = \sqrt{\Delta a^2 + \Delta b^2 - \Delta C^2}$, with $C_i = \sqrt{a_i^2 + b_i^2}$; $S_L = 1$; $S_C = 1 + K_1 C_1$; $S_H = 1 + K_2 C_1$
$k_L = k_C = k_H = 1$, $K_1 = 0.045$, and $K_2 = 0.015$ (for reference conditions)
You can see that it is still the comparison of three values: L, C, and H, where C and H are produced by applying modifying factors to the original Lab values to compensate for perceptual distortions in the colour space. Each difference is squared and the root taken of the sum of the squares, just as in the original Delta E formula.
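A Python sketch of Delta E(94), using the reference-condition factors given above, shows that the structure really is the familiar root of summed, squared differences with scaling applied:

```python
import math

def delta_e94(lab1, lab2, k1=0.045, k2=0.015):
    # Reference (graphic arts) conditions: kL = kC = kH = 1.
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    c1 = math.hypot(lab1[1], lab1[2])
    c2 = math.hypot(lab2[1], lab2[2])
    dC = c1 - c2
    dH_sq = max(da ** 2 + db ** 2 - dC ** 2, 0.0)  # guard against rounding below zero
    sC = 1 + k1 * c1
    sH = 1 + k2 * c1
    return math.sqrt(dL ** 2 + (dC / sC) ** 2 + dH_sq / sH ** 2)

# The earlier example pair is closer in Delta E(94) terms than in plain Delta E:
print(delta_e94((51, 2, 2), (50, 0, 0)))  # roughly 2.7, versus 3.0 for Delta E
```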
There is an additional refinement in Delta E(2000) where further adjustments are applied to blue and neutral colours and compensations for lightness, chroma, and hue. This is a much smaller correction than the change from Delta E to Delta E(94).
The good news is that you don’t need to understand or remember any of the details of these equations. Just remember that these are real numbers measuring actual plotted distances between colour samples. Delta E represents the distance between two points in the Lab colour space. Delta E(94) and Delta E(2000) are enhancements, providing improved numbers that more closely match our perception of colour differences.
Media Attributions
• Delta E in Lab Colour Space by Ken Jeffrey
Armed with our fundamental concepts in colour theory, we can apply these principles to the physical process of colour management. The practical application to print production requires a procedure for measurement, colour profile generation, and the correct use of profiles in the manufacturing process. Let’s begin with measurement and discuss the working components of a standard graphic arts spectrophotometer and the colour charts we would use it with.
X-Rite i-One (i1) Pro Spectrophotometer
The i1 Pro is one of the most common hand-held spectrophotometers used in the graphic reproduction industry. It can also be mounted in the iO base for automated scanning. As described in section 4.3, the spectro works by recording spectral data from incremental slices of the range of wavelengths included in visible light. To do this properly, the spectro must calibrate its white point to establish the baseline for interpretation. It does this by reading the white tile supplied in the baseplate that comes with the spectro. Each baseplate is uniquely matched to a specific spectrophotometer and marked with a serial number that corresponds to its spectro. Make sure you confirm that you have the correct baseplate that matches the serial number on your spectro. When used commercially, the manufacturer recommends that a spectro be returned for factory recalibration every two years. The packaging will include a certificate indicating the expiry date for the current calibration.
The spectro may also be identified (on both the serial number tag and surrounding the light-emitting aperture) as a UV-cut. This indicates it has an ultraviolet filter, which acts to remove the impact of fluorescence from optical paper brighteners. If you have access to more than one spectro device, be sure that any related measurements are done consistently either with or without the filter. Best practice is to use exactly the same instrument for any series of measurements.
The USB cable provides power to the spectro and so should be plugged directly into the computer and not into a peripheral such as a keyboard. Additional typical accessories for the spectro include a weighted strap and screw-in base for hanging the spectro against the screen of a monitor and a proof mount base with sliding rail for the spectro to read printed colour targets.
Colour Charts or Targets
You will typically be dependent on the colour management software application that you have chosen to produce a PDF file of the colour chart that your spectro can read. While in the software, you select a reading device (such as the X-Rite i1 Pro or i1 iO) from the list of devices that the software supports and then provide the dimensions for your output device. The choice of reading device will determine the size of colour patches and page format, and the size of output will define how many pages are ganged to a sheet. When prompted, name the PDF with a clear identifier (output device and date or equivalent) and save it.
Once you have the PDF, use it for any direct output, such as a proofer, or to make a plate for your printing press so that the colour chart can be printed. In all cases, it is critical that no colour management be applied to the output device for the production of the chart so that the natural base state of the device is captured in the colour target produced. It is also essential that the proofer or press be in an optimal operating state so that the output is an accurate reflection of the device’s capabilities. This may require a calibration process for a proofer or standard maintenance procedure on the press.
There are several colour chart standards that you should be aware of. The chart produced by your colour management software will likely be one of these or a slight variation thereof. The original standard is the IT8.7/3, composed of 928 colour patches. This was followed by the ECI 2002, which has 1,485 colour samples. There is now an update to the IT8, the IT8.7/4, which has extended the colour sampling to 1,617 patches. The larger number of patches provides a more detailed snapshot of the colour capability of the device that you are profiling. Of course, it takes more time to read the greater number of samples, so the newer sets are more manageable with automated reading devices such as the iO table. If you are reading with a hand-held spectro, choose the IT8.7/3 or smaller patch set if it is offered by your colour management software. The other trade-off between larger and smaller numbers of colour patches lies in the smoothness of colour transitions. Fewer data points mean less interpolation and potentially smoother curves in colour modulation. Be aware that it will require some experimentation on a device-by-device basis to determine the ideal intersection of accuracy and smooth interpretation for your measurement data.
The colour charts come in two standard patterns of organization: random and visual. Random is exactly that, and comprises the vast majority of charts that are produced for measuring. The colour swatches are distributed to provide optimal contrast between adjacent patches to aid the spectrophotometer that is reading them in distinguishing where one patch ends and the next begins. Having the colours scattered over the sheet also generates relatively even distribution of cyan, magenta, yellow, and black ink which leads to better performance from the press. The visual pattern places the colour blocks in a logical and progressive sequence so it’s easy to see the chromatic ranges that are covered. Some scanning spectrophotometers can read the visual arrangement.
Measuring Your Colour Chart
Once you have used the PDF to generate the printed sample, you have to measure the sample. For charts from proofers, it is critical to wait a minimum of 30 to 90 minutes after the sample has been produced before measuring in order for the colour to stabilize. To create a measurement file, you need three things: colour management software; a target from that software; and a measuring instrument supported by the software. After connecting your measuring device to the computer (remember to connect directly to a USB port on the computer, not to a peripheral such as a keyboard), enter the measuring step of your colour management software. You will need to point to the correct measurement chart that was used, which can be easily identified if you have named it distinctively, and confirm the right measuring device is indicated. If you are getting errors once you begin measuring and can’t proceed, the typical culprit is the selection of an incorrect colour chart or incorrect measuring device.
When you begin the measurement process, there are a few option prompts you may have to respond to. Some software allows for the averaging of multiple press sheets to produce an improved characterization of the device. This software can scan the colour bars placed at the bottom of the sheet and indicate which sheets are the best candidates for measuring. If you have chosen to do multiple sheets, then you will have to record the sheet number on each of the pages that you cut from the full press sheet in order to enter the sheet number correctly as you carry on measuring.
Proofing devices are stable enough that it is not necessary to average multiple sheets. You may still be cutting out single pages from a larger sheet (depending on the size of your proofing device) and should label the individual pages to stay organized. You can skip any of the prompts that deal with choosing multi-sheet options.
Once past the initial options, you will be prompted to calibrate the measuring instrument. Make sure the i1 Pro is correctly seated on its base plate and push the button. After successful calibration, you will be instructed to mount page 1 of your colour chart and begin measuring. For hand-held measuring with the i1 Pro, use the white plastic proof mounting base it came with. Secure your page under the spring clip at the top of the mounting base, positioning it so that the sliding clear plastic guide the spectro rides on can reach the first row of colour patches. The clear plastic guide has a horizontal opening that the head of the spectro nestles into to traverse the horizontal row of colour swatches. There is a cradle for the base of the spectro to rest on that slides horizontally across the guide. The spectro must begin on white paper to the left of the colour patch row, and finish on white paper to the right of the row. If your target has a semi-circle printed to the left of its colour rows, use it to position the chart under the scanning opening of the plastic guide. The left semi-circle on the opening in the guide should align with the printed semi-circle on the chart. If there is no printed semi-circle on the chart, allow ¼ – ½ inch of white space showing in the opening to the left of the first colour patch.
With your chart properly placed in the mounting base, position the spectro all the way to the left on the plastic guide with its base in the sliding cradle and reading aperture in the cut-away channel. Press the button on the left side of the spectro and keep it pressed. Wait one second while the white of the paper is read (there may be a beep to indicate this is done), slide the spectro smoothly and evenly to the right until you reach white paper past the colour patches, and then release the button. You should be rewarded with a prompt to scan row B. Slide the entire plastic guide down one row, move the spectro all the way back to the left, and do it all over again! If you receive an error, clear the prompt and try scanning the same row again. Success at hand-held scanning comes with a little bit of practice and a Zen approach. Your degree of success will be inversely proportional to the amount of anxiety you feel. The techniques that contribute to getting it right are smooth and even passage of the spectro; consistent pressure (make sure you are not pushing down on the head); white paper at either end; and keeping the button pressed from start to finish.
After you receive a couple of errors on the same row, the software may switch to individual patch reading. In this case, the prompt will now say “Scan patch B1” instead of the previous “Scan row B.” You must position the read head over each individual patch and press the spectro’s button when prompted on screen. After completing one row of individual patches, you will be returned to full row scanning mode. This procedure allows the measurement process to go forward when the software is having trouble detecting colour boundaries in a particular row.
Having completed all rows on all pages (which may take some time), you will be prompted to save the measurement file. Some colour management software will take you directly to the additional settings required to define a profile and then proceed with the profile creation.
It’s important to remember that the measurement file is not a profile! It provides the basic information about the colour behaviour of the device but needs some critical additional instructions and the processing of the colour management software in order to produce a colour profile that can be used for colour transformations. The measurement file is a snapshot in time of device behaviour and depends on the appropriate calibration of the device prior to the chart’s creation for its accuracy.
Content of the Measurement File
Figure 4.8 Example of measurement file
We can open the measurement file in a text editor and examine its structure to understand what information it contains (see Figure 4.8). The first section is header info and provides some basic information about the file. This is followed by a key to the colour data that follows. The actual colour information comprises a row and column ID to identify the specific colour patch and then the numerical data that goes with the patch.
The key information is contained in the second section where the data format is laid out. This indicates what colour spaces are captured in the data for each colour patch. The first four values are CMYK; the next three are XYZ (a device independent colour space like Lab); the next three are Lab; then RGB; and finally there is a wavelength measurement for each of the 10 nanometer slices of the visible spectrum (380-730 nanometers; violet to red).
The CMYK values for each patch are fixed in the software, coming from either the IT8 or ECI specification. These CMYK values have to remain constant because they are our point of reference for all devices. When we read the patch with the spectro, it uses the spectral information (each of the 10 nanometer slices) to calculate the XYZ and Lab values that describe the colour appearance of the colour swatch. By matching the measured Lab value to the predetermined CMYK value supplied in the original pdf, we have the raw material to build a translation table describing the device’s colour behaviour. Once we have this look up table (LUT), we can predict the Lab equivalent for any CMYK value produced on the device. Correspondingly, when we are given the Lab values from another device that we want to match, we can produce the appropriate CMYK values to provide that colour appearance.
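A toy Python sketch illustrates the look up table idea. The three patch values below are hypothetical; a real profile holds on the order of a thousand measured patches and interpolates between them rather than snapping to the nearest match:

```python
# Measurement-file pairs: known CMYK patch values -> measured Lab values.
measured = {
    (50, 40, 40, 0): (60.0, 2.0, 1.0),
    (60, 45, 45, 0): (55.0, 1.0, 0.5),
    (0, 0, 0, 90):   (20.0, 0.5, 0.5),
}

def lab_for_cmyk(cmyk):
    # Forward direction: predict the Lab appearance of a device CMYK value.
    return measured[cmyk]

def cmyk_for_lab(target):
    # Inverse direction: find the device CMYK whose measured Lab is nearest.
    return min(measured, key=lambda c: sum((v - t) ** 2 for v, t in zip(measured[c], target)))

print(cmyk_for_lab((56.0, 1.2, 0.4)))  # -> (60, 45, 45, 0)
```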
Device Dependent Versus Device Independent Colour Spaces
We’ve defined CMYK, RGB, and Lab colour spaces, and we’ve seen how the first step in colour profiling is to measure output from a device to define a relationship between the CMYK device dependent and the Lab device independent numbers. Establishing this relationship between device dependent (RGB or CMYK) values and device independent (Lab) values is a fundamental component of the colour management process.
We call the CMYK and RGB colour spaces device dependent because the results we get from specific RGB or CMYK values depend entirely on the device producing them. When we tell a device to produce “50% cyan,” we are saying, “Give me half of as much cyan as you can produce.” Since the capacity and colour appearance of cyan from any two devices will not be the same, it should not be surprising that a specification of “half that much” will also produce colour events that do not match. Similarly, the RGB values on a monitor or projector simply specify some proportion of the full red, green, and blue signals that the device can produce. Since there is no common starting point between two monitors in terms of what a full red, green, or blue signal is, then providing the same RGB values to those two monitors will in no way provide an opportunity to generate the same colour appearance.
For RGB devices, RGB values simply identify the volume of signal for each channel. For printers, proofers, and presses the CMYK percentages dictate what proportion of pigments are deposited. The numbers associated with specific RGB and CMYK colours only have colour meaning when attached to a particular device. There is no inherent consistency between any two devices based on providing the same RGB or CMYK values.
So if the device dependent colour spaces don’t give us any consistency or control of colour between devices, where can we turn? Enter the device independent colour spaces of Lab and XYZ. We spoke of Lab earlier as the three-dimensional model colour science has produced to map the way we perceive colour. The Lab colour space is device independent because we do not depend on the values associated with the output of a specific device to enumerate the colour. Lab values are calculated from spectral readings in a controlled environment so that they define a consistent colour experience for any circumstance and from any device. Device independent colour is the Rosetta stone of colour management that allows us to translate from the unique dialect of colour behaviour on one device into a universal language and then back to the specific colour dialect of a different device and maintain the colour meaning.
The device independent colour spaces are known as profile connection spaces (PCS) since they provide this gateway service between the device dependent colour spaces of individual devices. Lab and XYZ are the two PCS allowed in the ICC specification that defines profile creation. From the examination of the measurement file, we can see how it provides the first step in establishing and tabulating a relationship between the device dependent values of a particular device’s output and the corresponding device independent values that characterize the actual colour appearance associated with the output.
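Putting the pieces together, a colour translation through the PCS is a forward lookup on the source device chained to an inverse lookup on the destination device. The Python sketch below uses hypothetical patch tables and a nearest-match inverse in place of a real CMM’s interpolation and rendering intent handling:

```python
class ToyProfile:
    # Stand-in for a device profile: a table of CMYK -> measured Lab pairs.
    def __init__(self, table):
        self.table = table

    def to_lab(self, cmyk):
        return self.table[cmyk]            # device dependent -> device independent

    def from_lab(self, lab):
        return min(self.table,             # device independent -> device dependent
                   key=lambda c: sum((v - t) ** 2 for v, t in zip(self.table[c], lab)))

press = ToyProfile({(60, 45, 45, 0): (55.0, 1.0, 0.5), (50, 40, 40, 0): (60.0, 2.0, 1.0)})
proofer = ToyProfile({(58, 44, 46, 0): (55.2, 0.9, 0.6), (48, 39, 41, 0): (60.1, 1.8, 1.1)})

# Reproduce the press colour (60, 45, 45, 0) on the proofer:
print(proofer.from_lab(press.to_lab((60, 45, 45, 0))))  # -> (58, 44, 46, 0)
```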
We’ve talked about the measurement file as the gateway to our profile creation, so let’s see what steps remain to get us there.
Figure 4.9 What is a profile
The measurement file contains the raw data of the relationship between the device dependent colour space of the device and the device independent colour space of the profile connection spaces (PCS). There are two additional parameters we have to provide along with this data in order for the colour management software to produce an output profile: total ink limit and black generation (see Figure 4.9).
Total Ink Limit
Total ink limit is a percentage, usually between 240% and 400%, that represents the upper threshold that the sum of the four process inks can total. Why would this change? Each device, in combination with its particular substrates (papers) and ink sets, has a different capability in terms of how much ink can be deposited. A coldset web newspaper press might be close to the lower end of 240%, while a high-end inkjet can support 100% from each of cyan, magenta, yellow, and black, and have a 400% rating. This total ink limit setting affects the colour distribution in darker colours. As the total ink limit decreases, dark colours require a higher proportion of black ink relative to CMY to stay within the ink limit percentage. With an ink limit of 360%, when black is at 90%, we still have room left for 90% of cyan, magenta, and yellow (4 x 90 = 360). But if the ink limit is 240%, with black at 90%, we can’t exceed 150% for the sum of the other three (150 + 90 = 240). Since cyan leads in a grey balance mix, we might have 60% cyan, 45% magenta, and 45% yellow (60 + 45 + 45 = 150). As the ink limit drops, black has to do more of the work of providing darkness for any heavy colour mix.
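The ink limit arithmetic above is easy to express as a quick Python check:

```python
def within_ink_limit(c, m, y, k, limit=240):
    return c + m + y + k <= limit

print(within_ink_limit(90, 90, 90, 90, limit=360))  # True: exactly 360 total
print(within_ink_limit(60, 45, 45, 90, limit=240))  # True: exactly 240 total
print(within_ink_limit(90, 90, 90, 90, limit=240))  # False: 360 exceeds the limit
```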
Black Generation
We also have to provide instruction on how the black component of the CMYK mix will be created. Remember that our subtractive primaries are CMY and that, theoretically, they provide all the colours in the visible spectrum. Where does K (black) come in? To bridge the gap between theory and real world performance, black ink does some very important things:
• It compensates for the spectral shortcomings of CMY inks: our inks are not perfect representations of what cyan, magenta, and yellow colours should be.
• It eliminates registration problems in black type: if there were no discrete black ink, every time we wanted to print black type we would have to stack cyan, magenta, and yellow on top of one another and make sure they fit perfectly.
• It helps us achieve easier neutrals: we can use black to replace the grey component of colours, reducing the amount of CMY required to stay in balance on the press and providing consistent neutral tones.
• It provides cost savings: black ink is cheaper than coloured ink.
• It increases contrast: black’s greater density extends the tonal range of CMYK printing and improves the appearance of the final printed piece.
Since black is an add-on to our primary CMY colour set, we must provide instructions for its creation. Typically, we specify the black start point, maximum black, and black strength.
• Black start point: The point at which black enters colour mixes (range of 0% to 50%). If the black start point is 20%, then tones from white to 20% will carry CMY only.
• Maximum black: The upper allowable percentage of black ink used in the K separation (range 70% to 100%).
• Black strength: The relative quantity of black versus cyan, magenta, and yellow in the neutral grey component of colours (range 5% to 75%). As the number increases, colours can contain more black.
Black strength settings are also referred to as grey component replacement (GCR) values in colour separations. These may be set as percentages or simply as light, medium, and heavy, where more equals a larger proportion of black taking the place of CMY in neutral tones. GCR is the successor to UCR (under colour removal), which only moved exactly equivalent amounts of CMY to the black channel. UCR separations exhibit a ‘skeletal’ black, where there is a small amount of content in the black separation. With GCR, and as the black strength increases, more and more content moves into the black channel.
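A deliberately simplified sketch of how the three black-generation parameters might interact is shown below. Real separation engines use far more sophisticated curves; the parameter names and scaling here are our own, and values are fractions from 0 to 1 rather than percentages.

```python
def gcr_separate(c, m, y, strength=0.5, start=0.2, max_black=1.0):
    """Toy grey component replacement: move part of the grey component
    shared by C, M, and Y into the black channel."""
    grey = min(c, m, y)                # the grey component of the colour
    if grey <= start:                  # below the black start point: CMY only
        return (c, m, y, 0.0)
    k = min(strength * (grey - start), max_black)
    return (c - k, m - k, y - k, k)    # black replaces part of the CMY grey

print(gcr_separate(0.60, 0.45, 0.45))  # (0.475, 0.325, 0.325, 0.125)
```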
Final Processing
With the measurement file available, and total ink limit and black generation specified, processing can start in the colour management software and an output profile can be created. It will take from two to five minutes for all the calculations to complete. You may be prompted to specify whether to save the profile in the system or user location, or both. The system location will make the profile available to all users of the computer, while the user location will restrict it to the current user. Depending on your permissions at the computer, you may not be able to write into the system location. On an Apple OS X machine, the default location for the system is System/Library/ColorSync/Profiles and for the user, Users/(userID)/Library/ColorSync/Profiles.
Viewing Your Profile
The Mac provides a built-in tool for viewing a profile or comparing two profiles. From the Utilities folder, launch the ColorSync Utility. Select the second tab from the left and you’ll see a selection window on the left and a display pane on the right. The selection window contains a list of all ColorSync aware locations on your computer. Expand the available folders and browse to find the profile you wish to view. Select a profile and a three-dimensional representation of its gamut in the Lab colour space appears on the right. Use your cursor to rotate the profile through all three of its axes. From the drop-down triangle in the display pane, choose Hold for comparison. The current profile is ghosted back, and you can select a second profile to display at full strength on top of it so you can compare the relative gamuts of the two profiles.
Profile Limitations
Remember that your profile represents the behaviour of the device for particular imaging parameters such as a specific substrate, resolution (screen ruling), and ink set. If any of those parameters is significantly altered, a new profile is required to accurately represent the colour results. For real-world applications, profiles representing related groups of parameters are employed. One profile might represent all uncoated stock at 100 to 133 lines per inch, a second for coated stock at 150 to 175 lines per inch, and a third for a high saturation ink set on glossy stock with high resolution stochastic screening.
Media Attributions
• what is a profile by IFCAR adapted by Ken Jeffrey
Up to this point, we have focused exclusively on the output profile in our discussion of profiling. This makes sense, since this is the predominant profile we are concerned with in graphic production. Output profiles characterize output devices such as printers, proofers, and presses, but there are other devices that we have to manage in the full spectrum of a colour-managed workflow, and these require two additional classes of ICC profiles: display and input.
Display profiles capture the colour characteristics of monitors, projectors, and other display devices. Input profiles characterize devices that capture images such as scanners and digital cameras.
Display Profiling
You may hear this class of profile referred to as monitor profiling, but the more accurate designation is display profiling to acknowledge the inclusion of components beyond the monitor such as the video card and video drivers. Though less commonly profiled, this class of profile encompasses digital projectors as well.
In preparation for display profiling, the cardinal rule is to make whatever adjustments we can in the actual monitor. Any software adjustments to the VideoLUT (the look-up table stored on the video card) reduce the operating range of the monitor and limit the spectrum of the display. With the predominance of LCD monitors, this means that the brightness or white luminance is the only hardware adjustment available. If you see reference to black level or colour temperature settings, this hearkens back to CRT monitors where these were actual hardware settings. For LCD monitors, these are software controls. For an LCD, all light comes from the backlight, which is a fluorescent array behind a diffuser, so the only monitor control is the brightness of this backlight.
Display profile software typically combines calibration and profiling. A setting called the vcgt (video card gamma table) tag in the display profile can download settings to the VideoLUT on the video card and change monitor behaviour. This is an unusual deviation from the standard protocol in colour management, where the profile never alters the behaviour of the device. Calibration is used to optimize device function, while characterization, or profiling, captures a description of device behaviour. Normally, the application of a profile should not have any influence on the device function.
Before calibration, it’s essential to warm up an LCD monitor for 30 to 90 minutes. Check the full brightness. If the monitor has aged to the point where it can’t achieve adequate brightness, then it should be abandoned as a candidate for profiling. Set the standard refresh rate and resolution that will be used on the monitor. If these are changed after profiling, then the profile cannot be considered accurate. Clean the screen with an appropriate gentle cleaner.
When you begin the profiling software, you will be prompted to identify your instrument. Colorimeters are often provided in display profiling packages, but most software works with standard spectrophotometers (spectros), such as the i1 Pro.
The recommended settings to enter for the set-up phase are:
• White point: D65 (6500 K)
• Gamma: 2.2
The setting 6500 K is usually close to the native white point of an LCD monitor. You can choose Native White Point if you feel that 6500 is too far from the actual white point of your monitor. Gamma is the tone reproduction curve of the monitor. The setting 2.2 typically provides the smoothest gradients in monitor display.
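What a gamma of 2.2 means as a tone reproduction curve can be sketched with a simple power law. This is an approximation only; real monitor response is more complicated, and the function name here is our own.

```python
def displayed_luminance(signal, gamma=2.2):
    """Power-law tone curve: normalized signal (0-1) to relative luminance."""
    return signal ** gamma

print(round(displayed_luminance(0.5), 3))  # 0.218 -> a 50% signal displays
                                           # at about 22% luminance
```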
Next is the choice of a patch set from small, medium, and large options. This determines the number of colour swatches that will be projected on screen for the instrument to read. The trade-off is between calibration time and colour range. Start with the small patch set and see if you are happy with the results.
To start this process, make sure the i1 is on its base plate for the instrument calibration step and then suspend the spectro in the monitor mounting strap on the monitor. The weight at one end of the strap hangs behind the monitor to counterbalance the spectro, and the i1 Pro locks into the mounting plate at the other end of the strap to keep it in place on the monitor screen. The reading aperture of the spectro should be approximately in the centre of the screen. Tip the monitor back very slightly to help the spectro sit flat on the front of the LCD.
When you tell the software to proceed, it begins projecting a series of colour swatches that the colorimeter or spectro records. As you did to produce the measurement file for your output profile, you are building a comparative table of known device values (the RGB swatches projected on the screen) and device independent values (Lab readings from the spectro) that describe their appearance. This may take from three to ten minutes. During this process, make sure that no screen saver becomes active and keep the mouse out of the scanning area. If you leave before the process is completed, check that the spectro is properly positioned when you return.
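The comparative table at the heart of this step can be pictured as a simple list of pairs. The Lab numbers below are approximate values for sRGB patches, standing in for real instrument readings; they are illustrative only.

```python
# Each entry pairs the RGB patch projected on screen (device values)
# with the Lab reading from the instrument (device independent values).
measurements = [
    ((255, 0, 0),     (53.2,  80.1,  67.2)),  # red patch
    ((0, 255, 0),     (87.7, -86.2,  83.3)),  # green patch
    ((128, 128, 128), (53.6,   0.0,   0.0)),  # mid-grey patch
]

for rgb, lab in measurements:
    print(f"device RGB {rgb} -> measured Lab {lab}")
```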
Once the colour patches are done, you will be prompted to name and save the profile. Make a habit of naming your profile with the date so its age can be easily checked. Saving display profiles is similar to saving output profiles, where the user chooses system and user options. With display profiles, there is no value in saving previous versions. All you are interested in is the current state of the monitor.
To see the active profile on a Mac, choose System Preferences/Displays/Color. The active profile will be highlighted at the top of the list. There is a check box toggle that limits the list so only profiles known to be associated with the monitor are shown.
Input Profiles
As mentioned, we need input profiles when we capture images. There are predominantly two types of devices associated with image capture: scanners and digital cameras. The fundamental concept in producing an input profile is that RGB device values scanned or photographed from a target are matched to device independent Lab values either provided by the target vendor or measured from the target itself.
For input profile creation, the targets always consist of two parts: a physical sequence of colour patches and a target description file (TDF) with the profile connection space (PCS) values for the swatches. The TDF accuracy varies from individually measured targets (done by you or a specialty vendor) at the high end to averaged samples from a batch run (an economical alternative).
As with output profiling, there are standard scanner target formats. We have the IT8.7/1 for transmissive originals (film transparencies like slides) and the IT8.7/2 for reflective originals (photo print paper). These targets are available from a variety of vendors and allow you to match the material of the target to the material you will be scanning. If you will be scanning Kodachrome slides, you will want a Kodachrome IT8.7/1 target. Conversely, if your originals are Fuji photo prints, then you will want an IT8.7/2 target on the matching Fuji photo print paper.
The X-Rite ColorChecker targets are commonly used for digital cameras. There is the original ColorChecker with 24 tiles and the later digital ColorChecker SG with 140 colour tiles. The larger target can be used for initial set-up and there is a mini version of the original ColorChecker that will work in most photo shoots for an ongoing reference check.
Though scanners and digital cameras both fall into our domain of input profiling, they have some very different characteristics that we have to take into account when preparing to produce a useful profile. As with output profiling, we need to calibrate the device by stabilizing and optimizing its performance prior to capturing its colour behaviour. In order to stabilize, we need to understand the potential variables that the device presents. Scanners have a controlled light source and stable filters and typically have the option for extensive software intervention. In contrast, cameras have stable filters and moderate software controls but have the potential for hugely variable lighting conditions. The excessive variability of outdoor lighting limits useful profile creation to interior and in-studio camera work. If the lighting conditions can be controlled adequately in the studio, then colour-accurate capture can take place and colour accuracy can be maintained in the production work that follows.
Stabilizing a scanner’s performance comes from turning off any automatic adjustments for colour correction:
• White and black point setting
• Removing colour casts
• Sharpening
If you can’t turn these off, then the scanner is likely not a good candidate for profiling. Optimize the scanner’s behaviour with an output gamma setting of 2.6 to 3.0.
Stabilizing a camera’s performance comes from the appropriate lighting and capture settings. Use even lighting and full highlight and shadow exposure for target capture. For colour temperature, use grey balance for camera back devices, and white balance for colour filter arrays (CFA). Optimize the camera’s bit depth retention with gamma 1.0 for raw profiling.
With calibration complete, it’s time to capture the target. For a scanner:
• Mount straight
• Mask open bed areas
• Scan as high-bit tiff
• Open in Photoshop (beware of any automated conversions or profile assignments) to rotate, crop, and spot
Comparatively, for the digital camera:
• Light evenly
• Capture square
• Open in Photoshop (beware of any automated conversions or profile assignments) to rotate, crop, and spot
With the digital image in hand, we’re ready for the input profile creation. Measure the commercial target with the spectro if you are not using a supplied target description file. Launch your colour management software and you will be prompted to identify the target image and the corresponding target description file. The profiling software reads the RGB values from the scanned or captured image, and the software processes the target description file and RGB measurement file to produce the input profile. File-saving options are very similar to what we have previously described for output and display profiles.
Device Link Profiles
Device link profiles are most closely related to the output class of profiles. A device link profile combines two output profiles to provide the specific conversion instructions between two particular devices. It provides the opportunity to maintain black and other separation purity (i.e., what begins as black only in the source colour space emerges as black only in the destination colour space) by removing the need for passing the colour transformation through the PCS. To define a device link, we identify a source and destination profile to our colour management software, specify the rendering intent, and provide details on how constrained the re-separation should be. By avoiding the passage into and back out of the PCS, we can very strictly control the parameters of the colour conversion. The options for conversion are:
• Full re-separation — Complete re-separation. Solid colours in the original file may not remain solid. The black generation parameters that you specify are used, which may result in using less chromatic ink and more black ink.
• CMYK integrity — All colour builds can be adjusted. The relative amount of black versus CMY will be preserved in content processed through the device link.
• Black purity only — Any colours other than the black channel (solid K, K-greys) can be adjusted.
• Colour and black purity — The same as fully constrained, but solid colours can be reduced to a tint.
• Fully constrained — Any colour made with only one or two inks will not have other inks added. Solid (100% tints) primaries and secondaries are not affected and remain solid.
• Ink optimizing — A proprietary term in the ColorFlow colour management software for applying a full re-separation with a heavy grey component replacement (GCR) algorithm.
Colour management software used to be required to preview the results of applying a device link. With the last few versions of Adobe Photoshop, a device link option has been added to the advanced dialog window of the Color Conversion menu, making device link previewing much more accessible. Currently, Photoshop only supports CMYK to CMYK device links. It does not support RGB to CMYK device links. Another alternative for viewing the results of applying a device link is to generate a virtual proof (VPS) in Kodak Prinergy with the device link specified. For details, see section 4.11 in this chapter.
With this extraordinary level of control, why don’t we use device links for every colour conversion? The truth is that, with our gain in managing the colour conversion process, we sacrifice an even greater degree of flexibility. The premise of colour management and the use of profiles is that we do not have to generate a unique profile for each pairing of devices. With the power of the PCS gateway to provide the device independent colour description, we only need a single profile for each colour condition of a device and any two profiles can be positioned on either side of the PCS to provide a pathway for the colour conversion.
Where it does make sense to go to the extra trouble of generating a device link profile is a situation where a specific pairing of two devices is used over and over again, such as a proofer for a particular press condition, or to keep two presses in a shop matched for their colour output.
If we process an image from RGB to CMYK at the beginning of our production process, we gain the stability of having the image in our known CMYK space, but we surrender the flexibility of converting to the optimal CMYK space at final output. For final stage or late-binding conversion, we are dependent on the RIP environment for managing the calculations between the profile pair (see Section 5.2). A device link provides additional security in the conversion process by reducing the variability that can come with the processing application input that is part of a profile pair transformation.
4.12: A Review of the Profile Classes
We’ve now touched on the four types or classes of profiles: display, input, output, and device link. What traits do they have in common? They all:
• Are specified by ICC-defined file formats
• Contain colour tables
• Associate device colour values with device independent colour values (PCS)
• Use a measuring device (spectrophotometer) and targets for creation
• Require a specified rendering intent
• Have standard OS library locations
There are also some unique characteristics for each profile class that help define the role they play in the overall colour management process. The display class of profiles:
• Have no separate, tangible target: the device ‘is’ the target
• Can affect device behaviour
• Are mostly integrated with device calibration
The input class of profiles:
• Are unidirectional: A to B table only (device values to PCS)
• Are able to exclude any external measuring (with a supplied TDF)
• Ensure that the target’s job is to tell us how the device ‘sees’ the target
Output (and device link) class of profiles:
• Use CMYK (versus RGB for display and input)
• Have black handling settings
• Have the largest and most complex colour tables
• Ensure the target’s job is to tell us how the device ‘makes’ the target
• Provide preview capability for upstream editing
We also discussed the functions of a profile in the colour equation. The two functions are source and destination. A source profile is a profile used to convert device dependent RGB or CMYK information to a device independent colour space (Lab). A destination profile is a profile used to convert information from a device independent colour space (Lab) into a device dependent colour space of an output device (RGB or CMYK). You can think of the source profile as the colour reference or the place from which our desired colour appearance comes. The destination profile describes the location where we are producing colour in the current pairing of devices. If we want a proofer to simulate the colour behaviour of a particular press, the press’s profile is the source (the desired colour behaviour) and the profile for the proofer is the destination (our colour output).
Do not confuse a profile’s class with its function; they are independent and separate characteristics. A particular output profile can assume the function of source in one colour equation and then turn around and assume the role of destination in a second. Take the example of the press above, where the press profile acted as the source. If we have multiple presses in the printing plant, and we have another press that is our master press for colour behaviour, the press profile that acted as a source for the proofer output will now function as a destination profile to align that press to the colour behaviour of the master press (the master press profile is the source profile in this pairing).
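To make the source/destination pairing concrete, here is a minimal sketch using Pillow’s ImageCms module (a LittleCMS wrapper), assuming Pillow is installed and the two ICC file names below exist; this builds an ordinary profile-pair transform through the PCS, exactly the source-to-destination conversion described above.

```python
from PIL import Image, ImageCms

# Press profile as source (the desired colour), proofer profile as
# destination (where colour is being produced). File names are hypothetical.
transform = ImageCms.buildTransform(
    "press_condition.icc",              # source profile
    "inkjet_proofer.icc",               # destination profile
    "CMYK", "CMYK",                     # in and out modes
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)

page = Image.open("page.tif").convert("CMYK")
proof = ImageCms.applyTransform(page, transform)  # press -> PCS -> proofer
proof.save("proof.tif")
```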
Profiles enable the two key processes of any colour managed workflow: clear communication of colour meaning and the transformation of device values to maintain colour appearance. | textbooks/workforce/Arts_Audio_Visual_Technology_and_Communications/Book%3A_Graphic_Design_and_Print_Production_Fundamentals/04%3A_Color_Management_in_the_Graphic_Technologies/4.11%3A_Beyond_Output_Profiling-_Display_Input_and_Device_Lin.txt |
Learning Objectives
• Explain why raster image processing requires so much data for print imaging
• Compare resolutions required for digital media and print media
• Compare and contrast the positive and negative attributes between using process and spot colours
• Discuss why Pantone colours are more accurate on a printed swatch than on screen
• List a number of different industry standard spot colour systems
• Describe trapping issues that occur when adjacent colours are imaged independently
• Analyze different imaging technologies for trapping requirements
• Interpret how black ink is to be used in overprint and boost situations
• Define transparency within the context of prepress workflow management
• Differentiate between flattening transparency on the desktop and at raster image processing
• Describe the most common press sheet imposition styles
• Analyze different binding styles to select the correct imposition required
• Identify opportunities for nesting multiple images to save materials
• Explain the importance of preflight within the context of pre-press workflow
North America’s fifth-largest manufacturing sector is graphic communications technologies. We can become aware of just how huge this industry is by listing all the manufactured images we see in our day. Your list might include the morning paper or magazine you read, the graphic on the side of the bus you ride to work, and the labels on the grocery shelf where you select your evening meal. Increasingly, more of the graphics that are driving that massive industry are produced with computer graphics software on personal computers. Most of the graphics software used to create the images for reproduction is designed to create images for electronic media — primarily the Internet. Computer graphics designers are not aware of, or concerned with, optimizing their designs for the manufacturing process they are driving. This problem is a root cause of reduced profitability in most sectors of the graphic communications industry. To tackle this problem, we must become aware of all that happens to a computer graphic from the time it leaves the designer’s computer screen to the image on the label on the package on the grocery shelf, or the photograph on the side of a bus.
We must first distinguish between traditional pre-press technologies and the pre-imaging processes that are relevant in today’s graphic communications industry. Pre-press processes are different from the way we process images for electrophotographic imaging or imaging with an inkjet engine. We must also distinguish between preparing images for a lithographic press and a flexographic press. Electrophotography and inkjet are growing technologies used to produce customized — or individualized — communications materials. Lithography and flexography are used to manufacture mass-produced media products. These four imaging processes are the core imaging technologies that reproduce 90% of the images produced in the graphic communications industry.
Many graphic designers are not aware of what must happen to the computer graphics they produce in order to ready them for manufacturing reproduction. Their experience is limited to hitting ‘command P’ and watching their computer graphic magically transform from the illuminated masterpiece on their Apple Cinema Display into the disappointing rendition that appears on the tray of their inkjet printer. Most of the pre-imaging processes are automated in software functions that are built into the print driver, so people are not aware of how a computer graphic must be prepared for an imaging device. Since more and more of the images produced through inkjet, electrophotography, lithography, and flexography start their lives as computer graphics, it is important to understand these pre-imaging processes to properly design computer graphics for the manufacturing process.
This chapter will analyze six pre-imaging processes in detail, and describe how they are altered to prepare computer graphics differently for each of the four imaging technologies. We will refer back to the computer graphic design/creation process to outline how graphics could be altered so they can be more effectively reproduced with each imaging technology. This is the missing link in the graphic communications business in today’s marketplace. Designers create computer graphics in software that is increasingly designed for electronic image creation. They do not realize that the same graphic they created for a home page on the Internet should not be used for the cover of a book. They email the image to a lithographic print production facility and the pre-press department of that facility does handsprings trying to alter the image to work on their sheet-fed presses. This adds time and cost to the job that is usually buried. The designer never gets feedback on how the design could be altered to be more effective for lithographic production.
When pre-press was a computer-to-film process, there were two important factors that ensured designers got this critical feedback. The software for computer graphic production was specialized for print creation and content could be photographed or computer-generated and combined on film. Computer graphic designers knew their image was only going to be used for the cover of a book and created it appropriately. They also had to submit their computer graphic to a graphic communications production facility that was separate from the lithographic print facility. If there were extra costs incurred to prepare the computer graphic for a lithographic press, the designer was informed and invoiced for the extra work the image preparation entailed. So the designers were working with computer graphic software that would not let them create imagery that was not appropriate for print production, and if they did dream up an image that did not work well, they were immediately informed of the extra costs they were incurring.
In the 21st-century marketplace, all graphics that drive our four primary imaging technologies are created on the computer. Computer graphics software is designed to create effects for images that will stay in the electronic media: web, broadcast, digital film, and hand-held communication technologies. Pre-imaging processes are either automated or a part of the print manufacturing business and usually considered the painful part of feeding the print machinery that no one wants to talk about. So computer graphic designers drive software that lets them create outrageous images for imaging reproduction manufacture. They are less concerned about the ‘print’ part of a media campaign, and manufacturers are hesitant to inform them that their designs incurred extra costs to reproduce. We can contribute to a solution to this problem by analyzing all of the pre-imaging processes for each type of reproduction manufacture and link them back to the computer graphic design software.
We will examine six pre-imaging processes:
• Raster image processing (RIP) technologies that are common to all four manufacturing processes
• Colour management for repeatability, as a part of the RIP process
• Trapping to lithographic and flexographic specifications
• Transparency, which is a visual effect that has a great impact on imaging
• Imposition for pre-RIP and post-RIP for media utilization
• Preflight analysis and automation for computer file creation
The raster image processor (RIP) is the core technology that does the computational work to convert the broad range of data we use to create a computer graphic into the one-bit data that drives a physical imaging device. Let’s examine the creation of a single character of the alphabet, or glyph. A font file delivers PostScript language to the RIP that describes a series of points and vector curves between those points to outline the letter A. The RIP has a matrix grid at the resolution of the output device and computes which spots on the grid get turned on and which are turned off to create the shape of that letter A on the output device. The spots on the grid can only be turned on or off — which is how binary data is encoded — either as 0 or 1. The grid then acts as a switch to turn a mechanical part of the imaging engine on or off.
With computer-to-plate technology for lithographic printing plate production, a laser is used to expose an emulsion on a printing plate. Most plate-setters have a resolution of 2,000 to 3,000 lspi (laser spots per inch). The RIP calculates all the spots that must be turned ‘on’ to create the graphic that will be imaged on the printing plate. If the image fills a typical sheet-fed press, that is (30 inches x 3,000 lspi) x (40 inches x 3,000 lspi) = 10.8 billion one-bit spots, which takes over a gigabyte of computer memory to store and transfer. A printing plate for flexographic print production is created by turning a laser on and off at a slightly lower resolution. An inkjet printer uses the same RIP process to deliver the same one-bit data to each inkjet nozzle for each colour of ink in the printer. Most inkjet engines have a resolution between 600 and 1,200 spots per inch — so the matrix grid is smaller — but if it is an eight-colour printer, the data for all eight nozzles must be synchronized and delivered simultaneously. An electrophotographic (Xerox) printer usually has a resolution similar to an inkjet printer and utilizes a similar RIP process to change a grid of electrostatic charges to positive or negative on an electrostatic drum that is the maximum media size the machine can image. Each colour in the printer has a separate raster image that charges the drum in the right spot to attract that colour of toner to that exact location. The data for each colour must be synchronized for simultaneous delivery. The data must refresh the charge on the drum after each print in order to pick up new toner. That is a very important fact to remember when we talk about personalizing print with variable data later in this chapter.
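The plate arithmetic is easy to verify with a short sketch; the function name is ours, and the figure covers a single one-bit separation.

```python
def plate_raster_spots(width_in, height_in, lspi):
    """One-bit laser spots the RIP must compute for a single separation."""
    return (width_in * lspi) * (height_in * lspi)

spots = plate_raster_spots(30, 40, 3000)
print(f"{spots:,} spots = {spots / 8 / 1024**3:.2f} GiB of one-bit data")
# 10,800,000,000 spots = 1.26 GiB of one-bit data
```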
This basic understanding of RIP’s place in a computer graphic workflow is essential to understanding how to prepare files for, and manage, RIP resources. It is also essential in solving some of the common problems we see in various RIPs. When we compare the two mass production imaging technologies, lithography and flexography, to the personalized imaging technologies, electrophotography and inkjet, we can identify some core similarities. In lithography and flexography, a high-powered laser is used to alter a physical emulsion that is durable and finely grained enough to let the laser image a spot that is one three-thousandth of an inch without affecting the spot of equal size beside it. We can reliably image that spot in a serif of a glyph set in one point type or a hair on a face in a photo that is imaged with a 5 micron frequency modulated (FM) screening pattern. The mass production technology assures us that the first print will be identical to the millionth print.
The raster grid of one-bit data that the RIP produces must be delivered to the imaging drum or the inkjet nozzle for every image that is produced with an inkjet printer or an electrophotographic engine. This is what allows us to make every image different and personalize it for the person we are delivering the image to. It also makes the process slower and less reliable for mass production. The RIP produces a lower resolution raster grid, so the detail in photos and letter shapes is not as precise. We can have a RIP discard data if we have too much detail for the raster grid it is producing. The RIP does not do a good job of interpolating more data to produce additional detail in a photo or graphic shape if that information is missing to begin with.
That brings us to examining the resources that a RIP must have to produce a perfect raster for every graphic shape it renders, and for every colour being reproduced. The resource a RIP consumes is data. In the graphic communications industry, we should all wear T-shirts that say ‘Pigs for data!’ just to distinguish us from our media colleagues who are producing computer graphics for electronic media. If we think of a RIP as an auto assembly line we are feeding with parts, in the form of files in different data formats, it will help us understand how to make a RIP more efficient. If we feed too many parts into the assembly line, it is easier to throw some parts away than it is to stop and recreate a part that is missing. If we feed the assembly line with five times as many parts needed to make a car, it is still more efficient to throw parts away than it is to stop and recreate a missing part.
If we apply this analogy to image resolution, we can point to examples where designers regularly repurpose images from a web page to use on a book cover or poster print. The web page needs to deliver the photo across a network quickly and only needs to fill a typical computer screen with enough detail to represent the photo. A typical photo resolution to do that properly is 72 pixels per inch. Now remember that the raster grid for a lithographic printing press that will print the book cover is 3,000 lspi. Our RIP needs much more data than the web page image contains! Most of the photos we are reproducing today are captured with electronic devices — digital cameras, phones, scanners, or hand-held devices. Most store the data with some kind of compression to reduce the data the device has to store and transfer. Those efficiencies stop at the RIP though, as this computational engine has to decompress the data before applying it to the graphic page it is rasterizing. It is like breaking a steering wheel down to wires, bolts, and plastic sleeves that efficiently fit into a one-inch-square shipping package, and putting this ‘IKEA furniture’ steering wheel onto an auto production line for the assembler to deal with in two-point-two minutes!
On the other hand, we can capture a digital photo at 6,000 pixels per inch (ppi) and use it on a page scaled to half the original dimension. That is like packing a finished steering wheel in 10 yards of bubble wrap and setting it on the assembly line in a wooden shipping crate! So it is important for designers to pay attention to the resolution of the final imaging device to determine the resolution that the RIP will produce from the graphic files it is processing.
Halftone Screening
It is important to stop here for a discussion about halftone screening that a RIP applies to photographs and graphics to represent grey levels or tonal values in a graphic element. We described how the RIP makes a grid of one-bit data, but graphics are not just black and white — they have tonal values from 0% (nothing) printing to 100% (solid) printing. If we want to render the tonal values in-between in half percent increments, we need 200 addresses to record the different values. Computer data is recorded in bits, two values (on and off), and bytes, which are eight bits strung together in one switch. The number of values a byte can record is 256 — the number of combinations of on and off that the eight bits in the byte can express. A computer records a byte of data for each primary colour (red, green, and blue — RGB) for each detail in a photo, as a pixel (picture element), which controls the phosphors on electronic imaging devices. A RIP must convert the eight-bit RGB values into the four primary printing ink colours (cyan, magenta, yellow, and black — CMYK). There are two distinct steps here: (1) conversion from RGB to CMYK continuous tone data (24-bit RGB to 32-bit CMYK); and (2) continuous tone to one-bit screening algorithms. We have to be in the output colour space before we can apply the one-bit conversion. It converts the eight-bit tonal values into one-bit data by dividing the area into cells that can render different sizes and shapes of dots by turning spots on and off in the cell. A cell with a grid that is 10 laser spots wide by 10 laser spots deep can render 100 different dot sizes (10 x 10), from 1% to 99%, by turning on more and more of the laser spots to print. If we think back to the plate-setter for lithographic platemaking, we know it is capable of firing the laser 2,000 to 3,000 times per inch. If the cells making up our printing dots are 10 spots square, we can make dot sizes that have a resolution of 200 to 300 halftone screened dots in one inch. A RIP has screening (dot cell creation) algorithms that convert the data delivered in RGB pixels at 300 pixels per inch into clusters of laser spots (dots) for each printing primary colour (CMYK).
This description of how a RIP processes photographic data from a digital camera can help us understand why it is important to capture and deliver enough resolution to the RIP. It must develop a detailed representation of the photo in a halftone screened dot that utilizes all of the laser spots available. The basic rule is: Required PPI = 2 x lines per inch (LPI) at final size. So if you need to print something at 175 lines per inch, it must have a resolution of 350 pixels per inch at the final scaled size of the reproduction. Use this rule if you are not given explicit direction by your print service provider. You can use a default of 400 ppi for FM screening where lpi is not relevant.
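The relationships in the last two paragraphs reduce to a couple of one-line functions. This is a minimal sketch of the rules as stated above, with names of our own choosing.

```python
def halftone_levels(cell_side):
    """Tone steps a square screening cell can render between 0% and 100%."""
    return cell_side * cell_side

def required_ppi(lpi):
    """The basic rule: capture two pixels per halftone line at final size."""
    return 2 * lpi

print(halftone_levels(10))  # 100 dot sizes from a 10 x 10 cell
print(3000 // 10)           # 10-spot cells on a 3,000 lspi device -> 300 lpi
print(required_ppi(175))    # 350 ppi needed for a 175 lpi screen
```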
WYSIWYG
It is important to know that each time we view a computer graphic on our computer screen, it is imaging the screen through a RIP process. The RIP can change from one software program to another. This is why some PDF files look different when you open them in the Preview program supplied with an Apple operating system than they do when opened in Adobe Acrobat. The graphics are being processed through two different RIPs. The same thing can happen when the image is processed through two different printers. The challenge is to consistently predict what the printed image will look like by viewing it on the computer screen. We use the acronym WYSIWYG (what you see is what you get) to refer to imagery that will reproduce consistently on any output device. Designers have faced three significant challenges in trying to achieve WYSIWYG since the advent of desktop publishing in the early 1980s.
The first challenge was imaging typography with PostScript fonts. The second was colour managing computer screens and output devices with ICC profiles. The third and current challenge is in imaging transparent effects predictably from one output device to another. Font problems are still the most common cause of error in processing client documents for all imaging technologies. Let’s look at that problem in depth before addressing the other two challenges in achieving WYSIWYG.
Font Management
The development of the PostScript computer language was pioneered by Adobe in creating the first device independent font files. This invention let consumers typeset their own documents on personal computers and image their documents on laser printers at various resolutions. To achieve WYSIWYG on personal computer screens, the font files needed two parts: screen fonts and printer fonts. Screen fonts were bitmaps that imaged the letter shapes (glyphs) on the computer screen. Printer fonts were vector descriptions, written in PostScript code, that had to be processed by a RIP at the resolution of the printer. The glyphs looked significantly different when imaged on a 100 dpi laser printer than they did on a 600 dpi printer, and both were quite different from what graphic artists/typographers saw on their computer screen. That was not surprising since the shapes were imaged by completely different computer files — one raster, one vector — through different RIP processors, on very different devices. Many graphic designers still do not realize that when they use Adobe type font architecture they must provide both the raster screen font and the vector PostScript font to another computer if they want the document that utilizes that font to process through the RIP properly. This was such a common problem with the first users of Adobe fonts that Microsoft made it the first problem they solved when developing TrueType font architecture to compete with Adobe fonts. TrueType fonts still contained bitmap data to draw the glyphs on a computer screen, and vector outline data to deliver to a RIP on a print engine. The TrueType font file is a single file, though, that contains both raster and vector data. TrueType fonts became widely distributed with all Microsoft software. Microsoft also shared the specifications for TrueType font architecture so users could create and distribute their own fonts. The problems with keeping screen font files together with printer font files went away when graphics creators used TrueType fonts.
The quality of the fonts took a nose dive as more people developed and distributed their own font files, with no knowledge of what makes a good font, and what can create havoc in a RIP. Today, there are thousands of free TrueType fonts available for downloading from a multitude of websites. So how does a designer identify a good font from a bad font? The easiest way is to set some complicated glyphs in a program like Adobe InDesign or Illustrator and use a ‘convert to outlines’ function in the program. This will show the nodes and bezier curves that create the glyph. If there are many nodes with small, straight line segments between them, the font may cause problems in a RIP. Remember that PostScript was meant to be a scalable device independent programming language. If the poorly made glyphs are scaled too small, the RIP has to calculate too many points from the node positions and ends up eliminating many points that are finer than the resolution of the raster image. On the other hand, if the glyph is scaled too large, the straight lines between points make the smooth curve shapes square and chopped-looking. These fonts are usually created by hand drawing the letter shapes, scanning the drawings, and auto tracing them in a program like Illustrator. The ‘convert to outlines’ test reveals the auto tracing right away, and it is a good idea to search out another font for a similar typeface from a more reputable font foundry.
Another good test is to look at the kerning values that are programmed into the font file. Kerning pairs are glyph shapes that need the space between them tightened up (decreased) when they appear together. A good font usually has 600 to 800 kerning pair values programmed into its file. The most common pair that needs kerning is an upper case ‘T’ paired with a lower case ‘o’ (To). The ‘o’ glyph must be tucked under the crossbar of the T, which is done by programming a negative letter space in the font file to have less escapement when the imaging engine moves from rendering the first shape to when it starts imaging the second shape. If we set the letter pair and put the cursor in the space between them, a negative kerning value should appear in the kerning tool. If no kerning value appears, the font is usually a poor one and will cause spacing problems in the document it is used in.
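How a negative kerning value reduces escapement can be sketched with a small lookup table. The advance widths and the -110 value below are invented for illustration; real fonts store thousands of such pairs.

```python
# Made-up advance widths and kerning values in font units (1000 per em).
advance = {"T": 611, "o": 500}
kerning = {("T", "o"): -110}   # tuck the o under the T's crossbar

def set_width(text):
    """Total escapement for a run of glyphs, applying pair kerning."""
    width = sum(advance[g] for g in text)
    width += sum(kerning.get(pair, 0) for pair in zip(text, text[1:]))
    return width

print(set_width("To"))  # 1001 font units rather than 1111 unkerned
```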
Another common problem occurred when combining Adobe Type 1 fonts with TrueType fonts in the same document. Adobe was the creator of the PostScript programming language, and although it was easy enough to copy its code and create similar fonts, Adobe has maintained fairly tight control over licensing the PostScript interpreting engines that determine how the PostScript code is rendered through a raster image processor. The RIP stores the glyph shapes in a font file in a matrix that can be speedily accessed when rendering the glyphs. Each glyph is assigned an address in the matrix, and each font matrix has a unique number assigned to it so that the RIP can assign a unique rendering matrix. Adobe could keep track of its own font identification numbers but could not control the font IDs that were assigned to TrueType fonts. If a TrueType font had the same font ID number as the Adobe Type 1 font used in a document, the RIP would establish the glyph matrix from the first font it processed and use the same matrix for the other font. So documents were rendered with one font instead of two, and the glyphs, word spacing, line endings, and page breaks were all affected and rendered incorrectly. For the most part, this problem has been sorted out with the creation of a central registry for font ID numbers; however, there are still older TrueType font files out there in the Internet universe that will generate font ID conflicts in a RIP.
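The font ID conflict is essentially a cache-key collision, which a few lines of Python can model. This is a conceptual sketch only, not how any particular RIP is implemented.

```python
# A toy model of a RIP caching glyph matrices by font ID number.
font_cache = {}

def load_font(font_id, glyph_matrix):
    """Return the cached matrix for this ID, registering it on first use.
    A later font that collides on the ID is silently given the first matrix."""
    return font_cache.setdefault(font_id, glyph_matrix)

first = load_font(4, "Adobe Type 1 Helvetica outlines")
second = load_font(4, "TrueType font with the same ID")   # collision
print(second)  # 'Adobe Type 1 Helvetica outlines' -> one font renders both
```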
Adobe, Apple, and Microsoft all continued to compete for control of the desktop publishing market by trying to improve font architectures, and, as a result, many confusing systems evolved and were discarded when they caused more problems in the RIPs than they solved. There is a common font error that still causes problems when designers use Adobe Type 1 fonts or TrueType fonts. Most of these fonts only have eight-bit addressing and so can only contain 256 glyphs. A separate font file is needed to set a bold or italic version of the typeface. Some page layout programs will allow the designer to apply bold or italic attributes to the glyphs, and artificially render the bold or italic shapes in the document on the computer screen. When the document is processed in the RIP, if the font that contains the bold or italic glyphs is not present, the RIP either does not apply the attribute, or substitutes a default font (usually Courier) to alert proofreaders that there is a font error in the document. The line endings and page breaks are affected by the error — and the printing plate, signage, or printout generated becomes garbage at great expense to the industry.
To solve this problem, Adobe actually cooperated with Microsoft and Apple in the development of a new font architecture. OpenType fonts have unicode addressing, which allows them to contain thousands of glyphs. Entire typeface families can be linked together to let designers seamlessly apply multiple attributes such as condensed bold italic to the typeface, and have the RIP process the document very closely to what typesetters see on their computer screen. PostScript is also the internal language of most page layout software, so the same OpenType font files are used to rasterize the glyphs to screen as the printer’s RIP is using to generate the final output. There can be significant differences in the RIP software, but many font issues are solved by using OpenType fonts for document creation.
One common font error still persists in the graphic communications industry that acutely underlines the difference between creating a document on a single user’s computer but processing it through an imaging manufacturer’s workstation. Designers usually own a specific set of fonts that they use for all the documents they create. The manufacturer tries to use the exact font file each designer supplies with the document. The problem once again involves the font ID number, as each font file activated in an operating system is cached in RAM memory to make the RIP-to-screen process faster. So the font files the manufacturer receives can be different versions of the same font created at different times, but assigned the same font ID number. For example, one designer uses a 1995 version of Adobe’s Helvetica typeface and another uses a 2015 version, but the two typefaces have the same font ID number. The manufacturer’s operating system will not overwrite the first font matrix it cached in RAM, so it is the first font file that renders the document on screen and will be sent down to the RIP. Usually, there are few noticeable changes in the glyph shapes. But it is common for font foundries to adjust kerning values between letter pairs from one version to the next. So if a manufacturer has the wrong version of the font file cached in RAM, a document can have line-ending changes and page reflows. This is a hard error to catch. There are programs and routines the imaging manufacturer can implement to clear the RAM cache, but many times, more ‘garbage’ is generated before the problem is diagnosed. Modern PDF creation usually includes the production of a uniquely tagged font subset package that only contains the glyphs used in the document. The unique font subset ID avoids the potential for font ID conflicts.
Managing fonts on a single user computer has its own challenges, and Apple has included Font Book with its operating systems to help manage fonts in applications on an Apple OS. Adobe offers Typekit with its latest Creative Cloud software to provide greater access to a wide variety of typefaces from a reliable foundry. Third-party font management programs like Suitcase Fusion also help graphic artists manage their fonts for repurposing their documents effectively. It is still the responsibility of individual operators to know how to use the fonts in their documents. They should also make sure that the fonts are licensed and packaged to deliver to other computer systems so that they can drive many different RIPs on a wide variety of output devices.
The second challenge with implementing WYSIWYG for electronic documents that are imaged on substrates is managing colour expectations. Chapter 4 discussed the challenges of colour management from the perspective of how we see colour, measure it, and manage output devices with ICC profiles. In this chapter, we will explore colour management from the perspective of how we recognize and manage the ICC profiles that are embedded in client documents. We will also explore the preflight issues in managing spot colours in documents. This will also lead us to a discussion of trapping for lithography and flexography.
To design with colour in computer graphics software, we must understand how the software generates the colour values. Page layout and illustration software usually have several systems for creating colour on a page. The colour settings or colour preferences attached to a file can change from one document to another, or even in the same document when it is moved from one computer to another. If a designer uses the RGB colour model to specify the colours in a document, the colours on the monitor can change depending on the translations done to the colour settings. This is a major turning point for designers creating documents intended to stay in the electronic media. No one pays much attention to how a particular colour of red is rendered from one web browser to another. Designers pay more attention to how the colours interact relative to one another in web-page documents. It is only when we image a computer graphic on a substrate that we must pay attention to rendering the exact hue of red from one device to another. Coca-Cola has very exact specifications for the red used in its documents that tie into its brand recognition. So designers for documents intended for imaging on substrates must use colour models that are proven to render exactly the same results from one output device to another.
Pantone Colours
This is a lofty ideal that the graphic communications industry aspires to. There are systems in place that are proven to render very accurate results. There are challenges in understanding the systems, and one wrong step in the process and accuracy is destroyed. The process starts with how a designer chooses colours in a document. The most-used system for choosing accurate colours was created by the Pantone company. Pantone has developed a library of ink recipes that are published as swatch books. A designer can buy a printed book of a library of colours that matches an electronic library that can be imported into computer software programs. Designers compare their on-screen rendering of a colour to the printed sample swatch. If a designer is developing a corporate identification package with logos that use Pantone 123 and Pantone 456, the designer can be assured that the documents he or she creates will be imaged with inks that have similar spectral values to the swatch books used to choose the colour. I say similar, because the swatch books fade over time, and the substrates the books are printed on don’t usually match all the substrates a corporate logo is imaged on.
It is also important to realize that the Pantone library was created to mix pigments for spot colour inks rather than process colour inks. Spot colours are mixed independently and must each be applied to the substrate independently. Process inks are only the four primary colours: cyan, magenta, yellow, and black. Process inks are transparent and are intended to be combined by halftone screening different percentages on a substrate to render any colour in the Pantone library. Spot colour inks are more opaque and are intended to be applied to a substrate one at a time, through distinctly separate printing units. Since most colour photography is colour separated to render the photo in only the four primary process inks, most documents are created intending to convert the spot colours to process colours. They can be imaged with the photographs in the document. A designer must know how many colours the output device is capable of when deciding which colours will remain as spot colours and which will be converted to CMYK process colours. Most inkjet and electrophotographic devices are only capable of imaging with the four process colours. Some lithographic presses have extra printing units that can print spot colours, so six- and eight-colour presses commonly print the four process colours and two (or four) extra spot colours in one pass. It is not uncommon to have 10- and 12-colour flexographic presses that image no process colours but use 12 spot colours. This is because, historically, flexo plates cannot consistently reproduce very fine halftone dots reliably. This is changing with the development of high-definition plating technology, so we are seeing more photographic content produced well on flexographic presses. Flexography is primarily used in the packaging industry where spot colours are very closely tied to brand recognition in retail outlets. A designer must be very aware of the imaging technology used to reproduce a document when deciding which colours will remain spot colours.
This is where the next round of challenges begins when preflighting (assessing) documents for imaging to substrates. If design elements stay as spot colours, it is a simple process to maintain the spot colour on the output device and to image with the appropriate ink or toner. Some software will not maintain the spot colour in a document easily in some situations. Usually, the problem comes with applying gradients to spot colours. It is very easy to introduce a median colour value on a spot colour gradient that is simulated with a process colour value. The screen version displays a nice smooth gradient that looks like what the designer intended to create. When imaging on a substrate, the gradient will have to be broken down into individual colours: from the solid spot colour to a value of CMYK and back to spot colour. It is very hard to recognize this by viewing the document, or even a composite PDF file. Viewing separated PDF files, or using a ‘separations’ tool in Acrobat, will show the problem before it gets to a printing plate.
There are also colour problems associated with nested files generated in different software. For example, if we create a magazine page with a headline colour named “PMS 123,” add a logo created in Adobe Illustrator with type in a colour named “Pantone 123,” and insert a PDF ad created in Apple’s Pages layout with a border specifying “PANTONE 123,” then even though they are all the same colour, colour-separating software will generate three separate spot colour plates for that page. The spot colours have to be named exactly the same and come from the same library. Some modern workflows include aliasing rules that will match numbered PMS colours to try to alleviate the problem. Colour libraries can be a problem as well, especially if our software allows the library to convert the spot colour to a process colour. The same colour library in two different versions of Adobe’s Creative Suite software can generate a different process colour recipe for the same Pantone colour. This is not a problem if all the document elements are created within one software package and all spot colours are converted to process colours. The problem arises when a designer places a graphic file from other software on a page with the same colour elements. A logo created in an older version of Adobe Illustrator will use its older colour library to look up process colour recipes that can be very different from the recipes in a recent colour library used in Adobe’s InDesign software. So all the Pantone orange colours in a document are supposed to look the same, but do not, because the spot colour to process colour conversion has not been done with the same library. The problem becomes worse when we combine files from different software vendors, as designers often have to do when building a document. It is common these days to bring together graphics created in Microsoft Office and Apple software and generate a final document with Adobe InDesign. The best way to create consistent documents for reproduction is to specify a common CMYK colour value that will print reliably on the output device.
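The aliasing fix is easy to picture in code. The sketch below (in Python, with an illustrative function name; it is not any vendor’s actual aliasing engine) normalizes the three variant names to a single plate key:

```python
import re

def normalize_spot_name(name: str) -> str:
    """Reduce variant spot-colour names such as 'PMS 123', 'Pantone 123',
    and 'PANTONE 123' to one canonical key so they separate to one plate."""
    cleaned = re.sub(r"^(PMS|PANTONE)\s*", "", name.strip().upper())
    return "PANTONE " + cleaned

names = ["PMS 123", "Pantone 123", "PANTONE 123"]
print({normalize_spot_name(n) for n in names})  # {'PANTONE 123'} -- one plate, not three
```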
Pantone also publishes a swatch book that shows the difference between the swatches run as spot colour ink mixes and the same swatches printed as halftone screen builds of process inks. This is a designer’s most valuable tool for specifying process ink recipes. It also illustrates that many Pantone colours cannot be simulated very well using halftone screen values of the four process inks. It is immediately apparent that the most vibrant orange, purple, and green Pantone spot colours are not achievable with process inks. There are systems like Hexachrome for colour separations that use more than just CMYK inks to extend the gamut of the Pantone colours that can be reproduced. There are also more and more inkjet and electrophotographic engines that will use extra spot colours to extend the colour range of the device beyond CMYK. The businesses that employ those devices usually know they are unique in the marketplace and have developed marketing tools to help designers use those capabilities successfully.
Accuracy in Design
If we reflect back to the concept of WYSIWYG for a moment, we can use the Pantone selection process to illustrate the challenge very well. If we ask a designer to choose colours for a document based on computer screen displays, we know that the RGB or HSL values they can select will be far too vibrant for reproduction with any imaging engine. To set proper expectations for WYSIWYG, we ask the designer to calibrate a monitor and select the proper output profiles to tone down the screen view and set more realistic expectations. We also ask that a print designer use printed swatch books to select from a library of specified colours and assign realistic CMYK process colour values to her or his colour palette. If those steps are followed, there is a very reasonable chance that the process will achieve WYSIWYG. However, it can break down in a few places. The spot colour swatch books set expectations about colours that cannot be achieved with process inks. When a mixture of spot colours and process inks are used, it is difficult to display both on the same computer screen with reliable colour. Graphics files can originate in different software with different libraries using different process colour recipes for the same Pantone colours.
There are also many spot colour libraries to choose from, and designers don’t always know when to use each library. We have described why the Pantone library is a North American standard, and some of its limitations. There are other design communities in the world that use spot colour libraries that are included as choices in graphic creation software tools. There are almost as many spot colours to choose from as there are free font files to download from the Internet. Spot colour classification has led to thousands of discrete colours being given unique names or numbers. There are several industry standards in the classification of spot colour systems. These include:
• Pantone, the dominant spot colour printing system used in North America and Europe.
• Toyo, a spot colour system common in Japan.
• DIC colour system guide, another spot colour system common in Japan.
• ANPA, a palette of 300 colours specified by the American Newspaper Publishers Association for spot colour usage in newspapers.
• GCMI, a standard for colour used in package printing developed by the Glass Packaging Institute (formerly known as the Glass Container Manufacturers Institute, hence the abbreviation).
• HKS is a colour system that contains 120 spot colours and 3,250 tones for coated and uncoated paper. HKS is an abbreviation of three German colour manufacturers: Hostmann-Steinberg Druckfarben, Kast + Ehinger Druckfarben, and H. Schmincke & Co.
• RAL is a colour-matching system used in Europe. The RAL Classic system is mainly used for varnish and powder coating.
The guiding principle for using any of these spot colour systems is to confirm that the manufacturer reproducing the document is actually running inks from that system. The Trumatch library is quickly gaining favour as a tool for colour selection. That library of spot colours has been developed to be exactly replicated with process colour halftone screening. There are no spot colours a designer can choose from that library that cannot be reproduced well with standard process inks. As more computer graphics are produced on digital imaging devices that use only CMYK, this colour library is becoming the choice for cross-platform or multi-vendor media publications.
Trapping can be a very complex procedure in pre-imaging software for certain imaging technologies. It is an electronic file treatment that must be performed to help solve registration issues on certain kinds of press technologies. Generally, if a substrate has to move from one colour unit to another in the imaging process, the registration of one colour to another will not be perfect. That mis-registration must be compensated for by overlapping abutting colours. As soon as two colours touch in any two graphic elements, we must create a third graphic element that contains both colours and overlaps the colours along the abutment line. That third element is called a trap line, and it can be generated in many different ways, which we will review.
Electrophotography
First, let’s look at the differences between the four most common imaging technologies and determine where and why we need to generate these trap lines. Electrophotographic, or toner-based, digital printers generally use only process colours. Each time the electrostatic drum turns, it receives an electrical charge to attract the toner colour it is receiving. The drum keeps turning until all toner colours are on the drum, and then all colours are transferred to the substrate at one time. There is no chance for mis-registration between the cyan, magenta, yellow, and black toners: they are imaged at the resolution of the raster generated by the RIP, and the placement of the electronic charge for each colour can be adjusted until it is perfect, which keeps registration stable from image to image.
Lithography
Let’s compare electrophotography to the lithographic print process. In lithography, a printing plate is generated for each colour and mounted on a plate cylinder. The plates are registered by manually turning wrenches to hold plate clamps, so the plate-mounting procedure can generate registration errors. Each colour printing unit is loaded with a separate ink, a plate that has been imaged to receive that ink, and a blanket that offsets the image from the plate before it transfers it from the blanket to the substrate. This is another mechanical transfer point that can cause registration errors. Most high-end lithographic presses have servo motors and cameras that work together to adjust for mechanical registration errors as the press runs. The substrate must travel from one printing unit to the next, and it is here that most registration errors occur. Slight differences in substrate thickness, stability, and lead (or gripper) edge, along with differing rates of ink and water absorption, cause slight mis-registration. Also, consider that most sheet-fed litho presses are imaging around 10,000 sheets per hour, and we are only talking about movements of one-thousandth of an inch. On most graphic pages, however, the naked eye can see a mis-registration of one-thousandth of an inch, so the process must be compensated for. The solution is generating trap lines to the lithographic standard of three one-thousandths of an inch. This trap line allowance in abutting colours means mis-registrations of two-thousandths of an inch will not show on the final page.
Inkjet
Inkjet is the next imaging technology we must assess and compare. The print heads on all inkjet machines are mounted on the same unit travelling on the same track. Each ink is transferred one after the other, and the substrate does not move after receiving each colour. It is like electrophotography in that mis-registration between print heads can be adjusted electronically and, once in register, remains stable for multiple imaging runs on the same substrate. If the substrate is changed between imagings, the operator must recalibrate to bring all colours into registration, ensuring the placement of abutting colours is perfect and no compensation is needed. As a result, no trapping will be needed for most inkjet imaging processes.
Flexography
Flexography is the fourth imaging technology we need to assess. This technology has the most points where mis-registration can occur. The printed image must be raised on the plate to receive ink from an anilox roller that can deliver a metered amount of ink. The computer graphic must be reduced (or flexed) in only one direction around the plate cylinder. A separate printing plate is developed for each colour and mounted on a colour unit that includes an ink bath, anilox roller, doctor blade, and a plate cylinder. The substrate travels from one print unit to the next on a continuous web that is under variable amounts of tension. If a graphic has a lot of white space around it, the substrate can be pushed into the blank space and cause distortion and instability in the shape and pressure of the raised inked image on the substrate. Flexography is used to image the widest range of substrates, from plastic films to heavy corrugated cardboard. This process absolutely needs trap lines generated between abutting colours. Standard traps for some kinds of presses can be up to one point (1/72 of an inch, almost five times our standard litho trap). Graphic technicians need to pay particular attention to the colour, size, and shape of the trap lines as much as to the graphic elements. In most packaging manufacturing plants, there are pre-imaging operators that specialize in creating just the right trapping.
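Putting the trap widths quoted above into one unit makes the comparison concrete. A quick arithmetic check in Python:

```python
POINT = 1 / 72                 # one point, in inches

litho_trap = 0.003             # the standard litho trap: three thousandths of an inch
flexo_trap = 1 * POINT         # a one-point flexo trap

print(f"flexo trap = {flexo_trap:.4f} in")                   # 0.0139 in
print(f"flexo/litho ratio = {flexo_trap / litho_trap:.1f}")  # ~4.6, 'almost five times'
```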
Let’s examine some of the ways these traps can be generated. The simplest way is for a graphic designer to recognize that he or she is designing a logo for a package that will be imaged on a flexographic press that needs one-point trap lines generated for all abutting colours. The designer isolates the graphic shapes that touch and creates overprinting strokes on those graphic elements that contain all colours from both elements. That doesn’t even sound simple! (And it’s not.) It becomes even more complicated when the graphic is scaled to many different sizes on the same package or used on many different packages. So most designers do not pay attention to creating trap lines on the graphics they create and leave it to the manufacturer to create trapping for the specific documents on the specific presses they will be reproduced on.
There is specialized software that analyzes a document, determines where abutting colours are, and generates the tiny graphic lines as a final layer on top of the original graphic. This is done before the document goes to the RIP, so it is raster-image processed at the same time as the rest of the document. Most RIPs process PDF files these days, and there are specialized plug-ins for Adobe Acrobat that will analyze a document, generate trap lines, and let an operator examine and edit the thicknesses, shape, and colour of those lines. It takes a skilled operator to examine the extra trap lines and determine if they are appropriate for the press they are going to be printed on. Press operators also need to determine the trap values of their inks. This refers to the ability of one printing ink to stick to another. Inks vary in viscosity depending on the density and types of pigments they are carrying. The trap characteristics and transparency of a printing ink are part of what determines the order in which the inks are applied to the substrate. For example, a process primary yellow ink is very transparent and will not stick (trap) well if printed on top of a heavy silver metallic ink. The metallic silver is thick and very opaque, so it will hide everything that it overprints. A graphics technician must generate trap lines for a graphic that has metallic silver abutting a process yellow shape. The technician will increase (spread) the shape of the yellow graphic to go under the abutment to the silver. The silver shape will not be altered, and when it overprints, it will stick to and hide the yellow trap line shape. The best analogy we have heard comes from a press person: the peanut butter sandwich analogy. We know the jelly sticks to the peanut butter, and the peanut butter will not stick to the bread if the jelly is spread first. If a press person does not know the trap values of the inks, he or she can make as big a mess of the press sheet as an upside-down peanut butter and jelly sandwich makes on the front of your shirt! For this reason, trapping should be left to the specialists and is usually applied to a final PDF file before it is sent to a RIP. Ninety percent of trap lines for lithographic and flexographic imaging reproduction are generated automatically by specialized trapping software. Operators are trained to recognize shapes and colour combinations that will cause problems on the press. They will custom trap those documents with the Acrobat plug-ins we talked about earlier.
Special Consideration for Black
There is one trapping combination that should be considered and applied in all four imaging technologies. It is the way that black ink is handled in the document and applied on the imaging device. Most type is set in black ink, and much of it overprints coloured backgrounds. In all four imaging technologies, black is the strongest ink and hides most of what it overprints. It is still a transparent ink, however, and most process black ink is more of a dark brown than the rich dark black we love to see in our documents. If the size of the black type or graphic is large enough, we will be able to see the black colour shift as it overprints stronger or weaker colours under it. Graphic designers should pay attention to setting whether black type or graphics overprint the background, or knock out the background to print a consistent black colour. A useful rule of thumb is that type above 18 points should be knocked out and boosted. Raise this threshold for very fine faces, such as a script, where larger point sizes can overprint, and reduce it for excessively heavy fonts like a slab serif. If the graphic is large enough, it should also be ‘boosted’ with other process colours.
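The rule of thumb can be stated as a simple decision. In this sketch the 18-point threshold comes from the paragraph above, while the adjustments for fine and heavy faces are illustrative values only:

```python
def black_type_treatment(point_size: float, face: str = "regular") -> str:
    """Decide whether black type should overprint or be knocked out and boosted."""
    thresholds = {"fine": 30.0, "regular": 18.0, "heavy": 12.0}  # illustrative values
    if point_size > thresholds[face]:
        return "knock out and boost"
    return "overprint"

print(black_type_treatment(24))           # knock out and boost
print(black_type_treatment(24, "fine"))   # overprint (a delicate script can overprint larger)
```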
The way we handle black ink or toner deserves special consideration in all four imaging technologies. Black is a supplemental colour to the three primary process colours. It is intended to print only when the other three colours are all present in some kind of balance. In all imaging technologies, we must watch that our total ink coverage does not approach 400%, that is, 100% of each ink overprinting the other inks in the same place. This is usually too much ink or toner for the substrate to absorb. As a result, it will not dry properly and will set off onto the back of the next sheet, or bubble and flake off the media in the fuser. Therefore, we must pay attention to how our photographs are colour separated, and how we build black in our vector graphics.
Colour Separations
When colour separating photographs, we can build in an appropriate amount of GCR or UCR to give us the right total ink coverage for the imaging technology we are using for reproduction. UCR stands for under colour removal. It is applied to very dark shadow areas to remove equal amounts of cyan, magenta, and yellow (CMY) where they exceed the total ink limit. For example, in sheet-fed lithography, a typical total ink limit is 360%. In areas that would otherwise print 100% of all four colours, UCR will typically leave the full-range black separation and remove more and more CMY the deeper the shadow colour is. A typical grey balance in shadows may be 95% cyan, 85% magenta, and 85% yellow. Including a 100% black, that area would have a total ink coverage of 365%, just over the limit, so UCR would trim a few more points of CMY in those deepest shadows. Other imaging technologies have different total ink limits, and these can vary greatly from one substrate to another within an imaging technology. An uncoated sheet will absorb more ink than a glossy coated sheet of paper and so will have a different total ink limit.
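The UCR arithmetic can be sketched as follows: total the four separations and, where the sum exceeds the ink limit, trim equal amounts from C, M, and Y while leaving black untouched. This illustrates the idea only; it is not the algorithm inside any separation engine or ICC profile:

```python
def apply_ucr(c, m, y, k, ink_limit=360):
    """Trim equal amounts of C, M and Y (all values in %) until
    C + M + Y + K is no more than the total ink limit."""
    excess = (c + m + y + k) - ink_limit
    if excess > 0:
        trim = excess / 3.0          # share the removal equally across C, M, Y
        c, m, y = (max(0.0, v - trim) for v in (c, m, y))
    return round(c, 1), round(m, 1), round(y, 1), k

# The shadow example from the text: 95/85/85 plus 100% black totals 365%.
print(apply_ucr(95, 85, 85, 100))   # (93.3, 83.3, 83.3, 100) -- back to the 360% limit
```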
GCR stands for grey component replacement, and it is intended to help improve grey balance stability in a print run and save on ink costs. GCR can affect far more colours than UCR as it can be set to replace equal amounts of CMY with black all the way into the highlight greys. This is particularly useful in technologies like web offset newspaper production. Grey balance is quickly achieved in the make-ready process and easily maintained through the print run. Black ink for offset printing is significantly cheaper than the other process colours, so there are cost savings for long runs as well. GCR is used in photos and vector graphics produced for other imaging technologies as well. Any process where grey balance of equal values of the three primary colours could be an issue is a smart place to employ GCR.
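GCR can be sketched in the same style: find the grey component (the amount by which C, M, and Y overlap) and replace some fraction of it with black. The formula below is a common textbook simplification, not the exact recipe in any ICC profile:

```python
def apply_gcr(c, m, y, k, strength=0.5):
    """Replace a fraction of the grey component (min of C, M, Y) with black."""
    grey = min(c, m, y) * strength
    return c - grey, m - grey, y - grey, min(100.0, k + grey)

# 60/50/40 holds a 40% grey component; at 50% strength,
# 20 points of each colour are swapped for 20 points of black.
print(apply_gcr(60, 50, 40, 0))   # (40.0, 30.0, 20.0, 20.0)
```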
You may be wondering how we can check the shadow areas of every photo we use. These GCR and UCR values can be set in ICC profiles by linking the shadow and neutral Lab values to the appropriate CMYK recipes. When the ICC profile is applied for a given output device, the shadows get the proper ink limits, and the grey tones get the prescribed amount of black, replacing CMY values.
Keylines
Black keylines, or outline frames for photos, are common in many documents. This is another place where a document should have trapping software applied for every imaging technology. Outline strokes on graphics can also have a ‘hairline’ setting, which asks the output device to make the thinnest line possible at the resolution of the device. This setting was intended for in-house studio printers with a resolution of 300 dpi, where the lines are 1/300th of an inch wide. But the same command sent to a 3,000 lspi plate-setter will generate a line 1/3,000th of an inch wide, which is not visible to the naked eye. These commands must be caught in the PostScript and replaced with lines of an appropriate width; trapping and preflight software will do this.
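The substitution that trapping and preflight software performs is straightforward: any stroke thinner than a minimum printable width is bumped up. The 0.25-point floor in this sketch is an illustrative value, not a universal standard:

```python
def fix_hairline(width_pt: float, min_width_pt: float = 0.25) -> float:
    """Replace hairline (zero-width) or ultra-thin strokes with a printable minimum."""
    return max(width_pt, min_width_pt)

print(f"hairline at 300 dpi   = {72 / 300:.3f} pt wide")   # 0.240 pt (1/300 in)
print(f"hairline at 3000 lspi = {72 / 3000:.3f} pt wide")  # 0.024 pt (1/3000 in) -- invisible
print(fix_hairline(0.0))                                   # 0.25
```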
Knock outs
The use of solid black backgrounds is becoming more popular in documents, which can cause problems in reproduction with all imaging technologies. The first problem is the filling in of fine details in graphic shapes that are knocked out of the solid black background. Fine type serifs, small registered or trademark symbols, or the fine hairlines mentioned above will all fill in and be obliterated when imaged. The problem is multiplied when we boost the black colour by adding screened values of cyan, magenta, or yellow to the colour block. When white type or graphics knock out of these background panels, any slight mis-registration of any colour will leave a halo of that colour in the white type or graphic. This problem can also be solved with trapping software. Essentially, the trapping engine outlines all the white type with a small ‘black only’ stroke that knocks out the process colour that boosts the black, making the white type fatter in that colour separation. This ‘reverse trapping’ works well when applied to the four imaging technologies we have been examining: lithography, flexography, electrophotography, and inkjet.
The biggest challenge in reproducing computer graphics on output devices in today’s marketplace is dealing with transparency in graphic files. This truly underscores the importance of WYSIWYG in proofing for the graphic communications industry. We must first emphasize that page layout software is not developed primarily for producing documents for mechanical reproduction; it prioritizes the creation of documents for viewing on electronic media: documents are created on a computer screen for viewing on a computer screen. We have reviewed some of the issues with rasterizing vector shapes consistently and representing colour reliably from one device to another. Viewing a graphic with three-dimensional transparent elements on an illuminated medium, where the light is transmitted, is significantly different from viewing it on an opaque medium, where the light is reflected. It is very hard to judge how the transparent effects will translate from one to the other. There is room for the same kind of collaborative research in this realm as there was in developing OpenType font architecture and ICC profiles.
The problems in WYSIWYG production for transparency fall into two categories. The first is setting expectations so a designer can make a reasonable prediction of how the document will look when imaged on a given medium. The second is the sheer scale of the computation we are asking of a RIP. PostScript is a three-dimensional language in the sense that it allows a creator to stack and prioritize elements on a page. The RIP can literally ‘throw away’ raster data that is knocked out by graphic elements that completely cover the elements behind. If those elements have to show through the foreground elements by 20%, the RIP must hold much more raster data in physical memory. Data is often lost if there is not enough memory available for the computations, and the result can change from one processing of the document to the next.
Designers can employ strategies at each level of document creation to manage these problems. The first strategy is to use layers well in document creation. By isolating different effects on separate layers, it becomes easier to isolate and edit the transparent effects when they don’t produce the desired results in the final output. The layers can be included in a PDF file of the document, and this allows the possibility of relatively quick editing in PDF editing software closer to the output stage. This can be a completely different working style for some graphic artists. If we start with the premise that the computer screen representation of the document is NOT good WYSIWYG and will probably need editing, then we can justify working with layers more to isolate effects. We can organize design elements on layers after creation — when we are fine-tuning the effects. Usually, this is a good technique when creating many elements on several page dimensions. Designers can review their documents and decide if there are distinct dimensional levels, as page elements are pushed further into the background to pull other page elements forward. A simple example is a book cover for a retrospective, with pictures from four distinct decades. The photos and type from each decade can be set on distinct layers, and transparent values of 25%, 50%, 75%, and 100% can be set for each layer. The screen will render one version of the document, and the printer will render another. It is easier to fine-tune the four layer levels of transparency than to go back and set new transparency levels for dozens of individual page elements.
Another strategy that must be considered for processing multiple transparent page elements is allowing the page layout software to rasterize the page elements, so it sends raster data to the RIP. This technique treats the transparent elements like photographs on the page and allows the creator to choose the resolution of the raster. Care must be taken here to ensure overlapping vector elements will raster at the same resolution in the RIP. Let’s say we have a type block that crosses a photo on the page, but it is transparent to let the photo show through the type. If we rasterize the transparent type at 300 ppi (the resolution of the photo), it will be significantly different from the raster of the vector type at the RIP, which might be 3,000 lspi for some plate-setters. The letter shape will be 10 times thicker over the photo, and that will be VERY noticeable if the type crosses the photo in the middle of the glyph. The solution is to make sure to rasterize the transparent type at 3,000 ppi to match the plate-setter raster. This makes the PDF file very large because it contains lots of raster data. The approach also has a disadvantage: it does not allow late-stage editing of the transparency values in the PDF file. The advantages are that the transparent elements will have better WYSIWYG, process more consistently in multiple RIPs, and use fewer RIP resources in processing.
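The 10-times difference described above is simply the ratio of the two resolutions, which suggests a quick sanity check before flattening:

```python
def edge_growth(flatten_ppi: int, device_ppi: int) -> float:
    """How many times thicker a flattened vector edge renders compared with
    the same edge rastered natively at the device resolution."""
    return device_ppi / flatten_ppi

print(edge_growth(300, 3000))    # 10.0 -- type flattened at the photo's 300 ppi
print(edge_growth(3000, 3000))   # 1.0  -- flattened to match the plate-setter
```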
It is very important to be aware of the transparent elements you are creating in a document. It is not always apparent when using effects, plug-ins, or effects filters available in page layout software. Using a bevel or emboss effect, or a simple drop shadow, makes that page element use transparent routines in the RIP. Programs like Adobe InDesign let designers view all the transparent elements on a page. Designers should examine each one to decide if it should be rasterized before RIP-ing or at the RIP. This is a good point at which to decide if transparent elements can be grouped, or organized, on common layers. It is also a good point to decide how the transparent element contributes to the design, and how critical the level of transparency, or WYSIWYG value, is in the overall design. In the retrospective book cover design referred to above, WYSIWYG is very important in communicating the message of the book and getting predictable results.
Transparent elements can be rasterized at the page layout stage, the PDF creation stage, and at the RIP stage for the final output device. Adobe Acrobat also has a tool to view transparent elements in a PDF file. It is important for a designer to compare the transparent elements in the PDF to those in the page layout software. The primary concern is that the elements rasterized in the PDF are no longer editable, so it is critical that the levels are right to create the desired overall effect. It is also important for a preflight operator to view the transparent elements in a PDF file to check what the RIP will have to process and to make sure the computational resources are available. If there are processing errors in the final output, they are most likely to occur in rendering the transparent objects. Viewing the transparent elements on a page in Acrobat should provide a mental checklist for the operator when she or he views the final output.
Communication Is Key
The graphic communications industry still has collaborative work to do to make the processing of transparent elements on a page more predictable and repeatable. It is important for designers to understand the problems they can be creating for a RIP, especially for output on an extremely high-resolution device like a plate-setter for waterless lithography. It is also important for operators who are managing documents with lots of transparency to be aware of the checkpoints in a document, and to know when there is not adequate WYSIWYG for the transparent elements on a page. Good questions for all stakeholders to ask when processing a document that relies on many transparent elements are:
• Where are the transparent elements?
• Did they process correctly?
• Is anything missing in the layers that should show through the transparency?
• Are there transparency values that can be adjusted to optimize the overall effect?
Let’s review the primary tools for reproducing transparent page elements in a document. We can utilize layers in a document for setting common transparency values. We should view all transparent elements in a document before and after creating a PDF file. There are several stages at which transparent elements can be rasterized. The earlier we rasterize them, the less editable the document becomes, but the more consistent the final output will be. Rasterizing early creates a larger file to process, but it requires far fewer computational resources at the RIP and makes the final output more predictable. When managing late-stage processing of transparency, we must be aware that what we are viewing on a computer screen is not necessarily a good representation of the final output. Graphic artists at all levels of production must pay attention to the transparent areas of a document to check for accuracy.
Imposition of individual graphics page files serves two primary purposes. The first, and perhaps most important, purpose is to utilize media and manufacturing equipment with the greatest economic efficiency. The second is to add what has historically been referred to as ‘furniture’ to the manufactured sheet to control processes. We will discuss both priorities for each primary imaging technology we are examining in this book. There is also a range of equipment capabilities for each technology that affects how documents are imposed. There are a few ways to impose the files, either pre-RIP or post-RIP. We will also look at ways of imposing in graphic creation software and in specialized imposition software.
The first technology we will look at is electrophotography, where imposition is perhaps the most underutilized. Electrophotographic, or Xerox-type, copiers are usually used for short run lengths with demands for instant turnaround. Duplexing is the simplest type of imposition, but there are four choices for how to orient the back of a single page on the front. The duplexing style can be specified in the print driver, in the PDF file, or in the RIP. Most small printers will turn on duplexing, rather than image the fronts, turn the printed paper over the right way, and image the back of the sheet. Fewer will take the time to utilize the machine and media capabilities to step and repeat an image two up on a larger sheet to halve the run time. Yet, as a manufacturing process, electrophotography is the slowest technology for image reproduction, and the most in need of saving time. There are simple rules of automation that can be programmed in a RIP to automatically impose if a run length is over 100 copies. For example, letter-size documents are the most often imaged on this type of equipment. If run lengths of more than 100 copies were imposed two up on a tabloid sheet, it would halve the run time and open up more imaging time on the machine. This introduces another process: cutting the sheets in half before final delivery. Management can determine how well the imaging engine run time is utilized and when it is efficient to have the same operator cut printed sheets in half. Making that management decision requires a knowledge of workflow options for efficiency. Those efficiencies are the primary purpose of implementing imposition software.
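The management decision comes down to simple arithmetic: engine time saved by stepping up versus one added cutting step. The speeds in this sketch are hypothetical:

```python
import math

def press_sheets(copies: int, n_up: int) -> int:
    """Sheets that must be imaged to yield the requested number of copies."""
    return math.ceil(copies / n_up)

copies, sheets_per_min = 500, 60   # hypothetical engine speed
for n_up in (1, 2):
    sheets = press_sheets(copies, n_up)
    print(f"{n_up}-up: {sheets} sheets, ~{sheets / sheets_per_min:.1f} min of engine time")
# 1-up: 500 sheets, ~8.3 min; 2-up: 250 sheets, ~4.2 min, plus one cut
```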
Using a ‘step and repeat’ or ‘duplexing’ imposition of a single-page file is the simplest example of imposing files for electrophotographic workflows. More and more copiers have capabilities to fold and bind the finished product ‘inline’ in one continuous process. This process is driven by imposing the single-page files in the correct order as they are processed in the RIP, so they image in the proper spot on the media to fold and bind together in the correct order.
Imposing page files for binding styles usually follows two types of machine capabilities: saddle stitching and perfect binding. Saddle stitching is a binding process that folds the media after it is imaged on both sides, stacks the printed folded sheets one inside the other, and applies staples to the spine of the book. The other dominant style of book binding built into copiers is perfect binding. Media is imaged on both sides and folded, but the folded sheets are stacked on top of each other, glue is applied, and a cover sheet is wrapped around the glued book to encase the spine. The pages have to be imposed in a completely different order on the printed sheets. The first and last pages of a saddle-stitched book are imaged on the same sheet, whereas a perfect-bound book has its first pages imaged on the same sheet and its last pages on separate sheets of media.
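The difference in page ordering is easiest to see in code. The sketch below computes the pairing for a simple saddle-stitched booklet with two pages on each side of a sheet; real imposition software handles many more pages per surface, but the pattern is the same:

```python
def saddle_stitch_pairs(page_count: int):
    """Pair pages for a saddle-stitched booklet, two pages per side of a sheet.
    The outermost sheet carries the first and last pages on the same surface."""
    assert page_count % 4 == 0, "saddle stitching needs a multiple of four pages"
    pairs = []
    lo, hi = 1, page_count
    while lo < hi:
        pairs.append((hi, lo))          # front of a sheet side: outside pair
        pairs.append((lo + 1, hi - 1))  # back of the same sheet
        lo, hi = lo + 2, hi - 2
    return pairs

print(saddle_stitch_pairs(8))
# [(8, 1), (2, 7), (6, 3), (4, 5)] -- pages 1 and 8 image on the same surface
```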
There are many options for folding a sheet of substrate before binding it together, and the options increase the larger a sheet is. An imposition must account for the preferences that are best practices for the specific machines involved. If we look at sheet-fed lithography first, we can identify some common best practices that can also apply to toner-based electrophotography and inkjet. We shall leave the examination of imposition for flexography to the discussion of nested dieline shapes for packaging applications.
Imposition standards are based on workflows for standard-sized pages, primarily letter-sized pages measuring 8½″ x 11″. We speak of two up devices that can generally image substrates up to a maximum size of 12″ x 18″ or 13″ x 19″. Two up devices can accommodate two letter-sized pages plus bleeds, grip, marks, and colour bars, sometimes referred to as furniture on an imposed sheet. These four elements all serve a purpose in page reproduction manufacturing that we shall define later. Four up devices generally accommodate imaging substrates up to 20″ x 29″ to image four letter-sized pages. Forty-inch devices are referred to as eight up and image on a maximum sheet size of 30″ x 40″, which can accommodate eight letter-sized pages and the furniture mentioned above.
There are four common styles of imposition for eight up devices: sheet-wise, work and turn, work and tumble, and cut and stack.
Figure 5.1 work and turn(a)
Figure 5.2 work and turn(b)
Figure 5.3 work and tumble
Figure 5.4 cut and stack
Sheet-wise impositions image the fronts of all pages on one side of the sheet and impose all the backs on a separate set of plates for a press run that will back up all the sheets. Work and turn imposes the fronts on one half of the sheet and the backs on the other half, with the axis running perpendicular to the grip of the sheet (see Figures 5.1 and 5.2). Work and tumble imposes all fronts on the bottom half of a sheet and backup images on the top half of the sheet (see Figure 5.3). The sheets are flipped halfway through the press run, with the axis parallel to the grip edge of the sheet. Cut and stack imposes the pages so full press sheets can be collated, and the collated sheets cut and stacked in order on top of each other to make a final book (see Figure 5.4).
Lithographic web offset presses have imposition orders that depend on how wide the web of paper is, and how many web rolls and half rolls will be brought together before folding on a first former, and cutting on a second former. There are many options for configuring a web-fed litho press, depending on the number of pages in a publication. Usually, the entire publication is printed and folded by running the stack of web paper together and folding it in half over a former.
Imposition has to account for creep and bottling when imposing thicker publications. Creep compensation pushes the image on a page closer to the spine the nearer that page is to the centre of the book, by up to the thickness of the publication at the stapled spine edge. Bottling skews the image on a page to account for the skewing of web rolls of paper that are folded in very thick signatures. The thicker the folded signature of a bound book, the more skewing takes place, and this should be accounted for in the ‘bottling’ value of an imposition.
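Creep compensation is also simple arithmetic once the paper calliper (thickness) is known. A common approximation is that each sheet wrapped inside the fold displaces the pages by about two plies of paper; treat the numbers below as illustrative only:

```python
def creep_shift(sheet_index: int, caliper_in: float = 0.004) -> float:
    """Approximate inward image shift (in inches) for the pages on a sheet,
    counting sheets from the outside of the folded signature (0 = outermost)."""
    return sheet_index * 2 * caliper_in   # two plies of paper per enclosed sheet

for i in range(4):                        # a 16-page signature folded from 4 sheets
    print(f"sheet {i}: shift page images by {creep_shift(i):.3f} in")
# the innermost sheet needs ~0.024 in of compensation at a 0.004 in calliper
```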
Imposition for inkjet media is usually done when the image is rasterized. The RIP will store the raster image and nest several raster images together to fill the dimensions of the media being imaged. This is usually set as an automated function in the RIP, and it is tied to the size and cost of the media being used. When imaging very low resolution images on very low cost media, the manufacturer is usually more concerned with the speed of the machine than with the utilization of the media. If an expensive media is being used, the automatic imposition will be utilized in the RIP to save media. Often inkjet images are not square, and the media will be die cut or cut with a router after imaging. The RIP can be set to impose the images so the shapes nest inside each other. This is usually outside of the automatic features for imposition in a RIP and requires operator intervention. In that case, the imposition operator must know the die cutting or router processes very well to make sure the imaged media can be cut out even though it is nested with another image.
This nesting of images to be die cut after imaging is most prevalent in flexographic printing for packaging. Most package or label shapes are not square and the media is expensive. The imposition function becomes very important for preparing flexographic plates. Nesting the die cut shapes for several packages together on a continuous roll of media takes very specialized software and a highly skilled operator. There are many variables to consider, including media thickness, ink coverage, die shapes, glue releases, and image floor on the flexo plate. Flexo imaging for packaging generally takes more understanding of CAD software and the construction of the final three-dimensional product. Imposition operators must know the structural requirements as well as the press limitations to nest together several package images on the same press run.
The final consideration for all impositions in all imaging technologies is the computer resource requirements for the RIP. We usually require an imaging engine to raster a single document, and proof it one page at a time through a proofing device. When we impose the same document with many other pages in completely different orientations, sometimes RIP processing errors can occur. Fonts drop out, and more commonly, transparent elements do not process properly. This is another checkpoint to make sure the imposed image matches the proof of the single page. It is essential to discuss preflighting for print at this point to establish where the routine checkpoints in document processing should be.
Media Attributions
• work and turn 01 by Ken Jeffrey
• work and turn 02 by Ken Jeffrey
• work and tumble by Ken Jeffrey
• cut and stack-03 by Ken Jeffrey
We have covered quite a few parameters that must be considered when preparing a computer graphic for manufactured image reproduction. The parameters shift with different substrates and imaging technologies. The task of checking a computer graphic document in preparation for the manufacturing process is called preflight. Most graphics are created by designers who are working separately from the manufacturer. In some cases, preflight preparation is the responsibility of the designer, or graphics creator, and in some cases, it is the responsibility of the manufacturer. Some manufacturers charge extra if they have to correct a graphics file that is not prepared properly for their imaging process. Most do not charge extra for preflighting, trapping, or imposing a file for their imaging process. Problems occur when all parties believe a file is prepared properly, but it causes errors in the RIP process. A poor font, improper colour separations, and transparency settings that drop out layers on a page are problems that occur most often. This is when time, materials, and money are wasted, and critical media campaign deadlines are missed. Preflight tries to catch the problems before they reach the RIP.
Designers or graphics creators can purchase separate preflight software that will generate reports about a PDF or PostScript file before they submit it to a manufacturer. The most popular dedicated preflight software is from Markzware and is called FlightCheck. There are also a few other companies that are popular in the marketplace. Enfocus bundles its preflight software with a suite of PDF editing tools called Pitstop. The Adobe Creative Suite has preflight functions built into its software suite. Adobe InDesign is the page layout software of choice for creating multi-page documents such as brochures, pamphlets, or books. The preflight module in InDesign will generate a report that can be included with the packaged contents a designer should provide to a manufacturer. The report will list important information about the InDesign document, such as that found in the list below. Adobe Illustrator also has a built-in preflight tool, and Adobe Acrobat has preflight tools that designers should use to analyze their PDF files before submitting them to a RIP.
Various industries have set PDF standards. The magazine publishing industry, for example, developed a PDF/X standard to ensure PDF/X files can be written only if they meet a set of specifications that are common for lithographic magazine production. Other manufacturing processes adopted the standards if they were appropriate for their imaging technology and PDF workflows.
Most preflight software checks for the following elements in an electronic document:
• File format
• Colour management
• Fonts
• Spot colour handling
• Page structure
• Thin lines
• Black overprint
• Trapping
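Checks like these lend themselves to a rule-based report. The sketch below runs a few of the rules from the list above against a toy description of a document; it is purely illustrative and bears no relation to how FlightCheck or Pitstop are actually implemented:

```python
from dataclasses import dataclass, field

@dataclass
class DocInfo:
    fonts_embedded: bool
    spot_colours: list = field(default_factory=list)
    min_line_width_pt: float = 0.25
    max_total_ink: int = 300          # highest total area coverage in the file, %

def preflight(doc: DocInfo, ink_limit: int = 340, spot_limit: int = 2) -> list:
    """Return a list of warnings; an empty list means the file passed."""
    warnings = []
    if not doc.fonts_embedded:
        warnings.append("fonts are not embedded")
    if doc.min_line_width_pt < 0.1:
        warnings.append("thin lines below a printable width")
    if doc.max_total_ink > ink_limit:
        warnings.append(f"total ink {doc.max_total_ink}% exceeds the {ink_limit}% limit")
    if len(doc.spot_colours) > spot_limit:
        warnings.append("more spot colours than the press has units for")
    return warnings

doc = DocInfo(fonts_embedded=False, spot_colours=["PANTONE 123"],
              min_line_width_pt=0.05, max_total_ink=365)
for w in preflight(doc):
    print("WARNING:", w)
```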
5.08: Summary
This chapter has looked at computer graphic creation through the lens of a manufacturer that must reproduce the electronic image on a substrate. The image must be processed through a RIP that drives a laser, or other imaging technology, to transfer pigments to that substrate. There are unique variables that must be considered in preparing the computer graphic for the reproduction process. We have explored routines for processing vector data such as fonts through a RIP, spot colour handling, trapping, and imposition. The next chapter will look at each of the imaging technologies in more depth.
Questions to consider after completing this chapter:
1. Describe six pre-imaging file analysis processes that should be considered when developing a computer graphic for reproduction manufacture.
2. Describe four major imaging technologies that utilize computer graphics to image on different substrates.
3. Describe the difference between raster data and vector data when creating a computer graphic file.
4. Compare the raster resolution of the data for a typical lithographic plate-setter to the resolution of a typical inkjet device.
5. How many addressable values can be recorded in an eight-bit byte of computer data?
6. What does the acronym WYSIWYG stand for?
7. How many kerning pairs are present in a ‘good’ font file?
8. What colour matching library has been developed exclusively for process colour printing inks (CMYK)?
9. What two printing processes must have trapping applied to computer graphics files before making printing plates?
10. What can a page layout artist do to a graphics file if the transparent elements on the page are dropping out or not processing in the RIP?
Suggested Readings
Adobe Systems (Ed.). (1990). Adobe type 1 font format. Reading, MA: Addison-Wesley Pub. Co.
Adobe Systems Incorporated. (2015). Adobe PDF Library SDK. Retrieved from http://www.adobe.com/devnet/pdf/library.html
Adobe Systems Incorporated. (2015). Font formats. Retrieved from http://www.adobe.com/products/type/a...t-formats.html
Adobe Systems Incorporated. (2015). OpenType fonts information. Retrieved from www.adobe.com/content/dotcom/...formation.html
Adobe Systems Incorporated. (2015). Transparency flattening. Retrieved from https://helpx.adobe.com/acrobat/usin...robat-pro.html
Adobe Systems Incorporated. (2015). Using and creating swatches. Retrieved from https://helpx.adobe.com/illustrator/...-swatches.html
LucidDream Software. (2008). Trapping tips. Retrieved from http://www.trapping.org/tips.html
Markzware. (2015). Markzware TV – YouTube Channel. Retrieved from https://www.youtube.com/user/markzwareTV
Montax Imposer. (n.d.). Imposition types. Retrieved from http://www.montax-imposer.com/descri...position-types
Prepressure.com. (n.d.). Transparency in PDF files. Retrieved from http://www.prepressure.com/pdf/basics/transparency
TotalFlow MR. (2012). Imposition for creating a bound book. Retrieved from http://support.ricoh.com/bb_v1oi/pub...s/int/0085.htm
Learning Objectives
• Describe digital printing methods and their differences
• List the various inks used in inkjet printing and their characteristics
• Identify the key components of electrophotography
• Explain the seven steps of the electrophotographic process
• Describe the differences between toner types and how they affect imaging
• Evaluate the suitability of a paper for a project based on its characteristics
• Convert between paper basis weights and grammage
• Describe the key differences between page description languages
• Acknowledge the historical significance of PostScript in desktop publishing
• Explain the differences between PDF/X versions
• Describe the function of a RIP in a DFE
• Explain why calibration is critical in electrophotography
• Describe the key component in variable data printing
• Identify key benefits of open standard VDP formats
Digital printing can be defined as the reproduction of an image or document onto a substrate, directly from an electronic file, without the use of a fixed image plate. Traditional printing transfers an image permanently onto a fixed image plate, whereas digital printing transfers the image temporarily onto a photoconductive cylinder, called a drum, or directly onto the substrate itself. Printing in this manner provides some unique capabilities that set it apart from traditional print methods. There is virtually no set-up or make-ready, finishing tasks can be accomplished inline, and each sheet can have unique content, which makes this printing method ideal for publication printing, short print runs, or highly dynamic content.
The two most common digital printing methods in use today are electrophotographic (toner based) and inkjet (ink based). Both technologies are used in a wide range of printing devices from small desktop printers to large high-volume, high-speed digital presses. The term digital press is often used to describe commercial digital printers. In the past, speed was the determining factor of this designation. Today, we have specific criteria published by Idealliance, a not-for-profit member organization that develops standards and best practices for the digital media supply chain (Idealliance, n.d.). Apart from speed, colour accuracy to meet a specification and consistency over long print runs are key parts of Idealliance’s certification process.
For the purposes of this text, we will focus on digital printers and presses used in commercial printing rather than on consumer or office printers.
6.02: Inkjet
Inkjet printing is a type of digital imaging where drops of ink are jetted onto the substrate in a very precise pattern from a nozzle. This nozzle, also called the print head, is required to be very precise and accurate, which is a challenge when you consider that the goal is to get many thousands of tiny drops of ink to land exactly where needed on the printed surface. Over time, inkjet technology has become more advanced, allowing greater resolution, more accurate colour, and overall, finer visual fidelity to the original. The most common method of inkjet printing for commercial purposes is called drop-on-demand (DOD). This type of inkjet print head only fires each individual droplet of ink when needed (on demand) and comes in two types, thermal or piezoelectric (see Figure 6.1). Accuracy in DOD inkjet printing is achieved by keeping the print head close to the surface being printed (substrate) as the velocity of the jetted ink is low.
Figure 6.1 Piezoelectric head on the left, thermal on the right
Thermal Inkjet
In a thermal print head, each nozzle contains a special reservoir that is bounded by a heating element. When current is passed through the heating element, it causes the ink to expand rapidly, ejecting out of the nozzle to land on the substrate in a given position. The print head is made up of a matrix of many of these chambers, and each print head is connected to a different colour of ink. As the ejected ink leaves the chamber, fresh ink is drawn into the reservoir by surface tension and the vacuum created by the previous drop of ink leaving.
Thermal inkjet is most common in household and consumer grade inkjet printers. A major benefit to using thermal printhead technology is the relatively inexpensive print head. Since each colour printed requires a separate print head, and some print devices can contain eight or more colours of ink, thermal technology keeps the initial cost of the device low and reduces replacement costs when a print head fails, or is damaged.
Piezoelectric Inkjet
Piezoelectric (piezo) print heads also use a tiny reservoir to hold a droplet of ink. However, unlike thermal printheads, piezo heads contain a small flexible membrane, or diaphragm, that moves up and down to squirt the ink out of the print nozzle. The pressure caused by the flexing of the piezo material is very precise, allowing a drop, or multiple drops, to strike the substrate accurately. Similar to thermal, the print head is made up of a matrix of a number of these individual nozzles. And by using multiple print heads, multiple colours are possible.
Piezoelectric is more common in commercial and large-format printing applications, although there are a few consumer grades of printers that use piezo as well. Piezo is more accurate, and because the ink in the chamber doesn’t have to be vaporized to form the droplets of ink, piezo can print with a wider variety of inks such as aqueous, ultraviolet, and latex.
Types of Ink
Inkjet printing has become more advanced not only in the mechanics of how the print heads work, but also in the variety and usage of different types of ink. Below are some common types of ink and a brief explanation of how they might be used.
Aqueous Ink
Aqueous ink, as discussed earlier, is a water-based ink. This type of ink is used in consumer printers using thermal technology, but can also be used in commercial piezo printers as well. Aqueous is well suited to thermal technology because the formulation of the ink allows it to vaporize in the print head for expulsion onto the paper. The water component of the ink, however, also contributes to its greatest drawback: the susceptibility of the finished printed piece to run or smear if it gets wet. Many users of desktop printers in their homes have been disappointed when they take their printed pages outside in the rain. Even a few drops of water can cause the ink to run and bleed into the paper.
In commercial uses, aqueous inkjet is well known for colour fidelity and quality, but the finished piece has to be protected from moisture. These types of print products would most likely only be used indoors, mounted behind glass, or with a laminated plastic layer on top. There are water-resistant coatings that can be sprayed onto a finished product, but even then, you would not want to leave it outside for an extended period of time. Aqueous ink is a common choice for art prints.
Ultraviolet Inkjet
Ultraviolet (UV) ink is a type of energy-cured ink that stays wet until it is bombarded with ultraviolet radiation. This UV radiation is commonly projected onto the freshly printed surface by means of a special high-intensity light bulb. Once the UV rays hit the ink, a special molecular process is triggered, causing the chains of molecules in the ink to bond and solidify instantly. UV ink does not dry from exposure to air, nor from heat. Once the UV ink has been cured, however, it is very solid and quite durable.
For commercial use, UV inks tend to be popular for outdoor uses such as banners and signage. Indoor signage is commonly printed using UV as well because of its durability and rub resistance. Since UV inks dry instantly, they can be removed from the printer and handled much sooner. UV inks sit mostly on top of the surface of the substrate, and because of their solid bond are more prone to cracking if bent or folded. UV is not a good choice of ink where flexibility of the substrate is required.
Latex Inkjet
Latex ink is a newer formulation that has exploded onto the inkjet printing scene in the last few years. Latex inks are water based and cure primarily through heat, but more importantly, they are not subject to moisture damage once cured. This is because the pigment is carried by the latex molecules, and once the latex has bonded to the substrate, the pigment stays intact. Latex printed products dry immediately and are ready to use as soon as they come off the print device.
Latex inks are used in many commercial applications, particularly where outdoor durability and flexibility are needed. One of the many common uses of latex inkjet printing is in imaging car wraps. A car wrap is a flexible adhesive material that is printed flat, then stretched or wrapped around the contours of a vehicle, usually for marketing or advertising purposes. Figure I.1 in the introduction of this textbook shows an example of a car wrap. Because of this flexibility, latex printed signage can also be adhered to rougher surfaces such as wood or brick. Much of latex’s popularity has come at the expense of a previously popular inkjet formulation that is solvent based.
Solvent Inkjet
Because of the rise in popularity of latex ink over the last few years, there has been a great decline in the use of solvent inkjet inks. Formerly, this was the type of ink needed for flexible, durable printing. Solvent inks are formulated using solvent as a carrier for the pigment, and as the solvent dries, the pigment remains bonded to the substrate. A big concern for the use of solvent-based printing is the release of volatile organic compounds, or VOCs. These VOCs are released into the atmosphere during the printing and drying of solvent prints, and have to be vented outdoors so as not to pollute the workspace. Even newer eco-friendly inks still release VOCs, albeit at a lower level. Some areas have environmental laws that restrict the release of pollutants into the air (United States Environmental Protection Agency, 2000). Customers often complain about the smell of solvent prints, particularly when used indoors. Because of this, solvent inkjet is primarily chosen for outdoor uses such as large-format signage, banners, and car wraps. Solvent can be very economical, and while the quality isn’t as sharp as UV or aqueous, it is excellent for very large projects that will be viewed from even a moderate distance. Pressure on the solvent ink market comes because most of these uses can now be achieved with latex inks as well, and the industry has seen a divergence between companies that still use solvent or eco-solvent inks and those that are switching to latex.
Electrophotography (also known as xerography) is a complex process commonly used in copiers and faxes, as well as in digital printers. It is an imaging technology that takes a digital file and utilizes a photoreceptor, light source, electrostatic principles, and toner to produce the printed output. Before this process was used for digital printing, it was extensively used in analog copiers where a lamp illuminated the page being copied, and then a series of mirrors reflected the page directly onto the surface of a drum. Digital copiers replaced the direct light path with a sensor that converts the analog image into digital information, then a laser or an LED array writes the image onto the drum. Many digital printers today are based on the same platform as digital copiers. The technology has seen many improvements over the years, but the electrophotographic process at its core remains relatively unchanged.
Photoreceptor
The photoreceptor is commonly referred to as a drum. It is a cylinder coated with a material that becomes conductive when exposed to light. Areas that are not exposed have a high resistance, which allows them to hold the electrostatic charge necessary for the process.
Light Source
Light sources used in digital printing include LED arrays or, more commonly, lasers. VCSEL (vertical cavity surface emitting laser) is an advanced type of laser used in the most current digital presses in the market. A VCSEL array can position its beam with high accuracy (addressability) for optimal clarity, resolution, and image positioning. This makes it ideally suited for a digital press.
Electrostatic Principles
Figure 6.2 Like charges repel each other while opposite charges are attracted
To understand electrophotography, we must first understand some basic electrostatic principles. When certain materials come in contact then separate from each other, these materials can become electrically charged. Rubbing these materials together can increase this effect. This is called the triboelectric effect. Static electricity buildup on your clothes in a dryer or from rubbing a balloon on your hair are examples of the triboelectric effect. Charges can have either a positive or negative polarity. Like charges repel each other while opposite charges are attracted, in much the same way as the polarities in magnets (see Figure 6.2).
These properties are at the core of the technology and are utilized in almost every stage of the digital imaging process.
Toner Basics
Toner is a very fine, dry powder medium used in the electrophotographic or xerographic process. It is composed primarily of a resin and includes pigment, wax, and process-enhancing additives. The term xerography, in fact, is derived from the Greek words xeros, ‘dry’ and graphia, ‘writing,’ reflecting how toner rather than ink is used in the imaging process. Toner particles become electrically charged when stirred or agitated through a triboelectric effect. The composition of the toner contributes not only to its imaging characteristics but also to its ability to maintain and control its charge properties. The shape of the toner is also a factor in its charging capability. This electrical charge is what allows the toner to be precisely manipulated throughout the process.
There are two basic types of toner production, pulverized and chemical (Figure 6.3). Pulverized toner was commonly used in earlier digital printers and is manufactured by successive compound mixing and grinding steps until the desired consistency and size are achieved. The resulting toner particles are irregular in size and shape and typically average around 6.2 to 10.2 microns in size. Pulverized toner produces good results, up to 600 dpi resolution; however, a consistent size and shape along with a smaller particle size is required to produce better clarity and detail at higher resolutions.
Figure 6.3 Two basic types of toner production
Chemical toners were introduced later to overcome those limitations and are in common use today. Each manufacturer has its own process for creating this type of toner and unique names as well. Xerox’s EA toner, Ricoh’s PxP toner, and Konica Minolta’s Simitri toner are all examples of chemical toners. As the name suggests, chemical toners are created through a process of building or ‘growing’ the particle chemically. This process allows for the precise control of the shape and size of the toner particle (under 5 microns in some cases), resulting in higher definition and resolution capabilities. Resolutions of 1,200 dpi and 2,400 dpi are possible largely due to the use of this type of toner. Other benefits include much lower energy consumption, both in the manufacturing process and printing process, as well as narrower particle size and charge distributions.
Here is a YouTube video of how chemical toner is made: https://youtu.be/852TWDP61T4
Dry toner comes in two forms: mono component and dual component. Both rely on magnetic iron or iron oxide particles to ‘hold’ the charged toner on a magnetic roller. Mono component toners incorporate the magnetic material in the composition of the toner particle itself, whereas dual component toners have the magnetic material mixed together with the toner as separate components. This mixture is called developer.
ElectroInk
ElectroInk is a unique form of toner used in HP Indigo digital presses. The toner comes in the form of a paste and is mixed internally in the press with imaging oil, a lightweight petroleum distillate. This type of toner is considered a liquid toner as the particles are suspended in the liquid imaging oil, but still uses an electrophotographic process for imaging. One of the important advantages of this type of toner is its particle size. ElectroInk toner particles are 1 to 2 microns, significantly smaller than the smallest dry toner particle. At this size, a dry toner would become airborne and would be very difficult to control. The toner and oil suspension achieves higher resolutions, uniform gloss, sharp image edges, and very thin image layers. A thin image layer allows the toner to conform to the surface of the substrate, producing a consistent look between imaged and non-imaged areas. A drawback of this toner, however, is that substrates may need to be pre-treated in order for the toner to adhere properly. There are substrates available for use specifically on HP Indigo digital presses, but typically these are more expensive or may not be compatible with other printing methods. Some Indigo presses are equipped with a pre-treating station that expands substrate compatibility extensively and even surpasses that of other forms of digital printing.
Nanography
Nanography is a very new and exciting print technology currently in development by the creator of the Indigo digital press, Benny Landa. It borrows some of the same concepts used in the Indigo but takes a different approach to their implementation. The technology centres around NanoInk, a breakthrough ink with pigment sizes in the tens of nanometres. In comparison, pigments found in good-quality offset inks are in the 500 nanometre range. Colorants intensify and ink density increases at this microscopic level, thereby expanding the ink’s colour gamut considerably. The ink uses water as a carrier instead of imaging oil, making it more cost effective and eco-friendly. Billions of ink droplets are jetted onto a heated blanket, not directly onto the substrate as in inkjet printing. The ink spreads uniformly on the blanket and the water quickly evaporates, leaving only an ultra-thin (approximately 500 nanometres), dry polymeric film. This film transfers completely onto the substrate on contact and produces a tough, abrasion-resistant image. This print technology can be used with almost any substrate without pre-treatment and, due to its minuscule film thickness, does not interfere with the finish. Whether high gloss or matte, the ink finish matches that of the substrate. Although the technology is poised to revolutionize the print industry, the first press to use it is currently in beta testing. You can find the latest news and more information on nanography on this webpage: http://www.landanano.com/nanography
Media Attributions
• Electrostatic principles by Roberto Medeiros
• EA-HG_toner by Fuji Xerox Co., Ltd
The electrophotographic process consists of seven stages (see Figure 6.4). For the purpose of this text, we will be describing the process using a negatively charged dry toner. The process is the same for a positive toner except the polarity would be reversed in each stage.
Figure 6.4 Electrophotographic Imaging System
Charging
In the first stage, a high negative voltage of approximately -900 volts is provided to a charge roller (see Figure 6.5). The voltage used varies by manufacturer and model. The charge roller applies a uniform layer of negative charge to the surface of the drum. The resistivity of the unexposed photosensitive drum coating allows the charge to remain on the surface.
Figure 6.5 Charge roller
Exposure
A laser is used to write the image onto the charged surface (see Figure 6.6). Because the photosensitive coating on the drum becomes conductive when exposed to light, the charges on the surface of the drum exposed to the laser conduct to the base layer, which is connected to a ground. The result is a near zero volt image and a negative background. This is known as the latent image.
Figure 6.6 Exposure
Development
Many digital printers and presses use a dual component development system (see Figure 6.7). The developer is a mixture of non-magnetic toner and a magnetic carrier. As the developer is stirred and the particles rub up against each other, a triboelectric charge is generated between them. The toner becomes negatively charged while the carrier becomes positive. The opposite charges cause the toner to be attracted to the carrier. A magnetic development roller holds the mostly iron carrier in alignment with magnetic lines of force, forming a magnetic brush. This magnetic brush in turn ‘carries’ the attracted toner to the surface of the drum. A high negative bias is applied to the development roller, repelling the toner onto the drum. The toner is attracted to the areas of the drum exposed by the laser, which, being close to zero volts, are much more positive than the negatively charged toner. In this way, the latent image is developed. As the carrier remains on the development roller, it continues to attract toner from the hopper to maintain the optimal concentration on the magnetic brush.
Figure 6.7 Development
Transfer
A sheet of paper or substrate passes between the drum and a transfer charge roller that has a high positive voltage applied to it (see Figure 6.8). The negatively charged toner of the developed latent image on the drum is attracted to the more positive transfer roller and adheres to the sheet in-between. The charge applied to the back of the sheet causes the paper to cling to the drum. A high negative voltage is applied to a discharge plate immediately after the transfer charge roller to aid in the separation of the sheet from the drum. The curvature of the drum along with the weight and rigidity of the sheet also aid in the separation.
Figure 6.8 Transfer
A more advanced method of transfer utilizes an intermediate transfer belt system. This is most common on colour digital presses where four or more colours are transferred onto the belt before transferring the complete image onto the sheet. Charge rollers beneath the belt, under each drum, pull off the developed latent images of each separation directly onto the belt. In the transfer stage, a transfer charge roller beneath the belt applies a negative charge to push the toner onto the sheet. A second roller, directly beneath the first on the other side of the belt, applies pressure keeping the paper in contact with the belt and aiding in transfer for more textured stocks. The lower roller may have a small positive charge applied to it or may be grounded. Some systems can also alternate the charge applied to the transfer charge roller to further aid toner application onto textured substrates.
After this stage, the sheet moves on to fusing where the toner permanently adheres to the substrate. The next two stages described below are post-imaging steps that are necessary to prepare the drum surface for the next print cycle.
Cleaning
After the transfer stage, some toner may be left behind on the surface of the drum. If left there, the background of each successive print would slowly become darker and dirtier. To prevent this, a cleaning blade removes any residual toner from the drum’s surface (see Figure 6.9). Some systems will recycle this toner back to the developing unit, but mostly the waste toner is collected in a container for disposal.
Figure 6.9 Cleaning
Erasing
In this stage, an LED array exposes the length of the drum, bringing this area of the drum to near zero volts. This prepares the drum surface for the charging stage of the next print cycle.
Fusing
This is the final stage in the electrophotographic process. The fusing mechanism, or fuser, consists of a heat roller, a pressure roller, and a cleaning mechanism (see Figure 6.10). Toner is composed mostly of resin. When the toner is heated by the heat roller and pressure is applied by the complementary pressure roller, it melts and is pressed into the fibres of the sheet. The toner is never absorbed by the paper or substrate but rather is bonded to the surface. A negative charge is applied to the heat roller or belt to prevent the toner from being attracted to it, and the cleaning section removes any toner or other contaminants that may have remained on the heat roller. Heat may also be applied to the pressure roller (at a much lower temperature) to prevent the sheet from curling.
Figure 6.10 Fusing
Along with the transfer stage, fusing can be greatly affected by the paper or substrate used. The thicker and heavier the sheet, the more heat it absorbs. Because of this, these sheets require higher temperatures so there is sufficient heat remaining to melt the toner. Insufficient heat can cause the toner to scratch off easily or not bond at all. Too much heat can cause moisture in the substrate to evaporate quickly and get trapped beneath the toner causing tiny bubbles that prevent the toner from sticking wherever they occur. This issue is seen more on thinner stocks that do not absorb as much heat. Too much heat can also cause toner residue to stick to the heater roller and deposit it on subsequent sheets.
The heat roller can heat up quite quickly but may take much longer to cool down. This can cause delays in producing work that switches between different paper weights. To combat this, some devices use a thin belt that can be both heated and cooled quickly in place of the heater roller. In some cases, a cooling mechanism is also employed further mitigating the cooling lag.
Media Attributions
• EP Imaging System – Complete by Roberto Medeiros
• EP Imaging System – Charging by Roberto Medeiros
• EP Imaging System – Development by Roberto Medeiros
• EP Imaging System – Transfer by Roberto Medeiros
• EP Imaging System – Cleaning by Roberto Medeiros
• EP Imaging System – Fusing by Roberto Medeiros
When talking about substrates used in printing, paper is usually what comes to mind. Paper is made most commonly from wood fibre. Today, many papers also have some percentage of recycled fibre as well as fillers and other additives. These all contribute to the quality of the paper itself and to the quality of the printed output. It’s important to understand some basic attributes of paper as they all have a direct impact on imaging processes and results.
Formation
Formation refers to the distribution of fibres, fillers, and additives in paper and how evenly they come together. When you hold a sheet up to a strong light source and look through it, the mix of dark and light areas is the result of formation. The more uniform the formation, the less mottling is observed in the paper. Papers with uniform formation accept inks and toners more evenly, have reduced print mottling, and enhance clarity.
Opacity
In strict terms, opacity is the degree to which light is prevented from travelling through a paper. In practical terms, it’s how well a paper prevents the image on the backside of a sheet showing through to the front. This is measured on a scale from 1 to 100, where 100 is completely opaque. Opacity can be increased with fillers, pigments, or even coatings. In general, a thicker paper, coloured paper, or coated paper is more opaque than its counterparts. Opacity values are very important when projects require thinner paper stocks and both sides of the sheet are being printed.
Basis Weight and Grammage
When looking at the label on a ream of paper used in North America, you usually see two weight designations: the basis weight, designated in pounds (#), and the equivalent grammage, in grams per square metre (g/m² or gsm). In most of the world, grammage is primarily used. In North America, the basis weight is more common. Grammage is simply how many grams one square metre of the paper weighs. No other factors are represented by this designation. So we can deduce that the higher the grammage, the thicker or denser the sheet. Basis weight is the weight of 500 sheets of paper at a specific size, known as the ‘parent’ sheet size, which varies based on the historical use of the specific paper. To understand this better, let’s examine two different basis weights.
Cover basis weight is based on a 20″ x 26″ parent sheet. So 500 sheets of 80# cover (the # symbol is used to indicate pounds) at the parent sheet size weighs 80 pounds. Likewise, 500 sheets of 80# text at the text-weight parent sheet size of 25″ x 38″ also weighs 80 pounds. This can be very confusing as a cut sheet of letter (8.5″ x 11″), 80# text, is much thinner than the same size of 80# cover. Table 6.1 shows common basis weights, parent sheet sizes, and typical uses.
Table 6.1 Paper weights, sizes, and uses
Basis Weight | Parent Sheet Size | Typical Use
Bond | 17″ x 22″ | Historically used as writing paper and typically uncoated. Standard office paper is 20# bond, while colour prints are more commonly done on 24# or 28# bond due to the need for higher opacity.
Cover | 20″ x 26″ | Used for paperback book covers, business cards, post cards. Business cards have typically been 100# cover, but have been trending toward higher weights of 110# and 120#.
Text | 25″ x 38″ | Used for magazines and posters. Relatively thin sheets with higher opacity. Magazines typically use a coated text weight paper for both the cover and the body. Typical weights are 70# to 100#.
Index | 25.5″ x 30.5″ | Used for index cards and tab stock. Tab stocks are typically uncoated 90# index.
Although basis weight is used as the primary weight on a paper label and description, a digital press will typically use grammage to define the weight property when assigning a paper to a tray. Paper weight is one of the key characteristics that affect many parameters on the digital press, including how much vacuum strength is used for feeding, how much charge is required to transfer toner to paper, and how much heat is required to maintain a consistent fusing temperature to bond toner to the paper, among others. Entering the wrong values for the paper weight can cause paper misfeeds, poor image quality, or toner not adhering to the paper. Using grammage simplifies data entry and avoids errors due to incorrect basis weight selection for the numeric weight value. It may, however, require one to do a conversion calculation if only basis weight is provided. The following conversion factors can be used to do these calculations.
Conversion Factors:
Bond (lbs.) x 3.7606 = gsm
Cover (lbs.) x 2.7048 = gsm
Text (lbs.) x 1.4805 = gsm
Index (lbs.) x 1.8753 = gsm
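These factors are easy to apply in a small script. Below is a minimal sketch in Python; the function name, rounding, and sample values are illustrative only:

```python
GSM_PER_POUND = {
    "bond": 3.7606,
    "cover": 2.7048,
    "text": 1.4805,
    "index": 1.8753,
}

def to_gsm(pounds, basis):
    """Convert a North American basis weight to grammage (gsm)."""
    return round(pounds * GSM_PER_POUND[basis.lower()], 1)

print(to_gsm(80, "cover"))  # 216.4 gsm
print(to_gsm(80, "text"))   # 118.4 gsm: same pounds, a much lighter sheet
```

Note how the same 80# basis weight yields very different grammages depending on the parent sheet, which is exactly the ambiguity that entering grammage directly at the press avoids.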
Grain Direction
In the paper manufacturing process, a slurry of fibre travels over a high-speed mesh conveyor belt that is oscillating side to side. This action and movement causes the fibres to interlace and develop a predominant alignment along the direction of movement. This predominant alignment of the fibres is called grain direction. Short grain refers to fibres running parallel to the short dimension of the sheet, and, conversely, long grain refers to fibres running parallel to the long dimension of the sheet.
It is important to keep grain direction in mind when choosing a paper for a project. You need to consider the print process and binding or finishing method you will use, as choosing the wrong grain direction can produce poor results or may be incompatible with the printing method you have chosen. Sheet fed offset lithography papers are often long grain and are most common. Digital presses require the grain to run perpendicular to the feed direction in order to feed properly and make the sharp turns typically found in a digital press. In this case, most sheets are fed into the press with the short edge first therefore requiring short grain paper. When folding is required, folds that run parallel to the grain will be smooth and sharp while folds that run across the grain will be ragged, and the fibres on the top of the sheet may pull apart. Toner used in digital printing bonds to the surface of the paper and does not penetrate. Folding across the grain will cause the toner to break apart where the fibres separate.
The second or underlined dimension of the sheet indicates the direction of the grain. For example, 18″ x 12″ is a short grain sheet, and 12″ x 18″ is long grain. If the underline method is used, the underlined dimension is the one the grain runs parallel to: 12″ x 18″ with the 12 underlined is short grain, and 12″ x 18″ with the 18 underlined is long grain. If the dimensions are not noted or the sheet is not in its original packaging, grain direction can be determined by folding the sheet along both dimensions. As noted previously, a fold that runs parallel to the grain will be smooth and sharp while a fold that runs across the grain will be ragged. You can also gently bend the paper in either direction. The bend running in the direction offering the least resistance is the grain direction.
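The ‘second dimension’ convention lends itself to a simple check. Here is a tiny, hypothetical helper in Python (the function name and parsing are my own):

```python
def grain_direction(size):
    """Apply the 'second dimension' convention: the grain runs
    parallel to the dimension listed second."""
    first, second = (float(d) for d in size.split("x"))
    return "short grain" if second < first else "long grain"

print(grain_direction("18 x 12"))  # short grain
print(grain_direction("12 x 18"))  # long grain
```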
Caliper
Caliper, unlike grammage and basis weight, is a measure of thickness. The most common measurement used in North America is thousandths of an inch, designated as points (common for paper) or mils (common for synthetic paper). This terminology can be confusing, however, as points can also refer to 1/72 of an inch when referring to font size, line thickness, and dimensions on a page. Mils can be confused with millimetres as well. A common misconception is that points and mils can be converted to grammage or basis weight. This is not true. The caliper can vary depending on the coatings or finish. In general, a rougher finished stock will have a higher caliper than the same weight of a smooth stock. Coatings can be heavier than paper fibre so coated paper can have a smaller caliper than the same weight of an uncoated counterpart. A process called calendaring, which irons the paper between two highly polished chrome rollers, improves smoothness and printability but also reduces the caliper without changing the weight of the paper.
Brightness and Whiteness
Brightness and whiteness define the optical properties of paper and differ mainly in how they are measured. Whiteness measures the reflective properties of the paper across the entire visible spectrum of light (defined by CIE). In other words, it defines how white the paper is. A perfect reflecting, non-fluorescent white material measures 100 whiteness. Brightness also measures the reflective properties of paper, on a scale of 1 to 100, but specifically in the blue area of the spectrum at a principal wavelength of 457 nanometres and 44 nanometres wide (defined by TAPPI and ISO standards). This wavelength coincides with lignin absorption. Lignin is what binds the cellulose fibres in wood and pulp and gives it its initial dark brown colour. The more bleaching done to the pulp, the more lignin is removed, and the higher the blue reflectance and therefore brightness. In most parts of the world, paper whiteness measurement is used; however, in North America, most papers use brightness measurement instead. Some papers have brightness values that exceed 100. This is due to the addition of fluorescent whitening agents (FWAs), which return additional blue light when exposed to UV light. The same is true for whiteness, as papers with higher blue reflectance levels tend to have higher whiteness levels.
Finish
Finish defines the look and feel of the paper’s surface and can be achieved during the paper-making process (on-machine) or after (off-machine). On-machine finishes are achieved by the application of a pattern onto the paper by a marking roller while it is still wet. Examples of on-machine finishes are smooth, vellum, laid and felt (see Table 6.2). Off-machine finishes are accomplished with rollers that press the pattern into the paper after it has been made. Off-machine finishes are also known as embossed finishes. Linen, stipple, and canvas are examples of these; Table 6.3 gives a description of each.
Table 6.2 On-machine finishes
On-machine Finish | Description | Typical Uses
Smooth | Paper is passed through various calendaring rollers, producing a finish that is uniform, flat, and smooth to the touch. | Ideal for general digital printing and copying as toner is applied to the surface and does not penetrate the fibres.
Vellum | A consistent eggshell appearance that is not quite as smooth as smooth finish but has a velvety feel. Not to be confused with the substrate called vellum, which is translucent. | Used most commonly for book paper.
Laid | Consists of a series of wide-spaced lines (chain lines) and more narrowly spaced lines (laid lines), which are at 90 degrees to the chain lines. | Used for letterhead, reports, presentations.
Felt | A felt-covered roller is used to produce this finish. The appearance resembles that of felt. | Used for letterhead, reports, presentations.
Table 6.3 Off-machine finishes
Off-machine Finish | Description | Typical Uses
Linen | A cross-hatch pattern resembling linen fabric. | Used for personal stationery, letterhead, fine-dining menus, business cards.
Stipple | A fine bump texture that resembles the painted surface of a wall. | Used where a subtle uneven texture is desired.
Canvas | Simulates the surface of canvas. | Used for art prints or where a ‘painted’ appearance is desired.
Coated papers have calcium carbonate or china clay applied to their surface. The coating fills in the spaces between the fibres on the paper’s surface, resulting in a smoother finish. The amount of coating and calendaring produces different finishes and gloss appearance. Examples of coated finishes are matte, dull, satin, silk, and gloss, described in Table 6.4.
Table 6.4 Coated finishes
Coated Finish | Description | Gloss Level
Matte | Roughest surface of coated paper. Very flat, no lustre, no glare, no calendaring applied. | None
Dull | Smoother surface than matte. No lustre, no glare, minimal calendaring. | Very low
Satin | Smooth and soft to the touch. Slight lustre, low glare, light calendaring. | Medium low
Silk | Smooth and silky to the touch. Low lustre, low glare, light calendaring. | Moderate
Gloss | Smooth and slick. Shiny, high calendaring. | High
Cast coated paper has a very high gloss finish on the front side and is uncoated and rough on the back. The high gloss finish is created by applying a heated chrome roller to the coated surface to quickly dry it while moisture is released through the uncoated back of the sheet. Calendaring is not used, allowing the back surface to be rough and ideally suited for labels. Cast coated paper holds ink well, but the toner used in digital printing may not adhere to it.
Many page description languages (PDL) exist today; however, Printer Command Language (PCL) and PostScript are the most common and widely adopted. Each has its strengths, weaknesses, and jobs for which it is best suited. It is important to understand these differences and choose the method best suited to your particular printing requirements.
PCL
PCL is a page description language developed by Hewlett-Packard (HP) and was originally used on HP impact and inkjet printers. PCL 3 was the first version to be used with a laser printer, the HP LaserJet, released in 1984, around the same time PostScript was introduced. The goal of PCL was to have an efficient printer control language that could be implemented consistently across HP’s printer line. Simple commands and functionality would not require expensive print controllers, making it very attractive for utility-level printing. Many other printer manufacturers implemented PCL for this reason. Commands are embedded at the beginning of the print job and set the parameters for the printer to use for the job. These commands remain set until a new value is assigned for the command or the printer is reset. If the printer does not support a specific command, it ignores it.
When colour laser printing became available, PCL 5c was developed with similar goals. New commands were added to the existing command set, as was the case with all the predecessors, to add support for colour printing. This ensured backwards compatibility while minimizing development. When it came to colour, HP’s goal was to have colour on the printed page look the same as what was displayed on screen. There were many challenges to achieving this, so print quality adjustments were included to give users the ability to fine-tune the output. With the emergence and widespread adoption of the sRGB standard to define and describe colour on a display, the PCL colour command set could be simplified by adopting this standard for colour printing. Thus, HP’s goal could be achieved without the complexity and overhead of a full colour management system. Operating systems and applications, for the most part, have standardized how they display colour in sRGB, so this approach is the simplest way to achieve acceptable colour between display and print. PCL is most appropriate for general office use where a simple, low-cost print device that produces good quality colour is expected. It is not suitable, however, for a colour critical or print production environment where precision and full colour management is required.
PostScript
PostScript is a page description and programming language developed by Adobe that describes text, graphics, and images, and their placement on a page, independent of the intended output destination. The code created is in plain text that can be written and examined with a basic text editor. The output itself can either be to a printer, display, or other device possessing a PostScript interpreter, making it a device independent language. The interpreter processes the PostScript instructions to create a raster image the device can render. The interpreter is often referred to as a RIP or raster image processor for this reason. It is possible to write valid PostScript code from scratch, but it is impractical as page composition applications can either generate the PostScript code directly or can utilize a print driver, which can convert the page to the PostScript language.
Since PostScript is a general-purpose programming language, it includes many elements that you wouldn’t associate specifically with printing, such as data types (numbers, arrays, and strings) and control primitives (conditionals, loops, and procedures). It also has an interesting feature called a dictionary, which stores information in a table consisting of a collection of key and value pairs. The values can be entered into the dictionary while the keys are used to reference the information needed. These features made possible documents that acted like an application, able to generate pages dynamically from data directly on the printer itself. These printer-based applications were stored temporarily in memory or permanently on the printer’s hard drive, and triggered by a command in the print stream. These capabilities made variable data printing possible using PostScript and are still being used today for that purpose.
The first printer to use PostScript was the Apple LaserWriter in 1985. The same day that Apple announced the LaserWriter, Aldus Corporation announced PageMaker, a page layout application developed to take advantage of the Apple Macintosh computer’s GUI (graphical user interface) and the PostScript PDL. This series of events is considered by many as the genesis of the desktop publishing revolution. In fact, the term desktop publishing is attributed to the founder of Aldus Corporation.
PDF
Portable document format (PDF) is one of the most popular file formats for displaying and printing documents. When this format was released by Adobe in 1993, it shared many of the same concepts and components of PostScript. But where PostScript was designed primarily to provide device independent consistency in print output, PDF was focused on maintaining the visual appearance of a document onscreen, independent of the operating system displaying it. Over the years, PDF has expanded to more specific-use specifications for engineering, archiving, health care, universal access, and printing.
PDF/X is a branch of PDF and an ISO standard that deals specifically with print. It was developed by the Committee for Graphic Arts Technologies Standards (CGATS). Table 6.5 shows the evolution of the standard.
Table 6.5 Evolution of PDF
Data source: Adobe Systems Inc, 2008, p. 4
Preset | Compatibility | Settings | Usage
PDF/X-1a: 2001 | Acrobat 4/PDF 1.3 | Convert RGB colour to CMYK (spot colours allowed); transparency flattened | PDF/X-1a ensures that the files are ready for print production: fonts are embedded, colours must be CMYK or spot, and layers and transparency are flattened. Note that there is no minimum resolution required for PDF/X.
PDF/X-1a: 2003 | Acrobat 5/PDF 1.4 | (same settings as PDF/X-1a: 2001) | (same usage as PDF/X-1a: 2001)
PDF/X-3: 2002 | Acrobat 4/PDF 1.3 | Leave RGB and CIELab colour unchanged (profiles allowed); transparency flattened | PDF/X-3 has all the benefits of PDF/X-1a plus it allows colour-managed workflows.
PDF/X-3: 2003 | Acrobat 5/PDF 1.4 | (same settings as PDF/X-3: 2002) | (same usage as PDF/X-3: 2002)
PDF/X-4: 2008 | Acrobat 7/PDF 1.6 | Leave RGB and CIELab colour unchanged (profiles allowed); live (unflattened) transparency; layers allowed | Has all the benefits of PDF/X-3 plus it allows live (unflattened) transparency and layers for versioning. Print workflows based on the Adobe PDF Print Engine will be able to process PDF/X-4 jobs natively, without flattening artwork or converting to PostScript.
PDF/X-4p: 2008 | Acrobat 7/PDF 1.6 | (same settings as PDF/X-4: 2008) | Use PDF/X-4p when a required ICC profile is unambiguously identified and supplied separately.
Submitting documents for print using one of these standards is highly recommended as it eliminates many of the causes of print issues and is a more reliable method for graphics file exchange.
Digital Front End
Digital front end (DFE) describes the combination of hardware and software that drives and manages a print device. Hardware is often custom built for this specific purpose and may have proprietary video interfaces that connect directly to the print engine. An operating system serves as the base for the software components of the DFE and is often Microsoft Windows based or a Linux or Unix variant. Although the Windows running on a DFE is much the same as its desktop counterpart, Linux- and Unix-based systems are often custom distributions that are compiled specifically for the DFE.
One of the key components of a DFE is the raster image processor (RIP). The RIP refers to the software component that interprets the PDL and performs the function of rendering or rasterizing the complete instructions into an image, or raster, that the print engine will reproduce. The term RIP is often used interchangeably with DFE. This may have been accurate in the past when a DFE really only performed the ripping function and little else. Modern DFEs, however, do much more. In fact, a DFE may contain multiple RIPs, and within those RIPs it can utilize the multi-threading and processing power of modern computer hardware and operating systems to process many pages or channels simultaneously. PostScript has been the de facto PDL in digital printing for many years, but with the development of the PDF/X print standards and the subsequent release of the Adobe PDF Print Engine (APPE), a native PDF RIP, many DFEs now include both PostScript and APPE as their available RIP engines.
ICC-based colour management workflow may be part of the RIP process or can be an independent component of the DFE. Different elements within a print file get processed through their respective channels in the colour management system. Common channels include CMYK, RGB, black, and named colours. The idea is to convert all colour elements into the colour gamut of the print engine’s colorants/paper combination. The conversion process can be complicated, but the basic concept is this: device dependent source colour spaces (CMYK, RGB, etc.) are converted to a device independent colour space, referred to as the profile connection space (PCS), and then from the PCS to the output gamut defined in the output ICC profile. In other words, the visual appearance of the source ‘recipe’ is defined first, which is why it must be converted to a device independent colour space; once the visual appearance is defined, the ‘recipe’ for the specific output can be calculated.
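This source-to-PCS-to-output flow can be demonstrated with the Pillow imaging library in Python, which wraps Little CMS. This is a minimal sketch, not any DFE’s actual pipeline; the profile and file names are placeholders, and the output profile would be the one built for your press and paper combination:

```python
from PIL import Image, ImageCms

# Placeholder profile and file names; substitute your own.
src_profile = ImageCms.createProfile("sRGB")
dst_profile = ImageCms.getOpenProfile("press_cmyk.icc")

# Little CMS converts through a device-independent connection
# space internally: source -> PCS -> output.
transform = ImageCms.buildTransform(
    src_profile, dst_profile, "RGB", "CMYK",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)

image = Image.open("artwork.png").convert("RGB")
cmyk_image = ImageCms.applyTransform(image, transform)
cmyk_image.save("artwork_press.tif")  # TIFF supports CMYK
```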
Systems that support named or spot colour rendering follow a similar process. The named colour is located in a look up table. The name must match perfectly, including punctuation, spaces, and case. Each named colour is defined in a device independent colour space, typically Lab. There is no calculation in this step. The last step is the same; Lab values are then converted via the output profile.
There is one more calculation applied before passing the information through to the printer. Electrophotography is affected by rapid changes in humidity and temperature. The electrophotographic process relies on components that do become less effective over time and use. These factors all affect the colour output. Calibration should be performed on a regular basis to compensate for these variables. Calibration is a process of creating a correction curve to maintain consistent and repeatable print output. This correction curve is applied right after the conversion to output profile, ensuring output is consistent with what was defined in the output profile itself.
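Conceptually, the correction curve is an inverse mapping from the measured output back to the requested values. A small illustration in Python with NumPy, using hypothetical sample measurements:

```python
import numpy as np

# Hypothetical calibration readings: the tones the press actually
# produced when each reference tone was requested.
requested = np.array([0, 64, 128, 192, 255])
measured = np.array([0, 72, 140, 200, 255])

def correct(targets):
    """Invert the press response: return the values to send to the
    engine so the measured output lands on the target tones."""
    return np.interp(targets, measured, requested)

print(correct([128]))  # ~116.7: ask for less because this press prints dark
```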
In order to maintain all these aspects of the DFE, an intuitive and user-friendly interface is critical. The user interface includes many components. Here is where you would configure the DFE, find the status of the print device and consumables, view and process jobs and job queues, and examine and export job histories and logs. Many WYSIWYG tools are accessed via the user interface, such as those for workflow, imposition, complex job composition, paper libraries, spot colour refinement, the launching of external tools, and even interfaces into other systems such as web2print. DFEs are becoming more powerful and perform more than just the traditional RIP functions. As the digital print industry continues to evolve, DFEs will be called on to perform more duties and functions. User interfaces will need to evolve as well to maintain usability and stay intuitive.
Variable data printing, or VDP, refers to a special form of digital printing where document content is determined by entries in a record or data set and can be highly personalized. Varied text, graphics, and images are typical content elements, but layout, element positioning, and even document choice are just some of the other variables. Because the content on the printed page is constantly changing, it would not be feasible to produce this type of print product with traditional offset lithography or with any other process that requires a fixed image plate. Electrophotographic and ink jet printing are ideally suited for this type of printing as each page is imaged individually.
VDP can take many forms. Transactional documents like invoices and statements are probably the oldest form of VDP, but these have evolved to include marketing or informational content. This is known as trans-promo or trans-promotional. A mail merge is a simple form of VDP where a static document has data elements added directly to it. Each record in the data set produces one document. Another VDP form is when you enter the record manually or upload a simple text-based data table, which then fills the content of a template. This method is typically found in web2print solutions and produces items such as business cards, where the layout, fonts, and required elements can be predetermined and the content based on the data entered. More advanced VDP solutions may include campaign management tools, workflow management, two-dimensional barcode generation, image-based font technology, and integration into external systems such as databases, email, web2print solutions, data cleansing, or postal optimization solutions.
One of the core purposes of VDP is to increase response rate and, ultimately, conversions to the desired outcome. In order to accomplish this, it is critical that the content presented is relevant and has value for the intended audience. Today, there are massive amounts of data available on customers and their behaviour. Analyzing and understanding customer data is essential to maintaining a high degree of relevancy and engagement with the customer.
VDP can be broken down into six key components: data, content, business rules, layout, software, and output method. Each component can vary in complexity and capability and may require advanced software solutions to implement. However, even the most basic tools can produce highly effective communications.
Data
Data used for VDP can be simply thought of as a table or data set. Each row in the table is considered a single record. The columns are the fields used to describe the contents of the record. Some examples of columns or fields would be first name, last name, address, city, and so on. The simplest and most common form of representing this table is by using a delimited plain text format like comma separated value (CSV) or tab delimited. The delimiter separates the columns from one another and a new line represents a new row or record in the table. Here is an example of CSV data:
`“FirstName”,”LastName”,”Gender”,”Age”,”FavQuotes”`
`“John”,”Smith”,”M”,”47”,”Do or do not, there is no try.”`
`“Mary”,”Jones”,”F”,”25”,”Grey is my favourite colour.”`
The first row contains the row headers, or what the fields represent, and is not considered a record. You’ll notice that each field is separated by a comma but is also enclosed within quotes. The quotes are text qualifiers and are commonly used to prevent issues when the delimiting character may also be in the contents of the field, as is the case with the first record above. Many VDP applications support more advanced relational databases queried with SQL, but a query must be performed to extract the data to be used, which ultimately results in the same row and column record structure. The data must be attached or assigned to the document in the page layout or VDP application.
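You can see the role of the text qualifiers by parsing the sample above with Python’s standard csv module (a small illustration, not part of any particular VDP product):

```python
import csv
import io

data = '''"FirstName","LastName","Gender","Age","FavQuotes"
"John","Smith","M","47","Do or do not, there is no try."
"Mary","Jones","F","25","Grey is my favourite colour."'''

# The parser honours the quotes, so the commas inside John's quote
# stay part of the field instead of starting a new column.
for record in csv.DictReader(io.StringIO(data)):
    print(record["FirstName"], record["FavQuotes"])
```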
Content
Content refers to elements displayed on each page. This would include text, graphics, and images, both static and dynamic. Dynamic content uses placeholders, typically named by the column headers of the data, to mark the position of the element and reference the data in the specific column of the current record. When the document is rendered, the placeholder is replaced by the record data element.
“Dear <<FirstName>>…” becomes “Dear John…” when the document is rendered for the first record and “Dear Mary…” for the second record, and so on. A complete document is rendered per record in the dataset.
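Rendering is essentially a search-and-replace of placeholders against the current record. A bare-bones sketch in Python, using the `<<Field>>` placeholder style from the example above:

```python
records = [
    {"FirstName": "John", "LastName": "Smith"},
    {"FirstName": "Mary", "LastName": "Jones"},
]

template = "Dear <<FirstName>> <<LastName>>,"

# One rendered document per record in the data set.
for record in records:
    page = template
    for field, value in record.items():
        page = page.replace("<<" + field + ">>", value)
    print(page)
```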
Business Rules
Business rules are one of the key elements that make VDP documents highly useful. They can be thought of as a series of criteria that are checked against the data to determine what gets displayed on the page. They can also be used to manipulate the data or filter out relevant content. In almost every case, some level of scripting is required. Advanced VDP solutions have built-in scripting capability, utilizing either a common scripting language such as VBScript or JavaScript, or a proprietary scripting language that is only applicable in that specific application. If the page layout tool you are using to create your VDP document does not have scripting capability, you can apply business rules to the data beforehand in a spreadsheet application like Microsoft Excel or even Google Sheets.
Figure 6.11 Logical test
One of the most common methods for implementing a business rule is using a conditional or IF statement comprising a logical test, an action for a ‘true’ result, and an action for a ‘false’ result (see Figure 6.11).
The logical_test is an expression that evaluates to either true or false. In this case, we want to change our graphics based on gender.
In plain English, you may say:
`“IF gender is male, THEN use plane_blue.tif, or ELSE use plane_red.tif”`
In scripting, it would look something like this:
`IF(Gender=”male”,”plane_blue.tif”,”plane_red.tif”)`
When doing this in a spreadsheet, you would enter the script in a cell in a new column. The result of the script is displayed in the cell, not the script itself. This new column could now be used to specify the content to be displayed in your layout application. In dedicated VDP applications, the script is attached to the object itself and is processed and displayed in real time.
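The same rule is just as easy to express in a general-purpose script. A hypothetical Python version, matched to the “M”/“F” codes used in the sample data earlier:

```python
def plane_graphic(record):
    # Equivalent of IF(Gender="male","plane_blue.tif","plane_red.tif")
    return "plane_blue.tif" if record["Gender"] == "M" else "plane_red.tif"

for record in ({"Gender": "M"}, {"Gender": "F"}):
    print(plane_graphic(record))  # plane_blue.tif, then plane_red.tif
```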
Figure 6.12 Plane blue
The “@Plane” column was added to dynamically change a graphic based on the contents of the cell in the “Gender” column (B2) (see Figure 6.12).
Business rules can also be applied to the VDP workflow. In this case, the workflow application or component can manipulate the data before applying it to a document, or it can select the document or destination to be used for the record and much, much more.
Layout
When working with variable data documents, there are special layout considerations you should be aware of. Because word lengths will change per record, there needs to be sufficient space to accommodate the largest and smallest record, and prevent oversetting, while maintaining the desired visual appearance. This challenge is compounded by the prolific use of proportional fonts: character widths differ with each letter, so word lengths will vary even when the number of characters is the same. This can also force a paragraph to reflow onto another page and change the number of pages in the document. Additional scripting may be required to handle reflow scenarios. Some applications use special copy-fitting algorithms to dynamically fit text into a defined area. The use of tables for layout purposes can also be helpful. Because we are dealing with dynamically generated documents, we may also want to vary the images. Using images with a consistent size and shape makes them easier to work with. Transactional documents, such as statements and invoices, use numbers extensively. Most fonts, including proportional ones, keep numbers mono-spaced. In other words, every number character occupies the same amount of space. This is important because, visually, we want numbers to be right justified and lining up vertically in columns with the decimal points aligned. There are, however, some fonts that do not follow this common practice. These fonts may be suitable for use in a paragraph but are not for displaying financial data.
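The core of a simple copy-fitting algorithm is a loop that measures the rendered width and steps the point size down until the text fits. Here is a minimal sketch using Pillow’s font metrics, not any VDP product’s actual algorithm; the font path is an assumption, so point it at any TrueType font on your system:

```python
from PIL import ImageFont

def fit_point_size(text, box_width_px, font_path, start=14, minimum=6):
    """Shrink the point size until the line fits the box width."""
    size = start
    while size > minimum:
        font = ImageFont.truetype(font_path, size)
        if font.getlength(text) <= box_width_px:
            break
        size -= 1
    return size

# Longer names drive the fitted size down; short ones keep the default.
print(fit_point_size("Dear Alexandria Featherstonehaugh,", 200, "DejaVuSans.ttf"))
print(fit_point_size("Dear Mary Jones,", 200, "DejaVuSans.ttf"))
```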
Software
Software that can generate a data-driven document is required for variable data printing. In the early days of VDP, there weren’t many choices for designers. It was common practice to hand code VDP in PostScript, since it was both a programming language and a PDL. Applications like PageMaker and Illustrator were PostScript design applications but lacked VDP capabilities. Applications like PlanetPress emerged as dedicated PostScript VDP applications. Today, designers have a wide variety of software available for creating VDP. There are three basic VDP software types: a built-in function within a page layout or word-processing software, a third-party plug-in, or a dedicated VDP application.
Microsoft Word, for example, has a mail merge function but does not have the ability to vary images, just text. Adobe InDesign has the data merge function, which is basically a mail merge but includes the ability to vary images as well. In both these examples, business rules would be applied to the data prior to using it in these applications.
There are a number of plug-ins available for InDesign that are very sophisticated. These leverage the extensive page layout capability of InDesign while adding scripting and other VDP-specific capabilities. XMPie and DesignMerge are examples of these types of plug-ins. FusionPro is another plug-in-based VDP product, and while it does have an InDesign plug-in, it only uses this to allocate variable text and image boxes in the layout. Business rules and specific content are applied in its complementary plug-in for Adobe Acrobat.
PlanetPress and PrintShop Mail are examples of dedicated applications that combine both page layout and VDP functions. Although they are very strong in VDP functionality, they sometimes lack the sophistication you’d find in InDesign when it comes to page layout. These particular applications have recently moved from PostScript-based VDP to a more modern HTML5 and CSS (cascading style sheets) base, making it easier to produce and distribute data-driven documents for multi-channel communications.
Output Method
The ‘P’ in VDP stands for printing and is the main output method we will discuss here. However, document personalization and data-driven communications have evolved to also include email, fax, web (PURL, personalized landing page), SMS text messaging, and responsive design for various mobile device screen sizes. With the emergence of quick response (QR) codes, even printed communications can tap into rich content and add additional value to the piece. In order to take advantage of these additional distribution and communications channels, a workflow component is often employed.
A key element for optimized print output of VDP documents is caching. This is where the printer’s RIP caches or stores repeatable elements in print-ready raster format. This means the RIP processes these repeating elements once, and then reuses the preprocessed elements whenever the document calls for them. This does require a RIP with enough power to process large amounts of data and resources, along with support for the caching scheme defined in the VDP file, but ultimately it allows the printer to print at its full rated speed without having to wait for raster data from the RIP.
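In essence this is memoization keyed on the element’s content. A toy illustration in Python; the hashing scheme and stand-in rasterizer are my own, not any particular RIP’s:

```python
import hashlib

_raster_cache = {}  # content hash -> rasterized element

def rasterize(resource):
    # Stand-in for the expensive RIP work on one element.
    return b"raster:" + resource

def get_raster(resource):
    key = hashlib.sha256(resource).hexdigest()
    if key not in _raster_cache:            # first appearance: RIP it once
        _raster_cache[key] = rasterize(resource)
    return _raster_cache[key]               # every repeat: reuse the raster

logo = b"...logo image bytes..."
get_raster(logo)  # processed
get_raster(logo)  # served from the cache at full engine speed
```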
There have been many proprietary VDP file formats that have striven to improve performance of VDP over the years, but the industry is moving rapidly toward more open standards. PODi, a not-for-profit consortium of leading companies in digital printing, is leading the way with two widely adopted open VDP standards. These standards are PPML (Personalized Print Markup Language) and PDF/VT (portable document format/variable transactional).
PPML
PPML, first introduced in 2000, is a device independent XML-based printing language. There are two types of PPML: thin and thick. Thin PPML is a single file, with the .ppml extension, containing all the instructions necessary for producing the VDP document. It does include caching instructions; however, all resources such as fonts or images are stored externally to the file. The path to these resources is defined in the RIP, and the resources are retrieved during the rendering of the document. Thin PPML is ideal for in-house VDP development where resources may be shared by multiple projects. These files are extremely small; however, because the resources travel separately, network speed and bandwidth may affect performance, and the approach is more difficult to implement when using an external print provider. Thick PPML is a .zip file containing all the required resources (fonts, images, instructions, etc.). This format makes the file highly portable and easy to implement on the print device, but it has a larger file size when compared to thin PPML. RIPs that support the PPML format can import the .zip file directly. Regardless of the type used, PPML benefits from exceptional performance, an open standard, open job ticketing support (JDF), and overall reduced file size. To generate PPML, an advanced VDP solution is required.
PDF/VT
PDF/VT is a relatively new international standard (ISO 16612-2) that has a lot of potential. It is built off the PDF/X-4 standard, benefiting from its features, such as support for transparency, ICC-based colour management, extensive metadata support, element caching, preflighting, and much more. In short, PDF/VT includes the mechanisms required to handle VDP jobs in the same manner as static PDF printing, allowing print providers to use a common workflow for all job types, including VDP. Many of the latest releases of advanced VDP solutions already support PDF/VT, as do many DFE manufacturers.
For more information on PPML and PDF/VT, please refer to the PODi website at: http://www.standards.podi.org
Media Attributions
• plane_blue by Roberto Medeiros
6.08: Summary
Digital printing encompasses a number of technologies, each of which has unique characteristics, strengths, and applications. Digital imaging may even be the only print method able to produce a certain type of work, as is the case with VDP. Paper is also a major factor in the success of a project. Your paper choice can convey a message or set a tone as much as the content printed on it. Having a good understanding of technology and paper fundamentals can go a long way when making choices for producing your print project.
Questions to consider after completing this chapter:
1. All xerography can also be called electrophotography, but not all electrophotography can be called xerography. What key element validates this statement?
2. What are the four key components in electrophotography?
3. How does toner acquire its charge?
4. What is the difference between paper brightness and whiteness?
5. Which PDLs support an ICC colour-managed workflow?
6. Which PDF/X standard leaves layers and transparency live?
7. Why are data content and business rules critical in VDP?
Suggested Readings
Johnson, H. (2004). Mastering digital printing (2nd ed.). Boston, MA: Cengage Learning PTR.
Nanography Lobby – Landa Nanography. (n.d.). Retrieved from http://www.landanano.com/nanography
Learning Objectives
• Describe web2print and how it benefits the supplier-customer relationship
• Differentiate between the different business models of web2print implementation
• Explore the economic impact of implementing web2print
• Research target markets that would benefit from online selling processes
• Define business to business (B2B) and business to consumer (B2C) sales models
• Examine variable templates and identify their components
• Discuss how print on demand works and how it affects the sales cycle
• Give examples of print workflows and management information system (MIS) integration
• Discuss how web2print business opportunities might affect return on investment
As modern modes of communication are constantly changing, the print industry has had to become very competitive. Successful print companies have evolved beyond producing only printed products; today, they also provide other services geared to an electronic marketplace. With the ever-growing number of people going online, customers have adapted to having instant information (Poon & Swatman, 1995), which printers can now provide. One such service is web2print. Customers benefit from the ease of ordering print anywhere around the world, with quicker turnaround times than traditional methods. Print companies benefit by building a larger customer base, streamlining workflows, and reducing production costs — all while staying competitive.
Print has traditionally been a service requiring many human touch-points, but with advances in business communication over the Internet, the dynamics of custom manufacturing are changing, particularly with respect to the ordering process (Shim, Pendyala, Sundaram, & Gao, 2000). Putting an effective online service in place involves strategically selecting suitable products and services. Online ordering is not ideal for every print product, but rather for specific products that are unnecessarily labour intensive. Products that were once very labour intensive, such as business cards, can now be ordered online. Customers enter their information directly into a template on screen while they sit in their own offices. A print-ready PDF file enters an automated folder in the print company (called a hot folder) and moves into the print workflow, allowing for a fully automated process called lights-out prepress, which saves on labour and allows for greater profits. Web2print provides a client company a value-added feature, while improving workflow process for the print company.
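To make the hot folder idea concrete, here is a minimal sketch in Python of a polling loop that watches a folder for newly arrived print-ready PDFs and hands each one to a workflow step. The folder path and the submit_to_workflow function are illustrative assumptions, not part of any particular vendor's system.

```python
import time
from pathlib import Path

HOT_FOLDER = Path("/print_server/hot_folder")  # assumed location
POLL_SECONDS = 5

def submit_to_workflow(pdf: Path) -> None:
    # Placeholder: a real system would hand the file to
    # imposition software or the RIP at this point.
    print(f"Submitting {pdf.name} to the print workflow")

def watch_hot_folder() -> None:
    seen = set()  # file names already submitted
    while True:
        for pdf in HOT_FOLDER.glob("*.pdf"):
            if pdf.name not in seen:
                seen.add(pdf.name)
                submit_to_workflow(pdf)
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch_hot_folder()
```

Production systems typically rely on the hot-folder features built into their prepress software rather than a hand-rolled watcher; the sketch only shows the shape of the automation.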
Technology Is the Key
Web2print completely automates the print ordering process by integrating online orders with print production. Using the Internet automates order entry and provides time savings to both the customer and the company.
Electronic commerce offers the possibility of breakthrough changes: changes that so radically alter customer expectations that they redefine the market or create entirely new markets (Gunasekaran, Marri, McGaughey, & Nebhwani, 2002, p. 195).
This technology allows businesses to interact with customers in a new way in addition to more traditional forms of ordering such as email, file transfer protocol (FTP), phone, fax, and face-to-face meetings. While web2print removes the need for more traditional ways of ordering, it does not replace them. Some customers may prefer these ways because they may not know, or be comfortable with, the online options that are increasingly available to them. It is advantageous to a company to inform and educate customers, encouraging them to evolve their buying habits. Traditionally, purchasing involved gaining knowledge about print companies through their sales reps, selecting the appropriate company, and trusting that the sales rep would ensure the product was delivered on time and for the best price. Today, many customers use search engines to obtain information about and decide on which print company to use. Using web2print ensures that the production time frame and the price match what a customer is looking for.
A printing company has to work with and implement many complex and interconnected systems in order to generate a printed product. From the inception of a new item to its delivery to the end-user, there are many opportunities for streamlining. Web2print creates efficiencies at the beginning of the print process, and those benefits trickle all the way through to imaging. The ultimate goal is for the data collected from the customer to proceed directly to the raster image processor (RIP) that drives the production process. A print company that takes advantage of this type of automation is positioning itself for smoother and more cost-effective print runs.
E-commerce is by definition the buying and selling of goods online. Print companies are turning to e-commerce to target the ever-growing number of people who want self-service and a more convenient way to purchase products in their busy lives. Today, many customers are online and the number of users purchasing goods through the Internet is growing substantially (Statistics Canada, 2014). The ability to offer an effective e-commerce service for ordering can put a print company ahead of its competitors. This approach challenges traditional thinking that a printer is only a bricks-and-mortar or face-to-face business. As more and more businesses move online to satisfy their customers’ procurement needs, so must print companies (Supply Management, 2014).
A print company should transition with its customers to stay ahead of the competition. E-commerce allows a customer to order any product 24/7 without having to leave the office or pick up the phone. Providing online ordering has other benefits besides increased sales. It can help a company gain more knowledge about its customers and track purchasing trends. As well, online ordering allows print companies to offer customers additional products and services that may not come up in discussions during traditional forms of print procurement.
Not all business can be conducted online, as the more complex a project is, the more a company will benefit from offering its services and expertise in traditional face-to-face communications. However, the ability to stay ahead of the competition by using Internet technologies helps solidify a company’s relationship with its customers. For example, even if customers are buying products offline, they are likely doing their research online first. Therefore, a company can use the valuable data generated during this research to greatly improve its ability to evaluate and manage its business. When customers place orders, companies can analyze the frequency of those orders, their customers’ buying trends, and which products are more successful than others. Data-gathering is the most important advantage online ordering has over traditional offline ordering. Using an e-commerce system allows a print company to learn more about what, when, and how customers are ordering. Online technologies enable a company to broaden the products and services it provides more than ever before.
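As a small illustration of the data-gathering advantage described above, the Python sketch below tallies hypothetical storefront orders by product and by month; the record fields are invented for the example.

```python
from collections import Counter

# Hypothetical order records captured by an e-commerce storefront
orders = [
    {"customer": "Acme", "product": "business cards", "month": "2024-01"},
    {"customer": "Acme", "product": "flyers", "month": "2024-01"},
    {"customer": "Baker", "product": "business cards", "month": "2024-02"},
]

by_product = Counter(order["product"] for order in orders)
by_month = Counter(order["month"] for order in orders)

print("Most ordered products:", by_product.most_common())
print("Orders per month:", dict(by_month))
```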
Templated Variable Data Printing
Templated variable data technology allows a customer to produce a product using a template-driven order-entry system. These templates are used most frequently through an e-commerce store. A variable template allows the user to control the information made available on a product and see the real-time end result. Variable data entry is an ideal solution for direct mailers, promotional flyers, event posters, and stationery. Variable data allows for the custom printing of one-of-a-kind pieces that are unique to the targeted market. The potential for variable data is endless and is only limited by the imagination behind a design.
Variable data is not limited to digital presses, as many lithography presses can incorporate designs with the simple swap of a plate. It is common for companies ordering large quantities of business cards to print blank shells on which they can later imprint a person’s information. A print company obtains this information by having the customer use a variable template, which is then uniquely created to ensure all branding standards are consistent. This allows any employee of the client company to order business cards and stay true to the brand. The person fills out his or her name, title, and phone and email contact information, then views the result on a soft-proof. Then the person simply submits the card through a shopping cart system, eliminating the need for multiple communications, and making production efficient by having fewer hands touch the project.
The Benefits to a Print Company
Web2print has multiple benefits for a print company. Software can automate tasks, eliminating the need for staff to touch a project, which makes the service profitable. Allowing the customer to create a print-ready PDF is an asset, not a loss of control, as it allows the customer to assume responsibility for any typos or errors. Web2print also makes the production process much faster and more efficient by automating time-consuming steps, and most importantly, helps build solid rapport and customer loyalty.
Once a PDF is ordered, a job ticket (called a docket) is automatically generated containing all of the order specifications, including pricing. The PDF then travels to a hot folder where it is accessed by the imposition software, which imposes it onto a press sheet. The press sheet either goes straight to the digital press to be printed or to the plate setter if it is being printed by conventional lithography. The whole process is extremely efficient and takes considerably less time to complete than having staff continually involved as the PDF travels through their departments.
The Benefits to the Customer
Web2print is all about customer satisfaction. That should be the top priority when a print company creates customized online storefronts and variable templates. Customers no longer have to call or physically meet a sales rep, as they now have the ability to order their printed materials at any time of the day and from anywhere. Today, many customers do not sit behind a desk for their entire workday, and establishing an online service allows a company to target those on-the-go customers who need to order print.
Companies can track orders in real time, which helps them predict future buying trends. This is especially beneficial to customers wanting to print blank items, otherwise known as shells. Using this information, a company can help a customer determine how many shells to print in the next run.
Business Models: Licensed or Subscribed Software
Web2print services come in two primary types: licensed software and subscribed software. Licensed software allows a print company to own the technology by paying an upfront fee; subscription-based software typically requires a company to pay a monthly or yearly fee to use the software.
Licensed software strategies typically represent a large cash outflow up front. The company can then use the software to create as many portals or implementations as it wishes. While the software is supported through a licence, it is expected that the company will maintain its own web presence, and therefore may require highly trained web developers or programmers. The outlay of expense is gradually recouped over time through increased user implementation.
The subscription model, also referred to as SaaS (software as a service), reduces a company’s up-front support and maintenance costs. SaaS also allows new print products to be added more quickly, because the need to support the same version across all internal computers is removed. All subscribers operate on the same version at the same time, and are upgraded regularly at the same time. Because the Internet is constantly evolving, the SaaS model is flexible and better aligned to move with it. This business model contributes to a company’s return on investment (ROI) as well. Since a company typically doesn’t pay a large sum of money upfront for the software, it can easily budget for payments based on monthly usage. The company only pays for the services it uses. This business model builds a partnership between the print company and its SaaS vendor, and a positive ROI is beneficial to both parties in the partnership because it keeps both vendor and subscriber focused on mutual success.
Evaluating Strategies and Setting Goals
Print companies must have clear strategies and goals to ensure continued success when implementing web2print. The first step is to evaluate the type of sales they make. There are two basic types of sales a print company makes: business to business (B2B) and business to consumer (B2C). It is very common for a printing company to serve a primarily B2B customer base; however, since B2C requires a vastly different storefront, this decision needs to be made early in the process of implementing web2print.
Once a print company determines the type of storefront its customers need, it should research the three basic types of service: print on demand (POD), variable data printing (VDP), and static warehoused items. By analyzing its target market, a print company can determine which of these services customers will use most. Once the print company chooses a software vendor that can provide the most suitable storefront, only then can it decide on the specific services to offer each of its customers.
Therefore, to be successful, a print company must:
• Know the target market
• Choose an appropriate vendor and storefront
• Make plans to add new customers to the system by setting goals
• Choose the types of products to offer to each customer based on need
Know the Target Market
Every print company has a different customer base, and thus serves a different market. A print company must analyze the customers it serves to determine exactly what its target market is. The biggest mistake print companies make when committing to the purchase of an online ordering system is not researching the technology in relation to their target market. Print companies should choose the system that best suits their needs and benefits their customers. There are hundreds of vendors and products with thousands of features, so print companies need a strategy to ensure they can maximize their return on investment (ROI) while providing the best possible services to their specific, targeted customer base.
Choosing a Digital Storefront and Variable Software
Since not all vendors of e-commerce systems are the same, print companies need to exercise due diligence in making their choice of vendor. They should analyze their own internal workflow to ensure they find a vendor that best meets their specific needs. As well, print companies should determine what their employees’ strengths are and ensure the appropriate staff are hired to accommodate online needs. Staff involved in the implementation and operation of an e-commerce ordering system need a basic knowledge of many web-based programming languages in order to give them a good grasp of the back-end coding necessary to build and maintain the online system. Every online ordering system uses its own method of coding to create its storefronts and templates, so having previous programming knowledge is a major asset.
Many companies such as IBM, HP, Creo, and EFI are building platforms to provide VDP service to print companies. The software these companies provide creates a web2print workflow. This includes the internal processes needed to print a job, as well as a client-facing website, from which customers can order. It is important to understand the benefits of every digital storefront as they all offer different options. Digital storefronts must provide a simple ordering process for the customer while being very robust and efficient for the print company. Selecting the order quantity and displaying pricing should be simple and not confusing for the end-user. Customizing VDP products or ordering POD or warehoused items should be simple and quick. The ability to split products for multiple-shipping destinations should also be considered.
Selecting a storefront that can be integrated into a management information system (MIS) to streamline orders from customization to invoicing is beneficial. The ability to have customers approve or reject orders placed by others is also beneficial, as it allows for an extra review to ensure order information is correct.
To ensure they make appropriate choices, print companies request copies of documentation from a software provider to see what they need to learn if they plan to be a self-service user of the software. They request references and ask how the software provider handles support, system outages, and upgrade development to get a sense of how other users perceive the company. Print companies attend demonstrations of the product and give specifics on what they want to hear beyond the generic sales pitch. Print companies also seek specific information about short- and long-term product upgrades, which gives them a chance to glimpse the software company’s strategic vision and how the product might develop in the future.
Other Considerations Before Purchasing
Print companies take other considerations into account before purchasing software.
Usability: If they have current B2B customers, print companies ask them to test the software before committing to a purchase. If these end-users have difficulty using the software, then it is not the right choice. If print companies have B2C customers, they ask someone without any print knowledge or experience to test the product. Testing against online competitors to see how the software compares is another way print companies assess the usability of a product. They also research customer feedback.
Partnership compatibility: The relationship between a print company and a software provider is a partnership, not just a sales interaction. Print companies are in frequent contact with their software provider to solve technical difficulties, arrange training, or add improved services. Therefore, determining if the software provider will make a compatible partner is important. Print companies don’t rely solely on what the sales rep tells them; they try to get a sense of the software provider’s team by calling the support desk and talking to customer service. This helps print companies determine how well they will be treated and whether the software provider’s staff are knowledgeable.
Features: Assessing software features is usually part of the decision-making process. Print companies generally want to know the following before purchasing:
• How easily will customers understand and use the software’s features in a self-service situation?
• Was the software built to support the print industry or first created for some other use and applied to the print industry? If the former, are the features transferable?
• Do the features allow set-up and administration of the site, creation of B2B storefronts, and product development? Do they enable the print company to add variable elements, create users, and take orders without relying on the software provider?
An important tip when choosing software technology is to not put too much emphasis on the number of features offered. Features tend to constantly change, and more does not necessarily mean better. While software product development tends to centre on adding more features, it is not necessarily adding more value. If a feature is added to a product, but is never used by customers, it is possible that the feature did nothing more than add complexity to the ordering process. Such a feature may result in discouraging customers from using the system, or placing future orders.
Starting with a New Customer
One way to introduce a new customer to web2print is to build a single item for them. This allows the customer to learn the ordering process and the print company to learn and incorporate the customer’s products into a production workflow. A workflow should be designed to be as automated as possible, from order entry to production to invoicing. New workflows should include sufficient time to allow a customer to test the variable templates to extremes, entering best- and worst-case scenarios to ensure the template can perform in all situations without errors. Only once the initial workflow has been proven to be efficient should more products be added to the storefront. This ensures that both the customer (external activity) and the print company (internal activity) are confident enough in the workflow to handle more orders.
Setting Goals and Site Testing
Printing companies should allow time to educate their customers in all steps of the process when launching an e-commerce system or when adding a new variable-template-driven item. The easiest way to meet customer expectations is to involve them in the development process, regularly inviting feedback and eliciting suggestions for improvement. Customer satisfaction is important, so a company must ensure that it takes client feedback seriously, incorporating customer input to improve the service process. As the site is being developed, both the programmer and the customer need to rigorously test new products and templates to ensure they are completely satisfied long before allowing product ordering. It is common for a programmer to envision how a template will behave, while the customer intends it to behave in a different way. Often a customer has expectations that the programmer may not have foreseen. Once the entire site, including products and templates, has been developed, it still isn’t ready. A testing phase or pilot period is necessary to find any other bugs or shortcomings that may be more easily discovered once real data is being used. Implementing a pilot period before an official launch of the full workflow also allows everyone to learn how the system will impact them, exposes potential workflow issues (which can arise in the many steps between ordering and invoicing), and allows the customer to provide final feedback.
Most important to keep in mind is that the system only works when customers use it. They will often find opportunities during the pilot period to suggest where the process can be improved, as unforeseen problems are discovered only after people start using a new system or variable template. Often these user-experience issues can prevent adoption of the system by the customer. As well, customers may fall back to the more familiar method of ordering print traditionally if they do not feel comfortable using the new system. Including the customer in the entire process gives the greatest chance of success, and is the best way to ensure the success of the site.
Choosing the Right Type of Products
Before setting out to create products, a print company should determine whether it is a variable template, a print-on-demand piece, or a warehoused item. Other key information needed is the name of the product and the communication intent (i.e., Is the piece promotional or educational? What audience is it intended to reach? How knowledgeable is this audience?). Print companies also need to know whether the product will be ordered regularly or be a one-time communication. It is important to choose the right products before the development phase begins. It is common for a product to be almost completely programmed before it is discovered that another similar product would have been more appropriate. Below are explanations of the three most common types of products, followed by a list of more specific options.
Variable Templates
Variable templates contain all the necessary information for a customer to customize and soft-proof a print order. This usually results in the creation of an automated, print-ready PDF, which is generated while the customer is still online.
A PDF of the design is created containing variable fields assigned for every element. Coding is then applied to each field to determine how the template will behave under given circumstances, such as during customization. For example, coding can force a name to be upper case or an email to be lower case. Coding can also be used to upload custom images or force phone numbers to use hyphens (e.g., 604-123-4567) instead of dots (e.g., 604.123.4567). Coding is critical for keeping a customer’s brand consistent, so regardless of who creates an order, all products will be formatted consistently and have the same look.
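A minimal sketch of this kind of field coding, written here in Python for illustration (real template engines each use their own scripting syntax), might look like the following; the specific rules mirror the examples above.

```python
import re

def normalize_name(name: str) -> str:
    # Example rule: force the name to upper case
    return name.strip().upper()

def normalize_email(email: str) -> str:
    # Example rule: force the email to lower case
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    # Example rule: keep digits only, then re-insert hyphens
    digits = re.sub(r"\D", "", phone)
    return f"{digits[0:3]}-{digits[3:6]}-{digits[6:10]}"

print(normalize_phone("604.123.4567"))            # 604-123-4567
print(normalize_email(" Jane.Doe@Example.COM "))  # jane.doe@example.com
```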
Deciding which VDP software or plug-in is more appropriate and how it interacts with the digital storefront is important. VDP software comes in the form of third-party applications such as XMPie or is accessed online through a self-hosted dashboard.
Print on Demand
POD products are the opposite of VDP products. POD allows the customer to order a static product to be printed and shipped. POD products do not require customization and are printed using a completed file uploaded by the customer or stored in the system by the programmer.
Warehousing
Storefronts can act as an inventory management system for any products that can be warehoused. These products can be ordered online using the same process as a POD item. Each product has a real-time inventory count associated with it, which updates after every order. Notifications can be sent about low product inventory levels, reminding a customer that a product needs to be replenished. Inventory counts benefit customers by showing their buying patterns, which helps them to effectively determine future quantities.
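The sketch below illustrates the real-time inventory idea in Python: each order decrements the count, and a notification fires when stock falls to a reorder threshold. The product, quantities, and threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class WarehousedItem:
    name: str
    on_hand: int
    reorder_point: int

    def order(self, qty: int) -> None:
        if qty > self.on_hand:
            raise ValueError(f"Only {self.on_hand} of '{self.name}' in stock")
        self.on_hand -= qty  # real-time inventory count updates after every order
        if self.on_hand <= self.reorder_point:
            print(f"LOW STOCK: '{self.name}' down to {self.on_hand}; notify the customer")

shells = WarehousedItem("letterhead shells", on_hand=500, reorder_point=100)
shells.order(450)  # triggers the low-stock notification
```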
Below are other examples of different types of products that can be ordered online:
• Ad hoc print: An online print product where the customer provides the content during the ordering process via file upload, such as brochures, flyers, and newsletters.
• Ad hoc business documents: An online print product where the customer provides the content during the ordering process via file upload, such as training manuals, presentations, and reports.
• Ad hoc oversize: An online print product where the customer provides the content during the ordering process via file upload, such as posters, signs, and banners.
• Static print product: An online print product where the content is stored in a catalogue and printed on demand after ordering, such as sales sheets, flyers, and white papers.
• Inventory product: An online print product where the content is stored in a catalogue and pulled from existing inventory after ordering.
• Digital publishing: An online product where the final product is a downloadable PDF instead of a printed product, such as white papers, personalized sales materials, and presentations.
• Kit: An online print product where the customer can buy a basket of goods contained in a single item.
• Promo product: A set of products that are branded with a logo for use in marketing or promotional programs, such as mugs, baseball hats, and pens.
• Integrated campaign: A product that combines multiple-marketing channels to create an integrated campaign a customer can use to track metrics when launching a new product or sales promotion.
• Photo product: An online print product using uploaded photos or photos from online collections, such as photo books, photo cards, and photo calendars.
• Quote request: An online print product used to request a quote for a print job.
Implementation and Deployment
By this point, the workflow strategy has been created, the customer has been included in discussions about its goals, and the print company has created some sample products and print items for the customer to test. As well, the print company and customer have completed a pilot period and identified unforeseen workflow issues. What remains is the final step of making the site live.
Making the site live involves ‘turning it on’ to accept orders from the entire user base. If the above steps have been completed properly, there should be very few issues.
Continuous Assessment
Even after a storefront has been launched, it is not considered complete. There should always be a system of continuous assessment in place to respond to customer feedback and correct any errors as the orders start coming in. Even after the site is live, the programmer should navigate the storefront to ensure its usability, and place a test order to ensure no issues arise for the customer during the ordering process. Also of consideration is a post-order assessment, where the internal processes in the printing company are evaluated for completeness and efficiency, as outlined below.
Workflows and Automation
Orders should enter an automated workflow, creating a seamless transition while bypassing several departments. Once an order has been placed, the appropriate staff are notified to fulfill it. If a VDP product was customized, then a print-ready PDF should automatically be uploaded to a hot folder. At this point, either an automated system or a prepress operator reviews the file for print standards and imposes it on the print template. These files can then be automatically produced on a digital press or be sent to the plate setter to be prepared for litho printing. Throughout every step of the process, email notifications should be sent to appropriate staff so they can fulfill the order, and to the customer so they can be kept informed of anything related to the order such as invoices and product shipping.
MIS Integration
It is beneficial to select a storefront suitable for integration into a management information system (MIS) to streamline orders from customization to invoice. Integration is a connection between two systems that enables the exchange of data. The information is automatically entered into an electronic docket, which is a database that collects and maintains customer information, products ordered, shipping information, and billing information automatically. When integrating two systems, it is important to note which system is the master data holder and which is the subscriber to that data. Only one digital system should ‘hold’ the data, whereas all the other systems access the same database. In a print environment with a functioning print MIS system, it is the MIS system that should be considered to be the master in every case. The MIS system collects orders from everywhere, not just the orders placed through storefronts online. The web2print system pushes data into the MIS system and subscribes to the master data stored and managed in that system, such as pricing and job specifications. This can be challenging because the web2print software and the print MIS system are often provided by separate vendors, which can prevent a smooth exchange of data.
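At its simplest, integration is a mapping from the storefront's order record (the subscriber side) onto the MIS docket (the master side). The Python sketch below shows the shape of that mapping; every field name on both sides is hypothetical, since each web2print and MIS vendor defines its own schema.

```python
def order_to_docket(order: dict) -> dict:
    """Map a hypothetical storefront order onto a hypothetical MIS docket record."""
    return {
        "customer_code": order["customer_id"],
        "product_code": order["sku"],
        "quantity": order["qty"],
        "ship_to": order["shipping_address"],
        "billed_amount": order["unit_price"] * order["qty"],
    }

order = {
    "customer_id": "C-1001",
    "sku": "BC-STD",
    "qty": 500,
    "shipping_address": "123 Main St",
    "unit_price": 0.08,
}
print(order_to_docket(order))
```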
Web2print is one of many secondary, or subscriber, connections into a printer’s business and production workflow. Web2print should serve its main purpose, which is to capture orders in the most efficient manner while maintaining a competitive edge for a print company’s sales team. Orders must be transitioned seamlessly and smoothly into the production workflow and MIS system so they can be treated like any other order, whether they were placed online or by traditional means. In this way, web2print is regarded as only one of many business opportunities that bring sales to a print company.
Analyzing the ROI
When making any business decision, investment must be weighed against return. This is known as return on investment, or ROI. Moving a business online to accept orders is a serious business decision. Web2print can be a worthwhile investment and understanding how to measure ROI before investing in a vendor’s software is important.
Typically, in a print company, the estimate for the actual printing process is very well defined. The estimating department can provide detailed analysis of all of the costs associated with printing a specific printed product. Where web2print differs, however, is in the costs of capturing the sale and in streamlining the process in the print shop. For example, there are specific increased costs in running a web2print site online. If the system was paid for as a one-time licence, then the total cost must be amortized over the life of the licence, and each print order shares a small part of that overall cost. Some SaaS systems, on the other hand, charge a piece rate or a monthly fee. These are easier to incorporate into the costs of the job. On the savings side, there are processes within the print company that are made more efficient, so an analysis of cost savings can be made there as well. However, print companies should not fall into the trap of thinking that just because a print job can be completed in a shorter time, it is automatically cheaper to produce. In order to assess the total ROI, only real costs that affect the print product’s profitability should be assessed.
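The cost side of that comparison can be made concrete with a little arithmetic. In the Python sketch below, a one-time licence fee is amortized over its life and compared with a monthly subscription on a per-order basis; all dollar figures and volumes are invented for illustration.

```python
# Hypothetical figures for comparing the two pricing models
licence_cost = 60_000      # one-time licence fee
licence_years = 5          # life over which the fee is amortized
orders_per_year = 4_000

saas_monthly_fee = 1_200   # subscription fee

licence_per_order = licence_cost / (licence_years * orders_per_year)
saas_per_order = (saas_monthly_fee * 12) / orders_per_year

print(f"Licence:      ${licence_per_order:.2f} per order")
print(f"Subscription: ${saas_per_order:.2f} per order")
```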
Timing is important when calculating ROI because a printer must determine when to invest money based on an expected return of that investment. Purchasing or building an online system is not automatically going to generate revenue. It is likely that the print company could invest thousands of dollars very quickly before the system provides any value in return. There is a human aspect to this as well. Sales professionals are still critical for driving new customer sales to the web and finding new online opportunities, both of which will help improve the return on the initial investment.
Systems with monthly payments are sometimes better for new online ventures, as they do not require a huge investment upfront. Up-front payments force a print company to give away all monetary leverage in a single transaction, and while they might be more cost-effective when serving large numbers of customers, they can do serious financial damage in the short term.
7.05: Summary
Web2print is the online connection between a print company and its customers, and the technology should help to solidify this relationship, not hinder it. Print companies offer their services online in response to their customers’ needs and buying trends. As web2print becomes more integrated into a print company’s day-to-day business, it becomes a main channel for interacting with a customer. A key to the strategy for implementing web services is involving the customer as much as possible, since the customer’s use and acceptance of the ordering portal is critical for its success. Print companies should research the types of products and services that will be helpful to customers in the specific target markets they serve, and not add too many products too quickly. Print companies must analyze the types of products their customer needs, and plan how a streamlined workflow will create efficiencies in its operations. Finally, a pilot phase to assess both accuracy of the storefront and user experience is important. To ensure continued customer satisfaction, print companies should be prepared to make ongoing improvements once the site goes live. System integration with print companies’ internal processes is also ongoing, as efficiencies and production enhancements are realized. The print industry continues to evolve and a successful implementation of a web2print portal will help print companies keep up with this evolution and stay in front of the competition.
Questions to consider after completing this chapter:
1. How would you describe web2print and the technology involved?
2. How would you describe e-commerce and how web2print utilizes it?
3. What are the benefits of using web2print to a company and its customers?
4. What are the strategic steps in creating a web2print system?
5. What types of products can be offered through a web2print system?
6. In what ways can a web2print system be integrated into a production workflow?
Suggested Reading
WFTPRINTAM – Web to Print. (n.d.). Retrieved from http://wftprintam.wikispaces.com/Web+to+Print
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand what the application is & the 4 different AutoCAD products
(CO 2) Install the application on your computer
(CO 3) Understand the User Interface of AutoCAD – Ribbon, Panels, Model space, Layout tabs, Status bar, & Properties
(CO 4) Understand AutoCAD setup tips – Options, Units, Workspace
(CO 5) Understand the types and structure of drawings in AutoCAD – Floor plan, RCP, Elevation, Section, & Details
(CO 6) Input commands and understand different selections
(CO 7) Understand basic drawing tools- Origin, Rectangle
(CO 8) Attach image/PDF/CAD and adjust the scale
(CO 9) Set the project folder, Save the file, and backups
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO1) Understand what the application is & the 4 different AutoCAD products
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
About CAD
Computer-Aided Design (CAD) or Computer-Aided Design and Drafting (CADD) can be defined as using computer systems to assist in the creation, modification, analysis, or optimization of a design. (Narayan, 2008)
CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and create a database for manufacturing. (Narayan, 2008)
image credit: Shaan Hurley, AutoCAD R14 Welcome Sample DWG, Flickr
CAD is an important industrial art extensively used in many applications, including automotive, shipbuilding, and aerospace industries, industrial and architectural design, prosthetics, and many more. (Pottmann, and et al., 2007)
About AutoCAD
AutoCAD is an industry-leading commercial CAD software.
AutoCAD is used by AEC(Architecture, Engineer, and Construction) to generate and optimize 2D and 3D designs. AutoCAD is a widely used software program that can help you draft construction documentation, explore design ideas, visualize concepts through photorealistic renderings, and simulate how a design performs in the real world. (Autodesk)
AutoCAD was first released in December 1982 as a desktop app. In 2010, AutoCAD was released as a mobile- and web app, marketed as AutoCAD 360. (Autodesk and AutoCAD)
Four AutoCAD products for AEC
• AutoCAD: the original version of AutoCAD. This version can be used by architects, project managers, engineers, graphic designers, city planners, and other professionals.
• AutoCAD Architecture: a version of Autodesk’s flagship product, AutoCAD, with tools and functions specially suited to architectural work. This software supports dynamic elements (walls, doors, windows, and other architectural elements) and automatically updates Spaces and Areas for square-footage calculations.
• AutoCAD LT: the lower-cost version of AutoCAD, with reduced capabilities (No 3D, No Network Licensing, No management tools, and more).
• AutoCAD 360: an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via a mobile device and web using a limited AutoCAD feature set and using cloud-stored drawing files.
(CO2) How to install the application
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Install AutoCAD
This version is for educational purposes only.
You must know your system requirements before you install the application. If you do not know your system’s specifications, please find the information here.
• [STEP 01] On your Windows machine, open a web browser (Chrome is recommended because the instructor tested with it) and go to https://www.autodesk.com/education/free-software/autocad
• [STEP 02] Click [CREATE ACCOUNT] if you do not have one. If you already have an Autodesk account, please sign in by clicking [SIGN IN].
• [STEP 03] Select an appropriate version of AutoCAD, your system, and language.
• [STEP 04] Click [INSTALL].
• [STEP 05] Accept the license and services agreement.
• [STEP 06] You will receive an email from Autodesk for the license information (Product key and Serial Number). It will be needed for the activation process.
• [STEP 07] Click the downloaded installation file to install. The installation will take a while.
• [STEP 08] After installation, the software will require activation. Please use the license information.
(CO3) Understand AutoCAD interface – Ribbon, Panels, Model space, Layout tabs, Status bar, & Properties
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Once you open AutoCAD by double-clicking the AutoCAD icon, you can create a new drawing by clicking the [START DRAWING] icon on the first page of the AutoCAD application.
You also can select a different template by clicking [TEMPLATES] under start drawing. The default setting is [acad.dwt]
Your recent documents will show in the middle of the first page. You also can click to open the recent documents.
For the tutorial provided by Autodesk, you can click [LEARN]. I recommend you watch the Getting Started Videos.
Once you click [START DRAWING], you will see this user interface below. [please remember the names]
• Application menu: New, open, save, import, export, print
• Quick access toolbar: User can save tools that they often use
• Info Center: Ask a question, find out answers from Autodesk community
• Ribbon: Main menus – Home, Insert, Annotate, View, Manage
• Ribbon tab
• Ribbon view: User can minimize and maximize the ribbon
• File tab: Navigate files and create and open files
• Drawing area/graphic area: Main drawing space
• View cube: User can change the view, top, front, 3D, or more
• Navigation bar: Zoom in and out, pan, zoom to all, and more
• Command box: Can type commands and see the previous commands
• Layout tab: Can see model space and print spaces
• Status bar: Can set grid, snaps, scales, and more
Please see this detailed user interface that is provided by Autodesk
(CO4) AutoCAD setup tips – Options, Units, Workspace
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Before you start your drafting, it is recommended to set your workspace and options as you wish. Take some time and experiment with the settings, as shown below. You can change settings at any time.
Below are the instructor’s recommended setups based on more than 10 years of drafting experience.
OPTIONS
• [STEP 01] Click [APPLICATION MENU] and then Click [OPTIONS], or type [options] on the command box, and Enter key
• [STEP 02] You will see the Option window
• [STEP 03] Click Display tab > Change Color theme from dark to light
• [STEP 04] Find Crosshair size on the Display tab > Change the value from 5 to 100
• [STEP 05] Find [Colors] and change Uniform background to Black> Click [Apply & Close]
• [STEP 06] Click the Drafting tab > Change the aperture size – make it slightly smaller
• [STEP 07] Click the Selection tab and uncheck [Allow press and drag for Lasso] > Click [OK] to close the options
UNITS
• [STEP 01] Click [APPLICATION MENU] > click [DRAWING UNITS] > click [UNITS] or, Type [units] on the command box > Enter key
• [STEP 02] Confirm the units are right for your project
The snapshot below shows a typical setting for the Imperial system.
The snapshot below shows a typical setting for the Metric system.
PROPERTIES
• [STEP 01] Click small arrow under Properties on Home Ribbon to open the Properties panel
• [STEP 02] Or press [Ctrl+1] on your keyboard to open the Properties panel
• [STEP 03] Place the panel on your left side of the workspace
• [STEP 04] Click [Layer Properties] on Home Ribbon to open Layer properties
• [STEP 05] Place the panel on your left side of the workspace and click the arrow to hide the panel
The interface after setting changes
(CO5) Types and structure of drawings in AutoCAD – Floor plan, RCP, Elevation, Section, & Details
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Architects and designers use AutoCAD in slightly different ways. It differs from firm to firm and depends on who is drawing. Moreover, it depends on what phase of the design you are in.
Many design firms never use layout tabs. Some firms use AutoCAD only for Schematic Design purposes. Some designers use this application for presentation purposes too.
However, in this course, we are targeting to use all essential functions to generate a Construction Document set.
Below is a typical – fundamental – Construction Document set for an interior design project
• Cover sheet + general project information
• Floor plans
• Furniture + Finish plans
• Ceiling plans
• Elevations + Sections
• Details
For detailed information about the types of drawings, please refer to:
Kilmer, W. Otie, and Kilmer, Rosemary. Construction Drawings and Details / W. Otie Kilmer and Rosemary Kilmer. Third ed. 2016. Print.
(CO6) Input commands and understand different selections
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
To draw in AutoCAD, you must understand different types of command input
• Use icons on Ribbon (Basic level)
e.g. Click [Home] ribbon > Click [Text]
• Use the commands box (Moderate level)
e.g. Click [command box] > Type MULTILINETEXT > Enter key
• Use Shortcuts (Advanced level) – Please practice to improve the speed of work and productivity.
e.g., Type [MT] on a keyboard (mouse can be located anywhere, it can be lower case) > Enter
Shortcuts often used by the instructor [please remember the list of shortcuts]
• [l] – line
• [pl] – polyline
• [mt] – multiline text
• [m] – move
• [co] – copy
• [ro] – rotate
• [z] – zoom and [a] – all
• [b] – block
• [s] – stretch
• [x] – explode
• [ex] – extend
• [c] – circle
• [re] – regen (refresh)
• [h] – hatch
• [o] – offset
• F3 – osnap
• F8 – ortho
Detailed information can be found in this link https://www.autodesk.com/shortcuts/autocad
Three types of selections
• one click – selects individual objects
• window selection (blue) – drag/click from left-top to right-bottom to select all objects that are enclosed in the selection rectangle.
• cross selection (green) – drag/click from right-bottom to left-top to select all objects crossed by the selection rectangle.
Tip. To select multiple objects, just click one and then another. There is no need to hold the [Shift] or [Ctrl] key.
Refer to this link for more information on selecting objects.
(CO7) Understand essential drawing tools- Origin and Rectangle
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Understand the “origin” of the drawing
The AutoCAD drawing area is at real scale, which means the drawing scale is 1:1. Moreover, the drawing area is unlimited: you can draw the entire earth in the drawing, and you can draw a small object, too. Designers often lose the point/location where they want to draw when working in a large drawing. Thus, designers use the drawing origin (0,0,0) – (x,y,z) for a 3D model – as the base point of the project. Usually, the origin of the drawing is the left-bottom corner of the first floor (if it is a 3D model). In AutoCAD 2D drafting, we use only (0,0) – (x,y).
To start your drawing, draw a building footprint or property line first.
• [STEP 1] Click [Rectangle] on the Home Ribbon, or type [rec] and Enter
• [STEP 2] Specify the first point, type [0,0] and Enter
• [STEP 3] Specify the next point. Any point toward the right-top corner will be fine; it depends on the project size. For our project, type [58’7″,20’4″] (see the unit-conversion sketch after these steps)
• [STEP 4] Type [z] to zoom and type [a] and Enter
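When you type an imperial coordinate such as [58’7″,20’4″], AutoCAD stores it internally in drawing units (inches). The small Python helper below shows the underlying conversion; it is only an illustration of the arithmetic, not part of AutoCAD.

```python
def to_inches(feet: int, inches: float = 0.0) -> float:
    """Convert a feet-and-inches dimension to total inches (drawing units)."""
    return feet * 12 + inches

# The rectangle corner 58'7" x 20'4" expressed in drawing units
x = to_inches(58, 7)   # 703.0
y = to_inches(20, 4)   # 244.0
print(f"Corner point: {x},{y}")
```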
Line (command)
• [STEP 1] type [l] and Enter
• [STEP 2] specify the first point by clicking a point or typing [x,y]
• [STEP 3] specify the end point by clicking a point or typing [x,y] for an absolute point, or [@x,y] for a relative point (see the coordinate sketch after these notes)
• Please refer to this link for the line command
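The difference between [x,y] and [@x,y] is just whether the coordinate is measured from the drawing origin or from the last point. The Python sketch below mimics that arithmetic with invented numbers.

```python
def relative_to_absolute(last_point, offset):
    """What AutoCAD does with [@x,y]: add the offset to the last point."""
    return (last_point[0] + offset[0], last_point[1] + offset[1])

start = (10.0, 5.0)
end = relative_to_absolute(start, (36.0, 0.0))  # typing @36,0 after the first click
print(end)  # (46.0, 5.0) -- the same point typed absolutely would be [46,5]
```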
Move (command)
• [STEP 1] type [m] and Enter
• [STEP 2] select the object/objects that you want to move and Enter
• [STEP 3] specify the base point
• [STEP 4] specify the second point to move the object/objects
• Please refer to this link for the move command
Copy (command)
• [STEP 1] type [co] and Enter
• [STEP 2] select the object/objects that you want to copy and Enter
• [STEP 3] specify the base point
• [STEP 4] specify the second point to copy the object/objects
• [STEP 5] specify the third point or more to copy the object/objects if you have. If you want to stop, use ESC
• Please refer to this link for the copy command
• Please practice Line, Move, Copy, and Rotate commands
(CO8) Attach image/PDF/CAD and adjust the scale of the attached file
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Download a floor plan image (Eames House-House)
From this page, right-click [Eames_House_Floor_Plan_House.jpg] and save the image file to your project folder.
Your CAD file and JPG file MUST be in the same folder. Otherwise, you will have to relink the file every time, or set the link path to relative.
Insert the image file.
• [STEP 1] Click [Insert] on the ribbon tab
• [STEP 2] Click [Attach] on the Reference palette
• [STEP 3] Select [Eames_House_Floor_Plan_House.jpg] from your project folder > Click [open]
• [STEP 4] Click [OK] on the Attach Image window
• [STEP 5] Click the origin point or type [0,0] and Enter
Adjust scale
• [STEP 1] Specify the scale factor [1] and Enter
• [STEP 2] Select the inserted image > Change the Fade value to [50] or lower that you can see the background
• [STEP 3] Zoom in to the scale or a known dimension
• [STEP 4] Type [SC] and Enter for Scale change
• [STEP 5] Click a base point > Type [r] and Enter > Click the base point > Click a second point that you know a dimension (for this draw, you can use the scale bar) > Type the known dimension [1′]
Change the drawing order
• [STEP 1] Click the inserted image
• [STEP 2] Mouse right-click > Click [Draw Order] > Click [Send to Back]
Move the image to the building footprint
• [STEP 1] Select the inserted image
• [STEP 2] Type [m] to move and Enter
• [STEP 3] Click a base point > Click the target point to move
• Tip! Use Object snap [F3] to select the target point from the building footprint.
(CO9) Set the project folder, Save the file, and backups
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=5
Save the file
It is vital to save your file as early as possible. Moreover, save at any moment you think it is appropriate. I usually save every 15 minutes (at least four times per hour).
• [STEP 01] Click [Application menu] > Click [Save]
• [STEP 02] Select a project folder on your hard drive, external hard drive, USB, Dropbox, or Onedrive
• [STEP 03] Recommended file type – AutoCAD 2007/LT2007 Drawing(*.dwg)
• [STEP 04] Recommended file name – Eames_House_Project_Firstname_Lastname_01.dwg
Tip! A (.bak) file is a backup file. With the default setting, the file is backed up every 10 minutes. To use the backup file, change the file extension from (.bak) to (.dwg).
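As the tip says, restoring from a backup is only a file-extension change. A tiny Python sketch of the rename, with an assumed file name:

```python
from pathlib import Path

backup = Path("Eames_House_Project_Firstname_Lastname_01.bak")  # assumed name
backup.rename(backup.with_suffix(".dwg"))  # the renamed copy opens in AutoCAD
```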
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand Layers – Name, Line type, Thickness, & Color
(CO 2) Draw centerlines – Object snap, Line, Move, & Offset
(CO 3) Draw exterior/interior walls, floor, millwork & openings – Polyline, Spline, Circle, Rectangle, Object Snap, Mirror, Fillet, Trim, Extend, Array, & Match Properties
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand Layers – Name, Line type, Thickness, & Color
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=32
The concept of layers in CAD
Architects and designers use layers in vector-based CAD software.
The concept of layers allows CAD information to be organized, facilitates the visual display of the information on a computer screen, and allows the information to be efficiently converted to the conventional print media of drawings.
The efficient use of layers can reduce document preparation time and improve document coordination.
The American Institute of Architects and National Cad Standard published “AIA CAD Layer Guidelines.”
Using layers can make your drawings easier to control and interpret for both you and your team. For example, you can draw your interior walls on one layer, and furniture on another using a different color. You can quickly turn off your furniture layer on a floor plan and turn on the furniture layer on a furniture plan.
You can control layers from the Layer Property Manager.
Each layer has a set of properties assigned to it.
• Name
• Turn On/Off
• Freeze – Looks the same as turning on and off but uses less memory. Boosts the speed of work.
• Lock
• Plot – It shows on your screen, but it will not print.
• Color – It is for working on the screen – easier to recognize the layers by color. Usually, prints in black and white.
• Line weight – Important to identify the hierarchy of the lines
• Line type
• Transparency – not often used
• Description – note for the layer
Create Layers for the project
• [STEP 01] Open Layer Property Manager panel by clicking the [Layer Property] icon under the [Home] tab on the [Layers] panel.
• [STEP 02] Click [New Layer] or Press [Alt+N] to add a new layer
• [STEP 03] Rename the layer by double-clicking its name; change it to A-WALL
• [STEP 04] Update the color of the layer by clicking the color section to YELLOW
• [STEP 05] Update the line weight of the layer by clicking the Lineweight section to 0.5
• [STEP 06] Repeat the [STEP 02] to [STEP 05] to create the listed Layers below.
Note 1. To update a line type, click the line type that you want to update and click [LOAD]. Find the line type that you want to use for the layer and click [OK]. Then click the loaded line type to apply it, and click [OK] to finish.
Note 1A. Due to the scale issue, the line type may not display correctly. To correct the line type scale, type [lts] on the keyboard, press the [Enter] key, and enter a [number]. For this project, try [10]
Note 2. The layer names are based on the AIA CAD layer guidelines, which the instructor modified. If you need other layers, please refer to the guideline.
Apply a layer to a model
[Method 1]
• [STEP 01] Select a line/lines or an element/elements to which you want to apply a layer
• [STEP 02] Select the layer that you want to apply; the element’s layer will then be changed
[Method 2]
• [STEP 01] Select a layer that you want to draw
• [STEP 02] Draw a line/lines or an element/elements within the layer
[Method 3]
• Use the [MATCHPROP] command or [MA] – match properties – to copy a layer style from one object to another
Tip. I prefer to use this method to speed up the work – draw one line on each layer, labelled with the layer name, at the side of the drawing, and use the Match Properties command whenever it is needed. You save the time of finding and selecting the layer from the drop-down menu.
(CO 2) Draw centerlines-column grid – Object snap, Line, Move, & Offset
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=32
Drawing centerlines can be the first step in drawing a floor plan. The drawing order depends on the phase of design and who is drawing. However, setting centerlines first makes it easier to draw a floor plan than starting from scratch.
• [STEP 01] Select the right layer to draw column grids. Click [S-GRID] from the [Home] ribbon on the [Layers] tab
• [STEP 02] Make sure [orthomode] and [osnap] are on
OSNAP setting – to select a specific point of an element, it is recommended to use object snap [OSNAP]. To set up object snap, mouse right click on the OSNAP icon on the Status bar and check the point that you would like to select while the object snap is on.
This image shows the recommended object snap settings for 2D drafting, based on the instructor’s experience. Note. For more information regarding OSNAP, please read this link about using Object Snaps.
• [STEP 03] Select [Line] from [Home] ribbon on [Draw] tab ,or Type [l] and press [Enter] on the keyboard to draw lines
Click the base point [0,0], move the mouse toward plan north, type [25′], and press [Enter] on the keyboard
• [STEP 04] Move the gridline to the center of the building base footprint.
Click [Move] icon from [Home] ribbon on [Draw] tab
or, type [m] and press [Enter] key
Click the midpoint of the selected line, and then click the midpoint of the building footprint
• [STEP 05] To create vertical grid lines, it is recommended to use [OFFSET] command.
Click [OFFSET] icon from [HOME] ribbon on [DRAW] tab
or, type [o] and press [Enter] key
Type the specific distance that you want to offset; for this project, enter 7′-4-3/4″,
and then press the [Enter] key,
select the line to offset, and then click a point on the side where you want the copied line.
If you repeat with the same distance, you do not need to enter the number again;
if you use a different distance, you must enter the new value.
• [STEP 06] Repeat [STEP 03] to [STEP 05] for horizontal grid lines
• [STEP 07] Copy the first level grid lines, both vertical and horizontal, for the second level
To copy the lines, please use an exact, easy-to-remember distance (e.g., 28′-11″ based on the base drawing)
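For readers who like to automate, the grid steps above translate directly to the COM API. This pyautocad sketch draws the 25′ line at the origin and makes three offset copies at 7′-4 3/4″; distances are in drawing units (inches in an architectural template), and the copy count and points are assumptions:

from pyautocad import Autocad, APoint

acad = Autocad()
acad.doc.ActiveLayer = acad.doc.Layers.Item("S-GRID")

base = acad.model.AddLine(APoint(0, 0), APoint(0, 25 * 12))  # 25'-0" vertical grid line
spacing = 7 * 12 + 4.75                                      # 7'-4 3/4" = 88.75 in
line = base
for _ in range(3):                     # three copies; adjust to your grid
    line = line.Offset(spacing)[0]     # Offset returns the newly created object(s)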
(CO 3) Draw exterior/interior walls, floor, millwork & openings – Polyline, Spline, Circle, Rectangle, Mirror, Fillet, Trim, Extend, Array, & Match Properties
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=32
Let’s draw the floor plans (no columns, no furniture, no windows, no doors, no patterns, no symbols, no plumbing, no dimensions, no texts at this point) with the following commands
image credit: Screen captured by the Author from the application
Make sure you use the commands below for control and easy selection
[ORTHO] or the [F8] key, [OSNAP] or the [F3] key
If needed, you can turn the [grid] on/off by pressing the [F7] key
Polyline(command) – this is for creating a connected line. Once you make an element (e.g., a wall) with a polyline, the lines will be connected as one object. This connection is useful for controlling each element.
• [STEP 01] Type [pl] on keyboard and press [Enter] key
• [STEP 02] Specify start point
• [STEP 03] Specify the next point.
• [STEP 04-01] If you want to close the object, type [c] on the keyboard and press the [Enter] key
• [STEP 04-02] If you want to finish the object without closing the object, press the [ESC] key
• Please refer to this link for the polyline command
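As a scripting aside, the same closed shape can be created in one call through the COM API. A minimal pyautocad sketch drawing a closed lightweight polyline (a 10′ x 5′ outline; the coordinates are assumptions):

from pyautocad import Autocad, aDouble

acad = Autocad()
# Flat list of 2D vertices in inches: x1, y1, x2, y2, ...
pts = aDouble(0, 0, 120, 0, 120, 60, 0, 60)
pl = acad.model.AddLightWeightPolyline(pts)
pl.Closed = True      # equivalent to typing [c] to close the PLINE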
Pedit(command) – this is used to convert line objects to a polyline
• [STEP 01] Type [pe] on keyboard and press [Enter] key
• [STEP 02] Select an object to convert to a polyline
• [STEP 03] Type [y] and press [Enter] key to turn the objects into one object.
• [STEP 04] Select [Join]
• [STEP 05] Select all connected objects that you want to join as a polyline, and press [Enter] key, and press [Enter] key one more time to finish the command
• Please refer to this link for the Pedit command
Explode(command) – this is to convert a polyline or a block to line objects
• [STEP 01] Select an object (polyline or a block) to convert to line objects
• [STEP 02] Type [x] on the keyboard, and press [Enter] on the keyboard
• Please refer to this link for the Explode command
Spline(command) – may not be needed for the Eames House project, but you will need it for your own project – this command creates a connected curved line.
• [STEP 01] Type [spl] on keyboard, and press [Enter] key
• [STEP 02] Specify the first point
• [STEP 03] Specify the next points
• [STEP 04] Press [Enter] key to finish
• Please refer to this link for the Spline command
Circle(command)
• [STEP 01] Type [c] on keyboard, and press [Enter] key
• [STEP 02] Specify a center point
• [STEP 03] Specify a radius by typing a specific number or by clicking a specific point
• Please refer to this link for the circle command
Rectangle(command)
• [STEP 01] Type [rec] on keyboard, and press [Enter] key
• [STEP 02] Specify a first corner point
• [STEP 03] Specify another corner point
• Please refer to this link for the Rectangle command
Mirror(command)
• [STEP 01] Type [mi] on keyboard, and press [Enter] key
• [STEP 02] Select an object/objects, and press [Enter] key
• [STEP 03] Specify the first point of the mirror line
• [STEP 04] Specify the second point of the mirror line
• [STEP 05] Decide whether to erase the source objects or not, and press the [Enter] key to complete the command
• Please refer to this link for the Mirror command
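The Mirror steps map to a single method call in the COM API. Here is a small pyautocad sketch that mirrors a 6″ circle across an assumed vertical mirror line (the source object is kept, matching the "do not erase" choice in [STEP 05]):

from pyautocad import Autocad, APoint

acad = Autocad()
circle = acad.model.AddCircle(APoint(24, 24), 6)          # 6" radius at (2', 2')
mirrored = circle.Mirror(APoint(60, 0), APoint(60, 120))  # two points define the mirror line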
Fillet(command) – create a corner between two objects
• [STEP 01] Type [f] on keyboard, and press [Enter] key
• [STEP 02] Select the first object
• [STEP 02-01] If you want to create a smooth corner, you can specify the radius by typing [r] and pressing the [Enter] key, then typing a [specific number] and pressing the [Enter] key
The radius value will be stored; if you want to create a sharp corner, set R = 0.
• [STEP 03] Select the second object
• Please refer to this link for the Fillet command
Trim(command)
• [STEP 01] Type [tr] on keyboard, and press [Enter] key
• [STEP 02] Select cutting edges
• [STEP 03] Select object(s) to trim, and press the [Enter] key to complete the command
• Please refer to this link for the Trim command
Extend(command)
• [STEP 01] Type [ex] on keyboard, and press [Enter] key
• [STEP 02] Select boundary object(s)
• [STEP 03] Select object(s) to extend, and press the [Enter] key to complete the command
• Please refer to this link for the Extend command
Stretch(command)
• [STEP 01] Type [s] on keyboard, and press [Enter] key
• [STEP 02] Select objects – specify the portion of the object that you want to stretch, using the crossing object selection method, and press [Enter] key
• [STEP 03] Specify the base point
• [STEP 04] Specify the destination point to complete the command
• Please refer to this link for the Stretch command
Match Property(command)
There is no right or wrong way to draw a floor plan, be creative, Designers!!
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit dimensions (in model space) – Dim, & Dimension style
(CO 2) Add/Edit blocks from AutoCAD Tool Palette & Other sources – Door, Window, Column, Plumbing, Furniture & Equipment
(CO 3) Create custom blocks – Custom furniture
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit dimensions (in model space) – Dim, & Dimension style
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=78
Open your working CAD file.
Understand CAD dimension scale settings
• [METHOD 01] Dimension in model space for plotting in model space – This is the traditional method used with single-view drawings – you need text height charts for the correct text height, and it is challenging to stay flexible with the drawing scale.
• [METHOD 02] Dimension in model space for printing or plotting in paper space – This was the preferred method for complex, multiple-view drawings. (We will practice this method for our project)
• [METHOD 03] Dimension in layouts – This is the simplest dimensioning method
• Please refer to this link for more information about scale for dimensions
Text Heights in paper space (1=1)
• 10 pt = 3/32” (minimum font to read)
• 12 pt = 1/8” (Standard text size)
• 18 pt = 3/16” (Subtitle text size)
• 24 pt = 1/4” (Title text size)
Understand the types of dimensions
Before you add dimensions, move the inserted drawing image to avoid accidental deletion and confusion.
Move the image to the right side – 100ft, or any number that you can easily remember; you will need to move the image back to its original position later.
Note. Make sure you turn on [ortho]
Set Drawing scale
• Before you start to add dimensions, it is recommended to set the drawing scale. The drawing scale can still be changed later; however, once you set the drawing scale now, less work is needed at the end.
• The drawing scale is determined by several factors – the paper size, the purpose of the submission, and so on.
• For this project, you will be asked to print out your drawings at 11in x 17in (horizontal layout). Although the inserted original drawing scale is 3/8″ = 1′-0″, your drawing scale should be 3/16″ = 1′-0″
• To set the scale, click the drawing scale [1:1] and select the drawing scale [3/16″ = 1′ -0″]
image credit: Screen captured and modified by the Author from the application
• If you cannot find the ft-in types of scale, click [Custom] > click [Add] > add the name of the scale [3/16” = 1’-0”], add the value on Paper units [3/16], add the value on Drawing units [12] > click [OK] to complete the custom scale > click [OK] to set the scale
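As a check on those Paper units/Drawing units values: the scale factor is drawing units ÷ paper units = 12 ÷ (3/16) = 64, so 3/16″ = 1′-0″ is the ratio 1:64. By the same arithmetic, 3/8″ = 1′-0″ gives 12 ÷ (3/8) = 32, i.e., 1:32.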
Set Dimension style
“A dimension style is a named collection of dimension settings that control the appearance of dimensions, such as arrowhead style, text location, and lateral tolerances.” (Autodesk help, Mar 29, 2020)
Please refer to this link for more information about dimension style
• [STEP 01] To open Dimension Style Manager,
click the [Annotate] ribbon tab, then click the small diagonal arrow at the bottom-right corner of the Dimension panel
or, type [ddim] and press [Enter] key
• [STEP 02] Click [Annotative] > Click [Set Current] > Click [Modify] to open Modify Dimension Style: Annotative window
• [STEP 03] Click [Lines] tab
adjust the Baseline Spacing to 1/16″
adjust the Extend beyond dim lines to 1/16″
adjust the Offset from origin to 1/16″
• [STEP 04] Click [Symbols and Arrows] tab
set the first Arrowhead to [Architectural tick]
set the second Arrowhead to [Architectural tick]
adjust the arrow size to 1/8″
adjust the break size to 1/16″
adjust the Jog height factor to 1/16″
• [STEP 05] Click [Text] tab
adjust the text height to 1/8″
select [Aligned with dimension line]
• [STEP 06] Click [Fit] tab
make sure [Annotative] is checked
• [STEP 07] Click [Primary Units]
adjust the Unit format to Architectural
adjust the Precision to 0′-0 1/8″
• [STEP 08] Click [OK] to complete the modification > Click [Close] to finish the dimension style manager
Add dimensions
• [STEP 01] Change layer to [A-ANNO-DIMS]
• [STEP 02] Click [Annotate] ribbon tab > make sure the dimension style is [Annotative] > click [Linear] to draw dimensions
or, type [dim] and press [Enter]
If Linear doesn’t fit for your purpose, please select the types of dimension that you want to add.
• [STEP 03] make sure you turn on Object Snap
Click the intersection point at the top left corner of the building column grids
Click the next intersection point for dimensioning
Click a third time to place the dimension line and text
• [STEP 04] If the dimensions are in a continuous string, use [Continue] from the [Annotate] tab, [Dimensions] panel.
Add additional dimension lines to the previous dimension.
• [STEP 05] Repeat to dimension all column grids for the first floor and overall building dimensions.
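If you prefer code, a single aligned dimension between two grid intersections looks like this in pyautocad; the two origins and the 2′ offset for the dimension line location are assumptions standing in for your actual grid points:

from pyautocad import Autocad, APoint

acad = Autocad()
acad.doc.ActiveLayer = acad.doc.Layers.Item("A-ANNO-DIMS")

p1 = APoint(0, 0)                 # first extension line origin
p2 = APoint(88.75, 0)             # second origin, one 7'-4 3/4" bay away
text_pos = APoint(44.375, -24)    # where the dimension line and text sit
dim = acad.model.AddDimAligned(p1, p2, text_pos)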
Add a new dimension style
– a 3/32″ text size, annotative style for interior wall dimensions.
• [STEP 01] Type [ddim], and press [Enter] key to open [Dimension Style Manager]
• [STEP 02] Click [New] to create a new dimension style
• [STEP 03] Add the new style name [Annotative 3-32], set Start With to [Annotative], confirm [Annotative] is checked, and click [Continue]
• [STEP 04] Adjust Arrow size from Symbols and Arrows to 1/8″,
adjust Text height from Text to 3/32″
click [OK], click [Set Current] and click [Close]
• [STEP 05] Make sure your layer is correct in [A-ANNO-DIMS]
• [STEP 06] Type [dim], press the [Enter] key, and start dimensioning the interior walls (You don't need to match your dimensions exactly to the reference; for this project, +/- 4 inches is acceptable)
(CO 2) Add/Edit blocks from AutoCAD Tool Palette & Other sources – Door, Window, Column, Plumbing, Furniture & Equipment
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=78
Understanding the concept of blocks in AutoCAD
• A block is a collection of objects that are combined into a single named object.
• Although a block and a group in AutoCAD look similar, these are different concepts.
Basically, all insertions of a block update if you change the block definition, but groups do not change together; each group is unique.
• A block consists of the name of the block, the block geometry, the location of the base point, and any associated attribute data.
• For more information about blocks, please refer to this page
Source of blocks
Designers/drafters often bring blocks from outside resources. They use free resources from websites and/or the firm's own resources to save time during schematic design, and manufacturers' resources for design development and/or construction documents.
AutoCAD provides some basic blocks. You can find blocks through the [Content Browser] and add blocks from the [Tool Palettes] and [Design Center]
CAD library websites
Furniture manufacturers' websites with CAD libraries
If designers/drafters cannot find blocks that they want to use for the project, they can create custom blocks. Sometimes it saves more time.
Add blocks from Tool Palettes (Use basic blocks from AutoCAD library)
Move the linked Eames House floor plan image to match the drawing for reference – we moved the image 100ft to the right; at this time, move the image back to its original place. If the image is hidden, show the image by clicking [Show Image] on the Image ribbon. If the image hides your drawing (CAD objects), change the drawing order: right-click on the image, click [Draw Order] > click [Send to Back]
We will add columns from the tool palettes.
• [STEP 01] Click [View] tab > Click [Tool Palettes] under Palettes panel to open the panel or, type [TOOLPALETTES] on the command, and press [Enter] key
Once the tool palettes are open, place the palette in a comfortable location (I personally like all palettes on the left side of the application.)
• [STEP 02] Click [Structure] tab on the tool palettes > Click [WF beam – Imperial]
• [STEP 03] Change the layer to [S-COLS]
• [STEP 04] Place the column on the top-left corner of the building. At this time, you cannot place the [WF beam – imperial] to the precise location.
• [STEP 05] Move the column to the right position using [osnap] and guidelines
• [STEP 06] Copy the column to other positions of the first-floor plan; you will need to rotate the drawing and adjust your drawing according to the column locations.
Tip. Use the intersection object snap [F3] and the ortho tool [F8]
• [STEP 07] Copy the columns on the first floor to the second floor.
You may lock the layers that do not need to be selected. After you copy the columns on the first floor to the second floor, you may also unlock the layer you locked.
Edit a Dynamic block
You can find [Door-Imperial] and [Window-Imperial] on the Architecture tab of the tool palettes to add to your drawing. However, the currently loaded blocks (e.g., [Door-Imperial] and [Window-Imperial], shown with a thunderbolt icon) on the tool palettes are called dynamic blocks; a dynamic block is a parametric block that users can easily modify.
• [STEP 01] Click [Window – imperial] on the tool palette
• [STEP 02] Place [Window – imperial] block in your drawing
• [STEP 03] Double-click [Window – imperial] to open Edit Block Definition
• [STEP 04] Click [OK] to open the Block Editor
• [STEP 05] Select [Width=3’-0”] and adjust [Dist type] to [Increment] on the properties palette. And adjust [Dist increment] to [1/4”]
• [STEP 06] Select [Wall=0’-4”] and adjust [Dist type] to [Increment] on the properties palette
• [STEP 07] Click [Save Block] from the [Block Editor] ribbon
• [STEP 08] Click [Close Block Editor] to close
• [STEP 09] Relocate the window and update the size of the window according to the floor plan
Repeat the steps for [Door-Imperial]. You will need to change the [Dist type] for [Door Size] and [Wall Thickness]
Complete placing and adjusting the windows (curtain walls) and doors (interior and exterior doors) on the correct layers. If you cannot find the blocks, you can draw the elements with [line] or [polyline]
Add more blocks using [Design Center]
Design Center is a tool to access and add blocks, Dimension styles, Layers, Layouts, Line types, Multi leaders, Table styles, Visual styles, and Xrefs.
For more information about the Design Center, please refer to this page
I am going to demonstrate how to load a sink from the AutoCAD sample blocks.
• [STEP 01] Change layer to [P-FIXT]
• [STEP 02] Click [View] tab > Click [Design Center] icon
or, Type [adcenter] on the command line, and press [Enter] key to open the Design Center window
• [STEP 03] Once you click the [Home] icon, you can find the [en-us] folder > double-click to open [en-us] > double-click to open [Design Center]. You can find some sample drawings containing blocks
The folder structure may differ between versions of AutoCAD, but generally, you can find the sample CAD files here:
C:\Program Files\Autodesk\AutoCAD 2020\Sample\en-us\DesignCenter
• [STEP 04] Double-click [Kitchens.dwg] > Double-click [Blocks] > Double-click [Sink-single-30 in top] to insert it into your file > Click [OK] to confirm the information
• [STEP 05] Place the sink in the kitchen, using object snap
• [STEP 06] Click the inserted block [Sink-single-30 in top] > right-click > Click [Edit Block In-place] > Click [OK] on Reference Edit
• [STEP 07] Use the [Stretch] tool to resize the sink, delete elements that are not needed, and add geometry for the sink
Note 1. For the assignment, you don’t need to make the model 100% the same. Some flexibility will be acceptable (shape of sink and faucet)
Note 2. When you draw a new line and element, use the [0] layer in a block
• [STEP 08] Once you are done with the editing > Click [Save Changes] on the [Home] ribbon, under the [Edit Reference] panel
If you want to rename the block, type [rename] on the command line > Click [Blocks] > Click the block that you want to rename
If the block is your own – whether it was modified from another or created new – add [000_] at the beginning of the block name to recognize/find the block easily
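The rename step can also be typed for you by a script. This pyautocad sketch sends the command-line version ([-RENAME]) to AutoCAD; SendCommand simply types at the command line, and the block names here are space-free placeholders (a space acts as [Enter] at a command prompt):

from pyautocad import Autocad

acad = Autocad()
# _B picks the Block option, then the old name, then the new name.
acad.doc.SendCommand("_.-RENAME _B SINK30 000_SINK30 ")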
If you want to use a downloaded CAD file (e.g., https://cad-blocks.net/kitchen-cad-blocks-kitchen-sink.html ).
• Save the downloaded file in a project folder
• Open the [design center]
• Click [Load]
• Find the project folder where you saved the downloaded cad file
• Click [Open] to load
• Double-click [Blocks]
• Select a block that you want to use in your project and double click to load
If the inserted block is the wrong scale, adjust the scale.
Please use these strategies to draw plumbing fixtures in the kitchen and bathrooms.
You will make your own selection of blocks. The image below is a reference only.
• Add (2) beds in the bedrooms
• Add (1) desk and chair set in the bedroom
• Add (1) dining table-chair set in the dining area
• Add (1) table in the alcove
• Add (1) table in the living room
• Add bookshelves in the living room
• Add (1) lounge chair in the living room
• Add (2) chairs in the living room
We will ask you to create custom furniture (sofa set) below.
(CO 3) Create custom blocks – Custom furniture
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=78
Sometimes, you may experience difficulty finding CAD blocks on CAD library websites and/or manufacturers' websites. In particular, it is hard to find furniture blocks for residential projects. In this tutorial, you will learn how to create blocks using product information and photos. I will demonstrate using two different sofa sizes from the website Design Within Reach (DWR). You will create blocks using your own furniture selections for the project.
Create an 86″ sofa block
• [STEP 01] Retrieve the product information (dimensions) and images from the DWR website
• [STEP 02] Change the layer to [0] > Draw the overall size of the sofa (Width = 86″ & Depth = 33″) using [pline] or [rectang]; make sure [ortho] is on for 90-degree lines
• [STEP 03] Download or screenshot the product image to your project folder
Most of the time, you cannot find a top view of a product. You may rely on a front view to draw a floor plan view for the product.
• [STEP 04] Insert the front view image into your drawing file. Click [Insert] tab > Click [Attach] > Click the downloaded file > Click [Open] > Click [OK] on Attach Image > Click once on the drawing area > Click once again near the first point > Click the inserted image > Adjust the [Fade] value to [50] > right-click on the image > Click [Draw Order] > Click [Send to Back]
• [STEP 05] Relocate and rescale the inserted image to match the overall sofa dimensions.
For relocating, you will need to use [Move], [Rotate], and [osnap] command.
For rescaling, you will need to use [scale] with Reference [R].
• [STEP 06] Draw the details of the sofa using [line], [pline], [rectang], and [circle]. Use the detailed dimensions that the website provides. Also, verify the dimensions using [distance] – type [di] and press the [Enter] key
• [STEP 07] Before defining the lines to a block, make sure all lines are in the [0] layer.
• [STEP 08] Select all lines for the sofa (excluding the inserted image) > Type [b] on the command line and press [Enter]; the Block Definition window will open > Name the block something meaningful (e.g., 000_DWR-Bantam_Sofa-Plan) > Click [OK] > Change the block from the [0] layer to [I-FURN] > If you don't need the inserted image anymore, you can delete the image.
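For the scripting-minded, defining and inserting a block works like this through the COM API. The sketch below uses a plain 86″ x 33″ rectangle as a stand-in for the traced sofa lines, and assumes the [I-FURN] layer already exists:

from pyautocad import Autocad, APoint, aDouble

acad = Autocad()
doc = acad.doc

block = doc.Blocks.Add(APoint(0, 0, 0), "000_DWR-Bantam_Sofa-Plan")
outline = block.AddLightWeightPolyline(aDouble(0, 0, 86, 0, 86, 33, 0, 33))
outline.Closed = True        # geometry stays on layer [0] inside the block

ref = acad.model.InsertBlock(APoint(120, 120, 0), "000_DWR-Bantam_Sofa-Plan", 1, 1, 1, 0)
ref.Layer = "I-FURN"         # move the reference off layer [0], as in [STEP 08]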
Create a 73″ sofa block using the [000_DWR-Bantam_Sofa-Plan] block
Rather than redrawing the 73″ sofa block from scratch, you can start with the [000_DWR-Bantam_Sofa-Plan] block and use the Stretch tool to adjust the length.
• [STEP 01] Type [co] to copy [000_DWR-Bantam_Sofa-Plan] block or Insert the [000_DWR-Bantam_Sofa-Plan] block on the drawing area – Click [Insert] tab > Click [insert] icon under Block panel > Select the block > place the block on the drawing area.
• [STEP 02] Type [x], and press the [Enter] key. The Explode command breaks a block up into individual lines. This command is useful for converting a block into elements so you can redefine a new block.
For more information about this command, refer to this page.
• [STEP 03] Use the [Stretch] command to resize the sofa, and edit the details.
• [STEP 04] Create a new block by typing [b], and pressing [Enter] > name the block [000_DWR-Bantam_Sofa-73-Front] > update the layer to [I-FURN] > place the sofa
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Draw a section
(CO 2) Draw an elevation from the floor plan
(CO 3) Add/Edit Text & Annotation (in model space-annotative) – M Text, Text Style, M leader, and Multileader Style
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Draw a section
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=80
“A section is a cut-through of a space that will show more of the room’s features. It also allows you to show some structural detail. A section line can be cut from any part of the space, depending on what you would like to show.”
Retrieved from https://www.nda.ac.uk/blog/identify-plans-elevations-sections/
“A ‘section drawing,’ ‘section,’ or ‘sectional drawing’ shows a view of a structure as though it had been sliced in half or cut along another imaginary plane.”
Retrieved from https://www.designingbuildings.co.uk/wiki/Section_drawing
For more information about a building section drawing, please read this page: https://www.designingbuildings.co.uk/wiki/Section_drawing
In this tutorial, students will draw a building section based on Eames House, House, Section A-A’ drawing, the plan south section (You can download the image from Canvas Module and this link Eames_House_House_Section_A-A’.jpg), and your space planning (Furnishings, fixtures, and equipment).
image credit: Screen captured by the Author from http://www.loc.gov/pictures/collection/hh/item/ca4169/ (Eames House as-built drawing, public domain)
• [STEP 01] Open your CAD file for the Eames House Project.
• [STEP 02] Copy the floor plans (1st floor & 2nd floor) to the right side 100′. This step is optional, but I prefer to save the original plans and to use the copied plans for creating a section view.
• [STEP 03] Confirm you are on the [0] layer and draw a section line (recommended: use [PLINE]) on the first floor for a section view. Copy the section line to the same position on the second floor; you can rely on a column grid line. (For the section line, you can break and offset the line to focus on key interior and/or architectural elements. The line should start and stop outside of the plan, and you should add a small perpendicular box to indicate the direction of the section view.) Update the section lines to the [A-ANNO] layer.
• [STEP 04] Draw a perpendicular line from the section line on the first floor to indicate the building boundary and drawing boundary.
• [STEP 05] Insert the section drawing (Eames_House_House_Section_A-A’.jpg) by clicking [Insert] > click [Attach] > select the file Eames_House_House_Section_A-A’.jpg from your project folder > click [Open] > click [OK] on the Attach Image window > click a base point and a second point to insert the image > adjust Image Fade to 50 > right-click on the image > click [Draw Order] > click [Send to Back]
• [STEP 06] Relocate (use the [move] command) and rescale (use the [scale] command) the inserted image to fit the building boundary and the section lines.
Note, you must use object snap [F3] appropriately when you adjust the scale. Object snap works reliably when clicking CAD objects, but it does not work when clicking a point in a raster image.
• [STEP 07] Now, you are ready to draw the section with the inserted image.
• Note 1. You will rely on the dimensions on the inserted image and the lines on your floor plan. Use numeric values to draw lines (please don't just click on the image, except for the spiral stair; the image is a reference only, because a scaled image is always a bit off).
• Note 2. Create three new layers
• [A-LWT-OBJECT] 0.2mm – The edges of objects, and represent a change in depth
• [A-LWT-SECTION] 0.5mm – The lines are representing the boundary of anything cut-through
• [A-LWT-SURFACE] 0.05mm – The lines are detail lines on an object. They don’t represent much (if any) change in depth
• Note 3. Use [LINE]. [PLINE], [SPLINE], [CIRCLE], [TRIM], [OFFSET], [FILLET], [EXTEND], and [STRETCH] commands.
• Note 4. You also update the line type manually for door and window openings.
• Note 5. First, draw the guidelines, using [xline] to create lines of infinite length.
• Then, you draw the section lines.
• After that, you draw the object lines.
• Draw surface lines for details.
• Finally, you add furniture, and you should edit the details and objects hidden from the front of the object.
• [STEP 08] Move the section and the section lines that you drew (except the inserted image) 100′ to the left, to keep the section drawing in a safe drawing area.
• [STEP 09] Create a block for the section. Select all the elements in the section > Type [B] for creating a block > Define the name [000_Section A-A’] > Click [OK]
(CO 2) Draw an elevation from the floor plan
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=80
“An elevation is a view from the side of an object when drawing interior elevations; this would represent one of the walls. This would include any windows or doors as well as any built-in furniture that is in direct contact with the wall.”
Retrieved from https://www.nda.ac.uk/blog/identify-plans-elevations-sections/
“The term ‘elevation’ refers to an orthographic projection of the exterior (or sometimes the interior) faces of a building, that is, a two-dimensional drawing of the building’s façades.”
Retrieved from https://www.designingbuildings.co.uk/wiki/Elevations
In this tutorial, students will draw an interior elevation based on the Eames House, House, Section C-C’ drawing, the plan west elevation in the living room. Students will not draw a section; you will need to understand the concept of an elevation and will draw only an interior elevation.
• [STEP 01] Decide an elevation view.
• [STEP 02] Draw the outline of the elevation by using [xline] to set the boundary of the elevation.
• [STEP 03] Rotate and relocate the section view C-C’ from the inserted image to match the boundary of the elevation. You will need to rotate it 90 degrees clockwise.
• [STEP 04] Rotate the copied floor plan and the inserted image 90 degrees counterclockwise. The reason for this step is to draw the elevation quickly: it typically takes less time to draw the elevation in the right direction (Up-North, Down-South, Left-West, and Right-East).
• [STEP 05] Remove the elements that are not needed from the copied floor plan. Make sure you saved the original floor plan. You only delete the elements in the COPIED floor plan.
• [STEP 06] Now you can draw the elevation
• Draw the floor level and ceiling level (8′-1″ AFF). (Typically, an interior elevation expresses interior elements only; you don't draw wall thickness, window cuts, ceiling structure, or roof structure.) > Change the lines for the wall ends, floor level, and ceiling level to [A-LWT-SECTION]
• Switch the layer to [A-LWT-OBJECT] > Draw walls and furniture by using [LINE], [PLINE], [CIRCLE], [FILLET], [TRIM]
• If needed, switch the layer to [A-LWT-SURFACE] > Draw anything that is not important in terms of construction.
• [STEP 07] Add dimensions and opening for more information
• On the Application Status Bar, switch the scale to 3/8″ = 1′-0″
• Type [ddim] and press [Enter] to open the [Dimension Style Manager]
• Click [Annotative 3-32] > click [Set Current] > click [Close]
• Type [dim] and press [enter] to add dimension
• You will need to click the first extension line origin > click the second extension line origin > specify dimension line location. Repeat this process to add dimensions for the casework.
• [STEP 08] Make a block for the elevation.
• Select the elevation, including lines and dimensions.
• Type [b], press [enter] to open [Block Definition]
• Define the name [000_Elevation-A]
• Click [OK] to finish the command
• [STEP 09] Organize your drawings.
• Move the inserted – reference images to 75′ plan north.
• Move your section and elevation on the right side of the floor plans.
(CO 3) Add/Edit Text & Annotation (in model space – annotative) – M Text, Text Style, M leader, and Multileader Style
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=80
In this tutorial, students will learn how to add and edit text and annotation in the drawing area by using [MULTILINE TEXT], [TEXTSTYLE], [MULTILINE LEADER], and [LEADERSTYLE]
Add room names and room numbers on the floor plan.
• [STEP 01] Switch to the [A-ANNO-TEXT] layer
• [STEP 02] Adjust the units by typing [UN] and pressing [Enter] to open [Drawing Units].
• The current unit precision is 0′-0 1/16″
• Change the unit precision to 0′-0 1/32″
• Click [OK] to close the Drawing units window
• [STEP 03] Add two text styles for the room names and room numbers
• From the Annotate tab, Text panel on the ribbon, Click [Standard] > Click [Manage Text Styles]
• On [Text Style] window, click [New]
• Enter Style name [Annotative 1-8] and click [OK]
• Confirm [Annotative] is checked, update Paper Text Height to [0′-0 1/8″]
• Click [Apply]
• Click [Set Current]
• Click [New]
• Enter Style name [Annotative 3-32] and click [OK]
• Confirm [Annotative] is checked, update Paper Text Height to [0′-0 3/32″]
• Click [Apply] and click [Close]
• [STEP 04] Add room name
• Verify the Text Style is [Annotative 1-8] from [Annotate] tab, on [Text] panel
• Click [Multiline Text] from [Annotate] tab, on [Text] panel
or, Type [mt] and press [Enter]
• Define a text box for a room name. Using all caps is recommended for a room name; sometimes an abbreviation is used (e.g., LIVING RM).
• Enter [a room name] and click a point outside of the text box.
• [STEP 05] Add room number
• Verify the Text Style is [Annotative 3-32] from [Annotate] tab, on [Text] panel
• Click [Multiline Text] from [Annotate] tab, on [Text] panel
or, Type [mt] and press [Enter]
• Define a text box for a room number.
• Enter [a room number] and click a point outside of the text box. Typically, each room gets one number, e.g., 102. (The first digit (1) indicates the floor number – in this case, the living room is located on the first level. The second and third digits (02) indicate the room number, counted clockwise starting from the main entry. In this case, the HALL is 101 and the LIVING RM is 102.)
• Draw a box around using [RECTANGLE]
• [STEP 06] Create a block for the room name and number that you just created. Name the block to [000_Room name and number]
Note. This strategy is useful because once the block is updated on the floor plan, the room names and room numbers are automatically updated on other plans like a ceiling plan, finish plan and more.
• [STEP 07] Use [Edit Block In-place] to copy the room name and number to all rooms > Edit the names and numbers by double-clicking the name and the number > Click [Save Changes] to close [Edit Block In-place]
• [STEP 08] Update the block from [A-ANNO-TEXT] layer to [0] layer
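A scripted version of one room label, for reference. This pyautocad sketch places an MText object; the position and width are assumptions, and the height of 8″ comes from the 1/8″ paper height times the 1:64 factor of the 3/16″ = 1′-0″ scale (with a true annotative style, the height is driven by the style instead):

from pyautocad import Autocad, APoint

acad = Autocad()
acad.doc.ActiveLayer = acad.doc.Layers.Item("A-ANNO-TEXT")

mtext = acad.model.AddMText(APoint(60, 60, 0), 5 * 12, "LIVING RM")  # 5'-wide text box
mtext.Height = 8   # model-space height in inches (1/8" x 64)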
Add text and annotate on the floor plan
• [STEP 01] Switch to [A-ANNO] layer
• [STEP 02] Draw lines for openings and change [line type] to [Dashed]
• [STEP 03] Add multiline texts
• Add [OPEN TO BELOW] text on the second level above LIVING RM-102. Make sure the scale is 3/16″ = 1′-0″ while you add the text
• Add [OPEN TO HALL] and [OPEN TO KITCHEN] texts on [ELEVATION A]. Make sure the scale is 3/8″ = 1′-0″ while you add the text
Add annotates on the elevation A
• [STEP 01] Switch to [A-ANNO-TEXT]
• [STEP 02] Click [Manage Multileader Styles] from the [Annotate] tab, Leader panel, under Standard
• [STEP 03] Click [NEW] > Add a new name for leader style [Annotative 3-32] > Check Annotative box on > Click [Continue]
• [STEP 04] Update these values to 3/32″ – Text height (Content tab), Landing gap (Content tab), Arrowhead size (Leader Format), and Break size (Leader Format) – and set the Landing distance (Leader Structure) > Click [OK] to close the window
• [STEP 05] Click [Set Current]
• [STEP 06] Click [Multileader] from Annotate tab, on Leader panel
or, type [MLD] to add leader and text
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Draw RCPs from Floor plans
(CO 2) Add/Edit Hatch
(CO 3) Create a legend for the RCPs
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Draw Reflected ceiling plans (RCPs) from floor plans
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=82
In this tutorial, you will learn how to draw an RCP in AutoCAD.
Open your AutoCAD file
Copy elements from the floor plan
• [STEP 01] Turn layers off that are not needed to copy for the RCPs.
Turn off the following layers: A-ANNO, A-ANNO-DIMS, A-ANNO-TEXT, ANNO-TTLB, A-CLNG, A-CLNG-PATT, A-DETL, A-FLOR, A-FLOR-FNSH, A-FLOR-HRAL, A-LWT-OBJECT, A-LWT-SECTION, A-LWT-SURFACE, Defpoints, I-FURN, P-FIX, S-LEVEL
• [STEP 02] Copy the remaining elements to 100’ plan south direction
• [STEP 03] Turn on all layers
• [STEP 04] Clean up the plan
• In the space, we want to show all door openings, but not the doors themselves. To remove the door swing, you must Explode the door blocks and remove the doors themselves and swings
• For the casework, remove the casework that is not below 5 ft.
• Extend lines as needed to clean up the plan
• [STEP 05] Lock the [S-GRID] layer > Select all walls and elements for the RCPs > Switch them to the [A-CLNG] layer.
• [STEP 06] Make the [A-CLNG] layer the current layer
• [STEP 07] Add ceiling elements (height changes, and the ceiling structure if it is an open ceiling)
• [STEP 08] Make the [A-CLNG-PATT] layer the current layer > Confirm the layer color is 34 > Add the ceiling pattern
• [STEP 09] Make the [A-CLNG-PATT] layer the current layer > Add the ceiling annotation
• [STEP 10] Copy the room name and number block from the floor plan
• [STEP 11] Create Lighting symbols
• You can bring lighting symbols (blocks) from other websites or the samples from Electrical-lighting folders using [DESIGN CENTER]
• You can also make your own symbols for your project; I created eight types of lighting fixtures
• [STEP 12] Add an [A-CLNG-FIXT] layer (color 190, lineweight 0.25mm) > Place lighting symbols (blocks) in the floor plans > Switch the lighting symbols to [A-CLNG-FIXT]
• [STEP 13] Select all lighting symbols from the floor plans > Create a block [000-lighting fixtures] > Move the block to the RCPs
• [STEP 14] Select the block on the RCPs > right-click > Select [Edit Block In-place] > Adjust and align the lighting fixtures, adding dimensions > Click [Save changes]
(CO 2) Add /Edit Hatch
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=82
• [STEP 1] Hide unnecessary layers for better hatch results
• Confirm the current layer is [0] layer
• Turn off all layers except [0], [A-WALL], [A-LWT-SECTION], and [A-CLNG] layers
• [STEP 2] Draw lines or pline to create a closed object
• Hatch only recognizes a closed object, so you must make a closed pline, or lines that close completely. If you are unsure whether the object is closed, try the Hatch command first; if the Hatch command doesn't work, redraw using [PLINE] or use [PEDIT] to convert the lines to a [pline].
• [STEP 3] Add hatch for the wall
• Confirm the current layer is [A-WALL]
• Click the Hatch icon from the [HOME] tab, on the Draw panel
or, type [H] and press [ENTER]
• Select Solid from the [HATCH EDITOR] tab, on the [PATTERN] panel
• Select color 9 from the [HATCH EDITOR] tab, on the [PROPERTIES] panel
• As you move your mouse cursor over a closed object, the hatch pattern fills the area. If you don’t get the preview, that means you don’t have a closed object.
• You can select multiple areas for one hatch command
• Press [ENTER] to finish the hatch command
• [STEP 4] Copy wall fills to RCPs
• Select the wall fills you just made. If necessary, you can also make a block for the wall fills by typing [B] and pressing [ENTER]
• Change the drawing order by right-clicking and selecting [SEND TO BACK]
• Copy the wall fills from the floor plans to the RCPs
• [STEP 5] Add hatch for the section view
• Select the section view
• Right-click > select [EDIT BLOCK IN-PLACE] > Click [OK]
• Draw lines or polylines to make a closed object. Make sure you are on the correct layer
• Select [HATCH] icon from [HOME] tab, on [DRAW] panel
or, Type [H], and press [ENTER] key
• Select a closed wall, floor, or ceiling area. You can select multiple areas for one hatch. Press the [ENTER] key to finish the command.
• Update the hatch fill layer to [A-LWT-SECTION]
• Once you finish hatching, click [SAVE CHANGES] to save the block > Click [OK]
• [STEP 6] Turn on all layers
The Hatch command fills an enclosed area or a selected closed object with a hatch pattern or a solid fill.
Hatch is often used to add a fill to wall, floor, and ceiling thicknesses for better readability.
Moreover, hatch is also used to add patterns on a surface to express the finishes.
In this tutorial, you will practice adding solid fills on the walls of the floor plans and RCPs – this is commonly referred to as poché. Additionally, you will add a hatch pattern on the wall, floor, and ceiling of the section view.
(CO 3) Create a legend for the RCPs
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=82
In your RCPs, a legend is necessary to identify the items used, such as lighting fixtures, finishes, equipment, HVAC, and more.
In this tutorial, you will make a legend of the lighting fixtures that you used in your RCPs
• [STEP 1] Find/organize the lighting fixtures that you used
• Copy the lighting block from the RCP to an empty drawing area
• Explode the lighting block by typing [X] and pressing the [ENTER] key
• Remove all dimensions from the exploded lighting block. You can lock [A-CLNG-FIXT] > select the dimensions > delete the selected dimensions by pressing the [DELETE] key, or type [ERASE] and press the [ENTER] key
• Delete duplicated lighting fixtures. You only need one lighting fixture block of each lighting fixture type.
• Organize the lighting fixtures vertically. You can draw a line for a reference.
• [STEP 2] Create a table with lines
• AutoCAD offers a Table tool to create a table. You can find the tool from the [ANNOTATE] tab under the [TABLES] panel. However, this table tool is used for very complicated documents. As an interior designer with more than ten years’ experience in the industry, I still use lines to create a table. Please try the table tool if you want.
• Copy the reference line for the column lines for the table. You can use the [OFFSET] command, too.
• Draw rows with lines
• To clean up lines, use [TRIM] commands
• [STEP 3] Add texts to the table
• Make sure your drawing scale is 3/16″ = 1′-0″.
• Type [MT] and press [ENTER] to draw a text box and add text
• Repeat the [MTEXT] command for other text. You also can copy from what you created.
• [STEP 4] Select the table and the table contents > Switch them to the [A-ANNO] layer
• [STEP 5] Make the table a block [000-Lighting Legend]
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand Model space and Paper space
(CO 2) Set a new layout – Page layout and plot styles
(CO 3) Set views in Paper space – Defpoints, scaling
(CO 4) Add/Edit/Draw a titleblock
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand Model space and Paper space
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=84
In this tutorial, you will understand the differences between the concept of model space and the concept of paper space. AutoCAD provides two different types of drawing areas.
• Model space – A limitless drawing area. You draw at a 1:1 scale.
• Paper space – To prepare your drawing for printing, use paper space. Paper space is a pre-defined and set area. Please refer to the information from this link.
Clean up the CAD file
• Purge is a command that automatically removes all layers, blocks, dimension styles, and other items that are not currently used in your document. It is a useful command for reducing the file size.
• Type [PURGE] and press the [ENTER] key, select [PURGE ALL], and purge again until [PURGE ALL] is grayed out.
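In code, the whole purge loop is one call. A minimal pyautocad sketch (PurgeAll removes all unused named objects; like the dialog, it can be run again if nested items remain):

from pyautocad import Autocad

acad = Autocad()
acad.doc.PurgeAll()   # same as [PURGE] > [PURGE ALL]
acad.doc.Save()       # same as [CTRL + S]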
(CO 2) Set a new layout – Page setup
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=84
In this tutorial, you will understand how to set a new layout in the paper space using page setup.
Once you click [LAYOUT 1], you will see the models you made in the model space in a rectangular box. It is called a viewport.
There is a dashed line that is inside of the white space; it is called a printable area.
And the white space is called a layout. Once you change the paper size to print, the layout and the printable area will change accordingly.
Update Page setup for 11in x 17 in PDF
• [STEP 1] Open Page setup manager
• Mouse right-click on [LAYOUT 1]
• Click [PAGE SETUP MANAGER]
• Click [*Layout1*] and select [MODIFY] to open the Page setup
• [STEP 2] Edit Page Setup
• Click the name of printer/plotter and switch to [DWG To PDF.pc3]
• Update the paper size to [ANSI full-bleed B (17.00 x 11.00 Inches)]
• Confirm What to plot: [LAYOUT]
• Confirm Plot scale to [1:1]
• Change Plot style table to [monochrome.ctb]
• Check [Display plot styles]
• Check [Landscape] for the drawing orientation
• Click [OK] – Page Setup – Layout 1
• Click [Close] – Page Setup Manager
• [STEP 3] Confirm the layout with an updated sheet size
• [STEP 4] Update the name of the sheets and add sheets
• To update the name of a sheet > right-click on the tab > Click [RENAME] > Type the new name on the tab
• To add a sheet > click [+] tab
• Update the name of the Layout1 to A101. A101 is for the floor plans
• Update the name of the Layout2 to A102. A102 is for the furniture plans
• Add a new layout and change the name to A401. A401 is for RCPs
• Add a new layout and change the name to A601. A601 is for the section view and the elevation
• [STEP 5] Update other sheets to 11×17 PDF
• Right-click on A102
• Open the Page Setup Manager
• Select *A101*
• Click [Set Current]
• Click [Close]
• Repeat this process for A401 and A601
(CO 3) Add/Edit/Draw a titleblock
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=84
• [STEP 1] Draw a titleblock
• Select [A101] sheet to open the sheet
• Confirm your current layer is [0]
• Draw a rectangle for a paper size box – Type [REC], press the [Enter] key > Type [0,0], press the [Enter] key > Type [17″,11″], press the [Enter] key
• Draw a titleblock outline using the paper size box – Type [O], press [Enter] key > Type [1/4”], press [Enter] key > Click the inside of the sheet
• Draw lines for the titleblock. Please refer to the image below
• [STEP 2] Add titleblock information
• Sheet number – A101
• Project name – Eames House, House
• Your name – First name and last name
• Course number
• Submission date
• Add text using [MT] for titleblock information. Verify the text size is 3/32”. You may rotate the text 90 degrees.
• [STEP 3] Add the titleblock to other sheets
• Select the titleblock and the titleblock information, except for the sheet number
• Create a block for the selected elements – name the block [000_Titleblock_11x17]
• To place it on another sheet, click [Insert] from the [Insert] tab, on the [Block] panel
• Select [000_Titleblock_11x17]
• Type [0,0], and press the [Enter] key
• Insert the titleblock into A102, A401, and A601 this way
• Switch the titleblocks to the [A-ANNO-TTLB] layer
• Copy and modify the sheet number on A102, A401, and A601
• Now the titleblock is set on each sheet in the paper space.
(CO 4) Set views in Paper space – Defpoints, scaling
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=84
Now you are ready to set the views in the sheets
• [STEP 1] Change a viewport size and the viewport layer
• Once you click a viewport, its size can be changed by stretching the corners of the viewport boundary.
• Select the viewport > switch to [Defpoints] layer. The [Defpoints] layer is set up by default as a non-plot layer.
• [STEP 2] Update the scale of the viewport
• After changing the viewport size > double-click the viewport > Zoom in and out > Pan the view to make the view centered
• Click a scale on the application status bar, and select the desired scale for the floor plan (a command-line alternative, ZOOM nXP, is sketched after these steps)
• If the dimensions have disappeared, click on the Show Annotation icon on the application status bar
• [STEP 3] If you want to add a new viewport, use [MVIEW] command.
• Type [MV], and press [Enter] > Click the first point to draw a rectangle > Click a second point to finish the rectangle. It will automatically show the drawings
• If you have a viewport to copy, you can copy the viewport and pan the view
• Sometimes a copied viewport will show in color. To change it to black and white, double-click the inside of the viewport > type [RE] and press the [ENTER] key > double-click the outside of the viewport
• [STEP 4] Update layer visibility for the viewports
• Open [LAYER PROPERTIES MANAGER]
• A101 is for the dimensioned floor plans. Thus, you should hide the [I-FURN], [I-CASE], and [P-FIXT] layers
• Double-click the viewport, and click [VP FREEZE] on the [I-FURN], [I-CASE], and [P-FIXT] layers
• [STEP 5] Set a furniture plan sheet
• The scale of the furniture plan is 3/16″ = 1′-0″
• VP Freeze [A-ANNO], [A-ANNO-DIMS], [S-GRID], and [S-LEVEL]
• [STEP 6] Set an RCP sheet
• The scale of the RCP is 3/16″ = 1′-0″
• [STEP 7] Set a section view and elevation sheet
• The scale of the section view is 3/16″ = 1′-0″
• The scale of the elevation is 3/8″ = 1′-0″
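The command-line alternative mentioned in [STEP 2] above: from inside an active viewport, ZOOM nXP scales model space relative to paper space, so 1/64XP gives 3/16″ = 1′-0″ and 1/32XP gives 3/8″ = 1′-0″ (the 64 and 32 come from the scale-factor arithmetic earlier). A pyautocad sketch that types the sequence, assuming the target layout with a viewport is current:

from pyautocad import Autocad

acad = Autocad()
# MSPACE enters the viewport, PSPACE returns to the sheet.
acad.doc.SendCommand("_.MSPACE _.ZOOM 1/64XP _.PSPACE ")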
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit symbols in Paper spaces – drawing title, elevation symbol and section letters, north arrow
(CO 2) Printing
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit symbols in Paper spaces – drawing title, elevation symbol and section letters, north arrow
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=86
Open your AutoCAD file.
Add drawing title and scale for Floor plans, Furniture plans, RCPs, Section view, and elevation
• [STEP 1] Click [A101] sheet to open the view
• [STEP 2] Open [DESIGN CENTER]
• [STEP 3] Find [A-01.dwg] for a sample drawing title
• Click the [HOME] icon
• Open [Sheet Sets] > [Architectural] > [A-01.dwg]
• Open [Blocks]
• Double click [Drawing Title]
• [STEP 4] Add on the sheet
• [STEP 5] Double-click the inserted drawing title and edit the attribute. Adjust the title line length
• [STEP 6] Copy the edited drawing title for the 2nd-floor plan and double-click to edit the drawing title
• [STEP 7] Repeat [STEP 6] for other drawings
• [STEP 8] Update the drawing titles to [A-ANNO] layer
Add an elevation symbol from [TOOL PALETTES]
• [STEP 1] Open [TOOL PALETTES] from [VIEW] tab, on [PALETTES]
• [STEP 2] Click [Annotation] tab from [TOOL PALETTES]
• [STEP 3] Select [CALLOUT BUBBLE – IMPERIAL]
• [STEP 4] Add the symbol on the first floor
• [STEP 5] Double-click to open the attribute editor, and update information
• [STEP 6] Update scale to 0.75
• [STEP 7] Update the symbols to [A-ANNO] layer
Add section letters with [MTEXT]
• [STEP 1] Type [MT], press [ENTER] key
• [STEP 2] Draw a text box to add the section letter
• [STEP 3] Text size update to 1/8”
• [STEP 4] Add section letter
• [STEP 5] Repeat this process for other sections
• [STEP 6] Update section letter to [A-ANNO] layer
Add north arrow with drawing tools [LINE], [CIRCLE], [MTEXT], & [HATCH]
• [STEP 1] Draw a north arrow in the paper space and make a block
• [STEP 2] Rotate the north arrow to match the information from Eames House plan images
• [STEP 3] Place the north arrow on the floor plans, furniture plans, and RCPs
(CO 2) Printing
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=86
Update the print setup for each layout
• [STEP 1] Open [PAGE SETUP MANAGER] by mouse right-click on each layout.
• [STEP 2] Click [Modify] to open [Page setup]
• [STEP 3] Plot style table is already set to Monochrome.ctb. The Monochrome style turns all layer colors to black. Since we don’t want our wall, floor, and ceiling thickness to be black, we need to customize the plot style.
• [STEP 4] Click the edit button next to the name
• [STEP 5] Click [Color 9] from [PLOT STYLES] and change to [USE OBJECT COLOR]
• [STEP 6] Click [SAVE & CLOSE]
• [STEP 7] If the color does not change automatically in your paper space, use the [REGEN] command for the views.
Print a single sheet
• [STEP 1] Click a layout that you want to print
• [STEP 2] Press [CTRL+P] to open [PLOT]
• [STEP 3] Click [PREVIEW] from the [PLOT] window
• [STEP 4] Click [OK] > find a location to save > click [SAVE]
Print multiple sheets at once
• [STEP 1] Click [Home] button > Select [Publish] to open [PUBLISH] window
• [STEP 2] Select the drawing model if it shows, right-click > click [Remove]
• [STEP 3] Click [PUBLISH]
• [STEP 4] Specify the file location to save > click [SELECT]
• [STEP 5] If you want to save the current list of sheets for later use, click [Yes]
• [STEP 6] Click [Close] once the processing-background-job window pops up
• [STEP 7] Wait until the completion message appears
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Purpose of using Revit – How & Why Interior Design uses Revit
(CO 2) Install Revit
(CO 3) Understand Project Templates – Default and Own
(CO 4) Save Revit file – Set the project folder and backup file
(CO 5) Open an existing project file
(CO 6) Keyboard shortcuts
(CO 7) Understand the User Interface – Toolbar, Properties, Drawing area, Option bar, and Project Browser
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Purpose of using Revit – How & Why Interior Design uses Revit
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Introduction of Revit, a BIM (Building Information Model) software
• Charles River Software originally developed the software in 1997; the company was renamed Revit Technology Corporation in 2000 and was acquired by Autodesk in 2002.
• Autodesk Revit is BIM software for Architects, Interior Designers, Landscape Architects, Structural Engineers, MEP Engineers, Contractors, and more.
• Revit can be used as a very powerful collaboration tool among different disciplines in the Architecture, Engineering, and Construction (AEC) industry.
image credit: https://commons.wikimedia.org/wiki/File:Revit.jpg
• Revit has a huge library for modeling in the software itself and supports plug-ins, advanced data management systems, and energy modeling analysis.
• The Revit modeling process is complicated and not easy for a first-time user.
• The software requires better hardware and is roughly 40% more expensive than Autodesk CAD, but offers higher productivity.
• For architects and interior designers, the software is being used to create construction documents.
• Most of the commercial interior design/architectural design firms use Revit.
• It does not offer many rendering features, but it supports Virtual Reality plug-ins.
• It can collaborate with CAD drawing users.
(CO 2) Install Revit
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Install Autodesk Revit
• [STEP 1] Go to https://www.autodesk.com/education/free-software/revit on your window side in a web browser.
• [STEP 2] Click [CREATE ACCOUNT] if you do not have one. If you have the account, please sign in.
• [STEP 3] Click [Get product] under Revit.
• [STEP 4] Select Windows, 2021, and English. Click [INSTALL].
• [STEP 5] Accept the license and services agreement.
• [STEP 6] You will get an email from Autodesk with the license information (Product Key and Serial Number). You may need this information for the activation process.
• [STEP 7] Click Install.
• [STEP 8] Click downloaded installation file to install.
• [STEP 9] Setup Initialization will download the actual software to install; it will take time.
• [STEP 10] While installing, select the “Architecture” discipline in the drop-down menu.
• [STEP 11] After installation, the license information is required to activate Revit.
(CO 3) Understand Project Templates – Default and Own
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Open Revit by double-clicking the Revit Icon.
Once the application opens, you can see the HOME page. From the home page, you can open recent project files and recent family files. You can also create a new project file or a new family file.
Create a new project file with Architecture Template.
• [STEP 1] Click [NEW] under Models on the left side of the Home page
• [STEP 2] Click [Imperial-Construction Template] and select [Imperial-Architectural Template]
• [STEP 3] Click [OK] to create a new project file.
If you are working on a specific project type, like a residential building or a commercial project, there are more templates that Autodesk supports. To open a template, you can click [BROWSE] and see which template is appropriate for your project.
Additionally, you can create a project template for your firm. Many firms have already created and use their own template (called a seed file) to save time and resources when developing their construction documents.
(CO 4) Save Revit file – Set the project folder and backup file
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Make saving files a habit. Once you create a new file, save the Revit file first, then continue working on it.
Save a file for the first time
• [STEP 1] Click [FILE] tab
• [STEP 2] Select [SAVE AS] from the file menu
• [STEP 3] Select [Project] under the menu
• [STEP 4] Find a folder location to save to. If you do not have a folder, you can create one by pressing [ALT+5]. It is recommended to have a project folder to save the project Revit file. You should also save other family files (e.g., furniture and lighting families that you downloaded) and material images that you found under the project folder. You may drag and drop the folder to the left side of the browser for future use.
Tip: I strongly recommend having an external hard drive to save files. If your laptop hard disk is full, your hardware performance will be drastically reduced. To prevent this, save your working file on an external hard disk. For extra safety, it is recommended to use cloud file storage as well.
• [STEP 5] Open a folder to save and make a unique name for the project.
• [STEP 6] Before you click [SAVE], you may consider setting up backup files by clicking [OPTIONS]. Revit will automatically make backup project files. It is safe to have the backup files, but the automatic backup will take time while you work. You may set the number of backup files (the default is 20) by opening [OPTIONS] in the Save As dialog; 5 backup files might be enough.
Save a file while working or end of the work
• [STEP 1] Click [FILE] tab
• [STEP 2] Select [SAVE] or simply press [CTRL + S]
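If you also keep backups outside Revit, a few lines of plain Python (run outside Revit, not inside the application) can copy the saved file to a second location such as a synced cloud folder. This is only an optional sketch; both file paths below are hypothetical placeholders.

    import shutil

    # Hypothetical paths: the working file and a cloud-synced backup folder
    src = r"C:\MyProject\EamesHouse.rvt"
    dst = r"C:\Users\me\OneDrive\Backups\EamesHouse.rvt"

    shutil.copy2(src, dst)  # copy2 also preserves the file's timestamps
    print("Backup written to", dst)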
(CO 5) Open an existing project file
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Revit has two ways to open your project file.
The first method is simply double-clicking your Revit project file.
The second method is opening the file from within the application. I recommend this second option because it reduces errors, especially while you are sharing a file with your co-workers.
• [STEP 1] Open the Revit software
• [STEP 2] Click [File] on the tab
• [STEP 3] Click [Open] on the file tab
• [STEP 4] Select the file that you want to open under the folder
• [STEP 5] Click [Open] to open the file
(CO 6) Keyboard shortcuts
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
To increase productivity, I recommend you become familiar with the following keyboard shortcuts.
• MV Move
• TL Thin Line
• VV Visibility
• ZE Zoom All to fit
• WT Tile Views
• TW Tab Views
• DI Aligned Dimension
• DL Detail Line
• CO Copy
• RO Rotate
• MA Match type properties
• AL Align
• PT Paint
For more information regarding shortcuts
https://www.autodesk.com/shortcuts/revit
(CO 7) Understand the User Interface – Toolbar, Properties, Drawing area, Option bar, and Project Browser
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=88
Revit’s user interface is similar to AutoCAD’s, including icons with text and similar wording.
You may change the location of Properties and Project Browser to expand more drawing area.
You can also change the Ribbon panel size by clicking the minimize panel button.
If you accidentally close Properties or the Project Browser, you can reopen them with [USER INTERFACE]:
Select [VIEW] tab > Click [USER INTERFACE] > Check the box to reopen.
image credit: Screen captured and modified by the Author from the application
In the ribbon, you will find many tools for your work.
• Architecture
• Insert
• Annotate
• Analyze
• Massing & site
• Collaborate
• View
• Manage
• Add-Ins
• Modify
For more information regarding the user interface, please read this page
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand the site plan and information
(CO 2) Set the project location and understand building base point
(CO 3) Import google maps and define true north & project north
(CO 4) Find GIS information
(CO 5) Add & Edit Site – Topo surface, roads, sidewalks, property line, building pad, surrounding buildings, and trees
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand the site plan and information
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=90
The site plan is an architectural plan of proposed improvements to a given lot. A site plan usually shows a building footprint, travel ways, parking, and landscaping and garden elements (Department of Building and Development Land Development, 2009).
image credit: Archibald & Fraser Architects Ltd. – Wikimedia Commons – File: Lochaber Centre Site Plan.jpg
A site plan is a “set of construction drawings that a building or contractor uses to make improvements to the property. Counties can use the site plan to verify that development codes are being met and as a historical resource. Site plans are often prepared by a design consultant who must be either a licensed engineer, architect, landscape architect, or land surveyor” (Chesterfield County, 2009).
A site plan is a top view of a property that is drawn to scale. A site plan can show
• Property lines
• Outline of existing and proposed building and structures
• Parking lots, indicating parking spaces
• Driveways
• Surrounding streets
• Landscaping areas
• Terrains
(CO 2) Set the project location and understand building base point
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=90
Draw the building footprint for the site plan
• [STEP 1] Acknowledge the overall size of the building.
• The size of the house part of the Eames House project is 58’-7” x 20’-4.”
• [STEP 2] Select [WALL] from [ARCHITECTURE] tab, under [Build] panel
Or, Type [WA] on your keyboard
• Draw only overall exterior walls on level 1
• Make sure your walls are [Project North] on the [Properties] palette.
• Confirm the wall is – Basic Wall/Generic-8”, Unconnected Height 20’ 0.”
• [STEP 3] Draw the building footprint
• Click a drawing area
• Move the mouse to the right-side
• Enter 58’7” on your keyboard > press [ENTER] key
• Move your mouse to the down-side
• Enter 20’4” on your keyboard > press [ENTER] key
• Move your mouse to the left-side
• Click the third and fourth points to create the building footprint
• [STEP 4] Move the elevation symbols closer to the building
• Select one elevation symbol
• Type [MV]
• And click one point and move the mouse and click the target point to complete the command.
• Repeat this process for the other elevation symbols
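For readers comfortable with scripting, the same footprint can also be drawn programmatically. The sketch below is only an illustration, assuming the Revit API is available through pyRevit (run inside Revit) and that a level already exists; it is not part of the required steps.

    from pyrevit import revit, DB

    doc = revit.doc
    level = DB.FilteredElementCollector(doc).OfClass(DB.Level).FirstElement()

    def ft(feet, inches=0.0):
        # Revit's internal length unit is decimal feet
        return feet + inches / 12.0

    w, d = ft(58, 7), ft(20, 4)  # overall footprint, 58'-7" x 20'-4"
    pts = [DB.XYZ(0, 0, 0), DB.XYZ(w, 0, 0), DB.XYZ(w, -d, 0), DB.XYZ(0, -d, 0)]

    with revit.Transaction("Draw footprint walls"):
        for i in range(4):
            line = DB.Line.CreateBound(pts[i], pts[(i + 1) % 4])
            DB.Wall.Create(doc, line, level.Id, False)  # False = non-structural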
Set the project location
This setting is for sun orientation and weather information
• [STEP 1] Select [LOCATION] from [MANAGE] tab, under [PROJECT LOCATION]
• [STEP 2] Enter the project address (Eames House address is 203 N. Chautauqua Blvd. Pacific Palisades, CA) and Select [SEARCH]
• [STEP 3] Select a Weather Station near the project location
• [STEP 4] Click [OK] to complete the command
(CO 3) Import google maps and define true north & project north
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=90
Save site information from Google Map
• [STEP 1] Open your web browser and go to [GOOGLE MAP] – https://www.google.com/maps
• [STEP 2] Search the address [for Eames House project, 203 N. Chautauqua Blvd. Pacific Palisades, California]
• [STEP 3] Use the [Snipping Tool] (search for Snipping Tool in your program list) to take a screenshot; it must include the building, the property line, and the scale on the bottom left corner. For your drawing accuracy, I recommend saving both the map image and the satellite map image.
image credit: search result from Google map
• [STEP 4] Save the snipped image file in JPG file format in your project folder
Insert site information from google map
• [STEP 1] Open [SITE] plan by double-clicking from the [PROJECT BROWSER]
• [STEP 2] Move the project origin to the building bottom left corner
• [STEP 3] Change the orientation to [TRUE NORTH]
• [STEP 4] Select [IMPORT IMAGE], from [INSERT] tab
• [STEP 5] Open the project folder and select the google map and click [OPEN]
• [STEP 6] Click on the center of the drawing area
• [STEP 7] Repeat this process for the satellite map
Adjust the scale of the imported google map
• [STEP 1] Select the two imported maps by crossing selection
• [STEP 2] Click [SCALE] from [MODIFY/RASTER IMAGE] tab, under [MODIFY] panel
or, Type [RE] for adjusting the scale
• [STEP 3] Zoom-in to find the graphic scale on the bottom right side of the map
• [STEP 4] Click 0ft – click 50 ft – type 50’ – Enter
• [STEP 5] Move the imported raster maps to be centered
Adjust the orientation of the imported google maps according to the building footprint
• [STEP 1] Select the imported google maps
• [STEP 2] Select [ROTATE] from [MODIFY] tab, under [MODIFY] panel
Or, Type [RO] on your keyboard
• [STEP 3] Select [PLACE]
• [STEP 4] Click the base point to rotate, click an appropriate point to rotate.
Remember how many degrees you rotated (for Eames House, 75.56 clockwise)
• [STEP 5] Relocate the imported google maps, if needed
• [STEP 6] Select only the satellite map > click [SEND TO BACK] from [MODIFY/RASTER IMAGE] under [ARRANGE] panel
• [STEP 7] Select the project base origin
• [STEP 8] Click [ANGLE TO TRUE NORTH]
• [STEP 9] Type [-75.56], and press [ENTER] key (for Eames House, we rotated the google images to 75.56 clockwise, so you need to re-rotate the origin to 75.56 counter-clock-wise)
• [STEP 10] Change the view to [PROJECT NORTH] from the [PROPERTIES] to check that the plan rotated the right way.
• [STEP 11] Change the view to [TRUE NORTH] from the [PROPERTIES] for the site plan
Crop the view to only the site plan area
• [STEP 1] Select [CROP VIEW] and [CROP REGION VISIBLE] from the [PROPERTIES] panel
• [STEP 2] Adjust the region for the view
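The sign convention in steps 8-9 above (rotate the images +75.56 degrees, then set the Angle to True North to -75.56) can also be inspected through the API. The snippet below is a hedged pyRevit sketch for checking or setting that angle; the -75.56 value is specific to this example project.

    import math
    from pyrevit import revit, DB

    doc = revit.doc
    loc = doc.ActiveProjectLocation
    pos = loc.GetProjectPosition(DB.XYZ.Zero)  # the angle is stored in radians
    print("Current angle to true north:", round(math.degrees(pos.Angle), 2), "deg")

    with revit.Transaction("Set angle to true north"):
        pos.Angle = math.radians(-75.56)  # undo the +75.56 deg map rotation
        loc.SetProjectPosition(DB.XYZ.Zero, pos)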
(CO 4) Find GIS information
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=90
In this tutorial, we will find GIS information from CADMAPPER
• [STEP 1] Open your web browser, go to https://cadmapper.com/
• [STEP 2] Sign up for free if you don’t have an account, then sign in on the website.
• [STEP 3] Enter the project address [203 N. Chautauqua Blvd. Pacific Palisades, CA] and search
• [STEP 4] Adjust the area for your project by zooming in and changing the selected area. It is free up to 1 km².
• [STEP 5] Click [CREATE FILE] to generate the 3D model
image credit: Screen captured from www.cadmapper.com
• [STEP 6] You can see the preview for your confirmation. Click [DOWNLOAD]
image credit: Screen captured from www.cadmapper.com
• [STEP 7] Once the download is done, open the folder to extract the zip file
image credit: Screen captured from www.cadmapper.com
• [STEP 8] Extract the zip file
• [STEP 9] Copy the files to your project folder and read the License.txt file before you use the file
• [STEP 10] Open the LEVEL 1 view
• [STEP 11] Click [IMPORT CAD], from [INSERT] tab, under [IMPORT] panel
• [STEP 12] Open the project folder, change the file type to [DXF files], change colors to [Black and White], change Import units to [meter], Click [OPEN]
• [STEP 13] Move the imported CAD site information to the site. To move the map, you have to unpin it first.
• [STEP 14] Switch the view to [SITE] view, Uncheck [CROP VIEW] from [PROPERTIES]
• [STEP 15] Rotate the imported CAD site map to match the imported GOOGLE MAP (75.56 degrees clockwise)
• [STEP 16] Move the imported CAD site map to match the imported GOOGLE MAP.
• To move correctly, you can switch the graphic display option to [WIREFRAME]
• Refer to other building locations and road locations to get the right location aligned
• [STEP 17] Open [WEST] view
• [STEP 18] Move the imported CAD site map to match the building level 1
• You can type the Base Offset or manually move to match
(CO 5) Add & Edit Site – Topo surface, roads, sidewalks, property line, building pad, surrounding buildings, and trees
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=90
Create a TOPO SURFACE
Note: The building site might be completely flat or have only a little level change. Please try the TOPO SURFACE tool to make the site model.
• [STEP 1] Select [TOPOSURFACE], from [MASSING & SITE] under [MODEL SITE]
• [STEP 2] Select [CREATE FROM IMPORT], from [MODIFY/EDIT SURFACE] under [TOOLS]
• [STEP 3] Select [SELECT IMPORT INSTANCE]
• [STEP 4] Select the imported CAD site map
• [STEP 5] Only check [CONTOURS] in the [ADD POINTS FROM SELECTED LAYERS] window
• [STEP 6] Adjust point locations if needed
• [STEP 7] Click the green check to complete the topo surface tool
Edit TOPO SURFACE
• [STEP 1] If you want to edit the topo surface, select the topography and select [EDIT SURFACE]
• [STEP 2] If you want to add the point, use [PLACE POINT]
• [STEP 3] If you want to remove the point, select the point and press the [DELETE] key
Topography line setting
• [STEP 1] Select [SITE SETTING] from [MASSING & SITE] tab, under [MODEL SITE].
• It is a small arrow on the panel
• [STEP 2] Set your topo lines
The increment number depends on your site map scale and how much detail you want to express
• [STEP 3] Select [EDIT] for [VISIBILITY/GRAPHICS OVERRIDE]
Find [TOPOGRAPHY]
PRIMARY CONTOURS – SOLID / SECONDARY CONTOURS – DASH
Clean up the imported CAD map
• [STEP 1] Select the imported CAD map
• [STEP 2] Click [DELETE LAYERS] from [MODIFY], under [IMPORT INSTANCE]
• [STEP 3] Check all except [BUILDING]
• [STEP 4] Click [OK] to finish the command
• [STEP 5] Check [CROP VIEW] to see only the region inside
Confirm your topo in 3D
Revit is BIM software, so your 2D drawings can be shown in 3D. It is wise to double-check in a 3D view while you build your model
• [STEP 1] To create a 3D view, click the 3D view on the top of the program
• [STEP 2] Once you click the 3D view, the 3D view will automatically open. To refer back to this view, you can open the 3D view on Project Browser by double-clicking. If you want to keep the view, you must rename the 3D view. For example, ISO-view-01
• [STEP 3] Click [SHADED] to see the color
• [STEP 4] Check [Section box] on the Properties and adjust the Section box by adjusting the blue arrow
• [STEP 5] Click [Sun path On]
• [STEP 6] Click Shadow
• [STEP 7] Now you can simulate the sun path by dragging and dropping the sun
Create Building Pad & Property line
For the building pad and property line, you should hide the topo image by clicking the image, right-clicking, and selecting [Hide in view] > [Elements]
• [STEP 01] Go to Site & massing tab – Click [Building Pad]
• [STEP 02] Make sure your building pad is on level 1
• [STEP 03] Draw a closed line
• [STEP 04] Go to Site & massing tab – Click [Property Line] – Create by sketching
• [STEP 05] Use Google Map to draw the property line. Draw a closed line
Create Roads
• Go to Site & massing tab – Click Subregion
• Draw a closed line
Create Neighborhoods (Trees)
• Go to Site & massing tab – Click Site component
Once you have finished your site plan, you should hide the Google image.
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Import CAD drawings- Floor plans, building elevations, and sections
(CO 2) Adjust and verify the scale
(CO 3) Create and modify grids and levels
(CO 4) Create plan views- Floors and ceilings
(CO 5) Create columns
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Import Drawings- Floor plans, building elevations, and sections
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=92
Prepare Drawings before import CAD drawings
• [STEP 1] Open the Revit application > Open your project > Double-click [LEVEL 1] in the [PROJECT BROWSER] > Close all other views except [LEVEL 1]
• [STEP 2] Select [SITE] information including the property line, trees, and neighborhood buildings > Mouse right-click on one of the selected elements > select [HIDE IN VIEW] > Select [ELEMENTS] or [CATEGORY]
Repeat this process for FLOOR PLAN – LEVEL 2, ELEVATION – EAST, WEST, NORTH, SOUTH, CEILING PLAN – LEVEL 1, LEVEL 2
Insert drawings
In this tutorial, we will practice how to add an image (drawing) to the view. You can insert CAD files and PDF files
• [STEP 1] Select [IMPORT IMAGE] from [INSERT] tab, under [IMPORT] panel
Note. Revit can import various file types – dwg, dxf, pdf, jpg, tif, rvt, and more
Note. For this project, I will use [tif] files
• [STEP 2] Find the folder where the drawings are saved > select the floor plan > click [OPEN] > place the imported image on the view [LEVEL 1]
Repeat this process for FLOOR PLAN – LEVEL 2, ELEVATION – EAST, WEST, NORTH, SOUTH.
Note. The imported image will only show in the view where you placed it.
Note. To use the image file that you imported on another level, like [LEVEL 2], click [MANAGE LINKS] > Click [IMAGES] tab > Click the image that you loaded in the model > Click [PLACE INSTANCE]
(CO 2) Adjust and verify the scale
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=92
Confirm the scale and position of the imported drawings
If the imported image/CAD file/PDF is at the right scale and position, skip this step, but double-check for accuracy
• [STEP 1] Select imported image > click [SCALE] from [MODIFY] tab, under [MODIFY] panel
• [STEP 2] Click the first point of a dimension that you know. I will use the overall building dimension (58’-7”). You may also use a graphic scale if you have one
You need to zoom in to select the first point in the middle of the dimension line.
• [STEP 3] Click the second point
• [STEP 4] Type the dimension [58’7”] and press [Enter] key
• [STEP 5] Zoom out to see the position
• [STEP 6] Move the drawing to match the base drawing. You can use a specific point to match the drawing to the base Revit model that you created
You may need to change the graphic display to [WIREFRAME] to see the image behind the model
Repeat this process for FLOOR PLAN – LEVEL 2, ELEVATION – EAST, WEST, NORTH, SOUTH
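Behind the [SCALE] tool is simple arithmetic: Revit multiplies the image by the ratio of the true dimension you type to the distance between the two points you clicked. A quick plain-Python sketch of that calculation follows; the measured on-screen value below is hypothetical.

    def feet(ft_, in_=0.0):
        return ft_ + in_ / 12.0

    true_length = feet(58, 7)      # known overall building dimension, 58'-7"
    measured_length = feet(41, 2)  # hypothetical on-screen distance between the two clicks

    scale_factor = true_length / measured_length
    print(round(scale_factor, 3))  # every image dimension is multiplied by this factor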
(CO 3) Create and modify grids and levels
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=92
Create grids
• [STEP 1] Open [LEVEL 1] view
• [STEP 2] Select [GRID] from [ARCHITECTURE] tab, under [DATUM] panel
• [STEP 3] Using the straight-line selection, hover over the view near the west wall until it becomes highlighted in the center of the wall and an “X” appears
• [STEP 4] Click the first point and click the second point to complete the gridline 1
• [STEP 5] To complete the remaining gridlines, you can copy or continue drawing gridlines based on the imported image
• [STEP 6] To copy, select the Revit gridline 1 that you just made. Select the Copy tool and click a point and specify the second point to the next grid line
• [STEP 7] Also you should use dimension [DI] to specify the distance between gridlines
• [STEP 8] Continue to copy all North/South gridlines
• [STEP 9] If the grid bubble and numbers are too big, you can update the view scale by clicking scale. For the Eames House project, use 3/8” = 1’-0”
• [STEP 10] Create a Horizontal new grid for East/West. And Update the Grid name to [A] and complete East/West grids
• [STEP 11] Your gridlines and dimensions can be locked
• [STEP 12] Open the [NORTH] view and adjust the grid line heights by dragging the edge of the grid line. Repeat this for the [WEST] view
Create and Modify levels
• [STEP 1] We will now make Levels. Begin by navigating to the [SOUTH] elevation in the Project Browser.
• Adjust the length of the level by dragging the edge of the level line
• Repeat this for [EAST] elevation view
• [STEP 2] After you have confirmed that Levels 1 & 2 are set to the correct elevation, you can add additional levels. To adjust the elevation number, click on the number and type in the correct elevation.
Note: Level 2 will be [9’-7”] higher than Level 1.
• [STEP 3] Add a roof level.
• Select [LEVEL] from [ARCHITECTURE] tab, under [DATUM] panel
Or, type [LL]
• Click the left corner of the level and click the second point to finish the command
• Rename [Roof] > Click OK in the window pop-up and change view name accordingly
• Adjust the height to [18′ 5 1/2″]
• [STEP 4] If you copy (CO) the level from an existing level, the view will not be created. Create Ground and Level 1 top (black symbol)
• [STEP 5] Update the view scales
• 3/16” = 1’-0” for all elevations
• 3/8” = 1’-0” for floor plans and ceiling plans
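As an aside for scripting-minded readers, levels can also be created through the API. The sketch below assumes pyRevit (run inside Revit) and uses this project's elevations; note that, like levels copied with [CO], API-created levels do not get plan views automatically.

    from pyrevit import revit, DB

    doc = revit.doc

    def ft(feet, inches=0.0):
        return feet + inches / 12.0  # Revit stores elevations in decimal feet

    with revit.Transaction("Add project levels"):
        # Assumes these level names are not already used in the project
        level2 = DB.Level.Create(doc, ft(9, 7))   # Level 2 at 9'-7"
        level2.Name = "Level 2"
        roof = DB.Level.Create(doc, ft(18, 5.5))  # Roof at 18'-5 1/2"
        roof.Name = "Roof"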
(CO 4) Create plan views- Floors and ceilings
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=92
Currently, you have what you need. When you add a new level [LL], the views are automatically created. If you made the level by copying existing levels [CO], you have to create new views for the level
• [STEP 1] Select [PLAN VIEWS]
• [STEP 2] Select [FLOOR PLAN] if you need a floor plan
• [STEP 3] Select [NAME OF THE LEVEL] and click [OK], then you can find the view from [PROJECT BROWSER]
If you need additional plans for furniture plans, power and data plans, and finish plans, you need to duplicate the view by right-clicking and selecting Duplicate View > Duplicate
Note, there are three options for duplicating views. Please refer to this link for further explanation
(CO 5) Create columns
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=92
We will now add columns beginning on the Ground level
• [STEP 1] Under the Architecture tab on the Ribbon select Structural Column.
• [STEP 2] Select the 5” x 5” column.
Note: an existing column family may need to be edited to create this size by clicking [EDIT TYPE] > Click [DUPLICATE] > add the name [W5x5] > click [OK] > update the width and height to 5”
• [STEP 3] Confirm the Base Level is set to Ground and the Top Level is set to Roof
• [STEP 4] Place a column on your [LEVEL 1]
• [STEP 5] It will be necessary to Move [MV] and Copy [CO] the Revit columns to align as shown on the imported floor plan and the grid that you created
Hide the imported drawing image to confirm all columns are properly placed
Type [TL] to see the thick lines for print
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit Beam
(CO 2) Create walls
(CO 3) Edit walls wall properties – Wall thickness, Wall details, & Finishes
(CO 4) Edit/add wall properties – Wall opening, wall sweep
(CO 5) Add/Edit Curtainwalls, Mullions, & Panels
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit Beam
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=94
Adding beams is not typical for interior design projects, but it is needed for the Eames House project. You may also use this information for an open ceiling plan in commercial projects.
You can find [BEAM] from the [STRUCTURE] tab, under the [STRUCTURE] panel.
If you are missing the Structure tab, click [HOME] menu > Click [OPTION] > Click [USER INTERFACE] > Check [STRUCTURE TAB AND TOOLS]
To see the structural element in solid shapes instead of lines, the view detail level must be [FINE]. Please update all floor plans, ceiling plans, and elevations to [FINE] on a detail level.
Add Beams
• [STEP 1] Click [Beam] on [Structure]
• [STEP 2] Select W Shapes-W5X5
• If you don’t have a W5x5 type. Please add one by clicking Edit Type > Duplicate > Name the type > Edit Width and Height to 5
• [STEP 3] Select [Level: Level 2] from Option Bar [Modify/Place Beam] for West Wall of the building
• [STEP 4] Click the center of the north column of the west wall, and click the center of the south column of the west wall. You will get a warning message, but it is OK, just click X
• [STEP 5] Confirm the place from the 3D view by clicking the HOUSE icon from quick access and hide site information by category. Update the 3D view scale to 3/8”=1’-0.”
• [STEP 6] Open the 3D view, West elevation, and the floor plan together by typing [WT]
• [STEP 7] The Beam must be rotated 90 degrees.
• Click the Beam, and change the properties
• Change Cross-Section Rotation to 90 degrees
• The beam location needs to be aligned from the Top view of the 3D view. Use Move (MV) command to move the Beam to be aligned.
• [STEP 8] Look at the West elevation and move the Beam to align with the imported image; the [MOVE] command will not work here.
• Thus, update the [Y OFFSET VALUE] to [1’ 1 ½”]
• [STEP 9] For the other beams, repeat step 1 through step 8. You may select all beams that you created and change the properties (steps 6 & 7), or copy elements that have the same properties.
• Complete beams for the second floor and the roof exterior beams, AND the second level interior beams
Add NEW Beams from a new family
• [STEP 1] Click [Load Family] on [Insert] tab
• [STEP 2] Download [LH-Series Bar Joist.rfa] from Canvas Eames House module
• [STEP 3] Find the folder and select the file and Open
• [STEP 4] On Roof plan, Click [BEAM] on the [STRUCTURE] tab
• [STEP 5] Click North column and South column to create the Bar Joist
• [STEP 6] Edit the family type to match the imported drawing
• [STEP 7] Copy (CO) the first Bar Joist for the other Bar joists
(CO 2) Create walls
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=94
Draw WALLs with Various types based on the imported base drawing image
• [STEP 1] Select [WALL] from [ARCHITECTURE] tab, under [BUILD] panel
or Type [WA]
• [STEP 2] Select a Wall family
• If you don’t have a wall type from the Properties, add by clicking Edit Type and Duplicate from a wall type.
• For Eames House, we will use
• Curtain Wall 1
• Retaining -12” Concrete
• Interior-4 7/8” Partition
• Interior-6 1/8” Partition
• Soffit-1/2” GWB & Metal Stud
• But the wall details will be modified, and custom walls will be added later in this tutorial.
• [STEP 3] Confirm [BASE CONSTRAINT] and [TOP CONSTRAINT]. If needed, set the base offset and top offset
• [STEP 4] Specify Location Line
• [STEP 5] If the wall is other than a straight line, specify a line/shape type (circle, arc, rectangle, inscribed polygon, or ellipse), and draw lines cross-referencing the imported base drawing image
• [STEP 6] Make Dimensions [DI] for verification purposes.
• [STEP 7] Confirm with 3D view
(CO 3) Edit walls wall properties – Wall thickness, Wall details, & Finishes
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=94
For your wall thickness, refer to Kilmer, W. O., & Kilmer, R. (2016). Construction Drawings and Details (3rd ed.).
For your design project, you will need to decide on your construction details.
Typical residential house wall thickness (for your reference; a quick arithmetic check follows this list)
• Interior Wall (without Plumbing) = 4 ½” ((1) 2”X4” wood stud, which are 3 ½” deep + (2) ½” Gypsum boards)
• Interior Wall (with Plumbing) = 6 ½” ((1) 2”X6” wood stud, which are 5 ½” deep + (2) ½” Gypsum boards)
• Exterior Wall (with Brick) = 10” ((1) 2”X6” wood stud, which are 5 ½” deep + (1) ½” Gypsum boards + (1) ½” Insulation + (1) 3 ½” Brick)
• Exterior Wall (with Stucco, wood, aluminum, or vinyl) = 7 1/2” ((1) 2”X6” wood stud, which are 5 ½” deep + (1) ½” Gypsum boards + (1) ½” Insulation + (1) 1” Stucco, wood, aluminum, or vinyl)
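Here is that arithmetic check of the layer sums quoted above, in inches (plain Python, just to show where each total comes from):

    interior_no_plumbing = 3.5 + 2 * 0.5        # 2x4 stud + two 1/2" gypsum boards
    interior_plumbing = 5.5 + 2 * 0.5           # 2x6 stud + two 1/2" gypsum boards
    exterior_brick = 5.5 + 0.5 + 0.5 + 3.5      # stud + gypsum + insulation + brick
    exterior_siding = 5.5 + 0.5 + 0.5 + 1.0     # stud + gypsum + insulation + stucco/siding

    print(interior_no_plumbing, interior_plumbing, exterior_brick, exterior_siding)
    # 4.5  6.5  10.0  7.5  (matching the totals listed above)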
For the Eames house project, although this building is residential, the construction is a combination of wood base and metal base. Revit provides walls for commercial construction, which are metal base, so you may use the provided walls and change some of them.
• Interior Wall = Wood stud with Gypsum BD + Paint Finish or Wood BD (3 1/8”, 4 7/8”, 6”, 8”)
• Exterior Wall = Curtain Wall with Glass, Wood, Metal panels
• Exterior Wall = Metal with Gypsum BD + Paint Finish or Wood BD
• Exterior Retaining Wall = Concrete
Once you click a wall, you can edit the properties (wall height-where it starts and ends & wall phase) for each wall
If you want to change the wall’s type (wall thickness, graphic style, materials, structure, BIM information), you can change it by clicking [Edit Type]
As a future strategy, it is wise to [Duplicate] the type and edit the type properties. For best practice, add “000” to the name of the new duplicate. This will keep the types organized in alphabetical order.
Click [EDIT] on [STRUCTURE] and change thickness and material and click [OK]
To change other walls from the Revit Metal base 4 7/8” wall to Wood base 4 ½.”
• [STEP 1] Click one of Metal base 4 7/8” wall
• [STEP 2] Mouse right click and click [Select all instances] and click [in Entire Project]
• [STEP 3] Select [000_Interior-4 ½” Partition]
(CO 4) Edit/add wall properties – Wall opening, wall sweep
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=94
Wall opening or wall elevation modification
• [STEP 1] On a plan view, click the wall in which you want to make the wall opening
• [STEP 2] Click [Edit Profile]
• [STEP 3] The Go To View window will open.
• [STEP 4] Select an appropriate view to open. I prefer an ISO view to quickly change the elements
• [STEP 5] Draw the opening with dimension
• [STEP 6] Click the green checkmark to finish
• [STEP 7] Double check in 3D view
Wall Sweep is a very useful tool to create moldings
• [STEP 1] To create a continuous molding. First, you need to load a profile by clicking [Load Family] from the [Insert] tab.
• [STEP 2] Find Profile from Revit library and open it to the file.
• [STEP 3] Click the small arrow under Wall on Architecture tab
• [STEP 4] Select Wall: Sweep in a 3D view. I recommend a camera view or a 3D view with a section box
• [STEP 5] Click [Edit Type] > Duplicate the type. Don’t forget to add “000” for your type.
• [STEP 6] Click profile under construction, select the profile you just loaded.
• [STEP 7] See the preview and click the appropriate location on the wall for the molding on the 3D view.
• [STEP 8] To finish the Sweep, press [ESC] on the keyboard.
(CO 5) Add/Edit Curtainwalls, Mullions, & Panels
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=94
Modify curtain wall
• [STEP 1] To add curtain wall grids, open a view. It can be an elevation view. You may close the floor plans.
• [STEP 2] For the Eames house project, confirm the imported base drawing – elevations are in the correct locations.
• [STEP 3] Click [Curtain Grid] on [Architecture] tab
• [STEP 4] Select [All Segments] or [One segment]
• [STEP 5] Click grid lines and use dimensions for accurate distance based on the imported elevations
• [STEP 6] You can edit the grid, click the grid, and click add/remove segments.
• Complete for all four exterior walls
Add curtain wall mullion
• [STEP 1] On the elevation, click [MULLION] on the [ARCHITECTURE] tab
• [STEP 2] You may choose All Grid Lines, Grid Line, or Grid Line Segment
• [STEP 3] Also, you may choose a specific type of Mullion by clicking Properties, and you can edit the type as well
• You can change material, size, profile, the width of sides, angle, & offset
• [STEP 4] Click lines to apply the mullion type that you selected
Modify Mullion
• [STEP 1] If you need to change the mullion type, you can click the mullion that you made and change the properties
• [STEP 2] If you need to change the mullion type and join order, click the Mullion you want to change and click [+] on the view to change the order
To change the panel from Glazing to Solid, door, or window
• [STEP 1] Select the panel to change by multiple [Tab] keys
• [STEP 2] Change Properties to what you need
• [STEP 3] If you need to change the properties, click [Edit type] and click [Duplicate] and change the value
• [STEP 4] If you need to add a new type, Load family first and change the Properties
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand View template, visibility graphics
(CO 2) Understand View range
(CO 3) Add/Edit Floors & Floor Properties
(CO 4) Add/Edit Ceilings & Ceiling Properties
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand View template, visibility graphics
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=96
[VIEW TEMPLATE] in Revit is a collection of view properties, such as a scale of a view, detail level, discipline, view ranges, orientations, model display, and visual settings. With using [VIEW TEMPLATE], you can apply standard settings to views.
For example, I can make a floor plan view template to apply to all my floor plans. The floor plans will have the same scale, detail level, and style of lines. You can also set a furniture plan template. On the floor plans, you can hide all door tags and window tags at once. View templates save much time when producing a set of construction documents.
To set a view template
• [STEP 1] Click [VIEW TEMPLATE] from [PROPERTIES] palette, Assign view template window will open
• [STEP 2] Select [ARCHITECTURAL PLAN]
• [STEP 3] Click [DUPLICATE] icon, and Name it [3/8” Floor Plan]
• [STEP 4] Update [VIEW SCALE] to [3/8”=1’-0”]
• [STEP 5] Click [EDIT] on V/G Overrides Model, then Visibility/Graphic Overrides for 3/8” Floor Plan window will open
• [STEP 6] You can hide categories that you don’t want to show in the view by unchecking the categories.
• [STEP 7] Also, you can change the graphic styles by clicking [OVERRIDE]
• [STEP 8] Click [OK]s to apply
To apply the view template to other views
• [STEP 1] Select a view or multiple views
• [STEP 2] Click [VIEW TEMPLATE] from [PROPERTIES] palette
• [STEP 3] Select the view template you made for the selected views
• [STEP 4] Click [OK] to apply
Once you apply a view template, the visibility/graphic overrides, view scale, display model, detail level, view range, discipline, phase filter, and other items in the view template will be deactivated in the view. To change these settings, you have to adjust them in the view template, not individually.
If you want to set the view settings individually, you must select [NONE] for the view template.
For more information, please refer to this page for the view template.
(CO 2) Understand View range
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=96
[VIEW RANGE] is a set of horizontal planes that control the visibility and display of objects in a plan view.
To adjust view range
• [STEP 1] Click [VIEW RANGE] from [PROPERTIES] palette. If this is deactivated, check your view template and click [VIEW RANGE] from the template.
• [STEP 2] You define the ranges by adjusting the offset value
• [STEP 3] You can see the sample view range for better understanding by clicking [<< SHOW] button
• [STEP 4] Click [OK] to apply
For more information about [VIEW RANGE], please visit this page
[UNDERLAY] is a function to understand the relationship of components at different levels for coordination and construction.
To apply [UNDERLAY]
• [STEP 1] Open the second-floor plan
• [STEP 2] Click Range: Base Level and change the level that you want to look below
• [STEP 3] Click Range: Top Level and change the level that you want to ‘lookup.’
• [STEP 4] For floor plans, the Underlay Orientation should be [LOOK DOWN]
For RCPs, the Underlay Orientation should be [LOOK UP]
Once the underlay function is activated, you can see the gray lines. You are not able to click or edit the underlay items.
This function is only for the working process. If you don’t need the underlay items, please check [NONE] on Range: Base Level to deactivate the function.
For more information about [UNDERLAY], please visit this page.
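To make the cut plane idea concrete, here is a much-simplified sketch of the decision a plan view makes for an element, given the view range offsets in feet. Real Revit rules include per-category exceptions (e.g., windows and casework), so treat this only as a mental model, not the exact algorithm.

    def plan_visibility(elem_bottom, elem_top, bottom=0.0, cut=4.0, top=7.5):
        # All heights are offsets in feet above the view's associated level
        if elem_bottom < cut < elem_top:
            return "cut (drawn with cut line weight)"
        if bottom <= elem_bottom and elem_top <= cut:
            return "projection (seen from above)"
        return "not shown in this plan (simplified rule)"

    print(plan_visibility(0.0, 8.0))  # a full-height wall crosses the 4'-0" cut plane
    print(plan_visibility(2.5, 3.0))  # a low counter shows in projection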
(CO 3) Add/Edit Floors & Floor Properties
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=96
Revit is BIM software; it needs more information to generate 3D views and drawings. In plan view, the drawing appears complete, much like it does in AutoCAD. However, when you view the Revit model in a 3D view, the floors and ceilings are missing. Therefore, you need to model these elements as well.
There are multiple ways to create floors. I prefer to create a slab (without finishes) and then add finishes over the top of the slab.
To Add/Edit Floors
• [STEP 1] Select [FLOOR ARCHITECTURAL] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 2] Select a Floor Type. For the First Level of Eames House, we will use [Generic – 12”], but we will modify the properties later.
• [STEP 3] Select the level where the floor is located. And specify height offset from the level if required.
• [STEP 4] Draw the boundary of the floor plan. You can draw using straight lines or any of the other options. Make sure the boundary lines are connected and closed
• [STEP 5] Click the green checkmark to complete the sketch
• [STEP 6] Confirm the location on a section view or a 3D view
You can click [SECTION] from the [VIEW] tab, under the [CREATE] panel. And draw a section line for verification purposes.
Make sure your building pad is below level 1.
Add a floor for the second level
• [STEP 1] Open [LEVEL 2] floor plan
• [STEP 2] Click [FLOOR] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 3] Change the floor type to 3” LW Concrete on 2” Metal Deck, and verify the Level 2, and the Offset
• [STEP 4] Draw continuous lines for the floor. If there are floor openings, draw the openings as well. If the openings are inside the boundary of the floor, Revit will recognize them as openings.
• [STEP 5] Click [GREEN CHECK MARK] to complete the sketches
Edit Floor properties
You will change the thickness of the floor from 12” to 2”.
• [STEP 1] Select the first level floor from the 3D view
• [STEP 2] Click [EDIT TYPE] on [PROPERTIES] palette
• [STEP 3] Click [DUPLICATE] on [TYPE PROPERTIES]
• [STEP 4] Enter a new name. I recommend adding [000_] of the letter of the name.
For example [000_Generic – 2”]
• [STEP 5] then click [OK]
• [STEP 6] Click [EDIT] for [STRUCTURE]
• [STEP 7] Change [2”] and click [OK] and [OK] to complete
Add Floor types
• [STEP 1] Click [FLOOR] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 2] Select [Generic – 12”]
• [STEP 3] Click [EDIT TYPE]
• [STEP 4] Click [DUPLICATE]
• [STEP 5] Rename to [000_TL_01], you will need [000_TL_02], and [000_CPT_01]
• [STEP 6] Click [EDIT] for [STRUCTURE]
• [STEP 7] Change the Function to [FINISH 1]
• [STEP 8] Change the Thickness to [1/2”]
• [STEP 9] Click <By Category>
• [STEP 10] Click [CREATE A NEW MATERIAL]
• [STEP 11] Rename the new material
• [STEP 12] Check [USE RENDER APPEARANCE]
• [STEP 13] Change the foreground pattern
• [STEP 14] Click [APPEARANCE] tab
• [STEP 15] Click [REPLACE THIS ASSET]
• [STEP 16] Find appropriate material from [APPEARANCE LIBRARY]
• [STEP 17] Click the [REPLACE] icon, and close the [ASSET BROWSER]
• [STEP 18] If needed, change the color
• [STEP 19] Click [OK]s to complete
Add floors for finishes
• [STEP 1] Open a plan view to add the finish floor
• [STEP 2] Confirm the level on the [PROPERTIES]
• [STEP 3] Change the Height Offset From Level to [1/2”]
• [STEP 4] Draw the floor boundary; please be careful not to overlap with walls
(CO 4) Add/Edit Ceilings & Ceiling Properties
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=96
Add a ceiling
• [STEP 1] Open [LEVEL 1] ceiling plan
• [STEP 2] Click [CEILING] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 3] Select [GWB on Mtl. Stud]
• [STEP 4] Confirm [LEVEL] is [LEVEL 1 CEILING] and [HEIGHT OFFSET FROM LEVEL] to [0’0”].
If you don’t have a level for a ceiling, you can set your level to Level 1 and update the height offset from a level like 8’-0.”
• [STEP 5] Draw the Ceiling boundary.
You will need to draw an independent ceiling for each room.
• [STEP 6] Click the green checkmark to complete this command.
• Repeat this process to place a ceiling at level 2
Edit Ceiling properties
You will now change the thickness of the ceiling.
• [STEP 1] Select the first level ceiling from the 3D view
• [STEP 2] Click [EDIT TYPE] on [PROPERTIES] palette
• [STEP 3] Click [DUPLICATE] on [TYPE PROPERTIES]
• [STEP 4] Enter a new name. I recommend adding [000_] of the letter of the name.
For example, [000_GWB on Mtl.Stud-3”]
• [STEP 5] Then click [OK]
• [STEP 6] Click [EDIT] for [STRUCTURE]
• [STEP 7] Change [2 3/8”] for [METAL STUD LAYER] and click [OK] and [OK] to complete
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit Stair – three types of staircases
(CO 2) Add/Edit Railing
(CO 3) Add/Edit Roof
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit Stair – three types of staircases
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=98
• Revit provides various types of staircases, and it is modifiable.
• Revit stair automatically calculates the number of stair runs, automatically creating landing, supports, and even handrails.
• In this tutorial, the instructor will demonstrate four types of staircases – a straight stair, a two-run (switchback) stair, a spiral stair, and a custom stair with a sketch – in another Revit file, for reference only.
• For the Eames house, students will only need a spiral staircase.
• For more information about Revit stairs, please visit this page
To create a straight stair
Be aware of the overall size of the staircase and create walls for the staircase before you create a staircase in Revit. If it is an open staircase, please use the reference plane for a guideline.
• [STEP 1] Open floor plan views (e.g., Level 1 floor plan view and Level 2 floor plan view) and 3D view or a section view with a perspective view to see the height and overall shape of the staircase. And view change to Window Tile view (WT)
• [STEP 2] Draw Detail Line (DL) for the outlines of the staircase. You may turn on the CAD drawing that you already imported.
• [STEP 3] Click [STAIR] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 4] Select [STRAIGHT] from [MODIFY/CREATE STAIR], under [COMPONENT] panel
• [STEP 5] Select a center point for the stair start point on a plan view, then Revit automatically calculates and shows how many stairs are needed to reach the next floor. Then, click the endpoint on the current view to finish
• [STEP 6] Then, the preview will show in all the views. If it looks OK, click the green checkmark to finish. If the family type needs to change, click the family type in the [PROPERTIES] palette and select the desired family. If you do not have a desired family for the stair, load the family first
• [STEP 7] If the stair run width needs a modification, click the run, then change the width by adjusting the arrows
If the stair Riser Height and/or Tread Depth need to change, click [EDIT TYPE] and change the Type Properties. Do not forget: if you change the type properties, all instances of that type will change. To prevent this, please [DUPLICATE] and edit.
• [STEP 8] Click [GREEN CHECKMARK] to complete
• [STEP 9] Remove the Detail Lines or the Reference Planes that you used for the stair
• [STEP 10] If you found errors on your stair, click [EDIT STAIRS] to fix the error
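The automatic calculation Revit performs at step 5 is simple to reproduce: it divides the floor-to-floor height by the stair type's maximum riser height and rounds up. Below is a hedged sketch with this project's 9'-7" floor-to-floor height; the 7" maximum riser is a typical code value assumed for illustration, not a project-specific setting.

    import math

    floor_to_floor = 9 + 7 / 12.0  # 9'-7" from Level 1 to Level 2, in feet
    max_riser = 7.0 / 12.0         # assumed 7" maximum riser height, in feet

    risers = math.ceil(floor_to_floor / max_riser)
    actual_riser_in = floor_to_floor / risers * 12
    print(risers, "risers at", round(actual_riser_in, 2), "inches each")  # 17 risers, ~6.76"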
To create a two-runs stair (a switchback stair)
• [STEP 1] Open floor plan views (e.g., Level 1 floor plan view and Level 2 floor plan view) and 3D view or a section view with a perspective view to see the height and overall shape of the staircase. And view change to Window Tile view (WT)
• [STEP 2] Draw Detail Line (DL) for the outlines of the staircase. You may turn on the CAD drawing you already imported.
• [STEP 3] Click [STAIR] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 4] Select [STRAIGHT] from [MODIFY/CREATE STAIR], under [COMPONENT] panel
• [STEP 5] Select a center point for the stair start point on a plan view, then Revit automatically calculates and shows how many stairs are needed to reach the next floor. Then, click the second point on the current view to create the landing start
Then, click the third point on the current view to create the second stair start point
Then, click the endpoint on the current view to finish
• [STEP 6] Then, the preview will show in all the views. If it looks OK, click the green checkmark to finish. If the family type needs to change, click the family type in the [Properties] palette and select a desired family. If you do not have a desired family for the stair, load the family first
• [STEP 7] If the stair run width needs a modification, click the run, then change the width by adjusting the arrows
• [STEP 8] Click [GREEN CHECKMARK] to complete
• [STEP 9] Remove the Detail Lines or the Reference Planes that you used for the stair
To create a spiral stair
• [STEP 1] Open floor plan views (e.g., Level 1 floor plan view and Level 2 floor plan view) and 3D view or a section views with a perspective views to see the height and overall shape of the staircase. And view change to Window Tile view (WT)
• [STEP 2] Draw Detail Line (DL) for the outlines of the staircase. You may turn on the CAD drawing you already imported.
• [STEP 3] Click [STAIR] from [ARCHITECTURE] tab, under [BUILD] panel
• [STEP 4] Select [FULL STEP SPIRAL] from [MODIFY/CREATE STAIR], under [COMPONENT] panel
• [STEP 5] Select a center point of the staircase start point on a plan view, then Revit automatically calculates and shows how many stairs are needed to reach the next floor. Then, click the center point of the run start on the current view to create the staircase.
• [STEP 6] Then, the preview will show in all the views. If it looks OK, click the green checkmark to finish. If the family type needs to change, click the family type in the [PROPERTIES] palette and select a desired family. If you do not have a desired family for the stair, load the family first
• [STEP 7] If the stair run width needs a modification, click the run, then change the width by adjusting the arrows
• [STEP 8] Click [GREEN CHECKMARK] to complete
• [STEP 9] Remove the Detail Lines or the Reference Planes that were used for the stair
Complete the staircase for the Eames house project with the [FULL STEP SPIRAL] tool. It may need some adjustment because of the structure and shape. Please experiment with the staircase.
To create a custom stair
• [STEP 1] Open floor plan views (e.g., Level 1 floor plan view and Level 2 floor plan view) and 3D view or a section view with a perspective view to see the height and overall shape of the staircase. And view change to Window Tile view (WT)
• [STEP 2] Draw [BOUNDARY]
• [STEP 3] Draw [RISER]
• [STEP 4] Draw [STAIR PATH]
• [STEP 5] Click [GREEN CHECKMARK] to finish
• [STEP 6] If needed, click [FLIP] the direction of the staircase
• [STEP 7] Click [GREEN CHECKMARK] to complete the tool
Autodesk provides sample stairs and railings. Please download the samples by clicking this page, then copy and paste the ones you want to use into your project. This process will load the family files into your Revit file
(CO 2) Add/Edit Railing
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=98
Typically, railings are automatically created when a staircase is made
For more information about Railings, please read this page
However, many times, your railings need modifications.
To extend your railings
• [STEP 1] click the railing
• [STEP 2] click [EDIT PATH]
• [STEP 3] edit/add the path
To edit handrail
• [STEP 1] If a Handrail path needs changing, you can select handrail only by pressing the [TAB] key
• [STEP 2] click [EDIT RAIL]
• [STEP 3] click [EDIT PATH]; it is better to work in a section or a 3D view
To change the handrail type, change the family type from [PROPERTIES] palette
To create a path without a stair
• [STEP 1] Click [RAILING]
• [STEP 2] Click [SKETCH PATH]
• [STEP 3] Sketch a path with a drawing tool on a plan view
• [STEP 4] Click [GREEN CHECKMARK] to finish
(CO 3) Add/Edit Roof
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=98
In this session, I will demonstrate two types of roofs: the Eames house roof (a flat roof) and a typical residential roof (a hip roof)
There are ten standard roof shape models in the Revit tutorial. Please watch this tutorial for your other projects
10 Common Roof Shapes Modeled in Revit Tutorial
Please find Autodesk provided roof tutorials from this link
If you want to create complex roof shapes in Revit, please watch this YouTube video
3 Complex Roof Shapes in Revit
For the Eames House project, you will create two models, which are the roof on the top of the metal deck and the Edge of the roof.
Add a flat roof
• [STEP 1] From the Project Browser, switch to the Roof plan.
• [STEP 2] In the Properties, with nothing selected, set the Underlay to level 2.
• [STEP 3] From the Architectural tab, click the small arrow on [ROOF] > Click [ROOF BY FOOTPRINT]
• [STEP 4] Use the Draw tool to create a boundary.
• Make sure you check Basic Roof – Generic – 9” > Duplicate – 3.”
• Base level – Roof
• Base offset from level – 0’ 0.”
• Uncheck – Defines slope
• Offset – 0’ 0.”
• [STEP 5] Once the boundary line is done, click the green checkmark
• [STEP 6] Your roof should sit over the metal deck; if needed, move the roof above the metal deck
• [STEP 7] You may add a point or a line of slope for a roof drain: click [ADD POINT], click a point on the roof, and change the height.
To create a gutter
• [STEP 1] From the Architectural tab, click the small arrow on [ROOF] > Click [ROOF: GUTTER]
• [STEP 2] Default will be Gutter – Bevel 5” x 5”. We will change to 6”x 6.”
• [STEP 3] Click [EDIT TYPE] > [DUPLICATE] > name change [000_Gutter – Bevel 6″ x 6″] > Click [OK]
• [STEP 4] Click Gutter Profile-Bevel: 5”x5” on profile
• [STEP 5] Change to 6”x 6” > Click [OK]
• [STEP 6] From the ISO view, click the top edges of the roof
• [STEP 7] To finish, press the [ESC] key
Add a hip roof (combined)
• [STEP 1] From the Project Browser, switch to the Roof plan.
• [STEP 2] In the Properties, with nothing selected, set the Underlay to level 2 or level 1.
• [STEP 3] From the Architectural tab, click the small arrow on [ROOF] > Click [ROOF BY FOOTPRINT]
• [STEP 4] Use the Draw tool to create a boundary.
• Make sure you check Basic Roof – Generic – 9” > Duplicate – 3.”
• Base level – Roof
• Base offset from level – 0’ 0.”
• Check – Defines slope
• Slope – 9”/12.”
• Offset – 1’ 6.”
• [STEP 5] Select the 3 lines that do not have a slope and uncheck [DEFINES SLOPE]
• [STEP 6] Click [GREEN CHECKMARK] when it is done
• [STEP 7] From the Architectural tab, click the small arrow on [ROOF] > Click [ROOF BY EXTRUSION]
• [STEP 8] Revit will ask you to select a work plane > Select [PICK A PLANE] > Pick a face of the wall where the roof profile will start
• [STEP 9] Revit will ask you to select Roof reference level and offset > Click [OK]
• [STEP 10] Draw a continuous open line for the roof in a front view
• [STEP 11] Click Green checkmark when it is done
• [STEP 12] On the plan view, you will adjust the depth of the roof
• [STEP 13] To fix the separated roof, you may use [JOIN/UNJOIN ROOF] on the Modify tab
• [STEP 14] Click the profile first and click the face where the profile will meet
• [STEP 15] To attach all walls to the roof, select all walls
• [STEP 16] Click [ATTACH TOP/BASE]
• [STEP 17] Click the roof, you may get a warning, but it is OK.
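For reference, the 9”/12” slope entered in step 4 converts to an angle with basic trigonometry; a one-line plain-Python sketch:

    import math

    rise, run = 9.0, 12.0  # the 9"/12" slope entered in the roof properties
    print(round(math.degrees(math.atan(rise / run)), 2), "degrees")  # about 36.87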
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand the concept of family file
(CO 2) Add/Edit Doors and Windows
(CO 3) Add Tags
(CO 4) Add/Edit Lighting fixtures
(CO 5) Add/Edit Titleblocks
(CO 6) Insert Plan views and symbols – North arrow and graphic scale
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand the concept of family file
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
About Revit Families
• A Revit family is a group of elements with a common set of properties, called parameters, and a related graphical representation
• All of the elements that you add to your Revit projects are created with families.
• For example, the structural members, walls, roofs, windows, and doors that you use to assemble a building model, as well as the callouts, fixtures, tags, and detail components that you use to document it, are all created with families.
For more information about the Revit family, please see this page Autodesk Knowledge – Revit Family
To create a new family
• Click [File] on the menu > Click [New] > Click [Family] > Select a template from the library > Create the model and parameters
• However, a beginner may elect not to create a new family due to the complexity of the process. Instead, you may want to start with a family that has similar attributes and edit it accordingly.
To load Family files from the Revit library
• Click [Load Family] from [Insert] tab, under [Load from Library] panel > Find the library folder from your computer > Select a family / families to load > Click [OPEN]
• Then, you can confirm and find the loaded families from the [Project Browser] under the [Family] category
• You can load as many families as you need, but keep the file size in mind.
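Loading can also be scripted with the API's Document.LoadFamily. Below is a minimal pyRevit sketch (assuming pyRevit is installed; the .rfa path is a placeholder for a family on your own computer), not part of the GUI workflow above:

```python
from pyrevit import revit  # pyRevit's handle to the open document

doc = revit.doc
family_path = r"C:\Families\000_MyDoor.rfa"  # placeholder path -- use your own

with revit.Transaction("Load family"):
    loaded = doc.LoadFamily(family_path)  # returns True on success

print("Loaded" if loaded else "Not loaded (already in project or bad path)")
```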
Purge Revit families
• Family management is one of the keys to managing your Revit file. If you want to delete unused family files, click [Purge Unused] from the Manage tab.
• Use caution when using this function; you will lose all families and views you are not using. I recommend making a copy (or Save As) of the original file before you [Purge Unused].
• Click [Purge Unused] repeatedly until the number of checked items reaches “0”.
• The file size is reduced, and you have a lighter file to handle
(CO 2) Add/Edit Doors and Windows
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
Add a door on a Wall
• [STEP 1] Click [Door] from the [Architecture] tab, under [Build] panel
or press [DR] as a shortcut
• [STEP 2] Select door type from the [Properties] palette.
• If you need to load a new type, click [Load Family] on the [Modify | Place Door] tab, double-click the [Doors] folder, and find the family that you want in the sub-folders
• If you cannot find the door family that you want, you may search various websites and load the family into the project file
• If you find the type you want but not the size you need, click [Edit Type], duplicate the type, and change the values
• [STEP 3] Place the door near the wall. Use [Space bar] to change the direction of the door. It shows the location of the door that will be placed. Click the location that you want.
• [STEP 4] Once you have placed the door, you can change its direction by clicking the arrows, and you can change the exact location of the door. Typically, the distance from the wall to the door is 4”.
• [STEP 5] If you need to change the door type, click and change the type from the Properties panel
Complete adding all doors in the floor plans for the project
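Door placement can likewise be scripted. The sketch below is a minimal pyRevit illustration (not the tutorial workflow): it takes the first loaded door type in the project and hosts one instance at the midpoint of a wall you pick.

```python
from pyrevit import revit, DB

doc = revit.doc
wall = revit.pick_element("Pick a host wall")     # pyRevit selection helper
level = doc.GetElement(wall.LevelId)              # the wall's base level

# Grab the first loaded door type, just for the demo
door_type = DB.FilteredElementCollector(doc)\
              .OfClass(DB.FamilySymbol)\
              .OfCategory(DB.BuiltInCategory.OST_Doors)\
              .FirstElement()

with revit.Transaction("Place door"):
    if not door_type.IsActive:
        door_type.Activate()                      # types must be active before use
    midpoint = wall.Location.Curve.Evaluate(0.5, True)
    doc.Create.NewFamilyInstance(
        midpoint, door_type, wall, level,
        DB.Structure.StructuralType.NonStructural)
```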
Add a window on a wall
• [STEP 1] Click [Window] from the [Architecture] tab, under [Build] panel
or, press [WN] as a shortcut
• [STEP 2] Select a window type from the [Properties] palette.
• [STEP 3] Adding a window is the same sequence as adding a door.
• [STEP 4] You can modify the sill height when you add a window.
The Eames House project doesn’t have a window.
Add a skylight on a roof
• [STEP 1] Click [Window] from the [Architecture] tab
or press [WN] as a shortcut
• [STEP 2] If you do not find a skylight from the [Properties] palette, click [Load family] > find the Skylight family under the Window folder from the Revit library > click [Open]
• [STEP 3] The skylight for the Eames House is 5’ x 5’ Flat, so the type should be made by duplicating the existing one.
• [STEP 4] Open [3D view] > click the appropriate location where the skylight is supposed to be located > open [Roof] view > update location of the skylight with dimensions
• [STEP 5] You may notice that the skylight only opens the roof, not the ceilings you may have made. You must make a hole to open the ceilings. Click [Shaft] from the [Architecture] tab, under the [Opening] panel > Open the plan below the roof level. For the Eames House, it is Level 2 > Draw the opening. It must be a little smaller than the skylight size; then click the [green checkmark]
• [STEP 6] Make sure the base constraint is Level 2, and the Top constraint is Up to Level: Roof, Top Offset 1’. Relocate the Skylight.
Note. If the [Shaft] tool is not working, please try this
• Click [Edit In-Place] > Click [Void Forms] > Select [Void Extrusion] from [Create] tab > Draw lines > click the [Green checkmark] to finish and double-check the void penetrate
Add a door / a window on a curtain wall
• [STEP 1] Adding a curtain wall door or window is not done with the Door or Window tools on the [Architecture] tab. To switch a panel to a door or a window, you must select the panel first by pressing the [Tab] key.
• [STEP 2] You might find the door from the [Properties] palette.
• If the curtain wall door was not loaded, you need to load the door first and then change the properties palette.
• To load the door family, click [Load Family] from [Insert] tab
• Find “Door-Curtain-Wall-Single-Glass.rfa” from the Doors folder > Open
• Find “Curtain Wall-Awning.rfa” from the Windows folder > Open
• Find “Sliding_Curtain_Wall_Door_12253.rfa” from Canvas and download to your project folder > Open
• [STEP 3] Change the type from the properties palette.
Complete updating all curtain wall doors and curtain wall windows for the project
(CO 3) Add Tags
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
Add door and window tags
• [STEP 1] Click [Tag by Category] from the [Annotate] tab, under [Tag] panel
or (TG) for the shortcut
• [STEP 2] The door tag can be set to sit at a distance (1/4”) from the door
• [STEP 3] Once you click a door, the tag will show
• [STEP 4] The number may or may not show. If not, you can enter a number
• [STEP 5] You can also tag both the curtain wall door and windows with a door and window tag.
Complete updating all tags for doors and windows for the project
(CO 4) Add/Edit Lighting fixtures
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
Load Lighting Fixtures
Load Lighting families by clicking Load Family. You can find Revit lighting families from Lighting > Architectural > Internal folder from the Revit library. You can load all lighting fixtures from the folder or only selected lighting fixtures to your project.
If you want a unique lighting fixture, please download it from various websites and save it in your project folder for future use.
Add Lighting fixtures
• [STEP 1] Open [Level 1] Ceiling plan from the [Project Browser]
• [STEP 2] Click [Component] (CM) to place lighting fixtures from the [Architecture] tab
• [STEP 3] Select a family and a family type that you want to add on your ceiling plan. You can search the Lighting family as well
• [STEP 4] Select [Downlight – Recessed Can]
• [STEP 5] Place the lighting fixture on the Utility room ceiling. You may need to switch the placement option. Does not need to be accurate.
• [STEP 6] Make dimensions (DM) and use Align (AL) for accurate dimensions
To see the floor plan for the positioning of the furniture, select Level 1 for Base Level, Level 2 for Top Level, and [Look up] for Underlay orientation; then you can see the floor plan on your ceiling plan for the lighting layout.
Complete lighting layout with dimensions on the ceiling plans
(CO 5) Add/Edit Titleblocks
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
Create sheets
• [STEP 1] Click [Sheet] from the [view] tab, under [Sheet Composition] panel
• [STEP 2] New Sheet window will pop up to create sheets in your project. Click [Load…].
• [STEP 3] The Load Family browser will pop up. Find the [Titleblocks] folder > Open the [B 11 x 17 Horizontal] file > click [OK] > Click [OK] to load it into your project file.
• [STEP 4] You can find your A101 – Unnamed on your project browser, once you click [+] mark next to [sheet (all)]
• [STEP 5] If you need more sheets for other views, you can click [Sheet] from the [View] tab, or you can right-click [Sheets] in the Project Browser and select [New Sheet]
• [STEP 6] If you want to change the size of the sheet you already loaded, you can click the title block and change the type on the Properties palette. You can change the Titleblock format that you already loaded
• [STEP 7] On your Titleblock, you should add the project information and the sheet information.
• Once you click the Titleblock, you can add more information. Do not double click. If you double click and open the family file, you are editing the family file.
• Under the Owner, we would typically designate the client’s name. In this case, you can add ARTID263
• For Project Name, you can add your project name, Eames House Project
• For Unnamed, it is typically designated for the sheet name, for example, Floor Plan – Level 1, RCP – Level 2, Site Plan, or more
• For Project number, you will make your own. I typically make 20.263.01 (Year.Course number.Project number)
• The issue date will be the submission date
• The author is your name
• Checker is your instructor’s name
Note. This sheet information will not change even if you change the Titleblock type. The Project name, Project number, Client, and Issue date will not change even if you add new sheets. But you should add Drawn by and Checked by on all sheets
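For many sheets at once, the API's ViewSheet.Create can do the clicking for you. A minimal pyRevit sketch (assuming pyRevit; the sheet number and name are examples):

```python
from pyrevit import revit, DB

doc = revit.doc

# First loaded titleblock type in the project
tb_type = DB.FilteredElementCollector(doc)\
            .OfCategory(DB.BuiltInCategory.OST_TitleBlocks)\
            .WhereElementIsElementType()\
            .FirstElement()

with revit.Transaction("Create sheet"):
    sheet = DB.ViewSheet.Create(doc, tb_type.Id)
    sheet.SheetNumber = "A601"            # example number
    sheet.Name = "Interior Elevation"     # example name
```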
Edit Titleblocks family file
• [STEP 1] Once you double click the Titleblock, the family file will open
• [STEP 2] I recommend saving this family before you start to edit to preserve the original file.
• Do not forget to add “000” and save the file in your project family folder
• [STEP 3] You can delete anything in the document. However, I would delete the Autodesk Logo, a line below, and the website information.
• [STEP 4] You can add lines by clicking [Line] from the [Create] tab
• [STEP 5] You can move lines and text boxes by clicking the elements that you want to move and using the Move (MO) command
• [STEP 6] You can edit line and text box sizes as well
• [STEP 7] You can insert images or CAD files by clicking Image or Import CAD from the Insert tab
• [STEP 8] Once you complete your titleblock, you must save and click [Load into Project]
• [STEP 9] After you load the titleblock into your project, you must change the titleblock type to what you made.
(CO 6) Insert Plan views and symbols – North arrow and graphic scale
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=100
Insert plans
• [STEP 1] Open a sheet [A101-Floor Plan-Level 1] to insert a plan
• [STEP 2] Find the view on your project browser to insert, drag and drop the view to the sheet
• [STEP 3] To fit into the Titleblock, you must change your drawing scale. You can double click and change the scale, or you can change the scale from the Properties palette
• [STEP 4] Move the view to be centered and change the length of the title bar to fit into the Titleblocks
Repeat this for site plan, floor plans, and ceiling plans
Insert north arrow
• [STEP 1] The north arrow is under Symbol from the [Annotate] tab
• [STEP 2] To add the north arrow, you will open a plan view
• [STEP 3] On the Properties palette, Orientation should be changed from Project North to True North
• [STEP 4] Click Symbol from Annotate tab
• [STEP 5] Select North Arrow 2 on the Properties palette. If you cannot find North Arrow 2, you need to load the North Arrow family from the Annotation folder
• [STEP 6] Place the North Arrow 2 on your plan view
• [STEP 7] Change Orientation again from True North to Project North
• [STEP 8] Move the north arrow to the corner of your plan
Repeat this for site plan, floor plans, and ceiling plans
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit Furniture families
(CO 2) Add/Edit Model-in-place components – Custom casework
(CO 3) Add/Edit a New Family – Furniture
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit Furniture families
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=102
For construction documents in Revit, furniture can be categorized into two main areas. One is a product, which includes custom furniture and manufacturer-made furniture; the other is contractor (carpenter) made built-in furniture or millwork.
• We typically use [PLACE COMPONENTS] with a Revit family for custom furniture and manufacturer-made furniture because it will be used multiple times. This includes millwork like Revit countertops, shelves, cabinets, and so on.
• We typically use [MODEL-IN-PLACE] for contractor-made built-in furniture or millwork because it will be used only once, for the specific space only.
Revit Furniture [PLACE COMPONENTS]
• [STEP 1] Open a Floor Plan – Level 1 view from Project Browser
• [STEP 2] You may change the scale of the view because we changed the drawing scale to fit on an 11 in x 17 in sheet. I changed the drawing scale from 3/32” to 3/16”
• [STEP 3] Click [COMPONENTS] (CM) from [ARCHITECTURE] tab
TIPS.
For your floor plan, you do not need to find the exact furniture that you want to use. You may use a Revit furniture family as a placeholder on your floor plan. Do not waste time adding objects (like books and decorative pieces for perspectives) that do not show on your floor plan. For your perspective views and renderings, you would do better to find the most accurate Revit family file. If you cannot find the Revit family file, find the SketchUp file and use it for your rendering.
• [STEP 5] Place the furniture families on your floor plan. Make sure the Level is what you want.
• [STEP 6] Use [Space bar] on your keyboard to rotate the family.
To move/rotate/align/copy/mirror the furniture, click the furniture that you want to modify and then click [MOVE] (MO), [ROTATE] (RO), [ALIGN] (AL), [COPY] (CO), or [MIRROR] (MM) from the [MODIFY/FURNITURE] tab
• [STEP 7] Also, you can use [ALIGNED DIMENSION] (DI) from the [ANNOTATION] tab
• Complete all furniture placement
(CO 2) Add/Edit Model-in-place components – Custom casework
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=102
Create Model in Place components (Alcove seating)
• [STEP 1] Open a Floor Plan – Level 1 view from Project Browser
• [STEP 2] You may turn on the reference CAD plan/imported image to see the size of the Alcove seating, or turn on [HIDDEN ELEMENTS] if it is hidden and unhide the category
• [STEP 3] Click Component small black arrow and click [MODEL IN PLACE] from [ARCHITECTURE] tab
• [STEP 4] You will need to select the most relevant Family Category. For the Alcove Seating, [CASEWORK] is the most appropriate category. Click [OK]
• [STEP 5] Make a name for the component [Alcove seating].
• [STEP 6] To draw lines/models, you must set/confirm the [WORK PLANE] first
• Click [SET] from [CREATE] tab
• You can select the work plane by name or pick a plane. Make sure this will be the base plane that you work on
• [STEP 7] You can make models with the Forms tool
• For the alcove seating, click [EXTRUSION] and draw lines with the [DRAW] tool
• The line must be closed. You may use [TRIM] (TR) to make it closed
• Double-check the Extrusion End and Extrusion Start
• Click the [GREEN CHECKMARK] to finish the drawing
• For the upper part, you will create it in the current [MODEL-IN-PLACE COMPONENT]
• If you want to try other Forms, you are welcome to try and practice.
• Use the [VOID] tool and the Cut tool to subtract a form from other form(s)
• Click Extrusion > Draw lines > confirm the Extrusion End and Start > Click [GREEN CHECKMARK]
• If the model is not showing, please check your ISO view and change the [WORK PLANE]
• Once all forms are created, click the [GREEN CHECKMARK] to Finish Model
• Complete the casework
(CO 3) Add/Edit a New Family – Furniture
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=102
In this tutorial, I will demonstrate how to create a simple furniture family in Revit. We will use Eames Walnut Stool information from https://www.hermanmiller.com/products/seating/stools/eames-walnut-stools/
To create a new family file (a model that will be used multiple times)
• [STEP 1] Click [FILE] > [NEW] > [FAMILY]
• [STEP 2] Find [Furniture.rft] in the Select Template File browser and Click [OPEN]
• [STEP 3] It will automatically open these four windows. Before you start, save the family file to your project folder.
• [STEP 4] I recommend you tile the views (WT) and Zoom All (ZA) to see all views. You will pick the [REFERENCE LEVEL] or [FRONT] view depending on what you want to make.
• [STEP 5] You will use [REVOLVE] to draw this stool
• [STEP 6] Before you draw a revolving line, you will need [REFERENCE PLANES] to know the Height and Width. Click [REFERENCE PLANE] from the [CREATE] tab
• [STEP 7] Draw height, left, and right reference planes and dimension them
Dimension information from this page
• [STEP 8] On [FRONT] view, Click [REVOLVE] from the [CREATE] tab
• [STEP 9] Draw lines for the profile (boundary). The lines must form a closed loop
• [STEP 10] Select [AXIS LINE] and pick the centerline
• [STEP 11] Click the [GREEN CHECKMARK] to finish the model
• [STEP 12] If you want to add/subtract any model, you can create it in this model
• [STEP 13] Once all your model works are done, save the family file on your project folder.
• [STEP 14] Click [LOAD INTO PROJECT] to your project.
Note. Use Sketchup Model in Revit
Importing SketchUp Files into Revit Tutorial
If you want to use your SketchUp file multiple times, you should make it into a family file.
SketchUp materials will not carry over; they are merged as one material. I will demonstrate some tips in the next session. Refer to this video
Revit Architecture | Convert SketchUp Models Into Revit(With Materials)
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Add/Edit Elevations & Sections – Adjust crop region
(CO 2) Add/Edit Detail views
(CO 3) Add Texts & Annotations
(CO 4) Add/Edit Rooms, Room tags, Room separators
(CO 5) Add/Edit a color fill scheme
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Add/Edit Elevations & Sections – Adjust crop region
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=104
You can create sections or elevations in any floor plans.
To add an interior elevation
• [STEP 1] Open the project, and open [Level 1] floor plan
• [STEP 2] Click [Elevation] from [View] tab, under [Create] panel
• [STEP 3] Click the [Properties] palette > select Interior Elevation, the symbol will be updated.
• [STEP 4] Hover over your plan. You will notice the elevation marker will “snap” parallel to walls. Please select the location where you want to place your elevation and click to set it in place. Press [Esc] to complete the command
• [STEP 5] Select the elevation tag. You can create additional elevations from one elevation symbol. Press [Esc] to complete the command
• [STEP 6] Select the elevation arrow (black filled) to adjust the crop region and view depth by moving the blue line; you can adjust where the elevation view begins. By adjusting the blue nodes on the line, you can define the crop region. Moreover, by moving the arrows, you can adjust the view depth of the elevation.
• [STEP 7] Once your elevation is defined, double-click on the arrow of the elevation symbol > the newly created elevation view will open. You may realize the current view is not what you want to present. If the view shows correctly, please skip [STEP 8] and [STEP 9]
• [STEP 8] Click the edge of the elevation > click [Edit Crop]
• [STEP 9] Redraw the boundary of the elevation. The boundary must be a closed-loop > Click [Green check-mark] to finish the crop boundary.
• [STEP 10] Make sure you check all three [Crop view], [Crop Region Visible], and [Annotation Crop] on [Properties] palette.
Note. The solid blue defines the visible region, and the dashed line defines where your annotations can be viewed and placed. Additionally, you can continue to adjust their view ranges by clicking on the nodes.
• [STEP 11] Add a sheet for the elevation by right-clicking [Sheets (all)] in the [Project Browser] and clicking [New Sheet] > Select the 11×17 titleblock page that you created > Rename the sheet – Sheet number [A601], Sheet name – Interior Elevation > Once the empty sheet is open, drag the elevation view from the [Project Browser] to the sheet > Rename the view title [E-LIVING-N] > Then you can see the elevation symbol name and view name updated on the floor plan
• [STEP 12] Usually, we want to show the boundary of elevation in a bold line. On your sheet, add [Detail Line] from [Annotation] tab, under [Detail] panel, or type [DL] > Select [Wide Lines] from [Modify] tab, under [Line Styles] > Trace the boundary lines
To add a section view
• [STEP 1] You will use similar steps to create section views
• [STEP 2] Click [Section] from [View] tab, under [Create] panel
• [STEP 3] Draw a section line by clicking two points, then you can change the view direction and boundary of the view
• [STEP 4] Set a sheet for the section
(CO 2) Add/Edit Detail views
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=104
Add detailed views
• [STEP 1] Click [Callout] from [View] tab, under [Create] panel
• [STEP 2] Draw the boundary of a view on your floor plan > then [Level 2-Callout] will be automatically created under [Floor plans] > Rename the view to [Level 2 – Restrooms] > If needed, update the scale
• [STEP 3] Create an elevation view and a section view on the detail view > add details > Update scales > Hide [elevation symbol] on [Level 2] floor plan > Hide [Section symbol] on [Level 2 – Restrooms] detail view and [Level 2] floor plan
• [STEP 4] Add the detail views to the sheet [A701-Restroom 1]
(CO 3) Add Texts & Annotations
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=104
Annotation (Texts, Notes, and Dimensions) can be added to any views and sheets
In this tutorial, you will practice the [TEXT] tool to add texts and notes
• [STEP 1] Open a [view] from [sheet] by double-clicking the view > On the view, click [Text] from [Annotate] tab, under [Text] panel > Select a text type from [Properties] panel > Drag and drop to make a text box > Add text
Note. If you need a different style of text type, you can duplicate the type and edit
• [STEP 2] Add a leader line(s) by clicking the [Leader] icon from the [Modify] tab > adjust arrow direction and the leader line
(CO 4) Add/Edit Rooms, Room tags, Room separators
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=104
Another useful annotation tool is creating, defining, and tagging rooms in Revit. Once you have walls, Revit recognizes the walls as the boundaries of rooms. However, if you have unclear boundaries, you have to define the room boundaries by using [Room Separator] first
Confirm room boundaries
• [STEP 1] Click [Room] from [Architecture] tab, under [Room & Area] panel
• [STEP 2] Hover the mouse over the floor plan to find a closed room. You can confirm the boundaries by clicking [Highlight Boundaries]
• [STEP 3] Click [Close] to hide the highlighted boundaries
Define boundaries
• [STEP 1] To define boundaries, click [Room Separator] from [Architecture] tab, under [Room & Area] panel
• [STEP 2] Use the [Draw] tool to draw separating boundaries; then you can manually define boundaries.
Create rooms
• [STEP 1] Click [Room] from [Architecture] tab, under [Room & Area] panel
• [STEP 2] Hover over each room and click once with the mouse to define the room boundary. You will see a tag that reads [ROOM] and an [X] showing the extent of the room. Continue for each room.
Edit tags
• [STEP 1] Click a room tag (NAME) twice to edit the room name. Click a room tag (NUMBER) twice to edit the room number.
Note. If the rooms are located on the 1st floor, the number starts from 101. If the rooms are located on the 2nd floor, the number starts from 201.
• [STEP 2] Move the room tags outside of the room to avoid overlapping with model lines and the room names > Select all room tags > Check [Leader Line] box > Move the room tags to the outside of the room
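The 101/201 numbering convention in the note above is easy to automate. A minimal pyRevit sketch, assuming your levels are literally named “Level 1” and “Level 2”:

```python
from pyrevit import revit, DB

doc = revit.doc
rooms = DB.FilteredElementCollector(doc)\
          .OfCategory(DB.BuiltInCategory.OST_Rooms)\
          .WhereElementIsNotElementType()

counters = {"Level 1": 101, "Level 2": 201}   # assumed level names

with revit.Transaction("Renumber rooms"):
    for room in rooms:
        if room.Location is None:             # skip unplaced rooms
            continue
        level_name = room.Level.Name
        if level_name in counters:
            room.Number = str(counters[level_name])
            counters[level_name] += 1
```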
(CO 5) Add/Edit a color fill scheme
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=104
You can use the [color fill legend] to color rooms and spaces after adding rooms. Color-coding plans can help the client understand the relationships between public and private spaces, and between work and rest spaces.
Add a color fill legend
• [STEP 1] Click a [room] on a floor plan > Add [PUBLIC], [SEMI-PRIVATE], or [PRIVATE] on [Department] from [Properties] palette > If you already added one of the three options, you can select from the drop-down menu.
• [STEP 2] Repeat step 1 for all rooms
• [STEP 3] Duplicate floor plans for only the color-filled plans by right-clicking a view > select [Duplicate view] > select [Duplicate with Detailing] > Rename the copied views
• [STEP 4] Open the duplicated views > Click [Color Fill Legend] from [Annotate] tab, under [Color Fill] panel > Click on a floor plan, then [Choose Space Type and Color Scheme] window will open > Confirm Space type: Room, Color Scheme: Department > Click [OK] then the color-filled legend and color will show.
• [STEP 5] Repeat this step 4 for another plan
• [STEP 6] To update a color, click the legend > click [Edit Scheme] > Define the color > Click [OK] to finish the color scheme
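If many rooms share the same department, the Department parameter can also be batch-filled through the API. A minimal pyRevit sketch; the room-name-to-department mapping is an example you would adapt:

```python
from pyrevit import revit, DB

doc = revit.doc
dept_by_room = {"LIVING": "PUBLIC",
                "KITCHEN": "SEMI-PRIVATE",
                "BEDROOM": "PRIVATE"}          # example mapping

rooms = DB.FilteredElementCollector(doc)\
          .OfCategory(DB.BuiltInCategory.OST_Rooms)\
          .WhereElementIsNotElementType()

with revit.Transaction("Set room departments"):
    for room in rooms:
        name = room.get_Parameter(DB.BuiltInParameter.ROOM_NAME).AsString()
        dept = dept_by_room.get((name or "").upper())
        if dept:
            room.get_Parameter(DB.BuiltInParameter.ROOM_DEPARTMENT).Set(dept)
```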
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Set perspective views
(CO 2) Set Isometric views
(CO 3) Edit Views – Graphic Display styles
(CO 4) Test Render
(CO 5) Set sun
(CO 6) Edit Artificial lighting
(CO 7) Add/Edit materials
(CO 8) Render material managements
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Set perspective views
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Revit supports perspective views and isometric views.
For more information about 3D views, please read this page
To create a camera view for a perspective view
• [STEP 1] Open a floor plan [Level 1] to create a camera view
• [STEP 2] Click the drop-down menu of [3D view] > [Camera view] from the [View] tab, under [Create] panel
• [STEP 3] On the floor plan, click the location of the camera position, click the target position > The perspective view will pop up the window.
To reposition the camera view
• [STEP 1] In order to adjust the view, open the camera view that you would like to adjust and the floor plan together. Type (WT) to tile the views and (ZA) to zoom all
• [STEP 2] Click the frame of the perspective view > you can reposition by controlling the 3D wheel on the top-right corner of the view and resize the camera view by adjusting the nodes
• [STEP 3] You can also change the camera position and the target position on your floor plan. On the Properties panel, change the Eye Elevation and Target Elevation, turn off Far Clip Active, and update the view name.
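For completeness, a camera view can also be created through the API with View3D.CreatePerspective. A minimal pyRevit sketch; the eye and target points are placeholder coordinates in feet:

```python
from pyrevit import revit, DB

doc = revit.doc

# First 3D view type in the project
vft = next(t for t in DB.FilteredElementCollector(doc)
                        .OfClass(DB.ViewFamilyType)
           if t.ViewFamily == DB.ViewFamily.ThreeDimensional)

eye = DB.XYZ(0, -20, 5.5)        # placeholder camera position (feet)
target = DB.XYZ(0, 0, 5.5)       # placeholder target position
forward = (target - eye).Normalize()
up = DB.XYZ.BasisZ               # valid here because forward is horizontal

with revit.Transaction("Create camera view"):
    view = DB.View3D.CreatePerspective(doc, vft.Id)
    view.SetOrientation(DB.ViewOrientation3D(eye, up, forward))
```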
Image-print size update for the perspective view
• [STEP 1] Click the view frame
• [STEP 2] Click the [Size Crop] icon from the [Modify] tab
• [STEP 3] On Crop Region Size window, check [Scale (locked proportions)]
• [STEP 4] Change the width to what you want, and the height will automatically change with the view ratio you made.
• [STEP 5] If you want to change the proportion, check [Field of view] and change the proportion. Click Apply.
• [STEP 6] Then again, check [Scale] to change the size of the view
• [STEP 7] Click [ok] to finish
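Behind [Scale (locked proportions)] is simple ratio math: the new height follows from the original aspect ratio. A tiny plain-Python illustration with made-up sizes:

```python
# Locked proportions: height scales with width to keep the original ratio.
def locked_height(orig_w, orig_h, new_w):
    return new_w * orig_h / orig_w

print(locked_height(12.0, 9.0, 17.0))  # a 12" x 9" view widened to 17" -> 12.75"
```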
(CO 2) Set Isometric views
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Set Isometric view
• [STEP 1] To add an isometric view, click the drop-down menu of [3D view] from the [View] tab, under [Create] panel, and click [Default 3D view] > The isometric view will pop up
• [STEP 2] To edit the boundary of the view, Click the [section box] option on the [properties] palette
• [STEP 3] Click the [section box] on the view > Drag the arrows toward the model to create a dynamic slice of the model
(CO 3) Edit Views – Graphic Display styles
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
To set the graphic display styles
• Once you click [Graphic Display] on the [view navigation] panel and click one of the six options, you can change the graphic style
• Examples of the six graphic style presets
Wireframe & Hidden Line
Shaded & Consistent Colors
Realistic & Custom
To modify Graphic Display Options
• [STEP 1] Click [Graphic Display] on the [view navigation] panel > click [Graphic Display Options]
• [STEP 2] Modify the properties that are appropriate for your project
For a quick presentation, I like to use these settings
• [STEP 3] Click [Apply] to confirm what you changed, click [Ok] to set the style
(CO 4) Test Render
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
To do a test render, you need to adjust the render settings
• [STEP 1] Click [Render] from the [View] tab, under [Presentation]
or Click [Rendering Dialog] on the [view navigation] panel
• [STEP 2] Adjust settings
• Confirm Setting – Draft for a test render
• Change Lighting scheme – interior for an interior scene
• Change Sun Setting – Sun and Artificial
• Change Background style
• Click [adjust Exposure] if needed
• [STEP 3] Click Render to see the result. You can stop if it looks all right; you do not need to wait for completion.
(CO 5) Set sun
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Setting Sun
• [STEP 1] Click [Sun Path] On > Click [Shadow] On
• [STEP 2] Click Sun Settings to adjust sun direction
• [STEP 3] Select Still for a specific time
• [STEP 4] Change Display setting to Realistic or Render to see the sun direction and material
(CO 6) Edit Artificial lighting
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Review the default lighting setting
• [STEP 1] To see an accurate lighting setting, change the render setting to [Interior: Artificial only] and [High] quality
• [STEP 2] Open Visibility/Graphics Overrides, or type [VV] for 3D view
• [STEP 3] Turn on Lighting Source to see the light source
Lighting setting
• [STEP 1] Click the lighting fixture that you want to modify
• [STEP 2] Click Edit Type > Click Duplicate and name the new type
• [STEP 3] Modify Photometrics [Initial Intensity] > I recommend adjusting the values of [Illuminance] and [at a distance of]
Please refer to this page for the recommended lighting levels (lux) for activities. Repeat this step for all other lighting sources
• [STEP 4] Test render to see if the light fixtures are at the right setting.
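The [Illuminance] / [at a distance of] pair follows ordinary point-source photometry: illuminance falls off with the square of distance, E = I / d². A plain-Python illustration with made-up numbers (the relationship is unit-consistent: lux with meters, footcandles with feet):

```python
# Point-source falloff: E = I / d**2.
def illuminance(intensity, distance):
    return intensity / distance ** 2

# A fixture set to 30 units of illuminance at a distance of 3
# implies an intensity of 30 * 3**2 = 270.
intensity = 30 * 3 ** 2
print(illuminance(intensity, 3))   # 30.0 -- matches the setting
print(illuminance(intensity, 6))   # 7.5  -- a quarter at double the distance
```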
(CO 7) Add/Edit materials
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Apply materials in Family type
• [STEP 1] Click an object that you want to add or change materials
• [STEP 2] Click [Edit Type] to change Materials and Finishes
• [STEP 3] Duplicate if needed
• [STEP 4] Click the Material value (not all objects have this option, but many family files do) to add or modify the material. Then the Material Browser will pop up
• [STEP 5] If you already made a material you want to use, please select the material and click ok to finish
• [STEP 6] If you want to make a new material, click Create New Material
• [STEP 7] Right-click and click [Rename] to rename the new material [Ex. 000-FB-01-White]
• [STEP 8] You can change Identity, Graphics, and Appearance. To add material from Revit Library, click Appearance, Click [Replace] this Asset. Asset Browser will pop up
• [STEP 9] Search a material from Appearance Library
• [STEP 10] Click the replace icon
• [STEP 11] Click Information and change the name if this is a unique material
• [STEP 12] Click OK to apply
Painting materials (For Walls, Floors, and Ceilings)
• [STEP 1] To use [Paint] tool, click [Paint] from the [Modify] tab
• [STEP 2] Click a material that you want to use from Material Browser. If you do not find the material that you want to use in the Material Browser, you must add a new material first and then apply the paint tool
• [STEP 3] Click a face that you want to change; use the Tab key to select the right face
• [STEP 4] You also can remove the paint material by using the Remove Paint tool
• [STEP 5] Use [Split Face] if the surface needs to be separated: click [Split Face] from the [Modify] tab
• [STEP 6] Click the element (wall, floor, or ceiling) whose face you want to split. You may need to use the TAB key to select the element
• [STEP 7] Draw a closed line to divide the surface. You may need to open an elevation view, a floor plan, or a ceiling plan to draw the lines.
• [STEP 8] Click Green checkmark to finish
• [STEP 9] Then use the Paint tool to apply the material
Decal (Flat Surface – Carpet or pictures)
• [STEP 1] To create Decal, click Decal Types from the Insert tab
• [STEP 2] Please create a new Decal, Name it, insert the image, and click ok
• [STEP 3] To place a Decal, Click Place Decal from the insert tab
• [STEP 4] Place on a surface
• [STEP 5] Change size and location from views
(CO 8) Render material managements
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=106
Manage material
• [STEP 1] Click [Materials] from [Manage] tab
• [STEP 2] In this [Material Browser], you can add and modify materials
• [STEP 3] If you need to apply materials to an imported object (CAD or SketchUp), use [Object Styles]
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Insert other types of files – Sketchup, AutoCAD
(CO 2) Advanced render settings
(CO 3) Edit render qualities
(CO 4) Understand and make cloud renderings
(CO 5) Save Rendering outputs
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Insert other types of files – Sketchup, AutoCAD
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=108
Although many manufacturers provide Revit families and Revit also provides numerous libraries, designers always seek new models and objects for their designs and renderings. SketchUp 3D Warehouse provides a great library from designers and manufacturers. I added three video tutorials on importing a SketchUp model into a Revit family for your reference. You may try them.
SketchUp model to Revit with updatable materials
• [STEP 1] Sketchup Model to DXF file
• Open the SketchUp model in SketchUp, or download a furniture model from 3D Warehouse. For this process, you may need SketchUp Pro.
• Create layers by material. If you have 3 materials, create 3 layers with unique names (e.g. 000_Chair_sk_seat_01, 000_Chair_sk_leg_01, 000_Chair_sk_base_01)
• Explode the entire model until there is nothing left to explode.
Modeling credit: Vojislav N. downloaded from 3D warehouse
• Select one material > right-click and choose Select > All with the same material
Modeling credit: Vojislav N. downloaded from 3D warehouse
• Change the layer in the [Entity Info] tray by selecting the desired layer that you previously made
Modeling credit: Vojislav N. downloaded from 3D warehouse
• Do the same procedure (material selection and layer change) for the other materials
• Export > 3D model > Select DXF file > Click Options > Select only Faces > Version 2007 > OK > Export
Modeling credit: Vojislav N. downloaded from 3D warehouse
• [STEP 2] DXF file Properties change
• Open AutoCAD
• Open the DXF file in AutoCAD. It will show in a 3D view
• Select all lines by pressing [Ctrl+A]
• Change Object color from [Home] tab > Color [By Layer]
• Save the DXF file with Version 2007
• [STEP 3] Import the DXF file (furniture) into Revit
• File > New > Family > Select Family Template file (Furniture)
• Insert tab > Import CAD > Change files of type to DXF > Select the file > Click [Open] > Save the family file > Load into Project
• [STEP 4] Change materials for an imported file in the Revit project file
• Manage tab > Object Styles
• Click Imported objects
• Change material by clicking the material slot
(CO 2) Advanced render settings
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=108
Region renders
The Region render function saves time while rendering
• [STEP 1] Check Region on Render Setting
• [STEP 2] You will see a RED square on the perspective view. You can adjust the box to cover the region that you wish to render
Output Settings
The default setting is Screen, which renders at the on-screen size. If you need a perspective view for presentations or print, your rendering should use a different output setting
• [STEP 1] Click Print on Render Settings
• [STEP 2] Change the resolution
• 75 DPI – Screen presentation
• 150 DPI – Regular print size
• 300 DPI – Fine print size
• 600 DPI – Not used
• [STEP 3] If you need to change image size, you can adjust the width and height on Crop Region Size
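The DPI options above translate to pixel counts as pixels = inches × DPI. A quick plain-Python check for an 11 x 17 sheet:

```python
# Output pixels for a given print size and DPI.
def pixels(width_in, height_in, dpi):
    return int(width_in * dpi), int(height_in * dpi)

print(pixels(17, 11, 75))    # screen presentation -> (1275, 825)
print(pixels(17, 11, 150))   # regular print       -> (2550, 1650)
print(pixels(17, 11, 300))   # fine print          -> (5100, 3300)
```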
Render image
• [STEP 1] After your rendering process is done, you can save the image in the Revit project file for future use by clicking [Save to Project] in the [Rendering] dialog. You can then find the saved rendering in the project browser and include the rendered images on sheets with a titleblock.
• [STEP 2] You can save the rendering in different formats by clicking [Export] under the image in [Render settings]. Typically, save a JPG for Photoshop work.
Render Background
• [STEP 1] You can add a background for your render under [Background]
• The default setting can be Revit Sky, which can correspond with a specific date and time.
• You are also able to add an image as a background.
• You can set a transparent background. The file type must be PNG or TIFF to save a transparent image for the background.
(CO 3) Edit render qualities
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=108
Change the render quality
• Draft – for a test render, but not recommended because of the lighting quality, reflection, bump, and refraction on finishes (Draft render time: 1 min)
• Med – recommended for a test render. Not the best quality, but it is acceptable
• High – recommended for the final render because of the balance of time and quality
• Best – Although it produces the best quality, it is not recommended due to rendering time (Best render time: 2 hrs)
(CO 4) Understand and make Cloud Renderings
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=108
Cloud rendering
Autodesk provides users with cloud rendering. Design firms have powerful hardware to render quickly, but Autodesk also provides its servers for rendering. The render quality is a bit better and faster than Revit rendering. Cloud rendering is a paid service; however, for educational purposes, it is offered for free.
• [STEP 1] Once your render settings are what you want to produce, click [Render in Cloud] from [View] tab, under [Presentation] panel
• [STEP 2] If you are a first-time user of Cloud Render, you must sign in to an Autodesk account (email, PW, text code required)
• [STEP 3] Double-check your view name, output type (Still image), Render Quality, Image Size, Exposure
• [STEP 4] Click email when complete
• [STEP 5] Click Render
(CO 5) Save Cloud Rendering outputs
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=108
Get rendering outputs
• [STEP 1] Check the render results
• [STEP 2] Once you received the notification email, you can check the results in Render Gallery
• [STEP 3] The Render Gallery website will open, click View Project
• [STEP 4] Click the rendered thumbnail image > Click [Post-Processing] to change the exposure level and color balance
image credit: Autodesk cloud rendering
• [STEP 5] Click Download Icon > Click JPEG
• [STEP 6] If you need Transparent Background (for photoshop), click PNG
image credit: Autodesk cloud rendering
Revit rendering vs. Cloud rendering
• Render time – Revit rendering: longer; Cloud rendering: varies (shorter)
• Hardware use – Revit rendering: your hardware; Cloud rendering: no hardware required
• Control (image size) – Revit rendering: fully controllable; Cloud rendering: few options
• Control (lighting) – Revit rendering: manual; Cloud rendering: automatic
SAVE the file before closing the application.
Save in a different location for the backup (e.g., a cloud folder)
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand what Enscape is
(CO 2) Control/navigate Enscape
(CO 3) Create views
(CO 4) Add model backgrounds
(CO 5) Add Entourages
(CO 6) Render images
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand what Enscape is
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
Enscape is a commercial real-time rendering and virtual reality “plug-in” for Revit, Sketchup, Rhino, ArchiCAD, & Vectorworks.
It is mainly used in the AEC industry and is developed and maintained by Enscape GmbH, founded in 2013 and based in Karlsruhe, Germany.
Enscape features. Information from: Enscape website
• [FEATURE 1] Real-time walk-through
CAYAS Architects was able to save 75% of their time to produce 3D visualizations by using Enscape
• [FEATURE 2] Virtual Reality
“A key add-in for Revit and Rhino workflow through to VR.” Forster+Partners
“Now, with Enscape, our clients are amazed with what we show them. They are able to actually experience their project before it is even built. Enscape helps us to do a better job.” HMFH Architects
• [FEATURE 3] Export functions
“Design is always an iterative process, but Enscape has made it a dynamic one” Turner Fleischer
• Batch export
• 360-degree panorama
• Video exports
• Standalones – an EXE file
• [FEATURE 4] Various visual settings
• Clouds and backgrounds
• Time of day change
• White model mode
• Light analysis mode
• Volume fog effects
• Depth of Field option
• Ortho views
• BIM information
• [FEATURE 5] Asset library – more than 1000 options
Other architectural rendering applications
• Vray
• Lumion 3D
• Twinmotion
• Corona
Read the article at this link to learn more about other rendering applications.
Download and install Enscape
• [STEP 1] Request an Enscape educational license from the link
• [STEP 2] Double click the downloaded Setup file.
• [STEP 3] Enscape requires four pre-installed programs, and you must install the necessary applications (all free) to run Enscape.
• [STEP 4] In an advanced setup, you can install Enscape only for Revit.
If you would like to use Enscape Sketchup and Revit, you can select all (Default setting).
• [STEP 5] Once the installation process is completed, open Revit. Note. Enscape is not stand-alone software; it is a plug-in for Revit, SketchUp, and Rhino. Therefore, you must have the host application open in order to use Enscape.
• [STEP 6] Once you click Enscape Start, Enscape will ask for the license code to activate.
Please copy and paste the codes from your email and activate it.
Required Software information can be found in this link.
System requirements information can be found in this link
• To check your graphic drivers (Windows)
• Open your “This PC” on your Desktop
• Mouse “right-click” > Click “Properties” > Click “Device Manager” on the left side of the panel > Find “Display adapters” and see the graphic drivers
(CO 2) Control/navigate Enscape
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
Open your project file and open a perspective that you want to make a rendered image in Enscape. Start Enscape
• [STEP 1] Click [Enscape] tab
• [STEP 2] Select the view that you would like to start on [Active Document] panel
• [STEP 3] Click [Start] on the [Control] panel. Then wait a few seconds or minutes for the plug-in to launch
Once Enscape is open, familiarize yourself with the navigation control
• [OPT 1] For the best practice, use dual monitors: the Enscape real-time render on one monitor and your Revit project on the other.
• [OPT 2] If you do not have two monitors, you can split your screen with two windows
• Use both keyboard and mouse to navigate Enscape
Adjust settings. There are two types of settings.
• [OPT 1] General Settings
This setting is for the file and saves locations, mouse and keyboard control, licensing, etc.
• [OPT 2] Visual Settings
This setting is to render styles, image quality, camera settings, atmosphere, background, render settings, etc.
• Below are the settings that I used in the rendering below
• the rendering preview
(CO 3) Create views
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
If you find a view while you navigate the real-time view from Enscape, you can create/ save a view
• [STEP 1] Once you find the view, stop to navigate
• [STEP 2] Click “Create 3D view” from the tools panel
• [STEP 3] Name the view
• [STEP 4] You can find the view that you just created on the Revit Project browser
• [STEP 5] To go back to the views that you created, click the view name on the [Active Document] panel
(CO 4) Add model backgrounds
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
There are two options to change the model background
• [OPT 1] Change your Atmosphere on the visual settings to change the model backgrounds
• [OPT 2] Try to use Skybox. Find free skyboxes from this link
(CO 5) Add Entourages
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
Add Enscape models from Asset Library
• [STEP 1] Click Asset Library from the Tools panel
• [STEP 2] Search items by category or tags
• [STEP 3] Open a floor plan that you want to add the selected Enscape model to the Revit model
• [STEP 4] Select the model and place it on your floor plan. Use Move, Rotate commands to place the Enscape model. Also, confirm the positions on your perspective view.
• [STEP 5] Once the placement is done, press [ESC] to go back to the library
(CO 6) Render images
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=110
To make the final render
• [STEP 1] Click [Visual Settings] to set render size, file location, and file type.
• [STEP 2] Click [Capture] in your visual settings
• [STEP 3] Select the [Resolution] of the final rendering – Typically Full HD for PPT presentation, Ultra HD for Print version presentation. If your render size is unique, please select Custom and set the pixel size.
• [STEP 4] Check [Show Safe Frame] to see the preview on your real-time view
• [STEP 5] Check [Export Object-ID, Material-ID, and Depth Channel]; adjust the depth for the Depth Channel
• [STEP 6] Set a render location to save your final render image file by clicking the Folder
• [STEP 7] Select a file format, typically [JPG]
• [STEP 8] Select the view name that you want to generate a final render
• [STEP 9] Click [Render image] from [Enscape] tab, under [Tools] panel, to get the final render image outside of the file. You may click “Render image (into the document)” to use the rendered image on your Revit document. This way, you can add your final render image to your sheet
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Understand workflow, name of the material
(CO 2) Use 3D Grass, Water, Reflective, textures
(CO 3) Change to Architecture Maquette (White model, Outline) & Light mode
(CO 4) Create Orthometric views
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Understand workflow, name of the material
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=112
To keep real-time rendering fast in Enscape, you should reduce your Revit file size
To apply a new material
• [STEP 1] Click [Material Browser] icon on [Material and Finish] category in the [Properties] palette
• [STEP 2] Click [Create New Material] on [Material Browser]
• [STEP 3] Rename the material that you created (Example. 001_XX_01_Description)
• [STEP 4] Check [Use Render Appearance] on [Graphic] tab
• [STEP 5] Click the [Appearance] tab on the [material browser]
• [STEP 6] Click [Replace] Icon
• [STEP 7] Click [Appearance Library]
• [STEP 8] Select a category
• [STEP 9] Click a specific material that you would like to apply, or change > click the [replace] icon
• [STEP 10] On the [Material Browser], click the Duplication icon. This process is recommended to make a unique material
• [STEP 11] If the material image needs to be changed, click the image name. If the material's scale, position, or rotation needs to change, click the image
• [STEP 12] Change the scale, offset, or rotation > Click Done
• [STEP 13] Update Information on the [Identity] tab
• Description = Material tag information
• Other information = Material schedule information
• [STEP 14] Update Information on the [Graphics] tab
• Shading, Check [Use Render Appearance]
• Update [Surface Pattern] if needed
• Update [Cut Pattern] if needed
• [STEP 15] Update Information on the [Appearance] tab
• Information name = Should be unique
• Generic = Image or color
• Reflectivity
• Transparency
• Self-illumination – Create self-illuminated material like a light source
• Bump – Create a bumpy texture on a surface
• [STEP 16] Click [Apply] > click [ok]
For materials, Enscape uses most Revit material properties
Materials in Revit can be found at this link
Use sample materials from this link
(CO 2) Use 3D Grass and Water
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=112
Enscape requires a special name for Water and Grass.
The material name must include [water] for water materials.
The material name must include [grass] for grass materials.
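To audit which project materials will trigger Enscape's special grass/water treatment, you can list the material names containing those keywords. A minimal pyRevit sketch (assuming pyRevit is installed):

```python
from pyrevit import revit, DB

doc = revit.doc
for mat in DB.FilteredElementCollector(doc).OfClass(DB.Material):
    name = mat.Name.lower()
    if "grass" in name or "water" in name:
        print(mat.Name)   # these names activate Enscape's grass/water shaders
```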
(CO 3) Change to Architecture Maquette (White model, Outline) & Light mode
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=112
Designers use white models to discuss the shape or form of space.
• [STEP 1] Open [visual settings]
• [STEP 2] Change mode to [White]
Designers also use light mode to show lighting intensity. You can see where hot spots are and which areas need more lighting.
• [STEP 1] Open [visual settings]
• [STEP 2] Change the mode to [Light View]
• [STEP 3] You may uncheck [Automatic Scale] for manual
(CO 4) Create Orthographic views
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=112
To create an isometric view via Enscape, you need an isometric view in your Revit file.
• [STEP 1] Create a Default 3D view by clicking [3D view] from the [View] tab, under [Create] panel
• [STEP 2] Use [section box] and hide in view by element or category to hide unnecessary items
• [STEP 3] Synchronize the view by selecting [view name]
• [STEP 4] Change the Projection to [Orthographic] from [Visual Settings] under [Rendering]
• [STEP 5] You may change the [render style mode] on your [visual settings]
• [STEP 6] You also may change views using numbers on the keyboard
For a floor plan, press [ 5 ]
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Save and load presets
(CO 2) Set and adjust artificial lightings
(CO 3) Create a walk-through video
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Save and load presets
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=114
Saving and loading presets is very helpful if you need to render multiple images. You will save a lot of time and produce consistent image quality.
You can save the visual setting that you modified
• [STEP 1] Click Presets
• [STEP 2] Click Save Preset > Save to Project or Save to File
• [STEP 3] Name the preset
• It is recommended to save three types of preset [Perspective-Day, Perspective-Night, and Isometric]
To load a preset that you already saved
• [STEP 1] Click Presets
• [STEP 2] Click load Preset > Load from Project, or you may load from the file
• [STEP 3] Select a preset that you would like to load > Click OK
(CO 2) Set and adjust artificial lightings
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=114
For better artificial lighting simulation
• [STEP 1] Use a night-time scene time (sunset or sunrise would work as well)
• [STEP 2] Turn off “Auto Exposure” in Enscape visual settings
• Exposure Brightness = 60% ~ 80% (Nighttime scene)
• Exposure Brightness = 51%~53% (Day time scene)
• [STEP 3] The render quality should be High or Ultra
• [STEP 4] Vignette effect = 0%
• [STEP 5] Artificial Light Brightness = 100% ~ 150%
Once you change the time of day, you must click “Create 3D view” to save that time of day.
• [STEP 1] Click [Create 3D view] from the [Enscape] tab
• [STEP 2] Name the view
You may need to adjust the Illuminance value under “Initial Intensity” in the lighting properties.
Find/use IES lighting from this link
Use/adjust self-illumination materials for lighting sources
(CO 3) Create a walk-through video
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=114
To create a video path with scenes
• [STEP 1] Click “Video Editor” from the Enscape tab
• [STEP 2] Click “Add keyframe” – once you click “Add keyframe,” the current scene settings are saved
• [STEP 3] Change the scene by moving to the next camera position
• [STEP 4] Once the next scene is in the right position (walk mode is recommended, so that every scene is seen from the same eye height), click “Add keyframe”; you will see that a second frame has been added.
• [STEP 5] You may check the preview to see how this works.
• [STEP 6] Repeat steps 3 to 5 to create a complete path for the video
• [STEP 7] Click a [frame] and change the settings (time of the day, duration of the movement, camera field of view, and camera position) and click “Apply” to apply the changes
• [STEP 8] You can repeat this for other frames to change their settings
• [STEP 9] Exit the keyframe editor by clicking Back; click Preview to see the results
To save a video path
• [STEP 1] Click “Save Path” from the Enscape tab
• [STEP 2] Make a unique name for the path
To render the video
• [STEP 1] Load camera path by clicking “Camera Path” > “Load path” from the Enscape tab
• [STEP 2] Change the video size from Visual Settings – Resolution (HD is recommended); the quality can be [Web] and FPS must be [30]
• [STEP 3] Click “Render Video” to render the path
Note: rendering the video takes a while because of the number of frames involved. If you render a 1-second video, Enscape typically renders 30 frames; if one frame takes 10 seconds to render, that 1-second video takes 300 seconds to render. Enscape still does a great job by comparison, as Revit rendering takes about 5 minutes to render one frame.
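The arithmetic generalizes to a simple estimate: total render time ≈ video duration × frames per second × render time per frame. A minimal Python sketch of this calculation (the 10 seconds per frame is just the example figure above; actual times depend on hardware and quality settings):

```python
# Estimate total video render time from duration, frame rate and per-frame cost.
def video_render_time(duration_s, fps=30, sec_per_frame=10.0):
    """Return the estimated render time in seconds."""
    frames = duration_s * fps        # e.g. 1 s of video at 30 FPS = 30 frames
    return frames * sec_per_frame    # e.g. 30 frames x 10 s = 300 s

for clip in (1, 10, 60):             # clip lengths in seconds
    total = video_render_time(clip)
    print(f"{clip:>3} s of video -> ~{total / 60:.0f} min of rendering")
```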
23: Chapter 22. Edit render outputs
Session Objectives
Upon completing this session, students will be able to:
(CO 1) Export enlarged jpg files (Original, ID) for print
(CO 2) Create Executable file
(CO 3) Create Render Panorama image
Session Highlights
At the end of the session, students can create the graphics below.
Lecture Contents
(CO 1) Export enlarged jpg files (Original, ID) for print
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=116
Refer to [lecture 14.1, CO 2] for the final render image process.
If you need a specific render image size, change the resolution to Custom and adjust the values under Capture in Visual Settings.
You can use the ID images and the depth map to adjust the rendering in Photoshop.
(CO 2) Create Executable file
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=116
To export the EXE file
The EXE file type is for presentations: it requires no programs such as Revit or Enscape, nor separate material links and lighting/render settings.
• [STEP 1] Click [Start] to launch Enscape and confirm your model is ready for the export.
• [STEP 2] Select the view to start from
• [STEP 3] Add information (Icon, Title, Load first screen) from [Customization] on [General Settings]
• [STEP 4] Click Export > Exe Standalone
• [STEP 5] Save > Name the file and wait until the file is created
(CO 3) Create Render Panorama image
A YouTube element has been excluded from this version of the text. You can view it online here: https://iastate.pressbooks.pub/visualgraphiccomm/?p=116
To create Render Panorama
(This option creates a single 360-degree rendering image, together with a QR code)
• [STEP 1] Select a view to create a 360-degree rendering image (The center of a room/space)
• [STEP 2] Click Enscape Start. Confirm your Projection mode is [Perspective]
• [STEP 3] Click Render Panorama. The render takes time because the panorama requires 16 images (at the normal setting)
• [STEP 4] The result is saved automatically to [Manage Uploads]; open [Manage Uploads]
• [STEP 5] Click the rendered image > upload the rendered image to ENSCAPE CLOUD
• [STEP 6] Click the file to open the web browser or use a QR code
• [STEP 7] The rendered image file and the QR code can be saved as JPG
Tips from Enscape on using the panorama view
Please read this link for using the panorama view
Please read this link for advanced use of panorama images.

1.01: Introduction to Digitization
Information has always been important to us. There are even theories in ecological psychology that propose that information relevant to our interaction with an environment is perceived directly, without first interpreting sensory input into a description of the environment. Still, we are right in calling the current period the information age because of the impact of digital information technologies. Therefore, the starting point in our exploration of building information and information management is the reciprocal relation between information and digitization. This grand theme of our times takes a peculiar form in AECO — a form that in several respects conflicts with general tendencies.
1.02: Digital information
The book starts with some key characteristics of the information age: how the digital revolution changed not only the amount of stored information but also attitudes toward information. Global, ubiquitous infrastructures allow for unprecedented access to information and processing power. This promotes new standards of behaviour and performance in societies and economies that are increasingly information-based.
Information explosion
We are all familiar with how significant storage capacity is: we routinely buy smartphones with gigabytes of memory and hard drives with capacities of a couple of terabytes. The availability and affordability of such devices, and even the familiarity with these data units are a far cry from not so long ago. In the last decades of the previous century, personal computers were a new phenomenon, digital photography was in its infancy and today’s social media did not even exist yet. In 1983, the Apple Lisa, the commercially failed precursor to the Macintosh, had a five megabyte hard disk and cost almost US \$ 10,000 (the equivalent of over US \$ 25,000 today). In 1988, a FUJIX DS-1P, the first fully digital camera, had a two megabyte memory card that could hold five to ten photographs. Our need for data storage and communication has changed a lot since those heady times.
The obvious reason for this change is the explosive increase in information production that characterizes the digital era. In a process of steady growth through the centuries, human societies had previously accumulated an estimated 12 exabytes of information. By 1944 libraries were doubling in size every 16 years, provided there was physical space for expansion. Space limitations were removed by the rise of home computers and the invention of the Internet. These allowed annual information growth rates of 30% that raised the total to 180 exabytes by 2006 and to over 1.8 zettabytes by 2011. More recently, the total more than doubled every two years, reaching 18 zettabytes in 2018 and 44 zettabytes in 2020, and is expected to reach 175 zettabytes by 2025.
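As a rough check of such figures, the compound-growth formula behind them, $T_n = T_0 (1 + r)^n$, can be inverted to find the implied annual growth rate. The sketch below uses only the rounded totals quoted above:

```python
# Implied annual growth rate r from two totals n years apart:
# end = start * (1 + r)**years  =>  r = (end / start)**(1 / years) - 1
def implied_annual_growth(start, end, years):
    return (end / start) ** (1 / years) - 1

r = implied_annual_growth(1.8, 44, 2020 - 2011)      # zettabytes, 2011 -> 2020
print(f"implied annual growth: {r:.1%}")             # about 42.6% per year
print(f"factor per two years: {(1 + r) ** 2:.2f}")   # about 2.03, i.e. more than doubling
```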
The Internet is full of such astounding calculations and dramatic projections,[1] which never fail to warn that the total may become even higher, as the population of information users and producers keeps increasing, as well as expanding to cover devices generating and sharing data on the IoT. But even if we ever reach a plateau, as with Moore’s “law” with respect to computing capacity,[2] we already have an enormous problem on our hands: a huge amount of data to manage. 1.2 exabytes are stored by the big four alone (Google, Microsoft, Amazon and Facebook), while other big providers like Dropbox, Barracuda and SugarSync, and less accessible servers in industry and academia probably hold similar amounts.[3]
What makes these numbers even more important is that information is not just stored but, above all, intensively and extensively processed. Already in 2008, Google processed 20 petabytes a day.[4] In many respects, it is less interesting how much data we produce on a daily or annual basis than what we do with these data. Not surprisingly, social media and mobile phones dominate in any account of digital data processing: in 2018, every minute people sent 473,400 tweets, shared 2 million photos on Snapchat and posted 49,380 pictures on Instagram. Google handled 3.5 billion searches a day, while 1.5 billion people (one-fifth of the world’s population) were active on Facebook every day.
In 2020, the picture slightly changed as a result of the COVID-19 pandemic: we produced 1.7 MB of data per person per second, with a large share again going into social media, while communication platforms like Zoom and Microsoft Teams, as well as online shopping and food ordering, attracted significantly more activity.[5] Anything good or bad happening in the world only increases our dependence on the information and communication possibilities of the Internet, especially now that so many of us can afford to use them anytime and anyplace on our smartphones. Consequently, safeguarding information quality, veracity, accessibility and flow already forms a major challenge for both producers and consumers of data.
The situation is further complicated by changing attitudes toward information. Not so long ago, most people were afraid of information overload.[6] Nowadays we have moved to a diametrically different point of view and are quite excited about the potential of big data and related AI approaches. From being a worry, the plethora of information we produce and consume has become an opportunity. At the same time, we are increasingly concerned with data protection and privacy, as amply illustrated by the extent and severity of laws like the General Data Protection Regulation (GDPR) of the European Union (https://gdpr.eu). Attitudes may change further, moreover in unpredictable ways, as suggested by reactions to the Facebook–Cambridge Analytica data breach in 2018 and worries about data collection in relation to COVID-19.
Information and digitization
It is not accidental that we talk about our era as both the information age and the digital revolution — two characterizations that (not coincidentally) appeared in quick succession. The rapid growth of information production and dissemination, the changes in human behaviours and societal standards or the shift from industrial production to information-based economies would not have been possible without digital technologies. Before the digital revolution, there were technologies for recording and transmitting information but they were not capable of processing information or available to practically all. The information age demands digital technologies, which are consequently present in almost every aspect of daily life, making information processing synonymous with digital devices, from wearables to the cloud. This also means that there is increasingly less that we do with alternative means (e.g. order food by phone rather than through an app), especially since a lot of information is no longer available on analogue media. For example, most encyclopaedias and reference works that used to adorn the bookshelves of homes in the second half of the twentieth century are either no longer available on paper or cannot compete with online sources for actuality, detail and multimedia content. Online video, audio and image sharing platforms have similarly resulted in unprecedented collections that include many digitized analogue media. Despite the frequently low resolution and overall quality of transcribed media, there is no practical alternative to the wealth and accessibility of these platforms.
Related to the dominance of these platforms is that most data transactions take place within specific channels and apps. Nobody publishes on social media in general but specifically on Facebook, Twitter, Instagram, Snapchat, TikTok or whatever happens to be popular with the intended audience at the time. Even though overarching search engines can access most of these data, production, storage and communication are restricted by the often proprietary structure of the hosting environments. As a result, digital information tends to be more fragmented than many assume. Leaving aside the thorny issues of data ownership, protection, rights and privacy, the technical and organizational problems resulting from such restrictions and fragmentation may be beyond the capacities of an individual or even a small firm. Being so dependent on specific digital means for our information needs makes us vulnerable in more respects than we probably imagine and adds to the complexity of information management. It also suggests that privacy is totally lost, as data about user actions and communications are collected by tech companies, whose digital products and services we keep on using because of some huge generic advantages, such as the immense extent and power of crowdsourcing on the Internet.
Regardless of such problems, however, it is inevitable that the means of information production, dissemination and management will remain primarily digital, with growing amounts of information available to us and often necessary for our endeavours. Digitization creates new opportunities for our information needs but, on the other hand, also adds to the problems that must be resolved and their complexity. Digitization is so widely diffuse and pervasive that we are already in a hybrid reality, where the Internet and other digital technologies form permanent layers that mediate even in mundane, everyday activities, such as answering a doorbell. In a growing number of areas, the digital layers are becoming dominant: social media are a primary area for politics, while health and activity are increasingly dependent on self-tracking data and economies are to a large extent about intangible data. Consequently, safety and security in cyberspace are at least as important as in reality. Moreover, they call for dynamic, adaptable solutions that match the fluidity and extent of a digital information infrastructure. It follows that, rather than putting our faith in currently dominant techniques, we need to understand the principles on which solutions should be based and devise better approaches for the further development of information infrastructures.
Interestingly, these infrastructures are not always about us. One aspect of the digital complexity that should not be ignored is that a lot of machine-produced data (and hence a lot of computational power) goes into machine-to-machine communication and human-computer interaction, e.g. between different systems in a car (from anti-lock braking systems and touch-activated locks to entertainment and navigation systems) or in the interpretation of user actions on a tablet (distinguishing between pushing a button, selecting a virtual brush, drawing a line with the brush or translating finger pressure into stroke width). Such data, even though essential for the operations of information processing, are largely invisible to the end user and hence easy to ignore if one focuses primarily on the products rather than the whole chain of technologies involved in a task. On the other hand, these chains and the data they produce and consume are a major part of any innovation in digital technologies and their applications: we have already moved on from information-related development to development dependent on digitization.
Effects of digital information
The practical effects of digital information technologies are widely known, frequently experienced and eagerly publicized. Digitization is present in all aspects of daily life, improving access and efficiency but also causing worries about lost skills, invasion of privacy and effects on the environment. With apps replacing even shopping lists, handwriting is practiced less and less, and handwritten text is becoming more and more illegible. Communication with friends, colleagues, banks, authorities etc. is predominantly Internet-based but cannot fully replace physical proximity and contact, as we have seen in the COVID-19 pandemic. Electricity demand keeps rising, both at home or at work and for the necessary infrastructure, such as data centres.
Other, equally significant effects are less frequently discussed, arguably because they go much deeper and affect us so fundamentally that we fail to recognize the changes. For example, with the easy availability and wide accessibility of information, it is becoming increasingly difficult to claim ignorance of anything — much harder than it has been since the newspaper and news agency boom in the second half of the nineteenth century, and the radio and television broadcasting that followed. More and more facts, events and opinions are becoming common knowledge, from what happens today all over the world to new interpretations of the past, including absurd conspiracy theories. As patients, citizens, students, tourists or hobbyists we can no longer afford to miss anything that seems relevant to our situations or activities.
Another cardinal effect is that we are no longer the centre of the information world, the sole or ultimate possessor and processor of information. Our environment has been transformed and enriched with machine-based capacities that rival and sometimes surpass our own, so changing our relation to our environment, too. Interestingly, our reactions to this loss of exclusivity are variable and even ambivalent. On one hand, we worry about the influence of hidden algorithms and AI, and on the other, we are jubilant about the possibilities of human-machine collaboration. Dystopian and utopian scenarios abound, while we become more and more dependent on information-processing machines. One of the key messages of this book is that, regardless of hopes and fears, there are principles on which we can base our symbiosis with these machines: tasks we can safely delegate to computers and support we can expect from them in order to improve our own information processing and decision making.
Finally, the most profound and arguably lasting effect of digitization is that it invites us to interpret and even experience the world as information, understanding practically everything in terms of entities, properties, relations and processes. Our metaphors for the world were always influenced by the structure of our artefacts: the things we had designed and therefore knew intimately. Projecting their functioning and principles to other things we have been trying to comprehend, like the cosmos, made sense and enabled us to develop new knowledge and technologies. Current conceptual models of reality are heavily influenced by digital information and the machines that store and process it. Human memory processes are explained analogically to hard drive operations and our visual perception is understood by reference to digital image capture and recognition. Such conceptual models are a mixed blessing. As explanations of the mind or social patterns they can be reductionist and mechanistic but at the same time they can be useful as bridges to processing related information with computers.
Information management
All the above makes information management (IM) a task that is not exclusive to managers and computer specialists. It involves everyone who disseminates, receives or stores information. Very few people are concerned with IM just for the sake of it. Most approach information and its management in the framework of their own activities, for which information is an essential commodity. This makes IM not an alien, externally imposed obligation but a key aspect of everyone’s activities, a fundamental element in communication and collaboration, and a joint responsibility for all — a necessity for anyone who relies on information for their functioning or livelihood.
Given the complexity of our hybrid reality and the lack of transparency in many of our approaches to it, this book bypasses technical solutions and focuses on the conceptual and operational structure of IM: the principles for developing clear and effective approaches. These approaches can lead to better information performance, including through reliable criteria for selecting and evaluating means used for their implementation. In other words, we need a clear understanding of what we have to do and why before deciding on how (which techniques are fitting for our goals and constraints).
The proposed principles include definitions of information and representation, and operational structures for connecting process management to IM. IM therefore becomes a matter not of brute force (by computers or humans) but of organization and relevance. One can store all documents and hope for the best but stored information is not necessarily accessible or usable. As we know from searches on the Internet, search engines can be very clever in retrieving what is out there but this does not necessarily mean that they return the answers we need. If one asks for the specific causes of a fault in a building, it is not enough to receive all documents on the building to browse and interpret. Identifying all information that refers precisely to the relevant parts or aspects of the building depends on how archives and documents have been organized and maintained. To achieve that, we cannot rely on exhaustive, labour-intensive interpretation, indexing and cross-referencing of each part of each document. Instead, we should try to understand the nature and structure of the information these documents contain and then build better representations and management strategies, which not only improve IM but also connect it better to our processes and the tasks they comprise.
Recommended further reading
• Blair, A. et al. (eds.), 2021, Information: a historical companion. Princeton: Princeton University Press.
• Graham, M., & Dutton, W.H. (eds.), 2019, Society and the Internet. Oxford: Oxford University Press.
• Floridi, L., 2014. The fourth revolution. Oxford: Oxford University Press.
Key Takeaways
• Digitization has added substantial possibilities to our information-processing capabilities and promoted the accumulation of huge, rapidly growing amounts of information
• Digital information and its processing are already integrated in our everyday activities, rendering them largely hybrid
• We are no longer the exclusive possessor or even the centre of information and its processing: machines play an increasingly important role, including for machine-to-machine and human-to-machine interactions
• Information management is critical for the utilization of digital information; instead of relying on brute-force solutions, we should consider the fundamental principles on which it should be based
Exercises
1. Calculate how much data you produce per week, categorized in:
1. Personal emails
2. Social media (including instant messaging)
3. Digital photographs, video and audio for personal use
4. Study-related emails
5. Study-related photographs, video and audio
6. Study-related alphanumeric documents (texts, spreadsheets etc.)
7. Study-related drawings and diagrams (CAD, BIM, renderings etc.)
8. Other (please specify)
2. Specify how much of the above data is stored or shared on the Internet and how much remains only on personal storage devices (hard drives, SSD, memory cards etc.)
3. How do the above (data production and storage) compare to worldwide tendencies?
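For the first two exercises, a simple tally may help. A minimal sketch in which all figures are placeholders to be replaced with your own estimates:

```python
# Weekly personal data production in megabytes (all figures are placeholders).
weekly_mb = {
    "personal emails": 5,
    "social media (incl. instant messaging)": 250,
    "personal photos/video/audio": 1500,
    "study-related emails": 10,
    "study-related photos/video/audio": 400,
    "study-related alphanumeric documents": 20,
    "study-related drawings and diagrams": 800,
    "other": 50,
}
total = sum(weekly_mb.values())
for category, mb in weekly_mb.items():
    print(f"{category:<40} {mb:>6} MB ({mb / total:.0%})")
print(f"{'total':<40} {total:>6} MB (~{total * 52 / 1024:.0f} GB per year)")
```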
1. Calculations and projections of information accumulated by human societies can be found in: Rider, F., 1944, The Scholar and the Future of the Research Library. New York: Hadham Press; Lyman, P. & Varian, H.P., 2003, "How much information 2003?" http://groups.ischool.berkeley.edu/archive/how-much-info/; Gantz, J. & Reinsel, D., 2011, "Extracting value from chaos." https://www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf; Turner, V., Reinsel, D., Gantz, J.F., & Minton, S., 2014, "The Digital Universe of Opportunities." https://www.emc.com/leadership/digital-universe/2014iview/digital-universe-of-opportunities-vernon-turner.htm; "Rethink data." Seagate Technology Report, https://www.seagate.com/nl/nl/our-story/rethink-data/
2. Intel co-founder Gordon Moore observed in 1965 that every year twice as many components could fit onto an integrated circuit. In 1975 the pace was adjusted to a doubling every two years. By 2017, however, Moore's "law" no longer applies, as explained in: Simonite, T., 2016. “Moore’s law Is dead. Now what?” Technology Review https://www.technologyreview.com/s/601441/moores-law-is-dead-now-what/
3. Source: https://www.sciencefocus.com/future-technology/how-much-data-is-on-the-internet/
4. The claim was made in a scientific journal paper: Dean, J., & Ghemawat, J., 2008. "MapReduce: simplified data processing on large clusters" Commun. ACM 51, 1 (January 2008), 107–113, https://doi.org/10.1145/1327452.1327492. Regrettably, Google and other tech companies are not in the habit of regularly publishing such calculations.
5. There are several insightful overviews of what happens every minute on the Internet, such as: https://www.visualcapitalist.com/?s=internet+minute; https://www.domo.com/learn/infographic/data-never-sleeps-8; https://www.domo.com/learn/infographic/data-never-sleeps-6
6. The notion of information overload was popularized in: Toffler, A., 1970. Future shock. New York: Random House.

1.03: Digitization in AECO
This chapter presents the background of AECO digitization, starting with general tendencies and moving on to particular developments in AECO, including BIM. It explains these developments from a historical perspective and outlines the limitations they cause to further digitization and decision making in AECO.
Private versus business
While in our private lives we are quite digitally minded and data savvy, there is little to suggest that digitization similarly dominates professional activities in AECO. Despite the enthusiastic reception of technological developments, such as 3D printing, digitization has yet to reach a substantial depth or breadth in AECO. We use computer programs like BIM and CAD to draw or spreadsheets to calculate but reality in AECO remains analogue, dominated by information carriers like drawings and other conventional documents on paper: remnants of an era when we did not have the same information processing capacities as today. This is unlike e.g. the music industry, where vinyl, CD and other carriers are just a matter of nostalgia, while the content has become fully digital, or online on-demand services like Netflix or Spotify, which have moreover changed digital attitudes in spectacular ways, practically eliminating music and video piracy.
The probable reason is that AECO generally remains attached to analogue, largely pre-industrial processes that require little if any mediation from digital technologies — much like fishing and hunting, two other industries with a low investment in digitization. These processes cause legacy information solutions, such as paper-based documents, to persist, severely limiting the potential and nature of digitization. Resisting or even rejecting digitization is, of course, justified if there is no reason for it. Regrettably, this is not the case with AECO, given its far from satisfactory performance. It follows that the high contrast with other industries or even private life calls for a closer investigation of the particular circumstances of AECO, towards a clearer identification of underlying causes and resulting problems.
Digital uptake
There is broad consensus that AECO is one of the least digitized sectors.[1] Everyone seems to be in agreement: on the Internet, in professional and academic publications, in software advertisements. A critical note is that the claim is based on few data, chiefly proxies, and a lot of opinions of people in AECO or digitization, i.e. with vested interests in the deployment of new technologies. Still, the slow digital uptake in AECO seems so plausible that it is widely used as justification for various digital solutions: manifestos by policy makers, standards by professional bodies, new approaches by academic researchers, new software by commercial developers. So, from a vague problem, we jump directly to specific solutions, such as BIM, digital twins, Industry 4.0 etc.: panaceas for all the ills of AECO. The promise of the solutions is invariably deemed so high that the resulting changes in AECO do not just solve the problem; they make it disappear completely.
This poses an interesting conundrum: if the solutions are so readily available and so powerful, there must be at least a significant minority in AECO that adopts them and benefits from observable and convincing improvements in performance. In turn, this should stimulate wider adoption of the solutions in AECO and general advances. In short, things should develop rapidly and smoothly, changing practices and behaviours, as we can see with most digital technologies, from email to satellite navigation. This, however, does not seem to be the case with digitization in AECO. Even CAD and BIM have always been considered primarily with respect to costs and obstacles. This suggests that most of these solutions have little overall effect on the problems of AECO or that they fail to fully utilize the potential of digitization.
The viewpoint advocated in this book is that most solutions do hold some promise for solving real problems in AECO. However, instead of jumping ahead and imposing any solution willy-nilly, we need first to understand the relation between problems and solutions: describe and explain it, so that we can judge if a solution is suitable and feasible. This calls for a closer, more detailed inspection of digitization in AECO and its background, which reveals that digitization in AECO suffers less from slow uptake than from having a secondary role. Even if investment is low in comparison to other sectors, digitization is clearly present in AECO: drawings are already made with CAD or, increasingly, BIM, while office automation is complete and there are enough crossovers between the two, such as invoicing software that draws data from CAD or BIM. In fact, between 1997 and 2015 investment in digitization among German AECO enterprises more than doubled.
Presence, however, is not enough because digitization remains too far in the background of AECO decision and production processes. Digital technologies are mostly found at the office, where they are used to produce conventional analogue documents, for use in outdated decision processes and arguably more significantly in largely manual production processes: building construction still relies more on cheap labour than on digital means, such as productive robotization. AECO appears to have limited its investment to basic digitization, such as CAD and electronic invoicing. More advanced and domain-specific technologies, from 3D scanning to robotics, are rare, despite their acknowledged potential for competitiveness, innovation and productivity. The reason for that may be that there is little incentive in advanced technologies that are unrelated to or conflicting with current practices: why invest in 3D-scanning precision if the tolerances in building construction remain high? This affects even basic digitization, such as CAD and BIM: why invest in well-structured, precise models if the sole purpose of the software is to produce drawings on paper? It is enough that these drawings look correct.
INFORMATION EXPLOSION IN AECO
Despite the slow, limited uptake of digital technologies, there is ample evidence of the explosive growth of digital information in AECO. On one end of the spectrum, we have new information sources that produce big data, such as smartphones and sensors. These tell us a lot about users and conditions in the built environment, and so promise a huge potential for the analysis and improvement of building performance, but also require substantial investment in technologies and organization. Predictably, there is limited interest for this end, despite the appeal of subjects like prop-tech and smart buildings.
At the other end of the spectrum, we encounter general-purpose technologies (basic digitization) that have already become commonplace and ubiquitous, hence also in AECO. Office automation has taken over the production and dissemination of memos, reports, calculations and presentations. Email, for instance, dominates communication and information exchange by offering a digital equivalent to analogue practices like letter writing. A main characteristic of these technologies is the replication of fragmented analogue practices, to the detriment of integrated, domain-specific technologies. For example, communicating on issues in a BIM-based project via email and reports produced with text processors and spreadsheets is redundant because most BIM software includes facilities for reporting issues and making calculations in direct connection with the model.
Domain-specific technologies, which attempt to structure AECO processes and knowledge, exist in the diffuse zone between the two ends of the spectrum. These try to offer more relevant alternatives to general-purpose technologies, as well as connections to the abundance of digital data. Currently paramount among them is BIM, an integrated approach that is usually justified with respect to performance.[2] Performance improvement through BIM requires intensive and extensive collaboration, which adds to both the importance and the burden of information. Integration in BIM and return on investment also require coverage of most aspects of a project and put emphasis on larger projects. Both comprehensive digitization and larger projects, however, come against interoperability, capacity and coordination problems, making BIM deployment even harder and often haphazard.
The end result is that AECO still resides in the mentality of information overload. In a 2015 survey,[3] 70% of AECO professionals claim that project information deluge actually impedes effective collaboration, while 42% feel unable to integrate new digital tools in their organizations. We have no reason to assume that the problems have been alleviated since then. As information needs in AECO have changed little since the 1980s, when digitization was in its infancy, this suggests that the problem lies primarily not with the unchanged quantities of information but with the way information is accessed through the new, digital means. Therefore, the resulting dissatisfaction with digitization cannot be dismissed as a teething issue. If digitization approaches in AECO were successful, any such issue would have been resolved long ago. Its persistence suggests fundamental misunderstandings that impede the deployment of real solutions to AECO information needs. AECO consequently appears to share many of the problems of the digital information explosion without enjoying adequate benefits from the information-processing opportunities of the digital era.
ORIGINS AND OUTCOMES
To identify and explain these misunderstandings, we have to go back in history and look at the origins of AECO digitization. AECO has always been an intensive producer and consumer of information. In fact, most of its disciplines produce information on buildings rather than buildings, primarily documents that specify what should be constructed and how. Especially drawings have been a major commodity in AECO, both as a widely accessible isomorphic representation of buildings and as a basis for conceptualizing designs through geometry. Throughout the history of AECO, drawings have been ubiquitous in all forms of specification and communication, as well as quite effective in supporting all kinds of decision making.
The history of digitization in AECO starts quite early, already in the 1960s, but with disparate ambitions. Some researchers were interested in automating design (even to the extent of replacing human designers with computers), while others were keen to computerize drawing. In the end, the two ambitions coexisted in the scientific area of CAAD, where design automation was generally treated as the real goal. 3D modelling was acceptable, especially if directly linked to design processes, while computerized drawing was largely left to software companies. With the popularization of computers in the 1990s, however, it was computerized drawing (CAD) that dominated AECO digitization in practice.
As with other software, the original use of CAD was the production of analogue documents: conventional drawings like floor plans and bills of materials on paper. For many years, the advantages of computerized drawing were presented in terms of efficiency improvement over drawing by hand on paper: faster production of drawings, easier modification and compact storage. Even after the popularization of the Internet, the emphasis on conventional documents remained. The only difference was that, rather than working with paper-based documents only, one could also produce and exchange digital files like PDFs.
In this manner, AECO information remained firmly entrenched in conventional, document-based practices. While analogue documents like telephone directories were being replaced by online information systems and people adapted to having their day planners and address lists on mobile phones or using navigation apps instead of maps, AECO stubbornly stuck to analogue practices and documents, prolonging their life into the digital era. This is evident even in BIM, which has stronger relations to design automation than drawing computerization but still retains drawings not only as the main output but also as the primary interface with the information contained in a model.
A further consequence is that digital AECO information comes in huge amounts, with many and often large files that are poorly connected to each other. The content of these files is accessible through separate, usually proprietary software (as opposed to e.g. browsers that can access all information on the Internet) and involves human interaction and interpretation. The user remains the centre as well as the main actor in information processing, which further increases the number of documents, as users tend to summarize and combine sources. This reveals the biggest problems of this file-inundated information landscape: more than the amounts of information, file sizes and inefficient software, these are redundancy (multiple files covering the same subjects with considerable overlaps), lack of coherence (poor conceptual and operational connections between these files) and low consistency (different descriptions of the same aspects in various files and different descriptions of related aspects).
BIM: RADICAL INTENTIONS
The latest big chapter in the history of AECO digitization concerns BIM. Drawing from product modelling, BIM emerged as a radical improvement of computerized drawing that could provide a closer relation to design. The difference with earlier attempts at design automation was that it did not offer prescriptive means for generating a design but descriptive support to designing: structured representation of buildings, collaboration between AECO disciplines, integration of aspects and smooth transition between phases. By doing so, it shifted attention from drawings to the information they contained. At least, this is the popular perception of BIM. Behind it, lies something more fundamental that forms a recurring theme in this book: meaningful symbolic representation.
The wide acceptance of BIM is unprecedented in AECO computerization. Earlier attempts were often met with reluctance, not in the least for the cost of hardware, software and training they required. By contrast, the reception of BIM was much more positive, even though BIM is more demanding than its predecessors in terms of cost (an issue that nevertheless resurfaced after the initial euphoria). Arguably more than its attention to information or collaboration, it was its apparent simplicity (a Lego-like assembly of a building) that made BIM appealing, especially to non-technical stakeholders. The arcane conventions and practices of analogue drawing no longer seemed necessary or relevant.
Still, BIM remained rooted in these conventions. It may have moved from the graphic to the symbolic but it did so through interfaces laden with graphic conventions. For example, entering a wall in BIM is normally done in a floor plan projection, in a fashion that largely replicates analogue drawing: the user selects the wall type and then draws a line to indicate its axis. As soon as the axis is drawn, the wall symbol appears fully detailed according to the wall type that has been chosen: lines, hatches and other graphic elements indicating the wall materials. The axis is not among the normally visible graphic elements. Such attachment to convention impedes users from understanding that they are actually entering a symbol in the model rather than generating a drawing.
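To make the distinction concrete: what the model stores is not lines and hatches but a symbol, roughly a typed record with an axis and parameters from which the graphics are derived on display. The sketch below is purely illustrative; the field names are assumptions, not any particular BIM schema:

```python
from dataclasses import dataclass

@dataclass
class WallSymbol:
    """Illustrative wall symbol: the model stores the type, axis and parameters;
    the lines and hatches seen in a floor plan are derived from these on display."""
    wall_type: str    # a named type that defines layers, materials and hatches
    axis: list        # the polyline the user draws; normally not displayed itself
    height: float     # in metres

wall = WallSymbol("brick-cavity-300mm", [(0.0, 0.0), (5.0, 0.0)], 2.7)
print(wall)  # the symbol, not its graphic appearance
```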
More on such matters follows later in the book. For the moment, it suffices to note that BIM signifies a step forward in AECO digitization but remains a transitional technology that may confuse or obscure fundamental information issues. Even so, as the currently best option for AECO, it deserves particular attention and therefore constitutes the main information environment in this book: representation and IM are discussed in the framework of BIM. Future technologies are expected to follow the symbolic character of BIM, so any strategies developed with respect to BIM will probably remain applicable. It is telling that current proposals on digital twins (representations that capture not only the form and structure of buildings but also their behaviour, as reported in real time by sensors in the real thing) generally depart from BIM-like models.
Limitations and necessities
The current digitization tendencies in AECO are dangerously confusing. While digitization invites us to interpret and even experience the world as information, AECO is still entrenched in analogue practices that keep information implicit. This means that we miss the opportunity to develop new conceptual models of reality, which are a prerequisite to digitization and information processing by machines. Instead, we use the old and arguably outdated analogue practices as the domain of discourse (the stuff that should be digitized).
Equally limiting is that digitization in AECO still calls for human interpretation, which runs contrary to the general tendency to remove ourselves from the centre of the information world. As a result, the explosively increasing amounts of digital information become a burden rather than an opportunity: we still focus on the availability of information for human consumption instead of on the information-processing capacities of machines that can support us in reliable, meaningful ways.
Even worse, the very availability of information may be underplayed. While digitization in general makes it increasingly difficult to claim ignorance of anything, in AECO a project can be an isolated microworld that fails to acknowledge what exists beyond its scope. Learning and generalizing from precedents remains unsupported by AECO information technologies but even within a project many silos persist. The brief and budget, for example, are practically never integrated in the setup of a model in BIM, thereby leaving powerful options for design guidance and automation severely underutilized.
Such limitations do not merely affect IM; they also undermine decision making. As we shall see in the chapter on decisions and information, there is strong evidence that human thinking comprises two kinds of processes. The first kind (Type 1) is fast, automatic, effortless and nonconscious, while the second (Type 2) is slow, effortful, conscious and controlled. Type 1 thinking dominates daily life and allows us to be quite efficient in many common tasks but it also regularly leads to errors, especially in complex tasks. Regrettably, we tend to rely too much on the economical Type 1 processes and accept their products, even in situations that clearly call for Type 2 thinking. For example, we tend to make judgements on the basis of the limited information available in our memory at a given moment (e.g. news stories of the past few weeks), instead of taking the trouble to collect all relevant data and analyse them properly before reaching a decision.
This type of thinking occurs only too frequently with respect to the built environment: we become concerned about fire safety only after a publicized disaster and then go into a frenzy of activity that nevertheless soon subsides, especially if there is no similar disaster to rekindle our interest or if a disaster of a different kind occurs, even though the probability and risks of building fires remain the same. Moreover, we do not exhibit the same concern about stair safety, despite the fact that annually there are more victims of stair falls than of building fires, probably because each stair fall usually involves only one person, while a single building fire can have tens of victims.
That such problems are not restricted to AECO is not a consolation but a further danger: studies of human decision making reveal that people take decisions intuitively, on the basis of readily available rather than necessary, well-structured information, even in sensitive, high-risk and high-gain areas like finance. Share trading, for instance, is usually presented as a highly skilled business but performance is not consistent: it seems more a game of luck than one of skill. It is therefore important to take such failures into account also when we try to learn from other areas, especially with respect to management.
In addition to acknowledging and controlling our biases, so as to use Type 2 processes more frequently and purposely, we must take care that we always have access to the right information for these processes. This information, structured in transparent and operational descriptions of a task and its context, is the real goal for digitization in any AECO project: it yields human-computer partnerships, where machines support human decision making through extensive data collection, analysis and representation. Note that this does not imply a lessening role for humans in decision making. On the contrary, it adds to the capacities of humans by facilitating Type 2 thinking through explicit information, as well as by freeing resources for Type 2 processes.
The general conclusion is that AECO digitization is in urgent need of substantial improvement but this improvement is not merely a matter of importing new technologies as panaceas. The prerequisite to any change is a thorough understanding of building information and how it relates to our cognitive and social processes. As we shall see in the following chapters, once this is achieved, all goals, including IM and decision support, become clear and fundamentally feasible.
Key Takeaways
• AECO digitization is characterized by slow, limited uptake, bounded by analogue conventions and confused by its dual origins: automation of design and computerization of drawing
• The persistence of analogue practices makes digital AECO information not only inefficient but also redundant, incoherent and inconsistent
• BIM is a transitional technology, still bounded by analogue practices, but, as a symbolic representation, also an indication of things to come
• Digitization is critical not only for information management but also for decision making
Exercises
1. Calculate how much data a design project may produce and explain your calculations analytically, keeping in mind that there may be several design alternatives and versions. Use the following categories:
1. CAD or BIM files
2. PDFs and images produced from CAD & BIM or other software
3. Alphanumeric files (texts, spreadsheets, databases etc.)
4. Other (please specify)
2. Calculate how much of the above data is produced by different stakeholders, explaining your calculations analytically:
1. Architects
2. Structural engineers
3. MEP engineers
4. Clients
5. Managers
1. Two examples of studies of digitization in AECO are: (a) a typically opinion-based view of digitization in AECO: https://www.mckinsey.com/business-functions/operations/our-insights/imagining-constructions-digital-future#, and (b) a more detailed account, using relevant data and meaningful proxies: https://www.zew.de/en/publications/zukunft-bau-beitrag-der-digitalisierung-zur-produktivitaet-in-der-baubranche-1.
2. Performance and in particular the avoidance of failures and related costs are among the primary reasons for adopting BIM, as argued in: Eastman, C., Teicholz, P.M., Sacks, R., & Lee, G., 2018. BIM handbook (3rd ed.). Hoboken NJ: Wiley.
3. Research conducted in 2015 in the UK: https://www.newforma.com/news-resources/press-releases/70-aec-firms-say-information-explosion-impacted-collaboration/
In the previous part we have considered how digitization affects our treatment of information and our attitudes concerning information. We have seen that there are marked differences between general tendencies and what is happening in AECO. Many of the differences are due to the way information is represented. Digitization relies heavily on symbolic representations that allow efficient and reliable processing of information contained in the symbols and their relations. This lessens the importance of isomorphic representations like drawings, which retain much of the visual appearance of the real things. In this part, we look at the fundamental structure of symbolic representations, differences with analogue representations and how the two come together in BIM, in a way that exemplifies the transitional character of current AECO digitization.
2.02: Symbolic representation
This chapter introduces symbolic representations: how they are structured and how they describe things, including spatial ones. It introduces graphs for the description of spatial symbolic representations (which presupposes knowledge of the content of Appendix I) and presents some of the advantages of such mathematical foundations. The chapter concludes with the paradigmatic and syntagmatic dimensions of representations, and their relevance for interpretation and management.
Symbolic representations
Many of the misunderstandings concerning information occur when people do not appreciate what representations are and how they convey information. Representations are so central to our thinking that even if the sender of some information fails to structure it in a representation, the receiver does so automatically. A representation can be succinctly defined as a system for describing a particular class of entities. The result of applying a representation to an entity is a description. Representations of the symbolic kind, which proliferate in human societies, consist of two main components:
• A set of symbols, usually finite
• Some rules for linking these symbols to the entities they describe
The decimal numeral system is such a symbolic representation. Its symbols are the familiar Hindu-Arabic numerals:
$S_D = \{0,1,2,3,4,5,6,7,8,9\}$
The rules by which these symbols are linked to the quantities they describe can be summarized as follows:
$n_n \cdot 10^n + n_{n-1} \cdot 10^{n-1} + \ldots + n_1 \cdot 10^1 + n_0 \cdot 10^0$
These rules underlie positional notation, i.e. the description of a quantity as:
$n_n n_{n-1} \ldots n_1 n_0$
For example, the description of seventeen becomes:
$1 \cdot 10^1 + 7 \cdot 10^0 \Rightarrow 17$
The binary numeral system is essentially similar. Its symbol set consists of only two numerals and its rules employ two as base instead of ten:
$S_B = \{0,1\}$
$n_n \cdot 2^n + n_{n-1} \cdot 2^{n-1} + \ldots + n_1 \cdot 2^1 + n_0 \cdot 2^0$
This means that seventeen becomes:
$1 \cdot 2^4 + 0 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 \Rightarrow 10001$
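The positional rules above are directly computable. A minimal Python sketch that applies them in reverse, expressing a quantity in any base up to ten (here decimal and binary):

```python
# Express a non-negative integer in positional notation for a given base (2-10),
# by repeatedly taking the remainder modulo the base.
def to_positional(quantity, base):
    if quantity == 0:
        return "0"
    digits = []
    while quantity > 0:
        quantity, d = divmod(quantity, base)
        digits.append(str(d))
    return "".join(reversed(digits))

print(to_positional(17, 10))  # '17'    = 1*10^1 + 7*10^0
print(to_positional(17, 2))   # '10001' = 1*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0
```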
There are often alternative representations for the same class of entities. Quantities, for example, can be represented by (from left to right) Roman, decimal and binary numerals, as well as one of many tally mark systems:
XVII = 17 = 10001 = IIII IIII IIII II
A representation makes explicit only certain aspects of the described entities. The above numerical representations concern quantity. They tell us, for example, that there are seventeen persons in a room. The length, weight, age and other features of these persons are not described. For these, one needs different representations.
Each representation has its advantages. Decimal numerals, for example, are considered appropriate for humans because we have ten fingers that can be used as an aid to calculation. Being built out of components with two states (on and off), computers are better suited to binary numerals. However, when it comes to counting ongoing quantities, like people boarding a ship, tally marks are better suited to the task. Some representations may be not particularly good at anything: it has been suggested that despite their brilliance at geometry, ancient Greeks and Romans failed to develop other branches of mathematics to a similar level because they lacked helpful numeral representations.
We should also appreciate that representations are heavily constrained by their implementation mechanisms: the things physically used to make them. Cuneiform characters, for example, are strongly related to the styli used for imprinting them on clay tablets: the strokes one could make with these styli on clay. Interestingly, such strokes remained the basis of subsequent writing systems. This suggests that some elements of a representation are transferred from one technology to another, despite the changes in implementation. At the same time, such transitions form a clear progress towards minimizing effort and increasing speed in writing, regardless of script or language.
From an IM viewpoint, symbolic representations are the culmination of a long process of trying to order information into discrete parcels and networks that link them. In this process, we encounter many technologies for organizing large quantities of information, for example card-filing systems, indices, dictionaries and encyclopaedias. In an illustration of the significance of information for management, the structure such technologies provide connects to attempts to order the world and organize our interactions with it. This is something many states and businesses discovered in the nineteenth century, when many of these technologies took off. For example, classifications of professions, races, genders etc. were reduced to what the technologies afforded, sometimes with deleterious effects. Symbolic representations concluded the process and allowed use of the computer as an information and communication device by supporting the parsing of the content of any document into symbols and relations that can be easily digitized.
Symbols and things
The correspondence between symbols in a representation and the entities they denote may be less than perfect. This applies even to the Latin alphabet, one of the most successful symbolic representations and a cornerstone of computerization. The letters (phonograms) that describe sounds (phonemes) in a language are a very compact set of symbols that afford a more economical way of describing words than syllabaries or logographies (graphemes corresponding to syllables or things and ideas). Using the Latin alphabet as the symbol set turns a computerized text into a string of ASCII characters that combine to form all possible words and sentences. Imagine how different text processing in the computer would be if its symbols were not alphabetic characters but pixels or lines like the strokes we make to form the characters in handwriting.
At the same time, the correspondence between letters in the Latin alphabet and phonemes in the languages that employ them is not straightforward. In English, for example, the letter ‘A’ may denote different phonemes:
• ɑ: (as in ‘car’)
• æ (as in ‘cat’)
• ɒ (as in ‘watch’)
• ə (as in ‘alive’)
• ɔ: (as in ‘talk’)
The digraph TH can be either:
• θ (as in ‘think’) or
• ð (as in ‘this’)
Conversely, the phoneme eɪ can be written either as:
• AY (as in ‘say’)
• EI (as in ‘eight’)
The lesson we learn from these examples is that abstraction and context are important in representation. Abstraction allows for less strict yet reasonably clear relations between symbols and things: the letter ‘A’ represents only vowels, and moreover vowels of a similar kind. A one-to-many correspondence like that is trickier than a simple one-to-one relation but is usually clarified thanks to the context, in our case proximal alphabetic symbols: ‘car’ and ‘cat’ are very similar strings but most English learners soon learn that they are pronounced differently and associate the right phoneme rather than the letter with the word. Similarly, in the floor plan of a building one soon learns to distinguish between two closely spaced lines denoting a wall and two very similar lines representing a step (Figure 1).
Spatial symbolic representations
Symbolic representations are also used for spatial entities. Familiar examples are metro and similar public transport maps. A common characteristic of many such maps is that they started life as lines drawn on a city map to indicate the route of each metro line and the position of the stations (Figure 2). As the size and complexity of the transport networks increased, the metro lines and stations were liberated from the city maps and became separate, diagrammatic maps: spatial symbolic representations, comprising symbols for stations and connections between stations (Figure 3).
In these maps, the symbols are similar for each line but can be differentiated by means of shape or colour, so that one can distinguish between lines. The symbol set for a metro network comprising two lines (the red O line and the blue Plus line) would therefore consist of the station symbol for the red line, the station symbol for the blue line, the connection symbol for the red line and the connection symbol for the blue line:
SM = {o, +, |o, |+}
The rules that connect these symbols to real-world entities can be summarized as follows:
• Each station on a metro line (regardless of the complexity of the building that accommodates it) is represented by a station symbol of that line
• Each part of the rail network that connects two stations of the same line is represented by a line symbol of that line
These common-sense, practical principles underlie many intuitive attempts at spatial representation and, as discussed later on, even a branch of mathematics that provides quite useful and powerful means for formalizing and analysing symbolic spatial representations.
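To see how compact such a representation is, the sketch below encodes the symbol set SM and its two mapping rules in Python. This is a minimal illustration only: the function names and the encoding of a line stretch as a list of symbols are invented for this example, not part of any standard.

```python
# The symbol set for the two-line metro network described above.
SM = {"o", "+", "|o", "|+"}

def station_symbol(line):
    """Rule 1: every station of a line maps to that line's station symbol."""
    return {"red": "o", "blue": "+"}[line]

def connection_symbol(line):
    """Rule 2: every stretch of rail between two stations of the same line
    maps to that line's connection symbol."""
    return {"red": "|o", "blue": "|+"}[line]

# A short stretch of the red O line then becomes a string of symbols:
stretch = [station_symbol("red"), connection_symbol("red"), station_symbol("red")]
print(stretch)  # ['o', '|o', 'o']
```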
Our familiarity with metro maps is to a large degree due to their legibility and usability, which make them excellent illustrations of the strengths of a good representation. As descriptions of an urban transport system, they allow for easy and clear travel planning, facilitate recognition of interchanges and connections, and generally provide a clear overview and support easy understanding. To manage all that, metro maps tend to be abstract and diagrammatic (as in Figure 3), in particular by simplifying the geometry of the metro lines (usually turning them into straight lines) and normalizing distances between stations (often on the basis of a grid). As a consequence, metro diagrams are inappropriate for measuring geometric distances between stations. Still, as travelling times on a metro primarily depend on the number of stations to be traversed, metro maps can be used to estimate the time a trip may take. However, for finding the precise location of a station, city maps are far more useful.
A comparison of metro maps to numerals shows that the increase in dimensionality necessitates explicit representation of relations between symbols. In the one-dimensional numerals, relations are implicit yet unambiguous: positional notation establishes a strict order that makes evident which numeral stands for hundreds in a decimal number and how it relates to the numerals denoting thousands and tens. Similarly, in an alphabetic text (also a one-dimensional representation), spaces and punctuation marks are used to indicate the clustering of letters into words, sentences and paragraphs, and thus facilitate understanding of not only phonemes but also meanings in the text.
In two-dimensional representations like the metro diagrams, proximity between two station symbols does not suffice for inferring the precise relation between them. One needs an explicit indication like a line that connects the two symbols. A metro map missing such a connection (Figure 4) is puzzling and ambiguous: does the missing connection mean that a metro line is still under development or simply that the drawing is incomplete by mistake? Interestingly, such an omission in a metro diagram is quite striking and does not normally go unnoticed, triggering questions and interpretations, which will be discussed in the chapter on data and information (in relation to anti-data).
Similarly puzzling is a metro map where stations of different lines are close to each other, even touching (Figure 5): does this indicate that the stations are housed in the same building, so that one can change from one line to the other, or that the stations are close by but separate, in which case one has to exit the metro and enter it again (which may involve having to buy a new ticket)? In a metro map where stations are clearly connected or coincide (Figure 3), there is no such ambiguity concerning interchange possibilities.
Graphs
Diagrams like these metro maps are graphs: mathematical structures that describe pairwise relations between things (for a summary of graph theory see Appendix I). In Figure 3, each metro station is a vertex and each connection between two stations an edge. Graphs have a wide range of applications, from computer networks and molecular structures to the organization of a company or a family tree, because the tools supplied by graph theory help quantify many features and aspects. For example, the degree of a vertex is a good indication of complexity. In a metro map, it indicates the number of connections that meet at a station. The only interchange in Figure 3 is easy to identify by its degree (4), as are the end stations of the two lines, which are leaves.
The degree sequence of a graph helps with similar assessments. In a map of a metro line (i.e. the subgraph consisting of the vertices and edges belonging to the line), this sequence is a good indication of opportunities for crossing over to other lines, as well as of how busy the line and its stations may become as passengers avail themselves of these opportunities.
The eccentricity of a metro station relates to its remoteness or poor connectivity. The diameter of the graph indicates the extent of remoteness in the metro network. Together with the radius, they are used to detect the center and the periphery of the graph: respectively, the well-connected part where most things happen and the quieter part where little happens.
Finally, in order to be able to travel on the metro, the graph has to be connected: each vertex should connect to every other vertex by some path (the graph in Figure 5 is therefore not connected). Connectivity is affected by bridges. In our metro example, all edges are bridges, making the metro particularly sensitive: any problem between two stations renders it partly unusable, as passengers cannot move along alternative routes.
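All the graph measures mentioned above are directly computable. The sketch below applies them to a small, invented two-line network (not the actual layout of Figure 3) using the networkx library; the station names A–F and the layout are hypothetical.

```python
# A sketch using networkx to quantify a hypothetical two-line metro network.
import networkx as nx

G = nx.Graph()
# Red line: A - B - C - D; blue line: E - C - F (C is the interchange).
nx.add_path(G, ["A", "B", "C", "D"])
nx.add_path(G, ["E", "C", "F"])

print(dict(G.degree()))               # the interchange C has degree 4; leaves have degree 1
print(nx.eccentricity(G))             # remoteness of each station
print(nx.diameter(G), nx.radius(G))   # extent of remoteness; minimum eccentricity
print(nx.center(G), nx.periphery(G))  # well-connected core vs. remote stations
print(list(nx.bridges(G)))            # here every edge is a bridge: no alternative routes
print(nx.is_connected(G))             # True: one can travel between any two stations
```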
What the above examples illustrate is that a well-structured representation can rely on mathematical tools that help formalize its structure and analyses. This is important for two reasons: firstly, formalization makes explicit what one may recognize intuitively in a representation; secondly, it supports automation, especially of analyses. Allowing computers to perform painstaking and exhaustive analyses complements, liberates and enhances the creative capacities of humans.
Graphs and buildings
Graph-like representations are also used for buildings. Architects, for example, use bubble and relationship diagrams to express schematically the spatial structure of a design (Figure 3). In such diagrams, nodes usually denote spaces where some specific activities take place (e.g. “Expositions” or “Library”), while edges or overlaps indicate proximity or direct access.
On the basis of graph theory, more formal versions of these diagrams have been developed, such as access graphs. Here nodes represent spaces and edges represent openings like doors, which afford direct connection between spaces. Access graphs are particularly useful for analysing circulation in a building.[1]
The access graph demonstrates the significance of explicit structure: pictorially it may have few advantages over relationship diagrams, as both make explicit the entities in a representation and their relations. However, imposing the stricter principles of a mathematical structure reduces vagueness and provides access to useful mathematical tools. In a relationship diagram one may use both edges and overlaps to indicate relations, and shapes, colours and sizes to indicate properties of the nodes. In a graph, one must use only vertices and edges, and label them with the necessary attributes. This improves consistency and clarity in representation, similarly to the standardization of spelling in a language. It also facilitates application of mathematical measures which give clear indications of design performance. For example, the eccentricity of the node representing the space from where one may exit a building is a useful measure of how long it may take for people to leave the building, which is critical for e.g. fire egress. Similarly, the significance of a space for pedestrian circulation is indicated by its degree in the access graph, while edges that form bridges are doorways that cut off a part of the building when closed. This makes them potential bottlenecks in pedestrian circulation but also opportune control points, e.g. for security checks: points of singular importance, either as threats or as opportunities. For all these reasons, graphs are a representational basis to which we will return in this book.
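As an indication of how such measures can be computed for an access graph, the sketch below calculates the eccentricity of the exit space with a breadth-first search. The space names and the layout are invented for illustration, not taken from any figure in this book.

```python
# A sketch of an access graph for a hypothetical small floor plan:
# vertices are spaces, edges are doorways.
from collections import deque

access = {
    "exit":    ["hall"],
    "hall":    ["exit", "office", "meeting"],
    "office":  ["hall"],
    "meeting": ["hall", "archive"],
    "archive": ["meeting"],
}

def eccentricity(graph, start):
    """Longest shortest path (counted in doorways) from `start` to any space."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        space = queue.popleft()
        for neighbour in graph[space]:
            if neighbour not in dist:
                dist[neighbour] = dist[space] + 1
                queue.append(neighbour)
    return max(dist.values())

print(eccentricity(access, "exit"))  # 3: exit -> hall -> meeting -> archive
```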
Paradigmatic and syntagmatic dimensions
In a symbolic representation we can analyse descriptions along two dimensions: the paradigmatic and the syntagmatic.[2] The paradigmatic dimension concerns the symbols in the representation, e.g. letters in a text. The syntagmatic dimension refers to the sequence by which these symbols are entered in the description. The meaning of the description relies primarily on the paradigmatic dimension: the symbols and their arrangement in the description. Syntagmatic aspects may influence the form of these symbols and their arrangement but above all reveal much about the cognitive, social and mechanical processes behind the representation and its application. For instance, in a culture where left-to-right writing is dominant, one would expect people to write numerals from left to right, too. However, the Dutch language uses a unit-before-ten structure for number words between 21 and 99 (as opposed to the ten-and-unit structure in English), e.g. “vijfentwintig” (five-and-twenty). Consequently, when writing by hand, e.g. noting down a telephone number dictated by someone else, one often sees Dutch people first enter the unit numeral, leaving space before it for the ten numeral, and then backtrack to that space to enter the ten numeral. With a computer keyboard such backtracking is not possible, so the writer normally pauses on hearing the unit numeral, waits for the ten numeral and then enters the two in the reverse of the spoken order. Matching the oral representation to the written one may involve such syntagmatic peculiarities, which are moreover constrained by the implementation means of the representation (writing by hand or typing).
In drawing by hand, one may use a variety of guidelines, including perspective, grid and frame lines, which prescribe directions, relations and boundaries. These lines are normally entered first in the drawing, either during the initial setup or when the need for guidance emerges. The graphic elements of the building representation are entered afterwards, often in direct reference to the guidelines: if a graphic element has to terminate on a guideline, one may draw it from the guideline or, if one starts from the opposite direction, slow down while approaching the guideline, so as to ensure clear termination. Similar constraining influences may also derive from already existing graphic elements in the drawing: consciously or unconsciously one might keep new graphic elements parallel or similar in size to previously entered ones, terminate them against existing lines etc. Such mechanical and proportional dependence on existing graphic elements has led to the development of a wide range of object-snap options and alignment facilities in computerized drawing.
Any analysis of the paradigmatic dimension in a description aims at identifying symbols, e.g. relating each stroke in a handwritten text to a letter. To do that, one has to account for every stroke with respect to not only all symbols available in the representation but also various alternatives and variations, such as different styles of handwriting. Analyses of the syntagmatic dimension have to take into account not only the paradigmatic dimension (especially symbols and implementation mechanisms) but also cognitive, social and mechanical aspects that may have played a role in the temporal process of making a description, such as the tendency to draw from an existing graphic element to ensure clear termination. Similarly, in most BIM editors, one enters openings like doors or windows only after the walls that host them have been entered in the model, and rooms are defined only after the bounding walls have been completed.
As all this relates to the organization of a design project and the relations between members of a design team, the syntagmatic dimension is of particular relevance to the management of information processes. Thankfully, there are sufficient tools for registering changes in a digital representation: adding a timestamp to the creation, modification and eventual deletion of a symbol in a computer program is easy and computationally inexpensive. Making sense of what these changes mean requires thorough analysis of the sequences registered and clear distinctions between possible reasons for doing things in a particular order.
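As an indication of how inexpensive such registration is, the minimal sketch below appends a timestamped event for every creation, modification or deletion of a symbol. The event vocabulary and the symbol identifiers are invented for illustration; real BIM platforms keep comparable records in their own formats.

```python
# A minimal sketch of timestamped change registration for symbols in a model.
from datetime import datetime, timezone

log = []

def register(action, symbol_id, **details):
    """Record one change event; interpreting the sequence is left to analysis."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,          # "create", "modify" or "delete"
        "symbol": symbol_id,
        "details": details,
    })

register("create", "wall-07", type="internal wall")
register("modify", "wall-07", length=5.4)
register("delete", "wall-07")
for event in log:
    print(event)
```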
The significance of the syntagmatic dimension increases with the dimensionality of the representation: in a one-dimensional representation like a text, the sequence by which letters are entered is quite predictable, including peculiarities like the way Dutch words for numbers between 21 and 99 are structured. In representations with two or more dimensions, one may enter symbols in a variety of ways, starting from what is important or opportune and moving iteratively through the description until it is complete (although completeness may be difficult to ascertain syntagmatically, making it uncertain when the process should terminate). This clearly indicates the significance of the syntagmatic dimension for the management of 3D and 4D representations of buildings.
Key Takeaways
• Symbolic representations usually employ finite sets of symbols and rules that relate these symbols to specific classes of entities in order to produce descriptions of these entities
• Familiar spatial symbolic representations like metro diagrams are graphs: mathematical structures that describe pairwise relations between things, using vertices for the things and edges for their relations
• Graphs are a useful representational basis for buildings because they make symbols and relations between symbols explicit and manageable
• Symbolic descriptions have a paradigmatic and a syntagmatic dimension, relating respectively to the symbols they contain and the sequence by which the symbols have been entered in the description
• Interpretation of a description relies primarily on the paradigmatic dimension, while management strongly relates to the syntagmatic dimension
Exercises
1. Add a third, circular line to the metro in Figure 3 using existing stations only:
 a. Which stations should the circular line connect?
 b. How can you justify your decisions with graph measures?
2. Draw graphs for the post-and-beam structure in Figure 8:
 a. One using vertices for the posts and beams and edges for their connections
 b. One using vertices for the junctions and edges for the posts and beams
 c. Calculate the following for both graphs:
  i. The degree and eccentricity of each vertex
  ii. The diameter and radius of each graph
3. Draw an access graph for the floor plan in Figure 9. In the access graph:
 a. Calculate the degree and eccentricity of each vertex
 b. Calculate the diameter and radius of the graph
 c. Indicate the vertices belonging to the center and the periphery
 d. Identify any bridges in the access graph
1. Graph-based applications in the representation of buildings are discussed extensively in: Steadman, P., 1983. Architectural morphology: an introduction to the geometry of building plans. London: Pion.
2. The discussion on the paradigmatic and syntagmatic dimensions in visual representations draws from: Van Sommers, P., 1984. Drawing and cognition: descriptive and experimental studies of graphic production processes. Cambridge: Cambridge University Press.
To understand many of the problems surrounding building information, we first need to examine the analogue representations that still persist in AECO. This chapter presents some of the key characteristics that have made these representations so successful but do not necessarily carry over to digital environments. Effective computerization replaces the human abilities that enable analogue representations with capacities for information processing by machines.
Pictorial representations and geometry
Familiar building representations tend to be drawings on paper, such as orthographic projections like floor plans and sections, and projective ones, including isometrics and axonometrics: two-dimensional depictions of three-dimensional scenes, through which one tries to describe the spatial arrangement, construction or appearance of a building. What these drawings have in common is:
• They are pictorial representations (not symbolic)
• They rely heavily on geometry
Even though drawings were already used in building design in antiquity, it was in the Renaissance that applied geometry revolutionized the way Europeans represented and conceptualized space, in many cases raising the importance of the graphic image over the written text. Geometry was not merely a handy foundation for descriptive purposes, i.e. formalizing pictorial representations of buildings, but also a means of ordering space, i.e. organizing people’s experiences and thoughts to reveal some inherent order (including that of the cosmos). Consequently, building drawings evolved from schematic to precise and detailed representations that matched the perception of actual buildings, as well as most levels of decision making and communication about building design and construction.
This gave geometry a central position in building design. Many architects and engineers became engrossed in geometric explorations closely linked to some presumed essence or ambition of their profession. With geometry both an overlay and underlay to reality, a complex relation developed between building design and geometry, involving not only the shape of the building but also the shape of its drawings. In turn, this caused building drawings to become semantically and syntactically dense pictorial representations, where any pictorial element, however small, can be significant for interpretation. In comparison to more diagrammatic representations, the interpretation of building drawings involves a larger number of pictorial elements, properties and aspects, such as colour, thickness, intensity and contrast. As representations, building drawings were therefore considered a mixed and transitional case.[1]
The computerization of such complex, highly conventional analogue representations was initially superficial, aiming at faithful reproduction of their appearance. To many, the primary function of digital building representations, including not only CAD but also BIM, is the production of conventional analogue drawings either on paper (prints) or as digital facsimiles (e.g. a PDF of a floor plan). This makes computerization merely an efficiency improvement, especially concerning ease of drawing modification, compactness of storage and speed of dissemination. This is a testimony to the power and success of analogue building drawings but at the same time a major limitation to a fuller utilization of the information-processing capacities of computers. Analogue drawings work well in conjunction with human abilities for visual recognition, allowing us to develop efficient and effective means of specification and communication. For example, most people recognize the same number of spaces in a floor plan on paper; scanning the floor plan transforms it into a computer file but computers generally only recognize it as an array of pixels. Recognizing the rooms and counting them by computer requires explicit representation of spaces.
Visual perception and recognition
Building drawings are surprisingly parsimonious: they manage to achieve quite a lot with a limited repertory of graphic primitives. With just a few kinds of lines, they produce floor plans, sections, perspectives etc., as well as depict a wide variety of shapes and materials in all these projections. To a large degree this is thanks to the ingenious ways they trigger the human visual system and allow us to see things. For example, we tend to associate similar elements if they are proximal. Therefore, two closely spaced parallel lines become one thing: the depiction of a wall. But if the distance between the lines increases beyond what might be plausible for a thick wall, they become just parallel lines. Seeing two lines as a wall does not necessarily mean they have to be strictly parallel or straight (Figure 1).
It is similarly easy to identify columns in a floor plan. Even more significantly, the arrangement (repetition, collinearity, proximity etc.) and similarity of columns allow us to recognize colonnades: groups of objects with a specific character (Figure 2). A colonnade may be recognizable even when the columns are not identical and their arrangement not completely regular (Figure 3). However, if the arrangement is truly irregular, proximity or similarity do not suffice for the recognition of a colonnade (Figure 4).
Abstraction and incompleteness
Pictorial representations are characterized by a high potential for abstraction, which is evident in the different scales of building drawings: a wall at a scale like 1:20 is depicted by a large number of lines indicating various layers and materials; at 1:100 the wall may be reduced to just two parallel lines; at 1:500 it may even become a single, relatively thick line. Similarly, a door in a floor plan at 1:20 is quite detailed (Figure 6), at 1:100 it is abstracted into a depiction that primarily indicates the door type (Figure 7) and at 1:500 it becomes just a hole in a wall (Figure 8). At all three scales both the wall and the door are clearly recognizable, albeit at different levels of specificity and detail. Such abstraction is largely visual: it mimics the perception of a drawing (or, for that matter, of any object) from various distances. It also corresponds to the design priorities in different stages. In early, conceptual design, one tends to focus on general issues, zooming out of the drawing to study larger parts, while deferring details to later stages. Therefore, the precise type, function and construction of a door may be relatively insignificant, making abstraction at the scale of 1:500 suitable. However, that abstraction level is inappropriate for the final technical design, when one has to specify not just the function and construction of a door but also its interfacing with the wall. To do so, one has to zoom in and use a scale like 1:20 to view and settle all details.
In addition to visual abstraction, one may also reduce common or pertinent configurations, however complex, into a single, named entity, e.g. an Ionic or Corinthian column, a colonnade (Figure 2) or “third floor” and “north wing”. Such mnemonic or conceptual abstraction is constrained by visual recognition, as outlined above, but also relies on cultural convention: it is clearly not insignificant that we have a term for a colonnade. As a result, mnemonic abstraction plays a more important role in symbolic representation than purely visual abstraction.
Pictorial representations are also relatively immune to incompleteness: a hastily drawn line on paper, with bits missing, is still perceived as a line (Figure 9). A house partially occluded by an obstacle is similarly perceived as a single, complete and coherent entity (Figure 10).
Dealing with incomplete descriptions is generally possible because not all parts are critical for understanding their meaning, even if they are not redundant. In English, for example, keeping only the consonants in a text may suffice for recognizing most words:
TH QCK BRWN FX JMPS VR TH LZY DG
(THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG)
This practice, currently known as disenvoweling, is widely applied in digital short messages. In the past, it was used to similar effect by telegraph operators, note takers and others who wanted to economize on message length and the time and effort required for writing or transmitting a message. Identifying the missing vowels is often a matter of context: ‘DG’ in a farmyard setting probably means ‘DOG’ but in an archaeological one it may stand for ‘DIG’. If a word contains many vowels, it may be hard even then: ‘JMPS’ is almost certainly ‘JUMPS’ in most contexts but ‘DT’ as a shorthand of ‘IDIOT’ may be far from effective in any context.
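Disenvoweling itself is trivially mechanical, as the sketch below shows; what is remarkable is that the output remains readable at all, and only thanks to the reader's knowledge and the context.

```python
# A sketch of disenvoweling: the transformation is simple, the recovery
# of the original depends entirely on the reader and the context.
def disenvowel(text):
    return "".join(ch for ch in text if ch not in "AEIOUaeiou")

print(disenvowel("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"))
# TH QCK BRWN FX JMPS VR TH LZY DG
```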
Likewise in images, some parts are more critical than others for recognition. A basic example is dashed lines: even with half of the line missing, the human visual system invariably recognizes the complete lines and the shapes they form (Figure 11).
Interestingly, a shape drawn with dashed lines is more easily recognized if the line junctions are present. This relates to a general tendency of the human visual system to rely on points of maximum curvature in the outline of shapes.[2] Corners, in particular, are quite important: the presence of corners often suffices for the perception of illusory figures (Figure 12). The form of a corner gives perceivers quite specific expectations concerning the position and form of other corners connected to it, even if the geometry is curvilinear (Figure 13). The presence of compatible corners in the image leads to perception of an illusory figure occluding other forms. Perception of the illusory figure weakens if occlusion occurs at non-critical parts of the figure, such as the middle of its sides (Figure 14).
The importance of corners underlay one of the early successes in artificial intelligence. Based on a typology of edge junctions (Figure 15), and on expectations about the connectivity of these types and the orientation of resulting surfaces, computers were able to recognize the composition of scenes with trihedral geometric forms: faces, volumes and their relative positions (Figure 16).[3]
The above examples illustrate how analogue representations can be parsimonious and simultaneously effective, but only if complemented with quite advanced and expensive recognition capacities. Empowering computers with such capacities is an emerging prospect but, for the moment at least, symbolic representations that contain explicit information are clearly preferable.
Implementation mechanisms
Symbols can exist in various environments, so we use means appropriate to each environment for their implementation. A letter of the alphabet can be handwritten on paper with ink or graphite particles, depending on the writing implement (although one might claim that the strokes that comprise the letter are the real implementation mechanisms with respect to both the paradigmatic and the syntagmatic dimensions). In the computer, the same letter is implemented as an ASCII character in text processing, spreadsheet and similar programs. In a drawing program, it may comprise pixels or vectors corresponding to the strokes (depending on the type of the program). In all cases, the symbol (the letter) is the same; what changes are the implementation mechanisms used for it.
In analogue building representations, the overemphasis on geometry results in a dominance of implementation mechanisms over symbols. As geometric primitives are the graphic implementation mechanisms in pictorial building representations (underlay) and geometry exercises significant ordering influence on building design (overlay), it has been easy for attention to be sidetracked to the geometric implementation mechanisms of building representations, not only in the analogue but also in the digital versions. This geometric fixation meant a lack of progress in CAD and many misunderstandings in BIM.
To understand the true significance of geometric implementation mechanisms for the symbols in a building representation, consider the differences between alternative depictions of the same door in a floor plan (Figure 17). Despite differences between the graphic elements and their arrangement, they all carry the same information and are therefore equivalent and interchangeable. Many people reading the floor plan are unlikely to even notice such differences in notation, even in the same drawing, especially if the doors are not placed close to each other.
Using different door depictions for the same door type in the same drawing makes little sense. Differences in notation normally indicate different types of doors (Figure 18): they trigger comparisons that allow us to identify that there are different door types in the design and facilitate recognition of the precise differences between these types, so as to be able to judge the utility of each instance in the design. These differences are meaningful for understanding the depicted design, not accidental variations or stylistic preferences. In both Figure 17 and 18, we recognize doors but the differences in implementation mechanisms matter only in Figure 18, where they derive from the differences between the door types.
In conclusion, one of the key advantages of symbolic representations is the preeminence of symbols and the attenuation of confusion between symbols and implementation mechanisms. In computerized texts, letters are not formed by handwritten strokes that produce the required appearance; the appearance of letters is added to the letter symbols through properties like font and size. Analogue building representations are similar to handwritten texts in that they may put too much emphasis on graphic elements because it is only through the interpretation of these that one can know e.g. the materials and layers that comprise a wall. In a symbolic representation, the materials and composition of the wall are explicit properties of an explicit symbol: we do not have to infer them from the graphics in a drawing. As these properties are described alphanumerically, symbolic representation removes ambiguity and makes visual displays like drawings just one of the possible views of building information.
Key Takeaways
• Analogue building representations are pictorial and rely heavily on geometry
• Visual perception and recognition are essential for the success of pictorial representations
• The reliance of analogue building representations on geometry leads to overemphasis on implementation mechanisms like graphic elements, even in digital environments
Exercises
1. Identify the building elements and components in Figure 6 and list the properties described graphically and geometrically in the drawing
2. List and explain the differences between the above and what appears in Figure 7 and Figure 8
1. There are many treatises on building drawings, their history, significance and relation to geometry. The summary presented here draws in particular from: Cosgrove, D., 2003. Ptolemy and Vitruvius: spatial representation in the sixteenth-century texts and commentaries. A. Picon & A. Ponte (eds) Architecture and the sciences: exchanging metaphors. Princeton NJ: Princeton University Press; Evans, R., 1995. The Projective Cast: Architecture and Its Three Geometries. Cambridge MA: MIT Press; Goodman, N., 1976. Languages of art; an approach to a theory of symbols (2nd ed.). Indianapolis IN: Hackett.
2. The significance of points of maximum curvature, corners and other critical parts of an image is described among others in: Attneave, F., 1959. Applications of information theory to psychology; a summary of basic concepts, methods, and results. New York: Holt; Kanizsa, G., 1979. Organization in vision: essays on Gestalt perception. New York: Praeger.
3. The algorithmically and conceptually elegant recognition of scenes with trihedral objects was finalized in: Waltz, D., 1975. Understanding line drawings of scenes with shadows. P.H. Winston (ed) The psychology of computer vision. New York: McGraw-Hill.
This chapter approaches BIM as a symbolic building representation and explains its key differences from analogue representations and their facsimiles in CAD. It analyses how a model is built out of symbols that may have an uneasy correspondence with real-world objects and how abstraction applies to these symbols. It concludes with a view of models as graphs that reveals what is still missing in BIM.
Symbols and relations in BIM
BIM is the first generation of truly symbolic digital building representations.[1] CAD also used discrete symbols but these referred to implementation mechanisms: the geometric primitives that comprised a symbol in analogue representations. In BIM, the symbols explicitly describe discrete building elements or spaces — not their drawings. BIM symbols usually come in “libraries” of elements, i.e. predefined symbols of various types. The types can be specific, such as windows of a particular kind by a certain manufacturer, or abstract, e.g. single-hung sash windows or even just generic windows. The hierarchical relations between types enable specificity and abstraction in the representation, e.g. deferring the choice of a precise window type to a later design stage, without missing information that is essential for the present stage, as all currently relevant properties of the window, e.g. its size and position, exist in a generic window symbol.
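The sketch below illustrates such a type hierarchy and the deferred choice of a precise type. All names and property values are invented for illustration; BIM software implements type libraries in its own, far richer way.

```python
# A sketch of hierarchical symbol types: an instance can stay generic now
# and be re-linked to a more specific type later, without losing the
# properties that matter at the present stage.
class SymbolType:
    def __init__(self, name, parent=None, **props):
        self.name, self.parent, self.props = name, parent, props
    def lookup(self, key):
        """Resolve a property along the type hierarchy."""
        if key in self.props:
            return self.props[key]
        return self.parent.lookup(key) if self.parent else None

window      = SymbolType("window")
single_hung = SymbolType("single-hung sash window", parent=window, sash="lower")
acme_sh200  = SymbolType("Acme SH-200", parent=single_hung, u_value=1.1)  # hypothetical

class Instance:
    def __init__(self, sym_type, **props):
        self.type, self.props = sym_type, props

w = Instance(window, width=0.9, height=1.2)        # early stage: generic but dimensioned
w.type = acme_sh200                                # later: linked to a precise type
print(w.props["width"], w.type.lookup("u_value"))  # 0.9 1.1 — earlier decisions retained
```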
Entering an instance of a symbol in a model normally follows this procedure:
• The user selects the symbol type from a library menu or palette
• The user positions and dimensions the instance in a geometric view like a floor plan, usually interactively by:
• Clicking on an insertion point for the location of the instance, e.g. on the part of a wall where a window should be
• Clicking on other points to indicate the window width and height relative to the insertion point (needed only if the window does not have a fixed size)
Modifications of the instance are performed in three complementary ways:
• Changes of essential properties such as the materials of a component amount to change of type. This is done by selecting a different symbol type from the library menu or palette and linking it to the instance.
• Changes in the geometry of an instance involve either repositioning the reference points or numerically changing the relevant values in any of the ways allowed by the program interface: in dialogue boxes that pop up by right-clicking on the instance, in properties palettes, through dimension lines or schedules.
• Changes in additional properties that do not conflict with the type, e.g. the occupancy of a space or the stage where a wall should be demolished, are entered through similar facilities in the interface, like a properties palette. Some of these properties are built in the symbols, while others can be defined by the user.
BIM symbols make all properties explicit, whether geometric or alphanumeric. The materials of a building element are not inferred from its graphic appearance but are clearly stated among its properties, indicated either specifically or abstractly, e.g. “oak” or “wood”. Most properties in an instance are inherited from the type. This concerns not just materials but also any fixed dimensions: each wall type typically has a fixed cross section. This ensures consistency in the representation by keeping all similar elements and components truly similar in all critical respects. Consistency is essential for many tasks, such as cost estimation or procurement.
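The consistency that type-level properties provide can be sketched as follows: instances read their fixed cross section from the wall type, so a single change at the type level is reflected in every instance. The class and property names are invented for illustration.

```python
# A sketch of type-level consistency: the cross section lives in the type,
# not in the individual walls.
class WallType:
    def __init__(self, name, width, layers):
        self.name, self.width, self.layers = name, width, layers

class Wall:
    def __init__(self, wall_type, length):
        self.type, self.length = wall_type, length
    @property
    def width(self):                 # inherited from the type, not stored per instance
        return self.type.width

brick = WallType("brick cavity wall", 0.30, ["brick", "cavity", "brick"])
walls = [Wall(brick, 5.0), Wall(brick, 3.2)]

brick.width = 0.35                   # one change at the type level...
print([w.width for w in walls])      # ...reaches every instance: [0.35, 0.35]
```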
Many of the relations between symbols are present in BIM, even if they are not always obvious or directly accessible. Openings like doors and windows, for example, are hosted by a wall. They are normally entered in a model after the wall has been placed and in strict connection to it: moving a window out of the hosting wall is not allowed. Connected walls may also have a specific relation, e.g. co-termination: if one is moved, the others follow suit, staying connected in the same manner. Similarly, spaces know their bounding elements (which also precede them in the representation) and if any of these is modified, they automatically adapt themselves. Through such relations, many links between symbols are hidden in BIM. A door schedule (Figure 1), for example, reveals that, in addition to its hosting wall, a door knows which two spaces it connects (or separates when closed).
Quite important is the explicit symbolic representation of both the ‘solids’ out of which a building is constructed (building elements like walls, floors, doors and windows) and the ‘voids’ of the building (the spaces bounded by the building elements). In analogue representations, the spaces are normally implicit, i.e. inferred by the reader. Having them explicit in BIM means that we can manipulate them directly and, quite significantly from the perspective of this book, attach to them information that cannot be linked to building elements. Similarly to specifying that a window is made of sustainable wood, one can specify that a space is intended for a particular use, e.g. “office” or for specific activities like “small group meeting” or “CEO’s meeting room”. Such characterizations relate to various requirements (usually found in the brief), such as floor area and performance specifications, e.g. acoustics or daylighting, which can also be attached to the space and used to guide and evaluate the design. Making spaces explicit in the representation therefore allows for full integration of building information in BIM and, through that, higher specificity and certainty. Spaces, after all, are the main reason and purpose of buildings, and most aspects are judged by how well spaces accommodate user activities.
BIM symbols and things
BIM has many advantages but, in common with other symbolic representations, also several ambiguities. One of the most important concerns the correspondence between symbols and real-world things. Building representations in BIM are truly symbolic, comprising discrete symbols. Unfortunately, the structure of building elements often introduces fuzziness in the definition of these symbols. In general, there are two categories of ‘solids’ in buildings. The first is building elements that are adequately represented by discrete symbols. Doors and windows, for example, are normally complete assemblies that are accommodated in a hole in a wall. Walls, on the other hand, are typical representatives of the second category: conceptual entities that are difficult to handle in three respects. Firstly, walls tend to consist of multiple layers of brickwork, insulation, plaster, paint and other materials. Some of these layers continue into other elements: the inner brick layer of an external wall may become the main layer of internal walls, forming a large, complex and continuous network that is locally incorporated in various walls (Figure 2).
Secondly, BIM retains some of the geometric bias of earlier building representations, for example in the definition of elements like walls that have a fixed cross section but variable length or shape. When users have to enter the axis of a wall to describe this length or shape, they inevitably draw a geometric shape. BIM usually defines symbols on the basis of the most fundamental primitives in this shape. Even if one uses e.g. a rectangle to describe the axis, the result is four interconnected yet distinct walls, each corresponding to a side of the rectangle. Similarly, a wall with a complex shape but conceptually and practically unmistakably a single, continuous structure, may be analysed into several walls, each corresponding to a line segment of its axis (Figure 3).
Thirdly, our own perception of elements like walls may get in the way. Standing on one side of a wall, we see only the portion of the wall that bounds the room we are in. Standing on the other side, we perceive not only a different face but possibly also a different part of the wall (Figure 4). As a result, when thinking from the perspective of either space, we refer to parts of the same entity as if they were different walls.
The inevitable conclusion is that some symbols in BIM may require further processing when considered with respect to particular goals. One may have to analyse a symbol into parts that are then combined with parts of other symbols, e.g. for scheduling the construction of the brick network in Figure 2. Other symbols must be grouped together by the user, for instance the internal wall in Figure 3. Such manipulations should not reduce the integrity of the symbols; it makes little sense to represent each layer of a wall separately. At the same time, one has to be both consistent and pragmatic in the geometric definition of building elements. In most cases, acceptance of the BIM preference for the simplest possible geometry is the least painful option: representing each of the two internal walls in Figure 4 as a single, separate entity is a compromise that accommodates all perspectives, including those indicated in the figure.
Abstraction and grouping in BIM
BIM symbols cover a wide range of abstraction levels, from generic symbols like “internal wall” without any further specifications to highly detailed symbols, e.g. of a very specific wall type, including precise descriptions of materials from particular manufacturers. A building representation in BIM often starts with abstract symbols, which become progressively more specific, in parallel with the design or construction process. It is also possible to backtrack to a higher abstraction level rather than sidestep to a different type on the same level, e.g. when some conflict resolution leads to a dead end and options must be reconsidered. This typologic abstraction is one of the strong points of BIM but also something one has to treat with care because a model may contain symbols at various abstraction levels. Managing the connections between them, e.g. deciding on the interfacing between a highly specific window and an abstract wall, requires attention to detail. On the positive side, one can use such connections to guide decision making, e.g. restrict the choice of wall type to those that match the expectations from the window.
Symbolic representations have considerable capacities for bottom-up grouping on the basis of explicit relations between symbols, ranging from similarity (e.g. all vowels in a text) to proximity (all letters in a word). As is typical of digital symbolic representations, BIM allows for various groupings of symbols, e.g. the set of all instances of the same door type in a design, all spaces with a particular use on the second floor or the parts of a design that belong to the north wing. For the latter, some additional user input may be required, such as a shape that represents the outline of the north wing or the labelling of every symbol with an additional wing property. No user input is required for relations built into the behavioural constraints of a symbol, e.g. the hosting of openings in walls.
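Such groupings amount to filtering symbols by their properties, as in the following sketch, where the model is reduced to plain dictionaries with invented fields:

```python
# A sketch of bottom-up grouping: selecting symbols at any relevant
# abstraction level by filtering on their explicit properties.
model = [
    {"id": "d1", "kind": "door",  "type": "D-30",   "floor": 2},
    {"id": "d2", "kind": "door",  "type": "D-30",   "floor": 1},
    {"id": "s1", "kind": "space", "use":  "office", "floor": 2},
    {"id": "s2", "kind": "space", "use":  "office", "floor": 1},
]

same_doors = [s for s in model if s.get("type") == "D-30"]                 # same door type
offices_2  = [s for s in model if s.get("use") == "office" and s["floor"] == 2]
print([s["id"] for s in same_doors], [s["id"] for s in offices_2])         # ['d1', 'd2'] ['s1']
```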
Through the combination of standard symbol features (like their properties) and ad hoc, user-defined criteria (like the outline of a wing), one can process the representation at any relevant abstraction level and from multiple perspectives, always in direct reference to specific symbols. For example, it is possible to consider a specific beam in the context of its local function and connections to other elements but simultaneously with respect to the whole load-bearing structure in a single floor or the whole building. Any decision taken locally, specifically for this beam, relates transparently to either the instance or the type and may therefore lead not only to changes in the particular beam but also reconsideration of the beam types comprising the structure, e.g. a change of type for all similar beams. Conversely, any decision concerning the general type of the structure can be directly and automatically propagated to all of its members and their arrangement.
The automatic propagation of decisions relates to parametric modelling: the interconnection of symbol properties so that any modification to one symbol causes others to adapt accordingly (see Appendix II). In addition to what is built into the relations between types and instances or in behaviours like hosting, one can explicitly link instance properties, e.g. make several walls remain parallel to each other or vertical to another wall. One can also specify that the length of several walls is the same or a multiple of an explicitly defined parameter. Changing the value of the parameter leads to automatic modification of the length of all related walls. Parametric design holds significant promise. People have envisaged building representations in which it suffices to change a few values to produce a completely new design. However, establishing and maintaining the constraint propagation networks necessary for doing so in a reliable manner remains a major challenge. For the moment, parametric modelling is a clever way of grouping symbols with explicit reference to a relation underlying the grouping, e.g. parallelism of walls. Still, even in such simple cases, the effects of parametric relations in combination with built-in behaviours can lead to unpredictable or undesirable results.
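A deliberately small sketch of such a parametric relation is given below: several walls keep their length a multiple of one shared parameter, so changing the parameter modifies all of them at once. The class names are invented, and the propagation network is trivial compared to real parametric models.

```python
# A sketch of a simple parametric relation: wall lengths as multiples
# of one shared parameter.
class Parameter:
    def __init__(self, value):
        self.value = value

class ParametricWall:
    def __init__(self, parameter, multiple):
        self.parameter, self.multiple = parameter, multiple
    @property
    def length(self):                  # derived, never stored: always consistent
        return self.parameter.value * self.multiple

module = Parameter(1.2)
walls = [ParametricWall(module, 3), ParametricWall(module, 5)]
print([w.length for w in walls])       # [3.6, 6.0]

module.value = 1.5                     # one change to the parameter...
print([w.length for w in walls])       # ...propagates to all walls: [4.5, 7.5]
```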
In views that replicate conventional drawings, BIM software often also incorporates visual abstraction that mimics that of scales in analogue representations. By selecting e.g. “1:20” and “fine” one can make the visual display of a floor plan more detailed than with “1:200” and “coarse” (Figure 5). Such settings are useful only for visual inspection; they alter only the appearance of symbols, not their type or structure.
The LoD (standard on the specificity of information in a model) is also related to abstraction. Adherence to LoD standards in a representation is a throwback to analogue standards regarding drawing scale and stage, which runs contrary to the integration and compression of stages advocated by BIM theory. LoD standardization often fails to appreciate that information in a model has a reason and a purpose: some people have taken decisions concerning some part or aspect of a design. The specificity of these decisions and of the resulting representations is not accidental or conventional. Rather, it reflects what is needed for that part or aspect at the particular stage of a project. The LoD of the model that accommodates this information can only be variable, as not all parts or aspects receive the same attention at any stage.
Specificity should therefore be driven by the need for information rather than by convention. If information in a representation is at a higher specificity level, one should not discard it but simply abstract in a meaningful way by focusing on relevant properties, relations or symbols. A useful analogy is with how human vision works: in your peripheral vision, you perceive vague forms and movement, e.g. something approaching you rapidly. If you turn your eyes and pay attention to these forms, you can see their details and recognize e.g. a friend rushing to meet you. As soon as you focus on these forms, other parts of what you perceive become vague and schematic. In other words, the world is as detailed as it is; your visual system is what makes some of its parts more abstract or specific, depending on your needs. By the same token, the specificity of a building representation should be as high as the available information allows. Our need for information determines the abstraction level at which we consider the representation, as well as actions by which we can increase the specificity of some of its parts.
Implementation mechanisms in BIM
Despite its symbolic structure, BIM appears to use the same implementation mechanisms as CAD: the same geometric primitives that reproduce the graphic, isomorphic appearance of analogue representations. The key difference is that these primitives are just part of pictorial views, in which they express certain symbol properties. The type of a door, for example, is explicitly named, so that we do not have to infer its swing from the arc used to represent it in a floor plan; the width of a wall is a numerical property of its symbol, so that we do not have to measure the distance between the two lines indicating the outer faces of the wall. On the contrary, this distance is determined by the width property of the symbol. These and other properties are explicit in the model database, making the Unicode symbols in it and their bits and bytes the true implementation mechanisms of BIM. Unfortunately, as this database remains largely hidden from view, users may fail to appreciate its significance.
The above covers the paradigmatic dimension, allowing us to consider any graphic primitive in a drawing view a mere product of real symbols. As we have seen, however, implementation mechanisms used in the syntagmatic dimension still influence the structure of a building representation in other respects: a wall is still partly determined by drawing its axis and so by the geometric shape one draws, as well as by dependencies between this shape and others in the model or view. On the whole, therefore, one should consider BIM largely immune to undue influences from graphic implementation mechanisms but at the same time remain aware of persistent geometric biases both in how BIM treats building representations and in the mindset of BIM users.
Models as graphs
Being symbolic representations, models in BIM can be described by graphs that express their structure in terms of symbols and their relations. These graphs are similar but not the same as the building, adjacency or access graphs discussed in a previous chapter: those are graphs that describe the design rather than the representation. In the graphs that describe a BIM model, symbols are usually represented by vertices and relations between symbols by edges (Figures 6 & 7). The edges often make explicit what is implicit in the model, for example, that each window or door is hosted by a particular wall and that walls connect to each other in a specific manner, e.g. co-terminate at the ends.
The graph summarizes the basic structure of the model: the entities it comprises and their basic relations, including dependencies, e.g. between the shape and size of the room and the configuration of walls that bound it. These relations and their constraints underlie many behaviours in BIM, for example, that doors and windows tend to stick to the hosting walls and that walls try to retain their end-connections.
From an information perspective, this becomes even more interesting if we zoom in on the properties of the symbols and see how they are affected by relations and constraints, as in the case of a window and the wall that hosts it (Figure 8). Both elements are represented by discrete symbols, each with its own set of properties. The hosting relation means that some of the properties of the wall are inherited by the window. For example, the orientation of the window is by definition the same as the orientation of the wall. This constrains the behaviour of the window symbol when the user positions it in the model or modifies either the window or the wall.
Describing such dependencies is especially important for relations that may be missing from the orthographic views and modelling workflows of BIM software, for example between walls and floors (Figure 9). The vertical dimensions of walls are usually determined by a combination of wall symbol properties and constraints, model setup (levels) and the presence of symbols like floors to which the top or base of a wall can attach. Users have to manipulate the wall and floor dimensions and relations in multiple views, which can be summarized in this rather simple graph that affords the overview necessary for IM.
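Such a model graph can be sketched with a few labelled edges, as below. The symbols and relations are invented for illustration, but the principle is the one shown in the figures: vertices for the symbols of the model, edges for the relations that the modelling interface keeps largely implicit.

```python
# A sketch of a model graph: vertices for symbols, labelled edges for
# the relations between them.
import networkx as nx

M = nx.Graph()
M.add_edge("wall-1", "window-1", relation="hosts")
M.add_edge("wall-1", "room-A",   relation="bounds")
M.add_edge("wall-2", "room-A",   relation="bounds")
M.add_edge("wall-1", "floor-0",  relation="base attaches to")

# The overview missing from most modelling interfaces: everything that
# depends on wall-1 and would be affected by modifying it.
for neighbour in M.neighbors("wall-1"):
    print(neighbour, "--", M.edges["wall-1", neighbour]["relation"])
```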
Such graphs also reveal other relations that determine the compatibility of symbols in a relation (a type of parameterization that remains neglected). Wall and window width, for example, must be such that there is a technical solution for inserting the window in the wall: the width of a wall constrains the acceptability of window types. The same applies to length and height: assuming that the window comes in a standard size, the wall should be longer and higher (or at least equal) in order to accommodate it. The example seems trivial but dimensional incompatibilities of this kind are common in walls that combine multiple openings and different components.
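A compatibility constraint of this kind can be made explicit as a simple check, as in the sketch below. The property names and the rule that the frame depth must not exceed the wall width are plausible assumptions for illustration, not a standard; all numbers are invented.

```python
# A sketch of a compatibility check between a hosted window and its wall:
# the window must fit within the wall's length and height, and the wall
# must be wide enough to admit a technical solution for the frame.
def window_fits_wall(window, wall):
    return (window["width"]  <= wall["length"] and
            window["height"] <= wall["height"] and
            window["frame_depth"] <= wall["width"])

wall   = {"length": 5.0, "height": 2.7, "width": 0.30}
window = {"width": 1.2, "height": 1.4, "frame_depth": 0.25}
print(window_fits_wall(window, wall))  # True: the combination is acceptable
```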
Understanding the building representation in terms of symbols, relations and constraints is key to both parameterization and decision making. Therefore, it becomes a main task for IM. In addition to using graphs to describe the structure of a model, the model should be set up in a way that transparently expresses all dependencies and safeguards them effectively and consistently in all workflows. This concerns relations between symbols in the representation, as well as with external constraints, such as planning regulations, building codes and briefs. For example, modellers should make explicit the maximum height allowed by the planning regulations for a particular design and connect it to the relevant symbols and properties, e.g. the position of the roof. By doing this at the outset of a design project, they ensure that designers are aware of the constraints within which they work, e.g. that they are not allowed to place roofs or other elements higher than permitted. As will be discussed in following chapters, such feedforward that guides design is preferable to feedback, i.e. allowing designers to generate solutions that are then tested according to regulations they may not have taken into account.
Key Takeaways
• BIM is a truly symbolic building representation that employs discrete symbols to describe building elements and spaces
• Symbols in BIM integrate all properties of the symbolized entities, which determine their pictorial appearance
• BIM symbols are largely independent of graphic implementation mechanisms and immune to most geometric biases
• The correspondence between BIM symbols and some building elements is problematic in certain respects due to the structure of these elements, persisting geometric biases and human perception
• Abstraction in BIM is both typological (as symbols are at various abstraction levels) and mnemonic (based on similarity of properties and relations like proximity and hosting between symbols)
• Models in BIM can be described by graphs of symbols and relations; these graphs afford the overview and transparency missing from BIM software interfaces
Exercises
1. In a BIM editor of your choice (e.g. Revit), make an inventory of all wall types (Families in Revit) in the supplied library. Classify these types in terms of abstraction, clearly specifying your criteria.
2. In a BIM editor of your choice, make a simple design of a space with four walls and two floors around it. Identify properties of the building elements and space symbols that connect them (e.g. dimensions) and overlapping properties (e.g. space properties that refer to finishings of the building elements).
1. Make schedules and graphs that illustrate your findings.
2. Compare the schedules and graphs.
3. Expand your design with another space and a door that connects them. Make a schedule and a graph that illustrate the key relations between the spaces.
4. In the expanded design, describe step by step how a change in the size of one room is propagated to other symbols in the model.
1. A comprehensive general introduction to BIM is: Eastman, C., Teicholz, P.M., Sacks, R., & Lee, G., 2018. BIM handbook (3rd ed.). Hoboken NJ: Wiley.
The previous parts have presented the tandem of digitization and information, and explained the structure of representations, in particular of the symbolic ones that populate digital environments. Now we move to the content of these representations: the data and information they accommodate. The combination of structure and content is the foundation of building information management. It sounds straightforward but is plagued by inadequate definitions and outdated approaches that keep information management vague, labour-intensive and inefficient. Consequently, the main goal of this part is to separate the wheat from the chaff and establish principles for effective, operational approaches to building information.
3.02: Data and information
What constitutes data and information is a fundamental question that attracts much interest and invites numerous definitions. This chapter introduces definitions suitable to the symbolic representations discussed in the previous part, towards a transparent basis for information management.
Theories and definitions
There is nothing more practical than a good theory: it supplies the definitions people need in order to agree what to do, how and why; it explains the world, providing new perspectives from which we can view and understand it; it establishes targets for researchers keen to improve or refute the theory and so advance science and knowledge. In our case, there is a clear need for good, transparent and operational definitions. Terms like ‘information’ and ‘data’ are used too loosely, interchangeably and variably to remove ambiguities in information processing and management. Management, computing and related disciplines abound with rather too easy, relational definitions of data, information, knowledge, strategy etc., e.g. that data interpreted become information, information understood turns into knowledge and so forth. Such definitions tend to underestimate the complexity of cognitive processes and are therefore not to be trusted. Even methodically sound studies, involving large numbers of leading scholars, can do little to elucidate the meaning and usage of these terms.[1] Arguably, asking for succinct, all-encompassing definitions abstracts the context from which the definitions derive and renders them too axiomatic or too vague.
A theory that resolves these problems cannot draw from the AECO domains only. It needs a firm foundation in general theories of information, especially those that take the potential and peculiarities of digital means into account. Thankfully, there are enough candidates for this.
Syntactic, semantic and pragmatic theories
When one thinks of information theory in a computing context, Shannon’s mathematical theory of communication (MTC) springs to mind.[2] The MTC is indeed foundational and preeminent among formal theories of information. It addresses what has been visualized as the innermost circle in information theory (Figure 1):[3] the syntactic core of information, dealing with the structure and basic, essential aspects of information, including matters of probability, transmission flows and capacities of communication facilities — the subjects of the technical side of information theory.
The outermost circle in the same visualization is occupied by pragmatics: real-life usage of meaningful information. IM theories (discussed in the next chapter) populate this circle, providing a general operational framework for supporting and controlling information quality and flow. To apply this framework, one requires pragmatic constraints and priorities from application areas. For example, a notary and a facility manager have different interests with regard to the same building information.
Between the syntactic and the pragmatic lies the intermediate circle of semantics, which deals with how meaning is added to the syntactical components of information before they are utilized in real life. As syntactic approaches are of limited help with the content of information and its interpretation, establishing a basis for IM requires that we turn to semantic theories of information.
Arguably the most appealing of these is that of Luciano Floridi, who is credited with establishing the subject of philosophy of information. The value of his theory goes beyond his position as a modern authority on the subject. The central role of semantics in his work is an essential contribution to the development of much-needed theoretical principles in a world inundated with rapidly changing digital technologies. In our case, it promises a clear and coherent basis for understanding AECO information and establishing parsimonious structures that link different kinds of information and data. These structures simplify IM in a meaningful and relevant manner: they allow us to shift attention from how one should manage information (the technical and operational sides) to which information and why.
In this book, we focus on data, information and their relation in the operational context of digital building representations. Utilization of information and resulting benefits for individuals, enterprises, disciplines or societies are subjects that require extensive analyses well beyond the scope of the present book. Information certainly contributes to achieving these benefits; in many cases it may even be a prerequisite but seldom suffices by itself. Rather than making unfounded claims about knowledge and performance, we focus on more modest goals concerning IM: understanding building information, its quality and flows, and organizing them in ways that may help AECO take informed decisions, in the hope that informed also means better.
A semantic theory for building information
Data and information instances
A fundamental definition in Floridi’s theory[4] concerns the relation between data and information: an instance of information consists of one or more data which are well-formed and meaningful. Data are defined as lacks of uniformity in what we perceive at a given moment or between different states of a percept or between two symbols in a percept. For example, if a coffee stain appears on a floor plan drawing on paper (Figure 3), this is certainly a lack of uniformity with the previous, pristine state of the drawing (Figure 2) but it is neither well-formed nor meaningful within the context of architectural representations. It tells us nothing about the representation or the represented design, except that someone has been rather careless with the drawing (the physical carrier of the representation).
On the other hand, if the lack of uniformity between the two states is a new straight line segment across a room in a floor plan (Figure 4), this is both well-formed (as a line in a line drawing) and meaningful (indicating a change in the design, possibly that the room has now a split-level floor).
Data and information types
The typology of data is a key component in Floridi’s approach. Data can be:
• Primary, like the name and birth date of a person in a database, or the light emitted by an indicator lamp to show that a radio receiver is on.
• Anti-data,[5] i.e. the absence of primary data, like the failure of an indicator lamp to emit light or the silence that follows switching the radio on. Anti-data are informative: they tell us that e.g. the radio or the indicator lamp is defective.
• Derivative: data produced by other, typically primary data, which can therefore serve as indirect indications of the primary ones, such as a series of transactions with a particular credit card as an indication of the trail of its owner.
• Operational: data about the operations of the whole system, like a lamp that indicates whether other indicator lamps are malfunctioning.
• Metadata: indications about the nature of the information system, like the geographic coordinates that tell where a digital photograph has been taken.
These types also apply to information instances, depending on the type of data they contain: an information instance containing metadata is meta-information.
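To make the typology operational, data can be tagged with their semantic type, for instance as in this illustrative Python encoding (neither Floridi’s theory nor any BIM schema prescribes such tags):

from enum import Enum, auto

class SemanticType(Enum):
    PRIMARY = auto()      # e.g. the axis of a wall
    ANTI_DATA = auto()    # e.g. a missing fire rating where one is expected
    DERIVATIVE = auto()   # e.g. a floor area computed from room dimensions
    OPERATIONAL = auto()  # e.g. the scale setting of a drawing view
    METADATA = auto()     # e.g. a 'floor plan' label on a view

# A tagged datum: (property name, value, semantic type)
datum = ("room_area", 23.4, SemanticType.DERIVATIVE)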
In the context of analogue building representations like floor plans (Figure 5), lines denoting building elements are primary data. They describe the shape of these elements, their position and the materials they comprise.
In addition to geometric primary data, an analogue floor plan may contain alphanumeric primary data, such as labels indicating the function of a room or dimension lines (Figure 6). A basic principle in hand drawing is that such explicitly specified dimensions take precedence over measurements in the drawing because amending the dimensions is easier than having to redraw the building elements.
Anti-data are rather tricky to identify in the typically abstract and elliptical analogue building representations. Quite often it is hard to know if something is missing. One should therefore consider absence as anti-data chiefly when absence runs contrary to expectation and is therefore directly informative: a door missing from the perimeter of a room indicates either a design mistake or that the room is inaccessible (e.g. a shaft). Similarly, a missing room label indicates either that the room has no specific function or that the draughtsperson has forgotten to include it in the floor plan (Figure 7).
Derivative data in building representations generally refer to the abundance of measurements, tables and other data produced from primary data in the representation, such as floor area labels in a floor plan (Figure 8). One can recognize derivative data from the fact that they can be omitted from the representation without reducing its completeness or specificity: derivative data like the area of a room can easily be reproduced when necessary from primary data (the room dimensions). An important point is that one should always keep in mind the conventions of analogue representations, like the precedence of dimension lines over measurements in the drawing, which turns the former into primary data.
Operational data reveal the structure of the building representation and explain how data should be interpreted. Examples include graphic scale bars and north arrows, which indicate respectively the true size of units measured in the representation and the true orientation of shapes in the design (Figure 9).
Finally, metadata describe the nature of the representation, such as the projection type and the design project or building, e.g. labels like ‘floor plan’ (Figure 10).
BIM, information and data
Data types in BIM
As we have seen in previous chapters, computerization does not just reproduce analogue building representations. Digital representations may mimic their analogue counterparts in appearance but can be quite different in structure. This becomes evident when we examine the data types they contain. Looking at a BIM editor on a computer screen, one cannot help observing a striking shift in primary and derivative data (Figure 11 & 12): most graphic elements in views like floor plans are derived from properties of symbols. In contrast to analogue drawings, dimension lines and their values in BIM are derivative, pure annotations like floor area calculations in a space. This is understandable: the ease with which one can modify a digital representation renders analogue practices of refraining from applying changes to a drawing meaningless.
Less intuitive is that even the lines denoting the various materials of a building element are derivative, determined by the type of the symbol: if the type of a wall changes, then all these graphic elements change accordingly. In analogue representations the opposite applies: we infer the wall type from the graphic elements that describe it in terms of layers of materials and other components.
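The derivative character of these graphics can be expressed directly: the lines in a wall section are regenerated from the type definition, never stored with the wall. The type names and layer thicknesses below are assumed for the example:

WALL_TYPES = {
    # type name -> layer thicknesses in metres (primary data of the type)
    "brick-cavity": [0.10, 0.05, 0.10],
    "concrete":     [0.20],
}

def section_lines(wall_type):
    """Derive the offsets of the layer boundary lines from the wall type.

    Changing the type of a wall regenerates all these graphics; nothing
    graphic needs to be stored with the wall itself.
    """
    offsets, x = [0.0], 0.0
    for thickness in WALL_TYPES[wall_type]:
        x += thickness
        offsets.append(round(x, 3))
    return offsets

print(section_lines("brick-cavity"))  # [0.0, 0.1, 0.15, 0.25]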
The main exception to this shift is the geometry of symbols. As described in the previous chapter, when one enters e.g. a wall in BIM, the usual workflow is to first choose the type of the wall and then draw its axis in a geometric view like a floor plan. Similarly, modifications to the location or shape of the wall are made by changing the same axis, while other properties, like layer composition and material properties of each layer, can only be changed in the definition of the wall type. One can also change the axis by typing new coordinates in some window but in most BIM editors the usual procedure is interactive modification of the drawn axis with a pointer device like a mouse. Consequently, primary data appear dispersed over a number of views and windows, including ones that chiefly contain derivative data.
One should not be confused by the possibilities offered by computer programs, especially for the modification of entities in a model. The interfaces of these programs are rich with facilities for interacting with shapes and values. It seems as if programmers have taken the trouble to allow users to utilize practically everything for this purpose. For example, one may be able to change the length of a wall by typing a new value for its dimension line, i.e. via derivative data. Such redundancy of entry points is highly prized in human-computer interaction but may be confusing for IM, as it tends to obscure the type of data and the location where each type can be found. To reduce confusion and hence the risk of mistakes and misunderstandings, one should consider the character of each view or window and how necessary it is for defining an entity in a model. A schedule, for example, is chiefly meant for displaying derivative data, such as area or volume calculations, but may also contain primary data for reasons of overview, transparency or legibility. Most schedules are not necessary for entering entities in a model, in contrast to a window containing the properties of a symbol, from where one chooses the type of the entity to be entered. In managing the primary data of a symbol one should therefore focus on the property window and its contents.
Computer interfaces also include more operational data, through which users can interact with the software. Part of this interaction concerns how other data are processed, including in terms of appearance, as with the scale and resolution settings in drawing views mentioned in the previous chapter (Figure 13).
The presence of multiple windows on the screen also increases the number of visible metadata, such as window headers that describe the view in each window (Figure 14).
Anti-data remain difficult to distinguish from data missing due to abstraction or deferment. The lack of values for e.g. cost or fire rating for some building elements may merely indicate that their calculation has yet to take place, despite the availability of the necessary primary data. After all, both are calculated on the basis of materials present in the elements: if these materials are known, cost and fire ratings are easy to derive. One should remember this inherent duality in anti-data: they do not only indicate missing primary data but the presence of anti-data is significant and meaningful by itself. For example, not knowing the materials and finishes of a window frame, although the window symbol is quite detailed, signifies that the interfacing of the window to a wall is a non-trivial problem that remains to be solved. Interfacing typically produces anti-data, especially when sub-models meet in BIM, e.g. when the MEP and architectural sub-models are integrated, and the fastenings of pipes and cables to walls are present in neither. Anti-data generally necessitate action: no value (or “none”) for the demolition phase of an entity suggests that the entity has to be preserved during all demolition phases — not ignored but actively preserved with purposeful measures, which should be made explicit (Figure 15).
Information instances in BIM
Knowing the type of data in BIM is a prerequisite for identifying information as it emerges in a model. The next step is to recognize it in the interfaces of the software. As described in the previous section, data are to be found in the symbols: their properties and relations. In the various views and windows of BIM software, one can easily find the properties of each symbol, either of the instance (Figure 16 & 18) or of the type (Figure 17). What one sees in most views and windows is a mix of different data types, with derivative data like a volume calculation or thermal resistance next to primary data, such as the length and thickness of a wall. Moreover, no view or window contains a comprehensive collection of properties. As a result, when a property changes in one view, the change is reflected in several other parts of the interface that accommodate the same property or data derived from it.
Any lack of uniformity in these properties, including the addition of new symbols to a model, qualifies as data. One can restrict the identification of data to each view separately but it makes more sense for IM to include all clones of the same property, in any view. Any derivative data that are automatically produced or modified as a result of the primary data changes count as different data instances. So, any change in the shape of a space counts as a single data instance, regardless of the view in which the user applies the change or of how many views display the change. The ensuing change in the space area value counts as a second instance of data; the change in the space volume as a third.
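This way of counting can be made concrete in a small sketch, in which one primary change produces a logged trail of derivative instances (all names and values are illustrative):

def resize_space(space, new_length, new_width, log):
    """Apply one primary change and log the data instances it produces."""
    space["length"], space["width"] = new_length, new_width
    log.append(("shape", "primary"))       # first instance: the change itself

    space["area"] = space["length"] * space["width"]
    log.append(("area", "derivative"))     # second instance

    space["volume"] = space["area"] * space["height"]
    log.append(("volume", "derivative"))   # third instance

space = {"length": 5.0, "width": 4.0, "height": 2.7}
log = []
resize_space(space, 6.0, 4.0, log)
print(log)  # [('shape', 'primary'), ('area', 'derivative'), ('volume', 'derivative')]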
Relations between symbols are even more dispersed and often tacit. They can be found hidden in symbol behaviours (e.g. in that windows, doors or wash basins tend to stick to walls, or in that walls tend to retain their co-termination), in explicit parametric rules and constraints, as well as in properties like construction time labels that determine incidental grouping. Discerning lacks of uniformity in relations is therefore often hard, especially because most derive variably from changes in the symbols. For example, modifying the length of a wall may inadvertently cause its co-termination with another wall to be removed or, if the co-termination is retained, change the angle between the walls. Many relations can be made explicit and controllable through appropriate views like schedules. As we have seen, window and door schedules make explicit the relations between openings and spaces. This extends to relations between properties of windows or doors and those of the adjacent spaces, e.g. connecting the fire rating of a door to whether a space on either side is part of a main fire egress route, or the acoustic isolation offered by the door to the noise or privacy level of activities accommodated in either adjacent space.
Information instances can be categorized by the type of their data: primary, derivative, operational etc. Type is important for IM because it allows us, firstly, to prioritize in terms of significance and, secondly, to link information to actors and stakeholders concerning authorship and custodianship. Primary information obviously carries a higher priority than derivative. Moreover, primary information (e.g. the shape of spaces) is produced or maintained by specific actors (e.g. designers), preferably with no interference by others who work with information derived from it (e.g. fire engineers). Information instances concerning space shape are passed on from the designers to the fire engineers, whose observations or recommendations are fed back to the designers, who then initiate possible further actions and produce new data. Understanding these flows and the information types they convey, and transparently linking instances to each other and to actors or stakeholders, is essential for IM.
Another categorization of information instances concerns scope. This leads to two fundamental categories:
1. Instances comprising one or more properties or relations of a single symbol: the data are produced when one enters the symbol in the representation or when the symbol is modified, either interactively by a user or automatically, e.g. on the basis of a built-in behaviour, parametric rule etc. Instances of this category are basic and homogeneous: they refer to a single entity of a particular kind, e.g. a door. The entity can be:
1. Generic in type, like an abstract internal door
2. Contextually specific, such as a door for a particular wall in the design, i.e. partially defined by relations in the representation
3. Specific in type, e.g. a specific model of a particular manufacturer, fixed in all its properties
2. Instances comprising one or more properties or relations of multiple symbols, added or modified together, e.g. following a change of type for a number of internal walls, or a resizing of the building elements bounding a particular space. Consequently, instances of this category can be:
1. Homogeneous, comprising symbols of the same type, e.g. all office spaces in a building
2. Heterogeneous, comprising symbols of various types, usually related to each other in direct, contextual ways, e.g. the spaces and doors of a particular wing that make up a fire egress route
These categories account for all data and abstraction levels in a representation, from sub-symbols (like the modification of the geometry of a door handle in the definition of a door type) to a change in the height of a floor level, which affects the location of all building elements and spaces on that floor, the size and composition of some (e.g. stairs) and potentially also relations to entities on adjacent floors. Understanding the scope of information is essential for IM: it determines the extent to which any information instance or change should be propagated to ensure consistency and coherence.
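A compact record for information instances might therefore combine symbols, semantic type and scope, for example as below. This is a hypothetical notation (it anticipates the notation used in the exercises at the end of this chapter); the 'kind:id' labels are an assumption of the example:

from dataclasses import dataclass
from typing import List

@dataclass
class InformationInstance:
    symbols: List[str]   # one symbol -> first category; several -> second
    prop: str            # the property or relation concerned
    value: object
    semantic_type: str   # 'primary', 'derivative', 'anti-data', ...
    timestamp: str

    @property
    def scope(self) -> str:
        if len(self.symbols) == 1:
            return "single-symbol"
        kinds = {s.split(":")[0] for s in self.symbols}
        return "multi-homogeneous" if len(kinds) == 1 else "multi-heterogeneous"

inst = InformationInstance(["wall:W12", "wall:W13"], "type", "brick-cavity",
                           "primary", "2024-04-01T10:00")
print(inst.scope)  # multi-homogeneous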
Symbols and their properties in context
So far we have considered the semantic data types of symbol properties in isolation, as if each symbol were a separate entity rather than incorporated in a representation. However, in the symbol graphs discussed in a previous chapter, we have seen that relations in a model profoundly affect the properties of each symbol. Parameterization adds to the number and complexity of such relations but even without parameterization there are many primary properties that become derivative in the context of a representation due to common, often implicit relations.
In the example of a window and the wall that hosts it, some properties of the window, such as orientation, are inherited from the corresponding properties of the hosting element (Figure 19). These relations therefore affect the semantic data type of symbol properties. Both the window and the wall in this example are each represented by a discrete symbol with its own properties. Most of these properties are primary data, i.e. essential for the identity of each symbol: length, height, width, material composition etc. BIM software routinely also adds properties that are derivative, i.e. products of functions on primary properties, such as area and volume but also fire rating and cost. Orientation is another derivative property that in a straight wall can be calculated from the relative position of the endpoints of the wall axis. This calculation applies to the wall but is not required for the window, which by definition inherits orientation from the wall, as does any other hosted element. One could argue that other properties of the window, notably its dimensions, remain primary in spite of the hosting relation but the fact that their values must be in a range determined by the wall properties also makes them derivative, only not in the strict sense of equality that applies to orientation. They remain the same as in the unattached window so long as they do not cause any interfacing problems with the wall but, when this happens, it becomes clear that the width of the window is linked to that of the hosting wall.
Similar derivation of dimensions on the basis of relations also applies to non-hosted elements. For example, the height of a wall is normally constrained by the position of the floor above and the floor underneath the wall: the wall height is derived from the difference in vertical level between the two floors that bound it (Figure 20).
This relation seems straightforward but BIM software makes it more complicated in a way that reveals the intricate chain behind any relation we isolate by way of example. A wall in BIM may be constrained not by floor symbols but by levels: reference planes in the model setup. The wall in Figure 21 has its base on Level 1 (which also determines the position of a floor symbol) but its top is determined by a default value of the type, as indicated in the properties palette. The wall appears to connect to the floor underneath it but in fact the position of both is determined by the same level.
On the other hand, the top of the wall in Figure 22 is determined by Level 2, which also constrains the position of another floor symbol. As the properties palette reveals, this wall is moreover attached at the top. This means that if the floor above the wall is moved to another height, the wall tries to remain connected to it. If the floor below is moved, the wall sticks to the level, losing contact with the floor. If the base were also attached, then the wall would be fully constrained, as in Figure 20.
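The resolution of these vertical constraints can be approximated by a simple rule: each end of the wall follows an attached floor if there is one, otherwise its level, and an unconstrained top falls back to the default height of the type. The following sketch is a deliberate simplification of what BIM editors actually do:

def wall_height(base_level, top_level=None,
                base_attached_to=None, top_attached_to=None,
                default_height=2.7):
    """Resolve the vertical extent of a wall from levels and attachments."""
    base = base_attached_to if base_attached_to is not None else base_level
    if top_attached_to is not None:
        top = top_attached_to          # attached: sticks to the floor above
    elif top_level is not None:
        top = top_level                # constrained by a level
    else:
        top = base + default_height    # unconstrained: type default
    return top - base

print(wall_height(0.0))                      # 2.7 (default of the type)
print(wall_height(0.0, top_level=3.0))       # 3.0 (bound to a level)
print(wall_height(0.0, top_level=3.0,
                  top_attached_to=3.2))      # 3.2 (attached to the floor)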
The above examples demonstrate that the semantic type of each property is often affected by constraints external to the symbols. The width of a wall, for instance, can be determined by its composition out of various layers of different materials, each with its own thickness. This makes wall width derivative and creates some dimensional and technical tolerances, as e.g. a wall can be made thinner by replacing an insulation layer with thinner, better material, without changing the wall’s thermal performance. On the other hand, wall width can also be fixed by external constraints, e.g. for reasons of standardization. This makes wall width primary, while the material composition of the wall (the material layers and their thickness) becomes derivative from the fixed wall width and requirements on e.g. thermal or acoustic performance.
Some of the most important external constraints come from planning regulations. These often determine large parts of a design, e.g. the position of external walls by a setback from the plot boundaries. This means that the footprint of the building is derived from the plot shape and dimensions minus the setbacks. Similarly, most Dutch planning regulations impose a setback from the ends of the roof for dormer windows, e.g. 100 cm from the bottom and side ends, and 50 cm from the top (Figure 23). Consequently, the dimensions of the dormer are derived from those of the roof, which in turn derive from the building footprint, and external constraints, including on the roof pitch (also determined by planning regulations, either by a fixed value, such as 30 degrees, or a bandwidth, e.g. 25–40 degrees). In short, a building representation is based on such networks of relations and constraints, making many primary properties dependent on others and therefore derivative.
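For the dormer, the derivation chain can be written out directly. The setbacks are the values cited above (100 cm at the bottom and sides, 50 cm at the top); the roof dimensions are assumed for the example:

def max_dormer_size(roof_slope_width, roof_length,
                    setback_bottom=1.0, setback_top=0.5, setback_side=1.0):
    """Maximum dormer dimensions (m) derived from the roof plane and setbacks."""
    max_height = roof_slope_width - setback_bottom - setback_top
    max_width = roof_length - 2 * setback_side
    return max_width, max_height

# An assumed roof plane: 6 m along the slope, 9 m long
print(max_dormer_size(6.0, 9.0))  # (7.0, 4.5)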
The conclusions that can be drawn from the above are:
1. The semantic type is sensitive to the context: what in an isolated symbol is a primary property may become derivative in a representation where the symbol connects to others.
2. These others include symbols in the same representation, as well as external information entities, such as constraints from standards or planning regulations. For IM purposes, these too should be explicitly included in the representation.
Key Takeaways
• An information instance consists of one or more data which are well-formed and meaningful
• Data are lacks of uniformity in what we perceive at a given moment or between different states of a percept or between two symbols in a percept
• Data can be primary, anti-data, derivative, operational or metadata
• There are significant differences between analogue and digital building representations concerning data types, with symbols like dimension lines being primary in the one and derivative in the other
• In BIM, lacks of uniformity can be identified in the properties and relations of symbols
• Information instances can be categorized by the semantic type of their data and by their scope in the representation
• Semantic type depends on the context, which may turn primary data into derivative
Exercises
1. Identify the semantic data types in the infobox of a Wikipedia biographic lemma (the summary panel on the top right), e.g. https://en.Wikipedia.org/wiki/Aldo_van_Eyck (Figure 19),[6] and in the basic page information of the same lemma (e.g. https://en.Wikipedia.org/w/index.php?title=Aldo_van_Eyck&action=info)
2. Explain the information instances produced in BIM when one inserts a door in an existing wall. Use the following notation:
(scope; symbol; name of property or relation; value of property or relation; time; semantic data type)
If the instances concern multiple symbols, use the notation to describe each symbol separately.
3. Explain the information instances produced in BIM when one moves an existing door to a slightly different position in an existing wall. Use the above notation for each concerned symbol separately.
4. In BIM it is claimed that one can add information dimensions to the three geometric dimensions, turning 3D into nD: 4D comes with the addition of time (e.g. when the symbolized entity is constructed), 5D with the addition of cost, 6D with sustainability, 7D with facility management, 8D with accident prevention (or safety) etc. However, for something to qualify as a dimension, it should be primary and not derivative, otherwise area and volume would be dimensions, too.[7]
Describe how the values of these four dimensions emerge and change throughout the lifecycle of a building element or component, such as a door, window, floor, ceiling etc., and which primary or derivative information attracts attention in various stages and activities after development (procurement, transport, realization, maintenance, refurbishment, renovation, demolition etc.). Present your results in a table.
5. IFC (Industry Foundation Classes) is a standard underlying BIM, in particular concerning how each entity is represented. Identify the semantic data types in the IFC wall base quantities, i.e. quantities that are common to the definition of all occurrences of walls (http://www.buildingsmart.org/ifc/dev/IFC4_3/RC2/html/schema/ifcsharedbldgelements/qset/qto_wallbasequantities.htm). Pay particular attention to derivative quantities present in the specification. If each of the quantities becomes a symbol property in BIM, calculate how much of a typical model consists of derivative data, both in percentage and megabytes (assuming that what holds for walls also holds for all entities in BIM).
1. Zins, C., 2007. Conceptual approaches for defining data, information, and knowledge. Journal of the American Society for Information Science and Technology. 58(4) 479-493 DOI: 10.1002/asi.20508
2. There are several fundamental sources on the MTC, starting with the original publication: Shannon, C., 1948. A mathematical theory of communication. Bell System Technical Journal, 27(July, October), 379-423, 623-656; Shannon, C.E., & Weaver, W., 1998. The mathematical theory of communication. Urbana IL: University of Illinois Press; Cover, T.M., & Thomas, J.A., 2006. Elements of information theory (2nd ed.). Hoboken NJ: Wiley-Interscience; Pierce, J.R., 1980. An introduction to information theory: symbols, signals & noise (2nd, rev. ed.). New York: Dover.
3. The classification of theories of information is after: Sommaruga, G., 2009. Introduction. G. Sommaruga (ed), Formal Theories of Information: From Shannon to semantic information theory and general concepts of information. Berlin, Heidelberg: Springer.
4. Floridi’s theory has been published in: Floridi, L., 2008. Trends in the philosophy of information. P. Adriaans & J. v. Benthem (eds), Philosophy of information. Amsterdam: North-Holland; Floridi, L., 2009. Philosophical conceptions of information. G. Sommaruga (ed), Formal Theories of Information: From Shannon to semantic information theory and general concepts of information. Berlin, Heidelberg: Springer; Floridi, L., 2016. Semantic conceptions of information. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/information-semantic/
5. In later publications Floridi has used the term secondary data instead of anti-data but the new name seems rather confusing, suggesting data of a lesser importance rather than the converse of primary data.
6. Source: https://en.Wikipedia.org/wiki/Aldo_van_Eyck; photograph credit: Aldo van Eyck in 1970 by Bert Verhoef, licensed under CC BY-SA 3.0 NL
7. Koutamanis, A., 2020. Dimensionality in BIM: Why BIM cannot have more than four dimensions? Automation in Construction, 114, 103153, https://doi.org/10.1016/j.autcon.2020.103153.
This chapter introduces the general goals of information management and connects them to building representations, semantic types and AECO processes in order to distill the main goals of building information management.
The need for information management
With the information explosion we have been experiencing, it is hardly surprising that IM seems to have become a self-evident technical necessity. Handling the astounding amounts of information produced and disseminated every day requires more robust and efficient approaches than ever. Nevertheless, IM is considered mostly as a means to an end, usually performance in a project or enterprise: with effective IM, one can improve the chances of higher performance. Consequently, IM usually forms a key component of overall management.
This is widely acknowledged in building design management (DM). Even before the digital era, the evident dependence of AECO on information that came from various sources and concerned different but interconnected aspects of a building had led to general agreement that this information and the way it is handled can be critical for communication and decision making. DM often focuses on information completeness, relevance, clarity, accuracy, quality, value, timeliness etc., as prerequisites to enabling greater productivity, improving risk management, reducing errors and generally raising efficiency and reliability. The dependence on information is such that some even go so far as to suggest that DM is really fundamentally about IM: managing information flows so that stakeholders receive the right information at the right time.[1]
In practical terms, however, there is little clarity concerning what should be managed and how. DM sources often merely affirm that information is important and should be treated with care. What makes information usable, valuable, relevant etc. is assumed to be known tacitly. Information is vaguely defined as data in usable form but is also equated to the thousands of drawings and other documents produced during the lifecycle of a building — the carriers of information. If the right document is present, then it is assumed that stakeholders also possess the right information and are directly capable of judging the veracity, completeness, coherence etc. of what they receive. However, equating information with documents not only prolongs outdated analogue practices, it also places a heavy burden on users.
It is arguably typical of AECO and DM that, in the face of operational and especially technical complexity, they invest heavily in human resources. This goes beyond the interpretation of documents in order to extract information; it also extends to the invention of new roles that assume a mix of new and old tasks and responsibilities. So, in addition to project and process managers, one encounters information managers as well as BIM managers and CAD managers, BIM coordinators and CAD coordinators, working together in complex, overlapping hierarchies. These new roles are usually justified by the need for support with new technologies, which may still be unfamiliar to the usual participants in an AECO project. This, however, increases the distance between new technologies and their real users, limiting learning opportunities and prolonging the treatment of technologies as new and unfamiliar (something that contrasts sharply with what we do in our private encounters with new technologies, as discussed in the section on digitization). Even worse, all these roles increase complexity and reduce transparency by adding more intermediaries in the already multi-layered structure of AECO.
New roles are inevitable with technological innovation. Sometimes they are temporary and sometimes permanent. In the early days of motorcars, for example, chauffeurs were more widely employed to drive them than today. On the other hand, webmasters became necessary with the invention and popularity of the World Wide Web and remain so for the foreseeable future, despite the growing web literacy among general users. What matters is that any such new roles should be part of a sound and thorough plan of approach rather than an easy alternative to a good approach. A good plan should determine what is needed and why, allowing for increasing familiarity and even proficiency of many users with various technologies, to a degree that, after some point, they might require little day-to-day support. In our case, one may safely expect that AECO professionals will eventually become quite capable not only of using BIM directly but also of coordinating their BIM activities, with little need for technical intermediaries. To achieve this, AECO needs practical familiarization with the new technologies but above all clear comprehension of what these technologies do with information. Based on that, one can develop a sound IM approach that takes into consideration both domain needs and the capacities of digital technologies in order to determine changes in the tasks, responsibilities and procedures of existing AECO roles, and develop profiles for any additional roles.
Information sources
Inclusiveness
IM is both an activity and a well-defined discipline of professionals who support this activity. The discipline of IM has a broad scope and, as a result, is quite inclusive.[2] It pays no attention to issues of representation and accepts as information sources all kinds of documents, applications, services and schemes. This is due to three reasons. Firstly, IM covers many application areas and must therefore be adaptable to practices encountered in any of them. Secondly, in many areas there is a mix of analogue and digital information, as well as various channels. For example, financial client transactions with a shop can involve cash and debit or credit cards, either physically or via the web. IM provides tools for bringing such disparate material together into more coherent forms, ensuring that no outdated or inappropriate information is used and preventing information from going missing, becoming inaccessible or being deleted by error. These tools include correlation with contexts (e.g. time series displays relative to other data), classification and condensation (aggregation, totalling, filtering and summarization). Thirdly, IM has a tenuous relation to computerization, often relying on it but also appearing wary of putting too much emphasis on digital technologies as a general solution.
The inclusiveness of IM with respect to information sources means that it may end up not only tolerating the redundancy of analogue and digital versions of the same information but also supporting outdated practices and conventions, even prolonging their life through superficial digitization, on the assumption that the application area wants it. This reduces IM to mere document management, i.e. making sure that the necessary documents are retained and kept available. Such inclusiveness is arguably an easy way out of most domain problems. At present, there may be enough computer power and capacity to store and retrieve any document produced in a project or enterprise — in our case, throughout the whole lifecycle of a building. However, the information explosion of the digital era and big data approaches suggest the opposite: we already need more intelligent solutions than brute force. Can we upscale the haphazard, inclusive recording of the history of a building to all buildings in the world? At this moment, we may have the illusion that we still have control over the huge amounts of information in production and circulation but this is because AECO currently approaches information with respect to the limited demands of normative practices. Beyond these demands, there is already too much information that is ignored, neglected and even discarded. Moreover, new developments like the IoT could change the overall picture soon, as smart things start communicating with each other with great intensity. For AECO this can be quite critical because buildings are among the prime candidates for accommodating a wide range of sensors and actuators, e.g. for controlling energy consumption, ensuring security or regulating air quality to prevent the spread of epidemics.
Structured, semi-structured and unstructured information
BIM is important for IM because it marks a transition not only to symbolic representation but also to holistic, structured information solutions for AECO. With regard to structure, there are three main data categories:
• Unstructured data are the subject of big data approaches: sensor measurements, social media messages and other data without a uniform, standardized format. Finding relevant information in unstructured data is quite demanding because queries have to take into account a broad range of locations where meaningful data may reside and a wide variety of storage forms (including natural language and images).
• Semi-structured data are a favourite of IM: information sources with a loosely defined structure and flexible use. Analogue drawings are a typical example: one knows what is expected in e.g. a section but there are several alternative notations and few if any prohibitions concerning what may be depicted and how. IM thrives on semi-structured sources, adding metadata, extracting and condensing, so as to summarize relevant information into a structured overview.
• Structured data are found in sources where one knows precisely what is expected and where. Databases are prime examples of structured information sources. In a relational database, one knows that each table describes a particular class of entities, that each record in a table describes a single entity and that each field describes a particular property of these entities in the same, predefined way. Finding the right data in a structured source is therefore straightforward and less challenging for IM, as the sketch after this list illustrates.
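The difference is easy to demonstrate with a miniature structured source (values are illustrative): every wall record has the same predefined fields, so a query reduces to a straightforward filter rather than an interpretation problem:

walls = [
    {"id": "W1", "type": "brick-cavity", "length": 5.0, "fire_rating": 60},
    {"id": "W2", "type": "concrete",     "length": 3.2, "fire_rating": 90},
]
long_walls = [w["id"] for w in walls if w["length"] > 4.0]
print(long_walls)  # ['W1']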
In contrast to analogue drawings, BIM is clearly structured, along the lines of a database. Each symbol belongs to a particular type and has specific properties. This structure is one of the driving forces behind BIM, in particular with respect to its capacity to integrate and process building information coherently. Given the effort put into developing structured models in BIM, it makes little sense to abandon the advantages they promise. This makes BIM the main environment for IM in AECO and calls for approaches that should:
• Avoid having other primary information sources next to BIM. All building information should be integrated in BIM and any related data linked to it. Currently, there is general agreement that the price of a component, e.g. a washbasin, should be a property of the corresponding symbol. However, the same should apply to all data relevant to AECO, e.g. packaging information for this component. The dimensions of the box in which the washbasin is brought to the building site, the packaging materials it contains etc. are useful for logistic purposes, as well as for waste management. Trying to retrieve this information from the manufacturer’s catalogue is significantly less efficient than integrating the relevant data among the symbol properties. The same applies to a photograph of some part of the building during construction or use. This too should be connected to BIM as a link between the digital file of the photograph and relevant symbols in the model (Figure 1) or even mapped as a decal on the symbols (Figure 2).
• Desist from promoting BIM output to the level of a primary source. Any view of a model, from a floor plan to a cost calculation, can be exported as a separate document (PDF, spreadsheet etc.). Such an export may have its practical uses but one should not treat it as a source separate from the model. Any query about the building should start from the model, bypassing exports and similar output. Using IM to ensure consistency between exports and the model is meaningless. This applies even to legally significant documents like contracts because these too can be expressed as views of the model (i.e. textual frames around data exported from the model).
From the above, a wider information environment emerges around the model, populated largely by data linked to the model, preferably to specific symbols. IM can assist with the organization of this environment but it should not be allowed to cut corners, e.g. answer queries on the basis of satellite files. IM reliability depends on transparent links between queries, external files and the model, specifically the primary data in symbols and their history.
It is perhaps ironic that while the world is focusing on big, unstructured data, AECO should insist on structured data. One explanation is latency: AECO has been late with the development of structured information solutions because it continued to use analogue, semi-structured practices in digital facsimiles. As a consequence, AECO has yet to reap the benefits of structured data approaches, let alone find their limits.
The emphasis on the structured nature of BIM also flies in the face of IM and its inclusiveness. In this respect, one should keep in mind what was discussed in a previous section: IM is a means, not an end, and its adaptability has historical causes. It is not compulsory to retain redundant information sources next to BIM, simply because IM can handle redundancy and complexity. If the structured content of BIM suffices, then IM for AECO becomes simpler and more parsimonious.
Information management goals
Information flow
What we should learn from IM is that the treatment of information should have clear goals. The first of the two main goals of IM is to regulate information flows. This is usually achieved by specifying precise processing steps and stages, which ensure that information is produced and disseminated on time and to the right people, until it is finally archived (or disposed of). In terms of the semantic information theory underlying our approach, this involves identifying and tracking information instances throughout a process, covering both the production and modification of data. IM puts emphasis on the sources and stores of information: the containers from which information is drawn, in which it rests or is archived. BIM combines all these into a single information environment, shifting attention to the symbols, their properties and relations, where all data are found.
Managing information flow involves:
• What: the information required for or returned by each specific task in a process
• Who: the actors or stakeholders who produce or receive the information in a task
• How: the processing of information instances
• When: the timing of information instances
What is about the paradigmatic dimension: symbols in BIM and external sources linked to them. For both internal and external information, it is critical to distinguish between authorship and custodianship: the actors who produce some information are not necessarily the same stakeholders who safeguard this information in a project, let alone in the lifecycle of a building. A typical example is the brief: this is usually compiled in the initiative stage by a specialist on the basis of client and user input, as well as professional knowledge. In the development stage, custodianship often passes on to a project manager who utilizes the information in the brief to guide and evaluate the design, possibly also adapting the brief on the basis of insights from the design. Then in the use stage, it becomes background to facility and property management, before it develops into a direct or indirect source for a new brief, e.g. for the refurbishment of the building. Making custodianship specific and unambiguous in all stages is of paramount importance in an integrated environment like BIM, where overlaps and grey areas are easy to develop.
How information flows are regulated relates to the syntagmatic dimension of a model: the sequence of actions through which symbols, their properties and relations are processed. The information instances produced by these actions generally correspond to the sequence of tasks in the process but are also subject to extrinsic constraints, including from the software (the implementation environment): the presence of bounding walls is necessary for defining a space in most BIM editors, although in many design processes one starts with the spatial design rather than with construction. IM needs to take such conflicts into account and differentiate between the two sequences.
A useful device for translating tasks into information actions is the tripartite scheme Input-Processing-Output (IPO) that underlies any form of information processing. For any task, some actors deliver information as input. This input is then processed by other (or even the same) actors, who return as output some other information. Then, this output usually becomes input for the next task. IM has to ensure that the right input is delivered to the right actors and that the right output is collected. By considering each decision task with respect to IPO, one can identify missing information in the input and arrange for its delivery.
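The IPO scheme lends itself to a direct, if schematic, translation into code. In the sketch below, the task definitions, actors and brief value are all assumptions of the example; the point is that the output of one task becomes the input of the next, and that IM intervenes where input is missing:

def run_task(task, inputs):
    """One IPO step: verify input completeness, process, hand over output."""
    missing = [k for k in task["needs"] if k not in inputs]
    if missing:
        raise ValueError(f"{task['name']}: missing input {missing}")  # IM acts here
    return task["process"](inputs)

tasks = [
    {"name": "spatial design", "actor": "designer",
     "needs": ["required_area"],
     "process": lambda i: {**i, "space_areas": [40.0, 50.0, 35.0]}},
    {"name": "brief check", "actor": "project manager",
     "needs": ["space_areas", "required_area"],
     "process": lambda i: {**i,
                           "meets_brief": sum(i["space_areas"]) >= i["required_area"]}},
]

data = {"required_area": 120.0}
for task in tasks:
    data = run_task(task, data)  # the output of one task is input to the next
print(data["meets_brief"])       # True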
The syntagmatic dimension obviously also relates to when: the moments when information instances become available. These moments usually form a coherent time schedule. The time schedule captures the sequence of actions and transactions, linking each to specific information instances. Here again one should differentiate between the sequence of tasks, which tends to be adequately covered by a project schedule, and the sequence of information actions, which may require additional refinement. This difference is the subject of the next part in this book.
Information flow in BIM
We are used to viewing the early part of a design process as something almost magical: someone puts a few lines on a scrap of paper and suddenly we have a basis for imagining what the building will look like. The same applies to BIM: one starts entering symbols in a model and suddenly the design is there for all to see and process. Building information flows seem to emerge out of nothing but this is far from true. The designers who make the first sketches or decide on the first elements in a model operate on the basis of general knowledge of their disciplines, more precise knowledge of the kind of building they are designing and specific project information, including location characteristics and briefs. In other words, building representations are the product of cognitive processes that combine both tacit and overt information.
It is also widely assumed that the amount of information in a design process grows from very little in early design to substantial amounts by the end, when a building is fully specified. This actually refers to the specificity of conventional building representations, e.g. the drawing scales used in different design stages. In fact, even before the first sketch is made, there usually is considerable information available on the building. Some of it predates the project, e.g. planning regulations and building codes that determine much of the form of a building and key features of its elements, such as the pitch of the roof and the dimensions of stairs. Other information belongs to the project, e.g. the brief that states accommodation requirements for the activities to be housed in the building, the budget that constrains cost and location-related principles like the continuation of vistas or circulation networks through the building site. Early building representations may conform to such specifications but most related information remains tacit, either in other documents or in the mind of the designers. For example, the site layout on which one starts drawing or modelling rarely includes planning regulations, even though the designers are normally aware of these regulations and their impact on the design.
In managing both AECO processes and information, one should ensure that tacit information becomes explicit and is connected to tasks. In BIM, this means augmenting the basic model setup (site plan, floor levels, grids etc.) with constraints from planning regulations (e.g. in the form of the permissible building envelope), use information from the brief and constraints on the kind of building elements that are admissible in the model (e.g. with respect to the fire rating of the building). Integration of such information amounts to feedforward: measurement and control of the information system before disturbances occur. Feedforward is generally more efficient and effective than feedback, e.g. checking if all building elements meet the fire safety requirements after they have been entered in the model.
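A feedforward check can be as simple as validating an element against explicit constraints before it is accepted into the model. The constraint values and property names below are assumptions of the example; real BIM editors expose such checks through rules, validation tools or plug-ins:

CONSTRAINTS = {"max_height": 9.0, "min_fire_rating": 60}  # made explicit at setup

def accept_element(element):
    """Feedforward: reject non-compliant elements before they enter the model."""
    if element.get("top_height", 0.0) > CONSTRAINTS["max_height"]:
        return False, "exceeds the maximum height in the planning regulations"
    if element.get("fire_rating", 0) < CONSTRAINTS["min_fire_rating"]:
        return False, "fire rating below what the building requires"
    return True, "ok"

print(accept_element({"top_height": 9.5, "fire_rating": 90}))
# (False, 'exceeds the maximum height in the planning regulations')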
It has also been suggested that early design decisions have a bigger impact on the outcome of a design process than later decisions. Having to decide on the basis of little overt information makes these decisions difficult and precarious. This conventional wisdom concerning early decisions may be misleading. Admittedly, early design decisions tend to concern basic features and aspects, from overall form to load-bearing structure, which determine much of the building and so have a disproportionate influence on cost and performance. However, such decisions are not exclusive to early design: the type of load-bearing structure can change late in the process, e.g. in relation to cost considerations, procurement or the unanticipated need for larger spans. Late changes can be even more expensive because they also necessitate careful control of all interfacing between load-bearing and other elements in the design. Moreover, small, local decisions can also be critical, whether in an early or late stage: if some doors in a building are too narrow, wheelchair circulation may become cumbersome or even impossible, leading to costly restrictions or adaptations. From an IM perspective, what matters is that all relevant information is made explicit in BIM, so as to know which data serve as input for a task and how to register the output of the task. Explicitness of information allows us to map decision making in a process and understand the scope and significance of any decision, regardless of process stage.
Information quality
The second main goal of IM is to safeguard or improve information quality.[3] Quality matters to IM in two respects. Firstly, for information utility: information produced and disseminated in a process should meet the requirements of its users. Secondly, concerning information value: information with a higher quality needs to be preserved and propagated with higher priority. IM measures quality pragmatically, in terms of relevance, i.e. fitness for purpose: how well the information supports the tasks of its users. In addition to pragmatic information quality, IM is also keen on inherent information quality: how well the information reflects the real-world entities it represents. It should be noted that IM is not passive with regard to information quality. It can also improve it, both at meta-levels (e.g. by systematically applying tags) and with respect to content (e.g. through condensation).
In both senses, information quality is determined within each application domain. IM offers a tactical, operational and technical framework but does not provide answers to domain questions. These answers have to be supplied by the application environment in order for IM to know which information to preserve, disseminate or prioritize. In our framework, information quality concerns the paradigmatic dimension: the symbols of a representation and their relations. As this dimension tends to be quite structured in symbolic representations, one can go beyond the pragmatic level of IM and utilize the underlying semantic level to understand better how information quality is determined.
The first advantage of utilizing the semantic level lies in the definition of acceptable data as being well-formed and meaningful. This determines the fundamental quality of data: their acceptability within a representation. A coffee stain cannot be part of a building representation but neither can a line segment be part of a model in BIM: it has to be an explicit symbol of something. That symbol may have the appearance of a line segment (i.e. it uses the line segment as an implementation mechanism, as is the case for a room separation in Revit) but the meaning of the symbol is not inferred from its appearance — quite the opposite: the appearance is determined by the meaning. Any data that do not fit the specifications of a symbol, a property or a relation cannot be well-formed or meaningful in BIM. Such data are indications of low quality that requires attention. If quality cannot be improved, these data should be treated as noise.
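As a minimal sketch of this fundamental test, the following Python fragment (with a hypothetical two-entry schema) accepts a datum only if it instantiates a known symbol type with all the properties that type requires; a bare line segment without a symbol type is rejected as noise:

```python
# Hypothetical symbol schema: every acceptable datum must instantiate a
# known symbol type and carry the properties that type requires.
SYMBOL_SCHEMA = {
    "Wall":           {"geometry", "material", "fire_rating"},
    "RoomSeparation": {"geometry"},  # drawn as a line segment, yet a symbol
}

def is_acceptable(datum: dict) -> bool:
    """Well-formed and meaningful: a known type with all required properties."""
    required = SYMBOL_SCHEMA.get(datum.get("type"))
    if required is None:
        return False  # no symbol type, e.g. a bare line segment: noise
    return required <= datum.keys()

print(is_acceptable({"type": "Wall", "geometry": "...",
                     "material": "brick", "fire_rating": 60}))  # True
print(is_acceptable({"geometry": "line segment"}))              # False
```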
Data that pass the fundamental semantic test must then be evaluated concerning relevance for the particular building or project and its tasks. To judge relevance, one needs additional criteria, e.g. concerning specificity. For example, it is unlikely that a model comprising generic building elements is satisfactory for a task like the acoustic analysis of a classroom because the property values of generic elements tend to be too vague regarding factors that influence acoustic performance.
The semantic level also helps to determine information value beyond utility: prioritizing which information should be preserved and propagated depends on semantic type. As derivative data can be produced from primary data when needed, they do not have to be prioritized — in many cases, they do not have to be preserved at all. Operational data and metadata tend to change little and infrequently in BIM, so these too have a lower priority than primary data. Finally, anti-data have a high priority, both because they necessitate interpretation and action, and because such action often aims at producing missing primary data.
Parsimonious IM concerning information quality in a symbolic representation like BIM can be summarized as follows:
1. Preservation and completion of primary data
2. Establishing transparent and efficient procedures for producing derivative data when needed
3. Identification and interpretation of anti-data, including specification of consequent actions
4. Preservation of stable operational and metadata
The priority of primary data seems to conflict with IM and its improvement of information quality through condensation, i.e. operations that return pragmatically superior derivative data and metadata. Such operations belong to the second point above: if the primary data serve as input for certain procedures, then these procedures have to be established as a dynamic view or similar output in BIM. If users need to know the floor areas of spaces, one should not just give them the space dimensions and let them work out the calculations themselves but instead supply transparent calculations, organized in a legible and meaningful way. This does not mean that the results of these calculations should be preserved next to the space dimensions from which they derive.
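A minimal Python sketch of this arrangement, assuming simple rectangular spaces: the floor area is exposed as a transparent, on-demand calculation over the primary dimensions, analogous to a dynamic view, rather than stored as a separate value:

```python
from dataclasses import dataclass

@dataclass
class Space:
    name: str
    width_m: float   # primary data: the space dimensions
    depth_m: float

    @property
    def floor_area_m2(self) -> float:
        # Derivative data: computed transparently on demand and never
        # preserved next to the dimensions from which it derives.
        return round(self.width_m * self.depth_m, 2)

office = Space("Office 1.02", 5.4, 3.6)
print(office.floor_area_m2)  # 19.44, recomputed whenever a dimension changes
```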
Moving from the semantic to the pragmatic level, veracity is a key criterion of quality: fitness for purpose obviously requires that the information is true. In addition to user feedback, veracity can be established on the basis of comparison to additional, reference data, e.g. laser scans that confirm that a model represents faithfully, accurately and precisely the geometry of an existing building.
Before relevance or veracity, however, one should evaluate the structural characteristics of primary information: a model that is not complete, coherent and consistent is a poor basis for any use. Completeness in a building representation means that all parts and aspects are present, i.e. that there are no missing symbols for building elements or spaces in a model. BIM software uses deficiency detection to identify missing symbols. Missing aspects refer to symbol properties or relations: the definition of symbols should include all that is necessary to describe their structure, composition, behaviour and performance.
Completeness is about the presence of all puzzle pieces; coherence is about how well these pieces fit together to produce a seamless overall picture. In a building representation this primarily concerns the interfacing of elements, including possible conflicts in space or time. Clash detection in BIM aims at identifying such conflicts, particularly in space. Relations between symbols are of obvious significance for coherence, so these should be made explicit and manageable.
Finally, consistency is about all parts and aspects being represented in the same or compatible ways. In a symbolic representation, this refers to the properties and relations of symbols. If these are described in the same units and are present in all relevant symbol types, then consistency is also guaranteed in information use. Colour, for example, should be a property of the outer layer of all building elements. In all cases, the colour should derive from the materials of this layer. This means that any paint applied to an element should be explicit as material with properties that include colour. Moreover, any colour data attached to this material layer should follow a standard like the RAL or Pantone colour matching systems. Allowing users to enter any textual description of colour does not promote consistency.
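The following hypothetical Python check illustrates the last point: colour values attached to outer material layers are validated against a RAL-style code pattern, and free-text descriptions are flagged as inconsistent:

```python
import re

# Hypothetical material records; a consistent model stores colour only on
# the outer material layer and only as a standard code (here RAL-style).
RAL_PATTERN = re.compile(r"^RAL \d{4}$")

materials = [
    {"element": "Door D1", "outer_layer": "paint", "colour": "RAL 9010"},
    {"element": "Door D2", "outer_layer": "paint", "colour": "whitish"},
]

for m in materials:
    ok = bool(RAL_PATTERN.match(m["colour"]))
    status = "consistent" if ok else "inconsistent (free text)"
    print(f'{m["element"]}: colour {m["colour"]!r} is {status}')
```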
Key Takeaways
• IM is more than a technical necessity: it is also a means of improving performance in a project or enterprise and therefore a key component of overall management
• IM is inclusive and accepts all kinds of information, from structured, semi-structured and unstructured sources
• As a structured information system, BIM simplifies IM
• IM has two main goals: regulate information flow and safeguard or improve information quality
• Custodianship of information is critical for information control
• Information flow relates to the syntagmatic dimension of a representation and draws from the sequence of tasks in a process, as well as from extrinsic constraints
• In managing information flow one needs to make explicit what, who, how and when
• The I‑P‑O scheme helps translate tasks into information actions
• Even before a design takes shape, there are substantial amounts of information that should be made explicit in a model as feedforward
• Information quality concerns the paradigmatic dimension and can therefore build on the semantic typology of data
• In addition to semantic and pragmatic criteria, information quality also depends on completeness, coherence and consistency
Exercises
1. Use the I‑P‑O scheme to explain how one decides on the width of an internal door in a design. Cluster the input by origin (general, specific, project) and describe the relations between input items.
2. Use the I‑P‑O scheme to explain what, who, how and when in deciding the layout of an office landscape, particularly:
1. Which workstation types are to be included, including dimensions and other requirements.
2. How instances of these types are to be arranged to achieve maximum capacity.
3. In a BIM editor of your choice, model the permissible building envelope for a building in a location of your choice. Describe the process in terms of input, information instances produced and resulting constraints for various kinds of symbols in the model.
4. Evaluate the completeness, coherence and consistency of the permissible building envelope model you have made.
5. Analyse how one should constrain types of building elements in relation to performance expectations from the use type of building: compare a hotel bedroom to a hospital ward on the basis of a building code of your choice. Explain which symbol properties are involved and how.
1. The views on DM derive primarily from: Richards, M., 2010. Building Information Management – a standard framework and guide to BS 1192. London: BSI; Eynon, J., 2013. The design manager’s handbook. Southern Gate, Chichester, West Sussex, UK: CIOB, John Wiley & Sons; Emmitt, S., 2014. Design management for architects (2nd ed.). Hoboken NJ: Wiley.
2. The presentation of IM is based on: Bytheway, A., 2014. Investing in information. New York: Springer; Detlor, B., 2010. Information management. International Journal of Information Management, 30(2), 103-108, doi:10.1016/j.ijinfomgt.2009.12.001; Flett, A., 2011. Information management possible?: Why is information management so difficult? Business Information Review, 28(2), 92-100, doi:10.1177/0266382111411066; Rosenfeld, L., Morville, P., & Arango, J., 2015. Information architecture: for the web and beyond (4th ed.). Sebastopol CA: O’Reilly Media.
3. IM definitions of information quality derive from: Wang, R.Y., & Strong, D.M., 1996. Beyond accuracy: what data quality means to data consumers. Journal of Management Information Systems, 12(4), 5-33. doi:10.1080/07421222.1996.11518099; English, L.P., 1999. Improving data warehouse and business information quality: methods for reducing costs and increasing profits. New York: Wiley.
4.01: Introduction to Management

In this part, we conclude the exploration of how digitization and information impact on AECO by focusing on the relation between information and management: how IM contributes to performance by improving not only clarity and transparency but also consistency, efficiency and effectiveness. Previous parts have explained:
• How digitization has changed expectations and attitudes concerning information.
• The structure of digital symbolic representations and their differences from analogue representations.
• What data and information are, and the principles of digital information management.
The viewpoint of this part is primarily managerial. While it is tempting to focus on specific AECO aspects and disciplines, and consider information in a narrower frame, for example study the relations between design, creativity and representation, there are several important reasons for adopting a managerial viewpoint. First and foremost, management is normally about the whole of a process or project. From this holistic perspective, information is less what each actor produces or consumes and more what enables actors and stakeholders to interact with each other concerning common goals and constraints; it is what returns an overview of the whole and an understanding of parts one is only indirectly linked to. In short, the true value of information in a process becomes apparent when considered in the wider frame of someone with a general mandate and overall interests.
Armed with an understanding of symbolic representations, graphs and semantic data types from the previous parts, we consider what we do with information when and where it matters in a process. Unfortunately, as the next chapter explains, the answer is: not much. Our cognition appears to be built in a way that allows us to operate effectively and efficiently in many common situations but also makes us biased and failure-prone in other, more demanding situations. Cognitive limitations are hard to overcome but we should at least provide the means for recognizing them and correcting their mistakes. The book contributes towards this objective by stressing the duality of process and information management, and making it operational, transparent and supportive of reflective and analytical thinking.
4.02: Decisions and information
The chapter introduces the dual-process theory and explains its relevance to decision making in AECO. It presents the foundations of the theory and a number of illusions, biases and fallacies that derive from our cognitive limitations.
Dual-process theory
One of the striking scientific developments around the turn of the century was that several compatible and often complementary views of mental duality emerged independently, in different contexts and disciplines, ranging from social psychology and neuropsychology to decision theory and economics. The notion of many different systems in the brain is not new but what distinguishes the dual-process theory from earlier views is that it builds on a better understanding of the brain’s biological and cognitive structure to suggest that there are two different types of thinking, each with different functions, strengths and weaknesses:
1. Type 1 processes: these are autonomous, unconscious, automatically executed when relevant stimuli are encountered, unguided by higher-level control systems, fast, low-effort, often parallel and with a high processing capacity. Examples of Type 1 processes are simple mental calculations, such as 2 + 3, the recognition of a common animal species like a dog, the detection of hostility in a person’s voice and the identification of a roof in an elevation drawing.
2. Type 2 processes, which are the opposite: controlled, analytical, slow, high-effort, generally serial and with a limited processing capacity. Examples of Type 2 processes are demanding mental calculations, such as 3943 × 2187, filling in an insurance form for a motor vehicle accident and looking for inward-opening swing doors wider than 67 cm in a floor plan of a large building.
For their immediate responses in a variety of situations, Type 1 processes rely on encapsulated knowledge and tightly compiled learned information, accumulated through exposure and personal experience common to the majority of people. Some of the best examples concern affordances: the actionable properties an environment presents to us, such as that we can walk on a pavement or go through a door. These are things anyone can do, usually equally well. Other things are restricted to a minority of individuals and are often an indication of expertise, for example in a hobby like fishing or a sport like table tennis. Nevertheless, they are all acquired and encoded in a similar manner. In fact, expertise in dual-process theory is seen as an extension of common capacities through practice: in the same way a child learns to recognize animal species, an expert learns to recognize familiar elements in new situations and deploy ready interpretations and actions. It should be noted that expertise is not a single skill but a large collection of small skills: an expert footballer is capable of simultaneously (or in quick succession) doing many small things both individually and in a team, from controlling a ball coming at different speeds and from different directions to passing the ball to teammates according to an agreed plan, taking into account their individual capacities and position in the field.
Type 2 processes fall into two categories. The first consists of reflective processes, which evaluate Type 1 thinking and its results on the basis of beliefs and goals. Reflection often involves cognitive simulation and can be triggered by the outcome of Type 1 processes, such as the feeling of doubt that arises from frequent failure, for example constant slipping on a frozen pavement. It is also linked to interpersonal communication and the need to explain or justify proposed joint actions and goals, such as the tactics of a football team. Once activated, reflective processes can interrupt or override Type 1 processing, suppress its responses and reallocate expensive mental resources from failing Type 1 processes to searches for alternative solutions through Type 2 thinking.
These solutions are the subject of the second category in Type 2 thinking, algorithmic processes: strategies, rule systems, general and specific knowledge, usually learned formally and therefore bounded by culture. For example, unlike a simple DIY job, a loft conversion requires more meticulous organization, which can be based on empirically acquired knowledge, a textbook on home improvement or Internet tutorials. If the project is undertaken by AECO professionals, it inevitably also draws from their training in design, construction, time planning and site management. One of the basic functions of algorithmic processes is cognitive decoupling: the distinction between representations of the world used in Type 1 processes and representations required for the analysis of imaginary or abstract situations. Cognitive decoupling concerns both substituting the naïve representations implicit in daily life, such as that of a flat earth, and allowing for imaginary situations, such as a cubic earth, which form settings for hypothetical reasoning.
It should be pointed out that Type 2 processes do not necessarily return better results than Type 1 ones. Being highly demanding, mentally expensive, subject to personal intelligence and knowledge, and founded on Type 1 biases or professional thinking habits, they may also lead to failure. Being usually acquired through formal learning, they may also be limited or failure-prone because of theoretical and methodical issues. For example, before the Copernican revolution, learned astronomers made fundamental mistakes because they based their work on erroneous earth-centric models, not because they made errors in their calculations. What Type 2 processes certainly do is avoid the biases of Type 1 thinking; their errors therefore tend to have smaller margins and to be neither systematically too high nor too low with respect to the truth. Kepler’s laws of the heliocentric model and Newtonian mechanics form a sound basis for calculating planetary motion with sufficient precision and accuracy, regardless of the means used for the calculation.
Dual-process theory has a number of advantages that explain its acceptance and popularity in many application areas. One advantage is that it makes evident why we are so good at some tasks and so poor at others. For example, we are good grammarians: children at the age of four are already capable of forming grammatically correct sentences. By contrast, we are poor statisticians: we are clever enough to have invented statistics but nevertheless fail to apply statistical thinking in everyday situations that clearly demand it, such as games of chance. The reason for this is that Type 1 processes represent categories by prototypes or typical examples. As a result, they rely on averages and stereotypes, avoiding even relatively easy calculations for drawing conclusions about individual cases from what is known about relevant categories. Interestingly, most people have little difficulty making these Type 2 calculations when asked to do so.
Such variability in cognitive performance should not be mistaken for inconsistency. It is instead an indication of conflicts that are inherent in our cognitive mechanisms. Dual-process theory has the advantage that it includes these conflicts in its core, as opposed to treating them as a loose collection of anomalies to be resolved afterwards, as exceptions to the rules. The most fundamental conflict is that Type 1 processes which undeniably serve us well in most situations are also the cause of frequent and persistent failures, some of which are discussed below. A related conflict is the constant struggle between Type 1 and Type 2 thinking for dominance and mental resources. At practically every moment of the day, we need to choose between what we do automatically and what requires analytical treatment. Sometimes what attracts our attention is an established priority, such as an exam question. At other times, it is a sudden occurrence or a new priority, for example a sudden opening of a door or a cramp in the writing arm. At yet other times, it is anti-data, such as a pen that fails to write. We are constantly asked to prioritize in a continually changing landscape of often apparently unrelated tasks that nevertheless affect our performance both overall and with respect to specific goals. Unfortunately, we often fail because of Type 1 biases and underlying cognitive limitations.
Cognitive illusions, biases, fallacies and failures
Inattentional and change blindness
A typical failure caused by our cognitive limitations is inattentional blindness: we systematically fail to notice unexpected things and events, especially when we are concentrating on another, relatively hard task. This is why we may fail to see a cyclist appear next to the car we are driving in heavy traffic (unless of course cyclists are an expected part of this traffic, as in most Dutch cities). Inattentional blindness is hazardous in traffic but the same neglect for unexpected things around us is actually helpful in many other situations because it allows us to reserve our limited cognitive capacities for important tasks at hand. When taking an exam, for example, we do not want to be distracted by extraneous stimuli, such as noise coming from outside, unnecessary movement in the room or incoming messages on social media.
Closely related is change blindness: the failure to see obvious changes in something unfolding in front of our eyes, primarily because we are concentrating on something else, usually a narrative (a subject discussed in more detail later in this chapter). Typical examples are continuity errors in films, which most viewers miss until someone points them out, making them immediately and permanently glaringly obvious to all. Just like inattentional blindness, change blindness is due to our memory limitations: perception is followed by recognition of meaning, which is what we encode in memory rather than every detail of the percept. What we retain in memory is an image or narrative that is above all coherent and consistent with its meaning. This also means that we may embellish the memory with fictitious details that fit the meaning and the emotions it elicits. Vivid details are often an indication of such reproduction after the memory was formed.
Planning and sunk-costs fallacies
A popular subject in news stories is projects poorly conceived by clients and developers, inadequately understood by politicians and authorities that endorse them, and unquestioningly attempted by designers and engineers, with dramatic failures as a result. Such projects are often linked to the planning fallacy: we tend to make designs, plans and forecasts unrealistically close to best-case scenarios, neglecting to compare them to the outcomes of similar projects, therefore repeating mistakes that have led to previous failures. Common reasons behind the planning fallacy are the desire to have a design approved, pressure to have a project completed and a general tendency to act fast so as not to delay. These push decision makers to quick, overoptimistic, typically Type 1 decisions that remain unchecked.
Interestingly, these attitudes persist when the failure becomes apparent, usually through the height of the sunk costs. Rather than accept defeat and cut their losses, many stakeholders insist on throwing good money after bad, desperately continuing the project in the vain hope of turning its fortunes around. The escalation of commitment caused by the sunk-costs fallacy is often celebrated for merely reaching some goals, ignoring the devastation it has brought along. It seems that the height of the sunk costs actually increases commitment, as well as the misguided appreciation of partial goals (often just the completion of the project), which confuses stubbornness and incompetence with heroism.
In principle, optimism in the face of danger or failure is a positive characteristic. It encourages us to persist against obstacles and, in many cases, to overcome them. A defeatist attitude under difficult conditions is obviously unhelpful for survival but the same also holds for insistence that is uninformed and uncontrolled by rational analysis. For example, any decision to repair a damaged car should depend on the technical and economic feasibility of the repairs, in relation to the value and utility of the car. Endless expensive repairs of a car that can be easily replaced by another rarely make sense.
Inside and outside view
Such fallacies are reinforced by the tendency of stakeholders to stick to inside views: views shared by the project participants, relying too much on information produced in the project by the participants themselves. By repeatedly using and sharing this information, such as an early budget or time plan, they end up believing it unquestioningly, even when new developments should cast doubt on earlier decisions and lead to major adjustments in the project (a tendency related to change blindness). Consequently, participants subscribing to an inside view tend to underestimate problems and dangers that are evident from an outside view, i.e. one that covers the whole class to which the project belongs and the statistics of this class. Failing to adopt outside views reduces the validity of any basic decision in a project. It also causes unwarranted, pervasive optimism: the mistaken belief that this project is surely not subject to the causes of failure in other, very similar projects.
Illusions of knowledge, confidence and skill
Planning fallacies and inside views are linked to the illusion of knowledge: we think that we know more than we actually do because we understand what happens and mistake it for an understanding of why it happens. The consequence is that we rarely doubt our beliefs and assumptions, and, when confronted with a task, we plunge into direct action before we fully understand the situation. The more complex the task, the more profound the illusion and the more dangerous its effects.
A complementary illusion is that of confidence: we tend to delegate tasks to actors on the basis of what they believe they can do. Even without relevant qualifications or a convincing track record, a person can be confident about their competence to do something and claim it. Even worse, others are inclined to accept the claim: we treat the confidence of a person in their abilities as a true measure of their skill or expertise and therefore overestimate their capacities. However, as talent shows make abundantly and embarrassingly obvious, incompetent people are often overconfident about their abilities. They audition for something that they clearly cannot do, not as a joke but because they genuinely believe in themselves. Moreover, the illusion of skill from which they suffer means that they are less inclined to improve their skills. By contrast, highly skilled persons, such as top athletes and celebrated musicians, always look for ways to improve themselves, e.g. through constant, demanding and often innovative training. Similarly, true experts are aware of their limitations and scope (unlike users of questionable or arbitrary expert-like heuristics), and constantly try to augment or refine their interpretations and solutions in every new situation they encounter.
The illusions of knowledge, skill and confidence are not confined to extreme cases, like the ones in talent shows. Most people suffer from them, typically undertaking jobs they botch and abandon, e.g. in DIY. More importantly, they tend to attribute good performance to superior skills or expertise rather than luck and bad performance to accidental or unforeseen conditions that are beyond their control or to the incompetence or obstruction of others. Even seasoned professionals may be confident that they can achieve the necessary goals in a project, despite having failed to deliver in previous projects. However, experience is not the same as expertise or ability. The true hallmark of knowledge and skill is a persistently high performance, as attested by our expectations from e.g. professional musicians and surgeons.
Substitution
One of the clever strategies of Type 1 thinking is that difficult problems are routinely substituted by simpler ones. Rather than taking the trouble of calculating a product like 3943 × 2187 precisely, we approximate by calculating 4 × 2 and then adding zeros for the thousands. This is acceptable in many situations but can be misleading in others. For example, when people are asked how happy they are, they invariably base their answer on their current mood and give an answer that reflects only very recent events and conditions. Regrettably, the fluency by which Type 1 thinking finds solutions to the simple problems makes us think that we have found adequate solutions to the complex ones they have substituted. Consequently, we seldom attempt to solve or even understand the complex problems. Instead, we resort to approximations and received wisdom, and so frequently fail to utilize the tools, knowledge and information at our disposal. Simplifying a mental calculation may be a clever strategy but doing so in a spreadsheet or a calculator is pointless, especially if precision is required.
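For comparison: the substitute calculation yields 4000 × 2000 = 8,000,000, while the precise product is 3943 × 2187 = 8,623,341, an error of roughly 7%. That is tolerable for a mental estimate but pointless to accept in a spreadsheet or calculator, where the precise result is equally cheap.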
Framing
Fallacies and illusions often relate to narrow framing: focusing on the individual project rather than the enterprises involved in it or focusing on a single problem rather than the whole project. What appears to be the right decision in a narrow frame is often patently wrong when considered more broadly, in relation to other issues and options or for a longer term. However embarrassing, the termination of a failing project may be right for the enterprise and its fortunes, as it allows allocation of resources elsewhere instead of increasing sunk costs. Crucially, a situation can be framed positively or negatively. Our reactions to e.g. the Internet are influenced by how it is presented: either as a source of hidden dangers or a universe of new opportunities. Similarly, a budget overrun by 46.3 million euros for a major, demanding building may raise few eyebrows among people familiar with much worse cases in infrastructure and defence projects but the same overrun expressed as 26% of the original budget sounds more serious.[1]
One of the most significant recent developments in decision theory goes beyond the realization that framing and the context it creates influence decision making: it actively deploys a choice architecture that structures this context and nudges decisions towards better choices without forcing any outcomes. This involves using suitable defaults (e.g. opting out of organ donation instead of opting in) and making the various options more comprehensible (by providing clear information on what they entail in both short and long term) to prevent people from taking simplistic solutions when the number of choices and complexity of problems increase. Nudging accepts the possibility that people make errors and develops means to suppress them. In addition to the above feedforward mechanisms, a choice architecture should also provide feedback that informs people on the consequences of their decisions, warns them if things go wrong and allows them to learn about the tasks they are facing and their own behaviour and performance.
Narratives, coherence and causes
We have all experienced it in one form or another: we have attended a presentation that convinced us but afterwards it was unclear why; we have enjoyed watching a film only to realize later that the plot was full of holes. The success of a narrative is often due to the halo effect: a presentation delivered by someone we like, respect or admire is received positively by Type 1 thinking, often impeding Type 2 processes from analysing its content. But even without this effect, our love of narratives and the coherence they bring to our perceptions and experiences are decisive. Type 1 thinking looks for coherence in a story and confuses it with quality of evidence: if the narrative is coherent, it is plausible and therefore acceptable. Its probability matters little; it suffices that it is a simple, concrete story about a few striking events, with enough detail to make it realistic.
These are the kinds of stories that convince and appeal to us in real life, in literature or in films. These stories usually tell us that practically everything is due to talent, stupidity or intention. They can therefore be didactic or inspirational, allowing us to marvel at the leadership of a great military hero or a titan of industry. The role of luck and circumstance is often ignored, in a way that makes us believe not only that we understand the past and the present but also that we can predict and control the future. On the positive side, this allows the development of beliefs and confidence in them, so that we are not easily daunted by adversity or yet unsolved problems. The “never mind, we’ll find a solution; let’s crack on now” mentality can be beneficial for survival but not necessarily helpful with problems that require careful analysis and planning, such as big construction projects.
Unfortunately, narratives may contain invented causes that help facts connect and make sense together. This also implies selectivity as to which facts are included. As information is costly to obtain, store and process in our brains, we apply the same simplification as in our memories: we reduce narratives into something simple that makes sense and reveals agents, intentions and causes with clear characteristics and relations. Interestingly, we then embellish narratives (similarly to memories) with details that make them more believable, consistent and coherent — details that may have been invented or imagined, at the cost of others that reveal the complexity or randomness of reality. This retrospective distortion of facts enhances the illusion of fully understanding reality and being able to predict it, and sustains a dangerous eagerness to accept overgeneralizations like “generation X”, zodiac signs or perfect curves that profess to be analytical, despite their lack of foundation in real data.
Part of the appeal of narratives lies in their linear structure, which reinforces three biases that make us jump to conclusions: our minds are built to assume causality on the basis of precedence in time, to infer causes from coincidence and to detect meaning in patterns. These Type 1 cornerstones allow us to connect events like the flicking of a switch to a light turning on, identify malfunctions in a machine by the sounds it makes and recognize familiar persons in a crowd by the way they move. On the negative side, they lead to illusions of cause when we infer relations and patterns in random data. The illusion may affect even experts who, when confronted with randomness or unknown patterns, do not pause to consider alternatives but remain in Type 1 mode and seek confirmation of what they know or expect. Spurious correlations are quite easy to find given enough data but unfortunately not all of them are as obviously ridiculous as the link of US spending on science, space and technology to the number of suicides by hanging, strangulation or suffocation; margarine consumption to the divorce rate in Maine; the age of Miss America to murders by steam, hot vapours or hot objects; the consumption of mozzarella cheese to civil engineering doctorates; or superstitions like an athlete always wearing the same “lucky” socks.[2]
Narratives are also inherently unfair, as we realize from the need to change the narratives of the colonial past: the old depictions and descriptions of persons and events were at best partial and selective, providing coherence and simplicity by downplaying the complexity of a world that often seems inexplicable because it contains things we consider improbable or fixed. Change blindness makes us fail to notice things that change if they fall outside the narrative and so eliminate them from the story, which is then presented in a way that reinforces the remaining points. In this way, other facts and perspectives remain hidden, only to resurface too late, when the partiality and unfairness of the narrative have become painfully obvious.
Dual processes and information
A widely accepted explanation for our frequent, systematic cognitive failures is that the otherwise highly efficient Type 1 processes stem from our evolution and adaptation to a world quite different to today’s highly technological, fast and busy environment. Back then, our cognitive systems developed ways to allocate their limited resources for the requirements of fundamental tasks at walking or running speed, not for demanding problems in mathematics or the much higher speed of motorized traffic. The world has since changed, as have our activities and priorities in it, but Type 1 processes remain the same, making us prone to biases and errors. What makes us able to use simple construction tools to build a garden wall is not what is required in planning and executing the realization of a skyscraper.
Unfortunately, our capability at many everyday tasks makes us overconfident about our cognitive abilities in general. We confuse the fluency of Type 1 processes with deep understanding and treat it as a general performance guarantee. This causes the cognitive illusions that make us underperform with worrying regularity. Even worse, we seem unable to appreciate the significance of these illusions and the frequency or magnitude of our failures. Law courts, for example, insist on putting too much weight on eyewitness accounts, despite known limitations of our memory and especially the tendency to reconstruct memories and complement them with fictitious details that enhance their meaning.
Most scientists agree that our cognitive limitations cannot be avoided. All we can do is be aware of them and remain alert to situations and conditions that require activation of Type 2 reflective processes: learn to recognize and neutralize biases, so as to avoid the pitfalls of intuitive, automatic decisions and actions. To do so, we require information that helps us both fully understand the situation and identify better solutions through Type 2 algorithmic processing. Making explicit the information surrounding every task in a process is therefore a prerequisite to any improvement in our decision making. Finding this information and processing it towards better solutions often involves the use of technologies that aid memory, such as writing, or processing, such as calculators. It follows that digital information technologies are a key part of the further, hybrid evolution of human thinking. Matching their structure and potential to our cognitive capacities is the starting point for understanding how IM can support decision making.
AECO and dual-process theory
Among the examples of failure mentioned in dual-process literature, AECO has a prominent place: its history is full of examples of the planning fallacy. Defence, industrial, ICT and infrastructure projects may result in higher overruns but the AECO examples are far more frequent and widespread. The reasons behind the planning fallacy appear to be endemic in AECO and regularly lead to hasty designs or project plans, built on largely unfounded Type 1 decisions. As one would expect, the disregard for realism and reliability does not change when projects start to fail or are affected by sunk costs. AECO stakeholders stubbornly keep investing in failures, focusing on project goals and anchoring on plans that are clearly faulty. Despite the availability of data, knowledge and advanced tools, many decisions are taken on the basis of norms and rules of thumb, which encourage superficial treatment of building performance and process structure. The result is that costs increase disproportionately, while even minor cuts or concessions dramatically reduce quality and scope. In the end, it seems that all that matters is that the buildings are realized, however expensive or poor. Even when we avoid outright failure, we generally produce buildings with the same severe limitations, through the same processes that have returned so many earlier mediocrities or failures.
The planning fallacy in AECO is reinforced by strong inside views. The dictum “every building is unique” is misleading because it ignores similarities not only between the composition and performance of buildings at various levels (from that of individual parts like doors and corridors to common configurations like open-plan offices) but also between the processes of development, realization and use in different projects. It leads to an arrogant, persistent repetition of the same mistakes, coupled with a lack of interest in thorough analyses that can reveal what goes wrong. It seems that the main priority of AECO is to keep on producing, even though the high turnover in construction is linked to a rather low mark-up for many stakeholders.
Under these conditions, substitution is rife in AECO. Practically everything is kept as simple as possible, regardless of what is actually needed. This especially affects forecasts, such as cost estimates, and analyses, which are reduced to mere box ticking against basic norms like building codes. Compliance clearly weighs more heavily than real performance and this adds to the persistence of outdated approaches and technologies. While other production sectors increasingly invest in advanced computerization, AECO insists on doing too much manually, while relying heavily on cheap labour.
Such issues are exacerbated by the frequently loose structure of AECO processes, which include large numbers of black boxes with uncertain connections between them. These black boxes generally relate to the illusion of confidence, which underlies the delegation of aspects and responsibilities to specific actors. It is often enough that these actors represent a discipline relevant to an aspect; if they also exude confidence in their knowledge and decisions, we tend to take it for granted that they know what they are doing and so impose few if any checks on their contributions — checks on which any sensible manager should insist anyway, given the high failure rate in AECO projects and the blame games that follow failure.
As for narratives and their capacity to obscure true causes, the halo effect has a strong presence in AECO, as attested by the attention given to famous architects and the belief that their designs are good by default. Any failure by a grand name is either a heroic one, attempted against all odds, or the result of unjust lack of acceptance or support by others. The culture of learning from prominent peers means that lesser AECO professionals are also fully aware of the power of a coherent narrative and therefore often choose to focus on simple, strong ideas rather than detailed, fully worked-out designs and plans. This is facilitated by ritual processes, formulated as prescriptive, box-ticking sequences of actions, products and stages: a process can usually proceed only if the previous steps have been completed, for example, if there are drawings of the design at the agreed scale, budgets and time schedules. The presence of these carriers is more important than the quality of their content, which remains unchecked until a problem emerges later on. So long as the coherence of the core narrative holds, we see what we want to see, oblivious to our inattentional or change blindness. This also leads to illusions of cause: we often attribute problems and failures to the immediately previous stage of a project, instead of searching for true causes throughout the project.
The fallacies and illusions in AECO are closely related to narrow and biased framing that isolates problems and solutions. For example, policy makers propose intensified, denser and higher housing construction in the Dutch Randstad in order to meet demand, without linking this to other issues, such as environmental concerns (e.g. the negative effects of urban heat islands) or transportation problems. These issues are subjects of other ongoing debates and current policies, which are kept separate, even though they are obviously related to urbanization and will be directly affected by housing development. By keeping them out of the housing demand frame, their resolution is deferred to a later stage, when the effects of this urbanization wave may have become painfully apparent and possibly irreversible. The danger is that they will be addressed only then, again in a piecemeal, narrow-frame fashion.
In short, despite the nature of its problems and solutions, AECO thinking appears dominated by Type 1 processes. Quick decisions based on norm, habit, principle or goal proliferate, notwithstanding the availability of detailed specifications (e.g. construction drawings) that could be used for precise evaluations of validity and feasibility. Moreover, such specifications and evaluations usually come after the decisions are taken and are constrained by resulting narratives: what does not fit the basic message that should be conveyed in a project is frequently underplayed.
The social and information sides
An ingrained bias in process management is the frequent overemphasis on the social side: the stakeholders and actors, their interests, what they should do and when, and how to align them towards common goals and joint products. While this side of management is obviously significant, it also entails the danger of black boxes, formed out of generic, vague assumptions and expectations. We roughly know the capacities and remit of each participant in a project, so we feel disinclined to specify their contributions to the project or the connections between contributions in detail. Instead, we assume that everyone will do their bit and problems will be solved automatically, perhaps even before we realize their existence.
However, as the dual-process theory explains, this view is a highly suspect product of Type 1 thinking. The consequences of uncontrollable black boxes and undefined connections can be quite grave. A manager can always adopt a laissez-faire attitude and wait for crises to emerge in a project, in the knowledge that crises trigger action. Unfortunately, not all problems qualify as crises. In many cases, failure is the sum of many smaller problems that remain systematically unsolved (a characteristic malaise in AECO). But even when the problems are big enough for the crisis to be recognized, it may be too late for effective and economic solutions (as the sunk-costs fallacy indicates). The obvious solution is to structure processes clearly and consistently as a sequence of specific tasks for particular actors (the subject of the chapter on process diagrams), so that we can deploy constructive Type 2 reflection. However, the resulting critical review of a process is not enough. We additionally need to understand and control the process in terms of its main currency, information: what connects stakeholders, actors and tasks, what is consumed and produced in each task.
By managing information in addition to social interactions, we ensure that each task receives the right input and returns the right output, that tasks are unambiguously linked (as the output of one becomes input to another) and that all tasks are consistently and coherently specified. Each participant can consequently know what is required of them and what they need in order to do their job. This makes the management of a process transparent and controllable, devoid of black boxes and grey areas. In this sense, the information side validates the social side, revealing possible gaps and inconsistencies, and removing unnecessary vagueness from the process.
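As a hypothetical sketch of this validation, the Python fragment below gives each task declared inputs and outputs and then checks that every input has a source; inputs produced by no task expose exactly the gaps and black boxes discussed above (the task names are invented for illustration):

```python
# Hypothetical task specifications: each task declares its input and output.
tasks = {
    "survey site": {"in": set(),                   "out": {"site model"}},
    "make design": {"in": {"site model", "brief"}, "out": {"design"}},
    "check code":  {"in": {"design"},              "out": {"compliance report"}},
}

produced = set().union(*(t["out"] for t in tasks.values()))
for name, t in tasks.items():
    missing = t["in"] - produced
    if missing:
        print(f"Task '{name}' has no source for input: {sorted(missing)}")
    else:
        print(f"Task '{name}' is fully linked")
```

Here the brief is flagged because no task produces it; making it explicit as primary input supplied by the client closes the gap instead of leaving it assumed.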
Decision making in AECO is ostensibly based on expertise and often takes place in settings that justify the emphasis on expertise: confusing and dynamic situations, unclear goals and poorly defined procedures, a looming larger context and time pressure. These are typical of conditions in the field, where true experts, such as firefighters, pilots and nurses, must take rapid, difficult decisions under pressure and uncertainty. One could argue that similar conditions exist in many AECO situations, from design reviews to construction sites. However, the similarity is generally superficial or even artificial. There are very few situations in AECO that qualify as true emergencies. In most cases, it is the emphasis on the social side of management and the power games it involves that define the conditions, underplaying the information side and its significance for decision making. In essence, emergency conditions are created in AECO through poor preparation, inadequate procedures and lack of analyses or development in depth. Such inadequacies create the illusion that decisions must be taken on principle or on the basis of expertise and in total disregard of the huge amounts of information AECO professionals typically produce in documenting a situation and developing a design or plan. Relegating this information to a mere background for the personalities involved in a project makes little sense for the functioning of decision makers, as well as with respect to the information-intensive character of AECO.
Even when expertise is called for, it is not a matter of gut feeling alone. Analyses of decision making by experts suggest that it is a two-stage process: it does start with intuition (recognition of how one needs to respond to a situation) but this is followed by deliberate evaluation (simulation of response and its outcomes). In this blend of intuition and analysis, the simulation can be mental but is nevertheless based on information, including both general rules and specific characteristics of the particular situation. A firefighter, for example, needs to know the materials and structure of a building on fire in order to predict how the envelope and the load-bearing structure may behave or how smoke may develop in egress routes. The more relevant information is available, the better for the decision making. Furthermore, experts often rely on explicit information tools for aiding their memory and structuring their procedures, such as data charts and checklists.
The conclusion we can draw from this is that even if we treat every participant in an AECO project as an expert, capable of amazing Type 1 feats, there are several critical reasons to invest in IM:
• Clear information structures and flows allow managers to understand what takes place and guide the process.
• Reliable and meaningful information around each task helps other participants evaluate and adjust their own actions, generally without the need for interventions by the manager.
• Any such intervention can be made less confrontational, as well as more operational, if conveyed through information.
Recommended further reading
Four books that explain dual-process theory in order of accessibility to a wider audience:
• Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
• Chabris, C. F., & Simons, D. J. (2010). The invisible gorilla: and other ways our intuitions deceive us. New York: Crown.
• Stanovich, K. E. (2011). Rationality and the reflective mind. New York: Oxford University Press.
• Evans, J. S. B. T., & Frankish, K. (2009). In two minds: dual processes and beyond. Oxford: Oxford University Press.
Nudge theory and choice architecture are presented in: Thaler, R. H., & Sunstein, C. R. (2021). Nudge: the final edition. New Haven: Yale University Press.
An enlightening analysis of true expertise is: Klein, G. A. (1998). Sources of power: how people make decisions. Cambridge MA: MIT Press.
The benefits of checklists for medical and other experts are presented in: Gawande, A. (2010). The checklist manifesto: how to get things right. New York: Metropolitan Books.
Key Takeaways
• Thinking comes in two different types, Type 1 (fast but biased) and Type 2 (analytical but slow and expensive)
• Type 2 processes can be reflective (evaluation of Type 1 results) or algorithmic (strategies, rule systems etc., often based on cognitive decoupling)
• Failures due to Type 1 thinking include: inattentional and change blindness; planning and sunk-costs fallacies; inside views; illusions of knowledge, confidence, skill and cause; inappropriate substitution; narrow framing; false coherence, invented causes and partiality in narratives
• The activation and execution of Type 2 processes depend on information
• AECO exhibits many failures that suggest a dominance of Type 1 thinking
• Management has a social and an information side
• True expertise relies on information
Exercises
1. Analyse a project known from literature to have suffered from planning or sunk-cost fallacies:
1. What were the causes of the failure in terms of the dual-process theory?
2. What would be the right frame and outside view for the project?
3. Which information is critical in this frame and view?
2. Study the MacLeamy curve (any variation):
1. Is it a law, a generalization (like Moore’s “law”) or an overgeneralization?
2. Which information is necessary for constructing the curve?
3. Give three examples that do not fit the curve, including specific information and the resulting curve.
1. The budget overrun example concerns the Amare cultural complex in The Hague, as calculated by the local Court of Auditors: https://www.rekenkamerdenhaag.nl/publicatie-onderwijs-en-cultuurcomplex/.
2. For a number of amusing spurious correlations, see http://tylervigen.com.
4.03: Process diagrams

This chapter explains how a process can be described as a graph of tasks that affords overview and supports reliable planning and effective guidance for each task and the whole process. By doing so, process diagrams address many questions on the social side of management. The chapter presupposes knowledge of graphs and in particular of directed graphs (see Appendix I).
Process descriptions
As we have seen in the chapter on IM, there is a strong correspondence between the sequence of tasks in a process and the sequence of information actions: process management and IM overlap. Therefore, the first step towards effective IM in any process is understanding the process itself: what people do and how their actions, decisions, interactions and transactions relate to the production, dissemination and utilization of information. Starting IM by analysing the process also has advantages for the deployment of IM measures: most people and organizations are more process-oriented than information-oriented. As a result, they may have difficulty identifying and organizing information actions without a clear operational context. Using a process model as basis makes clearer why and how one should manage information.
The various ways of describing processes fall into two main kinds:
1. Textual descriptions, such as reports, often including tables and lists that summarize key points
2. Diagrammatic descriptions: visual displays of the process structure, either focusing on the overall picture or providing step-by-step descriptions of the process flow
The two kinds are complementary: textual descriptions can be detailed specifications, while diagrammatic ones afford overview. This fundamental difference makes textual descriptions better suited for the level of single tasks and diagrammatic ones the indispensable starting point for the whole process. Doing away with diagrammatic descriptions and relying solely on texts is inadvisable because of the resulting difficulty in constructing mental overviews. Recognizing dependencies between multiple tasks, redundancies, omissions and other process characteristics is quite demanding for any reader of a text. It can lead to unnecessary errors in interpretation, including through illusions of cause from presumed precedence or coincidence in time, especially in sequential processes (which abound in AECO). Diagrammatic descriptions help us overcome such cognitive limitations by serving as mnemonic aids for understanding and managing processes: they can be seen as checklists of tasks and of relations between tasks that unburden actors’ memories and prevent them from missing critical steps or available options at any step in a process.
This chapter builds on the potential of graphs to answer two fundamental questions:
1. How process diagrams should be made: the syntactic and semantic rules they should follow to capture the composition and structure of tasks in a process with the right abstraction and consistency
2. Which problems can be addressed in these diagrams, with emphasis on the unwanted products of Type 1 thinking, so that the social side of management becomes both more specific and free from cognitive illusions and fallacies
Flowcharts
Basic flowcharts suffice for describing practically any AECO process as a sequence of tasks towards a specific outcome. These diagrams are directed graphs (digraphs), in which objects are represented by nodes of various kinds, while relations are described by arcs (Figure 1). The direction of the arcs indicates the direction of flow in the process. Bidirectional arcs should be avoided because they usually fuse different relations, obscuring differences in time and purpose between denoted actions, e.g. between an evaluation and the feedback that follows. Separate representation of each such action is essential for understanding and managing process flow.
To make a usable flowchart of a process, one should adhere to a few basic rules:
• Uniqueness: each thing should be represented by a single node in the diagram. The uniqueness rule makes explicit the actors, stakeholders and tasks in a process, the scope of each, process flow and, through these, the overall complexity of a process. It also permits the use of graph-theoretic measures, such as the degree of a node, in analysing the process.
• Decision degrees: the in- and out-degrees of each decision node should be at least 2. This means that there are at least two things to be compared in order to take the decision, for example a design and a brief, and at least two decisions that could be taken, for example design improvement or approval.
• Specificity and comprehensiveness: a process flowchart is not an abstract depiction of vague intentions, like many conceptual diagrams. Each node and arc should be meaningful as an unambiguous actor, task or relation. No directly relevant task or relation should be left implicit or otherwise absent from the diagram. For example, it is not helpful to assume that a design is somehow evaluated anyway and neglect including the evaluation tasks in the diagram or omit the criteria of the evaluation.
Figure 2 is a simple example of a process in building design: the estimation of construction cost in early design, on the basis of gross floor area. The process involves three actors: a client, an architect and a cost specialist. These are responsible for the budget, the design, the cost estimation and the evaluation of the estimate, which leads to either feedback to the design (usually to lower the cost) or acceptance of the design as it is. The process described by the diagram is as follows:
1. The client decides on a budget for the building
2. The architect makes a design for that budget
3. The cost specialist estimates the costs of that design
4. The design is evaluated by comparing its costs to the budget
5. If the costs are within the budget, the design is approved; if not, the design must be improved and evaluated again (repeat from step 2)
The comparison between this list of steps and the process diagram is telling: there is nothing in the list that cannot be inferred from the diagram. Reading the diagram is faster than reading the list and the process structure is easier to recognize in the diagram than in the list, especially concerning relations between tasks. If the list were replaced by a less structured text, the differences would become even greater.
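The same structure can also be captured executably, which makes the rules above testable. The sketch below uses Python with the networkx library; the node names and arcs are one plausible reading of Figure 2, reconstructed from the text rather than transcribed from the figure, so treat them as an illustration of the technique rather than a definitive encoding.

```python
import networkx as nx

# One plausible reading of the Figure 2 process diagram as a digraph.
# Node names and arcs are reconstructed from the text, not from the figure.
G = nx.DiGraph()
G.add_edges_from([
    ("Client", "Budget"),                    # the client decides on a budget
    ("Architect", "Design"),                 # the architect makes the design
    ("Budget", "Design"),                    # the budget constrains the design
    ("Design", "Cost estimation"),           # costs are estimated from the design
    ("Cost specialist", "Cost estimation"),  # by the cost specialist
    ("Budget", "Evaluation"),                # the budget is the evaluation criterion
    ("Cost estimation", "Evaluation"),       # compared to the estimated costs
    ("Evaluation", "Design"),                # feedback: improve the design
    ("Evaluation", "Design approval"),       # or approve it
])

# Decision degrees rule: a decision node needs in- and out-degree of at least 2.
node = "Evaluation"
print(node, "in-degree:", G.in_degree(node), "out-degree:", G.out_degree(node))
assert G.in_degree(node) >= 2 and G.out_degree(node) >= 2
```

The assertion passes: the evaluation compares two inputs (the budget and the cost estimation) and leads to two possible outcomes (feedback to the design or approval).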
The example is purposely kept simple in order to illustrate the basic principles of process diagramming. A client obviously issues more instructions and requirements, e.g. a design brief, while the design must also take into account other goals and constraints, such as the applicable building codes and planning regulations, location features etc. Even in this simple form, however, the diagram accurately describes the fundamental structure of cost estimation, including the double role of the budget as a constraint in designing and as a criterion for design evaluation, as well as the generate-and-test approach to design (with analysis between generation and testing).
Due to the uniqueness rule, there is a single node for the design. Evaluation is followed not by a new design node but by feedback to the design. This makes clear that the decision is to improve the design rather than produce a completely new design, possibly by a new architect or with a new budget. This expresses the iterative character of design: the cost evaluation can be repeated a number of times, each resulting in design improvements, until finally the evaluation returns approval of the design. The example contains feedback only to the design but it could also feed back to the budget if the design evaluation suggested that higher investment, e.g. an energy-efficient solution that raised construction costs but lowered operating costs, would return significant life-cycle benefits.
A process diagram without feedback loops is by definition suspect. Figure 3 is a negative example (a poor diagram), which also violates the uniqueness and decision degrees rules: it is as if many architects are involved, each producing a different design that is subjected to a different evaluation with unclear criteria and outcomes. Above all, however, it presents an iterative process as sequential, with an arbitrary, unsatisfactory conclusion that comes only because the diagram ran out of space. In comparison, Figure 2 leaves no fundamental matters unspecified, including by means of feedback to an earlier task following a transparent decision.
In short, the diagram in Figure 2 affords overview of the process in the same way that a metro map allows travellers to see every station and line in a city, the location of each station, its connections to others via different lines and the patterns that emerge in each line and any part of the metro network. The process diagram allows us to zoom in on any task, understand its immediate context, track how we come to that task and where we go from there, as well as identify general characteristics of the process, for example if it is sequential or iterative.
Testing process diagrams
A basic way of testing the structure and content of basic diagrams is from the perspective of each actor and stakeholder. Starting from the beginning of the process, we need to consider at each node if this actor or stakeholder is related to this task (i.e. link what to who and consequently also how). If yes, then we need to establish if the connection is:
• Direct: it is a task that should be undertaken by this particular actor, e.g. the design by the architect
• Indirect: it connects to another task by this stakeholder, e.g. the architect needs the budget to guide the design costs
Once this is completed and the diagram accordingly corrected, the involvement of each actor and stakeholder can be tracked in the process (i.e. extend to when). To do this, we need to examine the subgraph that contains all directed walks that start from the node of the actor or stakeholder. Figure 4 is the subgraph of the cost specialist, in which there are two directed walks to design approval: one directly after evaluation and one following feedback to design. In this subgraph, we can identify the extent of the cost specialist’s involvement, examine if the sequence of the process steps is logical and, on the basis of anti-data, identify relations to other actors, stakeholders and tasks, e.g. who is making the design improvements following feedback and what are the criteria for the evaluation (i.e. that the evaluation node should connect to a budget). The results returned from this examination are obviously significant for the functioning of the particular actor or stakeholder but also useful for the manager: in many situations, missing nodes and especially arcs become apparent only in such subgraphs.
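Extracting such a subgraph can be automated. A brief sketch, assuming the digraph G built in the earlier code example: nx.descendants returns every node reachable from a source, and the induced subgraph then contains exactly the arcs that lie on directed walks from that node, as in Figure 4.

```python
# Subgraph of all directed walks starting from the cost specialist (cf. Figure 4),
# using the digraph G defined in the earlier sketch.
actor = "Cost specialist"
reachable = nx.descendants(G, actor) | {actor}
sub = G.subgraph(reachable)
print(sorted(sub.nodes))
# ['Cost estimation', 'Cost specialist', 'Design', 'Design approval', 'Evaluation']
```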
Following the examination of each perspective separately, we should investigate how they come together in a simulation of the process flow. This is best done by a group, in which each member assumes the role of an actor or stakeholder. In board-game fashion, the simulation goes through the process step by step, stopping at every task to consider not only if the task is fully and clearly specified but also where each actor and stakeholder is at that step of the process. The former makes clear the interactions between actors and stakeholders, including through their products. The latter helps anticipate the nature and timing of upcoming interactions, e.g. that the budget should be available before the architect starts designing. Many conflicts in a process are due to bad timing and the consequent need to make haste in order to catch up with the process flow.
Overcoming cognitive limitations
The overview afforded by process diagrams is not merely practical for explaining the process, so that all actors know what to do and when, and managers can organize actions towards project goals. It is also instrumental for overcoming cognitive limitations that lead to mistakes and failures. Acknowledging that project participants may suffer from cognitive biases and illusions, and trying to help avoid them is a key managerial task. For example, explicit connections between tasks help avoid illusions of cause. If the connections are accurate and no tasks are missing, one cannot easily infer fictitious causes from accidental precedence or coincidence in time.
To achieve such protection, it is important to avoid thinking in conventional, vague structures, such as preliminary, concept, technical and detail design stages. In a process diagram, such stages should never become nodes. Instead, all stages should be parsed into specific tasks and patterns, like the evaluation pattern in Figure 2, which matches any stage and therefore supports condensation. Distrust of conventional structures is important for the successful deployment of Type 2 reflective processes. These are often acquired by education and training, and therefore reflect cultural and professional habits. As a result, they may actually discourage analytical thinking by imposing rules of thumb and other summary or pseudo-analytical decision structures, which are embedded in conventions we tend to follow unthinkingly.
It is also important that managers are aware of their own cognitive limitations and how these can affect a process. Process diagrams can protect actors individually and as a group from avoidable mistakes but the same applies to mistakes managers make in the setup and control of the process. The most common of these relate to illusions of confidence and skill that cause managers to allocate tasks or even delegate custodianship of whole parts of a process without sufficient control (black boxes). Making the relations between actors, stakeholders and tasks explicit is a solid foundation for avoiding such illusions.
Reflective thinking, cognitive simulation and learning potential
A general advantage of a process diagram like Figure 2 is that it stimulates Type 2 reflective thinking. In contrast to an inspirational conceptual diagram, a flowchart like this supports analytical anticipation of actions and outcomes in an explicit manner that reveals unsolved problems and casts doubt on automatic decisions based on habit or convention. Unlike the mind-numbing Figure 3, it invites us to actively follow progress in a process, discovering possible problems on the way. By tracking the dipaths that lead to a node of interest or depart from it, we can examine if anything is missing or uncertain, if the connections between tasks seem doubtful or vague, or if the diagram contains practices that have led to failures in other projects. For example, a client may rightly worry that feedback to the design could lead to endless iterations with minimal improvements every time and therefore to considerable delays.
The diagram also supports improving the process through Type 2 algorithmic thinking. By tracking and evaluating progress, developing variations and measuring the graph we engage in cognitive simulation that allows all actors and stakeholders to test their assumptions and verify their expectations in an interactive manner. At a basic yet critical level, a process diagram can be used to verify the process frame, that is, if all options and consequences of each decision, all goals and constraints are present. For example, the budget cannot be absent from Figure 2: how else can we evaluate the costs of the design?
With the right frame, we can then consider improvements either by tweaking the process design or by projecting what-if and other scenarios, so as to connect better to project constraints or the perspectives of various participants. This helps anticipate problems and so prevent planning and sunk-costs fallacies. In the example of Figure 2, how long can we keep on with the iterations in a generally acceptable but not perfect situation? By evaluating the improvement achieved with each iteration, we can see if the design is reaching a plateau and take the decision to abandon or approve it, even if the costs are still higher than the budget. Alternatively, we can impose a ceiling on costs (as in Figure 5), so that the difference between costs and budget is not so big that the iterations become pointless. This nudges the design towards staying close to the budget, for example by constraining the selection of construction types and finishes.
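The plateau logic can be made concrete as a simple stopping rule. All numbers below (the budget, the sequence of cost estimates, the 1% threshold) are hypothetical, chosen only to show the structure of the decision, not taken from any real project.

```python
# Hypothetical iteration history: estimated costs after each design improvement.
budget = 1_000_000
estimates = [1_250_000, 1_120_000, 1_060_000, 1_045_000, 1_040_000]
plateau = 0.01  # stop if an iteration improves costs by less than 1%

for i in range(1, len(estimates)):
    improvement = (estimates[i - 1] - estimates[i]) / estimates[i - 1]
    if estimates[i] <= budget:
        print(f"iteration {i}: within budget, approve the design")
        break
    if improvement < plateau:
        print(f"iteration {i}: plateau reached ({improvement:.1%} improvement), "
              "decide to approve or abandon despite the cost overrun")
        break
else:
    print("no plateau yet: keep iterating")
```

With these numbers the rule fires at the fourth iteration, where the improvement drops below 1% while the estimate is still above the budget.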
Allowing for problems that could not have been anticipated is more difficult and requires awareness of Type 1 thinking limitations in both process management and the contributions of each actor. The interaction between the two is critical: the process design should stimulate reflective thinking in all respects, allowing actors to reflect not only on their own tasks but also on the whole process. Precision in the description of tasks is an important prerequisite because it stimulates meaningful questions about how and why. For instance, it can stimulate actors to question whether the cost estimate and approval in Figure 2 should be binding for the whole design project: this estimate is rather rough and relies heavily on the new design being quite similar to other buildings from which reference costs are derived. Therefore, making more detailed and precise estimations, including by refining the reference class, should be considered as soon as the design information allows it and before the difference between later, more detailed estimates and this one becomes an issue of contention in the project.
Working with analytical process diagrams is also important for skill development. In many activities, such as sports or music, improvement depends on instant feedback that triggers calibration: a wrong pass or a false note immediately call for evaluation and adjustment. In areas like AECO, there is a long latency period between taking a decision and realizing its effects — so long and so obscured by intervening events (therefore also subject to illusions of cause) that the relation between cause and effect may completely elude us. In the absence of instant feedback, it is perhaps not surprising that we opt for optimism and confidence. Cognitive simulation with process diagrams compresses time, making it easier to discern probable consequences of specific decisions before they occur, reconsider these decisions and their backgrounds, and so understand and learn.
Framing
The functions of a process diagram (providing overview, stimulating reflective thinking, supporting cognitive simulation and generally helping avoid Type 1 mistakes) require an appropriately broad frame: one that includes all relevant aspects and constraints, as well as all probable options and outcomes. In such a frame, we can easily detect dependencies between tasks and consequently take properly informed decisions at any task and prevent the spread of local mistakes to the rest of the process. However, a broad, inclusive frame goes against the conventional wisdom that reducing the complexity of a phenomenon makes it easier to understand and handle. This is something frequently done in conceptual diagrams, to the detriment of clarity. Process diagrams do not necessarily reduce the complexity of a process. Instead, they make it explicit and manageable, preventing the isolation of subproblems in narrow frames.
With respect to framing, a process diagram should go beyond stated objectives and include all things that connect directly to each task in the process, whether they occur in the process or not. In a design process, for example, the diagram should include the applicable planning regulations and possibly the local authorities behind them. The reason is that the regulations constrain many decisions on the form of the design and the local authorities may have to grant an exemption from these regulations. If an exemption is probable, the process diagram should also include feedback to the regulations. Things with an indirect relation to the process, for example the legal framework of planning regulations and the central authorities that determine it, are highly unlikely to play a role in a design process (e.g. receive direct feedback from any process task). These should not be included in the process diagram.
In the digraph of the process diagram, extraneous nodes of questionable relevance can be detected by their degree. Long subgraphs starting from a source node and having an in- and out-degree sequence of ones should be considered for exclusion because they probably describe tasks that are irrelevant to the process (Figure 5). As a rule, such peripheral subgraphs, starting from a source node, should have a length of 2 or less: one actor or stakeholder node and one task node, the latter connecting to a task that certainly belongs to the process, as with the client/brief subgraph in Figure 2. If the source is a constraint, e.g. planning regulations, and the planning authorities are not involved, then the length can be 1.
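A minimal sketch of this check, again assuming the digraph G from the first code example: walk forward from each source node for as long as the chain stays linear, and report its length.

```python
# Length of the linear chain hanging off a source node: follow arcs while the
# current node has out-degree 1 and the next node is entered only by that arc.
def chain_length(g, source):
    length, node = 0, source
    while g.out_degree(node) == 1:
        node = next(iter(g.successors(node)))
        length += 1
        if g.in_degree(node) != 1:  # the chain merges into the wider process
            break
    return length

for s in [n for n in G if G.in_degree(n) == 0]:  # source nodes
    print(s, "peripheral chain length:", chain_length(G, s))
# In the reconstructed example every chain has length 1, well within the rule.
```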
A related use of graph measures is to make certain that the core of the process, i.e. all essential tasks, is in the center of the graph, with actors, stakeholders, preparatory and external tasks in the periphery. If the center contains non-essential or peripheral nodes, the digraph probably contains extraneous nodes.
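This check is directly supported by networkx, which computes the center (nodes of minimum eccentricity) and periphery (nodes of maximum eccentricity) of a connected undirected graph. A brief sketch, again assuming the digraph G built earlier:

```python
# Center and periphery of the underlying undirected graph of G;
# essential tasks should appear in the center.
U = G.to_undirected()
print("center:", nx.center(U))
print("periphery:", nx.periphery(U))
# With the reconstructed arcs: center is design and evaluation,
# periphery is client and cost specialist, matching Table 1 below.
```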
There are two complementary ways you can measure the center and periphery of a process diagram. The first is to do it in the underlying undirected graph, i.e. the graph that is obtained if every arc is replaced by an edge (Table 1). As any process diagram is a weakly connected digraph, this returns an impression of the overall structure of the process. In this example, the center comprises the design and evaluation nodes. The client and cost specialist nodes form the periphery. The closeness measures agree with the interpretation of the center but also suggest that the architect and design approval nodes are not as central as the budget and cost estimation nodes.
Table 1. Eccentricity and closeness in the underlying undirected graph of Figure 2
| | Client | Cost specialist | Architect | Budget | Design | Cost estimation | Evaluation | Design approval | Eccentricity | Closeness |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Client | × | 4 | 2 | 1 | 2 | 3 | 2 | 3 | 4 | 0.41 |
| Cost specialist | 4 | × | 3 | 3 | 2 | 1 | 2 | 3 | 4 | 0.39 |
| Architect | 2 | 3 | × | 2 | 1 | 2 | 2 | 3 | 3 | 0.47 |
| Budget | 1 | 3 | 2 | × | 1 | 2 | 1 | 2 | 3 | 0.58 |
| Design | 2 | 2 | 1 | 1 | × | 1 | 1 | 2 | 2 | 0.70 |
| Cost estimation | 3 | 1 | 2 | 2 | 1 | × | 1 | 2 | 3 | 0.58 |
| Evaluation | 2 | 2 | 2 | 1 | 1 | 1 | × | 1 | 2 | 0.70 |
| Design approval | 3 | 3 | 3 | 2 | 2 | 2 | 1 | × | 3 | 0.44 |
The second way is to measure distances in the digraph itself (Table 2). Note that as distances are measured in dipaths, there are many pairs of nodes that do not connect to each other (the dashes in Table 2) and that the table is not symmetric with respect to the diagonal: the distance from node X to node Y is not the same as the distance from node Y to node X. The measures in our example suggest that the core of the process comprises the budget, cost estimation and evaluation nodes, with only the architect in the periphery. In terms of closeness, the evaluation node is the most central, followed by the cost estimation, while the client and the cost specialist are closer to the architect. In other words, the digraph measures give a slightly different view, more specific to the cost estimation process. Nevertheless, such differences are minute. What matters is that both tables confirm that there is nothing fundamentally wrong with the process in Figure 2.
Table 2. Eccentricity and closeness in the digraph of Figure 2
| | Client | Cost specialist | Architect | Budget | Design | Cost estimation | Evaluation | Design approval | Eccentricity | Closeness |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Client | × | – | – | 1 | 2 | 3 | 2 | 3 | 3 | 0.64 |
| Cost specialist | – | × | – | – | 3 | 1 | 2 | 3 | 3 | 0.78 |
| Architect | – | – | × | – | 1 | 2 | 3 | 4 | 4 | 0.70 |
| Budget | – | – | – | × | 1 | 2 | 1 | 2 | 2 | 1.17 |
| Design | – | – | – | – | × | 1 | 2 | 3 | 3 | 1.17 |
| Cost estimation | – | – | – | – | 2 | × | 1 | 2 | 2 | 1.40 |
| Evaluation | – | – | – | – | 1 | 2 | × | 1 | 2 | 1.75 |
| Design approval | – | – | – | – | – | – | – | × | – | – |
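Both tables can be recomputed from the reconstructed digraph. One caveat: the closeness values published here appear to follow the convention (n - 1) divided by the sum of the finite distances from a node; that convention is inferred from the numbers themselves, so the helper below states it explicitly as an assumption.

```python
# Eccentricity and closeness per node; closeness is assumed to be
# (n - 1) / sum of finite distances from the node (inferred from the tables).
def measures(graph):
    n = graph.number_of_nodes()
    for v in sorted(graph.nodes):
        dist = nx.shortest_path_length(graph, source=v)  # reachable nodes only
        finite = [d for u, d in dist.items() if u != v]
        if not finite:  # a sink, such as design approval in the digraph
            print(f"{v}: no outgoing paths")
            continue
        print(f"{v}: eccentricity={max(finite)}, closeness={(n - 1) / sum(finite):.2f}")

measures(G.to_undirected())  # cf. Table 1
measures(G)                  # cf. Table 2
```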
Linear subgraphs, like the ones in Figure 5, with a degree sequence consisting largely of ones, are suspect for two additional reasons. Firstly, they may be the result of over-analytical thinking that unnecessarily splits tasks into steps that should be combined. Secondly, the absence of feedback arcs and other cross-connections suggests a schematic interpretation of the real process that does not include all options and constraints, i.e. narrow framing. To prevent decision taking in narrow frames, it is advisable to avoid sequential procedures, consisting of tasks each involving only one or two actors or stakeholders and concerning decisions on a single issue or aspect. Instead, decisions should be combined and made by larger groups. In a design evaluation, for example, one should not first evaluate the design for compliance with planning regulations, then with the building code, then with the brief and finally with the budget. Instead, all checks should be combined in a single evaluation that also includes the relations between the criteria (Figure 6): a discrepancy with respect to the brief could be due to inescapable planning constraints, while additional costs can be incurred as a result of design decisions that achieve more than what the brief asks for. A combined evaluation therefore supports precise and effective feedback to the cause of a problem, such as a request for exemption from existing planning regulations because of the added value of an energetically innovative solution that adds to the building height.
Group processes
A frequent objection to combining decisions and evaluations is the resulting complexity of the tasks (recognizable by the high in-degree of the nodes). This objection is founded on the presumed efficiency of simple tasks with narrow frames and ignores not only the dangers of narrow framing but also the ineffectiveness and inefficiency of sequential processes, especially if complex problems are artificially parsed into long sequences. It also follows the dangerous tendency in group decision making to put consensus above true solutions. In meetings, for example, it is customary to start debating a problem immediately and try to reach a conclusion upon which all participants agree as soon as possible. This allows the most vocal participants to dictate the level and direction of the discussion. However, as we have seen, these persons may be less knowledgeable than presumed and suffer from illusions of confidence. They might therefore lead the discussion astray, while more hesitant participants hide behind them and follow their direction, creating a false feeling of rational alignment. It is recommended that, instead of initiating a meeting with an immediate debate towards general consensus, each participant should first prepare and present a separate analysis of the problem and suggestions for its solution. Informing each other in this manner is a more constructive and inclusive basis for a discussion that compares or combines different options, taking more aspects into account and considering them from more perspectives.
This approach to group decision making reflects the differences between Type 1 and Type 2 processes: for personal action, Type 1 thinking may suffice but for joint action and interpersonal communication, especially with respect to complex, partially shared goals (as in most AECO processes), Type 2 reflective processes are required. The load-bearing structure of a design can be decided in a flash of inspiration but explaining it to the other members of a design team who have to adjust to it, as well as estimating its effects (e.g. direct and indirect costs) is clearly more analytical and time consuming. It is therefore important that the process design ensures that actors arrive at combined decisions adequately prepared, with complete plans, proposals, analyses and evaluations, which are made available to all in time, before any deliberation and decision. This makes all options explicit, creating a broad frame and basing consensus on the comparison and combination of options rather than the opinions and personalities of vocal participants.
In a process diagram, this means that actors and stakeholders do not give direct input to a decision node, as the client does in the faulty example of Figure 3. That would imply a personal, possibly variable opinion, e.g. that clients change their mind about what they want without prior communication to other project participants. Instead, actors and stakeholders should connect to decisions through the products of their tasks, as the client does through the budget in Figure 2. A decision should take place by comparing the different products, e.g. a design to its brief and budget, and its outputs should include feedback to these products: if clients change their minds, this should be expressed clearly as adaptation of briefs, budgets etc. In graph terms, this means that the distance of actors and stakeholders from decision nodes should be at least 2.
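This rule, too, can be verified mechanically. In the sketch below the actor and decision lists are written out by hand, because the graph itself does not record node types; the names follow the digraph G from the earlier examples.

```python
# Actors should reach decisions only through the products of their tasks,
# i.e. at a directed distance of at least 2.
actors = ["Client", "Architect", "Cost specialist"]
decisions = ["Evaluation"]
for a in actors:
    for d in decisions:
        if nx.has_path(G, a, d):
            dist = nx.shortest_path_length(G, a, d)
            verdict = "ok" if dist >= 2 else "too direct"
            print(f"{a} -> {d}: distance {dist} ({verdict})")
```

In the reconstructed example all three actors sit at distance 2 or more from the evaluation, connecting to it through the budget, the design and the cost estimation respectively.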
In the same vein, task nodes do not output into actor or stakeholder nodes. An arc leading from e.g. a design to a client, as in the misleading diagram of Figure 3, does not mean that the design is given to the client but that a client is selected on the basis of the design — in fact, a new client, different to the one who initiated the process at the top of the diagram. A correct diagram would indicate that the design is submitted to an evaluation of its compliance to the brief, similarly to the comparison to the budget in Figure 2. The client is not directly involved in this compliance evaluation. One should not confuse the tasks in process diagrams with the social interactions that take place around the tasks. The presence of clients in meetings on brief compliance does not mean that the brief is ignored, only that the compliance evaluation is communicated directly and transparently to the clients.
The emphasis on tasks and their products in process diagrams is important for solving problems in AECO. These problems are often considered ill-defined and therefore hard to solve because there is no clear agreement on the problem, its constraints and goals. This makes it difficult to agree on solutions and take joint decisions. By making tasks and products explicit, any lack of agreement becomes clear to all and nudges them towards less fuzzy problem descriptions and procedures. There is, for example, no reason why a budget should not be specific and transparent, calculated on the basis of clear parameters that can be modified in response to the design or changing social, technological or economic conditions.
Illusions of confidence and skill
Process management is particularly sensitive to illusions of confidence concerning self-assured participants or presumed experts. A clear expression of these illusions is the black box in a process: a chunk delegated to a particular actor without clear understanding of what takes place there and with uncertain relations to the rest of the process. Such chunks are often claimed by specific disciplines, which makes the selection of project participants for those disciplines quite sensitive. Choosing them on the basis of track record is a positive development, only marred by the lack of objective and reliable performance measurements, which renders any selection even more sensitive to illusions of confidence and skill. It is quite hard to distinguish what went well in previous projects thanks to a particular actor versus other actors or contextual factors. In a successful project, practically everyone claims credit for what went well. It is even more difficult to establish what went wrong due to a specific actor, since very few are brave enough to admit responsibility. In the end, all one can tell is that some actors were involved in a project, that the project had a certain performance and that some aspects were better than others. Such vagueness does not free us from any illusion.
To safeguard a process we should therefore avoid delegating large clusters of tasks to actors or stakeholders, turning them into black boxes that are inevitably beyond control. Instead, a process description should parse activities into tasks that are as specific as possible. For example, rather than abstractly asking for a cost estimate, we should specify how the cost estimate should be made: the prerequisites to making an estimate (such as a design containing the necessary information), the method used for the estimation, the timing of the estimate relative to the rest of the process (making sure that the prerequisites can be met, as well as that the estimate is directly used) and how the estimate should be evaluated, including follow-up actions such as feedback to the design. It goes without saying that bundling the design, the cost estimation and the evaluation into a single node is unacceptable. Equally dangerous is entrusting the subgraph containing all these tasks to a single stakeholder.
A structured, analytical process is neither trivial nor insulting, even to the greatest of experts, especially if the parsing of the black boxes is based on their approach and facilitates their actions and interactions with the rest of the process in a transparent and operational framework. As for process managers, it is merely a matter of good housekeeping and discipline that amounts to feedforward (anticipating what might occur and establishing procedures for prevention, early detection and immediate action), as opposed to feedback (waiting until a problem emerges, deliberating about its significance and finding ways to resolve it). Feedback as a means of control seems inevitable in any process but feedforward greatly reduces the need for feedback and, above all, the pressures associated with it.
Narratives and coherence
A realistic diagram makes the process inclusive, empowering each participant to track progress from their own perspective and identify interactions with others. Process management benefits from inclusiveness, too, because it becomes protected from the dangers of one-sided narratives and the frictions and imbalances they can cause. Instead of having a single narrative, from the perspective of a dominant participant and possibly accepted due to a halo effect, the various actors and stakeholders can project their own narratives on the process design and so escape inside views and planning fallacies. In this way, coherence is not apparent or imposed but real and constructed by all participants together, resulting in scenarios that are realistic (i.e. scenarios that may include conflicts and lacunae but also make them clear to all) and combat probability neglect: multiple perspectives help give each risk the right consideration, so that big risks are not ignored and small risks are not given too much weight (as is often the case with inside views).
This also puts performance before compliance: rather than trying to stay within the narrative, the process is driven towards the highest goals attainable. For example, the budget in Figure 2 is based on assumptions that may remain unchallenged if all that matters is that the design conforms to the budget. If, on the other hand, these assumptions are negotiable and adaptable to suggestions by project participants and outcomes, then the process can lead to a better relation between costs and performance.
To avoid straitjacketing a process into a narrative, it is again advisable to avoid sequential process designs. Most narratives, certainly in AECO, tend to have a linear structure that imposes one narrow frame after the other on the interpretation of a problem, cutting it down into subproblems in a way that fits the coherence of the narrative but not necessarily the needs of the project. Combined tasks and parallel tracks can improve reliability, as well as effectiveness, by allowing each participant adequate scope for their activities and priorities.
Graph measures
As already mentioned, graph measures can be used to quantify indicators, checks and controls, making them easier to implement in a process diagram:
• Decision nodes should have in- and out-degrees of at least 2
• Linear, peripheral subgraphs with degree sequences of ones should have a length of at most 2
• Important tasks should be in the center
• Preparatory and external tasks should be in the periphery
• Actor and stakeholder nodes should have a distance of 2 from decision nodes
In addition to these:
• The degree of a node is a good indication of local complexity.
• A high in-degree suggests complexity in information processing and decision making at the particular task, as well as dependence on multiple actors or previous tasks. If a high in-degree indicates a collection point, e.g. different specialists coming together to compile a brief, you should organize the resulting group processes with care without splitting the node into a sequence of tasks that gives the illusion of simplicity. On the other hand, if a node denotes an abstraction or agglomeration of too many tasks (e.g. a complete stage like concept design), its high degree should prompt a more analytical and explicit description of these tasks.
• The out-degree indicates scope: how broad the effects of a task are or how widely a stakeholder is involved in the process. For example, it is expected that the products of a key task like briefing are used in many places in a process. But even if the significance of the task is low, a high out-degree means that many others may depend on its timely execution.
• Bridges indicate transitions from one part of the process to another (e.g. transition between stages), as well as connections that are sensitive to process delays or interruptions: if the transaction described by the arc does not take place, the whole process halts. A process diagram with many bridges usually describes a sequential or phased process. As processes with combined tasks and parallel tracks are preferable, a process diagram should contain as few bridges as possible. Those that remain should be strategically chosen: in the same way that a bridge in an access graph may disrupt pedestrian circulation but also presents opportunities for control, a bridge in a process diagram, even if unavoidable, should be coupled to actions that benefit the process and its management, e.g. synchronization of different aspects.
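Several of these indicators can be scripted as routine checks on a process diagram. A sketch using the digraph G from the earlier examples; the degree thresholds are illustrative, not normative.

```python
# Bridges live in the underlying undirected graph; each marks a transition
# that halts the whole process if its transaction fails to take place.
U = G.to_undirected()
print("bridges:", list(nx.bridges(U)))

# Degree-based indicators: high in-degree marks a collection point,
# high out-degree marks broad scope.
for v in G:
    if G.in_degree(v) >= 3:
        print(v, "is a collection point, in-degree", G.in_degree(v))
    if G.out_degree(v) >= 3:
        print(v, "has broad scope, out-degree", G.out_degree(v))
```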
Key Takeaways
• Processes can be described textually or diagrammatically; diagrammatic descriptions are necessary because they afford overview
• Flowcharts are digraphs that can be used to describe a process as a sequence of interconnected tasks (process diagram)
• By making tasks and connections explicit, process diagrams are a useful basis for the social side of management
• In a process diagram, each thing should be represented by a single node (uniqueness rule)
• The in- and out-degrees of decision nodes should be at least 2 (decision degrees rule)
• Process diagrams are not abstract conceptual displays: every node and arc should represent a specific actor, task or relation, and no task or relation should be missing
• Process descriptions should stimulate reflective thinking, support cognitive simulation and provide instant feedback for learning
• Graph measures help with determining an appropriately broad frame, avoiding sequential designs and identifying potential interruptions (bridges)
• Actors and stakeholders connect to decisions indirectly, through the products of their tasks
Exercises
1. Measure the degree and eccentricity of nodes, and the diameter and radius of the graph in the process diagram of Figure 6. What do these measures suggest, especially in comparison to Figure 2?
2. Expand the process diagram of Figure 6 with additional aspects, actors, tasks and feedback connections. What do the measures in the resulting graph suggest, especially in comparison to Figure 6?
3. Expand the process diagram of Figure 2 to cover all design stages, using increasingly more precise and informed cost estimates. What changes in the structure and measures of the graph? Do you observe patterns that are combined or repeated?