F(a + h, v) = F(a, v) a²/(a + h)² + 2hv/(a + h)² + o(h). Hence, rearranging and taking the limit as h → 0, we have

(2)   ∂F/∂a (a, v) = −(2/a) F(a, v) + 2v/a².

Integrating (2) using the condition F(a, a) = 1 gives

(3)   F(a, v) = (2av − v²)/a².

The densities of U and W may be found by the same method (exercise).

III: Symmetry   Suppose we pick three points independently at random on the perimeter of a circle with perimeter of length a. Then choose any of the three as origin and "unwrap" the perimeter onto (0, a). The other two points are distributed as X and Y. However, by the symmetry of the original problem, the three lengths U, V, and W have the same density. By method I, part (i), it is 2(a − z)/a².

(b) Let θ, φ, ψ be the angles subtended at the centre by the three sides of the triangle. The area of the triangle is A = ½ r²(sin θ + sin φ + sin ψ); note that this expression is still valid when an angle is obtuse. However, by part (a), each of the arc lengths rθ, rφ, and rψ has the same density 2(2πr − z)/(2πr)². Hence, θ has density 2(2π − θ)/(2π)², and

   E(A) = (3/2) r² E(sin θ) = (3/2) r² ∫₀^{2π} 2(2π − θ) sin θ/(2π)² dθ = 3r²/(2π).
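The value E(A) = 3r²/(2π) ≈ 0.477 r² is easy to check numerically. The following sketch (ours, in Python, not part of the original text) picks three independent uniform points on a circle of radius 1 and averages the triangle area via the shoelace formula; the function name and trial count are illustrative choices.

    import random, math

    def avg_triangle_area(r=1.0, trials=200_000):
        # Three uniform angles on the circle; area by the shoelace formula.
        total = 0.0
        for _ in range(trials):
            a, b, c = (random.uniform(0, 2 * math.pi) for _ in range(3))
            (x1, y1), (x2, y2), (x3, y3) = [(r * math.cos(t), r * math.sin(t)) for t in (a, b, c)]
            total += abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
        return total / trials

    print(avg_triangle_area())    # ~ 0.477
    print(3 / (2 * math.pi))      # 3r^2/(2*pi) with r = 1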
(4) Exercise   Show that the probability that U, V, and W can form a triangle is 1/4.
(5) Exercise   Find the densities of U and W by method II.
(6) Exercise   Suppose that X1, X2, ..., Xn are independently and uniformly distributed on (0, 1), with order statistics X(1), ..., X(n). What is the density of X(k+1) − X(k), 1 ≤ k ≤ n − 1?
(7) Exercise (6) Continued   What is the joint density of X(k+1) − X(k) and X(j+1) − X(j) for j ≠ k?
(8) Exercise   Two points are picked at random on the perimeter (including its diameter) of a semicircle with radius 1. Show that the expected area of the triangle they make with the midpoint of the diameter is 1/(2 + π).
(9) Exercise   Write down the joint density of U and W; then integrate to derive (1) by a fourth method.

8.13 Example: Buffon's Needle

An infinite horizontal table is marked with a rectangular grid comprising two families of distinct lines A and B. The lines of A are parallel, and the distance between neighbouring lines is 2a. All the lines of B are perpendicular to every line of A and are distance 2b apart. A thin symmetrical needle of length 2l, where l < min{a, b}, is thrown at random onto the table.

(a) Show that the probability that the needle intersects both an A-line and a B-line is

(1)   P(A ∩ B) = l²/(πab).

(b) Show that the probability that the needle intersects an A-line and does not intersect a B-line is

(2)   P(A ∩ Bᶜ) = (2bl − l²)/(πab).

[Figure 8.2, "Buffon's needle," shows the quarter rectangle Q bounded by an A-line and a B-line, with the corner regions I, II, and III marked by the quarter circle of radius l.]

Solution   The centre C of the needle must fall in some 2a × 2b rectangle R whose sides are A-lines and B-lines. The words "at random" mean that the centre is uniformly distributed over R, and the angle Θ that the needle makes with any fixed line is uniformly distributed. By symmetry, we can suppose that C lies in one quarter of R, namely, the a × b rectangle Q, and also that 0 ≤ θ < π. That is to say, writing x for the distance from C to the nearest B-line and y for the distance to the nearest A-line, we assume that C = (X, Y) and Θ are jointly uniform on {0 ≤ x < b} × {0 ≤ y < a} × {0 ≤ θ < π} with joint density (πab)⁻¹.
(i) Now consider Figure 8.2. The needle can intersect both A and B only when C = (x, y) lies in the positive quadrant of the circle of radius l centred at the origin (region I). If the angle it makes with OB lies between ±sin⁻¹(x/l), then it cuts only OA. Likewise, the needle cuts only OB if it lies within the angle −sin⁻¹(y/l) < Θ < sin⁻¹(y/l). Therefore, when X = x > 0, Y = y > 0, and x² + y² ≤ l², the probability of two intersections is (π − 2 sin⁻¹(x/l) − 2 sin⁻¹(y/l))/π. Hence,

(3)   P(A ∩ B) = (1/(πab)) ∫∫ (π − 2 sin⁻¹(x/l) − 2 sin⁻¹(y/l)) dx dy,

where the integral is over x > 0, y > 0, x² + y² ≤ l². Now

   ∫₀^l (l² − x²)^{1/2} sin⁻¹(x/l) dx = ∫₀^{π/2} l²θ cos²θ dθ,   with the obvious substitution,
   = (π²/16 − 1/4) l².

Hence, substituting into (3) gives

   P(A ∩ B) = (1/(πab)) [l²π²/4 − 4l²(π²/16 − 1/4)] = l²/(πab).

(ii) For P(A ∩ Bᶜ), we examine Figure 8.2 again. First, if C is in region I, then the needle cuts A and not B if Θ lies in an angle 2 sin⁻¹(x/l), as we remarked above. Second, if C lies in regions II or III (that is, 0 ≤ y < l, but x² + y² > l²), then the needle cuts A and not B if it lies in an angle of size 2 cos⁻¹(y/l). Hence,

   πab P(A ∩ Bᶜ) = ∫∫_I 2 sin⁻¹(x/l) dx dy + ∫∫_{II∪III} 2 cos⁻¹(y/l) dx dy
   = ∫₀^l 2(l² − x²)^{1/2} sin⁻¹(x/l) dx + ∫₀^l (b − (l² − y²)^{1/2}) 2 cos⁻¹(y/l) dy
   = ∫₀^{π/2} 2l²θ(cos²θ − sin²θ) dθ + ∫₀^{π/2} 2blθ sin θ dθ
   = −l² + 2bl = 2bl − l²,   as required.

(4) Exercise   Show that the probability that the needle intersects no line of the grid is 1 − 2l/(πb) − 2l/(πa) + l²/(πab). (Do this in two ways, one of which is an integral.)
(5) Exercise   Suppose the table is marked with only one set of parallel lines, each distance 2a from its next neighbour. Show that the probability that a needle of length 2l < 2a intersects a line is 2l/(πa). (Do this two ways also.)
(6) Exercise   Consider the problem of Exercise 5 when 2l > 2a. Show that the probability of an intersection is
   (2/π) cos⁻¹(a/l) + (2l/(πa))(1 − (1 − a²/l²)^{1/2}).
(7) Exercise   Suppose (instead of a needle) you roll a penny of radius l onto the grid of A-lines and B-lines. What is the probability that when it topples over it intersects a line? When is this an evens chance?
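Part (a) is easy to check by direct simulation. The sketch below (ours, in Python) drops the needle's centre uniformly on the quarter rectangle Q, with x the distance to the nearest B-line and y the distance to the nearest A-line as in the solution; the parameter values are arbitrary assumptions satisfying l < min{a, b}.

    import random, math

    def buffon_both(a=1.0, b=1.5, l=0.5, trials=500_000):
        # Centre uniform on the quarter rectangle, angle uniform on [0, pi).
        hits = 0
        for _ in range(trials):
            x, y = random.uniform(0, b), random.uniform(0, a)
            th = random.uniform(0, math.pi)
            cuts_A = l * abs(math.sin(th)) >= y   # reaches the nearest A-line
            cuts_B = l * abs(math.cos(th)) >= x   # reaches the nearest B-line
            hits += cuts_A and cuts_B
        return hits / trials

    print(buffon_both())                      # Monte Carlo estimate
    print(0.5**2 / (math.pi * 1.0 * 1.5))     # l^2/(pi*a*b) ~ 0.053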
8.14 Example: Targets

(a) Let (Xi; 1 ≤ i ≤ 2n + 1) be independently and uniformly distributed over (−1, 1), and let Yn = X(n+1), so that Yn is the sample median of the Xi. Find the density of Yn, and hence evaluate the integral ∫₀¹ (1 − x²)ⁿ dx.
(b) Now n shots hit a circular target. The points of impact are independently and uniformly distributed over the circle. Let Zn be the radius of the largest circle concentric with the target that includes no hit. Find E(Zn).

Solution   (a) First note that the uniform distribution on (−1, 1) is F(x) = ½(1 + x). Now let Ak be the event that X(n+1) = Xk; this occurs of course if n of the Xi are greater than Xk, and the remaining n are less than Xk. Then

   P(Yn ≤ y) = Σ_{k=1}^{2n+1} P(Yn ≤ y ∩ Ak)
   = (2n + 1) P(Yn ≤ y ∩ A1),   by symmetry,
   = (2n + 1) ∫_{−1}^y f_{X1}(y) C(2n, n) ((1 + y)/2)ⁿ (1 − (1 + y)/2)ⁿ dy,   by conditional probability.

Hence, Yn has density f_Y(y) = ((2n + 1)!/(n!)²)((1 − y²)ⁿ/2^{2n+1}). Because this is a density, its integral over (−1, 1) is unity, so

   ∫₀¹ (1 − y²)ⁿ dy = 2^{2n}(n!)²/(2n + 1)!.

[Alternatively, you could write down the density f_Y(y) using the known density of order statistics.]

(b) Let Ri be the distance of the ith hit from the centre of the target. Because hits are uniform, P(Ri ≤ x) = x² for 0 ≤ x ≤ 1. Obviously,

   P(Zn > x) = P(Ri > x for all i) = Π_{i=1}^n P(Ri > x),   by independence,
   = (1 − x²)ⁿ.

Hence,

   E(Zn) = ∫₀¹ P(Zn > x) dx = ∫₀¹ (1 − x²)ⁿ dx = 2^{2n}(n!)²/(2n + 1)!.

(1) Exercise   Find var(Yn).
(2) Exercise   Let An be the area of the smallest circle concentric with the target that includes all the hits. Find E(An).
(3) Exercise   The hit furthest from the centre of the target is deleted. What now is the expected area of the smallest circle concentric with the target that includes all the remaining hits?
(4) Exercise   Let Rn be the distance of the furthest hit from the centre of the target. Show that, as n → ∞, P(n(1 − Rn) ≤ x) → 1 − e^{−2x}.
8.15 Example: Gamma Densities

Let X and Y be independent, having gamma distributions with parameters {α, λ} and {β, λ}, respectively.
(a) Find the joint density of U = X + Y and V = X/(X + Y).
(b) Deduce that E(X/(X + Y)) = E(X)/(E(X) + E(Y)).
(c) What is the density of V?

Solution   (a) We use the change of variables technique. The transformation u = x + y, v = x/(x + y), for x, y > 0, is a one–one map of the positive quadrant onto the strip 0 < v < 1, u > 0, with inverse x = uv and y = u(1 − v). Hence, J = u, and by Theorem 8.2.1, U and V have joint density

(1)   f(u, v) = (λ^α λ^β/(Γ(α)Γ(β))) (uv)^{α−1} (u(1 − v))^{β−1} e^{−λuv} e^{−λu(1−v)} u
(2)   = c₁ u^{α+β−1} e^{−λu} · c₂ v^{α−1}(1 − v)^{β−1},

where c₁ and c₂ are constants. Hence, U and V are independent, as f(u, v) has factorized.

(b) Using the independence of U and V gives

   E(X) = E(UV) = E(U)E(V) = (E(X) + E(Y)) E(X/(X + Y)),

as required.

(c) A glance at (2) shows that V has the beta density with parameters α and β.

(3) Exercise   Show that ∫₀¹ x^{α−1}(1 − x)^{β−1} dx = Γ(α)Γ(β)/Γ(α + β).
(4) Exercise   Show that Γ(1/2) = π^{1/2}.
(5) Exercise   Let the random variable Z have density c(1 + x²)^{−m}, m > 1/2, −∞ < x < ∞. Show that c⁻¹ = π^{1/2} Γ(m − 1/2)/Γ(m).
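The factorization in (2) can be checked empirically: sampled values of U = X + Y and V = X/(X + Y) should be uncorrelated, with E(U) = (α + β)/λ and E(V) = α/(α + β). A minimal sketch (ours, in Python), using the standard library's random.gammavariate; the parameter values are arbitrary.

    import random

    def gamma_uv_sample(alpha=2.0, beta=3.0, lam=1.0, n=100_000):
        # Sample (U, V) and report means and the sample covariance.
        us, vs = [], []
        for _ in range(n):
            x = random.gammavariate(alpha, 1 / lam)   # second argument is the scale
            y = random.gammavariate(beta, 1 / lam)
            us.append(x + y)
            vs.append(x / (x + y))
        mu, mv = sum(us) / n, sum(vs) / n
        cov = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / n
        return mu, mv, cov

    print(gamma_uv_sample())   # E(U) = 5, E(V) = 2/5, covariance ~ 0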
8.16 Example: Simulation – The Rejection Method

(a) Let U and X be independent random variables such that U is uniform on (0, 1) and X has density f_X(x). Suppose that the function f_S(x), where ∫_{−∞}^∞ f_S(x) dx = 1, satisfies

(1)   0 ≤ f_S(x) ≤ a f_X(x)   for all x,

for some constant a. Show that

(2)   P(X ≤ x | aU f_X(X) ≤ f_S(X)) = ∫_{−∞}^x f_S(y) dy.

(b) Explain how this result may be used to produce realizations of a random variable Z with density f_S(z).

Solution   (a) By conditional probability,

(3)   P(X ≤ x | aU f_X(X) ≤ f_S(X)) = P(X ≤ x, aU f_X(X) ≤ f_S(X))/P(aU f_X(X) ≤ f_S(X))
   = ∫_{−∞}^x P(aU f_X(x) ≤ f_S(x)) f_X(x) dx / ∫_{−∞}^∞ P(aU f_X(x) ≤ f_S(x)) f_X(x) dx
   = ∫_{−∞}^x (f_S(x)/(a f_X(x))) f_X(x) dx / ∫_{−∞}^∞ (f_S(x)/(a f_X(x))) f_X(x) dx,   by (1),
   = ∫_{−∞}^x f_S(x) dx.

(b) Suppose we have independent realizations of U and X. Then the above equation says that, conditional on the event A = {aU f_X(X) ≤ f_S(X)}, X has density f_S(x). In familiar notation, we have f_{X|A}(x) = f_S(x). Now suppose we have a sequence (U_k, X_k; k ≥ 1) of random variables that have the same distributions as (U, X). For every pair (U_k, X_k) for which A occurs, the random variable X_k has density f_S(x), and we can write Z = X_k. Then Z has density f_S(z).

Remark   It is implicit in the question that we want a random variable with density f_S(x), and so any pair (U_k, X_k) for which Aᶜ occurs is rejected. This explains the title of the example (although in the circumstances you might think a better title would be the conditional method). Obviously, this offers a method for simulating random variables with an arbitrary density f_S(x), subject only to the constraint that we have to be able to simulate X with density f_X(x) that satisfies (1).
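Here is a concrete instance of part (b), anticipating Exercise 5 below (the sketch is ours, in Python). To simulate the half-normal density f_S(x) = (2/π)^{1/2} e^{−x²/2}, x > 0, take f_X(x) = e^{−x}; the smallest constant satisfying (1) is a = (2e/π)^{1/2}, and the test aU f_X(X) ≤ f_S(X) simplifies to U ≤ exp(−(X − 1)²/2).

    import random, math

    def half_normal():
        # Rejection method with exponential proposal f_X(x) = e^{-x}, x > 0,
        # and target f_S(x) = sqrt(2/pi) exp(-x^2/2), x > 0.
        while True:
            x = random.expovariate(1.0)                         # proposal draw
            if random.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
                return x                                        # accepted

    sample = [half_normal() for _ in range(100_000)]
    print(sum(sample) / len(sample))   # half-normal mean: sqrt(2/pi) ~ 0.798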
(4) Exercise   Find the mass function and mean of the number N of pairs (U_k, X_k) that are rejected before the first occasion on which A occurs. What does this imply about a?
(5) Exercise   If X is exponential with parameter 1, show that (2) takes the form P(X ≤ x | aU₁U₂ ≤ f_S(X)) = F_S(x), where U₁ and U₂ are independent and uniform on (0, 1). Hence, describe how you would simulate a random variable with density f_S(x) = (2/π)^{1/2} e^{−x²/2}, x > 0.
(6) Exercise   Let U₁ and U₂ be independent and uniform on (0, 1). Let X = −log U₁ and Y = −log U₂. What is the density of X conditional on Y > ½(X − 1)²?

8.17 Example: The Inspection Paradox

Let N(t) be a Poisson process, and at each time t > 0 define C(t) to be the time since the most recent event. (This is called the current life or age.) Further, define B(t) to be the time until the next event (this is called the balance of life or excess life). Show that B(t) and C(t) are independent, and find the distribution of C(t). What is E(B + C)? [Note: By convention, if N(t) = 0, we set C(t) = t.]

Solution   Recall that we used the conditional property of the Poisson process to show that N(t) has independent increments. Now

   P(B(t) > y, C(t) > z) = P(N(t + y) − N(t) = 0, N(t) − N(t − z) = 0)
   = P(N(t + y) − N(t) = 0) P(N(t) − N(t − z) = 0),   by the independence of increments,
   = P(B(t) > y) P(C(t) > z).
Furthermore, we showed that N(t) − N(t − z) has the same distribution as N(z), for t and t − z both nonnegative. Hence,

(1)   P(C(t) > z) = e^{−λz}   for 0 ≤ z < t,   with P(C(t) = t) = e^{−λt}.

Likewise,

   P(B(t) > y) = 1 for y < 0,   and   P(B(t) > y) = e^{−λy} for y > 0.

Hence,

(2)   E(B + C) = 1/λ + ∫₀^t λz e^{−λz} dz + t e^{−λt} = 2/λ − (1/λ) e^{−λt}.

Remark   If we suppose N(t) is the number of renewals of (say) light bulbs, then (2) says that the expected life of the light bulb inspected at time t is 2/λ − (1/λ)e^{−λt}, which is greater than the expected life of a randomly selected light bulb, namely 1/λ. It may seem as though we make light bulbs last longer by inspecting them; this is the "paradox." Of course, this is not so; it is just that if you only look once, you are more likely to see a longer-lived light bulb. This is related to other sampling paradoxes mentioned previously; see, for example, "congregations."

(3) Exercise: The Markov Property   Show that for any t₁ < t₂ < ··· < tₙ, the process N(t) has the so-called Markov property:
   P(N(tₙ) = jₙ | N(tₙ₋₁) = jₙ₋₁, ..., N(t₁) = j₁) = P(N(tₙ) = jₙ | N(tₙ₋₁) = jₙ₋₁).
(4) Exercise: The Shower Problem   Your telephone is called at the instants of a Poisson process with parameter λ. Each day you take a shower of duration Y starting at time X, where X and Y are jointly distributed in hours (and not independent). Show that the number of times that the telephone is called while you are in the shower has a Poisson distribution with parameter λE(Y). (Assume 0 ≤ X ≤ X + Y ≤ 24.)
(5) Exercise   Aesthetes arrive at a small art gallery at the instants of a Poisson process of parameter λ. The kth arrival spends a time X_k in the first room and Y_k in the second room, and then leaves. The random variables X_k and Y_k are not independent, but (X_k, Y_k) is independent of (X_j, Y_j) for j ≠ k. At time t, let R₁ and R₂ be the numbers of aesthetes in the respective rooms. Show that R₁ and R₂ are independent Poisson random variables.
(6) Exercise   Find cov(N(s), N(t)) and the correlation ρ(N(s), N(t)).
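The identity E(B + C) = 2/λ − (1/λ)e^{−λt} in (2) is easily checked by simulation. A sketch (ours, in Python); the values of λ and t are arbitrary.

    import random, math

    def current_plus_excess(lam=1.0, t=5.0, trials=100_000):
        # Simulate arrivals past time t and measure B(t) + C(t),
        # with C(t) = t when there is no event in [0, t].
        total = 0.0
        for _ in range(trials):
            s, last = 0.0, 0.0
            while True:
                s += random.expovariate(lam)
                if s > t:
                    break
                last = s
            total += (s - t) + (t - last)    # B(t) + C(t)
        return total / trials

    lam, t = 1.0, 5.0
    print(current_plus_excess(lam, t))                  # simulated E(B + C)
    print(2 / lam - math.exp(-lam * t) / lam)           # ~ 1.9933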
8.18 Example: von Neumann's Exponential Variable

Let the sequence of random variables X₁, X₂, X₃, ... be independent and identically distributed with density f and distribution F. Define the random variable R by

(1)   R = min{n: X_{n−1} < X_n}.

(a) Show that P(R = r) = (r − 1)/r!, and that
   P(X_R ≤ x) = exp(1 − F(x)) − e(1 − F(x)).
(b) Now let X_n be uniformly distributed on (0, 1) for all n. Show that

(2)   P(X₁ ≤ x; R = r) = x^{r−1}/(r − 1)! − x^r/r!.

Deduce that

(3)   P(X₁ ≤ x | R is even) = (1 − e^{−x})/(1 − e^{−1}).

Finally, define a random variable V as follows. A sequence X₁, X₂, ..., X_R is a "run"; it is "odd" if R is odd, otherwise it is "even." Generate runs until the first even run, and then let V equal the number N of odd runs plus X₁ in the even run. Show that V has density e^{−v} for v > 0.

Solution   (a) Let X(1) ≤ X(2) ≤ ··· ≤ X(r) be the order statistics of (X₁, ..., X_r). By symmetry, (X₁, ..., X_r) is equally likely to be any one of the r! permutations of (X(1), ..., X(r)). For the r − 1 permutations of the form
(X(r), ..., X(k+1), X(k−1), ..., X(1), X(k)),   2 ≤ k ≤ r,

the event R = r occurs, and for no others. Hence,

   P(R = r) = (r − 1)/r!.

The above remarks also show that

   P(X_R ≤ x) = Σ_{r=2}^∞ (1/r!) Σ_{k=2}^r P(X(k) ≤ x)
   = Σ_{r=2}^∞ (1/r!) Σ_{k=2}^r Σ_{j=k}^r C(r, j) (F(x))^j (1 − F(x))^{r−j},   by (8.7.12),
   = Σ_{r=2}^∞ (1/r!) (r F(x) − 1 + (1 − F(x))^r)
   = e(F(x) − 1) + exp(1 − F(x)),   on summing the series.

It is easy to check that this is continuous and nondecreasing as x increases, and differentiation gives the density of X_R:

   f_{X_R}(x) = e f(x)(1 − exp(−F(x))).

(b) Now observe that the event {R > r} ∩ {X₁ ≤ x} occurs if and only if X_k ≤ x for 1 ≤ k ≤ r (which has probability (F(x))^r) and X₁ ≥ X₂ ≥ X₃ ≥ ··· ≥ X_r (which has probability 1/r!). Hence, when X_k is uniform on (0, 1),

   P(R > r; X₁ ≤ x) = x^r/r!,

and so

   P(R = r; X₁ ≤ x) = x^{r−1}/(r − 1)! − x^r/r!.

Now, summing over even values of R, we have P(X₁ ≤ x, R is even) = 1 − e^{−x}, and hence

   P(X₁ ≤ x | R is even) = (1 − e^{−x})/(1 − e^{−1}).

This shows that P(R is even) = 1 − e^{−1}, and so, by independence of runs, N is a geometric random variable with mass function P(N = n) = e^{−n}(1 − e^{−1}) for n ≥ 0. Finally, let us denote X₁ in the even run by X₀. Then, from (3),

   P(X₀ > x) = 1 − (1 − e^{−x})/(1 − e^{−1}).
Hence,

(4)   P(V > v) = P(N ≥ [v] + 1) + P(N = [v]; X₀ > v − [v])
   = e^{−[v]−1} + (1 − e^{−1}) e^{−[v]} (1 − (1 − e^{−v+[v]})/(1 − e^{−1}))
   = e^{−v},   for 0 < v < ∞.

Thus, V is exponentially distributed.

Remark   This method of generating exponential random variables from uniform ones was devised by von Neumann in 1951. Notice that it is computationally economical, in that it is necessary to store only the number of odd runs to date and the first X₁ in the run in progress. Also, the expected number of uniform random variables used for each exponential random variable is small. Since the original result, the method has been extended to generate other continuous random variables from uniform random variables.

(5) Exercise   What is the density of X_{R−1}?
(6) Exercise: Bad Luck   As above, X₁, X₂, ... are independent and identically distributed with density f and distribution F. Define T = min{n: X_n > X₁}. Find P(T = n), and show that T has infinite expectation. Show that X_T has distribution
   F_{X_T}(x) = F(x) + (1 − F(x)) log(1 − F(x)).
(7) Exercise   Explain why the above exercise is entitled "Bad Luck."
(8) Exercise   Use the result of Exercise 6 to show that when X₁ has an exponential distribution, X_T has a gamma distribution. Why is this obvious without going through the analysis of Exercise 6?
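Von Neumann's procedure translates directly into code. The sketch below (ours, in Python) stores only the count of odd runs and the current run's first value, as the Remark notes; the sample mean should be near 1 for an exponential variable of parameter 1.

    import random

    def von_neumann_exponential():
        odd_runs = 0
        while True:
            first = x = random.random()
            run_length = 1                 # length of the decreasing prefix
            while True:
                y = random.random()
                if y > x:                  # first rise: here R = run_length + 1
                    break
                x = y
                run_length += 1
            if run_length % 2 == 1:        # R even, i.e. an "even" run
                return odd_runs + first
            odd_runs += 1

    sample = [von_neumann_exponential() for _ in range(100_000)]
    print(sum(sample) / len(sample))       # ~ 1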
8.19 Example: Maximum from Minima

Let X₁, ..., X_n be a collection of nonnegative random variables with finite expected values. Show that

(1)   E(max_j X_j) = Σ_j E(X_j) − Σ_{j<k} E min(X_j, X_k) + Σ_{i<j<k} E min(X_i, X_j, X_k) − ··· + (−1)^{n+1} E min_j X_j.

Solution   Let I_j = I(X_j > x) be the indicator of the event that X_j > x. Then for all x we have I_j I_k = I(min(X_j, X_k) > x), and so on for any product. Furthermore, for all x, we see by inspection that

   I(max_j X_j > x) = 1 − Π_{j=1}^n (1 − I_j)
   = Σ_j I(X_j > x) − Σ_{j<k} I(X_j ∧ X_k > x) + ··· + (−1)^{n+1} I(min_j X_j > x).

Now taking expectations gives

   P(max_j X_j > x) = Σ_j P(X_j > x) − Σ_{j<k} P(X_j ∧ X_k > x) + ···

Finally, integrating over [0, ∞) and recalling the tail integral for expectation, Theorem 7.4.11, gives the result.
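The identity is purely combinatorial once the indicators are in place, and it holds verbatim for fixed real numbers (Exercise 1 below). A brute-force check over all nonempty subsets (ours, in Python):

    from itertools import combinations

    def max_via_minima(xs):
        # Inclusion-exclusion: max = sum over nonempty subsets S of
        # (-1)^(|S|+1) * min(S).
        total = 0.0
        for r in range(1, len(xs) + 1):
            sign = 1 if r % 2 == 1 else -1
            total += sign * sum(min(c) for c in combinations(xs, r))
        return total

    xs = [3.2, 0.7, 5.1, 4.4]
    print(max_via_minima(xs), max(xs))   # both 5.1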
(1) Exercise   Deduce that for any collection of real numbers (x₁, ..., xₙ), we have
   max_j x_j = Σ_j x_j − Σ_{j<k} x_j ∧ x_k + ··· + (−1)^{n+1} min_j x_j.
(2) Exercise: Coupon Collecting (Example 5.3.3) Revisited   Suppose that at each transaction you acquire a coupon of the ith type with probability c_i, 1 ≤ i ≤ n, where Σ_i c_i = 1 and c_i > 0. Show that the expected number of coupons collected until you first have at least one of every type is
   Σ_j 1/c_j − Σ_{j<k} 1/(c_j + c_k) + Σ_{i<j<k} 1/(c_i + c_j + c_k) − ··· + (−1)^{n+1} 1/(Σ_i c_i).
(3) Exercise   A wood contains n birds of r species, with b_i of the ith species, b_i > 0, so that Σ_{i=1}^r b_i = n. You catch and cage birds at random one by one. Let B be the expected number of captures until you have at least one of every species, and C the expected number of captures until you have caged all the representatives of some one of the species. Show that
   B = (n + 1)(Σ_j 1/(b_j + 1) − Σ_{j<k} 1/(b_j + b_k + 1) + ··· + (−1)^{r+1}/(b₁ + ··· + b_r + 1)),
and C = n − B + 1.
(4) Exercise   A complex machine contains r distinct components, and the lifetime to failure of the ith component is an exponential random variable with parameter a_i, independent of all the rest. On failing, it is replaced by one with identical and independent properties. Find an expression for the expected time until every distinct component has failed at least once.

8.20 Example: Binormal and Trinormal

Let U and V be independent random variables having the standard normal N(0, 1) density. Set

(1)   X = U,   Y = ρU + (1 − ρ²)^{1/2} V;   |ρ| < 1.

Show that the pair X, Y has the standard bivariate normal density. Deduce that

   P(X > 0, Y > 0) = 1/4 + (1/2π) sin⁻¹ ρ.

Solution   The joint density of U and V is f(u, v) = (2π)⁻¹ exp(−½(u² + v²)), and the inverse transformation is U = X, V = (Y − ρX)/(1 − ρ²)^{1/2}. Hence, by Theorem 8.2.1, X and Y have joint density

   f(u(x, y), v(x, y)) |J| = (1/(2π(1 − ρ²)^{1/2})) exp(−(x² − 2ρxy + y²)/(2(1 − ρ²))),

which is indeed the standard bivariate normal density. Using this, and recalling Example 8.3.8, we find

   P(X > 0, Y > 0) = P(U > 0, ρU + (1 − ρ²)^{1/2} V > 0)
   = P(−tan⁻¹(ρ/(1 − ρ²)^{1/2}) < tan⁻¹(V/U) < π/2, U > 0)
   = (1/(2π))(π/2 + tan⁻¹(ρ/(1 − ρ²)^{1/2}))
   = 1/4 + (1/(2π)) sin⁻¹ ρ,

where we use the well-known fact that sin θ = ρ ⇔ cos θ = (1 − ρ²)^{1/2} ⇔ tan θ = ρ/(1 − ρ²)^{1/2}.
(2) Exercise   Show that aX + bY has the normal density N(0, a² + 2ρab + b²).
(3) Exercise   Show that X and Y are independent if and only if cov(X, Y) = 0.
(4) Exercise   Show that
   E(Y | aX + bY = z) = (b² + ρab) z/(b(a² + 2ρab + b²))
and
   var(Y | aX + bY = z) = a²(1 − ρ²)/(a² + 2ρab + b²).
(5) Exercise   Let X, Y, and Z have the standard trivariate normal density defined in (8.10). Show that
   P(X > 0, Y > 0, Z > 0) = 1/8 + (1/4π)(sin⁻¹ ρ₁₂ + sin⁻¹ ρ₂₃ + sin⁻¹ ρ₃₁).
Express X, Y, and Z in terms of three independent N(0, 1) random variables U, V, and W. Hence, deduce that
   E(Z | X, Y) = ((ρ₃₁ − ρ₁₂ρ₂₃)X + (ρ₂₃ − ρ₁₂ρ₃₁)Y)/(1 − ρ₁₂²)
and
   var(Z | X, Y) = (1 − ρ₁₂² − ρ₂₃² − ρ₃₁² + 2ρ₁₂ρ₂₃ρ₃₁)/(1 − ρ₁₂²).
(6) Exercise   Do Example 8.11 again, this time using the representation in (1) for X and Y.
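The quadrant probability 1/4 + (1/2π)sin⁻¹ρ is easy to check by Monte Carlo, using exactly the representation (1). A sketch (ours, in Python); ρ and the number of trials are arbitrary choices.

    import random, math

    def quadrant_prob(rho=0.6, trials=500_000):
        # X = U, Y = rho*U + sqrt(1 - rho^2)*V with U, V independent N(0, 1).
        s = math.sqrt(1 - rho * rho)
        hits = 0
        for _ in range(trials):
            u, v = random.gauss(0, 1), random.gauss(0, 1)
            hits += (u > 0) and (rho * u + s * v > 0)
        return hits / trials

    rho = 0.6
    print(quadrant_prob(rho))                        # Monte Carlo estimate
    print(0.25 + math.asin(rho) / (2 * math.pi))     # ~ 0.3524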
8.21 Example: Central Limit Theorem

Let (X_n; n ≥ 1) be a collection of independent Poisson random variables with parameter 1. By applying the central limit theorem to the X_n, prove Stirling's formula:

   lim_{n→∞} √n e^{−n} nⁿ/n! = (2π)^{−1/2}.

[You may assume without proof that the convergence of the sequence of distributions to Φ(x) is uniform in x on finite intervals including 0.]

Solution   Let S_n = Σ_{r=1}^n X_r. Then S_n is Poisson with parameter n, mean n, and variance n. Thus, P(S_n = n) = e^{−n}nⁿ/n!, and we may write

(1)   √n e^{−n} nⁿ/n! = √n P(S_n = n) = √n P(n − 1 < S_n ≤ n)
   = √n P(−1/√n < (S_n − n)/√n ≤ 0)
   = √n (F_n(0) − F_n(−1/√n)),

where F_n is the distribution function of (S_n − n)/√n, and by the central limit theorem, we have

(2)   F_n(x) → Φ(x)   as n → ∞.

Furthermore, because Φ′(x) = φ(x), we have, as n → ∞,

(3)   √n (Φ(0) − Φ(−1/√n)) → φ(0) = 1/√(2π).

Because the convergence in (2) is uniform on finite intervals, we may let n → ∞ in (1) and use (3) to yield the result.

(4) Exercise   Apply the central limit theorem to the same family (X_n; n ≥ 1) of Poisson random variables to show that
   lim_{n→∞} e^{−n}(1 + n + n²/2! + ··· + nⁿ/n!) = 1/2.
(5) Exercise   It is said that D. Hagelbarger built a machine to predict whether a human coin-flipper would call heads or tails. In 9795 flips, the machine was correct on 5218 occasions. What is the probability of doing at least this well by chance? (The flipped coin was known to be fair by all involved.)
(6) Exercise   An aeroplane has 120 seats and is full. There are 120 inflight meals, of which 60 are fish and 60 are pasta. Any passenger, independently of the rest, prefers pasta with probability 0.55, or prefers fish with probability 0.45. Show that the probability that 10 or more passengers will not get their first choice is approximately 0.234. [You are given that Φ(0.734) ≈ 0.7676 and Φ(2.94) ≈ 0.9984.]
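The convergence in Stirling's formula is slow but visible numerically. The following sketch (ours, in Python) evaluates √n e^{−n}nⁿ/n! through logarithms, using math.lgamma(n + 1) = log n! to avoid overflow.

    import math

    # sqrt(n) * e^{-n} * n^n / n!  should approach (2*pi)^{-1/2} = 0.39894...
    for n in (10, 100, 1000):
        log_term = 0.5 * math.log(n) - n + n * math.log(n) - math.lgamma(n + 1)
        print(n, math.exp(log_term))
    print((2 * math.pi) ** -0.5)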
8.22 Example: Poisson Martingales

Suppose that (N(t); t ≥ 0) is a collection of nonnegative integer-valued random variables such that N(s) ≤ N(t) for all 0 ≤ s ≤ t < ∞, and

   W(t) = exp{−θN(t) + λt(1 − e^{−θ})}

is a martingale. Show that N(t) is a Poisson process.

Solution   For s < t,

   E(exp(−θ(N(t) − N(s))) | N(u); 0 ≤ u ≤ s)
   = E((W(t)/W(s)) exp(−λ(t − s)(1 − e^{−θ})) | N(u); 0 ≤ u ≤ s)
   = exp(−λ(t − s)(1 − e^{−θ}))

because W(t) is a martingale. As this does not depend on N(u), 0 ≤ u ≤ s, it follows that N(t) has independent increments. Furthermore, we recognise the final expression as the moment generating function of a Poisson random variable with parameter λ(t − s). Hence, N(t) is a Poisson process.

(1) Exercise   Let N(t) be a Poisson process with parameter λ. Show that W(t) defined above is a martingale.
(2) Exercise   Let N(t) be a Poisson process with parameter λ, and N(0) = 0. Let T = min{t: N(t) = a}, where a is a positive integer. Use the optional stopping theorem to show that
(a) E(T) = a/λ.
(b) var(T) = a/λ².
(c) Find E(e^{−sT}). (Hint: Recall the martingales of Example 8.8.17.)
(3) Exercise: Integrated Poisson Process   Let N(t) be a Poisson process with parameter λ. Show that
   ∫₀^t N(u) du − (1/2λ)N(t)² + (1/2λ)N(t)
is a martingale. If T = min{t: N(t) = a}, where a is a positive integer, deduce that
   E ∫₀^T N(u) du = a(a − 1)/(2λ).

8.23 Example: Uniform on the Unit Cube

Let X, Y, and Z be independent, identically distributed, and uniform on [0, 1]. Show that W = (XY)^Z is also uniform on (0, 1).

Solution   First, we recall from Example 7.2.2 that the random variable U has a uniform density on (0, 1) if and only if −log U has an exponential density on (0, ∞). Hence, taking logarithms and using Example 8.5.15 and Theorem 8.6.11,
we have

   E exp(−t log W) = E exp{tZ(−log X − log Y)}
   = E{E{exp(tZ(−log X − log Y)) | Z}}
   = E(1/(1 − tZ)²),   since −log X and −log Y are exponential,
   = ∫₀¹ dz/(1 − tz)² = 1/(1 − t).

Hence, by Example 7.5.4, −log W has an exponential density, so that W has a uniform density on (0, 1), by the remark above.

Remark   In the following exercises, X, Y, and Z are independent and uniform on (0, 1), with order statistics X(1), X(2), X(3).

(1) Exercise   Let S = X + Y + Z. Show that S has density
   f_S(s) = s²/2 for 0 ≤ s < 1;   −s² + 3s − 3/2 for 1 ≤ s < 2;   s²/2 − 3s + 9/2 for 2 ≤ s ≤ 3.
(2) Exercise   Regarding X, Y, and Z as three points on the unit interval, show that the probability that the distance between every two of them is at least d is (1 − 2d)³, when d < 1/2.
(3) Exercise   Show that the probability that Xt² + Yt + Z = 0 has two real roots is 5/36 + (1/6) log 2.
(4) Exercise   Show that the probability that Xt³ − 3Yt + Z = 0 has three real roots is 1 − (9/20) r⁻¹, where r = 4^{1/3}.
(5) Exercise   Show that the joint density of V = X(1)/X(2) and W = X(2)/X(3) is f(v, w) = 2w for 0 ≤ v, w ≤ 1. Deduce that V and W are independent.
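A quick empirical check of the result (ours, in Python): the empirical quantiles of a large sample of W = (XY)^Z should match those of the uniform density.

    import random

    def sample_w():
        x, y, z = random.random(), random.random(), random.random()
        return (x * y) ** z

    n = 200_000
    ws = sorted(sample_w() for _ in range(n))
    for q in (0.1, 0.25, 0.5, 0.75, 0.9):
        print(q, ws[int(q * n)])   # each empirical quantile should be close to q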
8.24 Example: Characteristic Functions

For any random variable X, define the function φ(t) = E e^{itX} for all real t, where i = √(−1). Show that |φ(t)| ≤ 1, and find φ(t) when X is exponential with parameter λ.

Solution   By definition of e^{itX},

   E e^{itX} = E(cos tX + i sin tX),

but |cos tX + i sin tX| = (cos² tX + sin² tX)^{1/2} = 1, and so |φ(t)| ≤ 1.

If X is exponential,

   E e^{itX} = ∫₀^∞ λe^{−λx} cos tx dx + i ∫₀^∞ λe^{−λx} sin tx dx.

Integrating by parts gives

   ∫₀^∞ λe^{−λx} cos tx dx = 1 − (t/λ) ∫₀^∞ λe^{−λx} sin tx dx.

Likewise,

   ∫₀^∞ λe^{−λx} sin tx dx = (t/λ) ∫₀^∞ λe^{−λx} cos tx dx.

Hence,

   ∫₀^∞ λe^{−λx} cos tx dx = 1/(1 + t²/λ²)   and   ∫₀^∞ λe^{−λx} sin tx dx = (t/λ)/(1 + t²/λ²),

so

   E e^{itX} = (1 + it/λ)/(1 + t²/λ²) = λ/(λ − it).

Remark   φ(t) is called the characteristic function of X. We have occasionally been frustrated by the nonexistence of a moment generating function. This example shows that we can use the characteristic function in such cases because it always exists. For example, it can be shown that the Cauchy density f(x) = 1/(π(1 + x²)), which has no moments at all, has characteristic function φ(t) = e^{−|t|}. Furthermore, for any random variable X that does have a moment generating function M(θ) = E e^{θX} for |θ| < ε, where ε > 0, the characteristic function of X is given by φ(t) = M(it).

(1) Exercise   Show that the random variable uniform on [−1, 1] has characteristic function φ(t) = (sin t)/t.
(2) Exercise   If X₁, ..., X_n are independent Cauchy random variables, show that X̄ = (1/n)Σ₁ⁿ Xᵢ has the same Cauchy density as the Xᵢ.
(3) Exercise   Find the characteristic function of the random variable X with density f(x) = ½e^{−|x|}, x ∈ ℝ.
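Characteristic functions are also convenient numerically, because the empirical version (1/n)Σ_k e^{itX_k} converges to φ(t). A sketch (ours, in Python) comparing it with λ/(λ − it) for exponential samples:

    import cmath, random

    def empirical_cf(sample, t):
        return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

    lam = 2.0
    sample = [random.expovariate(lam) for _ in range(200_000)]
    for t in (0.5, 1.0, 3.0):
        print(t, empirical_cf(sample, t), lam / (lam - 1j * t))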
PROBLEMS

1  When is f(x, y) = xy + ax + by + 1 a joint density function on 0 ≤ x, y ≤ 1? Can it be the joint density of random variables X and Y that are independent?
2  Find cov(X, Y) for the joint density of Problem 1.
3  Let X and Y have joint density f(x, y) = c exp(−x − y) for x > 0, y > 0. Find (a) c, (b) P(X + Y > 1), and (c) P(X < Y).
4  Let X and Y have joint density f(x, y) = g(x + y) for x ≥ 0, y ≥ 0. Find the density of Z = X + Y.
5  Let X and Y have joint density f = c(1 + x² + y²)^{−3/2} for all x and y. (a) What is c? (b) Find the marginal density of X.
6  Let X and Y have the joint density of Problem 5, and define W = X² + Y², Z = Y/X. Show that W and Z are independent.
7  Let U and V be independently and uniformly distributed on (0, 1). Find the joint density of X = U^{1/2}/(U^{1/2} + V^{1/2}) and Y = U^{1/2} + V^{1/2}. By considering P(X ≤ x | Y ≤ 1), devise a rejection sampling procedure for simulating a random variable with density 6x(1 − x), 0 < x < 1.
8  Let U₁, U₂, and U₃ be independently and uniformly distributed on (0, 1), with order statistics U(1) < U(2) < U(3). Show that the density of U(2) is 6x(1 − x), 0 < x < 1.
9  Let U₁, U₂, U₃, and U₄ be independently and uniformly distributed on (0, 1). What is the density of X = log(U₁U₂)/log(U₁U₂U₃U₄)? (Hint: One way uses Example 8.15.)
10  Let X and Y be independent normal random variables, and set U = X + Y, V = X − Y. Show that U and V are independent if and only if var(X) = var(Y).
11  Simulation Using Bivariate Rejection   Let U and V be independent and uniform on (0, 1). Define the random variables Z = (2U − 1)² + (2V − 1)² and
   X = (2U − 1)(2Z⁻¹ log Z⁻¹)^{1/2},   Y = (2V − 1)(2Z⁻¹ log Z⁻¹)^{1/2}.
Show that the conditional joint density of X and Y, given Z < 1, is (2π)⁻¹ exp(−½(x² + y²)). Explain how this provides a method for simulating normal random variables.
12  Let X and Y be independent exponential random variables with respective parameters λ and μ. Find P(max{X, Y} ≤ aX) for a > 0.
13  Let X and Y have joint density f = cye^{−y(x+1)}, 0 ≤ x < y < ∞, for some constant c. What is the conditional density of X given Y = y?
14  A spherical melon has radius 1. Three gravid insects alight independently (for oviposition) at A, B, and C, where A, B, and C are uniformly distributed on the surface of the melon. For any two insects, if the distance between them (along a great circle of the melon) is less than π/2, then they detect each other's presence and will both fly off to seek an unoccupied melon. Show that the probability that exactly one insect is left in possession of the melon is 3(π − 1)/(4π), and that the probability that all three remain on the melon is 1/(4π).
15  Let X and Y have joint density f(x, y) = c sin(x + y), 0 < x, y < π/2. Show that c = 1/2, cov(X, Y) = (π − 2)/2 − π²/16, and ρ(X, Y) = (8(π − 2) − π²)/(π² + 8π − 32).
16  Let X and Y be independent exponential with parameters λ and μ, respectively. Now define U = X ∧ Y and V = X ∨ Y. Find P(U = X), and show that U and V − U are independent.
17  Let (Xᵢ; i ≥ 1) be independent with the uniform density on (−1, 1), and let f_n(x) be the density of Σᵢ₌₁ⁿ Xᵢ. Show that
   f_n(x) = ½ ∫_{x−1}^{x+1} f_{n−1}(u) du   for n ≥ 2,
and deduce that for any integer k, the density f_n(x) is a polynomial in x for x ∈ [k, k + 1).
18  Let X and Y have the bivariate normal density of Examples 8.4.3 and 8.20. (a) If σ = τ, what is E(X | X + Y)? (b) If σ ≠ τ, what are E(X | X + Y) and E(Y | X + Y)?
19  Let (X_n; n ≥ 1) be independent and uniformly distributed on (0, 1). Define T = min{n: Σᵢ₌₁ⁿ Xᵢ > 1}. Show that P(T ≥ j + 1) = 1/j! for j ≥ 1. Deduce that E(T) = e.
20  Let (X₁, X₂, X₃) be independent and uniformly distributed on (0, 1). What is the probability that the lengths X₁, X₂, X₃ can form a triangle?
21  Let (Uᵢ; i ≥ 1) be independently and uniformly distributed on (0, 1), and define Mₙ = max{U₁, ..., Uₙ}. Show that, as n → ∞, the distribution of Zₙ = n(1 − Mₙ) converges to an exponential distribution.
22  Let (Xᵢ; i ≥ 1) be independent exponential random variables, each with parameter μ. Let N be independent of the Xᵢ, having mass function f_N(n) = (1 − p)p^{n−1}, n ≥ 1. What is the density of Y = Σᵢ₌₁^N Xᵢ?
23  Let N(t) be a Poisson process, C(t) its current life at t, and X₁ the time of the first event. Show that cov(X₁, C(t)) = ½t²e^{−λt}.
24  Simulating Gamma   Let U and X be independent, where U is uniform on (0, 1) and X is exponential with parameter α⁻¹ ≤ 1. Show that the density of X, conditional on
   (eX/α)^{α−1} exp(−X(α − 1)/α) ≥ U,
is x^{α−1}e^{−x}/Γ(α). Why is this of value?
25  (a) Let X be exponential with parameter 1. Show that X/λ is exponential with parameter λ.
(b) Let (Xᵢ; 1 ≤ i ≤ n) be independent and exponential with parameter 1. Use the lack-of-memory property of the exponential density to show that max{X₁, ..., Xₙ} has the same distribution as X₁ + X₂/2 + ··· + Xₙ/n.
26  Let X₁, X₂, X₃, and X₄ be independent standard normal random variables. Show that W = X₁/X₂ has the Cauchy density, and Z = |X₁X₂ + X₃X₄| has an exponential density.
27  Let X and Y be independent Poisson random variables, each with parameter n. Show that, as n → ∞, P(X − Y ≤ (2n)^{1/2} x) → Φ(x).
28  Let (Uᵢ; i ≥ 1) be a collection of independent random variables, each uniform on (0, 1). Let X have mass function f_X(x) = (e − 1)e^{−x}, x ≥ 1, and let Y have mass function f_Y(y) = 1/((e − 1)y!), y ≥ 1. Show that Z = X − max{U₁, ..., U_Y} is exponential. (Assume X and Y are independent of each other and of the Uᵢ.)
29  Let X and Y be independent gamma with parameters (α, 1) and (β, 1), respectively. Find the conditional density of X given X + Y = z.
30  Let X and Y be independent standard normal random variables. Show that the pair X and Z, where Z = ρX + (1 − ρ²)^{1/2}Y, |ρ| ≤ 1, has a standard bivariate normal density.
31  Let X and Y have joint moment generating function M(s, t), and define K(s, t) = log M(s, t). Show that K_s(0, 0) = E(X), K_ss(0, 0) = var(X), and K_st(0, 0) = cov(X, Y).
32  A sorcerer has hidden a ring in one of an infinite number of boxes numbered ..., −2, −1, 0, 1, 2, 3, .... You only have time to look in 11 boxes. The sorcerer gives you a hint: he tosses 100 fair coins and counts the number of heads. He does not tell you this number, nor does he tell you the number of the box with the ring in it, but he tells you the sum of these two numbers.
(a) If the sum is 75, which 11 boxes should you look in?
(b) Give an approximation to the probability of finding the ring.
[You are given that ∫₀^{11/10} (2π)^{−1/2} e^{−u²/2} du = 0.36.]
33  Multivariate Normal Density   Let (Y₁, ..., Yₙ) be independent, each having the N(0, 1) density. If Xᵢ = Σⱼ aᵢⱼYⱼ + bᵢ for 1 ≤ i, j ≤ n, then (X₁, ..., Xₙ) are said to have a multivariate normal density. Find the joint m.g.f.
   Mₙ(t₁, ..., tₙ) = E(exp(Σᵢ₌₁ⁿ tᵢXᵢ)).
Deduce that the following three statements are equivalent:
(a) The random variables (X₁, ..., Xₙ) are independent.
(b) (X₁, ..., Xₙ) are pairwise independent.
(c) cov(Xᵢ, Xⱼ) = 0 for 1 ≤ i ≠ j ≤ n.
34  A sequence of random variables X₁, X₂, ... is said to obey the Central Limit Theorem (CLT) if and only if the distribution of (Sₙ − E(Sₙ))/√var(Sₙ) tends to the standard normal distribution, where Sₙ = Σᵢ₌₁ⁿ Xᵢ. State sufficient conditions on (Xₙ) for the sequence to obey the CLT, and say which of your conditions are necessary. Let (Uₙ(λₙ)) be a sequence of independent random variables having the Poisson distribution with nonzero means (λₙ). In each of the following cases, determine whether the sequence (Xₙ) obeys the CLT:
(i) Xₙ = Uₙ(1).
(ii) Xₙ = Uₙ(1) + n.
(iii) Xₙ = Uₙ(1/2).
(iv) Xₙ = U₂ₙ(1)/(1 + U₂ₙ₋₁(1)).
(v) Xₙ = Uₙ(n).
35  Let X₁ and X₂ be independent with the same density f(x). Let U be independent of both and uniformly distributed on (0, 1). Let Y = U(X₁ + X₂). Find f(x) such that Y can also have density f(x).
36  Molecules   A molecule M has velocity v = (v₁, v₂, v₃) in Cartesian coordinates. Suppose that v₁, v₂, and v₃ have joint density
   f(x, y, z) = (2πσ²)^{−3/2} exp(−(x² + y² + z²)/(2σ²)).
Show that the density of the magnitude |v| of v is
   f(w) = (2/π)^{1/2} σ⁻³ w² exp(−w²/(2σ²)),   w > 0.
37  Let C be a circle of radius r with centre O. Choose two points P and Q independently at random in C. Show that the probability that the triangle OPQ contains an obtuse angle is 3/4. (Note: No integration is required.)
38  Given a fixed line AB, a point C is picked at random such that max{AC, BC} ≤ AB. Show that the probability that the triangle ABC contains an obtuse angle is 3π/(8π − 6√3). [Note: No integration is required. This is a version of a problem given by Lewis Carroll. To combat insomnia, he solved mathematical problems in his head; this one was solved on the night of 20 January 1884. He collected a number of these mental exercises in a book entitled Pillow Problems (Macmillan, 1895).]
39  Let X be a nonnegative random variable such that P(X > x) > 0 for all x > 0. Show that P(X > x) ≤ E(Xⁿ)/xⁿ for all n ≥ 0. Deduce that s = Σₙ₌₀^∞ 1/E(Xⁿ) < ∞. Let N be a random variable with the mass function P(N = n) = 1/(sE(Xⁿ)), n ≥ 0. Show that:
(a) for all x > 0, E(x^N) < ∞;
(b) E(X^N) = ∞.
40  Let X and Y be independent and uniform on [0, 1], and let Z be the fractional part of X + Y. Show that Z is uniform on [0, 1] and that X, Y, Z are pairwise independent but not independent.
41  Let (X(k); 1 ≤ k ≤ n) be the order statistics derived from n independent random variables, each uniformly distributed on [0, 1]. Show that
(a) E(X(k)) = k/(n + 1),
(b) var(X(k)) = k(n − k + 1)/((n + 1)²(n + 2)).
42  Let (X(k); 1 ≤ k ≤ n) be the order statistics derived from n independent random variables, each uniformly distributed on [0, 1]. Show that they have the same distribution as (Y_k; 1 ≤ k ≤ n), where Y₀ = 0 and, given Y_{j−1}, Y_j has the density
   (n − j + 1)(1 − y)^{n−j}/(1 − Y_{j−1})^{n−j+1},   Y_{j−1} ≤ y ≤ 1, for 1 ≤ j ≤ n.
43  Normal Sample: Example 8.7.4 Revisited   Let (X_r; 1 ≤ r ≤ n) be independent N(μ, σ²) random variables.
(a) By considering the joint moment generating function of X̄ and (X_r − X̄; 1 ≤ r ≤ n), show that
   X̄ = (1/n) Σ₁ⁿ X_r   and   S² = (1/(n − 1)) Σ₁ⁿ (X_r − X̄)²
are independent.
(b) Show that X̄ and X_r − X̄ are uncorrelated, and deduce that X̄ and S² are independent.
44  (i) Suppose that the random variable Q has density ½ sin q on [0, π]. Find the distribution function of Q, and deduce that sin²(Q/2) has the uniform density on [0, 1].
(ii) Suppose that the random vector (X, Y) is uniformly distributed on the unit circle, and set R² = X² + Y². Show that R² has the uniform density on [0, 1]. Deduce that the random vector (U, V, W) is uniformly distributed on the unit sphere, where
   U = 2X(1 − R²)^{1/2},   V = 2Y(1 − R²)^{1/2},   W = 1 − 2R².

9 Markov Chains

In all crises of human affairs there are two broad courses open to a man. He can stay where he is or he can go elsewhere.
   P.G. Wodehouse, Indiscretions of Archie

9.1 The Markov Property

In previous chapters, we found it useful and interesting to consider sequences of independent random variables. However, many observed sequences in the natural world are patently not independent.
Consider, for example, the air temperature outside your window on successive days, or the sequence of morning fixes of the price of gold. It is desirable and necessary to consider more general types of sequences of random variables. After some thought, you may agree that for many such systems it is reasonable to suppose that, if we know exactly the state of the system today, then its state tomorrow should not further depend on its state yesterday (or on any previous state). This informal (and vague) preamble leads to the following formal (and precise) statement of the Markov property for a sequence of random variables.

(1) Definition   Let X = (X_n; n ≥ 0) be a sequence of random variables taking values in a countable set S, called the state space. If for all n ≥ 0 and all possible values of i, k, k₀, ..., k_{n−1}, we have

(2)   P(X_{n+1} = k | X₀ = k₀, ..., X_n = i) = P(X_{n+1} = k | X_n = i) = P(X₁ = k | X₀ = i),

then X is said to be a Markov chain or to have the Markov property. We write p_{ik} = P(X₁ = k | X₀ = i), where (p_{ik}; i ∈ S, k ∈ S) are known as the transition probabilities of the chain.

Sometimes we write p_{i,k} for p_{ik}, and you are warned that some books use p_{ki} to denote p_{ik}. Another popular rough and ready way of interpreting the formal condition (2) is to say that, for a Markov chain, the future is conditionally independent of the past, given the present.

Notice that in some applications it is more natural to start the clock at n = 1, so the chain is X = (X_n; n ≥ 1). Occasionally, it is convenient to suppose the chain extends in both directions, so that X = (X_n; −∞ < n < ∞). The state space S is often a subset of the integers ℤ, or a subset of the set of ordered pairs of integers ℤ². Markov chains may take values in some countable set that happens not to be a subset of the integers. However, this set can immediately be placed in one–one correspondence with some appropriate subset of the integers, and the states relabelled accordingly.
(3) Example: Simple Random Walk   Let (S_n; n ≥ 0) be a simple random walk. Because the steps (S_{n+1} − S_n; n ≥ 0) are independent, the sequence S_n clearly has the Markov property, and the transition probabilities are given by

   p_{ik} = P(S_{n+1} = k | S_n = i) = p if k = i + 1;   q if k = i − 1;   0 otherwise.

The state space S is the set of integers ℤ.

(4) Example: Branching Process   Let Z_n be the size of the nth generation in an ordinary branching process. Because family sizes are independent, Z = (Z_n; n ≥ 0) is a Markov chain. The transition probabilities are given by

   p_{ik} = P(Z_{n+1} = k | Z_n = i) = P(Σ_{r=1}^i Y_r = k),

where Y₁, ..., Y_i are the i families of the nth generation, given that Z_n = i. The state space is the set of nonnegative integers ℤ⁺.

When S is a finite set, X is known as a finite Markov chain. Until further notice, we consider finite chains (unless it is specifically stated otherwise) and write |S| = d.

(5) Example: Information Source   A basic concern of telecommunications engineers is the transmission of signals along a channel. Signals arise at a source, and to devise efficient methods of communication it is necessary to have models for such sources. In general, it is supposed that the source produces a sequence of symbols randomly drawn from a finite alphabet A. By numbering the symbols from 1 to |A|, the output becomes a sequence of random variables (X_n; n ≥ 1) called a message. Various assumptions can be made about the output of sources, but a common and profitable assumption is that they have the Markov property. In this case, the output is a finite Markov chain, and the source is called a simple Markov source.
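Any chain of this kind can be simulated mechanically: given the current state i, pick the next state from the mass function (p_{ik}; k ∈ S). A minimal sketch (ours, in Python); the dictionary representation of P and the reflecting random walk are illustrative assumptions.

    import random

    def simulate_chain(P, start, steps):
        # P is a dict: P[i] is a list of (state, probability) pairs.
        path, state = [start], start
        for _ in range(steps):
            u, acc = random.random(), 0.0
            for nxt, p in P[state]:
                acc += p
                if u <= acc:
                    state = nxt
                    break
            path.append(state)
        return path

    # Simple random walk on {0,...,5} with reflecting barriers, p = 1/2.
    P = {0: [(1, 1.0)], 5: [(4, 1.0)]}
    for i in range(1, 5):
        P[i] = [(i - 1, 0.5), (i + 1, 0.5)]
    print(simulate_chain(P, 2, 20))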
Having formally defined a Markov chain X, we emphasize that there are many ways of presenting the idea of a Markov chain to the mind's eye. You should choose the one that best suits the context of the problem and your own psyche. For example:
(i) A particle performs a random walk on the vertices of a graph. The distribution of its next step depends on where it is, but not on how it got there.
(ii) A system may be in any one of d states. The distribution of its next state depends on its current state, but not on its previous states.

Because of this imagery, we talk equivalently of chains visiting k, being at k, taking the value k, and so on. Whatever the choice of concept, the notation is always essentially that of Definition 1, but (to avoid repetitive strain injury) some abbreviations of notation are widespread. Thus, we commonly write

   P(X_{n+1} = k | X₀ = k₀, ..., X_{n−1} = k_{n−1}, X_n = i) = P(X_{n+1} = k | X₀, ..., X_n).

If we want to stress or specify the initial value of X, then we write P(X_{n+1} = k | X₀ = k₀, ..., X_n), and so on. Note that the Markov property as defined in (1) is equivalent to each of the following properties, which it is occasionally convenient to take as definitive. First:

(6)   P(X_{n+m} = k | X₀, ..., X_n) = P(X_{n+m} = k | X_n)   for any positive m and n.

Second:

(7)   P(X_{n_r} = k | X_{n_1}, ..., X_{n_{r−1}}) = P(X_{n_r} = k | X_{n_{r−1}})   for any n₁ < n₂ < ··· < n_r.

Third:

(8)   P(X₁ = k₁, ..., X_{r−1} = k_{r−1}, X_{r+1} = k_{r+1}, ..., X_n = k_n | X_r = k_r)
      = P(X₁ = k₁, ..., X_{r−1} = k_{r−1} | X_r = k_r) × P(X_{r+1} = k_{r+1}, ..., X_n = k_n | X_r = k_r).

You are asked to prove the equivalence of Definition 1 and (6), (7), and (8) in Problem 6. Notice that (8) expresses in a precise form our previously expressed rough idea that, given the present state of a Markov chain, its future is independent of its past. Finally, it should be noted that the Markov property is preserved by some operations, but not by others, as the following examples show.
(9) Example: Sampling   Let X be a Markov chain. Show that the sequence Y_n = X_{2n}, n ≥ 0, is a Markov chain.

Solution   Because X is a Markov chain, we can argue as follows:

   P(Y_{n+1} = k | Y₀, ..., Y_n = i) = P(X_{2n+2} = k | X₀, ..., X_{2n} = i)
   = P(X_{2n+2} = k | X_{2n} = i),   by (7),
   = P(Y_{n+1} = k | Y_n = i).

So Y is a Markov chain. It is said to be imbedded in X.

(10) Example   If X is a Markov chain with state space S_X, show that Y_n = (X_n, X_{n+1}), n ≥ 0, is a Markov chain. What are its transition probabilities?

Solution   The state space of Y is a collection of ordered pairs of the states of X; that is to say, S_Y = {(s₁, s₂): s₁ ∈ S_X, s₂ ∈ S_X}. Now

   P(Y_{n+1} = (j, k) | Y₀, ..., Y_n) = P(X_{n+2} = k, X_{n+1} = j | X₀, ..., X_{n+1})
   = P(X_{n+2} = k, X_{n+1} = j | X_{n+1}, X_n),   since X is Markov,
   = P(Y_{n+1} = (j, k) | Y_n).

So Y is Markov. Also,

(11)   P(Y_{n+1} = (k, l) | Y_n = (i, j)) = P(X_{n+2} = l | X_{n+1} = k) δ_{jk} = p_{kl} δ_{jk},

where

   δ_{kj} = 1 if k = j;   0 otherwise
is the usual Kronecker delta.

(12) Example   Let X be a Markov chain. Show that Y_n = |X_n|, n ≥ 0, is not necessarily a Markov chain.

Solution   Let X have state space S = {−1, 0, 1} and transition probabilities zero, except for

   p_{−1,0} = ½,   p_{−1,1} = ½,   p_{0,−1} = 1,   p_{1,0} = 1.

Then

   P(Y_{n+1} = 1 | Y_n = 1, Y_{n−1} = 1) = P(X_{n+1} = 1 | X_n = 1, X_{n−1} = −1) = 0.

But P(Y_{n+1} = 1 | Y_n = 1) = P(Y_{n+1} = 1 | X_n ∈ {−1, 1}), which is not necessarily zero. So Y is not Markov. Notice that the states −1 and +1 for X_n produce one state +1 for Y_n; they are said to be lumped together. The example shows that lumping states together can destroy the Markov property. Conversely, given a sequence Y_n that is not a Markov chain, it is sometimes possible to construct a Markov chain involving Y_n by enlarging the state space.

(13) Example   A machine can be in one of two states: working (denoted by s₀) or in repair (denoted by s₁). Each day, if working, it may break down with probability α, independently of other days. It takes r days to repair, where r > 1. Now if X_n is the state of the machine on the nth day, this is not a Markov chain. To see this, note that

   P(X_{n+1} = s₀ | X_n = s₁, X_{n−1} = s₀) = 0,

but

   P(X_{n+1} = s₀ | X_n = X_{n−1} = ··· = X_{n−r+1} = s₁) = 1.

However, suppose we now let the state space be S = {s₀, s₁, ..., s_r}, where X_n = s_i if the machine has been in repair for i days. Then

   P(X_{n+1} = s_{i+1} | X_n = s_i, X_{n−1}, ...) = 1,   if 1 ≤ i ≤ r − 1,
   P(X_{n+1} = s₀ | X_n = s_r, ...) = 1,

and so on. It is easy to see that X_n now is a Markov chain.
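The enlarged chain of Example 13 has an explicit (r + 1) × (r + 1) transition matrix, which the following sketch (ours, in Python) constructs; as every transition matrix must, each row sums to 1.

    def machine_chain(alpha, r):
        # States 0..r: 0 = working, i = in repair for i days (Example 13).
        n = r + 1
        P = [[0.0] * n for _ in range(n)]
        P[0][0] = 1 - alpha      # survives the day
        P[0][1] = alpha          # breaks down: first day of repair
        for i in range(1, r):
            P[i][i + 1] = 1.0    # repair continues
        P[r][0] = 1.0            # repair finished: back to work
        return P

    for row in machine_chain(alpha=0.1, r=3):
        print(row)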
9.2 Transition Probabilities

Recall that X is a Markov chain with state space S, where |S| = d. The transition probabilities p_{ik} are given by

(1)   p_{ik} = P(X_{n+1} = k | X_n = i)   for n ≥ 0.

The d × d matrix (p_{ij}) of transition probabilities is called the transition matrix and is denoted by P. Let us first record two simple but important facts about P. Because (p_{ik}; k ∈ S) is a conditional mass function, we have

(2)   p_{ik} ≥ 0   for all i and k,

and

(3)   Σ_{k∈S} p_{ik} = 1.

Any matrix P satisfying (2) and (3) is called stochastic. We remark that if, in addition,

(4)   Σ_{i∈S} p_{ik} = 1,

then P is doubly stochastic. Also, if (3) is replaced by the condition

(5)   Σ_{k∈S} p_{ik} ≤ 1,

then a matrix satisfying (2) and (5) is called substochastic. For example, the simple random walk of Example 9.1.3 is doubly stochastic. If p_{ij} > 0 for all i and j, then P is called positive.

Now, given that X₀ = i, the distribution of X_n is denoted by

(6)   p_{ik}(n) = P(X_n = k | X₀ = i) = P(X_{n+m} = k | X_m = i),

because of (9.1.2). Trivially, of course, Σ_{k∈S} p_{ik}(n) = 1. These probabilities are called the n-step transition probabilities, and they describe the random evolution of the chain.
Note that p_{ij}(n) is a function of three variables: the two states i, j and the time n. In more complicated expressions involving several such probabilities, you should use the symbols i, j, k, l to denote states and the symbols m, n, r, t to denote time (possibly with suffices). Some simple special cases illustrate these notions.

(7) Example (9.1.3) Continued: Simple Random Walk   Recall that (S_n; n ≥ 0) are the successive values of a simple random walk. If S_n = k and S₀ = i, then from Theorem 5.6.4 we have

(8)   p_{ik}(n) = C(n, (n + k − i)/2) p^{(n+k−i)/2} (1 − p)^{(n−k+i)/2}   if n + k − i is even;   0 otherwise.

(Note that this chain has infinite state space.)

(9) Example: Survival   A traffic sign stands in a vulnerable position. Each day, independently of other days, it may be demolished by a careless motorist with probability q. In this case, the city engineer replaces it with a new one at the end of the day. At the end of day n, let X_n denote the number of days since the sign in position was newly installed. Show that X_n is a Markov chain, and find p_{ik} and p_{ik}(n). (Note that this chain has infinite state space.)

Solution   By construction,

   X_{n+1} = X_n + 1 with probability 1 − q = p;   0 with probability q.

Because the choice of outcomes is independent of previous days, X_n is a Markov chain, and

   p_{ik} = p if k = i + 1;   q if k = 0;   0 otherwise.

For the n-step transition probabilities, we note that either the sign survives for all n days, or it has been struck in the meantime. Hence,

(10)   p_{ik}(n) = pⁿ   if k = i + n,

and

(11)   p_{ik}(n) = qp^k   if 0 ≤ k ≤ n − 1.

Returning to the general case, we examine the relationship between p_{ik} and p_{ik}(n). It is a remarkable and important consequence of the Markov property (9.1.1) that the random evolution of the chain is completely determined by p_{ik}, as the following theorem shows.
(12) Theorem: Chapman–Kolmogorov Equations   Let X have transition matrix P. For any i and k in S, and any positive m and n, we have

(13)   p_{ik}(m + n) = Σ_{j∈S} p_{ij}(m) p_{jk}(n),

and also

(14)   p_{ik}(n + 1) = Σ_{j₁∈S} ··· Σ_{jₙ∈S} p_{ij₁} p_{j₁j₂} ··· p_{jₙk}.

Proof   Recall that if (A_j; j ≤ d) is a collection of disjoint events such that ∪₁^d A_j = Ω, then for any events B and C,

   P(B | C) = Σ_{j=1}^d P(B ∩ A_j | C).

Hence, setting A_j = {X_m = j}, we have

   p_{ik}(m + n) = Σ_{j∈S} P(X_{m+n} = k, X_m = j | X₀ = i)
   = Σ_{j∈S} P(X_{m+n} = k | X_m = j, X₀ = i) P(X_m = j | X₀ = i),   by conditional probability,
   = Σ_{j∈S} P(X_{m+n} = k | X_m = j) P(X_m = j | X₀ = i),   by the Markov property,
   = Σ_{j∈S} p_{ij}(m) p_{jk}(n).

Hence, in particular,

(15)   p_{ik}(n + 1) = Σ_{j₁∈S} p_{ij₁} p_{j₁k}(n) = Σ_{j₁∈S} Σ_{j₂∈S} p_{ij₁} p_{j₁j₂} p_{j₂k}(n − 1) = ··· = Σ_{j₁∈S} ··· Σ_{jₙ∈S} p_{ij₁} ··· p_{jₙk},

by repeated application of (15).

An alternative proof of (14) is provided by the observation that the summation on the right-hand side is the sum of the probabilities of all the distinct paths of n + 1 steps that lead from i to k. Because these are mutually exclusive and one of them must be used to make the trip from i to k, the result follows.
The n-step transition probabilities p_{ik}(n) tell us how the mass function of X_n depends on X₀. If X₀ itself has mass function

(16)   α_i = P(X₀ = i),

and X_n has mass function

(17)   α_i^{(n)} = P(X_n = i),

then, by conditional probability, they are related by

(18)   α_k^{(n)} = Σ_{i∈S} α_i p_{ik}(n).

The probabilities α_i^{(n)} are sometimes called the absolute probabilities of X_n. Now we notice that the d² n-step transition probabilities (p_{ik}(n); 1 ≤ i ≤ d, 1 ≤ k ≤ d) can be regarded as a matrix P_n, and the absolute probabilities (α_i^{(n)}; 1 ≤ i ≤ d) as a row vector α_n. It follows from Theorem 12 and (18) that

(19)   P_{m+n} = P_m P_n = P^{m+n}

and

   α_n = αPⁿ,   where α = (α₁, ..., α_d).

(20) Example: Two-State Chain   The following simple but important example is very helpful in illustrating these and other ideas about Markov chains. Let X have state space S = {1, 2} and transition matrix

   P = ( 1 − α    α  )
       (   β    1 − β ).

You can verify by induction that

   (α + β) Pⁿ = ( β  α )   +   (1 − α − β)ⁿ (  α  −α )
                ( β  α )                     ( −β   β ).

Hence, for example, p₁₂(n) = (α/(α + β))(1 − (1 − α − β)ⁿ).
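The closed form for Pⁿ is easy to confirm numerically. A sketch (ours, in Python) multiplying out Pⁿ directly and comparing p₁₂(n) with (α/(α + β))(1 − (1 − α − β)ⁿ); the values of α, β, n are arbitrary.

    def mat_mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    alpha, beta, n = 0.3, 0.5, 7
    P = [[1 - alpha, alpha], [beta, 1 - beta]]
    Pn = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        Pn = mat_mul(Pn, P)

    closed = alpha / (alpha + beta) * (1 - (1 - alpha - beta) ** n)
    print(Pn[0][1], closed)    # the two values should agree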
Descending once again from the general to the particular, we identify some special varieties of chain that have attractive properties, which we find useful later.

(21) Definition   If for some n₀ < ∞ we have p_{ij}(n₀) > 0 for all i and j, then the chain is said to be regular.

(22) Example   Let X have transition probabilities

   P = (  0    1  )
       ( 1/2  1/2 ).

Then P is not positive, but

   P² = ( 1/2  1/2 )
        ( 1/4  3/4 ),

so P is regular.

Roughly speaking, a chain is regular if there is a time such that, no matter where it started, the chain could be anywhere in S. Some chains satisfy the weaker condition that every state can be reached from every other state with nonzero probability. This is called irreducibility.

(23) Definition   A chain X is irreducible if for each i and k in S there exists an n₀ < ∞ such that p_{ik}(n₀) > 0.

(24) Example   Let X have transition matrix

   P = ( 0  1 )
       ( 1  0 ).

Then

   P^{2n} = ( 1  0 )
            ( 0  1 ),   n ≥ 0,

and P^{2n+1} = P. Hence, X is neither positive nor regular, but it is irreducible. In fact, it is said to be periodic with period 2, because p_{ii}(n) > 0 only if n is even. A state with no period greater than 1 is aperiodic.

(25) Example   Let X and Y be independent regular Markov chains with transition matrices P = (p_{ik}) and Q = (q_{ik}), respectively. Show that Z_n = (X_n, Y_n), n ≥ 0, is a regular Markov chain.

Solution   Using the independence of X and Y,

   P(Z_n = (k, l) | Z_{n−1} = (i, j), Z_{n−2}, ..., Z₀)
   = P(X_n = k | X_{n−1} = i, X_{n−2}, ..., X₀) P(Y_n = l | Y_{n−1} = j, ..., Y₀)
   = p_{ik} q_{jl},

because X and Y are Markov chains. Therefore, Z is a Markov chain. Likewise, Z has n-step transition probabilities p_{ik}(n) q_{jl}(n). Finally, because P and Q are regular, there exist n₀ and m₀ (both finite) such that p_{ik}(m₀) and q_{jl}(n₀) are positive for all i, k and all j, l, respectively. Hence, p_{ik}(m₀n₀) q_{jl}(m₀n₀) > 0 for all i, j, k, l, and so Z is regular.
Note two further bits of jargon. A set C of states is called closed if p_{ik} = 0 for all i ∈ C, k ∉ C. Furthermore, if C is closed and |C| = 1, then this state is called absorbing. We conclude this section with two examples drawn from communication theory.

(26) Example: Entropy   Let the random vector X_n = (X₀, ..., X_n) have joint mass function f(x₀, ..., x_n). Then the entropy (also called uncertainty) of X_n is defined as

   H(X_n) = −E[log f(X₀, ..., X_n)]

(with the convention that 0 log 0 = 0). Let X₀, ..., X_n be the first n + 1 values of a Markov chain with transition matrix P and initial mass function α. Show that, in this case,

(27)   H(X_n) = −E[log α_{X₀}] − Σ_{r=1}^n E[log p_{X_{r−1}X_r}].

Solution   Because X is a Markov chain,

   f(x₀, x₁, ..., x_n) = α_{x₀} p_{x₀x₁} ··· p_{x_{n−1}x_n}.

Hence,

   E[log f(X₀, ..., X_n)] = Σ_{x₀∈S} ··· Σ_{x_n∈S} α_{x₀} p_{x₀x₁} ··· p_{x_{n−1}x_n} (log α_{x₀} + log p_{x₀x₁} + ··· + log p_{x_{n−1}x_n})
   = Σ_{x₀} α_{x₀} log α_{x₀} + Σ_{x₀,x₁} α_{x₀} p_{x₀x₁} log p_{x₀x₁} + ··· + Σ_{x_{n−1},x_n} α_{x_{n−1}} p_{x_{n−1}x_n} log p_{x_{n−1}x_n}
   = E[log α_{X₀}] + Σ_{r=1}^n E[log p_{X_{r−1}X_r}],

as required, yielding (27).
(28) Example: Simple Markov Source   Let the random variable X and the random vector Y be jointly distributed, and denote the conditional mass function of X given Y by f(x|y). Then the conditional entropy of X with respect to Y is defined to be

   H(X|Y) = −E[E(log f(X|Y) | Y)] = −Σ_y Σ_x f(x|y) log f(x|y) P(Y = y).

Let X₀, ..., X_{n+1} be the output from the Markov source defined in Example 9.1.5. Show that

   H(X_{n+1} | X₀, ..., X_n) = H(X_{n+1} | X_n).

Solution   Let Y be (X₀, ..., X_n). Then, by the Markov property,

   f(x|y) = P(X_{n+1} = x | X₀ = y₀, ..., X_n = y_n) = P(X_{n+1} = x | X_n = y_n) = p_{y_n x}.

Hence,

   H(X_{n+1} | Y) = −Σ_y Σ_x p_{y_n x} log p_{y_n x} P(X₀ = y₀, ..., X_n = y_n)
   = −Σ_{y_n} Σ_x p_{y_n x} log p_{y_n x} P(X_n = y_n) = H(X_{n+1} | X_n).

9.3 First Passage Times

For any two states i and k of X, we are often interested in the time it takes for the chain to travel from i to k. This is not merely a natural interest; these quantities are also of theoretical and practical importance. For example, in the simple gambler's ruin problem the state 0 entails ruin, and in the simple branching process X = 0 entails extinction.

(1) Definition   For a Markov chain X with X₀ = i:
(a) When i ≠ k, the first passage time to k from i is defined to be
   T_{ik} = min{n ≥ 0: X_n = k | X₀ = i};
the mean first passage time is μ_{ik} = E(T_{ik}).
(b) When i = k, the recurrence time of i is defined to be
   T_i = min{n > 0: X_n = i | X₀ = i};
the mean recurrence time is μ_i = E(T_i).

Note the simple but important fact that the chain has not entered k by time n if and only if T_{ik} > n.
|
But, by the Markov property, $E(T_{12}|X_1 = 1) = 1 + E(T_{12})$, and obviously $E(T_{12}|X_1 = 2) = 1$. Hence, $\mu_{12} = 1 + \tfrac13\mu_{12}$, giving $\mu_{12} = \tfrac32$ as above. Likewise, we find $\mu_{21} = 4$, and using conditional expectation again yields $\mu_1 = 1 + \tfrac23\mu_{21} = \tfrac{11}{3}$ as before.

For a rather different type of behaviour, consider the following.

(6) Example   Let X have transition matrix

$$P = \begin{pmatrix} \tfrac13 & \tfrac13 & \tfrac13 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Because $p_{33} = 1$, state 3 is absorbing, which is to say that upon entering 3 the chain never leaves it subsequently. Hence, $T_{12} = r$ occurs when the first $r - 1$ visits to 1 are followed by a step to 2. Thus,

$$P(T_{12} = r) = \left(\tfrac13\right)^r; \quad r \ge 1.$$

Hence,

$$P(T_{12} < \infty) = \sum_{r=1}^{\infty} \left(\tfrac13\right)^r = \tfrac12,$$

and $\mu_{12} = \infty$. Likewise, $P(T_1 = 1) = P(T_1 = 2) = \tfrac13$, and so $P(T_1 < \infty) = \tfrac23$ and $\mu_1 = \infty$.

These examples demonstrate that the properties of recurrence and first passage times depend strongly on the nature of the transition matrix P. In fact, we are going to show that, for any finite regular chain, both $T_k$ and $T_{ik}$ are finite with probability 1 (with finite expectation) for all i and k. First we need to clear the ground a little. Because we are only considering finite chains with $|S| = d$, we can without loss of generality set $k = d$. (If you like mnemonics, you can think of d as the destination of the chain.) Also, as we are only interested in the progress of the chain until it arrives at d, it is natural to focus attention on the probabilities

(7)   $r_{ik}(n) = P(X_n = k, n < T_{id} \mid X_0 = i), \quad i \neq d \neq k$.

These are the transition probabilities of the chain before entering d, and we denote the array $(r_{ik}(n))$ by $R_n$. By definition, for one step,

(8)   $r_{ik}(1) = p_{ik}$, for $i \neq d \neq k$.

For $n > 1$, the n-step d-avoiding probabilities are given by the following.
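These conditional-expectation calculations are easy to check numerically; a small sketch (Python, using the two-state matrix of Example (2)):

```python
import numpy as np

# The chain of Example (2): p11 = 1/3, p12 = 2/3, p21 = 1/4, p22 = 3/4.
P = np.array([[1/3, 2/3],
              [1/4, 3/4]])

# mu_12 = 1 + p11 * mu_12  and  mu_21 = 1 + p22 * mu_21:
mu_12 = 1.0 / (1 - P[0, 0])
mu_21 = 1.0 / (1 - P[1, 1])
# mu_1 = 1 + p12 * mu_21  (the first step to 1 itself contributes nothing more)
mu_1 = 1 + P[0, 1] * mu_21
print(mu_12, mu_21, mu_1)   # 1.5, 4.0, 3.666... as in the text
```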
(9) Theorem   For $i \neq d \neq k$,

(10)   $r_{ik}(n) = \sum_{j_1 \neq d}\sum_{j_2 \neq d}\cdots\sum_{j_{n-1} \neq d} p_{ij_1} p_{j_1j_2}\cdots p_{j_{n-1}k}$,

or, in matrix form, $R_n = R_1^n$.

Proof   We use the idea of paths. Every distinct path of the chain that goes from i to k in n steps and does not enter d is of the form $i, j_1, j_2, \ldots, j_{n-1}, k$, where $j_r \in S \setminus d$ for $1 \le r \le n - 1$. Such a path has probability $p_{ij_1} p_{j_1j_2}\cdots p_{j_{n-1}k}$, and exactly one of them is used, so $r_{ik}(n)$ is just the sum of all these probabilities, as given on the right-hand side of (10).

Corollary   For any state i of a regular chain,

(11)   $\lim_{n\to\infty} \sum_{k\neq d} r_{ik}(n) = 0$,

and, more strongly,

(12)   $\sum_{n=1}^{\infty}\sum_{k\neq d} r_{ik}(n) < \infty$.

Proof   First, suppose that the chain is positive, so that $p_{id} > 0$ for every i. Hence, there exists t such that

(13)   $\sum_{k\neq d} p_{ik} \le t < 1$.

Therefore, using (13) on the last sum in (10),

$$\sum_{k\neq d} r_{ik}(n) \le \sum_{j_1\neq d}\cdots\sum_{j_{n-1}\neq d} p_{ij_1}\cdots p_{j_{n-2}j_{n-1}}\, t.$$

Hence, $\sum_{k\neq d} r_{ik}(n) \le t^n$, on using (13) to bound each summation successively. Because $t < 1$, (11) and (12) follow in this case.

If the chain is regular but not positive, we first note that, because $\sum_{k\neq d} r_{jk}(1) \le 1$, we have

$$\sum_{k\neq d} r_{ik}(n+1) = \sum_{k\neq d}\sum_{j\neq d} r_{ij}(n)\, r_{jk}(1) \le \sum_{j\neq d} r_{ij}(n).$$

Thus, $\sum_{k\neq d} r_{ik}(n)$ is nonincreasing in n. Because the chain is regular, there is an $m_0$ such that $p_{id}(m_0) > 0$ for all i. By the argument of the first part, for some $t_0 < 1$,

(14)   $\sum_{k\neq d} r_{ik}(nm_0) \le t_0^n < 1$.
Because $\sum_{k\neq d} r_{ik}(n)$ is nonincreasing in n, (11) follows. Finally,

$$\sum_{n=1}^{\infty}\sum_{k\neq d} r_{ik}(n) \le m_0\left(1 + \sum_{n=1}^{\infty}\sum_{k\neq d} r_{ik}(m_0 n)\right) \le m_0\left(1 + \sum_{n=1}^{\infty} t_0^n\right) = \frac{m_0}{1 - t_0} < \infty,$$

proving (12).

With these preliminaries completed, we can get on with proving the main claim of the paragraph following Example 6.

(15) Theorem   For a regular chain, $T_{id}$ is finite with probability 1 and has finite mean. More precisely, $P(T_{id} > n) < c\lambda^n$ for some constants $c < \infty$ and $\lambda < 1$.

Proof   By the remark following Definition (1),

$$P(T_{id} > n) = \sum_{k\neq d} r_{ik}(n) \to 0 \quad \text{as } n \to \infty,$$

by (11). Therefore, $T_{id}$ is finite with probability 1. Also,

$$E(T_{id}) = \sum_{n=0}^{\infty} P(T_{id} > n) = \sum_{n=0}^{\infty}\sum_{k\neq d} r_{ik}(n) < \infty,$$

by (12). The second statement of the theorem follows easily from (14).

There is a simple and useful generalization of this result, as follows.

(16) Theorem   Let X be a regular Markov chain, and let D be a subset of the state space S. For $i \notin D$, define the first passage time $T_{iD} = \min\{n : X_n \in D \mid X_0 = i\}$. Then $E(T_{iD}) < \infty$.

Proof   This is an exercise for you.

It should be remarked that $E(T_i) < \infty$ is a trivial consequence of Theorem 15. As discussed above, first passage times are interesting in themselves for practical reasons, but they are even more interesting because of a crucial theoretical property. Informally, it says that given the state of a chain at a first passage time T, the future of the chain is independent of the past. The following example makes this more precise.
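Theorem (9) also gives a practical recipe for $P(T_{id} > n)$: delete row and column d from P and take powers of what remains, since $P(T_{id} > n) = \sum_{k\neq d} r_{ik}(n)$. A sketch (Python; the three-state matrix is an arbitrary regular chain chosen for illustration):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
d, i = 2, 0                       # destination state d, starting state i
R1 = P[:2, :2]                    # R_1: transitions among states avoiding d
for n in [1, 2, 5, 10, 20]:
    Rn = np.linalg.matrix_power(R1, n)      # R_n = R_1^n, by Theorem (9)
    print(n, Rn[i].sum())         # P(T_id > n); decays geometrically
```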
(17) Example: Preservation of Markov Property at First Passage Times   Let X be a regular Markov chain with transition matrix P, and let T be the first passage time of the chain to d. Show that for any $m > 0$ and $x_r \neq d$, we have

(18)   $P(X_{T+m} = k \mid X_r = x_r \text{ for } 1 \le r < T,\ X_T = d) = p_{dk}(m)$.

Solution   Let us denote the event $\{X_r = x_r \neq d \text{ for } 1 \le r < T\}$ by $A(T)$. Then, using conditional probability, the left-hand side of (18) may be written as

(19)   $\dfrac{P(X_{T+m} = k, A(T), X_T = d)}{P(A(T), X_T = d)}$.

Now the numerator can be expanded as

$$\sum_{t=1}^{\infty} P(X_{t+m} = k, A(t), X_t = d, T = t) = \sum_{t=1}^{\infty} P(X_{t+m} = k \mid A(t), X_t = d)\, P(A(t), X_t = d, T = t)$$
$$= p_{dk}(m) \sum_{t=1}^{\infty} P(A(t), X_t = d, T = t), \quad \text{by the Markov property,}$$
$$= p_{dk}(m)\, P(A(T), X_T = d).$$

Finally, substitution into (19) yields (18).

It would be difficult to overemphasize the importance of this result in the theory of Markov chains; it is used repeatedly. [It is a special case of the "strong Markov property" that we meet later.]

To conclude this section, we show that the mass functions of $T_{id}$ and of $T_d$ are related to the transition probabilities $p_{ik}(n)$ by very elegant and useful identities. Let $f_{id}(n) = P(T_{id} = n)$, $i \neq d$, and $f_{dd}(n) = P(T_d = n)$. Define the generating functions

$$P_{ik}(z) = \sum_{n=0}^{\infty} p_{ik}(n) z^n \quad \text{and} \quad F_{id}(z) = \sum_{n=0}^{\infty} f_{id}(n) z^n,$$

with the convention that $p_{ii}(0) = 1$, $p_{ij}(0) = 0$ for $i \neq j$, and $f_{ij}(0) = 0$ for all i and j.

(20) Theorem   When $i \neq k$, we have

(21)   $P_{ik}(z) = F_{ik}(z) P_{kk}(z)$,

and otherwise

(22)   $P_{ii}(z) = 1 + F_{ii}(z) P_{ii}(z)$.

Proof   The idea of the proof is much the same as that of Example 17. For each k in S, let us define the event $A_m = \{X_m = k\}$, and let $B_m$ be the event that the first visit to k after time 0 takes place at time m. That is, $B_m = \{X_r \neq k \text{ for } 1 \le r < m,\ X_m = k\}$. Then, following a now familiar route, we write

(23)   $p_{ik}(m) = P(A_m|X_0 = i) = \sum_{r=1}^{m} P(A_m \cap B_r|X_0 = i) = \sum_{r=1}^{m} P(A_m|B_r, X_0 = i)\, P(B_r|X_0 = i)$
$= \sum_{r=1}^{m} P(A_m|X_r = k)\, P(B_r|X_0 = i)$, by the Markov property,
$= \sum_{r=1}^{m} p_{kk}(m - r) f_{ik}(r)$.

The right-hand side of (23) is a convolution, so multiplying both sides by $z^m$ and summing over all $m \ge 1$ gives $P_{ik}(z) - \delta_{ik} = F_{ik}(z) P_{kk}(z)$, where

$$\delta_{ik} = \begin{cases} 1 & \text{if } i = k \\ 0 & \text{otherwise,} \end{cases}$$

as required.
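Theorem (20) can be verified numerically: compute the coefficients $p_{ik}(n)$ by matrix powers, recover $f_{ik}(n)$ from the convolution (23), and compare the generating functions at a point $|z| < 1$. A sketch under these conventions (Python; the chain is an arbitrary illustration):

```python
import numpy as np

# An arbitrary regular two-state chain, used only for illustration.
P = np.array([[0.2, 0.8],
              [0.5, 0.5]])
N, i, k = 200, 0, 1                 # series truncation; states i != k

p = np.empty(N); pkk = np.empty(N)
Pn = np.eye(2)
for n in range(N):                  # p_ik(n) and p_kk(n) for n = 0..N-1
    p[n], pkk[n] = Pn[i, k], Pn[k, k]
    Pn = Pn @ P

# recover f_ik(n) from the convolution p_ik(n) = sum_r f_ik(r) p_kk(n - r)
f = np.zeros(N)
for n in range(1, N):
    f[n] = p[n] - np.dot(f[1:n], pkk[1:n][::-1])

z = 0.7
zn = z ** np.arange(N)
print(p @ zn, (f @ zn) * (pkk @ zn))   # P_ik(z) = F_ik(z) P_kk(z), as in (21)
```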
(24) Example: Weather   Successive days are either hot or cold, and they are also either wet or dry. From one day to the next, either the temperature changes with probability $\alpha$ or the precipitation changes with probability $1 - \alpha$. Let $f(n)$ be the probability that it is again hot and dry for the first time on the n-th day, given that it was hot and dry on day zero. Show that

(25)   $F(z) = \sum_{n=1}^{\infty} f(n) z^n = \dfrac{z^2\{1 + (1 - 2z^2)(1 - 2\alpha)^2\}}{2 - z^2 - z^2(1 - 2\alpha)^2}$.

Solution   It is helpful to visualize this Markov chain as a random walk on the vertices of a square in which steps are taken along a horizontal edge with probability $\alpha$ or a vertical edge with probability $1 - \alpha$. We identify the four states of the chain with the vertices of the square; the origin is hot and dry. The walk can return to the origin only after an even number 2n of steps, of which 2k are horizontal and $2n - 2k$ are vertical.
Hence $p_0(2n)$, the probability of returning to the origin on the 2n-th step (not necessarily for the first time), is

$$p_0(2n) = \sum_{k=0}^{n} \binom{2n}{2k} \alpha^{2k}(1 - \alpha)^{2n-2k} = \tfrac12(\alpha + (1 - \alpha))^{2n} + \tfrac12(\alpha - (1 - \alpha))^{2n} = \tfrac12((1 - 2\alpha)^{2n} + 1).$$

Hence,

$$P_0(z) = \sum_{n=0}^{\infty} p_0(2n) z^{2n} = \frac12\left(\frac{1}{1 - (1 - 2\alpha)^2 z^2} + \frac{1}{1 - z^2}\right).$$

Hence, by (22), we have

$$F(z) = \frac{P_0(z) - 1}{P_0(z)} = \frac{z^2\{1 + (1 - 2z^2)(1 - 2\alpha)^2\}}{2 - z^2(1 + (1 - 2\alpha)^2)},$$

which is (25).

If you have read Section 6.7, you will have noticed much in common between the above analysis and the results of that section. This is, of course, because the visits of a Markov chain to some given state k form a renewal process. We explore this link a little in Example 9.14.

9.4 Stationary Distributions

We now consider one of the most important properties of the transition matrix P. That is, for any $d \times d$ stochastic matrix P, the set of equations

(1)   $x_k = \sum_{1\le i\le d} x_i p_{ik}; \quad 1 \le k \le d$,

always has a solution such that

(2)   $x_i \ge 0$

and

(3)   $\sum_{i=1}^{d} x_i = 1$.

Such a solution is thus a probability mass function, and it is commonly denoted by $x = \pi = (\pi_1, \ldots, \pi_d)$. It may not be unique.

(4) Example
(a) If $P = \begin{pmatrix} \tfrac12 & \tfrac12 \\ \tfrac12 & \tfrac12 \end{pmatrix}$, then clearly $\pi = (\tfrac12, \tfrac12)$.
(b) If $P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, then it is also clear that $\pi = (\tfrac12, \tfrac12)$.
(c) If $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, then we have $\pi = (\alpha, 1 - \alpha)$ for any $\alpha \in [0, 1]$.

Note that the first chain is regular, the second periodic, and the third has two absorbing states; these chains evolve in very different ways. The mass function $\pi$ is called a stationary distribution of the chain, for the following reason.
Suppose that $\pi$ is the mass function of $X_0$. Then $X_1$ has mass function

$$\alpha_k(1) = \sum_i \pi_i p_{ik} = \pi_k,$$

because $\pi$ is a solution of (1). Hence, $X_1$ has mass function $\pi$, and by a trivial induction so does $X_n$ for all n:

(5)   $P(X_n = k) = \pi_k; \quad n \ge 0$.

Remark   The chain is sometimes said to be in equilibrium. In formal terms, (1) says that P has a positive left eigenvector corresponding to the eigenvalue 1. Experience of student calculations leads us to stress that $\pi$ is a left eigenvector. If your stationary vector $\pi$ is constant, check that you have not inadvertently found the right eigenvector. [And see (22) below.]

Here is a less trivial example.

(6) Example: Random Walk with Retaining Barriers   Let X have state space $\{0, 1, 2, \ldots, d\}$ and transition probabilities

$$p_{i,i+1} = p; \quad p_{i,i-1} = 1 - p = q; \quad p_{00} = q; \quad p_{dd} = p.$$

Then a stationary distribution must satisfy

(7)   $\pi_i = p\pi_{i-1} + q\pi_{i+1}, \quad 1 \le i \le d - 1; \qquad p\pi_0 = q\pi_1; \qquad q\pi_d = p\pi_{d-1}$.

Simple substitution shows that if $p \neq q$, then $\pi_i = \pi_0 (p/q)^i$. Because $\sum_i \pi_i = 1$, it now follows that

$$\pi_i = \frac{1 - p/q}{1 - (p/q)^{d+1}}\left(\frac{p}{q}\right)^i.$$

(8) Example   Let X have transition matrix P, and suppose that there exists a stationary distribution $\pi$ satisfying (1). Define the Markov chain $Y_n$ by $Y_n = (X_n, X_{n+1})$; $n \ge 0$. Show that Y has stationary distribution

(9)   $\eta_{ij} = \pi_i p_{ij}; \quad i \in S, j \in S$.

Solution   We just have to check that $\eta$ satisfies (1) and (3). Recall from Example 9.1.10 that Y has transition probabilities

$$P(Y_{n+1} = (k, l)|Y_n = (i, j)) = p_{kl}\delta_{jk},$$

so that

$$\sum_{i,j} \eta_{ij}\, p_{kl}\delta_{jk} = \sum_{i,j} \pi_i p_{ij}\, p_{kl}\delta_{jk} = \sum_j \pi_j p_{kl}\delta_{jk} = \pi_k p_{kl} = \eta_{kl}.$$
Furthermore,

$$\sum_{i,j} \eta_{ij} = \sum_{i,j} \pi_i p_{ij} = \sum_j \pi_j = 1, \quad \text{by (1) and (3).}$$

Hence, $\eta$ is the stationary distribution of Y.

(10) Example: Nonhomogeneous Random Walk   Let $(S_n; n \ge 0)$ be a Markov chain with transition matrix given by

$$p_{i,i+1} = \lambda_i, \quad p_{i,i-1} = \mu_i, \quad p_{ik} = 0 \text{ if } |i - k| \neq 1,$$

where $\lambda_i + \mu_i = 1$. This may be regarded as a random walk, taking positive or negative unit steps on the integers, such that the step probabilities depend on the position of the particle. Is there a stationary distribution $\pi$?

For simplicity, let us suppose that $\mu_0 = 0$ and $S_0 \ge 0$, so that the walk is confined to the nonnegative integers. Then if $\pi$ exists, it satisfies

$$\pi_0 = \mu_1\pi_1, \quad \pi_1 = \lambda_0\pi_0 + \mu_2\pi_2, \quad \pi_2 = \lambda_1\pi_1 + \mu_3\pi_3,$$

and, in general, for $k > 1$,

$$\pi_k = \lambda_{k-1}\pi_{k-1} + \mu_{k+1}\pi_{k+1}.$$

Solving these equations in order of appearance gives

$$\pi_1 = \frac{\lambda_0}{\mu_1}\pi_0; \quad \pi_2 = \frac{\lambda_0\lambda_1}{\mu_1\mu_2}\pi_0; \quad \pi_3 = \frac{\lambda_0\lambda_1\lambda_2}{\mu_1\mu_2\mu_3}\pi_0;$$

and so on. It is now easy to verify that for $n > 0$,

$$\pi_n = \frac{\lambda_0\lambda_1\cdots\lambda_{n-1}}{\mu_1\mu_2\cdots\mu_n}\pi_0.$$

This is a stationary distribution if

$$1 = \sum_{n=0}^{\infty} \pi_n = \pi_0 + \pi_0\sum_{n=1}^{\infty}\prod_{r=0}^{n-1}\frac{\lambda_r}{\mu_{r+1}},$$

and so we deduce that a stationary distribution exists if this sum converges.
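As a concrete check, the product formula for $\pi_n$ is easy to evaluate numerically; the sketch below (Python) uses constant $\lambda_i = 0.3$ and $\mu_i = 0.7$, an assumed choice for illustration only, for which the sum converges and $\pi$ is geometric.

```python
import numpy as np

lam, mu = 0.3, 0.7          # constant lambda_i, mu_i (illustrative assumption)
K = 50                      # truncation level for the infinite sum
terms = np.ones(K)
for n in range(1, K):
    terms[n] = terms[n - 1] * lam / mu   # pi_n / pi_0 = prod lambda_r / mu_{r+1}
pi = terms / terms.sum()                 # normalize so the pi_n sum to 1
print(pi[:5])                            # geometric: pi_0 (3/7)^n
```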
Having examined some consequences of (1), we now turn to the question of proving it. The existence of x satisfying (1), (2), and (3) is a famous result with many algebraic and analytical proofs. Most of these are neither elementary nor probabilistic. We prefer to give a proof that uses the ideas of probability theory and is elementary.

(11) Theorem   A regular Markov chain with transition matrix P has a stationary distribution $\pi$.

Proof   Let s be an arbitrary state of the chain with recurrence time $T_s$ and mean recurrence time $\mu_s$. For all $k \in S$, let $\rho_k(s)$ be the expected number of visits to k between successive visits to s, with the convention that $\rho_s(s) = 1$. We show that

(12)   $\pi_k = \mu_s^{-1}\rho_k(s); \quad 1 \le k \le d$,

is a stationary distribution of the chain.

First, let $I_n$ denote the indicator of the event that the chain visits k at the n-th step and has not previously revisited s, given that it started in s. Then the total number of visits to k between visits to s is

$$R_k = \sum_{n=1}^{\infty} I_n; \quad k \neq s,$$

and, in accord with our convention above, when $k = s$ we have $R_s = 1$. Now

$$T_s = 1 + \sum_{k\neq s} R_k.$$

It follows that the expected value $\rho_k(s)$ of $R_k$ is finite, and also that

(13)   $\mu_s = \sum_k \rho_k(s)$.

Furthermore,

$$\rho_k(s) = E(R_k) = \sum_{n=1}^{\infty} E(I_n) = \sum_{n=1}^{\infty} P(X_n = k, T_s \ge n \mid X_0 = s).$$

Now, for $n = 1$,

$$P(X_1 = k, T_s \ge 1 \mid X_0 = s) = p_{sk}.$$

For $n \ge 2$,

$$P(X_n = k, T_s \ge n \mid X_0 = s) = \sum_{j\neq s} P(X_n = k, X_{n-1} = j, T_s \ge n \mid X_0 = s)$$
$$= \sum_{j\neq s} P(X_n = k \mid X_{n-1} = j, T_s \ge n, X_0 = s)\, P(X_{n-1} = j, T_s \ge n \mid X_0 = s), \quad \text{by conditional probability,}$$
$$= \sum_{j\neq s} p_{jk}\, P(X_{n-1} = j, T_s \ge n - 1 \mid X_0 = s), \quad \text{by the Markov property.}$$

Hence,

(14)   $\rho_k(s) = p_{sk} + \sum_{j\neq s} p_{jk}\sum_{n=2}^{\infty} P(X_{n-1} = j, T_s \ge n - 1 \mid X_0 = s) = \rho_s(s) p_{sk} + \sum_{j\neq s} p_{jk}\rho_j(s) = \sum_j \rho_j(s) p_{jk}$.
Dividing throughout by $\mu_s$ yields $\pi_k = \sum_j \pi_j p_{jk}$; with (13), this establishes the result (12), as required.

In view of the appearance of mean recurrence times in the above proof, it is perhaps not surprising to discover another intimate link between $\pi$ and $\mu$.

(15) Theorem   For a regular Markov chain, the stationary distribution is unique and satisfies

(16)   $\pi_k\mu_k = 1; \quad k \in S$.

Hence,

(17)   $\rho_k(s) = \dfrac{\mu_s}{\mu_k}$.

Proof   Recall that $T_{ik} = \min\{n \ge 0 : X_n = k \mid X_0 = i\}$, so that, in particular, $T_{kk} = 0$ and $T_k = \min\{n \ge 1 : X_n = k \mid X_0 = k\}$. Conditioning on the outcome of the first transition of the chain, we have, for $i \neq k$,

(18)   $\mu_{ik} = E(E(T_{ik}|X_1)) = 1 + \sum_j p_{ij}\mu_{jk}$.

Also,

(19)   $\mu_k = 1 + \sum_j p_{kj}\mu_{jk}$.

By using the Kronecker delta,

$$\delta_{ik} = \begin{cases} 1 & \text{if } i = k \\ 0 & \text{otherwise,} \end{cases}$$

these may be combined as one equation valid for all i:

(20)   $\mu_{ik} + \delta_{ik}\mu_k = 1 + \sum_j p_{ij}\mu_{jk}$.

Now if $\pi$ is a stationary distribution, we multiply (20) by $\pi_i$ and sum over all i to give

$$\sum_i \pi_i\mu_{ik} + \sum_i \pi_i\delta_{ik}\mu_k = 1 + \sum_i \pi_i\sum_j p_{ij}\mu_{jk} = 1 + \sum_j \pi_j\mu_{jk},$$

on using the fact that $\pi = \pi P$. The sums $\sum_i \pi_i\mu_{ik}$ and $\sum_j \pi_j\mu_{jk}$ are equal (recall that $\mu_{kk} = 0$), so they cancel, and we are left with $\pi_k\mu_k = 1$. Because $\mu_k$ is uniquely determined and finite, the required results follow.
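Theorem (15) is easily checked numerically: compute $\pi$ as a left eigenvector, compute each $\mu_k$ from the linear equations (18) and (19), and verify $\pi_k\mu_k = 1$. A sketch (Python; the three-state matrix is arbitrary):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
d = len(P)

# stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1
A = np.vstack([P.T - np.eye(d), np.ones(d)])
b = np.concatenate([np.zeros(d), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

for k in range(d):
    idx = [i for i in range(d) if i != k]
    Q = P[np.ix_(idx, idx)]                  # transitions avoiding k
    # (18): mu_ik = 1 + sum_{j != k} p_ij mu_jk, for i != k
    mu = np.linalg.solve(np.eye(d - 1) - Q, np.ones(d - 1))
    mu_k = 1 + P[k, idx] @ mu                # (19): mu_k = 1 + sum_j p_kj mu_jk
    print(k, pi[k] * mu_k)                   # prints 1.0 each time
```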
(21) Example: Cube   Suppose that a particle performs a random walk on the vertices of a cube in such a way that when it is at a vertex, it is equally likely to move along any one of the three edges that meet there, to a neighbouring vertex. Find the mean recurrence time of each vertex.

Solution   The state space can be chosen as $S = \{i : 1 \le i \le 8\}$, and the transition probabilities are

$$p_{ij} = \begin{cases} \tfrac13 & \text{if } i \text{ and } j \text{ are joined by an edge} \\ 0 & \text{otherwise.} \end{cases}$$

Hence, $\sum_{i\in S} p_{ij} = 1$, and so the stationary distribution is $\pi_i = \tfrac18$; $1 \le i \le 8$. By Theorem 15, $\mu_i = 8$; $1 \le i \le 8$.

More generally, we note that for any finite regular doubly stochastic Markov chain, all states have the same mean recurrence time. This follows easily from the observation that in the doubly stochastic case, we have

(22)   $\sum_{i\in S} \frac{1}{d} p_{ij} = \frac{1}{d}$.

Hence, $\pi_i = d^{-1}$ is a stationary distribution, and $\mu_i = d$.

(23) Example: Library Books   My local lending library permits me to borrow one book at a time. Each Saturday I go to the library. If I have not finished reading the book, I renew it; otherwise, I borrow another. It takes me $W_r$ weeks to read the r-th book, where $(W_r; r \ge 1)$ is a sequence of independent random variables that are identically distributed. Let $X_n$ be the number of times that I have renewed the book that I take out of the library on the n-th Saturday. Show that $X_n$ is a Markov chain and find its transition matrix P. Find the stationary distribution of P when $W_r$ is uniformly distributed on $\{1, \ldots, d\}$.

Solution   Let $W_r$ have mass function $f(k)$ and distribution function $F(k)$. Let R denote the record of borrowings and renewals up to, but not including, the book I am currently reading, and suppose that $X_n = i$. Either I renew it again, so $X_{n+1} = i + 1$, or I borrow a new one, in which case $X_{n+1} = 0$. Because the $W_r$ are independent and identically distributed,

(24)   $P(X_{n+1} = i + 1 \mid X_n = i, R) = P(W_1 > i + 1 \mid W_1 > i) = \dfrac{P(W_1 > i + 1)}{P(W_1 > i)} = \dfrac{1 - F(i + 1)}{1 - F(i)} = p_{i,i+1}; \quad i \ge 0$,

by conditional probability; this depends only on $X_n = i$, so it equals $P(X_{n+1} = i + 1 \mid X_n = i)$.
Otherwise,

(25)   $P(X_{n+1} = 0 \mid X_n = i, R) = 1 - p_{i,i+1} = \dfrac{f(i + 1)}{1 - F(i)}$.

Hence, X is a Markov chain with transition probabilities given by (24) and (25).

If $W_r$ is uniform on $\{1, \ldots, d\}$, then $p_{i,i+1} = \dfrac{d - i - 1}{d - i}$ for $0 \le i < d - 1$, and $p_{d-1,0} = 1$. Hence, any stationary distribution $\pi$ satisfies

(26)   $\pi_{i+1} = \pi_i\,\dfrac{d - i - 1}{d - i}; \quad 0 \le i < d - 1$,

together with the balance equation at 0, which holds automatically once $\pi$ is normalized. Iterating (26) gives $\pi_{i+1} = \dfrac{d - i - 1}{d}\pi_0$, and because $\sum_i \pi_i = 1$, it follows that

$$\pi_{i+1} = \frac{2(d - i - 1)}{d(d + 1)}.$$
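The closed form is easy to confirm by iterating the chain to equilibrium; a sketch (Python, with $d = 6$ chosen arbitrarily):

```python
import numpy as np

# Stationary distribution of the renewal chain when W is uniform on {1,...,d}.
d = 6
P = np.zeros((d, d))
for i in range(d - 1):
    P[i, i + 1] = (d - i - 1) / (d - i)   # renew: p_{i,i+1}
    P[i, 0] = 1 - P[i, i + 1]             # finish and start a new book
P[d - 1, 0] = 1.0

pi = np.full(d, 1 / d)
for _ in range(1000):                     # iterate to convergence
    pi = pi @ P

print(pi)
print([2 * (d - i) / (d * (d + 1)) for i in range(d)])  # closed form, matches
```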
9.5 The Long Run

It is natural to speculate about the behaviour of the Markov chain X in the long run, that is, as $n \to \infty$. As usual, we obtain insight from examples before turning to the general case.

Example 9.2.9 Revisited: Survival   Recall that

$$p_{ik}(n) = \begin{cases} p^n & \text{if } k = i + n \\ qp^k & \text{if } 0 \le k \le n - 1. \end{cases}$$

Now allowing $n \to \infty$ shows that $p_{ik}(n) \to qp^k$. Notice that the collection $\pi_k = qp^k$; $k \ge 0$, is a stationary distribution. To see this, just check that $\pi = \pi P$, because

$$\pi_0 = q\sum_{k=0}^{\infty} \pi_k = q \quad \text{and} \quad \pi_k = p\pi_{k-1}.$$

This stationary distribution does not depend on the starting point of the chain.

Example (9.2.7) Revisited: Simple Random Walk   In this case, for all i and k, $\lim_{n\to\infty} p_{ik}(n) = 0$.

Example: Gambler's Ruin   This is a simple random walk that stops when it reaches 0 or K, and is therefore a Markov chain. The probability of ruin starting from i is $p_i$, and we have shown above that the probability of winning from i is $1 - p_i$. Hence, for $0 \le i \le K$, as $n \to \infty$,

$$p_{i0}(n) \to p_i, \quad p_{iK}(n) \to 1 - p_i, \quad p_{ij}(n) \to 0 \text{ for } 0 < j < K.$$

The pair $\{p_i, 1 - p_i\}$ is a stationary distribution, but it depends on the initial state of the chain.

These examples illustrate the possibilities and agree with our intuition. Roughly speaking, if a chain can get to absorbing states, then it may eventually be absorbed in one of them; on the other hand, if it has no stationary distribution, then the chance of finding it in any given state vanishes in the long run. The most interesting case, as you might expect, arises when the chain has a unique stationary distribution, and the principal result for a finite chain is then the following theorem.

(1) Theorem   Let X be regular with transition probabilities $p_{ik}$. Then, as $n \to \infty$, for any i and k,

$$p_{ik}(n) \to \pi_k > 0.$$

Furthermore,
(i) $\pi_k$ does not depend on i,
(ii) $\sum_{k\in S} \pi_k = 1$, and
(iii) $\pi_k = \sum_{i\in S} \pi_i p_{ik}$,

and so $\pi$ is the stationary distribution of X. In addition, $\pi_k\mu_k = 1$.

Our proof of Theorem 1 will rely on the idea of coupling. This technique has many forms and applications; we use a simple version here. Suppose we run two independent chains X and Y with the same transition matrix, and they first take the same value s at time T, say. Now, as we have shown above, the Markov property is preserved at such first passage times, so given $X_T = Y_T = s$, the further progress of X and Y is independent of their activities before T. Hence, on the event $T \le n$, we have

(2)   $P(X_n = k; T \le n) = P(Y_n = k; T \le n)$,

because given $T = t$, both sides are equal to $p_{sk}(n - t)$. The chains are coupled at T. Now we can tackle the theorem.

Proof of Theorem 1   Let X and Y be independent regular Markov chains with the same state space S and transition matrix $p_{ij}$. Let $X_0$ have mass function $P(X_0 = i) = 1$, and let $Y_0$ have the stationary distribution of $p_{ij}$, so that $P(Y_0 = i) = \pi_i$.
Define the Markov chain $W = (X, Y)$, and let T be the first passage time of W to the set $D = \{(x, y) : x = y\}$, namely, $T = \min\{n : X_n = Y_n\}$. Now, by (9.2.11), because X and Y are regular, so is W. Hence, T is finite with probability 1 (and has finite mean) by Theorem 9.3.15. Now, bearing in mind our preparatory remarks above, we can say

(3)   $|p_{ik}(n) - \pi_k| = |P(X_n = k) - P(Y_n = k)|$
$= |P(X_n = k, n \ge T) - P(Y_n = k, n \ge T) + P(X_n = k, n < T) - P(Y_n = k, n < T)|$
$= |P(X_n = k, n < T) - P(Y_n = k, n < T)|$, by (2),
$\le P(T > n)$,

where the last inequality follows because $|P(A \cap B) - P(A \cap C)| \le P(A)$ for any events A, B, and C. Because $P(T > n) \to 0$ as $n \to \infty$, we have $p_{ik}(n) \to \pi_k$, as required. The rest of the assertions follow because $\pi$ is the stationary distribution of X.

This is a rather useful result; to find the long-term behaviour of the chain, we just solve $\pi = \pi P$, which gives the limiting distribution of X. Indeed, we know from the results of Section 9.3 that this distribution is approached rather quickly, because from Theorem 9.3.15,

(4)   $|p_{ij}(n) - \pi_j| < P(T > n) < c\lambda^n$

for some constants $c < \infty$ and $\lambda < 1$. The probabilities $p_{ij}(n)$ are said to approach $\pi_j$ geometrically fast.
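The geometric rate in (4) is visible numerically; the following sketch (Python; an arbitrary two-state chain whose second eigenvalue is 0.7) prints $\max_{i,j}|p_{ij}(n) - \pi_j|$, which decays like $0.7^n$.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

for n in [1, 5, 10, 20]:
    Pn = np.linalg.matrix_power(P, n)
    print(n, np.abs(Pn - pi).max())   # decays geometrically, like 0.7^n
```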
(5) Example   Let X have state space $\{1, 2\}$ and transition matrix

$$P = \begin{pmatrix} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{pmatrix}.$$

From the results of Example 9.2.20, we see that when $0 < \alpha + \beta < 2$, as $n \to \infty$,

$$p_{11}(n) \to \frac{\beta}{\alpha + \beta}, \quad p_{21}(n) \to \frac{\beta}{\alpha + \beta}, \quad p_{12}(n) \to \frac{\alpha}{\alpha + \beta}, \quad p_{22}(n) \to \frac{\alpha}{\alpha + \beta}.$$

And, of course, $\left(\dfrac{\beta}{\alpha + \beta}, \dfrac{\alpha}{\alpha + \beta}\right)$ is the stationary distribution, as it must be. When $\alpha + \beta = 0$, the chain is not irreducible, and when $\alpha + \beta = 2$, the chain is not regular (being periodic).

(6) Example: Entropy of a Markov Source   Let $X = (X_n; n \ge 1)$ be a collection of jointly distributed random variables, and write $X_n = (X_1, \ldots, X_n)$. Recall that in Example 9.2.28, we defined the conditional entropy function $H(X_{n+1}|X_n) = -E[E(\log f(X_{n+1}|X_n)|X_n)]$. If $H_X = \lim_{n\to\infty} H(X_{n+1}|X_n)$ exists, then $H_X$ is said to be the entropy or uncertainty of X. Now let X be a regular Markov chain with transition matrix P. Show that $H_X$ does indeed exist and is given by

(7)   $H_X = -\sum_i \pi_i\sum_k p_{ik}\log p_{ik}$,

where $\pi$ is the stationary distribution of P.

Solution   In Example 9.2.28, it was shown that for a Markov chain X,

(8)   $H(X_{n+1}|X_n) = H(X_{n+1}|X_n) = -\sum_i\sum_k P(X_{n+1} = k|X_n = i)\log(P(X_{n+1} = k|X_n = i))\, P(X_n = i) = -\sum_i \alpha_i(n)\sum_k p_{ik}\log p_{ik}$.

Now, by Theorem 1, as $n \to \infty$, $\alpha_i(n) \to \pi_i$, and therefore taking the limit as $n \to \infty$ of the right-hand side of (8) gives (7), as required.
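Formula (7) is one line of code once $\pi$ is known; a sketch (Python, logs taken to base 2, with an arbitrary chain):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

# entropy rate H_X = -sum_i pi_i sum_k p_ik log2 p_ik, as in (7)
HX = -np.sum(pi[:, None] * P * np.log2(P))
print(HX)   # bits per step
```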
The basic limit theorem (1) tells us that in the long run the probability of finding the regular chain X in state k converges to $\pi_k$, for each $k \in S$. It seems plausible that, also in the long run, the proportion of time that X spends visiting k should converge to $\pi_k$. The following theorem shows that a more precise version of this vague statement is indeed true. It may be thought of as a type of weak law of large numbers for Markov chains.

(9) Theorem   Let X be regular with transition matrix P and stationary distribution $\pi$. Let $V_k(n)$ be the number of visits to the state k by X up to time n. Then for any $\epsilon > 0$,

(10)   $P\left(\left|\dfrac{1}{n + 1}V_k(n) - \pi_k\right| > \epsilon\right) \to 0$, as $n \to \infty$.

Proof   Some groundwork is required before setting about the proof of (10). Let $I_k(n)$ be the indicator of a visit to k at time n, so

$$I_k(n) = \begin{cases} 1 & \text{if } X_n = k \\ 0 & \text{otherwise.} \end{cases}$$

By the basic property of indicators,

(11)   $E(I_k(n)) = P(X_n = k) = \alpha_k(n)$,

and, for $m \neq r$,

(12)   $E(I_k(m)I_k(r)) = \alpha_k(s) p_{kk}(t)$,

where $s = \min\{m, r\}$ and $t = |m - r|$. These indicators will be useful because

(13)   $V_k(n) = \sum_{r=0}^{n} I_k(r)$.

Now we recall that for some constants c and $\lambda$ with $1 \le c < \infty$ and $0 < \lambda < 1$,

(14)   $|\alpha_k(n) - \pi_k| < c\lambda^n$

and

(15)   $|p_{ik}(n) - \pi_k| < c\lambda^n$.

At last, we are in a position to tackle (10). By Chebyshov's inequality,

$$P\left(\left|\frac{V_k(n)}{n + 1} - \pi_k\right| > \epsilon\right) \le \frac{1}{(n + 1)^2\epsilon^2}E\left(\sum_{r=0}^{n}(I_k(r) - \pi_k)\right)^2$$
$$= \frac{1}{(n + 1)^2\epsilon^2}\sum_{m,r=0}^{n}E(I_k(m)I_k(r) - \pi_k I_k(m) - \pi_k I_k(r) + \pi_k^2)$$
$$= \frac{1}{(n + 1)^2\epsilon^2}\sum_{m,r}\Big((\alpha_k(s) - \pi_k)(p_{kk}(t) - \pi_k) + \pi_k\big[(\alpha_k(s) - \pi_k) + (p_{kk}(t) - \pi_k) - (\alpha_k(m) - \pi_k) - (\alpha_k(r) - \pi_k)\big]\Big)$$
$$\le \frac{1}{(n + 1)^2\epsilon^2}\sum_{m,r} 2c^2(\lambda^s + \lambda^t), \quad \text{by (14) and (15),}$$
$$\to 0 \quad \text{as } n \to \infty,$$

establishing (10).

(16) Corollary   For any bounded function $g(x)$, and any $\epsilon > 0$,

(17)   $P\left(\left|\dfrac{1}{n + 1}\sum_{r=0}^{n} g(X_r) - \sum_{k\in S}\pi_k g(k)\right| > \epsilon\right) \to 0$,

as $n \to \infty$.

Proof   The key to this lies in the observation that

$$\sum_{r=0}^{n} g(X_r) = \sum_{k\in S} g(k)V_k(n).$$

Hence, we can rewrite (17) as

$$P\left(\left|\sum_{k\in S,\, g(k)\neq 0} g(k)\left(\frac{V_k(n)}{n + 1} - \pi_k\right)\right| > \epsilon\right) \le \sum_{k\in S,\, g(k)\neq 0} P\left(\left|\frac{V_k(n)}{n + 1} - \pi_k\right| > \frac{\epsilon}{d|g(k)|}\right) \to 0,$$

as $n \to \infty$, by Theorem 9 (using the fact that S is finite; here $d = |S|$).

We can give an immediate application of these results.
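Theorem 9 can be watched in a simulation: the empirical occupation frequencies settle down to $\pi$. A sketch (Python; the chain below has $\pi = (2/3, 1/3)$):

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n, x = 100_000, 0
visits = np.zeros(2)
for _ in range(n):
    visits[x] += 1
    x = rng.choice(2, p=P[x])     # one step of the chain
print(visits / n)                 # close to pi = (2/3, 1/3)
```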
(18) Example: Asymptotic Equipartition for a Markov Source   Let the regular Markov chain X with transition matrix P and stationary distribution $\pi$ represent the output from a Markov information source, as defined in Example 9.1.5. Let $X_n = (X_0, \ldots, X_n)$ have joint mass function $f(x_0, \ldots, x_n) = P(X_0 = x_0, \ldots, X_n = x_n)$, and recall from Example 9.5.6 that the entropy of X is

(19)   $H_X = -\sum_{i\in S}\sum_{k\in S}\pi_i p_{ik}\log p_{ik}$.

Show that, for any $\delta > 0$, as $n \to \infty$,

(20)   $P\left(\left|\dfrac{1}{n}\log f(X_0, \ldots, X_n) + H_X\right| > \delta\right) \to 0$.

Solution   First, from Example 9.4.8, the sequence $Y_n = (X_n, X_{n+1})$; $n \ge 0$, is a Markov chain with stationary distribution $(\pi_i p_{ik}; i \in S, k \in S)$. Second, we have

$$-\frac{1}{n}\log f(X_0, \ldots, X_n) = -\frac{1}{n}\log(\alpha_{X_0} p_{X_0X_1} p_{X_1X_2}\cdots p_{X_{n-1}X_n}) = -\frac{1}{n}\log\alpha_{X_0} - \frac{1}{n}\sum_{r=0}^{n-1}\log p_{X_rX_{r+1}},$$

and the first term converges to zero as $n \to \infty$. Finally, we note that if we set $g(Y_r) = \log p_{X_rX_{r+1}}$, then Corollary 16 applied to the Markov chain Y shows that

(21)   $P\left(\left|\dfrac{1}{n}\sum_{r=0}^{n-1}\log p_{X_rX_{r+1}} - \sum_{i,k}\pi_i p_{ik}\log p_{ik}\right| > \delta\right) \to 0$,

and (20) follows immediately, on remembering (19) and (21).

Here is a useful application of the asymptotic equipartition example. If d is the size of the alphabet, then the total number of messages of length n that the source can emit is $d^n$. Let us divide them into disjoint sets T and A, where

(22)   $T = \left\{(x_1, \ldots, x_n) : \left|\dfrac{1}{n}\log f(x_1, \ldots, x_n) + H_X\right| < \delta\right\}$

and

$$A = \left\{(x_1, \ldots, x_n) : \left|\frac{1}{n}\log f(x_1, \ldots, x_n) + H_X\right| \ge \delta\right\}.$$

By Example 18, for any $\epsilon > 0$ and $\delta > 0$, there exists $n_0 < \infty$ such that

(23)   $P\{(X_1, \ldots, X_{n_0}) \in T\} \ge 1 - \epsilon$;

because this is arbitrarily near 1, sequences in T are called typical. Also, by Example 18, $P(\{X_1, \ldots, X_{n_0}\} \in A) \le \epsilon$, which is arbitrarily small, so sequences in A are called atypical.
If you are seeking efficient transmission of messages, it therefore makes sense to concentrate on the typical sequences. It follows that a natural question is, how many typical sequences are there? At this point, we recall that by convention the logarithms in Example 18 are taken to base 2. Hence, from (22),

$$2^{-n(H_X + \delta)} < f(x_1, \ldots, x_n) < 2^{-n(H_X - \delta)}.$$

But also, from (23),

$$1 - \epsilon \le \sum_{x^n\in T} f(x_1, \ldots, x_n) \le 1.$$

Hence, the number $|T|$ of sequences in T satisfies

$$(1 - \epsilon)2^{n(H_X - \delta)} \le |T| \le 2^{n(H_X + \delta)},$$

which is to say that, roughly speaking, there are about $2^{nH_X}$ typical messages of length n.
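For a small alphabet and modest n, one can enumerate all sequences and count the typical ones directly; the sketch below (Python; arbitrary two-state chain started in equilibrium, $\delta = 0.1$) shows that $|T|$ is of the rough order $2^{nH_X}$, as claimed.

```python
import numpy as np
from itertools import product

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])            # stationary distribution of this P
HX = -np.sum(pi[:, None] * P * np.log2(P))
n, delta = 14, 0.1
count = 0
for seq in product(range(2), repeat=n + 1):
    f = pi[seq[0]]                    # chain started in equilibrium
    for a, b in zip(seq, seq[1:]):
        f *= P[a, b]
    if abs(-np.log2(f) / n - HX) < delta:
        count += 1
print(count, 2 ** (n * HX))           # |T| is of order 2^{n H_X}
```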
We end this section with a brief look at Markov chains in general. Up to now, we have dealt chiefly with finite regular chains, because such chains have useful and elegant properties with elementary proofs. However, as some examples have indicated, many chains are irregular or infinite or both. We therefore give a brief account of some of the important results for more general chains; the proofs are all omitted.

In the above sections, it was found that in a finite regular chain, any state d has a recurrence time $T_d$, which is finite with probability 1 and has finite expectation. When X has countably infinite state space, this need no longer be true, as a glance at the unrestricted simple random walk shows immediately. We therefore distinguish these cases.

(24) Definition   Let the state d have recurrence time $T_d$. Then:
(i) If $P(T_d < \infty) < 1$, then d is said to be transient.
(ii) If $P(T_d < \infty) = 1$ but $E(T_d) = \infty$, then d is said to be recurrent null (or persistent null). Otherwise, d is recurrent (or persistent).

These new types of behaviour seem to complicate matters, but the following theorem helps to simplify them again.

(25) Decomposition Theorem   The state space S can be uniquely partitioned as

$$S = T \cup C_1 \cup C_2 \cup \cdots,$$

where T is the set of transient states and each $C_i$ is an irreducible closed set of recurrent states.

This means that eventually the chain ends up in some one of the $C_i$ and never leaves it, or it remains forever in the transient states. Of course, if there is only one closed set of recurrent states, matters are even simpler, so the following theorem is useful.

(26) Theorem   The chain X has a unique stationary distribution $\pi$ if and only if S contains exactly one recurrent nonnull irreducible subchain C. For each i in C,

(27)   $\pi_i = \mu_i^{-1}$,

where $\mu_i$ is the mean recurrence time of i, and for $i \notin C$, $\pi_i = 0$.

There is also a limit theorem for general Markov chains.

(28) Theorem   For any aperiodic state k of a Markov chain,

$$p_{kk}(n) \to \frac{1}{\mu_k} \quad \text{as } n \to \infty,$$

where the limit is zero if k is null or transient. If i is any other state of the chain, then

$$p_{ik}(n) \to \frac{1}{\mu_k}P(T_{ik} < \infty).$$

9.6 Markov Chains with Continuous Parameter

We have suggested above that Markov chains can provide a good description of various real systems. However, a moment's thought about real systems is sufficient to see that many of them do not change their state at integer times. Components fail, your telephone rings, meteorites fall, at any time. Spurred by this, it is natural to want to study collections of random variables of the form $X = (X(t); t \ge 0)$. Here, $t \in \mathbb{R}$ is often regarded as the time, and then $X(t) \in \mathbb{Z}$ is regarded as the state of the system X at time t. Such a collection is often called a random process. The most obvious thing about X is that it is an uncountable collection of random variables, and it follows that a rigorous account of the behaviour of $X(t)$ is beyond the scope of an elementary text such as this. However, we can discover quite a lot informally in special cases.

First, the remarks in Section 9.1 that motivated our interest in Markov chains apply equally well in continuous time. We therefore make the following definition, analogous to Definition 9.1.1. As usual, $X(t) \in S$, where S is a subset of the integers called the state space.
(1) Definition   The process $X = (X(t); t \ge 0)$ taking values in S is a Markov process (or has the Markov property) if

$$P(X(t) = k|X(t_1) = i_1, \ldots, X(t_n) = i_n) = P(X(t) = k|X(t_n) = i_n)$$

for all possible $k, i_1, \ldots, i_n$, and any sequence $0 \le t_1 < t_2 < \cdots < t_n < t$ of times. We write

$$P(X(t + s) = k|X(s) = i) = p_{ik}(t).$$

As in the discrete case, $(p_{ik}(t); i, k \in S)$ are known as the transition probabilities, and they satisfy the Chapman–Kolmogorov equations, as follows.

(2) Theorem   For any $s > 0$ and $t > 0$ and $i, k \in S$,

$$p_{ik}(s + t) = \sum_{j\in S} p_{ij}(s) p_{jk}(t).$$

Proof   By the same arguments as we used in Theorem 9.2.12,

(3)   $p_{ik}(s + t) = \sum_{j\in S} P(X(s + t) = k, X(s) = j|X(0) = i) = \sum_{j\in S} P(X(t + s) = k|X(s) = j, X(0) = i)\, P(X(s) = j|X(0) = i) = \sum_j p_{ij}(s) p_{jk}(t)$.

Given this collection of equations, it is possible to set about solving them in special cases, without any further ado. In fact, we do just that in the next section, but as usual there are a few preliminaries. First we must ask, do any nontrivial Markov processes exist? (Obviously, the trivial process $X(t) = 1$ for all t is a Markov process, but not a very exciting one.) This question is not as stupid as it may appear to you. Recall that we defined Markov chains by visualizing a counter or particle moving around the vertices of a graph according to some specified distributions, and if necessary we could actually do it.
Here, we have started with a collection of probabilities, with no description of how we might actually produce a sequence $X(t)$ having these transition probabilities and joint distributions. Of course, the answer to the above question is, yes they do exist, and you have already met one, namely, the Poisson process. This was defined by construction in Definition 8.8.1, and we showed that it had the Markov property in Exercise 8.17.3. Henceforth, where necessary, we assume without proof that the processes we consider exist; in more advanced texts, it is shown that they do.

(4) Example: Poisson Process   It has already been shown that if $N = (N(t); t \ge 0)$ is a Poisson process with parameter $\lambda$, then $N(t) - N(0)$ has a Poisson distribution with parameter $\lambda t$, which is to say that

$$p_{ik}(t) = \frac{e^{-\lambda t}(\lambda t)^{k-i}}{(k - i)!}.$$

Hence, we can calculate

$$\sum_{j=i}^{k} p_{ij}(s) p_{jk}(t) = \sum_{j=i}^{k} \frac{e^{-\lambda s}(\lambda s)^{j-i}}{(j - i)!}\cdot\frac{e^{-\lambda t}(\lambda t)^{k-j}}{(k - j)!} = \frac{e^{-\lambda(t+s)}}{(k - i)!}\sum_{r=0}^{k-i}\binom{k - i}{r}(\lambda s)^r(\lambda t)^{k-i-r} = \frac{e^{-\lambda(t+s)}(\lambda(s + t))^{k-i}}{(k - i)!} = p_{ik}(s + t).$$

Thus, the transition probabilities of the Poisson process satisfy the Chapman–Kolmogorov equations, as they must, of course, by Theorem 2. Note that this result of itself does not show that N is a Markov process: there are processes that are not Markov whose transition probabilities nevertheless satisfy (3).

The crucial property that makes N a Markov process is the exponential distribution of times between events. This property is in fact characteristic of Markov processes in general; they wait in each successive state for an exponentially distributed time before moving to the next. Naturally, it is the lack-of-memory property of the exponential distribution that is basically responsible for this essential role in the theory of Markov processes.
However, we can do no more here than state the fact baldly; exploring its ramifications is beyond our scope. One example will suffice to give some trivial insight into these remarks.

Example: Falling Off a Log   Let $X(t)$ be a Markov chain with two states 0 and 1. Suppose that transitions from 1 to 0 are impossible. Let us consider transitions from 0 to 1. Because $X(t)$ is Markov, the transition probabilities satisfy the Chapman–Kolmogorov equations. Hence, as $p_{10}(t) = 0$, we have

$$p_{00}(s + t) = p_{00}(s) p_{00}(t) + p_{01}(s) p_{10}(t) = p_{00}(s) p_{00}(t).$$

However, as we have remarked previously, the only bounded solutions to the equation $f(x + y) = f(x)f(y)$ are of the form $f(x) = e^{-\lambda x}$. Hence,

$$p_{00}(t) = e^{-\lambda t}$$

for some $\lambda \ge 0$. The exponential density is forced upon us by the assumption that $X(t)$ is Markov.

9.7 Forward Equations: Poisson and Birth Processes

It is all very well to verify that a previously obtained solution satisfies (9.6.3). A pressing question is, can we solve (9.6.3) without already knowing the answer? We therefore develop a technique for tackling the Chapman–Kolmogorov equations in this section. First, we observe that for the Poisson process, as $t \to 0$,

(1)   $p_{k,k+1}(t) = P(N(t) = 1) = \lambda te^{-\lambda t} = \lambda t + o(t)$;†

(2)   $p_{kk}(t) = P(N(t) = 0) = e^{-\lambda t} = 1 - \lambda t + o(t)$;

(3)   $p_{kj}(t) = P(N(t) < 0) = 0$, for $j < k$;

and, for $j > k + 1$,

(4)   $p_{kj}(t) = P(N(t) > 1) = o(t)$.

† We discussed the o(·) notation in Section 7.5.

Equations (1)–(4) say that:

(5)   $N(t)$ is nondecreasing.

(6)   The probability of an event in $[s, s + t]$ is proportional to t, for small t, and does not depend on previous events.

(7)   The probability of two or more events in $[s, s + t]$, for small t, is $o(t)$.
What we are going to do now is to seek a Markov process $X(t)$ with transition probabilities $p_{ik}(t)$ that satisfy (5), (6), and (7). Because $p_{ik}(t)$ satisfies the Chapman–Kolmogorov equations (9.6.3), we have for small t,

$$p_{ik}(s + t) = \sum_{j=i}^{k} p_{ij}(s) p_{jk}(t) = p_{ik}(s)(1 - \lambda t + o(t)) + p_{i,k-1}(s)(\lambda t + o(t)) + \sum_{j=i}^{k-2} p_{ij}(s)\cdot o(t).$$

Hence,

$$\frac{p_{ik}(s + t) - p_{ik}(s)}{t} = -\lambda p_{ik}(s) + \lambda p_{i,k-1}(s) + o(1),$$

and allowing $t \to 0$ gives

(8)   $\dfrac{d}{ds} p_{ik}(s) = -\lambda p_{ik}(s) + \lambda p_{i,k-1}(s)$,

valid for all $0 \le i \le k$ [remembering that $p_{i,i-1}(s) = 0$]. At $t = 0$, we have the initial condition

(9)   $p_{ii}(0) = 1$.

The equations (8), as i and k range over all possible values, are known as the forward equations for $p_{ik}(t)$, and may be solved in various ways.

Theorem   The solution of (8) is given by

(10)   $p_{ik}(t) = \dfrac{e^{-\lambda t}(\lambda t)^{k-i}}{(k - i)!}$,

namely, the transition probabilities of the Poisson process.

Proof   We give two methods of proof. First, solve (8) with $k = i$, using (9), to find $p_{ii}(t) = e^{-\lambda t}$. Substituting this into (8) with $k = i + 1$ yields $p_{i,i+1}(t) = \lambda te^{-\lambda t}$. A simple induction now yields (10).

A second method relies on the generating function

$$G(z, t) = \sum_{k=i}^{\infty} p_{ik}(t) z^k = E(z^{N(t)}|N(0) = i).$$

Multiply (8) by $z^k$ and sum over k to obtain

(11)   $\dfrac{\partial G}{\partial t} = \lambda(z - 1)G$.

From (9), we have

(12)   $G(z, 0) = z^i$.

The solution of (11) that satisfies (12) is

(13)   $G(z, t) = z^i\exp(\lambda t(z - 1))$,

and the coefficient of $z^k$ in this expression is just (10).
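A quick numerical sanity check that (10) solves the forward equations (8) (Python; the values of $\lambda$, t, and k are arbitrary):

```python
import numpy as np
from math import factorial

lam, t, k = 2.0, 1.3, 4

def p(k, t):
    # p_{0k}(t) of the Poisson process, as in (10); zero for k < 0
    return np.exp(-lam * t) * (lam * t) ** k / factorial(k) if k >= 0 else 0.0

h = 1e-6
lhs = (p(k, t + h) - p(k, t - h)) / (2 * h)   # d/dt p_k(t), numerically
rhs = -lam * p(k, t) + lam * p(k - 1, t)      # right-hand side of (8)
print(lhs, rhs)                               # agree to high accuracy
```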
The point of this elaborate reworking is that the simple assumptions (5), (6), and (7) also lead to the Poisson process in a simple and straightforward way. It turns out that a great many useful processes can be analysed by specifying $p_{ik}(t)$ for small t, and all i and k, then obtaining the forward equations, and finally (occasionally) solving them.

(14) Example: The Simple Birth Process   A population of individuals grows as follows. Each member of the population in existence at time t may be replaced by two new individuals during $[t, t + h]$ with probability $\lambda h + o(h)$, independently of the other members of the population. Otherwise, the given member of the population remains intact during $[t, t + h]$ with probability $1 - \lambda h + o(h)$, also independently of the rest of the population. If the population at time t is $X(t)$, and $X(0) = 1$, show that

(15)   $E(z^{X(t)}) = \dfrac{z}{z + (1 - z)e^{\lambda t}}$.

What is $E(X(t))$?

Solution   Let $p_n(t) = P(X(t) = n)$ and $p_{jk}(t) = P(X(t) = k|X(0) = j)$. Suppose that $X(t) = i$. Because each individual is replaced (or not) independently of all the others, we have, as $h \to 0$,

$$p_{ii}(h) = (1 - \lambda h + o(h))^i,$$
$$p_{i,i+1}(h) = i(\lambda h + o(h))(1 - \lambda h + o(h))^{i-1} = i\lambda h + o(h),$$
$$p_{ik}(h) = o(h) \text{ for } k > i + 1; \quad p_{ik}(h) = 0 \text{ for } k < i.$$
Following the by now familiar routine, we find

$$p_k(t + h) = (1 - \lambda kh) p_k(t) + \lambda(k - 1)h\, p_{k-1}(t) + o(h),$$

and so

(16)   $\dfrac{\partial}{\partial t} p_k(t) = -\lambda k p_k(t) + \lambda(k - 1) p_{k-1}(t)$.

Now we set $G_X(z, t) = E(z^{X(t)})$ and notice that, because probability generating functions are differentiable, at least for $|z| < 1$, we have

(17)   $\dfrac{\partial G_X}{\partial z} = \sum_{k=1}^{\infty} k p_k(t) z^{k-1}$.

Now, on multiplying (16) by $z^k$ and summing over k, we notice $\partial G_X/\partial z$ appearing on the right-hand side. In fact,

(18)   $\dfrac{\partial G_X}{\partial t} = \lambda z(z - 1)\dfrac{\partial G_X}{\partial z}$.

Also, because $X(0) = 1$,

(19)   $G_X(z, 0) = z$.

By inspection, for any differentiable function $h(\cdot)$, the function

(20)   $G(z, t) = h\left(\lambda t + \displaystyle\int^z \frac{dv}{v(v - 1)}\right)$

satisfies (18). Imposing the boundary condition (19) reveals that

$$z = h\left(\int^z \frac{dv}{v(v - 1)}\right) = h\left(\log\frac{z - 1}{z}\right).$$

Hence, the function $h(\cdot)$ is given by $h(y) = (1 - e^y)^{-1}$, and so

$$G_X(z, t) = \frac{1}{1 - \exp\left(\lambda t + \log\frac{z - 1}{z}\right)} = \frac{z}{z + (1 - z)e^{\lambda t}},$$

as required.
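The birth process is easy to simulate, because while the population is x the waiting time to the next birth is exponential with parameter $\lambda x$ (the lack-of-memory property again); a sketch (Python) checking the mean against $e^{\lambda t}$, which is derived just below:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, reps = 1.0, 1.5, 20_000
sizes = np.empty(reps)
for r in range(reps):
    x, s = 1, 0.0
    while True:
        s += rng.exponential(1 / (lam * x))   # time to next birth, rate lam*x
        if s > t:
            break
        x += 1
    sizes[r] = x
print(sizes.mean(), np.exp(lam * t))          # agree within sampling noise
```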
Now, to find $E(X(t))$, we have a choice of methods. Obviously, by differentiating (15) with respect to z and setting $z = 1$, we find

(21)   $E(X(t)) = \dfrac{\partial G_X}{\partial z}(1, t) = e^{\lambda t}$.

However, we could have obtained $E(X(t))$ without solving (18). If we assume $E[(X(t))^2]$ exists, then differentiating (18) with respect to z and setting $z = 1$ yields

$$\frac{\partial}{\partial t}\frac{\partial G_X}{\partial z}(1, t) = \lambda\frac{\partial G_X}{\partial z}(1, t).$$

This has solution given by (21). Or we could simply have noted that $X(t)$ has a geometric distribution with parameter $e^{-\lambda t}$.

9.8 Forward Equations: Equilibrium

Guided by our glances at the Poisson and simple birth processes, we can now outline a simple technique for dealing with some elementary Markov chains. The aim is to obtain forward equations, so from the Chapman–Kolmogorov equations we write, for $h > 0$,

(1)   $\dfrac{p_{ik}(t + h) - p_{ik}(t)}{h} = \dfrac{1}{h}\left(\sum_j p_{ij}(t) p_{jk}(h) - p_{ik}(t)\right)$.

We want to let $h \downarrow 0$. By inspection of (1), this is possible if for some finite numbers $(g_{jk}; j, k \in S)$ we have, as $h \to 0$,

(2)   $p_{jk}(h) = g_{jk}h + o(h)$, for $j \neq k$,

and

(3)   $p_{kk}(h) = 1 + g_{kk}h + o(h)$.

In this case, we obtain the required forward equations by letting $h \downarrow 0$ in (1), to give

(4)   $\dfrac{\partial}{\partial t} p_{ik}(t) = \sum_j p_{ij}(t) g_{jk}$.

The application of this idea is best illustrated by examples. We give perhaps the simplest here; others follow in due course.

(5) Example: Machine   A machine can be either up or down. [You can interpret this figuratively (working/not working) or literally (a lift); it makes no difference to the mathematics.] If it is up at time t, then it goes down during $[t, t + h]$ with probability $\alpha h + o(h)$, independently of its past record. Otherwise, it stays up with probability $1 - \alpha h + o(h)$ during $[t, t + h]$. Likewise, if it is down at time t, it goes up in $[t, t + h]$ with probability $\beta h + o(h)$, independently of its past, or it stays down with probability $1 - \beta h + o(h)$.
(a) If it is up at $t = 0$, find the probability that it is down at time $t > 0$.
(b) Let $N(t)$ be the number of occasions on which it has gone down during $[0, t]$. Find $E(N(t))$.
(c) Find the probability generating function $E(z^{N(t)})$.

Solution   (a) Let $X(t)$ be the state of the machine, where $X(t) = 0$ if it is up at t and $X(t) = 1$ if it is down at t. By the assumptions of the question, $X(t)$ is a Markov process, and

$$p_{01}(t + h) = p_{00}(t)\alpha h + p_{01}(t)(1 - \beta h) + o(h),$$
$$p_{00}(t + h) = p_{01}(t)\beta h + p_{00}(t)(1 - \alpha h) + o(h).$$

Hence,

(6)   $\dfrac{d}{dt} p_{01}(t) = -\beta p_{01}(t) + \alpha p_{00}(t)$,

(7)   $\dfrac{d}{dt} p_{00}(t) = -\alpha p_{00}(t) + \beta p_{01}(t)$,

and, because $X(0) = 0$,

(8)   $p_{00}(0) = 1$.

Solving (6), (7), and (8) gives the required probability of being down at t,

$$p_{01}(t) = \frac{\alpha}{\alpha + \beta}(1 - e^{-(\alpha+\beta)t}).$$
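As a check on (a), one can integrate the forward equations (6) and (7) numerically and compare with the closed form; a sketch (Python, Euler steps, with arbitrary rates):

```python
import numpy as np

alpha, beta = 2.0, 1.0
dt, T = 1e-4, 3.0
p00, p01 = 1.0, 0.0                   # initial condition (8)
for _ in range(int(T / dt)):
    d01 = -beta * p01 + alpha * p00   # equation (6)
    d00 = -alpha * p00 + beta * p01   # equation (7)
    p01 += dt * d01
    p00 += dt * d00
exact = alpha / (alpha + beta) * (1 - np.exp(-(alpha + beta) * T))
print(p01, exact)                     # agree to ~1e-4
```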
(b) The first thing to realize here is that $N(t)$ is not a Markov process. However, if we let $M(t)$ be the number of times the machine has gone up during $[0, t]$, then

$$Y(t) = (N(t), M(t))$$

is a Markov process. By the assumptions of the problem, as $h \to 0$,

$$P(Y(t + h) = (k + 1, k)|Y(t) = (k, k)) = \alpha h + o(h),$$
$$P(Y(t + h) = (k, k)|Y(t) = (k, k)) = 1 - \alpha h + o(h),$$
$$P(Y(t + h) = (k, k)|Y(t) = (k, k - 1)) = \beta h + o(h),$$
$$P(Y(t + h) = (k, k - 1)|Y(t) = (k, k - 1)) = 1 - \beta h + o(h).$$

Hence, if $f_{kj}(t) = P(Y(t) = (k, j))$, the forward equations may be derived routinely as

(9)   $\dfrac{d}{dt} f_{k,k-1}(t) = -\beta f_{k,k-1}(t) + \alpha f_{k-1,k-1}(t); \quad k \ge 1$,

(10)   $\dfrac{d}{dt} f_{kk}(t) = -\alpha f_{kk}(t) + \beta f_{k,k-1}(t); \quad k \ge 0$,

where $f_{0,-1}(t) = 0$.

Now consider a Poisson process $Z(t)$ of rate $\alpha$. By construction of $N(t)$, $P(N(t) \ge k) \le P(Z(t) \ge k)$ for all k. Hence, $E(N(t))$ exists because it is less than $E(Z(t)) = \alpha t$. In fact,

$$P(N(t) = k) = f_{kk}(t) + f_{k,k-1}(t),$$

and so $E(N(t)) = m_1(t) + m_2(t)$, where

$$m_1(t) = \sum_{k=1}^{\infty} k f_{kk}(t) \quad \text{and} \quad m_2(t) = \sum_{k=1}^{\infty} k f_{k,k-1}(t).$$

Multiplying (9) and (10) by k and summing gives

(11)   $\dfrac{dm_1}{dt} = -\alpha m_1 + \beta m_2$

and

(12)   $\dfrac{dm_2}{dt} = -\beta m_2 + \alpha\sum_k (k - 1 + 1) f_{k-1,k-1}(t) = -\beta m_2 + \alpha m_1 + \alpha\sum_k f_{k-1,k-1}(t) = -\beta m_2 + \alpha m_1 + \alpha P(N(t) = M(t)) = -\beta m_2 + \alpha m_1 + \alpha p_{00}(t)$,

because the machine is up if and only if $N(t) = M(t)$. Hence, adding (11) and (12), we have

$$\frac{d}{dt}E(N(t)) = \alpha p_{00}(t).$$
And so, using the result of (a),

$$E(N(t)) = \int_0^t \alpha\left(\frac{\beta}{\alpha + \beta} + \frac{\alpha e^{-(\alpha+\beta)\nu}}{\alpha + \beta}\right)d\nu.$$

It follows that as $t \to \infty$, $t^{-1}E(N(t)) \to \alpha\beta/(\alpha + \beta)$.

(c) Let

$$x(t, z) = \sum_{k=0}^{\infty} f_{kk}(t) z^k \quad \text{and} \quad y(t, z) = \sum_{k=1}^{\infty} f_{k,k-1}(t) z^k.$$

Then $E(z^{N(t)}) = x(t, z) + y(t, z)$, and multiplying each of (9) and (10) by $z^k$ and summing over k gives

$$\frac{\partial x}{\partial t} = -\alpha x + \beta zy \quad \text{and} \quad \frac{\partial y}{\partial t} = -\beta y + \alpha zx.$$

This pair of simultaneous differential equations is solved by elementary methods, subject to the initial conditions $x(0, z) = 1$ and $y(0, z) = 0$, to yield

$$x(t, z) = \frac{\alpha + \lambda_2(z)}{\lambda_2(z) - \lambda_1(z)}e^{\lambda_1(z)t} + \frac{\alpha + \lambda_1(z)}{\lambda_1(z) - \lambda_2(z)}e^{\lambda_2(z)t}$$

and

$$y(t, z) = \frac{(\alpha + \lambda_1)(\alpha + \lambda_2)(e^{\lambda_1(z)t} - e^{\lambda_2(z)t})}{\beta z(\lambda_2 - \lambda_1)},$$

where

$$\lambda_1(z) = \tfrac12\big[-(\alpha + \beta) + ((\alpha - \beta)^2 + 4\alpha\beta z^2)^{\frac12}\big] \quad \text{and} \quad \lambda_2(z) = \tfrac12\big[-(\alpha + \beta) - ((\alpha - \beta)^2 + 4\alpha\beta z^2)^{\frac12}\big].$$

Recalling our results about chains in discrete time, it is natural to wonder whether chains in continuous time have stationary distributions, and whether $p_{ij}(t)$ converges as $t \to \infty$. A detailed answer to these questions is far beyond our scope, but we can make some guarded remarks. Let us start with the simplest.
(13) Theorem   Let $X(t)$ be a finite Markov process with transition matrix $p_{ij}(t)$. Then $\lim_{t\to\infty} p_{ij}(t)$ exists for all i and j. If $X(t)$ is irreducible, then the limit is independent of i; we write

$$\lim_{t\to\infty} p_{ij}(t) = \pi_j.$$

Furthermore, $\pi_j$ satisfies

$$\sum_j \pi_j = 1 \quad \text{and} \quad \pi_j = \sum_i \pi_i p_{ij}(t); \quad t \ge 0;$$

$\pi$ is the stationary distribution of $X(t)$. [A chain is irreducible if for each i, j there is some finite t such that $p_{ij}(t) > 0$.]

(14) Example 5 Revisited   In this case, we have

$$p_{01}(t) = \frac{\alpha}{\alpha + \beta}(1 - e^{-(\alpha+\beta)t}) \to \begin{cases} \dfrac{\alpha}{\alpha+\beta} & \text{if } \alpha + \beta > 0 \\ 0 & \text{otherwise,} \end{cases}$$

with three similar results for $p_{00}$, $p_{10}$, and $p_{11}$. If $\alpha = \beta = 0$, then $X(t) = X(0)$ for all t. However, if $\alpha\beta > 0$, then the chain is irreducible and has stationary distribution $\pi = \left(\dfrac{\beta}{\alpha+\beta}, \dfrac{\alpha}{\alpha+\beta}\right)$. We can check that for all t,

$$\pi_0 p_{00}(t) + \pi_1 p_{10}(t) = \frac{\beta^2 + \alpha\beta}{(\alpha+\beta)^2} + \frac{\alpha\beta e^{-(\alpha+\beta)t} - \alpha\beta e^{-(\alpha+\beta)t}}{(\alpha+\beta)^2} = \frac{\beta}{\alpha+\beta} = \pi_0.$$

In practice, the state space is often countably infinite, and of course we would like to use the forward equations (4). The following theorem is relevant.

(15) Theorem   Let $X(t)$ be an irreducible Markov process with transition matrix $p_{ij}(t)$, satisfying (2) and (3) above. Then $\lim_{t\to\infty} p_{ij}(t)$ exists and is independent of i, for all j. There are two possibilities. Either

(a)   $\lim_{t\to\infty} p_{ij}(t) = \pi_j > 0$,

where $\sum_i \pi_i = 1$, $\pi_j = \sum_i \pi_i p_{ij}(t)$, and

(16)   $\sum_i \pi_i g_{ij} = 0$

for all j; or

(b)   $\lim_{t\to\infty} p_{ij}(t) = 0$.

We give no proof of this result, but you may notice with some pleasure that there are no tiresome reservations about periodicity.
(17) Example: Queue   Let $X(t)$ be the length of the queue formed before a single service point at time t. The times between arrivals are exponentially distributed with parameter $\lambda$; each individual is served on reaching the head of the queue; each service time is exponentially distributed with parameter $\mu$; interarrival times and service times are all independent of each other. It follows that as $h \to 0$, when $X(t) > 0$,

$$P(X(t + h) - X(t) = 1) = P(\text{one arrival; no service completed in } (t, t + h)) = \lambda h(1 - \mu h) + o(h) = \lambda h + o(h).$$

Likewise,

$$P(X(t + h) - X(t) = -1) = \mu h + o(h)$$

and

$$P(X(t + h) - X(t) = 0) = 1 - (\lambda + \mu)h + o(h).$$

The process $X(t)$ is Markov, by the properties of the exponential density, and the above statements show that

$$p_{i,i+1}(h) = \lambda h + o(h), \quad p_{i,i-1}(h) = \mu h + o(h), \quad p_{ii}(h) = 1 - (\lambda + \mu)h + o(h); \quad i \neq 0.$$

When $i = 0$, no service can be completed, so we have $p_{01}(h) = \lambda h + o(h)$ and $p_{00}(h) = 1 - \lambda h + o(h)$.

These supply us with all the numbers $g_{ij}$, and so by (16), to find the stationary distribution we seek a solution to the equations $\pi G = 0$; that is,

$$-\lambda\pi_0 + \mu\pi_1 = 0,$$
$$\lambda\pi_{i-1} - (\lambda + \mu)\pi_i + \mu\pi_{i+1} = 0; \quad i \ge 1.$$

Solving recursively shows that

$$\pi_i = \left(\frac{\lambda}{\mu}\right)^i\pi_0; \quad i \ge 0,$$

and so a stationary distribution exists if $\lambda/\mu < 1$, and it is given by

$$\pi_i = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^i.$$
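The equations $\pi G = 0$ can also be solved numerically on a truncated state space, confirming the geometric answer; a sketch (Python, with $\lambda = 1$ and $\mu = 2$ assumed for illustration, and the top state made reflecting so the truncated generator is conservative):

```python
import numpy as np

lam, mu, K = 1.0, 2.0, 60          # K = truncation level
G = np.zeros((K, K))
G[0, 0], G[0, 1] = -lam, lam
for i in range(1, K - 1):
    G[i, i - 1], G[i, i], G[i, i + 1] = mu, -(lam + mu), lam
G[K - 1, K - 2], G[K - 1, K - 1] = mu, -mu   # reflecting top state

# solve pi G = 0 with sum(pi) = 1
A = np.vstack([G.T, np.ones(K)])
b = np.concatenate([np.zeros(K), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

rho = lam / mu
print(pi[:5])
print((1 - rho) * rho ** np.arange(5))       # geometric, matches
```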
9.9 The Wiener Process and Diffusions

All the Markov processes that we have considered up to now have been "discrete," which is to say that they take values in the integers (or some other countable set). They have thus been "jump processes," in the sense that transitions between discrete states take place instantaneously, at times that may be fixed (as in the simple random walk) or may be random times indexed by a continuous parameter (as in the Poisson process). But it is a matter of simple observation that many real random processes do not make jump transitions in a countable set of states. In complete contrast, such processes are seen to move continuously between their possible states, which typically lie in some interval of the real line (or all the real line). We may mention noisy electronic signals, meteorological data, and the size and position of sunspots, for example.

We have seen that Markov chains have been effective and amenable in describing processes with discrete state space. It is natural to ask if there are also useful processes that are continuous and have the Markov property (Definition 9.6.1). The answer is yes, and by far the simplest and most useful of such models is the Wiener process (also known as Brownian motion, for reasons we discuss in a moment). Here it is:

(1) Definition   The random process $W(t)$, $t \ge 0$, is called the Wiener process if it satisfies:
(a) $W(0) = 0$;
(b) $W(t)$ is continuous;
(c) $W(s + t) - W(s)$ is normally distributed with mean 0 and variance $\sigma^2 t$, for all $s, t \ge 0$ and some constant $\sigma^2 > 0$;
(d) $W(t)$ has independent increments, which is to say that for any $0 < t_0 < t_1 < \cdots < t_n$, the random variables $W(t_0), W(t_1) - W(t_0), \ldots, W(t_n) - W(t_{n-1})$ are independent.

Remark   We usually take $\sigma^2 = 1$, in which case this may be called the standard Wiener process. Our Wiener processes are standard unless otherwise stated. [Note that it still seems to be an open problem as to whether pairwise independence of increments is sufficient to define $W(t)$.]

This may seem a rather abrupt and arbitrary definition, so our first task must be to justify it.
First, of course, we must check that it is indeed a Markov process satisfying Definition (9.6.1). We write, for any $t_0 < t_1 < \cdots < t_n < t_n + s$,

$$P(W(t_n + s) \le w|W(t_n) = w_n, \ldots, W(t_0) = w_0)$$
$$= P(W(t_n + s) - w_n \le w - w_n|W(t_n), W(t_n) - W(t_{n-1}), \ldots, W(t_0) = w_0)$$
$$= P(W(t_n + s) - w_n \le w - w_n|W(t_n) = w_n), \quad \text{by the independence of increments,}$$
$$= P(W(t_n + s) \le w|W(t_n) = w_n),$$

which is the Markov property, as required.

The second natural question is, why have we picked that particular set of properties to define $W(t)$? The answer most easily appears from the history and background of the process, which is also of interest in its own right.

Ancient peoples were able to cut and polish rock crystal, and also glass when it was invented. They were aware that, when lens-shaped, such objects distorted observation and focussed light. However, the crucial step of exploiting this commonplace fact for useful purposes was taken in Holland at the end of the sixteenth century. Hans Lippershey had an effective telescope by 1608, and Hans and Zacharias Janssen developed a microscope at about the same time. Galileo immediately used the telescope to revolutionize astronomy and cosmology, and Robert Hooke and others used the microscope to revolutionize our knowledge of smaller-scale aspects of the universe. In particular, Antonie van Leeuwenhoek noted tiny objects in drops of water, which he called animalcules. He observed that their motion "was so fast, and so random, upwards, downwards and round in all directions that it was truly wonderful to see." He attributed this motion to the objects being alive, and indeed he is credited with first identifying bacterial cells.

However, it was the botanist Robert Brown who (in 1822) conducted key experiments demonstrating that this erratic motion is exhibited by any sufficiently small particle, inanimate as well as animate.
He began his observations on pollen, which is in general too large to show the effect, but he observed that the smaller particles were in motion. Further, the smaller the particle, the more vigorous the motion, and he obtained the same result using "every mineral which I could reduce to powder," including arsenic and "a fragment of the Sphinx." The motion must therefore be purely mechanical, and is in fact caused by the ceaseless battering of the atoms of the fluid in which the particles are suspended.

What are the properties of the process generated by the movements of such a randomly battered particle?

(i) First, the physical nature of particles and molecules entails that their movements in nonoverlapping time intervals are independent. This is 1(d).
(ii) Second, the path of a particle is surely continuous. This is 1(b).
(iii) Third, the position of the particle at time t is the cumulative sum of an arbitrarily large number of small steps that are independent, by (i) above. The sizes of steps over time intervals of the same length have the same distribution, by the homogeneous nature of the conditions of the problem. The central limit theorem therefore applies, and we see that the position of the particle $X(t)$ is normally distributed. The mean of $X(t)$ is 0 by symmetry. For the variance, we note that the variance of the position of a discrete random walk is proportional to its duration. Because we envisage $X(t)$ as the continuous limit of such a walk, it is natural to set $\operatorname{var} X(t) = c^2 t$. That is 1(c).
(iv) Without loss of generality, we start the particle at 0. This is 1(a).

The first successful attempt at a mathematical and scientific description of the effect was undertaken by A. Einstein in 1905. (The same year as his better-known work on relativity.) He characterized the motion in terms of physical laws and constants; a description that later allowed Perrin to obtain Avogadro's number, and hence a Nobel prize. Another mathematical description is implicit in the earlier work of L. Bachelier in 1900, but this (by contrast) was ignored for half a century. In both cases, the idea underlying the model is that the effect of the impact of atoms on the particles is to force them to execute a random walk, whose steps (on any scale) are equally likely to be in any direction.
Clearly, the steps must be independent, and an application of the central limit theorem tells us that steps in any given direction must be normally distributed. These are just the properties we set out in Definition (1).

There is a third, rather less obvious question about the process $W(t)$ defined in (1). That is, does it exist? This may seem paradoxical, but recall our discussion after (9.6.3). The point is that almost every other process in this book was defined by construction, and then we deduced its joint distributions. Here we simply stated the joint distributions (implicitly), thus leaving open the question of whether a construction is possible. The answer "yes," together with a construction, was supplied by N. Wiener in a series of papers from 1918. This is why we give it that name. The proof is too intricate for inclusion here, so we move on to look at the basic properties of $W(t)$.

A key point to grasp here is that such properties fall into two types: sample-path properties and distributional properties. The idea of what we mean by that is best conveyed by giving some such properties. Here is a list of properties of the paths of $W(t)$. With probability 1,

(a) $W(t)$ is continuous;
(b) $W(t)$ is not differentiable anywhere;
(c) $W(t)$ changes sign infinitely often in any interval $[0, \epsilon]$, $\epsilon > 0$;
(d) $W(t)/t \to 0$ as $t \to \infty$.

Roughly speaking, $W(t)$ is incredibly spiky, but does not get too far away from zero. These path properties are not so easy to verify, so we turn our attention to the other kind, that is, properties of the joint distributions of $W(t)$, which are again best illustrated by examples. It is useful to recall that the $N(0, \sigma^2)$ normal density is denoted by

$$\phi_{\sigma^2}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{x^2}{2\sigma^2}\right),$$

and we omit the suffix if $\sigma^2 = 1$, so $\phi_1(x) = \phi(x)$.
Also, recall from the end of Section 8.10, the review of Chapter 8, that the joint distribution of a collection of multivariate normal random variables is determined by their means, variances, and covariances. Hence, we have for the Wiener process:

(2) Example: Joint Distribution   Because increments are independent and normally distributed, the joint density of $W(t_1), W(t_2), \ldots, W(t_n)$, where $t_1 < t_2 < \cdots < t_n$, is given by

$$f(w_1, \ldots, w_n) = \phi_{t_1}(w_1)\phi_{t_2-t_1}(w_2 - w_1)\cdots\phi_{t_n-t_{n-1}}(w_n - w_{n-1}).$$

In particular, the joint density of $W(s)$ and $W(t)$ is

(3)   $f(s, x; t, y) = \dfrac{1}{2\pi\sqrt{s(t - s)}}\exp\left\{-\dfrac{x^2}{2s} - \dfrac{(y - x)^2}{2(t - s)}\right\}, \quad 0 < s < t$.

Now, by comparison with Example (8.11), we see immediately that

$$\rho(W(s), W(t)) = \sqrt{\frac{s}{t}} \quad \text{and} \quad \operatorname{cov}(W(s), W(t)) = s.$$

Because $W(t)$ is a Markov process, its transition probabilities satisfy the Chapman–Kolmogorov equations, as we showed in Theorems 9.2.12 and 9.6.2 for discrete Markov processes. In this case, because $W(t)$ has continuous state space, the equations are expressed in terms of an integral. That is to say, by considering the possible values z of $W(u)$ for $s < u < t$, and using the Markov property at u, we have

(4)   $f(s, x; t, y) = \displaystyle\int_{\mathbb{R}} f(s, x; u, z) f(u, z; t, y)\, dz$.

In Problem 9.35, you are asked to show that the transition density in (3) does indeed satisfy these equations. Joint densities can be readily used to find joint moments and conditional densities, but there are often simpler techniques. Here is an example.
(5) Example: Conditioning   We could use (2) and (3) to obtain the conditional density of $W(s)$ given $W(t) = y$, as

$$f_{W(s)|W(t)}(x|W(t) = y) = \frac{f(s, x; t, y)}{\phi_t(y)} \propto \exp\left\{-\frac{x^2}{2s} - \frac{(y - x)^2}{2(t - s)} + \frac{y^2}{2t}\right\} \propto \exp\left\{-\frac{t}{2s(t - s)}\left(x - \frac{sy}{t}\right)^2\right\}$$

(where, for simplicity, we have omitted the normalizing constant). By inspection, this is a normal density with mean $sy/t$ and variance $s(t - s)/t$. That is to say, for $0 < s < t$,

(6)   $E(W(s)|W(t)) = \dfrac{s}{t}W(t)$,

(7)   $\operatorname{var}(W(s)|W(t)) = s\left(1 - \dfrac{s}{t}\right)$.

Hence,

(8)   $\operatorname{cov}(W(s), W(t)) = E(W(s)W(t)) = E[E(W(s)W(t)|W(t))] = E\left(\dfrac{s}{t}W(t)^2\right) = s = s \wedge t$,

as we remarked above.

In fact, in this case and in many other similar situations, it is easier and quicker to recall the clever device from Example (8.20). Thus, we first calculate $\operatorname{cov}(W(s), W(t))$ directly as

$$E(W(s)W(t)) = E\{W(s)[W(t) - W(s)] + W(s)^2\} = E(W(s)^2), \quad \text{by independent increments,}$$
$$= s = s \wedge t = \min\{s, t\}.$$

Hence, $W(s)/\sqrt{s}$ and $W(t)/\sqrt{t}$ have a standard bivariate normal density with correlation

$$\rho = E\left(\frac{W(s)}{\sqrt{s}}\frac{W(t)}{\sqrt{t}}\right) = \sqrt{\frac{s}{t}}.$$

By (8.20), we find immediately that, given $W(t)$, $W(s)/\sqrt{s}$ is normal with conditional mean

$$E\left(\frac{W(s)}{\sqrt{s}}\Big|W(t)\right) = \sqrt{\frac{s}{t}}\,\frac{W(t)}{\sqrt{t}}$$

and conditional variance

$$\operatorname{var}\left(\frac{W(s)}{\sqrt{s}}\Big|W(t)\right) = 1 - \rho^2 = 1 - \frac{s}{t},$$

as are found above.
A famous and important special case is the so-called "Brownian Bridge," which (perhaps surprisingly) turns out to be useful in making statistical inferences about empirical distribution functions.

(9) Example: Brownian Bridge. This is the process B(t), 0 ≤ t ≤ 1, defined to be W(t) conditional on the event W(1) = 0. (It is sometimes called the "tied-down Wiener process.") Because W(t) has multivariate normal joint distributions, it follows immediately that B(t) does also. To be precise, from the results of Example (5), we have EB(t) = E(W(t)|W(1) = 0) = 0; and for 0 < s < t < 1,

E(B(s)B(t)) = E(W(s)W(t) | W(1) = 0)
= E(E(W(s)W(t) | W(t), W(1) = 0) | W(1) = 0), by (5.7.4), the tower property,
= E(E(W(s)W(t) | W(t)) | W(1) = 0), by independence of the increments of W(t),
= E((s/t) W(t)² | W(1) = 0), by (6),
= (s/t) · t(1 − t), by (7),
= s ∧ t − st.

Obviously, the Brownian Bridge cannot have independent increments because it is forced to satisfy B(1) = 0. Nevertheless, it is a Markov process, as you can easily show by verifying that its transition probabilities satisfy (9.6.1). In some ways, it is unsatisfying to have a process defined as a conditional version of another, especially when the conditioning is on an event of probability zero. The following result is therefore very useful.

(10) Lemma. Let B*(t) = W(t) − tW(1), 0 ≤ t ≤ 1. Then B*(t) is also the Brownian Bridge.

Proof. B*(t) has multivariate normal joint distributions because W(t) does; obviously, B*(t) has zero mean. Also, for s < t < 1,
cov(B*(s), B*(t)) = E(W(s)W(t) − sW(t)W(1) − tW(s)W(1) + st W(1)²) = s − st − st + st = s ∧ t − st = cov(B(s), B(t)).

The proof is complete when we recall from (8.10) that multinormal distributions are determined by their first and second joint moments.

Besides conditional processes, there are several other operations on the Wiener process that have interesting and important results.

(11) Example: Scaling. For any constant c > 0, the process W*(t) = √c W(t/c) is also a Wiener process.

Solution. It is immediate that W*(t) is continuous, starts at zero, and has independent normal increments because W(t) does. Finally, we need only check the variance of an increment by calculating

var(W*(t) − W*(s)) = c E[(W(t/c) − W(s/c))²] = c(t/c − s/c) = t − s.

Hence, W*(t) satisfies the conditions to be a Wiener process.

(12) Example: Ornstein–Uhlenbeck Process. This is obtained from W(t) by another nonlinear scaling of time and size; thus,

U(t) = e^{−t} W(e^{2t}).

This process has been used by physicists as a model for the velocity of a particle in a fluid at time t. The joint distributions of U(t) are multivariate normal because those of W(t) are. In particular, we have

EU(t) = 0 and var U(t) = E[U(t)]² = e^{−2t} E(W(e^{2t})²) = 1.

Also, for s, t > 0,

E(U(t)U(t + s)) = e^{−2t−s} E[W(e^{2t}) W(e^{2(t+s)})] = e^{−s}.

Hence, U(t) and U(t + s) have the standard bivariate normal density with correlation e^{−s}; we can therefore write, if we want,

U(s + t) = e^{−s} U(t) + √(1 − e^{−2s}) Z,

where Z is a normal N(0, 1) random variable that is independent of U(t).
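This last representation makes the process easy to simulate exactly, one step at a time. A minimal sketch (not from the text; the step size, horizon, and seed are arbitrary choices) starts in the stationary N(0, 1) law and checks the variance and the lag-h covariance e^{−h}:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, h = 50_000, 100, 0.05

# Exact update over a step of length h: U(t+h) = e^{-h} U(t) + sqrt(1 - e^{-2h}) Z.
rho = np.exp(-h)
U = rng.normal(size=n_paths)           # start in the stationary N(0, 1) law
for _ in range(n_steps):
    U = rho * U + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)

U_later = rho * U + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)
print(np.var(U))                       # ≈ 1
print(np.mean(U * U_later), rho)       # E U(t) U(t+h) ≈ e^{-h}
```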
You can show that U(t) is a Markov process (exercise).

(13) Example: Drifting Wiener Process. This is obtained from the standard Wiener process W(t) by setting D(t) = σW(t) + µt. If σW(t) represents the position of a particle enjoying Brownian motion in some fluid, then µt may be interpreted as a superimposed global motion of the fluid moving with constant velocity µ. Alternatively, we may regard D(t) as the continuous limit of an asymmetric simple random walk.

(14) Example: Exponential (or Geometric) Wiener Process. This is obtained from W(t) by setting Y(t) = exp(µt + σW(t)). Because this cannot be negative, and log Y(t) has independent increments, it is popular as a model for stock prices. It is also called geometric Brownian motion.

(15) Example: Reflected Wiener Process. This is obtained from W(t) by setting Z(t) = |W(t)|. It is interpreted as the Wiener process with a "reflecting barrier" at the origin. It can be shown to be a Markov process, with density 2φ_t(z), z ≥ 0, giving

EZ(t) = (2t/π)^{1/2} and var Z(t) = (1 − 2/π) t.

(16) Example: Integrated Wiener Process. This is obtained from W(t) by setting

V(t) = ∫₀ᵗ W(u) du.

Note that the integral exists because W(t) is continuous. The process V(t) has multivariate normal joint distributions because W(t) does, but V(t) is not in this case a Markov process. Clearly, EV(t) = 0, and you can show that

var V(t) = t³/3 and cov(V(s), V(t)) = (s²/2)(t − s/3), s ≤ t.

We look at some even more interesting functions of the Wiener process shortly, but first we turn aside to make some important remarks.
In our earlier work on Markov processes, the idea of the first passage time T of the process to some value was most important. This is for two main reasons. First, such first passage times are often naturally important in the real world that our processes describe. Second, it is often useful to condition some event or expectation on the value of T, thus yielding immediate results, or at least tractable equations for solution. These may be difference, differential, or integral equations, or combinations of these. The key to the success of this approach is that the Markov property of the process is preserved at such first passage times, as we proved in the discrete case in Example 9.3.17. It is crucial to our success in studying the Wiener process that these properties remain true; we summarize the basics here.

(17) Definition
(a) The first passage time to b of the Wiener process W(t) is given by T_b = inf{t : W(t) = b}.
(b) The first exit time from (a, b) of the Wiener process W(t) is given by T = inf{t : W(t) ∉ (a, b)}, a < 0, b > 0.

Remark. We may write T = T_a ∧ T_b, with an obvious notation.

(18) Definition. Let S be a nonnegative random variable. If the event {S ≤ t} has probability 0 or 1, given W(s) for 0 ≤ s ≤ t, then S is called a stopping time (or Markov time) for W(t).

The two key properties (whose proofs we omit) are these:

(19) Lemma. The first passage times T_b and T = T_a ∧ T_b are stopping times for W(t).

(20) Theorem: Strong Markov Property. If S is a stopping time for the Wiener process W(t), then W(S + t) − W(S), t ≥ 0, is a Wiener process independent of W(s), s ≤ S.

This last result says that the Markov property is preserved at Markov (or stopping) times; this fact is extremely important and useful. Probabilists have been exceptionally inventive in exploiting the symmetry of the Wiener process coupled with the strong Markov property, but we have space for only some simple illustrative results about the maxima of the Wiener process and
the Brownian Bridge.

(21) Example: Maximum. Let M(t) = max{W(s) : 0 ≤ s ≤ t}. We note these two useful points: first, for c > 0,

(22) {M(t) ≥ c} ⊇ {W(t) ≥ c}.

Second, for c > 0, denoting the first passage time to c by T_c,

(23) {M(t) ≥ c} ≡ {T_c ≤ t},

and after T_c the process has a symmetric distribution about c that is independent of W(t), t ≤ T_c. Therefore,

P(W(t) ≤ c | M(t) ≥ c) = P(W(t) ≥ c | M(t) ≥ c)
= P({W(t) ≥ c} ∩ {M(t) ≥ c}) / P(M(t) ≥ c)
= P(W(t) ≥ c) / P(M(t) ≥ c), using (22).

Hence,

1 = P(W(t) ≤ c | M(t) ≥ c) + P(W(t) ≥ c | M(t) ≥ c) = 2 P(W(t) ≥ c)/P(M(t) ≥ c),

and so

P(M(t) ≥ c) = 2 P(W(t) ≥ c) = P(|W(t)| ≥ c) = √(2/(πt)) ∫_c^∞ exp(−w²/(2t)) dw.

It follows that M has the density

(24) f_M(x) = √(2/(πt)) exp(−x²/(2t)), x ≥ 0.

(25) Corollary: First passage. Because {T_c ≤ t} ≡ {M(t) ≥ c}, it is immediate, by setting w = c(t/v)^{1/2} in the integral and differentiating with respect to t, that

f_{T_c}(t) = |c| (2πt³)^{−1/2} exp{−c²/(2t)}, t ≥ 0.
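The identity P(M(t) ≥ c) = 2P(W(t) ≥ c) can be tested by simulation. A minimal sketch follows (not from the text; grid and seed are arbitrary). Note that the maximum over a discrete grid slightly underestimates M(t), so the estimate sits a little below the exact value unless the grid is fine.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
n_paths, n_steps, t, c = 20_000, 2_000, 1.0, 1.0
dt = t / n_steps

W = np.cumsum(rng.normal(0.0, sqrt(dt), (n_paths, n_steps)), axis=1)
M = W.max(axis=1)                      # discrete-time maximum (biased slightly low)

est = np.mean(M >= c)
exact = 2 * (1 - 0.5 * (1 + erf(c / sqrt(2 * t))))   # 2 P(W(t) >= c)
print(est, exact)                      # ≈ 0.317 for c = t = 1
```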
We can use much the same idea on the Brownian Bridge.

(26) Example. If B(t) is a Brownian Bridge, 0 ≤ t ≤ 1, show that for c > 0,

P(max_{0<t<1} B(t) > c) = e^{−2c²}.

Solution. Once again, we use the fact that after T_c the Wiener process is symmetrically distributed about c, and independent of the process before T_c. Hence,

P(T_c < 1; 0 ≤ W(1) < ε) = P(T_c < 1, 2c − ε < W(1) ≤ 2c) = P(2c − ε < W(1) ≤ 2c) = (2π)^{−1/2} e^{−2c²} ε + o(ε).

But, by definition of B(t), for 0 ≤ t ≤ 1,

P(max B(t) > c) = lim_{ε→0} P(T_c < 1 | 0 ≤ W(1) ≤ ε)
= lim_{ε→0} P(T_c < 1; 0 ≤ W(1) < ε)/P(0 ≤ W(1) ≤ ε)
= lim_{ε→0} [(2π)^{−1/2} e^{−2c²} ε + o(ε)] / [φ(0)ε + o(ε)]
= e^{−2c²}, because φ(0) = (2π)^{−1/2}.

Remark. The above technique, in which we combined the symmetry of the Wiener process with the Markov property at T_c, is called the reflection principle. This name is used because one can display a geometric visualization by taking the segment of a relevant path of the process after T_c and reflecting it in the line y = c. We proved this for the simple random walk in Lemma 5.6.19 and used it in the Hitting Time Theorem 5.6.17. You can envisage the same idea for the Wiener process by drawing a much spikier version of Figure 5.2. However, the proof in this case is beyond our scope.
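Using Lemma (10), the bridge maximum is also easy to check numerically; a sketch under arbitrary parameter choices (again, the discrete grid biases the estimated maximum slightly low):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, c = 20_000, 1_000, 1.0
dt = 1.0 / n_steps

W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
tgrid = np.arange(1, n_steps + 1) * dt
B = W - tgrid * W[:, -1:]              # B*(t) = W(t) - t W(1), Lemma (10)

print(np.mean(B.max(axis=1) > c), np.exp(-2 * c**2))   # ≈ e^{-2c²}
```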
Next, recall that earlier in this section we introduced several functions of the Wiener process, of various types. Functions of the Wiener process that are martingales turn out to be particularly useful.

(27) Theorem. If W(t) is a standard Wiener process, the following are martingales:
(a) W(t);
(b) Q(t) = {W(t)}² − t;
(c) Z(t) = exp(aW(t) − a²t/2), a ∈ ℝ.

Proof. Because W(t) = W(s) + [W(t) − W(s)] and W(t) has independent increments:
(a) E(W(t)|W(u), 0 ≤ u ≤ s) = E(W(s) + W(t) − W(s)|W(u), 0 ≤ u ≤ s) = W(s).
(b) See Example (9.2.3).
(c) E{exp(aW(s) + a[W(t) − W(s)]) | W(u), 0 ≤ u ≤ s} = e^{aW(s)} exp(½ a²(t − s)), because a(W(t) − W(s)) is N(0, a²(t − s)). The result follows.

For some applications, see Examples 9.22 and 9.23.

Remark. There is a remarkable converse to this theorem: if X(t) is a continuous process with X(0) = 0 such that X(t) and X(t)² − t are both martingales, then X(t) is the Wiener process. We omit the proof.

In fact, the existence of the third of the above martingales is particularly important. It asserts that

(28) E[exp(θW(t) − ½ θ²t) | W(u), 0 ≤ u ≤ s] = exp(θW(s) − ½ θ²s).

If we differentiate (28) with respect to θ and then set θ = 0, we find that E(W(t)|W(u), 0 ≤ u ≤ s) = W(s), which is simply the statement that W(t) is a martingale. Differentiating (28) repeatedly with respect to θ, and then setting θ = 0 at the appropriate moment, suggests that all the following (and many more) are martingales:

(29) W(t)² − t,
(30) W(t)³ − 3tW(t),
(31) W(t)⁴ − 6tW(t)² + 3t².

These can all be verified to be martingales just as in Theorem (27), but for the higher-order functions of W(t) it is a relief to know that the formal operation of differentiating with respect to θ and setting θ = 0 can be proved to yield these martingales.
Just as we found in Chapter 8, such martingales are particularly useful when taken with a stopping time T. Because we are working with continuous martingales having a continuous parameter, there are many technical considerations; we omit all the details and simply state the useful result.

(32) Theorem: Optional Stopping. Let M(t) be a continuous martingale for t ≥ 0. Suppose T is a stopping time for M(t) such that P(T < ∞) = 1. Then M(t ∧ T) is also a martingale; further, EM(T) = EM(0) if any of the following hold for some constant K:
(a) |M(t)| ≤ K < ∞, for t ≤ T;
(b) E sup_t |M(T ∧ t)| ≤ K < ∞;
(c) E M(T) ≤ K < ∞, and lim_{t→∞} E(M(t) I(T > t)) = 0;
(d) T ≤ K < ∞.

We give several applications later; here is one simple basic corollary.

(33) Example: Exiting a Strip. Let T be the time at which W(t) first hits a or b, where a < 0 and b > 0. Show that

P(W(T) = a) = b/(b − a), P(W(T) = b) = −a/(b − a).

Solution. We have that W(t) is a martingale, and for t ≤ T, |W(t)| ≤ |a| ∨ b. It is easy to see that P(T < ∞) = 1, and so, by the first form of the optional stopping theorem,

(34) 0 = EW(T) = a P(W(T) = a) + b P(W(T) = b).

Now, using P(T < ∞) = 1, we have

(35) 1 = P(W(T) = a) + P(W(T) = b),

and solving (34) and (35) gives the result.
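Example (33) is straightforward to check with a discretized walk; a sketch follows (boundaries, step size, and seed are arbitrary choices). The discrete walk overshoots the boundary slightly at exit, but the effect on the exit probabilities is small for small dt.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, dt, n_paths = -1.0, 2.0, 1e-3, 50_000

w = np.zeros(n_paths)
active = np.ones(n_paths, dtype=bool)
while active.any():
    # Advance only the paths still strictly inside (a, b).
    w[active] += rng.normal(0.0, np.sqrt(dt), active.sum())
    active &= (w > a) & (w < b)

print(np.mean(w >= b), -a / (b - a))   # P(W(T) = b) = -a/(b - a) = 1/3 here
```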
important application of martingales and the Wiener process.

(36) Example: The Option-Pricing Martingale. A popular model for a simple market comprises two available assets: a bond whose value B(t) grows at a continuously compounded constant interest rate r, so that B(t) = B(0)e^{rt}, and a stock whose price per unit is some suitable random process S(t). The model assumes that no dividends, taxes, or commissions are paid; you may buy negative amounts of stock, which is called "selling short" and may lead to a "short position."

In 1900, Bachelier suggested the Wiener process as a model for S(t), but this has the drawback that it may be negative, which stock prices never are. More recently, it has been standard to assume that S(t) follows a geometric Brownian motion, which is to say that, for some constants µ and σ²,

S(t) = exp{µt + σW(t)},

where W(t) is the standard Wiener process.

In many practical situations, to reduce uncertainty and risk, it is desirable to acquire an option to purchase stock at some time T in the future. One simple and popular type is the European call option. This confers the right (but not the obligation) to buy a unit of stock at time T (the exercise date) at cost K (the strike price). Clearly, the value of this option at time T is

V(T) = (S(T) − K)⁺ = max{S(T) − K, 0}.

The key question is: what is the fair price V(t) for this option at any other time t, where 0 ≤ t < T?

An answer to this question is suggested by our interpretation of a martingale as a fair game. The point is that, because you can always invest in the bond with interest rate r, the future value of the stock at time t must be discounted now by e^{−rt}. That is to say, suppose at time 0 you intend to buy a unit of stock at time s and sell it at time t > s. The present discounted value of the purchase is e^{−rs}S(s); the present discounted value of the sale is e^{−rt}S(t). If this is to be a "fair game," then the expected value of
the sale should equal the purchase price, so that the expectation E₀ taken with respect to the pay-off odds of this "fair game" must satisfy

(37) E₀(e^{−rt}S(t) | S(u); 0 ≤ u ≤ s) = e^{−rs}S(s).

That is to say, e^{−rt}S(t) is a martingale. It turns out that if we set µ = r − σ²/2, then e^{−rt}S(t) is indeed a martingale. It can further be shown that e^{−rt}V(t) is a martingale with respect to these same pay-off odds, and it then follows that the fair price of the option at t = 0 is

(38) v = E₀[e^{−rT}(S(T) − K)⁺],

where the expectation E₀ is taken according to the pay-off odds fixed in (37). See Example 9.26 for more details. Note that the heuristic argument above can indeed be made rigorous, but the details are well beyond our scope here.
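Valuation (38) lends itself to Monte Carlo once the drift is fixed at µ = r − σ²/2: under the pay-off odds, W(T) is N(0, T). A minimal sketch follows (parameter values and seed are arbitrary choices, with S(0) = 1 as in the model above); for these particular parameters the estimate should land near the classical Black–Scholes value of about 0.1045.

```python
import numpy as np

rng = np.random.default_rng(11)
K, r, sigma, T = 1.0, 0.05, 0.2, 1.0
n = 1_000_000

mu = r - sigma**2 / 2                  # drift fixed by the martingale condition (37)
WT = rng.normal(0.0, np.sqrt(T), n)    # W(T) under the pay-off odds
ST = np.exp(mu * T + sigma * WT)       # S(T), with S(0) = 1

v = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))   # estimate of (38)
print(v)
```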
It sometimes happens that students seeing the valuation (38) for the first time find it a little counterintuitive, because the expected value on the right side is taken with respect to the pay-off odds satisfying (37), not with respect to the "real" probability distribution of the process S(t). The following analogy, metaphor, or parable is often useful in reorienting such misdirected intuition.

(39) Example: The Bookmaker. Suppose a bookie is setting the pay-off odds (making a book) for a two-horse race. As it happens, these horses (A and B) are identical twins, their record in 1000 head-to-head races is 500 wins each, and they have just run shoulder to shoulder in training. A priori, the probability of A winning is P(A) = 1/2. But the market (the gamblers) has laid $5000 on A to win and $10,000 on B to win. If the bookie sets the pay-off odds to be evens (the same as the a priori probabilities), then she loses $5000 if B wins. Of course, she gains $5000 if A wins, and the a priori expectation of her outcome is ½ · 5000 − ½ · 5000 = 0, which is "fair." But if the bookie wanted to take risks, she could simply gamble, and this is not why she is a bookie.

In fact, suppose she sets the pay-off odds on A to be 2 : 1, and those on B to be 1 : 2. Then, whatever the outcome of the race, she is all square, which makes it a "fair game." These pay-off odds correspond to a probability distribution

P⁰(A) = 1/3, P⁰(B) = 2/3,

which is different from the empirical a priori distribution. (In practice, of course, she would shorten all the pay-off odds to guarantee a profit, or arbitrage.) But the point for the price of the option is clear; it is determined by the fair pay-off odds arising from actual market opportunities, not from any theoretical a priori valuation of the stocks.

9.10 Review and Checklist for Chapter 9

The intuitive idea of a Markov process is that, conditional on its present state, the future is independent of the past. We make this idea precise for Markov processes in discrete and continuous time. For those with discrete state space, we derive the Chapman–Kolmogorov equations and use them to examine the evolution of the chain over time. We consider first passage times and recurrence times, and examine the link to stationary distributions. Then we discuss the link between stationary distributions and the possible long-run behaviour of the chain. We consider Poisson and birth processes in particular. Finally, we turn to Markov processes with a continuous state space that do not have jump transitions. The first and classic example of such a process is the Wiener process model for Brownian motion. We examine many of its properties, with a brief glance at other diffusion processes derived from the Wiener process. Martingales derived from the Wiener process turn out to be especially useful.

Synopsis of Notation and Formulae

For a discrete time-homogeneous Markov chain Xₙ, we have:
The transition probabilities: p_ij = P(X_{n+1} = j | Xₙ = i).
The m-step transition probabilities: p_ij(m) = P(X_{n+m} = j | Xₙ = i).
The Chapman–Kolmogorov equations:
p_ik(m + n) = Σ_{j∈S} p_ij(m) p_jk(n).

The first passage time from i to k ≠ i is T_ik, with mean first passage time µ_ik = ET_ik. The recurrence time of i is T_i = min{n > 0 : Xₙ = i | X₀ = i}, with mean recurrence time µ_i = ET_i. If µ_i < ∞, then i is nonnull.

A stationary measure is a nonnegative solution (x_i; i ∈ S) of x_k = Σ_{i∈S} x_i p_ik. A stationary distribution is a stationary measure x_i such that Σ_{i∈S} x_i = 1. In this case, we write x_i = π_i.

A chain is regular if p_ij(n₀) > 0 for all i, j, and some n₀ < ∞. For a finite state space regular chain:
There is a unique stationary distribution π.
π_i µ_i = 1 for all i ∈ S.
For all pairs i, j, as n → ∞, p_ij(n) → π_j = µ_j⁻¹.
If the chain is not regular or has infinite state space, a wider range of behaviour is possible.

For a Markov chain X_t with continuous parameter, we have:
The transition probabilities: p_ij(t) = P(X_{s+t} = j | X_s = i).
The Chapman–Kolmogorov equations: p_ik(s + t) = Σ_{j∈S} p_ij(s) p_jk(t).
The first passage time from i to k ≠ i is T_ik = inf{t ≥ 0 : X_t = k | X₀ = i}, with mean first passage time µ_ik = ET_ik.
A stationary measure is a nonnegative solution (x_i; i ∈ S) of x_k = Σ_i x_i p_ik(t). A stationary distribution is a stationary measure such that Σ_i x_i = 1. In this case, we write x_i = π_i.
For a finite state space chain with p_ij(t) > 0 for all i, j:
There is a unique stationary distribution π.
For all pairs i, j, as t → ∞, p_ij(t) → π_j.
If the chain is not irreducible or has infinite state space, a much wider range of behaviour is possible.

Checklist of Terms
9.1 Markov property; Markov chain; imbedding.
9.2 transition matrix; doubly stochastic matrix; Chapman–Kolmogorov; n-step transition probabilities; regular chain; irreducible; aperiodic; absorbing state.
9.3 mean first passage time; mean recurrence time; Markov property at first passage times.
9.4 stationary distribution.
9.5 limiting distribution; coupling; transient; persistent; null.
9.6 Chapman–Kolmogorov equations.
9.7 Poisson process; birth process.
9.8 forward equations; stationary distribution.
9.9 Wiener process; Brownian Bridge; Ornstein–Uhlenbeck process; drift; reflection; geometric Wiener process; martingales; optional stopping; option pricing; strong Markov property.

9.11 Example: Crossing a Cube. One vertex O of a unit cube is at the origin (0, 0, 0). The others are at (0, 0, 1), (0, 1, 0), and so on. A particle performs a random walk on the vertices of this cube as follows. Steps are of unit length, and from any vertex it steps in the x direction with probability α, the y direction with probability β, or the z direction with probability γ, where α + β + γ = 1.
(a) Let T be the first passage time from O to the opposite vertex V = (1, 1, 1). Find E(s^T), and deduce that E(T) = 1 + α⁻¹ + β⁻¹ + γ⁻¹.
(b) Let X be the number of visits that the walk makes to V before the first return to O. Show that E(X) = 1.

Solution. The walk visits O whenever an even number of steps has been taken in all three possible directions (x, y, and z directions). Thus,

u(2n) = P(the walk visits O at the 2nth step) = Σ_{i+j+k=n} [(2n)!/((2i)!(2j)!(2k)!)] α^{2i} β^{2j} γ^{2k}.

Now write β + γ − α = a, α + γ − β = b, and α + β − γ = c. It is easy to check, by expanding each side, that
4u(2n) = (α + β + γ)^{2n} + (β + γ − α)^{2n} + (α + γ − β)^{2n} + (α + β − γ)^{2n} = 1 + a^{2n} + b^{2n} + c^{2n}.

Hence,

U(s) = Σ_{n=0}^∞ s^{2n} u(2n) = ¼ [1/(1 − s²) + 1/(1 − a²s²) + 1/(1 − b²s²) + 1/(1 − c²s²)].

Similarly, starting from O, the walk visits V whenever an odd number of steps has been taken in all three possible directions. Hence,

u_V(2n + 1) = P(the walk visits V at the (2n + 1)th step) = Σ_{i+j+k=n−1} [(2n + 1)!/((2i + 1)!(2j + 1)!(2k + 1)!)] α^{2i+1} β^{2j+1} γ^{2k+1}.

Now it is easy to check, as above, that 4u_V(2n + 1) = 1 − a^{2n+1} − b^{2n+1} − c^{2n+1}. Hence,

U_V(s) = Σ_{n=1}^∞ u_V(2n + 1) s^{2n+1} = (s³/4) [1/(1 − s²) − a³/(1 − a²s²) − b³/(1 − b²s²) − c³/(1 − c²s²)].

Now we use (9.3.21) to see that E(s^T) = U_V(s)/U(s). Hence, evaluating (d/ds) E(s^T) at s = 1 yields E(T) = 1 + α⁻¹ + β⁻¹ + γ⁻¹, as required.

(b) Let ρ be the probability that the walk returns to O before ever visiting V. By symmetry, this is also the probability that a walk started at V never visits O before returning to V. For X = k ≥ 1, it is necessary for the walk to reach V before revisiting O, then revisit V on k − 1 occasions without visiting O, and finally return to O with no further visit to V. Hence, P(X = k) = (1 − ρ) ρ^{k−1} (1 − ρ), by the Markov property. Therefore, E(X) = Σ_{k=1}^∞ k ρ^{k−1} (1 − ρ)² = 1.
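The formula E(T) = 1 + α⁻¹ + β⁻¹ + γ⁻¹ is easy to test by simulation; a sketch follows (the probabilities and seed are arbitrary choices). It tracks only the parity of the number of steps taken in each direction, since that determines the current vertex.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, gamma = 0.5, 0.3, 0.2
n_walks = 100_000

pos = np.zeros((n_walks, 3), dtype=int)     # coordinates of each walk, mod 2
T = np.zeros(n_walks, dtype=int)
active = np.ones(n_walks, dtype=bool)
step = 0
while active.any():
    step += 1
    idx = np.flatnonzero(active)
    axis = rng.choice(3, size=idx.size, p=[alpha, beta, gamma])
    pos[idx, axis] ^= 1                     # flip the chosen coordinate
    done = pos[idx].sum(axis=1) == 3        # reached V = (1, 1, 1)
    T[idx[done]] = step
    active[idx[done]] = False

print(T.mean(), 1 + 1/alpha + 1/beta + 1/gamma)   # ≈ 11.33 here
```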
(1) Exercise: Suppose that at every step the walk may remain at its current vertex with probability δ, where now α + β + γ + δ = 1. Find: (a) the mean recurrence time of O; (b) E(X); (c) E(T).
(2) Exercise: Let W be the vertex (1, 1, 0), and define T̂ to be the number of steps until the walk first visits V or W, starting from O. (That is to say, T̂ is the first passage time from O to {V, W}.) Show that E(T̂) = (α⁻¹ + β⁻¹)γ.

9.12 Example: Reversible Chains. A collection of random variables (X(n); n ∈ ℤ) is called reversible if (X(n₁), X(n₂), ..., X(nᵣ)) has the same distribution as (X(m − n₁), X(m − n₂), ..., X(m − nᵣ)) for all m and n₁ < n₂ < ... < nᵣ. Let X be an irreducible nonnull recurrent aperiodic Markov chain.
(a) Prove that the Markov chain X with transition probabilities p_ij is reversible if and only if it is stationary and there exist (π_i; i ∈ S) such that, for all i, j ∈ S,
(1) π_i > 0,
(2) Σ_{i∈S} π_i = 1,
(3) π_i p_ij = π_j p_ji.
(b) Prove further that if X is stationary, then it is reversible if and only if
(4) p_{i₁i₂} p_{i₂i₃} ··· p_{iᵣi₁} = p_{i₁iᵣ} p_{iᵣiᵣ₋₁} ··· p_{i₂i₁}
for any finite sequence of states i₁, i₂, ..., iᵣ in S.

Solution. (a) The truth of (1), (2), and (3) implies that π is the stationary distribution of X, for summing (3) over i yields
Σ_i π_i p_ij = π_j, as required. Next, we note that (3) implies

(5) π_i p_ij(n) = π_j p_ji(n).

To see this, consider any n-step path from i to j; using (3) repeatedly gives

π_i p_{ii₁} p_{i₁i₂} ··· p_{i_{n−1}j} = p_{i₁i} π_{i₁} p_{i₁i₂} ··· p_{i_{n−1}j} = ··· = p_{i₁i} p_{i₂i₁} ··· p_{ji_{n−1}} π_j,

after repeated applications of (3). Now summing over all paths from i to j gives (5). Applying (5) then shows that

P(X_{n₁} = i₁, X_{n₂} = i₂, ..., X_{nᵣ} = iᵣ) = π_{i₁} p_{i₁i₂}(n₂ − n₁) ··· p_{iᵣ₋₁iᵣ}(nᵣ − nᵣ₋₁) = P(X_{m−n₁} = i₁, ..., X_{m−nᵣ} = iᵣ),

as required. The converse is obvious, and so we have finished (a).

(b) If the chain is reversible, then (3) holds. Hence, writing each factor as p_{iₖiₖ₊₁} = (π_{iₖ₊₁}/π_{iₖ}) p_{iₖ₊₁iₖ},

p_{i₁i₂} p_{i₂i₃} ··· p_{iᵣi₁} = (π_{i₂}/π_{i₁}) p_{i₂i₁} (π_{i₃}/π_{i₂}) p_{i₃i₂} ··· (π_{i₁}/π_{iᵣ}) p_{i₁iᵣ},

which is (4) because all the π_i cancel successively. Conversely, if (4) holds, we may choose two states, say i₁ = i and iᵣ = j, as fixed. Then summing (4) over the remaining indices yields

p_ij(r − 1) p_ji = p_ji(r − 1) p_ij.

Now, as the chain is aperiodic and nonnull recurrent, we can let r → ∞ and obtain (3), as required.

Remark. Equations (3) are called the detailed balance equations. This example is important because a remarkable number of Markov chains are reversible in equilibrium (particularly those encountered in examinations). In the exercises, you may assume that X is aperiodic and irreducible.
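The detailed balance equations (3) are simple to check numerically. The sketch below (the particular rates are arbitrary) builds a small birth–death chain of the kind appearing in Exercise (8) below, finds its stationary distribution, and verifies that diag(π)P is symmetric, which is exactly (3).

```python
import numpy as np

# A small birth-death chain: p_{i,i+1} = lam[i], p_{i,i-1} = mu[i], rest on the diagonal.
lam = np.array([0.5, 0.4, 0.3, 0.0])
mu = np.array([0.0, 0.2, 0.3, 0.6])
n = 4
P = np.zeros((n, n))
for i in range(n):
    if i + 1 < n: P[i, i + 1] = lam[i]
    if i - 1 >= 0: P[i, i - 1] = mu[i]
    P[i, i] = 1 - P[i].sum()

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

D = pi[:, None] * P                  # D[i, j] = pi_i p_ij
print(np.allclose(D, D.T))           # detailed balance (3) holds: True
```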
(6) Exercise: Let (Xₙ; n ∈ ℤ) be a Markov chain, and let Yₙ = X₋ₙ be the chain X reversed. Show that, for n₁ < n₂ < ... < nᵣ,
P(Y_{nᵣ} = k | Y_{n₁}, ..., Y_{nᵣ₋₁}) = P(Y_{nᵣ} = k | Y_{nᵣ₋₁}).
Show that if X has stationary distribution π and transition probabilities P, then, in equilibrium, Y is a Markov chain with transition probabilities q_ij = π_j π_i⁻¹ p_ji.
(7) Exercise (6 continued): Let X be a Markov chain with transition probabilities p_ij. Show that if there exist a stochastic matrix (q_ij) and a mass function (π_i) such that π_i q_ij = π_j p_ji, then q_ij is the transition matrix of X reversed, and π_i is its stationary distribution.
(8) Exercise: Let Xₙ be a Markov chain on the nonnegative integers with transition matrix
p_ij = λ_i if j = i + 1 > 0; p_ij = µ_i if j = i − 1 ≥ 0; p_ij = 0 otherwise; and p₀₀ = µ₀.
Show that X is reversible in equilibrium. Deduce that X has a stationary distribution if and only if
Σ_{n=1}^∞ Π_{k=1}^n (λ_{k−1}/µ_k) < ∞.
(9) Exercise: Let Xₙ and Yₙ be independent reversible Markov chains with stationary distributions π and ν, respectively. Let Zₙ = (Xₙ, Yₙ). Show that Zₙ is reversible in equilibrium.

9.13 Example: Diffusion Models
(a) Two separate containers together contain m distinguishable particles. At integer times t = 1, 2, ..., one of the particles is selected at random and transferred to the other container. Let Xₙ be the number of particles in the first container after the nth transfer. (i) Show that (Xₙ; n ≥ 0) is a Markov chain and write down its transition matrix. (ii) Find the stationary distribution of X.
(b) The two containers C₁ and C₂ are now separated by a semipermeable membrane. At each time t, a particle is selected at random; if it is in C₁, then it is transferred with probability α; if it is in C₂, then it is transferred with probability β. Otherwise, the particles remain where they are. Show that X has stationary distribution

(1) π_i = C(m, i) α^{m−i} β^i / (α + β)^m,

where C(m, i) denotes the binomial coefficient.
Solution. (a) Given X₀, ..., Xₙ = j, the probability that a particle in C₁ is selected for transfer is j/m, and the probability that a particle in C₂ is selected for transfer is (m − j)/m. Hence, X is a Markov chain and

(2) p_{j,j+1} = (m − j)/m, p_{j,j−1} = j/m, p_{jk} = 0 when |k − j| ≠ 1.

The chain is of the type considered in Exercise 9.12.8. Therefore, the chain is reversible, and the stationary distribution may be obtained by solving π_i p_ij = π_j p_ji. Thus,

π_{i+1} = π_i p_{i,i+1}/p_{i+1,i} = π_i (m − i)/(i + 1) = π₀ C(m, i + 1), on iterating.

Because Σ_{i=0}^m π_i = 1, it follows that π₀⁻¹ = Σ_{i=0}^m C(m, i) = 2^m, and so π_i = C(m, i) 2^{−m}. This is a symmetric binomial distribution.

(b) By the same reasoning as given in (a), this is a Markov chain with

(3) p_{i,i+1} = β(m − i)/m, p_{i,i−1} = αi/m, p_{ii} = 1 − αi/m − β(m − i)/m, p_ij = 0 when |i − j| > 1.

Again, this is reversible, so the stationary distribution is given by

π_{i+1} = π_i p_{i,i+1}/p_{i+1,i} = π_i β(m − i)/(α(i + 1)) = π₀ (β/α)^{i+1} C(m, i + 1), on iterating.

Now requiring Σ π_i = 1 yields

π_i = C(m, i) α^{m−i} β^i / (α + β)^m,

which is an asymmetric binomial distribution.

Remark. This is the Ehrenfest model for diffusion (and heat transfer) (1907).

(4) Exercise: Is it true that, as n → ∞, p_ij(n) → π_j in either case (a) or case (b)?
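A simulation sketch of case (a) follows (m, run length, and seed are arbitrary choices). Long-run occupation frequencies match the binomial stationary distribution, whatever the answer to the exercise above about the convergence of p_ij(n) itself.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(9)
m, n_steps = 10, 500_000

x = m // 2
counts = np.zeros(m + 1)
for _ in range(n_steps):
    # A uniformly chosen particle moves: down w.p. x/m, up w.p. (m - x)/m.
    x += -1 if rng.random() < x / m else 1
    counts[x] += 1

pi = np.array([comb(m, i) for i in range(m + 1)]) / 2**m
print(np.round(counts / n_steps, 4))   # empirical occupation frequencies
print(np.round(pi, 4))                 # binomial stationary distribution
```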
(5) Exercise: (a) Show that E(Xₙ) = m/2 + (1 − 2/m)ⁿ (E(X₀) − m/2), when α = β = 1. (b) What is E(Xₙ) when α = 1 = β?
(6) Exercise: Bernoulli Diffusion Model. Two adjacent containers C₁ and C₂ each contain m particles. Of these 2m particles, m are of type A and m are of type B. At t = 1, 2, ..., one particle is selected at random from each container, and these two particles are each transferred to the other container. Let Xₙ be the number of type A particles in C₁. (a) Show that X is a Markov chain and write down its transition matrix. (b) Find the stationary distribution. (c) Is the stationary distribution the limiting distribution of X?
(7) Exercise: In the above exercise, suppose that the containers are separated by a semipermeable membrane. In this case, if particles of different types are chosen, they are exchanged with probability α if the type A particle is in C₁, or with probability β if the type A particle is in C₂. Find the stationary distribution of X.

9.14 Example: The Renewal Chains. Let (fₙ; n ≥ 1) satisfy

(1) fₙ ≥ 0 and Σ_{i=1}^n f_i ≤ 1.

Define a sequence (uₙ; n ≥ 0) by u₀ = 1 and

(2) uₙ = Σ_{r=1}^n f_r u_{n−r}; n ≥ 1.

Such a sequence (uₙ; n ≥ 0) is called a renewal sequence.
(a) Show that (uₙ; n ≥ 0) is a renewal sequence if and only if there is a Markov chain Uₙ such that, for some state s ∈ S, uₙ = P(Uₙ = s | U₀ = s).
(b) Let X be a random variable having the probability mass function (fₙ; n ≥ 1). Show that the chain U is recurrent if Σₙ fₙ = 1, and nonnull if E(X) < ∞.
(c) Explain why uₙ is called a renewal sequence.

Solution. (a) Let uₙ and fₙ be as defined above. Define the sequence (Fₙ; n ≥ 0) by F₀ = 0 and Fₙ = Σ_{r=1}^n f_r; n ≥ 1. Next, let (Uₙ; n ≥ 0) be a Markov chain taking values in the nonnegative integers, with U₀ = 0, and having
transition probabilities

(3) p_{i0} = f_{i+1}/(1 − F_i),
(4) p_{i,i+1} = 1 − p_{i0} = (1 − F_{i+1})/(1 − F_i); i ≥ 0.

Now let us calculate the first return probability f₀₀(n), that is, the probability that the chain first returns to 0 at the nth step. At each stage, the chain either does so return or increases by 1. Hence,

(5) f₀₀(n) = p₀₁ p₁₂ ··· p_{n−2,n−1} p_{n−1,0} = [(1 − F₁)/(1 − F₀)] [(1 − F₂)/(1 − F₁)] ··· [(1 − F_{n−1})/(1 − F_{n−2})] [fₙ/(1 − F_{n−1})] = fₙ.

It follows by conditional probability and the Markov property that

(6) P(Uₙ = 0) = Σ_{r=1}^n f₀₀(r) P(U_{n−r} = 0) = Σ_{r=1}^n f_r P(U_{n−r} = 0).

Because P(U₀ = 0) = 1, it follows by comparison with (2) that uₙ = P(Uₙ = 0 | U₀ = 0), as required.
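The recursion (2) is simple to compute. A minimal sketch follows (an illustrative choice f_r = 2^{−r}, so E(X) = 2; the horizon is arbitrary); it checks the limit uₙ → 1/E(X) anticipated by the exercises below.

```python
import numpy as np

N = 50
f = np.zeros(N + 1)
f[1:] = 0.5 ** np.arange(1, N + 1)     # f_r = 2^{-r}, a proper mass function

u = np.zeros(N + 1)
u[0] = 1.0
for n in range(1, N + 1):              # the defining recursion (2)
    u[n] = sum(f[r] * u[n - r] for r in range(1, n + 1))

EX = sum(r * f[r] for r in range(1, N + 1))
print(u[-5:], 1 / EX)                  # u_n -> 1/E(X) = 0.5 for this choice
```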
renewal processes defined in Section 6.7. Events may occur at integer times and the intervals between successive events are independent and identically distributed random variables (Xi ; i ≥ 1), where f X (r ) = fr. An event occurs at n = 0. Now the construction of the chain U in (3) and (4) allows us to identify visits of U to zero with the occurrence of an event in this renewal process. At any time n, the state Un of the chain is just the time elapsed since the most recent event of the renewal process. Thus, Un is the current life (or age) of the renewal process at time n. Finally, un is just the probability that an event of the renewal process occurs at time n. Show that if un and vn are renewal sequences, then unvn is a renewal sequence. Show that if un is a renewal sequence, then (und ; u ≥ 0) is a renewal sequence for any Exercise Exercise fixed d. (10) Exercise Let (Xi ; i ≥ 1) be the interevent times of a discrete renewal process, and at any time n let Bn be the time until the next following event of the process. (That is, Bn is the excess life or balance of life.) Show that Bn is a Markov chain, and find the stationary distribution when it exists. (11) Exercise Write down the transition probabilities of the chain Un reversed in equilibrium, and also write down the transition probabilities of Bn reversed in equilibrium. Explain your answers. (Hint: Use Exercise 9.11.6.) (12) Exercise (13) Exercise Use Theorem 9.6.5 to show that limn→∞ un = π0. Recall the recurrent event (renewal) processes of Section 6.7. Use (12) to show that as n → ∞, un → 1 E(X 2) and vn → 1 E(X 2). 9.15 Example: Persistence Let i and j be states of the Markov chain X. We write i → j if there exists n < ∞ such that pi j (n) > 0. Let Vi j be the number of occasions on which X visits j, given that initially X 0 = i. Define and recall that Ti j is the first passage time from i to j. ηi j = P(Vi j
|
= ∞), and recall that T_ij is the first passage time from i to j.
(a) Show that η_ii = 1 if i is persistent, and η_ii = 0 if i is transient.
(b) Show that η_ij = P(T_ij < ∞) if j is persistent, and η_ij = 0 if j is transient.
(c) Show that if i → j and i is persistent, then η_ij = η_ji = 1.

Solution. (a) First note that V_ii ≥ 1 if and only if T_i < ∞, so P(V_ii ≥ 1) = P(T_i < ∞). Next,

P(V_ii ≥ 2) = P(V_ii ≥ 2 | V_ii ≥ 1) P(V_ii ≥ 1) = P(V_ii ≥ 2 | T_i < ∞) P(V_ii ≥ 1).

However, we have shown in Example 9.3.17 that the Markov property is preserved at T_i when it is finite, and therefore P(V_ii ≥ 2 | T_i < ∞) = P(V_ii ≥ 1). Now an obvious induction shows that, as n → ∞,

P(V_ii ≥ n) = (P(V_ii ≥ 1))ⁿ = (P(T_i < ∞))ⁿ → 1 if P(T_i < ∞) = 1, and → 0 if P(T_i < ∞) < 1,

as required.

(b) Using the same idea as in (a), we write

P(V_ij ≥ m) = P(V_ij ≥ m | V_ij ≥ 1) P(V_ij ≥ 1) = P(V_ij ≥ m | T_ij < ∞) P(T_ij < ∞) = P(V_jj ≥ m − 1) P(T_ij < ∞),

because the Markov property is preserved at T_ij. Now allowing m → ∞ gives the result, by (a).

(c) Because i is persistent,

1 = P(V_ii = ∞) = P({V_ij = 0} ∩ {V_ii = ∞}) + P({V_ij > 0} ∩ {V_ii = ∞})
≤ P(V_ij = 0) + P({T_ij < ∞} ∩ {V_ii = ∞})
= 1 − P(T_ij < ∞) + P(T_ij < ∞) P(V_ji = ∞),

because the Markov property is preserved at T_ij.
Hence, P(V_ji = ∞) ≥ 1, and therefore η_ji = 1. Hence, P(T_ji < ∞) = 1, and so j → i. It follows that η_ij = 1.

(1) Exercise: Show that if i is persistent and i → j, then j is persistent.
(2) Exercise: Show that η_ij = 1 if and only if P(T_ij < ∞) = P(T_j < ∞) = 1.
(3) Exercise: Show that if i → j and j → i, then i and j have the same class and period.
(4) Exercise: Show that if X is irreducible and persistent, then P(T_ij < ∞) = P(T_ji < ∞) = 1.
(5) Exercise: Show that if i → j but j ↛ i, then i is transient.

9.16 Example: First Passages and Bernoulli Patterns. Let X be a finite Markov chain with n-step transition probabilities p_ij(n). As usual, T_ij is the first passage time from i to j, with mean µ_ij; T_j is the recurrence time of j, with mean µ_j. We write F_ij(s) = E(s^{T_ij}), and P_ij(s) = Σ_{n=0}^∞ p_ij(n) sⁿ.

(a) A biased coin is tossed repeatedly; let Xₙ be the outcome of tosses n + 1, n + 2, and n + 3, for n ≥ 0. [Thus, (Xₙ; n ≥ 0) is a Markov chain with state space S comprising all triples using H and T, namely HHH, HHT, HTH, and so on.] Show that, for any i and j ∈ S,

(1) µ_ij = [1 + p_jj(1) + p_jj(2) − p_ij(1) − p_ij(2)] µ_j.

Deduce that if i = HHH and j = THT, and the coin is fair, then µ_ij = 10 and µ_ji = 14.
(b) Let D be a given subset of the states of a finite Markov chain (Yₙ; n ≥ 0), and let T_sD be the first passage time from the state s ∉ D into D, with mean µ_sD = E(T_sD). Also, let φ_sj be the probability that the chain first enters D at the state j. Show that, for i ∈ D,

(2) µ_si = µ_sD + Σ_{j∈D} φ_sj µ_ji.

(c) Hence, show that in an unlimited sequence of tosses of a fair coin, the probability that the consecutive sequence THT occurs before HHH is 7/12.

Solution. (a) From Theorem 6.2.13 and Theorem 9.3.20, we have that

(3) µ_ij = lim_{s↑1} (1 − F_ij(s))/(1 − s),
(4) = lim_{s↑1} (P_jj(s) − P_ij(s)) / ((1 − s) P_jj(s)).

Now, because tosses of the coin are independent, we have p_ij(n) = p_jj(n) for n ≥ 3. Also, using Theorems 6.2.13 and 9.3.20 again gives

(5) lim_{s↑1} (1 − s) P_jj(s) = µ_j⁻¹,

and now (3), (4), and (5) give (1).

If the chance of a head is p, then p_ij(1) = p_ij(2) = p_jj(1) = 0 and p_jj(2) = p(1 − p). Hence, µ_ij = [1 + p(1 − p)] µ_j. When the coin is fair, p = ½ and µ_j = 8, so µ_ij = 10. Likewise, p_ji(1) = p_ji(2) = 0, while p_ii(1) = p and p_ii(2) = p². Hence, when the coin is fair, µ_ji = (1 + ½ + ¼) · 8 = 14.

(b) Let D_j denote the event that the chain first enters D at j. Then

µ_si = E(T_si) = E(T_si − T_sD + T_sD) = E(T_si − T_sD) + µ_sD = Σ_{j∈D} E(T_si − T_sj | D_j) φ_sj + µ_sD.

However, given D_j, the chain continues its journey to i independently of the past, so that E(T_si − T_sj | D_j) = µ_ji, and (2) follows.
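As a numerical check of part (c), the following sketch (trial count and seed are arbitrary choices) runs the race between THT and HHH directly on simulated fair-coin tosses.

```python
import numpy as np

rng = np.random.default_rng(13)
n_trials = 200_000
wins = 0
for _ in range(n_trials):
    last = ""                           # the most recent three tosses
    while True:
        last = (last + ("H" if rng.random() < 0.5 else "T"))[-3:]
        if last == "THT":
            wins += 1
            break
        if last == "HHH":
            break

print(wins / n_trials, 7 / 12)          # P(THT before HHH) = 7/12
```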