… that is to say,

(7)  E(T_si − T_sj | D_j) = µ_ji.

Substituting (7) into (6) gives (2).

(c) Now let the outcomes of successive tosses be (S_n; n ≥ 1) and define the Markov chain (Y_n; n ≥ 0) by Y_0 = φ = s, Y_1 = S_1, Y_2 = S_1 S_2, and Y_n = S_{n−2} S_{n−1} S_n for n ≥ 3. Thus, on the third step, the chain enters the closed irreducible subset of sequences of length 3. Setting HHH ≡ 1 and THT ≡ 2, so that D = {HHH, THT} = {1, 2}, we have from (2) that

φ_s1 = (µ_s2 + µ_21 − µ_s1)/(µ_12 + µ_21).

Now we showed in Example 6.15 that µ_s1 = 14 and µ_s2 = 10. Also, in (b), we established that µ_12 = 10 and µ_21 = 14. Hence,

φ_s1 = (10 + 14 − 14)/(10 + 14) = 5/12,

and φ_s2 = 7/12 is the probability that THT occurs before HHH.

(8) Exercise: Show that for a fair coin the expected number of tosses to obtain HHH after HTH is 12, and the expected number required to obtain HTH after HHH is 8.
(9) Exercise: Show that the probability that HHH is observed before HTH is 3/10.
(10) Exercise: Show that the probability that TTH is observed before HHH is 7/10.
(11) Exercise: A fairground showman offers to play the following game. On payment of an entry fee of £1, a customer names a possible outcome of a sequence of 3 coin tosses; the showman then names another possible outcome, and a fair coin is tossed repeatedly until one of the named sequences is obtained in three successive throws. The player who named that sequence wins.
(i) Show that the probability that THH beats HHH is 7/8.
(ii) Show that the probability that TTH beats THH is 2/3.
(iii) Show that the probability that TTH beats THT is 7/8.
(iv) Show that the probability that HTT beats TTH is 2/3.
(v) If the showman wants to make on average 30p per game, what prize money should he offer: (a) if customers choose sequences at random? (b) if customers make the best possible choice?

Remark: This game was named Penney–Ante by W. Penney in 1969.
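These pattern-race probabilities are easy to check numerically. The following minimal Python sketch (not part of the original text; the function name, seed, and trial count are arbitrary choices) estimates the probability that one three-toss pattern appears before another in fair coin tossing, and can be compared with the values 7/12, 7/8, and 2/3 quoted above.

```python
import random

def penney(a, b, trials=100_000, seed=1):
    """Estimate P(pattern a is completed before pattern b) for a fair coin."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        last3 = ""
        while True:
            last3 = (last3 + rng.choice("HT"))[-3:]   # keep only the last three outcomes
            if last3 == a:
                wins += 1
                break
            if last3 == b:
                break
    return wins / trials

print(penney("THT", "HHH"))   # about 7/12 = 0.583
print(penney("THH", "HHH"))   # about 7/8  = 0.875
print(penney("TTH", "THH"))   # about 2/3  = 0.667
```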
9.17 Example: Poisson Processes

Let X(t) be a Markov process taking values in the nonnegative integers; suppose that X(t) is nondecreasing, with X(0) = 0. Suppose that as h → 0 the transition rates satisfy
P(X(t + h) = i + 1 | X(t) = i) = λ(t)h + o(h),
P(X(t + h) = i | X(t) = i) = 1 − λ(t)h + o(h),
so that X(t) changes its value by jumps of size one. Denote the times at which X(t) jumps by T_1, T_2, T_3, ....
(a) Show that for fixed t, X(t) has a Poisson mass function with parameter Λ(t) = ∫_0^t λ(u) du. Hence, find the density of T_1, the time of the first jump.
(b) Find the joint density of T_1 and T_2; hence, find the conditional density of T_1 given T_2.

Remark: X(t) is called a nonhomogeneous Poisson process with intensity (or rate) function λ(t).

Solution (a) Let p_n(t) = P(X(t) = n). Then, by conditional probability,
p_n(t + h) = p_n(t)(1 − λ(t)h) + p_{n−1}(t)λ(t)h + o(h).
Hence, we obtain the forward equations in the usual manner as

(1)  dp_n(t)/dt = −λ(t) p_n(t) + λ(t) p_{n−1}(t);  n ≥ 0,

where p_{−1}(t) = 0. Setting G(z, t) = Σ_{n=0}^∞ z^n p_n(t), we find using (1) that

(2)  ∂G/∂t = λ(t)(z − 1)G.

Because X(0) = 0, we have G(z, 0) = 1, and so, by inspection, (2) has solution

(3)  G(z, t) = exp((z − 1) ∫_0^t λ(u) du).

This of course is the p.g.f. of the Poisson distribution with parameter Λ(t), as required. Now we note that
P(T_1 > t) = P(X(t) = 0) = G(0, t) = exp(−Λ(t)).

(b) From (2) and (3), we can now see that for w > t,
E(z^{X(w)} | X(t)) = z^{X(t)} exp((z − 1) ∫_t^w λ(u) du),
and
E(y^{X(t)} z^{X(w)−X(t)}) = E(y^{X(t)}) E(z^{X(w)−X(t)}).
It follows that this nonhomogeneous Poisson process also has independent increments. Now
P(T_1 > t, T_2 > w) = P(X(w) ∈ {0, 1}, X(t) = 0)
 = P(X(t) = 0) P(X(w) − X(t) ≤ 1), by the independence of increments,
 = e^{−Λ(t)} [e^{−(Λ(w)−Λ(t))} (1 + Λ(w) − Λ(t))].
Hence, differentiating, T_1 and T_2 have joint density

(4)  f(t, w) = λ(t)λ(w)e^{−Λ(w)};  0 < t < w < ∞.

Integrating with respect to t shows that the density of T_2 is f_{T_2}(w) = λ(w)Λ(w)e^{−Λ(w)}, and so the conditional density is

(5)  f_{T_1|T_2}(t|w) = λ(t)/Λ(w);  0 < t < w.

(6) Exercise: Show that P(T_1 < ∞) = 1 if and only if lim_{t→∞} Λ(t) = ∞.
(7) Exercise: If λ(t) = λe^{−λt} for λ > 0, show that lim_{t→∞} P(X(t) = k) = 1/(e k!).
Exercise: Compound Poisson Process. Let (Y_n; n ≥ 1) be independent and identically distributed and independent of X(t). Let Z(t) = Σ_{n=1}^{X(t)} Y_n. Show that E(e^{θZ(t)}) = exp(∫_0^t λ(u) du (M(θ) − 1)), where M(θ) = E(e^{θY_1}).
Exercise: Doubly Stochastic Poisson Process. Suppose that X(t) is a nonhomogeneous Poisson process with random intensity λ(t); that is to say, for any realization of the process X(t), λ(t) is a realization of a random process Y(t), where E(exp[θ ∫_0^t Y(u) du]) = M(θ). Show that X(t) has probability generating function M(z − 1). Find the mean and variance of X(t) in this case.
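A nonhomogeneous Poisson process is easy to simulate by thinning a homogeneous one, which gives a quick numerical check of part (a). The sketch below is illustrative only; the rate λ(t) = 2t, the bound, and the sample sizes are arbitrary choices, not taken from the text.

```python
import math, random

rng = random.Random(0)

def nhpp_jump_times(lam, lam_max, t_end):
    """Jump times on [0, t_end] of a Poisson process with rate lam(t), by thinning.
    Requires lam(t) <= lam_max on [0, t_end]."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)            # candidate jump of a rate-lam_max process
        if t > t_end:
            return times
        if rng.random() < lam(t) / lam_max:      # keep it with probability lam(t)/lam_max
            times.append(t)

lam = lambda t: 2.0 * t                           # so Lambda(2) = integral of 2u over [0,2] = 4
counts = [len(nhpp_jump_times(lam, 4.0, 2.0)) for _ in range(20000)]
print(sum(counts) / len(counts))                  # should be near Lambda(2) = 4
print(sum(c == 0 for c in counts) / len(counts), math.exp(-4.0))   # P(X(2)=0) = e^{-Lambda(2)}
```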
9.18 Example: Decay

Let (T_n = Σ_{i=1}^n X_i; n ≥ 1) be the partial sums of the independent exponential random variables (X_i; i ≥ 1) having parameter λ. A certain class of particles has the property that when freshly produced their (independent) lifetimes are exponential with parameter µ. At the ends of their lives, they disappear. At time T_n, a number Y_n of fresh particles is released into a chamber; the random variables (Y_n; n ≥ 0) are independent and identically distributed with p.g.f. G_Y(z). At time t, the number of particles in the chamber is N(t). Show that

(1)  E(z^{N(t)}) = exp(λ ∫_0^t [G_Y((z − 1)e^{−µv} + 1) − 1] dv).

Solution: By construction, the batches arrive at the jump times of a Poisson process. Hence, given that k batches have arrived at time t, their arrival times are independently and uniformly distributed over (0, t). For any particle in a batch of size Y that arrived at time U, the chance of survival to t is e^{−µ(t−U)}, independently of all the others. Hence, given U = u, the p.g.f. of the number S of survivors of Y at t is
E((ze^{−µ(t−u)} + 1 − e^{−µ(t−u)})^Y) = G_Y(ze^{−µ(t−u)} + 1 − e^{−µ(t−u)}).
Hence,
E(z^S) = E(E(z^S | U)) = (1/t) ∫_0^t G_Y((z − 1)e^{−µv} + 1) dv.
Finally, recalling that the total number of particles at t are the survivors of a Poisson number of such batches, we obtain (1).

(2) Exercise: What is E(z^{N(t)}) when G_Y(z) = z? In this case, find lim_{t→∞} P(N(t) = k).
(3) Exercise: In the case when G_Y(z) = z, show that N(t) is a Markov process such that as h → 0,
p_{i,i+1}(h) = λh + o(h),
p_{i,i−1}(h) = iµh + o(h),
p_{ii}(h) = 1 − λh − iµh + o(h).
Hence, obtain your answer to Exercise 2 by using the forward equations.
(4) Exercise: In the case when G_Y(z) = z, let t_n be the time when N(t) makes its nth jump. Let Z_n = N(t_n) be the imbedded Markov chain that records the successive different values of N(t). Find the stationary distribution of Z_n.
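Formula (1) can be checked by direct simulation for any particular batch distribution. In the sketch below (not from the text) the batch size is fixed at Y = 2, so G_Y(z) = z^2, and the parameter values are arbitrary; the integral in (1) is evaluated by a crude midpoint rule.

```python
import math, random

rng = random.Random(2)
lam, mu, t, z = 1.5, 0.8, 2.0, 0.6
batch_size = 2                                   # Y = 2 with probability 1, so G_Y(z) = z**2

def simulate_N(t):
    """Number of particles alive at time t: batches arrive at rate lam, lifetimes ~ Exp(mu)."""
    alive, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)                # next batch arrival
        if s > t:
            return alive
        for _ in range(batch_size):
            if rng.expovariate(mu) > t - s:      # this particle survives to time t
                alive += 1

est = sum(z ** simulate_N(t) for _ in range(40000)) / 40000

G = lambda w: w ** batch_size
steps = 10000
integral = sum(G((z - 1.0) * math.exp(-mu * (k + 0.5) * t / steps) + 1.0) - 1.0
               for k in range(steps)) * t / steps
print(est, math.exp(lam * integral))             # the two values should agree closely
```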
9.19 Example: Disasters

A population evolves as follows. Immigrants arrive according to a Poisson process of rate ν. On arrival, each immigrant immediately starts a simple birth process with parameter λ, independently of all other immigrants. Disasters occur independently of the population according to a Poisson process of rate δ; when a disaster occurs, all individuals then in existence are annihilated. A disaster occurs at t = 0. Let X(t) denote the number of individuals in existence at time t ≥ 0.
(a) Show that lim_{t→∞} E(X(t)) is finite if and only if δ > λ.
(b) Find an expression for E(s^{X(t)}).

Solution: Because X(t) is a Markov process, we could proceed by writing down forward equations. However, it is neater to use the properties of the Poisson process directly, as follows. We start by assembling some facts established in earlier sections. At time t, let C(t) be the time that has elapsed since the most recent disaster. From Example 8.17, we recall that

(1)  P(C(t) > x) = e^{−δx};  0 ≤ x ≤ t.

Now note that arrivals are a Poisson process independent of disasters, so given that C(t) = x, the number of subsequent arrivals up to time t is a Poisson random variable N with parameter νx. Next, we recall from Theorem 8.8.6 that conditional on N = k, these k arrivals are independently and uniformly distributed over the interval (t − x, t), at times t − Y_1, t − Y_2, ..., t − Y_k, say [where the Y_i are uniform on (0, x)]. Finally, we remember that given Y_1 = y, the expected number of descendants at time t from this arrival at t − y is e^{λy}; this is from (9.8.21). Now we remove the conditions one by one. First, the expected number of descendants of an arrival at t − Y_1 is

(2)  (1/x) ∫_0^x e^{λy} dy = (1/(λx))(e^{λx} − 1),

because Y_1 is uniform on (0, x). Second, the expected number of descendants at t of the N arrivals during (t − x, t) is, using (2),

(3)  E(N)(1/(λx))(e^{λx} − 1) = (ν/λ)(e^{λx} − 1).

Finally, using (1) and (3), we have

(4)  E(X(t)) = ∫_0^t δe^{−δx}(ν/λ)(e^{λx} − 1) dx + e^{−δt}(ν/λ)(e^{λt} − 1).

You can now see (if you want to) that, in more formal terms, what we have done is to say
E(X) = E(E(E[E(X | C, N, Y_1, ..., Y_N) | C, N] | C)),
and then to successively evaluate the conditional expectations from the inside out. So from (4), if λ ≥ δ, E(X(t)) → ∞ as t → ∞, whereas if λ < δ, E(X(t)) → ν/(δ − λ).

An expression for E(s^{X(t)}) is found by following exactly the same sequence of successive conditional expectations. Thus, given that C(t) = x, N(x) = k, and Y_1 = y_1, this arrival initiates a simple birth process whose size at time y_1 has generating function
se^{−λy_1}/(1 − s + se^{−λy_1}),
by Example 9.8.14. Hence, because Y_1 is uniformly distributed on [0, x], the generating function of the number of descendants at time x of one arrival in [0, x] is

(5)  (1/x) ∫_0^x se^{−λy}/(1 − s + se^{−λy}) dy = −(1/(λx)) log(1 − s + se^{−λx}).

By independence, the generating function of the sum of k such independent arrivals is (−(1/(λx)) log(1 − s + se^{−λx}))^k. Next, we recall that N(x) is Poisson with parameter νx, so that using conditional expectation again, the generating function of the descendants at t of the arrivals in [t − x, t] is

(6)  exp(νx(−(1/(λx)) log(1 − s + se^{−λx}) − 1)) = e^{−νx}(1 − s + se^{−λx})^{−ν/λ}.

Now we recall from Example 8.17 that the current life (or age) of a Poisson process has density f_{C(t)}(x) = δe^{−δx}, 0 ≤ x ≤ t, with P(C(t) = t) = e^{−δt}. Hence, finally,

(7)  E(s^{X(t)}) = ∫_0^t δe^{−δx} e^{−νx}(1 − s + se^{−λx})^{−ν/λ} dx + e^{−δt} e^{−νt}(1 − s + se^{−λt})^{−ν/λ}.

(8) Exercise: Attempt to obtain (7) by writing down the forward equations for P(X(t) = n), n ≥ 0, and solving them.
(9) Exercise: Suppose that each immigrant gives rise to a simple birth and death process with parameters λ and µ. Show that lim_{t→∞} E(X(t)) < ∞ if and only if δ > λ − µ. [See 9.21.]
(10) Exercise: Suppose that an ordinary immigration–death process with parameters ν and µ is subject to disasters. Show that the population size X(t) has a stationary distribution with mean ν/(δ + µ). [Set λ = 0 in Exercise 9.]
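Expression (4) and its limit ν/(δ − λ) can be checked by simulation. The sketch below (not from the text; all parameter values are arbitrary) samples X(t) for a large t by drawing the age C of the current disaster-free period, the immigrant arrivals within it, and then each immigrant's family size, using the fact quoted in the solution that a simple birth process of age y started from one individual has a geometrically distributed size with mean e^{λy}.

```python
import math, random

rng = random.Random(3)
nu, lam, delta, t = 2.0, 0.5, 1.5, 50.0       # immigration, birth and disaster rates

def sample_X(t):
    """Population size at time t for the disasters process."""
    c = min(rng.expovariate(delta), t)         # time since the most recent disaster
    x, s = 0, 0.0
    while True:
        s += rng.expovariate(nu)               # immigrant arrivals during that period
        if s > c:
            return x
        age = c - s                            # how long ago this immigrant arrived
        p = math.exp(-lam * age)               # family size ~ geometric(p) on {1,2,...}, mean e^{lam*age}
        u = 1.0 - rng.random()                 # u in (0, 1]
        x += 1 if p >= 1.0 else 1 + int(math.log(u) / math.log(1.0 - p))

est = sum(sample_X(t) for _ in range(40000)) / 40000
print(est, nu / (delta - lam))                 # for large t both are near nu/(delta - lam) = 2.0
```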
9.20 Example: The General Birth Process

Let (Y_n; n ≥ 1) be a collection of independent exponentially distributed random variables such that Y_n has parameter λ_{n−1}. Let T_n = Σ_{r=1}^n Y_r and N(t) = max{n : T_n ≤ t}. The process N(t) is a general birth process. Show that if

(1)  Σ_{r=0}^∞ λ_r^{−1} < ∞,

then for t > 0, P(N(t) < ∞) < 1. Also, show that E(N(t) | N(t) < ∞) is finite or infinite, depending on whether

(2)  Σ_{r=0}^∞ r λ_r^{−1}

converges or diverges.

Solution: First, recall the often-used identity

(3)  P(T_n ≤ t) = P(N(t) ≥ n).

Let T_n have density f_n(t) and moment generating function M_n(θ), and define

(4)  T = lim_{n→∞} T_n = sup{t : N(t) < ∞}.

Because the Y_n are independent and exponentially distributed, it follows that

(5)  M_n(θ) = Π_{r=0}^{n−1} (1 + θλ_r^{−1})^{−1}.

If (1) holds, then as n → ∞ the infinite product converges (uniformly) to a nonzero limit M(θ). By the continuity theorem, this is the moment generating function of the density f_T(t) of T. Hence, by (3),
P(N(t) < ∞) = P(T > t) = ∫_t^∞ f_T(u) du < 1 for t > 0.
If (1) does not hold, then the product in (5) diverges to zero as n → ∞, for θ ≠ 0, and f_T(t) = 0. Furthermore, from (3),

(6)  p_n(t) = P(N(t) = n) = P(T_n ≤ t) − P(T_{n+1} ≤ t) = ∫_0^t f_n(u) du − ∫_0^t f_{n+1}(u) du.

Now using (6), for θ < 0, we have

(7)  ∫_0^∞ e^{θt} p_n(t) dt = (1/θ)(M_n(θ) − M_{n+1}(θ)), on integrating by parts,
     = (1/λ_n) M_{n+1}(θ), using (5).

Because M_n(θ) converges uniformly to M(θ), we can use the inversion theorem on each side of (7) to find that, as n → ∞,

(8)  λ_n p_n(t) → f_T(t).

Now
E(N(t) | N(t) < ∞) = Σ_{n=0}^∞ n p_n(t) / Σ_{n=0}^∞ p_n(t),
which converges or diverges with Σ n p_n(t). Using (8), it follows that E(N(t) | N(t) < ∞) < ∞ if and only if Σ_{n=0}^∞ n λ_n^{−1} < ∞.

(9) Exercise: Write down the forward equations for p_n(t) and deduce (7) directly from these.
(10) Exercise: Deduce from (7) that p_n(t) = (1/λ_n) Σ_{i=0}^n a_i λ_i e^{−λ_i t}, where a_i = Π_{j=0, j≠i}^n λ_j/(λ_j − λ_i).
(11) Exercise: Show that if λ_n = n(log n)^γ with γ > 1, then for any β > 0, E([N(t)]^β | N(t) < ∞) = ∞.
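The explosion condition (1) is easy to see numerically: when Σ 1/λ_r converges, T = Σ Y_r is finite with probability one, so N(t) is infinite with positive probability for any t > 0. The sketch below (not from the text) uses the arbitrary choice λ_n = (n + 1)^2 and truncates the sum after many terms, whose neglected tail has negligible mean.

```python
import random

rng = random.Random(4)
lam = lambda n: (n + 1) ** 2      # sum of 1/lam(n) converges, so the process explodes

def explosion_time(terms=1000):
    """Approximate T = sum of Y_n with Y_n ~ Exp(lam(n)); the neglected tail has mean < 0.001."""
    return sum(rng.expovariate(lam(n)) for n in range(terms))

samples = [explosion_time() for _ in range(5000)]
print(sum(samples) / len(samples))                      # E(T) = sum 1/(n+1)^2 = pi^2/6 - 1 ~ 0.645
print(sum(s <= 1.0 for s in samples) / len(samples))    # P(T <= 1) = P(N(1) = infinity) > 0,
                                                        # so P(N(1) < infinity) < 1
```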
9.21 Example: The Birth–Death Process

Let the Markov process X(t) represent the number of individuals in a population at time t. During any interval (t, t + h), any individual alive at t may die with probability µh + o(h), or split into two individuals with probability λh + o(h). All individuals act independently in these activities. Write down the forward equations for p_n(t) = P(X(t) = n) and show that if X(0) = I, then

(1)  E(s^{X(t)}) = [(s + λt(1 − s))/(1 + λt(1 − s))]^I  if λ = µ,
     and E(s^{X(t)}) = [(µ exp(t(λ − µ)) + θ(s))/(λ exp(t(λ − µ)) + θ(s))]^I  if λ ≠ µ,

where θ(s) = (λs − µ)/(1 − s).

Solution: Because individuals act independently, the probability of no change during (t, t + h) when X(t) = k is
p_{kk}(h) = (1 − λh − µh + o(h))^k = 1 − (µ + λ)kh + o(h).
Similarly, the probability of just one split and no deaths among k individuals during (t, t + h) is
p_{k,k+1}(h) = kλh(1 − λh − µh + o(h))^{k−1} = kλh + o(h),
and likewise the chance of just one death is p_{k,k−1}(h) = kµh + o(h). Other transitions have probabilities that are all o(h) as h → 0, and so by conditional probability
p_k(t + h) = hλ(k − 1)p_{k−1}(t) + hµ(k + 1)p_{k+1}(t) + (1 − (λ + µ)kh)p_k(t) + o(h).
The forward equations now follow as usual, giving
dp_k(t)/dt = λ(k − 1)p_{k−1}(t) + µ(k + 1)p_{k+1}(t) − (λ + µ)k p_k(t),
with the convention that p_{−1}(t) = 0. Defining G(s, t) = E(s^{X(t)}) and using the forward equations, we find that

(2)  ∂G/∂t = λs² ∂G/∂s + µ ∂G/∂s − (λ + µ)s ∂G/∂s = (λs − µ)(s − 1) ∂G/∂s.

Because X(0) = I, we have G(s, 0) = s^I, and it is straightforward but dull to verify that (1) satisfies (2) and the initial condition X(0) = I.

(3) Exercise: Let η be the probability that the population ever falls to zero. Show that η = 1 if µ ≥ λ, and η = (µ/λ)^I if λ > µ.
(4) Exercise: Let T be the time until X(t) first takes the value zero. Show that if X(0) = 1, then E(T | T < ∞) = (1/λ) log(µ/(µ − λ)) when λ < µ, and (1/µ) log(λ/(λ − µ)) when λ > µ.
(5) Exercise: Let X(0) = 1 and define z(t) = P(X(t) = 0). Show that z(t) satisfies
dz/dt = µ − (λ + µ)z(t) + λ(z(t))².
Hence, find z(t). What is P(X(t) = 0 | X(s) = 0) for 0 < t < s?
(6) Exercise: Suppose that X(0) = 1 and λ < µ. Show that
lim_{t→∞} P(X(t) = k | X(t) > 0) = (1 − λ/µ)(λ/µ)^{k−1}.
(7) Exercise: Suppose that new individuals join the population at the instants of a Poisson process with parameter ν (independently of the birth and death process). Write down the forward equations for the process. Deduce that if λ < µ a stationary distribution π_k exists, and find it. What is the mean of this distribution?
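Formula (1) is easy to test against a direct (Gillespie-style) simulation of the process: each individual splits at rate λ and dies at rate µ, so from state x the holding time is exponential with rate (λ + µ)x and the jump is up with probability λ/(λ + µ). The sketch below is illustrative; the parameter values, s, and sample sizes are arbitrary.

```python
import math, random

rng = random.Random(5)
lam, mu, I, t, s = 1.0, 1.3, 3, 2.0, 0.7

def sample_X(t):
    """Linear birth-death process started from I individuals, observed at time t."""
    x, clock = I, 0.0
    while x > 0:
        clock += rng.expovariate((lam + mu) * x)
        if clock > t:
            break
        x += 1 if rng.random() < lam / (lam + mu) else -1
    return x

est = sum(s ** sample_X(t) for _ in range(40000)) / 40000

theta = (lam * s - mu) / (1.0 - s)
G = ((mu * math.exp((lam - mu) * t) + theta) /
     (lam * math.exp((lam - mu) * t) + theta)) ** I
print(est, G)          # Monte Carlo estimate of E(s^X(t)) against formula (1), lam != mu case
```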
9.22 Example: Wiener Process with Drift

Let D(t) be the drifting standard Wiener process, D(t) = µt + W(t).
(a) Show that M(t) is a martingale, where M(t) = exp{λD(t) − ½λ²t − λµt}.
(b) Let T_b be the first passage time of D(t) to b > 0. Show that
E e^{−θT_b} = exp{b(µ − √(µ² + 2θ))}.

Solution: (a) By definition D(t) = µt + W(t), so M(t) = exp{λW(t) − ½λ²t}, which we know to be a martingale from 9.9.27.

(b) Because T_b is a stopping time, 1 = E M(T_b ∧ t). But if λ is so large that λµ + ½λ² > 0, then 0 ≤ M(T_b ∧ t) ≤ e^{λb}. Let t → ∞ and use the dominated convergence theorem 5.9.7 to give
1 = E M(T_b) = e^{λb} E e^{−T_b(½λ² + λµ)}.
Now setting θ = λµ + ½λ² and choosing the larger root of the quadratic (which yields a moment generating function) gives the result.

(1) Exercise: Find P(T_b < ∞) in the two cases µ > 0 and µ < 0.
(2) Exercise: Show that e^{−2µD(t)} is a martingale.
(3) Exercise: Let T be the time at which D(t) first hits a or b, where a < 0 and b > 0. Show that the probability that D(t) hits b first is
P(D(T) = b) = (1 − e^{−2aµ})/(e^{−2bµ} − e^{−2aµ}).
(4) Exercise: Show that if µ < 0, then P(max_{t≥0} D(t) ≥ b) = e^{2µb}.
(Note: In the remaining Exercises (5)–(9), we consider the standard Wiener process with no drift, in which µ = 0.)
(5) Exercise: Let W(t) be the Wiener process and T_b the first passage time of W(t) to b > 0. Show that E e^{−θT_b} = exp(−√(2θ) b).

Remark: We can deduce from (5) that P(T_b < ∞) = 1, but see also the next Exercise (6).

(6) Exercise: Use Example 9.9.33 to show that P(T_b < ∞) = 1.
(7) Exercise: Let X(t) and Y(t) be independent Wiener processes, and let T_b be the first passage time of X(t) to b > 0. Use conditional expectation and (5) to show that E e^{iθY(T_b)} = e^{−|θ|b}, and deduce that Y(T_b) has a Cauchy density. (Hint for the final part: Look at Example 8.24.)
(8) Exercise: Use the fact that the density of T_b is given in Corollary 9.9.25 to calculate the density of Y(T_b) directly.
(9) Exercise: Let c > b. Explain why Y(T_b) is independent of Y(T_c) − Y(T_b). Now recall Example 9.9.11 and use it to deduce that Y(T_b) has the same density as bY(T_1). Finally, use this and the fact that Y(T_b) has the same density as −Y(T_b) to conclude that E e^{iθY(T_b)} = e^{−Kb|θ|}, for some constant K.
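The hitting probability in Exercise (3) can be checked by a crude Euler discretization of D(t) = µt + W(t). The sketch below (not from the text) uses arbitrary values of µ, a, b and a fixed step size; the agreement is only up to Monte Carlo and discretization error, since the discrete path can overshoot the barriers.

```python
import math, random

rng = random.Random(6)
mu, a, b, dt = 0.4, -1.0, 1.5, 1e-3
sqdt = math.sqrt(dt)

def hits_b_first():
    """Euler walk for D(t) = mu*t + W(t) until it leaves the interval (a, b)."""
    d = 0.0
    while a < d < b:
        d += mu * dt + sqdt * rng.gauss(0.0, 1.0)
    return d >= b

trials = 4000
est = sum(hits_b_first() for _ in range(trials)) / trials
exact = (1.0 - math.exp(-2 * a * mu)) / (math.exp(-2 * b * mu) - math.exp(-2 * a * mu))
print(est, exact)      # both near 0.64 for these parameter values
```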
9.23 Example: Markov Chain Martingales

Let (X_n; n ≥ 0) be a Markov chain with transition probabilities p_{ij}, and suppose that the function v(·, ·) is such that
Σ_j p_{ij} v(j, n + 1) = λ v(i, n),  λ ≠ 0.
Show that λ^{−n} v(X_n, n) is a martingale with respect to X_n, provided that E|v(X_n, n)| < ∞.

Solution: Using the Markov property of X_n,
E(λ^{−(n+1)} v(X_{n+1}, n + 1) | X_0, ..., X_n) = λ^{−(n+1)} Σ_j p_{X_n j} v(j, n + 1) = λ^{−(n+1)} λ v(X_n, n) = λ^{−n} v(X_n, n).
The result follows, on noting that E|v(X_n, n)| < ∞.

(1) Exercise: Let X_n be a Markov chain with state space {0, 1, ..., b}, such that X_n is also a martingale. Show that 0 and b are absorbing states, and that if absorption occurs with probability one in finite time, then the probability of absorption at b is X_0/b.
(2) Exercise: Let X_n be a Markov chain with state space {0, 1, ..., b} and transition probabilities p_{ij}; and suppose that the bounded function v(·) is such that v(0) = 0, v(b) = 1, and
v(i) = Σ_{j∈S} p_{ij} v(j),  i ∈ S.
If 0 and b are absorbing states, show that if absorption occurs in finite time with probability one, then the probability of absorption at b is v(X_0).
(3) Exercise: Let X_n be a birth–death process on the nonnegative integers, with transition probabilities
p_{i,i+1} = p_i, i > 0;  p_{i,i−1} = q_i, i > 0;  p_{ii} = r_i, i ≥ 0, where r_0 = 1.
Define
v(x) = Σ_{r=1}^x Π_{i=1}^{r−1} (q_i/p_i) for x ≥ 2, while v(0) = 0 and v(1) = 1.
Show that v(X_n) is a martingale. Deduce that the probability of hitting b before a, given X_0 = x < b, is (v(x) − v(a))/(v(b) − v(a)). Deduce that the process is persistent if and only if Σ_{r=1}^∞ Π_{i=1}^{r−1} (q_i/p_i) diverges.
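The last exercise is just the optional-stopping form of the gambler's-ruin calculation, and is easy to confirm numerically. In the sketch below (not from the text) the transition probabilities are the arbitrary constant choices p_i = 0.4 and q_i = 0.6, for which v reduces to a geometric sum.

```python
import random

rng = random.Random(7)
p, q = 0.4, 0.6                     # p_i = p, q_i = q for every i > 0 (an arbitrary choice)
a, x0, b = 1, 4, 9

def v(x):
    """v(x) = sum_{r=1}^{x} prod_{i=1}^{r-1} q_i/p_i, so that v(X_n) is a martingale."""
    return sum((q / p) ** (r - 1) for r in range(1, x + 1))

def hits_b_first():
    x = x0
    while a < x < b:
        x += 1 if rng.random() < p else -1
    return x == b

trials = 40000
est = sum(hits_b_first() for _ in range(trials)) / trials
print(est, (v(x0) - v(a)) / (v(b) - v(a)))      # simulation against the martingale prediction
```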
9.24 Example: Wiener Process Exiting a Strip

Let W(t) be the standard Wiener process and let T be the first time at which W(t) hits a or b, where a < 0 and b > 0. Show that ET = −ab.

Solution: We know from Theorem 9.9.27 that W(t)² − t is a martingale. Hence, for any finite integer n, because T is a stopping time,

(1)  0 = E W(T ∧ n)² − E(T ∧ n).

Because W(T ∧ n)² ≤ a² + b², we can let n → ∞ in (1) to obtain, by using Example 9.9.33,
ET = lim_{n→∞} E(T ∧ n) = lim_{n→∞} E(W(T ∧ n)²) = E W(T)²
 = a² P(W(T) = a) + b² P(W(T) = b) = a²b/(b − a) − ab²/(b − a) = −ab.

(2) Exercise: Let T_b be the first passage time of W(t) to b ≠ 0. Show that ET_b = ∞.
(3) Exercise: Use the martingales 9.9.29–9.9.31 to show that 3ET² = 3a²b² − ab(a² + b²) and 3 var T = −ab(a² + b²).
(4) Exercise: Use the martingale e^{θW(t) − ½θ²t} = M_θ to show that, when a = −b,
E e^{−sT} = [cosh(a√(2s))]^{−1}.
(Hint: Show that M_θ + M_{−θ} is a martingale.)
(5) Exercise: Use the result of Example 9.23 on first passage times to show that, for any a < 0, b > 0,
E e^{−θT} = [sinh(√(2θ) b) − sinh(√(2θ) a)] / sinh[√(2θ)(b − a)].
(Hint: Write E e^{−θT_b} = E[e^{−θT_b} I{T_a < T_b}] + E[e^{−θT_b} I{T_b < T_a}] and use the Markov property at T_a. Then, do likewise for E e^{−θT_a}.)
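The identity ET = −ab also holds exactly for the symmetric random walk on a grid that approximates W(t), which gives a very simple numerical check. The sketch below (not from the text) uses the arbitrary choices a = −1, b = 2 and step size h = 0.05, with time increment h² per step.

```python
import random

rng = random.Random(8)
a, b, h = -1.0, 2.0, 0.05          # a and b are integer multiples of the step size h
ka, kb = round(a / h), round(b / h)
dt = h * h                          # time per step, so the walk approximates W(t)

def exit_time():
    """Symmetric random walk on the grid h*Z, run until it first hits a or b."""
    k, steps = 0, 0
    while ka < k < kb:
        k += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps * dt

trials = 10000
print(sum(exit_time() for _ in range(trials)) / trials, -a * b)   # both close to 2.0
```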
9.25 Example: Arcsine Law for Zeros

Show that the probability that the Wiener process has no zero in (s, t) is
(2/π) sin^{−1} √(s/t) = (2/π) arcsin √(s/t).

Solution: Let Z be the event that W(t) does have at least one zero in (s, t). Recall that T_w is the first passage time of W(t) to w, with density f_T given in Corollary 9.9.25. By the symmetry of the Wiener process, if W(s) = w, then
P(Z) = P(T_w ≤ t − s) = P(T_{−w} ≤ t − s).
Therefore, conditioning on W(s), we have
P(Z) = 2 ∫_{w=0}^∞ ∫_{u=0}^{t−s} f_T(u) f_{W(s)}(−w) du dw
 = 2 ∫_{u=0}^{t−s} ∫_{w=0}^∞ (1/(2π√s)) u^{−3/2} w exp(−½ w²(u + s)/(us)) dw du
 = (√s/π) ∫_0^{t−s} u^{−1/2}(u + s)^{−1} du
 = (2/π) tan^{−1} √(t/s − 1), on setting u = sv²,
 = (2/π) cos^{−1} √(s/t), using the right-angled triangle with sides √s, √(t − s), and √t.
Finally, the required probability is
P(Z^c) = 1 − P(Z) = (2/π) sin^{−1} √(s/t),
on using the same right-angled triangle.

(1) Exercise: Let V_1 be the time of the last zero of W(t) before t, and V_2 the time of the first zero after t. Show that
(i) P(V_2 ≤ s) = (2/π) cos^{−1} √(t/s), t < s;
(ii) P(V_1 < s, V_2 > v) = (2/π) sin^{−1} √(s/v), s < t < v.
(2) Exercise: Show that the probability that the Brownian bridge has no zeros in (s, t), 0 < s < t < 1, is (2/π) cos^{−1}[(t − s)/(t(1 − s))]^{1/2}.
(3) Exercise: If M(t) = sup_{0≤s≤t} W(s), argue that
P(M(t) ≤ y | T_c = s) = P(M(t − s) ≤ y − c), s ≤ t.
Deduce that M(t) and T_c have the joint density
f_{M,T}(y, u) = [c/(πu √(u(t − u)))] exp(−(y − c)²/(2(t − u)) − c²/(2u)).
(4) Exercise: Let U(t) be the time at which W(t) attains its maximum in [0, t]. [It can be shown that U(t) exists and is unique with probability 1.] Use the previous exercise, and the fact that U(t) = T_x on the event M(t) = x, to show that M(t) and U(t) have joint density
f_{M,U}(x, u) = [x/(πu √(u(t − u)))] exp(−x²/(2u)).
Deduce that U(t) satisfies P(U ≤ u) = (2/π) sin^{−1} √(u/t).
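The arcsine probability can be checked without simulating whole paths, by conditioning on W(s) exactly as in the solution: given W(s) = w, the probability of no zero in (s, t) is P(T_{|w|} > t − s) = 2Φ(|w|/√(t − s)) − 1 by the reflection principle. The sketch below (not from the text) averages this over samples of W(s); the values s = 1, t = 4 are arbitrary.

```python
import math, random

rng = random.Random(9)
s, t = 1.0, 4.0

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

trials, total = 200000, 0.0
for _ in range(trials):
    w = rng.gauss(0.0, math.sqrt(s))                       # W(s) ~ N(0, s)
    total += 2.0 * Phi(abs(w) / math.sqrt(t - s)) - 1.0    # P(no zero in (s,t) | W(s) = w)

print(total / trials, (2.0 / math.pi) * math.asin(math.sqrt(s / t)))   # both near 1/3
```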
9.26 Example: Option Pricing: Black–Scholes Formula

In Example 9.9.36, we gave the fair price at t = 0 for a European call option with exercise time T as

(1)  v = E{e^{−rT}(S(T) − K)^+},

where the stock price S(t) is assumed to be a geometric Wiener process of the form S(t) = S(0) exp{µt + σW(t)}, and µ + ½σ² = r. Show that v can be written explicitly as

(2)  v = S(0)Φ(H) − Ke^{−rT}Φ(H − σ√T),

where Φ(x) is the standard normal distribution function and
H = {(r + ½σ²)T + log[S(0)/K]}/(σ√T).

Solution: Consider a random variable Z with the normal N(γ, τ²) density. We have
E(ae^Z − K)^+ = ∫_{log(K/a)}^∞ (ae^z − K)(1/(√(2π) τ)) exp(−(z − γ)²/(2τ²)) dz
 = ∫_α^∞ (ae^{γ+τy} − K)(1/√(2π)) exp(−½y²) dy, where y = (z − γ)/τ and α = (log(K/a) − γ)/τ,
 = ae^{γ+½τ²} ∫_α^∞ (1/√(2π)) exp(−½(y − τ)²) dy − KΦ(−α)
 = ae^{γ+½τ²} Φ(τ − α) − KΦ(−α).

For the problem in question, we can write S(T) = ae^Z, where a = S(0) and Z is normal N((r − ½σ²)T, σ²T). Inserting these values of a, γ, and τ in the above shows that
v = E e^{−rT}(S(T) − K)^+ = e^{−rT}{S(0)e^{(r−½σ²)T + ½σ²T} Φ(τ − α) − KΦ(−α)}
 = S(0)Φ(τ − α) − Ke^{−rT}Φ(−α)
 = S(0)Φ(H) − Ke^{−rT}Φ(H − σ√T),
as required, since with these values τ − α = {σ²T + rT − ½σ²T + log(S(0)/K)}/(σ√T) = H and −α = H − σ√T.

(3) Exercise: Show that the value v of the option given in (2) is an increasing function of each of S(0), T, r, and σ, but is a decreasing function of K.
(4) Exercise: The "American call option" differs from the European call in one respect: it may be exercised by the buyer at any time up to the expiry time T. Show that the value of the American call is the same as that of the corresponding European call, and that there is no advantage to the holder in exercising it prior to the expiry time T.
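The closed form (2) is easy to compare with a Monte Carlo evaluation of (1), using S(T) = S(0) exp{(r − ½σ²)T + σW(T)}. The sketch below is not part of the text; the numerical values of S(0), K, r, σ, and T are arbitrary.

```python
import math, random

rng = random.Random(10)
S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0

def black_scholes_call(S0, K, r, sigma, T):
    """Formula (2): v = S(0) Phi(H) - K e^{-rT} Phi(H - sigma sqrt(T))."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    H = ((r + 0.5 * sigma ** 2) * T + math.log(S0 / K)) / (sigma * math.sqrt(T))
    return S0 * Phi(H) - K * math.exp(-r * T) * Phi(H - sigma * math.sqrt(T))

# Monte Carlo evaluation of (1), with S(T) = S(0) exp{(r - sigma^2/2)T + sigma W(T)}
trials, payoff = 200000, 0.0
for _ in range(trials):
    ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
    payoff += max(ST - K, 0.0)

print(math.exp(-r * T) * payoff / trials, black_scholes_call(S0, K, r, sigma, T))
```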
Problems

1  Let (X_n; n ≥ 1) be a collection of independent identically distributed nonnegative random variables. Define:
(i) S_n = Σ_{i=1}^n X_i;  (ii) M_n = max{X_1, X_2, ..., X_n};  (iii) L_n = min{X_1, X_2, ..., X_n};  (iv) K_n = X_n + X_{n−1}.
(a) Which of the sequences X, S, M, L, K are Markov chains?
(b) For those that are, find the transition probabilities.
2  Classify the chains in Problem 1; that is to say, show whether the states are persistent, null, periodic.
3  Can a reversible chain be periodic?
4  Let (X_n; n ≥ 1) and (Y_n; n ≥ 1) be independent irreducible Markov chains, and set Z_n = (X_n, Y_n); n ≥
1. (a) Is Zn irreducible? (b) If X and Y are reversible and also aperiodic, show that Z is reversible. Let X be a Markov chain. Show that the sequence (Xi ; i ≥ 0) conditional on X m = r still has the Markov property. Show that Definition 9.1.1 is equivalent to each of (9.1.6), (9.1.7), and (9.1.8) as asserted. Let Yn be the number of heads shown in n tosses of a coin. Let Zn = Yn modulo 10. Show that (Zn; n ≥ 0) is a Markov chain; find its transition probabilities and stationary distribution. Let (Sn; n ≥ 0) be a simple random walk with S0 = 0; show that Yn = |Sn| is a Markov chain. Let (X n; n ≥ 1) be a Markov chain. Show that E(E(g(X n+m)|X n)|Xr ) = E(g(X n+m)|Xr ), for r ≤ n. Let (un; n ≥ 0) be a sequence defined by u0 = 1 and un = ∞ k=1 fkun−k, where fk > 0 and if g(X n) is any function of X n, n then (a) Show that vn defined by vn = ρnun; n ≥ 0, is a renewal sequence as defined in Example 9.13, k=1 fk < 1. ∞ n=1 if ρn fn = 1. (b) Show that as n → ∞, for some constant c, ρnun → c. 474 9 Markov Chains 11 Murphy’s Law Let (X n; n ≥ 1) be an irreducible aperiodic persistent chain. Let s = > 0. Show (s1,..., sm) be any finite sequence of states of the chain such that ps1s2 ps2s3... psm−1sm that with probability 1 the sequence s occurs in finite time. Explain the implications. Let (X n; n ≥ 0) be a Markov chain. Show that for any constant d the
sequence (X nd ; n ≥ 0) is a Markov chain. Let A be a subset of the states of a regular chain X. Let T1 < T2 < T3 <... be the successive times at which the chain visits A. Show that (X Tr ; r ≥ 1) is a Markov chain. Let (X n; n ≥ 0) and (Yn; n ≥ 0) be Markov chains with the same state space S, and distinct transition matrices p X i j. Let (Wn; n ≥ 0) be a process defined on S with transition probabilities i j and pY i j (n). i j (n) + pY p X qi j (n) = 1 2 Show that qi j ≥ 0 and $ j qi j = 1, but that (Wn; n ≥ 0) is not a Markov chain in general. Let (X n; n ≥ 0) be an irreducible Markov chain with state space S, transition Truncation probabilities pi j, and stationary distribution (πi ; i ∈ S). Let A be some subset of S, and suppose that a new chain Y is formed by banning transitions out of A. That is to say, Y has transition probabilities qi j, where for i ∈ A, qi j = pi j for j ∈ A and j = i, and qii = pii + j∈Ac pi j. Show that if X is reversible in equilibrium, then so is Y, and write down the stationary distribution of Y. Let X n and Yn be independent simple random walks. Let Zn be (X n, Yn) truncated as in Problem 15 to the region x ≥ 0, y ≥ 0, x + y ≤ a. Find the stationary distribution of Zn. Let (X n; n ≥ 0) be a Markov chain with state space S. For each n ≥ 0 independently, X n is replaced by s ∈ S with probability p. Is the new sequence a Markov chain? At each time n = 0, 1, 2,... a number Yn of particles is injected into a chamber, where (Yn; n ≥ 0) are independent Poisson random variables with parameter λ. The lifetimes of particles are independent and geometric with parameter p. Let X n be the number of particles in the chamber at time n
. Show that X n is a Markov chain; find its transition probabilities and the stationary distribution. Let ( fk; k ≥ 0) be a probability mass function. Let the irreducible Markov chain X have transition probabilities, p jk = fk− j+1 if k − j + 1 ≥ 0, j ≥ 1 ∞ and p0k = p1k. Show that X is recurrent and nonnull if Let ( fk; k ≥ 0) be a probability mass function. Suppose the Markov chain X has transition probabilities k=1 k fk < 1. p jk =    f j−k+1 ∞ fi i= j+1 0 for k > 0, j − k + 1 ≥ 0 for k = 0 otherwise. ∞ Show that X is recurrent and nonnull if Let X have state space S and suppose that S = ∪k Ak, where Ai ∩ A j = φ for i = j. Lumping Let (Yn; n ≥ 0) be a process that takes the value yk whenever the chain X lies in Ak. Show that Y is also a Markov chain if pi1 j = pi2 j for any i1 and i2 in the same set Ak. k=1 k fk > 1. 12 13 14 15 16 17 18 19 20 21 22 Markov Times Let X be a Markov chain. Let T be a positive random variable such that P(T = t|X 0,..., X t ) is either zero or one. T is called a Markov time. Show that the Markov property is preserved at T. Let Sn be the random walk, such that P(Sn+1 − Sn = 2) = p, and P(Sn+1 − Sn = −1) = q, where p + q = 1. 23 Problems 475 If the origin is a retaining barrier, show that equilibrium is possible with Sn ≥ 0 if p < 1 3 and that, in this case, the stationary distribution has p.g.f., π(s) = (1 − 3 p)(s − 1) s − q − ps3. 24 Let X (t) be the two-state chain in continuous time, t ∈ R, X (t) ∈ {0, 1
}, having stationary distribution {π0, π1}. (a) Show that as τ →∞ P(X (0) = 1|X (−τ ), X (τ )) → π1. 25 t > 0. (b) Find cov (X (s), X (s + t)); (c) What is lims→∞ cov(X (s), X (s + t))? Let N (t) and M(t) be independent Poisson processes with parameters λ and µ, respectively. (a) Is N (t) +M(t) a Poisson process? (b) Is either of min {N (t), M(t)} or max {N (t), M(t)} a Poisson process? Let N (t) be a nonhomogeneous Poisson process with rate λ(t). Find cov (N (s), N (s + t)); t > 0. 26 27 Mosquitoes land on your neck at the jump times of a Poisson process with parameter λ(t) and each bites you with probability p independently of the decisions of the others. Show that bites form a Poisson process with parameter pλ(t). Let X (t) be a Markov chain with transition probabilities pi j (t) and stationary distribution π. Let (Tn; n ≥ 0) be the jump times of a Poisson process independent of X (t). Show that the sequence Yn = X (Tn) is a Markov chain with the same stationary distribution as X (t). Find the mean and variance of the size X (t) of the population in the birth–death process of Example 9.21. A Nonhomogeneous Chain h → 0, Let X (t) be a Markov chain with X (0) = I and such that, as 28 29 30 and P(X (t + h) = k + 1|X (t) = k) = 1 + µk 1 + µt h + o(h) P(X (t + h) = k|X (t) = k) = 1 − 1 + µk 1 + µt h + o(h). Show that G = E(s X (t)) satisfies ∂G ∂t = s − 1 1 + µt G + µs. ∂G ∂s 31 Hence, fi
nd E(X ) and var (X ). Let (X n; n ≥ 0) be an irreducible Markov chain, with state space S, stationTruncation Again ary distribution (πi ; i ∈ S), and transition probabilities pi j. Let A be some subset of S, and suppose that (Zn; n ≥ 0) is a Markov chain with state space A and transition probabilities qi j = pi j pi A for i, j ∈ A, where pi A = j∈A pi j. If X is reversible, show that Z is reversible with stationary distribution given by vi = πi pi A i∈A πi pi A. 476 9 Markov Chains “Motto” is a coin-tossing game at the start of which each player chooses a sequence of three letters, each of which is either H or T (his “motto”). A fair coin is then tossed repeatedly, and the results recorded as a sequence of H s and T s (H for “heads,” T for “tails”). The winner is the first player whose motto occurs as three consecutive letters in this sequence. Four players A, B, C, D choose as their mottoes, respectively, and H T T. Show that if only A and B take part in a game then B has probability 3 4 of winning. With what probability does C win if he plays a game with B as the only opponent? If all four players take part simultaneously, what are the respective probabilities of each player winning? (You may assume that if a fair coin is tossed repeatedly then with probability 1 any motto will occur eventually.) Let N (t) be a nonhomogeneous Poisson process. Show that, conditional on N (t) = k, the times T1,..., Tk of the events have conditional joint density k k! i=1! λ(ti ) "(t), 0 ≤ t1 ≤... ≤ tk ≤ t. Show that (a) the Brownian bridge, (b) the reflected Wiener process |W (t)|, and (c) the Ornstein– Uhlenbeck process all satisfy the Markov property. + Explain why the integrated Wiener process R(t) = t 0 W (u)du does not have this property. Show that the transition density
f (s, x; t, y) of the Wiener process satisfies the Chapman– Kolmogorov equations 32 33 34 35 f (s, x; u, z) = R f (s, x; t, y) f (t, y; u, z) dy, s < t < u. Let W (t) be the Wiener process. Show that for s < t < u, 36 E(W (t)|W (u), W (s)) = [(t − s)W (u) + (u − t)W (s)]/(u − s) and var(W (t)|W (u), W (s)) = [u(t − s) + t(s − t)]/(u − s). (Hint: Use the result of Exercise 4 in Example 8.20) Deduce that the conditional correlation is, for u = 1, ρ(W (s), W (t)|W (1)) =! 1/2. s(1 − t) t(1 − s) 37 38 The random walk X (n) takes place on the non-negative integers. From any nonzero value r, it steps to one of {0, 1, 2,..., r + 1} with probability 1/(r + 2). From 0 it surely steps to 1. (a) Find the stationary distribution, and deduce that the expected number of steps to reach 0 starting from 1 is 2(e − 1). (b) Show that in equilibrium, if the walk is at 0, the probability that it arrived there from r is 2/{(r + 2)r!}. (c) If the walk starts at 1, find the probability that it visits 0 before visiting r + 1. I walk to and from work, and I have a total of m umbrellas at home or in my office. If it is raining when I set off either way, I take an umbrella if I have one to take. For any journey, it is raining independently with probability p = 1 − q. Let U (n) be the number of umbrellas available to hand when I start the nth journey; ignore the chance of rain starting during the trip. (a) Verify that U (n) is a Markov chain, and write down its transition matrix. (b) Show that the chain
is reversible in equilibrium with a stationary distribution such that π0 = q/(m + q). (c) Deduce that the expected number w of trips between occasions when I must set off in the rain is w = (m + q)/( pq). Show that w takes its smallest possible value s when Problems 477 p = m + 1 − [m(m + 1)] 1 2 and that in this case s = {2m + 1 − 2[m(m + 1)] 2 }−1 = 4m + 2 − 1/(4m) + 1/(8m2) + o(m−2) as m increases. 1 (d) When I own only one umbrella, and I have it to hand at the start of the first trip, show that the probability generating function of the number X of trips until I get wet is Eθ X = pqθ 2 1 − pθ + q 2θ 2. Find EX, and s and the corresponding value of p in this case. Appendix: Solutions and Hints for Selected Exercises and Problems No experienced mathematician feels well acquainted with a subject until he has tackled some problems; through attempting and failing, we extend the boundaries of our knowledge and experience. This observation applies to students also. It would be a big mistake to treat the remarks of this section as a solution sheet. Many of the hints and comments will be useful only to those who have spent a half hour, say, on the problem already. The remarks vary in style and content between small hints and detailed solutions; some problems receive no comments at all (indicating, perhaps, that they are either very easy or good challenges). C H A P T E R 1 1.8.1 P({1, 1} ∪ {1, 2} ∪ {2, 1} ∪ {6, 6}) = 4 36 Exercises = 1 9. 1.8.2 (a) 1 2 ; (b) 1 2 ; (c) 1 4. − 1 18 1.8.5 1.9.1 = {(i, j): 1 ≤ i < j ≤ 2n} and so || = n(2n − 1). Likewise, r |{H H }| = 1 n(n − 1). Hence, P(H H ) = (n − 1)/(2(2n − 1)). 2 1 1 2 2 (b) (
a) ;. 1.8.3 1.8.4 1.9.2 1.9.3 1.9.4 11 18 5 12 b) (a) zero; 1.10.1 1.10.2 Let C j be the event that the jth cup and saucer match. Then, i = j, P(Ci ∩ C j ∩ Ck) = 1 24 ; P(Ci ∩ C j ) = 1 12 P(. 478 479 Hence, by (1.4.5), (1.4.8), (1.6.1), P 4 Appendix 12 − 4 · 1 24 + 1 24 = 3 8. i=1 ; P(C) = 25 91 1.11.1 P(A) = 36 91 1.11.2 Let all the players continue rolling, even after first rolling a 6, and let Hr be the event that ; P(B) = 30 91. all three roll a 6 in the r th round. Because E ⊆ P(E) ≤ 1 − P ∞ Hr ∞ = 1 − ∞ ∞ c Hr, we have r =1 ∞ P(Hr ) = 1 − (63 − 1)r −1/63r = 0. r =1 1.11.3 Ignoring Chryseis, P(A) = r =1 52n/62n+1 = 6 11 1.11.4 As above, let all players continue rolling irrespective of 6s achieved. A gets his first 6 on the (3r + 1)th roll in 5r ways; B and C both have at least one already in (6r − 5r )2 ways. Hence, P(A last) = n=0 r =1 ∞.. 5r (6r − 5r )2/63r +1 = 305 1001 (b) P(B2) = 1 8 ; P(B3) = 1 8... (n2 4 + n2 5 + n2 6 + n2 9 + n2 + n2 8 = 97 324 + 1 6. 10) = 97 324. 107 324. 27n2 4 + 26n2 5 + 25n2 6. 1.12.2 P(B1) = 5 8. r =1 ; P(
C) = 1 4 ; P(B3) = 1 8 ; P(B336)2 (a) P(B2) = 3 8 (c) P(B2) = 1 8 1 (a) 0; (b) ; (c) 4 + n11 p2 = n7 36 36 p3 = p2 + 2 (36)3 436 990 (b) (a) ; 526 990 1.12.3 1.12.4 1.13.1 1.13.2 1.13.3 1.14.3 p1 = 361 990 ∞ 1 2, p3 = p4 = 502 990 2r 1 2 = 2 3. r =0, p5 = 601 990 1.13.4 Let p j be the chance of winning when the first die shows j. Then, so you would fix the first die at 5 if you could. 1.14.4 Let s be a sequence of length m + n in which the first m terms have x heads and m − x tails, and the next n have x heads and n − x tails. Now change the first x heads to tails and the first m − x tails to heads, giving a sequence t of length m + n with n heads. This map is 1–1, so the number of t-sequences equals the number of s-sequences, giving the result. Problems 1 13 1 13 (a) 1 2 3 > 1 2 8 15 ; (b). < 1 2 112 225 480 4 5 6 7 Appendix 1 2 2n for some integer n ≥ 1. ; (a) (b) 25 216 125 216 (a) (A ∩ B) ∪ (B ∩ C) ∪ (C ∩ A); (b) (Ac ∩ B ∩ C) ∪ (A ∩ Bc ∩ C) ∪ (A ∩ B ∩ C c); (d) 17 because (c) 1; 5 6 ; 17 < 0.05 < 16. 5 6 (c) (A ∩ B ∩ C)c 8 The loaded die is equivalent to a fair 10-sided die with five faces numbered 6. (a) 5 10 4 = 81 1 6 4 so the factor is 81; (
b) p23 = 4 p24. 12 (a) Use induction; (b) 1 9. (Hint: The cups can be arranged in 90 distinct ways.) (b) 1 4 ; (c) 3 16 ; (d) 1 8. 0.16 9 10 11 (a) 5 ; 6 35 36 24 19 36 2 3 1 3 13 14 15 17 19 21 in each case. (a 16 (a) 4 − 3 6 4 2 6 (b) n ↓ 0; 5 6 (bac) You would get the same answers. (c) Use induction. n 12 1 n 1 n ; ; In every case, p = (b) x = 6, y = 10; x(x − 1) (x + y)(x + y − 1) ↑ 1. 6 5 n 4 − 6 5 n 12 ; (b) 1 3 ; 1 4 ; 1 12 ; 1 2. (a) x = 3, y = 1 and x = 15, y = 6; (c) when r = 6, x = 2 and y = 7. 22 Always in each case, except (c), which holds when B ⊆ C ⊆ A, and (d), which holds when A ∩ C = φ. 23 1988 was a leap year, so p = 1 − (366)! (366 − m)!(366)m C H A P T E R 2 Exercises 1 − r 2 2.6.6 2.6.7 P(V |An ∪ Bn) = P(An)/P(An ∪ Bn) = dependence on n.) 2.7.1 Use induction. 2.7.2 2.7.3 (c + d)/(b + c + d) (c + nd)/(b + c + nd) → 1 as n → ∞ r n−1 p r n−1 p + r n−1q = p p + q. (Note the lack of any Appendix 481 2.7.4 Use (2.6.1). 2.7.5 It is also the probability of getting m cyan balls and n blue balls in any given fixed order. (1 − p)n; np(1 − p)n−1 2.8.4 2.8.5 (a) 1 − πn; (b) n−1 k
=1 pkπk−1sn−k; (c) 1 − (1 − p)n; n(n − 1) p2(1 − p)n−2. 1 2 2.8.7 ∞ k=1 (1 − pk) > 0, if and only if ∞ k=1 pk < ∞. 2.9.1 α(1 − α) + α2(1 − γ )/(1 − (1 − α)(1 − γ )) 2.9.2 (2 − α + αγ (1 − γ ))−1 2.9.3 Biggles 2.10.1 2.10.2. (a) P(E) = 0.108; P(Ac|D) = 45 86 (b) P(E) = 0.059; P(Ac|D) = 5 451 p(µ(1 − π) + ν(1 − µ)) p(µ(1 − π) + ν(1 − µ)) + (1 − p)(1 − π) > 5 (b) P(A|Dc) = 6 451 61. (a) P(A|Dc) = 1 46 < 45 86 ; 2.10.3 P(L) = (1 − ρ)P(E) + ρ(1 − P(E)); P(Ac|M) = (π(1 − ρ) + ρ(1 − π))(1 − p) ρP(Dc) + (1 − ρ)P(D) P(A|M c) = p[(1 − ρ)(µ(1 − π) + ν(1 − µ)) + ρ(µπ + (1 − µ)(1 − ν))] 1 − k/K ρP(D) + (1 − ρ)P(Dc) (1 − ρ)(1 − ρ K ) (1 − ρk+1)(1 − ρ K −k), where ρ = 1 − p p. 2.11.4 2.11.6 2.12.1 When λ = µ. 2.12.2 P(A2|A1) − P(A2) = (µ − λ)2
2(λ + µ) = 0. When λ = µ. 2.12.3 (a) (µ3 + λ3)/(µ2 + λ2) (b) µn + λn µn−1 + λn−1 → max{λ, µ}, as n → ∞ 2.12.4 2.12.5 (a) µ λ + µ ; µn µn + λn (b) λ λ + µ → 1, as n → ∞, if µ > λ. 2.13.3 (a) 2 p1 3 p1 + p2(1 − p1) 2.13.4 Yes, if p1 < 1 2 2.14.1 3 p1 3 p1 + p2(1 − p1). (b) ; and p2 = p1 1 − p1 (b) a = b = c.. (a) a = 1, b = 1, c = 2; 16 41 20 41 2.14.1 2.14.2 482 1 3 4 5 6 7 8 9 Appendix Problems (a) 0.12; (b) 0.61; (c) 0.4758; (d) ; 1 2 respectively; (ii) 7 9 ; ; (i) 1 1 6 3 (b) No. (a) 0.36; (b) 0.06; (c) 0.7; (d) (iii) 18 41. 7 13 1 14 ; 2 7 ; 9 14, respectively; (iv) 6 7. ; (a) (a) zero; 3 5 1 2 3 4 (b) (a) ; ; (b) one. (b) 2 3 ; (c) 5 6 ; (d) 4 5. (b)(i) 1 2 ; (ii) 1 4 ; (c) 1 36 ; (d) 1 42. (c) 0.7. 11 83 102 12 (a) (1 − p2)2; (b) 1 − p + p(1 − p2)2; (c) p(1 − p2)2 1 − p + p(1 − p2)2 13 ; (a ≤ exp(−x). (b) ; (c) zero if r < b, 1 2 if r = b, 1 if r >
b. 14 15 Let P(A) = α and P(B) = β. If the claim is false, then (1 − α)(1 − β) < 4 9 and αβ < 4 9 plane is empty, so the claim is true. and α(1 − β) + β(1 − α) < 4 9. The intersection of these three regions in the α − β 16 P(E|A = tail) = (1 − (1 − p)s−1)P(E|A = head) P(E) = pr −1(1 − (1 − p)s) 1 − (1 − pr −1)(1 − (1 − p)s−1) 17 He must answer k satisfying u(1 − b) b(1 − u Therefore, the examiners must set questions such that (u/b)n > p/q or the student can never convince them that he is not bluffing. 18 (i) 0.7; (ii) 2 9 ; (iii) Use (2.11.2). 19 The second set of rules. 20 21 22 16 31 6−2 (a) 1 − 1 2 and µ = 5 12 2 3 − n ; (b) √ 5 4 ; (c) 1 36(λ − µ) 1 2. λn 1 − λ − µn 1 − µ where λ = 5 12 + √ 5 4 Appendix 483 n−1 2 3 = 3 4.. 23 P(ever hit) ≤ 1 4 83 128 (ii) 7 8 24 (i) ; ; (ii) θ = 1 10 √ ( 41 − 1); (iii) 2(. 25 (i) p A = pB = 26 (a) 197 450 ; (b) 27 (a) 5n−1.6−n; (b)  1 − 1 2 (θ + θ 2) 2 − θ + θ 2 25 148 1 6 ; (c)  77 225 5 11 (c) ;.. n−1 28 (a) 2 p     − q    ; (b) ( p − q) q p q p n−1 − 1 q + n−1 n−+1; pk pk
0 = 0, pk n = pk−1 n−1 = 1 − m pk m n −1 k=0 60/k + m n = 1. (b) Choose x = 6. (c) 29 (i) pk m (ii) p0 pk m− 30 (a) 31 (a) 32 (i) 6 1 6 k=1 1 − p1 3 − p1 1 16 ; 33 (a) 1 (n!)2 (b) pt = ; (b) (1 − p1)2 2 + (1 − p1)2 ; (c) 1 − p1 3 − p1 − p2 − p3. (ii) 5 metres and one step;  (iii) 5 metres. for 2 ≤ t ≤ j ∧ k   t − 1 jk ( j ∧ k) − 1 jk j + k − t − 1 jk for c) j = k = n! for ii) For men, 0.5; 0.6; for women, 0.1; 0.11.. 34 35 (i) 0.3; 0.1545; 12 23 11 23 (b) (a) ; 36 No 39 pn = (1 − p) 1 − (q − p)n−1 1 − (q − p)n 44 P(win with 1–6 flat) = 0.534; P(win with 5–2 flat) = 0.5336. → 1 − p. 484 3.9.1 3.9.2 3.9.3 3.10.1 3.11.7 3.11.8 3.12.1 Appendix C H A P T E R 3 Exercises (a) (n + r )! r! n + r r (i) 462, assuming oranges are indistinguishable. (c) pr +1(n). (b) ; ; (ii) 7 = x 5| (the coefficient of x 5) in 7 k=1 (1 − x k)−1. 4 (a) ; ; (c) (b), where the sum is over
all k such that 6 n(n − 1) 5!(n − 5)(n − 6) 2n(n − 1)(n − 2)(n − 3) 6(n − 3) n(n − 1) n! 1 k!(r − 2k)!(n − r + k)! nr max{0, r − n} ≤ 2k ≤ r ≤ 2n. 1 nr n−r p(m, n) − p(m − 1, n), where p(m, n) is given in Example 3.11. k=0 n r (−)k Mk, where r ≥ 2n and Mk = (n − k)r −k(n − k + 1)k. n n k 3.12.3 3.13.3 Rotational symmetry. 3.13.4 As in the example, k disjoint pairs of seats can be chosen in Mk ways, k given pairs of twins can occupy the pairs of seats in 2kk! ways, the rest occupy their seats in (2n − 2k)! ways, so P (no pair adjacent) = n Mk2kk!(2n − 2k)! (2n)! n!(2n − k − 1)! (n − k)!(2n − 1)! 2k → e−1 (−)k k! (−)k n n k = k=0 3.13.5 k=0 as n → ∞. e−2. a k 3.15.2 Same as 3.15.1 3.15.1 b n k + 1 3.15..15.4 b + a a k b k 3.16.2 3.16.4 sum. (b) n x n a n k + 1 px (1 − p)n−x in all three cases. 3.17.5 K (20, 3) = 3.17.6 K (17, 3) = = 231. = 171. 3.17.7 Recall Example 3.12. 22 2 19 2 n j=r 3.18.5 p(n, r + s) p(n, j). Let n → ∞. (a) This depends on the order in which you catch the species, so the answer is a horrible n x bx cn−x (b + c)n. Avoid sampling with partial replacement! Appendix 485 3.
18.7 (a) 1 n!r! n−r (−)k(n − r − k)!/k!; k=0 3.18.8 (a) zero; (b) zero; (c) e−2. (b) n k=0 (−)k (n − k)! n!k! ; (c) ( p(n, 0))2. Problems ; (b) 1 a!b!c! ; (c) 6 (a + b + c)!. 49 153 (a) 1 2 3 (a) 6a!b!c! (a + b + c)! 4!48! (12!)4 52! (13!)4 ; 26 13 ; (b) − 16 52 13 72 + 52 13 39 13 52 13 52 13 ; (d) 4 2 48 9 52 13 72 39 13. − 6 26 13 + 4 43 12 3 52 5 52 5 4 2 ; (b) 44 13 5 52 5 13 2 52 5 4 2 ; 4 3. 52 5 ; (d) 4 ; (e) 156 (c) 4 4 (a) 13 39 13 4 2 (c) 4510 32491 1 + 105 (i) 10; (aii) S, where S = max{n : n2 ≤ M}. is an integer. (b) You have (k − 1)! colours, and k balls of each colour. How many arrangements are there? 4 9 (a) 5 if the question means exactly 4 aces, or 4 1 6 26 6 if it means at least 4 acesb) 6 times (a). (c) 1 − 3 10 (a) 18 ; (b) 8! 64 8 3n r ; 4n r 3n r (c) 36. 4n r − 4 (b) 2;. 64 8 n 2 (b) 2n ii) 11 (a) 1 − (c) 12 (a) 15; 14 Pk = 15 (b)(i) ; 4n r 3n r − 2 4n 486 Appendix 16 Follows from − 1 2 + K n + 1 4 17 (a) kn; (b) k! if k ≤ n; (c 19 See Example 3.11. (−)m m! n! (n − m)! 20 m=0 2 (2n − 2m)! 2n! 2m ∼ ∞ 0 (−)m m!
1 2m. 23 (a) (n − 4)(n − 3) (n − 2)(n − 1) whether Arthur sits at random or not. (b) Number the knights at the first sitting, and then use Problem 21. n (−)k 25 (a) k=0 n k (2n − k)! (2n)! → e− 1 2. (b) Problem 21 again. The limit is e−1. 1 = 1 2 5 (ii) 7 7 3 ; 2−7 + (iii 26 27 36 2n n ( pq)n = (4 pq)n − 1 2 n 1 ≤ (4 pq)n exp n + 3 4 1 4 −1 3. 3n − 2k k 39 Use induction.. Now use inclusion–exclusion. 4 7 5 3 1 − 1 2n... − 1 2 k−1 ≤ (4 pq)n exp log n → 0. 37 The number of ways of choosing k nonoverlapping triples (three adjacent) is 2−7 + 1 2 = 99 128 ; (iv) 92 99. C H A P T E R 4 Exercises 4.8.2 (a) 992; (b) 32k!(31)k−1. 4.8.3 They give the same chance because trials are independent. 4.8.4 P(T > j + k|T > j) = (31/32)k = P(T > k). 4.9.4 Either form a difference equation or rearrange the sum. " # 4.9.6 (a) [(n + 1) p − 1]; (b) − 1. 4.9.7 (a) 1 2 n (1 + (2 p − 1)n); m! (m − k)! n m (c) m=k (1 − (2 p − 1)n); (b) pm(1 − p)n−m = pkn! (n − k)! ; 0 ≤ k ≤ n. k p 1 2 (1 − e−λ − λe−λ)/(1 − e−λ) 4.10.1 4.10.2 λ(1 − e−λ)−1 4.10.4 (a) [λ − 1]; (b) k. Appendix 487 4.10.5 (
a) exp(λ(e − 1)); (b) e−λ; (c) λ; (d) λk. 4.11.3 Choose t to minimize L(t) = a x≤t (t − x) f (x) + bP(X > t). " " (a) ˆt = (c) ˆt = b a log a − log(a + bp) log q # # + 1; (b) ˆt = ) if this lies in [−n, n]; otherwise, ˆt = n if b/a > 2n. (What if a or b can be negative?). 4.11.4 (a) Any median of X ; (b) E(X ). 4.11.5 Minimize L(m) = b m (m − k) pk(m) + c m+n (k − m) pk(m), where k=0 pk(m) = pk(1 − p)m+n−k. For no overbooking, you need L(0) < L(m) for all m > 0; solutions are approximate or numerical. k(a + b − k). 4.12.2 4.12.3 Let pk be the probability that B wins if A’s initial fortune is k. Then m + n k k=m+1 P(A|B) = P(B|A)P(A)/P(B) = ppk+1/ pk and P(Ac|B) = (1 − p) pk−1/ pk. Hence, ppk+1E(X k+1|B) − pkE(X k|B) + (1 − p) pk−1E(X k−1|B) = − pk. When p = 1 2 E(X k|B) = 1 3, we have pk = (a + b − k)/(a + b), giving ((a + b)2 − (a + b − k)2),   0 ≤ k < a + b.! − 1 − (2 p)−(a+b) 1 − (q/ p)a+a + b)(a + b − 1); a+.12
.4 a+b k=0 mk a + b k 2−(a+b) =  4.12.5 (i) p = q; k − (a + b) q − p + p (q − p)2 q p ; (a + b)(a + b − 1) − k(k − 1). (ii) p = 1 2 M + 1 j( j + 1)M (b) 4.13.1 ; (c) 1 − exp(−λj) − 1 2 λM(M + 1) 1 − exp k. q p −. 4.13.2 E(X A) = 2. ∞ 4.13.3 (b) n=1 m + 1 m + n = ∞; (c) e− 1 2 m(m+1) ∞ n=1 e− 1 2 λ(m+n)(m+n+1) < ∞. 4.13.4 (i) Median is ∞; " (ii) E(X A|X A < ∞) = ∞. #! 4.14.1 ˆm = max 0, N (b + d.14.2 ˆm = [m], where m is the positive root of (m + 1)q m = c/(b + c). 4.15.1 You win $1 with probability 1, but your winning bet has infinite expected value. 488 4.15.2 p(1 − (2q)L+1) (1 − 2q)(1 − q L+1) If p = 1 2, then expectation is Appendix if q = p. This → ∞ if p < 1 2 or → p/(2 p − 1) if +1. 1 4. → ∞ as L → ∞. s−1 = 2.3s−24−s; 4.16.2 With an obvious notation mus = 2 3 3 4 s−1 mds = −s. Hence, the r th search downstairs comes after the sth search upstairs if 2 > 3−s4s−r > 1 6 9 The order is duuuuduuuuudu.... 4.16.3 Place mr s = (1 − dr 1)(1 − dr 2)... (1
− dr s) pr in nonincreasing order. 4.17.4 q 2(1 + p)/(1 − pq) = (1 − p2)2/(1 + p3) (1 − q 2)2/(1 + q 3) 4.17.5 2 1 − pq 4.17.6 E(X |Bc) = 4.17.7 P(A1|B(A2|B) = qp 1 + p. 4.17.8 Every number in [2, 3) is a median. 4.17.9 P(B) = p2(1 − q 3)/(1 − (1 − p2)(1 − q 2)) E(X ) = (1 + pq)(1 − 2 pq)/(1 − pq(1 − p)(1 + q)) 4.17.10 With new rules P(B) = q 2 p2 + q 2 Brianchon is making a mistake. E(X ) = 4 with new rules., which is smaller than old P(B) if p > 1 2. 4.18.2 Because the answer “yes” is false with probability (1 − p)/(2 p), individuals should be 2Yn n − 1 should not be too far from p in the much more likely to tell the truth. Then long run. ∞ Ak() 4.18.3 P n ∞ ≤ n P(Ak()). 4.18.4 Use 4.18.3 4.18.5 Use 4.18.1 4.18.6 Use Markov’s inequality and Chebyshov’s inequality 4.18.8 a(m, k) = m(m − 1)... (m − k + 1) (m + 1)... (m + k). Hence, for large enough m with k fixed $ $ $ $ $log a(m, k) + 1 $ $ $ ≤ m Hence, [a(m, k)]mek2 → 1. The inequalities follow from + · · · + m2 → 0 as m → ∞. 2m m 4−m < 2m(2m − 2)... 2 (2m + 1)... 3.1 = 1 2m + 1 −1 2m m 4−m and 2 2m
m 4−m > (2m − 2)... 4.2 (2m − 1)... 3.1 = 1 2m 2m m −1. 4−m Appendix 489 4.19.6 For the left inequality, prove and use the fact that for any collection of probabilities p1,..., pr, we have − pi log pi < − pi log pi. Equality holds when g(.) is a one–one map. For the right-hand inequality, note that i i i fi = exp(−cg(xi )) i exp(−cg(xi )) is a mass function and use 4.19.1. Equality holds if fi = f X (xi ) for all i. Problems ; f (3) = 55 140, E(2) = 66 140, f (1) = 18 140 (b) f (0) = 1 140 35 12 If X is uniform on {1, 2,..., n}, then var (X ) = 1 12 (a)(i)(e2 − 1)−1; (ii) p−1 − 1; (b)(i) 2e2(e2 − 1)−1; (ii) (1 − p)−1; (n2 − 1). (iii) (log(1 − p)−1)−1; (iv) 6π −2; (v) 1. (iii) p((1 − p) log(1 − p)−1)−1; (iv) ∞; (v) ∞. 6 Yes, in all cases. 7 c = 4(M + 1)(M + 2) M(M + 3) → 4; E(X ) = 2(M + 1) M + 3 → 2. 8 Condition on the appearance of the first tail to get P(An−2) + 1 8 P(An−1) + 1 4 P(An) = 1 P(An−3), n > 3. Hence, P(An) = Aαn + Bβ n + Cγ n, 2 where α, β, γ are roots of 8x 3 − 4x 2 − 2x − 1 = 0, and A, B, C are chosen to ensure that P(A1) = P(A2) = 0 and P(A3) = 1 8. Similar
conditioning gives E(T ) = 1 2 (1 + E(T )) + 1 4 (2 + E(T )) + 1 8 (3 + E(T )) + 3 8. Hence, E(T ) = 14. To find E(U ), consider the event that a sequence of n tosses including no H T H is followed by H T H. Hence, either U = n + 1 or U = n + 3, and so P(U > n) 1 8 = P(U = n + 1) 1 4 + P(U = n + 3). Summing over n gives 9 E(X ) = ∞. 1 8 E(U ) = 1 4 + 1; E(U ) = 10. 14 15 1 (i) b − a + 1 e.g. f (−2) = 1 2 for a ≤ k ≤ b; (ii), f (1) = f (3.. 16 (i) ( f (2n) − f (2n + 1)); (ii) zero. ∞ −∞ 490 Appendix 17 FY (y) =    ; FX. 19 20 (b) p−1 − log(1 − p)−1 − 1. (a) When λ = 0; (a) If the type is uncommon and the population is large. (c) X is roughly Poisson with parameter 10 so P (this is the only one) 10(e10 − 1)−1, which is very small. 2 2 22 21 ; E(R) → 1 2 r, f R(r − 1) = n ; E(X ) = 2 − 5 18 (m + n)(m + n + 1)m−2 for −m ≤ n ≤ 0. For n − r n ; f X (3) = 2 7 f R(r + 1) = f X (1) = 5 18 24 P(X ≤ n) = 1 2 0 ≤ n ≤ m, P(X ≤ n) = 1 − 1 2 (m − n)(m − n − 1)m−2. + 2 7 n.. 25 Mean µp. 26 (i) 6! (2
!)3 ; 27 (a) pkq n−k (ii) 6 5 n − 1 k − 1 + 16 15 + 5 3 = 59 15. ; (b) (r p)k(1 − r p)n−k ; n − 1 k − 1 28 (1 − pr )−n; E(X ) = n(c) (1 − r )k pk(1 − p)n−k n k (i) Choose A if 2000(1 − p) < 1000 p (i.e., if ii) Choose A if 2000 < 1000 ).. 4 + 5 p (c) (1 − p)−1. 32 (a) pn; (b) (1 − p) pn−1; 1 + p as n → ∞. 29 31 M(n)/n → C = 1 (b) p(1 − p)x−1/(1 − (1 − p)7); (a) 1 − (1 − p)7; − 7(1 − p)7 x f X (x) = 1 1 − (1 − p)7 p (b) 102(1 − p)10 + 103 p(1 − p)9; 1 p (d) (c) 7 36 ; 1 (1 − (1 − p)4). +(1 − r )102c. 2 3 (n + 1). 37 (c) 38 (a) Y is B ; E(Y ) = n/36. (c) [102(1 − p)10 + 990 p(1 − p)19]b (b) X − r is B n − r, 6 7 ; E(X − r ) n, 1 36 n − r ) p2 1 + p 1 1 − r 39 (a) (b) −1 ; p2 1 + p p2 1 + p 2(1 − r )2 + pq (1 − r )2 − pq ; p2 p + q −1 + q 2 1 + q p2 + 2q p2 + q 2 + pq.. The second set of rules., but p > 8 17, so choose B. n, 1 32 n − E(H ) = E(n − H ) = 41 B 43, use the Poisson approximation. Appendix 491 n−1 (2n − k) n (n − k)
pk = k=0 = 0 n 2k−2n. 2n − k − 1 n 2n − k n 1 (2n − k + 1) E(2n + 1 − H ) − 2n + 1 22n+1 2k−1−2n 2n n. = 1 2 So E(H ) = 2n + 1 22n q(1 − q)k−1ak 2n n − 1. k 44 45 With an obvious notation, m = 1 + 1 2 m13 (and two similar equations), also m12 = 1 + 5 6 1 m12 + 1 3 6 equations). Solve to get m. m2 + 1 6 m1 + 1 3 m3, also m1 = 1 + 1 2 m1+ m12 (and two similar C H A P T E R 5 Exercises 5.11.3 ρ = − 1 2 m−x qr (1 − q)( 18!( pq)9−kr 2k ((9 − k)!)2(2k)! m x 9 5.11.4 5.11.5 5.11.6 (n − Y ) k= is B(m, p2) with mean mp2, variance mp2(1 − p2). 5.11.7 5.12.1 5.12.5 A is the sum of two binomial random variables B(m, φ) and B(m, µ). Hence, a f A(a) = φk(1 − φ)m−k m k Therefore E(A) = m(φ + µ). k=0 µa−k(1 − µ)m−a+k m a − k 5.12.6 E(A|S) = 2S + (m − S)φ + (m − S)µ 2n−1 − 1 5.13.3 E(Rn) = 2 + 2 2n + 1 3 pr = r m n 2n − 1 2n−.13.4 → 7 2 5.13.5 5.14.4 When Xi = c, where c is constant. 5.14.5 We assume that “at random” means an individual is selected at random from n independent families X 1,..., X n. Define I (Xi ≥ k) = 1 0
if Xi ≥ k if Xi < k. 492 Appendix Then and f R(k) = E n i=1 I (Xi ≥ k) n i=1 Xi = nE I (X 1 ≥ k) n i=1 Xi E(R) = E k k I (X 1 ≥ k) 1 Xi /n n. 5.14.6 Let X be uniform on {x1,..., xn} and Y uniform on {y1,..., yn}. 5.15.1 S + (n − S) p − pγ 1 − γ p 5.15.2 N γ 5.15.3 ρ(N, S) = 1 2 γ (1 − p) 1 − γ p 5.15.4 j (i) P(T = k, S = j, N = i) = k × pi (1 − p)n−i for k ≤ j ≤ i ≤ n. (ii) P(N = i|T = k) = n − k n − i 5.15.5 n − s i − s p(1 − γ ) 1 − pγ i−s 1 − p 1 − pγ τ k(1 − τ ) j−k i j γ j (1 − γ )i− j n i n−i i−k p − pγ τ 1 − pγ τ, which is binomial. τ k(1 − τ )s−k; k ≤ s ≤ i. 1 − p 1 − pγ τ n−i s k 5.15.6 Zero 5.17.2 No 5.17.3 E(Z ) = 1 2 5.17.4 1 6 n; 5 36 n. ; var (Z ) = 5 12. 5.18.7 Recall the ballot theorem. 5.19.3 5.19.4 E(Zm) = b a na(b − a)(b − n)/(b2(b − 1)). + b − 1 a − 1 5.19.5 P(X = k|X + Y = j.. 2n j 5.20.5 E(Rr ) = ∞ 5.20.7 For a walk starting at zero, the expected number of visits to zero including the first is 1 | p − q|. Hence, for p < q
and r > 0, E(V ) = r < 0, E(V ) = | p − q|. Likewise, if p > q, r 1 r p q E(V ) =   p q 1 | p − q|  1 | p − q| 1 | p − q| ; for p < q and r < 0 r > 0. Appendix 493 P(SX = SY = 0) = n 1 42n (2n − 1)2(2n − 3)2... 12 (2n)2(2n − 2)2... 22 ≥ n k=0 n 1 2n = ∞. (2n)! (k!)2((n − k)!)2 = n 1 42n 2 2n n 5.20.8 E(V ) = = n n + 5.21.5 P(X ≥ Y ) = FY (x) f X (x)d x ≥ + FX (x) f X (x)d x = 1 2. 5.21.7 (i) Let I j be independent Bernoilli random variables with parameter p. Then m I j ≤ n 1 1 I j for m ≤ n. (ii) Let I j be independent Bernoulli with parameter p2, and let K j be independent Bernoulli with parameter. Then I j K j is Bernoulli with n parameter p1 and p1 p2 12 13 14 Problems, f (1, 0) = f (0, 1) = 8 36, f (0, 0) = 16 36, f (1, 1) = 2 36 ; f (2, 0) = f (0, 2) = 1 36 cov (X, Y ) = − 1 18 1 16 (a) Zero; (b) ; ; ρ(Xc) ;. (d) 1. 1 2 e.g. X = ±1 with probability each, Y = |X |. cov (U, V ) = ac + bd + (ad + bc); ρ(X, Y ) = 0 for many choices of a, b, c, d. p − 1 2 + 1 2 (i) P(correct) = 2 (b) P(U = m, V = n) = pm+1q n + q m
+1 pn; cov (U, V ) = (4 pq − 1)/( pq); ρ(U, V ) = −| p − q|. (ii) P(correct) = p3 + 3 p(1 − p)2. ; 2 10 You need results like n i=1 Then cov (X, Y ) = − n + 1 12 i 2 = 1 3 n(n2 − 1) + 1 2 ; ρ(X, Y ) = − 1 11 (b) a − 4a2; (c) E(X |Y = 0) = a 5 6 2. 2 cov (U, V ) = 7 6 (a) E(|X Y |) = E(|X ||Y |) = E(|X |)E(|Y |) < ∞; √ 1 − 2a (a) Yes, when θ = 3 − 2 (d) yes, when θ = 1 2 (f) yes, when α = 6π −2. (3 − 2 2; √ n(n + 1) and i= j 1≤i, j≤n i j = 2 i − n i=1 i 2. n i=1 → 0. n − 1 ; E(X |Y = 1) = 1 2 ; (d) a = 1 4. (b) E(X 2)E(Y 2) < ∞. (c) yes, when θ = 1 2 (e) yes, when 5 − 1); √ ( 2) and independence holds; (b) no; αβ 1 − β = 1; (e) f X (if) fY ( j) = α j −2, 1 ≤ j. 494 Appendix    1 + θ 1 − θ 2θ 1 − θ θ |i|, i = 0, i = 0 15 (a) f X (i) = (c) f X (i) = θ i+1 1 − θ, i ≥ 0 θ 2i+3 1 − θ, iβ c − iβ i ≥ 0 (d) f X (i) = − (i − 1)β c − (i − 1)β k k a1 x1
2 ai k 2 xi. 1 ai k 1 xi 17 (c) P(X 1 = x1) = 18 (a) 1 − pc 1 − p ; (b) P(min{X, Y } > n) = pn 1 pn β mq n 1 − β mq n. 26 Let (x, y, z) take any of the 8 values (±1, ±1, ±1). Then pβ 1 − qβ ; 2 so E(Z ) = αp βq (b) (a) 20. 1 1 − p1 p2 1 − x y = |1 − x y| = |(1 − x y)||(−x z)| because |(−x z)| = 1, = |(1 − x y)(−x z)| = |yz − x z| ≥ ±(yz − x z). 27 28 29 2n + 1 Now use Corollary 5.3.2 to get the result. 1 fY ( j) = 1 ; f X (i) = 2m + 1 f (0, 1) = f (1, 2) = f (2, 0) = 1 3 P(Y < X ) = 1. 3 (a) E(Ur ) = U 2 U + V (b) Let T be the number of tosses to the first head.. Then f X (i) = fY (i, and P(X < Y ) = 2 3,. pU 2V 2 (U + V )(U V p + U + V − p(U + V )). + E(UT −1, this is U 2 When U = V = 1 p k 1 Si, where Si is the score on the ith ball. i j = 1 12 k(n + 1), E(Si S j ) = 2 n(n − 1) ∼ 2 3 3U − 2 U. i> j (n + 1)k(n − k). If M is the maximum, 31 The total T = (a) E(T ) = 1 2 var (T ) = 1 12 (3n + 2)(n + 1). Hence, P(M = m) = P(M = m without replacement; k with replacement. 33 (a) √ 3 ; (b) 1 2 5 − 1 6 ; (c) 5
6 Appendix 495. 34 Use the argument of Theorem 5.6.7; pr 3 − r + q = 0; r = (− p + ( p2 + 4 pq) = e−22r r! ; mean = variance = 12. r 3n−r e−88n n! 3n−r 4n ; 4n r n r ∞ n r 35 n=r r +1 1 2 )/(2 p). 38 (a) P(M ≥ r ) = p q ; P(M = r ) = (b) P(M = r |S0 = −k) = αβ k 1 − p q βp q 40 For a neat method, see Example (6.6.6). 41 No P(S0 = −k|M = r ) = βp q 1 − ; E(M) = p p q − r +k p q k p q ; ; P(M = r ) = k ≥ 0.. q − p α(q − p) q − βp r ; p q 6.10.7 E(s Ta0 ) + E(s Ta K ) (λ2(s))a 6.10.8 6.10.9 (i)P(Ta0 < ∞) = 1 q p (ii)E(Ta0|Ta0 < ∞) = C H A P T E R 6 Exercises |. 6.10.10 6.11.1 (i) E(s T ) = psE(s T10 ) + qsE(s T01 ); 1 (ii) E(T |T < ∞) = 1 + | p − q|. (0, 2, 2, 4) and (2, 3, 3, 4). 6.11.2 (b) f (x) and g(x) have nonnegative coefficients and = x(1 − x 12) 12(1 − x) 2. f (x) f (1) g(x) g(1) 6.11.4 Yes, trivially 6.11.5 No nσ 2 6.12.10 r is the chance of extinction derived in Example 6.3.16. 6.12.13 6.13.7 Use induction. 6.13.9 (a) By Jensen’s inequality (4.6.14),
we have E(X 2) = E(X 2|X > 0)P(X > 0) ≥ (E(X |X > 0))2P(X > 0) = E(X |X > 0)E(X ). (b) Hence, E(Znρ−n|Zn > 0) ≤ E 6.13.10 Let E(s Z ∗ n ) = G∗ n(s). Then ρ−2n Z 2 n E(s Z ∗ n t Z ∗ n+m ) = G∗ m−1(t)G∗ n(sGm(t)) → sG∗ m−1(t)Gm(t)(ρ − 1)/(ρsGm(t) − 1). 6.14.4 Set z = y + 1 in (3) and equate coefficients. 6.15.10 For HHH, E(X ) = 2 + 4 + 8 = 14; for HTH, E(X ) = 2 + 8 = 10; for HHT, E(X ) = 8; for THH, E(X ) = 8. The others all follow by symmetry from these. 1 p (= 42 in the fair case). + 1 p3q 2 + 1 p2q 6.15.16 6.16.6 Arguing directly E(T ) = E(X 1 Ia(X 1 Ia(T )E(I c a ). Hence, if X 1 is geometric. E(T ) = E(X 1) I c a 1 − E = E(X 1) P(X 1 ≤ a) = 1 q(1 − pa), b )/P(X 1 > b) = (1 − pb)/(q pb), if X 1 is geometric. 6.16.7 E(T ) = b + E(X 1 I c 6.16.8 This is (1) with a = 0 and b = r. 6.16.9 This is (8) with X geometric. 6.16.10 P(L n < r ) = P(W > n), where W is as defined in (9). Hence, 1 + Σn s^n πn,r = Σn s^n P(W > n) = (1 − E(s^W))/(1 − s) = (1 − p^r s^r)/(1 − s + qp^r s^{r+1}).
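As a numerical check on the expected waiting times in 6.15.10 above, the following short simulation (not part of the original solutions; the function name and sample size are arbitrary) averages the number of fair-coin tosses until a given pattern first appears. It gives values close to 14 for HHH and 10 for HTH.

```python
import random

def mean_wait(pattern, trials=100_000):
    """Estimate the mean number of fair-coin tosses until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        window, tosses = "", 0
        while not window.endswith(pattern):
            window = (window + random.choice("HT"))[-len(pattern):]  # keep only the tail
            tosses += 1
        total += tosses
    return total / trials

print(mean_wait("HHH"), mean_wait("HTH"))   # roughly 14 and 10
```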
Problems s−n − sn+1 1 − s ; s = 0 (a) G = 1 n P(X ≤ k)sk+1 (b) G = 1 P(X < k)sk = n − sn+1 1 − s (c) G = 1 − (1 − s−1) log(1 − s); (d) G = 1 − 1 2 (e1 − s) log(1 − s−1) − 1 2 + cs−1 1 − cs−1 1 + cs 2n + 1 1 − cs |s| ≤ 1 1 2 3 (1 − s−1) log(1 − s);! |s| = 1 ; |s| = 1. (a) A p.g.f. wherever G X (s) exists; (d) a p.g.f. for all s; |βS| < 0. (b) not a p.g.f. (c) a p.g.f. for |s| < p−1; (e) a p.g.f. for |s| ≤ 1; (f ) a p.g.f. if α log(1 + β) = 1, β < 0, for 5 Let N have p.m.f. f N (k) = 2−k, k ≥ 1, and (Xi ; i ≥ 1) be independent and identically distributed with p.g.f. G, then Y = N Xi. 6 7 9 10 1 If it were possible, then 1 − s11 = (1 − s)R1(s)R2(s), where R1 and R2 are polynomials (with real positive coefficients) of degree five. Because the imaginary roots of unity form conjugate pairs, this is impossible. (b) Yes, make it a flat with f (2) = 0. Y (1) − (G (b) Gn 1 3 Y (1) + G (G X (1) + G X (ω) + G X (ω2)), where ω is a complex cube root of unity. Y (1))2 = var(N )(E(X ))2 + E(N )var(X ). (c) 11 G =
G X, if and only if G = (1 + µ)−1 1 − s(1 + µ)−1 E(s N ) + s2 13 Use conditioning. So E(s N ) = s 4 2 (2 + E(N )) + 1 2 (1 + E(N )) + 1 4 (b) P(R = r ) = E(. (a) 14, so X is geometric. E(s N ) + s2 4 ; ; E(N ) = 6. 1 − 1 ps Hence, (1 − s)G R(s) = cd(1 − p)s + cs. ER = 1 2 15 By the independence var(H − T ) = var(H ) + var(T ) = λ = var(N ). 16 With the notation of Problem 15, log(1 − ps), cp/(1 − p). where d = log(1 − p) 1 p P(R = r |X = x)P(X = x) = ∞ x=r cpx x(x + 1). E(s H t T ) = E(s H t N −H ) = E(( ps + qt)N ) = G N ( ps + qt). If H and T are independent, then G N ( ps + qt) = G H (s)G T (t) = G N ( ps + q)G N ( p + qt). Write s = x + 1, t = y + 1, G N (v) = f (v − 1) to get f ( px + qy) = f ( px) f (qy). The only continuous solutions of this are f (z) = eλz, so G N (s) = eλ(s−1). λ   n. (m − 2)s m − 2s... s m − (m − 1)s. 17 G X n (s) = p 1 − qs n   = 1 − 1 − n λs n   → eλ(s−1). (b) E(s N ) = s + (s − 1)(exp(se−a) − 1); E(N ) = ee−a. 18 19 Do not differentiate G(s)! (i
) G X (s) = s Do not differentiate this to find the mean! (ii) GY (s) = s. (m − 1)s m − s 3 − 2s 20. 21 Use L’Hopital’s rule. 22 Gn(s) = s 2 − Gn−1(s) ; Gn(1) = 1; G n(1) = n. 24 Differentiate. λ a 1 + a − s 25 27 28 X + Y, µ(k) = (λ + µ)k. ( ps + q)n( p + qs−1)msm = ( ps + q)m+n. (a) αp 1 − [1 − p + p(1 − α)s]t ; (b) p(1 − α) 1 − αp 29 For all 0 ≤ r ≤ n, we have. n − r k n k=0 r (1 + x)n x k = (1 + x)n−r = r k n = −x 1 + x k=0 1 − x 1 + x k (1 + x)n. 26 E(s X +Y t X −Y ) = exp(λ(st − 1) + µ(st −1 − 1)). Hence, for X − Y, κr = λ + (−)r µ and for 498 Appendix Because the sums are polynomials in r of degree at most n, it follows that they must be identically equal. Hence, setting r = −n − 1 and x = 1 gives n k=0 n + k k 2−k+n = n k=0 2n + 1 k = 1 2 2n+1 k=0 2n + 1 k = 22n. Hence, n k=0 ak = 1 2n+1 n k=0 n + k k 2−k = 1 2. Recognising the p.g.f. of the negative binomial distribution with parameter 1 2, this says that in a sequence of coin tosses the chance of getting up to n tails before n + 1 heads equals the chance of getting n + 1 or more tails before n + 1 heads equals 1 2. Now remember the ant of Example 3.7.1. 30 Let Sn = X n + Yn. Then Sn is a simple random walk with p = α1 + α2 and 31
q = β1 + β2 = 1 − p. Sn is symmetric so (a) E(T ) = ∞ and (c) Let Un = X n − Yn and Vn = X n + Yn, so (b) E(s T1 )|s=1 = 1. E(sU1 t V1 ) = 1 4 = 1 2 (st + st −1 + ts−1 + s−1t −1) (s + s−1) 1 2 (t + t −1). Hence, Un and Vn are independent simple random walks and E(s X T −YT ) = E(E(s VT |T )) s + s−1 2 = E T = F1 s + s−1 2 m, where F1(s) = 1 − (1 − s2) s 1 2. 40 E(s X m |X m > 0) = (E(s X m ) − P(X m = 0))/P(X m > 0) /(1 − pm) = pm(e−m log(1−qs) − 1)/(1 − em log p) − pm = m p 1 − qs = pm(m log(1 − qs) + O(m2))/(m log p + O(m2)) → log(1 − qs) log(1 − q) 41 We know U (s) = as m → 0. [O(.) is defined in Section 7.5] u2ks2k = (1 − s2)− 1 2. Let r2n = = s2kr2k = U (s) 1 − s2 2k 2k k = (1 − s2)−3/2 = 1 s d ds 1 (1 − s2) 2k + 2 k + 1 4−ks2k−2 = (2k + 2) 4−k−1s2k. The result follows. n 0 1 2 u2k. Then = 1 s d ds 2k k 4−ks2k Appendix C H A P T E R 7 Exercises 2 499 7.11.4 f (x) = 3 4 + 1 4. 12 X =  7.11.5 P(Y ≤ y) = P − log U ≤ x − 1 2 �
��  U, so you toss a coin twice and set if you get two heads otherwise. β y γ = 1 − exp − β y γ (a) (π(1 + x 2))−1; −∞ < x < ∞ (b) 2(π(1 + x 2))−1; 0 ≤ x < ∞. 7.11.7 7.12.7 First note that the coefficient of x n in Hn is 1, so Dn Hn = n! Now integrating by parts ∞ −∞ Hn Hmφ = [(−)m−1 Hn Hm−1φ]∞ −∞ + D Hn(−)m−1 Dm−1φ. ∞ −∞ The first term is zero, and repeated integration by parts gives zero if m > n, or ∞ 7.12.8 By Taylor’s theorem φ(x) = t n(−)n Dnφ(x)/n! = φ(x − t). φ Dn Hn = n! if m = n. −∞ t n Hn(x) n! 2 x 2 = e− 1 2 (x−t)2+ 1 = e− 1 2 t 2+xt. Hence, t n Hn(x) n! 7.12.9 Set φ = −φ/x in the integral in (6), and integrate by parts again. 1 − F(x) 1 − F(t) 7.12.10 E(X − t|X > t) = d x = e(λt)2 e−(λx)−1(1 − (λ + 2)) and the inequality follows using (3).. 7.13.2 Still 7.14.5 The policy is essentially the same with the one difference that ˆx = ˆy. 7.14.6 The new expected cost function λ∗ is related to λ by λ∗(x) = λ(x) + mP(Z > x) = λ(x) + m(1 − F(x)). Then − (h + p + mλ) exp(−λ( ˆy − a)). Thence, λ∗
( ˆx) + c ˆx = k + c ˆy + λ∗( ˆy). = 0 yields 0 = c − h ∂µ ∂ y 7.15.6 Let g(s) = log s − (s − 2)(s + 1). At s = 1, we have g(1) = 2 > 0; at s = e4, we have g(e4) = 4 − (e4 − 2)(e4 + 1) = (3 − e4)(2 + e4) < 0. There is thus at least one root. However, log s lies below its tangent and s2 − s − 2 lies above, so there can be no more than one root in this interval. 7.15.7 Use log s ≤ s − 1. 7.16.2 7.16.3 7.16.4 r (t) = λ. Your part has no memory. (a) Use Bayes’ theorem. (b) π → 1 if λ > µ; π → 0 if λ > µ; π = p if λ = µ. d 2 dθ 2 By Cauchy–Schwarz, (E(X eθ X ))2 = [E(X eθ X/2eθ X/2)]2 ≤ E(X 2eθ X )E(eθ X ). E(X 2eθ X )E(eθ X ) − (E(X eθ X ))2 (M(θ))2 log M(θ) =. 7.16.5 P(T > t) = E(exp(−"t)) = M"(−t). But dr (t) dt = − d 2 dt 2 P(T > t) = − d 2 dt 2 M"(−t) < 0, by (4). Hence, T is DFR. = e(λt)2 π 1 1 3 500 Appendix 7.16.6 As above r (t) = E( f (1 − FTλ) f Tλ + f 2 (E( fT" ))2 ≤ (E(− f Hence, r (t) ≤ 0. Tλ ≤ 0; hence, T" (1 − FT" )) T" )(1
− E(FT" )) + (E( fT" ))2. Now because FTλ is DFR, we have 1 2 )2 ≤ E(− f T" )E(1 − FT" ) by Cauchy–Schwarz. 7.17.4 (i) d dt 1 t t 0 r (v)dv = r (tv)dv = 1 t 2 0 t [r (t) − r (v)]dv > 0 if r (v) > 0 for all v. Hence, IFR ⇒ IFRA. (ii) Use (7.8.6) and Theorem 7.8.7. 7.17.5 ∞ ∞ (i) E(T − t|At ) = by Definition 7.7.8 (iii). Hence, NBU ⇒ NBUE. ds ≤ P(T > t + s) P(T > t) 0 0 7.17.6 P(T > t) = λ2xe−λx d x = (1 + λt)e−λt. Hence ∞ t P(T > s)ds if NBU. H (t) = − log(1 + λt) + λt, r (t) = λ − 1, and r (t) = 1 (1 + λt)2 > 0. 1 + λt 0, 1 2 7.18.3 Uniform on. 7.18.4 E{X ∧ (1 − X )} = 1 4 1 7.18.5 E(sin!) = 2 2 0. ; E{X ∨ (1 − X )} = 3 4 d x = 1√ 2 √ x (x 2 + (1 − x)2) 1 2 Likewise, E(cos!) = 1√ 2 √ 2) + 1 − 2) − (1 − E(sin!) E(cos!) = log(1 + √ log(1 + 1 (−1 + log(1 + √ 2 √ 2) 0.36. 2)) + 1, so (1 + log(1 + √ 2)) − 1. 7.18.7 E(cot!) = 2 0 1 B(a, b) x a−2(1 − x)bd x + 1 1
B(a, b) x a(1 − x)b−2d x. 1 2 7.19.6 Remember (or prove) that (n) = (n − 1)! when n is an integer. 7.19.7 By the reflection principle, the number of paths that visit b on the way from (0, 0) to (2n, 0) is the same as the number of paths from (0, 0) to (2n, 2b), namely,. 2n n − b 2n n − b n2n+1 2n n Hence, the probability required is = = (n!)2 (n − b)!(n + b)! −n+b− 1 1 − b n 2 (n − b)n−b+ 1 −n−b+ 1 1 + b n 2 2 (n + b)n+b+ 1 2 → e−y2. using Stirling’s formula (a) Use Stirling’s formula. (b) Take logs. (c) Use the Riemann integral. 7.19.8 7.19.9 The number of paths from (0, 0) to (2n, 2 j) is ; the number from (0, 0) to 2n n − j ; and the number from (2r, 0) to (2n, 2 j) that do not visit 0 is 2r r 2n − 2r n − r + j (2r, 0) is j n − r ; (recall the reflection principle, or the hitting time theorem). Hence, the required probability is 2r r Stirling’s formula and take logs in the usual way. 2n − 2r n − r + j j n − r 2n n − j = fr. Now use Appendix Problems β $ $ $ $ 501 $ $ $ $. 1 3 4 5 9 11 21 22 x lies between α and β, c(α, β)−1 = (x − α)(β − x)d x α 2 P(X = x) = lim n→∞ F(x) − F x − 1 n. 2. − var (X ) = B(a + 2, b) B(a, b) (sin x)α(cos x)β d x = c 2 B(a + 1, b) B(
a, b) π/ sin−1 x 1/2 exp(− exp(−x)) 6 7 E(Y ) < ∞ for λ > 2a > 0 φ(x + a x )r (x + a x ) → e−a, 1 − x + a x (1 − (x)) = φ(x)r (x) using the properties of Mills ratio r (x), and φ(x). (i) F(x) = (b − 4π 2ml0x −2)/(b − a) for 2π (ii) E(X ) = (4π(ml0) 1 2 )/( a + √ √ b). 1 2 ml0 b ≤ x ≤ 2π 1 2. ml0 a 12 Choose xl such that F(xl ) < 2−(n+1) and xu such that 1 − F(xu) < 2−(n+1). Then set. Sn(X ) = xl + r for xl + r < X ≤ xl + (r + 1), for all r in 0 ≤ r ≤ # " xu − xl √ √ √ 3x)2, so f X (x) = 4 3(1 − 2 3x). 14 P(X > x) = (1 − 2 15 c = E(X ) − 2 3 3 2, exp λt 16 17 F (x) ∝ g(x) and F(∞) = 1 18 κ1 = µ; κ2 = σ 2; κr = 0, r ≥ 3 t > 0 19 log MX (t) = − log  = 1 − t λ if 10 < V ≤ 1 if if 3 5 9 10   U U 1 2 U 1 3 20 Set X = ∞ r =1 t r r λr, so κr = (r − 1)! λr (a) Set u = v/a (b) Set u = b/v after differentiating. (c) Integrate boundary condition I (a, 0) = π 1 2/(2a). (a) Set x = v2 to get MX (t) = 2αI
C H A P T E R 8 Exercises 8.11.3 By definition, fY |X (y|x) = f (x, y)/ f X (x) = (2π(1 − ρ^2))^{−1/2} exp(−(y − ρx)^2/(2(1 − ρ^2))). This is N (ρx, 1 − ρ^2); therefore, E(e^{tY}|X ) = exp(ρXt + (1 − ρ^2)t^2/2). 8.11.4 By conditional expectation, E(e^{sX+tY}) = E(E(e^{sX+tY}|X )) = E(e^{(s+ρt)X}) e^{(1−ρ^2)t^2/2} = exp((s + ρt)^2/2 + (1 − ρ^2)t^2/2). 8.11.5 E(e^{sW+tZ}) = E(exp(Σi (αi s + βi t)Xi )) = exp((1/2) Σi (αi s + βi t)^2), as above. This factorizes as required for the independence if and only if Σi αi βi = 0 (or in geometrical terms, α.β = 0). 8.11.6 E(e^{t(aX+bY)}) = exp(t^2(a^2 + 2ρab + b^2)/2). Hence, aX + bY is N (0, a^2 + 2ρab + b^2). 8.12.4 A triangle is feasible if U < V + W and V < U + W and W < U + V. In terms of X and Y, this gives (when X < Y ) the constraints X < 1/2, Y − X < 1/2, and Y > 1/2. A similar possibility arises when X > Y. Now a sketch of these two regions shows that they form two triangles with combined area 1/4, and this is the required probability because (X, Y ) is uniform on the unit square. 8.12.6 By symmetry, it is the same as X (1), namely n(1 − x)^{n−1}; 0 ≤ x ≤ 1. 8.12.7 By symmetry, this is the same as the joint density of X (1) and 1 − X (n). Now P(X (1) > x, 1 − X (n) > y) = (1 − x − y)^n, so f = n(n − 1)(1 − x − y)^{n−2}.
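The answer 1/4 in 8.12.4 is easy to confirm by simulation. Here is a minimal Monte Carlo sketch (an illustration only, not part of the text): break the unit interval at two independent uniform points and test whether the three pieces can form a triangle, i.e. whether no piece exceeds 1/2.

```python
import random

def triangle_probability(trials=200_000):
    hits = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        pieces = (min(x, y), abs(x - y), 1.0 - max(x, y))
        # A triangle is feasible iff each piece is less than the sum of the other two,
        # which for pieces summing to 1 means no piece is at least 1/2.
        if max(pieces) < 0.5:
            hits += 1
    return hits / trials

print(triangle_probability())   # close to 0.25
```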
8.12.8 Given neither point is on the diameter, the density of the angle they make at the midpoint of the diameter is given by (1) with a = π. Hence, the expected area in this case is π 2(π − x) π 2 1 2 sin xd x = 1 π. 0 Given one on the diameter and one not, they are jointly uniform on (0, π) × (−1, 1), so the expected area is π 2 0 1 π 1 2π 1 0 y sin x 2 d xd y = 1 2π. 2 π π + 2 + 1 2π 4π (π + 2)2 = 1 2 + π. Hence, the expected area is 8.13.4 The easy method uses (1) and (2) to see that P(Ac ∩ Bc) = 1 − P(A ∩ B) − P(A ∩ Bc) − P(Ac ∩ B), which gives the answer. The other method observes that Q can be divided into five regions in which (given C = (x, y)), P(Ac ∩ Bc) takes the values 0,1 − 2 π cos−1 y l,1 − 2 π cos−1 x l,1 − 2 π cos−1 x l − 2 π cos−1 y l respectively. Identify the regions and do the integrals. 8.13.5 The easy method allows b → ∞ in (2). You should also do it via an integral. 8.13.6 Draw a picture with no B-lines to see that the probability of an intersection is 8.13.7 for l ≤ a ∧ b. Evens if 2l 2 − 2(a + b)l + ab = 0, which implies that the min {l cos θ, a}dθ. π/2 1 πa 0 (a − l)(b − l) ab coin has diameter a + b − (a2 + b2) (2n + 3)−1 8.14.1 8.14.2 πn/(n + 1) 8.14.3 π(n − 1)/(n + 1) 8.14
.4 ∞ 1 2. = 0 λα+β uα+β−1e−λudu. Integrate (2) remembering that (1) is a density and (α + β) 504 Appendix 8.14.5 Let X and Y be independent with density x − 1 2 e−x. Then U = X + Y has density 1 1 2 −2 1 2 e−u u 0 v− 1 2 (u − v)− 1 2 dv = πe−u −2. 1 2 The result follows. 8.15.6 Set (1 + x 2)−1 = v and use Exercise 5. 8.16.4 P(N = n) = 1 − 1 a n 1 a with E(N ) = a − 1. Hence, a should be as small as (1) permits. 8.16.5 We choose a so that ae−x is as small as possible (since e−X is uniform). Hence, 1 1 a = sup x ex− 1 2 x 2 2 π 2 = sup x 2 π 2 e− 1 2 (x−1)2+ 1 2 = 1 2e π 2. Thus, we get a variable with density f S(x) if we set X = − log U1 whenever e 1 2 U1U2 < exp(−(log U1)2/2). 8.16.6 Now − log Ui is exponential with parameter 1, so P(X ≤ x|Y > 1 2 (X − 1)2) ∝ Hence, f X |A = 1 2 2 π e−x 2/2. ∞ x 0 1 2 (v−1)2 e−ye−vdydv ∝ x 0 v2 e− 1 2 dv. 8.17.3 Use independence of increments. 8.17.4 Conditional on N (24) = k, the k calls are independent and uniform over (0, 24). Given X = x and Y = y, a call at time U finds you in the shower if x < U < x + y, with probability y/24. Hence, P(a call at U finds you in the shower |N = k) = E(Y )/24 = p (say). Hence, the number Z of calls that finds you in the shower given
N = k is binomial with parameters k and p. Hence, E(s Z |N = k) = ( ps + 1 − p)k; hence, E(s Z ) = E(( ps + 1 − p)N ) = exp(24λ( ps + 1 − p)) = exp(λE(Y )(s − 1)). 8.17.5 Argue as in (4). Given N (t) = n, then R1 and R2 have a trinomial mass function with p.g.f. E(x R1 y R2 |N = n) = ( p1x + p2 y + 1 − p1 − p2)n. Hence, E(x R1 y R2 ) factorizes into two Poisson p.g.f.s. 8.17.6 λ min{s,t} 8.18.5 X R−1 is always the smallest of X 1,..., X R. So P(X R−1 ≥ x) = r − 1 r! × (1 − F(x))r = 1 − Fe1−F. Hence P(X R−1 ≤ x) = Fe1−F, with density f (x) = (1 − F(x)) exp(1 − F(x) f X (x). 8.18.6 T > n if and only if X 1 = X (n); by symmetry, therefore, P(T > n) = 1 n 1 n(n − 1), n ≥ 2. When T = n, X T = X (n), so r =2 ∞. Hence, P(T = n) = 1 n − 1 ∞ P(X T ≤ x) = = − 1 n (F(x))n (n − 1)n n=2, as required. 8.18.7 If X 1 represents your loss at some hazard and (Xr ; r ≥ 1) represents the losses of your successors, then the expected time until someone does worse than you is infinite. The argument is symmetrical, of course, but we do not feel so strongly about our good luck. 1 2 3 4 5 7 Appendix 505 Problems 2 b + 1 ≥ 0, 1 f is a density if 1 cov (X,Y ) = (1 − ab)/144. 1 2 g(z)du =
zg(z). (a) c = 1; (b) 2e−1; f Z (z) = (c). z. Independence is impossible. 0 (a) c = (2π)−1; (b) y 0 (a + y2)− 3 2 dy = y a(a + y2) 1 2, so X has a Cauchy density. f (x, y) = 4y3x(1 − x) for 0 < x < 1, 0 < y < 1 x 1 with density 6x(1 − x) by forming U 2 + V 2 ) and accepting it as a value of X if 2 + V U 6x(1 − x). Hence, you can simulate X ∧ 1 2 ≤ 1. 1 − x 2 /(U 1 1 1 1 9 10 E(esU +tV ) = E(e(s+t)X +(s−t)Y ) = exp + 1 2 Y (s − t)2 σ 2, which factorizes if σX = σY. µX (s + t) + 1 2 σ 2 X (s + t)2 + µY (s − t) 11 Given Z < 1, the point (2U − 1, 2V − 1) has the uniform density over the unit disc, namely, r π in polar coordinates. Thus, Z = R2 and 0 <! < 2π. Make the transformation X = (2 log R−2) − 1 r 2 = exp 2 2 cos!,Y = (2 log R−2), θ = tan−1 y (x 2 + y2) x 2 sin!, with inverse and J = 1 2 (x 2 + y2) − 1 4 exp. 1 1 The result follows. (i) zero for a < 1; 12 (ii) aµ λ + aµ for a ≥ 1. ye−x y/(1 − e−y2 ) 13 14 Let A be at the top of the melon; let angle AOB be θ, where O is the centre of the melon. Then the probability that all three remain is, when But fθ (θ) = 1 2 Hence, P (all three remain) = sin θ; 0 < θ < π. π (π − θ) sin �
� 4π dθ = 1 4π. π/2 Likewise, P (any one of the three remains) = 3P (given one remains) π 2 < θ < π, (π − θ )/(2π). π = 3 θ sin θ 4π 4π 16 P(U = X ) = P(Y > X ) = π/2 dθ = 3(π − 1) λ λ + µ.. 17 Use induction 1 18 2 (a) (X + Y ) by symmetry. (b) E(X |X + Y = V ) = 19 P(T ≥ j + 1) = P j i=1 σ 2 + ρσ τ σ 2 + 2ρσ τ + τ 2 V ; E(Y |X + Y = V ) = τ 2 + ρσ τ σ 2 + 2ρσ τ + τ 2 V Xi < 1 = p j (say). Trivially, p1 = 1 1!, p2 = 1 2!. Now p j is the volume of the “ j-dimensional pyramid” with apex O and corners (1, 0,..., 0), (0, 1,...,0), 506 Appendix etc. Because p j = E(T ) = 0 P(T ≥ j + 1) = e. 1 x j−1 p j−1d x, the result follows by induction. Finally, 20 A triangle is impossible if X 1 > X 2 + X 3. This has the same probability as by Problem 19. Two more similar constraints give 1 3, namely, P(triangle) = 1 − 3. 1 = 1. 2 3! 1 − x 21 P(n(1 − Mn) > x) = n n → e−x. 22 23 N E exp t Xi 1 = E E exp t Xi |N N 1 = E((Eet X 1 )N ) µ µ − t = (1 − p) = (1 − p)µ µ(1 − p) − t. 1 − pµ µ − t So Y is exponential with parameter µ(1 − p). E(C(t)X 1) = E(X 1E(C(t)|X 1)) ∞ utλe−λudu + = t = t 2e−λt + te
−λt λ 0 + 1 λ2 e−λ(t−u) t 1 u λ − te−λt λ − 1 λ − e−λt λ2 λe−λudu − 1 2 t 2e−λt. So cov (C(t), X 1) = 1 2 t 2e−λt. 24 Let the condition be A. Now we notice that P(X ≤ x, A) ∝ x x α−1e− α−1 α e− x α d x = x α−1e−x. x α e− α−1 ≤ x α Hence, the result is true by the rejection method Example 8.16, provided that α x ≤ 1 for x ≥ 0. Because α − 1 ≥ 0, this is equivalent to α−1 ex α log random variables, given a supply of uniform random variables. 25 Hint: (X 1,..., X n) has the same joint distributions as, (X n,..., X 1). 26 (a) Recalling Theorem 8.4.6 on quotients gives the required density as ∞ − 1, so the result holds. This clearly provides a method for simulating gamma 1 |u| u2 w2 1 2π exp − 1 2 u2 + u2 w2 −∞ (b) E(et X 1 X 2 ) = E(E(et X 1 X 2 |X 1)) = E(e 1 2 t 2 X 2 2 ) = du = 2.. 1 2πw2 ∞ 1√ 2π −∞ 1 (1 + 1/w2) e− 1 2 x 2(1−t 2)d x, as required. = (1 − t 2)− 1 2. Hence, Eet(X 1 X 2+X 3 X 4 where Y is exponential with parameter 1. The result followsE(etY ) + E(e−tY )), Appendix 507 28 Let V = max {U1,..., UY }. First, we notice that by conditioning on Y FV = P(max {U1,..., UY } ≤ v) = Now let us find the m.g.f. of Z, ∞ y=1 v y (e − 1)y! = ev − 1 e −
1. E(et Z ) = E(et X )E(e−t V ) = (e − 1)et−1 1 − et−1 1 e−tv ev e − 1 0 dv = 1 1 − t. Hence, Z is exponential with parameter 1. Alternatively, you can find FV directly. X Z has a beta density. 29 30 Consider their joint m.g.f. E(es X +t Z ) = E(e(s+tρ)X +t(1−ρ2) 35 f (x) = e−x. Calculate: E(etU (X 1+X 2)) = E E(etU (X 1+X 2)|U )E 1 (1 − U t)2 = 1 0 1 (1 − ut)2 du 1 2 Z ) = e 1 2 (s2+2ρst+t 2), as required. = 1 1 − t = E(etY ). C H A P T E R 9 Exercises satifies π = πP, and so 9.11.1 (a) Column sums are one, as well as row sums. Hence, πi = 1 8 µ0 = 8 = µV. (b) E(X ) = 1 by the same argument. (c) E(T ) is different. We use the following device. T is the sum of the M steps at which the walk moves to a different vertex and the steps at which it does not move. The number N of nonmoving steps before leaving O has the same expectation (and distribution) as the number at every other vertex on the way from O to V, so E(T ) = E(M)E(N ). By the example, E(M) = 1 + (α + β + γ ) × (α−1 + β −1 + γ −1), and it is easy to find that E(N ) = δ 1 − δ. Hence E(T ) = δ((α + β + γ )−1 + α−1 + β −1 + γ −1). 9.11.2 Consider a random walk on a unit square that takes x-steps with probability p and y-steps with probability q. Then if T is the first passage time from (0, 0) to (
1, 1), arguments similar to those of the example show that E(s T ) = UV (s) U (s), where and U (s) = 1 2 1 1 − s2 + 1 1 − ( p − q)2s2 UV (s) = s2 2 1 1 − s2 − ( p − q)2 1 − ( p − q)2s2, which yields E(T ) after some plod. More simply, by conditional expectation we have E(T ) = 1 + p(1 + pE(T )) + q(1 + qE(T )), which yields E(T ) = p−1 + q −1. If the walk can wait at vertices with probability r, then by the same device as used in (1), 508 Appendix we find E(T ) = r p problem with p = α, q = β, r = γ. + r q. Now we recognise that the question is equivalent to this 9.12.6 LHS = P(Ynr = k, Ynr −1 = k1,..., Yn1 = kr −1)/P(Ynr −1 = k1,..., Yn1 = kr −1) = P(X −nr P(X −nr −1 P(X −nr = k,..., X −n1 = k1,..., X −n1 = k|X −nr −1 )P(X −n1 = kr −1) = kr −1) = P(X −n1 = k|Ynr −1 ), = kr −1,... |X −nr −1 ) = P(Ynr = kr −1,... |X −nr −1 ) where we used (9.1.8) at the crucial step. In equilibrium, qi j = P(Y2 = j, Y1 = i)/P(Y1 = i) = P(X −1 = i|X −2 = j)P(X −2 = j)/P(Y1 = i) = p ji π j π −1. i 9.13.4 No for (a) because of periodicity. Yes for (b). 9.13.5 (a) E(X n+1|
X nb) E(X n(X 0) − βm α + β + βm α + β. 9.14.8 Because vn is a renewal sequence, there is a Markov chain Vn such that vn = P(Vn = 0|V0 = 0). Let Un and Vn be independent. Then ((Un,Vn); n ≥ 0) is a Markov chain and unvn = P((Un,Vn) = (0,0)|(U0,V0) = (0,0)), thus (unvn; n ≥ 0) is a renewal sequence. 9.14.9 Consider the chain (Und ; n ≥ 0). 9.14.10 If Bn > 0, then Bn − 1 = Bn+1, and if Bn = 0, then Bn+1 is the time to the next event, less the elapsed unit of time. Hence, B is a Markov chain with pi, i−1 = 1; i > 0 and p0 j = f X ( j + 1) = P(X = j + 1). Hence, a stationary distribution π satisfies π j = π j+1 + π0 f X ( j + 1), whence, with π(s) = Σi s^i πi, π(s) = π0 (G X (s) − 1)/(s − 1), and so if Σi πi = 1, π(s) = (1/E(X ))(1 − G X (s))/(1 − s) if E(X ) < ∞. Hence, πi = P(X > i)/E(X ). 9.14.11 The transition probabilities of U reversed are qi, i−1 = (πi−1/πi )(1 − Fi )/(1 − Fi−1) = 1; i > 0, and those of B reversed are qi, i+1 = (πi+1/πi ) · 1 = (1 − Fi+1)/(1 − Fi ); i ≥ 0. Hence, U reversed is B and B reversed is U. 9.15.1 Using (b) and (c) shows that j is persistent. 9.15.2 Follows from (b). 9.15.3 By assumption, pi j (n) > 0 and p ji (m) > 0 for some finite n and m. Hence, p j j (m + r + n) ≥ p ji (m) pii (r ) pi j (n). Now sum over r to get Σr p j j (m + r + n) = ∞ if Σr pii (r ) = ∞. So if i is persistent so is j. Interchange the roles of i and j. If j has period t, let r = 0 to find that when p j j (m + n) > 0, m + n is a multiple of t. Hence, the right-hand side is nonzero only when r is a multiple of t, so i has period t.
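The stationary distribution πi = P(X > i)/E(X) found in 9.14.10 above is easily confirmed numerically. The sketch below is an illustration only (the lifetime distribution on {1, ..., 5} is an arbitrary choice, not from the book): it builds the transition matrix of the chain B and checks that π = πP.

```python
import numpy as np

# Hypothetical lifetime distribution on {1, ..., 5}: f[j] = P(X = j + 1).
f = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
EX = np.arange(1, 6) @ f                          # E(X)
n = len(f)

# Transition matrix of the chain B in 9.14.10: from 0 jump to X - 1,
# otherwise step down by one.
P = np.zeros((n, n))
P[0, :] = f                                       # p_{0j} = P(X = j + 1)
P[np.arange(1, n), np.arange(n - 1)] = 1.0        # p_{i,i-1} = 1 for i > 0

pi = np.array([f[i:].sum() for i in range(n)]) / EX   # pi_i = P(X > i)/E(X)

print(np.allclose(pi @ P, pi), np.isclose(pi.sum(), 1.0))   # True True
```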
9.16.8 With HTH = 1, HHH = 2, we have p12(1) = 0, p12(2) = p2 = 1 4 p22(1) = 0 = p21(1) = p11(1); p22(1) = p = 1 2 = p22(2) = p21(2) = p11(2). Hence, and µ12 = = 12 µ21 = (1 + p2 − p2)8 = 8. 9.16.9 We set HHH = 1, HTH = 2, then 9.16.10 Here TTH = 1, HHH = 2. Calculate µs1 = 8. Now φs1 = 10 + 12 − 14 8 + 12 = 3 10. p11(1) = p11(2) = p21(1) = p21(2) = p12(1) = 0, p22(2) = 1. 4, p22(1) = 1 2 p12(2) = 1 4 Hence, φs1 = 14 + 8 − 8 12 + 8 = 7 10. 9.17.4 P(T1 < ∞) = P(X (t) > 0) = 1 − exp(−"(t)). Now let t → ∞. t 9.17.5 E(s X (t)) = exp → ez−1 = e−1 λe−λudu (z − 1) zk(k!)−1, and we use the continuity theorem for p.g.f.s. 0 9.17.6 By conditional expectation E(eθ Z ) = E(E(eθ Z |X ))
= E((Eeθ Y )X ). You can also get this using forward equations with a lot more work. 510 9.17.7 E(z X ) = E(E(z X |Y )) = E Appendix " (z − 1) exp # t 0 Y (u)du t G X (z)|z=1 = M (0) = X (1))2 = M (0) − (M (0))2 + M (0) E(Y (u))du. 0 G X (1) + G X (1) − (G 9.18.2 λ µ (z − 1)(1 − e−µt ) 9.18.3 The forward equations are: exp t 0 Y (u)du + t 0 E(Y (u))du. = var → exp λ µ (z − 1) p n(t) = λpn−1(t) + µ(n + 1) pn+1(t) − (λ + µn) pn(t). Set G = 9.18.4 πk = 1 2(k!) sn pn to get 1 + kµ λ ∂G ∂t λ µ = (s − 1) k − λ µ exp ∂G ∂s λG − µ. 9.19.10 E(s X (t)) = e−δt + t δe−δx exp ν µ (s − 1)(1 − e−µx ) 0. d x, where ν is the immigration rate, µ the death rate and δ the disaster rate. Hence, E(s X (t)) = lim t→∞ 1 δ µ exp 0 ν µ (s − 1)(1 − y) y(δ/µ)−1dy, where we have set e−µx = y in the integrand. Differentiating with respect to s and setting s = 1 gives the stationary mean ν/(δ + µ). 9.20.9 The forward equations are = λn−1 pn−1(t) − λn pn(t). dpn(t) dt ∞ From (3), we have fn+1(t) = ∞ Mn+1(θ) = λn n+1 pn
(t)eθt dt, as required. 0 p k(t) = λn pn(t), so 9.20.10 Use (5) and partial fractions. 9.21.3 η = lim P(X (t) = 0) = lim t→∞ t→∞ 9.21.4 P(T > t) = 1 − G(0, t). If λ < µ then extinction is certain, so G(0, t) E(T ) = P(T > t)dt = 0 (µ − λ) exp((λ − µ)t) µ − λ exp((λ − µ)t) dt = 1 λ log µ µ − λ. ∞ However, if λ > µ, then P(T < ∞) = η = µ λ 1 − P(T ≤ t) η, so dt = ∞ 0 1 − G(0, t) η dt (λ − µ) exp((µ − λ)t) λ − µ exp((µ − λ)t) dt = 1 µ log λ λ − µ. E(T |T < ∞) = ∞ = 0 Appendix 511 9.21.5 Condition on events in (0, h) (and use the fact that if the first individual splits the replacements act independently) to get η(t + h) = 1.µh + η(t)(1 − λh − µh) + (η(t))2λh + o(h). Rewrite the equation as dη (1 − η)(µ − λη) η(t) η(s) 9.21.6 E(s X (t)|X (t) > 0) = G(s, t) − G(0, t) expected. P(X (t) = 0|X (s) = 0) = 1 − G(0, t). → (µ − λ)s µ − λs. = dt and integrate to get η(t) = G(0, t), as 9.21.7 pi, i+1(h) = (ν + λi)h, pi, i−1(
h) = µi h pii (h) = 1 − (ν + λi)h − µi h + o(h). Hence, dpn dt = µ(n + 1) pn+1 − (ν + (λ + µ)n) pn + (ν + λ(n − 1)) pn−1. Setting dpn dt = 0, you can check that the given πk satisfies the resulting equations. Problems 1 (α) X n is, with pi j = P(X 1 = j). (β) Sn is, with pi j = P(X 1 = j − i). (γ ) Mn is, with pi j = P(X 1 = j); j > i, pii = P(X 1 ≤ i). (δ) L n is, with pi j = P(X 1 = j), j < i, pii = P(X 1 ≥ i). () Kn is not a Markov chain. 2 X is persistent; S is transient unless P(X 1 = 0) = 1; M is absorbing if X 1 is bounded, transient otherwise; L is absorbing. 3 Only if the period is 2. 4 5 Check that P(X k+1 = j|X 0,X 1,...,X k,X n) = P(X k+1 = j|X k,X n) by expanding the (a) Not necessarily if X and Y are periodic. conditional probabilities. 6 These all work by expanding the appropriate conditional probabilities and rearranging 7 them. It is Markov because coins are independent. Then pi, i+1 = p; 0 ≤ i ≤ 8, p90 = p, pii = q; πi = 10−1. n n 10 ρnun = ρn−kun−kρk fk, so vn satisfies vn = 1 1 renewal sequence provided vn = P(X n = s) → πs as n → ∞. Hence, ρnun → πs = c. 11 Zn = (X n,X n+1,...,X n+m−1) is also a persistent chain. 13 The Markov property is preserved at each first passage time. f ∗ k vn−k with f ∗ k =
ρk fk. Hence, it is a ρn fn = 1. It follows that there is a Markov chain such that 14 In the obvious notation, we require Q2 = 1 4 (PX + PY )2 shows that this requires (PX − PY )2 = 0. Hence, W is Markov if PX = PY. 17 No. Pick j = s = i, then P(X n+1 = j|X n = s,X n−1 = i) = pi j (2) = P(X n+1 = j|X n = s). 18 The lack-of-memory of the geometric distribution means that X is a Markov chain. If X n = i then X n+1 is the survivors Sn of X n plus the new arrivals. The probability that a geometric lifetime survives one step is p = 1 − q, so Sn is binomial with parameters X n and p. (PX + PY )2 = Q(2); multiplying out 512 Appendix Hence pi j = P(Sn + Yn = j). In equilibrium X n+1 and X n have the same distribution, so E(s X n+1 ) = G(s) = E(s Sn +Yn ) = E(sY n)E(E(s Sn|X n)) = E(sY n)E(( ps + 1 − p)X n ) = eλ(s−1)G( ps + 1 − p) = eλ/q(s−1) after a simple induction. So the stationary distribution is Poisson with parameter λ/q, just after the fresh particles. The stationary distribution just before the fresh particles is Poisson with parameter λp/q. 19 We seek a stationary distribution that must satisfy π j = π0 f j + j+1 i=1 πi f j−i+1 for j ≥ 0. The sum on the right is nearly a convolution, so we introduce π(s) = G(s) = fi si to get πi si and Hence, π(s) = π0G(s) + 1 s (π(s) − π0)G(s). π(s) = π0(s − 1)G(s) s − G(s), which has π(1) =
1 if π0 > 0 and G(1) < 1. 20 Seek a stationary distribution that satisfies πk = ∞ j=k−1 π j f j−k+1 for k ≥ 1, with π0 = ∞ ∞ π j j=0 i= j+1 fi. θ r fr = G(θ). In an optimistic spirit, we seek a solution of the form π j = (1 − θ )θ j, θ k = ∞ j=k−1 θ j f j−k+1 = ∞ r =0 θ k+r −1 fr = θ k−1G(θ ), Let giving and ∞ θ j θ 1 = fi = 1 − G(θ ) 1 − θ. i= j+1 These both reduce to G(θ) = 0. Hence, π j exists if G(θ ) = θ has a root less than 1. By convexity, it does if G(1) > 1, as required. j=0 22 Recalling Example 9.3.17, we define the event A(t) as we did there; that is, as the event that the chain follows a path consistent with T = t. Then the condition of the problem can be rewritten as P(T = t, A(t), X t = j) = zero or P(X t = j, A(t)). Now the proof follows exactly the same route as Example 9.3.17. 23 π must satisfy π j = pπ j−2 + qπ j+1 with π0 = qπ0 + qπ1 and π1 = qπ2. Hence, sπ(s) = ps3π(s) + q(π(s) − π0) + qsπ0. This gives π(s) = q(s − 1)π0 s − q − ps3 π j ≥ 0 and π(1) = 1. ; now insist that 24 (a) P(X (0) = 1|X (−τ ) = i, X (τ ) = j) Appendix 513 = P(X (−τ ) = i,X (0) = 1,X (τ ) = j)/P(X
(−τ ) = i,X (τ ) = j) = P(X (−τ ) = i) pi1(τ ) p1 j (τ ) P(X (−τ ) = i) pi j (2τ ) (b) P(X (s + t) = X (s) = 1) − P(X (s) = 1)P(X (s + t) = 1) as s → ∞. (a) Yes, with parameter λ + µ; 25 26 "(s) 28 Let A(t) be any event defined by X (s) for s ≤ t. Then by the Markov property π1π j π j (b) No. →. (c) → π1( p11(t) − π1), P(Yn+1 = k|Yn = j,Tn = t,A(Tn)) = P(Yn+1 = k|Yn = j,Tn = t). Hence, ∞ P(Yn+1 = j|Yn = i,A(Tn)) = P(Yn+1 = j|Yn = i,Tn = t) fTn (t)dt 0 ∞ = P(Yn+1 = j|Yn = i), = pi j (t)λe−λt dt = qi j, say. so Y is Markov Then πi qi j = 0 πi pi j (t)λe−λt dt = 29 E(X (t)) = I e(λ−µ)t var (X (t)) = π j λe−λt dt = π j, as required. 2I λt(λ−µ)t (e(λ−µ)t − 1); λ = µ. Further Reading There are many attractive books on probability. To compile a list as short as this requires regrettably ruthless selection from their number. Intermediate Probability If you want to read further at an intermediate level, then high on your list should be the classic text: Feller, W. (1968) An introduction to probability theory and its applications, Vol. I (3rd edn.) John Wiley, New York. Other books at this level or a little above include: Grimmett, G.R.
and Stirzaker, D.R. (2001) Probability and random processes (3rd edn.) Clarendon Press, Oxford.
Grimmett, G.R. and Stirzaker, D.R. (2001) One thousand exercises in probability, Clarendon Press, Oxford.
Ross, S.M. (2003) Introduction to probability models (8th edn.) Academic Press, Orlando.
Combinatorics
A classic text on combinatorics for probabilists is:
Whitworth, W.A. (1901) Choice and Chance (5th edn.) reprinted 1948 by Hafner, New York.
Recent introductions include:
Anderson, I. (1974) A first course in combinatorial mathematics, Clarendon Press, Oxford.
Hall, M. (1967) Combinatorial theory, Blaisdell, Waltham, Mass.
Slomson, A. (1991) An introduction to combinatorics, Chapman and Hall, London.
Advanced Probability
To advance in probability requires the student to plunge into measure theory. Excellent texts at this level include:
Billingsley, P. (1995) Probability and measure (3rd edn.) John Wiley, New York.
Dudley, R.M. (1989) Real analysis and probability, Wadsworth, Belmont, Calif.
Durrett, R. (1996) Probability: theory and examples (2nd edn.) Duxbury, Belmont, Calif.
Feller, W. (1971) An introduction to probability theory and its applications, Vol II (2nd edn.) John Wiley, New York.
Laha, R.G. and Rohatgi, V.K. (1979) Probability theory, John Wiley, New York.
Shiryayev, A.N. (1984) Probability, Springer, New York.
Williams, D. (1991) Probability with martingales, Cambridge University Press.
Markov Chains and Other Random Processes
Most of the above books contain much material on Markov chains and other random processes at their own levels. However, mention should be made of the classic text:
Doob, J.L. (1953) Stochastic processes, John Wiley, New York.
History
Finally, if you wish to find out more about the origins of probability read:
Hald, A. (1990) A history of probability and statistics and their applications before 1750, John Wiley, New York.
Index of Notation
−1 indices maximum
–74, 87 disasters, 463–465 family planning, 44–45 forward eqns, 428–431 general, 465–466 bivariate generating fns, 90 bivariate normal density, 342, 348–349, 352, 359–360, 373–376, 440 bivariate rejection, 392 Black-Scholes formula, 472–473 bookmaker example, 448–449 Boole’s inequalities, 36, 39, 50 bounded convergence thm, 201 boys and girls, 8, 73–74, 87 branching pr family tree, 243–244 geometric, 272–273 Markov chains, 397 martingales for, 278 517 calculus, 265–267 call option, 448, 472–473 Camelot, 111 cans without labels, 153 capture-recapture, 216–217 cardinality (size), 19, 29 cards, 51–53, 114, 160–161, 178, 179 Carroll, Lewis, 81–82, 394 Cauchy-convergent, 200 Cauchy density, 295–296, 304, 320, 391 Cauchy distn, 295–296 Cauchy-Schwarz inequality, 170–171, 211, 352 central joint moments, 167 Central Limit Theorem (CLT), 370, 388–389, 394 central moments, 123, 124–125, 135 certain events, 26 chain. See Markov chain in continuous time; Markov chain in discrete time chance, examples of, 1–3 change of variables technique, 342–344, 361, 381 Chapman-Kolmogorov eqns, 402, 426, 428, 439, 449–450 characteristic fns, 390–391 Chebyshov-Hermite polynomials, 324 Chebyshov’s inequality, 131–132, 156, 306, 370 chi-squared density, 297, 299 518 Index closed state, 405 coats, 223–224 coincidences, 12–13, 258 coin tosses constant of proportionality, 5 continuity, in calculus, 265–266 continuity thm, 251, 308 continuous partition rule, 361, 373, Bernoulli patterns, 275–276, 376–377 459–460 conditional distn, 129–130 fair, 190–191 generating fns, 235–236, 242–243, 278 geometric distn, 124–125, 175–176 independence, 57–58 joint distn, 161–162 by machine, 389 M
otto, 476 Murphy’s law in, 46–47 notation, 27 runs, 103–104, 111, 275–276 simulations, 301 visits of a rw, 219 colouring, 106 combinations, 86–87 complacency example, 67–68 complements, 17–18, 26, 39 composition, 302 compound Poisson pr, 462 conditional density, 310–312, 319, 355–361, 365–366, 373, 440 conditional distn, 127–130, 135–136, 310–312, 319, 356, 373 conditional entropy, 214, 405 conditional expectation, 128–129, 136, 177–183, 204–205, 311, 357, 373 conditional Gambler’s Ruin, 70–72, 180–181, 230–231 conditional independence, 57–58, 65, 177–183, 205 conditional mass fns, 116–121, 127–128, 178, 211–212 conditional mgf, 359–360 conditional probability, 51–82 Bayes’s thm, 54–55 conditional distn, 127–130 definition, 52 generating fns, 235 independence and, 57–60 overview, 51–57 rv, 127–130 rw, 189 recurrence and difference eqns, 60–62 repellent and attractive events, 56–57 conditional property of Poisson pr, 365–366 conditional rw, 189 conditional variance, 205, 440 conditional Wiener pr, 440 conditioning rule, 64, 135–136, 319 congregations, 210–211 continuous rv, 287–336. See also jointly continuous rv ageing and survival, 312–314 conditional distn, 310–312 density, 291–297 discrete approximation, 300–301 expectation, 302–306, 319 fns of, 297–301, 318–319 inverse fns, 299–300 mgf, 306–310, 319 normal distn, 296, 299, 303, 306–310, 320, 323–324 random points, 315–318 step fns, 300 stochastic ordering, 314–315 uniform distn, 287–290, 297–299 continuous set fns, 37 continuous time. See Markov chain in continuous time convergence, 22, 199–203, 265 convexity, 132–133 convolutions, 91, 175, 257, 411 cooperation, 214–215 correlation coefficient, 170, 172, 204, 206–207, 352,
373 countable additivity, 34–35 countable sets, 27 countable union, 27 counting, 83–113 coin tosses, 103–104 colouring, 106 combinations, 86–87, 95 derangements, 88, 96 dice example, 84 first principles, 83–84 generating fns, 90–93, 95 Genoese Lottery, 98–99 identity example, 102–103 inclusion-exclusion, 87–88 lottery examples, 98–101 matching, 107–108, 153, 173 M´enages problem, 101–102 permutations, 84–86, 95 railway trains, 97–98 recurrence relations, 88–89 ringing birds, 99–100 techniques, 93–95 couples survival, 208–209 coupling technique, 419–420 coupon collectors, 92–93, 110, 126–127, 156–157, 166, 386 covariance, 167–169, 204, 248, 352–353 craps, 45–46, 59–60, 82 Crofton, M. W., 315, 317, 376 crossing the road, 277 cubes, rw on, 417, 451–452 cumulant generating fns, 247, 271 cumulative distn fn, 117–118 cups and saucers, 42 current life or age, 382–383 cutting for the deal, 160–161, 168–169, 178, 179 dart throws, 26, 115 decay, 462–463 decomposition thm, 425 defective rv, 239, 244 delayed renewal, 256–258 de Moivre, Abraham, 15, 20, 137, 334 de Moivre-Laplace thm, 309, 333–334 de Moivre’s thm, 122 de Moivre trials, 247–248 de Morgan’s laws, 37–38 density. See also distribution beta, 334 bivariate normal, 342, 348–349, 352, 359–360, 373–376 calculus, 266 Cauchy, 295–296, 304, 320, 391 chi-squared, 297, 299 conditional, 310–312, 319, 355–361, 365–366, 373, 440 definition, 291, 318 expectation and, 304–305, 355–361 exponential, 292–294, 297–298, 303, 311–312, 314, 320 gamma, 297, 299, 307, 320, 380–381 independence and, 344–345, 347 joint.
See joint density Laplace, 320 marginal, 341, 356, 371 mixtures, 297, 318, 335 mgf and, 306–310 multinormal, 373–374 multivariate normal, 373–374, 394 normal. See normal density order statistics, 363–364 Pareto, 303 standard normal, 296, 299 sums, products, and quotients, 348–351 triangular, 228 trinormal, 373–374, 387–388 two-sided exponential, 294–295, 297 uniform. See uniform density Weibull, 313, 322 dependence, 169–171, 177–183 derangements, 88, 96, 113 derivatives, 266 detailed balance eqns, 454 determinism, 11 “De Vetula,” 6, 14 dice cooperation, 214–215 counting principles and, 84 craps, 45–46, 59–60, 82 dodecahedral, 270 models for, 6, 25–26, 34, 40–41 regeneration, 255 sixes, 43–44, 182 tetrahedral, 270 unorthodox numbers on, 269–271 weighted, 7–8, 82, 270–271 difference, 27, 39 differentiable fns, 266, 267 diffusion models, 454–455 disasters, 463–465 discrete approximation, 300–301 discrete rv, 114 discrete renewal pr, 256 disjoint events, 17, 29, 63 distribution beta, 334 binomial. See binomial distn binormal, 387–388 Cauchy, 295–296 chi-squared, 297, 299 conditional, 127–130, 135–136, 310–312, 319, 373 continuous rv, 287–290 convergence, 199–200 in counting, 83–84 cumulative, 117–118 current life, 382–383 defective, 239, 244 discrete, 114 excess life, 382–383 expectation, 302–306 exponential, 292–294, 297–298, 303, 311–312, 314, 320 fns, 117–118 gamma, 297 Gaussian. See normal distn geometric, 124–125, 134, 137, 217, 292 hypergeometric, 134 joint, 158–162, 337–342 logarithmic, 282 marginal, 341, 356, 371 moments, 306 negative binomial, 117–118, 134 negative hypergeometric, 217 normal. See normal distn Poisson, 117, 130–131, 139–141, 145, 364–368, 389 rv, 115–
120 sample space and, 287–288 sequences of, 130–131 standard trivariate normal, 373–374 stationary, 412–418, 434–435, 436, 449–450, 454–455 triangular, 228 trinormal, 374, 387–388 Index 519 two-sided exponential, 294–295, 297 uniform, 123, 178, 288, 291, 338–339, 345–346 dog bites, 139–141, 143–144 dogfight example, 68–69 dominated convergence, 201 doubly stochastic matrices, 400, 417 doubly stochastic Poisson pr, 462 drifting Wiener pr, 442, 468–469 duelling, 147–149 Eddington’s controversy, 75–76 eggs, 180, 249–250 Ehrenfest model for diffusion, 455 eigenvectors, 413 Einstein, Albert, 438 entropy, 150–151, 212–214, 405, 421 epidemics, 301–302 equilibrium, of Markov chains, 413, 431–436 estimation of mass fns, 199 European call option, 448, 472–473 events attractive, 56–57 certain, 26 definition, 25, 39 disjoint, 17, 29, 63 event space, 31 impossible, 26 independence of, 57–60 recurrent, 256, 258, 457 repellent and attractive, 56–57 sequences of, 36–37 useful identities, 28 event space, 31 excess life, 382–383 expectation, 120–127 conditional, 128–129, 136, 177–183, 204–205, 311, 373 conditional density and, 355–361 continuous rv, 302–306, 319 defective, 239, 244 definition, 120, 135 densities and, 302–306, 355–361 fns of rv, 120–123, 205, 318–319 joint distn, 165–172 jointly continuous rv, 351–360, 372 tail integral, 305 tail sums, 156 experiments, 24, 26 exponential characteristic fns, 390–391 exponential density, 292–294, 297–298, 303, 311–312, 314, 320 exponential distn, two-sided, 294–295, 297 exponential fns, 91 exponential generating fns, 90, 92–93, 95, 100, 308 exponential limit thm, 23 exponential variable, von Neumann’s, 383–385 exponential Wi