Let $S_n = \sum_{r=1}^{n} X_r$, where the $X_r$ are independent, and define
$$M_1 = \frac{S_n}{n}, \quad M_2 = \frac{S_{n-1}}{n-1}, \quad \ldots, \quad M_n = \frac{X_1}{1},$$
which is to say that $M_m = S_{n-m+1}/(n-m+1)$. Then obviously $E|M_m| < \infty$. Also, by symmetry, for $1 \le r \le m$, $E(X_r \mid S_m) = S_m/m$. Hence, also by symmetry,
$$E(M_{m+1} \mid M_1, \ldots, M_m) = E\left(\frac{S_{n-m}}{n-m} \,\Big|\, S_{n-m+1}\right) = \frac{S_{n-m+1}}{n-m+1} = M_m.$$
Therefore, $(M_m; 1 \le m \le n)$ is a martingale, and so the sequence $Y_r = S_r/r$, $1 \le r \le n$, is called a backward martingale, for obvious reasons.

It is often convenient to make a slightly more general definition of a martingale.

(19) Definition   A sequence $(S_n; n \ge 0)$ is a martingale with respect to the sequence $(X_n; n \ge 0)$ if $E|S_n| < \infty$ and
$$E(S_{n+1} \mid X_0, \ldots, X_n) = S_n \quad \text{for all } n.$$
In fact, we often omit any reference to the underlying sequence $(X_n; n \ge 0)$, but simply say that $(S_n; n \ge 0)$ is a martingale.

Finally, we note that there are appropriate optional stopping theorems for sub-, super-, and backward martingales also. For example, if $X_n$ is a nonnegative supermartingale and $T$ is a stopping time for $X_n$, then $EX_T \le EX_0$. We do not pursue these matters further here.

5.8 The Law of Averages

Suppose that, as a result of some experiment, the event $A$ occurs with probability $p$ (or the event $A^c$ occurs with probability $1 - p$). Typically, $A$ might be an event such as "the patient was cured," "the dart hit the target," or "the molecule split." Let $N_n$ be the number of times $A$ occurs in $n$ independent repetitions of this experiment. Now we have shown that $N_n$ is a binomial random variable, and in Example 4.17 we proved that, for $\epsilon > 0$,
(1) $$P\left(\left|\frac{N_n}{n} - p\right| > \epsilon\right) \le 2\exp(-n\epsilon^2/4) \to 0 \quad \text{as } n \to \infty.$$
Roughly speaking, this says that the proportion of experiments in which $A$ occurs approaches the probability of $A$ as $n$ increases. (It is pleasing that this agrees with our intuitive notions about events and their probabilities.)

We now develop this simple idea a little. Statements such as (1) are common in probability, so we make a formal definition.

(2) Definition   Let $(X_n; n \ge 1)$ be a sequence of random variables. We say the sequence $X_n$ converges in probability to $X$ if, for any $\epsilon > 0$, as $n \to \infty$,
(3) $$P(|X_n - X| > \epsilon) \to 0.$$
For brevity, this is often written as $X_n \xrightarrow{P} X$. In this notation, (1) becomes
(4) $$\frac{1}{n}N_n \xrightarrow{P} p.$$
Here the limit $p$ is a constant random variable, of course.

In Section 5.5 above, we observed that we could write $N_n = S_n = \sum_{k=1}^{n} I_k$, where $I_k$ is the indicator of the event that $A$ occurs in the $k$th experiment, and $E(I_k) = p$. In the notation of Definition 2, we can thus write (4) as
(5) $$\frac{1}{n}S_n \xrightarrow{P} E(I_1).$$
It is natural to wonder whether this result may also hold for sequences other than indicators. The following celebrated result shows that in many cases it does. It is customary to write $S_n = \sum_{i=1}^{n} X_i$ for the partial sums of the $X_i$.

(6) Theorem: Weak Law of Large Numbers   Let $(X_n; n \ge 1)$ be a sequence of independent random variables having the same finite mean and variance, $\mu = E(X_1)$ and $\sigma^2 = \mathrm{var}(X_1)$. Then, as $n \to \infty$,
$$\frac{1}{n}(X_1 + \cdots + X_n) \xrightarrow{P} \mu.$$

(7) Proof   Recall Chebyshov's inequality: for any random variable $Y$ and $\epsilon > 0$, $P(|Y| > \epsilon) \le E(Y^2)\epsilon^{-2}$. Hence, letting $Y = n^{-1}(S_n - n\mu)$, we have
$$P\left(\left|\frac{1}{n}S_n - \mu\right| > \epsilon\right) \le \frac{1}{n^2\epsilon^2}E\left(\sum_{i=1}^{n}(X_i - \mu)\right)^2 = n^{-2}\epsilon^{-2}\sum_{i=1}^{n}\mathrm{var}(X_i) = \sigma^2/(n\epsilon^2) \to 0 \quad \text{as } n \to \infty,$$
and (6) is proved.
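The effect described by the weak law is easy to see numerically. The following is a minimal simulation sketch (in Python; the parameter $p = 0.3$, the checkpoints, and the seed are arbitrary choices made for illustration, not taken from the text): it tracks the proportion of successes $S_n/n$ in a run of Bernoulli trials.

```python
import random

def running_mean_of_indicators(p, n, seed=1):
    """Simulate n Bernoulli(p) trials and record S_k / k at a few checkpoints."""
    rng = random.Random(seed)
    s = 0
    checkpoints = {}
    for k in range(1, n + 1):
        s += 1 if rng.random() < p else 0
        if k in (10, 100, 1000, 10000):
            checkpoints[k] = s / k
    return checkpoints

print(running_mean_of_indicators(p=0.3, n=10_000))
# The recorded proportions drift toward 0.3 as n increases,
# in line with (1/n) S_n  ->P  E(I_1) = p.
```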
Actually, when $S_n$ is a binomial random variable, we have already shown in Exercise 4.18.3 that more can be said.

(8) Theorem   If $(X_i; i \ge 1)$ are independent indicator random variables with $E(X_i) = p$, then as $n \to \infty$, for $0 < \epsilon < 1$,
$$P\left(\left|\frac{1}{m}S_m - p\right| > \epsilon \ \text{for any } m \ge n\right) \to 0.$$

Proof   Remember that for any events $(A_i; i \ge 1)$, we have $P\left(\bigcup_i A_i\right) \le \sum_i P(A_i)$. It follows [using (1)] that
$$P\left(\left|\frac{1}{m}S_m - p\right| > \epsilon \ \text{for any } m \ge n\right) \le \sum_{m=n}^{\infty} 2e^{-m\epsilon^2/4} = 2e^{-n\epsilon^2/4}\left(1 - e^{-\epsilon^2/4}\right)^{-1} \to 0 \quad \text{as } n \to \infty.$$

Roughly speaking, this says that not only does the chance of finding $n^{-1}S_n$ far from $p$ vanish, but also the chance that any of $(m^{-1}S_m; m \ge n)$ are far from $p$ vanishes as $n \to \infty$. Results of this type are called strong laws of large numbers.

Results like those above go some way toward justifying our intuitive feelings about averages in the long run. Such laws of large numbers also have other applications; the following two examples are typical.

(9) Example: Monte Carlo Integration   Suppose $f(x)$ is a nonnegative function, and we require $I = \int_a^b f(x)\,dx$. If $f$ is sufficiently nasty to defy basic methods, a surprisingly effective method of finding $I$ is as follows.

Let $R$ be the rectangle $\{(x, y) : a \le x \le b,\ c \le y \le d\}$, where $0 \le c < f(x) < d$ for $a \le x < b$. The curve $y = f(x)$ divides $R$ into two disjoint regions, $A$ lying above $f$ and $B$ lying below $f$. Now pick a point $P_1$ at random in $R$, by which we mean uniformly in $R$. Then
(10) $$P(P_1 \in B) = \frac{1}{|R|}\int_a^b f(x)\,dx = p, \text{ say.}$$
Now we pick a series $(P_j; j \ge 1)$ of such points independently in $R$, and let $I_j$ be the indicator of the event that $P_j$ lies in $B$. Then, by the weak law above,
$$\frac{1}{n}\sum_{j=1}^{n} I_j \xrightarrow{P} p, \quad \text{as } n \to \infty.$$
Hence, for large $n$, we may expect that $(|R|/n)\sum_{j=1}^{n} I_j$ is a reasonable approximation to $I$, and that it improves as $n$ increases. (We have glossed over one or two details in this simple account; more discussion is provided in Chapter 7.)
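To illustrate Example 9, here is a minimal hit-or-miss sketch (in Python; the integrand $f(x) = x^2$ on $[0, 1]$, with $c = 0$, $d = 1$, and the sample size, are assumptions made purely for illustration): it estimates $\int_0^1 x^2\,dx = 1/3$ by counting the fraction of uniform points in $R$ that fall below the curve.

```python
import random

def hit_or_miss(f, a, b, d, n, seed=2):
    """Estimate the integral of f over [a, b] by sampling n uniform points
    in the rectangle [a, b] x [0, d] and counting those falling below the curve."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = a + (b - a) * rng.random()
        y = d * rng.random()
        if y < f(x):
            hits += 1
    area_R = (b - a) * d
    return area_R * hits / n

print(hit_or_miss(lambda x: x * x, a=0.0, b=1.0, d=1.0, n=100_000))
# Prints a value close to 1/3; the approximation improves (in probability) as n grows.
```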
Here is a similar example.

(11) Example: Estimation of Mass Functions   Let $(X_i; i \ge 1)$ be independent and identically distributed with an unknown mass function $f(x)$. Suppose we want to know $f(x)$ for some given $x$. Let $I_k$ be the indicator of the event that $X_k = x$. Obviously, $E(I_k) = f(x)$, so by the weak law
$$\frac{1}{n}S_n = \frac{1}{n}\sum_{k=1}^{n} I_k \xrightarrow{P} f(x).$$
Thus, $n^{-1}S_n$ should be a good guess at $f(x)$ for large $n$.
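A minimal sketch of this estimator (in Python; the fair-die distribution and the sample size below are illustrative assumptions, not taken from the text): count how often each value occurs and divide by $n$.

```python
import random
from collections import Counter

def estimate_pmf(sample):
    """Empirical mass function: relative frequency of each observed value."""
    n = len(sample)
    counts = Counter(sample)
    return {x: c / n for x, c in sorted(counts.items())}

rng = random.Random(3)
sample = [rng.randint(1, 6) for _ in range(60_000)]   # fair six-sided die
print(estimate_pmf(sample))
# Each estimated probability is close to 1/6, and gets closer as n increases.
```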
Notice that in both these cases, we applied the weak law of large numbers (WLLN) to a binomial random variable arising as a sum of indicators. Despite its simplicity, this is an important special case, so we return to Theorem 6 and note that the proof shows something a little stronger than the WLLN. In fact, in using Chebyshov's inequality, we showed that, as $n \to \infty$,
(12) $$E\left(\frac{S_n}{n} - \mu\right)^2 \to 0.$$
This type of statement is also widespread in probability and warrants a formal emphasis.

(13) Definition   Let $X_1, X_2, \ldots$ be a sequence of random variables. If there is a random variable $X$ such that
$$\lim_{n\to\infty} E(X_n - X)^2 = 0,$$
then $X_n$ is said to converge in mean square to $X$. We sometimes write this as $X_n \xrightarrow{m.s.} X$.

5.9 Convergence

(Although important, this section may be omitted at a first reading.)

In the preceding section and at various earlier times, we introduced several ideas about the long-run behaviour of random variables and their distributions. This seems an appropriate moment to point out the connections between these concepts. First, we recall our earlier definitions. Here $(X_n; n \ge 1)$ is a sequence of random variables with corresponding distributions $(F_n(x); n \ge 1)$. Also, $F(x)$ is the distribution of a random variable $X$. Then, as $n \to \infty$, we say:

(1) $X_n$ converges in distribution if $F_n(x) \to F(x)$ wherever $F(x)$ is continuous.
(2) $X_n$ converges in probability if $P(|X_n - X| > \epsilon) \to 0$, for any $\epsilon > 0$.
(3) $X_n$ converges in mean square if $E(|X_n - X|^2) \to 0$, where $E(X_n^2), E(X^2) < \infty$.

These are clearly not equivalent statements. For example, let $X$ be integer valued and symmetrically distributed about zero, and then let $Y = -X$. By symmetry, $X$ and $Y$ have the same distribution, so $|F_Y(x) - F_X(x)| = 0$. However,
$$P(|X - Y| > \epsilon) = P(2|X| > \epsilon) = 1 - P(X = 0).$$
Hence, convergence in distribution does not necessarily imply convergence in probability. This in turn does not necessarily imply convergence in mean square, because (2) may hold for random variables without a variance. What we can say is the following.

Theorem   Let $(X_n; n \ge 1)$ be a sequence of random variables having corresponding distributions $(F_n(x); n \ge 1)$. We have the following two results:
(i) If $X_n \xrightarrow{m.s.} X$, then $X_n \xrightarrow{P} X$.
(ii) If $X_n \xrightarrow{P} X$, then $F_n(x) \to F(x)$ at all points $x$ where $F(x)$ is continuous.

Proof   (i) By Chebyshov's inequality, for $\epsilon > 0$,
$$P(|X_n - X| > \epsilon) \le \epsilon^{-2}E(X_n - X)^2 \to 0$$
by hypothesis.
(ii) For any $\epsilon > 0$,
$$F_n(x) = P(X_n \le x) = P(X_n \le x, X \le x + \epsilon) + P(X_n \le x, X > x + \epsilon) \le F(x + \epsilon) + P(|X_n - X| > \epsilon).$$
Likewise,
$$F(x - \epsilon) = P(X \le x - \epsilon, X_n \le x) + P(X \le x - \epsilon, X_n > x) \le F_n(x) + P(|X_n - X| > \epsilon).$$
Hence,
$$F(x - \epsilon) - P(|X_n - X| > \epsilon) \le F_n(x) \le F(x + \epsilon) + P(|X_n - X| > \epsilon).$$
Now allowing $n \to \infty$ and $\epsilon \to 0$ yields the result.

Furthermore, it is not always necessary to postulate the existence of a limit random variable $X$. A typical result is this, which we give without proof.

(4) Theorem   Let $(X_n; n \ge 0)$ be a sequence of random variables with finite variance such that, as $j \to \infty$ and $k \to \infty$, $E(X_k - X_j)^2 \to 0$. Then there exists a random variable $X$ such that $EX^2 < \infty$ and $X_n \xrightarrow{m.s.} X$. Such sequences are called "Cauchy-convergent."

In fact, we can go a little further here, by using Markov's inequality (4.6.2) in this form:
(5) $$P(|X_n - X| > \varepsilon) \le E|X_n - X|/\varepsilon, \quad \text{where } \varepsilon > 0.$$
It follows that, as $n \to \infty$,
(6) $$X_n \xrightarrow{P} X, \quad \text{if } E|X_n - X| \to 0.$$
The converse implications to the above result are false in general, but can become true if appropriate extra conditions are added. We single out one famous and important example.

(7) Theorem: Dominated Convergence   Let $(X_n; n \ge 0)$ be a sequence of random variables such that $|X_n| \le Z$ for all $n$, where $EZ < \infty$. If $X_n \xrightarrow{P} X$, then $E|X_n - X| \to 0$ as $n \to \infty$.

Proof   Note that because we are dealing with random variables, many statements should include the modifier "with probability 1." This quickly becomes boring, so we omit this refinement.

Let $Z_n = |X_n - X|$. Because $|X_n| \le Z$ for all $n$,
it follows that $|X| \le Z$. Hence, $|Z_n| \le 2Z$. Introduce the indicator function $I(A)$, which takes the value 1 if the event $A$ occurs, and is otherwise 0. Then, for $\varepsilon > 0$,
(8) $$E|Z_n| = E[Z_n I(Z_n \le \varepsilon)] + E[Z_n I(Z_n > \varepsilon)] \le \varepsilon + 2E[Z\,I(Z_n > \varepsilon)].$$
Because $E|Z| < \infty$ and $Z_n \xrightarrow{P} 0$, the last term decreases to zero as $n \to \infty$. To see this, set
(9) $$E[Z\,I(Z_n > \varepsilon)] = E[Z\,I(Z_n > \varepsilon, Z > y)] + E[Z\,I(Z_n > \varepsilon, Z \le y)] \le E[Z\,I(Z > y)] + y\,P(Z_n > \varepsilon).$$
Now choose first $y$ and then $n$ as large as we please, to make the right side arbitrarily small. The result follows because $\varepsilon$ was arbitrary.

If $Z$ is a finite constant, then this yields a special case called the bounded convergence theorem. There is another theorem called the monotone convergence theorem, which asserts that
(10) $$EX_n \to EX \quad \text{if } X_n \uparrow X, \text{ as } n \to \infty.$$
We offer no proof of this, though we will use it as necessary. Now we can use Theorem (7) immediately to prove some optional stopping theorems for martingales.

(11) Theorem   Let $X_n$ be a martingale and $T$ a stopping time for $X_n$. Then $EX_T = EX_0$ if any of the following hold for some real positive finite constant $K$:
(a) $T$ is bounded (i.e., $T \le K < \infty$).
(b) $|X_n| \le K$ for all $n$, and $P(T < \infty) = 1$.
(c) $ET < \infty$ and $E(|X_{n+1} - X_n| \mid X_0, \ldots, X_n) \le K$, $n < T$.
(d) $E|X_T| < \infty$, $P(T < \infty) = 1$, and $E(X_n I(T > n)) \to 0$ as $n \to \infty$.

Remark   In practice, a popular technique when $P(T < \infty) = 1$ is to apply part (a) of the theorem at $T \wedge n$, and then let $n \to \infty$, using Dominated or Monotone Convergence to obtain the required result.

Proof   We proved this when (a) holds in (5.7.14), where we also showed that $E(X_{T\wedge n} - X_0) = 0$. Allowing $n \to \infty$, we have $X_{T\wedge n} \to X_T$. Now suppose (b) holds. Because the sequence is bounded, using (7) above gives the result. Next, we suppose (c) holds and write
$$|X_{T\wedge n} - X_0| = \left|\sum_{r=1}^{T\wedge n}(X_r - X_{r-1})\right| \le \sum_{r=1}^{\infty}|X_r - X_{r-1}|\,I(T \ge r).$$
Hence, because (c) holds,
$$E\sum_{r=1}^{\infty}|X_r - X_{r-1}|\,I(T \ge r) \le \sum_{r=1}^{\infty} K\,P(T \ge r) = K\,ET.$$
Because $ET < \infty$, we can use the dominated convergence theorem as $n \to \infty$ to obtain the required result. For (d), see Problem 5.44.
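A small numerical illustration of the theorem (in Python; the barriers $a = 3$ and $b = 5$, the number of trials, and the seed are illustrative assumptions, not from the text): for a simple symmetric random walk started at 0 and stopped at the first visit to $-a$ or $b$, the stopped walk is bounded and $P(T < \infty) = 1$, so case (b) applied to the stopped walk gives $ES_T = ES_0 = 0$; the simulation checks this.

```python
import random

def stopped_walk_mean(a, b, trials, seed=4):
    """Average value of S_T for a simple symmetric random walk started at 0,
    where T is the first time the walk hits -a or b."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = 0
        while -a < s < b:
            s += 1 if rng.random() < 0.5 else -1
        total += s
    return total / trials

print(stopped_walk_mean(a=3, b=5, trials=20_000))
# Close to E S_T = E S_0 = 0; equivalently, P(hit -a) is near b/(a+b) = 5/8.
```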
Similar results hold for submartingales and supermartingales, and are proved in the same way; we omit the details.

We conclude with another important result. One of the principal features of martingales is that they converge with only weak additional constraints on their nature. Here is one example.

(12) Example: Martingale Convergence   Let $(Y_n; n \ge 0)$ be a martingale such that $EY_n^2 \le K$ for all $n$. Show that $Y_n$ converges in mean square as $n \to \infty$.

Solution   For $r \ge i$,
$$E(Y_r \mid Y_0, \ldots, Y_i) = E(E(Y_r \mid Y_{r-1}, \ldots, Y_0) \mid Y_0, \ldots, Y_i) = E(Y_{r-1} \mid Y_0, \ldots, Y_i).$$
Iterating shows that $E(Y_r \mid Y_0, \ldots, Y_i) = Y_i$. Hence,
$$E(Y_r Y_i) = E[E(Y_r Y_i \mid Y_0, \ldots, Y_i)] = E\,Y_i^2.$$
It follows that for $i \le j \le k$,
(13) $$E((Y_k - Y_j)Y_i) = E(Y_k Y_i) - E(Y_j Y_i) = 0.$$
Thus, after some algebra,
$$E[(Y_k - Y_j)^2 \mid Y_0, \ldots, Y_i] = E(Y_k^2 \mid Y_0, \ldots, Y_i) - E(Y_j^2 \mid Y_0, \ldots, Y_i).$$
Therefore, $0 \le E(Y_k - Y_j)^2 = EY_k^2 - EY_j^2$. Now $(EY_n^2; n \ge 1)$ is nondecreasing and bounded, and therefore converges. Finally, we deduce that $E(Y_k - Y_j)^2 \to 0$ as $k, j \to \infty$, and this shows $Y_k \xrightarrow{m.s.} Y$, by (4).

The above example included a proof of the following small but important result.

(14) Corollary: Orthogonal Increments   If $(X_n; n \ge 0)$ is a martingale and $i \le j \le k \le m$, then, using (13),
$$E[(X_m - X_k)(X_j - X_i)] = 0,$$
which is called the orthogonal increments property, because of (5.3.9).

5.10 Review and Checklist for Chapter 5

This chapter extended ideas from earlier chapters to enable us to make probabilistic statements about collections and sequences of random variables. The principal instrument to help us is the joint probability distribution and joint probability mass function of discrete random variables. We defined the concepts of independence and conditioning for random variables. Jointly distributed random variables have joint moments, and we looked at covariance and correlation. Conditional expectation is a concept of great importance and utility; we used all these ideas in examining the properties of functions of random variables. Finally, we looked at random walks, martingales, stopping times, optional stopping, sequences of random variables, and simple ideas about their convergence.

We give details of these principal properties for pairs of random variables; all these expressions are easily generalized to arbitrary collections, at the expense of more notation and space.

Pairs $X$ and $Y$ of such variables have a joint mass function $f(x, y) = P(X = x, Y = y)$, which appears in the Key Rule for joint distributions: for any set $C$ of possible values of $(X, Y)$,
$$P((X, Y) \in C) = \sum_{(x,y)\in C} f(x, y).$$
In particular, we have the following.

Joint distribution: $F(x, y) = P(X \le x, Y \le y) = \sum_{u \le x}\sum_{v \le y} f(u, v)$.

Marginals: $f_X(x) = \sum_y f(x, y)$, $\quad f_Y(y) = \sum_x f(x, y)$.

Functions: $P(g(X, Y) = z) = \sum_{x,y:\,g(x,y)=z} f(x, y)$.

Sums: $P(X + Y = z) = \sum_x f(x, z - x) = \sum_y f(z - y, y)$.

Independence: $X$ and $Y$ are independent if $f(x, y) = f_X(x) f_Y(y)$ for all $x$ and $y$.

Conditioning: The conditional probability mass function of $X$ given $Y$ is
$$f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}, \qquad f_Y(y) > 0.$$
The Key Rule for conditional mass functions:
$$P(X \in A \mid Y = y) = \sum_{x \in A} f_{X|Y}(x|y),$$
and the Partition Rule says
$$f_X(x) = \sum_y f_Y(y) f_{X|Y}(x|y).$$

Expectation: The expected value of the random variable $g(X, Y)$ is
$$Eg(X, Y) = \sum_{x,y} g(x, y) f(x, y).$$
In particular, for constants $a$ and $b$, $E(a\,g(X, Y) + b\,h(X, Y)) = aEg + bEh$.

Moments: The covariance of $X$ and $Y$ is $\mathrm{cov}(X, Y) = E[(X - EX)(Y - EY)]$, and the correlation coefficient is
$$\rho(X, Y) = \mathrm{cov}(X, Y)/\{\mathrm{var}\,X\,\mathrm{var}\,Y\}^{1/2}.$$
We note that
$$\mathrm{cov}\left(\sum_i X_i, \sum_j Y_j\right) = \sum_{i,j}\mathrm{cov}(X_i, Y_j).$$

Independence: When $X$ and $Y$ are independent, $E(XY) = EX\,EY$, so that $X$ and $Y$ are uncorrelated and $\mathrm{cov}(X, Y) = \rho(X, Y) = 0$. In this case, when the $X_i$ are independent,
$$\mathrm{var}\left(\sum_i X_i\right) = \sum_i \mathrm{var}\,X_i.$$

Conditional expectation: The conditional expectation of $X$ given $Y = y$ is
$$E(X \mid Y = y) = \sum_x x\,f_{X|Y}(x|y) = \sum_x x\,f(x, y)/f_Y(y).$$
For any pair of random variables where both sides exist, $E[E(X|Y)] = EX$. Key properties: the conditional expectation $E(X|Y)$ satisfies
$$E(X|Y) = EX, \quad \text{if } X \text{ and } Y \text{ are independent};$$
$$E(Xg(Y)|Y) = g(Y)E(X|Y), \quad \text{the pull-through property};$$
$$E(E(X|Y, Z)|Y) = E(X|Y), \quad \text{the tower property}.$$

Conditional variance:
$$\mathrm{var}\,X = \mathrm{var}\,E(X|Y) + E\,\mathrm{var}(X|Y), \quad \text{where} \quad \mathrm{var}(X|Y) = E(X^2|Y) - [E(X|Y)]^2.$$

Remark   Experience of student calculations leads us to stress that it is not in general true that $\mathrm{var}\,X = E\,\mathrm{var}(X|Y)$.

Conditional independence: $X$ and $Y$ are conditionally independent given $Z = z$ if, for all $x$ and $y$,
$$P(X = x, Y = y \mid Z = z) = f_{X|Z}(x|z)\,f_{Y|Z}(y|z).$$
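These formulas translate directly into a few lines of code. Here is a minimal sketch (in Python) that computes marginals, covariance, correlation, and $E(X \mid Y = y)$ for the joint mass function given in Problem 2 at the end of this chapter; storing $f$ as a dictionary is just one convenient choice.

```python
from math import sqrt
from collections import defaultdict

# Joint mass function f(x, y) from Problem 2 (values sum to 1).
f = {(1, 2): 0.125, (1, 3): 0.0625, (1, 4): 0.25,
     (2, 2): 0.0625, (2, 3): 0.125, (2, 4): 0.375}

fX, fY = defaultdict(float), defaultdict(float)
for (x, y), p in f.items():          # marginals: sum over the other variable
    fX[x] += p
    fY[y] += p

EX = sum(x * p for x, p in fX.items())
EY = sum(y * p for y, p in fY.items())
EXY = sum(x * y * p for (x, y), p in f.items())
cov = EXY - EX * EY                   # cov(X, Y) = E(XY) - EX EY
varX = sum(x * x * p for x, p in fX.items()) - EX ** 2
varY = sum(y * y * p for y, p in fY.items()) - EY ** 2
rho = cov / sqrt(varX * varY)

def E_X_given_Y(y):
    """E(X | Y = y) = sum_x x f(x, y) / fY(y)."""
    return sum(x * p for (x, yy), p in f.items() if yy == y) / fY[y]

print(cov, rho, E_X_given_Y(4))
```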
Checklist of Terms for Chapter 5

5.1 joint probability mass function; marginal mass function.
5.2 independent random variables.
5.3 expectation; covariance; joint moments; orthogonal random variables; correlation coefficient; Cauchy–Schwarz inequality.
5.4 sums of indicators; inclusion–exclusion inequalities; convolution.
5.5 conditional mass function; conditional expectation; random sum; tower property; discrete partition rule.
5.6 simple random walk; first passage time; recurrence time; hitting time theorem; reflection principle; ballot theorem.
5.7 martingale; stopping time; optional stopping; Wald's equation.
5.8 weak law of large numbers; convergence; mean square.
5.9 convergence of distributions; dominated convergence; martingale convergence.

5.11 Example: Golf   Arnold and Bobby play a complete round of 18 holes at golf. Holes are independent, and any hole is won by Arnold with probability $p$, won by Bobby with probability $q$, or it is halved with probability $r$. Of the 18 holes, Arnold wins $X$, Bobby wins $Y$, and $Z$ are halved.
(a) Find the joint mass function of $X$, $Y$, and $Z$.
(b) What is the marginal distribution of $X$?
(c) Show that the correlation coefficient of $X$ and $Y$ is
$$\rho(X, Y) = -\left(\frac{pq}{(1 - p)(1 - q)}\right)^{1/2}.$$

Solution   Let $A_k$ be the event that Arnold wins the $k$th hole, $B_k$ the event that he loses the $k$th hole, and $H_k$ the event that the $k$th hole is halved. We illustrate the possibilities by giving more than one solution.

(a) I   A typical outcome is a string of $x$ $A$s, $y$ $B$s, and $z$ $H$s. Such a sequence has probability $p^x q^y r^z$ of occurring, by independence, and by Theorem 3.3.2 there are $18!/(x!y!z!)$ such sequences. Hence,
(1) $$P(X = x, Y = y, Z = z) = \frac{18!}{x!\,y!\,z!}\,p^x q^y r^z; \qquad x + y + z = 18.$$

II   By Definition 5.5.1,
$$P(X = x, Y = y, Z = z) = P(X = x, Y = y \mid Z = z)\,P(Z = z).$$
Now the number of holes halved is just the number of successes in 18 Bernoulli trials with $P(\text{success}) = r$. Hence, by Example 5.4.1 (or Example 4.2.3),
$$P(Z = z) = \binom{18}{z} r^z (1 - r)^{18 - z}.$$
Now, for any given hole, $P(A \mid H^c) = p/(p + q)$. Hence, given $Z = z$, the number of holes won by Arnold is just the number of successes in $18 - z$ Bernoulli trials with $P(\text{success}) = p/(p + q)$. Therefore,
$$P(X = x, Y = y \mid Z = z) = \frac{(x + y)!}{x!\,y!}\left(\frac{p}{p + q}\right)^x\left(\frac{q}{p + q}\right)^y,$$
where $x + y = 18 - z$. Thus,
$$P(X = x, Y = y, Z = z) = \frac{(x + y)!}{x!\,y!}\,\frac{p^x q^y}{(1 - r)^{x + y}}\,r^z(1 - r)^{18 - z}\,\frac{18!}{(x + y)!\,z!} = \frac{18!}{x!\,y!\,z!}\,p^x q^y r^z.$$

(b) I   As in (5.1.5), we have
(2) $$P(X = x) = \sum_{y,z} P(X = x, Y = y, Z = z) = \sum_{y=0}^{18 - x}\frac{18!}{x!\,(18 - x)!}\,\frac{(18 - x)!}{y!\,(18 - x - y)!}\,p^x q^y r^{18 - x - y} = \binom{18}{x} p^x (q + r)^{18 - x},$$
which is binomial with parameters 18 and $p$.

II   Either Arnold succeeds with probability $p$ in winning each hole, or he fails with probability $1 - p = q + r$. Hence, by Example 5.4.1, the mass function of $X$ is binomial, as in (2).

(c) Let $I_k$ be the indicator of the event $A_k$ that Arnold wins the $k$th hole, and $J_k$ the indicator of the event $B_k$ that Bobby wins it. Then $I_k J_k = 0$, and $I_j$ is independent of $J_k$ for $j \ne k$. Hence, $E(I_j J_k) = pq$ for $j \ne k$, and therefore,
$$E(XY) = E\left(\sum_{k=1}^{18} I_k \sum_{j=1}^{18} J_j\right) = 18 \times 17\,pq.$$
Thus,
$$\mathrm{cov}(X, Y) = 18 \times 17\,pq - 18p \times 18q = -18pq.$$
Finally, we note that because $X$ and $Y$ are binomial, we have $\mathrm{var}(X) = 18p(1 - p)$ and $\mathrm{var}(Y) = 18q(1 - q)$. Therefore,
$$\rho(X, Y) = \frac{\mathrm{cov}(X, Y)}{(\mathrm{var}(X)\,\mathrm{var}(Y))^{1/2}} = \frac{-18pq}{(18p(1 - p)\cdot 18q(1 - q))^{1/2}},$$
as required. (A quick numerical check of this formula is sketched after the exercises below.)

(3) Exercise   What is $\rho(Y, Z)$?
(4) Exercise   What is the conditional mass function of $X$, given $X + Y = m$?
(5) Exercise   What is the probability that the match is halved?
(6) Exercise   What is $E(X|Y)$?
(7) Exercise   What is $E(X|Y, Z)$?
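As a quick numerical check on part (c), here is a minimal simulation sketch (in Python; the values $p = 0.4$ and $q = 0.35$, the number of rounds, and the seed are arbitrary illustrative choices): it simulates many complete rounds and compares the sample correlation of $(X, Y)$ with $-(pq/((1-p)(1-q)))^{1/2}$.

```python
import random
from math import sqrt

def simulate_correlation(p, q, rounds, seed=5):
    """Simulate complete 18-hole rounds and return the sample correlation of (X, Y)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(rounds):
        x = y = 0
        for _ in range(18):
            u = rng.random()
            if u < p:
                x += 1          # Arnold wins the hole
            elif u < p + q:
                y += 1          # Bobby wins the hole
        xs.append(x)
        ys.append(y)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / sqrt(vx * vy)

p, q = 0.4, 0.35
print(simulate_correlation(p, q, rounds=50_000))
print(-sqrt(p * q / ((1 - p) * (1 - q))))   # theoretical rho(X, Y)
```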
5.12 Example: Joint Lives   Suppose that $2m$ individuals constitute $m$ married couples at some given initial date. We want to consider the survivors at some given later date. Suppose that each individual is alive at the later date with probability $p$, independently of the others. Let $A$ be the number of individuals then alive, and let $S$ be the number of surviving couples in which both partners are alive. Show that
$$E(S|A) = \frac{A(A - 1)}{2(2m - 1)}.$$

Remark   This problem was discussed by Daniel Bernoulli in 1768.

Solution   Let $S_a$ be the number of surviving couples given that $A = a$. We give several methods of solution; you can choose your favourite, or of course find a better one.

I   Let $I_j$ be the indicator of the event that the $j$th couple survives. Then
$$E(S_a) = E\left(\sum_{j=1}^{m} I_j\right) = mE(I_1) = mP(I_1 = 1),$$
because the chance of survival is the same for every couple. Now we can choose the $a$ survivors in $\binom{2m}{a}$ ways, and the number of these in which the first couple remain alive is $\binom{2m - 2}{a - 2}$. (This is the number of ways of choosing $a - 2$ other survivors from the other $m - 1$ couples.) Because these are equally likely outcomes,
$$P(I_1 = 1) = \binom{2m - 2}{a - 2}\bigg/\binom{2m}{a} = \frac{a(a - 1)}{2m(2m - 1)}.$$
Hence, $E(S_a) = a(a - 1)/(2(2m - 1))$.

II   Suppose that the $a$ individuals remaining alive include $x$ couples. If one more individual were to die, then the expected number of couples remaining would be $E(S_{a-1} \mid S_a = x)$. To evaluate this, observe that if a widow/er dies then there are still $x$ couples; however, if a survivor's spouse dies, there are now $x - 1$ couples. The probability of a widow/er's death is $(a - 2x)/a$; the probability of the death of one individual of the $x$ couples is $2x/a$. Hence,
$$E(S_{a-1} \mid S_a = x) = \frac{x(a - 2x)}{a} + \frac{(x - 1)2x}{a} = \frac{(a - 2)}{a}x.$$
Hence, by Theorem 5.5.6,
$$E(S_{a-1}) = E(E(S_{a-1} \mid S_a)) = \frac{a - 2}{a}E(S_a).$$
This relation may be iterated on the left or the right, to give either
$$E(S_a) = \frac{a(a - 1)}{2}E(S_2) = \frac{a(a - 1)}{2(2m - 1)}$$
or
$$E(S_a) = \frac{a(a - 1)}{2m(2m - 1)}E(S_{2m}) = \frac{a(a - 1)}{2(2m - 1)}.$$

III   Number the couples $1, \ldots, m$. Let $Y_i$ be the indicator of the event that the male of the $i$th couple survives, and $X_i$ the indicator of the event that the female of the $i$th couple survives. Then $S_a = \sum_{i=1}^{m} X_i Y_i$. Now $P(Y_1 = 1 \mid X_1 = 1) = (a - 1)/(2m - 1)$ and $P(X_1 = 1) = a/(2m)$. Hence,
$$E(S_a) = \sum_{i=1}^{m} E(Y_i X_i) = mE(Y_1 \mid X_1 = 1)P(X_1 = 1) = m\,\frac{a}{2m}\,\frac{a - 1}{2m - 1},$$
as required.

(1) Exercise   Find the mass function of $S$, and write down its mean and variance.
(2) Exercise   Show that $E(AS) = 2m((m - 1)p^3 + p^2)$.
(3) Exercise   Show that $E(A|S) = 2mp + 2(1 - p)S$.
(4) Exercise   Show that the correlation $\rho(A, S)$ is given by $\rho(A, S) = (2p/(1 + p))^{1/2}$.
(5) Exercise   Suppose that males and females have different death rates, so the probability of a male surviving is $\mu$ and the probability of a female surviving is $\phi$. Show that $S$ has a $B(m, \mu\phi)$ mass function. What is the mass function of $A$? What is $E(A)$?
(6) Exercise   When males and females have different survival rates $\mu$ and $\phi$, find $E(A|S)$ and hence show that in this case
$$\rho(A, S) = \frac{(2 - \phi - \mu)(\phi\mu)^{1/2}}{((1 - \phi\mu)(\phi(1 - \phi) + \mu(1 - \mu)))^{1/2}}.$$

5.13 Example: Tournament   Suppose that $2^n$ tennis players enter a knock-out singles tournament, and the players are completely ranked (with no ties). The draw for the tournament is at random, and we suppose that in any match the higher ranked player always wins. Let $R_n$ be the rank of the losing finalist; fi
nd E(Rn), and show that as n → ∞ E(Rn) → 3. Solution The losing finalist comes from the half of the draw not containing the topranked player. These 2n−1 players have ranks N1 < N2 < · · · < N2n−1, which are drawn at random from the 2n − 1 integers {2, 3, 4,..., 2n}, and Rn = N1. Let X 1, X 2,... be the numbers of players drawn with the top-ranked player, between successive players drawn for the other half. That is to say X 1 = N1 − 2 X k = Nk − Nk−1 − 1, X 2n−1+1 = 2n − N2n−1. 2 ≤ k ≤ 2n−1, (1) (2) (3) (4) (5) (1) (2) 210 5 Random Vectors: Independence and Dependence By symmetry, for all j and k, E(X k) = E(X j ). Hence, (2n−1 + 1)E(X 1) = E 2n−1+1 1 X k = 2n − 1 − 2n−1. Thus, E(Rn) = E(N1) = E(X 1) + 2 = 2n+1 − 2n−1 + 1 2n−1 + 1 → 3, as n → ∞. Suppose that the ranking allows ties (so that, for example, a possible ranking is 1, 1, Exercise 3, 3, 3, 6,...). Show that as n → ∞, lim E(Rn) ≤ 3. Suppose that there are 3 × 2n−1 entrants, and these are divided at random into a group Exercise of size 2n−1 and a group of size 2n who then knock each other out in the usual way to provide two finalists. Find E(Rn) and show that E(Rn) → 7 2. Exercise ball is drawn. Show that the expected number drawn is (b + r + 1)/(b + 1). An urn contains b blue and r red balls. Balls are removed at random until the first blue The balls are replaced, and then removed at random until all the balls remaining are of the
|
same colour. Show that the expected number remaining is r/(b + 1) + b/(r + 1). What is the probability pr that they are all red? Exercise Let X 1, X 2,... be independent and identically distributed. What is m when m ≤ n? E 1 n Xi Xi 1 5.14 Example: Congregations Suppose that n initially separate congregations of people are then united, and one person is picked at random from the united group. Let the size of the congregation of which she was originally a member be Y. If the respective sizes of the original congregations are the random variables (Xi ; 1 ≤ i ≤ n), show that n E( Xi 1 n E(Xi ). 1 ≥ 1 n Solution probability that the selected individual was in the r th congregation initially is xr /( Let us use conditional expectation. Given that Xi = xi for 1 ≤ i ≤ n, the n 1 xi ). Worked Examples and Exercises 211 Hence, E(Y |X 1 = x1,..., X n = xn) = n 1 xr P(Y = xr ) = n r =1 / x 2 r n. xi i=1 Therefore, E(Y ) = E(E(Y |X 1,..., X n)) = Xi 1 Now recall Cauchy’s inequality for real numbers (xi ; 1 ≤ i ≤ n) and (yi ; 1 ≤ i ≤ n), namely, 2 xi yi ≤ n n y2 )/( n 1 xi ) ≥ 1 n 1 xi /n, and the required inequality Setting yi = 1, for all i, yields ( (2) follows. Remark Observe that if a congregation is picked at random by choosing a number in n {1, 2,..., n} at random, then the expected size of the chosen congregation is 1 1 E(Xi ). n The fact that a member picked at random was in a larger expected congregation is a form of sampling “paradox.” For what distributions of Xi, if any, does the expected size of a randomly selected Exercise individual’s group actually equal the mean size of groups? Family sizes are independent and identically distributed
|
with mean µ. If you pick an Exercise individual at random find the probability that she is the kth born of her family, and the expectation of her order of birth in her family. Compare this with µ. Exercise Use the Cauchy–Schwarz inequality Lemma 5.3.13 to prove Cauchy’s inequality (3). 5.15 Example: Propagation A plant sheds N seeds, where N is a binomial random variable with parameters n and p. Each seed germinates with probability γ independently of all the others. Let S denote the number of resulting seedlings. Find: (a) The conditional mass function of S given N. (b) The joint mass function of S and N. (c) The probability mass function of S. (d) The conditional mass function of N given S. (a) Given that there are i seeds, that is N = i, the germination of any one Solution can be regarded as a Bernoulli trial with P(success) = γ. Then by Example 5.4.1 (also discussed in Example 4.2.3), the total number of successes is a binomial random variable (3) (4) (5) (6) 212 5 Random Vectors: Independence and Dependence with parameters i and γ. So P(S = j|N = i) = (b) Now for the joint mass function i j γ j (1 − γ )i− j ; 0 ≤ j ≤ i. P(N = i ∩ S = j) = P(S = j|N = i)P(N = i) i j γ j (1 − γ )i− j n i = pi (1 − p)n−i. (c) Now we require the marginal mass function of S, which is given by P(S = j) = = n i= j n i= j P(N = i, S = j) using (5.1.6) γ j (1 − γ )i− j pi (1 − p)n−i (n − i)! j!(i − j)! γ 1 − γ j n j n i= j = (1 − p)n i (1 − γ ) p 1 − p (n − j)! (n − i)!(i − j)! = n j (γ p
|
) j (1 − γ p)n− j. Thus, S is binomial with parameters n and γ p. (d) Finally, P(N = i|S = j) = P(N = i ∩ S = j) P(S = j) = n−− j ; j ≤ i ≤ n. Thus, the variable N − S given that S = j germinate is binomial with parameters n − j and ( p − pγ )/(1 − γ p). (1) (2) (3) (4) (5) (6) Exercise Exercise Exercise Exercise succumbs to wilt with probability 1 − τ. Let T be the number of resulting trees. Find E(N |S). Find E(S|N ). Find cov (N, S) and ρ(N, S). Each seedling independently succeeds in growing into a tree with probability τ, or Find the joint probability mass function of N, S, and T, and also the conditional mass function of N given T. Exercise Exercise Find the joint mass function of N and T given that S = s. Find the conditional covariance of N and T given that S = s. 5.16 Example: Information and Entropy (a) Let the random variable X take a finite number of values (xi ; 1 ≤ i ≤ n), with mass function P(X = x) = f (x). Suppose that (ai ; 1 ≤ i ≤ n), are such that ai > 0 for 1 ≤ i ≤ Worked Examples and Exercises 213 n n and i=1 ai = 1. Show that n f (i) log ai ≥ − n i=1 f (i) log f (i) − i=1 with equality if and only if ai = f (i) for all i. (b) The random variables X and Y take a finite number of values and have joint mass function f (x, y). Define I (X, Y ) = x y f (x, y) log. f (x, y) f X (x) fY (y) Show that I ≥ 0, with equality if and only if X and Y are independent. Solution (a) By definition, log y = y x −1d x ≤ 1 = y − 1. y 1 d x with equality if
|
y = 1, Hence, (1) log y ≤ y − 1 with equality if and only if y = 1. Therefore, − i f (i) log f (i) + i f (i) log ai = ≤ i i = 0, f (i) log f (i) ai f (i) ai f (i) − 1 by (1) with equality if and only if f (i) = ai for all i. (b) The positive numbers f X (x) fY (y) satisfy f X (x) fY (y) = 1. Therefore, by part (a), f (x, y) log f (x, y) ≥ f (x, y) log( f X (x) fY (y)) x,y x,y with equality if and only if for all x and y f (x, y) = f X (x) fY (y). But this is a necessary and sufficient condition for the independence of X and Y. (2) (3) Exercise Exercise Show that I = E(log f (X, Y )) − E(log f X (X )) − E(log fY (Y )). Show that if the conditional mass function of X given that Y = y is f (x|y), we have I = = x,y x,y fY (y) f (x|y) log f (x|y) − E(log f X (X )) f X (x) f (y|x) log f (y|x) − E(log fY (Y )). (4) Exercise Find I (X, Z ) and I (Z, X ). A die is rolled twice, yielding the respective scores X and Y. Let Z = max{X, Y }. 214 5 Random Vectors: Independence and Dependence Remark interesting that this is equal to the information about Y conveyed by X. The quantity I is sometimes said to be the information about X conveyed by Y. It is The quantity H (X ) = E(− log f X (X )) is known as the entropy or uncertainty of X, and H (X |Y ) = H (X ) − I (X, Y ) is known as the conditional entropy (or uncertainty) of X given Y. It is interpreted as the uncertainty of X, reduced by the information conveyed about
|
X by Y. Exercise Exercise H (Z |X ) = 0. Show that H (X |X ) = 0. Show that if H (X |Y ) = 0 = H (Y |X ) and H (Y |Z ) = H (Z |Y ) = 0, then H (X |Z ) = 5.17 Example: Cooperation Achilles and his two friends, Briseis and Chryseis, play a cooperative game. They possess a die with n faces, and each of them rolls it once. Then I (AC) is the indicator of the event that Achilles and Chryseis each turn up the same face of the die. I (AB) and I (BC) are defined similarly. Show that I (AB), I (AC), and I (BC) are pairwise independent if and only if the die is unbiased. Solution Let the die, when rolled, show its kth face with probability f (k). Then P(I (AB) = 1, I (BC) = 1) = P(all three rolls show the same face) = n ( f (k))3 k=1 and P(I (AB) = 1) = P(two rolls show the same face) = n ( f (k))2. k=1 Pairwise independence then requires that 2 ( f (k))2 = k k ( f (k))3. Now let X be a random variable that takes the value f (k) with probability f (k). Then (1) states that 0 = E(X 2) − (E(X ))2 = E(X − E(X ))2 = var (X ). (5) (6) (1) Hence, X must be constant by Example 4.6.10, and so the die is unbiased because f (k) = n ; 1 ≤ k ≤ n. In this case, it is easy to check that 1 n P(I (AB) = 1, I (BC) = 0(I (AB) = 1)P(I (BC) = 0) = 1 n. 1 n k=1 and two other conditions are satisfied. The indicators are pairwise independent only in this case. Worked Examples and Exercises 215 Exercise Exercise Exercise ith and jth women turn up the same face of the die, find the mean and variance of Are the indicators independent
|
? Find the mean and variance of Z = I (AB) + I (BC) + I (AC). If n women play a similar game, and I (Ai, A j ) is the indicator of the event that the i= j I (Ai A j ). 5.18 Example: Strange But True Let (Sn; n ≥ 0) be a simple symmetric random walk with S0 = 0. Let f0(n) = P(T0 = n) be the probability that the walk first returns to zero at the nth step, and let u(n) = P(Sn = 0). Let Vn be the number of values which a walk of n steps has visited exactly once. (a) Show that (b) Deduce that f0(2k) = u(2k − 2) − u(2k). P 2n k=1 Sk = 0 = P(T0 > 2n) = u(2n). (c) Hence, show that for all n ≥ 1 E(Vn) = 2. (2) (3) (4) (1) (2) (3) Solution (a) By symmetry and the reflection principle, using Theorem 5.6.17, f0(2k) = 1 P(S2k−1 = 1) = 2−(2k−1) 2k − 1 2k k = 2−2k+2 2k − 2 k − 1 2k − 1 = 2−2k 2k − 1 2k − 1 k − 2−2k 2k k = u(2k − 2) − u(2k). (b) P(T0 > 2n) = 1 − n f0(2k) = 1 − n (u(2k − 2) − u(2k)) by (1), k=1 = u(2n). k=1 (c) Clearly, V1 = 2, so it suffices to show that E(Vn) = E(Vn−1) for n ≥ 2. Let V n be the number of points visited just once by S1, S2,..., Sn. This has the same distribution as Vn−1, and so also the same expectation. Let T0 be the time of first return to the origin.
|
Now, (4) Vn = V n V n V n + 1 if T0 > n − 1 if S1,..., Sn revisits zero exactly once otherwise (5) (6) (7) (8) 216 Hence, 5 Random Vectors: Independence and Dependence E(Vn) − E(Vn−1) = E(Vn) − E(V n) = P(T0 > n) − P(S1,..., Sn revisits 0 exactly once) by (4), = P(T0 > n) − = P(T0 > n) − [ n 2 ] k=1 [ n 2 ] k=1 P(T0 = 2k)P n i=2k+1 Si = 0 P(T0 = 2k)P(Sn−2k = 0) by (2). = P(T0 > n) − P(Sn = 0) = 0 by (2). Exercise Exercise Exercise u(2k)u(2n − 2k). Show that 2k f0(2k) = u(2k − 2). Show that P(S2n = 0) = (1/2n)E(|S2n|). Let L 2n be the time of the last visit to 0 up to time 2n. Show that P(L 2n = 2k) = Show that if k and n increase in such a way that k/n = x, then P L 2n 2n ≤ x 0 ≤ x ≤ 1 → 2 π sin−1 x; = 2 π arc sin x. [Stirling’s formula says that n! e−nnn+ 1 2 (2π) 1 2 for large n.] This is an arc-sine law. 5.19 Example: Capture–Recapture A population of b animals has had a number a of its members captured, marked, and released. (a) Let Ym be the number of animals that it is necessary to capture (without re-release) to obtain m, which have been marked. Find P(Ym = n) and E(Ym). (b) If, instead, it had been decided just to capture [E(Ym)] animals, what would have been the expected number of marked animals among them? Compare this with m.
|
Solution (a) I For the event {Ym = n} to occur, it is necessary that: (i) The nth animal is marked, which can occur in a ways. (ii) The preceding n − 1 animals include exactly m − 1 marked and n − m unmarked n − m) ways. animals, which may occur in ( a − 1 m − 1)( b − a The total number of ways of first selecting a distinct animal to fill the nth place, and n − 1). Because these then choosing n − 1 animals to fill the remaining n − 1 places is b.(b − 1 are assumed to be equally likely, the required probability is (1) P(Ym = n. Worked Examples and Exercises 217 To calculate E(Ym), you may write E(Ym) = b−a+m n=m nP(Ym = n) = b−a+=+1−(a+1)+m b−a+m n+1=m+1 a b n=a + 1) n + 1 − (m + 1 where a = a + 1 and so on because (1) is a probability distribution with sum equal to unity. II Alternatively, suppose that you were to capture them all, and let X 0 be the number of unmarked animals captured before the first marked animal, Xr the number of unmarked animals captured between the r th and the (r + 1)st marked animals and Xa the number captured after the last marked animal. Then a 0 Xi = b − a, and, by symmetry, for all i and j, E(Xi ) = E(X j ). Hence, E(Xr ) = b − a a + 1, and E(Ym) = m−1 0 E(Xr ) + b) It is possible to write down the distribution of the number of marked animals captured, and then evaluate the mean by a method similar to the first method of (a). It is easier to let I j be the indicator of the event that the jth captured animal is marked. Then the required expectation is (2) E [E(Ym )] 1 I j = [E(Ym)]E(I j ) = [E(Ym)] ". Remark The distribution of Ym is called the negative hypergeometric distribution, by analogy with
|
the relation between the negative binomial distribution and the binomial distribution. The hypergeometric p.m.f. is (3.16.1). (3) (4) (5) If you capture and keep a fixed number n of animals, find the variance of the number Exercise that are marked. Exercise the number of captures required to secure m marked animals, find E(Zm). Exercise and p. Find P(X = k|X + Y = j) and explain why the answer takes the form you find. Your pen will only hold m animals, so you return the unmarked ones. Now if Zm is Let X and Y be independent binomial random variables with the same parameters n 218 5 Random Vectors: Independence and Dependence 5.20 Example: Visits of a Random Walk Let (Sn; n ≥ 0) be a simple symmetric random walk with S0 = 0. (a) Let Vr be the number of visits to r before the walk revisits the origin. Show that E(Vr ) = 1. (b) Show that the expected number of visits to the origin is infinite. Solution return to 0. Then Let In be the indicator of a visit to the point r at the nth step before any ∞ In = E(Vr ) = E ∞ n=1 E(In) n=1 = ∞ n=1 P(Sn = r, S1 = 0,..., Sn−1 = 0). Now we make two important observations: If Sn = r, then if and only if X k+. Because the Xi are independent and identically distributed, X k+1 + · · · + X n has the same distribution as X 1 + · · · + X n−k, and this remains true when Sn = r. Hence, we can write (1) as n ∞ E(Vr ) = Xi = r, X 1 = 0−1 = 0 (1) (2) (3) (4) P P P i=1 n i=1 n i=1 n=1 ∞ n=1 ∞ n=1 ∞ n=1 = = = Xi = r,..., X n = r by (2), Xi = r−1 = r,..., X 1 = r by (3), fr (n
|
) = 1, using Theorem 5.6.7. (b) Let Jn be the indicator of a visit to the origin at the nth step, and let R be the total number of returns to the origin. Then E(R) = E ∞ Jn ∞ = E(Jn) = ∞ n=1 P(Sn = 0) where n = 2k, n=1 2k k n=1 1 22k (2k − 1)(2k − 3)... 3.1 2kk(k − 1)... 2.1 = = ≥ = ∞ k=1 ∞ k=1 ∞ k=1 ∞ k=1 1 2k = ∞. (2k − 1). (2k − 2) (2k − 1). (2k − 3). (2k − 4) (2k − 3 2kk! Worked Examples and Exercises 219 Remark Result (a) is indeed remarkable. Interpreted as a game, it says that if a coin is tossed repeatedly and you get $1 every time the total number of heads is r more than the total number of tails, until heads and tails are equal, then your expected gain is $1, independently of r. [See Example 9.11 (b) for another method.] For a symmetric simple random walk with S0 = 0, let Rr be the total number of returns For a symmetric simple random walk with S0 = 0, show that the probability that the Exercise to r. What is E(Rr )? Exercise first visit to S2n takes place at time 2k is P(S2k = 0)P(S2n−2k = 0); 0 ≤ k ≤ n. Exercise What is E(V ), the expected number of visits of an asymmetric simple random walk to r? Consider a two-dimensional symmetric random walk (SX, SY ) on the points (i, j), Exercise where i and j are integers. From (i, j), the walk steps to any one of (i ± 1, j) or (i, j ± 1) with equal probability 1 4. Show that the expected number E(V ) of visits to the origin is infinite. Let X and Y be random variables such that for all x 5.21 Example: Ordering FX (
|
x) ≤ FY (x). Show that E(X ) ≥ E(Y ), and deduce that FX (x) ≤ FY (x) if and only if, for all increasing functions h(.), E(h(X )) ≥ E(h(Y )). Solution From Example 4.3.3, we have ∞ E(X ) = P(X > k) − −∞ P(X < k) = ∞ (1 − FX (k)) − 0 ∞ ≥ (1 − FY (k)) − k=0 −∞ FY (k) 0 0 = E(Y ). Now if h(.) is an increasing function 0 −∞ 0 FX (k) by (1) P(h(X ) > z) = P(X > inf {t: h(t) > z}) ≥ P(Y > inf {t: h(t) > z}) by (1) = P(h(Y ) > z). Hence, P(h(X ) ≤ z) ≤ P(h(Y ) ≤ z) and (2) follows on using (3). Conversely, if we choose h(Z ) to be the indicator of the event that Z ≤ x, then E(h(X )) = P(X ≤ x) ≤ P(Y ≤ x) = E(h(Y )). Exercise Exercise way that P(X > Y ) > 1 Exercise If X and Y are independent and for all x FX (x) ≤ FY (x), show that P(X ≥ Y ) ≥ 1 2. If X, Y, and Z are independent show that X, Y, and Z can be distributed in such a 2 ; P((Y > Z ) > 1 Let X (n, p) have binomial distribution with parameters n and p. Show that P(X (m, p) ≤ x) ≥ P(X (n, p) ≤ x) for m ≤ n (5) (6) (7) (8) (1) (2) (3) (4) (5) (6) (7) 220 and 5 Random Vectors: Independence and Dependence P(X (n, p1) ≤ x) ≥ P(X (n, p2) ≤ x) for p1 ≤ p2. 5.22 Example: More Martingales Let (X n; n ≥ 1) be a collection of
|
independent random variables with respective means (µn; n ≥ 1) and finite variances (σ 2 n ; n ≥ 1). Show that n 2 n Mn = (Xr − µr ) − σ 2 r r =1 r =1 defines a martingale with respect to X n. Now assume that the X n are identically distributed, with mean µ and variance σ 2, and T. is a stopping time for (X n; n ≥ 1) with ET < ∞. Show that when Yn = n 1 Xr, E(YT − T µ)2 = σ 2ET. Solution First, by the independence, E|Mn| ≤ n r =1 E(Xr − µr )2 + n r =1 σ 2 r < ∞. Second, we have by using independence again, E(Mn+1|X 1,..., X n) = Mn + E(X n+1 − µn+1)2 − σ 2 n+1 = Mn, and Mn is a martingale. For the last part, it is easy to see that we cannot apply the Optional Stopping theorem directly; so we employ an ingenious trick. First, note that T ∧ n is a finite stopping time. By the first case in the optional stopping theorem 5.9.11, it follows that (1) (2) E(YT ∧n − µT ∧ n)2 = σ 2ET ∧ n. Next, we observe that as n → ∞, we have T∧n → T and YT ∧n → YT, both statements being true with probability 1. Also, for any m ≥ n, we have, using the fact that martingales have orthogonal increments, Corollary (5.9.14), E(YT ∧m − µT ∧ m − YT ∧n + µT ∧ n)2 = E(YT ∧m − µT∧m)2 − E(YT ∧n − µT∧n)2 = σ 2(ET∧m − ET∧n) → 0 as m, n → ∞, since ET < ∞. Hence, by Theorem (5.7.4), Y
|
T ∧n − µn ∧ T converges in mean square as n → ∞. But from (2) above, we know it converges to YT − µT with probability 1. Hence, E(YT ∧n − µT∧n)2 → E(YT − µT )2. Now taking the limit as n → ∞ in (1) gives the required result. Worked Examples and Exercises 221 (3) Exercise Un and Vn are martingales, where Let (X n; n ≥ 1) be independent with respective finite means (µn; n ≥ 1). Show that (a) Un = n r =1 Xr − n r =1 µr. (b)Vn = X 1... X n µ1... µn. (4) Exercise Let Dn = X n − X n−1, where X n is a martingale with finite variance. Show that var X n = n r =1 var Dr. (5) Let (X n; n ≥ 1) and (Yn; n ≥ 1) be two collections of independent random variables, Exercise with each collection identically distributed having respective means µx and µy. Show that if T is a stopping time with respect to the sequence {(X n, Yn); n ≥ 1} and ET < ∞, then * ) T T E Xr − T µx Yr − T µy = ET cov(X 1, Y1). 1 1 (6) Exercise Set Sn = Let (X n; n ≥ 1) be independent and identically distributed with M(t) = Eet X 1 < ∞. 1 Xr. Show that Mn = exp(t Sn)(M(t))−n is a martingale with respect to Sn. n 5.23 Example: Simple Random Walk Martingales Let Sn = S0 + X 1 + · · · + X n, where (X n; n ≥ 1) are independent, and such that 0 = P(X 1 = 1(X 1 = −1) = 1 (a) Show that (q/ p)Sn is a martingale with respect to Sn. (b) If a < S0 < b, find the probability that the walk hits a before it hits b, where a, b and S
|
0 are integers. Solution the conditions for ( q p )Sn to be a martingale follow easily. (a) We noted in (5.6.10) that E( q p )Xr = 1. Because the X n are independent, Let T be the first time at which Sn takes either of the values a or b. The probability that any consecutive sequence of X n, of length a + b, yields a + b consecutive 1s is pa+b. Hence, P(T > m(a + b)) < (1 − pa+b)m → 0, as m → ∞. Hence, P(T < ∞) = 1. Because Sn and Sn∧T are bounded, so are ( q A be the event that the walk hits a before b; because P(T < ∞) = 1, we must have p )Sn and ( q p )Sn∧T. Let P(walk hits b before a) = 1 − P(A). We can apply the second case of the optional stopping theorem 5.9.11 to obtain, if S0 = s ST S0 = E q p q p P(A) + b (1 − P(A)). 222 Hence, (1) Exercise (2) Exercise 5 Random Vectors: Independence and Dependence P(A − − If p = q = 1 2, show that Sn is a martingale and deduce that P(A) = s − b a − b. If p = q = 1 2, show that S2 n − n is a martingale and hence that (3) Exercise If a → −∞ and p > 1 2, use an appropriate martingale to show that ET = (s − a)(b − s). ET = (b − s) 1 − 2q. (4) (5) Exercise and p > 1 2, deduce that Show that [(Sn − n( p − q))2 − 4npq; n ≥ 0] is a martingale. If a → −∞ varT = 4(b − s) pq ( p − q)3. Let Sn be a simple symmetric random walk started at the origin, and let T be the Exercise number of steps until the walk first hits −a or b, where a and b are positive. Show that the following are all martingales: −
|
6nS2 − n; (a) Sn; n Hence find P(ST = −a), ET, and E(T ∩ {ST = −a}). Show finally that varT = ab(a2 + b2 − 2)/3. + 3n2 + 2n. − 3nSn; (b) S2 n (d) S4 n (c) S3 n 5.24 Example: You Can’t Beat the Odds Let Yn be the total net fortune of a gambler after betting a unit stake on each of n consecutive fair plays in a casino. Thus, the return from the nth unit stake is Yn − Yn−1. We assume that Yn constitutes a martingale. A gambler devises a betting system that entails placing a stake Sn on the nth play, where Sn is not necessarily a unit stake, but Sn is necessarily a function only of (Y0,..., Yn−1), and does not depend on any Yn+k, k ≥ 0. Write down an expression for the gambler’s fortune Zn, after n plays, and show that Zn is a martingale if E|Zn| < ∞. Solution Yn−1). Hence, From the description of the system, the return on the nth play is Sn(Yn − Zn = Zn−1 + Sn(Yn − Yn−1) = Y0 + n r =1 Sr (Yr − Yr −1). Worked Examples and Exercises 223 Because Sn is a function of (Y0,..., Yn−1), we have E(Zn+1|Y0,..., Yn) = Zn + E(Sn+1(Yn+1 − Yn)|Y0,..., Yn) = Zn + Sn+1[E(Yn+1|Y0,..., Yn) − Yn] = Zn because Yn is a martingale. The result follows. Remark using a system. The exercises supply more instances of this. The point of this example is that you cannot turn a fair game in your favour (1) (2) (3) Show that using any of the following systems, the gambler’s fortune is a marting
|
ale: Exercise (a) Optional skipping. At each play, the gambler skips the round or wagers a unit stake. (b) Optional starting. The gambler does not join in until the (T + 1)th play, where T is a stopping time for Yn. (c) Optional stopping. The gambler uses the system until a stopping time T, and then quits. Exercise: Optional Sampling The gambler only uses the system at the plays numbered (T1, T2,...), where (Tn; n ≥ 1) is a sequence of stopping times such that P(Tr ≤ nr ) = 1 for some non random sequence of finite real numbers nr. Show that (Z Tr ; r ≥ 1), where T0 = 0, is a martingale. Exercise K < ∞, and she plays only at stopping times (Tr ; r ≥ 1), where Show that the result of Exercise (2) is true if the gambler’s fortunes are bounded by 0 ≤ T1 ≤ T2 ≤ T3 ≤... 5.25 Example: Matching Martingales In a cloakroom, there are C distinct coats belonging to C people who all attempt to leave by picking a coat at random. Those who select their own coat leave; the rest return their coats and pick again at random. This continues until everyone leaves; let N be the number of rounds required. Show that EN = C and varN ≤ C. Let Mn be the number of people present after the nth round, and X n the Solution number of matches in the nth round. Thus, M0 = C, Mn+1 = Mn − X n+1, n ≥ 0, and MN = 0. By the result of Example 5.4.3, EX n = 1 for all n, so that E(Mn+1 + n + 1|M0,..., Mn) = Mn + n. Thus, (Mn + n; n ≥ 0) is a martingale, and N is clearly a stopping time. Also, P(at least one match) ≥ C −1 for all values of Mn, so P(N < ∞) = 1, and also EN < ∞. By the appropriate part of the optional stopping theorem, C = M0 + 0 = E(MN + N ) = EN. We also have from Example 5.4.
|
3 that var(X n+1|M0,..., Mn) = 1 0 if Mn > 1 if Mn = 1. (1) (2) (3) 224 5 Random Vectors: Independence and Dependence Hence, var X n+1 ≤ 1, and we may write E((Mn+1 + n + 1)2 + Mn+1|M0,..., Mn) = (Mn + n)2 − 2(Mn + n)E(X n+1 − 1) + Mn +E((X n+1 − 1)2 − X n+1|M0,..., Mn) ≤ (Mn + n)2 + Mn. Thus, (Mn + n)2 + Mn is a nonnegative supermartingale, and by the appropriate optional stopping theorem [given at the end of Section 5.7], C 2 + C = M 2 0 + M0 ≥ E((MN + N )2 + MN ) = EN 2. The result follows, using the first part. Exercise Suppose the coat-grabbers adopt a slightly smarter approach. At each round, those with their own coats leave, those left call out the name on the label of the coat they have picked. Any pair holding each other’s coat swap them, and leave. The rest return their coats for another round. (a) Show that the expected number of rounds now required is C/2. (b) Let X n be the number of departures in the nth round; show that varX n = 3, for Mn ≥ 4. [Hint: With an obvious notation using suitable indicators, X n = j I j + j=k I jk. Hence, when Mn−1 = m, EX 2 n = mEI1 + m(m − 1)E(I1 I2) Show that (Mn + 2n)2 + 3 +2m(m − 1)(m − 2)E(I1 I23) + 2m(m − 1)EI12 +m(m − 1)(m − 2)(m − 3)E(I12 I34) = 7.] 2 Mn is a supermartingale, and deduce that varN ≤ 3 Exercise Suppose it was a mathematicians party, and at each round any subgroup of size less than or equal to k, that holds no
|
coats outside the subgroup, simply redistributes their coats correctly, and leaves. Show that the expected number of rounds required is C/k. Exercise Suppose now that the purpose of the coats exercise is not simply to leave, but to leave in pairs. Thus, only pairs holding each others coat swap and leave; the rest, including those who have their own coat, return them for another round. Show that when C is even, EN = C and varN ≤ 2C. 2 C. What can you say when C is odd? 5.26 Example: Three-Handed Gambler’s Ruin Three players start with a, b, and c chips, respectively, and play the following game. At each stage, two players are picked at random, and one of those two is picked at random to give the other a chip. This continues until one of the three is out of chips, and quits the game; the other two continue until one player has all the chips. Let X n, Yn, and Zn be the chips possessed by each player, respectively, after the nth stage; and let T be the number of transfers until someone has all the chips. Show that ET = ab + bc + ca. Solution We claim that is a martingale. To see this, we need to consider two cases; thus, Un = X nYn + Yn Zn + Zn X n + n Worked Examples and Exercises 225 (i) X nYn Zn > 0. Here, for instance, E(X n+1Yn+1|X 0, Y0, Z0,..., X n, Yn, Zn) [(X n + 1)Yn + (X n − 1)Yn + X n(Yn + 1) + X n(Yn − 1) = 1 6 +(X n + 1)(Yn − 1) + (Yn + 1)(X n − 1)] = X nYn − 1 3. The other two terms being treated similarly, we find that (1) E(Un+1 + n + 1|X 0,..., Zn) = Un + n + 1 − 1. (ii) One of X n, Yn or Zn is zero. If for instance Zn = 0, then E(X n+1Yn+1|X 0,...,
|
Zn) = 1 2 [(X n + 1)(Yn − 1) + (X n − 1)(Yn + 1)] = X nYn − 1. The other two possibilities being treated similarly, we obtain the same martingale condition (1). Clearly, T is a finite-mean stopping time for this bounded martingale, and UT = 0, so ET = E(UT + T ) = E(U0 + 0) = ab + bc + ca. (2) Exercise Let S be the number of transfers until one of the players is first out of chips. Show that is a martingale, and deduce that Mn = X nYn Zn + 1 3 n(a + b + c) ES = 3abc a + b + c. (3) The three players play a different game. Thus, they start with a, b, and c chips, Exercise respectively. At each stage, one player is picked at random to receive one chip from each other player still in; players drop out when they have no chips. Show that Mn and Vn are martingales, where Mn = X nYn Zn + n(a + b + c − 2) Vn = X nYn + Yn Zn + Zn X n + 3n. If S and T are defined as above, deduce that and ES = abc a + b + c − 2, ET = ab + bc + ca − 2abc a + b + c − 2. (4) Exercise The three players are now joined by a fourth, and all four return to play by the rules of the first game. The fourth starts with d chips, and they have X n, Yn, Zn, and Wn at the nth stage. Let S be the first time at which only two players remain in the game, and T the first time at which only one is left with all the chips. Verify that Un = X nYn + Yn Zn + Zn X n + X n Wn + WnYn + Wn Zn + n, 226 and 5 Random Vectors: Independence and Dependence Vn = X nYn Zn + Wn X nYn + WnYn Zn + Wn X n Zn + n 2 (
|
a + b + c + d) are martingales, and deduce that and ES = 2(abc + bcd + acd + abd) a + b + c + d, ET = ab + bc + cd + da + ac + bd. P R O B L E M S You roll two fair dice. Let X be the number of 2s shown, and Y the number of 4s. Write down the joint probability mass function of X and Y, and find cov (X, Y ) and ρ(X, Y ). Let the random variables X and Y have joint probability mass function f (x, y) such that:, f (1, 2) = 1 8 f (2, 2) = 1 16, f (1, 3) = 1 16 f (2, 3) = 1 8,, f (1, 4) = 1 4 f (2, 4) = 3 8,. (c) X + Y is odd (d) X − Y ≤ 1. Find the probability of the following: (a) X > Y (b) X ≥ Y Find two random variables X and Y that are uncorrelated, but not independent. Show that if E((X − Y )2) = 0, then X = Y with probability one. Show that if E((X − Y )2) = E(X 2) + E(Y 2), then X and Y are orthogonal. Let X be uniformly distributed on {0, 1, 2,..., 4n}. Let Y = sin( 1 2 (a) What is the joint probability mass function of Y and Z? (b) What is the distribution of Y + Z? Show that Y and Z are orthogonal. Let X and Y be jointly distributed with finite second moments and unit variance. Show that for some nonzero constants a, b, c, d, the random variables U and V are uncorrelated where U = a X + bY, V = cX + dY. Are a, b, c, and d unique? A source produces a message forming a sequence of zeros and ones. In being transmitted, it passes through two independent channels, each of which transmits the wrong symbol with probability 1 − p, or the correct symbol with probability p. Show that a symbol is least likely to be transmitted correctly when p = 1 2. π X )
|
and Z = cos( 1 2 π X ). Find the probability of correct transmission of a symbol when the message passes through three similar independent channels. Let (X n; n ≥ 1) be a sequence of independent random variables such that P(X n = 1(X n = −1). Let U be the number of terms in the sequence before the first change of sign, and V the further number of terms before the second change of sign. (In other words, X 1, X 2 Problems 227 10 11 3. Find: is made up of runs of +1s and runs of −1s; U is the length of the first run and V the length of the second.) (a) Show that E(U ) = pq −1 + qp−1 and E(V ) = 2. (b) Write down the joint distribution of U and V, and find cov (U, V ) and ρ(U, V ). An urn contains n balls numbered individually with the integers from 1 to n. Two balls are drawn at random without replacement, and the numbers they bear are denoted by X and Y. Find cov(X, Y ), ρ(X, Y ), and the limit of ρ(X, Y ) as n → ∞. Let X and Y have joint distribution defined by f (0, 0) = 1 − 3a; and f (0, 1) = f (1, 0) = f (1, 1) = a; a ≤ 1 (a) The p.m.f.s of X and Y (b) cov (X, Y ) (c) E(X |Y ) and E(Y |X ) (d) Whether X and Y can be independent, and if so, when. You roll two fair dice, and they show X and Y, respectively. Let U = min{X, Y }, V = max{X, Y }. Write down the joint distributions of: (a) {U, X } (b) {U, V } (c) {X, Y, V }. Find cov (U, V ) and E(X Y V ). (a) If X and Y are independent with finite expectation, show that E(X Y ) exists. (b) Find a sufficient condition on the moments of X
|
and Y, for E(X Y ) to exist in general. 14 Which of the following functions f (i, j) can be a joint probability mass function of two random 12 13 j |i| + | j, j < ∞ 1 ≤ i ≤ c, j ≥ 1, c an integer 1 ≤ i ≤ j < ∞. variables X and Y? (a) θ |i|+| j|; (b) θ i+ j ; (c) θ i+ j+2; (d) θ i+ j+1; (e) (i j − (i − 1) j )α ; (f) α(i n − (i − 1)n) j −n−2; Are X and Y independent in any case? For each function in problem 14 that is a joint probability mass function of X and Y, find the marginal mass functions of X and Y. Suppose that random variables X and Y are such that P(|X − Y | ≤ M) = 1, where M is finite. Show that if E(X ) < ∞, then E(Y ) < ∞ and |E(X ) − E(Y )| ≤ M. Show that the following are joint p.m.f.s and find the marginal distributions. β c 15 16 17 (a) (b) f (x1,..., xk) = x! x1!... xk! p x1 1 · · · p xk k, where k 1 p j = 1 and k 1 x j = x. f (x, x1,..., xk) = (x + r − 1)! (r − 1)! pr 0 p x1 1 x1! · · · p xk k xk!, where k 0 p j = 1 and k 1 x j = x ≥ 0. 228 (c) 5 Random Vectors: Independence and Dependence f (x1,..., xk) = a1 x1 ak xk... . k 1 k 1 ai xi 18 19 Let X and Y be independent geometric random variables with parameters p1 and p2, respectively. (a) If c is an
|
integer and Z = min{c, X }, find E(Z ). (b) Find the distribution and expectation of min {X, Y }. Let X and Y have joint p.m.f. f (x, y) = C (x + y − 1)(x + y)(x + y + 1) ; m ≥ 1, n ≥ 1. Find the p.m.f. of X, the p.m.f. of Y, and the value of C. Let X and Y be independent random variables each with a geometric distribution, so 20 f X (x) = αβ x−1; fY (y) = pq y−1; x ≥ 1, α + β = 1, y ≥ 1, p + q = 1. Let R = X/Y. (a) Find P(R > 1). (b) If r = m/n where m and n are integers with no common factor except unity, find P(R = r ),, P(R = r ) = 1/(2m+n − 1). and show that when α = p = 1 2 Let X and Y be independent random variables each uniformly dis- Triangular Distribution tributed on {0, 1,..., n}. Find the p.m.f. of (a) X + Y. (b) X − Y. Let X have binomial distribution with parameters n and p, and Y a binomial distribution with parameters m and p. Show that if X and Y are independent then X + Y has a binomial distribution with parameters m + n and p. Deduce that =0 Find the conditional distribution of X given that X + Y = k. If X has a binomial distribution with parameters n and p, show that E 1 1 + X = 1 − (1 − p)n+1 (n + 1) p. Let X and Y be independent Poisson random variables. Show that Z = X + Y has a Poisson distribution. Show also that for some p, P(X = k|Z = n) = pk(1 − p)n−k, which is to say that the condi- n k tional distribution of X given Z is binomial. Let X and Y be independent geometric random variables, such that for m ≥ 0 21 22 23 24 25 (a) Show that P(X = m) = (1
− λ)λm and P(Y = m) = (1 − µ)µm. P(X + Y = n) = (1 − λ)(1 − µ) λ − µ (λn+1 − µn+1), λ = µ. Find P(X = k|X + Y = n). Problems 229 (b) Find the distribution of Z = X + Y when λ = µ, and show that in this case P(X = k|Z = n) = Let X, Y, and Z be jointly distributed random variables such that each can 1/(n + 1). Bell’s Inequality take either of the values ±1. Show that E(X Y ) ≤ 1 − |E((X − Y )Z )|. (This inequality is interesting because it has been claimed that there are experiments in quantum mechanics for which it does not hold true.) Show that for any c such that |c| ≤ 4 the function 26 27 f (i, j) = 1 (2m + 1)(2n + 1) + c(i − m)( j − n) ((2n + 1)(2m + 1))2 ; 0 ≤ i ≤ 2m, 0 ≤ j ≤ 2n, is a joint probability mass function with marginal distributions that do not depend on c. Show that the covariance of this distribution is cmn(m + 1)(n + 1) 9(2n + 1)(2m + 1). 28 Construct two identically distributed random variables X and Y such that P(X < Y ) = P(Y < X ). 29 30 31 32 33 Bernoulli’s Urn Initially an urn contains U umber balls and a vase contains V viridian balls. From each container, a ball is removed at random and placed in the other container. Let Ur be the number of umber balls in the urn after r repetitions of this operation. (a) Find E(Ur ) and show that limr→∞E(Ur ) = U 2/(U + V ). (b) Just before each time balls are exchanged, a coin is tossed (which shows a head with probability p); find the expected number of umber balls in the urn when the coin first shows a head. Show that if U = V and U p = 1, this expectation is about 2 Let (Xi
; i ≥ 1) be a random walk with S0 = 0 and Sr = max1≤k≤n{Sk}. Show that P(Mn ≥ x) ≤ ( p 1 Xi. Define the maximum Mn = 3 U for large U. r q )x, x ≥ 0, and deduce that for p < q. E(Mn) ≤ q/(q − p), lim n→∞ An urn contains n balls such that each of the n consecutive integers 1, 2,..., n is carried by one ball. If k balls are removed at random, find the mean and variance of the total of their number in the two cases: (a) They are not replaced. (b) They are replaced. What is the distribution of the largest number removed in each case? Let the random variables X and Y have joint distribution P(X = a, Y = 0) = P(X = 0, Y = a) = P(X = −a, Y = 0) = P(X = 0, Y = −a) = 1 4. Show that X − Y and X + Y are independent. The random variables U and V each take the values ±1. Their joint distribution is given by P(U = +1) = P(U = −1) = 1 2 = P(V = −1|U = −1),, P(V = +1|U = 1) = 1 3 P(V = −1|U = 1) = 2 3 = P(V = +1|U = −1). 230 5 Random Vectors: Independence and Dependence n (a) Find the probability that x 2 + U x + V = 0 has at least one real root. (b) Find the expected value of the larger root given that there is at least one real root. (c) Find the probability that x 2 + (U + V )x + U + V = 0 has at least one real root. Let Sn = p, p + q = 1. Let Ta0 be the time at which the walk first visits zero. Show that if p ≤ 1 P(Ta0 < ∞) = 1, but if p > 1 3 then P(Ta0 < ∞) = r a < 1. What is r? Casualties arriving at a certain hospital require surgery, independently of one another,
with probability 1 4. What is the probability that, on a day when n casualties arrive, exactly r require surgery? The number X of casualties arriving on weekdays follows a Poisson distribution with mean 8; 1 Xi be a random walk with S0 = a > 0, such that P(Xi = −1) = q, P(Xi = +2) = 3 then that is, for each day, P{X = n} = e−88n/n!n = 0, 1, 2,... Show that the number requiring surgery each day also follows a Poisson distribution and find its mean. Suppose that the situation is identical on Saturdays and Sundays, except that there are on average only four casualties arriving per day. Find the mean and variance of the number of patients requiring surgery each week. (Assume that each day’s arrivals are independent and recall Problem 24.) An urn contains m white balls and M − m black balls. Balls are chosen at random without replacement. Show that the probability pk of choosing exactly k white balls in n choices (0 ≤ k ≤ m) is given by M n pk = − Define a random variable, where Xi = 0 or 1 according as the ith ball is black or white. Show that P(X = k) = pk, P(Xi = 1) = m/M, P(Xi = 1, X j = 1) = m(m − 1) M(M − 1), i = j. By considering E(X ), E(X 2), or otherwise, find the mean and variance of the distribution given by pk. Conditional Gambler’s Ruin An optimistic gambler seeks to know the expected duration of the game assuming that he wins. As usual, he plays a sequence of fair wagers losing or gaining $1 each time. The game stops as soon as he has $0 or $K. Initially, his fortune is $k(< $K ), the event that he stops with $K is Vk, and Dk is the duration of the game. Let δk = E(Dk|Vk). Show that for 1 < k < K, (k + 1)δk+1 − 2kδk + (k − 1)δk−1 + 2k = 0. Write down two boundary conditions at k = 1 and k = K,
and deduce that E(Dk|Vk) = 1 3 Let (Sn; n ≥ 1) be a simple random walk, and let M be its maximum, M = maxn≥1{Sn}. (a) If S0 = 0, and p < q, show that m has a geometric distribution and find its mean. (b) If S0 is a random variable with distribution P(S0 = −k) = αβ k; k = 0, 1, 2,... find the distri- 1 ≤ k ≤ K. (K 2 − k2), bution of M. In this case, what is the conditional distribution of S0 given M? Let X 1, X 2, and X 3 be independent geometric random variables with parameters 1 − p1, 1 − p2, and 1 − p3, respectively. 34 35 36 37 38 39 Problems 231 (a) Show that P(X 1 < X 2 < X 3) = (1 − p1)(1 − p2) p2 p 2 (1 − p2 p3)(1 − p1 p2 p3) 3. (b) Find P(X 1 ≤ X 2 ≤ X 3). (c) Three players, A, B, and C, roll a fair die in turn, that is, in the order ABCABCA... Show that the probability that A throws the first six, B the second six, and C the third six, is 216 1001. 40 Matching Once again, n letters with n matching envelopes are inadvertently placed at random in the envelopes. Let X be the number of letters that are in their matching envelope. Find E(X ) and var(X ), and show that E(X (X − 1)... (X − k + 1)) = 1 0 k ≤ n k > n. 41 42 43 44 Let n be a prime number greater than two, and let X and Y be independently and uniformly distributed on {0, 1,..., n − 1}. For all r such that 0 ≤ r ≤ n − 1, define Zr = X + r Y, modulo n. Show that the random variables (Zr ; 0 ≤ r ≤ n − 1) are pairwise independent. Is this true if n is not prime? If g(.) is convex, show that
g(E(X |Y )) ≤ E(g(X )|Y ). Jensen’s Inequality A bag initially contains r red and b blue balls, r b > 0. Polya’s urn (Example 2.7) revisited. A ball is drawn at random, its colour noted, and it is returned to the bag together with a new ball of the same colour. Let R(n) be the number of red balls in the bag after n such operations. Let T be the number of balls drawn until the first blue ball appears. (a) Show that R(n)/{n + b + r } is a martingale. (b) Deduce that E{(b + 1)(b + r )/(T + r + b)} = b. Optional Stopping. P(T < ∞) = 1. Prove that EX (T ) = EX (0) if either of (a) or (b) holds. (a) E(supn (b) E|X (T )| < ∞, Let X (n) be a martingale, and T a stopping time for X (n) such that and E{X (n)I (T > n)} → 0 as n → ∞. |X (n∧T )|) < ∞. 6 Generating Functions and Their Applications Everything future is to be estimated by a wise man, in proportion to the probability of attaining it, and its value when attained. Samuel Johnson, [The Rambler, 20] This chapter deals with a special subject and may be omitted on a first reading. Its contents are important and useful, but are not a prerequisite for most of the following chapters. 6.1 Introduction In Chapter 3, we found that generating functions can provide elegant and concise methods for handling collections of real numbers. The mass function of an integer valued random variable is such a collection, and so we may anticipate (correctly as it turns out) that the following generating function will be useful. (1) Definition variable X is defined by The probability generating function G(s) of the integer valued random G(s) = k P(X = k)sk. Because all random variables in this chapter are integer valued, this is not again mentioned explicitly. (2) Example Let X be uniformly distributed in {−a,
−a + 1,..., b − 1, b}, where a, b > 0. Then provided s = 1, G(s) = b k=−a 1 a + b + 1 sk = s−a − sb+1 (a + b + 1)(1 − s). Notice that, by Theorem 4.3.4, we have from Definition 1 of G(s) that (3) G(s) = E(s X ); 232 6.1 Introduction 233 this is a particularly useful representation of G(s), and we use it a great deal in what follows. For example, suppose we seek the probability generating function of Y = X + a, where a is constant. Using (3), we can write GY (s) = E(sY ) = E(s X +a) = sa G X (s). We will see many other applications of (3) later. When X is defective (that is, when P(|X | < ∞) < 1), the representation (3) can still be used, provided that we remember that the expectation is taken over the finite part of the distribution of X. We write G X (s) when we want to stress the role of X ; and for brevity, G X (s) is sometimes known as the p.g.f. of X. Obviously, if P(|X | < ∞) = 1, then G X (1) = k P(X = k) = 1. To sum up, if X is finite with probability 1, then G X (s) is a power series in s with nonnegative coefficients such that G X (1) = 1. Conversely, if G(s) is a power series with nonnegative coefficients such that G(1) = 1, then G is the p.g.f. of some integer valued random variable X, which is finite with probability 1. (4) Example Let G(s) = (a + bs)/(1 − cs). When is G the p.g.f. of a finite integer valued random variable X? Solution we need to consider various cases. First, we note that if X is finite then G(1) = 1, and so a +
b + c = 1. Now (i) If 0 ≤ c < 1, then we can write, for any n, G(s) = (a + bs)(1 + cs + · · · + (cs)n) + a + bs 1 − cs (cs)n+1 = a + (b + ac)s + (b + ac)cs2 + · · · + (b + ac)cn−1sn + bcnsn+1 + a + bs 1 − cs (cs)n+1. For |s| < c−1, we can let n → ∞ to obtain a series expansion of G(s). This has the required properties of a p.g.f. if 1 ≥ a ≥ 0 and 1 ≥ b + ac ≥ 0. In this case, X is a nonnegative random variable. (ii) If c = 1, then a = −b = 1. In this case X is zero with probability 1. (iii) If c > 1, then we can use a method similar to that of (i) to obtain a different series expansion of G(s), that is: G = − b c − ac + b c2s − ac + b c3s2 −... This is a p.g.f. if −c ≤ b ≤ 0 and −c2 ≤ b + ac ≤ 0. In this case, X is nonpositive. (iv) If c < 0, then a = 1 and b = c. In this case, X is zero with probability 1. 234 6 Generating Functions and Their Applications See Example 14 for more insight into the nature of this probability generating function. Another useful theorem is 3.6.7, which we restate here. (5) Theorem Let X be a random variable with mass function f (k), and suppose that a ≤ X ≤ b. Let tn = P(X > nk). Define the tail generating function b−1 T (s) = sntn. Then, whenever both sides exist, a In particular, if X ≥ 0, then (6) (7) (1 − s)T (s) = sa − G(s). (1 − s)T (s) = 1 − G(s). Proof The left-hand side of (7) may be written as (1 − s) b−1 n=a P(
X > n)sn = b−1 snP(X > n) − b−1 n = a sn+1P(X > n) n = a b−1 = sn(P(X > n) − P(X > n − 1)) + saP(X > a) n = a+1 − sbP(X > b − 1) = sa − b a P(X = n)sn = sa − G(s), as required. (8) Example 2 Revisited Here X is uniform on {−a,..., b}, and so TX (s) = s−a 1 − s − (s−a − sb−1) (a + b + 1)(1 − s)2 = (1 − s)(a + b)s−a + sb+1 − s−a+1 (a + b + 1)(1 − s)2. More generally, we can show that the identity (7) holds for unbounded nonnegative random variables. One way of doing this is to observe that the coefficients of sn on each side are equal for all n, and then use a standard theorem about power series. (9) Example q < 1. Then Let X be geometric with mass function f (k) = (1 − q)q k−1; k ≥ 1, 0 < G(s) = ∞ k = 1 (1 − q)q k−1sk = (1 − q)s 1 − qs, 6.1 Introduction 235 and Thus, (1 − s)T (s) = 1 − (1 − q)s 1 − qs = 1 − s 1 − qs. T (s) = 1 1 − qs. For future reference, we record the following trivial corollary of (7); that is, if P(0 ≤ X < ∞) = 1, then (10) ∞ j = 0 s j P(X ≤ j) = G X (s) 1 − s. It is useful to bear in mind that conditional probability mass functions also have generating functions. Thus, if A is some event, we can write P(X = k|A) = f (k|A) and define the generating function G X |A(s) = f (k|A)sk = E(s X |A), in the usual
notation. If (Ai ; i ≥ 1) is a collection of disjoint events with k i Ai =, then it is easy to show that (11) E(s X ) = i E(s X |Ai )P(Ai ). This result is often useful in finding E(s X ). If the random variables X and Y are jointly distributed, then in like manner we have (12) E(s X |Y = y) = k skP(X = k|Y = y) = k sk f (k, y) fY (y) ; fY (y) > 0. As y runs over all the possible values of Y, this yields the conditional p.g.f. of X given Y G X |Y (s) = E(s X |Y ). (13) We therefore have the useful result: G X (s) = E(G X |Y (s)) = E(E(s X |Y )). (14) Example 4 Revisited Suppose we have two biased coins; the first shows a head with probability a, and the second shows a head with probability 1 − c. The first coin is tossed and, if it shows a tail then the second coin is tossed repeatedly until a head is shown. Let X be the number of times the second coin is tossed. What is G X (s)? Solution Let H be the event that the first coin shows a head. If H occurs then X = 0, so E(s X |H ) = 1. If H c occurs, then X is geometric with f X (k) = (1 − c)ck−1; k ≥ 1. Hence, by Example 9, E(s X |H c) = (1 − c)s 1 − cs. 236 6 Generating Functions and Their Applications Therefore, by (11), E(s X ) = a + (1 − a)(1 − c)s 1 − cs = a + (1 − a − c)s 1 − cs. Looking back, we see that this is the generating function considered in Example 4, case (i). It follows that we can think of (a + bs)/(1 − cs) as being the p.g.f. of a random variable X, which is either zero with probability a, or with probability 1 − a is a geometric
random variable with parameter c. Such random variables arise quite naturally in applications. (15) Example A biased coin is tossed repeatedly until the first occasion when r consecutive heads have resulted. Let X be the number of tosses required. Find E(s X ). We suppose that the chance of a head is p, and note that if the first i tosses Solution are i − 1 heads followed by a tail, then the further number of tosses required has the same mass function as X. Hence, with an obvious notation: E(s X |H i−1T ) = si E(s X ); 1 ≤ i ≤ r. Also, E(s X |H r ) = sr. It follows that and so Hence, E(s X ) = r i=1 qpi−1si E(s X ) + pr sr E(s X ) 1 − qs r −1 ( ps)i i = 0 = pr sr. E(s X ) = pr sr (1 − ps) 1 − s + qpr sr +1. We discover different methods for proving this later. 6.2 Moments and the Probability Generating Function For the remainder of this chapter, random variables are assumed to be nonnegative unless stated otherwise. In this case, whenever |s| ≤ 1, ∞ ∞ ∞ |G X (s)| = f (k)sk f (k)|sk| ≤ f (k) = 1 This simple property has enormous consequences for the p.g.f. G(s). These are fully explored in textbooks on calculus and analysis, so we merely state the most important relevant results here. First, we state without proof: (1) Theorem The function G(s) is differentiable for |s| < 1 and its derivative is G(s) = ∞ n=1 n f (n)sn−1 < ∞. 6.2 Moments and the Probability Generating Function 237 At s = 1, G(1) = lim s↑1 ∞ n=1 n f (n)sn−1 whether the limit is finite or not. More generally, it follows that for k ≥ 1, and G(k)(s) = ∞ n = k n! (n − k)! f (n)sn−k, |s| < 1 G(k)(1) = lim s↑1 ∞ n =
$k$; that is,
(4) $G^{(k)}(1) = \lim_{s \uparrow 1} \sum_{n=k}^{\infty} \frac{n!}{(n-k)!} f(n) s^{n-k}$.

Second, it follows that $G(s)$ determines the collection $(f(k); k \ge 0)$.

(5) Theorem (Uniqueness) Let $X$ and $Y$ have generating functions $G_X(s)$ and $G_Y(s)$. If, for some $G(s)$, we have $G_X(s) = G_Y(s) = G(s)$ for $|s| < 1$, then $X$ and $Y$ have the same mass function.

Proof This follows from (3) because both $f_X(k)$ and $f_Y(k)$ are given by $f_X(k) = G^{(k)}(0)/k! = f_Y(k)$ for all $k$.

Third, it follows that we can obtain all the moments of $X$ from $G(s)$.

(6) Theorem If $X$ has p.g.f. $G(s)$, then
(7) $\mathrm{E}(X) = G'(1)$;
more generally, the $k$th factorial moment is
(8) $\mu_{(k)} = \mathrm{E}(X(X-1)\cdots(X-k+1)) = G^{(k)}(1)$;
and, in particular,
(9) $\mathrm{var}(X) = G''(1) + G'(1) - (G'(1))^2$.

Proof Equation (7) is a trivial consequence of (2), and (8) follows from (4). To see (9), write
(10) $\mathrm{var}(X) = \mathrm{E}(X - \mathrm{E}(X))^2 = \mathrm{E}(X^2) - (G'(1))^2 = \mathrm{E}(X(X-1) + X) - (G'(1))^2$,
as required.

Just as (10) gives the second moment and second central moment in terms of the first two factorial moments, so $\sigma_k$ and $\mu_k$ may be obtained in principle in terms of $(\mu_{(k)}; k \ge 1)$.
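To see Theorem (6) in action numerically, here is a minimal sketch (an illustration added here, not part of the text): it estimates $G'(1)$ and $G''(1)$ by finite differences for the Poisson p.g.f. $e^{\lambda(s-1)}$ (Example (12) below) and recovers the mean and variance; the value of $\lambda$ and the step size $h$ are arbitrary illustrative choices.

```python
# A numerical check of Theorem (6): E(X) = G'(1) and var(X) = G''(1) + G'(1) - G'(1)^2.
# G is the Poisson(lam) p.g.f. exp(lam*(s-1)); lam and h are illustrative assumptions.
import math

lam = 2.5
G = lambda s: math.exp(lam * (s - 1.0))        # Poisson p.g.f.

h = 1e-4
G1 = (G(1 + h) - G(1 - h)) / (2 * h)           # central-difference estimate of G'(1)
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2   # estimate of G''(1)

mean = G1
variance = G2 + G1 - G1**2
print(mean, variance)                          # both close to lam = 2.5
```

For a Poisson random variable both quantities equal $\lambda$, which is what the estimates return up to discretisation error.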
(11) Example: Binomial p.g.f. Let $X$ have a binomial distribution with parameters $n$ and $p$. Then, with $q = 1 - p$ as usual,
$G(s) = \sum_{k=0}^{n} \binom{n}{k} q^{n-k} p^k s^k = (q + ps)^n$.
Now using (8), we have
$\mu_{(k)} = \frac{n!}{(n-k)!} p^k$ for $1 \le k \le n$, and $\mu_{(k)} = 0$ for $k > n$.
Hence, by (9),
$\mathrm{var}(X) = n(n-1)p^2 + np - (np)^2 = npq$.

(12) Example: Poisson p.g.f. Let $X$ have a Poisson distribution with parameter $\lambda$. Then
$G(s) = \sum_{k=0}^{\infty} e^{-\lambda} \frac{\lambda^k}{k!} s^k = e^{\lambda(s-1)}$.
Hence, we find that $\mu_{(k)} = \lambda^k$, for $k \ge 1$.

Moments can also be obtained from the tail generating function $T(s)$ defined in Theorem 6.1.5.

(13) Theorem Let $X$ be a random variable with $T(s) = \sum_{k=0}^{\infty} s^k \mathrm{P}(X > k)$. Then
(14) $\mathrm{E}(X) = T(1)$
and, if $\mathrm{E}(X) < \infty$,
(15) $\mathrm{var}(X) = 2T'(1) + T(1) - T(1)^2$.

Proof By L'Hôpital's rule,
$T(1) = \lim_{s \uparrow 1} \frac{1 - G(s)}{1 - s} = G'(1) = \mathrm{E}(X)$, by (7).
Likewise, differentiating (6.1.7) yields
$T'(1) = \lim_{s \uparrow 1} \left[ \frac{1 - G(s)}{(1-s)^2} - \frac{G'(s)}{1-s} \right] = \frac{G''(1)}{2}$,
by L'Hôpital's rule, and the result follows using Theorem 6.

More generally, a straightforward extension of this theorem shows that
(16) $\mu_{(k)} = k T^{(k-1)}(1)$.

(17) Example: Geometric p.g.f. Let $X$ have a geometric distribution with mass function $f(k) = (1-q)q^{k-1}$, $k \ge 1$, $0 < q < 1$. Then, by Example 6.1.9,
$T(s) = \frac{1}{1 - qs}$.
Hence, by Theorem 13,
$\mathrm{E}(X) = T(1) = \frac{1}{1-q}$,
and likewise
$\mathrm{var}(X) = 2T'(1) + T(1) - T(1)^2 = \frac{q}{(1-q)^2}$.
From (16), $\mu$
(k) = kq k−1 (1 − q)k. We conclude this section with a note about defective probability mass functions. k = 0 f (k) < 1, then it still makes sense If X is a nonnegative random variable such that ∞ k k f (k) < ∞, to define the generating function G(s) = then G(1) = k k f (k). However, this is not now the expectation E(X ), but rather the “defective” expectation k = 0 sk f (k). Furthermore, if ∞ E(X I {X < ∞}) = E(X ; X < ∞) = E(X |X < ∞)P(X < ∞), where I {X < ∞} is the indicator of the event that X is finite. In the general case, we have likewise G(1) = E(X ||X | < ∞)P(|X | < ∞) when the expectation exists. In such cases, it can be of interest to calculate E(X |X < ∞) = G(1) G(1). 6.3 Sums of Independent Random Variables If X and Y are independent, then the mass function of their sum Z = X + Y is f Z (k) = f X ( j) fY (k − j). Practical folk (such as statisticians and the like) are frequently interested in the sum of n independent random variables: j S = n 1 Xi. (1) (2) (3) (4) 240 6 Generating Functions and Their Applications The prospect of performing the summation in (1) on n − 1 occasions to find f S(k) is not an attractive one. The next theorem renders it unnecessary in many important cases. Theorem G2(s), respectively. Then the sum Z = X 1 + X 2 has generating function (a) Let X 1 and X 2 be independent with generating functions G1(s) and G(s) = G1(s)G2(s). (b) More generally, if (Xi ; 1 ≤ i ≤ n) are independent with generating functions n (Gi (s); 1 ≤ i ≤ n), then the sum Z = 1 Xi has generating function n G Z (s) = Gi (s). i
= 1 Proof (a) Because X 1 and X 2 are independent, s X 1 and s X 2 are also independent. Hence, G Z (s) = E(s X 1+X 2) = E(s X 1)E(s X 2) by Theorem 5.3.8 = G1(s)G2(s). Part (b) is proved similarly. Let X and Y be independent and binomially distributed Example: Binomial Sum with parameters (m, p) and (n, p), respectively. Then recalling Example 6.2.11, we have G X +Y (s) = E(s X +Y ) = E(s X )E(sY ) by independence = (1 − p + ps)m+n. Hence, X + Y is binomially distributed with parameters m + n and p, using Theorem 6.2.5, the uniqueness theorem. (5) Example Let (Xi ; i ≥ 1) be independent Poisson random variables having respective parameters (λi ; i ≥ 1). Find the mass function of Z = n i=1 Xi. Solution Reproducing the argument of the above theorem, we have: G Z (s) = E(s Z ) = n n = exp E(s Xi ) by independence i=1 λi (s − 1) by Example 6.2.12. Thus, Z is Poisson with parameter i=1 n i=1 λi, by the uniqueness theorem. 6.3 Sums of Independent Random Variables 241 (6) Example Let Sk = Let (Xi ; i ≥ 1) be independently and uniformly distributed on {1, 2,..., n}. k i=1 Xi, and define Tn = min {k: Sk > n}. (Thus, Tn is the smallest number of the Xi required to achieve a sum exceeding n.) Find the mass function and p.g.f. of Tn, and hence calculate E(Tn) and var (Tn). Solution First, we observe that Tn ≥ j + 1 if and only if S j ≤ n. Therefore, (7) (8) (9) (10) (11) (12) P(Tn ≥ j + 1) = P(S j ≤ n). Now, by independence, E(z S j ) = (E(z X 1)) j =
$\frac{1}{n^j} \left( \frac{z - z^{n+1}}{1 - z} \right)^j$, by Example 6.1.2. Hence, by (6.1.10),
(8) $\sum_{k=j}^{\infty} z^k \mathrm{P}(S_j \le k) = \frac{z^j (1 - z^n)^j}{n^j (1-z)^{j+1}}$.
Equating coefficients of $z^n$ on each side of (8) gives
(9) $\mathrm{P}(S_j \le n) = \frac{1}{n^j} \binom{n}{j} = \mathrm{P}(T_n \ge j+1)$, by (7).
Hence,
(10) $\mathrm{P}(T_n = j) = \frac{1}{n^{j-1}} \binom{n}{j-1} - \frac{1}{n^j} \binom{n}{j}$.
From (9), $T_n$ has tail generating function
(11) $\sum_{j=0}^{n} z^j \mathrm{P}(T_n > j) = \left( 1 + \frac{z}{n} \right)^n$.
Hence, from Theorem 6.2.13,
(12) $\mathrm{E}(T_n) = \left(1 + \frac{1}{n}\right)^n$ and $\mathrm{var}(T_n) = 2\left(1 + \frac{1}{n}\right)^{n-1} + \left(1 + \frac{1}{n}\right)^n - \left(1 + \frac{1}{n}\right)^{2n}$.
Finally, $T_n$ has p.g.f.
$G(z) = 1 + (z - 1)\left(1 + \frac{z}{n}\right)^n$.

Generating functions become even more useful when you are required to consider the sum of a random number of random variables.

(13) Theorem: Random Sum Let $N$ and $(X_i; i \ge 1)$ be independent random variables, and suppose that $N$ is nonnegative and that for all $i$
(14) $\mathrm{E}(s^{X_i}) = G(s)$.
Then the sum $Z = \sum_{i=1}^{N} X_i$ has generating function $G_Z(s) = G_N(G(s))$.

Proof By conditional expectation,
$\mathrm{E}(s^Z) = \mathrm{E}(\mathrm{E}(s^Z|N)) = \mathrm{E}(\mathrm{E}(s^{X_1}) \cdots \mathrm{E}(s^{X_N}))$, by independence, $= \mathrm{E}(G(s)^N)$, by (14), $= G_N(G(s))$, by Definition 6.1.3,
and the result follows.

(15) Example You toss a fair coin repeatedly. Each time it shows a tail you roll a fair die; when the coin first shows a head you stop. What is the p.g.f. of the total sum of the scores shown by the rolls of the die?

Solution As you know by now, the number $N$ of tails shown has mass function $f_N(k) = (\frac{1}{2})^{k+1}$, $k \ge 0$, with generating function $G_N(s) = \frac{1}{2 - s}$. The score shown by each die has p.g.f. $G_X(s) = \frac{s(1 - s^6)}{6(1 - s)}$, and so the p.g.f. of the total is given by Theorem 13 as
$G(s) = \left( 2 - \frac{s(1 - s^6)}{6(1 - s)} \right)^{-1}$.
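Here is a quick simulation of Example (15) — an illustrative sketch added here, not part of the text. Differentiating $G_Z(s) = G_N(G(s))$ at $s = 1$ gives $\mathrm{E}(Z) = \mathrm{E}(N)\mathrm{E}(X_1) = 1 \times 3.5 = 3.5$, and the sample mean should agree; the sample size and seed are arbitrary choices.

```python
# Simulation of Example (15): toss a fair coin; each tail earns one roll of a fair
# die; stop at the first head.  The sample mean of the total score should be near
# E(N) E(X_1) = 1 * 3.5 = 3.5, as the random-sum theorem predicts.
import random

random.seed(1)

def total_score():
    total = 0
    while random.random() < 0.5:         # the coin shows a tail
        total += random.randint(1, 6)    # roll a fair die and add its score
    return total

n_samples = 200_000
estimate = sum(total_score() for _ in range(n_samples)) / n_samples
print(estimate)                          # close to 3.5
```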
(16) Example Let $Z = \sum_{i=1}^{N} X_i$, where $f_X(k) = \frac{-p^k}{k \log(1-p)}$, $k \ge 1$, $0 < p < 1$, and $f_N(k) = \lambda^k e^{-\lambda}/k!$, $k \ge 0$, $0 < \lambda$. Show that $Z$ has a negative binomial mass function.

Solution It is easy to show that
$G_X(s) = \frac{\log(1 - sp)}{\log(1 - p)}$ and $G_N(s) = e^{\lambda(s-1)}$.
Hence,
$G_Z(s) = e^{-\lambda} \exp(\lambda G_X(s)) = \left( \frac{1 - p}{1 - ps} \right)^{-\lambda(\log(1-p))^{-1}}$,
which is the p.g.f. of a negative binomial mass function.

(17) Example: Branching A collection of particles behaves in the following way. At time $n = 0$, there is one particle. At time $n = 1$, it is replaced by a random number $X$ of particles, where $X$ has mass function $f(k)$, $k \ge 0$. At every subsequent time $n = 2, 3, \ldots$, each particle in existence at that time is replaced by a random number of new particles, called its family. All family sizes are independent, and they all have the same mass function as the first family $X$. [An example is given in Figure 6.1, which shows a realization of a branching process with $Z_0 = 1$, $Z_1 = 1$, $Z_2 = 2$, $Z_3 = 2$, $Z_4 = 1$, $Z_5 = 3$, $Z_6 = 1$, $Z_7 = 1$, $Z_8 = 2$, $Z_9 = 1$, $Z_{10} = 0$; the orientation of the diagram explains the name.] Let the number of particles in existence at time $n$ be $Z_n$. Find $\mathrm{E}(s^{Z_n})$ and $\lim_{n \to \infty} \mathrm{P}(Z_n = 0)$.
Solution Let $G(s) = \mathrm{E}(s^X)$ and $G_n(s) = \mathrm{E}(s^{Z_n})$. Let the family sizes of the particles existing at time $n$ be $(X_j; 1 \le j \le Z_n)$. Then we obtain the attractive and useful representation
$Z_{n+1} = \sum_{j=1}^{Z_n} X_j$
and, by Theorem 13,
(18) $G_{n+1}(s) = G_n(G(s))$.
(This basic argument is used repeatedly in the theory of branching processes.) Hence, $G_{n+1}(s)$ is the $(n+1)$th iterate of $G(\cdot)$, that is to say,
$G_{n+1}(s) = G(G(\ldots G(s) \ldots)), \quad n \ge 0$.
Now let $\mathrm{P}(Z_n = 0) = \eta_n$, and define $\eta$ to be the smallest nonnegative root of the equation
(19) $G(s) = s$.
We now show that
(20) $\lim_{n \to \infty} \eta_n = \eta$.
First, we consider three trivial cases:
(i) If $f(0) = 0$, then $\eta_n = G_n(0) = 0 = \eta$.
(ii) If $f(0) = 1$, then $\eta_n = G_n(0) = 1 = \eta$.
(iii) If $f(0) + f(1) = 1$, with $f(0)f(1) \ne 0$, then $\eta_n = G_n(0) = 1 - (f(1))^n \to 1 = \eta$.
Thus, (20) is true in each case. In what follows, we exclude these cases by requiring that $0 < f(0) < f(0) + f(1) < 1$. Now note that $\{Z_n = 0\} \subseteq \{Z_{n+1} = 0\}$ and so, by Example 1.4.11, $\eta_n \le \eta_{n+1} \le 1$. Hence, $\lim_{n \to \infty} \eta_n$ exists; let us denote it by $\lambda$. By (18), $G_{n+1}(0) = G(G_n(0))$; now letting $n \to \infty$ and using the continuity of $G(s)$, we find that $\lambda$ is a root of (19): $\lambda = G(\lambda)$. However, if for some $n$, $\eta_n \le \eta$ then, because $G(s)$ is increasing, $\eta_{n+1} = G(\eta_n) \le G(\eta) = \eta$.
But $\eta_1 = G(0) \le G(\eta) = \eta$, and so $\eta_n \le \eta$ for all $n$. Hence, $\lambda \le \eta$ and so $\lambda = \eta$.
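To make the extinction probability concrete, here is a small numerical sketch (an illustration added here, not part of the text): it iterates $\eta_{n+1} = G(\eta_n)$ from $\eta_0 = 0$, exactly as in the argument above, for an assumed family-size distribution, and recovers the smallest nonnegative root of $G(s) = s$.

```python
# Iterating eta_{n+1} = G(eta_n) from eta_0 = 0 gives the extinction probability,
# the smallest nonnegative root of G(s) = s.  The family-size mass function f is
# an arbitrary illustrative choice with mean 1.25 > 1, so extinction is not certain.

def G(s, f):
    """Family-size probability generating function sum_k f(k) s^k."""
    return sum(p * s**k for k, p in enumerate(f))

f = [0.25, 0.25, 0.5]      # f(0), f(1), f(2); assumed for illustration

eta = 0.0
for _ in range(200):       # eta_n increases to its limit
    eta = G(eta, f)

print(eta)                 # approx 0.5, the smallest root of 0.25 + 0.25*s + 0.5*s**2 = s
```

Here $G(s) = s$ has roots $\frac12$ and $1$, so the iteration settles at $\eta = \frac12$.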
Once again, we conclude with a note about defective random variables. If $X$ and $Y$ are defective, then they are independent if $\mathrm{P}(X = i, Y = j) = \mathrm{P}(X = i)\mathrm{P}(Y = j)$ for all finite $X$ and $Y$. Hence, we can still write $\mathrm{E}(s^{X+Y}) = \mathrm{E}(s^X)\mathrm{E}(s^Y)$, and we can denote this by $G_{X+Y}(s) = G_X(s)G_Y(s)$, if we remember that the expectation is taken only over the finite part of the distribution.

6.4 Moment Generating Functions

The moments $(\mu_k; k \ge 1)$ of a random variable $X$ also form a collection of real numbers, so we may also expect their generating functions to be useful. In this case, it is convenient to use the exponential generating function of the collection $(\mu_k; k \ge 1)$.

(1) Definition Let the random variable $X$ have finite moments $\mu_k = \mathrm{E}(X^k)$ for all $k \ge 1$. Then the moment generating function (or m.g.f.) of $X$ is the function $M_X(t)$ given by
(2) $M_X(t) = \sum_{k=0}^{\infty} \frac{\mu_k t^k}{k!}$.
If $X$ takes only a finite number of values, then we easily obtain the convenient representation
(3) $M_X(t) = \sum_{k=0}^{\infty} \frac{\mathrm{E}(X^k) t^k}{k!} = \mathrm{E}\left( \sum_{k=0}^{\infty} \frac{(Xt)^k}{k!} \right) = \mathrm{E}(e^{Xt})$.
More generally, (3) holds provided the moments $\mu_k$ do not get too large as $k$ increases. For example, if $\sum_{k=0}^{\infty} |\mu_k|/k! < \infty$, then $M_X(t)$ exists for $|t| < 1$, and we can use the equivalent of Theorem 6.2.1. This yields
(4) $\mu_k = M_X^{(k)}(0)$.
From (3), we find that
(5) $M_X(t) = G_X(e^t)$,
where $G_X(s)$ is the p.g.f. of $X$.

(6) Example Let $X$ have a negative binomial distribution with mass function $f(k) = \binom{n+k-1}{k} q^k p^n$, $k \ge 0$. By the negative binomial expansion,
$G(s) = \sum_{k=0}^{\infty} \binom{n+k-1}{k} p^n q^k s^k = \left( \frac{p}{1 - qs} \right)^n, \quad |s| < q^{-1}$.
Then $X$ has moment generating function
$M(t) = \left( \frac{p}{1 - qe^t} \right)^n, \quad t < -\log q$.
Let us consider an example in which $X$ may take negative integer values.

(7) Example Let $X$ have mass function $f(k) = \frac{1}{2} q^{|k|-1}(1-q)$, $k = \pm 1, \pm 2, \ldots$,
MZ (t) = E(et(X +Y )) = MX (t)MY (t) by independence. G Z (1 + t) = E((1 + t)Z ) = G X (1 + t)GY (1 + t). Finally, we record the existence of yet another function that generates the moments of X, albeit indirectly. (11) Definition If the function κ(t) = log(E(e X t )) = log(MX (t)) 6.5 Joint Generating Functions 247 can be expanded in powers of t, in the form (12) κ(t) = ∞ r =1 κr t r /r!, then it is called the generating function of the cumulants (κr ; r ≥ 1). (13) Example If X is Poisson with parameter λ, then log (MX (t)) = log (exp[λ(et − 1)]) = λ(et − 1) = ∞ r =1 λ r! t r. Hence, for all r, κr = λ. 6.5 Joint Generating Functions Generating functions can be equally useful when we want to consider the joint behaviour of a number of random variables. Not surprisingly, we need a joint generating function. (1) Definition A random vector (X 1,..., X n), with joint mass function f (x1,..., xn), has a joint probability generating function,..., X n (s1,..., sn) = G X (s) = G X 1 x1,x2,...,xn s x1s x2... s xn f (x1,..., xn). By Theorem 5.3.1, we obtain the following useful representation of G, G X (s Xi i. i=1 (2) Example A coin shows heads with probability p or tails with probability q = 1 − p. If it is tossed n times, then the joint p.g.f. of the number X of heads and the number of tails is G(s, t) = E(s X t Y ) = t nE = (qt + ps)n because X is binomial, The fact that G is the nth power of (qt + ps) suggests that independence could have been
used to get this result. We use this idea in the next example. (3) Example: de Moivre trials Each of a sequence of n independent trials results in a win, loss, or draw, with probabilities α, β, and γ respectively. Find the joint p.g.f. of the wins, losses, and draws, the so-called trinomial p.g.f. Solution or draw. Then Let Wi, L i, and Di be the respective indicators on the ith trial of a win, loss, E(x Wi y L i z Di ) = αx + βy + γ z. 248 6 Generating Functions and Their Applications But the required joint p.g.f. is G(x, y, z Wi y = (αx + βy + γ z)n. n 1 Di = [E(x Wi y L i z Di )]n by independence, (4) (5) (6) Knowledge of the joint p.g.f. entails knowledge of all the separate p.g.f.s because, for example, if X and Y have joint p.g.f. G(s, t), then G X (s) = E(s X ) = E(s X 1Y ) = G(s, 1). Likewise, GY (t) = G(1, t). Indeed, we can quickly obtain the p.g.f. of any linear combination of X and Y ; for example, let Z = a X + bY, then E(s Z ) = E(sa X +bY ) = E(sa X sbY ) = G(sa, sb). Further, the joint generating function also provides us with the joint moments when they exist, in the same way as G X (s) provides the moments of X. (7) Example Let X and Y have joint p.g.f. G(s, t) and suppose that X and Y have finite variance. Then E(X Y ) exists (by the Cauchy–Schwarz inequality) and Hence, Likewise, ∂ 2G ∂s∂t = ∂ 2 ∂s∂t E(s X sY ) = E(X Y s X −1t Y −1). E(X Y ) = ∂ 2G ∂s∂t $ $ $ $
s=t=1. E(X ) = $ $ $ $ ∂G ∂s s=t=1, and E(Y ) = $ $ $ $ ∂G ∂t. s=t=1 Quite often, we write Gst (s, t) for ∂ 2G/∂s∂t, and so on; in this form, the covariance of X and Y is given by cov (X, Y ) = Gst (1, 1) − Gs(1, 1)Gt (1, 1). (8) (9) (10) Example 5.11 Revisited: Golf Recall that you play n holes of golf, each of which you independently win, lose, or tie, with probabilities p, q, and r, respectively. The numbers of wins, losses, and ties are X, Y, and Z, respectively, with X + Y + Z = n. (a) Find ρ(X, Y ). (b) Find var (X − Y ). Solution (a) By Example 3 above, we calculate E(x X yY z Z ) = ( px + qy + r z)n = G(x, y, z) say. 6.5 Joint Generating Functions 249 Hence, and and E(X ) = G x (1, 1, 1) = np, var (X ) = G x x (1, 1, 1) + G x (1) − (G x (1))2 = np(1 − p), E(X Y ) = G x y(1, 1, 1) = n(n − 1) pq. Therefore, the correlation between X and Y is (11) ρ(X, Y ) = cov (X, Y ) (var (X )var (Y )) = n(n − 1) pq − n2 pq (n2 p(1 − p)q(1 − q)) 1 2 = − 1 2 pq (1 − p)(1 − q) 1 2. You should compare the labour in this calculation with the more primitive techniques of Example 5.11. (b) Using (6) with a = 1, b = −1, we have, on setting W = X − Y, G W (s) = E(s X −Y ) = G(s, s−1, 1) = ( ps
+ qs−1 + r )n. Hence, dG W /ds = n( p − qs−2)( ps + qs−1 + r )n−1, and d 2G W ds2 ( ps + qs−1 + r )n−2 + 2nqs−3( ps + qs−1 + r )n−1. Therefore, = n(n − 1)( p − qs−2)2 var (W ) = n(n − 1)( p − q)2 + 2nq + n( p − q) − n2( p − q)2 = n( p + q − ( p − q)2). Finally, we record that joint generating functions provide a useful characterization of independence. Theorem and only if (12) (13) Let X and Y have joint p.g.f. G(s, t). Then X and Y are independent if G(s, t) = G(s, 1)G(1, t). Proof If (13) holds, then equating coefficients of s j t k gives P(X = j, Y = k) = P(X = j)P(Y = k), as required. The converse is immediate by Theorem 5.3.8. (14) Example 5.5.8 Revisited: Eggs Recall that the number X of eggs is Poisson with parameter λ, and eggs hatch independently with probability p. Let Y be the number that do hatch, and Z the number that do not. Show that Y and Z are independent, and also that ρ(X, Y ) = √ p. 250 6 Generating Functions and Their Applications Solution Conditional on X = x, the number Y of hatchings is binomial with p.g.f. (15) E(sY |X = x) = ( ps + 1 − p)x. Hence, by conditional expectation, E(yY z Z ) = E(yY z X − py z z X E X Y y z |X by (15) = exp (λ( py + (1 − p)z − 1)) = eλp(y−1)eλ(1− p)(z−1). since X is Poisson, Hence, Y and Z are independent by Theorem 12. Furthermore, we see immediately that Y is Poisson with
parameter λp. To find ρ(X, Y ), we first find the joint p.g.f. of X and Y, again using conditional expectation. Thus, E(s X yY ) = E(s X E(yY |X )) = E(s X ( py + 1 − p)X ) = exp (λ(s( py + 1 − p)− 1)). Hence, using (7), E(X Y ) = λ2 p + λp, and so, using the first part, ρ(X, Y ) = should compare this with the method of Example 5.5.8. √ p. You (16) Example: Pairwise Independence Independent random variables X and Y each take the values +1 or −1 only, and P(X = 1) = a, with P(Y = 1) = b. Let Z = X Y. Show that there are values of a and b such that X, Y, and Z are pairwise independent. (17) Solution Consider the joint probability generating function of X and Z. G(s, t) = E(s X t Z ) = E(s X t X Y ) = E(E(s X t X Y |X )) = E bs X t X + (1 − b) s X t X = a bst + (1 − b) s t + (1 − a) b st + (1 − b) t s = abs2t 2 + a(1 − b)s2 + (1 − a)(1 − b)t 2 + b(1 − a) st as2(bt 2 + 1 − b) + (1 − a)(1 − b) (bt 2 + 1 − b) = 1 st b + b(1 − a) − (1 − b)2(1 − a)b−1, which factorizes into a product of a function of s and a function of t if b2 − (1 − b)2 = 0, that is if b = 1 dependent, and a = b = 1 2. In this case, X and Z are independent. If a = 1 2 entails the pairwise independence of X, Y, and Z. 2 then Y and Z are in- 6.6 Sequences 6.6 Sequences 251 In Section 4.5
, we defined the convergence of a sequence of mass functions. This can be usefully connected to the convergence of corresponding sequences of generating functions. For sequences of probability generating functions, we have the following result, which we give without proof.

(1) Theorem Let $f(k)$ be a probability mass function with generating function $G(s) = \sum_{k=0}^{\infty} s^k f(k)$, and suppose that for each $n \ge 1$, $f_n(k)$ is a probability mass function with generating function $G_n(s) = \sum_{k=0}^{\infty} s^k f_n(k)$. Then, as $n \to \infty$, $f_n(k) \to f(k)$ for all $k$, if and only if $G_n(s) \to G(s)$ for all $0 < s < 1$.

We now use this to prove a result, which we have already shown by more primitive methods.

(2) Example Let $(X_n; n \ge 1)$ be a sequence of random variables such that $X_n$ has a binomial distribution with parameters $n$ and $\lambda/n$, $\lambda > 0$. Then
$\mathrm{E}(s^{X_n}) = \left(1 + \frac{\lambda(s-1)}{n}\right)^n \to e^{\lambda(s-1)}$ as $n \to \infty$.
This is the p.g.f. of a Poisson random variable, and so as $n \to \infty$, $\mathrm{P}(X_n = k) \to e^{-\lambda}\lambda^k/k!$.
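A small numerical check of Example (2) — an illustration added here, not part of the text; the value $\lambda = 2$ and the chosen values of $n$ are arbitrary:

```python
# The binomial(n, lam/n) mass function approaches the Poisson(lam) mass function
# as n grows, in line with the convergence of the p.g.f.s in Example (2).
from math import comb, exp, factorial

lam = 2.0

def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

for n in (10, 100, 1000):
    gap = max(abs(binomial_pmf(k, n, lam / n) - poisson_pmf(k, lam)) for k in range(11))
    print(n, gap)       # the largest discrepancy over k = 0,...,10 shrinks as n grows
```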
It is often convenient to work with distributions and moment generating functions. In this case the following result (for which we give no proof) is useful.

(3) Theorem: Continuity Let $\{F_n(x); n \ge 1\}$ be a sequence of distribution functions with corresponding moment generating functions $\{M_n(t); n \ge 1\}$. If $F(x)$ is a distribution having corresponding moment generating function $M(t)$, then, as $n \to \infty$, $M_n(t) \to M(t)$ for all $t$, if and only if $F_n(x) \to F(x)$ whenever $F(x)$ is continuous.

Additional conditions are required to link the convergence of a sequence of mass functions or distributions and the convergence of their moments. The following theorem (for which again we offer no proof) is for a sequence of distributions.

(4) Theorem Suppose that, for each $n \ge 1$, the distribution $F_n(x)$ has moments $\{\mu_j(n); j \ge 1\}$, such that $|\mu_j(n)| < a_j < \infty$.
(i) Let $F_n(x) \to F(x)$, as $n \to \infty$, wherever $F(x)$ is continuous. Then as $n \to \infty$, for each $j$, $\mu_j(n) \to \mu_j < \infty$, and $(\mu_j; j \ge 1)$ are the moments of $F(x)$.
(ii) Conversely, for each $j \ge 1$, as $n \to \infty$, suppose that $\mu_j(n) \to \mu_j < \infty$, where $\{\mu_j; j \ge 1\}$ are the moments of a unique distribution $F(x)$. Then, as $n \to \infty$, $F_n(x) \to F(x)$ wherever $F(x)$ is continuous.

There is a corresponding result for sequences of mass functions. These theorems find applications (for example) in the theory of random graphs, and other combinatorial problems where moments are more tractable than distributions.

(5) Example Let $X_n$ have the binomial distribution with parameters $n$ and $\lambda/n$. Then, by Example 6.2.11,
$\mu_{(k)} = \frac{n!}{(n-k)!}\left(\frac{\lambda}{n}\right)^k \to \lambda^k$ for $1 \le k \le n$ (and $\mu_{(k)} = 0$ for $k > n$), as $n \to \infty$.
But, by Example 6.2.12, these are the factorial moments of the Poisson distribution. Hence, as $n \to \infty$, $\mathrm{P}(X_n = k) \to e^{-\lambda}\lambda^k/k!$, which we proved directly in Example 6.6.2, and earlier in Example 4.10.
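As a numerical companion to Example (5) — again an added illustration, with $\lambda = 2$ assumed — the factorial moments computed directly from the binomial mass function approach $\lambda^k$:

```python
# kth factorial moments E[X(X-1)...(X-k+1)] of X ~ binomial(n, lam/n), computed
# from the mass function; they approach lam**k, the Poisson factorial moments.
from math import comb

lam = 2.0

def factorial_moment(k, n, p):
    total = 0.0
    for x in range(k, n + 1):
        falling = 1.0
        for j in range(k):
            falling *= x - j                   # x(x-1)...(x-k+1)
        total += falling * comb(n, x) * p**x * (1 - p)**(n - x)
    return total

for n in (10, 100, 1000):
    print(n, [round(factorial_moment(k, n, lam / n), 4) for k in (1, 2, 3)])
# each row approaches [2.0, 4.0, 8.0], i.e. [lam, lam**2, lam**3]
```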
(6) Example: Matching Again Recall that we are assigning $n$ distinct letters randomly to $n$ matching envelopes, and $X$ is the number of matched pairs (of letter and envelope) that result. Consider the $k$th factorial moment of $X$,
(7) $\mu_{(k)} = \mathrm{E}(X(X-1)\cdots(X-k+1))$, $1 \le k \le n$.
Let $I_j$ be the indicator of the event that the $j$th envelope contains the matching letter. Then the sum $S = \sum_{j_1 < j_2 < \cdots < j_k} I_{j_1} \cdots I_{j_k}$ is just the number of ways of choosing a set of size $k$ from the set of matching pairs. Another way of writing this is as $\binom{X}{k}$. Hence,
(8) $\frac{\mu_{(k)}}{k!} = \frac{\mathrm{E}(X(X-1)\cdots(X-k+1))}{k!} = \mathrm{E}\binom{X}{k} = \mathrm{E}(S) = \binom{n}{k}\mathrm{E}(I_{j_1}\cdots I_{j_k}) = \binom{n}{k}\mathrm{P}(\text{a given set of } k \text{ all match}) = \binom{n}{k}\frac{(n-k)!}{n!}$.
Hence, $\mu_{(k)} = 1$ for $k \le n$, and $\mu_{(k)} = 0$ for $k > n$, so $\mu_{(k)} \to 1$ for all $k$ as $n \to \infty$. But these are the factorial moments of the Poisson distribution with parameter 1, and so as $n \to \infty$
(9) $\mathrm{P}(X = k) \to \frac{1}{e k!}$.

We conclude with an example that leads into the material of Chapter 7.

(10) Example 6.3.6 Revisited Recall that the p.g.f. of $T_n$ (where $T_n$ is the number of uniform random variables required to give a sum greater than $n$) is
$G(s) = 1 + (s-1)\left(1 + \frac{s}{n}\right)^n$.
What happens as $n \to \infty$? How do you interpret this?

Solution From (6.3.12), as $n \to \infty$,
$\mathrm{E}(s^{T_n}) \to 1 + (s-1)e^s$
or, equivalently,
$\mathrm{E}(e^{tT_n}) \to 1 + (e^t - 1)e^{e^t}$.
It follows that
$\mathrm{P}(T_n = k) \to \frac{1}{(k-1)!} - \frac{1}{k!}$.
The limiting factorial moments have a simple form, for
$\mathrm{E}((1+t)^{T_n}) \to 1 + te^{1+t} = 1 + \sum_{k=1}^{\infty} \frac{e}{(k-1)!} t^k$.
Hence, in the limit $\mu_{(k)} = ek$, $k \ge 1$. To interpret this, we return to the original definition
$T_n = \min\left\{k : \sum_{i=1}^{k} X_i > n\right\} = \min\left\{k : \sum_{i=1}^{k} \frac{X_i}{n} > 1\right\}$,
where each $X_i/n$ is uniformly distributed on $\{1/n, 2/n, \ldots, 1\}$. However, the limit of this sequence of uniform mass functions as $n \to \infty$ is not the mass function of a discrete random variable. Intuitively, you may feel that it is approaching the distribution of a variable that is uniformly distributed over the interval $[0, 1]$. This vague remark can in fact be given a meaning if we introduce new objects – namely, continuous random variables. This is the subject of the next two chapters; after much technical development, it can be shown that the limit of $T_n$ above is indeed the number of independent
random variables, each uniform on [0, 1], required to produce a sum greater than 1. 6.7 Regeneration Many interesting and important sequences of random variables arise as some process evolves in time. Often, a complete analysis of the process may be too difficult, and we seek simplifying ideas. One such concept, which recurs throughout probability, is the idea of regeneration. Here is an illustration: (1) Example: Maze You are trying to traverse an unknown labyrinth. You set off at a constant speed from the clock by the portal, and each time a decision is required you choose at random from the alternatives. It is dark, and you have no pencil and paper, so a description of the process (i.e., your route) is impossible. However, each time you arrive back at the portal, you can look at the clock and record Tr, the time at which you return to the clock for the r th time; T0 = 0 say. Now it is clear from the setup that, when you set off for a second time (at time T1), your chance of following any given route around the maze is the same as when you set off for the first time. Thus, the time until your second return, which is T2 − T1, has the same distribution as T1 and is independent of T1. The same is true of every subsequent interval between successive visits to the portal. These times (Tn; n ≥ 0) are regenerative in the sense that the distribution of your paths starting from Tn is the same as the distribution starting from Tm, for all m and n. Of course, if you leave pebbles at junctions, or take a ball of string, or make a map, then this is no longer true. Here is another archetypal illustration. (2) Example: Renewal A machine started at T0 = 0 uses a bit that wears out. As soon as it wears out, at time T1, it is replaced by a similar bit, which in turn is replaced at T2. Assuming that the machine performs much the same tasks as time passes, it seems reasonable to assume that the collection (X n; n ≥ 1), where X n = Tn − Tn−1, are independent and identically distributed. The replacement (or renewal) times are regenerative. In fact, we have already used this idea of restarting from scratch; see, for example,
Examples 4.4.9, 4.19, and 5.4.13. Here is another elementary illustration. Three players A, B, and C take turns rolling a fair die in the order Example ABC AB... until one of them rolls a 5 or a 6. Let X 0 be the duration of the game (i.e., the number of rolls). Let A0 be the event that A wins, and let Ar be the event that A wins after the r th roll. Let Wi, i ≥ 1 be the event that the game is won on the ith roll. Of course, 6.7 Regeneration 255 ∩ A3}, where P(W1) = 1 3 Now, as usual, we denote the indicator of any event E by I (E), and so A0 = W1 ∪ {(s X 0 I (W1)) = 1 3 s. Next we observe that if the first three rolls fail to yield 5 or 6, then the process regenerates (in the sense discussed above), so X 0 = 3 + X 3, where X 3 has the same distribution as X 0. Hence, E(s X 0 I (A3)) = E(s3+X 3 I (A3)) = s3E(s X 0 I (A0)). Therefore, we can write E(s X 0 I (A0)) = E = 1 3 Hence, 2 s X 0 s + 8 27 I (W1 ∩ A3 3 s3E(s X 0 I (A0)). E(s X 0 I (A0)) = Likewise, in an obvious notation, E(s X 0 I (B0)) = E(s X 0 I (C0)) = and Hence, 1 s 3 1 − 8 27. s3 2 s2 9 1 − 8 27, s3 4 s3 27 1 − 8 27. s3 E(s X 0) = E(s X 0(I (A0) + I (B0) + I (C0))) = 4s3 + 6s2 + 9s 27 − 8s3. Now we consider a more general case. Let H be some phenomenon (or happening) that may occur or not at any time n = 1, 2, 3,... Let Hn be the event that H occurs at time n, and define X n to be the time
interval between the (n − 1)th and nth occurrences of H. Thus, X 1 = min {n > 0: Hn occurs} X 1 + X 2 = min {n > X 1: Hn occurs} 256 6 Generating Functions and Their Applications Figure 6.2 A delayed renewal process. Here, X 1 = T1 = 3; X 2 = 2, X 3 = 5, X 4 = 1,... and so on. We suppose that (X n; n ≥ 2) are independent and identically distributed random variables with mass function ( f (k); k ≥ 1) and p.g.f. G(s). The first interval X 1 is independent of (X n; n ≥ 2), but its mass function may or may not be the same as that of X 2. This gives rise to two cases: Case (O) The ordinary case. The mass function of X 1 is ( f (k); k ≥ 1), the same as X 2. Case (D) The delayed case. The mass functions of X 1 E(s X 1) = D(s). is (d(k); k ≥ 1), and These two cases admit a conventional interpretation: in the ordinary case, we suppose that H0 occurred, so X 1 has the same mass function as the other intervals; in the delayed case, H0 did not occur, so X 1 may have a different mass function. The mathematical structure described above is known as a recurrent event process, or alternatively as a discrete renewal process. The important point about such a process is that each time H occurs, the process regenerates itself, in the sense discussed above. Figure 6.2 displays a realization of a renewal process. Now Examples 1 and 2 make it clear that there are two essentially different types of renewal process. In Example 1, there is always a chance that you do traverse the maze (or encounter the Minotaur), and so do not return to the entrance. That is, P(X 2 < ∞) < 1. Such a process is called transient. However, all machine bits wear out eventually, so in (2) we have P(X 2 < ∞) = 1. Such a process is called persistent (or recurrent). Now define the probabilities un = P(Hn), n ≥ 1. A natural question is to ask whether this distinction between persistence and transience can also be observed in the properties of un
. (The answer is yes, as we will see.) It is customary to make a further distinction between two different types of persistent renewal process. Definition process is said to be nonnull. If E(X 2) = ∞, then the process is said to be null; if E(X 2) < ∞, then the Note that E(X 2) is sometimes known as the mean recurrence time of the process. Ordinary renewal is just a special case of delayed renewal, of course, but it is convenient to keep them separate. We therefore define u0 = 1 and un = P(Hn), in the ordinary case; and v0 = 0, and vn = P(Hn), in the delayed case. These have respective generating 257 ∞ 6.7 Regeneration ∞ 1 functions, U (s) = not probability generating functions in the sense in which we use that term.] 0 unsn and V (s) = vnsn. [Remember that U (s) and V (s) are Now we have the following: (3) Theorem (i) (ii) U (s) = 1 1 − G(s) V (s) = D(s) 1 − G(s). Proof By conditional probability, in Case (O), un = n k=1 P(Hn|X 1 = k)P(X 1 = k). However, given Hk, the probability of any later occurrence of H is as if the process started at k. That is to say (4) P(Hn|X 1 = k) = P(Hn|Hk) = P(Hn−k) = un−k, n ≥ k. Hence, un = n k=1 un−k f (k). Because the right-hand sum is a convolution, its generating function is the product of the two generating functions U (s) and G(s), whence (5) U (s) − 1 = U (s)G(s). Likewise, in Case (D), vn = = n k=1 n k=1 P(Hn|X 1 = k)P(X 1 = k) = n k = 1 P(Hn|X 1 = k) d(k) un−kd(k) by (4). Hence, (6) V (s) = U (
s)D(s) = D(s) 1 − G(s) by (5). Thus, given G(s) [and D(s) in the delayed case] we can in principle find P(Hn), the probability that H occurs at time n, by expanding U (s) in powers of s. Conversely, given V (s) [and U (s) in the delayed case] we can find G(s) and D(s), and also decide whether the process is transient or persistent. (7) Corollary If U (1) < ∞, then the process is transient. Otherwise, it is persistent. Proof This follows immediately from Theorem 3(i). 258 6 Generating Functions and Their Applications (8) Example: Coincidences Suppose that a number c of independent simple symmetric random walks are started simultaneously from the origin. Let H be the “happening” that they are all at the origin, so H2n is the event that all the c walks are at 0 on the 2nth step. Show that H is persistent when c = 2, but H is transient for c ≥ 3. Solution For c = 2, we recall that 2 P(H2n) = (u2n) 2n 1 4n 2n... 2 1 − 1 2n − 1 1 − 1 2n 1 − 1 4 on successive cancellation. = 1 4n Hence, H is persistent as n P(H2n) diverges. For c ≥ 3, we have similarly that P(H2n) = < 1 4n 2n · · · 2n − 1 2n. 2n 2n + 1 1 − 1 4 c/2 = 2 · · · 1 2n + 1 1 − 1 2n c/2 c/2 2. Hence, H is transient as n(1/(2n + 1))c/2 < ∞. (9) Example: Stationary Renewal Let X > 0 have p.g.f. G(s). Show that if E(X ) < ∞, then H (s) = 1 E(X ) 1 − G(s) 1 − s is the p.g.f. of a nonnegative random variable. Now consider the delayed case of a recurrent event process in which E(s X 2) = G(s) and E(s
X 1) = H (s). Show that for all n (10) P(Hn) = 1 G(1). Solution cients. Furthermore, by L’Hˆopital’s rule, From (6.1.6), we have that H (s) is a power series with nonnegative coeffi- H (1) = lim s↑1 −G(s) −E(X ) = 1 Hence, H (s) is a p.g.f. Finally, if D(s) = H (s) in (6), then 6.8 Random Walks V (s) = 1 E(X 2)(1 − s), and the result follows. 259 6.8 Random Walks Recall that if (Xi ; i ≥ 1) are independent and identically distributed, then Sn = So + n 1 Xi is a random walk. Because generating functions have been so useful in handling sums of random variables, we may expect them to be exceptionally useful in analysing random walks. If X has p.g.f. G(z), then trivially we have, when S0 = 0. It follows that we can define the function H by Gn(z) = E(z Sn ) = (G(z))n. (1) H (z, w) = ∞ n = 0 wnGn(z) = (1 − wG(z))−1. This bivariate generating function tells us everything about Sn in principle, as P(Sn = r ) is the coefficient of zr wn in H (z, w). However, the analytical effort required to work at this level of generality is beyond our scope. We proceed by considering simple examples. (2) Example: Simple Symmetric Random Walk 1 Xi ; n ≥ 0) be a simple symmetric random walk, with S0 = 0. Let Hn be the event that Sn = 0. Because steps of the walk are independent and identically distributed, it follows that visits to the origin form an ordinary renewal process. Here, un = P(Sn = 0). Let (Sn = n Define the first passage times, T j = min {n > 0 : Sn = j|S0 = 0}, ∞ and the generating functions, U (s) = 0 unsn and G j (
s) = E(s T j ). Find U (s) and G0(s), and show that the simple symmetric random walk is persistent null. Solution We give two methods of finding U (s) and G j (s). For the first, define T ∗ j = min {n: Sn = j − 1|S0 = −1} and let ˆT1 be a random variable having the same distribution as T1, but independent of T1. Because the steps of the walk are independent, and symmetrically and identically distributed, it follows that (3) (4) (5) (6) and E(s T1) = E(s T−1), T2 = T1 + ˆT1, E(s T ∗ 2 ) = E(s T2) = E(s T1+ ˆT1) = (G1(s))2, by independence. 260 6 Generating Functions and Their Applications Hence, by conditional expectation, (7) G1(s) = E(E(s T1|X 1)) = (s1+T ∗ = 1 2 E(s T1|X 1 = 1) + 1 2 E(s T1|X 1 = −1) s + 1 2 s(G1(s))2 by (6). One root of (7) is a probability generating function, so this root is G1(s), namely, G1(s) = (1 − (1 − s2) 1 2 )/s. Now, using conditional expectation again, (8) G0(s) = E(E(s T0|X 1)) = 1 2 = 1 2 E(s T0|X 1 = 1) + 1 2 sE(s T11 − s2) E(s T0|X 1 = −1) sE(s T−1) = sG1(s) by (3) (9) (10) (11) Hence, U (s) = (1 − s2)−1/2 by Theorem 6.7.3. Alternatively, we could observe that S2n = 0 if and only if the walk has taken n steps in each direction. They may be taken in any order so u2n = 2n n 2n. 1
2 Now recall that by the negative binomial theorem ∞ 0 2n 1 2 2n n x n = (1 − x) −1 2, and (9) and (8) follow. Setting s = 1 shows that G0(1) = 1 (and U (1) = ∞) so H is persistent. However, d ds G0(s) = s (1 − s2) 1 2, and setting s = 1 shows that H is null; the expected number of steps to return to the origin is infinite as we know already, recall Example 5.6.27. Now that we have the generating functions U (s) and G0(s), we can provide slicker derivations of earlier results. For example, G0(s) = 1 − (1 − s2) 1 2 = 1 − (1 − s2)U (s). 6.8 Random Walks 261 Hence, equating coefficients of s2k gives (5.18.1) (12) Also, f2k = u2k−2 − u2k. s d ds G0(s) = s2(1 − s2)− 1 2 = s2U (s) and so equating coefficients again gives (5.18.6) (13) 2k f2k = u2k−2. See Problem 41 for another simple application of this. Here is a trickier application. (14) Example: Truncated Walk Let (Sn; n ≥ 1) be a simple symmetric random walk with S0 = 0, and let T = min {n > 0 : Sn = 0}. Let T ∧ 2m = min {T, 2m} and show that (15) E(T ∧ 2m) = 4mu2m = 2E(|S2m|). We establish (15) by showing that all three terms have the same generating Solution function. Equality then follows by the uniqueness theorem. First, ∞ ∞ 4mu2ms2m = 2s 2ms2m−1u2m = 2s U (s) = d ds 2s2 (1 − s2)3/2. 0 1 Second, recalling (13) and (5.18.2), Hence, E(T ∧ 2m) = m k=1
2k f2k + 2mP(T > 2m) = s2mE(T ∧ 2m) = m−1 s2m m k=0 u2k + s + sU (s) = s2U (s) 1 − s2 2s2 (1 − s2) =. 3 2 m k=1 m u2k−2 + 2mu2m. 2ms2m−1u2m using (6.1.10) Finally, using the hitting time theorem (5.6.17), m s2mE(|S2m|) = 2 s2m 2kP(S2m = 2k) m m = 2 m d ds = 2s k=1 m k=1 s2m s2m 2m f2k(2m) m f2k(2m) m k=1 (17) (18) (19) 262 6 Generating Functions and Their Applications (G2(s) + G4(s) + G6(s) +...) (G1(s))2 (1 − (G1(s))2) by Example (2) = 2s = 2s d ds d ds d ds = s ((1 − s2)− 1 2 − 1) = s2 (1 − s2) 3 2. As a final example of the use of generating functions in random walks, we establish yet another arc-sine law. (16) Example: Arc-Sine Law for Leads Let (Sn; n ≥ 0) be a simple symmetric random walk with S0 = 0. Of the first 2n steps, let L 2n denote the number that do not enter the negative half-line. Show that P(L 2n = 2k) = 4−n 2k k. 2n − 2k n − k Solution Define the generating functions G2n(s) = E(s L 2n ) and H (s, t) = ∞ n=0 t 2nG2n(s). Let T be the number of steps until the walk first revisits zero, and recall that F(s) = E(s T ) = ∞ 1 s2r f (2r ) = 1 − (1 − s2)
1 2. Now using conditional expectation E(s L 2n ) = (E(E(s L 2n |T ) n = E(s L 2n |T = 2r ) f (2r ) + E(s L 2n |T > 2n) r = 1 Now, depending on the first step, ∞ r = n+1 f (2r ). P(L T = T ) = 1 2 and visits to zero constitute regeneration points for the process L 2n. Hence, we may rewrite (18) as = P(L T = 0), G2n(s) = n r =1 G2n−2r 1 2 (s2r + 1) f (2r ) + ∞ r =n+1 f (2r ) 1 2 (s2n + 1). Multiplying by t 2n and summing over n yields H (s, t) = 1 2 H (s, t)(F(st) + F(t)) + 1 2. 1 − F(st) 1 − t 2s2 + 1 2. 1 − F(t) 1 − t 2, by the convolution theorem. Now substituting for F(.) from (19) gives H (s, t) = ((1 − s2t 2)(1 − t 2))− 1 2. n 6.9 Review and Checklist for Chapter 6 263 The coefficient of t 2ns2k in this is (17). Now use Exercise 5.18.8 to produce the arc-sine distribution. 6.9 Review and Checklist for Chapter 6 All random variables have a probability distribution, and many of them also have moments. In this chapter, we introduced two miraculous devices to help us with many of the chores involved in handling and using probabilities and moments. Probability generating function of X: p.g.f. G X (s) = P(X = n)sn = Es X. Moment generating function of X: m.g.f. MX (t) = P(X = n)ent = Eet X. n You can think of these functions as organizers that store a collection of objects that they will regurgitate on demand. Remarkably, they will often produce other information if suitably stimulated; thus, the p.g.f. will produce the moments (if any exist), and the m.
g.f. will produce the probability distribution (in most cases). We used them to study sums of independent random variables, branching processes, renewal theory, random walks, and limits. They have these properties: Connections: Tails: Uniqueness: Moments: MX (t) = G X (et ) and G X (s) = MX (log s). T (s) = ∞ n=0 snP(X > n) = 1 − G X (s) 1 − s, when X ≥ 0. f X (k) = G(k) X (0)/k!. µ(k) = G(k) µ(k) = kT (k−1)(1). X (1) and EX k = M (k) X (0) [Where µ(k) is the kth factorial moment.] Sums and random sums: For independent (X n; n ≥ 1), Es X 1+···X n = n r =1 G Xr (s); E exp(t(X 1 + · · · X n)) = n r =1 MXr (t). For independent (X n; n ≥ 1) and independent nonnegative integer valued N, Es X 1+···X N = G N (G X (s)); E exp (t(X 1 + · · · + X N )) = G N (MX (t)). 264 6 Generating Functions and Their Applications Joint generating functions: GX(s) = E n r =1 ) s X r r MX(t) = E exp * n r =1 tr Xr Independence: X and Y are independent if and only if E(x X yY ) = G X (x)GY (y), for all x, y, or E[exp(s X + tY )] = MX (s)MY (t), for all s, t. Branching: If the family size p.g.f. is G(s), then Gm+n(s) = Es Zm+n = Gm(Gn(s)) = Gn(Gm(s)). The probability η of ultimate extinction is the smallest positive root of G(x) = x. Special generating functions: Binomial distribution (q + ps)n Uniform distribution on {0, 1,..., n} Geometric distribution ps 1−qs Poisson distribution eλ(s−1)
Negative binomial distribution ( ps Logarithmic distribution log(1−sp) log(1− p) 1−qs )n 1−sn+1 (n+1)(1−s) Checklist of Terms for Chapter 6 6.1 probability generating function tail generating function 6.2 uniqueness theorem factorial moments 6.3 sums and random sums branching process extinction probability 6.4 moment generating function cumulant generating function 6.5 joint probability generating function factorization and independence 6.6 continuity theorem 6.7 renewal process persistent transient null, nonnull 6.8 simple random walk arc-sine law for leads 6.9 Review and Checklist for Chapter 6 265 Finally, we note that we have occasionally used elementary ideas from calculus in this chapter, and we need to do so more frequently in Chapter 7. We therefore include a brief synopsis of the basic notions. Appendix: Calculus Fundamental to calculus is the idea of taking limits of functions. This in turn rests on the idea of convergence. Convergence real number a such that |xn − a| is always ultimately as small as we please; formally, Let (xn; n ≥ 1) be a sequence of real numbers. Suppose that there is a where is arbitrarily small and n0 is finite. |xn − a| < for all n > n0, In this case, the sequence (xn) is said to converge to the limit a. We write either as n → ∞ or xn = a. xn → a lim n→∞ Now let f (x) be any function defined in some interval (α, β), except possibly at the point x = a. Let (xn) be a sequence converging to a, such that xn = a for any n. Then ( f (xn); n ≥ 1) is also a sequence; it may converge to a limit l. If the sequence ( f (xn)) converges to the same limit l for every Limits of Functions sequence (xn) converging to a, xn = a, then we say that the limit of f (x) at a is l. We write either f (x) → l as x → a, or f (x) = l. lim x→a Suppose now that f (x) is defined in the interval (α, β), and let limx→a f (x) be the limit of
f(x) at a. This may or may not be equal to f(a). Accordingly, we define:

Continuity   The function f(x) is continuous in (α, β) if, for all a ∈ (α, β),

lim_{x→a} f(x) = f(a).

Now, given a continuous function f(x), we are often interested in two principal questions about f(x).
(i) What is the slope (or gradient) of f(x) at the point x = a?
(ii) What is the area under f(x) lying between a and b?

Question (i) is answered by looking at chords of f(x). For any two points a and x, the slope of the chord from f(a) to f(x) is

s(x) = (f(x) − f(a))/(x − a).

If s(x) has a limit as x → a, then this is what we regard as the slope of f(x) at a. We call it the derivative of f(x), and say that f(x) is differentiable at a.

Derivative   The derivative of f(x) at a is denoted by f'(a), where

f'(a) = lim_{x→a} (f(x) − f(a))/(x − a).

We also write this as f'(a) = (df/dx)|_{x=a}. In this notation, df/dx = df(x)/dx is the function of x that takes the value f'(a) when x = a. If we can differentiate the derivative f'(x), then we obtain the second derivative, denoted by f^(2)(x). Continuing in this way, the nth derivative of f(x) is f^(n)(x) = d^n f/dx^n.

For question (ii), let f(x) be a function defined on [a, b]. Then the area under the curve y = f(x) in [a, b] is denoted by ∫_a^b f(x) dx, and is called the integral of f(x) from a to b. In general, areas below the x-axis are counted as negative; for a probability density, this case does not arise because density functions are never negative.

The integral is also defined as a limit, but any general statements would take us too far afield. For well-behaved positive functions, you can determine the integral as follows. Plot f(x) on squared graph paper with squares of side 1/n. Let S_n be the number of squares lying entirely between f(x) and the x-axis between a and b. Set I_n = S_n/n^2. Then

lim_{n→∞} I_n = ∫_a^b f(x) dx.

The function f(x) is said to be integrable. Of course, we almost never obtain integrals by performing such a limit. We almost always use a method that relies on the following, most important, connection between differentiation and integration.

Fundamental Theorem of Calculus   Let f(x) be a continuous function defined on [a, b], and suppose that f(x) is integrable. Define the function F_a(x) by

F_a(x) = ∫_a^x f(t) dt.

Then the derivative of F_a(x) is f(x); formally, F_a'(x) = f(x).
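The theorem can also be checked numerically. The following minimal Python sketch (not from the text) approximates F_a(x) = ∫_a^x f(t) dt by a midpoint Riemann sum for an arbitrarily chosen f, and verifies that the difference quotient of F_a at x is close to f(x); the particular f, the point x, and the step sizes are illustrative assumptions only.

```python
# Illustration of the fundamental theorem of calculus (not part of the text).
# Approximate F_a(x) = integral of f(t) dt from a to x by a midpoint Riemann
# sum, then check that the difference quotient of F_a recovers f(x).
# The choice f(t) = t**2 and the step sizes are arbitrary.

def f(t):
    return t ** 2            # any continuous function would do here

def F(a, x, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of f from a to x."""
    h = (x - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, x, dx = 0.0, 1.5, 1e-4
slope = (F(a, x + dx) - F(a, x)) / dx    # difference quotient of F_a at x
print(slope, f(x))                       # both values are close to 2.25
```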
a)g(a). x 1 (1/y)dy. + (iii) log x = Functions of More Than One Variable We note briefly that the above ideas can be extended quite routinely to functions of more than one variable. For example, let f (x, y) be a function of x and y. Then (i) f (x, y) is continuous in x at (a, y) if limx→a f (x, y) = f (a, y). (ii) f (x, y) is continuous at (a, b) if lim(x,y)→(a,b) f (x, y) = f (a, b). (iii) f (x, y) is differentiable in x at (a, y) if f (x, y) − f (a, y) x − a lim x→a = f1(a, y) exists. We denote this limit by ∂ f /∂ x. That is to say, ∂ f /∂ x is the function of x and y that takes the value f1(a, y) when x = a. Other derivatives, such as ∂ f /∂ y and ∂ 2 f /∂ x∂ y, are defined in exactly the same way. 268 6 Generating Functions and Their Applications Finally, we note a small extension of the fundamental theorem of calculus, which is used more often than you might expect: ∂ ∂ x g(x) f (u, y)du = dg(x) d x f (g(x), y)..10 Example: Gambler’s Ruin and First Passages n i=1 Xi be a simple random walk with a ≥ 0. Let Sn = a + (a) Suppose that 0 ≤ Sn ≤ K, and that the walk stops as soon as either Sn = 0 or Sn = K. (In effect, this is the gambler’s ruin problem.) Define Ta0 = min {n: Sn = 0}, and find E(s Ta0) = Fa(s) (say). (b) Now suppose that K = ∞, and define Ta0 = min{n : Sn = 0}. (In effect, this is the gambler’s ruin with an infi
nitely rich opponent.) Find E(s T10) = F1,0(s) (say). Solution This makes no difference to the fact that, by conditional expectation, for 0 < a < K, (a) Of course, Ta0 is defective in general because the walk may stop at K. Fa(s) = E(E(s Ta0|X 1)) = pE(s Ta0|X 1 = 1) + qE(s Ta0|X 1 = −1) = psE(s Ta+1,0) + qsE(s Ta−1,0) = ps Fa+1(s) + qs Fa−1(s). Because the walk stops at 0 or K, FK (s) = 0 and F0(s) = 1. The difference equation (1) has auxiliary equation psx 2 − x + qs = 0, with roots λ1(s) = (1 + (1 − 4 pqs2) λ2(s) = (1 − (1 − 4 pqs2) 1 2 )/(2 ps) 2 )/(2 ps). 1 Hence, in the by now familiar way, the solution of (1) that satisfies (2) is Fa(s) = λK 1 λa 2 λK 1 λK 2 − λa 1 − λK 2. (b) When the walk is unrestricted it is still the case, by conditional expectation, that F1,0(s) = E(E(s T10|X 1)) = psE(s T20) + qs. However, using the same argument as we did above for the symmetric random walk yields E(s T20) = (E(s T10))2. Substituting in (4) shows that F1,0(s) is either λ1(s) or λ2(s). However, λ1(s) is not the p.g.f. of a nonnegative random variable and so F1,0(s) = λ2(s) = (1 − (1 − 4 pqs2) 1 2 )/(2 ps). (1) (2) (3) (4) (5) Worked Examples and Exercises 269 (6) Exercise Let Ta K = min {n: Sn =
K }. Show that E(s Ta K ) = λa 1 λK 1 − λa 2 − λK 2. (7) (8) (9) Exercise What is the p.g.f. of the duration of the game in the gambler’s ruin problem? Exercise What is E(s Ta0 ) for the unrestricted random walk? Exercise ever visits 0 and E(Ta0|Ta0 < ∞). For the unrestricted random walk started at a > 0, find the probability that the walk Let Sn be a simple random walk with S0 = 0. Let T = min {n > 0: Sn = 0}. Show (10) Exercise that E(s T ) = 1 − (1 − 4 pqs2) 2. What is E(T |T < ∞)? 1 6.11 Example: “Fair” Pairs of Dice You have the opportunity to play a game of craps with either “Lucky” Luke or “Fortunate” Fred. Whose dice shall you play with? Luke’s two dice are perfectly regular cubes, but the faces bear unorthodox numbers: Luke explains that, when rolled, the sum of his two dice has the same mass function as the sum of two conventional fair dice; he uses these to ensure that no one can surreptitiously switch to unfair dice. Fred’s two dice are conventionally numbered, but are irregular cubes. Fred explains that these have been cleverly biased so that, when rolled, the sum has the same mass function as two fair dice; their irregular shape ensures that no one can secretly switch to unfair dice. Assuming you want to play at the usual odds, whose dice should you use? (Sadly, your own dice were confiscated by a casino last week.) Solution Let X and Y be the scores of two fair dice. The p.g.f. of their sum is E(s X +Y ) = E(s X )E(sY ) = (s + s2 + s3 + s4 + s5 + s6) 2 1 6 (1 + 2s + 3s2 + 4s3 + 5s4 + 6s5 + 5s6 + 4s7 + 3s8 + 2s9 + s10) = s2 36 = G(s) (say). 270 6 Generating Functions and Their Applications
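Before turning to Luke's dice, it may help to see this p.g.f. computed mechanically: multiplying two p.g.f.s is just convolving the two mass functions, which is easy to do with coefficient lists. The short Python sketch below is illustrative only and is not part of the original text; the helper name poly_mult is our own.

```python
# Compute the p.g.f. of the sum of two fair dice by polynomial multiplication
# (equivalently, by convolving the two mass functions).  Illustrative only.

def poly_mult(p, q):
    """Multiply polynomials given as coefficient lists (index = power of s)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

fair_die = [0] + [1 / 6] * 6          # p.g.f. coefficients of one fair die
sum_pgf = poly_mult(fair_die, fair_die)
print([round(36 * c) for c in sum_pgf])
# -> [0, 0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1], i.e. 36 G(s) as above
```

Replacing either coefficient list by that of a differently numbered or weighted die lets you compare its sum distribution with G(s) in exactly the same way.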
Now the sum of Luke’s dice L 1 + L 2 has p.g.f. E(s L 1+L 2) = 1 6 (s + 2s2 + 2s3 + s4) 1 6 (s + s3 + s4 + s5 + s6 + s8) = G(s) on multiplying out the brackets. So Luke’s claim is correct. However, G(s) can be factorized as 36G(s) = s2(1 + s)2(1 − s + s2)2(1 + s + s2)2, where 1 + s + s2, 1 − s + s2 are irreducible, having complex roots. Hence, there are only two possibilities for the generating functions of Fred’s dice: either (i) and or (ii) the dice are fair: E(s F1) = 1 2 s(1 + s)(1 − s + s2)2 E(s F2) = s 18 (1 + s)(1 + s + s2)2; E(s F1) = E(s F2) = 1 6 s(1 + s)(1 + s + s2)(1 − s + s2) = 1 6 (s + s2 + s3 + s4 + s5 + s6). However, (1 + s)(1 − s + s2)2 = 1 − s + s2 + s3 − s4 + s5 and the negative coefficients 2 s(1 + s)(1 − s + s2)2 is not a p.g.f. The only remaining possibility is that the ensure that 1 dice are fair, which palpably they are not. This shows that the sum of two biased dice cannot have the same mass function as the sum of two fair dice. Thus, Fred’s claim is incorrect; his dice are as crooked as yours probably were. You should play with Luke’s dice. (1) (2) You have two fair tetrahedral dice whose faces are numbered conventionally 1, 2, 3, Exercise 4. Show how to renumber the faces so that the distribution of the sum is unchanged. Exercise (a) Write down the generating function of the sum of two fair dodecahedra with faces numbered 1 Yet another regular Platonic solid is the dodecahedron with 12
pentagonal faces. to 12 inclusive. (b) Show that two such dodecahedral dice can be biased in such a way that their sum has the same distribution as the sum of the fair dice. Hint: Let f (x) = x + x 12 + (2 − + (7 − 4 √ √ 3)(x 2 + x 11) + (5 − 2 √ √ 3)(x 3 + x 10) 3)(x 4 + x 9) + (10 − 5 3)(x 5 + x 8) + (11 − 6 √ 3)(x 3 + x 7) and g(x) = (x + x 4 + x 7 + x 10)(1 + √ 3x + x 2). Consider f (x)g(x). (3) (4) (5) (1) (2) (3) (4) (5) (6) Worked Examples and Exercises 271 Show that it is not possible to weight two conventional dice in such a way that the Exercise sum of the numbers shown is equally likely to take any value between 2 and 12 inclusive. Exercise value between 2 and 12 inclusive? Exercise dice? Can the sum of three biased dice have the same mass function as the sum of three fair Is it possible to re-number two fair dice so that their sum is equally likely to take any Remark and RM Shortt in the American Mathematical Monthly, 1988. Some results of this example were recorded by SG Landry, LC Robertson 6.12 Example: Branching Process Let Zn be the size at time n of the ordinary branching process defined in Example 6.3.16. Thus the r th individual in the nth generation (that is, at time n) is replaced by a family of size X (r, n + 1), where the X (r, n + 1) are independent and identically distributed, with mean µ, variance σ 2, and cumulant generating function κ(t) = log (E[exp(t X (1, 1))]). E(Zn) = µn. Show that Show also that var (Zn) = var (Z1)(E(Z1))n−1 + (E(Z1))2var (Zn−1) and hence find an expression for var (Zn) in terms of µ and σ, when µ = 1. Solution First
, recall the basic identity of branching processes: namely, given Zn−1, Zn−1 Zn = X (r, n). r =1 Now let the cumulant generating function of Zn be κn(t). Then by conditional expectation, κn(t) = log (E(et Zn )) = log (E(E(et Zn |Zn−1))) ) * Zn−1 = log E E exp t X (r, n) |Zn−1 by (4) r =1 = log (E[(E(et X (1.1)))Zn−1]) = log (E([eκ(t)]Zn−1)) = κn−1(κ(t)). Now expanding κn−1(κ(t)) using (6.4.12) gives κn−1(κ(t)) = κ(t)E(Zn−1) + 1 (κ(t))2var (Zn−1) + · · · 2 σ 2E(Zn−1t 2 + 1 = µtE(Zn−1) + 1 2 2 µ2var (Zn−1)t 2 + O(t 3) 272 6 Generating Functions and Their Applications on expanding κ(t) using (6.4.12). Hence, equating coefficients of t and t 2 in (5) and (6) now yields (7) (8) and E(Zn) = µE(Zn−1) var (Zn) = σ 2µn−1 + µ2var (Zn−1). Iterating (7) gives (2), and equation (8) is just (3), as required. To solve the difference equation (8), we note first that Aµ2n is a solution of the reduced equation var (Zn) = µ2var (Zn−1). By inspection, a particular solution of (8), when µ = 1, is given by σ 2µn−1/(1 − µ). Imposing the initial condition var (Z1) = σ 2 now shows that when µ = 1, (8) has solution var (Zn) = σ 2µn−1 1 − µn 1 − µ. (9
) (10) Exercise (11) Exercise Find var (Zn) when µ = 1. Show that for n > m, E(Zn Zm) = µn−mE(Z 2 m). Deduce that when µ = 1, m n Find an expression for ρ(Zm, Zn) when µ = 1, and deduce that for µ > 1 as n, ρ(Zm, Zn) = 2. 1 (12) Exercise m → ∞, with n − m held fixed, ρ(Zm, Zn) → 1. (13) Exercise If r is such that r = E(r Z1 ), show that E(r Zn+1 |Zn) = r Zn. What is r? 6.13 Example: Geometric Branching Let (Zn; n ≥ 0) be an ordinary branching process with Z0 = 1, and suppose that E(s Z1) = 1 − p 1 − ps. (a) Show that, for p = 1 2, E(s Zn ) = ρn − 1 − ρs(ρn−1 − 1) ρn+1 − 1 − ρs(ρn − 1), where ρ = p/(1 − p). (b) Now let (Z ∗ n ; n ≥ 0) be an ordinary branching process with Z0 = 1, family size distribution given by (1), and such that at time n, for all n ≥ 1, one new particle is added to the population independently of Zn. Show that for p < 1 2 lim n→∞ E(s Zn |Zn > 0) = lim n→∞ E(s Z ∗ n ) = s(1 − 2 p) 1 − p(1 + s). (1) (2) (3) Worked Examples and Exercises 273 Solution that (2) holds for n, it follows from Example 6.3.16 that (a) As usual, let E(s Zn ) = Gn(s). We establish (2) by induction. Assuming Gn+1(s) = Gn 1 − p 1 − ps = (ρn − 1)(1 − ps) − p(ρn−1 − 1) (ρn+1 − 1)(1 − ps) − p(ρn − 1) �
�n+1 − 1 − ρs(ρn − 1) ρn+2 − 1 − ρs(ρn+1 − 1) =. (4) by the induction hypothesis Because (2) is true for n = 1, by (1), the result does follow by induction. (b) Let ( ˆZn; n ≥ 1) be a collection of independent random variables such that ˆZn has the same distribution as Zn. Now Z ∗ n is the sum of the descendants of the initial individual, and the descendants of the fresh individual added at n = 1, and those of the next added at n = 2, and so on. That is to say Z ∗ n has the same distribution as 1 + ˆZ1 + · · · + ˆZn. Hence, (5) E(s Z ∗ n ) = s n Gr (s) = s(ρ − 1) ρn+1 − 1 − ρs(ρn − 1) by successive cancellation, r =1 → s(ρ − 1) ρs − 1 = s(1 − 2 p) 1 − p(1 + s). as n → ∞ This is the generating function of a random variable with mass function f (k + 13), we require the conditional generating function the other half of For E(s Zn |Zn > 0). Because P(Zn = 0) = Gn(0), this is given by E(s Zn |Zn > 0) = Gn(s) − Gn(0) 1 − Gn(0). Substituting for Gn(.) from (2), we find E(s Zn |Zn > 0) = ρn − 1 − sρn + ρs ρn+1 − 1 − sρn+1 + ρs − ρn − 1 ρn+1 − 1 1 − ρn − 1 ρn+1 − 1 → s(1 − ρ) 1 − ρs as n → ∞, as required. Exercise 2−(k+1); k ≥ 0. Show that Let Zn be an ordinary branching process with family size mass function P(X = k) = Gn(s) = n − (n − 1)s n + 1 − ns ; n ≥ 0
. (6) (7) 274 6 Generating Functions and Their Applications (8) Exercise (7) Continued Show that in this case ( p = 1 2 ), we have E Zn n |Zn > 0 → 1, as n → ∞. (a) Let X be any nonnegative random variable such that E(X ) = 1. Show that (9) Exercise E(X |X > 0) ≤ E(X 2). (b) Deduce that if p > 1 2 (10) Exercise When p < q, find lim n→∞, E(Znρ−n|Zn > 0) < 2 p/( p − q). n t Z ∗ E(s Z ∗ n+m ). 6.14 Example: Waring’s Theorem: Occupancy Problems Let (Ai ; 1 ≤ i ≤ n) be a collection of events (not necessarily independent). Let X be the number of these events that occur, and set pm = P(X = m), qm = P(X ≥ m), and sm = P(Ai1 ∩... ∩ Aim ). Show that (a) (b) and (c) i1<...<im sm = E X m = n i=m i m i m pi ; si ; si. i − 1 m − 1 pm = qm = n (−)i−m i=m n (−)i−m i=m Solution (a) Recall that I (A) is the indicator of the event A. Because P(Ai1 ∩... ∩ Aim ) = E(I (Ai1)... I (Aim )) it follows that the sum sm is just the expected number of distinct sets of size m (of the Ai ) that occur. But, given X = i, exactly ( i m ) such distinct sets of size m occur. Hence, by conditional expectation, (1) sm = n m i m P(X = i) = E. X m (b) Now define generating functions Gs(z) = By (1), n m=0 zmsm and Gq (z) = n m=0 zmqm. (2) Gs(z − 1) = n m=0 (z − 1)msm = n m=0 (z − 1)mE X m
= E X (z − 1)m m=0 X m = E((1 + z − 1)X ) = E(z X ) = Now equating coefficients of zm proves (b). n m=0 pm zm. Worked Examples and Exercises 275 (c) By Theorem 3.6.7 or Theorem 6.1.5, we have Hence, by (2), (3) Gq (z) = 1 − zG X (z) 1 − z. Gq (z) − 1 z = Gs(z − 1) − 1 z − 1. Equating coefficients of zm yields (c). (4) Exercise Show that sm = n i=m qi. i − 1 m − 1 P(Ai1 i1<...<im Let tm = Exercise of (ti ; 1 ≤ i ≤ n). Exercise If r balls are placed at random in n cells, so that each ball is independently equally likely to arrive in any cell, find the probability that exactly c cells are each occupied by exactly i balls. ∪... ∪ Aim ). Find expressions for sm, pm, and qm in terms 6.15 Example: Bernoulli Patterns and Runs A coin is tossed repeatedly, heads appearing with probability p(= 1 − q) on each toss. (a) Let X be the number of tosses required until the first occasion when successive tosses show HTH. Show that E(s X ) = p2qs3 1 − s + pqs2 − pq 2s3. (b) Let Y be the number of tosses required until the first occasion when three successive tosses show HTH or THT. Show that E(sY ) = pqs3(1 − 2 pqs + pqs2) 1 − s + pqs2 − p2q 2s4. Solution does then appear as a result of the next three. The probability of this is P(X > n) p2q. (a) Consider the event that HTH does not appear in the first n tosses, and it This event is the union of the following two disjoint events: (i) The last two tosses of the n were HT, in which case X = n + 1. (ii) The last
two tosses of the n were not H T, in which case X = n + 3. The probabilities of these two events are P(X = n + 1) pq and P(X = n + 3), respectively. Hence, P(X > n) p2q = P(X = n + 1) pq + P(X = n + 3); n ≥ 0. Multiplying (3) by sn+3, summing over n, and recalling (6.1.7), yields 1 − E(s X ) 1 − s The required result (1) follows. p2qs3 = pqs2E(s X ) + E(s X ). (5) (6) (1) (2) (3) (4) 276 6 Generating Functions and Their Applications (b) First, consider the event that neither HTH nor THT have appeared in the first n tosses and HTH then appears. This event is the union of three disjoint events: (i) The last two tosses of the n were HT, so Y = n + 1, with the appearance of HTH; (ii) The last two tosses of the n were TT, so Y = n + 2, with the appearance of THT; (iii) Otherwise, Y = n + 3, with the appearance of HTH. Let f 1(n) denote the probability that Y = n with the occurrence of HTH, and f 2(n) the probability that Y = n with the occurrence of THT. Then from the above, we have P(Y > n) p2q = f 1(n + 1) pq + f 2(n + 2) p + f 1(n + 3). Hence, multiplying by sn+3 and summing over n, 1 − E(sY ) 1 − s where Gi (s) = n sn f i (n); i = 1, 2. p2qs3 = pqs2G1(s) + spG2(s) + G1(s), Second, consider the event that neither HTH or THT have appeared in the first n tosses and THT then occurs. Arguing as in the first case yields P(Y > n) pq 2 = f 2(n + 1) pq + f 1(n + 2)q
+ f 1(n + 3). Hence, Now we also have 1 − E(sY ) 1 − s pq 2s3 = pqs2G2(s) + qsG1(s) + G2(s). so solving (6), (8), and (9) for E(sY ) yields (2) as required. E(sY ) = G1(s) + G2(s) (5) (6) (7) (8) (9) (10) Exercise When p = 1 2, find E(X ) for all possible triples of the form HHH, HHT, etc. Comment on your results. (11) Exercise (12) Exercise (13) Exercise Show that E(X ) = 1/ p + 1/(qp2). Show that E(Y ) = (1 + pq + p2q 2)/( pq(1 − pq)). Let Z be the number of tosses required for the first occurrence of HHH. Find E(s Z ) and show that E(Z ) = 1 p + 1 p2 + 1 p3. (14) Exercise Show that (15) Exercise Let W be the number of tosses required for the first appearance of either HHH or TTT. E(s W ) = s3( p3 + q 3 + qp( p2 + q 2)s + p2q 2s2) 1 − pqs2 − pqs3 − p2q 2s4. Let V be the number of tosses required for the first appearance of either r consecutive heads or ρ consecutive tails. Find E(s V ), and show that E(V ) = pr (1 − p) 1 − pr + q ρ(1 − q) 1 − q ρ −1. (16) Exercise Find the expected number of tosses required for the first appearance of HTHTH. Worked Examples and Exercises 277 6.16 Example: Waiting for Unusual Light Bulbs The light bulbs in the sequence illuminating your room have independent and identically distributed lifetimes, so the replacement times form an ordinary renewal process (as defined in Example 6.7.2). Suppose the lifetimes are (Xi ; i ≥ 1) with common mass function f (k). A
bulb is called “unusual” if its life is shorter than a or longer than b, where a ≤ b. Let T be the time at which a bulb is first identified as being an unusual bulb. Show that (for integers a and b), E(s T ) = E(Ias X 1) + sbE(Ib) a I c b ) 1 − E(s X 1 I c. where Ia and Ib are the indicators of the events {X 1 < a} and {X 1 > b} respectively. That is, and Ia = I {X 1 < a}, with I c a = 1 − Ia, Ib = I {X 1 > b}, with I c b = 1 − Ib. Solution Because Ia + Ib + I c a I c b = 1, we can write E(s T ) = E(Ias T ) + E(Ibs T ) + E(I c a I c b s T ). Now on the event Ia, X 1 = T because the first bulb failed before a and was identified as unusual at X 1. So E(Ias T ) = E(Ias X 1). a I c b, the process regenerates at the first replacement X 1 ∈ [a, b], and so On the event I c T = X 1 + T, where T is independent of X 1 and has the same distribution as T. So b s T ) = E(s T )E(I c Finally on the event Ib, the first light bulb is identified as unusual when it survives beyond b, so b s X 1). E(I c a I c a I c Substituting (3), (4), and (5) into (2), gives (1). E(ST Ib) = sbE(Ib). Find the expected time until a light bulb has a lifetime shorter than a. Evaluate this Exercise when f X (k) = qpk−1, k ≥ 1. Successive cars pass at instants Xi seconds apart (i ≥ 1). Exercise: Crossing the Road You require b seconds to cross the road. If the random variables (Xi ; i ≥ 1) are independent and identically distributed, find your expected waiting time until you cross
. Evaluate this when f X (k) = qpk−1; k ≥ 1. Exercise Let L be the time until a light bulb has lasted longer than r. Show that (1) (2) (3) (4) (5) (6) (7) (8) E(s L ) = sr P(X > r ) r 1 − skP(X = k). 1 278 6 Generating Functions and Their Applications (9) Exercise A biased coin is tossed repeatedly; on each toss, it shows a head with probability p(= 1 − q). Let W be the number of tosses until the first occasion when r consecutive tosses have shown heads. Show that E(s W ) = (1 − ps) pr sr 1 − s + qpr sr +1. (10) Exercise In n tosses of a biased coin, let L n be the length of the longest run of heads, and set πn,r = P(L n < r ). Show that 1 + ∞ n=1 snπn,r = 1 − pr sr 1 − s + q pr sr +1. 6.17 Example: Martingales for Branching Let Gn(s) be the probability generating function of the size Zn of the nth generation of a branching process (as defined in example 6.3.17), where Z0 = 1 and var Z1 > 0. Let Hn be the inverse function of the function Gn, and show that Mn = (Hn(s))Zn defines a martingale with respect to (Zn; n ≥ 0). Solution likewise, so are all the functions Gn(s). By definition, Because var Z1 > 0, the function G(s) is strictly increasing on [0, 1). Hence, Gn(Hn(s)) = s, and s = Gn+1(Hn+1(s)) = Gn(G(Hn+1(s))) by (6.3.18). Hence, by (1), because Hn(s) is unique, G(Hn+1(s)) = Hn(s). Finally, using (6.3.18) again, E([Hn+1(s)])Zn+1|Z0,..., Zn) = [G(H
n+1(s))]Zn = [Hn(s)]Zn by (2). Trivially, EMn = 1; it follows that Mn is a martingale. Show that η Zn is a martingale where η is the extinction probability defined in (6.3.20). Exercise If EZ1 = µ, show that Znµ−n is a martingale. Exercise Exercise Let Zn be the size of the nth generation of the branching process in which the nth generation is augmented by a random number In of immigrants who are indistinguishable from the other members of the population, and such that the In are independent and identically distributed, and independent of the process up to time n. If EIn = m, and the expected family size is not 1, show that Mn = µ−n Zn − m is a martingale. 1 − µn 1 − µ (1) (2) (3) (4) (5) Worked Examples and Exercises 279 6.18 Example: Wald’s Identity Let (X n; n ≥ 1) be independent and identically distributed with M(t) = Eet X 1. Define Sn = Xr, n r =1 and Yn = exp(t Sn)M(t)−n, n ≥ 1, with Y0 = 1. Suppose that T is a stopping time for Yn, with ET < ∞, and |Sn| ≤ K < ∞ for n < T. Show that, whenever 1 ≤ M(t) < ∞, Yn is a martingale and E[exp(t ST )M(t)−T ] = 1. Solution Now From the independence of the X n, it easily follows that Yn is a martingale. E(|Yn+1 − Yn||Y0,..., Yn) = YnE $ $ $ $ et X n+1 M(t) $ $ $ $ − 1 ≤ YnE(et X n+1 + M(t))/M(t) = 2Yn. Hence, for n < T, E(|Yn+1 − Yn||Y0,..., Yn) ≤ 2Yn ≤ 2e|t|K M(t)n ≤ 2e|t|K. Because
ET ≤ K < ∞, we can use the final part of the optional stopping theorem 5.7.14 to obtain EYT = 1, which is the required result. Exercise Show that ET ≤ K < ∞. Exercise (1). Show that, approximately, Let var X 1 > 0, and let T be the smallest n such that either Sn ≤ −a < 0 or Sn ≥ b > 0. Assume there is some t = 0 such that M(t) = 1, and let T be defined as in exercise P(ST ≤ −a) etb − 1 etb − e−ta, and P(ST ≥ b) 1 − e−ta etb − e−ta. Deduce also that P(ST ≤ −a) ≤ e−at, and P(ST ≥ b) ≤ e−bt. By differentiating E(Yn|Y0,..., Yn−1) = Yn−1 for t, and setting t = 0, show that the Exercise following are martingales. (You may assume that it is justified to interchange the expectation and differentiation.) (a) Sn − nEX 1 (b) (Sn − nEX 1)2 − nvar X 1 (c) (Sn − nEX 1)3 − 3(Sn − nEX 1)var X 1 − nE(X 1 − EX 1)3 If you have a full pen and lots of paper, you can find as many more such martingales as you please. Let Sn be a simple random walk with P(X 1 = 1(X 1 = −1). Exercise Use Wald’s identity to show that, when a and b are integers, λa 1 Es T = 2(λb λa − λb 1) + λa − λa+b 2 1 1 λa+b 1 − λa 2, (1) (2) (3) (4) 280 where 6 Generating Functions and Their Applications λ1,2 = 1 ± (1 − 4 pqs2)1/2 2 ps. 6.19 Example: Total Population in Branching Let X n be an ordinary branching process such that X 0 = 1, EX 1 = µ, var X 1 = σ 2, and Es X 1 = G(s). If Yn
= and show that (1) Qn(s) = EsYn, 0 ≤ s ≤ 1, Qn+1(s) = sG(Qn(s)). Solution copy of the branching process. Conditional on X 1, we may therefore write Note that each member of the first generation X 1 gives rise to an independent Y (1) n−1 + ˜Y (2) n−1 + · · · + ˜Y (X 1) n−1, where Y (i) finally, n−1 has the same distribution as Yn−1, and has generating function Qn−1(s). Hence, Es X 0+X 1+···X n = sE(E(s X 1+···+X n |X 1)) = sE((Qn−1)X 1) = sG(Qn−1(s)). (2) Exercise Deduce that if Y = ∞ n=0 X n, then Q(s) = EsY satisfies Q(s) = sG(Q(s)), 0 ≤ s ≤ 1, where s∞ ≡ 0. If µ < 1, show that (a) Q(1) = 1. (b) EY = (1 − µ)−1. (c) var Y = σ 2/(1 − µ)3. Exercise haves in the two cases p < q and q < p. Exercise that xn(s) satisfies (3) (4) Find Q(s) in the special case when G(s) = p 1−qs, p + q = 1. Discuss how Q(s) be- Suppose that G(s) = p/(1 − qs), p + q = 1. Set Qn(s) = yn(s)/xn(s) in (1) to find with x0 = 1 and x1 = 1 − qs. Deduce that xn(s) = xn−1(s) − spq xn−2(s), where λ = 1 + √ 1 − 4spq, µ = 1 − Qn(s) = 2 ps (λ − 2qs)λn−1 − (µ − 2qs)µn−1 (λ − 2qs)
λn − (µ − 2qs)µn 1 − 4spq. √, Problems P R O B L E M S 281 ∞ 0 f X (k)sk, where f X (k) = P(X = k); k ≥ 0. Show that: Find the probability generating function of each of the following distributions and indicate where it exists. (a) 1 ≤ k ≤ n. Let G(s) = ∞ (a) P(X < k)sk = sG(s)/(1 − s). 0 ∞ (b) P(X ≥ k)sk = (1 − sG(s))/(1 − s). 0 f (k) = f (k) = 1 ; n f (k) = 1 2n + 1 1 k(k + 1) 1 2k(k + 1) 1 2k(k − 1) f (k) = ; (b) (c) (d) ; − n ≤ k ≤ +n. 1 ≤ k. for k ≥ 1 for k ≤ −1. (e) f (k) = 1 − c 1 + c c|k|; k ∈ Z, 0 < c < 1. 1 2 (c; p > 0, q > 0 Which of the following are probability generating functions, and when? (a) exp (−λ(1 − G X (s))), where λ > 0, and G X (s) is a p.g.f. (b) sin πs 2 q 1 − ps (d) (q + ps)r (e) 1 − (1 − s2) (f) α log(1 + βs) If the random variable X has p.g.f. G(s), show that for constants a and b the random variable a X + b has p.g.f. sbG(sa). For what values of s is this defined? Let X have p.g.f. G(s). Describe a random variable Y, which has p.g.f. GY (s) = G(s)(2 − G(s))−1. For what values of s is this defined? A loaded die may show different faces with different probabilities. Show that
it is not possible to load two traditional cubic dice in such a way that the sum of their scores is uniformly distributed on {2, 3,..., 12}. The three pairs of opposite faces of a fair die show 1, 2, and 3, respectively. The two faces of a fair coin show 1 and 2, respectively. (a) Find the distribution of the sum of their scores when tossed together. (b) Is it possible to weight the die in such a way that the sum of the scores is uniform on {2, 3, 4, 5}? Let X have p.g.f. G(s), and let E be the event that X is even. Show that E(s X |E) = G(s) + G(−s) G(1) + G(−1). Define the probability generating function of an integer valued random variable X, and show how it may be used to obtain the mean µX, variance σ 2 X, and third moment about the mean γX. 1 2 3 4 5 6 7 8 9 282 6 Generating Functions and Their Applications (a) Let Y = N i=1 Xi, where the Xi are independent integer valued random variables identically distributed as X. Let µX = 0, and let N be an integer valued random variable distributed independently of the Xi. Show that σ 2 Y X, and γY = µN γX. = µN σ 2 Y when µX = 0. (b) Find σ 2 An unfair coin is tossed n times, each outcome is independent of all the others, and on each toss a head is shown with probability p. The total number of heads shown is X. Use the probability generating function of X to find: (a) The mean and variance of X. (b) The probability that X is even. (c) The probability that X is divisible by 3. Let the nonnegative random variable X have p.g.f. G X (s). Show that G(s) = 1 E(X ). 1 − G X (s) 1 − s is the p.g.f. of a nonnegative random variable Y. When is G(s) = G X (s)? Let G1(s) and G2(s) be probability generating functions, and suppose that 0 ≤ λ ≤ 1. Show that λG1
+ (1 − λ)G2 is a p.g.f., and interpret this result. In a multiple-choice examination, a student chooses between one true and one false answer to each question. Assume the student answers at random, and let N be the number of such answers until she first answers two successive questions correctly. Show that E(s N ) = s2(4 − 2s − s2)−1. Hence, find E(N ) and P(N = k). Now find E(N ) directly. A number X of objects are ranked in order of beauty (with no ties). You pick one at random with equal probability of picking any. (a) If X − 1 has a Poisson distribution with parameter λ, show that the p.g.f. of the rank of the object you pick is 1 − eλ(s−1) λ(1 − s). s What is the mean rank of your object? (b) What if X has the logarithmic distribution, f X (k) = cpk/(k + 1); k ≥ 1? A biased coin is tossed N times, where N is a Poisson random variable with parameter λ. Show that if H is the number of heads shown and T the number of tails, then H and T are independent Poisson random variables. Find the mean and variance of H − T. A biased coin is tossed N times, where N is a random variable with finite mean. Show that if the numbers of heads and tails are independent, then N is Poisson. [You may want to use f (x + y) = f (x) f (y) take the form f (x) = eλx for the fact that all continuous solutions of some λ.] Let X n have a negative binomial distribution with parameters n and p(= 1 − q). Show (using generating functions) that if n → ∞ in such a way that λ = nq remains constant, then lim P(X n = n→∞ k) = e−λλk/k!. Show that E(X n) = nqp−1 and var (X n) = nqp−2. events The min{n : An occurs}. (a) Show that E(s N ) = s + (s − 1) and P(