(b) No two are in the same row or column? (c) A, K, Q?

An urn contains 4n balls, n of which are coloured black, n pink, n blue, and n brown. Now, r balls are drawn from the urn without replacement, r ≥ 4. What is the probability that:
(a) At least one of the balls is black?
(b) Exactly two balls are black?
(c) There is at least one ball of each colour?

Find the number of distinguishable ways of colouring the faces of a solid regular tetrahedron with:
(a) At most three colours (red, blue, and green);
(b) Exactly four colours (red, blue, green, and yellow);
(c) At most four colours (red, blue, green, and yellow).

An orienteer runs on the rectangular grid through the grid points (m, n), m, n = 0, 1, 2, ..., of a Cartesian plane. On reaching (m, n), the orienteer must next proceed either to (m + 1, n) or (m, n + 1).
(a) Show that the number of different paths from (0, 0) to (n, n) equals the number from (1, 0) to (n + 1, n), and that this equals \binom{2n}{n}, where \binom{k}{r} = k!/(r!(k − r)!).
(b) Show that the number of different paths from (1, 0) to (n + 1, n) passing through at least one of the grid points (r, r) with 1 ≤ r ≤ n is equal to the total number of different paths from (0, 1) to (n + 1, n), and that this equals \binom{2n}{n−1}.
(c) Suppose that at each grid point the orienteer is equally likely to choose to go to either of the two possible next grid points. Let Ak be the event that the first of the grid points (r, r), r ≥ 1, to be visited is (k, k). Show that P(Ak) = \binom{2k}{k} 4^{−k}/(2k − 1).

A bag contains b black balls and w white balls. If balls are drawn from the bag without replacement, what is the probability Pk that exactly k black balls are drawn before the first white ball?
k. By considering b k=0 Pk, or otherwise, prove the identity b k= for positive integers b, w. (a) Show that N ‘£’ symbols and m‘.’ symbols may be set out in a line with a‘.’ at the right-hand end in ( N +m−1 m−1 ) ways, provided m ≥ 1. (b) A rich man decides to divide his fortune, which consists of N one-pound coins, among his m friends. Happily N > m ≥ 1. (i) In how many ways can the coins be so divided? (ii) In how many ways can the coins be so divided if every friend must receive at least one? (c) Deduce, or prove otherwise, that whenever N > m ≥ 1=1 m k Let N balls be placed independently at random in n boxes, where n ≥ N > 1, each ball having an equal chance 1/n of going into each box. Obtain an expression for the probability P that no box will contain more than one ball. Prove that N (N − 1) < K n, where K = −2 log P, and hence that N < 1 2 (K n + 1 4 ). √ + Now suppose that P ≥ e−1. Show that N − 1 < 4n/5 and hence that K n < N (N + 1). Prove finally that N is the integer nearest to [You may assume that log(1 − x) < −x for 0 < x < 1, that log(1 − x) > −x − 3 4 ) when P ≥ e−1. (K n + 1 √ 5, and that N −1 r =1 r 2 = N (N − 1)(2N − 1)/6.] x < 4 Consider sequences of n integers a1, a2,..., an such that 0 ≤ ai < k for each i, where k is a positive integer. (a) How many such sequences are there? (b) How many sequences have all ai distinct? (c) How many sequences have the property that a1 ≤ a2 ≤... ≤ an? Let an(n = 2, 3,...) denote the number of distinct ways the expression x1x2... xn can be bracketed so that only two quantities are multiplied together at any
one time. [For example, when n = 2 there is only one way, (x1x2), and when n = 3 there are two ways, (x1(x2x3)) and ((x1x2)x3).] 2 x 2 for 0 < Prove that an+1 = an + a2an−1 + a3an−2 + · · · + an−2a3 + an−1a2 + an. Defining A(x) = x + a2x 2 + a3x 3 + · · · prove that (A(x))2 = A(x) − x. Deduce that A(x) = 1 2 (1 − (1 − 4x) 1 2 ), and show that an = 1.3... (2n − 3) n! 2n−1. Coupons Each packet of some harmful and offensive product contains one of a series of r different types of object. Every packet is equally likely to contain one of the r types. If you buy n ≥ r packets, show that the probability that you are then the owner of a set of all r types is n r (−)k k=0 r k 1 − k r. 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 Problems 111 Suppose that 2n players enter for two consecutive tennis tournaments. If the draws for Tennis each tournament are random, what is the probability that no two players meet in the first round of both tournaments? If n is large, show that this probability is about e− 1 2. Lotteries Again Suppose that n balls numbered from 1 to n are drawn randomly from an urn. Show that the probability that no two consecutive numbers are actually carried by consecutive balls drawn is 1 − 1 1! 1 − 1 n + 1 2! 1 − 2 n − 1 3! 1 − 3 n + · · · + (−)n−1 n!. [Hint: show that the number of arrangements of 1, 2,..., n such that at least j pairs of consecutive integers occur is (n − j)!.] Runs that there are k head runs and k tail runs (see Example 3.14 for definitions) is 2( a−1 Deduce that A fair coin is tossed n times yielding a heads and n − a tails. Show that the probability k−1 ) a!
(n−a)!. k−1 )( n−a−1 n! a∧(n−a) k=where x ∧ y denotes the smaller of x and y). Camelot Round Table. (There are n seats, and n knights including these three.) (a) If the n knights sit at random, what is the probability that Arthur sits next to neither? Does it For obvious reasons Arthur would rather not sit next to Mordred or Lancelot at the make any difference whether Arthur sits at random or not? (b) If the n knights sit at random on two occasions, what is the probability that no one has the same left-hand neighbour on the two occasions? By considering ()r, show that n indistinguishable objects may be divided into r distinct groups with at least one object in each group in ( n−1 There are 2n balls in an urn; the balls are numbered 1, 2,..., 2n. They are withdrawn at random without replacement. What is the probability that (a) For no integer j, the 2 jth ball drawn bears the number 2 j? (b) For no integer j, the ball bearing the number j + 1 is removed next after the ball bearing the r −1 ) ways. number j? Find the limit as n → ∞ of the probabilities in (a) and (b). A chandelier has seven light bulbs arranged around the circumference of a circle. By the end of a given year, each will have burnt out with probability 1 2. Assuming that they do so independently, what is the probability that four or more bulbs will have burnt out? If three bulbs burn out, what is the probability that no two are adjacent? I decide that I will replace all the dead bulbs at the end of the year only if at least two are adjacent. Find the probability that this will happen. If it does, what is the probability that I will need more than two bulbs? A biased coin is tossed 2n times. Show that the probability that the number of heads is the same as the number of tails is ( 2n Show that: n (a) n (b) Observe that (i) (1 + x)m(1 + x)n = (1 + x)m+n (ii) (1 − x)m(1 − x)−n−2 = (1 − x)m−n−2. Now show that k j )( n
(a) n )( pq)n. Find the limit of this as n → ∞. n/2 k=0( n n/2 k=0( n 2k ) = 2n−1 k ) = 2n−1 If n is even; If n is even. 0(−)k( n 0( n k− j ) = ( m+n k=1(−)m−k( m n+1 ) = ( n k ) = 2n; k ) = 0; k )( n+k j=0( m (c) (d) m−1 ). and (b) m ) k 30 31 32 33 34 35 36 37 38 39 40 41 112 3 Counting Show that for j ≤ n/2, n−n Show that for fixed n, ( n Show that k=0( n k ) ≤ j − j (n − j)−(n− j). k ) is largest when k is the integer nearest to + 2k n + 2k + 2k r + 2k n b ) = ( a+1 and interpret this in Pascal’s triangle. b+1 ) − ( a−n k=0( a−k Show that An urn contains b blue balls and a aquamarine balls. The balls are removed successively at random from the urn without replacement. If b > a, show that the probability that at all stages until the urn is empty there are more blue than aquamarine balls in the urn is (b − a)/(a + b). b+1 ), and deduce that a−1 ) = ( n+a a ). k=0( k+a−1 n Why is this result called the ballot theorem? (Hint: Use conditional probability and induction.) The points A0, A1,..., An lie, in that order, on a circle. Let a1 = 1, a2 = 1 and for n > 2, let an denote the number of dissections of the polygon A0 A1... An into triangles by a set of noncrossing diagonals, Ai A j. (a) Check that a3 = 2 and a4 = 5. (b) Show that in each dissection there is a unique i(1 ≤ i ≤ n − 1) such that cuts are made along both A0 Ai and An Ai
. (c) Show that an = a1an−1 + a2an−2 + · · · + an−2a2 + an−1a1. (d) If f (x) = 1 an x n, show (by considering the coefficient of each power of x) that ( f (x))2 − ∞ √ f (x) + x = 0, and show that f (x) = 1 2 − 1 2 (1 − 4x). Let A, B, C, D be the vertices of a tetrahedron. A beetle is initially at A; it chooses any of the edges leaving A and walks along it to the next vertex. It continues in this way; at any vertex, it is equally likely to choose to go to any other vertex next. What is the probability that it is at A when it has traversed n edges? Suppose that n sets of triplets form a line at random. What is the probability that no three triplets from one set are adjacent? Suppose a group of N objects may each have up to r distinct properties b1,..., br. With the notation of (3.4.2), show that the number possessing exactly m of these properties is Mm = r −m k=0 (−)k m + k k N (b1,..., bm+k). The M´enages Problem Revisited that exactly m couples are seated in adjacent seats is Use the result of Problem 38 to show that the probability pm = 2 m! n−m k=0 (−)k k(n − m − k)!(2n − m − k − 1)! k!n!(2n − 2m − 2k)!. Suppose that N objects are placed in a row. The operation Sk is defined as follows: “Pick one of the first k objects at random and swap it with the object in the kth place.” Now perform SN, SN −1,..., S1. Show that the final arrangement is equally likely to be any one of the N! permutations of the objects. Suppose that n contestants are to be placed in order of merit, and ties are possible. Let r (n) be the number of possible distinct such orderings of the n contestants. (Thus, r (0) = 0, r (
1) = 1, r (2) = 3, r (3) = 13, and so on.) Show that r (n) has exponential generating function Er (x) = ∞ n=0 x n n! r (n) = 1 2 − ex. [Hint: Remember the multinomial theorem, and consider the coefficient of x n in (ex − 1)k.] Problems 113 42 Derangements rangements of the first n integers is the coefficient of x1x2x3... xn in (3.4.4 Revisited) Write ¯x = x1 + x2 + · · · xn. Explain why the number of de- (x2 + x3 + · · · + xn)(x1 + x3 + · · · + xn) · · · (x1 + x2 + · · · + xn−1) = ( ¯x − x1)( ¯x − x2)... ( ¯x − xn) = ( ¯x)n − ( ¯x)n−1 xi + · · · + (−)n x1x2... xn, 43 44 45 and hence deduce the expression for Pn given in (3.4.4). (a) Choose n points independently at random on the perimeter of a circle. Show that the probability of there being a semicircular part of that perimeter which includes none of the n points is n21−n. (b) Choose n points independently at random on the surface of a sphere. Show that the probability of there being a hemisphere which includes none of the n points is (n2 − n + 2)2−n. A large number of students in a lecture room are asked to state on which day of the year they were born. The first student who shares a birthday with someone already questioned wins a prize. Show that, if you were in that audience, your best chance of winning is to be the twentieth person asked. The n passengers for an n-seat plane have been told their seat numbers. The first to board chooses a seat at random. The rest, boarding successively, sit correctly unless their allocated seat is occupied, in which case they sit at random. Let pn be the probability that the last to board finds her seat free.
Find pn, and show that pn → 1/2 as n → ∞.

4 Random Variables: Distribution and Expectation

I am giddy, expectation whirls me round.
William Shakespeare, Troilus and Cressida

4.1 Random Variables

In many experiments, outcomes are defined in terms of numbers (e.g., the number of heads in n tosses of a coin) or may be associated with numbers, if we so choose. In either case, we want to assign probabilities directly to these numbers, as well as to the underlying events. This requires the introduction of some new functions.

(1) Definition   Given a sample space Ω (with F and P(.)), a discrete random variable X is a function such that for each outcome ω in Ω, X(ω) is one of a countable set D of real numbers. Formally, X(.) is a function with domain Ω and range D, and so for each ω ∈ Ω, X(ω) = x ∈ D, where D is a countable (denumerable) subset of the real numbers.

(2) Example: Pairs in Poker   How many distinct pairs are there in your poker hand of five cards? Your hand is one outcome ω in the sample space Ω of all possible hands; if you are playing with a full deck, then |Ω| = \binom{52}{5}. The number of pairs X depends on the outcome ω, and obviously X(ω) ∈ {0, 1, 2}, because you can have no more than two pairs. Notice that this holds regardless of how the hand is selected or whether the pack is shuffled. However, this information will be required later to assign probabilities.

We always use upper case letters (such as X, Y, T, R, and so on) to denote random variables and lower case letters (x, y, z, etc.) to denote their possible numerical values. You should do the same. Because the possible values of X are countable, we can denote them by {xi; i ∈ I}, where the index set I is a subset of the integers. Very commonly, all the possible values of X are integers, in which case we may denote them simply by x, r, k, j, or any other conventional symbol for integers.
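To make Definition (1) concrete, here is a small Python sketch (my addition, not from the text); the experiment, three tosses of a coin, and the map X counting heads are illustrative choices.

```python
from itertools import product

# Sample space for three tosses of a coin: each outcome is a tuple such as ('H', 'T', 'H').
omega = list(product("HT", repeat=3))

# The random variable X maps each outcome to a number: here, the number of heads shown.
def X(outcome):
    return outcome.count("H")

for w in omega:
    print(w, "->", X(w))

# The possible values of X form the countable set D = {0, 1, 2, 3}.
print(sorted({X(w) for w in omega}))
```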
(3) Definition
(a) If X takes only the values 0 or 1, it is called an indicator or sometimes a Bernoulli trial.
(b) If X takes one of only a finite number of values, then it is called simple.

(4) Example   Suppose n coins are tossed. Let Xj be the number of heads shown by the jth coin. Then, Xj is obviously zero or one, so we may write Xj(H) = 1 and Xj(T) = 0. Let Y be the total number of heads shown by the n coins. Clearly, for each outcome ω ∈ Ω, Y(ω) ∈ {0, 1, 2, ..., n}. Thus, Xj is an indicator and Y is simple. It is intuitively clear also that Y = \sum_{j=1}^{n} Xj. We discuss the meaning of this and its implications in Chapter 5.

Finally, note that the sample space need not be countable, even though X(ω) takes one of a countable set of values.

(5) Example: Darts   You throw one dart at a conventional dartboard. A natural sample space is the set of all possible points of impact. This is of course uncountable because it includes every point of the dartboard, much of the wall, and even parts of the floor or ceiling if you are not especially adroit. However, your score X(ω) is one of a finite set of integers lying between 0 and 60, inclusive.
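Before turning to distributions, a short Python sketch (my addition; the choice n = 4 is arbitrary) checks the identity Y = X1 + ··· + Xn of Example (4) outcome by outcome.

```python
from itertools import product

n = 4  # number of coins; an arbitrary small choice for illustration

omega = list(product("HT", repeat=n))

def X(j, outcome):
    # Indicator of the event "the jth coin shows a head".
    return 1 if outcome[j] == "H" else 0

def Y(outcome):
    # Total number of heads: a simple random variable.
    return outcome.count("H")

# Y(w) agrees with the sum of the indicators X_1(w), ..., X_n(w) for every outcome.
assert all(Y(w) == sum(X(j, w) for j in range(n)) for w in omega)
print("Y = X_1 + ... + X_n holds for all", len(omega), "outcomes")
```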
4.2 Distributions

Next, we need a function, defined on the possible values x of X, to tell us how likely they are. For each such x, there is an event Ax ⊆ Ω, such that
(1) ω ∈ Ax ⇔ X(ω) = x.
Hence, just as the probability that any event A in Ω occurs is given by the probability function P(A) ∈ [0, 1], the probability that X(ω) takes any value x is given by a function P(Ax) ∈ [0, 1]. (We assume that Ax is in F.) For example, if a coin is tossed and X is the number of heads shown, then X ∈ {0, 1} and A1 = H, A0 = T. Hence, P(X = 1) = P(A1) = P(H) = 1/2, if the coin is fair. This function has its own special name and notation. Given Ω, F, and P(.):

(2) Definition   A discrete random variable X has a probability mass function f_X(x) given by
f_X(x) = P(Ax).
This is also denoted by P(X = x), which can be thought of as an obvious shorthand for P({ω: X(ω) = x}). It is often called the distribution.

For example, let X be the number of pairs in a poker hand, as discussed in Example 4.1.2. If the hand is randomly selected, then
f_X(2) = P({ω: X(ω) = 2}) = |{ω: X(ω) = 2}| / \binom{52}{5} = \binom{13}{2} \binom{4}{2} \binom{4}{2} · 44 / \binom{52}{5} ≈ 0.048.
Likewise,
f_X(1) = \binom{13}{1} \binom{4}{2} \binom{12}{3} · 4^3 / \binom{52}{5} ≈ 0.42,
and hence
f_X(0) = 1 − f_X(1) − f_X(2) ≈ 0.53.
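These poker probabilities are easy to verify numerically; the following Python sketch (my addition) reproduces them with math.comb.

```python
from math import comb

hands = comb(52, 5)

# Exactly two pairs: choose 2 ranks for the pairs, 2 suits within each,
# then one of the remaining 44 cards for the fifth card.
f2 = comb(13, 2) * comb(4, 2) ** 2 * 44 / hands

# Exactly one pair: choose its rank and 2 suits, then 3 further distinct
# ranks and one suit for each.
f1 = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4 ** 3 / hands

f0 = 1 - f1 - f2
print(round(f2, 3), round(f1, 3), round(f0, 3))  # approximately 0.048, 0.423, 0.530
```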
Returning to Example 4.1.4 gives an example of great theoretical and historical importance.

(3) Example 4.1.4 Revisited: Binomial Distribution   The random variable Y takes the value r if exactly r heads appear in the n tosses. The probability of this event is \binom{n}{r} p^r q^{n−r}, where p = P(H) = 1 − q. Hence, Y has probability mass function
fY(r) = \binom{n}{r} p^r q^{n−r},  0 ≤ r ≤ n.

The suffix in f_X(x) or fY(y) is included to stress the role of X or Y. Where this is unnecessary or no confusion can arise, we omit it. In the interests of brevity, f(x) is often called simply the mass function of X, or even more briefly the p.m.f.
The p.m.f., f(x) = f_X(x), has the following properties. First,
(4) f(x) ≥ 0 for x ∈ {xi: i ∈ Z}, and f(x) = 0 elsewhere.
That is to say, it is positive for a countable number of values of x and zero elsewhere. Second, if X(ω) is finite with probability one, then it is called a proper random variable and we have
(5) \sum_i f(xi) = \sum_i P(A_{xi}) = P(Ω) = 1.
Third, we have the Key Rule
(6) P(X ∈ A) = \sum_{x ∈ A} f(x).
If \sum_i f(xi) < 1, then X is said to be defective or improper. It is occasionally useful to allow X to take values in the extended real line, so that f_X(∞) has a meaning. In general, it does not. We remark that any function satisfying (4) and (5) can be regarded as a mass function, in that, given such an f(.), it is quite simple to construct a sample space, probability function, and random variable X, such that f(x) = P(X = x). Here are two famous mass functions.

(7) Example: Poisson Distribution   Let X be a random variable with mass function
f_X(x) = λ^x e^{−λ} / x!,  x ∈ {0, 1, 2, ...}, λ > 0.
Then,
\sum_{x=0}^{∞} f(x) = e^{−λ} \sum_{x=0}^{∞} λ^x / x! = 1, by Theorem 3.6.9.
Hence, X is proper. This mass function is called the Poisson distribution and X is said to be Poisson (or a Poisson random variable), with parameter λ.

(8) Example: Negative Binomial Distribution   By the negative binomial theorem, for any number q such that 0 < q < 1, we have
(1 − q)^{−n} = \sum_{r=0}^{∞} \binom{n + r − 1}{r} q^r.
Hence, the function f(r) defined by
f(r) = \binom{n + r − 1}{r} q^r (1 − q)^n,  r ≥ 0,
is a probability mass function. Commonly, we let 1 − q = p.
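Both mass functions do sum to 1, as the following Python sketch (my addition; the parameters and the truncation point are arbitrary) confirms numerically.

```python
from math import comb, exp, factorial

lam, n, q = 2.5, 4, 0.4   # arbitrary Poisson and negative binomial parameters
N = 200                   # truncation point for the infinite sums

poisson_total = sum(exp(-lam) * lam**x / factorial(x) for x in range(N))
negbin_total = sum(comb(n + r - 1, r) * q**r * (1 - q) ** n for r in range(N))

print(poisson_total, negbin_total)   # both very close to 1
```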
The following function is also useful; see Figure 4.1 for a simple example.

(9) Definition   A discrete random variable X has a cumulative distribution function FX(x), where
FX(x) = \sum_{i: xi ≤ x} f(xi).

Figure 4.1   The distribution function FX(x) of the random variable X, which is the indicator of the event A. Thus, the jump at zero is P(X = 0) = P(
Ac) = 1 − P(A) and the jump at x = 1 is P(X = 1) = P(A). 118 4 Random Variables This is also denoted by P(X ≤ x) = P({ω: X (ω) ≤ x}); it may be referred to simply as the distribution function (or rarely as the c.d.f.), and the suffix X may be omitted. The following properties of F(x) are trivial consequences of the definition (9): (10) (11) (12) F(x) ≤ F(y) for x ≤ y. 1 − F(x) = P(X > x). P(a < X ≤ b) = F(b) − F(a) for a < b. Some further useful properties are not quite so trivial, in that they depend on Theorem 1.5.2. Thus, if we define the event Bn = {X ≤ x − 1/n}, we find that ∞ P(X < x) = P n=1 = lim n→∞ F Bn = P( lim n→∞ x − 1 n = lim y↑x F(y). Bn) = lim n→∞ P(Bn) by Theorem 1.5.2, If the random variable X is not defective then, again from (9) (and Theorem 1.5.2), limx→∞ F(x) = 1, and limx→−∞ F(x) = 0. The c.d.f. is obtained from the p.m.f. by (9). Conversely, the p.m.f. is obtained from the c.d.f. by f (x) = F(x) − lim y↑x F(y) where y < x. When X takes only integer values, this relationship has the following simpler more attractive form: for integer x (13) f (x) = F(x) − F(x − 1). (14) Example: Lottery An urn contains n tickets bearing numbers from 1 to n inclusive. Of these, r are withdrawn at random. Let X be the largest number removed if the tickets are replaced in the urn after each drawing, and let Y be the largest number removed if the drawn tickets are not replaced. Find
f X (x), FX (x), fY (x), and FY (x). Show that FY (k) < FX (k), for 0 < k < n. Solution repetition allowed, is x r. Because there are nr outcomes, The number of ways of choosing r numbers less than or equal to x, with FX (x) =, for 1 ≤ x ≤ n, r x n when x is an integer. For any real x, FX (x) = ([x]/n)r, (where [x] denotes the largest integer which is not greater than x). Hence, for integer x and 1 ≤ x ≤ n, by (13), r f X (x) = and elsewhere f X (x) is zero.2 Distributions 119 Without replacement, the number of ways of choosing r different numbers less than or equal to x is ( x r ). Hence, for integer x, and 1 ≤ x ≤ n, n r FY (x) = x r Hence, again by (13), fY (x which is of course obvious directly. Furthermore, r FY (k) = k!(n − r )! (k − r )!n! = FX (k). < k n for 1 < k < n Because real valued functions of random variables are random variables, they also have probability mass functions. (15) Theorem If X and Y are random variables such that Y = g(X ), then Y has p.m.f. given by Proof x:g(x)=y f X (x). fY (y) = P(g(X ) = y) = P(X = x) = f X (x). x:g(x)=y x:g(x)=y (16) Example Let X have mass function f (x). Find the mass functions of the following functions of X. (a) −X (b) X + = max {0, X } (c) X − = max {0, −X } (d) |X | = X + + X − (e) sgn X = X |X | 0,, X = 0, X = 0. Solution Using Theorem 15 repeatedly, we have: (a) (b) f−X (x) = f X (−x). f X (x); f X +(x) = (c) f X −(x) = x
> 0. x = 0. x > 0 x = 0. f X (x); x≤0 f X (−x); f X (x); x≥0 120 4 Random Variables (d) f|X |(x) = (e) fsgnX (x) = f X (x) + f X (−x); f X (0); x>0 f X (0); f X (x); f X (x); x<0 x = 0 x = 01. Finally, we note that any number m such that limx↑m F(x) ≤ 1 2 of F (or a median of X, if X has distribution F) 4.3 Expectation ≤ F(m) is called a median (1) Definition that by x Let X be a random variable with probability mass function f (x) such |x| f (x) < ∞. The expected value of X is then denoted by E(X ) and defined E(X ) = x x f (x). This is also known as the expectation, or mean, or average or first moment of X. Note that E(X ) is the average of the possible values of X weighted by their probabilities. It can thus be seen as a guide to the location of X, and is indeed often called a location parameter. The importance of E(X ) will become progressively more apparent. (2) Example Suppose that X is an indicator random variable, so that X (ω) ∈ {0, 1}. Define the event A = {ω: X (ω) = 1}. Then X is the indicator of the event A; we have f X (1) = P(A) and E(X ) = 0. f X (0) + 1. f X (1) = P(A). (3) Example Let X have mass function f X (x) = 4 x(x + 1)(x + 2), x = 1, 2,... and let Y have mass function fY (x) = 1 x(x + 1), x = 1, 2,... Show that X does have an expected value and that Y does not have an
expected value. Solution For any m < ∞, m m x=1 |x| f X (x) = 4 (x + 1)(x + 2) x=1 = 4 m x=(m + 2)−1, 4.3 Expectation 121 by successive cancellations in the sum. Hence, the sum converges as m → ∞, and so because X > 0, E(X ) = 2. However, for the random variable Y, ∞ which is not finite. x x=1 |x| fY (x) = 1 x + 1, |x| f (x) < ∞ amounts to E(X +) + E(X −) < ∞ (use ExNotice that the condition ample 4.2.16 to see this). A little extension of Definition 1 is sometimes useful. Thus, if E(X −) < ∞ but E(X +) diverges, then we may define E(X ) = +∞. With this extension in Example 3, E(Y ) = ∞. Likewise, if E(X +) < ∞ but E(X −) diverges, then E(X ) = −∞. If both E(X +) and E(X −) diverge, then E(X ) is undefined. In general, real valued functions of random variables are random variables having a mass function given by Theorem 4.2.15. They may therefore have an expected value. In accordance with Example 3, if Y = g(X ), then by definition E(g(X )) = yi fY (yi ). i We used this with Example 4.2.16(b) in observing that E(X +) = x f X (x). x>0 This was easy because it was easy to find the mass function of X + in terms of that of X. It is not such an immediately attractive prospect to calculate (for example) E(cos(θ X )) by first finding the mass function of cos (θ X ). The following theorem is therefore extremely useful. (4) Theorem real valued function defined on R. Then, Let X be a random variable with mass function f (x), and let g(.) be a E
(g(X )) = whenever x |g(x)| f (x) < ∞. x g(x) f (x) Proof Let (g j ) denote the possible values of g(X ), and for each j define the set A j = {x: g(x) = g j }. Then P(g(X ) = g j ) = P(X ∈ A j ), and therefore, provided all the following summations converge absolutely, we have E(g(X )) = g j P(g(x) j x∈A j g(x) f (x), because g(x) = g j for x ∈ A j, x∈A j g(x) f (x), because A j ∩ Ak = φ for j = k. = = j j x 122 4 Random Variables (5) Example Let X be Poisson with parameter λ. Find E(cos(θ X )). Solution imaginary square root of −1. Now, by Theorem 4, First, recall de Moivre’s Theorem that eiθ = cos θ + i sin θ, where i is an E(cos(θ X )) = k! ∞ k=0 = Re cos (kθ)e−λλk ∞ eikθ e−λλk k!, where Re (z) is the real part of z, k=0 = Re (exp (λeiθ − λ)) = eλ(cos θ−1) cos (λ sin θ), using de Moivre’s Theorem again. Now, we can use Theorem 4 to establish some important properties of E(.). (6) Let X be a random variable with finite mean E(X ), and let a and b be Theorem constants. Then: (i) E(a X + b) = aE(X ) + b; (ii) If P(X = b) = 1, then E(X ) = b; (iii) If P(a < X ≤ b) = 1, then a < E(X ) ≤ b; (iv) If g(X ) and h(X ) have finite mean, then E(g(X ) + h(X )) = E(g(X )) +
E(h(X )). Proof (i) First, we establish the necessary absolute convergence: |ax + b| f (x) ≤ (|a||x| + |b|) f (x) = |a| |x| f (x) + |b| < ∞, x x x as required. Hence, by Theorem 4. E(a X + b) = (ax + b) f (x) = a x x x f (x) + b = aE(X ) + b. (ii) Here, X has mass function f (b) = 1, so by definition E(X ) = b f (b) = b. (iii) In this case, f (x) = 0 for x /∈ (a, b], so ≤ b f (x) = b; E(X ) = x f (x) > x x x a f (x) = a. (iv) Because |g(x) + h(x)| ≤ |g(x)| + |h(x)|, absolute convergence is quickly estab- lished. Hence, by Theorem 4, E(g(X ) + h(X )) = (g(x) + h(x)) f (x) = x = E(g(X )) + E(h(X )). x g(x) f (x) + x h(x) f (x) The following simple corollary is of some importance. (7) Theorem If E(X ) exists, then (E(X ))2 ≤ (E(|X |))2 ≤ E(X 2). 4.3 Expectation 123 Proof First, note that (|X | − E(|X |))2 ≥ 0. Hence, by Theorem 6(iii), 0 ≤ E((|X | − E(|X |))2 = E(|X |2) − (E(|X |))2, by Theorem 6(iv) and 6(ii) = E(X 2) − (E(|X |))2, which proves the second inequality. Also, |X | − X ≥ 0, so by 6(iv) and 6(iii) E(X ) ≤ E(|X |), which proves the fi
rst inequality. (8) Example: Uniform Distribution Recall that an urn contains n tickets numbered from 1 to n. You take one ticket at random; it bears the number X. Find E(X ) and E(X 2), and verify that Theorem 7 holds explicitly. Solution bility evenly over the values of X, it is called the uniform distribution.) Hence, The mass function of X is P(X = k) = 1/n. (Because it distributes proba- n E(=1 1 2 (x(x + 1) − x(x − 1)) (n + 1) by successive cancellation. x=1 = 1 2 Likewise, using Theorems 4 and 6(iv), E(X 2) + E(=1 1 3 (x(x + 1)(x + 2) − (x − 1)x(x + 1)) (n + 1)(n + 2) by successive cancellation. x=1 = 1 3 Hence, E(X 2) = 1 6 (n + 1)(2n + 1) ≥ 1 4 (n + 1)2 = (E(X ))2. In practice, we are often interested in the expectations of two particularly important collections of functions of X ; namely, (X k; k ≥ 1) and ([X − E(X )]k; k ≥ 1). (9) Let X have mass function f (x) such that Definition (a) The kth moment of X is µk = E(X k). (b) The kth central moment of X is σk = E((X − E(X ))k). (c) The kth factorial moment of X is µ(k) = E(X (X − 1)... (X − k + 1)). In particular, σ2 is called the variance of X and is denoted by σ 2, σ 2 var (X ). Thus, x |x|k f (x) < ∞. Then, X, or var (X ) = E((X − E(X ))2). 124 4 Random Variables Example: Indicators Let X be the indicator of the event A (recall Example 4.3.2). Because X k(ω) = X (ω) for all k and ω, we have µk = E(X k) = P(A). Also, var (
X) = P(A)P(A^c), and μ_(k) = P(A) for k = 1, and μ_(k) = 0 for k > 1.

(10) Example   Show that if E(X^2) < ∞, and a and b are constants, then var(aX + b) = a^2 var(X).

Solution   Using Theorem 6(i) and the definition of variance,
var(aX + b) = E((a(X − E(X)) + b − b)^2) = E(a^2 (X − E(X))^2) = a^2 var(X).
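Example (10) can also be seen empirically; this Python sketch (my addition; the distribution, constants, and sample size are arbitrary) compares the sample variance of aX + b with a^2 times the sample variance of X.

```python
import random

random.seed(1)
a, b = 3.0, 7.0
xs = [random.choice([0, 1, 1, 2, 5]) for _ in range(100_000)]   # arbitrary discrete distribution

def var(sample):
    # Sample variance (dividing by the sample size, as a rough estimate).
    m = sum(sample) / len(sample)
    return sum((s - m) ** 2 for s in sample) / len(sample)

print(var([a * x + b for x in xs]))   # close to ...
print(a ** 2 * var(xs))               # ... a^2 * var(X)
```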
Sometimes the tail of a distribution, P(X > x), has a simpler form than the mass function f(x). In these and other circumstances, the following theorems are useful.

(11) Theorem   If X ≥ 0 and X takes integer values, then
E(X) = \sum_{x=1}^{∞} P(X ≥ x).

Proof   By definition,
E(X) = \sum_{x=1}^{∞} x f(x) = \sum_{x=1}^{∞} f(x) \sum_{r=1}^{x} 1.
Because all terms are nonnegative, we may interchange the order of summation to obtain
\sum_{x=1}^{∞} \sum_{r=x}^{∞} f(r) = \sum_{x=1}^{∞} P(X ≥ x).

This tail-sum theorem has various generalizations; we state one.

(12) Theorem   If X ≥ 0 and k ≥ 2, then
μ_(k) = E(X(X − 1) ... (X − k + 1)) = k \sum_{x=k}^{∞} (x − 1) ... (x − k + 1) P(X ≥ x).

Proof   This is proved in the same way as Theorem 11, by changing the order of summation on the right-hand side.
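Theorem (11) is easy to check numerically; here is a Python sketch (my addition) comparing the tail sum with the direct definition of E(X) for a binomial random variable with arbitrary parameters.

```python
from math import comb

# Check E(X) = sum over x >= 1 of P(X >= x) for a Binomial(n, p) variable.
n, p = 10, 0.3   # arbitrary choice
f = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

mean_direct = sum(x * f[x] for x in range(n + 1))
mean_tail = sum(sum(f[x:]) for x in range(1, n + 1))   # sum over x of P(X >= x)

print(mean_direct, mean_tail)   # both equal n*p = 3.0, up to rounding error
```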
(13) Example: Waiting – The Geometric Distribution   A biased coin shows a head with probability p or a tail with probability q = 1 − p. How many times do you expect to toss the coin until it first shows a head? Find the various second moments of this waiting time.

Solution   Let the required number of tosses until the first head be T. Then, because the tosses are independent,
P(T = x) = q^{x−1} p;  x ≥ 1.
(T is said to have the geometric distribution.) Hence,
E(T) = \sum_{x=1}^{∞} x q^{x−1} p = p/(1 − q)^2 = 1/p, by (3.6.12) with n = 2.
Alternatively, we can use Theorem 11 as follows. Using the independence of tosses again gives P(T > x) = q^x, so
E(T) = \sum_{x=0}^{∞} P(T > x) = \sum_{x=0}^{∞} q^x = 1/p.
For the second factorial moment, by Theorem 12,
μ_(2) = 2 \sum_{x=2}^{∞} (x − 1) q^{x−1} = 2q/p^2, by (3.6.12) again.
Hence, the second moment is
E(T^2) = E(T(T − 1)) + E(T) = 2q/p^2 + 1/p = (1 + q)/p^2.
Finally, the second central moment is
σ_2 = E((T − E(T))^2) = E(T^2) − (E(T))^2 = (1 + q)/p^2 − 1/p^2 = q/p^2.
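As a quick sanity check on these formulas, here is a Python sketch (my addition; p = 0.3 and the truncation point are arbitrary) comparing truncated sums with 1/p and q/p^2.

```python
p = 0.3
q = 1 - p
N = 2000  # truncation point; the neglected tail is of order q**N

f = [q ** (x - 1) * p for x in range(1, N + 1)]           # P(T = x), x = 1..N
mean = sum(x * fx for x, fx in zip(range(1, N + 1), f))
mean_tail = sum(q ** x for x in range(N))                  # sum of P(T > x), x = 0..N-1
second = sum(x * x * fx for x, fx in zip(range(1, N + 1), f))

print(mean, 1 / p)                   # both about 3.333
print(mean_tail, 1 / p)
print(second - mean ** 2, q / p**2)  # variance, about 7.778
```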
(14) Example   Let X have mass function
f_X(x) = a/x^2;  x = 1, 2, 3, ...
and let Y have mass function
f_Y(y) = b/y^2;  y = ±1, ±2, ...
(a) Find a and b.
(b) What can you say about E(X) and E(Y)?

Solution   (a) Because f_X(x) is a mass function,
1 = \sum_x f_X(x) = a \sum_{x=1}^{∞} 1/x^2.
Hence, a = 6π^{−2}. Likewise, b = 3π^{−2}.
(b) We have
E(X) = a \sum_{x=1}^{∞} x f_X(x) = a \sum_{x=1}^{∞} 1/x = ∞.
Because E(Y^+) and E(Y^−) both diverge, E(Y) does not exist.
(15) Example: Coupons   Each packet of an injurious product is equally likely to contain any one of n different types of coupon, independently of every other packet. What is the expected number of packets you must buy to obtain at least one of each type of coupon?

Solution   Let A_n^r be the event that the first r coupons you obtain do not include a full set of n coupons. Let C_k^r be the event that you have not obtained one of the kth coupon in the first r. Then,
A_n^r = ∪_{k=1}^{n} C_k^r.
We may calculate P(C_1^r) = (1 − 1/n)^r, and, in general, for any set S_j of j distinct coupons,
P(∩_{i ∈ S_j} C_i^r) = (1 − j/n)^r.
Hence, by (1.4.8),
P(A_n^r) = P(∪_{k=1}^{n} C_k^r) = \sum_{j=1}^{n} (−1)^{j+1} \binom{n}{j} (1 − j/n)^r,
because for each j there are \binom{n}{j} sets S_j. Now, let R be the number of packets required to complete a set of n distinct coupons. Because A_n^r occurs if and only if R > r, we have P(R > r) = P(A_n^r). Hence, by Theorem 11,
E(R) = \sum_{r=0}^{∞} P(R > r) = \sum_{j=1}^{n} (−1)^{j+1} \binom{n}{j} \sum_{r=0}^{∞} (1 − j/n)^r = \sum_{j=1}^{n} (−1)^{j+1} \binom{n}{j} (n/j) = n u_n, say.
Now,
(16) u_{n+1} − u_n = \sum_{j=1}^{n+1} (−1)^{j+1} \binom{n+1}{j} (1/j) − \sum_{j=1}^{n} (−1)^{j+1} \binom{n}{j} (1/j) = \sum_{j=1}^{n+1} (−1)^{j+1} \binom{n}{j−1} (1/j) = 1/(n + 1),
because \binom{n}{j−1}/j = \binom{n+1}{j}/(n + 1) and \sum_{j=0}^{n+1} (−1)^{j+1} \binom{n+1}{j} = (1 − 1)^{n+1} = 0 (the j = 0 term being −1).
Hence, iterating (16), u_n = \sum_{j=1}^{n} 1/(n − j + 1), so that
E(R) = \sum_{j=1}^{n} n/(n − j + 1).
In Chapter 5, we discover a much easier method of obtaining this result.
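The formula E(R) = n \sum_{j=1}^{n} 1/j is easy to test by simulation; here is a Python sketch (my addition; n = 6 and the number of runs are arbitrary choices).

```python
import random

n, runs = 6, 20_000

exact = n * sum(1 / j for j in range(1, n + 1))   # 14.7 for n = 6

def collect():
    # Buy packets until all n coupon types have been seen; return the count.
    seen, packets = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        packets += 1
    return packets

estimate = sum(collect() for _ in range(runs)) / runs
print(exact, estimate)   # the two should agree to within sampling error
```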
4.4 Conditional Distributions

Let Ω be some sample space, X some random variable defined on Ω, and P(.) a probability function defined on Ω. Now suppose that we are given that some event B ⊆ Ω occurs, with P(B) > 0. Just as we argued in Chapter 2 that this gives rise to a conditional probability function, so we now conclude that this gives rise to a conditional distribution of X given B.
In fact, using (4.2.1), we write (1) (2) (3) P(X (ω) = x|B) = P(Ax |B) = P(Ax ∩ B)/P(B) ≥ 0, where as usual Ax = {ω: X (ω) = x}. Furthermore, because whenever x = y we have x Ax = and Ax ∩ Ay = φ, P(Ax |B) = P(Ax ∩ B)/P(B) = P( ∩ B)/P(B) = 1. x x Hence, the function f (x|B) defined by f (x|B) = P(Ax |B) = P(X = x|B) is a probability mass function, in that f (x|B) ≥ 0 and ditional mass function of X given B. x f (x|B) = 1. It is the con- (4) Example Let X be uniformly distributed on {1, 2,..., n}, and let B be the event that a ≤ X ≤ b, where 1 ≤ a < b ≤ n. Find the mass function of X given B. Solution Obviously, P(B) = b i=a 1 n = (b − a + 1)/n 128 and Hence, 4 Random Variables P({X = k} ∩ B) = 1 n 0; ; a ≤ k ≤ b otherwise. f (x|B; ; a ≤ x ≤ b otherwise Thus, given that X lies in B, it is uniformly distributed over B. (5) (6) Because f (x|B) is a probability mass function, it may have an expectation. In line |x| f (x|B) < ∞. If this condition is satisfied, with Definition 4.3.1, we require that then the conditional expectation of X given B is denoted by E(X |B) and defined by x E(X |B) = x f (x|B). x Expectation and conditional expectation are related by the following exceptionally important result. Theorem such that P(B)P(Bc) > 0. Then, Let X be a random variable with mean E(X ), and let B be an event E(X ) =
E(X |B)P(B) + E(X |Bc)P(Bc). Proof By conditional probability, f (x) = P(X = x) = P({X = x} ∩ B) + P({X = x} ∩ Bc) = f (x|B)P(B) + f (x|Bc)P(Bc). E(X ) = x x f (x) = P(B) x x f (x|B) + P(Bc) x x f (x|Bc), Hence, as required. More generally, it is shown in exactly the same way that if (Bi ; i ≥ 1) is a collection of events such that (i) i Bi =, (ii) Bi ∩ B j = φ; (iii) P(Bi ) > 0, i = j, and 4.4 Conditional Distributions 129 then E(X ) = i E(X |Bi )P(Bi ) whenever the summation is absolutely convergent. Finally, we make the small but useful observation that if A ⊆ B, then E(X |A ∩ B) = E(X |A). (7) (8) (9) Example A coin is tossed repeatedly. As usual, for each toss, P((H c). The outcome is a sequence of runs of heads alternating with runs of tails; the first run can be of either heads or tails. Let the length of the nth run be Rn. For all k and j, show that E(R2k+1) ≥ E(R2 j ) and var (R2k+1) ≥ var (R2 j ), with equality in each case if and only if p = q = 1 2. Solution We know that Let X be the number of heads shown before the first appearance of a tail. (10) P(X = k) = pkq, k ≥ 0. Let us consider the mass function of X conditional on the first toss. Given that the first toss is H, let X be the further number of tosses before the first tail. By independence, P(X = k) = pkq = P(X = k). Hence, conditional on H, we have X = 1 + X, and conditional on
H c, we have X = 0. Therefore, by Theorem 6, (11) E(X ) = pE(X |H ) + qE(X |H c) = p(1 + E(X )) + 0 = p + pE(X ) because E(X ) = E(X ). Thus, E(X ) = p/q, which of course we could have obtained directly from (10); we chose to do it this way to display the new technique. Likewise, if Y is the number of tails before the first head, E(Y ) = q/ p. Now R2k+1 is a run of heads if and only if the first toss is a head. Hence, again using independence, E(R2k+1) = E(R2k+1|H ) p + E(R2k+1|H c)q = E(1 + X ) p + E(1 + Y ). Likewise, R2k is a run of heads if and only if the first toss yields a tail, so E(R2k, with equality if and only if p = 1 2 (R2k) = E(R2 2k) − 4. Arguing as above, and using conditional probability again, yields = q. [Because ( p − q)2 > 0, for p = q.] Now, var E(R2 2k) = qE((1 + X )2) + pE((1 + Y )2 p2 130 and so (12) Likewise, (13) and so Now 4 Random Variables var (R2k) = 1 + ( p − q)2 − 2 pq pq. E(R2 2k+1) = q 1 + q p2 + p 1 + p q 2 var (R2k+1) = q p2 + p q 2 − 2. var (R2k+1) − var (R2k) = p4 + q 4 + 2 p2q 2 − pq p2q 2 = ( p3 − q 3)( p − q) p2q 2 ≥ 0 with equality if and only if p = q = 1 2. 4.5 Sequences of Distributions If an experiment is repeated indefinitely, it may give rise to a sequence (Fn
(x); n ≥ 1) of distributions. (1) Definition Let f (x) be a probability mass function that is nonzero for x ∈ D, and zero for x ∈ R\D = C. Let F(x) be the corresponding distribution function F(x) = f (xi ). A sequence of distribution functions Fn(x) is said to converge to F(x) if, as n → ∞, Fn(x) → F(x) for x ∈ C. xi ≤x One special case is important to us; if D is included in the integers, then Fn(x) converges to F(x) if, for all x, fn(x) → f (x) as n → ∞. (2) Example: Matching revisited In Example 3.18, we showed that the probability of exactly r matches in n random assignments of letters is p(n, r ) = 1 r! n−r (−)k k! → e−1 r! k=0 as n → ∞. This shows that as n → ∞ the number of matches has a Poisson distribution (with parameter 1) in the limit. (3) Example: M´enages Revisited In Problem 3.38, we found the probability that exactly m couples were adjacent when seated randomly at a circular table (alternating the 4.6 Inequalities 131 sexes) is pm = 2 m! n−m k=0 (−)k(n − m − k)!(2n − m − k − 1)! k!(2n − 2m − 2k)!n! → 2m m! ∞ k=0 (−)k 2k k! = 2me−2 m! as n → ∞. Thus, the number of adjacent couples is Poisson with parameter 2 in the limit as n → ∞. Finally, we note that the appearance of the Poisson distribution in Examples 2 and 3 is significant. This distribution commonly arises in limits of this type, and that is one of the reasons for its major importance. 4.6 Inequalities Calculating the exact probability that X lies in some set of interest is not always easy. However, simple bounds on these probabilities will often be sufficient for the task in hand. We start with a basic inequality. (1) Theorem: Basic Inequality If
h(x) is a nonnegative function, then, for a > 0,
P(h(X) ≥ a) ≤ E(h(X))/a.

Proof   Define the following function of X:
I(h ≥ a) = 1 whenever h(X) ≥ a, and 0 otherwise.
Observe that I is an indicator, and so by Example 4.3.2, E(I) = P(h(X) ≥ a). Now, by its construction, I satisfies h(X) − aI ≥ 0, and so by Theorem 4.3.6 [parts (iii) and (iv)],
E(h(X)) ≥ aE(I) = aP(h(X) ≥ a).

The following useful inequalities can all be proved using Theorem 1 or by essentially the same method. You should do some as exercises. For any a > 0, we have:

Markov's Inequality
(2) P(|X| ≥ a) ≤ E(|X|)/a.

Chebyshov's Inequality†
(3) P(|X| ≥ a) ≤ E(X^2)/a^2.

† Some writers use the transliteration "Chebyshev". They then have to remember that the second "e" is pronounced as "o".

One-Sided Chebyshov's Inequality
(4) P(X − E(X) ≥ a) ≤ var(X)/(a^2 + var(X)).

Generalized Markov Inequality   If h(x) is increasing for x > 0, even, and nonnegative, then
(5) P(|X| ≥ a) ≤ E(h(X))/h(a).

If X is nonnegative, then
(6) P(X > a) ≤ E(X)/a.

If c > 0, then
(7) P(X > a) ≤ E((X + c)^2)/(a + c)^2,
and
(8) P(X > a) ≤ E(exp(c(X − a))).

Here is one important application.
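These bounds are often crude, which a numerical check makes vivid; the following Python sketch (my addition; the binomial parameters and the threshold a are arbitrary) compares P(|X| ≥ a) with the Markov and Chebyshov bounds (2) and (3).

```python
from math import comb

n, p, a = 20, 0.5, 15
f = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

prob = sum(f[x] for x in range(a, n + 1))          # P(X >= a); here X >= 0, so |X| = X
mean = sum(x * f[x] for x in range(n + 1))         # E|X| = E(X) = 10
second = sum(x * x * f[x] for x in range(n + 1))   # E(X^2) = 105

print(prob)             # about 0.021
print(mean / a)         # Markov bound, about 0.667
print(second / a**2)    # Chebyshov bound, about 0.467
```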
(9) Example   Let X be a random variable such that var(X) = 0. Show that X is constant with probability one.

Solution   By (3), for any integer n ≥ 1,
(10)
P |X − E(X )| > 1 n ≤ n2var (X ) = 0. Hence, defining the events Cn = {|X − E(X )| > 1/n}, we have ∞ P(X = E(X )) = P n=1 = 0. Cn = P( lim n→∞ Cn) = lim n→∞ P(Cn) by Theorem 1.5.2 An important concept that crops up in many areas of pure and applied mathematics is that of convexity. We are interested in the following manifestation of this. (11) Definition A function g(x) (from R to R) is called convex if, for all a, there exists λ(a) such that (12) g(x) ≥ g(a) + λ(a)(x − a), for all x. 4.6 Inequalities 133 If g(x) is differentiable, then a suitable λ is given by λ(a) = g(a) and (12) takes the form g(x) ≥ g(a) + g(a)(x − a). (13) This says that a convex function lies above all its tangents. If g is not differentiable, then there may be many choices for λ; draw a picture of g(x) = |x| at x = 0 to see this. (There are several other definitions of a convex function, all equivalent to Definition 11.) We are interested in the following property of convex functions. (14) Theorem: Jensen’s Inequality a convex function. Then, Let X be a random variable with finite mean and g(x) (15) E(g(X )) ≥ g(E(X )). Proof Choosing a = E(X ) in (12), we have g(X ) ≥ g(E(X )) + λ(X − E(X )). Taking the expected value of each side gives (15). For example, g(x) = |x| and g(x) = x 2 are both convex, so E(|X |) ≥ |E(X )| and E(X 2) ≥ (E(X ))2. This is Theorem 4.3
.7. Here is a less trivial example. (16) Example Let X be a positive random variable. Show that E(log X ) ≤ log E(X ). Solution − log x is convex. Fortunately, this is easy, as follows. By definition, for x > 0, This follows immediately from Jensen’s inequality if we can show that − log x = 1 x y−1dy = 1 y−1dy + a y−1dy, for a > 0, = − log a + x a a y−1dy ≥ − log a + x a x a−1dy = − log a − 1 a (x − a), and this is (12) with λ(a) = −a−1. Example 16 has many important applications, of which we see more later. Here is one to begin with. (17) Example: Arithmetic–Geometric Means Inequality Let (xi ; 1 ≤ i ≤ n) be any collection of positive numbers and ( pi ; 1 ≤ i ≤ n) any collection of positive numbers such that i pi = 1. Show that (18) p1x1 + p2x2 + · · · + pn xn ≥ x1 p1 x2 p2 · · · xn pn. Solution pi ; 1 ≤ i ≤ n. Then, from (16), Let X be the random variable with probability mass function P(X = xi ) = log E(X ) = log( p1x1 + · · · + pn xn) ≥ E(log X ) = p1 log x1 + · · · + pn log xn = log(x1 p1 x2 p2 · · · xn pn ). The result (18) follows because log x is an increasing function. 134 4 Random Variables (19) Example: AM/GM Inequality In the special case when pi = 1/n, 1 ≤ i ≤ n, then (18) takes the form 1 n n 1 xi ≥ n 1/n xi i=1. 4.7 Review and Checklist for Chapter 4 We have seen that many experiments have numerical outcomes; and when they do not, the sample space can be usefully mapped to suitable points on the line. Such real-valued functions defined on the sample space are called random variables. This chapter defined some important types of random
variable, and considered several important named examples. We introduced the key concepts of distribution, expectation, functions of random variables, conditional distributions, and sequences of distributions. In Table 4.1 we display the distribution, expectation, and variance for some important specific random variables : In general, every random variable has a distribution function FX (x) = P(X ≤ x). Discrete random variables, taking one of a countable set of possible values, have a probability mass function When X is integer valued, we can write f X (x) = P(X = x). f X (x) = F(x) − F(x − 1). Table 4.1. Discrete random variables Distribution f (x) Mean Variance Uniform Bernoulli Binomial Geometric Poisson Negative binomial Hypergeometric n−1, 1 ≤ x ≤ n px (1 − p)1−x, x ∈ {0, 1} px (1 − p)n−x, 0 ≤ x ≤ n (1 − p)x−1 p, x ≥ 1 n x x−1 k−1, x ≥ 1 e−λλx x! pk(1 − p)x−k, x ≥ k x )( w ( m n−x ) ( m+n + 1) p 1 12 (n2 − 1) p(1 − p) np p−1 λ np(1 − p) (1 − p) p−2 λ kp−1 k(1 − p) p−2 nm m+w nmw(m+w−n) (m+w−1)(m+w)2 4.7 Review and Checklist for Chapter 4 135 In any case, we have the Key Rule: P(X ∈ A) = x∈A f X (x). Expectation: A discrete random variable has an expected value if this case, EX = x fx (x). |x| f X (x) < ∞. In x When X is integer valued and nonnegative, x EX = ∞ x=0 P(X > x) = ∞ x=0 (1 − F(x)). Functions: Suppose that discrete random variables X and Y are such that Y = g(X ) for some function g(.). Then, Also, fY (y) = x:g(x)=y f X (x).
EY = x g(x) f X (x) [provided that It follows that in any case where each side exists: |g(x)| f X (x) < ∞.] x E[ag(X ) + bh(X )] = aEg(X ) + bEh(X ). Variance: For any random variable X, the variance σ 2 is given by var X = E(X − EX )2 = EX 2 − (EX )2 = σ 2 ≥ 0. The number σ ≥ 0 is called the standard deviation, and in particular we have, for constants a and b, var (a X + b) = a2varX. Moments: The kth moment of X is µk = EX k, k ≥ 1; usually we write µ1 = µ. The kth central moment of X is k ≥ 1; σk = E(X − EX )k, usually we write σ2 = σ 2. Conditioning: Any event B in may condition a random variable X, leading to a conditional distribution function FX |B(x|B) = P(X ≤ x|B) 136 4 Random Variables and a conditional mass function Key Rule: f X |B(x|B) = P(X = x|B); P(X ∈ A|B) = x∈A f X |B(x|B). This distribution may have an expected value, called Conditional expectation: E(X |B) = x f X |B(x|B). In particular, x EX = E(X |B)P(B) + E(X |Bc)P(Bc) and if (Bi ; i ≥ 1) is a partition of EX = i E(X |Bi )P(Bi ). Similarly, a random variable X may condition an event B, so that P(B|X = x) = P(B ∩ {X = x})/P(X = x), yielding P(B) = x P(B|X = x) f X (x). Checklist of Terms for Chapter 4 4.1 discrete random variable Bernoulli trial indicator simple random variable 4.2 probability distribution probability mass function binomial distribution Poisson distribution negative binomial distribution distribution function 4.3 expectation and expected value uniform distribution moments variance tail sum geometric distribution 4.4 conditional mass function conditional expectation 4.5
sequences of distributions Worked Examples and Exercises 137 4.6 Markov’s inequality Chebyshov’s inequality Jensen’s inequality Arithmetic-geometric means inequality.8 Example: Royal Oak Lottery This eighteenth-century lottery paid winners 28 to 1; the chance of winning at any given bet was 2−5, independently of other bets. Gamesters (as usual) complained that the odds were unfair. It is reported by de Moivre (in Doctrine of Chances, 1756) that the Master of the Ball maintained that any particular point of the Ball should come up once in 22 throws; he offered to bet on this (at evens) at any time, and did so when required. The seeming contradiction between the 2−5 chance at any bet, with 22 throws for any chance to come up, so perplexed the gamesters that they began to think they had the advantage; so they played on and continued to lose. Explain why there is no contradiction. Solution Let P be a point of the Ball. Let T be the number of trials required to yield P for the first time. At each trial, P fails to appear with probability 31/32, and T > k if and only if the first k trials do not yield P. Hence, by independence, P(T > k) = k. 31 32 Now, Hence, 22 31 32 0.49 < 0.5. P(T ≤ 22) > 0.5. ∞ [However, note that E(T ) = 0 P(T > k) = (1 − 31 32 )−1 = 32.] Thus, by betting on the event T ≤ 22, the Master of the Ball was giving himself a better than evens chance of winning. However, if we let W be the profit to the gambler of a $1 stake wagered on P turning up, we have P(W = 28) = 1 32. Hence, E(W ) = 28 32. A loss. 32 32 and P(W = −1) = 31 − 31 32 Thus, in the long run, the gambler will surely lose at a rate of nearly 10% of his stake = − 3 each play. (See Example 4.18 for a proof of this.) Remark bution of T is less than its mean. See Problem 4.51 for bounds on this difference. The Master of the Ball was exploiting
the fact that the median of the distri- Note that T has a geometric distribution. 138 4 Random Variables (1) (2) (3) Give an example of a distribution for which the median is larger than the mean. Find: (a) var (T ) and (b) µ(k) T. Exercise Exercise Exercise Which of the following strategies gives the gambler a better chance of winning if she takes up the offer of a bet on P not occurring in 22 trials: (a) Making such a bet immediately? (b) Waiting for a run of 22 trials during which P has not appeared? (c) Waiting until P has appeared in consecutive trials and then betting on its nonappearance in the following 22? (4) Exercise Calculate P(T > j + k|T > j). Explain the significance of your answer. 4.9 Example: Misprints Each printed character in a book is misprinted independently with probability p, or is correct with probability 1 − p. Let n be the number of characters in the book, and let X be the number of misprinted characters. (a) Find P(X = r ). (b) Show that E(X ) = np. (c) Suppose that E(X ) is fixed, and let A be the event that X = 0. Find E(X |A), and show that as n → ∞, E(X |A) → E(X )/(1 − exp [−E(X )]). (a) We provide two solutions. Solution I Because characters are misprinted independently, the probability that r given characters are misprinted and the remaining n − r are correct is pr (1 − p)n−r. Because there are ( n r ) distinct ways of fixing the positions of the r misprints, it follows that (1) P(X = r ) = n r pr (1 − p)n−r. This is the binomial distribution, which we met in Example 4.2.3. We some- Remark times denote it by B(n, p). II Consider the first character, and let M be the event that it is misprinted. Then, P(X = r ) = P(X = r |M)P(M) + P(X = r |M c)P(M c). We write P(X =
r ) = p(n, r ) and observe that if M occurs, then X = r if and only if there are r − 1 misprints in the remaining n − 1 characters. Hence, (2) p(n, r ) = p(n − 1, r − 1) p + p(n − 1, r )(1 − p) where p(n, 0) = (1 − p)n and p(n, n) = pn, n ≥ 0. Now the substitution p(n, r ) = pr (1 − p)n−r c(n, r ) gives c(n, r ) = c(n − 1, r − 1) + c(n − 1, r ), where c(n, 0) = c(n, n) = 1, n ≥ 0. We already know that this difference equation has the solution c(n, r ) = ( n r ), as required (recall Pascal’s triangle). (b) We consider two solutions. I Let m(n) be the expected number of misprints in n characters. Then, by Theorem 4.4.6, m(n) = E(X |M)P(M) + E(X |M c)P(M c) = (1 + m(n − 1)) p + m(n − 1)(1 − p), Worked Examples and Exercises 139 where we have used E(X |M) = r P(X = r |M) = r p(n − 1, r − 1), r r = p(n − 1, r − 1) + r r because misprints are independent, (r − 1) p(n − 1, r − 1) = 1 + m(n − 1). Hence, m(n) = m(n − 1) + p. Obviously, m(0) = 0, so this difference equation has solution (3) m(n) = np. n r II Using (a) n r m(n) = r =0 n = np r =1 (c) By definition, n − 1 r − 1 pr (1 − p)n−r = n r =1 np (n − 1)! (r − 1)!(n − r )! pr −1(1 − p)n−r pr −1(1 −
p)n−r = np by the binomial theorem. E(X |A) = n r =1 P(X = r ) P(X > 0) = np/ 1 − = np/(1 − (1 − p)n) n = E(X )/ 1 − 1 − np n → E(X )/(1 − exp (−E(X )) n 1 − E(X ) n as n → ∞. (4) (5) (6) (7) Show that var (X ) = np(1 − p) by two different methods. Show that as n → ∞, if E(X ) is fixed P(X = 0) → exp(−E(X )). Let X have the binomial distribution P(X = k) = ( n Exercise Exercise Exercise (a) For fixed n and p, for what value of k is P(X = k) greatest? (b) For fixed k and p, for what value of n is P(X = k) greatest? Exercise (a) The probability that X is even. π X )). (b) E(sin2( 1 2 (c) µ (k) X. If X has a binomial distribution with parameters n and p, find: k ) pk(1 − p)n−k. 4.10 Example: Dog Bites: Poisson Distribution (a) Let X be a binomial random variable with parameters n and p, such that np = λ. Show that for fixed k, as n → ∞, with λ fixed, P(X = k) → 1 k! λke−λ. 140 4 Random Variables (b) During 1979–1981, in Bristol, 1103 postmen sustained 215 dog bites. A total of 191 postmen were bitten, of whom 145 were bitten just once. Which should be the postman’s motto: “Once bitten, twice shy” or “Once bitten, twice bitten”? (8) Solution (a) Because X is binomial P(X = k) = n k Now for fixed k, as n → ∞ with λ fixed, pk(1 − p)n−k = n(n − 1)...
(n − k + 1) / n^k · (λ^k / k!) · (1 − λ/n)^{n−k}.
Now, for fixed k, as n → ∞ with λ fixed,
(n − j + 1)/n → 1 for 1 ≤ j ≤ k,  (1 − λ/n)^{−k} → 1,  and  (1 − λ/n)^{n} → e^{−λ}.
Hence, as n → ∞,
P(X = k) → (λ^k / k!) e^{−λ}.

(4) Remark   This is the Poisson distribution, which we met in Example 4.2.7.

(b) Suppose you were a postman, and let X be the number of your bites. If dogs bite any postman at random, then X is a binomial random variable with parameters 215 and (1103)^{−1}, because it may be thought of as a series of 215 trials in which a "success" is being bitten with probability (1103)^{−1} at each trial, independently of the rest. Hence,
P(X = 0) = (1 − 1/1103)^{215}
and
P(X = 1) = (215/1103)(1 − 1/1103)^{214}.
You may either compute these directly or recognise from (a) that the number of bites you get is approximately Poisson, with parameter λ = 215/1103 ≈ 0.195. So,
P(X = 0) ≈ e^{−λ} ≈ 0.82,
P(X = 1) ≈ λe^{−λ} ≈ 0.16,
P(X > 1) ≈ 1 − e^{−λ} − λe^{−λ} ≈ 0.02.
However, if we pick a postman at random and let X be the number of bites he sustained, we find that
P(X = 0) = 912/1103 ≈ 0.83,
P(X = 1) = 145/1103 ≈ 0.13,
P(X > 1) = 46/1103 ≈ 0.04.
It seems that "once bitten, twice bitten" should be the postman's motto.

(5) Remark   Our conclusion may be given more substance by investigating the extent to which the observed distribution differs from the expected Poisson distribution. Such techniques are known to statisticians as "goodness-of-fit tests," and an appropriate procedure here would use the χ2 test. This may be found in textbooks of elementary statistics; the motto is the same.
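The comparison in part (b) is easy to reproduce; here is a Python sketch (my addition) putting the exact binomial and approximating Poisson probabilities beside the observed proportions.

```python
from math import comb, exp, factorial

n, postmen = 215, 1103
p = 1 / postmen
lam = n * p                     # about 0.195

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

observed = [912 / postmen, 145 / postmen, 46 / postmen]   # X = 0, X = 1, X > 1

for k in (0, 1):
    print(k, round(binom_pmf(k), 3), round(poisson_pmf(k), 3), round(observed[k], 3))
print(">1", round(1 - binom_pmf(0) - binom_pmf(1), 3),
      round(1 - poisson_pmf(0) - poisson_pmf(1), 3), round(observed[2], 3))
```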
(1) Exercise   If bites are Poisson with parameter λ, what
is the probability that you get more than If bites are Poisson, what is your expected total number of bites given that you get at Exercise one bite, given that you get at least one? Exercise least one? Let X n have a binomial distribution with parameters n and p, such that np = λ, and Exercise let An be the event that X n ≥ 1. If Y is a Poisson random variable with parameter λ, show that as n → ∞, P(X n = k|An) → P(Y = k|Y ≥ 1). Exercise (a) For fixed λ, what value of k maximizes P(X = k)? (b) For fixed k, what value of λ maximizes P(X = k)? Exercise Let X have a Poisson distribution with parameter λ. Show that E(X ) = λ. If X has a Poisson distribution with parameter λ, find: (a) E(e X ) (b) E(cos(π X )) (c) var (X ) (d) µ (k) X. If X has a Poisson distribution with parameter λ, show that: Exercise (a) The probability that X is even is e−λcosh λ. (b) E(|X − λ|) = 2λλe−λ/(λ − 1)!, when λ is an integer greater than zero. 4.11 Example: Guesswork You are trying to guess the value of a proper integer valued random variable X, with probability mass function f (x) (which you know). If you underestimate by y, it will cost you $by; if you overestimate by y, it will cost you $ay. Your guess is an integer; what guess minimizes your expected loss? Solution (1) If you guess t, then your expected loss is L(t) = a (t − x) f (x) + b (x − t) f (x). x≤t x>t Substituting t + 1 for t in (1) gives an expression for L(t + 1), and subtracting this from (1) gives (2) L(t) − L(t + 1) = a f (x) + b x≤t x>t f (x) = −a F(t) + b(1 − F
(t)). = D(t) (say). Now limx→−∞ D(x) = b, limx→∞ D(x) = −a, and both −F(t) and 1 − F(t) are nonincreasing. Therefore, there is a smallest t such that D(t) = L(t) − L(t + 1) ≤ 0, and this is the guess that minimizes your expected loss. Hence, denoting this guess by ˆt, t: F(t) ≥ b ˆt = min by (2).!, a + b 142 4 Random Variables (3) (4) (5) Suppose that if you underestimate X you incur a fixed loss £b, whereas if you Exercise overestimate X by y it will cost you £ay. Find an expression that determines the guess that minimizes your expected loss. Find this best guess when x ≥ 1. x ≥ 1, p = 1 − q > 0. (a) P(X = x) = pq x−1; (b) P(X = x) = 1/(x(x + 1)); (c) P(X = x) = 1/(2n + 1); −n ≤ x ≤ n. Exercise What is your best guess if (a) L(t) = E(|X − t|)? (b) L(t) = E((X − t)2)? Icarus Airways sells m + n tickets for its n-seat aeroplane. Passengers fail to show up Exercise with probability p independently. Empty seats cost $c, and a passenger with a ticket who cannot fly is paid $b for being bumped. What choice of m minimizes the airline’s expected losses on booking errors? What level of compensation b would be sufficient to ensure that it was not worthwhile for the airline to overbook at all (for fixed p)? For fixed b, what value of p would entail no overbooking by the airline? 4.12 Example: Gamblers Ruined Again Alberich and Brunnhilde have a and b gold coins, respectively. They play a series of independent games in which the loser gives a gold piece to the winner; they stop when one of them has no coins remaining. If Alberich wins each game with probability p, �
��nd the expected number of games played before they stop. (Assume p = q = 1 − p.) (3) Solution Let X k be the number of games they will play when Alberich’s fortune is k, and let mk = E(X k). Clearly, m0 = ma+b = 0 because in each case one player has no coins. If A is the event that Alberich wins the first game, then for 0 < k < a + b, E(X k|A) = 1 + E(X k+1) = 1 + mk+1 because his fortune is then k + 1, and succeeding games are independent of A. Likewise, it follows that E(X k|Ac) = 1 + E(X k−1). Hence, by Theorem 4.4.6, mk = E(X k|A)P(A) + E(X k|Ac)P(Ac) = 1 + pmk+1 + qmk−1. Setting gives mk = k q − p + uk, uk = puk+1 + quk−1, for 0 < k < a + b. (1) (2) (3) (4) (5) (2) Worked Examples and Exercises 143 In particular u0 = 0, and ua+b = −(a + b)(q − p)−1. Proceeding as in Example 2.11, using Theorem 2.3.1, shows that uk = + (a + b+b Exercise What is the expected number of games played when p = 1 2? Let B be the event that Brunnhilde wins the entire contest. Find a difference equation Exercise satisfied by E(X k|B). Solve this in the case when p = 1 2. Exercise When the first game is over they redivide the a + b coins as follows. All the coins are tossed, one player gets those showing a head, the other gets all those showing a tail. Now they play a series of games as before. What is the expected number to be played until one or other player again has all the coins? What if p = 1 2? Exercise Alberich is blackmailing Fafner, so each time he loses his last gold coin, he immediately demands (and gets) one replacement coin, with
4.13 Example: Postmen
A and B are postmen. They start work on day 1. The probability that A sustains a dog bite on day n, given that he has not been bitten on any of the preceding days, is p_A(n). The corresponding probability for B is p_B(n). Let X_A and X_B, respectively, be the number of days until each sustains his first bite.
(a) Find P(X_A = n) and P(X_B = n).
(b) A is wary, so p_A(n) decreases as n increases. If p_A(n) = 1/(n + 1), find P(X_A = n) and show that E(X_A) = ∞, while P(X_A < ∞) = 1.
(c) B is complacent, so p_B(n) increases as n increases. If p_B(n) = 1 − e^{−λn} for some λ > 0, find P(X_B = n) and E(X_B).
Solution (a) Let H_k be the event that A is bitten on the kth day. Then the event that he is bitten for the first time on the nth day is (∩_{k=1}^{n−1} H_k^c) ∩ H_n. Hence,
P(X_A = n) = P((∩_{k=1}^{n−1} H_k^c) ∩ H_n) = p_A(n) ∏_{k=1}^{n−1} (1 − p_A(k)),
on iterating, conditioning on the preceding days one at a time. Likewise,
P(X_B = n) = p_B(n) ∏_{k=1}^{n−1} (1 − p_B(k)).
(b) Employing a similar argument,
P(X_A > n) = P(∩_{k=1}^{n} H_k^c) = ∏_{k=1}^{n} (1 − p_A(k)) = ∏_{k=1}^{n} (1 − 1/(k + 1)) = 1/(n + 1) → 0 as n → ∞.
Hence, P(X_A < ∞) = 1. Also,
P(X_A = n) = P(X_A > n − 1) − P(X_A > n) = 1/(n(n + 1)),
and finally E(X_A) = Σ_{n≥1} 1/(n + 1), which diverges. The expected time until A is first bitten is infinite.
(c) By the same argument,
P(X_B > n) = ∏_{j=1}^{n} (1 − p_B(j)) = ∏_{j=1}^{n} e^{−λj} = e^{−(λ/2)n(n+1)}.
Hence, P(X_B = n) = (1 − e^{−λn}) e^{−(λ/2)n(n−1)} and E(X_B) = Σ_{n≥0} e^{−(λ/2)n(n+1)} < ∞. B expects to be bitten in a finite time.
(1) Exercise In both cases (b) and (c), find the probability that the postman is first bitten on the jth day, given that he is bitten on or before day M.
(2) Exercise If A is less wary, so that p_A(n) = 2/(n + 2), show that E(X_A) is now finite, but var(X_A) diverges.
(3) Exercise In each case (b) and (c), given that the postman has not been bitten during the first m days, find the expected further time until he is bitten.
(4) Exercise If A is extremely wary and p_A(n) = 1/(n + 1)^2, show that with probability 1/2 he is never bitten. What is the median of the distribution of X_A in this case? Find the expectation of X_A, given that X_A is finite.
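The contrast between the wary postman of part (b) and the complacent postman of part (c) shows up clearly in a small computation. The sketch below tabulates P(X = n) directly from the conditional bite probabilities; the value λ = 0.5 and the truncation point are illustrative assumptions.

```python
import math

def first_bite_pmf(p, n_max):
    """P(X = n) for n = 1,...,n_max, where p(n) is the probability of a bite
    on day n given no bite earlier: P(X = n) = p(n) * prod_{k<n} (1 - p(k))."""
    pmf, survive = [], 1.0
    for n in range(1, n_max + 1):
        pmf.append(p(n) * survive)
        survive *= 1 - p(n)
    return pmf

n_max = 10_000
wary = first_bite_pmf(lambda n: 1 / (n + 1), n_max)                    # postman A, part (b)
complacent = first_bite_pmf(lambda n: 1 - math.exp(-0.5 * n), n_max)   # postman B, part (c)

# Total mass approaches 1 in both cases, but the truncated mean for A keeps
# growing (roughly like log n_max), while B's mean settles at a small value.
print(sum(wary), sum(n * q for n, q in enumerate(wary, 1)))
print(sum(complacent), sum(n * q for n, q in enumerate(complacent, 1)))
```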
4.14 Example: Acme Gadgets
This company has developed a new product. The demand for it is unknown, but it is assumed to be a random variable X, which is distributed uniformly on {0, 1, ..., N}. The gadgets have to be made in advance; each one sold makes a profit of $b, and each one made and left unsold represents a net loss of $c. How many should be made to maximize the expected profit?
Solution Suppose that m items are made. Then the total profit (negative profits are interpreted as losses) is
Y_m = bm if X ≥ m, and Y_m = bX − c(m − X) if m > X.
The expected profit is
E(Y_m) = bm P(X ≥ m) + Σ_{x=0}^{m−1} (bx − c(m − x))/(N + 1) = bm − m(m + 1)(b + c)/(2(N + 1)).
Now,
2(N + 1)(E(Y_{m+1}) − E(Y_m)) = (2N + 1)b − c − (2m + 1)(b + c),
so that the expected profit is largest when m = m̂, where
m̂ = max{0, ⌈(Nb − c)/(b + c)⌉}.
(1) Exercise Suppose that an unsatisfied customer represents a loss of $d. What now is the choice of m which maximizes expected profit?
(2) Exercise Suppose that the unknown demand X is assumed to have a geometric distribution with parameter p. Find the choice of m that maximizes the expected profit.
(3) Exercise Suppose the unknown demand X is a Poisson random variable with parameter λ. Show that the expected profit if they make m items is λ(b + c) − (b + c)λm+1 m! m 0 λk k! − mc, and that this is maximized by the value of m that minimizes (b + c)λm+1 m! m 0 λk k! + mc.
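The optimal production level can be confirmed by brute force. The sketch below evaluates E(Y_m) for every m with uniform demand and compares the maximizer with the ceiling formula obtained above; N, b, and c are illustrative values.

```python
from math import ceil

def expected_profit(m, N, b, c):
    """E(Y_m) with demand X uniform on {0,...,N}: profit b per gadget sold,
    net loss c per gadget made but left unsold."""
    total = 0.0
    for x in range(N + 1):
        total += (b * m if x >= m else b * x - c * (m - x)) / (N + 1)
    return total

N, b, c = 50, 3.0, 2.0                        # illustrative values
best = max(range(N + 1), key=lambda m: expected_profit(m, N, b, c))
m_hat = max(0, ceil((N * b - c) / (b + c)))   # the threshold derived above
print(best, m_hat)                            # the two should agree
```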
4.15 Example: Roulette and the Martingale
Suppose you are playing roulette; the wheel has a zero. The chance of winning on red is p < 1/2 and you bet at evens; if you win, you gain an amount equal to your stake. Your first bet is $1 on red. If it wins, you quit; if it loses, your second bet is $2 on red. If it wins you quit, and so on. Your nth bet is $2^{n−1} so long as you lose; as soon as you win you quit.
(a) Show that you are certain to win $1 every time you use this system.
(b) Find the expected size of your winning bet.
Now suppose the house limit is $2^L, so this must be
your last bet if you have not already won. 146 4 Random Variables (c) What is your expected gain when you stop? (d) Would you prefer large or small house limits? Remark This gambling system is the martingale. Avoid it unless you seek ruin! Solution Let T be the number of spins of the wheel until the outcome is first red. losses on the previous T − 1 spins are $ (a) Your bet on that spin is $2T −1, and because you win, you gain $2T −1. However, your T −1 1 ∞ 2k−1 = $2T −1 − 1. Because ∞ P(T = k) = p(1 − p)k−1 = 1, k=1 k=1 this means you are certain to win $1. (b) Because your winning bet is $2T −1, it has expected value E(2T −1) = ∞ k=1 2k−1 p(1 − p)k−1 = ∞, since 2(1 − p) > 1. (c) You win $1 if 1 ≤ T ≤ L + 1; otherwise, you lose $ gains are $γ, where L k=0 2k. Hence, your expected γ = P(T ≤ L + 1) − (2L+1 − 1)P(T > L + 1) = 1 − (2(1 − p))L+1. (d) Because your expected losses increase exponentially fast with L, you must hope the casino is sufficiently generous to have low limits. (1) (2) Exercise What difference does it make to these results if the wheel is fair? (That is, p = 1 Exercise With house limit $2L, what is the expected size of your winning bet, given that you do indeed win? What happens as L → ∞? (Remember to consider all three cases.), p = 1 2 2.) 4.16 Example: Searching (a) Let X be a positive integer valued random variable such that f (n) = P(X = n) is nonincreasing as n increases. Suppose that (g(x); x = 1, 2,...) is a function, taking positive integer values, such that for any k, g(x) = k for at most one positive integer x = rk. Show
that E(g(X )) ≥ E(X ). (b) You have lost a key. There are n places in which you might have mislaid it with respective probabilities ( pk; 1 ≤ k ≤ n). If you search the kth place once, you find the key with probability dk, if it is indeed there. (You can search any place any number of times.) How do you arrange your searching to minimize the expected time until you find the key? (Searches are successful independently of each other.) Solution value rk of X, (a) Consider the distribution of g(X ). Because g(X ) = k for at most one P(g(X ) ≤ n) = k P(g(X ) = k) = k P(X = rk) = k f (rk) Worked Examples and Exercises 147 where the final sum contains m ≤ n nonzero terms. If these are arranged in decreasing order as f (rk1) ≥ f (rk2) ≥ · · · ≥ f (rkm ), then f (rk1) ≤ f (1) f (rk2) ≤ f (2), and so on. Hence, summing these inequalities yields P(g(X ) ≤ n) ≤ P(X ≤ n), and so ∞ E(g(X )) = P(g(X ) > n) = ∞ (1 − P(g(X ) ≤ n)) 0 ∞ 0 ≥ (1 − P(X ≤ n)) = E(X ). (b) The probability that you find the key on the sth search of the r th room is 0 mr s = (1 − dr )s−1dr pr. To see this, note that the key has to be there (with probability pr ) and you have to fail to find it s − 1 times before you succeed. Let pk be the kth largest of the numbers (mr s; r ≥ 1, s ≥ 1). Then pk is a probability mass function and ( pk; k ≥ 1) is nonincreasing. Take this ordering as an order of search; that is, if mr s ≥ muv, then the sth search of the r th place precedes the vth search of the uth place. This search is consistent [the mth search
of a given place precedes the (m + 1)st for every m], and kpk is the expected number of searches required to find the key. By part (a), any other order yields greater expected duration of the search time, because the function g(x) is a permutation, and thus one–one. (1) (2) (3) The key is upstairs with probability 2 Show that you can arrange your searches so that the expected time to find the key is 3 or downstairs, with probability 1 3. Any search 4 if the key is there; any search downstairs is successful 4 if the key is there. How do you arrange your searches to minimize the expected Exercise finite. Exercise upstairs is successful with probability 1 with probability 3 number of searches? Suppose the sth search of the r th room (conditional on s − 1 previous unsuccessful Exercise searches of this room) discovers the key with probability dr s. How do you order your searches to minimize the expected number of searches? 4.17 Example: Duelling Pascal and Brianchon fight a series of independent bouts. At each bout, either Pascal is awarded a hit with probability p, or Brianchon is awarded a hit with probability q = 1 − p. The first to be awarded two consecutive hits is declared the winner and the duel stops. Let X be the number of bouts fought. Find the distribution and expected value of X. For what value of p is E(X ) greatest? 148 4 Random Variables Solution Let B be the event that Brianchon wins. Then, f X (n) = P({X = n} ∩ B) + P({X = n} ∩ Bc). For B to occur at the nth bout, he must win the nth and (n − 1)th bouts (with probability q 2), and the preceding n − 2 bouts must be awarded alternately to each contestant. The probability of this is p(n/2)−1q (n/2)−1 if n is even, or p(n/2)−(1/2)q (n/2)−(3/2) if n is odd, because bouts are independent. A similar argument applies if Bc occurs, yielding f X (n) p(n/2)−1q (n/2)−1(q 2 + p2)
p(n/2)−(1/2)q (n/2)−(1/2)(q + p) if n is even if n is odd. The expected value of X is then, by definition, ∞ E(X ) = p j−1q j−1(q 2 + p2)2 j + j=1 ∞ j=1 q j p j (q + p)(2 j + 1). Summing this series is elementary and boring. To get a solution in closed form, it is more fun to argue as follows. Let Ak be the event that Pascal is awarded the kth bout. Then, E(X ) = E(X |A1) p + E q, X |Ac 1 by conditioning on the outcome of the first bout. Now if Pascal is awarded the first bout but not the second, the state of the duel in respect of the final outcome is exactly the same as if he had lost the first bout, except of course that one bout extra has been fought. Formally, this says E X |A1 ∩ Ac 2 = 1 + E X |Ac 1. Hence, E(X |A1) = E(X |A1 ∩ A2) p + E X |A1 ∩ Ac Ac 1 Likewise, E X |Ac 1 = 2q + p(1 + E(X |A1)). Solving (2) and (3), and substituting into (1), yields E(X ) = 2 + qp 1 − qp. Because 2, this is greatest when p = 1 2 and then E(X ) = 3. Exercise What is P(B)? Exercise What is P(Bc)? Exercise Exercise Exercise Exercise to win three consecutive bouts. Given that Pascal wins, find the distribution and expected value of the number of bouts. Find P(A1|B) and P(A2|B). Find the median number of bouts when p = 1 2. Find P(B) and the expectation of the number of bouts fought if the winner is required (1) (2) (3) (4) (5) (6) (7) (8) (9) Worked Examples and Exercises 149 (10) Exercise Brianchon suggests that they adopt a different rule for deciding the
winner, viz: when first a player has been awarded a total number of bouts two greater than the number of bouts awarded to his opponent, then the match stops and the leading player wins. If p > q, do you think Brianchon was wise to suggest this? (Assume he wants to win.) What is the expected duration of this game when p = q? 4.18 Binomial Distribution: The Long Run Let X have a binomial distribution with parameters n and p, where p = 1 − q. Show that for λ > 0 and > 0, P(X − np > n) ≤ E(exp [λ(X − np − n)]). Deduce that as n → ∞, (You may assume without proof that for any x, 0 < ex ≤ x + ex 2.) P(|X − np| ≤ n) → 1. For k > np + n, when λ > 0, we have exp (λ(k − np − n)) > 1. Hence Solution k>n( p+) P(X = k) < < exp (λ(k − np − n))P(X = k) k>n( p+) exp (λ(k − np − n))P(X = k), because ex > 0, k = E(exp (λ(X − np − n))). Now, the left side is just P(X > np + n) and E(eλX ) = n 0 n k ( peλ)kq n−k = (q + peλ)n, so the right side is ( peλq + qe−λp)ne−λn ≤ ( peλ2q 2 + qeλ2 p2 ≤ exp (nλ2 − λn). )ne−λn, because ex ≤ x + ex 2 Now, choosing λ = /2 gives P(X − np > n) ≤ exp (−n2/4). Likewise, P(X − np < n) ≤ exp (−n2/4), so P(|X − np| > n) ≤ 2 exp (−n2/4) → 0 as n → ∞, as required. Exercise You want to ask each of a large number n of people a question to which the answer “yes” is so embarrassing that many individuals would falsely answer “no”. The answer “no” is
not embarrassing. The following procedure is proposed to determine the embarrassed fraction of the population. As the question is asked, a coin is tossed out of sight of the questioner. If the true answer would have been “no” and the coin shows heads, then the answer “yes” is given. Otherwise, people (1) (2) 150 4 Random Variables should respond truthfully. If the number responding “yes” is now Yn and “yes” is the true answer for a proportion p of the whole population, show that for > 0 $ $ $ $ P Yn n − 1 2 (1 + p) $ $ $ $ > ≤ 2 exp (−n2/4). Explain the advantages of this procedure. Exercise in n tosses, and An() the event that |X n/n − p| >, where 2 > > 0. Show that as n → ∞, Suppose a coin shows a head with probability p, and let X n be the number of heads ∞ P n Ak() → 0. 1 32, or trials, and An() the event Suppose a gambler wins $28 with probability Exercise loses his stake of $1 with probability 31 32 at each trial. Let Wn be his fortune after n such independent n=m An()) → 0. Deduce that his fortune is equal to its initial value on only finitely many occasions, with probability one. (Hint: recall Problem 1.24.) Note: In the following exercise, X is a binomially distributed random variable with parameters n and p. Exercise Exercise Show that for any fixed finite a and b, as n → ∞, P(a < X ≤ b) → 0. Show that for a > 0, |Wn/n + 3/32| >. Show that as n → ∞, P( ∞ that (3) (4) (5) (6(1 − p))1/2 a2n min {( p(1 − p))1/2, an1/2}. (7) Exercise (a) Show that if p = (m − 1)/n where m is an integer, then $ $ $ $ pm(1 − p)n−m+1. (b) Find var (|X/n − p|). (8) Exercise If n = 2m and p = 1 2, show
that P(X − m = k) = where, as m → ∞, (a(m, k))m → e−k2. Also, show that 2m m 1 4m a(m, k) < 2m m 1 2m 1 2 1 4m < 1 (2m + 1) 1 2. [You may assume that |log(1 + x) − x| < x 2 for small enough x.] 4.19 Example: Uncertainty and Entropy Let X and Y be simple random variables taking values in the same set {x1,..., xn}, with respective probability mass functions f X (.) and fY (.). Show that (1) −E(log fY (X )) ≥ −E(log f X (X )), Problems 151 and −E(log f X (X )) ≤ log n, with equality in (1) if and only if fY (.) ≡ f X (.), and equality in (2) if and only if f X (xi ) = n−1 for all xi. (Hint: Show first that log x ≤ x − 1.) Solution By definition, for x > 0, − log x = 1 x y−1 dy ≥ 1 x dy = 1 − x, (2) (3) with equality if and only if x = 1. Hence, E(log f X (X )) − E(log fY (X )) = f X (xi ) log f X (xi ) − i f X (xi ) log fY (xi ) f X (xi ) log[ fY (xi )/ f X (xi )] f X (xi )[1 − fY (xi )/ f X (xi )] by (3, with equality iff f X (xi ) = fY (xi ) for all xi, which proves (1). In particular, setting fY (xi ) = n−1 yields (2). It is conventional to denote −E(log f X (X )) by H (X ) [or alternatively h(X )] Remark and the logarithms are taken to base 2. The number H (X ) is known as the uncertainty or entropy of X, and is an essential tool in information theory and communication theory. The result (1) is sometimes called the Gibbs inequality. (4) Exercise Let f X (x
) = ( n x ) px (1 − p)n−x ; 0 ≤ x ≤ n. Show that H (X ) ≤ −n( p log p + (1 − p) log(1 − p)), with equality if n = 1. Exercise (5) Let f X (x) = pq x−1/(1 − q M ), for 1 ≤ x ≤ M, where p = 1 − q. Show that H (X ) = − p−1[ p log p + (1 − p) log(1 − p)]. lim M→∞ (6) Exercise Let Y = g(X ) be a function of the random variable X. Show that for any c > 0 H (Y ) ≤ H (X ) ≤ cE(Y ) + log e−cg(xi ). When does equality hold box contains 12 sound grapefruit and four that are rotten. You pick three at random. (a) Describe the sample space. (b) Let X be the number of sound grapefruit you pick. Find f X (x) and E(X ). 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 152 4 Random Variables Show that the expected number of pairs in your poker hand is about 0.516. You roll a die once. What is the variance of your score? What is the variance of a uniform random variable? For each of the following functions f (x) (defined on the positive integers x = 1, 2,...), find: (a) The value of c for which f (x) is a probability mass function. (b) The expectation (i) (ii) (iii) f (x) = c.2x /x! f (x) = cpx ; f (x) = cpx x −1iv) (v) f (x) = cx −2 f (x) = c[x(x + 1)]−1 If X is a random variable, explain whether it is true that X + X = 2X and X − X = 0. Are 0 and 2X random variables? For what value of c is f (x) = c(x(x + 1)(x + 2))−1; 1 ≤ x ≤ M, a probability mass function? Find its expectation E(X ). Find the limit of c and E(X )
as M → ∞. A fair coin is tossed repeatedly. Let An be the event that three heads have appeared in consecutive tosses for the first time on the nth toss. Let T be the number of tosses required until three consecutive heads appear for the first time. Find P(An) and E(T ). Let U be the number of tosses required until the sequence HTH appears for the first time. Can you find E(U )? You choose a random number X as follows. Toss a coin repeatedly and count the number of tosses until it shows a head, N say. Then pick an integer at random in 1, 2,..., 10N. Show that P(X = k) = 1 19. 1 20d−1, a E((X − a)2) = var (X ). where d is the number of digits in the decimal expansion of k. What is E(X )? Let X have a Poisson distribution f (k), with parameter λ. Show that the largest term in this distribution is f ([λ]). Show that if E(X 2) < ∞, min Let f1(x) and f2(x) be probability mass functions. Show that if 0 ≤ p ≤ 1, then f3(x) = p f1(x) + (1 − p) f2(x) is a probability mass function. Interpret this result. Let X be a geometric random variable. Show that, for n > 0 and k > 0, P(X > n + k|X > n) = P(X > k). Let X be a random variable uniform on 1 ≤ x ≤ m. What is P(X = k|a ≤ X ≤ b)? In particular find P(X > n + k|X > n). A random variable is symmetric if for some a and all k, f (a − k) = f (a + k). Show that the mean and a median are equal for symmetric random variables. Find a nonsymmetric random variable for which the mean and median are equal. If X is symmetric about zero and takes integer values, find E(cos(π X )) and E(sin(π X )). Let X have distribution function F. Find the distribution of Y = a X + b and of Z = |X |
. Let X have a geometric distribution such that P(X = k) = q k−1 p; k ≥ 1. Show that E(X −1) = log( p(1/ p−1). (a) Let X have a Poisson distribution with parameter λ. Show that E(1/(X + 1)) = λ−1(1 − e−λ), and deduce that for all λ, E(1/(X + 1)) ≥ (E(X + 1))−1. When does equality hold? (b) Find E(1/(X + 1)) when P(X = k) = (−k−1 pk)/ log(1 − p); k ≥ 1. Fingerprints are of a given type, has a Poisson distribution with some parameter λ. (a) Explain when and why this is a plausible assumption. (b) Show that P(X = 1|X ≥ 1) = λ(eλ − 1)−1. It is assumed that the number X of individuals in a population, whose fingerprints Problems 153 (c) A careless miscreant leaves a clear fingerprint of type t. It is known that the probability that any randomly selected person has this type of fingerprint is 10−6. The city has 107 inhabitants and a citizen is produced who has fingerprints of type t. Do you believe him to be the miscreant on this evidence alone? In what size of city would you be convinced? Initially urn I contains n red balls and urn II contains n blue balls. A ball selected randomly from urn I is placed in urn II, and a ball selected randomly from urn II is placed in urn I. This whole operation is repeated indefinitely. Given that r of the n balls in urn I are red, find the mass function of R, the number of red balls in urn I after the next repetition. Show that the mean of this is r + 1 − 2r/n, and hence find the expected number of red balls in urn I in the long run. A monkey has a bag with four apples, three bananas, and two pears. He eats fruit at random until he takes a fruit of a kind he has eaten already. He throws that away and the bag with the rest. What
is the mass function of the number of fruit eaten, and what is its expectation? 21 22 23 Matching Consider the matching problem of Example 3.17. Let µ(k) be the kth factorial moment of the number X of matching letters, µ(k) = E(X (X − 1)... (X − k + 1)). Show that µ(k) = 1; 0; k ≤ n k > n. 24 25 26 27 Suppose an urn contains m balls which bear the numbers from 1 to m inclusive. Two balls are removed with replacement. Let X be the difference between the two numbers they bear. (a) Find P(X ≤ n). (b) Show that 0 ≤ x ≤ 1. then P(|X | ≤ n) → 1 − (1 − x)2; is fixed as m → ∞, if n/m = x (c) Show that E|X |/m → 1 3. Suppose the probability of an insect laying n eggs is given by the Poisson distribution with mean µ > 0, that, is by the probability distribution over all the nonnegative integers defined by pn = e−µµn/n! (n = 0, 1, 2,...), and suppose further that the probability of an egg developing is p. Assuming mutual independence of the eggs, show that the probability distribution qm for the probability that there are m survivors is of the Poisson type and find the mean. Preparatory to a camping trip, you can buy six cans of food, all the same size, two each of meat, vegetables, and fruit. Assuming that cans with the same contents have indistinguishable labels, in how many distinguishable ways can the cans be arranged in a row? On the trip, there is heavy rain and all the labels are washed off. Show that if you open three of the cans at random the chance that you will open one of each type is 2 5. If you do not succeed, you continue opening cans until you have one of each type; what is the expected number of open cans? A belt conveys tomatoes to be packed. Each tomato is defective with probability p, independently of the others. Each is inspected with probability r ; inspections are also mutually independent. If a tomato is defective and inspected, it is rejected. (a) Find the probability that the nth tomato is the kth defective tomato.
(b) Find the probability that the nth tomato is the kth rejected tomato. (c) Given that the (n + 1)th tomato is the first to be rejected, let X be the number of its predecessors that were defective. Find P(X = k), the probability that X takes the value k, and E(X ). 28 Mr. Smith must site his widget warehouse in either Acester or Beeley. Initially, he assesses the probability as p that the demand for widgets is greater in Acester, and as 1 − p that it is greater in Beeley. The ideal decision is to site the warehouse in the town with the larger demand. The cost of the wrong decision, because of increased transport costs, may be assumed to be £1000 if Acester is the correct choice and £2000 if Beeley is the correct choice. Find the expectations of these costs 154 4 Random Variables for each of the two possible decisions, and the values of p for which Acester should be chosen on the basis of minimum expected cost. Mr. Smith could commission a market survey to assess the demand. If Acester has the higher demand, the survey will indicate this with probability 3 4 and will indicate Beeley with probability 4. If Beeley has the higher demand the survey will indicate this with probability 2 1 3 and will indicate 3. Show the probability that the demand is higher in Acester is 9 p/(4 + 5 p) Acester with probability 1 if the survey indicates Acester. Find also the expected cost for each of the two possible decisions if the survey indicates Acester. If the survey indicates Acester and p < 8/17, where should Mr. Smith site the warehouse? A coin is tossed repeatedly and, on each occasion, the probability of obtaining a head is p and the probability of obtaining a tail is 1 − p (0 < p < 1). (a) What is the probability of not obtaining a tail in the first n tosses? (b) What is the probability pn of obtaining the first tail at the nth toss? (c) What is the expected number of tosses required to obtain the first tail? The probability of a day being fine is p if the previous day was fine and is p if the previous day was wet. Show that, in a consecutive sequence of days, the probability un that the nth is fine
satisfies un = ( p − p)un−1 + p, n ≥ 2. Show that as n → ∞, un → p(1 − p + p)−1. By considering the alternative possibilities for tomorrow’s weather, or otherwise, show that if today is fine the expected number of future days up to and including the next wet day is 1/(1 − p). Show that (today being fine) the expected number of future days up to and including the next two consecutive wet days is (2 − p)/((1 − p)(1 − p)). Cars are parked in a line in a parking lot in order of arrival and left there. There are two types of cars, small ones requiring only one unit of parking length (say 15 ft) and large ones requiring two units of parking length (say 30 ft). The probability that a large car turns up to park is p and the probability that a small car turns up is q = 1 − p. It is required to find the expected maximum number of cars that can park in a parking length of n units, where n is an integer. Denoting this number by M(n) show that: (a) M(0) = 0 (b) M(1) = 1 − p (n ≥ 2) (c) M(n) − q M(n − 1) − pM(n − 2) = 1, Show that the equations are satisfied by a solution of the form M(n) = Aαn + Bβ n + Cn, where α, β are the roots of the equation x 2 − q x − p = 0, and A, B, C are constants to be found. What happens to M(n) as n → ∞? The probability that the postman delivers at least one letter to my house on any day (including Sundays) is p. Today is Sunday, the postman has passed my house and no letter has been delivered. (a) What is the probability that at least one letter will be delivered during the next week (including next Sunday)? (b) Given that at least one letter is delivered during the next week, let X be the number of days until the first is delivered. What is f X (x)? (c) What is the expected value of X? (d) Suppose that all the conditions in the �
��rst paragraph hold, except that it is known that a letter will arrive on Thursday. What is the expected number of days until a letter arrives? A gambler plays two games, in each of which the probability of her winning is 0.4. If she loses a game she loses her stake, but if she wins she gets double her stake. Suppose that she stakes a in the first game and b in the second, with a + b = 1. Show that her expected loss after both games is 0.2. Suppose she plays again, but now the stake in the first game buys knowledge of the second, so that the chance of winning in the second is ap (≤1). Show that the value of a which gives the greatest expected gain is 0.5 + 0.2/ p. 29 30 31 32 33 34 35 36 37 38 39 40 41 Problems 155 Let f1(X ) and f2(X ) be functions of the random variable X. Show that (when both sides exist) 2 ). Deduce that P(X = 0) ≤ 1 − [E(X )]2/E(X 2). [E( f1 f2)]2 ≤ E( f 2 (Recall that at 2 + 2bt + c has distinct real roots if and only if b2 > ac.) 1 )E( f 2 r P(X = r ) = 1. Any oyster contains a pearl with probability p independently of its fellows. You have a tiara that requires k pearls and are opening a sequence of oysters until you find exactly k pearls. Let X be the number of oysters you have opened that contain no pearl. (a) Find P(X = r ) and show that (b) Find the mean and variance of X. (c) If p = 1 − λ/k, find the limit of the distribution of X as k → ∞. A factory produces 100 zoggles a day. Each is defective independently with probability p. If a defective zoggle is sold, it costs the factory £100 in fines and replacement charges. Therefore, each day 10 are selected at random and tested. If they all pass, all 100 zoggles are sold. If more than one is defective, then all 100 zoggles are scrapped. If one is defective, it is scrapped and a further sample of size 10 is taken. If any are defective
, the day’s output is scrapped; otherwise, 99 zoggles are sold. (a) Show that scrapping the day’s output the probability r of not (1 − p)10(1 + is 10 p(1 − p)9). (b) If testing one zoggle costs £10, find the expected cost of a day’s testing. (c) Find the expected returns on a day’s output in terms of the profit b of a sold zoggle and cost c of a scrapped zoggle. An urn contains two blue balls and n − 2 red balls; they are removed without replacement. (a) Show that the probability of removing exactly one blue ball in r − 1 removals is 2(r − 1)(n − r + 1) n(n − 1). (b) Show that the probability that the urn first contains no blue balls after the r th removal is 2(r − 1) n(n − 1). (c) Find the expected number of removals required to remove both blue balls. Suppose that n dice are rolled once; let X be the number of sixes shown. These X dice are rolled again, let Y be the number of sixes shown after this second set of rolls. (a) Find the distribution and mean of Y. (b) Given that the second set of rolls yielded r sixes, find the distribution and mean of X. Pascal and Brianchon now play a series of games that may be drawn (i.e., tied) with probability r. Otherwise, Pascal wins with probability p or Brianchon wins with probability q, where p + q+ r = 1. (a) Find the expected duration of the match if they stop when one or other wins two consecutive games. Also, find the probability that Pascal wins. (b) Find the expected duration of the match if they stop when one or other wins two successive games of the games that are won. (That is, draws are counted but ignored.) Find the probability that Pascal wins. If you were Brianchon and p > q, which rules would you rather play by? Let the random variable X have a geometric distribution, P(X = k) = q k−1 p; k ≥ 1. Show that for t > 0, P(X ≥ a + 1) ≤ pe−ta(
1 − qet )−1. Deduce that P(X ≥ a + 1) ≤ (a + 1) p[q(a + 1)a−1]a, and compare this with the exact value of P(X ≥ a + 1). An archer shoots arrows at a circular target of radius 1 where the central portion of the target inside radius 1 4 is called the bull. The archer is as likely to miss the target as she is to hit it. When the 156 4 Random Variables archer does hit the target, she is as likely to hit any one point on the target as any other. What is the probability that the archer will hit the bull? What is the probability that the archer will hit k bulls in n attempts? Prove that the mean number of bulls that the archer hits in n attempts is n/32. Show that if the archer shoots 96 arrows in a day, the probability of her hitting no more than one bull is approximately 4e−3. Show that the average number of bulls the archer hits in a day is 3, and that the variance is approximately (63 Prove Chebyshov’s inequality that, for a random variable X with mean µ and variance σ 2, 3/64)2. √ P(|X − µ| ≤ hσ ) ≥ 1 − 1 h2, for any h > 0. When an unbiased coin is tossed n times, let the number of tails obtained be m. Show that 0.4 ≤ m n when n ≥ 100. Given that n = 100, show that P ≤ 0.6 ≥ 0.75 P 0.49 ≤ m n ≤ 0.51 √ % (2π))−1. 3(5 (You may assume Stirling’s formula that n! An ambidextrous student has a left and a right pocket, each initially containing n humbugs. Each time she feels hungry she puts a hand into one of her pockets and if it is not empty, takes a humbug from it and eats it. On each occasion, she is equally likely to choose either the left or the right pocket. When she first puts her hand into an empty pocket, the other pocket contains H humbugs. (2π)nn+1/2e−n when n is large.) Show that if ph is the probability that H = h, then 2n − h n ph = 1 22n
−h, n h=0(n − h) ph, or otherwise. and find the expected value of H, by considering You insure your car. You make a claim in any year with probability q independently of events in other years. The premium in year j is a j (where a j < ak for k < j), so long as no claim is made. If you make a claim in year k, then the premium in year k + j is a j as long as no further claim is made, and so on. Find the expected total payment of premiums until the first claim. A Scotch die has two are faces bearing tartan patterns: Meldrum, and one is Murray. Show that the expected number of times you must roll the die before all three patterns have appeared is 7.3. Tail Sums Let X ≥ 0 be integer valued. Use the indicator I (X > k) to prove that are McDiarmid, three EX = P(X > k), and k ≥ 0 EX r = k ≥ 0 r kr −1P(X > k). Coupon Collecting: Example (4.3.15) Revisited Let X n be the number of coupons collected until you first obtain a coupon that is a duplicate of one you already possess. Find P(X n = k) and deduce that n+1 (a) k=2 (b) EX n = = 1. k − 1 nk n! (n − k + 1)! n n! (n − k)! k=0 n−k. 42 43 44 45 46 47 Problems 157 48 49 50 51 Let (xi ; 1 ≤ i ≤ n) be a collection of positive numbers. Show that −1 ≤ 1 n n i=1 1 xi n xi i=1 1/n. If (yi ; 1 ≤ i ≤ n) is any ordering of (xi ; 1 ≤ i ≤ n), show that Let X have finite variance, and set ν(x) = E(X − x)2. Show that Eν(X ) = 2varX. Let X have mean µ, variance σ 2, |µ − m| < σ. and median m. Use (4.6.4) ≥ n. xi yi n i=1 to show that 5 Random Vectors: Independence and Dependence Wherever there is
wealth there will be dependence and expectation. Samuel Johnson [The Rambler, 189]
5.1 Joint Distributions
Commonly, each outcome of an experiment generates two (or more) real numbers of interest. We can treat these as individual random variables (X_i; 1 ≤ i ≤ n), but it is often important to consider their joint behaviour. For example, if the experiment is your visit to your doctor, you may find out your height H and weight W. These are separate random variables, but are often informative when considered jointly. Thus, the outcome H = 150 cm and W = 150 kg might disturb your physician, whereas the outcome H = 190 cm and W = 80 kg probably would not. Likewise, the random vector comprising height, weight, age, sex, blood pressure, and heart rate is of more use considered jointly than separately. As another example, complicated systems (e.g., space shuttles) have several on-board computers that work together to run the system. If one fails or makes an error, the others can override it; thus, the system fails only when a majority of the computers fail. If X_i is the time until the ith processor fails, then the time until the system fails depends jointly on the collection of random variables X_1, ..., X_n.
It is natural to refer to such a collection as a random vector, and write X = (X_1, X_2, ..., X_n). Formally, as before, we have X = X(ω), ω ∈ Ω, and A_x = {ω : X(ω) = x} ∈ F, but we do not often refer to the underlying sample space. Because X maps into a countable subset S of R^n, we think of S as the sample space. (You may well have already been doing this instinctively in Chapter 4.) For simplicity, we summarize the properties of random vectors in two dimensions; the appropriate generalizations in more dimensions are straightforward.
Definition Let X and Y be two discrete random variables taking values (x_i; i ≥ 1) and (y_j; j ≥ 1), respectively. Their joint probability mass function f(x, y) is defined by
(1) f(x, y) = P(X = x, Y = y)
as x and y range over all possible values x_i and y_j of X and Y.
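As a concrete illustration of the definition, the following sketch builds the joint mass function of the number of heads and the number of tails in two tosses of a biased coin by enumerating outcomes, checks that it sums to one, and recovers the marginal mass functions by summing over the other variable, as in equations (5) and (6) below. The value p = 0.6 is an illustrative assumption.

```python
from itertools import product
from collections import defaultdict

# Joint mass function of X = number of heads and Y = number of tails in two
# tosses of a coin with P(head) = p, built by enumerating the four outcomes.
p = 0.6                      # illustrative value
f = defaultdict(float)
for tosses in product("HT", repeat=2):
    prob = 1.0
    for t in tosses:
        prob *= p if t == "H" else 1 - p
    f[(tosses.count("H"), tosses.count("T"))] += prob

print(sum(f.values()))       # 1.0: the joint mass function is not defective
f_X = {x: sum(v for (i, j), v in f.items() if i == x) for x in (0, 1, 2)}
f_Y = {y: sum(v for (i, j), v in f.items() if j == y) for y in (0, 1, 2)}
print(dict(f), f_X, f_Y)     # marginals found by summing over the other variable
```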
The mass function f(x, y) is zero except at a countable set of points in R^2. In fact, f(x, y) ≥ 0 for all x and y, and further, if Σ_{i,j} f(x_i, y_j) = 1, then the joint distribution f(x, y) is not defective. Most important is the Key Rule:
(2) P((X, Y) ∈ C) = Σ_{(x,y)∈C} f(x, y).
(3) Example Suppose that a coin is tossed twice; let X be the total number of heads shown and Y the total number of tails. Then (X, Y) takes values in S = {0, 1, 2} × {0, 1, 2} = {(i, j) : i ∈ {0, 1, 2}, j ∈ {0, 1, 2}}. Clearly, f(x, y) is zero except at the points (0, 2), (1, 1), and (2, 0). Furthermore, f(0, 2) + f(1, 1) + f(2, 0) = (1 − p)^2 + 2p(1 − p) + p^2 = 1, where we have denoted the probability of a head by p, as usual.
Any real function g(X, Y) of two such jointly distributed random variables is itself a random variable. If we set Z = g(X, Y), then Z has a probability mass function given by using the Key Rule (2) above:
(4) f_Z(z) = P(g(X, Y) = z) = Σ f(x, y), where the summation is over all x and y such that g(x, y) = z.
In particular, if g(x, y) = x, we have
(5) f_X(x) = Σ_y f(x, y),
and if g(x, y) = y, we have
(6) f_Y(y) = Σ_x f(x, y).
Thus, we have shown the important result that, if we know the joint mass function of several random variables, we can find all their separate mass functions. When obtained in this way, f_X(x) and f_Y(y) are sometimes called marginal mass functions. Here are some examples illustrating joint mass functions.
(7) Example
A row of n numbered machines are producing components that are identical, except for the serial number. On any day, the kth component produced by the jth machine bears the serial number ( j, k). On the day in question, the r th machine produces cr (1 ≤ r ≤ n) components, and at the end of the day one component C is picked at random from all those produced. Let its serial number be (X, Y ). Find f (x, y), f X (x) and fY (y). 160 Solution 5 Random Vectors: Independence and Dependence Because C is picked at random from all n −1 f (x, y) = cr, 1 ≤ x ≤ n; 1 ≤ y ≤ cx, n r =1 cr components, we have Then, by (5), = a 1 (say). f X (x) = Now define the function H (i, j) = Then, by (6), 1 0 y f (x, y) = acx. if ci ≥ c j, otherwise. fY (y) = x f (x, y) = a n x=1 H (x, y). (8) Example: Cutting for the Deal It is customary, before engaging in a card game, to cut for the deal; each player removes a portion of the deck in turn, and then each reveals the bottom card of his segment. The highest card wins. For these to be random variables, we need to assign numerical values to the court cards, so we set J = 11, Q = 12, K = 13, A = 14, when aces are high. (a) Art and Bart cut for deal, aces high. Let X be Art’s card, and Y be Bart’s card. Find the joint mass function of X and Y. Does it make any difference how many cards Art removes from the deck? (b) Let V be the loser’s card, and W the dealer’s (winning) card. Find the joint mass function of V and W, and the separate mass functions of V and W. (c) Find the mass function of the dealer’s winning margin (namely, W − V ). (d) What is the mass function of the dealer’s card when three players cut for deal? Note that in the event of a tie, the deck is shuffled and the
players cut again to choose the dealer. (a) Each random variable takes values in {2, 3..., 14}. Cutting the deck Solution twice amounts to selecting two cards at random, and because ties are not allowed, X = Y. By symmetry, any two unequal values are equally likely to occur, so f X,Y (x, y) = P(X = x, Y = y) = . 1 13 1 12, y ≤ 14. It makes no difference how many cards Art removes with his cut. (b) Of course, W > V, so fV,W (v, w) = P(V = v, W = w) = P(X = v, Y = w) + P(X = w, Y = v) = 2 12 0 · · 1 13 = 1 78 ; ; 2 ≤ v < w ≤ 14 otherwise. 5.1 Joint Distributions 161 This is otherwise obvious, because the experiment amounts to choosing an unordered pair of unequal cards at random, with equal probability of choosing any pair. Hence, for v < w, f (v, w) = ( 13 2 )−1, as above. Now, by (5), 14 = 14 − v 78 fV (v) = 1 78 w=v+1 ; 2 ≤ v ≤ 13. Then by (6), (c) By (4), fW (w) = w−1 2 1 78 = w − 2 78 ; 3 ≤ w ≤ 14. f Z (z) = P(W − V = z) = 1 78, where the summation is over all v and w such that w − v = z. Because z ≤ v < w ≤ 14, there are exactly 13 − z terms in this sum, so P(W − V = z) = 13 − z 78 ; 1 ≤ z ≤ 12. (d) Arguing as we did for (b), where now u < v < w, we have −1 P(U = u, V = v, W = w) = 13 3. Hence, fW (w) = −1 13 3 ; −1 2≤u<v<w = (v − 2) 3≤v<w 13 3 4 ≤ w ≤ 14, = 1 2 (w − 3)(w − 2) −1 13 3. (9
) Example For what value of c is the function f (x, y) = c x + y − 1 x λx µy, x ≥ 0; y ≥ 1 joint mass function? For this value of c, find the mass functions of X and Y. Solution By (2), c−1 = = x + y − 1 x λx µy = ∞ y=1 µy (1 − λ)y by (3.6.12), ∞ ∞ y=1 x=0 µ 1 − λ − µ. 162 5 Random Vectors: Independence and Dependence Then, by (5), f X (x) = c µλx ∞ y= µy−1 = (1 − λ − µ)λx (1 − µ)x+1, x ≥ 0. Likewise, fY (y) = c ∞ x=0 x + y − 1 x λx µy = (1 − λ − µ)µy−1 (1 − λ)y, y ≥ 1. Thus, X + 1 and Y are both geometric, with parameters taking values in the nonnegative integers and Y in the positive integers. λ 1−µ and µ 1−λ, respectively, X (10) Example Leif and Rolf are bored with fair games. They want to play a game in which the probability of winning (for Leif) is λ, where λ is an arbitrary number in [0, 1]. Also, they want the game to be of finite duration with probability 1. Unfortunately, the only gaming aid they have is a fair coin. Can you supply them with a game? Solution Let λ have binary expansion λ = 0.b1b2b3... = ∞ n=1 bn2−n. Now toss the coin repeatedly and let In be the indicator of the event that the nth toss is a head. Let T be the first toss such that In = bn, T = min{n: In = bn}. If IT < bT, then Leif wins; otherwise, Rolf wins. Now, P(T = n) = ( 1 2 )n so that P(T < ∞) = ∞ 1 1 2 n = 1 and indeed E(T ) = 2. Also, Leif can only
win at the nth toss if bn = 1 so P(Leif wins) = bnP(T = n) = bn2−n = λ, as required. n n 5.2 Independence Given the joint mass function of X and Y, equations (5.1.5) and (5.1.6) yield the marginal mass functions of X and Y. However, to be given the marginal distributions does not in general uniquely determine a joint distribution. (1) Example Let X and Y have joint mass function given by f (0, 0) = 1 6, f (1, 0) = 1 12, f (0, 1) = 1 3, f (1, 1) = 5 12, and let U and V have joint mass function given by f (0, 0) = 1 4, f (0, 1) = 1 4, f (1, 0) = 0, f (1, 1) = 1 2. 5.2 Independence 163 Then, summing to get the marginal mass functions shows that:, f X (1) = fU (1) = 1 f X (0) = fU (0) = 1 2 2 ; and fY (0) = fV (0) = 1 4, fY (1) = fV (1) = 3 4. These marginal mass functions are the same, but the joint mass functions are different. There is one exceptionally important special case when marginal mass functions do determine the joint mass function uniquely. (2) Definition Random variables X and Y are independent if, for all x and y, f (x, y) = f X (x) fY (y). This is equivalent to P(A ∩ B) = P(A)P(B), where A = {ω: X (ω) = x} and B = {ω: X (ω) = y}, which is the definition of independence for the events A and B. More generally, a collection (Xi ; 1 ≤ i ≤ n) with mass function f is independent if for all x = (x1,..., xn) (3) (4) f (x) = n i=1 f Xi (xi ). Note that if X or Y (or both) are improper random variables [so that f (x, y) is defective], then to say they are independent
is interpreted as meaning P(X = x, Y = y) = P(X = x)P(Y = y) for all finite x and y. This may seem odd, but such random variables occur quite naturally in simple random walks and other topics. Example 5.1.7 Revisited Recall that n machines produce components. Suppose that all the machines produce the same number c of components, and as before we pick one at random and let its serial number be (X, Y ), where X is the machine number and Y is the component index. Then, f (x, y) = (nc)−1; 1 ≤ x ≤ n; 1 ≤ y ≤ c and f X (x and fY (y. Obviously, f (x, y) = f X (x) fY (y) and so X and Y are independent in this case. 164 5 Random Vectors: Independence and Dependence (5) Example 1 Revisited Observe that the mass functions of X and Y, and of U and V, do not satisfy Definition 2. Let W and Z be independent random variables such that, f Z (1) = 3 fW (0) = 1 4. 2 Then, by Definition 2, their joint mass function is 2 and f Z (0) = 1, fW (1) = 1 4 f (0, 0) = 1 8, f (0, 1) = 3 8, f (1, 0) = 1 8, f (1, 1) = 3 8. (6) Example 5.1.9 Revisited Observe that X and Y are not independent because )λx µy cannot be expressed in the form f X (x) fY (y). If X + 1 and Y were indeµ 1−λ, then the joint mass ( x+y−1 x pendent geometric random variables with parameters function would be f (x, y−1, x ≥ 0; y ≥ 1. λ 1−µ and µ 1 − λ The apparently simple Definition 2 implies a great deal more about independent random variables, as the following result shows. (7) (8) Theorem (a) For arbitrary countable sets A and B, Let X and Y be independent random variables. Then: P(X ∈ A, Y ∈ B) = P(
X ∈ A)P(Y ∈ B), and (b) For any real functions g(·) and h(·), g(X ) and h(Y ) are independent. Proof (a) The left-hand side of (8) is P(X = x, Y = y) = P(X = x)P(Y = y) by independence x∈A y∈B y∈B P(X = x) = x∈A x∈A y∈B P(Y = y) = P(X ∈ A)P(Y ∈ B), as required. For (b), let A = {x: g(X ) = ξ } and B = {y: h(Y ) = η}. Then, by part (a), for any ξ and η, P(g(X ) = ξ, h(Y ) = η) = P(X ∈ A, Y ∈ B) = P(X ∈ A)P(Y ∈ B) = P(g(X ) = ξ )P(h(Y ) = η), as required. (9) Example Independent random variables X and Y take the values −1 or +1 only, and P(X = 1) = a, P(Y = 1) = α. A third random variable Z is defined by Z = cos((X + Y ) π 2 ). If 0 < a, α < 1, show that there are unique values of a and α such that X and Z are independent, and Y and Z are independent. In this case, are X, Y, and Z independent? 5.3 Expectation 165 Solution First, for Z, P(Z = 1) = P(X + Y = 0) = a(1 − α) + α(1 − a) and, likewise, P(Z = −1) = aα + (1 − a)(1 − α). Now, P(Z = 1, X = 1) = P(X = 1, Y = −1) = a(1 − α). Hence, if (10) a(1 − α) = a(a(1 − α) + α(1 − a)) we have P(Z = 1, X = 1) = P(Z = 1)P(X = 1).
Simplifying (10) yields α = 1 2. Now, plodding through three similar constraints shows that X and Z are independent iff α = a = 1 2. By symmetry, the same condition holds iff Y and Z are independent. However, X, Y, and Z are not independent because P(X = 1, Y = 1, Z = −1) = 0 = P(X = 1)P(Y = 1)P(Z = −1). Independent random variables often have interesting and useful properties. (11) Example Let X and Y be independent geometric random variables having respective mass functions f X (x) = (1 − λ)λx and fY (y) = (1 − µ)µy for x ≥ 0 and y ≥ 0. What is the mass function of Z = min{X, Y }? Solution By independence, P(Z > n) = P(X > n ∩ Y > n) = P(X > n)P(Y > n) = λn+1µn+1 = (λµ)n+1. Hence, P(Z = n) = P(Z > n − 1) − P(Z > n) = (1 − λµ)(λµ)n and Z is also geometric with parameter λµ. 5.3 Expectation Let the random variable Z = g(X, Y ) be a function of X and Y. Using (5.1.4) and the definition of expectation (4.3.1), we have E(Z ) = z f Z (z) = zP(g(X, Y ) = z). This expression for E(Z ) is not always simple or convenient for use in calculation. The following generalization of Theorem 4.3.4 is therefore very useful. z z (1) Theorem the right-hand side is absolutely convergent, we have Let X and Y have joint mass function f (x, y). Whenever the sum on E(g(X, Y )) = g(x, y) f (x, y). Proof The proof is essentially the same as that of Theorem 4.3.4. x,y 166 5 Random Vectors: Independence and Dependence (2) Corollary For any real numbers a and b, E(a X + bY ) = a
E(X ) + bE(Y ) when both sides exist and are finite. Proof Because the sum is absolutely convergent, by (1), E(a X + bY ) = (ax + by) f (x, y) = ax f (x, y) + x,y = ax f X (x) + x,y by fY (y) x y = aE(X ) + bE(Y ). x,y by f (x, y) by (5.1.5) and (5.1.6) (3) Example: Coupons Recall Example 4.3.15 in which you were collecting coupons; we can now find E(R) more quickly. Let T1 be the number of packets required to obtain the first coupon, T2 the further number of packets required to obtain a second type of coupon, T3 the further number required for the third type and so on. Then, Obviously, T1 = 1. Also, R = n k=1 Tk. P(T2 = r ) = 1 n r −1 1 − 1 n so that T2 is geometric with mean n n−1. Likewise, Tk is geometric with mean Hence, by (2), E(Tk. E(R) = n k=1 E(Tk) = n k=1 n n − k + 1, the same as the answer obtained with somewhat more effort in Example 4.3.15. Corollary (2) is often useful when considering sums of indicators. For example, let {A1, A2,..., An} be any collection of events, and let Ii = 1 if Ai occurs 0 if Ai does not occur be the indicator of Ai. Now, let X be the number of the Ai that occur. Then, by construction X = n i=1 Ii, and by (2) E(X ) = n i=1 E(Ii ) = n i=1 P(Ai ). We use this result in the following example. 5.3 Expectation 167 Example: Binomial Distribution that the factorial moments of X are given by Let X be binomial with parameters n and p. Show µ(k) = pkn(n − 1)... (n − k + 1). Solution Suppose a coin that shows heads with probability p is tossed
n times. Then, the number of heads has the mass function of X. Let Y be the number of distinct sets of k such that all k tosses show heads. Then, Y = ( X k ) distinct sets of k tosses shows k heads with probability pk. Hence, E(Y ) = ( n k ) pk. Therefore, we have k ). However, each of the ( n which is the desired result. E X k = n k pk, We single certain expectations out for special notice. Just as random variables have moments, jointly distributed random variables have joint moments. (4) Definition The joint moments of X and Y are µi j = E(X i Y j ); i, j ≥ 1. (5) Definition The covariance of X and Y is cov (X, Y ) = E[(X − E(X ))(Y − E(Y ))] = E(X Y ) − E(X )E(Y ). This is the most important of the central joint moments, which are σi j = E[(X − E(X ))i (Y − E(Y )) j ]; i, j ≥ 1. Here are two interesting properties of cov (X, Y ). (6) Theorem we have: For jointly distributed random variables X and Y, and constants a, b, c, d, (i) cov (a X + b, cY + d) = ac cov (X, Y ) (ii) var (X + Y ) = var (X ) + var (Y ) + 2 cov (X, Y ) Proof (i) cov (a X + b, cY + d) = E[(a X + b − aE(X ) − b)(cY + d − cE(Y ) − d)] = E[ac(X − E(X ))(Y − E(Y ))] = ac cov(X, Y ) (ii) var (X + Y ) = E(X + Y − E(X ) − E(Y ))2 = E[(X − E(X ))2 + (Y − E(Y ))2 + 2(X − E(X ))(Y − E(Y ))], as required. Let us find cov (X, Y ) for the simple examples we have met above. 168 5 Random Vectors
: Independence and Dependence (1) Examples 5.2.1 and 5.2.5 Revisited Find the covariance for each of the three joint mass functions given in these two examples. Solution In every case, E(X Y ) = 12 f (1, 1) and E(X ) = f (1, 0) + f (1, 1), and E(Y ) = f (0, 1) + f (1, 1). Hence, cov (X, Y ) = f (1, 1) − ( f (1, 0) + f (1, 1))( f (0, 1) + f (1, 1)). (i) Evaluating this in the three given instances shows that: cov (X, Y ) = 5 12 cov (U, V ) = 1 2 cov (W 12 = 1 8 + 3 8 + 5 12 − (ii) (iii) − − = 1 24 + 5 12 = 0. (7) Example 5.1.8 Revisited (X, Y ) and cov (V, W ). Solution Recall that Art and Bart are cutting for the deal. Find cov E(X ) = E(Y ) = 14 2 x 13 = 8. Also, using (1), E(X Y ) =. 1 13. y 13 x = 1 12 12 (105 × 104 − 1118) = 64 − 7 6 y. 2≤x=y≤14. 1 13 = 1 12 (105 − y − 1)y Hence, cov (X, Y ) = − 7 6. Likewise, using the expressions in Example 5.1.8 for the marginal mass functions of V and W, we find 13 E(V ) = v(14 − v) 78 13 13 2 v=2 = 1 78 13 (13v − v(v − 1)) v=2 [v(v + 1) − v(v − 1)] [(v + 1)v(v − 1) − v(v − 1)(v − 2)] v=2 = + 1 78 − 1 3 = 17 3 after successive cancellation of the terms in the sum. 5.3 Expectation Similarly, we find E(W ) = 31 3. Now for all ω, X (ω)Y (ω) = V (ω)W (ω), so E(V W ) = E(X Y ), and �
��nally, cov (V, W ) = 62 + 5 6 − 17 3. 31 3 = + 77 18. 169 Just as joint mass functions have a simple form when random variables are independent, so too do joint moments simplify. (8) Theorem If X and Y are independent random variables with finite expectations, then E(X Y ) exists, and E(X Y ) = E(X )E(Y ). It follows that cov (X, Y ) = 0 in this case. Proof By independence and Theorem 5.3.1, E(|X Y |) = x,y |x y| f X (x) fY (y) = x |x| f X (x) y |y| fY (y) = E(|X |)E(|Y |) so E(|X Y |) < ∞. Thus, E(X Y ) exists, and the same argument shows that E(X Y ) = E(X )E(Y ). (9) Definition E(X Y ) = 0, then X and Y are said to be orthogonal. If cov (X, Y ) = 0, then X and Y are said to be uncorrelated. If It follows that independent random variables are uncorrelated, but the converse is not true, as the following example shows. (10) Example A random variable X is said to be symmetric if P(X = −x) = P(X = x) for all x. Let X be symmetric with E(X 3) < ∞, and let Y = X 2. Then, because X has an expectation it is zero, by symmetry, and E(X Y ) = E(Y 3) = x>0 x 3( f (x) − f (−x)) = 0 = E(X )E(Y ). Thus, cov(X, Y ) = 0, even though X and Y are not independent. In this case, X and Y are uncorrelated and orthogonal, but dependent. Thus, up to a point, and in a way that we carefully leave unspecified, cov (X, Y ) can be an indication of the dependence of X and Y. It has the drawback that it depends on the scale of X and Y. Thus, if a is a constant, a X and Y
have the same “dependence” as X and Y 170 5 Random Vectors: Independence and Dependence (whatever we mean by that), but cov (a X, Y ) = a cov (X, Y ). For this reason, statisticians more commonly use the following. (11) Definition The correlation coefficient of random variables X and Y is ρ(X, Y ) = cov (X, Y ) (var (X ) var (Y )) 1 2, whenever the right-hand side exists. Example 5.1.3 Revisited tively, when a coin is tossed twice. What are cov (X, Y ) and ρ(X, Y )? Here, X and Y are the number of heads and tails, respec- Solution Trivially, E(X Y ) = 12P(X = 1, Y = 1) = 2 p(1 − p). Likewise, E(X ) = 2 p, E(Y ) = 2(1 − p), var (X ) = 2 p(1 − p) and var (Y ) = 2 p(1 − p). Hence, cov (X, Y ) = 2 p(1 − p) − 4 p(1 − p) = −2 p(1 − p) and ρ(X, Y ) = −2 p(1 − p) (4 p2(1 − p)2) 1 2 = −1. The correlation coefficient ρ has the following interesting properties; we assume that X and Y are not constant, and have finite variance. (12) Theorem If X and Y have correlation ρ(X, Y ), then: (i) −1 ≤ ρ(X, Y ) ≤ 1. (ii) |ρ| = 1 if and only if P(X = aY ) = 1 for some constant a. (iii) ρ(a X + b, cY + d) = sgn (ac)ρ(X, Y ), where sgn(x) denotes the sign of x. (iv) ρ = 0 if X and Y are independent. The proof of this theorem relies on the following important and useful result. (13) Lemma: Cauchy–Schwarz Inequality If E(X 2)E(Y 2) < ∞, then (14) (
15) (E(X Y ))2 ≤ E(X 2)E(Y 2). Proof Suppose 0 < E(X 2)E(Y 2). By Theorem 4.3.6 (iii), 0 ≤ E[(X E(Y 2) − Y E(X Y ))2] = E(X 2)(E(Y 2))2 − 2E(X Y )2E(Y 2) + E(Y 2)[E(X Y )]2 = E(Y 2)[E(X 2)E(Y 2) − (E(X Y ))2]. 5.3 Expectation 171 Because E(Y 2) > 0, (14) follows. Of course, (14) is trivially true if E(Y 2) = 0, for then Y = X Y = 0 with probability one. Proof of (12) (i) Applying Lemma 13 to the random variables X − E(X ) and Y − E(Y ) shows that (ρ(X, Y ))2 ≤ 1, and so −1 ≤ ρ ≤ 1, as required. (ii) If |ρ| = 1, then from (15), E[(X E(Y 2) − Y E(X Y ))2] = 0, and so from Example 4.6.10, with probability one X = (E(X Y )/E(Y 2))Y. (iii) Expanding, and using Theorem 6(i), ρ(a X + b, cY + d) = ac cov (X, Y ) (a2var (X )c2var (Y )) 1 2 = ac% (ac)2 ρ(X, Y ), as required (iv) This follows immediately from Theorem 8. (16) Example (5.1.9) Revisited Recall that X and Y have joint mass function f (x, y λx µy. Show that ρ(X, Y ) = λµ (1 − λ)(1 − µ) 1 2. Solution First, we calculate E(X Y ) as λx µy = 1 − λ − µ µ λx−1 = (1 − λ − µ)λ (1 − λ)µ ∞ y=1 y2 ∞ y=1 y2µyλ y µ 1 − λ x,=1
λ(1 − λ + µ) (1 − λ − µ)2. × = Now we have already discovered in Example 5.1.9 that X and Y have geometric mass functions, so by Example 4.3.13 E((1 − λ) (1 − λ − µ)2 − 1, E((1 − µ) (1 − λ − µ)2 var (Y ) = var (X ) =, and plugging all this into (11) yields ρ = λ(1 − λ + µ) − λ(1 − λ) (µ(1 − λ)λ(1 − µ)) 1 2, as required. 172 5 Random Vectors: Independence and Dependence Finally, we remark that cov (X, Y ) and ρ(X, Y ) are not the only functions used to measure dependence between X and Y. Another such function is (17) I (X, Y ) = x y f (x, y) log See Example 5.16 for more on this. f (x, y) f X (x) fY (y) = E log. f (X, Y ) f X (X ) fY (Y ) 5.4 Sums and Products of Random Variables: Inequalities These arise in many ways. For example, it is often useful to write a random variable as a sum of simpler random variables. (1) Example: Binomial Random Variable f X (k) = n k The random variable X with mass function pk(1 − p)n−k has arisen in many ways; classically, it is the number of heads in n tosses of a biased coin. We now see that we can think about X in a different way. Let Ik be the indicator of the event that the kth toss of the coin shows a head. Then, X = I1 + I2 + · · · + In = n k=1 Ik. We have written X as a sum of Bernoulli trials or indicators. Hence, E(X ) = E Ik = n k=1 n k=1 E(Ik) = np. Likewise, E(X 2) = E n 2 Ik k=1 = n k=1 E I 2 k + j=k E(I j Ik) = np + n(n − 1) p2. Hence, var (X ) =
np(1 − p). You should compare this with your earlier methods using n n E(X 2) = k2 pk(1 − p)n−k = (k(k − 1) + k) pk(1 − p)n−k n! k!(n − k)! n k k=1 k=1 and so on. (2) Theorem Any discrete random variable X can be written as a linear combination of indicator random variables; thus, X = i ai I (Ai ) for some collection of events (Ai ; i ≥ 1) and real numbers (ai ; i ≥ 1). 5.4 Sums and Products of Random Variables: Inequalities 173 Proof Just let (ai ; i ≥ 1) include the set of possible values of X, and set Ai = {ω:X (ω) = ai }. (3) Example: Matching Suppose that n distinct numbered keys ordinarily hang on n hooks bearing corresponding distinct numbers. On one occasion an inebriated turnkey hangs the keys at random on the hooks (one to each hook). Let X be the number of keys, which are then on the correct hooks. Find E(X ) and var (X ). Solution hook. Then, Let I j be the indicator of the event that the jth key does hang on the jth X = n j=1 I j. Now by symmetry P(I j = 1) = 1/n and for j = k, (4) Hence, Also, P(I j Ik = 1) = 1 n(n − 1). E(X ) = E n I j = j=1 n j=1 E(I j ) = E(X 2) = E n j=1 + I 2 j j=k P(I j = 1) = 1. n j=1 I j Ik = 1 + 1, using (4), and the fact that I 2 j = I j. Hence, var (X ) = 1. Indicators can also be useful when multiplied together; here is an illustration. (5) Example Let us prove (1.4.8). Recall that we have events A1,..., An, and we seek the probability that at least one of them occurs, namely, P n j=1 A j = tn (say). For economy of notation, we set sr = (6) i1<i2
Indicators can also be useful when multiplied together; here is an illustration.

(5) Example   Let us prove (1.4.8). Recall that we have events A_1, ..., A_n, and we seek the probability that at least one of them occurs, namely, P(\bigcup_{j=1}^n A_j) = t_n (say). For economy of notation, we set

(6) s_r = \sum_{i_1 < i_2 < ··· < i_r} P(A_{i_1} ∩ ... ∩ A_{i_r}); 1 ≤ r ≤ n.

We let I_j be the indicator of the event that A_j occurs, and set

S_r = \sum_{i_1 < i_2 < ··· < i_r} I_{i_1} I_{i_2} ... I_{i_r}.

Next, observe that

(7) 1 − \prod_{j=1}^n (1 − I_j) = 1 if at least one A_j occurs, and 0 otherwise.

Hence, this is the indicator of the event whose probability we seek, and

P\left(\bigcup_{j=1}^n A_j\right) = E\left(1 − \prod_{j=1}^n (1 − I_j)\right) = E(S_1 − S_2 + · · · + (−)^{n+1} S_n) = s_1 − s_2 + · · · + (−)^{n+1} s_n,

on multiplying out, by (6), as required.

The same expansion (7) can be used to prove the following interesting inequalities.

Theorem: Inclusion–Exclusion Inequalities   With the notation of Example 5, for 1 ≤ r ≤ n,

(−)^r \left( P\left(\bigcup_{j=1}^n A_j\right) − s_1 + s_2 − · · · + (−)^r s_r \right) ≥ 0.

Proof   First, we prove a simple identity. Obviously,

(1 + x)^k (1 + 1/x)^{−1} = x(1 + x)^{k−1}.

Hence, equating the coefficient of x^{r+1} on each side gives

(8) \binom{k}{r+1} − \binom{k}{r+2} + · · · + (−)^{k−r+1} \binom{k}{k} = \binom{k−1}{r} ≥ 0.

Furthermore, we have

(9) 1 − \prod_{j=1}^n (1 − I_j) − S_1 + S_2 − · · · + (−)^r S_r = (−)^r (S_{r+1} − S_{r+2} + · · · + (−)^{n−r+1} S_n).

Now suppose exactly k of A_1, ..., A_n occur. If k ≤ r, the right-hand side of (9) is zero. If k > r, the contribution in the bracket on the right-hand side is

\binom{k}{r+1} − \binom{k}{r+2} + · · · + (−)^{k−r+1} \binom{k}{k}.

Hence, no matter how many A_j s occur,

(−)^r E\left( 1 − \prod_{j=1}^n (1 − I_j) − S_1 + S_2 − · · · + (−)^r S_r \right) = (−)^{2r} E(S_{r+1} − · · · + (−)^{n−r+1} S_n) ≥ 0, by (8),

as required.

It can similarly be shown (or deduced from the above) that if

t_r = \sum_{i_1 < i_2 < ··· < i_r} P(A_{i_1} ∪ · · · ∪ A_{i_r}),

then t_n = \sum_{r=1}^n (−)^{r−1} s_r, and

(−)^r \left( P\left(\bigcap_{i=1}^n A_i\right) − t_1 + t_2 − · · · + (−)^r t_r \right) ≥ 0.

Often, it is natural and important to consider the sum Z of two variables X and Y. This is itself a random variable, and so we may require the distribution of Z = X + Y. This is given by Example 5.1.3, so we have proved that

(10) f_Z(z) = \sum_x f_{X,Y}(x, z − x).

One special case of this result must be singled out.

(11) Theorem   If X and Y are independent discrete random variables, then Z = X + Y has probability mass function

(12) f_Z(z) = \sum_x f_X(x) f_Y(z − x).

Proof   Substitute Definition 5.2.2 into (10).

A summation of this form is called a convolution.
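The convolution formula (12) is easy to check numerically. The sketch below assumes the geometric mass function (14) of the next example with an illustrative p = 0.3, and compares the convolution of f with itself against (z + 1)(1 − p)^z p^2.

```python
# Convolve a geometric mass function with itself and compare with the
# closed form for the sum of two independent geometric variables.
def geometric(p, kmax):
    return [(1 - p) ** k * p for k in range(kmax + 1)]

def convolve(f, g):
    h = [0.0] * (len(f) + len(g) - 1)
    for x, fx in enumerate(f):
        for y, gy in enumerate(g):
            h[x + y] += fx * gy
    return h

p = 0.3
f = geometric(p, 60)
fz = convolve(f, f)
for z in range(5):
    print(z, fz[z], (z + 1) * (1 - p) ** z * p ** 2)  # the two columns agree
```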
(13) Example: Sum of Geometric Random Variables
(a) Let X_1 and X_2 be independent random variables, each having a geometric distribution

(14) P(X_i = k) = (1 − p)^k p; k ≥ 0.

Show that the mass function of X_1 + X_2 is

f(z) = (z + 1)(1 − p)^z p^2; z ≥ 0.

(b) If (X_i; i ≥ 1) are independent random variables, each having the geometric distribution (14), find the mass function of Z = \sum_{i=1}^n X_i.

Solution   (a) Using (12),

f(z) = \sum_{k=0}^z (1 − p)^k p (1 − p)^{z−k} p = \sum_{k=0}^z p^2 (1 − p)^z = (z + 1) p^2 (1 − p)^z.

Alternatively, suppose we have a coin that, when tossed, shows a head with probability p. Toss this coin repeatedly until a head first appears, and let T_1 be the number of tails shown up to that point. Continue tossing the coin until the second head appears and let T_2 be the further number of tails shown up to that point. Then, T_1 and T_2 are independent and have the geometric distribution (14), and T_1 + T_2 is the number of tails shown before the second head appears. But we know from Example 4.2.8 that

P(T_1 + T_2 = z) = \binom{z + 1}{1} p^2 (1 − p)^z.

Thus, X_1 + X_2 also has this mass function.

(b) Extending the second argument above, we see that \sum_{i=1}^n X_i has the same distribution as the number of tails shown up to the point where n heads have first appeared in successive tosses of our biased coin. But the probability that z tails have appeared is just

(15) \binom{z + n − 1}{n − 1} p^n (1 − p)^z,

and so this is the mass function of Z. It is negative binomial.

Now that we are in possession of the answer, we can use (12) to verify it by induction. Assume that \sum_{i=1}^n X_i has mass function (15). Then, by (12), the mass function of \sum_{i=1}^{n+1} X_i is

P\left(\sum_{i=1}^{n+1} X_i = z\right) = \sum_{k=0}^z p(1 − p)^k \binom{z − k + n − 1}{n − 1} p^n (1 − p)^{z−k} = p^{n+1}(1 − p)^z \sum_{k=0}^z \binom{k + n − 1}{n − 1} = p^{n+1}(1 − p)^z \binom{z + n}{n},

as required, where we have used the identity

(16) \sum_{k=0}^z \binom{k + n − 1}{n − 1} = \binom{z + n}{n}.

Because (15) holds for n = 1, the result follows by induction. Note that (16) can be derived immediately by equating the coefficient of x^z on both sides of the trivial identity

\frac{1}{1 + x} \cdot \frac{1}{(1 + x)^n} = \frac{1}{(1 + x)^{n+1}}.

Example: Sum of Binomial Random Variables   Let X and Y be independent random variables with respective mass functions

f_X(x) = \binom{m}{x} p^x (1 − p)^{m−x},  and  f_Y(y) = \binom{n}{y} p^y (1 − p)^{n−y}.

Show that X + Y has a binomial distribution.

Solution   The expeditious method of doing this is to use Example 5.4.1 to write X = \sum_{i=1}^m I_i and Y = \sum_{i=m+1}^{m+n} I_i. Hence,

X + Y = \sum_{i=1}^{m+n} I_i;

this has the B(m + n, p) mass function by Example 5.4.1. Alternatively, this may be shown by using (5.4.12); this is an exercise for you.

Turning to expected values, we recall that from Definition 5.2.2, for any random variables (X_i; i ≥ 1) with finite expectation, whether they are independent or not,

(17) E\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n E(X_i).

If in addition the X_i are independent, then

(18) E\left(\prod_{i=1}^n X_i\right) = \prod_{i=1}^n E(X_i).
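As a small illustration of the indicator argument and of (17) and (18), the sketch below uses illustrative parameters m, n, p of my choosing; it compares the simulated distribution of X + Y with the B(m + n, p) mass function and checks that E(XY) agrees with E(X)E(Y) for independent X and Y.

```python
# Simulate independent X ~ B(m, p) and Y ~ B(n, p), then compare X + Y with
# B(m + n, p) and check E(XY) = E(X)E(Y).
import random
from math import comb
from collections import Counter

m, n, p, trials = 5, 7, 0.4, 100_000
xs = [sum(random.random() < p for _ in range(m)) for _ in range(trials)]
ys = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
counts = Counter(x + y for x, y in zip(xs, ys))
for k in range(4):
    exact = comb(m + n, k) * p ** k * (1 - p) ** (m + n - k)
    print(k, counts[k] / trials, exact)
print(sum(x * y for x, y in zip(xs, ys)) / trials, (m * p) * (n * p))
```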
Many inequalities for probabilities and moments are useful when dealing with random vectors. Most of them are beyond our scope at this stage, but we give a few simple examples with some applications.

(19) Basic Inequality   If X ≤ Y with probability one, then E(X) ≤ E(Y).

Proof   This follows immediately from Theorem 4.3.6.

Corollary
(20) (a) For 1 ≤ r ≤ s, E(|X|^r) ≤ E(|X|^s) + 1.
(21) (b) For r ≥ 1, E(|X + Y|^r) ≤ 2^r (E(|X|^r) + E(|Y|^r)).

Proof   (a) If |x| ≤ 1, then |x|^r ≤ 1; and if |x| > 1 and r ≤ s, then |x|^r ≤ |x|^s. Hence, in any case, when r ≤ s, |x|^r ≤ |x|^s + 1. Thus,

E(|X|^r) = \sum_x |x|^r f(x) ≤ \sum_x |x|^s f(x) + 1 = E(|X|^s) + 1.

(b) For any real numbers x and y, if k ≤ r, then |x|^k |y|^{r−k} ≤ |x|^r + |y|^r, because either |x|^k/|y|^k ≤ 1 or |y|^{r−k}/|x|^{r−k} < 1. Hence,

|x + y|^r ≤ (|x| + |y|)^r ≤ \sum_{k=0}^r \binom{r}{k} |x|^k |y|^{r−k} ≤ \sum_{k=0}^r \binom{r}{k} (|x|^r + |y|^r) = 2^r (|x|^r + |y|^r),

and (21) follows.

(22) Corollary   These inequalities show that:
(a) If E(X^s) < ∞, then for all 1 ≤ r ≤ s, E(X^r) < ∞.
(b) If E(X^r) and E(Y^r) are finite, then E((X + Y)^r) < ∞.

5.5 Dependence: Conditional Expectation

Let X and Y be jointly distributed random variables. We may be given the value of Y, either in fact, or as a supposition. What is the effect on the distribution of X?

(1) Definition   If X and Y have joint probability mass function f(x, y), then given Y = y, the random variable X has a conditional probability mass function given by

f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}

for all y such that f_Y(y) > 0.

Example   Let X and Y be independent geometric random variables each having mass function f(x) = (1 − λ)λ^x; x ≥ 0, 0 < λ < 1. Let Z = X + Y. Show that for 0 ≤ x ≤ z, f_{X|Z}(x|z) = 1/(z + 1).

Solution   From Example 5.4.13, we know that Z has mass function f_Z(z) = (z + 1)(1 − λ)^2 λ^z, and so

f_{X|Z}(x|z) = \frac{P(X = x, Z = z)}{(z + 1)(1 − λ)^2 λ^z} = \frac{(1 − λ)^2 λ^x λ^{z−x}}{(1 − λ)^2 λ^z (z + 1)} = (z + 1)^{−1}.

(2) Example 5.1.8 Revisited: Cutting for the Deal   Find the conditional mass function of the loser's card conditional on W = w; find also f_{W|V}(w|v).

Solution   According to Example 5.1.8,

f(v, w) = 1/78; 2 ≤ v < w ≤ 14,

and

f_W(w) = \frac{w − 2}{78}; 3 ≤ w ≤ 14.

Hence, using Definition 1,

f_{V|W}(v|w) = \frac{1/78}{(w − 2)/78} = \frac{1}{w − 2}; 2 ≤ v < w.

The loser's score is uniformly distributed given W. Likewise,

f_{W|V}(w|v) = \frac{1/78}{(14 − v)/78} = \frac{1}{14 − v}; v < w ≤ 14,

also a uniform distribution.

(3) Example   Let X and Y be independent. Show that the conditional mass function of X given Y is f_X(x), the marginal mass function.

Solution   Because X and Y are independent, f(x, y) = f_X(x) f_Y(y). Hence, applying Definition 1,

f_{X|Y}(x|y) = f(x, y)/f_Y(y) = f_X(x).

(4) Theorem   f_{X|Y}(x|y) is a probability mass function, which is to say that
(i) f_{X|Y}(x|y) ≥ 0, and
(ii) \sum_x f_{X|Y}(x|y) = 1.

Proof   Part (i) is trivial. Part (ii) follows immediately from (5.1.5).

Recall that two events A and B are said to be conditionally independent given C, if P(A ∩ B|C) = P(A|C)P(B|C). Likewise, it is possible for two random variables X and Y to be conditionally independent given Z, if f_{X,Y|Z} = f_{X|Z} f_{Y|Z}.

Example: Cutting for the Deal (Example 5.1.8 Revisited)   Suppose three players cut for the deal (with ties not allowed, as usual). Let X be the lowest card, Y the highest card, and Z the intermediate card. Clearly, X and Y are dependent. However, conditional on Z = z, X and Y are independent. The mass function f_{X|Z} is uniform on {2, ..., z − 1} and f_{Y|Z} is uniform on {z + 1, ..., 14}.
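The uniform conditional distribution derived earlier for independent geometric variables can be confirmed directly from Definition 1; the short sketch below assumes illustrative values λ = 0.6 and z = 7.

```python
# Given Z = X + Y = z for independent geometric X and Y, the conditional
# mass function of X should be uniform on {0, 1, ..., z}.
lam = 0.6
f = lambda k: (1 - lam) * lam ** k                 # P(X = k), k >= 0

z = 7
joint = [f(x) * f(z - x) for x in range(z + 1)]    # P(X = x, Z = z)
fz = sum(joint)                                    # P(Z = z)
print([round(j / fz, 4) for j in joint])           # all equal 1/(z+1) = 0.125
```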
Being a mass function, f_{X|Y} may have an expectation; it has a special name and importance.

(5) Definition   The conditional expectation of X, given that Y = y where f_Y(y) > 0, is

E(X|{Y = y}) = \sum_x x f(x, y)/f_Y(y),

when the sum is absolutely convergent. As y varies over the possible values of Y, this defines a function of Y, denoted by E(X|Y). Because it is a function of Y, it is itself a random variable, which may have an expectation.

(6) Theorem   If both sides exist,

(7) E(E(X|Y)) = E(X).

Proof   Assuming the sums are absolutely convergent, we have, by Theorem 4.3.4,

E(E(X|Y)) = \sum_y E(X|{Y = y}) f_Y(y) = \sum_y \sum_x x \frac{f(x, y)}{f_Y(y)} f_Y(y) = \sum_x x f_X(x) = E(X),

by (5.1.4).

This is an exceptionally important and useful result. Judicious use of Theorem 6 can greatly simplify many calculations; we give some examples.

(8) Example: Eggs   A hen lays X eggs, where X is Poisson with parameter λ. Each hatches with probability p, independently of the others, yielding Y chicks. Show that ρ(X, Y) = √p.

Solution   Conditional on X = k, the number of chicks is binomial B(k, p), with mean kp. Hence,

E(XY) = E(E(XY|X)) = E(X^2 p) = (λ^2 + λ)p.

Likewise,

E(Y^2) = E(E(Y^2|X)) = E(Xp(1 − p) + X^2 p^2) = λp(1 − p) + (λ + λ^2)p^2 = λp + λ^2 p^2.

Hence,

ρ(X, Y) = \frac{E(XY) − E(X)E(Y)}{(var(X) var(Y))^{1/2}} = \frac{λ^2 p + λp − λ \cdot λp}{(λ(λp + λ^2 p^2 − λ^2 p^2))^{1/2}} = √p.

(9) Example: Variance of a Random Sum   Let X_1, X_2, ... be a collection of independent identically distributed random variables, and let Y be an integer-valued random variable independent of all the X_i. Let S_Y = \sum_{i=1}^Y X_i. Show that

var(S_Y) = (E(X_1))^2 var(Y) + E(Y) var(X_1).

Solution   By (7),

E(S_Y) = E(E(S_Y|Y)) = E\left(E\left(\sum_{i=1}^Y X_i \,\Big|\, Y\right)\right) = E(Y E(X_1)) = E(Y)E(X_1).

Likewise,

E(S_Y^2) = E(E(S_Y^2|Y)) = E(Y E(X_1^2) + Y(Y − 1)(E(X_1))^2),

and so substituting into var(S_Y) = E(S_Y^2) − (E(S_Y))^2 gives the result.
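The random-sum formula of Example 9 is easy to test by simulation. The sketch below assumes, purely for illustration, that the X_i are fair die rolls and that Y is an independent B(20, 1/2) variable.

```python
# Compare the sample variance of S_Y with (E X_1)^2 var(Y) + E(Y) var(X_1).
import random

def sample_sy():
    y = sum(random.random() < 0.5 for _ in range(20))   # Y ~ B(20, 1/2)
    return sum(random.randint(1, 6) for _ in range(y))  # S_Y

trials = 200_000
s = [sample_sy() for _ in range(trials)]
mean = sum(v for v in s) / trials
var = sum((v - mean) ** 2 for v in s) / trials
ex1, varx1, ey, vary = 3.5, 35 / 12, 10, 5              # exact ingredients
print(var, ex1 ** 2 * vary + ey * varx1)                # both close to 90.42
```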
(10) Example 4.12 Revisited: Gamblers' Ruin   Two gamblers, A and B, have n coins. They divide this hoard by tossing each coin; A gets those that show heads, X say, B gets the rest, totalling n − X. They then play a series of independent fair games; each time A wins he gets a coin from B, each time he loses he gives a coin to B. They stop when one or other has all the coins. Let D_X be the number of games played. Find E(D_X), and show that, when the coins are fair, ρ(X, D_X) = 0.

Solution   Conditional on X = k, as in Example 4.12,

D_k = \tfrac{1}{2} D_{k+1} + \tfrac{1}{2} D_{k−1} + 1,

with solution D_k = k(n − k). Hence, observing that X is B(n, p) (where p is the chance of a head), we have

E(D_X) = E(E(D_X|X)) = E(X(n − X)) = n(n − 1)p(1 − p).

Finally,

cov(X, D_X) = E(X^2(n − X)) − E(X)E(D_X) = n(n − 1)p(p − 1)(2p − 1),

whence ρ = 0 when p = \tfrac{1}{2}.

(11) Example: Partition Rule   Show that if X and Y are jointly distributed, then

f_X(x) = \sum_y f_Y(y) f_{X|Y}(x|y).

Solution   This is just Theorem 6 in the special case when we take X to be I_x, the indicator of the event {X = x}. Then, E(I_x) = f_X(x), and E(I_x|Y = y) = f_{X|Y}(x|y). The result follows from (7). Alternatively, you can substitute from Definition 1. Essentially this is the Partition Rule applied to discrete random variables.

Recall that we have already defined E(X|B) for any event B in Chapter 4. It is convenient occasionally to consider quantities, such as E(X|Y; B). This is defined to be the expected value of the conditional distribution

(12) P(X = x|{Y = y} ∩ B) = \frac{P({X = x} ∩ {Y = y} ∩ B)}{P({Y = y} ∩ B)},

for any value y of Y such that P({Y = y} ∩ B) > 0.

We give some of the more important properties of conditional expectation.

(13) Theorem   Let a and b be constants, g(.) an arbitrary function, and suppose that X, Y, and Z are jointly distributed. Then (assuming all the expectations exist),
(i) E(a|Y) = a
(ii) E(aX + bZ|Y) = aE(X|Y) + bE(Z|Y)
(iii) E(X|Y) ≥ 0 if X ≥ 0
(iv) E(X|Y) = E(X), if X and Y are independent
(v) E(Xg(Y)|Y) = g(Y)E(X|Y)
(vi) E(X|Y; g(Y)) = E(X|Y)
(vii) E(E(X|Y; Z)|Y) = E(X|Y).

Property (v) is called the pull-through property, for obvious reasons. Property (vii) is called the tower property. It enables us to consider multiple conditioning by taking the random variables in any convenient order.

Proof   We prove the odd parts of Theorem 13; the even parts are left as exercises for you.
(i) f(a, y) = f_Y(y), so E(a|Y) = \sum a f_Y(y)/f_Y(y) = a.
(iii) If X ≥ 0, then every term in the sum in Theorem 6 is nonnegative. The result follows.
(v) E(Xg(Y)|Y = y) = \sum_x x g(y) f(x, y)/f_Y(y) = g(y) \sum_x x f(x, y)/f_Y(y) = g(y)E(X|Y = y).
(vii) For arbitrary values, Y = y and Z = z, of Y and Z, we have E(X|Y; Z) = \sum_x x f(x, y, z)/f_{Y,Z}(y, z). Hence, by definition,

(14) E(E(X|Y; Z)|Y) = \sum_z E(X|Y; Z) f_{Y,Z}(y, z)/f_Y(y) = \sum_z \sum_x x f(x, y, z)/f_Y(y) = \sum_x x f(x, y)/f_Y(y) = E(X|Y).
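Theorem 6 and the properties above can be checked exactly for any small joint mass function; the sketch below uses an arbitrary illustrative f(x, y), not one taken from the text.

```python
# Exact check of E(E(X|Y)) = E(X) on a small joint mass function.
f = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.15, (2, 1): 0.25}

fY = {}
for (x, y), p in f.items():
    fY[y] = fY.get(y, 0) + p

# E(X | Y = y) for each y, then average it against f_Y to get E(E(X|Y))
EX_given_Y = {y: sum(x * p for (x, yy), p in f.items() if yy == y) / fY[y] for y in fY}
lhs = sum(EX_given_Y[y] * fY[y] for y in fY)
rhs = sum(x * p for (x, _), p in f.items())   # E(X)
print(lhs, rhs)                               # both 0.95
```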
(15) Example   Three children (Aelhyde, Beowulf, and Canute) roll a die in the order A, B, C, A, ..., etc., until one of them rolls a six and wins. Find the expected number of rolls, given that Canute wins.

Solution   We use a form of the tower property, (14). Let C be the event that Canute wins, let X be the duration of the game, and let Y denote the first roll to show a six in the first three rolls, with Y = 0 if there is no six. Then,

(16) E(X|C) = E(E(X|Y; C)|C).

Now if Y = 0, then with the fourth roll the game stands just as it did initially, except that three rolls have been made. So E(X|Y = 0; C) = 3 + E(X|C). Obviously, Y is otherwise 3, and E(X|Y = 3; C) = 3. Therefore, substituting in (16), we have

E(X|C) = 3 + E(X|C)P(Y = 0|C),

and so

E(X|C) = \frac{3}{1 − (5/6)^3} = \frac{648}{91}.

Of course, there are other ways of doing this.

Finally, we remark that conditional expectation arises in another way.

(17) Theorem   Let h(Y) be any function of Y such that E(h(Y)^2) < ∞. Then,

(18) E((X − h(Y))^2) ≥ E((X − E(X|Y))^2).

Further, if h(Y) is any function of Y such that

(19) E((X − h(Y))^2) = E((X − E(X|Y))^2),

then E((h(Y) − E(X|Y))^2) = 0.

Proof

(20) E((X − h(Y))^2) = E((X − E(X|Y) + E(X|Y) − h(Y))^2) = E((X − E(X|Y))^2) + E((E(X|Y) − h(Y))^2) + 2E((X − E(X|Y))(E(X|Y) − h(Y))).

However, by (7), we can write

E((X − E(X|Y))(E(X|Y) − h(Y))) = E(E((X − E(X|Y))(E(X|Y) − h(Y))|Y)) = E((E(X|Y) − h(Y)) E((X − E(X|Y))|Y)), by Theorem 13(v), = 0,

because E((X − E(X|Y))|Y) = 0. The result (18) follows because E((E(X|Y) − h(Y))^2) ≥ 0. Finally, if (19) holds, then from (20) we have E((E(X|Y) − h(Y))^2) = 0, as required, to complete the proof of Theorem 17.

Recall that if E(X^2) = 0, then P(X = 0) = 1; hence, if E((X − Y)^2) = 0, then X = Y with probability one. This suggests that the smaller E((X − Y)^2) is, then the "closer" X is to Y, in some sense. The point of the theorem is then that among all functions of Y, E(X|Y) is the one which is "closest" to X. It is thus possible, and in later work desirable, to define E(X|Y) by this property. However, to explore all these ideas would take us too far afield.
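A simulation of Example 15 is a useful check on the conditioning argument; the sketch below estimates E(X|C) and compares it with 648/91 ≈ 7.12 (the sample size is an illustrative choice).

```python
# Simulate the three-player die game and estimate E(duration | Canute wins).
import random

def play():
    rolls = 0
    while True:
        for player in "ABC":
            rolls += 1
            if random.randint(1, 6) == 6:
                return player, rolls

results = [play() for _ in range(300_000)]
c_games = [r for p, r in results if p == "C"]
print(sum(c_games) / len(c_games), 648 / 91)
```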
5.6 Simple Random Walk

The ideas above now enable us to consider an exceptionally famous and entertaining collection of random variables.

(1) Definition   Let (X_i; i ≥ 1) be a collection of independent identically distributed random variables with mass function P(X_1 = 1) = p; P(X_1 = −1) = q = 1 − p. Then the collection (S_n; n ≥ 0), where

(2) S_n = S_0 + \sum_{i=1}^n X_i,

is called a simple random walk. If p = q, it is called a symmetric simple random walk.

Figure 5.1   A path (or realization) of a simple random walk. The points represent successive positions (or values) of the walk; these are joined by steps of the walk. In this case, S_0 = 2, S_1 = 3, S_2 = 2, and so on. The walk visits the origin at the fourth step.

The nomenclature follows from the visualization of S_n as representing the position of a particle that is initially at S_0, and then takes a series of independent unit steps; each step being positive with probability p or negative with probability q. It is conventional to display the walk in Cartesian coordinates as the sequence of points (n, S_n) for n ≥ 0. Any particular such sequence is called a path of the random walk; see Figure 5.1 for an illustration. Much of the effort of the first probabilists (Fermat, Pascal, Bernoulli, de Moivre, Laplace) was devoted to discovering the properties of the simple random walk, and more general random walks are still being investigated by modern probabilists.

It is easy to see that the celebrated gambler's ruin problem of Example 2.11 is just a simple random walk in which S_0 is interpreted as the initial capital of the gambler, and the walk stops on the first occasion D when either S_n = 0 (the gambler is ruined) or S_n = K (his opponent is ruined). That is, the random variable

(3) D = min{n : {S_n = 0} ∪ {S_n = K}}

is the duration of the game. In the context of random walks, D is called the first passage time of the walk to {0, K}.

The first thing to find is the mass function of S_n (when it is not stopped).

(4) Theorem

P(S_n − S_0 = k) = \binom{n}{\frac{1}{2}(n + k)} p^{\frac{1}{2}(n+k)} q^{\frac{1}{2}(n−k)}.

Proof   Consider a path of the walk from (0, S_0) to (n, S_n) with r positive steps and s negative steps. If S_n − S_0 = k, then r − s = k and r + s = n. Hence, r = \frac{1}{2}(n + k) and s = \frac{1}{2}(n − k). There are \binom{n}{r} such paths, and each has the same probability, namely, p^r q^s. Hence,

P(S_n − S_0 = k) = \binom{n}{r} p^r q^s,

which is (4). Alternatively, we can simply observe that \frac{1}{2}(S_n − S_0 + n) is a binomial random variable with parameters n and p, and (4) follows.
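Theorem 4 is straightforward to confirm by simulation; the sketch below assumes illustrative values p = 0.6 and n = 10.

```python
# Compare the simulated distribution of S_n - S_0 with the formula of Theorem 4.
import random
from math import comb
from collections import Counter

p, n, trials = 0.6, 10, 200_000
counts = Counter(sum(1 if random.random() < p else -1 for _ in range(n))
                 for _ in range(trials))
for k in range(-n, n + 1, 2):
    exact = comb(n, (n + k) // 2) * p ** ((n + k) // 2) * (1 - p) ** ((n - k) // 2)
    print(k, round(counts[k] / trials, 4), round(exact, 4))
```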
As suggested by the gambler's ruin problem, we are interested in the first passage times of random walks.

(5) Definition   Let (S_n; n ≥ 0) be a simple random walk with S_0 = i. Then, the first passage time from i to k is the random variable

T_{ik} = min{n > 0 : S_n = k}.

When i = k, the random variable T_{kk} is called the recurrence time of k. We often denote T_{kk} by T_k.

One obvious but important property of a first passage time T_{ik} is that steps after the first passage to k are independent of those before. It follows that we can write, for example,

(6) T_{02} = T_{01} + T_{12},

where T_{01} and T_{12} are independent. Furthermore, T_{12} and T_{01} have the same distribution because the X_i are identically distributed. These simple remarks are of great importance in examining the properties of the walk. Our first result is striking.

(7) Theorem   If p ≥ q, then T_{01} is certainly finite. If p < q, then T_{01} is finite with probability p/q.

Proof   Let P(T_{jk} < ∞) = r_{jk}. Now let us condition on the first step of the walk from S_0 = 0, giving

(8) r_{01} = P(T_{01} < ∞) = pP(T_{01} < ∞|X_1 = 1) + qP(T_{01} < ∞|X_1 = −1).

On the one hand, if X_1 = 1, then T_{01} = 1. On the other hand, if X_1 = −1, then the walk has to go from −1 to 0 and then from 0 to +1 in a finite number of steps for T_{01} < ∞. Hence, as r_{01} = r_{−10},

(9) r_{01} = p + q r_{01}^2.

This has two roots, r_{01} = 1 and r_{01} = p/q. It follows that if p ≥ q, then the only root that is a probability is r_{01} = 1, as required. If p < q, then it certainly seems plausible that r_{01} = p/q, but a little work is needed to prove it. This can be done in several ways. We choose to use the following interesting fact; that is, for all i,

(10) E((q/p)^{X_i}) = p (q/p)^{+1} + q (q/p)^{−1} = q + p = 1.

Hence, if S_0 = 0,

(11) E((q/p)^{S_n}) = \prod_{i=1}^n E((q/p)^{X_i}) = 1.

Now denote T_{01} by T, and suppose that p < q and that

(12) r_{01} = 1.

Then conditioning on whether T ≤ n or T > n gives

(13) 1 = E((q/p)^{S_n}) = E((q/p)^{S_n} | T ≤ n) P(T ≤ n) + E((q/p)^{S_n} | T > n) P(T > n).

Now if T ≤ n, then S_T = 1 and S_n = 1 + X_{T+1} + · · · + X_n. Hence,

E((q/p)^{S_n} | T ≤ n) = E((q/p)^{1 + X_{T+1} + ··· + X_n}) = q/p, by (10).

Furthermore, if T > n, then S_n ≤ 0, and so

E((q/p)^{S_n} | T > n) ≤ 1.

Hence, allowing n → ∞ in (13) gives 1 = q/p + 0. But this is impossible when p < q, so (12) must be impossible. Hence, when p < q, we must have r_{01} = p/q.

In the case when p > q, so that T_{01} is finite, it is natural to ask what is E(T_{01})? If we knew that E(T_{01}) < ∞, then we could write

E(T_{01}) = E(E(T_{01}|X_1)) = pE(T_{01}|X_1 = 1) + qE(T_{01}|X_1 = −1) = p + q(1 + E(T_{−1,1})) = 1 + 2qE(T_{01}),

by (6). Hence,

(14) E(T_{01}) = \frac{1}{1 − 2q} = \frac{1}{p − q},

as required. It is not too difficult to show that E(T_{01}) < ∞, as we now demonstrate.

(15) Theorem   E(T_{01}) < ∞ when p > q.

Proof

P(T_{01} > n) = P(S_i ≤ 0 for 0 ≤ i ≤ n) ≤ P(S_n ≤ 0) = P\left(\tfrac{1}{2}(S_n + n) ≤ \tfrac{n}{2}\right) = P\left((q/p)^{\frac{1}{2}(S_n + n)} ≥ (q/p)^{n/2}\right), since p > q,
≤ (p/q)^{n/2} E\left((q/p)^{\frac{1}{2}(S_n + n)}\right),

by the basic inequality, Theorem 4.6.1. Now we recall the observation in Theorem 4 that \frac{1}{2}(S_n + n) has the B(n, p) mass function. Hence,

(16) E\left((q/p)^{\frac{1}{2}(S_n + n)}\right) = (2q)^n.

Therefore, finally,

E(T_{01}) = \sum_{n=0}^∞ P(T_{01} > n) ≤ \sum_{n=0}^∞ (2q)^n (p/q)^{n/2} = \frac{1}{1 − 2(pq)^{1/2}}, since pq < \tfrac{1}{4}.

This establishes (14), as required.
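The result (14) can be checked by simulation when p > q; the sketch below assumes p = 0.7, so that 1/(p − q) = 2.5.

```python
# Estimate E(T_{01}), the mean first-passage time from 0 to 1, when p > q.
import random

def t01(p):
    s, n = 0, 0
    while s < 1:
        n += 1
        s += 1 if random.random() < p else -1
    return n

p, trials = 0.7, 100_000
print(sum(t01(p) for _ in range(trials)) / trials, 1 / (2 * p - 1))
```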
There are one or two gaps in the above; for example, we do not yet know either the mass function of T_{01} or E(T_{01}) in the case p = q = \tfrac{1}{2}. Both of these can be filled by the following beautiful theorem.

(17) Hitting Time Theorem   Let (S_n; n ≥ 0) be a simple random walk with S_0 = 0. Let T_{0b} be the first passage time from 0 to b > 0, with mass function f_{0b}(n). Then

f_{0b}(n) = P(T_{0b} = n) = \frac{b}{n} P(S_n = b) = \frac{b}{n} \binom{n}{\frac{1}{2}(n + b)} p^{\frac{1}{2}(n+b)} q^{\frac{1}{2}(n−b)}.

The proof of (17) relies on the following lemma, which is of considerable interest in its own right. First, we observe that the number of paths of the walk from (0, 0) to (n − 1, b + 1) is denoted by N_{n−1}(0, b + 1), and we have

(18) N_{n−1}(0, b + 1) = \binom{n − 1}{\frac{1}{2}(n − b) − 1}.

(19) Lemma: The Reflection Principle   Let N^b_{n−1}(0, b − 1) be the number of paths from (0, 0) to (n − 1, b − 1) that pass through b at least once. Then

(20) N^b_{n−1}(0, b − 1) = N_{n−1}(0, b + 1).

Proof   Let π be a path that visits b on its journey from (0, 0) to (n − 1, b − 1). Let L be the occasion of its last visit. Now reflect that part of the walk after L in the line y = b. This yields a path π′ from (0, 0) to (n − 1, b + 1). Conversely, for any path from (0, 0) to (n − 1, b + 1), we may reflect the segment in y = b after its last visit to b, to give a path π from (0, 0) to (n − 1, b − 1). These two sets are thus in one–one correspondence, and (20) follows. Figure 5.2 illustrates the reflection.

Figure 5.2   The solid line is the path of the walk; the dashed line is the reflection in y = b of that part of the walk after its last visit to b before n, at time L.

Proof of (17): Hitting Time Theorem   If T_{0b} = n, then we must have X_n = +1 and S_{n−1} = b − 1. Now there are N_{n−1}(0, b − 1) paths from (0, 0) to (n − 1, b − 1), of which N^b_{n−1}(0, b − 1) visit b en route. Each such path has probability p^{\frac{1}{2}(n+b)−1} q^{\frac{1}{2}(n−b)}. Hence, using the reflection principle,

P(T_{0b} = n) = p (N_{n−1}(0, b − 1) − N_{n−1}(0, b + 1)) p^{\frac{1}{2}(n+b)−1} q^{\frac{1}{2}(n−b)}
= \left( \binom{n − 1}{\frac{1}{2}(n + b) − 1} − \binom{n − 1}{\frac{1}{2}(n + b)} \right) p^{\frac{1}{2}(n+b)} q^{\frac{1}{2}(n−b)}
= \frac{b}{n} \binom{n}{\frac{1}{2}(n + b)} p^{\frac{1}{2}(n+b)} q^{\frac{1}{2}(n−b)} = \frac{b}{n} P(S_n = b),

by (4). Because a similar argument works for negative values of b, we have

(21) P(T_{0b} = n) = \frac{|b|}{n} P(S_n = b)  and  E(T_{0b}) = \sum_{n=1}^∞ |b| P(S_n = b).

(22) Example: Symmetric Random Walk   When p = q, the simple random walk is said to be symmetric. In this case,

E(T_{01}) = \sum_{m=0}^∞ P(S_{2m+1} = 1) = \sum_{m=0}^∞ \binom{2m + 1}{m + 1} 2^{−(2m+1)} ≥ \sum_{m=0}^∞ \frac{2m(2m − 2) \cdots 2 \cdot 1}{(m + 1)m \cdots 1} 2^m 2^{−(2m+1)} = \sum_{m=0}^∞ \frac{1}{2(m + 1)} = ∞.

Hence, the symmetric random walk has the interesting property that

(23) P(T_{01} < ∞) = 1, but E(T_{01}) = ∞.

(24) Example: Conditioned Random Walk   Now, of course, when p < q we know that P(T_{01} < ∞) = p/q < 1, so T_{01} has no expectation. But consider S_n conditional on the event that T_{01} < ∞. In this case, by conditional probability,

P(X_1 = +1|T_{01} < ∞) = \frac{P(T_{01} < ∞|X_1 = 1)P(X_1 = 1)}{P(T_{01} < ∞)} = \frac{p}{p/q} = q.

Likewise, P(X_1 = −1|T_{01} < ∞) = p. Hence, if we knew that E(T_{01}|T_{01} < ∞) were finite, then by conditioning on the first step,

E(T_{01}|T_{01} < ∞) = q + pE(T_{01}|X_1 = −1; T_{01} < ∞) = 1 + 2pE(T_{01}|T_{01} < ∞),

by (6). Hence, when q > p,

(25) E(T_{01}|T_{01} < ∞) = \frac{1}{q − p}.

It is straightforward to use Theorem 17 to establish that E(T_{01}|T_{01} < ∞) < ∞ (an exercise for you) and (25) is proved. Together with (14) and Example 22, this shows that for any value of p and b > 0,

(26) E(T_{0b}|T_{0b} < ∞) = \frac{b}{|p − q|}.

We may also consider recurrence times.

(27) Example   Let T_0 be the recurrence time of 0. Show that P(T_0 < ∞) = 1 − |p − q| and

E(T_0|T_0 < ∞) = 1 + \frac{1}{|p − q|}.

Solution   Just consider T_0 conditional on the outcome of the first step, and then use what we know about T_{01} and T_{10}. You fill in the details.
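Here is a simulation sketch of the hitting time theorem for illustrative values p = 1/2 and b = 2; paths that have not hit b within an arbitrary cutoff are recorded separately and do not affect the estimates shown.

```python
# Compare simulated first-passage probabilities P(T_{0b} = n) with (b/n)P(S_n = b).
import random
from math import comb
from collections import Counter

p, b, trials = 0.5, 2, 200_000

def first_passage(p, b, nmax=200):
    s = 0
    for n in range(1, nmax + 1):
        s += 1 if random.random() < p else -1
        if s == b:
            return n
    return None                              # not hit within nmax steps

hits = Counter(first_passage(p, b) for _ in range(trials))
for n in (2, 4, 6, 8):
    exact = (b / n) * comb(n, (n + b) // 2) * p ** ((n + b) // 2) * (1 - p) ** ((n - b) // 2)
    print(n, round(hits[n] / trials, 4), round(exact, 4))
```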
Finally, we prove a famous result, the so-called

(28) Ballot Theorem   Let S_n = \sum_{i=1}^n X_i be a simple random walk with S_0 = 0. Then

P\left(\prod_{i=1}^{2n−1} S_i ≠ 0 \,\Big|\, S_{2n} = 2r\right) = \frac{r}{n}.

Proof   We count paths as we did in the hitting time theorem. What is the number N^0_{2n−1}(1, 2r) of paths from (1, 1) to (2n, 2r) that visit the origin? We can reflect the walk before its first zero in the x-axis (see Figure 5.3), and this shows that N^0_{2n−1}(1, 2r) = N_{2n−1}(−1, 2r). Because all N_{2n}(0, 2r) paths from (0, 0) to (2n, 2r) are equally likely, it follows that the required probability is

\frac{N_{2n−1}(1, 2r) − N^0_{2n−1}(1, 2r)}{N_{2n}(0, 2r)} = \frac{N_{2n−1}(1, 2r) − N_{2n−1}(−1, 2r)}{N_{2n}(0, 2r)} = \frac{\binom{2n−1}{n+r−1} − \binom{2n−1}{n+r}}{\binom{2n}{n+r}} = \frac{2r}{2n} = \frac{r}{n}.

Figure 5.3   The ballot theorem. The solid line is a path of the walk; the dashed line is the reflection in the x-axis of that part of the walk before its first visit to zero.

The following application explains the name of the theorem.

Example: Ballot   In an election, candidate A secures a votes and candidate B secures b votes. What is the probability that A is ahead throughout the count? By the above argument, this probability is (a − b)/(a + b) when a > b.

5.7 Martingales

In this section, we consider a remarkably useful class of random processes called martingales. They arise naturally as general models for fair games, but turn up in all kinds of unexpected places. In particular, they are used extensively in modern financial mathematics, but it is beyond our scope to explore this area in great detail. We begin with this:

(1) Definition   A collection (S_n; n ≥ 0) of random variables is a martingale if, for all n,
(a) E|S_n| < ∞.
(b) E(S_{n+1}|S_0, S_1, ..., S_n) = S_n.

This definition clearly shows the interpretation as a fair game; if S_n is a gambler's fortune after the nth play, then (b) asserts that the expectation of this fortune after the next play, taking into account all previous fluctuations in his fortune, is simply equal to S_n. Briefly, conditional on the past, future expectations equal the current value. Note that this section will rely heavily on the properties of conditional expectation summarized in 5.5.13; keep them well in mind.

Martingales get their name from a particularly well-known gambling strategy that we discussed above. We recall Example 4.15 in the special case when p = \tfrac{1}{2}.

(2) Example: The Martingale   You bet $1 at evens (calling heads on the flip of a fair coin, say); if you win, you quit. If you lose, you bet $2 at evens, and so on. That is, you double the stake at each loss, and quit at the first win. Let S_n denote your fortune after the nth bet, and let X_n denote the outcome of the nth flip of the coin; thus,

X_n = +1 with probability \tfrac{1}{2}, −1 with probability \tfrac{1}{2}.

Because EX_{n+1} = 0, it follows immediately that E(S_{n+1}|S_0, ..., S_n) = S_n. Also, |S_n| ≤ 1 + 2 + · · · + 2^n ≤ 2^{n+1}, so E|S_n| < ∞ and S_n is a martingale. If the game stops at the nth flip, your fortune is

S_n = −1 − 2 − 4 − · · · − 2^{n−1} + 2^n = 1,

so you always win. However, recall that this game is "fair" only in a mathematical sense, and the strategy is not as good as it may look! It has the serious drawback that the expected size of the winning bet is infinite. To see this, note that the game ends on the nth play with probability 2^{−n}. The stake on this play is $2^{n−1}. So the expected stake is $\sum_{n=1}^∞ 2^{n−1} 2^{−n}, which is infinite. You are well-advised not to gamble, and above all avoid the martingale if you do.
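The doubling strategy itself is easy to simulate; the sketch below assumes a fair coin and unlimited credit, and illustrates both the certain win of $1 and the instability of the sample mean of the winning stake.

```python
# Simulate the doubling ("martingale") strategy until the first win.
import random

def martingale_strategy():
    stake, losses = 1, 0
    while random.random() < 0.5:       # lose this bet with probability 1/2
        losses += stake
        stake *= 2
    return stake - losses, stake       # (net winnings, winning stake)

plays = [martingale_strategy() for _ in range(100_000)]
print(set(w for w, _ in plays))                  # always {1}
print(sum(s for _, s in plays) / len(plays))     # unstable; grows with sample size
```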
We give a few simple examples of natural martingales.

(3) Example   Let (X_n; n ≥ 0) be independent.
(a) If EX_n = 0, then S_n = \sum_{r=0}^n X_r defines a martingale because

E(S_{n+1}|S_0, ..., S_n) = E(S_n + X_{n+1}|S_n) = S_n.

(b) If EX_n = 1, then S_n = \prod_{r=0}^n X_r defines a martingale because

E(S_{n+1}|S_0, ..., S_n) = E(S_n X_{n+1}|S_n) = S_n.

The properties of conditional expectation extend in natural and obvious ways to larger collections of random variables; we single out, in particular, the tower property 5.5.13(vii) in the presence of random vectors Y and Z:

(4) E(X|Y) = E(E(X|Y, Z)|Y).

This leads to an important class of martingales.

(5) Example: Doob Martingale   Let X, X_0, X_1, ... be any collection of jointly distributed random variables with E|X| < ∞. Define

M_n = E(X|X_0, X_1, ..., X_n).

Then by Jensen's inequality, see Theorem 4.6.14 and Problem 5.42,

E|M_n| = E[|E(X|X_0, ..., X_n)|] ≤ E[E(|X| \,|\, X_0, ..., X_n)] = E|X| < ∞,

and

E(M_{n+1}|X_0, ..., X_n) = E(E(X|X_0, ..., X_{n+1})|X_0, ..., X_n) = E(X|X_0, ..., X_n), by (4), = M_n.

Hence, M_n is a martingale.
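The Doob martingale property can be verified exactly by enumeration for a small example; the sketch below assumes X = X_0 + X_1 + X_2 for three fair Bernoulli variables, an illustrative choice rather than anything from the text.

```python
# Verify E(M_{n+1} | X_0, ..., X_n) = M_n for M_n = E(X | X_0, ..., X_n),
# with X = X_0 + X_1 + X_2 and fair, independent X_i in {0, 1}.
from itertools import product

outcomes = list(product([0, 1], repeat=3))            # equally likely outcomes

def M(n, prefix):                                     # E(X | X_0..X_n = prefix)
    compat = [w for w in outcomes if w[: n + 1] == prefix]
    return sum(sum(w) for w in compat) / len(compat)

for prefix in product([0, 1], repeat=1):              # condition on X_0
    lhs = 0.5 * M(1, prefix + (0,)) + 0.5 * M(1, prefix + (1,))
    print(prefix, lhs, M(0, prefix))                  # equal, as required
```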
As one example of this, consider the following example.

(6) Example: An Options Martingale   Suppose that (X_n; n ≥ 0) represents the price of some stock on successive trading days, n ≥ 0. Naturally, E|X_n| < ∞. Suppose you own the right (but not the obligation) to purchase this stock at a fixed price K at an exercise date T. Then your option at that date is worth X, say, where

X = max{(X_T − K), 0} = (X_T − K)^+,

because if X_T < K, then your option is worthless; you could buy the stock anyway for the actual price less than K. At any time n < T, the expected value of your option, in the knowledge of the stock prices up to then, is

M_n = E(X|X_0, ..., X_n).

By the previous example, M_n is a martingale. [Note well that M_n is not the fair price for this option!]

We will see many more examples of martingales later, but for the moment we turn aside to observe that the key property of all realistic fair games is that they have to stop. For example, the gambler is bankrupt or decides to quit while ahead, or the casino imposes a house limit. There are many other real-life actions that have this central property that their nature is fixed, but their timing is optional. In all such cases, the action can only be taken in light of your knowledge up to the time of execution. Nobody can follow a rule that says "stop just before you have a big loss." This is unreasonable. We therefore make a useful definition of such reasonable times of action, called stopping times, as follows.

(7) Definition   A random variable T taking values in {0, 1, 2, ...} is called a stopping time with respect to {S_0, S_1, ...}, if the event {T = n} may depend only on {S_0, S_1, ..., S_n}, and is independent of S_{n+k} for all k ≥ 1. It may be that P(T < ∞) = 1, in which case T is said to be almost surely finite.

(8) Example: The Martingale 4.15 Again   The wheel is spun repeatedly and, in this case, T is the time when it first yields red. Let I_k be the indicator of the event that the kth spin yields red, so that by construction {T = n} = {I_k = 0, 1 ≤ k ≤ n − 1, I_n = 1}. Note that {T = n} does not depend on any I_{n+k}, so T is a stopping time for the martingale. In this case, P(T < ∞) = lim_{k→∞}{1 − P(T > k)} = 1 − lim_{k→∞}(1 − p)^k = 1, so T is almost surely finite.

(9) Example: First Passage Times   If {S_n; n ≥ 0} is a random walk, with S_0 = 0 (say), and T = min{n : S_n ≥ b}, then T is easily seen to be a stopping time.

Stopping times are at least as important to the theory of martingales as they are in real life. The main reason for this is the fact that a martingale {X_n; n ≥ 0} that is stopped at a random time T is still a martingale, provided that T is a stopping time for {X_n; n ≥ 0}. That is to say, formally:

(10) Theorem   Let T be a stopping time for the martingale {X_n; n ≥ 0}, and let

Z_n = X_{T∧n} = X_n if n ≤ T, and X_T if n > T.

Then Z_n is a martingale, and EZ_n = EX_0.

Proof   We can rewrite Z_n using indicators as

(11) Z_n = \sum_{r=0}^{n−1} X_r I{T = r} + X_n I{T ≥ n}.

Hence, E|Z_n| ≤ \sum_{r=0}^n E|X_r| < ∞. Also, using indicators again,

(12) Z_{n+1} = Z_n + (X_{n+1} − X_n)I{T > n}.

To see this, note that if I{T > n} = 1, then Z_{n+1} = X_{n+1}, and if I{T ≤ n} = 1, then Z_{n+1} = Z_n, both being consonant with the definition of Z_n. Next we note that, because {T > n} is independent of X_{n+k} for all k, and is a function of X_0, ..., X_n,

(13) E[(X_{n+1} − X_n)I(T > n)|X_0, ..., X_n] = I(T > n)E(X_{n+1} − X_n|X_0, ..., X_n) = 0,

using the pull-through property (5.5.13)(v). Hence,

E(Z_{n+1}|Z_0, ..., Z_n) = E(Z_{n+1}|X_0, ..., X_n) = Z_n,

and the result follows.

Now, if T is almost surely finite, it is true with probability 1 that Z_n → X_T as n → ∞, from (11). It is natural to ask if also, as n → ∞, EZ_n → EX_T, which would entail, using (12), the remarkable result that EX_T = EX_0. It turns out that this is true, under some extra conditions. Here are some popular cases:

(14) Theorem: Optional Stopping   Let X_n be a martingale and T a stopping time for (X_n; n ≥ 0). Then EX_T = EX_0 if any of the following hold for some positive finite constant K.
(a) T is bounded (i.e., T ≤ K < ∞).
(b) |X_n| ≤ K for all n, and P(T < ∞) = 1.
(c) E(|X_{n+1} − X_n| \,|\, X_0, ..., X_n) ≤ K for n < T, and ET < ∞.

Proof   We prove (a) and (b) here, postponing the proof of (c) to Theorem 5.9.9. First, recall that we showed EX_{T∧n} = EX_0 in Theorem (10). So if we take n = K, this proves (a) is sufficient. To show that (b) is sufficient, note that

|EX_0 − EX_T| = |EX_{T∧n} − EX_T| ≤ 2K P(T > n), because |X_n| < K,
→ 0 as n → ∞, because P(T < ∞) = 1.

Hence, |EX_0 − EX_T| = 0, which yields the result.

This simple-looking result is remarkably useful and powerful; we give many illustrations of this in later worked examples. Here is one to begin with.
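A standard application of optional stopping, sketched below for the symmetric simple random walk started at 0 and stopped on first hitting −a or b (illustrative a = 3, b = 5): the stopped walk is bounded and T is almost surely finite, so condition (b) gives E(S_T) = E(S_0) = 0, and hence P(S_T = b) = a/(a + b).

```python
# Check P(hit b before -a) = a/(a + b) for the symmetric simple random walk.
import random

def run(a, b):
    s = 0
    while -a < s < b:
        s += 1 if random.random() < 0.5 else -1
    return s == b

a, b, trials = 3, 5, 200_000
print(sum(run(a, b) for _ in range(trials)) / trials, a / (a + b))
```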
(15) Example: Wald's Equation   Let (X_n; n ≥ 1) be independent, having the common mean µ, and set

Y_n = Y_0 + \sum_{r=1}^n X_r − nµ.

It is easy to see that Y_n is a martingale (when E|Y_0| < ∞) because

E(Y_{n+1}|Y_0, ..., Y_n) = Y_n + E(X_{n+1} − µ) = Y_n.

Furthermore, E|Y_{n+1} − Y_n| = E|X_{n+1} − µ| ≤ E|X_1| + |µ| < ∞. Hence, when Y_0 = 0, and if T is any stopping time for (Y_n; n ≥ 0) such that ET < ∞, part (c) of the optional stopping theorem yields

EY_T = E\left(\sum_{r=1}^T X_r − Tµ\right) = EY_0 = 0.

That is to say,

(16) E\left(\sum_{r=1}^T X_r\right) = µET.

Of course, this would be trivial when T is independent of the X_n. It is remarkable that it remains true when T depends on the sequence X_n.

We conclude this section by noting that there are several other kinds of interesting and important martingales. Recall that a martingale is intuitively your fortune at the nth play of some fair game, and the martingale property implies that, for the martingale X_n,

E(X_{n+1} − X_n|X_0, ..., X_n) = 0.

It is equally easy to show that this implies the martingale property. The extension to unfair games is natural:

(17) Definition   Suppose that the sequence X_n satisfies E|X_n| < ∞. Then
(a) X_n is a supermartingale if E(X_{n+1} − X_n|X_0, ..., X_n) ≤ 0.
(b) X_n is a submartingale if E(X_{n+1} − X_n|X_0, ..., X_n) ≥ 0.
These correspond to unfavourable and favourable games, respectively.

A different type of martingale is motivated by looking at the average of a random walk.

(18) Example: Backward Martingale   Let X_1, X_2, ... be independent and identically distributed with common mean µ < ∞. De