An) = 1 − exp(−λn). Define N = (An; n ≥ 1) independent exp sn. are ∞ n − λk 1 (b) Find E(s N ) and E(N ), when λn = a + log n. The probability of obtaining heads when a certain coin is tossed is p. The coin is tossed repeatedly until a sequence of three heads is obtained. If pn is the probability that this event occurs in n throws, n=1 10 11 12 13 14 15 16 17 18 19 Problems 283 show that p0 = p1 = p2 = 0, p3 = p3, and pn = p3(1 − p) 1 − n−4 k=0 pk if n ≥ 4. Show that the generating function G(s) = ∞ k=0 pksk is given by G(s) = p3s3(1 − ps) 1 − s + p3(1 − p)s4. Now find the expected number of throws of an unbiased coin needed to obtain three consecutive heads. Each packet of a certain breakfast cereal contains one token, coloured either red, blue, or green. The coloured tokens are distributed randomly among the packets, each colour being equally likely. Let X be the random variable that takes the value j when I find my first red token in the jth packet which I open. Obtain the probability generating function of X, and hence find its expectation. m ). + 1 3 + · · · + 1 More generally, suppose that there are tokens of m different colours, all equally likely. Let Y be the random variable that takes the value j when I first obtain a full set, of at least one token of each colour, when I open my jth packet. Find the generating function of Y, and show that its expectation is m(1 + 1 2 A gambler repeatedly plays the game of guessing whether a fair coin will fall heads or tails when tossed. For each correct prediction he wins £1, and for each wrong one he loses £1. At the start of play, he holds £n (where n is a positive integer), and he has decided to stop play as soon as either (i) he has lost all his money, or (ii) he possesses £K, where K is a given integer greater than n. Let p(n
) denote for 1 ≤ n ≤ K − 1 the probability that he loses all his money, and let p(0) = 1, p(K ) = 0. Show that p(n) = 1 2 ( p(n − 1) + p(n + 1)); (1 ≤ n ≤ K − 1). 20 21 G(s) = k−1 n=0 p(n)sn then, provided s = 1, G(s) = 1 (1 − s)2 (1 − (2 − p(1))s + p(K − 1)s K +1). 22 Hence, or otherwise, show that p(1) = 1 − 1/K, p(K − 1) = 1/K and that, in general, p(n) = 1 − n/K. A class of particles behaves in the following way. Any particle in existence at time n is replaced at time n + 1 by a random number of similar particles having probability mass function f (k) = 2−(k+1), k ≥ 0, independently of all other particles. At time zero, there is exactly one particle in existence and the set of all succeeding particles is called its descendants. Let the total number of particles that have ever existed by time n be Sn. Show that the p.g.f. Gn(z) = E(z Sn ) satisfies Gn(z) = z 2 − Gn−1(z) for 0 ≤ z ≤ 1 and n ≥ 1. Deduce that with probability one, the number of particles that ever exist is finite, but that as n → ∞, E(Sn) → ∞. Let G(s) be the generating function of the family size in an ordinary branching process. Let Zn be the size of the population in the nth generation, and let Tn be the total number of individuals 23 284 6 Generating Functions and Their Applications who have ever lived up to that time. Show that Hn(s, t), the joint generating function of Zn and Tn satisfies Hn(s, t) = t G(Hn−1(s, t)). Show that for each integer n, (s + n − 1)(s + n − 2)... s/n!, is the probability generating function of some random variable X. Show that as n → ∞, E(
X )/ log n → 1. Find the probability generating function of the distribution P(X = k) = λ λ(λ + 1... (λ + k − 1) (1 + a)kk! ; a 1 + a k > 0,P(X = 0) = λ. a 1 + a Let X and Y be independent Poisson random variables with parameters λ and µ respectively. Find the joint probability generating function of X − Y and X + Y. Find the factorial moments of X + Y and the cumulants of X − Y. Let X be a binomial random variable with parameters n and p, and let Y be a binomial random variable with parameters m and q(= 1 − p). Find the distribution of X − Y + m, and explain why it takes the form it does. A series of objects passes a checkpoint. Each object has (independently) probability p of being defective, and probability α of being subjected to a check which infallibly detects a defect if it is present. Let N be the number of objects passing the checkpoint before the first defective is detected, and let D be the number of these passed objects that were defective (but undetected). Find: (a) The joint p.g.f. of D and N. (b) E(D/N ). If the check is not infallible, but errs with probability δ, find the above two quantities in this case. Let the sequence (ai ; i ≥ 0) be defined by 1 (2 − s)n+1 = ∞ 0 ai si. n i=0 ai = 1 2, and interpret this result in terms of random variables. Show that [Hint: (1 + x)n−r = (1 − x/(1 + x))r (1 + x)n.] A two-dimensional random walk (X n, Yn; n ≥ 0) evolves in the following way. If (X n, Yn) = (x, y), then the next step is to one of the four points (x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1), with respective probabilities α1, β1, α2, β2, where α1 + β1 + α2 + β2 = 1. Initially, (X 0,
Y0) = (0, 0). Define T = min{n; X n +Yn = m}. Find the probability generating function of T. In Problem 30, if α1 = β1 and α2 = β2, show that E(X 2 n Also, in Problem 30, if α1 = β1 = α2 = β2 = 1 4, (a) Show that E(T ) = ∞. (b) Show that the point at which the walk hits x + y = m is a proper random variable. (c) Find its generating function E(s X T −YT ). ∞ Use the identity t(1 + t)n−1 = (1 + t)n i=0(−t −1)i to prove that n ) = n + · · · + (−)n−. Let the generating function of the family size in an ordinary branching process be G(s) = 1 − p(1 − s)β ; 0 < p, β < 1. Show that if Z0 = 1 E(s Zn ) = 1 − p1+β+···+βn−1 (1 − s)βn. 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 Let the generating function of the family size of an ordinary branching process be G(s) = q + ps, and let E(s Z0 ) = eλ(s−1). Let T = min{n; Zn = 0}. Show that Problems 285 P(T = n) = e−λpn+1 − e−λpn. Let the number of tosses required for a fair coin to show a head be T. An integer X is picked at random from {1,..., T } with equal probability 1 Show that for α > 0, β > 0, α + β < 1, T of picking any one. Find G X (s). G(s, t) = log(1 − αs − βt) log(1 − α − β) + κ (Y ) r is a bivariate p.g.f. Find the marginal p.g.f.s and the covariance. Let X and Y be independent with r th cumulants κ (X ) κ (X ) r Let X have cumulants κr ; r ≥ 1 and moments µr
; r ≥ 1. Show that κ1 = E(X ), κ2 = var (X ), and κ3 = µ3 − 3µ1µ2 + 2µ3 1. Show that the joint probability mass function. Show that X + Y has r th cumulant and κ (Y ). r r f (x, y λx µy; x ≥ 0, y ≥ 1 has joint p.g.f. G(s, t) = (1 − λ − µ)t 1 − λs − µt. What is cov (X, Y )? Let X m have generating function ( p/(1 − qs))m, where p = 1 − q > 0. Show that as m → 0 E(s X m |X m > 0) → log(1 − qs) log(1 − q). 41 Prove the identity n k=0 2k k 4−k = (2n + 1) 2n n 4−n. Now let Sn be a simple symmetric random walk with S0 = 0. Let vn be the expected number of visits of the walk to zero, up to and including time n. Show that (including the initial visit) v2n = v2n+1 = (2n + 1) 2n n 2−2n. 42 Let (Sn; n ≥ 0) be a simple random walk with S0 = 0. Let Rr be the number of steps until the walk first revisits the origin for the r th time, and let T0r be the number of steps until the walk first visits r. Show that E(s T0r ) = E(s Rr ). r 1 2qs 286 6 Generating Functions and Their Applications Deduce that P(Rr = n) = (2q)r P(Tr = n − r ), and hence that, as n → ∞ P(Rr = n) P(R1 = n) → r. [H. Kesten and F. Spitzer have shown that this remains true for a wider class of random walks. (J. d’Anal. Math. Vol. 11, 1963).] Let X be geometric with parameter p. Use the fact that qEs X = Es X −1 − p to deduce that E(X − 1)k =
qEX k for k ≥ 1. If X has p.g.f. G(s), show that T (s) = EX (X − 1)... (X − k + 1) = kT (k−1)(1). s snP(X > n) = (1 − G(s))/(1 − s). Deduce that n 43 44 7 Continuous Random Variables He talks at random: sure the man is mad. W. Shakespeare, Henry VI 7.1 Density and Distribution √ Hitherto, we have assumed that a random variable can take any one of only a countable set of values. However, suppose your height is 5 feet or 6 feet (or somewhere in between). Then, previously (however briefly), your height in feet has taken every value in [1, 5], 2, e, π, and so on. (Each value can be taken more than once because you including are taller in the morning than in the evening.) Thus, if X is the height of a randomly selected member of the population, the state space of X is not countable. There are many other simple examples of variables that may take any one of an uncountable number of values; for example, the brightness of a randomly chosen star, the time until some cell divides, the velocity of a comet, the direction of the wind, and so on. Think of some yourself. In view of these remarks, we are about to introduce a new class of random variables such that the state space is uncountable, and X (ω) may take any one of an uncountable number of real values. However, before we embark on this task, it is as well to reassure you that, despite their separate presentation, these new random variables share most of the useful properties of discrete random variables. Also, many of these properties are proved in exactly the same way as in the discrete case and (even better) we are able to use much of the same notation. Thus, as in the discrete case, we start with a probability function P(·) defined on a collection F (the event space) of subsets of (the sample space). Then we think of a random variable X as a real valued function X (ω) defined for each ω. Our first requirement (as in the discrete case) is a function that tells us about the relative likelihood
s of possible values of X. Happily, we already have such a function; recall the following: 287 288 7 Continuous Random Variables (1) (2) (3) Definition The distribution function F of the random variable X is the function F(x) = P(Ax ), where Ax is the event Ax = {ω : X (ω) ≤ x}, xR. We usually write (2) as F(x) = P(X ≤ x), and denote F by FX (x) when we want to stress the role of X. Notice that for Definition 1 to be meaningful, the event Ax must be in F, so that we know P(Ax ). Thus, X (·) only qualifies to appear in Definition 1 if Ax F for all x. This is true throughout this book; the implications of this so-called measurability condition are explored in more advanced books. (4) Example: Uniform Distribution You devise an experiment in which the outcome is equally likely to be any point Q in the interval [0, 1]. Thus, the sample space is the set of points (Q : Q[0, 1]). The event space F will include all intervals in [0, 1]; we omit the proof that such an F exists, but you should rest assured that it does exist. Define the random variable X (Q) to be the distance from the origin O to Q. From the nature of the experiment, if Aab = {Q : Q(a, b)}, 0 ≤ a ≤ b ≤ 1, then P(Aab) = b − a. Hence, X has distribution function sketched in Figure 7.1 FX (x) = P(A0x ) = 0 x 1 if if if. In the future, the underlying sample space will make few appearances. We tend to think of the possible values of X as the sample space, as we did for discrete random variables. (It can be proved that this is a permissible view.) Figure 7.1 The distribution function FX (x) of a random variable X distributed uniformly on (0, 1). 7.1 Density and Distribution 289 Figure 7.2 The altitude AP is of length h. (5) Example A point Q is picked at random in a triangle of area a, with base of length b. Let X
be the perpendicular distance from Q to the base. What is the distribution function FX (x)? [See figure 7.2 for a sketch of the triangle]. Solution inside the triangle ABC. For reasons of symmetry, Let the height of the triangle AP be h. The event X > x occurs when Q lies P(Q ABC) = (area of ABC)/a = h − x h 2. Hence, FX (x) = P(X ≤ x) = 1 − P(Q ABC) = . We summarize the basic properties of F(x) in the following: (6) Theorem Let X have distribution function F(x). Then 0 ≤ F(x) ≤ 1 for all x, and (7) (8) If h > 0, then P (x < X ≤ y) = F(y) − F(x) ≥ 0, for all x ≤ y. F(x + h) = F(x). lim h→0 (9) If P(|X | < ∞) = 1, then F(x) = 1 and lim x→∞ lim x→−∞ F(x) = 0. 290 7 Continuous Random Variables In plain words, this theorem says that as x increases, F(x) is nondecreasing, continuous on the right, and lies between 0 and 1. It can be shown conversely that any function with these properties is the distribution function of some random variable. Proof The first result follows from (2) because 0 ≤ P(Ax ) ≤ 1. To show (7), note that {ω: x < X ≤ y} = Ay ∩ Ac x = Ay\Ax. Because Ax ⊆ Ay for x ≤ y, we have P (x < X ≤ y) = P (Ay) − P(Ax ) = F(y) − F(x) ≥ 0 by the nonnegativity of P(·). To prove (8), we use the continuity of P(·), see (1.5.4.). Let (hk; k ≥ 1) be any sequence decreasing to zero, and let A(k) be the event that X ≤ x + hk. Then lim h→0 F(x + h) = lim k→∞ = P( lim k→∞
F(x + hk) = lim k→∞ P(A(k)) A(k)) = P(Ax ) = F(x), as required. Finally, for (9), lim x→∞ F(x) = lim n→∞ P(An) = P() by (1.5.4). The last part is proved similarly. Although the distribution function has not played a very active role so far in this book, it now assumes a greater importance. One reason for this is Theorem 6 above, which shows that F(x) really is a function that can tell us how likely X is to be in some simple subset of the real line. Another reason is the following simple corollary of Example 5. (10) Corollary If F(x) is continuous, then for all x P(X = x) = 0. Proof P (X = x) = lim n→∞ P x − 1 n by (1.5.4) by Theorem 6 < X ≤ x x − 1 n = lim n→∞ F(x) − F = 0 because F is continuous. (11) Example Let X be uniformly distributed on (0, 1). Then F(x is clearly continuous, so that P(X = x) = 0 for all x. 7.1 Density and Distribution 291 If FX is continuous, then X is known as a continuous random variable. We now define a particularly important class of continuous random variables. (12) Definition Let X have distribution function F. If the derivative (13) (14) (15) d F d x = F (x) exists at all but a finite number of points, and the function f defined by F (x) 0, where F (x) exists elsewhere, f (x) = satisfies F(x) = x −∞ f (ν) dν, then X is said to be a continuous random variable with density f (x). It follows from (15) that if X has density f (x), then for C ⊆ R the Key Rule is: P (X C) = f (x)d x, C when both sides exist. [In line with the remark following (3), {ω: X (ω) C}F.] they exist if Example
4 Revisited: Uniform Density uniformly at random in [0, 1], then It follows that X has a density FX (x) = f X (x) = In Example 4, we found that if X was chosen. 0 < x < 1 otherwise. Figures 7.3 to 7.6 illustrate the density and distribution of random variables Y and Z, which are uniformly distributed on (a, b) and (a, b) ∪ (c, d), respectively. Example (5) Revisited the perpendicular distance from Q to the base, we found For the point Q picked at random in a triangle, where X is FX (x) = . 292 7 Continuous Random Variables Figure 7.3 The density function f (y) of a random variable distributed uniformly on (a, b). It follows that X has density f X (x) = 2(h − x)/ h2 0 0 < x < h otherwise. There are continuous random variables that do not have a density, but none appear in this text. Therefore, whenever X is said to be continuous here, it always follows that X has a density f. Let us consider a basic example. In the discrete case, the geometric distribution is of the form F(x) = 1 − q x, if x = 1, 2,... It is natural to consider the analogous distribution in the continuous case, which turns out to be equally (if not more) important. (16) Example: Exponential Density Let X have distribution F(x) = max{0, 1 − e−λx }, where λ > 0. Then F is continuous for all x, and also differentiable except at x = 0, so F (x) = 0 λe−λx if x < 0, if x > 0. Figure 7.4 The distribution function F(y) of a random variable distributed uniformly on (a, b). 7.1 Density and Distribution 293 fZ(z) (d − c + b − a)−1 0 a b c d z Figure 7.5 The density function of a random variable distributed uniformly on (a, b) ∪ (c, d), where a < b < c < d. Now let Then of course, for x > 0, f (x) = λe
−λx 0 if x > 0, x ≤ 0. F(x) = 1 − e−λx = x 0 λe−λvdv = x ∞ f (v)dv, and, for x < 0, F(x) = 0 = x −∞ f (v)dv. Hence, f (x) is a density of X. See Figures 7.7 and 7.8. Notice that F (0) does not exist, and also that the function f (x) = λe−λx x ≥ 0 x < 0 0 FZ(z) 1 0 a b c d z Figure 7.6 The distribution function of a random variable distributed uniformly on (a, b) ∪ (c, d), where a < b < c < d. 294 7 Continuous Random Variables Figure 7.7 The density of an exponential random variable with parameter λ; f (x) = λe−λx for x ≥ 0. + x −∞ f (v)dv. This illustrates the fact that (15) does not uniquely would also satisfy F(x) = determine f (x) given F(x). However, this problem can only arise at a finite number of points and (to some extent) it does not matter what value we give f (x) at any of these exceptional points. Usually we make it zero, but consider the following: (17) Example: Two-Sided Exponential Distribution Let X have distribution F(x) = peλx 1 − (1 − p)e−λx if x < 0 if x ≥ 0, where 0 < p < 1 and λ > 0. Then F(x) is continuous and F (x) = λpeλx λ(1 − p)e−λx if x < 0 if x > 0. Figure 7.8 The distribution of an exponential random variable with parameter λ; F(x) = 1 − e−λx for x ≥ 0. 7.1 Density and Distribution 295 A suitable density is f (x) = λpeλx 0 λ(1 − p)e−λx if x < 0 if x = 0 if x > 0. 2, it is tempting to set f (0) = λ However, if p = 1 point is that it really
does not matter very much. 2 and write f (x) = 1 2 λe−λ|x| for all x. The Finally, we note the obvious facts that for any density f, (18) (19) and if P(|X | < ∞) = 1, then f (x) ≥ 0, ∞ −∞ f (v)dv = 1. It is straightforward to see that any function with these properties is the density of some random variable, and so any integrable nonnegative function can be used to form a density. Example and The function g(x) = x 2 − x + 1 is easily seen to be nonnegative for all x, +b +a g(v)dv = 1 3 (b3 − a3) − 1 2 (b2 − a2) + b − a = c(a, b), say. Hence, the function is a density function. f (x) = c(a, b)−1g(x) 0 if a < x < b elsewhere (20) Example: Cauchy Distribution Show that for an appropriate choice of the constant c(a, b), the function f (x) = c(a, b)(1 + x 2)−1 0 if a < x < b elsewhere is a density function. Show that c(−∞, ∞) = 1/π and that c(−1, 1) = c(0, ∞) = 2/π. Solution Trivially, f ≥ 0 if c ≥ 0. New recall that d d x tan−1 x = (1 + x 2)−1. Thus, + a f (x)d x = 1 if and only if c(a, b)−1 = tan−1 b − tan−1 a. In particular, b c(−∞, ∞)−1 = π 2 + π 2 = π, 296 7 Continuous Random Variables as required, and c(−1, 1)−1 = π 4 + π 4 = c(0, ∞)−1. In general then, given a nonnegative function g(x), the function f (x) = g(x) ∞ −1 g(v)dv −∞ is a density, if the integral exists. To discover whether it does, the following technique is useful. If we can
find a constant b such that for all n > 0, + 0 g(v)dv < b < ∞, then n ∞ 0 exists by monotone convergence. g(v)dv = lim n→∞ n 0 g(v)dv (21) Example: Normal Density Show that − 1 2 x 2 for all x R f = c exp can be a density. Solution For any n > 1, n exp − 1 2 v2 −n + ∞ −∞ exp(− 1 2 dv < 2 1 0 dv + n 1 e−vdv < 2(1 + e−1). Hence, c−1 = v2)dv exists, and f is a density for this c. Remark In fact, it can be shown that c−1 = (2π)1/2. The proof of this is not quite trivial; we give it in Example 8.3.8. Also, note that there are other normal densities; the one in Example 21 is called the standard normal density denoted by N (0, 1), and by φ(x) = (2π)−1/2 exp(− 1 2 x 2). Its distribution is (x), given by (x) = x −∞ φ(v)dv. (22) (23) (24) (25) Example: Gamma Distribution Show that for α, λ, x > 0, the function f (x) = cλα x α−1e−λx can be a density. When α is a positive integer, show that c−1 = (α − 1)! 7.2 Functions of Random Variables 297 First, we show that the integral Solution x α−1e− 1 x α−1e− 1 2 2 λx → 0 as λx < 1. Hence, for n > m, x → ∞, there is some m < ∞ such that exists. Because x > m, for + ∞ 0 x α−1e−λx d x n 0 vα−1e−λvdv < < vα−1e−λvdv + n m e− 1 2 λvdv vα−1e−λvdv + 2λ−1e− 1 2 λm = b (say). m 0 m 0 + ∞ 0 Hence, c
−1 = gration by parts gives c−1 = (α − 1)! λα x α−1e−λx d x exists, and if α is a positive integer then repeated inte- λe−λvdv = (α − 1)! + ∞ 0 This density is known as the gamma density with parameters α and λ. When α is not an integer, the integral above defines a function of α known as the gamma function, and denoted by ∞ (α) = 0 λαvα−1e−λvdv = uα−1e−udu. ∞ 0 The density function in this case is f (x) = 1 (α) λα x α−1e−λx for x > 0. In particular, when λ = 1 2 and α = n 2, the density f (x) = 1 ( n 2 ) is known as the χ 2(n) density, and it is referred to as chi-squared with n degrees of freedom. −1e −1 2 x λ n 2 x n 2 We conclude with another way of producing densities. (26) (27) (28) (29) Example: Mixtures Let f1(x) and f2(x) be density functions, and let γ f1(x) + (1 − γ ) f2(x), where 0 ≤ γ ≤ 1. Then f3(x) ≥ 0, and f3 = γ f1 + (1 − γ ) f2 = 1. Hence, f3 is a density, and is said to be a mixture of f1 and f2. f3(x) = For example, the two-sided exponential density of Example 17 may now be seen as a mixture of f1 = λeλx (x < 0) and f2 = λe−λx (x > 0) with γ = p. 7.2 Functions of Random Variables Suppose that X and Y are random variables such that Y = g(X ), where g(.) is some given function. If we know the density of X, can we find the distribution of Y? In general terms, the answer is straightforward because, by the properties of densities and 298 distributions, 7 Continuous Random Variables (1) F(y) = P(Y ≤ y)
= P(g(X) ≤ y) = ∫_C f_X(v) dv,

where C = {v : g(v) ≤ y}. Then, if F(y) is continuous and differentiable, we can go on to find the density of Y, if it exists. Here are some simple examples of this idea in practice.

(2) Example   Let X be uniformly distributed on (0, 1) with density

f(x) = 1 if 0 < x < 1, and 0 otherwise.

If Y = −λ^{−1} log X, where λ > 0, what is the density of Y?

Solution   First, we seek the distribution of Y:

F_Y(y) = P(−λ^{−1} log X ≤ y) = P(log X ≥ −λy) = P(X ≥ e^{−λy}) = 1 − e^{−λy} for y ≥ 0, and 0 otherwise.

Hence, the derivative exists except at y = 0, and

f_Y(y) = λe^{−λy} if y > 0; 0 if y ≤ 0.

This is the exponential density with parameter λ.
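The following short Python sketch (not part of the original text) checks Example 2 numerically; NumPy, the sample size, and the value of λ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                        # an illustrative value of the parameter lambda
u = rng.random(100_000)          # U uniform on (0, 1)
y = -np.log(u) / lam             # Y = -lambda^{-1} log U, as in Example 2

# If Example 2 is correct, Y is exponential with parameter lambda:
# mean 1/lambda, and P(Y > t) = exp(-lambda * t).
print(y.mean(), 1 / lam)
t = 1.0
print((y > t).mean(), np.exp(-lam * t))
```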
Some care is required if g(·) is not one–one.

(3) Example   Let X be uniformly distributed on [−1, 1]. Find the density of Y = X^r for nonnegative integers r.

Solution   First, note that X has distribution function F(x) = (1/2)(1 + x) for −1 ≤ x ≤ 1. Now, if r is odd, then the function g(x) = x^r maps the interval [−1, 1] onto itself in one–one correspondence. Hence, routinely:

P(Y ≤ y) = P(X^r ≤ y) = P(X ≤ y^{1/r}) = (1/2)(1 + y^{1/r}) for −1 ≤ y ≤ 1,

and Y has density

f(y) = (1/(2r)) y^{1/r − 1}, −1 ≤ y ≤ 1.

If r is even, then g(x) = x^r takes values in [0, 1] for x ∈ [−1, 1]. Therefore,

P(Y ≤ y) = P(0 ≤ X^r ≤ y) = P(−y^{1/r} ≤ X ≤ y^{1/r}) = y^{1/r} for 0 ≤ y ≤ 1.

Hence, Y has density

f(y) = (1/r) y^{1/r − 1}, 0 ≤ y ≤ 1.

Finally, if r = 0, then X^r = 1, F_Y(y) is not continuous (having a jump from 0 to 1 at y = 1), and so Y does not have a density in this case. Obviously, Y is discrete, with P(Y = 1) = 1.

(4) Example   Let X have the standard normal distribution with density

f(x) = (2π)^{−1/2} exp(−x²/2).

Find the density of Y = σX + µ for given constants µ and σ ≠ 0. Also, find the density of Z = X².

Solution   Adopting the by now familiar technique:

(5)   P(σX + µ ≤ y) = P(σX ≤ y − µ) = F_X((y − µ)/σ) if σ > 0, and 1 − F_X((y − µ)/σ) if σ < 0.

Hence, differentiating (5) with respect to y,

(6)   f_Y(y) = (2πσ²)^{−1/2} exp(−(1/2)((y − µ)/σ)²).

Second,

P(X² ≤ z) = P(X ≤ √z) − P(X ≤ −√z) = F_X(√z) − F_X(−√z).

Differentiating now gives

(7)   f_Z(z) = (1/(2√z)) f_X(√z) + (1/(2√z)) f_X(−√z) = (2πz)^{−1/2} exp(−z/2).

Remark   The density given by (6) is known as the normal density with parameters µ and σ², sometimes denoted by N(µ, σ²). The standard normal density of Example 7.1.21 was N(0, 1) because φ(x) has µ = 0 and σ = 1. The density given by (7) is the gamma density of (7.1.23) with parameters 1/2 and 1/2. This is known as the chi-squared density with parameter 1, sometimes denoted by χ²(1). This is a special case of (7.1.28).
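Here is a hedged numerical check of equation (7), added here and not from the text: if X is standard normal, then P(X² ≤ z) should equal F_X(√z) − F_X(−√z) = 2Φ(√z) − 1. NumPy and the chosen value of z are assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)   # X standard normal
z = 1.5                            # an arbitrary test point

Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))   # standard normal distribution function
print((x ** 2 <= z).mean())        # empirical P(X^2 <= z)
print(2 * Phi(math.sqrt(z)) - 1)   # F_X(sqrt z) - F_X(-sqrt z), the chi-squared(1) distribution
```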
(8) Example: Inverse Functions   Let X have distribution function F(x), where F(x) is continuous and strictly increasing. Let g(x) be a function satisfying F(g(x)) = x. Because F(x) is continuous and strictly increasing, this defines g(x) uniquely for every x in (0, 1). The function g(·) is called the inverse function of F(·) and is often denoted by g(x) = F^{−1}(x). Clearly, F is the inverse function of g, that is,

(9)   g(F(x)) = F(g(x)) = x,

and g(x) is an increasing function.

(a) Use this function to show that Y = F(X) is uniformly distributed on (0, 1).
(b) Show that if U is uniform on (0, 1), then Z = F^{−1}(U) has distribution F(z).

Solution   (a) As usual, we seek the distribution function

P(Y ≤ y) = P(F(X) ≤ y) = P(g(F(X)) ≤ g(y)) = P(X ≤ g(y)), by (9),
         = F(g(y)) = y, by (9).

(b) Again,

P(F^{−1}(U) ≤ z) = P(F(g(U)) ≤ F(z)) = P(U ≤ F(z)), by (9),
               = F(z).
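Example 8(b) is the basis of inverse-transform sampling. The sketch below is an illustration added here, not the author's code: it draws Cauchy samples by applying the explicit inverse F^{−1}(u) = tan(π(u − 1/2)) of the Cauchy distribution of Example 7.1.20 to uniform samples. The helper name inverse_transform is made up for the example.

```python
import numpy as np

def inverse_transform(F_inv, n, rng):
    """Draw n samples with distribution F by setting Z = F^{-1}(U), as in Example 8(b)."""
    return F_inv(rng.random(n))

rng = np.random.default_rng(2)

# Standard Cauchy (Example 7.1.20): F(x) = 1/2 + arctan(x)/pi,
# so F^{-1}(u) = tan(pi * (u - 1/2)).
z = inverse_transform(lambda u: np.tan(np.pi * (u - 0.5)), 200_000, rng)

# The sample quartiles should be near -1, 0, 1 (the Cauchy quartiles).
print(np.percentile(z, [25, 50, 75]))
```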
Although we have introduced them separately, discrete and continuous variables do have close links. Here are some examples to show this.

(10) Example: Step Functions   Let X have distribution function F(x) and density f. Define the function S : R → Z by

(11)   S(X) = k, if k ≤ X < k + 1,

where k is any integer. Then S(X) is an integer valued discrete random variable with mass function

(12)   f_S(k) = P(k ≤ X < k + 1) = ∫_k^{k+1} f(v) dv.

Obviously, P(S(X) ≤ X) = 1 and F_S(x) ≥ F_X(x), and

(13)   |S(X) − X| ≤ 1.

Now equation (13) shows that the integer valued S(X) is, in some sense, a rough approximation to the continuous random variable X. It is easy to get much better approximations as follows.

(14) Example: Discrete Approximation   As usual, X has density f(x); suppose also that X > 0. For fixed n, with 0 ≤ r ≤ 2^n − 1 and k ≥ 0, define

S_n(X) = k + r2^{−n}   if k + r2^{−n} ≤ X < k + (r + 1)2^{−n}.

Then S_n(X) is a discrete random variable taking values in (k + r2^{−n}; k ≥ 0, 0 ≤ r ≤ 2^n − 1), with

P(S_n(X) = k + r2^{−n}) = ∫_{k+r2^{−n}}^{k+(r+1)2^{−n}} f(v) dv.

Again, we have S_n(X) ≤ X, but this time, by the construction,

(15)   |S_n(X) − X| ≤ 2^{−n}.

Thus, by choosing n large enough, we can find a discrete random variable S_n(X) such that |X − S_n(X)| is as small as we please. In fact, it can be shown that we can find a simple random variable (taking only a finite number of values) that is arbitrarily close to X, but in a weaker sense than (15). (See Problem 12.)
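A small sketch (not in the text) of Example 14: rounding X down to the grid of spacing 2^{−n} produces S_n(X), the gap never exceeds 2^{−n}, and E(S_n(X)) approaches E(X). The exponential choice for X and NumPy are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=100_000)   # a positive continuous X (illustrative)

for n in (1, 4, 8):
    s_n = np.floor(x * 2 ** n) / 2 ** n        # S_n(X): round X down to the grid of spacing 2^-n
    # By (15) the gap is at most 2^-n, and E(S_n(X)) approaches E(X).
    print(n, (x - s_n).max(), abs(s_n.mean() - x.mean()))
```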
7.3 Simulation of Random Variables

A random variable is a mathematical concept (having no other existence) that is suggested by the outcomes of real experiments. Thus, tossing a coin leads us to define an X(·) such that X(H) = 1, X(T) = 0, and X is the number of heads. The coin exists, X is a concept. A natural next step, having developed theorems about mathematical coins (e.g., the arc-sine laws), is to test them against reality. However, the prospect of actually tossing a large enough number of coins to check the arc-sine laws is rather forbidding. Luckily, we have machines to do large numbers of boring and trivial tasks quickly, namely, computers. These can be persuaded to produce many numbers (u_i; i ≥ 1) that are sprinkled evenly and “randomly” over the interval (0, 1). The word randomly appears in quotations because each u_i is not really random. Because the machine was programmed to produce it, the outcome is known in advance, but such numbers behave for many practical purposes as though they were random. They are called pseudorandom numbers. Now if we have a pseudorandom number u from a collection sprinkled uniformly in (0, 1), we can look to see if u < 1/2, in which case we call it “heads”, or u > 1/2, in which case we call it “tails.” This process is called simulation; we have simulated tossing a coin.

Different problems produce different random variables, but computers find it easiest to produce uniform pseudorandom numbers. We are thus forced to consider appropriate transformations of uniform random variables, and therefore many of the results of Section 7.2 find concrete applications when we seek to simulate random variables. A natural first question (before “how”) is why might we want to simulate such random variables? Some examples should suffice to answer this question.

(1) Example: Epidemic   An infection is introduced into a population. For each individual, the incubation period is a random variable X, the infectious period is a random variable Y, and the number of further individuals infected is a random variable N, depending on Y and the behaviour of the infected individual. What happens? Unfortunately, exact solutions to such problems are rare and, for many diseases (e.g., the so-called “slow viruses” or prions), X and Y are measured in decades, so experiments are impractical. However, if we could simulate X and Y and the infection process N, then we could produce one simulated realization (not a real realization) of the epidemic. With a fast computer, we could do this many times and gain a pretty accurate idea of how the epidemic would progress (if our assumptions were correct).

(2) Example: Toll Booths   Motorists are required to pay a fee before entering a toll road. How many toll booths should be provided to avoid substantial queues? Once again, an experiment is impractical. However, simple apparatus can provide us with the rates and properties of traffic on equivalent roads. If we then simulate the workings of the booth and test it with the actual traffic flows, we should obtain reasonable estimates of the chances of congestion
. Because of the ready availability of large numbers of uniform pseudorandom numbers, interest is concentrated on finding transformations that then yield random variables of arbitrary type. We have seen several in Section 7.2. Here is another idea. Example: Composition (0, 1). Show how to simulate a random variable with density The pseudorandom variable U is uniformly distributed on f X = 1 4 (x − 1 2 + (1 − x)− 1 2 ), 0 < x < 1. Solution Next consider Recall that if U is uniform on (0, 1) then U 2 has density f1(x(1 − U 2 ≤ x) = P(U ≥ (1 − x) 1 2 ) = 1 − (1 − x) 1 2. Hence, 1 − U 2 has density f2(x) = 1 write 2 (1 − x)− 1 2. Now toss a coin (real or simulated), and X = U 2 1 − U 2 if it’s heads if it’s tails. Then as required. f X (x) = 1 2 f1(x) + 1 2 f2(x) = 1 4 (x − 1 2 + (1 − x)− 1 2 ), We describe other methods of simulation as the necessary ideas are developed. Random variables with a density may have an expected value, similar to random variables with a mass function. 7.4 Expectation 7.4 Expectation 303 (1) (2) Definition pected value, which is given by Let X have density f (x). If + ∞ −∞ |v| f (v)dv < ∞, then X has an ex- E(X ) = ∞ −∞ v f (v)dv. (3) Example: Uniform Density Let X be uniformly distributed on (a, b). Then (4) Example: Exponential Density b E(X ) = v b − a dv = 1 2 (b − a). a Let X have density f (x) = λe−λx for x ≥ 0. Then ∞ E(X ) = 0 vλe−λvdv = λ−1. (5) Example: Normal Density Let X have the N (µ, σ 2) density. Then E(X ) = = ∞ −∞ ∞ 1 2 1 σ (
2π) 1 σ (2π) µ σ (2π) 1 2 + v exp(−(v − µ)2/(2σ 2))dv (v − µ) exp −∞ ∞ − 1 2 v − µ σ 2 2 dv − 1 2 v − µ σ u2 du + −∞ exp u exp 1 2 ∞ −∞ − 1 2 = 1 (2π) 1 2 dv µ (2π ) 1 2 ∞ exp −∞ − 1 2 u2 du on making the substitution u = (v − µ)/σ in both integrands. The first integrand is an odd function, so the integral over R is zero. The second term is µ by Example 7.1.21 and 7.1.22. Hence, E(X ) = µ. Expectation may be infinite, as the next example shows. (6) Example: Pareto Density Let X have density f (x) = (α − 1)x −α for x ≥ 1 and α > 1. Then if α ≤ 2, the expected value of X is infinite because E(X ) = lim n→∞ n (α − 1)v vα 1 dv = (α − 1) lim n→∞ n 1 1 vα−1 dv, which diverges to ∞ for α − 1 ≤ 1. However, for α > 2, ∞ E(X ) = 1 (α − 1) vα−1 dv = (α − 1) (α − 2). Then again, the expectation of X may not exist, as the next example shows. 304 7 Continuous Random Variables (7) Example: Cauchy Density Let X have density f (x) = 1 π(1 + x 2), − ∞ < x < ∞. + a Because 0 have an expected value. v(π(1 + v2))−1dv diverges as a a → −∞ and as a a → +∞, X does not It is appropriate to give a moment to considering why we define E(X ) by Definition 1. This definition is at least plausible, by analogy with the definition E(X ) = �
� v=−∞ v f (v), in the discrete case. Of course, Definition 1 is much more than just a plausible analogy, but a complete account of expectation is well beyond our scope. However, we can use Example 7.2.14 to give a little more justification for Definition 1. Let k + r 2−n = a(k, r, n). Recall from (7.2.15) that |Sn − X | < 2−n. Now by definition, because Sn(X ) is discrete, E(Sn(X )) = = k,r a(k, r, n) a(k,r +1,n) k,r a(k,r,n) a(k,r +1,n) a(k,r,n) f (v)dv (v f (v) + (a(k, r, n) − v) f (v))dv. Because |a(k, r, n) − v| < 2−n, it can be shown (with more work, which we omit) that ∞ E(Sn(X )) = v f (v)dv + n = E(X ) + n, −∞ where n → 0 as n → ∞. An explicit demonstration may be helpful here. (8) Example uniformly distributed on Let X be uniform on (0, 1) with mean value E(X ) = 1 2 2 0, 2−n, 2.2−n, 3.2−n,..., 1 − 2−n. Therefore, 3. Then Sn(X ) is E(Sn(X )) = 2n −1 r =0 r 2−n.2−n = 1 2 (2n − 1)2−n = E(X ) − 2−(n+1) → E(X ), as n → ∞. Thus, our definitions of expectation for discrete and continuous variables are at least consistent in some way. In more advanced books, a single definition of E(.) is given, which is shown to yield our definitions as special cases. Next we return to considering functions of random variables. Suppose we are given random variables Y and X related by Y = g(X ).
What is E(Y )? If we know the density of X, then we may be able to find E(Y ) by first discovering fY (y), if it exists. This is often an unattractive procedure. We may do much better to use the following theorem, which we state without proof. (9) (10) (11) (12) (13) 7.4 Expectation 305 Theorem f (x). Then Y has an expected value if Let random variables X and Y satisfy Y = g(X ), where X has density + ∞ −∞ |g(v)| f (v)dv < ∞, and in this case, ∞ E(Y ) = −∞ g(v) f (v)dv. The proof of this is straightforward but long. An heuristic discussion of the type above shows that if we represent the distribution of X as a limit of discrete distributions, and then formally proceed to this limit in Theorem 4.3.4, equation (10) is the result. Again, this only makes (10) plausible, it does not provide the proof, which is beyond our scope. This important result implies that the useful consequences of Theorem 4.3.4 remain true for random variables with a density. In particular, Theorem 4.3.6 remains true; the proofs of most parts are just typographical variants of the proofs in the discrete case; just replace by +. We describe one important and less trivial case in detail, namely, the analogy of Theo- rem 4.3.11. Theorem: Tail integral distribution F, and finite expected value E(X ). Then Let the nonnegative random variable X have density f, ∞ E(X ) = (1 − F(x))d x. 0 Proof For any finite y, we may integrate by parts to obtain But we have y 0 x f (x)d x = −x(1 − F(x))|y 0 + y 0 (1 − F(x))d x. ∞ ∞ y(1 − F(y)) = y f (x)d x ≤ x f (x)d x → 0 y y as y → ∞, because E(X ) < ∞. Hence, we can let y → ∞ in (13) to prove the theorem. We can use this to prove a
useful special case of Theorem 9. (14) Example Let the nonnegative random variable X have density f, and let g(X ) ≥ 0. Show that E(g(X )) = + ∞ 0 g(v) f (v)dv. Solution E(g(X )) = ∞ 0 ∞ P(g(X )) ≥ v) dv by (12) ∞ = 0 x:g(x)≥v f (x)d xdv = f (x) 0 0 g(x) ∞ dvd x = f (x)g(x)d x, 0 306 7 Continuous Random Variables as required. The interchange in the order of integration is justified by a theorem on double integrals, which we omit. The various moments of a random variable with a density are defined just as they were for discrete random variables, that is to say: µk = E(X k), and σk = E((X − E(X ))k). (15) Example: Normal Density Let X have the density N (0, σ 2). Find µk for all k. Solution is odd. If k = 2n, then integrating by parts gives If k is odd, then x k exp(−x 2/(2σ 2)) is an odd function. Hence, µk = 0 if k µ2n = = ∞ −∞ v2n exp (−v2/(2σ 2))dv −v2n−1σ 2 exp (−v2/(2σ 2))|∞ −∞ (2n − 1)σ 2v2n−2 exp (−v2/(2σ 2))dv 1 σ (2π) 1 σ (2π) ∞ 1 2 1 2 + −∞ = (2n − 1)σ 2µ2n−2 = σ 2n (2n)! 2nn! on iterating and observing that µ0 = 1. Hence, in particular, µ2 = σ 2. Finally, and thankfully, we are pleased to record that the expectation E(X ) of a continuous random variable X has the same useful basic properties that we established for the discrete case in Section 4.6. For convenience, we recall them here. (16) Theorem Let a and b be constants, and let g and h be functions.
Then: (i) If g(X ) and h(X ) have finite mean, then E(g(X ) + h(X )) = E(g(X )) + E(h(X )). (ii) If P(a ≤ X ≤ b) = 1, then a ≤ E(X ) ≤ b. (iii) If h is nonnegative, then for a > 0, P(h(X ) ≥ a) ≤ E(h(X )/a). (iv) Jensen’s inequality If g is convex then E(g(X )) ≥ g(E(X )). Proof The proof is an exercise for you. When h(x) = x 2 in (iii), we have: (17) Chebyshov’s inequality: P(|X | ≥ a) ≤ EX 2/a2. 7.5 Moment Generating Functions In dealing with integer valued discrete random variables, we found the probability generating function exceptionally useful (see Chapter 6). It would be welcome to have such 7.5 Moment Generating Functions 307 a useful workhorse available for random variables with densities. Of course, if X has a density then P(X = x) = 0, so we cannot expect the probability generating function to be of much use. Fortunately, another function will do the job. Definition by (1) (2) If X has density f, then X has moment generating function MX (t) given MX (t) = E(etX ) = ∞ −∞ etv f (v)dv. We are only interested in MX (t) for those values of t for which it is finite; this includes t = 0, of course. It is particularly pleasant when MX (t) exists in a neighbourhood of zero, but it is beyond our scope to explain all the reasons for this. (3) Example: Uniform Density Let X be uniform on [0, a]. Find E(etX ). Where does it exist? Solution E(et X ) = a 1 a 0 etv dv = " 1 at etv # a 0 = eat − 1 at. This exists for all t, including t = 0, where it takes the value 1. (4) Example: Gamma Density Recall from (7.1.24) that the gamma function (α) is defined for any
α > 0 and λ > 0 by (5) Hence, (α) = ∞ 0 x α−1λαe−λx d x. f (x) = λα (α) x α−1e−λx, x ≥ 0, is the density of a random variable x. Find E(etX ). Where does it exist? Solution E(etX ) = ∞ 0 etv λα (α) vα−1e−λvdv = λα (α) ∞ 0 vα−1e−(λ−t)vdv. The integral exists if λ > t, and then making the substitution (λ − t)v = u gives (6) MX (t) = λ λ − t α ∞ 0 uα−1 (α) e−udu = α λ λ − t by (5), for −∞ < t < λ. 308 7 Continuous Random Variables (7) Example: Normal Density Let X be a standard normal random variable. Then √ 2π MX (t) = ∞ −∞ exp ∞ − = exp −∞ − 1 2 (x − t) exp √ −∞ 2π. − 1 2 v2 dv, setting x − t = v, So MX (t) = e 1 2 t 2. Now by (7.2.4) if Y is N (µ, σ 2), MY (t) = eµt+ 1 2 σ 2t 2. You may ask, why is MX (t) called the moment generating function? The answer lies in the following formal expansion. (8) E(etX ) = E ∞ k=0 X kt k k! = ∞ k=0 E (X k)t k k! = ∞ k=0 µkt k k!. Thus, provided the interchange of expectation and summation at (8) is justified, we see that MX (t) is the (exponential) generating function of the moments µk. Note that the word “exponential” is always omitted in this context, and that the required interchange at (8) is permissible if MX (t) exists in an interval that includes the origin. You may also ask, do we always know the density f X (x), if we know
MX (t)? After all, the probability generating function uniquely determines the corresponding mass function. Unfortunately, the answer is no in general because densities not uniquely determined by their moments do exist. However, none appear here; every density in this book is uniquely determined by its moment generating function (if it has one). We state the following inversion theorem without proof. (9) Theorem If X has moment generating function M(t), where for some a > 0, M(t) < ∞ for |t| < a, then the distribution of X is determined uniquely. Furthermore, M(t) = 1 k! t kE(X k). ∞ k=0 The moment generating function is especially useful in dealing with sequences of random variables; the following theorem is the basis of this assertion. We state it without proof. (10) Theorem: Continuity Theorem Let (Fn(x); n ≥ 1) be a sequence of distribution functions with corresponding moment generating functions (Mn(t); n ≥ 1) that exist for |t| < b. Suppose that as n → ∞ Mn(t) → M(t) for |t| ≤ a < b, where M(t) is the m.g.f. of the distribution F(x). Then, as n → ∞, Fn(x) → F(x) at each point x where F(x) is continuous. The main application of this theorem arises when M(t) = e 1 2 t 2 and F(x) = (x), as we see in Chapter 8 when we come to the celebrated central limit theorem. Here is a preliminary note. 7.5 Moment Generating Functions 309 Note: The O–o Notation In considering limits of sequences of functions, we quite often produce large and unwieldy expressions of which only one or two terms remain in the limit. Rather than keep a precise record of the essentially irrelevant terms, it is convenient to have a special compact notation for them. Definition If g(n) and h(n) are two functions of n, then we write h(n) = O(g(n)) as n → ∞ if |h(n)/g(n)| < c for all large enough n and some finite constant c. For example, as n → ∞, and n2 + log n = O(n2) with c = 2 n2 +
n^{3/2} = O(n²) with c = 2.

Observe that this is an abuse of notation (= being the abused symbol) because it does not follow from these two examples that log n = n^{3/2}. Also, if h(n) = O(g(n)) and k(n) = O(g(n)), then h(n) + k(n) = O(g(n)). A similar definition holds for small values of the argument.

Definition   If g(x) and h(x) are two functions of x, then we write h(x) = O(g(x)) as x → 0 if |h(x)/g(x)| < c for all small enough x and some constant c.

Often, an even cruder representation will suffice.

Definition   If g(x) and h(x) are two functions of x, then we write h(x) = o(g(x)) as x → ∞ if lim_{x→∞} (h(x)/g(x)) = 0. Likewise, h(x) = o(g(x)) as x → 0 if lim_{x→0} (h(x)/g(x)) = 0.

For example, x² = o(x) as x → 0 and x = o(x²) as x → ∞. For another example, x + x log x + x² = o(1) as x → 0. We use this new notation in the following famous result.

(11) Example: de Moivre–Laplace Theorem   For each n ≥ 1, let X_n be a binomial random variable with parameters n and p. Let q = 1 − p, and define

Y_n = (X_n − np)/(npq)^{1/2}.

Show that as n → ∞,

P(Y_n ≤ x) → Φ(x) = ∫_{−∞}^x (2π)^{−1/2} e^{−y²/2} dy.

Solution   We use Theorem 10. First calculate the moment generating function

(12)   E(e^{tY_n}) = E exp[t(X_n − np)/(npq)^{1/2}] = {E exp[t(X_1 − p)/(npq)^{1/2}]}^n = {p exp[qt/(npq)^{1/2}] + q exp[−pt/(npq)^{1/2}]}^n.
Next we expand the two exponential terms in (12) to give

(13)   E(e^{tY_n}) = (1 + t²/(2n) + O(n^{−3/2}))^n.

Now we recall the useful result that says that, for constant a,

(14)   lim_{n→∞} (1 + a/n + o(n^{−1}))^n = e^a.

Applying this to (13) shows that

lim_{n→∞} E(e^{tY_n}) = e^{t²/2},

which is the m.g.f. of the standard normal distribution, as required. [More demanding readers should note that they can prove (14) by first taking logarithms.]

The appearance of the normal distribution in these circumstances is one of the most remarkable results in the theory of probability. The first proof, due to de Moivre, was greatly improved by Laplace. Their methods were different from those used here, relying on fairly precise direct estimates of the binomial probabilities. We outline a modern version of their proof in Example 7.20.
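As a rough numerical illustration of the de Moivre–Laplace theorem (added here, not part of the text), the binomial probability P(Y_n ≤ x) can be compared with Φ(x) directly. The pmf recurrence and the parameter values are choices made for this sketch, and no continuity correction is applied, so the agreement is only approximate.

```python
import math

def binom_cdf(k, n, p):
    """P(X_n <= k) for X_n binomial(n, p), summing the pmf via the ratio recurrence."""
    total, pmf = 0.0, (1 - p) ** n               # pmf at j = 0
    for j in range(k + 1):
        total += pmf
        pmf *= (n - j) / (j + 1) * p / (1 - p)   # pmf(j+1) from pmf(j)
    return total

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p, x = 1000, 0.3, 1.0
q = 1 - p
k = int(n * p + x * math.sqrt(n * p * q))        # largest k with (k - np)/sqrt(npq) <= x
print(binom_cdf(k, n, p), Phi(x))                # both close to 0.84
```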
7.6 Conditional Distributions

Just as in the discrete case, it is often necessary to consider the distribution of a random variable X conditional upon the occurrence of some event A. By definition of conditional probability, we have

(1)   F_{X|A}(x) = P(X ≤ x|A) = P({ω: X ≤ x} ∩ A)/P(A) = P(X ≤ x; A)/P(A), say.

(Obviously, A has to be in F, the event space.) The case that arises most commonly is when A is an event of the form

(2)   A = {ω: a < X ≤ b};

that is, we seek the distribution of X conditional on its lying in some subset of its range.

(3) Example   Let a < b < c < d. Let X be uniform on (a, d), and let A = {ω: b < X(ω) ≤ c}. Then, by (1),

P(X ≤ x|A) = P(X ≤ x; b < X ≤ c)/P(b < X ≤ c) = (x − b)/(c − b) for b < x ≤ c.

Thus, the distribution of X given A is just uniform on (b, c). More generally, it is easy to see that a uniform random variable, constrained to lie in any subset A of its range, is uniformly distributed over the subset A.

Because P(X ≤ x|A) is a distribution, it may have an expectation. For example, suppose that X has density f, and A is given by (2). Then, by (1),

P(X ≤ x|A) = (F(x) − F(a))/(F(b) − F(a)) for a < x ≤ b,

and differentiating yields the conditional density

(4)   f_{X|A}(x) = f(x)/(F(b) − F(a)) if a < x ≤ b; 0 otherwise.

Notice that ∫_a^b f_{X|A}(v) dv = 1, as it must. Then we may define the conditional expectation

(5)   E(X|A) = ∫_a^b v f(v)/(F(b) − F(a)) dv

(6)          = ∫_0^∞ (1 − F_{X|A}(v)) dv = a + ∫_a^b (F(b) − F(v))/(F(b) − F(a)) dv,

on integrating by parts, on using (4). Notice that this is in agreement with Theorem 7.4.11, as of course it must be.

(7) Example: Exponential Density and Lack-of-Memory   Let X be exponentially distributed with parameter λ. Show that

(8)   P(X > s + t|X > s) = e^{−λt} = P(X > t).

Find E(X|X > s) and E(X|X ≤ s).

Solution   Trivially,

P(X > s + t|X > s) = P(X > s + t)/P(X > s) = e^{−λ(s+t)}/e^{−λs} = e^{−λt}.

Hence,

(9)   E(X|X > s) = s + ∫_0^∞ e^{−λt} dt = s + E(X).

We remark that the remarkable identity (8) is known as the lack-of-memory property of the exponential distribution. Finally,

E(X|X ≤ s) = ∫_0^s P(s ≥ X > v)/P(s ≥ X) dv = 1/λ − s/(e^{λs} − 1).
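A minimal simulation (not from the text) of the lack-of-memory property (8) and of equation (9); NumPy and the parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, s, t = 1.5, 0.4, 0.8                     # illustrative parameter values
x = rng.exponential(scale=1 / lam, size=500_000)

# (8): P(X > s + t | X > s) should equal P(X > t) = exp(-lambda * t)
print((x > s + t).sum() / (x > s).sum(), np.exp(-lam * t))

# (9): E(X | X > s) should equal s + E(X) = s + 1/lambda
print(x[x > s].mean(), s + 1 / lam)
```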
7.7 Ageing and Survival

Many classic examples of continuous random variables arise as waiting times or survival times: for instance, the time until the cathode-ray tube in your television fails, the time until you are bitten by a mosquito after disembarking in the tropics, the time until a stressed metal component fails due to fatigue. For definiteness, we consider the lifetime T of some device or component. The device is said to fail at time T.

It is often useful to quantify the ageing process of a device; in particular, we may want to compare a device of given age with a new one. (We are all familiar with the fact that it is not necessarily always a good thing to replace a working component with a new one. This fact is embodied in the popular saying: “If it works, don’t fix it.”)

Let T have distribution F and density f. The following quantities turn out to be of paramount importance in comparing devices of different ages.

(1)   The survival function      F̄(t) = 1 − F(t) = P(T > t).
(2)   The hazard function        H(t) = − log(1 − F(t)).
(3)   The hazard rate function   r(t) = f(t)/F̄(t) = f(t)/(1 − F(t)) = dH(t)/dt.

The last equality explains why r(t) is called the hazard rate. Integrating (3) yields

(4)   exp(−∫_0^t r(s) ds) = F̄(t).

Before we explain the significance of these quantities, you are warned that terminology in the literature of ageing is quite chaotic. Note that:
(i) The survival function is also known as the survivor function, reliability function, or hazard function.
(ii) The hazard function is also known as the log-survivor function.
(iii) The hazard rate function is also known as the failure rate function, mortality function, or hazard function.
Beware!

Now let A_t denote the event that T > t. Then

F_{T|A_t}(s + t) = P(T ≤ s + t|T > t) = (F(s + t) − F(t))/(1 − F(t)).

This is the probability that the device fails during (t, t + s], given that it has not failed by time t.
given that it has not failed by time t. Now 1 s FT |At (s + t) = (1 − F(t))−1 lim s→0 F(t + s) − F(t) s = f (t) 1 − F(t) 1 − F(t) = r (t). lim s→0. (5) Thus, r (t) may be thought of as the “intensity” of the probability that a device aged t will fail. (6) Example: Exponential Life If T has an exponential density, then F(t) = e−λt, H (t) = λt, and r (t) = λ. This constant hazard rate is consonant with the lack-of-memory property mentioned in Example 7.6.7. Roughly speaking, the device cannot remember how old it is, and so the failure intensity remains constant. We see that intuitively there is a distinction between devices for which r (t) increases, essentially they are “wearing out,” and those for which r (t) decreases, they are “bedding in.” A simple and popular density in this context is the Weibull density, which can exhibit both types of behaviour. (7) Example: Weibull Life If T has density f (t) = αt α−1 exp (−t α), t > 0, α > 0, then it has distribution F(t) = 1 − exp (−t α). Hence, F(t) = exp (−t α), and so P(T > t + s|T > s) P(T > t) = exp (−(t + s)α + sα + t α), which is > 1 or < 1 according as α < 1 or α > 1. [To see this, just consider the stationary value of x α + (1 − x)α − 1 at x = 1 2.] Hence, if α < 1, the chance of lasting a further time t (conditional on T > s) increases with s. However, if α > 1, this chance decreases with s. The behaviour of r (t) is not the only measure of comparison between new and old devices. There is a large hierarchy of measures of comparison, which we display formally as follows [in the notation of (1)–(3)]. (8) Definition (
The behaviour of r(t) is not the only measure of comparison between new and old devices. There is a large hierarchy of measures of comparison, which we display formally as follows [in the notation of (1)–(3)].

(8) Definition
(i) If r(t) increases, then T is (or has) increasing failure rate, denoted by IFR.
(ii) If H(t)/t increases, then T is (or has) increasing failure rate average, denoted by IFRA.
(iii) If for all s ≥ 0, t ≥ 0, H(s + t) ≥ H(s) + H(t), then T is new better than used, denoted by NBU.
(iv) If for all t ≥ 0, E(T) ≥ E(T − t|A_t), then T is new better than used in expectation, denoted by NBUE.
(v) If for all 0 ≤ s < t < ∞, E(T − s|A_s) ≥ E(T − t|A_t), then T has (or is) decreasing mean residual life, denoted by DMRL.

The random variable T may also be decreasing failure rate (DFR), decreasing failure rate on average (DFRA), new worse than used (NWU), new worse than used in expectation (NWUE), or increasing mean residual life (IMRL). All these are defined in the obvious way, analogous to (i)–(v). It can be shown that the following relationships hold between these concepts:

IFR ⇒ IFRA ⇒ NBU ⇒ NBUE, and IFR ⇒ DMRL ⇒ NBUE.

Some of these implications are trivial, and some are established in Example 7.17 below. These ideas are linked to another concept, that of stochastic ordering.

7.8 Stochastic Ordering

As in Section 7.7, let T be a nonnegative random variable. In general, let R(s) be a random variable whose distribution is that of T − s given that T > s, namely,

(1)   F_{R(s)}(x) = P(T − s ≤ x|T > s).

We refer to R(s) as the residual life (of T at s). The above example shows that if T has the exponential density, then its residual life is also exponentially distributed with constant mean. More generally, F_{R(s)} may depend on s, and more significantly it may do so in a systematic way; the following definition is relevant here.

(2) Definition   Let X and Y be random variables. If

(3)   F_X(x) ≥
FY (x) for all x, then X is said to be stochastically larger than Y. Now we can supply a connection with the ideas of the preceding section (7.7). (4) Example If T is a random variable with residual life R(s), s > 0, show that T has increasing failure rate if and only if R(s) is stochastically larger than R(t) for all s < t. (5) (6) Solution First, we find 7.9 Random Points 315 P(R(t) > x) = P(T − t > x|T > t) = F(t + x)/F(t) t+x t = exp − r (s)ds exp r (s)ds by (7.7.4) = exp − r (s)ds. t+x 0 0 Differentiating (5) with respect to t, we have 0 P(R(t) > x) = (r (t) − r (t + x)) exp − ∂ ∂t + r (s)ds. t+x t Because exp (− T is DFR or IFR, the result follows. r ds) is positive, and r (t) − r (t + x) is positive or negative according as Finally, we have the useful: (7) Theorem If X is stochastically larger than Y, then E(X ) ≥ E(Y ). Proof We prove this when X ≥ 0 and Y ≥ 0. (The general result is left as an exercise.) From Theorem 7.4.11, ∞ ∞ E(X ) = FX (x)d x ≥ FY (x)d x by hypothesis, 0 0 = E(Y ). 7.9 Random Points Picking a point Q at random in the interval (0, 1) yielded the uniform density (of the length OQ). It is intuitively attractive to consider problems that involve picking one or more points at random in other nice geometric figures, such as discs, squares, triangles, spheres, and so on. Indeed this idea is so natural that mathematicians had already started doing this kind of thing in the eighteenth century, and one of the most celebrated articles on the subject is that of M.W. Crofton in the 1885 edition of the Encyclopaedia Britannica. Such questions also have applications in statistics. Con�
fining ourselves to two dimensions for definiteness, suppose a point Q is picked at random in a region R of area |R|. Then it is natural to let the probability P(S), that Q lies in a set S ⊆ R, be given by

(1)   P(S) = |S|/|R|,

where, now, |S| denotes the area of S. It follows from the properties of area that P(·) has the required properties of a probability function, and we can proceed to solve various simple problems using elementary geometry. The following is typical.

(2) Example   A point Q is picked at random in the unit square. What is the probability ν that it is nearer to the centre O of the square than to its perimeter?

Solution   By symmetry, we need to consider only the sector 0 ≤ y ≤ x ≤ 1/2. Then the point (x, y) is nearer to O than to the perimeter if √(x² + y²) < 1/2 − x; that is, if, in this sector, x < 1/4 − y². Hence, the area is given by an integral, and

ν = 8 ∫_0^{(√2−1)/2} (1/4 − y² − y) dy = (4/3)√2 − 5/3.
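A quick Monte Carlo check of Example 2 (an addition, not part of the text): sample points in the unit square and compare the observed proportion with (4/3)√2 − 5/3 ≈ 0.219. NumPy and the sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
pts = rng.random((n, 2))                                 # Q uniform in the unit square
d_centre = np.hypot(pts[:, 0] - 0.5, pts[:, 1] - 0.5)    # distance to the centre O
d_edge = np.minimum.reduce([pts[:, 0], 1 - pts[:, 0],    # distance to the nearest side
                            pts[:, 1], 1 - pts[:, 1]])
print((d_centre < d_edge).mean(), 4 * np.sqrt(2) / 3 - 5 / 3)   # both about 0.219
```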
An equally trivial but much more important example is the following.

(3) Example   Let f = f(x) be an integrable function with 0 ≤ f(x) ≤ 1 for 0 ≤ x ≤ 1. Let Q be picked at random in the unit square, and let A_v be the set of points such that 0 ≤ x ≤ v and 0 ≤ y ≤ f(x). Then, from (1),

(4)   P(Q ∈ A_v) = ∫_0^v f(x) dx.

This trivial result has at least two important applications. The first we have met already in Example 5.8.9.

(5) Example: Hit-or-Miss Monte Carlo Integration   Let f(x) and Q be as defined in Example 3, and declare Q a hit if Q lies below f(x), 0 ≤ x ≤ 1. Then the probability of a hit is P(A_1) = ∫_0^1 f(x) dx. Now we pick a sequence of such points Q_1, Q_2, ..., and let X_n be the number of hits. If points are picked independently, then X_n is a binomial random variable with parameters n and P(A_1), and we have shown that as n → ∞, for ε > 0,

P(|n^{−1} X_n − ∫_0^1 f(x) dx| > ε) → 0.

This therefore offers a method for evaluating the integral ∫_0^1 f(x) dx. In practice, one would be unlikely to use this method in one dimension, but you might well use the analogous method to evaluate ∫ f(x) dx, where x is a vector in (say) 11 dimensions.

(6) Example: Simulation   With Q and f(x) defined as above, consider the probability that Q lies in A_v given that it is a hit. By definition, this has probability

P(A_v|A_1) = ∫_0^v f(x) dx / ∫_0^1 f(x) dx.

By inspection, the function F(v) = P(A_v|A_1) is the distribution function of the x-coordinate of Q given that it is a hit. This procedure therefore offers a method of simulating a random variable X with density function

(7)   f_X(x) = f(x) / ∫_0^1 f(x) dx.

You can just pick a point Q and, if it is a hit, let its x-coordinate be X.
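Here is a minimal hit-or-miss sketch in the spirit of Example 5 (added for illustration; the function name and the test integrand are assumptions).

```python
import numpy as np

def hit_or_miss(f, n, rng):
    """Estimate the integral of f over [0, 1], where 0 <= f(x) <= 1, by the
    proportion of uniform points in the unit square that fall below the curve."""
    x, y = rng.random(n), rng.random(n)
    return (y <= f(x)).mean()

rng = np.random.default_rng(6)
print(hit_or_miss(lambda x: x ** 2, 1_000_000, rng))   # the exact integral is 1/3
```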
A natural next step is to consider events defined jointly by a number of points picked independently in a region R. One famous example is Sylvester’s problem: for four points picked at random in R, what is the probability that one of them lies in the triangle formed by the other three? This is too difficult for us, but we can consider an amusing simpler problem to illustrate a few of the basic ideas.

(8) Example: Two Points in a Disc   Let λ(r) be the expected value of the distance L(r) between two points Q1 and Q2, each distributed uniformly (and independently) over a disc of radius r. Show that

(9)   λ(r) = 128r/(45π).

Solution   This can be done by a brutal integration; here is a better way, discovered by M.W. Crofton in 1885. Consider a disc of radius x + h, which we may think of as a disc D of radius x, surrounded by an annulus A of width h. Then, if Q1 and Q2 are dropped at random on to the disc of radius x + h, we have (using independence and the properties of the uniform density) that

(10)   P(Q1 ∈ D ∩ Q2 ∈ D) = (πx²/(π(x + h)²))² = 1 − 4h/x + o(h).

Also,

P(Q1 ∈ D ∩ Q2 ∈ A) = (πx²/(π(x + h)²))(1 − πx²/(π(x + h)²)) = 2h/x + o(h),

and P(Q1 ∈ A ∩ Q2 ∈ A) = o(h). Hence, by conditional expectation,

(11)   λ(x + h) = E(L(x + h)|Q1 ∈ D; Q2 ∈ D)(1 − 4h/x + o(h)) + 2E(L(x + h)|Q1 ∈ D; Q2 ∈ A)(2h/x + o(h)) + o(h).

Now E(L(x + h)|Q1 ∈ D; Q2 ∈ A) is just the mean distance of a random point Q1 in a disc of radius x from a point Q2 on its circumference (plus a quantity that is o(h)). Hence, taking plane polar coordinates with Q2 as origin:

E(L(x + h)|Q1 ∈ D; Q2 ∈ A) = (1/(πx²)) ∫_{−π/2}^{π/2} ∫_0^{2x cos θ} v² dv dθ + o(h) = 32x/(9π) + o(h).

Returning to (11), note that E(L(x + h)|Q1 ∈ D; Q2 ∈ D) = λ(x); hence, rearranging (11) and letting h → 0 gives

dλ(x)/dx = lim_{h→0} (λ(x + h) − λ(x))/h = −(4/x)λ(x) + 128/(9π).

Integrating this, and observing that λ(0) = 0, we have λ(x) = 128x/(45π). Using the same idea, and with a lot more toil, we
can find the density of L. The next natural step is to pick lines (or other objects) at random and ask how they divide up the region R in random tessellations or coverings. This is well beyond our scope, but the trivial Example 7.18 illustrates some of the problems. 7.10 Review and Checklist for Chapter 7 We introduced the class of random variables having a density f X (x) and a distribution FX (x). These take one of an uncountable number of values in R and are called “absolutely continuous.” The familiar ideas of expectation, conditioning, functions, and generating functions are explored in this new context, together with some applications. SYNOPSIS OF FORMULAE: Key Rule: P(X ∈ B) = f X (x)d x. x∈B Distribution and density: x FX (x) = f X (y)dy = P(X ≤ x). −∞ For small h, P(x < X ≤ x + h) f X (x)h, and if F(x) is differentiable d F d x F(x) is nondecreasing; = f (x). lim x→−∞ F(x) = 0; ∞ lim x→+∞ F(x) = −∞ f (u)du = 1, if X is proper. Mixture: If f and g are densities, then so is h = λ f + (1 − λ)g, 0 ≤ λ ≤ 1. Functions: If continuous random variables X and Y are such that Y = g(X ) for some function g(.) that is differentiable and strictly increasing, then fY (y) = f X (g−1(y)) d dy [g−1(y)], 7.10 Review and Checklist for Chapter 7 319 where g−1(.) is the inverse function of g. In general, we can write fY (y) = d dy f X (x)d x, x:g(x)≤y and proceed by ad hoc arguments. Expectation: A random variable X has an expected value EX provided that + ∞ −∞ |x| f X (x)d x < ∞, and then ∞ When X > 0, EX = x f X (x)d x. −∞ ∞ EX = P(
X > x)d x. 0 If random variables X and Y are such that Y = g(X ) and X is continuous, then Y has an expected value if + ∞ −∞ |g(x)| f X (x)d x < ∞ and ∞ EY = Eg(X ) = g(x) f X (x)d x. −∞ Moments: In particular, if g(X ) = X r, this yields the r th moment µr of X. When X > 0, ∞ EX r = 0 r x r −1P(X > x)d x. When g(X ) = exp (t X ), this yields the m.g.f., ∞ MX (t) = et x f X (x)d x. −∞ Conditioning: Any event B in may condition a random variable X on, leading to a conditional distribution and density, and FX |B(x|B) = P(X ≤ x|B) f X |B(x|B) = d d x FX |B(x|B), when the derivative exists, with the Key Rule: P(X ∈ A|B) = f (x|B)d x. x∈A Such conditioned random variables may have an expectation if ∞ |x| f X |B(x|B)d x < ∞, −∞ 320 7 Continuous Random Variables Table 7.1. Continuous random variables and their associated characteristics X f (x) EX var X m.g.f. Uniform Exponential Normal N (µ, σ 2) Gamma Laplace Cauchy and then (b − a)−1, a ≤ x ≤ b λe−λx, x ≥ 0 − 1 2 x−µ σ (2π)−1/2σ −1 exp −∞ < x < ∞ λr x r −1e−λx (r −1)! λ exp (−λ|x|) 1 2 {π(1 + x 2)}−b + a) λ−1 1 12 (b − a)2 λ−2 µ r λ−1 0 σ 2 r λ−2 2λ−2 E(X |B) = ∞ −∞ x f X |B(x|B)d x. ebt −eat t(b
−a) λ/(λ − t) exp (µt + 1 2 σ 2t 2) r λ λ−t λ2 λ2−t 2 Table 7.1 gives some useful continuous random variables with their elementary properties. Checklist of Terms for Chapter 7 7.1 distribution function density standard normal density φ(x) mixture 7.2 functions inverse function 7.3 simulation composition 7.4 expected value expectation of functions tail integral for expectation 7.5 moment generating function continuity theorem O–o notation de Moivre–Laplace theorem 7.6 conditional distribution conditional density conditional expectation lack-of-memory property 7.7 hazard function hazard rate 7.8 stochastically larger 7.9 geometrical probability (1) (2) Worked Examples and Exercises 321.11 Example: Using a Uniform Random Variable The random variable U is uniformly distributed on (0, 1). (a) Can you use U to get a random variable with density f0(y) = 12 2 y − 1 2 for 0 < y < 1? (b) Actually, you really want a random variable with density f (x) = 3 |1 − 2x| 1 2 for 0 < x < 1, 2 x − 1 2 + 1 8 and in your pocket is a fair coin. Explain how the coin is useful. Solution If g(U ) is a continuous increasing function, and Y = g(U ), then FY (y) = P(g(U ) ≤ y) = P(U ≤ g−1(y)) = g−1(y) because U is uniform. From (1), we have the distribution of interest 2 y − 1 2 Hence, if we find a function g(.) such that FY (y) = 12 0 y dy = −1(y, then g(U ) has the density (1) as required. Setting y = g(u) and solving we find immediately that u = 4 g(u(u is the required function g(.). For the second part, we notice that and that 1 |1 − 2x| 1 2 = 1 (1 − 2x) 1 2 if 0 < x < 1 2, f1(x) = (1 − 2x)− 1 0 2 if 0 < x < 1 2 elsewhere 322 7 Continuous Random Variables is a density function. By the method of the fir
st part or by inspection, we see that g1(U ) = 1 2 (1 − U 2) is a random variable with density f1(x). (To see this, just make the simple calculation P(g1(U ) ≤ x) = P (1 − U 2) ≤ x = P(U ≥ (1 − 2x) 1 2 ) = 1 − (1 − 2x) 1 2, 1 2 and differentiate to get the density f1.) Likewise, f2(x) = is a density function, and 2 = |1 − 2x| 1 2 (2x − 1)− 1 0 if 1 < x < 1 2 elsewhere g2(U ) = 1 2 (1 + U 2) is a random variable with density f2. (You can check this, as we did for g1 and f1.) Now you take the coin and toss it three times. Let A be the event that you get either three heads or three tails, B the event that you get two heads and a tail (in any order) and C the event that you get two tails and a head (in any order). Define the random variable (3) X = 1 + U 2) (1 − U 2) + 1 2 if A occurs if B occurs if C occurs. Then the density of X is just a mixture of the densities of g(U ), g1(U ), and g2(U ), namely, f X (x) = 1 4 f0(x) + 3 8 f1(x) + 3 8 f2(x) = 3 as required. [We defined a mixture in (7.1.29).] 2 x − 1 2 + 3 8 |1 − 2x|− 1 2, (4) Exercise (5) (6) Exercise Exercise Explain how you would use U to get a random variable with density 1 + (2x − 1)2 if 0 < x < 1. f (x) = 3 4 Show that Y = γ (− log U ) Let the random variable X be defined by 1 β has a Weibull distribution. X = �
�� (2U ) 1 2 2 − (2 − 2U ) 1 2 if U < 1 2 if U ≥ 1 2. Show that X has a triangular density on [0, 2]. Worked Examples and Exercises 323 Find the densities of: (7) Exercise (a) tan(πU ). (b) tan( π 2 U ). (1) (2) (3) (4) (5) Let 7.12 Example: Normal Distribution φ(x) = (2π)− 1 2 e−x 2/2; (x) = x −∞ φ(u)du. (a) Define the sequence of functions Hn(x); n ≤ 0, by (−)n d nφ(x) d x n = Hn(x)φ(x); H0 = 1. Show that Hn(x) is a polynomial in x of degree n. What is H1(x)? (b) Define Mills’ ratio r (x) by r (x)φ(x) = 1 − (x). Show that for x) < 1 x. Solution First, make the important observation that dφ d x = d d x ((2π)− 1 2 e−x 2/2) = −xφ. (a) We use induction. First, by (2), −Hn+1(x)φ(x) = + d d x (Hn(x)φ(x)) = H n(x)φ(x) − Hn(x)xφ(x), by (4). Hence, Hn+1(x) = x Hn(x) − H n(x) and, by (2) and (4), H1(x) = x. The result follows by induction as claimed. (b) For the right-hand inequality, we consider 1 − (x) = ∞ x ∞ φ(u)du ≤ ∞ x φ(u)du by (4), u x φ(u)du for x > 0(x). 324 7 Continuous Random Variables For the left-hand inequality, we consider ∞ ∞ (6) 1 − (x) = x " = − φ(u)du = − #∞ x ∞
φ(u) u − ∞ x φ(u) u φ(u) u2 du = = φ(x) x φ(x) x ≥ φ(x) + − 1 x x φ(u) u3 du ∞ +! x φ(x) x 3 − 1 x 3 x. du by (4) on integrating by parts, by (4), 3φ(u) u4 du on integrating by parts, Remark For large x, these bounds are clearly tight. (7) Exercise Show that they are orthogonal with respect to φ(x) over R, which is to say that The polynomials Hn(x) are known as Hermite (or Chebyshov–Hermite) polynomials. ∞ −∞ Hn(x)Hm(x)φ(x)d x = 0 m = n n! m = n. (8) Exercise Show that the exponential generating function of the Hn is ∞ n=0 Hn(x) t n n! = et x− 1 2 t 2. (9) Exercise Show that for x10) Exercise Let X have the Weibull distribution F(x) = 1 − exp(−(λt)2). Show that 1 2 λ−2t −1 − 1 4 λ−4t −3 < E(X − t|X > t) < 1 2 λ−2t −1. 7.13 Example: Bertrand’s Paradox (a) A point P is chosen at random inside a circular disc of radius a. What is the probability that its distance from O, the centre of the disc, is less than d? Let X be the length of the chord of the disc of which P is the midpoint. Show that P(X > √. 3a) = 1 4 (b) Now choose another chord as follows. A point Q is fixed on the circumference of the disc and a point P is chosen at random on the circumference. Let the length of PQ be Y. Show that P(Y > √. 3a) = 1 3 Solution with area πd 2. Therefore, the required probability is (a) If P is less than d from the centre, then it lies inside the disc of radius d (1) πd 2/(πa
2) = d 2/a2. Worked Examples and Exercises 325 Figure 7.9 Bertrand’s paradox. In this case, X < √ 3a because O P > 1 2 a. √ Now X > of the disc. This occurs (see Figure 7.9) if and only if OP has length less than 1 3a if and only if the chord R Q subtends an angle greater than 2π 3 at the centre 2 a. Hence, by (1), P(X > √ 3a) = (a/2)2 a2 = 1 4. (b) As in (a), we observe that Y > 3a if and only if PQ subtends an angle greater than 2π/3 at O. This occurs if and only if P lies on the dashed interval of the circumference of the disc in Figure 7.10. Because this interval is one-third of the circumference, P(Y > √ √ 3a) = 1 3. (2) (3) (4) In part (b), suppose that Q is picked at random as well as P. What is P(Y > 3a)? A point P is picked at random on an arbitrarily chosen radius of the disc. Let Z be Exercise Exercise the length of the chord of which P is the midpoint. Show that P(Z > Exercise A point Q is fixed on the circumference. The chord is drawn, which makes an angle! with the tangent at Q, where! is uniform on (0, π). If the length of this chord is W, show that P(W > Is it just a coincidence that this answer is the same as (b) above? 3a) = 1 3 3a) = 1 2 √ √.. √ Figure 7.10 Bertrand’s paradox. In this case, Y > √ 3a. 326 7 Continuous Random Variables 7.14 Example: Stock Control A manufacturer of bits and bobs has a shop. Each week it is necessary to decide how many bits to deliver to the shop on Monday, in light of the following information. (i) Delivering y bits costs c per bit, plus a fixed delivery charge k. (ii) Any bit unsold at the weekend has to be packed, stored, insured, and discounted over the weekend, at a total cost of h per bit. (iii)
If the shop sells every bit before the weekend, then further customers that week are supplied by post at the end of the week; this costs p per bit, due to postage, packing, paperwork, and other penalties, and p > c.
(iv) The demand Z for bits each week is a random variable with density f(z) and distribution F(z), where F(0) = 0.
If the manager seeks to minimize the expected costs of her decision and she has x bits in the shop over the weekend, approximately how many bits should she order on Monday morning? Note that the customer pays the same whether the bit comes from the shop or the factory. Note also that the problem implicitly assumes that we are content with a continuous approximation to what is actually a discrete problem.

Solution If nothing is delivered, then costs are p(Z − x) if Z > x, and h(x − Z) if Z < x. Hence, expected costs are
λ(x) = p ∫ₓ^∞ (z − x) f(z) dz + h ∫₀^x (x − z) f(z) dz.
If y − x bits are delivered, to bring the stock of bits to y, then expected costs are
(2) µ(x, y) = k + c(y − x) + λ(y).
Now
∂µ/∂y = c + λ′(y) = c + hF(y) − p(1 − F(y))
and
∂²µ/∂y² = (h + p) f(y) ≥ 0.
Because ∂µ/∂y < 0 at y = 0 and ∂µ/∂y > 0 for large y, it follows that µ(x, y) has a unique minimum at the value ŷ such that
(3) F(ŷ) = (p − c)/(p + h).
Thus, if any delivery is made, the expected total costs are minimized by choosing y = ŷ, and the minimum is µ(x, ŷ) = k + c(ŷ − x) + λ(ŷ). The only alternative is to have no delivery, with expected total cost λ(x). Hence, the optimal policy is to have no delivery when x > ŷ, or when
λ(x) ≤ k + cŷ + λ(ŷ) − cx,
and to deliver ŷ − x when x < ŷ and
λ(x) > k + cŷ + λ(ŷ) − cx.
Now, if we set g(x) = λ(x) + cx, we have
g′(x) = c − p + (h + p)F(x)
and
g″(x) = (h + p) f(x) ≥ 0.
Because g′(0) < 0 and g′(ŷ) = 0, it follows that there is a unique point x̂ such that
(4) g(x̂) = λ(x̂) + cx̂ = k + cŷ + λ(ŷ).
Hence, the optimal policy takes the simple form: deliver no bits if x ≥ x̂, or deliver ŷ − x bits if x < x̂, where ŷ satisfies (3) and x̂ satisfies (4).
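As an illustration (not part of the text), the two thresholds can be computed numerically once F is specified. The sketch below assumes exponential demand with rate lam and some arbitrary cost values p, c, h, k; both the demand model and the numbers are assumptions made only for this example.

import math

p, c, h, k, lam = 5.0, 2.0, 1.0, 3.0, 0.1   # assumed illustrative values

def F(z):
    # Demand distribution, assumed exponential: F(z) = 1 - exp(-lam*z).
    return 1.0 - math.exp(-lam * z)

def expected_cost(x):
    # lambda(x) for exponential demand: expected cost if nothing is delivered.
    return p * math.exp(-lam * x) / lam + h * (x - (1.0 - math.exp(-lam * x)) / lam)

# Equation (3): F(y_hat) = (p - c)/(p + h), which inverts in closed form here.
y_hat = math.log((p + h) / (c + h)) / lam
assert abs(F(y_hat) - (p - c) / (p + h)) < 1e-9

# Equation (4): g(x_hat) = lambda(x_hat) + c*x_hat = k + c*y_hat + lambda(y_hat).
# g is decreasing on (0, y_hat), so a simple bisection finds x_hat.
target = k + c * y_hat + expected_cost(y_hat)
lo, hi = 0.0, y_hat
for _ in range(100):
    mid = (lo + hi) / 2
    if expected_cost(mid) + c * mid > target:
        lo = mid
    else:
        hi = mid
print(y_hat, (lo + hi) / 2)   # y_hat and x_hat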
(5) Exercise What is the optimal policy if the fixed delivery cost k is zero?
(6) Exercise Suppose that the postal deliveries also have a setup cost, so that posting y bits costs m + py. If demand is exponentially distributed with distribution
F(x) = 1 − e^{−λ(x−a)} for x ≥ a, and F(x) = 0 for x < a,
find the optimal delivery policy.

7.15 Example: Obtaining Your Visa A certain consular clerk will answer the telephone only on weekdays at about 10.00 a.m. On any such morning, it is an evens chance whether he is at his desk or not; if he is absent no one answers, and days are independent. The line is never engaged. If he is at his desk, the time T that he takes to answer the telephone is a random variable such that
(1) P(T ≤ t) = 0 for t ≤ 1, and P(T ≤ t) = 1 − t⁻¹ for t > 1.
(a) If you telephone this clerk one morning, and do not hang up, what is the probability that the telephone rings for at least a time s?
(b) You adopt the following procedure. Each day until you are successful you telephone the clerk and hang up at time s if he has
not answered by then. Show that to minimize the expected time you spend listening to the ringing tone, you should choose s to be the unique positive root s0 of log s = (s + 1)(s − 2). Solution not, we have (a) Let R be the ringing time. Conditioning on whether the clerk is there or (2) P(R > s) = 1 2 P(R > s| absent) + 1 2 P(R > s| present for s < 1 for s ≥ 1, by (1). (b) If your call is successful, then the expected time for which the telephone rings is (3) E(R|R < s) = = s 0 s 0 P(R > x|R < s) d x by (7.4.12) P(x < R < s)d x P(R < sx −1 − s−1) d x = s log s s − 1, s > 1. The number of unsuccessful calls has a geometric mass function with parameter ρ = 2 (1 − 1 1 s ), and expectation (4) (5) (6) (7) (81 + 1 s (. Hence, the expected time spent listening to the ringing tone is ω(s) = s(s + 1) s − 1 Differentiating with respect to s gives ω(s) = (s − 1)−2(s2 − s − 2 − log s). Thus, a stationary value in (1, ∞) occurs at a zero of s2 − s − 2 − log s. That is where (s − 2) (s + 1) = log s. + s log s s − 1. Exercise a minimum ωmin. Exercise Exercise P(T ≤ x) = F(x). Show that there is just one such zero, and by inspection of (5) this stationary value is Show that ω(s) ≤ 2s2/(s − 1) and deduce that ωmin ≤ 8. More generally, suppose that the clerk is in his office with probability p and that Worked Examples and Exercises 329 Show that When F(x) = x 1+x, show that E(R) = s p F(s) − s 0 F(x) F(s) d x. E(R) = (1 + s)((1 − p) p−1
+ s−1 log(1 + s)). 7.16 Example: Pirates Expensive patented (or trade marked) manufactures are often copied and the copies sold as genuine. You are replacing part of your car; with probability p you buy a pirate part, with probability 1 − p a genuine part. In each case, lifetimes are exponential, pirate parts with parameter µ, genuine parts with parameter λ, where λ < µ. The life of the part you install is T. Is T IFR or DFR? Does it make any difference if λ > µ? (See Section 7.7 for expansions of the acronyms.) Solution By conditional probability, P(T > t) = F(t) = pe−µt + (1 − p)e−λt. Hence, setting q = 1 − p, we have (1) r (t) = f (t)/F(t) = µp + λqe(µ−λ)t p + qe(µ−λ)t = λ + p(µ − λ) p + qe(µ−λ)t. This decreases as t increases. Hence, your part has DFR. It makes no difference if λ > µ. This is obvious anyway by symmetry, but also r (t) given by (1) decreases as t increases if λ > µ. (2) (3) Exercise What happens if λ = µ? Exercise (a) Show that the probability π that it is a pirate part is given by Suppose the part has survived for a time t after you install it. π(t) = p p + (1 − p)e(µ−λ)t. (b) Find the limit of π(t) as t → ∞, and explain why the answer depends on whether λ > µ or λ < µ. (4) Exercise Let X have density f and m.g.f. MX (θ) = E(eθ X ). Show that d 2 dθ 2 log(MX (θ)) > 0. (5) (6) Due to variations in the manufacturing process, the lifetime T is exponential with [You have shown that MX (θ) is log–convex, if you are interested.] Exercise parameter " where " has density f (λ). Use the preceding exercise to show that T is D
FR. Exercise random variable with density f (λ). Let M(t) be the continuous mixture Let T" be a family of random variables indexed by a parameter ", where " is a ∞ M(t) = P(T" ≤ t) = FTλ (t) f (λ)dλ. 0 330 7 Continuous Random Variables Show that if FTλ (t) is DFR for all λ, then M(t) is DFR. [Hint: The Cauchy–Schwarz inequality says that E(X Y ) ≤ (E(X 2)E(Y 2)) 2.] 1 (1) (2) (3) (4) (5) (6) 7.17 Example: Failure Rates‡ Let T have distribution F(t). (a) Show that T is IFRA if and only if, for all 0 ≤ α ≤ 1, (F(t))α ≤ F(αt). (b) Show also that if T is IFRA, then it is NBU. Solution This is the same as saying that, for all 0 ≤ α ≤ 1, (a) By definition, T is IFRA if H (t)/t = 1 t + t 0 r (v)dv is increasing in t. αt 0 1 αt r (v)dv ≤ 1 t t 0 r (v)dv. But, by (7.7.4), this is equivalent to −1 α log F(αt) ≤ − log F(t). Now (1) follows as required because ex is a monotone increasing function of x. (b) Because H (t)/t is increasing in t, for all 0 ≤ α ≤ 1, we have H (αt) ≤ α H (t), and H ((1 − α)t) ≤ (1 − α)H (t). Hence, Setting αt = s gives condition (iii) in Definition 7.7.8 for NBU. H (αt) + H (t − αt) ≤ H (t). Exercise Exercise Exercise Show that if T is IFR, then it is IFRA and DMRL. Show that if T is NBU or DMRL, then it is NBUE. Let T have a gamma density with parameters 2 and λ. Find H (t) and r (t). Is T I
FR? A point P is chosen at random along a rod of length l. 7.18 Example: Triangles (a) The rod is bent at P to form a right angle, thus forming the two shorter sides of a right-angled triangle. Let! be the smallest angle in this triangle. Find E(tan!) and E(cot!). (b) The rod is now cut into two pieces at P. A piece is picked at random and cut in half. What is the probability that the three pieces of the rod can form a triangle of any kind? Show that, conditional on the event that a triangle can be formed, the probability that it has no obtuse angle is 2( 2 − 1). √ ‡ See Section 7.7 for expansions of the acronyms. Worked Examples and Exercises 331 Solution that the length OP is a random variable X uniformly distributed on [0, 1]. Without loss of generality, we can suppose the rod to be the unit interval, so (a) Because! is the smallest angle tan! = . Hence, (1) E(tan!) = log 2 − 1 0.39. For variety and instruction, we choose a different method of finding E(cot!). Let Y = cot!. Then, for y ≥ 1, F(y) = P(Y ≤ y) = P = P ≤ y 1 tan! X ≤ y 1 + y − FX y 1 + y = FX = + Hence, differentiating (2) Thus, fY (y) = 2 (y + 1)2. E(cot!) = E(Y ) = ∞ 1 2y (y + 1)2 dy = ∞. (b) Suppose (without loss of generality) that the piece cut in half has length 1 − X. Then, if 2 (1 − X ), X. This is possible if, which has probability it exists, the triangle is isosceles with sides 1 and only if 1. 1 2 There is an obtuse angle (between the two sides of equal length) if and only if 2 X, which occurs if and only if X < 1 2 (1 − X ) > 1 2 (1 − X ), 1 2 which occurs if and only if X > > 1√ 2, 1 1 2 X
2 (1 − X ) √ 2 − 1. 332 Hence, 7 Continuous Random Variables P (no obtuse angle | the triangle exists) = P {X < √ 2 − 1) = 2). = P(X < P X < 1 2 (3) (4) Exercise What is the distribution of the length X ∧ (1 − X ) of the shortest side of the triangle? Exercise The longest side is X ∨ (1 − X ). Show that E {X ∧ (1 − X )} E {X ∨ (1 − X )} = 1 3, where x ∧ y = min {x, y} and x ∨ y = max {x, y}. Exercise Find E(sin!) E(cos!). (5) (6) Exercise Show that the hypotenuse of the triangle has density f (y) = 2y (2y2 − 1) 1 2, 1√ 2 ≤ y ≤ 1. (7) Exercise Let X have density f X (x) = 1 B(a, b) x a−1(1 − x)b−1; 0 < x < 1. Show that E(cot!) is finite if and only if a > 1 and b > 1. [See Problem 7.3 for the definition of B(a, b).] (1) (2) (3) 7.19 Example: Stirling’s Formula Let (x) be the gamma function defined by ∞ (x) = 0 t x−1e−t dt. (a) Show that (x) = x x− 1 2 e−x ∞ −x 1 2 (1 + ux − 1 2 )x−1e−ux 1 2 du. (b) Show that for fixed u the integrand converges to exp(− 1 2 u2), as x → ∞, and deduce that, as x → ∞, (x)ex x −x+ 1 2 → ∞ exp −∞ − 1 2 u2 du. You may assume that log(1 + x(x 3) for |x| < 1. Proof (a) Making the substitution t = x + ux 1 2 in (1) gives (2). (4) (5) (6) (7)
(8) Worked Examples and Exercises 333 (b) Let the integrand in (2) be f (x, u). Then for x − 1) log(1 + ux 1 1 log f (x, u) = −ux → − 1 2 u2 as x → ∞. u2 − ux − 1 2 + O(x −1) Now, if we were justified in saying that lim x→∞ f (x, u)du = lim x→∞ f (x, u) du, then (3) would follow from (2), (4), and (5). However, it is a basic result in calculus† that + ∞ −∞ g(u)du < ∞, then (5) is justified. All we need to do is if 0 ≤ f (x, u) ≤ g(u), where find a suitable g(u). First, for x 1 2 > 1 and u ≥ 0, eu f (x, u) = e−u(x 1/2−1)(1 + ux 1 2 )x−1 → 0 as u → ∞. Hence, f (x, u) < M1e−u, u ≤ 0, for some constant M1. Second, for u < 0, the function e−u f (x, u) has a maximum where − (1 + x 1 2 ) + (x − 1)(u + x 1 2 )−1 = 0, that is, at u = −1. Hence, f (x, u) ≤ eu max u<0 1 2 → eue 2 e−u f (x, u) 3 as x → ∞. = eu(1 − x − 1 2 )x−1 exp(1 + x 1 2 ) Hence, for some constant M2, f (x, u) < M2eu, u < 0. Therefore, f (x, u) < Me−|u| for some M, and we have our g(u). Show that limn→∞ n!enn−n− 1 Exercise Let (Sn; n ≥ 0) be a simple random walk with S0 = 0. Given that S2n = 0, find the Exercise probability Pb that Sr = b for some r such that 0 ≤ r ≤ 2
n. Show that if n → ∞ and b → ∞ in 2, then Pb → e−y2. such a way that b = yn 1 Exercise: de Moivre–Laplace Theorem Let Sn be binomial with parameters n and p. Define 2 = (2π) 1 2. Yn = Sn − np (npq) 1 2, q = 1 − p, and yk = k − np (npq) 1 2. Show that as n → ∞ P(Sn = k) = n 2πk(n − k) 1 2 k np k nq n − k n−k (1 + o(1)). Deduce that as n → ∞ P(Yn = yk) = 1 (2πnpq) 1 2 exp − 1 2 y2 k 1 + O n− 1 2 †The Dominated Convergence Theorem. 334 7 Continuous Random Variables Conclude that for fixed finite a and b, as n → ∞, P(a < Yn < b) → 1 (2π) 1 2 b a e− 1 2 y2 dy = (b) − (a). (9) (10) Let (Sn; n ≥ 0) be a simple random walk with S0 = 0. Given that S2n = 2 j, show that Exercise the probability that the last visit to the origin was at the 2r th step is fr = 2n 2r Show that if r, j, and n all increase in such a way that r n lim r→∞ fr = y √ π x 1 (1 − x) 3 2 exp, 0 ≤ x < n. √ n → y, then → x, j/ −x y2 1 − x, 0 < x < 1. Remark lished by de Moivre in 1730. The formula actually proved by Stirling in 1730 was The result of Exercise 6, which is known as Stirling’s formula, was estab- −(n+ 1 2 ) n + 1 2 n! en+ 1 2 → (2π) 1 2, as n → ∞. (11) Exercise Prove (10). 10 Let f (x) = c(α, β)(x − α)(β − x). For what values of x and c(α, β
) can f be a density function? Let X have distribution F(x). Show that P(X = x) > 0 if and only if F(x) is discontinuous at x. + va−1(1 − v)b−1dv; a > 0, b > 0. The beta disThe beta function B(a, b) is given by B(a, b) = tribution has density 1 0 f (x) = 1 B(a, b) x a−1(1 − x)b−1 for 0 < x < 1. If X has the beta distribution, show that E(X ) = B(a + 1, b)/B(a, b). What is var (X )? For what value of c is f = c(sin x)α(cos x)β ; 0 < x < π/2, a density function? What is the distribution function of the random variable having the beta density with a = b = 1 2? Let X have the density f = exp(−x − exp(−x)) for x ∈ R. What is the distribution function of X? Let X be exponentially distributed with parameter λ. What is the density of Y = ea X? For what values of λ does E(Y ) exist? Let X have the gamma density with parameters α and λ. Show that µk = α(α + 1)... (α + k − 1)λ−k, and var(X ) = α λ2. Let X have the standard normal density, and a > 0. Show that P(X > x + ax −1|X > x) → e−a as x → ∞. (a) Let X have the standard normal density. Show that |X | has distribution function F = 2(x) − 1, for x > 0. (b) Let X have distribution F(x). What is the distribution of |X |? What is the density of |X | if it Problems 335 exists? 11 12 13 14 15 16 17 √ An elastic string has modulus of elasticity λ and natural length l0. A mass m is attached to one end, the other being fixed. The period of oscillation of the mass when slightly displaced is 2π ml0/λ. Suppose that the modulus of elasticity is uniformly distributed on [a, b].
What is the density and expectation of the period? Let X have density f (x). Construct a simple random variable Sn(X ) such that given > 0, P(|Sn(X ) − X | > ) < 2−n. (Assume X is proper.) If X is exponentially distributed find the m.g.f. of X, E(et X ). A point Q is chosen at random inside an equilateral triangle of unit side. Find the density of the perpendicular distance X to the nearest side of the triangle. For what value of c is E((X − c)2) least? Suppose a machine’s lifetime T has hazard rate λ Suppose that X has distribution function t, where λ > 0. Find P(T > t). √ F(x) = 1 − exp − x 0 g(u)du for some function g(.). Show that this is possible if and only if g(u) ≥ 0 and 18 What are the cumulants of the normal density? 19 What are the cumulants of the exponential density? 20 You have two independent random variables, each uniform on (0, 1). Explain how you would use them to obtain a random variable X with density + ∞ 0 g(u)du = ∞. f (x for 0 ≤ x ≤ 1. 21 22 + ∞ 0 exp(−a2u2 − b2u−2)du for a, b > 0. Show that: = −2I (1, ab); Define I (a, b) = (a) I (a, b) = a−1 I (1, ab); Use the result of Problem 21 to find the m.g.f. of the following densities: (a) (b) Let X be a standard normal random variable. What is the density of X −2? Let X be a standard normal random variable. Find the m.g.f. and density of X 2. 2 exp(−β/x − γ x) for x > 0; β, γ > 0. What is α? f (x) = αx − 1 f (x) = (2π x 3)− 1 2 exp(−(2x)−1) for x > 0. (c) I (a, b) = 2a e−2ab. (b) ∂ I
∂b √ π 23 24 25 What is the moment generating function of the two-sided exponential density? Where is it defined? Let U be uniform on (0, 1). Show that, if [a] denotes the integer part of a, and 0 < p < 1, 26 X = 1 + " # log U log(1 − p) 27 28 29 30 has a geometric distribution. Let U be uniform on (0, 1). Show how to use U to simulate a random variable with density f (x) = 24 25. Let P and Q be two points chosen independently and uniformly in (0, a). Show that the distance between P and Q has density 2(a − x)a−2 for 0 < x < a. Continuous mixture + ∞ g(θ) = νe−νθ for θ ≥ 0, ν > 0. Show that 0 Let X n be a Poisson random variable with parameter n. Show that as n → ∞, P(X n ≤ n + (x). Let f (θ, x) be the exponential density with parameter θ for 0 ≤ x. Let f (θ, x)g(θ)dθ is a density. nx) → √ 336 7 Continuous Random Variables 31 32 33 34 35 Let U be uniform on (0, 1). Show how to simulate a random variable X with the Pareto distribution given by F(x) = 1 − x −d ; x > 1, d > 1. Let X α have gamma density with parameters α and 1, so fα(x) = x α−1e−x of Yα = (X α − α)α− 1 density. Let X have mean µ and variance σ 2. Show that P(|X − µ| ≤ aσ ) ≥ 1 − a−2, for any a > 0. Let Z be a standard normal random variable, and define. Find the mean and variance of Y, and show that (α−1)!. Let φα(x) be the density 2. Show that as α → ∞, φα(x) → φ(x), where φ(x) is the standard normal P | − 4α−2(β 2 + 2γ 2). Verify the assertions of Theorem 7.4.16. Use the indicator function
I (X > x) to show that, for a random variable X that is nonnegative, (a) EX = P(X > x) d x. ∞ ∞ 0 (b) EX r = 0 r x r −1P(X > x) d x. ∞ (c) Eeθ X = 1 + θ (d) When X ≥ 0 is integer valued, eθ x P(X > x) d x. 0 ∞ k=0 skP(X > k) = 1 − G X (s) 1 − s. 36 Let X be a standard normal random variable with density φ(x) and distribution (x). Show that for x > 0, x x 2 + 1 φ(x) ≤ 1 − (x) ≤ 1 x φ(x). [Hint: Consider f (x) = xe−x 2 /2 − (x 2 + 1)[1 − (x)].] 8 Jointly Continuous Random Variables Some instances of correlation are quite whimsical: thus cats which are entirely white and have blue eyes are generally deaf. Charles Darwin, Origin of Species 8.1 Joint Density and Distribution It is often necessary to consider the joint behaviour of several random variables, which may each take an uncountable number of possible values. Just as for discrete random vectors, we need to define a variety of useful functions and develop the appropriate machinery to set them to work. For simplicity in definitions and theorems, we start by considering a pair of random variables (X, Y ) taking values in R2. This theoretical outline can be easily extended to larger collections of random variables (X 1, X 2,..., X n) taking values in Rn, with a correspondingly greater expenditure of notation and space. As usual, we should start with a sample space, an event space F, and a probability function P, such that for all x and y Then Ax y = {ω: X ≤ x, Y ≤ y} ∈ F. F(x, y) = P(Ax y) = P(X ≤ x, Y ≤ y) is the joint distribution function of X and Y. In fact, we suppress this underlying structure, and begin with random variables X and Y having joint distribution F(x, y) given by (1). A special class of such jointly distributed random variables is of great importance
. Let F(x, y) be a joint distribution. Suppose that ∂ 2 F Definition ∂ x∂ y exists and is nonnegative, except possibly on a finite collection of lines in R2. Suppose further that the function f (x, y) defined by (1) (2) f (x, y∂ y 0 where this exists elsewhere, 337 338 8 Jointly Continuous Random Variables satisfies (3) F(x, y) = x y −∞ −∞ f (u, v) du dv. Then X and Y, being random variables having the (joint) distribution F, are said to be (jointly) continuous with (joint) density function f (x, y). The words “joint” and “jointly” are often omitted to save time and trees. Sometimes we write f X,Y (x, y) and FX,Y (x, y) to stress the role of X and Y, or to avoid ambiguity. (4) Example: Uniform Distribution Suppose you pick a point Q at random in the rectangle R = (x, y : 0 < x < a, 0 < y < b). Then from the properties of the uniform distribution [see (7.8.1)], we have F(x, y) = 1 x y ab y b x a 0 if x ≥ a, y ≥ b if 0 ≤ x ≤ a, 0 ≤ y ≤ b if x ≥ a, 0 ≤ y ≤ b if 0 ≤ x ≤ a, y ≥ b elsewhere. Differentiating wherever possible gives Hence, the function 1 ab 0 ∂ 2 F ∂ x∂ y = f (x, y) = 1 ab 0 if 0 < x < a, 0 < y < b if x < 0 or x > a or y < 0 or y > b. if 0
< x < a, 0 < y < b otherwise satisfies (3), and is the density of X and Y. It is uniformly distributed over the rectangle R. Furthermore, if A is a subset of R with area |A|, then using (7.8.1) (and a theorem about double integrals), we have P((X, Y ) ∈ A) = |A| ab = f (x, y) d xd y. (x,y)∈A In fact, a version of the useful relationship (8) holds true for all densities f (x, y). This is important enough to state formally as a theorem, which we do not prove. (5) (6) (7) (8) 8.1 Joint Density and Distribution 339 (9) Theorem given by the Key Rule: If X and Y have density f (x, y) and P((X, Y ) ∈ A) exists, then it is P((X, Y ) ∈ A) = f (x, y) d xd y. (x,y)∈A Note that the condition that the probability exists is equivalent to saying that {ω: (X (ω), Y (ω)) ∈ A} ∈ F. This is another demonstration of the fact that, although we can just about suppress (, F, P) at this elementary level, further rigorous progress is not possible without bringing the underlying probability space into play. The attractive result (9) may then be proved. Here is a simple example of Theorem 9 in use. (10) Example Let X and Y have density f (x, y) = 8x y 0 if 0 < y < x < 1 elsewhere. What are P(2X > 1, 2Y < 1) and P(X + Y > 1)? Find F(x, y). Solution S is the square with vertices ( 1 2 Notice that the constraints 2X > 1, 2Y < 1 require that (X, Y ) ∈ S, where 2 ), (1, 1, 1 2 ). Hence, P(2X > 1, 2Y < 1) = 8 2 f (x, y) d yd x = 8, 0), (1, 0), ( dy = 3 8. Likewise, X + Y > 1 if (X, Y ) ∈ T, where T is the triangle with vertices
( 1 2 (1, 1). Hence,, 1 2 ), (1, 0), P(X + Y > 1) = 8 1 x 1 2 1−x x y dy d x = 5 6. Finally, F(x, y) = y x 0 v 8uv du dv = 2x 2 y2 − y4. The geometric problems of Section 7.9 can now be reformulated and generalized in this new framework. Obviously, “picking a point Q at random in some region R” is what we would now describe as picking (X, Y ) such that X and Y are jointly uniform in R. More generally, we can allow (X, Y ) to have joint density f (x, y) in R. Example: More Triangles f (x, y) = (a) What is c? (b) What is F(x, y)? The random variables X and Y have joint density if x < 1, y < 1, x + y > 1, a > −1, otherwise. cx a 0 340 8 Jointly Continuous Random Variables (c) Show that it is possible to construct a triangle with sides X, Y, 2 − X − Y, with prob- ability one. (d) Show that the angle opposite to the side of length Y is obtuse with probability p0 = c 1 x a+1 − x a+2 2 − x 0 d x. (e) When a = 0, show that p0 = 3 − 4 log 2. Solution (a) Because + + f (x, y) d x d y = 1, this entails c−1 = 1 x a 0 1 1−x d y d x = (a + 2)−1. (b) Using (3) gives F(x, y) = cua dv du = a + 2 a + 1 yx a+1 + x a++1 x y 1−u 1−y + 1 a + 1 (1 − y)a+2. (c) Three such lengths X − Y > Y, and Y + 2 − X − Y > X. But define the region in which f (x, y) is nonzero and form a triangle + + (d) if θ is the angle opposite Y, then if X + Y > 2 − X − Y, X + 2 − those that these constraints are just f d x
dy = 1. cos θ = X 2 + (2 − X − Y )2 − Y 2 2X (2 − X − Y ) < 0, if θ is an obtuse angle. Hence, in this case, Y > X 2 − 2X + 2 2 − X = g(X ), say. Now g(x) ≥ 1 − x (with equality only at x = 0). Hence, p0 is given by P(θ is obtuse) = P(Y > g(X )) = 1 1 0 g(x) f (x, y) d yd x = c 1 0 (1 − g(x)). (e) When a = 0, p0 = 2x − log 2. Next we record that, as a result of (1) and Theorem 9, f (x, y) and F(x, y) have the following elementary properties, analogous to those of f and F in the discrete case. 8.1 Joint Density and Distribution 341 First, F(x, y) is obviously nondecreasing in x and y. More strongly, we have (11) 0 ≤ P(a < X ≤ b, c ≤ Y ≤ d) = P(a < X ≤ b, Y ≤ d) − P(a < X ≤ b, Y ≤ c) = F(b, d) − F(a, d) − F(b, c) + F(a, c). Second, if X and Y are finite with probability 1, then ∞ ∞ 1 = −∞ −∞ f (u, v) du dv = lim x,y→∞ F(x, y). Third, knowledge of F(x, y) and f (x, y) will also provide us with the separate distributions and densities of X and Y. Thus, the marginals are: FX (x) = P(X ≤ x) = lim y→∞ P(X ≤ x, Y ≤ y) = ∞ x −∞ −∞ f (u, v) dudv, and Likewise, and f X (x) = d d x FX (x) = ∞ −∞ f (x, v) dv. fY (y) = ∞ −∞ f (u, y) du FY (y) = lim x→∞
F(x, y). (12) (13) (14) (15) (16) Here are some examples to illustrate these properties. Note that in future we will specify f (x, y) only where it is nonzero. (17) Example Verify that the function f (x, y) = 8x y for 0 < y < x < 1 is a density. For what value of c is f (x, y) = cx y for 0 < x < y < 1, a density? Find the density of X in the second case. Solution Because f > 0 and 1 x 0 0 8x y dyd x = 1 0 4x 3 d x = 1, f is indeed a density. By symmetry, c = 8 in the second case also, and we have f X (x) = 1 0 8x y dy = 4x(1 − x 2). (18) Example The function H (x, y) = 1 − e−(x+y) for x > 0, y > 0, is nondecreasing in x and y, and 0 ≤ H ≤ 1. Is it a distribution? 342 Solution that 8 Jointly Continuous Random Variables No, because ∂ 2 H ∂ x∂ y exists and is negative in x > 0, y > 0. Alternatively, note H (1, 1) − H (1, 0) − H (0, 1) + H (0, 0) = 2e−1 − 1 − e−2 < 0, which cannot (as it should) be the value of P(0 < X ≤ 1, 0 < Y ≤ 1). (19) Example: Bivariate Normal Density f (x, y) = 1 2πσ τ (1 − ρ2) 1 2 exp − 1 2(1 − ρ2) Verify that when σ, τ > 0, " x 2 σ 2 − 2ρx y σ τ # + y2 τ 2 is a density for |ρ| < 1, and find the marginal densities f X (x) and fY (y). Solution From (14), if f (x, y) is a density, we have ∞ f X (x) = −∞ f (x, y) dy = 1 2πσ τ (1 − ρ2) 1 2 ∞ −∞ ) exp − 1 2(1 − �
�2) 2 y τ − ρx σ + x 2 σ 2 − ρ2x 2 σ 2 * dy Now setting y τ − = u, and recalling that ρx σ ∞ −∞ yields exp − u2 2(1 − ρ2) τ du = (2π (1 − ρ2)) 1 2 τ f X (x) = 1 (2π) 1 2 σ exp − x 2 2σ 2. This is the N (0, σ 2) density, and so f satisfies (12) and is nonnegative. It is therefore a density. Interchanging the roles of x and y in the above integrals shows that fY (y) is the N (0, τ 2) density. See Example 8.20 for another approach. 8.2 Change of Variables We have interpreted the random vector (X, Y ) as a random point Q picked in R2 according to some density f (x, y), where (x, y) are the Cartesian coordinates of Q. Of course, the choice of coordinate system is arbitrary; we may for some very good reasons choose to represent Q in another system of coordinates (u, v), where (x, y) and (u, v) are related by u = u(x, y) and v = v(x, y). What now is the joint density of U = u(X, Y ) and V = v(X, Y )? Equally, given a pair of random variables X and Y, our real interest may well lie in some function or functions of X and Y. What is their (joint) distribution? 8.2 Change of Variables 343 As we have remarked above, at a symbolic or formal level, the answer is straightforward. For U and V above, and A = {x, y: u(x, y) ≤ w, v(x, y) ≤ z} then, by Theorem 8.1.9, FU,V (w, z) = f X,Y (x, y) d x d y. A The problem is to turn this into a more tractable form. Fortunately, there are well-known results about changing variables within a multiple integral that provide the answer. We state without proof a theorem for a transformation T satisfying the following conditions. Let C and D be subsets of R2. Suppose that
T given by maps C one–one onto D, with inverse T −1 given by T (x, y) = (u(x, y), v(x, y)) T −1(u, v) = (x(u, v), y(u, v)), which maps D one–one onto C. We define the so-called Jacobian J as J (u, v) = ∂ x ∂u ∂ y ∂v − ∂ x ∂v ∂ y ∂u, where the derivatives are required to exist and be continuous in D. Then we have the following result. (1) Theorem u(X, Y ) and V = v(X, Y ) have joint density Let X and Y have density f (x, y), which is zero outside C. Then U = fU.V (u, v) = f X,Y (x(u, v), y(u, v))|J (u, v)| for (u, v) ∈ D. Here are some examples of this theorem in use. (2) Example Suppose Q = (X, Y ) is uniformly distributed over the circular disc of radius 1. Then X and Y have joint density (3) (4) f (x, y) = 1 π for x 2 + y2 ≤ 1. it seems more natural to use polar rather than Cartesian coordinates However, 2 and θ = tan−1(y/x), with inin this case. These are given by r = (x 2 + y2) verse x = r cos θ and y = r sin θ. They map C = {x, y: x 2 + y2 ≤ 1} one–one onto D = {r, θ: 0 ≤ r ≤ 1, 0 < θ ≤ 2π}. 1 In this case, ∂ y ∂r Hence, the random variables R = r (X, Y ) and! = θ (X, Y ) have joint density given by = r cos2 θ + r sin2 θ = r. J (r, θ) = ∂ x ∂r ∂ y ∂θ ∂ x ∂θ − f R,!(r, θ) = r π for 0 ≤ r ≤ 1, 0 < θ ≤ 2π. Notice that f (r,
θ) is not uniform, as was f (x, y). 344 8 Jointly Continuous Random Variables (5) Example satisfying Let Q = (X, Y ) be uniformly distributed over the ellipse C with boundary of area |C|. What is P(X > Y, X > −Y )? x 2 a2 + y2 b2 = 1, Here the transformation x = ar cos θ and y = br sin θ maps the ellipse one– Solution one onto the circular disc with radius 1. Furthermore, J = abr, Now X and Y have density f (x, y) = 1 |C|, for (x, y) ∈ C, so R and! have joint density f (r, θ) = abr |C|, for 0 ≤ r < 1, 0 < θ ≤ 2π. Hence, |C| = πab, and P(X > Y, X > − Y ) = P −1 < Y X < 1 = P −1 < b a tan! < 1 π tan−1 a = 1 b, because! is uniform on (0, 2π). As usual, independence is an extremely important property; its definition is by now familiar. 8.3 Independence (1) Definition Jointly distributed random variables are independent if, for all x and y, P(X ≤ x, Y ≤ y) = P(X ≤ x)P(Y ≤ y). In terms of distributions, this is equivalent to the statement that (2) (3) (4) (5) F(x, y) = FX (x)FY (y). For random variables with a density, it follows immediately by differentiating that f (x, y) = f X (x) fY (y) if X and Y are independent. Using the basic property of densities (Theorem 8.1.9) now further shows that, if C = (x, y: x ∈ A, y ∈ B) and X and Y are independent, then f (x, y) d xd y = f X (x) d x fY (y) dy. A (Assuming of course that the integrals exist.) C B Finally, if the random variables U and V satisfy U = g(X ), V = h(Y ), and X and Y are independent, then
U and V are independent. To see this, just let A = (x: g(x) ≤ u) and B = (g: h(y) ≤ v), and the independence follows from (4) and (2). An important and useful converse is the following. 8.3 Independence 345 (6) Theorem If X and Y have density f (x, y), and for all x and y it is true that then X and Y are independent. f (x, y) = f X (x) fY (y), The proof follows immediately from a standard theorem on multiple integrals (just consider + + y −∞ f (u, v) du dv) and we omit it. x −∞ (7) Example: Uniform Distribution Let X and Y have the uniform density over the unit circular disc C, namely, f (x, y) = π −1 for (x, y) ∈ C. (a) Are X and Y independent? (b) Find f X (x) and fY (y). (c) If X = R cos!, and Y = R sin!, are R and! independent? Solution (a) The set {x, y: x ≤ −1/ 2, y ≤ −1/ √ √ 2} lies outside C, so F − 1√ 2, − 1√ 2 √ = 0. However, the intersection of the set {x: x ≤ −1/ 2} with C has nonzero area, so FX − 1√ 2 FY − 1√ 2 > 0. Therefore, X and Y are not independent. (b) By (8.1.14), f X (x) = 1 −1 f (x, y) dy = 1 π (1−x 2) −(1−x 2) 1 2 1 2 dy = 2 π (1 − x 2) 1 2. Likewise, fY (y) = 2 π (1 − y2) (c) By Example 8.2.4, R and! have joint density f R,!(r, θ) = r π, 1 2. for 0 ≤ r < 1, 0 < θ ≤ 2π. Hence, f!(θ) = 1 0 f (r, θ) dr = 1 2π ; 0 < θ ≤ 2π, 346 and 8
Jointly Continuous Random Variables 2π f R(r ) = f (r, θ) dθ = 2r ; 0 ≤ r ≤ 1. 0 Hence, f (r, θ) = f!(θ) f R(r ), and so R and! are independent. Example: Bertrand’s Paradox Again Suppose we choose a random chord of a circle C radius a, as follows. A point P is picked at random (uniformly) inside C. Then a line through P is picked independently of P at random [i.e., its direction! is uniform on (0, 2π)]. Let X be the length of the chord formed by the intersection of this line with the circle. Show that P(X > a √ 3) = 1 3 + √ 3 2π. Solution has distribution given by P(R/a ≤ r ) = r 2; 0 ≤ r ≤ 1. Now X > a 2R sin! < a, as you can see by inspecting Figure 8.1. Hence, Let R be the distance from the centre of the circle to P; by the above, R 3, if and only if √ P(X > a √ 3) = 2 π = 2 π π/2 0 π/6 0 P R < a 2 sin θ π/2 1 4 π/6 dθ + 2 π dθ cosec2θ dθ = 1 3 + √ 3 2π. Compare this with the results of Example 7.13. (8) Example: Normal Densities 2 x 2) for all x. f (x) = k exp (− 1 Let X and Y be independent with common density (a) Show that k = (2π)− 1 2. (b) Show that X 2 + Y 2 and tan−1(Y/ X ) are independent random variables. (c) If a > 0 < b < c and 0 < α < 1 2 π, find the probability that b < (X 2 + Y 2) 1 4 π < tan−1(Y/ X ) < 1 2 π, given that (X 2 + Y 2) 1 2 < a, Y > 0, and tan−1( and π. Figure 8.1 Bertrand’s paradox. 8.3 Independence 347 Solution Because X and Y
are independent, they have joint density f (x, y) = k2 exp − 1 2 (x 2 + y2). Make the change of variables to polar coordinates, so that by Theorem 8.2.1 the random variables R = (X 2 + Y 2) 2 and! = tan−1(Y/ X ) have joint density 1 f (r, θ) = k2r exp − 1 2 r 2 for 0 ≤ r < ∞, 0 < θ ≤ 2π. Hence, R has density and! has density f R(r ) = r exp − 1 2 r 2 for 0 ≤ r < ∞, f!(θ) = k2 for 0 < θ ≤ 2π. It follows immediately that (a) k2 = (2π)−1. (b) f (r, θ) = f R(r ) f!(θ), so that! and R are independent by Theorem 6. Hence,! and R2 are independent. (c) Finally, note that (9) and 12 P(b < R < c, R < a) = FR(c) − FR(b) FR(a) − FR(b) 0 if c < a if b < a ≤ c otherwise (10) = FR((a ∧ c) ∨ b) − FR(b), where x ∧ y = min {x, y} and x ∨ y = max {x, y}. Now, because R and! are independent, P b < R < c|R < a(b < R < c, R < a) P(R < a) P π 4FR(a) (FR((a ∧ c) ∨ b) − FR(b)), by (9) and (10). 348 8 Jointly Continuous Random Variables 8.4 Sums, Products, and Quotients We now return to the question of the distribution of functions of random vectors, and take a brief look at some particularly important special cases. Of these, the most important is the sum of two random variables. Theorem Let X and Y have joint density f (x, y). Show that if Z = X + Y, then (1) (2) f Z (z) = ∞ −∞ f (u, z − u) du, and that if X and
Y are independent, then ∞ f Z (z) = −∞ f X (u) fY (z − u) du. Proof and Y are independent. Turning to the proof of (1), we give two methods of solution. First notice that by (8.3.3) the result (2) follows immediately from (1) when X I Let A be the region in which u + v ≤ z. Then P(Z ≤ z) = = (u,v)∈A ∞ z −∞ −∞ f (u, v) du dv = ∞ z−u −∞ −∞ f (u, v) dvdu f (u, w − u) dwdu on setting v = w − u. Now differentiating with respect to z gives f Z (z) = + ∞ −∞ f (u, z − u) du. II This time we use the change of variable technique of Section 8.2. Consider the transformation z = x + y and u = x, with inverse x = u and y = z − u. Here J = 1. This satisfies the conditions of Theorem 8.2.1, and so U = u(X, Y ) and Z = z(X, Y ) have joint density f (u, z − u). We require the marginal density of Z, which is of course just (1). (3) Example Let X and Y have the bivariate normal distribution of Example 8.1.19, f (x, y) = 1 2πσ τ (1 − ρ2)1/2 exp − 1 2(1 − ρ2) x 2 σ 2 − 2ρx y σ τ + y2 τ 2. Find the density of a X + bY for constants a and b. Remark: Note from 8.3.6 that X and Y are independent if and only if ρ = 0. Solution The joint density of U = a X and V = bY is g(u, v) = 1 ab f., u a v b 8.4 Sums, Products, and Quotients 349 Hence, by the above theorem, the density of Z = U + V = a X + bY is ∞ (4) f Z (z) = −∞ 1 ab f u a, z − u b du. Rearranging
the exponent in the integrand we have, after a little manipulation, −1 2(1 − ρ2) u2 a2σ 2 = −1 2(1 − ρ2) − 2ρu(z − u) abσ z − u)2 b2τ 2 − (1 − ρ2) a2b2σ 2τ 2 + z2 α, where α = 1 a2σ 2 + 2ρ abσ τ + 1 b2τ 2, and β = ρ abσ τ + 1 b2τ 2. Setting u = v + β α z in the integrand, we evaluate ∞ exp − −∞ αv2 2(1 − ρ2) dv = 2π (1 − ρ2) α 1 2. Hence, after a little more manipulation, we find that f Z (z) = 1 (2πξ 2) exp 1 2 − z2 2ξ 2, where ξ 2 = a2σ 2 + 2ρabσ τ + b2τ 2. That is to say, Z is N (0, ξ 2). One important special case arises when ρ = 0, and X and Y are therefore independent. The above result then shows we have proved the following. (5) Theorem Let X and Y be independent normal random variables having the densities N (0, σ 2) and N (0, τ 2). Then the sum Z = a X + bY has the density N (0, a2σ 2 + b2τ 2). (See Example 8.20 for another approach.) Next we turn to products and quotients. (6) Theorem (7) Let X and Y have joint density f (x, y). Then the density of Z = X Y is u, z u f (z) = 1 |u| ∞ du, −∞ f and the density of W = X Y is f (w) = ∞ −∞ |u| f (uw, u) du. 350 8 Jointly Continuous Random Variables Proof We use Theorem 8.2.1 again. Consider the transformation u = x and z = x y, with inverse x = u and y = z/u. Here, 1 −z u2 This satisfies the conditions of Theorem 8.2.1, and so U = X and Z =
X Y have joint density J (u, z) = = u−18) f (u, z) = 1 |u| The result (7) follows immediately as it is the marginal density of Z obtained from f (u, z). Alternatively, it is possible to derive the result directly by the usual plod, as follows: u, z u f. P(X Y ≤ z) = P 0 X > 0, Y ≥ z X ∞ z/u = = = −∞ z/u 0 z −∞ z −∞ ∞ −∞ −∞ f (u, v) dv du + f f u, t u u, t u dt (−u) du |u| dt. 0 −∞ ∞ f (u, v) dv du z du + f 0 −∞ u, t u dt u du The required density is obtained by comparison of this expression with (7.1.15). Now we turn to the quotient W = X/Y. First, let V = 1/Y. Then, by definition, X ≤ x, Y ≥ 1 v = ∞ x −∞ 1/v f (s, t) dsdt. FX,V (x, v) = P(X ≤ x, V ≤ v) = P Hence, on differentiating, the joint density of X and Y −1 is given by f X,V (x, v) = 1 v2 f x, 1 v. Now W = X V, so by the first part, ∞ 1 |u| u2 w2 f u, u w du = ∞ −∞ |v| f (vw, v) dv fW (w) = −∞ on setting u = vw in the integrand. Alternatively, of course, you can obtain this by using Theorem 8.2.1 directly via the transformation w = x/y and u = y, or you can proceed via the routine plod. As usual, here are some illustrative examples. (9) Example xe− x2 Let X and Y be independent with respective density functions f X (x) = 2 for |y| < 1. 2 for x > 0 and fY (y) = π −1(1 − y2)− 1 Show that X Y has a normal distribution
. 8.5 Expectation 351 Solution orem 6 takes the special form When X and Y are independent, we have f (x, y) = f X (x) fY (y), and The- f (z) = ∞ 1 |u| −∞ ∞ f X (u) fY z u du = 1 |u| u>z ue− u2 2 π −1 − 1 2 1 − z2 u2 du = 1 π z 2 e− u2 (u2 − z2) u du. 1 2 Now we make the substitution u2 = z2 + v2 to find that f (z) = 1 π e− z2 2 ∞ 0 which is the N (0, 1) density. e− v2 2 dv = e− z2 (2π ) 2, 1 2 (10) Example Let X and Y have density f (x, y) = e−x−y for x > 0, y > 0. Show that U = X/(X + Y ) has the uniform density on (0, 1). To use Theorem 6, we need to know the joint density of X and V = X + Y. Solution A trivial application of Theorem 8.2.1 shows that X and V have density f (x, v) = e−v for 0 < x < v < ∞. Hence, by Theorem 6, ∞ f (u) = 0 = 1 ve−vdv,, for 0 < uv < v for 0 < u < 1. Alternatively, we may use Theorem 8.2.1 directly by considering the transformation with, x = uv, y = v(1 − u) and |J | = v. Hence, U = X/(X + Y ) and V = X + Y have density f (u, v) = ve−v, for v > 0 and 0 < u < 1. The marginal density of U is 1, as required. Suppose that the random variable Z = g(X, Y ) has density f (z). Then, by definition, ∞ 8.5 Expectation E(Z ) = z f (z) dz −∞ provided that E(|Z |) < ∞. However, suppose we know only the joint density f (x, y) of X and Y. As we have
discovered above, finding the density of g(X, Y ) may not be a trivial matter. Fortunately, this task is rendered unnecessary by the following result, which we state without proof. 352 8 Jointly Continuous Random Variables (1) Theorem then (2) If X and Y have joint density f (x, y) and ∞ ∞ |g(u, v)| f (u, v) dudv < ∞, −∞ −∞ E(g(X, Y )) = ∞ ∞ −∞ −∞ g(u, v) f (u, v) dudv. This useful result has the same pleasing consequences as did the corresponding result (Theorem 5.3.1) for discrete random variables. (3) Corollary Let X and Y have finite expectations. Then (i) E(a X + bY ) = aE(X ) + bE(Y ), for any constants a and b. (ii) If P(X ≤ Y ) = 1, then E(X ) ≤ E(Y ). Suppose further that E(X 2) and E(Y 2) are finite. Then (iii) E(X ) ≤ E(|X |) ≤ (E(X 2)) (iv) E(X Y ) ≤ (E(X 2)E(Y 2)) 2. 1 1 2, Recall that this last result is the Cauchy–Schwarz inequality. Finally, suppose that E(g(X )) and E(h(Y )) are finite, and that X and Y are independent. Then (v) E(g(X )h(Y )) = E(g(X ))E(h(Y )). We omit the proofs of these results. Generally speaking, the proofs follow the same line of argument as in the discrete case, with the difference that those proofs used results about rearrangement of sums, whereas these proofs use standard results about multiple integrals. Definition discrete random variables, we define the covariance as Some important expectations deserve special mention. Just as we did for cov (X, Y ) = E((X − E(X ))(Y − E(Y ))), and the correlation as ρ(X, Y ) = cov (X, Y ) (var (X )
var (Y )) 1 2. Remark When X and Y are independent, then it follows from Corollary 3(v) that cov (X, Y ) = ρ(X, Y ) = 0, but not conversely. There is an important exception to this, in that bivariate normal random variables are independent if and only if ρ(X, Y ) = 0. See Examples 8.11 and 8.20 for details (4) Example Let! be uniformly distributed over (0, α). Find cov (sin!, cos!), and show that for α = kπ, (k = 0), sin! and cos! are uncorrelated and not independent. Solution Routine calculations proceed thus: 8.5 Expectation 353 α E(sin! cos!) = 1 α 4α (1 − cos 2α), and E(sin!) = (1/α)(1 − cos α), and E(cos!) = (1/α) sin α. Hence, sin 2θdθ = 1 1 2 0 cov (sin!, cos!) = 1 4α (1 − cos 2α) − 1 This covariance is zero whenever α = kπ, (k = 0), and so for these values of α, ρ = 0. However, sin! and cos! are not independent. This is obvious because sin! = cos( π − 2!), but we can verify it formally by noting that sin α(1 − cos α). α2 P sin! > 3 4, cos! > 3 4 = 0 = P sin! > 3 4 P. cos! > 3 4 Generating functions have been so useful above that it is natural to introduce them again now. Because we are dealing with jointly continuous random variables, the obvious candidate for our attention is a joint moment generating function. Definition tion of X and Y is Let X and Y be jointly distributed. The joint moment generating func- MX,Y (s, t) = E(es X +tY ). If this exists in a neighbourhood of the origin, then it has the same attractive properties as the ordinary m.g.f. That is, it determines the joint distribution of X and Y uniquely, and also it does yield the moments, in that ∂ m+n ∂sm∂t n M(s, t) $ $ $ $ = E(X mY n). s=
t=0 Furthermore, just as joint p.g.f.s factorize for independent discrete random variables, it is the case that joint m.g.f.s factorize for independent continuous random variables. That is to say, if and only if X and Y are independent. MX,Y (s, t) = MX (s)MY (t) We offer no proofs for the above statements, as a proper account would require a wealth of analytical details. Nevertheless, we will use them freely, as required. (5) (6) (7) (8) Example Let X and Y have density (x, y) = e−y, 0 < x < y < ∞. Then MX,Y (s, t) = ∞ ∞ 0 x esx+t y−yd yd x = ((1 − t)(1 − s − t))−1. 354 8 Jointly Continuous Random Variables Hence, differentiating, we obtain ∂ M ∂s Thus, cov (X, Y ) = 1. $ $ $ $ s=t=0 = 1, $ $ $ $ ∂ M ∂t = 2, s=t=0 ∂ 2 M ∂s∂t $ $ $ $ s=t=0 = 3. (9) Example variables. Let M = 1 independent. Let X and Y be independent and identically distributed normal random 2 (X + Y ) and V = (X − M)2 + (Y − M)2. Show that M and V are Solution Then consider the joint moment generating function of M, X − M, and Y − M: We will use an obvious extension of (7). Let E(X ) = µ and var (X ) = σ 2. E(exp (s M + t(X − M) + u(Y − M))) = E = exp µ 2 (s + t + u) + σ 2 8 (s + t + u)2 exp exp 1 2 µ (s + t + u)X + 1 2 σ 2 (s − t − u) + 8 2 (s − t − u)Y, (s − t − u)2 because X and Y are independent, (10) = exp µs + s2 exp σ 2 4 σ 2 4 (t + u)2. Hence, M is independent of the random vector (X − M, Y
− M), and so M is independent of V. Remark This remarkable property of the normal distribution extends to any independent collection (Xi ; 1 ≤ i ≤ n) of N (µ, σ 2) random variables, and is known as the independence of sample mean and sample variance property. (11) Example Let X and Y be independent and identically distributed with mean zero, variance 1, and moment generating function M(t), which is thrice differentiable at 0. Show that if X + Y and X − Y are independent, then X and Y are normally distributed. Solution (12) By the independence of X and Y, M(s + t)M(s − t) = E(e(s+t)X )E(e(s−t)Y ) = E(es(X +Y )+t(X −Y )) = E(es(X +Y ))E(et(X −Y )), by the independence of X + Y and X − Y = (M(s))2 M(t)M(−t), using the independence of X and Y again. Next we note that, by the conditions of the problem, M (0) = E(X ) = 0 and M (0) = E(X 2) = 1. Now differentiating (12) twice with respect to t, and then setting t = 0 gives M(s)M (s) − (M (s))2 = (M(s))2(M(0)M (0) − (M (0))2) = (M(s))2(E(X 2) − (E(X ))2) = (M(s))2. 8.6 Conditional Density and Expectation 355 Integrating this differential equation once gives M (s) M(s) = s and integrating again yields M(s) = exp( 1 function of the N (0, 1) density, it follows that X and Y have this density. 2 s2). Because this is the moment generating We are of course already aware from Chapter 6 that generating functions are of considerable value in handling sums of independent random variables. (13) Example n Y = i=1 X 2 Let (X 1,..., X n) be independent having the N (0, 1) density. Show that i has a χ 2(n) density. Solution Hence, (14) With a
view to using moment generating functions, we first find ∞ E(etX 2 1 ) = −∞ 1 (2π) 1 2 exp tx 1 − 2t) 1 2. 1))n E(etY ) = (E(etX 2 1 (1 − 2t) = n 2 by independence and by (7.1.28) and (7.5.6), this is the m.g.f. of the χ 2(n) density. Hence, by Theorem 7.5.9, Y has a χ 2(n) density. Many results about sums of random variables can now be established by methods which, if not trivial, are at least straightforward. (15) Example dom variables, with parameter λ. Then for Sn = Let (X k; k ≥ 1) be independent and identically distributed exponential ran- n k=1 X k, E(etSn ) = (E(etX 1))n n = λ λ − t and so Sn has a gamma distribution by (7.5.6). by independence by Example 7.5.4, 8.6 Conditional Density and Expectation Suppose that X and Y have joint density f (x, y), and we are given the value of Y. By analogy with the conditional mass function that arises when X and Y are discrete, we make the following definition. 356 8 Jointly Continuous Random Variables (1) (2) (3) Definition X given Y = y is given by If X and Y have joint density f (x, y), then the conditional density of f X |Y (x|y) = f (x, y) fY (y) 0 if 0 < fY (y) < ∞ elsewhere. We observe immediately that f X |Y (x|y) is indeed a density, because it is nonnegative and f (x, y) fY (y) f X |Y (x|y) d x = d x = 1 · fY (y) = 1. fY (y) ∞ ∞ −∞ −∞ The corresponding conditional distribution function is FX |Y (x, y) = x −∞ f X |Y (u|y) du = P(X ≤ x|Y = y
), and we have the Key Rule P(X ∈ A|Y = y) = f X |Y (x|y) d x. x∈A (4) Example Let (X, Y ) be the coordinates of the point Q uniformly distributed on a circular disc of unit radius. What is fY |X (y|x)? Solution definition, Recall that for the marginal density f X (x) = (2/π)(1 − x 2) 1 2. Hence, by fY |X (y|x) = f (x, y) f X (x) = 1 π π 2(1 − x 2) 1 2 = 1 2 (1 − x 2)− 1 2 for |y| < (1 − x 2) 1 2. This conditional density is uniform on (−(1 − x 2) our earlier observations about conditioning of uniform densities. 2, (1 − x 2) 1 1 2 ), which is consistent with (5) Example Let X and Y be independent and exponential with parameter λ. Show that the density of X conditional on X + Y = v is uniform on (0, v). Solution density of X and Y is f (x, y) = λ2e−λ(x+y) for x > 0, y > 0. To use (1), we need to take some preliminary steps. First note that the joint Next we need the joint density of X and X + Y so we consider the transformation u = x and v = x + y, with inverse x = u and y = v − u, so that J = 1. Hence, by Theorem 8.2.1, fU,V (u, v) = λ2e−λv for 0 < u < v < ∞. 8.6 Conditional Density and Expectation 357 It follows that and so by definition fV (v) = v 0 λ2e−λv du = λ2ve−λv, fU |V (u|v) = f (u, v) fV (v) = 1 v for 0 < u < v. This is the required uniform density. This striking result is related to the lack-of-memory property of the exponential density. Now, because f X |Y (x|y) is a density it may have an expected value, which naturally
|
enough is called conditional expectation. (6) Definition given Y = y is given by If + R |x| f X |Y (x|y) d x < ∞, then the conditional expectation of X E(X |Y = y) = x f X |Y (x|y)d x. R (7) Example (5) Revisited If X and Y are independent and exponential, then we showed that the density of X given X + Y = v is uniform on (0, v). Hence, E(X |X + Y = v) = 1 2 v. Actually, this is otherwise obvious because, for reasons of symmetry, E(X |X + Y = v) = E(Y |X + Y = v), and trivially E(X + Y |X + Y = v) = v. Hence the result follows, provided it is true that for random variables X, Y and V, we have (8) E(X + Y |V = v) = E(X |V = v) + E(Y |V = v). In fact, this is true as we now show. (9) Theorem Let X, Y, and V have joint density f (x, y, v). Then (8) holds. Proof The joint density of W = X + Y and V is + ∞ −∞ f (w − u, u, v) du. Then, by definition, E(X + Y |V = v) = 1 fV (v) ∞ ∞ −∞ −∞ w f (w − u, u, v) dudw. Now consider the transformation x = w − u and y = u, with inverse w = x + y and u = y, so that J = 1. Changing the variables in the double integral accordingly and using 358 8 Jointly Continuous Random Variables standard results about such double integrals shows that ∞ ∞ E(X + Y |V = v) = 1 fV (v) = 1 fV (v) −∞ ∞ −∞ −∞ (x + y) f (x, y, v) d x d y ∞ x f X,V (x, v) d x + 1 fV (v) = E(X |V = v) + E(Y |V = v). Next
|
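Example 5 is easy to probe by simulation. The sketch below (not from the text; the parameter values and tolerance are arbitrary) generates independent exponential pairs, keeps those whose sum falls near a chosen v, and inspects the retained values of X, which should look uniform on (0, v).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, v, eps, trials = 1.0, 3.0, 0.02, 2_000_000

X = rng.exponential(scale=1 / lam, size=trials)
Y = rng.exponential(scale=1 / lam, size=trials)

# Condition (approximately) on X + Y = v by keeping pairs with X + Y near v.
kept = X[np.abs(X + Y - v) < eps]

# Uniform(0, v) has mean v/2 and variance v^2/12.
print(len(kept), kept.mean(), kept.var(), "theory:", v / 2, v**2 / 12)
```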
Now, because f_{X|Y}(x|y) is a density, it may have an expected value, which is naturally enough called conditional expectation.

(6) Definition If ∫_R |x| f_{X|Y}(x|y) dx < ∞, then the conditional expectation of X given Y = y is given by

E(X|Y = y) = ∫_R x f_{X|Y}(x|y) dx.

(7) Example: (5) Revisited If X and Y are independent and exponential, then we showed that the density of X given X + Y = v is uniform on (0, v). Hence, E(X|X + Y = v) = ½v.

Actually, this is otherwise obvious because, for reasons of symmetry, E(X|X + Y = v) = E(Y|X + Y = v), and trivially E(X + Y|X + Y = v) = v. Hence the result follows, provided it is true that for random variables X, Y, and V, we have

(8) E(X + Y|V = v) = E(X|V = v) + E(Y|V = v).

In fact, this is true as we now show.

(9) Theorem Let X, Y, and V have joint density f(x, y, v). Then (8) holds.

Proof The joint density of W = X + Y and V is ∫_{−∞}^{∞} f(w − u, u, v) du. Then, by definition,

E(X + Y|V = v) = (1/f_V(v)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} w f(w − u, u, v) du dw.

Now consider the transformation x = w − u and y = u, with inverse w = x + y and u = y, so that J = 1. Changing the variables in the double integral accordingly and using standard results about such double integrals shows that

E(X + Y|V = v) = (1/f_V(v)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x + y) f(x, y, v) dx dy
    = (1/f_V(v)) ∫_{−∞}^{∞} x f_{X,V}(x, v) dx + (1/f_V(v)) ∫_{−∞}^{∞} y f_{Y,V}(y, v) dy
    = E(X|V = v) + E(Y|V = v).

Next, we make the important observation that by writing

(10) ψ(y) = E(X|Y = y)

we emphasize the fact that the conditional expectation of X given Y is a function of Y. If the value of Y is left unspecified, we write ψ(Y) = E(X|Y) on the understanding that when Y = y, ψ(Y) takes the value E(X|Y = y) defined above. It is therefore natural to think of E(X|Y) as a random variable that is a function of Y. (A more rigorous analysis can indeed justify this assumption.) Just as in the discrete case, its expected value is E(X).

(11) Theorem The expected value of ψ(Y) is E(X); thus, EX = E(E(X|Y)).

Proof Because ψ(Y) is a function of Y, we can calculate its expected value in the usual way as

E(ψ(Y)) = ∫_{−∞}^{∞} ψ(y) f_Y(y) dy
    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_{X|Y}(x|y) f_Y(y) dx dy,  by Definition 6,
    = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f(x, y) dx dy = E(X).

We recall an earlier example.

(12) Example Let X and Y have density f(x, y) = 8xy for 0 < y < x < 1. Find E(X|Y) and E(Y|X).

Solution Because

f_X(x) = ∫_0^x 8xy dy = 4x³  and  f_Y(y) = ∫_y^1 8xy dx = 4y(1 − y²),

we have f_{X|Y}(x|y) = 2x/(1 − y²) for y < x < 1. Hence,

E(X|Y) = ∫_Y^1 (2x²/(1 − Y²)) dx = 2(1 − Y³)/(3(1 − Y²)).

Likewise, f_{Y|X}(y|x) = 2y/x², 0 < y < x, and, therefore,

E(Y|X) = ∫_0^X (2y²/X²) dy = (2/3)X.
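A quick numerical check of Example 12 and of Theorem 11 (not from the text; the rejection scheme and sample size are my own choices): sample from f(x, y) = 8xy on 0 < y < x < 1 by rejection from the unit square, then compare the sample average of X with the sample average of the conditional mean 2(1 − Y³)/(3(1 − Y²)).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000

# Rejection sampling from f(x, y) = 8xy on 0 < y < x < 1 (f is bounded by 8 there).
x = rng.uniform(size=N)
y = rng.uniform(size=N)
u = rng.uniform(size=N)
keep = (y < x) & (u < x * y)          # accept with probability 8xy / 8
X, Y = x[keep], y[keep]

psi = 2 * (1 - Y**3) / (3 * (1 - Y**2))   # E(X | Y) from Example 12
print("E(X)        =", X.mean())           # exact value is 4/5
print("E(E(X | Y)) =", psi.mean())         # should agree, by Theorem 11
```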
The identity in Theorem 11 can also be used to calculate probabilities by the simple device of letting X be the indicator of the event of interest.

(13) Example Let U and Y have density f(u, y). What is P(U < Y)?

Solution Let X be the indicator of U < Y. Then

P(U < Y) = E(X) = E(E(X|Y)),  by Theorem 11,
    = E(∫_{−∞}^{Y} f(u|Y) du) = ∫_{−∞}^{∞} ∫_{−∞}^{y} f(u, y) du dy,  by (2).

Of course, we could have written this down immediately by Theorem 8.1.9.

(14) Example: Bivariate Normal Let X and Y have the bivariate normal density

(15) f(x, y) = (1/(2πστ(1 − ρ²)^{1/2})) exp{−(1/(2(1 − ρ²)))(x²/σ² − 2ρxy/(στ) + y²/τ²)}.

(a) Find the conditional density of X given Y = y.
(b) Find E(e^{tXY}), and hence find the density of Z = X₁Y₁ + X₂Y₂, where (X₁, Y₁) is independent of (X₂, Y₂) and each has the density (15).

Solution (a) From Example 8.1.19, we know that Y has the N(0, τ²) density. Hence,

f_{X|Y}(x|y) = f(x, y)/f_Y(y) = (1/(σ(2π(1 − ρ²))^{1/2})) exp{−(1/(2(1 − ρ²)))(x/σ − ρy/τ)²}.

Hence, the conditional density of X given Y = y is N(ρσy/τ, σ²(1 − ρ²)). Note that if ρ = 0, then this does not depend on y, which is to say that X is independent of Y.
(b) By part (a), the conditional moment generating function of X given Y is

M_{X|Y}(t) = exp(ρσY t/τ + ½σ²(1 − ρ²)t²).

Hence, by conditional expectation,

E(e^{tXY}) = E(E(e^{tXY}|Y)) = E(M_{X|Y}(tY))
    = E[exp{(ρσt/τ + ½σ²(1 − ρ²)t²)Y²}]
    = (1 − 2ρστt − σ²τ²(1 − ρ²)t²)^{−1/2},  using Example 8.5.13,

and so X₁Y₁ + X₂Y₂ has moment generating function

(16) M(t) = (1 − 2ρστt − σ²τ²(1 − ρ²)t²)^{−1} = ((1 + ρ)/2)/(1 − στ(1 + ρ)t) + ((1 − ρ)/2)/(1 + στ(1 − ρ)t).

Hence, Z = X₁Y₁ + X₂Y₂ has an asymmetric bilateral exponential density,

f(z) = ((1 + ρ)/2)(στ(1 + ρ))^{−1} exp(−z/(στ(1 + ρ)))  if z > 0,
     = ((1 − ρ)/2)(στ(1 − ρ))^{−1} exp(z/(στ(1 − ρ)))   if z < 0.

We note without proof that ψ(Y) has the useful properties that we recorded in the discrete case. Among the most important is that

(17) E(Xg(Y)|Y) = g(Y)ψ(Y)  for any function g(Y) of Y.

Finally, we stress that conditional expectation is important in its own right; it should not be regarded merely as a stage on the way to calculating something else. For example, suppose that X and Y are random variables, and we want to record the value of X. Unfortunately, X is inaccessible to measurement, so we can only record the value of Y. Can this help us to make a good guess at X? First, we have to decide what a "good" guess g(Y) at X is. We decide that g₁(Y) is a better guess than g₂(Y) if

(18) E[(g₁(Y) − X)²] < E[(g₂(Y) − X)²].

According to this (somewhat arbitrary) rating, it turns out that the best guess at X given Y is ψ(Y) = E(X|Y).

(19) Theorem For any function g(Y) of Y,

E[(X − g(Y))²] ≥ E[(X − ψ(Y))²].
Proof Using (17), we have

(20) E[(X − ψ)(ψ − g)] = E[(ψ − g)E(X − ψ|Y)] = 0.

Hence,

E[(X − g)²] = E[(X − ψ + ψ − g)²] = E[(X − ψ)²] + E[(ψ − g)²],  by (20),
    ≥ E[(X − ψ)²].

We conclude this section by recording one more useful property of conditional densities, which may be called the continuous partition rule. The proof is an easy exercise. For continuous random variables X and Y,

(21) f_X(x) = ∫_R f_{X|Y}(x|y) f_Y(y) dy.
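Theorem 19 can be seen in action numerically. The sketch below (not part of the text) uses the bivariate normal pair of Example 14 with σ = τ = 1, for which ψ(Y) = E(X|Y) = ρY by part (a), and compares the mean squared error of ρY with that of some other guesses g(Y); the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, N = 0.6, 1_000_000

# Standard bivariate normal pair with correlation rho.
Y = rng.standard_normal(N)
X = rho * Y + np.sqrt(1 - rho**2) * rng.standard_normal(N)

guesses = {
    "psi(Y) = rho*Y": rho * Y,      # the conditional mean, optimal by Theorem 19
    "g(Y) = Y": Y,
    "g(Y) = 0": np.zeros(N),
    "g(Y) = Y**3": Y**3,
}
for name, g in guesses.items():
    print(name, "MSE =", np.mean((X - g) ** 2))
```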
8.7 Transformations: Order Statistics

We introduced the change of variable technique in Section 8.2. We return to this topic to consider a particularly important class of transformations, namely, those that are linear. Thus, let the random vector (X₁,..., Xₙ) have joint density f(x₁,..., xₙ). Suppose that for 1 ≤ i, j ≤ n, and some constants a_{ij},

(1) Y_i = Σ_{j=1}^n a_{ij} X_j.

What can be said about the joint density of (Y₁,..., Yₙ)? In Section 8.2, we required such transformations to be invertible, and we make the same restriction here. Therefore, we suppose that the matrix A = (a_{ij}) has an inverse A^{−1} = (b_{ij}) = B. A sufficient condition for this is that the determinant det A is not zero. Then we have the following useful result, which we state without proof.

(2) Theorem Suppose that (X₁,..., Xₙ) has density f_X(x₁,..., xₙ), and that (Y₁,..., Yₙ) is related to (X₁,..., Xₙ) by Y_i = Σ_{j=1}^n a_{ij} X_j and X_i = Σ_{j=1}^n b_{ij} Y_j, where BA = I, the identity matrix, and det A ≠ 0. Then the density of (Y₁,..., Yₙ) is given by

(3) f_Y(y₁,..., yₙ) = (1/|det A|) f_X(x₁(y₁,..., yₙ),..., xₙ(y₁,..., yₙ)) = |det B| f_X(x₁,..., xₙ).

(4) Example: Normal Sample Let (X₁,..., Xₙ) be independent N(0, 1) random variables, and define

(5) Y_j = Σ_{i=1}^n X_i a_{ij}  for 1 ≤ j ≤ n,

where the matrix A = (a_{ij}) is an orthogonal rotation with det A = 1 and, denoting the transpose of A by Aᵀ, AAᵀ = I = AᵀA.

(a) Show that (Y₁,..., Yₙ) are independent N(0, 1) random variables.
(b) Deduce that the sample mean and the sample variance

X̄ = (1/n) Σ_{i=1}^n X_i,   s² = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄)²

are independent, and that (n − 1)s² has a χ² density.

Solution (a) It is convenient to use the standard notation for vectors and matrices in problems of this type. Thus, we write xᵀ for the transpose of x, where

(6) x = (x₁,..., xₙ) = yAᵀ,  from (5).

Furthermore,

(7) Σ_{i=1}^n x_i² = xxᵀ = yAᵀAyᵀ = yyᵀ = Σ_{i=1}^n y_i².

Hence, by (3), (Y₁,..., Yₙ) have density

f_Y = (2π)^{−n/2} exp(−½ Σ_{i=1}^n y_i²).

Because this factorizes, (Y₁,..., Yₙ) are independent with the N(0, 1) density.

(b) Now let (a_{ij}) be any rotation such that a_{i1} = n^{−1/2} for each i, giving

Y₁ = Σ_{i=1}^n (1/√n) X_i = √n X̄.

Then

(n − 1)s² = Σ_{i=1}^n (X_i − X̄)² = Σ_{i=1}^n X_i² − 2X̄ Σ_{i=1}^n X_i + nX̄² = Σ_{i=1}^n X_i² − nX̄² = Σ_{i=1}^n Y_i² − Y₁² = Σ_{i=2}^n Y_i².

Hence, s² is independent of X̄, by the independence of Y₁ and (Y₂,..., Yₙ). Finally, because each Y_i is N(0, 1), (n − 1)s² has a χ²(n − 1) density by Example 8.5.13. See Problem 8.43 for another way to do this.
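The independence claimed in part (b) is easy to probe numerically. The following sketch (not part of the text; sample sizes are arbitrary) draws many normal samples, computes X̄ and s² for each, and checks that their sample correlation is near zero (a necessary consequence of independence) and that (n − 1)s² has mean n − 1 and variance 2(n − 1), as a χ²(n − 1) variable should.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 6, 200_000

samples = rng.standard_normal((reps, n))
xbar = samples.mean(axis=1)
s2 = samples.var(axis=1, ddof=1)

print("corr(xbar, s2)      =", np.corrcoef(xbar, s2)[0, 1])   # near 0
print("mean/var of (n-1)s2 =", ((n - 1) * s2).mean(), ((n - 1) * s2).var(),
      "theory:", n - 1, 2 * (n - 1))
```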
A particularly important linear transformation is the one that places (X₁,..., Xₙ) in nondecreasing order. Thus,

Y₁ = smallest of X₁,..., Xₙ,
Y₂ = second smallest of X₁,..., Xₙ,
...
Yₙ = largest of X₁,..., Xₙ.

We assume that each X_k has a density f(x_k), so that the chance of ties is zero. It is customary to use the special notation Y_k = X_{(k)}, and then X_{(1)}, X_{(2)},..., X_{(n)} are known as the order statistics of X₁,..., Xₙ. Now the above transformation is linear, but not one–one. It is in fact many–one; to see this, suppose that y₁ < y₂ <... < yₙ. Then the outcomes

X₁ = y₁, X₂ = y₂,..., Xₙ = yₙ  and  X₂ = y₁, X₁ = y₂,..., Xₙ = yₙ

both yield the same set of order statistics, namely, X_{(1)} = y₁, X_{(2)} = y₂,..., X_{(n)} = yₙ. However, if (π(1),..., π(n)) is any one of the n! distinct permutations of the first n integers and R_π is the region x_{π(1)} < x_{π(2)} <... < x_{π(n)}, then the transformation x_{(k)} = x_{π(k)}, 1 ≤ k ≤ n, is one–one and linear. In the notation of (2), we have

a_{ij} = 1 if i = π(j), and 0 otherwise,

and |det A| = 1. Therefore, the density of X_{(1)},..., X_{(n)} is Π_{i=1}^n f(y_i). Now we observe that (X₁, X₂,..., Xₙ) lies in just one of the n! regions R_π; hence, the order statistics have joint density

(8) n! Π_{i=1}^n f(y_i)  for y₁ < y₂ <... < yₙ.

Here are some applications of this useful result.

(9) Example Let (X₁,..., Xₙ) be independently and uniformly distributed on (0, a). Then, by (8), their order statistics have the density

(10) f = n!/aⁿ  for y₁ < y₂ <... < yₙ.

It follows from (8) that we may in principle obtain the marginal density of any subset of the order statistics by performing appropriate integrations. For small subsets, this is actually unnecessary.

(11) Example Show that X_{(k)} has density

(12) f_{(k)}(y) = k \binom{n}{k} f(y)(1 − F(y))^{n−k} [F(y)]^{k−1}.

Solution The event X_{(k)} ≤ y occurs if and only if at least k of the X_i lie in (−∞, y]. Hence,

(13) F_{(k)}(y) = Σ_{j=k}^n \binom{n}{j} [F(y)]^j (1 − F(y))^{n−j}.

Now, differentiating to obtain the density,

f_{(k)}(y) = f(y) Σ_{j=k}^n [ j \binom{n}{j} F^{j−1}(1 − F)^{n−j} − (j + 1) \binom{n}{j+1} F^j (1 − F)^{n−(j+1)} ]
    = f(y) k \binom{n}{k} F^{k−1}(1 − F)^{n−k},

by successive cancellation in the sum.
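A numerical illustration of Example 11 (not part of the text; the facts quoted here are standard Beta-distribution values, not taken from the text): for uniform (0, 1) samples, X_{(k)} has mean k/(n + 1) and variance k(n + 1 − k)/((n + 1)²(n + 2)); the sketch compares these with simulated order statistics.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, reps = 7, 3, 200_000

# k-th smallest of n independent uniforms on (0, 1).
xk = np.sort(rng.uniform(size=(reps, n)), axis=1)[:, k - 1]

mean_theory = k / (n + 1)
var_theory = k * (n + 1 - k) / ((n + 1) ** 2 * (n + 2))
print("simulated:", xk.mean(), xk.var())
print("theory:   ", mean_theory, var_theory)
```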
8.8 The Poisson Process: Martingales

A recurring idea in previous chapters has been that of a series of events or happenings that may occur repeatedly at random times denoted by T₁, T₂,.... For example, we have considered light bulbs that may fail and be replaced at (Tₙ; n ≥ 1), or machine bits that may wear out and be renewed at (Tₙ; n ≥ 1), and so on. Other practical problems may also have this structure; for example, the Tₙ may be the times at which my telephone rings, cars arrive at the toll booth, meteorites fall from the sky, or you get stung by a wasp. You can think of many more such examples yourself, and it is clear that it would be desirable to have a general theory of such processes. This is beyond our scope, but we can now consider one exceptionally important special case of such processes. The basic requirement is that the times between events should be independent and identically distributed random variables (X_k; k ≥ 1); we assume further that they have an exponential distribution.

(1) Definition Let (X_k; k ≥ 1) be independent identically distributed exponential random variables with parameter λ. Let T₀ = 0, and set

Tₙ = Σ_{k=1}^n X_k,  n ≥ 1.

Define

(2) N(t) = max{n: Tₙ ≤ t},  t ≥ 0.

Then N(t) is a Poisson process with parameter λ.

A couple of remarks are in order here. First, note that N(t) is just the number of happenings or events by time t; N(t) is constant until an event occurs, when it increases by 1. Second, the collection (N(t); t ≥ 0) is an uncountable collection of random variables. We have said nothing about such collections up to now, and so our analysis of N(t) must of necessity be rather informal.
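Definition 1 translates directly into a simulation. The sketch below (not part of the text; parameter choices are arbitrary) builds N(t) from exponential interarrival times exactly as in (1) and (2), and checks that the count at a fixed time t has mean and variance close to λt, as the next theorem asserts.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t, reps = 2.0, 5.0, 100_000

counts = np.empty(reps, dtype=int)
for r in range(reps):
    # Interarrival times X_k ~ exponential(lambda); T_n = X_1 + ... + X_n.
    X = rng.exponential(scale=1 / lam, size=64)      # 64 >> lam * t, so enough arrivals
    T = np.cumsum(X)
    counts[r] = np.searchsorted(T, t, side="right")  # N(t) = max{n : T_n <= t}

print("mean, var of N(t):", counts.mean(), counts.var(), "theory:", lam * t, lam * t)
```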
Our first result explains why N(t) is called a Poisson process.

(3) Theorem N(t) has mass function

(4) f_N(k) = e^{−λt}(λt)^k/k!,  k ≥ 0.

Proof First, we note that from Definition 2 of N(t), the event N(t) ≥ k occurs if and only if T_k ≤ t. It follows that

(5) P(N(t) ≥ k) = P(T_k ≤ t),

and because T_k has a gamma density by (8.5.15) we have

f_N(k) = P(N(t) ≥ k) − P(N(t) ≥ k + 1) = P(T_k ≤ t) − P(T_{k+1} ≤ t)
    = ∫_0^t (λ^k v^{k−1}/(k − 1)! − λ^{k+1} v^k/k!) e^{−λv} dv
    = e^{−λt}(λt)^k/k!,

after an integration by parts. As an alternative, we could argue straight from (5) and (6.1.7) that

(1 − sE(s^{N(t)}))/(1 − s) = Σ_{k=0}^∞ s^k P(N(t) ≥ k) = 1 + Σ_{k=1}^∞ s^k ∫_0^t (λ^k v^{k−1}/(k − 1)!) e^{−λv} dv
    = 1 + sλ ∫_0^t e^{λvs−λv} dv = 1 − (s/(1 − s))[e^{λv(s−1)}]_0^t
    = (1 − s − s(e^{λt(s−1)} − 1))/(1 − s) = (1 − s e^{λt(s−1)})/(1 − s),

and the Poisson mass function of N(t) follows.

Our next result is one of the most striking and important properties of N(t), from which many other results flow.

(6) Theorem: Conditional Property of the Poisson Process Let N(t) be a Poisson process as defined in Definition 1. Conditional on the event N(t) = k, the k random variables T₁,..., T_k have conditional density

(7) f_{T|N=k}(t₁,..., t_k) = k!/t^k,  0 < t₁ < t₂ <... < t_k ≤ t.

Before proving (7), let us interpret it. From Example 8.7.9, we recognize that the density k!/t^k is the density of the order statistics of k independent random variables, each uniform on (0, t). Thus, Theorem 6 can be more dramatically expressed: given N(t) = k, the k events of the process are independently and uniformly distributed on (0, t).
Proof of (6) Because X₁,..., X_{k+1} are independent and exponential, they have joint density

(8) f(x₁,..., x_{k+1}) = λ^{k+1} exp(−λ(x₁ + ··· + x_{k+1})).

Next, observe that the transformation tₙ = Σ_{i=1}^n xᵢ, 1 ≤ n ≤ k + 1, is linear and invertible with |J| = 1. Hence, by Theorem 8.7.2, the random variables Tₙ = Σ_{i=1}^n Xᵢ, 1 ≤ n ≤ k + 1, have joint density

(9) f(t₁,..., t_{k+1}) = λ^{k+1} e^{−λt_{k+1}},  0 < t₁ <... < t_{k+1}.

Now, for 0 < t₁ < t₂ <... < t_k ≤ t,

P(0 < T₁ ≤ t₁ < T₂ ≤ t₂ <... < T_k ≤ t_k; N(t) = k) = P(0 < T₁ ≤ t₁ < T₂ ≤ t₂ <... < T_k ≤ t_k < t < T_{k+1})
    = λ^k t₁(t₂ − t₁)···(t_k − t_{k−1}) e^{−λt},

on integrating the density (9). Hence, the conditional distribution of T₁,..., T_k given N(t) = k is given by

(10) P(T₁ ≤ t₁ <... < T_k ≤ t_k | N(t) = k) = P(T₁ ≤ t₁ <... < T_k ≤ t_k; N(t) = k)/P(N(t) = k)
    = t₁(t₂ − t₁)···(t_k − t_{k−1}) · k!/t^k.

Now differentiating (10) with respect to all of t₁,..., t_k gives (7), as required.

As we have remarked, this result finds many applications; see Example 8.17 for some of them.
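The conditional property is easy to see empirically. The following sketch (not from the text; parameters arbitrary, and the quoted means of uniform order statistics are my own check, not from the text) simulates many Poisson processes on (0, t), keeps realizations with exactly k arrivals, and compares the retained arrival times with the means jt/(k + 1) of the order statistics of k independent uniforms on (0, t).

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t, k, reps = 1.0, 4.0, 3, 100_000

arrivals = []
for _ in range(reps):
    T = np.cumsum(rng.exponential(scale=1 / lam, size=32))
    if np.searchsorted(T, t, side="right") == k:   # keep paths with N(t) = k
        arrivals.append(T[:k])
arrivals = np.array(arrivals)

# Given N(t) = k, (T_1, ..., T_k) should look like uniform order statistics on (0, t),
# so E(T_j | N(t) = k) = j * t / (k + 1).
print("simulated means:", arrivals.mean(axis=0))
print("theory:         ", [j * t / (k + 1) for j in range(1, k + 1)])
```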
For the moment we content ourselves with showing that N(t) has the so-called independent increments property.

(11) Theorem: The Poisson Process has Independent Increments Let N(t) be a Poisson process, as usual, and let s < t ≤ u < v. Then N(t) − N(s) is independent of N(v) − N(u).

Proof Let W = N(t) − N(s) and Z = N(v) − N(u). Then, by conditional expectation,

(12) E(w^W z^Z) = E[E(w^W z^Z | N(v))].

However, conditional on N(v) = k, these events are independently and uniformly distributed in (0, v), whence (W, Z, k − W − Z) has a trinomial distribution with

(13) E(w^W z^Z | N(v) = k) = ((t − s)w/v + (v − u)z/v + (s + u − t)/v)^k.

Hence, combining (12) and (13) and Theorem 3 gives

(14) E(w^W z^Z) = exp(λw(t − s) + λz(v − u) + λ(s + u − t) − λv)
    = exp(λ(t − s)(w − 1) + λ(v − u)(z − 1)) = E(w^W)E(z^Z),

as required.

We may also observe from (14) that because E(w^W) = exp(λ(t − s)(w − 1)), it follows that W is Poisson with parameter λ(t − s). That is to say, N(t) − N(s) has the same mass function as N(t − s). This property may be called homogeneity or the property of stationary increments. Note that it is possible, and often convenient, to define N(t) as a process with stationary independent Poisson-distributed increments. It is then necessary to prove that interevent times are independent identically distributed exponential random variables.
Now looking back at Section 5.6, we see that the simple random walk also has the property of independent increments. In Section 5.7, we found that martingales were particularly useful in analysing the behaviour of such random walks. We may expect (correctly) that martingales will be equally useful here. First, we need to define what we mean by a martingale for an uncountably infinite family of random variables (X(t); t ≥ 0).

(15) Definition (X(t); t ≥ 0) is a martingale if
(a) E|X(t)| < ∞, for all t, and
(b) E(X(t)|X(t₁), X(t₂),..., X(tₙ), X(s)) = X(s), for any 0 ≤ t₁ < t₂ <... < tₙ < s < t.

It is customary and convenient to rewrite (b) more briefly as

(b′) E(X(t)|X(u); u ≤ s) = X(s),

but note that this is slightly sloppy shorthand; we cannot condition on an infinite number of values of X(u).

A stopping time for X(t) is a random variable T taking values in [0, ∞), such that the event (T ≤ t) depends only on values of X(s) for s ≤ t. There are many technical details that should properly be dealt with here; we merely note that they can all be satisfactorily resolved, and ignore them from now on. Such martingales and their stopping times can behave much like those with discrete parameter. We state this useful theorem without proof or technical details.

(16) Theorem: Optional Stopping Let X(t) be a continuous parameter martingale and T a stopping time for X(t). Then X(t ∧ T) is a martingale, so

EX(0) = EX(t ∧ T) = EX(t).

Furthermore, EX(T) = EX(0) if any one of the following holds for some constant K < ∞:
(a) T is bounded (i.e., T ≤ K < ∞).
(b) |X(t)| ≤ K for all t, and P(T < ∞) = 1.
(c) E(sup_t |X(t ∧ T)|) < ∞, and P(T < ∞) = 1.
(d) E|X(T)| ≤ K, and P(T < ∞) = 1, and lim_{t→∞} E(X(t)I(T > t)) = 0.

Note that similar appropriate definitions and theorems apply to sub- and supermartingales with continuous parameter, but we do not explore that further here. Note further that, just as in the discrete-time case, a popular technique when P(T < ∞) = 1 is to apply part (a) of the theorem at T ∧ t, allow t → ∞, and use either the Dominated or Monotone Convergence theorem, as appropriate.

(17) Example: Poisson Martingales If (N(t); t ≥ 0) is a Poisson process with parameter λ, then the following are all martingales:
(a) U(t) = N(t) − λt.
(b) V(t) = (U(t))² − λt.
(c) W(t) = exp{−θN(t) + λt(1 − e^{−θ})}, θ ∈ R.

To see (a), we calculate

E(U(t)|N(u); 0 ≤ u ≤ s) = E(N(t) − N(s) + N(s)|N(u), 0 ≤ u ≤ s) − λt
    = U(s) + E[N(t) − N(s)] − λ(t − s) = U(s),

where we used the independent increments property. The proof of (b) proceeds likewise:

E(V(t)|N(u), 0 ≤ u ≤ s) = E[{(N(t) − λt − N(s) + λs) + (N(s) − λs)}² | N(u); 0 ≤ u ≤ s] − λt
    = [U(s)]² + λ(t − s) − λt = V(s),

on using the independence of the increments and var[N(t) − N(s)] = λ(t − s). Part (c) is left as an exercise, but see also Example (8.22).
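A small simulation (not part of the text; parameters arbitrary) can make Example 17 concrete: it generates Poisson counts at a few times and checks that the averages of U(t) = N(t) − λt and of W(t) = exp{−θN(t) + λt(1 − e^{−θ})} stay near their initial values 0 and 1, as the martingale property requires of the means.

```python
import numpy as np

rng = np.random.default_rng(8)
lam, theta, reps = 1.5, 0.7, 200_000

for t in (1.0, 2.0, 5.0):
    N_t = rng.poisson(lam * t, size=reps)          # N(t) is Poisson(lambda * t)
    U = N_t - lam * t
    W = np.exp(-theta * N_t + lam * t * (1 - np.exp(-theta)))
    print(f"t={t}: mean U(t) = {U.mean():+.4f} (should be ~0), "
          f"mean W(t) = {W.mean():.4f} (should be ~1)")
```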
8.9 Two Limit Theorems

Perhaps surprisingly (as we are toward the end of the book), this is an appropriate moment to reconsider our basic ideas about chance. Suppose we are given a number n of similar observations or measurements, denoted by x₁,..., xₙ. For example, the xᵢ may be the height of each of n men, or they may be the lifetimes of n light bulbs, or they may be the weight of potatoes yielded by each of n plants. By "similar" in this context, we mean that no measurement has any generic reason to be larger or smaller than the others; the potatoes are of the same variety and grown in the same circumstances, the light bulbs are of the same make and type, and the men are of the same age and race. It is convenient to have one number that gives an idea of the size of a typical xᵢ, and a popular candidate for this number is the average x̄ given by

x̄ₙ = (1/n) Σ_{i=1}^n xᵢ.
One reason for the popularity of x̄ is that it is empirically observed that, as n increases, the sequence x̄ₙ undergoes smaller and smaller fluctuations, and indeed exhibits behaviour of the kind we call convergent. A special case of such measurements arises when each xᵢ takes the value 1 or 0 according to whether some event A occurs. Then x̄ₙ is the proportion of times that A occurs in n trials, and the fact that x̄ₙ fluctuates less and less as n increases is sometimes used as a basis to justify the axioms of probability.

Of course, in mathematical terms, we think of xᵢ as the outcome of some random variable Xᵢ. It follows that, if our theory of probability is as relevant as we have claimed, then the sequence

X̄ₙ = (1/n) Σ_{i=1}^n Xᵢ

ought also to exhibit a similar kind of regularity in the long run as n → ∞. What kind might there be? To gain some insight into the problem, consider the case when each xᵢ is 1 if A occurs, and 0 otherwise. In the mathematical formulation of this, Xᵢ is the indicator of the event A, and we assume the Xᵢ are independent. Then Σ_{i=1}^n Xᵢ is a binomial random variable with parameters n and p, and we have shown in Examples 4.18 and 7.5.11 that for ε > 0, as n → ∞,

(1) P(|(1/n) Σ_{i=1}^n Xᵢ − p| > ε) → 0

and

(2) P((npq)^{−1/2} Σ_{i=1}^n (Xᵢ − p) ≤ x) → Φ(x),

where Φ(x) is the standard normal distribution. (Indeed, we proved something even stronger than (1) in Example 4.18 and Theorem 5.8.8.) It seems that n^{−1} Σ_{i=1}^n (Xᵢ − E(X₁)) is settling down around zero (here E(Xᵢ) = p), and that the distribution of (n var(X₁))^{−1/2} Σ_{i=1}^n (Xᵢ − E(X₁)) is getting closer to the standard normal distribution Φ(x). More generally, we showed in Theorem 5.8.6 that (1) holds for any collection of independent discrete random variables with the same mean and variance. We called this the weak law of large numbers.

This is deliberately vague and informal, but it should now seem at least plausible that the following results might be true.
(3) Theorem Let (X_k; k ≥ 1) be independent and identically distributed random variables with mean µ, variance σ² < ∞, and moment generating function M_X(t), |t| < a. Then we have:

(i) Weak Law of Large Numbers For ε > 0, as n → ∞,

(4) P(|(1/n) Σ_{i=1}^n (Xᵢ − µ)| > ε) → 0.

(ii) Central Limit Theorem As n → ∞,

(5) P((1/(σ√n)) Σ_{i=1}^n (Xᵢ − µ) ≤ x) → Φ(x) = ∫_{−∞}^x (2π)^{−1/2} e^{−½y²} dy.

It is a remarkable fact that both of these are indeed true, and we now prove them.

Proof of (4) The essential step here is to recall Chebyshov's inequality, for then we may write, using Theorem 7.4.16,

P(|(1/n) Σ_{i=1}^n (Xᵢ − µ)| > ε) ≤ ε^{−2} n^{−2} E[(Σ_{i=1}^n (Xᵢ − µ))²]
    = ε^{−2} n^{−2} Σ_{i=1}^n E(Xᵢ − µ)²,  by independence,
    = ε^{−2} n^{−1} σ² → 0  as n → ∞.

Note that the proof here is the same as that of Theorem 5.8.6. So it is not necessary for (4) that the Xᵢ be identically distributed or have an m.g.f.; it is sufficient that they have the same mean and variance.

Proof of (5) The essential step here is to recall the continuity theorem (7.5.10), for then we may write

E[exp((t/(σ√n)) Σ_{i=1}^n (Xᵢ − µ))] = (E[exp((t/(σ√n))(X₁ − µ))])ⁿ,  by independence,
    = (M_Y(t/(σ√n)))ⁿ,  where Y = X − µ,
    = (1 + E(Y²)t²/(2σ²n) + o(1/n))ⁿ
    → e^{½t²},  by Theorem 7.5.9, because E(Y²) = σ².

Now we recall that e^{½t²} is the moment generating function of the standard normal density φ(x), and (5) follows by the continuity theorem (7.5.10).
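The two statements of Theorem 3 can be illustrated by simulation. The sketch below (not from the text) uses exponential summands, which are far from normal; it reports how often the sample mean strays from µ by more than ε, and compares the standardized sum with a standard normal value. The choices of distribution, ε, and n are arbitrary.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(9)
mu, sigma, eps, reps = 1.0, 1.0, 0.1, 20_000   # exponential(1) summands: mean 1, sd 1
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal distribution function

for n in (10, 100, 1000):
    samples = rng.exponential(scale=mu, size=(reps, n))
    means = samples.mean(axis=1)
    Z = (samples.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))
    print(f"n={n}: P(|mean-mu|>eps) ~ {np.mean(np.abs(means - mu) > eps):.4f};",
          f"P(Z<=1) ~ {np.mean(Z <= 1.0):.4f} vs Phi(1)={Phi(1.0):.4f}")
```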
8.10 Review and Checklist for Chapter 8

In this chapter, we consider the joint behaviour of collections of continuous random variables having joint density functions. We also introduce the joint distribution function, and show how these yield the marginal densities and distributions. The change of variable technique is given and used to study important functions of sets of random variables (including sums, products, quotients, and order statistics). We look at expectation, independence, and conditioning, especially the key concept of conditional expectation. Finally, we discuss the Poisson process and its crucial properties, together with continuous parameter martingales and the optional stopping theorem, and prove simple forms of the weak law of large numbers and the central limit theorem. We summarize most of these principal properties for the bivariate case (X, Y). The extension to larger collections of random variables (X₁, X₂, X₃,...; the multivariate case) is straightforward but typographically tedious.

SYNOPSIS OF FORMULAE:

The random vector (X, Y) is supposed to have joint density f(x, y) and distribution F(x, y).

Key rule: P[(X, Y) ∈ B] = ∫∫_{(x,y)∈B} f(x, y) dx dy.

Basic rules: F(x, y) = ∫_{−∞}^x ∫_{−∞}^y f(u, v) du dv = P(X ≤ x, Y ≤ y).

For small h and k, P(x < X ≤ x + h, y < Y ≤ y + k) ≈ f(x, y)hk, and when F is differentiable,

∂²F(x, y)/∂x∂y = f(x, y).

Marginals:

f_X(x) = ∫_R f(x, y) dy, F_X(x) = F(x, ∞);  f_Y(y) = ∫_R f(x, y) dx, F_Y(y) = F(∞, y).

Functions:

P(g(X, Y) ≤ z) = ∫∫_{g≤z} f(x, y) dx dy.

In particular, f_{X+Y}(z) = ∫_R f(x, z − x) dx.
Transformations: More generally, suppose (u, v) = (u(x, y), v(x, y)) defines a one–one invertible function; let

J(u, v) = (∂x/∂u)(∂y/∂v) − (∂y/∂u)(∂x/∂v),

where the derivatives are continuous in the domain of (u, v). Then the random variables (U, V) = (u(X, Y), v(X, Y)) are jointly continuous with density

f_{U,V}(u, v) = f_{X,Y}(x(u, v), y(u, v)) |J(u, v)|.

Independence: X and Y are independent if and only if F_{X,Y}(x, y) = F_X(x)F_Y(y) for all x, y, or f_{X,Y}(x, y) = f_X(x)f_Y(y) for all x, y.

Expectation: If ∫∫ |g| f(x, y) dx dy < ∞, then g(X, Y) has an expected value given by

Eg(X, Y) = ∫∫ g(x, y) f(x, y) dx dy.

Moments: Joint moments, covariance, and correlation are defined as they were in the discrete case.

Joint generating functions: The joint probability generating function of integer-valued X and Y is G_{X,Y}(x, y) = E(x^X y^Y). The joint moment generating function of X and Y is M_{X,Y}(s, t) = E(e^{sX+tY}) = G_{X,Y}(e^s, e^t). Moments are obtained by appropriate differentiation, so

cov(X, Y) = (∂²G/∂x∂y)(1, 1) − (∂G/∂x)(1, 1)·(∂G/∂y)(1, 1),  if X, Y are discrete,
cov(X, Y) = (∂²M/∂s∂t)(0, 0) − (∂M/∂s)(0, 0)·(∂M/∂t)(0, 0),  in any case.

Random variables X and Y are independent if and only if the moment generating function factorizes as a product of separate functions of s and t for all s, t; thus,

M(s, t) = M(s, 0)M(0, t).
Conditioning: The conditional density of X given Y = y is

f_{X|Y}(x|y) = f(x, y)/f_Y(y),  if 0 < f_Y(y) < ∞,  and 0 otherwise.

The Key Rule is

P(X ∈ A|Y = y) = ∫_{x∈A} f_{X|Y}(x|y) dx,

the continuous partition rule is

f_X(x) = ∫_R f_{X|Y}(x|y) f_Y(y) dy,

and the conditional distribution function is

F_{X|Y}(x|y) = ∫_{−∞}^x f_{X|Y}(u|y) du = P(X ≤ x|Y = y).

If ∫ |x| f_{X|Y}(x|y) dx < ∞, then the conditional expectation of X given Y = y is

E(X|Y = y) = ∫_R x f_{X|Y}(x|y) dx.

As Y varies, this defines a function E(X|Y), where E(X|Y) = E(X|Y = y) when Y = y.

Key theorem for conditional expectation: E(E(X|Y)) = EX. This has the same properties as the conditional expectation in the discrete case.

Limit Theorems: We established the Weak Law of Large Numbers and the Central Limit Theorem in (8.9.3).

Multivariate normal density: Jointly normal random variables are particularly important. Recall that the standard normal density is

φ(x) = f_X(x) = (1/√(2π)) exp(−½x²),  −∞ < x < ∞.

If the random variables X and Y are jointly distributed, and aX + bY has a normal distribution for all choices of the constants a and b, then X and Y are said to have a bivariate normal, or binormal, density. In particular, X and Y have the standard binormal density with correlation coefficient (or parameter) ρ if

f(x, y) = (2π)^{−1}(1 − ρ²)^{−1/2} exp{−(x² − 2ρxy + y²)/(2(1 − ρ²))}.

Thus X and Y are independent if and only if uncorrelated.

More generally, X = (X₁,..., Xₙ) is said to be multivariate normal (or multinormal) if Σ_{i=1}^n aᵢXᵢ has a normal distribution
for all choices of a = (a1,..., an). It follows that the if multinormal distribution is determined by the means and covariances of (X 1,..., X n). 374 8 Jointly Continuous Random Variables To see this, we simply calculate the joint moment generating function of X: MX(t) = E exp tr Xr. n 1 This is easy because is normal with mean n 1 tr Xr i, j cov (Xi, X j )ti t j. Hence, by Example 7.5.7, ti EXi + 1 2 MX(t) = exp " i i, j cov (Xi, X j )ti t j, i ti EXi # and variance and this determines the joint distribution by the basic property of m.g.f.s. In particular, X 1, X 2, X 3 have the standard trivariate normal distribution (or trinormal) when varX 1 = varX 2 = varX 3 = 1, EX 1 X 2 = ρ12, EX 1 X 3 = ρ23, EX 3 X 1 = ρ31, and f (x1, x2, x3) = (2π)−3/2 A− 1 2 exp − 1 2A j k! a jk x j xk, where a11 = 1 − ρ2 23 ρ12ρ23 − ρ31, a23 = a32 = ρ12ρ31 − ρ23, The joint moment generating function of X 1, X 2, X 3 is, a33 = 1 − ρ2 12, a22 = 1 − ρ2 31, a12 = a21 = ρ31ρ23 − ρ12, a13 =a31 = + 2ρ12ρ23ρ31. − ρ2 31 − ρ2 23 and A = 1 − ρ2 12 M(t1, t2, t3) = exp + ρ12t1t2 + ρ23t2t3 + ρ31t3t1!. Checklist of Terms for Chapter 8 8.1 joint distribution function joint density function marginals bivariate normal density 8.2 change of variable formula 8.3 independence and factorization 8.4 sums, products, and quotients 8.5 expectation Cauchy–Schwarz inequality independence and expectation covariance and correlation joint moment generating function
|
Checklist of Terms for Chapter 8

8.1 joint distribution function; joint density function; marginals; bivariate normal density
8.2 change of variable formula
8.3 independence and factorization
8.4 sums, products, and quotients
8.5 expectation; Cauchy–Schwarz inequality; independence and expectation; covariance and correlation; joint moment generating function; normal sample
8.6 conditional density; conditional distribution; conditional expectation
8.7 normal sample; order statistics
8.8 Poisson process; conditional property; independent increments; martingales; optional stopping theorem
8.9 weak law of large numbers; central limit theorem

Worked Examples and Exercises

8.11 Example: Bivariate Normal Density Let X and Y have the standard bivariate normal joint density

f(x, y) = (1/(2π(1 − ρ²)^{1/2})) exp{−(x² − 2ρxy + y²)/(2(1 − ρ²))}.

Show that the joint moment generating function of X and Y is exp{½(s² + 2ρst + t²)}.

Solution We are asked to find

(1) M = E(e^{sX+tY}) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) e^{sx+ty} dx dy.

After a little thought, we observe that the terms in the exponents in the integrand can be rearranged to give

M = (1/(2π(1 − ρ²)^{1/2})) ∫_{−∞}^{∞} ∫_{−∞}^{∞} exp{−½x² + x(s + tρ) − ½((y − ρx)/(1 − ρ²)^{1/2})² + t(1 − ρ²)^{1/2}(y − ρx)/(1 − ρ²)^{1/2}} dx dy.

This suggests that we make the change of variables u = x, v = (y − ρx)/(1 − ρ²)^{1/2} in the integral. This map is one–one, and J = (1 − ρ²)^{1/2}. Hence,

M = (1/(2π)) ∫_{−∞}^{∞} ∫_{−∞}^{∞} exp{−½u² + (s + tρ)u − ½v² + t(1 − ρ²)^{1/2}v} du dv.

Because the integrand factorizes, we now recognize the right-hand side as being equal to E(e^{(s+tρ)U}) E(e^{t(1−ρ²)^{1/2}V}), where U and V are standard normal random variables. But we know the m.g.f. E(e^{tV}) of a standard normal random variable V to be e^{½t²}. Hence,

(2) M = e^{½(s+tρ)²} e^{½t²(1−ρ²)} = e^{½(s² + 2ρst + t²)},

as required.
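Before the exercises, a quick numerical confirmation (not part of the text; parameter choices arbitrary): simulate standard binormal pairs and compare the empirical value of E(e^{sX+tY}) with exp{½(s² + 2ρst + t²)} for a few values of s and t.

```python
import numpy as np

rng = np.random.default_rng(10)
rho, N = 0.4, 2_000_000

# Standard bivariate normal pair with correlation rho.
X = rng.standard_normal(N)
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(N)

for s, t in [(0.3, 0.5), (-0.2, 0.4), (0.1, -0.6)]:
    est = np.exp(s * X + t * Y).mean()
    exact = np.exp(0.5 * (s**2 + 2 * rho * s * t + t**2))
    print(f"s={s}, t={t}: simulated {est:.4f}, formula {exact:.4f}")
```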
Exercise Find the conditional m.g.f. of Y given X. Use (3) to find E(e^{sX+tY}).
Exercise Show that ρ(X, Y) = cov(X, Y) = ρ. Deduce that X and Y are independent if and only if cov(X, Y) = 0.
Exercise Find the distribution of aX + bY, where X and Y have the bivariate normal distribution.
Exercise Let X₁, X₂,..., Xₙ be independent standard normal variables. Let W = Σ_{i=1}^n αᵢXᵢ and Z = Σ_{i=1}^n βᵢXᵢ. When are W and Z independent?

Remark See Example 8.20 for another approach.

8.12 Example: Partitions
(a) The random variables X and Y are independently and uniformly distributed on (0, a). Find the density of U, V, and W, where U = min{X, Y}, V = |X − Y|, and W = a − max{X, Y}.
(b) Use this to show that if three points are picked independently and uniformly on the perimeter of a circle of radius r, then the expected area of the resulting triangle is 3r²/(2π).

Solution (a) We give three methods of solution.

I: Basic Plod
(i) By independence, P(U ≤ u) = 1 − P(X > u; Y > u) = 1 − (a − u)²/a².
(ii) By the basic property of densities, if we let C be the set {x, y: |x − y| ≤ v}, then P(V ≤ v) = ∫∫_C f(x, y) dx dy = 1 − (a − v)²/a².
(iii) By independence, P(W ≤ w) = P(max{X, Y} ≥ a − w) = 1 − (a − w)²/a².

Hence, U, V, and W have the same density:

f(z) = (2/a²)(a − z),  for 0 < z < a.

II: Crofton's Route Let F(a, v) be the distribution of V, and consider F(a + h, v). By conditioning on the three events {both X and Y lie in (0, a)}, {one of X, Y lies in (0, a)}, and {neither of X, Y lie in (0, a)}, we find that F(a + h, v) = F(a, v(