text
stringlengths
270
6.81k
degree. We saw that derivations can be thought of as infinitesimal automorphisms. One can similarly consider k[[t]]-linear maps of the form Φ(a) = a + tφ1(a) + t2φ2(a) + · · · and consider whether they define automorphisms of A ⊗ k[[t]]. Working modulo t2, we have already done this problem — we are just considering automorphisms of A[ε], and we saw that these automorphisms correspond to derivations. Definition (Integrable derivation). We say a derivation is integrable if there is an automorphism of A ⊗ k[[t]] that gives the derivation when we mod t2. In this case, the obstructions are 2-cocycles which are not coboundaries. Theorem (Gerstenhaber). Suppose HH 2(A, A) = 0. Then all derivations are integrable. The proof is an exercise on the third example sheet. We haven’t had many examples so far, because Hochschild cohomology is difficult to compute. But we can indeed do some examples. 61 3 Hochschild homology and cohomology III Algebras Example. Let A = k[x]. Since A is commutative, we have HH 0(A, A) = A. Since A is commutative, A has no inner derivations. So we have HH 1(A, A) = DerA = f (x) d dx : f (x) ∈ k[x]. For any i > 1, we have So we have HH i(A, A) = 0. Dim(A) = 1. We can do this by explicit calculation. complex, we had a short exact sequence If we look at our Hochschild chain 0 ker µ A ⊗ A A 0 (∗) and thus we have a map A ⊗ A ⊗ A d A ⊗ A whose image is ker µ. The point is that ker µ is a projective A-A-bimodule. This will mean that HH i(A, M ) = 0 for i ≥ 2 in the same way we used to show that when AAA is a projective A-A-bimodule
for i ≥ 1. In particular, HH i(A, A) = 0 for i ≥ 2. To show that ker µ is projective, we notice that A ⊗ A = k[X] ⊗k k[X] ∼= k[X, Y ]. So the short exact sequence (∗) becomes 0 (X − Y )k[X, Y ] k[X, Y ] k[X] 0. So (X − Y )k[X, Y ] is a free k[X, Y ] module, and hence projective. We can therefore use our theorems to see that any extension of k[X] by k[X] is split, and any ∗-product is trivial. We also get that any derivation is integrable. Example. If we take A = k[X1, X2], then again this is commutative, and HH 0(A, A) = A HH 1(A, A) = DerA. We will talk about HH 2 later, and similarly HH i(A, A) = 0 for i ≥ 3. From this, we see that we may have star products other than the trivial ones, and in fact we know we have, because we have one arising from the Rees algebra of A1(k). But we know that any infinitesimal deformation yields a star product. So there are much more. 62 3 Hochschild homology and cohomology III Algebras 3.4 Gerstenhaber algebra We now want to understand the equations (†) better. To do so, we consider the graded vector space HH·(A, A) = ∞ HH n(A, A), A=0 as a whole. It turns out this has the structure of a Gerstenhaber algebra The first structure to introduce is the cup product. They are a standard tool in cohomology theories. We will write Sn(A, A) = Homk(A⊗n, A) = HomA-A(A⊗(n+2), AAA). The Hochschild chain complex is then the graded chain complex S·(A, A). Definition (Cup product). The cup product : Sm(A, A) ⊗ Sn(A, A) → Sm+n(A, A) is de�
��ned by (f g)(a1 ⊗ · · · ⊗ am ⊗ b1 ⊗ · · · ⊗ bn) = f (a1 ⊗ · · · ⊗ am) · g(b1 ⊗ · · · ⊗ bn), where ai, bj ∈ A. Under this product, S·(A, A) becomes an associative graded algebra. Observe that δ(f g) = δf g + (−1)mnf δg. So we say δ is a (left) graded derivation of the graded algebra S·(A, A). In homological (graded) algebra, we often use the same terminology but with suitable sign changes which depends on the degree. Note that the cocycles are closed under. So cup product induces a product on HH·(A, A). If f ∈ Sm(A, A) and g ∈ Sn(A, A), and both are cocycles, then (−1)m(g f − (−1)mn(f g)) = δ(f ◦ g), where f ◦ g is defined as follows: we set f ◦i g(a1 ⊗ · · · ⊗ ai−1 ⊗ b1 ⊗ · · · ⊗ bn ⊗ ai+1 · · · ⊗ am) = f (a1 ⊗ · · · ⊗ ai−1 ⊗ g(b1 ⊗ · · · ⊗ bn) ⊗ ai+1 ⊗ · · · ⊗ am). Then we define f ◦ g = m i=1 (−1)(n−1)(i−1)f ◦i g. This product ◦ is not an associative product, but is giving a pre-Lie structure to S·(A, A). Definition (Gerstenhaber bracket). The Gerstenhaber bracket is [f, g] = f ◦ g − (−1)(n+1)(m+1)g ◦ f 63 3 Hochschild homology and cohomology III Algebras This defines a graded Lie algebra structure on the Hochschild chain complex
, It is a grade Lie algebra on but notice that we have a degree shift by 1. S·+1(A, A). Of course, we really should define what a graded Lie algebra is. Definition (Graded Lie algebra). A graded Lie algebra is a vector space L = Li with a bilinear bracket [ ·, · ] : L × L → L such that – [Li, Lj] ⊆ Li+j; – [f, g] − (−1)mn[g, f ]; and – The graded Jacobi identity holds: (−1)mp[[f, g], h] + (−1)mn[[g, h], f ] + (−1)np[[h, f ], g] = 0 where f ∈ Lm, g ∈ Ln, h ∈ Lp. In fact, S·+1(A, A) is a differential graded Lie algebra under the Gerstenhaber bracket. Lemma. The cup product on HH·(A, A) is graded commutative, i.e. f g = (−1)mn(g f ). when f ∈ HH m(A, A) and g ∈ HH n(A, A). Proof. We previously “noticed” that (−1)m(g f − (−1)mn(f g)) = δ(f ◦ g), Definition (Gerstenhaber algebra). A Gerstenhaber algebra is a graded vector space H = H i with H·+1 a graded Lie algebra with respect to a bracket [ ·, · ] : H m × H n → H m+n−1, and an associative product : H m × H n → H m+n which is graded commutative, such that if f ∈ H m, then [f, · ] acts as a degree m − 1 graded derivation of : [f, g h] = [f, g] h + (−1)(m−1)ng [f, h] if g ∈ H n. This is analogous to the definition of a Poisson algebra. We’ve seen that HH·(A, A) is an example of a Gerstenhaber algebra. We can look at what happens in low degrees. We know that H 0 is a commutative
k-algebra, and : H 0 × H 1 → H 1 is a module action. Also, H 1 is a Lie algebra, and [ ·, · ] : H 1 × H 0 → H 0 is a Lie module action, i.e. H 0 gives us a Lie algebra representation of H 1. In other words, 64 3 Hochschild homology and cohomology III Algebras the corresponding map [ ·, · ] : H 1 → Endk(H 0) gives us a map of Lie algebras H 1 → Der(H 0). The prototype Gerstenhaber algebra is the exterior algebra DerA for a commutative algebra A (with A in degree 0). Explicitly, to define the exterior product over A, we first consider the tensor product over A of two A-modules V and W, defined by V ⊗A W = V ⊗k W av ⊗ w − v ⊗ aw The exterior product is then V ∧A V = V ⊗A V v ⊗ v : v ∈ V. The product is given by the wedge, and the Schouten bracket is given by [λ1 ∧ · · · ∧ λm] = (−1)(m−1)(n−1) (−1)i+j[λi, λj] λ1 ∧ · · · λm ith missing ∧ λ 1 ∧ · · · ∧ λ n jth missing. i,j For any Gerstenhaber algebra H = H i, there is a canonical homomorphism of Gerstenhaber algebras H 1 → H. Theorem (Hochschild–Kostant–Ronsenberg (HKR) theorem). If A is a “smooth” commutative k-algebra, and char k = 0, then the canonical map H 0 (DerA) → HH ∗(A, A) A is an isomorphism of Gerstenhaber algebras. We will not say what “smooth” means, but this applies if A = k[X1, · · ·, Xn], or if k = C or R and A is appropriate functions on smooth manifolds or algebraic varieties. In the 1960’s, this was
stated just for the algebra structure, and didn’t think about the Lie algebra. Example. Let A = k[X, Y ], with char k = 0. Then HH 0(A, A) = A and HH 1(A, A) = DerA ∼= p(X, Y ) ∂ ∂y + q(X, Y ) : p, q ∈ A. ∂ ∂Y So we have which is generated as an A-modules by ∂ HH 2(A, A) = DerA ∧A DerA, ∂Y. Then ∂X ∧ ∂ HH i(A, A) = 0 for all i ≥ 3 We can now go back to talk about star products. Recall when we considered possible star products on V ⊗k k[[t]], where V is the underlying vector space of the algebra A. We found that associativity of the star product was encapsulated by some equations (†λ). Collectively, these are equivalent to the statement 65 3 Hochschild homology and cohomology III Algebras Definition (Maurer–Cartan equation). The Maurer–Cartan equation is δf + 1 2 [f, f ]Gerst = 0 f = tλFλ, for the element where F0(a, b) = ab. When we write [ ·, · ]Gerst, we really mean the k[[t]]-linear extension of the Gerstenhaber bracket. If we want to think of things in cohomology instead, then we are looking at things modulo coboundaries. For the graded Lie algebra ·+1(DerA), the Maurer–Cartan elements, i.e. solutions of the Maurer–Cartan equation, are the formal Poisson structures. They are formal power series of the form for πi ∈ DerA ∧ DerA, satisfying Π = tiπi [Π, Π] = 0. There is a deep theorem of Kontsevich from the early 2000’s which implies Theorem (Kontsevich). There is a bijection equivalence classes of star products ←→ classes of formal Poisson structures This applies for smooth algebras in char 0, and in particular for polynomial algebras A = k[X1, · · ·, Xn]. This is a di�
��cult theorem, and the first proof appeared in 2002. An unnamed lecturer once tried to give a Part III course with this theorem as the punchline, but the course ended up lasting 2 terms, and they never reached the punchline. 3.5 Hochschild homology We don’t really have much to say about Hochschild homology, but we are morally obliged to at least write down the definition. To do Hochschild homology, we apply · ⊗A-A M for an A-A-bimodule M to the Hochschild chain complex. · · · d A ⊗k A ⊗k A d A ⊗k A µ A 0., We will ignore the → A → 0 bit. We need to consider what · ⊗A-A · means. If we have bimodules V and W, we can regard V as a right A ⊗ Aop-module. We can also think of W as a left A ⊗ Aop module. We let B = A ⊗ Aop, 66 3 Hochschild homology and cohomology III Algebras and then we just consider V ⊗B W = V ⊗k W vx ⊗ w − v ⊗ xw : w ∈ B = V ⊗k W ava ⊗ w − v ⊗ awa. Thus we have b1 · · · (A ⊗k A ⊗k A) ⊗A-A M b0 (A ⊗k A) ⊗A-A M ∼= M, Definition (Hochschild homology). The Hochschild homology groups are HH0(A, M ) = HHi(A, M ) = M im b0 ker bi−1 im bi for i > 0. A long time ago, we counted the number of simple kG-modules for k alge[A,A], braically closed of characteristic p when G is finite. In the proof, we used A and we pointed out this is HH0(A, A). Lemma. In particular, Proof. Exercise. HH0(A, M ) = M xm − mx : m ∈ M, x ∈ A. HH0(
A, A) = A [A, A]. 67 4 Coalgebras, bialgebras and Hopf algebras III Algebras 4 Coalgebras, bialgebras and Hopf algebras We are almost at the end of the course. So let’s define an algebra. Definition (Algebra). A k-algebra is a k-vector space A and k-linear maps → xy u : k → A λ → λI called the multiplication/product and unit such that the following two diagrams commute: A ⊗ A ⊗ A µ⊗id A ⊗ A k ⊗ A u⊗id A ⊗ A id ⊗µ A ⊗ A µ µ A ∼= µ A A ⊗ k id ⊗u ∼= These encode associativity and identity respectively. Of course, the point wasn’t to actually define an algebra. The point is to define a coalgebra, whose definition is entirely dual. Definition (Coalgebra). A coalgebra is a k-vector space C and k-linear maps ∆ : called comultiplication/coproduct and counit respectively, such that the following diagrams commute: C ⊗ C ⊗ C id ⊗∆ C ⊗ C k ⊗ C ε⊗id C ⊗ C id ⊗ε C ⊗ k ∆⊗id C ⊗ C ∆ C ∆ ∼= µ C ∼= These encode coassociativity and coidentity A morphism of coalgebras f : C → D is a k-linear map such that the following diagrams commute subspace I of C is a co-ideal if ∆(I) ≤ C ⊗ I + I ⊗ C, and ε(I) = 0. In this case, C/I inherits a coproduct and counit. A cocommutative coalgebra is one for which τ ◦ ∆ = ∆, where τ : V ⊗ W → W ⊗ V given by the v ⊗ w → w ⊗ v is the “twist
map”. It might be slightly difficult to get one’s head around what a coalgebra actually is. It, of course, helps to look at some examples, and we will shortly do so. It also helps to know that for our purposes, we don’t really care about coalgebras per se, but things that are both algebras and coalgebras, in a compatible way. 68 4 Coalgebras, bialgebras and Hopf algebras III Algebras There is a very natural reason to be interested in such things. Recall that when doing representation theory of groups, we can take the tensor product of two representations and get a new representation. Similarly, we can take the dual of a representation and get a new representation. If we try to do this for representations (ie. modules) of general algebras, we see that this is not possible. What is missing is that in fact, the algebras kG and U(g) also have the structure of coalgebras. In fact, they are Hopf algebras, which we will define soon. We shall now write down some coalgebra structures on kG and U(g). Example. If G is a group, then kG is a co-algebra, with ∆(g) = g ⊗ g ε λg (g) = λg. We should think of the specification ∆(g) = g ⊗ g as saying that our groups act diagonally on the tensor products of representations. More precisely, if V, W are representations and v ∈ V, w ∈ W, then g acts on v ⊗ w by ∆(g) · (v ⊗ w) = (g ⊗ g) · (v ⊗ w) = (gv) ⊗ (gw). Example. For a Lie algebra g over k, the universal enveloping algebra U(g) is a co-algebra with ∆(x) = x ⊗ 1 + 1 ⊗ x for x ∈ g, and we extend this by making it an algebra homomorphism. To define ε, we note that elements of U(g) are uniquely of the form λ + λi
1,...,in xi1 1 · · · xin n, where {xi} is a basis of g (the PBW theorem). Then we define λ + ε λi1,...,in xi1 1 · · · xin n = λ. This time, the specification of ∆ is telling us that if X ∈ g and v, w are elements of a representation of g, then X acts on the tensor product by ∆(X) · (v ⊗ w) = Xv ⊗ w + v ⊗ Xw. Example. Consider O(Mn(k)) = k[Xij : 1 ≤ i, j ≤ n], the polynomial functions on n × n matrices, where Xij denotes the ijth entry. Then we define n ∆(Xij) = Xi ⊗ Xj, and These are again algebra maps. i=1 ε(Xij) = δij. 69 4 Coalgebras, bialgebras and Hopf algebras III Algebras We can also talk about O(GLn(k)) and O(SLn(k)). The formula of the determinant gives an element D ∈ O(Mn(k)). Then O(GLn(k)) is given by adding a formal inverse to D in O(GLn(k)), and O(SLn(k)) is obtained by quotienting out O(GLn(k)) by the bi-ideal D − 1. From an algebraic geometry point of view, these are the coordinate algebra of the varieties Mn(k), GLn(k) and SLn(k). This is dual to matrix multiplication. We have seen that we like things that are both algebras and coalgebras, compatibly. These are known as bialgebras. Definition (Bialgebra). A bialgebra is a k-vector space B and maps µ, υ, ∆, ε such that (i) (B, µ, u) is an algebra. (ii) (B, ∆, ε) is a coalgebra. (iii) ∆ and ε are algebra morphisms. (iv) µ and u are coalgebra morphisms. Being a bial
gebra means we can take tensor products of modules and still get modules. If we want to take duals as well, then it turns out the right notion is that of a Hopf algebra: Definition (Hopf algebra). A bialgebra (H, µ, u, ∆, ε) is a Hopf algebra if there is an antipode S : H → H that is a k-linear map such that µ ◦ (S ⊗ id) ◦ ∆ = µ ◦ (id ⊗S) ◦ ∆ = u ◦ ε. Example. kG is a Hopf algebra with S(g) = g−1. Example. U(g) is a Hopf algebra with S(x) = −x for x ∈ U(g). Note that our examples are all commutative or co-commutative. The term quantum groups usually refers to a non-commutative non-co-commutative Hopf algebras. These are neither quantum nor groups. As usual, we write V ∗ for Homk(V, k), and we note that if we have α : V → W, then this induces a dual map α∗ : W ∗ → V ∗. Lemma. If C is a coalgebra, then C ∗ is an algebra with multiplication ∆∗ (that is, ∆∗|C∗⊗C∗ ) and unit ε∗. If C is co-commutative, then C ∗ is commutative. However, if an algebra A is infinite dimensional as a k-vector space, then A∗ may not be a coalgebra. The problem is that (A∗ ⊗ A∗) is a proper subspace of (A ⊗ A)∗, and µ∗ of an infinite dimensional A need not take values in A∗ ⊗ A∗. However, all is fine for finite dimensional A, or if A is graded with finite dimensional components, where we can form a graded dual. In general, for a Hopf algebra H, one can define the Hopf dual, H 0 = {f ∈ H ∗ : ker f contains an ideal of �
�nite codimension}. 70 4 Coalgebras, bialgebras and Hopf algebras III Algebras Example. Let G be a finite group. Then (kG)∗ is a commutative non-cocommutative Hopf algebra if G is non-abelian. Let {g} be the canonical basis for kG, and {φg} be the dual basis of (kG)∗. Then ∆(φg) = h1h2=g φh1 ⊗ φh2. There is an easy way of producing non-commutative non-co-commutative Hopf algebras — we take a non-commutative Hopf algebra and a non-co-commutative Hopf algebra, and take the tensor product of them, but this is silly. The easiest non-trivial example of a non-commutative non-co-commutative Hopf algebra is the Drinfeld double, or quantum double, which is a general construction from a finite dimensional hopf algebra. Definition (Drinfeld double). Let G be a finite group. We define D(G) = (kG)∗ ⊗k kG as a vector space, and the algebra structure is given by the crossed product (kG)∗ G, where G acts on (kG)∗ by Then the product is given by f g(x) = f (gxg−1). (f1 ⊗ g1)(f2 ⊗ g2) = f1f g−1 1 2 ⊗ g1g2. The coalgebra structure is the tensor of the two coalgebras (kG)∗ and kG, with ∆(φg ⊗ h) = g1g2=g φg1 ⊗ h ⊗ φg2 ⊗ h. D(G) is quasitriangular, i.e. there is an invertible element R of D(G) ⊗ D(G) such that R∆(x)R−1 = τ (∆(x)), where τ is the twist map. This is given by R = R
1 = g g (φg ⊗ 1) ⊗ (1 ⊗ g) (φg ⊗ 1) ⊗ (1 ⊗ g−1). The equation R∆R−1 = τ ∆ results in an isomorphism between U ⊗ V and V ⊗ U for D(G)-bimodules U and V, given by flip follows by the action of R. If G is non-abelian, then this is non-commutative and non-co-commutative. The point of defining this is that the representations of D(G) correspond to the G-equivariant k-vector bundles on G. As we said, this is a general construction. Theorem (Mastnak, Witherspoon (2008)). The bialgebra cohomology H· bi(H, H) for a finite-dimensional Hopf algebra is equal to HH·(D(H), k), where k is the trivial module, and D(H) is the Drinfeld double. 71 4 Coalgebras, bialgebras and Hopf algebras III Algebras In 1990, Gerstenhaber and Schack defined bialgebra cohomology, and proved results about deformations of bialgebras analogous to our results from the previous chapter for algebras. In particular, one can consider infinitesimal deformations, and up to equivalence, these correspond to elements of the 2nd cohomology group. There is also the question as to whether an infinitesimal deformation is integrable to give a bialgebra structure on V ⊗ k[[t]], where V is the underlying vector space of the bialgebra. Theorem (Gerstenhaber–Schack). Every deformation is equivalent to one where the unit and counit are unchnaged. Also, deformation preserves the existence of an antipode, though it might change. Theorem (Gerstenhaber–Schack). All deformations of O(Mn(k)) or O(SLn(k)) are equivalent to one in which the comultiplication is unchanged. We nwo try to deform O(M2(k)). By the previous theorems, we only
have to change the multiplication. Consider Oq(M2(k)) defined by X12X11 = qX11X12 X22X12 = qX12X22 X21X11 = qX11X21 X22X21 = qX21X22 X21X12 = X12X21 X11X22 − X22X11 = (q−1 − q)X12X21. We define the quantum determinant = X11X22 − q−1X12X21 = X22X11 − qX12X21. det q Then Then we define ∆(det q ) = det q, ⊗ det q ε(det q ) = 1. O(SL2(k)) = O(M2(k)) (detq −1), where we are quotienting by the 2-sided ideal. It is possible to define an antipode, given by S(X11) = X22 S(X12) = −qX12 S(X21) = −q−1X21 S(X22) = X11, and this gives a non-commutative and non-co-commutative Hopf algebra. This is an example that we pulled out of a hat. But there is a general construction due to Faddeev–Reshetikhin–Takhtajan (1988) via R-matrices, which are a way of producing a k-linear map V ⊗ V → V ⊗ V, 72 4 Coalgebras, bialgebras and Hopf algebras III Algebras where V is a fintie-dimesnional vector space. We take a basis e1, · · ·, en of V, and thus a basis e1 ⊗ ej of V ⊗ V. We write Rm ij for the matrix of R, defined by R(ei ⊗ ej) = Rm ij e ⊗ em.,m The rows are indexed by pairs (, m), and the columns by pairs (i, j), which are put in lexicographic order. The action of R on V ⊗ V induces 3 different actions on V ⊗ V ⊗ V. For
s, t ∈ {1, 2, 3}, we let Rst be the invertible map which acts like R on the sth and tth components, and identity on the other. So for example, R12(e1 ⊗ e2 ⊗ v) = m i,j e ⊗ em ⊗ v. Definition (Yang–Baxter equation). R satisfies the quantum Yang–Baxter equation (QYBE ) if and the braided form of QYBE (braid equation) if R12R13R23 = R23R13R12 R12R23R12 = R23R12R23. Note that R satisfies QYBE iff Rτ satisfies the braid equation. Solutions to either case are R-matrices. Example. The identity map and the twist map τ satisfies both. Take V to be 2-dimensional, and R to be the map Rm ij =  −, where q = 0 ∈ K. Thus, we have R(e1 ⊗ e1) = qe1 ⊗ e2 R(e2 ⊗ e1) = e2 ⊗ e1 R(e1 ⊗ e2) = e1 ⊗ e2 + (q − q−1)e2 ⊗ e1 R(e2 ⊗ e2) = qe2 ⊗ e2, and this satisfies QYBE. Similarly, (Rτ )m ij =  − satisfies the braid equation. We now define the general construction. 73 4 Coalgebras, bialgebras and Hopf algebras III Algebras Definition (R-symmetric algebra). Given the tensor algebra T (V ) = ∞ n=0 V ⊗n, we form the R-symmetric algebra SR(V ) = T (V ) z − R(z) : z ∈ V ⊗ V. Example. If R is the identity, then SR(V ) = T (V ). Example. If R = τ, then SR(V ) is the usual symmetric algebra.
Example. The quantum plane Oq(k2) can be written as SR(V ) with R(e1 ⊗ e2) = qe2 ⊗ e1 R(e1 ⊗ e1) = e1 ⊗ e1 R(e2 ⊗ e1) = q−1e1 ⊗ e2 R(e2 ⊗ e2) = e2 ⊗ e2. Generally, given a V which is finite-dimensional as a vector space, we can identify (V ⊗ V )∗ with V ∗ ⊗ V ∗. We set E = V ⊗ V ∗ ∼= Endk(V ) ∼= Mn(k). We define R13 and R∗ 24 : E ⊗ E → E ⊗ E, where R13 acts like R on terms 1 and 3 in ∗, and 24 acts like R∗ on terms 2 and 4. identity on the rest; R∗ Definition (Coordinate algebra of quantum matrices). The coordinate algebra of quantum matrices associated with R is R13(z) − R∗ T (E) 24(z) : z ∈ E ⊗ E = SR(E), where T = R∗ 24R−1 13. The coalgebra structure remains the same as O(Mn(k)), and for the antipode, we write E1 for the image of e1 in SR(V ), and similarly Fj for fj. Then we map E1 → Fj → n j=1 n i=1 Xij ⊗ Ej Fi ⊗ Xij. This is the general construction we are up for. Example. We have for Oq(M2(k)) = ARτ (V ) Rm ij =  −, 74 Index Index 2-cocycle integrable, 60 2-cocycle condition, 19 A M, 55 A[ε], 55 Aop, 7 An, 31 An(k), 31 Fi, 59 G-graded algebra, 18 J(A), 8 K0, 27 R-symmetric algebra, 74 GK-dim(M ), 39 HH n(A, M ), 52 Z-filtered algebra, 32 Z-graded algebra, 33 gr M,
Gelfand-Kirillov dimension, 39 Gerstenhaber algebra, 64 Gerstenhaber bracket, 63 Goldie rank, 48 Goldie’s theorem, 5, 49 graded algebra, 33 homogeneous components, 33 graded derivation, 63 graded ideal, 33 graded Jacobi identity, 64 graded Lie algebra, 64 group algebra, 15 Hattori–Stallings trace map, 28 Hilbert basis theorem, 30 Hilbert polynomial, 39 Hilbert-Serre theorem, 37 HKR theorem, 65 Hochschild chain complex, 51 Hochschild cochain complex, 51 Hochschild cohomology groups, 52 Hochschild homology, 67 Hochschild–Kostant–Ronsenberg ideal, 3 graded, 33 prime, 7 idempotent orthogonal, 24 primitive, 24 identity, 68 indecomposable, 21 infinitesimal deformations, 60 injective envelope, 45 injective hull, 45 injective module, 43 inner derivations, 55 integrable derivation, 61 integrable 2-cocycle, 60 irreducible module, 7 Jacobi identity graded, 64 Jacobson radical, 8 Krull–Schmidt theorem, 22 left regular representation, 6 Leibnitz rule, 36 Lie algebra graded, 64 universal enveloping algebra, 32 local algebra, 21 Maschke’s theorem, 15 Maurer–Cartan equation, 66 module, 6 injective, 43 irreducible, 7 projective, 19 simple, 7 uniform, 46 morphism, 68 coalgebras, 68 multiplication, 68 multiplicity, 40 theorem, 65 Nakayama lemma, 8 76 Index III Algebras Noetherian algebra, 4, 30 obstruction, 60 opposite algebra, 7 orthogonal idempotents, 24 Poincar´e series, 37 Poincar´e-Birkhoff-Witt theorem, 34 Poisson algebra, 36 positive filtration, 32 pre-Lie structure, 63 prime ideal, 7 primitive idempotent, 24 product, 68 projective module, 19 projective resolution, 50 quantum double, 71 quantum groups, 70 quantum plane, 35, 74 quantum torus, 35 quantum Yang–Baxter equation, 73 quasitriangular, 71 QYBE, 73 radical, 11 Rees algebra, 33 resolution free, 50 projective, 50 Samuel polynomial, 39 Schouten bracket,
65 Schur’s lemma, 13 semi-direct product, 55 semisimple algebra, 9 separable algebra, 53 separated filtration, 32 separating idempotent, 53 simple algebra, 9 simple module, 7 skew fields, 3 split extension, 56 star product, 59 equivalence, 60 trivial, 60 submodule essential, 44 trace of projective, 28 trivial star product, 60 uniform dimension, 48 uniform module, 46 unique decomposition property, 22 unit, 68 universal enveloping algebra, 4, 32 Weyl algebra, 4, 31 77> 0)(∀y ∈ A) |y − a| < δ ⇒ |f (y) − f (a)| < ε. Intuitively, f is continuous at a if we can obtain f (a) as accurately as we wish by using more accurate values of a (the definition says that if we want to approximate f (a) by f (y) to within accuracy ε, we just have to get our y to within δ of a for some δ). For example, suppose we have the function f (x. Suppose that we don’t know what the function actually is, but we have a computer program that computes this function. We want to know what f (π) is. Since we cannot input π (it has infinitely many digits), we can try 3, and it gives 0. Then we try 3.14, and it gives 0 again. If we try 3.1416, it gives 1 (since π = 3.1415926 · · · < 3.1416). We keep giving more and more digits of π, but the result keeps oscillating between 0 and 1. We have no hope of what f (π) might be, even approximately. So this f is discontinuous at π. However, if we have the function g(x) = x2, then we can find the (approximate) value of g(π). We can first try g(3) and obtain 9. Then we can try g(3.14) = 9.8596, g(3.1416) = 9.86965056 etc. We can keep trying and obtain more and more accurate values of g(π). So g is continuous at π. Example.
– Constant functions are continuous. – The function f (x) = x is continuous (take δ = ε). The definition of continuity of a function looks rather like the definition of convergence. In fact, they are related by the following lemma: Lemma. The following two statements are equivalent for a function f : A → R. – f is continuous – If (an) is a sequence in A with an → a, then f (an) → f (a). Proof. (i)⇒(ii) Let ε > 0. Since f is continuous at a, (∃δ > 0)(∀y ∈ A) |y − a| < δ ⇒ |f (y) − f (a)| < ε. 23 4 Continuous functions IA Analysis I We want N such that ∀n ≥ N, |f (an) − f (a)| < ε. By continuity, it is enough to find N such that ∀n ≥ N, |an − a| < δ. Since an → a, such an N exists. (ii)⇒(i) We prove the contrapositive: Suppose f is not continuous at a. Then (∃ε > 0)(∀δ > 0)(∃y ∈ A) |y − a| < δ and |f (y) − f (a)| ≥ ε. For each n, we can therefore pick an ∈ A such that |an − a| < 1 f (a)| ≥ ε. But then an → a (by Archimedean property), but f (an) → f (a). n and |f (an) − Example. (i) Let f (x) = −1 = f (0). f (− 1. Then f is not continuous because − 1 n → 0 but (ii) Let f : Q → R with f (x) = 1 x2 > 2 0 x2 < 2 Then f is continuous. For every a ∈ Q, we can find an interval about a on which f is constant. So f is continuous at a. (iii) Let f (x) = sin 1 0 x x = 0 x = 0 Then f (a) is discontinuous. For example, let an = 1/[(2n + 0.5)π
]. Then an → 0 and f (an) → 1 = f (0). We can use this sequence definition as the definition for continuous functions. This has the advantage of being cleaner to write and easier to work with. In particular, we can reuse a lot of our sequence theorems to prove the analogous results for continuous functions. Lemma. Let A ⊆ R and f, g : A → R be continuous functions. Then (i) f + g is continuous (ii) f g is continuous (iii) if g never vanishes, then f /g is continuous. Proof. (i) Let a ∈ A and let (an) be a sequence in A with an → a. Then (f + g)(an) = f (an) + g(an). But f (an) → f (a) and g(an) → g(a). So f (an) + g(an) → f (a) + g(a) = (f + g)(a). (ii) and (iii) are proved in exactly the same way. 24 4 Continuous functions IA Analysis I With this lemma, from the fact that constant functions and f (x) = x are continuous, we know that all polynomials are continuous. Similarly, rational functions P (x)/Q(x) are continuous except when Q(x) = 0. Lemma. Let A, B ⊆ R and f : A → B, g : B → R. Then if f and g are continuous, g ◦ f : A → R is continuous. Proof. We offer two proofs: (i) Let (an) be a sequence in A with an → a ∈ A. Then f (an) → f (a) since f is continuous. Then g(f (an)) → g(f (a)) since g is continuous. So g ◦ f is continuous. (ii) Let a ∈ A and ε > 0. Since g is continuous at f (a), there exists η > 0 such that ∀z ∈ B, |z − f (a)| < η ⇒ |g(z) − g(f (a))| < ε. Since f is continuous at a, ∃δ > 0 such that ∀y ∈ A, |y − a| < �
� ⇒ |f (y) − f (a)| < η. Therefore |y − a| < δ ⇒ |g(f (y)) − g(f (a))| < ε. There are two important theorems regarding continuous functions — the maximum value theorem and the intermediate value theorem. Theorem (Maximum value theorem). Let [a, b] be a closed interval in R and let f : [a, b] → R be continuous. Then f is bounded and attains its bounds, i.e. f (x) = sup f for some x, and f (y) = inf f for some y. Proof. If f is not bounded above, then for each n, we can find xn ∈ [a, b] such that f (xn) ≥ n for all n. By Bolzano-Weierstrass, since xn ∈ [a, b] and is bounded, the sequence (xn) has a convergent subsequence (xnk ). Let x be its limit. Then since f is continuous, f (xnk ) → f (x). But f (xnk ) ≥ nk → ∞. So this is a contradiction. Now let C = sup{f (x) : x ∈ [a, b]}. Then for every n, we can find xn n. So by Bolzano-Weierstrass, (xn) has a convergent ≤ f (xnk ) ≤ C, f (xnk ) → C. Therefore if such that f (xn) ≥ C − 1 subsequence (xnk ). Since C − 1 nk x = lim xnk, then f (x) = C. A similar argument applies if f is unbounded below. Theorem (Intermediate value theorem). Let a < b ∈ R and let f : [a, b] → R be continuous. Suppose that f (a) < 0 < f (b). Then there exists an x ∈ (a, b) such that f (x) = 0. Proof. We have several proofs: (i) Let A = {x : f (x) < 0} and let s = sup A. We shall show that f (s) = 0 (this is similar to the proof that If f (s) < 0, then setting �
� = |f (s)| in the definition of continuity, we can find δ > 0 such that ∀y, |y − s| < δ ⇒ f (y) < 0. Then s + δ/2 ∈ A, so s is not an upper bound. Contradiction. 2 exists in Numbers and Sets). √ If f (s) > 0, by the same argument, we can find δ > 0 such that ∀y, |y − s| < δ ⇒ f (y) > 0. So s − δ/2 is a smaller upper bound. (ii) Let a0 = a, b0 = b. By repeated bisection, construct nested intervals and f (an) < 0 ≤ f (bn). Then by the [an, bn] such that bn − an = b0−a0 2n 25 4 Continuous functions IA Analysis I nested intervals property, we can find x ∈ ∩∞ an, bn → x. Since f (an) < 0 for every n, f (x) ≤ 0. Similarly, since f (bn) ≥ 0 for every n, f (x) ≥ 0. So f (x) = 0. n=0[an, bn]. Since bn − an → 0, It is easy to generalize this to get that, if f (a) < c < f (b), then ∃x ∈ (a, b) such that f (x) = c, by applying the result to f (x) − c. Also, we can assume instead that f (b) < c < f (a) and obtain the same result by looking at −f (x). Corollary. Let f : [a, b] → [c, d] be a continuous strictly increasing function with f (a) = c, f (b) = d. Then f is invertible and its inverse is continuous. Proof. Since f is strictly increasing, it is an injection (suppose x = y. wlog, x < y. Then f (x) < f (y) and so f (x) = f (y)). Now let y ∈ (c, d). By the intermediate value theorem, there exists x ∈ (a, b) such that f (x) = y
. So f is a surjection. So it is a bijection and hence invertible. Let g be the inverse. Let y ∈ [c, d] and let ε > 0. Let x = g(y). So f (x) = y. Let u = f (x − ε) and v = f (x + ε) (if y = c or d, make the obvious adjustments). Then u < y < v. So we can find δ > 0 such that (y − δ, y + δ) ⊆ (u, v). Then |z − y| < δ ⇒ g(z) ∈ (x − ε, x + ε) ⇒ |g(z) − g(y)| < ε. With this corollary, we can create more continuous functions, e.g. √ x. 4.2 Continuous induction* Continuous induction is a generalization of induction on natural numbers. It provides an alternative mechanism to prove certain results we have shown. Proposition (Continuous induction v1). Let a < b and let A ⊆ [a, b] have the following properties: (i) a ∈ A (ii) If x ∈ A and x = b, then ∃y ∈ A with y > x. (iii) If ∀ε > 0, ∃y ∈ A : y ∈ (x − ε, x], then x ∈ A. Then b ∈ A. Proof. Since a ∈ A, A = ∅. A is also bounded above by b. So let s = sup A. Then ∀ε > 0, ∃y ∈ A such that y > s − ε. Therefore, by (iii), s ∈ A. If s = b, then by (ii), we can find y ∈ A such that y > s. It can also be formulated as follows: Proposition (Continuous induction v2). Let A ⊆ [a, b] and suppose that (i) a ∈ A (ii) If [a, x] ⊆ A and x = b, then there exists y > x such that [a, y] ⊆ A. (iii) If [a, x) ⊆ A, then [a, x] ⊆ A. Then A
= [a, b] 26 4 Continuous functions IA Analysis I Proof. We prove that version 1 ⇒ version 2. Suppose A satisfies the conditions of v2. Let A = {x ∈ [a, b] : [a, x] ⊆ A}. Then a ∈ A. If x ∈ A with x = b, then [a, x] ⊆ A. So ∃y > x such that [a, y] ⊆ A. So ∃y > x such that y ∈ A. If ∀ε > 0, ∃y ∈ (x − ε, x] such that [a, y] ⊆ A, then [a, x) ⊆ A. So by (iii), [a, x] ⊆ A, so x ∈ A. So A satisfies properties (i) to (iii) of version 1. Therefore b ∈ A. So [a, b] ⊆ A. So A = [a, b]. We reprove intermediate value theorem here: Theorem (Intermediate value theorem). Let a < b ∈ R and let f : [a, b] → R be continuous. Suppose that f (a) < 0 < f (b). Then there exists an x ∈ (a, b) such that f (x) = 0. Proof. Assume that f is continuous. Suppose f (a) < 0 < f (b). Assume that (∀x) f (x) = 0, and derive a contradiction. Let A = {x : f (x) < 0} Then a ∈ A. If x ∈ A, then f (x) < 0, and by continuity, we can find δ > 0 such that |y − x| < δ ⇒ f (y) < 0. So if x = b, then we can find y ∈ A such that y > x. We prove the contrapositive of the last condition, i.e. x ∈ A ⇒ (∃δ > 0)(∀y ∈ A) y ∈ (x − δ, x]. If x ∈ A, then f (x) > 0 (we assume that f is never zero. If not, we’re done). Then by continuity,
∃δ > 0 such that |y − x| < δ ⇒ f (y) > 0. So y ∈ A. Hence by continuous induction, b ∈ A. Contradiction. Now we prove that continuous functions in closed intervals are bounded. Theorem. Let [a, b] be a closed interval in R and let f : [a, b] → R be continuous. Then f is bounded. Proof. Let f : [a, b] be continuous. Let A = {x : f is bounded on [a, x]}. Then a ∈ A. If x ∈ A, x = b, then ∃δ > 0 such that |y − x| < δ ⇒ |f (y) − f (x)| < 1. So ∃y > x (e.g. take min{x + δ/2, b}) such that f is bounded on [a, y], which implies that y ∈ A. Now suppose that ∀ε > 0, ∃y ∈ (x, −ε, x] such that y ∈ A. Again, we can find δ > 0 such that f is bounded on (x − δ, x + δ), and in particular on (x − δ, x]. Pick y such that f is bounded on [a, y] and y > x − δ. Then f is bounded on [a, x]. So x ∈ A. So we are done by continuous induction. Finally, we can prove a theorem that we have not yet proven. Definition (Cover of a set). Let A ⊆ R. A cover of A by open intervals is a set {Iγ : γ ∈ Γ} where each Iγ is an open interval and A ⊆ γ∈Γ Iγ. A finite subcover is a finite subset {Iγ1, · · ·, Iγn } of the cover that is still a cover. Not every cover has a finite subcover. For example, the cover {( 1 n, 1) : n ∈ N} of (0, 1) has no finite subcover. Theorem (Heine-Borel*). Every cover of a closed, bounded interval [a,
b] by open intervals has a finite subcover. We say closed intervals are compact (cf. Metric and Topological Spaces). 27 4 Continuous functions IA Analysis I Proof. Let {Iγ : γ ∈ Γ} be a cover of [a, b] by open intervals. Let A = {x : [a, x] can be covered by finitely many of the Iγ}. Then a ∈ A since a must belong to some Iγ. If x ∈ A, then pick γ such that x ∈ Iγ. Then if x = b, since Iγ is an open interval, it contains [x, y] for some y > x. Then [a, y] can be covered by finitely many Iγ, by taking a finite cover for [a, x] and the Iγ that contains x. Now suppose that ∀ε > 0, ∃y ∈ A such that y ∈ (x − ε, x]. Let Iγ be an open interval containing x. Then it contains (x − ε, x] for some ε > 0. Pick y ∈ A such that y ∈ (x−ε, x]. Now combine Iγ with a finite subcover of [a, y] to get a finite subcover of [a, x]. So x ∈ A. Then done by continuous induction. We can use Heine-Borel to prove that continuous functions on [a, b] are bounded. Theorem. Let [a, b] be a closed interval in R and let f : [a, b] → R be continuous. Then f is bounded and attains it bounds, i.e. f (x) = sup f for some x, and f (y) = inf f for some y. Proof. Let f : [a, b] → R be continuous. Then by continuity, (∀x ∈ [a, b])(∃δx > 0)(∀y) |y − x| < δx ⇒ |f (y) − f (x)| < 1. Let γ = [a, b] and for each x ∈ γ, let Ix = (x − δx, x + δx). So by Heine-Borel
, we can find x1, · · ·, xn such that [a, b] ⊆ n 1 (xi − δxi, xi + δxi ). But f is bounded in each interval (xi − δxi, xi + δxi) by |f (xi)| + 1. So it is bounded on [a, b] by max |f (xi)| + 1. 28 5 Differentiability IA Analysis I 5 Differentiability In the remainder of the course, we will properly develop calculus, and put differentiation and integration on a rigorous foundation. Every notion will be given a proper definition which we will use to prove results like the product and quotient rule. 5.1 Limits First of all, we need the notion of limits. Recall that we’ve previously had limits for sequences. Now, we will define limits for functions. Definition (Limit of functions). Let A ⊆ R and let f : A → R. We say or “f (x) → as x → a”, if lim x→a f (x) =, (∀ε > 0)(∃δ > 0)(∀x ∈ A) 0 < |x − a| < δ ⇒ |f (x) − | < ε. We couldn’t care less what happens when x = a, hence the strict inequality 0 < |x − a|. In fact, f doesn’t even have to be defined at x = a. Example. Let f (x Then lim x→2 = 2, even though f (2) = 3. Example. Let f (x) = sin x x. Then f (0) is not defined but lim x→0 f (x) = 1. We will see a proof later after we define what sin means. We notice that the definition of the limit is suspiciously similar to that of continuity. In fact, if we define g(x) = f (x) x = a x = a Then f (x) → as x → a iff g is continuous at a. Alternatively, f is continuous at a if f (x) → f (a)
as x → a. It follows also that f (x) → as x → a iff f (xn) → for every sequence (xn) in A with xn → a. The previous limit theorems of sequences apply here as well Proposition. If f (x) → and g(x) → m as x → a, then f (x) + g(x) → + m, f (x)g(x) → m, and f (x) m if g and m don’t vanish. g(x) → 5.2 Differentiation Similar to what we did in IA Differential Equations, we define the derivative as a limit. 29 5 Differentiability IA Analysis I Definition (Differentiable function). f is differentiable at a with derivative λ if lim x→a f (x) − f (a) x − a = λ. lim h→0 f (a + h) − f (a) h = λ. Equivalently, if We write λ = f (a). Here we see why, in the definition of the limit, we say that we don’t care what happens when x = a. In our definition here, our function is 0/0 when x = a, and we can’t make any sense out of what happens when x = a. Alternatively, we write the definition of differentiation as f (x + h) − f (x) h = f (x) + ε(h), where ε(h) → 0 as h → 0. Rearranging, we can deduce that f (x + h) = f (x) + hf (x) + hε(h), Note that by the definition of the limit, we don’t have to care what value ε takes when h = 0. It can be 0, π or 101010. However, we usually take ε(0) = 0 so that ε is continuous. Using the small-o notation, we usually write o(h) for a function that satisfies o(h) h → 0 as h → 0. Hence we have
Proposition. f (x + h) = f (x) + hf (x) + o(h). We can interpret this as an approximation of f (x + h): f (x + h) = f (x) + hf (x) linear approximation + o(h) error term. And differentiability shows that this is a very good approximation with small o(h) error. Conversely, we have Proposition. If f (x + h) = f (x) + hf (x) + o(h), then f is differentiable at x with derivative f (x). Proof. f (x + h) − f (x) h = f (x) + o(h) h → f (x). We can take derivatives multiple times, and get multiple derivatives. Definition (Multiple derivatives). This is defined recursively: f is (n + 1)times differentiable if it is n-times differentiable and its nth derivative f (n) is differentiable. We write f (n+1) for the derivative of f (n), i.e. the (n + 1)th derivative of f. Informally, we will say f is n-times differentiable if we can differentiate it n times, and the nth derivative is f (n). 30 5 Differentiability IA Analysis I We can prove the usual rules of differentiation using the small o-notation. It can also be proven by considering limits directly, but the notation will become a bit more daunting. Lemma (Sum and product rule). Let f, g be differentiable at x. Then f + g and f g are differentiable at x, with (f + g)(x) = f (x) + g(x) (f g)(x) = f (x)g(x) + f (x)g(x) Proof. (f + g)(x + h) = f (x + h) + g(x + h) = f (x) + hf (x) + o(h) + g(x) + hg(x) + o(h) = (f + g)(x) + h(
f (x) + g(x)) + o(h) f g(x + h) = f (x + h)g(x + h) = [f (x) + hf (x) + o(h)][g(x) + hg(x) + o(h)] = f (x)g(x) + h[f (x)g(x) + f (x)g(x)] + o(h)[g(x) + f (x) + hf (x) + hg(x) + o(h)] + h2f (x)g(x) error term By limit theorems, the error term is o(h). So we can write this as = f g(x) + h(f (x)g(x) + f (x)g(x)) + o(h). Lemma (Chain rule). If f is differentiable at x and g is differentiable at f (x), then g ◦ f is differentiable at x with derivative g(f (x))f (x). Proof. If one is sufficiently familiar with the small-o notation, then we can proceed as g(f (x + h)) = g(f (x) + hf (x) + o(h)) = g(f (x)) + hf (x)g(f (x)) + o(h). If not, we can be a bit more explicit about the computations, and use hε(h) instead of o(h): (g ◦ f )(x + h) = g(f (x + h)) = g[f (x) + hf (x) + hε1(h) ] the “h” term = g(f (x)) + f g(x) + hε1(h)g(f (x)) + hf (x) + hε1(h)ε2(hf (x) + hε1(h)) = g ◦ f (x) + hg(f (x))f (x) ε1(h)g(f (x)) + f (x) + ε1(h)ε2 + h error term hf (x) + hε1
(h). We want to show that the error term is o(h), i.e. it divided by h tends to 0 as h → 0. But ε1(h)g(f (x)) → 0, f (x)+ε1(h) is bounded, and ε2(hf (x)+hε1(h)) → 0 because hf (x) + hε1(h) → 0 and ε2(0) = 0. So our error term is o(h). 31 5 Differentiability IA Analysis I We usually don’t write out the error terms so explicitly, and just use heuristics like f (x + o(h)) = f (x) + o(h); o(h) + o(h) = o(h); and g(x) · o(h) = o(h) for any (bounded) function g. Example. (i) Constant functions are differentiable with derivative 0. (ii) f (x) = λx is differentiable with derivative λ. (iii) Using the product rule, we can show that xn is differentiable with derivative nxn−1 by induction. (iv) Hence all polynomials are differentiable. Example. Let f (x) = 1/x. If x = 0, then f (x + h) − f (x) h = by limit theorems. 1 x+h − 1 h x −h x(x+h) h = = −1 x(x + h) → −1 x2 Lemma (Quotient rule). If f and g are differentiable at x, and g(x) = 0, then f /g is differentiable at x with derivative f g (x) = f (x)g(x) − g(x)f (x) g(x)2. Proof. First note that 1/g(x) = h(g(x)) where h(y) = 1/y. So 1/g(x) is differen−1 g(x)2 g(x) by the chain rule. tiable at x with derivative By the product rule, f /g is differentiable
at x with derivative f (x) g(x) − f (x) g(x) g(x)2 = f (x)g(x) − f (x)g(x) g(x)2. Lemma. If f is differentiable at x, then it is continuous at x. Proof. As y → x, f (y) − f (x) y − x → f (x). Since, y − x → 0, f (y) − f (x) → 0 by product theorem of limits. So f (y) → f (x). So f is continuous at x. Theorem. Let f : [a, b] → [c, d] be differentiable on (a, b), continuous on [a, b], and strictly increasing. Suppose that f (x) never vanishes. Suppose further that f (a) = c and f (b) = d. Then f has an inverse g and for each y ∈ (c, d), g is differentiable at y with derivative 1/f (g(y)). In human language, this states that if f is invertible, then the derivative of f −1 is 1/f. Note that the conditions will (almost) always require f to be differentiable on open interval (a, b), continuous on closed interval [a, b]. This is because it doesn’t make sense to talk about differentiability at a or b since the definition of f (a) requires f to be defined on both sides of a. 32 5 Differentiability IA Analysis I Proof. g exists by an earlier theorem about inverses of continuous functions. Let y, y + k ∈ (c, d). Let x = g(y), x + h = g(y + k). Since g(y + k) = x + h, we have y + k = f (x + h). So k = f (x + h) − y = f (x + h) − f (x). So g(y + k) − g(y) k = (x + h) − x f (x + h) − f (x) = f (x + h) − f (x) h −1. As k →
0, since g is continuous, g(y + k) → g(y). So h → 0. So g(y + k) − g(y) k → [f (x)]−1 = [f (g(y)]−1. Example. Let f (x) = x1/2 for x > 0. Then f is the inverse of g(x) = x2. So f (x) = 1 g(f (x)) = 1 2x1/2 = 1 2 x−1/2. Similarly, we can show that the derivative of x1/q is 1 q x1/q−1. Then let’s take xp/q = (x1/q)p. By the chain rule, its derivative is p(x1/q)p−1 · 1 q x1/q−1 = p q x 5.3 Differentiation theorems p−1 q + 1 q −1 = p q −1. x p q Everything we’ve had so far is something we already know. It’s just that now we can prove them rigorously. In this section, we will come up with genuinely new theorems, including but not limited to Taylor’s theorem, which gives us Taylor’s series. Theorem (Rolle’s theorem). Let f be continuous on a closed interval [a, b] (with a < b) and differentiable on (a, b). Suppose that f (a) = f (b). Then there exists x ∈ (a, b) such that f (x) = 0. It is intuitively obvious: if you move up and down, and finally return to the same point, then you must have changed direction some time. Then f (x) = 0 at that time. Proof. If f is constant, then we’re done. Otherwise, there exists u such that f (u) = f (a). wlog, f (u) > f (a). Since f is continuous, it has a maximum, and since f (u) > f (a) = f (b), the maximum is not attained at a or b. Suppose maximum is attained at x ∈ (a, b). Then for any h = 0, we have f (x + h) − f (x since
f (x + h) − f (x) ≤ 0 by maximality of f (x). By considering both sides as we take the limit h → 0, we know that f (x) ≤ 0 and f (x) ≥ 0. So f (x) = 0. 33 5 Differentiability IA Analysis I Corollary (Mean value theorem). Let f be continuous on [a, b] (a < b), and differentiable on (a, b). Then there exists x ∈ (a, b) such that f (x) = f (b) − f (a) b − a. Note that f (b)−f (a) b−a is the slope of the line joining f (a) and f (b). f (x) f (b) f (a) The mean value theorem is sometimes described as “rotate your head and apply Rolle’s”. However, if we actually rotate it, we might end up with a non-function. What we actually want is a shear. Proof. Let Then g(x) = f (x) − f (b) − f (a) b − a x. g(b) − g(a) = f (b) − f (a) − f (b) − f (a) b − a (b − a) = 0. So by Rolle’s theorem, we can find x ∈ (a, b) such that g(x) = 0. So f (x) = f (b) − f (a) b − a, as required. We’ve always assumed that if a function has a positive derivative everywhere, then the function is increasing. However, it turns out that this is really hard to prove directly. It does, however, follow quite immediately from the mean value theorem. Example. Suppose f (x) > 0 for every x ∈ (a, b). Then for u, v in [a, b], we can find w ∈ (u, v) such that f (v) − f (u) v − u = f (w) > 0. It follows that f (v) > f (u). So f is strictly increasing. Similarly, if f (x) ≥ 2 for every x and f (0) = 0, then f
(1) ≥ 2, or else we can find x ∈ (0, 1) such that 2 ≤ f (x) = f (1) − f (0) 1 − 0 = f (1). 34 5 Differentiability IA Analysis I Theorem (Local version of inverse function theorem). Let f be a function with continuous derivative on (a, b). Let x ∈ (a, b) and suppose that f (x) = 0. Then there is an open interval (u, v) containing x on which f is invertible (as a function from (u, v) to f ((u, v))). Moreover, if g is the inverse, then g(f (z)) = 1 f (z) for every z ∈ (u, v). This says that if f has a non-zero derivative, then it has an inverse locally and the derivative of the inverse is 1/f. Note that this not only requires f to be differentiable, but the derivative itself also has to be continuous. Proof. wlog, f (x) > 0. By the continuity, of f, we can find δ > 0 such that f (z) > 0 for every z ∈ (x − δ, x + δ). By the mean value theorem, f is strictly increasing on (x − δ, x + δ), hence injective. Also, f is continuous on (x − δ, x + δ) by differentiability. Then done by the inverse function theorem. Finally, we are going to prove Taylor’s theorem. To do so, we will first need some lemmas. Theorem (Higher-order Rolle’s theorem). Let f be continuous on [a, b] (a < b) and n-times differentiable on an open interval containing [a, b]. Suppose that f (a) = f (a) = f (2)(a) = · · · = f (n−1)(a) = f (b) = 0. Then ∃x ∈ (a, b) such that f (n)(x) = 0. Proof. Induct on n. The n = 0 base case is just Rolle’s theorem. Suppose we have k < n and xk �
� (a, b) such that f (k)(xk) = 0. Since f (k)(a) = 0, we can find xk+1 ∈ (a, xk) such that f (k+1)(xk+1) = 0 by Rolle’s theorem. So the result follows by induction. Corollary. Suppose that f and g are both differentiable on an open interval containing [a, b] and that f (k)(a) = g(k)(a) for k = 0, 1, · · ·, n − 1, and also f (b) = g(b). Then there exists x ∈ (a, b) such that f (n)(x) = g(n)(x). Proof. Apply generalised Rolle’s to f − g. Now we shall show that for any f, we can find a polynomial p of degree at most n that satisfies the conditions for g, i.e. a p such that p(k)(a) = f (k)(a) for k = 0, 1, · · ·, n − 1 and p(b) = f (b). A useful ingredient is the observation that if then Therefore, if Qk(x) = (x − a)k k!, Q(j) k (a(x) = n−1 k=0 f (k)(a)Qk(x), 35 5 Differentiability IA Analysis I then Q(j)(a) = f (j)(a) for j = 0, 1, · · ·, n − 1. To get p(b) = f (b), we use our nth degree polynomial term: p(x) = Q(x) + (x − a)n (b − a)n f (b) − Q(b). Then our final term does not mess up our first n − 1 derivatives, and gives p(b) = f (b). By the previous corollary, we can find x ∈ (a, b) such that f (n)(x) = p(n)(x). f (n)(x) = n! (b − a)n f (b) − Q(b).
f (b) = Q(b) + (b − a)n n! f (n)(x). That is, Therefore Alternatively, f (b) = f (a) + (b − a)f (a) + · · · + (b − a)n−1 (n − 1)! f (n−1)(a) + (b − a)n n! f (n)(x). Setting b = a + h, we can rewrite this as Theorem (Taylor’s theorem with the Lagrange form of remainder). f (a + h) = f (a) + hf (a) + · · · + hn−1 (n − 1)! f (n−1)(a) + (n−1)-degree approximation to f near a hn f (n)(x) n! error term. for some x ∈ (a, a + h). Strictly speaking, we only proved it for the case when h > 0, but we can easily show it holds for h < 0 too by considering g(x) = f (−x). Note that the remainder term is not necessarily small, but this often gives us the best (n − 1)-degree approximation to f near a. For example, if f (n) is bounded by C near a, then hn n! f (n)(x) ≤ C n! |h|n = o(hn−1). Example. Let f : R → R be a differentiable function such that f (0) = 1 and f (x) = f (x) for every x (intuitively, we know it is ex, but that thing doesn’t exist!). Then for every x, we have f (x) = 1 + x + x2 2! + x3 3! + · · · = ∞ n=0 xn n!. While it seems like we can prove this works by differentiating it and see that f (x) = f (x), the sum rule only applies for finite sums. We don’t know we can differentiate a sum term by term. So we have to use Taylor’s theorem. 36 5 Differentiability IA Analysis I Since f (x) = f (x), it follows that all derivatives exist. By Taylor’s theorem, f (
x) = f (0) + f (0)x + f (2)(0) 2! x2 + · · · + f (n−1)(0) (n − 1)! xn−1 + f (n)(u) n! xn. for some u between 0 and x. This equals to f (x) = n−1 k=0 xk k! + f (n)(u) n! xn. We must show that the remainder term f (n)(u) that x is fixed, but u can depend on n. n! xn → 0 as n → ∞. Note here But we know that f (n)(u) = f (u), but since f is differentiable, it is continuous, and is bounded on [0, x]. Suppose |f (u)| ≤ C on [0, x]. Then f (n)(u) n! xn ≤ C n! |x|n → 0 from limit theorems. So it follows that f (x) = 1 + x + x2 2! + x3 3! + · · · = ∞ n=0 xn n!. 5.4 Complex differentiation Definition (Complex differentiability). Let f : C → C. Then f is differentiable at z with derivative f (z) if lim h→0 f (z + h) − f (z) h exists and equals f (z). Equivalently, f (z + h) = f (z) + hf (z) + o(h). This is exactly the same definition with real differentiation, but has very different properties! All the usual rules — chain rule, product rule etc. also apply (with the same proofs). Also the derivatives of polynomials are what you expect. However, there are some more interesting cases. Example. f (z) = ¯z is not differentiable. z + h − z h = ¯h h = 1 −1 h is purely imaginary h is real If this seems weird, this is because we often think of C as R2, but they are not the same. For example, reflection is a linear map in R2, but not in C
. A linear map in C is something in the form x → bx, which can only be a dilation or rotation, not reflections or other weird things. Example. f (z) = |z| is also not differentiable. If it were, then |z|2 would be as well (by the product rule). So would |z|2 z = ¯z when z = 0 by the quotient rule. At z = 0, it is certainly not differentiable, since it is not even differentiable on R. 37 6 Complex power series IA Analysis I 6 Complex power series Before we move on to integration, we first have a look at complex power series. This will allow us to define the familiar exponential and trigonometric functions. Definition (Complex power series). A complex power series is a series of the form ∞ anzn. n=0 when z ∈ C and an ∈ C for all n. When it converges, it is a function of z. When considering complex power series, a very important concept is the radius of convergence. To make sense of this concept, we first need the following lemma: Lemma. Suppose that anzn converges and |w| < |z|, then anwn converges (absolutely). Proof. We know that |anwn| = |anzn| · Since anzn converges, the terms anzn are bounded. So pick C such that. n w z for every n. Then |anzn| ≤ C 0 ≤ ∞ n=0 |anwn| ≤ ∞ n=0 C n w z, which converges (geometric series). So by the comparison test, anwn converges absolutely. It follows that if anzn does not converge and |w| > |z|, then anwn does not converge. Now let R = sup{|z| : anzn converges } (R may be infinite). If |z| < R, then we can find z0 with |z0| ∈ (|z|, R] such that ∞ 0 converges. So by lemma above, anzn converges. If |z| > R, then anzn diverges by definition of R. n anzn
Definition (Radius of convergence). The radius of convergence of a power series anzn is R = sup |z| : anzn converges. {z : |z| < R} is called the circle of convergence.1. If |z| < R, then anzn converges. If |z| > R, then anzn diverges. When |z| = R, the series can converge at some points and not the others. Example. ∞ n=0 zn has radius of convergence of 1. When |z| = 1, it diverges (since the terms do not tend to 0). 1Note to pedants: yes it is a disc, not a circle 38 6 Complex power series IA Analysis I ∞ n=0 zn n Example. to nth is has radius of convergence 1, since the ratio of (n + 1)th term zn+1/(n + 1) zn/. So if |z| < 1, then the series converges by the ratio test. If |z| > 1, then eventually the terms are increasing in modulus. If z = 1, then it diverges (harmonic series). If |z| = 1 and z = 1, it converges by Abel’s test. Example. The series ∞ n=1 zn n2 converges for |z| ≤ 1 and diverges for |z| > 1. As evidenced by the above examples, the ratio test can be used to find the radius of convergence. We also have an alternative test based on the nth root. Lemma. The radius of convergence of a power series anzn is R = 1 lim sup n|an|. Often n|an| converges, so we only have to find the limit. Proof. Suppose |z| < 1/ lim sup n|an|. Then |z| lim sup n|an| < 1. Therefore there exists N and ε > 0 such that |z| n|an| ≤ 1 − ε sup n≥N by the definition of lim sup. Therefore |anzn| ≤ (1 − ε)n for every n ≥ N, which implies (by comparison with geometric series) that anzn converges absolutely. On the other hand, if |z| lim sup n|an| > 1, it follows that |z| n|
an| ≥ 1 for infinitely many n. Therefore |anzn| ≥ 1 for infinitely many n. So anzn does not converge. Example. The radius of convergence of So lim sup n|an| = 1 2. So 1/ lim sup n|an| = 2. zn 2n is 2 because n|an| = 1 2 for every n. But often it is easier to find the radius convergence from elementary methods such as the ratio test, e.g. for n2zn. 6.1 Exponential and trigonometric functions Definition (Exponential function). The exponential function is ez = ∞ n=0 zn n!. By the ratio test, this converges on all of C. 39 6 Complex power series IA Analysis I A fundamental property of this function is that ez+w = ezew. Once we have this property, we can say that Proposition. The derivative of ez is ez. Proof. But So ez+h − ez h = ez eh − 1 h 1 + = ez h 2! + h2 3! + · · · h 2! + h2 3! + · · · ≤ |h| 2 + |h|2 4 + |h|3 8 + · · · = |h|/2 1 − |h|/2 → 0. ez+h − ez h → ez. But we must still prove that ez+w = ezew. Consider two sequences (an), (bn). Their convolution is the sequence (cn) defined by cn = a0bn + a1bn−1 + a2bn−2 + · · · + anb0. The relevance of this is that if you take N n=0 anzn N n=0 bnzn and N n=0 cnzn, and equate coefficients of zn, you get cn = a0bn + a1bn−1 + a2bn−2 + · · · + anb0. n=0 an and ∞ Theorem. Let ∞ let (cn) be the convolution of the sequences (an) and (bn). Then ∞ converges (absolutely), and n=0 bn be two absolutely convergent series, and n=0 cn ∞ n=
0 ∞ ∞ bn. an cn = n=0 n=0 Proof. We first show that a rearrangement of cn converges absolutely. Hence it converges unconditionally, and we can rearrange it back to cn. Consider the series (a0b0) + (a0b1 + a1b1 + a1b0) + (a0b2 + a1b2 + a2b2 + a2b1 + a2b0) + · · · (∗) Let SN = N n=0 an, TN = N n=0 bn, UN = N n=0 |an|, VN = N n=0 |bn|. 40 6 Complex power series IA Analysis I Also let SN → S, TN → T, UN → U, VN → V (these exist since an and bn converge absolutely). If we take the modulus of the terms of (∗), and consider the first (N + 1)2 terms (i.e. the first N + 1 brackets), the sum is UN VN. Hence the series converges absolutely to U V. Hence (∗) converges. The partial sum up to (N + 1)2 of the series (∗) itself is SN TN, which converges to ST. So the whole series converges to ST. Since it converges absolutely, it converges unconditionally. Now consider a rearrangement: a0b0 + (a0b1 + a1b0) + (a0b2 + a1b1 + a2b0) + · · · Then this converges to ST as well. But the partial sum of the first 1 + 2 + · · · + N terms is c0 + c1 + · · · + cN. So N n=0 cn → ST = ∞ ∞ bn. an n=0 n=0 Corollary. ezew = ez+w. Proof. By theorem above (and definition of ez), ∞ n=0 ∞ ezew = ezew = 1 · wn n! + 1 n! wn + z 1! n 1 wn−1 (n − 1)! + zwn−1 + z2 2! n 2 w
Corollary. $e^z e^w = e^{z+w}$.

Proof. By the theorem above (and the definition of $e^z$),
$$e^z e^w = \sum_{n=0}^\infty \left(1 \cdot \frac{w^n}{n!} + \frac{z}{1!}\cdot\frac{w^{n-1}}{(n-1)!} + \frac{z^2}{2!}\cdot\frac{w^{n-2}}{(n-2)!} + \cdots + \frac{z^n}{n!} \cdot 1\right) = \sum_{n=0}^\infty \frac{1}{n!}\left(w^n + \binom{n}{1} z w^{n-1} + \binom{n}{2} z^2 w^{n-2} + \cdots + z^n\right) = \sum_{n=0}^\infty \frac{(z + w)^n}{n!} = e^{z+w},$$
by the binomial theorem.

Note that if $(c_n)$ is the convolution of $(a_n)$ and $(b_n)$, then the convolution of $(a_n z^n)$ and $(b_n z^n)$ is $(c_n z^n)$. Therefore if both $\sum a_n z^n$ and $\sum b_n z^n$ converge absolutely, then their product is $\sum c_n z^n$.

Note that we have now completed the proof that the derivative of $e^z$ is $e^z$.

Now we define $\sin z$ and $\cos z$:

Definition (Sine and cosine).
$$\sin z = \frac{e^{iz} - e^{-iz}}{2i} = z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \cdots$$
$$\cos z = \frac{e^{iz} + e^{-iz}}{2} = 1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \cdots$$

We now prove certain basic properties of $\sin$ and $\cos$, using known properties of $e^z$.

Proposition.
$$\frac{d}{dz}\sin z = \frac{ie^{iz} + ie^{-iz}}{2i} = \cos z$$
$$\frac{d}{dz}\cos z = \frac{ie^{iz} - ie^{-iz}}{2} = -\sin z$$
$$\sin^2 z + \cos^2 z = \frac{e^{2iz} - 2 + e^{-2iz}}{-4} + \frac{e^{2iz} + 2 + e^{-2iz}}{4} = 1.$$
It follows that if $x$ is real, then $|\cos x|$ and $|\sin x|$ are at most 1.

Proposition.
$$\cos(z + w) = \cos z \cos w - \sin z \sin w$$
$$\sin(z + w) = \sin z \cos w + \cos z \sin w$$

Proof.
$$\cos z \cos w - \sin z \sin w = \frac{(e^{iz} + e^{-iz})(e^{iw} + e^{-iw})}{4} + \frac{(e^{iz} - e^{-iz})(e^{iw} - e^{-iw})}{4} = \frac{e^{i(z+w)} + e^{-i(z+w)}}{2} = \cos(z + w).$$
Differentiating both sides with respect to $z$ gives
$$-\sin z \cos w - \cos z \sin w = -\sin(z + w).$$
So $\sin(z + w) = \sin z \cos w + \cos z \sin w$.
When $x$ is real, we know that $\cos x \leq 1$. Also $\sin 0 = 0$, and $\frac{d}{dx}\sin x = \cos x \leq 1$. So for $x \geq 0$, $\sin x \leq x$, "by the mean value theorem". Also, $\cos 0 = 1$, and $\frac{d}{dx}\cos x = -\sin x$, which, for $x \geq 0$, is greater than $-x$. From this, it follows that when $x \geq 0$, $\cos x \geq 1 - \frac{x^2}{2}$ (the $1 - \frac{x^2}{2}$ comes from "integrating" $-x$, or finding a thing whose derivative is $-x$).

Continuing in this way, we get that for $x \geq 0$, if you truncate the power series for $\sin x$ or $\cos x$, it will be $\geq \sin x, \cos x$ if you stop at a positive term, and $\leq$ if you stop at a negative term. For example,
$$\sin x \geq x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!} - \frac{x^{11}}{11!}.$$
In particular,
$$\cos 2 \leq 1 - \frac{2^2}{2!} + \frac{2^4}{4!} = 1 - 2 + \frac{2}{3} < 0.$$
Since $\cos 0 = 1$, it follows by the intermediate value theorem that there exists some $x \in (0, 2)$ such that $\cos x = 0$. Since $\cos x \geq 1 - \frac{x^2}{2}$, we can further deduce that $x > 1$.

Definition (Pi). Define the smallest $x$ such that $\cos x = 0$ to be $\frac{\pi}{2}$.

Since $\sin^2 z + \cos^2 z = 1$, it follows that $\sin \frac{\pi}{2} = \pm 1$. Since $\cos x > 0$ on $[0, \frac{\pi}{2}]$, $\sin \frac{\pi}{2} \geq 0$ by the mean value theorem. So $\sin \frac{\pi}{2} = 1$.

Proposition.
$$\cos\left(z + \frac{\pi}{2}\right) = -\sin z,\quad \sin\left(z + \frac{\pi}{2}\right) = \cos z,$$
$$\cos(z + \pi) = -\cos z,\quad \sin(z + \pi) = -\sin z,$$
$$\cos(z + 2\pi) = \cos z,\quad \sin(z + 2\pi) = \sin z.$$

Proof.
$$\cos\left(z + \frac{\pi}{2}\right) = \cos z \cos\frac{\pi}{2} - \sin z \sin\frac{\pi}{2} = -\sin z,$$
and similarly for the others.
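Since $\pi/2$ is defined as the smallest zero of $\cos$, and $\cos$ is positive before it and negative just after, we can actually locate it by bisection using nothing but the power series. Here is a small self-contained sketch (plain Python; `cos_series` and the truncation level are our own illustrative choices):

```python
def cos_series(x, terms=30):
    """cos(x) from its power series (fine for modest |x|)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))  # next series term
    return total

# cos is 1 at 0 and negative at 2, so bisect for its smallest zero, pi/2.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if cos_series(mid) > 0:
        lo = mid
    else:
        hi = mid
print(2 * lo)  # ~3.141592653589793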
6.2 Differentiating power series

We shall show that inside the circle of convergence, the derivative of $\sum_{n=0}^\infty a_n z^n$ is given by the obvious formula $\sum_{n=1}^\infty n a_n z^{n-1}$.

We first prove some (seemingly arbitrary and random) lemmas to build up the proof of the above statement. They are done so that the final proof will not be full of tedious algebra.

Lemma. Let $a$ and $b$ be complex numbers. Then
$$b^n - a^n - n(b - a)a^{n-1} = (b - a)^2(b^{n-2} + 2ab^{n-3} + 3a^2 b^{n-4} + \cdots + (n - 1)a^{n-2}).$$

Proof. If $b = a$, we are done. Otherwise,
$$\frac{b^n - a^n}{b - a} = b^{n-1} + ab^{n-2} + a^2 b^{n-3} + \cdots + a^{n-1}.$$
Differentiate both sides with respect to $a$. Then
$$\frac{-na^{n-1}(b - a) + b^n - a^n}{(b - a)^2} = b^{n-2} + 2ab^{n-3} + \cdots + (n - 1)a^{n-2}.$$
Rearranging gives the result. Alternatively, we can write $b^n - a^n = (b - a)(b^{n-1} + ab^{n-2} + \cdots + a^{n-1})$, subtract $n(b - a)a^{n-1}$ to obtain
$$(b - a)[b^{n-1} - a^{n-1} + a(b^{n-2} - a^{n-2}) + a^2(b^{n-3} - a^{n-3}) + \cdots]$$
and simplify.

This implies that
$$(z + h)^n - z^n - nhz^{n-1} = h^2((z + h)^{n-2} + 2z(z + h)^{n-3} + \cdots + (n - 1)z^{n-2}),$$
which is actually the form we need.

Lemma. Let $\sum a_n z^n$ have radius of convergence $R$, and let $|z| < R$. Then $\sum n a_n z^{n-1}$ converges (absolutely).

Proof. Pick $r$ such that $|z| < r < R$. Then $\sum |a_n| r^n$ converges, so the terms $|a_n| r^n$ are bounded above by, say, $C$. Now
$$n|a_n z^{n-1}| = n|a_n| r^{n-1}\left(\frac{|z|}{r}\right)^{n-1} \leq \frac{C}{r}\, n\left(\frac{|z|}{r}\right)^{n-1}.$$
The series $\sum n\left(\frac{|z|}{r}\right)^{n-1}$ converges by the ratio test. So $\sum n|a_n z^{n-1}|$ converges, by the comparison test.
Corollary. Under the same conditions, $\sum_{n=2}^\infty \binom{n}{2} a_n z^{n-2}$ converges absolutely.

Proof. Apply the lemma above again and divide by 2.

Theorem. Let $\sum a_n z^n$ be a power series with radius of convergence $R$. For $|z| < R$, let
$$f(z) = \sum_{n=0}^\infty a_n z^n \quad\text{and}\quad g(z) = \sum_{n=1}^\infty n a_n z^{n-1}.$$
Then $f$ is differentiable with derivative $g$.

Proof. We want $f(z + h) - f(z) - hg(z)$ to be $o(h)$. We have
$$f(z + h) - f(z) - hg(z) = \sum_{n=2}^\infty a_n\left((z + h)^n - z^n - hnz^{n-1}\right).$$
We started summing from $n = 2$ since the $n = 0$ and $n = 1$ terms are 0. Using our first lemma, we are left with
$$h^2 \sum_{n=2}^\infty a_n\left((z + h)^{n-2} + 2z(z + h)^{n-3} + \cdots + (n - 1)z^{n-2}\right).$$
We want the huge infinite series to be bounded, and then the whole thing is a bounded thing times $h^2$, which is definitely $o(h)$. Pick $r$ such that $|z| < r < R$. If $h$ is small enough that $|z + h| \leq r$, then the last infinite series is bounded above (in modulus) by
$$\sum_{n=2}^\infty |a_n|(r^{n-2} + 2r^{n-2} + \cdots + (n - 1)r^{n-2}) = \sum_{n=2}^\infty |a_n| \binom{n}{2} r^{n-2},$$
which is bounded. So done.

In IB Analysis II, we will prove the same result using the idea of uniform convergence, which gives a much nicer proof.

Example. The derivative of
$$e^z = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \cdots$$
is
$$1 + z + \frac{z^2}{2!} + \cdots = e^z.$$
So we have another proof of this fact. Similarly, the derivatives of $\sin z$ and $\cos z$ work out as $\cos z$ and $-\sin z$.
6.3 Hyperbolic trigonometric functions

Definition (Hyperbolic sine and cosine). We define
$$\cosh z = \frac{e^z + e^{-z}}{2} = 1 + \frac{z^2}{2!} + \frac{z^4}{4!} + \frac{z^6}{6!} + \cdots$$
$$\sinh z = \frac{e^z - e^{-z}}{2} = z + \frac{z^3}{3!} + \frac{z^5}{5!} + \frac{z^7}{7!} + \cdots$$

Either from the definition or from differentiating the power series, we get that

Proposition.
$$\frac{d}{dz}\cosh z = \sinh z,\quad \frac{d}{dz}\sinh z = \cosh z.$$

Also, by definition, we have

Proposition.
$$\cosh iz = \cos z,\quad \sinh iz = i \sin z.$$

Also,

Proposition.
$$\cosh^2 z - \sinh^2 z = 1.$$

7 The Riemann Integral

Finally, we can get to integrals. There are many ways to define an integral, which can have some subtle differences. The definition we will use here is the Riemann integral, which is the simplest definition, but is also the weakest one, in the sense that many functions are not Riemann integrable but are integrable under other definitions. Still, the definition of the Riemann integral is not too straightforward, and requires a lot of preliminary definitions.

7.1 Riemann Integral

Definition (Dissections). Let $[a, b]$ be a closed interval. A dissection of $[a, b]$ is a sequence $a = x_0 < x_1 < x_2 < \cdots < x_n = b$.

Definition (Upper and lower sums). Given a dissection $D$, the upper sum and lower sum are defined by the formulae
$$U_D(f) = \sum_{i=1}^n (x_i - x_{i-1}) \sup_{x \in [x_{i-1}, x_i]} f(x),\quad L_D(f) = \sum_{i=1}^n (x_i - x_{i-1}) \inf_{x \in [x_{i-1}, x_i]} f(x).$$
Sometimes we use the shorthand
$$M_i = \sup_{x \in [x_{i-1}, x_i]} f(x),\quad m_i = \inf_{x \in [x_{i-1}, x_i]} f(x).$$
The upper sum is the total area of the red rectangles, while the lower sum is the total area of the black rectangles (picture: rectangles erected over each subinterval $[x_{i-1}, x_i]$, the red at height $M_i$ and the black at height $m_i$).

Definition (Refining dissections). If $D_1$ and $D_2$ are dissections of $[a, b]$, we say that $D_2$ refines $D_1$ if every point of $D_1$ is a point of $D_2$.

Lemma. If $D_2$ refines $D_1$, then $U_{D_2} f \leq U_{D_1} f$ and $L_{D_2} f \geq L_{D_1} f$.

Using the picture above, this is because if we cut up the dissections into smaller pieces, the red rectangles can only get chopped into shorter pieces and the black rectangles can only get chopped into taller pieces.

Proof. Let $D_1$ be $x_0 < x_1 < \cdots < x_n$. First let $D_2$ be obtained from $D_1$ by the addition of one point $z$. If $z \in (x_{i-1}, x_i)$, then
$$U_{D_2} f - U_{D_1} f = (z - x_{i-1}) \sup_{x \in [x_{i-1}, z]} f(x) + (x_i - z) \sup_{x \in [z, x_i]} f(x) - (x_i - x_{i-1}) M_i.$$
But $\sup_{x \in [x_{i-1}, z]} f(x)$ and $\sup_{x \in [z, x_i]} f(x)$ are both at most $M_i$. So this is at most
$$M_i(z - x_{i-1} + x_i - z - (x_i - x_{i-1})) = 0.$$
So $U_{D_2} f \leq U_{D_1} f$. By induction, the result is true whenever $D_2$ refines $D_1$. A very similar argument shows that $L_{D_2} f \geq L_{D_1} f$.

Definition (Least common refinement). If $D_1$ and $D_2$ are dissections of $[a, b]$, then the least common refinement of $D_1$ and $D_2$ is the dissection made out of the points of $D_1$ and $D_2$.
Corollary. Let $D_1$ and $D_2$ be two dissections of $[a, b]$. Then $U_{D_1} f \geq L_{D_2} f$.

Proof. Let $D$ be the least common refinement (or indeed any common refinement). Then by the lemma above (and by definition),
$$U_{D_1} f \geq U_D f \geq L_D f \geq L_{D_2} f.$$

Finally, we can define the integral.

Definition (Upper, lower, and Riemann integral). The upper integral is
$$\overline{\int_a^b} f(x)\,dx = \inf_D U_D f.$$
The lower integral is
$$\underline{\int_a^b} f(x)\,dx = \sup_D L_D f.$$
If these are equal, then we call their common value the Riemann integral of $f$, and it is denoted $\int_a^b f(x)\,dx$. If this exists, we say $f$ is Riemann integrable.

We will later prove the fundamental theorem of calculus, which says that integration is the reverse of differentiation. But why don't we simply define integration as anti-differentiation, and prove that it is the area under the curve? There are things that we cannot find (a closed form of) the anti-derivative of, like $e^{-x^2}$. In these cases, we wouldn't want to say the integral doesn't exist — it surely does according to this definition!

There is an immediate necessary condition for Riemann integrability — boundedness. If $f$ is unbounded above in $[a, b]$, then for any dissection $D$, there must be some $i$ such that $f$ is unbounded on $[x_{i-1}, x_i]$. So $M_i = \infty$. So $U_D f = \infty$. Similarly, if $f$ is unbounded below, then $L_D f = -\infty$. So unbounded functions are not Riemann integrable.
Example. Let $f(x) = x$ on $[a, b]$. Intuitively, we know that the integral is $(b^2 - a^2)/2$, and we will show this using the definition above. Let $D = x_0 < x_1 < \cdots < x_n$ be a dissection. Then
$$U_D f = \sum_{i=1}^n (x_i - x_{i-1}) x_i.$$
We know that the integral is $\frac{b^2 - a^2}{2}$. So we put each term of the sum into the form $\frac{x_i^2 - x_{i-1}^2}{2}$ plus some error terms:
$$U_D f = \sum_{i=1}^n \left(\frac{x_i^2}{2} - \frac{x_{i-1}^2}{2} + \frac{x_i^2}{2} - x_{i-1} x_i + \frac{x_{i-1}^2}{2}\right) = \frac{1}{2}\sum_{i=1}^n \left(x_i^2 - x_{i-1}^2 + (x_i - x_{i-1})^2\right) = \frac{1}{2}(b^2 - a^2) + \frac{1}{2}\sum_{i=1}^n (x_i - x_{i-1})^2.$$

Definition (Mesh). The mesh of a dissection $D$ is $\max_i (x_i - x_{i-1})$.

Then if the mesh is $< \delta$, then
$$\frac{1}{2}\sum_{i=1}^n (x_i - x_{i-1})^2 \leq \frac{\delta}{2}\sum_{i=1}^n (x_i - x_{i-1}) = \frac{\delta}{2}(b - a).$$
So by making $\delta$ small enough, we can show that for any $\varepsilon > 0$,
$$\overline{\int_a^b} x\,dx < \frac{1}{2}(b^2 - a^2) + \varepsilon.$$
Similarly,
$$\underline{\int_a^b} x\,dx > \frac{1}{2}(b^2 - a^2) - \varepsilon.$$
So
$$\int_a^b x\,dx = \frac{1}{2}(b^2 - a^2).$$

Example. Define $f : [0, 1] \to \mathbb{R}$ by
$$f(x) = \begin{cases} 1 & x \in \mathbb{Q}\\ 0 & x \notin \mathbb{Q}.\end{cases}$$
Let $x_0 < x_1 < \cdots < x_n$ be a dissection. Then for every $i$, we have $m_i = 0$ (since there is an irrational in every interval), and $M_i = 1$ (since there is a rational in every interval). So
$$U_D f = \sum_{i=1}^n M_i(x_i - x_{i-1}) = \sum_{i=1}^n (x_i - x_{i-1}) = 1.$$
Similarly, $L_D f = 0$. Since $D$ was arbitrary, we have
$$\overline{\int_0^1} f(x)\,dx = 1,\quad \underline{\int_0^1} f(x)\,dx = 0.$$
So $f$ is not Riemann integrable.

Of course, this function is not interesting at all. The whole point of its existence is to show undergraduates that there are some functions that are not integrable!
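Returning to the first example, the convergence of the upper and lower sums as the mesh shrinks is easy to watch numerically. A minimal Python sketch (the helper `upper_lower` is ours; it uses endpoint values for sup and inf, which is exact for monotone functions like $f(x) = x$):

```python
def upper_lower(f, a, b, n):
    """Upper and lower sums over the uniform dissection into n pieces.
    Endpoint values give the exact sup/inf on each piece for monotone f."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = sum((xs[i] - xs[i-1]) * max(f(xs[i-1]), f(xs[i])) for i in range(1, n + 1))
    L = sum((xs[i] - xs[i-1]) * min(f(xs[i-1]), f(xs[i])) for i in range(1, n + 1))
    return U, L

for n in (10, 100, 1000):
    print(n, upper_lower(lambda x: x, 0.0, 2.0, n))  # both tend to (4 - 0)/2 = 2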
Note that it is important to say that the function is not Riemann integrable. There are other notions of integration in which this function is integrable. For example, this function is Lebesgue-integrable.

Using the definition to show integrability is often cumbersome. Most of the time, we use Riemann's integrability criterion, which follows rather immediately from the definition, but is much nicer to work with.

Proposition (Riemann's integrability criterion). This is sometimes known as Cauchy's integrability criterion. Let $f : [a, b] \to \mathbb{R}$. Then $f$ is Riemann integrable if and only if for every $\varepsilon > 0$, there exists a dissection $D$ such that
$$U_D f - L_D f < \varepsilon.$$

Proof. ($\Rightarrow$) Suppose that $f$ is integrable. Then (by definition of Riemann integrability), there exist $D_1$ and $D_2$ such that
$$U_{D_1} f < \int_a^b f(x)\,dx + \frac{\varepsilon}{2} \quad\text{and}\quad L_{D_2} f > \int_a^b f(x)\,dx - \frac{\varepsilon}{2}.$$
Let $D$ be a common refinement of $D_1$ and $D_2$. Then
$$U_D f - L_D f \leq U_{D_1} f - L_{D_2} f < \varepsilon.$$

($\Leftarrow$) Conversely, if there exists $D$ such that $U_D f - L_D f < \varepsilon$, then
$$\inf_D U_D f - \sup_D L_D f < \varepsilon,$$
which says, by definition, that
$$\overline{\int_a^b} f(x)\,dx - \underline{\int_a^b} f(x)\,dx < \varepsilon.$$
Since $\varepsilon > 0$ is arbitrary, this gives us that
$$\overline{\int_a^b} f(x)\,dx = \underline{\int_a^b} f(x)\,dx.$$
So $f$ is integrable.

The next big result we want to prove is that integration is linear, i.e.
$$\int_a^b (\lambda f(x) + \mu g(x))\,dx = \lambda \int_a^b f(x)\,dx + \mu \int_a^b g(x)\,dx.$$
We do this step by step:

Proposition. Let $f : [a, b] \to \mathbb{R}$ be integrable, and $\lambda \geq 0$. Then $\lambda f$ is integrable, and
$$\int_a^b \lambda f(x)\,dx = \lambda \int_a^b f(x)\,dx.$$

Proof. Let $D$ be a dissection of $[a, b]$. Since
$$\sup_{x \in [x_{i-1}, x_i]} \lambda f(x) = \lambda \sup_{x \in [x_{i-1}, x_i]} f(x),$$
and similarly for inf, we have
$$U_D(\lambda f) = \lambda U_D f,\quad L_D(\lambda f) = \lambda L_D f.$$
So if we choose $D$ such that $U_D f - L_D f < \varepsilon/\lambda$, then $U_D(\lambda f) - L_D(\lambda f) < \varepsilon$. So the result follows from Riemann's integrability criterion.

Proposition. Let $f : [a, b] \to \mathbb{R}$ be integrable. Then $-f$ is integrable, and
$$\int_a^b -f(x)\,dx = -\int_a^b f(x)\,dx.$$

Proof. Let $D$ be a dissection. Then
$$\sup_{x \in [x_{i-1}, x_i]} -f(x) = -\inf_{x \in [x_{i-1}, x_i]} f(x),\quad \inf_{x \in [x_{i-1}, x_i]} -f(x) = -\sup_{x \in [x_{i-1}, x_i]} f(x).$$
Therefore
$$U_D(-f) = \sum_{i=1}^n (x_i - x_{i-1})(-m_i) = -L_D(f).$$
Similarly, $L_D(-f) = -U_D f$. So
$$U_D(-f) - L_D(-f) = U_D f - L_D f.$$
Hence if $f$ is integrable, then $-f$ is integrable by the Riemann integrability criterion.

Proposition. Let $f, g : [a, b] \to \mathbb{R}$ be integrable. Then $f + g$ is integrable, and
$$\int_a^b (f(x) + g(x))\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx.$$

Proof. Let $D$ be a dissection. Then
$$U_D(f + g) = \sum_{i=1}^n (x_i - x_{i-1}) \sup_{x \in [x_{i-1}, x_i]} (f(x) + g(x)) \leq \sum_{i=1}^n (x_i - x_{i-1}) \left(\sup_{u \in [x_{i-1}, x_i]} f(u) + \sup_{v \in [x_{i-1}, x_i]} g(v)\right) = U_D f + U_D g.$$
Therefore,
$$\overline{\int_a^b} (f(x) + g(x))\,dx \leq \overline{\int_a^b} f(x)\,dx + \overline{\int_a^b} g(x)\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx.$$
Similarly,
$$\underline{\int_a^b} (f(x) + g(x))\,dx \geq \int_a^b f(x)\,dx + \int_a^b g(x)\,dx.$$
So the upper and lower integrals are equal, and the result follows.

So we now have that
$$\int_a^b (\lambda f(x) + \mu g(x))\,dx = \lambda \int_a^b f(x)\,dx + \mu \int_a^b g(x)\,dx.$$
We will prove more "obvious" results.

Proposition. Let $f, g : [a, b] \to \mathbb{R}$ be integrable, and suppose that $f(x) \leq g(x)$ for every $x$. Then
$$\int_a^b f(x)\,dx \leq \int_a^b g(x)\,dx.$$

Proof. Follows immediately from the definition.

Proposition. Let $f : [a, b] \to \mathbb{R}$ be integrable. Then $|f|$ is integrable.

Proof. Note that we can write
$$\sup_{x \in [x_{i-1}, x_i]} f(x) - \inf_{x \in [x_{i-1}, x_i]} f(x) = \sup_{u, v \in [x_{i-1}, x_i]} |f(u) - f(v)|.$$
Similarly,
$$\sup_{x \in [x_{i-1}, x_i]} |f(x)| - \inf_{x \in [x_{i-1}, x_i]} |f(x)| = \sup_{u, v \in [x_{i-1}, x_i]} \big||f(u)| - |f(v)|\big|.$$
For any pair of real numbers $x, y$, we have $||x| - |y|| \leq |x - y|$ by the triangle inequality. Then for any $u, v \in [x_{i-1}, x_i]$, we have
$$\big||f(u)| - |f(v)|\big| \leq |f(u) - f(v)|.$$
Hence we have
$$\sup_{x \in [x_{i-1}, x_i]} |f(x)| - \inf_{x \in [x_{i-1}, x_i]} |f(x)| \leq \sup_{x \in [x_{i-1}, x_i]} f(x) - \inf_{x \in [x_{i-1}, x_i]} f(x).$$
So for any dissection $D$, we have
$$U_D(|f|) - L_D(|f|) \leq U_D(f) - L_D(f).$$
So the result follows from Riemann's integrability criterion.
Combining these two propositions, we get that if $|f(x) - g(x)| \leq C$ for every $x \in [a, b]$, then
$$\left|\int_a^b f(x)\,dx - \int_a^b g(x)\,dx\right| \leq C(b - a).$$

Proposition (Additivity property). Let $f : [a, c] \to \mathbb{R}$ be integrable, and let $b \in (a, c)$. Then the restrictions of $f$ to $[a, b]$ and $[b, c]$ are Riemann integrable, and
$$\int_a^b f(x)\,dx + \int_b^c f(x)\,dx = \int_a^c f(x)\,dx.$$
Similarly, if $f$ is integrable on $[a, b]$ and $[b, c]$, then it is integrable on $[a, c]$ and the above equation also holds.

Proof. Let $\varepsilon > 0$, and let $a = x_0 < x_1 < \cdots < x_n = c$ be a dissection $D$ of $[a, c]$ such that
$$U_D(f) \leq \int_a^c f(x)\,dx + \varepsilon \quad\text{and}\quad L_D(f) \geq \int_a^c f(x)\,dx - \varepsilon.$$
Let $D'$ be the dissection made of $D$ plus the point $b$. Let $D_1$ be the dissection of $[a, b]$ made of the points of $D'$ from $a$ to $b$, and $D_2$ the dissection of $[b, c]$ made of the points of $D'$ from $b$ to $c$. Then
$$U_{D_1}(f) + U_{D_2}(f) = U_{D'}(f) \leq U_D(f),$$
$$L_{D_1}(f) + L_{D_2}(f) = L_{D'}(f) \geq L_D(f).$$
Since $U_{D'}(f) - L_{D'}(f) \leq U_D(f) - L_D(f) < 2\varepsilon$, and both $U_{D_1}(f) - L_{D_1}(f)$ and $U_{D_2}(f) - L_{D_2}(f)$ are non-negative, we have that $U_{D_1}(f) - L_{D_1}(f)$ and $U_{D_2}(f) - L_{D_2}(f)$ are less than $2\varepsilon$. Since $\varepsilon$ is arbitrary, it follows that the restrictions of $f$ to $[a, b]$ and $[b, c]$ are both Riemann integrable. Furthermore,
$$\int_a^b f(x)\,dx + \int_b^c f(x)\,dx \leq U_{D_1}(f) + U_{D_2}(f) = U_{D'}(f) \leq U_D(f) \leq \int_a^c f(x)\,dx + \varepsilon.$$
Similarly,
$$\int_a^b f(x)\,dx + \int_b^c f(x)\,dx \geq L_{D_1}(f) + L_{D_2}(f) = L_{D'}(f) \geq L_D(f) \geq \int_a^c f(x)\,dx - \varepsilon.$$
Since $\varepsilon$ is arbitrary, it follows that
$$\int_a^b f(x)\,dx + \int_b^c f(x)\,dx = \int_a^c f(x)\,dx.$$
The other direction is left as an (easy) exercise.

Proposition. Let $f, g : [a, b] \to \mathbb{R}$ be integrable. Then $fg$ is integrable.

Proof. Let $C$ be such that $|f(x)|, |g(x)| \leq C$ for every $x \in [a, b]$. Write $L_i$ and $\ell_i$ for the sup and inf of $g$ on $[x_{i-1}, x_i]$. Now let $D$ be a dissection, and for each $i$, let $u_i$ and $v_i$ be two points in $[x_{i-1}, x_i]$.

We will pretend that $u_i$ and $v_i$ are the minimum and maximum when we write the proof, but we cannot assert that they are, since $fg$ need not have maxima and minima. We will then note that since our results hold for arbitrary $u_i$ and $v_i$, they must hold when $fg$ is at its supremum and infimum.

We find what we pretend is the difference between the upper and lower sums:
$$\sum_{i=1}^n (x_i - x_{i-1})\left(f(v_i)g(v_i) - f(u_i)g(u_i)\right) = \sum_{i=1}^n (x_i - x_{i-1})\left(f(v_i)(g(v_i) - g(u_i)) + (f(v_i) - f(u_i))g(u_i)\right) \leq \sum_{i=1}^n (x_i - x_{i-1})\left(C(L_i - \ell_i) + (M_i - m_i)C\right) = C(U_D g - L_D g + U_D f - L_D f).$$
Since $u_i$ and $v_i$ are arbitrary, it follows that
$$U_D(fg) - L_D(fg) \leq C(U_D f - L_D f + U_D g - L_D g).$$
Since $C$ is fixed, and we can make $U_D f - L_D f$ and $U_D g - L_D g$ arbitrarily small (since $f$ and $g$ are integrable), we can make $U_D(fg) - L_D(fg)$ arbitrarily small. So the result follows.
Theorem. Every continuous function $f$ on a closed bounded interval $[a, b]$ is Riemann integrable.

Proof. wlog assume $[a, b] = [0, 1]$. Suppose the contrary. Let $f$ be non-integrable. This means that there exists some $\varepsilon$ such that for every dissection $D$, we have $U_D - L_D > \varepsilon$. In particular, for every $n$, let $D_n$ be the dissection $0, \frac{1}{n}, \frac{2}{n}, \cdots, \frac{n}{n}$. Since $U_{D_n} - L_{D_n} > \varepsilon$, there exists some interval $\left[\frac{k}{n}, \frac{k+1}{n}\right]$ in which $\sup f - \inf f > \varepsilon$. Suppose the supremum and infimum are attained at $x_n$ and $y_n$ respectively. Then we have $|x_n - y_n| < \frac{1}{n}$ and $f(x_n) - f(y_n) > \varepsilon$.

By Bolzano–Weierstrass, $(x_n)$ has a convergent subsequence, say $(x_{n_i})$. Say $x_{n_i} \to x$. Since $|x_n - y_n| < \frac{1}{n} \to 0$, we must have $y_{n_i} \to x$. By continuity, we must have $f(x_{n_i}) \to f(x)$ and $f(y_{n_i}) \to f(x)$, but $f(x_{n_i})$ and $f(y_{n_i})$ are always at least $\varepsilon$ apart. Contradiction.

With this result, we know that a lot of things are integrable, e.g. $e^{-x^2}$.

To prove this, we secretly used the property of uniform continuity:

Definition (Uniform continuity*). Let $A \subseteq \mathbb{R}$ and let $f : A \to \mathbb{R}$. Then $f$ is uniformly continuous if
$$(\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)(\forall y)\ |x - y| < \delta \Rightarrow |f(x) - f(y)| \leq \varepsilon.$$
This is different from regular continuity. Regular continuity says that at any point $x$, we can find a $\delta$ that works for this point. Uniform continuity says that we can find a $\delta$ that works for any $x$.

It is easy to show that a uniformly continuous function is integrable, since by uniform continuity, as long as the mesh of a dissection is sufficiently small, the difference between the upper sum and the lower sum can be made arbitrarily small.
Thus to prove the above theorem, we just have to show that continuous functions on a closed bounded interval are uniformly continuous.

Theorem (non-examinable). Let $a < b$ and let $f : [a, b] \to \mathbb{R}$ be continuous. Then $f$ is uniformly continuous.

Proof. Suppose that $f$ is not uniformly continuous. Then
$$(\exists \varepsilon > 0)(\forall \delta > 0)(\exists x)(\exists y)\ |x - y| < \delta \text{ and } |f(x) - f(y)| \geq \varepsilon.$$
Therefore, we can find sequences $(x_n)$, $(y_n)$ such that for every $n$, we have $|x_n - y_n| \leq \frac{1}{n}$ and $|f(x_n) - f(y_n)| \geq \varepsilon$. Then by the Bolzano–Weierstrass theorem, we can find a subsequence $(x_{n_k})$ converging to some $x$. Since $|x_{n_k} - y_{n_k}| \leq \frac{1}{n_k}$, $y_{n_k} \to x$ as well. But $|f(x_{n_k}) - f(y_{n_k})| \geq \varepsilon$ for every $k$. So $f(x_{n_k})$ and $f(y_{n_k})$ cannot both converge to the same limit. So $f$ is not continuous at $x$.

This proof is very similar to the proof that continuous functions are integrable. In fact, the proof that continuous functions are integrable is just a fusion of this proof and the (simple) proof that uniformly continuous functions are integrable.

Theorem. Let $f : [a, b] \to \mathbb{R}$ be monotone. Then $f$ is Riemann integrable.

Note that monotone functions need not be "nice". They can even have infinitely many discontinuities. For example, let $f : [0, 1] \to \mathbb{R}$ map $x$ to $1/(\text{first non-zero digit in the binary expansion of } x)$, with $f(0) = 0$.

Proof. Let $\varepsilon > 0$. Let $D$ be a dissection of mesh less than $\frac{\varepsilon}{f(b) - f(a)}$. Then
$$U_D f - L_D f = \sum_{i=1}^n (x_i - x_{i-1})(f(x_i) - f(x_{i-1})) \leq \frac{\varepsilon}{f(b) - f(a)} \sum_{i=1}^n (f(x_i) - f(x_{i-1})) = \varepsilon.$$
Pictorially, we see that the difference between the upper and lower sums is the total area of the red rectangles (picture: a staircase of small rectangles along the graph of $f$). To calculate the total area, we can stack the red areas together to get something of width $\frac{\varepsilon}{f(b) - f(a)}$ and height $f(b) - f(a)$. So the total area is just $\varepsilon$.
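For a uniform dissection the telescoping in the proof is exact: the gap $U_D f - L_D f$ equals (mesh) $\times (f(b) - f(a))$. A minimal Python sketch under that assumption (the helper `riemann_gap` and the sample function are our own illustrative choices):

```python
def riemann_gap(f, a, b, n):
    """U_D f - L_D f for an increasing f on the uniform dissection into
    n parts; for monotone f, sup/inf on each piece are the endpoint values."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum((xs[i] - xs[i-1]) * (f(xs[i]) - f(xs[i-1])) for i in range(1, n + 1))

f = lambda x: x ** 3 + x        # increasing on [0, 1]
for n in (10, 100, 1000):
    # the gap equals mesh * (f(b) - f(a)) exactly for a uniform dissection
    print(n, riemann_gap(f, 0.0, 1.0, n), (1.0 / n) * (f(1.0) - f(0.0)))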
Lemma. Let $a < b$ and let $f$ be a bounded function from $[a, b] \to \mathbb{R}$ that is continuous on $(a, b)$. Then $f$ is integrable.

An example where this would apply is $\sin \frac{1}{x}$. It gets nasty near $x = 0$, but its "nastiness" is confined to $x = 0$ only. So as long as its nastiness is sufficiently contained, it is still integrable.

The idea of the proof is to integrate from a point $x_1$ very near $a$ up to a point $x_{n-1}$ very close to $b$. Since $f$ is bounded, the regions $[a, x_1]$ and $[x_{n-1}, b]$ are small enough to not cause trouble.

Proof. Let $\varepsilon > 0$. Suppose that $|f(x)| \leq C$ for every $x \in [a, b]$. Let $x_0 = a$ and pick $x_1$ such that $x_1 - x_0 < \frac{\varepsilon}{8C}$. Also choose $z$ between $x_1$ and $b$ such that $b - z < \frac{\varepsilon}{8C}$. Then $f$ is continuous on $[x_1, z]$. Therefore it is integrable on $[x_1, z]$. So we can find a dissection $D'$ with points $x_1 < x_2 < \cdots < x_{n-1} = z$ such that
$$U_{D'} f - L_{D'} f < \frac{\varepsilon}{2}.$$
Let $D$ be the dissection $a = x_0 < x_1 < \cdots < x_n = b$. Then
$$U_D f - L_D f < \frac{\varepsilon}{8C} \cdot 2C + \frac{\varepsilon}{2} + \frac{\varepsilon}{8C} \cdot 2C = \varepsilon.$$
So we are done by the Riemann integrability criterion.

Example.
– $f(x) = \begin{cases}\sin\frac{1}{x} & x \neq 0\\ 0 & x = 0\end{cases}$ defined on $[-1, 1]$ is integrable.
– $g(x) = \begin{cases}x & x \leq 1\\ x^2 + 1 & x > 1\end{cases}$ defined on $[0, 2]$ is integrable.

Corollary. Every piecewise continuous and bounded function on $[a, b]$ is integrable.

Proof. Partition $[a, b]$ into intervals $I_1, \cdots, I_k$, on each of which $f$ is (bounded and) continuous. Hence for every $I_j$ with end points $x_{j-1}, x_j$, $f$ is integrable on $[x_{j-1}, x_j]$ (which may not equal $I_j$, e.g. $I_j$ could be $[x_{j-1}, x_j)$). But then by the additivity property of integration, we get that $f$ is integrable on $[a, b]$.

We defined Riemann integration in a very general way — we allowed arbitrary dissections, and took the extrema over all possible dissections. Is it possible to just consider some particular nice dissections instead? Perhaps unsurprisingly, yes! It's just that we opt to define it the general way so that we can easily talk about things like least common refinements.

Lemma. Let $f : [a, b] \to \mathbb{R}$ be Riemann integrable, and for each $n$, let $D_n$ be the dissection $a = x_0 < x_1 < \cdots < x_n = b$, where $x_i = a + \frac{i(b - a)}{n}$ for each $i$. Then
$$U_{D_n} f \to \int_a^b f(x)\,dx \quad\text{and}\quad L_{D_n} f \to \int_a^b f(x)\,dx.$$

Proof. Let $\varepsilon > 0$. We need to find an $N$. The only thing we know is that $f$ is Riemann integrable, so we use it: since $f$ is integrable, there is a dissection $D$, say $u_0 < u_1 < \cdots < u_m$, such that
$$U_D f - \int_a^b f(x)\,dx < \frac{\varepsilon}{2}.$$
We also know that $f$ is bounded. Let $C$ be such that $|f(x)| \leq C$. For any $n$, let $D'_n$ be the least common refinement of $D_n$ and $D$. Then $U_{D'_n} f \leq U_D f$. Also, the sums $U_{D_n} f$ and $U_{D'_n} f$ are the same, except that at most $m$ of the subintervals $[x_{i-1}, x_i]$ are subdivided in $D'_n$.
For each interval that gets chopped up, the upper sum decreases by at most $\frac{b - a}{n} \cdot 2C$. Therefore
$$U_{D_n} f - U_{D'_n} f \leq \frac{b - a}{n} \cdot 2C \cdot m.$$
Pick $n$ such that $2Cm(b - a)/n < \frac{\varepsilon}{2}$. Then
$$U_{D_n} f - U_{D'_n} f < \frac{\varepsilon}{2}.$$
So
$$U_{D_n} f - \int_a^b f(x)\,dx < \varepsilon.$$
This is true whenever $n > \frac{4C(b - a)m}{\varepsilon}$. Since we also have $U_{D_n} f \geq \int_a^b f(x)\,dx$, it therefore follows that
$$U_{D_n} f \to \int_a^b f(x)\,dx.$$
The proof for lower sums is similar.

For convenience, we define the following:

Notation. If $b > a$, we define
$$\int_b^a f(x)\,dx = -\int_a^b f(x)\,dx.$$

We now prove the fundamental theorem of calculus, which says that integration is the reverse of differentiation.

Theorem (Fundamental theorem of calculus, part 1). Let $f : [a, b] \to \mathbb{R}$ be continuous, and for $x \in [a, b]$, define
$$F(x) = \int_a^x f(t)\,dt.$$
Then $F$ is differentiable and $F'(x) = f(x)$ for every $x$.

Proof.
$$\frac{F(x + h) - F(x)}{h} = \frac{1}{h}\int_x^{x+h} f(t)\,dt.$$
Let $\varepsilon > 0$. Since $f$ is continuous at $x$, there exists $\delta$ such that $|y - x| < \delta$ implies $|f(y) - f(x)| < \varepsilon$. If $|h| < \delta$, then
$$\left|\frac{1}{h}\int_x^{x+h} f(t)\,dt - f(x)\right| = \left|\frac{1}{h}\int_x^{x+h} (f(t) - f(x))\,dt\right| \leq \frac{1}{|h|}\int_x^{x+h} |f(t) - f(x)|\,dt \leq \frac{\varepsilon |h|}{|h|} = \varepsilon.$$
Corollary. If $f$ is continuously differentiable on $[a, b]$, then
$$\int_a^b f'(t)\,dt = f(b) - f(a).$$

Proof. Let
$$g(x) = \int_a^x f'(t)\,dt.$$
Then
$$\frac{d}{dx}\left(g(x) - f(x)\right) = f'(x) - f'(x) = 0.$$
So $g(x) - f(x)$ must be a constant function by the mean value theorem. We also know that
$$g(a) = 0 = f(a) - f(a).$$
So we must have $g(x) = f(x) - f(a)$ for every $x$, and in particular, for $x = b$.

Theorem (Fundamental theorem of calculus, part 2). Let $f : [a, b] \to \mathbb{R}$ be a differentiable function, and suppose that $f'$ is integrable. Then
$$\int_a^b f'(t)\,dt = f(b) - f(a).$$
Note that this is a stronger result than the corollary above, since it does not require that $f'$ is continuous.

Proof. Let $D$ be a dissection $x_0 < x_1 < \cdots < x_n$. We want to make use of this dissection. So write
$$f(b) - f(a) = \sum_{i=1}^n (f(x_i) - f(x_{i-1})).$$
For each $i$, there exists $u_i \in (x_{i-1}, x_i)$ such that $f(x_i) - f(x_{i-1}) = (x_i - x_{i-1}) f'(u_i)$ by the mean value theorem. So
$$f(b) - f(a) = \sum_{i=1}^n (x_i - x_{i-1}) f'(u_i).$$
We know that $f'(u_i)$ is somewhere between $\sup_{x \in [x_{i-1}, x_i]} f'(x)$ and $\inf_{x \in [x_{i-1}, x_i]} f'(x)$ by definition. Therefore
$$L_D f' \leq f(b) - f(a) \leq U_D f'.$$
Since $f'$ is integrable and $D$ was arbitrary, $L_D f'$ and $U_D f'$ can both get arbitrarily close to $\int_a^b f'(t)\,dt$. So
$$f(b) - f(a) = \int_a^b f'(t)\,dt.$$

Note that the condition that $f'$ is integrable is essential. It is possible to find a differentiable function whose derivative is not integrable! You will be asked to find it in the example sheet.
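The theorem is easy to check numerically on any smooth example: a Riemann sum of $f'$ over $[a, b]$ should recover $f(b) - f(a)$. A minimal Python sketch (`integrate` is our own midpoint-sum helper, and the choice $f = \sin$ is arbitrary):

```python
import math

def integrate(f, a, b, n=100000):
    """Midpoint Riemann sum: converges to the integral for integrable f."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# FTC part 2: the integral of f' over [a, b] recovers f(b) - f(a).
f, fprime = math.sin, math.cos
a, b = 0.3, 2.0
print(integrate(fprime, a, b))   # ~ sin(2.0) - sin(0.3)
print(f(b) - f(a))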
Using the fundamental theorem of calculus, we can easily prove integration by parts:

Theorem (Integration by parts). Let $f, g : [a, b] \to \mathbb{R}$ be such that everything below exists. Then
$$\int_a^b f(x)g'(x)\,dx = f(b)g(b) - f(a)g(a) - \int_a^b f'(x)g(x)\,dx.$$

Proof. By the fundamental theorem of calculus,
$$\int_a^b (f'(x)g(x) + f(x)g'(x))\,dx = \int_a^b (fg)'(x)\,dx = f(b)g(b) - f(a)g(a).$$
The result follows after rearrangement.

Recall that when we first had Taylor's theorem, we said it had the Lagrange form of the remainder. There are many other forms of the remainder term. Here we will look at the integral form:

Theorem (Taylor's theorem with the integral form of the remainder). Let $f$ be $n + 1$ times differentiable on $[a, b]$ with $f^{(n+1)}$ continuous. Then
$$f(b) = f(a) + (b - a)f'(a) + \frac{(b - a)^2}{2!}f^{(2)}(a) + \cdots + \frac{(b - a)^n}{n!}f^{(n)}(a) + \int_a^b \frac{(b - t)^n}{n!} f^{(n+1)}(t)\,dt.$$

Proof. Induction on $n$. When $n = 0$, the theorem says
$$f(b) - f(a) = \int_a^b f'(t)\,dt,$$
which is true by the fundamental theorem of calculus. Now observe that, integrating by parts,
$$\int_a^b \frac{(b - t)^n}{n!} f^{(n+1)}(t)\,dt = \left[-\frac{(b - t)^{n+1}}{(n + 1)!} f^{(n+1)}(t)\right]_a^b + \int_a^b \frac{(b - t)^{n+1}}{(n + 1)!} f^{(n+2)}(t)\,dt = \frac{(b - a)^{n+1}}{(n + 1)!} f^{(n+1)}(a) + \int_a^b \frac{(b - t)^{n+1}}{(n + 1)!} f^{(n+2)}(t)\,dt.$$
So the result follows by induction.
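The integral remainder is also easy to compute numerically, which makes the theorem concrete: the Taylor polynomial plus the (numerically evaluated) remainder integral should reproduce $f(b)$ exactly. A small Python sketch (the helper name and the choice $f = \exp$, where every derivative is again $\exp$, are ours):

```python
import math

def taylor_with_remainder(derivs, a, b, n, steps=100000):
    """Degree-n Taylor polynomial of f about a, evaluated at b, plus the
    integral-form remainder computed by a midpoint sum. derivs[k] is f^(k)."""
    poly = sum(derivs[k](a) * (b - a) ** k / math.factorial(k) for k in range(n + 1))
    h = (b - a) / steps
    rem = h * sum((b - t) ** n / math.factorial(n) * derivs[n + 1](t)
                  for t in (a + (i + 0.5) * h for i in range(steps)))
    return poly, rem

derivs = [math.exp] * 5          # f = exp: f^(k) = exp for all k
poly, rem = taylor_with_remainder(derivs, 0.0, 1.0, 3)
print(poly + rem, math.e)        # the two agree to high accuracy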
Note that the form of the integral remainder is rather weird and unexpected. How could we have come up with it? We might start with the fundamental theorem of calculus and integrate by parts. The first attempt would be to integrate $1$ to $t$ and differentiate $f'(t)$ to $f^{(2)}(t)$. So we have
$$f(b) = f(a) + \int_a^b f'(t)\,dt = f(a) + [tf'(t)]_a^b - \int_a^b tf^{(2)}(t)\,dt = f(a) + bf'(b) - af'(a) - \int_a^b tf^{(2)}(t)\,dt.$$
We want something in the form $(b - a)f'(a)$, so we take that out and see what we are left with:
$$= f(a) + (b - a)f'(a) + b(f'(b) - f'(a)) - \int_a^b tf^{(2)}(t)\,dt.$$
Then we note that
$$f'(b) - f'(a) = \int_a^b f^{(2)}(t)\,dt.$$
So we have
$$f(b) = f(a) + (b - a)f'(a) + \int_a^b (b - t)f^{(2)}(t)\,dt.$$
Then we can see that the right thing to integrate is $(b - t)$ and continue to obtain the result.

Theorem (Integration by substitution). Let $f : [a, b] \to \mathbb{R}$ be continuous. Let $g : [u, v] \to \mathbb{R}$ be continuously differentiable, and suppose that $g(u) = a$, $g(v) = b$, and $f$ is defined everywhere on $g([u, v])$ (and still continuous). Then
$$\int_a^b f(x)\,dx = \int_u^v f(g(t))g'(t)\,dt.$$

Proof. By the fundamental theorem of calculus, $f$ has an anti-derivative $F$ defined on $g([u, v])$. Then
$$\int_u^v f(g(t))g'(t)\,dt = \int_u^v F'(g(t))g'(t)\,dt = \int_u^v (F \circ g)'(t)\,dt = F(g(v)) - F(g(u)) = F(b) - F(a) = \int_a^b f(x)\,dx.$$
We can think of "integration by parts" as what you get by integrating the product rule, and "integration by substitution" as what you get by integrating the chain rule.

7.2 Improper integrals

It is sometimes sensible to talk about integrals of unbounded functions or integrating to infinity. But we have to be careful and write things down nicely.

Definition (Improper integral). Suppose that we have a function $f : [a, b] \to \mathbb{R}$ such that, for every $\varepsilon > 0$, $f$ is integrable on $[a + \varepsilon, b]$ and $\lim_{\varepsilon \to 0} \int_{a+\varepsilon}^b f(x)\,dx$ exists. Then we define the improper integral
$$\int_a^b f(x)\,dx \quad\text{to be}\quad \lim_{\varepsilon \to 0} \int_{a+\varepsilon}^b f(x)\,dx,$$
even if the Riemann integral does not exist. We can do similarly for $[a, b - \varepsilon]$, or integrals to infinity:
$$\int_a^\infty f(x)\,dx = \lim_{b \to \infty} \int_a^b f(x)\,dx,$$
when it exists.

Example.
$$\int_\varepsilon^1 x^{-1/2}\,dx = \left[2x^{1/2}\right]_\varepsilon^1 = 2 - 2\varepsilon^{1/2} \to 2.$$
So $\int_0^1 x^{-1/2}\,dx = 2$, even though $x^{-1/2}$ is unbounded on $[0, 1]$. Note that officially we are required to make $f(x) = x^{-1/2}$ a function with domain $[0, 1]$. So we can assign $f(0) = \pi$, or any number, since it doesn't matter.

Example.
$$\int_1^x \frac{1}{t^2}\,dt = 1 - \frac{1}{x} \to 1$$
as $x \to \infty$ by the fundamental theorem of calculus. So $\int_1^\infty \frac{1}{x^2}\,dx = 1$.
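The first example can be watched numerically: integrate on $[\varepsilon, 1]$ and shrink $\varepsilon$. A minimal Python sketch (the midpoint-sum helper `integrate` is our own illustrative choice, and the step counts are arbitrary):

```python
def integrate(f, a, b, n=100000):
    """Midpoint Riemann sum for f on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Improper integral of x^(-1/2) on (0, 1]: shrink epsilon toward 0.
for eps in (1e-1, 1e-2, 1e-4, 1e-6):
    print(eps, integrate(lambda x: x ** -0.5, eps, 1.0))  # tends to 2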
Finally, we can prove the integral test, whose proof we omitted when we first began.

Theorem (Integral test). Let $f : [1, \infty) \to \mathbb{R}$ be a decreasing non-negative function. Then $\sum_{n=1}^\infty f(n)$ converges iff $\int_1^\infty f(x)\,dx < \infty$.

Proof. We have
$$\int_n^{n+1} f(x)\,dx \leq f(n) \leq \int_{n-1}^n f(x)\,dx,$$
since $f$ is decreasing (the right hand inequality is valid only for $n \geq 2$). It follows that
$$\int_1^{N+1} f(x)\,dx \leq \sum_{n=1}^N f(n) \leq \int_1^N f(x)\,dx + f(1).$$
So if the integral exists, then $\sum_{n=1}^N f(n)$ is increasing and bounded above by $\int_1^\infty f(x)\,dx + f(1)$, so converges. If the integral does not exist, then $\int_1^N f(x)\,dx$ is unbounded. Then $\sum_{n=1}^N f(n)$ is unbounded, hence does not converge.

Example. Since $\int_1^x \frac{1}{t^2}\,dt < \infty$, it follows that $\sum_{n=1}^\infty \frac{1}{n^2}$ converges.

Theorem (Lebesgue's criterion for the Riemann integral*). Let $f : [a, b] \to \mathbb{R}$ be a bounded function, and let $\mathcal{D}_f$ be the set of points of discontinuity of $f$. Then $f$ is Riemann integrable if and only if $\mathcal{D}_f$ has measure zero.

Using this result, a lot of our theorems follow easily. Apart from the easy ones, like the sum and product of integrable functions being integrable, we can also easily show that the composition of a continuous function with an integrable function is integrable, since composing with a continuous function will not introduce more discontinuities. Similarly, we can show that the uniform limit of integrable functions is integrable, since the set of points of discontinuity of the uniform limit is at most the (countable) union of all the discontinuities of the functions in the sequence. The proof is left as an exercise for the reader, in the example sheet.
4 Rn as a normed space

4.1 Normed spaces

Our objective is to extend most of the notions we had about functions of a single variable $f : \mathbb{R} \to \mathbb{R}$ to functions of multiple variables $f : \mathbb{R}^n \to \mathbb{R}$. More generally, we want to study functions $f : \Omega \to \mathbb{R}^m$, where $\Omega \subseteq \mathbb{R}^n$. We wish to define analytic notions such as continuity, differentiability and even integrability (even though we are not doing integrability in this course).

In order to do this, we need more structure on $\mathbb{R}^n$. We already know that $\mathbb{R}^n$ is a vector space, which means that we can add, subtract and multiply by scalars. But to do analysis, we need something to replace our notion of $|x - y|$ in $\mathbb{R}$. This is known as a norm.

It is useful to define and study this structure in an abstract setting, as opposed to thinking about $\mathbb{R}^n$ specifically. This leads to the general notion of normed spaces.

Definition (Normed space). Let $V$ be a real vector space. A norm on $V$ is a function $\| \cdot \| : V \to \mathbb{R}$ satisfying
(i) $\|x\| \geq 0$ with equality iff $x = 0$ (non-negativity)
(ii) $\|\lambda x\| = |\lambda|\|x\|$ (linearity in scalar multiplication)
(iii) $\|x + y\| \leq \|x\| + \|y\|$ (triangle inequality)
A normed space is a pair $(V, \| \cdot \|)$. If the norm is understood, we just say $V$ is a normed space. We do have to be slightly careful since there can be multiple norms on a vector space.

Intuitively, $\|x\|$ is the length or magnitude of $x$.

Example. We will first look at finite-dimensional spaces. This is typically $\mathbb{R}^n$ with different norms.
– Consider $\mathbb{R}^n$, with the Euclidean norm
$$\|x\|_2 = \left(\sum x_i^2\right)^{1/2}.$$
This is also known as the usual norm. It is easy to check that this is a norm, apart from the triangle inequality. So we'll just do this. We have
$$\|x + y\|_2^2 = \sum_{i=1}^n (x_i + y_i)^2 = \|x\|_2^2 + \|y\|_2^2 + 2\sum x_i y_i \leq \|x\|_2^2 + \|y\|_2^2 + 2\|x\|_2\|y\|_2 = (\|x\|_2 + \|y\|_2)^2,$$
where we used the Cauchy–Schwarz inequality. So done.
– We can have the following norm on $\mathbb{R}^n$:
$$\|x\|_1 = \sum |x_i|.$$
It is easy to check that this is a norm.
– We can also have the following norm on $\mathbb{R}^n$:
$$\|x\|_\infty = \max\{|x_i| : 1 \leq i \leq n\}.$$
It is also easy to check that this is a norm.
– In general, we can define the $p$-norm (for $p \geq 1$) by
$$\|x\|_p = \left(\sum |x_i|^p\right)^{1/p}.$$
It is, however, not trivial to check the triangle inequality, and we will not do this.
We can show that as $p \to \infty$, $\|x\|_p \to \|x\|_\infty$, which justifies our notation above.

We also have some infinite dimensional examples. Often, we can just extend our notions on $\mathbb{R}^n$ to infinite sequences with some care. We write $\mathbb{R}^{\mathbb{N}}$ for the set of all infinite real sequences $(x_k)$. This is a vector space with termwise addition and scalar multiplication.
– Define
$$\ell^1 = \left\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum |x_k| < \infty\right\}.$$
This is a linear subspace of $\mathbb{R}^{\mathbb{N}}$. We define the norm by
$$\|(x_k)\|_1 = \|(x_k)\|_{\ell^1} = \sum |x_k|.$$
– Similarly, we can define $\ell^2$ by
$$\ell^2 = \left\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum x_k^2 < \infty\right\}.$$
The norm is defined by
$$\|(x_k)\|_2 = \|(x_k)\|_{\ell^2} = \left(\sum x_k^2\right)^{1/2}.$$
We can also write this as
$$\|(x_k)\|_{\ell^2} = \lim_{n \to \infty} \|(x_1, \cdots, x_n)\|_2.$$
So the triangle inequality for the Euclidean norm implies the triangle inequality for $\ell^2$.
– In general, for $p \geq 1$, we can define
$$\ell^p = \left\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum |x_k|^p < \infty\right\}$$
with the norm
$$\|(x_k)\|_p = \|(x_k)\|_{\ell^p} = \left(\sum |x_k|^p\right)^{1/p}.$$
– Finally, we have $\ell^\infty$, where
$$\ell^\infty = \{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sup |x_k| < \infty\},$$
with the norm
$$\|(x_k)\|_\infty = \|(x_k)\|_{\ell^\infty} = \sup |x_k|.$$
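The claim that $\|x\|_p \to \|x\|_\infty$ as $p \to \infty$ is easy to observe numerically for a finite vector. A minimal Python sketch (`p_norm` and the sample vector are our own illustrative choices):

```python
def p_norm(x, p):
    """The p-norm of a finite vector x, for p >= 1."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -4.0, 1.0, 2.0]
for p in (1, 2, 4, 16, 64, 256):
    print(p, p_norm(x, p))      # decreases toward max|x_i| = 4
print(max(abs(t) for t in x))   # the infinity-norm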
Finally, we can have examples where we look at function spaces, usually $C([a, b])$, the set of continuous real functions on $[a, b]$.
– We can define the $L^1$ norm by
$$\|f\|_{L^1} = \|f\|_1 = \int_a^b |f|\,dx.$$
– We can define $L^2$ similarly by
$$\|f\|_{L^2} = \|f\|_2 = \left(\int_a^b f^2\,dx\right)^{1/2}.$$
– In general, we can define $L^p$ for $p \geq 1$ by
$$\|f\|_{L^p} = \|f\|_p = \left(\int_a^b |f|^p\,dx\right)^{1/p}.$$
– Finally, we have $L^\infty$ by
$$\|f\|_{L^\infty} = \|f\|_\infty = \sup |f|.$$
This is also called the uniform norm, or the supremum norm. Later, when we define convergence for general normed spaces, we will show that convergence under the uniform norm is equivalent to uniform convergence.

To show that $L^2$ is actually a norm, we can use the Cauchy–Schwarz inequality for integrals.

Lemma (Cauchy–Schwarz inequality (for integrals)). If $f, g \in C([a, b])$, $f, g \geq 0$, then
$$\int_a^b fg\,dx \leq \left(\int_a^b f^2\,dx\right)^{1/2}\left(\int_a^b g^2\,dx\right)^{1/2}.$$

Proof. If $\int_a^b f^2\,dx = 0$, then $f = 0$ (since $f$ is continuous). So the inequality holds trivially. Otherwise, let $A^2 = \int_a^b f^2\,dx \neq 0$, $B^2 = \int_a^b g^2\,dx$. Consider the function
$$\phi(t) = \int_a^b (g - tf)^2\,dx \geq 0$$
for every $t$. We can expand this as
$$\phi(t) = t^2 A^2 - 2t\int_a^b gf\,dx + B^2.$$
The condition for a quadratic in $t$ to be non-negative is exactly
$$\left(\int_a^b gf\,dx\right)^2 - A^2 B^2 \leq 0.$$
So done.

Note that the way we defined $L^p$ is rather unsatisfactory. To define the $\ell^p$ spaces, we first have the norm defined as a sum, and then define $\ell^p$ to be the set of all sequences for which the sum converges. However, to define the $L^p$ space, we restrict ourselves to $C([a, b])$, and then define the norm. Can we just define, say, $L^1$ to be the set of all functions such that $\int_0^1 |f|\,dx$ exists? We could, but then the "norm" would no longer be a norm, since if we have the function
$$f(x) = \begin{cases} 1 & x = 0.5\\ 0 & x \neq 0.5,\end{cases}$$
then $f$ is integrable with integral 0, but is not identically zero. So we cannot expand our vector space to be too large. To define $L^p$ properly, we need some more sophisticated notions such as Lebesgue integrability and other fancy stuff, which will be done in the IID Probability and Measure course.

We have just defined many norms on the same space $\mathbb{R}^n$. These norms are clearly not the same, in the sense that for many $x$, $\|x\|_1$ and $\|x\|_2$ have different values. However, it turns out the norms are all "equivalent" in some sense. This intuitively means the norms are "not too different" from each other, and give rise to the same notions of, say, convergence and completeness. A precise definition of equivalence is as follows:

Definition (Lipschitz equivalence of norms). Let $V$ be a (real) vector space. Two norms $\| \cdot \|$, $\| \cdot \|'$ on $V$ are Lipschitz equivalent if there are real constants $0 < a < b$ such that
$$a\|x\| \leq \|x\|' \leq b\|x\| \quad\text{for all } x \in V.$$
It is easy to show this is indeed an equivalence relation on the set of all norms on $V$.

We will show that if two norms are equivalent, the "topological" properties of the space do not depend on which norm we choose. For example, the norms will agree on which sequences are convergent and which functions are continuous.

It is possible to reformulate the notion of equivalence in a more geometric way. To do so, we need some notation:

Definition (Open ball). Let $(V, \| \cdot \|)$ be a normed space, $a \in V$, $r > 0$. The open ball centered at $a$ with radius $r$ is
$$B_r(a) = \{x \in V : \|x - a\| < r\}.$$
Then the requirement that $a\|x\| \leq \|x\|' \leq b\|x\|$ for all $x \in V$ is equivalent to saying
$$B_{1/b}(0) \subseteq B'_1(0) \subseteq B_{1/a}(0),$$
where $B'$ is the ball with respect to $\| \cdot \|'$, while $B$ is the ball with respect to $\| \cdot \|$.
The actual proof of this equivalence is on the second example sheet.

Example. Consider $\mathbb{R}^2$. Then the norms $\| \cdot \|_\infty$ and $\| \cdot \|_2$ are equivalent. This is easy to see using the ball picture: the unit ball of $\| \cdot \|_\infty$ is a square, which contains the round unit ball of $\| \cdot \|_2$, which in turn contains a smaller square.

In general, we can consider $\mathbb{R}^n$, again with $\| \cdot \|_2$ and $\| \cdot \|_\infty$. We have
$$\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\|x\|_\infty.$$
These are easy to check manually. However, later we will show that in fact, any two norms on a finite-dimensional vector space are Lipschitz equivalent. Hence it is more interesting to look at infinite dimensional cases.

Example. Let $V = C([0, 1])$ with the norms
$$\|f\|_1 = \int_0^1 |f|\,dx,\quad \|f\|_\infty = \sup_{[0,1]} |f|.$$
We clearly have the bound $\|f\|_1 \leq \|f\|_\infty$. However, there is no constant $b$ such that $\|f\|_\infty \leq b\|f\|_1$ for all $f$. This is easy to show by constructing a sequence of functions $f_n$: triangular "spikes" of width $\frac{2}{n}$ and height 1 (picture: a thin triangle based near $x = \frac{1}{n}$). Then $\|f_n\|_\infty = 1$ but $\|f_n\|_1 = \frac{1}{n} \to 0$.

Example. Similarly, consider the space $\ell^2 = \{(x_n) : \sum x_n^2 < \infty\}$ under the regular $\ell^2$ norm and the $\ell^\infty$ norm. We have
$$\|(x_k)\|_\infty \leq \|(x_k)\|_{\ell^2},$$
but there is no $b$ such that
$$\|(x_k)\|_{\ell^2} \leq b\|(x_k)\|_\infty.$$
For example, we can consider the sequence $x^{(n)} = (1, 1, \cdots, 1, 0, 0, \cdots)$, where the first $n$ terms are 1.
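For the last example, the failure of a uniform bound is immediately visible in computation. A minimal Python sketch (purely illustrative):

```python
import math

# The sequence (1, 1, ..., 1, 0, ...) with n leading ones: its sup norm
# is always 1, but its l^2 norm is sqrt(n), so no uniform bound
# ||x||_2 <= b * ||x||_inf can hold.
for n in (1, 4, 100, 10000):
    x = [1.0] * n
    sup_norm = max(abs(t) for t in x)
    l2_norm = math.sqrt(sum(t * t for t in x))
    print(n, sup_norm, l2_norm)  # l2 norm grows like sqrt(n)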
So far in all our examples, out of the two inequalities, one holds and one does not. Is it possible for both inequalities to fail? The answer is yes. This is an exercise on the second example sheet as well.

This is all we are going to say about Lipschitz equivalence. We are now going to define convergence, and study the consequences of Lipschitz equivalence for convergence.

Definition (Bounded subset). Let $(V, \| \cdot \|)$ be a normed space. A subset $E \subseteq V$ is bounded if there is some $R > 0$ such that $E \subseteq B_R(0)$.

Definition (Convergence of sequence). Let $(V, \| \cdot \|)$ be a normed space. A sequence $(x_k)$ in $V$ converges to $x \in V$ if $\|x_k - x\| \to 0$ (as a sequence in $\mathbb{R}$), i.e.
$$(\forall \varepsilon > 0)(\exists N)(\forall k \geq N)\ \|x_k - x\| < \varepsilon.$$
These two definitions, obviously, depend on the chosen norm, not just the vector space $V$. However, if two norms are equivalent, then they agree on what is bounded and what converges.

Proposition. If $\| \cdot \|$ and $\| \cdot \|'$ are Lipschitz equivalent norms on a vector space $V$, then
(i) A subset $E \subseteq V$ is bounded with respect to $\| \cdot \|$ if and only if it is bounded with respect to $\| \cdot \|'$.
(ii) A sequence $x_k$ converges to $x$ with respect to $\| \cdot \|$ if and only if it converges to $x$ with respect to $\| \cdot \|'$.

Proof.
(i) This is direct from the definition of equivalence.
(ii) Say we have $a, b$ such that $a\|y\| \leq \|y\|' \leq b\|y\|$ for all $y$. So
$$a\|x_k - x\| \leq \|x_k - x\|' \leq b\|x_k - x\|.$$
So $\|x_k - x\| \to 0$ if and only if $\|x_k - x\|' \to 0$. So done.

What if the norms are not equivalent? It is not surprising that there are some sequences that converge with respect to one norm but not another. More surprisingly, it is possible that a sequence converges to different limits under different norms. This is, again, on the second example sheet.
We have some easy facts about convergence:

Proposition. Let $(V, \| \cdot \|)$ be a normed space. Then
(i) If $x_k \to x$ and $x_k \to y$, then $x = y$.
(ii) If $x_k \to x$, then $ax_k \to ax$.
(iii) If $x_k \to x$, $y_k \to y$, then $x_k + y_k \to x + y$.

Proof.
(i) $\|x - y\| \leq \|x - x_k\| + \|x_k - y\| \to 0$. So $\|x - y\| = 0$. So $x = y$.
(ii) $\|ax_k - ax\| = |a|\|x_k - x\| \to 0$.
(iii) $\|(x_k + y_k) - (x + y)\| \leq \|x_k - x\| + \|y_k - y\| \to 0$.

Proposition. Convergence in $\mathbb{R}^n$ (with respect to, say, the Euclidean norm) is equivalent to coordinate-wise convergence, i.e. $x^{(k)} \to x$ if and only if $x_j^{(k)} \to x_j$ for all $j$.

Proof. Fix $\varepsilon > 0$. Suppose $x^{(k)} \to x$. Then there is some $N$ such that for any $k \geq N$,
$$\|x^{(k)} - x\|_2^2 = \sum_{j=1}^n (x_j^{(k)} - x_j)^2 < \varepsilon^2.$$
Hence $|x_j^{(k)} - x_j| < \varepsilon$ for all $k \geq N$.

On the other hand, for any fixed $j$, there is some $N_j$ such that $k \geq N_j$ implies $|x_j^{(k)} - x_j| < \frac{\varepsilon}{\sqrt{n}}$. So if $k \geq \max\{N_j : j = 1, \cdots, n\}$, then
$$\|x^{(k)} - x\|_2 = \left(\sum_{j=1}^n (x_j^{(k)} - x_j)^2\right)^{1/2} < \varepsilon.$$
So done.

Another space we would like to understand is the space of continuous functions. It should be clear that uniform convergence is the same as convergence under the uniform norm, hence the name. However, there is no norm such that convergence under the norm is equivalent to pointwise convergence, i.e. pointwise convergence is not normable. In fact, it is not even metrizable. However, we will not prove this.
We'll now generalize the Bolzano–Weierstrass theorem to $\mathbb{R}^n$.

Theorem (Bolzano–Weierstrass theorem in $\mathbb{R}^n$). Any bounded sequence in $\mathbb{R}^n$ (with, say, the Euclidean norm) has a convergent subsequence.

Proof. We induct on $n$. The $n = 1$ case is the usual Bolzano–Weierstrass on the real line, which was proved in IA Analysis I.

Assume the theorem holds in $\mathbb{R}^{n-1}$, and let $x^{(k)} = (x_1^{(k)}, \cdots, x_n^{(k)})$ be a bounded sequence in $\mathbb{R}^n$. Then let $y^{(k)} = (x_1^{(k)}, \cdots, x_{n-1}^{(k)})$. Since for any $k$, we know that
$$\|y^{(k)}\|^2 + |x_n^{(k)}|^2 = \|x^{(k)}\|^2,$$
it follows that both $(y^{(k)})$ and $(x_n^{(k)})$ are bounded. So by the induction hypothesis, there is a subsequence $(k_j)$ of $(k)$ and some $y \in \mathbb{R}^{n-1}$ such that $y^{(k_j)} \to y$. Also, by Bolzano–Weierstrass in $\mathbb{R}$, there is a further subsequence $(x_n^{(k_{j_\ell})})$ of $(x_n^{(k_j)})$ that converges to, say, $y_n \in \mathbb{R}$. Then we know that
$$x^{(k_{j_\ell})} \to (y, y_n).$$
So done.

Note that this is generally not true for normed spaces. Finite-dimensionality is important for both of these results.

Example. Consider $(\ell^\infty, \| \cdot \|_\infty)$. We let $e_j^{(k)} = \delta_{jk}$ be the sequence with 1 in the $k$th component and 0 in the other components. Then $e_j^{(k)} \to 0$ for all fixed $j$, and hence $e^{(k)}$ converges componentwise to the zero element $\mathbf{0} = (0, 0, \cdots)$. However, $e^{(k)}$ does not converge to the zero element since $\|e^{(k)} - \mathbf{0}\|_\infty = 1$ for all $k$. Also, this is bounded but does not have a convergent subsequence, for the same reasons.
We know that all finite dimensional vector spaces are isomorphic to $\mathbb{R}^n$ as vector spaces for some $n$, and we will later show that all norms on finite dimensional spaces are equivalent. This means every finite-dimensional normed space satisfies the Bolzano–Weierstrass property. Is the converse true? If a normed vector space satisfies the Bolzano–Weierstrass property, must it be finite dimensional? The answer is yes, and the proof is in the example sheet.

Example. Let $C([0, 1])$ have the $\| \cdot \|_{L^2}$ norm. Consider $f_n(x) = \sin 2n\pi x$. We know that
$$\|f_n\|_{L^2}^2 = \int_0^1 |f_n|^2 = \frac{1}{2}.$$
So it is bounded. However, it doesn't have a convergent subsequence. If it did, say $f_{n_j} \to f$ in $L^2$, then we must have
$$\|f_{n_j} - f_{n_{j+1}}\|_2 \to 0.$$
However, by direct calculation, we know that
$$\|f_{n_j} - f_{n_{j+1}}\|_2^2 = \int_0^1 (\sin 2n_j\pi x - \sin 2n_{j+1}\pi x)^2\,dx = 1.$$
Note that the same argument also shows that the sequence $(\sin 2n\pi x)$ has no subsequence that converges pointwise on $[0, 1]$. To see this, we need the result that if $(f_j)$ is a sequence in $C([0, 1])$ that is uniformly bounded with $f_j \to f$ pointwise, then $f_j$ converges to $f$ under the $L^2$ norm. However, we will not be able to prove this (in a nice way) without Lebesgue integration from IID Probability and Measure.

4.2 Cauchy sequences and completeness

Definition (Cauchy sequence). Let $(V, \| \cdot \|)$ be a normed space. A sequence $(x^{(k)})$ in $V$ is a Cauchy sequence if
$$(\forall \varepsilon > 0)(\exists N)(\forall n, m \geq N)\ \|x^{(n)} - x^{(m)}\| < \varepsilon.$$

Definition (Complete normed space). A normed space $(V, \| \cdot \|)$ is complete if every Cauchy sequence converges to an element in $V$.
We'll start with some easy facts about Cauchy sequences and complete spaces.

Proposition. Any convergent sequence is Cauchy.

Proof. If $x_k \to x$, then
$$\|x_k - x_\ell\| \leq \|x_k - x\| + \|x_\ell - x\| \to 0 \quad\text{as } k, \ell \to \infty.$$

Proposition. A Cauchy sequence is bounded.

Proof. By definition, there is some $N$ such that for all $n \geq N$, we have $\|x_N - x_n\| < 1$. So $\|x_n\| < 1 + \|x_N\|$ for $n \geq N$. So, for all $n$,
$$\|x_n\| \leq \max\{\|x_1\|, \cdots, \|x_{N-1}\|, 1 + \|x_N\|\}.$$

Proposition. If a Cauchy sequence has a subsequence converging to an element $x$, then the whole sequence converges to $x$.

Proof. Suppose $x_{k_j} \to x$. Since $(x_k)$ is Cauchy, given $\varepsilon > 0$, we can choose an $N$ such that $\|x_n - x_m\| < \frac{\varepsilon}{2}$ for all $n, m \geq N$. We can also choose $j_0$ such that $k_{j_0} \geq N$ and $\|x_{k_{j_0}} - x\| < \frac{\varepsilon}{2}$. Then for any $n \geq N$, we have
$$\|x_n - x\| \leq \|x_n - x_{k_{j_0}}\| + \|x - x_{k_{j_0}}\| < \varepsilon.$$

Proposition. If $\| \cdot \|'$ is Lipschitz equivalent to $\| \cdot \|$ on $V$, then $(x_k)$ is Cauchy with respect to $\| \cdot \|$ if and only if $(x_k)$ is Cauchy with respect to $\| \cdot \|'$. Also, $(V, \| \cdot \|)$ is complete if and only if $(V, \| \cdot \|')$ is complete.

Proof. This follows directly from the definitions.

Theorem. $\mathbb{R}^n$ (with the Euclidean norm, say) is complete.

Proof. The important thing is to know this is true for $n = 1$, which we have proved in Analysis I. If $(x^{(k)})$ is Cauchy in $\mathbb{R}^n$, then $(x_j^{(k)})$ is a Cauchy sequence of real numbers for each $j \in \{1, \cdots, n\}$. By the completeness of the reals, we know that $x_j^{(k)} \to x_j \in \mathbb{R}$ for some $x_j$. So $x^{(k)} \to x = (x_1, \cdots, x_n)$ since convergence in $\mathbb{R}^n$ is equivalent to componentwise convergence.
Note that the spaces $\ell^1$, $\ell^2$, $\ell^\infty$ are all complete with respect to the standard norms. Also, $C([0, 1])$ is complete with respect to $\| \cdot \|_\infty$, since uniform Cauchy convergence implies uniform convergence, and the uniform limit of continuous functions is continuous. However, $C([0, 1])$ with the $L^1$ or $L^2$ norms is not complete (see example sheet).

The incompleteness of $L^1$ tells us that $C([0, 1])$ is not large enough to be complete under the $L^1$ or $L^2$ norm. In fact, the space of Riemann integrable functions, say $\mathcal{R}([0, 1])$, is the natural space for the $L^1$ norm, and of course contains $C([0, 1])$. As we have previously mentioned, this time $\mathcal{R}([0, 1])$ is too large for $\| \cdot \|_1$ to be a norm, since $\int_0^1 |f|\,dx = 0$ does not imply $f = 0$. This is a problem we can solve. We just have to take the equivalence classes of Riemann integrable functions, where $f$ and $g$ are equivalent if $\int_0^1 |f - g|\,dx = 0$. But still, $L^1$ is not complete on $\mathcal{R}([0, 1])/\sim$. This is a serious problem with the Riemann integral. This eventually led to the Lebesgue integral, which generalizes the Riemann integral, and gives a complete normed space.

Note that when we quotient $\mathcal{R}([0, 1])$ by the equivalence relation $f \sim g$ if $\int_0^1 |f - g|\,dx = 0$, we are not losing too much information about our functions. We know that for the integral to be zero, $f - g$ cannot be non-zero at a point of continuity. Hence they agree on all points of continuity. We also know that by Lebesgue's theorem, the set of points of discontinuity has Lebesgue measure zero. So they disagree on at most a set of Lebesgue measure zero.
Example. Let
$$V = \{(x_n) \in \mathbb{R}^{\mathbb{N}} : x_j = 0 \text{ for all but finitely many } j\}.$$
Take the supremum norm $\| \cdot \|_\infty$ on $V$. This is a subspace of $\ell^\infty$ (and is sometimes denoted $\ell_0$). Then $(V, \| \cdot \|_\infty)$ is not complete. We define
$$x^{(k)} = \left(1, \frac{1}{2}, \frac{1}{3}, \cdots, \frac{1}{k}, 0, 0, \cdots\right)$$
for $k = 1, 2, 3, \cdots$. Then this is Cauchy, since
$$\|x^{(k)} - x^{(\ell)}\| = \frac{1}{\min\{\ell, k\} + 1} \to 0,$$
but it is not convergent in $V$. If it actually converged to some $x$, then $x_j^{(k)} \to x_j$. So we must have $x_j = \frac{1}{j}$, but this sequence is not in $V$. We will later show that this is because $V$ is not closed, after we define what it means to be closed.
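This Cauchy-but-not-convergent behaviour is easy to check by hand. A minimal Python sketch (representing finitely-supported sequences as lists; helper names are ours):

```python
def sup_dist(x, y):
    """Sup-norm distance between two finitely-supported sequences,
    represented as lists implicitly padded with zeros."""
    n = max(len(x), len(y))
    x, y = x + [0.0] * (n - len(x)), y + [0.0] * (n - len(y))
    return max(abs(a - b) for a, b in zip(x, y))

x = lambda k: [1.0 / j for j in range(1, k + 1)]  # (1, 1/2, ..., 1/k, 0, ...)
print(sup_dist(x(10), x(20)))    # 1/11: the distance is 1/(min{k, l} + 1)
print(sup_dist(x(100), x(200)))  # 1/101 -> 0, so the sequence is Cauchy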
Definition (Open set). Let $(V, \| \cdot \|)$ be a normed space. A subset $E \subseteq V$ is open in $V$ if for any $y \in E$, there is some $r > 0$ such that
$$B_r(y) = \{x \in V : \|x - y\| < r\} \subseteq E.$$

We first check that the open ball is open.

Proposition. $B_r(y) \subseteq V$ is an open subset for all $r > 0$, $y \in V$.

Proof. Let $x \in B_r(y)$. Let $\rho = r - \|x - y\| > 0$. Then $B_\rho(x) \subseteq B_r(y)$.

Definition (Limit point). Let $(V, \| \cdot \|)$ be a normed space, $E \subseteq V$. A point $y \in V$ is a limit point of $E$ if there is a sequence $(x_k)$ in $E$ with $x_k \neq y$ for all $k$ and $x_k \to y$. (Some people allow $x_k = y$, but we will use this definition in this course.)

Example. Let $V = \mathbb{R}$, $E = (0, 1)$. Then $0, 1$ are limit points of $E$. The set of all limit points is $[0, 1]$. If $E' = (0, 1) \cup \{2\}$, then the set of limit points of $E'$ is still $[0, 1]$.

There is a nice result characterizing whether a set contains all its limit points.

Proposition. Let $E \subseteq V$. Then $E$ contains all of its limit points if and only if $V \setminus E$ is open in $V$.

Using this proposition, we define the following:

Definition (Closed set). Let $(V, \| \cdot \|)$ be a normed space. Then $E \subseteq V$ is closed if $V \setminus E$ is open, i.e. $E$ contains all its limit points.

Note that sets can be both closed and open, or neither closed nor open.

Before we prove the proposition, we first have a lemma:

Lemma. Let $(V, \| \cdot \|)$ be a normed space, $E$ any subset of $V$. Then a point $y \in V$ is a limit point of $E$ if and only if
$$(B_r(y) \setminus \{y\}) \cap E \neq \emptyset$$
for every $r$.

Proof. ($\Rightarrow$) If $y$ is a limit point of $E$, then there exists a sequence $(x_k) \in E$ with $x_k \neq y$ for all $k$ and $x_k \to y$. Then for every $r$, for sufficiently large $k$, $x_k \in B_r(y)$. Since $x_k \neq y$ and $x_k \in E$, the result follows.

($\Leftarrow$) For each $k$, let $r = \frac{1}{k}$. By assumption, we have some $x_k \in (B_{1/k}(y) \setminus \{y\}) \cap E$. Then $x_k \to y$, $x_k \neq y$ and $x_k \in E$. So $y$ is a limit point of $E$.

Now we can prove our proposition.

Proposition. Let $E \subseteq V$. Then $E$ contains all of its limit points if and only if $V \setminus E$ is open in $V$.

Proof. ($\Rightarrow$) Suppose $E$ contains all its limit points. To show $V \setminus E$ is open, we let $y \in V \setminus E$. So $y$ is not a limit point of $E$. So for some $r$, we have $(B_r(y) \setminus \{y\}) \cap E = \emptyset$. Hence it follows that $B_r(y) \subseteq V \setminus E$ (since $y \notin E$).
($\Leftarrow$) Suppose $V \setminus E$ is open. Let $y \in V \setminus E$. Since $V \setminus E$ is open, there is some $r$ such that $B_r(y) \subseteq V \setminus E$. By the lemma, $y$ is not a limit point of $E$. So all limit points of $E$ are in $E$.

4.3 Sequential compactness

In general, there are two different notions of compactness — "sequential compactness" and just "compactness". However, in normed spaces (and metric spaces, as we will later encounter), these two notions are equivalent. So we will be lazy and just say "compactness" as opposed to "sequential compactness".

Definition ((Sequentially) compact set). Let $V$ be a normed vector space. A subset $K \subseteq V$ is said to be compact (or sequentially compact) if every sequence in $K$ has a subsequence that converges to a point in $K$.

There are things we can immediately know about the spaces:

Theorem. Let $(V, \| \cdot \|)$ be a normed vector space, $K \subseteq V$ a subset. Then
(i) If $K$ is compact, then $K$ is closed and bounded.
(ii) If $V$ is $\mathbb{R}^n$ (with, say, the Euclidean norm), then if $K$ is closed and bounded, then $K$ is compact.

Proof.
(i) Let $K$ be compact. Boundedness is easy: if $K$ is unbounded, then we can generate a sequence $x_k$ such that $\|x_k\| \to \infty$. Then this cannot have a convergent subsequence, since any subsequence will also be unbounded, and convergent sequences are bounded. So $K$ must be bounded.

To show $K$ is closed, let $y$ be a limit point of $K$. Then there is some $y_k \in K$ such that $y_k \to y$. Then by compactness, there is a subsequence of $y_k$ converging to some point in $K$. But any subsequence must converge to $y$. So $y \in K$.

(ii) Let $K$ be closed and bounded. Let $x_k$ be a sequence in $K$. Since $V = \mathbb{R}^n$ and $K$ is bounded, $(x_k)$ is a bounded sequence in $\mathbb{R}^n$. So by Bolzano–Weierstrass, this has a convergent subsequence $x_{k_j}$. By closedness of $K$, we know that the limit is in $K$. So $K$ is compact.
4.4 Mappings between normed spaces

We are now going to look at functions between normed spaces, and see if they are continuous.

Let $(V, \| \cdot \|)$, $(V', \| \cdot \|')$ be normed spaces, let $E \subseteq V$ be a subset, and $f : E \to V'$ a mapping (which is just a function, although we reserve the terminology "function" or "functional" for when $V' = \mathbb{R}$).

Definition (Continuity of mapping). Let $y \in E$. We say $f : E \to V'$ is continuous at $y$ if for all $\varepsilon > 0$, there is $\delta > 0$ such that the following holds:
$$(\forall x \in E)\ \|x - y\|_V < \delta \Rightarrow \|f(x) - f(y)\|_{V'} < \varepsilon.$$
Note that $x \in E$ and $\|x - y\| < \delta$ is equivalent to saying $x \in B_\delta(y) \cap E$. Similarly, $\|f(x) - f(y)\| < \varepsilon$ is equivalent to $f(x) \in B_\varepsilon(f(y))$, in other words $x \in f^{-1}(B_\varepsilon(f(y)))$. So we can rewrite this statement as: there is some $\delta > 0$ such that
$$E \cap B_\delta(y) \subseteq f^{-1}(B_\varepsilon(f(y))).$$
We can use this to provide an alternative characterization of continuity.

Theorem. Let $(V, \| \cdot \|)$, $(V', \| \cdot \|')$ be normed spaces, $E \subseteq V$, and $f : E \to V'$. Then $f$ is continuous at $y \in E$ if and only if for any sequence $y_k \to y$ in $E$, we have $f(y_k) \to f(y)$.

Proof. ($\Rightarrow$) Suppose $f$ is continuous at $y \in E$, and that $y_k \to y$. Given $\varepsilon > 0$, by continuity, there is some $\delta > 0$ such that
$$B_\delta(y) \cap E \subseteq f^{-1}(B_\varepsilon(f(y))).$$
For sufficiently large $k$, $y_k \in B_\delta(y) \cap E$. So $f(y_k) \in B_\varepsilon(f(y))$, or equivalently,
$$\|f(y_k) - f(y)\| < \varepsilon.$$
So done.
($\Leftarrow$) If $f$ is not continuous at $y$, then there is some $\varepsilon > 0$ such that for any $k$, we have
$$B_{1/k}(y) \not\subseteq f^{-1}(B_\varepsilon(f(y))).$$
Choose $y_k \in (B_{1/k}(y) \cap E) \setminus f^{-1}(B_\varepsilon(f(y)))$. Then $y_k \to y$, $y_k \in E$, but $\|f(y_k) - f(y)\| \geq \varepsilon$, contrary to the hypothesis.

Definition (Continuous function). $f : E \to V'$ is continuous if $f$ is continuous at every point $y \in E$.

Theorem. Let $(V, \| \cdot \|)$ and $(V', \| \cdot \|')$ be normed spaces, $K$ a compact subset of $V$, and $f : V \to V'$ a continuous function. Then
(i) $f(K)$ is compact in $V'$.
(ii) $f(K)$ is closed and bounded.
(iii) If $V' = \mathbb{R}$, then the function attains its supremum and infimum, i.e. there are some $y_1, y_2 \in K$ such that
$$f(y_1) = \sup\{f(y) : y \in K\},\quad f(y_2) = \inf\{f(y) : y \in K\}.$$

Proof.
(i) Let $(x_k)$ be a sequence in $f(K)$ with $x_k = f(y_k)$ for some $y_k \in K$. By compactness of $K$, there is a subsequence $(y_{k_j})$ such that $y_{k_j} \to y$ with $y \in K$. By the previous theorem, we know that $f(y_{k_j}) \to f(y)$. So $x_{k_j} \to f(y) \in f(K)$. So $f(K)$ is compact.

(ii) This follows directly from (i), since every compact space is closed and bounded.

(iii) If $F$ is any bounded subset of $\mathbb{R}$, then either $\sup F \in F$ or $\sup F$ is a limit point of $F$ (or both), by definition of the supremum. If $F$ is closed and bounded, then any limit point must be in $F$. So $\sup F \in F$.
Applying this fact to $F = f(K)$ gives the desired result, and similarly for the infimum.

Finally, we will end the chapter by proving that any two norms on a finite dimensional space are Lipschitz equivalent. The key lemma is the following:

Lemma. Let $V$ be an $n$-dimensional vector space with a basis $\{v_1, \cdots, v_n\}$. For any $x \in V$, write $x = \sum_{j=1}^n x_j v_j$, with $x_j \in \mathbb{R}$. We define the Euclidean norm by
$$\|x\|_2 = \left(\sum x_j^2\right)^{1/2}.$$
Then this is a norm, and $S = \{x \in V : \|x\|_2 = 1\}$ is compact in $(V, \| \cdot \|_2)$.

After we show this, we can easily show that every other norm is equivalent to this norm. This is not hard to prove, since we know that the unit sphere in $\mathbb{R}^n$ is compact, and we can just pass our things on to $\mathbb{R}^n$.

Proof. $\| \cdot \|_2$ is well-defined since $x_1, \cdots, x_n$ are uniquely determined by $x$ (by (a certain) definition of basis). It is easy to check that $\| \cdot \|_2$ is a norm.

Given a sequence $x^{(k)}$ in $S$, we write $x^{(k)} = \sum_{j=1}^n x_j^{(k)} v_j$, and define the following sequence in $\mathbb{R}^n$:
$$\tilde{x}^{(k)} = (x_1^{(k)}, \cdots, x_n^{(k)}) \in \tilde{S} = \{\tilde{x} \in \mathbb{R}^n : \|\tilde{x}\|_{\text{Euclid}} = 1\}.$$
As $\tilde{S}$ is closed and bounded in $\mathbb{R}^n$ under the Euclidean norm, it is compact. Hence there exist a subsequence $\tilde{x}^{(k_j)}$ and $\tilde{x} \in \tilde{S}$ such that $\|\tilde{x}^{(k_j)} - \tilde{x}\|_{\text{Euclid}} \to 0$. This says that $x = \sum_{j=1}^n \tilde{x}_j v_j \in S$, and $\|x^{(k_j)} - x\|_2 \to 0$. So done.

Theorem. Any two norms on a finite dimensional vector space are Lipschitz equivalent.
Theorem. Any two norms on a finite-dimensional vector space are Lipschitz equivalent.

The idea is to pick a basis, and prove that any norm is equivalent to ∥ · ∥2. To show that an arbitrary norm ∥ · ∥ is equivalent to ∥ · ∥2, we have to show that there are constants a, b > 0 such that

a∥x∥2 ≤ ∥x∥ ≤ b∥x∥2

for all x. We can divide by ∥x∥2 and obtain an equivalent requirement:

a ≤ ∥x/∥x∥2∥ ≤ b.

We know that any x/∥x∥2 lies in the unit sphere S = {x ∈ V : ∥x∥2 = 1}. So we want to show that ∥ · ∥ is bounded above and bounded below away from 0 on S. But we know that S is compact. So it suffices to show that ∥ · ∥ is continuous on (S, ∥ · ∥2).

Proof. Fix a basis {v1, · · ·, vn} for V, and define ∥ · ∥2 as in the lemma above. Then ∥ · ∥2 is a norm on V, and S = {x ∈ V : ∥x∥2 = 1}, the unit sphere, is compact by the above. To show that any two norms are equivalent, it suffices to show that if ∥ · ∥ is any other norm, then it is equivalent to ∥ · ∥2, since equivalence is transitive.

For any x = Σ_{j=1}^n xj vj, we have

∥x∥ ≤ Σ_{j=1}^n |xj| ∥vj∥ ≤ ∥x∥2 (Σ_{j=1}^n ∥vj∥²)^{1/2}

by the Cauchy-Schwarz inequality. So ∥x∥ ≤ b∥x∥2 for b = (Σ_{j=1}^n ∥vj∥²)^{1/2}.

To find a such that ∥x∥ ≥ a∥x∥2, consider ∥ · ∥ : (S, ∥ · ∥2) → R. By the above, we know that

∥x − y∥ ≤ b∥x − y∥2.

By the triangle inequality, we know that |∥x∥ − ∥y∥| ≤ ∥x − y∥. So when x is close to y under ∥ · ∥2, then ∥x∥ and ∥y∥ are close. So ∥ · ∥ : (S, ∥ · ∥2) → R is continuous.
So there is some x0 ∈ S such that ∥x0∥ = inf_{x∈S} ∥x∥ = a, say. Since x0 ̸= 0, we know that a = ∥x0∥ > 0. So ∥x∥ ≥ a∥x∥2 for all x ∈ V.

The key to the proof is the compactness of the unit sphere of (V, ∥ · ∥2). On the other hand, compactness of the unit sphere also characterizes finite dimensionality. As you will show in the example sheets, if the unit sphere of a space is compact, then the space must be finite-dimensional.

Corollary. Let (V, ∥ · ∥) be a finite-dimensional normed space.

(i) The Bolzano-Weierstrass theorem holds for V, i.e. any bounded sequence in V has a convergent subsequence.
(ii) A subset of V is compact if and only if it is closed and bounded.

Proof. If a subset is bounded in one norm, then it is bounded in any Lipschitz equivalent norm. Similarly, if a sequence converges to x in one norm, then it converges to x in any Lipschitz equivalent norm. Since these results hold for the Euclidean norm ∥ · ∥2, it follows that they hold for arbitrary finite-dimensional vector spaces.

Corollary. Any finite-dimensional normed vector space (V, ∥ · ∥) is complete.

Proof. This is true since if a space is complete in one norm, then it is complete in any Lipschitz equivalent norm, and we know that Rn under the Euclidean norm is complete.
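For a concrete feel for the equivalence constants, we can sample the ∥ · ∥2-unit sphere and look at the range of another norm on it. The following sketch (our own illustration, not part of the proof) compares ∥ · ∥1 with ∥ · ∥2 on R3, where the true constants are a = 1 and b = √3:

import numpy as np

rng = np.random.default_rng(0)
# Sample (approximately uniformly) from the ||.||_2 unit sphere in R^3.
pts = rng.normal(size=(100_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
# On this sphere, ||x||_1 ranges over [a, b] = [1, sqrt(3)].
one_norms = np.abs(pts).sum(axis=1)
print(one_norms.min(), one_norms.max())  # close to 1.0 and 1.732...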
5 Metric spaces

We would like to extend our notions such as convergence, open and closed subsets, compact subsets and continuity from normed spaces to more general sets. Recall that when we defined these notions, we didn’t really use the vector space structure of a normed vector space much. Moreover, we mostly defined these things in terms of convergence of sequences. For example, a space is closed if it contains all its limit points, and a space is open if its complement is closed. So what do we actually need in order to define convergence, and hence all the notions we’ve been using?

Recall we define xk → x to mean ∥xk − x∥ → 0 as a sequence in R. What is ∥xk − x∥ really about? It is measuring the distance between xk and x. So what we really need is a measure of distance.

To do so, we can define a distance function d : V × V → R by d(x, y) = ∥x − y∥. Then we can define xk → x to mean d(xk, x) → 0. Hence, given any function d : V × V → R, we can define a notion of “convergence” as above. However, we want this to be well-behaved. In particular, we would want the limits of sequences to be unique, and any constant sequence xk = x should converge to x. We will come up with some restrictions on what d can be based on these requirements.

We can look at our proof of uniqueness of limits (for normed spaces), and see what properties of d we used. Recall that to prove the uniqueness of limits, we first assume that xk → x and xk → y. Then we noticed ∥x − y∥ ≤ ∥x − xk∥ + ∥xk − y∥ → 0, and hence ∥x − y∥ = 0. So x = y.

We can reformulate this argument in terms of d. We first start with d(x, y) ≤ d(x, xk) + d(xk, y). To obtain this inequality, we are relying on the triangle inequality. So we would want d to satisfy the triangle inequality. After obtaining this, we know that d(xk, y) → 0, since this is just the definition of convergence. However, we do not immediately know d(x, xk) → 0, since we are given a fact about d(xk, x), not d(x, xk). Hence we need the property that d(xk, x) = d(x, xk). This is symmetry. Combining these, we know that d(x, y) ≤ 0. From this, we want to say that in fact d(x, y) = 0, and thus x = y. Hence we need the property that d(x, y) ≥ 0 for all x, y, and that d(x, y) = 0 implies x = y.
Finally, to show that a constant sequence has a limit, suppose xk = x for all k ∈ N. Then we know that d(x, xk) = d(x, x) should tend to 0. So we must have d(x, x) = 0 for all x. We will use these properties to define metric spaces.

5.1 Preliminary definitions

Definition (Metric space). Let X be any set. A metric on X is a function d : X × X → R that satisfies

– d(x, y) ≥ 0 with equality iff x = y (non-negativity)
– d(x, y) = d(y, x) (symmetry)
– d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)

The pair (X, d) is called a metric space.

We have seen that we can define convergence in terms of a metric. Hence, we can also define open subsets, closed subsets, compact spaces, continuous functions etc. for metric spaces, in a manner consistent with what we had for normed spaces. Moreover, we will show that many of our theorems for normed spaces are also valid in metric spaces.

Example.

(i) Rn with the Euclidean metric is a metric space, where the metric is defined by

d(x, y) = ∥x − y∥ = (Σ_{j=1}^n (xj − yj)²)^{1/2}.

(ii) More generally, if (V, ∥ · ∥) is a normed space, then d(x, y) = ∥x − y∥ defines a metric on V.

(iii) Discrete metric: let X be any set, and define d(x, y) = 0 if x = y, and d(x, y) = 1 if x ̸= y.

(iv) Given a metric space (X, d), we define g(x, y) = min{1, d(x, y)}. Then this is a metric on X. Similarly,

h(x, y) = d(x, y)/(1 + d(x, y))

is also a metric on X. In both cases, we obtain a bounded metric. The axioms are easily shown to be satisfied, apart from the triangle inequality.
So let’s check the triangle inequality for h. We’ll use the general fact that for numbers a, c ≥ 0 and b, d > 0, we have

a/b ≤ c/d ⇔ ad ≤ cb.

Based on this fact, we can start with d(x, y) ≤ d(x, z) + d(z, y). Then we obtain

d(x, y)/(1 + d(x, y)) ≤ (d(x, z) + d(z, y))/(1 + d(x, z) + d(z, y))
                     = d(x, z)/(1 + d(x, z) + d(z, y)) + d(z, y)/(1 + d(x, z) + d(z, y))
                     ≤ d(x, z)/(1 + d(x, z)) + d(z, y)/(1 + d(z, y)).

So h(x, y) ≤ h(x, z) + h(z, y), and we are done.
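As a quick numerical sanity check of this computation (our own illustration, not a replacement for the proof), we can test the triangle inequality for h on many random triples of reals:

import numpy as np

rng = np.random.default_rng(1)

def h(s, t):
    # The bounded metric h(x, y) = d(x, y)/(1 + d(x, y)), with d(x, y) = |x - y| on R.
    d = abs(s - t)
    return d / (1 + d)

# Test h(x, y) <= h(x, z) + h(z, y) on random triples.
for _ in range(100_000):
    x, y, z = rng.normal(scale=10.0, size=3)
    assert h(x, y) <= h(x, z) + h(z, y) + 1e-12  # tiny tolerance for float error
print("triangle inequality held on all sampled triples")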
We can also extend the notion of Lipschitz equivalence to metric spaces.

Definition (Lipschitz equivalent metrics). Metrics d, d′ on a set X are said to be Lipschitz equivalent if there are (positive) constants A, B such that

Ad(x, y) ≤ d′(x, y) ≤ Bd(x, y)

for all x, y ∈ X.

Clearly, any Lipschitz equivalent norms give Lipschitz equivalent metrics. Any metric coming from a norm on Rn is thus Lipschitz equivalent to the Euclidean metric. We will later show that two Lipschitz equivalent metrics induce the same topology. In some sense, Lipschitz equivalent metrics are indistinguishable.

Definition (Metric subspace). Given a metric space (X, d) and a subset Y ⊆ X, the restriction d|Y ×Y : Y × Y → R is a metric on Y. This is called the induced metric or subspace metric.

Note that unlike vector subspaces, we do not require our subsets to have any structure. We can take any subset of X and get a metric subspace.

Example. Any subspace of Rn is a metric space with the Euclidean metric.

Definition (Convergence). Let (X, d) be a metric space. A sequence (xk) in X is said to converge to x if d(xk, x) → 0 as a real sequence. In other words,

(∀ε > 0)(∃K)(∀k > K) d(xk, x) < ε.

Alternatively, this says that given any ε, for sufficiently large k, we get xk ∈ Bε(x). Again, Br(a) is the open ball centered at a with radius r, defined as Br(a) = {x ∈ X : d(x, a) < r}.

Proposition. The limit of a convergent sequence is unique.

Proof. Same as that of normed spaces.

Note that notions such as convergence, open and closed subsets and continuity of mappings all make sense in an even more general setting called topological spaces. However, in that setting, limits of convergent sequences can fail to be unique. We will not worry ourselves about this, since we will just focus on metric spaces.

5.2 Topology of metric spaces

We will define open subsets of a metric space in exactly the same way as we did for normed spaces.

Definition (Open subset). Let (X, d) be a metric space. A subset U ⊆ X is open if for every y ∈ U, there is some r > 0 such that Br(y) ⊆ U.

This means we can write any open U as a union of open balls:

U = ⋃_{y∈U} B_{r(y)}(y)

for appropriate choices of r(y) for every y. It is easy to check that every open ball Br(y) is an open set. The proof is exactly the same as what we had for normed spaces.

Note that two different metrics d, d′ on the same set X may give rise to the same collection of open subsets.

Example. Lipschitz equivalent metrics give rise to the same collection of open sets, i.e. if d, d′ are Lipschitz equivalent, then a subset U ⊆ X is open with respect to d if and only if it is open with respect to d′. The proof is left as an easy exercise.

The converse, however, is not necessarily true.

Example. Let X = R, d(x, y) = |x − y| and d′(x, y) = min{1, |x − y|}. It is easy to check that these are not Lipschitz equivalent, but they induce the same collection of open subsets.
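The point in the last example is that for radii r ≤ 1 the balls of d and d′ coincide, and openness only depends on small balls. A small sketch of ours making this concrete:

# For X = R, d(x, y) = |x - y| and d'(x, y) = min(1, |x - y|), the balls
# B_r(a) agree whenever r <= 1, so the two metrics have the same open sets
# even though they are not Lipschitz equivalent.
def in_ball_d(x, a, r):
    return abs(x - a) < r

def in_ball_dprime(x, a, r):
    return min(1.0, abs(x - a)) < r

a, r = 0.0, 0.5                          # any radius r <= 1
xs = [k / 100 - 2 for k in range(401)]   # a grid on [-2, 2]
assert all(in_ball_d(x, a, r) == in_ball_dprime(x, a, r) for x in xs)
print("B_r(a) agrees for d and d' at r =", r)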
Definition (Topology). Let (X, d) be a metric space. The topology on (X, d) is the collection of open subsets of X. We say it is the topology induced by the metric.

Definition (Topological notion). A notion or property is said to be a topological notion or property if it only depends on the topology, and not the metric.

We will introduce a useful terminology before we go on:

Definition (Neighbourhood). Given a metric space X and a point x ∈ X, a neighbourhood of x is an open set containing x.

Some people do not require the set to be open. Instead, they take a neighbourhood to be any set that contains an open set containing x. But this is more complicated, and we may as well work with open sets directly. Clearly, being a neighbourhood is a topological property.

Proposition. Let (X, d) be a metric space. Then xk → x if and only if for every neighbourhood V of x, there exists some K such that xk ∈ V for all k ≥ K. Hence convergence is a topological notion.

Proof. (⇒) Suppose xk → x, and let V be any neighbourhood of x. Since V is open, by definition, there exists some ε such that Bε(x) ⊆ V. By definition of convergence, there is some K such that xk ∈ Bε(x) for k ≥ K. So xk ∈ V whenever k ≥ K.

(⇐) Since every open ball is a neighbourhood, this direction follows directly from the definition of convergence.

Theorem. Let (X, d) be a metric space. Then

(i) The union of any collection of open sets is open.
(ii) The intersection of finitely many open sets is open.
(iii) ∅ and X are open.

Proof.

(i) Let U = ⋃_α Vα, where each Vα is open. If x ∈ U, then x ∈ Vα for some α. Since Vα is open, there exists δ > 0 such that Bδ(x) ⊆ Vα. So Bδ(x) ⊆ ⋃_α Vα = U. So U is open.

(ii) Let U = ⋂_{i=1}^n Vi, where each Vi is open. If x ∈ U, then x ∈ Vi for all i = 1, · · ·, n. So there exist δi > 0 with Bδi(x) ⊆ Vi.
Take δ = min{δ1, · · ·, δn}. Then δ > 0 and Bδ(x) ⊆ Vi for all i. So Bδ(x) ⊆ U. So U is open.

(iii) ∅ satisfies the definition of an open subset vacuously. X is open since for any x, B1(x) ⊆ X.

This theorem is not important in this course. However, this will be a key defining property we will use when we define topological spaces in IB Metric and Topological Spaces.

We can now define closed subsets and characterize them using open subsets, in exactly the same way as for normed spaces.

Definition (Limit point). Let (X, d) be a metric space and E ⊆ X. A point y ∈ X is a limit point of E if there exists a sequence xk ∈ E, xk ̸= y, such that xk → y.

Definition (Closed subset). A subset E ⊆ X is closed if E contains all its limit points.

Proposition. A subset is closed if and only if its complement is open.

Proof. Exactly the same as that of normed spaces.

It is useful to observe that y ∈ X is a limit point of E if and only if (Br(y) \ {y}) ∩ E ̸= ∅ for all r > 0.

We can write down an analogous theorem for closed sets:

Theorem. Let (X, d) be a metric space. Then

(i) The intersection of any collection of closed sets is closed.
(ii) The union of finitely many closed sets is closed.
(iii) ∅ and X are closed.

Proof. By taking complements of the result for open subsets.

Proposition. Let (X, d) be a metric space and x ∈ X. Then the singleton {x} is a closed subset, and hence any finite subset is closed.

Proof. Let y ∈ X \ {x}. So d(x, y) > 0. Then Bd(y,x)(y) ⊆ X \ {x}. So X \ {x} is open, and hence {x} is closed. Alternatively, since {x} has no limit points, it vacuously contains all its limit points. So it is closed.
5.3 Cauchy sequences and completeness

Definition (Cauchy sequence). Let (X, d) be a metric space. A sequence (xn) in X is Cauchy if

(∀ε > 0)(∃N )(∀n, m ≥ N ) d(xn, xm) < ε.

Proposition. Let (X, d) be a metric space. Then

(i) Any convergent sequence is Cauchy.
(ii) If a Cauchy sequence has a convergent subsequence, then the original sequence converges to the same limit.

Proof.

(i) If xk → x, then d(xm, xn) ≤ d(xm, x) + d(xn, x) → 0 as m, n → ∞.

(ii) Suppose xkj → x. Since (xk) is Cauchy, given ε > 0, we can choose N such that d(xn, xm) < ε/2 for all n, m ≥ N. We can also choose j0 such that kj0 ≥ N and d(xkj0 , x) < ε/2. Then for any n ≥ N, we have

d(xn, x) ≤ d(xn, xkj0 ) + d(xkj0 , x) < ε.

Definition (Complete metric space). A metric space (X, d) is complete if all Cauchy sequences converge to a point in X.

Example. Let X = Rn with the Euclidean metric. Then X is complete.

It is easy to produce incomplete metric spaces. Since arbitrary subsets of metric spaces are subspaces, we can just remove some random elements to make it incomplete.

Example. Let X = (0, 1) ⊆ R with the Euclidean metric. Then this is incomplete, since (1/k) is Cauchy but has no limit in X. Similarly, X = R \ {0} is incomplete.

Note, however, that it is possible to construct a metric d′ on X = R \ {0} such that d′ induces the same topology on X, but makes X complete. This shows that completeness is not a topological property. The actual construction is left as an exercise on the example sheet.
Example. We can create an easy example of an incomplete metric on Rn. We start by defining h : Rn → Rn by

h(x) = x/(1 + ∥x∥),

where ∥ · ∥ is the Euclidean norm. We can check that this is injective: if h(x) = h(y), taking norms gives

∥x∥/(1 + ∥x∥) = ∥y∥/(1 + ∥y∥).

So we must have ∥x∥ = ∥y∥. The denominators then agree, so h(x) = h(y) forces x = y.

Now we define

d(x, y) = ∥h(x) − h(y)∥.

It is an easy check that this is a metric on Rn. In fact, h maps Rn into B1(0), and h is a homeomorphism (i.e. continuous bijection with continuous inverse) between Rn and the unit ball B1(0), both with the Euclidean metric.

To show that this metric is incomplete, we can consider the sequence xk = (k − 1)e1, where e1 = (1, 0, 0, · · ·, 0) is the usual basis vector. Then (xk) is Cauchy in (Rn, d). To show this, first note that

h(xk) = (1 − 1/k) e1.

Hence we have

d(xn, xm) = ∥h(xn) − h(xm)∥ = |1/n − 1/m| → 0

as m, n → ∞. So it is Cauchy.

To show it does not converge in (Rn, d), suppose d(xk, x) → 0 for some x. Since

d(xk, x) = ∥h(xk) − h(x)∥ ≥ |∥h(xk)∥ − ∥h(x)∥|,

we must have ∥h(x)∥ = lim_{k→∞} ∥h(xk)∥ = 1. However, there is no element with ∥h(x)∥ = 1.

What is happening in this example is that we are pulling the whole Rn into the unit ball. Then under this metric, a sequence that “goes to infinity” in the usual norm will be Cauchy in this metric, but we have nothing at infinity for it to converge to.
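A quick numerical look at this example (our own illustration): the points xk = (k − 1)e1 march off to infinity in the Euclidean norm, yet their mutual d-distances shrink exactly like |1/n − 1/m|:

import numpy as np

def h(x):
    # Pull R^n into the open unit ball: h(x) = x / (1 + ||x||).
    return x / (1 + np.linalg.norm(x))

def d(x, y):
    # The incomplete metric d(x, y) = ||h(x) - h(y)||.
    return np.linalg.norm(h(x) - h(y))

e1 = np.array([1.0, 0.0, 0.0])
x = lambda k: (k - 1) * e1

for n, m in [(10, 20), (100, 200), (1000, 2000)]:
    print(d(x(n), x(m)), abs(1/n - 1/m))  # the two columns agree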
Suppose we have a complete metric space (X, d). We know that we can form arbitrary subspaces by taking subsets of X. When will these be complete? Clearly they have to be closed, since they have to include all their limit points. It turns out closedness is also a sufficient condition.

Theorem. Let (X, d) be a metric space, Y ⊆ X any subset. Then

(i) If (Y, d|Y ×Y ) is complete, then Y is closed in X.
(ii) If (X, d) is complete, then (Y, d|Y ×Y ) is complete if and only if it is closed.

Proof.

(i) Let x ∈ X be a limit point of Y. Then there is some sequence xk → x, where each xk ∈ Y. Since (xk) is convergent, it is a Cauchy sequence. Hence it is Cauchy in Y. By completeness of Y, (xk) has to converge to some point in Y. By uniqueness of limits, this limit must be x. So x ∈ Y. So Y contains all its limit points.

(ii) We have just shown that if Y is complete, then it is closed. Now suppose Y is closed. Let (xk) be a Cauchy sequence in Y. Then (xk) is Cauchy in X. Since X is complete, xk → x for some x ∈ X. Since Y is closed and x is the limit of a sequence in Y, we must have x ∈ Y. So (xk) converges in Y.

5.4 Compactness

Definition ((Sequential) compactness). A metric space (X, d) is (sequentially) compact if every sequence in X has a convergent subsequence. A subset K ⊆ X is said to be compact if (K, d|K×K) is compact. In other words, K is compact if every sequence in K has a subsequence that converges to some point in K.

Note that when we say every sequence has a convergent subsequence, we do not require it to be bounded. This is unlike the statement of the Bolzano-Weierstrass theorem. In particular, R is not compact.
It follows from the definition that compactness is a topological property, since it is defined in terms of convergence, and convergence is defined in terms of open sets.

The following theorem relates completeness with compactness.

Theorem. All compact spaces are complete and bounded.

Note that X is bounded iff X ⊆ Br(x0) for some r ∈ R, x0 ∈ X (or X is empty).

Proof. Let (X, d) be a compact metric space. Let (xk) be Cauchy in X. By compactness, it has some convergent subsequence, say xkj → x. So xk → x by our earlier proposition on Cauchy sequences. So X is complete.

If (X, d) is not bounded, then for any x0, there is a sequence (xk) such that d(xk, x0) > k for every k. But then (xk) cannot have a convergent subsequence. Otherwise, if xkj → x, then

d(xkj , x0) ≤ d(xkj , x) + d(x, x0)

would be bounded, which is a contradiction.

This implies that if (X, d) is a metric space, E ⊆ X, and E is compact, then E is bounded, i.e. E ⊆ BR(x0) for some x0 ∈ X, R > 0, and E with the subspace metric is complete. Hence E is closed as a subset of X.

The converse is not true. For example, recall that if we have an infinite-dimensional normed vector space, then the closed unit sphere is complete and bounded, but not compact. Alternatively, we can take X = R with the metric d(x, y) = min{1, |x − y|}. This is clearly bounded (by 1), and it is easy to check that it is complete. However, it is not compact, since the sequence xk = k has no convergent subsequence.

However, we can strengthen the condition of boundedness to total boundedness, and get the equivalence between “completeness and total boundedness” and compactness.

Definition (Totally bounded*). A metric space (X, d) is said to be totally bounded if for all ε > 0, there is an integer N ∈ N and points x1, · · ·, xN ∈ X such that

X = ⋃_{i=1}^N Bε(xi).
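To make total boundedness concrete, here is a small sketch of ours building an explicit ε-net for the unit square in R2 with the Euclidean metric, i.e. finitely many centres whose ε-balls cover it:

import itertools
import math

def eps_net_unit_square(eps):
    # Return finitely many centres whose eps-balls cover [0, 1]^2.
    # Grid spacing s <= eps/sqrt(2) puts every point within
    # s*sqrt(2)/2 <= eps/2 < eps of its nearest grid point.
    n = math.ceil(math.sqrt(2) / eps) + 1
    pts = [i / (n - 1) for i in range(n)]
    return list(itertools.product(pts, pts))

net = eps_net_unit_square(0.1)
print(len(net), "centres suffice for eps = 0.1")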
It is easy to check that being totally bounded implies being bounded. We then have the following strengthening of the previous theorem.

Theorem (non-examinable). Let (X, d) be a metric space. Then X is compact if and only if X is complete and totally bounded.

Proof. (⇐) Let X be complete and totally bounded, and let (yi) be a sequence in X. For every j ∈ N, there exists a finite set of points Ej such that every point of X is within 1/j of one of them.

Since E1 is finite, there is some x1 ∈ E1 such that there are infinitely many yi’s in B(x1, 1). Pick the first yi in B(x1, 1) and call it yi1. Now there is some x2 ∈ E2 such that there are infinitely many yi’s in B(x1, 1) ∩ B(x2, 1/2). Pick the one with smallest value of i > i1, and call it yi2. Continue to infinity. This procedure gives a sequence xj ∈ Ej and a subsequence (yik ), with

yin ∈ ⋂_{j=1}^n B(xj, 1/j).

It is easy to see that (yin ) is Cauchy, since if m > n, then d(yim , yin ) < 2/n. By completeness of X, this subsequence converges.

(⇒) Compactness implying completeness is proved above. Suppose X is not totally bounded. We show it is not compact by constructing a sequence with no Cauchy subsequence. Suppose ε is such that there is no finite set of points x1, · · ·, xN with X = ⋃_{i=1}^N Bε(xi). We will construct our sequence iteratively.

Start by picking an arbitrary y1. Pick y2 such that d(y1, y2) ≥ ε. This exists, or else Bε(y1) covers all of X. Now given y1, · · ·, yn such that d(yi, yj) ≥ ε for all i, j = 1, · · ·, n, i ̸= j, we pick yn+1 such that d(yn+1, yj) ≥ ε for all j = 1, · · ·, n. Again, this exists, or else ⋃_{i=1}^n Bε(yi) covers X. Then clearly the sequence (yn) is not Cauchy. So done.
In IID Linear Analysis, we will prove the Arzelà-Ascoli theorem, which characterizes the compact subsets of the space C([a, b]) in a very concrete way, and which is in some sense a strengthening of this result.

5.5 Continuous functions

We are going to look at continuous mappings between metric spaces.

Definition (Continuity). Let (X, d) and (X ′, d′) be metric spaces. A function f : X → X ′ is continuous at y ∈ X if

(∀ε > 0)(∃δ > 0)(∀x) d(x, y) < δ ⇒ d′(f (x), f (y)) < ε.

This is true if and only if for every ε > 0, there is some δ > 0 such that Bδ(y) ⊆ f −1(Bε(f (y))).

f is continuous if f is continuous at each y ∈ X.

Definition (Uniform continuity). f is uniformly continuous on X if

(∀ε > 0)(∃δ > 0)(∀x, y ∈ X) d(x, y) < δ ⇒ d′(f (x), f (y)) < ε.

This is true if and only if for all ε > 0, there is some δ > 0 such that for all y, we have Bδ(y) ⊆ f −1(Bε(f (y))).

Definition (Lipschitz function and Lipschitz constant). f is said to be Lipschitz on X if there is some K ∈ [0, ∞) such that for all x, y ∈ X,

d′(f (x), f (y)) ≤ Kd(x, y).

Any such K is called a Lipschitz constant.

It is easy to show Lipschitz ⇒ uniformly continuous ⇒ continuous. We have seen many examples showing that continuity does not imply uniform continuity. To show that uniform continuity does not imply Lipschitz, take X = X ′ = R. We define the metrics
as

d(x, y) = min{1, |x − y|}, d′(x, y) = |x − y|.

Now consider the function f : (X, d) → (X ′, d′) defined by f (x) = x. We can then check that this is uniformly continuous but not Lipschitz.

Note that the statement that metrics d and d′ are Lipschitz equivalent is equivalent to saying that the two identity maps i : (X, d) → (X, d′) and i′ : (X, d′) → (X, d) are Lipschitz, hence the name.

Note also that the metric itself is a Lipschitz map for any metric. Here we are viewing the metric as a function d : X × X → R, with the metric on X × X defined as

˜d((x1, y1), (x2, y2)) = d(x1, x2) + d(y1, y2).

This is a consequence of the triangle inequality, since

d(x1, y1) ≤ d(x1, x2) + d(x2, y2) + d(y2, y1).

Moving the middle term to the left gives

d(x1, y1) − d(x2, y2) ≤ ˜d((x1, y1), (x2, y2)).

Swapping the roles of the two pairs and combining, we can put in the absolute value to obtain

|d(x1, y1) − d(x2, y2)| ≤ ˜d((x1, y1), (x2, y2)).

Recall that at the very beginning, we proved that a continuous map on a closed, bounded interval is automatically uniformly continuous. This is true whenever the domain is compact.

Theorem. Let (X, d) be a compact metric space, and (X ′, d′) any metric space. If f : X → X ′ is continuous, then f is uniformly continuous.

This is exactly the same proof as what we had for the [0, 1] case.

Proof. We prove this by contradiction. Suppose f : X → X ′ is not uniformly continuous. Then there is some ε > 0 such that for each δ = 1/n, there are some xn, yn with d(xn, yn) < 1/n but d′(f (xn), f (yn)) > ε.
By compactness of X, (xn) has a convergent subsequence (xni ) → x. Then we also have yni → x. So by continuity, we must have f (xni ) → f (x) and f (yni ) → f (x). But d′(f (xni ), f (yni )) > ε for all i. This is a contradiction.
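To see the role of compactness in this theorem numerically (our own illustration): f (x) = x² admits witness pairs breaking uniform continuity on all of R, but on the compact set [0, 1] it is Lipschitz with constant 2, hence uniformly continuous, as the theorem predicts:

# Illustration with f(x) = x^2 and epsilon = 1.
# On R, for every delta we can exhibit x, y with |x - y| < delta
# but |f(x) - f(y)| >= 1, so f is not uniformly continuous on R.
f = lambda x: x * x
for delta in [0.1, 0.01, 0.001]:
    x = 1.0 / delta           # far out, where f is steep
    y = x + delta / 2         # |x - y| < delta
    print(delta, abs(f(x) - f(y)))  # always >= 1
# On [0, 1], |f(x) - f(y)| = (x + y)|x - y| <= 2|x - y|,
# so delta = epsilon/2 works uniformly: no such pairs exist there.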
In the proof, we have secretly used (part of) the following characterization of continuity:

Theorem. Let (X, d) and (X ′, d′) be metric spaces, and f : X → X ′. Then the following are equivalent:

(i) f is continuous at y.
(ii) f (xk) → f (y) for every sequence (xk) in X with xk → y.
(iii) For every neighbourhood V of f (y), there is a neighbourhood U of y such that U ⊆ f −1(V ).

Note that the definition of continuity says something like (iii), but with open balls instead of open sets. So this should not be surprising.

Proof.

– (i) ⇔ (ii): The argument for this is the same as for normed spaces.

– (i) ⇒ (iii): Let V be a neighbourhood of f (y). Then by definition there is ε > 0 such that Bε(f (y)) ⊆ V. By continuity of f, there is some δ such that Bδ(y) ⊆ f −1(Bε(f (y))) ⊆ f −1(V ). Set U = Bδ(y) and done.

– (iii) ⇒ (i): For any ε, use the hypothesis with V = Bε(f (y)) to get a neighbourhood U of y such that U ⊆ f −1(V ) = f −1(Bε(f (y))). Since U is open, there is some δ such that Bδ(y) ⊆ U. So we get Bδ(y) ⊆ f −1(Bε(f (y))). So we get continuity.

Corollary. A function f : (X, d) → (X ′, d′) is continuous if and only if f −1(V ) is open in X whenever V is open in X ′.

Proof. Follows directly from the equivalence of (i) and (iii) in the theorem above.

5.6 The contraction mapping theorem

If you have already taken IB Metric and Topological Spaces, then you were probably bored by the above sections, since you’ve already met them all. Finally, we get to something new. This section is comprised of just two theorems. The first is the contraction mapping theorem, and we will use it to prove the Picard-Lindelöf existence theorem. Later, we will prove the inverse function theorem using the contraction mapping theorem. All of these are really powerful and important theorems in analysis. They have many more applications and useful corollaries, but we do not have time to get into those.

Definition (Contraction mapping). Let (X, d) be a metric space. A mapping f : X → X is a contraction if there exists some λ with 0 ≤ λ < 1 such that

d(f (x), f (y)) ≤ λd(x, y)

for all x, y ∈ X.

Note that a contraction mapping is by definition Lipschitz and hence (uniformly) continuous.

Theorem (Contraction mapping theorem). Let X be a (non-empty) complete metric space, and f : X → X a contraction. Then f has a unique fixed point, i.e. there is a unique x such that f (x) = x.

Moreover, if f : X → X is a function such that f (m) : X → X (i.e. f composed with itself m times) is a contraction for some m, then f has a unique fixed point.

We can see finding fixed points as the process of solving equations. One important application we will have is to use this to solve differential equations.

Note that the theorem is false if we drop the completeness assumption. For example, f : (0, 1) → (0, 1) defined by f (x) = x/2 is clearly a contraction with no fixed point. The theorem is also false if we drop the assumption λ < 1. In fact, it is not enough to assume d(f (x), f (y)) < d(x, y) for all x, y.
A counterexample is to be found on example sheet 3.

Proof. We first focus on the case where f itself is a contraction.

Uniqueness is straightforward. By assumption, there is some 0 ≤ λ < 1 such that d(f (x), f (y)) ≤ λd(x, y) for all x, y ∈ X. If x and y are both fixed points, then this says

d(x, y) = d(f (x), f (y)) ≤ λd(x, y).

This is possible only if d(x, y) = 0, i.e. x = y.

To prove existence, the idea is to pick a point x0 and keep applying f. Let x0 ∈ X. We define the sequence (xn) inductively by xn+1 = f (xn). We first show that this is Cauchy. For any n ≥ 1, we can compute

d(xn+1, xn) = d(f (xn), f (xn−1)) ≤ λd(xn, xn−1) ≤ λ^n d(x1, x0).

Since this is true for any n, for m > n, we have

d(xm, xn) ≤ d(xm, xm−1) + d(xm−1, xm−2) + · · · + d(xn+1, xn)
          = Σ_{j=n}^{m−1} d(xj+1, xj)
          ≤ Σ_{j=n}^{m−1} λ^j d(x1, x0)
          ≤ d(x1, x0) Σ_{j=n}^∞ λ^j
          = (λ^n/(1 − λ)) d(x1, x0).

Note that we have again used the property that λ < 1. This implies d(xm, xn) → 0 as m, n → ∞. So the sequence is Cauchy.

By the completeness of X, there exists some x ∈ X such that xn → x. Since f is a contraction, it is continuous. So f (xn) → f (x). However, by definition f (xn) = xn+1. So taking the limit on both sides, we get f (x) = x. So x is a fixed point.
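The existence proof is effectively an algorithm: iterate f from any starting point, and the error after n steps is at most (λ^n/(1 − λ)) d(x1, x0). A minimal sketch (our own example: cos maps [0, 1] into [cos 1, 1] ⊆ [0, 1] and |cos′| ≤ sin 1 < 1 there, so by the mean value theorem it is a contraction on the complete space [0, 1]):

import math

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    # Iterate x_{n+1} = f(x_n) until successive terms are within tol.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

x = fixed_point(math.cos, 0.5)
print(x, math.cos(x) - x)   # ~0.739085..., residual ~0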