space, i.e. the map from the set of covering spaces to the subgroups of π1 is surjective. Now we want to prove injectivity. To do so, we need a generalization of the homotopy lifting lemma.

Suppose we have path-connected spaces (Y, y0), (X, x0) and (˜X, ˜x0), with f : (Y, y0) → (X, x0) a continuous map and p : (˜X, ˜x0) → (X, x0) a covering map. When does a lift of f to ˜f : (Y, y0) → (˜X, ˜x0) exist? The answer is given by the lifting criterion.

Lemma (Lifting criterion). Let p : (˜X, ˜x0) → (X, x0) be a covering map of path-connected based spaces, and (Y, y0) a path-connected, locally path-connected based space. If f : (Y, y0) → (X, x0) is a continuous map, then there is a (unique) lift ˜f : (Y, y0) → (˜X, ˜x0) such that p ◦ ˜f = f if and only if the following condition holds:

f∗π1(Y, y0) ≤ p∗π1(˜X, ˜x0).

Note that uniqueness comes from the uniqueness of lifts, so this lemma is really about existence. Also, note that the condition holds trivially when Y is simply connected, e.g. when it is an interval (path lifting) or a square (homotopy lifting). So paths and homotopies can always be lifted.

Proof. One direction is easy: if ˜f exists, then f = p ◦ ˜f, so f∗ = p∗ ◦ ˜f∗. So we know that im f∗ ⊆ im p∗. So done.

In the other direction, uniqueness follows from the uniqueness of lifts, so we only need to prove existence. We define ˜f as follows.
Given a y ∈ Y, there is some path αy : y0 ⇝ y. Then f maps this to a path βy : x0 ⇝ f (y) in X. By path lifting, this path lifts uniquely to ˜βy in ˜X. Then we set ˜f (y) = ˜βy(1). Note that if ˜f exists, then this must be what ˜f sends y to. What we need to show is that this is well-defined.

Suppose we picked a different path α′y : y0 ⇝ y. Then α′y would differ from αy by a loop γ in Y. Our condition that f∗π1(Y, y0) ≤ p∗π1(˜X, ˜x0) says f ◦ γ is the image of a loop in ˜X. So ˜βy and ˜β′y also differ by a loop in ˜X, and hence have the same end point. So this shows that ˜f is well-defined.

Finally, we show that ˜f is continuous. First, observe that any open set U ⊆ ˜X can be written as a union of ˜V such that p|˜V : ˜V → p(˜V ) is a homeomorphism. Thus, it suffices to show that if p|˜V : ˜V → p(˜V ) = V is a homeomorphism, then ˜f −1(˜V ) is open. Let y ∈ ˜f −1(˜V ), and let x = f (y). Since f −1(V ) is open and Y is locally path-connected, we can pick an open W ⊆ f −1(V ) such that y ∈ W and W is path-connected. We claim that W ⊆ ˜f −1(˜V ). Indeed, if z ∈ W, then we can pick a path γ from y to z. Then f sends this to a path from x to f (z). The lift of this path to ˜X is given by (p|˜V )−1(f (γ)), whose end point is (p|˜V )−1(f (z)) ∈ ˜V. So it follows that ˜f (z) = (p|˜V )−1(f (z)) ∈ ˜V.
Now we prove that every subgroup of π1 comes from exactly one covering space. What this statement properly means is made precise in the following proposition:

Proposition. Let (X, x0), (˜X1, ˜x1), (˜X2, ˜x2) be path-connected based spaces, and pi : (˜Xi, ˜xi) → (X, x0) be covering maps. Then we have

p1∗π1(˜X1, ˜x1) = p2∗π1(˜X2, ˜x2)

if and only if there is some homeomorphism h : (˜X1, ˜x1) → (˜X2, ˜x2) such that p1 = p2 ◦ h.

Note that this is a stronger statement than just saying the two covering spaces are homeomorphic. We are saying that we can find a nice homeomorphism that works well with the covering map p.

Proof. If such a homeomorphism exists, then clearly the subgroups are equal. If the subgroups are equal, we apply the lifting criterion to p1, viewed as a map into the base of the covering p2. This gives a lift h = ˜p1 : (˜X1, ˜x1) → (˜X2, ˜x2) with p2 ◦ ˜p1 = p1. By symmetry, we can get h−1 = ˜p2 : (˜X2, ˜x2) → (˜X1, ˜x1) with p1 ◦ ˜p2 = p2. To show ˜p2 is indeed the inverse of ˜p1, note that ˜p2 ◦ ˜p1 is a lift of p2 ◦ ˜p1 = p1. Since id˜X1 is also a lift, by the uniqueness of lifts, we know ˜p2 ◦ ˜p1 is the identity map. Similarly, ˜p1 ◦ ˜p2 is also the identity.
Now what we would like to do is to forget about the basepoints. What happens when we change basepoints? Recall that the effect of changing basepoints is that we conjugate our group. This doesn't actually change the group itself, but if we are talking about subgroups, conjugation can send a subgroup into a different subgroup. Hence, if we do not specify the basepoint, we don't get a subgroup of π1, but a conjugacy class of subgroups.

Proposition. Unbased covering spaces correspond to conjugacy classes of subgroups.

4 Some group theory

Algebraic topology is about translating topology into group theory. Unfortunately, you don't know group theory. Well, maybe you do, but not the right group theory. So let's learn group theory!

4.1 Free groups and presentations

Recall that in IA Groups, we defined, say, the dihedral group to be

D2n = ⟨r, s | rn = s2 = e, srs = r−1⟩.

What does this expression actually mean? Can we formally assign a meaning to this expression?

We start with a simple case — the free group. This is, in some sense, the "freest" group we can have. It is defined in terms of an alphabet and words.

Definition (Alphabet and words). We let S = {sα : α ∈ Λ} be our alphabet, and we have an extra set of symbols S−1 = {sα−1 : α ∈ Λ}. We assume that S ∩ S−1 = ∅. What do we do with alphabets? We write words with them! We define S∗ to be the set of words over S ∪ S−1, i.e. it contains n-tuples x1 · · · xn for any 0 ≤ n < ∞, where each xi ∈ S ∪ S−1.

Example. Let S = {a, b}. Then words could be the empty word ∅, or a, or aba−1b−1, or aa−1aaaaabbbb, etc. We are usually lazy and write aa−1aaaaabbbb as aa−1a5b4.
When we see things like aa−1, we would want to cancel them. This is called elementary reduction.

Definition (Elementary reduction). An elementary reduction takes a word u sα sα−1 v and gives uv, or turns u sα−1 sα v into uv.

Since each reduction shortens the word, and the word is finite in length, we cannot keep reducing for ever. Eventually, we reach a reduced state.

Definition (Reduced word). A word is reduced if it does not admit an elementary reduction.

Example. ∅, a, aba−1b−1 are reduced words, while aa−1aaaaabbbb is not.

Note that there is an inclusion map S → S∗ that sends the symbol sα to the word sα.

Definition (Free group). The free group on the set S, written F (S), is the set of reduced words in S∗ together with the following operations:

(i) Multiplication is given by concatenation followed by elementary reduction to get a reduced word. For example, (aba−1b−1) · (bab) = aba−1b−1bab = ab2.
(ii) The identity is the empty word ∅.
(iii) The inverse of x1 · · · xn is xn−1 · · · x1−1, where, of course, (sα−1)−1 = sα.

The elements of S are called the generators of F (S).

Note that we have not shown that multiplication is well-defined — we might reduce the same word in different ways and reach two different reduced words. We will show that this is indeed well-defined later, using topology!
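To make the reduction procedure concrete, here is a minimal computational sketch (not part of the notes; the function names are illustrative). Words are lists of symbols, with an inverse marked by a trailing "^-1", and a single left-to-right pass with a stack performs all elementary reductions. Note that the algorithm produces some reduced word; that this reduced word is unique is exactly the well-definedness question addressed below.

def reduce_word(word):
    """Apply elementary reductions until none remain.
    A word is a list like ['a', 'a^-1', 'b']; x^-1 denotes the inverse symbol."""
    def inverse(x):
        return x[:-3] if x.endswith('^-1') else x + '^-1'
    stack = []
    for symbol in word:
        if stack and stack[-1] == inverse(symbol):
            stack.pop()          # elementary reduction: cancel s s^-1 or s^-1 s
        else:
            stack.append(symbol)
    return stack

print(reduce_word(['a', 'a^-1', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']))
# ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], i.e. a5b4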
Some people like to define the free group in a different way. This is a cleaner way to define the free group without messing with alphabets and words, but is (for most people) less intuitive. This definition also does not make it clear that the free group F (S) of any set S exists. We will state this definition as a lemma.

Lemma. If G is a group and φ : S → G is a set map, then there exists a unique homomorphism f : F (S) → G such that f ◦ ι = φ, where ι : S → F (S) is the natural inclusion map that sends sα (as a symbol from the alphabet) to sα (as a word).

Proof. Clearly if f exists, then f must send each sα to φ(sα) and each sα−1 to φ(sα)−1. Then the values of f on all other elements are determined by

f (x1 · · · xn) = f (x1) · · · f (xn),

since f is a homomorphism. So if f exists, it must be unique. So it suffices to show that this f is a well-defined homomorphism.

This is well-defined if we define F (S) to be the set of all reduced words, since each reduced word has a unique representation (since it is defined to be the representation itself). To show this is a homomorphism, suppose

x = x1 · · · xn a1 · · · ak,    y = ak−1 · · · a1−1 y1 · · · ym

are reduced words, where y1 ≠ xn−1. Then

xy = x1 · · · xn y1 · · · ym.

Then we can compute

f (x)f (y) = φ(x1) · · · φ(xn)φ(a1) · · · φ(ak)φ(ak)−1 · · · φ(a1)−1φ(y1) · · · φ(ym)
= φ(x1) · · · φ(xn)φ(y1) · · · φ(ym)
= f (xy).

So f is a homomorphism.

We call this a "universal property" of F (S). We can show that F (S) is the unique group satisfying the conditions of this lemma (up to isomorphism), by taking G = F (S) and using the uniqueness properties.
Definition (Presentation of a group). Let S be a set, and let R ⊆ F (S) be any subset. We denote by ⟨⟨R⟩⟩ the normal closure of R, i.e. the smallest normal subgroup of F (S) containing R. This can be given explicitly by

⟨⟨R⟩⟩ = { g1r1±1g1−1 · · · gnrn±1gn−1 : n ∈ N, ri ∈ R, gi ∈ F (S) }.

Then we write

⟨S | R⟩ = F (S)/⟨⟨R⟩⟩.

This is just the usual notation we have for groups. For example, we can write

D2n = ⟨r, s | rn, s2, srsr⟩.

Again, we can define this with a universal property.

Lemma. If G is a group and φ : S → G is a set map that kills all the relations, i.e. if r = s1±1 s2±1 · · · sm±1 ∈ R, then φ(s1)±1φ(s2)±1 · · · φ(sm)±1 = 1, then there exists a unique homomorphism f : ⟨S | R⟩ → G such that f ◦ ι = φ, where ι : S → ⟨S | R⟩ sends each generator to its image in the quotient.

The proof is similar to the previous one. In some sense, this says that all the group ⟨S | R⟩ does is satisfy the relations in R, and nothing else.

Example (The stupid canonical presentation). Let G be a group. We can view the group as a set, and hence obtain a free group F (G). There is also the obvious (identity) set map G → G. Then by the universal property of the free group, there is a surjection f : F (G) → G. Let R = ker f; since ker f is already normal, ⟨⟨R⟩⟩ = R. Then ⟨G | R⟩ is a presentation for G, since the first isomorphism theorem says G ∼= F (G)/ ker f.

This is a really stupid example. For example, even the simplest non-trivial group Z/2 will be written as a quotient of a free group with two generators. However, this tells us that every group has a presentation.
Example. ⟨a, b | b⟩ ∼= ⟨a⟩ ∼= Z.

Example. With a bit of work, we can show that ⟨a, b | ab−3, ba−2⟩ ∼= Z/5. (Indeed, the relations say a = b3 and b = a2, so a = (a2)3 = a6 and hence a5 = e; since b = a2, the group is generated by a, and sending a → 1, b → 2 gives a surjection onto Z/5, so the group is exactly Z/5.)

4.2 Another view of free groups

Recall that we have not yet properly defined free groups, since we did not show that multiplication is well-defined. We are now going to do this using topology.

Again let S be a set. For the following illustration, we will just assume S = {a, b}, but what we will do works for any set S. We define X to be the wedge of two circles, with a single vertex x0 and two edges labelled a and b. We call this a "rose with 2 petals". This is a cell complex, with one 0-cell and |S| 1-cells. For each s ∈ S, we have one 1-cell, es, and we fix a path γs : [0, 1] → es that goes around the 1-cell once. We will call the 0-cells and 1-cells vertices and edges, and call the whole thing a graph.

What's the universal cover of X? Since we are just lifting a 1-complex, the result should be a 1-complex, i.e. a graph. Moreover, this graph is connected and simply connected, i.e. it's a tree. We also know that every vertex in the universal cover is a copy of the vertex in our original graph. So it must have 4 edges attached to it: in X, at the vertex there is an edge labelled a going in, an edge labelled a going out, an edge labelled b going in, and an edge labelled b going out, and the same should be true at each vertex of ˜X. So ˜X is the infinite 4-valent tree, with edges labelled a and b and arrows respecting the labels.

The projection map is then obvious — we send all the vertices in ˜X to x0 ∈ X, and then the edges according to the labels they have, in a way that respects the direction of the arrow. It is easy to show this is really a covering map.

We are now going to show that this tree "is" the free group.
Notice that every word w ∈ S∗ denotes a unique "edge path" in ˜X starting at ˜x0, where an edge path is a sequence of oriented edges ˜e1, · · ·, ˜en such that the "origin" of ˜ei+1 is equal to the "terminus" of ˜ei. For example, the word w = abb−1b−1ba−1b−1 corresponds to the path that follows these edges in turn, starting at ˜x0.

We can note a few things:

(i) ˜X is connected. So for all ˜x ∈ p−1(x0), there is an edge-path ˜γ : ˜x0 ⇝ ˜x.
(ii) If an edge-path ˜γ fails to be locally injective, we can simplify it. How can an edge-path fail to be locally injective? It is fine if we just walk along a path, since we are just tracing out a line. So it fails to be locally injective precisely when two consecutive edges in the path are the same edge with opposite orientations; we can then just remove the redundant pair ˜ei, ˜ei+1 and join ˜ei−1 directly to ˜ei+2. This reminds us of two things — homotopy of paths and elementary reduction of words.
(iii) Each point ˜x ∈ p−1(x0) is joined to ˜x0 by a unique locally injective edge-path.
(iv) For any w ∈ S∗, writing ˜γw for the corresponding edge-path, we know from (ii) that ˜γw is locally injective if and only if w is reduced.

We can thus conclude that there are bijections

F (S) ↔ p−1(x0) ↔ π1(X, x0)

that send ˜x to the word w ∈ F (S) such that ˜γw is a locally injective edge-path ˜x0 ⇝ ˜x, and ˜x to [γ] ∈ π1(X, x0) such that ˜x0 · [γ] = ˜x. So there is a bijection between F (S) and π1(X, x0).
It is easy to see that the operations on F (S) and π1(X, x0) are the same, since they are just concatenating words or paths. So this bijection identifies the two group structures, and hence induces an isomorphism F (S) ∼= π1(X, x0).

4.3 Free products with amalgamation

We managed to compute the fundamental group of the circle. But we want to find the fundamental group of more things. Recall that at the beginning, we defined cell complexes, and said these are the things we want to work with. Cell complexes are formed by gluing things together. So we want to know what happens when we glue things together.

Suppose a space X is constructed from two spaces A, B by gluing (i.e. X = A ∪ B). We would like to describe π1(X) in terms of π1(A) and π1(B). To understand this, we need to understand how to "glue" groups together.

Definition (Free product). Suppose we have groups G1 = ⟨S1 | R1⟩ and G2 = ⟨S2 | R2⟩, where we assume S1 ∩ S2 = ∅. The free product G1 ∗ G2 is defined to be

G1 ∗ G2 = ⟨S1 ∪ S2 | R1 ∪ R2⟩.

This is not a really satisfactory definition. A group can have many different presentations, and it is not clear this is well-defined. However, it is clear that this group exists. Note that there are natural homomorphisms ji : Gi → G1 ∗ G2 that send generators to generators. Then we can have the following universal property of the free product:

Lemma. G1 ∗ G2 is the group such that for any group K and homomorphisms φi : Gi → K, there exists a unique homomorphism f : G1 ∗ G2 → K such that f ◦ j1 = φ1 and f ◦ j2 = φ2.

Proof. It is immediate from the universal property of the definition of presentations.

Corollary. The free product is well-defined.

Proof. The conclusion of the universal property can be seen to characterize G1 ∗ G2 up to isomorphism.
Again, we have a definition in terms of a concrete construction of the group, without making it clear this is well-defined; then we have a universal property that makes it clear this is well-defined, but not clear that the object actually exists. Combining the two would give everything we want.

However, this free product is not exactly what we want, since there is little interaction between G1 and G2. In terms of gluing spaces, this corresponds to gluing A and B when A ∩ B is trivial (i.e. simply connected). What we really need is the free product with amalgamation, as you might have guessed from the title.

Definition (Free product with amalgamation). Suppose we have groups G1, G2 and H, with homomorphisms i1 : H → G1 and i2 : H → G2. The free product with amalgamation is defined to be

G1 ∗H G2 = (G1 ∗ G2) / ⟨⟨ {(j2 ◦ i2(h))−1(j1 ◦ i1(h)) : h ∈ H} ⟩⟩.

Here we are attempting to identify things "in H" as the same, but we need to use the maps jk and ik to map the things from H to G1 ∗ G2. So we want to say "for any h, j1 ◦ i1(h) = j2 ◦ i2(h)", or (j2 ◦ i2(h))−1(j1 ◦ i1(h)) = e. So we quotient by the normal closure of these things.

We have the universal property:

Lemma. G1 ∗H G2 is the group such that for any group K and homomorphisms φi : Gi → K with φ1 ◦ i1 = φ2 ◦ i2, there exists a unique homomorphism f : G1 ∗H G2 → K such that f ◦ j1 = φ1 and f ◦ j2 = φ2.

This is the language we will need to compute fundamental groups. These definitions will (hopefully) become more concrete as we see more examples.
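In terms of presentations (a supplementary remark, not in the original notes): if G1 = ⟨S1 | R1⟩ and G2 = ⟨S2 | R2⟩, then

G1 ∗H G2 = ⟨S1 ∪ S2 | R1 ∪ R2 ∪ {i1(t) i2(t)−1 : t ∈ T}⟩

for any generating set T of H, since imposing j1 ◦ i1 = j2 ◦ i2 on generators of H forces it on all of H. For instance, taking G1 = ⟨a⟩ and G2 = ⟨b⟩ infinite cyclic, H = Z, i1(1) = a2 and i2(1) = b3, we get G1 ∗H G2 ∼= ⟨a, b | a2b−3⟩, the group in which a2 and b3 have been identified.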
5 Seifert-van Kampen theorem

5.1 Seifert-van Kampen theorem

The Seifert-van Kampen theorem is the theorem that tells us what happens when we glue spaces together. Here we let X = A ∪ B, where A, B and A ∩ B are path-connected, and we pick a basepoint x0 ∈ A ∩ B for convenience.

Since we like diagrams, we can write this as a commutative square of spaces, where all arrows are the inclusion (i.e. injective) maps: A ∩ B includes into each of A and B, and A and B include into X. We can consider what happens when we take the fundamental groups of each space. Then we have the induced homomorphisms from π1(A ∩ B, x0) into π1(A, x0) and π1(B, x0), and from these into π1(X, x0). We might guess that π1(X, x0) is just the free product with amalgamation

π1(X, x0) = π1(A, x0) ∗π1(A∩B,x0) π1(B, x0).

The Seifert-van Kampen theorem says that, under mild hypotheses, this guess is correct.

Theorem (Seifert-van Kampen theorem). Let A, B be open subspaces of X such that X = A ∪ B, and A, B, A ∩ B are path-connected. Then for any x0 ∈ A ∩ B, we have

π1(X, x0) = π1(A, x0) ∗π1(A∩B,x0) π1(B, x0).

Note that by the universal property of the free product with amalgamation, we by definition know that there is a unique map π1(A, x0) ∗π1(A∩B,x0) π1(B, x0) → π1(X, x0). The theorem asserts that this map is an isomorphism.

Proof is omitted because time is short.

Example. Consider a higher-dimensional sphere Sn = {v ∈ Rn+1 : |v| = 1} for n ≥ 2.
We want to find π1(Sn). The idea is to write Sn as a union of two open sets. We let n = e1 ∈ Sn ⊆ Rn+1 be the North pole, and s = −e1 be the South pole. We let A = Sn \ {n} and B = Sn \ {s}. By stereographic projection, we know that A, B ∼= Rn.

The hard part is to understand the intersection. To do so, we can draw a cylinder Sn−1 × (−1, 1), and project our A ∩ B onto the cylinder; we can similarly project the cylinder onto A ∩ B. So A ∩ B ∼= Sn−1 × (−1, 1) ≃ Sn−1, since (−1, 1) is contractible.

We can now apply the Seifert-van Kampen theorem. Note that this works only if Sn−1 is path-connected, i.e. n ≥ 2. Then this tells us that

π1(Sn) ∼= π1(Rn) ∗π1(Sn−1) π1(Rn) ∼= 1 ∗π1(Sn−1) 1.

It is easy to see this is the trivial group. We can see this directly from the universal property of the amalgamated free product, or note that it is a quotient of 1 ∗ 1, which is 1. So for n ≥ 2,

π1(Sn) ∼= 1.

We have found yet another simply connected space. However, this is unlike our previous examples. Our previous spaces were simply connected because they were contractible. However, we will later see that Sn is not contractible. So this is genuinely a new, interesting example.

Why did we go through all this work to prove that π1(Sn) ∼= 1? It feels like we can just prove this directly — pick a point that is not in the curve as the North pole, project stereographically to Rn, and contract it there. However, the problem is that space-filling curves exist. We cannot guarantee that we can pick a point not on the curve! It is indeed possible to prove directly that given any curve on Sn (with n ≥ 2), we can deform it slightly so that it is no longer surjective.
Then the above proof strategy works. However, using Seifert-van Kampen is much neater.

Example (RPn). Recall that RPn ∼= Sn/{± id}, and the quotient map Sn → RPn is a covering map. Now that we have proved that Sn is simply connected, we know that Sn is a universal cover of RPn. For any x0 ∈ RPn, we have a bijection π1(RPn, x0) ↔ p−1(x0). Since p−1(x0) has two elements by definition, we know that |π1(RPn, x0)| = 2. So π1(RPn, x0) = Z/2. You will prove a generalization of this in example sheet 2.

Example (Wedge of two circles). We are going to consider the operation of wedging. Suppose we have two topological spaces, and we want to join them together. The natural way to join them is to take the disjoint union. What if we have based spaces? If we have (X, x0) and (Y, y0), we cannot just take the disjoint union, since we will lose our base point. What we do is take the wedge sum X ∨ Y, where we take the disjoint union and then identify the base points x0 ∼ y0.

Suppose we take the wedge sum of two circles, S1 ∨ S1. We would like to pick A, B to be each of the circles, but we cannot, since A and B have to be open. So we take slightly more: each of A and B is one of the circles together with a small open arc of the other circle around x0. We see that both A and B retract to the circle. So π1(A) ∼= π1(B) ∼= Z, while A ∩ B is a cross, which retracts to a point. So π1(A ∩ B) = 1. Hence by the Seifert-van Kampen theorem, we get

π1(S1 ∨ S1, x0) = π1(A, x0) ∗π1(A∩B,x0) π1(B, x0) ∼= Z ∗1 Z ∼= Z ∗ Z ∼= F2,

where F2 is just F (S) for |S| = 2.
We can see that Z ∗ Z ∼= F2 by showing that they satisfy the same universal property. Note that we had already figured this out when we studied the free group, where we realized F2 is the fundamental group of this thing.

More generally, as long as x0, y0 in X and Y are "reasonable", π1(X ∨ Y ) ∼= π1(X) ∗ π1(Y ).

Next, we will exhibit some nice examples of the covering spaces of S1 ∨ S1, i.e. the "rose with 2 petals". Recall that π1(S1 ∨ S1, x0) ∼= F2 ∼= ⟨a, b⟩.

Example. Consider the map φ : F2 → Z/3 which sends a → 1, b → 1. Note that 1 is not the identity, since this is an abelian group, and the identity is 0. This map exists since the universal property tells us we just have to say where the generators go, and the map exists (and is unique).

Now ker φ is a subgroup of F2. So there is a based covering space of S1 ∨ S1 corresponding to it, say ˜X. Let's work out what it is. First, we want to know how many sheets it has, i.e. how many copies of x0 we have. There are three, since we know that the number of sheets is the index of the subgroup, and the index of ker φ is |Z/3| = 3 by the first isomorphism theorem.

Let's try to lift the loop a at ˜x0. Since a ∉ ker φ = π1(˜X, ˜x0), a does not lift to a loop; it goes to another vertex. Similarly, a2 ∉ ker φ = π1(˜X, ˜x0), so lifting a again takes us to the third vertex. Since a3 ∈ ker φ, lifting a once more closes up: the three a-edges form a
loop labelled a through the three vertices.

Note that ab−1 ∈ ker φ. So ab−1 lifts to a loop at ˜x0, which means b goes in the same direction as a: at every vertex the b-edge points to the next vertex around the triangle, parallel to the a-edge. This is our covering space.

This is a fun game to play at home:

(i) Pick a group G (finite groups are recommended).
(ii) Then pick α, β ∈ G and let φ : F2 → G send a → α, b → β.
(iii) Compute the covering space corresponding to φ. (A small computational sketch of this game is given below.)
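Here is a minimal sketch of step (iii) for a finite abelian target (not from the notes; the function name is illustrative). The covering space corresponding to ker φ has one vertex for each element of G (equivalently, each coset of ker φ), with an a-edge from g to g + φ(a) and a b-edge from g to g + φ(b). Taking G = Z/3 and φ(a) = φ(b) = 1 reproduces the picture in the example above.

def covering_graph(n, alpha, beta):
    """Vertices 0..n-1 are the cosets of ker(phi) in F2 for phi onto Z/n;
    return the labelled edge lists of the covering graph."""
    a_edges = [(g, (g + alpha) % n) for g in range(n)]
    b_edges = [(g, (g + beta) % n) for g in range(n)]
    return a_edges, b_edges

a_edges, b_edges = covering_graph(3, 1, 1)
print(a_edges)   # [(0, 1), (1, 2), (2, 0)]  -- the a-edges form a 3-cycle
print(b_edges)   # [(0, 1), (1, 2), (2, 0)]  -- b runs parallel to a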
5.2 The effect on π1 of attaching cells

We have started the course by talking about cell complexes, but largely ignored them afterwards. Finally, we are getting back to them. The process of attaching cells is as follows: we start with a space X, take a map f : Sn−1 → X, and attach Dn to X by gluing along the image of f to get X ∪f Dn. Since we are attaching stuff, we can use the Seifert-van Kampen theorem to analyse this.

Theorem. If n ≥ 3, then π1(X ∪f Dn) ∼= π1(X). More precisely, the map π1(X, x0) → π1(X ∪f Dn, x0) induced by inclusion is an isomorphism, where x0 is a point on the image of f.

This tells us that attaching a high-dimensional disk does not do anything to the fundamental group.

Proof. Again, the difficulty of applying the Seifert-van Kampen theorem is that we need to work with open sets. Let 0 ∈ Dn be the centre of Dn. We let A = X ∪f (Dn \ {0}). Note that Dn \ {0} deformation retracts to the boundary Sn−1, so A deformation retracts to X. Let B = ˚Dn, the interior of Dn. Then

A ∩ B = ˚Dn \ {0} ∼= Sn−1 × (−1, 1).

We cannot use x0 as our basepoint, since this point is not in A ∩ B. Instead, pick an arbitrary y1 ∈ A ∩ B. Since Dn is path-connected, we have a path γ : y1 ⇝ x0, and we can use this to recover the fundamental groups based at x0.

Now the Seifert-van Kampen theorem says

π1(X ∪f Dn, y1) ∼= π1(A, y1) ∗π1(A∩B,y1) π1(B, y1).

Since B is just a disk, and A ∩ B is simply connected (n ≥ 3 implies Sn−1 is simply connected), their fundamental groups are trivial. So we get

π1(X ∪f Dn, y1) ∼= π1(A, y1).

We can now use γ to change base points from y1 to x0. So

π1(X ∪f Dn, x0) ∼= π1(A, x0) ∼= π1(X, x0).

The more interesting case is when we have smaller dimensions.

Theorem. If n = 2, then the natural map π1(X, x0) → π1(X ∪f Dn, x0) is surjective, and the kernel is ⟨⟨[f ]⟩⟩, the normal closure of the class of the attaching loop.

Note that this statement makes sense, since Sn−1 is a circle, and f : S1 → X is a loop in X. This is what we would expect, since if we attach a disk onto the loop given by f, this loop just dies.

Proof. As before, we get

π1(X ∪f Dn, y1) ∼= π1(A, y1) ∗π1(A∩B,y1) π1(B, y1).

Again, B is contractible, and π1(B, y1) ∼= 1. However, π1(A ∩ B, y1) ∼= Z, generated by a loop that is (homotopic to) the loop induced by f. It follows that

π1(A, y1) ∗π1(A∩B,y1) 1 = (π1(A, y1) ∗ 1)/⟨⟨π1(A ∩ B, y1)⟩⟩ ∼= π1(X, x0)/⟨⟨[f ]⟩⟩.
In summary, we have

π1(X ∪f Dn) = π1(X) if n ≥ 3,   and   π1(X ∪f D2) = π1(X)/⟨⟨[f ]⟩⟩.

This is a useful result, since this is how we build up cell complexes. If we want to compute the fundamental groups, we can just work up to the two-cells, and know that the higher-dimensional cells do not have any effect. Moreover, whenever X is a cell complex, we should be able to write down a presentation of π1(X).

Example. Let X be the 2-torus. Possibly, our favourite picture of the torus is not a doughnut, but a square with opposite sides identified: the two horizontal sides are both labelled a and the two vertical sides are both labelled b, with matching directions. This is already a description of the torus as a cell complex! We start with our zero-skeleton X (0), a single point; we then add our two 1-cells to get X (1), a rose with two petals labelled a and b; and we then glue the square to the cell complex, matching up the labels and the directions of the arrows, to get X = X (2). So we have our torus as a cell complex.

What is its fundamental group? There are many ways we can do this computation, but this time we want to do it as a cell complex. We start with X (0). This is a single point, so its fundamental group is π1(X (0)) = 1. When we add our two 1-cells, we get π1(X (1)) = F2 ∼= ⟨a, b⟩. Finally, to get π1(X), we have to quotient out by the boundary of our square, which is just aba−1b−1. So we have

π1(X (2)) = F2/⟨⟨aba−1b−1⟩⟩ = ⟨a, b | aba−1b−1⟩ ∼= Z2.

We have the last isomorphism since we have two generators, and then we make them commute by quotienting out the commutator.

This procedure can be reversed — given a presentation of a group, we can just add the right edges and squares to produce a cell complex with that presentation.
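For example, as a quick sanity check (using the standard fact, not proved in these notes, that RP2 is obtained from S1 by attaching a single 2-cell along the loop that goes around twice): the recipe applied to the presentation ⟨a | a2⟩ produces one 0-cell, one 1-cell a, and one 2-cell attached along a2, so the theorem above gives π1 ∼= ⟨a | a2⟩ ∼= Z/2, agreeing with the covering space computation of π1(RP2) earlier.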
Corollary. For any (finite) group presentation ⟨S | R⟩, there exists a (finite) cell complex (of dimension 2) X such that π1(X) ∼= ⟨S | R⟩.

There really isn't anything that requires finiteness in this proof, but finiteness makes us feel more comfortable.

Proof. Let S = {a1, · · ·, am} and R = {r1, · · ·, rn}. We start with a single point, and get our X (1) by adding a loop about the point for each ai ∈ S. We then take 2-cells e2j for j = 1, · · ·, n, and attach them to X (1) by maps fj : S1 → X (1) given by based loops representing rj ∈ F (S).

Since all groups have presentations, this tells us that all groups are fundamental groups of some spaces (at least those with finite presentations).

5.3 A refinement of the Seifert-van Kampen theorem

We are going to make a refinement of the theorem so that we don't have to worry about that openness problem. We first start with a definition.

Definition (Neighbourhood deformation retract). A subset A ⊆ X is a neighbourhood deformation retract if there is an open set A ⊆ U ⊆ X such that A is a strong deformation retract of U, i.e. there exists a retraction r : U → A with r ≃ idU rel A.

This is something that is true most of the time, in sufficiently sane spaces.

Example. If Y is a subcomplex of a cell complex, then Y is a neighbourhood deformation retract.

Theorem. Let X be a space, and A, B ⊆ X closed subspaces with X = A ∪ B. Suppose that A, B and A ∩ B are path-connected, and A ∩ B is a neighbourhood deformation retract of A and of B. Then for any x0 ∈ A ∩ B,

π1(X, x0) = π1(A, x0) ∗π1(A∩B,x0) π1(B, x0).

This is just like the Seifert-van Kampen theorem, but usually easier to apply, since we no longer have to "fatten up" our A and B to make them open.
Proof. Pick open neighbourhoods A ∩ B ⊆ U ⊆ A and A ∩ B ⊆ V ⊆ B that strongly deformation retract to A ∩ B. Since U and V deformation retract to the path-connected set A ∩ B, they are themselves path-connected, as path-connectedness is preserved by homotopy equivalence.

Let A′ = A ∪ V and B′ = B ∪ U. Since A′ = (X \ B) ∪ V and B′ = (X \ A) ∪ U, it follows that A′ and B′ are open. Since U and V retract to A ∩ B, we know A′ ≃ A and B′ ≃ B. Also, A′ ∩ B′ = (A ∪ V ) ∩ (B ∪ U) ≃ A ∩ B; in particular, it is path-connected. So by Seifert-van Kampen applied to A′ and B′, we get

π1(X, x0) = π1(A′, x0) ∗π1(A′∩B′,x0) π1(B′, x0) = π1(A, x0) ∗π1(A∩B,x0) π1(B, x0).

This is basically what we've done all the time when we enlarge our A and B to become open.

Example. Let X = S1 ∨ S1, the rose with two petals. Let A, B ∼= S1 be the two circles. Then since {x0} = A ∩ B is a neighbourhood deformation retract of A and of B, we know that π1X ∼= π1S1 ∗ π1S1.

5.4 The fundamental group of all surfaces

We have found that the torus has fundamental group Z2, but we already knew this, since the torus is just S1 × S1, and the fundamental group of a product is the product of the fundamental groups, as you have shown on the example sheet. So we want to look at something more interesting. We look at all surfaces.

We start by defining what a surface is. It is surprisingly difficult to get mathematicians to agree on how we can define a surface.
Here we adopt the following definition:

Definition (Surface). A surface is a Hausdorff topological space such that every point has a neighbourhood U that is homeomorphic to R2.

Some people like C more than R, and sometimes they put C2 instead of R2 in the definition, which is confusing, since that would have two complex dimensions and hence four real dimensions. Needless to say, the actual surfaces will also be different and have different homotopy groups. We will just focus on surfaces with two real dimensions.

To find the fundamental group of all surfaces, we rely on the following theorem that tells us what surfaces there are.

Theorem (Classification of compact surfaces). If X is a compact surface, then X is homeomorphic to a space in one of the following two families:

(i) The orientable surface of genus g, Σg. A more formal definition of this family is the following: we start with the 2-sphere, and remove g open discs from it. Then we take g tori, each with an open disc removed, and attach them along the boundary circles.

(ii) The non-orientable surface of genus n, En (so E1 = RP2 and E2 = K, the Klein bottle). This has a similar construction as above: we start with the sphere S2, make a few holes, and then glue Möbius strips to them.

It would be nice to be able to compute fundamental groups of these guys. To do so, we need to write them as polygons with identification.

Example. To obtain a surface of genus two, written Σ2, we start with the square with identified sides that we had for the torus. If we just wanted a torus, we are done (after closing up the square), but now we want a surface of genus 2, so we add another torus: we take an octagon whose boundary, read in order, spells a1 b1 a1−1 b1−1 a2 b2 a2−1 b2−1, with the two sides of each label identified.

To visualize how this works, imagine cutting the octagon apart along a diagonal separating the a1, b1 sides from the a2, b2 sides. This would give two tori, each with a hole, where the boundary of each hole is just the cut line. Then gluing back along the cut gives back our orientable surface of genus 2.
In general, to produce Σg, we produce a polygon with 4g sides, identified in the analogous pattern. Then we get

π1Σg = ⟨a1, b1, · · ·, ag, bg | a1b1a1−1b1−1 · · · agbgag−1bg−1⟩.

Why do we care? The classification theorem tells us that each surface is homeomorphic to some of these orientable and non-orientable surfaces, but it doesn't tell us there is no overlap. It might be that Σ6 ∼= Σ241 via some weird homeomorphism that destroys some holes. However, this result lets us know that all these orientable surfaces are genuinely different. While it is difficult to stare at this fundamental group and say that π1Σg ≇ π1Σg′ for g ≠ g′, we can perform a little trick. We can take the abelianization of the group π1Σg, where we further quotient by all commutators. Then the abelianized fundamental group of Σg will simply be Z2g. These are clearly distinct for different values of g. So all these surfaces are distinct. Moreover, they are not even homotopy equivalent.

The fundamental groups of the non-orientable surfaces are left as an exercise for the reader.

6 Simplicial complexes

So far, we have taken a space X, and assigned some things to it. The first was easy — π0(X). It was easy to calculate and understand. We then spent a lot of time talking about π1(X). What are they good for?

A question we motivated ourselves with was to prove that Rm ∼= Rn implies n = m. π0 was pretty good for the case when n = 1. If Rm ∼= R, then we would have Rm \ {0} ∼= R \ {0} ≃ S0. We know that |π0(S0)| = 2, while |π0(Rm \ {0})| = 1 for m ≠ 1.
This is just a fancy way of saying that R \ {0} is disconnected, while Rm \ {0} is not for m ≠ 1. We can just add 1 to n, and add 1 to our subscript. If Rm ∼= R2, then we have Rm \ {0} ∼= R2 \ {0} ≃ S1. We know that π1(S1) ∼= Z, while π1(Rm \ {0}) ∼= π1(Sm−1) ∼= 1 unless m = 2.

The obvious thing to do is to create some πn(X). But as you noticed, π1 took us quite a long time to define, and was really hard to compute. As we would expect, this only gets harder as we get to higher dimensions. This is indeed possible, but will only be done in Part III courses. The problem is that πn works with groups, and groups are hard. There are too many groups out there. We want to do some easier algebra, and a good choice is linear algebra. Linear algebra is easy. In the rest of the course, we will have things like H0(X) and H1(X), instead of πn, which are more closely related to linear algebra.

Another way to motivate the abandoning of groups is as follows: recall that last time we saw that if X is a finite cell complex, reasonably explicitly defined, then we can write down a presentation ⟨S | R⟩ for π1(X). This sounds easy, except that we don't understand presentations. In fact there is a theorem that says there is no algorithm that decides if a particular group presentation is actually the trivial group. So even though we can compute the fundamental group in terms of presentations, this is not necessarily helpful. On the other hand, linear algebra is easy. Computers can do it. So we will move on to working with linear algebra instead. This is where homology theory comes in. It takes a while for us to define it, but after we finish developing the machinery, things are easy.

6.1 Simplicial complexes

There are many ways of formulating homology theory, and these are all equivalent, at least for sufficiently sane spaces (such as cell complexes). In this course, we will be using simplicial homology, which is relatively more intuitive and can be computed directly.
The drawback is that we will have to restrict to a particular kind of space, known as simplicial complexes. This is not a very serious restriction per se, since many spaces like spheres are indeed simplicial complexes. However, the definition of simplicial homology is based on exactly how we view our space as a simplicial complex, and it will take us quite a lot of work to show that the simplicial homology is indeed a property of the space itself, and not of how we represent it as a simplicial complex.

We now start by defining simplicial complexes, and developing some general theory of simplicial complexes that will become useful later on.

Definition (Affine independence). A finite set of points {a1, · · ·, an} ⊆ Rm is affinely independent iff

t1a1 + · · · + tnan = 0 with t1 + · · · + tn = 0   ⇔   ti = 0 for all i.

Example. When n = 3, three points not lying on a common line are affinely independent, while three collinear points are not.

The proper way of understanding this definition is via the following lemma:

Lemma. a0, · · ·, an ∈ Rm are affinely independent if and only if a1 − a0, · · ·, an − a0 are linearly independent. Alternatively, n + 1 affinely independent points span an n-dimensional thing.

Proof. Suppose a0, · · ·, an are affinely independent, and suppose

λ1(a1 − a0) + · · · + λn(an − a0) = 0.

Then we can rewrite this as

(−λ1 − · · · − λn) a0 + λ1a1 + · · · + λnan = 0.

Now the sum of the coefficients is 0. So affine independence implies that all coefficients are 0. So a1 − a0, · · ·, an − a0 are linearly independent.

On the other hand, suppose a1 − a0, · · ·, an − a0 are linearly independent, and suppose

t0a0 + t1a1 + · · · + tnan = 0 with t0 + t1 + · · · + tn = 0.

Then we can write t0 = −(t1 + · · · + tn), and the first equation reads

0 = (−(t1 + · · · + tn)) a0 + t1a1 + · · · + tnan = t1(a1 − a0) + · · · + tn(an − a0).

So linear independence implies all ti = 0.
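The lemma translates directly into a computation. Here is a minimal numerical sketch (not part of the notes; it assumes numpy is available and the function name is illustrative) that checks affine independence by testing linear independence of the differences ai − a0:

import numpy as np

def affinely_independent(points):
    """points: list of points in R^m.  Returns True iff they are affinely
    independent, by checking that a_1 - a_0, ..., a_n - a_0 are linearly
    independent (the criterion of the lemma above)."""
    a = np.asarray(points, dtype=float)
    diffs = a[1:] - a[0]
    return np.linalg.matrix_rank(diffs) == len(diffs)

print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True: spans a 2-simplex
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False: collinear points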
The relevance is that these can be used to define simplices (which are simple, as opposed to complexes).

Definition (n-simplex). An n-simplex is the convex hull of (n + 1) affinely independent points a0, · · ·, an ∈ Rm, i.e. the set

σ = ⟨a0, · · ·, an⟩ = { t0a0 + · · · + tnan : t0 + · · · + tn = 1, ti ≥ 0 }.

The points a0, · · ·, an are the vertices, and are said to span σ. The (n + 1)-tuple (t0, · · ·, tn) is called the barycentric coordinates of the point t0a0 + · · · + tnan.

Example. When n = 0, a 0-simplex is just a point; when n = 1, we get a line segment; when n = 2, we get a (solid) triangle; and when n = 3, we get a (solid) tetrahedron.

The key motivation of this is that simplices are determined by their vertices. Unlike arbitrary subspaces of Rn, they can be specified by a finite amount of data. We can also easily extract the faces of the simplices.

Definition (Face, boundary and interior). A face of a simplex is a subset (or subsimplex) spanned by a subset of the vertices. The boundary is the union of the proper faces, and the interior is the complement of the boundary. The boundary of σ is usually denoted by ∂σ, while the interior is denoted by ˚σ, and we write τ ≤ σ when τ is a face of σ.

In particular, the interior of a vertex is the vertex itself. Note that these notions of interior and boundary are distinct from the topological notions of interior and boundary.
Example. The standard n-simplex is spanned by the basis vectors {e0, · · ·, en} in Rn+1. For example, when n = 2, we get a triangle sitting in R3 with vertices e0, e1, e2.

We will now glue simplices together to build complexes, or simplicial complexes.

Definition. A (geometric) simplicial complex is a finite set K of simplices in Rn such that

(i) If σ ∈ K and τ is a face of σ, then τ ∈ K.
(ii) If σ, τ ∈ K, then σ ∩ τ is either empty or a face of both σ and τ.

Definition (Vertices). The vertices of K are the zero simplices of K, denoted VK.

Example. Two triangles (with all their faces) meeting along a common edge or at a common vertex form a simplicial complex; two triangles whose intersection is not a face of both of them do not.

Technically, a simplicial complex is defined to be a set of simplices, which are just collections of points. It is not a subspace of Rn. Hence we have the following definition:

Definition (Polyhedron). The polyhedron defined by K is the union of the simplices in K, and is denoted by |K|.

We make this distinction because distinct simplicial complexes may have the same polyhedron; for example, a segment can be subdivided into two shorter segments, giving a different simplicial complex with the same polyhedron.

Just as in cell complexes, we can define things like dimensions.

Definition (Dimension and skeleton). The dimension of K is the highest dimension of a simplex of K. The d-skeleton K (d) of K is the union of the n-simplices in K for n ≤ d.

Note that since these are finite and live inside Rn, we know that |K| is always compact and Hausdorff.

Usually, when we are given a space, say Sn, it is not defined to be a simplicial complex. We can "make" it a simplicial complex by a triangulation.

Definition (Triangulation). A triangulation of a space X is a homeomorphism h : |K| → X, where K is some simplicial complex.
Example. Let σ be the standard n-simplex. The boundary ∂σ is homeomorphic to Sn−1 (e.g. the boundary of a (solid) triangle is a circle up to homeomorphism). This is called the simplicial (n − 1)-sphere.

We can also triangulate our Sn in a different way:

Example. In Rn+1, consider the simplices ⟨±e0, · · ·, ±en⟩ for each possible combination of signs. So we have 2n+1 such simplices in total. Then they (together with all their faces) define a simplicial complex K, and |K| ∼= Sn. The nice thing about this triangulation is that the simplicial complex is invariant under the antipodal map. So not only can we think of this as a triangulation of the sphere, but as a triangulation of RPn as well.

As always, we don't just look at objects themselves, but also maps between them.

Definition (Simplicial map). A simplicial map f : K → L is a function f : VK → VL such that if ⟨a0, · · ·, an⟩ is a simplex in K, then {f (a0), · · ·, f (an)} spans a simplex of L.

The nice thing about simplicial maps is that we only have to specify where the vertices go, and there are only finitely many vertices. So we can completely specify a simplicial map by writing down a finite amount of information. It is important to note that we say {f (a0), · · ·, f (an)} spans a simplex of L as a set. In particular, the images are allowed to have repeats.

Example. Suppose K is the standard 2-simplex with vertices a0, a1, a2 (and all its faces). An assignment with f (a0) = f (a1) = v and f (a2) = w does not define a simplicial map when {v, w} does not span a simplex of L: ⟨a1, a2⟩ is a simplex in K, but {f (a1), f (a2)} does not span a simplex. On the other hand, if {v, w} does span a 1-simplex of L, then this is a simplicial map; note that {f (a0), f (a1), f (a2)} = {v, w} also spans that 1-simplex, because we are treating the collection of three vertices as a set, and not as a simplex. Similarly, collapsing the whole simplex to a single vertex also gives a simplicial map.
The following lemma is obvious, but we will need it later on.

Lemma. If K is a simplicial complex, then every point x ∈ |K| lies in the interior of a unique simplex.

As we said, simplicial maps are nice, but they are not exactly what we want. We want to have maps between spaces.

Lemma. A simplicial map f : K → L induces a continuous map |f | : |K| → |L|, and furthermore, we have |f ◦ g| = |f | ◦ |g|.

There is an obvious way to define this map. We know how to map vertices, and then just extend everything linearly.

Proof. For any point in a simplex σ = ⟨a0, · · ·, an⟩, we define

|f |(t0a0 + · · · + tnan) = t0f (a0) + · · · + tnf (an).

The result is in |L| because {f (ai)} spans a simplex. It is not difficult to see this is well-defined when the point lies on the boundary of a simplex. This is clearly continuous on σ, and is hence continuous on |K| by the gluing lemma. The final property is obviously true by definition.

6.2 Simplicial approximation

This is all very well, but we are really interested in continuous maps. So given a continuous map f, we would like to find a related simplicial map g. In particular, we want to find a simplicial map g that "approximates" f in some sense. The definition we will write down is slightly awkward, but it turns out this is the most useful definition.
Definition (Open star and link). Let x ∈ |K|. The open star of x is the union of all the interiors of the simplices that contain x, i.e.

StK(x) = ⋃ { ˚σ : x ∈ σ ∈ K }.

The link of x, written LkK(x), is the union of all those simplices that do not contain x, but are faces of a simplex that does contain x.

Definition (Simplicial approximation). Let f : |K| → |L| be a continuous map between the polyhedra. A function g : VK → VL is a simplicial approximation to f if for all v ∈ VK,

f (StK(v)) ⊆ StL(g(v)).   (∗)

The following lemma tells us why this is a good definition:

Lemma. If f : |K| → |L| is a map between polyhedra, and g : VK → VL is a simplicial approximation to f, then g is a simplicial map, and |g| ≃ f. Furthermore, if f is already simplicial on some subcomplex M ⊆ K, then we get g|M = f |M, and the homotopy can be made rel M.

Proof. First we want to check that g is really a simplicial map if it satisfies (∗). Let σ = ⟨a0, · · ·, an⟩ be a simplex in K. We want to show that {g(a0), · · ·, g(an)} spans a simplex in L. Pick an arbitrary x ∈ ˚σ. Since σ contains each ai, we know that x ∈ StK(ai) for all i. Hence we know that

f (x) ∈ f (StK(a0)) ∩ · · · ∩ f (StK(an)) ⊆ StL(g(a0)) ∩ · · · ∩ StL(g(an)).

Hence we know that there is one simplex, say τ, whose interior contains f (x) and which contains all the g(ai). Since each g(ai) is a vertex in L, each g(ai) must be a vertex of τ. So they span a face of τ, as required.

We now want to prove that |g| ≃ f. We let H : |K| × I → |L| ⊆ Rm be defined by

(x, t) → t|g|(x) + (1 − t)f (x).

This is clearly continuous.
So we need to check that im H ⊆ |L|. But we know that both |g|(x) and f (x) live in τ, and τ is convex. It thus follows that H({x} × I) ⊆ τ ⊆ |L|.

To prove the last part, it suffices to show that every simplicial approximation to a simplicial map must be the map itself. Then the homotopy is rel M by the construction above. This is easily seen to be true — if g is a simplicial approximation to the simplicial map f, then f (v) ∈ f (StK(v)) ⊆ StL(g(v)). Since f (v) is a vertex and g(v) is the only vertex in StL(g(v)), we must have f (v) = g(v). So done.

What's the best thing we might hope for at this point? It would be great if every map were homotopic to a simplicial map. Is this possible? Let's take a nice example. Let K be the simplicial circle, i.e. the boundary of a triangle, with three vertices and three edges. How many homotopy classes of continuous maps are there K → K? Countably many, one for each winding number. However, there can only be at most 3³ = 27 simplicial maps. The problem is that we don't have enough vertices to realize all those interesting maps.

The idea is to refine our simplicial complexes: we add a point in the centre of each simplex, and join them up. This is known as the barycentric subdivision. After we subdivide once, we can realize more homotopy classes of maps. We will show that for any map, as long as we are willing to barycentrically subdivide the simplex many times, we can find a simplicial approximation to it.

Definition (Barycenter). The barycenter of σ = ⟨a0, · · ·, an⟩ is

ˆσ = (a0 + · · · + an)/(n + 1).

Definition (Barycentric subdivision). The (first) barycentric subdivision K′ of K is the simplicial complex

K′ = {⟨ˆσ0, · · ·, ˆσn⟩ : σi ∈ K and σ0 < σ1 < · · · < σn}.

If you stare at this long enough, you will realize this is exactly the subdivision described above: the vertices of K′ are the barycenters of the simplices of K, and a collection of barycenters spans a simplex precisely when the corresponding simplices form a chain under inclusion. The rth barycentric subdivision K (r) is defined inductively as the barycentric subdivision of the (r − 1)th barycentric subdivision, i.e. K (r) = (K (r−1))′.
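As a quick illustration (a supplementary sketch, not from the notes), we can count the simplices of the barycentric subdivision of the standard 2-simplex directly from this definition, by enumerating chains of faces:

from itertools import combinations

# All faces of the standard 2-simplex, as frozensets of its vertex labels.
K = [frozenset(s) for r in range(1, 4) for s in combinations('012', r)]

# An n-simplex of K' is a chain sigma_0 < sigma_1 < ... < sigma_n in K.
def chains(simplices, length):
    return [c for c in combinations(simplices, length)
            if all(c[i] < c[i + 1] for i in range(length - 1))]   # proper inclusions

for n in range(3):
    print(n, len(chains(K, n + 1)))
# 0 7   (one vertex of K' for each simplex of K)
# 1 12  (edges)
# 2 6   (triangles)
# Note 7 - 12 + 6 = 1, the Euler characteristic of a triangle, as expected.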
Proposition. |K| = |K′|, and K′ really is a simplicial complex.

Proof. Too boring to be included in lectures.

We now have a slight problem. Even though |K′| and |K| are equal, the identity map from |K′| to |K| is not a simplicial map. To solve this problem, we can choose any function K → VK, σ → vσ with vσ ∈ σ, i.e. a function that sends each simplex to one of its vertices. Then we can define g : K′ → K by sending ˆσ → vσ. Then this is a simplicial map, and indeed a simplicial approximation to the identity map |K′| → |K|. We will revisit this idea later when we discuss homotopy invariance.

The key theorem is that as long as we are willing to perform barycentric subdivisions, we can always find a simplicial approximation.

Theorem (Simplicial approximation theorem). Let K and L be simplicial complexes, and f : |K| → |L| a continuous map. Then there exists an r and a simplicial map g : K (r) → L such that g is a simplicial approximation of f. Furthermore, if f is already simplicial on M ⊆ K, then we can choose g such that |g| agrees with f on |M|.

The first thing we have to figure out is how far we are going to subdivide. To do this, we want to quantify how "fine" our subdivisions are.

Definition (Mesh). Let K be a simplicial complex. The mesh of K is

µ(K) = max{ ∥v0 − v1∥ : ⟨v0, v1⟩ ∈ K }.
We have the following lemma that tells us how fast the mesh shrinks under subdivision:

Lemma. Let dim K = n. Then

µ(K (r)) ≤ (n/(n + 1))^r µ(K).

The key point is that as r → ∞, the mesh goes to zero. So indeed we can make our barycentric subdivisions finer and finer. The proof is purely technical and omitted.

Now we can prove the simplicial approximation theorem.

Proof of simplicial approximation theorem. Suppose we are given the map f : |K| → |L|. We have a natural cover of |L|, namely the open stars of all vertices. We can use f to pull these back to |K| to obtain a cover of |K|:

{f −1(StL(w)) : w ∈ VL}.

The idea is to barycentrically subdivide our K such that each open star of K (r) is contained in one of these things.

By the Lebesgue number lemma, there exists some δ, the Lebesgue number of the cover, such that for each x ∈ |K|, Bδ(x) is contained in some element of the cover. By the previous lemma, there is an r such that µ(K (r)) < δ. Now since the mesh µ(K (r)) bounds the diameter of every simplex of K (r), the radius of every open star StK(r)(x) is at most µ(K (r)). Hence it follows that StK(r)(x) ⊆ Bδ(x) for all vertices x ∈ VK(r). Therefore, for all x ∈ VK(r), there is some w ∈ VL such that

StK(r)(x) ⊆ Bδ(x) ⊆ f −1(StL(w)).

Therefore, defining g(x) = w, we get f (StK(r)(x)) ⊆ StL(g(x)). So g is a simplicial approximation of f.

The last part follows from the observation that if f is already simplicial on M, then it maps vertices of M to vertices, so we can pick g(v) = f (v) for v ∈ VM.
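To get a feel for the rate (a supplementary arithmetic sketch, not from the notes): if dim K = n, the lemma says each subdivision shrinks the mesh by a factor of at most n/(n + 1), so the number of subdivisions needed to get below a given Lebesgue number δ grows only logarithmically in µ(K)/δ.

def subdivisions_needed(mesh, delta, n):
    """Smallest r with (n/(n+1))**r * mesh < delta, as guaranteed by the mesh lemma."""
    r = 0
    while (n / (n + 1)) ** r * mesh >= delta:
        r += 1
    return r

print(subdivisions_needed(mesh=1.0, delta=0.01, n=2))   # 12 subdivisions suffice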
7 Simplicial homology

7.1 Simplicial homology

For now, we will forget about simplicial approximations and related fluff, but just note that it is fine to assume everything can be considered to be simplicial. Instead, we are going to use this framework to move on and define some new invariants of simplicial complexes K, known as Hn(K). These are analogous to π0, π1, · · ·, but only use linear algebra in the definitions, and are thus much simpler. The drawback, however, is that the definitions are slightly less intuitive at first sight.

Despite saying "linear algebra", we won't be working with vector spaces most of the time. Instead, we will be using abelian groups, which really should be thought of as Z-modules. At the end, we will come up with an analogous theory using Q-modules, i.e. Q-vector spaces, and most of the theory will carry over. Using Q makes some of our work easier, but as a result we would lose some information. In the most general case, we can replace Z with any abelian group, but we will not be considering any of these in this course.

Recall that we defined an n-simplex as the collection of n + 1 vertices that span a simplex. As a simplex, when we permute the vertices, we still get the same simplex. What we want to do is to remember the orientation of the simplex. In particular, we want to think of the 1-simplices (a, b) and (b, a) as different. Hence we define oriented simplices.

Definition (Oriented n-simplex). An oriented n-simplex in a simplicial complex K is an (n + 1)-tuple (a0, · · ·, an) of vertices ai ∈ VK such that ⟨a0, · · ·, an⟩ ∈ K, where we think of two (n + 1)-tuples (a0, · · ·, an) and (aπ(0), · · ·, aπ(n)) as the same oriented simplex if π ∈ Sn+1 is an even permutation.

We often denote an oriented simplex as σ, and then ¯σ denotes the same simplex with the opposite orientation.
Example. As oriented 2-simplices, (v0, v1, v2) and (v1, v2, v0) are equal, but they are different from (v2, v1, v0). We can imagine the two orientations of a 2-simplex as the two directions in which we can go around the triangle.

One and two dimensions are the dimensions where we can easily visualize the orientation. This is substantially harder in higher dimensions, and often we just work with the definition instead.

Definition (Chain group Cn(K)). Let K be a simplicial complex. For each n ≥ 0, we define Cn(K) as follows: Let {σ1, · · ·, σℓ} be the set of n-simplices of K. For each i, choose an orientation on σi. That is, choose an order for the vertices (up to an even permutation). This choice is not important, but we need to make it. Now when we say σi, we mean the oriented simplex with this particular orientation. Now let Cn(K) be the free abelian group with basis {σ1, · · ·, σℓ}, i.e. Cn(K) ∼= Zℓ. So an element in Cn(K) might look like

σ3 − 7σ1 + 52σ64 − 28σ1000000.

In other words, an element of Cn(K) is just a formal sum of n-simplices. For convenience, we define Cn(K) = 0 for n < 0. This will save us from making exceptions for n = 0 cases later. For each oriented simplex, we identify −σi with ¯σi, at least when n ≥ 1.

In this definition, we have to choose a particular orientation for each of our simplices. If you don't like making arbitrary choices, we could instead define Cn(K) as some quotient, but it is slightly more complicated. Note that if there are no n-simplices (e.g. when n = −1), we can still meaningfully talk about Cn(K), but it's just 0.
Example. We can think of elements in the chain group C1(X) as "paths" in X. For example, suppose our complex contains vertices v0, v1, v2 and oriented edges σ1 = (v0, v1), σ2 = (v2, v1), σ3 = (v2, v0), together with some further edge σ6 elsewhere. Then the path v0 → v1 → v2 → v0 → v1 around this triangle is represented by the chain

σ1 − σ2 + σ3 + σ1 = 2σ1 − σ2 + σ3.

Of course, with this setup, we can do more random things, like adding 57 copies of σ6 to it, and this is also allowed. So we could think of these as disjoint unions of paths instead.

When defining fundamental groups, we had homotopies that allowed us to "move around". Somehow, we would like to say that two of these paths are "equivalent" under certain conditions. To do so, we need to define the boundary homomorphisms.

Definition (Boundary homomorphisms). We define boundary homomorphisms dn : Cn(K) → Cn−1(K) by

dn(a0, · · ·, an) = Σi (−1)^i (a0, · · ·, ˆai, · · ·, an),

where (a0, · · ·, ˆai, · · ·, an) = (a0, · · ·, ai−1, ai+1, · · ·, an) is the simplex with ai removed.

This means we remove each vertex in turn, with alternating signs, and add up the results. This is clear in low dimensions: for example, d1(v0, v1) = (v1) − (v0), and d2(v0, v1, v2) = (v1, v2) − (v0, v2) + (v0, v1), the boundary of the triangle.

An important property of the boundary map is that the boundary of a boundary is empty:

Lemma. dn−1 ◦ dn = 0. In other words, im dn+1 ⊆ ker dn.

Proof. This just involves expanding the definition and working through the mess.
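To see how the cancellation works (a supplementary check of the smallest interesting case, not spelled out in the notes), expand the definitions for a 2-simplex:

d1(d2(a0, a1, a2)) = d1((a1, a2) − (a0, a2) + (a0, a1))
= ((a2) − (a1)) − ((a2) − (a0)) + ((a1) − (a0))
= 0.

In general, each face with two vertices ai, aj removed appears twice in dn−1(dn σ), once from removing ai first and once from removing aj first, and the two occurrences carry opposite signs, so everything cancels.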
the homology groups as follows: Definition (Simplicial homology group Hn(K)). The nth simplicial homology group Hn(K) is defined as Hn(K) = ker dn im dn+1. This is a nice, clean definition, but what does this mean geometrically? Somehow, Hk(K) describes all the “k-dimensional holes” in |K|. Since we are going to draw pictures, we are going to start with the easy case of k = 1. Our H1(K) is made from the kernel of d1 and the image of d2. First, we give these things names. Definition (Chains, cycles and boundaries). The elements of Ck(K) are called k-chains of K, those of ker dk are called k-cycles of K, and those of im dk+1 are called k-boundaries of K. Suppose we have some c ∈ ker dk. In other words, dc = 0. If we interpret c as a “path”, if it has no boundary, then it represents some sort of loop, i.e. a cycle. For example, if we have the following cycle: e1 e0 e2 We have We can then compute the boundary as c = (e0, e1) + (e1, e2) + (e2, e0). dc = (e1 − e0) + (e2 − e1) + (e0 − e2) = 0. 67 7 Simplicial homology II Algebraic Topology So this c is indeed a cycle. Now if c ∈ im d2, then c = db for some 2-chain b, i.e. c is the boundary of some two-dimensional thing. This is why we call this a 1-boundary. For example, suppose we have our cycle as above, but is itself a boundary of a 2-chain. v1 v0 v2 We see that a cycle that has a boundary has been “filled in”. Hence the “holes” are, roughly, the cycles that haven’t been filled in. Hence we define the homology group as the cycles quotiented by the boundaries, and we
interpret its elements as k-dimensional "holes".
Example. Let K be the standard simplicial 1-sphere, i.e. the boundary of a triangle with vertices e0, e1, e2 in R^3 (edges only, no 2-simplex). Our simplices are thus
K = {⟨e0⟩, ⟨e1⟩, ⟨e2⟩, ⟨e0, e1⟩, ⟨e1, e2⟩, ⟨e2, e0⟩}.
Our chain groups are
C0(K) = ⟨(e0), (e1), (e2)⟩ ∼= Z^3,
C1(K) = ⟨(e0, e1), (e1, e2), (e2, e0)⟩ ∼= Z^3.
All other chain groups are zero. Note that our notation is slightly confusing here, since the brackets ⟨ · ⟩ can mean the simplex spanned by the vertices, or the group generated by certain elements. However, you are probably clueful enough to distinguish the two uses.
Hence, the only non-zero boundary map is d1 : C1(K) → C0(K). We can write down its matrix with respect to the given bases:
( −1   0   1 )
(  1  −1   0 )
(  0   1  −1 )
We now have everything we need to know about the homology groups, and we just need to do some linear algebra to figure out the image and kernel, and thus the homology groups. We have
H0(K) = ker(d0 : C0(K) → C−1(K)) / im(d1 : C1(K) → C0(K)) ∼= C0(K) / im d1 ∼= Z^3 / im d1.
After doing some row operations with our matrix, we see that the image of d1 is a two-dimensional subgroup generated by the images of two of the edges. Hence we have H0(K) ∼= Z. What does this H0(K) represent? We initially said that Hk(K) should represent the k-dimensional holes, but when k = 0, this is simpler. As for π0, H0 just represents the path components of K. We interpret this to mean K has one path component. In general, if K has r path components, then we expect H0(K) to be Z^r. Similarly, we have
H1(K) = ker d1 / im d2 ∼= ker d1. It is easy to see that in fact we have
ker d1 = ⟨(e0, e1) + (e1, e2) + (e2, e0)⟩ ∼= Z.
So we also have H1(K) ∼= Z. We see that this H1(K) is generated by precisely the single loop in the triangle. The fact that H1(K) is non-trivial means that we do indeed have a hole in the middle of the circle.
Example. Let L be the standard 2-simplex (and all its faces) in R^3, with vertices e0, e1, e2. Now our chain groups are
C0(L) = C0(K) ∼= Z^3 ∼= ⟨(e0), (e1), (e2)⟩,
C1(L) = C1(K) ∼= Z^3 ∼= ⟨(e0, e1), (e1, e2), (e2, e0)⟩,
C2(L) ∼= Z = ⟨(e0, e1, e2)⟩.
Since d1 is the same as before, the only new interesting boundary map is d2. We compute
d2((e0, e1, e2)) = (e0, e1) + (e1, e2) + (e2, e0).
We know that H0(L) depends only on d0 and d1, which are the same as for K. So H0(L) ∼= Z. Again, the interpretation of this is that L is path-connected. The first homology group is
H1(L) = ker d1 / im d2 = ⟨(e0, e1) + (e1, e2) + (e2, e0)⟩ / ⟨(e0, e1) + (e1, e2) + (e2, e0)⟩ ∼= 0.
This precisely illustrates the fact that the "hole" is now filled in L. Finally, we have
H2(L) = ker d2 / im d3 = ker d2 ∼= 0.
This is zero since there aren't any two-dimensional holes in L.
We have hopefully gained some intuition on what these homology groups mean. We are now going to spend a lot of time developing formalism.
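Before moving on, note that both of the examples above reduce to linear algebra on the boundary matrices, so they are easy to redo by machine. The sketch below works over Q, so it only detects ranks (which is all there is here, since these homology groups have no torsion); numpy and the rank-nullity bookkeeping are my choices, not part of the notes:

```python
import numpy as np

# Boundary matrix d1 : C1(K) -> C0(K) for the triangulated circle K,
# with the bases ordered as in the text.
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])

def betti(dims, boundary_maps):
    """Ranks of the homology groups of a chain complex, working over Q.

    dims[n] is the rank of C_n; boundary_maps[n] is the matrix of d_n
    (taken to be zero when absent).  By rank-nullity,
    dim H_n = dim ker d_n - rank d_{n+1} = dims[n] - rank d_n - rank d_{n+1}.
    """
    rank = lambda n: np.linalg.matrix_rank(boundary_maps[n]) if n in boundary_maps else 0
    return [dims[n] - rank(n) - rank(n + 1) for n in range(len(dims))]

# The circle K: C0 = C1 = Z^3, all other chain groups vanish.
print(betti([3, 3], {1: d1}))                     # [1, 1] -> H0 = Z and H1 = Z

# The filled triangle L: additionally C2 = Z, with d2 = (1, 1, 1)^T.
d2 = np.array([[1], [1], [1]])
print(betti([3, 3, 1], {1: d1, 2: d2}))           # [1, 0, 0] -> only H0 survives
```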
7.2 Some homological algebra
We will develop some formalism to help us compute homology groups Hk(K) in lots of examples. Firstly, we axiomatize the setup we had above.
Definition (Chain complex and differentials). A chain complex C· is a sequence of abelian groups C0, C1, C2, · · · equipped with maps dn : Cn → Cn−1 such that dn−1 ◦ dn = 0 for all n. We call these maps the differentials of C·:
· · · → C2 → C1 → C0 → 0,
where the map Cn → Cn−1 is dn (and d0 : C0 → 0).
Whenever we have some of these things, we can define homology groups in exactly the same way as we defined them for simplicial complexes.
Definition (Cycles and boundaries). The space of n-cycles is Zn(C) = ker dn. The space of n-boundaries is Bn(C) = im dn+1.
Definition (Homology group). The n-th homology group of C· is defined to be
Hn(C) = ker dn / im dn+1 = Zn(C) / Bn(C).
In mathematics, when we have objects, we don't just talk about the objects themselves, but also functions between them. Suppose we have two chain complexes. For the sake of simplicity, we write the differentials of both as dn. In general, we want to have maps between these two sequences. This would correspond to having a map fi : Ci → Di for each i, satisfying some commutativity relations.
Definition (Chain map). A chain map f· : C· → D· is a sequence of homomorphisms fn : Cn → Dn such that
fn−1 ◦ dn = dn ◦ fn
for all n. In other words, the squares formed by the differentials of the two complexes and the maps fn all commute.
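When the chain groups are free of finite rank, a chain map is nothing more than a family of matrices making these squares commute, which is easy to test. A small sketch (the rotation of the triangulated circle is my own choice of example, not one from the notes):

```python
import numpy as np

# Chain complex of the triangulated circle K, as in the previous example.
d1 = np.array([[-1, 0, 1], [1, -1, 0], [0, 1, -1]])

# The simplicial map rotating the circle (e0 -> e1 -> e2 -> e0) induces
# permutation matrices on C0 and C1 (in the bases used above).
f0 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # action on vertices
f1 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # action on edges

# The chain map condition f_{n-1} ∘ d_n = d_n ∘ f_n, here for n = 1:
assert (f0 @ d1 == d1 @ f1).all()
```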
We want to have homotopies. So far, chain complexes seem to be rather rigid, but homotopies themselves are rather floppy. How can we define homotopies for chain complexes? It turns out we can have a completely algebraic definition for chain homotopies.
Definition (Chain homotopy). A chain homotopy between chain maps f·, g· : C· → D· is a sequence of homomorphisms hn : Cn → Dn+1 such that
gn − fn = dn+1 ◦ hn + hn−1 ◦ dn.
We write f· ≃ g· if there is a chain homotopy between f· and g·. The maps hn : Cn → Dn+1 and hn−1 : Cn−1 → Dn go diagonally across the squares formed by the two complexes, and the resulting diagram is not commutative.
The intuition behind this definition is as follows: suppose C· = C·(K) and D· = C·(L) for K, L simplicial complexes, and f· and g· are "induced" by simplicial maps f, g : K → L (if f maps an n-simplex σ to a lower-dimensional simplex, then fn(σ) = 0). How can we detect if f and g are homotopic via the homotopy groups? Suppose H : |K| × I → |L| is a homotopy from f to g. We further suppose that H actually comes from a simplicial map K × I → L (we'll skim over the technical issue of how we can make K × I a simplicial complex. Instead, you are supposed to figure this out yourself in example sheet 3). Let σ be an n-simplex of K, and consider the image H(σ × I), a prism whose bottom is f (σ) and whose top is g(σ).
Let hn(σ) = H(σ × I). We think of this as an (n + 1)-chain. What is its boundary? We've got the vertical sides plus the top and bottom. The bottom should just be f (σ), and the top is just g(σ), since H is a homotopy from f to g. How about the sides? They are what we get when we pull the boundary ∂σ up with the homotopy,
i.e. H(∂σ × I) = hn−1 ◦ dn(σ). Now note that f (σ) and g(σ) have opposite orientations, so we get the result
dn+1 ◦ hn(σ) = hn−1 ◦ dn(σ) + gn(σ) − fn(σ).
Rearranging and dropping the σs, we get
dn+1 ◦ hn − hn−1 ◦ dn = gn − fn.
This looks remarkably like our definition for chain homotopies of maps, with the signs a bit wrong. So in reality, we will have to fiddle with the sign of hn a bit to get it right, but you get the idea.
Lemma. A chain map f· : C· → D· induces a homomorphism
f∗ : Hn(C) → Hn(D), [c] → [f (c)].
Furthermore, if f· and g· are chain homotopic, then f∗ = g∗.
Proof. Since the homology groups are defined as the cycles quotiented by the boundaries, to show that f∗ defines a homomorphism, we need to show f sends cycles to cycles and boundaries to boundaries. This is an easy check. If dn(σ) = 0, then
dn(fn(σ)) = fn−1(dn(σ)) = fn−1(0) = 0.
So fn(σ) ∈ Zn(D). Similarly, if σ is a boundary, say σ = dn+1(τ ), then
fn(σ) = fn(dn+1(τ )) = dn+1(fn+1(τ )).
So fn(σ) is a boundary. It thus follows that f∗ is well-defined.
Now suppose hn is a chain homotopy between f and g. For any c ∈ Zn(C), we have
gn(c) − fn(c) = dn+1 ◦ hn(c) + hn−1 ◦ dn(c).
Since c ∈ Zn(C), we know that dn(c) = 0. So
gn(c) − fn(c) = dn+1 ◦ hn(c) ∈ Bn(D).
Hence gn(c) and fn(c) differ by a boundary. So [gn(c)] − [fn(c)] = 0 in Hn(D), i.e. f∗([c]) = g∗([c]).
The following statements are easy to check:
Proposition.
(i) Being chain-homotopic is an equivalence relation of chain maps.
(ii) If a· : A· → C· is a chain map and f· ≃ g·, then f· ◦ a· ≃ g· ◦ a·.
(iii) If f : C· → D· and g : D· → A· are chain maps, then g∗ ◦ f∗ = (g· ◦ f·)∗.
(iv) (idC· )∗ = idH∗(C).
The last two statements can be summarized in fancy language by saying that Hn is a functor.
Definition (Chain homotopy equivalence). Chain complexes C· and D· are chain homotopy equivalent if there exist f· : C· → D· and g· : D· → C· such that f· ◦ g· ≃ idD· and g· ◦ f· ≃ idC·.
We should think of this in exactly the same way as we think of homotopy equivalences of spaces. The chain complexes themselves are not necessarily the same, but the induced homology groups will be.
Lemma. Let f· : C· → D· be a chain homotopy equivalence. Then f∗ : Hn(C) → Hn(D) is an isomorphism for all n.
Proof. Let g· be the homotopy inverse. Since f· ◦ g· ≃ idD·, we know f∗ ◦ g∗ = idH∗(D). Similarly, g∗ ◦ f∗ = idH∗(C). So we get isomorphisms between Hn(C) and Hn(D).
7.3 Homology calculations
We'll get back to topology and compute some homologies. Here, K is always a simplicial complex, and C· = C·(K).
Lemma. Let f : K → L be a simplicial map. Then f induces
a chain map f· : C·(K) → C·(L). Hence it also induces f∗ : Hn(K) → Hn(L).
Proof. This is fairly obvious, except that simplicial maps are allowed to "squash" simplices, so f might send an n-simplex to an (n − 1)-simplex, which does not give a basis element of Cn(L). We solve this problem by just killing these troublesome simplices. Let σ be an oriented n-simplex in K, corresponding to a basis element of Cn(K). Then we define
fn(σ) = f (σ) if f (σ) is an n-simplex, and fn(σ) = 0 if f (σ) is a k-simplex for some k < n.
More precisely, if σ = (a0, · · · , an), then
fn(σ) = (f (a0), · · · , f (an)) if f (a0), · · · , f (an) span an n-simplex, and 0 otherwise.
We then extend fn linearly to obtain fn : Cn(K) → Cn(L). It is immediate from this that this satisfies the chain map condition, i.e. f· commutes with the boundary operators.
Definition (Cone). A simplicial complex K is a cone if, for some v0 ∈ VK,
|K| = StK(v0) ∪ LkK(v0).
We see that a cone ought to be contractible — we can just squash it to the point v0. This is what the next lemma tells us.
Lemma. If K is a cone with cone point v0, then the inclusion i : {v0} → |K| induces a chain homotopy equivalence i· : C·({v0}) → C·(K). Therefore Hn(K) ∼= Hn({v0}), i.e. Hn(K) ∼= Z for n = 0 and vanishes otherwise.
Proof. The homotopy inverse to i· is given by r·, where r : K → {v0} is the only map. It is clear that r· ◦ i· = id, and we need to show that i· ◦ r· ≃ id. This chain homotopy can be defined by hn : Cn(K) → Cn+1(K), where hn associates to
any simplex σ in LkK(v0) the simplex spanned by σ and v0. Details are left to the reader.
Corollary. If ∆n is the standard n-simplex, and L consists of ∆n and all its faces, then
Hk(L) = Z for k = 0, and Hk(L) = 0 for k > 0.
Proof. L is a cone (on any vertex).
What we would really like is an example of non-trivial homology groups. Moreover, we want them in higher dimensions, and not just the examples we got for fundamental groups. An obvious candidate is the standard n-sphere.
Corollary. Let K be the standard (n − 1)-sphere (i.e. the proper faces of L from above). Then for n ≥ 2, we have
Hk(K) ∼= Z for k = 0 and k = n − 1, and Hk(K) = 0 otherwise.
Proof. We write down the chain groups for K and L. For every k ≤ n − 1 we have Ck(L) = Ck(K), with the same boundary maps; the only difference is at the top, where Cn(L) ∼= Z while Cn(K) = 0.
For k < n − 1, we have Ck(K) = Ck(L) and Ck+1(K) = Ck+1(L), and the boundary maps are equal. So Hk(K) = Hk(L) for such k, which by the previous corollary is Z for k = 0 and 0 for 0 < k < n − 1. We now need to compute
Hn−1(K) = ker d^K_{n−1} = ker d^L_{n−1} = im d^L_n.
We get the last equality since
ker d^L_{n−1} / im d^L_n = Hn−1(L) = 0.
We also know that Cn(L) is generated by just one simplex (e0, · · · , en). So Cn(L) ∼= Z. Also d^L_n is injective since it does not kill the generator (e0, · · · , en). So
Hn−1(K) ∼= im d^L_n ∼= Z.
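This computation can also be mechanised: build the boundary matrices of the proper faces of ∆n and read off ranks. A rough sketch over Q (it only sees the free part of the homology, which is all there is in this case):

```python
from itertools import combinations
import numpy as np

def boundary_matrix(n_simplices, faces):
    """Matrix of d_n with respect to the given ordered bases of simplices."""
    mat = np.zeros((len(faces), len(n_simplices)), dtype=int)
    index = {f: i for i, f in enumerate(faces)}
    for j, s in enumerate(n_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]
            mat[index[face], j] += (-1) ** i
    return mat

def sphere_betti(n):
    """Betti numbers of the proper faces of the standard n-simplex, an (n-1)-sphere."""
    verts = tuple(range(n + 1))
    simplices = [sorted(combinations(verts, k + 1)) for k in range(n)]
    dims = [len(s) for s in simplices]
    ranks = [np.linalg.matrix_rank(boundary_matrix(simplices[k], simplices[k - 1]))
             if k >= 1 else 0 for k in range(n)]
    ranks.append(0)   # there are no n-simplices in the boundary, so d_n = 0
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(n)]

print(sphere_betti(3))   # [1, 0, 1]: H0 = H2 = Z and H1 = 0 for the 2-sphere
```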
This is very exciting, because at last, we have a suggestion for a non-trivial invariant of Sn−1 for n > 2. We say this is just a "suggestion", since the simplicial homology is defined for simplicial complexes, and we are not guaranteed that if we put a different simplicial complex on Sn−1, we will get the same homology groups. So the major technical obstacle we still need to overcome is to see that Hk are invariants of the polyhedron |K|, not just K, and similarly for maps. But this will take some time.
We will quickly say something we've alluded to already:
Lemma (Interpretation of H0). H0(K) ∼= Z^d, where d is the number of path components of K.
Proof. Let K be our simplicial complex and v, w ∈ VK. We note that by definition, v, w represent the same homology class in H0(K) if and only if there is some c such that d1c = w − v. The requirement that d1c = w − v is equivalent to saying c is a path from v to w. So [v] = [w] if and only if v and w are in the same path component of K.
Most of the time, we only care about path-connected spaces, so H0(K) ∼= Z.
7.4 Mayer-Vietoris sequence
The Mayer-Vietoris theorem is exactly like the Seifert-van Kampen theorem for fundamental groups, which tells us what happens when we glue two spaces together. Suppose we have a space K = M ∪ N. We will learn how to compute the homology of the union M ∪ N in terms of those of M, N and M ∩ N.
Recall that to state the Seifert-van Kampen theorem, we needed to learn some new group-theoretic notions, such as free products with amalgamation. The situation is somewhat similar here. We will need to learn some algebra in order to state the Mayer-Vietoris theorem. The objects we need are known as exact sequences.
Definition (Exact sequence). A pair of homomorphisms of abelian groups
A −f→ B −g→ C
is exact (at B) if im f = ker g. A collection of hom
omorphisms fi−1 · · · Ai fi Ai+1 fi+1 Ai+2 fi+2 · · · is exact at Ai if We say it is exact if it is exact at every Ai. ker fi = im fi−1. Recall that we have seen something similar before. When we defined the chain complexes, we had d2 = 0, i.e. im d ⊆ ker d. Here we are requiring exact equivalence, which is something even better. Algebraically, we can think of an exact sequence as chain complexes with trivial homology groups. Alternatively, we see the homology groups as measuring the failure of a sequence to be exact. There is a particular type of exact sequences that is important. Definition (Short exact sequence). A short exact sequence is an exact sequence of the form 0 f A B g C 0 What does this mean? – The kernel of f is equal to the image of the zero map, i.e. {0}. So f is injective. – The image of g is the kernel of the zero map, which is everything. So g is surjective. – im f = ker g. Since we like chain complexes, we can produce short exact sequences of chain complexes. Definition (Short exact sequence of chain complexes). A short exact sequence of chain complexes is a pair of chain maps i· and j· 0 i· A· j· C· 0 B· 76 7 Simplicial homology II Algebraic Topology such that for each k, 0 Ak ik Bk jk Ck 0 is exact. Note that by requiring the maps to be chain maps, we imply that ik and jk commute with the boundary maps of the chain complexes. The reason why we care about short exact sequences of chain complexes (despite them having such a long name) is the following result: Theorem (Snake lemma). If we have a short exact sequence of complexes 0 A· i· B· j· C· 0 then a miracle happens to their homology groups. In particular, there is a long exact sequence (i.e. an exact sequence that is not short) · · · Hn(A) Hn−1(A) i∗ i∗ Hn(B) ∂∗ Hn−1(B) j∗ j∗ Hn(C) Hn−1(C) ·
· · where i∗ and j∗ are induced by i· and j·, and ∂∗ is a map we will define in the proof. Having an exact sequence is good, since if we know most of the terms in an exact sequence, we can figure out the remaining ones. To do this, we also need to understand the maps i∗, j∗ and ∂∗, but we don’t need to understand all, since we can deduce some of them with exactness. Yet, we still need to know some of them, and since they are defined in the proof, you need to remember the proof. Note, however, if we replace Z in the definition of chain groups by a field (e.g. Q), then all the groups become vector spaces. Then everything boils down to the rank-nullity theorem. Of course, this does not get us the right answer in exams, since we want to have homology groups over Z, and not Q, but this helps us to understand the exact sequences somewhat. If, at any point, homology groups confuse you, then you can try to work with homology groups over Q and get a feel for what homology groups are like, since this is easier. We are first going to apply this result to obtain the Mayer-Vietoris theorem. Theorem (Mayer-Vietoris theorem). Let K, L, M, N be simplicial complexes with K = M ∪ N and L = M ∩ N. We have the following inclusion maps: i L j N M k K. Then there exists some natural homomorphism ∂∗ : Hn(K) → Hn−1(L) that 77 7 Simplicial homology II Algebraic Topology gives the following long exact sequence: ∂∗ · · · Hn(L) i∗+j∗ Hn(M ) ⊕ Hn(N ) k∗−∗ Hn(K) ∂∗ Hn−1(L) i∗+j∗ Hn−1(M ) ⊕ Hn−1(N ) k∗−∗ Hn−1(K) · · · H0(M ) ⊕ H0(N ) k
∗−∗ H0(K) · · · 0 Here A ⊕ B is the direct sum of the two (abelian) groups, which may also be known as the Cartesian product. Note that unlike the Seifert-van Kampen theorem, this does not require the intersection L = M ∩ N to be (path) connected. This restriction was needed for Seifert-van Kampen since the fundamental group is unable to see things outside the path component of the basepoint, and hence it does not like non-path connected spaces well. However, homology groups don’t have these problems. Proof. All we have to do is to produce a short exact sequence of complexes. We have 0 Cn(L) in+jn Cn(M ) ⊕ Cn(N ) kn−n Cn(K) 0 Here in + jn : Cn(L) → Cn(M ) ⊕ Cn(N ) is the map x → (x, x), while kn − n : Cn(M ) ⊕ Cn(N ) → Cn(K) is the map (a, b) → a − b (after applying the appropriate inclusion maps). It is easy to see that this is a short exact sequence of chain complexes. The image of in + jn is the set of all elements of the form (x, x), and the kernel of kn − n is also these. It is also easy to see that in + jn is injective and kn − n is surjective. At first sight, the Mayer-Vietoris theorem might look a bit scary to use, since it involves all homology groups of all orders at once. However, this is actually often a good thing, since we can often use this to deduce the higher homology groups from the lower homology groups. Yet, to properly apply the Mayer-Vietoris sequence, we need to understand the map ∂∗. To do so, we need to prove the snake lemma. Theorem (Snake lemma). If we have a short exact sequence of complexes 0 A· i· B· j· C· 0 then there is a long exact sequence · · · Hn(A) Hn−1(A) i∗ i∗ Hn(B) ∂∗ Hn−1(B) j
∗ j∗ Hn(C) Hn−1(C) · · · where i∗ and j∗ are induced by i· and j·, and ∂∗ is a map we will define in the proof. 78 7 Simplicial homology II Algebraic Topology The method of proving this is sometimes known as “diagram chasing”, where we just “chase” around commutative diagrams to find the elements we need. The idea of the proof is as follows — in the short exact sequence, we can think of A as a subgroup of B, and C as the quotient B/A, by the first isomorphism theorem. So any element of C can be represented by an element of B. We apply the boundary map to this representative, and then exactness shows that this must come from some element of A. We then check carefully that this is well-defined, i.e. does not depend on the representatives chosen. Proof. The proof of this is in general not hard. It just involves a lot of checking of the details, such as making sure the homomorphisms are well-defined, are actually homomorphisms, are exact at all the places etc. The only important and non-trivial part is just the construction of the map ∂∗. First we look at the following commutative diagram: 0 0 in in−1 An dn An−1 Bn dn Bn−1 jn jn−1 Cn dn Cn−1 0 0 To construct ∂∗ : Hn(C) → Hn−1(A), let [x] ∈ Hn(C) be a class represented by x ∈ Zn(C). We need to find a cycle z ∈ An−1. By exactness, we know the map jn : Bn → Cn is surjective. So there is a y ∈ Bn such that jn(y) = x. Since our target is An−1, we want to move down to the next level. So consider dn(y) ∈ Bn−1. We would be done if dn(y) is in the image of in−1. By exactness, this is equivalent to saying dn(y)
is in the kernel of jn−1. Since the diagram is commutative, we know jn−1 ◦ dn(y) = dn ◦ jn(y) = dn(x) = 0, using the fact that x is a cycle. So dn(y) ∈ ker jn−1 = im in−1. Moreover, by exactness again, in−1 is injective. So there is a unique z ∈ An−1 such that in−1(z) = dn(y). We have now produced our z. We are not done. We have ∂∗[x] = [z] as our candidate definition, but we need to check many things: (i) We need to make sure ∂∗ is indeed a homomorphism. (ii) We need dn−1(z) = 0 so that [z] ∈ Hn−1(A); (iii) We need to check [z] is well-defined, i.e. it does not depend on our choice of y and x for the homology class [x]. (iv) We need to check the exactness of the resulting sequence. We now check them one by one: (i) Since everything involved in defining ∂∗ are homomorphisms, it follows that ∂∗ is also a homomorphism. 79 7 Simplicial homology II Algebraic Topology (ii) We check dn−1(z) = 0. To do so, we need to add an additional layer. 0 0 0 in in−1 An dn An−1 Bn dn Bn−1 jn jn−1 Cn dn Cn−1 dn−1 dn−1 dn−1 An−2 in−2 Bn−2 jn−2 Cn−2 0 0 0 We want to check that dn−1(z) = 0. We will use the commutativity of the diagram. In particular, we know in−2 ◦ dn−1(z) = dn−1 ◦ in−1(z) = dn−1 ◦ dn(y) = 0. By exactness at An−2, we know in−2 is injective. So we must have dn−1(
z) = 0. (iii) (a) First, in the proof, suppose we picked a different y such that jn(y) = jn(y) = x. Then jn(y − y) = 0. So y − y ∈ ker jn = im in. Let a ∈ An be such that in(a) = y − y. Then dn(y) = dn(y − y) + dn(y) = dn ◦ in(a) + dn(y) = in−1 ◦ dn(a) + dn(y). Hence when we pull back dn(y) and dn(y) to An−1, the results differ by the boundary dn(a), and hence produce the same homology class. (b) Suppose [x] = [x]. We want to show that ∂∗[x] = ∂∗[x]. This time, we add a layer above. 0 0 0 An+1 in+1 Bn+1 jn+1 Cn+1 dn+1 dn+1 dn+1 in in−1 An dn An−1 Bn dn Bn−1 jn jn−1 Cn dn Cn−1 0 0 0 By definition, since [x] = [x], there is some c ∈ Cn+1 such that x = x + dn+1(c). By surjectivity of jn+1, we can write c = jn+1(b) for some b ∈ Bn+1. By commutativity of the squares, we know x = x + jn ◦ dn+1(b). The next step of the proof is to find some y such that jn(y) = x. Then jn(y + dn+1(b)) = x. So the corresponding y is y = y + dn+1(b). So dn(y) = dn(y), and hence ∂∗[x] = ∂∗[x]. 80 7 Simplicial homology II Algebraic Topology (iv) This is yet another standard diagram chasing argument. When reading this, it is helpful to look at a diagram and see how the elements
are chased along. It is even more beneficial to attempt to prove this yourself. (a) im i∗ ⊆ ker j∗: This follows from the assumption that in ◦ jn = 0. (b) ker j∗ ⊆ im i∗: Let [b] ∈ Hn(B). Suppose j∗([b]) = 0. Then there is some c ∈ Cn+1 such that jn(b) = dn+1(c). By surjectivity of jn+1, there is some b ∈ Bn+1 such that jn+1(b) = c. By commutativity, we know jn(b) = jn ◦ dn+1(b), i.e. jn(b − dn+1(b)) = 0. By exactness of the sequence, we know there is some a ∈ An such that in(a) = b − dn+1(b). Moreover, in−1 ◦ dn(a) = dn ◦ in(a) = dn(b − dn+1(b)) = 0, using the fact that b is a cycle. Since in−1 is injective, it follows that dn(a) = 0. So [a] ∈ Hn(A). Then i∗([a]) = [b] − [dn+1(b)] = [b]. So [b] ∈ im i∗. (c) im j∗ ⊆ ker ∂∗: Let [b] ∈ Hn(B). To compute ∂∗(j∗([b])), we first pull back jn(b) to b ∈ Bn. Then we compute dn(b) and then pull it back to An+1. However, we know dn(b) = 0 since b is a cycle. So ∂∗(j∗([b])) = 0, i.e. ∂∗ ◦ j∗ = 0. (d) ker ∂∗ ⊆ im j∗: Let [c] ∈ Hn(C) and suppose ∂∗([c]) = 0. Let b ∈ Bn be such that jn(b) = c
, and a ∈ An−1 such that in−1(a) = dn(b). By assumption, ∂∗([c]) = [a] = 0. So we know a is a boundary, say a = dn(a) for some a ∈ An. Then by commutativity we know dn(b) = dn ◦ in(a). In other words, dn(b − in(a)) = 0. So [b − in(a)] ∈ Hn(B). Moreover, j∗([b − in(a)]) = [jn(b) − jn ◦ in(a)] = [c]. So [c] ∈ im j∗. (e) im ∂∗ ⊆ ker i∗: Let [c] ∈ Hn(C). Let b ∈ Bn be such that jn(b) = c, and a ∈ An−1 be such that in(a) = dn(b). Then ∂∗([c]) = [a]. Then i∗([a]) = [in(a)] = [dn(b)] = 0. So i∗ ◦ ∂∗ = 0. (f) ker i∗ ⊆ im ∂∗: Let [a] ∈ Hn(A) and suppose i∗([a]) = 0. So we can find some b ∈ Bn+1 such that in(a) = dn+1(b). Let c = jn+1(b). Then dn+1(c) = dn+1 ◦ jn+1(b) = jn ◦ dn+1(b) = jn ◦ in(a) = 0. So [c] ∈ Hn(C). Then [a] = ∂∗([c]) by definition of ∂∗. So [a] ∈ im ∂∗. 81 7 Simplicial homology II Algebraic Topology 7.5 Continuous maps and homotopy invariance This is the most technical part of the homology section of the course. The goal is to see that the homology groups H∗(K) depend only on the polyhedron, and not the simplicial structure on it.
Moreover, we will show that they are homotopy invariants of the space, and that homotopic maps f g : |K| → |L| induce equal maps H∗(K) → H∗(L). Note that this is a lot to prove. At this point, we don’t even know arbitrary continuous maps induce any map on the homology. We only know simplicial maps do. We start with a funny definition. Definition (Contiguous maps). Simplicial maps f, g : K → L are contiguous if for each σ ∈ K, the simplices f (σ) and g(σ) (i.e. the simplices spanned by the image of the vertices of σ) are faces of some some simplex τ ∈ L. σ τ f g The significance of this definition comes in two parts: simplicial approximations of the same map are contiguous, and contiguous maps induce the same maps on homology. Lemma. If f, g : K → L are simplicial approximations to the same map F, then f and g are contiguous. Proof. Let σ ∈ K, and pick some s ∈ ˚σ. Then F (s) ∈ ˚τ for some τ ∈ L. Then the definition of simplicial approximation implies that for any simplicial approximation f to F, f (σ) spans a face of τ. Lemma. If f, g : K → L are continguous simplicial maps, then f∗ = g∗ : Hn(K) → Hn(L) for all n. Geometrically, the homotopy is obvious. If f (σ) and g(σ) are both faces of τ, then we just pick the homotopy as τ. The algebraic definition is less inspiring, and it takes some staring to see it is indeed what we think it should be. Proof. We will write down a chain homotopy between f and g: hn((a0, · · ·, an)) = n i=0 (−1)i[f (a0), · · ·, f (ai), g(ai), · · ·, g(an)], where the square brackets means the corresponding oriented simple
x if there are no repeats, 0 otherwise. We can now check by direct computation that this is indeed a chain homotopy. 82 7 Simplicial homology II Algebraic Topology Now we know that if f, g : K → L are both simplicial approximations to the same F, then they induce equal maps on homology. However, we do not yet know that continuous maps induce well-defined maps on homologies, since to produce simplicial approximations, we needed to perform barycentric subdivision, and we need to make sure this does not change the homology. K K Given a barycentric subdivision, we need to choose a simplicial approximation to the identity map a : K → K. It turns out this is easy to do, and we can do it almost arbitrarily. Lemma. Each vertex ˆσ ∈ K is a barycenter of some σ ∈ K. Then we choose a(ˆσ) to be an arbitrary vertex of σ. This defines a function a : VK → VK. This a is a simplicial approximation to the identity. Moreover, every simplicial approximation to the identity is of this form. Proof. Omitted. Next, we need to show this gives an isomorphism on the homologies. Proposition. Let K be the barycentric subdivision of K, and a : K → K a simplicial approximation to the identity map. Then the induced map a∗ : Hn(K ) → Hn(K) is an isomorphism for all n. Proof. We first deal with K being a simplex σ and its faces. Now that K is just a cone (with any vertex as the cone vertex), and K is also a cone (with the barycenter as the cone vertex). Therefore Hn(K) ∼= Hn(K ) ∼= Z n = 0 0 n > 0 So only n = 0 is (a little) interesting, but it is easy to check that a∗ is an isomorphism in this case, since it just maps a vertex to a vertex, and all vertices in each simplex are in the same homology class. To finish the proof, note that K is built up by gluing up simplices, and K is built by gluing up subdivided simplices. So to understand their homology groups, we use the
Mayer-Vietoris sequence. Given a complicated simplicial complex K, let σ be a maximal dimensional simplex of K. We let L = K \ {σ} (note that L includes the boundary of σ). We let S = {σ and all its faces} ⊆ K and T = L ∩ S. We can do similarly for K, and let L, S, T be the corresponding barycentric subdivisions. We have K = L ∪ S and K = L ∪ S (and L ∩ S = T ). By the previous lemma, we see our construction of a gives a(L) ⊆ L, a(S) ⊆ S and a(T ) ⊆ T. So these induce maps of the corresponding homology groups Hn(T ) Hn(S) ⊕ Hn(L) Hn(K ) Hn−1(T ) Hn−1(S) ⊕ Hn−1(L) a∗ a∗⊕a∗ a∗ a∗ a∗⊕a∗ Hn(T ) Hn(S) ⊕ Hn(L) Hn(K) Hn−1(T ) Hn−1(S) ⊕ Hn−1(L) 83 7 Simplicial homology II Algebraic Topology By induction, we can assume all but the middle maps are isomorphisms. By the five lemma, this implies the middle map is an isomorphism, where the five lemma is as follows: Lemma (Five lemma). Consider the following commutative diagram: A1 a A2 B1 b B2 C1 c C2 D1 d D2 E1 e E2 If the top and bottom rows are exact, and a, b, d, e are isomorphisms, then c is also an isomorphism. Proof. Exercise in example sheet. We now have everything we need to move from simplical maps to continuous maps. Putting everything we have proved so far together, we obtain the following result: Proposition. To each continuous map f : |K| → |L|, there is an associated map f∗ : Hn(K) → Hn(L) (for all n) given by f∗ = s∗ ◦ ν
−1 K,r, where s : K (r) → L is a simplicial approximation to f, and νK,r : Hn(K (r)) → Hn(K) is the isomorphism given by composing maps Hn(K (i)) → Hn(K (i−1)) induced by simplical approximations to the identity. Furthermore: (i) f∗ does not depend on the choice of r or s. (ii) If g : |M | → |K| is another continuous map, then (f ◦ g)∗ = f∗ ◦ g∗. Proof. Omitted. Corollary. If f : |K| → |L| is a homeomorphism, then f∗ : Hn(K) → Hn(L) is an isomorphism for all n. Proof. Immediate from (ii) of previous proposition. This is good. We know now that homology groups is a property of the space itself, not simplicial complexes. For example, we have computed the homology groups of a particular triangulation of the n-sphere, and we know it is true for all triangulations of the n-sphere. We’re not exactly there yet. We have just seen that homology groups are invariant under homeomorphisms. What we would like to know is that they are invariant under homotopy. In other words, we want to know homotopy equivalent maps induce equal maps on the homology groups. The strategy is: (i) Show that “small” homotopies don’t change the maps on Hn 84 7 Simplicial homology II Algebraic Topology (ii) Note that all homotopies can be decomposed into “small” homotopies. Lemma. Let L be a simplicial complex (with |L| ⊆ Rn). Then there is an ε = ε(L) > 0 such that if f, g : |K| → |L| satisfy f (x) − g(x) < ε, then f∗ = g∗ : Hn(K) → Hn(L) for all n. The idea of the proof is that if f (x) − g(x) is small enough, we can barycentrically subdivide K such
that we get a simplicial approximation to both f and g. Proof. By the Lebesgue number lemma, there is an ε > 0 such that each ball of radius 2ε in |L| lies in some star StL(w). Now apply the Lebesgue number lemma again to {f −1(Bε(y))}y∈|L|, an open cover of |K|, and get δ > 0 such that for all x ∈ |K|, we have f (Bδ(x)) ⊆ Bε(y) ⊆ B2ε(y) ⊆ StL(w) for some y ∈ |L| and StL(w). Now since g and f differ by at most ε, we know g(Bδ(x)) ⊆ B2ε(y) ⊆ StL(w). Now subdivide r times so that µ(K (r)) < 1 2 δ. So for all v ∈ VK(r), we know StK(r)(v) ⊆ Bδ(v). This gets mapped by both f and g to StL(w) for the same w ∈ VL. We define s : VK(r) → VL sending v → w. Theorem. Let f g : |K| → |L|. Then f∗ = g∗. Proof. Let H : |K| × I → |L|. Since |K| × I is compact, we know H is uniformly continuous. Pick ε = ε(L) as in the previous lemma. Then there is some δ such that |s − t| < δ implies |H(x, s) − H(x, t)| < ε for all x ∈ |K|. Now choose 0 = t0 < t1 < · · · < tn = 1 such that ti − ti−1 < δ for all i. Define fi : |K| → |L| by fi(x) = H(x, ti). Then we know fi − fi−1 < ε for all i. Hence (fi)∗ = (fi−1)∗. Therefore (f0)∗ = (fn)∗, i.e. f
∗ = g∗. This is good, since we know we can not only deal with spaces that are homeomorphic to complexes, but spaces that are homotopic to complexes. This is important, since all complexes are compact, but we would like to talk about non-compact spaces such as open balls as well. Definition (h-triangulation and homology groups). An h-triangulation of a space X is a simplicial complex K and a homotopy equivalence h : |K| → X. We define Hn(X) = Hn(K) for all n. Lemma. Hn(X) is well-defined, i.e. it does not depend on the choice of K. Proof. Clear from previous theorem. We have spent a lot of time and effort developing all the technical results and machinery of homology groups. We will now use them to do stuff. 85 7 Simplicial homology II Algebraic Topology Digression — historical motivation At first, we motivated the study of algebraic topology by saying we wanted to show that Rn and Rm are not homeomorphic. However, historically, this is not why people studied algebraic topology, since there are other ways to prove this is true. Initially, people were interested in studying knots. These can be thought of as embeddings S → S3. We really should just think of S3 as R3 ∪ {∞}, where the point at infinity is just there for convenience. The most interesting knot we know is the unknot U : A less interesting but still interesting knot is the trefoil T. One question people at the time asked was whether the trefoil knot is just the unknot in disguise. It obviously isn’t, but can we prove it? In general, given two knots, is there any way we can distinguish if they are the same? The idea is to study the fundamental groups of the knots. It would obviously be silly to compute the fundamental groups of U and T themselves, since they are both isomorphic to S1 and the groups are just Z. Instead, we look at the fundamental groups of the complements. It is not difficult to see that π1(S3 \ U ) ∼= Z, and with some
suitable machinery, we find π1(S3 \ T ) ∼= ⟨a, b | a^3 b^{−2}⟩. Staring at it hard enough, we can construct a surjection π1(S3 \ T ) → S3. This tells us π1(S3 \ T ) is non-abelian, and is certainly not Z. So we know U and T are genuinely different knots.
7.6 Homology of spheres and applications
Lemma. The sphere Sn−1 is triangulable, and
Hk(Sn−1) ∼= Z for k = 0, n − 1, and Hk(Sn−1) = 0 otherwise.
Proof. We already did this computation for the standard (n − 1)-sphere ∂∆n, where ∆n is the standard n-simplex.
We immediately have a few applications.
Proposition. R^n is not homeomorphic to R^m for m ≠ n.
Proof. See example sheet 4.
Theorem (Brouwer's fixed point theorem (in all dimensions)). There is no retraction of Dn onto ∂Dn ∼= Sn−1. So every continuous map f : Dn → Dn has a fixed point.
Proof. The proof is exactly the same as the two-dimensional case, with homology groups instead of the fundamental group. We first show the second part from the first. Suppose f : Dn → Dn has no fixed point. Then the map g : Dn → ∂Dn sending x to the point where the ray from f (x) through x meets ∂Dn is a continuous retraction.
So we now show no such continuous retraction can exist. Suppose r : Dn → ∂Dn is a retraction, i.e. r ◦ i ≃ id : ∂Dn → ∂Dn, where i : Sn−1 → Dn is the inclusion. We now take the (n − 1)th homology groups to obtain
Hn−1(Sn−1) −i∗→ Hn−1(Dn) −r∗→ Hn−1(Sn−1).
Since r ◦ i is homotopic to the identity, this composition must also be the identity, but this is clearly nonsense, since Hn−1(Sn−1) ∼= Z while Hn−1(Dn) ∼= 0.
So such a continuous retraction cannot exist.
Note it is important that we can work with continuous maps directly, and not just their simplicial approximations. Otherwise, here we can only show that every simplicial approximation of f has a fixed point, but this does not automatically entail that f itself has a fixed point.
For the next application, recall from the first example sheet that if n is odd, then the antipodal map a : Sn → Sn is homotopic to the identity. What if n is even? The idea is to consider the effect on the homology group a∗ : Hn(Sn) → Hn(Sn). By our previous calculation, we know a∗ is a map a∗ : Z → Z. If a is homotopic to the identity, then a∗ should be the identity map. We will now compute a∗ and show that it is multiplication by −1 when n is even.
To do this, we want to use a triangulation that is compatible with the antipodal map. The standard triangulation clearly doesn't work. Instead, we use the triangulation h : |K| → Sn whose vertices are given by VK = {±e0, ±e1, · · · , ±en}. This triangulation works nicely with the antipodal map, since this maps a vertex to a vertex. To understand the homology group better, we need the following lemma:
Lemma. In the triangulation of Sn given by vertices VK = {±e0, ±e1, · · · , ±en}, the element
x = Σ_{ε ∈ {±1}^{n+1}} ε0 · · · εn (ε0 e0, · · · , εn en)
is a cycle and generates Hn(Sn).
Proof. By direct computation, we see that dx = 0. So x is a cycle. To show it generates Hn(Sn), we note that everything in Hn(Sn) ∼= Z is a multiple of the generator, and since x has coefficients ±1, it cannot be a multiple of anything else (apart from −x). So x is indeed a generator.
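The claim dx = 0 is a finite computation, so it can be verified by machine for small n. A quick sketch (the encoding of the vertex εi ei as the pair (i, εi) is my own device, not notation from the notes):

```python
from itertools import product

def boundary(simplex):
    """d applied to one oriented simplex, returned as a dict of faces."""
    out = {}
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        out[face] = out.get(face, 0) + (-1) ** i
    return out

def d(chain):
    """Linear extension of the boundary map to a formal sum of simplices."""
    out = {}
    for s, c in chain.items():
        for f, sign in boundary(s).items():
            out[f] = out.get(f, 0) + c * sign
    return {f: c for f, c in out.items() if c != 0}

def octahedral_generator(n):
    """The chain x = sum over signs ε of ε0...εn (ε0 e0, ..., εn en)."""
    x = {}
    for eps in product((1, -1), repeat=n + 1):
        coeff = 1
        for e in eps:
            coeff *= e
        simplex = tuple((i, e) for i, e in enumerate(eps))   # (i, ±1) means ±e_i
        x[simplex] = coeff
    return x

for n in (1, 2, 3):
    assert d(octahedral_generator(n)) == {}    # x is a cycle in each dimension
```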
Now we can prove our original proposition.
Proposition. If n is even, then the antipodal map satisfies a ̸≃ id.
Proof. We can directly compute that a∗x = (−1)^{n+1} x. If n is even, then a∗ = −1, but id∗ = 1. So a ̸≃ id.
7.7 Homology of surfaces
We want to study compact surfaces and their homology groups. To work with the simplicial homology, we need to assume they are triangulable. We will not prove this fact, and just assume it to be true (it really is). Recall we have classified compact surfaces, and have found the orientable surfaces Σg. We also have non-compact versions of these, known as Fg, where we take the above ones and cut out a hole.
We want to compute the homology groups of these Σg. One way to do so is to come up with specific triangulations of the space, and then compute the homology groups directly. However, there is a better way to do it. Given a Σg, we can slice it apart along a circle and write it as
Σg = Fg−1 ∪ F1.
Then, we need to compute H∗(Fg). Fortunately, this is not too hard, since it turns out Fg is homotopic to a relatively simple space. Recall that we produced Σg by starting with a 4g-gon and gluing its edges in pairs, labelled a1, b1, · · · , ag, bg. Now what is Fg? Fg is Σg with a hole cut out of it. We'll cut the hole out at the center of the 4g-gon, and we can then expand the hole to the boundary, giving a deformation retraction from Fg to its boundary. Gluing the edges together, we obtain the rose with 2g petals, which we shall call X2g. Using the Mayer-Vietoris sequence (exercise), we see that Hn(Fg) ∼= Hn(X2g), which is Z for n = 0, Z^{2g} for n = 1, and 0 for n > 1.
We now compute the homology groups of Σg. The Mayer-Vietoris sequence gives the following:
0 → H2(S^1) → H2(Fg−1) ⊕ H2(F1) → H2(Σg) → H1(S^1) → H1(Fg−1) ⊕ H1(F1) → H1(Σg) → H0(S^1) → H0(Fg−1) ⊕ H0(F1) → H0(Σg).
We can put in the terms we already know, and get
0 → H2(Σg) → Z → Z^{2g} → H1(Σg) → Z → Z^2 → Z → 0.
By exactness, we know H2(Σg) = ker{Z → Z^{2g}}. We now note that this map is indeed the zero map, by direct computation — this map sends the generator of H1(S^1) to the boundary of the hole of Fg−1 and F1. If we look at the picture, after the deformation retraction, this loop passes through each one-cell twice, once with each orientation. So these cancel, and give 0. So H2(Σg) = Z.
To compute H1(Σg), for convenience, we break it up into a short exact sequence, noting that the function Z → Z^{2g} is zero:
0 → Z^{2g} → H1(Σg) → ker(Z → Z^2) → 0.
We now claim that Z → Z^2 is injective — this is obvious, since it sends 1 → (1, 1). So the kernel is zero. So H1(Σg) ∼= Z^{2g}.
This is a typical application of the Mayer-Vietoris sequence. We write down the long exact sequence, and put in the terms we already know. This does not immediately give us the answer we want — we will need to understand one or two of the maps, and then we can figure out all the groups we want.
7.8 Rational homology, Euler and Lefschetz numbers
So far, we have been working with chains with integral coefficients. It turns out we can use rational
coefficients instead. In the past, Cn(K) was an abelian group, or a Z-module. If we use rational coefficients, since Q is a field, this becomes a vector space, and we can use a lot of nice theorems about vector spaces, such as the rank-nullity theorem. Moreover, we can reasonably talk about things like the dimensions of these homology groups.
Definition (Rational homology group). For a simplicial complex K, we can define the rational n-chain group Cn(K; Q) in the same way as Cn(K) = Cn(K; Z). That is, Cn(K; Q) is the vector space over Q with basis the n-simplices of K (with a choice of orientation). We can define dn, Zn, Bn as before, and the rational nth homology group is
Hn(K; Q) ∼= Zn(K; Q) / Bn(K; Q).
Now our homology group is a vector space, and it is much easier to work with. However, the consequence is that we will lose some information. Fortunately, the way in which we lose information is very well-understood and rather simple.
Lemma. If Hn(K) ∼= Z^k ⊕ F for F a finite group, then Hn(K; Q) ∼= Q^k.
Proof. Exercise.
Hence, when passing to rational homology groups, we lose all information about the torsion part. In some cases this information loss is not very significant, but in certain cases, it can be. For example, for RP2 we have
Hn(RP2) ∼= Z for n = 0, Z/2 for n = 1, and 0 for n > 1.
If we pass on to rational coefficients, then we have lost everything in H1(RP2), and RP2 looks just like a point:
Example. We have Hn(RP2; Q) ∼= Hn(∗; Q), and Hk(Sn; Q) ∼= Q for k = 0, n, and 0 otherwise.
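The effect of passing to Q is easy to see on a toy example. The two-term complex below is not a simplicial chain complex from the course, just an illustration of my own with a differential given by multiplication by 2, whose integral homology is pure torsion:

```python
import numpy as np

# Toy chain complex  0 -> C1 -> C0 -> 0  with C1 = C0 = Z and d1 = (2).
# Over Z its zeroth homology is Z / im(d1) = Z/2, a pure torsion group.
# Over Q the differential becomes invertible, so the torsion disappears.
d1 = np.array([[2]])

dim_C0, dim_C1 = 1, 1
rank_d1 = np.linalg.matrix_rank(d1)        # rank over Q
dim_H0_Q = dim_C0 - rank_d1                # = 0: the Z/2 is invisible over Q
dim_H1_Q = dim_C1 - rank_d1                # = 0
print(dim_H0_Q, dim_H1_Q)                  # 0 0
```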
We also have
Hk(Σg; Q) ∼= Q for k = 0, 2, Q^{2g} for k = 1, and 0 otherwise.
In this case, we have not lost any information because there was no torsion part of the homology groups. However, for the non-orientable surfaces En, one can show that
Hk(En) = Z for k = 0, Z^{n−1} × Z/2 for k = 1, and 0 otherwise,
so
Hk(En; Q) = Q for k = 0, Q^{n−1} for k = 1, and 0 otherwise.
This time, this is different from the integral coefficient case, where we have an extra Z/2 term in H1.
The advantage of this is that the homology "groups" are in fact vector spaces, and we can talk about things like dimensions. Also, maps on homology groups are simply linear maps, i.e. matrices, and we can study their properties with proper linear algebra.
Recall that the Euler characteristic of a polyhedron is defined as "faces − edges + vertices". This works for two-dimensional surfaces, and we would like to extend this to higher dimensions. There is an obvious way to do so, by counting the higher-dimensional simplices and putting in the right signs. However, if we define it this way, it is not clear that this is a property of the space itself, and not just a property of the triangulation. Hence, we define it as follows:
Definition (Euler characteristic). The Euler characteristic of a triangulable space X is
χ(X) = Σ_{i≥0} (−1)^i dimQ Hi(X; Q).
This clearly depends only on the homotopy type of X, and not the triangulation. We will later show this is equivalent to what we used to have. More generally, we can define the Lefschetz number.
Definition (Lefschetz number). Given any map f : X → X, we define the Lefschetz number of f as
L(f ) = Σ_{i≥0} (−1)^i tr(f∗ : Hi(X; Q) → Hi(X; Q)).
Why is this a generalization of the Euler characteristic? Just note that the trace of the identity map is the number of dimensions. So we have χ(X) = L(id).
Example. We have χ(Sn) = 2 for n even and χ(Sn) = 0 for n odd. We also have χ(Σg) = 2 − 2g and χ(En) = 2 − n.
Example. If α : Sn → Sn is the antipodal map, we saw that α∗ : Hn(Sn) → Hn(Sn) is multiplication by (−1)^{n+1}. So
L(α) = 1 + (−1)^n (−1)^{n+1} = 1 − 1 = 0.
We see that even though the antipodal map has different behaviour for different dimensions, the Lefschetz number ends up being zero all the time. We will soon see why this is the case.
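Both χ and L(f ) are just alternating sums of traces, so the examples above take one line each to verify numerically. A small sketch (the lists of traces encode the induced maps on Hi(·; Q) described above; this is my own illustration, not code from the course):

```python
def lefschetz(traces):
    """Alternating sum of the traces of f* on H_i(X; Q), for i = 0, 1, 2, ..."""
    return sum((-1) ** i * t for i, t in enumerate(traces))

# Euler characteristic = L(id): the traces are just the Betti numbers.
print(lefschetz([1, 2 * 2, 1]))   # χ(Σ_2) = 2 - 2*2 = -2

# Antipodal map on S^n: identity on H_0, multiplication by (-1)^(n+1) on H_n.
for n in (2, 3, 4, 5):
    traces = [1] + [0] * (n - 1) + [(-1) ** (n + 1)]
    print(n, lefschetz(traces))   # always 0
```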
Why do we want to care about the Lefschetz number? The important thing is that we will use this to prove a really powerful generalization of Brouwer's fixed point theorem that allows us to talk about things that are not balls. Before that, we want to understand the Lefschetz number first.
To define the Lefschetz number, we take the trace of f∗, and this is a map of the homology groups. However, we would like to understand the Lefschetz number in terms of the chain groups, since these are easier to comprehend. Recall that the homology groups are defined as quotients of the chain groups, so we would like to know what happens to the trace when we take quotients.
Lemma. Let V be a finite-dimensional vector space and W ≤ V a subspace. Let A : V → V be a linear map such that A(W ) ⊆ W. Let B = A|W : W → W and C : V /W → V /W the induced map on the quotient. Then
tr(A) = tr(B) + tr(C).
Proof. In the right basis, the matrix of A is block upper triangular, with diagonal blocks B and C.
What this allows us to do is to not look at the induced maps on homology, but just the maps on chain complexes. This makes our life much easier when it comes to computation.
Corollary. Let f· : C·(K; Q) → C·(K; Q) be a chain map. Then
Σ_{i≥0} (−1)^i tr(fi : Ci(K) → Ci(K)) = Σ_{i≥0} (−1)^i tr(f∗ : Hi(K) → Hi(K)),
with homology groups understood to be over Q.
This is a great corollary. The thing on the right is the conceptually right thing to have — homology groups are nice and are properties of the space itself, not the triangulation. However, to actually do computations, we want to work with the chain groups and actually calculate with chain groups.
Proof. There is an exact sequence
0 → Bi(K; Q) → Zi(K; Q) → Hi(K; Q) → 0.
This is since Hi(K; Q) is defined as the quotient of Zi over Bi. We also have the exact sequence
0 → Zi(K; Q) → Ci(K; Q) −di→ Bi−1(K; Q) → 0.
This is true by definition of Bi−1 and Zi. Let f^H_i, f^Z_i, f^B_i, f^C_i be the various maps induced by f on the corresponding groups. Then we have
L(|f |) = Σ_{i≥0} (−1)^i tr(f^H_i) = Σ_{i≥0} (−1)^i (tr(f^Z_i) − tr(f^B_i)) = Σ_{i≥0} (−1)^i (tr(f^C_i) − tr(f^B_{i−1}) − tr(f^B_i)).
Because of the alternating signs, each f^B_i appears twice in the sum with opposite signs. So all the f^B_i cancel out, and we are left with
L(|f |) = Σ_{i≥0} (−1)^i tr(f^C_i).
Since tr(id |Ci(K;Q)) = dimQ Ci(K; Q), which is just the number of i-simplices, this tells us the Euler characteristic we just defi
ned is the usual Euler characteristic, i.e. χ(X) = (−1)i number of i-simplices. Finally, we get to the important theorem of the section. i≥0 Theorem (Lefschetz fixed point theorem). Let f : X → X be a continuous map from a triangulable space to itself. If L(f ) = 0, then f has a fixed point. Proof. We prove the contrapositive. Suppose f has no fixed point. We will show that L(f ) = 0. Let δ = inf{|x − f (x)| : x ∈ X} thinking of X as a subset of Rn. We know this is non-zero, since f has no fixed point, and X is compact (and hence the infimum point is achieved by some x). Choose a triangulation L : |K| → X such that µ(K) < δ 2. We now let g : K (r) → K be a simplicial approximation to f. Since we picked our triangulation to be so fine, for x ∈ σ ∈ K, we have |f (x) − g(x)| < δ 2 94 7 Simplicial homology II Algebraic Topology since the mesh is already less than δ 2. Also, we know So we have |f (x) − x| ≥ δ. |g(x) − x| > δ 2. So we must have g(x) ∈ σ. The conclusion is that for any σ ∈ K, we must have g(σ) ∩ σ = ∅. Now we compute L(f ) = L(|g|). The only complication here is that g is a map from K (r) to K, and the domains and codomains are different. So we need to compose it with si : Ci(K; Q) → Ci(K (r); Q) induced by inverses of simplicial approximations to the identity map. Then we have L(|g|) = = i≥0 i≥0 (−1)i tr(g∗ : Hi(X; Q) → Hi(X; Q)) (−1)i tr
(gi ◦ si : Ci(K; Q) → Ci(K; Q)) Now note that si takes simplices of σ to sums of subsimplices of σ. So gi ◦ si takes every simplex off itself. So each diagonal terms of the matrix of gi ◦ si is 0! Hence the trace is L(|g|) = 0. Example. If X is any contractible polyhedron (e.g. a disk), then L(f ) = 1 for any f : X → X, which is obvious once we contract the space to a point. So f has a fixed point. Example. Suppose G is a path-connected topological group, i.e. G is a group and a topological space, and inverse and multiplication are continuous maps. If g = 1, then the map rg : G → G γ → γg has no fixed point. This implies L(rg) = 0. However, since the space is path connected, rg id, where the homotopy is obtained by multiplying elements along the path from e to rg. So χ(G) = L(idG) = L(rg) = 0. So if G = 1, then in fact χ(G) = 0. This is quite fun. We have worked with many surfaces. Which can be topological groups? The torus is, since it is just S1 × S1, and S1 is a topological group. The Klein bottle? Maybe. However, other surfaces cannot since they don’t have Euler characteristic 0. 95exact sequence of complexes 0 A· i· B· q· C· 0. Then there are maps such that there is a long exact sequence ∂ : Hn(C·) → Hn−1(A·) · · · Hn(A) Hn−1(A) i∗ i∗ Hn(B) ∂∗ Hn−1(B) q∗ q∗ Hn(C). Hn−1(C) · · · The method of proving this is sometimes known as “diagram chasing”, where we just “chase” around commutative diagrams to find the elements we need. The idea of the proof is as follows — in the short exact sequence, we
can think of A as a subgroup of B, and C as the quotient B/A, by the first isomorphism theorem. So any element of C can be represented by an element of B. We apply the boundary map to this representative, and then exactness shows that this must come from some element of A. We then check carefully that these is well-defined, i.e. does not depend on the representatives chosen. Proof. The proof of this is in general not hard. It just involves a lot of checking of the details, such as making sure the homomorphisms are well-defined, are actually homomorphisms, are exact at all the places etc. The only important and non-trivial part is just the construction of the map ∂∗. First we look at the following commutative diagram: 0 0 An dn An−1 in in−1 Bn dn Bn−1 qn qn−1 Cn dn Cn−1 0 0 To construct ∂∗ : Hn(C) → Hn−1(A), let [x] ∈ Hn(C) be a class represented by x ∈ Zn(C). We need to find a cycle z ∈ An−1. By exactness, we know the map qn : Bn → Cn is surjective. So there is a y ∈ Bn such that qn(y) = x. Since our target is An−1, we want to move down to the next level. So consider dn(y) ∈ Bn−1. We would be done if dn(y) is in the image of in−1. By exactness, this is equivalent saying dn(y) is in the kernel of qn−1. Since the diagram is commutative, we know qn−1 ◦ dn(y) = dn ◦ qn(y) = dn(x) = 0, 25 3 Four major tools of (co)homology III Algebraic Topology using the fact that x is a cycle. So dn(y) ∈ ker qn−1 = im in−1. Moreover, by exactness again, in−1 is injective. So there is a unique z ∈ An−1 such that in−1(
z) = dn(y). We have now produced our z. We are not done. We have ∂∗[x] = [z] as our candidate definition, but we need to check many things: (i) We need to make sure ∂∗ is indeed a homomorphism. (ii) We need dn−1(z) = 0 so that [z] ∈ Hn−1(A); (iii) We need to check [z] is well-defined, i.e. it does not depend on our choice of y and x for the homology class [x]. (iv) We need to check the exactness of the resulting sequence. We now check them one by one: (i) Since everything involved in defining ∂∗ are homomorphisms, it follows that ∂∗ is also a homomorphism. (ii) We check dn−1(z) = 0. To do so, we need to add an additional layer. 0 0 0 An dn An−1 in in−1 Bn dn Bn−1 qn qn−1 Cn dn Cn−1 dn−1 dn−1 dn−1 An−2 in−2 Bn−2 qn−2 Cn−2 0 0 0 We want to check that dn−1(z) = 0. We will use the commutativity of the diagram. In particular, we know in−2 ◦ dn−1(z) = dn−1 ◦ in−1(z) = dn−1 ◦ dn(y) = 0. By exactness at An−2, we know in−2 is injective. So we must have dn−1(z) = 0. (iii) (a) First, in the proof, suppose we picked a different y such that qn(y) = qn(y) = x. Then qn(y − y) = 0. So y − y ∈ ker qn = im in. Let a ∈ An be such that in(a) = y − y. Then dn(y) = dn(y − y) + dn(y) = dn ◦ in(a) + dn(y) = in
−1 ◦ dn(a) + dn(y). Hence when we pull back dn(y) and dn(y) to An−1, the results differ by the boundary dn(a), and hence produce the same homology class. 26 3 Four major tools of (co)homology III Algebraic Topology (b) Suppose [x] = [x]. We want to show that ∂∗[x] = ∂∗[x]. This time, we add a layer above. 0 0 0 An+1 in+1 Bn+1 qn+1 Cn+1 dn+1 dn+1 dn+1 An dn An−1 in in−1 Bn dn Bn−1 qn qn−1 Cn dn Cn−1 0 0 0 By definition, since [x] = [x], there is some c ∈ Cn+1 such that x = x + dn+1(c). By surjectivity of qn+1, we can write c = qn+1(b) for some b ∈ Bn+1. By commutativity of the squares, we know x = x + qn ◦ dn+1(b). The next step of the proof is to find some y such that qn(y) = x. Then qn(y + dn+1(b)) = x. So the corresponding y is y = y + dn+1(b). So dn(y) = dn(y), and hence ∂∗[x] = ∂∗[x]. (iv) This is yet another standard diagram chasing argument. When reading this, it is helpful to look at a diagram and see how the elements are chased along. It is even more beneficial to attempt to prove this yourself. (a) im i∗ ⊆ ker q∗: This follows from the assumption that in ◦ qn = 0. (b) ker q∗ ⊆ im i∗: Let [b] ∈ Hn(B). Suppose q∗([b]) = 0. Then there is some c ∈ Cn+1 such that qn(b) = dn+1(c). By surjectivity of
qn+1, there is some b ∈ Bn+1 such that qn+1(b) = c. By commutativity, we know qn(b) = qn ◦ dn+1(b), i.e. qn(b − dn+1(b)) = 0. By exactness of the sequence, we know there is some a ∈ An such that in(a) = b − dn+1(b). Moreover, in−1 ◦ dn(a) = dn ◦ in(a) = dn(b − dn+1(b)) = 0, using the fact that b is a cycle. Since in−1 is injective, it follows that dn(a) = 0. So [a] ∈ Hn(A). Then i∗([a]) = [b] − [dn+1(b)] = [b]. So [b] ∈ im i∗. 27 3 Four major tools of (co)homology III Algebraic Topology (c) im q∗ ⊆ ker ∂∗: Let [b] ∈ Hn(B). To compute ∂∗(q∗([b])), we first pull back qn(b) to b ∈ Bn. Then we compute dn(b) and then pull it back to An+1. However, we know dn(b) = 0 since b is a cycle. So ∂∗(q∗([b])) = 0, i.e. ∂∗ ◦ q∗ = 0. (d) ker ∂∗ ⊆ im q∗: Let [c] ∈ Hn(C) and suppose ∂∗([c]) = 0. Let b ∈ Bn be such that qn(b) = c, and a ∈ An−1 such that in−1(a) = dn(b). By assumption, ∂∗([c]) = [a] = 0. So we know a is a boundary, say a = dn(a) for some a ∈ An. Then by commutativity we know dn(b) = dn ◦ in(a). In other words, dn(b − in(a)) = 0.
So [b − in(a)] ∈ Hn(B). Moreover, q∗([b − in(a)]) = [qn(b) − qn ◦ in(a)] = [c]. So [c] ∈ im q∗. (e) im ∂∗ ⊆ ker i∗: Let [c] ∈ Hn(C). Let b ∈ Bn be such that qn(b) = c, and a ∈ An−1 be such that in(a) = dn(b). Then ∂∗([c]) = [a]. Then i∗([a]) = [in(a)] = [dn(b)] = 0. So i∗ ◦ ∂∗ = 0. (f) ker i∗ ⊆ im ∂∗: Let [a] ∈ Hn(A) and suppose i∗([a]) = 0. So we can find some b ∈ Bn+1 such that in(a) = dn+1(b). Let c = qn+1(b). Then dn+1(c) = dn+1 ◦ qn+1(b) = qn ◦ dn+1(b) = qn ◦ in(a) = 0. So [c] ∈ Hn(C). Then [a] = ∂∗([c]) by definition of ∂∗. So [a] ∈ im ∂∗. Another piece of useful algebra is known as the 5-lemma: Lemma (Five lemma). Consider the following commutative diagram If the two rows are exact, m and p are isomorphisms, q is injective and is surjective, then n is also an isomorphism. Proof. The philosophy is exactly the same as last time. We first show that n is surjective. Let c ∈ C. Then we obtain d = t(c) ∈ D. Since p is an isomorphism, we can find d ∈ D such that p(d) = d. Then we have q(j(d)) = u(p(d)) = u(f (c)) = 0. Since q is injective, we know j(
d) = 0. Since the sequence is exact, there is some c ∈ C such that h(c) = d. We are not yet done. We do not know that n(c) = c. All we know is that d(n(c)) = d(c). So d(c − n(c)) = 0. By exactness at C, we can find some b 28 3 Four major tools of (co)homology III Algebraic Topology such that s(b) = n(c) − c. Since m was surjective, we can find b ∈ B such that m(b) = b. Then we have So we have n(g(b)) = n(c) − c. n(c − g(b)) = c. So n is surjective. Showing that n is injective is similar. Corollary. Let f : (X, A) → (Y, B) be a map of pairs, and that any two of f∗ : H∗(X, A) → H∗(Y, B), H∗(X) → H∗(Y ) and H∗(A) → H∗(B) are isomorphisms. Then the third is also an isomorphism. Proof. Follows from the long exact sequence and the five lemma. That wasn’t too bad, as it is just pure algebra. Proof of homotopy invariance The next goal is want to show that homotopy of continuous maps does not affect the induced map on the homology groups. We will do this by showing that homotopies of maps induce homotopies of chain complexes, and chain homotopic maps induce the same map on homology groups. To make sense of this, we need to know what it means to be a homotopy of chain complexes. Definition (Chain homotopy). A chain homotopy between chain maps f·, g· : C· → D· is a collection of homomorphisms Fn : Cn → Dn+1 such that gn − fn = dD n+1 ◦ Fn + Fn−1 ◦ dC n : Cn → Dn for all n. The idea of the chain homotopy is that Fn(σ) gives us an
n + 1 simplex whose boundary is gn − fn, plus some terms arising from the boundary of c itself: gn(σ) gn(σ) Fn(σ) : dFn(σ) = + Fn−1(dσ) fn(σ) fn(σ) We will not attempt to justify the signs appearing in the definition; they are what are needed for it to work. The relevance of this definition is the following result: Lemma. If f· and g· are chain homotopic, then f∗ = g∗ : H∗(C·) → H∗(D·). Proof. Let [c] ∈ Hn(C·). Then we have gn(c) − fn(c) = dD n+1Fn(c) + Fn−1(dC n (c)) = dD n+1Fn(c), where the second term dies because c is a cycle. So we have [gn(c)] = [fn(c)]. 29 3 Four major tools of (co)homology III Algebraic Topology That was the easy part. What we need to do now is to show that homotopy of maps between spaces gives a chain homotopy between the corresponding chain maps. We will change notation a bit. Notation. From now on, we will just write d for dC n. For f : X → Y, we will write f# : Cn(X) → Cn(Y ) for the map σ → f ◦ σ, i.e. what we used to call fn. Now if H : [0, 1] × X → Y is a homotopy from f to g, and σ : ∆n → X is an n-chain, then we get a homotopy [0, 1] × ∆n [0,1]×σ [0, 1] × X H Y from f#(σ) to g#(σ). Note that we write [0, 1] for the identity map [0, 1] → [0, 1]. The idea is that we are going to cut up [0, 1] × ∆n into n + 1-simplices. Suppose we can find a collection of chains Pn ∈ Cn+1([0, 1] × ∆n)
for n ≥ 0 such that

d(Pn) = i1 − i0 − Σ_{j=0}^{n} (−1)^j ([0, 1] × δj)#(Pn−1),

where

i0 : ∆n = {0} × ∆n → [0, 1] × ∆n
i1 : ∆n = {1} × ∆n → [0, 1] × ∆n

and δj : ∆n−1 → ∆n is the inclusion of the jth face. These are "prisms" connecting the top and bottom face. Intuitively, the prism P2 is the solid triangular prism over ∆2 (picture omitted), and the formula tells us its boundary is the top and bottom triangles, plus the side faces given by the prisms of the edges.

Suppose we managed to find such prisms. We can then define Fn : Cn(X) → Cn+1(Y ) by sending

(σ : ∆n → X) → (H ◦ ([0, 1] × σ))#(Pn).

We now calculate.

dFn(σ) = d((H ◦ ([0, 1] × σ))#(Pn))
       = (H ◦ ([0, 1] × σ))#(d(Pn))
       = (H ◦ ([0, 1] × σ))# ( i1 − i0 − Σ_{j=0}^{n} (−1)^j ([0, 1] × δj)#(Pn−1) )
       = H ◦ ([0, 1] × σ) ◦ i1 − H ◦ ([0, 1] × σ) ◦ i0 − Σ_{j=0}^{n} (−1)^j (H ◦ ([0, 1] × (σ ◦ δj)))#(Pn−1)
       = g ◦ σ − f ◦ σ − Fn−1(dσ)
       = g#(σ) − f#(σ) − Fn−1(dσ).

So we just have to show that Pn exists.
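To see what such a prism looks like in the smallest non-trivial case, here is n = 1 checked by hand (this verification is an addition for illustration). Write v0, v1 for the bottom vertices and w0, w1 for the top ones. The prism [0, 1] × ∆1 is cut into the two triangles [v0, w0, w1] and [v0, v1, w1], and P1 = [v0, w0, w1] − [v0, v1, w1]. Then

dP1 = ([w0, w1] − [v0, w1] + [v0, w0]) − ([v1, w1] − [v0, w1] + [v0, v1])
    = [w0, w1] − [v0, v1] − ([v1, w1] − [v0, w0])
    = i1 − i0 − Σ_{j=0}^{1} (−1)^j ([0, 1] × δj)#(P0),

since P0 = [v0, w0], and the prisms over the two faces of ∆1 are [v1, w1] (the face δ0, with sign +1) and [v0, w0] (the face δ1, with sign −1).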
We already had a picture of what it looks like, so we just need to find a formula that represents it. We view [0, 1] × ∆n ⊆ R × Rn+1. Write {v0, v1, · · ·, vn} for the vertices of {0} × ∆n ⊆ [0, 1] × ∆n, and {w0, · · ·, wn} for the corresponding vertices of {1} × ∆n. Now if {x0, x1, · · ·, xn+1} ⊆ {v0, · · ·, vn} ∪ {w0, · · ·, wn}, we let

[x0, · · ·, xn+1] : ∆n+1 → [0, 1] × ∆n

be given by

(t0, · · ·, tn+1) → Σ_i ti xi.

This is still in the space by convexity. We let

Pn = Σ_{i=0}^{n} (−1)^i [v0, v1, · · ·, vi, wi, wi+1, · · ·, wn] ∈ Cn+1([0, 1] × ∆n).

It is a boring check that this actually works, and we shall not bore the reader with the details.

Proof of excision and Mayer-Vietoris

Finally, we prove excision and Mayer-Vietoris together. It turns out both follow easily from what we call the "small simplices theorem".

Definition (C^U_n(X) and H^U_n(X)). We let U = {Uα}α∈I be a collection of subspaces of X such that their interiors cover X, i.e.

∪_{α∈I} ˚Uα = X.

Let C^U_n(X) ⊆ Cn(X) be the subgroup generated by those singular n-simplices σ : ∆n → X such that σ(∆n) ⊆ Uα for some α. It is clear that if σ lies in Uα, then so do its faces. So C^U_·(X) is a sub-chain complex of C·(X). We write

H^U_n(X) = Hn(C^U_·(X)).
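To see why some subdivision will be needed at all, here is a tiny numerical illustration (an addition, not part of the notes). Cover S1 by the two open arcs A = {cos θ > −1/2} and B = {cos θ < 1/2}, and let σ be the 1-simplex wrapping once around the circle, σ(t) = (cos 2πt, sin 2πt). Then σ lies in neither A nor B, so σ ∉ C^U_1(S1). But for a 1-simplex, barycentric subdivision is just cutting the interval in half, and after two subdivisions every piece maps into A or into B:

import numpy as np

in_A = lambda theta: np.cos(theta) > -0.5   # open arc around (1, 0)
in_B = lambda theta: np.cos(theta) < 0.5    # open arc around (-1, 0)

def piece_in(cover_test, a, b, samples=200):
    # check (numerically, on a grid) that sigma([a, b]) lies in this element of the cover
    ts = np.linspace(a, b, samples)
    return bool(np.all(cover_test(2 * np.pi * ts)))

for k in range(4):   # k-fold halving of [0, 1] into 2^k pieces
    pieces = [(j / 2**k, (j + 1) / 2**k) for j in range(2**k)]
    ok = all(piece_in(in_A, a, b) or piece_in(in_B, a, b) for a, b in pieces)
    print(k, ok)     # prints: 0 False, 1 False, 2 True, 3 True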
It would be annoying if each choice of open cover gave a different homology theory, because this would be too many homology theories to think about. The small simplices theorem says that the natural map H^U_∗(X) → H∗(X) is an isomorphism.

Theorem (Small simplices theorem). The natural map H^U_∗(X) → H∗(X) is an isomorphism.

The idea is that we can cut up each simplex into smaller parts by barycentric subdivision, and if we do it enough times, each part will eventually lie in one of the sets of the open cover. We then go on to prove that cutting it up does not change homology. Proving it is not hard, but technically annoying. So we first use this to deduce our theorems.

Proof of Mayer-Vietoris. Let X = A ∪ B, with A, B open in X. We let U = {A, B}, and write C·(A + B) = C^U_·(X). Then we have a natural chain map

C·(A) ⊕ C·(B) --(jA − jB)--> C·(A + B)

that is surjective. The kernel consists of (x, y) such that jA(x) − jB(y) = 0, i.e. jA(x) = jB(y). But j doesn't really do anything. It just forgets that the simplices lie in A or B. So this means y = x is a chain in A ∩ B. We thus deduce that we have a short exact sequence of chain complexes

C·(A ∩ B) --(iA, iB)--> C·(A) ⊕ C·(B) --(jA − jB)--> C·(A + B).

Then the snake lemma shows that we obtain a long exact sequence of homology groups

· · · → Hn(A ∩ B) --(iA, iB)--> Hn(A) ⊕ Hn(B) --(jA − jB)--> H^U_n(X) → · · · .

By the small simplices theorem, we can replace H^U_n(X) with Hn(X). So we obtain Mayer-Vietoris.
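As a sanity check on how the sequence is used in practice (this example is an addition, not part of the notes), cover Sn (n ≥ 1) by open sets A and B given by slightly enlarged upper and lower hemispheres. Both are contractible, and A ∩ B deformation retracts to the equator Sn−1. The Mayer-Vietoris sequence then contains the segment

Hk(A) ⊕ Hk(B) → Hk(Sn) --∂--> Hk−1(A ∩ B) → Hk−1(A) ⊕ Hk−1(B),

and for k ≥ 2 the outer terms vanish, so ∂ gives Hk(Sn) ∼= Hk−1(A ∩ B) ∼= Hk−1(Sn−1). Together with the easy low-degree cases, this recovers Hk(Sn) ∼= Z for k = 0, n and 0 otherwise.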
Now what does the boundary map ∂ : Hn(X) → Hn−1(A ∩ B) do? Suppose we have c ∈ Hn(X) represented by a cycle a + b ∈ C^U_n(X), with a supported in A and b supported in B. By the small simplices theorem, such a representative always exists. Then the proof of the snake lemma says that ∂([a + b]) is given by tracing through

Cn(A) ⊕ Cn(B) --(jA − jB)--> Cn(A + B)
      |
      | d
      v
Cn−1(A ∩ B) --(iA, iB)--> Cn−1(A) ⊕ Cn−1(B)

We now pull back a + b along jA − jB to obtain (a, −b), then apply d to obtain (da, −db). Then the required object is [da] = [−db].

We now move on to prove excision.

Proof of excision. Let Z ⊆ A ⊆ X be such that ¯Z ⊆ ˚A. Let B = X \ Z. Then again take U = {A, B}. By assumption, their interiors cover X. We consider the two short exact sequences

0 → C·(A) → C·(A + B) → C·(A + B)/C·(A) → 0
0 → C·(A) → C·(X) → C·(X, A) → 0,

where the first maps to the second via the inclusion C·(A + B) → C·(X). Looking at the induced map between the long exact sequences on homology, the left-hand and middle vertical maps are isomorphisms (the identity, and the small simplices theorem respectively), so the right-hand one is too by the 5-lemma. On the other hand, the map

C·(B)/C·(A ∩ B) → C·(A + B)/C·(A)

is an isomorphism of chain complexes. Since their homologies are H·(B, A ∩ B) and H·(X, A) respectively (the latter by the above), we infer that the two are isomorphic. Recalling that B = X \ Z and A ∩ B = A \ Z, we have shown that

H∗(X \ Z, A \ Z) ∼= H∗(X, A).

We now provide a sketch proof of the small simplices theorem. As mentioned, the idea is to cut our simplices up, and one method to do so is barycentric subdivision.
Given a 0-simplex {v0}, its barycentric subdivision is itself. If x = {x0, · · ·, xn} ⊆ Rn spans an n-simplex σ, we let

bx = (1/(n + 1)) Σ_{i=0}^{n} xi

be its barycenter. For a 1-simplex, the barycentric subdivision cuts it into two at its midpoint (picture omitted). We can, somewhat degenerately, describe this as first barycentrically subdividing the boundary (which does nothing in this case), and then adding the barycenter. In the case of a 2-simplex, we first barycentrically subdivide the boundary, then add the barycenter bx, and for each simplex in the subdivided boundary, we "cone it off" towards bx (pictures omitted).

More formally, in the standard n-simplex ∆n ⊆ Rn+1, we let bn be its barycenter. For each singular i-simplex σ : ∆i → ∆n, we define Cone^∆n_i(σ) : ∆i+1 → ∆n by

(t0, t1, · · ·, ti+1) → t0 bn + (1 − t0) · σ((t1, · · ·, ti+1)/(1 − t0)).

We can then extend linearly to get a map Cone^∆n_i : Ci(∆n) → Ci+1(∆n).

Example. In the 2-simplex, the cone of the bottom edge is the 2-simplex spanned by the bottom edge together with the barycenter (picture omitted).

Coning raises the degree by one, so it is not itself a chain map, but it interacts with the boundary in a simple way. For i > 0, we have

dCone^∆n_i(σ) = Σ_{j=0}^{i+1} (−1)^j Cone^∆n_i(σ) ◦ δj
             = σ + Σ_{j=1}^{i+1} (−1)^j Cone^∆n_{i−1}(σ ◦ δj−1)
             = σ − Cone^∆n_{i−1}(dσ).

For i = 0, we get

dCone^∆n_0(σ) = σ − ε(σ) · bn.
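Concretely (an added check, writing [x0, . . . , xk] for the affine simplex with the given vertices): if σ = [e1, e2] is the bottom edge of ∆2 and b2 is the barycenter of ∆2, then Cone^∆2_1(σ) = [b2, e1, e2], and

dCone^∆2_1(σ) = [e1, e2] − [b2, e2] + [b2, e1] = σ − ([b2, e2] − [b2, e1]) = σ − Cone^∆2_0(dσ),

since dσ = [e2] − [e1].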
In total, we have

dCone^∆n_i + Cone^∆n_{i−1} ◦ d = id − c·,

where c· : C·(∆n) → C·(∆n) is the map with ci = 0 for i > 0 and c0(σ) = ε(σ) bn.

We now use this cone map to construct a barycentric subdivision map ρ^X_n : Cn(X) → Cn(X), and insist that it is natural, i.e. that if f : X → Y is a map, then f# ◦ ρ^X_n = ρ^Y_n ◦ f#, which is to say the diagram

Cn(X) --f#--> Cn(Y)
  | ρ^X_n       | ρ^Y_n
  v             v
Cn(X) --f#--> Cn(Y)

commutes. So if σ : ∆n → X, we let ιn : ∆n → ∆n ∈ Cn(∆n) be the identity map. Then we must have

ρ^X_n(σ) = ρ^X_n(σ# ιn) = σ# ρ^∆n_n(ιn).

So if we know how to do barycentric subdivision for ιn itself, then by naturality, we have defined it for all spaces! Naturality makes life easier for us, not harder!

So we define ρ^X_n recursively on n, for all spaces X at once, by

(i) ρ^X_0 = id_{C0(X)};

(ii) for n > 0, we define the barycentric subdivision of ιn by

ρ^∆n_n(ιn) = Cone^∆n_{n−1}(ρ^∆n_{n−1}(dιn)),

and then extend by naturality.

This has all the expected properties:

Lemma. ρ^X_· is a natural chain map.

Lemma. ρ^X_· is chain homotopic to the identity.

Proof. No one cares.

Lemma. The diameter of each subdivided simplex in (ρ^∆n_n)^k(ιn) is bounded by (n/(n + 1))^k diam(∆n).

Proof. Basic geometry.
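For example (a small added check), on ∆1 the recursion reproduces the picture of cutting an interval at its midpoint. Writing b for the barycenter of ∆1 = [v0, v1],

ρ^∆1_1(ι1) = Cone^∆1_0(ρ^∆1_0(dι1)) = Cone^∆1_0([v1] − [v0]) = [b, v1] − [b, v0],

and its boundary is ([v1] − [b]) − ([v0] − [b]) = [v1] − [v0] = d(ι1), as it must be for a chain map.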
Proposition. If c ∈ C^U_n(X), then ρ^X_n(c) ∈ C^U_n(X). Moreover, if c ∈ Cn(X), then there is some k such that (ρ^X_n)^k(c) ∈ C^U_n(X).

Proof. The first part is clear. For the second part, note that every chain is a finite sum of simplices. So we only have to check it for single simplices. We let σ be a simplex, and let V = {σ−1 ˚Uα} be an open cover of ∆n. By the Lebesgue number lemma, there is some ε > 0 such that any set of diameter < ε is contained in some σ−1 ˚Uα. So we can choose k > 0 such that (ρ^∆n_n)^k(ιn) is a sum of simplices each of which has diameter < ε. So each of them lies in some σ−1 ˚Uα. So

(ρ^∆n_n)^k(ιn) ∈ C^V_n(∆n).

So applying σ# tells us (ρ^X_n)^k(σ) ∈ C^U_n(X).

Finally, we get to the theorem.

Theorem (Small simplices theorem). The natural map H^U_∗(X) → H∗(X) is an isomorphism.

Proof. Let [c] ∈ Hn(X). By the proposition, there is some k > 0 such that (ρ^X_n)^k(c) ∈ C^U_n(X). We know that ρ^X_· is chain homotopic to the identity, and thus so is (ρ^X_·)^k. So [(ρ^X_n)^k(c)] = [c]. So the map H^U_n(X) → Hn(X) is surjective.

To show that it is injective, we suppose [c] ∈ H^U_n(X) maps to 0 ∈ Hn(X). Then we can find some z ∈ Cn+1(X) such that dz = c. We can then similarly subdivide z enough such that it lies in C^U_{n+1}(X). So this shows that [c] = 0 ∈ H^U_n(X).

That's it.
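The diameter lemma used above is easy to believe; here is a quick numerical check for n = 2 (an addition, not from the notes). Each simplex of the barycentric subdivision corresponds to an ordering of the vertices, with corners the barycenters of the initial segments of that ordering:

import itertools
import numpy as np

def subdivide(vertices):
    # one barycentric subdivision: each piece corresponds to an ordering of the
    # vertices, and its corners are the barycenters of the initial segments
    pieces = []
    for order in itertools.permutations(range(len(vertices))):
        corners = [np.mean([vertices[j] for j in order[:i]], axis=0)
                   for i in range(1, len(vertices) + 1)]
        pieces.append(corners)
    return pieces

def diam(points):
    return max(np.linalg.norm(p - q) for p in points for q in points)

delta2 = [np.array(v, dtype=float) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
ratio = max(diam(s) for s in subdivide(delta2)) / diam(delta2)
print(ratio, "<=", 2 / 3)   # prints roughly 0.577 <= 0.666...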
We move on to (slightly) more interesting stuff. The next few sections will all be slightly short, as we touch on various different ideas.

4 Reduced homology

Definition (Reduced homology). Let X be a space, and x0 ∈ X a basepoint. We define the reduced homology to be

˜H∗(X) = H∗(X, {x0}).

Note that by the long exact sequence of relative homology, we know that ˜Hn(X) ∼= Hn(X) for n ≥ 1. So what is the point of defining a new homology theory that only differs when n = 0, which we often don't care about?

It turns out there is an isomorphism between H∗(X, A) and ˜H∗(X/A) for suitably "good" pairs (X, A).

Definition (Good pair). We say a pair (X, A) is good if there is an open set U containing ¯A such that A is a deformation retract of U, i.e. there exists a homotopy H : [0, 1] × U → U such that

H(0, x) = x
H(1, x) ∈ A
H(t, a) = a

for all a ∈ A, t ∈ [0, 1].

Theorem. If (X, A) is good, then the natural map

H∗(X, A) → H∗(X/A, A/A) = ˜H∗(X/A)

is an isomorphism.

Proof. As i : A → U is in particular a homotopy equivalence, the map

H∗(A) → H∗(U)

is an isomorphism. So by the five lemma, the map on relative homology

H∗(X, A) → H∗(X, U)

is an isomorphism as well.

As i : A → U is a deformation retraction with homotopy H, the inclusion {∗} = A/A → U/A is also a deformation retraction. So again by the five lemma, the map

H∗(X/A, A/A) → H∗(X/A, U/A)

is also an isomorphism. Now consider the diagram

Hn(X, A) ----∼----> Hn(X, U) <---excise A---- Hn(X \ A, U \ A)
    |                   |                          |
Hn(X/A, A/A) --∼--> Hn(X/A, U/A) <--excise A/A-- Hn(X/A \ A/A, U/A \ A/A)

The left-hand horizontal maps are the isomorphisms established above, and the right-hand horizontal maps are isomorphisms by excision. We now notice that X \ A = X/A \ A/A and U \ A = U/A \ A/A. So the right-hand vertical map is actually an isomorphism. So the result follows.

5 Cell complexes

So far, everything we've said is true for arbitrary spaces. This includes, for example, the topological space with three points a, b, c, whose topology is {∅, {a}, {a, b, c}}. However, these spaces are horrible. We want to restrict our attention to nice spaces. Spaces that do feel like actual, genuine spaces.

The best kinds of space we can imagine would be manifolds, but that is a bit too strong a condition. For example, the union of the two axes in R2 is not a manifold, but it is still a sensible space to talk about. Perhaps we can just impose conditions like Hausdorffness and maybe second countability, but we can still produce nasty spaces that satisfy these properties.

So the idea is to provide a method to build spaces, and then say we only consider spaces built this way. These are known as cell complexes, or CW complexes.

Definition (Cell complex). A cell complex is any space built out of the following procedure:

(i) Start with a discrete space X0. The set of points in X0 is called I0.

(ii) If Xn−1 has been constructed, then we may choose a family of maps {ϕα : Sn−1 → Xn−1}α∈In, and set

Xn = (Xn−1 ⊔ ⊔_{α∈In} Dn_α) / {x ∈ ∂Dn_α ∼ ϕα(x) ∈ Xn−1}.

We call Xn the n-skeleton of X. We call the image of Dn_α \ ∂Dn_α in Xn the open cell eα.

(iii) Finally, we define

X = ∪_{n≥0} Xn

with the weak topology, namely that A ⊆ X is open if A ∩ Xn is open in Xn for all n.

We write Φα : Dn_α → Xn for the obvious inclusion map. This is called the characteristic map for the cell eα.
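As a quick illustration (this example is an addition, not part of the notes), two very small cell structures:

Sn = e0 ∪ en, with the single n-cell attached by the constant map Sn−1 → X0 = {e0};
T2 = e0 ∪ e1_a ∪ e1_b ∪ e2, with the 2-cell attached along the loop aba−1b−1.

The Klein bottle example below has the same cells as the torus, but a different attaching word.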
Definition (Finite-dimensional cell complex). If X = Xn for some n, we say X is finite-dimensional.

Definition (Finite cell complex). If X is finite-dimensional and the In are all finite, then we say X is finite.

Definition (Subcomplex). A subcomplex A of X is a cell complex obtained by using subsets I′n ⊆ In.

Note that we cannot simply throw away some cells to get a subcomplex, as the higher cells might want to map into the cells you have thrown away, and you need to remove them as well.

We note the following technical result without proof:

Lemma. If A ⊆ X is a subcomplex, then the pair (X, A) is good.

Proof. See Hatcher 0.16.

Corollary. If A ⊆ X is a subcomplex, then the natural map Hn(X, A) → ˜Hn(X/A) is an isomorphism.

We are next going to show that we can directly compute the homology of a cell complex by looking at the cell structure, instead of going through all the previous rather ad-hoc mess we've been through. We start with the following lemma:

Lemma. Let X be a cell complex. Then

(i) Hi(Xn, Xn−1) = 0 for i ≠ n, and Hn(Xn, Xn−1) ∼= ⊕_{α∈In} Z.

(ii) Hi(Xn) = 0 for all i > n.

(iii) Hi(Xn) → Hi(X) is an isomorphism for i < n.

Proof.

(i) As (Xn, Xn−1) is good, we have an isomorphism

Hi(Xn, Xn−1) ∼= ˜Hi(Xn/Xn−1).

But we have

Xn/Xn−1 ∼= ⋁_{α∈In} Sn_α,

the space obtained from Y = ⊔_{α∈In} Sn_α by collapsing down the subspace
Z = {xα : α ∈ In}, where each xα is the south pole of the sphere. To compute the homology of the wedge X n/X n−1, we then note that (Y, Z) is good, and so we have a long exact sequence α∈In Sn Hi(Z) Hi(Y ) ˜Hi(Y /Z) Hi−1(Z) Hi−1(Y ). Since Hi(Z) vanishes for i ≥ 1, the result then follows from the homology of the spheres plus the fact that Hi( Xα) = Hi(Xα). (ii) This follows by induction on n. We have (part of) a long exact sequence Hi(X n−1) Hi(X n) Hi(X n, X n−1) We know the first term vanishes by induction, and the third term vanishes for i > n. So it follows that Hi(X n) vanishes. 39 5 Cell complexes III Algebraic Topology (iii) To avoid doing too much point-set topology, we suppose X is finitedimensional, so X = X m for some m. Then we have a long exact sequence Hi+1(X n+1, X n) Hi(X n) Hi(X n+1) Hi(X n+1, X n) Now if i < n, we know the first and last groups vanish. So we have Hi(X n) ∼= Hi(X n+1). By continuing, we know that Hi(X n) ∼= Hi(X n+1) ∼= Hi(X n+2) ∼= · · · ∼= Hi(X m) = Hi(X). To prove this for the general case, we need to use the fact that any map from a compact space to a cell complex hits only finitely many cells, and then the result would follow from this special case. For a cell complex X, let n (X) = Hn(X n, X n−1) ∼= C cell Z. α∈In We define dcell n : C cell n (X) → C cell n−1(X) by the composition Hn(X n, X n−1) ∂ Hn−1(X n−1) q Hn−
1(X n−1, X n−2). We consider 0 0 Hn(X n+1) Hn(X n) ∂ qn Hn+1(X n+1, X n) dcell n+1 Hn(X n, X n−1) dcell n Hn−1(X n−1, X n−2) ∂ qn−1 Hn−1(X n−1) 0 Hn−1(X n) Referring to the above diagram, we see that n ◦ dcell dcell n+1 = qn−1 ◦ ∂ ◦ qn ◦ ∂ = 0, (X), dcell· ) is a since the middle ∂ ◦ qn is part of an exact sequence. So (C cell· chain complex, and the corresponding homology groups are known as the cellular homology of X, written H cell n (X). 40 5 Cell complexes III Algebraic Topology Theorem. Proof. We have Hn(X) ∼= Hn(X n+1) H cell n (X) ∼= Hn(X). = Hn(X n)/ im(∂ : Hn+1(X n+1, X n) → Hn(X n)) Since qn is injective, we apply it to and bottom to get = qn(Hn(X n))/ im(dcell n+1 : Hn+1(X n+1, X n) → Hn(X n, X n−1)) By exactness, the image of qn is the kernel of ∂. So we have = ker(∂ : Hn(X n, X n−1) → Hn−1(X n−1))/ im(dcell = ker(dcell = H cell n )/ im(dcell n+1) n (X). n+1) Corollary. If X is a finite cell complex, then Hn(X) is a finitely-generated abelian group for all n, generated by at most |In| elements. In particular, if there are no n-cells, then Hn(X) vanishes. If X has a cell-structure with cells in even-dimensional cells only, then H∗(X) are
all free.

We can similarly define cellular cohomology.

Definition (Cellular cohomology). We define cellular cohomology by

C^n_cell(X) = H^n(Xn, Xn−1)

and let d^n_cell be the composition

H^n(Xn, Xn−1) --q∗--> H^n(Xn) --∂--> H^{n+1}(Xn+1, Xn).

This defines a cochain complex C^·_cell(X) with cohomology H^∗_cell(X), and we have

H^∗_cell(X) ∼= H^∗(X).

One can directly check that C^·_cell(X) ∼= Hom(C^cell_·(X), Z).

This is all very good, because cellular homology is very simple and concrete. However, to actually use it, we need to understand what the map

d^cell_n : C^cell_n(X) = ⊕_{α∈In} Z{eα} → C^cell_{n−1}(X) = ⊕_{β∈In−1} Z{eβ}

is. In particular, we want to find the coefficients dαβ such that

d^cell_n(eα) = Σ_β dαβ eβ.

It turns out this is pretty easy.

Lemma. The coefficients dαβ are given by the degree of the map

fαβ : S^{n−1}_α = ∂D^n_α --ϕα--> X^{n−1} → X^{n−1}/X^{n−2} = ⋁_{γ∈In−1} S^{n−1}_γ → S^{n−1}_β,

where the final map is obtained by collapsing the other spheres in the wedge. In the case of cohomology, the maps are given by the transposes of these.

This is easier in practice than it sounds. In practice, the map is given by "the obvious one".

Proof. Consider the commutative square

Hn(D^n_α, ∂D^n_α) --(Φα)∗--> Hn(Xn, Xn−1)
      | ∂                          | ∂
Hn−1(∂D^n_α) --(ϕα)∗--> Hn−1(Xn−1)

together with the maps

Hn−1(Xn−1) --q--> Hn−1(Xn−1, Xn−2) --∼--> ˜Hn−1(Xn−1/Xn−2) = ˜Hn−1(⋁_{γ∈In−1} S^{n−1}_γ) --collapse--> ˜Hn−1(S^{n−1}_β),

where the middle isomorphism comes from (Xn−1, Xn−2) being a good pair. By the long exact sequence of (D^n_α, ∂D^n_α), the left-hand map ∂ is an isomorphism.

Now trace through, starting with the generator of Hn(D^n_α, ∂D^n_α) ∼= Z. Going right and then down, (Φα)∗ sends it to eα, then ∂ followed by q sends it to d^cell_n(eα) = Σ_γ dαγ eγ, and collapsing onto the β-th sphere picks out dαβ. Going down and then along the bottom, ∂ sends the generator to the fundamental class of ∂D^n_α = S^{n−1}_α, and the composite of the remaining maps is precisely (fαβ)∗, which multiplies it by deg(fαβ). So the degree of fαβ is indeed dαβ.
Example. Let K be the Klein bottle. We build it from an identification square whose boundary reads a b a−1 b, with all four corners identified to a single point v (picture omitted). We give it a cell complex structure by

– K0 = {v}. Note that all four vertices in the diagram are identified.

– K1 = {a, b}, so the 1-skeleton is a wedge of two circles.

– K2 is the unique 2-cell π we see in the picture, where ϕπ : S1 → K1 is given by the word aba−1b.

The cellular chain complex is given by

0 → C^cell_2(K) --d^cell_2--> C^cell_1(K) --d^cell_1--> C^cell_0(K)
         Zπ                       Za ⊕ Zb                   Zv

We can now compute the maps d^cell_i. The d1 map is easy. We have

d1(a) = d1(b) = v − v = 0.

For the d2 map, we can figure it out by using local degrees. Locally, the attaching map is just like the identity map, up to an orientation flip, so the local degrees are ±1. Moreover, the inverse image of each point has two elements. If we think hard enough, we realize that for the collapse onto a, the two preimages have opposite local degree and cancel each other out; while in the case of b they have the same sign and give a degree of 2. So we have

d2(π) = 0a + 2b.

So we have

H0(K) = Z
H1(K) = (Z ⊕ Z)/⟨2b⟩ = Z ⊕ Z/2Z
H2(K) = 0.
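These cellular boundary maps are just small integer matrices, so computations like this can also be checked mechanically. A minimal sketch (an addition, not part of the notes) recovering the homology of K from d1 and d2 by rank and Smith normal form computations; it assumes SymPy's smith_normal_form is available (recent SymPy versions):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form  # assumed available

def homology(num_cells, d_in, d_out):
    # Homology in one degree: ker(d_out) / im(d_in).
    # d_out = boundary map leaving this degree, d_in = the one arriving; None means the zero map.
    ker_rank = num_cells - (d_out.rank() if d_out is not None else 0)
    im_rank = d_in.rank() if d_in is not None else 0
    torsion = []
    if d_in is not None:
        snf = smith_normal_form(d_in, domain=ZZ)
        diag = [snf[i, i] for i in range(min(snf.rows, snf.cols))]
        torsion = [abs(x) for x in diag if abs(x) > 1]
    return ker_rank - im_rank, torsion   # (free rank, torsion coefficients)

# Cellular chain complex of the Klein bottle:  Z --d2--> Z^2 --d1--> Z
d1 = Matrix([[0, 0]])     # d1(a) = d1(b) = 0
d2 = Matrix([[0], [2]])   # d2(pi) = 0*a + 2*b  (rows = 1-cells, columns = 2-cells)
print(homology(1, d1, None))  # H0: (1, [])   i.e. Z
print(homology(2, d2, d1))    # H1: (1, [2])  i.e. Z + Z/2
print(homology(1, None, d2))  # H2: (0, [])   i.e. 0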
We can similarly compute the cohomology. By dualizing, we have the cochain complex

C^0_cell(K) --0--> C^1_cell(K) --(0 2)--> C^2_cell(K)
     Z                Z ⊕ Z                    Z

So we have

H^0(K) = Z
H^1(K) = Z
H^2(K) = Z/2Z.

Note that the second cohomology is not the dual of the second homology! However, if we forget where each factor is, and just add all the homology groups together, we get Z ⊕ Z ⊕ Z/2Z. Also, if we forget all the torsion components Z/2Z, then they are the same!

This is a general phenomenon. For a cell complex, if we know what all the homology groups are, we can find the cohomologies by keeping the Z's unchanged and moving the torsion components up. The general statement will be given by the universal coefficient theorem.

Example. Consider RPn = Sn/(x ∼ −x). We notice that for any point in RPn, if it is not in the equator, then it is represented by a unique element in the northern hemisphere. Otherwise, it is represented by two points. So we have

RPn ∼= Dn/(x ∼ −x for x ∈ ∂Dn).

This is a nice description, since if we throw out the interior of the disk, then we are left with an Sn−1 with antipodal points identified, i.e. an RPn−1! So we can immediately see that

RPn = RPn−1 ∪f Dn,

for f : Sn−1 → RPn−1 given by f(x) = [x]. So RPn has a cell structure with one cell in every degree up to n. What are the boundary maps?

We write ei for the i-th degree cell. We know that ei is attached along the map f described above. More concretely, we have the composite

f : S^{i−1}_s --ϕi--> RP^{i−1} → RP^{i−1}/RP^{i−2} = S^{i−1}_t.

The open upper and lower hemispheres of S^{i−1}_s are each mapped homeomorphically to S^{i−1}_t \ {∗}. Furthermore,

f|upper = f|lower ◦ a,

where a is the antipodal map. But we know that deg(a) = (−1)^i. So we get the zero map if i is odd, and multiplication by 2 if i is even.
Then we have the cellular chain complex

· · · --2--> Ze3 --0--> Ze2 --2--> Ze1 --0--> Ze0.

What happens on the left end depends on whether n is even or odd. So we have

Hi(RPn) =
  Z       i = 0
  Z/2Z    i odd, i < n
  0       i even, 0 < i < n
  Z       i = n, n odd
  0       otherwise.

We can immediately work out the cohomology too. We will just write out the answer:

H^i(RPn) =
  Z       i = 0
  0       i odd, i < n
  Z/2Z    i even, 0 < i ≤ n
  Z       i = n, n odd
  0       otherwise.

6 (Co)homology with coefficients

Recall that when we defined (co)homology, we constructed these free Z-modules from our spaces. However, we did not actually use the fact that it was Z; we might as well replace it with any abelian group A.

Definition ((Co)homology with coefficients). Let A be an abelian group, and X be a topological space. We let

C·(X; A) = C·(X) ⊗ A

with differentials d ⊗ idA. In other words, C·(X; A) is the abelian group obtained by taking the direct sum of many copies of A, one for each singular simplex. We let

Hn(X; A) = Hn(C·(X; A), d ⊗ idA).

We can also define

H^cell_n(X; A) = Hn(C^cell_·(X) ⊗ A),

and the same proof shows that H^cell_n(X; A) = Hn(X; A).

Similarly, we let C^·(X; A) = Hom(C·(X), A), with the usual (du
al) differential. We again set H n(X; A) = H n(C·(X; A)). We similarly define cellular cohomology. If A is in fact a commutative ring, then these are in fact R-modules. We call A the “coefficients”, since a general member of C·(X; A) looks like nσs, where nσ ∈ A, σ : ∆n → X. We will usually take A = Z, Z/nZ or Q. Everything we’ve proved for homology holds for these with exactly the same proof. Example. In the case of C cell· C cell· (RPn, Z/2), all the differentials are 0. So we have (RPn), the differentials are all 0 or 2. So in Hi(RPn, Z/2) = Z/ Similarly, the cohomology groups are the same. On the other hand, if we take the coefficients to be Q, then multiplication by 2 is now an isomorphism. Then we get for n not too large. C cell· (RPn, Q) Q n odd n even 0 45 7 Euler characteristic III Algebraic Topology 7 Euler characteristic There are many ways to define the Euler characteristic, and they are all equivalent. So to define it, we pick a definition that makes it obvious it is a number. Definition (Euler characteristic). Let X be a cell complex. We let χ(X) = n (−1)n number of n-cells of X ∈ Z. From this definition, it is not clear that this is a property of X itself, rather than something about its cell decomposition. We similarly define χZ(X) = n (−1)n rank Hn(X; Z). For any field F, we define χF(X) = n (−1)n dimF Hn(X; F). Theorem. We have χ = χZ = χF. Proof. First note that the number of n cells of X is the rank of C cell we will just write as C
n. Let n (X), which Zn = ker(dn : Cn → Cn−1) Bn = im(dn+1 : Cn+1 → Cn). We are now going to write down two short exact sequences. By definition of homology, we have 0 Bn Zn Hn(X; Z) 0. Also, the definition of Zn and Bn give us 0 Zn Cn Bn−1 0. We will now use the first isomorphism theorem to know that the rank of the middle term is the sum of ranks of the outer terms. So we have χZ(X) = (−1)n rank Hn(X) = (−1)n(rank Zn − rank Bn). We also have So we have rank Bn = rank Cn+1 − rank Zn+1. χZ(X) = = n (−1)n(rank Zn − rank Cn+1 + rank Zn+1) (−1)n+1 rank Cn+1 n = χ(X). For χF, we use the fact that rank Cn = dimF Cn ⊗ F. 46 8 Cup product III Algebraic Topology 8 Cup product So far, homology and cohomology are somewhat similar. We computed them, saw they are not the same, but they seem to contain the same information nevertheless. However, cohomology is 10 times better, because we can define a ring structure on them, and rings are better than groups. Just like the case of homology and cohomology, we will be able to write down the definition easily, but will struggle to compute it. Definition (Cup product). Let R be a commutative ring, and φ ∈ C k(X; R), ψ ∈ C (X; R). Then φ ψ ∈ C k+(X; R) is given by (φ ψ)(σ : ∆k+ → X) = φ(σ|[v0,...,vk]) · ψ(σ|[vk,...,vk+]). Here the multiplication is multiplication in R, and v0, · · ·, v are the vertices of ∆k+ �
� Rk++1, and the restriction is given by σ|[x0,...,xi](t0,..., ti) = σ. tjxj This is a bilinear map. Notation. We write H ∗(X; R) = H n(X; R). n≥0 This is the definition. We can try to establish some of its basic properties. We want to know how this interacts with the differential d with the cochains. The obvious answer d(φ ψ) = (dφ) (dψ) doesn’t work, because the degrees are wrong. What we have is: Lemma. If φ ∈ C k(X; R) and ψ ∈ C (X; R), then d(φ ψ) = (dφ) ψ + (−1)kφ (dψ). This is like the product rule with a sign. Proof. This is a straightforward computation. Let σ : ∆k++1 → X be a simplex. Then we have ((dφ) ψ)(σ) = (dφ)(σ|[v0,...,vk+1]) · ψ(σ|[vk+1,...,vk++1]) = φ k+1 i=0 (−1)iσ|[v0,...,ˆvi,...,vk+1] · ψ(σ|[vk+1,...,vk++1]) (φ (dψ))(σ) = φ(σ|[v0,...,vk]) · (dψ)(σ|vk,...,vk++1]) = φ(σ|[v0,...,vk]) · ψ k++1 (−1)i−kσ|[vk,...,ˆvi,...,vk++1] i=k = (−1)kφ(σ|[v0,...,vk]) · ψ k++1 (−1)iσ|[vk,...,ˆvi,...,vk++1]. i=k We notice that the last term of the first expression, and the first term of the second expression are exactly the same, except the signs di�
�er by −1. Then the remaining terms overlap in exactly 1 vertex, so we have ((dφ) ψ)(σ) + (−1)kφ (dψ)(σ) = (φ ψ)(dσ) = (d(φ ψ))(σ) as required. 47 8 Cup product III Algebraic Topology This is the most interesting thing about these things, because it tells us this gives a well-defined map on cohomology. Corollary. The cup product induces a well-defined map : H k(X; R) × H (X; R) H k+(X; R) ([φ], [ψ]) [φ ψ] Proof. To see this is defined at all, as dφ = 0 = dψ, we have d(φ ψ) = (dφ) ψ ± φ (dψ) = 0. So φ ψ is a cocycle, and represents the cohomology class. To see this is well-defined, if φ = φ + dτ, then φ ψ = φ ψ + dτ ψ = φ ψ + d(τ ψ) ± τ (dψ). Using the fact that dψ = 0, we know that φ ψ and φ ψ differ by a boundary, so [φ ψ] = [φ ψ]. The case where we change ψ is similar. Note that the operation is associative on cochains, so associative on H ∗ too. Also, there is a map 1 : C0(X) → R sending σ → 1 for all σ. Then we have [1] [φ] = [φ]. So we have Proposition. (H ∗(X; R),, [1]) is a unital ring. Note that this is not necessarily commutative! Instead, we have the following graded commutative condition. Proposition. Let R be a commutative ring. If α ∈ H k(X; R) and β ∈ H (X; R), then we have α β = (−1)kβ α Note that this is only true for the cohomology classes. It is not true in
general for the cochains. So we would expect that this is rather annoying to prove. The proof relies on the following observation: Proposition. The cup product is natural, i.e. if f : X → Y is a map, and α, β ∈ H ∗(Y ; R), then f ∗(α β) = f ∗(α) f ∗(β). So f ∗ is a homomorphism of unital rings. Proof of previous proposition. Let ρn : Cn(X) → Cn(x) be given by σ → (−1)n(n+1)/2σ|[vn,vn−1,...,v0] The σ|[vn,vn−1,...,v0] tells us that we reverse the order of the vertices, and the factor of (−1)n(n+1)/2 is the sign of the permutation that reverses 0, · · ·, n. For convenience, we write εn = (−1)n(n+1)/2. 48 8 Cup product III Algebraic Topology Claim. We claim that ρ· is a chain map, and is chain homotopic to the identity. We will prove this later. Suppose the claim holds. We let φ ∈ C k(X; R) represent α and ψ ∈ C (X; R) represent β. Then we have (ρ∗φ ρ∗ψ)(σ) = (ρ∗φ)(σ|[v0,...,vk](ρ∗ψ)(σ|[vk,...,vk+]) = φ(εk · σ|[vk,...,v0])ψ(εσ|[vk+,...,vk]). Thus, we can compute ρ∗(ψ φ)(σ) = (ψ φ)(εk+σ|[vk+,...,v0]) = εk+ψ(σ|[vk+,...,vk ])φ(σ|[vk,...,v0]) = εk+εkε(ρ∗φ ρ∗ψ)(σ). By checking it directly, we can see that εn+εkε = (−1)k. So we have α β = [
φ ψ] = [ρ∗φ ρ∗ψ] = (−1)k[ρ∗(ψ φ)] = (−1)k[ψ φ] = (−1)klβ α. Now it remains to prove the claim. We have dρ(σ) = εn n (−1)jσ|[vn,...,ˆvn−i,...,v0] i=0 n ρ(dσ) = ρ (−1)iσ|[v0,...,ˆvi,....,vn] i=0 n = εn−1 j=0 (−1)jσ|[vn,...,ˆvj,v0]. We now notice that εn−1(−1)n−i = εn(−1)i. So this is a chain map! We now define a chain homotopy. This time, we need a “twisted prism”. We let Pn = i (−1)iεn−i[v0, · · ·, vi, wn, · · ·, wi] ∈ Cn+1([0, 1] × ∆n), where v0, · · ·, vn are the vertices of {0} × ∆n and w0, · · ·, wn are the vertices of {1} × ∆n. We let π : [0, 1] × ∆n → ∆n be the projection, and let F X n : Cn(X) → Cn+1(X) be given by σ → (σ ◦ π)#(Pn). 49 8 Cup product III Algebraic Topology We calculate dF X n (σ) = (σ ◦ π)#(dPn) = (σ ◦ π#) i j≤i (−1)j(−1)iεn−i[v0, · · ·, ˆvj, · · ·, vi, w0, · · ·, wi] + j≥i (−1)n+i+1−j(−1)iεn−i[
v0, · · ·, vi, wn, · · ·, ˆwj, · · ·, vi] . The terms with j = i give (σ ◦ π)# i εn−i[v0, · · ·, vi−1, wn, · · ·, wi] (−1)n+1(−1)iεn−i[v0, · · ·, vi, wn, · · ·, wi+1] + i = (σ ◦ π)#(εn[wn, · · ·, w0] − [v0, · · ·, vn]) = ρ(σ) − σ The terms with j = i are precisely −F X n−1(dσ) as required. It is easy to see that the terms are indeed the right terms, and we just have to check that the signs are right. I’m not doing that. There are some other products we can define. One example is the cross product: Definition (Cross product). Let πX : X × Y → X, πY : X × Y → Y be the projection maps. Then we have a cross product × : H k(X; R) ⊗R H (Y ; R) a ⊗ b H k+(X × Y ; R) X a) (π∗ Y b) (π∗. Note that the diagonal map ∆ : X → X × X given by ∆(x) = (x, x) satisfies ∆∗(a × b) = a b for all a, b ∈ H ∗(X; R). So these two products determine each other. There is also a relative cup product : H k(X, A; R) ⊗ H k(X; R) → H k+(X, A; R) given by the same formula. Indeed, to see this is properly defined, note that if φ ∈ C k(X, A; R), then φ is a map φ : Ck(X, A) = Ck(X) Ck(A) → R. In other words, it is a map Ck(X) →
R that vanishes on Ck(A). Then if σ ∈ Ck+(A) and ψ ∈ C (X; R), then (φ ψ)(σ) = φ(σ|[v0,...,vk]) · ψ(σ|[vk,...,vk+]). 50 8 Cup product III Algebraic Topology We now notice that [v0, · · ·, vk] ∈ Ck(A). So φ kills it, and this vanishes. So this is a term in H k+(X, A; R). You might find it weird that the two factors of the cup product are in different things, but note that a relative cohomology class is in particular a cohomology class. So this restricts to a map : H k(X, A; R) ⊗ H k(X, A; R) → H k+(X, A; R), but the result we gave is more general. Example. Suppose X is a space such that the cohomology classes are given by k 1 H k(X, Z What can x x be? By the graded commutativity property, we have So we know 2(x x) = 0 ∈ H 6(X, Z) ∼= Z. So we must have x x = 0. x x = −x x. 51 9 K¨unneth theorem and universal coefficients theorem III Algebraic Topology 9 K¨unneth theorem and universal coefficients theorem We are going to prove two theorems of similar flavour — K¨unneth’s theorem and the universal coefficients theorem. They are both fairly algebraic results that relate different homology and cohomology groups. They will be very useful when we prove things about (co)homologies in general. In both cases, we will not prove the “full” theorem, as they require knowledge of certain objects known as Tor and Ext. Instead, we will focus on a particular case where the Tor and Ext vanish, so that we can avoid mentioning them at all. We start with K¨unneth’s theorem. Theorem (K¨unneth’s theorem). Let R be a commutative ring, and suppose that H n(Y ; R) is a